All systems operational


CuByte Status

This is the official status page for CuByte's global infrastructure. All related incidents are published here. Contact our network operations center, or alternatively create a ticket in the CuByte Support Center.

Components

FRA1 - Telehouse

Operational

Network Core

Operational

Network Peer AS3223

Operational

Network Peer AS47447

Operational

Power & UPS

Operational

Cooling

Operational

Cloud Server

Operational

Compute Nodes FRA1

Operational

Storage Nodes FRA1

Operational

Dedicated Server

Operational

Dedicated Server FRA1

Operational

CuByte Public Services

Operational

Nameserver

Operational

Mailing Service

Operational

Authentication Service (SSO)

Operational

Public Web Gateway

Operational

Git & Deployment

Operational

File Cloud

Operational

CuByte Internal Services

Operational

Kubernetes Orchestrator

Operational

VoIP Server (PBX)

Operational

Component descriptions

  • FRA1 - Telehouse: Core infrastructure status of the Telehouse Frankfurt colocation at Kleyerstraße 75-87
  • Network Core: Issues with CuByte's redundant n*10G network core
  • Network Peer AS3223: Issues with the uplink provider network Voxility
  • Network Peer AS47447: Issues with the uplink provider network 23m
  • Power & UPS: Issues with redundant power feeds or battery backup
  • Cooling: Issues with the cooling system
  • Compute Nodes FRA1: Issues with compute nodes of the private cloud cluster
  • Storage Nodes FRA1: Issues with the Ceph storage servers hosting virtual machines of the private cloud cluster
  • Dedicated Server FRA1: Issues with customer dedicated servers
  • Nameserver: Issues with the DNS cluster (master.cubyte.name, slave1.cubyte.name, slave2.cubyte.name)
  • Mailing Service: Issues with the distributed CuByte mailing service (MX)
  • Authentication Service (SSO): Issues with the single sign-on service at auth.app.cubyte.cloud
  • Public Web Gateway: Issues with central web proxies and load balancers
  • Git & Deployment: Issues with the public deployment servers
  • File Cloud: Issues with the public file cloud
  • Kubernetes Orchestrator: Issues with the central Kubernetes orchestration platform
  • VoIP Server (PBX): Issues with our VoIP infrastructure

Incident history

Ceph node HPE storage controller failure

Closed | Feb 27, 2025 | 09:00 GMT+01:00

The last node has been successfully replaced and the cluster is fully restored, hence we are closing this incident.


  • Monitoring | Feb 16, 2025 | 06:00 GMT+01:00

The OS disk of one of our storage nodes stopped working and the OS remounted read-only. There is no impact, as this is a redundant node.

    Update #1: we inserted a new disk and the copy job is in progress.

    Update #2: During further investigation we found that the HPE storage controller used for the OS and data disks is faulty and the actual cause of the issue. The hardware will be replaced soon - we are aiming to replace the whole server, as the affected hardware is in any case near the end of its lifespan.

    Update #3: Multiple shiny new Ceph nodes have been placed in the datacenter. The restoration process within Ceph for the first new node is in progress. Due to the size of the cluster, this will take several hours; during this time, read and write performance will be reduced.

    Update #4: The first new node has successfully taken up its work and the PGs have mostly been rebalanced.

    Update #5: Another node has been successfully replaced without any service interruption. We will replace the remaining node tomorrow, on 26 February. No service interruption is expected.

    Update #6: The last node has been successfully replaced and the cluster is fully restored, hence we are closing this incident.

Compute Node crashed

Closed | Feb 20, 2025 | 02:00 GMT+01:00

Unfortunately, it wasn't the CPU; the mainboard appears to be the cause. We decided to remove the whole node from the data center entirely.


  • Resolution in progress | Feb 15, 2025 | 01:05 GMT+01:00

    One of the nodes in our compute cluster crashed. We are investigating the root cause and are relocating workloads to other healthy nodes.

    Update #1: After an initial on-site inspection, we suspect that the CPU in socket two died. There was a brief spike in temperature, after which the server crashed. The CPU was removed and the server came back online. We have already ordered a replacement CPU and will install it once delivered. For now, all workloads have been moved to the remaining compute nodes.

    Update #2: The replacement CPU was delivered and will be installed on the 20th (Wednesday).

    Update #3: Unfortunately, it wasn't the CPU; the mainboard appears to be the cause. We decided to remove the whole node from the data center entirely.

Ceph degraded performance

Closed | Jan 29, 2025 | 14:50 GMT+01:00

Mons have been restored and Ceph health is back to normal.

Affected components

  • Cloud Server: Storage Nodes FRA1


  • Resolved | Jan 13, 2025 | 04:47 GMT+01:00

    Storage cluster currently faces a partial outage. Investigation ongoing.

    Update #1: Only CephFS is affected, due to an unavailable MDS.

    Update #2: Metadata has been repaired and services are recovering.

Maintenance

No maintenance reported.

Past maintenance

March

Network: Installation of latest patch release

Mar 10, 2025 22:00 - Mar 11, 2025 00:00 | GMT+01:00

During this maintenance we will be updating our gateway routers to a new service release.
Due to the redundant design of our network we do not expect any continuous service interruption. You may see occasional packet loss.

Affected components

    • FRA1 - Telehouse: Network Peer AS47447

February

Network: Software update on edge router

Feb 24, 2025 22:00 - Feb 25, 2025 00:00 | GMT+01:00

We will update the software on one of our edge routers. During that time-frame you may notice higher latencies to some destinations like DTAG (AS3320).

Affected components

    • FRA1 - Telehouse: Network Peer AS47447

Replacement of certificates for VPN infrastructure

Feb 02, 2025 | 14:00 - 14:31 | GMT+01:00

The CA certificate currently used for our internal VPN infrastructure will expire on Fri, 14 Feb 2025 at 16:34:34. This also means that CuByte is celebrating its 10th birthday!

With this change, we are introducing cubyte-ca-v2 and will re-sign all user certificates used for authenticating against our VPN infrastructure. You will therefore need to replace the existing VPN configuration on your client with the new one.
The new configuration for your organization will be sent out directly to affected users.
If you experience issues connecting to the VPN after Sunday, 2 February, please reach out to our support directly.
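
To verify when a CA certificate expires, you can inspect it with openssl. The snippet below is a generic sketch only: it generates a throwaway self-signed certificate for demonstration, since the actual file name of the cubyte-ca-v2 bundle shipped with your new configuration is not specified here and the path used below is an assumption.

```shell
# Create a throwaway self-signed certificate for demonstration only.
# In practice, point openssl at the CA file from your VPN configuration
# (the /tmp path below is a placeholder, not the real bundle name).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt \
  -days 365 -subj "/CN=demo-ca" 2>/dev/null

# Print the certificate's expiry date (the "notAfter" field).
openssl x509 -in /tmp/demo-ca.crt -noout -enddate
```

Running the same `x509 -enddate` check against the old CA file should show the 14 Feb 2025 expiry mentioned above.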

Affected components

    • CuByte Public Services: Authentication Service (SSO)

March

Network: Maintenance of Lumen transit connection

Mar 13, 2023 22:00 - Mar 14, 2023 01:00 | GMT+01:00

Lumen (AS3356) intends to carry out internal maintenance within its network.
We do not expect any downtime during the process due to the redundant design and connectivity of our network.

The expected duration is 1 hour.

Affected components

    • FRA1 - Telehouse: Network Peer AS47447

Status Page powered by Admin Labs
