Resolved -
This incident started at 22:27 UTC when latency began to rise noticeably.
At 22:40 UTC, a control-plane fault caused several disks to be excluded from the cluster at once. The resulting load on the control plane then prevented the cluster from self-recovering, progressively freezing read/write operations for up to 31% of the cluster.
At 00:25 UTC, the Storage team finished reintegrating the excluded disks and the cluster returned to a healthy state.
After 00:25 UTC, Product teams cleared the remaining stuck Instances and Databases.
No data loss was detected on the affected cluster.
Feb 28, 05:50 CET
Update -
We are working on addressing some remaining side effects of the incident.
The situation remains stable for now.
Feb 28, 03:07 CET
Monitoring -
Latencies should be back to normal.
We continue to monitor the situation.
Feb 28, 02:35 CET
Update -
The situation has been stabilized.
Some elevated latencies may still be observed; we continue to monitor the situation.
We will keep providing updates and will notify you once everything is fully back to normal.
Feb 28, 02:03 CET
Update -
We are still working to stabilize the situation.
Thank you again for your patience and understanding.
Feb 28, 01:53 CET
Identified -
The situation is now stabilizing.
Feb 28, 01:27 CET
Update -
Our team remains fully engaged in resolving the problem.
We truly appreciate your patience and understanding.
Feb 28, 01:21 CET
Update -
Our team is still actively working on the issue.
Thank you again for your patience and understanding.
Feb 28, 01:02 CET
Update -
Our team is still actively working on the issue.
We will continue to provide updates every 30 minutes.
Thank you for your patience and understanding.
Feb 28, 00:46 CET
Investigating -
A block storage cluster in nl-ams-1 is experiencing issues that are impacting multiple products. Our teams are working on it.
Feb 28, 00:18 CET