Status impact color legend
  • Black impact: None
  • Yellow impact: Minor
  • Red impact: Major
  • Blue impact: Maintenance
Investigating - We are currently investigating this issue.
Jun 21, 2025 - 11:36 CEST
Investigating - We are currently investigating this issue.
Jun 20, 2025 - 13:48 CEST
Update - We are continuing to investigate this issue.
Jun 19, 2025 - 14:23 CEST
Investigating - We are currently investigating this issue.
Jun 13, 2025 - 15:26 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
Jun 09, 2025 - 17:10 CEST
Identified - The IP address 51.159.208.135 of our Web Hosting platform is blacklisted by Microsoft.
We are currently working on resolving this.

May 27, 2025 - 10:10 CEST
Investigating - S3 checksum errors in some clients/SDKs: we always return the full-object checksum, even when the request is ranged, which causes issues in some high-level wrappers that try to compare the range's checksum with the full object's.

The integrity of the objects is not impacted, and all computed checksums are valid.
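For affected clients, one possible client-side workaround is to relax response checksum validation until the issue is fixed. Below is a minimal sketch, assuming the affected wrapper is boto3/botocore (1.36 or later, which introduced the flexible checksum settings); the bucket and object names are placeholders:

import boto3
from botocore.config import Config

# Sketch of a possible client-side workaround (assumes boto3/botocore >= 1.36).
# "when_required" skips response checksum validation unless the operation
# requires it, which may avoid the spurious mismatch described above when a
# ranged GET returns the full-object checksum.
s3 = boto3.client(
    "s3",
    region_name="fr-par",
    endpoint_url="https://s3.fr-par.scw.cloud",  # Scaleway Object Storage endpoint
    config=Config(response_checksum_validation="when_required"),
)

# Placeholder bucket/key: a ranged GET that a strict high-level wrapper would
# otherwise flag as a checksum mismatch, even though the object is intact.
resp = s3.get_object(Bucket="example-bucket", Key="example-object", Range="bytes=0-1023")
print(len(resp["Body"].read()))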

May 02, 2025 - 19:00 CEST

About This Site

Welcome to the Scaleway Status website. Here, you can view the status of all Scaleway services across all products and availability zones (AZs).
You can learn how to subscribe to and manage email status updates, and how to receive status notifications on Slack, here: https://www.scaleway.com/en/docs/account/reference-content/scaleway-status-updates/

Elements - Products Operational
Instances Operational
Elastic Metal Operational
Apple Silicon Operational
Object Storage Operational
Block Storage Operational
Container Registry Operational
Network Operational
Private Network Operational
Public Gateway Operational
Load Balancer Operational
Kubernetes Kapsule Operational
Serverless Functions and Containers Operational
Serverless Database Operational
Jobs Operational
Databases Operational
Messaging and Queuing Operational
Domains Operational
IoT Hub Operational
Web Hosting Operational
Transactional Email Operational
IAM Operational
Observability Operational
Secret Manager Operational
Environmental Footprint Operational
Developer Tools Operational
Account API Operational
Billing API Operational
Edge Services Operational
Elements Console Operational
Website Operational
Elements - AZ Operational
fr-par-1 Operational
fr-par-2 Operational
fr-par-3 Operational
nl-ams-1 Operational
nl-ams-2 Operational
nl-ams-3 Operational
pl-waw-1 Operational
pl-waw-2 Operational
pl-waw-3 Operational
Dedibox - Products Degraded Performance
Dedibox Degraded Performance
Hosting Operational
SAN Operational
Dedirack Operational
Dedibackup Operational
Domains Operational
RPN Operational
Dedibox Console Operational
Dedibox VPS Operational
Dedibox - Datacenters Degraded Performance
DC2 Operational
DC3 Degraded Performance
DC5 Operational
AMS Operational
Miscellaneous Operational
Excellence Operational
BookMyName Operational
Status legend
  • Operational
  • Degraded Performance
  • Partial Outage
  • Major Outage
  • Maintenance
Jun 22, 2025

No incidents reported today.

Jun 21, 2025

Unresolved incident: [NETWORK] Switch down in Datacenter DC5, Room 1 1, Rack A44.

Jun 20, 2025
Resolved - This incident has been resolved.
Jun 20, 10:13 CEST
Investigating - We are currently being targeted by a phishing campaign.
We advise you to be careful and to check invoices directly in your console.
Our Trust and Safety team is working to resolve this.

Jun 16, 14:09 CEST
Jun 19, 2025
Resolved - This incident has been resolved.
Jun 19, 14:08 CEST
Update - We are continuing to work on a fix for this issue.
Jun 19, 14:07 CEST
Identified - Due to exceptional circumstances, response times are slower than usual.
We apologize for the inconvenience.

Jun 19, 09:31 CEST
Resolved - Uplink configuration has been updated and corrected.
The situation is resolved.
Please contact Support if the issue persists.

Jun 19, 11:34 CEST
Identified - We have detected a switch down in room 101-k31-2.
Servers in that rack currently have no public network access and are unreachable.

===================

17/06/2025 09:00 UTC
Issue has been forwarded to our team for resolution.

Jun 17, 16:15 CEST
Resolved - This incident has been resolved.
Jun 19, 08:20 CEST
Update - Power has been restored on most of the chassis; we are booting the remaining customer nodes.
Jun 18, 17:51 CEST
Update - These racks are still impacted by this issue:
h2-dv5
h2-ee14
h2-ee17
h2-ee18
h2-ee19
h2-ee20
h2-ee21
h2-ee24
h2-ee25
h2-ee26
h2-ee28
h2-ee29
h2-ee2h2
h2-ee30
h2-ee31

Our teams are working on the resolution.

Jun 18, 11:33 CEST
Identified - We have detected a switch down in Hall 2 E-E, Rack E30.
Servers in that rack currently have no public network access and have been unreachable since 18/06/2025 at 05:30.

The issue has been forwarded to our team for resolution.

Jun 18, 09:11 CEST
Jun 18, 2025
Completed - The scheduled maintenance has been completed.
Jun 18, 21:30 CEST
Update - We are actively working on restoring power to the racks following power maintenance performed by our DC provider.
Jun 18, 16:02 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jun 18, 10:30 CEST
Scheduled - In order to bring our new UPS online to power Data Hall 2, we have scheduled a maintenance window during which this power will be implemented.
During this period, customers will experience a loss of redundancy on one feed in the following corridors:
DY / DV / EE / EB / EL / EH

Impact: Loss of redundancy on one feed.

Jun 18, 10:16 CEST
Resolved - This incident has been resolved. The issue should only have impacted a handful of DDX2ELT web hosting accounts.
Jun 18, 19:41 CEST
Monitoring - Since 13:29 (CET), no files or emails had been stored, and webmail was inaccessible.
Jun 18, 18:43 CEST
Resolved - This incident has been resolved.
Jun 18, 19:01 CEST
Investigating - The issue was identified at around 3 pm (CET). We are currently investigating it.
Jun 18, 18:02 CEST
Resolved - This incident has been resolved.
Jun 18, 10:02 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
Jun 13, 15:50 CEST
Investigating - We are currently investigating this issue.
Jun 13, 12:42 CEST
Jun 17, 2025
Completed - The scheduled maintenance has been completed.
Jun 17, 15:00 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jun 17, 08:00 CEST
Scheduled - A planned software upgrade on one of the two central routers is scheduled in the AMS region.
During this maintenance, there will be no redundancy for network connections in the region.
Packet loss or other connectivity disruptions are not expected.

Jun 10, 15:31 CEST
Resolved - This incident has been resolved.
Jun 17, 14:57 CEST
Investigating - We are seeing an increasing number of 502 errors.
Our team is working to solve this problem.

Jun 17, 09:33 CEST
Jun 16, 2025
Jun 15, 2025

No incidents reported.

Jun 14, 2025

No incidents reported.

Jun 13, 2025
Jun 12, 2025
Completed - The scheduled maintenance has been completed.
Jun 12, 15:35 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jun 3, 11:35 CEST
Update - We will be undergoing scheduled maintenance during this time.
Jun 3, 11:32 CEST
Scheduled - From June 3rd until June 12th, we are migrating b_ssd volumes, b_ssd snapshots, and unified snapshots to Scaleway Block Storage (SBS). If your organization is affected, you have received a communication via email including the list of impacted resources. No action is required on your part; volumes will remain available during the migration window.
Jun 3, 11:32 CEST
Jun 11, 2025
Resolved - This incident has been resolved.
Jun 11, 11:23 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
Jun 10, 10:31 CEST
Identified - The issue has been identified and a fix is being implemented.
Jun 10, 09:43 CEST
Monitoring - One of the VPC managed services nodes in fr-par-2 had a partial failure that evaded automated detection. This led to part of the VPC DNS, DHCP, and NTP traffic being lost.
We have removed the failing node, and the service should be recovering.
We are still monitoring.

Jun 10, 09:43 CEST
Investigating - The issue started at around 8:14 CEST.
Some clusters are unreachable via their LB.

Jun 10, 09:09 CEST
Jun 10, 2025
Resolved - This incident has been resolved.
Jun 10, 16:31 CEST
Investigating - Following a deployment of api-compute, we noticed an elevated rate of 500 errors, mainly in fr-par-2, on the POST /ips and POST /servers endpoints.
Jun 10, 16:30 CEST
Resolved - This incident has been resolved.
Jun 10, 10:21 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
Jun 10, 10:19 CEST
Investigating - [fr-par-2] From 6:04 UTC to 7:28 UTC, customers could experience DHCP, DNS, and NTP loss through VPC when using VPC's managed services. The situation is now resolved.
Jun 10, 10:19 CEST
Jun 9, 2025
Resolved - This incident has been resolved.
Jun 9, 17:51 CEST
Monitoring - The switch has been replaced with a new one, and all customer servers appear to be back in business.
May 20, 06:05 CEST
Investigating - We have detected that a public switch is down in DC2, Room 101, Rack A20.
Servers in that rack currently have no network access and are unreachable.

May 19, 21:35 CEST
Resolved - This incident has been resolved.
Jun 9, 17:50 CEST
Investigating - We have detected a switch down in DC5, Room S1, Rack C34.
Servers in that rack currently have no network access and are unreachable.

===================

26/05/2025 09:41 UTC
Issue has been forwarded to our team for resolution.

May 26, 10:52 CEST
Resolved - The cause of the incident has been identified, and a fix has been applied so that it doesn't happen again.
Jun 9, 17:41 CEST
Update - We are continuing to investigate this issue.
Jun 9, 17:40 CEST
Investigating - We are investigating a problem that generated RPNv2 connectivity issues between our datacenters this morning at around 07:52 UTC.
Jun 2, 12:02 CEST
Resolved - This incident has been resolved.
Jun 9, 14:06 CEST
Update - We are continuing to monitor for any further issues.
Jun 9, 00:46 CEST
Monitoring - It was a software problem that required a manual reboot of the switch.
We are now monitoring the status of the rack.

Jun 9, 00:46 CEST
Update - An intervention is planned and a technician will carry out all the necessary checks.
Jun 8, 23:13 CEST
Identified - We have detected a switch down in DC5, Room 7, Rack C57, Block A.
Servers in that rack currently have no public network access and are unreachable.

===================

Since 08/06/2025 at 18:00.
Issue has been forwarded to our team for resolution.

Jun 8, 22:30 CEST
Resolved - The incident is now closed. After investigation, this was caused by a rare faulty auto-healing event on the LB side. The team will add more monitoring so that we are alerted sooner.
Sorry again for any inconvenience.

Jun 9, 13:10 CEST
Update - The situation seems more stable as of now. The incident has been resolved since around 15:20 UTC, and users can access their endpoints again.
We will nevertheless keep the status at Monitoring until Monday for further investigation.
Sorry about any inconvenience.

Jun 7, 18:00 CEST
Monitoring - Affected users should now be able to access their functions/containers endpoints. In summary, this was indeed a faulty LB issue.
However, we are still monitoring and working on the situation, as it appears the issue might not be fully solved.
Thanks for your understanding. We will keep you updated.

Jun 7, 17:39 CEST
Update - After verification, this has affected 80% of the clients in pl-waw since 2025-06-07 09:13 UTC.
We are working to resolve the issue as soon as possible.
Sorry again for any inconvenience.

Jun 7, 17:15 CEST
Investigating - Since 2025-06-07 09:20 UTC, around half of the users cannot call their functions/containers endpoints in the pl-waw region.
Users are experiencing "connect: connection refused" or "Could not connect to server" errors.
At first glance, this seems to be caused by an LB failure.
We apologize for any inconvenience. We are on it.

Jun 7, 16:50 CEST
Resolved - The switch has been rebooted by the field team.
Jun 9, 10:59 CEST
Update - We are continuing to investigate this issue.
Jun 9, 10:14 CEST
Investigating - We have detected a switch down in DC5, Room 1 1, Rack D27.
Servers in that rack currently have no public network access and are unreachable.

An intervention is planned and a technician will carry out all the necessary checks.

Jun 9, 10:14 CEST
Jun 8, 2025