Status impact color legend
  • Black impact: None
  • Yellow impact: Minor
  • Red impact: Major
  • Blue impact: Maintenance
Monitoring - A fix has been implemented and we are monitoring the results.
May 07, 2025 - 12:03 CEST
Identified - The pf17-mysql database has been out of service.

Our engineers are doing their utmost to restore the service as soon as possible.

May 07, 2025 - 10:02 CEST
Identified - The issue has been identified and a fix has been prepared; it will be deployed as soon as possible.
May 04, 2025 - 09:48 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
May 06, 2025 - 09:59 CEST
Update - The pf14-mysql is back up and being monitored.
May 06, 2025 - 08:49 CEST
Identified - The pf14-mysql and pf15-mysql databases have been out of service since 8:15 AM.

Our engineers are doing their utmost to restore the service as soon as possible.

May 06, 2025 - 08:32 CEST
Monitoring - As of 2025/05/04 13:00 CEST, the service degradation affecting the container registry has been resolved. The registry has since been operating normally, and our team will continue to monitor it to ensure ongoing stability.
May 05, 2025 - 14:29 CEST
Investigating - The registry is unusually slow, with high latency and instability, which is also affecting other services such as Serverless Containers and Functions.
May 03, 2025 - 19:03 CEST
Investigating - S3 checksum error in some clients/SDKs: we always return the full-object checksum, even when the request is ranged, causing issues in some high-level wrappers that try to compare the range's checksum against the full object's.

The integrity of the objects is not impacted, and all computed checksums are valid.
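One possible client-side workaround is sketched below, assuming boto3/botocore >= 1.36 (which introduced the response_checksum_validation client option); the endpoint, bucket, and key are placeholders, not values from this incident:

```python
# Minimal sketch, assuming botocore >= 1.36 (which added the
# response_checksum_validation option). Endpoint/bucket/key are placeholders.
import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.fr-par.scw.cloud",  # example Scaleway S3 endpoint
    config=Config(
        # Only validate response checksums when the service requires it, so the
        # full-object checksum returned on a ranged GET is not compared against
        # the bytes of the requested range.
        response_checksum_validation="when_required",
    ),
)

# Ranged GET: the response carries the full-object checksum, which would
# otherwise trip wrappers that validate it against the range's bytes.
resp = s3.get_object(Bucket="my-bucket", Key="my-object", Range="bytes=0-1023")
chunk = resp["Body"].read()
```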

May 02, 2025 - 19:00 CEST

About This Site

Welcome to the Scaleway Status website. Here, you can view the status of all Scaleway services across all products and availability zones (AZs).
You can learn how to subscribe to and manage email status updates, and how to receive status notifications on Slack, here: https://www.scaleway.com/en/docs/account/reference-content/scaleway-status-updates/

Elements - Products Partial Outage
Object Storage Degraded Performance
Serverless-Database Operational
Website Operational
Instances Operational
Block Storage Operational
Elastic Metal Operational
Apple Silicon Operational
Kubernetes Kapsule Operational
Container Registry Degraded Performance
Private Network Operational
Load Balancer Operational
Domains Operational
Serverless Functions and Containers Operational
Jobs Degraded Performance
Databases Operational
IoT Hub Operational
Web Hosting Operational
Observability Operational
Transactional Email Operational
Network Operational
Account API Operational
Billing API Operational
Elements Console Operational
Messaging and Queuing Partial Outage
Public Gateway Operational
Secret Manager Operational
Developer Tools Operational
IAM Operational
Edge service Operational
Environmental Footprint Operational
Elements - AZ Operational
fr-par-1 Operational
fr-par-2 Operational
fr-par-3 Operational
nl-ams-1 Operational
nl-ams-2 Operational
nl-ams-3 Operational
pl-waw-1 Operational
pl-waw-2 Operational
pl-waw-3 Operational
Dedibox - Products Operational
Dedibox Operational
Hosting Operational
SAN Operational
Dedirack Operational
Dedibackup Operational
Domains Operational
RPN Operational
Dedibox Console Operational
Dedibox - Datacenters Operational
DC2 Operational
DC3 Operational
DC5 Operational
AMS Operational
Miscellaneous Operational
Excellence Operational
BookMyName Operational

Scheduled Maintenance

[BookMyName] Maintenance: Temporary interruption of BookMyName Services May 12, 2025 10:00-18:00 CEST

Update - We will be undergoing scheduled maintenance during this time.
May 02, 2025 - 16:56 CEST
Scheduled - This operation will result in the temporary unavailability of certain domain-related features.

This upgrade is a key step in modernizing our BookMyName infrastructure to ensure greater stability, performance, and long-term scalability.

What will be temporarily unavailable during the maintenance:
- DNS modifications (zone updates, new records, etc.)
- New customer creation on secondary DNS
- Domain name purchases and orders
- Domains API

What remains fully operational:
- Email services
- DNS resolution

May 02, 2025 - 16:11 CEST

[Domain] Temporary interruption of Domain Services May 12, 2025 10:00-18:00 CEST

Update - We will be undergoing scheduled maintenance during this time.
May 02, 2025 - 16:55 CEST
Scheduled - This operation will result in the temporary unavailability of certain domain-related features.

This upgrade is a key step in modernizing our Domain infrastructure to ensure greater stability, performance, and long-term scalability.

What will be temporarily unavailable during the maintenance:
- Domain name purchases and orders
- Domains API

What remains fully operational:
- DNS resolution
- DNS modifications (zone updates, new records, etc.)

May 02, 2025 - 16:13 CEST

[COCKPIT] Scheduled maintenance on Cockpit May 19, 2025 10:00-13:00 CEST

We will be performing necessary upgrades on the Cockpit databases on 19/05/2025 to improve performance and reliability.

During the maintenance window, you may experience brief latency when accessing dashboards. We recommend that you do not create new dashboards or edit existing ones during that window, to avoid side effects.

No impact is expected on any other actions.

We apologize for any inconvenience this may cause and appreciate your understanding. We'll keep you informed as the maintenance progresses.

Posted on Apr 30, 2025 - 17:48 CEST

[K8S] Migration to Network VPC May 19, 2025 10:00 - May 30, 2025 10:00 CEST

We will migrate Kubernetes clusters to a network VPC to enable stricter isolation.
Check our documentation for more information:
https://www.scaleway.com/en/docs/kubernetes/how-to/manage-allowed-ips/

No downtime or service impact is expected.

Note: This upgrade will not apply to clusters using legacy private gateways. These clusters will continue to operate without changes.

Here is the schedule for each region:

WAW: 19/05 - 20/05
AMS: 21/05 - 23/05
PAR: 26/05 - 30/05

Posted on Apr 24, 2025 - 10:14 CEST
May 9, 2025

No incidents reported today.

May 8, 2025
Resolved - This incident has been resolved.
May 8, 17:36 CEST
Investigating - Some customers who are members of an organization are unable to connect to the console.
May 8, 16:57 CEST
Completed - The scheduled maintenance has been completed.
May 8, 08:00 CEST
Update - We have completed the cluster upgrades from 1.27 to 1.28 and from 1.28 to 1.29. The upgrades from 1.29 to 1.30 remain scheduled for the previously indicated dates. You can always upgrade yourself in advance if you want to choose the upgrade date and time.
Apr 25, 10:14 CEST
Update - Here are the base dates for the update per region:

1.27
WAW : April 8th
AMS : April 9th
PAR : April 10th

1.28
WAW : April 21st
AMS : April 22nd
PAR : April 23rd

1.29
WAW : May 14th
AMS : May 19th
PAR : May 21st

Please be aware that all clusters will be upgraded progressively; your cluster's maintenance may not occur on the stated date.

Apr 7, 17:26 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Apr 7, 08:00 CEST
Scheduled - On April 7, 2025, Kubernetes versions 1.27, 1.28, and 1.29 will reach their end-of-support date.

Clusters still running these versions will automatically be upgraded to Kubernetes 1.30 within 30 days to ensure continued security and performance.

No immediate action is required, but we recommend reviewing our upgrade documentation for best practices.

https://www.scaleway.com/en/docs/kubernetes/how-to/upgrade-kubernetes-version/

Contact our support team if you have any questions.

Mar 18, 16:21 CET
May 7, 2025
Completed - The scheduled maintenance has been completed.
May 7, 18:00 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 7, 09:00 CEST
Scheduled - On 2025-05-07 at 07:00 UTC, necessary maintenance will be performed on our infrastructure (in all regions).

During this maintenance, we will need to redeploy users' functions and containers. This is a tested process but, as a result, users may (and in some cases will) experience the following (a client-side retry sketch follows this list):

- all running function/container instances will restart (once)
- depending on the function/container configuration, some long-running requests may fail during the restart
- as functions/containers restart, they won't be able to process requests for a few seconds; during this time, even functions/containers with a minimum scale of 1 might experience something similar to a cold start
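A minimal retry sketch for riding out the one-off restart window (the function URL is a placeholder, and the retry budget is an assumption, not guidance from the maintenance notice itself):

```python
# Minimal sketch: retry an invocation across the one-off restart window.
# The URL is a placeholder; tune attempts/backoff to your workload.
import time
import requests

FUNCTION_URL = "https://example.functions.fnc.fr-par.scw.cloud/"  # placeholder

def invoke_with_retry(attempts: int = 5, backoff_s: float = 2.0) -> requests.Response:
    for attempt in range(1, attempts + 1):
        try:
            resp = requests.get(FUNCTION_URL, timeout=10)
            if resp.status_code < 500:
                return resp  # served, possibly after a cold-start-like delay
        except requests.RequestException:
            pass  # instance restarting: connection refused or reset
        time.sleep(backoff_s * attempt)  # linear backoff between attempts
    raise RuntimeError("function did not respond within the retry budget")
```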

We will keep you updated and apologize for any inconvenience.

Thanks for your understanding.

Apr 29, 17:49 CEST
Resolved - This incident has been resolved.
May 7, 17:10 CEST
Update - We are continuing to investigate this issue.
May 7, 17:10 CEST
Investigating - All Dedibox servers in this rack are down.

Our team is working on the issue.

May 7, 16:44 CEST
Resolved - This incident has been resolved.
May 7, 17:08 CEST
Identified - Some customers are experiencing display errors on the console. Our team has identified the cause and is working to resolve the issue as quickly as possible.
May 7, 16:52 CEST
May 6, 2025
Resolved - This incident has been resolved.
May 6, 13:58 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
May 5, 11:24 CEST
Investigating - All Dedibox servers in this rack are down.

Our team is working on the issue.

May 5, 10:55 CEST
Resolved - This incident has been resolved.
May 6, 13:57 CEST
Monitoring - We are informing you that an incident occurred on our Dedibox server located at fr-par-1 Hall 4-5 Rack C5.
Our services are currently stable.
We are closely monitoring the situation and investigating any potential additional impact.
To report any issues, you can submit a ticket to our support team.

May 5, 15:42 CEST
Resolved - This incident has been resolved.
May 6, 10:04 CEST
Identified - A fix has been deployed and we are now listing the orphaned IPs in order to detach them from deleted servers.
May 5, 12:20 CEST
Investigating - Due to a software bug, terminating an instance no longer automatically detaches its IP addresses.
IPs can still be detached manually (see the sketch below).
A fix will be deployed early next week.
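A manual detach can be scripted against the Instance API while waiting for the fix. Below is a minimal sketch, assuming the update-IP endpoint (PATCH /instance/v1/zones/{zone}/ips/{ip_id}) accepts "server": null to clear the attachment; verify against the API reference before relying on it. The token, zone, and IP ID are placeholders:

```python
# Minimal sketch of a manual IP detach via the Instance API. Assumes the
# update-IP endpoint accepts "server": null to clear the attachment; check
# the API reference first. Token, zone, and IP ID are placeholders.
import requests

SCW_SECRET_KEY = "SCW_SECRET_KEY_PLACEHOLDER"
ZONE = "fr-par-1"
IP_ID = "11111111-2222-3333-4444-555555555555"

resp = requests.patch(
    f"https://api.scaleway.com/instance/v1/zones/{ZONE}/ips/{IP_ID}",
    headers={"X-Auth-Token": SCW_SECRET_KEY},
    json={"server": None},  # detach the IP from its (deleted) server
)
resp.raise_for_status()
print(resp.json())
```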

May 2, 18:35 CEST
May 5, 2025
Resolved - This incident has been resolved.
May 5, 09:35 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
Apr 30, 17:58 CEST
Investigating - We encountered a disruption on one of our switches in the AMS datacenter, in Hall 2, racks DS7-2 and DS7-4.
Our team is currently conducting an investigation to determine the root cause of this downtime.

Apr 30, 17:22 CEST
May 4, 2025
May 3, 2025
May 2, 2025

Unresolved incident: [Object Storage] - Checksum error in some client/sdk.

May 1, 2025

No incidents reported.

Apr 30, 2025
Resolved - This incident has been resolved.
Apr 30, 16:52 CEST
Monitoring - Since 16:52 UTC, we have experienced some issues with Block Storage creation and operations on fr-par-1, which peaked around 17:30 UTC.

The issue is now fixed, but the team is still monitoring it.

Apr 29, 20:14 CEST
Resolved - This incident has been resolved.
Apr 30, 01:00 CEST
Monitoring - The issue has been found and a fix has been deployed; the error rate should return to normal in a few minutes.
Apr 30, 00:49 CEST
Investigating - Object Storage is currently unavailable in pl-waw. Our team is already investigating the situation.
Apr 30, 00:42 CEST
Apr 29, 2025
Apr 28, 2025
Resolved - This incident has been resolved.
Apr 28, 19:01 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
Apr 28, 18:25 CEST
Investigating - We have noticed some latency in our WAW AZ, and our team is already working on the issue.
We thank you in advance for your patience and cooperation.

Apr 28, 18:24 CEST
Resolved - This incident has been resolved.
Apr 28, 14:30 CEST
Monitoring - The switch was rebooted and is again acting as the uplink for s45-d10.dc3.
A total of 19 servers were impacted.

Apr 28, 11:57 CEST
Completed - The scheduled maintenance has been completed.
Apr 28, 13:00 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Apr 28, 10:01 CEST
Scheduled - Our Database Team has scheduled maintenance for all Managed Database for PostgreSQL & MySQL resources in the PAR region on Monday, April 28th at 8:00 UTC. No impact is expected.
Apr 28, 10:00 CEST
Completed - The scheduled maintenance has been completed.
Apr 28, 11:39 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Apr 28, 10:00 CEST
Scheduled - We will be performing maintenance on our Alert-Manager on 28/04/2025 to improve performance and reliability.
During the maintenance window, spanning from 10 AM to 2 PM CET, you may experience a short delay in the reception of firing-alert notifications. The actual state of alerts will remain visible in Grafana during this time frame.
We apologize for any inconvenience this may cause and appreciate your understanding. We'll keep you informed as the maintenance progresses.

Apr 17, 17:17 CEST
Resolved - This incident has been resolved.
Apr 28, 11:34 CEST
Update - We are continuing to work on a fix for this issue.
Apr 28, 10:58 CEST
Identified - The metrics database used internally by IoT Hub became unresponsive.

Our engineers are currently working to resolve the situation.

Apr 28, 10:36 CEST
Apr 27, 2025

No incidents reported.

Apr 26, 2025

No incidents reported.

Apr 25, 2025
Resolved - This incident has been resolved.
Apr 25, 11:15 CEST
Update - We have completed development and are now working on pre-release testing.
Mar 5, 09:19 CET
Update - We discovered that the problem is broader than we initially thought.
As a first mitigation, we recommend that users wait before upgrading from AWS v1 (we recommend any version strictly prior to aws-cli 1.37.0, or boto3 < 1.36.0).
If AWS v2 (which we do not recommend for the moment) is mandatory, we recommend using aws-cli < 2.23.0.
We are actively working to support these new versions, and plan to release fixes by the end of next week.
The details of the bug are:
- support for CRC64NVME does not work properly
- the Transfer-Encoding: chunked header does not work

Jan 24, 16:57 CET
Identified - Everyone using aws-cli versions v1 >= 1.37.0 or v2 >= 2.23.0 received 400 errors on PUT and POST requests.
Jan 16, 15:04 CET
Investigating - aws-cli now enforces a CRC64NVME integrity checksum on all PUT and POST requests. This concerns versions v1 >= 1.37.0 and v2 >= 2.23.0. We do not currently support this checksum; we are working out how best to handle it on our end. In the meantime, you can choose one of these options to keep your aws-cli requests working (a sketch follows this list):
- Use the --checksum-algorithm option with one of our supported checksums:
  - SHA1
  - SHA256
  - CRC32
  - CRC32C
- Use an older version of aws-cli, so that it does not enforce the CRC64NVME checksum
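For boto3 users affected by the same enforcement, the first option translates to pinning the upload checksum explicitly; below is a minimal sketch, assuming a boto3 version that supports the ChecksumAlgorithm parameter on put_object (the bucket, key, and endpoint are placeholders):

```python
# Minimal sketch: pin the upload checksum to a supported algorithm so the SDK
# does not use CRC64NVME. Bucket/key/endpoint are placeholders.
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.fr-par.scw.cloud")

s3.put_object(
    Bucket="my-bucket",
    Key="my-object",
    Body=b"hello",
    ChecksumAlgorithm="SHA256",  # or SHA1, CRC32, CRC32C
)
```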

Jan 16, 14:57 CET
Resolved - This incident has been resolved.
Apr 25, 09:47 CEST
Monitoring - A physical reboot was performed by our on-site technical team, and the switch is now back online.
Apr 24, 08:05 CEST
Investigating - The rack has been unresponsive since 2025-04-24 05:56.
A technical team is on-site conducting the necessary checks to restore service as quickly as possible.
We will keep you informed of any updates.

Apr 24, 08:04 CEST
Resolved - This incident has been resolved.
Apr 25, 09:44 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
Apr 24, 18:49 CEST
Identified - The Managed Databases delete-instance operation can take longer than usual. We have identified the issue and are working to fix it.
Apr 24, 18:04 CEST