Status impact color legend
  • Black impact: None
  • Yellow impact: Minor
  • Red impact: Major
  • Blue impact: Maintenance
Monitoring - A fix has been implemented and we are monitoring the results.
May 12, 2025 - 09:33 CEST
Identified - We have detected a switch down in room 4-5: s45-d9.dc3, s45-d10-dc3.
Servers in that rack currently have no public network access and are unreachable.

12.05.25 00h37 CEST
Issue has been forwarded to our team for resolution.

May 12, 2025 - 09:17 CEST
Monitoring - The fix has been deployed and we are monitoring the results.
May 09, 2025 - 10:15 CEST
Identified - The issue has been identified and a fix has been prepared; it will be deployed as soon as possible.
May 04, 2025 - 09:48 CEST
Monitoring - As of 2025/05/04 13:00 CEST, the service degradation affecting the container registry has been resolved. The registry has since been operating normally, and our team will continue to monitor it to ensure ongoing stability.
May 05, 2025 - 14:29 CEST
Investigating - The registry is unusually slow, with high latency and instability, which is also affecting other services such as Serverless Containers and Functions.
May 03, 2025 - 19:03 CEST
Investigating - S3 checksum error in some clients/SDKs: we always return the full object checksum even when the request is ranged, which causes issues in some high-level wrappers that try to compare the range's checksum with the full object's.

The integrity of the objects is not impacted, and all computed checksums are valid.
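
For illustration only, here is a minimal Python sketch (not official Scaleway guidance) of the mismatch described above. It assumes boto3, credentials in the environment, the fr-par endpoint s3.fr-par.scw.cloud, and a hypothetical bucket/object uploaded with a CRC32 checksum.

    # Hedged sketch: shows why comparing a ranged read against the returned
    # full-object checksum reports a false mismatch while this incident is open.
    import base64
    import zlib

    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.fr-par.scw.cloud",  # assumed region endpoint
        region_name="fr-par",
    )

    # Ranged GET: per the incident, any ChecksumCRC32 field returned here still
    # describes the *full* object, not the 1 KiB range we asked for.
    resp = s3.get_object(Bucket="my-bucket", Key="my-object", Range="bytes=0-1023")
    body = resp["Body"].read()

    # CRC32 of just the returned range, encoded the way S3-style checksums are
    # (base64 of the big-endian 4-byte value).
    range_crc = base64.b64encode(zlib.crc32(body).to_bytes(4, "big")).decode()
    full_object_crc = resp.get("ChecksumCRC32")  # may be None depending on the SDK/request

    # A wrapper comparing these two values flags a mismatch even though the data
    # is intact; skipping that comparison for ranged reads, or fetching the whole
    # object, avoids the false alarm until the fix is deployed.
    print(range_crc, full_object_crc)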

May 02, 2025 - 19:00 CEST

About This Site

Welcome to the Scaleway Status website. Here, you can view the status of all Scaleway services across all products and availability zones (AZs).
You can check how to subscribe to and manage your email status updates, and how to receive status notifications on Slack, here: https://www.scaleway.com/en/docs/account/reference-content/scaleway-status-updates/

Elements - Products Partial Outage
Object Storage Degraded Performance
Serverless-Database Operational
Website Operational
Instances Operational
Block Storage Operational
Elastic Metal Operational
Apple Silicon Operational
Kubernetes Kapsule Operational
Container Registry Degraded Performance
Private Network Operational
Load Balancer Operational
Domains Operational
Serverless Functions and Containers Operational
Jobs Operational
Databases Operational
IoT Hub Operational
Web Hosting Operational
Observability Operational
Transactional Email Operational
Network Operational
Account API Operational
Billing API Operational
Elements Console Operational
Messaging and Queuing Partial Outage
Public Gateway Operational
Secret Manager Operational
Developer Tools Operational
IAM Operational
Edge service Operational
Environmental Footprint Operational
Elements - AZ Operational
fr-par-1 Operational
fr-par-2 Operational
fr-par-3 Operational
nl-ams-1 Operational
nl-ams-2 Operational
nl-ams-3 Operational
pl-waw-1 Operational
pl-waw-2 Operational
pl-waw-3 Operational
Dedibox - Products Operational
Dedibox Operational
Hosting Operational
SAN Operational
Dedirack Operational
Dedibackup Operational
Domains Operational
RPN Operational
Dedibox Console Operational
Dedibox - Datacenters Operational
DC2 Operational
DC3 Operational
DC5 Operational
AMS Operational
Miscellaneous Operational
Excellence Operational
BookMyName Operational

Scheduled Maintenance

[COCKPIT] Scheduled maintenance on Cockpit May 19, 2025 10:00-13:00 CEST

On 19/05/2025, we will be performing necessary upgrades on Cockpit databases to improve future performance and reliability.

During the maintenance window, you may experience brief latency when accessing dashboards. We recommend that you do not create new dashboards or edit existing ones during that window, to avoid side effects.

No impact is expected on any other actions.

We apologize for any inconvenience this may cause and appreciate your understanding. We'll keep you informed as the maintenance progresses.

Posted on Apr 30, 2025 - 17:48 CEST

[K8S] Migration to Network VPC May 19, 2025 10:00 - May 30, 2025 10:00 CEST

We will migrate Kubernetes clusters to a network VPC to enable stricter isolation.
Check our documentation for more information
https://www.scaleway.com/en/docs/kubernetes/how-to/manage-allowed-ips/

No downtime or service impact is expected.

Note: This upgrade will not apply to clusters using legacy private gateways. These clusters will continue to operate without changes.

Here is the schedule for each region:

WAW: 19/05 - 20/05
AMS: 21/05 - 23/05
PAR: 26/05 - 30/05

Posted on Apr 24, 2025 - 10:14 CEST
May 16, 2025
Resolved - This incident has been resolved.
May 16, 11:34 CEST
Investigating - Partial VPC DNS loss from 14:33 UTC to 14:36 UTC and from 15:18 UTC to 15:28 UTC.
Customers may have experienced a high rate of DNS failures from within the VPC during those two timeframes.

We are investigating the root cause.
Everything is back to normal.

May 15, 17:57 CEST
Resolved - This incident has been resolved.
May 16, 11:20 CEST
Monitoring - All products built on top of VPC and Instances have recovered since 10:30 UTC.
A public postmortem will be available as soon as possible.

May 15, 12:31 CEST
Update - Most impacted products (Instances, Kapsule, LBaaS, Serverless) have recovered since 10:10 UTC; some Load Balancers are still being fixed, and our technical team is working on it.
May 15, 12:19 CEST
Update - We are continuing to investigate this issue.
May 15, 12:00 CEST
Update - Fully resolved on the VPC network side; some products are still impacted.
May 15, 11:48 CEST
Update - We are continuing to investigate this issue.
May 15, 11:47 CEST
Investigating - We encountered an issue with our VPC product.
A network outage occurred between 09:24 UTC and 09:29 UTC in FR-PAR-1.

During this timeframe, VPC-related services might have been impacted.
The related services are Instances, Kubernetes, Load Balancer, Databases, Network, and Serverless Jobs.

The root cause has been found.

May 15, 11:46 CEST
Resolved - This incident has been resolved.
May 16, 10:33 CEST
Investigating - Since 1:08 AM UTC, Generative APIs have been experiencing a disruption of service. We are actively investigating the issue.
May 16, 08:49 CEST
Resolved - This incident has been resolved.
May 16, 10:31 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
May 15, 18:42 CEST
Investigating - Since around 3:15PM UTC, secret versions cannot be created in the FR-PAR Secret Manager.
We are investigating the issue.

May 15, 17:50 CEST
Resolved - EM unreachable from 2025-05-16 05:07 UTC to 05:11 UTC (4 minutes).
The associated network device rebooted unexpectedly.

May 16, 09:17 CEST
May 15, 2025
Resolved - This incident has been resolved.
May 15, 14:07 CEST
Update - PSU issues on rack H22, block F and block N.
Some servers may be unreachable. Replacements are in progress.

May 13, 10:44 CEST
Identified - RPN service is down on Rack H22 block F.
May 12, 21:46 CEST
Investigating - DC2 rack H22 block N, block F unreachable since 11:25 CEST. Analysis in progress.
May 12, 12:41 CEST
Resolved - This incident has been resolved.
May 15, 14:06 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
May 13, 10:23 CEST
Identified - We have detected a switch down in Room 4-3: s43-d20.rpn.dc3.
Servers in that rack currently have no network access and are unreachable.
11.05.25 00h57 UTC
Issue has been forwarded to our team for resolution.

May 12, 09:36 CEST
Resolved - This incident has been resolved.
May 15, 14:05 CEST
Update - We are continuing to monitor for any further issues.
May 14, 11:08 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
May 14, 11:08 CEST
Investigating - We have detected a switch down in room 101 - s101-g27-13.
Servers in that rack currently have no public network access and are unreachable.

11.05.25 05h22 UTC
Issue has been forwarded to our team for resolution.

May 12, 08:45 CEST
Resolved - The root cause was the VPC incident we encountered this morning.
https://status.scaleway.com/incidents/nygwhbpk0sd4

During this time you may have encountered instability on your Redis clusters.

Everything is back to normal. If you are still encountering an issue please open a ticket to our support team.

May 15, 13:03 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
May 15, 12:43 CEST
Investigating - Auto-healing workflows based on monitoring were triggered and rebooted some clusters.
May 15, 12:37 CEST
Completed - The scheduled maintenance has been completed.
May 15, 11:30 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 15, 11:00 CEST
Scheduled - DNS modifications will be temporarily unavailable during the maintenance (zone updates, new records, etc.)
May 15, 10:41 CEST
May 14, 2025
May 13, 2025
Completed - The scheduled maintenance has been completed.
May 13, 15:00 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 13, 14:00 CEST
Scheduled - Today, between 2 PM and 3 PM CEST, an operation will take place in DC3 Room 4-5 Rack D9 to fix a power issue.
All associated servers will be unreachable during the operation.

May 13, 12:01 CEST
May 12, 2025
Completed - The scheduled maintenance has been completed.
May 12, 18:00 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 12, 10:00 CEST
Update - We will be undergoing scheduled maintenance during this time.
May 2, 16:56 CEST
Scheduled - This operation will result in the temporary unavailability of certain domain-related features.

This upgrade is a key step in modernizing our BookMyName infrastructure to ensure greater stability, performance, and long-term scalability.

What will be temporarily unavailable during the maintenance:
DNS modifications (zone updates, new records, etc.)
New customer creation on secondary DNS
Domain name purchases and orders
Domains API
What remains fully operational:
Email services
DNS resolution

May 2, 16:11 CEST
Completed - The scheduled maintenance has been completed.
May 12, 18:00 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 12, 10:00 CEST
Update - We will be undergoing scheduled maintenance during this time.
May 2, 16:55 CEST
Scheduled - This operation will result in the temporary unavailability of certain domain-related features.

This upgrade is a key step in modernizing our Domain infrastructure to ensure greater stability, performance, and long-term scalability.

What will be temporarily unavailable during the maintenance:
Domain name purchases and orders
Domains API
What remains fully operational:
DNS resolution
DNS modifications (zone updates, new records, etc.)

May 2, 16:13 CEST
Resolved - This incident has been resolved.
May 12, 13:48 CEST
Identified - Due to the migration of BookMyName, ordering Web Hosting is currently not possible on Elements.
Thank you for your understanding.

May 12, 11:49 CEST
Resolved - This incident has been resolved.
May 12, 11:51 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
May 6, 09:59 CEST
Update - The pf14-mysql server is up and under monitoring.
May 6, 08:49 CEST
Identified - The pf14-mysql and pf15-mysql servers have been out of service since 8:15 AM.

Our engineers are doing their utmost to restore the service as soon as possible.

May 6, 08:32 CEST
Resolved - This incident has been resolved.
May 12, 11:51 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
May 7, 12:03 CEST
Identified - The pf17-mysql server has been out of service.

Our engineers are doing their utmost to restore the service as soon as possible.

May 7, 10:02 CEST
Resolved - This incident has been resolved.
May 12, 11:23 CEST
Identified - We are currently investigating the cause of the issue.
May 12, 10:20 CEST
May 11, 2025

No incidents reported.

May 10, 2025

No incidents reported.

May 9, 2025
Resolved - A fix has been deployed; this incident has been resolved.
May 9, 09:52 CEST
Investigating - We are currently experiencing an issue with retrying payments for DDX invoices via the ELT console. Our team has identified the root cause and is working to resolve it promptly. In the meantime, please contact support, and we will handle the payment retry on your behalf.
May 9, 09:19 CEST
May 8, 2025
Resolved - This incident has been resolved.
May 8, 17:36 CEST
Investigating - Some customers who are members of an organization are unable to connect to the console.
May 8, 16:57 CEST
Completed - The scheduled maintenance has been completed.
May 8, 08:00 CEST
Update - We have completed the cluster upgrades from 1.27 to 1.28 and from 1.28 to 1.29. The upgrades from 1.29 to 1.30 remain scheduled for the previously indicated dates. You can always upgrade yourself in advance if you want to choose the upgrade date and time.
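
For illustration only, a minimal Python sketch of triggering the upgrade yourself via the API rather than waiting for the automatic maintenance. The upgrade endpoint path, the upgrade_pools field, the exact target version string, and the SCW_SECRET_KEY environment variable are assumptions; verify them against the Kapsule API reference before use.

    # Hedged sketch: ask the Kapsule API to upgrade one cluster (and its pools).
    import os

    import requests

    REGION = "fr-par"                 # region hosting the cluster
    CLUSTER_ID = "your-cluster-id"    # placeholder cluster UUID
    TARGET_VERSION = "1.30.0"         # pick an available 1.30 patch release

    resp = requests.post(
        f"https://api.scaleway.com/k8s/v1/regions/{REGION}/clusters/{CLUSTER_ID}/upgrade",
        headers={"X-Auth-Token": os.environ["SCW_SECRET_KEY"]},
        json={"version": TARGET_VERSION, "upgrade_pools": True},  # assumed request body
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.json())  # the cluster should switch to an upgrading state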
Apr 25, 10:14 CEST
Update - Here are the base dates for the upgrade per region:

1.27
WAW : April 8th
AMS : April 9th
PAR : April 10th

1.28
WAW : April 21st
AMS : April 22nd
PAR : April 23rd

1.29
WAW : May 14th
AMS : May 19th
PAR : May 21st

Please be aware that clusters will be upgraded progressively; your cluster's maintenance may not occur exactly on the stated date.

Apr 7, 17:26 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Apr 7, 08:00 CEST
Scheduled - On April 7, 2025, Kubernetes versions 1.27, 1.28, and 1.29 will reach their end-of-support date.

Clusters still running these versions will automatically be upgraded to Kubernetes 1.30 within 30 days to ensure continued security and performance.

No immediate action is required, but we recommend reviewing our upgrade documentation for best practices.

https://www.scaleway.com/en/docs/kubernetes/how-to/upgrade-kubernetes-version/

Contact our support team if you have any questions.

Mar 18, 16:21 CET
May 7, 2025
Completed - The scheduled maintenance has been completed.
May 7, 18:00 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 7, 09:00 CEST
Scheduled - On 2025-05-07 at 07:00 UTC, necessary maintenance will be performed on our infrastructure (in all regions).

During this maintenance, we will need to redeploy users' functions and containers. This is a tested process but, as a result, users will experience some or all of the following:

- all running function/container instances will restart (once)
- depending on the function/container configuration, some long-running requests may fail during the restart
- as functions/containers restart, they won't be able to process requests for a few seconds; during that window, even functions/containers with minimum scale set to 1 might experience something similar to a cold start

We will keep you updated and apologize for any inconvenience.

Thanks for your understanding.

Apr 29, 17:49 CEST
Resolved - This incident has been resolved.
May 7, 17:10 CEST
Update - We are continuing to investigate this issue.
May 7, 17:10 CEST
Investigating - All Dedibox servers in this rack are down.

Our team is working on the issue.

May 7, 16:44 CEST
Resolved - This incident has been resolved.
May 7, 17:08 CEST
Identified - Some customers are experiencing display errors on the console. Our team has identified the cause and is working to resolve the issue as quickly as possible.
May 7, 16:52 CEST
May 6, 2025
Resolved - This incident has been resolved.
May 6, 13:58 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
May 5, 11:24 CEST
Investigating - All Dedibox servers in this rack are down.

Our team is working on the issue.

May 5, 10:55 CEST
Resolved - This incident has been resolved.
May 6, 13:57 CEST
Monitoring - We are informing you that an incident occurred on our Dedibox server located in fr-par-1, Hall 4-5, Rack C5.
Our services are currently stable.
We are closely monitoring the situation and investigating any potential additional impact.
To report any issues, you can submit a ticket to our support team.

May 5, 15:42 CEST
Resolved - This incident has been resolved.
May 6, 10:04 CEST
Identified - A fix has been deployed and we are now listing the orphaned IPs in order to detach them from deleted servers.
May 5, 12:20 CEST
Investigating - Due to a software bug, terminating an instance no longer automatically detaches its IP addresses.
IPs can still be detached manually.
A fix will be deployed early next week.
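
For illustration only, a minimal Python sketch of detaching an orphaned flexible IP by hand while the fix is pending. It assumes the Instances API's update-IP endpoint accepts a null server field to detach, a secret key in the SCW_SECRET_KEY environment variable, and placeholder zone and IP identifiers; verify against the Instances API reference before use.

    # Hedged sketch: detach a flexible IP from its (deleted) server manually.
    import os

    import requests

    ZONE = "fr-par-1"        # zone of the IP
    IP_ID = "your-ip-id"     # placeholder flexible IP UUID

    resp = requests.patch(
        f"https://api.scaleway.com/instance/v1/zones/{ZONE}/ips/{IP_ID}",
        headers={"X-Auth-Token": os.environ["SCW_SECRET_KEY"]},
        json={"server": None},  # assumed: a null server detaches the IP
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.json())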

May 2, 18:35 CEST
May 5, 2025
Resolved - This incident has been resolved.
May 5, 09:35 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
Apr 30, 17:58 CEST
Investigating - We encountered a disruption on one of our switches in the AMS datacenter, in Hall 2, racks DS7-2 and DS7-4.
Our team is currently conducting an investigation to determine the root cause of this downtime.

Apr 30, 17:22 CEST
May 4, 2025
May 3, 2025
May 2, 2025

Unresolved incident: [Object Storage] - Checksum error in some clients/SDKs.