Investigating - Our payment service provider has made duplicate payments for some clients. We are working with them to cancel the duplicate payments.
Mar 13, 2026 - 19:02 CET
Investigating - We are currently experiencing a partial outage affecting the gpt-oss-120b and mistral-small-3.2-24b-instruct-2506 models. This may result in failures when attempting to use these models.
Our team is already working to resolve this issue.
Mar 12, 2026 - 11:20 CET
Monitoring - Edge Services is experiencing a partial outage, causing truncated HTTP content on GET requests when the content is fetched from the origin. This issue follows a software upgrade on our stack. A rollback has been performed and we are monitoring the situation.
Mar 09, 2026 - 11:24 CET
Identified - Edge Services is randomly serving truncated HTTP content on GET requests when the content is fetched from the origin. This behavior follows a software upgrade on our stack. A rollback is planned for today.
Mar 09, 2026 - 09:24 CET
Investigating - Our Elastic Metal EM-RV servers are currently unavailable for rental because they are stuck in a cleaning state. Existing instances are unaffected.
Mar 04, 2026 - 16:50 CET
Monitoring - A fix has been implemented and we are monitoring the results.
Mar 06, 2026 - 09:05 CET
Identified - Since 03/03/26, daily Lifecycle jobs have failed to execute for some buckets.
Status: Root cause identified; fix ready for deployment. Action: Once deployed, jobs will run immediately to clear the backlog.
Mar 05, 2026 - 12:01 CET
Identified - We are noticing an increase in issues with Microsoft, with the message: "temporarily rate limited due to IP reputation." Our team is already taking action to mitigate the issue, but you may notice some delays and retries when sending to Outlook, MSN, or Hotmail. Microsoft has acknowledged the issue and is currently investigating on their side.
Feb 27, 2026 - 10:33 CET
Monitoring - A fix has been implemented and we are monitoring the results.
Feb 24, 2026 - 16:48 CET
Investigating - We have identified an issue with resource usage ingestion. The monthly resource usage may be slightly inaccurate due to a delay in processing data for 24/02/2026.
Feb 24, 2026 - 09:46 CET
Identified - Our service is currently experiencing disruption due to blacklisting by Microsoft. We are actively working with Microsoft to resolve this issue as soon as possible.
Nov 24, 2025 - 19:34 CET
Update - We are still unable to send SMS to US numbers. The issue is due to A2P 10DLC regulations, which require registration and approval. We have submitted the necessary requests to our providers; the approval process takes a minimum of three weeks.
We will provide updates as soon as we have more information.
Jul 21, 2025 - 10:30 CEST
Identified - The issue has been identified and a fix is being implemented.
Jul 17, 2025 - 15:14 CEST
Investigating - We are currently experiencing an issue where messages destined to phone numbers in the USA are not being properly routed by our SMS provider.
A ticket has been opened on their side. We are actively monitoring the situation and will share updates as soon as available.
Jul 16, 2025 - 12:15 CEST
Welcome to the Scaleway Status website. Here, you can view the status of all Scaleway services across all products and availability zones (AZs).
You can check how to subscribe to and manage your email status updates, and how to receive status notifications on Slack, here: https://www.scaleway.com/en/docs/account/reference-content/scaleway-status-updates/
Elements - Products
Degraded Performance
100.0% uptime over the past 90 days
Instances
Operational
Elastic Metal
Degraded Performance
Apple Silicon
Operational
Object Storage
Degraded Performance
Block Storage
Operational
Container Registry
Operational
Network
Operational
Private Network
Operational
Public Gateway
Operational
Load Balancer
Operational
Kubernetes Kapsule
Operational
Serverless Functions and Containers
Operational
Serverless-Database
Operational
Jobs
Operational
Databases
Operational
Messaging and Queuing
Operational
Domains
Operational
IoT Hub
Operational
Web Hosting
Operational
Transactional Email
Operational
IAM
Operational
Observability
Operational
Secret Manager
Operational
Environmental Footprint
Operational
Developer Tools
Operational
Account
Operational
Billing
Operational
Edge service
Operational
Elements Console
Operational
Website
Operational
Generative API
Degraded Performance
Managed Inference
Operational
APIs
Operational
Serverless Container
Operational
100.0% uptime over the past 90 days
Elements - AZ
Operational
fr-par-1
Operational
fr-par-2
Operational
fr-par-3
Operational
nl-ams-1
Operational
nl-ams-2
Operational
nl-ams-3
Operational
pl-waw-1
Operational
pl-waw-2
Operational
pl-waw-3
Operational
Dedibox - Products
Operational
Dedibox
Operational
Hosting
Operational
SAN
Operational
Dedirack
Operational
Dedibackup
Operational
Domains
Operational
RPN
Operational
Dedibox Console
Operational
Dedibox VPS
Operational
Dedibox - Datacenters
Operational
DC1
Operational
DC2
Operational
DC3
Operational
DC4
Operational
DC5
Operational
AMS
Operational
Miscellaneous
Degraded Performance
Excellence
Degraded Performance
BookMyName
Operational
Saagie - Products
Operational
Product-1-ACO
Operational
Product-2-BOU
Operational
Product-3-BV
Operational
Product-4-DN
Operational
Product-5-MAT
Operational
Product-6-SIPDEV
Operational
Product-7-SIPINT
Operational
This maintenance will cause a brief downtime of a few seconds during the failover process.
During this window, you will not be able to make any changes to your MongoDB resources (e.g., creating/deleting instances or creating/restoring snapshots). Please ensure that no critical operations are scheduled during this time.
This update is part of our ongoing efforts to keep the system up to date and ensure the stability of our services. Posted on
Mar 06, 2026 - 11:25 CET
Due to the Radix TLD registry’s transition from CentralNic to Tucows (effective 17 March 2026), Radix TLD operations will be paused during a maintenance window from Mon, March 16 at 18:00 CET (UTC+1) to Tue, March 17 at 18:00 CET (UTC+1). Impacted Radix TLDs: .fun, .host, .online, .press, .pw, .site, .space, .store, .tech, .uno, .website
Impact (only for Radix TLD domains): Unavailable: new registrations, renewals, transfers, restores, owner changes, and NS updates
Unaffected: existing Radix TLD domains will continue to resolve normally (websites, DNS, and email should continue to work).
What you can do: If your Radix domain is expiring soon, please renew before March 16 to avoid any issues. No action is needed if you’re not planning to renew your Radix TLD domains or perform other modifications during this period. We’ll resume normal processing as soon as the maintenance completes. If you have any questions, please contact our support team.
Thank you for your understanding. Posted on
Mar 11, 2026 - 10:58 CET
We will be performing planned maintenance on one of our core backbone routers located in the Paris DC3 data center. The maintenance will take place on 03/17/2026 between 08:00 and 18:00 CET.
This router is part of a fully redundant architecture, and traffic will be offloaded to the remaining infrastructure devices during the operation. However, despite this offload and redundancy, some temporary side effects may occur during the maintenance window, including:
- Brief connectivity interruptions
- Temporary increases in latency
- Packet loss
- Routing changes
- Network reconvergence
These impacts may affect services hosted in Paris and, due to interconnections between sites, could also potentially impact Amsterdam. We are taking all necessary measures to minimize disruption and ensure a swift return to normal operations. Posted on
Feb 27, 2026 - 15:50 CET
Update -
We will be undergoing scheduled maintenance during this time.
Mar 12, 2026 - 10:30 CET
Scheduled -
A planned software upgrade is scheduled on a core router in the WAW region. During this maintenance window redundancy for network connections in the region will be temporarily unavailable. Packet loss or service disruption is not expected.
Mar 12, 2026 - 10:29 CET
Resolved -
This incident has been resolved.
Mar 13, 11:30 CET
Update -
We are continuing to monitor for any further issues.
Feb 27, 17:31 CET
Monitoring -
This disconnection was caused by one of our transit providers misbehaving. We are removing this transit from the network paths until they have explained the issue and stabilized their network.
Feb 27, 13:49 CET
Investigating -
The API Gateway service in the pl-waw region is currently experiencing connectivity issues with api.scaleway.com. This may impact the ability to manage and access certain services through the API.
Feb 27, 12:39 CET
Resolved -
The incident is fully resolved, and the response times are back to normal.
Mar 12, 12:43 CET
Identified -
We have identified the likely culprit and applied a mitigation patch. We are closely monitoring the situation.
Mar 12, 11:41 CET
Investigating -
The Instance Orchestration API is currently not available in the fr-par region. This may affect your ability to manage your instances.
Mar 12, 11:24 CET
Resolved -
This incident has been resolved.
Mar 12, 10:07 CET
Investigating -
Since 09/03/2026, metric and log reads and writes in custom data sources have been suffering performance issues. We are investigating this issue.
Mar 10, 10:54 CET
Resolved -
The incident has been resolved, and we are still monitoring it while working on a long-term fix to prevent the issue from occurring again.
Mar 12, 10:06 CET
Monitoring -
We have identified that the elevated error rate on ingestion for custom metrics data sources in fr-par is due to a constant increase in the number of series from Serverless. We are working on a solution to address this issue.
Mar 11, 19:03 CET
Investigating -
Elevated error rate on ingestion for custom metrics data sources in fr-par for some customers.
Mar 11, 16:29 CET
Resolved -
This incident has been resolved.
Mar 12, 10:04 CET
Update -
We are continuing to investigate this issue.
Mar 11, 10:17 CET
Investigating -
Since 1:00 (CEST), we have been seeing partial errors on metrics ingestion for Custom and SCW metrics. With a proper retry mechanism on the sending side, this should result in no data loss.
Mar 11, 10:17 CET
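For illustration, the retry mechanism mentioned above usually lives on the sending side. Below is a minimal sketch (not Scaleway's implementation) of pushing a metrics payload with exponential backoff; the endpoint URL and token are placeholders.

```python
import time

import requests

# Placeholder endpoint and credential, for illustration only.
INGEST_URL = "https://metrics.example.com/push"
TOKEN = "SCW_SECRET_KEY_PLACEHOLDER"

def push_with_retry(payload: bytes, max_attempts: int = 5) -> None:
    """Push a metrics payload, retrying transient 5xx failures with exponential backoff."""
    delay = 1.0
    for attempt in range(1, max_attempts + 1):
        response = requests.post(
            INGEST_URL,
            data=payload,
            headers={"X-Auth-Token": TOKEN},
            timeout=10,
        )
        if response.status_code < 500:
            # Success or a client-side error: do not retry, surface 4xx as an exception.
            response.raise_for_status()
            return
        if attempt == max_attempts:
            response.raise_for_status()
        # Transient server-side error: wait and try again (1s, 2s, 4s, ...).
        time.sleep(delay)
        delay *= 2
```

Most collection agents (for example, Prometheus remote write) already retry failed pushes in this way, which is why short ingestion errors generally do not translate into data loss.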
Resolved -
After investigation, we have found the root cause.
During the scheduled maintenance https://status.scaleway.com/incidents/p2cj27y80n9w, starting on 03/03 at 3:30 PM UTC, we experienced a lot of pressure on our infrastructure, saturating the number of connections on our load balancer (LB). As a result, health checks struggled to complete, causing our backends to be repeatedly marked "down", then "up", then "down" again, while they were in fact healthy.
When "down", no more requests could be done to our infrastructure, leading to TLS errors. These "down" phases lasted a few minutes each time, and were frequently interleaved with "up" phases, leading to a partial disruption. All requests made during "down" phases were rejected. Situation improved during the night (03/03 10 PM -> 03/04 6:30 AM), as we were receiving a fewer amount of requests.
On 03/04 at 15:10 UTC, rebalancing our backends and closing idle connections freed capacity and solved the issue: backends have stayed "up" and ready to receive traffic since then.
We will add more monitoring on our LB so it doesn't happen again.
We apologize once again for any inconvenience.
Mar 9, 19:59 CET
Update -
As stated in our last message, we'd like to emphasize that the incident is now over. However, we will keep it open in "Monitoring" status until we find the actual root cause.
The incident occurred during the following windows:
- 03/03 3:30 PM UTC to 03/03 10 PM UTC
- 03/04 6:30 AM UTC to 03/04 3:10 PM UTC
That is around 15h10m in total, during which 5xx errors, closed connections, and TLS issues occurred.
We will provide updates and close this incident once the root cause has been found. We have a few leads and are checking the different hypotheses.
Mar 6, 11:29 CET
Monitoring -
Following the recurrence of intermittent errors earlier today, we have taken several mitigation steps:
- 14:22 CET: Reloaded gateways
- 15:24 CET: Rebalanced node workloads
- 16:10 CET: Forcefully cleared stuck connections
These actions appear to have stabilized the system. At this time, disruptions in the nl-ams region have returned to nominal levels, and error rates are within normal bounds.
Note: The connection cleanup at 16:10 CET may have caused brief 502 errors visible in your Cockpit for a short period. These are expected and should not persist if your services are healthy.
We are now closely monitoring the environment for any signs of elevated errors or latency. Our team remains on high alert and will act quickly if further issues arise.
We sincerely apologize for the repeated impact and thank you for your patience as we work to ensure long-term stability.
Mar 4, 16:44 CET
Identified -
We are currently observing new occurrences of the reported errors. Our team is actively investigating the situation and working on resolving the issue.
Mar 4, 13:59 CET
Monitoring -
The situation has improved following the mitigations we implemented. We are now closely monitoring the system to ensure stability across Serverless Containers in the nl-ams region.
If you are still experiencing issues, please reach out to our support team so we can assist you promptly.
Mar 4, 11:08 CET
Update -
Early monitoring shows some signs of improvement.
We've applied a mitigation and are currently evaluating its impact.
We will provide further updates as more information becomes available.
Mar 4, 10:48 CET
Investigating -
Since the conclusion of yesterday's maintenance (https://status.scaleway.com/incidents/p2cj27y80n9w), we have been made aware of intermittent errors affecting Serverless Containers in the nl-ams region. Users are experiencing issues such as TLS handshake failures, 502 errors, and connection timeouts (EOF), along with increased request latencies.
We sincerely apologize for the disruption and any impact this may have on your services. We are treating this with the highest priority and will provide updates as we make progress.
Mar 4, 10:03 CET
Resolved -
This incident has been resolved.
Mar 9, 12:57 CET
Investigating -
Since Saturday, 7 March, some customers on a webhosting server have been experiencing database errors, such as 'Too many connections'. This may cause websites to malfunction or become unavailable.
Mar 8, 01:45 CET
Resolved -
An internal issue has been resolved. Cluster creation in the Paris region (fr-par) is now operational.
Mar 9, 11:38 CET
Update -
An internal issue is preventing the creation of new clusters in the Paris region (fr-par). Our teams have identified the cause and are working on a resolution.
Mar 9, 11:18 CET
Identified -
An internal issue is preventing the creation of new clusters in the Paris region (fr-par). Our teams have identified the cause and are working on a resolution.
Mar 9, 11:17 CET
Investigating -
An internal issue prevents the creation of new clusters in the Paris region (fr-par).
Mar 9, 10:57 CET
Resolved -
We experienced an elevated error rate when querying Loki custom datasources in the fr-par region. This issue may have affected your ability to retrieve data from these datasources. We encountered a 2% error rate on queries and a 5% error rate on ingestion from 08:52 to 08:53 UTC.
Mar 9, 11:09 CET
Resolved -
Replacing the faulty switch has restored service at this time.
Mar 8, 10:56 CET
Investigating -
We have detected that public switches were down in DC2 Room 203-B Rack J4. Servers in that rack currently have no network access and are unreachable; the RPN network is still reachable.
Mar 8, 09:31 CET
Resolved -
This incident has been resolved.
Mar 6, 11:56 CET
Investigating -
We are observing an unexpectedly high number of errors on log queries against the Scaleway datasource. This is probably related to an ongoing internal migration.
Mar 5, 14:16 CET
Resolved -
This incident has been resolved.
Mar 6, 11:41 CET
Investigating -
Since February 23, some queries on Scaleway Logs datasources are returning errors. This may impact your ability to retrieve and analyze log data.
Feb 27, 17:09 CET
Resolved -
We have finished monitoring the situation.
Container instances are now running without a liveness probe, which prevents them from being restarted by it. Restarts are now only triggered by application errors on the user side (e.g. if the container crashes).
Note that after the fix we deployed earlier, users might have seen their container wake up due to the deployed change.
Sorry again for the inconvenience.
Mar 5, 17:32 CET
Monitoring -
We have deployed a fix rolling back the probe change done during the migration.
Health checks are now, as before, used only as readiness probes, and are not used to restart the containers.
We are still monitoring, but containers shouldn't restart anymore because of aggressive probing.
We will reintroduce the "liveness probe" feature later (possibly as an opt-in), and we will make sure to communicate clearly and measure the impact more thoroughly.
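For context on the distinction above: a readiness probe only controls whether traffic is routed to a container instance, while a liveness probe can restart it. As a purely illustrative sketch (the /health path and port 8080 are arbitrary choices for this example, not Scaleway requirements), a container can expose a lightweight health route for a readiness check to poll:

```python
# Illustrative container app exposing a lightweight health route that a
# readiness probe could poll. The /health path and port 8080 are arbitrary
# choices for this sketch, not values mandated by Serverless Containers.
from http.server import BaseHTTPRequestHandler, HTTPServer

READY = True  # an app could flip this once warm-up (caches, connections) is done

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            # A failing readiness check only means "do not route traffic here yet";
            # it does not restart the container.
            self.send_response(200 if READY else 503)
            self.end_headers()
            self.wfile.write(b"ok" if READY else b"starting")
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"hello")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()
```

While an instance reports itself as not ready, traffic is simply held back from it; with the rollback described above, a failing health check no longer causes the container to be restarted.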
Resolved -
This incident has been resolved.
Mar 5, 10:20 CET
Investigating -
Since 09:30, we have been experiencing a high rate of image pull/push failures on the Container Registry in the Paris region (fr-par). This may impact your ability to push or pull images from this registry.
Feb 24, 15:15 CET
Resolved -
This incident has been resolved.
Mar 4, 16:12 CET
Investigating -
A fraction of the services responsible for log queries were down and have been restarted. The situation is now stable.
Mar 4, 16:12 CET
Resolved -
The TLS certificate for free domains has expired, causing potential security warnings or access issues. Our teams have acknowledged the issue and are currently deploying new certificates and restarting the service in all affected regions.
Mar 4, 15:31 CET
Resolved -
This incident has been resolved.
Mar 4, 10:13 CET
Monitoring -
The switch has been replaced, and the config pushed. Please make sure your server performs at least one DHCP request, or you might lose access due to our security features.
Feb 27, 18:59 CET
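For reference, one way to trigger the DHCP request mentioned in the update above is to force a lease renewal from the server itself. The sketch below assumes a Linux Dedibox with dhclient installed and uses "eth0" as a placeholder interface name; use your distribution's own network tooling if it differs.

```python
# Illustrative sketch: force a DHCP lease renewal on a Linux server using
# dhclient (run as root). "eth0" is a placeholder; replace it with your public
# interface (see `ip addr`), and use your distribution's own tooling if
# dhclient is not available (e.g. NetworkManager or systemd-networkd setups).
import subprocess

INTERFACE = "eth0"

subprocess.run(["dhclient", "-r", INTERFACE], check=False)  # release the current lease
subprocess.run(["dhclient", "-v", INTERFACE], check=True)   # request a fresh lease
```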
Update -
Our teams are actively working on resolving the issue and will provide further updates as soon as possible.
Feb 27, 10:20 CET
Update -
Our network and on-site teams are monitoring the situation. The switch replacement operation will be carried out on Friday, February 27, in the morning.
Feb 26, 23:26 CET
Update -
The switch is being replaced.
Feb 26, 21:34 CET
Identified -
A technician is on site; verifications are underway.
Feb 26, 20:18 CET
Investigating -
A Dedibox switch racked in 6.12.53 is currently unreachable. Consequently, all Dedibox servers plugged into this switch are unreachable as well. This issue is suspected to be caused by a power supply problem. We are investigating the cause and will provide updates as soon as possible.
Feb 26, 18:50 CET
Resolved -
This incident has been resolved.
Mar 4, 09:46 CET
Investigating -
From 3rd March 22:30 CET to 4th March 9:06 CET, the TFTP service for Dedibox in fr-par-2 was not working properly. During this time, customers could not boot their Dedibox into rescue mode, and OS reinstallation did not work.
Mar 4, 09:45 CET
A few containers that were in "error" status before the operation might now be unreachable. This is because the latest deployed container revision wasn't working, and redeploying it failed. Users with such containers (error message starts with `Container is unable to start`) must redeploy a working version, or configure the probes so that container instances can start in due time.
Some other containers might now be in "error" status if the image has been deleted from the registry. These containers are not reachable anymore. Users with such containers (error message is `image was not found in container registry`) have to redeploy the container with an existing image.
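As an illustrative sketch of how such a redeploy could be scripted, the example below assumes the Serverless Containers v1beta1 HTTP API (a PATCH on the container followed by a call to its deploy endpoint) and uses placeholder region, IDs, and image names; the same operation can be performed from the console or the scw CLI.

```python
# Illustrative sketch (not an official runbook): point a container stuck in
# "error" status at an image that still exists in your registry, then trigger
# a redeploy through the HTTP API. The region, container ID, image name and
# the v1beta1 endpoints used here are assumptions/placeholders; adapt them,
# or perform the same steps from the console or the `scw` CLI.
import os

import requests

API = "https://api.scaleway.com/containers/v1beta1/regions/nl-ams"  # placeholder region
HEADERS = {"X-Auth-Token": os.environ["SCW_SECRET_KEY"]}
CONTAINER_ID = "11111111-2222-3333-4444-555555555555"  # placeholder container ID

# 1) Update the container so it references a working, existing image.
requests.patch(
    f"{API}/containers/{CONTAINER_ID}",
    headers=HEADERS,
    json={"registry_image": "rg.nl-ams.scw.cloud/my-namespace/my-app:known-good"},
    timeout=30,
).raise_for_status()

# 2) Trigger a new deployment of the updated container.
requests.post(
    f"{API}/containers/{CONTAINER_ID}/deploy",
    headers=HEADERS,
    timeout=30,
).raise_for_status()
```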
We invite users facing other kinds of issues to reach us through a support ticket, which will be prioritized.
Thanks for your understanding.
Mar 3, 17:00 CET
Update -
We have completed half of the maintenance as of now. No major issues have occurred so far, except for a small hiccup with crons between 4:00 and 5:00 PM. A few crons were stuck in "upgrading" status during that time; however, they were triggered correctly despite the reported status. Therefore, no crons should have been missed.
If you have encountered issues with your crons, or with your containers in general, in the nl-ams region today, feel free to create a support ticket referencing this maintenance. We will look into it as a priority.
We will continue with the remaining containers tomorrow.
Thank you for your understanding.
Mar 2, 19:09 CET
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 2, 08:00 CET
Scheduled -
On March 2nd and 3rd (between 8 AM and 5 PM UTC), we are going to perform a required operation on nl-ams to prepare for the arrival of the new v1 API (available soon).
During this operation, changes will be performed on all Serverless Containers namespaces in the nl-ams region.
While this operation is seamless most of the time, users may observe the following behaviors:
- Namespaces will temporarily pass into "upgrading" status. This may take up to 15 minutes, depending on the number of resources inside the namespace. During this time, all write operations on the namespace (update namespace, create container, update container, delete container, etc.) will be impossible. Read operations will work as usual.
- Container instances will be created. This is mandatory to ensure the containers are working properly. If a container fails to start, the "status" field will change to "error", and the "error_message" field will show the error so users can fix it. During that process, users could see multiple instances of their container running, even if it is configured with max_scale = 1 and even if there is no incoming traffic. The container will scale down as soon as the operation is over.
- Long-running requests made during this operation may fail, as we roll out container instances.
- Crons might be triggered multiple times during the operation. Crons with frequent schedules (such as every minute) might be more impacted.
Please note that during this process, unless a container cannot start because of an application error, containers will remain reachable at all times.
Resolved -
This incident has been resolved.
Mar 3, 14:21 CET
Monitoring -
Since 01:00 on February 28th, the creation and update of Public Gateways in the nl-ams-1 zone may be impacted. Our teams are currently investigating the issue.
Mar 2, 13:50 CET
Completed -
The scheduled maintenance has been completed.
Mar 3, 14:00 CET
In progress -
Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 3, 09:00 CET
Scheduled -
A planned software upgrade is scheduled on a core router in the WAW region. During this maintenance window redundancy for network connections in the region will be temporarily unavailable. Packet loss or service disruption is not expected.
Feb 19, 10:49 CET
Resolved -
This incident has been resolved.
Mar 3, 10:18 CET
Monitoring -
A fix has been implemented and we are monitoring the results.
Mar 2, 17:50 CET
Update -
Due to an error during maintenance, multiple IPs are missing from the IPAM in the zones pl-waw-1 and pl-waw-2. Impacts: internal errors when booking an IP, IPs missing from the IPAM API, and missing reverse DNS. We are currently investigating the issue and working on restoring the missing IP records.
Mar 2, 17:11 CET
Investigating -
Due to an issue between IPAM and Instance, all IPs from pl-waw-1 and pl-waw-2 were released from the IPAM. We are currently investigating the cause and impact.
Mar 2, 16:57 CET
Resolved -
This incident has been resolved.
Mar 2, 16:31 CET
Investigating -
On 26 February at around 2 p.m. Paris time, some nodes lost their internal DNS resolution, so they were unable to contact their API server and became not ready. For Kapsule pools with autoheal enabled, all nodes were deleted and replaced.
Mar 2, 16:31 CET