Status impact color legend
  • Black impact: None
  • Yellow impact: Minor
  • Red impact: Major
  • Blue impact: Maintenance
Investigating - An unexpected server failure occurred on san-ssd-3.rpn.online.net, causing the loss of the master node in the cluster. As a result, the RPN-SAN storage service became unavailable.

The Storage team is actively investigating the root cause and working to restore service as quickly as possible.

Jul 12, 2025 - 21:00 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
Jul 15, 2025 - 18:21 CEST
Investigating - Issues on fr-par-2 are impacting multiple products. Our team is actively working on it.
Jul 15, 2025 - 17:40 CEST
Identified - Our engineers noticed that the billing for our Apple Silicon products is not up to date.
It will be retroactively resumed on July 16th, 2025.

Jul 15, 2025 - 18:04 CEST
Investigating - Since July 6th, 2025 at 8:45 PM, Gemma 3 has been experiencing slow response times, and Llama 3.3 has experienced a few short disruptions.
Jul 07, 2025 - 16:00 CEST
Identified - On July 15th at 15:00:00, the Mistral Small 3.1 model experienced instabilities, leading to potential disruptions in service. This issue may affect the performance and availability of the Generative APIs relying on this model.
Jul 15, 2025 - 15:26 CEST
Investigating - Since June 30th, 2025 at 11:30 AM, we have been experiencing periodic instabilities on the Mistral Small 3.1 inference engine.
This may result in short, intermittent service disruptions for affected users. Investigations are ongoing to find and fix the root cause.

Jul 03, 2025 - 10:01 CEST
Identified - A small subset of customers has experienced an interruption of logs and metrics coming from Scaleway products since June 13th. Custom Data Sources are not affected.
Jul 15, 2025 - 15:10 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jul 15, 2025 - 10:00 CEST
Scheduled - From July 15th to July 17th, guest accounts are being migrated to member accounts. Guests will be logged out during the migration. More information at https://www.scaleway.com/en/docs/iam/reference-content/guests-to-members-migration/
Jul 15, 2025 10:00 - Jul 17, 2025 18:00 CEST
Investigating - The IP 51.159.208.135 is blacklisted by Microsoft.

We are currently working on it.

Jun 30, 2025 - 15:06 CEST

About This Site

Welcome to the Scaleway Status website. Here, you can view the status of all Scaleway services across all products and availability zones (AZs).
You can learn how to subscribe, manage your email status updates, and receive status notifications on Slack here: https://www.scaleway.com/en/docs/account/reference-content/scaleway-status-updates/

Elements - Products Partial Outage
Instances Operational
Elastic Metal Operational
Apple Silicon Operational
Object Storage Operational
Block Storage Operational
Container Registry Operational
Network Operational
Private Network Operational
Public Gateway Operational
Load Balancer Operational
Kubernetes Kapsule Operational
Serverless Functions and Containers Operational
Serverless-Database Operational
Jobs Operational
Databases Operational
Messaging and Queuing Operational
Domains Operational
IoT Hub Operational
Web Hosting Operational
Transactional Email Operational
IAM Operational
Observability Partial Outage
Secret Manager Operational
Environmental Footprint Operational
Developer Tools Operational
Account API Under Maintenance
Billing API Operational
Edge service Operational
Elements Console Operational
Website Operational
Elements - AZ Major Outage
fr-par-1 Operational
fr-par-2 Major Outage
fr-par-3 Operational
nl-ams-1 Operational
nl-ams-2 Operational
nl-ams-3 Operational
pl-waw-1 Operational
pl-waw-2 Operational
pl-waw-3 Operational
Dedibox - Products Partial Outage
Dedibox Operational
Hosting Degraded Performance
SAN Partial Outage
Dedirack Operational
Dedibackup Operational
Domains Operational
RPN Operational
Dedibox Console Operational
Dedibox VPS Operational
Dedibox - Datacenters Operational
DC2 Operational
DC3 Operational
DC5 Operational
AMS Operational
Miscellaneous Operational
Excellence Operational
BookMyName Operational

Scheduled Maintenance

Téléhouse2 PoP Maintenance Jul 16, 2025 09:00-13:00 CEST

Due to maintenance on equipment associated with the Telehouse2 PoP, the following services will be impacted:
  • Interlinks delivered in TH2 will be unreachable from the AZs.
  • Increased latency toward other providers such as OVH and Cloudflare.
No loss of traffic is expected, as traffic will be diverted to other paths.

Posted on Jul 11, 2025 - 15:18 CEST

Amsterdam ITX9 PoP Maintenance - Latency higher than usual in AMS Region Jul 16, 2025 14:00-15:00 CEST

Due to maintenance on a peering router located in Amsterdam Interxion9, customers might be impacted by higher latency than usual.
No loss of traffic is expected, as traffic will be diverted to other paths.

Posted on Jul 15, 2025 - 17:53 CEST
Jul 15, 2025
Resolved - This incident has been resolved.
Jul 15, 14:36 CEST
Investigating - The following OSes cannot be installed on Dedibox: archlinux, esxi, and windows.
Jul 15, 12:24 CEST
Jul 14, 2025

No incidents reported.

Jul 13, 2025
Resolved - This incident has been resolved.
Jul 13, 05:55 CEST
Investigating - We have detected that a switch is down.
Servers in that rack currently have no public network access and are unreachable.

Issue has been forwarded to our team for resolution.

Jul 13, 00:52 CEST
Jul 12, 2025
Jul 11, 2025
Resolved - This incident has been resolved.
Jul 11, 11:40 CEST
Monitoring - Switch has been replaced.
Servers are back online.

We are monitoring the situation.

Jul 8, 10:12 CEST
Investigating - We are experiencing a temporary issue affecting some dedicated servers.

The team has been notified.

Jul 8, 07:30 CEST
Resolved - This incident has been resolved.
Jul 11, 11:36 CEST
Monitoring - The faulty switch has been powered back on.
Optical modules have been replaced.
The switch is now operational again.

Connectivity is being restored for the affected Dedibox servers.
We continue to monitor the situation to ensure full recovery.

Jul 8, 10:57 CEST
Identified - The issue has been identified and a fix is being implemented.
Jul 8, 10:13 CEST
Monitoring - Physical intervention at the DC is ongoing
Jul 8, 10:03 CEST
Investigating - We are currently experiencing a network incident in DC3 due to a switch failure.
This is impacting a subset of Dedibox servers, which may face intermittent or total loss of network connectivity.

Our teams are actively working to restore full service as quickly as possible.

Jul 8, 09:00 CEST
Resolved - This incident has been resolved.
Jul 11, 11:35 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
Jul 9, 10:16 CEST
Investigating - The switch has rebooted several times, as have some of the servers.
Jul 8, 15:41 CEST
Resolved - This incident has been resolved.
Jul 11, 11:35 CEST
Monitoring - The faulty switch has been replaced.
All affected servers have been back online since 09:41.

If you’re still experiencing issues, please contact our Support team.

Jul 10, 09:46 CEST
Identified - We are currently experiencing a network incident in DC5 due to a switch failure.
This is impacting a subset of Dedibox servers, which may face intermittent or total loss of network connectivity.

Our teams are actively working to restore full service as quickly as possible.

Jul 10, 08:50 CEST
Resolved - There was an issue with the uplink. The incident has been resolved.
Jul 11, 11:34 CEST
Investigating - The RPN switch is currently unreachable, resulting in a loss of RPN connectivity in rack H7. The public network remains operational.
Jul 11, 10:11 CEST
Resolved - This incident has been resolved.
Jul 11, 11:34 CEST
Monitoring - We experienced a temporary issue affecting a limited number of servers at DC2.
This was due to an internal operation in the datacenter.
As a result, some servers may have become unavailable during the incident.

All impacted servers are currently being restored.
A full recovery is in progress.

Jul 7, 13:57 CEST
Jul 10, 2025
Resolved - This incident has been resolved.
Jul 10, 20:05 CEST
Monitoring - Between July 3rd 2:00 PM UTC and July 4th 11:00 AM UTC, customers may have experienced high latencies on Object Storage in the NL-AMS region.
This was due to an internal rebuild process.

To limit the impact, operations were slowed down during the event.
The situation has now stabilized and we are currently monitoring the service closely.

Jul 4, 11:01 CEST
Jul 9, 2025
Jul 8, 2025
Resolved - This incident has been resolved.
Jul 8, 15:58 CEST
Investigating - We are currently investigating this issue.
Jul 8, 15:49 CEST
Jul 7, 2025
Jul 6, 2025

No incidents reported.

Jul 5, 2025

No incidents reported.

Jul 4, 2025
Resolved - This incident has been resolved.
Jul 4, 15:56 CEST
Investigating - The secondary DNS server 62.210.16.8/nssec.online.net is down. We are working to resolve this issue as quickly as possible.
Jun 30, 16:42 CEST
Jul 3, 2025
Resolved - This incident has been resolved.
Jul 3, 15:14 CEST
Monitoring - Fix implemented. Monitoring ongoing.
Jul 3, 15:00 CEST
Identified - Issue identified, a fix is in progress.
Jul 3, 14:55 CEST
Update - We are continuing to investigate this issue.
Jul 3, 14:51 CEST
Investigating - Our teams are currently investigating an issue affecting the container registry service in the AMS region.
This issue may prevent users from pulling or pushing container images, impacting the deployment and operation of containerized applications.

We apologize for any inconvenience caused and will provide updates as soon as more information is available.

Jul 3, 14:25 CEST
Completed - The scheduled maintenance has been completed.
Jul 3, 10:00 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jul 3, 09:00 CEST
Scheduled - Due to an issue on one of the PSU inputs of the RPN switch in C51, the switch is currently attached to only one power feed.
As the issue is located on the device itself, we will proceed with a switch replacement. Sorry for the inconvenience.

Jun 30, 13:11 CEST
Jul 2, 2025
Resolved - This incident has been resolved.
Jul 2, 08:48 CEST
Monitoring - We continue to monitor the situation closely for any further issues.
Jul 1, 18:15 CEST
Identified - The error rate is below the nominal threshold; we are continuing to monitor closely.
Jul 1, 16:30 CEST
Investigating - Due to elevated temperatures in France, some of our services are currently experiencing degraded performance. Our teams are actively implementing mitigation measures and closely monitoring the situation to prevent further impact.
Jul 1, 15:35 CEST
Postmortem - Read details
Jul 3, 18:12 CEST
Resolved - This incident has been resolved.
Jul 2, 02:42 CEST
Update - The situation is back to normal; teams are keeping an eye on it. We are closing this incident.
Jul 2, 02:42 CEST
Update - Most of the services are back online. We are working to clear the few remaining side effects.
Jul 2, 01:57 CEST
Update - All servers are now up and running; our teams are now working on restoring the services that still have issues.
Jul 2, 00:07 CEST
Update - The backend is now fully up and stable, and customer services will begin to come back progressively. We are still monitoring temperatures so we can ramp up the load on the cooling with caution.
Jul 1, 22:41 CEST
Update - We are adding more servers back into production, mostly backend for now. Once we are confident that all services are ready and the cooling remains stable, customer services will start progressively. We thank you for your patience.
Jul 1, 21:49 CEST
Update - We are seeing improvements in temperatures. We will begin to power up a few internal servers and check how the cooling holds the load.
Jul 1, 20:46 CEST
Monitoring - Our datacenter provider has informed us that the situation has stabilized; they expect temperatures to improve slowly in the coming hours.
We are checking our own sensors for now; once we are confident the cooling is sufficient, we will begin to power on the stopped services.

Jul 1, 19:22 CEST
Investigating - Our datacenter provider in nl-ams-1 is unable to provide an update regarding the cooling issue in one of our rooms, so we cannot yet share an ETA for restoring services on our side. We will provide updates as soon as we have them.
Jul 1, 18:35 CEST
Monitoring - We continue to monitor the situation closely for any further issues.
Jul 1, 18:21 CEST
Identified - To prevent further issues, we will preemptively shut down several services in the datacenter. This may result in temporary downtime for your service. We are doing our best to recover the situation as quickly as possible.
Jul 1, 17:42 CEST
Investigating - Due to elevated temperatures in the Netherlands, some of our services are currently experiencing degraded performance. Our teams are actively implementing mitigation measures and closely monitoring the situation to prevent further impact.
Jul 1, 16:30 CEST
Jul 1, 2025
Resolved - This incident has been resolved.
Jul 1, 16:51 CEST
Investigating - The file system on a worker server hosting temporal data is full, causing all backups to fail and preventing logs from being handled properly.
This may result in data loss and potential disruptions in service.

Jun 30, 12:20 CEST
Resolved - The switch is back online.
Jul 1, 16:39 CEST
Investigating - A network switch in DC5 room 1 rack C62 has experienced a failure. Dedibox servers in this rack have lost their RPN connectivity. Our technical team is actively working to resolve the issue and restore full connectivity.
Jul 1, 16:07 CEST