Status impact color legend
  • Black impact: None
  • Yellow impact: Minor
  • Red impact: Major
  • Blue impact: Maintenance
Investigating - The secondary DNS server 62.210.16.8/nssec.online.net is down. We are working to resolve this issue as quickly as possible.
Jun 30, 2025 - 16:42 CEST
Investigating - The IP 51.159.208.135 is blacklisted by Microsoft.

We are currently working on it.

Jun 30, 2025 - 15:06 CEST
Investigating - The file system on a worker server hosting temporary data is full, causing all backups to fail and preventing logs from being handled properly.
This may result in data loss and potential service disruptions.

Jun 30, 2025 - 12:20 CEST

About This Site

Welcome to the Scaleway Status website. Here, you can view the status of all Scaleway services across all products and availability zones (AZs).
You can learn how to subscribe to and manage your email status updates, and how to receive status notifications on Slack, here: https://www.scaleway.com/en/docs/account/reference-content/scaleway-status-updates/

Elements - Products Degraded Performance
Instances Operational
Elastic Metal Operational
Apple Silicon Operational
Object Storage Operational
Block Storage Operational
Container Registry Operational
Network Operational
Private Network Operational
Public Gateway Operational
Load Balancer Operational
Kubernetes Kapsule Operational
Serverless Functions and Containers Operational
Serverless-Database Degraded Performance
Jobs Operational
Databases Operational
Messaging and Queuing Operational
Domains Operational
IoT Hub Operational
Web Hosting Operational
Transactional Email Operational
IAM Operational
Observability Operational
Secret Manager Operational
Environmental Footprint Operational
Developer Tools Operational
Account API Operational
Billing API Operational
Edge service Operational
Elements Console Operational
Website Operational
Elements - AZ Operational
fr-par-1 Operational
fr-par-2 Operational
fr-par-3 Operational
nl-ams-1 Operational
nl-ams-2 Operational
nl-ams-3 Operational
pl-waw-1 Operational
pl-waw-2 Operational
pl-waw-3 Operational
Dedibox - Products Degraded Performance
Dedibox Operational
Hosting Degraded Performance
SAN Operational
Dedirack Operational
Dedibackup Operational
Domains Degraded Performance
RPN Operational
Dedibox Console Operational
Dedibox VPS Operational
Dedibox - Datacenters Operational
DC2 Operational
DC3 Operational
DC5 Operational
AMS Operational
Miscellaneous Operational
Excellence Operational
BookMyName Operational

Scheduled Maintenance

[DC5]- RPN Switch replacement in DC5 Room 1 - Rack C51 Jul 3, 2025 09:00-10:00 CEST

Due to an issue on one of the PSU inputs of the RPN switch in C51, the switch is currently attached to only one power feed.
We will therefore proceed with a switch replacement, as the issue is located on the device itself. Sorry for the inconvenience.

Posted on Jun 30, 2025 - 13:11 CEST
Jun 30, 2025
Resolved - Equipment has been replaced and everything is back online.
Jun 30, 18:21 CEST
Investigating - Network redundancy loss due to an equipment failure.
All services are still running, but some customers may be impacted by the loss of network redundancy.
We are currently working on the matter.

Jun 30, 09:35 CEST
Resolved - This incident has been resolved.
Jun 30, 16:40 CEST
Investigating - Our team has detected a new period of downtime and is actively working on a more sustainable solution.
Jun 30, 16:22 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
Jun 30, 15:58 CEST
Investigating - The legacy DNS resolver 62.210.16.6, used on Dedibox installations before the introduction of new IP addresses, is currently experiencing connectivity issues. This may affect the resolution of DNS queries for impacted Dedibox instances.
Jun 30, 15:26 CEST
Jun 29, 2025

No incidents reported.

Jun 28, 2025

No incidents reported.

Jun 27, 2025
Resolved - This incident has been resolved.
Jun 27, 13:25 CEST
Identified - The issue has been identified and our team is now working on a fix.
Jun 26, 15:32 CEST
Investigating - We are currently experiencing issues with MongoDB deployments in the fr-par region, which are failing with errors. This may result in delays or failures when attempting to create new MongoDB instances. Our technical team is actively investigating the issue to restore normal service as soon as possible.
Jun 26, 11:14 CEST
Resolved - This incident has been resolved.
Jun 27, 11:33 CEST
Monitoring - We sincerely apologize for the inconvenience caused during our recent maintenance operation. The issue has been resolved, and all services should now be back to normal. Thank you for your patience and understanding. If you have any further concerns, please don’t hesitate to contact us.
Jun 26, 17:30 CEST
Identified - Our team is currently working to resolve an issue that occurred during a scheduled maintenance operation. Unfortunately, the wrong machines were serviced by mistake. Rest assured, our engineers are taking all necessary steps to fix this promptly.
Inter-AZ traffic in the VPC (including Instance to Elastic Metal traffic) is unaffected. Only DHCP and DNS are affected by the incident on NL-AMS-3.

Jun 26, 17:28 CEST
Investigating - There is currently a connectivity issue affecting the Virtual Private Cloud (VPC) in the Amsterdam region (nl-ams-3). This issue is causing degradation of DHCP services and DNS resolution.

Impact:
Users may experience difficulties obtaining IP addresses via DHCP.
DNS resolution may be unavailable, affecting the ability to resolve domain names to IP addresses.
This also impacts the connectivity of Instances and Elastic Metal servers.

Jun 26, 16:49 CEST
Jun 26, 2025
Resolved - This incident has been resolved.
Jun 26, 15:06 CEST
Investigating - S3 checksum errors in some clients/SDKs: we always return the full-object checksum, even when the request is ranged, causing issues in some high-level wrappers that try to compare the range's checksum with the full object's.

The integrity of the objects is not impacted, and all computed checksums are valid.
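A minimal, self-contained sketch of the mismatch described above (the object contents, range, and hash algorithm below are made up for the example): a wrapper that hashes only the bytes of a ranged response and compares them with the full-object checksum from the headers reports a spurious failure, even though every byte delivered is correct.

```python
import hashlib

# Hypothetical stand-in for a stored object; real contents differ.
full_object = b"0123456789" * 100

# Checksum reported in the response headers (computed over the FULL
# object), even for a ranged request such as "Range: bytes=0-99".
reported_checksum = hashlib.md5(full_object).hexdigest()

# What a high-level wrapper computes from the bytes it actually received:
ranged_body = full_object[0:100]
computed_checksum = hashlib.md5(ranged_body).hexdigest()

# The comparison fails although the stored data is intact:
print(reported_checksum == computed_checksum)  # False
```

Until the response carries the checksum of the requested range, wrappers that verify ranged downloads this way need to skip or adapt that check.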

May 2, 19:00 CEST
Resolved - This incident has been resolved.
Jun 26, 13:22 CEST
Update - We are continuing to investigate this issue.
Jun 19, 14:23 CEST
Investigating - We are currently investigating this issue.
Jun 13, 15:26 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
Jun 9, 17:10 CEST
Identified - The IP 51.159.208.135 of our Webhosting platform is blacklisted by Microsoft.
We are currently working on it.

May 27, 10:10 CEST
Jun 25, 2025
Resolved - This incident has been resolved.
Jun 25, 14:11 CEST
Investigating - We have identified an issue reaching the authoritative DNS servers of Linode (at least ns1/2/3/4/5.linode.com) to resolve some domain names (FQDNs) from our network.

In the meantime, if you have trouble resolving FQDNs served by Linode's authoritative DNS servers, you may use DNS resolvers other than the ones provided with Scaleway and Dedibox (an example with Unbound and a public DNS resolver such as Google: 'unbound-control forward_add alpinelinux.org x.x.x.x').

We are not the only provider experiencing this issue; many other public DNS resolvers we have tested show the same anomalous behavior.
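The unbound-control forward mentioned above is transient (lost when Unbound restarts). A persistent equivalent, as a sketch of an unbound.conf fragment, would look like the following; 8.8.8.8 is an illustrative placeholder for a reachable public resolver, not an address from the incident report.

```
# unbound.conf fragment (sketch): forward only the affected zone
# to a public resolver while the authoritative servers are unreachable.
forward-zone:
    name: "alpinelinux.org."
    forward-addr: 8.8.8.8
```

Remove the forward-zone (or run 'unbound-control forward_remove alpinelinux.org') once the incident is resolved.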

Jun 24, 14:53 CEST
Jun 24, 2025
Resolved - This incident has been resolved.
Jun 24, 13:31 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
Jun 24, 10:42 CEST
Identified - We've identified an issue where the quotas returned are not always accurate, which can cause the console to display an incorrect maximum value. As a result, certain actions might be limited in the console, even though the actual quotas are higher.


Please note that this issue does not affect actions performed via API, CLI, or Terraform. Only the web console is impacted.

Jun 24, 09:38 CEST
Completed - The scheduled maintenance has been completed.
Jun 24, 12:30 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jun 24, 12:00 CEST
Scheduled - Maintenance will be carried out to improve reliability.
The impact could be short periods of unavailability when processing payments.

Jun 24, 09:07 CEST
Jun 23, 2025
Resolved - This incident has been resolved.
Jun 23, 18:58 CEST
Investigating - Switch h6-6b54-2.ams1 is down, all the servers on it are unreachable. We're currently working on it.
Jun 23, 15:20 CEST
Resolved - This incident has been resolved.
Jun 23, 18:38 CEST
Investigating - s1-a12-2.rpn.dc5 has been down since 15:03.
The RPN connection is not available for the servers connected to the switch.

Jun 23, 15:31 CEST
Resolved - This incident has been resolved.
Jun 23, 14:36 CEST
Investigating - We are currently investigating this issue.
Jun 21, 11:36 CEST
Jun 22, 2025

No incidents reported.

Jun 21, 2025
Jun 20, 2025
Resolved - This incident has been resolved.
Jun 20, 14:13 CEST
Investigating - We are currently investigating this issue.
Jun 20, 13:48 CEST
Resolved - This incident has been resolved.
Jun 20, 10:13 CEST
Investigating - We are currently being targeted by a phishing campaign.
We invite you to be cautious and to check invoices directly in your console.
Our Trust and Safety team is working to resolve this.

Jun 16, 14:09 CEST
Jun 19, 2025
Resolved - This incident has been resolved.
Jun 19, 14:08 CEST
Update - We are continuing to work on a fix for this issue.
Jun 19, 14:07 CEST
Identified - Due to exceptional circumstances, response times are currently slower than usual.
Sorry for the inconvenience.

Jun 19, 09:31 CEST
Resolved - Uplink configuration has been updated and corrected.
The situation is resolved.
Please contact Support if the issue persists.

Jun 19, 11:34 CEST
Identified - We have detected a switch down in room 101-k31-2.
Servers in that rack currently have no public network access and are unreachable.

===================

17/06/2025 09:00 UTC
The issue has been forwarded to our team for resolution.

Jun 17, 16:15 CEST
Resolved - This incident has been resolved.
Jun 19, 08:20 CEST
Update - The power on most of the chassis was recovered, we are booting the remaining customer nodes.
Jun 18, 17:51 CEST
Update - These racks are still impacted by this issue:
h2-dv5
h2-ee14
h2-ee17
h2-ee18
h2-ee19
h2-ee20
h2-ee21
h2-ee24
h2-ee25
h2-ee26
h2-ee28
h2-ee29
h2-ee2h2
h2-ee30
h2-ee31

Our teams are working on the resolution.

Jun 18, 11:33 CEST
Identified - We have detected a switch down in Hall 2 E-E, Baie : E30.
Servers in that rack currently have no public network access and are unreachable since 18/06/2025 05h30.

The issue has been forwarded to our team for resolution.

Jun 18, 09:11 CEST
Jun 18, 2025
Completed - The scheduled maintenance has been completed.
Jun 18, 21:30 CEST
Update - We are actively working on restoring power to the racks following power maintenance performed by our DC provider.
Jun 18, 16:02 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jun 18, 10:30 CEST
Scheduled - To allow our new UPS to power data hall 2, we have scheduled a maintenance window during which this power will be brought into service.
During this period, customers will experience a loss of redundancy on one feed in the following corridors:
DY / DV / EE / EB / EL / EH

Impact: Loss of redundancy on one feed.

Jun 18, 10:16 CEST
Resolved - This incident has been resolved. The issue should only have impacted a handful of DDX2ELT web hosting accounts.
Jun 18, 19:41 CEST
Monitoring - From 13:29 (CET), no files or emails were being stored, and webmail was inaccessible.
Jun 18, 18:43 CEST
Resolved - This incident has been resolved.
Jun 18, 19:01 CEST
Investigating - The issue was identified at around 3 pm (CET). We are currently investigating it.
Jun 18, 18:02 CEST
Resolved - This incident has been resolved.
Jun 18, 10:02 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
Jun 13, 15:50 CEST
Investigating - We are currently investigating this issue.
Jun 13, 12:42 CEST
Jun 17, 2025
Completed - The scheduled maintenance has been completed.
Jun 17, 15:00 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jun 17, 08:00 CEST
Scheduled - A planned software upgrade of one of the two central routers is scheduled in the AMS region.
During this maintenance, there will be no redundancy for network connections in the region.
No packet loss or other connectivity disruptions are expected.

Jun 10, 15:31 CEST
Resolved - This incident has been resolved.
Jun 17, 14:57 CEST
Investigating - Increasing number of 502 errors.
Our team is working to solve this problem.

Jun 17, 09:33 CEST
Jun 16, 2025