Status impact color legend
  • Black impact: None
  • Yellow impact: Minor
  • Orange impact: Major
  • Blue impact: Maintenance
Investigating - The IP 51.159.128.65 of our Webhosting platform pf-1007 has been blacklisted by Microsoft.
We are currently working on it.

Feb 06, 2025 - 11:50 CET
Update - We have discovered that the issue is broader than we initially thought.
As a first mitigation, we recommend that users hold off on upgrading from AWS CLI v1 (we recommend any version strictly prior to aws-cli 1.37.0, or boto3 < 1.36.0); see the version-check sketch below.
If AWS CLI v2 (which we do not recommend for the moment) is mandatory, we recommend using aws-cli < 2.23.0.
We are actively working to support these new versions and plan to release fixes by the end of next week.
The details of the bug are:
- support for the CRC64NVME checksum does not work properly
- the Transfer-Encoding: chunked header does not work
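As an illustration only (not part of the original advisory), the following Python sketch checks the locally installed boto3 version against the boto3 1.36.0 threshold mentioned above; it assumes the third-party 'packaging' package is available.

    # Sketch: warn if the installed boto3 already enables the new default
    # integrity checksums (boto3 >= 1.36.0, per the update above).
    from packaging.version import Version  # assumes the 'packaging' package is installed
    import boto3

    AFFECTED_BOTO3 = Version("1.36.0")
    installed = Version(boto3.__version__)

    if installed >= AFFECTED_BOTO3:
        print(f"boto3 {installed} may send CRC64NVME checksums by default; "
              "consider pinning boto3 < 1.36.0 until fixes are released.")
    else:
        print(f"boto3 {installed} predates the change; no action needed.")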

Jan 24, 2025 - 16:57 CET
Identified - Everyone using aws-cli v1 >= 1.37.0 or v2 >= 2.23.0 received 400 errors on PUT and POST requests.
Jan 16, 2025 - 15:04 CET
Investigating - aws-cli now enforces a CRC64NVME integrity checksum on all PUT and POST requests. This concerns versions v1 >= 1.37.0 and v2 >= 2.23.0. We do not currently support this checksum; we are working on how best to handle it on our end. In the meantime, you can choose one of these options to keep your aws-cli requests working:
- Use the --checksum-algorithm option with one of our supported checksums (a boto3 equivalent is sketched after this list):
  - SHA1
  - SHA256
  - CRC32
  - CRC32C
- Use an older version of aws-cli, which will not enforce the CRC64NVME checksum
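For boto3 users, a possible equivalent of the --checksum-algorithm CLI option is the ChecksumAlgorithm parameter on S3 calls. The sketch below is illustrative only: the bucket name, key, body, and fr-par endpoint are placeholders, not values taken from this incident.

    # Minimal sketch: explicitly request a supported checksum on upload.
    # Bucket, key, body, and endpoint below are illustrative placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.fr-par.scw.cloud",  # example Object Storage endpoint; adjust to your region
        region_name="fr-par",
    )

    s3.put_object(
        Bucket="my-bucket",
        Key="example.txt",
        Body=b"hello",
        ChecksumAlgorithm="SHA256",  # any of the supported values listed above
    )

On the CLI, the first option above achieves the same effect, for example by passing --checksum-algorithm SHA256 to the relevant s3api command.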

Jan 16, 2025 - 14:57 CET

About This Site

Welcome to the Scaleway Status website. Here, you can view the status of all Scaleway services across all products and availability zones (AZs). We are currently making a few adjustments to enhance your navigation and overall experience. Over the next couple of weeks, you will see some changes to the website. Our team is here to assist you, and we appreciate your patience.

Elements - Products Operational
Object Storage Operational
Serverless-Database Operational
Website Operational
Instances Operational
Block Storage Operational
Elastic Metal Operational
Apple Silicon Operational
Kubernetes Kapsule Operational
Container Registry Operational
Private Network Operational
Load Balancer Operational
Domains Operational
Serverless Functions and Containers Operational
Jobs Operational
Databases Operational
IoT Hub Operational
Web Hosting Operational
Observability Operational
Transactional Email Operational
Network Operational
Account API Operational
Billing API Operational
Elements Console Operational
Messaging and Queuing Operational
Public Gateway Operational
Secret Manager Operational
Developer Tools Operational
IAM Operational
Elements - AZ Operational
fr-par-1 Operational
fr-par-2 Operational
fr-par-3 Operational
nl-ams-1 Operational
nl-ams-2 Operational
nl-ams-3 Operational
pl-waw-1 Operational
pl-waw-2 Operational
pl-waw-3 Operational
Dedibox - Products Operational
Dedibox Operational
Hosting Operational
SAN Operational
Dedirack Operational
Dedibackup Operational
Domains Operational
RPN Operational
Dedibox Console Operational
Dedibox - Datacenters Operational
DC2 Operational
DC3 Operational
DC5 Operational
AMS Operational
Miscellaneous Operational
Excellence Operational
BookMyName Operational
Component status legend: Operational · Degraded Performance · Partial Outage · Major Outage · Maintenance
Scheduled Maintenance
We will be upgrading the Scaleway Block Volume Container Storage Interface (CSI) driver to its latest version (v.0.3) for clusters deployed in the WAW region.
When: 01/04/2025 at 11:00 AM
Impact: There will be no downtime on clusters with dedicated control planes. API servers on mutualized control planes will be unavailable for a few seconds.
The maintenance action will only affect clusters which have not been manually upgraded to the latest version of the CSI by 31/03/2025.
For more information and guidance regarding this upgrade, please refer to our documentation below.

https://www.scaleway.com/en/docs/containers/kubernetes/api-cli/managing-storage/#upgrading-to-csi-version-03

Posted on Jan 16, 2025 - 12:54 CET
We will be upgrading the Scaleway Block Volume Container Storage Interface (CSI) driver to its latest version (v.0.3) for clusters deployed in the AMS region.
When: 02/04/2025 at 11:00 AM
Impact: There will be no downtime on clusters with dedicated control planes. API servers on mutualized control planes will be unavailable for a few seconds.
The maintenance action will only affect clusters which have not been manually upgraded to the latest version of the CSI by 31/03/2025.
For more information and guidance regarding this upgrade, please refer to our documentation below.

https://www.scaleway.com/en/docs/containers/kubernetes/api-cli/managing-storage/#upgrading-to-csi-version-03

Posted on Jan 16, 2025 - 12:55 CET
We will be upgrading the Scaleway Block Volume Container Storage Interface (CSI) driver to its latest version (v.0.3) for clusters deployed in the PAR region.
When: 07/04/2025 at 11:00 AM
Impact: There will be no downtime on clusters with dedicated control planes. API servers on mutualized control planes will be unavailable for a few seconds.
The maintenance action will only affect clusters which have not been manually upgraded to the latest version of the CSI by 31/03/2025.
For more information and guidance regarding this upgrade, please refer to our documentation below.

https://www.scaleway.com/en/docs/containers/kubernetes/api-cli/managing-storage/#upgrading-to-csi-version-03

Posted on Jan 16, 2025 - 12:56 CET
Past Incidents
Feb 22, 2025

No incidents reported today.

Feb 21, 2025
Resolved - This incident has been resolved.
Feb 21, 16:34 CET
Update - No PUB or RPN service on servers connected to these switches for a few minutes, from 2025 Feb 21 10:59 UTC to 11:09 UTC.
PUB and RPN were both impacted.
Switches impacted:
- DC2, Room: 101 101, Rack: G36, Block: A
- DC2, Room: 101 101, Rack: G30, Block: N
- DC2, Room: 101 101, Rack: G35, Block: B

Feb 21, 12:46 CET
Investigating - No PUB or RPN service on servers connected to this switch for a few minutes, from 2025 Feb 21 10:59 UTC to 11:09 UTC.
PUB and RPN are both impacted.

Feb 21, 12:21 CET
Resolved - This incident has been resolved.
Feb 21, 15:07 CET
Investigating - The Cockpit query path is degraded.
Some clients may face errors for queries that fetch more than 12 hours of data.

Feb 21, 10:14 CET
Resolved - From Feb 20 at 11:00 to Feb 21 at 12:00, metrics from the NATS, Queues, Topics and Events products were not stored in Cockpit.
Feb 21, 14:14 CET
Investigating - Since Feb 20 at 11:00, metrics from the NATS, Queues, Topics and Events products have not been stored in Cockpit.
Feb 21, 12:03 CET
Feb 20, 2025
Resolved - This incident has been resolved.
Feb 20, 10:53 CET
Investigating - No PUB service on servers connected to this switch for a few minutes from 2025 Feb 20 09:37 UTC to 09:39 UTC.
RPN not impacted.

Feb 20, 10:53 CET
Resolved - Data center: DC3, Room: 4 4-5, Rack: D9
The public switch has been restarted.
There was no public service on the servers connected to this switch for a few minutes, from February 19 23:11 UTC to 23:16 UTC.

RPN is not impacted.

Feb 20, 09:25 CET
Feb 19, 2025
Resolved - This incident has been resolved.
Feb 19, 14:05 CET
Monitoring - A fix has been implemented and we are monitoring the results.
Feb 19, 11:59 CET
Update - We are continuing to work on a fix for this issue.
Feb 19, 11:47 CET
Update - We are continuing to work on a fix for this issue.
Feb 19, 11:47 CET
Identified - The issue has been identified and a fix is being prepared.
Feb 19, 11:43 CET
Resolved - This incident has been resolved.
Feb 19, 11:44 CET
Update - Our engineers installed a new fix this morning and the service has been back up and running since 9am.
Feb 17, 11:52 CET
Monitoring - A fix has been implemented and we are monitoring the results.
Feb 15, 01:14 CET
Investigating - Some websites and their databases are unavailable on the PF 1010.
The webhosting team has been alerted and will investigate as soon as possible.

Feb 15, 00:11 CET
Feb 18, 2025
Resolved - The RPN switch rebooted at DC2, room 101 101, racks K37 and L37. The service was down for a few minutes, from 2025 Feb 18 11:33 UTC to 11:39 UTC. We apologize for the inconvenience.
Feb 18, 15:33 CET
Resolved - This incident has been resolved.
Feb 18, 14:51 CET
Identified - During the weekend, Microsoft blacklisted one of our shared IPs. We blocked the IP and are currently investigating.
Feb 17, 09:51 CET
Resolved - No RPN service on servers connected to this switch for 5 minutes from 2025 Feb 18 04:38 UTC to 04:43 UTC.
Feb 18, 10:08 CET
Feb 17, 2025
Completed - The scheduled maintenance has been completed.
Feb 17, 10:15 CET
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Feb 17, 10:00 CET
Scheduled - In order to improve the underlying infrastructure for DHCPv6 on Dedibox, the services handling DHCPv6 requests will have to be temporarily shut down for a short time (~15 minutes).

During the maintenance, the following operations might be delayed and/or need to be retried:
- routing an IPv6 block from one Dedibox server to another
- routing a new IPv6 block to a Dedibox server
- in the unlikely event that the lease for a given IPv6 block expires during the migration and the DHCP client doesn't retry DHCP requests correctly, the routing for the block may be lost and require manual action (= requesting a new DHCP lease)

The following services will NOT be impacted:
- SLAAC on Dedibox will not be impacted
- IPv6 on Elastic Metal will not be impacted
- IPv4 connectivity, IPFO, RPN will not be impacted

Feb 12, 14:21 CET
Completed - The scheduled maintenance has been completed.
Feb 17, 10:00 CET
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Feb 17, 09:30 CET
Scheduled - A downtime of 10 minutes can be expected at the end of the maintenance, when the platform is rebooted.
Feb 13, 12:33 CET
Resolved - This incident has been resolved.
Feb 17, 09:33 CET
Monitoring - A fix has been implemented and we are monitoring the results.
Feb 14, 13:57 CET
Investigating - Apple Mail services have blacklisted our ASN; we are currently in contact with them to resolve the issue.
Feb 14, 09:56 CET
Feb 16, 2025
Resolved - No public service on servers connected to this switch for 5 minutes, from 07:02 UTC to 07:07 UTC on 2025 Feb 16.
Feb 16, 07:00 CET
Feb 15, 2025

Unresolved incident: [WEBHOSTING] Blacklist Microsoft.

Feb 14, 2025
Resolved - This incident has been resolved.
Feb 14, 09:56 CET
Identified - All Dedibox servers in this rack are down.

Our team is working on the issue.

Feb 12, 08:41 CET
Resolved - Public switch rebooted due to a power issue.

Impact: No public service on servers connected to this switch for 5 minutes, from 04:56 UTC to 05:01 UTC.

Feb 14, 08:21 CET
Feb 13, 2025
Completed - The scheduled maintenance has been completed.
Feb 13, 17:00 CET
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Feb 13, 09:00 CET
Scheduled - A maintenance will be performed on our infrastructure for both the Serverless Functions and Containers products, to update internal components.

During this operation, nodes hosting user workloads (functions/containers instances) will be replaced. As we update the underlying nodes, running functions/containers instances will need to be relocated to different nodes.

Impacts:

- functions/containers instances will restart once
- depending on the functions/containers configuration (min scale or how they handle termination), some 5xx errors might be experienced
- cold starts may also be experienced for some requests, until the new functions/containers instances are fully restarted and ready to receive requests

Feb 10, 18:22 CET
Feb 12, 2025
Completed - The scheduled maintenance has been completed.
Feb 12, 22:00 CET
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Feb 12, 14:00 CET
Scheduled - A maintenance will be performed on our infrastructure for both the Serverless Functions and Containers products, to update internal components.

During this operation, nodes hosting user workloads (functions/containers instances) will be replaced. As we update the underlying nodes, running functions/containers instances will need to be relocated to different nodes.

Impacts:

- functions/containers instances will restart once
- depending on the functions/containers configuration (min scale or how they handle termination), some 5xx errors might be experienced
- cold starts may also be experienced for some requests, until the new functions/containers instances are fully restarted and ready to receive requests

Feb 10, 18:20 CET
Completed - The scheduled maintenance has been completed.
Feb 12, 18:01 CET
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Feb 12, 10:01 CET
Scheduled - A maintenance will be performed on our infrastructure for both the Serverless Functions and Containers products, to update internal components.

During this operation, nodes hosting user workloads (functions/containers instances) will be replaced. As we update the underlying nodes, running functions/containers instances will need to be relocated to different nodes.

Impacts:

- functions/containers instances will restart once
- depending on the functions/containers configuration (min scale or how they handle termination), some 5xx errors might be experienced
- cold starts may also be experienced for some requests, until the new functions/containers instances are fully restarted and ready to receive requests

Feb 10, 18:19 CET
Completed - The scheduled maintenance has been completed.
Feb 12, 18:00 CET
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Feb 12, 10:00 CET
Scheduled - Restarting the kubelet process on Kapsule/Kosmos orchestrated nodes.

fr-par: 12/02/2025~14/02/2025

No service interruptions should occur, but some workloads may restart.

Feb 7, 14:12 CET
Resolved - Due to recent changes in network automation, Apple Silicon customers experienced extended delivery times for new servers, or delivery failures (servers would be reported in an error state).
This only impacted new orders or re-installation of existing servers.
The issue started around 00:00 UTC and was resolved this morning around 11:00 UTC.

Feb 12, 14:49 CET
Resolved - This incident has been resolved.
Feb 12, 14:11 CET
Update - Scaleway products' metrics ingestion suffered downtime between 11/01 1:05 AM and 12/01 9:35 AM. Some metrics may not have been properly ingested in this period and won't be available. A fix has already been deployed.
Jan 14, 10:36 CET
Investigating - We have detected an issue with Grafana; it may return 500 errors.

The issue has been forwarded to our team for resolution.

Jan 11, 19:45 CET
Resolved - This incident has been resolved
Feb 12, 11:14 CET
Identified - Following an incident with the Kapsule infrastructure, there was a high load on the Load Balancer product.

Kapsule issues were resolved, but Load Balancers can no longer be created.

This means it is currently not possible to create Load Balancers or new Kapsule clusters.

Our engineers are working to resolve this situation.

Feb 12, 06:25 CET
Investigating - We are currently investigating this issue.
Feb 12, 06:17 CET
Resolved - Due to a mistake during the investigation of a switch issue, this device was rebooted.
The reboot made its servers unreachable for 3 minutes.

Feb 12, 09:12 CET
Feb 11, 2025
Resolved - This incident has been resolved.
Feb 11, 16:15 CET
Monitoring - The public switch inside Rack S1-B43 at DC5 has reloaded.
As a result, servers connected to this switch were unreachable for 5 minutes.

Feb 4, 14:44 CET
Resolved - This incident has been resolved.
Feb 11, 16:15 CET
Monitoring - A fix has been implemented and we are monitoring the results.
Feb 10, 20:03 CET
Update - We are continuing to work on a fix for this issue.
Feb 10, 17:44 CET
Identified - We have detected a switch down in DC3, Room: 4 4-6, Rack: B23.
Servers in that rack currently have no network access and are unreachable.

10.01.2025 at 17:40 UTC
The issue has been forwarded to our team for resolution.

Feb 10, 17:43 CET
Resolved - In the nl-ams region, between 2025-02-11 10:24 UTC and 2025-02-11 11:47 UTC, users might have received 503 errors (upstream connection termination) when calling their functions/containers: an estimated ~1.2% of requests in that time frame, mostly grouped in the first hour (around ~3.6% from 10:24 UTC until 11:00 UTC).
The root cause of these 503 errors is not the same as the one we experienced in the past; this one is due to unattended restarts of system components.

We apologize for any inconvenience.

Feb 11, 11:30 CET
Feb 10, 2025
Completed - The scheduled maintenance has been completed.
Feb 10, 22:08 CET
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Feb 10, 14:08 CET
Scheduled - Restarting the kubelet process on Kapsule/Kosmos orchestrated nodes.
nl-ams: 10/02/2025~11/02/2025
No service interruptions should occur, but some workloads may restart.

Feb 7, 14:10 CET
Completed - The scheduled maintenance has been completed.
Feb 10, 18:00 CET
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Feb 10, 10:00 CET
Scheduled - Restarting the kubelet process on Kapsule/Kosmos orchestrated nodes:

pl-waw: 10/02/2025~11/02/2025

No service interruptions should occur, but some workloads may restart.

Feb 7, 14:08 CET
Resolved - This incident has been resolved.
Feb 10, 17:01 CET
Identified - The purpose of this maintenance is to resolve an issue affecting MySQL services.
We apologize for any inconvenience caused.

Feb 10, 15:22 CET
Resolved - This incident has been resolved.
Feb 10, 16:23 CET
Update - A few HTTP 502, HTTP 504, and closed-connection errors occurred between approximately 2025-02-06 16:30 UTC and 2025-02-06 17:30 UTC. This affected both pulling and pushing images from/to rg.fr-par.scw.cloud. The situation seems better now.

As a consequence, during the period mentioned above:

- for serverless functions users, some builds might have failed. Retrying should work.
- for serverless containers users, some containers instances might have taken time to start, leading to higher cold starts.

We apologize again for any inconvenience.

Feb 6, 18:51 CET
Monitoring - The situation seems back to normal since 18:30 UTC. The incident lasted 50 minutes, between 17:40 UTC and 18:30 UTC. We are now monitoring.
Sorry again for any inconvenience.

Feb 5, 20:25 CET
Identified - Users pushing to the FR-PAR registry (rg.fr-par.scw.cloud) might encounter slowness or failures (502 Bad Gateway, 504, closed network connections, etc.; the list of errors is not exhaustive).
This transitively affects Serverless Function builds, as images cannot be pushed to the registry.
We are sorry about any inconvenience.

Feb 5, 19:35 CET
Completed - The scheduled maintenance has been completed.
Feb 10, 11:35 CET
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Feb 10, 11:05 CET
Scheduled - The purpose of this maintenance is to resolve an issue affecting MySQL services.
We apologize for any inconvenience caused.

Feb 10, 11:15 CET
Resolved - No issues of this kind appeared during the weekend, so we are closing the incident.
Sorry again for any inconvenience.

Feb 10, 10:58 CET
Monitoring - The issue has been identified. It was scoped to a single faulty node that experienced a similar issue before the fix was applied globally. Sorry for the inconvenience, we are monitoring the situation.
Feb 7, 19:00 CET
Investigating - We are still seeing problems in the AMS region linked to this previous incident and status page: https://status.scaleway.com/incidents/7lh2qdqpyqys.

We are still investigating.

Feb 7, 17:38 CET
Feb 9, 2025

No incidents reported.

Feb 8, 2025

No incidents reported.