Status impact color legend
  • Black impact: None
  • Yellow impact: Minor
  • Orange impact: Major
  • Blue impact: Maintenance
Update - On October 21st, we successfully released a new version in the pl-waw region, which significantly improved the availability and performance of the service. The deployment of this update is currently ongoing in the nl-ams region and is planned for tomorrow morning, October 23rd, in the fr-par region. Please note that no service interruption is expected.
Oct 22, 2024 - 11:27 CEST
Monitoring - The mitigation actions are complete, and both performance and availability are almost back to normal. We will keep the incident open for monitoring until the long-term fix is released within a couple of weeks.
Oct 10, 2024 - 15:51 CEST
Update - Based on our monitoring, we confirm that the latency caused by the incident has significantly decreased in the Paris region and is almost back to normal (only the 99.9th percentiles remain above usual values). We are planning a hardware update by the end of the day to remove most of the services that still generate 5xx errors. A longer-term fix for the incident's root cause is planned in the coming days. As always, we will keep you updated.
Oct 07, 2024 - 10:33 CEST
Update - After monitoring the impact of our intervention, we can see that the global response time has improved (although there are still some cases, at the 99.9th percentile, where latency is too high). We will continue this Monday and remove the faulty services that still return 5xx errors.
During the weekend, we added resources to our on-call teams to handle any trouble that might arise in this situation.
On the bug-fix side, we have a longer-term fix in review today, and we will test it next week before deciding when to push it to production.
As always, we will keep you updated.

Oct 04, 2024 - 17:06 CEST
Update - Maintenance will take place on some servers between 10/03/2024 3PM CEST and 10/04/2024 6PM CEST to upgrade resources. There should be no service interruption.
Oct 03, 2024 - 15:02 CEST
Investigating - Even after the fixes, we are still experiencing timeouts and the error "Reduce your request rate".
Our team is actively working towards a lasting resolution. Thank you for your patience and understanding.
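While the incident is ongoing, clients can soften the impact of "Reduce your request rate" (the S3-style SlowDown response) and transient 503s by retrying with exponential backoff and jitter. A minimal, generic sketch (illustrative only, not an official Scaleway SDK helper; the `TransientError` class stands in for whatever error your HTTP client raises):

```python
import random
import time


class TransientError(Exception):
    """Stand-in for a 503 / "Reduce your request rate" response."""


def with_backoff(call, max_attempts=5, base_delay=0.5, max_delay=30.0):
    """Retry `call` on transient errors, sleeping with exponential
    backoff plus random jitter between attempts."""
    for attempt in range(max_attempts):
        try:
            return call()
        except TransientError:
            if attempt == max_attempts - 1:
                raise  # out of attempts, propagate the error
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(delay * random.uniform(0.5, 1.5))
```

The jitter spreads retries out over time, so many clients hitting the same overloaded endpoint do not all retry in lockstep.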

Oct 01, 2024 - 12:36 CEST
Monitoring - Following a faulty fix deployed on Friday 27 Sept at 10PM CEST, the Object Storage solution experienced instability. At 14:30 CEST on 28 Sept, we began rolling back that fix. The rollback was fully deployed at 6PM CEST and the service is stable again.
Sep 28, 2024 - 18:12 CEST
Investigating - We have been experiencing instability on the Object Storage solution in the fr-par region since 8PM UTC on Friday 27/09, resulting in HTTP 503 errors for customers.
We are currently investigating this issue.

Sep 28, 2024 - 13:40 CEST

About This Site

Welcome to the Scaleway Status website. Here, you can view the status of all Scaleway services across all products and availability zones (AZs). We are currently making a few adjustments to enhance your navigation and overall experience. Over the next couple of weeks, you will see some changes to the website. Our team is here to assist you, and we appreciate your patience.

Elements - Products Operational
Instances Operational
Object Storage Operational
Scaleway Glacier Operational
Block Storage Operational
Elastic Metal Operational
Apple Silicon M1 Operational
Kubernetes Kapsule Operational
Container Registry Operational
Private Network Operational
Load Balancer Operational
Domains Operational
Serverless Functions and Containers Operational
Jobs Operational
Databases Operational
IoT Hub Operational
Web Hosting Operational
Observability Operational
Transactional Email Operational
Network Operational
Account API Operational
Billing API Operational
Elements Console Operational
Dedibox - Products Operational
Dedibox Operational
Hosting Operational
SAN Operational
Dedirack Operational
Dedibackup Operational
Domains Operational
RPN Operational
Dedibox Console Operational
Elements - AZ Operational
fr-par-1 Operational
fr-par-2 Operational
fr-par-3 Operational
nl-ams-1 Operational
pl-waw-1 Operational
nl-ams-2 Operational
pl-waw-2 Operational
nl-ams-3 Operational
pl-waw-3 Operational
Dedibox - Datacenters Operational
DC2 Operational
DC3 Operational
DC5 Operational
AMS Operational
Miscellaneous Operational
Excellence Operational
Website Operational
Operational
Degraded Performance
Partial Outage
Major Outage
Maintenance
Scheduled Maintenance
A maintenance will be performed on our infrastructure for both Serverless Functions and Containers products, to update internal components.

During this operation, the nodes hosting user workloads (function/container instances) will be replaced. As we update the underlying nodes, running function/container instances will need to be relocated to different nodes.

Impacts:

- function/container instances will restart (twice)
- depending on the function/container configuration (min scale or how termination is handled), some 5xx errors might be experienced
- cold starts may also be experienced for some requests, until the new function/container instances are fully restarted and ready to receive requests
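Because instances will be relocated and restarted during this maintenance, workloads that handle termination gracefully drop fewer requests. A minimal sketch (illustrative only, not a Scaleway-specific API) of catching SIGTERM in a containerized Python service so that new work is refused while in-flight requests finish:

```python
import signal
import threading

# Event flipped when the platform asks the instance to stop (SIGTERM).
shutting_down = threading.Event()


def _on_sigterm(signum, frame):
    # Stop accepting new work; in-flight requests can still complete.
    shutting_down.set()


signal.signal(signal.SIGTERM, _on_sigterm)


def handle_request(payload):
    """Refuse new work once shutdown has begun, so the platform can
    route the request to another (healthy) instance."""
    if shutting_down.is_set():
        raise RuntimeError("shutting down, retry on another instance")
    return f"processed {payload}"
```

Combined with a min-scale of more than one instance, this keeps at least one instance serving traffic while the other restarts.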

Posted on Oct 22, 2024 - 11:06 CEST
A maintenance will be performed on our infrastructure for both Serverless Functions and Containers products, to update internal components.

During this operation, the nodes hosting user workloads (function/container instances) will be replaced. As we update the underlying nodes, running function/container instances will need to be relocated to different nodes.

Impacts:

- function/container instances will restart (twice)
- depending on the function/container configuration (min scale or how termination is handled), some 5xx errors might be experienced
- cold starts may also be experienced for some requests, until the new function/container instances are fully restarted and ready to receive requests.

Posted on Oct 22, 2024 - 11:11 CEST
Past Incidents
Oct 22, 2024
Resolved - This incident has been resolved.
Oct 22, 13:01 CEST
Monitoring - Our team's monitoring continues.
Updates to follow.

Sep 24, 12:51 CEST
Identified - san-dc2-32.rpn.online.net is slowed down during a maintenance operation. Clients on this SAN may notice unusual slowness. We are monitoring it closely.
Sep 9, 16:48 CEST
Resolved - This incident has been resolved.
Oct 22, 13:01 CEST
Investigating - We have noticed that problems connecting to the Dedibackup service may occur.
We will get back to you as soon as we have more information on the situation.

Apr 6, 12:23 CEST
Resolved - This incident has been resolved.
Oct 22, 12:50 CEST
Investigating - Installation fees with the Dedidiscount for a 36-month commitment are currently being billed, although they should not be. Our Dedibox team is already aware of this issue and is working on it.
Oct 1, 15:00 CEST
Oct 21, 2024
Resolved - This incident has been resolved.
Oct 21, 18:00 CEST
Investigating - We are currently unable to handle phone calls to our support line. If you need to reach us, please open a ticket. Thank you for your understanding.
Oct 21, 12:48 CEST
Resolved - This incident has been resolved.
Oct 21, 15:32 CEST
Investigating - We are experiencing some database instabilities and are working to identify and fix them.
Oct 21, 14:35 CEST
Resolved - This incident has been resolved.
Oct 21, 14:39 CEST
Identified - Following electrical maintenance in a datacenter, around 30 machines unexpectedly rebooted; we are currently putting them back into production. This led to an increase in tail latencies.
Oct 21, 14:01 CEST
Investigating - Following electrical maintenance in a datacenter, around 30 machines unexpectedly rebooted; we are currently putting them back into production. This led to an increase in tail latencies.
Oct 21, 10:55 CEST
Resolved - This incident has been resolved.
Oct 21, 14:19 CEST
Investigating - Today, from 11:33 to 11:46 AM (UTC+2:00), one of our machines in production was running with the wrong configuration, leading to an InvalidStorageClass error even though the storage class was valid.
Oct 21, 14:12 CEST
Completed - The scheduled maintenance has been completed.
Oct 21, 11:30 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Oct 21, 11:00 CEST
Scheduled - Maintenance for a system update. Hosting on the 1005 platform will be unavailable during the maintenance.
Oct 18, 11:56 CEST
Completed - The scheduled maintenance has been completed.
Oct 21, 08:00 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Oct 14, 08:01 CEST
Scheduled - Maintenance is scheduled for your managed Kubernetes clusters to automatically migrate from NAT IPs to Routed IPs. This action will only affect clusters which have not been manually migrated by September 30th.

Please note that node public IP addresses that are not flexible will change.

Impact: ~1 minute downtime for single-node clusters (no downtime if multiple nodes).
For more information and guidance on manually migrating to Routed IPs, please refer to our documentation below.

https://www.scaleway.com/en/docs/containers/kubernetes/reference-content/move-kubernetes-nodes-routed-ip/

Jul 15, 11:43 CEST
Oct 20, 2024

No incidents reported.

Oct 19, 2024

No incidents reported.

Oct 18, 2024
Postmortem - Read details
Oct 18, 17:12 CEST
Resolved - Post Mortem: 2024-10-18, Scaleway Elements Partial Loss of VPC Connectivity in FR-PAR-1

Incident resolved on 2024-10-18: https://status.scaleway.com/incidents/g3q3c4d43gkb

Oct 18, 16:56 CEST
Postmortem - Read details
Oct 18, 17:05 CEST
Resolved - This incident has been resolved.
Oct 18, 16:24 CEST
Update - We are continuing to monitor for any further issues.
Oct 18, 10:50 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
Oct 18, 10:30 CEST
Update - The service has been degraded since 08:20 CEST and was down between 09:49 and 10:04 CEST.
Our teams are working towards a solution. Thank you for your patience.

Oct 18, 10:17 CEST
Investigating - We are investigating VPC issues in fr-par-1.
Oct 18, 10:10 CEST
Oct 17, 2024
Resolved - All the machines are now back up
Oct 17, 16:44 CEST
Identified - We are in the process of powering the affected machines back on; 80% of the rack is back up at the moment.
Oct 17, 16:20 CEST
Investigating - Following a planned operation in fr-par-3, some Apple Silicon servers were unexpectedly powered off.
We are in the process of rebooting the affected machines.

IPs of affected machines: 51.159.120.2 to 51.159.120.97

Oct 17, 15:58 CEST
Resolved - This incident has been resolved.
Oct 17, 11:36 CEST
Investigating - Cluster scaling issues in WAW for clusters with an ENT1-L pool on WAW2.
As a workaround, customers can remove the ENT1-L pools.

Oct 17, 10:56 CEST
Completed - The scheduled maintenance has been completed.
Oct 17, 09:30 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Oct 17, 09:00 CEST
Scheduled - Due to maintenance to update the system, hosting services on the 1004 platform will be unavailable for the duration of the maintenance.

We apologize for any inconvenience caused.

Oct 15, 17:22 CEST
Oct 16, 2024
Resolved - This incident has been resolved.
Oct 16, 12:28 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
Oct 10, 08:50 CEST
Investigating - We are currently investigating issues with our mail transfer and forwarding service.

(Direct mailing remains operational)

Oct 8, 11:56 CEST
Oct 15, 2024
Resolved - This incident has been resolved.
Oct 15, 22:34 CEST
Monitoring - A fix has been implemented; the backups have been available again since 3rd October.
Oct 7, 12:43 CEST
Investigating - The daily mail backup process is blocked and recent mail backups are unavailable for some of our classic hosting customers. This does not affect website or database backups. Our teams are currently investigating.
Sep 30, 16:48 CEST
Oct 14, 2024
Completed - The scheduled maintenance has been completed.
Oct 14, 08:00 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Oct 7, 08:00 CEST
Scheduled - Maintenance is scheduled for your managed Kubernetes clusters to automatically migrate from NAT IPs to Routed IPs. This action will only affect clusters which have not been manually migrated by September 30th.

Please note that node public IP addresses that are not flexible will change.

Impact: ~1 minute downtime for single-node clusters (no downtime if multiple nodes).
For more information and guidance on manually migrating to Routed IPs, please refer to our documentation below.

https://www.scaleway.com/en/docs/containers/kubernetes/reference-content/move-kubernetes-nodes-routed-ip/

Jul 15, 11:40 CEST
Oct 13, 2024

No incidents reported.

Oct 12, 2024

No incidents reported.

Oct 11, 2024

No incidents reported.

Oct 10, 2024
Oct 9, 2024
Resolved - This incident has been resolved.
Oct 9, 17:56 CEST
Investigating - Some users may experience instabilities.
Our team is currently investigating the issue.

Oct 9, 16:43 CEST
Resolved - This incident has been resolved.
Oct 9, 09:49 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
Oct 7, 11:40 CEST
Investigating - We are currently investigating this issue.
Oct 7, 11:26 CEST
Oct 8, 2024
Completed - The scheduled maintenance has been completed.
Oct 8, 00:15 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Oct 7, 00:15 CEST
Scheduled - Creating NAT-enabled Instances (with the parameter routed_ip_enabled=false) is no longer supported as of 7th October.
If you have any scripts calling the API directly to create Instances with NAT IPs, please switch to Routed IPs. If you are using our CLI or Terraform provider, please update to the latest version to support these changes.
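For scripts that build the Instance-creation request body themselves, the change amounts to no longer sending `routed_ip_enabled=false`. A hedged sketch of what that might look like: only the `routed_ip_enabled` field comes from this announcement; the other field names are illustrative placeholders, not a verified copy of the Instance API schema.

```python
def make_server_payload(name, commercial_type, image, routed_ip=True):
    """Build a JSON body for an Instance-creation call.

    `routed_ip_enabled` is the parameter named in the announcement;
    `name`, `commercial_type` and `image` are placeholder fields
    assumed for this sketch.
    """
    if not routed_ip:
        # NAT IPs (routed_ip_enabled=false) are rejected by the API
        # as of 7th October, so fail fast in the script instead.
        raise ValueError(
            "routed_ip_enabled=false is no longer supported; "
            "migrate this script to Routed IPs"
        )
    return {
        "name": name,
        "commercial_type": commercial_type,
        "image": image,
        # Explicitly request a routed IP.
        "routed_ip_enabled": True,
    }
```

Failing fast on the deprecated option surfaces the needed migration at the call site rather than as an opaque API error.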

Sep 17, 16:25 CEST