Status impact color legend
  • Black impact: None
  • Yellow impact: Minor
  • Orange impact: Major
  • Blue impact: Maintenance
Investigating - We are experiencing RPN connectivity downtime and are currently investigating this issue.
Nov 13, 2024 - 16:32 CET
Monitoring - The fix has been applied; jobs should now run normally.

We are monitoring the situation.

Nov 12, 2024 - 12:36 CET
Identified - The issue has been identified and a fix will be deployed.
Nov 12, 2024 - 11:07 CET
Investigating - Jobs can no longer run in the cluster; they are stuck in the queued status.

Our team is actively working on restoring full functionality. Please bear with us as we address the issue to ensure reliable service.

Nov 12, 2024 - 10:49 CET
Investigating - pf-1010.whm.fr-par.scw.cloud/51.159.128.187 has been blacklisted by Microsoft.

We are currently working on it.

Nov 05, 2024 - 17:06 CET

About This Site

Welcome to the Scaleway Status website. Here, you can view the status of all Scaleway services across all products and availability zones (AZs). We are currently making a few adjustments to enhance your navigation and overall experience. Over the next couple of weeks, you will see some changes to the website. Our team is here to assist you, and we appreciate your patience.

Dedibox - Products Degraded Performance
Dedibox Operational
Hosting Operational
SAN Operational
Dedirack Operational
Dedibackup Operational
Domains Operational
RPN Degraded Performance
Dedibox Console Operational
Elements - Products Degraded Performance
Object Storage Operational
Serverless-Database Operational
Website Operational
Instances Operational
Scaleway Glacier Operational
Block Storage Operational
Elastic Metal Operational
Apple Silicon Operational
Kubernetes Kapsule Operational
Container Registry Operational
Private Network Operational
Load Balancer Operational
Domains Operational
Serverless Functions and Containers Operational
Jobs Operational
Databases Operational
IoT Hub Operational
Web Hosting Degraded Performance
Observability Operational
Transactional Email Operational
Network Operational
Account API Operational
Billing API Operational
Elements Console Operational
Elements - AZ Operational
fr-par-1 Operational
fr-par-2 Operational
fr-par-3 Operational
nl-ams-1 Operational
pl-waw-1 Operational
nl-ams-2 Operational
pl-waw-2 Operational
nl-ams-3 Operational
pl-waw-3 Operational
Dedibox - Datacenters Operational
DC2 Operational
DC3 Operational
DC5 Operational
AMS Operational
Miscellaneous Operational
Excellence Operational
Component status legend
  • Operational
  • Degraded Performance
  • Partial Outage
  • Major Outage
  • Maintenance
Past Incidents
Nov 13, 2024

Unresolved incident: [Network]-RPN router ex33-s44-2.rpn.dc3 control-plane failure.

Nov 12, 2024
Resolved - This incident has been resolved.
Nov 12, 15:17 CET
Identified - Servers in these racks (18 servers) are currently without network connectivity (public and RPN included).
Nov 12, 10:16 CET
Resolved - The issue has been resolved.
Nov 12, 12:50 CET
Monitoring - The fix has been applied and now we are monitoring the situation
Nov 8, 17:18 CET
Update - The fix has been applied; new jobs should be working normally.

However, running jobs may still see failed DNS requests or be terminated.

We are currently finishing the update, which should be complete this afternoon.

Nov 8, 12:15 CET
Identified - The issue has been identified and a fix will be deployed.

However, there will be limited network disruption during the deployment process.

Nov 8, 10:30 CET
Investigating - The DNS service is currently experiencing degradation, leading to intermittent request failures.

Our team is actively working on restoring full functionality. Please bear with us as we address the issue to ensure reliable service.

Nov 7, 17:38 CET
Nov 11, 2024
Resolved - This incident has been resolved.
Nov 11, 17:55 CET
Monitoring - A fix has been implemented. We are monitoring the issue.
Nov 11, 14:09 CET
Update - Queries may be slow or return a 500 response.
This impacts the Instance snapshot feature and Kapsule autoscaling.

Nov 11, 13:05 CET
Update - We are continuing to investigate this issue.
Nov 11, 12:17 CET
Investigating - Some pods and nodes are stuck in the creating state.

Our engineers are currently investigating the cause.

Nov 11, 11:59 CET
Nov 10, 2024

No incidents reported.

Nov 9, 2024

No incidents reported.

Nov 8, 2024
Resolved - This incident has been resolved.
Nov 8, 16:25 CET
Update - The root cause of this incident has been identified and resolved, times to attach a disk are back to normal. We are actively monitoring to ensure stability.
Nov 7, 16:18 CET
Monitoring - As of 10:05 UTC, a workaround has been implemented to resolve the issue. However, you may still experience longer-than-usual times when attaching a b_ssd volume to an instance.
If your volume is marked as attached to your Instance but is not visible in the OS, please detach and reattach the volume to your Instance to fix the issue.
We are still investigating the root cause, and we will provide further updates as soon as possible.
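
For reference, a minimal sketch of that detach/reattach workaround in Python, driving the Scaleway Instance API over HTTP. The endpoint shapes, the PATCH-on-volumes behaviour, and the SCW_SECRET_KEY handling are assumptions to verify against the API documentation; this is not an official remediation script.

import os
import requests

API = "https://api.scaleway.com/instance/v1/zones/fr-par-1"  # assumed zone
HEADERS = {"X-Auth-Token": os.environ["SCW_SECRET_KEY"]}     # assumed auth

def get_volumes(server_id):
    # Fetch the server's current volume map, e.g. {"0": {...}, "1": {...}}.
    r = requests.get(f"{API}/servers/{server_id}", headers=HEADERS)
    r.raise_for_status()
    return r.json()["server"]["volumes"]

def set_volumes(server_id, volumes):
    # Assumption: PATCHing the server with a new volume map detaches or
    # reattaches volumes accordingly.
    r = requests.patch(f"{API}/servers/{server_id}", headers=HEADERS,
                       json={"volumes": volumes})
    r.raise_for_status()

def detach_and_reattach(server_id, volume_id):
    volumes = get_volumes(server_id)
    # Resubmit only the fields needed to identify each volume.
    slim = {slot: {"id": v["id"], "volume_type": v["volume_type"]}
            for slot, v in volumes.items()}
    without = {slot: v for slot, v in slim.items() if v["id"] != volume_id}
    set_volumes(server_id, without)  # detach the invisible volume
    set_volumes(server_id, slim)     # reattach it at its original slot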

Nov 7, 11:59 CET
Investigating - We have detected a small proportion of b_ssd volumes stuck in hotsyncing during attach or detach on fr-par-1.
Nov 6, 16:12 CET
Resolved - The reboot occurred this afternoon at 3:06 PM (Paris time) and production is back to normal.
For information:
Room: s46
Cabinet: F2

Nov 8, 15:22 CET
Nov 7, 2024
Resolved - This incident has been resolved.
Nov 7, 17:50 CET
Investigating - The time to check a domain is abnormally long. Users who use the auto-configuration setting are unable to configure their domain.
Nov 7, 15:45 CET
Resolved - [14:25 UTC] Internal maintenance led to an incorrect network configuration making 3 servers unreachable by the Cluster.
[14:59 UTC] The configuration has been fixed and the Cluster has returned to a healthy state.

Preventive measures have been taken to prevent another mishandling during this kind of maintenance.

Impact: Cluster remained with degraded performance (slow I/O) and limited unavailability (2% of clients). Creation of new volumes was blocked until full cluster recovery.

Nov 7, 17:22 CET
Monitoring - The cluster is now healthy; we are still investigating.
Nov 7, 16:08 CET
Update - The cluster is now healthy; we are still investigating.
Nov 7, 16:06 CET
Investigating - Since 13:40 UTC, the block storage cluster has been experiencing difficulties following the crash of 2 servers.
Our team is mobilized to restore service as soon as possible.

Nov 7, 15:58 CET
Resolved - This incident has been resolved.
Nov 7, 12:03 CET
Identified - Since Nov 6th at 10 PM (FR), some databases have failed to start, and users are unable to connect to their databases. The issue is linked to the incident detailed at https://status.scaleway.com/incidents/n7k8t96f4cjy.
Nov 7, 10:48 CET
Nov 6, 2024
Resolved - This incident has been resolved.
Nov 6, 16:55 CET
Identified - The issue has been identified and a fix is being implemented.
Nov 6, 16:00 CET
Investigating - Manipulating buckets and objects through the Scaleway Console is currently failing; we are investigating.
Nov 6, 15:25 CET
Resolved - Servers behind this switch were unreachable for 2 to 3 minutes, from 10:05 to 10:08 (Paris time).
Nov 6, 11:00 CET
Investigating - We are currently investigating this issue.
Nov 6, 11:00 CET
Resolved - This incident has been resolved.
Nov 6, 10:56 CET
Monitoring - A fix has been implemented and we are monitoring the results.
Oct 29, 11:31 CET
Investigating - We have had network instability for several minutes; we are investigating.
Thank you for your patience and understanding.

Oct 29, 10:19 CET
Resolved - This incident has been resolved.
Nov 6, 10:34 CET
Monitoring - The problem has been identified and fixed. We will now put this device under observation for 2 hours.
Nov 5, 10:12 CET
Investigating - We have detected downtime on an RPN switch in DC5, room 1, rack C53-1; our team is currently investigating this issue.
Nov 5, 09:00 CET
Nov 5, 2024

Unresolved incident: [WEBHOSTING] Microsoft blacklist.

Nov 4, 2024
Resolved - This incident has been resolved.
Nov 4, 21:04 CET
Investigating - There is a high error rate on a router switchport connected to a Dedibox switch located in DC2 s203b i5.
There is packet loss on every server connected to the switch.

Nov 4, 18:10 CET
Resolved - This incident has been resolved.
Nov 4, 18:38 CET
Monitoring - A maintenance operation on one of the servers of a block storage cluster in PAR1 resulted in a bad network configuration.
This incident caused IOWaits and IOTimeouts on a small fraction of b_ssd class volumes.

Managed Databases and Cockpit production were impacted, as well as a small percentage of Instances using b_ssd class block volumes.

The server was shut down at 14:08 UTC; all IOWait/Timeout issues were fixed at that time.

We will continue to investigate and fix the server configuration before re-injecting it into this block storage cluster.

Nov 4, 15:30 CET
Investigating - Services using block storage may be partially unavailable.
Nov 4, 14:55 CET
Resolved - We've resolved an issue affecting our Serverless Functions and Containers products. Our platform uses Knative under the hood, and a bug in Knative was causing some services to be incorrectly marked as "Not Ready" after encountering temporary errors or crashes.
The root cause of this issue is a known bug in Knative, which is being tracked here: https://github.com/knative/serving/issues/15487.

Impact: If your service encountered a temporary error or crash, it may have been incorrectly marked as "Not Ready" and subsequently deleted by our system after a week. We apologize for any inconvenience this may have caused.

Resolution: We've deployed a temporary fix that prevents further services from being deleted due to this issue. This fix ensures that services will not be mistakenly garbage collected due to temporary errors or crashes. With this fix in place, the issue is now resolved.
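
As background on what "Not Ready" means here: Knative exposes readiness as a Ready condition on its serving.knative.dev/v1 Service objects. A minimal sketch of how that state can be inspected on a self-managed Knative cluster (the namespace and kubeconfig access are assumptions; Scaleway's managed platform does not expose its cluster this way):

from kubernetes import client, config

config.load_kube_config()  # standard kubeconfig; use load_incluster_config() in-cluster
api = client.CustomObjectsApi()

# Knative Services are custom resources in the serving.knative.dev/v1 group.
services = api.list_namespaced_custom_object(
    group="serving.knative.dev", version="v1",
    namespace="default",  # assumed namespace
    plural="services")

for svc in services["items"]:
    conditions = svc.get("status", {}).get("conditions", [])
    ready = next((c for c in conditions if c["type"] == "Ready"), None)
    # Services hit by the bug would sit here with Ready != "True".
    if ready is None or ready["status"] != "True":
        print(svc["metadata"]["name"],
              ready.get("reason") if ready else "no status")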

Nov 4, 14:00 CET
Resolved - This incident has been resolved.
Nov 4, 09:55 CET
Monitoring - A fix has been implemented and we are monitoring the results.
Oct 31, 17:47 CET
Update - We are continuing to work on a fix for this issue.
Oct 31, 17:29 CET
Update - We are continuing to work on a fix for this issue.
Oct 31, 17:21 CET
Identified - Impact on databases using Block Storage.
Oct 31, 17:08 CET
Investigating - Network instability disrupted several databases between 15:26 UTC and 15:50 UTC in fr-par.
Oct 31, 17:02 CET
Nov 3, 2024

No incidents reported.

Nov 2, 2024

No incidents reported.

Nov 1, 2024

No incidents reported.

Oct 31, 2024
Oct 30, 2024
Resolved - An issue affected the Serverless Function and Container request metrics displayed in the Serverless Functions and Serverless Containers Overview dashboards.
The metrics serverless_function_requests_per_second, serverless_container_requests_per_second, serverless_function_info, and serverless_container_info were no longer being sent to the Cockpit. Therefore, some panels showed incomplete data.
This issue began on October 28, 2024, at 10:00 UTC and was fixed on October 30, 2024, at around 9:50 UTC.
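
To illustrate how such a gap shows up, here is a sketch that queries one of the affected metrics over the incident window through a Prometheus-compatible API of the kind Cockpit data sources expose. The COCKPIT_URL base and bearer-token auth are assumptions; check the Cockpit documentation for the exact endpoint and header.

import os
import requests

COCKPIT_URL = os.environ["COCKPIT_URL"]    # assumed Prometheus-compatible base URL
HEADERS = {"Authorization": f"Bearer {os.environ['COCKPIT_TOKEN']}"}  # assumed auth

resp = requests.get(
    f"{COCKPIT_URL}/api/v1/query_range",   # standard Prometheus HTTP API path
    headers=HEADERS,
    params={
        "query": "serverless_function_requests_per_second",
        "start": "2024-10-28T10:00:00Z",   # incident start
        "end": "2024-10-30T10:00:00Z",     # shortly after the fix
        "step": "15m",
    },
)
resp.raise_for_status()
series = resp.json()["data"]["result"]
# An empty result, or series with missing samples inside the window,
# reflects the metrics that were not being sent to Cockpit.
print(f"{len(series)} series returned")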

Oct 30, 11:57 CET