Status impact color legend
  • Black impact: None
  • Yellow impact: Minor
  • Orange impact: Major
  • Blue impact: Maintenance
Identified - Our IP ranges are currently listed on the UCEPROTECT Level 3 (UCEPROTECTL3) blacklist.

Our engineers are currently working on resolving this.
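
For reference, listing status on UCEPROTECT Level 3 can be checked with a standard DNSBL query: reverse the address's octets and look up an A record under UCEPROTECT's published Level 3 zone, dnsbl-3.uceprotect.net. A minimal Python sketch (the IP shown is a documentation placeholder, not one of the affected ranges):

    import socket

    def is_listed_uceprotect_l3(ip: str) -> bool:
        # DNSBL convention: 198.51.100.1 -> 1.100.51.198.dnsbl-3.uceprotect.net
        query = ".".join(reversed(ip.split("."))) + ".dnsbl-3.uceprotect.net"
        try:
            socket.gethostbyname(query)  # any A record means the IP is listed
            return True
        except socket.gaierror:          # NXDOMAIN means it is not listed
            return False

    print(is_listed_uceprotect_l3("198.51.100.1"))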

Nov 18, 2024 - 08:16 CET
Monitoring - Since Friday morning (around 11:30am), our API Gateway had been experiencing instabilities in the fr-par zone. Service has since been restored by our engineers, who have been monitoring the situation since 2:55pm.

If you are still experiencing slowness, we invite you to create an incident ticket with our support team.

The Scaleway team thanks you for your cooperation and understanding!

Nov 15, 2024 - 15:59 CET
Monitoring - A fix has been implemented and we are monitoring the results.
Nov 14, 2024 - 20:36 CET
Update - We are continuing to investigate this issue.
Nov 14, 2024 - 19:30 CET
Update - We are continuing to investigate this issue.
Nov 14, 2024 - 19:17 CET
Investigating - Starting on 2024/11/14 17:47 UTC, clients might experience issues pushing/pulling images from rg.fr-par.scw.cloud due to a certificate issue.

As a result, users of Serverless Functions/Containers whose images are hosted on rg.fr-par.scw.cloud might also be impacted, as their functions/containers won't be able to start (unable to pull images). Impacts for Serverless Functions/Containers are:
- instances that cannot start (scale-up might be impossible)
- requests timing out or returning 5xx errors if no instances are available

We are currently investigating this issue, and apologize for the inconvenience.
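
If you want to confirm whether this affects you, the registry's TLS certificate can be inspected directly; a failing handshake against the system CA store points to the same root cause. A minimal Python sketch using only the standard library:

    import socket
    import ssl

    host = "rg.fr-par.scw.cloud"
    ctx = ssl.create_default_context()  # verifies the chain against system CAs
    try:
        with socket.create_connection((host, 443), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
                print("handshake OK, certificate expires:", cert["notAfter"])
    except ssl.SSLCertVerificationError as exc:
        print("certificate verification failed:", exc)  # symptom of this incident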

Nov 14, 2024 - 19:13 CET
Monitoring - The fix has been applied; jobs should now run normally.

We are monitoring the situation.

Nov 12, 2024 - 12:36 CET
Identified - The issue has been identified and a fix will be deployed.
Nov 12, 2024 - 11:07 CET
Investigating - Jobs can no longer run in the cluster; they are stuck in a queued status.

Our team is actively working on restoring full functionality. Please bear with us as we address the issue to ensure reliable service.

Nov 12, 2024 - 10:49 CET
Investigating - pf-1010.whm.fr-par.scw.cloud (51.159.128.187) has been blacklisted by Microsoft.

We are currently working on it.

Nov 05, 2024 - 17:06 CET

About This Site

Welcome to the Scaleway Status website. Here, you can view the status of all Scaleway services across all products and availability zones (AZs). We are currently making a few adjustments to enhance your navigation and overall experience. Over the next couple of weeks, you will see some changes to the website. Our team is here to assist you, and we appreciate your patience.

Dedibox - Products Partial Outage
Dedibox Partial Outage
Hosting Operational
SAN Operational
Dedirack Operational
Dedibackup Operational
Domains Partial Outage
RPN Operational
Dedibox Console Operational
Elements - Products Partial Outage
Object Storage Operational
Serverless-Database Operational
Website Operational
Instances Operational
Scaleway Glacier Operational
Block Storage Operational
Elastic Metal Operational
Apple Silicon Operational
Kubernetes Kapsule Operational
Container Registry Partial Outage
Private Network Operational
Load Balancer Operational
Domains Operational
Serverless Functions and Containers Partial Outage
Jobs Operational
Databases Operational
IoT Hub Operational
Web Hosting Operational
Observability Operational
Transactional Email Operational
Network Operational
Account API Operational
Billing API Operational
Elements Console Operational
Elements - AZ Partial Outage
fr-par-1 Partial Outage
fr-par-2 Operational
fr-par-3 Partial Outage
nl-ams-1 Partial Outage
pl-waw-1 Partial Outage
nl-ams-2 Partial Outage
pl-waw-2 Partial Outage
nl-ams-3 Partial Outage
pl-waw-3 Partial Outage
Dedibox - Datacenters Partial Outage
DC2 Partial Outage
DC3 Partial Outage
DC5 Partial Outage
AMS Partial Outage
Miscellaneous Operational
Excellence Operational
Component status legend
  • Operational
  • Degraded Performance
  • Partial Outage
  • Major Outage
  • Maintenance
Past Incidents
Nov 20, 2024

No incidents reported today.

Nov 19, 2024
Completed - The scheduled maintenance has been completed.
Nov 19, 19:00 CET
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 19, 07:00 CET
Scheduled - As part of our backbone resiliency upgrade plan, we are planning a network migration of our backbone infrastructure in PAR2 from 7AM to 7PM on Nov 19, 2024.
No impact is expected during this maintenance; feel free to contact the support team if you have any questions.

Nov 15, 14:32 CET
Nov 18, 2024
Completed - The scheduled maintenance has been completed.
Nov 18, 11:00 CET
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 18, 10:00 CET
Scheduled - Operating system and kernel update on pf-012
Nov 15, 15:36 CET
Completed - The scheduled maintenance has been completed.
Nov 18, 11:00 CET
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 18, 10:00 CET
Scheduled - Operating system and kernel update on pf-1007
Nov 14, 16:04 CET
Nov 17, 2024

No incidents reported.

Nov 16, 2024

No incidents reported.

Nov 15, 2024
Resolved - This incident has been resolved.
Nov 15, 05:31 CET
Investigating - The Chouchou switch in DC5 room 1 rack A39 is down.
We are currently investigating this issue.

Nov 14, 22:10 CET
Nov 14, 2024
Resolved - This incident has been resolved.
Nov 14, 17:45 CET
Investigating - We are currently investigating this issue.
Nov 14, 17:21 CET
Resolved - This incident has been resolved.
Nov 14, 10:45 CET
Investigating - We are experiencing RPN connectivity downtime; we are currently investigating this issue.
Nov 13, 16:32 CET
Nov 13, 2024
Nov 12, 2024
Resolved - This incident has been resolved.
Nov 12, 15:17 CET
Identified - Servers in these racks (currently 18 servers) are without network connectivity (public and RPN included).
Nov 12, 10:16 CET
Resolved - The issue has been resolved.
Nov 12, 12:50 CET
Monitoring - The fix has been applied and we are now monitoring the situation.
Nov 8, 17:18 CET
Update - The fix has been applied; new jobs should now work normally.

However, running jobs may still see failed DNS requests or be terminated.

We are finishing the rollout, which should be complete this afternoon.

Nov 8, 12:15 CET
Identified - The issue has been identified and a fix will be deployed.

However, there will be limited network disruption during the deployment process.

Nov 8, 10:30 CET
Investigating - The DNS service is currently experiencing degradation, leading to intermittent request failures.

Our team is actively working on restoring full functionality. Please bear with us as we address the issue to ensure reliable service.
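
While the degradation lasts, clients can smooth over intermittent failures by retrying resolution with exponential backoff and jitter. A generic client-side sketch (not an official workaround; the hostname is just an example):

    import random
    import socket
    import time

    def resolve_with_retry(host: str, attempts: int = 5) -> str:
        # Retry transient DNS failures, backing off 1s, 2s, 4s, ... plus jitter.
        for attempt in range(attempts):
            try:
                return socket.gethostbyname(host)
            except socket.gaierror:
                if attempt == attempts - 1:
                    raise
                time.sleep(2 ** attempt + random.random())

    print(resolve_with_retry("api.scaleway.com"))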

Nov 7, 17:38 CET
Nov 11, 2024
Resolved - This incident has been resolved.
Nov 11, 17:55 CET
Monitoring - A fix has been implemented. We are monitoring the issue.
Nov 11, 14:09 CET
Update - Queries may be slow or result in 500 responses.
This impacts the Instance snapshot feature and Kapsule autoscaling.

Nov 11, 13:05 CET
Update - We are continuing to investigate this issue.
Nov 11, 12:17 CET
Investigating - Some pods and nodes are stuck in the creating state.

Our engineers are currently investigating the cause.
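
To identify affected workloads on your side, you can list pods that have not left the Pending phase. A short sketch with the official Kubernetes Python client (assumes a kubeconfig for your Kapsule cluster; Pending is the closest standard phase to "creating"):

    from kubernetes import client, config

    config.load_kube_config()  # uses your local kubeconfig for the cluster
    v1 = client.CoreV1Api()

    # List pods across all namespaces still waiting to be scheduled or started.
    pending = v1.list_pod_for_all_namespaces(field_selector="status.phase=Pending")
    for pod in pending.items:
        print(pod.metadata.namespace, pod.metadata.name)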

Nov 11, 11:59 CET
Nov 10, 2024

No incidents reported.

Nov 9, 2024

No incidents reported.

Nov 8, 2024
Resolved - This incident has been resolved.
Nov 8, 16:25 CET
Update - The root cause of this incident has been identified and resolved, times to attach a disk are back to normal. We are actively monitoring to ensure stability.
Nov 7, 16:18 CET
Monitoring - As of 10:05 UTC, a workaround has been implemented to resolve the issue. However, you may still experience longer-than-usual times when attaching a b_ssd volume to an instance.
If your volume is marked as attached to your Instance but is not visible in the OS, please detach and reattach the volume to your Instance to fix the issue.
We are still investigating the root cause, and we will provide further updates as soon as possible.
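
The detach/reattach cycle can also be performed through the Instances API instead of the console. A rough sketch, assuming the public api.scaleway.com Instance endpoint and an X-Auth-Token secret key; the server ID, the volume key, and the exact volume payload shape are placeholders to adapt from the API documentation:

    import requests

    API = "https://api.scaleway.com/instance/v1/zones/fr-par-1"
    HEADERS = {"X-Auth-Token": "SCW_SECRET_KEY"}        # placeholder credential
    SERVER_ID = "11111111-2222-3333-4444-555555555555"  # placeholder server ID

    # Read the server's current volume map, e.g. {"0": boot volume, "1": b_ssd}.
    server = requests.get(f"{API}/servers/{SERVER_ID}", headers=HEADERS).json()["server"]
    volumes = {key: {"id": vol["id"]} for key, vol in server["volumes"].items()}
    stuck = volumes.pop("1")  # placeholder key of the affected b_ssd volume

    # Detach: update the server with the volume removed from its map...
    requests.patch(f"{API}/servers/{SERVER_ID}", headers=HEADERS, json={"volumes": volumes})

    # ...then reattach it under the same key once the detach completes.
    volumes["1"] = stuck
    requests.patch(f"{API}/servers/{SERVER_ID}", headers=HEADERS, json={"volumes": volumes})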

Nov 7, 11:59 CET
Investigating - We have detected a small proportion of b_ssd volumes stuck in hotsyncing during attach or detach on fr-par-1.
Nov 6, 16:12 CET
Resolved - The reboot occurred this afternoon at 3:06 PM (Paris time) and production is back to normal.
For information:
Room: s46
Cabinet: F2

Nov 8, 15:22 CET
Nov 7, 2024
Resolved - This incident has been resolved.
Nov 7, 17:50 CET
Investigating - The time to check a domain is abnormally long. Users who rely on the auto-configuration setting are unable to configure their domains.
Nov 7, 15:45 CET
Resolved - [14:25 UTC] Internal maintenance led to an incorrect network configuration, making 3 servers unreachable by the cluster.
[14:59 UTC] The configuration has been fixed and the cluster has returned to a healthy state.

Preventive measures have been taken to prevent similar mishandling during this kind of maintenance.

Impact: the cluster remained in a degraded-performance state (slow I/O) with limited unavailability (2% of clients). Creation of new volumes was blocked until full cluster recovery.

Nov 7, 17:22 CET
Monitoring - The cluster is now healthy; we are still investigating.
Nov 7, 16:08 CET
Update - The cluster is now healthy; we are still investigating.
Nov 7, 16:06 CET
Investigating - Since 13:40 UTC, the block storage cluster has been experiencing difficulties following the crash of 2 servers.
Our team is mobilized to restore service as soon as possible.

Nov 7, 15:58 CET
Resolved - This incident has been resolved.
Nov 7, 12:03 CET
Identified - Since Nov 6th at 10 PM (FR), some databases have failed to start, and users are unable to connect to their databases. The issue is linked to the incident detailed at https://status.scaleway.com/incidents/n7k8t96f4cjy.
Nov 7, 10:48 CET
Nov 6, 2024
Resolved - This incident has been resolved.
Nov 6, 16:55 CET
Identified - The issue has been identified and a fix is being implemented.
Nov 6, 16:00 CET
Investigating - Manipulating buckets and objects through the Scaleway Console is currently facing issues; we are investigating.
Nov 6, 15:25 CET
Resolved - Servers behind this switch were unreachable for 2 to 3 minutes, from 10:05 to 10:08 (Paris time).
Nov 6, 11:00 CET
Investigating - We are currently investigating this issue.
Nov 6, 11:00 CET
Resolved - This incident has been resolved.
Nov 6, 10:56 CET
Monitoring - A fix has been implemented and we are monitoring the results.
Oct 29, 11:31 CET
Investigating - We experienced network instability for several minutes; we are investigating.
Thank you for your patience and understanding.

Oct 29, 10:19 CET
Resolved - This incident has been resolved.
Nov 6, 10:34 CET
Monitoring - The problem has been identified and fixed. We will now put this device under observation for 2 hours.
Nov 5, 10:12 CET
Investigating - We have detected downtime on the RPN switch in DC5 room 1 rack C53-1; our team is currently investigating this issue.
Nov 5, 09:00 CET