Scaleway
Monitoring - The necessary actions have been taken to fix the issue, and the situation seems to have stabilized.
As a quality measure, we will continue to monitor the situation on our side.
Nov 25, 17:54 CET
Investigating - The registry service still appears to be experiencing issues with push attempts.

This operation may yield internal server errors for some images.
Our product team is still investigating the root cause.
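While the root cause is being investigated, a possible client-side workaround is to retry failed pushes with a short delay. Below is a minimal, illustrative Python sketch that wraps docker push and retries on failure; the image name, attempt count, and delay are assumptions, not an official recommendation.

import subprocess
import sys
import time

def push_with_retry(image: str, attempts: int = 5, delay_s: float = 10.0) -> None:
    # Retry `docker push`, since pushes may transiently fail with a
    # 500 Internal Server Error during this incident.
    for attempt in range(1, attempts + 1):
        if subprocess.run(["docker", "push", image]).returncode == 0:
            print(f"Push succeeded on attempt {attempt}.")
            return
        print(f"Push failed (attempt {attempt}/{attempts}), retrying in {delay_s}s...")
        time.sleep(delay_s)
    sys.exit(f"Push still failing after {attempts} attempts.")

# Example (hypothetical namespace and tag):
# push_with_retry("rg.fr-par.scw.cloud/my-namespace/my-image:latest")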
Nov 20, 14:26 CET
Monitoring - A fix was fully implemented on November 19th at 10 PM UTC. We are no longer seeing any issues but will keep monitoring the results. We will close this status once the registry service's stability is fully confirmed.
Nov 20, 11:14 CET
Update - The issue is now limited to FR-PAR.

Our product team is still investigating to find the root cause.
Our Customer Excellence team remains available to address individual issues.
Nov 19, 11:56 CET
Investigating - We are currently experiencing a problem with our registry service.
Any action can lead to a 500 Internal Server Error.
Our team is aware of the problem, and we are working hard to resolve it.
Nov 19, 10:14 CET
Monitoring - Our product team found and fixed the issue; the FTP service should now accept connections again. We will monitor the situation for some time to ensure it remains stable, then close this status.
Nov 24, 16:36 CET
Investigating - We are currently observing connection issues with our FTP service, which is used to manage website content on our classic webhosting offers. Our product team has been notified and is currently investigating.
Nov 24, 16:24 CET
Update - Our Trust & Safety team is continuing to work on this issue and is making progress.
This status will be updated as soon as we have more details.
Nov 19, 18:07 CET
Investigating - We have identified that the IP address of our outbound SMTP service used by Webhosting Classic offers (PERSO, PRO, and BUSINESS) has been blacklisted by Abusix.

This is currently being handled by our Trust & Safety team; we will update this status as soon as we have more information on this matter.
Nov 5, 11:34 CET
Investigating - For various internal technical reasons, rescue mode is currently not available on "Stardust" instances.
This problem is temporary, and our team is working on it.
Nov 19, 14:45 CET
Investigating - In a continuous effort to modernize our infrastructure, we updated the DNS resolvers of AMS-1 (announced by DHCP) a few months ago.
The old resolvers, identified by:
- 10.6.30.8
- 10.6.30.9
will be decommissioned on November 16th, 2021.

Each identified customer has been contacted by email.

We invite you to switch to the new resolver IP (10.196.2.3) as soon as possible or, better, use DHCP to set the resolver IP address automatically.
If you require assistance, please create a ticket directly from the Scaleway console so that we can help.
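For Linux servers with a static DNS configuration, a quick way to check whether you are still using the deprecated resolvers is to inspect /etc/resolv.conf. The Python sketch below is purely illustrative and assumes a plain resolv.conf (not systemd-resolved or another local stub):

# Check /etc/resolv.conf for the deprecated AMS-1 resolvers.
DEPRECATED = {"10.6.30.8", "10.6.30.9"}
NEW_RESOLVER = "10.196.2.3"

with open("/etc/resolv.conf") as f:
    nameservers = [
        line.split()[1]
        for line in f
        if line.startswith("nameserver") and len(line.split()) > 1
    ]

stale = DEPRECATED.intersection(nameservers)
if stale:
    print("Deprecated resolver(s) in use: " + ", ".join(sorted(stale)))
    print(f"Switch to {NEW_RESOLVER}, or configure the interface to use DHCP.")
else:
    print("No deprecated AMS-1 resolvers configured.")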

Thank you for your trust.
Nov 8, 14:23 CET
Component status (uptime over the last 90 days):

Elements - AZ: Operational, 98.71 % uptime
  fr-par-1: Operational, 99.96 % uptime
  fr-par-2: Operational, 99.98 % uptime
  fr-par-3: Operational, 100.0 % uptime
  nl-ams-1: Operational, 93.62 % uptime
  pl-waw-1: Operational, 100.0 % uptime
Elements - Products: Operational, 99.95 % uptime
  Instances: Operational, 100.0 % uptime
  BMaaS: Operational, 99.71 % uptime
  Object Storage: Operational, 100.0 % uptime
  C14 Cold Storage: Operational, 100.0 % uptime
  Kapsule: Operational, 99.71 % uptime
  DBaaS: Operational, 100.0 % uptime
  LBaaS: Operational, 100.0 % uptime
  Container Registry: Operational, 99.93 % uptime
  Domains: Operational, 100.0 % uptime
  Elements Console: Operational, 100.0 % uptime
  IoT Hub: Operational, 100.0 % uptime
  Account API: Operational, 100.0 % uptime
  Billing API: Operational, 100.0 % uptime
  Functions and Containers: Operational, 99.93 % uptime
Dedibox - Datacenters: Operational, 99.97 % uptime
  DC2: Operational, 100.0 % uptime
  DC3: Operational, 99.96 % uptime
  DC5: Operational, 99.92 % uptime
  AMS: Operational, 100.0 % uptime
Dedibox - Products: Operational, 99.67 % uptime
  Dedibox: Operational, 99.71 % uptime
  Hosting: Operational, 98.31 % uptime
  SAN: Operational, 99.99 % uptime
  Dedirack: Operational, 100.0 % uptime
  Dedibackup: Operational, 99.99 % uptime
  Dedibox Console: Operational, 99.98 % uptime
  Domains: Operational, 100.0 % uptime
  RPN: Operational, 99.36 % uptime
Miscellaneous: Operational, 99.86 % uptime
  Excellence: Operational, 99.86 % uptime
Scheduled Maintenance
Maintenance is planned in DC2/DC3/DC5 to fix an underlying issue on a few switches, which we will need to reboot to apply the fix. The impacted racks are:

DC5 - Room 1 - A40 / A50 / B15 / B16 / B50 / C50 / B7
DC3 - Room 4-6 - A2 / A6
DC2 - Room 101 - L35

The impacted racks will lose private network connectivity during the time window. We expect a maximum of 10 minutes of unavailability per rack.

Servers will remain powered on and keep public network connectivity. Private network connectivity should come back automatically once the maintenance is finished, but if you are using RPNv1 you might need to renew your DHCP lease (see the sketch below).

Start: November 29th, 2021 - 08:00 UTC (09:00 CET)

Duration: 15 minutes
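If RPNv1 connectivity does not come back on its own after the reboot, renewing the DHCP lease on the private interface is usually enough. A minimal sketch, assuming a Linux server where the RPN interface is eth1 and dhclient is the DHCP client in use (both names are assumptions; adapt them to your setup, and run as root):

import subprocess

RPN_INTERFACE = "eth1"  # assumption: replace with your actual RPN interface name

# Release the current lease, then request a new one on the RPN interface.
subprocess.run(["dhclient", "-r", RPN_INTERFACE], check=True)
subprocess.run(["dhclient", RPN_INTERFACE], check=True)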
Posted on Nov 25, 11:31 CET
One of our transit providers for the PL-WAW1 region will perform hardware maintenance. Network disruptions may occur during this intervention.
Posted on Nov 24, 15:25 CET
[DC5] Load testing of generators Dec 1, 08:30-14:30 CET
As part of the corrective maintenance operations of the datacenter, we will carry out tests of degraded-mode operation of the generators.
The operation consists of simulating an electrical power cut to the LVMs and validating load recovery, as well as operation on the generator sets after modification of the GE PLC, according to the following provisional schedule:

• GE LT1.1: 11/15/21 (between 8:30 am and 3:00 pm)
• GE LT1.2: 11/16/21 (between 8:30 am and 3:00 pm)

Our teams, as well as the usual maintenance companies, will be mobilized to ensure operations proceed smoothly.
This intervention does not present any particular risk of service interruption and will be carried out during working hours.
Posted on Nov 22, 15:05 CET
Elements - Old API keys deletion Dec 13, 10:00-10:05 CET
Scaleway is working on new IAM features and will soon deploy a new system to manage tokens.
As a consequence, we exceptionally need to expire all API keys generated on or before July 19, 2020, effective 2021-12-13 09:00:00 UTC.

All impacted API keys are linked to your "default" project and can be viewed here: https://console.scaleway.com/project/credentials

If you are impacted, you have received or will receive either an email or a ticket with details about the operation.

You can already create new API keys using our simple tutorial here: https://www.scaleway.com/en/docs/console/my-project/how-to/generate-api-key/
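Once a replacement key is generated, you can verify it works before the cut-off by making any authenticated API call with it. A minimal, illustrative sketch follows; it assumes the new secret key is exported as SCW_SECRET_KEY and uses the Instance API in fr-par-1 as an arbitrary test endpoint:

import os
import requests

# Assumption: the newly created secret key is exported as SCW_SECRET_KEY.
secret_key = os.environ["SCW_SECRET_KEY"]

# Any authenticated call will do; here we list Instance servers in fr-par-1.
resp = requests.get(
    "https://api.scaleway.com/instance/v1/zones/fr-par-1/servers",
    headers={"X-Auth-Token": secret_key},
)
if resp.status_code == 200:
    print("New API key is valid.")
else:
    print(f"Check the key: the API returned HTTP {resp.status_code}.")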
Posted on Oct 13, 15:30 CEST
Past Incidents
Nov 28, 2021
Resolved - The issue has been fixed by our product team.

If you still have network issues, please try to reboot your server or send us a ticket.
Nov 28, 11:13 CET
Investigating - Some customers may experience a network outage in our DC2 datacenter.

Our teams are investigating on site.
Nov 28, 10:39 CET
Nov 27, 2021

No incidents reported.

Nov 26, 2021
Completed - The scheduled maintenance has been completed.
Nov 26, 16:00 CET
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 22, 09:00 CET
Scheduled - We are going to carry out annual maintenance on the inverters of the electrical chains of TGBT 1.1 and 1.2, then 2.1 and 2.2. The inverters of these electrical chains supply the racks in computer rooms 1 and 2.
The operations will be carried out by our maintenance provider under our permanent supervision, according to the following schedule:

Monday 11/22: LT1.1
Tuesday 11/23: LT1.2
Wednesday 11/24: LT2.1
Thursday 11/25: LT2.2
Friday 11/26: Control and report

Our teams, as well as the usual maintenance companies, will be mobilized to ensure operations proceed smoothly.

Impact: None
Duration: 7 hours per day, from 09:00 AM to 05:00 PM.
Oct 14, 14:47 CEST
Resolved - This incident has been resolved.
Nov 26, 15:06 CET
Investigating - We have detected a switch down in DC5 Room 1 Rack D26.
Servers in that rack currently have no public network access and are unreachable.
Nov 26, 14:04 CET
Nov 25, 2021

Unresolved incident: [REGISTRY] Internal Server issue when pushing/pulling.

Nov 24, 2021
Resolved - This incident has been resolved.
Nov 24, 19:32 CET
Investigating - Due to a technical problem, provisioning and upgrades of managed databases are unstable on fr-par1.
Rest assured that our engineers are working on a fix.
Nov 23, 15:08 CET
Completed - The scheduled maintenance has been completed.
Nov 24, 16:30 CET
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 24, 16:18 CET
Scheduled - To continuously improve the quality of service of our Object Storage, we are carrying out an operation in the fr-par region to update the internal backend of the service. You may encounter some errors, but they are temporary.
Maintenance will be performed between 14:30 and 16:30 UTC+1.
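If your applications access Object Storage during the maintenance window, enabling extra client-side retries should smooth over the temporary errors. A minimal sketch with boto3 against the fr-par endpoint; the bucket name and the credential environment variables are assumptions:

import os
import boto3
from botocore.config import Config

# Allow more retries than the default while the backend update is in progress.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.fr-par.scw.cloud",
    region_name="fr-par",
    aws_access_key_id=os.environ["SCW_ACCESS_KEY"],      # assumption: keys in env
    aws_secret_access_key=os.environ["SCW_SECRET_KEY"],
    config=Config(retries={"max_attempts": 10, "mode": "standard"}),
)

# Example call; "my-bucket" is a placeholder.
print(s3.list_objects_v2(Bucket="my-bucket").get("KeyCount", 0))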
Nov 24, 14:30 CET
Nov 23, 2021
Resolved - This incident has been resolved.
Nov 23, 11:28 CET
Investigating - We are currently encountering issues with domain renewal, where the Console does not allow the renewal of some domains.
The issue is under investigation.
Nov 23, 09:43 CET
Completed - The scheduled maintenance has been completed.
Nov 23, 10:00 CET
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 23, 09:00 CET
Scheduled - A maintenance is planned in DC5, Room 1, Rack B7 to reboot an RPN switch to fix an underlying issue. This is a follow-up to https://status.scaleway.com/incidents/l177j9nzffkz

The rack will lose private network connectivity during the time window. Note that we might need several reboots to apply all modifications and make sure everything is back in order.

Servers will remain powered on and keep public network connectivity. Private network connectivity should come back automatically once the maintenance is finished, but if you are using RPNv1 you might need to renew your DHCP lease.

Start: November 23rd, 2021 - 08:00 UTC (09:00 CET)

Duration: 1 hour
Nov 19, 16:18 CET
Completed - The scheduled maintenance has been completed.
Nov 23, 08:30 CET
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 23, 07:30 CET
Scheduled - We will be performing maintenance on the Compute database in PAR 1.

Actions on instances, either through the Console, CLI or Terraform, may be slower or fail with an error message.

Actions on managed services may also be impacted during the maintenance.

Already running instances will not be affected and will continue working during the maintenance.

Start: November 23rd, 2021 at 06:30 UTC
Duration: 1 hour
Nov 16, 16:23 CET
Nov 22, 2021
Nov 21, 2021
Resolved - This incident has been resolved by our provider, which experienced a fiber cut.
The link has been stable since the repair; no further issues are expected on that point.
Nov 21, 11:13 CET
Investigating - We have lost RPN connectivity between AMS and PAR. Public connectivity is not affected.

A case has been opened with our provider.
Nov 19, 16:27 CET
Nov 20, 2021
Resolved - We noticed issues with the public switch in DC5 Room 1 Rack C63, on which some flapping occurred.

After confirming it was not a one-time issue such as an unexpected reboot, our on-call datacenter technician found the QSFP optic to be degraded and therefore replaced it on November 21st at around 5 AM UTC.

If you have a machine in this rack that still experiences any kind of network degradation, please get in touch with our technical support to troubleshoot the issue. Thank you.
Nov 20, 05:30 CET
Nov 19, 2021
Resolved - This incident has been resolved.
Nov 19, 15:10 CET
Investigating - We have detected a switch down in DC2 Room 202-B Rack H11.
Servers in that rack currently have no public network access and are unreachable.
Nov 19, 13:29 CET
Completed - The scheduled maintenance has been completed.
Nov 19, 13:00 CET
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 15, 07:31 CET
Scheduled - As part of the preventive maintenance operations of DC5, we will perform the mechanical maintenance of the generators followed by an infrastructure redundancy test.

The operation consists of simulating an electrical supply cut-off to the main LV boards and validating load recovery as well as operation of generators after the mechanical maintenance of each group according to the following provisional schedule:

- GE LT1.1: 2021-11-15 (between 6:30 and 13:00 UTC)
- GE LT1.2: 2021-11-16 (between 6:30 and 13:00 UTC)
- GE LT2.1: 2021-11-17 (between 6:30 and 13:00 UTC)
- GE LT2.2: 2021-11-18 (between 6:30 and 13:00 UTC)
- GE BAT 1: 2021-11-19 (between 6:00 and 12:00 UTC)

Our teams, as well as the usual maintenance companies, will be involved to ensure operations are handled properly. This intervention does not present any particular risk of service interruption and will be carried out during working hours.

In the event of an electrical cut-off from the HV network during the operation, ongoing maintenance will be interrupted so that the generator set can restart automatically. The load will be supplied by the batteries of the electrical chain's inverter while the maintenance provider falls back. Generators whose maintenance is not in progress will automatically resume carrying the electrical load.
Oct 11, 17:57 CEST
Nov 18, 2021
Resolved - This issue only impacted certain customers. If you have any problems, please contact support.
Nov 18, 17:08 CET
Investigating - 500 errors are being returned to clients.
Our teams are currently working to solve this problem as soon as possible.
Nov 18, 12:59 CET
Resolved - This incident has been resolved.
Nov 18, 13:33 CET
Investigating - We have detected a switch down in DC5 Room 1 Rack B63.
Servers in that rack currently have no private network access and are unreachable through RPN.
Nov 18, 13:26 CET
Nov 17, 2021
Resolved - This incident has been resolved.
Nov 17, 21:51 CET
Update - We are continuing to investigate this issue.
Nov 17, 21:50 CET
Update - We are continuing to investigate this issue.
Nov 17, 21:32 CET
Investigating - Compute Instances are failing to boot in PAR1 and AMS.
Block Storage snapshots do not appear to be processed.
Our teams are working to resolve this swiftly.
Nov 17, 21:26 CET
Resolved - This incident has been resolved.
Nov 17, 13:46 CET
Update - We are continuing to investigate this issue.
Nov 17, 07:39 CET
Investigating - We are currently experiencing delays with our Registry service when pushing images in the FR-PAR region.
Our product team is currently investigating.
Nov 16, 22:22 CET
Resolved - This incident has been resolved.
Nov 17, 13:41 CET
Investigating - We have detected a switch down in DC5 Room 1 Rack C65.
Servers in that rack currently have no private network access and are unreachable through RPN.
Nov 17, 12:56 CET
Nov 16, 2021
Resolved - This incident has been resolved.
Nov 16, 17:36 CET
Investigating - We are currently facing issues with the domain search function on our Dedibox website and the Dedibox Console.
Domain search queries might time out.
Nov 16, 16:14 CET
Resolved - This incident has been resolved.
Nov 16, 15:06 CET
Update - We are continuing to investigate this issue.
Nov 16, 15:05 CET
Investigating - We are encountering slow API responses in FR-PAR.
Our product teams are investigating; we will update this status as soon as we have more information.
Nov 16, 15:04 CET
Completed - The scheduled maintenance has been completed.
Nov 16, 10:30 CET
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 16, 10:00 CET
Scheduled - Due to internal service updates, the public abuse form will not be available during this maintenance window. The form will return once the maintenance is complete.
Nov 15, 15:56 CET
Completed - The scheduled maintenance has been completed.
Nov 16, 08:30 CET
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 16, 07:32 CET
Scheduled - We will be performing maintenance on the Compute database in PAR 1.

Actions on instances, either through the Console, CLI or Terraform, may be slower or fail with an error message.

Actions on managed services may also be impacted during the maintenance.

Already running instances will not be affected and will continue working during the maintenance.

Start: November 16th, 2021 - 06:30 UTC (07:30 CET)
Duration: 1 hour
Nov 15, 09:52 CET
Resolved - Due to an internal operation, the compute API was not available from 8:15 am to 9 am (Paris time). The problem has been fixed by our teams.
Nov 16, 08:15 CET
Nov 15, 2021
Completed - The scheduled maintenance had to be postponed to November 16th. A new maintenance window will therefore be created.
Nov 15, 08:30 CET
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Nov 15, 08:00 CET
Scheduled - We will be performing maintenance on the Compute database in PAR 1.

Actions on instances, either through the Console, CLI or Terraform, may be slower or fail with an error message.

Actions on managed services may also be impacted during the maintenance.

Already running instances will not be affected and will continue working during the maintenance.

Start: November 15th, 2021 - 07:00 UTC (08:00 CET)
Duration: 30 minutes
Nov 2, 17:41 CET
Nov 14, 2021
Resolved - The switch has been replaced and reconfigured; it is now back online.
You will need to renew your RPNv1 DHCP lease to reconnect your servers to the RPN.
Nov 14, 12:58 CET
Investigating - We have detected an RPN switch down in DC3 Room 31 Rack D21.
Servers in that rack currently have no private network access and are unreachable through RPN.
Nov 14, 11:23 CET