Scaleway
Monitoring - Racks F1 to F5 went down and recovered as planned; all switches in room 4-4 are currently up.
Jul 5, 19:36 CEST
Identified - Public switches on racks F1 to F5 are currently impacted by the maintenance.
This status will be updated as soon as connectivity is restored for related servers.

Jul 5, 18:50 CEST
Monitoring - All switches have now recovered; we will update this status if any other issue arises.
Jul 5, 17:41 CEST
Update - The RPN switch for rack A7 has recovered; work is still in progress for racks A1 and D1.
Jul 5, 17:39 CEST
Identified - The issue has been fixed for all public switches and most RPN switches.
Three RPN switches are still experiencing issues: racks A1, A7 and D1.

Jul 5, 17:33 CEST
Investigating - The following public switches are currently unavailable: racks E7 and E19.
The following RPN switches are also unavailable: racks A1, A7, C14, C15, C16, C17, D1 and D8.

We will update this status as soon as we have more information.

Jul 5, 17:23 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jul 5, 09:00 CEST
Scheduled - Following tests on the datacenter's generators, we have identified a malfunction of the DR circuit breaker (chain generator) at the DN / DR / DS triple changeover circuit breaker on TGBT C.

Following an expert evaluation by the manufacturer Schneider Electric, it was recommended to completely replace the triple interlock system between the DN / DR / DS circuit breakers and to carry out new tests, first off-load and then on-load.

To do this, we will have to shut down the TGBT of electrical chain C and switch its electrical load to the three other electrical chains (A, B and D) supplying the computer rooms.

Our “Hexacore” design allows this type of maintenance to be carried out while maintaining continuity of service for your installations, although their level of redundancy is downgraded to N.

The computer rooms concerned are as follows:

- Room 1
- Room 2
- Room 3 – Module 3-3
- Room 3 – Module 3-4
- Room 3 – Module 3-6
- Room 4 – Module 4-3
- Room 4 – Module 4-4
- Room 4 – Module 4-6
- Operator Room 1

The electrical load of your computer racks supplied on one of the two electrical channels by electrical chain C (TDHQ with a blue label) will be switched to the second electrical channel of the rack.

Make sure you have properly connected your dual-powered equipment to the two electrical channels available in each rack so that the switchover can take place safely:
- Your dual-powered equipment must be powered by both electrical channels (dual 2N supply).
- Your single-powered equipment must be powered from the STS channel.

Here are the different stages of the intervention:
- Day D: switching of the chain C load to the other chains.
- Day D+1: shutdown of TGBT C at 8 a.m., maintenance, recommissioning of TGBT C, and off-load tests.
- Day D+2: switching of the load back to chain C, then generator load tests.

These interventions will be led and supervised by the Scaleway Datacenter teams, together with the manufacturer and maintenance teams of Schneider Electric and Eaton, as well as our generator set maintainer, until their full completion.

The maintenance may be postponed to July 11-13 or July 18-20. If so, a new notification will be sent to you.

Update - We are continuing to work on a fix for this issue.
Jun 27, 16:41 CEST
Identified - Due to a memory problem, some control planes may crash. To solve this, our team will update parts of our infrastructure: we will add new servers and review some memory limits. Some operations have already been completed, and further maintenance operations are planned over the next two weeks.
Jun 27, 16:38 CEST
Monitoring - Last week we added 25% more compute capacity to our fr-par cluster, which has led to significant improvements in tail latency and more stable performance. To provide an even better service, we plan to double the compute capacity within 10 days. The team is monitoring carefully in the meantime.
Jun 22, 15:45 CEST
Identified - Services are partially degraded.
Our servers are showing highs and lows in workload, which may lead to slowness or timeouts for users.

Our team is still working on a durable fix for these issues; your data remains safe in the meantime.

May 25, 13:58 CEST
Monitoring - A fix has been deployed and customers appear to have regained their original performance.
We are still monitoring the situation at this time.

May 24, 13:35 CEST
Update - The issue also impacts our Container Registry service; you may encounter timeouts while pulling images.
May 24, 11:42 CEST
Investigating - We have noticed some issues with multi-AZ buckets.
You may experience an increase in latency with your buckets, as well as some 5xx errors.
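
If your workload is sensitive to these transient 5xx responses, client-side retries can smooth them over until the incident is resolved. A minimal sketch with boto3, assuming the fr-par S3 endpoint; the bucket name and credentials are placeholders:

```python
# Hedged sketch: enable boto3's standard retry mode so transient 5xx
# responses from Object Storage are retried automatically.
# Endpoint, credentials, and bucket below are placeholders for your own values.
import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.fr-par.scw.cloud",  # adjust to your region
    aws_access_key_id="SCW_ACCESS_KEY",          # placeholder
    aws_secret_access_key="SCW_SECRET_KEY",      # placeholder
    config=Config(retries={"max_attempts": 10, "mode": "standard"}),
)
s3.head_bucket(Bucket="my-bucket")  # retried on transient errors
```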

May 24, 11:26 CEST
Monitoring - The restore process is now fully functional again on nl-ams. It is also functional again on fr-par, though you should expect some slowdowns there.
Jun 6, 16:01 CEST
Identified - The restore process from nl-ams is operational again.
Our team is still working on the same operation for fr-par.

Jun 6, 15:24 CEST
Investigating - Following a problem detected by our teams, restores from the GLACIER storage class on fr-par and nl-ams have been unavailable since 03:13. Our teams are investigating.
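
For reference, once the restore service is available again, a restore can be requested through the standard S3 API. A minimal sketch with boto3; the endpoint, bucket and key are placeholders:

```python
# Hedged sketch: request a temporary restore of an object stored in the
# GLACIER storage class. All names below are placeholders.
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.fr-par.scw.cloud")
s3.restore_object(
    Bucket="my-bucket",
    Key="path/to/archived-object",
    RestoreRequest={"Days": 1},  # keep the restored copy for one day
)
# Poll head_object until the Restore header reports ongoing-request="false".
print(s3.head_object(Bucket="my-bucket", Key="path/to/archived-object").get("Restore"))
```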
Jun 6, 11:05 CEST
Update - We have identified that the IP address of our outbound SMTP service used by Webhosting Cloud offers (ESSENTIAL, PERFORMANCE, PREMIUM) has been blacklisted by Abusix.

This is currently being handled by our Trust & Safety team; we will update this status as soon as we have more information on this matter.
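
For context, DNS-based blocklists like Abusix's are queried by reversing an IPv4 address's octets and looking them up under the blocklist zone: an A record means the IP is listed. A minimal sketch of a generic lookup; the zone name and IP below are illustrative placeholders, not Abusix's actual zone:

```python
# Hedged sketch of a generic DNSBL lookup. An A record under the zone means
# the IP is listed; NXDOMAIN means it is not. Zone and IP are placeholders.
import socket

def is_listed(ip: str, zone: str = "dnsbl.example.org") -> bool:
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)
        return True
    except socket.gaierror:
        return False

print(is_listed("198.51.100.25"))
```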

May 19, 18:07 CEST
Investigating - We have identified that the IP address of our outbound SMTP service used by Webhosting Classic offers (PERSO, PRO, and BUSINESS) has been blacklisted by Abusix.

This is currently being handled by our Trust & Safety team; we will update this status as soon as we have more information on this matter.

Dec 8, 10:51 CET
Component status (uptime over the past 90 days):

Elements - AZ: Operational (100.0 % uptime)
- fr-par-1: Operational, 100.0 % uptime
- fr-par-2: Operational, 100.0 % uptime
- fr-par-3: Operational, 100.0 % uptime
- nl-ams-1: Operational, 100.0 % uptime
- pl-waw-1: Operational, 100.0 % uptime
- nl-ams-2: Operational, 100.0 % uptime

Elements - Products: Partial Outage (99.09 % uptime)
- Instances: Operational, 95.62 % uptime
- BMaaS: Operational, 100.0 % uptime
- Object Storage: Degraded Performance, 99.08 % uptime
- C14 Cold Storage: Partial Outage, 90.02 % uptime
- Kapsule: Degraded Performance, 99.94 % uptime
- DBaaS: Operational, 100.0 % uptime
- LBaaS: Operational, 100.0 % uptime
- Container Registry: Degraded Performance, 100.0 % uptime
- Domains: Operational, 100.0 % uptime
- Elements Console: Operational, 100.0 % uptime
- IoT Hub: Operational, 100.0 % uptime
- Account API: Operational, 100.0 % uptime
- Billing API: Operational, 100.0 % uptime
- Functions and Containers: Operational, 100.0 % uptime
- Block Storage: Operational, 100.0 % uptime
- Elastic Metal: Operational, 99.99 % uptime
- Apple Silicon M1: Operational, 99.83 % uptime

Dedibox - Datacenters: Under Maintenance (99.85 % uptime)
- DC2: Operational, 99.44 % uptime
- DC3: Under Maintenance, 100.0 % uptime
- DC5: Operational, 99.97 % uptime
- AMS: Operational, 100.0 % uptime

Dedibox - Products: Operational (99.95 % uptime)
- Dedibox: Operational, 99.88 % uptime
- Hosting: Operational, 99.89 % uptime
- SAN: Operational, 100.0 % uptime
- Dedirack: Operational, 100.0 % uptime
- Dedibackup: Operational, 100.0 % uptime
- Dedibox Console: Operational, 100.0 % uptime
- Domains: Operational, 100.0 % uptime
- RPN: Operational, 99.89 % uptime
- Miscellaneous: Operational, 100.0 % uptime
- Excellence: Operational, 100.0 % uptime
Scheduled Maintenance
Our Database team is scheduling an infrastructure upgrade on PostgreSQL RDB instances. This maintenance deploys the latest PostgreSQL minor versions: 14.4, 13.7, 12.11, 11.16 and 10.21.
We expect a short downtime on both classic and HA databases.
All instances will be updated progressively within a 2-hour window.

Impact: a few seconds of downtime on HA RDB instances; about one minute of downtime on standalone RDB instances.
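
If your application cannot tolerate even this short interruption, a simple reconnect loop will ride out the window. A minimal sketch with psycopg2; the DSN values are placeholders, not a real instance:

```python
# Hedged sketch: retry the connection for up to ~2.5 minutes so the client
# rides out the expected ~1 min downtime on standalone instances.
import time
import psycopg2

def connect_with_retry(dsn: str, attempts: int = 30, delay: float = 5.0):
    for attempt in range(attempts):
        try:
            return psycopg2.connect(dsn)
        except psycopg2.OperationalError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)

# Placeholder DSN; substitute your instance's host and credentials.
conn = connect_with_retry("host=my-rdb.example port=5432 dbname=app user=app password=secret")
```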

Posted on Jun 28, 15:25 CEST
We need to change the BGP/OSPF configuration on one of our routers in DC4.
Object restorations from C14 Cold Storage will be on hold for about 30 minutes during this intervention.

The maintenance will last from 09:30 to 11:00 (local time) and will not impact other products.

Posted on Jun 21, 17:36 CEST
Past Incidents
Jul 6, 2022

No incidents reported today.

Jul 5, 2022
Resolved - This incident has been resolved.
Jul 5, 19:44 CEST
Identified - We have detected that both the public and RPN switches are down in DC3 Room 4-3 Rack E18.
Servers in that rack currently have no public or private network access and are unreachable.

Jul 5, 19:20 CEST
Resolved - This incident has been resolved.
Jul 5, 19:44 CEST
Identified - We have detected that both the public and RPN switches are down in DC3 Room 4-3 Rack E19.
Servers in that rack currently have no public or private network access and are unreachable.

Jul 5, 19:21 CEST
Resolved - This incident has been resolved.
Jul 5, 19:38 CEST
Update - The public switch is back online; RPN is still unavailable at this time.
Jul 5, 19:35 CEST
Identified - We have detected that both the public and RPN switches are down in DC3 Room 4-3 Rack C19.
Servers in that rack currently have no public or private network access and are unreachable.

Jul 5, 19:22 CEST
Resolved - This incident has been resolved.
Jul 5, 19:34 CEST
Identified - We have detected that the public switch is down in DC3 Room 4-3 Rack C20.
Servers in that rack currently have no public network access.

Jul 5, 19:23 CEST
Resolved - This incident has been resolved.
Jul 5, 17:21 CEST
Investigating - We have detected that both the public and RPN switches are down in DC3 Room 4-4 Rack B3.
Servers in that rack currently have no public or private network access and are unreachable.

Jul 5, 16:53 CEST
Resolved - This incident has been resolved.
Jul 5, 13:15 CEST
Investigating - We have detected a switch down in DC3 Room 4-6 Rack F7.
Servers in that rack currently have no public network access and are unreachable.

Jul 5, 13:07 CEST
Resolved - This incident has been resolved.
Jul 5, 13:15 CEST
Identified - We have detected a switch down in DC3 Room 4-6 Rack F11.
Servers in that rack currently have no public network access and are unreachable.

Jul 5, 13:14 CEST
Resolved - This incident has been resolved.
Jul 5, 13:06 CEST
Identified - We have detected a switch down in DC3 Room 4-6 Rack F3.
Servers in that rack currently have no public network access and are unreachable.

Jul 5, 12:57 CEST
Resolved - This incident has been resolved.
Jul 5, 12:56 CEST
Identified - We have detected a switch down in DC3 Room 4-6 Rack E17.
Servers in that rack currently have no public network access and are unreachable.

Jul 5, 12:37 CEST
Resolved - This incident has been resolved.
Jul 5, 12:08 CEST
Investigating - We have detected a switch down in DC3 Room 4-6 Rack C14.
Servers in that rack currently have no public network access and are unreachable.

Jul 5, 11:39 CEST
Resolved - This incident has been resolved.
Jul 5, 12:08 CEST
Investigating - We have detected a switch down in DC3 Room 4-6 Rack C12.
Servers in that rack currently have no public network access and are unreachable.

Jul 5, 11:52 CEST
Resolved - This incident has been resolved.
Jul 5, 10:57 CEST
Investigating - We have detected a switch down in DC3 Room 4-6 Rack B7.
Servers in that rack currently have no public network access and are unreachable.

Jul 5, 10:46 CEST
Resolved - This incident has been resolved.
Jul 5, 10:33 CEST
Investigating - We have detected a switch down in DC3 Room 4-6 Rack A12.
Servers in that rack currently have no public network access and are unreachable.

Jul 5, 10:32 CEST
Jul 4, 2022
Resolved - This incident has been resolved.
Jul 4, 09:03 CEST
Update - Our Network team is still investigating the issue. We will get back to you as soon as we have more updates.
Jul 1, 10:19 CEST
Investigating - We have detected a switch down in DC2 Room 101 Rack G28.
Servers in that rack currently have no public network access and are unreachable.

Jul 1, 09:13 CEST
Jul 3, 2022

No incidents reported.

Jul 2, 2022

No incidents reported.

Jul 1, 2022
Resolved - This incident has been resolved.
Jul 1, 10:17 CEST
Update - The issue has been forwarded to our team for resolution.
Jul 1, 09:09 CEST
Investigating - We have detected a switch down in DC3 Room 4-4-6 Rack A13.
Servers in that rack currently have no public network access and are unreachable.

Jul 1, 09:07 CEST
Jun 30, 2022
Completed - The scheduled maintenance has been completed.
Jun 30, 19:00 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jun 30, 09:00 CEST
Scheduled - As part of the preparatory work, we are going to carry out a power supply modification on the racks listed below.

The electrical load of your computer racks supplied by electrical chain C (TDHQ with a blue label) will be switched to the second electrical channel of the rack during the modification.

At the end of the intervention, the computer racks will have recovered their two power feeds.

The list of the impacted racks:

A07, A11, A13, A17, A19
B10, B14
C01, C05, C09, C11
D15, D16, D17, D18, D19
E05, E06, E08, E09, E10, E14

Jun 10, 13:46 CEST
Resolved - This incident has been resolved.
Jun 30, 16:58 CEST
Identified - The switch is being replaced by our datacenter team.
Jun 30, 16:35 CEST
Investigating - We have detected a switch down in DC3 Room 4-4-6 Rack A19.
Servers in that rack currently have no public network access and are unreachable.

Jun 30, 16:30 CEST
Resolved - This incident has been resolved.
Jun 30, 14:24 CEST
Identified - The switch is being replaced by our datacenter team.
Jun 30, 14:15 CEST
Investigating - We have detected a switch down in DC3 Room 4-4-6 Rack A9.
Servers in that rack currently have no public network access and are unreachable.

Jun 30, 14:11 CEST
Resolved - This incident has been resolved.
Jun 30, 11:46 CEST
Identified - The switch is being replaced by our datacenter team.
Jun 30, 11:09 CEST
Investigating - We have detected a switch down in DC3 Room 4-4-6 Rack A7.
Servers in that rack currently have no public network access and are unreachable.

Jun 30, 11:08 CEST
Resolved - The switch is now back online, public connectivity is restored to servers in the rack.
Jun 30, 01:01 CEST
Investigating - We have detected a switch down in DC3 Room 4-4-6 Rack B6.
Servers in that rack currently have no public network access and are unreachable.

Jun 29, 23:59 CEST
Jun 29, 2022
Jun 28, 2022
Resolved - PHP version 5.6 stopped working on Pf44.
All customer websites hosted on this platform were unreachable and displayed an "internal server error" page.
The issue was promptly identified and solved after a few minutes.

Jun 28, 10:19 CEST
Jun 27, 2022
Resolved - Today between 12:30 and 13:00, we performed a maintenance operation to update some configurations of our etcd servers. Some nodes may have been set to "NotReady" due to an authentication problem.

The problem is now solved. Feel free to reach out to our support team if needed.
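
If you want to verify that your nodes recovered, their Ready condition can be checked through the Kubernetes API. A minimal sketch using the official Python client, assuming a valid kubeconfig for your Kapsule cluster:

```python
# Hedged sketch: print each node's Ready condition with the official
# Kubernetes Python client (pip install kubernetes).
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig for your cluster
for node in client.CoreV1Api().list_node().items:
    ready = next(c for c in node.status.conditions if c.type == "Ready")
    print(f"{node.metadata.name}: Ready={ready.status}")
```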

Jun 27, 12:30 CEST
Jun 26, 2022
Resolved - This incident has been resolved.
Jun 26, 09:58 CEST
Investigating - We have detected a switch down in DC3 Room 4-4-6 Rack A.
Servers in that rack currently have no public network access and are unreachable.

Our Network team is investigating.

Jun 25, 21:16 CEST
Jun 25, 2022
Jun 24, 2022

No incidents reported.

Jun 23, 2022
Resolved - This incident has been resolved.
Jun 23, 12:37 CEST
Monitoring - A fix has been implemented and we are monitoring the results.
Jun 16, 15:10 CEST
Investigating - We are currently experiencing an issue with the public IPv4 address of the S3 endpoint on nl-ams only. Our team is working on it.
Jun 16, 14:51 CEST
Resolved - The heat is now considered tolerable; we will reopen a status later if necessary.
Jun 23, 12:37 CEST
Monitoring - Our product and datacenter teams are mobilized to monitor our infrastructure.
This status will be updated should any event be caused by the current heat wave.

There is no impact on any service at the moment, systems are all properly cooled.

Jun 16, 16:34 CEST
Resolved - This incident has been resolved.
Jun 23, 12:36 CEST
Monitoring - Our engineering team found the source of the anomaly and deployed a fix on their side.
They are now monitoring the situation.

Jun 18, 15:50 CEST
Investigating - You may see some increase in tail latency and occasional 5xx errors.
Our engineering team is already working on the situation in the datacenter and is taking action as quickly as possible.

We will get back to you as soon as we have feedback on the situation.

Jun 18, 13:48 CEST
Resolved - The last platform (1004) is now back online and fully operational. Thank you for your patience and sorry for the extra inconvenience.
Jun 23, 12:27 CEST
Update - Maintenance is complete on platforms 1005 and 1006, but we ran into an unexpected issue on 1004.

Our team is still working to resolve the issue as quickly as possible. As soon as we have more information, we will update this status to keep you informed.

Thank you for your patience!

Jun 23, 10:14 CEST
Update - We are continuing to investigate this issue.
Jun 21, 09:42 CEST
Investigating - As part of the ongoing improvement of our Webhosting services, platforms 1004, 1005 and 1006 will be unavailable on June 23rd from 9:00 am to 9:10 am (Paris time) so that our teams can update them.
Jun 20, 13:36 CEST
Resolved - This incident has been resolved.
Jun 23, 11:42 CEST
Update - Only one switch has still not recovered; our team continues to work on it:

H22 block E

Jun 23, 10:50 CEST
Identified - The maintenance is complete and most servers are back online, except for the following:

N14 block K
N14 block N
G22 block J
G22 block M
G32 block F
G33 block C
H22 block E

Our team is still working to restore full access to them.

Jun 23, 10:41 CEST
Update - A maintenance operation will take place on the affected router on Thursday, June 23, 2022, starting at 9:00 am to fix the situation.
Loss of public connectivity is expected during the maintenance.

Jun 22, 18:34 CEST
Investigating - Our network teams have detected an anomaly on the router located in DC2, room 101.
This may affect the connectivity of the dedicated servers in racks N13 to N18, G20 to G37 and H20 to H22 in room 101 of DC2.

Our teams are actively working to stabilize this as soon as possible; we will keep you informed of any developments on this status as needed.

Jun 21, 18:19 CEST
Jun 22, 2022

Unresolved incident: [Elements] Performance issues on Multi-AZ Object Storage buckets.