Scaleway
Update - The situation is under control for now.
Jun 22, 15:59 CEST
Monitoring - Our storm basin, which is used to contain storm water when the sewers are full, is currently full due to heavy rain over the last few days. This creates a risk of flooding near the datacenter, so we are monitoring the situation to prevent such an issue from happening.
Jun 22, 14:00 CEST
Update - We are still experiencing partial unavailability of our C14 Cold Storage service.
Our product team is currently on site.
Jun 18, 16:58 CEST
Identified - Our product team has identified the issue and is working on restoring the service.
We are still facing partial unavailability of C14 Cold Storage in the "fr-par" region.
Jun 14, 16:03 CEST
Investigating - We have noticed service degradation on the C14 Cold Storage platform due to hardware issues.
Our product team is investigating in order to find a solution.
Jun 10, 09:58 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jun 17, 13:19 CEST
Scheduled - We will carry out maintenance on the following:

- Check of the VESDA system
- Check of the manual call points
- Check of the audible and visual alarms
- Check of the safety actuated devices
- Check of the alarm reports
- Check of the fire-suppression valves (outside the data rooms)
- Check of the smoke extraction system and extraction fans

This intervention does not present any particular risk of service interruption and will be performed during working hours.
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jun 14, 08:00 CEST
Scheduled - We will carry out the semi-annual preventive maintenance of CRAC units according to the schedule below.
CRAC units will be shut down one at a time, so as not to impact the cooling of the halls.

Each IT zone has a backup CRAC unit: every CRAC unit is duplicated.
Each MVLB hall has a main air handling unit (AHU).

The maintenance will be done according to the following schedule:
• MVLBs: 14/06/2021 to 18/06/2021
• Operator halls: 14/06/2021 and 18/06/2021
• IT hall n°4: 17/06/2021 and 18/06/2021
• IT hall n°3: 16/06/2021 and 17/06/2021
• IT hall n°2: 14/06/2021 and 15/06/2021
• IT hall n°1: 14/06/2021 and 15/06/2021

The maximum shutdown time per CRAC unit is defined as 1 hour.

As a result, we will experience a loss of cooling redundancy while each CRAC unit is under maintenance (max. 1 hour).
There will be no impact on the cooling of the facilities.
Component status (uptime over the past 90 days):

Elements - AZ: Operational (100.0 % uptime)
  fr-par-1: Operational (100.0 % uptime)
  fr-par-2: Operational (100.0 % uptime)
  nl-ams-1: Operational (100.0 % uptime)
  pl-waw-1: Operational (100.0 % uptime)
Elements - Products: Degraded Performance (99.94 % uptime)
  Instances: Operational (99.99 % uptime)
  BMaaS: Operational (100.0 % uptime)
  Object Storage: Operational (99.99 % uptime)
  C14 Cold Storage: Degraded Performance (100.0 % uptime)
  Kapsule: Operational (99.83 % uptime)
  DBaaS: Operational (99.83 % uptime)
  LBaaS: Operational (99.83 % uptime)
  Container Registry: Operational (100.0 % uptime)
  Domains: Operational (100.0 % uptime)
  Elements Console: Operational (99.97 % uptime)
  IoT Hub: Operational (99.83 % uptime)
  Account API: Operational (99.98 % uptime)
  Billing API: Operational (100.0 % uptime)
Dedibox - Datacenters: Operational (99.65 % uptime)
  DC2: Operational (99.74 % uptime)
  DC3: Operational (99.91 % uptime)
  DC5: Operational (98.98 % uptime)
  AMS: Operational (99.97 % uptime)
Dedibox - Products: Operational (99.18 % uptime)
  Dedibox: Operational (99.82 % uptime)
  Hosting: Operational (94.51 % uptime)
  SAN: Operational (100.0 % uptime)
  Dedirack: Operational (100.0 % uptime)
  Dedibackup: Operational (99.97 % uptime)
  Dedibox Console: Operational (100.0 % uptime)
  Domains: Operational (100.0 % uptime)
  RPN: Operational (99.83 % uptime)
Miscellaneous: Operational (100.0 % uptime)
  Excellence: Operational (100.0 % uptime)
Scheduled Maintenance
We are going to carry out the preventive maintenance of the air handling units across the whole site.
Maximum shutdown time per air handling unit (AHU): 60 min

Monday 28/06 - Rooms 01, 02 & 03
Tuesday 29/06 - Room 04 & TGBT A
Wednesday 30/06 - TGBT B & C
Thursday 01/07 - TGBT D, F1 & F2

As a result, we will experience a loss of redundancy during the preventive maintenance of each TGBT's air handling units.
There will be no impact on TGBT cooling.

The backup CRAC unit of the local TGBT will be started before each intervention on the affected air handling units.
Interventions are supervised by the maintenance provider, so an air handling unit can be restarted very quickly if needed.
Posted on May 27, 12:14 CEST
[DC2] Router upgrade S207-4 Jun 28, 09:30-10:30 CEST
We will proceed with an upgrade of router s207-4 at DC2.
Please note that no customer impact is expected.
Posted on Jun 25, 15:21 CEST
[DC5] Load shaving test Jul 7, 08:00 - Jul 8, 15:00 CEST
We are going to carry out the preventive electrical maintenance of the generators. The maintenance will be performed one electrical chain at a time, so that the remaining chains can take over the load in the event of an electrical incident on the utility power supply.

Maintenance will be carried out according to the following forecast schedule:
- GE LT1.1: 2021-07-07 (between 8:00 and 12:30 LT)
- GE LT1.2: 2021-07-07 (between 13:00 and 17:00 LT)
- GE BAT 1: 2021-07-08 (between 8:00 and 14:00 LT)

No generator will be left out of service in the absence of the maintenance company.

Impact: Degradation of the electrical redundancy.
Start: Wednesday, July 7th 2021 at 06:00 UTC (0800 LT)
Duration: up to 6 hours per generator
Posted on Jun 3, 12:11 CEST
We will be performing the preventive maintenance of the 48V UPSes in the operator room.
The maintenance will be carried out according to the following schedule:
- UPS A: 12/07 from 09:00 CEST to 17:00 CEST
- UPS B: 13/07 from 09:00 CEST to 17:00 CEST

Impact: degraded electrical redundancy; no impact on services is expected.
Posted on Jun 24, 13:55 CEST
We are going to carry out the preventive electrical maintenance of the generators.

The maintenance will be performed one electrical chain at a time, so that the super-rescue generators (chain F1) can take over the load in the event of an electrical incident on the utility power supply.

Maintenance will be carried out according to the following forecast schedule:

- Chain A: 16/08
- Chain B: 17/08
- Chain C: 18/08
- Chain D: 19/08
- Chain F1: 24/08
- Chain F2: 23/08

Every evening, the generators on which maintenance has been performed will be put back into service and an off-load generator test will be run. A load test of at least 30 minutes will be performed after each maintenance day. No generator will be left out of service in the absence of the maintenance company.
Posted on May 27, 17:39 CEST
Past Incidents
Jun 25, 2021
Resolved - This incident has been resolved.
Jun 25, 14:30 CEST
Investigating - We are noticing intermittent issues with the API again; it may currently be slow to respond or fail to respond to some requests.
Jun 24, 19:56 CEST
Monitoring - The situation has improved and the API is working again.
We're still monitoring the situation.
Jun 24, 17:59 CEST
Investigating - We are currently facing an issue with the Compute API on fr-par-1.
The API is unavailable and returns an error when called.
Jun 24, 17:36 CEST
Resolved - This incident has been resolved.
Jun 25, 12:35 CEST
Monitoring - The platform is now back online and websites are available again.
We're still monitoring the situation.
Jun 24, 19:58 CEST
Investigating - We are currently facing an issue with platform PF1001 of our cPanel hosting service.
Websites on that platform are currently unavailable.
Jun 24, 18:30 CEST
Jun 24, 2021
Completed - The scheduled maintenance has been completed.
Jun 24, 10:30 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jun 24, 09:30 CEST
Update - We will be undergoing scheduled maintenance during this time.
Jun 23, 23:08 CEST
Scheduled - We will proceed with an upgrade of router s207-3 at DC2.
Please note that no customer impact is expected.
Jun 23, 15:11 CEST
Jun 23, 2021
Resolved - The issue has been fully resolved by the product team.
Jun 23, 22:41 CEST
Monitoring - A fix has been implemented by our team.
We are currently monitoring the situation, please contact the assistance if you encounter any issue.
Jun 23, 22:24 CEST
Investigating - We are currently facing an issue with our API access, which returns a 502 Bad Gateway error on fr-par-1.
The issue has been escalated to the product team.
Jun 23, 22:11 CEST
Resolved - We experienced an issue with our registration and login system.
The issue has been fixed by our engineers.
Jun 23, 11:01 CEST
Jun 22, 2021
Resolved - This incident has been resolved, public connectivity is now back up in that rack.
Jun 22, 19:04 CEST
Investigating - We have detected switches down in DC2 Room 101 Rack M8.
Servers in that rack currently have no public network access and are unreachable.
Jun 22, 18:32 CEST
Completed - The scheduled maintenance has been completed.
Jun 22, 17:00 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jun 22, 10:00 CEST
Scheduled - We are going to replace the free-chilling energy meter that was damaged during the last intervention on the SSA PLC.
Installations will be switched to manual mode during the maintenance of the PLC.

No impact on services is expected.
Jun 18, 09:50 CEST
Completed - The scheduled maintenance has been completed.
Jun 22, 08:00 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jun 22, 07:30 CEST
Scheduled - We will be undergoing maintenance on the Compute database in fr-par-1.
No actions on instances will be possible during the maintenance, whether through the Console, the CLI, or Terraform.

Managed services may not be available during the maintenance.

Already running instances will not be affected and will continue working during the maintenance.
Jun 15, 11:06 CEST
Jun 21, 2021
Resolved - This incident has been resolved.
Jun 21, 18:38 CEST
Identified - There is currently an issue with NL-IX: some network peers were not reachable from 11:13 to 11:48 (UTC). Our network team has mitigated the issue and is working on a complete fix; traffic is not impacted.
Jun 21, 13:13 CEST
Jun 20, 2021

No incidents reported.

Jun 19, 2021

No incidents reported.

Jun 18, 2021
Resolved - When some load-balancer services were reloaded at 9:25 UTC, NAT rules were not announced. The resulting unavailability lasted about 30 seconds, during which the service was unreachable.

Some of our other products that depend on load balancers (e.g. Kubernetes Kapsule, Container Registry…) may have been temporarily unavailable following this short incident while they recovered from the loss of connection.

Please open a support ticket if you face an issue with a load balancer or have any questions.
Jun 18, 11:25 CEST
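Because the window was only about 30 seconds, clients that retry transient failures with a short exponential backoff would largely have ridden out a reload like this one. A minimal client-side sketch in Python, assuming a generic HTTP endpoint (the URL below is a placeholder, not a Scaleway address):

```python
import time
import urllib.error
import urllib.request

# Placeholder endpoint behind a load balancer; substitute your own service URL.
URL = "https://example.com/health"

def fetch_with_retry(url: str, attempts: int = 5, base_delay: float = 1.0) -> bytes:
    """Fetch `url`, retrying transient failures (connection errors, 502/503/504)."""
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.read()
        except urllib.error.HTTPError as exc:
            if exc.code not in (502, 503, 504):
                raise  # Other HTTP errors are not transient; surface them.
        except (urllib.error.URLError, TimeoutError):
            pass  # Connection refused/reset or timeout: treat as transient.
        time.sleep(base_delay * 2 ** attempt)  # 1 s, 2 s, 4 s, ... spans a ~30 s blip.
    raise RuntimeError(f"{url} still unreachable after {attempts} attempts")

if __name__ == "__main__":
    print(len(fetch_with_retry(URL)))
```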
Jun 17, 2021
Resolved - The switch has been replaced and configured. Servers are now responding properly.
Jun 17, 21:44 CEST
Identified - The public switch appears to be out of service; we are currently replacing it.
Jun 17, 20:24 CEST
Investigating - The public switch for the rack is currently unreachable; a datacenter technician is investigating.
Jun 17, 20:02 CEST
Resolved - The switches are now back online, connectivity in the racks is restored.
Jun 17, 13:04 CEST
Investigating - We have detected a switch down in DC3 Room 44 Racks C5 and C6.
Servers in those racks currently have no public network access and are unreachable.
Jun 17, 11:44 CEST
Resolved - This incident has been resolved.
Jun 17, 07:05 CEST
Investigating - We have detected a switch down in DC3 Room 34 Rack C8.
Servers in that rack currently have no private network access and are unreachable through RPN.
Jun 16, 15:04 CEST
Resolved - This incident has been resolved.
Jun 17, 00:47 CEST
Update - We are continuing to investigate this issue.
Jun 17, 00:47 CEST
Investigating - An issue with our connectivity towards DC5 (fr-par-2) has been detected since 11:39 UTC. There is no impact on production, but redundancy is lost for now; teams are investigating the cause.
Jun 16, 14:05 CEST
Jun 16, 2021
Resolved - The switch has been rebooted. Everything is back to normal.
Jun 16, 15:24 CEST
Investigating - The RPN switch for DC3 - Room 3-1 Rack E15 is currently down.
The service is unreachable and we are still investigating the situation.
Jun 16, 15:18 CEST
Jun 15, 2021
Resolved - The switch has been reinstalled and reconfigured in order to fix the issue.
It is now up and running and the network is stable. This incident is resolved.
Jun 15, 17:11 CEST
Investigating - The RPN switch for Room 3-1 Rack A18 is currently down.
Our network team is currently investigating this issue.
Jun 15, 14:15 CEST
Resolved - This incident has been resolved.
Jun 15, 01:25 CEST
Investigating - We are currently investigating an issue regarding the switches on the following racks:
- S45 E16
- S45 E17
- S45 E18
Servers connected to those switches have no public connectivity at the moment. We will update the status as soon as possible.

The issue also concerns the switches on racks S45 C1 to C3.

The datacenter has repaired s45-E16, E17, and E18.
Jun 15, 00:10 CEST
Jun 14, 2021
Completed - The scheduled maintenance has been completed.
Jun 14, 17:00 CEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jun 14, 11:01 CEST
Scheduled - We are going to replace the free-chilling energy meter that was damaged during the last intervention on the SSA PLC.
Installations will be switched to manual mode during the maintenance of the PLC.

No impact on services is expected.
Jun 14, 10:50 CEST
Resolved - The switch in D10 is now back online too; RPN is fully restored in the impacted racks.
Jun 14, 14:40 CEST
Identified - RPN switch in rack D18 is now back online.
We're still working on repairing the switch in D10.
Jun 14, 13:52 CEST
Investigating - We have detected down RPN switches in DC3 Room 31 Racks D10 and D18.
Servers in these racks currently have no private network access and are unreachable through RPN.
Jun 14, 10:47 CEST
Resolved - This incident has been resolved.
Jun 14, 09:19 CEST
Identified - The DNS resolvers of PAR-1 were updated (and announced over DHCP) more than a year ago.
The old resolvers:
* 10.1.31.38
* 10.1.31.39
* 10.1.94.8
* 10.1.94.9
will be decommissioned on June 1st.
Each identified customer has been contacted by email and asked to update their VM's configuration to use the new resolver: 10.194.3.3.
May 26, 10:50 CEST
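For reference, on a typical Linux VM the change amounts to swapping the resolver entries in /etc/resolv.conf; a minimal sketch of the file after the update, assuming it is not managed by DHCP or a local resolver daemon (if it is, the change belongs in that tool's configuration instead):

```
# /etc/resolv.conf on a PAR-1 VM, after removing the decommissioned
# resolvers (10.1.31.38, 10.1.31.39, 10.1.94.8, 10.1.94.9)
nameserver 10.194.3.3
```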
Jun 13, 2021

No incidents reported.

Jun 12, 2021

No incidents reported.

Jun 11, 2021
Resolved - The fire has been contained by the firefighters; there is no longer any risk for the datacenter.
Jun 11, 18:02 CEST
Monitoring - A building in DC2's neighborhood is currently on fire.
There is currently no immediate risk for the datacenter; firefighters are on site.

We are closely monitoring the situation.
Jun 11, 16:08 CEST