Some systems are experiencing issues

About This Site

Maintenance operations and incidents affecting our services, networks and datacenters can be followed here.

Scheduled Maintenance
[RPN Network] maintenance - PAR > AMS - Scheduled Maintenance

Region : PAR/AMS

Time : from April 29th, 2019 at 20:00 UTC to April 30th, 2019 at 04:00 UTC

Maintenance will be carried out on our network link between AMS and Paris. You might encounter some disconnections through the RPN network, but only between AMS and PAR.

Start : Maintenance to start in the window 29-April-2019 2000Z - 30-April-2019 0400Z

Duration : 2 hours

[DC3 Optical link maintenance]

Region : PAR

Time : from April 30th, 2019 at 06:00 UTC

Maintenance will be carried out on 2 DC3 optical links. This will impact servers in the following racks: Room 31 rack E13 and Room 32 rack A7.

There will be two short 2-minute disconnections for the servers listed above.

Start : April 30th, 2019 at 0600Z 0800LT

Duration : two 2-minute disconnections within a 1-hour window.

[RPN Network] maintenance - PAR > AMS - Scheduled Maintenance

Region : PAR/AMS

Time : from May 3rd, 2019 at 21:00 UTC to May 4th, 2019 at 04:00 UTC

Maintenance will be carried out on our network link between AMS and Paris. You might encounter some disconnections through the RPN network, but only between AMS and PAR.

Start : May 3rd, 2019 at 2100Z 2300LT

Duration : 7 hours

Past Incidents

Thursday 24th January 2019

[NETWORK] Scheduled Maintenance on B2B Network, scheduled 3 months ago

Our engineers performed maintenance on our B2B network.

Start : 24-Jan-2019 1300Z 1400LT
Ending : 24-Jan-2019 1600Z 1700LT

Duration : 180 minutes

DC2 Network [Network] PXE error

There is an ongoing incident on our installation and rescue system. This outage is caused by a PXE error; our teams are working on identifying and fixing the issue.

===================

24/01/19 2130Z (2230LT)

Our network team took the necessary action. Everything is back to normal.

24/01/19 2015Z (2115LT)

Issue has been escalated to local team

Wednesday 23rd January 2019

Mail [Webmail] Graphic interface issue

There is a UI issue with our new webmail interface. Some features may be unavailable (filters, etc.). Incoming and outgoing mail traffic is not impacted.

===================

23/01/19 1650Z (1750LT)

The product team performed a rollback. Everything should be back to normal. Please contact our assistance if you are still encountering an issue.

23/01/19 1630Z (1730LT)

The product team has identified the root cause and is working on a fix

23/01/19 1610Z (1710LT)

Issue has been escalated to local team

DC3 DHCP issue on DC3/DC5

Hello,

There is a DHCP issue affecting our services, servers may not obtain IPv4 and IPv6 leases. Any server rebooted or installed won't be able to reconnect as long as the issue persists. Our teams are working on a solution.

Next update by 23.01.19 0900Z (1000LT)

===================

23.01.19 0614Z (0714LT)

The issue is now fixed

23.01.19 0200Z (0300LT)

Certain servers from DC2 may also be impacted.

23.01.19 0100Z (0200LT)

An issue was identified and further actions are needed in order to fully restore impacted servers.

23.01.19 0000Z (0100LT)

Issue has been escalated to local team

Tuesday 22nd January 2019

[DC5] Scheduled Maintenance on our Infrastructure, scheduled 3 months ago

Our teams will carry out maintenance on our infrastructure at 3:30 pm.

We will keep you informed if we have more information.

AMS1 Network [Network] Amsterdam saturation

We are currently facing network saturation in AMS.
Our network team is working on a fix as soon as possible.

===================

22/01/19 1525Z (1625LT)

The issue has been fixed. If you are still encountering network issues, please contact our assistance.

22/01/19 1515Z (1615LT)

The product team has identified the root cause and working on a fix

22/01/19 1510Z (1610LT)

Issue has been escalated to local team

DC3 Network [Network] Link Issue France-IX

Our partner FRANCE-IX has identified a network issue in Paris.
They are currently working to fix this issue as soon as possible.
In the meantime, we will re-route traffic to avoid FRANCE-IX until the issue is fixed on their end.
Next update once the re-routing is done.

===================

23/01/19 1050Z (1150LT)

The issue has been fixed by France-IX and our team. Thank you for your cooperation.

22/01/19 1540Z (1640LT)

Re-routing is done. Everything is back to normal. If you are still encountering network issues in Paris, please contact our assistance.

22/01/19 1500Z (1600LT)

Re-routing is ongoing.

22/01/19 1450Z (1550LT)

Issue has been escalated.

PAR1 Compute API [API] Maintenances on Scaleway's API

[API] Two maintenance operations on Scaleway's API are planned.

The first one will be at 1400LT for the Paris API and the second one will be at 1500LT for the Amsterdam API.

Both maintenances will last for a maximum of 5 minutes.

Compute Nodes [Network] Network instability

We are experiencing some network instability in the Paris region.

Symptoms : Network instability

Our engineering team is currently investigating the issue.

We will provide additional information through this status as soon as we have more to share.

UPDATE: This problem is now fixed.

DC2 Room 101 - Zone 101 - Rack K33 Unavailable

Servers in DC2 - Room 101 - Zone 101 - Rack K33 are currently unavailable, we are investigating the issue.

We will update this status as soon as we have more details to share.

UPDATE: The switch is currently rebooting

UPDATE 2: The switch needs to be replaced. An intervention has been launched

UPDATE 3: The second switch has gone down during the intervention. Blocks 1 and 2 are unavailable

UPDATE 4: Block 1 is now available.

UPDATE 5: Services are now back online, thank you for your patience

Console [DEDIBOX] Issue with tasks (RAID configurations, few installs)

We are currently experiencing an issue with tasks pending on servers. It prevents RAID configurations from finishing, as well as a few installations such as ESXi.

Our team has found the root cause of the issue and is currently working to resolve it as quickly as possible.

If you are experiencing this issue, we invite you to restart the installation as soon as the issue is marked resolved by our team.

=======================

The issue is fixed, you may restart all pending tasks from your console.

Monday 21st January 2019

No incidents reported

Sunday 20th January 2019

No incidents reported

Saturday 19th January 2019

Compute Nodes AMS - Node stuck on ARM64 range

We have identified an issue regarding some ARM64 Instances for Amsterdam region. Symptoms : node blocked on “rebooting server” or “Unable to reboot”

Our engineering team is currently investigating the issue.

We will provide additional information through this status as soon as we have more to share. In the meantime, our support team is available to answer any further questions.

Thank you for your patience

UPDATE: The issue is now fixed; the instances are up

Friday 18th January 2019

RPN services [RPN] RPNv2 group - outsourcing error

There is an ongoing issue with RPNv2 group creation. You may not be able to add some servers to a group, and the console will display the following error: "Error: The group contains incompatible servers: You can't add outsourced server"

Next update in two hours

===================

18/01/2019 2200Z (2300LT)

Issue has been fixed.

18/01/2019 1750Z (1850LT)

The product team has identified the root cause and is working on a fix

18/01/2019 1430Z (1530LT)

Issue has been escalated to local team

Databases [ODS] Instability on ODS

We have detected a hardware issue with one of our ODS servers, which may make the ODS service unavailable. All our teams have been alerted and are working on it.

It should be fixed now, but we will monitor the ODS service closely over the next few days.

Dedibackup Dedibackup delivery latency

Our team identified an issue regarding Dedibackup delivery. We are currently investigating the issue and will update this status as soon as we have more details to share.

Update :
Issue has now been fully fixed.
Feel free to contact our support team if you are still facing issues.