All systems are operational

About This Site

Maintenance operations and incidents affecting our services, networks and datacenters can be followed here.

Scheduled Maintenance
[DC3] - Switch Maintenance - Room 4-3 Rack E21

We will need to replace the switch in DC3 Room 4-3 Rack E21.

The replacement will last about 15 minutes.

Servers will be rebooted on our side to populate the DHCP snooping database.

During this time, the servers in this rack will not be reachable over the public network.

Start : 23-Jul-2019 0800Z (1000LT)

Duration : 15 minutes
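For context, DHCP snooping builds a table of MAC/IP/lease bindings by watching DHCP exchanges; a freshly installed switch starts with an empty binding table, and features such as Dynamic ARP Inspection or IP Source Guard that rely on this table will block traffic until it is repopulated, which is why the servers behind the new switch are rebooted to trigger fresh DHCP exchanges. A minimal sketch of the relevant configuration, assuming Cisco IOS syntax and hypothetical VLAN and interface numbers:

```
! Enable DHCP snooping globally and on the rack's VLAN (VLAN 100 is hypothetical)
ip dhcp snooping
ip dhcp snooping vlan 100
! Mark only the uplink towards the DHCP server as trusted;
! server-facing ports stay untrusted and rely on the binding table
interface GigabitEthernet1/0/48
 ip dhcp snooping trust
```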

Preventive maintenance due to current heatwave

On 24 and 25 July, temperatures are forecast to reach record highs of 42°C, the highest recorded in the last 20 years.
This will be a challenge for our equipment and teams, but everyone is mobilized, and exceptional means have been put in place to keep everything up and running.

This status has been created preemptively and will be updated on July 24th as needed.

Risk evaluation per DC :

DC2 : High
DC3 : Very high
DC4 : No risk
DC5 : Low risk

[Paris core router bb1.dc3] Router upgrade

Location : Paris - core router bb1.dc3

We are planning to upgrade the router firmware to the latest version recommended by Cisco. The maintenance will be done in two steps: first to prepare the upgrade, then to apply it.

Impact : This may lead to rerouting and increased latency during the maintenance window. All Paris DCs are impacted.

Preparation : 30-Jul-2019 0400Z (0600LT) - duration 2h
Full upgrade : 31-Jul-2019 0200Z (0400LT) - duration 5h

UPDATE: Maintenance was planned for the 23rd and 24th; due to high temperatures, we are moving it to the 30th and 31st.

TGBT C – DRC control relays replacement

Following a fault on the TGBT C DRC control observed on June 12, and the identification of the fault on June 18, we will replace the PLC control relays and carry out a load test to confirm that the installation is fully operational.

Impact : This intervention does not present any particular risk of service interruption and will be carried out during working hours.

Start : 19-Aug-2019 1200Z (1400LT)

Duration: 4 hours

DC3 - Semi-Annual Refrigeration Units Maintenance

We will carry out the semi-annual maintenance of the refrigeration units.

No impact is expected.

Start : 23-Aug-2019 0600Z (0800LT)

End : 03-Sep-2019 1530Z (1730LT)

Past Incidents

Sunday 30th June 2019

DC2 Network [DC2] Switch unavailability - Room : 202-A ; Rack : B12

We have detected an issue on the switch for this rack. Our datacenter and network teams are working on it to restore the service as quickly as possible.

===================

06/30/2019 1850Z (1950LT)

The issue has been fixed. Everything is back to normal.

06/30/2019 1750Z (1850LT)

The product team has identified the root cause and is working on a fix.

06/30/2019 1730Z (1830LT)

The issue has been escalated to the local team.

Saturday 29th June 2019

No incidents reported

Friday 28th June 2019

DC3 [PAR] Switch unavailability - Room 4-6 Rack B16

We have detected an issue on the switch for this rack. Our datacenter and network teams are working on it to restore the service as quickly as possible. We will keep you updated as soon as we have more information.

28/06/19 1756Z (1856LT)

The product team has identified the root cause and is working on a fix.

28/06/19 1830Z (1930LT)

The issue has been solved; all servers are now up and running.

Console Magic Link down on Scaleway console

We are currently facing an issue with the Scaleway console: the magic link is down, and it is not possible to log in with it for the moment.

Please use password login instead of magic link.

We apologize for the inconvenience.

Next update by 06.28.19 1200Z (1300LT)

==============

The issue has been fixed; you can use the magic link again.

Thursday 27th June 2019

No incidents reported

Wednesday 26th June 2019

[DEDIBACKUP] Infrastructure refresh - server backup-dc3-3, scheduled 3 weeks ago

Location : DC3 - dedibackup backup-dc3-3

What is done : data from one dedibackup repository will be migrated to a new storage repository.

Impact : FTP connections will be closed during the reboot only; the service will be operational again as soon as the reboot is complete. The maintenance will be transparent for customers, and data will remain available in the same way after the change.

Start : 26-Jun-2019 1200Z (1400LT)

Duration : 4 hours

Web [WEB HOSTING] Some websites are unreachable and returning a 404 error

We are currently facing an issue with multiple web hosting platforms. Some websites might be unreachable.

We will keep you updated as soon as we have more information.

===================

06.26.19 1030Z (1230LT)

The issue has been escalated to the local team.

Web [WEB HOSTING] Connection issue

We are currently facing an issue with multiple web hosting platforms.

Some websites might be unreachable.

We will keep you updated as soon as we have more information.

===================

06.26.19 0100Z (0300LT)

The issue has been escalated to the local team.

06.26.19 0720Z (0920LT)

The dedicated team is still working on the issue, we will update this status as soon as we know more.

06.26.19 0805Z (1005LT)

The product team has identified the root cause and is working on a fix.

06.26.19 0850Z (1050LT)

We have fixed the issue.

Tuesday 25th June 2019

Product Catalog API Marketplace API unavailable

We are currently experiencing an outage on our Marketplace API.
As a consequence, instance creation is not possible while we are working on the issue.

===================

25/06/19 1315Z (1515LT)

Our dev team is working on the issue, we will update this status as soon as possible.

25/06/19 1325Z (1525LT)

Issue has now been fixed, services are available again.

Monday 24th June 2019

RPN services RPNv2 instability between Paris and Amsterdam

We have identified a network outage between AMS and Paris impacting RPNv2 traffic between both locations.
We are currently investigating with our network provider, and will update this status accordingly.

===================

24/06/19 1245Z (1445LT)

Issue escalated to the Network team, currently investigating with our provider.

24/06/19 1320Z (1520LT)

Services are stable following our provider's maintenance.
We will keep monitoring services, and are waiting for a final feedback from our provider regarding the root cause.

DC5 [Paris Datacenters] Specific advanced monitoring during the week due to the heat wave

Due to the heat wave in the Paris region this week, specific advanced monitoring has been put in place on the infrastructure of our datacenters DC2, DC3 and DC5.

Global Backbone Route leak on range 62.210.0.0/16

We identified a major incident on routing tables on some providers.

AS701 appears to have accepted the re-announcement of a full view (all internet routes) from one of its downstream ASNs.
This led to routing loops and inconsistent routing tables for about an hour on various networks transiting major Tier 1 carriers such as Level3 and Cogent.
This impacted many providers visible in the DFZ routing table.

We have protection mechanisms to prevent internal traffic from being drawn into the leak, but we cannot act on what happens on AS701 and its upstream/downstream peers, as we have no relationship with them.
We have temporarily blocked the responsible ASN to avoid any issue inside our network.
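Blocking routes learned via a leaking ASN is typically done with an AS-path filter on the affected BGP sessions; a minimal sketch, assuming Cisco IOS syntax, with placeholder values for the local ASN (64500), the neighbor address (203.0.113.1) and the responsible ASN (64512):

```
! Drop any route whose AS path contains the leaking ASN (64512 is a placeholder)
ip as-path access-list 10 deny _64512_
ip as-path access-list 10 permit .*
!
router bgp 64500
 neighbor 203.0.113.1 filter-list 10 in
```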

RPN services SAN connectivity issues

Our teams have noticed abnormal behavior on some SANs.
You may experience brief disconnections and connectivity issues while our team works on it.

We will update this status as soon as we have additional details to share.

===================

24/06/19 1015Z (1215LT)

Product team is currently investigating the root cause.
Updates to come shortly.

24/06/19 1430Z (1630LT)

Issue is now fully fixed and all SANs should be reachable again.
As a final note, it seems like we triggered an ARP cache bug during a (supposedly) transparent maintenance this morning.
Please get in touch with our support team if you are still facing any issue.