Some systems are experiencing issues

About This Site

Maintenance operations and incidents affecting our services, networks and datacenters can be followed here.

Past Incidents

Monday 7th January 2019

DC5 Partial unavailability on Room 1 - Zone 1 - Rack C

Our teams have identified an issue affecting specific racks in DC5:
DC5 - Room 1 - Zone 1 - Rack C10 - C11 - C12

We are currently investigating the issue and will update this status as soon as possible.

===================

07012019 1550Z (1650LT)

Everything is fixed; the issue was due to a configuration error.

Console [2FA] Text messages not received for two-factor authentication

Some text messages carrying 2FA codes are being received with a delay, depending on the provider. We are still investigating to find the root cause and resolve it. We apologize for the inconvenience and will keep you updated as soon as possible.

UPDATE: This issue is now fixed.

Mail Email reception delay

Our teams have identified an issue affecting email reception on our shared hosting offers.

It seems that one of our email platforms is under heavy load.
We are currently investigating the issue and will update this status as soon as we have more details to share.

Our support team remains at your disposal should you have any specific questions.

===================

07012019 1234Z (1334LT)

The load has stabilized and email processing is in progress.
We will keep you informed as soon as we have news.

07012019 1925Z (2025LT)

All queued emails have been delivered; everything is back to normal.

08012019 1130Z (1230LT)

We are noticing delays in email reception.
One of our email platforms is having trouble handling the load; we are looking into replacing it as soon as possible.

08012019 1256Z (1356LT)

The load has stabilized and email processing is in progress.
New emails are now received instantly.

Sunday 6th January 2019

AMS1 Compute API AMS - Node stuck on ARM64 range

We have identified an issue regarding ARM64 Instances in the Amsterdam region. Symptoms: nodes blocked on “rebooting server” or “Unable to reboot”.

Our engineering team is currently investigating the issue.

We will provide additional information through this status as soon as we have more to share. In the meantime, our support team is available to answer any further questions.

Thank you for your patience.

===================

The product team has identified the root cause and fixed the issue. Everything is back to normal.

PAR - Node stuck

We have identified an issue regarding Instances in the Paris region. Symptoms: nodes blocked on “starting server” or “rebooting server”.

Our engineering team is currently investigating the issue.

We will provide additional information through this status as soon as we have more to share. In the meantime, our support team is available to answer any further questions.

Thank you for your patience and sorry for the inconvenience.

*Next update in 2 hours.

===================

06012019 0451Z (0551LT)

The product team has identified the root cause and is working on a fix.

06012019 0551Z (0651LT)

The issue comes from our cluster; rebalancing is ongoing.

06012019 0715Z (0815LT)

We encountered an issue on our storage cluster; rebalancing was required to solve this problem, which is why instances were not starting properly. Everything is back to normal.

Saturday 5th January 2019

No incidents reported

Friday 4th January 2019

No incidents reported

Thursday 3rd January 2019

DC3 Network [NETWORK] DC3 - Room 4-4 - Racks C13, C14, C15

We have lost connectivity in DC3 Room 4-4, racks C13, C14 and C15.

We are already taking action to fix this as soon as possible.

===================

03012019 1414Z (1514LT)

All three racks are now reachable. The issue is fixed.

Wednesday 2nd January 2019

DC3 Network [NETWORK] DC3 - Room 3-2 - Racks B13, B14, B15 - Replacement of hardware

There will be a short outage (less than a minute) in order to replace a hardware part and improve the network on these racks.
The intervention will take place within the hour (1545Z, 1645LT).

User Accounts API [SCW] API unavailability

We had an issue with the API in Paris and Amsterdam. The API was unavailable for around 40 minutes, starting around 0900Z (1000LT). After investigation, our team identified the root cause and resolved the issue.

The service is now back online as expected.

Tuesday 1st January 2019

No incidents reported