All systems are operational

About This Site

Maintenance operations and incidents affecting our services, networks and datacenters can be followed here.

Scheduled Maintenance
[NETWORK] - Network maintenance #2 between Paris and Amsterdam

Our fiber optic provider has planned maintenance on the public network between Paris and Amsterdam between the 23rd and the 24th of February.

During this time the network will still be up, but you might notice some congestion between Paris and Amsterdam (a quick way to check from your own servers is sketched below).

Start: 23-Feb-2019 0800Z (0900LT)

Duration: 1 day
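
If you want to verify whether your own traffic is affected during this window, one rough check is to time repeated TCP connections from one of your Paris machines to one of your Amsterdam machines and watch for unusually high or jittery values. The Python sketch below is only an illustration: the hostname and port are placeholders to replace with your own servers, and it is not an official tool.

import socket
import statistics
import time

AMS_HOST = "my-server.ams1.example.net"  # placeholder: one of your Amsterdam hosts
PORT = 22                                # placeholder: any TCP port that host accepts

def tcp_connect_ms(host: str, port: int, timeout: float = 3.0) -> float:
    """Return the TCP connect time to host:port in milliseconds."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000.0

samples = [tcp_connect_ms(AMS_HOST, PORT) for _ in range(10)]
print(f"min {min(samples):.1f} ms / median {statistics.median(samples):.1f} ms / max {max(samples):.1f} ms")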

[RPN] - RPN maintenance between Paris and Amsterdam

Our fiber optic provider has planned maintenance on the RPN network between Paris and Amsterdam during the night of the 28th of February to the 1st of March.

During this time you might not be able to reach a server over the RPN between Paris and Amsterdam.

Start: 28-Feb-2019 2259Z (2359LT)

Duration: 6 hours

Past Incidents

Tuesday 12th February 2019

[DC5] Public network switch upgrades, scheduled 1 week ago

We have planned maintenance in Datacenter DC5 to upgrade our switches’ power supplies. We will remove the STS and add a second power supply to increase redundancy.

For this maintenance, we will need to shut down the switch. During this time the public network will not be reachable.

Estimated downtime: 60 minutes

Total maintenance duration: 120 minutes

Start: 12-Feb-2019 0500Z (0600LT)

Link to the maintenance plan: https://bugs.online.net/index.php?do=details&task_id=1399

UPDATE: We have canceled the maintenance for today (February 12th). We will update this status with new information as soon as possible.

Tuesday 5th February 2019

AMS1 Compute API AMS - Node stuck on ARM64 range

We have identified an issue affecting some ARM64 Instances in the Amsterdam region. Symptoms: node stuck on “rebooting server” or “Unable to reboot”.

Our engineering team is currently investigating the issue.

We will provide additional information through this status as soon as we have more to share. In the meantime, our support team is available to answer any further questions.

Thank you for your patience.

07.02.19 1450Z (1550LT)

The issue has now been fixed; all ARM64 instances should be back up.
Please get in touch with our support team if you are still experiencing issues.

Compute Nodes Start/Stop actions unavailable on PAR1

An issue with start/stop actions has been identified on Paris instances; our teams are working on resolving it. All other services and functionalities are up and running correctly.

05.02.19 1935Z (2035LT)

Our engineering team is still working on the issue. We will update this status when the situation is completely back to normal.

05.02.19 2200Z (2300LT)

Our engineers have fixed the issue. There are still problems with BareMetal servers, and we are working to solve them quickly.

07.02.19 1400Z (1500LT)

The issue has now been fixed and, after a monitoring period, we can confirm that start/stop actions from our internal workers are processed correctly and without any particular delay.
If you are still facing specific issues with your instance, please contact our support team directly by ticket.

Monday 4th February 2019

Global Backbone DC3 <> AMS1 saturation

One of our links between DC3 and AMS1 is currently down, which may cause saturation on our network. Our teams are working on restoring connectivity.

===================

04/02/19 2230Z (2330LT)

Issue is now fixed.

[Phone Support] Scaleway phone support unavailable

Our phone support is currently unavailable due to high ticket volume. We will update this status when we are back online.

In the meantime you may open a ticket for us via your console:
https://documentation.online.net/en/account-management/assistance/assistance

===================

05/02/19 0600Z (0700LT)

Our support team is available again by phone.

Sunday 3rd February 2019

No incidents reported

Saturday 2nd February 2019

No incidents reported

Friday 1st February 2019

Network maintenance on RPN links between AMS and PAR, scheduled 3 weeks ago

One of our network suppliers has scheduled network maintenance on the RPN links between AMS and PAR.
You may experience partial outages during this timeframe.

Maintenance is scheduled to start on February 1st, 2200Z (2300LT) and should last 6 hours at most.

Our support team remains at your disposal should you have any questions.

Compute Nodes Network outage on p12 platform

We are noticing outages on multiple C2 and VC1 servers in PAR1.
Some servers are still not reachable, and new actions performed will not complete correctly.

===================

04.02.19 1100Z (1200LT)

There was a hardware issue on part of our infrastructure. Our team at the datacenter managed to fix it and servers should now be available again. If you still experience an issue, please attempt to reboot the server from the console panel, and do not hesitate to contact our support team if it doesn't work as expected.
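
If the console panel is not convenient, the same reboot can usually be triggered through the compute API. The Python sketch below is a hedged example only: the endpoint (cp-par1.scaleway.com), header name and payload reflect our understanding of the public API, the server UUID and token variable are placeholders, and the exact URL should be checked against the API documentation before use.

import json
import os
import urllib.request

API_URL = "https://cp-par1.scaleway.com"             # assumed PAR1 compute API endpoint
SERVER_ID = "00000000-0000-0000-0000-000000000000"   # placeholder: your server UUID
TOKEN = os.environ["SCW_SECRET_KEY"]                 # placeholder env var holding your API token

req = urllib.request.Request(
    f"{API_URL}/servers/{SERVER_ID}/action",
    data=json.dumps({"action": "reboot"}).encode(),
    headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    # A 2xx response normally means the reboot task was accepted and queued.
    print(resp.status, resp.read().decode())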

02.02.19 0435Z (1735LT)

Some instances are still not reachable. We are investigating at the moment.

02.02.19 0230Z (0330LT)

All impacted servers should be functional now.

A detailed incident report will be published next week.

02.02.19 0010Z (0110LT)

The issue has now been fixed except on a single hypervisor, which we are still working on.
C2 servers will need to be rebooted manually; the network will be reacquired on boot.

Following this major outage, a blog post will be published once the root cause and consequences are fully identified and fixed.
Initial diagnosis reveals cascading failures caused by corrupted requests.

Impact:
API unavailability, and network unavailability on a small number of C2 and VC1 instances.

02.02.19 1830Z (1930LT)

Our teams are still actively working on the few unreachable virtual servers.
The issue is very complex and will probably require the full availability of our engineering teams to recover all systems.
Some nodes cannot be restored in their current state; further analysis will be carried out on Monday.

Our support team is still fully available if you have any questions.
However, response times might be longer than usual following tonight's outage.

We are truly sorry for the inconvenience.

Console Partial API / console outage

We are currently experiencing issues regarding both AMS and PAR APIs.
You might also face issues connecting and managing your account from the console.

Our teams are focused on getting it fixed as soon as possible; we will update this status as soon as we have more information.

===================

01.02.19 2225Z (2325LT)

The issue is now fixed: APIs are up again and console access has been restored.
Do not hesitate to contact our support team if you are still experiencing any issue.

Thursday 31st January 2019

DC3 Electrical outage in S45 - D18

We had an electrical outage around 1700Z in DC3, room 4-5, rack D18.
Issue has now been fixed and services should be back up.

We will keep monitoring electrical stability on this particular rack.

If you are still facing specific issues, please contact our support team by ticket.

Compute Nodes PDS stability issue

We are currently experiencing performance issues on our Persistent Data Store.
As a result, some internal tasks might take longer than usual, and fail in some cases (for example, instance start/stop processes); see the retry sketch below.

We are doing our best to get it fixed as soon as possible.
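
While this incident is ongoing, one possible client-side mitigation is to retry failed start/stop requests a few times with exponential backoff rather than giving up on the first error. The Python sketch below is a generic illustration, not part of our tooling; start_instance stands in for whatever API call or CLI command you normally use.

import time

def with_retries(fn, attempts: int = 5, base_delay: float = 2.0):
    """Call fn(), retrying on failure with exponential backoff."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception as exc:                 # narrow this to your client's error types
            if attempt == attempts:
                raise
            delay = base_delay * 2 ** (attempt - 1)
            print(f"attempt {attempt} failed ({exc}); retrying in {delay:.0f}s")
            time.sleep(delay)

# Example (hypothetical helper): with_retries(lambda: start_instance("my-instance-id"))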

===================

31/01/19 1730Z (1830LT)

The root cause has been identified, the PDS is stable again, and we are progressively relaunching internal tasks to avoid congestion.
We will update this status when the situation is completely back to normal.

31/01/19 2210Z (2310LT)

The issue is fixed; the situation is now completely back to normal.

Console Identified billing disruption

We have detected a disruption in the handling of unpaid invoices for Compute instances for the month of December.
Our teams are fully focused on getting it fixed as soon as possible.
We will keep you informed of any developments.
Impacted users will directly receive more information by email.

===================

23.01.19 0614Z (0714LT)

Impacted customers have been contacted directly by email.
The issue has been resolved; feel free to contact our support team if you have any particular questions.

DC3 [DC3] Incident in Room 4 4-6, Rack F12

Hello,

We encountered an issue with a switch in DC3, Room 4 4-6, Rack F12. Multiple servers may be unavailable because of the lost connection. We apologise for the inconvenience.

===================

01.02.19 0935Z (1035LT)

Our network team fixed the issue and the rack is now available again. If you don't have access to your server yet, do not hesitate to check it through KVM or perform a reboot in order to restart the services.

01.02.19 0045Z (0145LT)

The switch was replaced; however, further steps have to be performed by our engineers to fully restore connectivity and make all impacted servers functional again.

31.01.19 0000Z (0100LT)

The issue was escalated to the responsible team.

Wednesday 30th January 2019

No incidents reported