Some systems are experiencing issues

About This Site

Maintenance operations and incidents affecting our services, networks, and datacenters can be followed here.

Scheduled Maintenance
[RPN Network] maintenance - PAR > AMS - Scheduled Maintenance

Region : PAR/AMS

Time : from April 29th, 2019 at 20:00 UTC to April 30th, 2019 at 04:00 UTC

Maintenance will be carried out on our network link between AMS and Paris. You might encounter some disconnections through the RPN network, but only between AMS and PAR.

Start : maintenance will start within the window 29-April-2019 2000Z - 30-April-2019 0400Z

Duration : 2 hours

[DC3] Optical link maintenance

Region : PAR

Time : from April 30th, 2019 at 06:00 UTC

Maintenance will be carried out on 2 DC3 optical links. This will impact servers in the following racks: Room 31 rack E13 and Room 32 rack A7.

There will be two short disconnections of about 2 minutes each for the servers listed above.

Start : April 30th, 2019 at 0600Z (0800LT)

Duration : two disconnections of 2 minutes each within a 1-hour window.
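
If your server is in one of these racks and you want to time the two cuts from your side, a simple TCP reachability probe is enough. Below is a minimal sketch in Python; it assumes your server answers on SSH (port 22), and the IP address is a placeholder to replace with your own:

    import socket
    import time
    from datetime import datetime, timezone

    HOST = "203.0.113.10"   # placeholder: replace with your server's IP
    PORT = 22               # assumes SSH; use the port of any service you run
    INTERVAL = 5            # seconds between probes

    def is_reachable(host, port, timeout=3.0):
        # True if a TCP connection to host:port succeeds within the timeout.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    last_state = None
    while True:
        state = is_reachable(HOST, PORT)
        if state != last_state:  # log only UP/DOWN transitions
            stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%SZ")
            print(stamp, "UP" if state else "DOWN", flush=True)
            last_state = state
        time.sleep(INTERVAL)

Left running across the window, the output should show two DOWN/UP transitions of roughly 2 minutes each.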

[RPN Network] maintenance - PAR > AMS - Scheduled Maintenance

Region : PAR/AMS

Time : from May 3rd, 2019 at 21:00 UTC to May 4th, 2019 at 04:00 UTC

Maintenance will be carried out on our network link between AMS and Paris. You might encounter some disconnections through the RPN network, but only between AMS and PAR.

Start : May 3rd, 2019 at 2100Z (2300LT)

Duration : 7 hours

Past Incidents

Tuesday 12th February 2019

[DC5] Public network switch upgrades, scheduled 2 months ago

We have planned maintenance in datacenter DC5 to upgrade our switches’ power supplies. We will remove the STS (static transfer switch) and add a second power supply to increase redundancy.

For this maintenance, we will need to shut down the switch. During this time the public network will not be reachable.

Estimated downtime : 60 minutes

Total maintenance duration : 120 minutes

Start : February 12th, 2019 at 0500Z (0600LT)

Link to the maintenance plan: https://bugs.online.net/index.php?do=details&task_id=1399

UPDATE: We have canceled the maintenance for today (February 12th). We will update this status with new information as soon as possible.

Monday 11th February 2019

Compute Nodes [SCW] PAR1 - Start failure for some instances

We have observed a number of nodes unable to start in Paris. We are investigating the issue and will update this status as soon as possible.

Sunday 10th February 2019

No incidents reported

Saturday 9th February 2019

[Phone Support] Scaleway phone support unavailable

Our technical phone support is currently unavailable. We will update this status when it is back online.

In the meantime, we remain available through tickets, so do not hesitate to contact us there: https://documentation.online.net/en/account-management/assistance/assistance

Friday 8th February 2019

[DC2] Partial network outage

A network issue was detected by our team. We are investigating the root cause and working on reestablishing full connectivity as soon as possible. Your machines and data are not impacted, only disconnected.

08/02/19 1400Z (1500LT) Our team is still investigating.

08/02/19 1410Z (1510LT) A large fiber cut on our network backbone has been identified, causing partial connectivity loss for several servers in PAR1. Our network team is working on it.

08/02/19 1420Z (1520LT) Our fiber provider is on its way to the area where the fiber was cut. Our teams have reestablished most of the traffic by rerouting it over alternate paths.

08/02/19 1440Z (1540LT) Connectivity is back for DC2 dedicated servers and is returning progressively for Scaleway instances. Our teams are currently working to better distribute the traffic and to fix room DC2/101 and the link from DC2 to AMS.

08/02/19 1500Z (1600LT) We managed to distribute the bandwidth between our providers. The link between DC2 and AMS is still unstable.

We are also setting up additional cables between DC2 rooms, to fix the remaining local issues.

08/02/19 1530Z (1630LT) The situation has stabilized and traffic is nearly normal. We will keep working with our fiber provider to ensure a definitive fix.

08/02/19 1800Z (1900LT) Our provider's team is still working to repair the cable. The return to normal is progressive, and we expect some delay before we are back to 100% availability. Network traffic is operational, routed over alternate paths.

11/02/19 0930Z (1030LT) About 90 percent of fibers are back and traffic is now normal.

13/02/19 0830Z (0930LT) A few fibers are still missing. Traffic is nominal.

18/02/19 1600Z (1700LT) All is fixed. Fiber is fully back and traffic restored to nominal routes. Incident closed.
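
If you want to verify the rerouting from your own side, you can snapshot the forwarding path before and after. The sketch below is illustrative; it assumes a Unix host with the traceroute binary installed, and the target IP is a placeholder for one of your DC2 servers:

    import subprocess
    from datetime import datetime, timezone

    TARGET = "203.0.113.10"  # placeholder: replace with your server's IP

    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    # -n skips reverse DNS so hops appear as IPs and runs stay comparable
    result = subprocess.run(["traceroute", "-n", TARGET],
                            capture_output=True, text=True, check=False)
    with open("traceroute-" + stamp + ".txt", "w") as f:
        f.write(result.stdout)
    print(result.stdout)

Comparing two captures taken some minutes apart (with diff, for example) shows whether your traffic moved to an alternate path and when it returned to the nominal route.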


PAR1 Compute API - Provisioning (start/stop)

We are experiencing provisioning issues (start and stop) on PAR1. Our team is fully mobilized to solve the problem.

08/02/19 1500Z (1600LT) The issue has been fixed. We will keep it under monitoring until next week.

Thursday 7th February 2019

[API] General slowdown on PAR1 Compute API / AMS1 Compute API / User Accounts API / Billing API

We are experiencing a general slowdown on PAR1 Compute API / AMS1 Compute API / User Accounts API / Billing API, due to contention in our internal logging and metrics pipeline. Our team is fully mobilized to solve the problem.

08/02/19 1300Z (1400LT) The problem has been fixed. We will keep it under monitoring until next week.
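
For context on this class of failure: when API request handlers write logs and metrics synchronously into a shared pipeline, backpressure in that pipeline stalls the API itself. A common mitigation is to decouple the two with a bounded in-process queue. The following Python sketch illustrates the general pattern using the standard logging module; it is an illustration of the technique, not a description of our actual fix:

    import logging
    import logging.handlers
    import queue

    # Bounded queue between request handlers and the slow logging backend.
    # If the backend stalls and the queue fills up, records are dropped
    # instead of blocking the handlers (QueueHandler enqueues with put_nowait).
    log_queue = queue.Queue(maxsize=10000)

    backend = logging.FileHandler("api.log")  # stand-in for the slow pipeline
    listener = logging.handlers.QueueListener(log_queue, backend)
    listener.start()                          # drains the queue on its own thread

    logger = logging.getLogger("api")
    logger.addHandler(logging.handlers.QueueHandler(log_queue))
    logger.setLevel(logging.INFO)

    logger.info("request handled")  # returns immediately; I/O happens off-thread
    listener.stop()

With a bounded queue, a saturated logging backend costs dropped log records rather than blocked requests, which is usually the right trade-off on an API's critical path.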

DC3 Network - Room 4-4 Rack E11 down

We are experiencing a network issue in datacenter DC3, Room 4-4, Rack E11.

We are currently troubleshooting the switch and doing our best to fix this as soon as possible.

UPDATE: We have fixed the switch; servers are now back online.

Compute Nodes AMS - Nodes stuck / not reachable on ARM64 range

We have identified an issue affecting some ARM64 instances in the Amsterdam region.

Symptoms : nodes are not reachable, or are blocked on “rebooting server” or “Unable to reboot”.

Our engineering team is currently investigating the issue.

We will provide additional information through this status as soon as we have more to share. In the meantime, our support team is available to answer any further questions.

Thank you for your patience.

07.02.19 1530Z (1630LT)

The issue has now been fixed; all ARM64 instances should be back up.
Please get in touch with our support team if you are still experiencing issues.

08.02.19 0030Z (0130LT)

Some instances are still being supervised by our engineering team.

08.02.19 1300Z (1400LT) The issue has been fixed. This status will remain open until next week for monitoring.

Wednesday 6th February 2019

[DC2] Datacenter running on power generators

Due to an issue with our electrical power supplier, our datacenter DC2 is now running on power generators.

Our datacenter technicians are monitoring the situation and suppliers have been contacted.

There have been no downtimes or outages.

UPDATE: All 4 power grid cables (2 mains and 2 backups) are down. We are running on the power generators now. There is still no impact on the servers.

UPDATE 2: Electricity is now coming from our supplier again and we are no longer running on the power generators. The incident lasted 50 minutes and there was no downtime.

Tuesday 5th February 2019

AMS1 Compute API - AMS nodes stuck on ARM64 range

We have identified an issue affecting some ARM64 instances in the Amsterdam region. Symptoms : nodes blocked on “rebooting server” or “Unable to reboot”.

Our engineering team is currently investigating the issue.

We will provide additional information through this status as soon as we have more to share. In the meantime, our support team is available to answer any further questions.

Thank you for your patience.

07.02.19 1450Z (1550LT)

The issue has now been fixed; all ARM64 instances should be back up.
Please get in touch with our support team if you are still experiencing issues.

Compute Nodes - Start/stop actions unavailable on PAR1

An issue with start/stop actions has been identified on Paris instances; our teams are working on resolving it. All other services and functionalities are up and running correctly.

05.02.19 1935Z (2035LT)

Our engineering team is still working on the issue. We will update this status when the situation is completely back to normal.

05.02.19 2200Z (2300LT)

Our engineers have fixed the issue. There are still problems with BareMetal servers; we are working to solve them quickly.

07.02.19 1400Z (1500LT)

The issue has now been fixed and, after a monitoring period, we can confirm that start/stop actions from our internal workers are being processed correctly and without any particular delay.
If you are still facing issues with your instance, please contact our support team directly by ticket.
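
If you want to script a check on your side after an incident like this, you can poll an instance's state over the compute API. The sketch below follows the 2019-era Scaleway compute API as we understand it; the endpoint, header, and JSON field names are assumptions to verify against the current documentation, and the token and server ID are placeholders:

    import json
    import time
    import urllib.request

    API = "https://cp-par1.scaleway.com"  # assumed 2019-era PAR1 endpoint
    TOKEN = "YOUR_SECRET_TOKEN"           # placeholder credential
    SERVER = "11111111-2222-3333-4444-555555555555"  # placeholder server ID

    def server_state():
        # Assumed response shape: {"server": {"state": "starting" | "running" | ...}}
        req = urllib.request.Request(API + "/servers/" + SERVER,
                                     headers={"X-Auth-Token": TOKEN})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["server"]["state"]

    # Poll for up to ~5 minutes until the instance reports "running".
    for _ in range(30):
        state = server_state()
        print("state:", state)
        if state == "running":
            break
        time.sleep(10)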