All systems are operational

About This Site

Maintenance operations and incidents affecting our services, networks and datacenters can be followed here.

Scheduled Maintenance
DC3 - Semi-Annual Refrigeration Units Maintenance

We will perform the semi-annual maintenance of the chillers.

The operation will have no impact on your equipment housed in our DC.

START: Friday 23-Aug-2019 0600Z 0800LT

END: Tuesday 03-Sept-2019 1530Z 1730LT

RPN AMS1/DC3

Preventive maintenance on the RPN network between AMS1 and DC3

Our network supplier must perform maintenance on its network. The maintenance window runs from September 13, 2019 at 11:00 pm UTC to September 14, 2019 at 6:00 am UTC. The expected impact is 19 hours at most, within the maintenance window.

Impact: Downtime on the RPN network between AMS1 and DC3

Start: 13-Sept-2019 2100Z 2300LT

It will last: 19 hours

DC3 - Annual maintenance of UPS

We will perform the annual maintenance of the inverters of electrical chains A, B, C and D.

The operation will have no impact on your equipment housed in our DC.

START: Monday 16-Sept-2019 0630Z 0830LT Electrical chain A

END: Friday 20-Sept-2019 1530Z 1730LT

RPN AMS1/DC3

Upgrade of the RPN network between AMS1 and DC3

Our network supplier must perform maintenance on its network. The maintenance window runs from October 26th, 2019 at 10:00 pm UTC to October 27th, 2019 at 4:00 am UTC. The expected impact is 6 hours at most, within the maintenance window.

Impact: Downtime on the RPN network between AMS1 and DC3

Start: 26-Oct-2019 2200Z 0000LT

It will last: 6 hours
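
All times above are listed in both UTC ("Z") and Paris local time ("LT"), and the offset between the two changes with daylight saving time (UTC+2 in summer, UTC+1 in winter). A minimal conversion sketch, assuming the datacenters follow the Europe/Paris timezone (Python 3.9+):

    # Convert the announced UTC (Z) start times to local time (LT),
    # assuming Europe/Paris as the local timezone.
    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    paris = ZoneInfo("Europe/Paris")
    starts = [
        ("RPN AMS1/DC3 maintenance", datetime(2019, 9, 13, 21, 0, tzinfo=timezone.utc)),
        ("RPN AMS1/DC3 upgrade", datetime(2019, 10, 26, 22, 0, tzinfo=timezone.utc)),
    ]
    for label, utc in starts:
        lt = utc.astimezone(paris)
        print(f"{label}: {utc:%d-%b %H%M}Z -> {lt:%d-%b %H%M}LT")
    # 13-Sep 2100Z -> 13-Sep 2300LT; 26-Oct 2200Z -> 27-Oct 0000LT
    # (European DST only ends on 27-Oct-2019, so UTC+2 still applies there)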

Past Incidents

Tuesday 12th February 2019

No incidents reported

Monday 11th February 2019

Compute Nodes [SCW] PAR1 - Start failure for some instances

We observed a number of nodes unable to start in Paris. We are investigating the issue and will update this status as soon as possible.

Sunday 10th February 2019

No incidents reported

Saturday 9th February 2019

[Phone Support] Scaleway phone support unavailable

Our phone technical support is currently unavailable. We will update this status when we are back online.

In the meantime, we remain available through tickets, so do not hesitate to contact us here: https://documentation.online.net/en/account-management/assistance/assistance

Friday 8th February 2019

DC2 [DC2] Partial network outage

A network issue was detected by our team. We are investigating the root cause and working to re-establish full connectivity as soon as possible. Your machines and data are not impacted, only disconnected.

08/02/19 1400Z (1500LT) Our team is still investigating.

08/02/19 1410Z (1510LT) A large fiber incident on our network backbone has been identified, causing partial connectivity loss to several servers in PAR1. Our Network Team is on it.

08/02/19 1420Z (1520LT) Our fiber provider is heading to the area where the fiber was cut. Our teams have re-established most of the traffic by rerouting it to alternate routes.

08/02/19 1440Z (1540LT) Connectivity is back for DC2 dedicated servers and is coming back progressively for Scaleway instances. Our teams are currently working to better distribute the traffic and to fix room DC2/101 and the link from DC2 to AMS.

08/02/19 1500Z (1600LT) We managed to distribute the bandwidth between our providers. The link between DC2 and AMS is still unstable.

We are also setting up additional cables between DC2 rooms, to fix the remaining local issues.

08/02/19 1530Z (1630LT) The situation has stabilized and traffic is nearly normal. We will keep working with our fiber provider to ensure a definitive fix.

08/02/19 1800Z (1900LT) Our provider's team is still working to repair the cable. The return to normal is progressive, and we expect some delay before coming back to 100% availability. Network traffic is operational, routed over alternate paths.

11/02/19 0930Z (1030LT) About 90 percent of fibers are back and traffic is now normal.

13/02/19 0830Z (0930LT) A few fibers are still missing. Traffic is nominal.

18/02/19 1600Z (1700LT) All is fixed. Fiber is fully back and traffic restored to nominal routes. Incident closed.


PAR1 Compute API PAR1 - Compute - Provisioning (start/stop)

We are experiencing provisioning issues (start and stop) on PAR1. Our team is fully mobilized to solve the problem.

08/02/19 1500Z (1600LT) The issue has been fixed. We will keep it under monitoring until next week.
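
If your own tooling was affected by these start/stop failures, retrying the call with exponential backoff is a reasonable client-side mitigation while the API is degraded. A hedged sketch; the URL and the X-Auth-Token header below are illustrative placeholders, not a documented endpoint:

    # Sketch: retry a provisioning action with exponential backoff.
    import time
    import urllib.request

    API_URL = "https://api.example.com/servers/<server_id>/action"  # hypothetical

    def post_with_backoff(url: str, token: str, retries: int = 5) -> int:
        delay = 2.0
        for attempt in range(1, retries + 1):
            req = urllib.request.Request(url, method="POST",
                                         headers={"X-Auth-Token": token})
            try:
                with urllib.request.urlopen(req, timeout=10) as resp:
                    return resp.status  # success
            except Exception as exc:  # timeout, 5xx, connection reset
                print(f"attempt {attempt} failed: {exc}; retrying in {delay:.0f}s")
                time.sleep(delay)
                delay *= 2  # double the wait between attempts
        raise RuntimeError("provisioning call kept failing; contact support")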

Thursday 7th February 2019

PAR1 Compute API [API] general slowdown on PAR1 Compute API / AMS1 Compute API / User Accounts API / Billing API

We are experiencing a general slowdown on the PAR1 Compute API / AMS1 Compute API / User Accounts API / Billing API, due to contention in our internal logging and metrics pipeline. Our team is fully mobilized to solve the problem.
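
For context on this failure mode: when request handlers emit logs and metrics synchronously into a saturated pipeline, API latency inherits the pipeline's backpressure. A generic sketch of the usual mitigation, using Python's stdlib queue-based logging (an illustration only, not Scaleway's actual stack):

    # Decouple the request path from slow log sinks with a queue:
    # handlers run in a background thread, so a slow sink no longer
    # blocks the code that emits the log record.
    import logging
    import logging.handlers
    import queue

    log_queue = queue.Queue(-1)  # unbounded handoff queue

    listener = logging.handlers.QueueListener(
        log_queue, logging.StreamHandler(), respect_handler_level=True)
    listener.start()

    logger = logging.getLogger("api")
    logger.addHandler(logging.handlers.QueueHandler(log_queue))
    logger.setLevel(logging.INFO)

    logger.info("request handled")  # enqueues and returns immediately
    listener.stop()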

08/02/19 1300Z (1400LT) The problem has been fixed. We will keep it under monitoring until next week.

DC3 Network [DC3] - Room 4-4 Rack E11 down

We are having a network issue in Datacenter DC3, Room 4-4, Rack E11

We are currently troubleshooting the switch and doing our best to fix this as soon as possible.

UPDATE: We have fixed the switch; servers are now back online.

Compute Nodes AMS - Nodes stuck / not reachable on ARM64 range

We have identified an issue affecting some ARM64 instances in the Amsterdam region.

Symptoms: nodes are not reachable, or are blocked on “rebooting server” or “unable to reboot”.

Our engineering team is currently investigating the issue.

We will provide additional information through this status as soon as we have more to share. In the meantime, our support team is available to answer any further questions.

Thank you for your patience.

07/02/19 1530Z (1630LT)

The issue has now been fixed; all ARM64 instances should be back up.
Please get in touch with our support team if you are still experiencing issues.
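
To verify reachability of your own instances independently of the console, a plain TCP check against the SSH port distinguishes "up" from "not reachable". A minimal sketch; the instance names and addresses are placeholders:

    # Probe each instance's SSH port with a short TCP connect.
    import socket

    INSTANCES = {
        "arm64-node-1": "203.0.113.10",  # hypothetical addresses
        "arm64-node-2": "203.0.113.11",
    }

    def reachable(host: str, port: int = 22, timeout: float = 5.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for name, ip in INSTANCES.items():
        print(f"{name} ({ip}): {'up' if reachable(ip) else 'NOT REACHABLE'}")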

08/02/19 0030Z (0130LT)

Some instances are still being supervised by our engineering team.

08/02/19 1300Z (1400LT) The issue has been fixed. The status will remain open until next week for monitoring.

Wednesday 6th February 2019

DC2 [DC2] Datacenter running on power generators

Due to an issue with our electrical power supplier, our datacenter DC2 is now running on power generators.

Our datacenter technicians are monitoring the situation and suppliers have been contacted.

There have been no downtimes or outages.

UPDATE: The 4 power grid cables (2 mains and 2 backups) are down. We are running on the power generators now. There is still no impact on the servers.

UPDATE 2: Electricity is now coming from our supplier again. We are no longer running on power generators. The incident lasted 50 minutes and there was no downtime.