Some systems are experiencing issues

About This Site

Maintenance operations and incidents affecting our services, networks and datacenters can be followed here.

Scheduled Maintenance
[MAINTENANCE DC5-A] Racks D14;D15;D16;D17;D18;D19;C1;C2;C3;C4;C7;C8

A scheduled maintenance is planned in DC5-A, involving the physical relocation of the servers in the racks listed above in order to optimize and upgrade our infrastructure. Servers will be unavailable for the duration of the maintenance detailed below.

This status will be updated as operations progress to keep you informed.

Start : July 6th, 2020 - 2000UTC (2200LT)

End : July 7th, 2020 - 0300UTC (0500LT)

[MAINTENANCE DC5-A] Racks H1;H2;H3;H4;H5;H6;H7;H8;H9;H10;H11;H12;H13;H14;H15;H16;H17

A scheduled maintenance is planned in DC5-A, involving the physical relocation of the servers in the racks listed above in order to optimize and upgrade our infrastructure. Servers will be unavailable for the duration of the maintenance detailed below.

This status will be updated as operations progress to keep you informed.

Start : July 15th, 2020 - 2000UTC (2200LT)

End : July 16th, 2020 - 0300UTC (0500LT)

[DC5] Load tests of generators

As part of the preventive maintenance of the Datacenter, we will perform a redundancy test of the Datacenter infrastructure by carrying out generator tests.

The operation consists of simulating a loss of the main low-voltage switchboard (TGBT) and validating that the load transfers to, and runs on, the generators. Running time on the generators will be reduced given the low load on the site.

Impact : None expected

Start: 16-Jul-2020 0600Z (0800LT)
End: 16-Jul-2020 1200Z (1400LT)

[DEDIBOX] API and Console session handler migration

We will migrate the session handler used by our Console and API.

The Console and API will be unavailable during the migration.

Start : 25-Jun-2020 0300Z (0500LT)

Duration : 30 min
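
If you rely on scripts that call the API, make sure they tolerate this 30-minute window. Below is a minimal retry sketch in Python; the endpoint and token are placeholders for illustration, not details from this notice:

    import time
    import requests

    API_URL = "https://api.online.net/api/v1/server"      # example endpoint, adjust to yours
    HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}  # placeholder token

    def get_with_retry(url, attempts=6, base_delay=60):
        """Retry a GET during the maintenance window, backing off between tries."""
        for attempt in range(attempts):
            try:
                resp = requests.get(url, headers=HEADERS, timeout=10)
                if resp.status_code < 500:
                    return resp
            except requests.RequestException:
                pass  # API unreachable mid-migration; wait and try again
            time.sleep(base_delay * 2 ** attempt)  # 1 min, 2 min, 4 min, ...
        raise RuntimeError("API still unavailable after all retries")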

[DC3] Semi-annual maintenance of chillers

We will carry out the semi-annual maintenance of the chillers, following the manufacturer’s maintenance programme.
Operations will be performed by the manufacturer and supervised by our maintenance contractor, according to the following provisional schedule:

  • Monday 17/08: Chiller n°1, Substation A
  • Tuesday 18/08: Chiller n°2, Substation A
  • Wednesday 19/08: Chiller n°3, Substation A
  • Thursday 20/08: Chiller n°4, Substation B
  • Friday 21/08: Chiller n°5, Substation B
  • Monday 24/08: Chiller n°6, Substation B
  • Monday 31/08: Chiller n°7, Substation C
  • Tuesday 01/09: Chiller n°8, Substation C

Substations A and B supply the chilled water network of the computer halls.
Substation C supplies the chilled water network of the operator halls and the air handling units.

Impact : None expected

Start: 17-Aug-2020 0630Z (0830LT)
End: 01-Sep-2020 0330Z (0530LT)

[DC3] Load tests of generators and PLC software updates

As part of the preventive maintenance of the Datacenter, we will perform a redundancy test of the Datacenter infrastructure by carrying out generator tests, in order, on the following electrical chains: A / B / C / D / F1 and F2.

During the load tests on electrical chains A / B / C and D, we will also perform an F1 line-up backup test.
Description of operations :

  • 25/08/2020:
    • Morning: Load test on the electric chain A with F1 line up backup test.
    • Afternoon: Load test on the electric chain B with F1 line up backup test.
  • 26/08/2020:
    • Morning: Load test on the electric chain C with F1 line up backup test.
    • Afternoon: Load test on the electric chain D with F1 line up backup test.
  • 27/08/2020: As part of the mobile chillers installation, we will update the PLC software on chains F1 and F2, then perform a load test on chains F1 and F2.

During the F1 line-up backup test, we will simulate a failure of the generators in order to validate the F1 line-up backup scenario. Our teams and our usual maintenance companies will be mobilized to ensure the operation runs smoothly.

Impact : None expected

Start: 25-Aug-2020 0600Z (0800LT)
End: 27-Aug-2020 1600Z (1800LT)

Past Incidents

Thursday 13th June 2019

[SCALEDAY] [ASSISTANCE] Ticket processing slowdown - 13/06/2019

Hello,

ScaleDay is the big event of 2019: together we will be celebrating Online's 20th anniversary with a special focus on Scaleway's exceptional evolution.

This major event will bring together you, our most loyal customers, and all of our employees. This may result in slower processing of support tickets.

If you have not registered for ScaleDay yet, feel free to register and join us on this adventure: http://bit.ly/2KMWbz1

We sincerely thank you for your understanding and wish you a happy ScaleDay!

Wednesday 12th June 2019

Compute Nodes [Scaleway] Hot-snapshots temporarily disabled in some situations

Description of the issue : We have identified a bug in the snapshot functionality.

When a hot-snapshot is requested (in the web console or via the API), the snapshot process cannot complete properly if all of the following conditions apply (a condition-check sketch follows the list):

  • the instance is a virtualized offer of the X86 family (GP1, DEV1, START1, VC1, X64*, RENDER)
  • AND the instance was last started in or after March 2019
  • AND the instance is still running today
  • AND the volume being hot-snapshotted matches one of these conditions:
    • it was created before June 2018
    • OR it was created more recently but based on a snapshot created before June 2018
    • OR it was created more recently but based on a snapshot that was itself created from another snapshot created before June 2018
  • AND the hypervisor (that is running the instance) is in a special state
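
For reference, the eligibility test above can be expressed as a simple predicate. This is an illustrative sketch in Python; the types and field names are ours for the example, not the actual API schema:

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    SNAPSHOT_CUTOFF = datetime(2018, 6, 1)  # "created before June 2018"
    START_CUTOFF = datetime(2019, 3, 1)     # "last start in or after March 2019"
    X86_FAMILY_PREFIXES = ("GP1", "DEV1", "START1", "VC1", "X64", "RENDER")

    @dataclass
    class Snapshot:
        created_at: datetime
        source: Optional["Snapshot"] = None  # snapshot this one was based on, if any

    @dataclass
    class Volume:
        created_at: datetime
        source_snapshot: Optional[Snapshot] = None  # None if created from scratch

    def lineage_predates_cutoff(volume: Volume) -> bool:
        """True if the volume, or any snapshot in its ancestry, predates June 2018."""
        if volume.created_at < SNAPSHOT_CUTOFF:
            return True
        snap = volume.source_snapshot
        while snap is not None:
            if snap.created_at < SNAPSHOT_CUTOFF:
                return True
            snap = snap.source
        return False

    def hot_snapshot_affected(family: str, last_started_at: datetime, is_running: bool,
                              volume: Volume, hypervisor_special_state: bool) -> bool:
        """All five conditions from the list above must hold for the bug to trigger."""
        return (any(family.startswith(p) for p in X86_FAMILY_PREFIXES)  # covers X64*
                and last_started_at >= START_CUTOFF
                and is_running
                and lineage_predates_cutoff(volume)
                and hypervisor_special_state)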

Impact : In such cases, the volume is left in the "snapshotting" state and the snapshot is also in the "snapshotting" state, so it is not available (it cannot be used to re-create a new volume). The instance is still running fine and can be used normally (no reboot observed). However, the instance cannot be stopped, rebooted, or have any other action applied by the customer; it requires manual fixing by the Scaleway team.

Temporary disabling and fix : We are working on a definitive fix, so that hot-snapshots can be created again in all situations. Until the bug is solved, we have disabled the hot-snapshot feature for instances that match the above conditions: the API will refuse the creation. This prevents a customer's instance from becoming "blocked" (no action possible by the customer). Note that the hot-snapshot feature is still available today for all other instances.

Workarounds :

  1. Put the instance in standby (the "stop-in-place" action in the API), then perform a snapshot (i.e. a cold snapshot), then start the instance again => this workaround is temporary and best suited for a limited number of snapshots, as it is not automation-friendly. NB: until the definitive fix, this process will need to be repeated for each snapshot; a scripted sketch follows this list.

or

  1. Create a new instance from scratch (using Scaleway official images, or using a snapshot created after June 2018), manually move the data between volumes, delete the old instance, and perform hot-snapshots on the new instance => this will allow you to create as many hot-snapshots as needed and will not break existing automated hot-snapshotting operations, even before our definitive fix is available.
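
As an illustration of workaround 1, here is a minimal automation sketch in Python against the Scaleway Instance API; the zone, IDs and token are placeholders, the request bodies are simplified (the real API may require extra fields such as the project ID), and the exact endpoints and state names should be checked against the API documentation:

    import time
    import requests

    API = "https://api.scaleway.com/instance/v1/zones/fr-par-1"  # zone is an example
    HEADERS = {"X-Auth-Token": "YOUR_SECRET_KEY"}                # placeholder credential

    def server_action(server_id, action):
        """Trigger a server action such as "stop_in_place" or "poweron"."""
        r = requests.post(f"{API}/servers/{server_id}/action",
                          headers=HEADERS, json={"action": action})
        r.raise_for_status()

    def wait_for_state(server_id, state, poll=10):
        """Poll the server until it reaches the expected state."""
        while True:
            r = requests.get(f"{API}/servers/{server_id}", headers=HEADERS)
            r.raise_for_status()
            if r.json()["server"]["state"] == state:
                return
            time.sleep(poll)

    def cold_snapshot(server_id, volume_id, name):
        """Workaround 1: standby the instance, snapshot cold, power back on."""
        server_action(server_id, "stop_in_place")
        wait_for_state(server_id, "stopped in place")  # state name is an assumption
        r = requests.post(f"{API}/snapshots", headers=HEADERS,
                          json={"name": name, "volume_id": volume_id})
        r.raise_for_status()
        server_action(server_id, "poweron")
        return r.json()["snapshot"]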

Tuesday 11th June 2019

DC5 [DC5] Optic maintenance - Room 1 Rack D29

We need to change an optic on DC5 Room 1 - Rack D29. The servers running in this rack will encounter a downtime of a few seconds.

Start : 11-June-2019 1440Z (1540LT)

Duration : 2 minutes

=================

The optic has been replaced, and no errors were encountered.
Everything is back to normal after our intervention. If you still experience issues, please contact our support team.

Monday 10th June 2019

No incidents reported

Sunday 9th June 2019

No incidents reported

Saturday 8th June 2019

DC3 [PAR] Switch unavailability - DC2 ROOM 101 RACK G30

We have detected an issue on the switch for this rack. Our datacenter and network teams are working to restore service as quickly as possible. We will keep you updated as soon as we have more information.

08/06/2019 1550Z (1750LT)

The dedicated team is still working on the issue, we will update this status as soon as we know more.

08/06/2019 1930Z (2130LT)

The issue has been solved; all servers are now up and running.

Friday 7th June 2019

Global Backbone [NETWORK] WorldStream

We have detected an issue with bandwidth and connectivity between our network and WorldStream.

Our network team is currently investigating.

===================

07/06/2019 1600Z (1800LT)

WorldStream has replaced their optic; the issue is considered fixed, but we will keep monitoring the situation.
If you are still experiencing network instabilities through WorldStream, please contact our support team directly.

Thursday 6th June 2019

No incidents reported