Some systems are experiencing issues

About This Site

Maintenance operations and incidents affecting our services, networks and datacenters can be followed here.

Scheduled Maintenance
Scheduled border leaf maintenance

Location : DC2

We will replace one of our border leaf switches with a larger shelf.
The shelf being replaced will be isolated and its links shifted to the new one.

No impact expected as our shelves are redundant.

Start: 5-Aug-2020 0700Z (0900LT)

Duration : 180 minutes

[INSTANCE] Scheduled maintenance on FR-PAR instance API

Our Instance team is scheduling maintenance on the Instance API.
The maintenance will be done in a 1-hour window.

Impact:
Response times may increase slightly and some 500 errors may occur.
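
For clients calling the Instance API during this window, transient 500 responses are best treated as retryable. Below is a minimal, illustrative Python sketch of a retry with exponential backoff; the endpoint, header and environment variable names are examples added for this note (not part of the maintenance announcement), so adapt them to your own tooling.

    import os
    import time

    import requests  # assumed third-party dependency (pip install requests)

    # Placeholder values for illustration only; adapt to your own setup.
    API_URL = "https://api.scaleway.com/instance/v1/zones/fr-par-1/servers"
    TOKEN = os.environ.get("SCW_SECRET_KEY", "")

    def get_with_retry(url, retries=5, backoff=1.0):
        """GET `url`, retrying transient 5xx responses with exponential backoff."""
        resp = None
        for attempt in range(retries):
            resp = requests.get(url, headers={"X-Auth-Token": TOKEN}, timeout=10)
            if resp.status_code < 500:
                return resp
            # 5xx during the maintenance window: wait, then try again.
            time.sleep(backoff * (2 ** attempt))
        return resp

    print(get_with_retry(API_URL).status_code)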

[DC3] Semi-annual maintenance of chillers

We will perform the semi-annual maintenance of the chillers, according to the manufacturer's maintenance plan.
Operations will be performed by the manufacturer and supervised by our maintenance contractor, according to the following provisional schedule:

  • Monday 17/08: Chiller n°1, Substation A
  • Tuesday 18/08: Chiller n°2, Substation A
  • Wednesday 19/08: Chiller n°3, Substation A
  • Thursday 20/08: Chiller n°4, Substation B
  • Friday 21/08: Chiller n°5, Substation B
  • Monday 24/08: Chiller n°6, Substation B
  • Monday 31/08: Chiller n°7, Substation C
  • Tuesday 01/09: Chiller n°8, Substation C

Substations A and B supply the chilled water network of the computer halls.
Substation C supplies the chilled water network of the operator halls and the air handling units.

Impact: None expected

Start: 17-Aug-2020 0630Z (0830LT)
End: 01-Sep-2020 0330Z (0530LT)
[DC3] Load tests of generators and PLC software updates

As part of the preventive maintenance of the datacenter, we will carry out a redundancy test of the datacenter infrastructure by running generator tests, in sequence, on the following electrical chains: A / B / C / D / F1 and F2.

During the load tests on electrical chains A / B / C and D, we will also perform the F1 line-up backup test.
Description of operations:

  • 25/08/2020:
    • Morning: Load test on electrical chain A, with F1 line-up backup test.
    • Afternoon: Load test on electrical chain B, with F1 line-up backup test.
  • 26/08/2020:
    • Morning: Load test on electrical chain C, with F1 line-up backup test.
    • Afternoon: Load test on electrical chain D, with F1 line-up backup test.
  • 27/08/2020: As part of the mobile chillers installation, we will update the PLC software on chains F1 and F2, then run load tests on chains F1 and F2.

During the F1 line-up backup test, we will simulate a failure of the generators in order to validate the F1 line-up backup scenario. Our teams and our usual maintenance contractors will be mobilized to ensure the operation runs smoothly.

Impact: None expected

Start: 25-Aug-2020 0600Z (0800LT)
End: 27-Aug-2020 1600Z (1800LT)

Past Incidents

Tuesday 4th August 2020

DNS Resolvers [DNS] Instance name change from cloud.scaleway.com to instances.scw.cloud

We are planning a change in the DNS domain used by all Scaleway Instances on September 28, 2020. This message is relevant to you if you use public or private DNS names ending in pub.instances.scw.cloud or priv.instances.scw.cloud to address your instances in your infrastructure design.
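
If your configuration still references names under the old cloud.scaleway.com domain, one way to prepare for the cutover is to confirm that the equivalent instances.scw.cloud names resolve from wherever your workloads run. A minimal Python sketch using only the standard library; the instance ID below is a placeholder, and the exact name layout depends on your own instances:

    import socket

    # Placeholder instance ID for illustration; substitute your own.
    # Note: priv.instances.scw.cloud names typically resolve only from
    # inside the Scaleway network.
    names = [
        "11111111-2222-3333-4444-555555555555.pub.instances.scw.cloud",
        "11111111-2222-3333-4444-555555555555.priv.instances.scw.cloud",
    ]

    for name in names:
        try:
            addrs = sorted({info[4][0] for info in socket.getaddrinfo(name, None)})
            print(f"{name} -> {', '.join(addrs)}")
        except socket.gaierror as exc:
            print(f"{name} -> resolution failed: {exc}")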

===================

09.28.20 0600Z (0800LT)

Our old DNS service will be stopped

08.04.20 0600Z (0800LT)

Our new DNS service is now fully working

DC3 [DC3] Multiple bays in room 4-4 unavailable

We've noticed that some bays are unavailable in room 4-4 (B19, B20, B21).

The dedicated team has been alerted.

We will update this status as soon as possible

===================

08.04.2020 1205Z (1405LT)

Issue has been solved by our engineers

08.04.2020 0640Z (0840LT)

Issue has been escalated to local team

Monday 3rd August 2020

No incidents reported

Sunday 2nd August 2020

PAR1 Compute API [Scaleway] PARIS - Hypervisor down

We have identified an issue with one of our hypervisors.

Impacted instances are unreachable, and administrative actions (stop, reboot, etc.) are not executed properly, resulting in the instances getting stuck.

08.03.20 0730Z (0930LT)

The issue was related to the IPMI card. It has been fixed. Everything is back to normal.

08.03.20 1300Z (1500LT)

Issue has been escalated to our local team

Saturday 1st August 2020

Object Storage [SCW] Object Storage - Disconnection and Low Performances

PAR and AMS regions: We are currently investigating the source of major slowdowns affecting Object Storage.

===================

08.02.20 1624Z (1824LT)

Our teams have been able to stabilize the services.
If you notice any issue(s) please contact our support team.

08.01.20 1700Z (1900LT)

Issue has been escalated to local team

Dedibackup [Dedibackup] Connection issues

Some of our Dedibackup servers are unavailable at the moment; you may experience issues when trying to connect to your backup FTP space.

===================

08.03.20 1210Z (1410LT)

Issue has been solved by our product team.

08.01.20 1100Z (1300LT)

Issue has been escalated to local team.

Friday 31st July 2020

Console [DEDIBOX] ESXi installations can't be completed

We're currently experiencing an issue with ESXi installations, which cannot be completed. Installations are stuck at the configuration step, with a wrong boot option.

===================

08.03.20 1200Z (1400LT)

ESXi 6.0 being EOL, please try installing your server again using a more recent version, and don't hesitate to contact our support team should you experience any issues.

07.31.20 1630Z (1830LT)

Issue has been escalated to local team

DC5 [DC5] IPMI issue in room 1-1

We've noticed that some IPMI interfaces are not reachable at DC5, room 1-1.
Actions such as reboot, installation, etc. are not available.
The dedicated team is already working on it.
We will update this status as soon as possible.

===================

07.31.20 1530Z (1730LT)

Issue has been fixed, linked to https://status.scaleway.com/incident/927

07.31.20 1500Z (1700LT)

Issue has been escalated to local team

DC3 Internal issue with one of our infrastructure servers in DC3

Following an internal outage, multiple admin actions (reboot, installations, ...) could not be performed on DC3-based services.
Issue has been fixed and services are stable again.

Should you still face any issues, please contact our Support team directly by ticket.

===================

31/07/20 1240Z (1440LT)

We noticed that resolvers in DC3 were still not properly answering.
Issue has now been fixed.

31/07/20 1230Z (1430LT)

Issue has been fixed, actions are correctly performed again.

31/07/20 1200Z (1400LT)

Issue has been escalated to product team.

Object Storage Service instabilities following internal outage

Following an internal outage, we noticed stability issues on the following products:

  • Object Storage :
    Partial loss of traffic in PAR.
  • Container Registry :
    Unavailability in PAR.
  • Compute VMs :
    Cannot perform some actions (start/stop/snapshot creation).
  • Kapsule :
    Cannot start new nodes or scale down/up existing clusters in PAR.
  • Database :
    Automatic backup and restoration unavailable on AMS.
    Provisioning may also fail in PAR, but no impact on running instances.

We are working with all teams involved in order to identify impacted services, and will keep updating this status as we go.

===================

08.02.20 1423Z (1623LT)

Our teams have been able to stabilize the services.
If you notice any issue(s) please contact our support team.

08.02.20 1215Z (1415LT)

We have noticed that some issues persist. Our team is currently investigating. Thank you for your understanding.

07.31.20 1315Z (1515LT)

Our teams have been able to restart all services and ensure they were stable again.
If you are still experiencing any issue on your end, please directly contact our support team by ticket about it.

07.31.20 1130Z (1330LT)

Issue noticed on a few infrastructure servers.
We are working with the teams involved to fully identify impacted services.
Status will be updated as soon as we have additional details.

Thursday 30th July 2020

[DC3] Dedibox resolvers

The IPv6 resolver 2001:bc8:1::16 will be updated.

Impact: Queries on the Dedibox resolvers may be impacted for a short time (<30 seconds) during the upgrade operation (see the resolver fallback sketch after this notice).

Start: 30-Jul-2020 1200Z (1400LT)

Duration: under a minute
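
If even a sub-minute resolver blip matters to your workloads, the usual mitigation is to have a second resolver configured so queries fail over during the upgrade. A minimal sketch, assuming the third-party dnspython package; only the 2001:bc8:1::16 address comes from this notice, while the fallback resolver and the queried name are placeholders:

    import dns.exception
    import dns.resolver  # assumed third-party dependency (pip install dnspython)

    PRIMARY = "2001:bc8:1::16"         # resolver being upgraded (from this notice)
    FALLBACK = "2001:4860:4860::8888"  # placeholder fallback; use your own secondary

    def resolve_with_fallback(name, rtype="A"):
        """Try the primary resolver first, then fall back if it stalls or fails."""
        for server in (PRIMARY, FALLBACK):
            resolver = dns.resolver.Resolver(configure=False)
            resolver.nameservers = [server]
            resolver.lifetime = 2.0  # keep any stall shorter than the upgrade window
            try:
                return [r.to_text() for r in resolver.resolve(name, rtype)]
            except dns.exception.DNSException:
                continue  # try the next server
        raise RuntimeError(f"all resolvers failed for {name}")

    print(resolve_with_fallback("scaleway.com"))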

Wednesday 29th July 2020

No incidents reported