Some systems are experiencing issues

About This Site

Maintenance operations and incidents affecting our services, networks and datacenters can be followed here.

Scheduled Maintenance
[RPN Switch] Switch maintenance DC5-A Rack B17

Location : DC5-A Rack B17

What is done : We will power off the RPN switch in DC5-A rack B17 in order to perform maintenance on it.

Impact : The RPN network will be unreachable during the maintenance.

Start : 17-Dec-2019 1100Z 1200LT

Duration : 60 minutes

Past Incidents

Tuesday 10th December 2019

[DC3] Maintenance on high-voltage delivery station DATA1 and sub-station A/C/F1, scheduled 3 days ago

Location : DC3

What is done : We will perform maintenance on the high-voltage cells of delivery station “DATA1” and sub-station “A/C/F1”, along with electrical protection tests. The electrical load of DC3 has to be supplied from delivery station “DATA2” before the maintenance. After the maintenance of “DATA1”, the load of sub-stations “A/C/F1” and “B/D/F2” will be switched back to “DATA1”. The electrical load will be supplied from generators during the high-voltage operations.

Impact : None

Start : 10-Dec-2019 0700Z 0800LT

Duration : 10 hours

Monday 9th December 2019

[DC3] Transfer of the electrical supply of sub-stations A/C/F1 and B/D/F2 from DATA1 to DATA2, scheduled 4 days ago

Location : DC3

What is done : The intervention consists in switching the supply of both sub-stations A/C/F1 and B/D/F2 to the DATA2 delivery station, in preparation for the preventive maintenance of DATA1 on 10-Dec-2019. The electrical load will be supplied from generators during the operations on the high-voltage cells.

Impact : None

Start : 9-Dec-2019 0700Z 0800LT

Duration : 10 hours

Thursday 28th November 2019

No incidents reported

Wednesday 27th November 2019

Web Cloud webhosting

One of our Cloud Web Hosting load balancers is failing. Some websites are currently unreachable. We are working on a fix.

==================

11.27.2019 1500Z (1600LT)

Issue has been fixed. Thank you for your patience. If you notice any issue on your Cloud hosting service, please contact our support.

11.27.2019 0800Z (0900LT)

Issue has been escalated again to the local team

11.26.2019 0845Z (0945LT)

The issue has been fixed. If you are still encountering an issue, please contact our support.

11.26.2019 0815Z (0915LT)

Issue has been escalated to the local team

Tuesday 26th November 2019

Console Internal issue with ticketing system

Our teams have noticed an internal issue affecting the display of tickets in our internal tools.
Our support team is currently unable to process pending requests while we investigate the issue.

Should you have any urgent issue, please get in touch with our support team directly through Slack :
http://scaleway-community.slack.com
Channels :

  • #community (english)
  • #community-fr (french)

===================

11.26.2019 1830Z (1930LT)

Issue has now been fixed.
Ticket creation is available again and we are able to process pending requests accordingly.
Our team will do its best to process all pending tickets as fast as possible.

11.26.2019 1720Z (1820LT)

Ticket creation has been disabled on the customer console, since we are unable to retrieve these tickets on our backend.
Please use Slack should you have any urgent request.
Our teams are still investigating the issue.

AMS1 Network RPN between AMS and DC3 down

Link between AMS1 and DC3 for the RPN network is down.

We will update this status as soon as we have additional details to share.

===================

11.27.2019 1430Z (1530LT)

All services have been back online since last night (around 1 a.m. CET).
Following further monitoring, traffic is now routed normally between AMS and PAR.

11.26.2019 1930Z (2030LT)

Part of the services are back online and the remaining ones should be back up during the evening.

11.26.2019 1100Z (1200LT)

Issue has been identified and is currently being worked on by our fiber provider.
However, we do not have any ETA yet.

11.26.2019 1000Z (1100LT)

The impacted operator is still working on a precise diagnosis of the incident.

11.26.2019 0800Z (1000LT)

Issue has been escalated to local team

Monday 25th November 2019

Object Storage [SCW] Object Storage - Disconnections and Low Performance

PAR and AMS regions

We are currently investigating the source of major slowdowns and disconnections affecting Object Storage.

===================

Post-mortem

On Friday 11.22 @ 5 PM GMT, a RAM-related incident occurred on several servers. This incident slowly cascaded into database unavailability and thus the inability to access some buckets. The incident continued to spread over the weekend despite our efforts and was finally addressed on Sunday 11.24 @ 9.30 PM GMT.

11.27.2019 1630Z (1730LT)

Issue has been fixed. Thank you for your patience. If you notice any issue with the service please contact our technical support.

11.25.2019 0830Z (0930LT)

Issue has been escalated to local team

Sunday 24th November 2019

No incidents reported

Saturday 23rd November 2019

No incidents reported

Friday 22nd November 2019

[Scaleway AMS clusters] Scheduled maintenance on hypervisors' network process, scheduled 3 weeks ago

Following an issue identified in our hypervisors' network configuration, we are going to restart the network process of multiple hypervisors during the day.
You may experience a very short outage while the process is restarted (a few seconds).

Our support team remains at your disposal should you have any question.

Start : 22-Nov-2019, Between 1300Z 1400LT and 1630Z 1730LT

Duration : A few seconds

Thursday 21st November 2019

Dedibackup [DC3] One Dedibackup server unavailable

One of our Dedibackup servers in DC3 is currently unreachable.
If you are unable to connect to your Dedibackup, you are affected by this outage.

Next update as soon as we have more information.

===================

11.21.19 0845Z (0945LT)

Issue has been escalated to local team