Some systems are experiencing issues

Past Incidents

Friday 8th April 2022

No incidents reported

Thursday 7th April 2022

Infrastructure Sydney zone is unreachable

SYD zone is unreachable. We are investigating.

EDIT 18:23 UTC - the SYD zone (provided by OVH) appears to be reachable only from within the OVH network

EDIT 18:30 UTC - we are waiting for our provider's feedback

EDIT 19:00 UTC - fixed; see the OVHcloud incident report: https://network.status-ovhcloud.com/incidents/j5vzf90dpzcc

Wednesday 6th April 2022

No incidents reported

Tuesday 5th April 2022

Access logs Metrics / Access logs: data points are currently delayed

Metrics and access logs are currently delayed. Data points are queued and will be processed as soon as possible. This may lead to some series missing recent data.

Edit 10:27 UTC: The delay is now resolved. Sorry for the inconvenience.

Monday 4th April 2022

Reverse Proxies [RETROACTIVE][PAR] Add-on reverse proxy unavailability

An add-on reverse proxy in the PAR zone was unreachable for 15 minutes. The initial restart failed, hence the extended downtime.

This should now be resolved. The seven other reverse proxies kept working as usual.

Sunday 3rd April 2022

RabbitMQ shared cluster All deployments are stopped

Deployments are broken. We are investigating why.

07:40: The cause has been found and fixed.

Access logs Metrics & Access logs are experiencing issues

We have identified issues with our metrics and access logs storage: certain metrics and access logs are not accessible.

The team has found the origin. We are working on a fix.

Infrastructure VMs are crashing on some hypervisors

Live updates:

Some hypervisors are experiencing issues with qemu. VMs are randomly crashing.

We are investigating.

  • 03:23: It looks like too many processes are started and systemd is killing qemu threads (see the sketch below this list).
  • 03:30: We suspect a recent update to be causing the thread exhaustion on the HVs.
  • 03:45: We started applying a patch to revert the update.
  • 04:07: We finished checking everything. The HVs look fine now.
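
For context, systemd caps the number of tasks (processes and threads) a unit may own via TasksMax, backed by the cgroup pids controller; once a unit hits that limit, new threads cannot be spawned. Below is a minimal diagnostic sketch assuming a cgroup v2 layout; the machine.slice path and the 90% threshold are illustrative assumptions, not our actual tooling:

```python
# Minimal diagnostic sketch (assumptions: cgroup v2 mounted at /sys/fs/cgroup,
# and machine.slice as the cgroup holding the qemu processes).
from pathlib import Path

CGROUP = Path("/sys/fs/cgroup/machine.slice")  # assumed location of the VM cgroup

pids_current = int((CGROUP / "pids.current").read_text())
pids_max = (CGROUP / "pids.max").read_text().strip()  # "max" or an integer

print(f"tasks in use: {pids_current}, limit: {pids_max}")
if pids_max != "max" and pids_current >= 0.9 * int(pids_max):
    print("WARNING: near the pids limit; new qemu threads will fail to spawn")
```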

Post Mortem:

Incident summary

On the 4th of April, some new deployments could not be completed by the CCOS (Clever Cloud Operating System) orchestrator.

A few days ago, we introduced a new notification subsystem, required to enable the Network Groups feature. This new subsystem caused the hypervisor agents to initiate new connections to the messaging component.

An issue in the proxy layer, which did not properly close connections, caused connections to stack up until the pooler was saturated. This made the agents accumulate too many processes on the hypervisor machines for too long, preventing new processes from being spawned.

Our hypervisor controller was therefore unable to spawn new threads, which left new deployments unable to complete. It also prevented the running virtual machines from spawning new threads, crashing some of them.
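
To illustrate the failure mode (a toy model, not our actual proxy code): with a fixed-size connection pool, a proxy that borrows connections without ever closing or returning them drains the pool, after which every later caller blocks:

```python
# Toy model of the saturation: a fixed-size pool drained by a proxy
# that never returns its connections (not the actual proxy code).
import queue

POOL_SIZE = 4
pool = queue.Queue(maxsize=POOL_SIZE)
for i in range(POOL_SIZE):
    pool.put(f"conn-{i}")

def relay_notification(return_connection: bool) -> None:
    conn = pool.get(timeout=1)  # blocks (then times out) once the pool is drained
    # ... forward the notification over `conn` ...
    if return_connection:
        pool.put(conn)  # a well-behaved proxy gives the connection back

for _ in range(POOL_SIZE):
    relay_notification(return_connection=False)  # leaks every connection

print("pool drained:", pool.empty())  # True: the next caller would block,
# which is when agents start stacking processes on the hypervisors
```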

Short term resolution

Network Groups being in ALPHA, we immediately decided to roll back their availability, pushing a non-blocking version which does not rely on our messaging layer.

Long term resolution

Two different actions are being rolled out.

  • The first one is a patch, currently being tested on a dedicated deployment, which ensures the garbage collection of connections on the messaging service's proxy layer.
  • The second one targets the hypervisor agent with an architectural change to prevent too many processes from being spawned: a dedicated driver has been set up as a service that maintains a single connection and a single process, instead of spawning an on-demand process for each notification (see the sketch after this list). This modification would avoid any issue involving the messaging service, even for problems other than connection handling.
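
As a rough sketch of that single-connection driver pattern (the connection object below is a stand-in; the real agent and messaging client are not shown here):

```python
# Sketch of the single-connection driver: one long-lived connection and one
# worker thread, instead of one short-lived process per notification.
import queue
import threading

class NotificationDriver:
    def __init__(self, connect):
        self.conn = connect()  # opened once, kept for the life of the service
        self.inbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def notify(self, message) -> None:
        self.inbox.put(message)  # callers enqueue; nothing is spawned per call

    def _run(self) -> None:
        while True:
            message = self.inbox.get()
            self.conn.send(message)  # every notification reuses the connection
            self.inbox.task_done()

class FakeConn:  # stand-in for the actual messaging client
    def send(self, message):
        print("sent:", message)

driver = NotificationDriver(connect=FakeConn)
driver.notify("deployment-started")
driver.inbox.join()  # wait for the worker to flush the queue
```

Since the driver's footprint is constant (one process, one connection), a misbehaving messaging layer can no longer exhaust the hypervisor's task limit.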

Saturday 2nd April 2022

No incidents reported