Some systems are experiencing issues

Past Incidents

Thursday 26th March 2020

Access logs: Metrics ingestion halted

Metrics ingestion is completely stuck at the moment. We are investigating.

11:02 UTC: Ingestion is back online. It is not yet clear exactly what went wrong, but it is most likely linked to yesterday's issue. A complete reboot of all storage nodes 'fixed' the issue, and those storage nodes now have 48 minutes of buffered data to ingest (a rough catch-up estimate follows the timeline below).

11:11 UTC: Ingestion delay is very close to normal.

11:17 UTC: Ingestion delay is back to normal.
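
A rough back-of-the-envelope on the catch-up, under assumptions the updates above do not confirm: if the 48 minutes of buffered data (B) started draining at 11:02 and the delay was back to normal by 11:17 (T ≈ 15 minutes), with new data still arriving in real time while the backlog drained at r times the real-time rate, then

\[ rT = B + T \quad\Rightarrow\quad r = \frac{B + T}{T} = \frac{48 + 15}{15} \approx 4.2 \]

That is, the pipeline would have had to ingest at roughly four times the real-time rate during the catch-up window. Treat this as illustrative only.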

Reverse Proxies: Requests hang during reverse proxy upgrade

Following a routine update on one of our public reverse proxies, some requests hung instead of being processed.

During the upgrade process, one of the workers of this reverse proxy continued to accept connections but did not process them, holding them open until the requests timed out. The issue was resolved by 11:10 UTC+1 and will be investigated further. This is the first time in months that the upgrade process has failed us, and we will take extra steps to prevent this issue and to detect it faster.
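
For illustration only, and not a description of our actual proxy stack: the sketch below shows, in Go, the kind of safeguard such extra steps could include. Explicit upstream and server-side timeouts bound how long a request can sit on a wedged worker, so a worker that accepts connections without processing them fails fast instead of holding requests open until the client gives up. Every name, address, and value here is an assumption.

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
        "time"
    )

    func main() {
        // Hypothetical upstream; stands in for whatever the proxy fronts.
        upstream, err := url.Parse("http://127.0.0.1:8080")
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(upstream)

        // Bound the upstream side: if a worker accepts the connection but
        // never starts replying, fail the request quickly instead of
        // holding it until the client's own timeout fires.
        proxy.Transport = &http.Transport{
            ResponseHeaderTimeout: 10 * time.Second,
            IdleConnTimeout:       30 * time.Second,
        }

        // Bound the client side as well; all values are illustrative.
        srv := &http.Server{
            Addr:              ":8443",
            Handler:           proxy,
            ReadHeaderTimeout: 5 * time.Second,
            WriteTimeout:      30 * time.Second,
        }
        log.Fatal(srv.ListenAndServe())
    }

Health-checking workers during the upgrade (the "detect it faster" part) would sit on top of this, but is out of scope for the sketch.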

Wednesday 25th March 2020

Access logs: Metrics ingestion disabled due to network instability

We are experiencing a significant network issue with the storage nodes of Metrics.

Because of this, we have temporarily disabled ingestion, which will make the problem easier to debug and fix.

17:26 UTC: The network issue seems to be gone; ingestion has been restarted.

17:31 UTC: Ingestion is running smoothly. We do not yet know what happened on the network side and are awaiting word from our provider; from our point of view, it looks like a congestion issue.

17:35 UTC: Ingestion delay is back to normal.

Tuesday 24th March 2020

No incidents reported

Monday 23rd March 2020

No incidents reported

Sunday 22nd March 2020

No incidents reported

Saturday 21st March 2020

No incidents reported

Friday 20th March 2020

No incidents reported