Scheduled Maintenance
[PAR] Update of Load Balancer IP Addresses

We've updated load balancer IP addresses for applications and websites hosted on Clever Cloud. The new IP addresses now in use are:


We are going to remove 4 IPs, which you must stop using before August 23rd, 2024:

After this date, your applications and websites will no longer be able to use these IP addresses.

We still recommend using CNAME DNS records where possible. To ensure that there is no disruption to your applications and websites, please make sure that your apex domain names are updated to point to the new IP addresses. You can update your apex domain names by editing the DNS records for your domain.
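As an illustration of the distinction above, a zone might look like the following. All names and addresses here are placeholders (RFC 5737 documentation IPs, not Clever Cloud's actual load balancer addresses — use the list from this announcement): subdomains can point at a CNAME, while the apex must use A records.

```
; example.com zone -- placeholder names and addresses
example.com.      3600  IN  A      203.0.113.10            ; apex: A records only
example.com.      3600  IN  A      203.0.113.11
www.example.com.  3600  IN  CNAME  app.example-host.io.    ; subdomain: CNAME preferred
```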


There should be no downtime for your applications or websites as a result of this change. However, if you do not update your apex domain names before August 23rd, your applications and websites may be unavailable.

What you need to do:

Review your apex domain names and ensure that they are pointing to the new IP addresses. If you are unsure how to update your apex domain names, please contact your domain registrar or Clever Cloud support.
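To review what your apex currently resolves to, you can compare its A records against the new IP list. A minimal sketch follows; the domain `example.com` and the IPs in `new_ips` are placeholders (RFC 5737 documentation addresses), not the actual values from this announcement.

```python
import socket

def resolve_a_records(domain: str) -> set:
    """Return the set of IPv4 addresses the domain currently resolves to."""
    infos = socket.getaddrinfo(domain, None, family=socket.AF_INET)
    # Each entry is (family, type, proto, canonname, (address, port)).
    return {info[4][0] for info in infos}

if __name__ == "__main__":
    # Substitute your own apex domain and the new IP list from the announcement.
    new_ips = {"203.0.113.10", "203.0.113.11"}  # placeholder IPs
    try:
        current = resolve_a_records("example.com")  # placeholder apex domain
    except socket.gaierror:
        current = set()  # resolution failed (e.g. no network access)
    stale = current - new_ips
    print("stale records:", stale if stale else "none")
```

If the script reports stale records, update the A records at your registrar before the removal date.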

For more information:

Please refer to the Clever Cloud documentation for more information about load balancers and DNS records. You can also take a look at the changelog entry about this change.
You can also contact Clever Cloud support if you have any questions.

Past Incidents

Monday 18th September 2023

No incidents reported

Sunday 17th September 2023

No incidents reported

Saturday 16th September 2023

No incidents reported

Friday 15th September 2023

No incidents reported

Thursday 14th September 2023

No incidents reported

Wednesday 13th September 2023

No incidents reported

Tuesday 12th September 2023

Access logs: Metrics and access logs storage layer issue

The storage layer has lost some nodes. We are investigating the issue.

EDIT 13:45 UTC: We have found a network issue that caused storage nodes to time out and then crash. Those nodes are now up and running, and we are beginning the recovery process.

EDIT 15:10 UTC: We have finished the recovery process and are now consuming the lag.

EDIT 18:52 UTC: We have almost consumed all of the data lag (an estimated 30 minutes remain), but there are still 2 hours of metadata lag.

EDIT 21:00 UTC: We have caught up on the data and metadata lag; querying is now open again.

API: Main API unreachability

Our main API is currently unreachable. We are aware of the issue and working towards bringing it back.

EDIT 12:56 UTC: The main issue is now resolved and the API is back online. We continue to see some errors and are working towards identifying their source.

EDIT 14:25 UTC: The API has stabilized but we are still looking for the origin of the troubles.

EDIT 13/09 09:03 UTC: The API is unreachable again; we are working on it.

EDIT 13/09 09:15 UTC: The API is now operational, the root cause has been identified.