I use UptimeRobot to monitor the downtime of my websites. Yesterday morning I opened my email to find 92 alerts from one site over an 11-hour period starting at midnight. The causes were mostly HTTP 423 (Locked) or 404 (Not Found). I stopped the monitoring. Today the site is down again, scheduled to wake in more than an hour, and there is a notice that it will sleep again in 13 hours. So much for 1 h of sleep at 00:00 h.
My other site was down today for almost 6 hours. On opening it there was a message, “There’s nothing here”, which indicates no site is installed, and later the sleeping page. It was supposed to sleep at 00:00 h for 1 hour, but instead was down from 02:56 h to 08:40 h, with many other short downtimes besides.
One of the moderators said the systems were operating anarchically, which is an apt description. I have read many postings and the situation continues to be chaotic. Why was this planned sleeping not properly tested before it was implemented?
My sites have very low traffic. If the reason for implementing the sleeping was overloading, may I suggest that it be applied only to sites with high traffic? Those are also the sites most likely to move to paid hosting. However, since 000webhost operates so chaotically under the umbrella of Hostinger, this will cause second thoughts about moving to Hostinger.
Even this forum has been down today, with less than 54% availability so far, and returned 502 Bad Gateway or 504 Gateway Time-out many times over a period of several hours before I could save this posting!