
I don't know about Hetzner, but my experience with OVH dedicated servers and failures is this: they detect when the server is down, mainly when it's powered off or stops responding to ping, and then they boot into their debug distribution and run some checks on the machine. They don't monitor other health issues, however (how would they, since you are running your own system?), and therefore don't do anything until they detect a "down" status.

Some failures I experienced and had to monitor/detect myself: overheating (they replaced the thermal paste after I told them I saw strange readings in the CPU stats), a RAID disk failure, and high SSD wear (i.e. a partial failure with the server still running; they replaced the failed disks after I told them).
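Since the host won't catch a degraded software RAID for you, a common approach is a cron job that parses /proc/mdstat and alerts when an array has lost a member. A minimal sketch (the sample mdstat text below is fabricated for illustration; on a real box you would read the file directly):

```python
import re

def degraded_arrays(mdstat_text: str) -> list[str]:
    """Return names of md arrays whose status shows a missing member.

    In /proc/mdstat, a healthy 2-disk mirror reads "[2/2] [UU]";
    a degraded one reads e.g. "[2/1] [U_]" (underscore = missing disk).
    """
    degraded = []
    current = None
    for line in mdstat_text.splitlines():
        m = re.match(r"^(md\d+)\s*:", line)
        if m:
            current = m.group(1)
        elif current and "blocks" in line:
            status = re.search(r"\[(\d+)/(\d+)\]", line)
            if status and status.group(1) != status.group(2):
                degraded.append(current)
    return degraded

# Fabricated sample; in practice: Path("/proc/mdstat").read_text()
SAMPLE_MDSTAT = """\
Personalities : [raid1]
md0 : active raid1 sda1[0] sdb1[1]
      976630336 blocks super 1.2 [2/2] [UU]
md1 : active raid1 sda2[0]
      976630336 blocks super 1.2 [2/1] [U_]
unused devices: <none>
"""

if __name__ == "__main__":
    print(degraded_arrays(SAMPLE_MDSTAT))  # md1 is running on one disk
```

`mdadm --monitor` can do the same job with email alerts, but a parser like this is easy to wire into whatever alerting you already run.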

Most of the time the issues were resolved within 1-4 hours, even on the low-cost Kimsufi and SoYouStart offers, and even on weekends and at night. When the server is still running, they often require a shutdown before intervening.

I'm quite happy with this, as I am highly technical in these areas and like to look under the hood, but with dedicated servers you really do have to handle more maintenance/monitoring/planning yourself.



I don't think this is accurate. I have rented an OVH dedicated server (through SoYouStart) for about the last decade; let me share some maintenance experiences:

> They don't monitor other health issues however (how would they since you are running your own system?) and therefore don't do anything before they detect a "down" status.

My server has a hardware RAID card. I had one incident where OVH contacted me, said there was an issue with one of the drives, and told me they would reboot the server at X time to replace it. They did so, and the problem was solved with no requests or intervention on my part.

I had another incident where I was told the motherboard died. IIRC, it died around 1am my time and was replaced by 5am my time. They of course turned the system back on for me. I was asleep the whole time, and this was likewise solved with zero requests or intervention on my part.

Besides this, I can count on one hand the number of times an internet or power issue made my server unreachable. IMO, a great experience for a dirt-cheap host.

That all being said: OVH's IPv6 solution is laughably bad and is the single reason I would switch hosts if a better one with a North American presence appeared.


What you describe are hardware failures, and as I said, they do detect hardware failures. When the server goes down, they get on it themselves.

But some issues are not outright failures, and you have to handle them on your side.

Most of the time the RAID is software-based nowadays, for example.

IPv6 works fine for my many servers at OVH.


I didn't see that in what you wrote, but I did see the claim that they don't monitor RAID. Hence one reason I thought your comment was inaccurate, per my experience.


Technically, things like overheating they could monitor via IPMI (which they most likely use for out-of-band control anyway).
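For reference, that reading is one `ipmitool sensor` call away on the customer side too. A sketch that pulls temperature readings out of ipmitool's pipe-delimited output (the sample output below is fabricated; real sensor names vary by board):

```python
def cpu_temps(sensor_output: str) -> dict[str, float]:
    """Extract temperature readings (degrees C) from `ipmitool sensor` output.

    Each line is pipe-delimited: name | reading | unit | status | thresholds...
    Unreadable sensors report "na" as the reading.
    """
    temps = {}
    for line in sensor_output.splitlines():
        fields = [f.strip() for f in line.split("|")]
        if len(fields) >= 3 and fields[2] == "degrees C" and fields[1] != "na":
            temps[fields[0]] = float(fields[1])
    return temps

# Fabricated sample; on a real host you'd feed in the stdout of
# subprocess.run(["ipmitool", "sensor"], capture_output=True, text=True).
SAMPLE_SENSORS = """\
CPU Temp         | 52.000     | degrees C  | ok    | na | na | na | 95.000 | 100.000 | na
System Temp     | 31.000     | degrees C  | ok    | na | na | na | 80.000 | 85.000 | na
FAN1             | 4200.000   | RPM        | ok    | na | na | na | na | na | na
DIMM Temp        | na         | degrees C  | na    | na | na | na | na | na | na
"""

if __name__ == "__main__":
    print(cpu_temps(SAMPLE_SENSORS))
```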


The thing is, it physically wasn't overheating, because a process on the system (Intel/Ubuntu) was heavily throttling the CPU to keep it from overheating. So the machine was almost useless, very slow, but the temperature was fine. When the throttling mechanism was disabled, it did indeed overheat. It's only because of those throttling processes that I found out about the physical problem.
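On x86 Linux the kernel counts exactly this kind of event in sysfs, so you can alert on throttling even while the reported temperature looks fine. A sketch that reads the per-CPU counters (the path layout is the standard `thermal_throttle` sysfs interface; the demo builds a fabricated tree so it runs anywhere):

```python
from pathlib import Path

def throttle_counts(sysfs_cpu_dir: str = "/sys/devices/system/cpu") -> dict[str, int]:
    """Collect per-CPU thermal-throttle event counters from sysfs.

    On x86 Linux, each cpuN directory has thermal_throttle/core_throttle_count,
    which the kernel increments every time that core is thermally throttled.
    A nonzero and growing value means the CPU is being slowed to stay cool.
    """
    counts = {}
    pattern = "cpu[0-9]*/thermal_throttle/core_throttle_count"
    for counter in Path(sysfs_cpu_dir).glob(pattern):
        cpu = counter.parent.parent.name  # e.g. "cpu1"
        counts[cpu] = int(counter.read_text())
    return counts

if __name__ == "__main__":
    # Demo against a fabricated tree so this runs on any machine;
    # on a real Linux host, call throttle_counts() with no argument.
    import tempfile
    with tempfile.TemporaryDirectory() as tmp:
        for cpu, n in [("cpu0", 0), ("cpu1", 17)]:
            d = Path(tmp) / cpu / "thermal_throttle"
            d.mkdir(parents=True)
            (d / "core_throttle_count").write_text(f"{n}\n")
        print(throttle_counts(tmp))  # cpu1 has been throttled
```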



