As someone working in fault-tolerant computing systems, I'd put the question this way: can a highly available IT hardware infrastructure, with sound disaster recovery plans and redundancy, guarantee some level of high availability for the software running on it?
Yes, there are models for that, but you need to realize that there are different threats and failure modes you might want to protect against. Things can get complex very quickly when you mix HA and FT.
In the simplest form, FT hardware can mitigate faults confined to a single machine. RAID storage arrays can mitigate faults confined to physical media. HA clusters, in turn, can address faults that take down an entire machine (even an FT one) or a network.
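To see why stacking these tiers helps, here's a rough back-of-the-envelope sketch of how redundancy changes availability. It assumes component failures are independent, which real deployments often violate (shared power, correlated firmware bugs), so treat the results as optimistic upper bounds; the numbers are illustrative, not from any real system.

```python
# Illustrative sketch: availability of n redundant components where any
# one surviving component keeps the service up. Assumes independent
# failures, which is optimistic for real hardware.

def parallel_availability(a: float, n: int) -> float:
    """Probability that at least one of n components is up,
    given each is up with probability a."""
    return 1 - (1 - a) ** n

single = 0.99                              # one machine: "two nines"
pair = parallel_availability(single, 2)    # 1 - 0.01**2 = 0.9999

print(f"single node:    {single}")         # 0.99
print(f"2-node cluster: {pair}")           # 0.9999
```

The same formula applies whether the redundant components are disks in a RAID set or nodes in an HA cluster; what changes is the fault domain each tier covers.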
Then you can get into hybrid models, using HA clusters of FT nodes. Ideally, you would do so with heterogeneous OSes, to avoid the case where running code hits a driver bug that crashes Node A; if the same OS and application load is then restored on Node B, it is possible for Node B to crash after executing the same instruction sequence.
If you would like to go one step further, you can also add HA geo units, which are geographically distributed to protect against the failure of an entire data center. The complexity and investment are unbounded, so it really comes down to what the cost of downtime is and how much of an investment makes sense to prevent that service outage.
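One common way to ground that cost-of-downtime conversation is to translate each availability target into expected downtime per year. A quick sketch (the targets shown are just the conventional "nines", not figures from any particular deployment):

```python
# Illustrative sketch: expected downtime per year for common
# availability targets, useful when weighing the cost of each
# extra tier of redundancy against the cost of an outage.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 (ignoring leap years)

def downtime_minutes_per_year(availability: float) -> float:
    return (1 - availability) * MINUTES_PER_YEAR

for a in (0.99, 0.999, 0.9999, 0.99999):
    print(f"{a:.5f} -> {downtime_minutes_per_year(a):8,.1f} min/year")
# 0.99    -> 5,256 min/year (~3.65 days)
# 0.999   ->   525.6 min/year (~8.8 hours)
# 0.9999  ->    52.6 min/year
# 0.99999 ->     5.3 min/year
```

Multiply those minutes by your cost of downtime per minute and you have a ceiling on what each additional nine is worth to the business.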
Hope that gives you a sense of the possibilities out there.