I am aware that in P2P distributed storage systems, reliability is provided through redundancy, using either erasure codes or simple file replication. I am interested in whether there are other techniques that I have not heard of. Thank you.
I just found out about "Hierarchical Codes" and "Regenerating Codes". Are they just research results, or are they schemes accepted in practice (special cases of erasure codes, correct?)?
To increase reliability, you will always need some kind of redundancy. Thus, the question is only about the level of granularity, i.e. which objects are made redundant.
By definition, an erasure code is any code that allows the correction of erasures (known losses) and thus introduces redundancy. If you want, you could even regard file replication as a kind of erasure code (even though that view doesn't seem very useful).
In other words, to increase reliability you will always need a scheme for (forward) error correction, and thus, from a coding perspective, an erasure code. However, there are plenty of different approaches covered by this term, so it should be easy to find a new/different one (in case you are looking for one).
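To make this concrete, here is a minimal sketch (my own illustration, not something taken from the question) of the simplest non-trivial erasure code: a single XOR parity block over k equal-length data blocks, which lets you rebuild any one lost block. Reed-Solomon and the codes mentioned above generalize this idea to survive more simultaneous losses.

```python
from functools import reduce

def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode_with_parity(blocks):
    """Given k equal-length data blocks, return them plus one XOR parity block.
    Any single lost block (data or parity) can be rebuilt from the other k."""
    assert len({len(b) for b in blocks}) == 1, "blocks must have equal length"
    return list(blocks) + [reduce(xor_blocks, blocks)]

def reconstruct_missing(stored, missing_index):
    """Rebuild the block at missing_index by XOR-ing the k surviving blocks."""
    survivors = [b for i, b in enumerate(stored) if i != missing_index]
    return reduce(xor_blocks, survivors)

# Example: 3 data blocks + 1 parity block spread over 4 peers;
# one peer disappears and its block is recovered from the remaining three.
data = [b"AAAA", b"BBBB", b"CCCC"]
stored = encode_with_parity(data)
stored[1] = None                        # simulate a failed / unreachable peer
print(reconstruct_missing(stored, 1))   # b'BBBB'
```

The design point is that the file survives the loss of any single peer while storing only (k+1)/k times the original data, instead of the 2x (or more) required by full replication.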
Many researchers agree that file replication can be viewed as an erasure code with one redundant block... but considering how the redundant block is created and used for data reconstruction, I can't agree with this. So basically, we can do simple file replication, we can split a file and distribute its blocks (as in some RAID implementations), and we can use an erasure code (Reed-Solomon being the most widely accepted one). Are there any other approaches to providing file reliability that you know of?
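For what it's worth, the sense in which replication counts as an erasure code is simply the trivial (n, 1) repetition code: any 1 of the n copies suffices to reconstruct the file, whereas a (k+m, k) code needs any k of its k+m blocks. A hypothetical back-of-the-envelope comparison (my own numbers, assuming every block sits on an independent peer with availability p) shows the trade-off between storage overhead and reliability:

```python
from math import comb

def survival_probability(n, k, p):
    """Probability that at least k of the n stored pieces are reachable,
    assuming each piece lives on an independent peer with availability p.
    Replication is the k = 1 special case; a (k+m, k) erasure code
    needs any k of its k+m pieces."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.9  # assumed per-peer availability, purely illustrative

# Plain 3x replication: 3 pieces, any 1 suffices, 3x storage overhead.
print(survival_probability(3, 1, p))   # ~0.999

# (6, 4) Reed-Solomon-style code: 6 pieces, any 4 suffice, 1.5x overhead.
print(survival_probability(6, 4, p))   # ~0.984
```

Under these assumptions, replication buys slightly higher availability at twice the storage cost of the (6, 4) code; the further trade-offs around repair traffic when a peer is lost are, as far as I understand, exactly what Regenerating Codes and Hierarchical Codes try to improve.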