Summary:

In this paper, the authors propose a bimodal multicast protocol that offers good scalability and predictable reliability even under highly perturbed conditions; this predictability can also be understood as a form of weak real-time guarantee. They show that the protocol's behavior can be predicted given simple information about how processes and the network behave most of the time, and that the reliability prediction is strong enough to support a development methodology. Their studies include a mixture of experimental work on an SP2, simulation, and experiments with a bimodal multicast implementation for LANs.
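
For context, the protocol combines an unreliable best-effort multicast with rounds of anti-entropy gossip, in which each process periodically contacts a randomly chosen peer and missing messages are retransmitted. Below is a minimal sketch of the gossip phase, assuming a simplified push-style exchange (the names, data structures, and direct push of missing messages are illustrative, not the paper's implementation):

```python
import random

def gossip_round(processes, loss_rate=0.05):
    """One anti-entropy round: each process picks one random peer and
    (in this simplified push model) forwards any messages the peer is
    missing; each gossip exchange is lost with probability loss_rate."""
    for p in processes:
        peer = random.choice([q for q in processes if q is not p])
        if random.random() < loss_rate:
            continue  # gossip message lost (assumed i.i.d. loss)
        peer["msgs"] |= p["msgs"] - peer["msgs"]  # retransmit what the peer lacks

# Usage: 50 processes; one initially holds message "m" after the
# unreliable multicast phase; run a fixed number of gossip rounds.
procs = [{"msgs": set()} for _ in range(50)]
procs[0]["msgs"].add("m")
for _ in range(10):
    gossip_round(procs)
print(sum("m" in p["msgs"] for p in procs), "of", len(procs), "processes delivered m")
```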

Pros:

A major strength of the paper is that it offers bimodal multicast as a genuinely new approach to multicast reliability.

The evaluation is also strong: it combines experimental work on an SP2, simulation, and experiments with a bimodal multicast implementation for LANs.

The analysis of the recurrence relation that bounds the probability of protocol state transitions between successive rounds is nicely done.
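
As a concrete illustration of this style of analysis, consider a simplified push-only epidemic model under the paper's i.i.d. loss assumption: with s infected processes out of n, each infected process gossips to one uniformly chosen peer and each gossip message is lost with probability ε, so a given uninfected process stays uninfected with probability q = (1 - (1 - ε)/(n - 1))^s, and the number of newly infected processes is Binomial(n - s, 1 - q). The sketch below computes the resulting one-round transition probabilities (a hypothetical simplification, not the paper's exact recurrence):

```python
from math import comb

def transition_prob(n, s, k, eps):
    """P[s infected -> s + k infected] in one gossip round, assuming each
    infected process pushes to one uniform random peer and each gossip
    message is independently lost with probability eps (simplified model)."""
    q = (1 - (1 - eps) / (n - 1)) ** s  # a given uninfected process stays uninfected
    u = n - s                           # number of currently uninfected processes
    return comb(u, k) * (1 - q) ** k * q ** (u - k)

# Usage: with n = 50 processes, s = 10 already infected, and eps = 0.05,
# compute the distribution over the number of new infections next round.
n, s, eps = 50, 10, 0.05
dist = [transition_prob(n, s, k, eps) for k in range(n - s + 1)]
print(f"sum of probabilities: {sum(dist):.6f}")  # ~1.0: a valid distribution
print(max(range(len(dist)), key=dist.__getitem__), "new infections most likely")
```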

Cons:

In the analysis under a fixed message-failure rate, the authors initially assume that there are no faulty processes and that message-delay failures occur with probability exactly ε, no more and no less. This assumption excludes runs in which the system actually exhibits a lower (more reliable) message-failure rate than ε.

Similarly, when predicting latency to delivery, they first assume that processes do not crash.

Thoughts for further development:

One possible way to enhance the evaluation would be to relax the simplifying assumptions one at a time. That said, even though the model makes simplifying assumptions, it still yields accurate predictions about real-world behavior.

Critiques/Questions:

I’m interested to see how accurate the findings could be if no simplifying assumptions were made. Would it be worth addressing them to obtain very accurate predictions?
