We are currently working on a project to estimate software reliability, and we want to account for uncertainty factors in our calculation. To do that, we are trying to identify those factors. If you have any suggestions, please share them.
First, you need a reliability metric (or grid of metrics) of some sort and then empirical data about it or about other measures that may correlate with it.
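For example, if the metric you settle on is mean time between failures (MTBF), the empirical side might start out as simply as this minimal sketch (the inter-failure times here are hypothetical placeholders, not real data):

```python
# Minimal sketch, assuming the chosen metric is MTBF and that we have
# observed inter-failure times (in hours) from test or field data.
inter_failure_hours = [120.0, 85.5, 200.0, 64.0, 150.0]  # hypothetical sample

mtbf = sum(inter_failure_hours) / len(inter_failure_hours)
failure_rate = 1.0 / mtbf  # failures per hour, assuming a roughly constant rate

print(f"MTBF: {mtbf:.1f} h, failure rate: {failure_rate:.4f} failures/h")
```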
One thing that has caught my eye is non-feature-request bug reports: the rate of arrivals versus repairs over time, and the bugs that attract significant user pile-ons yet remain open for extended periods of time. Also watch out for "won't fix" resolutions and bugs closed only because users don't do the verification work.
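To make that concrete, here is a small sketch of the arrivals-versus-repairs view; the bug records are hypothetical stand-ins for whatever your tracker exports:

```python
# Sketch of monthly bug arrivals vs. repairs, assuming a hypothetical export
# of bug records with 'opened' and 'closed' dates from an issue tracker.
from datetime import date
from collections import Counter

bugs = [  # hypothetical data; a real export would come from your tracker
    {"opened": date(2023, 1, 5), "closed": date(2023, 1, 20)},
    {"opened": date(2023, 1, 7), "closed": None},            # still open
    {"opened": date(2023, 2, 2), "closed": date(2023, 3, 1)},
]

arrivals = Counter(b["opened"].strftime("%Y-%m") for b in bugs)
repairs = Counter(b["closed"].strftime("%Y-%m") for b in bugs if b["closed"])

for month in sorted(set(arrivals) | set(repairs)):
    net = arrivals.get(month, 0) - repairs.get(month, 0)
    print(f"{month}: opened={arrivals.get(month, 0)} "
          f"closed={repairs.get(month, 0)} net={net:+d}")
```

A persistently positive net figure means the backlog is growing faster than repairs, which is one of the warning signs described above.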
In this case, I think we're looking at reliability of the production and maintenance of the software, and indirectly reliability of the software itself.
There are still problems with qualifying reliability. Defect reports about loss or corruption of data, or about lost effort, are clearly more serious, and then there are defects that render a software product unfit for purpose only under specific circumstances and use cases. Those may be very difficult to get at.
So what are the qualities and measures that one is attempting to sample/predict?
Check the software using the extremes of the parameters it was designed to run within; in other words, push the software to its limits to make it fail. Second, try repeatability: if you see a failure, rerun the program until you are able to reproduce the same error. Third, when errors happen, collect the frequency of such occurrences.
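A minimal sketch of those three steps might look like the following; `system_under_test` and the design range are hypothetical stand-ins for your own code and its specified limits:

```python
# Sketch of the three steps: exercise a function at the extremes of its
# designed input range, rerun any failure to check repeatability, and
# count how often each error occurs.
from collections import Counter

def system_under_test(x: int) -> float:
    # Hypothetical function that fails at the upper boundary of its range.
    return 1.0 / (x - 100)

DESIGN_RANGE = (0, 100)   # extremes the software was designed to run within
RETRIES = 5               # reruns used to check repeatability

error_counts = Counter()
for value in DESIGN_RANGE:          # step 1: push to the limits
    for _ in range(RETRIES):        # step 2: rerun to reproduce the failure
        try:
            system_under_test(value)
        except Exception as exc:    # step 3: record frequency of each error
            error_counts[f"{type(exc).__name__} at x={value}"] += 1

print(error_counts)  # e.g. Counter({'ZeroDivisionError at x=100': 5})
```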
Another factor that can affect software reliability is the frequency of requirements changes. Technology changes, device failures, political instability, user behavior, and so on can also affect reliability as uncertainty factors.
I think that to account for some of these uncertain factors, the estimation method should use hybrid calculations (fuzzy and stochastic).
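As a rough illustration of what "hybrid" could mean in practice (this is only a sketch under assumed numbers, not a defined method): sample an uncertain failure density stochastically via Monte Carlo, then apply a fuzzy membership function to express how far each sampled outcome counts as "reliable".

```python
# Rough sketch of a hybrid fuzzy/stochastic estimate. The failure-density
# distribution and the fuzzy membership breakpoints are assumed values
# chosen only for illustration.
import random

def reliable_membership(failures_per_kloc: float) -> float:
    """Fuzzy degree to which a failure density counts as 'reliable'.
    Fully reliable below 1 failure/KLOC, not at all above 5 (assumed breakpoints)."""
    if failures_per_kloc <= 1.0:
        return 1.0
    if failures_per_kloc >= 5.0:
        return 0.0
    return (5.0 - failures_per_kloc) / 4.0  # linear ramp between the breakpoints

random.seed(0)
samples = [random.gauss(2.5, 1.0) for _ in range(10_000)]          # stochastic part
memberships = [reliable_membership(max(s, 0.0)) for s in samples]  # fuzzy part

print(f"Expected 'reliability' membership: {sum(memberships) / len(memberships):.2f}")
```

The stochastic part captures the spread of possible failure densities, while the fuzzy part captures the vagueness of what "reliable enough" means; averaging the memberships gives one combined indicator.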