What is the scope of arithmetic progression in software effort/cost estimation? Is arithmetic progression used in estimating effort or cost in the software development process?
I'm not entirely sure of the rationale behind your question, but I'm guessing you're asking about linearity. If so, linearity is widely assumed, so the relationships are additive: if I do tasks A1, ..., An, then the total effort is sum(A1, ..., An).
There has been extensive debate about economies/diseconomies of scale, i.e. non-linearities. Much as it may not conform to our informal intuitions, there is no strong evidence for either, so a good working assumption (unless you have local evidence to the contrary) is a linear relationship between effort and size.
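One common way to make the linearity question concrete is a power-law model of the kind used in COCOMO-style estimation, effort = a * size^b, where b = 1 is linear, b > 1 a diseconomy of scale and b < 1 an economy. A minimal sketch, with coefficients made up purely for illustration:

```python
# Hypothetical power-law effort model: effort = a * size**b.
# b == 1.0 -> linear, b > 1.0 -> diseconomy of scale, b < 1.0 -> economy of scale.
# The coefficients are illustrative, not calibrated values.

def effort(size_kloc: float, a: float = 3.0, b: float = 1.0) -> float:
    """Estimated effort (person-months) for a project of size_kloc KLOC."""
    return a * size_kloc ** b

for b in (0.9, 1.0, 1.1):
    small = effort(10, b=b)
    large = effort(100, b=b)
    # Under linearity a 10x increase in size means 10x the effort; b != 1 bends that.
    print(f"b={b}: 10 KLOC -> {small:.0f} pm, 100 KLOC -> {large:.0f} pm "
          f"(ratio {large / small:.1f}x)")
```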
It is trivially true that if the expected duration for task A is two weeks, and the expected duration for task B is three weeks, then the expected duration for performing A then B is five weeks. But this arithmetic law isn't very useful when estimating a software project.
The uncertainty in a task tends to be multiplicative. If there is a 90% chance that A takes more than a week, and a 50% chance that it takes more than two weeks, then there is often a 10% chance that it will take more than four weeks. That is to say, if you want a simple model, choose a lognormal distribution.
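A minimal sketch of that claim, taking the figures above at face value and using scipy (my tool choice, not part of the original argument): fit a lognormal to the two quoted quantiles and check the third.

```python
import math
from scipy.stats import lognorm, norm

# Assumed quantiles from the text: P(T > 1 week) = 0.9, P(T > 2 weeks) = 0.5.
# Under a lognormal model, ln(T) is normally distributed with mean mu and std sigma.
mu = math.log(2.0)                              # median of 2 weeks => mu = ln 2
sigma = (mu - math.log(1.0)) / -norm.ppf(0.10)  # P(T > 1 week) = 0.9 pins down sigma

dist = lognorm(s=sigma, scale=math.exp(mu))     # scipy's lognormal parameterisation
print(dist.sf(4.0))                             # P(T > 4 weeks): roughly 0.10, as claimed
```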
Then if you ask "how long before there is a 90% chance that both A and B are complete?", the key question is whether the distributions are independent. If they are, adding duration estimates works fairly well; if they are not, the tail of the combined duration is much worse than the sum of the individual estimates suggests. Big software projects often fail because of risk factors that affect many tasks at once, e.g. poor stakeholder management.
For these reasons, drawing up a Gantt chart has little to do with really planning a software project.
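A quick Monte Carlo sketch of why correlation matters; the distributions and the strength of the shared risk factor are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two tasks with lognormal durations (median 2 weeks, sigma ~0.54, as in the sketch above).
mu, sigma = np.log(2.0), 0.54

# Independent case: each task gets its own log-scale noise.
a_ind = rng.lognormal(mu, sigma, n)
b_ind = rng.lognormal(mu, sigma, n)

# Correlated case: a shared "project risk" shock supplies half of each task's log-variance.
shared = rng.normal(0.0, sigma * np.sqrt(0.5), n)
a_cor = np.exp(mu + shared + rng.normal(0.0, sigma * np.sqrt(0.5), n))
b_cor = np.exp(mu + shared + rng.normal(0.0, sigma * np.sqrt(0.5), n))

# The 90th percentile of the total is noticeably later when the tasks share risk.
print("90th percentile, independent:", np.percentile(a_ind + b_ind, 90))
print("90th percentile, correlated: ", np.percentile(a_cor + b_cor, 90))
```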
I disagree that you should assume linearity. The larger the project, the more people; and the more people, the more complex the interactions and the management overhead. Hence there is generally a diseconomy of scale in software projects. Read "The Mythical Man-Month" by Fred Brooks for more.
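Brooks' point about interactions is often illustrated by the number of pairwise communication channels in a team of n people, n(n-1)/2, which grows quadratically while output grows at best linearly. A trivial sketch:

```python
# Pairwise communication channels in a team of n people: n * (n - 1) / 2.
# The quadratic growth is one common way to motivate a diseconomy of scale.
for n in (3, 10, 30):
    print(f"{n} people -> {n * (n - 1) // 2} channels")
```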
That's what people expect, but the evidence is surprisingly equivocal.
[1] R. D. Banker, H. Chang, and C. F. Kemerer, "Evidence on economies of scale in software development," Information & Software Technology, vol. 36, pp. 275-282, 1994.
[2] R. D. Banker and S. A. Slaughter, "A field study of scale economies in software maintenance," Management Science, vol. 43, pp. 1709-1725, 1997.
[3] B. A. Kitchenham, "The question of scale economies in software - why cannot researchers agree?," Information & Software Technology, vol. 44, pp. 13-24, 2002.
That's not to say that in particular settings, and at particular size ranges, one might not observe diseconomies of scale, or for that matter the reverse.
Thanks for these pointers - it's certainly clear that linearity is not to be assumed. The results seem to depend on many factors and can go either way. My personal experience has always shown a diseconomy of scale, but clearly that's not what some have encountered.