I am currently working on software effort estimation. I feel that the intellectual capability (or intelligence level) of a software developer is the deciding factor as far as the effort involved is concerned.
That is an interesting question that I am also struggling with, especially in the context of developing software development AND research skills. We are currently looking at critical thinking ability/skills as a proxy, for which there are existing instruments (see the link below, for instance). Might be worth considering, perhaps.
There is also a Pearson white paper by Lai (2011) that seems relevant, http://images.pearsonassessments.com/images/tmrs/CriticalThinkingReviewFINAL.pdf
Is basic intelligence a better or different measure than experience?
In the 1960s and 1970s, before you could get into a commercial programming position or training program, it was necessary to "pass" a programmer aptitude test. Most of these tests were established to have some type of bias. From memory, I believe the bias was to favor those with some degree of dyslexia.
As an educator, I have observed that the best developers are frequently asymmetric learners, meaning that they do well in math and CS but barely pass other subjects. How this relates to IQ, I don't know.
I have not seen any data indicating that critical thinking improves software development skills. My personal opinion is that most of the training in this topic promotes group think.
Two other intangible areas that you may want to look at are passion for learning and training. Software development is known as a learning activity, so there might be something there. Investigating where the developer went to school and the quality of that experience might also be worthwhile. Again, is it IQ or experience?
Just a guess, but establishing the relationship between IQ and development expertise is probably a 5- to 10-year project to pin down all of the variables. Assuming projects of low complexity, i.e. one month of effort for one person, it would require at least 100 observations (people; 54% female and 46% male) to establish the relationship. From my experience, I don't think there is a linear relationship.
My premise here is that any software developer must be an intellect. Therefore, while making an effort estimate (which is approximate), it is good enough if we are able to include this element as well, even approximately, via some metric. This was exactly my contention.
Thank you for the nice analysis that you have made of this issue. Being a teacher, I agree with your observations.
I am very interested in the human side of software engineering, and your question addresses this area. Based on the literature I have read over the last few years, issues like the level of intelligence require composing a multidisciplinary team with experts from different fields, such as psychology, sociology, or education.
I feel that this is a highly interesting research topic, but as Carl stated, it requires several years of inquiry and, in my opinion, the inclusion of experts from fields such as psychology, sociology, or education on your team.
And finally, I disagree with your premise that any software developer must be an intellect. Software engineering is now an industry requiring a large number of profiled experts. However, based on my experience as a university professor and my connections with the software industry, I cannot say that every software developer is an intellect. Being an intellect requires a broad multidisciplinary education, which in addition to engineering knowledge also includes knowledge of philosophy, psychology, sociology, history, etc.
With this line of thinking in mind, I encourage you to continue with your interesting research topic. Good luck.
This is a very interesting research topic. However, the first thing that you need to define is the term "intellect". As far as I know, this term has not been defined formally.
Probably critical thinking is a part of it, but it is not the same.
When effort estimation is done, productivity measures from historical baselines for the particular technology are considered: the KLOC/person-month or FP/person-month data from teams that have delivered is analyzed. The baselines are revised from time to time so that new projects are measured against the current baseline.
Most of the time, when you perform effort estimation (during presales), you will not have the "team" with you unless you have an identified bench available. The team is mostly hired or made available later. The estimate (rather, the productivity numbers) is largely based on the historical capability of the organization in the particular technology/domain/complexity.
Skill measurement (both soft skills and hard skills) is done for all team members, based on which training plans are made. The improvement of team members is tracked using a skill index (pre- and post-training). Based on the needs identified, the training plan may differ from person to person.
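The baseline-driven estimation described above boils down to dividing an estimated size by a historical productivity figure. A minimal sketch, with invented numbers purely for illustration:

```python
# Hypothetical sketch of baseline-driven effort estimation:
# effort = estimated size / historical productivity.
# All figures below are fabricated for illustration.

def estimate_effort(size_kloc, kloc_per_person_month):
    """Effort in person-months, given size and a productivity baseline."""
    return size_kloc / kloc_per_person_month

# Historical baseline for a given technology, revised periodically
# (e.g. from projects the organization delivered last year).
baseline_kloc_per_pm = 2.5

# New project estimated at 30 KLOC.
effort_pm = estimate_effort(30, baseline_kloc_per_pm)
print(round(effort_pm, 1))  # 12.0 person-months
```

The same shape works with FP/person-month instead of KLOC/person-month; only the size and productivity units change.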
The case in point is effort estimation of small simulation projects. Keeping other things constant across projects (the language, the architecture of the system, etc.), I need to find a metric for the intelligence level or, say, problem-solving capability. I presuppose that a test on programming language skill, problem-solving skill, and domain knowledge of the area the simulation project targets would suffice. Kindly guide me on this.
Thank you for the response. The effort is the time consumed to build a simulation project (simulation of a certain physical process, hydrodynamic simulation, etc.). Students of the higher semesters are the developers.
The other parameters are lines of code, reusable code, cyclomatic complexity, algorithmic complexity, function point estimates, the CGPA obtained by the student developers up to this point in time (this accounts for overall knowledge gained to some extent), and lastly intelligence. I am stuck on this last attribute.
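As a hypothetical sketch of how attributes like these could feed an effort model: an ordinary least-squares fit over a few of the listed parameters. All data below is fabricated, and the choice of three features (KLOC, cyclomatic complexity, CGPA) is just an example, not a recommendation:

```python
import numpy as np

# Toy illustration (fabricated data): fit effort as a linear function
# of three of the attributes listed above. Each row is one student project:
# [KLOC, cyclomatic complexity, CGPA].
X = np.array([
    [1.2, 10, 8.1],
    [0.8,  6, 7.4],
    [2.0, 15, 8.9],
    [1.5, 12, 6.8],
    [0.5,  4, 9.0],
])
y = np.array([40.0, 25.0, 70.0, 55.0, 15.0])  # actual effort in person-hours

# Add an intercept column and solve the least-squares problem.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict effort for a new project with the same attribute vector
# (leading 1 is the intercept term).
new_project = np.array([1, 1.0, 8, 7.5])
predicted_effort = new_project @ coef
print(predicted_effort)
```

An intelligence or problem-solving score, once you settle on an instrument, would simply become one more column in `X`; whether it improves the fit is then an empirical question.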
The PSP (Personal Software Process, see https://en.wikipedia.org/wiki/Personal_software_process ) was promoted a long time ago. While the PSP was intended for self-assessment (and was subsequently "perverted" into CMMI, https://en.wikipedia.org/wiki/Capability_Maturity_Model_Integration ), you may think about your own variant (or "perversion" :) ) of the PSP to assess the "intellect" of your developers.
According to my experience, no metric will really be objective; all give results relative to the underlying intellectual model. But at least they might give relative ratings.
I think that it won't be easy to find one exact measure acceptable to many people. The problem itself is not easy, because we are trying to "judge" something that is, in my opinion, unmeasurable. Today's computer programs are far beyond those of the 1970s and 1980s, but already in 1976 P. Wegner wrote an interesting paper about paradigm changes in computer science (Research paradigms in computer science, https://www.inf.unibz.it/~calvanese/teaching/2015-09-PhD-RM/material/Wegner%20-%20Research%20Paradigms%20in%20CS%20-%20ICSE%201976.pdf) where he stated (Section 5): “It was pointed out by Dijkstra that the structural complexity of a large software system is greater than that of any other system constructed by man and that man's ability to handle complexity is severely limited”.
Since then a lot has changed in software development, but I think that everyone here will agree that complexity has only increased. For example, let's consider the case of operating systems. These programs now consist of hundreds of millions of lines of code, even when they are built as Open Source software. I'm convinced that in the case of Open Source software, where many people work together showing something like collective intelligence, the number of SLOC is even greater than in the case of Windows. Obviously, SLOC is not a good measure of software complexity (which one is, in fact?), but it tells us a lot about the effort expended by programmers.
I think that one possible solution to your interesting problem is an approach that allows one to see the structure of the developed software in time and space. I've tried to do something like this in one of my papers (https://www.researchgate.net/publication/272769090_Fractal_Properties_of_Linux_Kernel_Maps?_iepl%5BviewId%5D=ZdOrp8Fd73M6DaIScqMppPH8&_iepl%5BprofilePublicationItemVariant%5D=default&_iepl%5Bcontexts%5D%5B0%5D=prfpi&_iepl%5BinteractionType%5D=publicationTitle), but that was done in the context of the fractal dimension of software. You might also be interested in the Makao project (http://mcis.polymtl.ca/publications/2007/symp-paper.pdf). Makao allows showing the inner Linux structure as a graph, whose development can be measured by typical graph parameters (average distance, clustering coefficient, node degree, etc.).
If you are interested, we could set up a collaboration project in this field on RG. If these two approaches were connected, I think we would have very interesting results :)
The whole problem connects with software complexity. We have different approaches to software development, but no one knows how they influence software quality.
So nice of you, Mr. Dominik, for your words of wisdom. My research student and I have ventured into doing some research on first-hand data (not using any publicly available data). In this direction, we have successfully completed developing linear regression, non-linear regression, and soft-computing models for predicting software effort for small visualization-based projects and website development projects. This is the case in point. We have considered the academic accomplishments of the postgraduate students as one of the attributes (Cumulative Grade Point Average and Semester Grade Point Average). The results are of a mixed nature. But these metrics, we felt, will take into consideration the basic knowledge level (in subjects like Java, C, C++, HTML, and algorithm design and analysis). The research is in progress...
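When comparing such models, a common (though often criticized) accuracy criterion in the effort estimation literature is the mean magnitude of relative error (MMRE). A minimal sketch, with invented effort values purely for illustration:

```python
# Mean Magnitude of Relative Error (MMRE), a standard accuracy
# criterion for comparing effort estimation models.
# All effort values below are fabricated.

def mmre(actual, predicted):
    """Average of |actual - predicted| / actual across projects."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

actual_effort    = [40, 25, 70, 55, 15]   # person-hours
predicted_effort = [36, 30, 63, 60, 12]   # model output

print(round(mmre(actual_effort, predicted_effort), 3))  # 0.138
```

Reporting MMRE (or a companion measure such as the fraction of predictions within 25% of the actual value) for each of the linear, non-linear, and soft-computing variants would make the "mixed nature" of the results concrete and comparable.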
In my opinion, intellect is context-specific. Say, in a software industry setting, CGPA and other academic accomplishments do not carry over well as predictors.
It is so nice of you, Mr. Matthieu Vergne; I will surely go through your paper and the references that you have suggested. I will appropriately tweak the measure that I want to consider as of now.