I am interested in the validation of full-scale research designs based on systematically conducted pilot studies related to a particular family of information systems (socialized cyber-physical systems). What we typically do is design the pilot around the assumed uncertain or critical elements of the full-scale study. Your description does not give me enough information about whether your pilot research design was developed with this objective in mind, or simply as a scaled-down version of the full-scale study. Information about which research constructs have been considered, and with what objective, is also missing. Please complete your question; otherwise it is very difficult, if not impossible, to figure out which validation method would be most relevant in your case.
Thank you for your response. Here is a short description of my model. First, the aim of my study is to measure the success of an information system (IS) project through a survey of a number of the internal project stakeholders. In my modelling process, I adopted DeLone and McLean's IS success model, combined this popular model with McLeod and MacDonell's project classificatory framework, and adapted both the model and the framework in terms of the selected project success theories.
Second, in this modelling I carried out a literature study, consultations, interviews, discussions, and seminars in an IS research group and in my department.
Lastly, I also conducted a quantitative pilot study, in the form of a case study at an institution, in order to validate the proposed model.
My questions:
(i) Is the case study sufficient to justify the validation of the model?
(ii) Can the combination of the modelling process (treated as a focus group study, i.e. the qualitative validation method) and the case study (treated as the quantitative one) be regarded as a mixed validation method?
Yes, the issues related to understanding, measuring, and evaluating the impact of ITs and ISs are a focus of research in systems engineering. The DeLone and McLean Information Systems Success Model provides metrics intended mainly for a qualitative evaluation of ISs, development projects, e-commerce, and the like.

Some time ago I studied this model in detail and tried to apply it in the context of an industrial project (concerning computer-based workflow innovation in a medium-sized, geographically distributed company that designs, manufactures, and sells shoes). The conclusion was that the six measures (system quality, information quality, service quality, use/intention to use, user satisfaction, and net benefits) were too general and qualitative to apply and evaluate directly. All of them had to be decomposed into lower-level but more tangible measures that could be related to the nature of the company's operations and business. However, this unavoidable decomposition and the large number of resulting sub-measures brought a complexity to the table that was difficult to address. The complexity grew further when we realized that the more specific sub-measures could be formulated at many levels and from many aspects of the company's operation. The most crucial and difficult-to-capture issues were information quality (of what, and for whom?) and net benefits (again, for whom, and in terms of what?). A further problem was that the sub-measures were typically not independent of one another; on the contrary, they were interrelated, even when they belonged to different higher-level measures, and there were many (non-linear) correlations among them. And, to mention one last point: when we wanted quantitative values for the sake of statistical assessment, even the sub-measures had to be decomposed further into sub-...-sub-measures. I said 'the last one' because here I have no opportunity to deal with the issues originating from dynamically changing objectives, varying environments and contexts, and the impermanence of business and/or business strategy.
Now, let me come back to your case. As you can see, validation in a complex situation such as the one described above is not low-hanging fruit. As you explained, you intend to include a number of internal project stakeholders in your survey. This is definitely a positive thing, since it narrows the otherwise wide focus of your study. On the other hand, the question is how sufficient that can be with regard to the whole. I say this because, as I see it, combining the success model with the project management framework (the latter also considers institutional context, project content, stakeholders, actions/interactions, and development processes) raises the danger of an unmanageable complexity, which works against rigorous and dependable research.

My answers to your questions are: (i) I personally believe that one case is not enough. The result is always vague, whether it is positive or negative; even a low-N situation is problematic. In any case, everything depends on your objective. (ii) Concerning the second question, let me note that there are other opportunities for a meaningful (and scientifically more defensible) validation. With my PhD students, we have used the quadrant-based validation (QBV) method. Originally it was developed for the external validation of design methods, but recently we have generalized it to the evaluation of development methodologies: one case was the validation of a designerly software development methodology, and the other was the validation of an interactive augmented prototyping methodology. In proposing QBV to you, I have the curiosity in the back of my mind to learn how, and how well, it can be applied to complicated cases such as information system design and application. Just let me know if more information on the above would be helpful for you. Kind regards, I.H.
Yes, Sir, I agree with you. I took several learning points from your description, especially about the complexity, the number of cases, the other validation opportunities, and the QBV proposal. It was helpful for me. God willing, I will specifically look into QBV.
Sometimes it is quite challenging to measure IT success using the DeLone and McLean success model due to the longevity of many projects. Rather than measuring project success, here is a reference to a paper that used team performance: Pee, L. G., Kankanhalli, A., & Kim, H. W. (2010). Knowledge sharing in IS development projects: A social interdependence perspective. Journal of the AIS, 11(10), 550-575. I recognize that team performance does not ensure project success, but that model may be a more succinct fit for your research. Whichever model you use, I highly recommend applying structural equation modeling (SEM) with a tool such as R, SmartPLS, or PLS-Graph to actually test your model's hypotheses; a minimal R sketch follows below.
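For illustration only, here is a minimal sketch of how such a model could be tested in R with the lavaan package (covariance-based SEM; one of several possible tools, not necessarily the one you will use). The construct and item names are hypothetical placeholders loosely based on the DeLone and McLean measures, not your actual model, and survey_data stands for a data frame of survey item responses that would come from your questionnaire.

# Illustrative sketch only: hypothetical constructs/items, not the actual model.
library(lavaan)

model <- '
  # measurement model: each latent construct measured by several survey items
  SystemQuality      =~ sq1 + sq2 + sq3
  InformationQuality =~ iq1 + iq2 + iq3
  UserSatisfaction   =~ us1 + us2 + us3
  NetBenefits        =~ nb1 + nb2 + nb3

  # structural model: hypothesised paths between constructs
  UserSatisfaction ~ SystemQuality + InformationQuality
  NetBenefits      ~ UserSatisfaction
'

# survey_data: placeholder for a data frame of (e.g. Likert-type) item responses
fit <- sem(model, data = survey_data)
summary(fit, fit.measures = TRUE, standardized = TRUE)

Each path in the structural part corresponds to one hypothesis, and its estimated coefficient and p-value indicate whether the hypothesised relationship is supported. For a PLS-SEM approach closer to SmartPLS or PLS-Graph, R packages such as plspm or seminr offer comparable functionality.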
Thank you for your response, Madam. Similarly, what you recognized was also indicated by my model examination study: as described there, the hypotheses involving the Person and Actions variables turned out to be insignificant in the study. I believe the paper will be helpful for my research work. My appreciation for your sharing.