Below are some recommendations from research works:
1. Use multi-method approaches, combining methods to obtain more comprehensive and useful results. Integrating different methods helps produce an inclusive answer to evaluation questions, and combining qualitative data-gathering approaches (such as observations and interviews) with quantitative ones (such as questionnaires and work sampling) provides a good opportunity, through triangulation, to improve the quality of the results.
Source: Sadoughi et al. Iran Red Crescent Med J. 2013 Dec; 15(12): e11716. doi: 10.5812/ircmj.11716.
2. The arguments for performing multi-method evaluations must be acknowledged and progressed within the community. Information technology is not a drug and should not be evaluated as such. We should look to the wider field of evaluation disciplines, in which many of the issues now facing clinical informatics have already been addressed. The current political context in which healthcare applications are evaluated emphasizes economic gains rather than quality of life. The role of evaluation has thus been to justify past expenditure to taxpayers, managers, etc., so that evaluation becomes a way of trying to rebuild lost public trust. This is short-sighted. Evaluation is not just for accountability, but for development and knowledge building, in order to improve our understanding of the role of information technology in health care and our ability to deliver high-quality systems that offer a wide range of clinical and economic benefits.
Source: Heathfield, Pitty & Hanka. BMJ. 1998 Jun 27; 316(7149): 1959–1961.
Any system, especially one that interacts with humans, can readily be planned for evaluation through:
1. Usefulness
2. Ease of use
These are the two core constructs of the Technology Acceptance Model (TAM). They mediate all other strategic performance measures, such as the four below (a minimal scoring sketch follows the list):
1. Effectiveness: achieving the task of data entry correctly.
2. Efficiency: resources used versus tasks completed.
3. Timeliness: availability of the data when needed.
4. Safety: minimising errors and controlling data quality.
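A minimal sketch of how these four measures could be scored from hypothetical data-entry session logs; the field names and the timeliness threshold are illustrative assumptions, not a standard schema:

```python
# Scoring the four performance measures from (hypothetical) session logs.
sessions = [
    # attempted entries, correct entries, staff minutes spent,
    # minutes until data were available downstream, error count
    {"attempted": 120, "correct": 114, "minutes": 95,  "latency_min": 30,  "errors": 6},
    {"attempted": 100, "correct": 97,  "minutes": 80,  "latency_min": 240, "errors": 3},
    {"attempted": 140, "correct": 131, "minutes": 110, "latency_min": 15,  "errors": 9},
]

TIMELINESS_THRESHOLD_MIN = 60  # assumed service target: data usable within 1 hour

attempted = sum(s["attempted"] for s in sessions)
correct = sum(s["correct"] for s in sessions)
minutes = sum(s["minutes"] for s in sessions)
errors = sum(s["errors"] for s in sessions)

effectiveness = correct / attempted    # share of tasks completed correctly
efficiency = correct / (minutes / 60)  # correct entries per staff-hour
timely = sum(s["latency_min"] <= TIMELINESS_THRESHOLD_MIN for s in sessions)
timeliness = timely / len(sessions)    # share of sessions meeting the target
safety = 1 - errors / attempted        # complement of the error rate

print(f"Effectiveness: {effectiveness:.1%}")
print(f"Efficiency:    {efficiency:.1f} correct entries / staff-hour")
print(f"Timeliness:    {timeliness:.1%} of sessions within {TIMELINESS_THRESHOLD_MIN} min")
print(f"Safety:        {safety:.1%} error-free entries")
```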
How to do this?
You need to plan both quantitative and qualitative measures. For example, data completeness after entry and the number of errors in the data are quantitative measures, while content quality, such as comprehensiveness, is qualitative.
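As a concrete illustration of the quantitative side, the sketch below computes field completeness and a simple error rate for a batch of entered records; the record structure, the required fields, and the plausibility rule are all hypothetical assumptions:

```python
# Completeness and error rate over a batch of (hypothetical) entered records.
records = [
    {"patient_id": "P1", "dob": "1980-04-02", "diagnosis": "I10", "weight_kg": 82},
    {"patient_id": "P2", "dob": None,         "diagnosis": "E11", "weight_kg": None},
    {"patient_id": "P3", "dob": "1975-11-19", "diagnosis": None,  "weight_kg": 540},
]

REQUIRED_FIELDS = ["patient_id", "dob", "diagnosis", "weight_kg"]

def is_error(record):
    """Flag obviously implausible values (illustrative rule only)."""
    w = record.get("weight_kg")
    return w is not None and not (1 <= w <= 400)

filled = sum(r[f] is not None for r in records for f in REQUIRED_FIELDS)
completeness = filled / (len(records) * len(REQUIRED_FIELDS))
error_rate = sum(is_error(r) for r in records) / len(records)

print(f"Completeness: {completeness:.1%} of required fields filled")
print(f"Error rate:   {error_rate:.1%} of records with implausible values")
```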
You can also combine objective and subjective methods: the original users (the data-entry staff) can be asked to evaluate the platform and their perception of its use and benefits, and other end users can be recruited to judge the quality of the entered data and report their perception of the system's performance.
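For the subjective side, here is a short sketch of scoring a TAM-style questionnaire; the item codes (PU*, PEOU*) and the 7-point scale are illustrative assumptions, not a validated instrument:

```python
# Per-construct means for a (hypothetical) TAM questionnaire on 7-point items.
from statistics import mean

# responses[respondent] = {item_id: score 1..7}
responses = {
    "user_01": {"PU1": 6, "PU2": 5, "PU3": 6, "PEOU1": 4, "PEOU2": 5, "PEOU3": 4},
    "user_02": {"PU1": 7, "PU2": 6, "PU3": 6, "PEOU1": 6, "PEOU2": 6, "PEOU3": 5},
    "user_03": {"PU1": 4, "PU2": 5, "PU3": 4, "PEOU1": 3, "PEOU2": 4, "PEOU3": 3},
}

CONSTRUCTS = {"Perceived usefulness": "PU", "Perceived ease of use": "PEOU"}

for label, prefix in CONSTRUCTS.items():
    per_user = [
        mean(score for item, score in items.items() if item.startswith(prefix))
        for items in responses.values()
    ]
    print(f"{label}: mean {mean(per_user):.2f} / 7 across {len(per_user)} respondents")
```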
A randomised controlled trial can be planned to compare performance with and without the system, or, more simply, a cohort study to compare performance before and after its introduction.
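For the before/after design, the sketch below compares error proportions in the two periods with a chi-square test; the counts are invented, and a real study would also need to account for confounding and secular trends:

```python
# Before/after comparison of error proportions via a chi-square test.
from scipy.stats import chi2_contingency

errors_before, total_before = 48, 500  # paper-based period (hypothetical counts)
errors_after, total_after = 21, 500    # system period (hypothetical counts)

table = [
    [errors_before, total_before - errors_before],
    [errors_after, total_after - errors_after],
]

chi2, p_value, _, _ = chi2_contingency(table)
print(f"Error rate before: {errors_before / total_before:.1%}")
print(f"Error rate after:  {errors_after / total_after:.1%}")
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
```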
Hi, I feel that SERVQUAL will only measure the quality of the service provided rather than the value of the analytics, so a combination of SERVQUAL and the Technology Acceptance Model is a prudent way to evaluate these parameters.
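If SERVQUAL is used, its core computation is the gap score (perception minus expectation) per dimension; a minimal sketch with invented ratings:

```python
# SERVQUAL gap scores: perception minus expectation, averaged per dimension.
from statistics import mean

DIMENSIONS = ["tangibles", "reliability", "responsiveness", "assurance", "empathy"]

# (expectation, perception) pairs per respondent, per dimension, on a 7-point scale
ratings = {
    "tangibles":      [(6, 5), (7, 6), (6, 6)],
    "reliability":    [(7, 5), (7, 6), (6, 4)],
    "responsiveness": [(6, 6), (6, 5), (7, 6)],
    "assurance":      [(6, 6), (7, 7), (6, 5)],
    "empathy":        [(5, 5), (6, 4), (6, 5)],
}

for dim in DIMENSIONS:
    gaps = [perception - expectation for expectation, perception in ratings[dim]]
    print(f"{dim:>14}: mean gap {mean(gaps):+.2f}")  # negative = expectations unmet
```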
However, the impact of the data could be measured directly as a function of utility, by defining the value of that utility and measuring the data's relevance to it, e.g. man-hours saved, capacity to automate decisions, and adverse events detected through specific data collection (a small sketch of such measures follows).
Article: Awareness, Use, and Consequences of Evaluation Data in a Com...
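A minimal sketch of those utility measures, with all figures as hypothetical placeholders:

```python
# Utility-based impact measures: man-hours saved, automation rate,
# and adverse events surfaced by the system (all figures invented).
tasks_per_month = 1200
minutes_manual = 6.0         # average minutes per task before the system
minutes_with_system = 2.5    # average minutes per task with the system
decisions_total = 900
decisions_automated = 630
adverse_events_flagged = 14  # events caught only via the system's data checks

hours_saved = tasks_per_month * (minutes_manual - minutes_with_system) / 60
automation_rate = decisions_automated / decisions_total

print(f"Man-hours saved per month: {hours_saved:.0f} h")
print(f"Decisions automated:       {automation_rate:.0%}")
print(f"Adverse events flagged:    {adverse_events_flagged} per month")
```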