I work on decision making in production systems design, but I believe this question can be answered by people from a much broader research area.

This is the situation: as a researcher, you realize that a certain group of people would benefit from a framework/guidelines/tool that addresses some of the issues they have been coping with (issues they might not even be aware of!). You expect this tool to improve the performance of the system they manage, so after developing it you are eager to apply it within an existing system, which would serve as your first case study.

But this cannot be done at the moment. For instance: the industry is not ready for it, you have been unable to find a suitable case, the data that would be used is not reliable, etc.

So, can we still talk about validating this work, or should we instead talk about the usefulness of developing such work in the first place?

If so, what should the researcher do to demonstrate at least the usefulness or potential of such a tool?

I was thinking along the lines of a "lighter" implementation of the tool, coupled with a questionnaire or interviews with the target users to gather their thoughts on it: perceived potential and shortcomings, expectations, etc.

How do you see such an approach, and what are your suggestions for addressing this problem?

Do you have any previous experience with this?

Thank you for your help.
