I am interested in the reliability and ecological validity of online experiments - I would appreciate your opinions and suggestions. What platforms are generally used for such experiments?
I have used this literature source for my publication on online surveys:
Reips, U.-D. (2002). Standards for Internet-based experimenting. Experimental Psychology, 49(4), 243–256.
There is also a 2007 chapter by Reips on the methodology of Internet-based experiments in the Oxford Handbook of Internet Psychology, available online at Google Books.
As for the online survey platform, I have institutional access to Webropol.com, but there are also other solutions capable of exporting data for statistical analysis.
Hi Dana, I don't have an answer for you (yet), but we're about to conduct one ourselves shortly - I'll be interested in hearing people's thoughts on this, so thanks for bringing it up! As for platforms, ours is a 'thought experiment' (well, three) in the context of a longer questionnaire, although I'm not sure if that's what you mean by platform. What kind of stuff are you working on?
The most popular platforms for online experimentation (aside from just coding it in-house) are SurveyMonkey https://www.surveymonkey.com/ and Amazon's Mechanical Turk https://www.mturk.com/mturk/welcome. Many people also use Unipark http://www.unipark.info/1-0-online-befragungssoftware-fuer-studenten-und-universitaeten-unipark-home.htm to run surveys as it is a major presence in German psychology research.
A common problem with online experimentation is time-wasters, i.e., people who just click through the items randomly. It's important to include catch trials or attention probes in your design - something as simple as "This item is to check that you are paying attention. Please check the 'somewhat agree' box."
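To make the idea concrete, here is a minimal sketch of how such a catch trial could be used to screen data afterwards. The field names (`attention_check`, `id`, `q1`) and the expected answer are purely illustrative assumptions, not part of any particular platform's export format.

```python
# Hypothetical sketch: drop respondents who failed a catch trial.
# Field names and the correct answer ('somewhat agree') are assumptions.
responses = [
    {"id": 1, "attention_check": "somewhat agree", "q1": 4},
    {"id": 2, "attention_check": "strongly disagree", "q1": 5},  # clicked through randomly
    {"id": 3, "attention_check": "somewhat agree", "q1": 2},
]

# Keep only respondents who answered the catch trial as instructed.
valid = [r for r in responses if r["attention_check"] == "somewhat agree"]
print([r["id"] for r in valid])  # → [1, 3]
```

With several catch trials spread through the questionnaire, the same filter can require, say, all (or all but one) of them to be passed before a respondent's data is kept.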
Another issue is repeat visitors. This is much harder to handle, as even checking to make sure the same IP address doesn't do the experiment twice will miss people who use multiple computers or proxies.
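A coarse version of the IP check mentioned above can be sketched as follows: keep only the first submission per IP address. As the post says, this will miss people who use multiple computers or proxies (and may wrongly merge distinct people behind one NAT), so it is a heuristic, not a guarantee. The data layout here is an assumption for illustration.

```python
# Hypothetical sketch: keep only the first submission per IP address.
# This misses multi-computer/proxy users and may conflate people
# sharing one IP - a coarse filter, not a guarantee.
submissions = [
    {"ip": "203.0.113.5", "score": 10},
    {"ip": "198.51.100.7", "score": 12},
    {"ip": "203.0.113.5", "score": 11},  # possible repeat visitor
]

seen = set()
unique = []
for s in submissions:
    if s["ip"] not in seen:       # first time we see this address
        seen.add(s["ip"])
        unique.append(s)
print(len(unique))  # → 2
```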
Finally, there is really no way of knowing whether anything a participant tells you is true, so self-reported demographic information is completely suspect.
However, the sample sizes are so large that you can afford to be extremely stringent about chucking out data that looks even a little compromised (before you do any analysis, obviously).
Here's a link to a PDF reprint of the Reips paper, for ease of access: http://iscience.deusto.es/wp-content/uploads/2010/04/ulf27.pdf
Here also is a review by Birnbaum of online data collection: http://www.uvm.edu/~pdodds/teaching/courses/2009-08UVM-300/docs/others/2004/birnbaum2004a.pdf
Let me (with some self-interest) add www.soscisurvey.de as a German survey platform. Technically it goes far beyond Unipark/Globalpark, e.g., by supporting exact response-time measurement for psychological research.
A lot of empirical research has shown that time-wasters etc. are not such a big problem in online surveys - and I strongly expect the same for online experiments. The main problems usually lie in the samples: if you're using convenience samples, they are not representative - and they may return different results than representative samples, not only in descriptives but also in correlations, not to mention more sensitive analyses like factor analysis, regressions, etc.
The point is that online experiments are extremely cheap and more powerful than pen-and-paper surveys. But whether they are suitable or not depends entirely on the research question.
I really appreciate your suggestions and comments and thank you all.
I came across an online platform called Testing platfoRm for mUltimedia Evaluation (TRUE) and I have used it for ratings of pictures as well as audio stimuli:
Just select the test (now there are 2 running) and click on 'Take the test'
But I am looking for something more than this, in the sense that I would be interested in recording the response times of the participants. I am not sure if this is possible. If it is, it might allow me to deal more easily with the 'time-wasters' mentioned above.
> If it is, it might allow me to deal more easily with the 'time-wasters' mentioned above.
Most tools automatically store the times between questionnaire pages (if you use something like a questionnaire). These times are useful for detecting people who just browsed through the survey - however, it is easier to allow an implicit "don't know" to see who just wanted to look. Research has also found that a final question, "Did you take the survey seriously?", is useful for cleaning data. Cleaning the data is usually not the problem.
The response times I mentioned are the measurements you use when researching attitudes. In that case you're not interested in seconds but in split seconds - and if there is error from loading times etc., you need more specialized tools. But as I said: response times in seconds (+/- 5 sec.) are recorded by most tools.
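The per-page times most tools record can be turned into a simple click-through filter along these lines. The data layout and the 2-second-per-page threshold are illustrative assumptions only, not an empirically validated cutoff - any real threshold should be justified from the length of your own items.

```python
# Hypothetical sketch: flag respondents whose average time per page
# suggests they only clicked through. Threshold is illustrative only.
page_times = {          # respondent id -> seconds spent on each page
    "r1": [12.4, 9.8, 15.0],
    "r2": [1.1, 0.9, 1.3],   # likely a click-through
    "r3": [8.0, 22.5, 6.7],
}

THRESHOLD = 2.0  # seconds per page; an arbitrary illustration

flagged = [rid for rid, times in page_times.items()
           if sum(times) / len(times) < THRESHOLD]
print(flagged)  # → ['r2']
```

Note that this coarse page-level filtering is different from the split-second reaction-time measurement needed for attitude research, which requires tools that correct for loading-time error.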
This is new learning for me! My research involves actual face-to-face human encounters, and in some programs a focus-group approach is useful in communities with a high rate of illiteracy.
Hi all, great to see this discussion here and I hope it is of help. Spent my career on this topic, but before you catch me writing endless postings ;-): Free copies of my publications are available from my profile and also from http://personalwebpages.deusto.es/reips/pubs/publications.html
Two brilliant former students of mine, Thomas Blumer and Christoph Neuhaus, and I have created our own tool for generating Web experiments. Since it went online in 2000, we have made an effort across a sequence of new versions to include more and more of the methods and techniques developed by those of us who did research on the methodology of Internet-based research; see http://wextor.org/wextor/en/features.php This page also brings you to WEXTOR.
Give it a try. The tool is freely available; we only charge for licenses that let you do certain things more comfortably.
Sometimes I look at the tens of thousands of Web-based studies today and can't believe how far we have come since I - then a PhD student in Tübingen, Germany - first played around with the idea and implementation of online psychological experiments in 1994. Amazing! Today things are so much easier in many ways (we didn't even have HTML editors back then), but more difficult in others. I am seeing the delightful work some of you are doing, and wish you all the best with it!
Isn't it wonderful we can now have this exchange right here, thanks to advances in Web technologies?!
First of all, Moodle, the online education software, has at least one survey module, and it is free.
https://moodle.org/mod/forum/view.php?id=739
And, relatedly - not wanting to hijack this topic too much, though adding information even as I do...
Can anyone recommend something like the following:
Amazon's Mechanical Turk (recommended by some but alas one needs to be in the US)
Buhrmester, M., Kwang, T., & Gosling, S. D. (2011). Amazon's Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science, 6(1), 3–5.
CloudCrowd (one needs to have a budget of USD 5,000 or more), or
MySurvey.com (it looks to be for market research only, and it pays very little to respondents - less than a dollar a survey. Writing to them now.)
Looks like I'm only a little bit late to the party...
I've just written a thorough, up-to-date (2019) review of the literature. Plenty of actionable advice for avoiding some of the pitfalls and limitations too.