I am interested in conducting an online survey and curious as to the requisite ethical approval and the receptiveness of peer-reviewed journals to the results. If anyone has any experience in conducting surveys using online platforms, e.g. Survey Monkey, and subsequently publishing the results I would be very interested to hear about your experiences with the process. Cheers.
NB: The target demographic for the study I have in mind would be individuals over 18 from the general population and the IP addresses would not be collected. Further, the topic of interest is not of a sensitive nature.
At PatientsLikeMe we've conducted dozens of online studies over the past few years (see https://patientslikeme.newshq.businesswire.com/sites/patientslikeme.newshq.businesswire.com/files/research/file/Chronological_Bibliography_of_PatientsLikeMe.pdf) and there are a few things we've learned that I'll share here:
* Response rates: Unlike a paper-and-pencil survey or one conducted by mail, the right denominator to use is unclear (a small worked example of the different rates follows at the end of this list). I recently reviewed a paper that got 1,000 completes, but they fielded 22 million Facebook ads to get these. Should we say their response rate is 0.005%? This varies depending on your send method and your database, but using something like Campaign Monitor or Constant Contact to send the emails (as opposed to your standard email program) will allow you to calculate an open rate, exclude email addresses that bounced, and see whether your domain has been blacklisted (this can sometimes happen if you put "survey" in the subject line, for instance). In our most recent paper on epilepsy (http://www.sciencedirect.com/science/article/pii/S1525505011005609) we published both the overall response rate and the completion rate, i.e. the number of completes with the number who actually read the invitation as the denominator.
* Who you invite: We find it's best if we can overlay some measure of recent activity or activation in who we send a message to. We have people in our system who joined 7 years ago and they are unlikely to come back to the site just to do a survey, so look for ways to get more recently active participants.
* Make an inviting initial recruitment message. If possible, your ethics committee / IRB might allow you to make advertisements that entice people to a website, and then you can do the full informed consent once they have arrived there. That's much more effective than trying to do full informed consent in an email, as these are actually quite off-putting in the way they're worded. We have found subject lines in emails that are phrased as questions have a higher response rate e.g. "Does sunlight affect your mood?" rather than "Invitation to participate in a survey examining the role of diurnal variation in sunlight exposure on fluctuations in dysthymia".
* Use an incentive. The Cochrane review of methods to improve survey response rates (http://summaries.cochrane.org/MR000008/methods-to-increase-response-to-postal-and-electronic-questionnaires) has more detail on this. There are many options available: a lottery, a gift card, a donation to a charity. It's good if you can make this incentive clear in the advertising up front, but your ethics committee might worry about coercion. Your population, your budget, and what sort of response rate is considered credible in your field will determine what's suitable. Less than £5/$8 is probably considered desultory and is a pain to administer; any more than £25/$50 verges on being viewed as coercive. Also note that if you're targeting a specific population, e.g. pilots, then having an incentive upfront does increase the likelihood that some cheeky bugger will fill out the survey once (or multiple times, or invite their mates) in order to get the incentive. Think now about ways of spotting that in the analysis plan, such as asking a few questions only your target population would be likely to know.
* Use the CHERRIES checklist and build it into your protocol. The Journal of Medical Internet Research (JMIR) publishes a lot of survey research online (disclaimer: I am an Associate Editor) and has built a handy checklist for researchers (http://www.jmir.org/2004/3/e34/). This will help you write your paper in such a way that editors and reviewers won't have any ambiguity as to what you did or how you did it.
* Keep it short! Even with an incentive, estimate 10 seconds per question, longer for open text boxes. We have a very motivated population on PatientsLikeMe but I prefer to keep surveys under 60 questions where possible as these can test people's patience. It does make it somewhat easier if more questions are in the same response format (e.g. 0-10) rather than them having to read every response option.
* People don't like open text boxes much. In an interview, you get a ton of qualitative information from people, but open text boxes in online surveys don't really invite a very long response. You might want to get this type of information through telephone interviews or face-to-face interviews in order to formulate better structured questions. Again, depends on your discipline, but I have seen plenty of SurveyMonkey surveys come past my desk with 10+ open text boxes. Be very considerate of people's time. That said, I usually add a single open text box at the end to get feedback or give people space to vent whatever else they might be feeling.
* Don't be afraid of reminders. People get TONS of email these days and many non-scientists don't check theirs every day (shocking I know!). I like to send mine out on a Monday morning, with a reminder 3 days later. There is some research to suggest you can remind people 2-3x before they get annoyed (some will be annoyed they were invited in the first place and a small subset will get annoyed by any reminders you send, hopefully your participant list is of people who were expecting to be contacted). Ideally your communication system should allow you to figure out who's already completed the survey so you don't bug them. And ideally you should have a way of getting people who only got partway through the survey back in there so they can finish it off.
* Test, pilot, test, pilot, launch. Typos, branching logic, words people don't understand, forgetting to capture people's email addresses when you wanted to send them an incentive, unsatisfying response options, offensive questions, massive drop-out due to survey length - these are all things you should be looking for before you send out to the main bulk of your participants. Test on friends and colleagues first (for extra credit, be next to them when they do it so they can give you immediate feedback). Then test with a *small number* of your intended population; they will throw up different issues (and remember that the first pilot was done on super-smart people who are motivated to help you). Again, extra credit if you can show it to one of your actual users in person, and extra extra credit for using techniques like "think aloud" or "cognitive debriefing" to improve your survey. Then, and only then, are you ready to launch.
* Monitor, monitor, monitor. The server could go down, the survey might break, you might have missed a major error that's venting precious data out into space - when would you rather find that out: when you download the final data file, or right after your first participant has completed the survey? Don't just look at the numbers; download the data, load it into your stats package, and test that you're seeing what you should be seeing. Depending on the length and size of the survey, get in the habit of doing that regularly. There's nothing to say the system can't crash when you're 80% of the way through fielding.
* Tell people what you found. Most people take part in research out of curiosity and for the intrinsic reward of altruism. Close that loop, and tell them what you found *as soon as possible*. It doesn't have to be a final analysis, but just a topline like "Thanks so much for taking part, 500 of you completed this survey, 65% were female, 20% of you liked walks on the beach more than ice cream, etc." It's not a scientific abstract, it's feedback. Very important if you're going to be coming back to this population again.
* Publish open access. If people do your study in their own free time and you have interesting findings to share, it is (IMHO) a sin not to let them read the full paper when it's completed. There are a couple of different ways to do this; you could publish in an open access journal (like JMIR!), you could pay whichever journal you publish in to make it open access (gold open access), or you could store a pre-print copy in a repository or something like ResearchGate or Mendeley (green open access). Then you can re-message your participants and tell them that if they would like to read it all they can. It'll be too complex for some, but many will really appreciate it and get a nice altruistic glow from knowing they have contributed in some small way to a scientific endeavour.
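To make the denominator point above concrete, here is a minimal sketch in Python with purely invented counts (none of these figures come from a real study) showing how the choice of denominator changes the rate you report:

```python
# Illustrative only: invented counts showing how the choice of denominator
# changes the rate you report for the same survey.
invitations_sent = 10_000   # emails sent via your campaign tool
bounced = 400               # addresses that bounced
opened = 3_200              # recipients who opened the email
started = 900               # clicked through and began the survey
completed = 700             # finished the survey

delivered = invitations_sent - bounced

print(f"Response rate (completes / delivered): {completed / delivered:.1%}")
print(f"Response rate (completes / opened):    {completed / opened:.1%}")
print(f"Completion rate (completes / started): {completed / started:.1%}")
```

Whichever denominators you choose, report them explicitly so reviewers can see what the rate actually means.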
Best wishes and good luck,
Paul Wicks
I ran a survey on physicians' satisfaction at work a few years ago; we generally conduct surveys using LimeSurvey.
To ensure the quality of your results, you need a good number of respondents and you also need to guarantee anonymity (try to be compliant with the data privacy requirements of the law).
For a good journal, you need to prove the representativeness of your sample and explain the response rate.
My recent paper "Alcohol: Signs of improvement" was a national survey of Emergency Departments conducted using Survey Monkey in the first instance. The protocol was favourably reviewed by our local university research ethics committee and the subsequent paper passed peer review and has been published in a BMJ subsidiary journal.
I don't think that your intention to use an on-line survey tool will present any impediment, so long as the usual assurances of informed consent and confidentiality (as appropriate) are given.
I found the process of using Survey Monkey was without issue - I opted for the "pay as you go" option, so it was inexpensive. The survey construction was simple (after an hour or so of trial and error to get things looking as I wanted them to), and I exported the data straight into SPSS.
In my experience, online surveys give more biased results than other types of survey, and even with a large sample size this error cannot be eliminated.
Great answers, especially from Paul! Paul, you should write that in a paper, seriously! I'll add specifically to Paul's note because I've done these surveys in a different context than he did and have a few ideas that may help.
* Check out RedCap.org. I am just now submitting a grant with a colleague at the University of Florida. They have a "rule" (by way of IRB) that online surveys must use this set of services due to their previous vetting for privacy. I have used SurveyMonkey, and RedCap is to SurveyMonkey what a motorcycle is to a bike with training wheels! You need to really understand how to set up IT security, servers, etc. to do RedCap. That's why organizations (like University of Florida) rather than individuals tend to use RedCap. If you collaborate with an institution that uses it, they will try (by way of IRB and policies) to make you use RedCap for privacy reasons - even if you are not doing an online survey, per se. RedCap is more often used to make internal interfaces so you can enter data from, say, a clinical trial or observational study you may be doing in clinic. It just also has the ability for you to format that as an online survey and have external people fill it in. I mention this because I can imagine citing RedCap would look more formal in a peer-reviewed journal than citing SurveyMonkey, and might be an option for you.
* Denominator problems are important. However, they are minimized if you are surveying a targeted audience. Once I surveyed surgical educators about a curriculum, and I used a mailing list from a surgical education society. I am preparing to survey all the heads of hospital pharmacy departments in three states in the US. But if you want to do a broader survey, sampling and bias become huge issues. From time to time I come across a small study (usually on the internet as a report, and not peer-reviewed) that assesses the biased response of a particular online survey. It's okay to have some sort of response bias in a peer-reviewed article; you just need to find articles about what that bias might be. They are often in the form of government and marketing reports, but they are better than NO evidence to cite when trying to contextualize one's bias in a peer-reviewed paper.
* Paul mentions response rates for general surveys. For targeted ones like mine, the response rate went way up because I had a professional society behind me: I sent a letter from them in advance promoting my survey, then fielded it and got something like 70%. This is a specific case of what Paul recommended for improving response rates. Like Paul said, the IRB has to vet/approve all this, so make mock-ups for them of what you plan to send. This is an option for groups, even big ones, as research has shown that opinion leaders can encourage people to take part in research. If you are doing a survey of thousands of sports fans and you actually are endorsed by the Red Sox for your survey, I can imagine it would go a long way towards improving your response rate, especially in Boston where I am. (Go Sox!)
* What about sending follow-up e-mails to improve a response rate? I agree in general with Paul, but want to add this cautionary note. In a survey class I took 10 years ago, I e-mailed an online survey to a list of students as part of a project and got around a 50% response rate. I did a second wave, begging the remainder to fill it out, and another 30% came in. My professor encouraged me to look for differences between the 30% who answered after my plea and the original 50%. I found they were systematically different on important outcomes! The second 30% were, in general, more "negative" toward my outcome, and also had different population distributions than the first wave (I think in income and race - important to me!). The moral of the story is that you can send follow-up e-mails to those who do not respond to raise your response rate, but there are pros and cons, and you should compare the first wave to the second wave in your report (see the sketch below).
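For illustration, here is a minimal sketch of that kind of wave comparison, assuming each response is tagged with the wave it arrived in; the data, the column names, and the choice of a chi-square test are all just illustrative assumptions, not a prescription for your analysis:

```python
# Hypothetical sketch: check whether late (second-wave) respondents differ
# from first-wave respondents on a binary outcome of interest.
import pandas as pd
from scipy.stats import chi2_contingency

# Invented data: 'wave' is 1 for the initial mailing, 2 for the reminder;
# 'positive_outcome' is a 0/1 answer to some key question.
df = pd.DataFrame({
    "wave":             [1] * 50 + [2] * 30,
    "positive_outcome": [1] * 35 + [0] * 15 + [1] * 12 + [0] * 18,
})

crosstab = pd.crosstab(df["wave"], df["positive_outcome"])
chi2, p_value, dof, expected = chi2_contingency(crosstab)

print(crosstab)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
# A small p-value suggests the two waves differ systematically, which is
# worth reporting alongside the pooled results.
```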
Good luck on your survey!
-Monika
Online surveys need special attention because many unforeseen hurdles regarding validity and reliability will arise before and after the survey is finished.
Without a lot of detail it is difficult to address specific concerns. Several were addressed by Paul and Monika, but here is another. You said that IP addresses are not collected. However, most of the time they are needed by the companies who are running the data - such as SurveyMonkey and some others - so that the system can send reminders, prevent duplicate answers, etc. So make sure that you state clearly in the informed consent that the company will keep that information during the actual study period and for what purpose, as well as that the investigator and team will not receive that information.
For any survey (online or not) to yield representative data, the most critical step is to utilize a sound sampling frame. The mode (paper, telephone, web, mobile) is the method you use to gather the data. The quality of your sampling frame is what will drive the representativeness of your data. In other words, if you are interested in the general adult population, you need to be able to randomly select and invite respondents from a set frame (your denominator). Of course, you have already introduced bias because you are limiting your mode to online and, unless you use a panel, will not have generalizable results.
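As a toy illustration only (the addresses, seed, and sample size below are invented), drawing a simple random sample from a defined frame might look something like this:

```python
# Hypothetical sketch: draw a simple random sample of invitees from a
# defined sampling frame (here an invented list of addresses).
import random

frame = [f"person{i}@example.org" for i in range(5_000)]  # stand-in frame

random.seed(42)            # fixed seed so the draw can be reproduced
sample_size = 500          # illustrative number of invitations
invitees = random.sample(frame, k=min(sample_size, len(frame)))

print(f"Frame size: {len(frame)}, invited: {len(invitees)}")
```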
Hello, Theodore!
I believe that web surveys can be a very useful tool for doing research with a public that has internet access. I used Survey Monkey for a survey of libraries in Brazil and Portugal in my Masters; I would not have had the time or money to carry out another type of research. My most recent work is the result of a survey on the web:
Digital Proficiency and Digital Inclusion: Comparison between students of computer science, public relations and engineering. 2013 IEEE Global Engineering Education Conference (EDUCON).
In order to obtain the data for this study, two questionnaires were used over the Internet (web survey), released on the Moodle platforms and made available on the Survey Monkey website (www.surveymonkey.com). The methodological procedures adopted followed the recommendations proposed by [11] and [12] in relation to the elaboration of quantitative research using survey techniques. The questionnaires follow a matrix structure for answers with a 5-point Likert scale, with the extremes being "totally disagree" and "totally agree". In the Diagnostic Survey (DS) questionnaire, none of the questions was elaborated with reverse logic, so a score of 5 (five) always represents the highest level of meeting the technical functionality, attribute or practice being evaluated, in the opinion of the respondent. In the questionnaire for the evaluation of time use, level of extroversion and social skills (TES 2.0), the questions about time use employ reverse logic: the higher the score given, the greater the respondent's difficulty in managing his time. The analysis is quantitative, undertaken from a systemic and multidimensional perspective, comparing the results obtained with the theoretical references used. The sample comprises students from a public university in the state of São Paulo, Brazil, studying three classroom subjects in Economics, using the Moodle platform as didactic support, who voluntarily answered the questionnaires.
[11] BABBIE, E. Métodos de pesquisa de survey. Belo Horizonte: UFMG, 1999.
[12] BRYMAN, A.; BELL, E. Business research methods. 2 ed. Oxford: Oxford University Press, 2007. 786 f.
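As a small illustrative sketch (the item names below are hypothetical, not the actual DS or TES 2.0 items), reverse-worded 5-point Likert items like the time-use questions described above can be rescored so that a higher score always means more of the construct being measured:

```python
# Hypothetical sketch: reverse-code a 5-point Likert item so that a higher
# score consistently means a higher level of the construct being measured.
import pandas as pd

df = pd.DataFrame({
    "normal_item":         [1, 3, 5],   # scored as worded
    "reverse_worded_item": [2, 4, 5],   # worded with reverse logic
})

for item in ["reverse_worded_item"]:
    df[item] = 6 - df[item]             # on a 1-5 scale, reversing is 6 - score

print(df)
```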
I conducted (and published results from) an online survey. However, I had a very specialized population and had excellent contacts for my target population. As a result, I had high response rates and few concerns about validity (Tighe, 2013 - Housing Policy Debate)
I would be extremely hesitant to do a general population survey using the method you suggest. As has been pointed out above, there are tons of issues with response rates, non-response bias, accuracy of results, etc. If I were peer-reviewing an article that used such methods I would probably recommend rejection purely based on these concerns - they are that serious. I do not believe you can conduct a general population survey successfully unless you have an existing, well-defined, list of email addresses that overlaps with your target population (which, if it's "adults over 18", you don't, and you won't).
Thus, what you would have if you moved forward with this study is not a scholarly survey, but a public opinion poll. Those results would not be publishable in a reputable, peer-reviewed journal.
Most of my research data are collected online using Survey Monkey. The file is easily downloaded into several formats. I have published, but I do not gather data from the general public; my data come from whoever I send the link to, which is mainly college students. Once the link has been accessed from an IP address, that same IP address cannot submit again, so the survey is not filled out by the same person (computer location) more than once.
I am published in a book and various journals and have never had a problem having my results questioned.
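On the duplicate-prevention point above: if you also want to double-check the exported data yourself, a minimal sketch along these lines could flag possible duplicate submissions (the identifier column and the data are invented; real exports will differ):

```python
# Hypothetical sketch: flag possible duplicate submissions in exported data.
import pandas as pd

# Invented export: 'token' stands in for whatever identifier your export has
# (an anonymised respondent token, a collector code, etc.).
df = pd.DataFrame({
    "token":  ["a1", "b2", "b2", "c3"],
    "answer": [4, 5, 5, 3],
})

duplicates = df[df.duplicated(subset="token", keep=False)]
print(f"{len(duplicates)} rows share an identifier and may be duplicate submissions")
print(duplicates)
```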
I have extensive experience with both online and mail surveys and for most populations I prefer mail, largely because of the much higher response rates I achieve (as high as 78 percent for individuals and 60 percent for organizations). While online surveys are seemingly easier, my experience is that people will either answer immediately or not answer at all. I also get a larger percent of partially filled out surveys when using online methodologies, although self evidently I cannot tell what the rate is of those who start a mailed survey and do not return it.
I also find in pretesting that individuals often appreciate the chance to go back pages and change their answers, sometimes writing in information. Depending on your survey host, the ability to move back screens may not be available.
I also find it easier to conduct estimates of bias, since at a bare minimum I know the zip code for the mailed survey, and often a great deal more.
However, for targeted populations who have reason to be interested in the survey and who have internet access, an online survey can be very useful, as others have mentioned.
As an aside I use Survey Gizmo, which I find to have better functionality than Survey Monkey.
If there is a way to do a paper-and-pencil survey, do that. If there is no choice, go for an online survey. In some fields of study, it's acceptable to publish studies with response rates of less than 50%, which could be the case with online survey studies. For me, studies with a response rate below 50% have little value. Major problems with reliability and validity!
As some of you have said, this kind of survey is very useful and valid when we have to investigate issues relating to a targeted audience. In my experience, working on the Italian BRFSS (Behavioural Risk Factors Surveillance System), I developed two waves of an audit through an online survey (using a Google app instead of Survey Monkey, but it is a similar technology with much the same functionality). These assessments were addressed to an identified network of regional coordinators, who were asked to forward the link to the online questionnaire to local coordinators. A response rate of nearly 100% was thus achieved for both the first and the second audit. Furthermore, a letter is sent before the survey explaining its aims, objectives, etc., and after people complete the questionnaire a personal thank-you email is sent. When the results are analysed, respondents are informed. In conclusion, beyond the data coming out of the survey, the communication process has to be managed: it correlates directly with response rate and compliance. As for papers, at the moment I have only finalized conference communications; for instance, the next one will be an oral presentation at IUHPE (Thailand, August 2013). I have yet to publish an article.
An online survey is a data collection technique and is acceptable nowadays, but before using it you have to obtain ethical approval from the relevant authority, such as an IRB or IRC. False information may sometimes be collected, so you should develop a verification tool and use it to check the accuracy and reliability of the information and to minimize information bias.
Yes, these two aspects also have to be taken into account, but specifically within the experience I had and briefly described above:
1) ethical approval was not necessary, since a well-identified network of regional coordinators was surveyed;
2) reliability of the information was ensured by a second step, in which respondents were asked to send the documents they had declared.
Possibly of interest:
http://duncan-associates.com/hiddenpop.pdf
and
http://duncan-associates.com/DRUGNET_A_pilot_study.pdf
You don't mention the nature of your survey. There is a gray line between surveying for marketing purposes, consumer info, or attitudes, and surveying to collect health information or other personal information. As a rule of thumb, if this is not a marketing/consumer/attitude survey but a survey that collects health/quality-of-life information, you should seek IRB or Ethical Review Committee approval. Be prepared to discuss how data will be collected and stored. Also, for these types of studies most editors of peer-reviewed journals will ask whether IRB or Ethical Committee review was conducted.
I have not come across editors that object to online surveys.
In regard to response rates, my experience is that these have come down over the last few years; people are becoming inundated with online surveys. But this is dependent on how you distribute your survey, and on its nature. My experience with random selection from distribution lists of people who have given permission to be contacted is a response rate of approximately 16%, but I have had this as high as 25%.
I have taken surveys via Survey Monkey and looked into RedCap, but I have had the most success (for me) using Qualtrics. I have found it rather helpful: you can omit the collection of IP addresses and other items, as well as use a password-protection system so that only those you ask to complete the survey can have access, which our team has needed from time to time. We have not had challenges in getting the results published, because what we have investigated has been of interest to our fellow colleagues (both research and clinical). The topic of inquiry and how well the survey is constructed (in terms of quality, not layout) will most likely determine how it is published. Regarding Qualtrics, the survey can be saved as a PDF, which made it easy to provide the information to the IRB or Human Subjects office.
We recently sent a survey to a representative sample of primary care physicians and provided them the option to respond by mail, fax or online. We had to add a sentence or two to our research ethics board approval to address the issue of how we plan to maintain privacy using online surveys. As for the response rate, only about 20% responded online; the majority responded by mail.
Thank you all for your insights and experiences. In answer to your questions: the nature of the study is conducting an exploratory examination of cross-cultural perspectives of a construct. I do, however, have a few follow up questions:
In terms of distributing the study, would it be appropriate/acceptable to distribute it on a social media site, for example, via the 'share' function on Facebook? (The target demographic is younger persons, i.e. 18-35.)
Which methods of survey distribution would you recommend?
How does one address issues of representativeness and response rate?
I know my 2 cents on your question might be a bit late ...
You can also try to find a company that specializes in this. Some of them will offer to script the survey and host it for you (also offering their panel), and others have DIY solutions so that you can script your own survey using their platform and collect the data using their own panel or yours. Of course, prices can vary depending on what is required to be done.
Representativeness might be an issue unless you decide to conduct the survey in countries where the internet penetration is high (say more than 70-80%).
Response rate will vary depending on the length of the survey and the incentive you're offering. A good response rate might be somewhere between 30% and 40% but, as Thom mentioned, response rates are on a decreasing trend, so I wouldn't be surprised if it ends up around 20%.
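As a rough, purely illustrative back-of-the-envelope calculation, you can also work backwards from an assumed response rate to the number of invitations you would need:

```python
# Illustrative arithmetic: invitations needed to reach a target number of
# completed surveys under different assumed response rates.
target_completes = 400

for response_rate in (0.40, 0.30, 0.20):
    invitations_needed = round(target_completes / response_rate)
    print(f"At {response_rate:.0%} response: ~{invitations_needed} invitations")
```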
Hope this helps and good luck with your research
I am definitely interested in responses to this question, particularly from those who have utilized various online survey sites and have some research tenure. I am just starting my master's thesis and am using Surveymonkey for some preliminary questions, so advice/input/precautions/etc. would be greatly appreciated!
I conducted a pilot study with an online survey using Google Docs, which was very useful for me. My target population was the physicians of a general hospital, and the response rate was similar to other studies (about 40-60%). I didn't use an incentive, but I recommend you use one if you need a high response rate; as Paul Wicks said, there is strong evidence that incentives improve response rates.
The problem is getting a high rate of answers and getting deep, open, unstructured answers. I have had this experience in leadership research but also in patient surveys, as shown in the article "The value creation concept in hospitals: Health values from the patients' perspective" by Nordgren & Åhgren (2013). In that study the unstructured answers were analysed using qualitative content analysis, drawing on a pilot project in Blekinge county council. The survey was sent out to 1,318 patients at Blekinge Hospital (across 15 wards), and strict confidentiality was maintained. The response rate was 68.1%; however, for the unstructured answers the rate was lower, no more than 30%.
Best regards
Lars Nordgren
I have used Survey Monkey in the past, but I want to shed light on an important issue that faces the researcher when collecting data using an online survey, which is the low response rate. So good luck with it!
Clearly a hot topic! One practical suggestion: when looking to get good, defensible response rates (online) from a large multi-cultural population, hire help from a PR or advertising agency! On the other hand, I've never worked on a research project that adequately addressed the issues Paul Wicks raised above, especially in the project budget proposal. Also, in more modest projects, such as research higher degree or even post-doctoral studies, funders and ethics committees are unlikely to provide support.
My own (varied) experience with online surveys is in line with Steffi Weidt's remarks. I've mostly used Survey Monkey but also some other open source applications. Clear, short information about the project aims and how the respondent data is handled by the survey application and will be handled by you (& colleagues) is important.
For new recruits to the study, try to keep the questionnaire short. You will lose folk if it takes more than 20-30 minutes unless you offer some really major incentive. Then you can easily get into problems with response and non-response bias. Short topical surveys with an incentive (not necessarily money) attached tend to get broader response but obviously only among those who are, and spend time, online.
Some sort of saved consent indicator (usually a checkbox) is important in my experience, for funder/ethics committee audit, general credibility and publication. Also important (legally) in most countries if you ask for 'identifiable' data eg name etc.
I used Survey Monkey. I submitted the proposal to an ethics committee, and the consent form was available online beforehand to all potential participants.
It is necessary. I think it is fair because the study should have approval with regard to all the ethical principles. Having the participants' consent doesn't mean the study follows all ethical principles. I think it is a way to protect the rights of the participants.
Responses so far are all from a survey implementer's perspective - a role I often play. But I am also the target of such surveys - far more often than I would like - and survey fatigue is a major issue for me and, I assume, for others. Those same tools that allow large numbers of potential respondents to be targeted are part of the problem (as is working in a graduate university, although more than half of my requests are from outside my own institution).
What I can say with confidence is that perhaps three years ago I received requests at less than one per month and accepted most (certainly more than half) of them. I now receive around one per week and decline the great majority. What makes for an exception? Something I feel I can respond to well (i.e. have I been well targeted); a clear statement of how long it's likely to take me (requests for more than, say, 20 minutes are more or less auto-declines); an up-front statement regarding IRB approval, confidentiality and so on. All of these are basic good survey design concerns, and yet I find the majority of surveys that come to me fail one or more of them. Finally, I dislike getting 'reminders' and 'pre-warnings' for surveys I have already declined or am not going to take - this is nothing less than spam. For this reason I now consider a 'decline' option in the first approach message a minimum of courtesy (and am more likely to decline those that do not provide it).
I think we would all (surveyors and respondents) benefit from more scientific rigour among those who design and implement surveys or who advise students who use them in thesis/dissertation work. The underlying reason for my high rate of 'decline' is not churlishness; it's that most surveys that end up in my in-tray are badly designed and/or badly implemented.
I agree with all Jamie has said. I am currently running an on-line survey as the first study in my doctoral work. I am lucky to have a supervisor well experienced in on-line surveys on populations with specific health complaints. The population I am dealing with is adults living in the community with acquired brain injury (mainly stroke and TBI). The survey is focused primarily on physical activity in order to inform a self-management program to assist people to be more physically active. A couple of things I have learnt:
1. It takes MUCH longer to write the survey than you imagine to really get it right. I thought this would be a quick thing but I was very wrong!! It took me a long time to get the mix of questions right and the survey length right so it isn't too long. I aimed for less than 20 mins as well - there is nothing worse than getting great responses to only 50% of the survey! People are busy and we wanted a short punchy survey that got to the heart of the matter.
2. Recruitment is also hard. Physiotherapy studies are usually face to face and this is fairly new to us. Not having a face to face relationship is very different. I have had only a small amount of help from the larger corporate bodies associated with ABI and have found smaller consumer advocacy organisations more useful in terms of getting the word out about the survey.
3. Ethics was a must for me - we can't run this without it, even though people have the option to remain anonymous. The patient information and consent form was the first page of the survey, requiring a 'yes' to continue, followed by the same approach for the inclusion and exclusion criteria pages.
4. I used Qualtrics which the university here has a licence for and I have found this to be pretty good.
5. I had no paid incentive etc but participants have the option of leaving details so they can participate in the program trial itself. This is a bit of an incentive and also will assist me to recruit for the next study!! Most participants have opted to do this which is great.
6. I have had to keep on with the recruitment drive via emails galore so be prepared to spend a lot of time on this.
It has been a steep learning curve but I am planning on doing more in other groups!!
1. Be very careful to make sure you understand exactly how to turn off any "collect IP addresses," etc.
2. I have found Qualtrics to be far and above the most sophisticated software with by far the best customer service. And no, I don't work for them.
3. We did a convenience sample of gay and bisexual men who use online dating sites (see www.stopaids.org/online) and offered no incentive. We got 3000 responses in no time flat. Is it random? No. Did it compare fairly well to other national samples? Yes, with a few exceptions.
I would love 3000 responses...that is amazing!! Great sample.
I agree on the Qualtrics comment. I have found it to be really easy to use and it looks great too.
I'm part of a research team that used an online survey to collect quantitative data from bisexuals in the province of Ontario, Canada. Some of the issues related to online surveys that we had to address included: Where is the data being stored? How secure is that data? Is it subject to The Patriot Act? What do we do if someone indicates they're suicidal during the survey? What type of consent will we use? How do we ensure people are eligible? How will we help people who have questions or concerns about the survey reach a member of the research team? Can people stop doing the survey, and log in later to continue? How can we format skip patterns, follow-up questions, etc.?
We used respondent driven sampling, which complicated our recruitment considerably, and necessitated having a programmer who knew his stuff.
I have done several online surveys - used both SurveyMonkey and Qualtrics. Qualtrics is by far the best platform I have used, of course, you get what you pay for! Qualtrics has great support for the user and answers most questions you have very, very quickly. They also have super-flexibility (which can sometimes be a challenge for the novice). Nearly all of my surveys have gone through IRB review and have involved incentives. We generally do this by having a blinded third party collect the contact information outside of the survey (through a separate weblink) and handle the incentives outside of the project. I've never had a problem with IRB accepting an electronic informed consent and this manner of providing incentives - but we always make sure that there is a clear and concrete consent statement presented to the respondent prior to any data collection and we always make sure that the IP address collection is turned off. We also usually check with the IRB before we get too deep into the project preparation to make sure that they haven't added any new criteria for their review. I work mostly with national IRBs and they have been very helpful in streamlining the process. As a previous commenter wrote, you need to be aware of how data is stored, what the security protocols are and how long it will be accessible - all of that will need to be spelled out for the IRB in your project protocol.
Writing online surveys is another lengthy topic. What looks good on paper does not always work when you start to think about how it will look online and how that will translate to your data tables for analysis. You need to test it out thoroughly in multiple internet browsers (Internet Explorer, Google Chrome, Safari, Firefox, etc.), on multiple platforms (including phones and tablets), and have some help with proofing and formatting each question. Having lots of graphics and links (like video vignettes) can be fun and make the survey interesting, but will it slow down the survey for someone who is using an old machine? And some people may take the survey somewhere they can't listen to audio. In short, while it seems easy to say "let's do the survey online", there are lots of things to think through when you operationalize - just as with any other research project!
Paul Wicks has explained everything in detail. Online surveys look easy, but carrying them out is difficult. However, they are popular now. Try it. Good luck, lalitha kabilan