Exploring online research methods in a virtual training environment: Online questionnaires module
This module was written by Clare Madge except where stated in section headings or case studies.
For information on copyright and how to cite this module, please refer to the following web pages:
- Copyright statement: http://www.geog.le.ac.uk/orm/site/copyright.htm
- Citation policy: http://www.geog.le.ac.uk/orm/site/citation.htm
Contents
- Aims and learning outcomes
- Introduction
- Advantages and disadvantages
- Types of online questionnaires
- Sampling issues
- Design issues 1: Appearance
- Design issues 2: Content
- Implementation: Piloting, evaluation and analysis
- Technical guide
- Frequently-asked questions
- Glossary
- Print version
- References
- Further resources
Aims and learning outcomes
Aims
- To discuss the appropriate use of online questionnaires;
- To outline the advantages and disadvantages of online questionnaires compared to onsite questionnaires;
- To consider different types of online questionnaires, including email surveys, web-based surveys and questionnaires attached to email;
- To explore sampling issues arising from the use of online questionnaires, including recruitment, identity verification, response rates and incentives;
- To introduce key design issues with respect to online questionnaires, including length, text format and question type;
- To discuss issues relating to the evaluation of online questionnaires;
- To describe key terms, definitions and terminology in relation to online questionnaires;
- To provide links to additional resources, frequently asked questions and print versions.
Learning outcomes
At the end of this module, you will be able to:
- Identify the type of research projects where the use of online questionnaires is an appropriate method;
- Evaluate the usefulness of online questionnaires compared to onsite questionnaires;
- Devise an online questionnaire, bearing in mind issues of type, sampling, design and evaluation;
- Use the correct terminology when communicating about online questionnaires;
- Collect information about sources of help when using online questionnaires.
Introduction: Appropriate use of online questionnaires
Why use online questionnaires?
Although the use of the internet has proliferated in recent years, its use for conducting online questionnaire surveys in the Social Sciences still remains relatively limited. This is surprising since the internet offers great methodological potential and versatility. Online questionnaires can offer distinct advantages. They can:
- Enable the researcher to contact a geographically dispersed population and so can be useful in internationalising research;
- Be used to contact groups often difficult to reach, such as the less physically mobile (disabled/in prison/in hospital) or the socially isolated (drug dealers/terminally ill) or those living in dangerous places (war zones);
- Provide savings in costs to the researcher (for example, the costs associated with travel, venue, data entry);
- Supply data quickly, providing fast alternatives to postal, face-to-face and telephone surveys.
The limited uptake of online questionnaires is partly owing to the perceived technical expertise required to use them. The aim of this module, therefore, is to disseminate information on the use, strengths and weaknesses, and design of online questionnaires, in the hope of increasing their use within the Social Science research community.
A note of caution
Despite the obvious attractions of online questionnaires, their use must be appropriate and justified for each particular research project. There are important issues that must be considered prior to conducting online questionnaires. These include:
- Ensuring that the use of an online questionnaire is the most appropriate research tool to address the aims of the research;
- Deciding on the most relevant questionnaire type and question format for addressing the aims of the research;
- Establishing a justified sampling strategy to recruit relevant respondents and ensure an appropriate response rate;
- Guaranteeing the ethical rights of respondents including informed consent, confidentiality and privacy.
A decision on whether it is appropriate to use online questionnaires depends on an evaluation of the relative advantages and disadvantages of this method in relation to other methods, and in the context of the specific topic to be studied. This module provides information and guidance that will help in addressing these issues.
Learning activity: Background reading
Instructions:
Read the following texts and make notes on when and why it would be better to conduct a questionnaire via the internet rather than using a postal, telephone or face-to-face survey.
- Coomber, R. (1997) Using the Internet for survey research, Sociological Research Online, 2, 2. This article is reproduced with the kind permission of Sociological Research Online and the author.
- Roster, C. A., Rogers, R. D., Albaum, G. and Klein, D. (2004) A comparison of response characteristics from web and telephone surveys, International Journal of Market Research, 46, 3. (pdf, 264 KB)
- Fricker, Jr., R. D. and Schonlau, M. (2002) Advantages and Disadvantages of Internet Research Surveys: Evidence from the Literature, Field Methods, 14, 4. (pdf, 150 KB) http://www.websm.org/uploadi/editor/advantages%20and%20disadvantages%20of%20internet%20research%20surveys.pdf
Advantages and disadvantages of online questionnaires
Advantages:
Speed and volume of data collection
Using online questionnaires enables the researcher to collect large volumes of data quickly and at low cost. Harris (1997), for example, reports that most completed online surveys are returned within 48-72 hours, making turnaround incredibly fast compared to onsite methods. Data can also be analysed continuously and imported directly into statistical tools and databases, increasing the speed and accuracy of analysis. Also, online questionnaires are usually easier and faster to update during the pilot phase, and data can be collected continuously, independent of the time of day or day of the week. It must, however, be noted that the time taken to prepare an online questionnaire can be substantial and will offset some of the time savings noted above. Also, a large volume of responses does not guarantee good quality responses.
Savings in costs
Costs associated with online questionnaires can be substantially lower than those associated with onsite surveys. Paperwork, telephone, postage and printing costs can be cut. Travel costs may be reduced and time may be saved by not having to travel to fieldwork sites. No costs are incurred for organising or hiring an interview venue. Software used to conduct online questionnaires is now often free, and savings may also be made on the costs associated with importing data for analysis. But it must be remembered that indirect costs can be passed to the participants, and this raises ethical issues. For example, respondents usually bear the costs of internet connection time. Additionally, financial benefits only accrue to researchers with institutional support in terms of computer equipment, software literacy training costs, internet connection time and technical support.
Flexible design
It is generally agreed that online questionnaires can provide a superior questionnaire interface compared to onsite surveys, as it is possible to make them more user-friendly and attractive, thus encouraging higher response rates. Each questionnaire can be tailored to individual respondents, with different questions being offered to different individuals. Questions can also be ordered randomly, and a dynamic interface can be provided with pop-up instructions and drop-down boxes. Skip patterns may be built in for ease of navigation. Online questionnaires can also be included on a dedicated website, which can be used as a platform to provide more information about the project, the researchers and the affiliated institution. Online questionnaires also enable multilingual formats and the pre-population of data about respondents. They can also include prompts if the respondent skips a question, and can include audiovisual stimuli. Together these features provide an inherently flexible design strategy, which Zhang (1999) suggests may increase a respondent's motivation to complete the questionnaire.
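To make two of these design features concrete, here is a minimal sketch in Python (our choice of illustration language; the module prescribes no particular tool) of random question ordering combined with a simple skip pattern. The question ids, wording and skip rule are all hypothetical.

```python
import random

# Hypothetical question bank: q2 and q3 only make sense for internet users.
QUESTIONS = [
    {"id": "q1", "text": "Do you use the internet daily? (yes/no)"},
    {"id": "q2", "text": "Which websites do you visit most often?"},
    {"id": "q3", "text": "Roughly how many hours per week are you online?"},
]

def next_questions(answers):
    """Return the remaining questions in random order, applying a skip pattern."""
    remaining = [q for q in QUESTIONS if q["id"] not in answers]
    # Skip pattern: respondents who answered 'no' to q1 skip the follow-ups.
    if answers.get("q1") == "no":
        remaining = [q for q in remaining if q["id"] == "q1"]
    random.shuffle(remaining)  # random ordering to reduce question-order effects
    return remaining

# A respondent who answered 'yes' to q1 sees q2 and q3, in random order.
for q in next_questions({"q1": "yes"}):
    print(q["text"])
```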
Data accuracy
Responses from online questionnaires can also be automatically inserted into spreadsheets, databases or statistical packages, such as Microsoft Access and SPSS. This not only saves time and costs in the analysis phase but also ensures that data processing is automated, reducing human error in data entry and coding. Data can be validated automatically: if a value is entered in an incorrect format, the web-based program can return an error message requesting the respondent to enter the data correctly and resubmit the questionnaire. This means that data entry errors are often low; there are no problems with interpreting handwriting, for example.
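As a sketch of the kind of automatic validation described above (a hypothetical server-side check in Python, not the behaviour of any particular survey package):

```python
import re

def validate(field, value):
    """Return an error message if the value is in the wrong format, else None."""
    if field == "age":
        if not value.isdigit() or not 0 < int(value) < 120:
            return "Please enter your age as a whole number, e.g. 34, and resubmit."
    if field == "email":
        # A deliberately simple pattern; real packages apply stricter checks.
        if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value):
            return "Please enter a valid email address and resubmit."
    return None

print(validate("age", "thirty"))  # -> error message returned to the respondent
print(validate("age", "34"))      # -> None, so the response is accepted
```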
Access to research populations
Online questionnaires can be useful in providing direct access to research populations without the need for 'cultural gatekeepers' who might restrict access to such groups. They also enable greater potential access to small, specific population sub-groups, such as people with particular illnesses, family structures or ethnicities, as the potential population one can draw on is generally larger than that of most onsite surveys. Finally, online questionnaires can be useful for contacting socially and physically isolated groups.
Anonymity
The anonymity provided by online questionnaires can also be helpful for some topics. Harris (1997), for example, suggests that interviewer bias is reduced or eliminated in online surveys. Pealer et al. (2001) also report that respondents are more likely to answer socially threatening questions in online questionnaires than in onsite surveys. This is because during online questionnaires the tangible presence of the researcher is removed, so bodily presence (age, gender, ethnicity, hairstyle, clothes, accent) becomes invisible. It has been claimed that this can lead to online research becoming a 'great equaliser', with the researcher having less control over the research process and potentially becoming a 'participant researcher'. Others have argued that this is a utopian vision: while the 'lived body' is invisible during an online questionnaire, pre-interpreted meanings and unstated assumptions are clearly 'visible' in the creation of online questions, because we do not leave the body, and all its material inequalities, behind when we enter cyberspace (see Sweet 2001). Additionally, the 'equaliser argument' glosses over the structural power hierarchies that enable researchers to set the agenda, ask the questions and benefit from the results of the survey process.
Respondent acceptability
As online questionnaires are quick to complete, and can be completed at a time and place convenient to the respondent, they are often more popular than onsite surveys. Madge and O’Connor (2002), for example, used online questionnaires to research mothers of newborn babies. They concluded that online methods were particularly suitable for contacting this particular population group because onsite surveying was not feasible owing to physical and mental exhaustion of the mothers after childbirth and the constant demands of caring for a new baby. In this research project the use of online questionnaires enabled a 'community' (women with new-borns or young children) notoriously difficult to reach and hence habitually left out of research, to be contacted.
Disadvantages:
Sample bias
Perhaps the most questionable, and certainly the most commonly debated, aspect of online questionnaires is sample bias. There are enduring social and spatial divides in access and use of the internet which can induce sample biases to any online research. Also the researcher has less control over the sample population and so has no way of discerning if there are several respondents at one computer address or if one respondent is completing a questionnaire from a variety of computers. Because of the complexity of the debate, full details of this are discussed in the 'sampling' section of this module.
Measurement error
Some researchers (Sax et al. 2003) have recorded 'measurement errors' because responses to the same question vary if the questionnaire is administered online or onsite. Others (Carini et al. 2003) note this measurement error to be particularly large when technology related questions are included in a questionnaire because respondents who complete online surveys are usually more technologically competent than those completing onsite surveys.
Non-response bias
This is the bias introduced when the respondents who answer an online questionnaire have very different attitudes or demographic characteristics from those who do not respond. This is particularly the case for online questionnaires because some social groups are underrepresented among internet users, including people of limited financial resources, members of some ethnic groups, older people and those with lower educational levels (Umbach, 2004). Non-response bias is also increased when different levels of technical ability are present among the respondents, and it becomes a particular problem when response rates are low. This may be related to anxieties about getting viruses or becoming a victim of identity theft. Bosnjak et al. (2001) have identified seven patterns in a typology of non-response, as shown in the following diagram:
Adapted from: Bosnjak, M., Tuten, T. L. and Bandilla, W. (2001) Participation in Web Surveys - A Typology, ZUMA Nachrichten, 48, 7-17.
Descriptions (p.12)
Complete responders:
Those responders who view and answer all questions.
Answering drop-outs:
Those who provide answers to those questions displayed, but quit prior to completing the survey.
Item nonresponders:
Those responders who view the whole questionnaire, but only answer some of the questions.
Item nonresponding Drop-outs:
Those who view some of the questions, answer some but not all of those viewed, and also quit prior to the end of the survey. A 'more accurate depiction of actual events in web surveys than the relatively basic categorization of complete participation, unit nonresponse, or item non-response.'
Lurking drop-outs:
Those who view some of the questions without answering, but also quit the survey prior to reaching the end, thus sharing some characteristics with 'answering drop-outs' and 'lurkers'.
Unit non-responders:
Those who do not participate in the survey. There are two possible variations: They may be 'technically hindered' or may 'purposefully withdraw after the welcome screen is displayed, but prior to viewing any questions'.
Lurkers:
Those who view all of the questions in the survey, but do not answer any of them.
Bosnjak, M., Tuten, T. L. and Bandilla, W. (2001) Participation in Web Surveys - A Typology, ZUMA Nachrichten, 48, 7-17.
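To make the typology concrete, the following minimal Python sketch (ours, not Bosnjak et al.'s) classifies a respondent into the seven patterns from three pieces of information a web survey can log: which questions were viewed, which were answered, and whether the respondent reached the end of the survey.

```python
def classify(viewed, answered, total, finished):
    """Assign a respondent to one of the seven non-response patterns.

    viewed, answered: sets of question ids displayed / answered;
    total: number of questions in the survey;
    finished: whether the respondent reached the end of the survey.
    """
    if not viewed:
        return "unit non-responder"          # never saw any questions
    if finished and len(viewed) == total:
        if len(answered) == total:
            return "complete responder"      # viewed and answered everything
        if not answered:
            return "lurker"                  # viewed everything, answered nothing
        return "item non-responder"          # viewed everything, answered some
    # Everything below quit before reaching the end of the survey.
    if not answered:
        return "lurking drop-out"            # viewed some, answered none, quit
    if answered == viewed:
        return "answering drop-out"          # answered all displayed, then quit
    return "item non-responding drop-out"    # viewed some, answered fewer, quit

print(classify({1, 2, 3}, {1, 2, 3}, 10, finished=False))  # answering drop-out
print(classify(set(range(10)), set(), 10, finished=True))  # lurker
```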
Length, response and drop out rates
Online surveys may have to be shorter than those conducted onsite. Response rates drop off after 10-15 questions and are directly and negatively correlated with questionnaire length (Harris 1997). It is also reported that online surveys have lower overall response rates than onsite surveys, with Witmer et al. (1999) suggesting that response rates of 10% or lower are common. Additionally, it has been suggested that drop out from online questionnaires is much more likely than from onsite questionnaires. This may be because respondents usually cannot ask individual questions about how to complete an online questionnaire, which can increase drop out rates. Finally, online questionnaires can be easily ignored and deleted at the touch of a button, so getting a reasonable response rate can be challenging.
Technical Problems
Various technical problems can occur with online questionnaires. A computer or server may crash, for example, especially if the questionnaire is very long. There is also great technical variance in computers, monitors, browsers and internet connections, which may have design implications: what works on a high-spec system may be impossible to read on a low-spec system. Specific technical problems noted include Smith's (1997) observation that a pop-up box implemented using JavaScript failed to appear when the respondent pressed the 'submit' button, resulting in multiple copies of the same data being sent as the user tried repeated submissions. Minimising reliance on complicated features is therefore important, and careful piloting will minimise technical difficulties. All these problems can be compounded by the fast pace of change associated with information technologies, delivery devices, web interfaces and hardware and software tools. These changes will influence our methodological options with respect to online questionnaires. For example, what methodological consequences will wireless technologies such as mobile phones (which separate the internet from the computer), interactive television and speech recognition software have on the development of online questionnaires? A final disadvantage is that a certain level of technical expertise is required to administer an online questionnaire. If researchers do not have this knowledge, they must take time and effort to obtain it, or rely on a programmer to provide it.
Ethical issues
Protecting respondent privacy and confidentiality is a significant ethical
issue. Spamming can be considered an invasion of privacy (Umbach 2004). Researchers
must be very careful not to unwittingly collect information without respondent
permission. Data security is also important to protect the anonymity and confidentiality
of the respondent. Because of the complexity of the debate, full details of
these ethical issues are discussed in the 'Online
research ethics' module.
Learning activity: Review quiz
Part 1
Answer the following questions by selecting the one correct option.
1. According to Harris (1997) in what time period are most completed online
surveys returned?
a. 1-2 days
b. 2-3 days
c. 4-7 days
d. After a week
2. Which of the following costs can be saved if a researcher uses an online
questionnaire?
a. Travel costs
b. internet connection time
c. Data entry
d. Postage
e. All of the above
3. Which of the following are disadvantages of online questionnaires?
a. Inaccuracy of data collection
b. Completed at home
c. Anonymity of questionnaires
d. Non-response bias
4. Which of the following would be considered good design principles for
an online questionnaire?
a. It takes around 30 minutes to complete
b. It indicates the time needed to complete the questionnaire
c. It contains 30 questions
d. It provides added functionality when used on a high-spec system, but
can still be used on a low-spec one
Part 2
Give a short description of the following types of non-response:
a. Complete responders
b. Item non-responders
c. Unit non-responders
d. Lurking dropouts
Part 3
The following diagram shows seven patterns of non-response described in the 'non-response bias' section above. Complete it by adding the correct types (1-5) to the correct place in the diagram (a-e).
1. Answering dropouts
2. Item non-responding dropouts
3. Lurkers
4. Complete responders
5. Unit non-responders
Adapted from: Bosnjak, M., Tuten, T. L. and Bandilla, W. (2001) Participation in Web Surveys - A Typology, ZUMA Nachrichten, 48, 7-17.
Answers:
Part 1
1. - b. 2-3 days
Comment: This is an incredibly fast turnaround compared to onsite methods.
2. - e. All of the above
Comment: When an online questionnaire is used, all these costs are either
saved or usually incurred by the respondent.
3. - d. Non-response bias
Comment: Non-response bias is a disadvantage of any questionnaire, and online
questionnaires are no exception, as the respondents of an online questionnaire
may have very different attitudes or demographic features to those who do not
respond. The other three answers are usually considered advantages of online
questionnaires. Accuracy of data collection is high as responses can be automatically
inserted into data bases reducing human error in data entry. Anonymity can be
an advantage as it can remove interviewer bias and can be useful for sensitive
questions. Additionally, many respondents prefer online questionnaires to onsite
questionnaires as they can be completed at home at a suitable time.
4. - b. It indicates the time needed to complete the questionnaire
Comment: All questionnaires (onsite and online) should give the estimated
time needed to complete the questionnaire. The other three responses are
incorrect. It has been suggested that ideally an online questionnaire should
contain a maximum of 15 questions and take approximately 10 minutes to complete.
Additionally, unless minimum hardware requirements are specified, online
questionnaires must be designed so they can operate equally well on both
high-tech and low-tech systems.
Part 2
a. Complete responders
Those responders who view and answer all questions.
b. Item non-responders
Those responders who view the whole questionnaire, but only answer some of the questions.
c. Unit non-responders
Those who do not participate in the survey. There are two possible variations: They may be 'technically hindered' or may 'purposefully withdraw after the welcome screen is displayed, but prior to viewing any questions'.
d. Lurking dropouts
Those who view some of the questions without answering, but also quit the survey prior to reaching the end, thus sharing some characteristics with 'answering drop-outs' and 'lurkers'.
Part 3
a. - 4. Complete responders
b. - 1. Answering dropouts
c. - 2. Item non-responding dropouts
d. - 5. Unit non-responders
e. - 3. Lurkers
Types of online questionnaires
Web-based questionnaire
A questionnaire is designed as a web page and either hosted on a website or linked from an email message.
Advantages
- Generally provides a far superior questionnaire interface to email surveys (can include icons, colours and graphics).
- Possible to make questionnaire more user-friendly and attractive than email-based questionnaires, encouraging higher response rates.
- Can be included on a dedicated website which can be used as a platform to provide more information about the research project and the researchers.
- All web pages can include links to the affiliated institution, to give the project credibility and ensure the participants can verify the authenticity of the research.
- Can be designed to be simple and quick to fill in and include a variety of question types including tick box yes/no questions, ranking attitudinal questions, open-ended responses, multiple select questions, sliding scales and drop down lists.
- Questions can be randomly ordered and tailored to suit the profile of particular respondents.
- Responses can be fed automatically into a spreadsheet or database, increasing speed and accuracy of data collection.
- Respondents may provide more candid responses as there is a greater sense of anonymity.
Disadvantages
- Design does require some degree of technical expertise so it may be necessary to employ a web designer, increasing costs.
- Relies on respondents coming to the webpage, so issues of recruitment are crucial; this can be overcome by linking from relevant websites (with permission) or sending solicited emails with a hypertext link.
Example
Geographies of eBay (Colin Williams, Oli Mould, Tim Vorley: University of Leicester)
A link to the survey was sent to specific eBay users through the 'contact user' function. Participants involved in different sales categories were contacted according to the sampling frame, which established the proportion of items in different sales categories. Completed questionnaires requested respondents' contact details, which were used for verification of identity and subsequent contact.
E-mail questionnaire
The questions are submitted as part of the email itself.
Advantages
- Sent directly to respondent ensuring delivery to recipient.
- Requires little preparation, so low cost.
- Easy to design and answer.
- Easy for respondent to return via email 'reply' button.
- Few technical skills required.
Disadvantages
- Questionnaire design usually simplistic.
- Not attractive owing to limited design features.
- Has to be quite short or response rates will be very low.
- Graphics and embedded objects cannot be included.
- Results must be hand-entered into a database, which increases time, costs and data entry errors.
- Valid email addresses required for sampling purposes.
- Anonymity of respondents may be jeopardised as the email address is returned with the questionnaire, so respondents may answer in a more socially desirable manner.
Example
Perceptions of folk belief (Claire Hewson, University of Bolton)
Sampling was done by posting a brief message to a number of newsgroups. This provided a short introduction and asked for people who might be interested in taking part in a study looking at folk perceptions of belief to send an email to the researcher requesting either more information, or the study questionnaire/materials.
This questionnaire consisted of two stories, each followed by a question.
The first story was adapted from the following text:
Stich, S (1983) From folk psychology to cognitive science: The case against belief. Cambridge, MA: MIT Press. (pp. 60-61)
Questionnaire:
Thanks for taking part in this experiment. Before proceeding could you
please indicate below your general academic background (i.e. what subjects
you have studied):
This questionnaire consists of two stories, each followed by a question.
For each story you are to read it through to the end and then answer the question which follows. You may now proceed.
Below is a short story. Please read this story and then answer the question at the end.
This story is about two men, Tom and Dick. Tom is a contemporary of ours, a young man with little interest in politics or history. From time to time he has heard bits of information about Dwight David Eisenhower. We can assume that most of what Tom has heard is true, though there is no need to insist that all of it is. Let us also assume that each time Tom heard something about Eisenhower, Eisenhower was referred to as 'Ike'. Tom knows that this must be a nickname of some sort, but he has no idea what the man's full name might be and doesn't very much care. Being little interested in such matters, Tom remembers only a fraction of what he has heard about Ike: that he was both a military man and a political figure; that he played golf a lot; that he is no longer alive; that he had a penchant for malapropisms; and perhaps another half dozen facts. He has no memory of when or where he heard these facts, nor from whom.
Dick, in this story, is a young man in Victorian England. Like Tom, he is bored by politics and history. Dick has heard some anecdotes about a certain Victorian public figure, Reginald Angell-James, who, for some reason that history does not record, was generally called 'Ike'. And in all the stories that Dick has heard about Angell-James, the gentleman was referred to as 'Ike'. Angell-James and Eisenhower led very different careers in different places and times. However, there were some similarities between the two men. In particular, both were involved in politics and the military, both liked to play golf, and both had a penchant for malapropisms. Moreover, it just so happens that the few facts Dick remembers about Angell-James coincide with the few facts Tom remembers about Eisenhower. What is more, Dick would report these facts using the very same sentences that Tom would use, since the only name Dick knows for Angell-James is 'Ike'.
Now, suppose that one fine day in 1880 one of Dick's friends asks him what he knows about Ike. Dick replies "He was some kind of politician who played golf a lot." A century later, one of Tom's friends asks him an identically worded question, and Tom gives an identically worded reply.
Question: Do Tom and Dick have the same or different beliefs when they say "He was some kind of politician who played golf a lot"? (If you wish you may give a reason for your answer).
Below is a short story. Please read this story and then answer the question at the end.
This is a story about two identical twins, Sam and Jim. Sam and Jim grow up to have a very close relationship, each is very fond and proud of his brother. Because they are so close, Sam and Jim spend a lot of time together, and, in fact, tend to adopt very similar mannerisms and interests. However, in order to try and maintain some degree of individuality the brothers dress quite differently, Sam being happiest in a suit and Jim preferring to wear jeans. Being very friendly and outgoing young men, Sam and Jim have many good friends. One such friend is Sara. Sara knows both Sam and Jim very well, and though they look identical and mimic each other's mannerisms, she doesn't have any difficulty telling them apart, because of their differing styles of dress.
However, one sunny morning Sara looks out of her window and notices Sam walking past, looking rather smart as usual, and thinks to herself "Ah, there goes Sam walking past my window". In fact, she is mistaken since it is not Sam but Jim, dressed up for a job interview. Meanwhile, Sam, as his usual smart self, happens to be walking past his cousin Jane's window, and she also looks out and thinks "Ah, there goes Sam walking past my window".
Question: Do Sara and Jane have the same or different beliefs when they say "Ah, there goes Sam walking past my window"? (If you wish you may give a reason for your answer).
When participants returned their responses, they were sent an email which thanked them for taking part, provided brief information about the study, and asked them to get in touch if they had further questions. Here is a typical example:
Thank you for taking part in this study. Below is some information about the nature of this research.
To put things most simply, the purpose of this experiment is to try and find out something about people's everyday commonsense concept of belief. (The reason why I asked about your academic background was to assess whether this is likely to have affected your commonsense intuitions on the matter).
What I am trying to find out is the kind of factors that influence whether people will characterise two beliefs as being the same or different. So, for example, if two people both make a statement such as "there goes Sam walking past my window", are people likely to conclude that they have the same belief? Will this conclusion be modified depending on whether each person REALLY IS seeing Sam, or whether one of them is actually seeing Jim (having mistaken him for Sam)?
The answers you gave to the questions should provide some information about your intuitions on these matters. My intention is to compare the intuitions that people really do have with what philosophers have claimed people's intuitions are likely to be. (The specific area of philosophy that I am looking at comes under the heading of the 'Folk Psychology Debate'.)
I hope you found the experiment interesting. Please do contact me if you have any questions or would like further information about this study.
Questionnaire attached to an email
A questionnaire is sent as an attachment to an email.
Advantages
- Can use word-processing or spreadsheets so simple to produce.
- Can be more attractive than email questionnaire as more design features.
- Can increase response rates as more visually appealing.
- Few technical skills required.
- Reduced length of email required and so increases participation.
Disadvantages
- Reply may be complicated as the respondent has to open, complete and save the attachment, then reattach it to an email and return it.
- Respondent must have software capable of reading attachment.
- Attachments may expose respondents to more computer viruses.
- Results may have to be hand-entered into a database, which increases time, costs and data entry errors.
- Valid email addresses required for sample.
Example
Employer perspectives survey (Cyndy Hawkins, University of Leicester)
The following link shows an example of a Microsoft Word form which makes use of check boxes. This document was sent to participants as an email attachment, and hard copies were also distributed to provide a choice of methods of returning it.
Screenshot of a word questionnaire sent as an attachment
Sampling issues
Recruitment
Accessing respondents is a key concern in online questionnaires. As Coomber (1997) has highlighted, there is little point in setting up an online questionnaire and passively 'waiting' for eligible respondents to find the site: more active enrolment is needed to encourage users to complete an online survey. The significance of having relevant website providers 'on your side' cannot be overestimated. This access issue is becoming increasingly important. As the use of the internet increases in the general population, and the novelty of responding to online questionnaires wears off, getting online users to complete online questionnaires is becoming more problematic. Online users are becoming wise to the fact that they are paying for the privilege of being 'over-surveyed' (McDonald and Adam 2003). The result is that online users are intolerant of unsolicited communications, and invitations to participate in research are increasingly considered 'spamming' (Harris 1997), resulting in online surveys often having lower response rates than onsite surveys. Witmer et al. (1999), for instance, report response rates of 10% or lower being common for online surveys.
Sampling
A further issue of concern when using online questionnaires is that they present serious sampling problems for a study based on the quantitative tradition. There is no access to a central registry, or master database, from which to create an accurate sampling frame, nor is there any way of discerning how many users are logging on from a particular computer or how many accounts/memberships a particular individual might have. This means random sampling or gaining a representative sample is not possible. Internet surveys on the whole, therefore, attempt to select a sub-set of users to participate in the survey. This may be through attempts at non-probability sampling, or through self-selection. Coomber (1997) has suggested that online self-selection is suitable to use when researching a particular group of internet users, especially when connecting with groups that are not bound in a particular area but that share a common interest (O'Lear 1996, 210). So while self-selection may clearly limit the scope of the results where broad sample representativeness is required, it is important for reaching marginal groups or if the researcher is conducting an interpretive investigation. Moreover, it must be noted that self-selection occurs in many conventional surveying situations and is not unique to online research.
There is, however, divergent opinion as to whether the internet provides an inherently biased sample population for quantitative studies. Research has documented that in the early years of its inception, those using the internet tended to be predominantly male, white, first-world residents under 35 years old, while those with lower educational levels, lower incomes, living in rural areas and black or Hispanic people were underrepresented (Mann and Stewart 2000). Some argue that access to the internet is still highly unevenly distributed both socially and spatially (Janelle and Hodge 2000; Warf 2001). Indeed, according to Silver (2000), the digital divide has continued to grow in America, and this divide is fast becoming a 'racial ravine', suggesting a biased internet user sample population. Hewson et al. (2003), however, are more optimistic. They argue that overall the evidence suggests that the internet user population now represents a vast and diverse section of the general population, and that it is rapidly moving beyond the select group of technologically-proficient male professionals who were once largely predominant. Dodd (1998, 63), for example, argues that the internet's broad scope can actually improve representativeness, as many population groups usually difficult to contact may be easier to access via the internet, while Litvin and Kar (2001) show that the sample characteristics of conventional and electronic methods are converging, with electronically solicited samples becoming more like random paper-based samples as technological uptake of the internet increases. Indeed, most recent research opinion suggests that as online surveys can often survey an entire population of a particular group, rather than a sample, they can reduce or eliminate the effects of sampling error altogether (Umbach 2004). Moreover, samples can also be weighted to reduce bias: if a certain demographic group is underrepresented, its responses can be counted more heavily (Best and Krueger 2004).
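As a sketch of the weighting idea, here is a minimal post-stratification example in Python with invented figures; a real study would take its population targets from census or survey benchmarks.

```python
# Hypothetical respondents and hypothetical population targets.
sample = [{"group": "under_35"}, {"group": "under_35"}, {"group": "over_35"}]
population_share = {"under_35": 0.5, "over_35": 0.5}

sample_share = {
    g: sum(r["group"] == g for r in sample) / len(sample)
    for g in population_share
}

# Weight = population share / sample share, so responses from an
# underrepresented group count more heavily in the analysis.
for r in sample:
    r["weight"] = population_share[r["group"]] / sample_share[r["group"]]

print(sample)
# under_35 respondents get weight 0.75, the over_35 respondent gets 1.5
```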
Identity verification
A further issue relating to online questionnaires involves verifying the identity of the participants and the reliability of their responses. Often it quite simply is not possible to verify the identity of respondents, so some respondents may be 'spoofs' or may indeed play with their online identity in completing the research (Roberts and Parks 2001). Online research also does not enable the researcher to assess the reliability of responses. As Hewson et al. (2003, 44) state: '…when materials are administered via a computer terminal rather than in person, the researcher is less able to judge the extent to which the responses are sincere and genuine, the conditions under which the questionnaire was answered and the state of the participants at the time of participation (for example, intoxicated, distracted, and so on)…'. While this is at present an irresolvable sampling issue for online research, it is not unique to virtual methods: incorrectly completed questionnaires, unreliable responses and non-verifiable identities may also be a feature of onsite surveys. Moreover, in conducting online community research, how necessary is it to 'prove' the offline identity of the participants anyway? Taylor (1999, 443) argues that this depends on the initial research question and that '…the acceptance of online life as a thing in itself' is important. Indeed, it is increasingly recognised that online textual personas cannot be separated from the offline physical people who construct them, and they are commonly based on offline identities in any case (Valentine 2001). Additionally, recent research suggests that the anonymity of participants can play a positive role in the research process, reducing researcher bias and being particularly useful for embarrassing and sensitive topics (Hewson et al. 2003). But identity verification raises issues for the research participants too, as it may be harder for them to verify the researcher's identity. As Raghuram (personal communication, 2005) notes: 'One danger of online questionnaires is that questionnaires involve trust and it may be harder to build up trust when you are not face-to-face. Physically seeing a face can give you a sense of reassurance and in itself provides a forum for communication.' One way to overcome this problem is to have a dedicated project website on which the identity of the researchers and their institutional affiliation can be verified (see Madge and O'Connor, 2002).
Response rates
According to Jeavons (1998), response rates show no relationship to gender, age or education level. However, response rates to online questionnaires do drop off rapidly. Crawford et al. (2001) suggest that if people are going to complete a web survey they will do so in the first few hours or days of receiving it. However, it has also been found that response rates can be increased by follow-up reminders. Crawford et al. (2001) propose that a single reminder email can double the number of respondents, while Schaefer and Dillman (1998) found that four repeated contacts yielded the highest response rate. To improve response rates, online questionnaire formats should be simple. Complex graphics, grid questions, open-ended questions and requests to supply email addresses all reduce response rates (Jeavons 1998; Knapp and Heidingsfelder 2001; Porter and Whitcomb 2003a). Response rates can also be improved with carefully worded introductory letters or emails which include details of the estimated time to complete the survey and a statement indicating that the respondent is part of a small group chosen to participate in the study (Porter and Whitcomb 2003b). Short surveys (maximum ten minutes) improve response rates (Crawford et al. 2001), as do those that request personal information at the start of the questionnaire rather than the end (Frick et al. 2001). The type of internet connection and the hardware and software used in accessing the internet will also impact on response rates, while emphasising confidentiality has also been found to increase response rates. Best and Krueger (2004) also suggest that introducing a 'social presence' will discourage item non-response. To avoid particular questions not being completed, messages can be inserted to express gratitude, emphasise importance or describe progress. The use of missing-data messages when an item has not been completed has also been found to reduce item non-response.
To improve response rates: A checklist
- Send introductory letter outlining project and estimated time needed to complete the questionnaire
- Provide clear instructions on how to complete the questionnaire
- Request personal information at the start of the questionnaire rather than the end
- Use simple questionnaire format and avoid unnecessary graphics
- Avoid grid questions, open-ended questions and requests for email addresses
- Design survey so it takes approximately 10 minutes to complete
- Do not include more than 15 questions
- Send one or two follow-up reminders (see the sketch after this checklist)
- Include 'social presence' or missing data messages to reduce item non-response
- Emphasise confidentiality
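A minimal sketch of how the reminder item in this checklist might be tracked (hypothetical addresses and dates; the actual sending would be handed to a mailer such as the one sketched in the 'Emailing list based samples' section below):

```python
import datetime

MAX_REMINDERS = 2      # more than one or two reminders risks being seen as spam
REMINDER_GAP_DAYS = 7  # wait a week between contacts

invitees = {  # hypothetical tracking records
    "a@example.org": {"responded": False, "reminders": 0,
                      "last_contact": datetime.date(2005, 3, 1)},
    "b@example.org": {"responded": True, "reminders": 0,
                      "last_contact": datetime.date(2005, 3, 1)},
}

def due_for_reminder(record, today):
    """A reminder is due only for non-responders, within the reminder limit."""
    return (not record["responded"]
            and record["reminders"] < MAX_REMINDERS
            and (today - record["last_contact"]).days >= REMINDER_GAP_DAYS)

today = datetime.date(2005, 3, 10)
for address, record in invitees.items():
    if due_for_reminder(record, today):
        print(f"Send reminder to {address}")  # hand off to your mailer here
        record["reminders"] += 1
        record["last_contact"] = today
```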
Incentives
The impact of incentives on improving response rates is mixed. Some researchers suggest that incentives have no effect on response rates (Cook et al. 2000), while others indicate some improvement when an incentive is introduced (Bosnjak and Tuten 2001). There are three main types of incentive: cash equivalents through web-based companies, gift certificates from popular retailers, or lotteries that promise financial or product rewards. Bennett (2000) proposes that the incentive must be relevant to the audience, while Birnholtz et al. (2004) suggest that cash is a superior incentive to gifts for an online survey, even with technologically sophisticated respondents. This may be due to the perceived limitations, delayed payoff or reduced visibility of online gift certificates.
Summary
It is clear, therefore, that although the online questionnaire has great potential for reaching specific groups that are difficult to access using conventional means, and for providing a very large worldwide pool of respondents, it also has the potential to privilege the views of those with computer access. This is especially the case if the research is presented uncritically, without reference to the sampling procedure. Findings from online questionnaires are indicative, should be read with caution and analysed with acceptance of the likely sample bias (although the degree of this cannot be measured). Thus, according to Wakeford (2000, 33): 'The quantity of information that may be generated, and the speed at which responses can be collected, can result in pleasing piles of data - but we should be wary of being seduced by sheer quantity; data is only useful if it is representative of the larger population.' Recent research, however, hints that in the future sampling may become a less significant issue in the virtual environment. Riva et al. (2003), for example, report no significant differences in responses to the same questionnaire between online participants and those completing a paper survey, even when the online sample is not controlled. It is likely that the use of online questionnaires will increase further in popularity as problems of coverage bias and unfamiliarity subside. Also, as the tools for conducting online questionnaires improve in sophistication and research on how best to employ this particular method progresses, it is likely that the use of online questionnaires will proliferate.
How to select a sample: A procedure
Following Best and Krueger (2004), there are five main stages in drawing a sample for an online questionnaire:
1. Specify the target population
The target population will be informed by the aims of the research. Issues of non-response bias must be considered, as respondents who answer an online questionnaire may have very different attitudes or demographic characteristics from those who do not respond. This is particularly the case because some social groups may be underrepresented among internet users, including people of limited financial resources, members of some ethnic groups, older people and those with lower educational levels (Umbach, 2004). Additionally, specifying the target population will depend upon the specific internet service to be researched, as different numbers and types of people use different services. For example, 89% of internet users use email and 81% use the web, but only 18% use mailing lists (Best and Krueger 2004). Additionally, list users are more likely to be non-white, employed, married and parents than email and web users (Best and Krueger 2004).
2. Develop the sample frame
After specifying the target population, the sampling frame must be designed. This sampling frame is used to identify and locate suitable respondents. The development of the sampling frame for online questionnaires can be more difficult than for onsite questionnaires, as specific computers and their users cannot be identified or located in advance. Development of the sampling frame will also depend upon the specific service being used (Best and Krueger 2004). For example, for email users the email addresses of potential respondents must be discerned, but there is no one comprehensive directory of email addresses, so they must be gained from the holder, an associated user, or an organisation that compiles email lists for internal or external purposes. For mailing lists, web portals or searches must be used to locate the appropriate mailing list, and the researcher must then subscribe in order to communicate with the users of the list, bearing in mind the associated ethical issues.
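A minimal sketch of assembling a sampling frame from several email lists (the addresses are hypothetical): duplicates and malformed entries are removed before any sampling is attempted.

```python
import re

# Hypothetical address lists gathered from holders or organisations.
raw_lists = [
    ["alice@example.org", "bob@example.org", "not-an-address"],
    ["BOB@example.org", "carol@example.org"],
]

EMAIL_RE = re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+")

# Lower-case to merge duplicates; keep only plausibly valid addresses.
frame = sorted({
    addr.lower()
    for source in raw_lists
    for addr in source
    if EMAIL_RE.fullmatch(addr)
})
print(frame)  # ['alice@example.org', 'bob@example.org', 'carol@example.org']
```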
3. Choose a sampling method
Researchers must then determine the members of the sampling frame to be selected, based on probabilistic or non-probabilistic sampling methods. Probabilistic sampling methods ensure that each member of the sampling frame has an equal chance of being selected. This can be problematic in online research, as there is no access to a central registry, or master database, from which to create an accurate sampling frame, nor is there any way of discerning how many users are logging on from a particular computer or how many accounts/memberships a particular individual might have. Probabilistic sampling is therefore only possible when the target population is restricted to a group of users that can be fully identified and contacted, for example when a complete list of email addresses can be discerned for schools or trade associations. Non-probabilistic sampling, whereby a sub-set of users is selected, is more common in online questionnaires. Coomber (1997) has suggested that online self-selection is suitable when researching a particular group of internet users, whilst O'Lear (1996) suggested that this is particularly useful when connecting with groups that are not bound in a particular area but that share a common interest. Generalisations from such specific user groups can be problematic, but some attempts have been made to overcome this through poststratification weighting (see Taylor et al. 2001) and propensity scoring (see Miller and Panjikaran 2001). Whatever the sampling method selected, its limitations must be clearly stated and taken into account in any analysis and conclusions.
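Where a complete frame does exist (the school or trade-association case above), a probabilistic draw is straightforward; a minimal sketch with a hypothetical frame:

```python
import random

# Hypothetical complete frame, e.g. all members of a trade association.
frame = [f"member{i}@example.org" for i in range(500)]

random.seed(42)                       # record the seed so the draw is reproducible
sample = random.sample(frame, k=100)  # every member has an equal chance of selection
print(sample[:3])
```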
4. Determine the size of the sample
The selection of the sample size will be determined by the particular research question. It will also depend on the desired number of cases, the extent of invalid contact and the projected cooperation of respondents (Best and Krueger 2004). Witmer et al. (1999) report that response rates of 10% or lower are common for online surveys, so this must be built into decisions about sample size.
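A back-of-the-envelope sketch of that calculation, with invented figures:

```python
target_completes = 200         # cases needed for the analysis (hypothetical)
expected_response_rate = 0.10  # Witmer et al. (1999): 10% or lower is common
invalid_contact_rate = 0.05    # share of addresses expected to bounce (hypothetical)

invitations = target_completes / (expected_response_rate * (1 - invalid_contact_rate))
print(round(invitations))      # -> 2105 invitations needed
```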
5. Implement contacting procedures
Care must be taken when initiating contacting procedures, and it is important to have ethical clearance for the research. Respondents should also have given their informed consent, and the questionnaire should follow equal opportunity guidelines. The Higher Education and Research Opportunities in the UK (HERO) webpage on Professional Ethics and Equal Opportunities provides an overview of the key issues along with a range of relevant links.
According to Best and Krueger (2004) there are three main methods of recruitment for online questionnaires. The choice will depend on the sampling strategy and the aims of the research:
Soliciting visitors to web sites
The online questionnaire is posted on a website, which visitors view and complete using a web browser. Respondents can be recruited by placing a hypertext link on the home page if the website receives heavy traffic. A program can be installed on the web server to deliver the survey randomly to people who visit the home page, but using this strategy makes estimation of the sampling frame difficult, thus precluding measurement of response rate and non-response bias. If the webpage does not receive sufficient traffic, then invitations can be sent out via email, postal mail, advertisements in the media, postings on frequently used online services, or in-house directories of online addresses. Online advertisements can also be posted on frequently visited websites inviting respondents with specified characteristics (e.g. age, sexuality and occupation).
Various locations can be selected, including entry portal sites (e.g. Microsoft Network), sponsored search engines and directories (e.g. Yahoo) and sponsored content sites (e.g. Amazon.com), but formal requests must be made and this may be costly. Adverts can be embedded (displayed to all internet users visiting a particular web page, occupying some proportion of the page) or intercept (appearing in a separate browser window, including pop-ups and floating ads). Care must be taken, as users may have software designed to prevent intercept ads being displayed on their systems. Features of the advert that encourage users to visit the website include simple intrinsic appeals ('Contribute to an important study') and a stationary background rather than moving images. The advert then links directly to the main website for formal recruitment. Such approaches are useful for recruiting large, diverse non-probabilistic samples and can also be useful for targeting particular groups through appeals on specialist websites (see example).
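As a sketch of the random-delivery idea mentioned above, here is a minimal web server (using Flask, our choice; any server-side framework would do) that invites roughly one visitor in ten to the survey. The URL and sampling fraction are hypothetical.

```python
import random
from flask import Flask, redirect

app = Flask(__name__)

SURVEY_URL = "https://example.org/survey"  # hypothetical survey address
INVITE_FRACTION = 0.10                     # invite roughly 1 visitor in 10

@app.route("/")
def home():
    # Randomly divert a fraction of home-page visitors to the questionnaire.
    if random.random() < INVITE_FRACTION:
        return redirect(SURVEY_URL)
    return "<h1>Welcome</h1>"              # the normal home page for everyone else

if __name__ == "__main__":
    app.run()
```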
Example
How an Online Support Network Affects the Experience of Living with a Food Allergy - Dr. Neil Coulson and Dr. Rebecca Knibb, University of Derby.
A request was made to place the following advertisement on the homepage of an online support group website, FAST (Food Allergy Survivors Together).
Screenshot of a homepage and advert
By following the link, users were given the following simple message:
There are currently two exciting opportunities to participate in online research. Please show your support for this research by responding to the short questionnaires.
News Release for All Members
Posted: January 14, 2005
How an Online Support Network Affects the Experience of Living with a Food Allergy
This is a survey/research project that you are invited to participate in. It is being conducted by Dr. Neil Coulson and Dr. Rebecca Knibb at the University of Derby, England, UK. Please select the link below.
Emailing list based samples
The email message contains an embedded hyperlink to a website hosting the online questionnaire.
- There is no central registry of email users, so no comprehensive lists are available from which to generate probabilistic samples.
- Email lists can be obtained from public databases, web portals and individual websites, but these can be outdated and incomplete, and are often search-driven so can be time-consuming to obtain.
- Email addresses can be purchased, but companies may have gained permission without users being fully aware, so ethical problems can be encountered.
- Email solicitations can be considered 'spamming' so direct permissions should be sought where possible and all research should include institutional legitimization.
- Emails should be sent directly to a single recipient; more than one address should never be listed in the 'to' or 'cc' field, since all the recipients will see the entire list. The 'bcc' function can be used to send a single message to multiple recipients without revealing users' email addresses to other participants (see the sketch after this list).
- Include a valid email address in the 'from' field or recipients may consider your message 'spam'.
- The 'subject' field must be precise and attract users to participate in the study.
- Emails with attachments are less likely to be opened owing to virus threats.
- The email message must be compelling, brief and clear to get people to respond, including the aims of the study, the research procedure, how the respondent's name and address were obtained, the researcher's details, institutional affiliation etc.
- Hyperlinks and graphics (including institutional logos) should be used sparingly and audio and video should be omitted.
- Informed consent must be obtained.
- Provide the URL that will take people directly to the online questionnaire.
- Inform the recipients how to contact the researchers if they have a problem or question.
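A minimal sketch of such a solicitation email using Python's standard library (the addresses, server and wording are hypothetical): the sample goes in the 'bcc' field, which smtplib removes from the transmitted message, so recipients never see each other's addresses.

```python
import smtplib
from email.message import EmailMessage

recipients = ["a@example.org", "b@example.org"]  # hypothetical sample

msg = EmailMessage()
msg["From"] = "researcher@university.example"    # a valid, identifiable sender
msg["To"] = "researcher@university.example"      # send to yourself...
msg["Bcc"] = ", ".join(recipients)               # ...and Bcc the sample
msg["Subject"] = "Invitation to take part in a short online survey"
msg.set_content(
    "Dear colleague,\n\n"
    "You are invited to take part in a study of [aims of study]. "
    "The questionnaire is at https://example.org/survey and takes "
    "about 10 minutes. Please contact us with any questions.\n"
)

with smtplib.SMTP("smtp.university.example") as server:  # hypothetical server
    server.send_message(msg)  # the Bcc header is stripped before transmission
```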
Example
The Association of American Geographers 'Internationalizing Geography in Higher Education' study.
The following email was sent to members of the association to request participation.
Subject: Internationalizing Geography in Higher Education
Dear Colleague,
The Association of American Geographers invites your participation in a new study, “Internationalizing Geography in Higher Education,” to examine international education and collaboration in the discipline of geography. Partial funding for this research project comes from the National Science Foundation and the American Council on Education.
We are gathering information from geographers who teach at higher education institutions outside the United States to consider their perceptions of international collaboration and its value for improving geography education and research. We are also investigating the extent to which these geographers support the goals of international and global learning in their courses.
The results of this study will help professional associations, academic departments, and higher education institutions develop resources and programs to facilitate scholarly and educational collaborations among geographers worldwide. Your perspective is important to us regardless of whether you are currently or formerly have been involved in international collaborative work.
We ask that you take a few minutes of your time to complete the survey available here:
http://communicate.aag.org/eseries/Internationalization_Survey/logon.cfm
When you first visit this site, you must create a username and password to access the questionnaire online -- your AAG membership number (if you have one) will not work. Alternatively, you can download the survey and return a hard copy to us by mail or fax. In either case, we would appreciate receiving your responses by Friday, April 1, 2005.
The survey can be completed in approximately 20 minutes and you can save your work online to complete at a later time.
This research has been reviewed by Texas State University’s Institutional Review Board (IRB) under the Online Center for Global Geography Education project [Tel: 01-512-245-2314]. All records of the content of the survey will be held strictly confidential and neither you nor your department will be identified by name in the final report. You are under no obligation to participate in the study. Your completing and returning the questionnaire will be taken as evidence of your willingness to participate and your consent to have the information used for the purposes of the study. If you decide to participate, you are free to withdraw at any time with no penalty. A copy of the report will be mailed to participating individuals upon request after the completion of the study.
Thank you in advance for your participation. Please direct any questions regarding the survey to Michael Solem at msolem@aag.org.
Regards,
Dr. Michael Solem, Educational Affairs Director
Waverly Ray, Research Assistant
Transmitting appeals to mailing lists
A message is posted to a mailing list requesting participants for the online questionnaire.
- Useful for obtaining non-probabilistic samples of specific sub-groups of internet users.
- Public lists can be searched via mailing list archive sites (see 'Further resources' section).
- Access to private lists must be obtained through individual contact and formal permission from the organisations concerned, for example relevant businesses or educational institutions.
- Researchers must subscribe to the list to request formal recruitment via the list administrator or moderator.
- Sensitivity and respect should be displayed towards members of the list, and only one or two follow-up postings (repeat requests) should be made to increase response rates - more than this may be considered spam.
- The appeal for participants should be compelling, brief and clear to get people to respond, including the aims of the study, the research procedure, the researcher's details, institutional affiliation and why this specific mailing group has been targeted.
- Respondents must be instructed to return any emails to the researcher, not the whole mailing list.
- Information about participant demographics must be collected in order to assess the nature of the sample obtained.
Example
Women’s experiences of infertility - Dr Nicola Illingworth, University of Stirling
Four potential online support and discussion groups were identified. Two of these groups were known to the researcher previously via an exploratory study one year earlier. A further two groups were identified by members of these groups (snowballing), who either passed the researcher's details on or advised her of an active group to contact.
The initial correspondence with the first two groups is shown below as an example.
Group 1 (UK site): first contact August, 2001.
Details concerning the nature of the current research and previous experience/research in this field were forwarded to the group moderator. Access was agreed by the group moderator within 48 hours. Following Rosenthal (1975), a series of repeat calls for participants was posted via the site Bulletin Board at 7-day intervals (x 4).
Access Request: Group moderator:
Dear Moderator,
I am a PhD student based at the University of Stirling, Scotland conducting research into women's experiences of infertility and treatment processes. I would like to request permission to access this discussion group for my current research.
During 1999/2000, I conducted a small-scale pilot study in this field and subsequently received a further 3 years funding for doctoral research in this area. I am particularly interested in talking to women experiencing all stages of infertility and the treatment process - combining experiences pre-treatment, during treatment and post-treatment. My previous research used email communication as a contact method, receiving very positive feedback from participants. Likewise, this current research will also be conducted primarily using the internet - either in the form of one-to-one e-mail interviews or diary-keeping. Anonymity and confidentiality will be strictly maintained at all times.
If you would like further discussion and information regarding my past/current research, please contact me either by phone or by e-mail.
Many thanks and look forward to hearing from you
Nicola Illingworth
Research Student
Department of Applied Social Science
University of Stirling
Stirling
FK9 4LA
Tel: 01786 466305
E-mail: n.a.illingworth@stir.ac.uk
Group moderator response:
Thank you for your request to join this site. Access is confirmed. Membership details will be forwarded shortly.
Infertility Research Call (Bulletin Board):
My name is Nicola Illingworth. I am a research student, based at the University of Stirling, Scotland, conducting research exploring women’s experiences of infertility and the treatment process. During 1999/2000, I conducted a small-scale pilot study in this field and subsequently received a further 3 years funding for doctoral research in this area. I am particularly interested in talking to women experiencing all stages of infertility and the treatment process – combining experiences pre-treatment, during treatment and post-treatment.
Research participation involves completion of an initial email questionnaire and subsequent participation in either a one-to-one email interview or diary-keeping through treatment stages. No technical expertise – other than access to an email account - is required to participate – and I will offer help and advice if needed. Anonymity and confidentiality will be strictly maintained at all times.
If you would like to take part in this research and/or would like more information, please contact me at:
n.a.illingworth@stir.ac.uk
Many thanks
Group 2 (International site): first contact February, 2002.
Access was confirmed by the group moderator after 7 days.
This was a larger site, to which a series of research calls were posted (as above) via the site Bulletin Board and more specialised discussion groups.
Although there had been a high level of response when the researcher had accessed this site previously, on this occasion, response was minimal.
A search of related discussion sites revealed a number of negative comments towards more recent research conducted using this site. In particular, a number of comments received suggested researchers had only revealed themselves after monitoring group discussion for lengthy periods – perhaps a contributing factor.
Case studies
The following case studies give an indication of how sampling issues were dealt with in three studies employing online methods.
1. Cyberparents (Clare Madge and Henrietta O'Connor, University of Leicester)
The 'Cyberparents' logo
Title
The Cyberparents Research Project
Aim
An internet-based research project initiated to examine how, why and in what ways new parents use the internet as an information source about parenting and as a form of social support. The project focused on one pioneer UK parenting website: Babyworld.
Online questionnaire type
A web-based questionnaire survey was used to identify general patterns of use of Babyworld. The questionnaire survey was created using the HTML editor 'Adobe GoLive 4.0' and followed a similar format to traditional self-completion postal questionnaires, the main difference being that the survey form was set up online. In order to administer the questionnaire, a series of webpages was developed. All pages included the University of Leicester crest to show institutional affiliation, to give the project credibility and to ensure the participants could verify our status. The website included a homepage with a brief introduction to the project, which was linked to further pages entitled 'meet the researchers' and 'more about the project'.
Recruitment issues
In our research several hotlinks were created between the questionnaire, the Cyberparents website and Babyworld website. The links from Babyworld to the research webpages were made at the suggestion of the website providers and positioned strategically in prime locations on the Babyworld home page and the most used pages of the website. This was the only mechanism to elicit responses. It is significant to note that without the agreement and co-operation of the website providers to place these strategic hypertext links, the survey would most certainly not have been successful since it would have been impossible to recruit these specific online community members in any other way. Thus the issue of access to online communities and website providers is crucial when conducting online research.
Sampling issues
As we did not have access to a central registry, or master database, from which to create an accurate sampling frame, random sampling or gaining a representative sample was not possible. We therefore used online self-selection, as we were researching a particular group of internet users - mothers with newborn babies using a specific website. We were quite clear in writing up the results of our research that our findings were limited to this self-selected sample.
Identity verification
In our research it was not possible to verify the identity of respondents, but the questionnaire was so specific to being a new parent and a user of the Babyworld website that it would have been difficult, if not impossible, to complete the questionnaire without a detailed working knowledge of the website. However, this does not diminish the possibility that some respondents may have been 'spoofs' or indeed may have played with their online identity in completing the research.
Incentives
No incentives were used in our research. As we were also both mothers of newborn babies, survey respondents suggested that our commonality of experience as new mothers had encouraged them to complete the questionnaire. But we were uneasy with the respondents 'losing out' in terms of paying for the internet connection time to complete the questionnaire.
2. Perceptions of folk belief (Claire Hewson, University of Bolton)
Title
Empirical Evidence Regarding the Folk Psychological Concept of Belief
Aim
To examine aspects of the folk psychological concept of belief, and in particular to test some claims made within the folk psychology debate about the nature of this concept, and explore further factors which may influence people’s judgments in this area.
Online questionnaire type
The questionnaire was administered to participants by email, and consisted of a text file which was cut and pasted into the body of an email message. The text contained a short passage, followed by a question about this passage. Emails were sent to participants from the researcher’s university account, and participants were asked to respond by replying to that same email address.
Recruitment issues
Participants were initially recruited via psychology undergraduate seminars; however, this proved time consuming and did not generate a large number of respondents. 15 respondents were acquired in this manner, before the internet sampling method was adopted. Internet participants were recruited by posting participation requests to a number of USENET newsgroups with which the researcher was familiar. The posting invited people to respond by emailing the researcher if they were interested in participating, or wanted to find out more about the study. 158 people emailed to express an interest in taking part. Of these, 135 took part in the study (i.e. received and returned the questionnaire), within a few weeks of the initial posting. We were impressed with this level of response, compared with the previous method of recruiting undergraduates via seminar classes. Two key factors likely played a role in this pleasing level of response: firstly, this study was carried out in the very early days of internet-mediated research and thus the level of participation may have been enhanced due to novelty value; secondly, the newsgroups targeted were likely to reach potential participants for whom the research topic had high issue salience, thus further encouraging participation.
One problem which emerged during recruitment concerned a harsh response from one newsgroup moderator, who emailed the researcher stating that the posting was inappropriate and had been removed. Clearly, it is good practice to request permission from newsgroup moderators prior to posting a participation request; failure to do this was an oversight in this study.
Sampling issues
Related to the point above concerning issue salience, the internet sample obtained contained a substantial proportion of respondents working within academia, and a fair number of these had backgrounds in areas related to psychology and other cognate disciplines. Given the nature of this research (to test everyday commonsense intuitions), a less specialist sample would have been ideal. Of the final sample of 140 participants (12 of these were the undergraduates recruited via seminars), 21 were excluded from the final data set because they were not considered naïve to the topic under investigation. There would thus appear to be a trade-off between acquiring larger sample sizes and reducing bias due to issue salience when using internet sampling methods similar to those employed here. Individual research contexts and goals will dictate the appropriate balance, but it should be borne in mind that it may be necessary to reduce the final data set subsequently, as was the case in the current study.
Identity verification
Participants’ identity was not known beyond the address of the email account from which they responded. For the purposes of this study further identity verification was not necessary, since there were no restrictions on who could take part beyond the requirement of being naïve to the topic under investigation (also, participants needed to be over 18 years old in order to be able to give informed consent, though enforcing this is a general problem for internet-mediated research).
It was necessary, however, to be able to track participants such that the version of the questionnaire viewed was known (there were several conditions in this study) when the responses were returned to the researcher, so that the response could be linked to the study participation condition. In most cases this was not problematic, since respondents typically replied from the same email address and appended their answers to the original email which contained the questionnaire. However, a few participants either responded from a different email account, sent their answers without the original questionnaire appended, or both. In the case where both these events occurred, it would not have been possible to determine which version of the questionnaire these responses referred to, had the participants not thought to highlight the fact that they were responding from a different email account (which they did in all cases) and quote their alternative email address. Luckily the researcher had kept a separate record of which version was sent to each email address; otherwise matching responses to conditions would have been problematic even where the respondent had sent their answers from the same email account but deleted the original questionnaire. This issue highlights a possible disadvantage of email-based internet research methods over, for example, web-based questionnaires, where the researcher has more control over the format in which data is returned.
Response rates
The number of responses obtained was good, and these were fast to come in. A measure of actual response rate was not possible since the sampling frame (how many people read the participation request) could not be determined. However, it is worth noting that of the 158 initial inquiries, 135 people went on to actually complete and return the questionnaire which was sent to them (i.e. an 85% return rate).
Incentives
No incentives were offered in this study.
3. A Critical Geography of UK Biotechnology (Tim Vorley, University of Leicester)
Title
A Critical Geography of UK Biotechnology
Aim
This research aims to identify, interrogate and critically evaluate the innovative geographies of UK biotechnology, which is centred on the Golden Triangle. Biotechnology in the UK is a relatively new phenomenon, less than 30 years old, but its development has been haphazard due to the experimentalist politics and uncertain governance of the UK government. Furthermore, the unconsolidated management strategies and ambiguous ambitions of academia, industry and the government often result in an unresolved conflict between public and private interests.
Online questionnaire type
The online survey was used as a precursor to interviewing and focus groups. The results from the questionnaires helped identify key actors with relevance to specific aspects of the research project. The questionnaire results are not intended to be representative of UK biotechnology activity and are for reference purposes only.
Recruitment issues
The recruitment was via an introductory email explaining the rationale of the research, while also covering ethical and confidentiality issues. E-mail addresses were obtained from a variety of sources from which a database of institutions and organizations directly and indirectly involved with UK biotechnology was created.
Sampling issues
There was no predetermined sampling frame. The email containing the survey was sent to as many people as possible so as to identify those directly relevant to different aspects of my research.
Identity verification
The identity of respondents was not an issue as the survey was only sent to a specific list. The survey requested respondents’ contact details which were used for subsequent contact.
Response rates
The response rate was c.25%, which was considered positive for a mass-mailed survey. Not all of the responses were followed up, as some were not deemed relevant, and some of those who did not respond to the survey were subsequently interviewed. Those who were interviewed despite not responding mainly explained that the survey had been 'lost' among other tasks and then forgotten. The majority of responses to the survey arrived within 24 hours of the email request being sent, which appears to support the reasons given by those later interviewed for not submitting.
One individual specifically requested to be 'interviewed' by email due to the demands on their time during the working week. The interview consisted of six emails asking a series of questions, with each response informing the subsequent question, either for clarification or to explore alternative topics. The clear benefit of this type of interview was the fact that it was written, and each email permitted reflection on responses prior to further correspondence.
Incentives
No incentives were offered directly, although a number of individuals and organizations have requested copies of the final PhD thesis and any associated publications.
Design of online questionnaires 1: Appearance, by Clare Madge, Jane Wellens and Rob Shaw
Design features of online questionnaires
Design issues are an extremely important consideration for online questionnaires because of the highly visual nature of the web environment, and the variety of technical skills of survey respondents. The massive range of purposes of questionnaires and diversity of the populations to be studied mean that there is no single design approach that is appropriate for all online questionnaires. However, some general design principles are noted in this section.
The first design section will focus on design issues primarily related to appearance, and the second section will consider those issues that are more concerned with content.
Consistency
The online questionnaire must be consistent in appearance when viewed using different computer hardware and software packages. It is important to avoid possible variations in the appearance owing to differences between the respondents' computer hardware or software and the researchers'. To achieve this, consideration of file-size and download time is important, along with the need to check that the questionnaire works at a range of screen settings.
It is common practice to design for the most common screen size (800 x 600) as a minimum, but it is important to check that the questionnaire does not display badly at larger (or smaller) sizes. It is important to use relative rather than absolute sizing where possible, and to avoid the use of images or other fixed-size elements (such as tables with a fixed width) without checking their suitability on various screen sizes. If such fixed-size elements are an essential part of the research, the possible impact of display variation on the validity of results must be borne in mind.
It is also good practice to test the questionnaire on as many different web browsers as possible prior to launch. This can be done by downloading archived versions of major browsers. It is also often a good idea to ask friends and colleagues who may use different browsers or have different connection speeds to view the questionnaire and feed back before deploying it. As shown in the 'multi-media stimuli' section below, the use of images or multi-media can have dramatic effects on file size and can lead to huge variation in download time depending on the speed of connection.
Remember also that users may want to increase text size or change colour options according to accessibility needs or preference. Where this will not affect the validity of results, it is therefore good practice to use Cascading Style Sheets which can be easily replaced or manipulated by the user if necessary. For optimum accessibility, this can be combined with measures such as avoiding absolute text sizes and setting a range of font options.
See the 'Technical guide' module for further details about these issues.
Colour
Colour can be used to enhance the questionnaire style and ease navigation. But technical variance in computers can result in colour variation in the received questionnaire and can increase download times. In order to minimize variation in the colour scheme, Best and Krueger (2004) suggest adopting the 216-colour web-safe palette, which eliminates the 40 colours that are variable on an 8-bit system. Additionally, yellow and blue should be used and red, green, purple and orange avoided (Rigden 1999). Contrast between text and background should be maximized. Also, colours can convey cultural values. For example, green is associated with safety in the USA, criminality in France and youth and the future in Japan (Best and Krueger 2004). Overall, colour should be used with care and for a specific purpose.
Text appearance and layout choices
Text appearance is based on font, size and decoration. The choice can affect transmission times, screen configurations and the perceived length of the questionnaire. Times New Roman and Arial are the preferred font styles for high levels of legibility and readability (Bernard and Mills 2000). Verdana is also a good choice. The most popular font sizes are 12 and 14-point (Bernard et al 2002), but this does vary with sub-group, with 14+ point being preferred amongst the elderly and children (Best and Krueger 2004). Generally, font decoration (italics, bold etc) should be used sparingly as it can reduce reading speed and accuracy. Underlining can cause confusion with hypertext links. It is also important to add adequate 'white space' for ease of reading and processing. This is a key factor in creating a suitable layout for the questionnaire. Questions should be easy to distinguish and should be consistently formatted and laid out. Choices given should be equally spaced and positioned consistently to avoid measurement error.
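As an illustration, the following style sheet sketch draws several of these recommendations together - a legible font stack, relative sizing, a high-contrast web-safe colour pairing and generous 'white space'. The class name is hypothetical and would need to match the questionnaire's own markup.

body {
  font-family: Arial, Verdana, sans-serif;  /* legible fonts, with a generic fallback */
  font-size: 100%;                          /* relative size: respects the user's browser setting */
  color: #000033;                           /* dark blue text... */
  background-color: #FFFFCC;                /* ...on pale yellow: high-contrast, web-safe colours */
}
.question {
  margin-bottom: 1.5em;  /* 'white space' between questions, in relative units */
  line-height: 1.4;      /* generous line spacing for ease of reading */
}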
Design of online questionnaires 2: Content, by Clare Madge, Jane Wellens and Rob Shaw
Principles of good content design
Online questionnaire design should follow principles of good design of onsite questionnaires. These include:
- Background information including the purpose of the research, the sponsor, the means of returning the questionnaire, reassurances of confidentiality (if appropriate), time it will take to complete questionnaire, details of how to contact the researcher;
- Clear instructions for completing each question, with examples if necessary;
- Allocation of adequate time for producing and piloting an initial draft, redesigning and producing the revised version and for pursuing non-respondents;
- Care with questions to avoid duplicate or unnecessary questions; avoid questions that will upset or irritate the respondent; avoid questions that respondents will not have the knowledge or experience to answer; avoid leading, biased or ambiguous questions; use non-sexist, non-racist language;
- Use of a variety of question types including open and closed questions, ranking and rating questions, open-ended responses, multiple select (but bear in mind for online questionnaires open-ended questions are less likely to be completed and may result in more item non-response);
- Use of plain, easy to understand language and avoidance of technical terms, jargon and acronyms;
- Organisation of questions in logical groups and with important questions asked first and questions sub-divided into sections;
- Use of a simple style (but if colour is essential, use of opponent-colours, such as yellow and blue); use of 12 or 14-point font;
- Finishing the questionnaire with a place for respondents to add comments, with a thank-you for assistance and with details of how and when participants can get a copy of the completed survey.
Some general content design principles are noted below.
Welcome screen
No matter how the respondents are directed to or locate the online questionnaire, it is important to ensure that there is a welcome message that informs them that they have come to the correct place. This should be brief, outline the purpose of the questionnaire, make respondents feel that their contribution is important, and emphasise how easy it is for them to participate. The welcome message should be as simple as possible and avoid complex information such as lists of instructions about how to complete the questionnaire. Further links can be provided to more detailed information about issues such as data protection and confidentiality. The welcome message/page is also the appropriate place to locate login facilities for online questionnaires where access is going to be restricted in this way. The welcome screen should also explain the ideal screen configuration for viewing the questionnaire, and provide details of any other requirements (e.g. pdf viewer, flash player etc).
Example
The following welcome page is taken from a psychology study run at the Universities of Cambridge and Cardiff early in 2005. It was carried out by Martin Bruder under the supervision of Prof Tony Manstead. The content of the page was designed to be in line with the American Psychological Association and British Psychological Society ethics guidelines, the guidelines of the US government on the use of human subjects (informed consent) and the additional requirements for listing online studies by the Social Psychology Network. The main characteristics include:
- Short description of the content of the study;
- Information on the Institutional Review Board that has approved the study;
- Contact addresses for pertinent questions about the research and research participants' rights;
- Expected duration of study;
- Statement concerning the voluntary character of participation and the right to withdraw from the study at any stage;
- Minimum age for participation;
- Assurance of anonymity of the data;
- Note of caution about data security;
- Description of foreseeable risks and benefits;
- Contact information of principal researcher and supervisor.
The study itself investigated what participants imagine themselves feeling and thinking in short written scenarios. It was particularly interested in how far these responses are influenced by the emotional behaviour of a second person described in each scenario and by the supposed relationship with this person.
Question type
The highly flexible and graphical way in which materials can be delivered over the web means that it is tempting to experiment with novel formats for presenting questions in online questionnaires. For example, Dillman et al. (1998, 8) detail some of the unconventional ways in which they have seen questions formatted in web surveys. However, respondents are more likely to find the questionnaire easy to navigate and complete if its layout and question formats are similar to those used in onsite questionnaires. This is because respondents can concentrate on answering the actual questions rather than getting to grips with an unfamiliar question or response mode. Questions should be presented in the most appropriate formats rather than being driven by the technology. Additionally, the choice of question type will depend on the aims of the research and the characteristics of the respondents. For example, highly computer-savvy respondents will have fewer problems with more complex question formats, since these users are likely to be familiar with such functions from other applications.
For email questionnaires, question types are generally limited to open and closed types. But for web-based questionnaires there is a greater variety of question types that can be used: open, closed, pull-down menus, click tags and slider bars. Generally, open-ended questions are less likely to be completed and result in more item non-response. Closed questions must be employed with care in email questionnaires, as the researcher cannot place any limits on the nature of the response, so respondents may provide multiple responses to a question that requires a single response. In web-based questionnaires closed questions are less problematic, as text input fields can be programmed to accept only appropriate responses. Pull-down menus can also be used, allowing multiple responses (usually by holding the 'shift' or 'control' key while clicking the mouse) and reducing the space required for the questionnaire, but they can decrease usability and increase completion time (Dillman 2000). Click tags can be either radio buttons or check boxes. Radio buttons are circular tags that fill in when one option is selected. Although radio buttons do not allow multiple responses, they do result in fast and reliable responses (Best and Krueger 2004). Check boxes are square tags that display a mark when selected and can allow multiple responses, useful for rating questions. Click tags are easy to understand and use but take up a lot of space and require good hand/eye coordination skills. Slider bars align response options along a track containing a pointer bar. They are particularly useful for rating scales because they offer a continuum (Arnau et al. 2001), but they require good technical skills, may not appear identically in different browsers, and non-response is difficult to determine.
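To illustrate, the following fragment sketches the main web-based question types discussed above in plain HTML; the question wording, field names and submission address are invented for the example.

<form method="post" action="survey-results.cfm">
  <!-- Radio buttons: one response only -->
  <p>Do you use the internet every day?
  <input type="radio" name="daily_use" value="yes" /> Yes
  <input type="radio" name="daily_use" value="no" /> No</p>
  <!-- Check boxes: multiple responses allowed -->
  <p>Which of the following have you used? (tick all that apply)
  <input type="checkbox" name="used_email" value="yes" /> Email
  <input type="checkbox" name="used_chat" value="yes" /> Chat rooms</p>
  <!-- Pull-down menu: saves space but can be harder to use -->
  <p>Age group:
  <select name="age_group">
    <option>18-24</option>
    <option>25-34</option>
    <option>35 or over</option>
  </select></p>
  <!-- Open-ended question: expect more item non-response -->
  <p>Any other comments?<br />
  <textarea name="comments" rows="4" cols="40"></textarea></p>
  <p><input type="submit" value="Submit" /></p>
</form>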
Instructions
When giving instructions for various sections of an online questionnaire, it is useful to divide the information into sections to maximise the chance that users will be able to obtain guidance when it is required. If a user experiences difficulty in using the questionnaire or is left unsure of how to proceed, the chances of non-response are likely to be increased.
Care should be taken with each of the following key sections of instructions:
General completion instructions:
These should include adequate detail to allow the user to begin. However, it is important not to overload the respondent in the initial stages of completing a questionnaire.
It may be beneficial to highlight key instructions using a particular background or a border to distinguish them from the questions and provide continuity.
Specific instructions for different question types:
These instructions are provided to make the method of answering a particular question clear, e.g.
Please answer the following question by selecting the 'Yes' or 'No' button.
They may need to be more or less explicit depending on how conventional the question type is, and on the potential technical skill of the target group. Usability testing is likely to provide assistance in targeting potential confusion or inadequate instruction.
It may be beneficial to provide these as optional instructions which can be accessed by the user when required or delivered where a user experiences difficulty. This can be done by offering 'alert-box' instructions, or by providing instructions in 'pop-up' windows. If using pop-up windows, it is important to be aware that many users will block them to prevent pop-up ads being displayed on their computers. If you plan to use pop-up windows, it is a good idea to inform users that they are essential to the working of the survey and that they will need to disable any pop-up blocking software while working on it.
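As a simple sketch of these two approaches (the file name 'help.html' and the wording are invented for the example):

<!-- 'Alert-box' instructions, shown only when the respondent asks for help -->
<a href="#" onclick="alert('Please select ONE answer only.'); return false;">Help with this question</a>

<!-- Instructions in a pop-up window; pop-up blockers may suppress this -->
<a href="#" onclick="window.open('help.html', 'help', 'width=400,height=300'); return false;">Detailed instructions</a>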
See the 'Technical guide' module for further details of how these instructions can be implemented.
Submission instructions:
Clear instructions at the final stage of a questionnaire can improve response rates and reduce the incidence of multiple submissions or the submission of invalid data.
Before submission, it is important to inform the user whether the submit button should be pressed only once, and to flag any sections of the questionnaire that have been missed or answered incorrectly. Appropriate validation routines, applied before the data are actually sent, are important in maximising success. After submission, it is also important to inform the user that the form has been submitted and to express gratitude.
See the 'Form validation' section of the 'Technical guide' module for further information about validation and submission.
It is also important to reassure the user by informing them exactly what information will be saved, reminding them of the purposes to which it will be put, and providing opportunities to limit the submission of personal details if required.
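The following minimal sketch brings these points together, assuming a single required text field named 'q1' and a hypothetical submission address; a real questionnaire would need fuller validation routines (see the 'Form validation' section of the 'Technical guide' module).

<form name="survey" method="post" action="survey-results.cfm" onsubmit="return checkAndSubmit(this);">
  <p>Q1. What is your occupation? <input type="text" name="q1" size="30" /></p>
  <p><input type="submit" name="sendButton" value="Submit (press once only)" /></p>
</form>
<script type="text/javascript">
function checkAndSubmit(form) {
  // Warn the respondent about a missed question before the data are sent
  if (form.q1.value == "") {
    alert("Please answer question 1 before submitting.");
    return false;                   // cancel the submission
  }
  form.sendButton.disabled = true;  // guard against multiple submissions
  return true;                      // allow the form to be submitted once
}
</script>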
Examples
Instructions on how to submit the data, given at the end of a questionnaire:
You have now completed the questionnaire. Press the 'submit' button to anonymously send the data.
Please only press the submit button once and wait until the last page of the study appears.
Your data will be saved which might take up to 30 seconds...
Final page which appears after the submission:
Thank you very much for your help in our research.
The data that has been collected as a result of your participation will be held anonymously.
If you chose to supply your e-mail address for further contact, this will be stored separately from your data. It will only be used to contact you for this particular purpose and will not be shared with any third party.
In case you have any comments on this study, we would be grateful if you would share them with us by completing the textbox below or by contacting the authors via e-mail:
Once again, thank you very much for supporting our research.
Length
Keep the questionnaire as short as possible. Long, time-consuming questionnaires are likely to result in fewer people wanting to participate and fewer people completing it. To overcome these problems, ensure that the invitation to participate in the questionnaire, and the welcome message, provide realistic details about how long the questionnaire will take respondents to complete. It is usually better to provide this information as a time rather than stating how many questions respondents will be asked to answer, because some online question formats and structures facilitate the completion of a surprisingly large number of questions in a relatively short period of time. For example, respondents typically took less than 13 minutes to answer 119 questions in a web-based survey about Student Experiences at the University of Leicester. As a general rule, Crawford et al (2001) recommend that online questionnaires should take respondents no more than 10 minutes to complete. Providing respondents with an indication of how far through the questionnaire they are at regular intervals can be useful for participant retention. This can be achieved in various ways. A progress indicator can be used with little technical difficulty, though this may affect download speed and there is a danger some users may find it distracting. Simpler ways include structuring the questionnaire into different sections (each of which may contain several questions) and indicating to respondents the nature of this structure both in the welcome message and at relevant points through the questionnaire.
Example
The following is an example of extracts from instructions designed to highlight the structure of the questionnaire and make respondents aware of progress.
Instructions in the welcome message:
This questionnaire contains three sections. The first asks you about your experiences as a student during this academic year, the second about your overall university experience and the third about your plans for when you graduate. Completing these three sections will take around 10 minutes.
At the beginning of the first section:
Section 1: this first part of the questionnaire asks you to reflect on your experiences as a student at the University of Leicester over the current academic year (September 2004 – June 2005)
Progress bar
The following image shows an example of a progress indicator.
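A basic progress indicator can also be produced with ordinary HTML; in this sketch, the relative widths of two coloured table cells convey that the respondent is about 40% of the way through, and the figures would be updated on each page of the questionnaire.

<!-- Page 2 of 5: the filled cell shows progress (about 40%) -->
<table width="200" border="1" cellspacing="0" cellpadding="0">
  <tr>
    <td width="40%" bgcolor="#000099">&nbsp;</td>
    <td width="60%" bgcolor="#FFFFCC">&nbsp;</td>
  </tr>
</table>
<p>Section 2 of 5 (about 40% complete)</p>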
Multi-media stimuli
Multi-media stimuli can be added to web-based questionnaires to test certain reactions, to improve design or to clarify instructions. But these can produce significant technical difficulties in terms of download time, differences in appearance according to browser, and the skills required of respondents. It is therefore best to use multi-media stimuli only when they are essential to the research project.
Static formats include pictures, photos and graphics. As a minimum, an image of the institution the researcher is affiliated to will probably be necessary for research legitimacy, but graphics may also be necessary as an essential feature of the research. In all cases, it is important to ensure that images are provided in an appropriate format (in the vast majority of cases, GIF or JPEG), and to be aware of the effect that size and compression rates will have on file size, and hence on download time. Keeping download times to a minimum is an important factor in reducing non-response.
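For example, specifying an image's dimensions allows the page to lay out consistently while the image downloads, and the 'alt' text keeps the page usable if the image fails to load (the file name here is illustrative):

<img src="university_crest.gif" width="120" height="80" alt="University crest" />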
Streaming formats include audio, video and animation. In addition to issues of file size, which can be particularly problematic in the case of video, there are also a number of further aspects which must be considered. The use of sound and video may require access to computers with relatively high minimum specifications or with particular equipment such as speakers or specific software. It may also be extremely difficult to predict how the information will appear on the respondent's computer. Given this, the use of multi-media should only be considered if it is essential to the research and, in certain cases, it may be necessary to carry out the research in a controlled environment such as a university computer lab to ensure validity of results (though, of course, this is likely to negate many of the benefits of carrying out research online).
Where multi-media is used, it is important to use formats that are likely to be as widely distributed as possible, simple to download and install, and available free-of-charge, such as 'Macromedia Flash'. It is also a good idea to attempt to maximise the chances of the user having a trouble-free experience. Methods of achieving this include adding an easily accessible link to any plug-in software required, clearly specifying minimum specifications required for the questionnaire, and using 'browser sniffing' to test whether or not the user has certain technology installed. These methods can make the inclusion of multi-media realistic in cases where absolute control over how it appears is less important to the research.
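As a rough sketch of 'browser sniffing' with JavaScript, the following checks whether the browser reports a Flash plug-in and directs the respondent to an appropriate version of the questionnaire. The page names are invented, and the test is deliberately crude - for example, older versions of Internet Explorer do not populate the plug-in list, so those respondents would simply be sent to the basic version.

<script type="text/javascript">
var hasFlash = false;
// Look through the plug-ins the browser reports for one mentioning Flash
for (var i = 0; i < navigator.plugins.length; i++) {
  if (navigator.plugins[i].name.indexOf("Flash") != -1) {
    hasFlash = true;
  }
}
// Send the respondent to whichever version of the survey they can use
window.location = hasFlash ? "survey_flash.html" : "survey_basic.html";
</script>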
Good design for online questionnaires: a checklist
- Ensure consistency of appearance with different browsers and computers;
- Include a welcome screen;
- Provide clear and precise instruction on how to complete and submit the questionnaire;
- Keep the questionnaire as short as possible - 10 minutes maximum;
- Include a progress indicator to reduce drop-outs;
- Use colour sparingly - yellow and blue are most effective;
- Use 12 or 14 point Times New Roman or Arial font;
- Use a variety of question types appropriate to the research question and type of respondent;
- Only use multi-media stimuli when essential to the research project;
- Pilot and do usability testing (see 'Implementation: Piloting, evaluation and analysis' section below).
Implementation: Piloting, evaluation and analysis
Piloting
Prior to distributing the online questionnaire, all aspects of design must be piloted, preferably with different types of potential respondents and with different types of computer. Navigation, spelling, typographical errors, appearance and readability must all be checked. Usability testing can also be conducted for web-based questionnaires; this involves checking that the website performs the function for which it was designed, with the minimum amount of user frustration, time and effort (Pearrow 2000). Nielsen (2000) suggests that testing five users will reveal 85% of a website's usability problems. At its simplest, usability testing involves asking the user to do something and observing how they manage the task. During the tests the participants are asked to verbalize their thoughts by 'thinking out loud'. The researcher should not give directions or judgments as to how the participant is using the web-based survey. Problems with page design, navigation, content and links should be noted. Any problems can be remedied before the final questionnaire is distributed. This approach can also be combined with the use of evaluation questionnaires in pilot studies (see example below).
Example
The following example of a paper-based questionnaire for a pilot study has been provided by researchers involved in an ESRC project on gender stereotypes (Prof Constantine Sedikides, University of Southampton; Dr Alison Lenton, University of Edinburgh). In order to carry out the studies connected to this project they established the Social Psychology Web-lab.
This particular study was designed to investigate the structure of implicit gender stereotypes by having participants categorise words as to whether they apply to women/men in general or whether they do not. They were also asked to rate the status of each word and to answer some questionnaires concerning individual differences with respect to own gender identity, sexist tendencies, susceptibility to social desirability, etc. It was hypothesised that the female stereotype is both broader and less rigidly held than the male one. The study entitled "gender representations" can be found on the project Website under "previous studies".
The pilot study with approx. 20 participants was conducted directly before the study went online. As a result of the piloting, some form of adjustment was made in virtually every aspect of the study: instructions were clarified, the answering scale for the categorisation task was thoroughly re-designed and the specific timing of the task (e.g. the fixation cross before each trial) was adjusted.
Pilot studies seem particularly helpful for complex tasks and tasks that involve timing certain behavioural aspects of the participant. For every study since then, participants have been asked to provide feedback via an open-ended question at the very end of the study. This means that the difficulties/concerns/comments of participants can be taken into account on an ongoing basis within a given study (e.g. if feedback from the first few participants reveals that they didn't understand a particular set of instructions, these can be changed so that future participants will have a better understanding of the tasks before them) as well as between studies (e.g. some of the same scales tend to be used across all studies, and participant feedback (along with the actual data) might reveal something about the utility of this practice).
Thank you very much for your participation. The study you have just completed is a test version of an experiment we will run on the Internet in the coming months. The purpose of the experiment is to examine the way in which stereotypes of men and women are structured. This is done by looking at the way participants in the study associate words with the concepts of "maleness" and "femaleness".
In order to finalise the set-up of our study, we would like to ask you to reflect on your impressions and experiences while completing the different tasks. The basic structure of the final version will be similar to the one you have just completed – except that it won’t require participants to make the confidence ratings.
Given that the upcoming study could be improved in light of your feedback, we would appreciate it if you could offer your comments on each of the following:
- initial instructions (e.g. with respect to clarity, length, language)
- categorisation task (e.g. with respect to timing, task difficulty, the ease of operating the answering scale, the clarity of the interspersed instructions)
- ratings of word status (e.g. with respect to the operation of the answering scale, length of the task, clarity of instructions)
The time the fixation cross (‘+’) was shown before the target word (the word that should be categorised) appeared was 1.5 seconds. When you do not have to do the confidence ratings in-between the categorisations, do you think the fixation cross should be shown…
Shorter | 1 | 2 | 3 | 4 | 5 | Longer
You might have experienced that one of your trials ‘timed out’ (i.e., you didn't respond quickly enough). The time from the moment the target word is shown until the timeout was two seconds in this version. What do you think about this timing? – The time for categorising the words should be…
Shorter | 1 | 2 | 3 | 4 | 5 | Longer
How tiring did you find the study?
not at all | 1 | 2 | 3 | 4 | 5 | very
How interesting did you find the study?
not at all | 1 | 2 | 3 | 4 | 5 | very
Thanks for your help!
Evaluation
Prior to analysis and interpretation, the researcher must evaluate the success of the online questionnaire. According to Denscombe (2003, 158-159) this will depend on the ability of the online questionnaire to:
- Provide full information
- Supply accurate information
- Achieve a decent response rate
- Uphold ethical principles.
It is also worth considering the following questions, adapted from Thurlow et al. (2004):
- Research breadth: Have you received a reasonable number of correctly completed online questionnaires to allow you to address your research question?
- Research quality: Are you happy that your online questionnaire has been of good quality to produce a robust set of data?
- Good examples: Have you found relevant case studies and examples with which to explore your research question?
- Own ideas: Have you been original in your thinking and translated this into the research design for the online questionnaire?
Analysis
Once the success of the online questionnaire has been evaluated, the results can be analysed as soon as they are received.
Where a web form has been used to automatically populate a database, the data can be manipulated and exported once collected.
Where a web-form has been created that emails results direct to an email address, the emails can be gathered in a single folder (manually or automatically through setting an email 'rule'). They can then be automatically exported in a suitable format (such as comma separated values) directly into spreadsheet or statistical analysis packages. This process is made easier through the use of 'import/export wizards' which can guide the user through the process (See image below for an example of MS Outlook's import/export wizard).
The Microsoft Outlook import/export wizard dialogue box
Further information about gathering and exporting data can be found in the 'Technical guide' module.
Once imported into a suitable package, the data can then be sorted and analysed.
A brief general overview of data analysis can be found on the following page, taken from the Web Center for Social Research Methods' 'knowledge base' by William M. Trochim at Cornell University: http://www.socialresearchmethods.net/kb/analysis.htm
The P|E|A|S (Practical Exemplars and Survey Analysis) website, supported by the ESRC Research Methods Programme, is a resource which provides exemplars in survey analysis, outlines the underlying theory, and offers information about analysis software: http://www.napier.ac.uk/depts/fhls/peas/index.htm
The quantitative resources area of the Social Science Information Gateway (SOSIG) website also offers links to key websites and training materials on data analysis and statistics: http://www.sosig.ac.uk/roads/subject-listing/World-cat/quanmeth.html
Technical guide, by Rob Shaw
The 'Technical guide' module is targeted at researchers in a range of contexts and with a range of experience of internet-mediated research and computers in general. It aims to allow you to choose a method of designing and implementing an online questionnaire, and to develop the skills required to carry this out effectively.
The contents are as follows:
- Aims and learning outcomes
- Introduction to online questionnaire production: Overview and options
- Choosing software for online questionnaire production
- Using software for online questionnaire production
- Introduction to HTML 1
- Introduction to HTML 2
- Introduction to CSS
- Web forms
- Introduction to JavaScript
- Form validation
- Key design issues
- Gathering information about participants
- Server-side processing
- FAQs
- Glossary
- Print version
- Further resources
It can be followed in order or different sections can be used independently.
FAQs
Have many researchers used online methods successfully? Can I see what they did?
Yes, many people are now using online questionnaires. There are a number of general social science and psychology-related sites which advertise online questionnaires and experiments:
- Lab-United - International Online-Research: http://www.w-lab.de/lab-united/actual.php
- The Social Psychology Web-lab: http://socpsy.psy.ed.ac.uk (contains links to listings of online studies)
- Online Psychology Research UK: http://www.onlinepsychresearch.co.uk/ (specifically advertises for participants from the UK)
Some key online questionnaire and research methods sources can be found at:
- Web Survey Methodology, a portal funded by the EU Fifth Framework program: http://www.websm.org/
- The Web Centre for Social Research Methods at Cornell University: http://www.socialresearchmethods.net/
- School of Library, Archival and Information Studies at the University of British Columbia: http://www.slais.ubc.ca/resources/research_methods/general3.htm
There are, of course, many more.
Is it possible to obtain a broad representative sample from internet research?
Given the increase in the number of internet users, many writers are increasingly optimistic about the ability of internet studies to reach a representative sample of the population for quantitative research. However, where such a sample is required, the researcher is still likely to face major problems. Attempts to create an accurate sampling frame are hampered by the fact that email listings for broad cross-sections of the population are difficult to come by. Use of the internet remains unevenly distributed both socially and geographically. People from poorer countries and from particular groups within richer countries (such as those from lower social classes or the elderly) are less likely to have access (Denscombe 2003).
Generally, online questionnaires are best used in situations where a particular group is targeted through non-probability sampling or self-selection (see 'Sampling' section).
Are my response rates likely to be higher or lower if I use an online questionnaire rather than a paper-based one? Are the results likely to be of equal quality?
Research suggests that online and postal questionnaires return roughly similar results in terms of response rates. Though research is limited, the studies that have been undertaken also suggest that the quality of response is roughly equivalent (Denscombe 2003).
How can I maximise my response rate?
The following have been reported to increase response rates:
- Send introductory letter outlining project and estimated time needed to complete the questionnaire;
- Include an institutionally sanctioned website to validate researchers' identity;
- Provide clear instructions on how to complete the questionnaire;
- Request personal information at the start of the questionnaire rather than the end;
- Use simple questionnaire format and avoid unnecessary graphics;
- Avoid grid questions, open-ended questions and requests for email addresses;
- Design survey so it takes approximately 10 minutes to complete;
- Do not include more than 15 questions;
- Send one or two follow up reminders;
- Include 'social presence' or missing data messages to reduce item non-response;
- Emphasise confidentiality.
How long should I give people to fill in an online questionnaire after I've sent it out?
Research suggests that if people are going to complete a web survey they will do so in the first few hours or days of receiving it. However, it has also been found that response rates can be increased by follow-up reminders: a single reminder can double the number of respondents, and it has been suggested that four repeated contacts yield the highest response rate. The type of internet connection and the hardware and software used in accessing the internet will also affect how quickly the questionnaire is completed. Generally speaking, though, your response rate will tail off after the first few days, and you are unlikely to get many responses from your initial request after four weeks - although, of course, this varies with the target population and type of research project.
What kind of design issues do I need to consider when creating a questionnaire for online use?
Design issues are an extremely important consideration for online questionnaires because of the highly visual nature of the web environment, and the variations in technical skills of survey respondents. The massive range of purposes of questionnaires and diversity of the populations to be studied mean that there is no single design approach that is appropriate for all online questionnaires. Some general good practice guidelines are noted below.
- Ensure consistency of appearance with different browsers and computers;
- Include a welcome screen;
- Provide clear and precise instruction on how to complete and submit the questionnaire;
- Keep the questionnaire as short as possible - 10 minutes maximum;
- Include a progress indicator to reduce drop-outs;
- Use colour sparingly - yellow and blue most useful;
- Use 12 or 14 point Times New Roman or Arial font;
- Use a variety of question types appropriate to the research question and type of respondent;
- Only use multi-media stimuli when essential to the research project;
- Pilot and do usability testing.
See the 'Design' section for further information.
How can I create boxes in a Word document/web page that people can easily type responses into?
Form fields can be added to web pages by adding appropriate HTML '<input>' tags or by inserting them using WYSIWYG (What You See Is What You Get) editors such as Macromedia Dreamweaver or Microsoft FrontPage. A full guide to creating forms in web pages is available in the 'Technical guide' module, along with links to further information in the resources section.
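As a brief illustration (the field names and submission address are invented), a single-line text box and a larger comments box can be created as follows:

<form method="post" action="process-results.cfm">
  <p>Name: <input type="text" name="name" size="30" /></p>
  <p>Comments:<br />
  <textarea name="comments" rows="5" cols="40"></textarea></p>
  <p><input type="submit" value="Send responses" /></p>
</form>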
It is also easy to create questionnaires within Word documents that can be sent as attachments. Participants will, however, need to have a suitable version of the software installed on their machine, and it should be remembered that the use of attachments may seriously reduce response rates. Attachments may expose respondents to more computer viruses, and the online researcher must ensure that they never forward any viruses with their emails or attachments. Indeed, Hewson et al. (2003, 117) note that a researcher using attachments can become a 'global pariah', and that it is best to refrain from using attachments altogether and rely on text-based messaging.
However, if a researcher chooses to proceed with this method, it is possible to create Word documents ready-made for questionnaires. By inserting form elements from the 'forms' toolbar into a template and then protecting it, the Word document is locked so that users are only able to add information to the appropriate form fields and cannot change any other sections.
A step-by-step guide to doing this is available on Microsoft's TechNet site:
http://www.microsoft.com/technet/prodtechnol/office/office2000/tips/msw9741.mspx#EBAA
and further information can be found via the University of Essex's Computing Service pages:
http://www2.essex.ac.uk/cs/documentation/use/word6/howto/forms.html
Other resources are also easy to obtain through a simple Google search.
Does surveying just have to be quantitative? Can I include 'discuss' questions?
Questionnaires are generally based on a quantitative methodology, but that does not preclude the use of more open, discursive questions. However, you will need to think carefully about how you will analyse such responses so that you do not fall into the trap of simplistic description. The following site from the School of Library, Archival and Information Studies at the University of British Columbia is a useful starting point for thinking about qualitative analysis: http://www.slais.ubc.ca/resources/research_methods/qualitat.htm
Is there any problem in blending together data that was collected online and onsite?
In theory, no. Indeed, it is likely that in the future we will see an increase in the use of 'mixed method triangulation' with onsite and online methods both used to interrogate and verify the intersections between real and virtual infrastructures, enabling research to take place across a variety of online/offline domains. However, practical issues may emerge relating to compatibility of sampling in the two domains, transference of online and onsite questionnaire design and data format comparability. This remains a very interesting area of methodological enquiry which requires further investigation.
What kind of ethical issues do I need to consider?
As with any research, there are a wide range of ethical issues that should be considered when carrying out research via online questionnaires. Examples include issues of privacy, consent and confidentiality. It is also essential to maximise data security in collection and storage. See the 'Ethics' module of this training package for further information.
Glossary
A
Accessibility
The accessibility of a web page refers to the extent to which users can access the content regardless of the technology they use or any disability they may have. An accessible web page is one that is designed to ensure that this is possible through, for example, providing textual descriptions of graphics used to allow their significance to be described in text-only browsers or via screen-reading software.
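e.g. a textual description can be attached to an image using the 'alt' attribute (the file name and description here are illustrative only):
HTML: <img src="results_chart.gif" alt="Bar chart showing responses by age group" />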
Answering drop-outs
Those who provide answers to the questions displayed, but quit prior to completing the survey (Bosnjak et al. 2001, 12).
B
Browser
Software which requests resources (mainly web pages) from a server computer and displays them. An example of client software held on a client machine.
Button
A standard HTML button. Can be linked to JavaScript to perform an action when clicked.
e.g.:
HTML: <input type="button" value="Standard button" />
C
Check box
Square tags that display a mark when selected and can allow multiple responses.
e.g.
HTML: <input type="checkbox" name="1" value="A" /> A
<input type="checkbox" name="2" value="B" /> B
Client-side scripting
Client-side scripting through scripting languages such as JavaScript allows dynamic or interactive features to be added to web pages. Code is added to a web page which is executed in the browser on the client computer. It can be used to, for example, carry out different actions according to user actions or input.
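e.g. a minimal sketch in which a JavaScript fragment responds to a user action (the message text is illustrative only):
HTML: <input type="button" value="Finish" onclick="alert('Thank you for completing the questionnaire.')" />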
Complete responders
Those responders who view and answer all questions in a questionnaire (Bosnjak et al. 2001, 12).
CSS
CSS (Cascading Style Sheets - also referred to simply as 'Style Sheets') provide a means of adding design elements to basic HTML pages. For example, using CSS, it is possible to control the colour, positioning and spacing of objects such as text, links, images and tables.
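e.g. an illustrative style sheet fragment (the particular values are arbitrary):
HTML: <style type="text/css">
body { font-family: Arial; font-size: 12pt; }
h1 { color: #003366; margin-bottom: 10px; }
</style>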
D
Drop down list box / Select box
An element which allows users to select options by clicking. Only one option is displayed until the user clicks on the arrow.
e.g.
HTML: <select name="select">
<option>Option 1</option>
<option>Option 2</option>
<option>Option 3</option>
<option>Option 4</option>
</select>
E
F
Form elements
A set of form items into which the user can enter data to be sent to the researcher. Commonly used elements include the following:
Button, Check box, Drop down list / Select box, Hidden form field, List box, Password box, Radio buttons, Reset button, Submit button, Text area, Text box
G
Graphic Interchange Format (GIF)
One of the two most common types of images in use on the internet (along with JPEGs), GIFs are usually more appropriate for line drawings or graphics with a limited number of colours.
H
Hidden form fields
Hidden form fields are form controls that are not displayed on the page (though they are visible in the HTML source for the page). They are useful for storing and passing information from page to page which is not necessary or desirable to display. They can be thought of as text boxes with content that can be set by the developer via HTML or JavaScript rather than being completed by the user.
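e.g. (the name and value here are hypothetical):
HTML: <input type="hidden" name="page" value="2" />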
HTML
HTML (Hyper Text Markup Language) is the technical language that lies behind most web pages. It consists of tags which surround blocks of text to indicate how they should appear in a browser, and which are used to insert elements such as images or tables.
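e.g. tags surrounding blocks of text:
HTML: <p>This paragraph will appear in the browser, with <strong>this phrase</strong> in bold.</p>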
I
Identity verification
The process of checking the identity of individuals using official documentation, or by reference to their social characteristics. This can be important in the situation of a virtual anonymous interview.
Item nonresponders
Those responders who view the whole questionnaire, but only answer some of the questions (Bosnjak et al. 2001, 12).
Item nonresponding Drop-outs
Those who view some of the questions, answer some but not all of those viewed, and also quit prior to the end of the survey. A 'more accurate depiction of actual events in web surveys than the relatively basic categorization of complete participation, unit nonresponse, or item non-response' (Bosnjak et al. 2001, 12).
J
Joint Photographic Experts Group (JPEG)
One of the two most common types of images in use on the internet (along with GIFs), JPEGs are usually suitable for images with a large number of colours such as photographs. The file extension is '.jpg'.
K
L
Listbox
An element which allows users to select options by clicking. Several options are displayed.
e.g.
HTML: <select name="select2" size="4">
<option>Option 1</option>
<option>Option 2</option>
<option>Option 3</option>
<option>Option 4</option>
</select>
Lurkers
Those who view all of the questions in the survey, but do not answer any of them (Bosnjak et al. 2001, 12).
Lurking drop-outs
Those who view some of the questions without answering, but also quit the survey prior to reaching the end, thus sharing some characteristics with 'answering drop-outs' and 'lurkers' (Bosnjak et al. 2001, 12).
M
Measurement error
Error caused by the type or presentation of questions. In terms of online questionnaires, this is indicated when responses to the same question vary according to whether the questionnaire is administered online or onsite.
N
Non-probabilistic sampling
A sub-set of users is selected from the sampling frame in a non-random manner, so that not every member has a known chance of being selected.
Non-response bias
Bias caused when respondents who answer an online questionnaire have very different attitudes or demographic characteristics to those who do not respond.
O
P
Password box
Text input box that allows a single line of text to be entered. It is possible to limit the size of the box and the number of characters that can be entered. As the user types, the characters are hidden from display.
e.g.
HTML: <input type="password" size="15" maxlength="10" />
Portal site
A web site which aims to act as an entry point to users of the internet from which they can access a range of information within the site itself and/or through links to other sites.
Probabilistic sampling
Each member of the sampling frame has an equal chance of being selected.
Q
R
Radio buttons
Circular tags that fill in when one option is selected; where several radio buttons share the same name, only one of them can be selected at a time.
e.g.
HTML: <input type="radio" name="1" value="Yes" /> Yes
<input type="radio" name="1" value="No" /> No
Reset button
A reset button clears any form data that has been input, returning fields to the original values they had when the page was loaded.
e.g.:
HTML: <input type="reset" value="Reset" />
S
Sampling frame
The list or source used to identify and locate suitable respondents (e.g. mailing lists and email directories).
Search engine
Web services that hold information about the contents of websites on the World Wide Web which can be searched by users who wish to locate information on particular subjects. Typically, the user enters key words and the search engine returns a list of links to sites which include, or contain information connected to, these key words.
Select box / Drop down list box
An element which allows users to select options by clicking. Only one option is displayed until the user clicks on the arrow.
e.g.
HTML: <select name="select"> <option>Option 1</option>
<option>Option 2</option> <option>Option 3</option>
<option>Option 4</option> </select>
Server
A computer which delivers web pages to a client computer when a URL is typed into the address bar of a browser on that computer. Also used to refer to the software held on the server computer which allows this process to take place.
Server-side processing
Server-side processing allows dynamic or interactive features to be added to web pages. This is done by the server computer before the page is sent to the client computer. Server-side processing can be accomplished using a range of technologies such as PHP, ASP(X), Perl/CGI and ColdFusion. It can be used to, for example, validate and process information entered by users into web forms, store or retrieve information in databases, and automatically send emails.
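e.g. a minimal sketch of the client side of this process (the script name 'process_survey.php' is hypothetical; the script itself, written in a language such as PHP, would validate the answers and store them in a database):
HTML: <form method="post" action="process_survey.php">
<input type="text" name="email" size="30" />
<input type="submit" value="Submit" />
</form>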
Skip mechanisms
Functionalities added to an online questionnaire which automatically provide participants with a route through the questionnaire, avoiding questions that are not relevant. When a question is answered, the next question will be delivered according to the response so that different questions are provided depending on particular answers.
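e.g. a simplified client-side sketch using JavaScript (the question and page names are hypothetical; in practice skip logic is often handled on the server): a respondent answering 'No' is routed past the follow-up questions.
HTML: <input type="radio" name="uses_internet" value="Yes" onclick="window.location='section2.htm'" /> Yes
<input type="radio" name="uses_internet" value="No" onclick="window.location='section3.htm'" /> No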
Submit button
A submit button sends the form data to the server when clicked. What happens next depends on the form action specified; most commonly the results will be emailed to the researcher or added to a database.
e.g.:
HTML: <input type="submit" value="Submit" />
T
Text area
Allows the user to input a large amount of text. By default, the text will wrap onto a new line when the end of a line is reached, and a scroll-bar will appear on the right-hand side when the number of lines displayed is exceeded.
e.g.
HTML: <textarea cols="60" rows="5"></textarea>
Text box
Allows a single line of text to be input, with the size of the box and the maximum number of characters specified.
e.g.
HTML: <input type="text" size="25" maxlength="20" />
Text editor
A simple application which allows users to enter, edit and save text, typically with basic formatting options.
U
URL
A URL (Uniform Resource Locator) is the address for a resource available online (usually a web page). URLs consist of a reference to the server computer which holds the resource along with a path to the file containing this resource on the computer. By typing the URL into a browser, a request is sent from a user's computer (client computer) to the server computer to deliver this resource.
Unit non-responders
Those who do not participate in a survey. There are two possible variations: they may be 'technically hindered' or may 'purposefully withdraw after the welcome screen is displayed, but prior to viewing any questions' (Bosnjak et al. 2001, 12).
Usability testing
Checking that the website performs the function for which it was designed, with the minimum amount of user frustration, time and effort – usually involves observation in which participants are asked to verbalize their thoughts by 'thinking out loud', and analysis of responses.
V
Validation
The functionality which allows forms to be automatically checked before or at submission to ensure that any required questions have been answered and/or that data has been entered in a suitable format. This can be done using client-side and/or server-side scripting. Typically, validation routines will prevent submission where problems are found with the form and a message will be delivered to the user prompting them to check their answers and resubmit.
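e.g. a minimal client-side sketch (the field, file and function names are hypothetical; server-side routines would apply similar checks on the server):
HTML: <script type="text/javascript">
function checkForm(f) {
// Prevent submission if the age question is unanswered
if (f.age.value == '') { alert('Please enter your age before submitting.'); return false; }
return true;
}
</script>
<form method="post" action="process.cgi" onsubmit="return checkForm(this);">
Age: <input type="text" name="age" size="3" />
<input type="submit" value="Submit" />
</form>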
W
'Web-safe' colour palette
A set of 216 colours recommended for use on the internet as they are not subject to variation on different types of monitors and systems.
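The 216 colours are those whose red, green and blue components are each restricted to the hexadecimal values 00, 33, 66, 99, CC or FF (6 x 6 x 6 = 216 combinations).
e.g. HTML: <body bgcolor="#003366"> or, in CSS: color: #003366;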
WYSIWYG
WYSIWYG (What You See Is What You Get) software packages such as Macromedia Dreamweaver or Microsoft FrontPage are tools that allow web pages to be created and edited using an interface that displays the page as it will appear in a browser.
X-Z
List of references
The following is a list of references cited in this module:
Arnau, R. C., Thompson, R. L. and Cook, C. (2001) Do different response formats change the latent structure of responses? An empirical example using taxometric analysis. Educational and Psychological Measurement, 61, 23-44.
Bennett, G. (2000) Using incentives to ensure quality data and high response rates. Paper presented at Online market research and web based surveys conference, London, May 30 - June 1. Abstract.
Bernard, M. and Mills, M. (2000) So, what size and type of font should I use on my web-site? Usability News, 2, 2.
http://psychology.wichita.edu/surl/usabilitynews/2S/font.htm
Bernard, M., Lida, B., Riley, S., Hackler, T. and Janzen, K. (2002) A comparison of popular online fonts: Which type and size is best? Usability News, 4, 1.
http://psychology.wichita.edu/surl/usabilitynews/41/onlinetext.htm
Best, S. J. and Krueger, B. S. (2004) Internet data collection. Sage University Paper 141. London. Sage.
Birnholtz, J. P., Horn, D. B., Finholt, T. A. and Bae, S. J. (2004) The effects of cash, electronic, and paper gift certificates as respondent incentives for a web-based survey of technologically sophisticated respondents, Social Science Computer Review, 22, 3, 355-362.
Bosnjak, M. and Tuten, T. L. (2001) Classifying response behaviors in web-based surveys, Journal of Computer Mediated Communication, 6, 3.
http://jcmc.indiana.edu/vol6/issue3/boznjak.html
Bosnjak, M., Tuten, T. L. and Bandilla, W. (2001) Participation in web surveys: A typology, ZUMA Nachrichten, 48, 7-17.
Carini, R. M. et al. (2003) College students' responses to web and paper based surveys: Does mode matter? Research in Higher Education, 44, 1, 1-19.
Cook, C., Heath, F. and Thompson, R. L. (2000) A meta-analysis of response rates in web or internet-based surveys, Educational and Psychological Measurement, 60, 6, 821-836.
Coomber, R. (1997) Using the Internet for survey research, Sociological Research Online, 2, 2.
http://www.socresonline.org.uk/2/2/2.html.
Crawford, S. D., Couper, M. P. and Lamias, M. J. (2001) Web surveys: Perceptions of burden, Social Science Computer Review, 19, 2, 146-162.
Denscombe, M. (2003) The good research guide for small scale research projects. Maidenhead. Open University Press.
Dillman, D. A. (2000) Mail and internet surveys - the tailored design method. New York. Wiley.
Dillman, D. A., Tortora, R. D. and Bowker, D. (1998). Principles for Constructing Web Surveys. SESRC Technical Report 98-50. Pullman. Washington.
Dodd, J. (1998) Market research on the Internet - threat or opportunity? Marketing and Research Today, 26, 1, 60-66.
Frick, A., Bachtiger, M. T. and Reips, U. D. (2001) Financial incentives, personal information and drop-out rate in online studies, in Reips, U. D. and Bosnjak, M. (Eds.) Dimensions of internet science. Lengerich. Pabst Science Publishers. pp. 209-220.
Harris, C. (1997) Developing online market research methods and tools, Paper presented to ESOMAR Worldwide Internet Seminar. Lisbon, July 1997.
Hewson, C., Yule, P., Laurent, D. and Vogel, C. (2003) Internet Research Methods. London. Sage.
Janelle, D. G. and Hodge, D. C. (2000) Information, place, cyberspace and accessibility, in Janelle, D. G. and Hodge, D. C. (Eds.) Information, Place and Cyberspace. New York. Springer. pp. 3-12.
Jeavons, A. (1998) Ethology and the web: Observing respondent behaviour in web surveys. Proceedings of the Worldwide Internet Conference, Amsterdam, ESOMAR.
Knapp, F. and Heidingsfelder, M. (2001) Drop-out analysis: The effect of research design, in Reips, U. D. and Bosnjak, M. (Eds.) Dimensions of internet science. Lengerich. Pabst Science Publishers. pp. 221-230.
Litvin, S. W. and Kar, G. H. (2001) E-surveying for tourism research: Legitimate tool or a researcher's fantasy? Journal of Travel Research, 39, 308-314.
Madge, C. and O'Connor, H. (2002) Online with e-mums: Exploring the Internet as a medium for research, Area, 34, 1, 92-102.
Madge, C. and O'Connor, H. (in press) Mothers in the making? Exploring notions of liminality in hybrid cyber/space. Transactions of the Institute of British Geographers.
Mann, C. and Stewart, F. (2000) Internet Communication and Qualitative Research. London. Sage.
McDonald, H. and Adam, S. (2003) A comparison of online and postal data collection methods in marketing research, Marketing Intelligence and Planning, 21, 2, 85-95.
Miller, T. W. and Panjikaran, K. J. (2001) Studies in Comparability: The Propensity Scoring Approach. University of Wisconsin, Madison.
O'Connor, H. and Madge, C. (2004) My mum's thirty years out of date: The role of the Internet in the transition to motherhood, Community, Work and Family. 7, 3, 351-369.
O'Lear, R. M. (1996) Using electronic mail (e-mail) surveys for geographic research: Lessons from a survey of Russian environmentalists, Professional Geographer, 48, 209-217.
Pealer, L. N., Weiler, R. M., Pigg, R. M., Miller, D. and Dorman, S. M. (2001) The feasibility of a web-based surveillance system to collect health risk behaviour data from college students, Health Education and Behavior, 28, 5, 547-559.
Pearrow, M. (2000). Web Site Usability Handbook. Rockland, Mass. Charles River Media, Inc.
Porter, S. R. and Whitcomb, M. E. (2003a) The impact of lottery incentives on survey response rates, Research in Higher Education, 44, 4, 389-407.
Porter, S. R. and Whitcomb, M. E. (2003b) The impact of contact type on web-survey response rates, Public Opinion Quarterly, 67, 4, 579-589.
Rigden, C. (1999) The eye of the beholder - designing for colour-blind users, British Telecommunications Engineering Journal, 17, 2-5.
Riva, G., Teruzzi, T. and Anolli, L. (2003) The use of the Internet in psychological research: Comparison of online and offline questionnaires, CyberPsychology and Behavior, 6, 1, 73-80.
Roberts, L. D. and Parks, M. R. (2001) The social geography of gender switching in virtual environments on the Internet, in Green, E. and Adam, A. (Eds.) Virtual Gender: Technology, Consumption and Gender. London. Routledge. pp. 265-285.
Sax, L. J., Gilmartin S. K. and Bryant A. N. (2003) Assessing response rates and non response bias in web and paper surveys, Research in Higher Education, 44, 4, 409-431.
Schaefer, D. R. and Dillman, D. A. (1998) Development of a Standard E-mail Methodology: Results of an Experiment. Public Opinion Quarterly, 62, 378-397.
Silver, D. (2000) Looking backwards, looking forwards: Cybercultural studies 1990-2000, in Gauntlett, D. (Ed.) Web.Studies: Rewiring Media Studies for the Digital Age. London. Arnold. pp. 19-30.
Smith, C. (1997) Casting the net: Surveying an Internet population, Journal of Computer Mediated Communication, 3, 1.
http://jcmc.indiana.edu/vol3/issue1/smith.html.
Sweet, C. (2001) Designing and conducting virtual focus groups, Qualitative Market Research: an International Journal, 4, 3, 130-135.
Taylor, H., Bremer, J., Overmeyer, C., Siegel, J. W. and Terhanian, S. (2001) Touchdown! Online polling scores big in November 2000, Public perspective, 12, 33-35.
Taylor, T. L. (1999) Life in virtual worlds: Plural existence, multimodalities and other online research challenges, American Behavioral Scientist, 43, 436-449.
Umbach, P. D. (2004) Web surveys: Best practices, New Directions in Institutional Research, 121, 23-38.
Valentine, G. (2001) Social geographies. Space and society. Harlow. Prentice Hall.
Wakeford, N. (2000) New media, new methodologies: Studying the web, in Gauntlett, D. (Ed.) Web.Studies: Rewiring Media Studies for the Digital Age. London. Arnold. pp. 31-41.
Warf, B. (2001) Segueways into cyberspace: Multiple geographies of the digital divide, Environment and Planning B, Planning and Design, 28, 3-19.
Witmer, D. F. Colman, R. and Katzman, S. L. (1999) From paper-and-pencil to screen-and-keyboard: Towards a methodology for survey research on the Internet, in Jones, S. (Ed.) Doing Internet Research: Critical Issues and Methods for Examining the Net. London. Sage. pp. 145-161.
Zhang, Y. (1999) Using the internet for survey research: A case study, Journal of American Society for Information Science, 51, 1, 57-68.
Further resources
Useful references for online questionnaires
Websites
Web Survey Methodology
http://www.websm.org/.
This website aims to provide information on new technologies in data collection, with a focus on web surveys. It includes an extensive bibliography, discussion forum, news and events. The bibliography covers case studies, sampling, non-response, design, incentives and technology, and the site also offers a selection of articles.
Edinburgh-Southampton Social Psychology Web-lab
http://socpsy.psy.ed.ac.uk.
Website set up by researchers involved in an ESRC project on gender stereotypes (Prof Constantine Sedikides, University of Southampton; Dr Alison Lenton, University of Edinburgh) in order to carry out the studies connected to the project. Lists current and previous studies carried out by the team alongside links to other collections of online experiments and questionnaires.
How to Put Questionnaires on the Internet, Dr Paul Kenyon
http://www.flyfishingdevon.co.uk/salmon/internet_questionnaires/internet_questionnaires.htm.
An excellent learning module from SALMON (Study and Learning Materials Online) by Paul Kenyon, formerly of the University of Plymouth Department of Psychology. Designed to introduce students to the use of the internet for collecting research data. It is related to the particular software and systems in use in the department, but also contains a wealth of general information of use to those aiming to produce online questionnaires in a range of different contexts.
Online survey design guidelines
http://lap.umd.edu/survey_design/guidelines.html.
A collection of tips on design, navigation, accessibility and usability issues collated from key literature in the field, based at the University of Maryland.
Association for Survey Computing
http://www.asc.org.uk/.
Professional association for Survey Computing in the UK. Includes a searchable database of software for survey creation, administration and analysis.
Web Survey guide
http://joni.soc.surrey.ac.uk/~scs1ps/surrey%20web%20surveys/index.html.
Website offering a basic overview to online surveying and advice on how to proceed in conducting a Web survey.
SRA/RMP seminars on survey methods: Use of internet surveys in opinion polling
http://www.ccsr.ac.uk/methods/events/SRARMP1/programme.htm.
Information webpage for a half-day seminar held jointly by the Social Research Association and the ESRC Research Methods Programme in November 2005. Includes PowerPoint presentations on subjects such as the layout of online questionnaires and the future of online research.
Books and journal articles
Andrews, D., Nonnecke, B. and Preece, J. (2003) Electronic survey methodology: A case study in reaching hard-to-involve internet users. International Journal of Human-Computer Interaction, 16, 185-210.
Ballard, C. and Prine, R. (2002) Citizen perceptions of community policing: Comparing internet and mail survey responses. Social Science Computer Review, 20, 4.
Bandilla, W. (2002) Web surveys - An appropriate mode of data collection for the social sciences? in Batinic, B., Reips, U. D. and Bosnjak, M. (Eds.) Online Social Sciences. Seattle, WA. Hogrefe and Huber. pp. 1-6.
Barnes, S. B. (2003) Issues of attribution and identification in online social research, in Johns, M. D., Chen, S. S. and Hall, G. J. (Eds.) Online Social Research. New York. Peter Lang. pp. 203-222.
Best, S. J. and Krueger, B. (2002) New approaches to assessing opinion: The prospects for electronic mail surveys. International Journal of Public Opinion Research, 14, 73-92.
Best, S. J., Krueger, B., Hubbard, C. and Smith, A. (2001) An assessment of the generalizability of Internet surveys. Social Science Computer Review, 19, 131-145.
Birnbaum, M. H. (2004) Human research and data collection via the Internet. Annual Review of Psychology, 55, 803-832.
Bosnjak, M. and Batinic, B. (2002) Understanding the willingness to participate in online surveys - The case of e-mail questionnaires, in Batinic, B., Reips, U. D. and Bosnjak, M. (Eds.) Online Social Sciences. Seattle, WA. Hogrefe and Huber. pp. 81-92.
Bosnjak, M. and Tuten, T. L. (2003) Prepaid and promised incentives in Web surveys: An experiment. Social Science Computer Review, 21, 208-217.
Brenner, V. (2002) Generalizability issues in Internet-based survey research: Implications for the Internet addiction controversy, in Batinic, B., Reips, U. D. and Bosnjak, M. (Eds.) Online Social Sciences. Seattle, WA. Hogrefe and Huber. pp. 93-113.
Chen, S. S. and Christians, C. G. (2003) Introduction: Technological environments and the evolution of social research methods, in Johns, M. D., Chen, S. S. and Hall, G. J. (Eds.) Online Social Research. New York. Peter Lang. pp. 15-24.
Dillman, D. A. and Bowker, D. K. (2001) The Web questionnaire challenge to survey methodologists, in Reips, U. D. and Bosnjak, M. (Eds.) Dimensions of Internet science. Lengerich, Germany. Pabst Science Publishers. pp. 159-178.
Glover, D. and Bush, T. (2005) The online or e-survey: a research approach for the ICT age. International Journal of Research & Method in Education, 28, 2, 135-146.
Gosling, S. D., Vazire, S., Srivastava, S. and John, O. P. (2004) Should we trust Web-based studies? American Psychologist, 59, 93-104.
Kaye, B. K. and Johnson, T. J. (1999) Taming the cyber frontier: Techniques for improving online surveys. Social Science Computer Review, 17, 323-337.
Kraut, R., Olson, J., Banaji, M., Bruckman, A., Cohen, J. and Couper, M. (2004) Psychological research online. American Psychologist, 59, 105-117.
Neustadtl, A., Robinson, J. P. and Kestnbaum, M. (2002) Doing social science research online, in Wellman, B. and Haythornthwaite, C. (Eds.) The Internet in Everyday Life. Malden, MA. Blackwell. pp. 186-211.
O'Neil, K. M. and Penrod, S. D. (2001) Methodological variables in Web-based research that may affect results: Sample type, monetary incentives, and personal information, Behavior Research Methods, Instruments and Computers, 33, 226-233.
O’Neil, K. M., Penrod, S. D. and Bornstein, B. H. (2003) Web-based research: Methodological variables’ effects on dropout and sample characteristics. Behavior Research Methods, Instruments, and Computers, 35, 217-226.
Ranchhod, A. and Zhou, F. (2001) Comparing respondents of e-mail and mail surveys: Understanding the implications of technology. Marketing Intelligence and Planning, 19, 254-262.
Schonlau, M., Asch, B. J. and Du, C. (2003) Web surveys as part of a mixed-mode strategy for populations that cannot be contacted by e-mail. Social Science Computer Review, 21, 218-222.
Scriven, A. and Smith-Ferrier, S. (2003) The application of online surveys for workplace health research, Journal for the Royal Society for the Promotion of Health, 123, 2, 95-101.
Truell, A. D., Bartlett, J. E., II. and Alexander, M. W. (2002) Response rate, speed, and completeness: A comparison of Internet-based and mail surveys. Behavior Research Methods, Instruments & Computers, 34, 46-49.
Welker, M. (2001) E-mail surveys: Non-response figures reflected, in Reips, U. D. and Bosnjak, M. (Eds.) Dimensions of Internet science. Lengerich, Germany. Pabst Science Publishers. pp. 231-238.
Wonshik, C. (2003) Issues in Internet research. Nursing Outlook, 51, 6-12.
Useful references for web-design
Websites
The 'Technical guide' module contains a range of information and further links on web design and web programming for questionnaires. Some key links are also shown below.
W3 Schools
http://www.w3schools.com/.
A range of reference information, tutorials, examples and quizzes on a wide range of internet technologies, including HTML, Cascading Style Sheets and JavaScript.
Getting started with HTML
http://www.w3.org/MarkUp/Guide/.
A good basic introduction to HTML from the World Wide Web Consortium (W3C).
HTML Goodies
http://www.htmlgoodies.com/.
A range of short tutorials designed to help you with specific aspects of web design.
HTML Reference
http://www.w3schools.com/tags/ref_byfunc.asp.
List of HTML tags organised by their function, from W3Schools.
Cascading Style Sheets (CSS)
http://www.w3.org/MarkUp/Guide/Style.html.
An introduction to Cascading Style Sheets from W3C.
Web-safe colours
http://www.lynda.com/hex.html.
Offers tables of web-safe colours organised by either hue (colour) or value (lightness). Makes it easier to design appropriate colour schemes.
JavaScript Primers
http://www.htmlgoodies.com/primers/jsp/.
30 short JavaScript lessons with learning activities.
JavaScript examples
http://JavaScript.internet.com/.
Over two thousand JavaScript examples organised into sub-sections.
JavaScript use in forms
http://irt.org/script/form.htm/.
World Wide Web Consortium (W3C) Validators
http://www.w3.org/QA/Tools/#validators
Enter a link to your web pages or upload a local file to check that your HTML or CSS meets web standards and guidelines.
HTML Tidy
http://www.w3.org/People/Raggett/tidy/.
Automatically cleans up HTML to correct problems caused either by mistakes or by the automatic production of invalid HTML by web editors.
Books and journal articles
Badre, A. N. (2002) Shaping Web Usability. Interaction design in context. Boston. Addison-Wesley.
Holzschlag, M. E. and Lawson, B. (2002) Usability: The site speaks for itself. Birmingham. Glasshaus.
Lynch, P. J. and Horton, S. (1999) Web style guide: Basic design principles for creating web sites. New Haven and London. Yale University Press.
Nielsen, J. (2000) Designing web usability: The practice of simplicity. Indianapolis. New Riders. (See also http://www.useit.com)
Nielsen, J. and Tahir, M. (2002) Homepage usability: 50 websites deconstructed. Indianapolis. New Riders.
Pearrow, M. (2000) Web Site Usability Handbook. Rockland, Mass. Charles River Media, Inc. (Chapter 5).
Useful references for usability testing
Websites
Usability news: Wichita State University Software Usability Research Lab
http://psychology.wichita.edu/surl/usability_news.html.
Newsletter providing a range of articles with information on research into software and website design and usability.
Usable Information Technology
http://www.useit.com/.
Leading site on usability and user studies by Jakob Nielsen.
Books and journal articles
Badre, A. N. (2002) Shaping Web Usability. Interaction design in context. Boston: Addison-Wesley. (Chapter 12).
Krug, S. (2000). Don't make me think: A common sense approach to web usability. Indianapolis. New Riders. (Chapter 10).
Nielsen, J. (March 2000) Alertbox column: Why You Only Need to Test with 5 Users.
http://www.useit.com.
Pearrow, M. (2000) Web Site Usability Handbook. Rockland, Mass: Charles River Media, Inc. (Chapter 8).