Methodology Review

Data collection

The study targeted college students aged 18 to 24 years as the focus population. A pre-test with N = 54 was used to check the validity and reliability of the constructs and to refine question wording. A large university in the south-eastern USA was selected, and an email link to the survey was sent to students there (Chan-Olmsted, Rim and Zerba, 2013).

Time of data collection

The email link was sent to the sample in March 2011, while the university was in session. Timing plays a vital role in email-based surveys: poorly timed emails lead to low questionnaire response rates, so surveys should be sent when respondents are likely to open the emails and have time to respond to them. Because the questionnaires were sent while the students were in session, this good timing may have increased the response rate. The participants were also offered extra course credit, which may have further motivated participation (Chan-Olmsted, Rim and Zerba, 2013).

Method used

The researchers in this study used a quantitative electronic survey, in which a computer plays the major role in delivering the survey and collecting the data from respondents. Specifically, an email-based online survey was administered over the internet. The researchers developed the questionnaires for the email survey, including scales and multiple-choice questions based on exploratory interview data. To eliminate bias, the wording was kept clear and unambiguous. The email questionnaires included rating definitions, scale formats and demographic items (Chan-Olmsted, Rim and Zerba, 2013).


A convenience sample was drawn from undergraduate students taking introductory courses at a large public university in the south-eastern USA, again targeting students aged 18 to 24 years. A total of 755 students were contacted, and the response rate was 51%, giving a sample of 384 students. Of these 384 responses, 376 questionnaires were complete; the rest were eliminated as incomplete (Chan-Olmsted, Rim and Zerba, 2013).
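The sampling figures reported above can be checked with a short script. This is a sketch, not part of the original study's analysis; the figures are taken directly from the paragraph above.

```python
# Verifying the reported sampling figures (from the study description).
contacted = 755   # students who received the email link
responded = 384   # questionnaires returned
complete = 376    # fully completed, usable questionnaires

response_rate = responded / contacted   # matches the reported 51%
usable_rate = complete / responded      # share of returns that were complete

print(f"Response rate: {response_rate:.1%}")    # → Response rate: 50.9%
print(f"Usable responses: {usable_rate:.1%}")   # → Usable responses: 97.9%
```

Rounded to the nearest whole percent, 384/755 gives the 51% response rate the authors report.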

Data collected

The data collected through the email survey consisted of the respondents' answers to the electronically completed questionnaires. The first set covered frequency and intensity of mobile news use, including the participants' willingness to invest in mobile news by paying for content, how often they received news on a mobile device, time spent reading news on a mobile device, and subscriptions to any news service. Non-users provided data on their likelihood of using mobile news in the future. All of this was recorded on a 7-point scale (Chan-Olmsted, Rim and Zerba, 2013).

The second set of data concerned the relative advantage of mobile news, again measured on a 7-point scale. Perceived utility and ease of use were each measured using a three-item 7-point scale, and media usage also used a 7-point scale on which participants indicated how often they used communication platforms such as social media, the mobile phone and the computer. The next set of data covered news consumption and preference, likewise recorded on a 7-point scale. Lastly, demographic data was collected for use as control variables: level of education, gender, age, ethnicity, income and marital status (Chan-Olmsted, Rim and Zerba, 2013).
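Multi-item scales such as the three-item measures of perceived utility and ease of use are typically combined into a single composite score per respondent. The paper does not show its scoring procedure; the sketch below illustrates the common approach of averaging the items, using hypothetical ratings.

```python
# Hypothetical sketch: averaging a three-item 7-point scale into one
# composite score per respondent (a common, assumed scoring approach).
def composite_score(items):
    """Average a respondent's ratings; each item must lie on the 1-7 scale."""
    if not all(1 <= rating <= 7 for rating in items):
        raise ValueError("ratings must be on the 1-7 scale")
    return sum(items) / len(items)

# One hypothetical respondent's three ease-of-use ratings:
print(composite_score([6, 5, 7]))  # → 6.0
```

Averaging (rather than summing) keeps the composite on the same 1–7 scale as the individual items, which makes scores easier to interpret.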

Methodological analysis

The email-based electronic survey is one of the oldest modes of electronic survey. In the past, email-based surveys had a linear structure and a limited length. In this study, an email link was sent to the participants, who then filled in an electronic questionnaire with scales and choices. This type of survey can only be used where respondents have internet access, since everything is done online; here, the participating students all had internet access and email accounts. The method is also affected by frequency of internet use, but since internet access has been on the rise, this was not a problem the survey had to face. Demographic differences were reduced because all participants were at the same institution, which helped to give unbiased population characteristics (Chan-Olmsted, Rim and Zerba, 2013).

Email-based surveys have been adopted because they are cost-effective and achieve higher response rates than web-based surveys. They do not have to be printed and can be easily updated. It has been shown that an email-based online survey can collect 75% of its responses within the first three days (Van Selm & Jankowski, 2006). As seen in this case, the ability to analyse web survey data quickly should not be underestimated: descriptive statistics are embedded in the survey platform, which makes analysis easy. The scales used in this electronic survey simplify analysis of the data and reduce the human error that can compromise the integrity of the results (Chan-Olmsted, Rim and Zerba, 2013).

It is also worth noting that since the target group was in one location, selection bias was limited. Once an email-based survey closes, the researcher, as in this case, is expected to code the results manually. Data reliability has been a major threat to email-based surveys, although automation tools can help considerably with data checking. The use of closed-ended questions in this survey made it easy and quick for respondents to answer and reduced missing data in the questionnaires. Closed-ended questions are also easily processed, and answers can be compared for inter-coder reliability. Despite these advantages, closed-ended questions may have restricted the range of answers available to participants, and there was a risk of respondents interpreting the questions differently (Cobanoglu, Warde & Moreo, 2001). For clarity, the strengths and limitations of this methodology are discussed further under the subheadings below.
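The simplest inter-coder reliability check mentioned above is percent agreement: the share of items on which two coders assign the same label. The sketch below uses hypothetical labels; the study does not report which reliability statistic, if any, was used.

```python
# Sketch: percent agreement between two coders' labels on the same
# closed-ended responses (hypothetical data, assumed reliability check).
def percent_agreement(coder_a, coder_b):
    """Fraction of items on which both coders assigned the same label."""
    if len(coder_a) != len(coder_b):
        raise ValueError("both coders must label the same items")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

labels_a = ["yes", "no", "yes", "yes", "no"]
labels_b = ["yes", "no", "no", "yes", "no"]
print(percent_agreement(labels_a, labels_b))  # → 0.8
```

Percent agreement is easy to compute but does not correct for chance agreement; statistics such as Cohen's kappa are often preferred for that reason.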

Strengths and limitations of email-based online survey used


Low costs

Email surveys have lower costs than traditional methods because a large sample can be reached relatively cheaply. The method has low overhead costs, and data collection requires fewer resources than other methods (Van Selm & Jankowski, 2006).

Real-time access

As seen in this case, respondents input their data electronically and it is stored automatically, which makes the data easy to analyse. The data is available immediately and can be streamlined.

Less time

Email-based surveys can be deployed rapidly, and the return rates attained through this method are not possible with other traditional survey methods. If a wrong email address is used, it can be corrected as soon as the sender is notified of the failure, immediately after sending; this is not possible with traditional surveys (Van Selm & Jankowski, 2006).

Flexibility for respondents

As seen in this study, the respondents were able to complete the questionnaires on their own schedule and at their own pace. Respondents to email-based surveys can work at their own pace, pausing and finishing later (Cobanoglu, Warde & Moreo, 2001).

No interviewer

The lack of face-to-face interaction with an interviewer, as in this case, makes it possible for participants to disclose information they might not share with another person. It has been shown that interviewers in face-to-face interviews can influence responses and thereby introduce bias (Van Selm & Jankowski, 2006).

Flexibility in design

Design flexibility makes it possible to keep email-based surveys simple: logic and patterns are used to reduce errors (Van Selm & Jankowski, 2006). An example is the use of scales in the questionnaire design to simplify responses and cut down errors.


Low response rates

Online surveys are characterised by low response rates; internet-based surveys have been found to have worse response rates than traditional methods (Van Selm & Jankowski, 2006). This is evidenced here by the study's response rate of 51%, which is quite low: of the 755 students contacted, only 384 responded. This is a major weakness of the method.

Confidentiality and anonymity issues

Online surveys are subject to ethical issues. When carrying out online surveys, assuring participants of anonymity and confidentiality is a requirement, and a researcher who fails to keep such assurances risks breaching research ethics. With the rise of electronic theft, respondents may be reluctant to participate in online surveys. Email surveys also contain some form of identifier of the sender, which raises issues of sender confidentiality and reduces the return rate (Cobanoglu, Warde & Moreo, 2001).

Multiple replies

Where respondents have more than one email address, there is a likelihood of multiple responses, since most people have more than a single email address. This may lead to bias and inaccurate results (Van Selm & Jankowski, 2006).

Sampling difficulties

Drawing samples by email is harder than with other methods, because some of the population may lack internet access and hence be unable to reach the email-based questionnaires. Sampling proceeds on the generalisation that the entire population has internet access, which in some instances is not the case. Poor control of the sample may lead to biased samples, and the method also has few sampling frames (Cobanoglu, Warde & Moreo, 2001).


References

Chan-Olmsted, S., Rim, H., & Zerba, A. (2013). Mobile news adoption among young adults: Examining the roles of perceptions, news consumption, and media usage. Journalism & Mass Communication Quarterly, 90(1), 126-147.

Cobanoglu, C., Warde, B., & Moreo, P. J. (2001). A comparison of mail, fax and web-based survey methods. International Journal of Market Research, 43(4), 441-452.

Van Selm, M., & Jankowski, N. W. (2006). Conducting online surveys. Quality & Quantity, 40(3), 435-456.