
Barriers are defined as hindrances that limit the optimal gathering of data in research (Funk, Tornquist, & Champagne, 1995). Notably, barriers to research can relate to the human subjects, the method or tools of research, or even the perceptions that respondents have towards the research. In the survey used in Appendix B, several barriers are possible, especially given the nature of the web-based survey instrument that was identified for use. The web-based survey was chosen for its ability to reach all sampled respondents quickly and cost-efficiently. As Gordon (2003) also notes, web-based surveys have the potential to improve sample validity, since large numbers of respondents have been found to complete such surveys. Moreover, it has been established that web-based surveys reduce sampling bias (Gordon, 2003).

The advantages of web-based surveys notwithstanding, several barriers emerge from the literature. First, the time needed to read and answer the questions correctly may become a barrier, especially for respondents who are pressed for time. As Funk et al. (1995) further note, some respondents may not understand the research’s value and, as such, may not give it the necessary attention. Others may not perceive the research as beneficial to themselves and may likewise pay it little attention. Another barrier likely to hinder the effectiveness of web-based surveys is that some respondents are isolated and hence lack access to colleagues with whom they could discuss the contents of the survey (Funk et al., 1995).

The absence of sufficient awareness about the research among respondents may also act as a barrier. Funk et al. (1995) specifically note that when respondents are not aware of the research or the intended use of its findings, they may not appreciate its value. Moreover, without proper orientation about the research, respondents may feel overwhelmed and may not give the research much attention.

Notably, the web survey is a self-report questionnaire that respondents access via the web. While the self-report approach enables respondents to give their perspectives on a subject, it is also disadvantageous to the researcher because of potential validity problems. As Barker, Pistrang and Elliot (2005) observe, for example, respondents may deceive others, and even themselves, when filling in a self-report questionnaire. Another barrier that emerges when self-report questionnaires are used relates to respondents providing details that do not contribute any value to the concept in which the researcher is interested. Such a barrier is especially likely in section 5 of Appendix B, where open-ended questions are used.

Regardless of the barriers identified herein, a web-based survey was chosen for this research for reasons of convenience related to cost, speed and the ability to reach a wider sample in Saudi Arabia. As indicated in the literature review section, Saudi universities have invested heavily in technology. However, the use of such technology in learning management systems (LMS) can only be established by understanding how the stakeholders (especially the teaching staff) use the technology that has been made available by the government. A web-based survey, essentially a self-report research tool in which respondents indicate their answers to several structured questions, is therefore considered an ideal method of information gathering.


Incentives refer to any offer (financial or otherwise) made to sampled respondents for the purpose of encouraging them to participate fully in the research (Guyll, Spoth & Redmond, 2003). It has been argued that, in addition to enhancing the response rate in a survey, incentives also improve the quality of responses provided (Harhoff, 1996). The quality-of-responses argument stems from the conviction that incentivised respondents understand that there is some value attached to the research questions and would therefore be keener when filling in the questionnaire.

The debate on whether incentives represent an undue, corruptive influence is widely discussed in the literature by authors including Titmuss (1997), Wilkinson and Moore (1997), McNeill (1997), Grant and Sugarman (2004), and Grant (2002), among others. Grant and Sugarman (2004), for example, note that incentives are considered unethical in some cases because they have some coercive power, which means that respondents are not as objective as they would otherwise be. In their own research, however, Grant and Sugarman (2004) found that incentives do not necessarily corrupt respondents’ judgement. Guyll et al. (2003) also indicated that in some cases, researchers offer incentives in recognition that responding to research costs the respondent, in terms of exposing personal details or spending time providing relevant data to the researcher.

With the preceding arguments about the ethical nature of incentives in mind, this research sought to incentivise the respondents by letting them participate in a research project that could have an impact on the future of e-learning in Saudi Arabia. The incentive was communicated to all respondents on the questionnaire’s cover page. This decision was inspired by McDowell et al.’s (2015) view that most respondents are happy to contribute to a better future and do not necessarily need to be rewarded or incentivised in any other way. Besides, some of them might be uncomfortable with prevailing situations related to the research topic, and are therefore convinced that their contribution to research will make the situation better.

As indicated in the literature review section, the intention to use LMS may be high among female academic staff in Saudi Arabia, but that does not necessarily equate to widespread use. External variables, for example the availability of technology, may lead to a situation where differences between the intention to use and the actual use of technology become evident (Park, 2009). The gap between user intentions and actual use therefore implies that users may have an interest in any research that may enhance the use of LMS in teaching. This argument supports the rationale for the non-monetary incentive used in this research. Specifically, the respondents too may have an interest in any developments registered in the research area and may find that interest inspiring enough to provide quality responses to the survey.

Perception of support and faculty desire to teach

Perception has varied definitions in the literature. Hill (2001), for example, defines it as “the process of interpreting and organising the environmental information received by the senses” (p. 124). For his part, Galotti (2009) defines perception as the “process that makes sensory patterns meaningful” (p. 5). Although different authors have used different words to define perception, several similarities emerge from the two definitions quoted above: first, that perception is a process, and second, that perception relates to how humans attach meaning to the information they receive through sight, hearing, taste, touch or smell. Griggs (2010) explains the subject further, indicating that perception depends on a person’s psychological processes, which involve emotion, motivation and memory. Applied in the context of this research, it is evident why perception would be expected to affect the faculty’s interactions with LMS systems and how they use such systems in teaching. In other words, the faculty’s perceptions of how they are supported to use LMS in teaching would most likely affect their desire to do so.

Perceptions are a critical consideration in the technology acceptance model (TAM) because, as Park (2009) observes, the perceived usefulness of technology and the perceived ease of use of the technology interact and are responsible for the attitude that people have toward technology. Moreover, perceived usefulness affects the intention to use technology and may in the end affect its actual use. If LMS is perceived as essential for learning and teaching, it is possible that faculty would desire it more. Desire is a psychological process in which one wishes to have something. According to Hsieh (2003), desire generates action, especially if coupled with reason. In other words, if faculty members desire to use LMS in education, the possibility of their using the learning systems in real life increases when they are convinced through reason that the use of LMS is good for teaching and learning. As indicated in the literature review section, the use of e-learning in Saudi Arabian universities is influenced by perceptions held by both learners and the teaching staff.

Open-ended questions and logic

As evident in Appendix B, the survey includes closed-ended questions in sections one to four and open-ended questions in section five. The open-ended questions are a departure from the survey instrument used by Al Balawi (2007). Closed-ended questions have been found to restrict respondents’ answers to what is provided in the questionnaire (Reja et al., 2003). Open-ended questions, on the other hand, have been found effective in gathering spontaneous information from respondents (Reja et al., 2003). Additionally, it has been established that open-ended questions do not have the bias that usually surfaces when responses are suggested to sampled respondents.

Notably, the questions that the researcher chooses to use affect the quality of data obtained in research. As Reja et al. (2003) note, there is a possibility of the researcher discovering new and useful information from open-ended questions that he had not even envisaged. However, open-ended questions also come with the added responsibility of extensive coding. A compromise between closed- and open-ended questions has, however, been advocated by Reja et al. (2003): if, for example, there are deviant responses to the closed-ended questions used in the survey, the researcher can use open-ended questions to explore them. As is the case in the web-survey questions used for this research, the researcher can also identify open-ended questions that will be used to support or verify the data obtained from the closed-ended responses.

As suggested in the technology acceptance model (TAM), which is described in detail elsewhere in this paper, people have different intentions for accepting technology. Open-ended questions therefore give respondents a platform for making their intentions for technology acceptance known to the researcher. Using such intentions, and in line with TAM, the researcher can then predict technology usage intentions, as indicated by Leong and Huang (2002). Notably, the three open-ended questions added in section 5 of Appendix B are meant to gather data about respondents’ opinions regarding Blackboard. The responses are meant to provide the researcher with qualitative data, which will be essential for supporting the quantitative data gathered using the closed-ended questions.


Al Balawi, M.S. (2007). Critical factors related to the implementation of web-based instruction by higher-education faculty at three universities in the Kingdom of Saudi Arabia. (Unpublished PhD thesis). University of West Florida, Pensacola, FL.

Barker, C., Pistrang, N., & Elliot, R. (2005). Research methods in clinical psychology: An introduction for students and practitioners (2nd ed.). London: Wiley.

Funk, S., Tornquist, E., & Champagne, M. (1995). Barriers and facilitators of research and utilization: an integrative review. Nursing Clinics of North America, 30(3), 395-407.

Galotti, K. M. (2009). Cognitive psychology: In and out of the laboratory. Stamford, CT: Cengage Learning.

Gordon, S. (2003). Computing information technology: the human side. London: Idea Group Inc.

Grant, R. (2002). The ethics of incentives: Historical origins and contemporary understandings. Economics and Philosophy, 18, 111–139.

Grant, R., & Sugarman, J. (2004). Ethics in human subject research: do incentives matter? Journal of Medicine and Philosophy, 29(6), 717-738.

Griggs, R. A. (2010). Psychology: A concise introduction (3rd ed.). New York: Worth Publishers.

Guyll, M., Spoth, R., & Redmond, C. (2003). The effects of incentives and research requirements on participation rates for a community-based preventive intervention research study. The Journal of Primary Prevention, 24(1), 25-41.

Harhoff, D. (1996). Strategic spillovers and incentives for research and development. Management Science, 42(6), 907-925.

Hill, G. (2001). A level psychology through diagrams. Oxford: Oxford University Press.

Hsieh, D. (2003). Desire, reason and action. Retrieved from

Leong, L., & Huang, S. Y. (2002). Is TAM still valid? A test of the Technology Acceptance Model in software usage. In M. Khosrow-Pour (Ed.), Issues & Trends of Information Technology Management in Contemporary Organisations (pp. 436-439). Hershey PA: Idea Group Publishing.

McDowell, G., Gunsalus, K., MacKellar, D., Mazilli, S., … & Matozzi, M. (2015). Shaping the future of research: A perspective from junior scientists. Future of Research (FOR) Symposium, 1(3).

McNeill, P. (1997). A response to Wilkinson and Moore: Paying people to participate in research: Why not? Bioethics, 11, 390–396.

Park, S.Y. (2009). An analysis of the Technology Acceptance Model in understanding university students’ behavioral intention to use e-Learning. Educational Technology & Society, 12 (3), 150–162.

Reja, U., Manfreda, K., Hlebec, V., & Vehovar, V. (2003). Open-ended vs. close-ended questions in web questionnaires, Developments in Applied Statistics, 19, 1-19.

Titmuss, R. (1997). The gift relationship. New York: The New Press.

Wilkinson, M., & Moore, A. (1997). Inducement in research. Bioethics, 11, 373–389.