Pre-Trial Test Analysis Essay Example


Pre-Trial Test Analysis

Relationship between the purpose and the content

The purpose of the pre-trial test was to ensure that the learners acquired the skills needed to construct future classroom tests, tests that would measure learners' linguistic competence with regard to language and grammar proficiency as well as the four language skills. The content of the test, in turn, comprised five sections designed to achieve this aim: reading, listening, grammar, writing, and speaking. As such, the content aimed at evaluating the learners' competence in the four language skills while ensuring that they grasped the writing concepts needed to construct similar tests in the future. This shows that the content of the test was well designed to help the learners achieve its purpose.

Suitability of the content and level of the test for the test takers

The content was sourced from real-life scenarios applicable to both native and non-native speakers. In addition, the instructions were clear and concise, which eliminated the chance of misunderstanding; this was further helped by the inclusion of graphical aids (videos and pictures) within the content. Regarding the level of the test, although the content was not designed for the subjects' age group, it could easily be applied to their experience. As such, both the content and the level of the test were suitable for the intended subjects.

Adjustments made to the test

Since the test was originally designed for younger subjects, minor changes had to be made to counter the subjects' subjectivity. At their age, they were more subjective about the English language (with regard to historical interpretations). However, accent and dialect (Irish and Patois, respectively) were used to reflect the specifications required by the test. This adjustment was made to encourage objectivity in the subjects' articulations.

Administration arrangement

The administration arrangement was successful because the quality of the individual test items was very distinctive and clear. The topics addressed were sourced from universal themes that applied to the tested subjects' experiences.


To a large extent, the instructions were very clear and concise. However, the only problem emanated from the lexicon defining the range of vocabulary to be used. For example, the Irish subject did not fully grasp the Zulu text due to his attitude; as such, he viewed the text as foreign. On the other hand, the Jamaican subject had difficulty describing his wedding ceremony concisely because the required level of spoken English differed from that spoken in Jamaica. Consequently, no changes were made to the initial instructions, because they applied to the test as given.

Quality of individual test items (e.g. ambiguous or not)

The individual test items were definite and distinctively designed to meet the aims of the test. For example, the texts selected were suitably relevant for the subjects concerned, as they drew on universal themes. The language and subject matter were not aimed at their age group but could still be placed within their experiences. The items' authenticity was apt, as most of the content was sourced from the real, everyday lives of both native and non-native speakers.

In addition, the degree of linguistic difficulty for both the written and spoken texts was balanced: simple, short sentences were used rather than long, complex sentences, which make frequent use of the passive voice. Moreover, by providing visual support through pictures and video in the listening test, the text was made easier to understand, and the candidates could achieve a higher rate of success than when visual support was absent. As such, the test items were not ambiguous; rather, they were well designed to help the learners get the best from the test. On the whole, none of the items produced serious problems or unexpected results. I realized that each subject had a unique way of understanding and decoding the content, which facilitated their ability to answer the questions as expected.


The only problems cited were the use of vocabulary, which affected the Jamaican subject during the speaking section, and the writing section, which proved hard for the Irish subject due to his attitude towards the Zulu text. However, what they lacked in these sections was compensated for in the other sections, where they showed exemplary understanding.

Intended target

The test was intended for a younger audience who would answer the questions objectively. My subjects were a middle-aged Irish woman with an O-Level education from her native Belfast high school and a 50-year-old Jamaican immigrant who had lived in London for the better part of his life. As such, they were not the best candidates for the test, and they seemed subjective in their responses. For example, they formed a subjective attitude towards English (the colonial master-and-slave language politics put them off) even though their level was intermediate.


The test did give a balanced view of the subjects' skills. For example, in the listening section of the test, the subjects were tested on their ability to show proficiency, recognize speech sounds, and hold a transitory "imprint" of them in short-term memory long enough to decode a plausible interpretation of the message and assign a literal and intended meaning to the utterance. This was done without problems. The Jamaican subject took notes, while the native speaker used her own language experience to store the imprinted information from the audio tape. These sections showed the subjects' ability to offset their weaknesses with their strengths.

Scoring system

The keys, marking schemes, and rating scales were clearly objective, and mark allocations were provided at the end of the questions. Where there was more than one possible answer, all possibilities were included in the key (Fulcher, n.d.). This made the scores easy to attain, since all answers were correct as long as they fell within the set margins. In addition, the total mark was awarded as a percentage: the marks scored across the tested skills were taken as a ratio of the total marks possible in the English language test. The maximum marks were as follows:

Reading: out of a possible 20 marks

Speaking: out of a possible 20 marks

Listening: out of a possible 20 marks

Writing: out of a possible 20 marks

Grammar: out of a possible 10 marks

This grading rubric made the scores reliable because each mark depended on how well the subject performed in each section.
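The rubric above can be sketched as a short calculation: per-section marks are summed and expressed as a percentage of the 90 marks available across the five sections. This is a minimal illustration of the scheme described, assuming hypothetical sample scores; only the section maxima come from the rubric itself.

```python
# Minimal sketch of the scoring scheme described above.
# Section maxima come from the rubric; the sample scores
# below are hypothetical, for illustration only.

MAX_MARKS = {
    "reading": 20,
    "speaking": 20,
    "listening": 20,
    "writing": 20,
    "grammar": 10,
}

def final_percentage(scores):
    """Accumulate per-section marks and convert the total to a
    percentage of the total possible marks (90)."""
    total_possible = sum(MAX_MARKS.values())
    total_scored = sum(scores.values())
    return 100.0 * total_scored / total_possible

# Hypothetical subject: stronger in listening/reading, weaker in speaking.
sample = {"reading": 16, "speaking": 10, "listening": 17,
          "writing": 14, "grammar": 8}
print(round(final_percentage(sample), 1))  # 72.2 (i.e. 65 of 90 marks)
```

Because each section is scored independently before the totals are combined, a weak performance in one skill (as with the speaking section here) lowers the overall percentage without obscuring the per-section picture.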

Scoring decisions

This was made easy by the marking criteria followed. Each section had its own marks, which were accumulated to form the final score. As such, each section was graded independently according to how the subject performed. This made the decision-making process easier than marking the whole test as a single exam.

Result expectations

The results from the test were as expected. The content, together with the easy instructions, facilitated the right articulation in each section. In addition, by using relevant scenarios, the test appealed to the subjects' experience, thereby enabling them to draw on that experience in answering the questions. The only problem was in the speaking section, which was mainly attributed to accent variation and vocabulary usage that differed from what was expected.

Success factor

To a large extent, the test did achieve its aim. Again, this is attributed to the test design, which aimed at testing all the concepts needed to guarantee linguistic competence and knowledge of the four language skills. However, because the answers were multifaceted, the results were not as precise as required. Instead, I believe it would have been better if the answers had been standardized so as to measure the subjects' language proficiency on a level playing field.


A diagnostic listening test should be taken alongside or prior to the main test to ascertain a student's listening aptitude. Such a test may assess more than just recognition of spoken words: it may be marked according to the exact words (including exact punctuation and spelling), the accuracy of certain phrases, or simply according to meaning.
After marking and analyzing the results and outlining both the pros and cons of the outcome, I constructed tips for writing effective multiple-choice and true-false questions. Further, since most candidates could not differentiate norm-referenced competitive tests from criterion-referenced mastery tests, I created criterion-referenced mastery tests to help me determine the most appropriate purpose for our assessment. My diagnosis was also that the language test did not cater for revision, which meant I had to develop better assessments. Finally, it is important to draw on educational communication and technology services, which support faculty assessment with various tools for developing online or in-class tests.


Fulcher, G. A. (n.d.). A data-based approach to rating scale construction (p. 10). University of Surrey. Retrieved 27 May, 2011 from