Single-Correct Answer (SCA) and Multiple-Correct Answer (MCA) in Multiple-Choice Computer Assisted Language Testing (CALT) Program

<Click here to download full paper>

Single-Correct Answer (SCA) and Multiple-Correct Answer (MCA) in Multiple-Choice Computer Assisted Language Testing (CALT) Program

Herri MULYONOa*, Gunawan SURYOPUTROa & Tri Wintolo APOKOa

aUniversity of Muhammadiyah Prof. DR. HAMKA (UHAMKA), Indonesia

*hmulyono@uhamka.ac.id

Computers have been widely used to assess language proficiency (Coniam, 2006; Dunkel, 1991; Lee, 2004; Y. Sawaki, Stricker, & Oranje, 2009). In promoting the benefits of computers for language testing, several studies (e.g. Choi, Kim, & Boo, 2003; Coniam, 2006; Lee, 2004; Sawaki, 2001) have evaluated computer-based tests in comparison with conventional paper-based tests. Coniam (2006) compared computer-based and paper-based English listening tests. His evaluation of 115 Grade 11 and 12 students from two schools who took both modes of the test showed that the students performed better on the computer-based test than on the conventional one. In Choi et al.’s (2003) comparative study of a paper-based language test and a computer-based test at five universities in Korea, the reading section of the computer-based test received the weakest support. Choi et al. (2003) believe that eye fatigue may be one factor harming students’ concentration while reading passages on the computer. This corresponds to the suggestion offered by Bridgeman and Rock (1993) that computer-based tests need to pay attention to the length of the instructions given.

Some authors view open-ended questions (e.g. essays) as offering wider room for measuring cognitive processes as well as behavior (e.g. Bennett et al., 1990; Birenbaum & Tatsuoka, 1987; Brown, 2004). Although the multiple-choice format is often viewed as inferior in exploring students’ problem-solving ability and as constraining teachers from retrieving much information from test takers (Birenbaum & Tatsuoka, 1987), it may be seen as an alternative that is both less stressful for students and more practical. Within the multiple-choice format, students are supported by the answer options provided for each item (Cheng, 2004) and can receive direct feedback, as the format lends itself to practical scoring (see Birenbaum & Tatsuoka, 1987; Bridgeman & Rock, 1993).

In a study conducted by Coniam (1999), students preferred the multiple-choice test delivered in a computer-based program to its paper-based counterpart. Coniam (1999) argues that this preference derives from the simplicity of the program, such as clicking an answer option in a multiple-choice application. In addition, Cheng (2004) showed that students preferred the multiple-choice test to both the multiple-choice cloze and the open-ended tests, and argues that this preference is driven mainly by the stimuli available in the multiple-choice format. Moreover, the answer options available in multiple-choice tests clearly invite guessing. However, it remains unclear whether the scoring methods applied in the three test formats may also contribute to students’ preferences.

In the Indonesian context, computers have been widely used to facilitate language testing in evaluating school teachers’ competence. For this purpose, the Indonesian government has developed a web-based application for online teacher competency testing, accessible at http://www.ukg.web.id. Although teachers have gained considerable exposure to and experience with this online testing, they show little interest in using computers to evaluate their students’ language proficiency. There are two indications of this reluctance to use computers for testing in the Indonesian secondary school context: 1) teachers’ inadequate knowledge of the testing principles applied in computer-based testing, and 2) teachers’ inability to design computer-based tests for classroom use. This paper describes the principles applied in designing SCA and MCA CALT programs for assessing secondary school students’ grammar proficiency. It also discusses students’ preferences regarding the SCA and MCA CALT programs they experienced.
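The core design difference between SCA and MCA items lies in how responses are scored. The paper does not specify its scoring rules, so the sketch below is only an illustration of common options: dichotomous scoring for SCA items, and either all-or-nothing or per-option partial-credit scoring for MCA items. All function names are hypothetical.

```python
def score_sca(selected, key):
    """Dichotomous scoring for a single-correct-answer item:
    1 point if the selected option matches the key, else 0."""
    return 1 if selected == key else 0

def score_mca_exact(selected, key):
    """All-or-nothing scoring for a multiple-correct-answer item:
    1 point only if the selected options exactly match the key set."""
    return 1 if set(selected) == set(key) else 0

def score_mca_partial(selected, key, n_options):
    """Partial-credit scoring for an MCA item: each option judged
    correctly (selected when keyed, or left unselected when not
    keyed) earns 1/n_options of a point."""
    selected, key = set(selected), set(key)
    correct = sum(
        1 for opt in range(n_options)
        if (opt in selected) == (opt in key)
    )
    return correct / n_options
```

Which MCA rule a program adopts matters for test takers: all-or-nothing scoring penalizes any omission or over-selection, while partial credit rewards partially correct response patterns.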


Creating native-like but comprehensible listening texts for EFL learners using NaturalReader

<Click here to download full paper>

Creating native-like but comprehensible listening texts for EFL learners using NaturalReader

A media review by:

Herri Mulyono

“Native English speakers are often thought to bring benefits to English as a foreign language (EFL) classrooms. The native speaker is often called upon to answer vocabulary and pronunciation issues from non-native speakers (Medgyes, 1994). Within this perspective, the native speaker is believed to promote the best model for language users (see Carless, 2006; Lasagabaster & Sierra, 2002) and may encourage extrinsic motivation for EFL learners (Carless, 2006; Harmer, 2007), particularly in listening sessions. However, many EFL learners encounter difficulty in comprehending the speech of native speakers. Speech rate is believed to be one of the factors leading to such problems (see Griffiths, 1991; Hirai, 1999).

Text-to-speech (TTS) technologies, which allow users to “make the computer talk” by transforming text input into speech, offer one way to control the speed of the input learners receive (Handley, 2009, p. 906). Although speech synthesis was originally developed for people with visual impairments (Kilickaya, 2006), some teachers have begun to adopt TTS technology in foreign language classrooms. Handley (2009) states that integration of TTS within the computer-assisted language learning (CALL) environment may involve three different roles: reading machine, pronunciation model, and dialogue partner. In reference to these roles, TTS technology offers increased opportunities for EFL learners to access the target language with a native-like, but accessible model.

NaturalReader, developed by NaturalSoft Ltd in Canada, is TTS software that produces natural-sounding speech from text input. With its supplementary add-in and floating-bar features, the software can not only carry out text-to-speech conversion from MS Office documents, PDFs, webpages, and email, but can also convert these texts into audio files in MP3 or WAV format (NaturalReader, 2014). Version 12 of the software adds optical character recognition (OCR), which makes an even greater number and variety of texts available for TTS conversion.

This article describes the basic operational functionality and features of NaturalReader as a text-to-speech synthesis system. It will also discuss some ways that NaturalReader may be used to facilitate the provision of native-like, but comprehensible input to EFL learners.”

Click here to download