Purdue Language and Cultural Exchange (PLaCE)

Research Studies

Research Studies on ACE-In Testing and the PLaCE Program.


Alamyar, M. & Bush, H. (2018, March). Writing for broader digital audiences. Electronic Village Event, Teaching English to Speakers of Other Languages (TESOL), Chicago, IL.

Abstract:

For this project, students work in pairs to take a position on one of several broad topics and then create a website that presents their argument paper, showcasing their writing to a wider audience. In a 10-minute demonstration, I will introduce the overall project along with the criteria for both writing an argumentative paper and creating a website with Weebly, a site-building platform. I will then take 15 minutes to demonstrate how to create the website, place the content in it, and use various apps and websites to edit the site, its content, images, charts, graphs, etc.

 

Allen, M., Ginther, A., & Pimenova, N. (2018, March). Building a professional learning community in an ESL program. TESOL 2018 conference, Chicago, IL.

Abstract:

Most language programs strive to provide high-quality instruction with limited resources and competing demands. This presentation emphasizes individual team members as the most valuable resource for ESL programs and demonstrates why and how a professional learning community can help a program be much more than the sum of its parts.

 

Allen, M. (2017, March). The ever-continuing evolution of a favorite intercultural exercise: Growing theoretical legs for the DAE framework and covering new ground. Paper presented at Purdue Languages and Culture Conference, West Lafayette, IN.

Abstract:

This presentation maps a popular intercultural exercise onto established learning models to create a more robust pedagogical framework. Intercultural competence is an important part of many foreign and second language programs, and it is of particular importance for university students who are studying abroad or in institutions with large populations of international students. The point of departure for the current project is recent work by Nam and Condon (2010) to refine a long-standing instructional model known as Describe-Interpret-Evaluate (or D.I.E.; Bennett, Bennett, & Stillings, 1977). Nam and Condon’s iteration—the DAE framework (Describe-Analyze-Evaluate)—offers several refinements to D.I.E. but stops short of providing a solid basis in learning theory. Here, I propose a way to push the DAE model forward by mapping it to several influential principles of learning theory, namely Bloom’s taxonomy of educational objectives (Krathwohl, 2002) and Kolb’s (1984) experiential learning theory. To help attendees visualize how this works in practice, I show the DAE in action, so to speak, by means of an original visual model and examples from student work. Use of this enhanced DAE framework can lead students to engage in what is perhaps the most important learning outcome—creation of knowledge—through teacher-facilitated cycles of experiential learning that include classroom instruction, independent practice, reflection, and assessment.

 

Allen, M. (2016). Measuring Silent and Oral Reading Rates for Adult L2 Readers and Developing ESL Reading Fluency through Assisted Repeated Reading. Unpublished PhD dissertation. Purdue University.

Abstract:

The purpose of this study was to investigate whether assisted repeated reading is an effective way for adult second language (L2) learners of English to develop oral and silent reading fluency rates. Reading fluency is an underdeveloped construct in second language studies, both in research and practice. This study first lays out a framework of text difficulty levels and reading rate thresholds for intermediate and advanced L2 readers of English, based upon a theoretical framework of automatization of the linguistic elements of reading through structured practice and skill development. This framework was then implemented through a single-case design (SCD), an experimental method that is appropriate for testing the effectiveness of behavioral and educational interventions with individual participants. Data were collected for several measures related to fluency, including oral and silent reading rates, for a small group of L2 learners in a U.S. university setting. The focus of the analysis is participants’ fluency development as they used a computer-based assisted repeated reading program called Read Naturally. The analysis concentrates on the case of an adult Chinese-speaking learner of English (pseudonym: Hong Lin), presenting a longitudinal analysis of her progress through six months of continual practice and assessment. Notable results for Hong Lin include increased rates of oral reading (from 94 to 123 wpm) and silent reading (from 148 to 189 wpm) on a variety of comparable passages of unpracticed, advanced-level prose.

 

Allen, M. (2016, October). Measuring ESL reading fluency development with assisted repeated reading. Poster presented at the 18th annual conference of Midwest Association of Language Testers (MwALT), West Lafayette, IN.

Abstract:

Much remains unknown about how to define, measure, and develop reading fluency for ESL students at different proficiency levels (Anderson, 1999; Grabe, 2009, 2014; Lems, 2012; Taguchi & Gorsuch, 2012). These practical and theoretical issues are addressed in this presentation of findings from a single-case design (ABAB) study with a small number of participants (n=12 university students). Two research questions are addressed: (1) What are participants’ silent and oral reading rates? (2) Does use of an audio-assisted repeated reading program contribute to increased reading rates? In this study, reading fluency is operationalized as the number of words per minute read by participants, and is supported by several other measures to indicate participants’ accuracy and comprehension. Empirical data for selected participants will be displayed in graphs showing their (1) recognition vocabulary; (2) baseline measurements of silent and oral reading rates at multiple points in time; and (3) progress in the repeated reading intervention. The graphs will show how silent and oral reading rates compare within and across participants, and the extent to which the reading intervention (IV) increased participants’ reading fluency rates (i.e., led to a stable change in the DVs). The use of graphic displays to visualize quantitative data is a hallmark of single-case designs, and the method of careful visual analysis of data to identify trends translates well to a poster session. This study advances our understanding of L2 reading fluency, with implications for assessment, curriculum and instruction, and student motivation.
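
To make the operationalization concrete, here is a minimal Python sketch of the words-per-minute calculation; the passage length, timing value, and function name are hypothetical illustrations, not data or code from the study:

    # Minimal sketch: reading rate in words per minute (wpm).
    # The passage length and timing below are hypothetical illustrations.

    def words_per_minute(word_count: int, seconds: float) -> float:
        """Words read divided by elapsed time in minutes."""
        return word_count / (seconds / 60.0)

    passage_words = 450           # length of an unpracticed passage (hypothetical)
    silent_reading_seconds = 180  # time taken to read it silently (hypothetical)

    print(round(words_per_minute(passage_words, silent_reading_seconds)))  # 150 wpm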

 

Allen, M., & Cheng, L. (2016, April). Measuring silent and oral reading rates for adult EAP students and developing ESL reading fluency through audio-assisted repeated reading. Poster presented at the annual conference of American Association for Applied Linguistics (AAAL), Orlando, FL.

Abstract:

Extensive research on reading fluency has been conducted with first language (L1) speakers of English, and fluency has come to be viewed as a crucial element in reading achievement (National Reading Panel, 2000; Samuels, 2002). By comparison, reading fluency has received limited attention in second language (L2) research (for this study, the L2 context is ESL). Much remains unknown about how to define, measure, and develop reading fluency for ESL students at different proficiency levels (Anderson, 1999; Grabe, 2009, 2014; Lems, 2012; Taguchi & Gorsuch, 2012). These practical and theoretical issues are addressed in this presentation of findings from a single-case design (ABAB) study with a small number of participants (n=12 university students).

Two research questions are addressed: (1) What are participants’ silent and oral reading rates? (2) Does use of an audio-assisted repeated reading program contribute to increased reading rates? In this study, reading fluency is operationalized as the number of words per minute read by participants, and is supported by several other measures to indicate participants’ accuracy and comprehension.

Empirical data for selected participants will be displayed in graphs showing their (1) recognition vocabulary; (2) baseline measurements of silent and oral reading rates at multiple points in time; and (3) progress in the repeated reading intervention. The graphs will show how silent and oral reading rates compare within and across participants, and the extent to which the reading intervention (IV) increased participants’ reading fluency rates (i.e., led to a stable change in the DVs). The use of graphic displays to visualize quantitative data is a hallmark of single-case designs, and the method of careful visual analysis of data to identify trends translates well to a poster session.

This study advances our understanding of L2 reading fluency, with implications for assessment, curriculum and instruction, and student motivation.

 

Allen, M., Cheng, L., & Fehrman, S. (2015, September). Administration + Assessment + Curriculum Design + Teaching = Purdue Language and Cultural Exchange. Invited program presentation at the ESL Speaker Series event for faculty and graduate students from Second Language Studies, Linguistics, and Applied Linguistics at Purdue University, West Lafayette, IN.

Abstract:

Three staff members from PLaCE—the Purdue Language and Cultural Exchange program—will discuss their experience of developing an English language support program at the university. We will discuss our respective roles in administration, assessment, and teaching, with a focus on how these roles overlap and interact. Other topics to be addressed include curriculum design, research, professional development, and collaboration across campus with various individuals, groups, and departments. After providing information about the PLaCE program to this point and some of our goals for the future, we plan to leave ample time for discussion with audience members.

 

Allen, M., Fehrman, S., & Bush, H. (2016, October). Using the DAE framework in a language and culture course for international students. Paper presented at the annual Symposium on Second Language Writing (SSLW), Tempe, AZ. 

Abstract:

College writing is complex and challenging in a second language and new cultural context. Students and teachers need support. We show how we integrate Nam and Condon’s (2010) DAE framework into EAP course activities, focusing on writing at different stages of the learning process.

 

Baechle, J., & Climer, T. (2014, November). Navigating cultural and academic differences: How international students find their place in university life. Paper presented at the annual conference of Indiana Teachers of English to Speakers of Other Languages (INTESOL), Indianapolis, IN.

Abstract:

The goal of our presentation is to show what we are doing at PLaCE, a new program at Purdue University that aims to connect first-year international students to the campus and community. In addition, we hope to provide practical teaching assignments that other programs can incorporate into their coursework.

 

Bush, H., Farner, N., & Allen, M. (2017, November). Advocating the arts in EAP classes. Indiana Teaching English to Speakers of Other Languages (INTESOL) Conference, Indianapolis, IN.

Abstract:

Postsecondary EAP programs typically focus on ESL students’ communicative language abilities for their academic work – often with a narrow interpretation of content that excludes the Arts. However, in order for students to engage and succeed in real-world intercultural and academic contexts, EAP instructors must provide engaging learning experiences so that students can develop both critical and creative thinking skills. Specifically, students must learn to identify the assumptions behind their thoughts and behaviors, and to view ideas and experiences from multiple perspectives.  

This presentation advocates for the use of the Arts in EAP classes as an innovative and stimulating way to accomplish these outcomes with students. After giving a brief overview of the learning principles guiding their pedagogy,  the presenters will guide participants through several Arts-based learning activities that they have used successfully with their students. The presenters will illustrate several ways to bring the Arts to students through technology and to take students to the Arts outside of the classroom. From this session, participants will learn to (1) design authentic learning experiences with the Arts for their students, and (2) develop communicative activities that combine critical thinking skills and creative thinking processes.  

 

Cheng, L., & Allen, M. (2016, May). Timed oral reading: A useful method for second-language reading fluency assessment and intervention. Paper presented at the 3rd annual conference of Asian Association for Language Assessment (AALA), Sanur, Bali, Indonesia.

Abstract:

Timed oral reading is a widely used method to assess reading fluency in first-language (L1) instructional contexts. Reading fluency is typically operationalized as the ability to read aloud accurately, with appropriate expression (prosody, phrasing), and at a speed such that the oral reading aligns with the meaning of the text (Rasinski, 2004). Fluency is key because it provides evidence that the reader understands what is being read (National Reading Panel, 2000). By contrast, in second-language (L2) instructional contexts, the development of reading fluency and the use of timed oral reading for assessment (or instruction) is almost nonexistent (Anderson, 1991, 1994; Grabe, 2014; Chang, 2010, 2012; Taguchi, Gorsuch, Takayasu-Maass & Snipp, 2012).

This study investigated the efficacy of timed oral reading as a learning-oriented assessment method for evaluating and improving L2 reading fluency. A pre- and post-test was given to 77 international graduate and undergraduate students enrolled in an English for Academic Purposes (EAP) course at a large public Research I university in the Midwestern United States. Read Naturally®, a web-based program for improving L1 English reading fluency, was used as the intervention tool with our ESL group, and eight texts at the Grade 6 L1 English level were assigned as homework and used as materials or topics for teacher-student conferences. At the end of the thirteen-week intervention, this group of English as a Second Language (ESL) learners showed a statistically significant improvement, with a very large effect size, in reading speed: an increase from 131 correct words per minute (cwpm) to 144 cwpm. When the study design was replicated the following semester with a different group of 92 adult ESL learners, using eight texts at the Grade 8 L1 English level, the average pre-test oral reading rate of 106 correct words per minute increased to a post-test rate of 121 cwpm, as compared to L1 adult speakers of English, whose rates range from 150-200 words per minute.

In addition to a report of this study and its replication over two semesters, this paper presentation will also be devoted to a discussion of why we are interested in oral reading fluency and why we felt it would be a useful, learning-oriented assessment tool for matriculated EAP students at this U.S. university.
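
To make the cwpm metric concrete, here is a minimal Python sketch of correct words per minute as commonly computed in timed oral reading; the counts, timing, and function name are hypothetical illustrations, not data or code from the study:

    # Minimal sketch: correct words per minute (cwpm) from a timed oral reading.
    # The counts and timing are hypothetical illustrations.

    def cwpm(words_read: int, errors: int, seconds: float) -> float:
        """Correct words per minute: (words read - errors) / minutes elapsed."""
        return (words_read - errors) / (seconds / 60.0)

    # e.g., a reader covering 120 words in 60 seconds with 5 errors
    print(round(cwpm(words_read=120, errors=5, seconds=60)))  # 115 cwpm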

 

Cheng, L., & Allen, M. (2016, March). Timed oral reading: A useful method for L2 reading fluency assessment and intervention. Paper presented at the annual Georgetown University Round Table (GURT) on Languages and Linguistics, Washington, DC.

Abstract:

Timed oral reading is a widely used method to assess reading fluency in L1 instructional contexts; fluency is typically operationalized as the ability to read aloud accurately, with appropriate expression (prosody, phrasing), and at a speed such that the oral reading aligns with the meaning of the text (Rasinski, 2004). Fluency is key because it provides evidence that the reader understands what is being read (National Reading Panel, 2000). By contrast, in L2 instructional contexts, the development of reading fluency and the use of timed oral reading for assessment (or instruction) is almost nonexistent (Anderson, 1991, 1994; Grabe, 2014; Chang, 2010, 2012; Taguchi, Gorsuch, Takayasu-Maass & Snipp, 2012).

This study investigates the efficacy of timed oral reading as both an assessment and intervention method for evaluating and improving L2 reading fluency. A pre- and post-test was given to 77 international graduate and undergraduate students enrolled in an English for Academic Purposes (EAP) course at a large public Research I university in the Midwestern United States. Read Naturally®, a web-based program for improving L1 English reading fluency, was used as the intervention tool with our ESL group, and eight texts at the Grade 6 L1 English level were assigned as homework and used as materials or topics for teacher-student conferences. At the end of the thirteen-week intervention, this group of ESL learners showed a statistically significant improvement, with a very large effect size, in reading speed: an increase from 131 words per minute (wpm) to 144 wpm.

Our presentation will be devoted to a discussion of why we are interested in oral reading fluency and why we felt it would be a useful tool for assessing and providing intervention to matriculated EAP students.

 

Cheng, L., Ginther, A., & Allen, M. (2015, October). The development of an essay rating scale for a post-entry English proficiency test. Paper presented at the 17th annual conference of Midwest Association of Language Testers (MwALT), Iowa City, IA.

Abstract:

The dramatic increase in enrollment of international undergraduate students at U.S. universities not only reflects a national trend of shifting undergraduate demographics but also highlights the need for effective evaluation of newly admitted international students’ English language proficiency. To better inform language instruction in an English for Academic Purposes program at a large public university, an internet-based post-entry English proficiency test, the Assessment of College English-International (ACE-In), was developed. This presentation focuses on the development of an empirically derived rating scale for the writing assessment included in the ACE-In.

Drawing on the literature of L2 rating scale development (e.g., Fulcher, Davidson, & Kemp, 2011; Upshur & Turner, 1995), we began by analyzing a sample (n=42) of first-semester international students’ ACE-In essays to identify the categories and elements (i.e., constructs and variables) present and emerging levels of performance.  A series of rating and discussion sessions were iteratively conducted with 33 additional essay samples until agreed-upon descriptors were established and an acceptable level of inter-rater reliability reached.  These rater norming sessions not only served the purpose of developing and refining an essay rating scale, but also helped to build a community of practice by providing a venue for raters to share what they value as writing instructors (Kauper, 2013).

This presentation provides a practical example of developing an empirically derived rating scale for a timed writing assessment.  With the emphasis on instructor values, we provide a model for creating effective communities of practice through rating scale development. 

 

Cheng, L., & Song, S. (2014, November). Student needs analysis for an EAP support program at a large Midwestern public university. Paper presented at the annual conference of Indiana Teachers of English to Speakers of Other Languages (INTESOL), Indianapolis, IN.

Abstract:

This presentation focuses on the use of an early-semester student survey for needs analysis in an integrated-skills EAP course offered by a brand-new language bridge program. Survey responses suggest that specific language tasks were perceived as most difficult. Curricular and co-curricular components targeting these areas will then be presented.

 

Crouch, D. (2018, March). Longitudinal development of second language fluency in writing and speaking. Paper to be presented at the annual conference of American Association for Applied Linguistics (AAAL), Chicago, IL.

Abstract:

Little is known about how complexity and fluency develop together within individual L2 learners. This study analyzed the longitudinal development of oral fluency and global written syntactic complexity in the test responses of 60 L1-Chinese first-year undergraduate students over two semesters. The author collected responses to a post-entry computer-administered language proficiency test required of all first-year international students with TOEFL scores at 100 or below at a large university in the US. The students took the test at the beginning of the first semester and again at the end of the second semester of a required two-course ESL sequence. For both the written and the spoken task, each student responded in support or opposition to a statement of opinion.

The author analyzed the written responses automatically to calculate mean length of sentence using Lu's (2010) L2 Syntactic Complexity Analyzer and the oral responses for speech rate (syllables/second) and mean syllables per run (syllables/run) using a proprietary system specially designed to measure oral fluency.     

Results showed that the test-takers increased their oral fluency significantly but not their global written syntactic complexity. When the mean syllables per run of the oral pre-tests (M=7.27, SD=2.24) was compared to that of the oral post-tests (M=7.88, SD=2.39), there was a statistically significant difference, t(59)=3.58, p=.001. In a paired-sample t-test, the speech rate of the oral pre-tests (M=2.90, SD=0.43) was compared to that of the oral post-tests (M=3.00, SD=0.51), and there was also a significant difference, t(59)=2.55, p=.014. Finally, a paired-sample t-test compared the mean length of sentence of the written pre-tests (M=19.44, SD=4.43) to that of the written post-tests (M=20.37, SD=4.80), showing no significant difference, t(59)=1.57, p=.123. The findings provide evidence that oral fluency and written syntactic complexity develop at different rates in college-level L1-Chinese L2 learners.
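
For readers who want to run this kind of comparison themselves, here is a minimal Python sketch of a paired-samples t-test using scipy; the pre/post values are randomly simulated around the reported means, not the study’s data:

    # Minimal sketch: paired-samples t-test on simulated pre/post fluency scores.
    # The data are randomly generated placeholders, not the study's measurements.

    import numpy as np
    from scipy.stats import ttest_rel

    rng = np.random.default_rng(0)
    pre = rng.normal(loc=7.27, scale=2.24, size=60)       # e.g., syllables per run
    post = pre + rng.normal(loc=0.6, scale=1.3, size=60)  # simulated gain

    t, p = ttest_rel(post, pre)
    print(f"t({len(pre) - 1}) = {t:.2f}, p = {p:.3f}")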

 

Crouch, D. Assessment of Productive English Language Proficiency. PhD dissertation in progress. Purdue University.

Abstract:

The study will analyze the "timed writing" responses and oral "express your opinion" responses of 100 ACE-In test-takers. It will examine the extent to which sentence length, the frequency of coordinate phrases, and the frequency of various types of complex nominals in the timed-writing responses; four measures of temporal oral fluency in the "express your opinion" oral responses; and word count and lexical diversity in the responses to both tasks have the potential to represent the range of proficiency levels in the ACE-In test-taker population. The study will also calculate the extent to which these measures correlate with one another, particularly across the two modalities of writing and speaking. The study will provide useful insights into the English language strengths and weaknesses of the ACE-In test-taker population and theoretical insights into how complexity and fluency relate to each other within the learner across the modalities of writing and speaking.
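
Two of the simpler measures named here, word count and lexical diversity, can be illustrated with a short Python sketch; the type-token ratio below is a naive diversity index used purely for illustration, and the dissertation itself may rely on more length-robust measures:

    # Minimal sketch: word count and a naive lexical diversity index
    # (type-token ratio) for a short response. Illustrative only.

    def word_count(text: str) -> int:
        return len(text.split())

    def type_token_ratio(text: str) -> float:
        tokens = [t.lower().strip(".,!?;:") for t in text.split()]
        return len(set(tokens)) / len(tokens) if tokens else 0.0

    sample = "I agree with the statement because the statement is reasonable."
    print(word_count(sample), round(type_token_ratio(sample), 2))  # 10 0.8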

 

Levy, M., & Rodriguez-Fuentes, R. (2015, May). Analysis of C-test Items from the ACE-In Test: A Preliminary Study of a Colombian Population at Purdue University. English 618 course paper. Purdue University.

Abstract:

Language placement tests are a common method to assess students upon arrival at American universities. ACE-In, a test developed at Purdue for incoming international undergraduate students, was administered in this study. Scores for C-test items (Module I) were analyzed for their reliability in measuring language performance in a population of 22 Colombian students at Purdue. Variation in item difficulty and item discrimination was also analyzed. A second instrument, a questionnaire on educational background, including specific English language instruction details, was also administered to the test population. The item difficulty index for the ACE-In ranged from 0.50 to 0.95, making this moderately difficult module a useful test for assessing a wide range of proficiency levels while maintaining consistency and item interdependence. Analysis of the item discrimination index showed a range from -0.10 (poor) to 0.60 (excellent). Cronbach’s alpha values varied little when items were removed one at a time, suggesting that sections are reliably constructed in a cohesive manner. Both instruments appeared to be promising in detecting variability in proficiency levels across a diverse population of Colombian participants. Overall performance on the ACE-In was good, with 14 participants scoring above 80%. There was a moderate positive correlation between ACE-In Module I scores and time spent at Purdue. The correlation between self-assessed language abilities and the scores was also positive but weak. On average, participants perceived that their language instruction had prepared them adequately for reading and listening, marginally for writing, and poorly for speaking. Although more research is needed, this study provided a good foundation for refining the instruments used to characterize Colombian students interested in pursuing either advanced degrees or research visits at Purdue. This research also served as a supporting element in the design of an intervention ESP course for this target audience.
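
For illustration, here is a minimal Python sketch of the two classical item statistics reported here, item difficulty (proportion correct) and a corrected item-total discrimination index; the tiny 0/1 score matrix is a hypothetical placeholder, not ACE-In data:

    # Minimal sketch: classical item difficulty and discrimination on a
    # dichotomously scored matrix (rows = examinees, columns = items).
    # The matrix is a hypothetical placeholder, not ACE-In responses.

    import numpy as np

    scores = np.array([
        [1, 1, 0, 1],
        [1, 0, 0, 1],
        [1, 1, 1, 1],
        [0, 0, 0, 1],
        [1, 1, 0, 0],
    ])

    difficulty = scores.mean(axis=0)  # proportion of examinees answering correctly

    # corrected item-total correlation: each item vs. the sum of the other items
    totals = scores.sum(axis=1)
    discrimination = [
        np.corrcoef(scores[:, j], totals - scores[:, j])[0, 1]
        for j in range(scores.shape[1])
    ]

    print(np.round(difficulty, 2), np.round(discrimination, 2))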

 

Li, X. An Item Analysis of Elicited Imitation Using Classical Test Theory (CTT). PhD dissertation in progress. Purdue University.

Abstract:

Elicited imitation (EI) was originally designed for first language (L1) development research, but since the 1970s it has also been widely used in the SLA field. In the late 1970s, EI underwent a series of critiques regarding its reliability and validity; the major criticism was the possibility of mere rote repetition in EI tasks. In recent years, interest in EI has resurged, along with an increasing number of empirical studies validating and refining EI tasks. An item analysis using Classical Test Theory (CTT) was conducted in this project. By examining item difficulty, item discrimination, and score reliability, this study explores how EI tasks function as a measure of general proficiency. The study analyzed 299 test samples from the Assessment of College English-International (ACE-In), a locally developed language test for post-entry international undergraduate students.

 

Li, Y. (2016, January). The Validity and Reliability of Grammar Quizzes for ESL Undergraduate Students. English 674 course paper. Purdue University.

Abstract:

This project conducts item analyses for GS 100 quizzes in the PLaCE program at Purdue University. The researcher performed statistical analyses of three GS 100 quizzes taken by over 200 participants. The data yield several statistical results, including reliability coefficients for the tests and item indices (difficulty and discrimination). The analysis uncovered two noticeable findings concerning differences in reliability coefficients among different forms, and among parts of a form, that share the same testing construct. These findings appear to be related to item writing and to the dichotomous (0/1) scoring format used in the item analysis. In addition, the researcher will discuss what the statistical results of this project fail to shed light on.

 

Rodriguez-Fuentes, R. The Impact of Linguistic and Cultural Factors on Graduate School Admissions: An Examination of Two Cohorts of Colombian Undergraduate Students. PhD dissertation in progress. Purdue University.

Abstract:

Increasing matriculation of Colombians to diversify Purdue graduate programs is one of the major goals of the Colombia-Purdue Initiative (CPI). The present study is an analysis of the language profiles of Colombian undergraduate students who had six-month internships at Purdue and investigates campus linguistic and cultural factors that influenced their experiences. Pre- and post-measures of the ACE-In will be studied in two groups (one with language instruction and one without it). Supporting information will be obtained from structured interviews and TOEFL-iBT scores of former CPI interns currently enrolled in graduate programs at Purdue. The outcomes of this project will contribute to improved understanding of language skills of prospective Colombian graduate students. It will also provide Colombian and Purdue stakeholders with information on the most effective support measures for student success during the six-month undergraduate experience, which may lead to the matriculation of these students into Purdue University graduate programs.

 

Shin, J., & Crouch, D. (2018, March). Fluencing: A reliable and simple tool to measure temporal measures of oral fluency. Paper to be presented at the annual Language Assessment Research Council (LARC), Ames, IA.

Abstract:

Temporal measures of oral fluency (TMOF) have been shown to be positively correlated with holistic scores of oral English proficiency (Ginther, Dimova, & Yang, 2010).

This finding has positive implications for the use of measured variables in assessing oral English proficiency. However, the main obstacle to the use of TMOF in language assessment is the difficulty of computing such measures accurately, reliably, and efficiently. Solving these problems would provide language testers with more objective evidence of test-takers' language proficiency, thus improving the validity of interpretations that can be made from test results. To that end, the purpose of this demonstration is to introduce the use of a software application, Fluencing, as one solution.

Fluencing is a software application that automatically calculates TMOF based on audiovisually aided user annotation. Other speech analysis systems designed for acoustic analysis, for example Praat, can be used to compute TMOF, but they require tedious and complicated calculation by hand. When using Praat, users have to count the number of syllables in each speech run, input the syllable count into the Praat syllable tier, and then manually calculate the TMOF. This process is time-consuming and prone to human error. Fluencing, a Python-based system developed by the Purdue Oral English Proficiency Program (Park, 2016), simplifies the process in a user-friendly way. It calculates speech rate, mean syllables per run, pause ratio, and expected pause ratio with the help of some basic user input or annotation.

The automatic calculation of TMOF requires relatively simple user preprocessing. First, the user segments speech samples into pauses and speech runs and then annotates the speech runs using a built-in speech annotation function. The system then counts syllables using a customizable syllable dictionary to which the user adds all new words as they are encountered in speech samples.
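
The arithmetic behind these measures is straightforward once the runs and pauses have been annotated. Here is a minimal Python sketch under that assumption; the durations and syllable counts are hypothetical, and this illustrates the common definitions of the measures, not Fluencing’s own code:

    # Minimal sketch: temporal measures of oral fluency from an annotated
    # sequence of speech runs and silent pauses. Values are hypothetical;
    # this illustrates the definitions, not Fluencing's implementation.

    runs = [(2.1, 9), (1.6, 7), (2.4, 10)]  # (duration in s, syllable count) per run
    pauses = [0.5, 0.8]                     # silent pause durations in seconds

    speech_time = sum(d for d, _ in runs)
    total_time = speech_time + sum(pauses)
    syllables = sum(s for _, s in runs)

    speech_rate = syllables / total_time            # syllables per second
    mean_syllables_per_run = syllables / len(runs)  # average run length in syllables
    pause_ratio = sum(pauses) / total_time          # share of time spent pausing

    print(round(speech_rate, 2), round(mean_syllables_per_run, 2), round(pause_ratio, 2))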

Our presentation will demonstrate this process in Fluencing using a response from an elicited imitation (also known as listen-and-repeat) task. The responses were collected from a post-entry English language proficiency exam for college ESL speakers. The presentation includes a step-by-step demonstration of the graphical user interface and the annotation tools. In addition, we will report inter-annotator agreement on pre-test and post-test elicited imitation responses (n=72). The agreement between two annotators, using Pearson correlation, was .987 (p<.0001) for speech rate and .917 (p<.0001) for mean syllables per run.

The high reliability of this system will provide empirical support for studies examining the importance of fluency and thus, contribute to the understanding of the construct of fluent speech and human perception of it.

 

Thirakunkovit, S. (2016). An Evaluation of a Post-Entry Test: An Item Analysis Using Classical Test Theory (CTT). Unpublished PhD dissertation. Purdue University. 

Abstract:

This study is an analysis of the test reliability of two screening tasks (C-test and cloze-elide) in the Assessment of College English-International (ACE-In), a post-entry test developed at Purdue University. The study uses Classical Test Theory (CTT) to assess the reliability of these test items. CTT is selected because this theory is the standard comprehensive procedure for developing, evaluating, and scaling test items (DeVellis, 2006). This reliability analysis is important because it is a prerequisite to the test validation process. This study has three major research questions: (1) What are the item characteristics of the C-test and cloze-elide tasks? (2) What are the average values of item difficulty and item discrimination for the C-test and cloze-elide items? (3) What are the internal consistency coefficients for, and the correlation coefficient between, the C-test and cloze-elide tasks? The results of the pilot study showed that the average score on the C-test is 77.8 (SD = 9.98), and that on the cloze-elide test is 36.59 (SD = 14.86). Considering the average values of item difficulty and item discrimination for both tasks, C-test items are generally considered easy (item difficulty > 0.7), while cloze-elide items are of medium difficulty (item difficulty ≈ 0.6). Even though C-test items have acceptable discrimination (the average point-biserial correlation indices (rpb) are 0.3), cloze-elide items are shown to have much better discrimination values on average (rpb indices higher than 0.5). The Cronbach’s alpha coefficients, a measure of internal consistency, for the C-test and cloze-elide are .88 and .96, respectively. A Pearson product-moment correlation analysis revealed that the correlation between the C-test and cloze-elide is high (r = .66) and significant (p < .01). These analyses of test reliability indicated that the test items were measuring the same underlying construct, general language proficiency. Even though the key results of the item analyses showed that the C-test did not meet the standard of item difficulty and discrimination, it does not necessarily mean that the C-test cannot sufficiently serve its intended purpose as a preliminary screening tool. Examination of the score distributions showed that scores on both the C-test and cloze-elide tasks range widely. With fairly wide standard deviations, there is potential to combine the scores of these two screening tasks to identify students who perform uniformly low across both tasks.
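
For illustration, here is a minimal Python sketch of the two statistics that anchor this analysis, Cronbach’s alpha for a set of items and a Pearson correlation between two task totals; all data below are randomly generated placeholders, not ACE-In scores:

    # Minimal sketch: Cronbach's alpha and a Pearson correlation between two
    # task totals. All data are simulated placeholders, not ACE-In scores.

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: examinees x items score matrix."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    rng = np.random.default_rng(1)
    ability = rng.normal(size=100)                             # latent trait
    c_test = ability[:, None] + rng.normal(scale=0.8, size=(100, 20))
    cloze_elide = ability + rng.normal(scale=0.7, size=100)    # second task total

    print(round(cronbach_alpha(c_test), 2))
    print(round(np.corrcoef(c_test.sum(axis=1), cloze_elide)[0, 1], 2))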

 

Yan, X. (2015). The Processing of Formulaic Language on Elicited Imitation Tasks by Second Language Speakers. Unpublished PhD dissertation. Purdue University.

Abstract:

The present study investigated the processing of formulaic language, in an effort to examine how the use of formulaic language may or may not contribute to second language (L2) fluency in speaking performance. To examine the effect of formulaic language on L2 fluency, this study utilized elicited imitation (EI) tasks designed to measure general English language proficiency in order to compare repetition of individual sentences containing formulaic sequences (FS) to repetition of sentences that do not. In addition to the presence of FS, the length of stimulus sentences was manipulated as a second independent variable. Responses to EI tasks were automatically measured for articulation rate (AR) and number of silent pauses (NumSP), two important measures of L2 fluency. Repeated-measures ANOVAs were conducted to examine the main and interaction effects of FS and sentence length (SL) on AR and NumSP. Results of analyses of EI performances showed that both SL and FS had a significant effect on L2 fluency in speech production; however, these two variables had differential effects on AR and NumSP. SL had a strong effect on NumSP on EI performances: as the stimulus sentence becomes longer, NumSP on EI performances increases. The presence of FS had a larger effect on AR than on NumSP: a higher proportion of formulaic sequences in language use contributes to a faster articulation rate, while the processing advantage of formulaic sequences helps reduce the number of silent pauses when the processing load is large. Findings of this study suggest that the presence of formulaic sequences creates a processing advantage for L2 speakers and that EI tasks prompt language comprehension and processing. Findings have important implications for language teaching and assessment, in particular with respect to the teaching of formulaic sequences and the use of EI as a measure of L2 proficiency. Recommendations for future research on formulaic sequences and the development of EI tasks are discussed.
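
The two dependent measures here reduce to simple arithmetic once a response has been segmented into speech runs and silent pauses. Here is a minimal Python sketch under common definitions (articulation rate as syllables over speaking time, excluding pauses); all values are hypothetical, not the study’s data:

    # Minimal sketch: articulation rate (AR) and number of silent pauses
    # (NumSP) from a pause-annotated EI response. Hypothetical values;
    # definitions follow common usage in fluency research.

    runs = [(1.8, 8), (2.2, 10)]     # (duration in s, syllables) per speech run
    silent_pauses = [0.4, 0.6, 0.3]  # pauses above a silence threshold, in s

    phonation_time = sum(d for d, _ in runs)
    syllables = sum(s for _, s in runs)

    articulation_rate = syllables / phonation_time  # syllables per second of speech
    num_silent_pauses = len(silent_pauses)

    print(round(articulation_rate, 2), num_silent_pauses)  # 4.5 3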
