Purdue Language and Cultural Exchange (PLaCE)

Research Studies

Presentations/Dissertations on ACE-In, PLaCE Curriculum and Students

Last updated: 8/2/2019                                                                                               

Resource 1

Alamyar, M., & Bush, H. (2018, March). Writing for broader digital audiences. Electronic Village Event, Teaching English to Speakers of Other Languages (TESOL), Chicago, IL.

Abstract:

For this project, students work in pairs to take a position on one of several broad topics and then create a website to showcase their argumentative paper to a wider audience. For the demonstration, I will spend 10 minutes introducing the overall project along with the criteria for both writing an argumentative paper and building a site with the website builder Weebly. I will then take 15 minutes to demonstrate how to create the website, place the content in it, and use several apps and websites to edit the site, its content, images, charts, graphs, etc.

 

Alamyar, M., & Bush, H. (2019, March). Kahoot: The ultimate engaging and powerful tool for ELs. Paper presented at the annual conference for Teaching English to Speakers of Other Languages (TESOL), Atlanta, GA.

Abstract:

Integrating effective educational technology can be challenging and time-consuming for TESOL educators. This presentation demonstrates how educators can utilize Kahoot! for English language learning in higher education. The attendees learn how to use Kahoot! for assessment, review of materials, comprehension checks, warm-up activities, surveys, and class discussions.

 

Allen, M. (2016). Measuring Silent and Oral Reading Rates for Adult L2 Readers and Developing ESL Reading Fluency through Assisted Repeated Reading. Unpublished PhD dissertation. Purdue University.

Abstract:

The purpose of this study was to investigate whether assisted repeated reading is an effective way for adult second language (L2) learners of English to develop oral and silent reading fluency rates. Reading fluency is an underdeveloped construct in second language studies, both in research and practice. This study first lays out a framework of text difficulty levels and reading rate thresholds for intermediate and advanced L2 readers of English, based upon a theoretical framework of automatization of the linguistic elements of reading through structured practice and skill development. This framework was then implemented through a single-case design (SCD), an experimental method that is appropriate for testing the effectiveness of behavioral and educational interventions with individual participants. Data were collected for several measures related to fluency, including oral and silent reading rates, for a small group of L2 learners in a U.S. university setting. The focus of the analysis is participants’ fluency development as they used a computer-based assisted repeated reading program called Read Naturally. The analysis concentrates on the case of an adult L1 Chinese learner of English (pseudonym: Hong Lin), presenting a longitudinal analysis of her progress through six months of continual practice and assessment. Notable results for Hong Lin include increased rates of oral reading (from 94 to 123 wpm) and silent reading (from 148 to 189 wpm) on a variety of comparable passages of unpracticed, advanced-level prose.

 

Allen, M. (2016, October). Measuring ESL reading fluency development with assisted repeated reading. Poster presented at the 18th annual conference of Midwest Association of Language Testers (MwALT), West Lafayette, IN.

Abstract:

Much remains unknown about how to define, measure, and develop reading fluency for ESL students at different proficiency levels (Anderson, 1999; Grabe, 2009, 2014; Lems, 2012; Taguchi & Gorsuch, 2012). These practical and theoretical issues are addressed in this presentation of findings from a single-case design (ABAB) study with a small number of participants (n = 12 university students). Two research questions are addressed: (1) What are participants’ silent and oral reading rates? (2) Does use of an audio-assisted repeated reading program contribute to increased reading rates? In this study, reading fluency is operationalized as the number of words per minute read by participants, and is supported by several other measures to indicate participants’ accuracy and comprehension. Empirical data for selected participants will be displayed in graphs showing their (1) recognition vocabulary; (2) baseline measurements of silent and oral reading rates at multiple points in time; and (3) progress in the repeated reading intervention. The graphs will show how silent and oral reading rates compare within and across participants, and the extent to which the reading intervention (IV) increased participants’ reading fluency rates (i.e., led to a stable change in the DVs). The use of graphic displays to visualize quantitative data is a hallmark of single-case designs, and the method of careful visual analysis of data to identify trends translates well to a poster session. This study advances our understanding of L2 reading fluency, with implications for assessment, curriculum and instruction, and student motivation.

 

Allen, M. (2017, March). The ever-continuing evolution of a favorite intercultural exercise: Growing theoretical legs for the DAE framework and covering new ground. Paper presented at Purdue Languages and Culture Conference, West Lafayette, IN.

Abstract:

This presentation maps a popular intercultural exercise onto established learning models to create a more robust pedagogical framework. Intercultural competence is an important part of many foreign and second language programs, and it is of particular importance for university students who are studying abroad or in institutions with large populations of international students. The point of departure for the current project is recent work by Nam and Condon (2010) to refine a long-standing instructional model known as Describe-Interpret-Evaluate (D.I.E.; Bennett, Bennett, & Stillings, 1977). Nam and Condon’s iteration—the DAE framework (Describe-Analyze-Evaluate)—offers several refinements to D.I.E. but stops short of providing a solid basis in learning theory. Here, I propose a way to push the DAE model forward by mapping it to several influential principles of learning theory, namely Bloom’s taxonomy of educational objectives (Krathwohl, 2002) and Kolb’s (1984) experiential learning theory. To help attendees visualize how this works in practice, I show the DAE in action, so to speak, by means of an original visual model and examples from student work. Use of this enhanced DAE framework can lead students to engage in what is perhaps the most important learning outcome—creation of knowledge—through teacher-facilitated cycles of experiential learning that include classroom instruction, independent practice, reflection, and assessment.

 

Allen, M., & Bush, H. (2018, August). Creativity and multilingual students: Towards a conceptual framework for creative thinking in EAP writing. Paper presented at the annual Symposium on Second Language Writing (SSLW), Vancouver, Canada.

Abstract:

Creative thinking is an invaluable but underexplored concept for multilingual writers in academic contexts. In this presentation, we define creative thinking from a developmental perspective and situate this concept in a pedagogy that engages students in a process of inquiry and discovery across multiple modes of learning and communication.

 

Allen, M., & Cheng, L. (2016, April). Measuring silent and oral reading rates for adult EAP students and developing ESL reading fluency through audio-assisted repeated reading. Poster presented at the annual conference of American Association for Applied Linguistics (AAAL), Orlando, FL.

Abstract:

Extensive research on reading fluency has been conducted with first language (L1) speakers of English, and fluency has come to be viewed as a crucial element in reading achievement (National Reading Panel, 2000; Samuels, 2002). By comparison, reading fluency has received limited attention in second language (L2) research (for this study, the L2 context is ESL). Much remains unknown about how to define, measure, and develop reading fluency for ESL students at different proficiency levels (Anderson, 1999; Grabe, 2009, 2014; Lems, 2012; Taguchi & Gorsuch, 2012). These practical and theoretical issues are addressed in this presentation of findings from a single-case design (ABAB) study with a small number of participants (n = 12 university students).

Two research questions are addressed: (1) What are participants’ silent and oral reading rates? (2) Does use of an audio-assisted repeated reading program contribute to increased reading rates? In this study, reading fluency is operationalized as the number of words per minute read by participants, and is supported by several other measures to indicate participants’ accuracy and comprehension.

Empirical data for selected participants will be displayed in graphs showing their (1) recognition vocabulary; (2) baseline measurements of silent and oral reading rates at multiple points in time; and (3) progress in the repeated reading intervention. The graphs will show how silent and oral reading rates compare within and across participants, and the extent to which the reading intervention (IV) increased participants’ reading fluency rates (i.e., led to a stable change in the DVs). The use of graphic displays to visualize quantitative data is a hallmark of single-case designs, and the method of careful visual analysis of data to identify trends translates well to a poster session.

This study advances our understanding of L2 reading fluency, with implications for assessment, curriculum and instruction, and student motivation.

 

Allen, M., Cheng, L., & Fehrman, S. (2015, September). Administration + Assessment + Curriculum Design + Teaching = Purdue Language and Cultural Exchange. Invited program presentation at the ESL Speaker Series event to faculty and graduate students from Second Language Studies, Linguistics, and Applied Linguistics at Purdue University, West Lafayette, IN.

Abstract:

Three staff members from PLaCE—the Purdue Language and Cultural Exchange program—will discuss their experience of developing an English language support program at the university. We will discuss our respective roles in administration, assessment, and teaching, with a focus on how these roles overlap and interact. Other topics to be addressed include curriculum design, research, professional development, and collaboration across campus with various individuals, groups, and departments. After providing information about the PLaCE program to this point and some of our goals for the future, we plan to leave ample time for discussion with audience members.

 

Allen, M., Fehrman, S., & Bush, H. (2016, October). Using the DAE framework in a language and culture course for international students. Paper presented at the annual Symposium on Second Language Writing (SSLW), Tempe, AZ. 

Abstract:

College writing is complex and challenging in a second language and new cultural context. Students and teachers need support. We show how we integrate Nam and Condon’s (2010) DAE framework into EAP course activities, focusing on writing at different stages of the learning process.

 

Allen, M., Ginther, A., & Pimenova, N. (2018, March). Building a professional learning community in an ESL program. TESOL 2018 conference, Chicago, IL.

Abstract:

Most language programs strive to provide high-quality instruction with limited resources and competing demands. This presentation emphasizes individual team members as the most valuable resource for ESL programs and demonstrates why and how a professional learning community can help a program be much more than the sum of its parts.

 

Baechle, J., & Climer, T. (2014, November). Navigating cultural and academic differences: How international students find their place in university life. Paper presented at the annual conference of Indiana Teachers of English to Speakers of Other Languages (INTESOL), Indianapolis, IN.

Abstract:

The goal of our presentation is to show what we are doing at PLaCE, a new program at Purdue University that aims to connect first-year international students to the campus and community. In addition, we hope to provide practical teaching assignments that other programs can incorporate into their coursework.

 

Bush, H., Farner, N., & Allen, M. (2017, November). Advocating the arts in EAP classes. Indiana Teachers of English to Speakers of Other Languages (INTESOL) Conference, Indianapolis, IN.

Abstract:

Postsecondary EAP programs typically focus on ESL students’ communicative language abilities for their academic work – often with a narrow interpretation of content that excludes the Arts. However, in order for students to engage and succeed in real-world intercultural and academic contexts, EAP instructors must provide engaging learning experiences so that students can develop both critical and creative thinking skills. Specifically, students must learn to identify the assumptions behind their thoughts and behaviors, and to view ideas and experiences from multiple perspectives.  

This presentation advocates for the use of the Arts in EAP classes as an innovative and stimulating way to accomplish these outcomes with students. After giving a brief overview of the learning principles guiding their pedagogy, the presenters will guide participants through several Arts-based learning activities that they have used successfully with their students. The presenters will illustrate several ways to bring the Arts to students through technology and to take students to the Arts outside of the classroom. From this session, participants will learn to (1) design authentic learning experiences with the Arts for their students, and (2) develop communicative activities that combine critical thinking skills and creative thinking processes.

 

Bush, H., & Pimenova, N. (2018, December). Engaging faculty in professional growth. Indiana Teachers of English to Speakers of Other Languages (INTESOL) Conference, Indianapolis, IN.

Abstract:

Intensive English language programs enable faculty from non-English-speaking countries to work towards their professional goals. Attendees will learn about two models of professional development workshops designed to improve faculty participants’ English language skills and expand their knowledge of designing English-as-a-medium-of-instruction (EMI) courses to support ELLs.

 

Bush, H., & Pimenova, N. (2019, October). Language, Experience, and Reflection: English Language Institute Designed for Academics in Higher Education from Colombia. Poster to be presented at the Design and Development Showcase, Association for Educational Communications and Technology (AECT), Las Vegas, NV.

Abstract:

The English Language Institute (ELI) provides extensive language training for Spanish-speaking academic professionals from universities in Colombia who are eager to join the global community of practice. The ELI’s innovative curriculum and design enable participants to (1) improve English language proficiency based on personalized goals, (2) apply theoretical and practical language knowledge in coursework and co-curricular activities, (3) expand knowledge of learner-centered pedagogy and educational technology, and (4) establish contacts with faculty members at the U.S. university for academic collaboration. Attendees of this session will learn about the curriculum design, educational technology for professional development, and lessons for future development.

 

Cheng, L. (2018, September). Extending the validity argument for an in-house ESL proficiency test through test score gains. Paper presented at the 20th annual conference of Midwest Association of Language Testers (MwALT), Madison, WI.

Abstract:

Longitudinal tracking of ESL test performances not only contributes to program evaluation, but it also extends the validity argument for the language test used. A validity argument is supported when test scores reflect appropriate change as a function of construct-related teaching or learning (Cronbach, 1971; Messick, 1989).

The Assessment of College English – International (ACE-In), a locally developed, computer-mediated, semi-direct English proficiency test, has been used as an embedded assessment in an English for Academic Purposes (EAP) program at a large public university for purposes such as providing diagnostic information to language teachers, tracking students’ language development, and gathering information for program effectiveness. This EAP program provides language and culture support to matriculated international undergraduate students with relatively lower TOEFL iBT or IELTS scores.

This study focused on a machine-scored cloze-elide section and a human-rated elicited imitation section on the ACE-In. Test items in both sections were developed based on well-defined test specifications and pilot-tested before operationalization. Cloze-elide assesses vocabulary, grammar, and silent reading whereas elicited imitation assesses listening comprehension, information retention, and grammatical accuracy in oral production. Both sections have high internal consistency estimates and small standard errors of measurement. The cloze-elide score has a significant, moderate correlation with the TOEFL iBT writing score (r = .42, p = .04) while the elicited imitation section score has a significant, strong correlation with the TOEFL iBT speaking score (r = .74, p < .0001).

Test data gathered at the beginning of a two-semester sequence, at the end of Semester 1, and at the end of Semester 2 indicate that the international students in this EAP program made gains in each semester (Cloze-elide: χ2 = 217.07, p < .001; Elicited imitation: F = 41.28, p < .001, ηp2 = .40). This evidence of score gains helps to extend the validity argument for the ACE-In.

 

Cheng, L., & Allen, M. (2016, March). Timed oral reading: A useful method for L2 reading fluency assessment and intervention. Paper presented at the annual Georgetown University Round Table on Languages and Linguistics (GURT), Washington, DC.

Abstract:

Timed oral reading is a widely used method to assess reading fluency in L1 instructional contexts; fluency is typically operationalized as the ability to read aloud accurately, with appropriate expression (prosody, phrasing), and at a speed such that the oral reading aligns with the meaning of the text (Rasinski, 2004). Fluency is key because it provides evidence that the reader understands what is being read (National Reading Panel, 2000). By contrast, in L2 instructional contexts, the development of reading fluency and the use of timed oral reading for assessment (or instruction) is almost nonexistent (Anderson, 1991, 1994; Grabe, 2014; Chang, 2010, 2012; Taguchi, Gorsuch, Takayasu-Maass, & Snipp, 2012).

This study investigates the efficacy of timed oral reading as both an assessment and an intervention method for evaluating and improving L2 reading fluency. A pre- and post-test was given to 77 international graduate and undergraduate students enrolled in an English for Academic Purposes (EAP) course at a large public Research I university in the Midwestern United States. Read Naturally®, a web-based program for improving L1 English reading fluency, was used as the intervention tool with our ESL group, and eight texts at the Grade 6 L1 English level were implemented as homework assignments and as materials or topics for teacher-student conferences. At the end of the thirteen-week intervention, this group of ESL learners was found to have made a statistically significant improvement, with a very large effect size, in reading speed: an increase from 131 words per minute (wpm) to 144 wpm.

Our presentation will be devoted to a discussion of why we are interested in oral reading fluency and why we felt it would be a useful tool for assessing and providing intervention to matriculated EAP students.

 

Cheng, L., & Allen, M. (2016, May). Timed oral reading: A useful method for second-language reading fluency assessment and intervention. Paper presented at the 3rd annual conference of Asian Association for Language Assessment (AALA), Sanur, Bali, Indonesia.

Abstract:

Timed oral reading is a widely used method to assess reading fluency in first-language (L1) instructional contexts. Reading fluency is typically operationalized as the ability to read aloud accurately, with appropriate expression (prosody, phrasing), and at a speed such that the oral reading aligns with the meaning of the text (Rasinski, 2004). Fluency is key because it provides evidence that the reader understands what is being read (National Reading Panel, 2000). By contrast, in second-language (L2) instructional contexts, the development of reading fluency and the use of timed oral reading for assessment (or instruction) is almost nonexistent (Anderson, 1991, 1994; Grabe, 2014; Chang, 2010, 2012; Taguchi, Gorsuch, Takayasu-Maass, & Snipp, 2012).

This study investigated the efficacy of timed oral reading as a learning-oriented assessment method for evaluating and improving L2 reading fluency. A pre- and post-test was given to 77 international graduate and undergraduate students enrolled in an English for Academic Purposes (EAP) course at a large public Research I university in the Midwestern United States. Read Naturally®, a web-based program for improving L1 English reading fluency, was used as the intervention tool with our ESL group, and 8 texts at the Grade 6 L1 English level were implemented as homework assignments and as materials or topics for teacher-student conferences. At the end of the thirteen-week intervention, this group of English as a Second Language (ESL) learners was found to have made a statistically significant improvement, with a very large effect size, in reading speed: an increase from 131 correct words per minute (cwpm) to 144 cwpm. When the study design was replicated the following semester with a different group of 92 adult ESL learners, using 8 texts at the Grade 8 L1 English level, the average pre-test oral reading rate of 106 cwpm increased to a post-test rate of 121 cwpm, as compared to L1 adult speakers of English, whose rates range from 150 to 200 words per minute.

In addition to a report of this study and its replication over two semesters, this paper presentation will also be devoted to a discussion of why we are interested in oral reading fluency and why we felt it would be a useful, learning-oriented assessment tool for matriculated EAP students at this U.S. university.

 

Cheng, L., & Crouch, D. (2018, August). The relationship between text borrowing patterns and second language proficiency. Paper presented at the annual Symposium on Second Language Writing (SSLW), Vancouver, Canada.

Abstract:

This study examined the relationship between text borrowing and second language proficiency, based on an analysis of 50 L1 Chinese timed essays. Results showed a strong, positive correlation (r = 0.84, p = 0.019) between writers’ TOEFL iBT total scores and the extent of accurate syntactic/morphological reformulation and slight modification of text from the prompt.

 

Cheng, L., Ginther, A., & Allen, M. (2015, October). The development of an essay rating scale for a post-entry English proficiency test. Paper presented at the 17th annual conference of Midwest Association of Language Testers (MwALT), Iowa City, IA.

Abstract:

The dramatic increase in enrollment of international undergraduate students at U.S. universities not only reflects a national trend of shifting undergraduate demographics but also highlights the need for effective evaluation of newly admitted international students’ English language proficiency. To better inform language instruction in an English for Academic Purposes program at a large public university, an internet-based post-entry English proficiency test, the Assessment of College English – International (ACE-In), was developed. This presentation focuses on the development of an empirically derived rating scale for the writing assessment included in the ACE-In.

Drawing on the literature of L2 rating scale development (e.g., Fulcher, Davidson, & Kemp, 2011; Upshur & Turner, 1995), we began by analyzing a sample (n = 42) of first-semester international students’ ACE-In essays to identify the categories and elements (i.e., constructs and variables) present and emerging levels of performance. A series of rating and discussion sessions was iteratively conducted with 33 additional essay samples until agreed-upon descriptors were established and an acceptable level of inter-rater reliability was reached. These rater norming sessions not only served the purpose of developing and refining an essay rating scale, but also helped to build a community of practice by providing a venue for raters to share what they value as writing instructors (Kauper, 2013).

This presentation provides a practical example of developing an empirically derived rating scale for a timed writing assessment.  With the emphasis on instructor values, we provide a model for creating effective communities of practice through rating scale development. 

 

Cheng, L., & Song, S. (2014, November). Student needs analysis for an EAP support program at a large Midwestern public university. Paper presented at the annual conference of Indiana Teachers of English to Speakers of Other Languages (INTESOL), Indianapolis, IN.

Abstract:

This presentation focuses on the use of an early-semester student survey for needs analysis in an integrated-skills EAP course offered by a brand-new language bridge program. Survey responses suggest that specific language tasks were perceived as most difficult. Curricular and co-curricular components targeting these areas will then be presented.

 

Climer, T. (2019, May). Sustainability and English learning: A future of sustainable learning. Workshop presented at the Teachers of English to Speakers of Other Languages (TESOL) Colombia III conference, Chia, Colombia.

Abstract:

Although many ESL textbooks include readings about environmental issues, the concept of sustainability and its three pillars (environmental, economic, and social) is not taught in many ESL classrooms. This is unfortunate because sustainability is a vital concept for our world. Sustainability not only deals with how to maintain resources and services for future generations, but is also about improving the world; for example, it includes the principle of equality and inclusiveness. The main outcome of this presentation is for attendees to understand why and how they can use the concept of sustainability in their own ESL classrooms. The objectives of the presentation are to show how teachers can use the frameworks of conceptual learning and critical thinking to teach a topic related to sustainability in a way that enables students to make meaningful applications to their own learning and lives. To this end, the presenter will share several lessons and activities that teachers can use in their own classrooms. The presenter’s main argument is that teaching sustainability in the adult ESL language classroom meets two important goals: first, it promotes the idea of global citizenship and awareness in students, and second, it supports the development of students’ English language skills. To make this argument, the presenter will model and illustrate how he integrates sustainability into his ESL teaching and material design by using frameworks of conceptual learning and critical thinking. These frameworks emphasize the processes students go through in discovering that learning goes well beyond memorizing facts and statistics, to making connections, transferring knowledge, and understanding key concepts and values about a topic in diverse fields or disciplines. Finally, the presenter will show how this unit on sustainability strengthens students’ English ability by building vocabulary as well as speaking and listening skills.

 

Crouch, D. (2018, March). Longitudinal development of second language fluency in writing and speaking. Paper presented at the annual conference of American Association for Applied Linguistics (AAAL), Chicago, IL.

Abstract:

Little is known about how complexity and fluency develop together within individual L2 learners. This study analyzed the longitudinal development of oral fluency and global written syntactic complexity in the test responses of 60 L1 Chinese first-year undergraduate students over two semesters. The author collected responses to a post-entry, computer-administered language proficiency test required of all first-year international students with TOEFL scores of 100 or below at a large university in the US. The students took the test at the beginning of the first semester and again at the end of the second semester of a required two-course ESL sequence. For both the written and the spoken task, each student responded in support of or opposition to a statement of opinion.

The author analyzed the written responses automatically to calculate mean length of sentence using Lu’s (2010) L2 Syntactic Complexity Analyzer, and the oral responses for speech rate (syllables/second) and mean syllables per run (syllables/run) using a proprietary system specially designed to measure oral fluency.

Results showed that the test-takers increased their oral fluency significantly but not their global written syntactic complexity. When the mean syllables per run of the oral pre-tests (M=7.27, SD=2.24) was compared to that of the oral post-tests (M=7.88, SD=2.39), there was a statistically significant difference, t(59)=3.58, p=.001. In a paired sample t-test the speech rate of the oral pretests (M=2.90, SD=0.43) was compared to that of the oral post-tests (M=3.00, SD=0.51), and there was also a significant difference, t(59)=2.55, p=.014.   Finally, a paired sample t-test compared the mean length of sentence of the written pre-tests (M=19.44, SD=4.43) to that of the written post-tests (M=20.37, SD=4.80), showing no significant difference, t(59)=1.57, p=.123. The findings provide evidence that oral fluency and written syntactic complexity develop at different rates in college level L1 Chinese L2 learners.

 

Crouch, D. Assessment of Productive English Language Proficiency. PhD dissertation in progress, Purdue University.

Abstract:

The study will analyze the "timed writing" responses and oral "express your opinion" responses of 100 ACE-In test-takers. It will determine the extent to which sentence length, the frequency of coordinate phrases, and the frequency of various types of complex nominals in the timed writing responses, 4 measures of temporal oral fluency in the "express your opinion" oral responses, and word count and lexical diversity in the responses to both tasks have the potential to represent the range of proficiency levels in the ACE-In test-taker population. The study will also calculate the extent to which these measures correlate with each other, particularly across the two modalities of writing and speaking. The study will provide useful insights into the English language strengths and weaknesses of the ACE-In test-taker population, as well as theoretical insights into how complexity and fluency relate to each other within the learner across the modalities of writing and speaking.

 

Gao, J., Crouch, D., & Cheng, L.  (2019, October). Concept Mapping for Guiding Rater Training in an ESL Elicited Imitation Assessment. Paper to be presented at the 21st annual conference of Midwest Association of Language Testers (MwALT), Bloomington, IN.

Abstract:

Elicited Imitation (EI) has been integrated into second language (L2) assessments measuring examinees’ overall language proficiency (Tracy-Ventura et al., 2014) or examining L2 learners’ language development (Ellis et al., 2006). Our study focused on rater behavior when judging L2 learners’ EI performances on a local English proficiency test. Implementation of a 5-point holistic rating scale from 0 to 4, with rater training, has yielded high rater agreement (above .90 for the R1/R2 correlation) at the section level. Raters, however, seem to operate with different priorities when making decisions at the lower end of the scale.

We investigated 1/2 rater splits regarding the same item response. Two trained raters rated 56 examinee responses. Of the total 672 EI sentences, 125 1/2 splits were identified. Based on transcriptions and detailed error analyses, a Performance Decision Tree (PDT) was developed with the purpose of fine-tuning the decision-making process at the lower levels of the scale and helping raters align better with each other and with the rating scale. This PDT guides raters to make grammaticality judgements of each item response and then identify semantic deviations at the word level. While the grammaticality judgements cover grammatical accuracy, the semantic comparisons between the examinee’s version and the prompt include minor or major meaning deviations resulting from word substitution, addition, omission, or distortion (using a completely different word).

Preliminary results show that 59 of the 125 sentences (47.2%) have grammatical errors. Semantic deviations appear in 98 sentences (78.4%), 50% of which result from word omission. Word addition, substitution, and complete distortion contribute 2%, 15.3%, and 21.4% respectively. The remaining 7% of semantic deviations result from combinations of the aforementioned categories. This study has contributed to our ongoing rater training, with the construction of this PDT to help raters navigate through the lower end of the rating scale.
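The two-step decision logic described in this abstract (a grammaticality judgement followed by classification of semantic deviations) lends itself to a short sketch. The Python function below is a hypothetical rendering of a Performance Decision Tree of this kind, not the actual PLaCE instrument: the score boundaries and the minor/major split between deviation types are illustrative assumptions.

```python
# Hypothetical PDT sketch for the lower end of a 0-4 EI scale.
# Deviation categories follow the abstract: substitution, addition,
# omission, distortion. Treating omission/distortion as "major" and
# the cut-offs between levels are assumptions for illustration only.

def pdt_score(grammatical: bool, deviations: list) -> int:
    """Assign a low-band EI score (0-2) from a grammaticality judgement
    and a list of semantic deviation types found in the response."""
    major = {"omission", "distortion"}
    n_major = sum(1 for d in deviations if d in major)
    n_minor = len(deviations) - n_major

    if grammatical and not deviations:
        return 2                      # accurate at this band
    if n_major == 0 and n_minor <= 1:
        return 2 if grammatical else 1
    if n_major <= 1:
        return 1                      # one major meaning deviation
    return 0                          # multiple major deviations


# Example: an ungrammatical response with one word omitted
print(pdt_score(False, ["omission"]))  # -> 1
```

A tree like this makes the rater's decision sequence explicit, which is the stated purpose of the PDT: grammaticality first, then word-level meaning comparison against the prompt.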

 

Levy, M., & Rodriguez-Fuentes, R. (2015, May). Analysis of C-test Items from the ACE-In Test: A Preliminary Study of a Colombian Population at Purdue University. English 618 course paper. Purdue University.

Abstract:

Language placement tests are a common method of assessing students upon arrival at American universities. ACE-In, a test developed at Purdue for incoming international undergraduate students, was administered in this study. Scores for the C-test items of this test (Module I) were analyzed for their reliability in measuring language performance in a population of 22 Colombian students at Purdue. Variation in item difficulty and item discrimination was also analyzed. A second instrument, a questionnaire on educational background, including specific English language instruction details, was also administered to the test population. The item difficulty index for the ACE-In ranged from 0.50 to 0.95, making this moderately difficult module useful for assessing a wide range of proficiency levels while maintaining consistency and item interdependence. Analysis of the item discrimination index showed a range between -0.10 (poor) and 0.60 (excellent). Cronbach's alpha values varied little when items were removed one at a time, suggesting that the sections are reliably constructed in a cohesive manner. Both instruments appeared promising in detecting variability in proficiency levels across a diverse population of Colombian participants. Overall performance on the ACE-In was good, with 14 participants scoring above 80%. There was a moderate positive correlation between ACE-In Module I scores and time spent at Purdue. The correlation between self-assessed language abilities and the scores was also positive but weak. On average, participants perceived that their language instruction had prepared them adequately for reading and listening, marginally for writing, and poorly for speaking. Although more research is needed, this study provided a good foundation for refining the instruments used to characterize Colombian students interested in pursuing either advanced degrees or research visits at Purdue. This research also served as a supporting element in the design of an intervention ESP course for this target audience.
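The two classical item statistics reported in this abstract, difficulty (proportion of correct responses) and discrimination (the item-total point-biserial correlation), can be illustrated with a short sketch. The functions and data below are invented for illustration and are not taken from the study.

```python
# Classical Test Theory item statistics for 0/1-scored items.
# item_scores: one 0/1 value per test-taker for a single item;
# total_scores: each test-taker's total on the whole module.

from statistics import mean, pstdev

def item_difficulty(item_scores):
    """Proportion of test-takers answering the item correctly."""
    return mean(item_scores)

def item_discrimination(item_scores, total_scores):
    """Point-biserial correlation between an item and the total score."""
    mi, mt = mean(item_scores), mean(total_scores)
    si, st = pstdev(item_scores), pstdev(total_scores)
    cov = mean((i - mi) * (t - mt)
               for i, t in zip(item_scores, total_scores))
    return cov / (si * st)
```

On this scale, a difficulty near 0.95 marks a very easy item and one near 0.50 a harder item, while a discrimination near -0.10 flags an item that weaker test-takers answered correctly more often than stronger ones.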

 

Li, X. An Item Analysis of Elicited Imitation Using Classical Test Theory (CTT). PhD dissertation in progress. Purdue University.

Abstract:

Elicited imitation (EI) was originally designed for first language (L1) development research, but since the 1970s it has also been widely used in the SLA field. In the late 1970s, EI underwent a series of critiques regarding its reliability and validity, the major criticism being the possibility of mere rote repetition in EI tasks. In recent years, interest in EI has resurged, along with an increasing number of empirical studies validating and refining EI tasks. This project conducted an item analysis using Classical Test Theory (CTT). By examining item difficulty, item discrimination, and score reliability, the study explores how EI tasks function as a measure of general proficiency. The analysis draws on 299 test samples from the Assessment of College English-International (ACE-In), a locally developed language test for post-entry international undergraduate students.

 

Li, Y. (2016, January). The Validity and Reliability of Grammar Quizzes for ESL Undergraduate Students. English 674 course paper. Purdue University.

Abstract:

This project conducts item analyses for the GS 100 quizzes of the PLaCE program at Purdue University. The researcher carried out statistical analyses of three GS 100 quizzes taken by over 200 participants, computing the reliability coefficient of each quiz along with item difficulty and discrimination indices. The analysis uncovered two noticeable findings concerning differences in reliability coefficients among different forms, and among parts of a form, that share the same testing construct. These findings are explained with reference to item writing and to the dichotomous (0/1) scoring format used in the item analysis. In addition, the researcher discusses what the statistical results of this project fail to shed light on.

 

Pimenova, N. & Farner, N.  (2019, March). Using Flipgrid to assess students’ reading, listening and speaking skills.  Electronic Village Event, Teaching English to Speakers of Other Languages (TESOL), Atlanta, GA.

Abstract:

Seeking a more engaging tool to give ESL university students a chance to practice their reading, listening, and speaking skills, the instructors chose Flipgrid as a video platform. For each video blog, students select a section from the assigned reading that they find interesting, meaningful, or surprising. Next, they record themselves reading the passage and explaining why they chose it. Partners view, react, and respond to their buddy's video. Attendees will learn how to use this free, easy-to-use app and how to provide formative feedback to students using custom assessment rubrics and video feedback.

 

Pimenova, N. (2019, June). Increasing your vocabulary size short course for international students. Work-in-progress presented at Consortium on Graduate Communication (CGC) Summer Institute, George Mason University, Arlington, VA.

Abstract:

This short six-week noncredit course was developed to help international students improve their academic English vocabulary knowledge. Though the course was open to all international students enrolled at a large university in the Midwest, graduate students were our target population. Since the English language learners who took this class had different levels of English language proficiency, teaching them a single list of academic words was not reasonable. To measure students' vocabulary size, I used the Vocabulary Levels Test created in 1983 by Paul Nation and later improved and validated by others (Beglar & Hunt, 1999; Schmitt, Schmitt, & Clapham, 2001). In this course students set personal goals for vocabulary development and created action plans to achieve them. By the end of the session, students were able to increase their vocabulary size by repeating and recycling new vocabulary; organizing new vocabulary in a meaningful way; making vocabulary learning personal; using strategic vocabulary in class; independently studying vocabulary in and out of class; keeping vocabulary notebooks; and using online dictionaries (McCarten, 2007). At the end of the course students taught the new words they had learned to their peers. In this work-in-progress presentation I will share what I learned as the instructor of the Increasing Your Vocabulary Size course after piloting it in Fall 2018.

 

Rodriguez-Fuentes, R. (2018). Linguistic and Cultural Factors in Graduate School Admissions: An Examination of Latin American Students at Purdue University. Unpublished PhD dissertation. Purdue University.

Abstract:

While the number of graduate students coming to the United States from other parts of the world is decreasing, the trend for Latin American populations is the opposite. Nonetheless, the current lack of information regarding the reasons behind this tendency, in terms of English language proficiency and cultural factors, affects all parties involved: graduate students do not know what types of opportunities they can make use of; American universities do not have enough information to provide Latin American students with a sheltering environment; and Latin American governments are unable to make policies that encourage application and facilitate admission to graduate school at American universities.

The aim of this study is to establish a starting point for understanding the linguistic and cultural complexities of the Latin American graduate school population in the United States. To do so, surveys and interviews were carried out to explore the academic experiences, cultural influences, and socioeconomic patterns that influenced the admission of Latin American students to graduate school. Mixed methods were used to describe the patterns of the survey responses quantitatively while leaving room for confirmatory analysis using the information from the interviews. The participants were graduate students from Purdue University, one of the American universities with the highest numbers of Latin American graduate students. The results underscore the importance of effective English language instruction during the college years for reaching graduate school admission scores, especially in cases where English language training during school was not possible or had little impact on the learner's functional proficiency. There is also a large body of evidence indicating that undergraduate research internships may be one of the opportunities with the highest potential for recruiting Latin American graduate students, regardless of their socioeconomic background.

 

Shin, J. Fluency, Accuracy, and Their Relationships in Second Language Development: Insights from Cross-Sectional and Longitudinal Analyses of Elicited Imitation Performance. PhD dissertation in progress. Purdue University.

Abstract:

Accuracy and oral fluency are important aspects of oral proficiency. Investigating objective features and scores of accuracy and fluency may provide diagnostic information on L2 proficiency and language development (Skehan, 2003). Elicited imitation (EI), an oral sentence repetition task, is a reliable and efficient test for measuring accuracy and fluency (Van Moere, 2012), but less is known about using EI to diagnose fluency. My dissertation investigates the relationships among objective measures and human ratings of fluency and grammatical accuracy on the EI subsection of the Assessment of College English—International (ACE-In), a post-entry assessment of general English proficiency used in the Purdue Language and Cultural Exchange (PLaCE). Using Python- and R-based tools, acoustic fluency features and grammatical errors will be extracted and estimated, and the relationships among objective features and corresponding holistic scores will be examined, in addition to changes in fluency and accuracy features in EI performance over seven months of instruction in a sequence of two American language and culture courses (which do not themselves teach EI). The findings from the cross-sectional and longitudinal observations will also shed light on the roles of accuracy and fluency, and the dynamic trade-offs between them, in second language acquisition and development.

 

Shin, J. , & Crouch, D. (2018, March). Fluencing: A reliable and simple tool to measure temporal measures of oral fluency. Paper presented at the annual Language Assessment Research Council (LARC), Ames, IA.

Abstract:

Temporal measures of oral fluency (TMOF) have been shown to be positively correlated with holistic scores of oral English proficiency (Ginther, Dimova, & Yang, 2010).

This finding has positive implications for the use of measured variables in assessing oral English proficiency. However, the main obstacle to the use of TMOF in language assessment is the difficulty of computing such measures accurately, reliably, and efficiently. Solving these problems would provide language testers with more objective evidence of test-takers' language proficiency, thus improving the validity of interpretations that can be made from test results. To that end, the purpose of this demonstration is to introduce the use of a software application, Fluencing, as one solution.

Fluencing is a software application that automatically calculates TMOF based on audio-visually aided user annotation. Other speech analysis systems designed for acoustic analysis, such as Praat, can be used to compute TMOF, but they require tedious and complicated calculation by hand. When using Praat, users have to count the number of syllables in each speech run, input the syllable count into the Praat syllable tier, and then manually calculate the TMOF. This process is time-consuming and prone to human error. Fluencing, a Python-based system developed by the Purdue Oral English Proficiency Program (Park, 2016), simplifies the process in a user-friendly way. It calculates speech rate, mean syllables per run, pause ratio, and expected pause ratio with the help of some basic user input or annotation.

The automatic calculation of TMOF requires relatively simple user preprocessing. First, the user segments speech samples into pauses and speech runs and then annotates the speech runs using a built-in speech annotation function. The system then counts syllables using a customizable syllable dictionary to which the user adds all new words as they are encountered in speech samples.
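Given annotated speech runs and pauses of the kind described above, the core temporal measures reduce to simple arithmetic. The sketch below illustrates the definitions under an assumed data layout (each run as a syllable count plus a duration in seconds); it is not Fluencing's actual code or data format.

```python
# Illustrative temporal measures of oral fluency (TMOF) from a
# hypothetical annotation: runs = [(syllable_count, duration_s), ...],
# pauses = [duration_s, ...]. Layout and units are assumptions.

def temporal_measures(runs, pauses):
    """Compute speech rate, mean syllables per run, and pause ratio."""
    speech_time = sum(d for _, d in runs)
    pause_time = sum(pauses)
    total_time = speech_time + pause_time
    syllables = sum(n for n, _ in runs)
    return {
        "speech_rate": syllables / total_time * 60,  # syllables/minute
        "mean_syllables_per_run": syllables / len(runs),
        "pause_ratio": pause_time / total_time,
    }


# Two runs (6 and 4 syllables) separated by a one-second pause
print(temporal_measures([(6, 2.0), (4, 1.0)], [1.0]))
```

The annotation step is the expensive part; once runs and pauses are segmented and syllable counts attached, the measures themselves are cheap and fully deterministic, which is why annotator agreement is the reliability figure that matters.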

Our presentation will demonstrate this process in Fluencing using responses from an elicited imitation (also known as listen-and-repeat) task. The responses were collected from a post-entry English language proficiency exam for college ESL speakers. The presentation includes a step-by-step demonstration of the graphical user interface and the annotation tools. In addition, we will report inter-annotator agreement on pre-test and post-test elicited imitation responses (n=72). The agreement between two annotators, using Pearson correlation, was .987 for speech rate (p<.0001) and .917 for mean syllables per run (p<.0001).

The high reliability of this system will provide empirical support for studies examining the importance of fluency and thus contribute to the understanding of the construct of fluent speech and human perception of it.

 

Thirakunkovit, S. (2016). An Evaluation of a Post-Entry Test: An Item Analysis Using Classical Test Theory (CTT). Unpublished PhD dissertation. Purdue University. 

Abstract:

This study is an analysis of the test reliability of two screening tasks (C-test and cloze-elide) in the Assessment of College English-International (ACE-In), a post-entry test developed at Purdue University. The study uses Classical Test Theory (CTT) to assess the reliability of these test items. CTT is selected because it is the standard comprehensive procedure for developing, evaluating, and scaling test items (DeVellis, 2006). This reliability analysis is important because it is a prerequisite to the test validation process. The study has three major research questions: 1. What are the item characteristics of the C-test and cloze-elide tasks? 2. What are the average values of item difficulty and item discrimination for the C-test and cloze-elide items? 3. What are the internal consistency coefficients for, and the correlation coefficient between, the C-test and cloze-elide tests? The results of the pilot study showed that the average score on the C-test was 77.8 (SD = 9.98) and that on the cloze-elide test was 36.59 (SD = 14.86). Considering the average values of item difficulty and item discrimination for both tasks, C-test items are generally easy (item difficulty > 0.7), while cloze-elide items are of medium difficulty (item difficulty ≈ 0.6). Although C-test items have acceptable discrimination (average point-biserial correlation indices, rpb, of 0.3), cloze-elide items show much better discrimination values on average (rpb indices higher than 0.5). The Cronbach's alpha coefficients, a measure of internal consistency, for the C-test and cloze-elide are .88 and .96, respectively. A Pearson product-moment correlation analysis revealed that the correlation between the C-test and cloze-elide is high (r = .66) and significant (p < .01). These analyses of test reliability indicated that the test items were measuring the same underlying construct, general language proficiency.
Even though the key results of the item analyses showed that the C-test did not meet the standards for item difficulty and discrimination, this does not necessarily mean that the C-test cannot serve its intended purpose as a preliminary screening tool. Examination of the score distributions shows that scores on both tasks range widely. With fairly wide standard deviations, there is potential to combine the scores of the two screening tasks to identify students with uniformly low performance across both tasks.
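For readers unfamiliar with the internal-consistency statistic this abstract reports, Cronbach's alpha for dichotomously scored items can be computed as below. The function and the data in the test are invented for illustration, not taken from the study.

```python
# Illustrative Cronbach's alpha for a 0/1-scored task.
# scores: rows = test-takers, columns = items.
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))

def cronbach_alpha(scores):
    """Internal-consistency coefficient over a score matrix."""
    n_items = len(scores[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = sum(var([row[i] for row in scores])
                    for i in range(n_items))
    total_var = var([sum(row) for row in scores])
    return n_items / (n_items - 1) * (1 - item_vars / total_var)
```

Values such as the .88 and .96 reported above indicate that item-level variance is small relative to the variance of total scores, i.e., the items move together and plausibly tap one construct.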

 

Watson, W., Watson, S., Fehrman, S., Yu, J., & Janakiraman, S. (2019). Examining international students’ attitudinal learning in a higher education course on cultural and language learning. Journal of International Students.

(Abstract to be uploaded later)

 

Yan, X. (2015). The Processing of Formulaic Language on Elicited Imitation Tasks by Second Language Speakers. Unpublished PhD dissertation. Purdue University.

Abstract:

The present study investigated the processing of formulaic language, in an effort to examine how the use of formulaic language may or may not contribute to second language (L2) fluency in speaking performance. To examine the effect of formulaic language on L2 fluency, the study used elicited imitation (EI) tasks designed to measure general English language proficiency, comparing repetition of individual sentences containing formulaic sequences (FS) with repetition of sentences that do not. In addition to the presence of FS, the length of the stimulus sentences was manipulated as a second independent variable. Responses to the EI tasks were automatically measured for articulation rate (AR) and number of silent pauses (NumSP), two important measures of L2 fluency. Repeated measures ANOVAs were conducted to examine the main and interaction effects of FS and sentence length (SL) on AR and NumSP. Results showed that both SL and FS had significant effects on L2 fluency in speech production; however, the two variables had differential effects on AR and NumSP. SL had a strong effect on NumSP: as the stimulus sentence becomes longer, NumSP on EI performances increases. The presence of FS had a larger effect on AR than on NumSP: a higher proportion of formulaic sequences in language use contributes to a faster articulation rate, while the processing advantage of formulaic sequences helps reduce the number of silent pauses when the processing load is large. The findings suggest that the presence of formulaic sequences creates a processing advantage for L2 speakers and that EI tasks prompt language comprehension and processing. The findings have important implications for language teaching and assessment, in particular with respect to the teaching of formulaic sequences and the use of EI as a measure of L2 proficiency. Recommendations for future research on formulaic sequences and the development of EI tasks are discussed.
