2019 International AALA Pre-conference Workshops

 

Pre-Conference Workshop 1 (Full Day, Oct. 16)
Speakers: Alistair Van Moere & Jing Wei
Title: Comparing Tests and Linking Test Scores – A Practical Guide

Pre-Conference Workshop 2 (Full Day, Oct. 16)
Speakers: Richard Spiby, Sheryl Cooke, & Johnathan Cruise
Title: Standards in the Classroom

Pre-Conference Workshop 3 (Half Day, Morning of Oct. 16)
Speaker: Sara Cushing
Title: Speaking and Writing Rater Training

Pre-Conference Workshop 4 (Half Day, Afternoon of Oct. 16)
Speaker: Lorena Llosa
Title: Assessing students in the CLIL (Content and Language Integrated Learning) classroom

 

Workshop 1: Comparing Tests and Linking Test Scores – A Practical Guide

This workshop will discuss procedures for comparing tests and introduce the basic steps involved in linking two sets of test scores. There are many reasons why researchers may want to compare tests and test scores. For example:

  • Understanding how local or institutional test scores compare to scores on an established international test, in order to develop a concordance table
  • Evaluating the degree to which the construct, content, and purpose of one test are similar to those of another test
  • Establishing validity evidence for the use of test scores by ensuring that two tests with similar constructs are correlated

This workshop will walk researchers through the main issues to consider when conducting these types of studies.

The morning session will cover the following topics:

  • Understanding the purpose, construct, and uses of test scores
  • Determining an appropriate framework for comparing tests, content, and tasks
  • Using text complexity measures such as the Lexile® Framework to compare the reading challenge in different tests
  • Deciding whether it is appropriate to compare/link test scores

The afternoon session will cover:

  • Hands-on practice with statistical approaches for comparing test scores, such as means comparisons, correlations, and scatterplot analysis
  • Demonstrations of more sophisticated linking analyses, such as Rasch analysis and linear linking methods
  • Evaluating linking results to determine the strengths and weaknesses of the linkage and how the results should be used and interpreted

Participants will actively engage in each session using sample tests and case studies. We will also provide sample sets of test scores for participants to work with in Excel in order to establish and evaluate a link.
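By way of illustration only (this is not part of the workshop materials), the sketch below shows the kind of calculation involved in a simple mean-sigma linear linking of two score sets, together with the basic comparisons listed above. The score values and variable names (test_a, test_b) are invented for this example; the workshop exercises themselves are carried out in Excel.

    # Minimal sketch of a mean-sigma linear linking between two hypothetical score sets.
    # All score values below are invented for illustration.
    import statistics

    # Scores from the same (hypothetical) examinees on two tests
    test_a = [42, 55, 61, 48, 70, 66, 53, 59, 45, 64]            # local/institutional test
    test_b = [410, 520, 575, 455, 640, 610, 500, 560, 430, 595]  # established international test

    # 1. Basic comparison: means, standard deviations, and correlation
    mean_a, mean_b = statistics.mean(test_a), statistics.mean(test_b)
    sd_a, sd_b = statistics.stdev(test_a), statistics.stdev(test_b)
    r = statistics.correlation(test_a, test_b)  # Pearson correlation (Python 3.10+)
    print(f"Test A: mean={mean_a:.1f}, sd={sd_a:.1f}")
    print(f"Test B: mean={mean_b:.1f}, sd={sd_b:.1f}")
    print(f"Correlation: r={r:.2f}")

    # 2. Mean-sigma linking: linked = slope * score_a + intercept,
    #    where the slope matches the two standard deviations
    slope = sd_b / sd_a
    intercept = mean_b - slope * mean_a

    def link_a_to_b(score_a):
        """Convert a Test A score to the Test B scale via the linear link."""
        return slope * score_a + intercept

    # 3. Evaluate the link by comparing linked scores with observed Test B scores
    residuals = [link_a_to_b(a) - b for a, b in zip(test_a, test_b)]
    rmse = (sum(e ** 2 for e in residuals) / len(residuals)) ** 0.5
    print(f"Linked Test B score for a Test A score of 60: {link_a_to_b(60):.0f}")
    print(f"RMSE of linked vs. observed Test B scores: {rmse:.1f}")

The same logic (descriptive comparison, a linear transformation, and an inspection of the residuals) applies whether the work is done in Excel or in statistical software.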

This workshop is suitable for graduate students, test developers, and researchers. It is pitched at an introductory level for those with limited experience in this area. Participants should bring laptops with Excel installed.

Speaker: Dr. Alistair Van Moere

Dr. Alistair Van Moere is Chief Product Officer at MetaMetrics Inc., where he drives innovation and helps organizations make sense of test measurement. Previously, Alistair was President of Pearson’s Knowledge Technologies group, where he managed artificial intelligence scoring of speaking and writing for tens of millions of learners. He has worked as a teacher, examiner, director of studies, university lecturer, and test developer in the US, UK, Japan, and Thailand. His PhD dissertation won the Jacqueline Ross TOEFL Award for the best dissertation in language testing; he also holds an MBA and has authored over 20 research publications on assessment and educational technology.

Speaker: Dr Jing Wei

Dr. Jing Wei is a Senior Researcher at MetaMetrics Inc., where she conducts research on linking studies. Jing was formerly a Senior Research Associate at the Center for Applied Linguistics, where she managed test development projects for both U.S. K-12 English Language Learner (ELL) and international English as a Foreign Language (EFL) learner populations. She has worked as an EFL teacher, adjunct professor, test developer, and researcher in China and the U.S. She holds a PhD in language testing and regularly presents at national and international conferences.


 

Workshop 2: Standards in the Classroom

Target audience: Teachers and teacher trainers

The Common European Framework of Reference (CEFR), localised versions of the CEFR, and proficiency standards developed in-country are increasingly widely used, often introduced at policy level with the aim of raising learners’ standards of English. These frameworks have a direct influence on the education system, and teachers are often called upon to write tests and create tasks that relate to their descriptors, or to make decisions about whether their students meet the performance standards required at a certain level.

This workshop presents an overview of how language proficiency scales drive the learning system and considers how teachers can exploit the multiple purposes of standards in their practice. It includes a hands-on exploration of standards, with activities centred on understanding the different component areas of the CEFR and how teachers can interpret the descriptors. We will discuss how teachers and learners can leverage standards of language proficiency to improve learning and the measurement of learning. A key aim of the workshop is to give participants a stronger sense of the different levels of the CEFR and what differentiates one level from another. In keeping with the strong practical focus of the event, participants will look at receptive skills tests and be guided through the process of deconstructing tasks and linking them to the CEFR. Participants will also be introduced to procedures for calculating CEFR cut-off scores on language tests.
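As an illustration only (the workshop’s own procedures may differ), the sketch below shows one widely used standard-setting calculation, a modified Angoff method, in which each judge estimates the probability that a minimally competent candidate at the target CEFR level would answer each item correctly. All judge ratings below are invented.

    # Minimal sketch of a modified Angoff cut-off score calculation for one CEFR level.
    # The ratings are hypothetical; operational standard setting also involves judge
    # training, discussion rounds, and impact data before a cut-off score is adopted.

    # Each row: one judge's estimated probabilities that a minimally competent B1
    # candidate answers each of ten items correctly.
    judge_ratings = [
        [0.6, 0.7, 0.4, 0.8, 0.5, 0.9, 0.3, 0.6, 0.7, 0.5],
        [0.5, 0.8, 0.5, 0.7, 0.6, 0.8, 0.4, 0.5, 0.6, 0.6],
        [0.7, 0.6, 0.4, 0.9, 0.5, 0.8, 0.4, 0.7, 0.6, 0.5],
    ]

    # A judge's recommended cut-off is the sum of their item probabilities;
    # the panel's cut-off is the average across judges.
    judge_cutoffs = [sum(ratings) for ratings in judge_ratings]
    panel_cutoff = sum(judge_cutoffs) / len(judge_cutoffs)

    print(f"Judge cut-off scores: {[round(c, 1) for c in judge_cutoffs]}")
    print(f"Recommended B1 cut-off score: {panel_cutoff:.1f} out of 10 items")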

The workshop is designed for teachers who are keen to know more about standards in general and their use in the classroom in particular. It is suitable both for those who have some experience of using standards to set cut-off scores or measure student performance and would like more guidance, and for those who are new to using the CEFR or other standards in practice. Participants can expect a highly interactive workshop and will leave with more hands-on experience of the CEFR.

Speaker: Richard Spiby

Richard Spiby has been a Test Development Researcher with the Assessment Research Group at the British Council in London since June 2016. His main responsibilities involve overseeing operational analysis and developing the receptive skills components of new and existing Aptis test variants. He also works on a variety of assessment development and training projects worldwide.

Richard has previously worked in the UK and Turkey, mainly in the university sector, in test production, management and research. His areas of interest include cognitive processing, strategy use in reading and listening, vocabulary testing and inclusivity in language assessment.

 

Speaker: Sheryl Cooke

Sheryl Cooke is the Director of the East Asia Assessment Solutions Team at the British Council. She leads a regional team that provides language assessment solutions for partners throughout East Asia, including needs analysis, test development, post-test services, and teacher support.

Sheryl has 20 years’ experience in language assessment, including examiner training and item writing. Her qualifications include an MA in Language Testing (Lancaster University), an MA in Linguistics (SOAS), and the DELTA. She is currently a PhD candidate at the University of Jyväskylä (Finland), focusing on the automated assessment of spoken English and its potential implications for ELF. Her research interests include the use of new technologies and the ethics of language assessment in a global context.

 

Speaker: Johnathan Cruise

Johnathan Cruise is the Assessment Solutions Manager of the East Asia Assessment Solutions Team at the British Council. He graduated with a BA in Linguistics and Literature in 1991 and obtained a PGCE in General Primary Education in 1993 before moving to China to train English teachers.

After three years in provincial Teachers’ Colleges, Johnathan taught at Beijing Language and Culture University as an EFL teacher. From 2000, he worked in Shanghai as an education consultant for various notable institutes. He became an IELTS examiner in 2000 and an Examiner Trainer with the British Council in 2008. In 2010, Johnathan completed his MA in Language and TESOL, and his dissertation on interlocutor speech rate in speaking tests received a distinction. He is now involved in standard-setting work, particularly around China’s Standards of English, and in technology-driven language assessment. His current research interests include the effectiveness of visuals in language assessment.


 

Workshop 3: Speaking and Writing Rater Training

In this interactive workshop, I will briefly discuss the importance of using scoring rubrics for the assessment of speaking and writing. I will then take participants through the process of rater training, using a classroom-based writing rubric and sample essays. Participants will learn to align their own internal criteria for assessing writing with the language of a scoring rubric, match features of an essay with descriptors, and justify a numerical score.  Finally, we will discuss how this process can be adapted for assessing speaking.

 

Speaker: Professor Sara Cushing

Sara Cushing (also known as Sara Cushing Weigle) is Professor of Applied Linguistics at Georgia State University and Senior Faculty Associate for the Assessment of Student Learning in the Office of Institutional Effectiveness. She received her Ph.D. in Applied Linguistics from UCLA.  She has published research in the areas of assessment, second language writing, and teacher education, and is the author of Assessing Writing (2002, Cambridge University Press).  She has been invited to speak and conduct workshops on second language writing assessment throughout the world, most recently in Vietnam, Colombia, Thailand, and Norway.  Her current research focuses on assessing integrated skills and the use of automated scoring for second language writing.


 

Workshop 4: Assessing students in the CLIL (Content and Language Integrated Learning) classroom

Language learners in a CLIL classroom face a double challenge: they are learning content (e.g. science) at the same time as they are developing their L2 language proficiency. Professor Llosa will discuss two ways in which we can help learners face this challenge in the classroom. Using science standards and instruction in the U.S. as an example, she will first illustrate how to develop content lessons that create a rich context for language use and development. Second, she will propose an alternate conceptualization of English language proficiency that embraces and leverages the overlap between content and language. She will demonstrate how, by focusing on disciplinary practices rather than language as traditionally defined (e.g., vocabulary, grammar, organization), we can support language learners in the CLIL classroom in ways that simultaneously promote their content and language learning.

 

Speaker: Associate Professor Lorena Llosa

Lorena Llosa is an Associate Professor of Education in the Steinhardt School of Culture, Education, and Human Development at New York University. Her work addresses second and foreign language teaching, learning, and assessment. Her studies have focused on standards-based classroom assessment of language proficiency, validity issues in the assessment of academic writing, and the integration of language and content in instruction and assessment. She is currently Co-Principal Investigator on two projects funded by the National Science Foundation to develop science curricula and assessments that support English learners’ science learning, computational thinking, and language development. Her research has appeared in such journals as Language Testing, Language Assessment Quarterly, Educational Measurement: Issues and Practice, Educational Assessment, Assessing Writing, Language Teaching Research, Language Learning, Reading and Writing Quarterly, and the American Educational Research Journal. Dr. Llosa was awarded the National Academy of Education/Spencer Postdoctoral Fellowship in 2009 and the AERA Second Language Research SIG Mid-Career Award in 2019. Dr. Llosa received her Ph.D. in Applied Linguistics with a specialization in language assessment from the University of California, Los Angeles.
