This is the first question that Dr. Cathy Baumann, Senior Instructional Professor and Director of the University of Chicago Language Center (CLC), asks the participants in her pedagogy workshops. The answer?
It depends on where you’re going.
Two weeks ago, we spoke with Ahmet Dursun, Director of the Office of Language Assessment at the CLC, regarding student assessment. We hope that his advice is useful for instructors who are preparing for finals season this winter quarter. This week, in the spirit of helping instructors prepare for their spring quarter classes, we connected with Cathy, from whom we have much to learn about structuring and developing courses and using reverse design.
Put simply, reverse design (like reverse engineering) involves identifying the goals of your course and then identifying the steps it will take to get there. Many instructors select materials for a class and then create tests that cover that material. Reverse design instead asks instructors to first consider what students should know or be able to do by the end of the quarter, then determine what assessments would best measure those outcomes, and finally decide what a course preparing students for those assessments might look like.
“At the end of your course, if your student walked out of the room and someone asked them, ‘What’d you do in there?’ what would you want the student to say? When we teach, we make hundreds, if not thousands, of decisions: which materials should I use? Which textbook? What will be homework? What will I do in class? You can make better choices about all those questions if you know where you’re going, if you know what your outcomes are, and if you know how you want to test them. We encourage people to think deliberately about that.”
Cathy and her colleagues at the Language Center organize multiple workshops both for UChicago instructors and for language instructors at other universities, and she has noticed that one thing is common among the instructors she works with, regardless of their discipline or level of experience: many have very little training in assessment. That’s where Ahmet’s expertise plays a role. The CLC’s workshops aim to give instructors a better understanding of their approach to course design, helping them identify outcomes, build assessments, and then realign their curricula in effective ways. She says that the most gratifying thing about the work is seeing instructors feel empowered after using this approach for their own classes. They realize the agency they have to realign existing courses or design new courses after going through the process of identifying outcomes and creating assessments.
If you’re as motivated as we were after hearing Cathy describe this approach, here is a step-by-step list of questions to consider when structuring your spring courses:
- At the end of the quarter, what should your students know or be able to do? Articulate that clearly. This could be about content knowledge and/or functional abilities/skills.
- How exactly will students demonstrate these outcomes? How do you measure that? Would you be able to identify that a student is or isn’t achieving the goals, or is doing so at a high or low level of proficiency?
- What sorts of assessments will help you measure your students’ performance? How can you grade them?
- How do you get students from here to there? What materials should they read or view? How is class time spent? What kind of assignments will they have? What kind of formative assessments?
(Formative assessments are the tests and quizzes throughout the quarter that help you and your students check in on progress toward the course goals. They provide feedback to the students, and their level of success gives you feedback as well!)
It is important to view your curriculum as a kind of accountability: you’re telling your students that at the end of the quarter they will know this or be able to do that. You have to get them there.
If your students succeed at your assessments, you know the curriculum was successful. If not, you get a chance to look back and ask, what happened? What material needed to be covered more, or less? What would have made classroom time more effective? Then you teach again and test again and tweak again.
As you’re putting together this puzzle, assessment is the key piece. The CLC calls this assessment-driven pedagogy. One very important question to ask yourself is, are my assessments actually assessing what I think they’re assessing?
For example, when Cathy taught introductory German, at the end of the year she wanted to find out whether students could actually function at basic levels if they were dropped into the middle of a German-speaking city. That’s an important element in reverse design for language: that we are preparing students to use language in the real world, not in the classroom. Cathy could organize an oral exam and give students a list of seven possible questions to reply to and three possible roleplay scenarios, telling them that they would be asked to respond to three of those questions and engage in one of those scenarios – but that test would not be valid. That is, the assessment would not define “speaking the language” correctly. When you are speaking in the real world, you don’t know the questions ahead of time, and you don’t know what conversations may come up. Therefore, a test like the one described above does not actually help Cathy determine if her students are prepared to function in the language in the way they would in the real world.
In a more realistic oral exam, the students would not be able to prepare for specific questions beforehand. This might make the assessment seem more difficult, which means it is now time to determine whether the curriculum sufficiently prepares students to maintain impromptu conversations and navigate different scenarios. Throughout the quarter, students must be given opportunities to practice tasks that approximate the skills they will need on the assessment.
Another example that Cathy uses to define the process of reverse design and the role of valid assessments involves graduate reading exams. When graduate students were being tested on their ability to conduct research using secondary sources in another language, it used to be that they were given 600 or so words from a random text to translate. This assessment did not approximate what graduate students, or any scholars, do in the real world with secondary literature. The goal is not to translate directly, but to read the text in order to gain information from it. The modified version of this assessment now asks the students to read an excerpt from a scholarly article and write down what they have understood, and then answer critical questions about the content, keeping them focused on the information in the text rather than translating word-for-word. This tests their ability to read and understand, which is the goal for the graduate student, rather than their ability to translate.
Cathy notes that before this assessment was modified, students became less enthusiastic about using second-language secondary sources after taking the translation course and test. This is called negative washback: the goal and the assessment do not align, so the test has a negative effect on the students’ understanding or enjoyment of the material. When structuring assessments, Cathy aims for positive washback. When everything is aligned – the outcome, the test, and the curriculum – students feel that they are gaining proficiency in a skill and being given the tools to succeed on the assessment, and ultimately to use that skill in the real world.
Never one to miss a teaching moment, Cathy asked us one more question as we wrapped up, one that captures how languages used to be taught and tested versus how they are taught now: “Would you get into a car with someone who knew the history of cars, all the parts of a car, and how to repair cars, but had never driven a car?”
The answer: no.
“Right. Now in our classes and assessments, we let the students drive the car.”
Catherine Baumann is Director of the University of Chicago Language Center. She received her Ph.D. in Second Languages and Cultures Education from the University of Minnesota, specializing in reading comprehension and language testing. She is a certified ACTFL tester and trainer and consults for high school and college language programs on a variety of curricular and assessment-related issues. She oversees all programs in the CLC.