This article asks why we test learners of English. Before I start, let's get some terminology straight. I'm not talking about exams. We as a society need exams. Governments and large institutions couldn't function without exams. Governments can't deal with tiny sub-sets of people: individuality prevents them from doing their business of dividing people up into large groups, separating out, say, the kind of person who will go on in life to run the laundrette on the corner from those who will go on to run our banks into the ground. Governments have to plan via demographics. How many places will universities need in 20 years' time? Will this area need a new or different type of school? Should we encourage people into the IT industry? This basic business of government - sorting people into socio-economic groups largely through the education system - has been going on for years.
And there are some other exams, too, like driving tests, or IELTS, that need to exist to test a particular function: for example, whether Person A will be able to function in English on University course X (the English not being able to speak any other languages).
Okay, so we can see that there is an argument that exams need to exist. But tests? And by "test" I mean anything that looks or acts like a test and that hasn't been designed by experts at a national level. Do they need to exist? Most teachers say yes. Let's look at some of the arguments why.
I need to see if my students have learned what I've taught them.
Well, this is the easiest one to answer. The answer is a simple "No, they haven't". Why? Well, because they have learned what they have learned, and not what you have taught them. It has often been pointed out that the relationship between "teach" and "learn" is very different from that between "sell" and "buy". You can't say "I sold him the bike, but he didn't buy it". Yet all round the world staffrooms are filled with people saying "I taught the present perfect but they still haven't learned it". Learners learn what they notice, not what the teacher notices for them. There may be happy occasions where the teacher helps the learner to notice. But these are few and far between, because there isn't much time to encourage or assist learners in paying attention to their individual intake. After all, we must cover the syllabus so they can pass the test.
I need to see if my learners have made progress
Another easy one. The answer is that your test won't tell you this. The chances that we could devise a test that tests exactly the same items or skills on Occasion A as on Occasion B are tiny. And what would it tell us anyway? "This person has made progress". Oh. Good. Can it tell us why? Can it tell us how? Can it tell us whether, if we had taught differently, they would have made the same progress? Or less? Or more? Should they have made more progress than they did? Then you start asking "What is progress?", and we disappear down the rabbit hole of madness.
And progress tests can easily be misused. Sometimes teachers want to prove to themselves that they have been Doing A Good Job. Sometimes Academic Directors use them to prove the opposite – as a form of teacher appraisal: "none of her students knew their reported speech!"
Of course, progress is entirely a perceptual construct, so really it would be better to ask the learner "Do you feel you have made progress?" Our learners might then consider the question, and this might lead to a discussion about what helps them learn, how they notice progress, how the teaching process could help more. But of course that syllabus means we haven't got time. And the learners know the game. They will say "Yes, I have made a lot of progress. Could you write that on my report, please?" Because they realise that schools value tests more than learning.
I need to know what they don't know
Another familiar test is the placement and/or needs analysis test. These are often the saddest tests. A group of teachers with a dodgy take on grammar and testing will devise a test covering the traditional structures in a traditional order, with a few prepositions and phrasal verbs thrown in. This will represent The Ladder of English (or any other language), up which prospective learners will be sent, like newly press-ganged recruits on 18th-century sailing ships, up into the masts amid the howling winds of the Mixed Conditional and the Gales of Inversion. In colleges and offices some of these items will be replaced by Special Vocabulary and be born again as ESP. Does "the language of negotiation" come higher or lower than "describing graphs"? The tragedy is that, once this information is collected and the scores assigned, what does it mean? Who will interpret it, and following what logic? Why test these things indirectly when you could simply ask a question? It's as if involving the learner is somehow a threat: we need to prove our professionalism by producing - yes! - a special syllabus to follow. And then test.
A waste of time
Let's face it. Most testing that we do today is a waste of time. It has all the trappings of good, responsible teaching, but essentially it is just a time-consuming activity. Teachers administer tests that take up useful class time (unless, of course, they're being used as a form of collective class punishment). And then comes the marking… "Do we give half-marks or not?" "I think she's shown she understands the questions." "Does spelling count?" "Is that an 's' or a squiggle?" Hours of this stuff, using all your breaks at school or late at night while the family watches TV in another room wondering where you are. To produce - what?
Registration software at a school where I once worked allowed us to enter a single percentage mark to sum up a learner's year of learning. Yes, we had to summarise Peter. We had to balance out his reading difficulties and his handwriting issues with his wide vocabulary and his keen interest in the classes, his variable control of past tenses, his constructive and leading role in group work, and his high total of absences due to taking his sister to school when his mother was working. When I asked where I could enter these comments, I was told the software didn't keep comments, just percentages. Okay then. Let's give him, erm, 58.5%. And round it up. Of course, every teacher in the school used slightly different criteria and assigned their percentages in different ways. The school thought that made us look unprofessional. So they told us to write a test to make it fairer.
Testing. Yeah. Whatever…
By Andy Baxter