Making Educational Reform Work

One of the projects I am currently working on is called English for Universities (EfU). This has been running for four years in Ukraine, with a dual focus on improving the quality of ESP teaching in universities and developing the English-medium instruction skills of lecturers in a full range of academic disciplines. We are now doing the final evaluation of the project, and in the last two months I have spent three weeks in Ukraine visiting universities, meeting teachers and students, observing classes and discussing the project with Heads of Department. The results of the evaluation will be presented in February 2019, but I would like to reflect here on the ESP component of the project and on the factors which have shaped the considerable impact it has had on over 30 participating universities across the country.

In brief, as described in a baseline study by Bolitho & West (2017), ESP in Ukrainian universities was characterised by a narrow focus on teaching subject-specific terminology, reading and in-depth analysis of technical texts, limited consideration of the professional needs graduates will have in relation to English, and an overall approach to teaching and learning which was teacher-led and non-interactive. EfU thus sought to promote a more contemporary approach to ESP.

The model for professional development embedded in EfU was not particularly radical; ESP teachers attended three five-day intensive blocks of training on different aspects of ESP (collectively, these blocks constitute the British Council’s CiVELT course). They returned to their institutions and were expected to ‘disseminate’ what they learned to their colleagues, but otherwise the approach to professional development adopted can be described as an intensive off-site workshop-based training model. I have written elsewhere about the shortcomings of such models and reviews of different approaches to teacher professional development have also suggested that, in terms of impact on what teachers do, workshops are often not particularly effective. However, the experience of EfU suggests otherwise and it is useful to consider key factors that have contributed to its success (as evidenced in the extensive impact evaluation data that have been collected) in bringing about practical change in what ESP teachers do.

Some of these key factors were:

  1. Systematic baselining. The baseline report by Bolitho & West (2017) was noted above. This described the state of ESP in Ukrainian universities before EfU and provided clear evidence of the need for change which acted as a stimulus for institutions to participate in the project.
  2. Institutional support. EfU did not work simply at the level of individual ESP teachers; Heads of Department (HoDs) were also heavily involved, attending dedicated training events and being tasked with the central role of facilitating among their staff the new approaches to ESP that were being promoted on the project.
  3. A new ESP syllabus. Early in the project a new ESP syllabus was adopted by all participating universities. This provided a template for the development of new ESP courses for different disciplines. The collective adoption of this new syllabus ensured that the changes to teaching and learning ESP teachers were being encouraged to make were aligned with the syllabus they were required to implement.
  4. Collective enterprise and ownership. EfU was characterised by a sense of collective enterprise and ownership, on a national level. HoDs and participating teachers formed an enduring community which persisted throughout the project and which will undoubtedly persist beyond its formal conclusion.
  5. Action planning. All participating departments were engaged in developing, reviewing and sharing action plans which allowed them to define the work they were doing to enhance the impact of EfU. This periodic engagement with their action plans meant that the profile of EfU was kept high and institutions were able to monitor the progress they were making in promoting change in the teaching of ESP.
  6. Dissemination. Some 300 ESP teachers were directly trained, but many more had indirect access to new ideas about ESP through workshops, presentations and similar sessions that those attending the training organised on return to their departments. This substantially extended the reach of the project.
  7. Teacher motivation. It is clear too that many ESP teachers were ready to change and keen to engage in the kinds of professional development EfU provided. Post-training, teachers returned to their workplaces with a new vision of ESP which they were keen to implement (and which their departments supported them in doing). In educational reform, teacher motivation is vital because it drives teachers to apply new skills and knowledge.
  8. High-quality training. Finally, the quality of the training during EfU was generally seen by teachers to be very high. High-quality training does not guarantee that subsequent transformations in practice will occur. However, supported by the other facilitative factors noted here, it provided an excellent basis for changes in teachers’ work and that of their departments more generally.

Of course, departments and individual teachers varied in their engagement with EfU and the impact of the project did vary across institutions. However, overall, EfU provides valuable insights into the factors that allow professional development to achieve practical change. In particular, the project highlights the key roles that institutional support, committed leadership, a sense of collective enterprise and teacher motivation play in bringing about positive educational reform.


Systemic barriers to practitioner research

I’ve just attended a conference in China which focused on supporting teachers in becoming researchers. Most participants were university College English teachers and one issue I found myself reflecting on throughout the event was whether our concern was academic research (for publication and career advancement) or practitioner research (for professional development and enhancing teaching and learning). This was never quite clear to me and China is a great example of the tensions that exist for teachers between these two kinds of research.

Much has been written about the value of practitioner research (variously described as teacher research, action research and similar terms) as a way of helping teachers systematically study, understand and improve their work. I have promoted the idea for many years and continue to support it. It is, though, naïve to promote practitioner research without reference to the socio-cultural and educational contexts in which teachers operate. While as an outsider I want to be cautious in my analysis, my experience in China over several years suggests that there are many factors which actively discourage the kind of pedagogical, experimental, innovative, critical, small-scale, reflexive and often collaborative inquiry that practitioner research entails. For example:

• Teachers (at all levels but especially at university) are under pressure to publish in high-ranking academic journals;
• Quantity (of publications) is a key measure of success;
• Academic products matter more than the process of inquiry;
• The practical utility of the research teachers do is not a core concern;
• Teachers are trained in traditional research methods but not for practitioner research;
• Research is often equated with ‘writing papers’;
• Practitioner research does not further teachers’ careers;
• Mechanisms that facilitate the dissemination of practitioner research do not exist;
• Centralised curricula and assessments discourage experimentation;
• Time and other resources to support practitioner research are limited;
• A competitive environment can discourage collaboration and the sharing of ideas.

These factors are not in themselves problematic – they are clearly aligned with a policy to promote high levels of academic productivity among teachers. And some teachers, despite the challenges, have found ways of doing practitioner research and can provide inspiration to others. But, overall, the environment strongly favours academic research. It is no surprise, then, that many teachers feel compelled to focus more on how to complete a publishable research project than on professional inquiry. And, given the current situation, while there may be value in creating an awareness of practitioner research, I am inclined in my continuing work with teachers in China to support them in doing well the kind of research they are required to do, rather than to promote alternatives which currently may have limited perceived relevance or feasibility for them.

I’d be interested in comments from colleagues in China on my analysis and reflections from readers elsewhere who work in similar or contrasting language teaching contexts as far as practitioner research is concerned.


Teacher Confidence

I’ve worked on a few teacher development projects recently where one of the objectives has been to boost teachers’ confidence, both as speakers and teachers of English. For example, on the EfECT project in Myanmar, participants’ confidence in their English and teaching skills was assessed at the start, mid-point and end of the project, and one hoped-for outcome was that self-rated confidence would be higher at end-project than at baseline (it wasn’t always – more on this below). When improved confidence is seen as a desirable outcome of teacher development work, this implies that confidence – our beliefs about our own capabilities – is a component of, or at least contributes in some way to, teacher competence. Confidence (often discussed under the heading of self-efficacy) is believed to influence how much effort we are willing to invest in an activity, but the relationship between confidence and performance is not straightforward.

In fact, it has been argued that confidence is often inversely related to competence, as the interesting selection of quotations here attests. Some research on teaching, too, has shown that teachers may be confident in their ability even though observations of their work suggest this confidence is unwarranted. This raises interesting issues in the context of teacher development: we want teachers to be confident, but it is essential that this confidence is moderated by a healthy level of self-awareness so that teachers’ beliefs about their competence are realistic. While high levels of confidence may mean that teachers are more willing to engage with new practices in the classroom, longer-term professional growth is more likely when teachers’ assessments of their capabilities are grounded in reality. This suggests that attempts to develop teacher confidence need to include opportunities for teachers to reflect on evidence of the effectiveness of their work. This evidence can be generated through teachers’ own reflective practices and/or facilitated externally, for example, with the help of a mentor.

As I said above, the relationship between confidence and competence is not straightforward. A finding that has emerged from some recent studies (for example, the EfECT paper and Coburn 2016) is that the development of teacher confidence during professional development may follow a U-shape: levels of confidence may initially be (unrealistically) high, but as teachers become more knowledgeable, more aware of their current competence and of how much they can still learn, confidence may actually decrease. In other words, professional development may, at least initially, lead to lower teacher confidence.

It does depend, too, of course, on how confidence is being measured. Likert-scale questionnaire items are commonly used (see Borg & Edmett, 2018) and, because these are not grounded in immediate concrete experience, such measures are more susceptible to being unrealistically high (particularly if respondents are not comfortable admitting limited competence). Measures of teacher confidence that reflect real recent experience in the classroom and which are informed by evidence are more likely to be realistic. I still remember how, over 20 years ago, I was new to a school and was unexpectedly asked to teach a lesson in a language laboratory. I had never used the equipment and the lesson was a disaster. If I’d been asked to rate my competence in using language labs after that lesson, my assessment – informed by immediate experience, evidence and awareness – would have been realistically low. This story reminds us too that confidence will vary across specific areas of teaching, and attempts to assess it should be targeted rather than general. For example, in another recent project in Azerbaijan, teacher confidence was examined specifically in relation to the teaching of speaking.

Confidence (not simply in terms of how we appear to others but more fundamentally in what we believe about ourselves) is undeniably desirable. Confidence in itself, though, does not equate to competence. For professional development, teachers (and teacher educators) need opportunities to develop realistic assessments of their own competence which can then provide a springboard for targeted growth and a more informed assessment of their abilities.

Some key questions for us to consider then are:

  • What aspects of our work are we most/least confident in?
  • What are our judgements about our own competence based on?
  • What can we do to ensure these judgements are grounded in evidence?


Teacher evaluation

Two new reports on teacher evaluation were launched this week. The first is a global literature review of teacher evaluation, which examines different ways of evaluating teachers and makes recommendations for teacher evaluation in ELT. One interesting insight from this report is that while the literature on teacher evaluation is vast, detailed accounts of its implementation in state sector ELT contexts are limited. What happens on the ground, then, remains something of a mystery (in some cases, even to teachers themselves!).

The second report is an analysis of teacher evaluation in India. It provides an overview of some of the approaches to teacher evaluation which exist in the country and discusses the factors that need to be in place to make teacher evaluation work effectively in this context.

This attention to teacher evaluation in ELT is timely. Teacher quality affects learning outcomes, and so improving teachers is key to improving learning. However, we cannot really help teachers improve without some sense of their current levels of competence, which is where teacher evaluation comes in. The challenge is to make teacher evaluation a positive and motivating experience for teachers and to use it in a way that both provides quality assurance and helps teachers improve.



Why do teachers assess English the way they do?

I’ve been based in Slovenia for a few years now and regularly come across examples of how English is assessed in primary schools. I can’t say with empirical certainty how typical what I see is, but I’m tempted to believe it is widespread, simply because the same kinds of test items and approaches to marking keep appearing. Here’s one example.

In its favour, the exercise uses a visual to stimulate a response. But what’s frustrating is the severity with which it is marked: numbers 2 and 3 were awarded 0 marks because of spelling mistakes, while, if we accept one/a as possible alternatives, the half mark deducted on the final item also seems severe. The fifth item also received 0, but this is justified given the grammar and spelling mistakes. My overall reaction to this example is that the marking is excessively harsh.

Here’s another one:

It’s useful for students to know the names of cities and countries in English. But this exercise is as much a test of general knowledge as it is of English. And again, while the student left a) and d) blank (the teacher filled these in while marking), the teacher’s decision not to award anything for b) (‘Japan’) is harsh.

One final example:

This exercise required students to complete the sentences with a possessive adjective. I’m interested in items b, f and g. These were all marked wrong, but one can imagine contexts where they would be meaningful and grammatically correct. The problem here is poor test design (a lack of clear context) and the teacher’s failure to consider anything beyond the obvious answers.

I’m not suggesting that form-focused exercises which reward accuracy have no value – of course they have a role to play. But it is problematic when these are the recurrent exercises students encounter in tests. Decontextualised examples, assessing general knowledge, harsh marking, and not rewarding unusual but grammatically correct answers all add to this problematic situation. Learning English is thus reduced to knowing how to complete discrete-item exercises of this kind. Such items are most obvious in written tests, but a fair amount of oral testing also takes place, and I’ve heard of cases where primary school students are, for example, asked to explain (in their own language) when the present continuous or some other grammatical form is used. English tests can thus become an opportunity not for students to show what they know but for teachers to highlight what children do not. This whole approach to testing seems patently wrong to me (for one thing, it puts children off English), so I am left with several questions:

  • Why do teachers test their students in this way?
  • Is it because they genuinely believe that accuracy and explicit grammatical knowledge are what primary students need most?
  • What levels of assessment literacy do primary school teachers of English have? The British Council’s Teacher Assessment Literacy project may be useful here, including the video on assessing young learners.
  • What other factors shape teachers’ assessment practices (for a list, see an earlier blog on teaching grammar)?
  • How common are the above approaches to the classroom testing of English in Slovenia and in primary schools elsewhere?
  • How can teachers be encouraged to reflect on and review such practices?

It would be great to hear what primary school teachers of English in Slovenia think and to hear about experiences elsewhere too.
