
EPALE

Electronic Platform for Adult Learning in Europe

Blog

4 steps to an effective evaluation

30/07/2015
by Tim GREBE
Language: EN

Financial resources for adult learning and education tend to be limited across EU Member States. This makes it all the more important to ensure such resources are used effectively and efficiently. But what tells us whether money is well invested? Evaluation is increasingly important in public decision-making and can contribute to answering this question, but for an evaluation to be effective, it has to be carefully designed. Important decisions have to be made when planning an evaluation in the field of (mostly formal) adult education. The key aspects of designing an evaluation are set out below:

1. Evaluations can be formative (improvement-oriented) or summative (judgement-oriented). Deciding on the main goal of the evaluation is important because some techniques (see below) may need years to deliver results, while others (typically more qualitative ones) can provide insights earlier on.

 

2. It must be decided what exactly is to be evaluated. The following table (Gutknecht-Gmeiner, 2009) gives an overview of subjects and related evaluation topics:

 

Subjects | Evaluation topics
Micro-level: A single course, a training measure | Content, didactics, teachers, framework conditions, usefulness, learned skills
Meso-level: An educational institution | Organisation, development of teachers’ skills, curriculum development, infrastructure
Macro-level: A policy intervention, a programme or the overall supply of adult education in a region/country | Changes in the supply of adult education, satisfaction of participants, increases in participants’ labour market success

3. Concerning the learners or programme participants, it must be decided which kinds of questions are to be answered. Kirkpatrick distinguishes the following four levels, which are applicable to all of the subjects mentioned above:

  1. Reaction: the (immediate) satisfaction with the intervention (with the course, the institution, the programme);
  2. Learning: the increase in skills and competencies;
  3. Behaviour: the change in behaviour among participants (use of acquired skills in the workplace);
  4. Results: the outcomes arising from the change in behaviour, e.g. improved job satisfaction or increased labour market integration.

 

4. As for methods, it is widely agreed that qualitative and quantitative methods should (in most cases) be used jointly. Especially in large-scale evaluations of policy interventions (macro-level), rigorous quantitative evaluation techniques have their place, particularly if an evaluation focuses on behaviour and results. These dimensions are also often of major interest to policy-makers.
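
To give a flavour of what such a quantitative technique involves, here is a minimal difference-in-differences sketch in Python. All figures and variable names are hypothetical and purely illustrative; they are not taken from any real evaluation, which would work with survey microdata, covariate adjustment and proper standard errors.

```python
# Minimal difference-in-differences sketch (illustrative only).
# Hypothetical monthly earnings (EUR) for a participant group and a
# comparison group, before and after a training programme.

def mean(values):
    return sum(values) / len(values)

participants_before = [1800, 2100, 1750, 2000]
participants_after  = [1950, 2300, 1900, 2150]
controls_before     = [1850, 2050, 1800, 1950]
controls_after      = [1900, 2100, 1850, 2000]

# Change over time within each group ...
change_participants = mean(participants_after) - mean(participants_before)
change_controls = mean(controls_after) - mean(controls_before)

# ... and the difference between those changes: the estimated programme
# effect, valid under the assumption that both groups would otherwise
# have followed the same trend.
effect = change_participants - change_controls
print(f"Estimated effect on monthly earnings: {effect:.0f} EUR")
```

The role of the comparison group is exactly the counterfactual problem discussed below: the change among non-participants stands in for what would have happened to participants without the intervention.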

 

By means of an example, I will demonstrate the main challenges in a summative evaluation at the macro-level:

The programme that was evaluated is a large-scale adult education voucher system (Bildungsprämie - results published here). Participants can get a voucher for a course or training related to their occupation. The evaluation looked at all of the dimensions mentioned above, but focused on changes in participants’ behaviour (mobilisation effects) and results (an improved employment situation).

 

The main challenge came from the experience that simply asking participants about the effects of an intervention is prone to bias: there is social desirability bias, but also the difficulty of imagining the counterfactual (what would have happened had I not received the voucher?). The evaluation responded by combining self-reported assessments with pre-/post-comparisons, control-group designs and experimental approaches. This included:

  • In repeated surveys among groups of participants, the evaluation tried to find out whether persons who were educationally inactive before their first participation subsequently engaged in more regular educational activities;
  • In an experimental treatment, one group of eligible non-participants was informed intensively about the programme; another group was not. Both groups were asked about their educational activities one year later to estimate mobilisation effects;
  • In a control-group treatment, participants were compared to persons who wanted to participate but could not do so for external reasons (e.g. the course did not take place). Both groups had applied for a voucher and could thus be assumed to be similar in their initial motivation. Using matching analysis, differences in their qualifications, earnings and job status in later years could be attributed to the training (a sketch of such a matching estimate follows this list).
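
As a rough illustration of the matching step mentioned in the last point, here is a minimal nearest-neighbour matching sketch in Python. The covariates (age, prior earnings), the data and the distance function are all hypothetical; the actual evaluation used a far more elaborate procedure.

```python
# Minimal nearest-neighbour matching sketch (illustrative only).
# Each person is (age, prior_earnings, later_earnings); the numbers are
# hypothetical, not taken from the actual evaluation.

participants = [(34, 1900, 2100), (45, 2200, 2350), (29, 1700, 1950)]
controls     = [(33, 1950, 2000), (47, 2150, 2200), (30, 1650, 1800),
                (40, 2000, 2050)]

def distance(a, b):
    # Simple scaled Euclidean distance on the matching covariates.
    return ((a[0] - b[0]) / 10) ** 2 + ((a[1] - b[1]) / 100) ** 2

# For each participant, find the most similar non-participant and
# compare their later outcomes.
effects = []
for p in participants:
    match = min(controls, key=lambda c: distance(p, c))
    effects.append(p[2] - match[2])

# Average outcome gap between matched pairs: a crude estimate of the
# effect of the training on later earnings.
print(f"Estimated effect: {sum(effects) / len(effects):.0f} EUR")
```

Matching each participant to the most similar non-participant approximates the counterfactual outcome; the credibility of the estimate rests on the assumption that the matched groups differ only in their participation.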

All of these steps were challenging: survey respondents had to be interviewed again one and two years after the initial interview, and the groups had to be large enough to allow robust analysis. One response was to engage in “panel maintenance” efforts (e.g. informing participants about the first results of the evaluation).

More generally, the example shows that rigorous summative evaluations take time (and money), especially at the macro-level. Simply asking participants once is often not enough. However, this should not be an obstacle when large amounts of scarce public funds are invested in adult education.

Tim Grebe is an evaluation expert and works as a Senior Researcher at InterVal GmbH, Berlin. He has participated in numerous programme evaluations in the field of education and labour market policy.
