TIER UvA centres on the educational knowledge infrastructure chain. Educational intervention forms the key link in this chain. Three types of educational interventions are distinguished: interventions at the level of the education system as a whole (the macro-perspective), at the level of teaching institutions (the meso-perspective) and at the level of the primary process of learning and instruction (the micro-perspective). Examples of micro-level interventions include an IT application used for teaching purposes or a didactic method; at the meso level, measures to change class size; and at the macro level, interventions in policies for eliminating educational disadvantage.
Within this focus area, educationalists and economists will work together on distinct projects. Educationalists will take primary responsibility for the methodological design and content of research into the effects of educational interventions once these have been developed. Economists will concentrate on performing economic assessments (cost-effectiveness analyses) of these interventions.
Knowledge about the effects of educational interventions is still limited in the Netherlands. It is difficult to say decisively whether a particular intervention works in practice. To answer this question, we would need to know what would have happened had the intervention not been implemented (the counterfactual condition). Only then can causal links be proven. The theory of 'counterfactuals' refers to the notion that an event (effect) would not have occurred if, contrary to fact, a previous event (cause) had not occurred. Observing this directly is, however, impossible in principle, given that both conditions cannot be observed for the same pupil at the same time.
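The problem can be made concrete with a small simulation using made-up numbers (a uniform true effect of +5 test-score points is an assumption for illustration only): each pupil has two potential outcomes, but only one of them is ever observed, so the individual causal effect can never be computed from observed data alone.

```python
import random

random.seed(0)

# Hypothetical pupils: each has two potential test scores, one without the
# intervention (y0) and one with it (y1). The individual causal effect is
# y1 - y0, but in reality only one of the two can ever be observed.
pupils = [{"y0": random.gauss(50, 10)} for _ in range(1000)]
for p in pupils:
    p["y1"] = p["y0"] + 5  # assumed uniform true effect of +5 points
    p["treated"] = random.random() < 0.5
    # whichever potential outcome goes unobserved is the lost counterfactual
    p["observed"] = p["y1"] if p["treated"] else p["y0"]

# Computable here only because the simulation knows both potential outcomes:
true_effect = sum(p["y1"] - p["y0"] for p in pupils) / len(pupils)
print(true_effect)  # 5 by construction, yet unrecoverable from "observed" alone
```

In real data only the `observed` column exists, which is exactly why researchers must fall back on comparisons between groups.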
In order to nevertheless gain an impression of policy effects, researchers rely on control groups consisting of people who have not been affected by the intervention. In the context of Dutch education practice, it has proven difficult to find reliable control groups. Policy is often implemented at the national level, which means that all pupils are affected by the same policy instruments. Differences between schools, for example in class size, rarely arise by chance and are almost always the result of decisions by parents, pupils or teachers. Teaching methods are generally implemented across the board. Consequently, any comparison between different groups of pupils raises the question of whether the groups really are equivalent, for example in terms of socio-economic background or aptitude. As such, inter-group variation in areas like pupil linguistic ability in small versus large groups could plausibly be the result of specific policy (e.g. class size reduction), but could also be tied to unobserved differences between the groups, such as higher motivation among pupils in smaller classes. The literature shows that taking no or insufficient account of such selection effects can lead to erroneous conclusions about policy effects. Such errors may concern not only the size of the effects but even their direction (i.e. an effect considered an improvement may in fact have been a deterioration, or vice versa).
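The selection problem described above can be sketched in a few lines (all numbers are invented for illustration): if pupils with higher unobserved motivation are more likely to end up in small classes, a naive comparison of group means suggests a benefit even when the true class-size effect is zero.

```python
import math
import random

random.seed(1)

# Self-selection sketch: motivated pupils are more likely to be in small
# classes, and motivation raises scores. The class-size effect itself is zero.
pupils = []
for _ in range(10_000):
    motivation = random.gauss(0, 1)                     # unobserved trait
    in_small_class = random.random() < 1 / (1 + math.exp(-2 * motivation))
    score = 50 + 5 * motivation + random.gauss(0, 5)    # true class-size effect: 0
    pupils.append((in_small_class, score))

small = [s for t, s in pupils if t]
large = [s for t, s in pupils if not t]
naive_gap = sum(small) / len(small) - sum(large) / len(large)
print(naive_gap)  # clearly positive, despite a true effect of zero
```

The naive difference in means mistakes the motivation gap between the groups for an effect of class size, which is precisely the kind of error in size and even direction that the literature warns about.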
Randomisation is generally considered to be the best method for forming equivalent groups and for preventing bias resulting from (self-)selection. In practice, however, there are additional equivalency problems to be dealt with, such as: 1) non-compliance, 2) incomplete data and 3) 'unblinding' (participants know whether they belong to the intervention group or the control group). Each of these three problems occurs during policy intervention assessments and can undermine the reliability of the assessment.
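Why randomisation forms equivalent groups can be shown with the same invented motivation trait as above: when assignment is a coin flip, treatment status is independent of pupil characteristics, so the groups are balanced in expectation even on traits no one measured.

```python
import random

random.seed(2)

# Random assignment: a coin flip decides group membership, independent of
# any pupil trait, observed or unobserved.
motivations = [random.gauss(0, 1) for _ in range(10_000)]
assignment = [random.random() < 0.5 for _ in motivations]

treated = [m for m, a in zip(motivations, assignment) if a]
control = [m for m, a in zip(motivations, assignment) if not a]
imbalance = abs(sum(treated) / len(treated) - sum(control) / len(control))
print(imbalance)  # close to zero: the groups are balanced on motivation
```

Note that this balance holds only under full compliance and complete data; the three practical problems listed above are exactly the ways in which real trials drift away from this ideal.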
Over the past decade, international literature in the field has achieved significant advances in developing methods that can convincingly be applied to measure the effects of educational interventions on the macro, meso and micro levels. New forms of empirical research into the effectiveness of various forms of educational policy have proven highly valuable, particularly when it comes to conducting trials and making use of coincidentally arising circumstances that bear a strong resemblance to such trials (natural experiments). The effects of educational interventions are measured by comparing outcomes observed in an experimental group that has been subjected to a certain policy with those of a control group that has been chosen at random and has not been subjected to this policy. Such comparisons furnish significantly more reliable evidence of the effects of educational interventions than trials done in the past.
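Putting the pieces together, a minimal sketch of such a randomised comparison (again with an assumed true effect of +5 points, for illustration only): under random assignment, the simple difference in group means recovers the true effect, unlike the self-selected comparison shown earlier.

```python
import random

random.seed(3)

TRUE_EFFECT = 5.0  # assumed for illustration; unknown in a real trial

# Randomised trial sketch: a coin flip assigns each pupil, and the
# intervention shifts treated pupils' scores by the true effect.
scores_treated, scores_control = [], []
for _ in range(10_000):
    baseline = random.gauss(50, 10)
    if random.random() < 0.5:
        scores_treated.append(baseline + TRUE_EFFECT)
    else:
        scores_control.append(baseline)

estimate = (sum(scores_treated) / len(scores_treated)
            - sum(scores_control) / len(scores_control))
print(estimate)  # close to the true effect of 5
```

Because assignment is random, the difference in means is an unbiased estimate of the intervention's effect; the cost-effectiveness analyses mentioned earlier would then weigh such an estimated gain against the cost of the intervention.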