World-wide barriers and enablers to achieving evidence-informed practice in education: what can be learnt from Spain, England, the United States, and Germany?

J. R. Malin, C. Brown, et al.

Join Joel R. Malin and colleagues on a vital exploration of evidence-informed practice (EIP) in education across Catalonia, England, Massachusetts, and Rheinland-Pfalz. Discover how accountability pressures, cultural factors, and institutional features shape varying patterns of evidence use in education systems worldwide.

Introduction
The paper investigates how to achieve more consistent evidence-informed practice (EIP) in education globally. It applies a social cohesion/regulation matrix together with an institutional analytic lens to compare four systems—Catalonia, England, Massachusetts, and Rheinland-Pfalz. The aims are to explore and critique a new analytical approach for studying EIP across systems and to generate insights into fostering EIP that may translate to other social policy sectors. The study posits that research evidence is only one influence on practice and that its use depends on favourable conditions across micro, meso, and macro levels. It also argues that the proposed dual theoretical/methodological approach can aid understanding and improvement of EIP within and across systems.
Literature Review
The paper reviews evidence on EIP and its potential benefits. Prior research suggests engaging with research and data can improve teaching and student outcomes (e.g., Mincu, 2014; Rose et al., 2017; Lai et al., 2014; Van Geel et al., 2016), with proposed mechanisms including problem identification, design of instructional strategies, conceptual development, and programme evaluation. However, there is limited evidence on facilitating EIP at the school level and a persistent research–practice gap (Coldwell et al., 2017; Graves and Moore, 2017; Whitty and Wisby, 2017). Comparative analyses of evidence use have been scarce; OECD’s “Evidence in Education” (Burns & Schuller, 2007) and EIPPEE provided cross-national insights but lacked a common framework directed at practice-level use. Darling-Hammond et al. (2017) noted high-performing systems support research-informed, collaborative teaching. Systems approaches to knowledge exchange (Best & Holmes, 2010) motivate the present dual framework: a cohesion/regulation matrix (Hood, 1998; Chapman, 2019) and institutional theory (Powell & DiMaggio, 1991; Martin & Williams, 2019) to understand contextual norms, rules, power dynamics, and isomorphic pressures shaping evidence use.
Methodology
Design: Comparative multiple-case analysis of four education systems (Catalonia, England, Massachusetts, Rheinland-Pfalz) using a dual analytical frame: (1) a social cohesion/regulation matrix to position systems by macro-level cohesion and regulation; (2) institutional theory to analyse meso- and micro-level norms, rules, power relations, and organisational behaviours affecting evidence use.
Sampling: Cases were selected strategically for diversity across the matrix dimensions and by convenience, drawing on the authors' expertise and access.
Data sources: Extant data, policy documents, prior studies, and available system reports; no uniform, original data collection was undertaken across cases. Each case synthesised the available evidence on patterns of use, enablers, and barriers.
Analytic focus: For each system, the study (i) assessed the extent and forms of evidence use, (ii) identified enablers and barriers through the institutional-theory lens (state policies, professional norms, leadership roles, brokerage), and (iii) appraised the relative strength of these factors vis-à-vis system type. Cross-case comparisons linked patterns to accountability regimes and levels of cohesion and regulation.
Limitations of method: Uneven data depth across cases; absence of a system from the individualist quadrant; cross-case comparability constrained by reliance on extant, non-uniform data.
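The matrix positioning described above can be read as a simple two-dimensional classification. The minimal Python sketch below is illustrative only, not taken from the paper: the quadrant labels match those attached to each case in the Key Findings, while the high/low codings of cohesion and regulation are assumptions made here for illustration, following the grid-group logic associated with Hood (1998).

def quadrant(cohesion, regulation):
    # Illustrative mapping of cohesion/regulation codings to quadrant labels
    # (assumed here; the paper's own operationalisation may differ).
    return {
        ("high", "high"): "hierarchist",
        ("low", "high"): "fatalist",
        ("high", "low"): "egalitarian",
        ("low", "low"): "individualist",  # no case from this quadrant in the study
    }[(cohesion, regulation)]

# Assumed codings for the four cases, chosen to reproduce the labels
# used in the case summaries.
cases = {
    "Catalonia": ("high", "high"),
    "England": ("low", "high"),
    "Massachusetts": ("low", "high"),
    "Rheinland-Pfalz": ("high", "low"),
}

for system, (cohesion, regulation) in cases.items():
    print(f"{system}: {quadrant(cohesion, regulation)}")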
Key Findings
- Overall patterns: Evidence use varies substantially across systems and aligns with their positions in the cohesion/regulation matrix. High-regulation contexts (England, Massachusetts) concentrate educator attention on specific accountability-linked data (inspections, high-stakes tests). Lower-regulation, more egalitarian contexts (Rheinland-Pfalz) show greater use of internal, practice-proximal evidence but lower uptake of external accountability data.
- Catalonia (hierarchist): Policy commitment to EIP has increased (Decret 274/2018; the "Evidence-informed schools" initiative), but EIP is not generalized. Surveys indicate 68.1% of teachers and 77.3% of principals report frequently or always using research, yet teachers more often rely on experiential and peer knowledge; barriers include time, resources, leadership support, and limited research training.
- England (fatalist): An NFER survey (n = 1670) found teachers predominantly influenced by their own or their school's ideas (60%), other schools (42%), and non-research CPD (54%); only 13% cited research-organization sources and 7% university guidance, while 32% reported research as a strong influence. The EEF toolkit had been accessed by fewer than 25% of teachers and around 59% of senior leaders. High-stakes inspections and league tables drive behaviour; barriers include time, paywalls, dense academic writing, limited research literacy, budget cuts, and high teaching hours (England 942 hours vs Finland 677 and Germany 799).
- Massachusetts (fatalist): Hedberg (2018) interviews (N = 22 districts) show predominant use of state test (MCAS) data; structures include data meetings (13 mentions), PD (5), and data teams (5). 25% reported consulting research when selecting programs. Self-reported frequency of data use (1–10 scale): adopting new materials 9.13; selecting interventions 8.87; PD 8.27; informing instruction 8.07; allocating funds 7.53; allocating staff 7.0. Barriers: limited time/staff (12), perceived value of research (5), culture (3). Research is typically accessed via brokers (associations, conferences, newsletters) and state resources; skepticism was noted, especially toward vendor research.
- Rheinland-Pfalz (egalitarian): The EviS study (N = 1230 teachers) shows higher use of internal, practice-proximal evidence (student feedback, subject journals) and lower use of external instruments (inspections, state-wide tests). School leaders report higher uptake than teachers. Low-stakes accountability yields limited external data utilization; effectiveness depends on schools' existing improvement expertise (a Matthew effect). Systemic support and leadership are pivotal for embedding data use.
- Cross-case synthesis: Accountability design shapes what evidence is used ("what gets measured matters"); sanctions and rewards increase attention to external data (England, Massachusetts), while low-stakes contexts (Rheinland-Pfalz) risk underutilization without robust supports. Enablers include leadership, dedicated roles (data/research leads), networks, and broker organizations (EEF, research schools, state OPR). Barriers include time, resources, access to and relevance of research, culture and skepticism, and uneven infrastructure.
Discussion
The dual framework clarified how macro-level cohesion/regulation and meso/micro institutional forces condition EIP. The findings address the research questions by showing that: (1) the extent and forms of EIP differ markedly across systems, patterned by accountability regimes and cultural/institutional features; (2) enabling and hindering factors include policy signals, leadership, brokerage, professional norms, organisational capacity, and resource and time constraints; (3) system-level lessons with wider sector relevance highlight that evidence strength alone rarely determines use—powerful actors, incentives, and organizational routines shape uptake. Strong accountability can focus attention but risks narrowing the range of evidence that is used. Low-stakes environments may support bottom-up, context-relevant use but require infrastructure, leadership, and capacity building to translate data into improvement. These insights indicate that the dual frame is useful for diagnosis and for designing context-specific supports rather than one-size-fits-all solutions, a principle applicable beyond education.
Conclusion
The study contributes a novel, dual analytical approach for comparative EIP analysis and offers provisional, context-sensitive guidance. It shows that evidence use is contingent upon multi-level conditions, including accountability design, leadership, brokerage, and organizational learning capacity. To move toward more routine and meaningful EIP, systems should: build organisations that learn; invest in networks, embedded research roles, and research–practice partnerships; improve access, relevance, and interpretation supports for research; and align policy, accountability, and supports to encourage balanced, context-appropriate evidence use. The approach and insights are transferable to other sectors where institutional contexts and system incentives similarly shape evidence use. Future research should implement coordinated, multi-level data collection across diverse system types (including the ‘individualist’ quadrant) to strengthen comparative claims and refine actionable, context-specific support packages.
Limitations
- Reliance on extant, non-uniform data across cases limited depth and cross-case comparability.
- Case selection did not cover the full diversity of global systems (no 'individualist' quadrant case).
- Uneven availability and granularity of evidence within cases; results are provisional and context-bound.
- The complexity of applying institutional theory required substantial interpretive work and may yield variability in operationalization across cases.