Evaluating the process of partnership and research in global health: reflections from the STRIPE project

A. Kalbarczyk, A. Rao, et al.

Discover insights from a thorough process evaluation of the STRIPE project's partnership, part of the global effort to eradicate polio. This research, conducted by a diverse team of experts, highlights significant challenges and proposes workable solutions for multi-country research collaborations.

Introduction
The paper addresses how to evaluate and strengthen equitable partnerships within a large, multi-country global health research consortium (STRIPE) during its first year. It situates the research in the context of increasing North–South academic partnerships and draws on established guidance (e.g., CIOMS) emphasizing equitable collaboration, capacity building, conflict-of-interest management, education, and publication and data-sharing practices. Prior literature identifies recurring partnership challenges such as unpredictable financing, trust deficits, and limited emphasis on capacity building, particularly for partners in low- and middle-income countries (LMICs). The study's purpose is to prospectively evaluate the effectiveness and equity of STRIPE's partnership processes, identifying what worked, the challenges encountered, and workable solutions during the knowledge mapping phase (scoping review, survey, key informant interviews (KIIs), and context mapping). The work aims to inform best practices for managing multi-country consortia with participatory approaches, acknowledging contextual factors (e.g., political events, insecurity) and the need for shared vision, goals, and social justice principles in partnership governance.
Literature Review
The background synthesizes guidance and frameworks relevant to equitable global health partnerships. CIOMS/WHO ethical guidelines (Guideline 8) emphasize collaborative partnerships, capacity strengthening, conflict-of-interest management, education, and publication and data sharing. The participatory evaluation literature (e.g., Scarinci et al.) highlights commitment, participation, and meaningful engagement as foundational to evaluating collaboration. Governance checklists for consortia (Pratt & Hyder) focus on shared resources, responsibility, accountability, sovereignty, and justice, and advocate their application during consortium development and in routine monitoring. Dankwa-Mullan et al. propose six elements of transdisciplinary teamwork for addressing health disparities, urging prospective application to support innovation and institutionalization. Blackstock et al. developed a framework for evaluating participatory research that links purpose, criteria, methods, context, process, and outcomes, centering co-construction of knowledge. Across these works, recurring themes include social justice and equity, context sensitivity, and shared vision and goals, alongside constraints common in LMIC partnerships: unpredictable financing, trust deficits, and limited capacity.
Methodology
Design: A prospective, participatory process evaluation of the STRIPE consortium partnership during year one (the knowledge mapping phase), guided by Blackstock et al.'s framework for evaluating participatory research.

Framework: The evaluation aligned purpose, focus, timing, and criteria selection, adapting Blackstock's constructs (access to resources, accountability, capacity, context, shared vision/goals, relationships, quality, transparency). Methods and data sources were defined before the criteria were finalized, because the criteria were refined iteratively.

Data sources and collection:
- JHU team reflections: In December 2018, all JHU team members (n=10) were invited to provide 2–3 page written reflections on their experiences (what worked, what changed, challenges encountered, and mitigation actions); 9 reflections were received.
- Consortium partner calls: From December 2018 to January 2019, individual calls were held with each partner team (n=7) to discuss technical work and gather reflections; notes were taken.
- Consortium meeting and working group: In April 2019, a 3-day in-person meeting was held in Baltimore, USA. A 90-minute process evaluation working group session (one representative per institution) reviewed the preliminary criteria, mapped challenges and solutions, and refined the evaluation set.
- Collaborative documentation: After the meeting, a shared Google document captured criteria definitions (Table 1) and illustrative challenges and solutions (Table 2). Working group members applied the criteria to their own settings and added examples.

Analysis: A preliminary thematic analysis of reflections and call notes was mapped to Blackstock's framework, identifying existing and emergent themes. The working group iteratively added, removed, or combined criteria (e.g., merging amount of information with quality, engagement with communication, and relationship building with social capital/power, and removing compatibility). A final set of 12 criteria was agreed and used to structure the findings.

Ethical review: The work was deemed not to constitute human subjects research.
Key Findings
- Evaluation criteria: Twelve criteria were used to assess STRIPE's partnership and process: access to resources; expectation setting (shared vision and goals); organizational context; external context; quality of information; relationship building and social capital/power; transparency; motivation; scheduling; adaptation; communication and engagement; and capacity building.
- Access to resources: Contracting and payment delays across institutions; recruitment challenges for qualified staff, particularly across large geographies (e.g., DRC); and difficulty using online tools (e.g., Qualtrics) amid limited internet connectivity. Proposed improvements: allocate more time for recruitment, training, reflection, pre-testing, and financing; give partners access to the JHU library and needed software licenses; and consider at least one full-time project coordinator per country.
- Expectation setting: While the overall vision and goals were agreed, technical details were insufficiently specified; communication about timeline shifts and task changes was not always effective; and managing stakeholder expectations (e.g., anticipation of immediate policy outputs) was challenging. Suggested: a pre-proposal meeting to clarify technical scope; clear protocols for communicating adaptations and iterations; and early, explicit expectation alignment.
- Organizational context: Competing priorities, staff turnover, and long hiring processes varied by institution, as did capacity to cope with financing delays. Recommendations: fund and hire a full-time coordinator per team; integrate project tasks with doctoral programs; and allow flexibility in targets.
- External context: Ebola outbreaks (DRC), national elections (Afghanistan, Bangladesh, DRC), circulating vaccine-derived poliovirus (cVDPV) outbreaks (Indonesia, DRC), and insecurity (Afghanistan, eastern DRC) affected access and stakeholder availability. Suggested: flexible targets, and formal communications from JHU to local governments and GPEI partners to bolster ownership.
- Quality of information: An imbalance between excessive and insufficient data; lengthy, complex tools (survey, KIIs, context analyses) that challenged focus and consistency; and early finalization of tools by JHU, which limited perceived room for change. Proposed: more focused tools derived from the research questions, with flexibility for country-level tool modification and adjusted timelines.
- Relationship building and social capital/power: Trust deficits with government and GPEI stakeholders hampered access to data; ministry turnover impeded continuity; and some teams had weak networks. Strategies: leverage JHU and funder reputation where appropriate; emphasize national leadership and local ownership (e.g., Indonesia); conduct pre-launch stakeholder workshops and advocacy visits (successful in India, Bangladesh, and Nigeria); and maintain regular stakeholder communication.
- Transparency: Partners wanted earlier involvement in decisions on tool development, data collection and analysis, authorship, and publication plans. Recommendations: co-develop analysis plans and authorship criteria, and clarify publication and conference procedures early.
- Motivation: Lengthy, detailed processes and back-to-back deliverables (notably the literature reviews and the HiT tool) reduced motivation among researchers and participants. Suggestions: allocate more planning and training time; pursue rapid dissemination and publication; and recognize contributions.
- Scheduling: Frequent timeline adjustments due to holidays, time zones, differing workweeks, and external events complicated management. Suggested: quarterly timeline reviews, and planning around major holidays and local contexts.
- Adaptation: Teams adapted by hiring additional staff and engaging students, though confusion arose when adaptations were misinterpreted as replacing other tasks. A positive example: Afghanistan shifted to a phone-based survey to boost response rates. Recommendations: simplify and shorten tools, and communicate workload rollout and adaptations clearly.
- Communication and engagement: Unclear points of contact per workstream; information overload via email; difficulty attending calls across time zones; and lack of a common platform. Proposed: more one-on-one meetings; clear role and responsibility maps; an externally facing website; and shared, accessible platforms.
- Capacity building: Limited familiarity with tools and processes (Qualtrics, F1000, transcription, memoing, scoping review analysis). Recommendations: targeted trainings in tools, analysis, and writing; engaging student researchers (e.g., for dissertations); and strengthening south–south learning. Interdisciplinary team integration enhanced mixed-methods capacity at JHU.

Quantitative and contextual details: 9 written reflections from JHU staff; 7 partner calls; one 3-day consortium meeting; and a final set of 12 criteria after consolidation.
Discussion
The evaluation demonstrates that effective, equitable multi-country research partnerships hinge on transparent communication, early and ongoing expectation alignment, and robust institutional support. Technological fragmentation and bandwidth constraints undermined collaboration; widely used tools (Zoom, Google Docs, F1000, Qualtrics, Dropbox) were insufficiently integrated and sometimes inaccessible. Simple, shareable project management solutions are needed that accommodate low-bandwidth contexts and support online/offline syncing, collaborative document editing, and scheduling, accompanied by training and institutional permissions for cross-organization use. Early-stage buy-in and clarity on mission, goals, scopes of work, document workflows, timelines (including adaptation processes), feedback windows, authorship and publication policies, and communication strategies are essential to build trust and maintain momentum. Clear scopes of work also facilitate appropriate budgeting and timely recruitment of qualified personnel. Recognizing that institutions differ in their capacity to absorb delays and disruptions, consortia should assess and support members' needs (e.g., dedicated coordinators) and set flexible targets responsive to external contexts. Capacity building must extend beyond individual technical skills to encompass organizational systems (IT, regulatory, communications). Such investments require earmarked resources; funders should explicitly require and finance institutional-level capacity strengthening within consortium grants. By embedding these elements, consortia can improve process quality, equity, and research outputs, meeting the study's objective of identifying workable solutions during early implementation.
Conclusion
A collaboratively conducted process evaluation of the STRIPE consortium underscores the importance of clear, open communication, proactive expectation alignment, and capacity-building activities that address both individual and institutional needs. Improved, context-appropriate collaborative project management tools are needed to support partnership and research processes across diverse settings. Multiyear academic consortia should incorporate prospective partnership evaluations to generate actionable insights for ongoing improvement and to inform future cross-country collaborations.
Limitations
Key limitations include potential bias due to power dynamics: the evaluation was conducted among subcontracted partners with involvement from the lead organization (JHU), possibly influencing feedback to preserve relationships. Group-based data collection (working group session, shared Google Doc) may have inhibited disclosure of sensitive issues. Future evaluations could mitigate these concerns by engaging independent third-party evaluators from the outset, budgeting accordingly, and prioritizing individual/team-level data collection with later-stage group consensus-building.