Evaluating the process of partnership and research in global health: reflections from the STRIPE project

Medicine and Health

A. Kalbarczyk, A. Rao, et al.

Explore the intricate dynamics of global health partnerships in the STRIPE project! This research, conducted by Anna Kalbarczyk, Aditi Rao, Yodi Mahendradhata, Piyusha Majumdar, Ellie Decker, Humayra Binte Anwar, Oluwaseun O. Akinyemi, Ahmad Omid Rahimi, Patrick Kayembe, and Olakunle O. Alonge, reveals key lessons learned from polio eradication efforts and offers vital recommendations for managing multi-country research projects effectively.

~3 min • Beginner • English
Introduction
Global health research increasingly relies on partnerships between institutions in the Global North and South to address health disparities and enable collaborative knowledge generation. Recognizing the ethical and practical importance of equitable collaboration, guidance such as the CIOMS International Ethical Guidelines emphasizes partnership, capacity building, conflict-of-interest management, education, and data sharing. Despite growth in consortia-based research, partnerships often face challenges including unpredictable financing, limited trust, and insufficient capacity building, and many evaluation guidelines are developed without substantial input from low- and middle-income countries (LMICs). The Synthesis and Translation of Research and Innovations from Polio Eradication (STRIPE) project, a collaboration between Johns Hopkins University (JHU) and seven global partners across diverse polio epidemiological contexts, undertook a process evaluation during its first year (the knowledge mapping phase) to prospectively assess effectiveness and equity in a large, multi-country research consortium. The study aimed to understand what worked well, the challenges encountered, and the solutions implemented with respect to partnership relationships and experiences with STRIPE's research activities.
Literature Review
The paper reviews frameworks and guidance for evaluating and governing research partnerships. The CIOMS International Ethical Guidelines (Guideline 8) emphasize collaborative partnership, research capacity strengthening, conflict-of-interest management, education, and publication and data sharing practices. Participatory evaluations highlight commitment, participation, and meaningful engagement (Scarinci et al.). For equity-focused governance of transnational consortia, Pratt and Hyder offer a checklist based on shared health governance (shared resources, responsibility, accountability, sovereignty, and justice), intended for both the development and ongoing monitoring of partnerships. Dankwa-Mullan et al. propose six elements to guide transdisciplinary teamwork, urging prospective application to foster innovation and institutionalization. Blackstock et al. developed a framework for evaluating participatory research that connects outcomes, design, process, and context, with co-construction of knowledge at the center. Across these frameworks, key concepts include social justice and equity, contextual understanding, shared vision and goals, and acknowledgment of exacerbating factors (e.g., insecurity, political environment). The STRIPE evaluation builds on this work, applying Blackstock's framework in an ongoing, participatory-influenced process evaluation.
Methodology
Design and framework: The study used Blackstock et al.'s framework for evaluating participatory research to assess STRIPE's first-year consortium processes. The team defined the evaluation's purpose, focus, bounding, and timing, and selected process, context, and outcome criteria, adapting and refining constructs (e.g., access to resources, accountability, capacity, context, shared vision/goals, relationships, quality, transparency). While participatory elements were included, the consortium was contractual (subcontracts from JHU) rather than fully participatory.

Data collection: In December 2018, all JHU team members were invited to submit 2–3 page written reflections; 9 were received. From December 2018 to January 2019, the project manager held individual calls with each consortium team (n=7) to discuss technical progress and solicit reflections; detailed notes were taken. Preliminary analyses mapped themes to Blackstock's framework and identified additional criteria. In April 2019, a 3-day consortium meeting in Baltimore, USA, convened primary and co-investigators from each institution, the JHU team, and funder representatives. A 90-minute process evaluation working group session (one representative per country) reviewed the preliminary criteria, discussed challenges and solutions, and refined the evaluation set; two observers took notes. After the meeting, a shared Google Doc captured criteria definitions and illustrative challenges and solutions, and working group members added context-specific examples.

Criteria: Fourteen criteria initially emerged from the written reflections (10 aligned with Blackstock's framework); the calls added no new criteria. During the consortium meeting, capacity building and social capital/power were added and compatibility was removed. Amount of information was merged with quality of information, engagement with communication, and relationship building with social capital/power. The final 12 criteria were:
- Access to resources
- Expectation setting
- Organizational context
- External context
- Quality of information
- Relationship building (including social capital and power)
- Transparency
- Motivation
- Scheduling
- Adaptation
- Communication and engagement
- Capacity building

Analysis: Qualitative thematic analysis of the reflections, call notes, and working group inputs organized evidence under the 12 criteria and mapped challenges to proposed solutions. Outputs included a synthesized table of challenges and recommended strategies for each criterion.
Key Findings
- A total of 12 evaluation criteria were finalized and applied to assess the STRIPE consortium's first-year research processes and partnerships.
- Participation: 9 written reflections from JHU team members; calls with 7 consortium teams; and an in-person 3-day meeting with a working group session.
- Access to resources: Common challenges included contract and payment delays, recruiting qualified staff across large geographies (e.g., DRC), limited training with online tools (Qualtrics), and poor internet connectivity. Solutions: allocate more time for recruitment, training, reflection, and pre-testing; ensure timely financing; provide access to the JHU library and necessary licenses; consider at least one full-time coordinator per country.
- Expectation setting: While vision and goals were agreed upon, technical details and communication about timeline changes and priorities were insufficient, and managing in-country stakeholder expectations was challenging. Solutions: hold a pre-proposal submission meeting; establish protocols for communicating adaptations and iterations; hold clearer early-stage discussions to set realistic targets.
- Organizational context: Competing priorities, staff turnover, and long hiring processes affected progress, and varied institutional coping capacities affected the ability to bridge financing delays. Solutions: fund at least one full-time project coordinator; integrate activities with doctoral programs; allow flexibility in targets.
- External context: Ebola outbreaks (DRC), national elections (Afghanistan, Bangladesh, DRC), re-emergence of circulating vaccine-derived poliovirus (cVDPV) (Indonesia, DRC), and insecurity (Afghanistan, eastern DRC) disrupted activities. Solutions: flexible targets; formal communications from JHU to local governments and Global Polio Eradication Initiative (GPEI) partners to support local ownership.
- Quality of information: Perceived imbalances in data quantity and areas of focus; lengthy, complex tools (survey, key informant interviews, context analysis) made maintaining focus difficult; uncertainty about the flexibility to adapt tools. Solutions: allow country-level tool and timeline modifications; streamline tools aligned to the research questions.
- Relationship building and social capital/power: Trust deficits with governments and stakeholders delayed access to national data; weak networks and leadership turnover impeded engagement; some teams leveraged the JHU and funder names while balancing national ownership (e.g., Indonesia). Solutions: pre-launch stakeholder meetings and workshops; advocacy visits; regular communication; leveraging institutional reputations appropriately.
- Transparency: Partners wanted earlier engagement in decisions about tool development, analysis plans, authorship, and publication processes. Solutions: co-develop procedures for analysis, authorship criteria, and dissemination.
- Motivation: Lengthy, detailed processes (e.g., literature reviews, the HIT tool) decreased researcher and participant motivation. Solutions: more planning and training time; rapid publication and recognition.
- Scheduling: Frequent timeline changes due to holidays, time zones, and external events complicated management. Solutions: quarterly timeline reviews; consideration of local calendars, time zones, and working days.
- Adaptation: Misunderstandings about which activities were replaced versus added led to confusion and missed deadlines; successful adaptations included Afghanistan's shift to phone-based surveys. Solutions: hire additional staff; engage students; simplify tools; improve planning of workload rollout.
- Communication and engagement: Unclear points of contact for each workstream; information overload via email; difficulties attending calls; lack of a common platform. Solutions: increase one-on-one meetings; clarify roles and responsibilities; create an externally facing website.
- Capacity building: Limited familiarity with tools and processes (Qualtrics, F1000, transcription, memoing, scoping review analysis). Solutions: additional training; engaging student researchers; increased south-south collaboration; cross-method learning within integrated teams at JHU.
Discussion
Findings indicate that effective management of multi-country, multicenter implementation research hinges on clear, transparent communication, early alignment of expectations, and strong institutional support. Despite widespread availability of digital tools, partners faced fragmented, bandwidth-intensive platforms that were not uniformly accessible or supported by institutional policies. This underscores the need for integrated project management solutions that accommodate low-bandwidth environments, enable online/offline syncing, and support document sharing and scheduling, accompanied by comprehensive training to ensure uptake. Early-stage collaboration and expectation alignment are essential to build buy-in, define scopes of work, and plan human resources. Variability in institutional coping capacity for delays, hiring constraints, and workload surges highlights the importance of assessing and supporting both individual and organizational capacities. Capacity building should extend beyond technical training for individuals to include strengthening institutional systems (IT, regulatory, communications), which requires dedicated funding and sponsor requirements that prioritize organizational development. These insights directly address the study objective by identifying what worked, surfacing challenges, and mapping actionable strategies to enhance equity and effectiveness in global research consortia.
Conclusion
This partnership process evaluation emphasizes the need for clear, open communication, early expectation alignment, and intentional capacity building at both individual and institutional levels. Improved, accessible project management tools are necessary for effective collaboration in academic consortia. Multiyear, multi-country consortia should embed ongoing partnership evaluations to inform adaptive management and offer lessons for others developing cross-country collaborations. Future efforts should plan for organizational capacity strengthening and ensure adequate human resources, including dedicated country-level coordination.
Limitations
The evaluation was conducted within a contractual consortium led by JHU, introducing potential power dynamics and social desirability bias in feedback from subcontracted partners. Data collection relied on shared experiences (working group session) and shared tools (Google Docs), which may have limited disclosure of sensitive issues. An independent third-party evaluator could mitigate these concerns, but would require upfront planning and budgeting. Future evaluations might emphasize individual or team-level data collection with later group consensus processes.