Engaging Company Developers in Security Research Studies: A Comprehensive Literature Review and Quantitative Survey
Raphael Serafini, Stefan Albert Horstmann, Alena Naiakshina
Company developers overwhelmingly value security: 62.5% prefer security over software engineering tasks, and 96.5% are willing to join security studies. This large-scale survey of 340 professionals examines compensation, study length, repeat-participant perceptions, and Qualtrics as a recruitment channel, and provides concrete recommendations for engaging company developers.
Introduction
The study addresses the persistent challenge of recruiting professional software developers employed by organizations (company developers) for empirical security and privacy research. Prior work shows company developers differ meaningfully from CS students and freelancers in security behaviors, making their inclusion essential for ecological validity. Yet researchers lack clear guidelines on motivating participation, particularly regarding appropriate compensation and study length, and many studies omit these methodological details. This work formulates three research questions:
RQ1: What factors influence company developers' participation in security studies?
RQ2: Are there differences in attitudes between first-time and repeat participants?
RQ3: Is Qualtrics suitable for recruiting company developers compared to existing platforms regarding programming and security experience, skills, and knowledge?
To investigate these questions, the authors conduct a systematic literature review and a large-scale quantitative survey of 340 company developers, aiming to produce foundational recommendations for recruitment strategies, study design, compensation, and length.
Literature Review
The paper synthesizes guidance and evidence across software engineering, HCI, and security venues on recruiting participants and motivating developer participation. Existing recruitment guidelines emphasize methodology, sample reliability, and company cooperation but offer limited insights on motivating large numbers of company developers. Prior comparisons of recruitment platforms (e.g., MTurk, Prolific, Upwork, student mailing lists) show variability in participant programming and security experience, data reliability, and tooling; screening questions are often needed due to skill disparities. Studies suggest persuasive elements in invitations (personalization, rewards, scarcity cues, humor) can raise response rates, and motivations beyond money (learning, fun, social connection) matter. A related qualitative study found study length, topic, compensation, and trust are influential, with concerns about effort and performance in security tasks.
A systematic literature review of 163 papers (2017–2022) across CCS, CHI, IEEE S&P, NDSS, USENIX Security, ICSE, and SOUPS extracted participant samples, recruitment channels, study type, tasks, participant counts, compensation, and length. Participant types included software developers (100 papers), CS students (36), security/IT experts (39), admins (8), and others (7). Recruitment channels most used were social/regional contacts (75), unsolicited email (55), social media (42), and university recruitment (36). Less common were snowball sampling (29), online forums/blogs (29), networking platforms (21), freelancer/crowdsourcing (16), and security events (8); 12 papers did not report channels. Studies used on average 2.06 channels. Most studies were online (125), primarily surveys (77), with interviews (72), practical tasks (75), field (17), and lab (37) less common. Surveys had the largest participant numbers (mean 247.53; median 102). Study length was reported in 101 papers with an overall average of 77.75 minutes; typical lengths: surveys 16.58 minutes, interviews 50.93 minutes, practical tasks 126.35 minutes; many practical multi-day lengths were missing and 46 surveys omitted length entirely. Compensation reporting was inconsistent: 77 did not mention rewards, 21 provided no compensation, 48 reported monetary compensation (mean $60.09; median $26.04), and 17 used other incentives; among the 30 papers with both compensation and length, the estimated mean hourly rate was $41.07.
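The mean hourly rate reported above follows from normalizing each paper's monetary compensation by its reported study length. A minimal sketch of that calculation, using hypothetical per-paper values rather than the actual review data:

```python
# Estimate the mean hourly compensation across papers that report both
# a monetary reward and a study length. Example values are hypothetical,
# not the figures extracted in the literature review.
def mean_hourly_rate(papers):
    """papers: list of (compensation_usd, length_minutes) tuples."""
    rates = [comp / (minutes / 60) for comp, minutes in papers]
    return sum(rates) / len(rates)

# Three hypothetical papers: $20 for 30 min, $15 for 60 min, $60 for 120 min.
example = [(20.0, 30), (15.0, 60), (60.0, 120)]
print(round(mean_hourly_rate(example), 2))  # → 28.33
```

Averaging per-paper hourly rates (rather than dividing total compensation by total time) weights each paper equally, which is the natural reading of a "mean hourly rate" across 30 papers.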
Methodology
The authors conducted a 30-minute online survey (actual mean completion 32.77 minutes; median 28.5; σ 18.85) over 2.5 months (79 days), recruiting 340 company developers via Qualtrics research panels. Inclusion criteria targeted professional developers employed at least part-time by organizations with software development as the main job component. Quotas required ≥40% first-time and ≥40% repeat participants. Screening procedures excluded non-developers and used programming skills screeners and attention checks recommended in prior work. Of 3,240 starters, 672 failed inclusion, 188 were from overrepresented regions; of 2,380 remaining, 1,556 failed programming screeners, 247 failed attention/quality checks, and 237 were excluded due to inconsistencies or completion times under 10 minutes, yielding n=340. The survey was in English; Qualtrics compensated panelists via money or redeemable points (exact participant-level compensation details not provided). Total study cost was €17,588.75 (~$62.15 per participant). The survey comprised 10 sections and 66 questions covering recruitment channels; study tasks, types, and lengths; compensation expectations; willingness to participate; security vs software engineering topic preferences; longitudinal study interest; implementation vs code review tasks; organizational and security attitudes; developer backgrounds; coding/security experience; and SSD-SES self-efficacy. Items were drawn from and aligned with prior studies to enable comparability. A pilot with 8 participants adjusted currency options and reduced item counts for EFA sets (motivators, barriers, security attitudes). Quantitative analyses used independent t-tests for continuous data and Mann-Whitney U tests for Likert items (p<.05), with power analysis thresholds (Cohen’s d ≥ 0.36). Outlier handling for compensation and length expectations used MAD-based exclusion (>3.5×MAD). Qualitative open-ended responses were inductively coded by two researchers. 
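The MAD-based exclusion rule for compensation and length expectations can be sketched as follows; this is an illustration of the stated >3.5×MAD criterion, not the authors' exact code:

```python
import statistics

def mad_filter(values, threshold=3.5):
    """Drop values whose absolute deviation from the median exceeds
    threshold x MAD (median absolute deviation), per the rule above."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:  # all deviations are zero: nothing to exclude
        return list(values)
    return [v for v in values if abs(v - med) <= threshold * mad]

# Hypothetical compensation expectations (USD) with one extreme outlier:
print(mad_filter([10, 12, 15, 14, 11, 500]))  # → [10, 12, 15, 14, 11]
```

MAD-based cutoffs are preferred over mean ± k·SD for skewed self-reported expectations because the median and MAD are themselves robust to the outliers being removed.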
Exploratory Factor Analysis employed KMO (all >0.8 except one item at 0.54, retained), Bartlett’s test (significant), oblimin rotation, and Horn’s parallel analysis; variables with factor loadings |>0.4| were retained. Ethics approval (IRB) was obtained; GDPR compliance and informed consent procedures ensured anonymization and withdrawal rights.
Key Findings
Participation willingness and topic preference: 96.5% were willing to participate in security studies, and 62.5% preferred security over software engineering when given a choice. Willingness across study types and tasks was high: 96.4% for security studies, 95.8% for implementation tasks, 89.4% for code review, 86.1% for interviews, 84.1% for lab studies, and 80.5% for field studies.
Recruitment channels: Acceptance means were highest for employer recommendations (4.48), referrals (4.40), CS mailing lists (4.33), study mailing lists (4.17), and targeted emailing (4.01), and lowest for flyers/posters (3.17) and unsolicited emailing (2.67). Security-focused contexts showed higher acceptance for employer-based recruitment.
Preferred study lengths: Average willingness was 25.61 minutes for surveys (median 20), 31.56 minutes for interviews (median 30), 56.89 minutes for implementation (median 60), and 56.61 minutes for code review (median 60). 81.18% were willing to participate in multi-day/week/month studies, with 93.48% expecting higher hourly compensation for longer-term engagements. Percentile analysis showed that 90% would accept a 15-minute programming task.
Compensation expectations: For 15-minute studies, average expected USD compensation ranged from $6.50 (online survey) to $24.04 (field review); implementation and code review tasks commanded higher rates ($13.14–$24.04). For 60-minute studies, expectations ranged from $15.12 (online survey) to $49.58 (field review); increases were typically sub-linear (about 2× from 15 to 60 minutes). Regional differences were substantial: Asia expected markedly lower compensation (often <1/3 of other regions); Africa and North America were similar on average; Europe slightly lower than NA/Africa in many cases. 282 participants preferred Amazon vouchers; 145 hardware products; 56 non-anonymous charitable donations.
EFA results: Motivators had high agreement (means 3.93–4.29), led by altruism (4.14) and transparency (4.29). Barriers showed moderate agreement (means 2.81–3.65), with highest scores for commitment (3.65), privacy concerns (3.32), and time constraints (3.30); uncertainty was lowest (2.81). Security attitudes were uniformly high (means 3.92–4.54), led by security culture (4.54), responsibility (4.49), and risk awareness (4.43); task difficulty had the lowest mean (3.92) yet remained high.
First-time vs repeat participants: First-timers were willing to spend more time on interviews (37.88 vs 21.60 minutes; p<0.0005; d=0.42) and implementation (64.72 vs 41.83 minutes; p<0.0005; d=0.37). Repeat participants placed more importance on personalization, self-interest, and personal development motivators; had higher privacy/confidentiality concerns (including past negative privacy experiences); and perceived security tasks as more challenging, while first-timers were more concerned about company risks. Compensation references differed: repeat participants used previous studies (55.5%) as a baseline; first-timers referenced job salary (39%).
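The effect sizes reported above (d=0.42, d=0.37) are Cohen's d, the difference in group means divided by the pooled standard deviation. A minimal sketch with hypothetical per-group minutes, not the study's raw responses:

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: standardized mean difference using the pooled SD."""
    n_a, n_b = len(group_a), len(group_b)
    var_a = statistics.variance(group_a)  # sample variance (n-1 denominator)
    var_b = statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Hypothetical willingness-to-spend minutes for two small groups:
first_time = [40, 35, 45, 38]
repeat = [22, 20, 25, 21]
d = cohens_d(first_time, repeat)  # ≈ 5.24 for this toy data (a large effect)
```

By Cohen's conventions, d ≈ 0.2 is a small effect, 0.5 medium, and 0.8 large, so the reported 0.37–0.42 values are small-to-medium effects; the study's power analysis threshold of d ≥ 0.36 means only effects at least this size were detectable.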
Qualtrics sample characteristics and platform evaluation: Compared to samples in Kaur et al. (2022), the Qualtrics-recruited company developers reported higher proficiency in frontend (mean 4.14) and backend (mean 4.20) development, software testing (78.24%), networking (40.29%), vulnerability research (38.53%), and reverse engineering (30.88%); greater engagement with security certifications, events, CTFs, vulnerability disclosure, and bug reporting; higher SSD-SES vulnerability identification/mitigation scores (mean 35.61); and moderate security communication (mean 24.55). Recruitment via Qualtrics faced challenges: an extended field period (79 days versus the promised 8–10 days), a high screening-failure rate (65.38%), and a high per-participant cost ($62.15 for a ~30-minute survey), though Qualtrics could access company-employed developers across regions.
Demographics: Average age 37; 77.94% male, 21.5% female; 92.65% held academic degrees; average professional software development work experience 8.86 years; total development experience 12.76 years; security experience 5.85 years; average team size 22; most worked in security-focused fields/companies; 98.82% were at least somewhat comfortable in English.
Discussion
Findings directly address the research questions by identifying the key drivers and deterrents of company developer participation in security studies, quantifying acceptable study lengths and compensation bands, and evaluating a recruitment channel that can reach company-employed developers. High willingness to participate in security-focused studies (96.5%) and a preference for security over general software engineering tasks (62.5%) suggest security topics can be attractive when framed around learning, personal development, and organizational benefit. Recruitment strategies should emphasize trusted channels (employer recommendations, targeted emails, mailing lists) and avoid low-trust approaches (unsolicited emails, flyers). Time expectations center around 20–30 minutes for surveys/interviews and about 60 minutes for practical tasks; flexibility matters, especially for first-time participants.
Compensation expectations are task- and type-dependent and region-sensitive; expectations grow sub-linearly with duration and are typically at least aligned with local minimum wages. Ethical, transparent compensation aligned to participant expectations can reduce recruitment bias, improve sample diversity, and potentially boost participation rates. Repeat participants’ higher privacy concerns and perception of task difficulty highlight the need for robust data handling transparency, appropriate challenge calibration, and recognition of participants’ contributions. The Qualtrics panel shows promise for accessing company developers with strong security engagement but presents cost, quality screening, and timeline challenges; rigorous screeners and realistic timelines are necessary. Overall, the results support recommendations to diversify recruitment channels, report study parameters transparently, align compensation with local contexts and participant expectations, and position security studies as opportunities for skill growth consistent with corporate culture.
Conclusion
Recruiting company developers at scale for security research is challenging but feasible with informed study design and recruitment strategies. The study provides data-driven guidelines: use trusted recruitment channels (employer recommendations, targeted emails, mailing lists); design study lengths around 20–30 minutes for surveys/interviews and ~60 minutes for practical tasks; calibrate compensation to task complexity, duration, and regional expectations; and present security studies as learning and development opportunities aligned with organizational culture. The literature review exposes gaps in reporting compensation and study length, motivating a call to standardize transparent reporting via a Study Parameters section. Future work should examine how payment levels influence performance, explore non-anonymous charitable donations as alternative compensation, assess ecological validity across recruitment sources (freelancers vs company developers), and study the impact of AI assistants on developer study design, compensation, and duration.
Limitations
The literature review may not include all relevant developer studies. Additional factors beyond those studied may influence company developers' motivation and intention to participate. Social desirability bias may have affected self-reported preferences. Qualtrics did not provide detailed participant-level compensation or response rates. Strict screening introduced a high drop-out rate, potentially causing selection bias and limiting generalizability across the diverse population of company developers.