PLS-SEM and reflective constructs: A response to recent criticism and a constructive path forward

P. Guenther, M. Guenther, et al.

This article confronts the misconceptions surrounding reflective construct measurement in PLS-SEM, arguing that such models accurately represent theoretically grounded constructs. The research, conducted by Peter Guenther, Miriam Guenther, Christian M. Ringle, Ghasem Zaefarian, and Severina Cartwright, emphasizes the value of a multimethod approach in structural equation modeling to leverage diverse strengths instead of fostering competition among methods.
Introduction

The paper responds to Henseler et al. (2025), who argue (1) that PLS-SEM is unsuitable for estimating reflectively measured constructs due to biased parameter estimates and (2) that reflective measurement assessment criteria are biased under PLS-SEM. The authors contend these concerns are unwarranted. They situate the debate in the widespread use and ongoing methodological development of PLS-SEM in business marketing research and set out to clarify misconceptions and provide constructive guidance for researchers.

Literature Review

The authors synthesize prior work distinguishing conceptual (theory-based) measurement from statistical estimation. Building on Rigdon’s (2012) proxy framework, they emphasize a validity gap (metrological uncertainty) between theoretical constructs and their statistical proxies (Rigdon et al., 2019a; Rigdon & Sarstedt, 2022). They stress that reflective versus formative specification is a measurement-theoretic decision distinct from the data-generating process, which may follow common factor or composite logic (Sarstedt et al., 2016). Prior critiques that equate reflective measurement with common factor models are challenged by work showing the practical divergence between model design and estimation (Cook & Forzani, 2023; Rhemtulla et al., 2020; Rigdon, 2012). Literature comparing CB-SEM and composite-based methods indicates differing biases when model assumptions are violated (Reinartz et al., 2009; Sarstedt et al., 2016; Cho, Sarstedt, & Hwang, 2022), with PLS-SEM often being less sensitive when the true model type is unknown. Research also documents that many reflective assessment criteria (e.g., HTMT, Cronbach’s alpha, ρA) are not affected by potential loading inflation under PLS-SEM and that differences between CB-SEM and PLS-SEM estimates diminish with larger samples and more indicators due to consistency-at-large properties (Hui & Wold, 1982; Schneeweiß, 1993; Hair et al., 2022).
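The claim that common reliability criteria are insensitive to estimated loadings can be made concrete: Cronbach's alpha is computed entirely from observed item variances and the total-score variance, with no factor loadings anywhere in the formula. The sketch below is my own illustration (the responses are invented, not drawn from any cited study), using the standard sample-variance definition:

```python
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: list of k equal-length score lists, one per indicator.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(sum score)).
    Note: only observed variances appear; no loadings are involved.
    """
    k = len(items)
    total = [sum(vals) for vals in zip(*items)]  # per-respondent sum score
    item_var_sum = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var_sum / variance(total))

# Hypothetical responses from five respondents on three indicators:
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
print(round(cronbach_alpha(items), 3))  # ~0.886 for this invented data
```

HTMT is likewise a ratio of observed heterotrait to monotrait indicator correlations, so the same loading-independence argument applies to it.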

Methodology

This is a conceptual response and methodological commentary. The authors: (1) articulate a theoretical framework distinguishing conceptual constructs from statistical proxies; (2) synthesize evidence from prior simulation and empirical comparison studies contrasting PLS-SEM, CB-SEM, and related methods; and (3) conduct a secondary descriptive assessment using reported results from Sarstedt et al. (2022) on 239 PLS-SEM studies (486 models) to examine the distribution of AVE values (n=1,825) and evaluate practical implications of potential loading inflation on convergent validity conclusions. No new primary data were collected.

Key Findings
  • Reflective measurement is not equivalent to common factor model estimation; reflective constructs are theory-driven conceptualizations, and multiple estimation approaches yield method-specific proxies.
  • Many standard reflective assessment metrics are unaffected by alleged PLS-SEM loading inflation: HTMT and Cronbach’s alpha do not depend on loadings, and ρA is also unaffected; only indicator reliability and AVE may be higher under PLS-SEM when the true model is factor-based.
  • Magnitude of loading differences is small in practice: simulations show mean absolute error in loadings of about 0.074 when using PLS-SEM to estimate common factor models (Cho, Sarstedt, & Hwang, 2022), with larger differences (~0.1) mainly for 3-indicator models and small samples; empirical comparisons report loading differences of 0.00–0.08 (avg. ~0.05) (Dash & Paul, 2021). Differences diminish with more indicators and larger samples due to consistency at large.
  • In Sarstedt et al.'s (2022) review of PLS-SEM use in top marketing journals: 239 articles, 486 models, a mean of 3.85 indicators per reflective construct, and 1,825 AVE values with a mean of 0.722 (min 0.330, max 0.995). Even assuming an average loading inflation of 0.05, 87% of constructs would still satisfy convergent validity against a correspondingly raised AVE threshold (≈0.573 instead of the usual 0.50).
  • When model type is unknown, PLS-SEM is often a safer choice: bias from factor-based SEM applied to composite models can be up to 11 times higher than bias from PLS-SEM applied to factor models (Sarstedt et al., 2016); mean absolute error in path coefficients is nearly twice as high when using CB-SEM to estimate composite models vs. PLS-SEM to estimate factor models (Cho, Sarstedt, & Hwang, 2022).
  • From a practical perspective, CB-SEM can incur sizable sampling errors in path coefficients due to simultaneous estimation of many parameters; weighted composite approaches can offer advantages in prediction/classification (Deng & Yuan, 2023; Yuan & Zhang, 2024).
  • A multimethod SEM approach (CB-SEM, PLS-SEM, PLSC/PLSe, GSCA, IGSCA, factor/sum score regression) is advocated to leverage complementary strengths (model fit vs. prediction) and assess robustness across methods.
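The convergent-validity arithmetic behind the adjusted AVE threshold can be sketched as follows. This is my own illustration, assuming the conventional cutoffs (standardized loading ≥ 0.708, since 0.708² ≈ 0.50) and the ~0.05 average loading inflation cited above; the paper's reported ≈0.573 differs from the value computed here only by rounding of the cutoff. The four example loadings are hypothetical.

```python
def ave(loadings):
    """Average variance extracted: mean of squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

LOADING_CUTOFF = 0.708   # conventional cutoff, since 0.708 ** 2 ~= 0.50
INFLATION = 0.05         # average loading inflation assumed in the findings above

standard_ave_threshold = LOADING_CUTOFF ** 2                # ~0.501
adjusted_ave_threshold = (LOADING_CUTOFF + INFLATION) ** 2  # ~0.574

# A hypothetical four-indicator reflective construct:
loadings = [0.82, 0.78, 0.75, 0.71]
print(round(ave(loadings), 3))                  # construct's AVE (~0.587)
print(ave(loadings) >= adjusted_ave_threshold)  # passes even the stricter cutoff
```

The point of the exercise: raising the loading cutoff by the assumed inflation tightens the AVE floor only modestly, which is why most published constructs in the Sarstedt et al. (2022) sample would still clear it.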

Discussion

The findings directly counter the claim that PLS-SEM is inherently unsuitable for reflective constructs. By separating measurement theory from statistical estimation and emphasizing proxies, the authors show that using reflective evaluation criteria within composite-based estimation does not invalidate results. Any loading inflation under PLS-SEM for factor-model populations is typically small and rarely alters substantive validity conclusions, especially with adequate indicators and sample sizes. Comparative evidence indicates PLS-SEM may be less biased than CB-SEM when the true data-generating process is composite, making it a prudent option when model type is uncertain. The authors argue for methodological pluralism: use CB-SEM to assess global model fit and PLS-SEM (and related component methods) to assess predictive performance; employ robustness checks across methods to focus on stability of substantive inferences rather than exact parameter equality. For formatively measured constructs arising from composite populations, PLS-SEM or GSCA are recommended. Overall, embracing a multimethod perspective enhances the credibility and utility of SEM-based findings.

Conclusion

The authors reject the premise that reflective measurement equates to common factor model estimation and the associated critique that PLS-SEM is invalid for reflective constructs. They show that any differences in loadings under PLS-SEM are typically minor and have limited impact on construct validity assessments, while PLS-SEM can be advantageous when the true model type is unknown. They advocate for methodological pluralism and a multimethod SEM approach—combining CB-SEM and composite-based methods—to ensure robustness, integrate assessment of model fit and predictive power, and advance methodological rigor. Future research should further distinguish conceptual versus statistical models, refine construct specification and evaluation with flexible criteria, integrate fit and prediction, and address the validity gap between conceptual constructs and statistical proxies, supported by open science practices.

Limitations

The article is a conceptual response relying on prior simulation and empirical comparison studies and a secondary descriptive analysis of published AVE values; no new primary data were collected. The authors acknowledge inherent metrological uncertainty and that researchers cannot know the true data-generating process with certainty, which limits definitive claims about any single method’s universal superiority.
