Artificial intelligence enabled COVID-19 detection: techniques, challenges and use cases

Computer Science

M. Panjeta, A. Reddy, et al.

Explore the fascinating world of AI-based methods for COVID-19 detection, as reviewed by Manisha Panjeta, Aryan Reddy, Rushabh Shah, and Jash Shah. This paper highlights various machine learning and deep learning techniques, their efficiency, and future research directions to overcome current challenges in diagnostic tools.

Introduction

The study addresses how artificial intelligence, specifically machine learning (ML) and deep learning (DL), can support COVID-19 detection and related healthcare workflows during a global public health emergency. It examines the limitations of gold-standard RT-PCR testing (e.g., delayed turnaround, laboratory and reagent constraints) and motivates AI as a scalable, faster adjunct for screening and triage, especially in resource-limited settings. The paper aims to comprehensively review AI-enabled COVID-19 detection techniques (laboratory RT-PCR support, blood test–based ML, and medical imaging–based DL using X-ray/CT/LUS); compare them on availability, ease of use, accuracy, cost, and analytical efficiency; and highlight open challenges, gaps, and future research directions. The motivation emphasizes AI's potential for early warning, tracking, diagnosis/prognosis, and operational support in healthcare, and the need to understand benefits, constraints, and best practices across datasets and methods.

Literature Review

The paper contrasts prior surveys and domain reviews. Giri et al. (2020) compared diagnostic methods on sensitivity, specificity, and throughput, discussing usability and resource availability but omitting several emerging technologies. Shah et al. (2021) broadly surveyed AI techniques across the pandemic lifecycle (from early warning to social control) but did not cover current clinical implementations. Udugama et al. (2020) provided an in-depth review of nucleic acid and serological tests with emphasis on point-of-care diagnostics, but missed several promising detection technologies beyond RT-PCR and nucleic acids. The present review extends these works by systematically comparing state-of-the-art detection methods worldwide; assessing disease severity and societal threats; detailing multiple ML/DL techniques for COVID-19 detection along with their affordability, ease of use, performance, unresolved issues, and future directions; and including a case study component. The paper also references numerous primary studies on X-ray/CT segmentation/classification, blood-test ML models, and RT-PCR optimization to synthesize a comprehensive perspective (Tables 1, 3–6).

Methodology

The authors conducted a structured literature review limited to peer-reviewed and reputable sources (Google Scholar, IEEE Xplore, ScienceDirect, Springer). Search strings included: “Detection techniques for COVID-19,” “Applications of Machine Learning in COVID,” “Deep Learning technologies for COVID-19,” “ML for COVID detection,” and “COVID Detection using DL.” Because AI for COVID-19 spans many subfields, additional manual screening of titles, abstracts, and full texts was used to identify relevant detection-focused papers. The survey is organized to: (1) describe ML/DL characteristics and potential use; (2) analyze datasets and methodologies used across detection techniques (RT-PCR, blood tests, X-ray/CT/LUS); (3) compare detection methods by challenges, ease of use, and accuracy; (4) identify research gaps and limitations; and (5) outline future directions. Evaluation dimensions included analytical efficiency, sensitivity/specificity, limit of detection, affordability, ease of use, and implementation constraints.

The paper also details a typical blood-test ML pipeline as used in cited works: imputation (e.g., k-NN with k=5), data normalization, feature selection (e.g., recursive feature elimination), and classification (Naive Bayes, SVM, Random Forest, k-NN, Logistic Regression), with hyperparameter optimization via grid search and 5-fold stratified cross-validation using AUC, and calibrated evaluation with AUC, Brier score, and accuracy. Internal-external validation is described using bootstrap and training on combined datasets to test generalization across symptomatic/asymptomatic cohorts, with sensitivity and specificity reported separately. Imaging-based pipelines leverage CNN architectures (ResNet, VGG, DenseNet, SqueezeNet), transfer learning, data augmentation, and segmentation (e.g., JCS, Covid-SegNet), with common metrics including accuracy, sensitivity, specificity, Dice, and AUC.
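The blood-test ML pipeline described above (k-NN imputation with k=5, normalization, recursive feature elimination, a classifier tuned by grid search under 5-fold stratified cross-validation scored by AUC) can be sketched with scikit-learn. This is a minimal illustrative sketch: the synthetic data, feature count, and parameter grid are assumptions for demonstration, not the settings of the cited studies.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.impute import KNNImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for routine blood-test features (WBC, CRP, LDH, ...).
X, y = make_classification(n_samples=300, n_features=12,
                           n_informative=6, random_state=0)
rng = np.random.default_rng(0)
X[rng.random(X.shape) < 0.05] = np.nan      # simulate missing lab values

pipe = Pipeline([
    ("impute", KNNImputer(n_neighbors=5)),   # k-NN imputation, k=5
    ("scale", StandardScaler()),             # data normalization
    ("select", RFE(LogisticRegression(max_iter=1000),
                   n_features_to_select=6)), # recursive feature elimination
    ("clf", RandomForestClassifier(random_state=0)),  # one surveyed classifier
])

# Illustrative hyperparameter grid; AUC-scored stratified 5-fold CV.
grid = {"clf__n_estimators": [100, 300], "clf__max_depth": [None, 5]}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
search = GridSearchCV(pipe, grid, scoring="roc_auc", cv=cv)
search.fit(X, y)
print(f"best CV AUC: {search.best_score_:.3f}")
```

Swapping the final step for SVM, Naive Bayes, or k-NN reuses the same imputation, scaling, and selection stages unchanged, which is how the cited works compare classifiers on a common footing.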

Key Findings
  • RT-PCR remains the diagnostic gold standard but faces constraints: reagent and lab shortages, turnaround times of 2–48 hours, and sensitivity dependence on specimen type and collection timing. High-throughput 384-well assays with 5 µL volumes have reported 100% sensitivity and specificity in controlled settings; saliva can show higher and more stable viral titers than NP swabs across time.
  • Blood-test ML models using routine laboratory parameters (e.g., WBC, platelets, CRP, LDH, ALP/AST, D-dimer) can discriminate COVID-19 and assess severity. Reported metrics include: Random Forest accuracy 82% and AUC 84% (Brinati et al.); sensitivity 81.9% and specificity 97.9% (Kukar et al.); several models showed strong AUC/accuracy but often on limited datasets. Some frameworks (e.g., compressed sensing pooled RT-PCR and related classifiers) reported high AUC (up to ~98.5–99.4% with XGB) in specific experimental settings.
  • Imaging-based DL (X-ray/CT/LUS) achieves high reported performance with transfer learning and segmentation, but often on small, imbalanced, or synthetic-augmented datasets:
    • ResNet18: sensitivity 98%, specificity 90.17% (Minaee et al.).
    • ResNet101: accuracy 98.93% (Jain et al.).
    • SDNet: accuracy 97.22% (Ismael and Şengür).
    • nCOVnet: positive detection accuracy 97%, overall 88% (higher false positives) (Panwar et al.).
    • Mini-COVIDNet (ultrasound, POC): accuracy 83.2% (Awasthi et al.).
    • JCS (joint classification and segmentation): average sensitivity 95%, specificity 93% (Wu et al.).
    • Covid-SegNet: Dice ~0.987; sensitivity ~0.986; precision ~0.990 (Yan et al.).
  • Ease of use and affordability: AI models (post-training) enable rapid, potentially mass screening using X-ray/CT/LUS where imaging equipment exists; blood-test ML provides a cost-effective adjunct when RT-PCR capacity is constrained; however, expert labeling, computational resources, and standardization remain barriers.
  • Data constraints substantially limit generalization: early studies used small or synthetic datasets; dataset imbalance and heterogeneity (across sites, populations, disease stages) challenge robustness, particularly in federated or multi-institutional contexts.
  • Overall, while several AI models report high accuracy on specific datasets (often 83–99% range), many underperform or lack parity with RT-PCR in real-world settings; clinical validation and standardized evaluation protocols are limited.
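The sensitivity, specificity, accuracy, and Dice figures quoted above all derive from the same confusion-matrix counts. A minimal sketch of that arithmetic, using made-up counts for a hypothetical 200-case test set (not data from any cited study):

```python
def detection_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Confusion-matrix metrics commonly reported by the surveyed models."""
    sensitivity = tp / (tp + fn)            # recall on COVID-positive cases
    specificity = tn / (tn + fp)            # recall on negative cases
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    dice = 2 * tp / (2 * tp + fp + fn)      # overlap score used in segmentation
    return {"sensitivity": sensitivity, "specificity": specificity,
            "accuracy": accuracy, "dice": dice}

# Hypothetical counts: 100 positives (90 caught), 100 negatives (92 cleared).
m = detection_metrics(tp=90, fp=8, tn=92, fn=10)
print({k: round(v, 3) for k, v in m.items()})
```

Note the trade-off visible even in this toy case: a model like nCOVnet with high positive-detection accuracy but more false positives raises sensitivity at the cost of specificity, which these four numbers make explicit.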

Discussion

The review demonstrates that AI can complement traditional diagnostics by accelerating triage and screening, especially where RT-PCR capacity is limited. Imaging-based DL and blood-test ML approaches address research questions about feasibility, speed, and cost-effectiveness: they can process readily available inputs (X-rays, routine labs) and return rapid predictions. However, findings also highlight critical constraints that temper clinical utility: small and biased datasets, reliance on synthetic augmentation, lack of standardized protocols and transparent models, data heterogeneity, and limited external/clinical validation. These factors explain discrepancies between high reported metrics and uncertain real-world performance. The synthesis underscores that AI systems need larger, diverse, well-annotated datasets; multi-modal integration (clinical features plus imaging/labs); rigorous validation (internal-external, multi-site); and operational considerations (compute, workflow integration). Addressing these will make AI-based detection more reliable and translatable to clinical practice, informing public health strategies for rapid screening, surge management, and resource allocation.

Conclusion

The paper contributes a comprehensive survey of AI-enabled COVID-19 detection techniques, comparing laboratory RT-PCR support tools, blood-test ML, and imaging-based DL on availability, ease of use, accuracy, and cost. It consolidates datasets and model performance, illustrates trade-offs, and articulates open challenges and research gaps. The authors conclude that AI techniques are poised to significantly aid healthcare through rapid screening, early diagnosis, and prognostication; yet broad adoption requires larger, well-labeled datasets, improved generalization, integration of multi-source patient data, transparency, and clinical validation. Future directions include expanding CT and LUS-based models as data grow; leveraging transfer learning to reduce training data/time; exploring federated learning while mitigating communication costs and performance degradation under heterogeneity; incorporating blockchain for privacy-preserving contact tracing; deploying ML for forecasting and resource planning; and conducting real-world clinical trials to establish efficacy and safety. As datasets mature and standards evolve, AI is expected to become a robust adjunct to conventional diagnostics in pandemic response and beyond.

Limitations

Key limitations identified include: (1) Data scarcity and imbalance—early studies used small, often synthetic or weakly labeled datasets, reducing generalizability and risking overfitting; (2) Lack of standardization—heterogeneous model architectures, preprocessing, and metrics hinder cross-study comparisons; (3) Limited clinical validation—few prospective, multi-center, real-world evaluations; (4) Model opacity—complex models lacking transparency and interpretability complicate clinical adoption; (5) Computational demands—some methods are memory- and compute-intensive, challenging deployment at scale or in low-resource settings; (6) Dataset heterogeneity—population, comorbidity, and protocol differences degrade performance across institutions and make federated learning communication-heavy with uncertain convergence under non-IID data; (7) Performance gaps—several AI models still underperform RT-PCR in accuracy and reliability; (8) Modality constraints—X-ray-only approaches may miss multi-phase disease patterns; multi-modal integration is often absent; (9) Operational constraints—specialized hardware, expert annotations, and workflow integration are non-trivial; (10) Publication-era constraints—the review notes many results predate availability of large, diverse datasets (early-pandemic bias).
