Understanding the Benefits of Hardware-Accelerated Communication in Model-Serving Applications

Computer Science

W. A. Hanafy, L. Wang, et al.

This research by Walid A. Hanafy, Limin Wang, Hyunseok Chang, Sarit Mukherjee, T. V. Lakshman, and Prashant Shenoy examines how hardware-accelerated communication can reduce latency in machine learning serving pipelines. By leveraging RDMA and GPUDirect RDMA, the study demonstrates latency savings of 15-50% (70-160 ms) over traditional TCP transport, offering concrete insights into where model-serving pipelines lose time and how to optimize them.

Abstract
It is commonly assumed that the end-to-end networking performance of edge offloading is purely dictated by the network connectivity between end devices and edge computing facilities, where ongoing innovation in 5G/6G networking can help. However, with the growing complexity of edge-offloaded computation and dynamic load balancing requirements, an offloaded task often goes through a multi-stage pipeline that spans multiple compute nodes and proxies interconnected via a dedicated network fabric within a given edge computing facility. As the latest hardware-accelerated transport technologies such as RDMA and GPUDirect RDMA are adopted to build such network fabrics, there is a need for a good understanding of the full potential of these technologies in the context of computation offload, as well as the effect of factors such as GPU scheduling and the characteristics of the computation on the net performance gain they can achieve. This paper unveils detailed insights into the latency overhead in typical machine learning (ML)-based computation pipelines and analyzes the potential benefits of adopting hardware-accelerated communication. To this end, we build a model-serving framework that supports various communication mechanisms. Using the framework, we identify performance bottlenecks in state-of-the-art model-serving pipelines and show how hardware-accelerated communication can alleviate them. For example, we show that GPUDirect RDMA can save 15-50% of model-serving latency, which amounts to 70-160 ms.
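To make the GPUDirect RDMA data path concrete, the following is a minimal, illustrative C sketch (not the authors' framework) of how a GPU buffer can be registered with an RDMA NIC via libibverbs so that remote peers can move tensors directly to and from GPU memory. It assumes CUDA, libibverbs, and the nvidia-peermem kernel module are available; queue-pair setup and error handling are omitted for brevity.

/*
 * Illustrative sketch only: register GPU memory for RDMA access.
 * Build (paths may vary): gcc gdr_sketch.c -libverbs -lcudart
 */
#include <infiniband/verbs.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    /* Open the first RDMA-capable device and allocate a protection domain. */
    struct ibv_device **devs = ibv_get_device_list(NULL);
    if (!devs || !devs[0]) { fprintf(stderr, "no RDMA device\n"); return 1; }
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Allocate a tensor-sized buffer directly in GPU memory. */
    void *gpu_buf = NULL;
    size_t len = 4 * 1024 * 1024; /* e.g., one activation tensor */
    if (cudaMalloc(&gpu_buf, len) != cudaSuccess) { return 1; }

    /*
     * With GPUDirect RDMA, ibv_reg_mr() on a CUDA device pointer pins the
     * GPU pages for the NIC; subsequent RDMA READ/WRITE operations move
     * data NIC <-> GPU without staging through host RAM, which is the
     * source of the latency savings discussed in the paper.
     */
    struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { fprintf(stderr, "GPU memory registration failed\n"); return 1; }
    printf("registered GPU buffer: lkey=0x%x rkey=0x%x\n", mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    cudaFree(gpu_buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}

Once registered, the buffer's rkey can be shared with a peer, which can then issue RDMA WRITE operations that deposit inference inputs straight into GPU memory, eliminating the host-memory staging copy that a TCP-based transport would require.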
Publisher
This information was not provided in the paper.
Published On
Jan 01, 2023
Authors
Walid A. Hanafy, Limin Wang, Hyunseok Chang, Sarit Mukherjee, T. V. Lakshman, Prashant Shenoy
Tags
latency
machine learning
hardware acceleration
RDMA
GPUDirect RDMA
model-serving
performance optimization