Please use this identifier to cite or link to this item:
https://dspace.iiti.ac.in/handle/123456789/16776
| Title: | SafeTail: Tail Latency Optimization in Edge Service Scheduling via Redundancy Management |
| Authors: | Shokhanda, Jyoti; Pal, Utkarsh; Kumar, Aman; Chattopadhyay, Soumi; Bhattacharya, Arani |
| Keywords: | Edge Computing;Redundant Scheduling;Reward-based Deep Learning;Tail Latency;Augmented Reality;Deep Learning;Optimization;Redundancy;Wireless Networks;Computational Resources;Edge Server;Edge Services;Latency Optimizations;Redundancy Management;Service-scheduling |
| Issue Date: | 2025 |
| Publisher: | Institute of Electrical and Electronics Engineers Inc. |
| Citation: | Shokhanda, J., Pal, U., Kumar, A., Chattopadhyay, S., & Bhattacharya, A. (2025). SafeTail: Tail Latency Optimization in Edge Service Scheduling via Redundancy Management. IEEE Transactions on Network and Service Management. Scopus. https://doi.org/10.1109/TNSM.2025.3587752 |
| Abstract: | Optimizing tail latency while efficiently managing computational resources is crucial for delivering high-performance, latency-sensitive services in edge computing. Emerging applications, such as augmented reality, require low-latency computing services with high reliability on user devices, which often have limited computational capabilities. Consequently, these devices depend on nearby edge servers for processing. However, inherent uncertainties in network and computation latencies, stemming from variability in wireless networks and fluctuating server loads, make timely service delivery challenging. Existing approaches often focus on optimizing median latency but fall short of addressing the specific challenges of tail latency in edge environments, particularly under uncertain network and computational conditions. Although some methods do address tail latency, they typically rely on fixed or excessive redundancy and lack adaptability to dynamic network conditions, often being designed for cloud environments rather than the unique demands of edge computing. In this paper, we introduce SafeTail, a framework that meets both median and tail response time targets, with tail latency defined as latency beyond the 90th percentile threshold. SafeTail meets these targets by selectively replicating services across multiple edge servers, and it employs a reward-based deep learning framework to learn optimal placement strategies, balancing the goal of meeting target latencies against minimizing additional resource usage. Through trace-driven simulations, SafeTail demonstrated near-optimal performance and outperformed most baseline strategies across three diverse services. © 2025 IEEE. All rights reserved. |
| URI: | https://dx.doi.org/10.1109/TNSM.2025.3587752 https://dspace.iiti.ac.in:8080/jspui/handle/123456789/16776 |
| ISSN: | 1932-4537 |
| Type of Material: | Journal Article |
| Appears in Collections: | Department of Computer Science and Engineering |
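The abstract above describes a reward-based deep learning agent that weighs meeting a target latency against the cost of launching redundant replicas on edge servers. The following is a minimal, hypothetical Python sketch of that kind of reward signal only; the function name, weights, and thresholds are illustrative assumptions and are not taken from the paper or its code.

```python
# Illustrative sketch (not the authors' implementation): a reward signal that
# rewards meeting a target latency and penalizes each extra replica used.
# All names and numeric weights below are assumptions for illustration.

def reward(observed_latency_ms: float,
           target_latency_ms: float,
           num_replicas: int,
           redundancy_penalty: float = 0.1) -> float:
    """Higher reward when the fastest replica meets the target latency,
    discounted by how many redundant replicas were launched."""
    met_target = 1.0 if observed_latency_ms <= target_latency_ms else -1.0
    extra_replicas = max(num_replicas - 1, 0)
    return met_target - redundancy_penalty * extra_replicas


# Example: two replicas, fastest response 42 ms against a 50 ms target.
print(reward(observed_latency_ms=42.0, target_latency_ms=50.0, num_replicas=2))
# -> 0.9 (target met, small penalty for the one redundant replica)
```

In a learned scheduler of this kind, such a scalar reward would steer the policy toward just enough redundancy to hit the latency target, which matches the trade-off the abstract describes.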
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.