Publication Details

A Reality Check on Inference at Mobile Networks Edge

CARTAS, A.; KOCOUR, M.; RAMAN, A.; LEONTIADIS, I.; LUQUE, J.; SASTRY, N.; NUNEZ-MARTINEZ, L.; PERINO, D.; PERALES, C. A Reality Check on Inference at Mobile Networks Edge. In Proceedings of the 2nd ACM International Workshop on Edge Systems, Analytics and Networking (EDGESYS '19). Dresden: Association for Computing Machinery, 2019. p. 54–59. ISBN: 978-1-4503-6275-7.
Czech title
Ověření inference na okraji mobilní sítě v reálných podmínkách
Type
conference paper
Language
English
Authors
CARTAS, A.
Kocour Martin, Ing. (DCGM)
RAMAN, A.
LEONTIADIS, I.
Luque Jordi (FIT)
SASTRY, N.
NUNEZ-MARTINEZ, L.
PERINO, D.
PERALES, C.
URL
https://dl.acm.org/citation.cfm?doid=3301418.3313946
Keywords

Edge computing, Artificial Intelligence

Abstract

Edge computing is considered a key enabler to deploy Artificial Intelligence platforms to provide real-time applications such as AR/VR or cognitive assistance. Previous works show computing capabilities deployed very close to the user can actually reduce the end-to-end latency of such interactive applications. Nonetheless, the main performance bottleneck remains in the machine learning inference operation. In this paper, we question some assumptions of these works, such as the network location where edge computing is deployed and the considered software architectures, within the framework of a couple of popular machine learning tasks. Our experimental evaluation shows that after performance tuning that leverages recent advances in deep learning algorithms and hardware, network latency is now the main bottleneck on end-to-end application performance. We also report that deploying computing capabilities at the first network node still provides latency reduction but, overall, it is not required by all applications. Based on our findings, we overview the requirements and sketch the design of an adaptive architecture for general machine learning inference across edge locations.
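
To make the bottleneck claim concrete, a back-of-the-envelope decomposition (the figures are hypothetical, for illustration only, not measurements from the paper): the end-to-end budget is

$t_{\mathrm{e2e}} = t_{\mathrm{network}} + t_{\mathrm{inference}}$,

so with, say, $t_{\mathrm{network}} = 20\,\mathrm{ms}$, an untuned model at $t_{\mathrm{inference}} = 200\,\mathrm{ms}$ makes inference the bottleneck, whereas a tuned model at $t_{\mathrm{inference}} = 5\,\mathrm{ms}$ shifts the bottleneck to the network.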

Published
2019
Pages
54–59
Proceedings
Proceedings of the 2nd ACM International Workshop on Edge Systems, Analytics and Networking (EDGESYS '19)
ISBN
978-1-4503-6275-7
Publisher
Association for Computing Machinery
Place
Dresden
DOI
10.1145/3301418.3313946
UT WoS
000470896200010
EID Scopus
BibTeX
@inproceedings{BUT156850,
  author="CARTAS, A. and KOCOUR, M. and RAMAN, A. and LEONTIADIS, I. and LUQUE, J. and SASTRY, N. and NUNEZ-MARTINEZ, L. and PERINO, D. and PERALES, C.",
  title="A Reality Check on Inference at Mobile Networks Edge",
  booktitle="Proceedings of the 2nd ACM International Workshop on Edge Systems, Analytics and Networking (EDGESYS '19)",
  year="2019",
  pages="54--59",
  publisher="Association for Computing Machinery",
  address="Dressden",
  doi="10.1145/3301418.3313946",
  isbn="978-1-4503-6275-7",
  url="https://dl.acm.org/citation.cfm?doid=3301418.3313946"
}
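
A minimal LaTeX sketch for citing the entry above (the file name references.bib is an assumption for illustration):

\documentclass{article}
\begin{document}
A reality check on inference at the mobile network edge~\cite{BUT156850}.
\bibliographystyle{plain}
\bibliography{references}
\end{document}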