SLS department seminar – guest: Marcos Dias De Assunçao
14 November 2019 @ 10:00 - 12:00
Marcos Dias De Assunçao, an Inria researcher invited by the STACK team, will give a seminar on the IMT Atlantique campus on Thursday, 14 November 2019 at 10:00 in Amphi Besse.
Title: Reinforcement Learning for Reconfiguring Data Stream Processing Applications on Edge Computing.
Abstract:
Distributed Stream Processing (DSP) applications are increasingly used in new pervasive services that process enormous amounts of data in a seamless and near real-time fashion. Edge computing has emerged as a means to minimise the time to handle events by enabling processing (i.e., operators) to be offloaded from the Cloud to the edges of the Internet, where the data is often generated. Deciding where to execute such operators (i.e., edge or cloud) during application deployment or at runtime is not a trivial problem. In this talk, I will discuss how Reinforcement Learning (RL) and Monte-Carlo Tree Search (MCTS) can be used to reassign operators during application runtime. I will describe an optimisation to an MCTS algorithm that achieves latency similar to other approaches, but with fewer operator migrations and a faster execution time. In the second part of the talk, I will explain how we design a multi-objective RL reward that combines metrics on operator reconfiguration, infrastructure usage, and application improvement.
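To make the multi-objective reward idea more concrete, below is a minimal, hypothetical sketch (not taken from the talk or the speaker's papers): a scalar reward that weighs latency improvement against the number of operator migrations and the share of processing kept in the cloud. All names, metrics, and weights are illustrative assumptions.

```python
# Minimal sketch (hypothetical): a weighted multi-objective RL reward that
# aggregates the kinds of metrics mentioned in the abstract -- operator
# reconfiguration cost, infrastructure usage, and application improvement.
# All field names and weights are assumptions, not the speaker's method.
from dataclasses import dataclass


@dataclass
class StepMetrics:
    latency_before_ms: float   # end-to-end latency before the reassignment
    latency_after_ms: float    # end-to-end latency after the reassignment
    num_migrations: int        # operators moved between edge and cloud
    cloud_cpu_share: float     # fraction of work placed on cloud resources (0..1)


def multi_objective_reward(m: StepMetrics,
                           w_latency: float = 1.0,
                           w_migration: float = 0.1,
                           w_infra: float = 0.5) -> float:
    """Higher is better: reward latency improvement, penalise migrations
    and heavy reliance on distant cloud resources."""
    latency_gain = (m.latency_before_ms - m.latency_after_ms) / max(m.latency_before_ms, 1e-9)
    return (w_latency * latency_gain
            - w_migration * m.num_migrations
            - w_infra * m.cloud_cpu_share)


# Example: a reassignment that cuts latency by 30% at the cost of 2 migrations.
print(multi_objective_reward(StepMetrics(200.0, 140.0, 2, 0.4)))
```

In practice the weights encode the trade-off the abstract points at: a large migration penalty favours stable placements, while a large latency weight tolerates more reconfigurations to keep event-handling time low.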
Short bio:
Marcos Dias de Assuncao is an Inria Starting Researcher at Avalon, LIP, ENS Lyon. Prior to joining Inria, he was a research scientist at IBM Research in Sao Paulo. He obtained his PhD in Computer Science at the University of Melbourne in Australia (2009). Marcos has over 19 years of experience in research and development in distributed systems and networks, has published over 60 papers, filed more than 20 patent applications, and contributed to the design and development of several software systems. His current topics of interest include deep reinforcement learning for resource management problems in edge and cloud computing, and fault tolerance for distributed data stream processing applications. He also intends to design solutions that facilitate the execution of machine-learning pipelines on edge computing.