Natalia Díaz Rodríguez

More about Natalia Díaz Rodríguez:

ENSTA Paris – Assistant Professor

Natalia Díaz Rodríguez graduated from the University of Granada (Spain) in 2010 and received her double PhD from Åbo Akademi University (Finland) and the University of Granada in 2015 on symbolic Artificial Intelligence (hybrid data-driven and knowledge-based human activity modelling and recognition for ambient assisted living).

She is Assistant Professor of Artificial Intelligence at the Autonomous Systems and Robotics Lab at ENSTA Paris, Institut Polytechnique de Paris, and a member of the INRIA Flowers team on developmental robotics. Her background is in knowledge representation, reasoning and machine learning. Her current interests include deep, reinforcement and unsupervised learning, open-ended learning, continual/lifelong learning, (state) representation learning, neural-symbolic computation, computer vision, autonomous systems, explainable AI, and AI for social good.

She has worked in R&D at CERN (Switzerland), Philips Research (Netherlands) and the University of California, Santa Cruz, and in industry in Silicon Valley at Stitch Fix Inc. (San Francisco, CA).

She has participated in a range of international projects (e.g. the EU H2020 project DREAM, www.robotsthatdream.eu) and was a Management Committee member of the EU COST (European Cooperation in Science and Technology) Action AAPELE.EU (Algorithms, Architectures and Platforms for Enhanced Living Environments), a Google Anita Borg Scholar 2014, a Heidelberg Laureate Forum 2014 & 2017 fellow, and a Nokia Foundation fellow.

She is a co-founder and board member of ContinualAI.org, a non-profit organisation and open online community of continual learning enthusiasts with nearly 600 members worldwide.

Natalia Díaz Rodríguez will speak about:

Continual Learning for Robotics, an overview

15:40 - 16:10 Monday 21 September


Masterclass:

Continual learning (CL) is a machine learning paradigm in which the data distribution and the learning objective change over time, or in which the training data and objective criteria are never all available at once. The learning process is modelled as a sequence of learning experiences, and the goal is to learn new skills throughout the sequence without forgetting what has been learned before. At the same time, continual learning aims to optimize memory, computational cost and speed during the learning process.
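One simple strategy for the "learning without forgetting" goal described above is rehearsal: keep a small memory of past data and mix it into training on each new experience. The sketch below is a toy illustration of that idea under assumed Gaussian tasks and a logistic-regression learner; it is not code from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(shift):
    """Binary task: two Gaussian blobs whose means drift with `shift`,
    simulating a data distribution that changes over time."""
    X0 = rng.normal(-1 + shift, 0.5, size=(50, 2))
    X1 = rng.normal(+1 + shift, 0.5, size=(50, 2))
    X = np.hstack([np.vstack([X0, X1]), np.ones((100, 1))])  # bias feature
    y = np.array([0] * 50 + [1] * 50)
    return X, y

def train(w, X, y, lr=0.1, epochs=20):
    """Plain logistic-regression SGD."""
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + np.exp(-xi @ w))
            w = w - lr * (p - yi) * xi
    return w

def accuracy(w, X, y):
    return float(np.mean((X @ w > 0) == (y == 1)))

tasks = [make_task(s) for s in (0.0, 0.3, 0.6)]  # sequence of experiences
w = np.zeros(3)
mem_X, mem_y = np.empty((0, 3)), np.empty(0)
for X, y in tasks:
    # Train on the current task plus a small replay memory of past tasks.
    w = train(w, np.vstack([X, mem_X]), np.concatenate([y, mem_y]))
    keep = rng.choice(len(X), size=10, replace=False)
    mem_X = np.vstack([mem_X, X[keep]])
    mem_y = np.concatenate([mem_y, y[keep]])

acc0 = accuracy(w, *tasks[0])  # performance on the first task at the end
print(f"accuracy on task 0 after all tasks: {acc0:.2f}")
```

Because a few examples from each past task are replayed, the model keeps most of its accuracy on the first task after the whole sequence; dropping the replay memory is the easiest way to observe catastrophic forgetting.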
An important challenge for machine learning is not necessarily finding solutions that work in the real world, but rather finding stable algorithms that can learn in the real world. The ideal approach is therefore to tackle the real world with an embodied platform: an autonomous agent. Continual learning would then be effective in an autonomous agent or robot, which would learn autonomously about the external world over time and incrementally develop a set of complex skills and knowledge.
Robotic agents have to learn to adapt to and interact with their environment using a continuous stream of observations. Some recent approaches aim to tackle continual learning for robotics, but most recent papers on continual learning evaluate their approaches only in simulation or on static datasets. Unfortunately, the evaluation of those algorithms does not provide insight into whether their solutions would help continual learning in the context of robotics. This talk reviews the existing state of the art of continual learning, summarizes existing benchmarks and metrics, and proposes a framework for presenting and evaluating both robotics and non-robotics approaches in a way that makes transfer between the two fields easier.
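To make the notion of continual-learning metrics concrete: a common evaluation protocol records accuracy on every task after each training stage and then reports the final average accuracy together with forgetting (how far each task fell from its best value). The numbers below are made up for illustration, not results from the talk.

```python
import numpy as np

# acc[i, j] = accuracy on task j after training on tasks 0..i
# (illustrative values only).
acc = np.array([
    [0.95, 0.10, 0.10],
    [0.80, 0.93, 0.12],
    [0.70, 0.85, 0.94],
])
T = acc.shape[0]

# Final average accuracy: mean over all tasks after the last stage.
avg_acc = acc[-1].mean()

# Forgetting: for each earlier task, the drop from its best accuracy
# during training to its accuracy at the end, averaged over tasks.
forgetting = np.mean([acc[:-1, j].max() - acc[-1, j] for j in range(T - 1)])

print(f"average accuracy: {avg_acc:.3f}")
print(f"forgetting: {forgetting:.3f}")
```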

One particularly rapidly evolving area within deep learning is the development of representation learning algorithms designed to learn abstract features that characterize data. State representation learning (SRL) focuses on a particular kind of representation learning in which the learned features are low-dimensional, evolve through time, and are influenced by the actions of an agent. In other words, SRL algorithms are built to capture the factors of variation influenced by an agent in a given environment and project them into a disentangled, low-dimensional space. Because the learned representation captures the variation in the environment generated by agents, this kind of representation is particularly well suited to robotics and control scenarios.
Moreover, the low dimensionality helps to overcome the curse of dimensionality and makes the representation easier to interpret and use, both for humans and for other algorithms.
SRL can therefore improve human-machine interaction as well as performance and speed in policy learning algorithms such as reinforcement learning. I will present an overview of the recent state of the art in state representation learning, considering methods that involve interaction with the environment and their applications in robotics control tasks. I will also present S-RL Toolbox, an open-source toolbox for SRL in reinforcement learning, with data-generating environments, baselines and metric evaluation.
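As a rough intuition for the SRL setting described above (a linear toy example, not the S-RL Toolbox API): if an agent's observations are a high-dimensional rendering of a low-dimensional true state, a good state representation recovers that low-dimensional factor. Here PCA via SVD stands in for the learned encoder; real SRL methods also exploit actions and rewards as learning signals.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy setup: the agent's true state is its 1-D position, evolving as a
# random walk driven by its actions; the observation is a noisy
# 20-dimensional rendering of that position (stand-in for raw pixels).
T = 200
true_pos = np.cumsum(rng.choice([-1.0, 1.0], size=T))
render = rng.normal(size=(1, 20))                     # fixed "rendering" map
obs = true_pos[:, None] @ render + 0.1 * rng.normal(size=(T, 20))

# SRL reduced to its simplest linear form: find the low-dimensional
# axis that explains the observations (first principal component).
obs_c = obs - obs.mean(axis=0)
_, _, Vt = np.linalg.svd(obs_c, full_matrices=False)
state = obs_c @ Vt[0]                                 # 1-D learned state

# The learned state should track the true position (up to sign/scale).
corr = abs(np.corrcoef(state, true_pos)[0, 1])
print(f"correlation with true position: {corr:.3f}")
```

The learned 1-D state is a linear, disentangled stand-in for the true position, which is exactly the kind of compact representation that makes downstream policy learning faster.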


Level: Expert.


Language: English