All times are in the Chilean time zone (UTC-4). Download LACORO's iCalendar here.
|07:50 – 08:00||Welcome and Introduction|
|08:00 – 09:00||
Explainable Robotic Systems in Reinforcement Learning Scenarios
Robotic systems are more present in our society every day. In human-robot environments, it is crucial that end-users correctly understand their robotic team partners in order to complete a task collaboratively. To increase action understanding, users demand more explainability about the decisions the robot makes in particular situations. Recently, explainable robotic systems have emerged as an alternative focused not only on completing a task satisfactorily but also on justifying, in a human-like manner, the reasons that lead to a decision. In reinforcement learning scenarios, a great deal of effort has been devoted to providing explanations using data-driven approaches, particularly from the visual input modality in deep learning-based systems. In this talk, we focus on the decision-making process of reinforcement learning agents performing tasks in robotic scenarios.
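One minimal, data-driven way to justify an RL agent's choice (not necessarily the speaker's method; the function name and the softmax-over-Q-values proxy are illustrative assumptions) is to turn a state's Q-values into a relative-preference statement:

```python
import math

def explain_action(q_values, actions, temperature=1.0):
    """Turn one state's Q-values into a human-readable justification.

    A softmax over Q-values serves as a rough proxy for how strongly
    the agent prefers each action in this state.
    """
    m = max(q_values)
    exps = [math.exp((q - m) / temperature) for q in q_values]
    total = sum(exps)
    prefs = [e / total for e in exps]
    best = prefs.index(max(prefs))
    others = ", ".join(a for i, a in enumerate(actions) if i != best)
    return (f"I chose '{actions[best]}' because I estimate it gives the "
            f"best outcome here ({prefs[best]:.0%} preference over {others}).")

print(explain_action([2.0, 0.5, -1.0], ["go left", "go right", "wait"]))
```

Richer approaches ground such statements in outcome probabilities or visual saliency, but even this sketch shows how internal value estimates can be surfaced to an end-user.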
|09:00 – 09:30||Break|
|09:30 – 10:30||
Mario Villalobos Kirmayr
Back to the Cybernetic Roots: Enaction Meets Computation
Cybernetics represents a common ancestor for computationalism and enactivism, two research programs that, despite their common root, have set and developed their respective theoretical agendas in opposition. In this talk, I will review some basic cybernetic ideas with respect to biological cognition, analyzing their significance for a potential enactive and computational reinterpretation of cognitive phenomena.
|08:50 – 09:00||Welcome and Introduction|
|09:00 – 10:00||
Sao Mai Nguyen
Discovering the Hierarchy of Tasks with Intrinsic Motivation Enables Transfer Learning in Curriculum Learning
Multi-task learning by robots poses the challenge of domain knowledge: the complexity of the tasks, the complexity of the actions required, and the relationships between tasks for transfer learning. However, this domain knowledge can be learned to address the challenges of high dimensionality and unboundedness in life-long learning. For instance, the hierarchy between tasks of various complexities can be learned to bootstrap the transfer of knowledge from simple to composite tasks. This domain knowledge can also be learned by imitation. We focus on the hierarchical reinforcement learning framework and on algorithms based on intrinsic motivation that explore the action space and task space, discovering the relationships between tasks and learnable subtasks. Robots can efficiently associate sequences of actions with multiple control tasks by means of representations of task dependencies, an emergent affordances mechanism, curriculum learning, and active imitation learning. These active learning algorithms infer their curriculum by deciding which tasks to explore first, how to transfer knowledge, and when, how, and whom to imitate.
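The core intrinsic-motivation idea behind such a self-inferred curriculum can be sketched in a few lines: practice next the task on which competence is currently changing fastest (learning progress). This toy class and its names are assumptions for illustration, not the speaker's algorithm:

```python
from collections import deque

class CurriculumLearner:
    """Pick which task to practice next via learning progress: the
    recent change in competence on each task, a common
    intrinsic-motivation signal."""

    def __init__(self, tasks, window=5):
        self.tasks = list(tasks)
        self.history = {t: deque(maxlen=window) for t in self.tasks}

    def record(self, task, competence):
        # competence in [0, 1], e.g. success rate on that task
        self.history[task].append(competence)

    def progress(self, task):
        h = self.history[task]
        if len(h) < 2:
            return float("inf")  # unexplored tasks are maximally interesting
        return abs(h[-1] - h[0]) / (len(h) - 1)

    def next_task(self):
        return max(self.tasks, key=self.progress)
```

Mastered tasks (flat, high competence) and currently unlearnable ones (flat, low competence) both show low progress, so exploration naturally concentrates on tasks at the frontier of the robot's abilities — which is what lets simple tasks bootstrap composite ones.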
|10:00 – 10:30||Break|
|10:30 – 11:30||
Ricardo R. Gudwin
The Robot Life Specification Problem and Its Challenge to Cognitive Architectures
Unlike service robots, which usually have fixed and repeatable behavior, autonomous cognitive robots designed to work in collaboration with human beings, carrying out a daily repertoire of responsibilities as either collaborators or companions in different kinds of working environments, pose a completely new challenge in the specification of their behavior. We are talking here about lifelong robotic agents, which should be given different kinds of responsibilities: duties they must accomplish on different time scales and realization horizons. This poses a completely new kind of problem from the specification point of view: how to properly address the many duties such robotic agents should be responsible for, and how to map them onto the different functionalities to be fulfilled by the cognitive architectures controlling them. The expected tasks for these robots involve more than simply SLAM or specific missions; they require an integration of all of that, together with handling multiple goals and motivations, choosing at each instant which of their responsibilities to attend to and which they can postpone for later. In this talk, we define what we mean by the 'Robot Life Specification Problem' and exemplify it with the case of a transportation robot in a factory environment. Then, we propose how this problem can be addressed in terms of a cognitive architecture, taking as a particular example the MECA Cognitive Architecture being developed by our research group.
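The attend-or-postpone decision described above can be pictured with a toy scheduler; the class, its urgency score, and the example responsibilities are all hypothetical and are not the MECA architecture:

```python
import heapq
import itertools

class ResponsibilityScheduler:
    """Toy scheduler for a lifelong robot: each responsibility has a
    deadline (time units from now) and an importance; at each instant
    the robot attends the most pressing one and postpones the rest."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker for stable order

    def add(self, name, deadline, importance):
        # Lower score = more pressing: near deadlines and high importance win.
        score = deadline / importance
        heapq.heappush(self._heap, (score, next(self._counter), name))

    def attend_next(self):
        return heapq.heappop(self._heap)[2] if self._heap else None
```

A real cognitive architecture would, of course, interleave this with perception, motivation dynamics, and replanning, but the sketch captures the multi-horizon selection problem the talk names.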
|11:30 – 12:00||Break|
|12:00 – 13:00||
María José Escobar
Towards Bio-Inspired Artificial Agents
The study of biological sensory systems allows us to understand the principles of computation used to extract information from the environment, inspiring new algorithms and technologies. Inspired by retinal computation, we propose visual sensors for automatic image/video equalization (tone mapping) and autonomous robot navigation. We will also analyze the cortical circuit associated with decision making, the cortical-basal ganglia loop, in order to incorporate it into a robot controller. For this, we propose a model including tonic dopamine type D1 receptors, which modulates the robot's behavior, in particular the balance between exploitation and exploration.
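A common way such tonic modulation is modeled (a sketch under the assumption that the dopamine-like signal scales the inverse temperature of softmax action selection; this is not the speaker's specific model) is:

```python
import math
import random

def select_action(q_values, dopamine):
    """Softmax action selection whose inverse temperature is scaled by a
    tonic-dopamine-like signal: high 'dopamine' -> sharp, greedy choices
    (exploitation); low 'dopamine' -> flat, random choices (exploration)."""
    beta = dopamine  # assumed monotone mapping, for illustration
    m = max(q_values)
    weights = [math.exp(beta * (q - m)) for q in q_values]
    total = sum(weights)
    probs = [w / total for w in weights]
    r, acc = random.random(), 0.0
    for action, p in enumerate(probs):
        acc += p
        if r < acc:
            return action
    return len(probs) - 1
```

Sweeping the modulation parameter then moves the controller continuously between exploratory and exploitative regimes, which is the behavioral effect the abstract attributes to tonic D1-receptor activity.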
|06:50 – 07:00||Welcome and Introduction|
|07:00 – 08:00||
How to Let Robots Learn, Develop, Communicate and Evolve
|08:00 – 08:30||Break|
|08:30 – 09:30||
How to Teach a Robot to Sing Like a Bird? (or to Model Language Learning with Recurrent Neural Networks?)
|09:30 – 10:00||Break|
|10:00 – 11:00||
Anna Helena Reali Costa
Scaling up Reinforcement Learning with Transfer
Reinforcement learning is a powerful scheme for learning through direct interaction with the environment, receiving rewards and punishments. This process can be time-consuming, and the use of prior knowledge can speed it up. When people apply information, strategies, and skills they have learned to a new situation or context, transfer of learning occurs. In this talk, I will explore how transfer of learning can be used to accelerate reinforcement learning in artificial agents. I will outline approaches that allow agents to leverage experience gained from solving previous tasks as well as advice from other agents. Some applications will also be presented.
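One of the simplest forms this takes is value-function transfer: initializing a new task's Q-table from a solved source task through an inter-task state mapping, so learning starts from informed rather than zero estimates. The function and the mapping argument here are hypothetical illustrations, not a specific method from the talk:

```python
def transfer_q_values(source_q, target_states, target_actions, mapping,
                      default=0.0):
    """Initialize a target task's Q-table from a source task's values.

    `mapping` sends target states to analogous source states; in
    practice such mappings are hand-designed or themselves learned.
    Unmapped (state, action) pairs fall back to `default`.
    """
    q = {}
    for s in target_states:
        src = mapping.get(s)  # analogous source state, if any
        for a in target_actions:
            q[(s, a)] = source_q.get((src, a), default)
    return q
```

The transferred values only bias early behavior; ordinary Q-learning updates then correct any mismatch between the two tasks, which is why a rough mapping can still yield a large speed-up.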
|11:00 – 11:15||Break|
|11:15 – 12:15||Discussion Panel / Closing|