Welcome to the ALR-Lab

This site is still under construction

The Autonomous Learning Robots (ALR) Lab at the Institute for Anthropomatics and Robotics of the Department of Informatics focuses on the development of novel machine learning methods for robotics. Future robot technology will have to deal with challenging real-world scenarios that differ substantially from the lab environments typically considered in robotics research. Real-world environments are unknown and unstructured, containing objects of unpredictable shape and even other, unknown agents such as humans. A robot interacting with such environments can encounter so many different situations that pre-programming every task is infeasible.

Our research focuses on the intersection of machine learning, robotics, human-robot interaction, and computer vision. Our goal is to create data-efficient, mathematically principled machine learning algorithms suitable for complex robot domains such as grasping and manipulation, forceful interactions, or dynamic motor tasks. We always aim for a strong theoretical basis and derive our algorithms from first principles. In terms of methods, our work is focused on:

  • Reinforcement Learning and Policy Search
  • Imitation Learning 
  • Movement Representations
  • Time-Series Modelling
  • Model-Learning

While we strive to extend the state of the art in each of these areas of machine learning, our vision is to orchestrate these methods into a fully autonomous learning robotics system.

New ICLR paper! Differentiable Trust Region Layers for Deep Reinforcement Learning

Do you like deep RL methods such as TRPO or PPO? Then you will also like this one! Our differentiable trust region layers can be used on top of any policy optimization algorithm, such as policy gradient methods, to obtain stable updates -- no approximations or ad-hoc implementation choices required. Performance is on par with PPO in simple exploration scenarios, while we outperform PPO in more complex exploration environments.
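To give a rough idea of what a trust region projection does, here is a minimal sketch for the mean of a diagonal Gaussian policy: if the new mean steps outside a Mahalanobis-distance trust region around the old mean, it is projected back onto the boundary. This is a simplified illustration only; the paper's layers are differentiable, also handle the covariance, and come in KL, Wasserstein, and Frobenius variants.

```python
import numpy as np

def project_mean(mu_old, sigma_old, mu_new, eps):
    """Project the new policy mean into a trust region around the old mean.

    The region is measured by the Mahalanobis distance under the old
    diagonal covariance. Simplified sketch; not the paper's full layer.
    """
    diff = mu_new - mu_old
    m = np.sum(diff ** 2 / sigma_old ** 2)  # squared Mahalanobis distance
    if m <= eps:
        return mu_new  # already inside the trust region, keep the update
    # scale the step so the projected mean lies on the region boundary
    return mu_old + diff * np.sqrt(eps / m)
```

Because the projection is a simple closed-form rescaling, it can sit as a layer on top of the policy network and gradients flow through it.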

New ICLR paper! Bayesian Context Aggregation for Neural Processes

Neural Processes are powerful tools for probabilistic meta-learning. Yet they typically use rather basic aggregation methods, e.g. a mean aggregator over the context set, which does not yield consistent uncertainty estimates and leads to poor predictive performance. Aggregating in a Bayesian way using Gaussian conditioning does a much better job!
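The core idea can be sketched in a few lines: each context point contributes a latent observation with its own uncertainty, and these are fused into a Gaussian posterior over the latent task variable by standard precision-weighted Gaussian conditioning. The variable names here are illustrative; in the paper, the observations and their variances are produced by an encoder network.

```python
import numpy as np

def bayesian_aggregate(r, sigma_r, mu0=0.0, sigma0=1.0):
    """Fuse per-context latent observations r_n (with uncertainties
    sigma_n) into a Gaussian posterior over the latent variable z.

    Standard Gaussian conditioning with a factorized prior N(mu0, sigma0^2):
    precisions add up, and the mean is a precision-weighted average.
    """
    prec = 1.0 / sigma0 ** 2 + np.sum(1.0 / sigma_r ** 2, axis=0)
    var = 1.0 / prec
    mean = var * (mu0 / sigma0 ** 2 + np.sum(r / sigma_r ** 2, axis=0))
    return mean, var
```

Unlike a plain mean aggregator, confident context points automatically get more weight, and the posterior variance shrinks as more context is observed.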

CoRL paper accepted - Action Conditional Recurrent Kalman Networks (AC-RKN) for Dynamics Learning

An action-conditional probabilistic model inspired by Kalman filter operations in the latent state. Find out how we learn the complex non-Markovian dynamics of pneumatic soft robots and large hydraulic robots with this disentangled state and action representation.
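For intuition, here is one action-conditional predict/update cycle in a scalar latent space. In AC-RKN the transition and control models are learned networks and the latent state is factorized; here A, B, and Q are fixed illustrative constants, and the action enters only the prediction step, which is what keeps the state and action representations disentangled.

```python
import numpy as np

def ac_kalman_step(mu, var, action, obs, obs_var, A=1.0, B=1.0, Q=0.1):
    """One action-conditional Kalman predict/update step (scalar sketch)."""
    # predict: propagate the latent belief, conditioning on the action
    mu_prior = A * mu + B * action
    var_prior = A * A * var + Q
    # update: classic Kalman gain weighs prior against the observation
    K = var_prior / (var_prior + obs_var)
    mu_post = mu_prior + K * (obs - mu_prior)
    var_post = (1.0 - K) * var_prior
    return mu_post, var_post
```

The same two-step structure runs recurrently over a trajectory, so uncertainty is propagated through time rather than estimated per step.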

New JMLR Paper - Need to approximate complex distributions with a GMM? Here you go!

Many methods for machine learning rely on approximate inference from intractable probability distributions. Learning sufficiently accurate approximations requires a rich model family and careful exploration of the relevant modes of the target distribution...

ICLR paper accepted - Expected Information Maximization

Using the I-projection for mixture density estimation. Find out why maximum likelihood is not well suited for mixture density modelling and why you should use the I-projection instead.
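The distinction boils down to which direction of the KL divergence is minimized. Maximum likelihood corresponds to the M-projection, which is mode-covering and forces the model to average over modes it cannot represent; the I-projection is mode-seeking and lets a mixture concentrate on the modes it can actually fit:

```latex
% M-projection (equivalent to maximum likelihood): mode-covering
q^{*} = \arg\min_{q} \mathrm{KL}(p \,\|\, q)
      = \arg\min_{q} \mathbb{E}_{p}\!\left[\log \tfrac{p(x)}{q(x)}\right]

% I-projection: mode-seeking, avoids averaging over modes
q^{*} = \arg\min_{q} \mathrm{KL}(q \,\|\, p)
      = \arg\min_{q} \mathbb{E}_{q}\!\left[\log \tfrac{q(x)}{p(x)}\right]
```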

Starting @ KIT

The Autonomous Learning Robots (ALR) Lab was founded in January 2020 at KIT. The new group is now building up and looking forward to doing exciting research and teaching!