Welcome to the ALR-Lab
The Autonomous Learning Robots (ALR) Lab at the Institute for Anthropomatics and Robotics of the Department of Informatics focuses on the development of novel machine learning methods for robotics. Future robot technology will have to deal with very challenging real-world scenarios that are quite different from the lab environments typically considered in robotics research. Real-world environments are unknown and unstructured, containing objects of unpredictable shape and even other, unknown agents such as humans. A robot can encounter so many different situations while interacting with such environments that pre-programming its behavior for every task is infeasible.
Our research is focused on the intersection of machine learning, robotics, human-robot interaction and computer vision. Our goal is to create data-efficient and mathematically principled machine learning algorithms that are suitable for complex robot domains such as grasping and manipulation, forceful interactions or dynamic motor tasks. In our research, we always aim for a strong theoretical basis and derive our algorithms from first principles. In terms of methods, our work is focused on:
- Reinforcement Learning and Policy Search
- Imitation Learning
- Movement Representations
- Time-Series Modelling
- Model-Learning
While we strive to extend the state of the art in each of these areas of machine learning, our vision is to orchestrate these methods in order to develop a fully autonomous learning robotic system.

We design two new types of cross-category level vision regression tasks, namely object discovery and pose estimation, which are of unprecedented complexity in the meta-learning domain for computer vision, and exhaustively evaluate common meta-learning techniques to strengthen generalization capability. Furthermore, we propose functional contrastive learning (FCL) over the task representations in Conditional Neural Processes (CNPs) and train in an end-to-end fashion.
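To give a flavor of what a contrastive objective over task representations can look like, here is a minimal InfoNCE-style sketch in plain NumPy; the function name, the way positive pairs are formed from two context subsets of the same task, and the temperature value are illustrative assumptions, not the exact FCL objective.

```python
import numpy as np

def info_nce(task_repr_a, task_repr_b, temperature=0.1):
    """Illustrative InfoNCE-style contrastive loss over task representations.

    task_repr_a, task_repr_b: arrays of shape (num_tasks, dim); row i of each
    array is a representation of the *same* task computed from two disjoint
    context subsets (the positive pair), all other rows act as negatives.
    """
    # L2-normalize so the dot product becomes cosine similarity.
    a = task_repr_a / np.linalg.norm(task_repr_a, axis=1, keepdims=True)
    b = task_repr_b / np.linalg.norm(task_repr_b, axis=1, keepdims=True)

    logits = a @ b.T / temperature                        # (num_tasks, num_tasks)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The positive pair sits on the diagonal; maximize its log-probability.
    return -np.mean(np.diag(log_probs))

# Toy usage: 8 tasks, 16-dimensional task representations.
rng = np.random.default_rng(0)
z_a = rng.normal(size=(8, 16))
z_b = z_a + 0.05 * rng.normal(size=(8, 16))               # noisy second view of each task
print(info_nce(z_a, z_b))
```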

We propose a multi-task deep Kalman model that can adapt to changing dynamics and environments. The model gives state-of-the-art performance on several non-stationary robotic benchmarks with little computational overhead!

We propose a new method that enables robots to learn versatile and highly accurate skills in the contextual policy search setting by optimizing a mixture-of-experts model. We make use of curriculum learning, where the agent concentrates on local context regions it favors. The mathematical properties of the approach allow the algorithm to adjust its model complexity in order to find as many solutions as possible.
A video presenting our work can be found here.
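As a rough sketch of the model class behind this (our own notation, not necessarily the paper's exact formulation), a mixture-of-experts policy decomposes the context-conditioned action distribution into a gating distribution over options $o$ and per-option expert policies:

$$\pi(\boldsymbol{a}\mid \boldsymbol{c}) \;=\; \sum_{o} \pi(o\mid \boldsymbol{c})\,\pi(\boldsymbol{a}\mid \boldsymbol{c}, o)$$

Each expert can then specialize on the local context regions favored by its curriculum, and the number of components gives a handle for adjusting model complexity.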
We developed a new residual reinforcement learning method that manipulates not only the output of a controller but also its input (e.g., the set-points). We applied this method to a real-robot peg-in-hole setup with a significant amount of position and orientation uncertainty.
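A minimal sketch of the general idea (hypothetical controller and policy; the real system is of course more involved): a learned residual adds corrections both to the controller's input (the set-point) and to its output (the command).

```python
import numpy as np

def nominal_controller(setpoint, state, kp=2.0, kd=0.2):
    """Hand-crafted PD base controller (illustrative gains)."""
    pos, vel = state
    return kp * (setpoint - pos) - kd * vel

def residual_policy(observation):
    """Placeholder for the learned policy; in practice a trained network.
    Returns one residual for the controller input and one for its output."""
    return 0.01 * float(np.tanh(observation).sum()), 0.0

def act(setpoint, state):
    observation = np.concatenate([[setpoint], state])
    delta_setpoint, delta_command = residual_policy(observation)
    # Residual on the controller *input*: shift the set-point ...
    command = nominal_controller(setpoint + delta_setpoint, state)
    # ... and residual on the controller *output*: add to the command.
    return command + delta_command

print(act(setpoint=0.5, state=np.array([0.3, 0.0])))
```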
Do you like deep RL methods such as TRPO or PPO? Then you will also like this one! Our differentiable trust region layers can be used on top of any policy optimization algorithm, such as policy gradients, to obtain stable updates -- no approximations or implementation choices required. Performance is on par with PPO on simple exploration scenarios, while we outperform PPO on more complex exploration environments.
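For reference, the classical trust-region problem that such layers are built around (standard notation, not a description of the projection layers themselves) constrains each policy update by a KL bound:

$$\max_{\theta}\;\mathbb{E}_{s,a\sim\pi_{\theta_\text{old}}}\!\left[\frac{\pi_\theta(a\mid s)}{\pi_{\theta_\text{old}}(a\mid s)}\,A^{\pi_{\theta_\text{old}}}(s,a)\right] \quad\text{s.t.}\quad \mathrm{KL}\big(\pi_{\theta_\text{old}}(\cdot\mid s)\,\big\|\,\pi_\theta(\cdot\mid s)\big)\le\epsilon\;\;\text{for all }s$$

The layers enforce such a bound per state within the network itself, in contrast to penalty- or clipping-based surrogates.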

Neural Processes are powerful tools for probabilistic meta-learning. Yet, they use rather basic aggregation methods, i.e., a mean aggregator over the context, which does not give consistent uncertainty estimates and leads to poor prediction performance. Aggregating in a Bayesian way using Gaussian conditioning does a much better job!
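A sketch of the Gaussian conditioning step behind this (our notation): with a Gaussian prior $\mathcal{N}(\mu_0, \sigma_0^2)$ over the latent task variable and per-context-point latent observations $r_n$ with learned variances $\sigma_n^2$, the posterior follows a simple precision-weighted update:

$$\sigma_z^{-2} \;=\; \sigma_0^{-2} + \sum_{n=1}^{N}\sigma_n^{-2}, \qquad \mu_z \;=\; \sigma_z^{2}\left(\sigma_0^{-2}\mu_0 + \sum_{n=1}^{N}\sigma_n^{-2}\, r_n\right)$$

Each context point thus contributes according to its uncertainty instead of being averaged with equal weight.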

An action-conditional probabilistic model inspired by Kalman filter operations in the latent state. Find out how we learn the complex non-Markovian dynamics of pneumatic soft robots and large hydraulic robots with this disentangled state and action representation.
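Schematically (a simplified linear-Gaussian sketch in our own notation, not the exact architecture), the latent state follows a Kalman-style predict step with a separate action contribution, followed by an update with the encoded observation $\boldsymbol{w}_{t+1}$:

$$\boldsymbol{z}_{t+1}^{-} = \mathbf{A}\,\boldsymbol{z}_{t}^{+} + \mathbf{B}\,\boldsymbol{a}_{t}, \qquad \boldsymbol{z}_{t+1}^{+} = \boldsymbol{z}_{t+1}^{-} + \mathbf{K}_{t+1}\big(\boldsymbol{w}_{t+1} - \mathbf{H}\,\boldsymbol{z}_{t+1}^{-}\big)$$

Keeping the action term separate from the state transition is one way to read the "disentangled state and action representation" mentioned above.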
Many methods for machine learning rely on approximate inference from intractable probability distributions. Learning sufficiently accurate approximations requires a rich model family and careful exploration of the relevant modes of the target distribution...
Using the I-Projection for Mixture Density Estimation. Find out why maximum likelihood is not well suited for mixture density modelling and why you should use the I-projection instead.
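The distinction in a nutshell (standard notation): fitting a model $q$ to a target $p$ by maximum likelihood corresponds to the M-projection, while the I-projection reverses the arguments of the KL divergence:

$$\text{M-projection:}\;\min_{q}\,\mathrm{KL}(p\,\|\,q)=\min_{q}\,\mathbb{E}_{p}\!\left[\log\frac{p(x)}{q(x)}\right], \qquad \text{I-projection:}\;\min_{q}\,\mathrm{KL}(q\,\|\,p)=\min_{q}\,\mathbb{E}_{q}\!\left[\log\frac{q(x)}{p(x)}\right]$$

The M-projection is mode-covering and tends to place mixture components between modes of $p$, whereas the zero-forcing I-projection lets each component commit to a single mode.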
The Autonomous Learning Robots (ALR) Lab was founded in January 2020 at KIT. The new group is now building up and looking forward to doing exciting research and teaching!