Package Summary

Tags No category tags.
Version 0.0.0
License BSD
Build type AMENT_CMAKE
Use RECOMMENDED

Repository Summary

Description Tutorials for the KBS robotics labs
Checkout URI https://github.com/uos/ros2_tutorial.git
VCS Type git
VCS Version main
Last Updated 2025-05-06
Dev Status UNKNOWN
Released UNRELEASED
Tags No category tags.
Contributing Help Wanted (-)
Good First Issues (-)
Pull Requests to Review (-)

Package Description

State estimation of ceres robot

Additional Links

No additional links.

Maintainers

  • Alexander Mock

Authors

  • Alexander Mock

ex03_state_estimation

Theory Lesson: state estimation

(TODO write it down here)

Sensors for State Estimation

Wheel Odometry

How far did my robot travel, given the number of revolutions its motors made? Work it out by hand for the simplest case: driving straight forwards. Then think about what changes when the robot has both a linear and an angular velocity.

What to know about wheel odometry:

  • The equations depend on the motion model: they differ between Ackermann and differential steering (see the sketch below for the differential-drive case).
  • Accuracy normally depends on the ground surface, but most software ignores this by assuming:
    1. The ground is flat.
    2. The wheels have infinite friction and never slip over the ground.
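
To make the simplest case concrete, here is a minimal dead-reckoning sketch for a differential-drive robot. The wheel radius, track width and encoder resolution are made-up values for illustration, not the ceres robot's parameters:

```python
import math

# Hypothetical robot parameters; the real values depend on the robot.
WHEEL_RADIUS = 0.05   # wheel radius r in meters (assumption)
TRACK_WIDTH = 0.30    # distance b between left and right wheels in meters (assumption)
TICKS_PER_REV = 1024  # encoder ticks per wheel revolution (assumption)

def integrate_odometry(x, y, theta, ticks_left, ticks_right):
    """One dead-reckoning step of differential-drive wheel odometry."""
    # Arc length traveled by each wheel since the last update.
    d_left = 2.0 * math.pi * WHEEL_RADIUS * ticks_left / TICKS_PER_REV
    d_right = 2.0 * math.pi * WHEEL_RADIUS * ticks_right / TICKS_PER_REV
    d = 0.5 * (d_left + d_right)                 # translation of the robot center
    d_theta = (d_right - d_left) / TRACK_WIDTH   # change in heading
    # Midpoint integration of the planar pose.
    x += d * math.cos(theta + 0.5 * d_theta)
    y += d * math.sin(theta + 0.5 * d_theta)
    return x, y, theta + d_theta

# Simplest case: driving straight, both wheels turn equally, heading stays 0.
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = integrate_odometry(*pose, 512, 512)
print(pose)  # ~ (1.57, 0.0, 0.0): ten half-revolutions of a 5 cm wheel
```

With unequal tick counts the same function produces an arc, which is exactly the "linear plus angular velocity" case to think about.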

Task: Find, print and understand the wheel odometry data. Which message from which package is used?

IMU

Why is it hard to use linear accelerations for translational changes?

  • Integration problems: to obtain a position, the acceleration has to be integrated twice, so any sensor bias or noise is integrated twice as well and the position error grows quadratically with time (see the sketch below).
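
A tiny simulation of why double integration is problematic; the bias and noise values are assumptions chosen purely for illustration:

```python
import random

dt = 0.01           # 100 Hz IMU (assumption)
bias = 0.05         # constant accelerometer bias in m/s^2 (assumption)
noise_std = 0.1     # white-noise standard deviation in m/s^2 (assumption)

v, x = 0.0, 0.0
for _ in range(6000):  # 60 seconds; the robot is actually standing still
    a_measured = bias + random.gauss(0.0, noise_std)  # true acceleration is 0
    v += a_measured * dt   # first integration: bias becomes a velocity error
    x += v * dt            # second integration: bias becomes a quadratic position error
print(f"position error after 60 s: {x:.1f} m")  # roughly 0.5 * bias * t^2 = 90 m
```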

Task: Find, print and understand the IMU data. Which message from which package is used?

LiDAR

Start Gazebo, open RViz and visualize the scan topic in the fixed frame base_link. Now drive the robot forward. Can you estimate how far the robot has moved? Hint: the default size of a grid cell in RViz is 1x1 m.

What you have just done by intuition is exactly what point cloud registration tries to solve. Search the internet for “ICP” (Iterative Closest Point). A small overview (a minimal sketch follows the list):

  • Find correspondences between a data set and a model by pairing closest points.
  • Estimate the transformation parameters with the Umeyama method or non-linear optimization.
  • Assumes a static environment.
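
A minimal point-to-point ICP sketch in 2D along those lines, with brute-force nearest neighbors and no outlier handling; this is an illustrative toy, not any particular library's implementation:

```python
import numpy as np

def icp_2d(data, model, iterations=20):
    """Minimal 2D point-to-point ICP: align `data` onto `model`."""
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iterations):
        moved = data @ R.T + t
        # 1. Correspondences: closest model point for every data point.
        d2 = ((moved[:, None, :] - model[None, :, :]) ** 2).sum(-1)
        matched = model[d2.argmin(axis=1)]
        # 2. Closed-form alignment increment (Umeyama/Kabsch via SVD).
        mu_d, mu_m = moved.mean(0), matched.mean(0)
        H = (moved - mu_d).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:   # guard against reflections
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        dt = mu_m - dR @ mu_d
        R, t = dR @ R, dR @ t + dt  # accumulate the increment
    return R, t

# Toy example: a scan shifted 0.5 m backwards, like the RViz experiment.
model = np.random.rand(200, 2) * 4.0
data = model - np.array([0.5, 0.0])
R, t = icp_2d(data, model)
print(t)  # ~ [0.5, 0.0]: the estimated robot motion between the two scans
```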

Registering consecutive scans locally like this is sometimes referred to as LiDAR odometry. It is a core concept of modern SLAM solutions; we will come back to it later.

Camera

It is also possible to estimate the complete 3D state of the robot using only cameras. A good overview is given by the egomotion estimation GIF on Wikipedia:

[Figure: visual-slam]

Fusion

If several sensors can all measure (parts of) the state of the robot, we need to fuse them appropriately. Here “appropriately” means that some measurements should influence parts of the state estimate more than others. For example, we can estimate the translational speed either from the linear acceleration of the IMU or from the linear velocity of the wheel odometry. Since we already know that linear accelerations are unreliable once integrated to positions, we do not want them to have much influence during fusion. And do not forget: all measurements are noisy. For exactly this problem the so-called “Kalman Filter” was invented, a simple filter that is a special case of the Bayes filter.

This page gives a good overview of how a linear Kalman filter works in 1D: https://www.kalmanfilter.net/kalman1d.html

[Figure: kalman1]

[Figure: kalman2]
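
To get a feeling for the mechanics, here is a minimal 1D linear Kalman filter sketch in the spirit of that page; the process and measurement noise values are made up for illustration:

```python
def kalman_1d(z, x, p, q, r):
    """One predict/update cycle of a 1D linear Kalman filter.

    z: new measurement; x, p: state estimate and its variance;
    q: process noise variance; r: measurement noise variance.
    """
    # Predict: a static state model, so only the uncertainty grows.
    p = p + q
    # Update: the Kalman gain weights measurement against prediction.
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1.0 - k) * p
    return x, p

# Fusing noisy range measurements of a fixed distance (assumed values).
x, p = 0.0, 1000.0  # deliberately vague initial guess
for z in [49.8, 50.4, 50.1, 49.9, 50.2]:
    x, p = kalman_1d(z, x, p, q=0.0001, r=0.25)
print(x, p)  # x approaches 50 and p shrinks with every measurement
```

The gain k is exactly the “how much do I trust this measurement” knob: a large r (noisy sensor) shrinks k and the measurement barely moves the estimate.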

Linear Kalman filters have optimal filtering properties given that

  • all sensors are perfectly modelled,
  • the system can be modelled by linear transitions,
  • the sensor noise is normally distributed,
  • the belief state is normally distributed and unimodal.

But this is rarely the case in reality. At some point a linear Kalman filter becomes inaccurate because it cannot model reality well enough. Then one usually switches to Extended Kalman Filters (EKF) or Unscented Kalman Filters (UKF).

Most mobile robots run at least one EKF, which is often used to fuse internal sensors such as the IMU and the wheel odometry. A Kalman filter is not something that is identical for every robot; it has to be configured to fit the robot’s sensors. For example, the ceres robot can be modelled as follows:

  • Use the cmd_vel topic as action / prior.
  • Use the odom topic as measurement / posterior; its linear velocity is less noisy than its rotational velocity.
  • Use the imu/data_raw topic as measurement / posterior; its rotational velocity is far less noisy than its linear acceleration.

Oftentimes the linear acceleration is ignored completely for estimating the translational state components. Instead, it is used to improve the orientation estimate, e.g. with a Madgwick filter.

With the simulation we already started a pre-configured EKF from the package robot_localization.

The result is the state of the robot, published on the odometry/filtered topic and as the tf transform odom -> base_footprint.

Task: Open RViz, switch to the fixed frame odom and enable the laser scan. Drive around and check that the robot localizes well in RViz.

Make yourself familiar with how the EKF is started and parameterized. New things you should notice:

  • launch files written in Python (a minimal sketch follows below),
  • parameters defined in YAML files.
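
As a rough idea of what such a Python launch file can look like, here is a sketch that starts robot_localization’s ekf_node; the package share path and YAML file name are assumptions for illustration, not the tutorial’s actual files:

```python
# Hypothetical minimal launch file for robot_localization's ekf_node.
import os

from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    # Assumed location of the EKF parameters; adjust to the real package layout.
    params = os.path.join(
        get_package_share_directory('ex03_state_estimation'),
        'config', 'ekf.yaml')

    return LaunchDescription([
        Node(
            package='robot_localization',
            executable='ekf_node',
            name='ekf_filter_node',
            output='screen',
            parameters=[params],  # sensor fusion is configured in the YAML file
        ),
    ])
```

In the robot_localization YAML, each sensor input has a boolean vector (e.g. imu0_config) selecting which state components of that sensor are fused; its last three entries correspond to the linear accelerations, which is what Task 1 below asks you to enable. Check the robot_localization documentation for the exact layout.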

Task 1: Change those parameters a little. For example, try to enable the linear acceleration of the IMU as an additional measurement.

Task 2: Integrate such a robot_localization launch file and its configuration files into this package.

Real Robot

File truncated at 100 lines; see the full file in the repository.

CHANGELOG
No CHANGELOG found.

Package Dependencies

System Dependencies

No direct system dependencies.

Dependant Packages

No known dependants.

Launch files

No launch files found.

Messages

No message files found.

Services

No service files found.

Plugins

No plugins found.

Recent questions tagged ex03_state_estimation at Robotics Stack Exchange
