
Lidar-camera calibration using the all-new Python interface!

In this blog post we explain how to perform a lidar-camera calibration with our newly released calibration library. We will calibrate a sensor setup consisting of a RealSense D455 stereo camera and an Ouster OS1 LiDAR using our custom-designed April-Checkerboard.




Motivation


Autonomous systems need multiple sensors to ensure they can perceive their environment accurately and reliably. Different sensors capture various types of data—like visual, thermal, and spatial information—allowing the system to create a comprehensive understanding of its surroundings. This redundancy helps to overcome the limitations of individual sensors, such as poor performance in certain conditions, and enhances safety, precision, and robustness in decision-making. Multiple sensors working together ensure that the system can adapt to a wide range of scenarios, making it more reliable and effective in real-world applications.



Good sensor calibration is crucial to ensure that all sensors provide accurate and consistent data, enabling the autonomous system to make precise and reliable decisions. Poor calibration can lead to errors in perception and interpretation, compromising the system's performance and safety.



Sensor Setup used in this tutorial


The setup involves a RealSense D455 stereo camera and an Ouster OS1 LiDAR, both rigidly mounted on the same mechanical base. Although we use the OS1 in this example, any other Ouster LiDAR can be used as well.







Data requirements


The requirements for the lidar calibration process are the following:


  • A rough initial guess of the camera-to-lidar transformation, accurate to within about 5°-10° (or a board with reflective tape, if the tape doesn't interfere with your lidar point cloud) - see the sketch after this list for the form such a guess takes

  • A supported calibration target
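For reference, a rough guess uses the same axis-angle plus translation form shown in the results section further down. A minimal sketch of that representation follows; how exactly camcalib consumes it is a placeholder detail here:

    # Rough camera-to-lidar initial guess: axis-angle rotation (radians)
    # and translation (meters). The dict layout is illustrative only.
    initial_guess = {
        "axis_angle": [-1.1, 1.1, -1.1],
        "translation": [0.0, 0.0, 0.0],
    }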



Calibration Target used in this example

For the calibration target we use a combination of AprilTags and a checkerboard, which we call the April-Checkerboard. However, you may use other supported calibration boards; see our previous blog post (https://www.camcalib.io/post/build-your-own-calibration-tools-and-application-with-the-newest-camcalib-library-release-for-multi) for details.




Lidar calibration with different targets


The lidar calibration can consist of up to two steps:


  1. Board plane alignment

  2. Intensity alignment (refinement step) - if your lidar provides intensity values.


Step 1 can be performed with every board type our software supports. However, step 2 requires one of the following boards:

  • April Checkerboard

  • Radon Board (requires OpenCV >= 4.3)

  • Checkerboard (requires lidar and camera messages to have the same timestamp!)


Calibration Process

In the following we walk through the calibration process step by step.


Data recording (and best practices)

We use rosbags for the calibration: ideally, one for the intrinsic camera calibration and one for the lidar-camera extrinsics. We will skip the intrinsic calibration in this example and simply use the RealSense factory calibration.
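Before starting the calibration it helps to sanity-check the recording. A small sketch using the standard rosbag Python API (the bag name is an example; topic names and rates depend on your drivers):

    import rosbag

    # List topics, message types, counts, and rates in the recorded bag to
    # verify that both camera images and lidar points were captured.
    with rosbag.Bag("lidar_camera.bag") as bag:
        info = bag.get_type_and_topic_info()
        for topic, t in info.topics.items():
            print(topic, t.msg_type, t.message_count, t.frequency)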


Getting into the code


The example doesn't need anything more than the camcalib module:
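    # the only import the example needs
    import camcalib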






Calibration


For best results we recommend performing the lidar calibration at least twice: once without and once with intensity optimization (camcalib.CalibrationSettings.use_lidar_intensity_residuals = True), provided your lidar delivers reliable intensity data. The simplest approach looks like the following:
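(Apart from camcalib.CalibrationSettings.use_lidar_intensity_residuals, which is quoted above, the attribute and function names below are placeholders that sketch the workflow; consult the library documentation for the exact API.)

    import camcalib

    # 1. Accuracy of the initial camera-to-lidar guess; this defines the
    #    search region for the lidar target (attribute name is hypothetical).
    camcalib.CalibrationSettings.initial_guess_accuracy = 10.0  # degrees

    # 2. Load the recorded data, just like for a standard camera calibration.
    data = camcalib.load("lidar_camera.bag")  # hypothetical loader

    # 3. First pass: board-plane alignment only, intensities disabled.
    camcalib.CalibrationSettings.use_lidar_intensity_residuals = False
    result = camcalib.calibrate(data)  # hypothetical call

    # 4. Second pass: refine with intensity optimization, seeded with the
    #    result of the first pass.
    camcalib.CalibrationSettings.use_lidar_intensity_residuals = True
    result = camcalib.calibrate(data, initial_guess=result)  # hypothetical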



Running this generates the following output:



As we can see, we perform four steps:


  1. Set the accuracy of the initial guess for the lidar calibration.


    This basically defines a search region for the lidar target: the larger the value, the larger the search region, and the more potential outliers end up in the calibration.

  2. Load data (just like for a standard camera calibration)

  3. Calibrate without intensities

  4. Calibrate again (using the results of the first calibration as initial guess), but this time with intensity optimization.


Visualization of the result

Looking at the results as coordinate systems, we can see that they match the physical setup above. The left picture shows the physical sensor setup and the right picture the computed extrinsics of each sensor: one lidar sensor and two cameras.
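One way to reproduce such a view yourself (independent of camcalib) is to render each estimated pose as a coordinate frame, for example with Open3D. In the sketch below the poses are placeholders, except for the camera translation, which is the calibrated value from further down:

    import numpy as np
    import open3d as o3d

    # Render each sensor pose (4x4 homogeneous transform) as a coordinate frame.
    def show_extrinsics(poses):
        frames = []
        for T in poses:
            frame = o3d.geometry.TriangleMesh.create_coordinate_frame(size=0.2)
            frames.append(frame.transform(T))
        o3d.visualization.draw_geometries(frames)

    # Placeholder poses: lidar at the origin, one camera offset by the
    # calibrated translation (rotation omitted for brevity).
    T_cam = np.eye(4)
    T_cam[:3, 3] = [0.0929664, 0.237725, -0.114295]
    show_extrinsics([np.eye(4), T_cam])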






Visualization of Lidar points on Target



Here, we want to show a before-and-after illustration of the lidar points on the camera target, i.e., the misalignment of the initial guess vs. the calibrated result. Validating calibration results visually like this is common practice. The red dotted rectangle in the left image is the projection of the points on the calibration target into the camera image based on the initial guess. As we can see, the red dotted rectangle does not align with the edges of the calibration target in image space. Once the calibration is done, the estimated extrinsic parameters between lidar and cameras allow the 3D points in the lidar frame to be projected so that they align with the edges of the calibration target in image space (green dotted rectangle).
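A generic way to create such overlays (not the camcalib visualizer itself) is to transform the lidar points with the extrinsics and project them using OpenCV; all inputs are assumed to come from your own pipeline:

    import numpy as np
    import cv2

    # Project lidar points into the image. R (3x3) and t (3,) map lidar
    # coordinates into the camera frame; K is the 3x3 camera matrix and
    # dist the distortion coefficients from the intrinsic calibration.
    def project_lidar_points(points_lidar, R, t, K, dist):
        rvec, _ = cv2.Rodrigues(R)
        pixels, _ = cv2.projectPoints(
            np.asarray(points_lidar, dtype=np.float64).reshape(-1, 1, 3),
            rvec, np.asarray(t, dtype=np.float64), K, dist)
        return pixels.reshape(-1, 2)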





Comparison of initial guess vs results


Comparing the initial guess with the actual result gives us:


Initial Guess


  • axis_angle: [-1.1, 1.1, -1.1]

  • translation: [0.0, 0.0, 0.0]



Actual Calibration



  • axis_angle: [-1.16517, 1.24054, -1.15777]

  • translation: [0.0929664, 0.237725, -0.114295]


Difference


This is an approximate difference of roughly 10° in rotation and just under 30 cm in translation. These values are also typical of the bounds we suggest for the inaccuracy of the initial guess of the lidar pose.
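These numbers are easy to verify from the values above, for example with SciPy (independent of camcalib):

    import numpy as np
    from scipy.spatial.transform import Rotation

    r_init = Rotation.from_rotvec([-1.1, 1.1, -1.1])
    r_calib = Rotation.from_rotvec([-1.16517, 1.24054, -1.15777])
    t_calib = np.array([0.0929664, 0.237725, -0.114295])

    # Geodesic angle of the relative rotation, and translation norm.
    d_rot = (r_init.inv() * r_calib).magnitude()  # radians
    print(np.degrees(d_rot), "deg")               # roughly 9-10 degrees
    print(np.linalg.norm(t_calib), "m")           # roughly 0.28 m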



Conclusion

With this tutorial you should now be able to use our new calibration library to perform a camera-to-lidar calibration. You should know the capabilities and requirements to do so, as well as the correct workflow. If you have any questions about the software or the calibration process, feel free to contact us.



