TY - JOUR
T1 - Physics and semantic informed multi-sensor calibration via optimization theory and self-supervised learning
AU - Hayoun, Shmuel Y.
AU - Halachmi, Meir
AU - Serebro, Doron
AU - Twizer, Kfir
AU - Medezinski, Elinor
AU - Korkidi, Liron
AU - Cohen, Moshik
AU - Orr, Itai
N1 - Publisher Copyright:
© 2024, The Author(s).
PY - 2024/1/30
Y1 - 2024/1/30
AB - Widespread adoption of autonomous robotic systems relies greatly on safe and reliable operation, which in many cases derives from the ability to maintain accurate and robust perception capabilities. Environmental and operational conditions, as well as improper maintenance, can produce calibration errors that inhibit sensor fusion and, consequently, degrade perception performance and overall system usability. Traditionally, sensor calibration is performed in a controlled environment with one or more known targets. Such a procedure can only be carried out between operations and is done manually, a tedious task if it must be conducted on a regular basis. This creates an acute need for online targetless methods capable of yielding a set of geometric transformations based on perceived environmental features. However, the redundancy often required across sensing modalities poses further challenges, since the features captured by each sensor, and their distinctiveness, may vary. We present a holistic approach to the joint calibration of a camera–lidar–radar trio in a representative autonomous driving application. Leveraging prior knowledge and the physical properties of these sensing modalities, together with semantic information, we propose two targetless calibration methods within a cost-minimization framework: the first via direct online optimization and the second through self-supervised learning (SSL).
UR - http://www.scopus.com/inward/record.url?scp=85183742876&partnerID=8YFLogxK
DO - 10.1038/s41598-024-53009-z
M3 - Article
C2 - 38291178
AN - SCOPUS:85183742876
SN - 2045-2322
VL - 14
JO - Scientific Reports
JF - Scientific Reports
IS - 1
M1 - 2541
ER -