L4DR: LiDAR-4DRadar Fusion for Weather-Robust 3D Object Detection (2024)

Xun Huang1, Ziyu Xu1, Hai Wu1, Jinlong Wang1, Qiming Xia1,
Yan Xia2, Jonathan Li3, Kyle Gao3, Chenglu Wen1, Cheng Wang1
Corresponding author.

Abstract

LiDAR-based vision systems are integral to 3D object detection, which is crucial for autonomous navigation. However, they suffer from performance degradation in adverse weather conditions due to the quality deterioration of LiDAR point clouds. Fusing LiDAR with the weather-robust 4D radar sensor is expected to solve this problem. However, the fusion of LiDAR and 4D radar is challenging because the two sensors differ significantly in data quality and in the degree of degradation they suffer in adverse weather. To address these issues, we introduce L4DR, a weather-robust 3D object detection method that effectively fuses LiDAR and 4D radar. Our L4DR includes Multi-Modal Encoder (MME) and Foreground-Aware Denoising (FAD) techniques to reconcile sensor gaps, which is the first exploration of the complementarity of early fusion between LiDAR and 4D radar. Additionally, we design an Inter-Modal and Intra-Modal ({IM}2) parallel feature extraction backbone coupled with a Multi-Scale Gated Fusion (MSGF) module to counteract the varying degrees of sensor degradation under adverse weather conditions. Experimental evaluation on the VoD dataset with simulated fog shows that L4DR is more adaptable to changing weather conditions. It delivers a significant performance increase under different fog levels, improving 3D mAP by up to 20.0% over the traditional LiDAR-only approach. Moreover, the results on the K-Radar dataset validate the consistent performance improvement of L4DR in real-world adverse weather conditions.

Introduction

3D object detection is fundamental to the vision systems of unmanned platforms, extensively utilized in applications such as intelligent robot navigation (Ghita et al. 2024; Xu et al. 2022) and autonomous driving (Bijelic et al. 2020). Full Driving Automation (FDA, Level 5) relies on weather-robust 3D object detection, providing precise 3D bounding boxes even under various challenging adverse weather conditions (Qian et al. 2021). Owing to the high resolution and strong interference resistance of LiDAR sensors, LiDAR-based 3D object detection has emerged as a mainstream area of research (Yan et al. 2023; Xia et al. 2023; Wu et al. 2024). However, LiDAR sensors exhibit considerable sensitivity to weather conditions. In adverse scenarios such as rain and fog, the scanning signals suffer from substantial degradation and increased noise (Huang et al. 2024; Hahner et al. 2021). This degradation can negatively impact 3D detectors, compromising the reliability of autonomous perception systems.

Aside from LiDAR, 4D (range, azimuth, Doppler, and elevation) millimeter-wave radar has increasingly been recognized (Han et al. 2023; Sun and Zhang 2021). As shown in Figure 1 (a), 4D radar outperforms LiDAR in weather robustness, velocity measurement, and detection range. The millimeter-wave signals of 4D radar have wavelengths much larger than the tiny particles in fog, rain, and snow (Golovachev et al. 2018; Balal, Pinhasi, and Pinhasi 2016), exhibiting reduced susceptibility to weather disturbances. As shown in Fig. 1 (b), the performance gap between LiDAR and 4D radar narrows as the severity of the weather increases, making the 4D radar sensor suitable for a wide range of weather conditions. However, LiDAR still far surpasses radar on the important metrics of object classification and resolution. These circumstances make it highly worthwhile to fully fuse 4D radar and LiDAR data to improve 3D object detection. Pioneering approaches such as InterFusion (Wang et al. 2022b), M2Fusion (Wang et al. 2022a), and 3D-LRF (Chae, Kim, and Yoon 2024) represent the initial explorations into the fusion of LiDAR and 4D radar, showing significant performance improvements.

[Figure 1]

[Figure 2]

Despite these advances, existing LiDAR and 4D radar fusion has not addressed the substantial disparities in data quality and the different degrees of degradation in adverse weather. As shown in Figure 2 (a), a primary challenge arises from the significant quality disparity between the LiDAR sensor and the 4D radar sensor. A second challenge pertains to varying sensor degradation under adverse weather conditions. Figure 2 (b) shows that LiDAR sensors undergo severe data degradation in adverse weather, whereas the data quality decrease in 4D radar is significantly smaller (Han et al. 2023; Sun and Zhang 2021) (detailed explanations are given in the supplementary material). This motivates weighting LiDAR and 4D radar to different degrees, conditioned on the weather, during data fusion. These substantial data quality disparities and differing degrees of sensor degradation are not properly addressed by existing LiDAR and 4D radar fusion methods in adverse weather conditions.

Therefore, to address the above challenges, we propose an innovative fusion framework, L4DR, as shown on the right side of Fig. 2. L4DR implements fusion at two positions. The first is the "3D Fusion Encoder", which addresses the challenge of substantial data quality disparities shown in Fig. 2 (a). It consists of the Foreground-Aware Denoising (FAD) and Multi-Modal Encoder (MME) modules. The second is the fusion between the 2D backbones, which addresses the challenge of varying sensor degradation shown in Fig. 2 (b). It consists of the Inter-Modal and Intra-Modal ({IM}2) backbone and the Multi-Scale Gated Fusion (MSGF) module. Instead of the traditional approach of fusing features once before extraction (right side of Fig. 2 (b)), we fuse continuously during the extraction process, adaptively focusing on the most significant modal features under different weather conditions.

[Figure 3]

In Figure 3, our comprehensive testing showcases the resilience and superior performance of L4DR across various simulated and real-world adverse weather disturbances. Our main contributions are as follows:

  • We introduce the innovative Multi-Modal Encoder (MME) module, which achieves LiDAR and 4D radar early fusion without resorting to error-prone processes (e.g., depth estimation). This approach effectively bridges the substantial LiDAR and 4D radar data quality disparities.

  • We design an {IM}2 backbone with a Multi-Scale Gated Fusion (MSGF) module, adaptively extracting salient features from LiDAR and 4D radar in different weather conditions. This enables the model to adapt to varying levels of sensor degradation under adverse weather conditions.

  • Extensive experiments on the two benchmarks, VoD and K-Radar, demonstrate the effectiveness of our L4DR under various levels and types of adverse weather, achieving new state-of-the-art performances on both datasets.

[Figure 4]

Related work

LiDAR-based 3D object detection.

Researchers have developed single-stage and two-stage methods to tackle the challenges of 3D object detection. Single-stage detectors such as SECOND (Yan, Mao, and Li 2018), PointPillars (Lang et al. 2019), 3DSSD (Yang et al. 2020), and DSVT (Wang et al. 2023a) utilize PointNet++ (Qi et al. 2017), sparse convolution, or other point feature encoders to extract features from point clouds and perform detection in the Bird's Eye View (BEV) space. Conversely, methods such as PV-RCNN (Shi et al. 2020), PV-RCNN++ (Shi et al. 2022), Voxel-RCNN (Deng et al. 2021), and VirConv (Wu et al. 2023) focus on two-stage object detection, integrating RCNN networks into 3D detectors. Even though these mainstream methods have achieved excellent performance in normal weather, they still lack robustness under various adverse weather conditions.

LiDAR-based 3D object detection in adverse weather.

LiDAR sensors may undergo degradation under adverse weather conditions such as snow, fog, and rain. Physics-based simulations (Teufel et al. 2022; Hahner et al. 2022, 2021; Kilic et al. 2021) have been explored to reproduce point clouds under adverse weather and alleviate the issue of data scarcity. (Charron, Phillips, and Waslander 2018; Heinzler et al. 2020) utilized DROR, DSOR, or convolutional neural networks (CNNs) to classify and filter LiDAR noise points. (Xu et al. 2021) designed a general completion framework that addresses the problem of domain adaptation across different weather conditions. (Huang et al. 2024) designed a general knowledge distillation framework that transfers sunny-weather performance to rainy-weather performance. However, these methods rely primarily on single-modal LiDAR data and are therefore constrained by the decline in LiDAR quality under adverse weather conditions.

LiDAR-radar fusion-based 3D object detection.

LiDAR-radar fusion for 3D object detection has gained increasing attention in recent years. MVDNet (Qian et al. 2021) designed a framework for fusing LiDAR and radar. ST-MVDNet (Li et al. 2022) and ST-MVDNet++ (Li, O'Toole, and Kitani 2023) incorporate a self-training teacher-student scheme into MVDNet to enhance the model. The Bi-LRFusion (Wang et al. 2023b) framework employs a bidirectional fusion strategy to improve dynamic object detection performance. However, these studies focus only on 3D radar and LiDAR. As research progresses, the newest studies continue to drive the development of LiDAR-4D radar fusion: M2-Fusion (Wang et al. 2022a), InterFusion (Wang et al. 2022b), and 3D-LRF (Chae, Kim, and Yoon 2024) explore new LiDAR and 4D radar fusion approaches. However, these methods have not considered or overcome the challenges of fusing 4D radar and LiDAR under adverse weather conditions.

Methodology

Problem statement and overall design

LiDAR-4D radar fusion-based 3D object detection.

For an outdoor scene, we denote a LiDAR point cloud as $\mathcal{P}^{l}=\{p^{l}_{i}\}_{i=1}^{N_{l}}$ and a 4D radar point cloud as $\mathcal{P}^{r}=\{p^{r}_{i}\}_{i=1}^{N_{r}}$, where $p$ denotes a 3D point. A multi-modal model $\mathcal{M}$ extracts deep features $\mathcal{F}^{m}$ from $\mathcal{P}^{m}$, written as $\mathcal{F}^{m}=g(f_{\mathcal{M}}(\mathcal{P}^{m};\Theta))$. The fused features are then obtained by $\mathcal{F}^{f}=\phi(\mathcal{F}^{l},\mathcal{F}^{r})$, where $\phi$ denotes the fusion method. The objective of 3D object detection is to regress the 3D bounding boxes $B=\{b_{i}\}_{i=1}^{N_{b}}$, $B\in\mathbb{R}^{N_{b}\times 7}$.

Significant data quality disparity.

As mentioned before, there is a huge difference between $\mathcal{P}^{l}$ and $\mathcal{P}^{r}$ in the same scene. To fully fuse the two modalities, we use $\mathcal{P}^{l}$ to enhance the highly sparse $\mathcal{P}^{r}$, which lacks discriminative details. Therefore, our L4DR includes a Multi-Modal Encoder (MME, Figure 4 (b)), which performs early-fusion complementarity at the encoder. However, we found that direct data fusion would also cause the noise in $\mathcal{P}^{r}$ to spread to $\mathcal{P}^{l}$. Therefore, we integrate Foreground-Aware Denoising (FAD, Figure 4 (a)) into L4DR before MME to filter out most of the noise in $\mathcal{P}^{r}$.

Varying degradation in adverse weather conditions.

Compared to 4D radar, the quality of the LiDAR point cloud $\mathcal{P}^{l}$ is more easily affected by adverse weather conditions, leading to varying feature presentations $\mathcal{F}^{l}$. Previous backbones focusing solely on the fused inter-modal features $\mathcal{F}^{f}$ overlook the weather robustness of 4D radar, leading to challenges in addressing the frequent fluctuation of $\mathcal{F}^{l}$. To address this issue and ensure robust fusion across diverse weather conditions, we introduce the Inter-Modal and Intra-Modal ({IM}2, Figure 4 (c)) backbone. This design simultaneously emphasizes inter-modal and intra-modal features, enhancing model adaptability. However, redundancy between these features arises. Drawing inspiration from gated fusion techniques (Hosseinpour, Samadzadegan, and Javan 2022; Song, Zhao, and Skinner 2024a), we propose the Multi-Scale Gated Fusion (MSGF, Figure 4 (d)) module. MSGF utilizes the inter-modal features $\mathcal{F}^{f}$ to filter the intra-modal features $\mathcal{F}^{l}$ and $\mathcal{F}^{r}$, effectively reducing feature redundancy.

Foreground-Aware Denoising (FAD)

Due to multipath effects, 4D radar contains significant noise points. Despite applying the Constant False Alarm Rate (CFAR) algorithm to filter out noise during data acquisition, the noise level remains substantial. It is imperative to further reduce the clutter noise in 4D radar data before early data fusion to avoid noise spreading. Considering the minimal contribution of background points to object detection, this work introduces point-level foreground semantic segmentation to 4D radar denoising, performing Foreground-Aware Denoising. Specifically, we first utilize PointNet++ (Qi et al. 2017) combined with a segmentation head, denoted $\chi$, to predict the foreground semantic probability $\mathcal{S}=\chi(\mathcal{P}^{r})$ for each point in the 4D radar. Subsequently, points with a foreground probability below a predefined threshold $\tau$ are filtered out, that is, $\mathcal{P}^{r}_{new}=\{p^{r}_{i}\,|\,\mathcal{S}_{i}\geq\tau\}$. FAD effectively filters out as many noise points as possible while preventing the loss of foreground points.
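To make the filtering step concrete, the following minimal sketch (not the authors' released code) assumes a point-wise segmentation model that returns per-point foreground logits; the function name and tensor layout are illustrative.

```python
import torch

def foreground_aware_denoise(radar_points: torch.Tensor,
                             seg_model: torch.nn.Module,
                             tau: float = 0.3) -> torch.Tensor:
    """Keep only 4D radar points whose predicted foreground probability >= tau.

    radar_points: (N, C) tensor of per-point features (xyz, Doppler, RCS, ...).
    seg_model: any point-wise segmentation head (the paper uses PointNet++)
               returning foreground logits of shape (N, 1).
    """
    with torch.no_grad():
        logits = seg_model(radar_points)              # (N, 1) foreground logits
        fg_prob = torch.sigmoid(logits).squeeze(-1)   # S_i in [0, 1]
    keep = fg_prob >= tau                             # P_new = {p_i | S_i >= tau}
    return radar_points[keep]
```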

[Figure 5]

Multi-Modal Encoder (MME)

Even after denoising with FAD, there remains a significant quality disparity between LiDAR and 4D radar due to limitations in resolution. We thus design a Multi-Modal Encoder module that fuses LiDAR and radar points at an early stage to extract richer features.

As illustrated in Figure 5, we extend the traditional unimodal pillar encoding into multimodal pillar encoding to perform initial fusion at the data level, extracting richer information for subsequent feature processing. First, following (Lang et al. 2019), we encode the LiDAR point cloud into a pillar set $P^{l}=\{p^{l}_{i}\}_{i=1}^{N}$. Each LiDAR point $p^{l}_{(i,j)}$ in pillar $p^{l}_{i}$ is encoded with the feature $\boldsymbol{f}^{l}_{(i,j)}$ as

$\boldsymbol{f}^{l}_{(i,j)}=[\boldsymbol{\mathcal{X}}^{l},\boldsymbol{\mathcal{Y}}^{l}_{cl},\boldsymbol{\mathcal{Z}}^{l},\lambda], \qquad (1)$

where $\boldsymbol{\mathcal{X}}^{l}=[x^{l},y^{l},z^{l}]$ is the coordinate of the LiDAR point, $\boldsymbol{\mathcal{Y}}^{l}_{cl}$ denotes the distance from the LiDAR point to the arithmetic mean of all LiDAR points in the pillar, $\boldsymbol{\mathcal{Z}}^{l}$ denotes the (horizontal) offset from the pillar center in $x,y$ coordinates, and $\lambda$ is the reflectance. Similarly, each 4D radar point $p^{r}_{(i,j)}$ in pillar $p^{r}_{i}$ is encoded with the feature $\boldsymbol{f}^{r}_{(i,j)}$ as

$\boldsymbol{f}^{r}_{(i,j)}=[\boldsymbol{\mathcal{X}}^{r},\boldsymbol{\mathcal{Y}}^{r}_{cr},\boldsymbol{\mathcal{Z}}^{r},\boldsymbol{\mathcal{V}},\Omega], \qquad (2)$

where $\boldsymbol{\mathcal{X}}^{r}$, $\boldsymbol{\mathcal{Y}}^{r}_{cr}$, and $\boldsymbol{\mathcal{Z}}^{r}$ are analogous to the terms in Eq. 1, $\boldsymbol{\mathcal{V}}=[\mathcal{V}_{x},\mathcal{V}_{y}]$ is the Doppler information along each axis, and $\Omega$ is the Radar Cross-Section (RCS).

We then perform cross-modal feature propagation between LiDAR pillar encoding features $\boldsymbol{f}^{l}_{(i,j)}$ and radar pillar encoding features $\boldsymbol{f}^{r}_{(i,j)}$ that occupy the same coordinates. The fused LiDAR pillar encoding features $\widehat{\boldsymbol{f}}^{l}_{(i,j)}$ and 4D radar pillar encoding features $\widehat{\boldsymbol{f}}^{r}_{(i,j)}$ are obtained by fusing $\boldsymbol{f}^{l}_{(i,j)}$ and $\boldsymbol{f}^{r}_{(i,j)}$ as follows:

$\widehat{\boldsymbol{f}}^{l}_{(i,j)}=[\boldsymbol{\mathcal{X}}^{l},\boldsymbol{\mathcal{Y}}^{l}_{cl},\boldsymbol{\mathcal{Y}}^{l}_{cr},\boldsymbol{\mathcal{Z}}^{l},\lambda,\overline{\boldsymbol{\mathcal{V}}},\overline{\Omega}], \quad \mathrm{and} \quad \widehat{\boldsymbol{f}}^{r}_{(i,j)}=[\boldsymbol{\mathcal{X}}^{r},\boldsymbol{\mathcal{Y}}^{r}_{cl},\boldsymbol{\mathcal{Y}}^{r}_{cr},\boldsymbol{\mathcal{Z}}^{r},\overline{\lambda},\boldsymbol{\mathcal{V}},\Omega], \qquad (3)$

where the overline denotes the average of all point features of another modality in that pillar.

The feature propagation is beneficial because $\lambda$ and $\Omega$ are helpful for object classification, while the Doppler information $\boldsymbol{\mathcal{V}}$ is crucial for distinguishing dynamic objects (Song, Zhao, and Skinner 2024b). Cross-modal feature sharing makes comprehensive use of these advantages, and the cross-modal offsets $[\boldsymbol{\mathcal{Y}}^{m}_{cl},\boldsymbol{\mathcal{Y}}^{m}_{cr}]$, $m\in\{l,r\}$, further enrich the geometric information. MME compensates for the data quality of 4D radar under normal weather conditions and can also enhance the quality of LiDAR in adverse weather. Subsequently, we apply a linear layer and max pooling to the fused pillar encoding features $\widehat{\boldsymbol{f}}$ to obtain the corresponding modal BEV features $\mathcal{F}$.
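As a rough, simplified illustration of the cross-modal propagation in Eq. 3 (not the authors' implementation), the sketch below operates on the points of a single shared pillar and omits the pillar-center offset $\boldsymbol{\mathcal{Z}}$ and the final linear/max-pooling step; the function name and array layouts are assumptions.

```python
import numpy as np

def fuse_pillar_features(lidar_pts: np.ndarray, radar_pts: np.ndarray):
    """Augment LiDAR/radar point features within one shared pillar (cf. Eq. 3).

    lidar_pts: (Nl, 4) array -> x, y, z, reflectance (lambda)
    radar_pts: (Nr, 6) array -> x, y, z, vx, vy, RCS
    """
    lidar_mean = lidar_pts[:, :3].mean(axis=0)    # mean of LiDAR points in the pillar
    radar_mean = radar_pts[:, :3].mean(axis=0)    # mean of radar points in the pillar

    # offsets to the cluster centers of *both* modalities (Y_cl and Y_cr)
    lidar_off = np.hstack([lidar_pts[:, :3] - lidar_mean,
                           lidar_pts[:, :3] - radar_mean])
    radar_off = np.hstack([radar_pts[:, :3] - lidar_mean,
                           radar_pts[:, :3] - radar_mean])

    # cross-modal averages: Doppler/RCS shared to LiDAR, reflectance shared to radar
    v_rcs_bar = np.tile(radar_pts[:, 3:6].mean(axis=0), (len(lidar_pts), 1))
    lam_bar = np.tile(lidar_pts[:, 3:4].mean(axis=0), (len(radar_pts), 1))

    lidar_aug = np.hstack([lidar_pts, lidar_off, v_rcs_bar])   # ~ f_hat^l
    radar_aug = np.hstack([radar_pts, radar_off, lam_bar])     # ~ f_hat^r
    return lidar_aug, radar_aug
```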

[Figure 6]

{IM}2 Backbone and MSGF Block

To take full advantage of the respective strengths of LiDAR and 4D radar, it is necessary to focus on both inter-modal and intra-modal features. We introduce the Inter-Modal and Intra-Modal ({IM}2) backbone. {IM}2 serves as a multi-modal branch feature extraction module that concurrently extracts the inter-modal feature ($\mathcal{F}^{f}$) and the intra-modal features ($\mathcal{F}^{l},\mathcal{F}^{r}$). Specifically, we fuse the two intra-modal features to form an inter-modal feature,

$\mathcal{F}^{f}=\phi(\mathcal{F}^{l},\mathcal{F}^{r}), \qquad (4)$

where $\phi$ denotes the fusion approach (we use concatenation). Subsequently, we apply a convolutional block to each modal branch $\mathcal{F}^{l}$, $\mathcal{F}^{r}$, and $\mathcal{F}^{f}$ independently,

$\mathcal{F}^{m}_{\mathcal{D}}=\kappa(\mathcal{F}^{m}_{\mathcal{D}-1}), \qquad (5)$

where $\mathcal{D}\in[1,3]$ denotes the layer index, $m\in\{l,r,f\}$ indicates the modality, and $\kappa$ represents a convolutional layer with batch normalization and ReLU activation.
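A minimal sketch of one {IM}2 stage under these definitions is given below (Eqs. 4-5). It is not the authors' code; the class name, channel counts, and the exact placement of the concatenation are assumptions for illustration, and the inputs are assumed to be 2D BEV feature maps.

```python
import torch
import torch.nn as nn

class IM2Stage(nn.Module):
    """One {IM}2 stage: parallel LiDAR, radar, and fused convolutional branches."""

    def __init__(self, channels: int):
        super().__init__()
        def kappa(c_in, c_out):  # conv + BN + ReLU block (kappa in Eq. 5)
            return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                                 nn.BatchNorm2d(c_out), nn.ReLU())
        self.lidar_branch = kappa(channels, channels)       # intra-modal F^l
        self.radar_branch = kappa(channels, channels)       # intra-modal F^r
        self.fused_branch = kappa(2 * channels, channels)   # inter-modal F^f (Eq. 4)

    def forward(self, f_l: torch.Tensor, f_r: torch.Tensor):
        f_f = torch.cat([f_l, f_r], dim=1)  # phi = concatenation
        return (self.lidar_branch(f_l),
                self.radar_branch(f_r),
                self.fused_branch(f_f))
```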

However, while {IM}2 addresses some deficiencies in feature representation, this naive approach inevitably introduces redundant features. Inspired by (Song, Zhao, and Skinner 2024b), we design MSGF to adaptively filter each modal feature, performing adaptive gated fusion on the LiDAR and 4D radar feature maps at each scale.

As depicted in Fig. 6, the gated network $\mathcal{G}$ in MSGF processes input feature maps from LiDAR ($\mathcal{F}^{l}$), 4D radar ($\mathcal{F}^{r}$), and their fused counterpart ($\mathcal{F}^{f}$). On the LiDAR and 4D radar branches, the adaptive gating weights $\mathcal{W}^{l}$ for $\mathcal{F}^{l}$ and $\mathcal{W}^{r}$ for $\mathcal{F}^{r}$ are obtained by a convolution block followed by a sigmoid activation function. These weights are applied to the initial features via element-wise multiplication, thereby filtering $\mathcal{F}^{l}$ and $\mathcal{F}^{r}$ in the gating mechanism. Formally, the gated network $\mathcal{G}$ guides $\mathcal{F}^{l}$ and $\mathcal{F}^{r}$ at convolution layer index $\mathcal{D}$ to filter out redundant information as follows:

$\mathcal{F}^{m}_{\mathcal{D}}=\mathcal{G}_{\mathcal{D}}(\mathcal{F}^{m}_{\mathcal{D}},\mathcal{F}^{f}_{\mathcal{D}}),\ m\in\{l,r\}, \quad \mathrm{and} \quad \mathcal{G}^{m}_{\mathcal{D}}(\mathcal{F}_{\mathcal{D}}^{m},\mathcal{F}_{\mathcal{D}}^{f})=\mathcal{F}_{\mathcal{D}}^{m}*\delta(\kappa(\mathcal{F}_{\mathcal{D}}^{f})), \qquad (6)$

where $\kappa$ is a 3×3 convolution block and $\delta$ is a sigmoid function. $\mathcal{F}^{f}$ is the fused feature carrying information about the interactions between modalities; it discerns whether features in $\mathcal{F}^{l}$ and $\mathcal{F}^{r}$ are helpful or redundant. Using $\mathcal{F}^{f}$ for gated filtering can flexibly weight and extract features from $\mathcal{F}^{l}$ and $\mathcal{F}^{r}$ while significantly reducing feature redundancy.
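Continuing the sketch above, a per-scale gate for Eq. 6 could look like the following; `GatedFilter` is an illustrative name rather than the authors' module, and one gate would be applied to each of the LiDAR and radar branches at every scale.

```python
import torch
import torch.nn as nn

class GatedFilter(nn.Module):
    """Sigmoid gate driven by the fused feature F^f (cf. Eq. 6)."""

    def __init__(self, channels: int):
        super().__init__()
        # kappa in Eq. 6: a 3x3 convolution producing per-pixel gating logits
        self.kappa = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, f_m: torch.Tensor, f_f: torch.Tensor) -> torch.Tensor:
        weights = torch.sigmoid(self.kappa(f_f))   # adaptive gating weights W^m
        return f_m * weights                       # element-wise filtering of F^l or F^r
```

At one scale, the filtered features would then be computed as `f_l = gate_l(f_l, f_f)` and `f_r = gate_r(f_r, f_f)` before being passed to the next {IM}2 stage.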

Loss Function and Training Strategy.

We train our L4DR with the following loss:

$\mathcal{L}=\frac{1}{N_{pos}}\left(\beta_{cls}\mathcal{L}_{cls}+\beta_{loc}\mathcal{L}_{loc}+\beta_{fad}\mathcal{L}_{fad}\right), \qquad (7)$

where $N_{pos}$ is the number of positive anchors, $\beta_{cls}=1$, $\beta_{loc}=2$, and $\beta_{fad}=0.5$; $\mathcal{L}_{cls}$ is the object classification focal loss, $\mathcal{L}_{loc}$ is the object localization regression loss, and $\mathcal{L}_{fad}$ is the 4D radar noise classification focal loss of the FAD module. We use the Adam optimizer with lr = 1e-3, $\beta_{1}$ = 0.9, $\beta_{2}$ = 0.999.
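A small sketch of how Eq. 7 combines the three terms is shown below; it assumes the individual losses and the positive-anchor count are already computed elsewhere, and the function name is illustrative.

```python
import torch

def l4dr_loss(loss_cls: torch.Tensor, loss_loc: torch.Tensor, loss_fad: torch.Tensor,
              num_pos: int, beta_cls: float = 1.0, beta_loc: float = 2.0,
              beta_fad: float = 0.5) -> torch.Tensor:
    """Weighted sum of Eq. 7, normalized by the number of positive anchors."""
    return (beta_cls * loss_cls + beta_loc * loss_loc + beta_fad * loss_fad) / max(num_pos, 1)
```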

Experiments

Table 1. Results on the K-Radar dataset under various weather conditions ($AP_{BEV}$ and $AP_{3D}$ of the class "Sedan" at IoU = 0.5).

Methods | Modality | Metric | Total | Normal | Overcast | Fog | Rain | Sleet | Lightsnow | Heavysnow
RTNH (NeurIPS 2022) | 4DR | $AP_{BEV}$ | 41.1 | 41.0 | 44.6 | 45.4 | 32.9 | 50.6 | 81.5 | 56.3
RTNH (NeurIPS 2022) | 4DR | $AP_{3D}$ | 37.4 | 37.6 | 42.0 | 41.2 | 29.2 | 49.1 | 63.9 | 43.1
PointPillars (CVPR 2019) | L | $AP_{BEV}$ | 49.1 | 48.2 | 53.0 | 45.4 | 44.2 | 45.9 | 74.5 | 53.8
PointPillars (CVPR 2019) | L | $AP_{3D}$ | 22.4 | 21.8 | 28.0 | 28.2 | 27.2 | 22.6 | 23.2 | 12.9
RTNH (NeurIPS 2022) | L | $AP_{BEV}$ | 66.3 | 65.4 | 87.4 | 83.8 | 73.7 | 48.8 | 78.5 | 48.1
RTNH (NeurIPS 2022) | L | $AP_{3D}$ | 37.8 | 39.8 | 46.3 | 59.8 | 28.2 | 31.4 | 50.7 | 24.6
InterFusion (IROS 2023) | L+4DR | $AP_{BEV}$ | 52.9 | 50.0 | 59.0 | 80.3 | 50.0 | 22.7 | 72.2 | 53.3
InterFusion (IROS 2023) | L+4DR | $AP_{3D}$ | 17.5 | 15.3 | 20.5 | 47.6 | 12.9 | 9.33 | 56.8 | 25.7
3D-LRF (CVPR 2024) | L+4DR | $AP_{BEV}$ | 73.6 | 72.3 | 88.4 | 86.6 | 76.6 | 47.5 | 79.6 | 64.1
3D-LRF (CVPR 2024) | L+4DR | $AP_{3D}$ | 45.2 | 45.3 | 55.8 | 51.8 | 38.3 | 23.4 | 60.2 | 36.9
L4DR (Ours) | L+4DR | $AP_{BEV}$ | 77.5 | 76.8 | 88.6 | 89.7 | 78.2 | 59.3 | 80.9 | 53.8
L4DR (Ours) | L+4DR | $AP_{3D}$ | 53.5 | 53.0 | 64.1 | 73.2 | 53.8 | 46.2 | 52.4 | 37.0

Implementation Details.

We implement L4DR with PointPillars (Lang et al. 2019), the base architecture most commonly used in radar-based and LiDAR-4D radar fusion-based 3D object detection. This verifies the effectiveness of our L4DR while avoiding unfair comparisons caused by inherent improvements in the base architecture. We set $\tau$ in the FAD module to 0.3 during training and 0.2 during inference (more discussion can be found in the supplementary material). We conduct all experiments with a batch size of 16 on 2 RTX 3090 GPUs. Other parameter settings follow the default official configuration of the OpenPCDet (Team et al. 2020) toolbox.

Dataset and Evaluation Metrics

K-Radar dataset.

The K-Radar dataset (Paek, Kong, and Wijaya 2022) contains 58 sequences with 34944 frames of 64-line LiDAR, camera, and 4D radar data in various weather conditions. According to the official K-Radar split, we used 17458 frames for training and 17536 frames for testing. We adopt two evaluation metrics for 3D object detection: $AP_{3D}$ and $AP_{BEV}$ of the class "Sedan" at IoU = 0.5. We also provide quantitative results for other IoU thresholds and for the latest v2.1 version of the labels (see supplementary material).

View-of-Delft (VoD) dataset.

The VoD dataset (Palffy et al. 2022) contains 8693 frames of 64-line LiDAR, camera, and 4D radar data. Following the official partition, we divide the dataset into training and validation sets with 5139 and 1296 frames, respectively. All methods use the official radar input with 5 accumulated scans and single-frame LiDAR. Meanwhile, to explore performance under different fog intensities, following a series of previous works (Qian et al. 2021; Li et al. 2022), we perform fog simulation (Hahner et al. 2021) on the VoD dataset (with fog level $\mathcal{L}$ from 0 to 4 and fog density $\alpha$ = [0.00, 0.03, 0.06, 0.10, 0.20]), while the 4D radar data remains unchanged to reflect its weather robustness. We refer to this as the VoD-Fog dataset in the following. Note that we use two groups of evaluation metrics: the VoD official metrics, to better compare with results reported by previous state-of-the-art methods, and the KITTI official metrics, to better demonstrate and analyze the performance on "easy", "moderate", and "hard" objects under foggy weather.

Results on K-Radar Adverse Weather Dataset

Following 3D-LRF (Chae, Kim, and Yoon 2024), we compare our L4DR with LiDAR-only, 4D radar-only, and LiDAR-4D radar fusion-based 3D object detection methods: PointPillars (Lang et al. 2019), RTNH (Paek, KONG, and Wijaya 2022), InterFusion (Wang et al. 2022b), and 3D-LRF (Chae, Kim, and Yoon 2024). The results in Table 1 highlight the superior performance of our L4DR model in all weather conditions. Our L4DR model surpasses 3D-LRF by 8.3% in total $AP_{3D}$, demonstrating that, compared to previous fusion frameworks, our proposed fusion method utilizes the advantages of LiDAR and 4D radar more effectively. Note that we compare with 3D-LRF only on the K-Radar dataset because its code is not open-sourced and its results are available only on K-Radar. Meanwhile, it is worth noting that the performance in many adverse weather conditions (e.g., Overcast, Fog) significantly exceeds the performance in normal weather. This phenomenon is also reflected in the official K-Radar benchmark, and we discuss this counter-intuitive observation in detail in the supplementary material, together with other valuable results such as different IoU thresholds and the new version of the labels.

Table 2. Results on the VoD dataset (VoD official metrics).

Methods | Modality | Car (Entire Area) | Ped. (Entire Area) | Cyc. (Entire Area) | Car (Driving Area) | Ped. (Driving Area) | Cyc. (Driving Area)
PointPillars | 4DR | 39.7 | 31.0 | 65.1 | 71.6 | 40.5 | 87.8
LXL | 4DR | 32.8 | 39.7 | 68.1 | 70.3 | 47.3 | 87.9
FUTR3D | C+4DR | 46.0 | 35.1 | 66.0 | 78.7 | 43.1 | 86.2
BEVFusion | C+4DR | 37.9 | 41.0 | 69.0 | 70.2 | 45.9 | 89.5
RCFusion | C+4DR | 41.7 | 39.0 | 68.3 | 71.9 | 47.5 | 88.3
LXL | C+4DR | 42.3 | 49.5 | 77.1 | 72.2 | 58.3 | 88.3
PointPillars | L | 66.0 | 55.6 | 75.0 | 88.7 | 68.4 | 88.4
InterFusion | L+4DR | 66.5 | 64.5 | 78.5 | 90.7 | 72.0 | 88.7
L4DR (Ours) | L+4DR | 69.1 | 66.2 | 82.8 | 90.8 | 76.1 | 95.5

Table 3. Results on the VoD-Fog dataset (KITTI metrics): Easy / Mod. / Hard AP for Car (IoU = 0.5), Pedestrian (IoU = 0.25), and Cyclist (IoU = 0.25).

Fog Level | Methods | Modality | Car Easy | Car Mod. | Car Hard | Ped. Easy | Ped. Mod. | Ped. Hard | Cyc. Easy | Cyc. Mod. | Cyc. Hard
0 (W/o Fog) | PointPillars | L | 84.9 | 73.5 | 67.5 | 62.7 | 58.4 | 53.4 | 85.5 | 79.0 | 72.7
0 (W/o Fog) | InterFusion | L+4DR | 67.6 | 65.8 | 58.8 | 73.7 | 70.1 | 64.7 | 90.3 | 87.0 | 81.2
0 (W/o Fog) | L4DR (Ours) | L+4DR | 85.0 | 76.6 | 69.4 | 74.4 | 72.3 | 65.7 | 93.4 | 90.4 | 83.0
1 | PointPillars | L | 79.9 | 72.7 | 67.0 | 59.9 | 55.6 | 50.5 | 85.5 | 78.2 | 72.0
1 | InterFusion | L+4DR | 66.1 | 64.0 | 56.9 | 74.0 | 70.6 | 64.5 | 91.6 | 87.4 | 82.0
1 | L4DR (Ours) | L+4DR | 77.9 | 73.2 | 67.8 | 75.4 | 72.1 | 66.7 | 93.8 | 91.0 | 83.2
2 | PointPillars | L | 67.0 | 51.4 | 44.4 | 53.1 | 47.2 | 42.7 | 69.6 | 62.7 | 57.2
2 | InterFusion | L+4DR | 56.0 | 48.5 | 41.5 | 63.2 | 57.8 | 52.9 | 77.3 | 71.1 | 66.2
2 | L4DR (Ours) | L+4DR | 68.5 | 56.4 | 49.3 | 63.1 | 59.9 | 55.1 | 82.7 | 70.8 | 70.7
3 | PointPillars | L | 44.5 | 31.9 | 27.0 | 40.2 | 37.7 | 34.0 | 53.2 | 46.7 | 41.8
3 | InterFusion | L+4DR | 41.2 | 33.1 | 27.0 | 52.9 | 49.2 | 44.8 | 59.9 | 57.7 | 53.1
3 | L4DR (Ours) | L+4DR | 46.2 | 41.4 | 34.6 | 53.5 | 50.6 | 46.2 | 72.2 | 67.7 | 60.9
4 | PointPillars | L | 13.0 | 8.77 | 7.19 | 10.6 | 12.9 | 11.3 | 6.15 | 4.89 | 4.57
4 | InterFusion | L+4DR | 15.2 | 10.8 | 8.40 | 25.7 | 25.1 | 22.6 | 6.68 | 7.95 | 6.99
4 | L4DR (Ours) | L+4DR | 26.9 | 26.2 | 21.6 | 33.1 | 30.7 | 27.9 | 30.3 | 29.7 | 26.3

Results on VoD Dataset

We compare the performance of our L4DR fusion with state-of-the-art methods of different modalities on the VoD dataset using the VoD metrics. As shown in Table 2, our L4DR fusion outperforms the existing LiDAR and 4D radar fusion method InterFusion (Wang et al. 2022b) in all categories, exceeding it by 6.8% on the Cyclist class in the Driving Area. Meanwhile, our L4DR also significantly outperforms state-of-the-art methods based on other modalities, such as LXL (Xiong et al. 2024). These experimental results demonstrate that our method comprehensively fuses the LiDAR and 4D radar modalities; as a consequence, our L4DR also shows superior performance even in clear weather.

Results on Vod-Fog Simulated Dataset

We evaluate our L4DR model against LiDAR and 4D radar fusion methods on the VoD-Fog dataset using the KITTI metrics across varying levels of fog. Table 3 shows that our L4DR model outperforms LiDAR-only PointPillars across difficulty categories and fog intensities. In particular, under the most severe fog (fog level = 4), our L4DR model improves the moderate-difficulty Car, Pedestrian, and Cyclist mAP by 17.43%, 17.8%, and 24.81%, respectively, surpassing the gains obtained by InterFusion. Furthermore, our approach consistently exhibits superior performance compared to InterFusion across various scenarios, showcasing the adaptability of our L4DR fusion under adverse weather conditions.

Ablation study

Effect of each component.

We systematically evaluate each component, with the results summarized in Table 4. The 1st row represents the performance of the LiDAR-only baseline model. The 2nd and 3rd rows fuse the modalities by directly concatenating the BEV features from LiDAR and 4D radar; the enhancements observed with the addition of MME and FAD, respectively, highlight that our fusion method fully utilizes the weather robustness of the 4D radar while handling its noise problem. The 4th row indicates that the performance boost from incorporating the {IM}2 backbone alone is not substantial, primarily due to the feature redundancy it introduces. This issue is effectively addressed by the MSGF module in the 5th row, leading to the best overall performance.

Table 4. Ablation of each component (3D mAP under different fog levels $\mathcal{L}$).

MME | FAD | {IM}2 | MSGF | W/o Fog | $\mathcal{L}$ = 1 | $\mathcal{L}$ = 2 | $\mathcal{L}$ = 3 | $\mathcal{L}$ = 4
- | - | - | - | 70.3 | 68.9 | 53.8 | 38.8 | 8.92
✓ | - | - | - | 77.1 | 75.4 | 63.2 | 52.3 | 23.4
✓ | ✓ | - | - | 78.7 | 77.6 | 63.3 | 52.2 | 24.7
✓ | ✓ | ✓ | - | 78.1 | 76.8 | 62.0 | 52.3 | 26.5
✓ | ✓ | ✓ | ✓ | 79.8 | 78.8 | 64.7 | 53.3 | 28.9

Comparison with other multi-modal feature fusion.

We compare different multi-modal feature fusion blocks, including basic concatenation (Concat.) and various attention-based methods such as Transformer-based (Vaswani et al. 2017) Cross-Modal Attention (Cross-Attn.) and Self-Attention (Self-Attn.), the SE block (Hu, Shen, and Sun 2018), and the CBAM block (Woo et al. 2018); see the supplementary material for detailed fusion implementations. The experimental results (Table 5) show that while attention mechanisms outperform concatenation to some extent, they do not effectively address the challenge of fluctuating features under varying weather conditions. In contrast, our proposed MSGF, focusing on the significant features of LiDAR and 4D radar, achieves superior performance and robustness under different weather conditions.

Table 5. Comparison of multi-modal feature fusion blocks (3D mAP under different fog levels $\mathcal{L}$).

Fusion | W/o Fog | $\mathcal{L}$ = 1 | $\mathcal{L}$ = 2 | $\mathcal{L}$ = 3 | $\mathcal{L}$ = 4
Concat. | 77.9 | 76.3 | 61.9 | 49.3 | 17.7
Cross-Attn. | 77.2 | 76.0 | 63.0 | 52.7 | 30.1
Self-Attn. | 78.4 | 77.4 | 64.3 | 52.8 | 25.8
SE Block | 77.3 | 77.9 | 63.8 | 50.1 | 25.0
CBAM Block | 78.0 | 78.1 | 64.0 | 52.3 | 26.4
MSGF (Ours) | 79.8 | 78.8 | 64.7 | 53.2 | 28.8

Conclusion

In this paper, we analyzed the challenges of fusing LiDAR and 4D radar in adverse weather and proposed L4DR, an effective LiDAR and 4D radar fusion method that provides a feasible path to weather-robust outdoor 3D object detection. Our experiments on the VoD and K-Radar datasets demonstrate the effectiveness and superiority of our method across various simulated fog levels and real-world adverse weather. In summary, L4DR not only offers a promising solution for robust outdoor 3D object detection in adverse weather conditions but also sets a new benchmark for performance and robustness compared to existing fusion techniques, paving the way for enhanced safety and reliability in autonomous driving and other applications.

Limitations. While the introduction of the {IM}2 and MSGF modules allows the model to focus on more salient features, it inevitably introduces additional computation that reduces efficiency to a certain extent. The inference speed drops to about 10 FPS, which just satisfies the real-time threshold (equal to the LiDAR acquisition frequency); optimizing computational performance is a valuable direction for future research.

References

  • Balal, Pinhasi, and Pinhasi (2016)Balal, N.; Pinhasi, G.; and Pinhasi, Y. 2016.Atmospheric and Fog Effects on Ultra-Wide Band Radar Operating at Extremely High Frequencies.Sensors, 16(5): 751.
  • Bijelic etal. (2020)Bijelic, M.; Gruber, T.; Mannan, F.; Kraus, F.; Ritter, W.; Dietmayer, K.; and Heide, F. 2020.Seeing Through Fog Without Seeing Fog: Deep Multimodal Sensor Fusion in Unseen Adverse Weather.In CVPR.
  • Chae, Kim, and Yoon (2024)Chae, Y.; Kim, H.; and Yoon, K.-J. 2024.Towards Robust 3D Object Detection with LiDAR and 4D Radar Fusion in Various Weather Conditions.In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 15162–15172.
  • Charron, Phillips, and Waslander (2018)Charron, N.; Phillips, S.; and Waslander, S.L. 2018.De-Noising of Lidar Point Clouds Corrupted by Snowfall.In CRV.
  • Deng etal. (2021)Deng, J.; Shi, S.; Li, P.; Zhou, W.; Zhang, Y.; and Li, H. 2021.Voxel R-CNN: Towards High Performance Voxel-based 3D Object Detection.AAAI, 35.
  • Geiger, Lenz, and Urtasun (2012)Geiger, A.; Lenz, P.; and Urtasun, R. 2012.Are we ready for autonomous driving? The KITTI vision benchmark suite.In 2012 IEEE Conference on Computer Vision and Pattern Recognition, 3354–3361.
  • Ghita etal. (2024)Ghita, A.; Antoniussen, B.; Zimmer, W.; Greer, R.; Creß, C.; Møgelmose, A.; Trivedi, M.M.; and Knoll, A.C. 2024.ActiveAnno3D–An Active Learning Framework for Multi-Modal 3D Object Detection.arXiv preprint arXiv:2402.03235.
  • Golovachev etal. (2018)Golovachev, Y.; Etinger, A.; Pinhasi, G.A.; and Pinhasi, Y. 2018.Millimeter wave high resolution radar accuracy in fog conditions—theory and experimental verification.Sensors, 18(7): 2148.
  • Hahner etal. (2022)Hahner, M.; Sakaridis, C.; Bijelic, M.; Heide, F.; Yu, F.; Dai, D.; and VanGool, L. 2022.LiDAR Snowfall Simulation for Robust 3D Object Detection.In CVPR.
  • Hahner etal. (2021)Hahner, M.; Sakaridis, C.; Dai, D.; and VanGool, L. 2021.Fog Simulation on Real LiDAR Point Clouds for 3D Object Detection in Adverse Weather.In ICCV.
  • Han etal. (2023)Han, Z.; Wang, J.; Xu, Z.; Yang, S.; He, L.; Xu, S.; and Wang, J. 2023.4D Millimeter-Wave Radar in Autonomous Driving: A Survey. arXiv 2023.arXiv preprint arXiv:2306.04242.
  • Heinzler etal. (2020)Heinzler, R.; Piewak, F.; Schindler, P.; and Stork, W. 2020.CNN-Based Lidar Point Cloud De-Noising in Adverse Weather.IEEE Robotics and Automation Letters, 5.
  • Hosseinpour, Samadzadegan, and Javan (2022)Hosseinpour, H.; Samadzadegan, F.; and Javan, F.D. 2022.CMGFNet: A deep cross-modal gated fusion network for building extraction from very high-resolution remote sensing images.ISPRS Journal of Photogrammetry and Remote Sensing, 184: 96–115.
  • Hu, Shen, and Sun (2018)Hu, J.; Shen, L.; and Sun, G. 2018.Squeeze-and-excitation networks.In Proceedings of the IEEE conference on computer vision and pattern recognition, 7132–7141.
  • Huang etal. (2024)Huang, X.; Wu, H.; Li, X.; Fan, X.; Wen, C.; and Wang, C. 2024.Sunshine to rainstorm: Cross-weather knowledge distillation for robust 3d object detection.In Proceedings of the AAAI Conference on Artificial Intelligence, volume38, 2409–2416.
  • Kilic etal. (2021)Kilic, V.; Hegde, D.; Sindagi, V.A.; Cooper, A.; Foster, M.; and Patel, V.M. 2021.Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of Adverse Weather Conditions for 3D Object Detection.ArXiv.
  • Lang etal. (2019)Lang, A.H.; Vora, S.; Caesar, H.; Zhou, L.; Yang, J.; and Beijbom, O. 2019.PointPillars: Fast Encoders for Object Detection From Point Clouds.In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 12689–12697.
  • Li, O’Toole, and Kitani (2023)Li, Y.-J.; O’Toole, M.; and Kitani, K. 2023.St-mvdnet++: Improve vehicle detection with lidar-radar geometrical augmentation via self-training.In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1–5. IEEE.
  • Li etal. (2022)Li, Y.-J.; Park, J.; O’Toole, M.; and Kitani, K. 2022.Modality-Agnostic Learning for Radar-Lidar Fusion in Vehicle Detection.In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 918–927.
  • Paek, Kong, and Wijaya (2022)Paek, D.-H.; Kong, S.-H.; and Wijaya, K.T. 2022.K-radar: 4d radar object detection for autonomous driving in various weather conditions.Advances in Neural Information Processing Systems, 35: 3819–3829.
  • Paek, KONG, and Wijaya (2022)Paek, D.-H.; KONG, S.-H.; and Wijaya, K.T. 2022.K-Radar: 4D Radar Object Detection for Autonomous Driving in Various Weather Conditions.In Koyejo, S.; Mohamed, S.; Agarwal, A.; Belgrave, D.; Cho, K.; and Oh, A., eds., Advances in Neural Information Processing Systems, volume35, 3819–3829. Curran Associates, Inc.
  • Palffy etal. (2022)Palffy, A.; Pool, E.; Baratam, S.; Kooij, J.F.; and Gavrila, D.M. 2022.Multi-class road user detection with 3+ 1D radar in the View-of-Delft dataset.IEEE Robotics and Automation Letters, 7(2): 4961–4968.
  • Qi etal. (2017)Qi, C.R.; Yi, L.; Su, H.; and Guibas, L.J. 2017.PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space.arXiv:1706.02413.
  • Qian etal. (2021)Qian, K.; Zhu, S.; Zhang, X.; and Li, L.E. 2021.Robust Multimodal Vehicle Detection in Foggy Weather Using Complementary Lidar and Radar Signals.In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 444–453.
  • Shi etal. (2020)Shi, S.; Guo, C.; Jiang, L.; Wang, Z.; Shi, J.; Wang, X.; and Li, H. 2020.PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection.In CVPR.
  • Shi etal. (2022)Shi, S.; Jiang, L.; Deng, J.; Wang, Z.; Guo, C.; Shi, J.; Wang, X.; and Li, H. 2022.PV-RCNN++: Point-Voxel Feature Set Abstraction With Local Vector Representation for 3D Object Detection.Int. J. Comput. Vision, 131.
  • Song, Zhao, and Skinner (2024a)Song, J.; Zhao, L.; and Skinner, K.A. 2024a.LiRaFusion: Deep Adaptive LiDAR-Radar Fusion for 3D Object Detection.arXiv preprint arXiv:2402.11735.
  • Song, Zhao, and Skinner (2024b)Song, J.; Zhao, L.; and Skinner, K.A. 2024b.LiRaFusion: Deep Adaptive LiDAR-Radar Fusion for 3D Object Detection.arXiv preprint arXiv:2402.11735.
  • Sun and Zhang (2021) Sun, S.; and Zhang, Y. D. 2021. 4D automotive radar sensing for autonomous vehicles: A sparsity-oriented approach. IEEE Journal of Selected Topics in Signal Processing, 15(4): 879–891.
  • Team et al. (2020) Team, O.; et al. 2020. OpenPCDet: An open-source toolbox for 3D object detection from point clouds.
  • Teufel et al. (2022) Teufel, S.; Volk, G.; Von Bernuth, A.; and Bringmann, O. 2022. Simulating Realistic Rain, Snow, and Fog Variations For Comprehensive Performance Characterization of LiDAR Perception. In 2022 IEEE 95th Vehicular Technology Conference (VTC2022-Spring).
  • Vaswani et al. (2017) Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.
  • Wang et al. (2023a) Wang, H.; Shi, C.; Shi, S.; Lei, M.; Wang, S.; He, D.; Schiele, B.; and Wang, L. 2023a. DSVT: Dynamic Sparse Voxel Transformer With Rotated Sets. In CVPR.
  • Wang et al. (2022a) Wang, L.; Zhang, X.; Li, J.; Xv, B.; Fu, R.; Chen, H.; Yang, L.; Jin, D.; and Zhao, L. 2022a. Multi-modal and multi-scale fusion 3D object detection of 4D radar and LiDAR for autonomous driving. IEEE Transactions on Vehicular Technology.
  • Wang et al. (2022b) Wang, L.; Zhang, X.; Xv, B.; Zhang, J.; Fu, R.; Wang, X.; Zhu, L.; Ren, H.; Lu, P.; Li, J.; and Liu, H. 2022b. InterFusion: Interaction-based 4D Radar and LiDAR Fusion for 3D Object Detection. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 12247–12253.
  • Wang et al. (2023b) Wang, Y.; Deng, J.; Li, Y.; Hu, J.; Liu, C.; Zhang, Y.; Ji, J.; Ouyang, W.; and Zhang, Y. 2023b. Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 13394–13403.
  • Woo et al. (2018) Woo, S.; Park, J.; Lee, J.-Y.; and Kweon, I. S. 2018. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), 3–19.
  • Wu et al. (2023) Wu, H.; Wen, C.; Shi, S.; Li, X.; and Wang, C. 2023. Virtual Sparse Convolution for Multimodal 3D Object Detection. In CVPR.
  • Wu et al. (2024) Wu, H.; Zhao, S.; Huang, X.; Wen, C.; Li, X.; and Wang, C. 2024. Commonsense Prototype for Outdoor Unsupervised 3D Object Detection. arXiv preprint arXiv:2404.16493.
  • Xia et al. (2023) Xia, Q.; Deng, J.; Wen, C.; Wu, H.; Shi, S.; Li, X.; and Wang, C. 2023. CoIn: Contrastive Instance Feature Mining for Outdoor 3D Object Detection with Very Limited Annotations. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 6254–6263.
  • Xiong et al. (2024) Xiong, W.; Liu, J.; Huang, T.; Han, Q.-L.; Xia, Y.; and Zhu, B. 2024. LXL: LiDAR Excluded Lean 3D Object Detection With 4D Imaging Radar and Camera Fusion. IEEE Transactions on Intelligent Vehicles, 9(1): 79–92.
  • Xu et al. (2022) Xu, G.; Khan, A.; Moshayedi, A. J.; Zhang, X.; and Shuxin, Y. 2022. The Object Detection, Perspective and Obstacles In Robotic: A Review. EAI Endorsed Transactions on AI and Robotics, 1: 7–15.
  • Xu et al. (2021) Xu, Q.; Zhou, Y.; Wang, W.; Qi, C. R.; and Anguelov, D. 2021. SPG: Unsupervised Domain Adaptation for 3D Object Detection via Semantic Point Generation. In ICCV.
  • Yan et al. (2023) Yan, J.; Liu, Y.; Sun, J.; Jia, F.; Li, S.; Wang, T.; and Zhang, X. 2023. Cross Modal Transformer: Towards fast and robust 3D object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 18268–18278.
  • Yan, Mao, and Li (2018) Yan, Y.; Mao, Y.; and Li, B. 2018. SECOND: Sparsely Embedded Convolutional Detection. Sensors, 18.
  • Yang et al. (2020) Yang, Z.; Sun, Y.; Liu, S.; and Jia, J. 2020. 3DSSD: Point-Based 3D Single Stage Object Detector. In CVPR.

Appendix / Supplemental Material

Analysis of Point Distribution of LiDAR and 4D Radar under Different Weather Conditions

Although the weather robustness of 4D radar sensors is often taken as prior knowledge in existing work (Han et al. 2023; Sun and Zhang 2021), it remains under-studied quantitatively. Here, we use the variety of real-world adverse weather data in K-Radar to examine and corroborate this property. As depicted in Figure 7, we plot the point counts of LiDAR and 4D radar, averaged over frames, for each type of real-world adverse weather. Under every category of adverse weather, the LiDAR point counts at different distances from the sensor (a) exhibit a pronounced decreasing trend, reflecting the significant degradation of LiDAR data quality in adverse weather. In contrast, the 4D radar point counts at different distances from the sensor (b) show no clear correlation with weather conditions. Note that large differences in scene content and dynamic object distributions, together with the sensitivity of 4D radar to dynamic objects, cause larger fluctuations in its point count distribution. Nevertheless, the absence of a correlation between point counts and weather conditions still demonstrates, to a certain extent, the weather robustness advantage of 4D radar.
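For reference, the following is a minimal sketch of how such distance-binned average point counts can be computed; it assumes each frame's points are given as an (N, 3+) NumPy array in the sensor frame, and the bin edges are illustrative rather than the exact values used for Figure 7.

```python
import numpy as np

def mean_point_count_per_range(point_clouds, bin_edges=np.arange(0.0, 80.0, 5.0)):
    """Average number of points per distance bin over a set of frames.

    point_clouds: list of (N, 3+) arrays with x, y, z in the sensor frame.
    Returns the per-bin mean point count used to draw curves like Figure 7.
    """
    counts = []
    for pts in point_clouds:
        dist = np.linalg.norm(pts[:, :3], axis=1)      # range from the sensor
        hist, _ = np.histogram(dist, bins=bin_edges)   # points per distance bin
        counts.append(hist)
    return np.mean(np.stack(counts), axis=0)
```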

[Figure 7: Average point counts of (a) LiDAR and (b) 4D radar at different distances from the sensor under various real-world adverse weather conditions in K-Radar.]

Table 6: 3D mAP under different fog levels for different values of λ in training and testing.

λ (training)  λ (testing)  fog level 0  fog level 1  fog level 2  fog level 3  fog level 4
0.1           0.1          77.56        77.03        62.40        50.11        25.27
0.1           0.2          75.92        75.79        61.09        49.00        25.28
0.1           0.3          75.46        74.46        60.84        48.84        25.72
0.1           0.5          73.57        71.79        59.10        47.15        24.99
0.2           0.1          77.59        77.43        64.06        52.57        23.90
0.2           0.2          77.21        76.63        62.94        51.10        23.99
0.2           0.3          75.84        75.23        62.15        50.44        23.97
0.2           0.5          73.73        72.61        59.41        48.87        22.40
0.3           0.1          79.51        78.77        63.78        52.95        25.94
0.3           0.2          79.80        78.84        64.73        53.26        28.87
0.3           0.3          79.67        77.91        63.33        52.02        26.28
0.3           0.5          76.71        75.75        61.28        51.56        26.46
0.5           0.1          77.18        76.67        62.35        51.45        24.30
0.5           0.2          78.91        77.87        63.61        53.49        28.49
0.5           0.3          79.47        78.35        63.57        52.22        27.19
0.5           0.5          78.57        77.12        62.47        51.18        26.29

More Implementation Details

For the training strategy, we train the entire network for 30 epochs. We use the Adam optimizer with a learning rate of 1e-3, β1 = 0.9, and β2 = 0.999.
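A minimal PyTorch sketch of this optimizer configuration is given below; the model is a placeholder and any learning-rate schedule is omitted, since only the optimizer hyperparameters are specified here.

```python
import torch

# Placeholder network; the actual L4DR detector is constructed elsewhere.
model = torch.nn.Linear(10, 10)

optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-3,             # learning rate stated above
    betas=(0.9, 0.999),  # beta1 and beta2 stated above
)
num_epochs = 30          # total training epochs
```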

For the K-Radar dataset, we preprocess the 4D radar sparse tensor by keeping only the top 10240 points with the highest power measurement. We set the point cloud range to [0 m, 72 m] for the X axis, [-6.4 m, 6.4 m] for the Y axis, and [-2 m, 6 m] for the Z axis, matching the environment of K-Radar version 1.0, and to [0 m, 72 m] for the X axis, [-16 m, 16 m] for the Y axis, and [-2 m, 7.6 m] for the Z axis, matching the environment of K-Radar version 2.1. The voxel size is set to (0.4 m, 0.4 m, 0.4 m).
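A minimal sketch of this preprocessing follows; it assumes the radar points arrive as a NumPy array whose last column is the power measurement (the actual K-Radar tensor layout may differ), and uses the v1.0 point cloud range as the default.

```python
import numpy as np

def preprocess_radar(radar_points, max_points=10240,
                     pc_range=(0.0, -6.4, -2.0, 72.0, 6.4, 6.0)):
    """Keep the top-k highest-power radar points and crop to the point cloud range.

    radar_points: (N, C) array whose last column is assumed to be power.
    pc_range: (x_min, y_min, z_min, x_max, y_max, z_max) in meters.
    """
    # Select the top-k points with the highest power measurement.
    order = np.argsort(-radar_points[:, -1])
    pts = radar_points[order[:max_points]]

    # Crop to the configured point cloud range (voxel size would be 0.4 m per axis).
    x_min, y_min, z_min, x_max, y_max, z_max = pc_range
    mask = ((pts[:, 0] >= x_min) & (pts[:, 0] <= x_max) &
            (pts[:, 1] >= y_min) & (pts[:, 1] <= y_max) &
            (pts[:, 2] >= z_min) & (pts[:, 2] <= z_max))
    return pts[mask]
```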

For the VoD dataset, following KITTI (Geiger, Lenz, and Urtasun 2012), we calculate the 3D Average Precision (3D AP) across 40 recall thresholds (R40) for each class. Following VoD's (Palffy et al. 2022) evaluation metrics, we also calculate class-wise AP and the mAP averaged over classes. The evaluation covers the entire annotated region (camera FoV up to 50 meters) and the "Driving Corridor" region ([-4 m < x < +4 m, z < 25 m]). For both the KITTI and VoD metrics, AP is computed with the IoU thresholds specified in VoD, requiring a 50% overlap for the car class and a 25% overlap for the pedestrian and cyclist classes.
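For illustration, a minimal sketch of the "Driving Corridor" region filter is shown below; it assumes box centers are expressed in the VoD camera coordinate frame, and the function name is ours rather than part of the official evaluation code.

```python
import numpy as np

def in_driving_corridor(box_centers):
    """Boolean mask of boxes inside the VoD 'Driving Corridor' region.

    box_centers: (N, 3) array of (x, y, z) centers in the camera frame,
    where the corridor is defined as -4 m < x < +4 m and z < 25 m.
    """
    x, z = box_centers[:, 0], box_centers[:, 2]
    return (x > -4.0) & (x < 4.0) & (z < 25.0)
```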

Experimental Visualization Results

[Figure 8: Qualitative comparison between L4DR and InterFusion under different simulated fog levels.]

To better visualize how our method improves detection performance, we compare our L4DR with InterFusion (Wang et al. 2022b) under different simulated fog levels, as shown in Figure 8. L4DR effectively filters out a substantial amount of noise in the 4D radar points (depicted as colored points). Furthermore, L4DR achieves an effective fusion of LiDAR and 4D radar, improving the recall of hard-to-detect objects while reducing false detections.

Experiments on the Hyperparameter λ in FAD

We conducted extensive experiments on the hyperparameter λ in FAD at both the training and testing stages. The results are shown in Table 6: a λ that is too small leaves too much residual noise and weakens the denoising effect, while a λ that is too large discards a large number of foreground points and harms object detection. Moreover, the sensitivity to λ differs across fog levels, because the relative importance of 4D radar changes with fog density. We therefore chose the setting with the best overall performance, λ = 0.3 for training and λ = 0.2 for testing, which is also in line with our expectations. First, λ cannot simply be set to the conventional binary-classification threshold of 0.5; it needs to be lowered appropriately. Second, the training threshold should be slightly higher than the testing threshold, because data augmentations such as ground-truth sampling increase the number of foreground points during training.
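For illustration, a minimal sketch of the thresholding step in FAD is given below; it assumes a point-wise foreground probability produced by the segmentation head, and the function name and inputs are illustrative rather than the exact implementation.

```python
import numpy as np

def fad_filter(points, fg_scores, lam=0.2):
    """Keep points whose predicted foreground probability reaches lambda.

    points: (N, C) point cloud; fg_scores: (N,) sigmoid outputs of a
    point-wise foreground segmentation head. lam = 0.2 corresponds to the
    test-time threshold chosen in Table 6 (0.3 is used during training).
    """
    return points[fg_scores >= lam]
```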

Foreground Semantic Segmentation Results in FAD

Table 7: Foreground semantic segmentation results of FAD for different values of λ at testing (%).

λ (testing)  Recall  IoU    PA
0.1          84.35   38.07  88.43
0.2          78.04   50.11  93.45
0.3          73.15   54.14  94.78
0.5          65.52   54.39  95.52

We also evaluated the denoising stage of FAD as a foreground semantic segmentation task, using Recall, IoU, and Point Accuracy (PA) as the evaluation metrics, as shown in Table 7. As the hyperparameter λ increases, Recall decreases while IoU and PA gradually increase. At λ = 0.5, we obtain the best IoU and PA but the lowest Recall. This confirms that our denoising behaves as expected from a segmentation perspective. However, 3D object detection performance is worse at higher λ, because losing more foreground points is more detrimental to object detection than retaining some background points.
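For reference, a minimal sketch of how these three metrics can be computed from binary foreground masks is shown below; the function name is illustrative.

```python
import numpy as np

def fg_segmentation_metrics(pred_fg, gt_fg):
    """Recall, IoU, and Point Accuracy (PA) for binary foreground masks.

    pred_fg, gt_fg: boolean arrays of shape (N,) over all points.
    """
    tp = np.sum(pred_fg & gt_fg)
    fp = np.sum(pred_fg & ~gt_fg)
    fn = np.sum(~pred_fg & gt_fg)
    tn = np.sum(~pred_fg & ~gt_fg)
    recall = tp / (tp + fn + 1e-6)
    iou = tp / (tp + fp + fn + 1e-6)
    pa = (tp + tn) / (tp + fp + fn + tn + 1e-6)
    return recall, iou, pa
```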

More Performance on K-Radar Dataset

Table 8: Performance on the K-Radar dataset with v1.0 labels under IoU = 0.5 and IoU = 0.3.

Method                    Modality  IoU  Metric  Total  Normal  Overcast  Fog   Rain  Sleet  Light snow  Heavy snow
RTNH (NeurIPS 2022)       4DR       0.5  AP_BEV  41.1   41.0    44.6      45.4  32.9  50.6   81.5        56.3
                                         AP_3D   37.4   37.6    42.0      41.2  29.2  49.1   63.9        43.1
                                    0.3  AP_BEV  36.0   35.8    41.9      44.8  30.2  34.5   63.9        55.1
                                         AP_3D   14.1   19.7    20.5      15.9  13.0  13.5   21.0        6.36
PointPillars (CVPR 2019)  L         0.5  AP_BEV  49.1   48.2    53.0      45.4  44.2  45.9   74.5        53.8
                                         AP_3D   22.4   21.8    28.0      28.2  27.2  22.6   23.2        12.9
                                    0.3  AP_BEV  51.9   51.6    53.5      45.4  44.7  54.3   81.2        55.2
                                         AP_3D   47.3   46.7    51.9      44.8  42.4  45.5   59.2        55.2
RTNH (NeurIPS 2022)       L         0.5  AP_BEV  66.3   65.4    87.4      83.8  73.7  48.8   78.5        48.1
                                         AP_3D   37.8   39.8    46.3      59.8  28.2  31.4   50.7        24.6
                                    0.3  AP_BEV  76.5   76.5    88.2      86.3  77.3  55.3   81.1        59.5
                                         AP_3D   72.7   73.1    76.5      84.8  64.5  53.4   80.3        52.9
InterFusion (IROS 2022)   L+4DR     0.5  AP_BEV  52.9   50.0    59.0      80.3  50.0  22.7   72.2        53.3
                                         AP_3D   17.5   15.3    20.5      47.6  12.9  9.33   56.8        25.7
                                    0.3  AP_BEV  57.5   57.2    60.8      81.2  52.8  27.5   72.6        57.2
                                         AP_3D   53.0   51.1    58.1      80.9  40.4  23.0   71.0        55.2
3D-LRF (CVPR 2024)        L+4DR     0.5  AP_BEV  73.6   72.3    88.4      86.6  76.6  47.5   79.6        64.1
                                         AP_3D   45.2   45.3    55.8      51.8  38.3  23.4   60.2        36.9
                                    0.3  AP_BEV  84.0   83.7    89.2      95.4  78.3  60.7   88.9        74.9
                                         AP_3D   74.8   81.2    87.2      86.1  73.8  49.5   87.9        67.2
L4DR (Ours)               L+4DR     0.5  AP_BEV  77.5   76.8    88.6      89.7  78.2  59.3   80.9        53.8
                                         AP_3D   53.5   53.0    64.1      73.2  53.8  46.2   52.4        37.0
                                    0.3  AP_BEV  79.5   86.0    89.6      89.9  81.1  62.3   89.1        61.3
                                         AP_3D   78.0   77.7    80.0      88.6  79.2  60.1   78.9        51.9

Table 9: Class-wise performance on the K-Radar dataset with v2.0 labels (IoU = 0.3).

Class         Method                      Modality  Total  Normal  Li. Snow  He. Snow  Rain  Sleet  Overcast  Fog
Sedan         PointPillars* (CVPR 2019)   4DR       42.8   35.0    53.6      48.3      37.4  37.5   53.9      77.3
              RTNH (NeurIPS 2022)         4DR       48.2   35.5    65.6      52.6      40.3  48.1   58.8      79.3
              PointPillars* (CVPR 2019)   L         69.7   68.1    79.0      51.5      77.7  59.1   79.0      89.2
              InterFusion* (IROS 2022)    L+4DR     69.9   69.0    79.1      51.7      77.1  58.9   77.9      89.5
              L4DR (Ours)                 L+4DR     75.8   74.6    87.5      58.4      77.8  61.4   79.2      89.3
Bus or Truck  PointPillars* (CVPR 2019)   4DR       29.4   25.8    64.1      34.9      0.0   18.0   21.5      -
              RTNH (NeurIPS 2022)         4DR       34.4   25.3    78.2      46.3      0.0   28.5   31.1      -
              PointPillars* (CVPR 2019)   L         53.8   52.9    84.1      50.7      3.7   61.8   77.3      -
              InterFusion* (IROS 2022)    L+4DR     56.9   56.2    85.7      40.5      6.4   70.6   80.5      -
              L4DR (Ours)                 L+4DR     59.7   59.4    84.4      51.9      8.1   66.1   86.4      -

Due to space constraints, the main text reports K-Radar results only with IoU = 0.5 and v1.0 labels. Here we additionally present results using IoU = 0.3 with v1.0 labels in Table 8 and results using IoU = 0.3 with v2.0 labels in Table 9. All experimental results demonstrate the superior performance of our L4DR.

More Fusion Details

Below we present the implementation details of the individual fusion methods compared in Table 6 of the main text, all of which are implemented on the PointPillars baseline.

Concat.

We directly concatenate the LiDAR and 4D radar pseudo-images along the channel dimension after PointPillars encoding.
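A minimal sketch of this channel-wise concatenation is shown below; the tensor shapes are illustrative only.

```python
import torch

# BEV pseudo-images from the two PointPillars encoders (illustrative shapes).
lidar_bev = torch.randn(1, 64, 320, 320)   # LiDAR pseudo-image (B, C, H, W)
radar_bev = torch.randn(1, 64, 320, 320)   # 4D radar pseudo-image (B, C, H, W)

# Fuse by concatenating along the channel dimension -> (1, 128, 320, 320).
fused_bev = torch.cat([lidar_bev, radar_bev], dim=1)
```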

Cross-Attn.

We use a 4-head attention layer with 32-dimensional sin/cos positional encoding to compute cross-modal pillar features in both directions: the LiDAR-to-4D radar cross-modal features are added to the 4D radar pillar features, and the 4D radar-to-LiDAR cross-modal features are added to the LiDAR pillar features.
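A minimal PyTorch sketch of this bidirectional cross-attention is given below; it assumes pillar features flattened to sequences of shape (B, N, C) and a sin/cos positional encoding already projected to C channels, and the module, channel width, and tensor names are illustrative rather than the exact implementation.

```python
import torch
import torch.nn as nn

class CrossModalPillarAttention(nn.Module):
    """Illustrative bidirectional cross-attention over flattened pillar features."""

    def __init__(self, channels=64, num_heads=4):
        super().__init__()
        self.l2r = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.r2l = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, lidar, radar, pos):
        # Radar queries attend to LiDAR keys/values, and vice versa; the
        # resulting cross-modal features are added back to each modality.
        radar_cross, _ = self.l2r(radar + pos, lidar + pos, lidar)
        lidar_cross, _ = self.r2l(lidar + pos, radar + pos, radar)
        return lidar + lidar_cross, radar + radar_cross
```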

Self-Attn.

We use a 4-head attention layer with 32-dimensional sin/cos positional encoding to compute self-attention features on the last two BEV feature maps of the 2D backbone and add them to the original features.

SE Block.

We use an SE block with a channel squeeze ratio of 2 to compute SE features for each BEV feature map of the 2D backbone and add them to the original features.
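A minimal sketch of such an SE block follows, under the assumption that the squeeze refers to a channel reduction ratio of 2; the class name and the residual addition shape are illustrative.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation block with a channel squeeze ratio of 2."""

    def __init__(self, channels, reduction=2):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)              # squeeze: global spatial context
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),  # channel reduction by 2
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                                # per-channel gating weights
        )

    def forward(self, bev):                              # bev: (B, C, H, W)
        b, c, _, _ = bev.shape
        gates = self.fc(self.pool(bev).view(b, c)).view(b, c, 1, 1)
        # As described above, the re-weighted features are added to the original features.
        return bev + bev * gates
```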

CBAM Block.

We use a CBAM block to compute channel and spatial attention features for each BEV feature map of the 2D backbone and add them to the original features.
