Xun Huang1, Ziyu Xu1, Hai Wu1, Jinlong Wang1, Qiming Xia1,
Yan Xia2, Jonathan Li3, Kyle Gao3, Chenglu Wen1, Cheng Wang1 (Corresponding author)
Abstract
LiDAR-based vision systems are integral to 3D object detection, which is crucial for autonomous navigation. However, they suffer from performance degradation in adverse weather conditions due to the quality deterioration of LiDAR point clouds. Fusing LiDAR with the weather-robust 4D radar sensor is expected to solve this problem. However, the fusion of LiDAR and 4D radar is challenging because the two sensors differ significantly in data quality and in the degree of degradation under adverse weather. To address these issues, we introduce L4DR, a weather-robust 3D object detection method that effectively fuses LiDAR and 4D radar. Our L4DR includes Multi-Modal Encoder (MME) and Foreground-Aware Denoising (FAD) modules to reconcile the sensor gaps, which is the first exploration of the complementarity of early fusion between LiDAR and 4D radar. Additionally, we design an Inter-Modal and Intra-Modal ({IM}2) parallel feature extraction backbone coupled with a Multi-Scale Gated Fusion (MSGF) module to counteract the varying degrees of sensor degradation under adverse weather. Experimental evaluation on the VoD dataset with simulated fog proves that L4DR is more adaptable to changing weather conditions, delivering a significant performance increase under different fog levels and improving 3D mAP by up to 20.0% over the traditional LiDAR-only approach. Moreover, results on the K-Radar dataset validate the consistent performance improvement of L4DR in real-world adverse weather conditions.
Introduction
3D object detection is fundamental to the vision systems of unmanned platforms, extensively utilized in applications such as intelligent robot navigation (Ghita et al. 2024; Xu et al. 2022) and autonomous driving (Bijelic et al. 2020). Full Driving Automation (FDA, Level 5) relies on weather-robust 3D object detection that provides precise 3D bounding boxes even under various challenging adverse weather conditions (Qian et al. 2021). Owing to the high resolution and strong interference resistance of LiDAR sensors, LiDAR-based 3D object detection has emerged as a mainstream area of research (Yan et al. 2023; Xia et al. 2023; Wu et al. 2024). However, LiDAR sensors are considerably sensitive to weather conditions. In adverse scenarios such as rain and fog, the scanning signals suffer from substantial degradation and increased noise (Huang et al. 2024; Hahner et al. 2021). This degradation can negatively impact 3D detectors, compromising the reliability of autonomous perception systems.
Aside from LiDAR, 4D (range, azimuth, Doppler, and elevation) millimeter-wave radar has increasingly been recognized (Han et al. 2023; Sun and Zhang 2021). As shown in Figure 1 (a), 4D radar outperforms LiDAR in weather robustness, velocity measurement, and detection range. The millimeter-wave signals of 4D radar have wavelengths much larger than the tiny particles in fog, rain, and snow (Golovachev et al. 2018; Balal, Pinhasi, and Pinhasi 2016), exhibiting reduced susceptibility to weather disturbances. As shown in Figure 1 (b), the performance gap between LiDAR and 4D radar narrows as the severity of the weather rises, making the 4D radar sensor suitable for various weather conditions. However, LiDAR remains far ahead of radar in the important aspects of object classification and resolution. These complementary characteristics make it highly worthwhile to promote the full fusion of 4D radar and LiDAR data to improve 3D object detection. Pioneering approaches such as InterFusion (Wang et al. 2022b), M2-Fusion (Wang et al. 2022a), and 3D-LRF (Chae, Kim, and Yoon 2024) represent initial explorations into the fusion of LiDAR and 4D radar, showing significant performance improvements.
Despite these advances, existing LiDAR-4D radar fusion has not addressed the substantial disparities in data quality and the different degrees of degradation in adverse weather. As shown in Figure 2 (a), a primary challenge arises from the significant quality disparity between the LiDAR sensor and the 4D radar sensor. A second challenge pertains to varying sensor degradation under adverse weather conditions. Figure 2 (b) shows that LiDAR sensors undergo severe data degradation in adverse weather, whereas the data quality decrease in 4D radar is significantly smaller (Han et al. 2023; Sun and Zhang 2021) (detailed explanations are given in the supplementary material). This motivates emphasizing the LiDAR and 4D radar sensors to varying degrees depending on the weather conditions during data fusion. The substantial data quality disparities and different degrees of sensor degradation are not properly addressed by existing LiDAR-4D radar fusion methods in adverse weather conditions.
Therefore, to address the above challenges, we propose an innovative fusion framework, L4DR, as shown on the right side of Figure 2. L4DR implements fusion at two stages. The first is the "3D Fusion Encoder", which addresses the challenge of substantial data quality disparities shown in Figure 2 (a); it consists of the Foreground-Aware Denoising (FAD) and Multi-Modal Encoder (MME) modules. The second is fusion within the 2D backbone, which addresses the challenge of varying sensor degradation shown in Figure 2 (b); it consists of the Inter-Modal and Intra-Modal ({IM}2) backbone and the Multi-Scale Gated Fusion (MSGF) module. Instead of the traditional approach of fusing features only once before feature extraction, we fuse continuously during the extraction process, adaptively focusing on the salient modal features under different weather conditions.
In Figure 3, our comprehensive testing showcases the resilience and superior performance of L4DR across various simulated and real-world adverse weather disturbances. Our main contributions are as follows:
- We introduce the innovative Multi-Modal Encoder (MME) module, which achieves LiDAR-4D radar early fusion without resorting to error-prone processes (e.g., depth estimation). This approach effectively bridges the substantial data quality disparities between LiDAR and 4D radar.
- We design an {IM}2 backbone with a Multi-Scale Gated Fusion (MSGF) module, adaptively extracting salient features from LiDAR and 4D radar in different weather conditions. This enables the model to adapt to varying levels of sensor degradation under adverse weather.
- Extensive experiments on two benchmarks, VoD and K-Radar, demonstrate the effectiveness of our L4DR under various levels and types of adverse weather, achieving new state-of-the-art performance on both datasets.
Related Work
LiDAR-based 3D object detection.
Researchers have developed single-stage and two-stage methods to tackle challenges in 3D object detection. Single-stage detectors such as SECOND (Yan, Mao, and Li 2018), PointPillars (Lang et al. 2019), 3DSSD (Yang et al. 2020), and DSVT (Wang et al. 2023a) utilize PointNet++ (Qi et al. 2017), sparse convolution, or other point feature encoders to extract features from point clouds and perform detection in Bird's Eye View (BEV) space. Conversely, methods such as PV-RCNN (Shi et al. 2020), PV-RCNN++ (Shi et al. 2022), Voxel R-CNN (Deng et al. 2021), and VirConv (Wu et al. 2023) focus on two-stage object detection, integrating RCNN networks into 3D detectors. Even though these mainstream methods achieve excellent performance in normal weather, they still lack robustness under various adverse weather conditions.
LiDAR-based 3D object detection in adverse weather.
LiDAR sensors may degrade under adverse weather conditions such as snow, fog, and rain. Physics-based simulations (Teufel et al. 2022; Hahner et al. 2022, 2021; Kilic et al. 2021) have been explored to reproduce point clouds under adverse weather, alleviating the issue of data scarcity. (Charron, Phillips, and Waslander 2018; Heinzler et al. 2020) utilized DROR, DSOR, or convolutional neural networks (CNNs) to classify and filter LiDAR noise points. (Xu et al. 2021) designed a general completion framework that addresses domain adaptation across different weather conditions. (Huang et al. 2024) designed a general knowledge distillation framework that transfers sunny-weather performance to rainy conditions. However, these methods rely primarily on single-modal LiDAR data and are thus constrained by the decline in LiDAR quality under adverse weather.
LiDAR-radar fusion-based 3D object detection.
LiDAR-radar fusion for 3D object detection has gained increasing attention in recent years. MVDNet (Qian et al. 2021) designed a framework for fusing LiDAR and radar. ST-MVDNet (Li et al. 2022) and ST-MVDNet++ (Li, O'Toole, and Kitani 2023) incorporate a self-training teacher-student framework into MVDNet to enhance the model. The Bi-LRFusion (Wang et al. 2023b) framework employs a bidirectional fusion strategy to improve dynamic object detection. However, these studies focus only on 3D radar and LiDAR. As research progresses, the newest studies continue to drive the development of LiDAR-4D radar fusion: M2-Fusion (Wang et al. 2022a), InterFusion (Wang et al. 2022b), and 3D-LRF (Chae, Kim, and Yoon 2024) explore new LiDAR and 4D radar fusion approaches. However, these methods have not considered or overcome the challenges of fusing 4D radar and LiDAR under adverse weather conditions.
Methodology
Problem statement and overall design
LiDAR-4D radar fusion-based 3D object detection.
For an outdoor scene, we denote a LiDAR point cloud as $P^{L} = \{p^{L}_{i}\}_{i=1}^{N_{L}}$ and a 4D radar point cloud as $P^{R} = \{p^{R}_{j}\}_{j=1}^{N_{R}}$, where each $p$ denotes a 3D point. Subsequently, a multi-modal model extracts deep features $F^{L}$ and $F^{R}$ from $P^{L}$ and $P^{R}$. The fused features are then obtained by $F = \Phi(F^{L}, F^{R})$, where $\Phi$ denotes the fusion method. The objective of 3D object detection is to regress the 3D bounding boxes $B = \{b_{k}\}_{k=1}^{K}$ from $F$.
Significant data quality disparity.
As mentioned before, there is a huge quality gap between $P^{L}$ and $P^{R}$ in the same scene. To fully fuse the two modalities, we use the denser $P^{L}$ to enhance the highly sparse $P^{R}$ that lacks discriminative details. Therefore, our L4DR includes a Multi-Modal Encoder (MME, Figure 4 (b)), which performs early-fusion complementarity at the encoder. However, we found that direct data fusion would also cause the noise in $P^{R}$ to spread to $P^{L}$. Therefore, we integrate Foreground-Aware Denoising (FAD, Figure 4 (a)) into L4DR before MME to filter out most of the noise in $P^{R}$.
Varying degradation in adverse weather conditions.
Compared to 4D radar, the quality of the LiDAR point cloud is more easily affected by adverse weather conditions, leading to fluctuating feature representations $F^{L}$. Previous backbones that focus solely on fused inter-modal features overlook the weather robustness of 4D radar and thus struggle with the frequent fluctuation of $F^{L}$. To address this issue and ensure robust fusion across diverse weather conditions, we introduce the Inter-Modal and Intra-Modal ({IM}2, Figure 4 (c)) backbone. This design simultaneously emphasizes inter-modal and intra-modal features, enhancing model adaptability. However, redundancy between these features arises. Drawing inspiration from gated fusion techniques (Hosseinpour, Samadzadegan, and Javan 2022; Song, Zhao, and Skinner 2024a), we propose the Multi-Scale Gated Fusion (MSGF, Figure 4 (d)) module. MSGF utilizes the inter-modal feature to filter the intra-modal features $F^{L}$ and $F^{R}$, effectively reducing feature redundancy.
Foreground-Aware Denoising (FAD)
Due to multipath effects, 4D radar data contain significant noise points. Despite the Constant False Alarm Rate (CFAR) algorithm applied to filter out noise during data acquisition, the noise level remains substantial. It is imperative to further reduce the clutter noise in 4D radar data before early data fusion to avoid noise spreading. Considering the minimal contribution of background points to object detection, this work introduces point-level foreground semantic segmentation to 4D radar denoising, performing Foreground-Aware Denoising. Specifically, we first utilize PointNet++ (Qi et al. 2017) combined with a segmentation head, denoted $\Phi_{seg}$, to predict the foreground probability of each point in the 4D radar point cloud. Subsequently, points with a foreground probability below a predefined threshold $\tau$ are filtered out, that is, $\tilde{P}^{R} = \{\, p \in P^{R} \mid \Phi_{seg}(p) \ge \tau \,\}$. FAD effectively filters out as many noise points as possible while preventing the loss of foreground points.
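To make the FAD filtering step concrete, the thresholding operation can be sketched as follows. This is a minimal illustration only: in the paper the per-point foreground probabilities come from the PointNet++ segmentation head, whereas here the toy points and probabilities are hypothetical placeholders.

```python
import numpy as np

def foreground_aware_denoise(radar_points, fg_probs, tau=0.2):
    """Keep only 4D radar points whose predicted foreground probability
    meets the threshold tau (the paper uses 0.2 at inference time)."""
    mask = fg_probs >= tau
    return radar_points[mask]

# Toy example: four radar points (x, y, z) with hypothetical
# foreground probabilities from a segmentation head.
pts = np.array([[1.0, 2.0, 0.5],
                [5.0, 1.0, 0.2],
                [9.0, 3.0, 1.1],
                [2.0, 8.0, 0.4]])
probs = np.array([0.9, 0.05, 0.7, 0.1])
kept = foreground_aware_denoise(pts, probs, tau=0.2)  # keeps points 0 and 2
```

Only the two confident foreground points survive; the likely clutter points are removed before the point clouds are fused in MME.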
Multi-Modal Encoder (MME)
Even after denoising with FAD, a significant quality disparity remains between LiDAR and 4D radar due to resolution limitations. We thus design a Multi-Modal Encoder module that fuses LiDAR and radar points at an early stage to extract richer features.
As illustrated in Figure 5, we innovate the traditional unimodal pillar encoding into multimodal pillar encoding to perform initial fusion at the data level, extracting richer information for subsequent feature processing. Firstly, following PointPillars (Lang et al. 2019), we encode the LiDAR point cloud into a pillar set. Each LiDAR point in a pillar is encoded with the feature

$f^{L} = [x, y, z, r, x_{c}, y_{c}, z_{c}, x_{p}, y_{p}]$,   (1)

where $(x, y, z)$ is the coordinate of the LiDAR point, $(x_{c}, y_{c}, z_{c})$ denotes the distance from the LiDAR point to the arithmetic mean of all LiDAR points in the pillar, $(x_{p}, y_{p})$ denotes the (horizontal) offset from the pillar center, and $r$ is the reflectance. Similarly, each 4D radar point in a pillar is encoded with the feature

$f^{R} = [x, y, z, x_{c}, y_{c}, z_{c}, x_{p}, y_{p}, v_{x}, v_{y}, \mathrm{RCS}]$,   (2)

where $(x, y, z)$, $(x_{c}, y_{c}, z_{c})$, and $(x_{p}, y_{p})$ are similar in meaning to those in Eq. 1, $(v_{x}, v_{y})$ is the Doppler information along each axis, and RCS is the Radar Cross-Section.
We then perform cross-modal feature propagation for LiDAR pillar encoding features and radar pillar encoding features that occupy the same coordinates. The fused LiDAR and 4D radar pillar encoding features are obtained by fusing $f^{L}$ and $f^{R}$ as follows:

$\tilde{f}^{L} = [\,f^{L}, \overline{f^{R}}\,], \quad \tilde{f}^{R} = [\,f^{R}, \overline{f^{L}}\,]$,   (3)

where the overline denotes the average of all point features of the other modality in that pillar.
The feature propagation is beneficial because the reflectance $r$ and RCS are helpful for object classification, while Doppler information is crucial for distinguishing dynamic objects (Song, Zhao, and Skinner 2024b). Cross-modal feature sharing makes comprehensive use of these advantages, and the cross-modal offsets between each point and the averaged features of the other modality further enrich the geometric information. The MME module compensates for the data quality of 4D radar under normal weather conditions and can also enhance the quality of LiDAR in adverse weather. Subsequently, we apply a linear layer and max pooling to the fused pillar encoding features to obtain the corresponding modal BEV features $F^{L}$ and $F^{R}$.
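The per-pillar cross-modal propagation of Eq. 3 can be sketched as below. This is a simplified illustration under assumed toy dimensions (9-dim LiDAR and 11-dim radar encodings); pillar construction, the linear layer, and max pooling are omitted.

```python
import numpy as np

def mme_fuse_pillar(lidar_feats, radar_feats):
    """Cross-modal feature propagation for one pillar occupied by both
    sensors (Eq. 3): each point keeps its own encoding, concatenated
    with the mean encoding of the other modality's points in the pillar."""
    radar_mean = radar_feats.mean(axis=0)   # averaged radar features in the pillar
    lidar_mean = lidar_feats.mean(axis=0)   # averaged LiDAR features in the pillar
    fused_lidar = np.concatenate(
        [lidar_feats, np.tile(radar_mean, (len(lidar_feats), 1))], axis=1)
    fused_radar = np.concatenate(
        [radar_feats, np.tile(lidar_mean, (len(radar_feats), 1))], axis=1)
    return fused_lidar, fused_radar

# Toy pillar: 3 LiDAR points with 9-dim encodings (Eq. 1),
# 2 radar points with 11-dim encodings (Eq. 2).
fL = np.arange(27, dtype=float).reshape(3, 9)
fR = np.arange(22, dtype=float).reshape(2, 11)
outL, outR = mme_fuse_pillar(fL, fR)   # shapes (3, 20) and (2, 20)
```

Every point in the pillar thus carries both its own encoding and a summary of the other modality, before pooling to BEV features.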
{IM}2 Backbone and MSGF Block
To take full advantage of the respective strengths of LiDAR and 4D radar, it is necessary to focus on both inter-modal and intra-modal features. We introduce the Inter-Modal and Intra-Modal ({IM}2) backbone, a multi-branch feature extraction module that concurrently extracts the inter-modal feature ($F^{LR}$) and the intra-modal features ($F^{L}$, $F^{R}$). Specifically, we fuse the two intra-modal features to form the inter-modal feature,

$F^{LR} = \Phi(F^{L}, F^{R})$,   (4)
where $\Phi$ denotes the fusion approach (we use concatenation). Subsequently, we apply a convolutional block to each modal branch $F^{L}$, $F^{R}$, and $F^{LR}$ independently,

$F^{(l+1)}_{m} = \mathrm{Conv}(F^{(l)}_{m}), \quad m \in \{L, R, LR\}$,   (5)

where $l$ denotes the layer index, $m$ indicates the modality, and $\mathrm{Conv}$ represents a convolutional layer with batch normalization and ReLU activation.
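The branch structure of Eqs. 4-5 can be sketched as follows. This is a structural sketch only: a ReLU stands in for the real Conv-BN-ReLU blocks, and channel-first (C, H, W) BEV tensors are assumed.

```python
import numpy as np

def im2_backbone(F_L, F_R, num_layers=3):
    """Structural sketch of the {IM}2 backbone: the inter-modal feature
    is the channel-wise concatenation of the two intra-modal BEV
    features (Eq. 4), then all three branches are processed in
    parallel, layer by layer (Eq. 5)."""
    conv = lambda x: np.maximum(x, 0.0)         # placeholder for Conv+BN+ReLU
    F_LR = np.concatenate([F_L, F_R], axis=0)   # Eq. 4 (concatenation fusion)
    for _ in range(num_layers):                 # Eq. 5 (parallel branches)
        F_L, F_R, F_LR = conv(F_L), conv(F_R), conv(F_LR)
    return F_L, F_R, F_LR

# Toy (C, H, W) BEV features for each modality.
F_L = np.ones((4, 2, 2))
F_R = np.ones((4, 2, 2))
out_L, out_R, out_LR = im2_backbone(F_L, F_R)  # out_LR has 4 + 4 channels
```

The key design point is that the intra-modal branches are never overwritten by the fused branch, so each modality's own representation survives to the detection head even when the other modality degrades.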
However, while {IM}2 addresses some deficiencies in feature representation, this naive approach inevitably introduces redundant features. Inspired by (Song, Zhao, and Skinner 2024b), which adaptively filters each modal feature, we design the MSGF module for adaptive gated fusion on the LiDAR and 4D radar feature maps at each scale.
As depicted in Fig. 6, the gated network in MSGF processes the input feature maps from LiDAR ($F^{L}$), 4D radar ($F^{R}$), and their fused counterpart ($F^{LR}$). On the LiDAR and 4D radar branches, the adaptive gating weights for $F^{L}$ and $F^{R}$ are obtained by a convolution block followed by a sigmoid activation function, respectively. These weights are applied to the initial features via element-wise multiplication, thus filtering $F^{L}$ and $F^{R}$ in the gated mechanism. Formally, the gated network guides $F^{L}$ and $F^{R}$ at convolution layer index $l$ to filter out redundant information as follows:

$\hat{F}^{L}_{(l)} = F^{L}_{(l)} \odot \sigma(\mathrm{Conv}(F^{LR}_{(l)})), \quad \hat{F}^{R}_{(l)} = F^{R}_{(l)} \odot \sigma(\mathrm{Conv}(F^{LR}_{(l)}))$,   (6)

where $\mathrm{Conv}$ is a 3×3 convolution block and $\sigma$ a sigmoid function. $F^{LR}$ is the fused feature carrying information about the interactions between the modalities; it discerns whether features in $F^{L}$ and $F^{R}$ are helpful or redundant. Using $F^{LR}$ for gated filtering flexibly weights and extracts features from $F^{L}$ and $F^{R}$ while significantly reducing feature redundancy.
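One gating step can be sketched as below. This is a simplified stand-in: linear 1×1 projections (the hypothetical weights `W_L`, `W_R`) replace the paper's 3×3 convolution blocks, but the element-wise sigmoid gating of Eq. 6 is preserved.

```python
import numpy as np

def msgf_gate(F_L, F_R, F_LR, W_L, W_R):
    """Sketch of one MSGF gating step (Eq. 6): the inter-modal map F_LR
    is projected to per-element gates in (0, 1) via a learned map and a
    sigmoid; the gates then reweight each intra-modal map element-wise,
    suppressing redundant features."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    # (C, 2C) x (2C, H, W) -> (C, H, W) gate maps
    G_L = sigmoid(np.tensordot(W_L, F_LR, axes=([1], [0])))
    G_R = sigmoid(np.tensordot(W_R, F_LR, axes=([1], [0])))
    return F_L * G_L, F_R * G_R

C, H, W = 4, 2, 2
rng = np.random.default_rng(0)
F_L, F_R = rng.normal(size=(C, H, W)), rng.normal(size=(C, H, W))
F_LR = np.concatenate([F_L, F_R], axis=0)            # 2C-channel fused map
W_L, W_R = rng.normal(size=(C, 2 * C)), rng.normal(size=(C, 2 * C))
gated_L, gated_R = msgf_gate(F_L, F_R, F_LR, W_L, W_R)
```

Because the gates lie strictly in (0, 1), the gating can only attenuate a feature, never amplify it, which is what lets the network down-weight a degraded modality (e.g., foggy LiDAR) without discarding it entirely.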
Loss Function and Training Strategy.
We train our L4DR with the following losses:

$\mathcal{L} = \frac{1}{N_{pos}} \left( \beta_{cls}\mathcal{L}_{cls} + \beta_{loc}\mathcal{L}_{loc} + \beta_{seg}\mathcal{L}_{seg} \right)$,   (7)

where $N_{pos}$ is the number of positive anchors and $\beta_{cls}$ = 1, $\beta_{loc}$ = 2, $\beta_{seg}$ = 0.5; $\mathcal{L}_{cls}$ is the object classification focal loss, $\mathcal{L}_{loc}$ is the object localization regression loss, and $\mathcal{L}_{seg}$ is the 4D radar noise classification focal loss in the FAD module. We use the Adam optimizer with lr = 1e-3, $\beta_{1}$ = 0.9, $\beta_{2}$ = 0.999.
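The weighted combination in Eq. 7 can be sketched as follows, assuming (as the equation suggests) that the normalization by the number of positive anchors applies to all three terms; the scalar loss values in the example are hypothetical.

```python
def l4dr_loss(loss_cls, loss_loc, loss_seg, n_pos,
              b_cls=1.0, b_loc=2.0, b_seg=0.5):
    """Total training loss (Eq. 7): detection classification and
    localization losses plus the FAD noise-segmentation focal loss,
    weighted and normalized by the number of positive anchors."""
    return (b_cls * loss_cls + b_loc * loss_loc + b_seg * loss_seg) / n_pos

# (1*4.0 + 2*2.0 + 0.5*1.0) / 2 = 4.25
total = l4dr_loss(loss_cls=4.0, loss_loc=2.0, loss_seg=1.0, n_pos=2)
```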
Experiments
Table 1: Results on the K-Radar dataset ({AP}_{BEV} and {AP}_{3D} of the "Sedan" class at IoU = 0.5).

| Methods | Modality | Metric | Total | Normal | Overcast | Fog | Rain | Sleet | Light snow | Heavy snow |
|---|---|---|---|---|---|---|---|---|---|
| RTNH (NeurIPS 2022) | 4DR | AP_BEV | 41.1 | 41.0 | 44.6 | 45.4 | 32.9 | 50.6 | 81.5 | 56.3 |
| | | AP_3D | 37.4 | 37.6 | 42.0 | 41.2 | 29.2 | 49.1 | 63.9 | 43.1 |
| PointPillars (CVPR 2019) | L | AP_BEV | 49.1 | 48.2 | 53.0 | 45.4 | 44.2 | 45.9 | 74.5 | 53.8 |
| | | AP_3D | 22.4 | 21.8 | 28.0 | 28.2 | 27.2 | 22.6 | 23.2 | 12.9 |
| RTNH (NeurIPS 2022) | L | AP_BEV | 66.3 | 65.4 | 87.4 | 83.8 | 73.7 | 48.8 | 78.5 | 48.1 |
| | | AP_3D | 37.8 | 39.8 | 46.3 | 59.8 | 28.2 | 31.4 | 50.7 | 24.6 |
| InterFusion (IROS 2023) | L+4DR | AP_BEV | 52.9 | 50.0 | 59.0 | 80.3 | 50.0 | 22.7 | 72.2 | 53.3 |
| | | AP_3D | 17.5 | 15.3 | 20.5 | 47.6 | 12.9 | 9.33 | 56.8 | 25.7 |
| 3D-LRF (CVPR 2024) | L+4DR | AP_BEV | 73.6 | 72.3 | 88.4 | 86.6 | 76.6 | 47.5 | 79.6 | 64.1 |
| | | AP_3D | 45.2 | 45.3 | 55.8 | 51.8 | 38.3 | 23.4 | 60.2 | 36.9 |
| L4DR (Ours) | L+4DR | AP_BEV | 77.5 | 76.8 | 88.6 | 89.7 | 78.2 | 59.3 | 80.9 | 53.8 |
| | | AP_3D | 53.5 | 53.0 | 64.1 | 73.2 | 53.8 | 46.2 | 52.4 | 37.0 |
Implementation Details.
We implement L4DR based on PointPillars (Lang et al. 2019), the most commonly used base architecture in radar-based and LiDAR-4D radar fusion-based 3D object detection. This effectively verifies the contribution of our L4DR and avoids unfair comparisons caused by inherent improvements in the base architecture. We set the FAD threshold $\tau$ to 0.3 during training and 0.2 during inference (more discussion can be found in the supplementary material). We conduct all experiments with a batch size of 16 on 2 RTX 3090 GPUs. Other parameter settings follow the default official configuration of the OpenPCDet (Team et al. 2020) toolbox.
Dataset and Evaluation Metrics
K-Radar dataset.
The K-Radar dataset (Paek, Kong, and Wijaya 2022) contains 58 sequences with 34944 frames of 64-line LiDAR, camera, and 4D radar data in various weather conditions. According to the official K-Radar split, we use 17458 frames for training and 17536 frames for testing. We adopt two evaluation metrics for 3D object detection: ${AP}_{BEV}$ and ${AP}_{3D}$ of the class "Sedan" at IoU = 0.5. We also provide more quantitative results for other IoU thresholds as well as for the latest v2.1 version of the labels (see supplementary material).
View-of-Delft (VoD) dataset.
The VoD dataset (Palffy et al. 2022) contains 8693 frames of 64-line LiDAR, camera, and 4D radar data. Following the official partition, we divide the dataset into a training set of 5139 frames and a validation set of 1296 frames. All methods use the official radar data with 5-scan accumulation and single-frame LiDAR. Meanwhile, to explore performance under different fog intensities, following a series of previous works (Qian et al. 2021; Li et al. 2022), we perform fog simulation (Hahner et al. 2021) on the VoD dataset (with fog levels from 0 to 4, fog density $\alpha$ = [0.00, 0.03, 0.06, 0.10, 0.20]), while the 4D radar data remain unchanged to reflect their weather robustness. We refer to this as the VoD-Fog dataset in the following. Notably, we use two groups of evaluation metrics. The VoD official metrics better compare with the results reported by previous state-of-the-art methods; the KITTI official metrics better demonstrate and analyze performance on "easy", "moderate", and "hard" objects of different difficulties under foggy weather.
Results on K-Radar Adverse Weather Dataset
Following 3D-LRF (Chae, Kim, and Yoon 2024), we compare our L4DR with LiDAR-only, 4D radar-only, and LiDAR-4D radar fusion-based 3D object detection methods: PointPillars (Lang et al. 2019), RTNH (Paek, Kong, and Wijaya 2022), InterFusion (Wang et al. 2022b), and 3D-LRF (Chae, Kim, and Yoon 2024). The results in Table 1 highlight the superior performance of our L4DR model in all weather conditions. Our L4DR model surpasses 3D-LRF by 8.3% in total ${AP}_{3D}$, demonstrating that our fusion method exploits the advantages of LiDAR and 4D radar more effectively than previous fusion frameworks. Note that we compare with 3D-LRF only on the K-Radar dataset because its code is not open-sourced and its reported results are available only on K-Radar. Meanwhile, it is worth noting that the performance under many adverse weather conditions (e.g., overcast, fog) significantly exceeds that under normal weather. This counter-intuitive phenomenon is also reflected in the official K-Radar benchmark, and we discuss it in detail in the supplementary material, along with other valuable results such as different IoU thresholds and the new version of the labels.
Table 2: Comparison with state-of-the-art methods on the VoD validation set.

| Methods | Modality | Car (Entire Area) | Ped. (Entire Area) | Cyc. (Entire Area) | Car (Driving Area) | Ped. (Driving Area) | Cyc. (Driving Area) |
|---|---|---|---|---|---|---|---|
| PointPillars | 4DR | 39.7 | 31.0 | 65.1 | 71.6 | 40.5 | 87.8 |
| LXL | 4DR | 32.8 | 39.7 | 68.1 | 70.3 | 47.3 | 87.9 |
| FUTR3D | C+4DR | 46.0 | 35.1 | 66.0 | 78.7 | 43.1 | 86.2 |
| BEVFusion | C+4DR | 37.9 | 41.0 | 69.0 | 70.2 | 45.9 | 89.5 |
| RCFusion | C+4DR | 41.7 | 39.0 | 68.3 | 71.9 | 47.5 | 88.3 |
| LXL | C+4DR | 42.3 | 49.5 | 77.1 | 72.2 | 58.3 | 88.3 |
| PointPillars | L | 66.0 | 55.6 | 75.0 | 88.7 | 68.4 | 88.4 |
| InterFusion | L+4DR | 66.5 | 64.5 | 78.5 | 90.7 | 72.0 | 88.7 |
| L4DR (Ours) | L+4DR | 69.1 | 66.2 | 82.8 | 90.8 | 76.1 | 95.5 |
Table 3: Results on the VoD-Fog dataset with KITTI metrics under different fog levels (Car at IoU = 0.5, Pedestrian and Cyclist at IoU = 0.25).

| Fog Level | Methods | Modality | Car Easy | Car Mod. | Car Hard | Ped. Easy | Ped. Mod. | Ped. Hard | Cyc. Easy | Cyc. Mod. | Cyc. Hard |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 (W/o Fog) | PointPillars | L | 84.9 | 73.5 | 67.5 | 62.7 | 58.4 | 53.4 | 85.5 | 79.0 | 72.7 |
| | InterFusion | L+4DR | 67.6 | 65.8 | 58.8 | 73.7 | 70.1 | 64.7 | 90.3 | 87.0 | 81.2 |
| | L4DR (Ours) | L+4DR | 85.0 | 76.6 | 69.4 | 74.4 | 72.3 | 65.7 | 93.4 | 90.4 | 83.0 |
| 1 | PointPillars | L | 79.9 | 72.7 | 67.0 | 59.9 | 55.6 | 50.5 | 85.5 | 78.2 | 72.0 |
| | InterFusion | L+4DR | 66.1 | 64.0 | 56.9 | 74.0 | 70.6 | 64.5 | 91.6 | 87.4 | 82.0 |
| | L4DR (Ours) | L+4DR | 77.9 | 73.2 | 67.8 | 75.4 | 72.1 | 66.7 | 93.8 | 91.0 | 83.2 |
| 2 | PointPillars | L | 67.0 | 51.4 | 44.4 | 53.1 | 47.2 | 42.7 | 69.6 | 62.7 | 57.2 |
| | InterFusion | L+4DR | 56.0 | 48.5 | 41.5 | 63.2 | 57.8 | 52.9 | 77.3 | 71.1 | 66.2 |
| | L4DR (Ours) | L+4DR | 68.5 | 56.4 | 49.3 | 63.1 | 59.9 | 55.1 | 82.7 | 70.8 | 70.7 |
| 3 | PointPillars | L | 44.5 | 31.9 | 27.0 | 40.2 | 37.7 | 34.0 | 53.2 | 46.7 | 41.8 |
| | InterFusion | L+4DR | 41.2 | 33.1 | 27.0 | 52.9 | 49.2 | 44.8 | 59.9 | 57.7 | 53.1 |
| | L4DR (Ours) | L+4DR | 46.2 | 41.4 | 34.6 | 53.5 | 50.6 | 46.2 | 72.2 | 67.7 | 60.9 |
| 4 | PointPillars | L | 13.0 | 8.77 | 7.19 | 10.6 | 12.9 | 11.3 | 6.15 | 4.89 | 4.57 |
| | InterFusion | L+4DR | 15.2 | 10.8 | 8.40 | 25.7 | 25.1 | 22.6 | 6.68 | 7.95 | 6.99 |
| | L4DR (Ours) | L+4DR | 26.9 | 26.2 | 21.6 | 33.1 | 30.7 | 27.9 | 30.3 | 29.7 | 26.3 |
Results on VoD Dataset
We compare our L4DR fusion with state-of-the-art methods of different modalities on the VoD dataset using the VoD metrics. As shown in Table 2, our L4DR fusion outperforms the existing LiDAR-4D radar fusion method InterFusion (Wang et al. 2022b) in all categories, e.g., by 6.8% in the Cyc. class in the Driving Area. Meanwhile, our L4DR also significantly outperforms state-of-the-art methods based on other modalities, such as LXL (Xiong et al. 2024). These results demonstrate that our method comprehensively fuses the LiDAR and 4D radar modalities; consequently, L4DR shows superior performance even in clear weather.
Results on Vod-Fog Simulated Dataset
We evaluate our L4DR model against LiDAR-4D radar fusion methods on the VoD-Fog dataset using the KITTI metrics across varying fog levels. Table 3 shows that our L4DR model outperforms LiDAR-only PointPillars across difficulty categories and fog intensities. In the most severe fog conditions (fog level = 4), L4DR achieves improvements of 17.43%, 17.8%, and 24.81% mAP over PointPillars in the moderate difficulty category for Car, Pedestrian, and Cyclist, respectively, surpassing the gains obtained by InterFusion. Furthermore, our approach consistently outperforms InterFusion across various scenarios, showcasing the adaptability of our L4DR fusion under adverse weather conditions.
Ablation study
Effect of each component.
We systematically evaluate each component, with the results summarized in Table 4. The first row represents the performance of the LiDAR-only baseline model. The second and third rows fuse modalities by directly concatenating the BEV features from LiDAR and 4D radar. The enhancements observed by adding MME and FAD, respectively, highlight that our fusion method fully utilizes the weather robustness of the 4D radar while handling its noise problem. The fourth row indicates that the performance boost from incorporating the {IM}2 backbone alone is not substantial, primarily due to the feature redundancy it introduces. This issue is effectively addressed by the MSGF module in the fifth row, leading to the best overall performance.
Table 4: Ablation study of each component (3D mAP under different fog levels).

| MME | FAD | {IM}2 | MSGF | W/o Fog | Fog 1 | Fog 2 | Fog 3 | Fog 4 |
|---|---|---|---|---|---|---|---|---|
| | | | | 70.3 | 68.9 | 53.8 | 38.8 | 8.92 |
| ✓ | | | | 77.1 | 75.4 | 63.2 | 52.3 | 23.4 |
| ✓ | ✓ | | | 78.7 | 77.6 | 63.3 | 52.2 | 24.7 |
| ✓ | ✓ | ✓ | | 78.1 | 76.8 | 62.0 | 52.3 | 26.5 |
| ✓ | ✓ | ✓ | ✓ | 79.8 | 78.8 | 64.7 | 53.3 | 28.9 |
Comparison with other multi-modal feature fusion.
We compare different multi-modal feature fusion blocks, including basic concatenation (Concat.) and various attention-based methods such as Transformer-based (Vaswani et al. 2017) Cross-Modal Attention (Cross-Attn.) and Self-Attention (Self-Attn.), the SE Block (Hu, Shen, and Sun 2018), and the CBAM Block (Woo et al. 2018); see the supplementary material for detailed fusion implementations. Experimental results (Table 5) show that while attention mechanisms outperform concatenation to some extent, they do not effectively address the challenge of fluctuating features under varying weather conditions. In contrast, our proposed MSGF, focusing on the significant features of LiDAR and 4D radar, achieves superior performance and robustness under different weather conditions.
Table 5: Comparison of multi-modal feature fusion blocks (3D mAP under different fog levels).

| Fusion | W/o Fog | Fog 1 | Fog 2 | Fog 3 | Fog 4 |
|---|---|---|---|---|---|
| Concat. | 77.9 | 76.3 | 61.9 | 49.3 | 17.7 |
| Cross-Attn. | 77.2 | 76.0 | 63.0 | 52.7 | 30.1 |
| Self-Attn. | 78.4 | 77.4 | 64.3 | 52.8 | 25.8 |
| SE Block | 77.3 | 77.9 | 63.8 | 50.1 | 25.0 |
| CBAM Block | 78.0 | 78.1 | 64.0 | 52.3 | 26.4 |
| MSGF (Ours) | 79.8 | 78.8 | 64.7 | 53.2 | 28.8 |
Conclusion
In this paper, we analyzed the challenges of fusing LiDAR and 4D radar in adverse weather and proposed L4DR, an effective LiDAR-4D radar fusion method that offers an innovative and feasible solution for weather-robust outdoor 3D object detection. Our experiments on the VoD and K-Radar datasets demonstrate the effectiveness and superiority of our method under various simulated fog levels and real-world adverse weather. L4DR not only provides a promising solution for robust outdoor 3D object detection in adverse weather conditions but also sets a new benchmark for performance and robustness compared to existing fusion techniques, paving the way for enhanced safety and reliability in autonomous driving and other applications.
Limitations. While the {IM}2 and MSGF modules allow the model to focus on more salient features, they inevitably introduce additional computation that reduces efficiency to a certain extent. The inference speed drops to about 10 FPS, which just satisfies the real-time threshold (equal to the LiDAR acquisition frequency); optimizing computational performance is a valuable direction for future research.
References
- Balal, Pinhasi, and Pinhasi (2016)Balal, N.; Pinhasi, G.; and Pinhasi, Y. 2016.Atmospheric and Fog Effects on Ultra-Wide Band Radar Operating at Extremely High Frequencies.Sensors, 16(5): 751.
- Bijelic etal. (2020)Bijelic, M.; Gruber, T.; Mannan, F.; Kraus, F.; Ritter, W.; Dietmayer, K.; and Heide, F. 2020.Seeing Through Fog Without Seeing Fog: Deep Multimodal Sensor Fusion in Unseen Adverse Weather.In CVPR.
- Chae, Kim, and Yoon (2024)Chae, Y.; Kim, H.; and Yoon, K.-J. 2024.Towards Robust 3D Object Detection with LiDAR and 4D Radar Fusion in Various Weather Conditions.In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 15162–15172.
- Charron, Phillips, and Waslander (2018)Charron, N.; Phillips, S.; and Waslander, S.L. 2018.De-Noising of Lidar Point Clouds Corrupted by Snowfall.In CRV.
- Deng etal. (2021)Deng, J.; Shi, S.; Li, P.; Zhou, W.; Zhang, Y.; and Li, H. 2021.Voxel R-CNN: Towards High Performance Voxel-based 3D Object Detection.AAAI, 35.
- Geiger, Lenz, and Urtasun (2012)Geiger, A.; Lenz, P.; and Urtasun, R. 2012.Are we ready for autonomous driving? The KITTI vision benchmark suite.In 2012 IEEE Conference on Computer Vision and Pattern Recognition, 3354–3361.
- Ghita etal. (2024)Ghita, A.; Antoniussen, B.; Zimmer, W.; Greer, R.; Creß, C.; Møgelmose, A.; Trivedi, M.M.; and Knoll, A.C. 2024.ActiveAnno3D–An Active Learning Framework for Multi-Modal 3D Object Detection.arXiv preprint arXiv:2402.03235.
- Golovachev etal. (2018)Golovachev, Y.; Etinger, A.; Pinhasi, G.A.; and Pinhasi, Y. 2018.Millimeter wave high resolution radar accuracy in fog conditions—theory and experimental verification.Sensors, 18(7): 2148.
- Hahner etal. (2022)Hahner, M.; Sakaridis, C.; Bijelic, M.; Heide, F.; Yu, F.; Dai, D.; and VanGool, L. 2022.LiDAR Snowfall Simulation for Robust 3D Object Detection.In CVPR.
- Hahner etal. (2021)Hahner, M.; Sakaridis, C.; Dai, D.; and VanGool, L. 2021.Fog Simulation on Real LiDAR Point Clouds for 3D Object Detection in Adverse Weather.In ICCV.
- Han etal. (2023)Han, Z.; Wang, J.; Xu, Z.; Yang, S.; He, L.; Xu, S.; and Wang, J. 2023.4D Millimeter-Wave Radar in Autonomous Driving: A Survey. arXiv 2023.arXiv preprint arXiv:2306.04242.
- Heinzler etal. (2020)Heinzler, R.; Piewak, F.; Schindler, P.; and Stork, W. 2020.CNN-Based Lidar Point Cloud De-Noising in Adverse Weather.IEEE Robotics and Automation Letters, 5.
- Hosseinpour, Samadzadegan, and Javan (2022)Hosseinpour, H.; Samadzadegan, F.; and Javan, F.D. 2022.CMGFNet: A deep cross-modal gated fusion network for building extraction from very high-resolution remote sensing images.ISPRS Journal of Photogrammetry and Remote Sensing, 184: 96–115.
- Hu, Shen, and Sun (2018)Hu, J.; Shen, L.; and Sun, G. 2018.Squeeze-and-excitation networks.In Proceedings of the IEEE conference on computer vision and pattern recognition, 7132–7141.
- Huang etal. (2024)Huang, X.; Wu, H.; Li, X.; Fan, X.; Wen, C.; and Wang, C. 2024.Sunshine to rainstorm: Cross-weather knowledge distillation for robust 3d object detection.In Proceedings of the AAAI Conference on Artificial Intelligence, volume38, 2409–2416.
- Kilic etal. (2021)Kilic, V.; Hegde, D.; Sindagi, V.A.; Cooper, A.; Foster, M.; and Patel, V.M. 2021.Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of Adverse Weather Conditions for 3D Object Detection.ArXiv.
- Lang etal. (2019)Lang, A.H.; Vora, S.; Caesar, H.; Zhou, L.; Yang, J.; and Beijbom, O. 2019.PointPillars: Fast Encoders for Object Detection From Point Clouds.In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 12689–12697.
- Li, O’Toole, and Kitani (2023)Li, Y.-J.; O’Toole, M.; and Kitani, K. 2023.St-mvdnet++: Improve vehicle detection with lidar-radar geometrical augmentation via self-training.In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1–5. IEEE.
- Li etal. (2022)Li, Y.-J.; Park, J.; O’Toole, M.; and Kitani, K. 2022.Modality-Agnostic Learning for Radar-Lidar Fusion in Vehicle Detection.In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 918–927.
- Paek, Kong, and Wijaya (2022)Paek, D.-H.; Kong, S.-H.; and Wijaya, K.T. 2022.K-radar: 4d radar object detection for autonomous driving in various weather conditions.Advances in Neural Information Processing Systems, 35: 3819–3829.
- Paek, KONG, and Wijaya (2022)Paek, D.-H.; KONG, S.-H.; and Wijaya, K.T. 2022.K-Radar: 4D Radar Object Detection for Autonomous Driving in Various Weather Conditions.In Koyejo, S.; Mohamed, S.; Agarwal, A.; Belgrave, D.; Cho, K.; and Oh, A., eds., Advances in Neural Information Processing Systems, volume35, 3819–3829. Curran Associates, Inc.
- Palffy et al. (2022) Palffy, A.; Pool, E.; Baratam, S.; Kooij, J. F.; and Gavrila, D. M. 2022. Multi-class road user detection with 3+1D radar in the View-of-Delft dataset. IEEE Robotics and Automation Letters, 7(2): 4961–4968.
- Qi et al. (2017) Qi, C. R.; Yi, L.; Su, H.; and Guibas, L. J. 2017. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. arXiv:1706.02413.
- Qian et al. (2021) Qian, K.; Zhu, S.; Zhang, X.; and Li, L. E. 2021. Robust Multimodal Vehicle Detection in Foggy Weather Using Complementary Lidar and Radar Signals. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 444–453.
- Shi et al. (2020) Shi, S.; Guo, C.; Jiang, L.; Wang, Z.; Shi, J.; Wang, X.; and Li, H. 2020. PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection. In CVPR.
- Shi et al. (2022) Shi, S.; Jiang, L.; Deng, J.; Wang, Z.; Guo, C.; Shi, J.; Wang, X.; and Li, H. 2022. PV-RCNN++: Point-Voxel Feature Set Abstraction With Local Vector Representation for 3D Object Detection. Int. J. Comput. Vision, 131.
- Song, Zhao, and Skinner (2024) Song, J.; Zhao, L.; and Skinner, K. A. 2024. LiRaFusion: Deep Adaptive LiDAR-Radar Fusion for 3D Object Detection. arXiv preprint arXiv:2402.11735.
- Sun and Zhang (2021) Sun, S.; and Zhang, Y. D. 2021. 4D automotive radar sensing for autonomous vehicles: A sparsity-oriented approach. IEEE Journal of Selected Topics in Signal Processing, 15(4): 879–891.
- Team et al. (2020) Team, O.; et al. 2020. OpenPCDet: An open-source toolbox for 3D object detection from point clouds.
- Teufel et al. (2022) Teufel, S.; Volk, G.; Von Bernuth, A.; and Bringmann, O. 2022. Simulating Realistic Rain, Snow, and Fog Variations For Comprehensive Performance Characterization of LiDAR Perception. In 2022 IEEE 95th Vehicular Technology Conference (VTC2022-Spring).
- Vaswani et al. (2017) Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.
- Wang et al. (2023a) Wang, H.; Shi, C.; Shi, S.; Lei, M.; Wang, S.; He, D.; Schiele, B.; and Wang, L. 2023a. DSVT: Dynamic Sparse Voxel Transformer With Rotated Sets. In CVPR.
- Wang et al. (2022a) Wang, L.; Zhang, X.; Li, J.; Xv, B.; Fu, R.; Chen, H.; Yang, L.; Jin, D.; and Zhao, L. 2022a. Multi-modal and multi-scale fusion 3D object detection of 4D radar and LiDAR for autonomous driving. IEEE Transactions on Vehicular Technology.
- Wang et al. (2022b) Wang, L.; Zhang, X.; Xv, B.; Zhang, J.; Fu, R.; Wang, X.; Zhu, L.; Ren, H.; Lu, P.; Li, J.; and Liu, H. 2022b. InterFusion: Interaction-based 4D Radar and LiDAR Fusion for 3D Object Detection. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 12247–12253.
- Wang et al. (2023b) Wang, Y.; Deng, J.; Li, Y.; Hu, J.; Liu, C.; Zhang, Y.; Ji, J.; Ouyang, W.; and Zhang, Y. 2023b. Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 13394–13403.
- Woo et al. (2018) Woo, S.; Park, J.; Lee, J.-Y.; and Kweon, I. S. 2018. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), 3–19.
- Wu et al. (2023) Wu, H.; Wen, C.; Shi, S.; Li, X.; and Wang, C. 2023. Virtual Sparse Convolution for Multimodal 3D Object Detection. In CVPR.
- Wu et al. (2024) Wu, H.; Zhao, S.; Huang, X.; Wen, C.; Li, X.; and Wang, C. 2024. Commonsense Prototype for Outdoor Unsupervised 3D Object Detection. arXiv preprint arXiv:2404.16493.
- Xia et al. (2023) Xia, Q.; Deng, J.; Wen, C.; Wu, H.; Shi, S.; Li, X.; and Wang, C. 2023. CoIn: Contrastive Instance Feature Mining for Outdoor 3D Object Detection with Very Limited Annotations. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 6254–6263.
- Xiong et al. (2024) Xiong, W.; Liu, J.; Huang, T.; Han, Q.-L.; Xia, Y.; and Zhu, B. 2024. LXL: LiDAR Excluded Lean 3D Object Detection With 4D Imaging Radar and Camera Fusion. IEEE Transactions on Intelligent Vehicles, 9(1): 79–92.
- Xu et al. (2022) Xu, G.; Khan, A.; Moshayedi, A. J.; Zhang, X.; and Shuxin, Y. 2022. The Object Detection, Perspective and Obstacles In Robotic: A Review. EAI Endorsed Transactions on AI and Robotics, 1: 7–15.
- Xu et al. (2021) Xu, Q.; Zhou, Y.; Wang, W.; Qi, C. R.; and Anguelov, D. 2021. SPG: Unsupervised Domain Adaptation for 3D Object Detection via Semantic Point Generation. In ICCV.
- Yan et al. (2023) Yan, J.; Liu, Y.; Sun, J.; Jia, F.; Li, S.; Wang, T.; and Zhang, X. 2023. Cross modal transformer: Towards fast and robust 3d object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 18268–18278.
- Yan, Mao, and Li (2018) Yan, Y.; Mao, Y.; and Li, B. 2018. SECOND: Sparsely Embedded Convolutional Detection. Sensors, 18.
- Yang et al. (2020) Yang, Z.; Sun, Y.; Liu, S.; and Jia, J. 2020. 3DSSD: Point-Based 3D Single Stage Object Detector. In CVPR.
Appendix / Supplemental Material
Analysis of Point Distribution of LiDAR and 4D Radar under Different Weather Conditions
Although the weather robustness of 4D radar sensors has been noted as prior knowledge in existing work (Han et al. 2023; Sun and Zhang 2021), it remains under-studied. Here, we use the variety of real-world adverse weather data in K-Radar to examine and corroborate this phenomenon. As depicted in Figure 7, we plot the point counts of LiDAR and 4D radar averaged across various types of real-world adverse weather conditions. Under different categories of adverse weather, the LiDAR point counts at different distances from the sensor (a) exhibit a pronounced decreasing trend, reflecting the significant degradation of LiDAR data quality in adverse weather. In contrast, the 4D radar point counts at different distances from the sensor (b) show no clear correlation with weather conditions. Note that large differences in scene content and dynamic object distributions, together with the sensitivity of 4D radar to dynamic objects, cause greater fluctuations in its point count distribution. Nevertheless, the lack of correlation between point counts and weather conditions still demonstrates, to a certain extent, the weather robustness advantage of 4D radar.
| Threshold in training | Threshold in testing | 3D mAP (fog level = 0) | 3D mAP (fog level = 1) | 3D mAP (fog level = 2) | 3D mAP (fog level = 3) | 3D mAP (fog level = 4) |
|---|---|---|---|---|---|---|
| 0.1 | 0.1 | 77.56 | 77.03 | 62.40 | 50.11 | 25.27 |
| | 0.2 | 75.92 | 75.79 | 61.09 | 49.00 | 25.28 |
| | 0.3 | 75.46 | 74.46 | 60.84 | 48.84 | 25.72 |
| | 0.5 | 73.57 | 71.79 | 59.10 | 47.15 | 24.99 |
| 0.2 | 0.1 | 77.59 | 77.43 | 64.06 | 52.57 | 23.90 |
| | 0.2 | 77.21 | 76.63 | 62.94 | 51.10 | 23.99 |
| | 0.3 | 75.84 | 75.23 | 62.15 | 50.44 | 23.97 |
| | 0.5 | 73.73 | 72.61 | 59.41 | 48.87 | 22.40 |
| 0.3 | 0.1 | 79.51 | 78.77 | 63.78 | 52.95 | 25.94 |
| | 0.2 | 79.80 | 78.84 | 64.73 | 53.26 | 28.87 |
| | 0.3 | 79.67 | 77.91 | 63.33 | 52.02 | 26.28 |
| | 0.5 | 76.71 | 75.75 | 61.28 | 51.56 | 26.46 |
| 0.5 | 0.1 | 77.18 | 76.67 | 62.35 | 51.45 | 24.30 |
| | 0.2 | 78.91 | 77.87 | 63.61 | 53.49 | 28.49 |
| | 0.3 | 79.47 | 78.35 | 63.57 | 52.22 | 27.19 |
| | 0.5 | 78.57 | 77.12 | 62.47 | 51.18 | 26.29 |
More Implementation Details
For the training strategy, we train the entire network for 30 epochs. We use the Adam optimizer with lr = 1e-3, β1 = 0.9, and β2 = 0.999.
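This setup can be sketched in PyTorch; the model below is a stand-in, and only the optimizer settings come from the text above:

```python
import torch

# Stand-in module; the actual L4DR network is far larger.
model = torch.nn.Linear(16, 2)

# Training settings from the text: Adam with lr = 1e-3, betas = (0.9, 0.999).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
num_epochs = 30
```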
For the K-Radar dataset, we preprocess the 4D radar sparse tensor by keeping only the top 10240 points with the highest power measurements. We set the point cloud range to [0m, 72m] for the X axis, [-6.4m, 6.4m] for the Y axis, and [-2m, 6m] for the Z axis, matching the environment of K-Radar version 1.0, and to [0m, 72m] for the X axis, [-16m, 16m] for the Y axis, and [-2m, 7.6m] for the Z axis, matching the environment of K-Radar version 2.1. The voxel size is set to (0.4m, 0.4m, 0.4m).
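For reference, the resulting voxel grid dimensions follow directly from the range and voxel size; `bev_grid_size` below is a hypothetical helper, not part of the released code:

```python
def bev_grid_size(pc_range, voxel_size):
    """Number of voxels along (x, y, z) for a range [x_min, y_min, z_min, x_max, y_max, z_max]."""
    x_min, y_min, z_min, x_max, y_max, z_max = pc_range
    return (
        round((x_max - x_min) / voxel_size[0]),
        round((y_max - y_min) / voxel_size[1]),
        round((z_max - z_min) / voxel_size[2]),
    )

# K-Radar v1.0 setting from the text: X [0, 72], Y [-6.4, 6.4], Z [-2, 6], 0.4 m voxels.
print(bev_grid_size([0.0, -6.4, -2.0, 72.0, 6.4, 6.0], (0.4, 0.4, 0.4)))  # (180, 32, 20)
```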
For the VoD dataset, following KITTI (Geiger, Lenz, and Urtasun 2012), we calculate the 3D Average Precision (3D AP) across 40 recall thresholds (R40) for different classes. Also, following VoD's (Palffy et al. 2022) evaluation metrics, we calculate class-wise AP and mAP averaged over classes. The calculation covers the entire annotated region (camera FoV up to 50 meters) and the "Driving Corridor" region ([-4 m < x < +4 m, z < 25 m]). For both the KITTI and VoD metrics, AP is calculated with the IoU thresholds specified in VoD: a 50% overlap for the car class and a 25% overlap for the pedestrian and cyclist classes.
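The R40 metric averages interpolated precision over 40 evenly spaced recall thresholds. A minimal NumPy sketch of that averaging step (not the official KITTI evaluation code) is:

```python
import numpy as np

def ap_r40(recalls, precisions):
    """KITTI-style AP: mean interpolated precision at 40 recall thresholds."""
    thresholds = np.linspace(1.0 / 40, 1.0, 40)
    ap = 0.0
    for r in thresholds:
        mask = recalls >= r
        # Interpolated precision: best precision achieved at recall >= r.
        ap += precisions[mask].max() if mask.any() else 0.0
    return ap / 40

recalls = np.linspace(0.0, 1.0, 101)
precisions = np.ones_like(recalls)   # an ideal detector: precision 1 everywhere
print(ap_r40(recalls, precisions))   # 1.0
```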
Experimental Visualization Results
To better visualize how our method improves detection performance, we compare our L4DR with InterFusion (Wang et al. 2022b) under different simulated fog levels, as shown in Figure 8. Our L4DR effectively filters out a substantial amount of noise in the 4D radar points (depicted as colored points). Furthermore, L4DR fuses LiDAR and 4D radar effectively, improving the recall of hard-to-detect objects and reducing false detections.
Experiments on the Threshold Hyperparameter in FAD
We conducted a thorough experimental study of the FAD threshold hyperparameter in both the training and testing stages. The results in Table 6 show that too small a threshold leaves too much residual noise and weakens the denoising effect, while too large a threshold discards many foreground points and hurts object detection. Moreover, the performance impact of the threshold varies across fog levels, because the importance of 4D radar differs with fog level. In the end, we chose the setting with the best overall performance, a threshold of 0.3 for training and 0.2 for testing, which matches our expectations. First, the threshold cannot be the conventional 0.5 used for binary classification and needs to be lowered appropriately. Second, the training threshold should be slightly higher than the testing threshold, because data augmentations such as Ground Truth Sampling increase the number of foreground points during training.
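At test time, FAD's point-level filtering reduces to thresholding per-point foreground probabilities. The sketch below illustrates this; the function name and scores are illustrative, not the released implementation:

```python
import numpy as np

def fad_filter(points, fg_scores, tau=0.2):
    """Keep points whose predicted foreground probability is at least tau.

    tau sits below the conventional 0.5 binary-classification threshold:
    losing foreground points hurts detection more than keeping some noise.
    """
    return points[fg_scores >= tau]

points = np.arange(20, dtype=float).reshape(5, 4)     # rows of (x, y, z, feature)
fg_scores = np.array([0.10, 0.25, 0.05, 0.90, 0.20])  # hypothetical per-point scores
print(fad_filter(points, fg_scores, tau=0.2).shape)   # (3, 4)
```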
Foreground Semantic Segmentation Results in FAD
| Threshold in testing | Recall | IoU | PA |
|---|---|---|---|
| 0.1 | 84.35 | 38.07 | 88.43 |
| 0.2 | 78.04 | 50.11 | 93.45 |
| 0.3 | 73.15 | 54.14 | 94.78 |
| 0.5 | 65.52 | 54.39 | 95.52 |
We also evaluated FAD's denoising stage as a semantic segmentation task, using Recall, IoU, and Point Accuracy (PA) as the evaluation metrics, as shown in Table 7. As the threshold increases, Recall decreases while IoU and PA gradually increase. At a threshold of 0.5, we obtain the best IoU and PA but the lowest Recall. This verifies the correctness of our denoising algorithm as a semantic segmentation step. However, 3D object detection performance is worse at higher thresholds, because losing more foreground points is more detrimental to object detection than retaining some background points.
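The three metrics in Table 7 can be computed from point-wise foreground predictions under the usual binary-segmentation definitions; a NumPy sketch:

```python
import numpy as np

def seg_metrics(pred_fg, gt_fg):
    """Foreground Recall, IoU, and Point Accuracy (PA) for binary point labels."""
    tp = np.sum(pred_fg & gt_fg)
    fp = np.sum(pred_fg & ~gt_fg)
    fn = np.sum(~pred_fg & gt_fg)
    tn = np.sum(~pred_fg & ~gt_fg)
    recall = tp / (tp + fn)        # fraction of true foreground points kept
    iou = tp / (tp + fp + fn)      # overlap of predicted and true foreground
    pa = (tp + tn) / len(pred_fg)  # overall point-wise accuracy
    return recall, iou, pa

pred = np.array([1, 1, 0, 0, 1], dtype=bool)
gt = np.array([1, 0, 0, 1, 1], dtype=bool)
print(seg_metrics(pred, gt))  # recall 2/3, IoU 0.5, PA 0.6
```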
More Performance on K-Radar Dataset
| Method | Modality | IoU | Metric | Total | Normal | Overcast | Fog | Rain | Sleet | Light Snow | Heavy Snow |
|---|---|---|---|---|---|---|---|---|---|---|---|
| RTNH (NeurIPS 2022) | 4DR | 0.5 | AP_BEV | 41.1 | 41.0 | 44.6 | 45.4 | 32.9 | 50.6 | 81.5 | 56.3 |
| | | | AP_3D | 37.4 | 37.6 | 42.0 | 41.2 | 29.2 | 49.1 | 63.9 | 43.1 |
| | | 0.3 | AP_BEV | 36.0 | 35.8 | 41.9 | 44.8 | 30.2 | 34.5 | 63.9 | 55.1 |
| | | | AP_3D | 14.1 | 19.7 | 20.5 | 15.9 | 13.0 | 13.5 | 21.0 | 6.36 |
| PointPillars (CVPR 2019) | L | 0.5 | AP_BEV | 49.1 | 48.2 | 53.0 | 45.4 | 44.2 | 45.9 | 74.5 | 53.8 |
| | | | AP_3D | 22.4 | 21.8 | 28.0 | 28.2 | 27.2 | 22.6 | 23.2 | 12.9 |
| | | 0.3 | AP_BEV | 51.9 | 51.6 | 53.5 | 45.4 | 44.7 | 54.3 | 81.2 | 55.2 |
| | | | AP_3D | 47.3 | 46.7 | 51.9 | 44.8 | 42.4 | 45.5 | 59.2 | 55.2 |
| RTNH (NeurIPS 2022) | L | 0.5 | AP_BEV | 66.3 | 65.4 | 87.4 | 83.8 | 73.7 | 48.8 | 78.5 | 48.1 |
| | | | AP_3D | 37.8 | 39.8 | 46.3 | 59.8 | 28.2 | 31.4 | 50.7 | 24.6 |
| | | 0.3 | AP_BEV | 76.5 | 76.5 | 88.2 | 86.3 | 77.3 | 55.3 | 81.1 | 59.5 |
| | | | AP_3D | 72.7 | 73.1 | 76.5 | 84.8 | 64.5 | 53.4 | 80.3 | 52.9 |
| InterFusion (IROS 2023) | L+4DR | 0.5 | AP_BEV | 52.9 | 50.0 | 59.0 | 80.3 | 50.0 | 22.7 | 72.2 | 53.3 |
| | | | AP_3D | 17.5 | 15.3 | 20.5 | 47.6 | 12.9 | 9.33 | 56.8 | 25.7 |
| | | 0.3 | AP_BEV | 57.5 | 57.2 | 60.8 | 81.2 | 52.8 | 27.5 | 72.6 | 57.2 |
| | | | AP_3D | 53.0 | 51.1 | 58.1 | 80.9 | 40.4 | 23.0 | 71.0 | 55.2 |
| 3D-LRF (CVPR 2024) | L+4DR | 0.5 | AP_BEV | 73.6 | 72.3 | 88.4 | 86.6 | 76.6 | 47.5 | 79.6 | 64.1 |
| | | | AP_3D | 45.2 | 45.3 | 55.8 | 51.8 | 38.3 | 23.4 | 60.2 | 36.9 |
| | | 0.3 | AP_BEV | 84.0 | 83.7 | 89.2 | 95.4 | 78.3 | 60.7 | 88.9 | 74.9 |
| | | | AP_3D | 74.8 | 81.2 | 87.2 | 86.1 | 73.8 | 49.5 | 87.9 | 67.2 |
| L4DR (Ours) | L+4DR | 0.5 | AP_BEV | 77.5 | 76.8 | 88.6 | 89.7 | 78.2 | 59.3 | 80.9 | 53.8 |
| | | | AP_3D | 53.5 | 53.0 | 64.1 | 73.2 | 53.8 | 46.2 | 52.4 | 37.0 |
| | | 0.3 | AP_BEV | 79.5 | 86.0 | 89.6 | 89.9 | 81.1 | 62.3 | 89.1 | 61.3 |
| | | | AP_3D | 78.0 | 77.7 | 80.0 | 88.6 | 79.2 | 60.1 | 78.9 | 51.9 |
| Class | Method | Modality | Total | Normal | Li. Snow | He. Snow | Rain | Sleet | Overcast | Fog |
|---|---|---|---|---|---|---|---|---|---|---|
| Sedan | PointPillars* (CVPR 2019) | 4DR | 42.8 | 35.0 | 53.6 | 48.3 | 37.4 | 37.5 | 53.9 | 77.3 |
| | RTNH (NeurIPS 2022) | 4DR | 48.2 | 35.5 | 65.6 | 52.6 | 40.3 | 48.1 | 58.8 | 79.3 |
| | PointPillars* (CVPR 2019) | L | 69.7 | 68.1 | 79.0 | 51.5 | 77.7 | 59.1 | 79.0 | 89.2 |
| | InterFusion* (IROS 2022) | L+4DR | 69.9 | 69.0 | 79.1 | 51.7 | 77.1 | 58.9 | 77.9 | 89.5 |
| | L4DR (Ours) | L+4DR | 75.8 | 74.6 | 87.5 | 58.4 | 77.8 | 61.4 | 79.2 | 89.3 |
| Bus or Truck | PointPillars* (CVPR 2019) | 4DR | 29.4 | 25.8 | 64.1 | 34.9 | 0.0 | 18.0 | 21.5 | - |
| | RTNH (NeurIPS 2022) | 4DR | 34.4 | 25.3 | 78.2 | 46.3 | 0.0 | 28.5 | 31.1 | - |
| | PointPillars* (CVPR 2019) | L | 53.8 | 52.9 | 84.1 | 50.7 | 3.7 | 61.8 | 77.3 | - |
| | InterFusion* (IROS 2022) | L+4DR | 56.9 | 56.2 | 85.7 | 40.5 | 6.4 | 70.6 | 80.5 | - |
| | L4DR (Ours) | L+4DR | 59.7 | 59.4 | 84.4 | 51.9 | 8.1 | 66.1 | 86.4 | - |
Owing to space constraints, the main text reports K-Radar results only with IoU = 0.5 and v1.0 labels. Here we additionally show results using IoU = 0.3 with v1.0 labels in Table 8, and results using IoU = 0.3 with v2.0 labels in Table 9. The experimental results consistently demonstrate the superior performance of our L4DR.
More Fusion Details
Below we present the implementation details of the individual fusion methods compared in Table 6 of the main text, all of which are implemented on the PointPillars baseline.
Concat.
We directly concatenate the LiDAR and 4D radar pseudo-images along the channel dimension after PointPillars encoding.
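A minimal PyTorch sketch of this baseline, with illustrative tensor shapes:

```python
import torch

# Hypothetical pillar pseudo-images: (batch, channels, H, W).
lidar_bev = torch.randn(2, 64, 8, 8)
radar_bev = torch.randn(2, 64, 8, 8)

# Channel-wise concatenation of the two modalities.
fused = torch.cat([lidar_bev, radar_bev], dim=1)
print(fused.shape)  # torch.Size([2, 128, 8, 8])
```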
Cross-Attn.
We use a 4-head attention layer with 32-dimensional sin/cos positional encoding to compute cross-modal pillar features in both directions: the cross-modal feature attended from the LiDAR pillar features is added to the 4D radar pillar features, and the cross-modal feature attended from the 4D radar pillar features is added to the LiDAR pillar features.
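The bidirectional structure can be sketched with PyTorch's `nn.MultiheadAttention`; dimensions are illustrative and the positional encoding is omitted for brevity:

```python
import torch
import torch.nn as nn

d_model, n_heads = 32, 4
attn_lidar_to_radar = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
attn_radar_to_lidar = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

lidar = torch.randn(2, 10, d_model)  # (batch, num_pillars, feature)
radar = torch.randn(2, 10, d_model)

# Each modality queries the other; the resulting cross-modal feature is
# added back onto the querying modality's own pillar features.
radar_fused = radar + attn_lidar_to_radar(radar, lidar, lidar)[0]
lidar_fused = lidar + attn_radar_to_lidar(lidar, radar, radar)[0]
```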
Self-Attn.
We use a 4-head attention layer with 32-dimensional sin/cos positional encoding to compute self-attention features on the last two BEV features of the 2D backbone and add them to the original features.
SE Block.
We use an SE block with a squeeze (reduction) ratio of 2 to compute SE features for each BEV feature of the 2D backbone and add them to the original features.
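The SE block (Hu, Shen, and Sun 2018) applied to a BEV feature map can be sketched as follows; the channel count is illustrative:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation over BEV feature channels (reduction ratio 2)."""

    def __init__(self, channels, reduction=2):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                  # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))             # squeeze: global average pooling
        w = self.fc(w)[:, :, None, None]   # excitation: per-channel weights in (0, 1)
        return x * w                       # reweight the BEV channels

bev = torch.randn(2, 8, 4, 4)
out = SEBlock(8)(bev)
```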
CBAM Block.
We use a CBAM block to compute attention features for each BEV feature of the 2D backbone and add them to the original features.