Optical LiDAR Communication: Repurposing Existing LiDAR Sensors for Infrastructure-to-Vehicle Communication

As autonomous mobile robots increasingly operate in real-world environments, safety has emerged as a critical challenge, particularly regarding obstacle and pedestrian detection in building blind spots and reliable traffic signal recognition. While traditional Vehicle-to-Infrastructure (V2I) systems rely on high-capacity communication through 5G networks or Optical Wireless Communication (OWC), these approaches require dedicated communication hardware that is impractical for small, low-cost robots. Moreover, the bandwidth required for robot-oriented V2I, such as blind-spot object detection and traffic signal states, is relatively modest, so the high-capacity communication of 5G is often unnecessary. To address these challenges, we propose a novel optical communication system named Optical LiDAR Communication (OLC), which repurposes existing LiDAR sensors as communication devices. By integrating LiDAR injection with 2D code technology, OLC achieves cost-effective V2I communication without requiring additional hardware on robots. Real-world experiments confirmed that the proposed method achieves a communication success rate of over 76% at distances up to 30 meters. Furthermore, as a proof of concept, we develop two key V2I systems utilizing OLC, traffic signal information transmission and blind-spot obstacle detection, and demonstrate their real-time communication performance. These results indicate that the proposed method has potential as a V2I platform for next-generation robotics infrastructure.

RA-L 2025
On the Realism of LiDAR Spoofing Attacks against Autonomous Driving Vehicle at High Speed and Long Distance

The rapid deployment of Autonomous Driving (AD) technologies on public roads presents significant social challenges. The security of LiDAR (Light Detection and Ranging) is one of the emerging challenges in AD deployment, given its crucial role in enabling Level 4 autonomy through accurate 3D environmental sensing. Recent lines of research have demonstrated that LiDAR can be compromised by spoofing attacks that overwrite legitimate sensing by emitting malicious lasers at the LiDAR. However, while previous studies have successfully demonstrated their attacks in controlled environments, gaps remain regarding their feasibility in realistic high-speed, long-distance AD scenarios. To bridge these gaps, we design a novel Moving Vehicle Spoofing (MVS) system consisting of three subsystems: the LiDAR detection and tracking system, the auto-aiming system, and the LiDAR spoofing system. Furthermore, we design a new object removal attack, the adaptive high-frequency removal (A-HFR) attack, that remains effective even against recent LiDARs with pulse fingerprinting features by leveraging gray-box knowledge of the scan timing of target LiDARs. With our MVS system, we are not only the first to demonstrate LiDAR spoofing attacks in practical AD scenarios where the victim vehicle is driving at high speed (60 km/h) and the attack is launched from long distance (110 meters), but also the first to perform LiDAR spoofing attacks against a vehicle actually operated by a popular AD stack. Our object removal attack achieves ≥96% attack success rates against a vehicle driving at 60 km/h down to its braking distance (20 meters). Finally, we discuss possible countermeasures against attacks with our MVS system. This study not only bridges critical gaps between LiDAR security and AD security research but also sets a foundation for developing robust countermeasures against emerging threats.

NDSS 2025
LiDAR Spoofing Meets the New-Gen: Capability Improvements, Broken Assumptions, and New Attack Strategies

LiDAR (Light Detection And Ranging) is an indispensable sensor for precise long- and wide-range 3D sensing, which has directly benefited the recent rapid deployment of autonomous driving (AD). Meanwhile, such a safety-critical application strongly motivates its security research. A recent line of research demonstrates that one can manipulate the LiDAR point cloud and fool object detection by firing malicious lasers at the LiDAR. However, these efforts face three critical research gaps: (1) evaluating only a specific LiDAR (VLP-16); (2) assuming unvalidated attack capabilities; and (3) evaluating with models trained on limited datasets. To fill these gaps, we conduct the first large-scale measurement study of LiDAR spoofing attack capabilities on object detectors, covering 9 popular LiDARs and 3 major types of object detectors. To perform this measurement, we significantly improve the LiDAR spoofing capability with more careful optics and functional electronics, which allows us to be the first to clearly demonstrate and quantify key attack capabilities assumed in prior works. However, we further find that these key assumptions no longer hold for the other 8 out of 9 LiDARs that are more recent than the VLP-16, due to various recent LiDAR features. To this end, we identify a new type of LiDAR spoofing attack that overcomes these limitations and applies to a much more general and recent set of LiDARs. We find that its attack capability is sufficient to (1) cause end-to-end safety hazards in simulated AD scenarios and (2) remove real vehicles in the physical world. We also discuss the defense side.

NDSS 2024

Ghost-FWL: A Large-Scale Full-Waveform LiDAR Dataset for Ghost Detection and Removal

Kazuma Ikeda*,1, Ryosei Hara*,1, Rokuto Nagata1, Ozora Sako1, Zihao Ding1, Takahiro Kado2, Ibuki Fujioka2, Taro Beppu2, Mariko Isogawa1, Kentaro Yoshioka1
1Keio University   2Sony Semiconductor Solutions
CVPR 2026

*Indicates Equal Contribution
Teaser figure

LiDAR data often contains ghost points caused by multi-path reflections from glass and reflective materials (top right), which appear as spurious structures that do not physically exist. Ghost points lead to substantial errors in tasks such as detection (left, (a)) and localization and mapping (left, (b)). We address this issue by introducing the Ghost-FWL dataset (bottom right) and a ghost removal framework.

Abstract

LiDAR has become an essential sensing modality in autonomous driving, robotics, and smart-city applications. However, ghost points (or ghosts), which are false reflections caused by multi-path laser returns from glass and reflective surfaces, severely degrade 3D mapping and localization accuracy. Prior ghost-removal methods rely on geometric consistency in dense point clouds and fail on the sparse, dynamic data produced by mobile LiDAR. We address this by exploiting full-waveform LiDAR (FWL), which captures complete temporal intensity profiles rather than just peak distances, providing crucial cues for distinguishing ghosts from genuine reflections in mobile scenarios. As this is a new task, we present Ghost-FWL, the first and largest annotated mobile FWL dataset for ghost detection and removal. Ghost-FWL comprises 24K frames across 10 diverse scenes with 7.5 billion peak-level annotations, which is 100× larger than existing annotated FWL datasets.

Dataset

This section presents Ghost-FWL, the largest FWL dataset to date, which is specialized for ghost removal. Conventional LiDAR datasets provide only point cloud-level information, discarding the temporal multi-path information crucial for identifying ghosts caused by glass and reflective surfaces. Ghost-FWL addresses this gap by capturing complete temporal intensity histograms and providing peak-level annotations indicating the physical cause of each reflection (object, glass, ghost, or noise). Spanning 10 diverse scenes with 24,412 annotated frames and 7.5B peak-level labels, Ghost-FWL is 100× larger than prior annotated FWL datasets [Scheuble et al.], enabling learning-based ghost detection and removal at the waveform level.
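To make the peak-level annotation concrete, the sketch below models one annotated waveform peak as a Python record. The field names and label encoding are illustrative assumptions for exposition, not the released dataset schema.

```python
from dataclasses import dataclass
from enum import IntEnum

class PeakLabel(IntEnum):
    """Semantic label attached to each detected waveform peak (hypothetical encoding)."""
    OBJECT = 0  # genuine surface return
    GLASS = 1   # return from a glass / reflective surface
    GHOST = 2   # multi-path artifact appearing "behind" the reflector
    NOISE = 3   # spurious peak (e.g., ambient light, cross-talk)

@dataclass
class AnnotatedPeak:
    """One peak-level annotation; field names are illustrative, not the official format."""
    frame_id: int     # index into the ~24K annotated frames
    ray_id: int       # which laser ray the waveform belongs to
    time_bin: int     # peak position in the temporal intensity histogram
    amplitude: float  # peak intensity
    width: float      # temporal spread of the peak
    label: PeakLabel  # Object / Glass / Ghost / Noise

peak = AnnotatedPeak(frame_id=0, ray_id=12, time_bin=305,
                     amplitude=0.8, width=2.1, label=PeakLabel.GHOST)
```

Annotating at the peak level, rather than the point level, is what allows a model to reason about all returns along a ray, including those that a conventional LiDAR would discard.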

Supplementary dataset figure

Overview of the Ghost-FWL dataset. Left: our dataset includes both indoor and outdoor scenes. Based on the dense 3D map of each scene, we annotate the FWL data with semantic labels: Ghost (red), Object (green), Glass (blue), and Noise; gray regions are excluded from annotation. Right: the data acquisition setup and dataset statistics, including the incident-angle distribution and example LiDAR positions. Data were collected at three times of day: morning (10 AM–12 PM), daytime (12–5 PM), and evening (5–7 PM).

Dataset Point Clouds

Interactive visualization of Ghost-FWL point clouds. Select a scene to inspect the raw dataset geometry.

Comparison of LiDAR real-world datasets for ghost detection and/or full-waveform analysis. Our Ghost-FWL contains mobile LiDAR full-waveform measurements and is one hundred times larger than prior work, making it the largest annotated FWL dataset.
| Dataset | Year | Platform | LiDAR Dim. | Ray Den. | Frames / Scenes | Annotated Peaks |
|---|---|---|---|---|---|---|
| UNIST [1] | 2017 | Stationary | 3D | 278 | — | — |
| Leddar PixSet [2] | 2021 | Mobile | 3D | 0.267 | — | — |
| Lee et al. [3] | 2023 | Stationary | 3D | 278 | — | — |
| FRACTAL [4] | 2024 | Aerial | 2D | — | — | — |
| Scheuble et al. [5] | 2025 | Mobile | 3D | 2.56 | 0.24k / 2 | N/A |
| Ghost-FWL (Ours) | 2025 | Mobile | 3D | 200 | 24k / 10 | 7.5B |

FWL: Full-Waveform LiDAR. Frames/Scenes: number of annotated frames and number of scenes within the real-world FWL data.

[1] Yun et al., "Virtual Point Removal for Large-Scale 3D Point Clouds with Multiple Glass Planes", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.

[2] Deziel et al., "An Opportunity for 3D Computer Vision to Go Beyond Point Clouds with a Full-Waveform LiDAR Dataset", IEEE International Intelligent Transportation Systems Conference, 2021.

[3] Lee et al., "Learning-Based Reflection-Aware Virtual Point Removal for Large-Scale 3D Point Clouds", IEEE Robotics and Automation Letters, 2023.

[4] Gaydon et al., "FRACTAL: An Ultra-Large-Scale Aerial Lidar Dataset for 3D Semantic Segmentation of Diverse Landscapes", arXiv preprint arXiv:2405.04634, 2024.

[5] Scheuble et al., "Lidar Waveforms are Worth 40x128x33 Words", ICCV, 2025.

Method

Given FWL data, our framework predicts and removes ghost-related signals. Our model consists of a transformer-based encoder and an MLP head. We further introduce FWL-MAE, a masked autoencoder designed for representation learning on FWL data, explicitly trained to reconstruct peak position, amplitude, and width. The ghosts detected by our model are then removed from FWL data, and the cleaned data are utilized for downstream tasks such as SLAM and 3D object detection.
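The architecture described above can be sketched as a per-peak token classifier: each waveform peak becomes a token of (position, amplitude, width) features, a transformer encoder exchanges context between peaks, and an MLP head emits class logits. This is a minimal PyTorch sketch under assumed dimensions (feature size, depth, and class count are illustrative), not the authors' implementation.

```python
import torch
import torch.nn as nn

class GhostPeakClassifier(nn.Module):
    """Sketch: transformer encoder over waveform-peak tokens + MLP classification head."""

    def __init__(self, feat_dim=3, d_model=128, n_heads=4, n_layers=4, n_classes=4):
        super().__init__()
        # Embed each peak's (position, amplitude, width) triple into a token.
        self.embed = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=256, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # Per-token head: logits over {Object, Glass, Ghost, Noise}.
        self.head = nn.Sequential(
            nn.Linear(d_model, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, peaks):            # peaks: (batch, n_peaks, feat_dim)
        tokens = self.encoder(self.embed(peaks))
        return self.head(tokens)         # logits: (batch, n_peaks, n_classes)

model = GhostPeakClassifier()
logits = model(torch.randn(2, 32, 3))    # 2 frames, 32 peaks each
pred = logits.argmax(-1)                 # predicted class per peak
```

In the same spirit, the FWL-MAE pre-training stage would mask a subset of peak tokens and train the encoder to reconstruct the masked (position, amplitude, width) values before the classification head is attached.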

Method overview figure

Results

Classification

Peak classification results and point cloud visualization after applying ghost removal. All results were obtained using the proposed framework. Red, green, and blue indicate Ghost, Object, and Glass, respectively.

Classification results figure

SLAM

Trajectory and mapping generated by SLAM using Multi-Peak processing (left) and our ghost removal method (right). Multi-Peak processing includes numerous ghost points in the reconstructed map, leading to trajectory drift. The proposed method yields a trajectory that more closely follows the ground-truth path (white) by effectively removing ghost artifacts.
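The cleanup step feeding SLAM (and detection) reduces to dropping peaks whose predicted label is not a genuine return. A minimal sketch, with hypothetical class ids matching nothing official:

```python
# Hypothetical class ids; only genuine returns should reach SLAM.
GHOST, NOISE = 2, 3

def remove_ghosts(points, labels):
    """points: list of (x, y, z); labels: predicted class id per point.
    Returns the cleaned point cloud with ghost and noise points dropped."""
    return [p for p, l in zip(points, labels) if l not in (GHOST, NOISE)]

points = [(1.0, 0.0, 0.5), (4.2, -1.0, 0.5), (9.9, 3.0, 1.2)]
labels = [0, 2, 0]                     # second point predicted as ghost
cleaned = remove_ghosts(points, labels)  # → two genuine points remain
```

Because the filtering happens per peak before point-cloud formation, downstream SLAM and detection pipelines need no modification.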

3D Object Detection

Qualitative evaluation of 3D object detection with Multi-Peak processing (left) and our ghost removal (right). Green bounding boxes indicate persons. With Multi-Peak, a ghost person is detected behind the glass wall, whereas our method suppresses this false detection.


BibTeX

TBA