NFL-BA: Near-Field Light Bundle Adjustment for

SLAM in Dynamic Lighting

Andrea Dunn Beltran*, Daniel Rho*, Stephen M. Pizer, Marc Niethammer, Roni Sengupta
University of North Carolina at Chapel Hill
*Equal contribution

[Paper]



Abstract

Simultaneous Localization and Mapping (SLAM) systems typically assume static, distant illumination; however, many real-world scenarios, such as endoscopy, subterranean robotics, and search & rescue in collapsed environments, require agents to operate with a co-located light and camera in the absence of external lighting. In such cases, dynamic near-field lighting introduces strong, view-dependent shading that significantly degrades SLAM performance. We introduce Near-Field Lighting Bundle Adjustment Loss (NFL-BA), which explicitly models near-field lighting as part of the bundle adjustment loss and enables better performance on scenes captured under dynamic lighting. NFL-BA can be integrated into neural-rendering-based SLAM systems with either implicit or explicit scene representations. Our evaluations mainly focus on endoscopy procedures, where SLAM can enable autonomous navigation, guidance to unsurveyed regions, blind-spot detection, and 3D visualization, which can significantly improve patient outcomes and the endoscopy experience for both physicians and patients. Replacing the photometric bundle adjustment loss of existing SLAM systems with NFL-BA yields significant improvements in camera tracking (37% for MonoGS and 14% for EndoGS) and achieves state-of-the-art camera tracking and mapping performance on the C3VD colonoscopy dataset. Further evaluation on indoor scenes captured with a phone camera with its flashlight turned on also demonstrates significant improvements in SLAM performance due to NFL-BA.
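The core idea is that a light co-located with the camera produces view-dependent shading, so the photometric residual should be computed against a near-field lighting prediction rather than assuming constant, distant illumination. The paper's actual loss formulation is not reproduced on this page; the sketch below is only a rough illustration of the concept, assuming a point light at the camera center with inverse-square falloff and Lambertian shading (all function names and parameters are illustrative, not the paper's notation):

```python
import numpy as np

def near_field_shading(points, normals, cam_center, albedo=1.0):
    """Toy near-field lighting model: a point light co-located with the
    camera, inverse-square falloff, Lambertian shading.
    points:  (N, 3) surface points in world coordinates
    normals: (N, 3) unit surface normals
    cam_center: (3,) camera (= light) position
    """
    to_light = cam_center - points                    # (N, 3) vectors to light
    dist = np.linalg.norm(to_light, axis=-1)          # (N,) light-surface distance
    light_dir = to_light / dist[:, None]              # unit direction to light
    cos_theta = np.clip(np.sum(normals * light_dir, axis=-1), 0.0, None)
    return albedo * cos_theta / (dist ** 2)           # predicted intensity

def nfl_ba_style_loss(observed_intensity, points, normals, cam_center):
    """Photometric residual against the near-field shading prediction,
    instead of against a static-illumination rendering."""
    predicted = near_field_shading(points, normals, cam_center)
    return np.mean((observed_intensity - predicted) ** 2)
```

In a full SLAM system this residual would be minimized jointly over camera poses and scene geometry during bundle adjustment; the toy version above only evaluates the loss for fixed inputs.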


C3VD Visualizations

Please allow a moment for point clouds to load after changing sequences.

SLAM
Depth Input
Sequence
Input Video
SLAM Baseline Trajectory
+NFL-BA (Ours)
GT Point Cloud
SLAM Baseline Mapping
+NFL-BA (Ours)

Controls: Click and drag inside the viewer to rotate. Use mouse wheel to zoom. Keyboard arrows ( ↑ ↓ ← → ) to move, 'a'/'d' to yaw, 'w'/'s' to pitch, 'q'/'e' to roll. Click here to reset view to default.


C3VD Results


Colon10k Visualizations

Sequence 3

RGB Video
Base Point Cloud
Our Point Cloud

Sequence 4

RGB Video
Base Point Cloud
Our Point Cloud

Indoor Self-Capture Visualizations

Indoor Scene
Input Video
Photo-BA Trajectory
NFL-BA

Acknowledgment

This work is supported by a National Institutes of Health (NIH) project, #1R21EB035832, "Next-gen 3D Modeling of Endoscopy Videos". We also thank Prof. Stephen M. Pizer and Dr. Sarah McGill for helpful discussions during the project.


Bibtex


We used the project page of Fuzzy Metaballs as a template.