Underwater SLAM Research
This semester I started doing research with the Field Robotics Group on underwater SLAM. The basic problem is that SLAM is already hard: you are trying to build a map and figure out where you are in that map at the same time. Underwater it gets a lot harder, because GPS is unavailable and visibility is unpredictable. My work focuses on whether neural networks can improve the feature matching step, specifically using camera data in conditions where the image quality degrades.
We are still in early stages. Most of what I have done so far is in simulation; we have not tested in real water yet. The idea is that traditional feature detectors (things like ORB or SIFT) struggle when the images are blurry or the lighting is bad, which is pretty much the norm underwater. So we are trying to see whether learned features from a neural network can do better in those conditions. It connects to a lot of what I learned in the SLAM class (Fall 2024) and the ROS work from last semester: same kind of pipeline, just a different and messier environment.
I am not sure yet how far this will go. The simulation results look promising, but simulation is always different from putting something on real hardware. I will update this when we have more to report.