Adding More Sensors Won’t Fix Your System—Here’s What I Learned the Hard Way

Many automotive brands argue about which sensors their driver assistance systems should use, each claiming theirs is the best. The question is, is any of them really flawless? And if not, what would the perfect sensor suite actually look like? You know, the magical combination of LiDAR, radar, cameras, GNSS, IMU, ultrasonic, thermal, probably a ToF sensor duct-taped somewhere, that will finally make perception flawless. No edge cases. No missed detections. No weird false positives in foggy industrial zones. Just clean point clouds, crisp images, and perfectly aligned timestamps flowing through ROS2 like a well-tended garden of messages.

Yeah. That sensor suite doesn’t exist. Let’s break this down properly, without pretending that throwing more sensors at a problem magically makes autonomy easy.

Where the Myth Comes From

The belief in a perfect sensor suite usually starts innocently. For me, it started in May 2025, when I was assigned to a project focused on driver and road safety. My task sounded straightforward enough: implement object detection using a solid-state LiDAR sensor. “That shouldn’t be that difficult, right?” Famous last words.

On paper, things went well. Object detection worked. Classification worked. Boxes showed up where they were supposed to. Then we took the system out for testing, and reality showed up. The moment the vehicle started moving, the point cloud started smearing. Distortion correction was no longer optional. To fix that, I needed vehicle motion information. Enter the IMU. Great. Now the system should work while the vehicle is in motion.
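To make that concrete, here is a minimal sketch of the idea behind point cloud deskewing, assuming per-point timestamps and a constant linear and angular velocity (from the IMU) over one sweep. The function and variable names are hypothetical, not production code:

```python
import numpy as np

def deskew_points(points, timestamps, lin_vel, ang_vel, scan_end_time):
    """Naive deskew sketch: re-express every point in the sensor pose at the end
    of the sweep, assuming constant linear and angular velocity over the scan.
    Sign conventions depend on your frame definitions.
    points: (N, 3) in the LiDAR frame, timestamps: (N,) seconds,
    lin_vel: (3,) m/s, ang_vel: (3,) rad/s."""
    corrected = np.empty_like(points)
    for i, (p, t) in enumerate(zip(points, timestamps)):
        dt = scan_end_time - t                    # how far this point lags the reference pose
        a = ang_vel * dt                          # small rotation accumulated over dt
        # First-order (small-angle) rotation: R ~ I + [a]_x
        R = np.array([[ 1.0,  -a[2],  a[1]],
                      [ a[2],  1.0,  -a[0]],
                      [-a[1],  a[0],  1.0]])
        corrected[i] = R @ p + lin_vel * dt       # rotate, then translate into the end pose
    return corrected
```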

Except now we had another problem. Detecting and classifying objects is fine, but what are those detections relative to? We needed object coordinates in a global frame. Local LiDAR coordinates were no longer enough. So GNSS joined the family. What started as a “simple LiDAR-only setup” had quietly turned into a multi-sensor system with LiDAR, IMU, and GNSS all tightly coupled.
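In ROS2 terms, “GNSS joined the family” mostly meant re-expressing detections in a global frame. A sketch of that step using tf2, assuming some localization node (the GNSS/IMU fusion) already broadcasts a map -> base_link -> lidar transform chain; node, topic, and frame names here are illustrative:

```python
from rclpy.node import Node
from rclpy.duration import Duration
from geometry_msgs.msg import PointStamped
from tf2_ros import Buffer, TransformListener
import tf2_geometry_msgs  # registers PointStamped with tf2's transform machinery

class DetectionToMap(Node):
    """Re-express LiDAR-frame detections in the 'map' frame, assuming a
    localization stack already publishes map -> base_link -> lidar on /tf."""
    def __init__(self):
        super().__init__('detection_to_map')
        self.tf_buffer = Buffer()
        self.tf_listener = TransformListener(self.tf_buffer, self)

    def to_map(self, detection: PointStamped) -> PointStamped:
        # detection.header.frame_id is the LiDAR frame and header.stamp the
        # measurement time; the timeout avoids blocking if /tf is lagging.
        return self.tf_buffer.transform(detection, 'map',
                                        timeout=Duration(seconds=0.1))
```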


This is how it usually happens. One sensor exposes a limitation, another sensor patches it, and before you realize it, your system has grown a sensor stack that no one originally planned for. In situations like this, we instinctively reach for hardware as the solution. Something doesn’t work? The obvious question pops up almost automatically: “What if we add another sensor?”

At first, that actually helps. A camera struggles at night, so you add LiDAR. LiDAR struggles in rain, so you add radar. Radar has poor angular resolution, so you add more cameras. Cameras get blinded by the sun, so maybe a thermal sensor. GNSS is jumpy in urban canyons, so you add RTK. RTK drops under trees, so you add visual odometry. Before you know it, your “minimal viable platform” looks like a rolling science experiment with enough data bandwidth to heat a small village.

This incremental success creates a dangerous illusion. Each added sensor solves a visible problem, so it’s tempting to believe that with enough sensors, you can eliminate all problems. That’s the myth. It’s a linear mindset applied to a non-linear system.

Perception is not a checklist where each sensor ticks off a failure mode. It’s a coupled system where adding sensors also adds complexity, failure surfaces, calibration drift, synchronization issues, and software fragility.

Sensors Don’t Fail Gracefully, They Fail Creatively

One of the biggest misconceptions is that sensors fail in simple, predictable ways. Camera blinded? Easy. LiDAR noisy? Fine. GNSS lost? Switch to dead reckoning.

In reality, sensors fail creatively and often in ways that look valid to downstream algorithms. A LiDAR in light rain doesn’t go “offline”. It produces points. Those points look spatially consistent. They even cluster nicely. Unfortunately, half of them are raindrops. Your object detector happily sees phantom obstacles, your tracker assigns IDs, and your planner starts braking for ghosts.
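There are mitigations, but they are heuristics, not cures. A crude sketch of one of them, dropping isolated returns that look like rain or spray (a naive O(N²) version for illustration; a real stack would use a KD-tree and tune thresholds per sensor and weather):

```python
import numpy as np

def drop_isolated_points(points, radius=0.3, min_neighbors=3):
    """Keep only points that have at least `min_neighbors` other returns within
    `radius` metres. Raindrop and spray returns tend to be sparse and isolated,
    so this removes many of them, at the cost of also thinning distant objects."""
    keep = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        dists = np.linalg.norm(points - p, axis=1)
        keep[i] = (dists < radius).sum() - 1 >= min_neighbors  # minus the point itself
    return points[keep]
```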


A camera under flickering LED streetlights doesn’t black out. It produces frames with subtle rolling brightness shifts that break feature tracking just enough to destabilize visual odometry. Your system doesn’t crash. It slowly drifts.

GNSS in an urban canyon doesn’t disappear. It snaps. One epoch you’re accurate to 2 cm, the next you’re confidently wrong by 5 meters. Fused incorrectly, that error pollutes your entire state estimate.
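This is why many stacks gate GNSS updates before letting them touch the state estimate. A minimal sketch of such a plausibility check, using a Mahalanobis (chi-square) gate on the 2D position innovation; the threshold and names are illustrative:

```python
import numpy as np

def gnss_update_is_plausible(predicted_pos, predicted_cov, gnss_pos, gnss_cov,
                             gate=9.21):
    """Reject GNSS fixes whose Mahalanobis distance from the filter's predicted
    position exceeds a chi-square threshold (9.21 ~ 99% for 2 DOF). A confident
    5 m jump fails this test as long as the filter's own covariance is honest."""
    innovation = np.asarray(gnss_pos) - np.asarray(predicted_pos)  # 2D residual
    S = np.asarray(predicted_cov) + np.asarray(gnss_cov)           # innovation covariance
    d2 = innovation @ np.linalg.inv(S) @ innovation
    return d2 <= gate
```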

From a ROS2 perspective, this is particularly nasty because everything still looks “healthy”. Topics are publishing. Messages have valid headers. QoS is fine. There are no exceptions, no NaNs, no fatal logs. Just wrong data moving efficiently through your graph.

No sensor suite can prevent this. At best, it changes the shape of the failure.

More Sensors Mean More Calibration Debt

Every sensor you add introduces a calibration problem. Not just once, but continuously.

Extrinsic calibration between sensors is not a static YAML file you generate once and forget. Mechanical vibrations, temperature changes, aging mounts, minor accidents, and even aggressive driving can shift sensor alignment over time. A few millimeters of LiDAR movement or half a degree of camera yaw error is enough to break multi-sensor fusion in subtle ways.
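A back-of-the-envelope check shows why half a degree is not a rounding error (the ranges are arbitrary, the geometry is not):

```python
import numpy as np

# What does half a degree of extrinsic yaw error do to a fused detection?
yaw_err = np.deg2rad(0.5)
for rng in (10, 40, 80):                    # metres from the sensor
    lateral = rng * np.tan(yaw_err)         # lateral displacement at that range
    print(f"{rng:3d} m -> {lateral:.2f} m off")
# ~0.09 m at 10 m, ~0.35 m at 40 m, ~0.70 m at 80 m: far enough to pull a
# LiDAR detection out of the image region the camera actually sees it in.
```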

Time synchronization is just as bad. Hardware timestamps drift. PPS signals get jittery. Software clocks slip under load. ROS2 does a decent job with timestamps, but it cannot magically fix bad time sources. If your LiDAR is 500 ms behind your camera and your ego-motion estimate is slightly off, your fused perception is now a spatial lie.
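Again, rough numbers make the point better than adjectives (the ego speed is an assumption):

```python
# What a timing offset between LiDAR and camera costs while moving:
# displacement_error ~ ego_speed * time_offset (ignoring rotation)
ego_speed = 15.0                               # m/s, roughly 54 km/h
for offset_ms in (5, 50, 500):
    err = ego_speed * offset_ms / 1000.0
    print(f"{offset_ms:4d} ms -> {err:.2f} m of apparent displacement")
# 5 ms is a few centimetres; 500 ms puts the fused object more than a car
# length away from where either sensor actually measured it.
```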

Now multiply that by six, eight, or twelve sensors. Calibration becomes technical debt with interest. The more sensors you have, the more time your team spends fighting calibration instead of improving algorithms.


Ironically, teams chasing the perfect sensor suite often end up with worse perception than teams using fewer sensors but understanding them deeply.

Redundancy Is Not the Same as Robustness

This one deserves special attention because redundancy is often used as a buzzword to justify massive sensor stacks. Redundancy sounds great on paper. If one sensor fails, another covers for it. In practice, redundancy only helps if the failure modes are independent and detectable. Many are not.

Rain affects cameras and LiDAR differently, but both degrade. Snow messes with LiDAR reflectivity and camera texture simultaneously. Low sun angles cause glare in cameras and multipath in GNSS. Urban environments degrade GNSS and radar interpretation at the same time.

Worse, redundancy often assumes you can detect which sensor is wrong. That’s hard. Sensor fusion algorithms tend to average, weight, or probabilistically combine inputs. If one sensor is confidently wrong, it can dominate the fusion output unless explicitly guarded against.
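A toy example of that dominance, using plain inverse-variance weighting (the numbers are made up; the effect is not):

```python
import numpy as np

def fuse(measurements, variances):
    """Inverse-variance weighted fusion of scalar range estimates."""
    w = 1.0 / np.asarray(variances, dtype=float)
    return float((w * np.asarray(measurements)).sum() / w.sum())

# Two honest sensors put an object at ~20 m; a faulty one reports 12 m but
# with an unjustified, tiny variance, so it dominates the fused estimate.
print(fuse([20.1, 19.9, 12.0], [0.25, 0.25, 0.25]))  # ~17.3 m
print(fuse([20.1, 19.9, 12.0], [0.25, 0.25, 0.01]))  # ~12.6 m: the confident liar wins
```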

From a systems perspective, robustness comes less from adding sensors and more from understanding failure characteristics, modeling uncertainty honestly, and designing algorithms that degrade safely when inputs become unreliable. A smaller, well-characterized sensor suite with conservative fusion often outperforms a bloated suite with naive assumptions.

The Software Complexity Explosion

Every new sensor doesn’t just add hardware. It adds drivers, message types, launch files, QoS tuning, synchronization logic, failure handling, diagnostics, and test cases.

In ROS2, this complexity is very real. Each sensor node publishes multiple topics. Point clouds, images, metadata, diagnostics. Each needs correct QoS settings depending on reliability and latency requirements. Get one of those profiles wrong and your system works in simulation but silently drops messages on the real vehicle.
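For the rclpy crowd, this is the kind of mismatch I mean (the topic name is illustrative): a RELIABLE subscription does not match a BEST_EFFORT sensor publisher, so one of these two callbacks simply never fires on the vehicle.

```python
from rclpy.node import Node
from rclpy.qos import (QoSProfile, ReliabilityPolicy, HistoryPolicy,
                       qos_profile_sensor_data)
from sensor_msgs.msg import PointCloud2

def make_subscriptions(node: Node, callback):
    # Matches a typical LiDAR driver that publishes with the sensor-data
    # profile (BEST_EFFORT reliability, shallow history):
    node.create_subscription(PointCloud2, '/lidar/points', callback,
                             qos_profile_sensor_data)

    # Looks harmless, but a RELIABLE request never matches that BEST_EFFORT
    # publisher, so this subscription starves:
    strict = QoSProfile(depth=10,
                        reliability=ReliabilityPolicy.RELIABLE,
                        history=HistoryPolicy.KEEP_LAST)
    node.create_subscription(PointCloud2, '/lidar/points', callback, strict)
```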

Then comes fusion. ApproximateTime synchronizers, custom message filters, buffering strategies, and interpolation logic. Suddenly your perception stack isn’t just about detecting objects; it’s about managing data plumbing.
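The pairing itself tends to look something like this in ROS2 Python (topic names and the slop value are assumptions, and choosing that slop is half the battle):

```python
import message_filters
from rclpy.node import Node
from rclpy.qos import qos_profile_sensor_data
from sensor_msgs.msg import Image, PointCloud2

class FusionInput(Node):
    """Sketch of the data-plumbing layer: pair camera frames and LiDAR sweeps
    whose timestamps are close, rather than assuming they arrive together."""
    def __init__(self):
        super().__init__('fusion_input')
        cloud_sub = message_filters.Subscriber(self, PointCloud2, '/lidar/points',
                                               qos_profile=qos_profile_sensor_data)
        image_sub = message_filters.Subscriber(self, Image, '/camera/image_raw',
                                               qos_profile=qos_profile_sensor_data)
        # queue_size and slop (max timestamp difference, seconds) are tuning
        # knobs you end up revisiting every time a sensor or driver changes.
        sync = message_filters.ApproximateTimeSynchronizer(
            [cloud_sub, image_sub], queue_size=10, slop=0.05)
        sync.registerCallback(self.paired_callback)

    def paired_callback(self, cloud: PointCloud2, image: Image):
        # Only called when both messages fall within `slop` of each other.
        pass
```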

This complexity has a cost. Bugs hide in the glue code. Latency creeps in unnoticed. Debugging becomes harder because failures emerge from interactions, not single components.

The myth of the perfect sensor suite ignores this entirely. It assumes software scales effortlessly with hardware. Anyone who has actually shipped a ROS2 stack knows that’s laughably optimistic.


The Real Goal: System-Level Understanding

The uncomfortable truth is that there is no perfect sensor suite because autonomy is not a sensor problem. It’s a system problem.

Sensors are inputs to a chain that includes calibration, synchronization, state estimation, perception, prediction, planning, and control. Weakness anywhere in that chain can dominate performance.

Great autonomy stacks are built by teams who understand their sensors deeply, model uncertainty honestly, and design software that expects things to go wrong. They invest in diagnostics, validation, simulation, and failure analysis instead of endlessly adding hardware.

They ask questions like: What does this sensor do when it fails? How confident should we be in this measurement? How do we detect silent degradation? How does this propagate through the system?

Those questions matter far more than whether you have one LiDAR or three.
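None of those questions needs a grand framework to start answering. “How do we detect silent degradation?” can begin as a plausibility check as small as this sketch (the threshold and topic name are placeholders you would tune per sensor):

```python
from rclpy.node import Node
from sensor_msgs.msg import PointCloud2

class LidarHealthMonitor(Node):
    """The topic being alive is not enough, so also watch quantities that
    collapse when the sensor degrades; here, simply the returns per sweep."""
    def __init__(self, min_points=20000):
        super().__init__('lidar_health_monitor')
        self.min_points = min_points
        self.create_subscription(PointCloud2, '/lidar/points', self.check, 10)

    def check(self, msg: PointCloud2):
        num_points = msg.width * msg.height  # holds for organized and flat clouds
        if num_points < self.min_points:
            self.get_logger().warn(
                f'Sweep has only {num_points} returns: possible blockage, '
                'heavy rain, or a failing sensor.')
```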

Accepting Imperfection Is a Feature, Not a Bug

The most mature autonomous systems are not chasing perfection. They’re designed around imperfection. They assume sensors will lie sometimes. They assume calibration will drift. They assume environments will surprise them. And they build architectures that survive that reality.

This mindset shift is hard, especially in a field full of flashy demos and bold claims. But it’s necessary. The myth of the perfect sensor suite is comforting because it promises a hardware solution to a fundamentally hard problem. There is no magic sensor combination that saves you from understanding your system.

And honestly, that’s a good thing. Because if autonomy were just about finding the right sensors, it would already be solved. It is always about understanding how the pieces work together and optimizing the system as a whole. If that interests you, it’s worth digging into the many fields of sensor technology in self-driving cars.

If you want more of this kind of thing, I share posts over on Instagram at @machinelearningsite: short insights, failure stories, and practical engineering lessons from robotics, ROS2, and autonomous systems. Come and have a look!
