
Amusingly, the question of "where to set the threshold" is actually a problem for pulsed lidar in general: what constitutes the pulse returning, and how do you know which pulse is dominant? In any case, my point was that I don't think they are implying that 20% is "the one true threshold" that determines whether a lidar is sensitive enough. Indeed, many OEMs already specify the bottom bracket at 10%, and there will always be other variables, such as weather and surface texture, that will ruin your day no matter what threshold you choose.

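To make that concrete, here's a toy sketch (Python, with made-up numbers) of why "which pulse counts" is genuinely ambiguous: a fixed amplitude threshold and a strongest-peak rule can disagree about which echo is "the" return.

    import numpy as np

    def detect_returns(waveform, threshold):
        """Indices where the waveform crosses the threshold on the way up."""
        above = waveform > threshold
        return np.flatnonzero(above[1:] & ~above[:-1]) + 1

    # Two echoes: a weak early one (say fog or foliage) and a stronger later one.
    t = np.arange(500)
    waveform = (0.15 * np.exp(-0.5 * ((t - 120) / 6) ** 2)
                + 0.60 * np.exp(-0.5 * ((t - 310) / 6) ** 2)
                + 0.02 * np.random.default_rng(0).standard_normal(t.size))

    first_crossing = detect_returns(waveform, threshold=0.1)[0]  # catches the weak early echo
    strongest = np.argmax(waveform)                              # picks the later, stronger echo
    print(first_crossing, strongest)  # different answers, so different reported ranges
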
As with software performance, it's all about reasonable benchmarking. If you're working on some lidar application, say building a self-driving car, and you currently use Velodyne HDL-64 sensors that can register returns from 10% targets at 80 m, then Livox's specifications give you a clue as to how their unit might compare in a similar circumstance. That's all. Past that, you have to rig up a test with the unit yourself and profile it; that's the only way. I'd also add that many objects would appear different in brightness under a pure wavelength like 905nm than they do under the white light your eye sees.

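To put a rough number on that kind of comparison: the usual back-of-envelope model is a diffuse (Lambertian) target with 1/R^2 signal falloff, so detectable range scales with the square root of reflectivity. Real performance also depends on optics, detector, weather and so on, so treat this as a sanity check only:

    import math

    def equivalent_range(spec_range_m, spec_reflectivity, target_reflectivity):
        # Lambertian target, signal ~ reflectivity / R^2, so range ~ sqrt(reflectivity).
        return spec_range_m * math.sqrt(target_reflectivity / spec_reflectivity)

    # e.g. translate the HDL-64's "10% at 80 m" figure to a 20% target:
    print(equivalent_range(80.0, 0.10, 0.20))  # ~113 m under these assumptions
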
All that said, your concerns aren't misplaced. One of the leaders among the 'new wave' of lidar OEMs is Luminar, and one of their original value propositions was that they moved to a different wavelength (1550nm), which has a higher eye-safe power limit. This means they could pump out higher-energy pulses and thus get more photons back from low-reflectivity targets such as tires and dark cars. The jury is still out on what works best to cover the real-world range of reflectivities, largely because there are a lot more variables at play than simple thresholding would imply.

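The same toy model gives the intuition behind the 1550nm argument: if the higher eye-safe limit lets you emit k times more pulse energy, then (all else equal, which it never is) range at a fixed reflectivity grows by roughly sqrt(k), or equivalently the minimum detectable reflectivity at a fixed range drops by roughly 1/k. The factor below is purely illustrative, not a real 905nm-vs-1550nm comparison.

    import math

    def range_gain(energy_factor_k):
        # signal ~ pulse_energy * reflectivity / R^2  =>  range ~ sqrt(pulse_energy)
        return math.sqrt(energy_factor_k)

    def min_reflectivity_scale(energy_factor_k):
        return 1.0 / energy_factor_k

    print(range_gain(10.0), min_reflectivity_scale(10.0))  # ~3.16x range, 10x darker targets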

