How did we test our top contenders, you ask? Well, folks, we drove around a lot. From long, straight highways in Kansas to the winding back roads of Kentucky and the coast of California, we made sure to switch up the terrain often. To test false alerts, we spent time in various shopping center parking lots and our personal favorite, the Las Vegas Strip, which is littered with automatic doors that send most detectors into a K-band alerting frenzy. We also used apps like Waze to find local speed traps and red-light cameras to drive past, again and again, tallying how many times each detector went off and how far in advance it alerted us.
Features
Testing every single feature on each detector was quite the endeavor. This began with a lot of studying of the manuals. We made sure that we understood all the bells and whistles each device has to offer and put them to the test. We downloaded the respective apps on our phones, sifted through all our options, and began testing. We started by leaving every single feature on. This was quite overwhelming, as false alerts were triggered left and right. Next, we tested the manual lockout features by locking out a location and then driving past it again to see if the alerts ceased. To compare and contrast the efficacy of each feature, we set every device to the same (or similar) settings, stuck two on the windshield, and evaluated which one was more effective. Did the lockouts work? Did turning down specific band sensitivities lower the number of false alerts? We switched up the settings repeatedly, with various combinations of features turned on and off, to ensure we thoroughly tested every possible option.
Accuracy
There are two main questions we asked ourselves while testing for this metric. Firstly, does the detector go off when there is a threat present? Secondly, does it go off when there isn't a threat present? We answered our first question by seeking out stationary speed traps and traffic cameras through local knowledge or by using the app Waze. We then drove past the same threat three times for each detector, keeping track of how many times it went off. We experienced some funny looks from law enforcement when they realized the same car had driven by them ten plus times. This proved to be a pretty concrete way of testing out the true threat alerts. The false alert testing was a little less black and white. On days that we were not actively testing, we were always passively testing these radar detectors. We tallied each false alert and next to it wrote down what most likely triggered it. It was often pretty obvious what was setting the alert off, even if it was false. More often than not, it was a speed indicator, Blind Spot Monitoring on the car next to us, or automatic doors. We also tested the accuracy by intentionally driving through areas we knew had the potential to set off false alarms, like the Las Vegas Strip, grocery store parking lots, and mall lots. We then used the manual GPS location lockout feature, if applicable, to silence the alerts and tested them out again while driving through the same areas.
Range
This was a fun but hectic metric to test, as it involved multiple detectors racing to alert us first. We began by pairing two detectors at random and placing them on the dash. Then we would drive by one of the speed traps we had discovered using Waze. We did this three times per pairing and tallied the winners. The winners then moved up, tournament-style, to face the other top performers, while the losers competed among themselves.
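For the curious, the bracket logic above can be sketched as a short script. This is only a minimal illustration of the pairing scheme, not our actual scorekeeping; the detector names are placeholders, and the alert distances are random stand-ins for real drive-by measurements.

```python
import random

def race(detector_a, detector_b, passes=3):
    """Tally which detector alerts first over several passes of the same speed trap.
    Distances are random placeholders standing in for real measured alert ranges."""
    wins = {detector_a: 0, detector_b: 0}
    for _ in range(passes):
        # The detector that alerts at the greater distance wins the pass.
        distance_a = random.uniform(0.5, 2.0)  # miles (hypothetical)
        distance_b = random.uniform(0.5, 2.0)
        winner = detector_a if distance_a >= distance_b else detector_b
        wins[winner] += 1
    return max(wins, key=wins.get)

def bracket_round(detectors):
    """One tournament-style round: pair detectors at random,
    winners advance to face winners, losers go on to face losers."""
    shuffled = detectors[:]
    random.shuffle(shuffled)
    winners, losers = [], []
    for a, b in zip(shuffled[::2], shuffled[1::2]):
        w = race(a, b)
        winners.append(w)
        losers.append(b if w == a else a)
    return winners, losers
```

Repeating `bracket_round` on the winners' pool quickly surfaces the longest-range detector, while the losers' pool still gets ranked among itself.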
Ease of Use
We always test our categories with an unbiased and inclusive mindset. While testing this metric, we took into account the user's potential technological skill or knowledge level, as well as their ability to hear and see. We first tried setting up the devices after a quick skim of the manual to test how intuitive the setup was. We then read the manuals in-depth to ensure we had set our devices up correctly and did it all over again. We spent time turning features on and off and asked friends of the technologically gifted AND challenged variety to attempt to work each device as well. We took into account screen size and audio clarity. Some of the verbal alerts were lost completely in the noise of the cab, while others were crisp and clear regardless of background noise.