By Kristin Nelson-Patel
Previously, I introduced the field of sensor systems architecture and posed a real-world example of the unnecessary resource costs and hazards that can arise when a sensor deployment isn't carefully thought out.
How could this use case of deploying an alarm system be better approached with sensor architecture thinking? First, before shopping for the alarm system, we would write down what decisions the intrusion alarm would trigger. In this case, we want to know immediately if a real intrusion happened so we can start an incident, call the proper authorities, and mitigate any losses while evidence of any damage is fresh. The key concept there, and the one that often gets overlooked, is "real". For any single sensor, there is a class of events it detects that the decision-maker intends it to detect; there are almost always other classes of events or phenomena the sensor also detects that look like the intended one, but aren't. Those are called "clutter".
Discriminating clutter from the phenomena you actually want to detect, so that impactful decisions aren't made incorrectly, is a central challenge in deploying sensors successfully. A common real-world example of clutter is migrating flocks of birds, which can intermittently look just like commercial aircraft to FAA radars as they ride atmospheric currents and group themselves densely. This problem has largely been mitigated by algorithms that use Doppler signatures and other sensor data, but it posed a real challenge to defense and safety decision-making after the terrorist attacks of 9/11/01, which raised the need to detect aircraft doing unusual things.
No single sensor is perfectly sensitive, or perfectly specific, to any real-world impactful decision-making need. Hard questions need to be asked of sensor vendors: What kinds of phenomena trigger a detection, and under what operating conditions? How likely is the sensor to miss a detection, and under what conditions? Is there, in addition, a measured statistical false alarm rate due to noise operating in or around the sensing mechanism, and what affects that rate? How a vendor answers these questions (or doesn't) gives you a first pass at whether their sensor is useful to you, by comparing those performance characteristics to your requirements.
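To make those vendor answers concrete, it helps to translate spec-sheet numbers into expected operational outcomes. Here is a minimal sketch of that arithmetic; the quoted probability of detection and false alarm rate are invented for illustration, not from any real vendor:

```python
# Hypothetical illustration: converting vendor-style specs into expected
# operational numbers. All figures below are assumed for the sketch.

def expected_false_alarms(far_per_day: float, days: float) -> float:
    """Expected statistical false alarms over a deployment period."""
    return far_per_day * days

def expected_misses(p_detect: float, real_events: int) -> float:
    """Expected missed detections given a probability of detection."""
    return (1.0 - p_detect) * real_events

# Suppose a vendor quotes Pd = 0.95 and a false alarm rate of 0.01/day
# under "optimal operating conditions" (assumed values).
far_per_year = expected_false_alarms(0.01, 365)  # false alarms per year
misses = expected_misses(0.95, 20)               # misses per 20 real events

print(f"Expected false alarms/year: {far_per_year:.2f}")
print(f"Expected misses per 20 real events: {misses:.2f}")
```

Even this crude translation turns an abstract-sounding "0.01/day" into "a few spurious alarms every year per sensor," which is the form the decision-maker actually needs.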
The requirements come from thinking through what would actually follow the alarm going off, or staying silent and missing a real event. That means getting detailed about the quality and type of evidence we would require before actually initiating an incident or calling the authorities with confidence. What does it cost to spin up an incident that didn't have to happen? What other ways might we catch a missed event? Then we compare vendor test data to our requirements estimates: a maximum tolerable false incident rate and a minimum required sensitivity. The vendor's performance specs will likely reflect optimal operating conditions, so determining up front whether those are even in the ballpark of what you need is an important step. But this is hopefully just the first down-select analysis, used to pick the top few vendors who might meet the needs.
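That first down-select can be sketched as a simple filter of vendor-quoted numbers against the two requirement thresholds. The vendor names, specs, and thresholds below are all invented for illustration:

```python
# Hypothetical first-pass down-select: compare vendor-quoted performance
# to our requirement estimates. All names and numbers are made up.

MAX_FAR_PER_DAY = 0.005  # maximum tolerable statistical false alarm rate
MIN_PD = 0.90            # minimum required probability of detection

vendors = [
    {"name": "Vendor A", "pd": 0.97, "far_per_day": 0.002},
    {"name": "Vendor B", "pd": 0.92, "far_per_day": 0.020},  # too noisy
    {"name": "Vendor C", "pd": 0.85, "far_per_day": 0.001},  # not sensitive enough
]

# Keep only vendors whose quoted specs clear both thresholds.
shortlist = [
    v["name"] for v in vendors
    if v["pd"] >= MIN_PD and v["far_per_day"] <= MAX_FAR_PER_DAY
]
print(shortlist)  # only Vendor A clears both thresholds here
```

Note that this pass uses the vendor's own optimal-conditions numbers; anyone who fails even this generous test is out, while the survivors still have to prove themselves in operational testing.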
Finally, we would insist on testing the sensors in the operating environments in which they will be deployed. There is no other way to get a handle on what deployment performance will actually be like. This is the phase when we have a chance to discover, for example, that folks will accidentally trigger the alarm by jiggling a handle or dropping a box, or that it goes off more often in warmer temperatures. When we multiply the estimated frequency of these clutter phenomena and statistical false alarms by the number of sensors we want to deploy, the result will often be significantly larger than the tolerable false incident rate we estimated in our requirements. In most cases, the confidence required to make an impactful decision, whether to act or to rest easy in quiet, leads to a requirement for more than one independent sensor, and for multiple kinds of sensors. We would architect in supporting sensors to discriminate real intrusions from the clutter conditions, with requirements-based and testing-based justification, so that when the system of sensors is purchased, deployed, and integrated into a process, we will be much more likely to be in control of, and happy with, the outcome.
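The multiplication above, and the payoff of requiring independent sensors to agree, can be sketched numerically. The per-sensor rates, fleet size, and coincidence window are assumed values, and the independence of the two sensors is itself an assumption that real testing would need to support:

```python
# Sketch with assumed numbers: how per-sensor false alarm rates scale
# with fleet size, and how requiring agreement between two independent
# sensors can restore a tolerable rate.

def fleet_false_alarms_per_day(per_sensor_rate: float, n_sensors: int) -> float:
    """Expected false alarms per day summed across the whole deployment."""
    return per_sensor_rate * n_sensors

def coincidence_rate(rate_a: float, rate_b: float, window_days: float) -> float:
    """Approximate rate (per day) of two INDEPENDENT sensors both falsely
    alarming within the same coincidence window, for small rates."""
    return rate_a * rate_b * window_days

# 100 locations, each sensor falsely alarming ~0.01 times/day on its own.
single = fleet_false_alarms_per_day(0.01, 100)

# Same 100 locations, but an incident requires two independent sensors
# to alarm within roughly one minute (1/1440 of a day) of each other.
paired = 100 * coincidence_rate(0.01, 0.01, 1 / 1440)

print(f"Single-sensor fleet: {single:.2f} false alarms/day")
print(f"Two-sensor coincidence: {paired:.2e} false alarms/day")
```

The single-sensor fleet produces about one false alarm per day, while the two-sensor coincidence requirement drives the rate down by several orders of magnitude, which is exactly why multi-sensor architectures are usually what makes an impactful decision defensible.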
A full-blown sensor architecture process may not be called for in most practical circumstances. I blew past many potential topics and techniques that could be developed further, but a few basic concepts are very powerful and not so common in education and work outside the field. As we go about our business, if we can just try to be mindful of concepts like statistical false alarms, clutter discrimination, and the benefits of testing in the operating environment, we might be surprised by where they come in handy. Perhaps most importantly, if we keep in mind the question, "What decisions do I need to make, and what do I require to support them?", the emergent consequences of thinking this way may very well help improve our reliability and uptime.