Friday, July 13, 2007

Adding Search and Rescue Capabilities (part II): Modeling what we see and do not see

One of the concerns one has during a search and rescue operation (part I is here) is whether or not the item of interest was seen or detected. I am not entirely sure, for instance, that SAROPS includes this, so here is the result of some of the discussions I have had with friends on the subject. While the discussions were about the Tenacious, one should keep an eye on how this applies to other types of mishap that may lead to a similar undertaking.

In the search for the Tenacious, there were several sensors used at different times:
  • Mark One Eyeball from the Coast Guard, from private parties, or from onlookers on the coast
  • sensors used by the Coast Guard on planes and boats
  • sensors (Radar, visual, IR, multispectral) from satellites or high altitude planes
  • webcams looking at the SF port and bay.

Each of these sensors gives some information about its field of view, but all are limited by their capabilities. The information from a sensor depends on its resolution and other factors. While the issue of resolution is well understood, at least spatially, sensor visibility also depends on:
  • cloud cover (high altitude, satellites), haze (low altitude)
  • the calmness of the sea
  • the orientation of the sensor (was the object of interest in the sensor cone?)
  • the ability of the sensor to discriminate the target of interest from the background (signature of the target)
  • the size of the target (are we looking for a full boat or debris?)
And so whenever there is a negative sighting over an area, the statement is really about the inability of the detector to detect the target of interest, due to the elements listed above. The probability that the target of interest is still there is therefore not zero (except in very specific circumstances). In effect, when data fusion occurs and information from all these sensors is merged, it is important to quantify what we do not know as much as what we know. It is also important to realize that different maps are needed for each scenario: searching for debris is different from searching for a full-size boat, and what the detectors/sensors see differs between the two. While one can expect a good signal when searching for a full-size boat, most sensors are useless when it comes to detecting minute debris.
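
To make this concrete, here is a minimal sketch (not SAROPS code) of how a negative search over some cells can be folded into a probability-of-containment map with a Bayes update. The grid, the prior, and the probability-of-detection (POD) value are made up for illustration:

```python
# Sketch: updating a probability-of-containment map after a negative search.
# Cells, prior values, and POD below are illustrative only.

def update_after_negative_search(prior, searched, pod):
    """Bayes update of a containment map when a sweep finds nothing.

    prior    : dict mapping cell -> prior probability the target is there
    searched : set of cells covered by the sweep
    pod      : probability of detection *if* the target is in a searched cell
               (depends on sensor, sea state, target size, etc.)
    """
    # P(not detected | target in cell) = 1 - pod for searched cells, 1 otherwise
    likelihood = {c: (1.0 - pod) if c in searched else 1.0 for c in prior}
    unnorm = {c: prior[c] * likelihood[c] for c in prior}
    total = sum(unnorm.values())
    return {c: v / total for c, v in unnorm.items()}

# Toy example: four cells, a sweep of cells A and B with a 70% chance of
# spotting a full-size boat.  Mass shifts toward the unsearched cells, but
# A and B do not drop to zero -- the sensor could simply have missed it.
prior = {"A": 0.4, "B": 0.3, "C": 0.2, "D": 0.1}
posterior = update_after_negative_search(prior, searched={"A", "B"}, pod=0.7)
print(posterior)
```

A debris-search scenario would rerun the same update with a much lower POD, which is one way the scenario-specific maps mentioned above end up diverging.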

In the aerospace business, some of us use software like STK, which provides different modules to schedule and understand specific satellite trajectories and the associated geometry. It might be a good add-on to the current SAROPS capabilities in terms of quantifying each sensor's field of view.
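
As a back-of-the-envelope illustration of what "quantifying the field of view" could mean (this is not STK or SAROPS code), one can at least check whether a location falls inside a satellite sensor's footprint, assuming a nadir-pointing conical sensor and approximating the footprint radius as altitude times the tangent of the half-angle. All numbers are invented:

```python
import math

EARTH_RADIUS_KM = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two points, in kilometers."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def in_footprint(target, subsat_point, altitude_km, half_angle_deg):
    """True if the target lies inside the approximate sensor footprint.

    Assumes a nadir-pointing conical sensor; the footprint radius is
    approximated as altitude * tan(half-angle), ignoring Earth curvature
    and off-nadir pointing.
    """
    footprint_radius_km = altitude_km * math.tan(math.radians(half_angle_deg))
    d = great_circle_km(target[0], target[1], subsat_point[0], subsat_point[1])
    return d <= footprint_radius_km

# Illustrative numbers only: a 700 km orbit with a 20-degree half-angle
# gives roughly a 250 km footprint radius.
print(in_footprint((37.8, -122.5), (38.5, -123.0), altitude_km=700, half_angle_deg=20))
```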



But the main issue is really about building the right probability distribution as the search goes on, and about how one can merge heterogeneous information into a coherent view of the search.

Time is also a variable that becomes more and more important as the search goes on. In particular, it is important to figure out how to do data fusion with time-stamped data. One can see in this presentation that, while the search grid is regular, some elements drift out of the field of view as the search is underway. So the issue is really about fusing sensor inputs with maritime currents and producing a probability of the target escaping the search grid. SAROPS already does some of this, but I am not sure the timing of the actual search (made by Coast Guard planes and boats) is entered into the software as the search goes on. It was difficult for us to get that timing back from the search effort (it was rightfully not their priority), and one simply wonders whether it is an input to SAROPS when iterating after the first empty searches.

If one thinks along the lines of the 8,000 containers scenario, this matters because some of these containers have been shown to have different lifespans at the surface and just under it. In that case, the correlation between time-stamped sensor outputs becomes central, since a container submerged a few feet underwater may not be visible to specific sensors (but would remain dangerous to navigation). Also, seeing nothing on a second pass over the same location (assuming no current) does not mean the object is no longer there; it may simply mean the sensor did not detect it. In the illustration below one can see the different targets found by the Radarsat/Johns Hopkins team for the Tenacious. Without time stamps it is nearly impossible to correlate hits between the first and the second satellite passes.
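
Here is a toy sketch (again, not SAROPS) of why the timing matters: a one-dimensional strip of cells in which a fraction of the probability mass drifts downstream with the current between time-stamped sweeps, and each negative sweep applies the same imperfect-detection idea as above. The drift fraction, POD, and sweep schedule are invented:

```python
# Toy sketch of time-stamped fusion: probability mass drifts with the current
# between sweeps, so a negative result at time t2 over a cell searched at t1
# says something different than searching the same water twice.

def drift(p, fraction):
    """Advect a 1-D probability strip: `fraction` of each cell's mass moves
    one cell downstream (toward higher indices); mass leaving the last cell
    counts toward the probability of escaping the search grid."""
    out = [0.0] * len(p)
    escaped = 0.0
    for i, mass in enumerate(p):
        out[i] += mass * (1.0 - fraction)
        if i + 1 < len(p):
            out[i + 1] += mass * fraction
        else:
            escaped += mass * fraction
    return out, escaped

def negative_sweep(p, searched_cells, pod):
    """Downweight searched cells by (1 - pod); no renormalization, so the
    total mass tracks 'target still in the grid and not yet detected'."""
    return [m * (1.0 - pod) if i in searched_cells else m for i, m in enumerate(p)]

# Uniform prior over 5 cells; the current pushes 30% of the mass per step.
p = [0.2] * 5
escaped_total = 0.0
for t, searched in enumerate([{0, 1}, {0, 1}, {2, 3}]):   # time-stamped sweeps
    p, esc = drift(p, fraction=0.3)
    escaped_total += esc
    p = negative_sweep(p, searched, pod=0.6)
    print(f"t={t}  map={[round(m, 3) for m in p]}  escaped={round(escaped_total, 3)}")
```

The running "escaped" number is the kind of probability-of-leaving-the-grid estimate mentioned above; without the time stamps on each sweep, there is no principled way to apply the drift step between updates.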

The Bayesian framework seems to have already been adopted by SAROPS and its previous versions. It may need some additional capabilities to take into account most of the issues mentioned above (sensor networks or EPH). In either case, a challenge of some kind, with real data, might be a way to advance the current state of the art.
