The device could come in handy in emergency situations

Sep 24, 2012 09:47 GMT
This wearable sensor can be used to create digital, real-time, 3D maps of the insides of buildings

A new instrument designed to make search-and-rescue operations more efficient has been developed by experts at the Massachusetts Institute of Technology (MIT), in Cambridge. It comes in the form of a wearable sensor that can create real-time, 3D maps of building interiors.

The wearable sensor system produces digital information that can be relayed live to remote operators in an emergency control room. The MIT group that built it says that it could be especially useful in mounting an effective disaster response.

The new system will be presented in full at the Intelligent Robots and Systems conference, to be held in Portugal next month. The video below demonstrates the capabilities of the device on the MIT campus.

The main components of the instrument are a Microsoft Kinect camera, used for depth perception, an inertial sensor, and a laser rangefinder. The latter is a LIDAR (light detection and ranging) unit that sweeps a laser beam through a wide angle (270 degrees) as the person wearing the device moves.
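The article does not describe the software behind the device, but the sensing pipeline it outlines is easy to picture: inertial estimates track the wearer's pose while each 270-degree laser sweep is projected into the map frame. The following is a minimal, hypothetical Python sketch of that idea; the Pose2D and MapBuilder names and the simple dead-reckoning model are assumptions for illustration, and a real system like MIT's would add scan matching, loop closure, and the Kinect's depth data.

```python
import math
from dataclasses import dataclass, field

# Hypothetical types for illustration only; the MIT system's actual
# software interfaces are not published in the article.

@dataclass
class Pose2D:
    x: float = 0.0      # metres, map frame
    y: float = 0.0
    theta: float = 0.0  # heading, radians

@dataclass
class MapBuilder:
    pose: Pose2D = field(default_factory=Pose2D)
    points: list = field(default_factory=list)  # accumulated map points (x, y)

    def apply_odometry(self, dx: float, dtheta: float) -> None:
        """Dead-reckon the wearer's pose from inertial/step estimates."""
        self.pose.theta += dtheta
        self.pose.x += dx * math.cos(self.pose.theta)
        self.pose.y += dx * math.sin(self.pose.theta)

    def add_scan(self, ranges: list, fov_deg: float = 270.0) -> None:
        """Project one LIDAR sweep (evenly spaced beams across fov_deg)
        into the map frame at the current pose."""
        n = len(ranges)
        start = -math.radians(fov_deg) / 2.0
        step = math.radians(fov_deg) / max(n - 1, 1)
        for i, r in enumerate(ranges):
            if r is None or r <= 0.0:
                continue  # skip invalid returns
            bearing = self.pose.theta + start + i * step
            self.points.append((self.pose.x + r * math.cos(bearing),
                                self.pose.y + r * math.sin(bearing)))

# Example: one step forward, then a five-beam sweep that sees a wall about 3 m away.
builder = MapBuilder()
builder.apply_odometry(dx=0.8, dtheta=0.0)
builder.add_scan([3.1, 3.0, 2.9, 3.0, 3.2])
print(builder.points)
```

In a sketch like this, the growing list of map points is what a remote operator's display would fill in live as the wearer walks through the building.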

A backpack contains the processing unit for the system, as well as the instrumentation necessary for transmitting the data to other users. Additionally, a handheld pushbutton device enables the user to mark areas of interest on the newly obtained maps.
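The article does not specify how these button-press annotations are encoded, but a simple way to picture them is as timestamped records anchored to the wearer's current map pose and streamed alongside the map data. The sketch below is a hypothetical Python illustration; the Pose type, field names, and JSON format are assumptions, not the MIT team's actual protocol.

```python
import json
import time
from typing import NamedTuple

class Pose(NamedTuple):
    x: float       # metres, map frame
    y: float
    theta: float   # heading, radians

def mark_point_of_interest(pose: Pose, label: str) -> str:
    """Package an annotation anchored to the wearer's current map pose,
    e.g. when the handheld button is pressed, as a JSON message that
    could be relayed to a remote control room."""
    record = {
        "timestamp": time.time(),
        "x": pose.x,
        "y": pose.y,
        "heading": pose.theta,
        "label": label,
    }
    return json.dumps(record)

# Example: the wearer presses the button next to a blocked stairwell.
print(mark_point_of_interest(Pose(12.4, -3.1, 1.57), "blocked stairwell"))
```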

In the near future, the MIT team wants to construct a similar system capable of adding voice and text tags to the 3D maps. This would be very useful when handling a toxic biological or chemical spill, or for pointing out areas of structural damage in buildings following an earthquake.

“The operational scenario that was envisioned for this was a hazmat situation where people are suited up with the full suit, and they go in and explore an environment,” Maurice Fallon explains.

“The current approach would be to textually summarize what they had seen afterward – 'I went into this room on the left, I saw this, I went into the next room,' and so on. We want to try to automate that,” the expert goes on to say.

Fallon, the lead author of the new paper, is a research scientist at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Professors John Leonard and Seth Teller and graduate students Hordur Johannsson and Jonathan Brookshire also took part in the study.

The investigation was supported by both the US Air Force and the Office of Naval Research.