New framework for processing data could also be more energy-efficient

Nov 4, 2013 12:50 GMT  ·  By
Allowing computers to make mistakes every once in a while could pave the way to faster computing

Allowing our computers to make the occasional harmless mistake could pave the way towards faster, more energy-efficient computing. A group of researchers at the Massachusetts Institute of Technology (MIT) announces the development of a programming framework that allows programmers to tell their computers when errors are allowed to happen.

At the moment, perfect execution is mandatory for all manner of applications, from video and image editing to rendering game and animation frames. However, if a few pixels in a high-definition image are not decoded properly, users are unlikely to notice.
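To make that intuition concrete, the following is a minimal Python sketch, not part of the MIT framework, that simulates decoding a 1080p frame in which each pixel comes out correct with an assumed per-pixel reliability of 99.9 percent and then counts how many pixels are actually wrong.

    import random

    # Illustrative sketch only: each pixel of a 1920x1080 frame decodes
    # correctly with an assumed per-pixel reliability of 0.999.
    WIDTH, HEIGHT = 1920, 1080
    PER_PIXEL_RELIABILITY = 0.999

    def count_decode_errors(width, height, reliability):
        """Return how many pixels would be decoded incorrectly."""
        return sum(1 for _ in range(width * height)
                   if random.random() > reliability)

    errors = count_decode_errors(WIDTH, HEIGHT, PER_PIXEL_RELIABILITY)
    total = WIDTH * HEIGHT
    print(f"{errors} of {total} pixels wrong ({100 * errors / total:.3f}%)")

At that assumed error rate, only about 2,000 of the frame's roughly 2 million pixels are affected, scattered across the image, which is exactly the kind of degradation the researchers argue viewers would not notice.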

According to the MIT team, such cases are good opportunities to let some mistakes slip into the code, so that the overall energy efficiency and speed of computers can increase. This way of addressing the speed problem is being considered because of the limits to further advancing existing microchip technologies.

Until now, the number of transistors that could fit in a given space has doubled every few years. But this brisk pace of miniaturization cannot last for long, since transistors are now just a few nanometers across. Making them smaller is becoming increasingly difficult.

The new MIT programming framework was developed by researcher Martin Rinard, of the Institute's Department of Electrical Engineering and Computer Science, and graduate students Michael Carbin and Sasa Misailovic.

“If the hardware really is going to stop working, this is a pretty big deal for computer science. Rather than making it a problem, we’d like to make it an opportunity. What we have here is a […] system that lets you reason about the effect of this potential unreliability on your program,” Rinard explains.

One of the system's main advantages is that it can calculate the probability that a piece of software will perform the way it should, even when it is allowed to make errors. A paper on the framework was recently presented at the Association for Computing Machinery's Object-Oriented Programming, Systems, Languages and Applications conference.
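As a rough illustration of the kind of reasoning involved, here is a minimal Python sketch under the simplifying assumption that each unreliable operation succeeds independently with a known probability. It is not the framework's actual analysis, only a toy model of how per-operation reliabilities can be combined into a figure for a whole computation.

    # Toy model (not the MIT framework): if every unreliable operation in a
    # computation succeeds independently with a known probability, then the
    # product of those probabilities gives the chance that the whole
    # computation runs correctly.
    def overall_reliability(op_reliabilities):
        """Probability that every listed operation executes correctly."""
        result = 1.0
        for r in op_reliabilities:
            result *= r
        return result

    # Example: 1,000 unreliable operations, each correct 99.99% of the time.
    ops = [0.9999] * 1000
    print(f"Whole computation correct with probability {overall_reliability(ops):.3f}")

A programmer could compare a figure like this against a target, for example requiring that a result be correct at least 90 percent of the time, and decide whether unreliable hardware is acceptable for that computation.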

“This is a foundation result, if you will. This explains how to connect the mathematics behind reliability to the languages that we would use to write code in an unreliable environment,” comments Dan Grossman, an associate professor of computer science and engineering at the University of Washington.