Most deaf people in the US prefer to communicate via sign language, but until now this has been impossible over mobile networks: the available bandwidth produced video of too low a quality to accurately convey arm, finger or face movements.
US researchers are now attempting to solve this problem with video compression tools, making it possible to send live video of people signing even across low-bandwidth networks.
By sending data only about which parts of each frame have changed, the system cuts down the bandwidth needed. What remains is for the researchers to talk it over with mobile firms and find a way to make the technology available to deaf people.
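The frame-differencing idea described above can be sketched in a few lines. This is a simplified illustration, not the project's actual codec: frames are flat lists of grayscale values, and only the changed pixels are transmitted; a real video codec works on blocks and motion vectors instead.

```python
def encode_delta(prev_frame, curr_frame, threshold=0):
    """Return (index, new_value) pairs for pixels that changed."""
    return [
        (i, curr)
        for i, (prev, curr) in enumerate(zip(prev_frame, curr_frame))
        if abs(curr - prev) > threshold
    ]

def decode_delta(prev_frame, delta):
    """Rebuild the current frame by applying the delta to the previous one."""
    frame = list(prev_frame)
    for i, value in delta:
        frame[i] = value
    return frame

# Example: only 2 of 8 pixels changed, so only 2 pairs need to be sent.
prev = [10, 10, 10, 10, 50, 50, 50, 50]
curr = [10, 10, 12, 10, 50, 50, 50, 47]
delta = encode_delta(prev, curr)
print(delta)                              # [(2, 12), (7, 47)]
print(decode_delta(prev, delta) == curr)  # True
```

When most of the scene is static, as with a signer against a fixed background, the delta is far smaller than the full frame, which is exactly what makes low-bandwidth transmission feasible.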
While the video compression tools bring many advantages, there are also drawbacks. "To do all this calculation and video compression runs down your battery pretty fast," said University of Washington computer scientist Richard Ladner, one of the principal investigators on the project.
The system looks only for hand, arm and face movements, and ensures that the signer's face is presented in greater detail. There is an obvious reason for this. "The large, slower movements of hands and arms can be picked up at low fidelity," said Prof Ladner. "The face needs higher fidelity because the movements are much smaller."
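The region-of-interest trade-off Prof Ladner describes can be sketched as a bit-allocation problem: split a fixed bitrate budget across regions in proportion to how much fidelity each needs. The region names and weights below are illustrative assumptions, not the project's actual parameters.

```python
def allocate_bitrate(total_kbps, weights):
    """Split a bitrate budget across regions in proportion to their weights."""
    total_weight = sum(weights.values())
    return {region: total_kbps * w / total_weight
            for region, w in weights.items()}

# Hypothetical weights: the face is weighted highest, following the
# article's rationale that facial movements are smaller and need more detail.
budget = allocate_bitrate(12, {"face": 6, "hands_arms": 3, "background": 1})
print(budget)  # {'face': 7.2, 'hands_arms': 3.6, 'background': 1.2}
```

Even within a 12 kbps budget, this kind of weighting lets more than half the bits go to the small region that matters most to a viewer reading the signs.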
People reading sign language look at the signer's face about 95 per cent of the time. The newly developed system works across networks that provide a bandwidth of only 10-12 kilobits per second, and the research has gone well enough that the team is in talks with handset makers and operators to put the technology on phones.