It's doing more harm than good at the rate we are going

Jan 13, 2015 14:46 GMT  ·  By

Artificial intelligence has been one of the major goals of human technological development ever since science fiction first coined the concept. And at present, much of the public's attention is focused on AI research and development. More than ever before, actually.

As it turns out, this may cause more harm than good. Indeed, according to experts I talked to recently, it is already doing more harm than good.

Normally, hype is something companies try to cultivate and encourage, so that a good number of people will be willing to stop and watch closely when they launch a product.

However, as we've seen numerous times in the past, going overboard with hyping something up can backfire, causing a highly anticipated product release to conclude in disappointment.

A decent case could be made that this cannot happen with artificial intelligence. Well, unless AIs turn out to be evil and omnicidal, in which case we suppose we'll have had just as much warning.

However, it is the opinion of certain experts, with whom I happen to agree, that AI research is too hyped up at present, and that the gap between the perception of AI research and the reality of it is far too wide.

No artificial intelligence actually exists today; the term is badly misused

Recently, I was able to speak directly with three people who basically amount to the ultimate authority in such things, three executives at Sentient Technologies: Co-Founder and Chief Scientist Babak Hodjat, Chief Technology Officer Nigel Duffy, and Chief Marketing Officer Leslie Nakajima.

Sentient Technologies is a company developing technology meant to ultimately provide massively scalable artificial intelligence to everyone in the world, enabling companies to scale their computing and AI software.

In the grand scheme of things, they are a fairly young company. However, their field of operation hasn't exactly passed the nascent stage either.

Indeed, a strong case could be made for there not even being any real AI in the world, no matter what the Marvel cinematic universe and Tony Stark would have us believe. If we were to compare AI development to a human's, the former is barely in the embryo stage, if that.

Despite this, everyone seems to believe that a true artificial intelligence is just around the corner. I admit I may have used such a turn of phrase myself at times, even though I never believed I would see anything major accomplished in this field during my lifetime.

The many robots capable of self-guided flight, not to mention self-driving cars, are partly responsible for this. In the end, however, they only use interconnected proactive and reactive systems based on pre-programmed protocols. There is no real intelligence there.

What we don't currently have

If you're hoping to see a Cortana, Serina or HAL 9000 before you go on your next great adventure, you'll be disappointed. Your children or your children's children might get to see the birth of such things, but it's really impossible to speculate without spontaneously developing clairvoyance and scrying the future.

So while it's true that AI will add a lot of value to all sorts of products, it won't be the product itself for a long, long time, and we'll have to rely on individual pieces or “puzzles” of AI building blocks while we navigate around the dead ends dotting the path toward human-like intelligence.

Even the cloud-based Internet of Things model of living won't use real artificial intelligence, despite being set to launch as early as 2016.

What we do have right now

Relatively complex voice, image and motion recognition software and sensors have allowed for augmented reality devices like Google Glass, self-driving cars, semi-autonomous robots and drones, and increasingly intuitive programs.

Nevertheless, nascent “artificial intelligence” is only in play within supercomputers, and even then it's mostly about number crunching and simulations.

Equity investing is another area being improved by software: there should soon be programs capable of making stock market investments for you, anticipating highs and lows and buying shares accordingly.

That leaves machine learning, the only thing even remotely similar to what people imagine when they hear “AI.”

In 2012, Google and researchers from Stanford University revealed a supercomputer that mimicked the workings of a human brain and managed to learn, on its own, to recognize the concept of a “cat.”

The operators didn't tell the supercomputer it was a cat. The system was only “shown” the images, and later, it was able to create its own image of a cat.
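As a miniature illustration of that kind of label-free learning, here is a toy sketch in plain NumPy, obviously nothing like the actual 16,000-core system: a tiny linear autoencoder is fed unlabeled 2-D points and, merely by trying to reconstruct them, discovers the direction they lie along, with no one ever telling it what the data represents.

```python
# Toy sketch of unsupervised learning: a tied-weight linear autoencoder
# compresses 2-D points down to one number and back, and in doing so
# discovers the data's dominant direction -- no labels involved.
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled data: 200 noisy points scattered along the line y = 2x.
x = rng.normal(size=(200, 1))
data = np.hstack([x, 2 * x]) + 0.05 * rng.normal(size=(200, 2))

w = rng.normal(size=(2, 1)) * 0.1   # shared encoder/decoder weights

def loss(w):
    recon = (data @ w) @ w.T        # encode to 1-D, decode back to 2-D
    return np.mean((recon - data) ** 2)

initial = loss(w)
lr = 0.01
for _ in range(500):
    code = data @ w                  # 1-D "summary" of each point
    recon = code @ w.T               # reconstruction from the summary
    err = recon - data
    # Gradient of the mean squared reconstruction error w.r.t. w
    grad = (data.T @ err @ w + err.T @ data @ w) / len(data)
    w -= lr * grad

final = loss(w)
print(f"reconstruction error: {initial:.3f} -> {final:.3f}")
```

After training, `w` points along the y = 2x direction the data was drawn from, even though the program was never told anything about the data; only the reconstruction objective drove the learning, which is the same principle, at microscopic scale, behind the cat experiment.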

Part of the irony is that the cluster used 16,000 computing cores and 10 million unlabeled YouTube stills to do it, and the training still took three days. Also, asking it to do the same for a different concept, like a “car,” subsequently failed for some reason, although there seemed to be better success with human faces and bodies.

The other part of the irony? This seemingly basic ability to recognize images via independent machine learning is a huge accomplishment, more than anyone in the scientific community actually hoped for.

What all of this means

In a nutshell, we have no AIs and will not see the emergence of any for a long, long time, so people should stop rooting for them, passively or otherwise, because they're only setting themselves up for disappointment. Not to mention that it places undue pressure on the people we expect to develop them and other useful things.

If anything, we'll need to master quantum computing first, and that's not something we can look forward to in the near future, when the most advanced quantum HDDs only hold data for 6 hours. And that's actually a lot, considering that before it was a hundredth of that at best. And let's not even get into quantum CPUs.