The company considers many-core chips viable for mobile devices

Oct 31, 2012 08:57 GMT

Despite mobile devices only now getting comfortable with quad-core processors, Intel thinks it can and should increase the core count twelvefold.

To cut right to the chase, Intel thinks it can adapt its 48-core single-chip cloud computer (SCC) for use in mobile devices like smartphones and tablets.

The company's chief technology officer actually believes that mobile devices will outright need many-core processors before long.

We don't dispute that computing has been steadily settling into a multi-core mindset, especially in the world's data centers and supercomputers.

Making phones and tablets jump to 48-core designs still seems unlikely, though, not to mention unnecessary.

After all, tablets are only now getting comfortable with quad-core CPUs, and only because the ARM architecture is very energy efficient.

For Intel, whose x86 architecture has, even now, yet to score a phone design win, promoting a many-core architecture for the mobile market is a curious move.

True, spreading tasks across many cores running at lower clock speeds can be more energy-efficient than pushing a few cores to their limits, and overall performance can come out ahead, but the case for dozens of them in a phone remains thin.
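The efficiency argument rests on how dynamic power scales with voltage and frequency; a rough back-of-the-envelope sketch (the textbook CMOS relation, not anything Intel has published for the SCC) goes like this:

```latex
% Dynamic power of a CMOS core scales roughly as P \approx C V^2 f.
% One core at full voltage V and frequency f:
P_1 \approx C V^2 f
% Two cores at half the frequency can often run at a reduced voltage V' < V:
P_2 \approx 2 \cdot C V'^2 \cdot \tfrac{f}{2} = C V'^2 f < P_1
% Roughly the same aggregate throughput, lower total power --
% provided the workload actually splits cleanly across the cores.
```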

At least Chipzilla doesn't expect the jump to be sudden. The earliest many-core slates and/or phones are said to be on track for 2017 through 2022.

That leaves us with five to ten years of speculation, and software makers with time to develop applications that can actually use all those cores at once. That is no easy task, since it isn't a matter of tweaking a bit of code: developers have to rethink how programs are structured in order to exploit parallelism on this scale.
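To give a flavour of what that restructuring means in practice, here is a minimal sketch in C++ (our own illustration, not anything Intel has shown): a simple sum is split into one chunk per hardware core, and the programmer suddenly has to worry about dividing the work, launching and joining threads, and merging the partial results.

```cpp
#include <algorithm>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Sum a large array by handing one chunk to each available hardware core.
// The chunking, the thread management and the final merge are all extra
// work that a plain sequential loop would never need.
long long parallel_sum(const std::vector<int>& data) {
    unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    std::vector<long long> partial(cores, 0);
    std::vector<std::thread> workers;
    std::size_t chunk = data.size() / cores;

    for (unsigned i = 0; i < cores; ++i) {
        std::size_t begin = i * chunk;
        std::size_t end = (i + 1 == cores) ? data.size() : begin + chunk;
        workers.emplace_back([&, i, begin, end] {
            // Each thread sums its own slice into its own slot.
            partial[i] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0LL);
        });
    }
    for (auto& worker : workers) worker.join();  // wait for every chunk
    return std::accumulate(partial.begin(), partial.end(), 0LL);
}

int main() {
    std::vector<int> data(48000000, 1);          // 48 million ones
    std::cout << parallel_sum(data) << "\n";     // prints 48000000
}
```

The point is not the arithmetic but the scaffolding: keeping dozens of cores busy means every performance-critical piece of an application has to be carved up this way, or handed to libraries and frameworks that do it.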

That leaves only one other matter: device makers are embracing heterogeneous chip designs, with CPU cores and GPUs on the same die. If Intel's dream is to come true, it will have to offer comparable graphics capabilities and persuade software developers to target its many cores, rather than leaving them to lean on GPU computing methods instead.