Intel says that GPUs do not have the accuracy or power for branching and scheduling

Jun 5, 2008 13:34 GMT  ·  By

Yesterday, at a meeting with journalists, Intel stated that video encoding is a task designed for CPUs, and that the future would bring no change in this respect. According to Intel representatives, DivX uses the CPU to dynamically adjust the level of detail across scenes. The processor, the company says, is the only component that can ensure the denser areas of a scene receive the high level of detail they require.

"When you?re encoding on the CPU, the quality will be higher because we?re determining which parts of the scene need higher bit-rates applying to them," said Fran?ois Piednoel, senior performance analyst at Intel.

The discussion turned to CUDA and the massive performance increase Nvidia claims it will bring. CUDA may represent a paradigm shift for software developers, and some big names, Adobe among them, already seem to be on board. According to Piednoel, CUDA will probably not deliver the expected level of performance, because it splits the scene up and treats every pixel the same, a brute-force method. Yet, while Intel is speculating here, Nvidia has never said anything about video quality - until now.

"The science of video encoding is about making smarter use of the bits and not brute force," added Piednoel.

Since Larrabee, itself a massively parallel processor, is so close to market, we would all be interested to know whether Intel will change its tune once it has a chip with enough processing power to deliver application performance similar to that claimed for CUDA. Intel, however, would not make the comparison, on the grounds that the GPU does not feature full x86 cores. According to Intel, CUDA only allows coding in C and C++, a big limitation given that x86 supports any programming language. Developers who do not code in C will surely take note of this.

The idea, Intel said, is that not all developers use or understand C or C++. On the other hand, C is usually the first language taught in a Computer Science degree, so Intel may not be entirely right on this point. We should also not forget that many developers use their knowledge of C to learn other languages, and then use those languages to write their programs.

According to Intel, GPUs are not suited for branching and scheduling, and their execution units are good only for graphics processing. Yet today's GPUs are capable of handling tens of thousands of threads at once and have dedicated support for branching, as required by the major graphics APIs. This is clearly technology capable of much more than graphics workloads alone.

The only conclusion to be drawn from Intel's statements is that the stream processing units in AMD's and Nvidia's latest graphics architectures are so simple and inaccurate that they can do nothing but push pixels. This would explain why Intel uses fully functional x86 cores in the architecture of its Larrabee processor.