That way, program developers won't have to allocate resources themselves

Mar 26, 2014 07:47 GMT

Right now, there is little hardware-level communication between the graphics processing unit (GPU) and central processing unit (CPU) of a PC, as the two still can't really share resources. That will change with Pascal though, NVIDIA says.

During yesterday's GTC keynote, NVIDIA presented the Pascal GPU and the handful of new breakthroughs meant to place it above all its predecessors.

We have already outlined the NVLink interconnect technology (which improves on the communication currently handled over PCI Express) and the 3D memory (stacked VRAM chips).

What we haven't touched on was the Unified Memory technology that the Santa Clara, California-based company will launch with Pascal.

In a nutshell, Unified Memory will allow the GPU to access the CPU's memory, and the CPU to access and use the GPU's memory.

Right now, software developers have to handle that in their programs' code, explicitly allocating memory on each side and shuttling data between the two, or defining a method by which the application can do it on its own.

With Unified Memory, however, the CPU and GPU will do it themselves, so that's one less issue that devs have to tackle.
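
For a rough idea of what this means in practice, here is a minimal CUDA sketch, not NVIDIA sample code, contrasting today's explicit host/device copies with the managed-pointer model that CUDA already exposes through cudaMallocManaged(), which Unified Memory on Pascal is meant to back up at the hardware level. The kernel, names, and sizes are made up for illustration.

```cuda
// Contrast: explicit CPU<->GPU copies vs. a single managed allocation.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Traditional approach: separate host and device buffers,
    // with the programmer responsible for every copy.
    float *host = (float *)malloc(bytes);
    float *dev = nullptr;
    for (int i = 0; i < n; ++i) host[i] = 1.0f;
    cudaMalloc(&dev, bytes);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);
    scale<<<(n + 255) / 256, 256>>>(dev, n, 2.0f);
    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);
    cudaFree(dev);
    free(host);

    // Unified Memory approach: one pointer visible to both CPU and GPU;
    // the runtime (and, with Pascal, the hardware) migrates the data.
    float *shared = nullptr;
    cudaMallocManaged(&shared, bytes);
    for (int i = 0; i < n; ++i) shared[i] = 1.0f;     // CPU writes directly
    scale<<<(n + 255) / 256, 256>>>(shared, n, 2.0f); // GPU uses the same pointer
    cudaDeviceSynchronize();                          // wait before the CPU reads it back
    printf("shared[0] = %f\n", shared[0]);
    cudaFree(shared);
    return 0;
}
```

The second half is the point: the same pointer works on both sides, and the runtime, or on Pascal the hardware itself, worries about where the data physically lives.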

The concept is somewhat similar to the hUMA technology that Advanced Micro Devices introduced along with the Kaveri accelerated processing unit (APU), though that one only applies to the integrated GPU.

So, say a PC has 32 GB of DDR3 and, why not, one or two Pascal graphics processing units, each with 6 GB or 8 GB of GDDR5 VRAM backing them up (or much more, given the 3D stacked memory tech).

With two 8 GB cards, that would make for a total of 48 GB (32 + 2 x 8), of which the 16 GB of VRAM would be just as good as the other 32, so it would be as if both the CPU and GPU had full access to 48 GB of DDR3. Or DDR4, since it will be out by then.

Add to that the 5 to 12 times higher bandwidth than PCI Express enabled by NVLink (which means very wide interfaces feeding each GPU), and you're looking at some pretty mean high-end PCs. No wonder NVIDIA went on and on about how good Pascal would be in supercomputers and “machine learning.”

For those of you unfamiliar with the term, “machine learning” describes a computer that can assimilate information and answer questions based on that accumulated experience. Answering questions is about all such systems can do now, but eventually true artificial intelligence should become possible, as computers “watch” video and crawl the web for information, figuring out for themselves what is common and what isn't.

All the while, software makers will have an easier time writing their programs, and there will be far less room for bugs that waste or misallocate RAM (or leak it over time, until your PC is bogged down enough to freeze), since managing that memory won't be the program's job anymore.