PCI Express 4.0 Runs at 16 GT/s, Twice as Fast as PCI Express 3.0

Will cost as much as current implementations but will be better all around

November 30th, 2011 21:31 GMT

There is a phrase roaming the web, “awesome but impractical,” and it may very well be the epithet best suited to the newly announced PCI Express 4.0 specification.

Normally, any technological leap would be welcomed with open arms but, in the case of PCI Express 4.0, it might be a blessing that it won't be available for years to come.

This is because PCI Express 3.0 has only just arrived, and it isn't even certain that it will go mainstream next year, not with Intel's Sandy Bridge-E chips being only 'almost' 3.0-compatible.

There is also the matter that the existing specification, PCI Express 2.0, already offers enough bandwidth for all hardware revealed so far.

As such, even though PCI Express 3.0 will be twice as fast as 2.0, it won't really be relevant, especially at first.

By extension, PCI Express 4.0, which is twice as fast as 3.0, wouldn't actually do anything in current systems.

In other words, it is farther ahead on the development axis than the rest of the IT industry, which is why “awesome but impractical” could be said to fit it so well.

Then again, 4.0 is expected to be just as cheap to manufacture and implement as 2.0, and more power-efficient as well, once the specification is finalized in 2014.

By then, graphics cards or whatever else may actually have what it takes to put those 16 GT/s to work.
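To put that 16 GT/s figure in perspective, each PCIe generation doubles the per-lane signaling rate, and the published specs pair 8b/10b encoding with generations 1.0 and 2.0 and the leaner 128b/130b encoding with 3.0 and 4.0. A quick back-of-the-envelope sketch of the effective one-direction throughput per generation (the formula and figures here are the standard published ones, not anything from the announcement itself):

```python
# Effective one-direction payload throughput per PCIe generation.
# Each entry: (signaling rate in GT/s, encoding efficiency).
generations = {
    "PCIe 1.0": (2.5, 8 / 10),      # 8b/10b encoding: 20% overhead
    "PCIe 2.0": (5.0, 8 / 10),
    "PCIe 3.0": (8.0, 128 / 130),   # 128b/130b encoding: ~1.5% overhead
    "PCIe 4.0": (16.0, 128 / 130),
}

def lane_gbps(rate_gt, efficiency):
    """Payload throughput of a single lane in GB/s (one direction)."""
    return rate_gt * efficiency / 8  # divide by 8 bits per byte

for name, (rate, eff) in generations.items():
    per_lane = lane_gbps(rate, eff)
    print(f"{name}: {per_lane:.3f} GB/s per lane, "
          f"{16 * per_lane:.1f} GB/s on an x16 slot")
```

On this arithmetic, a PCIe 4.0 x16 slot would move roughly 31.5 GB/s in each direction, double what 3.0 delivers, which is exactly the kind of headroom no 2011-era graphics card comes close to needing.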

“The PCI Express architecture has become the de facto I/O technology within the industry, in large part due to PCI-SIG’s dedication to I/O innovation and the insight of those who defined earlier versions in such an extensible manner,” said Nathan Brookwood, research fellow at Insight 64.

“Like its predecessors, the PCIe 4.0 architecture is well positioned to preserve the industry's investments in earlier generations of PCI Express specifications while extending the technology in a manner that enables new applications and usage models.”
