New architecture improves parallel computing

Jul 5, 2016 10:50 GMT  ·  By

A team of scientists from the Massachusetts Institute of Technology (MIT) has developed Swarm, a new multi-core CPU architecture that can yield higher speeds by using a new system for splitting processing tasks among its cores.

Swarm is designed as a 64-core CPU, which, in theory, should be 64 times faster than a single-core CPU. Unfortunately, like most multi-core CPUs, it falls short of that ideal.

The problem lies in the fact that applications running on multi-core CPUs need to have their source code adapted: the work must be split into tasks, and the tasks classified by priority to avoid data-overwrite conflicts. This operation is time-consuming and relies on human labor, which is often imperfect.
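For illustration only (this is a generic sketch, not Swarm's interface), here is the kind of manual work the paragraph above describes on a conventional multi-core system: the programmer splits a word-counting job into tasks by hand and guards the shared result with an explicit lock.

```python
import threading
from collections import Counter

# Hypothetical workload: count word frequencies across text chunks.
chunks = [
    "swarm cores run tasks",
    "tasks share data",
    "cores run in parallel",
]

totals = Counter()          # shared data
lock = threading.Lock()     # manual synchronization, written by hand

def count_chunk(chunk):
    local = Counter(chunk.split())  # work on private data first
    with lock:                      # then merge under the lock
        totals.update(local)        # avoids data-overwrite conflicts

# The programmer also decides how work maps onto threads.
threads = [threading.Thread(target=count_chunk, args=(c,)) for c in chunks]
for t in threads:
    t.start()
for t in threads:
    t.join()

# totals["cores"] == 2, totals["tasks"] == 2
```

Forgetting the lock, or splitting the work badly, produces exactly the subtle bugs and wasted effort the article attributes to human labor.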

Swarm multi-core CPUs have extra circuitry

The new Swarm system comes with special circuitry that classifies tasks using timestamps and runs them in parallel, starting with the highest-priority ones.

Swarm avoids data storage conflicts when two or more tasks want to write to the same memory location: additional circuitry backs up the memory data, lets the highest-priority task run first, and then restores the data for the lower-priority task.
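The backup-and-restore idea can be sketched in a few lines. This is a toy, sequential model of the mechanism the article describes, not MIT's actual circuitry: each task carries a timestamp (lower timestamp means higher priority), memory is backed up before tasks run speculatively, and if two tasks that write the same location ran out of order, the state is restored and the tasks replay in timestamp order.

```python
# Toy model of timestamp-ordered conflict resolution (illustrative only).
memory = {"balance": 100}

def deposit(mem):            # timestamp 1: should commit first
    mem["balance"] += 50

def halve(mem):              # timestamp 2: should commit second
    mem["balance"] //= 2

tasks = [(2, halve), (1, deposit)]   # speculative (wrong) execution order

backup = dict(memory)                # back up memory before running
for _, task in tasks:
    task(memory)                     # both tasks write "balance": conflict

# Conflict detected: restore the backup and replay in timestamp order,
# letting the highest-priority (earliest-timestamp) task run first.
memory = dict(backup)
for _, task in sorted(tasks):
    task(memory)

# memory["balance"] is now (100 + 50) // 2 == 75
```

Real hardware does the backup, conflict detection, and replay transparently, which is what lets programmers skip the explicit synchronization.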

During tests, MIT's Swarm achieved speed-ups of 3 to 18 times over equivalent programs on classic multi-core CPUs. The programs that ran on the Swarm architecture also required a tenth or less of the code changes needed to adapt software for classic multi-core CPUs.

Scientists managed to run a previously unportable application on Swarm

MIT says the new Swarm architecture even achieved a 75-fold speed-up for an application that couldn't be ported to the classic multi-core platform.

Swarm's secret lies in using graphs to classify and prioritize tasks before running the parallel computing operations. All of this is automated, removing the human factor from the process.
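The article doesn't detail Swarm's graph scheme, but one plausible way to picture it (purely illustrative; the task names below are made up) is a dependency graph in which an edge from A to B means A must commit before B, with timestamps derived automatically from a topological order.

```python
from graphlib import TopologicalSorter

# Hypothetical task-dependency graph: each task maps to the set of
# tasks that must commit before it.
deps = {
    "load":  set(),
    "parse": {"load"},
    "index": {"parse"},
    "query": {"parse", "index"},
}

# Assign timestamps from a topological order, with no programmer input.
order = list(TopologicalSorter(deps).static_order())
timestamps = {task: i for i, task in enumerate(order)}

# timestamps["load"] == 0, timestamps["query"] == 3
```

Once every task has a timestamp, the hardware mechanism described earlier can run tasks in parallel while still committing their results in priority order.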

"Multicore systems are really hard to program," says Daniel Sanchez, Swarm project lead and an assistant professor in MIT’s Department of Electrical Engineering and Computer Science. "You have to explicitly divide the work that you’re doing into tasks, and then you need to enforce some synchronization between tasks accessing shared data. What this architecture does, essentially, is to remove all sorts of explicit synchronization, to make parallel programming much easier."