Chip design created for faster parallel programming

Researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a chip design called Swarm, which is designed to make parallel programs not only more efficient but also easier to write.

Courtesy: Christine Daniloff, Massachusetts Institute of Technology

Computer chips have stopped getting faster. For the past 10 years, chips' performance improvements have come from the addition of processing units known as cores. In theory, a program on a 64-core machine would be 64 times as fast as it would be on a single-core machine. It rarely works that way, though. Most computer programs are sequential, and splitting them up so that chunks of them can run in parallel causes all kinds of complications.

In simulations, the researchers compared Swarm versions of six common algorithms with the best existing parallel versions, which had been individually engineered by seasoned software developers. The Swarm versions were between three and 18 times as fast, but they generally required only one-tenth as much code—or even less. And in one case, Swarm achieved a 75-fold speedup on a program that computer scientists had so far failed to parallelize.

"Multicore systems are really hard to program," said Daniel Sanchez, an assistant professor in MIT's Department of Electrical Engineering and Computer Science, who led the project. "You have to explicitly divide the work that you're doing into tasks, and then you need to enforce some synchronization between tasks accessing shared data. What this architecture does, essentially, is to remove all sorts of explicit synchronization, to make parallel programming much easier. There's an especially hard set of applications that have resisted parallelization for many, many years, and those are the kinds of applications we've focused on in this paper."

Many of those applications involve the exploration of what computer scientists call graphs. A graph consists of nodes, typically depicted as circles, and edges, typically depicted as line segments connecting the nodes. Frequently, the edges have associated numbers called "weights," which might represent, say, the strength of correlations between data points in a data set, or the distances between cities.
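As a concrete illustration, a weighted graph is commonly stored as an adjacency list. The short Python sketch below uses made-up city names and distances purely for illustration; it is not data from the MIT study.

```python
# A weighted graph as an adjacency list: each node maps to a list of
# (neighbor, weight) pairs. Here the weights stand in for distances
# between cities; the names and numbers are invented for illustration.
graph = {
    "Boston":     [("Providence", 50), ("Worcester", 45)],
    "Providence": [("Boston", 50), ("Hartford", 90)],
    "Worcester":  [("Boston", 45), ("Hartford", 65)],
    "Hartford":   [("Providence", 90), ("Worcester", 65)],
}

# Walk every edge once to show the structure.
for city, edges in graph.items():
    for neighbor, distance in edges:
        print(f"{city} -> {neighbor}: {distance} miles")
```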

Graphs crop up in a wide range of computer science problems, but their most intuitive use may be to describe geographic relationships. Indeed, one of the algorithms that the CSAIL researchers evaluated is the standard algorithm for finding the fastest driving route between two points.

Setting priorities

In principle, exploring graphs would seem to be something that could be parallelized: Different cores could analyze different regions of a graph or different paths through the graph at the same time. The problem is that with most graph-exploring algorithms, it gradually becomes clear that whole regions of the graph are irrelevant to the problem at hand. If, right off the bat, cores are tasked with exploring those regions, their exertions end up being fruitless.

Fruitless analysis of irrelevant regions is a problem for sequential graph-exploring algorithms, too, not just parallel ones. So computer scientists have developed a host of application-specific techniques for prioritizing graph exploration. An algorithm might begin by exploring just those paths whose edges have the lowest weights, for instance, or it might look first at those nodes with the lowest number of edges.
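The fastest-route algorithm mentioned earlier, typically Dijkstra's algorithm, is a textbook example of this kind of prioritized exploration: it always expands the frontier node that currently looks cheapest, using a priority queue. The sketch below is a standard textbook version written against the small city graph shown above, not the code the researchers evaluated.

```python
import heapq

# Reuses the small illustrative city graph sketched earlier.
graph = {
    "Boston":     [("Providence", 50), ("Worcester", 45)],
    "Providence": [("Boston", 50), ("Hartford", 90)],
    "Worcester":  [("Boston", 45), ("Hartford", 65)],
    "Hartford":   [("Providence", 90), ("Worcester", 65)],
}

def shortest_distance(graph, source, target):
    """Textbook Dijkstra: always expand the lowest-cost frontier node first."""
    best = {source: 0}
    frontier = [(0, source)]                 # priority queue keyed by distance so far
    while frontier:
        dist, node = heapq.heappop(frontier)
        if node == target:
            return dist
        if dist > best.get(node, float("inf")):
            continue                         # stale queue entry; skip it
        for neighbor, weight in graph.get(node, []):
            candidate = dist + weight
            if candidate < best.get(neighbor, float("inf")):
                best[neighbor] = candidate
                heapq.heappush(frontier, (candidate, neighbor))
    return float("inf")                      # target unreachable

print(shortest_distance(graph, "Boston", "Hartford"))   # 110, via Worcester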

Swarm has extra circuitry built in for handling that type of prioritization. It time-stamps tasks according to their priorities and begins working on the highest-priority tasks in parallel. Higher-priority tasks may engender their own lower-priority tasks, but Swarm slots those into its queue of tasks automatically.
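A rough software analogy for that queueing behavior is a priority queue of time-stamped tasks in which running a task can enqueue children with later time stamps. The sketch below is only an analogy with invented names (Task queue, schedule, run); in Swarm the queueing, dispatch, and speculation are handled by dedicated hardware, and many tasks run in parallel rather than in a sequential loop.

```python
import heapq
import itertools

# Software analogy for Swarm's time-stamped task queue. This sequential loop
# only shows the ordering being preserved; the chip runs many tasks
# speculatively in parallel.

_counter = itertools.count()       # tie-breaker so equal time stamps compare cleanly
ready = []                         # min-heap of (timestamp, tie_breaker, work)

def schedule(timestamp, work):
    """Enqueue a task; a lower time stamp means a higher priority."""
    heapq.heappush(ready, (timestamp, next(_counter), work))

def run():
    while ready:
        timestamp, _, work = heapq.heappop(ready)
        for child_ts, child_work in work(timestamp):
            schedule(child_ts, child_work)   # children slot into the queue automatically

# Example: a task at time 1 spawns two later children, which spawn nothing.
def leaf(ts):
    print(f"running leaf task at time {ts}")
    return []

def root(ts):
    print(f"running root task at time {ts}")
    return [(ts + 1, leaf), (ts + 2, leaf)]

schedule(1, root)
run()
```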

Occasionally, tasks running in parallel may come into conflict. For instance, a task with a lower priority may write data to a particular memory location before a higher-priority task has read the same location. In those cases, Swarm automatically backs out the results of the lower-priority tasks. It thus maintains the synchronization between cores accessing the same data that programmers previously had to worry about themselves.
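In software terms, that rollback resembles a speculative task keeping an undo log of its writes and restoring the old values when a conflict with an earlier-time-stamped task is detected. The sketch below is a deliberately simplified analogy with invented names; Swarm performs the equivalent bookkeeping in hardware.

```python
# Simplified analogy of backing out a lower-priority task's speculative writes.
memory = {"x": 0}

class SpeculativeTask:
    def __init__(self, timestamp):
        self.timestamp = timestamp
        self.undo_log = []                   # (address, old_value) pairs

    def write(self, addr, value):
        self.undo_log.append((addr, memory[addr]))
        memory[addr] = value

    def abort(self):
        # Restore every location this task touched, newest write first.
        for addr, old in reversed(self.undo_log):
            memory[addr] = old
        self.undo_log.clear()

late = SpeculativeTask(timestamp=20)
late.write("x", 99)                          # lower-priority task writes speculatively

early_timestamp = 10                         # a higher-priority task now reads "x"
if late.timestamp > early_timestamp:         # conflict: the read must not see the late write
    late.abort()

assert memory["x"] == 0                      # the earlier task observes the original value
```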

Keeping tabs

The hard work falls to the chip itself, which Sanchez designed in collaboration with Mark Jeffrey and Suvinay Subramanian, both MIT graduate students in electrical engineering and computer science; Cong Yan, who did her master's as a member of Sanchez's group and is now a PhD student at the University of Washington; and Joel Emer, a professor of the practice in MIT's Department of Electrical Engineering and Computer Science, and a senior distinguished research scientist at the chip manufacturer Nvidia.

The Swarm chip has extra circuitry to store and manage its queue of tasks. It also has a circuit that records the memory addresses of all the data its cores are currently working on. That circuit implements something called a Bloom filter, which crams data into a fixed allotment of space and answers yes/no questions about its contents. If too many addresses are loaded into the filter, it will occasionally yield false positives—indicating "yes, I'm storing that address"—but it will not yield false negatives.
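For readers unfamiliar with the data structure, a Bloom filter can be sketched in a few lines: a fixed-size bit array plus a handful of hash functions, answering either "possibly present" or "definitely absent." The sizes and hashing scheme below are illustrative only; a hardware filter like Swarm's would be sized and hashed quite differently.

```python
import hashlib

# Minimal Bloom filter over memory addresses: a small bit array and two
# hash-derived positions per address. Illustrative parameters only.
SIZE = 64
bits = [False] * SIZE

def _positions(address):
    digest = hashlib.sha256(str(address).encode()).digest()
    return [digest[0] % SIZE, digest[1] % SIZE]

def insert(address):
    for p in _positions(address):
        bits[p] = True

def might_contain(address):
    # False means "definitely not stored"; True means "possibly stored"
    # (false positives grow more likely as the filter fills up).
    return all(bits[p] for p in _positions(address))

insert(0x1000)
print(might_contain(0x1000))   # True: a Bloom filter never yields a false negative
print(might_contain(0x2000))   # usually False; occasionally a false positive
```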

The Bloom filter is one of several circuits that help Swarm identify memory access conflicts. The researchers were able to show that time-stamping makes synchronization between cores easier to enforce. For instance, each data item is labeled with the time stamp of the last task that updated it, so tasks with later time-stamps know they can read that data without bothering to determine who else is using it.
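That time-stamp check can be paraphrased in a few lines of code: each data item remembers the time stamp of its last writer, and a reader whose own time stamp is later knows the value already lies in its logical past. The names below are invented for the sketch, and it glosses over the speculative corner cases the hardware also has to handle.

```python
# Sketch of the last-writer time stamp check; names are illustrative.
class DataItem:
    def __init__(self, value, last_writer_ts):
        self.value = value
        self.last_writer_ts = last_writer_ts   # time stamp of the task that last updated it

def can_read_without_coordination(item, reader_ts):
    # A task whose time stamp is later than the last writer's knows the value
    # it sees is already in its past, so no cross-core check is needed.
    return reader_ts > item.last_writer_ts

item = DataItem(value=42, last_writer_ts=7)
print(can_read_without_coordination(item, reader_ts=12))  # True
print(can_read_without_coordination(item, reader_ts=3))   # False: must coordinate
```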

Finally, all the cores occasionally report the time stamps of the highest-priority tasks they're still executing. If a core has finished tasks that have earlier time stamps than any of those reported by its fellows, it knows it can write its results to memory without courting any conflicts.
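That commit rule can be illustrated as a simple check against the time stamps the other cores have reported: a finished task may write its results back once its time stamp is earlier than everything still in flight. As with the earlier sketches, this is an analogy with invented names, not the chip's actual protocol.

```python
# Each core periodically publishes the earliest time stamp it is still executing.
still_executing = {"core0": 15, "core1": 22, "core2": 18}   # illustrative reports

def can_commit(finished_task_ts, reports):
    # A finished task may write results to memory once its time stamp is
    # earlier than every task any other core is still working on.
    return all(finished_task_ts < ts for ts in reports.values())

print(can_commit(12, still_executing))   # True: no earlier task can conflict with it
print(can_commit(20, still_executing))   # False: core0 might still cause a conflict
```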

Massachusetts Institute of Technology (MIT) 

- Edited by Chris Vavra, production editor, Control Engineering, CFE Media. See more Control Engineering energy and power stories.
