AI and Machine Learning

Energy-efficient memory developed to spur new computer applications

Researchers at Stanford University have developed spin-orbit torque technology designed for data-intensive computing applications like artificial intelligence and machine learning (ML).
By Andrew Myers November 19, 2018

The computers that run artificial intelligence (AI), machine learning (ML), and other advanced applications require computational assistance from small, fast, energy-efficient memory chips located near, or even inside, the CPU.

These memory chips must be programmable and work very quickly to keep up with the CPU.

In this role, today’s computers typically use static random-access memory (SRAM), a technology that is fast and compact but has the drawback of being power-hungry.

However, a team led by Stanford professor Shan Wang has developed an alternative technology that is just as fast as SRAM, even more compact, and far more energy efficient. The technology is called SOT-MRAM, short for spin-orbit torque magnetoresistive random-access memory.

Wang’s approach combines MRAM—a new type of “spintronic” memory just now being commercialized—and SOT circuitry. The SOT circuitry is based on the interaction between an ultrathin wire of tantalum—a heavy metal common in electronic devices—and a nanosized stack of other metals and insulators.

By sending small jolts of electricity through the tantalum, the magnetic components in the stack can be switched to face upward or downward. In computer terms, they can be programmed as the electrical representations of the ones and zeroes needed to store digital data. Better still, SOT-MRAM is what engineers call a non-volatile form of memory, meaning it retains data even when power to the device is turned off.
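As a rough illustration of the behavior described above, here is a toy Python sketch of a memory cell whose stored bit is set by the polarity of a current pulse and retained without power. The class, threshold, and units are invented for illustration; this is not a device simulation:

```python
class ToySotCell:
    """Toy model of a spin-orbit torque memory cell (illustrative only).

    A positive current pulse through the "tantalum channel" flips the
    magnetization up (logical 1); a negative pulse flips it down (0).
    The stored state needs no refresh, mimicking non-volatility.
    """

    def __init__(self):
        self.magnetization = "down"  # arbitrary initial state, stores a 0

    def write(self, current_ma):
        # Pulse polarity, not magnitude, selects the stored bit.
        self.magnetization = "up" if current_ma > 0 else "down"

    def read(self):
        return 1 if self.magnetization == "up" else 0


cell = ToySotCell()
cell.write(+0.1)        # small positive jolt stores a 1
assert cell.read() == 1
# "Power off" here: no refresh occurs, yet the state persists.
cell.write(-0.1)        # negative jolt stores a 0
assert cell.read() == 0
```

The contrast with SRAM is that an SRAM cell loses its contents the moment supply power is removed, whereas the magnetic state sketched here persists.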

SOT technology has excellent electrical efficiency, but to replace SRAM, it must also be small. Wang’s team achieved the necessary compactness by engineering the thickness of the various layers in the device to make it writable using just two electrical terminals. Previously, SOT-MRAM was thought to require three terminals.

Speed is the final requirement. Existing MRAM technologies, which take 10 to 20 nanoseconds to write a zero or a one, are too slow for data-intensive computing applications. Wang’s research shows SOT-MRAM can write a zero or a one in a nanosecond or less. Crossing this sub-nanosecond threshold will allow SOT-MRAM to keep pace with today’s ever-faster CPUs, deep-learning neural networks, and massive databases.
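Using only the figures quoted above, a quick back-of-the-envelope comparison shows what that latency gap means for a single cell written back-to-back (the midpoint and rounding choices below are assumptions, not measured values):

```python
# Write times quoted in the article, in seconds per bit.
existing_mram_write_s = 15e-9  # midpoint of the 10-20 ns range (assumed)
sot_mram_write_s = 1e-9        # "a nanosecond or less", rounded up

# Maximum back-to-back write rate for one cell:
existing_rate = 1 / existing_mram_write_s  # about 67 million writes/s
sot_rate = 1 / sot_mram_write_s            # 1 billion writes/s

speedup = sot_rate / existing_rate
print(f"Per-write speedup: ~{speedup:.0f}x")  # ~15x at these assumptions
```

Even at the fast end of the existing 10-nanosecond range, the sub-nanosecond write would still be a roughly tenfold improvement.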

Wang said the current experiments are in the very early stages and SOT technology is years away from commercialization. But, he said, it already outperforms a competing form of MRAM known as spin-transfer torque (STT), which has been widely commercialized.

The next challenge is to develop manufacturing techniques and more potent SOT materials so that SOT-MRAM can be produced at the volumes needed to make the technology commercially viable. That’s no easy task.

“If you don’t get the metal layers exactly right, you destroy the device,” he said.

Still, Wang believes those manufacturing challenges are solvable and that SOT-MRAM’s size, speed, and energy efficiency will make spin-orbit torque memory attractive to industry.

“There are a lot of new ideas pouring out in this area,” he said. “I’m very optimistic this memory will be commonly used in numerous widgets.”

Andrew Myers, Stanford University.


