New AI Circuitry That Mimics Human Brains Makes Models Smarter

A new kind of transistor allows AI hardware to remember and process information more like the human brain does

Artist’s concept of a brain-shaped circuit board and a glowing blue brain overlaid on a humanoid robot figure.

Artificial intelligence and human thought both run on electricity, but that’s about where the physical similarities end. AI’s output arises from silicon and metal circuitry; human cognition arises from a mass of living tissue. And the architectures of these systems are fundamentally different, too. Conventional computers store and process information in separate parts of the hardware, shuttling data back and forth between memory and microprocessor. The human brain, on the other hand, entangles memory with processing, which makes it far more efficient.

Computers’ relative inefficiency makes running AI models extremely costly. Data centers, the facilities that house computing machines and hardware, account for 1 to 1.5 percent of global electricity use, and by 2027 new AI server units could consume at least 85.4 terawatt-hours annually, more than many small countries use in a year. The brain is “way more efficient,” says Mark Hersam, a chemist and engineer at Northwestern University. “We’ve been trying for years to come up with devices and materials that can better mimic how the brain does computation,” he says. Such brain-inspired hardware is known as neuromorphic computing.

Now Hersam and his colleagues have taken a crucial early step toward this goal by redesigning one of electronic circuitry’s most fundamental building blocks, the transistor, to function more like a neuron. Transistors, the minuscule, switchlike devices that control and generate electrical signals, are like the nerve cells of a computer chip and underlie nearly all modern electronics. The new type of transistor, called a moiré synaptic transistor, integrates memory with processing to use less energy. As described in a study published in Nature last December, the resulting circuits improve energy efficiency and could allow AI systems to move beyond simple pattern recognition into decision-making that more closely resembles human cognition.


To incorporate memory directly into how transistors function, Hersam’s team turned to two-dimensional materials: sheets just atoms thick that, when layered on top of one another at different orientations, form mesmerizing, kaleidoscopelike patterns called moiré superstructures. These highly customizable patterns let scientists precisely control how an electrical current flows through the materials. The materials’ quantum properties also give rise to an electronic state that can store data without a continuous power supply.

While other moiré transistors exist, they have worked only at extremely low temperatures. The new device operates at room temperature and uses one twentieth as much energy as other kinds of synaptic devices.

And though experts have not yet fully tested the speed of these transistors, the integrated design of systems built from them suggests they should be faster and more energy-efficient than traditional computing architectures, the researchers say. Still, the methods used to produce the new synaptic transistors are not yet scalable, and further research into manufacturing will be necessary to realize the circuitry’s full potential.

Tsu-Jae King Liu, who studies electrical engineering at the University of California, Berkeley, and was not involved in the new work, says using moiré devices to achieve synapselike circuitry is a novel approach. “The proof-of-concept demonstration at room temperature shows that this concept is worthwhile to investigate further for potential implementation in neuromorphic computing systems,” Liu says.

Though future brainlike circuits could be used to increase efficiency in many computing applications, Hersam and his colleagues have focused mostly on AI because of its massive energy consumption. Because the hardware integrates memory with processing, the circuitry can also make AI models more brainlike at a higher level. The transistor can “learn” from the data it processes, the researchers say. Eventually it can establish connections between different inputs, recognize patterns and then make associations, similar to how the human brain forms memories and links between concepts, a capability called associative learning.
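As a rough software analogy, and not a description of the device’s actual mechanism, the sketch below implements a minimal Hebbian associative memory: connections between inputs that occur together are strengthened, and a stored pattern can then be recalled from a partial, corrupted cue. Everything here, from the pattern size to the recall rule, is an illustrative assumption.

```python
import numpy as np

# Minimal Hebbian associative memory (a rough analogy, not the moiré
# device's actual mechanism). Inputs that occur together get stronger
# connections; a stored pattern can then be recalled from a noisy cue.
rng = np.random.default_rng(0)
n = 16
pattern = rng.choice([-1, 1], size=n)       # the "memory" to store

W = np.outer(pattern, pattern) / n          # Hebbian rule: bind co-active units
np.fill_diagonal(W, 0)                      # no self-connections

cue = pattern.copy()
cue[: n // 2] = rng.choice([-1, 1], size=n // 2)  # corrupt half the cue

recalled = np.sign(W @ cue)                 # one associative-recall step
print(f"bits recovered: {(recalled == pattern).mean():.0%}")
```

In neuromorphic hardware, the rough analog of the weight matrix would live in the devices themselves rather than being shuttled to and from separate memory.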

Current AI models can go beyond finding and regurgitating common patterns and into associative learning. But their separate memory and processing components make this computationally challenging, and they often struggle to differentiate signal from noise in their data, Hersam explains.

AI models that can’t do associative learning might take two strings of numbers, such as 111 and 000, and report that they are completely dissimilar. But a model with this higher-level reasoning—such as one run on the new brainlike circuitry—would report that they are similar because they are both three of the same number in a row.
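As a loose illustration in software, the toy comparison below contrasts the two judgments: a position-by-position match rates 111 and 000 as completely dissimilar, while a comparison of a higher-level feature, here whether a string is a run of one repeated digit, rates them as alike. The function names and the feature chosen are assumptions for illustration, not anything from the study.

```python
# Toy contrast between position-by-position matching and a higher-level,
# associative judgment. The feature ("run of one repeated digit") is an
# illustrative assumption, not the study's representation.

def positional_similarity(a: str, b: str) -> float:
    """Fraction of positions where the two strings agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def structural_similarity(a: str, b: str) -> float:
    """Do both strings share the feature 'all digits identical'?"""
    def is_run(s: str) -> bool:
        return len(set(s)) == 1
    return 1.0 if is_run(a) == is_run(b) else 0.0

print(positional_similarity("111", "000"))  # 0.0 -> completely dissimilar
print(structural_similarity("111", "000"))  # 1.0 -> alike: both are runs
```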

“That’s easy for us to do as humans but very difficult for traditional AI to do,” Hersam says. This extrapolative ability can be useful for applications such as self-driving vehicles piloted by artificial intelligence, he says. Noisy data from bad road conditions or poor visibility can interfere with an AI pilot’s decisions—and lead to results such as the AI mistaking someone crossing the road for a plastic bag. “Your current AI just does not [navigate] that well,” Hersam adds. “But this device, at least in principle, would be more effective at those types of tasks.”