Analog Computing

As data centers hit an energy bottleneck, analog chips and in-memory computing offer a low-power alternative

Training and running large AI models demands massive computational resources, and the GPUs doing the heavy lifting are serious energy sinks. Data centers are expanding at a staggering pace, putting real pressure on power grids and raising hard questions about whether scaling AI indefinitely on current hardware is even sustainable. But a different approach is gaining traction: analog computing, which takes cues from the human brain itself.

Instead of the conventional digital method of processing billions of ones and zeros, analog chips work with continuous physical signals — voltages, currents, and other actual electrical properties. A growing wave of startups and research labs is making the case that this older, largely abandoned computing paradigm could handle AI inference at a tiny fraction of the energy cost we’re paying today. It’s still early-stage technology and nobody’s calling it a silver bullet, but the potential is serious enough to draw real interest from chipmakers, AI researchers, and the edge computing world.

The energy bottleneck in digital AI

At the heart of modern AI workloads sits matrix multiplication, the fundamental mathematical operation powering neural networks. Digital GPUs tackle this by moving enormous volumes of data back and forth between memory and processor, billions of times over during a single inference or training run. All that data movement is where the real energy goes, often far exceeding the cost of the actual math being performed.

The underlying issue traces back to what computer scientists call the “Von Neumann bottleneck,” a structural limitation baked into virtually all computing architecture since the mid-20th century. Processing and memory live in physically separate locations under this model, so every single calculation requires fetching data from memory, running it through the processor, and writing it back. When you’re dealing with neural networks that have millions or billions of parameters, you end up with a staggering number of data transfers — plus a staggering amount of energy burned in the process. All information gets represented as discrete binary digits, which means constant conversion and movement are unavoidable. Not only that, but as AI models get bigger and more complex, this energy cost scales right alongside them. It’s a bottleneck that improvements in transistor density alone simply can’t fix.
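A back-of-envelope sketch makes the scale of that data movement concrete. The only assumption here is that every weight must cross the memory-processor boundary at least once per forward pass; real traffic is higher once activations and repeated fetches are counted, and the model size is just an illustrative round number:

```python
def inference_memory_traffic(n_params, bytes_per_param=2):
    """Rough lower bound on bytes moved per forward pass when every
    weight must be fetched from memory at least once (the Von Neumann
    pattern). Actual traffic is higher: activations, gradients during
    training, and cache misses all add to it."""
    return n_params * bytes_per_param

# A 7-billion-parameter model stored in 16-bit precision moves at least
# ~14 GB of weights out of memory on every single inference pass.
print(inference_memory_traffic(7_000_000_000) / 1e9, "GB")  # 14.0 GB
```

Even this lower bound, repeated for every token generated, is why data movement rather than arithmetic dominates the energy budget.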

How analog computing architecture works

Analog computing approaches the problem from a completely different direction. Rather than encoding everything as discrete ones and zeros, analog circuits work with continuous physical quantities, like voltages, currents, and electrical charges that exist on a spectrum rather than in binary states. This maps more naturally to how the physical world actually operates, and it’s notably closer to how biological neurons function. The brain doesn’t run on binary logic gates. It processes continuous electrical and chemical signals across billions of synaptic connections. Analog AI chips borrow from this principle, at least conceptually.

The real architectural breakthrough here is in-memory compute. Instead of parking neural network parameters in separate memory chips and fetching them repeatedly during processing, analog AI accelerators encode those parameters directly into the physical properties of on-chip memory elements. An input signal passes through a network of resistors whose resistance values represent the network’s weights, and the resulting currents and voltages naturally perform the matrix multiplication that neural networks need. Basic resistor networks and operational amplifier circuits can execute these calculations in a single step, skipping the billions of read-write-compute cycles that digital processors depend on. Computation happens right where the data lives, which effectively eliminates the Von Neumann bottleneck by design.
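Why a resistor network "naturally" computes a matrix product can be shown in a few lines of NumPy. In this idealized model, each crosspoint conducts a current I = V × G (Ohm's law), and the currents merging on each column wire sum by Kirchhoff's current law, which is exactly a matrix-vector product. This is a sketch of the principle only; it ignores noise, wire resistance, and device non-idealities:

```python
import numpy as np

def crossbar_matvec(voltages, conductances):
    """Model an ideal resistive crossbar: input voltages drive the rows,
    each crosspoint passes a current I = V * G (Ohm's law), and the
    currents merging on each column wire (Kirchhoff's current law)
    form the output -- the accumulate step happens for free."""
    currents = voltages[:, None] * conductances  # per-crosspoint currents
    return currents.sum(axis=0)                  # column wires sum passively

# Neural network weights stored as conductances (siemens),
# inputs encoded as voltages (volts).
G = np.array([[0.5, 1.0],
              [2.0, 0.1],
              [0.3, 0.7]])
V = np.array([1.0, 0.5, 2.0])

print(crossbar_matvec(V, G))                       # same result as V @ G
print(np.allclose(crossbar_matvec(V, G), V @ G))   # True
```

The physics performs the multiply-accumulate in one step, with no instruction fetch, no memory round trip, and no clocked transistor switching on the critical path.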

Advantages in speed and efficiency

The most compelling case for analog AI hardware comes down to power consumption — or rather, the dramatic reduction of it. The core computations happen passively, driven by the physics of electrical resistance and current flow rather than billions of transistor switching events. The result is that analog chips can handle neural network inference using a fraction of the energy that equivalent digital processors demand. For applications where every milliwatt matters, that’s a big deal.

Speed is a natural byproduct of the same architectural shift. When you strip away the repeated memory access cycles that dominate digital processing, you get results with minimal latency. Input signals move through the circuit and produce outputs almost instantaneously in computational terms, which makes analog processors a strong fit for anything requiring real-time responses — whether it be autonomous vehicles making split-second decisions or robotic systems reacting to their environment. Put the low power and low latency together and you get hardware that’s particularly well-suited for edge devices, where computation runs locally rather than in some distant data center. Battery-powered sensors, mobile devices, drones, and embedded systems all stand to gain from hardware that can run meaningful AI inference without killing a battery or needing active cooling.

Who’s who in analog computing

It’s worth noting that the broader analog integrated circuit market is already huge and well-established, sitting at roughly $90 billion. Analog chips have been doing heavy lifting for years in signal processing, power management, and sensor interfaces. What’s newer is the application of analog principles specifically to AI acceleration.

On the startup side, Mythic is probably the most visible company pushing analog AI forward. Their M1076 analog AI accelerator is built on in-memory compute technology and purpose-built for edge inference. Mythic’s approach stores neural network weights in flash memory cells and uses analog computation to process them, targeting smart cameras, drones, and other devices that need local intelligence without cloud connectivity. From the research angle, IBM has been doing significant work exploring analog AI through phase-change memory (PCM) devices. Their research centers on leveraging the variable resistance states of PCM materials to both store and compute neural network weights, which could open a path to more scalable and manufacturable analog AI hardware down the road. The space is still relatively young when it comes to purpose-built analog AI products, but investment and momentum are clearly building.

Limitations

For all the potential, analog AI computing has some pretty significant limitations. The most fundamental one is precision. Analog circuits are inherently vulnerable to electronic noise and component variability, like manufacturing differences, temperature shifts, and signal degradation, which all introduce errors that compound across a computation. For straightforward pattern recognition and basic inference, this imprecision is workable. But the high-precision nonlinear activation functions that complex neural networks demand require larger, more elaborate circuitry, and that starts eating into the very efficiency gains that make analog compelling in the first place. This precision bottleneck is a core constraint for applying analog computing to advanced AI systems, and it’s a big reason why nobody’s running large language models on analog chips.
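The precision problem can be illustrated with a toy model: perturb each stored conductance with Gaussian noise, a crude stand-in for manufacturing variation and drift rather than a calibrated device model, and watch the output error of the ideal crossbar grow with the noise level:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_analog_matvec(v, g, noise_std):
    """Ideal crossbar matrix-vector product, but with each stored
    conductance perturbed by relative Gaussian noise -- a crude model
    of device variability, not any specific technology."""
    noisy_g = g * (1 + rng.normal(0.0, noise_std, g.shape))
    return v @ noisy_g

v = rng.uniform(0, 1, 64)
g = rng.uniform(0, 1, (64, 16))
exact = v @ g

for noise in (0.01, 0.05, 0.20):
    approx = noisy_analog_matvec(v, g, noise)
    rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
    print(f"{noise:.0%} device noise -> {rel_err:.2%} output error")
```

In a deep network these per-layer errors compound, which is why modest device variability that is tolerable for simple classification becomes a hard wall for high-precision workloads.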

Programmability is another major challenge. Digital chips can be reprogrammed through software to handle completely different tasks — the same GPU can train an image classifier one day and run a language model the next. Analog processors, on the other hand, tend to have specific calculations physically embedded in their structure. Changing what an analog chip does often means redesigning the hardware, not just pushing a software update. That rigidity makes analog chips a tough sell for general-purpose AI development.

Then there’s scaling. Today’s analog AI implementations work best for relatively simple, well-defined inference tasks at the edge. Pushing the technology into large-scale, complex workloads is still an open engineering problem. It’s also important to flag that current analog AI accelerators are built exclusively for inference, not training. They can run a pre-trained neural network efficiently, but they can’t train one from scratch. So for the foreseeable future, analog hardware is a complement to digital systems, not a replacement.

A hybrid approach

Given these constraints, most of the development energy right now is flowing toward hybrid analog-digital architectures rather than purely analog solutions. This mixed-signal approach tries to capture the best of both worlds, though it comes with its own design complexity.
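A mixed-signal pipeline can be sketched as digital inputs passing through a DAC, an ideal analog matrix multiply, and an ADC back into the digital domain. The 8-bit widths and signal ranges below are illustrative assumptions, not tied to any particular chip:

```python
import numpy as np

def quantize(x, bits, full_scale):
    """Model a DAC or ADC stage: clip to [-full_scale, full_scale] and
    snap to one of 2**bits uniformly spaced levels."""
    levels = 2 ** bits - 1
    clipped = np.clip(x, -full_scale, full_scale)
    codes = np.round((clipped + full_scale) / (2 * full_scale) * levels)
    return codes / levels * (2 * full_scale) - full_scale

def hybrid_matvec(v, g, dac_bits=8, adc_bits=8):
    """Digital input -> DAC -> ideal analog crossbar -> ADC -> digital."""
    v_analog = quantize(v, dac_bits, 1.0)  # DAC: inputs assumed in [-1, 1]
    i_out = v_analog @ g                   # analog multiply-accumulate
    return quantize(i_out, adc_bits, float(np.abs(i_out).max()))  # ADC readout

rng = np.random.default_rng(1)
v = rng.uniform(-1, 1, 32)
g = rng.uniform(-0.1, 0.1, (32, 8))
error = np.max(np.abs(hybrid_matvec(v, g) - v @ g))
print(f"worst-case error after 8-bit conversion: {error:.4f}")
```

The converters are where the design complexity lives: every DAC and ADC stage costs power and area, and those conversion costs are exactly what hybrid designs must keep from swamping the analog core's efficiency gains.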

It’s also worth drawing a line between analog computing and neuromorphic computing, which are related but distinct. Neuromorphic chips try to more directly replicate the brain’s architecture using spiking neural networks and event-driven processing, where artificial neurons fire only when they hit a threshold, much like biological neurons. Analog circuits can enable neuromorphic designs, but analog AI accelerators also work with conventional neural network architectures without adopting the spiking model. The two fields overlap, but they aren’t the same thing.
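The spiking behavior described above can be illustrated with a minimal leaky integrate-and-fire neuron, a textbook simplification rather than any specific chip’s neuron model:

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: membrane potential accumulates
    input, decays by a leak factor each step, and emits a spike (then
    resets) only when it crosses the threshold -- the event-driven
    behavior neuromorphic hardware exploits to stay idle between spikes."""
    potential, spikes = 0.0, []
    for x in inputs:
        potential = potential * leak + x
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

# Weak inputs must accumulate before the neuron fires; strong inputs
# trigger a spike quickly.
print(lif_neuron([0.3, 0.3, 0.3, 0.3, 0.0, 0.9, 0.9]))  # [0, 0, 0, 1, 0, 0, 1]
```

Note the distinction this makes concrete: the computation is event-driven (output only at threshold crossings), whereas the analog crossbar accelerators discussed earlier evaluate conventional neural network layers continuously.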

Advances in fabrication, like in mixed-signal integrated circuits and resistive RAM (ReRAM), are steadily making analog AI hardware more viable and cost-effective. Tighter manufacturing tolerances and better integration of analog components alongside digital logic on the same chip are becoming more achievable. That said, there’s a genuine debate in the field about whether fully analog solutions can ever meet the demands of complex AI tasks, or whether hybrid designs are the inevitable endpoint. Some researchers argue that the precision-efficiency trade-off is a fundamental property of analog physics that will never be fully resolved for demanding applications, which would mean hybrid architectures aren’t a transitional phase but the permanent destination. Whether analog computing becomes a mainstream pillar of AI infrastructure or stays a specialized tool for specific edge use cases may ultimately come down to how that debate plays out over the next several years.
