Computing with Oscillators
We developed this circuit to realize the weighted Kuramoto model using coupled electronic oscillators. It is designed for combinatorial optimization and can be understood as an oscillator-based Ising machine (OIM).


The circuit contains 8 LC oscillators (top board) connected through a programmable resistive network (bottom board). The system forms a fully-connected graph, where each node is an LC oscillator and each edge is a tunable resistor. The phase of each oscillator $\theta_i$ corresponds to the binary variable $x_{i}$. The ESP32 microcontroller sets the resistive coupling between each pair of oscillators to encode the matrix $\mathbf{J}$ that defines the problem. The oscillators then settle into phase configurations that minimize \begin{equation} H(x) = x^{T}\mathbf{J}x \end{equation} This quadratic optimization is NP-hard in general and represents a broad class of problems.

The weighted Kuramoto model admits an equivalent Lyapunov function, meaning the natural phase dynamics can be exploited to minimize $(1)$; in fact, these dynamics implement continuous-time Lagrange multiplier optimization. In our case, however, we must ensure the phases settle to binary values (0 or $\pi$) rather than continuous ones. To do this, an external perturbation signal at twice the oscillation frequency is used to create bi-stable modes, a technique known as subharmonic injection locking. The injection amplitude can be varied over time, which effectively performs annealing by gradually enforcing bi-stability. This is necessary because not all stable phase configurations correspond to the global minimum of $(1)$.
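The dynamics and annealing ramp described above can be sketched in a short simulation. This is a minimal Euler-integration sketch, not the actual circuit model; the random coupling matrix, ramp schedule, and step size are illustrative choices:

```python
import numpy as np

def step(theta, J, K, dt=0.05):
    """One Euler step of the weighted Kuramoto dynamics, written as
    gradient descent on the Lyapunov function
        E(theta) = sum_{i<j} J_ij cos(theta_i - theta_j)
                   - (K/2) sum_i cos(2 theta_i).
    The second term models subharmonic injection locking: once K is
    large enough, only phases near 0 and pi are stable."""
    # (i, j) entry of the difference matrix is theta_i - theta_j
    coupling = (J * np.sin(theta[:, None] - theta[None, :])).sum(axis=1)
    shil = -K * np.sin(2 * theta)
    return theta + dt * (coupling + shil)

rng = np.random.default_rng(0)
n = 8
# Random symmetric +/-1 couplings with zero diagonal
J = rng.choice([-1.0, 1.0], size=(n, n))
J = np.triu(J, 1)
J = J + J.T

theta = rng.uniform(0, 2 * np.pi, size=n)
for t in range(4000):
    K = 6.0 * t / 4000            # linearly ramp the injection amplitude
    theta = step(theta, J, K)

s = np.sign(np.cos(theta))        # read out binary spins from the phases
H = s @ J @ s
print("H =", H)
```

Because each step descends the Lyapunov function $E(\theta)$, and the ramp gradually forces the phases toward $\{0, \pi\}$, the final spins encode a (locally) minimal value of $H$.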
This principle can be demonstrated with a simple breadboard experiment. In this example, 10 kΩ resistors connect the variable $x_3$ (blue) to $x_1$ (yellow), $x_2$ (purple), and $x_4$ (green), which themselves share no connections. No injection locking is used for this small experiment. Probes measure the voltage waveform across the inductor of each oscillator. The oscilloscope shows that the set of variables $\{x_1, x_2, x_4\}$ is anti-phase with $\{x_3\}$, which indicates the optimal solution $x^{*} = [1\ 1\ 0\ 1]$ or, equivalently, $x^{*}=[0\ 0\ 1\ 0]$. In the context of the MaxCut problem, this corresponds to “cutting” all the resistors, which is the optimal partition for this graph. The oscillators perform the computation “for free”.
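Since the instance has only four variables, the optimality of this partition can be confirmed by brute force. A quick sketch (using hypothetical 0-based indexing, so the center node $x_3$ from the experiment sits at index 2):

```python
from itertools import product

# Star graph from the breadboard demo: x3 (index 2) is resistively
# coupled to x1, x2, and x4 (indices 0, 1, 3).
edges = [(0, 2), (1, 2), (2, 3)]

def cut_size(x):
    """Number of edges whose endpoints fall in different partitions."""
    return sum(x[i] != x[j] for i, j in edges)

best = max(product([0, 1], repeat=4), key=cut_size)
print(best, cut_size(best))
```

Both $[1\ 1\ 0\ 1]$ and its complement $[0\ 0\ 1\ 0]$ cut all three edges, matching what the oscilloscope shows.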

This analog approach can be significantly more power-efficient than GPUs or TPUs for combinatorial optimization. On conventional hardware, classical heuristics such as simulated annealing and Monte Carlo methods cost roughly 20 nJ per trial bit-flip. By contrast, dedicated ASICs using analog in-memory compute, and even FPGAs, can achieve 2–5 orders of magnitude less energy per bit-flip.
The simulation below shows a larger example with 100 oscillators using a ramped annealing strength. Here the system nearly reaches the optimal configuration, though in general the solution quality varies from problem to problem.
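A sketch of how such a 100-oscillator simulation might be set up. This is not the original simulation: the sparse random instance, the ramp endpoint `K_max`, and the integration parameters are all assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100

# Assumed problem instance: sparse random +/-1 couplings (roughly 5% density),
# symmetrized with zero diagonal.
mask = rng.random((n, n)) < 0.05
J = np.where(mask, rng.choice([-1.0, 1.0], size=(n, n)), 0.0)
J = np.triu(J, 1)
J = J + J.T

theta = rng.uniform(0, 2 * np.pi, size=n)
dt, steps, K_max = 0.02, 8000, 20.0

for t in range(steps):
    K = K_max * t / steps          # linearly ramped annealing strength
    coupling = (J * np.sin(theta[:, None] - theta[None, :])).sum(axis=1)
    theta = theta + dt * (coupling - K * np.sin(2 * theta))

s = np.sign(np.cos(theta))         # binary readout
H_final = s @ J @ s
print("final H =", H_final)
print("binarization error:", np.max(np.abs(np.sin(2 * theta))))
```

Because the ramp ends with the injection term dominating each node's coupling, the phases are pinned near $\{0, \pi\}$ by the end of the anneal, and the readout is unambiguous.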

The optimal annealing schedule can be understood as optimal control of a non-equilibrium system, which remains an open problem. Recent work, however, shows that thermodynamically optimal paths minimize entropy production. Our work aims to show how multi-dimensional control can improve performance on constrained problems and to give provable guidelines for annealing, in place of the heuristics used in practice.
