Tarski Neuromorphic Chip
Neuromorphic Computing

AI that thinks like a brain

An analog spiking neural network chip that runs AI at microwatt power, thousands of times more efficiently than conventional processors.

0.03 mW
Power draw
>96%
MNIST accuracy
12,000x
Faster simulation
The Problem

AI is devouring the grid

Data centres in Ireland already consume over 20% of the country's electricity. Frontier AI labs are building gigawatt-scale facilities that rival entire cities. And leaders in the space are clear: this is accelerating, not slowing down.

269 kWh
Energy for a single day of frontier-model inference on 16x H200 GPUs. No battery can hold that.
Gigawatts
OpenAI's Stargate capacity commitment. Roughly Ireland's entire peak demand, just for AI.
Water consumption
A single large data centre consumes as much water each day as a town of 50,000 people. This water evaporates during cooling and cannot be recovered.
Why this matters for Tarski
0 water
Tarski draws 30 microwatts. No heat. No cooling. No water. Neuromorphic inference at the edge eliminates the data centre from the equation entirely.
Energy per device
[Chart: battery capacity per device (solid bars) vs. daily GPU inference energy (striped), log scale.]
On a linear scale every other bar vanishes. Running Kimi K2 for one day consumes 269 kWh -- the stored energy of 117 Optimus robots, 19,300 iPhones, or 53,800 brain implants.
What is Tarski

A chip that computes with physics, not code

Traditional AI
Simulating a rainstorm on a spreadsheet. Every number calculated, every cycle. Billions of multiply-accumulate operations per inference. Energy spent describing physics.

vs

Tarski
Actually making it rain. Electrons behaving like neurons. Physics doing the math. Capacitors charge. Resistors leak. Comparators fire. Zero multiply operations. Just physics.
How It Works

Two kinds of neural network

Traditional AI

Network Behaviour

Every neuron fires, every connection computes. All at once, every cycle.

Signal Type

A smooth, continuous stream of numbers, always flowing.

Timing

A rigid clock. Everything in lockstep.

Everything computes. All the time.

Like leaving every light in a building on to read one book.

Tarski (SNN)

Network Behaviour

Only active neurons fire. The rest stay silent.

Signal Type

Sharp spikes, only when something happens. Silence is information.

Timing

No clock. Neurons respond when events arrive.

Only what matters computes. Only when it matters.

Like your brain: billions of neurons, only a fraction active at any moment.
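
The difference is easy to see in code. Here is a toy sketch (our illustration, not Tarski's implementation; sizes, weights, and function names are hypothetical): the dense, clocked network touches every synapse on every cycle, while the event-driven network only does work for neurons that actually spiked.

// Toy comparison, not Tarski's code. weights[i][j] connects input j
// to neuron i; all values are hypothetical.

/// Dense, clocked update: every synapse computes, every cycle,
/// whether or not anything changed.
fn dense_step(acts: &[f32], weights: &[Vec<f32>], out: &mut [f32]) {
    for (o, row) in out.iter_mut().zip(weights) {
        *o = row.iter().zip(acts).map(|(w, a)| w * a).sum();
    }
}

/// Event-driven update: `spiked` lists the indices of neurons that
/// fired this step. Work scales with spike count; silence is free.
fn event_step(spiked: &[usize], weights: &[Vec<f32>], membrane: &mut [f32]) {
    for &j in spiked {
        for (m, row) in membrane.iter_mut().zip(weights) {
            *m += row[j]; // inject current only along active fan-out
        }
    }
}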

The Neuron

Analog LIF circuits

Analog LIF Neuron vs Biological Neuron

Integrate

Input currents charge a membrane capacitor. Weights are resistances: I = V/R. Multiple inputs sum via Kirchhoff's current law.

Leak

A parallel resistor drains the capacitor with tau_m = R*C. Exponential decay makes the neuron forget old inputs.

Fire

When the membrane crosses 0.8 V, a comparator (LMV7219) fires a spike, which propagates to downstream neurons as a current injection.

Reset

An analog switch (SN74LVC1G66) shorts the capacitor to ground. Brief refractory period before it can fire again.
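
The same four phases fit in a few lines of Rust. A minimal sketch (our illustration, structured after the forward pass shown in the Software Stack section; units are normalized so input currents are pre-divided by the membrane capacitance C):

struct LifNeuron {
    mem: f32,       // membrane voltage on the capacitor
    tau_m: f32,     // leak time constant: tau_m = R * C
    threshold: f32, // comparator trip point (0.8 V on the demo board)
}

impl LifNeuron {
    /// One timestep. `input_currents` are the per-synapse currents
    /// (I = V/R), which sum at the membrane node per Kirchhoff's
    /// current law.
    fn step(&mut self, input_currents: &[f32], dt: f32) -> bool {
        // Integrate: all inputs sum into one membrane current.
        let input: f32 = input_currents.iter().sum();
        // Leak: exact solution of the RC decay over dt.
        let decay = (-dt / self.tau_m).exp();
        let steady = input * self.tau_m;
        self.mem = steady + (self.mem - steady) * decay;
        // Fire: the comparator trips when mem crosses threshold.
        let spike = self.mem >= self.threshold;
        if spike {
            self.mem = 0.0; // Reset: switch shorts the capacitor to ground.
        }
        spike
    }
}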

Gilgamesh Simulator

12,000x faster than SPICE

Designing analog hardware means simulating real physics. SPICE takes 30 minutes per inference. We need millions. Gilgamesh makes training possible.

SPICE

vs

Gilgamesh

One MNIST inference through our best small-scale 36-12-10 spiking neural network.

Training requires tens of thousands of these simulations. At 30 minutes each, SPICE would take years. Gilgamesh makes it possible in hours.

[Live benchmark: SPICE (industry standard, running at 1x real time) vs. Gilgamesh (custom engine, written in Rust), simulating the same 36-12-10 spiking MNIST network side by side.]
12,000x faster
Same physics. Same accuracy. 0.15 seconds instead of 30 minutes.
SPICE: 30 min 00 s
Gilgamesh: 0.150 s
Accuracy: 96.43%
Architecture: 36-12-10 SNN
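
One reason a dedicated engine can be this much faster, offered as our own reading rather than a claim about either codebase: between spikes the LIF membrane is a linear RC node with an exact closed-form solution, so it can be advanced across a whole timestep in one jump, where a generic transient solver grinds through many small sub-steps toward the same value. A toy illustration in Rust (hypothetical values):

fn exact_step(mem: f32, input: f32, tau: f32, dt: f32) -> f32 {
    // One closed-form jump across the whole interval dt.
    let steady = input * tau;
    steady + (mem - steady) * (-dt / tau).exp()
}

fn euler_steps(mut mem: f32, input: f32, tau: f32, dt: f32, n: u32) -> f32 {
    // The same trajectory ground out in n small explicit sub-steps,
    // a stand-in for a generic solver's fine-grained integration.
    let h = dt / n as f32;
    for _ in 0..n {
        mem += h * (input - mem / tau); // dV/dt = -V/tau + input
    }
    mem
}

fn main() {
    let (mem, input, tau, dt) = (0.0_f32, 1.0, 0.02, 0.001);
    println!("exact, 1 step:     {:.6}", exact_step(mem, input, tau, dt));
    println!("euler, 1000 steps: {:.6}", euler_steps(mem, input, tau, dt, 1000));
}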
Proven Results

Demo board validated. Manufacturing next.

We built a proof-of-concept board to validate the architecture. The results exceeded expectations. Now we are manufacturing a dedicated 49-9-10 chip.

MNIST Classification

Handwritten digits fed as analog voltages. Classified by counting output spikes across 10 output neurons.

>96%
Physics-mode test accuracy

Minimal Architecture

The manufacturing target is a 49-9-10 network: 49 inputs, 9 hidden LIF neurons, 10 outputs. Each synapse is a physical resistor (49x9 + 9x10 = 531 of them).

531
Total synapses on chip

Microwatt Power

The analog core draws ~30 microwatts during inference. When idle, it is fully off -- zero standby drain. Not sleeping. Off.

0.03 mW
Active inference only. Zero idle.
Demo Board

See it classify digits

A 7x7 pixel image goes in as analog voltages. Spikes come out. The neuron that fires most wins. The whole thing runs on microwatts.

[Interactive demo: a 7x7 image enters the Tarski SNN chip as 49 analog input voltages, passes through 9 hidden LIF neurons to 10 output neurons (digits 0-9). Example run: the chip classifies a 1 at 97.2% confidence, decided by spike-count winner.]
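
The readout fits in a few lines. A hedged sketch (assuming a `spike_counts` array accumulated during one inference window; the demo's exact confidence metric may differ):

/// `spike_counts[d]` holds how many times output neuron `d` fired.
fn classify(spike_counts: &[u32; 10]) -> (usize, f32) {
    let total: u32 = spike_counts.iter().sum();
    let (digit, &max) = spike_counts
        .iter()
        .enumerate()
        .max_by_key(|&(_, &c)| c)
        .expect("ten output neurons");
    // Winner's share of all output spikes as a confidence proxy
    // (one plausible definition, not necessarily the demo's).
    let confidence = if total > 0 { max as f32 / total as f32 } else { 0.0 };
    (digit, confidence)
}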
Why It Matters

What microwatt AI unlocks

Medical Devices That Never Die

Implants running neural inference on microwatts, powered by body heat. No batteries to replace.

10+ years
Device lifespan

Year-Long Fault Monitoring

Always-on anomaly detection for infrastructure. Stick a sensor on and forget about it for years.

<1 mW
Total power draw

AI Agents Costing Cents

Edge inference at nanojoule scale. Drones, robots, and IoT that think locally.

$0.0001
Per inference
Benchmarks

Architecture performance on MNIST

MNIST handwritten digit classification is the standard benchmark for proving AI hardware feasibility. These results validate that Tarski's analog circuits can learn and classify, not a production deployment target.

Architecture   Image   Params   Mode      Accuracy
36-6-10        6x6     276      Physics   85.05%
36-12-10       6x6     552      Physics   91.38%
49-9-10        7x7     531      Physics   90.14%   MANUFACTURING
36-12-10       6x6     552      Physics   96.43%   BEST
Software Stack

Gilgamesh: train, simulate, deploy

neuron/forward.rs
let decay = (-dt / tau_m).exp();
let steady = input * tau_m;
mem = steady + (mem - steady) * decay; // RC membrane dynamics
let spike = mem >= threshold; // Threshold crossing
if spike { mem = 0.0; }
terminal
# Train a physics-mode SNN
$ gilgamesh train --config physics.json

# Evaluate on test set
$ gilgamesh evaluate --model model.json
Accuracy: 96.43% (9643/10000)

# Generate SPICE netlist
$ gilgamesh spice --model model.json
Written: snn_36_12_10.cir
Components: 552 synapses, 22 neurons
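
For netlist generation, each trained weight has to become a physical resistor. A hypothetical sketch of that conversion (our illustration; Gilgamesh's actual emitter is not shown here): a weight is read as a conductance, so the synapse line is a resistor of R = 1/w between input and membrane nodes.

/// Illustrative only: emit one SPICE resistor line per synapse.
/// Node naming (`in{j}`, `mem{i}`) is hypothetical.
fn emit_synapses(weights: &[Vec<f64>], out: &mut String) {
    use std::fmt::Write;
    for (i, row) in weights.iter().enumerate() {
        for (j, &w) in row.iter().enumerate() {
            // A bare resistor only realises a positive conductance;
            // this toy version simply skips the rest.
            if w > 0.0 {
                writeln!(out, "Rsyn_{j}_{i} in{j} mem{i} {:.0}", 1.0 / w).unwrap();
            }
        }
    }
}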
Team

Built by

Co-Designer

Third-year EEE student at University of Galway. Built the Gilgamesh simulator, designed the network architecture, and wrote the training pipeline. Runs Eltrus Limited, a medical software company serving 400+ patients.

Co-Designer

Hardware co-designer on the 22,000-component PCB layout and assembly. Responsible for the physical neuron circuits, component selection, and board-level integration.