Supercomputers used to look like something out of a Bond villain’s lair – room-sized beasts humming under liquid-cooled floors, processing seismic data or simulating nuclear physics. Fast forward a decade, and now we’ve got a soda-can-sized contraption that hums with the potential of 28 CPU cores and over 100GB of RAM. It’s called the NanoCluster by Sipeed, and it might just be the nerdiest flex in consumer-grade computing since someone jammed Doom into a pregnancy test.
Built around the modular muscle of up to seven Raspberry Pi Compute Modules (CM4 or CM5), the NanoCluster takes ARM architecture and folds it into a compact, extensible setup that feels more like LEGO for sysadmins than an actual compute platform. Each compute module plugs into its own M.2-style adapter board – svelte and minimalist – bringing up to 16GB RAM and a quad-core CPU per node. Seven of these together put you at a theoretical 112 gigaflops, which, for comparison, can outpace the base M2 MacBook Air in some parallel workloads. Not bad for something that fits in your palm.
Designer: Sipeed
The whole system draws power either through USB-C using a 65W GaN charger or via PoE++, offering up to 60W. Here’s where things get spicy: power and cooling are tightly intertwined in this setup. Stressing the CPUs too much – like running `stress-ng --matrix 0` on all six or seven modules – starts to push the boundaries of the system’s power budget, leading to throttling or outright node instability. Temperatures creep past 85°C, and the fan, which hovers around 58 dB at full tilt, kicks into jet-engine mode. It’s functional, but far from whisper-quiet.
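If you want to reproduce that kind of all-nodes load test yourself, a minimal sketch looks like the loop below. The hostnames `node1` through `node7` are assumptions – substitute whatever your nodes are actually called – and it defaults to a dry run that only prints the commands, so you can sanity-check before actually cooking the cluster.

```shell
# Fan out a stress-ng run to every node. Hostnames node1..node7 are
# hypothetical; pass 0 as the first argument to actually execute over SSH.
stress_all() {
  dry_run=${1:-1}
  cmd="stress-ng --matrix 0 --timeout 60s"
  for i in 1 2 3 4 5 6 7; do
    if [ "$dry_run" = 1 ]; then
      echo "ssh node$i '$cmd'"   # dry run: show what would be sent
    else
      ssh "node$i" "$cmd" &      # real run: launch on all nodes in parallel
    fi
  done
  wait   # block until every background SSH job finishes
}

stress_all   # dry run by default
```

Watch the node temperatures while this runs; per the power-budget caveats above, a full seven-node `--matrix 0` run is exactly the workload that triggers throttling.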
What really sets this board apart is the inclusion of a managed RISC-V network switch tucked beneath the main board. It offers VLAN support, port toggling, and console access, though currently the interface is stuck in Chinese with a few browser hiccups. Still, the fact that you can control the entire cluster’s network behavior from any node is impressive, especially when you realize that the whole thing runs on 20 to 70 watts, depending on load.
And while the single 1 Gbps uplink does present a bottleneck for data-heavy workloads (Ceph over this network would be a stretch), individual nodes get full gigabit access. That’s more than enough for most hobbyist-grade Kubernetes deployments, distributed AI workloads like Llama, or even CI/CD pipelines using tools like distcc. In fact, a full kernel compile drops from 45 to 22 minutes with just four nodes humming in harmony.
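For the distcc case, the setup is mostly a matter of telling the coordinator node where the compile slots live. A minimal sketch, assuming hypothetical hostnames and that `distccd` is already running on each compute module:

```shell
# Hypothetical hostnames; assumes distccd is listening on each node.
# The host/slots syntax advertises 4 jobs per quad-core module.
export DISTCC_HOSTS="node1/4 node2/4 node3/4 node4/4"

# Size make's -j to the total remote slots declared above.
JOBS=$(echo "$DISTCC_HOSTS" | tr ' ' '\n' | awk -F/ '{s+=$2} END {print s}')

# Print the compile command this configuration implies; run it from the
# kernel source tree to farm compilation out across the cluster.
echo "make -j$JOBS CC=\"distcc gcc\" bzImage"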
Design-wise, the board is clearly built for tinkerers. Every inch reveals a decision made with modularity in mind: M.2 adapters, USB-C ports, NVMe SSD support, and even a redundant power configuration that switches between PoE and USB-C based on load demand. It’s not plug-and-play; you’ll need to flash OS images, understand power limits, and maybe even tweak a fan control script that didn’t work out of the box. But that’s part of the charm.
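That fan control script is a good example of the tweaking involved. The sketch below is one way to approach it, not the board’s actual script: the sysfs paths are assumptions (the real ones depend on the carrier board’s hwmon driver), and the thresholds are picked to ease off below 45°C and hit full blast at the 85°C mark mentioned earlier.

```shell
# Hypothetical sysfs paths; the real ones depend on the board's hwmon driver.
TEMP_PATH=/sys/class/thermal/thermal_zone0/temp
PWM_PATH=/sys/class/hwmon/hwmon0/pwm1

# Map a millidegree-C reading to a 0-255 PWM value: off below 45 C,
# full speed at 85 C, linear ramp in between.
fan_pwm() {
  t=$(( $1 / 1000 ))
  if [ "$t" -le 45 ]; then echo 0
  elif [ "$t" -ge 85 ]; then echo 255
  else echo $(( (t - 45) * 255 / 40 ))
  fi
}

# Example (needs root to write the PWM file):
# echo "$(fan_pwm "$(cat "$TEMP_PATH")")" > "$PWM_PATH"
```

Drop something like this into a cron job or systemd timer and the 58 dB jet-engine mode only kicks in when the cluster actually earns it.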
At a price point ranging from $50 to $150, depending on configuration, the NanoCluster invites experimentation without the gut-punch expense of enterprise gear. It’s not ideal for everyone – and definitely not for those allergic to the idea of debugging UART headers or reading through wiki pages – but for devs, educators, and anyone curious about distributed systems, it’s a sandbox worth jumping into.
You probably won’t replace your workstation with this thing. You won’t mine crypto, or render Pixar-quality animations, or host a billion-user database. But you WILL learn. And in a world where we casually carry LLMs in our pockets, having a pocket-sized supercomputer for hands-on experimentation feels like a natural next step. Welcome to the era of distributed soda-can computing.
The post Supercomputer In A Soda Can – The NanoCluster packs 100GB RAM in an Ultra-Compact Design first appeared on Yanko Design.