Discussion about this post

Robots and Chips:

This is an excellent summary of the DOE's new public-private partnership model for AI supercomputers. The speed at which Lux will be deployed (early 2026) using AMD Instinct MI355X GPUs, EPYC CPUs, and Pensando networking is remarkable, compressing what normally takes years into months. Your historical context about using ORNL's Heat Pump Design Model in 1987 with Fortran on tape (with the EBCDIC/ASCII mixup!) really underscores how far we've come. The progression from your 300-baud modem to more than $1 billion in public-private investment for exascale AI systems is staggering. What strikes me most is the strategic alignment: DOE gets faster deployment and shared infrastructure, AMD/HPE get early real-world workload validation at scale, and America accelerates its compute leadership. The fact that Discovery (arriving 2028) will far exceed Frontier's performance while pioneering HPC-AI-quantum convergence is the real story. Outstanding write-up!
