2 Comments
Robots and Chips

This is an excellent summary of the DOE's new public-private partnership model for AI supercomputers. The speed at which Lux will be deployed (early 2026) using the AMD Instinct MI355X GPUs, EPYC CPUs, and Pensando networking is remarkable - compressing what normally takes years into months. Your historical context about using ORNL's Heat Pump Design Model in 1987 with Fortran on tape (with the EBCDIC/ASCII mixup!) really underscores how far we've come. The progression from your 300-baud modem to more than $1 billion in public-private investment for exascale AI systems is staggering. What strikes me most is the strategic alignment: DOE gets faster deployment and shared infrastructure, AMD/HPE get early real-world workload validation at scale, and America accelerates its compute leadership. The fact that Discovery (arriving 2028) will far exceed Frontier's performance while pioneering HPC-AI-quantum convergence is the real story. Outstanding write-up!

Richard Rusk

Thank you. ORNL’s heat pump model helped with my master’s thesis project and with a paper my professors and I published about my engine-driven heat pump model. The paper has been cited about 30 times. I feel I made a small contribution to the world as part of the continuum of research projects.

This product is on the market. When used in heating mode, the system recovers engine heat, and the refrigeration cycle captures heat from the cold outdoor air and delivers it to the space. The thermodynamics is sound; making a business of selling them, apparently, is not so easy.
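For anyone curious why the delivered heat can exceed the fuel input, here is a minimal energy-balance sketch of the heating mode. The shaft efficiency, heat-recovery fraction, and COP below are assumed values for illustration only, not figures from the actual product or the ORNL model.

# Illustrative energy balance for an engine-driven heat pump in heating mode.
# All parameter values are assumptions chosen for illustration.

def heating_delivery(fuel_input_kw: float,
                     engine_shaft_eff: float = 0.30,    # assumed fraction of fuel energy to shaft work
                     heat_recovery_frac: float = 0.70,   # assumed fraction of engine losses recovered
                     heating_cop: float = 3.0) -> dict:  # assumed cycle COP at the outdoor condition
    """Return the heat streams delivered to the space, in kW."""
    shaft_work = engine_shaft_eff * fuel_input_kw
    # Refrigeration cycle lifts heat from the cold outdoor air; condenser output = COP * shaft work.
    cycle_heat = heating_cop * shaft_work
    # Engine jacket/exhaust heat recovered and delivered to the space.
    recovered_engine_heat = heat_recovery_frac * (fuel_input_kw - shaft_work)
    total = cycle_heat + recovered_engine_heat
    return {
        "cycle_heat_kw": cycle_heat,
        "recovered_engine_heat_kw": recovered_engine_heat,
        "total_delivered_kw": total,
        "gas_utilization_efficiency": total / fuel_input_kw,
    }

if __name__ == "__main__":
    # Example: 10 kW of fuel input delivers roughly 14 kW of heat under these assumptions.
    print(heating_delivery(10.0))

Under these assumed numbers the gas utilization efficiency comes out near 1.4, which is why recovering engine heat on top of the refrigeration cycle is attractive in cold weather.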
