Investigating Chiplets for Scalable and Cost-Effective HPC Beyond Exascale
John Shalf is the Department Head for Computer Science at Lawrence Berkeley National Laboratory. He formerly served as the Deputy Director for Hardware Technology on the US Department of Energy (DOE)-led Exascale Computing Project (ECP) before returning to his department head position at LBNL. He has co-authored over 100 peer-reviewed publications in parallel computing software and HPC technology, including the widely cited report “The Landscape of Parallel Computing Research: A View from Berkeley” (with David Patterson and others).
Join John Shalf of Lawrence Berkeley National Laboratory for an insightful lecture that spans from the lab's historical beginnings to cutting-edge advancements in supercomputing. In this talk, John covers the following key points:
Historical Context:
- Berkeley Lab's legacy as the first Department of Energy National Laboratory.
- Historical figures such as Oppenheimer and E.O. Lawrence and their contributions, including the first atom smasher in 1939.
Current State of the Lab:
- Growth of the lab to a $2 billion annual budget and 15 associated Nobel prizes.
- Focus on computer science and the design of next-generation supercomputers.
Supercomputing Applications:
- Processing astronomical data to detect supernovae.
- Predictive biology and brain scans for understanding speech production.
- Data analysis from particle accelerators and material design.
Exascale Computing Project:
- Transition from petascale to exascale computing (10^18 operations per second).
- Achievements and challenges in energy efficiency, with the first exascale system delivered at Oak Ridge (a back-of-the-envelope power estimate follows this list).
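To put the energy-efficiency challenge in concrete terms, a back-of-the-envelope estimate (the 20 pJ-per-operation figure below is assumed for illustration, not a number quoted from the talk): system power scales roughly as P = R × E_op, where R is the sustained operation rate and E_op is the energy per operation. At R = 10^18 operations per second and E_op = 20 pJ, P = 10^18 × 20 × 10^-12 W = 20 MW, which is on the order of the power envelope targeted for the first exascale systems.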
Energy Efficiency and Market Dynamics:
- The importance of energy efficiency in supercomputing.
- Market dynamics driving the need for specialized and energy-efficient systems.
Challenges and Future Directions:
- The need for more powerful supercomputing systems for accurate climate modeling and other scientific imperatives.
- Specialization and advanced packaging (chiplets) as solutions for energy efficiency and performance.
Innovations in HPC:
- The transition from general-purpose processors to specialized hardware.
- The role of chiplets in lowering costs and improving performance (an illustrative yield calculation follows this list).
- The open chiplet marketplace and its potential for future HPC advancements.
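As a rough illustration of why chiplets can lower cost, consider the simple Poisson yield model Y = e^(-A·D), where A is die area and D is defect density (the numbers below are assumed for illustration, not figures from the talk). At D = 0.1 defects/cm^2, a 600 mm^2 monolithic die yields about e^(-0.6) ≈ 55%, while each of four 150 mm^2 chiplets covering the same total area yields about e^(-0.15) ≈ 86%. Splitting a large design into smaller dies wastes far less silicon per working part, at the price of the advanced packaging needed to reassemble the known-good dies.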
Conclusion:
- The necessity of following industry trends to maintain and enhance HPC capabilities.
- The importance of targeted specializations for scientific progress.
Explore the journey of Berkeley Lab from its atomic-age origins to its forefront position in supercomputing, and understand the technical challenges and innovative solutions driving the future of high-performance computing.