The next generations of extreme scale systems face many challenges. The end of frequency scaling forces the use of extreme amounts of concurrency. Power constraints are forcing a reconsideration of processor architecture, eliminating features that provide little performance benefit relative to the power they consume. So-called heterogeneous architectures, which use combinations of simpler, less general processing elements such as graphics processing units (GPUs) or processors in memory (PIM), offer better performance per unit of energy. Future systems will need to combine these and other approaches to reach Exascale performance.
Achieving good performance on any system requires balancing many competing factors. Rather than simply minimizing communication (or floating-point operations, or data motion), the goal for high-end systems is to achieve the lowest-cost solution. And while cost is typically measured as time to solution, other metrics, including total energy consumed, are likely to be important in the future.
Making effective use of the next generations of extreme scale systems requires rethinking the algorithms, the programming models, and the development process. This talk will discuss these challenges and argue that performance modeling, combined with a more dynamic and adaptive style of programming, will be necessary for extreme scale systems.
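To make the role of performance modeling concrete, the sketch below shows the kind of simple analytic model often used to reason about communication cost on large systems: the classic latency-bandwidth ("postal") model, T(n) = alpha + n/beta. This is an illustrative example, not a model from the talk; the parameter values are assumed, not measured.

```python
# A minimal sketch of an analytic performance model: the latency-bandwidth
# ("postal") model for the time to send a message of n bytes,
#   T(n) = alpha + n / beta.
# The parameter values below are illustrative assumptions, not measurements
# of any real system.

ALPHA = 1.0e-6   # assumed per-message latency, in seconds
BETA = 1.0e10    # assumed network bandwidth, in bytes per second

def message_time(n_bytes: float) -> float:
    """Predicted time to transfer n_bytes under the latency-bandwidth model."""
    return ALPHA + n_bytes / BETA

def halo_exchange_time(n_bytes: float, n_neighbors: int) -> float:
    """Predicted time for a halo exchange of n_bytes to each of
    n_neighbors neighbors, assuming the sends are serialized."""
    return n_neighbors * message_time(n_bytes)
```

Even a model this simple exposes the trade-off the abstract alludes to: for small messages the latency term dominates, so aggregating messages (fewer, larger transfers) lowers cost, while for large messages bandwidth dominates and aggregation buys little.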
Extreme scale computing, Petaflops, Exaflops, Scalable Algorithms, Performance modeling