ACM TechNews, Friday, January 7, 2011

European Exascale Project Drives Toward Next Supercomputing Milestone
HPC Wire (01/06/11)

The goal of the European Exascale Software Initiative (EESI) is to help effect the migration from petascale to exascale systems over the next 10 years by bringing together industry and government organizations. "The expected outputs of the project is an exascale roadmap and set of recommendations to the funding agencies shared by the European [high performance computing (HPC)] community, on software--tools, methods, and applications--to be developed for this new generation of supercomputers," says EESI program leader Jean-Yves Berthou. EESI's first international workshop in Amsterdam convened 80 experts in the fields of software development, performance analysis, applications knowledge, funding models, and governance aspects in HPC. Eight working groups (WGs)--four focused on application grand challenges and four concentrating on methods for enabling exaflop computing--have been organized to identify and classify the main challenges in their scientific areas and technology components. Netherlands National Computing Facilities foundation's Peter Michielse says the workshop's purpose was twofold: to ensure that each WG was considering the correct challenges within its scientific and technology discipline, and to become familiar with Asian and U.S. exascale software initiatives. "An important role of EESI is to make sure that Europe is involved in global discussions on hardware, software, and applications design," Michielse says.

Better Benchmarking for Supercomputers
IEEE Spectrum (01/11) Mark Anderson

Many computer scientists say the High-Performance Linpack test used to rate the world's Top 500 supercomputers is not the best performance measurement for supercomputers. "What we're most interested in is being able to traverse the whole memory of the machine," says Sandia National Laboratory researcher Richard Murphy. He and his colleagues have developed the Graph500, a new benchmark that rates supercomputers based on gigateps (billions of traversed edges per second) instead of petaflops. By the Graph500 standard, supercomputers have actually been slowing down, according to Notre Dame University professor Peter Kogge: over the past 15 years, each 1,000-fold increase in flops has been accompanied by a 10-fold decrease in accessible memory. According to the Graph500 standard, the top supercomputer would be Argonne National Laboratory's IBM Blue Gene-based Intrepid, which recorded 6.6 gigateps. The U.S. Defense Advanced Research Projects Agency, the Department of Energy, and the National Science Foundation have also developed another benchmark, the HPC Challenge, which tests both computing power and memory accessibility.
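To make the gigateps metric concrete, the Python sketch below estimates a Graph500-style traversed-edges-per-second figure by timing a breadth-first search over a randomly generated graph. This is a minimal illustration only, not the official benchmark: the real Graph500 specifies a Kronecker graph generator, many search keys, and result validation, none of which are reproduced here, and the function names are invented for this example.

    # Minimal sketch of a Graph500-style TEPS (traversed edges per second) measurement.
    # Assumptions: a toy random graph stands in for the benchmark's Kronecker generator,
    # and a single BFS stands in for its full set of timed searches.
    import random
    import time
    from collections import deque, defaultdict

    def random_graph(num_vertices, num_edges):
        """Build an undirected adjacency list from random edge pairs."""
        adj = defaultdict(list)
        for _ in range(num_edges):
            u = random.randrange(num_vertices)
            v = random.randrange(num_vertices)
            adj[u].append(v)
            adj[v].append(u)
        return adj

    def bfs_teps(adj, source):
        """Run a breadth-first search; return (edges traversed, elapsed seconds)."""
        visited = {source}
        queue = deque([source])
        edges_traversed = 0
        start = time.perf_counter()
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                edges_traversed += 1  # every scanned edge counts as traversed
                if v not in visited:
                    visited.add(v)
                    queue.append(v)
        return edges_traversed, time.perf_counter() - start

    if __name__ == "__main__":
        graph = random_graph(num_vertices=1 << 16, num_edges=1 << 20)
        edges, seconds = bfs_teps(graph, source=0)
        print(f"{edges / seconds / 1e9:.6f} GTEPS")  # gigateps, the Graph500 figure of merit

Because the work is dominated by pointer-chasing through the adjacency lists rather than by arithmetic, a run like this stresses memory access rather than floating-point throughput, which is exactly the behavior the Graph500 is meant to reward and Linpack is not.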