Geneva, 1 July 2009. Preparations are under way for the restart of the Large Hadron Collider (LHC), the world's most powerful particle accelerator. One of the most important systems needed to support the experiments that will use this great machine is its global computing grid: the worldwide LHC Computing Grid (WLCG). After months of preparation and two intensive weeks of 24/7 operation, the LHC experiments are celebrating the achievement of a new set of goals aimed at demonstrating full readiness for the LHC data-taking run expected to start later this year.
While there have been several large-scale data-processing tests in recent years, this was the first production demonstration involving all of the key elements, from data taking through to analysis. Records of all sorts were set: in data-taking throughput, in data import and export rates between the various Grid sites, and in the sheer numbers of analysis, simulation and reprocessing jobs, with ATLAS alone running close to one million analysis jobs and achieving 6 GB/s of "Grid traffic" (the equivalent of a DVD's worth of data every second), sustained over long periods. This result is particularly timely, as it coincides with the transition of Grids into long-term sustainable e-infrastructures, clearly of fundamental importance to projects with the lifetime of the LHC.

With the restart of the LHC only months away, a large increase in the number of Grid users can be expected: from several hundred unique users today to several thousand when data taking and analysis commence. This can only happen through significant streamlining of operations and simplification of end users' interaction with the Grid. STEP'09 therefore included massive-scale testing of end-user analysis scenarios, including "community support" infrastructures, whereby the community is trained and enabled to be largely self-supporting, backed by a core of Grid and application experts.
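As a rough sanity check of the figures quoted above, the "DVD per second" comparison can be verified with simple arithmetic: a single-layer DVD holds about 4.7 GB, so a sustained 6 GB/s is slightly more than one DVD's worth of data each second. A minimal sketch (the DVD capacity is standard; the 6 GB/s rate is taken from the text):

```python
# Back-of-the-envelope check of the quoted STEP'09 ATLAS rates.
dvd_capacity_gb = 4.7        # single-layer DVD capacity in GB (standard figure)
atlas_rate_gb_per_s = 6.0    # sustained "Grid traffic" rate quoted in the text

# How many DVDs per second does that correspond to?
dvds_per_second = atlas_rate_gb_per_s / dvd_capacity_gb

# And how much data would flow per day at that sustained rate?
seconds_per_day = 86_400
data_per_day_tb = atlas_rate_gb_per_s * seconds_per_day / 1000  # in TB

print(f"{dvds_per_second:.2f} DVDs per second")  # about 1.28
print(f"{data_per_day_tb:.0f} TB per day")       # about 518
```

At roughly 518 TB per day for sustained periods, the quoted rate is comfortably consistent with the per-site transfer records mentioned later in this release.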
WLCG combines the computing power of more than 140 computer centres, the result of a collaboration spanning 33 countries.
Sergio Bertolucci, director of research and computing at CERN, said: "The four LHC experiments – ATLAS, CMS, ALICE and LHCb – have demonstrated their ability to manage their nominal data rates concurrently. For the first time, all aspects of the experiments' computing were exercised simultaneously: simulation, data processing and analysis. This gives them the confidence that they will be able to efficiently analyse the first data from the LHC later this year."
Bob Jones, director of the EGEE project, remarked: "Such a significant achievement is also a valuable testament to the state of maturity of the EGEE infrastructure and its ability to interoperate with major Grid infrastructures in other parts of the world. Ensuring that this level of service continues uninterrupted as we transition from EGEE to EGI is clearly essential to our users, including flagship communities such as High Energy Physics."
"This is another significant step toward demonstrating that shared infrastructures can be used by multiple high-throughput science communities simultaneously," said Ruth Pordes, executive director of the Open Science Grid consortium. "ATLAS and CMS are not only proving the usability of OSG, but contributing to maturing national distributed facilities in the US for other sciences."
David Britton, the GridPP project leader, reported: "In the UK, STEP'09 ran very smoothly at the majority of sites, which allowed the focus to be on understanding the performance and tuning the infrastructure. The RAL Tier-1 performed exceedingly well, with only a single out-of-hours call-out over the two-week period. Valuable information was obtained on the performance of tape drives under realistic workflows; the OPN network was tested by laying additional UDP traffic on top of the STEP'09 data; and the fair-share system was successfully tuned to balance the load between experiments."
Gonzalo Merino, manager of the Tier-1 centre in Barcelona, wrote: "The Spanish WLCG sites met the STEP'09 targets. It has been a very valuable exercise, since many of the experiment workflows have been tested simultaneously at unprecedented scale, well above the nominal values for LHC data taking. The Tier-1 at PIC has provided a very stable and reliable service at record-breaking levels: exchanging up to 80 terabytes per day with other WLCG sites and processing data at more than 2 GB per second. This gives us confidence that the Spanish WLCG sites are ready for data taking."
David Foster, head of the LHC Optical Private Network activity, concluded: "The LHC Optical Private Network, which transports data between the sites, has proven its capability in terms of both performance and resiliency during STEP'09. New capabilities emerging in the 40 Gbps and 100 Gbps range should enable us to keep up with the anticipated data-distribution needs of the LHC experiments."
About the Large Hadron Collider
The LHC, located at CERN near Geneva, Switzerland, is the world's largest particle accelerator. For thousands of physicists, analysing LHC data using the LHC Computing Grid will be like sifting for digital gold. Their search is predicted to unearth evidence of new fundamental particles that will provide clues to the ultimate nature of matter and the origins of our Universe.
About grid computing
Grid computing connects computers distributed over a wide geographic area. Just as the World Wide Web enables access to information, computing grids enable access to computing resources. These resources include data storage capacity, processing power, sensors, visualisation tools and more. Grids can combine the resources of thousands of different computers to create a massively powerful computing resource, accessible from the comfort of a personal computer and useful for multiple applications in science, business and beyond.
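The core idea described above, many independent jobs farmed out to whatever computing resources are available, can be illustrated with a deliberately simplified sketch. This toy example uses local worker processes in place of a real grid; actual grid middleware such as that underpinning WLCG additionally handles authentication, data placement and scheduling across geographically distributed sites, none of which is shown here:

```python
# Illustrative sketch only: a toy "grid" that splits a dataset into chunks
# and runs independent jobs on a pool of local worker processes. A real grid
# distributes such jobs across many sites; this stands in for the concept.
from concurrent.futures import ProcessPoolExecutor


def analyse(chunk):
    """Stand-in for an analysis job: here, just sum a chunk of numbers."""
    return sum(chunk)


def run_on_grid(dataset, n_workers=4, chunk_size=1000):
    """Split the dataset into chunks, process them in parallel, merge results."""
    chunks = [dataset[i:i + chunk_size]
              for i in range(0, len(dataset), chunk_size)]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        partial_results = list(pool.map(analyse, chunks))
    return sum(partial_results)


if __name__ == "__main__":
    data = list(range(10_000))
    # The parallel result matches the single-machine answer, sum(data).
    print(run_on_grid(data))
```

The essential property, as in a real grid, is that each job is independent, so the work scales out simply by adding more workers.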
For more information:
Kendra Snyder – BNL/US ATLAS public relations matters
Rhianna Wisniewski – Editor in the Fermilab Office of Communication
Timothy I. Meyer – Head Strategic Planning & Communications