
CERN: A Case Study In “Big Science” Data Management

by Travis Korte
[Image: A visualization of CERN data depicting a Higgs boson]

The European Organization for Nuclear Research (CERN) became a poster child for organizing and processing enormous quantities of data during its search for the Higgs boson and other physics investigations, but scaling up is not the only data hurdle the organization has had to clear. To maximize the long-tail value of its data, it has had to create roadmaps for preserving data and releasing it to the public. Just as CERN’s scalable processing and storage innovations can be applied in industry, its data preservation and dissemination efforts can serve as examples for other “big science” projects.

Data Preservation

CERN’s data preservation imperative arises from two factors. First, data re-use is a crucial component of particle physics. A 2012 special report from the Data Preservation in High Energy Physics (DPHEP) study group notes that important research can arise from data sets more than a decade after their collection (see “Other examples of long-term data re-use”). The same report notes that long-term analysis accounts for 5-10% of the total scientific output of high energy physics collaborations, and a survey of over a thousand high energy physicists showed that 70% regarded data preservation as “very important” or “crucial” to their work.

The second factor is that, in general, particle collision data is collected only once. As accelerators grow more powerful, they tend to collect data at energy levels never before achievable, so data from older accelerators risks being lost forever if it is not adequately preserved. Scientists cannot hope to reap the benefits of running new analyses on old data if this one-of-a-kind data disappears.

In light of these two factors, CERN has been proactive about preserving data for future re-use, even as it remains hampered by non-standardized, ad-hoc formats and highly complex data models. CERN was a co-founding organization of the Alliance for Permanent Access, a European data preservation working group. Internal efforts are underway as well, although these have not yet been fully implemented. In a 2011 presentation, the computing coordinator of the ATLAS (A Toroidal LHC ApparatuS) experiment at CERN (one of the detectors that contributed measurements to the Higgs boson finding) stated that he was confident technological progress would enable preservation of large quantities of data in the medium term. The collider’s Compact Muon Solenoid (CMS) experiment (the other detector involved in the Higgs finding) has made progress as well, announcing that it would begin implementing a new metadata collection tool this month and mentioning an emerging initiative to guarantee “bit-level data preservation” in the future.
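
“Bit-level data preservation” refers to guaranteeing that archived files remain byte-for-byte intact over time, typically by recording cryptographic checksums and periodically re-verifying them. The following minimal sketch illustrates the idea in Python; the directory layout, manifest format, and function names are illustrative assumptions, not CERN’s actual tooling.

    # A minimal sketch of bit-level preservation checking: record a
    # cryptographic checksum for each archived file, then re-verify
    # later to detect silent corruption. Paths and the manifest format
    # are illustrative assumptions, not CERN tooling.
    import hashlib
    import json
    from pathlib import Path

    def sha256sum(path: Path, chunk_size: int = 1 << 20) -> str:
        """Hash a file in chunks so large physics data sets fit in memory."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def build_manifest(data_dir: Path, manifest: Path) -> None:
        """Record a checksum for every file under data_dir."""
        checksums = {str(p): sha256sum(p)
                     for p in sorted(data_dir.rglob("*")) if p.is_file()}
        manifest.write_text(json.dumps(checksums, indent=2))

    def verify_manifest(manifest: Path) -> list[str]:
        """Return paths whose bits no longer match their recorded checksums."""
        checksums = json.loads(manifest.read_text())
        return [p for p, expected in checksums.items()
                if sha256sum(Path(p)) != expected]

    if __name__ == "__main__":
        build_manifest(Path("archive"), Path("manifest.json"))
        print("corrupted files:", verify_manifest(Path("manifest.json")) or "none")

In practice, archives run this kind of fixity check on a schedule and repair failures from redundant copies; the checksum manifest is what makes corruption detectable at all.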

These efforts may benefit from government nudging, including writing data preservation rules into grants and providing funding for data management staff. The DPHEP report found that a staffing increase of two to three full-time employees leads to a significant improvement in an experiment’s ability to implement long-term data preservation systems. This might seem like a drop in the bucket for a large, multinational effort such as CERN, but individual experiment teams may be more inclined to spend general allocations on detector systems than on practices, such as data preservation, that benefit the broader community.

Open Data

At present, CERN gives only a select few unaffiliated scientists access to raw data from the Large Hadron Collider, but the organization has expressed a commitment to open data access in the longer term. The CMS experiment leads current open data efforts, having approved a data preservation, re-use, and open access policy in 2012 and issued guidance on implementing the policy earlier this month. Public data releases will take place yearly, during Large Hadron Collider machine shut-downs, and data will be released three years after collection to ensure that CERN and affiliated physicists have access to the data before it goes public. In addition, where special software is required to read the data, that software will be released as open source.
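
To make the embargo rule concrete, the short sketch below checks whether a given annual release window falls after the three-year embargo on a data set’s collection date. The function name and parameters are hypothetical illustrations of the policy as described above, not CMS’s actual implementation.

    # Illustrative only: checks whether an annual public release window
    # falls after the three-year embargo described in the CMS policy.
    # The function and its parameters are hypothetical, not CMS tooling.
    from datetime import date

    def is_releasable(collected: date, window: date, embargo_years: int = 3) -> bool:
        """True if the release window is at least embargo_years after collection."""
        try:
            embargo_end = collected.replace(year=collected.year + embargo_years)
        except ValueError:  # collected on Feb 29, target year not a leap year
            embargo_end = collected.replace(year=collected.year + embargo_years, day=28)
        return window >= embargo_end

    # Data collected in mid-2012 clears the embargo for a late-2015
    # release window but not for an earlier one.
    print(is_releasable(date(2012, 7, 4), date(2015, 12, 17)))  # True
    print(is_releasable(date(2012, 7, 4), date(2014, 12, 17)))  # False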

The ATLAS experiment has also issued preliminary guidelines, although these are much less comprehensive, providing data access for unaffiliated physicists only insofar as they are involved with a particular research project.

To the extent that CERN’s physics data is produced by publicly funded experiments, the organization should strive to make it available and accessible to the public. In the future, member governments could mandate open releases; by encouraging the open availability of taxpayer-funded physics data, governments can maximize the data’s re-use value and encourage more meaningful collaboration throughout the scientific community.
