Valerie C Hendrix

and 13 more

Diverse, complex data are a significant component of Earth science's "big data" challenge. Some Earth science data, like remote sensing observations, are well understood, are uniformly structured, and have well-developed standards that are broadly adopted within the scientific community. Unfortunately, for other types of Earth science data, like ecological, geochemical, and hydrological observations, few standards exist and their adoption is limited. The synthesis challenge is compounded in interdisciplinary projects in which many disciplines, each with its own culture, must synthesize data to answer cutting-edge research questions. Data synthesis for research analysis is a common, resource-intensive bottleneck in data management workflows. We have faced this challenge in several U.S. Department of Energy research projects in which data synthesis is essential to addressing the science: AmeriFlux, Next Generation Ecosystem Experiments (NGEE) - Tropics, the Watershed Function Science Focus Area, the Environmental Systems Science Data Infrastructure for a Virtual Ecosystem (ESS-DIVE), and a DOE Early Career project using data-driven approaches to predict water quality. In these projects, we have taken a range of approaches to support (meta)data synthesis: at one end of the spectrum, data providers apply well-defined standards or reporting formats before sharing their data; at the other, data users apply standards after data acquisition. As these projects have evolved, we have gained insights into the advantages and disadvantages of each approach, how project history and resources shaped the choice of approach, and how each approach enabled data harmonization. In this talk, we discuss the pros and cons of the various approaches and present flexible applications of standards that support diverse needs when dealing with complex data.

Dylan O'Ryan

and 8 more

Data standardization can enable data reuse by streamlining the way data are collected, providing descriptive metadata, and enabling machine readability. Standardized open-source data can be more readily reused in interdisciplinary research that requires large amounts of data, such as climate modeling. Despite the importance given to FAIR (Findable, Accessible, Interoperable, Reusable) data practices and the need for open-source data, a remaining question is how community data standards and open-source data can be adopted by research data providers to ultimately achieve FAIR data practices. In an attempt to answer this question, we applied newly created water quality community data reporting formats to open-source water quality data. The water quality format was developed alongside several related reporting formats (e.g., the CSV and sample metadata reporting formats), targeting a research community that has historically published water quality data in a wide variety of formats. The water quality community data format aims to standardize how these data are stored in the ESS-DIVE (Environmental Systems Science Data Infrastructure for a Virtual Ecosystem) data repository. Adoption of these formats also follows FAIR practices, increases machine readability, and increases the reuse of these data. We applied this community format to open-source water quality data produced by the Watershed Function Science Focus Area (WFSFA), a large watershed study in the East River, Colorado, that involves many national laboratories, institutions, scientists, and disciplines. In this presentation, we demonstrate a relatively efficient process for converting open-source water quality data into a format that adheres to a community data standard.
We created examples of water quality data translated to the reporting formats that demonstrate the functionality of these data standards: descriptive metadata and sample names, streamlined data entries, and increased machine readability were products of this translation. As the community data standards are integrated into the WFSFA data collection processes, and ultimately adopted by all data providers of ESS-DIVE, these steps may enable interdisciplinary data discovery, increase reuse, and advance FAIR data practices.
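The kind of translation described above can be sketched as a small script that renames a provider's free-form columns to standardized terms and flags empty entries with a missing-value code. This is a minimal illustration only: the column names, standardized field names, and missing-value flag below are assumptions for the example, not the actual terms defined by the ESS-DIVE water quality reporting format.

```python
import csv
import io

# Hypothetical mapping from one provider's free-form column headers to
# standardized field names; a real crosswalk would follow the community
# reporting format's defined terms and units.
COLUMN_MAP = {
    "site": "Site_ID",
    "date": "Sample_Date",
    "temp (C)": "Water_Temperature_degC",
    "SpC": "Specific_Conductance_uS_cm",
}

MISSING_VALUE = "-9999"  # assumed missing-data flag for this sketch


def standardize(raw_csv: str) -> str:
    """Rename columns per COLUMN_MAP and fill blank cells with MISSING_VALUE."""
    reader = csv.DictReader(io.StringIO(raw_csv))
    out = io.StringIO()
    fields = [COLUMN_MAP.get(f, f) for f in reader.fieldnames]
    writer = csv.DictWriter(out, fieldnames=fields)
    writer.writeheader()
    for row in reader:
        writer.writerow({
            COLUMN_MAP.get(k, k): (v if v and v.strip() else MISSING_VALUE)
            for k, v in row.items()
        })
    return out.getvalue()
```

In practice, a shared crosswalk table like `COLUMN_MAP` is where most of the curation effort goes; once it exists for a data provider, subsequent submissions can be converted with little manual work.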

Justin Buck

and 11 more

A gap in community practice on data citation emerged during the AGU Fall Meeting 2020 Data FAIR Town Hall, "Why Is Citing Data Still Hard?", which addressed the use case of citing a large number of datasets such that credit for individual datasets is assigned properly. The discussion included the concept of a "data collection" and the infrastructure and guidance still needed to fully implement the capability, so that it is easier for researchers to use and to receive credit when their data are cited in this manner. Such collections may contain thousands to millions of elements, with a citation needing to include subsets of elements, potentially from multiple collections. Such citations will be crucial for reproducible research and for assigning credit to data and digital object creators. To address this gap, a data citation community of practice formed, including members from data centres, research journals, informatics research communities, and data citation infrastructure providers. The community's goal is to recommend an approach that is realistic for researchers to use, that each stakeholder can implement, and that leverages existing infrastructure. To achieve data citation of these subsets of large data collections, the concept of a "reliquary" is introduced. In this context, the reliquary is a container of persistent identifiers (PIDs) or references defining the objects used in a research study; it can include any number of elements. The reliquary can then be cited as a single entity in academic publications. The reliquary concept will enable data citation use cases such as citing elements within a data collection formed from numerous underlying datasets that have their own PIDs, unambiguously citing data used in IPCC Assessment Reports, and citing subsets of collections of research data that contain millions of elements.
The discussions over the course of 2021 have developed a theoretical concept; at the time of writing, formal use cases and initial applications are being defined. The recommendation developed by this effort will be available for review and comment by communities such as ESIP and RDA. All are welcome.
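One way to picture the reliquary concept is as a simple data structure: a container with its own PID whose members are the PIDs of the cited objects, and which renders as a single citation. This is a sketch of the concept only; no reliquary schema has been standardized, and the identifiers and fields below are placeholders invented for illustration.

```python
from dataclasses import dataclass, field

# Illustrative model of a "reliquary": a citable container of persistent
# identifiers (PIDs). The real recommendation is still being defined, so
# this structure and the example PIDs are assumptions, not a standard.


@dataclass
class Reliquary:
    pid: str      # the reliquary's own persistent identifier
    title: str    # human-readable label for the cited collection
    members: list = field(default_factory=list)  # PIDs of the objects used

    def add(self, member_pid: str) -> None:
        """Record a member PID, ignoring duplicates."""
        if member_pid not in self.members:
            self.members.append(member_pid)

    def citation(self) -> str:
        """A single citable reference standing in for all member objects."""
        return f"{self.title} ({len(self.members)} objects). {self.pid}"


# Example: two datasets from different collections cited as one entity.
r = Reliquary(pid="doi:10.xxxx/reliquary.example", title="Data used in Study X")
r.add("doi:10.5281/zenodo.1234567")
r.add("hdl:10013/epic.00001")
```

The key property the sketch captures is indirection: publications cite one PID, while credit can still flow to each member object because the container resolves to their individual identifiers.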

Emily Robles

and 6 more

Quality metadata and data are critical to advancing science and preserving data for long-term use. The Next Generation Ecosystem Experiments (NGEE) Tropics project, funded by the U.S. Department of Energy, generates and utilizes ecological, hydrological, and meteorological data from tropical forests for scientific analysis and model parameterization. The project's data team manages an archive for users to internally curate and publish data with a digital object identifier (DOI). A key focus of our project is to ensure NGEE Tropics data can be interpreted and utilized by current and future research teams. Reaching this goal, however, requires educating project members and encouraging them to prioritize and participate in data curation. We have taken an interdisciplinary approach involving domain and data scientists to create a process that makes it easy for scientists to curate high-quality data packages for archival. First, the NGEE Tropics Archive and metadata reporting templates (FRAMES) were designed using user-experience research methods to incorporate user feedback through interviews and surveys. Upon submission of data packages, thorough checks are performed to ensure quality expectations are met. Each dataset is curated individually, and feedback is provided directly to scientists to identify the optimal data organization for their packages. The data team also provides training to project members through presentations, tutorials, and 1:1 training. As a result of our efforts, package- and file-level metadata reporting to the NGEE Tropics archive fits within the existing workflow of scientists, establishing data curation as a core aspect of research. By educating the NGEE Tropics team through integration and communication, we have enabled the production of quality data packages that are findable, accessible, and usable by any member of the public.
This work will enhance the legacy of NGEE Tropics and provide a lasting resource for the tropical research community.
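The submission checks described above can be illustrated with a small completeness check run against package-level metadata before curation feedback is sent. This is a hedged sketch of the idea only: the actual FRAMES templates and NGEE Tropics archive checks are more extensive, and the required field names below are assumptions for the example.

```python
# Minimal sketch of an automated package-level metadata completeness check.
# The field names are hypothetical; the real FRAMES reporting templates
# define their own required package- and file-level metadata.

REQUIRED_FIELDS = ["title", "authors", "abstract", "variables", "site", "date_range"]


def check_package(metadata: dict) -> list:
    """Return human-readable problems found; an empty list means the check passes."""
    problems = []
    for field_name in REQUIRED_FIELDS:
        value = metadata.get(field_name)
        if value in (None, "", []):  # absent or empty counts as missing
            problems.append(f"missing required field: {field_name}")
    return problems
```

A check like this catches the mechanical gaps automatically, leaving curators free to spend their individual feedback on the substantive questions, such as the optimal organization of files within a package.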

Zarine Kakalia

and 13 more

The U.S. Department of Energy’s (DOE) East River community observatory (ER) in the Upper Colorado River Basin was established in 2015 as a representative mountainous, snow-dominated watershed for studying hydrobiogeochemical responses to hydrological perturbations in headwater systems. Led by the Watershed Function Science Focus Area (SFA), the ER pairs long-term, spatially extensive observations with experimental campaigns. The Watershed Function SFA, led by Lawrence Berkeley National Laboratory, includes researchers from over 30 organizations who conduct cross-disciplinary, process-based investigations and mechanistic modeling of watershed behavior in the ER. The data generated at the ER are extremely heterogeneous and include hydrological, biogeochemical, climate, vegetation, geological, remote sensing, and model data that together comprise an unprecedented collection of data and value-added products within a mountainous watershed, across multiple spatiotemporal scales, compartments, and life zones. Within five years of data collection, these datasets have already revealed insights into numerous aspects of watershed function, such as factors influencing snow accumulation and melt timing, water balance partitioning, and the impacts of floodplain biogeochemistry and hillslope ecohydrology on riverine geochemical exports. Data generated by the SFA are managed and curated through its Data Management Framework. The SFA has an open data policy, and over sixty ER datasets are publicly available through relevant data repositories. A public interactive map of SFA data collection sites informs the broader community about SFA field activities. Here, we describe the ER and the SFA measurement network, present the public data collection generated by the SFA and partner institutions, and highlight the value of collecting multidisciplinary, multiscale measurements in representative catchment observatories.