Organizational Unit: Library


Publication Search Results

  • Item
    Toward an Improved Understanding of Research Data Management Needs: Designing and Using a Rubric to Analyze Data Management Plans
    (Georgia Institute of Technology, 2015-06-08) Parham, Susan Wells ; Hswe, Patricia ; Whitmire, Amanda ; Carlson, Jake ; Westra, Brian ; Rolando, Lizzy
  • Item
    Development of an Analytic Rubric to Facilitate and Standardize the Review of NSF Data Management Plans
    (2015-02-09) Parham, Susan Wells ; Carlson, Jake ; Hswe, Patricia ; Rolando, Lizzy ; Westra, Brian ; Whitmire, Amanda
    The last decade has seen a dramatic increase in calls for greater accessibility to research results and the datasets underlying them. In the United States, federal agencies with over $100 million in annual research and development expenditures are now compelled to create policies regarding public access to research outcomes.[1] A sense of urgency has arisen, as researchers, administrators, and institutions must now determine how to comply with new funding agency requirements for data management planning and the sharing of data. As academic institutions develop or expand services to support researchers in meeting these planning and accessibility mandates, there is an increasing demand for mechanisms to better understand researcher needs and practices. The National Science Foundation (NSF) has required a data management plan (DMP) with each new proposal since January 2011. As documents produced by researchers themselves, DMPs provide a window into researchers’ data management knowledge, practices, and needs. They can be used to identify gaps and weaknesses in researchers’ understanding of data management concepts and practices, as well as existing barriers to applying best practices. Formal analysis of DMPs can provide a means to develop data services that are responsive to the needs of local data producers. The IMLS-funded “Data management plans as A Research Tool (DART) Project” has developed an analytic rubric to standardize the review of NSF DMPs. We seek to complement existing tools designed to assist in the creation of a data management plan, such as DMPTool and DMPonline, by developing a tool that enables consistent analysis of DMP content and quality ex post facto. In this poster, we describe the methodology for developing the analytic rubric and present results from an initial assessment of DMPs from five U.S. research universities: Oregon State University (lead), Georgia Institute of Technology, Pennsylvania State University, the University of Michigan, and the University of Oregon. The rubric was developed through a review of the NSF’s general guidelines, as well as additional requirements from individual NSF directorates.[2] The rubric translates DMP guidelines into a set of discrete, defined tasks (e.g., “Describes what types of data will be captured, created, or collected”), describes levels of compliance for each task, and provides illustrative examples. We are now conducting a more comprehensive study of DMPs, applying the rubric to a minimum of 100 plans from each study partner. The resulting data set will be analyzed with a focus on common observations across study partners and will provide a broad perspective on the data management practices and needs of academic researchers. Once the analysis is complete, the rubric will be openly shared with the community in ways that facilitate its adoption and use by other institutions.
  • Item
    Paper seismograms shake up research data workflows at Georgia Tech
    (Georgia Institute of Technology, 2014-11-03) Rolando, Lizzy ; Hagenmaier, Wendy ; Gentilello, Katie
    Although most research data collections submitted for inclusion in Georgia Tech’s institutional repository, SMARTech, are born digital and consist of only a few digital files, some researchers still have valuable, non-digital collections. Case in point is a retired seismologist who offered the Library ownership of a collection of original paper seismograms containing over 30 years of unique readings on seismic events that had occurred in the Southeast region. Given the unique and longitudinal nature of the collection, the Library, with support from the University Archives, agreed to digitize, preserve, and make accessible the complete collection through the Institution’s DSpace repository. The project was a strategic opportunity to provide access to a valuable collection of data files, and to collaboratively review and assess existing practices and workflows for dealing with digital collections. Areas of interest include: the need to review and subsequently adjust the existing repository deposit agreement to allow for the transfer of ownership and eventual destruction of the paper records; the expansion of digitization services to include patron-submitted materials; the digitization of oddly shaped and often poorly documented paper records; struggles with the hierarchical collections and communities in DSpace when archiving a complex and highly interrelated collection; finding the balance between customized, discipline-specific metadata and the standard fields used for all repository items; and the creation of collection-level metadata, using the Encoded Archival Description standard, to comprehensively document the breadth of the collection and allow future users more direct access to individual items contained within the entire collection. Our poster will discuss the specifics of our process and reflect on lessons learned, highlighting areas for future consideration and collaboration.
  • Item
    Re-purposing Archival Theory in the Practice of Data Curation
    (Georgia Institute of Technology, 2014-02-25) Rolando, Lizzy ; Hagenmaier, Wendy ; Parham, Susan Wells
    The research data sharing imperative has produced an explosion of interest around institutional research data curation and archiving. For institutions seeking to capture their intellectual output and ensure compliance with funding agency requirements, data archiving and data curation are increasingly necessary. With some notable exceptions, data curation in academic institutions is still a fairly nascent field, lacking the theoretical underpinnings of disciplines like archival science. As has been noted elsewhere, the intersection between data curation and archival theory offers data curators and digital archivists alike important theoretical and practical contributions that can challenge, contextualize, or reinforce past, present, and future theory. Archival theory has critical implications for defining the workflows that should be established for an institutional data curation program. The Georgia Institute of Technology Library and Archives has been developing the services and infrastructure to support trustworthy data curation and born-digital archives. As the need for archiving research data has increased, the intersection between data curation and digital archives has become increasingly apparent; we therefore sought to bring archival theory to bear on our data curation workflows and to root the actions taken against research data collections in long-standing archival theory. By examining two different cases of digital archiving and by mapping core archival concepts to elements of data curation, we explored the junction of data curation and archival theory and are applying the resulting theoretical framework in our practice. In turn, this work has also led us to question long-held archival assumptions and improve workflows for born-digital archival collections.