Person:
Parham, Susan Wells

ORCID
0000-0001-6630-1488
Publication Search Results

  • Development of an Analytic Rubric to Facilitate and Standardize the Review of NSF Data Management Plans
    (2015-02-09) Parham, Susan Wells; Carlson, Jake; Hswe, Patricia; Rolando, Lizzy; Westra, Brian; Whitmire, Amanda
    The last decade has seen a dramatic increase in calls for greater accessibility to research results and the datasets underlying them. In the United States, federal agencies with over $100 million in annual research and development expenditures are now compelled to create policies regarding public access to research outcomes.1 A sense of urgency has arisen, as researchers, administrators, and institutions must determine how to comply with new funding agency requirements for data management planning and data sharing. As academic institutions develop or expand services to support researchers in meeting these planning and accessibility mandates, there is increasing demand for mechanisms to better understand researcher needs and practices. The National Science Foundation (NSF) has required a data management plan (DMP) with each new proposal since January 2011. As documents produced by researchers themselves, DMPs provide a window into researchers’ data management knowledge, practices, and needs. They can be used to identify gaps and weaknesses in researchers’ understanding of data management concepts and practices, as well as existing barriers to applying best practices. Formal analysis of DMPs can provide a means to develop data services that are responsive to the needs of local data producers. The IMLS-funded “Data management plans as A Research Tool (DART) Project” has developed an analytic rubric to standardize the review of NSF DMPs. We seek to complement existing tools designed to assist in the creation of a data management plan, such as DMPTool and DMPonline, by developing a tool that enables consistent analysis of DMP content and quality ex post facto. In this poster, we describe the methodology for developing the analytic rubric and present results from an initial assessment of DMPs from five U.S. research universities: Oregon State University (lead), Georgia Institute of Technology, Pennsylvania State University, the University of Michigan, and the University of Oregon. The rubric was developed through a review of the NSF’s general guidelines, as well as additional requirements from individual NSF directorates.2 The rubric translates DMP guidelines into a set of discrete, defined tasks (e.g., “Describes what types of data will be captured, created, or collected”), describes levels of compliance for each task, and provides illustrative examples. We are now conducting a more comprehensive study of DMPs, applying the rubric to a minimum of 100 plans from each study partner. The resulting data set will be analyzed with a focus on observations common across study partners and will provide a broad perspective on the data management practices and needs of academic researchers. Once the analysis is complete, the rubric will be shared openly with the community in ways that facilitate its adoption and use by other institutions.
  • Re-purposing Archival Theory in the Practice of Data Curation
    (Georgia Institute of Technology, 2014-02-25) Rolando, Lizzy; Hagenmaier, Wendy; Parham, Susan Wells
    The research data sharing imperative has produced an explosion of interest in institutional research data curation and archiving. For institutions seeking to capture their intellectual output and ensure compliance with funding agency requirements, data archiving and data curation are increasingly necessary. With some notable exceptions, data curation in academic institutions is still a fairly nascent field, lacking the theoretical underpinnings of disciplines like archival science. As has been noted elsewhere, the intersection between data curation and archival theory offers data curators and digital archivists alike important theoretical and practical contributions that can challenge, contextualize, or reinforce past, present, and future theory. Archival theory has critical implications for defining the workflows that should be established for an institutional data curation program. The Georgia Institute of Technology Library and Archives has been developing the services and infrastructure to support trustworthy data curation and born-digital archives. As the need for archiving research data has increased, the intersection between data curation and digital archives has become progressively apparent; therefore, we sought to bring archival theory to bear on our data curation workflows and to root the actions taken on research data collections in long-standing archival theory. By examining two different cases of digital archiving and by mapping core archival concepts to elements of data curation, we explored the junction of data curation and archival theory and are applying the resulting theoretical framework in our practice. In turn, this work also leads us to question long-held archival assumptions and to improve workflows for born-digital archival collections.
  • Research Data Needs Assessment at Georgia Tech
    (Georgia Institute of Technology, 2013-10-21) Rolando, Lizzy; Parham, Susan Wells; Doty, Chris; Valk, Alison
    From late 2010 through spring 2013, the Georgia Tech Library’s Research Data Project Team conducted a multi-faceted assessment of GT research data needs. In this program, we will discuss the four methodologies used in our data needs assessment. Each methodology served a different purpose, allowing us to collect different but complementary information. While our survey provided a broad overview of practices, individual interviews contributed to a more thorough and nuanced understanding of trends observed in the survey. By analyzing data management plans submitted alongside NSF proposals, we gained a better understanding of how researchers expect to comply with funding agency requirements for data management and sharing. Finally, data archiving case studies prompted deep discussions with researchers about their data, as well as critical conversations within the Library about the types, formats, and volumes of data we can commit to preserving. This combination of methodologies and results informs our strategic goal of developing campus partnerships to collect, manage, share, and preserve Georgia Tech digital research data. While our assessment was conducted within the narrow scope of research data services, the methodologies employed can easily be adapted to study and assess other Library services.