Person:
Parham, Susan Wells

ORCID
0000-0001-6630-1488

Publication Search Results

  • Item
    NSF Data Management Plans as a Repository Research Tool
    (Georgia Institute of Technology, 2016-06) Carlson, Jake ; Hswe, Patricia ; Westra, Brian ; Whitmire, Amanda ; Parham, Susan Wells
  • Item
    Development of an Analytic Rubric to Facilitate and Standardize the Review of NSF Data Management Plans
    (2015-02-09) Parham, Susan Wells ; Carlson, Jake ; Hswe, Patricia ; Rolando, Lizzy ; Westra, Brian ; Whitmire, Amanda
    The last decade has seen a dramatic increase in calls for greater accessibility to research results and the datasets underlying them. In the United States, federal agencies with over $100 million in annual research and development expenditures are now compelled to create policies regarding public access to research outcomes [1]. A sense of urgency has arisen, as researchers, administrators, and institutions must now determine how to comply with new funding agency requirements for data management planning and the sharing of data. As academic institutions develop or expand services to support researchers in meeting these planning and accessibility mandates, there is an increasing demand for mechanisms to better understand researcher needs and practices. The National Science Foundation (NSF) has required a data management plan (DMP) with each new proposal since January 2011. As documents produced by researchers themselves, DMPs provide a window into researchers’ data management knowledge, practices, and needs. They can be used to identify gaps and weaknesses in researchers’ understanding of data management concepts and practices, as well as existing barriers to applying best practices. Formal analysis of DMPs can provide a means to develop data services that are responsive to the needs of local data producers. The IMLS-funded “Data management plans as A Research Tool (DART) Project” has developed an analytic rubric to standardize the review of NSF DMPs. We seek to complement existing tools designed to assist in the creation of a data management plan, such as DMPTool and DMPonline, by developing a tool that enables consistent analysis of DMP content and quality after the fact. In this poster, we describe the methodology for developing the analytic rubric and present results from an initial assessment of DMPs from five U.S. research universities: Oregon State University (lead), Georgia Institute of Technology, Pennsylvania State University, the University of Michigan, and the University of Oregon. The rubric was developed through a review of the NSF’s general guidelines, as well as additional requirements from individual NSF directorates [2]. The rubric translates DMP guidelines into a set of discrete, defined tasks (e.g., “Describes what types of data will be captured, created, or collected”), describes levels of compliance for each task, and provides illustrative examples. We are now conducting a more comprehensive study of DMPs, applying the rubric to a minimum of 100 plans from each study partner. The resulting data set will be analyzed with a focus on common observations across study partners and will provide a broad perspective on the data management practices and needs of academic researchers. Once the analysis is complete, the rubric will be openly shared with the community in ways that facilitate its adoption and use by other institutions.
  • Item
    Re-purposing Archival Theory in the Practice of Data Curation
    (Georgia Institute of Technology, 2014-02-25) Rolando, Lizzy ; Hagenmaier, Wendy ; Parham, Susan Wells
    The research data sharing imperative has produced an explosion of interest around institutional research data curation and archiving. For institutions seeking to capture their intellectual output and ensure compliance with funding agency requirements, data archiving and data curation are increasingly necessary. With some notable exceptions, data curation in academic institutions is still a fairly nascent field, lacking the theoretical underpinnings of disciplines like archival science. As has been previously noted elsewhere, the intersection between data curation and archival theory offers data curators and digital archivists alike important theoretical and practical contributions that can challenge, contextualize, or reinforce past, present, and future theory. Archival theory has critical implications for defining the workflows that should be established for an institutional data curation program. The Georgia Institute of Technology Library and Archives has been developing the services and infrastructure to support trustworthy data curation and born-digital archives. As the need for archiving research data has increased, the intersection between data curation and digital archives has become increasingly apparent; therefore, we sought to bring archival theory to bear on our data curation workflows, and to root the actions taken against research data collections in long-standing archival theory. By examining two different cases of digital archiving and by mapping core archival concepts to elements of data curation, we explored the junction of data curation and archival theory and are applying the resulting theoretical framework in our practice. In turn, this work also leads us to question long-held archival assumptions and improve workflows for born-digital archival collections.
  • Item
    NSF DMP Content Analysis: What Are Researchers Saying?
    (Georgia Institute of Technology, 2012-07-23) Parham, Susan Wells ; Doty, Chris
    The National Science Foundation (NSF) implemented its requirement that all grant proposals include a two-page data management plan (DMP) in January 2011. Like our colleagues at research institutions across the U.S., librarians and technologists at Georgia Tech developed services to support this mandate, including guidelines and workshops for developing a DMP. Toward the end of the requirement's first year, we assessed the impact of our consultation and outreach services by reviewing the content of submitted data management plans. In collaboration with the GT Office of Sponsored Programs, we examined NSF DMPs submitted by Georgia Tech researchers during the first eight months of the mandate (through 9/6/11). Of the 335 submitted proposals, we reviewed the content of 183 plans; we excluded those proposals that were grant supplements or transfers. While it is too early to draw conclusions regarding a correlation between the quality of data management plans and proposal approval, we used the DMP review to inform our own data services, data repository planning, and outreach.
  • Item
    Planning Cooperative Data Curation Services
    (Georgia Institute of Technology, 2011-06) Parham, Susan Wells ; Fuchs, Sara
  • Item
    Testing the DAF for Implementation at Georgia Tech
    (Georgia Institute of Technology, 2010-12) Parham, Susan Wells
    Formal data management has become an increasingly pressing need for researchers in every discipline, and the Georgia Tech Library is investigating ways in which we can support campus researchers in this area. Vast quantities of research data are generated each year – the creation of which depends upon a great investment of both intellectual effort and financial backing by individuals and groups affiliated with Georgia Tech, from departments and research centers to federal funding agencies and private donors. The curation of these assets is of strategic importance to the university and all those involved in their creation. As part of our investigation into providing data management services to GT faculty and researchers, the library is conducting an assessment of campus research data outputs based upon the Data Asset Framework (DAF), an assessment tool developed by HATII at the University of Glasgow in conjunction with the Digital Curation Centre. In preparation for implementing the DAF, the Research Data Project Team first determined the goals and scope of our assessment, and identified available resources, such as funding, technical support, discipline expertise, and institutional partners. Based on these criteria, we modified the tool to match our local requirements. Rather than focusing on a comprehensive audit of a single school or research group, we developed a plan to canvass the entire campus; we require a broad understanding of the research data environment across a university known for its decentralized nature. While much attention in the professional literature is focused on the data-intensive disciplines within science and engineering, we also wanted to include other technology-rich disciplines that have a strong presence at Georgia Tech – including computing, architecture, music technology, and humanities-based digital media.
We conducted a pilot study across all seven university colleges, along with a number of major research centers and affiliated campus units. Because we plan to survey research projects with a wide spectrum of methodologies, practices, budgets, and data management requirements, we needed to ensure that the assessment questions were not biased toward any one discipline or research scenario. This poster will outline the findings from the assessment pilot study. I will report on our initial tool design, researcher feedback, survey results, a comparison of expected and actual study outcomes, and modifications made to the assessment tool. By working with this cross-section of the Georgia Tech research community, we were able to refine and improve our original version of the assessment tool for a full, campus-wide implementation in late 2010.
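The DART poster above describes a rubric that translates DMP guidelines into discrete tasks, each with defined compliance levels, applied plan by plan across partner institutions. As a minimal, hypothetical sketch of how such a rubric-based review could be modeled in code – the task wording, compliance levels, and all names below are illustrative assumptions, not the DART project's actual rubric or tooling:

```python
# Hypothetical model of a DMP-review rubric: each task has defined
# compliance levels; each plan is scored task by task; results are
# aggregated across reviewed plans. Illustrative only.
from dataclasses import dataclass, field

# Example compliance levels (assumed, not DART's actual scale).
COMPLIANCE_LEVELS = ("absent", "addressed", "complete/detailed")


@dataclass
class RubricTask:
    description: str   # e.g. "Describes what types of data will be collected"
    example: str = ""  # an illustrative example of a complete response


@dataclass
class PlanReview:
    plan_id: str
    scores: dict = field(default_factory=dict)  # task description -> level index

    def score(self, task: RubricTask, level: int) -> None:
        """Record the compliance level assigned to one rubric task."""
        if not 0 <= level < len(COMPLIANCE_LEVELS):
            raise ValueError(f"level must index {COMPLIANCE_LEVELS}")
        self.scores[task.description] = level


def summarize(reviews):
    """Count how many plans reached each compliance level, per task."""
    summary = {}
    for review in reviews:
        for task_desc, level in review.scores.items():
            counts = summary.setdefault(task_desc, [0] * len(COMPLIANCE_LEVELS))
            counts[level] += 1
    return summary
```

A structure like this would let reviewers at different institutions score plans against the same task definitions and then compare aggregated counts, which is the kind of cross-partner consistency the rubric is meant to enable.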