[MUSIC PLAYING] ANNOUNCER: All that is solid melts into air. To put out a manifesto, you must want A, B, C to fulminate against 1, 2, 3. AMEET DOSHI: You are listening to WREK Atlanta, and this is Lost in the Stacks, a research library rock-and-roll radio show. I'm Ameet Doshi in the virtual studio with Fred, Amanda, and Wendy. Each week on Lost in the Stacks, we pick a theme and then use it to create a mix of music and library talk. Whichever you're here for, we hope you dig it. FRED: And today's show is called The Leiden Manifesto. AMANDA PELLERIN: We are wearing our purple berets and combat boots, and I also have a 12-week-old puppy right beside me ready to destroy anything that moves. WENDY: Revolution! Dissent! Yes, it is a manifesto, all right. A manifesto about bad metrics. FRED: And somebody said something about purple berets? AMEET DOSHI: Yeah. You're right, Fred. There shouldn't be purple. They should be raspberry. FRED: What? AMEET DOSHI: The Leiden Manifesto was published in a 2015 issue of Nature, and it's starting to move the needle on how we evaluate the impact of scientific research. We'll discover just how the authors managed to begin that conversation in a few moments. FRED: And our songs today are about measurements, comparative quantities, evaluations, and judgments. Everything you want in a good metric. And you know what you need to make sure everyone knows about a good metric? AMEET DOSHI: A beret. AMANDA PELLERIN: A purple beret. WENDY: A raspberry beret. FRED: I was going to say photocopied pamphlets, but yeah, let's go with that. This is "Raspberry Beret" by Prince and the Revolution right here on Lost in the Stacks. [MUSIC - PRINCE, "RASPBERRY BERET"] (SINGING) 1, 2, 1, 2, 3, uh! AMEET DOSHI: This is Lost in the Stacks, and joining us in our virtual studio is Dr. Diana Hicks, professor in the Georgia Tech School of Public Policy and lead author of the Leiden Manifesto for Research Metrics. 
So for those of you that aren't aware of research metrics, I'll give you a brief-- a very brief definition. Basically, these are ways of measuring-- some would say quantifying-- the influence or the impact of a scholarly work. This could be an article, it could be a journal, it could be a book or a book chapter. When you're trying to rank impact, a lot of people look to these research metrics, sometimes referred to as bibliometrics, as ways of doing that. So we have an expert with us. Dr. Hicks, thanks so much for joining us. DIANA HICKS: Thank you very much for inviting me on your show. AMEET DOSHI: Could you give us an overview of the Leiden Manifesto and the purpose of this manifesto? Basically, why was it needed? DIANA HICKS: Well, it was needed because building these metrics has got a lot easier with the advances in technology, so a lot more people were able to do it. It used to be done in the expert community of bibliometricians, and with its spreading out, you got some people who didn't quite know what they were doing. So there's a lot of bad practice proliferating as well. Institutions and governments had got a lot more interested in evaluating their scientists, and they were setting up systems that were not really very well-designed. And all of this could be harming science, because these numbers do have consequences when you use them for hiring or for evaluating people for promotion and things like that. So we wanted to get something out there that would put a marker in the ground and help defend science and its practice against bad evaluation practice with metrics. AMEET DOSHI: And do you think that there were particular practices that had an outsized negative influence? I'm thinking in particular of a term that our library listeners will be very aware of, and that's the journal impact factor. This is a widely-used metric in evaluating the ranking of a journal.
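For readers who want the arithmetic behind the journal impact factor Ameet mentions: it is commonly defined as the citations a journal receives in a given year to the items it published in the previous two years, divided by the number of citable items it published in those two years. A minimal sketch in Python; the journal and all the numbers here are made up for illustration:

```python
def impact_factor(citations_in_year, citable_items, year):
    """Journal impact factor for `year`.

    citations_in_year: {publication_year: citations received in `year`}
    citable_items: {publication_year: number of citable items published}
    """
    window = (year - 1, year - 2)  # the standard two-year window
    cites = sum(citations_in_year.get(y, 0) for y in window)
    items = sum(citable_items.get(y, 0) for y in window)
    return cites / items if items else 0.0

# Hypothetical journal: 80 citations in 2020 to its 2018-2019 papers,
# which numbered 40 citable items, giving an impact factor of 2.0.
jif = impact_factor({2019: 50, 2018: 30}, {2019: 20, 2018: 20}, 2020)
```

This is only the headline formula; the real Journal Citation Reports calculation involves editorial decisions about what counts as a "citable item," which is one reason the manifesto warns against leaning on the number uncritically.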
And was this really focused on a small set of practices, or were there other kinds of metrics that began the conversation that led to the manifesto? DIANA HICKS: Well, yeah, the overuse of the impact factor was a big one. Substituting for reading people's papers and evaluating their actual scholarly work, people would just reach for that number. Another thing which encouraged bad practice is the widespread availability of the h index. Now Google Scholar puts it out there and everybody bandies that around. It's just a lot easier for people to get a hold of these numbers. Way back in the day, there was a time when the Science Citation Index was only available in hard copy form, and so nobody paid any attention. And then it went to CD-ROMs, and then some of the experts had it, and the librarians had it. And then it's available from your laptop, and everybody's digging into it and not necessarily knowing the background to these things or the valid domain of application. And so it was getting wild and crazy. As well, with these numbers becoming easier to access, a lot of governments, a lot of universities were like, oh, we can use numbers, it's so much easier to evaluate people. And bad evaluation is going to harm science; we don't want that. FRED RASCOE: So I guess the intended audience of the Leiden Manifesto, which is suggesting ways to improve the practice of measuring the quality and impact of researchers-- is the audience the folks that are evaluating researchers? Like, I guess, really, the tenure and promotion committees. Is that who the primary audience is? DIANA HICKS: Well, in this country, the tenure and promotion committees are the primary audience that evaluates scientists, but that practice is not so subject to metrics.
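Since the h index comes up here and again later in the show, here is its standard definition (due to Jorge Hirsch): a researcher has index h if h of their papers have at least h citations each. A short Python sketch, with invented citation counts:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:   # the paper at this rank still "supports" h
            h = rank
        else:
            break
    return h

# Made-up citation counts for five papers: four of them have at
# least 4 citations, but only three have at least 5, so h = 4.
h = h_index([10, 8, 5, 4, 3])
```

The convenience of collapsing a whole career into one integer like this is exactly what makes the number so easy to "bandy around," which is the worry voiced above.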
I mean, they're kind of sneaking in, but the method of tenure and promotion evaluation in this country is so well-grounded historically in small groups of people reading small numbers of papers and evaluating the whole record and things like that. So the numbers are only in around the edges. In other countries, however, it is much worse, because they don't have that tradition of qualitative, intense evaluation that we do in the United States. And so governments are trying to do it from afar, or universities are trying to do it for the whole university, and that's where they reach for the numbers-- universities do and ministries do. So the audience for the manifesto is intended to be the-- well, the scientists. That's why it was published in Nature, because it's high visibility, so all scientists will know it's there and can get to it easily, and so that they can wave this in front of some bureaucrat who is trying to impose on them some nonsensical evaluation system and say, well, that's just not best practice, because here is best practice and you're violating it in x, y, and z ways. And then the scientists wanted their ministers and their bureaucrats to have access to it. So 25 groups of volunteers have translated it into 25 different languages, because of course, the people in the ministry don't necessarily work in English, so we have a website and we've posted all these translations. And so most of the people in the world can read this thing in their own language, which is helpful for having an influence beyond the scientists. AMEET DOSHI: And certainly, one of the interesting things about this work is that you all did an animated version of the manifesto. So first calling it a manifesto, and then also branching out into another media format. What was the impetus there? Were you just trying to reach a broader audience? DIANA HICKS: Well, absolutely.
I mean, that was the impetus for getting it into Nature and not a specialist journal for the science metric community-- the intent was to go for this broad audience of scientists. Well, then people started translating it, and so we had a website to mount that, and then we thought, we've got the website, we have a place to put this video, and the video would help, again, reach broader audiences. And I've heard it does. I've heard comments made that, especially if you're not a social scientist, when you read it, it's all a little baffling, and you watch the video and maybe that helps you understand it. And also, we get asked to talk at conferences a lot, so it was easy, too. It's an easy way to do a conference presentation, it's more engaging, pop the video up there to start off. And so yeah, it's engaging, and that's what we wanted. And so this led, then, to the manifesto getting an award from the Society for Social Studies of Science-- their award for outreach-- because of the video and the translations. So people recognized it as an innovative model to achieve the broader goals that we had with putting this piece together. AMEET DOSHI: Well, it sounds like the next thing is a punk rock song describing the-- so Fred and I are going to work on that. We're speaking with Professor Diana Hicks from the Georgia Tech School of Public Policy, and we'll be back with more about improving practices in research metrics after a music set. FRED RASCOE: File this set under QC91.I53. AMANDA PELLERIN: That was "Manifesto" by Slug featuring Nafets and Chucky Blk. Before that, "Tell Me More" by the Palmettes. And we started with "Not Good Enough" by Chain and the Gang. Those were songs about trying to measure up to high standards. FRED RASCOE: Welcome back. Today's Lost in the Stacks is called the Leiden Manifesto, and we're talking with Professor Diana Hicks of the Georgia Tech School of Public Policy. So Dr.
Hicks, in the last segment, we talked a little bit about research metrics. We defined what those are and how the Leiden Manifesto outlines how to go about coming up with better research metrics. Now the Leiden Manifesto is a series of 10 points, with some explanation about each of the 10 points. If you're at a cocktail party-- which I'm sure you must go to a lot-- and people come up to you and ask you, hey, I don't know about research metrics. What is this Leiden Manifesto? What do you say to folks that ask you about that? DIANA HICKS: Well, the short-ish version of the Leiden Manifesto would be explaining these principles of best practice that we should be using if we are setting out to evaluate scientists. It's things like you can't just give up all your responsibility for judgment to the numbers. The experts and their qualitative assessment remain primary, and they can't duck it by just saying, oh, here's a number, that's the answer. And you have to be careful with who you're evaluating, because different departments, even different universities and government labs, have different missions, and you have to be sensitive to that in how you evaluate people's work, because not everybody is oriented to publishing papers. Some are, for example, really trying to make social change or to get technologies into industry. So these things mean you have to evaluate differently. And then there's another issue, very important in other countries-- not here-- but the excellent journals in the world tend to be American for historical reasons. So governments tend to think, well, if our researchers are doing high-quality work, they are getting into international journals, which can mean in some fields getting into American journals. Which means that Americans are setting the standards and the topics that are of interest and things like that.
In the social sciences and humanities, this matters because it means that if you're a Spanish researcher, you're not going to be studying migrant workers in the fields of Southern Spain-- you can't get that in an American journal. So if your government is telling you you have to get in an international journal, you can't study that. So there is an idea that we shouldn't just throw out all locally relevant research and only count the stuff that's in international journals. The tendency of the systems that have been implemented is to just say, oh, the local stuff is rubbish, you need to be in an English-language journal. AMEET DOSHI: There's another-- I'm sorry, Dr. Hicks, to interrupt, but there was another phrase that caught my attention in the manifesto about false precision. And I wonder if you could define that, because the reason it struck me is it seems like a problem in many different fields. DIANA HICKS: Yeah, well, that was specifically targeted at the impact factor, which is in the Web of Science, and is a number attached to journals reflecting how highly cited they are. And it's a company that publishes it, and they publish it to three decimal places, because if you publish it to three decimal places, things change. And so that brings people back to the product. But it's not meaningful. The rankings change because of the change in the third decimal place. It's just some random fluctuation in the numbers, it's not substantive. So it's really important to back away from those sorts of things, and if you're going to look at them, just look at changes in numbers that really make a difference. And with some of these things, you can really only differentiate the top, middle, and lower groups. That's really all you could say with any confidence. And so we shouldn't be chasing these decimal places so much to give us this false sense of scientificity or something like that.
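Dr. Hicks's point about false precision can be made concrete: two scores that differ only in the third decimal place would flip a raw ranking, yet they land in the same coarse band, and a top/middle/bottom band is about the finest distinction she suggests these numbers support. A Python sketch; the impact factor values and the cutoffs are arbitrary, chosen only for illustration:

```python
def tier(score, cutoffs=(2.0, 4.0)):
    """Coarse top/middle/bottom grouping of a journal score.

    The cutoffs are invented for this example; the point is that a
    coarse band is about all the number can reliably tell you."""
    low, high = cutoffs
    if score >= high:
        return "top"
    return "middle" if score >= low else "bottom"

# Two made-up impact factors that differ only in the third decimal
# place. A raw ranking would put A above B, but the 0.003 gap is
# within ordinary year-to-year fluctuation, so it carries no signal.
jif_a, jif_b = 3.142, 3.139
same_group = tier(jif_a) == tier(jif_b)  # both fall in "middle"
```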
FRED RASCOE: And again, this h index is exactly what the Leiden Manifesto is warning against: it's one number that's often being used as a measure, whereas the Leiden Manifesto is asking to put into practice a multitude of different metrics and different numbers. And I wonder, if you take all these points in the Leiden Manifesto, to implement them, is it just a matter of being willing to do it, or are there other considerations? Budget, other kinds of resources that are needed? How easy or difficult would it be to implement this kind of policy? DIANA HICKS: Yeah, there are budgetary implications to doing it right. So there is a statement in there somewhere about spending the money to do it right. If you're going to evaluate people, and it's going to make a difference, you should have accurate, comprehensive data, and getting the data comprehensive does cost money. And so you have to budget for that, you really do. Well, and also, there's just the way people do it. Universities in this country have had numbers floating around from Academic Analytics for a few years now. A company-- who knows how they put that together? That's not open, transparent, simple, and anybody who gets some data from them and digs into it finds it's incorrect. So it just shows that it's so easy to do this wrong. And so these principles-- the openness, putting together your numbers, and allowing those evaluated to go in and be able to verify it-- are very, very important. AMEET DOSHI: Back with more from Dr. Hicks on the left side of the hour. AISHA JOHNSON: Hi, this is Dr. Aisha Johnson, assistant professor and Program Director for the MLS program at North Carolina Central University. And you are listening to Lost in the Stacks on WREK Atlanta. AMEET DOSHI: Today's Lost in the Stacks is called the Leiden Manifesto.
The following passage was written by two leading bibliometricians, Professors Blaise Cronin and Cassidy Sugimoto, and published in Beyond Bibliometrics: Harnessing Multidimensional Indicators of Scholarly Impact. Quote, "Do the tools we use and the indicators we favor measure what we claim or believe them to measure? And if so, are those measures reliable? That is, capable of producing consistent and, ideally, transparent results? Are we not sometimes so seduced by the increasing sophistication of our procedures-- data capture and cleaning, weighting, normalization, multivariate analysis, modeling, visualization-- that the technical tail could almost be said to be wagging the disciplinary dog?" While you think about that, file this set under Z669.8 .B49 2014 EB. And you know what that EB is for, right, folks? AMANDA PELLERIN: You just heard "Cheap Signals" by Soft Shadows, "Calculated" by New Color, and "Measurement" by Vacation Days. Songs about counting and compiling and wondering if it all means anything. AMEET DOSHI: We're talking about the Leiden Manifesto for Research Metrics on Lost in the Stacks today, and our guest is Professor Diana Hicks of the Georgia Tech School of Public Policy, lead author of the Leiden Manifesto. So Professor Hicks, we buried the lede a bit. The manifesto itself was published in 2015 in Nature. So we have some opportunity to look back and-- not to get too meta, but what's been the impact of your work, in one number? DIANA HICKS: In one number, right. 25. That's how many languages it's been translated into. Well, it's not how many countries, but it certainly adds up to more than half the world's people being able to read this thing in their own language. So there you go, there's the demand-- as I said, particularly among government bureaucrats. And it gets out there and not everything gets back to the authors, so we don't know exactly what impact it has had, though a couple of things have gotten back to us.
We do know that there have been some efforts at universities. For example, universities that go about framing up guidance for a metrics-based evaluation do then reference the Leiden Manifesto. So it has helped develop some guidelines like that. And we do know it was hugely downloaded. The traffic to our website has been quite good, sustained. So the interest is there. And it seems like anybody now who writes a review of how to measure things with citations references this. So it's become like this obligatory passage point, which is a technical term from the scholar Bruno Latour in that area. For the specialists in science metrics, it's not like any of this was new, but it pulled it all together and teed it up nicely. And so it just becomes a standard way, when you're reviewing things in this area, to say, oh, the Leiden Manifesto, and it just stands for a whole bunch of stuff. And so it's being used in the original area that we work in as well as, I think, more broadly in policy-making, from the little hints we've got, which is satisfying-- to have that kind of interest and impact. FRED RASCOE: Obviously, as you said, the Leiden Manifesto wants measures of impact and metrics to be drawn from multiple sources. But I wonder, in the five years since this thing has been published, have there been entities, organizations, companies, even, that have tried to develop metrics tools that are using this as some sort of a guidepost, maybe? DIANA HICKS: Well, it's interesting, because yeah, Elsevier recently put out a press release to the effect that they were committing themselves to the principles in the manifesto, and to not publishing their version of the journal indicator to three decimal places and things like that.
Because companies-- that's kind of important, because a lot of the training of people using indicators around the world is done by these companies, the companies that sell Scopus and Web of Science, the databases that are used in the evaluation. So their sales teams will go out and they'll train people around the world in best practice. And so having them on board with it is important for disseminating it into the actual places where the numbers are going to be developed and applied. And I think actually, Thomson Reuters, Web of Science, did something on that early on. We saw some of their marketing material which was saying, oh look, we're compliant with the Leiden Manifesto. So it's become a thing like that. These companies want to say, oh, look, here's best practice and here's how we align with it. So it has got that kind of symbolic status, I guess. AMEET DOSHI: Well, we just have time for a few more questions, just a few more minutes here. I am curious to know, Professor Hicks, are there any other manifestos that you or your team of co-conspirators here are working on? I would be excited to know if there's other manifestos in the works. DIANA HICKS: No, no-- this one was just the thing that was on our minds at that time that was developing so poorly. I mean, there was the DORA Declaration around the same time. That was a bunch of publishers who got together to say, stop using the impact factor so much. Stop relying on that so much. It's wrong, don't do that. And that one, I think, had some impact. It means everybody now has at least some idea that they should not just default over to the impact factor whenever they have to make a judgment on something. So I think there has been some benefit to these things. People are getting more sophisticated.
AMEET DOSHI: And I'll just finish by saying, this podcast will be available to everyone to download, but you're going to be giving a talk-- an online talk-- with the Georgia Tech Library on Tuesday, October 13. So if you're hearing this in your car in Atlanta, there's an opportunity to actually do a Q&A with Professor Hicks via the GT Library. You can find out more at the Lost in the Stacks website or by going to the Georgia Tech Library's events page. Professor Hicks, thanks so much for coming on the show and talking with us about the Leiden Manifesto. I learned a lot and I'm sure our listeners did as well. We really appreciate your time. DIANA HICKS: Well, thank you for having me. AMEET DOSHI: We've been speaking today with Dr. Diana Hicks, Georgia Tech professor in the School of Public Policy and lead author of the Leiden Manifesto for Research Metrics. File this set under 62.R368. FRED RASCOE: You just heard "Making it Right" by Remember Sports. And before that, "Are We Going to Be All Right?" by the Springfields. And we started off with "I Never Knew" by The Avocados. Songs about judgments, consequences, and corrections. AMANDA PELLERIN: Today's show was called the Leiden Manifesto. We spoke to Dr. Diana Hicks about best practices in research metrics. FRED RASCOE: If you are interested in learning more about this topic, Professor Hicks will be giving an online talk on Tuesday, October 13. AMANDA PELLERIN: You can find out more and register to attend at library.gatech.edu/events. Roll those credits. ANNOUNCER: Lost in the Stacks is a collaboration between WREK Atlanta and the Georgia Tech Library. Written and produced by Ameet Doshi, Amanda Pellerin, Charlie Bennett, Fred Rascoe, Marlee Givens, and Wendy Hagenmaier. AMANDA PELLERIN: Today's show was expertly edited, assembled, counted, measured, and quantified by Fred.
AMEET DOSHI: Legal counsel and a tattered and torn 1963 edition of Little Science, Big Science were provided by the Burrus Intellectual Property Law Group in Atlanta, Georgia. WENDY HAGENMAIER: Special thanks to Professor Hicks for being on the show, to the team of co-authors of the Leiden Manifesto, and thanks, as always, to each and every one of you for listening. AMANDA PELLERIN: Find us online at lostinthestacks.org. And you can subscribe to our podcast pretty much anywhere you get your audio fix. FRED RASCOE: We've got a couple of reruns coming up, but we will have new episodes for you in just a few weeks. AMEET DOSHI: It's time for our last song today. The Leiden Manifesto may not yet have been widely accepted or adopted by all scholarly metric providers. However, the manifesto has shown the way to something better. So let's close with "Looking for a Better Thing" by Atlanta's own Ruby Velle and the Soulphonics, right here on Lost in the Stacks. Have a great weekend, everyone. [THEME MUSIC PLAYING] [MUSIC - RUBY VELLE AND THE SOULPHONICS, "LOOKING FOR A BETTER THING"]