If your archives has limited resources and lots of collections that need attention, how do you decide which ones to focus on? In the HSP Archives Department, one of the main tools we use is the HSP collection survey methodology, which has become a model for collection assessment work at dozens of institutions around the U.S. In this blog post I'd like to give an overview of our survey method -- how it works, how we use it, and where we're headed with it in the future.
The HSP survey methodology uses a combination of qualitative and quantitative measures to assess collections. Each collection is rated on a 1-5 numerical scale for physical condition, quality of housing, physical access, and intellectual access, with 5 being the highest. A research value rating is determined by adding together separate 1-5 ratings for a collection's interest and documentation quality, yielding a combined score of 2 to 10. Surveyors also record notes that provide substance and specifics to help explain the numerical ratings.
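For readers who like to see structure spelled out, here is a minimal sketch in Python of what one survey record might look like as a data structure. The field names are my own shorthand for illustration, not HSP's actual database schema:

```python
from dataclasses import dataclass

@dataclass
class SurveyRecord:
    """One collection's survey ratings (hypothetical field names, not HSP's schema)."""
    title: str
    extent_linear_feet: float
    physical_condition: int      # 1-5, 5 = best
    quality_of_housing: int      # 1-5
    physical_access: int         # 1-5
    intellectual_access: int     # 1-5
    interest: int                # 1-5, one half of the research value rating
    documentation_quality: int   # 1-5, the other half
    notes: str = ""              # general and conservation notes

    @property
    def research_value(self) -> int:
        # Research value = interest + documentation quality, so it ranges 2-10
        return self.interest + self.documentation_quality
```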
As an example, the numerical ratings for the Beath family menu card collection, 1860-1913 (7.5 linear feet), look like this:
The General Note and Conservation Note for this collection look like this:
Here are a couple of examples of what the different survey ratings mean. Before it was processed in 2010, the George G. Meade collection got a middling score of 3 for quality of housing. This photo is above average for that collection -- closer to a 4 than a 3:
By contrast, the Belfield papers got a 1 for quality of housing -- as low as you can go. This image from the PACSCL processing project blog shows why:
Numerical survey ratings enable us to set priorities across all of our archival collections. Generally speaking, a collection that gets a high research value rating (7 or above) and low ratings for physical condition, housing, and/or access is a high priority for processing (and in many cases for conservation work). This helps us pick collections to include in grant proposals, feature in our Adopt-a-Collection program, and assign to staff members and interns. We don't rely on ratings alone to make these selections, but they are a starting point.
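Continuing the hypothetical SurveyRecord sketch from above, the triage rule just described might look something like this in code. The cutoff of 2 for a "low" rating is my illustrative choice, not a fixed part of the methodology:

```python
def is_high_priority(rec: SurveyRecord, low: int = 2) -> bool:
    """Flag collections with high research value and at least one low
    condition, housing, or access rating.

    The threshold for "low" is an assumption for illustration; in practice
    the cutoff would depend on the repository's own rating distribution.
    """
    physical_ratings = (
        rec.physical_condition,
        rec.quality_of_housing,
        rec.physical_access,
        rec.intellectual_access,
    )
    return rec.research_value >= 7 and any(r <= low for r in physical_ratings)
```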
Suppose we want to put together a grant proposal focusing on business history collections. Using database query functions, we can generate a list of candidate collections that have specific rating combinations and feature business-related keywords in their descriptions. If we want, we can factor in collection size, span dates, or other attributes as well. Then we can go through the candidate list and pick out the collections that are most appropriate for this particular grant. This stage usually involves staff discussion, poking around in the collections themselves, and considering issues that the survey data can't capture. (Is a given collection likely to grow or shrink when it's processed? Could we feature it in a publication or public program? Does it tie in with particular interests of the funder we're going to pitch to?)
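As a rough sketch of that kind of query, here is what a business-history candidate list might look like as a Python filter over the hypothetical records above. The keyword list and cutoffs are illustrative stand-ins, not HSP's actual query criteria:

```python
def business_history_candidates(records, keywords=("business", "merchant", "trade")):
    """Return survey records that look like business-history candidates.

    Filters on research value and keyword matches in the survey notes,
    then sorts so the highest-rated, largest collections surface first.
    """
    hits = [
        rec for rec in records
        if rec.research_value >= 7
        and any(k in rec.notes.lower() for k in keywords)
    ]
    return sorted(
        hits,
        key=lambda r: (r.research_value, r.extent_linear_feet),
        reverse=True,
    )
```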
Querying the survey numbers usually turns up some high-priority collections that are already on our radar, but there are often some surprises as well -- collections that have lots of potential but haven't gotten any attention since they were surveyed years ago. I used to think HSP didn't have any sports history collections to speak of. Then I crunched some survey numbers and rediscovered a 100-linear-foot collection that documents the development of a local tennis tournament into an international event.
The HSP survey methodology was developed by David Moltke-Hansen, who was HSP's president from 1999 to 2007, and Rachel Onuf, who led a Mellon-funded project to survey HSP's manuscript and graphic collections in 2000-2002 and then headed HSP's Manuscripts and Archives Department until 2004. Since then, the Mellon Foundation has funded survey projects based on the HSP methodology by Columbia University, the University of Virginia, the Philadelphia Area Consortium of Special Collections Libraries (PACSCL), and the Black Metropolis Research Consortium (BMRC) in Chicago. Other institutions that have conducted collection surveys based on the HSP method include the University of Massachusetts-Amherst, the Chicago History Museum, and Penn State University (for a survey of Civil War homefront collections at small repositories around Pennsylvania). Each of these institutions has adapted or modified the survey method to some degree to meet its particular needs.
In 2009, the developers of Archivists' Toolkit, an open-source archival collections management database application, added an assessment module based closely on the HSP survey method. This made it possible for HSP to begin shifting our collections management data from our old MS Access database to AT.
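A migration like that mostly comes down to renaming and reshaping columns. As a hedged sketch only: the column names below are invented for illustration, and the real move would go through Archivists' Toolkit's own import tools rather than a hand-rolled script like this one:

```python
import csv

# Hypothetical mapping from old Access column names to assessment-module
# style field names; neither set is guaranteed to match the real databases.
FIELD_MAP = {
    "PhysCond": "physical_condition",
    "Housing": "quality_of_housing",
    "PhysAccess": "physical_access",
    "IntAccess": "intellectual_access",
}

def reshape_for_import(access_csv_path: str, out_csv_path: str) -> None:
    """Rename columns in a CSV exported from Access so a downstream
    import tool could consume them."""
    with open(access_csv_path, newline="") as src, \
         open(out_csv_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(
            dst, fieldnames=[FIELD_MAP.get(f, f) for f in reader.fieldnames]
        )
        writer.writeheader()
        for row in reader:
            writer.writerow({FIELD_MAP.get(k, k): v for k, v in row.items()})
```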
At HSP, we have attempted to make surveying an integral part of the accessioning workflow. This has not always been successful, mainly because of a lack of time, but after a hiatus we are back to surveying regularly and chipping away at the small backlog that has built up. Currently, each new collection larger than 1 linear foot (and some smaller ones) gets surveyed within a couple of months after it is acquired. Cary Majewicz (HSP's technical services archivist) and I do the surveying together as a team. Sometimes we invite other staff members or interns to join us, both to get the benefit of their knowledge and expertise and to help more people understand the survey methodology and its uses as a collections management tool.
Numerical ratings have an aura of objectivity that can be misleading. Inevitably, different people looking at the same collection will sometimes come up with different ratings. This is especially true when assessing a collection's research value, where individual interests and biases most easily come into play. It's important for surveyors to have a grounding in different areas of knowledge, be familiar with broader trends in historical research, and learn to set aside their own likes and dislikes as much as possible. Working in teams also helps to even out differences between individual surveyors. In the end, we see the survey ratings as an imperfect but useful tool. They're not fully objective, but they do provide a consistent yardstick and shorthand for comparing different collections.
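As a toy illustration of why teams help, consider averaging independent ratings for a single field and flagging large gaps for discussion. This reconciliation step is my own sketch, not part of the HSP protocol, where teams reach a rating by talking it through:

```python
def reconcile(ratings_by_surveyor: dict[str, int]) -> tuple[float, bool]:
    """Average a team's ratings for one field and flag big disagreements.

    A spread of more than one point (an illustrative threshold) suggests
    the surveyors should discuss the collection rather than just split
    the difference.
    """
    values = list(ratings_by_surveyor.values())
    mean = sum(values) / len(values)
    needs_discussion = max(values) - min(values) > 1
    return mean, needs_discussion

# e.g. reconcile({"surveyor_a": 4, "surveyor_b": 2}) -> (3.0, True)
```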
Over the past couple of years we've made two small additions to the survey protocol. First, we added a numerical rating for "Recommended Processing Level," representing our five possible processing levels, from basic collection-level record (Level 1) to full-scale traditional processing (Level 5). (We developed this five-tiered processing schema starting in 2007, based on "More Product, Less Process" principles.) More recently, we started including a processing cost assessment as part of the survey record for all collections that get a research value rating of 6 or higher. This makes it easier to plug these collections into our Adopt-a-Collection program.
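In terms of the hypothetical sketch above, the second addition is a simple trigger rule on the combined research value score:

```python
def needs_cost_assessment(rec: SurveyRecord) -> bool:
    """A research value rating of 6 or higher triggers a processing cost
    assessment as part of the survey record (per the protocol described above)."""
    return rec.research_value >= 6
```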
This fall, we'll be exploring a new use of the survey method. HSP recently launched a pilot project to gather information about archival collections at small, non-professionally run repositories in the Philadelphia area, such as historic houses, small museums, and neighborhood historical societies. The project has the unwieldy name of Hidden Collections Initiative for Pennsylvania Small Archival Repositories (HCI-PSAR). Once again, the Mellon Foundation is the funder. (For more information on this grant, see our press release.)
HCI-PSAR surveying began this week at the Byberry Library in northeastern Philadelphia. We'll be featuring reports on this work both on Fondly, Pennsylvania and on a new project blog to be launched soon. We expect that the HSP survey method will need to be further adapted to address the particular circumstances of non-professionally run institutions, especially given the lack of standard archival management practices. (For example, are materials even divided into discrete collections?) It will be interesting to see how this unfolds.