A picture may be worth a thousand words, but unless someone pairs the photo with a description, a viewer may perceive a thousand inaccurate ones.
Thousands of online repositories house historical image collections. A person viewing a photo of a ship can only guess at its significance. If indexers attach resource descriptions, subject headings, and tags to that photo, the viewer knows exactly what is pictured and may even value the photo more for knowing it.
This study examines how well images are described under a preferred method of description: the Shatford/Panofsky framework. Stewart condenses the method (page 9) into four major facets: who (objects and beings), what (activities, events, and emotions), where (place), and when (time). Each facet has three aspects: specific of (iconography), generic of (pre-iconography), and abstract or about (iconology).
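The condensed framework is essentially a 4x3 grid of facets and aspects, with each image's terms sorted into one of twelve cells. A minimal sketch of that grid, with illustrative tags of my own invention (not from the study), might look like this:

```python
from itertools import product

# The four facets and three aspects of the condensed Shatford/Panofsky matrix.
FACETS = ("who", "what", "where", "when")
ASPECTS = ("specific", "generic", "abstract")

def empty_matrix():
    """Return a 4x3 grid with an empty term list in each of the 12 cells."""
    return {cell: [] for cell in product(FACETS, ASPECTS)}

def assign(matrix, facet, aspect, term):
    """Record a term in its facet/aspect cell."""
    matrix[(facet, aspect)].append(term)

def fill_rate(matrix):
    """Fraction of the 12 cells that received at least one term."""
    filled = sum(1 for terms in matrix.values() if terms)
    return filled / len(matrix)

# Hypothetical tags for a historic photo of a ship launch.
m = empty_matrix()
assign(m, "who", "specific", "RMS Titanic")   # specific of: a named ship
assign(m, "who", "generic", "ocean liner")    # generic of: a kind of object
assign(m, "what", "abstract", "optimism")     # about: an interpretation
assign(m, "where", "generic", "shipyard")
assign(m, "when", "specific", "1911")

print(f"{fill_rate(m):.0%} of cells filled")  # 5 of 12 cells
```

A fill-rate measure like this makes it easy to see how often indexers or taggers leave cells (especially the abstract ones) empty.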
In the study, Stewart (page 9) observed how 10 indexers assigned subject headings to 28 historic photos. They assigned 223 subject headings in total: 52% specific, 46.2% generic, and 1.8% abstract. This confirmed indexers' reluctance to assign abstract headings. I would put it this way: indexers are very conscious of the risk of offering a wrong interpretation. I must agree, but this caution leaves out searchers who may be looking for exactly the interpretation an indexer withheld. In their carefulness, indexers cause many searchers to miss photos in their results for lack of abstract descriptions, especially in online interfaces where images are searched without the assistance of a librarian.
What if users assigned tags instead of professionals? Stewart set up a four-week trial (page 17) with 33 photos split into two sets: a first set of 11 untitled photos and a second set of 22 photos with title and photographer. The 33 photos drew 1,934 tags in total: 19.62% specific, 60.3% generic, and 20% abstract, though almost half of the category spots (4 facets times 3 aspects, for 12 spots per photo) were left blank. The category distributions of the first and second sets were very close. The tag total is a significant increase in the number of terms used, as taggers also included personal reactions and feelings about the images.
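Distributions like these are just category counts divided by the total. A quick sketch of that computation, using hypothetical counts of my own (chosen to sum to 1,934, not Stewart's raw data):

```python
from collections import Counter

# Hypothetical per-category tag counts; not Stewart's raw data.
counts = Counter(specific=380, generic=1166, abstract=388)
total = sum(counts.values())

for category, n in counts.items():
    print(f"{category}: {n / total:.1%}")
```

Recomputing percentages from raw counts this way is a useful sanity check that the reported categories account for the full tag total.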
Twenty-eight taggers were then trained on the Shatford/Panofsky method and studied as they tagged another set of images (page 21). This produced more terms across all twelve facet/aspect spots and more terms overall. The categories were 15.8% specific, 58.4% generic, and 25.7% abstract.
Terms will evolve as the years go by, and the need for more current natural language is at hand. I assumed non-professional taggers would use a glut of miscellaneous terms, but with some training and compensation they can be taught to give appropriate terms to images, along with the abstract terms that professional indexers have not been supplying. I expected user tags to vary wildly, but the research indicates that taggers showed a great interest in using natural language (page 20). I am encouraged by this research and look forward to more use of non-professionals, though it gives me pause to think of the number of LIS professional jobs turning to non-MLIS holders.