Ten years ago, the Harvard Art Museums, a teaching and research museum on the campus of Harvard University, started using multiple computer vision (CV) services to tag and describe its collections.
The initial goal was to improve search and discovery of the collections in both internal and external systems by augmenting curator-written descriptions with machine-generated metadata.
During early tests it was apparent that CV showed a lot of promise for describing representational art in ways our catalogers didn't have time to, but it quickly stumbled when presented with more abstract imagery. While assessing those stumbles, we started asking a lot of questions, including:
Ten years later, we've fully embraced the inconsistency of CV and, now, of modern large language models (LLMs).
Jeff Steward will cover:
HS DAM events always provide insights relevant to my day-to-day work, as well as to future planning, for our asset and rights management. The conference sessions often go beyond technical and practical applications to cover related areas that shouldn't be overlooked, such as user adoption, workflows, best practices, and emerging technologies. These events are a must-have for my DAM journey.
Henry Stewart DAM events are well designed and well managed, with useful presentations drawn from a wide range of disciplines and challenges. They help us evaluate our DAM strategy against actual use cases and solutions.
HS DAM events are essential for my sanity. The topics covered are always relevant to the challenges we face managing our visual assets and planning our expansion from a departmental MAM to an integrated institutional DAM.