VisIRR: Interactive Visual Information Retrieval and Recommendation for Large-scale Document Data

Choo, Jaegul
Lee, Changhyun
Clarkson, Edward
Liu, Zhicheng
Lee, Hanseung
Chau, Duen Horng
Li, Fuxin
Kannan, Ramakrishnan
Stolper, Charles D.
Inouye, David
Mehta, Nishant
Ouyang, Hua
Som, Subhojit
Gray, Alexander
Stasko, John T.
Park, Haesun
We present VisIRR, an interactive visual analytics system for information retrieval and recommendation in large-scale document discovery. VisIRR combines two paradigms: passive pull, in which users retrieve documents through query processes, and active push, which recommends items of potential interest based on user preferences. Equipped with efficient dynamic query interfaces for a large document corpus, VisIRR visualizes the retrieved documents as a scatter plot along with their overall topic clusters. At the same time, based on interactive, personalized preference feedback on documents, VisIRR recommends documents drawn from the entire corpus, beyond the retrieved sets. These recommended documents are placed in the same scatter space as the retrieved ones, so that users can seamlessly analyze both in an integrated manner. We describe the state-of-the-art computational methods that make these integrated, informative representations and real-time interaction possible, illustrate how the system works through detailed usage scenarios, and present a preliminary user study evaluating the system's effectiveness.
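The "active push" idea the abstract describes, propagating a user's preference ratings over document similarities to score unrated documents across the whole corpus, can be illustrated with a minimal sketch. The toy corpus, vectors, and propagation parameters below are illustrative assumptions, not the actual data, features, or algorithm settings used in VisIRR.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense document vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(vectors, preferences, alpha=0.85, iters=50):
    """Propagate preference scores (doc index -> rating) over a
    cosine-similarity graph; returns a score for every document.
    alpha and iters are hypothetical tuning constants."""
    n = len(vectors)
    sim = [[cosine(vectors[i], vectors[j]) for j in range(n)]
           for i in range(n)]
    # Row-normalize similarities into transition weights.
    for row in sim:
        s = sum(row)
        if s:
            row[:] = [w / s for w in row]
    base = [preferences.get(i, 0.0) for i in range(n)]
    scores = base[:]
    for _ in range(iters):
        scores = [
            (1 - alpha) * base[i]
            + alpha * sum(sim[j][i] * scores[j] for j in range(n))
            for i in range(n)
        ]
    return scores

# Toy corpus: docs 0-1 share one topic, docs 2-3 another.
docs = [[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]]
scores = recommend(docs, {0: 1.0})  # user rated doc 0 positively
# Unrated doc 1 (same topic as doc 0) outranks docs 2-3.
```

In the system itself this kind of scoring would run over the full corpus, so the retrieved set and the recommended set can be laid out in the same scatter space.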
Resource Subtype: Technical Report