While the complexity of the datasets used in computer vision has steadily increased over the years, the supervision that vision systems receive from their human teachers has remained limited. In recent years, the vision community has seen growing interest in approaches that enrich the interaction between humans and machines for computer vision tasks, allowing communication beyond labels. Researchers have explored the role of the human teacher in visual recognition and search, and have expanded the channel over which human users can "teach" visual learning systems. For instance, in recognition, a human can give supervision or feedback to the system so that it improves its predictions. There is also great value in enhancing the system-to-human direction of human-machine communication for visual recognition. Researchers have recently studied how to form sentences that explain images, and how to describe items in a way that is most natural to human users. There has also been work on visualizing for a human user the system's perception of an image.
The goal of this workshop is to study approaches for allowing humans to provide richer supervision to visual learning systems, and to interactively give feedback to the system so it can learn better models for recognition or make more accurate predictions at test time. We are also interested in strategies for making the work of vision systems more interpretable to their human users. Related topics include:
- crowdsourcing / human computation for vision
- active visual learning
- recognition with a human in the loop
- visual discovery with a human in the loop
- human feedback to algorithms
- human debugging
- semantic visual attributes
- interactions between language and vision
- visualization of the system's internal models in a way intuitive to human users
Submission
We invite 4-page extended abstracts that study strategies for improving the communication (broadly defined) between a computer vision system and a human user, with applications to recognition and image/video retrieval.
We encourage submissions of both new, unpublished work and work that was previously published in a conference (including ECCV 2014) or a journal. We require 4-page abstracts in ECCV format by the submission deadline. Reviewing will be double-blind, or single-blind in the case of previously published work. We will give a best paper award to one of the accepted abstracts, and this work will be presented as an oral. Other accepted papers will be presented as posters with 3-minute spotlights. There will be no proceedings.
Submission link: Please submit your extended abstracts here by July 15. Submission is now open.
Important Dates
July 15, 2014: Submission deadline
July 28, 2014: Acceptance notification
September 7, 2014: Workshop
Program
To be announced.
- Serge Belongie (Cornell Tech)
- Larry Zitnick (Microsoft Research)
- Vlad Morariu (University of Maryland)
- Ashish Kapoor (Microsoft Research)
- Antonio Torralba (MIT)
- Adriana Kovashka (UT Austin, University of Pittsburgh)
- Kristen Grauman (UT Austin)
- Devi Parikh (Virginia Tech)
- Alex Berg (UNC Chapel Hill)
- Steve Branson (Caltech)
- Ian Endres
- David Forsyth (UIUC)
- James Hays (Brown)
- Derek Hoiem (UIUC)
- Jonathan Krause (Stanford)
- Vicente Ordonez (UNC Chapel Hill)
- Genevieve Patterson (Brown)
- Alexander Sorokin
- Catherine Wah (UCSD)
- Kota Yamaguchi (Stony Brook)