MODVIS 2024: Computational and Mathematical Models in Vision
A satellite workshop to the 2024 Vision Sciences Society Annual Meeting
May 15-17, 2024
MODVIS 2024 will be held at the VSS conference venue (the Tradewinds Island Resorts in St. Pete Beach, FL). We hope you will consider participating; we are announcing the dates now so that those of you who are interested can take them into account when making travel plans. The organization will be essentially identical to that of previous workshops.
For those of you wishing to stay at the Tradewinds resort for the workshop, we suggest booking your room early. To find out more about the Vision Sciences Society Annual Meeting, visit their website.
MODVIS is a small workshop dedicated to the investigation of formal models in vision research. We intend this to be a very special workshop, one that fills a critical niche. Specifically, we hope that it will:
- Help move our field forward, because substantial progress in any field is not possible without formal theories. Modeling work at VSS and similar conferences is scattered among the diverse parallel sessions, making it nearly impossible to keep up with new developments. Our proposed format will bring a diverse group of modelers together.
- Facilitate interactions among theoretically minded researchers. A discussion among modelers always goes beyond phenomena and effects, whereas papers at ordinary meetings rarely go beyond the data. Our intent is to focus on the role of mechanisms (vision algorithms).
- Encourage people to present their formal theories and models in some detail (presentations may be slightly longer than at VSS). We would all like to see more equations and cost functions, and to be able to discuss the stability and complexity of our models. Time will be available to do this within the sessions and on the beach between them. Models used in only one part of the field could prove very useful in another.
- Attract machine vision researchers interested in human vision, because our current knowledge of human vision has finally reached the point where our models can actually be used by seeing machines.
- Keep us up-to-date with modeling across all specialized areas within vision. This will be beneficial for those of us who teach: one could offer a graduate seminar in vision every year based on the talks presented at the kind of workshop we are proposing.
- Help integrate vision into a single field: We think that solving this obvious binding problem in our field should be possible, and may even prove helpful for solving the binding problem in the brain.
Organizers
Marianne Maertens, Technische Universität Berlin, marianne.maertens@tu-berlin.de
Jeffrey B. Mulligan, Freelance Vision Scientist, jbmull@gmail.com
Zygmunt Pizlo, UC Irvine, zpizlo@uci.edu
Anne B. Sereno, Purdue University, asereno@purdue.edu
Qasim Zaidi, SUNY College of Optometry, qz@sunyopt.edu
Abstract Submission
The abstract submission system is expected to be online in early December; watch this space for updates.
Early submission deadline: December 29, 2023
Late submission deadline: March 15, 2024
Please note: You cannot submit an abstract without registering.
Registration
Early Registration Deadline: March 31, 2024
Early Registration Fees: $65 for student/retiree and $150 for regular
Registration Fees (after March 31): $80 for student/retiree and $180 for regular
Fees pay for audio-visual expenses, coffee and snacks, and the VSS satellite fee.
Keynote Speaker
Prof. Anitha Pasupathy, University of Washington
Neuronal basis of object segmentation in macaque visual cortex
Image segmentation – the process by which scenes are parsed into component objects – is a fundamental aspect of vision and a cornerstone of scene understanding; its neural basis, however, is largely unknown. To begin to understand how early visual representations are transformed in successive stages to facilitate segmentation and scene understanding, we studied the responses of neurons in mid- and high-level processing stages along the ventral object processing pathway of the primate brain.