Project Abstract/Summary
Humans perform visual search tasks many times throughout the day. Examples include searching for the perfect snack in a supermarket, looking for a misplaced phone in a cluttered room, or finding a friend in a busy restaurant. The efficiency with which humans perform these tasks has a significant impact on the quality of daily life. The current project uses behavior, eye-tracking, virtual reality, brain activity, and computational modeling to test the idea that the efficiency of visual search depends on two stages of processing with different computational goals: 1) a first stage with “good enough” attentional guidance to select potential targets for closer inspection, and 2) a second stage involving a decision that emphasizes accuracy over speed. The overall goal is to understand the organizing principles that make visual search efficient. In addition to the scientific work, this project supports the training and professional development of undergraduate students through activities that build scientific literacy and analytical skills and that emphasize links between classroom learning and the workforce.
The project tests the hypothesis that guidance and decisions in visual search rely on different types of information because they have different computational goals, not because they operate on different feature types. One set of experiments tests the prediction that, during visual search, attentional guidance prioritizes speed over accuracy whereas decisions prioritize accuracy over speed. To do this, the experiments use simple stimulus arrays in which the precision and feature dimension used for guidance and for decisions are measured separately with eye-tracking data, electroencephalography/event-related potentials (EEG/ERPs), and drift-diffusion modeling (DDM). Another set of experiments, also using behavior, EEG, and DDMs, tests the prediction that attentional guidance relies on non-target information, such as prior knowledge about typical scenes, because that information acts as a rapidly detectable spatial cue to the target. Additionally, the project uses immersive virtual reality and DDMs to examine search efficiency in naturalistic contexts and to test for hierarchical processes in which guidance prioritizes information and progressively reduces the search space. Overall, the project aims to address important limitations of existing models of attention and to understand what makes visual search efficient.
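For context on the modeling approach named above, the sketch below is a minimal, generic drift-diffusion simulation. It is an illustration written for this summary, not the project's actual model or code; the function name, parameter values, and settings are assumptions chosen only to show how the DDM's boundary-separation parameter captures a speed-accuracy tradeoff, with wider decision boundaries yielding slower but more accurate choices.

import numpy as np

def simulate_ddm(drift, boundary, noise=1.0, dt=0.001, non_decision=0.3,
                 n_trials=2000, seed=None):
    """Simulate a basic drift-diffusion process (illustrative sketch).

    Evidence starts at 0 and accumulates at rate `drift` plus Gaussian
    noise until it reaches +boundary (correct) or -boundary (error).
    Returns mean accuracy and mean response time in seconds.
    """
    rng = np.random.default_rng(seed)
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + non_decision)
        correct.append(x >= boundary)
    return np.mean(correct), np.mean(rts)

# Wider boundaries trade speed for accuracy; narrower boundaries do the
# opposite -- the qualitative contrast the abstract attributes to
# decisions versus attentional guidance.
acc_wide, rt_wide = simulate_ddm(drift=1.5, boundary=1.5, seed=0)
acc_narrow, rt_narrow = simulate_ddm(drift=1.5, boundary=0.5, seed=0)
print(f"wide boundary:   accuracy={acc_wide:.2f}, mean RT={rt_wide:.2f}s")
print(f"narrow boundary: accuracy={acc_narrow:.2f}, mean RT={rt_narrow:.2f}s")

Running this sketch produces slower, more accurate responses in the wide-boundary condition and faster, less accurate responses in the narrow-boundary condition, which is the kind of dissociation DDM parameters are used to quantify.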
This award reflects NSF’s statutory mission and has been deemed worthy of support through evaluation using the Foundation’s intellectual merit and broader impacts review criteria.
Principal Investigator
Joy Geng – University of California-Davis, located in Davis, CA
Funders
National Science Foundation
Funding Amount
$341,347.00
Project Start Date
03/15/2025
Project End Date
02/29/2028
Will the project remain active for the next two years?
The project has more than two years remaining
Source: National Science Foundation
Please be advised that recent changes in federal funding policies may have affected the project’s scope and status.
Updated: April 2025