{Reference Type}: Journal Article
{Title}: A computational account of transsaccadic attentional allocation based on visual gain fields.
{Author}: Harrison WJ; Stead I; Wallis TSA; Bex PJ; Mattingley JB
{Journal}: Proc Natl Acad Sci U S A
{Volume}: 121
{Issue}: 27
{Year}: 2024 Jul 2
{Factor}: 12.779
{DOI}: 10.1073/pnas.2316608121
{Abstract}: Coordination of goal-directed behavior depends on the brain's ability to recover the locations of relevant objects in the world. In humans, the visual system encodes the spatial organization of sensory inputs, but neurons in early visual areas map objects according to their retinal positions, rather than where they are in the world. How the brain computes world-referenced spatial information across eye movements has been widely researched and debated. Here, we tested whether shifts of covert attention are sufficiently precise in space and time to track an object's real-world location across eye movements. We found that observers' attentional selectivity is remarkably precise and is barely perturbed by the execution of saccades. Inspired by recent neurophysiological discoveries, we developed an observer model that rapidly estimates the real-world locations of objects and allocates attention within this reference frame. The model recapitulates the human data and provides a parsimonious explanation for previously reported phenomena in which observers allocate attention to task-irrelevant locations across eye movements. Our findings reveal that visual attention operates in real-world coordinates, which can be computed rapidly at the earliest stages of cortical processing.