2023 Program Abstracts

Award Lectures

2023 Edgar D. Tillyer Lecture

Computational modeling of vision: Ten lessons   Andrew Watson, Apple

Computational models are the means by which we test our ideas about how vision works. They have well-defined inputs that bear some defined relation to the light impinging on the eyes, and well-defined quantitative outputs that relate to human judgements. Their success can be judged by how well they predict those human judgements, by how broad a range of conditions they encompass, and possibly by how well their internals match the neurophysiology. Tangentially, they may be judged by how useful they are in practical applications. In this talk I will share a few lessons that I have learned from my adventures in modeling of human vision.


2023 Robert M. Boynton Lecture

Wiring specificity and plasticity of the vertebrate retina   Rachel Wong, Department of Biological Structure, University of Washington

Vision relies on the output of the many functionally distinct and precisely wired circuits of the retina. Using transgenic techniques, imaging methods and electrophysiology, we seek to uncover the developmental mechanisms that help establish the wiring specificity of retinal circuits in vertebrates. Moreover, because injury or disease can cause rewiring after maturation, we are also reconstructing primate retinal circuits impacted by the loss of input, in order to identify the challenges to circuit repair.

Funding acknowledgements:  Supported by NIH EY10699, EY033447 and EY01730



Invited session: 30 years of normalization in the visual system 

The modern view of neuronal normalization dates back to the late 1980s and early 1990s and, in particular, to a paper published in 1992 by David Heeger entitled "Normalization of Cell Responses in Cat Visual Cortex," in which a single, elegant model of contrast gain control was shown to explain much of the then-current literature on early visual processing. Thirty years later, normalization appears to be one of a set of fundamental 'canonical computations' that can be used to explain how the brain works. This session features four speakers who have used neuronal normalization to explain different aspects of vision in a modern context, including binocular combination, attention and adaptation.

Moderator: Geoffrey M. Boynton, University of Washington


Spatiotemporal fluctuations in E/I balance & their impact on perception   John Reynolds, The Salk Institute for Biological Studies

Co-authors: Zachary Davis, Salk; Lyle Muller, Western University; Gabriel Benigno, Western University; Terry Sejnowski, Salk; Julio Martinez-Trujillo, Western University; Charlee Fletterman, Salk; Theo Desordes, Salk; Christopher Steward, Western University

The Normalization Model has helped illuminate our understanding of neural computations in a wide variety of contexts.  Here, I will describe our recent efforts to understand how intrinsic fluctuations in the ratio of excitatory to inhibitory conductances play out in space and time, and how these fluctuations impact perception. We find that spontaneous cortical activity is organized into traveling waves that traverse visual cortex several times per second. Recording in Area MT of the common marmoset, we find that these intrinsic traveling waves (iTWs) regulate both the gain of the stimulus-evoked spiking response and the monkey’s perceptual sensitivity. A large-scale spiking network model recapitulates the properties of iTWs measured in vivo and explains the observed fluctuations in response gain as resulting from momentary fluctuations in E/I balance.  The model predicts that iTWs are sparse, in the sense that only a small fraction of the neural population participates in any individual iTW. As a result, iTWs can occur without inducing correlated variability, which we have found, in separate experiments, can impair sensory discrimination. We thus refer to the model as the sparse-wave model of iTWs. Taken together, these findings lead to the conclusion that traveling waves strongly modulate neural and perceptual sensitivity. 

Funding acknowledgements:  P30 EY019005/EY/NEI  R01 EY028723/EY/NEI  T32 EY020503/EY/NEI  T32 MH020002/MH/NIMH


Testing and expanding the Reynolds & Heeger Normalization Model of Attention   Marisa Carrasco, Dept of Psychology and Center for Neural Science, New York University

I present evidence supporting key predictions of the Reynolds-Heeger Normalization Model of Attention: 1) Both exogenous and endogenous covert spatial attention can affect performance by contrast or response gain changes, depending on the stimulus size and the relative size of the attention field. 2) Feature-based attention improves performance, consistent with a change in response gain, when the featural extent of the attention field is small (low uncertainty) or large (high uncertainty) relative to the featural extent of the stimulus. 3) An expanded model of spatial resolution and attention distinguishes between exogenous and endogenous attention, focusing on texture-based segmentation as a model system, and has revealed a dissociation between these types of attention across eccentricity. Our model emulates sensory encoding to segment figures from their background and predict performance. To explain attentional effects, exogenous and endogenous attention require separate operating regimes across spatial frequency. The model reproduces behavioral performance across several experiments and resolves three unexplained phenomena: 1) the parafoveal advantage in segmentation, 2) the uniform improvements across eccentricity by endogenous attention, and 3) the peripheral improvements and central impairments by exogenous attention. This model provides a generalizable framework for predicting effects of endogenous and exogenous attention on perception across the visual field.

Funding acknowledgements:  NIH R01-EY019693


Contrast normalization accounts for binocular interactions in healthy vision and amblyopia   Preeti Verghese, Smith-Kettlewell Eye Research Institute

Co-author: Chuan Hou, Smith-Kettlewell Eye Research Institute

We investigated the interaction between dichoptic stimuli using steady-state visual evoked potentials and frequency-domain analysis. The stimulus in each eye flickered with a unique temporal frequency, which allowed us to “frequency-tag” the responses to each eye’s input (self terms) as well as the responses to the combination of inputs from the two eyes (intermodulation terms). We measured two forms of binocular interaction: one associated with the suppressive effect of one eye’s stimulus on the other, and the other associated with a direct measure of interocular interaction between the two eyes’ inputs. Fits of a contrast gain control model to the data demonstrated that a common gain control mechanism is consistent with both forms of binocular interaction. We then used the contrast normalization framework to investigate the disruptions to binocular interaction in amblyopia. Although anisometropic amblyopes showed a pattern of responses similar to that of normal-vision observers, strabismic amblyopes exhibited both reduced responses to the amblyopic eye stimulus in the presence of a mask in the other eye and substantially reduced intermodulation responses, indicating reduced interocular interactions in visual cortex. A contrast normalization model that simultaneously fit self- and IM-term responses showed that the excitatory contribution to binocular interaction is significantly reduced in strabismic amblyopia.
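The frequency-tagging logic described above can be illustrated with a small numerical sketch. The tag frequencies `f1` and `f2` below are assumed example values, not the frequencies used in the study: self terms appear at harmonics of each eye's tag, while intermodulation terms appear at sums and differences of the two tags, which can only arise after the eyes' signals combine.

```python
# Illustrative sketch of frequency tagging (assumed example frequencies,
# not those used in the study). Each eye's stimulus flickers at its own
# rate, so responses at harmonics of f1 or f2 are "self" terms, while
# responses at |n*f1 + m*f2| are intermodulation (IM) terms that can
# only arise after the two eyes' inputs are combined nonlinearly.
f1, f2 = 5.0, 6.0  # Hz, one tag per eye (assumed values)

# Self terms: first two harmonics of each eye's tag frequency.
self_terms = [k * f1 for k in (1, 2)] + [k * f2 for k in (1, 2)]

# Lowest-order IM terms: sums and differences of the two tags.
im_terms = sorted({abs(n * f1 + m * f2)
                   for n in (-1, 1) for m in (-1, 1)})

print(self_terms)  # [5.0, 10.0, 6.0, 12.0]
print(im_terms)    # [1.0, 11.0] -> f2 - f1 and f1 + f2
```

Because the IM frequencies do not coincide with any harmonic of either tag alone, power at those frequencies is a direct signature of binocular combination.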

Funding acknowledgements:  NIH grants R01EY034370 and R01EY025018


Exponentiation and normalization as mechanisms for efficient selection of sensory signals    Justin Gardner, Stanford University

The standard formulation of the normalization operation comprises two canonical neural computations, exponentiation and division: a counter-balancing of two forces. Exponentiation favors the strong: larger responses grow even larger relative to smaller ones. In the extreme, exponentiation is a winner-take-all computation. Division by the pooled responses of a neural population brings runaway responses back down to an operable range, normalizing responses that have grown large through exponentiation to the average population activity. Such simple operations have surprisingly many applications across sensory and cognitive systems of the brain. Here, I will focus on how these operations can facilitate selection of sensory signals for prioritized processing, and thereby explain perceptual benefits of spatial attention.
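The two computations described above can be written down compactly. The following is an illustrative sketch of divisive normalization in the spirit of Heeger's 1992 formulation; the function name and the values of the exponent `n` and semi-saturation constant `sigma` are assumptions for illustration, not values from the talk.

```python
import numpy as np

def normalize(responses, n=2.0, sigma=0.1):
    """Divisive normalization sketch: exponentiate each response, then
    divide by the pooled exponentiated responses plus a semi-saturation
    term. n and sigma are illustrative, assumed values."""
    driven = np.asarray(responses, dtype=float) ** n  # exponentiation favors the strong
    pool = sigma ** n + driven.sum()                  # pooled activity tames runaway growth
    return driven / pool

# The strongest input gains relative to weaker ones, yet every output stays bounded:
print(normalize([1.0, 0.5, 0.25]))
```

With n = 2, exponentiation squares the ratio between the two strongest inputs (a 2:1 input ratio becomes 4:1 at the output), while division by the pool keeps every output below 1, capturing the counter-balance of the two forces described in the abstract.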

Funding acknowledgements:  Wu Tsai Neurosciences Institute and Stanford Institute for Human Centered Artificial Intelligence


Invited session: Retinal remodeling and regeneration

Regenerative therapies aim to restore light sensitivity to the blind retina or prevent further vision loss by replacing compromised or degenerated retinal tissues with new healthy cells. While these approaches have the potential to deliver high-quality restored vision, they must also contend with anatomical and physiological changes that occur in the remaining retinal architecture after vision loss. This session will cover the latest advances in cell transplantation & regeneration and consider how native retinal circuits adapt and remodel in the face of retinal degeneration.

Moderator: Ala Moshiri, UC Davis


Partial cone loss triggers differential modification of inhibition across retinal pathways    Felice Dunn, Department of Ophthalmology, University of California, San Francisco

Co-authors: Joo Yeun Lee, Department of Ophthalmology, University of California, San Francisco; Luca Della Santina, College of Optometry, University of Houston

Cone loss of up to 40-60% can go undetected by visual acuity or sensitivity metrics. These findings suggest that conventional diagnostics can be insensitive to cone loss below a threshold and/or that visual circuits can be resilient to cone loss up to a threshold. To distinguish between these possibilities, our lab studied the effects of partial cone loss in the mature mouse retina. We induced controlled cone loss by selectively expressing the diphtheria toxin receptor under the cone opsin promoter (cone-DTR). We examined the effects of 50% cone loss on three of the most sensitive cell types in the mouse retina: alpha ON sustained (AON-S), OFF sustained (AOFF-S), and OFF transient (AOFF-T) ganglion cells. With multiple cell types, we can distinguish effects common across pathways, i.e., changes to common circuit components, from effects unique to each pathway, i.e., changes to circuit components after the pathways diverge to each cell type. In AON-S ganglion cells, partial cone loss triggers changes in inhibition that increase spatiotemporal integration and recover contrast gain, and these cells receive increased synaptic inputs. While OFF pathways also exhibit modified spatiotemporal processing with fewer cones, the extent of functional adjustment differed between the AOFF-S and AOFF-T cells. Cone loss caused differential modifications to inhibition in each of these retinal pathways. These findings demonstrate that partial cone loss induced circuitry changes after the divergence of the OFF retinal pathways.

Funding acknowledgements:  R01 EY-029772


Stimulation of functional neural regeneration in adult mouse retina    Thomas Reh, University of Washington

Neurons are not regenerated in the adult mammalian retina. However, in non-mammalian vertebrates, specialized glial cells (Müller glia, MG) spontaneously generate neural progenitors, which go on to replace neurons after injury. We have recently developed strategies to stimulate regeneration of functional neurons in the adult mouse retina by over-expressing developmental transcription factors, including Ascl1, Atoh1, Islet1 and Pou4f2. We have found that over-expressing Ascl1 in MG in vivo, followed by injury and TSA, induces regeneration of functional retinal neurons in adult mice. Adding additional TFs, such as Atoh1, substantially increases the efficiency of neurogenesis from the MG, while combining other TFs with Ascl1 can alter the types of neurons that are regenerated after injury. Together, our data show that we can recapitulate in mice much of the regeneration that occurs naturally in fish.

Funding acknowledgements:  NEI, FFB, Genentech, GVRI, Tenpoint


Developing autologous iPSC-RPE therapy for the treatment of AMD patients   Ruchi Sharma, National Eye Institute, Bethesda, Maryland

Co-author: Kapil Bharti, National Eye Institute, Bethesda, Maryland

Age-related macular degeneration (AMD) is one of the leading causes of vision loss in older people in the Western world. The disease is caused by degeneration of the retinal pigment epithelium (RPE) monolayer that supports photoreceptors. We used induced pluripotent stem cells (iPSCs) to develop an autologous cell replacement therapy for treating dry AMD patients. Patients' blood cells were reprogrammed into iPSCs, and the iPSCs were differentiated into RPE cells using a protocol developed in our lab. RPE cells were matured on a biodegradable poly(lactic-co-glycolic acid) (PLGA) scaffold for five weeks. Quality control assays confirmed the iPSC-RPE patch's purity, maturity, and functionality. Pre-clinical studies in rats and pigs demonstrated the safety and efficacy of the iPSC-RPE patch. Immune-compromised rats transplanted with a 0.5 mm iPSC-RPE patch showed no signs of tumor formation after nine months, confirming the safety profile. We laser-injured the RPE monolayer in the visual streak of pig eyes and, after 48 hours, transplanted the patch. Optical coherence tomography (OCT) confirmed the integration of the patch. A multi-focal electroretinogram (ERG) showed that the electrical response of the retinal layers over the implant was much higher than in the lasered area without the implant. We have started a phase I/IIa trial of an autologous iPSC-RPE patch to treat AMD. This ongoing trial will test the safety, feasibility, and integration of the iPSC-RPE patch in 12 AMD patients.


Insights into retinal cell replacement: Optimising photoreceptor and RPE transplantation   Karen Tessmer, Center for Regenerative Therapies, TUD Dresden University of Technology

Co-authors: Sylvia Jane Gasparini, Center for Regenerative Therapies, TUD Dresden University of Technology; Juliane Hammer, Center for Regenerative Therapies, TUD Dresden University of Technology; Trishla Adhikari, Center for Regenerative Therapies, TUD Dresden University of Technology; Klara Schmidtke, Center for Regenerative Therapies, TUD Dresden University of Technology; Sebastian Knöbel, Miltenyi Biotec B.V. & Co. KG, Bergisch Gladbach; Marius Ader, Center for Regenerative Therapies, TUD Dresden University of Technology

Retinal degenerative diseases, such as age-related macular degeneration and inherited retinal degenerations, are characterized by the dysfunction and ultimately the loss of photoreceptors and retinal pigment epithelium (RPE). Retinal cell replacement has emerged as a potential therapeutic strategy, enabled by the availability of donor cells differentiated in large numbers from human embryonic or induced pluripotent stem cells. Although many differentiation protocols exist, detailed comparisons of the donor-cell and host characteristics that allow improved transplantation outcomes are still sparse. Here, I will present our work on a more detailed assessment of photoreceptor and RPE single-cell suspension transplantations. Human photoreceptors incorporate extensively into a cone-degeneration mouse host, interact with host Müller glia and bipolar cells, and polarize to form inner and outer segments as well as synapses. Importantly, increased donor-host interactions correlate with improved graft polarization and maturation, with donor cell age greatly influencing this process. Similarly, RPE transplantations into an acute RPE-depletion mouse model showed that monolayer formation strongly depends on RPE differentiation times, with further improvement from enrichment of an RPE subpopulation by cell surface markers. Overall, our work highlights the need for careful selection of appropriate donor cells for structural integration into recipient tissue after transplantation.

Funding acknowledgements:  DFG, BMBF, FFB


Invited session: Diversity in chromatic processing across the animal kingdom 

Color is an important feature of objects in the visual environment. While the neural transformations underlying color perception have received much attention in the primate visual system, the broader animal kingdom exhibits a diverse set of schemes for encoding and processing chromatic information. This session will survey the neural organization of color pathways in a selection of primate and non-primate species, and will examine how those pathways can be used to extract ecologically relevant signals that guide behavior.

Moderator: David Brainard, University of Pennsylvania


Hue selectivity from recurrent circuitry in Drosophila   Rudy Behnia, Department of Neuroscience, Columbia University, Zuckerman Institute

To perceive color, our brains must transform the wavelengths of light reflected off objects into the derived quantities of brightness, saturation and hue. Neurons responding selectively to hue have been reported in primate cortex, but it is unknown how their narrow tuning in color space is produced by upstream circuit mechanisms. To enable circuit-level analysis of color perception, here we report the discovery of neurons in the Drosophila optic lobe with hue-selective properties. Using the connectivity graph of the fly brain, we construct a connectomics-constrained circuit model that accounts for this hue selectivity. Unexpectedly, our model predicts that recurrent connections in the circuit are critical for hue selectivity. Experiments using genetic manipulations to perturb recurrence in adult flies confirm this prediction. Our findings reveal the circuit basis for hue selectivity in color vision.


Color vision in stomatopod crustaceans   Tom Cronin, University of Maryland Baltimore County

Stomatopod crustaceans, commonly known as mantis shrimp, have perhaps the most unusual color-vision systems of any animals.  The uniqueness is possible because stomatopods have compound eyes.  Here, each unit, or ommatidium, acts as an independent visual detector, with its own corneal lens, internal optics, and set of photoreceptors.  Ommatidia tuned to different wavelengths can be individually placed in the eye to build unusual color systems.  In mantis shrimps, the receptors responsible for color vision are limited to six parallel rows of ommatidia that together form an equatorial belt, called the midband.  Various receptors in these ommatidia are tuned to eight narrow-band spectral channels in the visible spectrum plus up to four additional ultraviolet channels.  Thus, there is a total of twelve different color receptors for color vision.  How these color channels are analyzed in the complex set of optic lobes existing behind the retina is only partly understood.  It appears that stomatopods use both opponent and labelled-line color channels.  Oddly, these animals appear to have limited ability to discriminate between spectral lights, but they have outstanding color constancy.  Color vision, and color processing, in stomatopods is probably unlike that of any other animal group.


Comparative connectomics reveals noncanonical wiring for color vision in human foveal retina   Yeon Jin Kim, University of Washington

The Old World macaque monkey and New World common marmoset provide fundamental models for human visual processing, yet the human ancestral lineage diverged from these monkey lineages over 25 Mya. We therefore asked whether fine-scale synaptic wiring in the nervous system is preserved across these three primate families, despite long periods of independent evolution. We applied connectomic electron microscopy to the specialized foveal retina where circuits for highest acuity and color vision reside. Synaptic motifs arising from the cone photoreceptor type sensitive to short (S) wavelengths and associated with “blue–yellow” (S-ON and S-OFF) color-coding circuitry were reconstructed. We found that distinctive circuitry arises from S cones for each of the three species. The S cones contacted neighboring L and M (long- and middle-wavelength sensitive) cones in humans, but such contacts were rare or absent in macaques and marmosets. We discovered a major S-OFF pathway in the human retina and established its absence in marmosets. Further, the S-ON and S-OFF chromatic pathways make excitatory-type synaptic contacts with L and M cone types in humans, but not in macaques or marmosets. Our results predict that early-stage chromatic signals are distinct in the human retina and imply that solving the human connectome at the nanoscale level of synaptic wiring will be critical for fully understanding the neural basis of human color vision.

Funding acknowledgements:  NIH grant EY-028282


Color processing in the mouse retina   Anna Vlasits, Department of Neurobiology, Northwestern University

Co-authors: Maria M Korympidou, Institute for Ophthalmic Research, University of Tuebingen; Sarah Strauss, Institute for Ophthalmic Research, University of Tuebingen; Timm Schubert, Institute for Ophthalmic Research, University of Tuebingen; Katrin Franke, Baylor College of Medicine; Philipp Berens, Tuebingen AI Center, Institute for Ophthalmic Research, University of Tuebingen; Thomas Euler, Institute for Ophthalmic Research, Werner Reichardt Centre for Integrative Neuroscience, University of Tuebingen

Across species, specialized retinal circuits allow animals to extract visual information from their environments. How retinal circuits extract relevant visual information is a major area of inquiry. In the mouse retina, cone photoreceptors possess a gradient of opsin expression leading to uneven detection of colors across visual space. However, at the output of the retina, ganglion cells' color preferences deviate from this gradient, suggesting that circuits in the retina may alter the color information before sending it to the brain. We explored how circuits in the retina shape chromatic information, focusing on the retina's interneurons, amacrine cells and bipolar cells. We found that inhibitory amacrine cells rebalance color preferences, leading to diverse color selectivity throughout retinal space. Since amacrine cells vary widely across species, these cells are poised to tune the chromatic information sent to the brain to each species' environmental niche.

Funding acknowledgements:  German Research Foundation (DFG; BE5601/2-1; SPP 2041; BE5601/4-1,2; EU42/9-1,2), the German Ministry of Education and Research (Bernstein Award 01GQ1601 to PB; BCCN 01GQ1002 to KF), the Medical Faculty/U Tübingen (fortüne to AV), and the Max Planck Society (M.FE.A.KYBE0004 to KF).


Invited session: Extended reality – Applications in vision science & beyond 

Extended reality display systems offer the unique opportunity to blend the precise stimulus control of laboratory-based psychophysical experiments with the detailed, naturalistic environments encountered in our everyday visual experience. This session will cover applications of extended reality in four broad areas – depth perception, gaze tracking, driving with low vision and assistive technology for low vision.

Moderator: Jorge Otero-Millan, UC Berkeley


Improving augmented reality through perceptual science   Emily Cooper, University of California, Berkeley

Augmented reality (AR) systems make it possible to present visual stimuli that intermix and interact with people’s view of the natural world. But building an AR system that merges stimuli with our natural visual experience is hard. AR systems often suffer from technical and visual limitations, such as small eyeboxes, limited brightness, and narrow visual field coverage. An integral part of AR system development, therefore, is perceptual research that improves our understanding of when and why these limitations matter. I will describe a suite of perceptual studies designed to provide guidance for engineers on the visibility and appearance of wearable optical see-through AR displays. Our results highlight the idiosyncrasies of how our visual system integrates information from the two eyes, the multifaceted nature of perceptual phenomena in AR, and the trade-offs that are currently necessary to create an AR system that is both wearable and compelling. 


Recent developments in head-mounted eye tracking for the understanding of natural(istic) behavior   Gabriel J. Diaz, Rochester Institute of Technology

The scientific investigation of eye movements in natural or simulated “naturalistic” environments has historically operated at the limit of what head-worn eye tracking technology is capable of. In this presentation, I will review efforts by myself and collaborators at RIT to push these limits and to increase the scope of scientific inquiry into natural visual and motor behavior. The talk will end with a brief discussion of emerging methods, most of which are aimed at resolving long-standing limitations of video-based eye tracking. Most notably, as a consequence of USB transfer limits and stringent power budgets, video-based eye trackers are restricted to either a high spatial resolution of the eye image, which improves the spatial accuracy of the final gaze estimate, or a high temporal sampling rate (i.e., a high number of eye frames per second), but not both.
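The resolution-versus-frame-rate trade-off described above can be made concrete with a back-of-the-envelope sketch. The transfer budget, usable-bandwidth fraction, and image sizes below are illustrative assumptions, not figures from the talk:

```python
# Back-of-the-envelope sketch of the bandwidth trade-off (all numbers
# are illustrative assumptions): a fixed transfer budget forces a choice
# between eye-image resolution and frame rate.
usb2_budget_bits = 480e6 * 0.8  # ~480 Mbit/s nominal USB 2.0, ~80% usable (assumption)

def max_fps(width, height, bits_per_pixel=8):
    """Highest frame rate that fits the budget at a given eye-image size."""
    return usb2_budget_bits / (width * height * bits_per_pixel)

print(round(max_fps(640, 480)))  # larger eye image -> modest frame rate (~156 fps)
print(round(max_fps(192, 192)))  # small eye image -> much higher rate (~1302 fps)
```

Under these assumed numbers, roughly an order of magnitude in frame rate is traded for the extra pixels of the larger image, which is the tension between spatial accuracy and temporal sampling that the abstract describes.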

Funding acknowledgements:  Meta Reality Labs and R15EY031090


Using a driving simulator to evaluate the effects of vision impairment and assistive technology   Alex Bowers, Schepens Eye Research Institute of Mass Eye and Ear, Harvard Medical School

Detection of potential hazards is critical to safe driving but is very difficult to evaluate in real-world driving situations because there is no control over if, when or where hazards might appear. Moreover, gaze tracking can be challenging in the varying environmental conditions of real-world driving.  In contrast, driving simulators provide a safe, controlled, repeatable environment in which to study the effects of vision impairment and assistive technologies. Scenarios can be designed to probe specific aspects of the vision loss and can include situations that would be dangerous in the real world. This talk will summarize how we have used driving simulators to evaluate gaze behaviors and driving responses to potential hazards at mid-block locations and at intersections (including gaze tracking across a 180-degree field of view) for drivers with different kinds of visual field loss and for a variety of assistive devices, including optical devices (peripheral prism glasses and bioptic telescopes) and prototype vibro-tactile hazard warning systems. Using linked pedestrian and driving simulators we have attempted to create more realistic pedestrian hazard scenarios and have evaluated the effects of vision impairment on interactions between drivers and human-controlled, interactive pedestrians within the virtual environment.

Funding acknowledgements:  NIH grant R01 EY025677


Augmented reality systems for people with low vision   Yuhang Zhao, Department of Computer Sciences, University of Wisconsin-Madison

Low vision is a visual impairment that falls short of blindness but cannot be corrected by eyeglasses or contact lenses. While current low vision aids (e.g., magnifiers, CCTVs) support basic vision enhancements, such as magnification and contrast enhancement, these enhancements often arbitrarily alter a user's full field of view without considering the user's context, such as their visual abilities, tasks, and environmental factors. As a result, these low vision aids are not sufficient for, or preferred by, low vision users in many important tasks. Augmented reality (AR) technology presents a unique opportunity to enhance low vision people’s visual experience by automatically recognizing the surrounding environment and presenting tailored visual augmentations. In this talk, I will describe how we design and build intelligent AR systems to support low vision people in visual tasks, such as a head-mounted AR system that presents visual cues to orient users’ attention in a visual search task, and a projection-based AR system that projects visual highlights on stair edges to support safe stair navigation. I will conclude by discussing our future research direction on AR for low vision accessibility.


Invited session: Binocular vision & interactions

Binocular vision shapes our ability to perceive and act on our environments, guiding our visuomotor interactions with the objects around us. Speakers in this session will discuss how stereo vision and 3D perception guide movement and support our ability to interact with the world, and how we can use clinical applications and augmented/virtual reality approaches to understand how the human visual system processes depth information under typical and atypical conditions.

Moderator: Deborah Giaschi, University of British Columbia


Depth perception in virtual environments: The role of experience   Laurie Wilcox, Centre for Vision Research, York University

In this presentation I’ll review results of stereoacuity, disparity matching, and depth magnitude estimation studies in which comparisons of so-called naïve and experienced observers show substantive differences in performance. I will describe our working theory that the critical difference between these groups is their tolerance to conflicts between stereopsis and other sources of depth information. Further, I will review recent data suggesting that there are some conflicts that even experienced 3D participants cannot disregard.

Funding acknowledgements:  Natural Sciences and Engineering Research Council of Canada


Binocular vision and the control of foot placement during walking in natural terrain   Kathryn Bonnen, Indiana University School of Optometry

Co-authors: Jonathan S Matthis, Northeastern University; Mary Hayhoe, University of Texas

Coordination between visual and motor processes during walking is critical for the selection of stable footholds in rough terrain. In our work, we collect eye movements and body motion while people walk over complex natural terrain. We found that gaze strategy was highly sensitive to the complexity of the terrain, with more fixations dedicated to foothold selection as the terrain became more difficult. Disrupting binocular vision led to a tendency to focus on closer footholds, suggesting that this loss of information places more pressure on the visuomotor control process. Furthermore, we found a relationship between a person's stereoacuity and their gaze strategy. These findings help us understand how vision loss can impact everyday tasks like walking.

Funding acknowledgements:  NIH NEI, ARVO/VSS fellowship, Harrington fellowship


The case against probabilistic inference: A deterministic theory of 3D visual processing   Fulvio Domini, Brown University

I will describe a new computational theory of 3D cue integration and introduce a novel theoretical framework for studying 3D vision in humans. The proposed theory differs from current mainstream approaches in two fundamental ways. First, it assumes that 3D mechanisms are deterministic processes that map a given visual stimulus to a unique 3D representation. Second, it posits that 3D processing is heuristic, finding correct solutions only in ideal viewing conditions. The deterministic and heuristic nature of these mechanisms is inconsistent with Bayesian approaches that model brain mechanisms as processes of Bayesian inference aimed at deriving the most accurate and precise representation of 3D structure. These two main features of the proposed theory are implemented in a computational model that allows quantitative predictions of new phenomena. First, it provides an entirely different interpretation of Just Noticeable Differences, a hallmark measure of perceptual uncertainty. Second, it predicts specific perceptual distortions that are at odds with what previous accounts would predict. I will also discuss how the deterministic and heuristic nature of the proposed computational model points towards a re-evaluation of fundamental theoretical assumptions in perception research.

Funding acknowledgements:  Supported by grants from the National Science Foundation (1827550 and 2120610) and the National Institutes of Health (1R21EY033182-01A1)


Harnessing interactions within binocular vision to treat amblyopia   Benjamin Backus, Vivid Vision Inc

Co-authors: James Blaha, Vivid Vision Inc; Manish Gupta, Vivid Vision Inc; Tuan Tran, Vivid Vision Inc; Brian Dornbos, Vivid Vision Inc

Inexpensive virtual reality (VR) headsets have enabled at-home therapy for binocular dysfunctions, including amblyopia. Healthy binocular vision is exquisite, achieved through interactions among various visual subsystems. As a result, however, binocular vision has multiple points of weakness, so development can go wrong in many different ways. Furthermore, the visual system has many ways it can adapt to dysfunctions to improve vision. It is therefore not surprising that amblyopias are idiosyncratic from one person to the next. Habitual interocular suppression is a typical adaptation in amblyopia, and by treating suppression in VR, acuity can be improved by 0.1 to 0.2 logMAR. However, the amblyopic visual system shows complicated patterns of interaction between suppression, acuity, stereopsis, motor vergence, accommodation, and motion perception; and each of those subsystems is itself complex. Thus, to improve acuity and stereopsis in amblyopia, a multifaceted approach would seem more promising. In a VR headset, one can address suppression, stereopsis, vergence ability, and even accommodation, simultaneously. I will describe Vivid Vision’s approach to this problem, along with preliminary results from independent researchers.


Posters: Friday October 6


Noninvasive neuromodulation of subcortical visual pathways with transcranial focused ultrasound   Ryan Ash, Department of Psychiatry and Behavioral Sciences, Stanford University

Co-authors: Morteza Mohammadjavadi, Department of Radiology, Stanford University; Kim Butts Pauly, Department of Radiology, Stanford University; Anthony Norcia, Department of Psychology, Stanford University

Transcranial ultrasound stimulation (TUS) is an emerging tool to noninvasively modulate neural activity in deep brain areas. In preparation for our first in-human TUS studies, we targeted TUS to the lateral geniculate nucleus (visual thalamus) in a large mammal (sheep).  Full-field light flash stimuli were presented with or without concomitant TUS in randomly interleaved trials. Similar to what was previously observed by Fry et al. (Nature, 1959) in cats, EEG visual-evoked potentials (VEPs) were reversibly suppressed by TUS to the LGN. No changes in VEPs were observed in sheep who received sham-TUS to a control site in the basal ganglia, ruling out potential transducer auditory-somatosensory confounds. Magnetic resonance acoustic radiation force imaging (MR-ARFI), a technique to measure the ultrasound focus in situ, showed a focal volume of microscopic displacement at the expected target. Excitingly, MR-ARFI predicted the suppressive effect on VEPs in individual subjects, suggesting that MR-ARFI can be used to confirm TUS targeting and estimate neurophysiological impact. We are now translating this paradigm into humans, targeting TUS to the LGN while participants perform a contrast detection task with EEG recording of steady-state VEPs. MR-ARFI will be measured to evaluate targeting and estimate TUS dosage in each participant. This work provides the foundation for a dissection of the roles of different subcortical nuclei in different aspects of human vision.


Tests of a contrast gain control model of parabolic brightness matching functions   Osman B. Kavcar, Integrative Neuroscience Program, University of Nevada, Reno

Co-authors: Michael E. Rudd, Center for Integrative Neuroscience, University of Nevada, Reno; Christabel Arthur, Integrative Neuroscience Program, University of Nevada, Reno; Alex J. Richardson, Department of Psychology, University of Nevada, Reno; Michael A. Crognale, Department of Psychology, University of Nevada, Reno

The brightness induction produced by a surround annulus on a target disk can be measured by adjusting the luminance of a second disk to match the target in brightness. This produces parabolic brightness matching functions (BMFs) when average perceptual matches are plotted against annulus luminance on a log-log scale. A model developed by Rudd et al. (JOSA A, 2023), in which a contrast gain control operates between the outer and inner edges of the annulus, predicts that a linear relationship should hold between the first-order (k1) and second-order (k2) coefficients of the parabolic BMFs. We previously verified this prediction and discovered that the slope of this linear relationship depends on the contrast polarity of the target with respect to its annulus, but is unaffected by the annulus size or the background luminance. Here, we further tested the model by varying the target disk luminance in two conditions where the target disk was either a decrement or an increment with respect to its annulus and the background was white (highest display luminance). The model predicts that the slope of the k1 vs k2 plots will itself decrease as a linear function of the log luminance of the target disk, and the rate of decrease will equal -1. Our results confirmed the first prediction, but the slope was -2 instead of -1. This pattern held in both the decremental and incremental target conditions.
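
As a sketch of the curve-fitting step described above (with illustrative, made-up data rather than the study's measurements), the parabolic BMF can be fit as a second-order polynomial in log-log coordinates to recover the coefficients k1 and k2:

```python
import numpy as np

def fit_bmf(log_annulus_lum, log_match_lum):
    """Fit a parabolic brightness matching function
    y = k0 + k1*x + k2*x**2 and return (k1, k2)."""
    # np.polyfit returns coefficients highest power first: [k2, k1, k0]
    k2, k1, _k0 = np.polyfit(log_annulus_lum, log_match_lum, deg=2)
    return k1, k2

# Illustrative noiseless data (not from the experiment)
x = np.linspace(-1.0, 1.0, 9)        # log annulus luminance
y = 0.2 + 0.5 * x - 0.15 * x ** 2    # log brightness matches
k1, k2 = fit_bmf(x, y)
print(k1, k2)  # recovers k1 = 0.5, k2 = -0.15
```

Repeating such fits across target luminances would yield the k1 vs k2 plots whose slopes the abstract examines.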


Quantifying, modeling, and extending the scope of the principle of inverse effectiveness to vision   Vincent A. Billock, Leidos, Inc. at the Naval Aerospace Medical Research Laboratory, NAMRU-D, WPAFB, OH

When visual responses are amplified/enhanced by the presence of a second (different) sensory stimulus, weak visual responses are amplified/enhanced more than strong ones.  In multisensory integration this is the crucial Principle of Inverse Effectiveness, but here we show that inverse effectiveness also applies to some key Visual-Visual interactions.  Despite being considered one of the three most important principles in sensory integration, Inverse Effectiveness has not been properly quantified for most interactions.  For example, despite the apparent specificity of its name, there is no published evidence that enhancement drops off in a nice 1/Effectiveness progression, nor are there studies that look for an invariant form or imply a particular mechanism.  Here we address these obvious gaps by comparing data from audio-visual, audio-tactile, binocular and color vision interactions, both for psychophysical data and for neural firing rates.  The results can be predicted from a gated power law amplification (Billock & Havig, 2018) found for both psychophysical and neural enhancement data.  The Principle of Inverse Effectiveness (PIE) follows directly from the compressive exponent in the power law, and can be derived from a simulation of excitatory-synaptic-coupled Hodgkin-Huxley spiking neurons.  The similarity of inverse effectiveness in multisensory and visual-visual interactions means that multisensory researchers can have their PIE and vision researchers can eat it too.   
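
A minimal numerical sketch of that last point, showing how inverse effectiveness falls directly out of a compressive power law (the gain and exponent below are hypothetical values, not the published fits):

```python
import numpy as np

# Hypothetical parameters for a power-law amplification
# R_multi = k * R_uni**a; a < 1 makes the amplification compressive.
k, a = 3.0, 0.7
r_uni = np.array([1.0, 10.0, 100.0])   # weak -> strong unisensory responses

# Enhancement ratio R_multi / R_uni = k * R_uni**(a - 1):
# it shrinks as the unisensory response grows, i.e. inverse effectiveness.
enhancement = k * r_uni ** (a - 1.0)
print(enhancement)  # monotonically decreasing
```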

Funding acknowledgements:  Funded by a supplement to an Office of Naval Research Multi-University Research Award, #N00014-20-1-2163


Gain, not changes in spatial receptive field properties, improves task performance in a neural network attention model   Daniel Birman, Department of Biological Structure, University of Washington

Co-authors: Kai J Fox, Department of Psychology, Stanford University; Justin L Gardner, Department of Psychology, Stanford University

Attention allows us to focus sensory processing on behaviorally relevant aspects of the visual world. One potential mechanism of attention is a change in the gain of sensory responses. However, changing gain at early stages could have multiple downstream consequences for visual processing. Which, if any, of these effects can account for the benefits of attention for detection and discrimination? Using a model of primate visual cortex we document how a Gaussian-shaped gain modulation results in changes to spatial tuning properties. Forcing the model to use only these changes failed to produce any benefit in task performance. Instead, we found that gain alone was both necessary and sufficient to explain category detection and discrimination during attention. Our results show how gain can give rise to changes in receptive fields which are not necessary for enhancing task performance.

Funding acknowledgements:  Washington Research Foundation, NEI T32EY07031,  Research to Prevent Blindness,  Lions Clubs International Foundation,  Hellman Fellows Fund


Binocular facilitation of the BOLD response to melanopsin stimulation in the suprachiasmatic nucleus   Joel T. Martin, Department of Psychology, University of York, York, United Kingdom

Co-authors: Lauren Welbourne, Department of Psychology, University of York, York, United Kingdom; Federico G. Segala, Department of Psychology, University of York, York, United Kingdom; Ellie Baker, Department of Psychology, University of York, York, United Kingdom; Monique Bhullar, Department of Psychology, University of York, York, United Kingdom; Rowan Huxley, Department of Psychology, University of York, York, United Kingdom; Allice Wardle, Department of Psychology, University of York, York, United Kingdom; Daniel H. Baker, Department of Psychology, University of York, York, United Kingdom; Alex R. Wade, Department of Psychology, University of York, York, United Kingdom

In a recent analysis of archival data, Spitschan and Cajochen (2019) identify what appears to be substantial binocular facilitation of melatonin suppression due to melanopic light stimulation. This putative effect likely originates in the melanopsin-containing intrinsically photosensitive retinal ganglion cells (ipRGCs) which project directly to the suprachiasmatic nucleus (SCN) of the hypothalamus. We asked whether we could measure a direct physiological correlate of this binocular facilitation using a binocular, MRI-compatible, 10-primary spectral stimulation device. We present preliminary findings from a functional magnetic resonance imaging (fMRI) study designed to explore the blood oxygen level dependent (BOLD) response to monocular and binocular melanopic light stimulation. The study used a 30 s on/off design with three ‘ocularity’ conditions (binocular-low, monocular-high, binocular-high) and two classes of targeted photoreceptors (melanopsin and LMS cones). Throughout each scan, subjects (N=18) also responded to brief, cone-directed sinusoidal modulations of varying intensity. We report that binocular vs. monocular melanopsin stimulation induced significant BOLD activation in SCN but that this effect was not seen for cone-directed stimulation. This is consistent with the binocular facilitation effect described by Spitschan and Cajochen (2019) and provides the first direct evidence of melanopsin-driven activation and binocular facilitation in human subcortical nuclei. 

Funding acknowledgements:  BBSRC (BB/V007580/1)


Combining spatial and quantitative models to account for the relationship between luminance, context and brightness   Joris Vincent, Technische Universität Berlin

Co-author: Marianne Maertens, Technische Universität Berlin

The apparent brightness of a target region is determined not only by its local luminance, but also by its surrounding context. This is well demonstrated by the many illusions where some surround context causes two physically identical targets to appear different in brightness (e.g. White's (1979) illusion). Here we use perceptual scales from Maximum Likelihood Conjoint Measurements to test the predictive power of computational brightness models as a function of surround context and local luminance. While the qualitative relationship between target brightness and surround context (i.e. direction of effect) is captured well by image-computable models of the spatial filtering type (e.g. Blakeslee and McCourt, 1997), these do not predict the quantitative aspects of that relationship (i.e. magnitude of effect). A different class of models (e.g. Whittle, 1992) makes quantitative predictions of brightness as a function of luminance, but requires labeling of target and background (i.e., is not image-computable), thus cannot by itself account for the spatial context. Hence we combine Whittle’s model with spatial mechanisms to account for both spatial context and local luminance. A combined model using multiscale spatial filtering and calculating brightness at every spatial scale is able to provide both a correct qualitative prediction of direction of effect, as well as a quantitative prediction of apparent brightness that can be tested against perceptual scales for different stimuli.

Funding acknowledgements:  German Research Foundation DFG MA5127/5-1 to M. Maertens


Non-rivalrous interocular contrast integration across the human visual cortex hierarchy   Kelly Chang, University of Washington

Co-authors: Xiyan Li, Department of Psychology, University of Washington; Department of Psychology, University of California, San Diego; Kimberly  Meier, Department of Psychology, University of Washington; Kristina Tarczy-Hornoch, Department of Ophthalmology, University of Washington; Geoffrey M. Boynton, Department of Psychology, University of Washington; Ione Fine, Department of Psychology, University of Washington

Here, using functional MRI, we measured interocular interactions as a function of contrast presented to each eye under non-rivalrous dichoptic viewing conditions.  Methods: Activity was measured from early visual cortex (V1 – V3) while participants (n = 5) viewed dichoptic gratings (2-cpd) that independently varied in contrast over time in each eye at 1/6 and 1/8 Hz. We fit a model [(L^m + R^m)/2]^(1/m) to quantify how the neural response was driven by the contrast in each eye (L and R), where m = 1 represents simple averaging, and as m → ∞ the model shifts towards a max rule, where responses are driven by the eye presented with the highest contrast.  Results: Across all visual areas, responses were much closer to a max than a mean model, suggesting that neural responses were primarily driven by the eye presented with the highest contrast. Within V1, similar findings have been described using a normalization model (Moradi & Heeger, 2009). The magnitude of m increased across the visual hierarchy (V1: m = 1.82; R2 = 0.32; V2: m = 12.94, R2 = 0.28; V3: m = 13.31, R2 = 0.27).  Conclusions: The neural response integrating signals from each eye approaches a simple maximum as the contrast signal propagates from V1 through V3. This is consistent with previous behavioral data showing that visually typical observers tend to report perceiving the maximum contrast presented to each eye (Meier et al., 2023).
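
The combination rule above is a one-line computation; a minimal sketch (the contrast values here are illustrative):

```python
import numpy as np

def binocular_response(L, R, m):
    """Combined response [(L**m + R**m)/2]**(1/m):
    m = 1 averages the two eyes' contrasts; as m grows large
    the rule approaches max(L, R)."""
    return ((L ** m + R ** m) / 2.0) ** (1.0 / m)

L, R = 0.2, 0.8                        # contrast in each eye
print(binocular_response(L, R, 1.0))   # mean rule: 0.5
print(binocular_response(L, R, 13.0))  # near-max rule: ~0.76, close to R
```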

Funding acknowledgements:  Knights Templar Eye Foundation, Research to Prevent Blindness, UW Center for Human Neuroscience, Unrestricted grant from Research to Prevent Blindness to UW Department of Ophthalmology


Binocular contrast integration: Cortical and behavioral signals reflect different computations   Kimberly Meier, Department of Psychology, University of Washington

Co-authors: Mark Pettet, Department of Psychology, University of Washington; Taylor Garrison, Department of Psychology, University of Washington; Kristina Tarczy-Hornoch, Department of Ophthalmology, University of Washington; Geoffrey M. Boynton, Department of Psychology, University of Washington; Ione Fine, Department of Psychology, University of Washington

Purpose: Although binocular contrast perception under dichoptic viewing conditions has been extensively characterized behaviorally, little is known about how the signals from each eye are combined in cortex.  Here we compared simultaneously-collected behavioral and EEG measures of dichoptic contrast perception. Methods: Observers (n=16) dichoptically viewed a 2-cpd grating flickering at 7.5 Hz and rotating slowly (1º/s). The contrast of the grating shown to each eye modulated sinusoidally over time at independent rates (1/6 and 1/8 Hz). We recorded EEG activity while observers positioned a joystick lever to report perceived contrast as it changed over time. Analysis: A multiband filter was used to isolate the SSVEP responses to 7.5 Hz signals from electrode Oz (occipital pole), and the standard deviation of this signal provided a measure of neural response amplitude as a function of contrast. Results: A simple model was fit to VEP and behavioral responses, [(L^m+R^m)/2]^(1/m), which essentially characterized whether responses were better fit by a mean (m≈1) or a max (m>1) model. VEP responses (m = 1.0; MSE = 0.013) were well fit by a mean model, suggesting the EEG signal may have been driven by the input layers of V1. In contrast, behavioral responses (m = 8.8; MSE = 0.011) were well fit by a model that was heavily shifted towards a max model, suggesting a significant non-linear transformation of the contrast signal between the input layers of V1 and conscious perception.

Funding acknowledgements:  Knights Templar Eye Foundation, Research to Prevent Blindness, Unrestricted grant from Research to Prevent Blindness to UW Department of Ophthalmology


How important is it to know your own limitations when making perceptual decisions?   Wilson Geisler, University of Texas at Austin

Co-author: Anqi Zhang, University of Texas at Austin

Many models of perceptual performance consist of two components: (1) a representation of the available sensory and memory information, and (2) a Bayes-optimal decision computation. If the model accurately predicts performance, then that finding is often interpreted as evidence for both components of the model. However, it is important and informative to also test heuristic decision rules. Here, we illustrate such analyses for the task of covert visual search, where a target can appear at one of n locations (or be absent) with some prior probability distribution (the “prior map”). The available sensory information can be represented by the signal-to-noise ratio at each retinal location (the “d’ map”) together with the prior map. It can be shown that the Bayes-optimal decision rule is to multiply the response at each location by the d’ at that location, add the log prior for that location, and then pick the location that has the maximum. For a wide range of d’ and prior maps we simulated performance for the optimal and various heuristic decision rules. As expected, varying the d’ and prior maps has a large effect on performance due to the variation in the available sensory information. However, a wide range of simple decision rules perform almost as well as the Bayes-optimal rule. In other words, in the covert search task, our decision processes can be idiosyncratic and almost completely ignorant of the d’ and prior maps and still perform nearly optimally.
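
A stripped-down version of this comparison can be simulated directly, under simplifying assumptions not in the abstract (target always present, unit-variance Gaussian responses, arbitrary d’ and prior maps):

```python
import numpy as np

rng = np.random.default_rng(0)
n_loc = 8
dprime = rng.uniform(0.5, 3.0, n_loc)   # arbitrary d' map
prior = rng.dirichlet(np.ones(n_loc))   # arbitrary prior map

def percent_correct(rule, n_trials=20000):
    correct = 0
    for _ in range(n_trials):
        target = rng.choice(n_loc, p=prior)
        resp = rng.standard_normal(n_loc)  # unit-variance noise at each location
        resp[target] += dprime[target]     # target adds its d' to the response
        if rule == "optimal":
            # multiply each response by its d', add the log prior, pick the max
            score = resp * dprime + np.log(prior)
        else:
            # heuristic: ignore both maps and pick the largest raw response
            score = resp
        correct += (np.argmax(score) == target)
    return correct / n_trials

print(percent_correct("optimal"), percent_correct("heuristic"))
```

Comparing the two printed proportions for different d’ and prior maps is the kind of exercise the abstract describes.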

Funding acknowledgements:  NIH grant EY11747


Perspective-correct rendering for active observers   Phillip Guan, Meta Reality Labs Research

Co-authors: Eric Penner, Meta Reality Labs Research; Joel Hegland, Meta Reality Labs Research; Benjamin Letham, Meta; Douglas Lanman, Meta Reality Labs Research

Stereoscopic, head-tracked display systems can show users realistic, world-locked virtual objects and environments (i.e., rendering perspective-correct binocular images with accurate motion parallax). However, discrepancies between the rendering pipeline and physical viewing conditions can lead to perceived instability in the rendered content resulting in reduced immersion and, potentially, visually-induced motion sickness. Precise requirements to achieve perceptually stable world-locked rendering (WLR) are unknown due to the challenge of constructing a wide field of view, distortion-free display with highly accurate head and eye tracking. We present a custom-built system capable of rendering virtual objects over real-world references without perceivable drift under such constraints. This platform is used to study acceptable errors in render camera position for WLR in augmented and virtual reality scenarios, where we find an order of magnitude difference in perceptual sensitivity. We conclude with an analytic model which examines changes to apparent depth and visual direction in response to camera displacement errors, and visual direction is highlighted as a potentially important consideration for WLR alongside depth errors from incorrect disparity.


Measuring luminance CSFs from the fovea to far peripheries   Kotaro Kitakami, Tokyo Institute of Technology

Co-authors: Suguru Saito, Tokyo Institute of Technology; Keiji Uchikawa, Kanagawa Institute of Technology

We measured luminance contrast thresholds for three subjects in the visual field of the left eye up to 56, 49, 84, and 63 degrees in the nasal, superior, temporal, and inferior directions, respectively. The stimulus was a cosine-Gabor of 10 deg diameter. The stimulus size was constant for all eccentricities. The average luminance of the background was 31 cd/m2. The stimulus duration was 0.5 sec with 0.5 sec increasing and decreasing temporal slopes. Thresholds were measured with the psi procedure in a two-temporal-alternative forced-choice task. The results showed that although at small eccentricities no significant differences in thresholds were found among directions, the differences were prominent at large eccentricities beyond 40 degrees at all frequencies. Most previous peripheral CSFs were based on data measured within certain ranges of eccentricity and extrapolated outside these ranges. We optimized the parameters of the previous CSFs using the present results within these limited ranges, and confirmed that the previous CSFs fitted the present results. Then, we applied the present results at all eccentricities to the previous CSFs. It was found that the previous CSFs tended to be higher than the present results beyond 60 degrees in the temporal direction and beyond 40 degrees in the other directions at all frequencies. This would indicate that the previous CSFs at large eccentricities were not correctly inferred by extrapolation from data measured at small eccentricities.

Funding acknowledgements:  JSPS KAKENHI Grant Number JP18H03247


Diurnal variations in luminance and chromatic contrast sensitivity   Kowa Koida, Toyohashi University of Technology

Sensorimotor performance typically peaks in the evening, influenced by the observer's diurnal arousal level. The arousal level is regulated by neurons in the brainstem, which form connections extending throughout the brain, but these connections are highly heterogeneous. The dorsal visual pathway and superior colliculus receive dense projections, while the ventral visual pathway and lateral geniculate nucleus receive fewer. The former primarily deal with stimuli determined by luminance contrast, while the latter handle chromatic information. How such heterogeneous projections impact human visual detection performance is not yet understood. In this study, we measured the diurnal variation in human contrast sensitivity and investigated whether changes differed between luminance and chromatic stimuli. Results revealed that as the time of day progressed, sensitivity to luminance contrast improved, while sensitivity to color contrast deteriorated.

Funding acknowledgements:  KAKENHI (21H05820)


Leveraging AI to accelerate scientific discoveries   Ipek Oruc, University of British Columbia

Co-authors: Parsa Delavari, University of British Columbia; Gulcenur Ozturan, University of British Columbia; Lei Yuan, University of British Columbia; Ozgur Yilmaz, University of British Columbia

We introduce a structured approach that leverages AI to accelerate scientific discoveries. We showcase the efficacy of this technique via a proof-of-concept study identifying markers of sex in retinal images. Our methodology consists of four stages: In Phase 1, CNN development, we train a VGG model to recognize patient sex from retinal images. Phase 2, Inspiration, involves reviewing post-hoc interpretation tools' visualizations to draw observations and formulate exploratory hypotheses regarding the CNN model's decision process. This yielded 14 testable hypotheses related to potential variances in vasculature and optic disc. In Phase 3, Exploration, we test these hypotheses on an independent dataset, of which nine demonstrated significant differences. In Phase 4, Verification, these nine hypotheses are re-tested on a new dataset, verifying five of them: significantly greater length, more nodes and branches of retinal vasculature, a larger area covered by vessels in the superior temporal quadrant, and a darker peri-papillary region in male eyes. Finally, we conducted a psychophysical study and trained a group of ophthalmologists (N=26) to identify these new retinal features for sex classification. Their performance, initially on par with chance and a non-expert group (N=31), significantly improved post-training (p<.001, d=2.63). These outcomes illustrate the potential of our methodology in leveraging AI applications for retinal biomarker discovery.

Funding acknowledgements:  NSERC Discovery, NSERC Accelerator.


Effect of flicker adaptation on perception of small spots presented with AOSLO   J.T. Pirog, Herbert Wertheim School of Optometry and Vision Science, University of California Berkeley

Co-authors: Vickie Kuo, Herbert Wertheim School of Optometry and Vision Science, University of California Berkeley; William S. Tuten, Herbert Wertheim School of Optometry and Vision Science, University of California Berkeley

Previous work suggests the detection of small spots is mediated by a mixture of chromatic and achromatic mechanisms. We tested whether exposure to spatially-uniform chromatic or luminance flicker affected detection thresholds for 543 nm increments delivered through an AOSLO. Heterochromatic flicker photometry was used to determine isoluminant settings for the red and green primaries of a DLP display; this isoluminant red-green mixture provided the 2.1° background upon which 23 arcmin (N=4) or 3 arcmin (N=2) stimuli were presented for 100ms. The projector background was modulated to produce isoluminant chromatic flicker or isochromatic luminance flicker at 3.75 or 30 Hz. The time-averaged luminance and chromaticity for all adaptation conditions were equivalent. For each condition, data collection was preceded by 2 minutes of preadaptation, followed by alternating windows of stimulus delivery (1 sec, steady background) and top-up adaptation (3 sec). Thresholds for all flicker conditions were compared to data obtained on a static background. For 23 arcmin spots, we found reduced sensitivity in the 3.75 Hz chromatic and luminance flicker conditions, but no adaptation effect was observed for 3 arcmin flashes or for 30 Hz flicker of either type. Our data suggest that raster-scanned, AO-corrected stimuli are susceptible to flicker adaptation, but that proximity to a flickering edge may be an important factor governing the effects of contrast adaptation on small spot detection.

Funding acknowledgements:  National Institutes of Health: R01EY023591, T35EY007139-30; Air Force Office of Scientific Research: FA9550-21-1-0230, FA9550-20-1-0195; Hellman Fellows Program; Alcon Research Institute


Digital dual-Purkinje-image eye tracking enables precise determination of visual receptive fields in fixating macaques   Ryan Ressmeyer, Department of Bioengineering, University of Washington

Co-authors: Jacob Yates, Department of Ophthalmology, University of California Berkeley; Gregory Horwitz, Department of Physiology and Biophysics, University of Washington

Understanding the relationship between visual stimuli and neural activity is a fundamental goal in visual neuroscience. However, the study of visual neurophysiology in awake primates is complicated by the constant occurrence of eye movements, even during periods of nominal fixation. To address this challenge, we adapted a recently developed high-resolution digital dual-Purkinje-image (dDPI) eye tracker (Wu et al., 2023) for use with macaque monkeys. In addition to tracking the Purkinje images, we simultaneously estimate the pupil center and size: a first for video eye tracking. We then sought to evaluate the efficacy of dDPI eye tracking for studying visual processing in fixating macaques by recording single neurons from the lateral geniculate nucleus while a spatially-correlated noise stimulus was displayed. Our analyses show that, as a result of properly accounting for eye movements post-hoc, the predictive performance of generalized linear models improves and the estimated center radii contract to values equal to or smaller than those reported in the literature. Notably, correcting for eye movements using the locations of the pupil center and corneal reflection—the standard method in video eye tracking—yielded worse model fits and larger receptive field sizes. This finding implies that the pupil center is an inaccurate reporter of small eye movements, while the Purkinje images may be veridical during fixations.

Funding acknowledgements:  The research was supported by NIH grant EY032900 to GDH and an NSF predoctoral fellowship to RKR


Dissociation of torsional eye movements and perception during optokinetic stimulation by visual cues of gravity   Raul Rodriguez, University of California, Herbert Wertheim School of Optometry and Vision Science

Co-author: Jorge Otero-Millan, University of California, Herbert Wertheim School of Optometry and Vision Science

An optokinetic stimulus rotating about the naso-occipital axis drives torsional eye movements and causes a bias in perception of upright measured with a subjective visual vertical (SVV) task. In addition, a static image with tilted visual cues of gravity orientation will induce optostatic torsion and bias SVV. We posit that a visual gravity cue provided by a frame combined with a torsional optokinetic stimulus would increase measured torsion and further bias SVV. We use a VR headset with eye-tracking to place the subject in a virtual room with circles forming either a rectangular room (frame condition) or a tubular room (no-frame condition). We place a fixation point at the center of the room while the subject performs a SVV task. The room either rotates about the naso-occipital axis at 0.05 Hz, 0.1 Hz, or 0.2 Hz, with an amplitude of ±20°, or has a static tilt of 0° or ±30°. Static frame conditions had the expected bias of SVV and optostatic torsion compared to the control no-frame condition. In sinusoidal rotation conditions, for SVV, there was a significant difference in the amplitude of the response between the frame (5.4 ± 2.6°; mean ± std) and no-frame (3.0 ± 1.3°) condition, while there was no significant difference for torsion, frame (0.8 ± 0.6°) and no-frame (0.7 ± 0.6°). Our results indicate that while perception integrates a moving visual cue into its estimation of upright orientation, the ocular motor system is only affected by those cues when they are static.


Characteristics and coordination of microsaccades in 6 Dimensions   Roksana Sadeghi, Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley

Co-author: Jorge Otero-Millan, Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley

The study of microsaccades, the small and rapid eye movements that occur during fixation, has focused on horizontal and vertical movements, while their torsional component remains relatively uncharted territory in vision research. We used video eye tracking to investigate microsaccades binocularly with horizontal and vertical movements tracked by pupil and corneal reflection and torsion by iris pattern. Five participants looked at a central dot for 20 trials of 20 seconds while seated with their heads on a chin rest. For each microsaccade (N=2040), we measured the displacement of the eye along each dimension, and defined version as the average and vergence as the difference between the motion of the left and right eye. The average horizontal, vertical, and torsional components were 0.7, 0.3, and 0.1 deg for version and 0.1, 0.1, and 0.1 deg for vergence, respectively. Next, we measured the correlation between each component pair. We found that when the eyes moved to the left or right together, they also rotated by 0.3 deg (top towards the same side) for each degree of horizontal movement (R = 0.94, p < 0.01). When the eyes moved up (down), for each degree of vertical movement, they diverged (converged) 0.08 deg horizontally (R = -0.53, p < 0.01) and rotated 0.09 deg outward (inward; R = -0.47, p < 0.01). There was no strong correlation between other combinations. These results show that microsaccades follow similar kinematics at a minute scale as larger saccades.

Funding acknowledgements:  NEI R00EY027846 and NSF 2107049
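
The version/vergence bookkeeping described above can be sketched in a few lines; the displacement values below are made up for illustration, not the study's data:

```python
import numpy as np

# Hypothetical per-microsaccade displacements in degrees, one row per event,
# columns = horizontal, vertical, torsional. Values are invented.
rng = np.random.default_rng(0)
left = rng.normal(0.0, 0.3, size=(5, 3))            # left-eye displacements
right = left + rng.normal(0.0, 0.05, size=(5, 3))   # right eye, similar motion

# Version: average motion of the two eyes; vergence: their difference.
version = (left + right) / 2
vergence = left - right
```

Component-pair correlations like those reported above would then be computed across events, e.g. `np.corrcoef(version[:, 0], version[:, 2])` for the horizontal-torsional coupling.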


LissEYEjous Tracker - precise fundus tracking device based on ultrafast Lissajous scanning   Maciej Marcin Bartuzel, Institute of Physics, Faculty of Physics, Astronomy and Informatics, Nicolaus Copernicus University; Center for Human Ophthalmic Imaging Research (CHOIR), UC Davis Eye Center

Co-authors: Ewelina Pijewska, Center for Human Ophthalmic Imaging Research (CHOIR), UC Davis Eye Center; Krzysztof Dalasinski, Inoko Vision Ltd.; Szymon Tamborski, Institute of Physics, Faculty of Physics, Astronomy and Informatics, Nicolaus Copernicus University; Ravi S Jonnal, Center for Human Ophthalmic Imaging Research (CHOIR), UC Davis Eye Center; Robert J Zawadzki, EyePOD Imaging Lab, Dept. of Cell Biology and Human Anatomy, UC Davis; Maciej Szkulmowski, Institute of Physics, Faculty of Physics, Astronomy and Informatics, Nicolaus Copernicus University

Retinal eye tracking has emerged as a promising alternative to conventional video-based trackers, offering direct access to retinal coordinates with refined spatial and temporal resolution. These attributes make it attractive for applications ranging from image stabilization in advanced ophthalmic imaging to identifying biomarkers of neurological or ophthalmic disorders that affect eye motility. Existing retinal tracking methods, however, face challenges related to reliance on reference frames and non-uniform sampling in either space or time. In this work we present a new approach to retinal tracking, based on imaging small retinal patches (~1.5–3°) with self-repeating Lissajous scanning patterns. Pattern repetition rates close to 4 kHz are achieved with an optical design that employs two MEMS microscanners with closely matching resonant frequencies, working in mutually perpendicular dimensions. Several examples of fundus images acquired with different Lissajous patterns are presented. From these, eye trajectories may be extracted. Future work will further investigate tracking resolution and its dependence on Lissajous pattern spatial density and repetition rate.

Funding acknowledgements:  National Science Centre, Poland (2023/48/C/ST7/00164); National Institutes of Health (R01-EY-033532, R01-EY-031098, R01-EY-026556, P30-EY-012576)
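
The repetition rate of a Lissajous pattern driven by two resonant scanners is set by the common divisor of their frequencies. A minimal sketch, with illustrative frequencies rather than the authors' actual scanner parameters:

```python
import math
import numpy as np

# Two MEMS scanners on perpendicular axes; the Lissajous pattern repeats at
# the greatest common divisor of the two resonant frequencies.
fx, fy = 24_000, 28_000                 # Hz, illustrative values
rep_rate = math.gcd(fx, fy)             # pattern repetition rate: 4000 Hz

# One full pattern period: x completes fx/rep_rate cycles, y fy/rep_rate.
t = np.linspace(0.0, 1.0 / rep_rate, 2000, endpoint=False)
x = np.sin(2 * np.pi * fx * t)          # horizontal scan position
y = np.sin(2 * np.pi * fy * t)          # vertical scan position
```

Frequency pairs with a large common divisor give fast pattern repetition, which is what makes reference-frame-free tracking at kilohertz rates possible.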


Distortion of perceived visual space after eccentric gaze holding   Terence Tyson, Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley; Human Systems Integration Division, NASA Ames Research Center

Co-authors: Dennis Perez, Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley; Tiffany Lau, Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley; Jorge Otero-Millan, Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley; Department of Neurology, Johns Hopkins University

Previous studies have shown that rebound nystagmus can serve as a behavioral probe of the adaptive properties of the gaze-holding mechanism: after prolonged eccentric gaze holding, upon return to central gaze, the eye tends to drift towards the previously held position. It is not known whether perception of visual space is affected by similar adaptation mechanisms. The current study asks whether eccentric gaze holding changes the perception of space in a relative spatial judgment task. To measure their spatial bias, twelve subjects reported which of two short vertical lines, flashed to the left or right of the display, was closer to a third central line. Perception was assessed after holding eccentric gaze at 40 degrees to the left or right and compared with control trials without eccentric gaze holding. Subjects showed a significant difference in spatial bias between the leftward and rightward gaze-holding conditions (p = 0.04), suggesting that visual space is distorted differently depending on the side where gaze was held. While we did not observe an overall bias under no gaze holding (p = 0.327), we did observe a significant correlation between handedness and spatial bias (r2 = 0.4, p = 0.04). We conclude that gaze holding temporarily distorts the perception of space through a mechanism that may be related to adaptation of the gaze-holding mechanism.


Vernier thresholds of a Poisson-noise-limited computational observer with and without fixational eye movements   Mengxin Wang, Department of Experimental Psychology, University of Oxford

Co-authors: Allie C. Hexley, Department of Experimental Psychology, University of Oxford; Alexander J. H. Houston, School of Mathematics, University of Leeds; Jiahe Cui, Department of Engineering Science, University of Oxford; Daniel Read, School of Mathematics, University of Leeds; Hannah E. Smithson, Department of Experimental Psychology, University of Oxford; David H. Brainard, Department of Psychology, University of Pennsylvania

Vernier acuity is a fundamental measure of spatial vision. We modeled how stimulus encoding by the cones limits Vernier acuity. We determined Vernier thresholds for a computational observer that had access to the Poisson-distributed cone photopigment excitations. The observer also had access to the cone mosaic layout and the stimulus possibilities on each trial. We varied stimulus contrast (100%, 50%, 22%, 11% Michelson contrast) and duration (2, 4, 9, 18 stimulus frames; frame duration 8.33 ms) while fixing other stimulus properties (foveal viewing; two achromatic vertical bars; length 6.2 arcmin; width 1 arcmin; vertical gap 0.1 arcmin). When the retinal image is stationary, Vernier thresholds depend jointly on contrast and duration through contrast energy: squared contrast times duration. Introducing fixational drift eye movements impairs performance when information about the eye path is not available to the computational observer. When the path of fixational drift is made available and used ideally, there is no noticeable difference from the stationary case. The lack of improvement when the path of fixational drift is known exactly may reflect the high density of foveal cones relative to the optical point spread function and the fact that we did not introduce temporal filtering by the visual system. Our results suggest the possibility of a rich interaction between optics, cone sampling, fixational eye movements, post-receptoral filtering, and visual performance.

Funding acknowledgements:  This work was supported by the Engineering and Physical Sciences Research Council [grant number EP/W023873/1].
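
The contrast-energy rule stated above implies that thresholds are matched whenever squared contrast times duration is held constant. A small worked check, using the 8.33 ms frame duration from the stimulus description (the contrast/frame pairings themselves are illustrative):

```python
# Contrast energy E = c^2 * T: halving contrast must be offset by a
# fourfold increase in duration to keep contrast energy constant.
frame_ms = 8.33
conditions = [(1.00, 2), (0.50, 8), (0.25, 32)]     # (Michelson contrast, frames)
energies = [c ** 2 * n * frame_ms for c, n in conditions]
# All three stimuli have equal contrast energy, so for a stationary
# retinal image the model predicts equal detectability.
```
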


Does stimulus image quality affect fixational eye movement characteristics?   Laura K. Young, Newcastle University

Co-authors: Allie C. Hexley, University of Oxford; Jenny C. A. Read, Newcastle University; Hannah E. Smithson, University of Oxford; William S. Tuten, University of California, Berkeley; Austin Roorda, University of California, Berkeley

Drift has been found to be inversely related to visual acuity (higher diffusion constants for observers with lower visual acuity; Clark et al., 2022). However, it is not clear whether this reflects long-term tuning to ocular characteristics or a more dynamic adjustment to image quality. To test this, fixational eye movements were measured using an AOSLO and stimuli were presented through the imaging system at 30 Hz. Image quality was altered by simulating aberrations. Five participants completed a tumbling-E task under three conditions: no aberration, the participant’s natural aberration, and 0.25 D defocus. A 1.2 arcmin gap width E was presented as a 543 nm increment on the 840 nm imaging light for 750 ms. Conditions were randomly interleaved, and feedback was given on each trial. As expected, performance was highest with no aberration, followed by the observer’s natural aberration, and was worst with defocus. However, drift characteristics (bivariate contour ellipse area, BCEA, and diffusion constant) did not vary, suggesting that on a trial-to-trial basis drift was not tuned to stimulus quality. To test whether drift might be tuned over time, two participants repeated the experiment in ordered blocks of trials. There were differences in BCEA between ordered and randomised trials, but these were not consistent between participants. We will present an analysis of the differences found between individuals in terms of the optical aberrations present in their eyes.

Funding acknowledgements:  This work was supported by a UKRI Future Leaders Fellowship (MR/T042192/1), a Reece Foundation Fellowship in Translational Systems Neuroscience (Newcastle University), an Engineering and Physical Sciences Council grant (EP/W004534/1) and a National Institutes of Health grant (R01EY023591).
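
The diffusion constant quoted above is typically recovered from the drift trace via the mean squared displacement, MSD(tau) ~ 4*D*tau for two-dimensional Brownian motion. A minimal sketch with simulated drift; the sampling rate and D are invented, not values from this study:

```python
import numpy as np

# Simulate 2-D random-walk drift and recover its diffusion constant D
# from the mean squared displacement at one lag.
rng = np.random.default_rng(1)
dt, D = 0.001, 10.0                              # s per sample, arcmin^2/s
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(200_000, 2))
path = np.cumsum(steps, axis=0)                  # drift trajectory (arcmin)

tau = 50                                         # lag in samples
disp = path[tau:] - path[:-tau]
msd = np.mean(np.sum(disp ** 2, axis=1))         # MSD at lag tau
D_hat = msd / (4 * tau * dt)                     # estimate of D
```

In practice the estimate is made more robust by fitting MSD against a range of lags rather than a single one.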


Comparison of tripolar and traditional electrodes: Waveform morphology   Mackenzie V. Wise, University of Nevada, Reno

Co-authors: Gideon P. Caplovitz, University of Nevada, Reno; Michael A. Crognale, University of Nevada, Reno

The waveforms of visual evoked potentials (VEPs) recorded by tripolar electrodes are different from those recorded by traditional electrode arrays. Traditional arrays record potentials using both an active and a reference electrode. The location of this reference will affect the waveform amplitude and shape. Conversely, tripolar electrodes measure the surface Laplacian across three concentric rings housed within a single electrode surface. These differences may influence the morphology of evoked responses. We compared the two modalities by recording pattern-reversal VEPs to two sizes of checkerboard. Visual inspection of the VEPs suggests that the signals recorded by the tripolar system have attenuated higher frequencies and increased latency of the major waveform components. Root-mean-square comparison of the two signal types confirms attenuation at the higher frequencies in the tripolar recording. Additionally, there is a cumulative delay present within both the large- and small-check conditions, such that each subsequent component recorded by the tripolar electrodes is shifted increasingly later in time compared with the same components recorded by the traditional electrodes. Such latency shifts may be indicative of a difference in the physiological sources measured by the two EEG systems.


Divided attention in Sign Language recognition   Dave Young, Department of Psychology, University of Washington

Co-authors: Jasmine F. Awad, Department of Psychology, University of Washington; Ione Fine, Department of Psychology, University of Washington; Center for Human Neuroscience, University of Washington; Dina V. Popovkina, Department of Psychology, University of Washington

Previous work suggests that the brain has a limited ability to process multiple visual stimuli during divided attention: for example, people can recognize only one word at a time (White, Palmer, & Boynton, Psych Science 2018). Here, we examine whether American Sign Language (ASL) experience affects divided attention for stimuli: do signers and non-signers differ in their ability to process two signs at once? In a probe recognition paradigm, participants were presented with two letter signs, one or both of which were pre-cued as relevant (single- and dual-task conditions, respectively), and then responded whether a probe sign matched the cued sign(s). The dual-task deficit is the difference in performance between the single- and dual-task conditions and measures the cost of dividing attention. Preliminary data show that hearing non-signers and signers had a similar dual-task deficit (11.8% ± 2.2%, n = 5 vs. 11.4% ± 1.5%, n = 6), with no significant difference between the groups (t(7.27) = 0.18, p = 0.86). The magnitude of these divided-attention effects is consistent with processing limits observed for object judgments (e.g. Popovkina, Palmer, Moore, & Boynton, JoV 2021). Thus, these preliminary results suggest that the attentional demands of ASL sign processing are similar in signers and non-signers.

Funding acknowledgements:  Mary Gates Endowment, Arc of Washington Trust Fund, University of Washington, Special Diversity Fellowship, Bolles Dissertation Funds


Criterion effects in maximum likelihood difference scaling: Similar is not always the opposite of different   Yangyi Shi, Psychology Department, College of Science, Northeastern University

Co-author: Rhea Eskew, Psychology Department, College of Science, Northeastern University

Maximum Likelihood Difference Scaling (MLDS) is an efficient method of estimating perceptual representations of suprathreshold physical quantities (Maloney & Yang, 2003), such as luminance contrast. In MLDS, observers can be instructed to judge which of two stimulus pairs are more similar to one another, or which of the two pairs are more different from one another. If the same physical attributes are used for both the similar and different tasks, the two criteria should produce the same perceptual scales. We estimated perceptual scales for suprathreshold achromatic square patches. Increments and decrements on the mid-gray background were estimated separately. Observers judged which pair of stimuli were more similar in half of the sessions, and which were more different in the other half. For most observers, the two tasks produced the same perceptual scales: a decelerating curve for increment contrasts and a cubic curve for decrement contrasts (cf. Whittle, 1992). These scales predicted forced-choice contrast discrimination thresholds for both increments and decrements. However, for a subset of observers, the ‘more different’ judgments produced scales that accelerated with contrast for both increments and decrements; these scale shapes do not predict their discrimination thresholds. Our results suggest that, even with these simple stimuli, observers in an MLDS experiment may attend to different aspects of the stimulus depending on the assigned task.

Funding acknowledgements:  NSF: BCS-2239456
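
The two MLDS criteria can be made concrete with a toy decision rule: on each trial the observer compares perceived pair differences on an internal scale, plus noise. The scale function and noise level below are hypothetical, not fitted values:

```python
import numpy as np

# Toy MLDS trial: which pair, (a,b) or (c,d), is "more different"?
# psi is a hypothetical decelerating perceptual scale for contrast.
rng = np.random.default_rng(2)

def psi(c):
    return np.sqrt(c)

def more_different(a, b, c, d, noise=0.05):
    # Decision variable: difference of perceived pair differences plus noise.
    delta = abs(psi(a) - psi(b)) - abs(psi(c) - psi(d))
    return delta + rng.normal(0.0, noise) > 0   # True -> (a,b) judged more different

# Under this model the "more similar" instruction simply flips the sign of
# the same decision variable, which is why the two criteria are expected to
# yield the same perceptual scale.
```
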


Posters: Saturday October 7


Alternating orientation of the chromatic pattern VEP improves signal even in the absence of contrast adaptation   Jawshan Ara, Integrative Neuroscience Program, University of Nevada Reno, Nevada, 89557, USA; Department of Psychology, University of Nevada Reno, Nevada, 89557, USA; and Department of Computer Science, University of Nevada Reno, Nevada, 89557, USA

Co-authors: Osman Kavcar, Integrative Neuroscience Program, University of Nevada Reno, Nevada, 89557, USA, and Department of Psychology, University of Nevada Reno, Nevada, 89557, USA; Mackenzie V. Wise, Department of Psychology, University of Nevada Reno, Nevada, 89557, USA; Alireza Tavakkoli, Department of Computer Science, University of Nevada Reno, Nevada, 89557, USA, and Integrative Neuroscience Program, University of Nevada Reno, Nevada, 89557, USA; Michael A. Crognale, Department of Psychology, University of Nevada Reno, Nevada, 89557, USA, and Integrative Neuroscience Program, University of Nevada Reno, Nevada, 89557, USA

The visual evoked potential (VEP) to chromatic pattern reversal is greatly reduced compared to VEPs to pattern onsets. Chromatic pattern onsets produce large and stereotypical waveforms that reliably differ from standard achromatic pattern reversal VEP waveforms used in clinical applications. Rapid contrast adaptation for sustained chromatic but not transient achromatic mechanisms has been suggested as one explanation for these observations. Here we first examined changes in the magnitude of response during recordings to reversing and onset grating patterns that preferentially modulate the L-M, S, and achromatic pathways. Given the evidence for both chromatic and achromatic orientation-selective mechanisms, we then hypothesized that contrast adaptation may be reduced by changing the orientation of the pattern for each reversal or onset. VEPs were recorded for 60 s with 2 onsets/reversals per second using both fixed and alternating (horizontal/vertical) orientations. FFT amplitudes for 6-second windows did not reveal evidence of adaptation for chromatic or achromatic onsets or reversal patterns over the 60-second recording period. Despite this, alternating pattern orientation increased the signal for all chromatic but not achromatic conditions. Although alternating the orientation for reversals increased the signal, the onset responses were still larger, even for non-alternating orientations. Mechanisms other than contrast adaptation must be invoked to explain the results.


Attentional modulation of the achromatic and chromatic reversal VEP   Christabel Arthur, Integrative Neuroscience Program, University of Nevada, Reno

Co-authors: Osman B. Kavcar, Integrative Neuroscience Program, University of Nevada, Reno; Mackenzie V. Wise, Cognitive and Brain Sciences Program, University of Nevada, Reno; Michael A. Crognale, Department of Psychology, University of Nevada, Reno

Previous literature has consistently revealed attentional modulation of the visual evoked potential (VEP) response to achromatic pattern-reversal stimuli but little to no attentional modulation of the VEP response to chromatic pattern onsets. Magnetic resonance imaging (MRI) research, however, has reported modulation of the responses to both achromatic and chromatic pattern-reversal stimuli. Numerous methodological differences, including mode of presentation, stimulus contrast, and attentional demand, make comparison of these results difficult. In this study, we report the results of experiments using comparable perceptual contrasts, pattern reversals, and a coextensive and highly demanding multiple object tracking (MOT) task. Our findings support prior VEP results indicating that although achromatic VEPs are modulated by attention, chromatic VEPs are more robust to attentional modulation, even for highly demanding distractor tasks. We also found that, when compared to a non-attentional condition, the attenuation of the VEP when attending to the MOT task was greater in magnitude than the enhancement of the VEP when attending to the VEP stimulus. This supports prior conclusions that while avoiding active distraction is likely important, ensuring an “attentive state” is not always necessary when recording VEPs. Further experiments are underway to investigate why attentional modulation of chromatic signals in early visual cortex is observed in MRI but not in VEP recordings.


Color and luminance processing in V1 complex cells and artificial neural networks   Luke Bun, Department of Bioengineering, University of Washington

Co-author: Gregory Horwitz, Department of Physiology and Biophysics, University of Washington

Object recognition by natural and artificial visual systems benefits from the identification of object boundaries. A useful cue for the detection of object boundaries is the superposition of luminance and color edges. To gain insight into the suitability of this cue for object recognition, we examined convolutional neural network (CNN) models that had been trained to recognize objects in natural images. Because CNNs are trained to do only a single task, any properties they possess are likely useful for that task. We focused specifically on units in the second convolutional layer that are invariant to contrast polarity, a useful trait for object boundary detection. Some of these units were tuned for a nonlinear combination of color and luminance, which is broadly consistent with a role in object boundary detection. Others were tuned for luminance alone, but few were tuned for color alone. A literature review reveals that V1 complex cells have a similar distribution of tuning. We speculate that this pattern of sensitivity provides an efficient basis for object recognition, perhaps by mitigating the effects of lighting on luminance contrast polarity. The paucity of contrast polarity-invariant representation of chromaticity alone suggests that it is redundant with other representations.

Funding acknowledgements:  This work was supported by grant EY018849 to Gregory D. Horwitz and grant EY07031 to Luke M. Bun


SSVEP measurements of color and spatial frequency response in V1   Alex Carter, University of York

Co-authors: Daniel H. Baker, University of York; Antony B. Morland, University of York; Abbie J. Lawton, University of York; Alex R. Wade, University of York

Intro: Recent work from our group (Segala et al., eLife, 2023) shows that the rules for binocular luminance signal combination depend on spatial frequency (SF). Structured patterns show strong interocular suppression while unstructured inputs (mean-field disks) do not. Here, we used SSVEPs to ask if SF dependence is also found in chromatic pathways. Methods: SSVEPs were recorded from 12 subjects using a canonical V1 template (Poncet & Ales, 2023). Eyes were targeted using shutter goggles and stimuli were contrast-reversing gratings or disks at 5 Hz (left eye) and 7 Hz (right eye). Experimental factors were stimulus SF (disk, 1 cpd grating), chromaticity (LMS, L-M or S-cone isolating) and ocularity (left, right or both). Results: Monocular conditions generated large responses at 2F. In binocular conditions, all 2F responses showed suppression, and significant intermodulation (IM) terms (sums and differences of the inputs, e.g., 2 Hz) were present. The magnitude of both suppression and IM in the binocular condition depended on SF and chromaticity; IM amplitudes were higher for gratings compared to disks in the luminance condition, but higher for disks compared to gratings in the chromatic conditions. Overall, we found significant differences in the spectral response signatures across all stimulus combinations. Conclusion: All inputs undergo binocular combination in V1, but the rules governing the combination appear to depend on both chromaticity and SF.


Light-adaptation clamp: A tool to predictably manipulate photoreceptor light responses   Qiang Chen, University of Washington

Co-authors: Norianne T. Ingram, University of Washington; Jacob Baudin, University of Washington; Juan M. Angueyra, University of Maryland, College Park; Raunak Sinha, University of Wisconsin,  Madison; Fred Rieke, University of Washington

Computation in neural circuits relies on judicious use of nonlinear circuit components. In many cases, multiple nonlinear components work collectively to control circuit outputs. Separating the contributions of these different components is difficult, and this hampers our understanding of the mechanistic basis of many important computations. Here, we introduce a tool that permits the design of light stimuli that predictably alter rod and cone phototransduction currents, including compensation for nonlinear properties such as light adaptation. This tool, based on well-established models of the rod and cone phototransduction cascades, permits the separation of nonlinearities in phototransduction from those in downstream circuits. This will allow, for example, direct tests of the role of photoreceptor adaptation in downstream visual signals or in perception.

Funding acknowledgements:  NIH grant EY028542


Computational analysis of the effect of cone temporal filtering on detection threshold with and without retinal motion   Qiyuan Feng, Department of Psychology, University of Pennsylvania; Department of Brain and Cognitive Science, University of Rochester

Co-authors: Ling-Qi Zhang, Department of Psychology, University of Pennsylvania; Alisa Braun, Herbert Wertheim School of Optometry & Vision Science, University of California, Berkeley; Nicolas P. Cottaris, Department of Psychology, University of Pennsylvania; William S. Tuten, Herbert Wertheim School of Optometry & Vision Science, University of California, Berkeley; David H. Brainard, Department of Psychology, University of Pennsylvania

Retinal stimulus motion can increase visual acuity, but recent experimental evidence indicates that for briefly presented stimuli this benefit does not always occur (Braun et al, VSS 2023). To understand this effect, we modeled how the temporally low-pass filtering that occurs within each cone is expected to impact visual performance. We simulated a grating (10 cpd Gabor) detection task in which the stimulus was present for two 15 msec frames. We used the ISETBio software to compute cone excitations and cone photocurrent responses to the stimulus, for different contrasts and retinal positions, and used linear SVM classifiers to estimate computational-observer detection thresholds. We examined three retinal-motion conditions: 1) stimulus stabilized on the retina; 2) stimulus shifted 0.5 cycle orthogonal to grating orientation across the two frames; 3) stimulus shifted 1 cycle. Across all three conditions, detection threshold estimated on the basis of cone excitations varied little. Threshold estimated on the basis of photocurrent, however, was highest for the 0.5 cycle shift, and about the same for the no shift and 1 cycle shift conditions. Qualitatively, this result recapitulates the findings of Braun et al. and suggests that those findings may be understood as a consequence of the initial visual encoding. We note, however, that we analyzed detection rather than acuity; planned work will attempt to bring the computations into more direct contact with the experimental results.
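
A stripped-down version of this style of analysis can be sketched with Poisson "cone excitations" to blank versus grating stimuli, classified by a fixed linear template — a simplified stand-in for the ISETBio simulation plus trained linear SVM described above. Mosaic size, mean rates, and contrast are invented for illustration:

```python
import numpy as np

# Toy computational observer: discriminate grating-present from blank trials
# using Poisson-distributed cone excitations and a linear template.
rng = np.random.default_rng(3)
n_cones, n_trials = 100, 2000
bg = np.full(n_cones, 50.0)                              # mean blank excitations
grating = bg * (1 + 0.1 * np.sin(np.arange(n_cones)))    # 10% contrast pattern

blank = rng.poisson(bg, size=(n_trials, n_cones))        # blank trials
present = rng.poisson(grating, size=(n_trials, n_cones)) # grating trials

template = grating - bg                                  # linear discriminant
crit = 0.5 * (blank @ template).mean() + 0.5 * (present @ template).mean()
pct_correct = 0.5 * ((present @ template) > crit).mean() \
            + 0.5 * ((blank @ template) <= crit).mean()
```

Sweeping contrast and finding where percent correct crosses a criterion level would give a detection threshold; in the study this is done separately for cone excitations and for photocurrents.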


Spillover effects of color discrimination training on color category boundaries and color appearance   Suzuha Horiuchi, Tokyo Institute of Technology

Co-author: Takehiro Nagai, Tokyo Institute of Technology

Perceptual learning refers to the increase in perceptual sensitivity that results from several days of training on a perceptual task. Although perceptual learning has been shown to be effective in a variety of perceptual tasks, few studies have examined perceptual learning in color perception. In this study, we investigated how color discrimination training at a base color affected various aspects of color perception for entire hues. The training consisted of five days of S color discrimination (300 trials/day) at either the negative or positive L-M base color, depending on the observer groups. Before and after the training, three types of color perception tests (color difference, unique hue, and color category boundary) were conducted for colors with various hues to examine the changes in color perception due to the training. The results showed that the color discrimination thresholds in the training decreased as expected with repeated trials. Interestingly, the training also affected the performance of the three types of tests; the perceived color difference around the training color tended to increase, and some of the unique hues and the color category boundaries shifted significantly toward the training color. These results suggest that only a few days of color discrimination training can spill over to the entire color space and induce distortion of the perceptual color space.

Funding acknowledgements:  JSPS KAKENHI 21KK0203


Constrained color-sorting and the evolution of color terms   Delwin Lindsey, Ohio State University

Co-author: Angela M. Brown, The Ohio State University

Constrained color-sorting—the partition of a color palette into sequential numbers, n, of categories or “piles,” n = {2, 3, …, N}—has been used to examine universal vs. culture-dependent processes that underlie color-term evolution. Here we use sorting to test two hypotheses related to these ideas. 1) If the constrained color sorts (CCSs) depend solely on universal processes that guide color-term evolution, then CCSs should be relatively insensitive to the exact composition of the color palette. 2) Otherwise, CCSs should at least be “optimal” (as measured by well-formedness: Regier et al., 2007). We report three studies, each using a different palette, where English speakers sorted the color samples into 2 … 6 piles; one study also included Somali speakers. All palettes included good examples of the Basic Color Categories (BCCs), but they differed in how hue, saturation, and lightness covaried. Results showed that pile-sorts reflected psychologically salient features of colors, which were mostly linked to BCCs. However, 1) the unfolding of sort-categories with increasing n differed greatly across palettes, and the order in which the sort-categories appeared always deviated from predictions based on classical theories of color term evolution; 2) pile-sorts were not optimal. Taken together, our results reveal strong dependence of CCSs on pre-existing lexical categories but only weak dependence on universal pre-linguistic representations of color related to color-term evolution.

Funding acknowledgements:  National Science Foundation BCS-1152841 to DTL


Kandinsky was right: Few do “express bright yellow in the bass notes, or dark lake in the treble”   Joshua Manfred, Wabash College

Co-authors: Corbin Strimel, Wabash College; Cameron Klabunde, Wabash College; Neil Dittmann, Wabash College; Karen Gunther, Wabash College

Cross-modal correspondence is a sense of the inherent belongingness between two different senses; in our study these were pitch and color. Our goal was to investigate the confound in previous literature of individual differences in color brightness and pitch loudness. We tested twenty male participants. We determined equal brightness for each participant, across six colors: red, orange, yellow, green, blue, and purple; and equal loudness across seven pitches: 125, 250, 500, 2000, 4000, 8000, and 12,500 Hz. Then participants matched pitch with color in three different conditions: prototypical color hues, gray scale, and isobright colors. Our results indicated that in the prototypical condition, the participants chose yellow for high pitches, and blue and purple for the lowest pitches. In the gray scale condition, they chose white for high pitches and black for low pitches. These findings are consistent with previous research in the literature. However, we found that when controlling for individual differences in brightness, participants still chose yellow with higher pitches. Thus, there appears to be an inherent sense of belongingness between yellow and high pitches, even when controlling for the confounds of individual differences in brightness and loudness.


Luminance and chromaticity discrimination sensitivities following a sudden decrease in background luminance   Minwoo Son, Tokyo Institute of Technology

Co-author: Takehiro Nagai, Tokyo Institute of Technology

When we enter a dark place like a tunnel from a bright exterior, our visual sensitivities take some time to adapt to the lower light level. However, there have been few reports about how quickly our sensitivities of luminance and chromaticity discrimination recover in this situation. This study aimed to quantify the time course of discrimination sensitivity for luminance and chromaticity directions after an abrupt decrease in background luminance. In each trial, the background luminance dropped from 100 cd/m² to 1 cd/m². Then, one target and three reference stimuli with different colors were presented under four stimulus onset asynchrony (SOA) conditions. The observer was asked to discriminate the target stimulus from the reference stimuli. The results showed that discrimination sensitivity was lowest right after the background luminance change and gradually improved with SOAs. However, sensitivity recovery differed across color directions, with the most improvement in luminance, followed by S, and negligible change in L-M. There was a statistically significant difference between +S and ±(L-M) sensitivities, indicating that the sensitivity recovery after the sudden background luminance change differed between chromaticity directions. Based on the comparison with previous studies, we speculate that both adaptation and masking may contribute to the temporal change of discrimination sensitivities.

Funding acknowledgements:  This work was supported by Lotte Foundation Scholarship


Image features involved in translucency enhancement by chromaticity information   Mizuki Takanashi, Department of Information and Communications Engineering, Tokyo Institute of Technology

Co-author: Takehiro  Nagai, Department of Information and Communications Engineering, Tokyo Institute of Technology

Previous studies have reported that chromaticity information in object images enhances perceived translucency. The aim of this study was to elucidate, through psychophysical experiments, how color forms image features that contribute to translucency. The stimuli were computer-graphics images of translucent objects with different spectral scattering coefficients (i.e., hues) and various optical and geometrical parameters. Achromatic images with the same luminance as the chromatic images were also created. Perceived translucency was measured using Thurstone's pairwise comparison method, and the effect of chromaticity on translucency was evaluated by comparing translucency for achromatic and chromatic images. The results showed higher perceived translucency for the chromatic object images than for the achromatic ones. Subsequently, we analyzed how different image features correlate with the effects of color on translucency using multiple regression analysis. The results indicated that the luminance-chromaticity correlation, which has been proposed as a potential cue for translucency, could hardly explain the color effects. Rather, we found that the change in brightness contrast in the diffuse components, modulated by the Helmholtz–Kohlrausch (H-K) effect, was strongly and negatively correlated with the color effects. These results suggest that image features that covary with brightness contrast contribute to the color effects on translucency.
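For readers unfamiliar with the scaling step, a generic Thurstone Case V computation can be sketched in a few lines. This is an illustrative implementation, not the authors' code; the win-count matrix and the clipping of extreme proportions are our assumptions.

```python
from statistics import NormalDist

def thurstone_case_v(wins):
    """Estimate perceptual scale values from a pairwise-comparison
    win-count matrix (Thurstone Case V).  wins[i][j] is the number of
    trials on which stimulus i was judged more translucent than j."""
    n = len(wins)
    nd = NormalDist()
    scale = []
    for i in range(n):
        zs = []
        for j in range(n):
            if i == j:
                continue
            total = wins[i][j] + wins[j][i]
            # clip proportions away from 0 and 1 so the z-score is finite
            p = min(max(wins[i][j] / total, 0.01), 0.99)
            zs.append(nd.inv_cdf(p))
        # scale value = mean z-score against all other stimuli
        scale.append(sum(zs) / len(zs))
    return scale

# Hypothetical data: three stimuli, 10 comparisons per pair,
# with stimulus 0 preferred over 1, and 1 over 2.
wins = [[0, 8, 9],
        [2, 0, 8],
        [1, 2, 0]]
s = thurstone_case_v(wins)
```

The resulting values are interval-scale estimates (in units of the assumed common discriminal dispersion), so only differences between them are meaningful.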


Variation of cone spectral composition in the macula   Vimal Prabhu Pandiyan, Department of Ophthalmology, University of Washington

Co-authors: Sierra Schleufer, Department of Ophthalmology, University of Washington, Seattle; Palash Bharadwaj, Department of Ophthalmology, University of Washington, Seattle; Ramkumar Sabesan, Department of Ophthalmology, University of Washington, Seattle

Cone spectral composition is central to the study of color vision and retinal development. However, there is little information on the spatial distribution of L- and M-cones in the macula, given that there are no histological methods to separate them. To overcome this gap, we spectrally classified cones using adaptive optics OCT-based optoretinography in human subjects and described their variation across the macula. To date, we have classified ~130,000 total cones in 9 subjects across 79 regions of interest (ROIs), with a maximum of 16 retinal eccentricities per subject spread along the 4 cardinal meridians. In 2 subjects, the variation in cone spectral topography was compared between both eyes. The L:M cone ratio decreased on the foveal slope (0.4°-1°) but remained relatively uniform in the parafovea from 1.5°-10° eccentricity. The S-cone percentage and S-cone density were consistent with prior histology (Curcio et al., 1991). No significant differences were observed between the fellow eyes of the same subject or in the distribution of cone types across the 4 cardinal meridians. The decreased L:M cone ratio on the foveal slope suggests earlier differentiation of M-cones than L-cones. The stable L:M cone ratio in the parafovea suggests that the greater fall-off in chromatic versus achromatic vision with eccentricity is not explained by cone spectral composition, but is rather attributed to pooling in downstream neurons.

Funding acknowledgements:  NIH grants U01EY032055, EY029710, P30EY001730, Burroughs Wellcome Fund Careers at the Scientific Interfaces, DOD Air Force Office of Scientific Research FA9550-21-1-0230, Unrestricted grant from Research to Prevent Blindness


Cone spacing and S-cone proportion are sufficient to describe varying S-cone regularity across the human central retina   Sierra Schleufer, University of Washington Department of Ophthalmology and Graduate Program in Neuroscience

Co-authors: Vimal Pandiyan, University of Washington Department of Ophthalmology; Bryna Hazelton, University of Washington eScience Institute and Department of Physics; Daniel Coates, University of Washington Department of Ophthalmology, University of Houston College of Optometry; Ramkumar Sabesan, University of Washington Department of Ophthalmology

The topography of S-cones in the human retina is vital to understanding short-wavelength sampling of visual space. In humans, S-cones have been reported as randomly arranged within 2° eccentricity and semi-regular more peripherally. A model describing how S-cone regularity varies across the retina has yet to be formulated. Here we describe such a model, dependent on 2 parameters - the average distance between neighboring cones and the proportion of S-cones - that is sufficient to explain S-cone regularity across the central retina. Cones were classified using AO-OCT optoretinography in ROIs distributed across the 4 cardinal meridians in 2 subjects (12 ROIs each) between 1.3-12.9° eccentricity. The radius of the S-cone exclusion zone - the area surrounding each S-cone where other S-cones are significantly unlikely to appear - was found to be about twice the average distance between neighboring cones in 19/24 mosaics. We found that the measured regularity of S-cone mosaics increases linearly with the proportion of S-cones, which itself increases with eccentricity. Using the average distance between neighboring cones and the proportion of S-cones per ROI as variables, we created a model that simulates S-cone mosaics in good agreement with the observed topography. These results benefit our understanding of the foundational patterns underpinning spectral topography, and the ability to accurately simulate S-cone topography in computational models of early vision.
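The exclusion-zone idea can be conveyed with a short simulation sketch. This is a generic greedy sampler, not the authors' model: cone positions are a plain unit-spaced square grid here for simplicity, and the assignment order and S-cone fraction are illustrative assumptions.

```python
import math
import random

def assign_s_cones(positions, s_fraction, exclusion_radius, seed=0):
    """Greedily label a fraction of cones as S-cones, rejecting any
    candidate that falls closer than `exclusion_radius` to an
    already-accepted S-cone.  Returns indices of the S-cones."""
    rng = random.Random(seed)
    order = list(range(len(positions)))
    rng.shuffle(order)
    target = round(s_fraction * len(positions))
    s_idx = []
    for i in order:
        if len(s_idx) == target:
            break
        if all(math.dist(positions[i], positions[j]) >= exclusion_radius
               for j in s_idx):
            s_idx.append(i)
    return s_idx

# Unit-spaced grid of 400 cones; exclusion radius set to twice the
# neighbor distance, as measured for most mosaics in the study.
grid = [(x, y) for x in range(20) for y in range(20)]
s_cones = assign_s_cones(grid, s_fraction=0.07, exclusion_radius=2.0)
```

By construction, no two simulated S-cones sit closer than the exclusion radius, which is the regularity property the model is meant to reproduce.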

Funding acknowledgements:  NIH grants 5T32EY007031-42, U01EY032055, EY029710, P30EY001730.  Burroughs Wellcome Fund Careers at the Scientific Interface. DOD Air Force Office of Scientific Research FA9550-21-1-0230. University of Washington eScience Institute Incubator Program.


Non-degenerating double cone opsin knockout mouse model of blue cone monochromacy   Mikayla L. Puska, Department of Ophthalmology, University of Washington

Co-authors: Michelle M. Giarmarco, Department of Ophthalmology, University of Washington; Jay Neitz, Department of Ophthalmology, University of Washington; Maureen Neitz, Department of Ophthalmology, University of Washington; James A. Kuchenbecker, Department of Ophthalmology, University of Washington

Ma et al. (2022) performed opsin gene therapy in a mouse model of blue cone monochromacy (BCM). Treatment was only effective for young animals because the retina degenerated, with a significant reduction in the number of viable cones by 3 months. Their mouse was created by mating an Opn1mw knockout, with a gene trap inserted in intron 2 of the Opn1mw gene, to an Opn1sw knockout with the neomycin resistance gene inserted in intron 3 of the Opn1sw gene. The Opn1mw knockout was reported as having “greatly reduced” M opsin expression, while the Opn1sw knockout was a severely hypomorphic allele. Their double opsin gene knockout (DKO) mouse is therefore not a good model of BCM, which is typically a stationary disorder with no cone degeneration. We evaluated Opn1mw Opn1sw DKO mice for cone degeneration; these mice were created by Regeneron by deleting both genes using genome editing. Eyes of 1-year-old DKO animals were processed for cryosections. Sections were immunostained using antibodies against a variety of cone proteins (S and M opsins, arrestin) and markers for retinal degeneration, then imaged by confocal microscopy. Despite the absence of both cone opsins, cones remain viable and morphologically normal, and the retina shows no signs of degeneration at 1 year. This DKO mouse model will be a valuable tool for developing gene therapies targeting cone opsins, and also for understanding color vision circuitry in the retina.

Funding acknowledgements:  UW Vision Core Grant NIH NEI P30-EY001730, Research to Prevent Blindness


XR-based personalized active aid for color deficient observers   Nasif Zaman, University of Nevada, Reno

Co-author: Alireza Tavakkoli, University of Nevada, Reno

In a previous study, Xu et al. (Optics Express, 2022) investigated the efficacy of an active aid, in the form of personalized image enhancement, for increasing color discrimination ability in color-deficient observers (CDOs). The study parameterized the severity of color deficiency, the wavelength shift of the cone spectral fundamentals, and the spectral distribution of the display primaries. The first parameter was derived by computing the confusion index of the CDO, employing a modified version of the FM-100 test (ZJU50Hue). The second parameter was determined via evaluation of a wavelength-shifted ZJU50Hue test on color-normal observers (CNOs). The three parameters were used to model the gamut mapping between CNO and CDO. In this study, extended reality (XR) based modules were developed to acquire these parameters and consequently tailor the headset display to assist CDOs. We chose to implement the Cambridge Colour Test (CCT) instead of the ZJU50Hue test, as threshold results along the protan, deutan, and tritan lines are more informative than a single confusion index. Preliminary results on a calibrated Varjo XR-3 headset suggest a high correlation between the standard CCT and our XR-based trivector test. As the calibration, simulation, and modeling processes all take place in the same HMD, we intend to implement the CNO-CDO gamut mapping as a post-process graphics shader to enhance the camera input of the XR-3, and to perform a paper-based Ishihara test to evaluate real-world color discrimination efficacy.

Funding acknowledgements:  FA9550-21-1-0207


The impact of the eye’s longitudinal chromatic aberration on visual acuity and accommodation response   Tianlun Zou, Center for Visual Science, The Institute of Optics, University of Rochester

Co-authors: Sara Aissati, Center for Visual Science, Flaum Eye Institute University of Rochester; Susana Marcos, Center for Visual Science, Institute of Optics, Flaum Eye Institute, University of Rochester

Chromatic composition of displays might affect vision and accommodation, possibly influencing myopia development. We investigated differences in visual acuity (VA) and accommodative lag (AccL) for steady accommodative demands (up to 5 D, in 1 D steps) with visual stimuli illuminated by monochromatic wavelengths (mono-λ: 480, 555, and 630 nm, 3 nm bandwidth) and white light (WL). Data were obtained in 3 young emmetropes using an adaptive optics system with a supercontinuum laser, a DMD for stimulus presentation, and an optotunable lens to change vergence. Best focus for far was set at 555 nm. VA was measured using QUEST (tumbling E). AccL was obtained from the peak shift of the through-focus Visual Strehl, calculated from HS aberrometry. All subjects showed myopic shifts in blue, consistent with longitudinal chromatic aberration. However, the response to wavelength differed across subjects. S#1 showed sustained VA across distances (average VA standard deviation 0.054, -0.005 logMAR), low AccL (slope: 0.3 D/D), and a systematic negative SA shift (slope: -0.04 μm/D), similar across mono-λ and WL. S#2 showed more sustained VA, lower AccL, and higher SA change in blue (0.09 logMAR std, 0.3 D/D, -0.02 μm/D) than in WL (0.14 logMAR std, 0.52 D/D, -0.013 μm/D). S#3 showed a steeper decrease in VA at near and higher AccL for mono-λ (0.218 logMAR std, 0.78 D/D, on average) than in WL (0.05 logMAR std, 0.38 D/D). Different subjects use chromatic cues in different ways to accommodate, likely affected by the interplay of chromatic blur, depth of focus, and defocus sign perception.
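The lag measure is simple to state concretely: AccL is the accommodative demand minus the response, where the response is read off as the defocus at which the through-focus Visual Strehl curve peaks. The sketch below shows that readout; the sample numbers are invented for illustration.

```python
def accommodative_lag(demand_d, defocus_d, visual_strehl):
    """Accommodative lag in diopters: demand minus the accommodative
    response, where the response is taken as the defocus at which the
    through-focus Visual Strehl curve peaks."""
    peak = max(range(len(visual_strehl)), key=visual_strehl.__getitem__)
    return demand_d - defocus_d[peak]

# Hypothetical through-focus curve for a 3 D demand, peaking at 2.5 D
defocus = [1.5, 2.0, 2.5, 3.0, 3.5]
strehl = [0.10, 0.22, 0.31, 0.27, 0.15]
lag = accommodative_lag(3.0, defocus, strehl)  # 0.5 D of lag
```

In practice one would interpolate the through-focus curve rather than take the sampled maximum, but the principle is the same.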

Funding acknowledgements:  Funding: P30 Core Grant EY001319-46, Unrestricted Grant Research to Prevent Blindness (Flaum Eye Institute, University of Rochester NY), Meta Reality Labs, National Institutes of Health-NEI R01 EY35009


Investigating photoreceptor function in disease-affected retinas using optoretinography   Reddikumar Maddipatla, Center for Human Ophthalmic Imaging Research (CHOIR), UC Davis Eye Center, Sacramento, CA 95817, USA; EyePOD Imaging Lab, Dept. of Cell Biology and Human Anatomy, UC Davis, Davis, CA 95616, USA.

Co-authors: Christopher Langlo, Center for Human Ophthalmic Imaging Research (CHOIR), UC Davis Eye Center, Sacramento, CA 95817, USA; Kari Vienola, Institute of Biomedicine, University of Turku, FI-20014 Turku, Finland; Maciej Bartuzel, Center for Human Ophthalmic Imaging Research (CHOIR), UC Davis Eye Center, Sacramento, CA 95817, USA; EyePOD Imaging Lab, Dept. of Cell Biology and Human Anatomy, UC Davis, Davis, CA 95616, USA; Ewelina Pijewska, Center for Human Ophthalmic Imaging Research (CHOIR), UC Davis Eye Center, Sacramento, CA 95817, USA; EyePOD Imaging Lab, Dept. of Cell Biology and Human Anatomy, UC Davis, Davis, CA 95616, USA; Faculty of Physics, Astronomy, and Informatics, Nicolaus Copernicus University in Torun, Grudziądzka 5, 87-100 Torun, Poland; Robert Zawadzki, Center for Human Ophthalmic Imaging Research (CHOIR), UC Davis Eye Center, Sacramento, CA 95817, USA; EyePOD Imaging Lab, Dept. of Cell Biology and Human Anatomy, UC Davis, Davis, CA 95616, USA; Ravi Jonnal, Center for Human Ophthalmic Imaging Research (CHOIR), UC Davis Eye Center, Sacramento, CA 95817, USA

Assessing the functional response of photoreceptors is vital in understanding retinal disease progression. Traditional subjective methods like visual acuity and visual fields, and objective ones like electroretinography, have limitations. An ideal complement to these techniques is optoretinography (ORG), which images the retina and tests its function at once. ORG utilizes the phase of the optical coherence tomography (OCT) signal to quantify nanometer-scale changes, measuring subtle photoreceptor responses to stimuli. Efforts to observe stimulus-evoked responses in human cone photoreceptors began with adaptive optics (AO) and common path interferometry, enabling the resolution and tracking of individual cells. Advances in OCT systems with cellular resolution through AO or digital aberration correction successfully measured ORG responses from single cones and rods. This method tracks phase differences between outer segment tips (COST or ROST) and the inner-outer segment junction (IS/OS) to assess individual cell responses. A novel velocity-based method recently demonstrated the feasibility of measuring ORG signals with clinical-grade OCT systems. In the present work, we implemented this technique on disease-affected human retinas, revealing lower magnitudes of response compared to healthy retinas, and highlighting its potential clinical applications.
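The phase-to-displacement conversion at the heart of phase-based ORG is compact enough to show directly. The relation below is the standard double-pass conversion from an OCT phase difference to an optical-path-length change; the 840 nm center wavelength in the example is an assumed value, not one reported in this work.

```python
import math

def phase_to_opl_change_nm(delta_phase_rad, center_wavelength_nm):
    """Convert an OCT phase difference between two retinal layers
    (e.g., IS/OS and COST) into an optical-path-length change in nm.
    The factor 4*pi accounts for the double pass of light through
    the sample."""
    return center_wavelength_nm * delta_phase_rad / (4 * math.pi)

# A pi-radian phase change at an assumed 840 nm center wavelength
change_nm = phase_to_opl_change_nm(math.pi, 840.0)  # 210 nm
```

This nanometer-scale sensitivity, far below the axial resolution of the OCT system itself, is what allows ORG to register the subtle outer-segment deformations described above.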

Funding acknowledgements:  National Institutes of Health (R01-EY-033532, R01-EY-031098, R01-EY-026556, P30-EY-183 012576)


Understanding scattering in high-resolution retinal imaging   Brian Vohnsen, University College Dublin

Co-authors: Salihah Qaysi, Jazan University; Aishwarya Chanady Babu, University College Dublin; Stacey Choi, Ohio State University; Nathan Doble, Ohio State University

In conventional fundus images, contrast relates to scattering and absorption of light. Here, we report on modeling of high-resolution retinal imaging to understand the role of mitochondrial light scattering in the photoreceptor ellipsoid. We evaluate the directional appearance of the photoreceptors and its relation to drusen appearance in age-related macular degeneration (AMD). The refractive index contrast of the mitochondria collectively resembles that of micron-sized reflectors. Obliqueness of the scattering results in vignetting by the pupil that impacts the brightness of the photoreceptors. Our hypothesis is that in early-stage AMD, drusen will perturb photoreceptor alignment and thus cause oblique scattering and reduced brightness on the slopes of drusen in high-resolution retinal images. We model how this affects the observable photoreceptor mosaic both in the healthy eye and in eyes with AMD. The results confirm that the annular dark ring around drusen is likely caused by oblique scattering. We use a model eye to test the hypothesis with a reflective mask emulating the photoreceptor mosaic. An SLM is used for amplitude modulation in the pupil, and differential imaging allows for capturing off-axis images with an appearance like that of phase-contrast images. Further refinement of the model eye may be a valuable way to test new imaging modalities prior to their implementation in the real eye.


Development of transparency in the human lens cells   John Clark, University of Washington

As the neural tube forms in the fifth week of human embryonic development, a few ectodermal cells adjacent to the neural plate in the trilaminar embryo swell, thicken, and compress their intercellular space to form a minute lens placode. At this early stage, the embryonic cells in the lens placode resemble neuroectoderm, enriched in cytoskeletal proteins, junctions, membrane channels, and stress response proteins. With continued differentiation, intracellular organelles disappear to decrease light scattering. The expression of high concentrations of crystallin proteins increases the index of refraction, “n”, for optimal optical function. In each cell, the condensed cytoplasmic proteins organize into the short-range order necessary for maximal transmission of light. Actin, intermediate filaments, and microtubules provide a scaffold for the organization of connexins and aquaporin channels in the membranes, supporting a unique microcirculation for the symmetric transport of fluid, ions, and nutrients to the layers of elongating, close-packed, hexagonal cells, the basis for this unique biological optical element. The entire optical system is stabilized by molecular and cellular mechanisms designed to protect optical function for a lifetime. Even with the specialized protective mechanisms in lens cells, loss of transparency in cataract formation is one of the most common effects of cellular aging, accounting for over 50% of vision impairment worldwide.


Intravitreal gene therapy in primate reaches extrafoveal cones   Briyana Bembry-Colegrove, Department of Ophthalmology, University of Washington

Co-authors: Michelle Giarmarco, University of Washington Department of Ophthalmology; Rachel Barborek, University of Washington Department of Ophthalmology; Jessica Rowlan, University of Washington Department of Ophthalmology; James Kuchenbecker, University of Washington Department of Ophthalmology; Dragos Rezeanu, University of Washington Department of Ophthalmology; Jay Neitz, University of Washington Department of Ophthalmology; Maureen Neitz, University of Washington Department of Ophthalmology

Intravitreal delivery of gene therapy vectors to the retina carries lower risk of adverse events than subretinal injection, but efficiently targeting cones is a challenge. We used a new adeno-associated virus (AAV) vector to deliver genes to primate cone photoreceptors. The vector carries a cassette directing expression of an engineered 493 nm opsin to long- and middle-wavelength (L/M) cones, and was injected into the vitreous of the left eye of an adult macaque. An identical AAV carrying a fusion of the engineered opsin to green fluorescent protein (GFP) was injected into the right eye. Electroretinograms were performed on the left eye before and after injection to measure isolated 493 nm light responses; 5 weeks post-injection, the response had increased modestly. A central strip of the right eye was prepared for histology with cryosections; we found ~30% of cones in the fovea had been transduced, with a preference toward L/M cones (see https://iovs.arvojournals.org/article.aspx?articleid=2782955). Upon close examination of GFP in the peripheral retina, we were surprised to find extensive expression in cones across the retina. Here, we report patches of expression from the perifovea to the retinal margin reaching ~10% of cones. Expression patches appeared stochastically, or in regions containing blood vessels or disrupted Müller cells. This demonstrates that extrafoveal expression is attainable using intravitreal injection of gene therapy vectors in an adult primate.

Funding acknowledgements:  UW Vision Core grant NIH NEI P30-EY001730, Research to Prevent Blindness


Can a drug for liver disease be used to treat Age-Related Macular Degeneration?   Kriti Pandey, Department of Biochemistry, University of Washington

Co-authors: Daniel T Hass, Department of Biochemistry, University of Washington; Rayne R Lim, Department of Ophthalmology, University of Washington; Noah Horton, Department of Biochemistry, University of Washington; Jennifer R Chao, Department of Ophthalmology, University of Washington; James B Hurley, NA

Age-related macular degeneration (AMD) is one of the most common degenerative eye diseases among the older population. One of its primary pathological features is the accumulation of fatty deposits known as drusen between the retinal pigment epithelium (RPE) and Bruch’s membrane. Few options exist that can slow AMD-associated retinal degeneration. One may be to enhance the RPE’s ability to oxidize fatty acids, to delay or prevent drusen accumulation. Firsocostat is a small molecule that increases fatty acid oxidation in the liver. We determined the ability of firsocostat to increase fatty acid oxidation and to decrease lipid content in human induced pluripotent stem cells (iPSCs) differentiated into RPE. In iPSC-RPE, firsocostat boosts fatty acid oxidation, remodels lipid profiles, and reduces apolipoprotein release. These data suggest that firsocostat may alleviate a pathological increase in drusen deposits and could help preserve vision in AMD patients.

Funding acknowledgements:  D.T.H is supported by a Brightfocus Foundation Postdoctoral Fellowship (M2022003F). J.B.H is supported by NEI RO1EY06641, RO1EY017863 and R21032597 and Foundation Fighting Blindness TA-NMT-0522-0826-UWA-TRAP.


Contributed Talks I: Friday October 6


Human cone response models for optoretinography with FF-SS-OCT and adaptive optics   Ewelina Pijewska, Center for Human Ophthalmic Imaging Research (CHOIR), UC Davis Eye Center, Sacramento, CA 95817, USA; EyePOD Imaging Lab, Dept. of Cell Biology and Human Anatomy, UC Davis, Davis, CA 95616, USA.

Co-authors: Denise Valente, Dept. of Physics, Universidade Federal de Pernambuco, Recife, PE 50740-540, Brazil; Kari V Vienola, Institute of Biomedicine, University of Turku, FI-20014 Turku, Finland; Ratheesh Meleppat, Center for Human Ophthalmic Imaging Research (CHOIR), UC Davis Eye Center, Sacramento, CA 95817, USA; Robert J Zawadzki, Center for Human Ophthalmic Imaging Research (CHOIR), UC Davis Eye Center, Sacramento, CA 95817, USA; Ravi S Jonnal, Center for Human Ophthalmic Imaging Research (CHOIR), UC Davis Eye Center, Sacramento, CA 95817, USA

Recent work has shown that human rod and cone outer segments (ROS and COS, respectively) deform in response to visual stimuli. In phase-based optoretinography (ORG), the phases of backscattered light from the inner/outer segment junction (IS/OS) and the COS/ROS tips (COST/ROST) are measured, which allows observation of stimulus-evoked, nanometer-scale changes in OS length. In this work, we used a full-field swept-source OCT system with AO that allowed volume rates up to the kHz range. ORG responses were recorded in two healthy volunteers, with photopigment bleaching levels in the range of 1-60%, and modeled using an exponential sum. The proposed harmonic-oscillator-based response model allowed us to describe the shape of the cones' ORG responses by deflection amplitudes and relaxation times. Our preliminary results show that responses to complex stimuli were consistent with photopigment availability, which, in the context of the consensus theory that adaptation in cones is mediated by photopigment, suggests that the ORG may be a useful way to probe light adaptation in cones. The development of simple quantitative parameters describing the ORG response should benefit future clinical applications and help to track the progress of blinding diseases.
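The flavor of such a parametric response model can be conveyed with a toy two-component version. The specific form below (one decaying and one saturating exponential) and all of its parameter values are our illustration, not the authors' fitted model.

```python
import math

def org_response(t_s, a_fast_nm, tau_fast_s, a_slow_nm, tau_slow_s):
    """Toy ORG response model: a fast initial deflection that relaxes
    with time constant tau_fast_s, plus a slower saturating elongation
    with time constant tau_slow_s.  Returns an OS length change in nm
    at time t_s (seconds) after stimulus onset."""
    if t_s < 0:
        return 0.0
    return (a_fast_nm * math.exp(-t_s / tau_fast_s)
            + a_slow_nm * (1.0 - math.exp(-t_s / tau_slow_s)))

# Invented parameters: an early -40 nm deflection relaxing in ~10 ms,
# followed by a ~300 nm elongation building over ~1 s.
early = org_response(0.0, -40.0, 0.01, 300.0, 1.0)   # -40 nm at onset
late = org_response(10.0, -40.0, 0.01, 300.0, 1.0)   # near the 300 nm plateau
```

Fitting amplitudes and time constants of this kind to measured phase traces is what yields the compact descriptors (deflection amplitudes, relaxation times) mentioned above.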

Funding acknowledgements:  National Institutes of Health (R01-EY-033532, R01-EY-031098, R01-EY-026556, P30-EY-183 012576)


Correlating cone structure and function in retinitis pigmentosa using coarse-scale optoretinography (CoORG)   Teng Liu, Department of Ophthalmology, University of Washington

Co-authors: Vimal Prabhu Pandiyan, Department of Ophthalmology, University of Washington; Benjamin Wendel, Department of Ophthalmology, University of Washington; Emily Slezak, Department of Ophthalmology, University of Washington; Debarshi Mustafi, Department of Ophthalmology, University of Washington; Seattle Children’s Hospital; Jennifer Chao, Department of Ophthalmology, University of Washington; Ramkumar Sabesan, Department of Ophthalmology, University of Washington

Optoretinography (ORG) has the potential to serve as a powerful diagnostic biomarker, owing to its sensitive and objective localization of function and dysfunction. The majority of ORG implementations employ adaptive optics (AO) to image activity at a cellular scale. Coarse-scale optoretinography (CoORG), an ORG paradigm without AO, trades cellular resolution for rapid, extended-field recordings and wider applicability in patients with retinal disease. This study investigates the feasibility of CoORG for assessing cone dysfunction in patients diagnosed with retinitis pigmentosa (RP). Five RP patients aged 26-60 were recruited, alongside age-similar controls. The stimulus for evoking cone activity had a photon density between 15.5×10⁶ and 19.7×10⁶ photons/μm², and was centered at 532 ± 5 nm. Eight imaging trials per bleach were performed, allowing 1 min between successive trials for dark adaptation. The total experimental time for each bleach was 10-20 min. In RP, cone function, estimated as the stimulus-evoked change in optical path length of the outer segment, was diminished and generally lower than in normal controls. This deficit was observed in areas of seemingly normal outer retinal structure. Unlike in controls, no correlation was observed between outer segment length and cone function in RP. This highlights CoORG's potential for early, sensitive detection of retinal dysfunction prior to apparent structural degradation.

Funding acknowledgements:  NIH grants U01EY032055, EY029710, P30EY001730, Burroughs Wellcome Fund Careers at the Scientific Interfaces,  Foundation for Fighting Blindness, Unrestricted grant from the Research to Prevent Blindness


Inner limiting membrane peel extends in vivo calcium imaging of retinal ganglion cells (RGCs) beyond the fovea in non-human primate   Hector Baez, Center for Visual Science, University of Rochester

Co-authors: Jennifer Laporta, Center for Visual Science, University of Rochester; Amber Walker, Center for Visual Science, University of Rochester; William Fischer, Center for Visual Science, University of Rochester; Rachel Hollar, Center for Visual Science, University of Rochester; Sara Patterson, Center for Visual Science, University of Rochester; David DiLoreto, Department of Ophthalmology, University of Rochester Medical Center; Vamsi Gullapalli, Department of Ophthalmology, University of Rochester Medical Center; Juliette McGregor, Center for Visual Science, University of Rochester

Viral expression of the calcium indicator GCaMP in primate RGCs has enabled optical readout of retinal function at a cellular scale in vivo. To date, functional recording has been limited to transduced RGCs close to the foveal pit. In this study, we evaluate ILM peeling as a strategy to expand the area of transduced RGCs and allow functional recording beyond the fovea in the living eye. 4 eyes of 3 immunosuppressed Macaca fascicularis received a 9-12° ILM peel centered on the fovea, followed by intravitreal injection of GCaMP8s 4-8 weeks post-peel. A 660 nm flickering visual stimulus drove RGC GCaMP responses, which were recorded with fluorescence adaptive optics scanning laser ophthalmoscopy. In all eyes, GCaMP was expressed throughout the peeled area, representing a mean 8-fold enlargement in the area of expression relative to a control eye with no peel. Functional responses were obtained from RGCs at maximum eccentricities of 11.7°, 8.0°, 9.7°, and 13.7°, and cells could be classified as ON or OFF types up to the edge of the peel. Mean RGC responses in ILM-peeled and control eyes of the same animal were comparable at 3.5°, and longitudinal tracking of individual RGCs showed stable responses up to 6 months post-peel. ILM peeling substantially expands the region of primate retina accessible for in vivo GCaMP imaging beyond the foveal ring of RGCs. This presents new opportunities for physiological study of the retina and pre-clinical testing of novel therapies in retinal degeneration models.

Funding acknowledgements:  Research reported in this publication was supported by the National Eye Institute of the National Institutes of Health under Audacious Goals Initiative funding Award No. York U24 EY033275 Accelerating photoreceptor replacement therapy with in-vivo cellular imaging of retinal function in primate, P30 EY001319 (core) and F32 EY032318 Foveal ganglion cell function in the living eye. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Inst. of Health. This study was supported by an Unrestricted Grant to the University of Rochester Department of Ophthalmology from Research to Prevent Blindness.


The perceptual experience of optogenetic vision   Ezgi Irmak Yücel, Department of Psychology, University of Washington

Co-authors: Vaishnavi Mohan, Department of Psychology, University of Washington, Seattle; Geoffrey M. Boynton; Ariel Rokem, Department of Psychology, Center for Human Neuroscience, University of Washington, Seattle; Ione Fine, Department of Psychology, Center for Human Neuroscience, University of Washington, Seattle

Optogenetic therapy for retinal degenerative diseases aims to elicit light responses in remaining retinal cells (bipolar and/or ganglion cells). Animal models suggest that these proteins have lower sensitivity and slower kinetics than neurotypical vision. Here we describe a framework for simulating ‘virtual patients’ to quantify the predicted perceptual experience of optogenetic vision. We simulated the neural responses of rd1 mouse retina expressing 4xBGAG12,460:SNAP-mGluR2 (Holt et al., 2022) and used this simulation to generate virtual patients: sighted participants viewing the visual stimulus filtered through our simulations. We measured the visual performance of these virtual patients (n=6) using temporal contrast sensitivity functions. Virtual patients had a 10-fold loss of sensitivity, which was exacerbated at higher temporal frequencies, corresponding to a loss of Snellen acuity from ~20/40 at low temporal frequencies to ~20/100-20/200 at high temporal frequencies. We predict that the ability to process fast-moving objects may be impaired in optogenetic vision, and that patients with uncontrollable nystagmus may be poor candidates for optogenetic treatments with sluggish kinetics. Our virtual patient framework can easily be extended to simulate any optogenetic protein, and thereby provides a way to quantify and compare the expected perceptual performance of different opto-proteins based on in vitro retinal data.
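One way to picture why high temporal frequencies suffer most: sluggish kinetics act like a temporal low-pass filter on the stimulus. The first-order filter below is our simplification for illustration, not the simulation used in the study, and the time constant is an invented value.

```python
import math

def lowpass_gain(freq_hz, tau_s):
    """Amplitude gain of a first-order low-pass filter, a crude
    stand-in for slow opsin kinetics.  A larger time constant tau_s
    attenuates high temporal frequencies more strongly."""
    return 1.0 / math.sqrt(1.0 + (2.0 * math.pi * freq_hz * tau_s) ** 2)

# With a hypothetical 100 ms time constant, 10 Hz flicker is strongly
# attenuated while 1 Hz flicker is barely affected.
slow = lowpass_gain(10.0, 0.1)
fast = lowpass_gain(1.0, 0.1)
```

Under such a filter, contrast sensitivity losses grow with temporal frequency, which is the qualitative pattern the virtual patients exhibited.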

Funding acknowledgements:  NEI


The effects of monocular and binocular retinal image minification on eyestrain    Iona McLean, Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley

Co-authors: Esther Sherbak, Herbert Wertheim School of Optometry and Vision Science; Loganne Mikkelsen, Herbert Wertheim School of Optometry and Vision Science; Ian Erkelens, Meta Reality Labs; Robin Sharma, Meta Reality Labs; Emily Cooper, (1) Herbert Wertheim School of Optometry and Vision Science (2) Helen Wills Neuroscience Institute

While corrective spectacles have been worn for centuries, relatively little is known about the physical and perceptual effects associated with their optical distortions. Retinal image minification, for example, is caused by myopic spectacle correction and may also occur in near-eye displays. Previous work suggests that different amounts of minification (or magnification) between the eyes can produce perceptual distortions and oculomotor discomfort, but the extent to which these effects are problematic is unknown. In our first study, forty observers wore minifying spectacles of 2% or 4% in both eyes (binocular), just one eye (monocular), or neither eye (control). After performing a task that incorporated reading, interacting with objects, and visual search, participants reported their symptoms. Overall, participants found monocular minification to be slightly more uncomfortable than the binocular counterpart. Monocular minification produced greater eyestrain and self-reported difficulty interacting with objects. In a second study, we investigated how these two symptoms change after one hour of adaptation to monocular 4% lenses. We found that both symptoms worsened during adaptation. Interestingly, the difficulty interacting with objects declined soon after the lenses were removed, while eyestrain persisted. Taken together, these studies indicate specific types of discomfort during natural tasks that may be reduced through improving optical lens designs in the future.

Funding acknowledgements:  Funded by the National Science Foundation (Award #2041726) and Meta Reality Labs.


Specific and non-linear effects of glaucoma on optic radiation tissue properties    John Kruper, University of Washington

Co-authors: Adam Richie-Halford, Stanford University; Noah Benson, University of Washington; Sendy Caffarra, University of Modena; Julia Owen, University of Washington; Yue Wu, University of Washington; Aaron Lee, University of Washington; Cecilia Lee, University of Washington; Jason Yeatman, Stanford University; Ariel Rokem, University of Washington

Changes in sensory input with aging and disease affect brain tissue properties. To establish the link between glaucoma, the most prevalent cause of irreversible blindness, and changes in major brain connections, we characterized white matter tissue properties using diffusion MRI measurements in a large sample of subjects with glaucoma (N=905; age 49-80) and healthy controls (N=5,292; age 45-80) from the UK Biobank. Confounds due to group differences were mitigated by matching a sub-sample of controls to glaucoma subjects. A convolutional neural network (CNN) accurately classified whether a subject had glaucoma using information from the primary visual connection to cortex (the optic radiations, OR), but not from non-visual brain connections. On the other hand, regularized linear regression could not classify glaucoma, and the CNN did not generalize to classification of age group or of age-related macular degeneration. This suggests a unique non-linear signature of glaucoma in OR tissue properties.

Funding acknowledgements:  This project was funded by NSF grant 1934292 (PI: Balazinska), NIH grant R01 AG 060942 (PI: Lee), NEI grant R01 EY033628 (PI: Benson), NIH grant RF1 MH121868 (PI: Rokem), NIH grant R01HD095861 (PI: Yeatman). SC was funded by the “Rita Levi Montalcini” program, granted by the Italian Ministry of University and Research (MUR). Unrestricted and career development award from RPB (Julia Owen, Yue Wu, Cecilia Lee, Aaron Lee), Latham Vision Science Awards (Julia Owen, Yue Wu, Cecilia Lee, Aaron Lee), NEI/NIH K23EY029246 (Aaron Lee) and NIA/NIH U19AG066567 (Julia Owen, Yue Wu, Cecilia Lee, Aaron Lee, Ariel Rokem).


Contributed Talks II: Saturday October 7


High-resolution assessment of saccadic landing positions for S-cone-isolating targets   Yiyi (Charlotte) Wang, Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley

Co-authors: Congli Wang, Department of Electrical Engineering and Computer Sciences, University of California, Berkeley; Ren Ng, Department of Electrical Engineering and Computer Sciences, University of California, Berkeley; William S. Tuten, Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley

The role of S-cone signals in guiding visuomotor behavior is not fully understood. Previously, we used high-resolution retinal tracking during a visual search-and-identification task to show that the preferred retinal locus (PRL) of fixation for S-cone-isolated targets was larger than and offset from the PRL measured with L/M-isolating optotypes (Wang et al, ARVO 2023). Here, we present an analysis of saccadic landing behavior under these conditions. We used an adaptive optics ophthalmoscope to record retinal videos while subjects (N = 6) made small saccades to a tumbling-E stimulus that appeared at random loci within a 3x3 square grid with 0.5° spacing. Subjects reported stimulus orientation via keypress, after which the target moved to a new location. Retinal videos recorded during each experiment were used to extract eye position traces and localize stimuli in retinal coordinates. Saccade PRLs were computed from the post-saccadic retinal landing positions using the ISOA method. The mean (± SEM) saccade PRL areas were 122 ± 8.1 arcmin² and 525 ± 133 arcmin² for the L/M- and S-cone conditions, respectively (p<0.01; Wilcoxon rank-sum test). For both conditions, the post-saccadic ISOA size decreased over the course of ~300 ms. The average displacement between the L/M- and S-cone saccade PRLs was 7.72 ± 1.24 arcmin, similar to that reported previously for fixation, suggesting the retinal locus directed to a target of interest depends on the visual pathway mediating its detection.

Funding acknowledgements:  This work was supported by NEI Bioengineering Research Partnership R01EY023591, NEI 5T35EY007139, American Academy of Optometry Foundation Ezell Fellowship, Hellman Fellowships, Alcon Research Institute Young Investigator Award, and the Air Force Office of Scientific Research under award numbers FA9550-20-1-0195 and FA9550-21-1-0230.


The relationship between temporal summation at detection threshold and fixational eye movements   Allie C. Hexley, Department of Experimental Psychology, University of Oxford

Co-authors: Laura K. Young, Biosciences Institute, Newcastle University; David H. Brainard, Department of Psychology, University of Pennsylvania; Austin Roorda, Department of Optometry, University of California, Berkeley; William S. Tuten, Department of Optometry, University of California, Berkeley; Hannah E. Smithson, Department of Experimental Psychology, University of Oxford

We studied the relationship between the threshold temporal summation of increment pulses and fixational eye movements. Six participants completed a 2AFC increment detection task. Stimuli were 0.16 x 2.2 arcmin increments of 543 nm light presented via an AOSLO with a 60 Hz frame rate. Stimuli for temporal integration were two single-frame presentations with a 16 ms (consecutive frames), 33 ms, 100 ms, or 300 ms inter-stimulus interval (ISI). Data were also collected for increments presented on a single frame. Stimuli were presented in either world-fixed coordinates (natural retinal image motion) or were stabilised on the retina. There were large differences in overall sensitivity across individuals, but the time-course of performance change with ISI was similar across participants. Thresholds for ISI=33 ms were close to performance with two consecutive frames, suggesting complete summation of light energy; whereas thresholds for ISI=300 ms were closer to the single-frame case, suggesting limited summation; and thresholds for ISI=100 ms were intermediate, suggesting residual summation. The effect of ISI on threshold was similar for stabilised stimuli and natural viewing, but there was a small trend towards lower thresholds for stabilised stimuli at short ISI and vice versa at long ISI. We plan to present our results in the context of an ideal observer calculation that may clarify how the initial visual encoding, including temporal summation within cones, shapes performance.

Funding acknowledgements:  UKRI/Wellcome Physics of Life; EP/W023873/1; National Institutes of Health R01EY023591; AFOSR FA9550-21-1-0230; Hellman Fellows Program


Bringing color into focus: Dynamic accommodation responses to polychromatic stimuli   Benjamin M. Chin, Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley

Co-authors: Martin S. Banks, 1. Herbert Wertheim School of Optometry and Vision Science, University of California at Berkeley 2. Helen Wills Neuroscience Institute, University of California at Berkeley; Derek Nankivil, Johnson & Johnson Vision Care, Research & Development; Austin Roorda, 1. Herbert Wertheim School of Optometry and Vision Science, University of California at Berkeley 2. Helen Wills Neuroscience Institute, University of California at Berkeley; Emily A. Cooper, 1. Herbert Wertheim School of Optometry and Vision Science, University of California at Berkeley 2. Helen Wills Neuroscience Institute, University of California at Berkeley

As humans look around the environment, the crystalline lens inside the eye changes optical power to bring retinal images into focus. This visuomotor response is called accommodation. For a given accommodative state, light at only one wavelength can be in focus because the eye contains significant chromatic aberration. We examined how the visual system weights different wavelengths for focusing polychromatic stimuli, especially those with peaks at more than one wavelength. With an autorefractor, we continuously measured human accommodative responses (at 30 Hz) to stimuli comprising various mixtures of short- and long-wavelength content. In a series of trials, seven human observers viewed a three-letter word stimulus spanning 1.5° (24 arcmins per letter) against a black background on an AMOLED display for seven seconds. The optical distance of the screen was varied using a focus-adjustable lens. Halfway through the trial, the stimulus underwent a step change in optical distance (±0.75, 1.00, or 1.50 diopters). Simultaneously, the color of the stimulus changed. Accommodative responses for each subject were analyzed with nested descriptive models, including a color-free model, a weighted-averaging model, and a color-switching model. The results show that stimulus color significantly influences the dynamic accommodative response, and that long wavelengths influence the response more than short wavelengths, even when their luminance is the same. 

Funding acknowledgements:  National Science Foundation (Award #2041726)


Binocular combination of the pupil response depends on photoreceptor pathway   Federico G. Segala, Department of Psychology, University of York, York, United Kingdom

Co-authors: Joel T. Martin, Department of Psychology, University of York, York, United Kingdom; Aurelio Bruno, School of Psychology and Vision Sciences, University of Leicester, Leicester, United Kingdom; Alex R. Wade, 1. Department of Psychology, University of York, York, United Kingdom   2. York Biomedical Research Institute, University of York, York, United Kingdom; Daniel H. Baker, 1. Department of Psychology, University of York, York, United Kingdom   2. York Biomedical Research Institute, University of York, York, United Kingdom

The pupillary light response is driven by three classes of retinal photoreceptor. Cones and rods are involved in the initial constriction of the pupil, whereas melanopsin-containing intrinsically photosensitive Retinal Ganglion Cells (ipRGCs) maintain constriction over longer timescales. Previous work has characterized the contributions of photoreceptor signals to pupil control, but relatively little is known about binocular combination of these signals when simultaneously stimulating the retina in both eyes. We measured changes in pupil size in 48 participants using a binocular eye-tracker, targeting specific photoreceptor classes with a binocular 10-primary light engine and the silent substitution method. We stimulated the periphery of the retina using light flickering at 0.4 and 0.5 Hz. Participants viewed a disc of either achromatic flickering light, or contrast modulations that targeted the ipRGCs, or the opponent colour pathways L-M or S-(L+M). Using a modified virtual reality headset, we presented the stimuli at a range of modulation amplitudes in three different ocular configurations: monocular, binocular, and dichoptic. We obtained clear pupil responses at both the first and the second harmonic frequencies. Suppression levels differed across conditions with the strongest suppression measured for the L-M condition. We account for the results in a single modelling framework where the weight of interocular suppression determines the binocular combination properties.

Funding acknowledgements:  Biotechnology and Biological Sciences Research Council grant BB/V007580/1 awarded to DHB and ARW


Computational modeling of shift in unique yellow for small stimuli   Carlos Rodriguez, University of Pennsylvania

Co-authors: Ling-Qi Zhang, University of Pennsylvania; Alexandra E. Boehm, Herbert Wertheim School of Optometry & Vision Science, University of California, Berkeley; Maxwell J. Greene, Herbert Wertheim School of Optometry & Vision Science, University of California, Berkeley; William S. Tuten, Herbert Wertheim School of Optometry & Vision Science, University of California, Berkeley; David H. Brainard, University of Pennsylvania

Unique yellow (UY) is largely invariant to L:M cone proportion for spatially-extended stimuli in healthy trichromats. However, a recent adaptive-optics-based study by Boehm et al. revealed that when stimulus size is reduced to a few arcmin, color appearance depends on the local L:M proportion in the patch of the retina on which the stimulus was imaged. We aimed to determine if such findings are consistent with a normative account of visual processing. A series of 3.5 and 10 arcmin stimuli were simulated as isoluminant mixtures of 540 and 680 nm primaries. We modeled sensory encoding under adaptive-optics conditions using the open-source software ISETBio, for simulated retinal cone mosaics with varying local L:M proportions. The resultant cone excitations were decoded using a Bayesian image reconstruction algorithm (Zhang et al., 2022). For the 3.5 arcmin stimuli, as local L:M proportion decreased, the 540 nm component of the reconstructions increased relative to the 680 nm component. This is qualitatively consistent with the experimental observations of Boehm et al. For 10 arcmin stimuli, in contrast, reconstructions were stable across variation in local L:M cone proportion. Notably, reconstructions depend not only on the local L:M cone proportion, but also on the proportion in the immediately surrounding retina, leading to a testable prediction. The computational observations frame the experimental results as a normative consequence of visual processing.

Funding acknowledgements:  The University of Pennsylvania Post-Baccalaureate Research Education Program, grant number R25 GM071745, and a research gift from Meta


Neural correlates of serial processing during divided attention across multipart objects   Dina V. Popovkina, Department of Psychology, University of Washington

Co-authors: Kelly Chang, Department of Psychology, University of Washington; Lucas M. Suarez, Department of Psychology, University of Washington; John Palmer, Department of Psychology, University of Washington; Cathleen M. Moore, Department of Psychological and Brain Sciences, University of Iowa; Geoffrey M. Boynton, Department of Psychology, University of Washington

Judgments of multiple simultaneously presented stimuli produce a variety of divided attention effects. For example, participants can detect colors in two locations as well as in one, but can recognize only one masked word at a time (White, Palmer, & Boynton, Psych Science 2018). Here, we ask whether judging objects with interchangeable parts, similar to letters in words, also produces performance deficits consistent with serial processing, and which brain areas subserve this process. In a probe recognition task, participants discriminated abstract objects made of Duplo™ parts. On each trial, two objects were presented, with either one or both objects cued as relevant (single- and dual-task conditions, respectively). The distractor probes were made of the same parts in a different order. The difference in performance between the single- and dual-task conditions was 17±1% (n=13), consistent with the prediction of a serial model and rejecting the fixed-capacity parallel model. This result suggests that there is a visual brain area where information can be processed about only one object at a time. Currently, we are using fMRI to examine activity in object-selective regions of the human lateral occipital cortex. To seek evidence of serial processing, we are assessing how stimulus-related modulation is mediated by selective attention; the area of interest should show a modulation for only the attended stimulus location.

Funding acknowledgements:  This work was supported in part by grants from the National Eye Institute (F32 EY030320 to D.V.P. and EY12925 to G.M.B. and J.P.). 


Contributed Talks III: Sunday October 8


The naming and understanding of color: the Color Communication Game   Angela M. Brown, Ohio State University

Co-author: Delwin T. Lindsey, Ohio State University

When a person views a color sample, they can usually provide a color term for it. But will that color term allow someone else to understand which sample was named? We examined color understanding using a Color Communication Game, in which one person (the “sender”) names 30 color samples as in any color-naming study, then another person (the “receiver”) chooses the sample they think the sender intended to communicate. The receiver cannot always guess the right sample, and no choice strategy will do better than randomly choosing among the samples the receiver called by that term. When 70 English-speaking dyads and 63 Somali-speaking dyads played the game, receivers did not perform randomly. Instead, they systematically chose “focal” samples near the centers of their color term distributions. When the senders’ named samples were compared directly to the receivers’ chosen samples, the systematic distribution of receiver choices revealed color categories, which appeared without any statistical analysis of the players’ terms. Simulation of receiver choices based on senders’ color names showed that both Somali-speaking and English-speaking participants knew more color terms than the ones they used in color-naming. Our Color Communication Game showed that color-naming experiments underestimate color understanding: people understand colors categorically, but they express colors using multiple synonymous color terms that are well-understood by others. 

Funding acknowledgements:  NSF BCS-1152841 


AAV-mediated gene therapy for PDE6C achromatopsia: Progress and challenges   Ala Moshiri, UC Davis 

Co-authors: Tawfik Issa, Baylor College of Medicine; Jeffrey Rogers, Baylor College of Medicine; Rui Chen, Baylor College of Medicine; Sara Thomasy, UC Davis; Tim Stout, Baylor College of Medicine

Purpose: To determine the clinical circumstances under which viral mediated gene therapy can rescue cone function in a nonhuman primate model of PDE6C achromatopsia. Methods: Infant rhesus macaques homozygous for the PDE6C R565Q mutation were generated through a breeding program at the California National Primate Research Center. Homozygotes were treated in the right eye with adeno-associated virus (AAV5) carrying rhesus PDE6C under the control of the PR1.7 cone-specific promoter. The left eye was used as a control. Animals were tested by full-field and multifocal electroretinography. A total of 7 animals have been treated. Results: The virus was found to be safe, but with variable inflammatory response. There were no obvious alterations in retinal lamination in treated eyes. The virus was expressed specifically in cone photoreceptors. Pre- and post-treatment systemic steroids led to minimal to moderate inflammatory response. In general, the gene therapy partially restored the cone responses on ERG within one month of injection in infants, but not in the older animals. If restored, the rescued cone responses were sustained and durable for over a year. Chromatic ERG testing showed restoration of amplitudes in all three cone subtypes. Conclusions: AAV-mediated gene therapy partially restored cone function and was relatively durable. Inflammation and age of administration may be important to outcomes. Similar approaches in human patients may warrant investigation.

Funding acknowledgements:  NIH NEI U24 EY029904


The limits of resolution in the S-cone pathway   Palash Bharadwaj, Department of Ophthalmology, University of Washington

Co-authors: Emily Slezak, Department of Ophthalmology, University of Washington; Vimal Prabhu Pandiyan, Department of Ophthalmology, University of Washington; Daniel Coates, College of Optometry, University of Houston; Ramkumar Sabesan, Department of Ophthalmology, University of Washington

The resolution of the S-cone pathway is first constrained by the density and arrangement of S-cones in the photoreceptor mosaic. Prior work comparing S-cone isolating visual acuity (sVA) to histological estimates of S-cone density has led to mixed conclusions, likely due to inter-individual differences in the S-cone sub-mosaic. We examined sVA in subjects with spectrally classified cone mosaics to test how the grain of the S-cone sub-mosaic limits resolution. Three observers whose cones were previously classified via adaptive optics (AO)-OCT based optoretinography participated in an sVA task at eccentricities ranging from 1.3° to 12.9° spread along all four cardinal meridians. Observers adapted to yellow light [CIE (0.45, 0.51); 200 cd/m²] for two minutes. Then, acuity was measured using a Tumbling ‘E’ task that showed blue [CIE (0.16, 0.044); 0.66 cd/m²] letters on the same yellow background, with an S-cone contrast of 0.93 (L/M-cone contrasts <0.01). Simultaneously recorded high-resolution AOSLO videos helped guide stimulus delivery to the spectrally classified retinal area. Measured sVAs were worse than predicted by the calculated S-cone Nyquist limit at all eccentricities, suggesting pooling of information from S-cones. Moreover, this pooling increased with eccentricity. While sVA does not follow the S-cone Nyquist limit, it is in good concordance with the Nyquist limit corresponding to known estimates of the sampling density of small bistratified retinal ganglion cells.

Funding acknowledgements:  NIH grants U01EY032055, EY029710, P30EY001730; Burroughs Wellcome Fund Careers at the Scientific Interfaces; DOD Air Force Office of Scientific Research FA9550-21-1-0230; Unrestricted grant from the Research to Prevent Blindness


Measuring binocular combination of luminance and chromatic stimuli using fMRI   Lauren Welbourne, University of York

Co-authors: Joel Martin, University of York; Federico Segala, University of York; Alex Wade, University of York; Daniel Baker, University of York

There are clear and measurable benefits of using two eyes instead of one (e.g. at detection thresholds, stereopsis). However, at high contrasts, excitation and inhibition between binocular (“Bin”) and monocular (“Mon”) responses are balanced, resulting in ocularity invariance behaviourally, and in the primary visual cortex. Little is known about whether and how signals are combined binocularly in other brain regions, including MT and some subcortical areas. We investigated whether we could measure differences in fMRI BOLD responses to Bin vs Mon stimuli across different brain regions, for high contrast luminance and chromatic stimuli. Thirty-six subjects had four functional MRI scans consisting of a block design with five stimulus types (three luminance stimuli, L-M, and S-cone) presented in Bin and Mon conditions. Expanding ring and rotating wedge stimulus scans were also used for retinotopic mapping of the early visual areas. Full brain analyses showed greater overall responses to Bin vs Mon stimuli, centered on the occipital lobe. In individual (retinotopically-defined) ROI analyses, we saw a significant difference in beta weights between Bin and Mon conditions in V1 for luminance and L-M, but the increase in response to Bin stimuli was much lower than would be predicted by strong binocular facilitation (Quaia et al, 2019). In the LGN and MT we saw no significant differences in Bin vs Mon for any condition.

Funding acknowledgements:  BBSRC


Active vision shapes ocular dominance   Paola Binda, Department of Translational Research and New Technologies in Medicine, University of Pisa

Co-authors: Cecilia Steinwurzel, Department of Translational Research and New Technologies in Medicine, University of Pisa; Miriam Acquafredda, Department of Translational Research and New Technologies in Medicine, University of Pisa; Giulio Sandini, Robotics Brain and Cognitive Sciences, Istituto Italiano di Tecnologia; Maria Concetta Morrone, Department of Translational Research and New Technologies in Medicine, University of Pisa

Ocular dominance is a basic visual property that shows short-term plasticity in adult humans, where 2h of monocular deprivation leads to a homeostatic shift of ocular dominance in favour of the deprived eye. Using an altered reality setting, we found that this homeostatic plasticity can be triggered without depriving one eye of visual input, but merely by perturbing the temporal correspondence between voluntary actions and vision in one eye. Participants wore a VR headset; its monocular screens were connected to cameras monitoring the front space, which participants used to perform a complex visuomotor task. During a 60-minute period, the input to the dominant eye was delayed by 333 ms, making it useless for visuomotor coordination. Following this, ocular dominance (quantified by binocular rivalry) was systematically shifted in favour of the delayed eye, an effect similar to that produced by monocular contrast-deprivation. The shift was only observed when participants actively engaged in the visuomotor task, not when they passively watched a confederate perform the same task. We interpret these results in the light of parallel fMRI experiments where monocular deprivation is associated with a global system reconfiguration that pivots around a key area for sensorimotor integration, the Pulvinar. Based on our findings, we suggest that active vision is foundational to weighting sensory information, even at the level of simple visual processes such as those setting ocular dominance. 

Funding acknowledgements:  This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme, grant n. 801715 (PUPILTRAITS) and n. 832813 (GenPercept) and by the Italian Ministry of University and Research under the PRIN2017 programme (grant MISMATCH and 2017SBCPZY) and FARE-2 (grant SMILY).