Vision and Color Fall Data Blast: Session III

Tuesday, October 19, 12:00 – 14:00 ET // Register here

This event has already occurred. You can watch a replay here.

Presenters:

  • A surprisingly simple neural-based relationship predicts binocular facilitation presented by Vincent A. Billock, Naval Aerospace Medical Research Laboratory, Wright-Patterson AFB

  • Central-Peripheral Dichotomy (CPD) in feedforward and feedback processes explored by depth perception in random-dot stereograms (RDSs) presented by Li Zhaoping, University of Tübingen

  • InFoRM: Rivalry generates reliable estimates of perceptual dynamics presented by Jan Skerswetat, Northeastern University

  • Global Luminance and Local Contrast Modulate Interocular Delay in Virtual Reality presented by Anthony LoPrete, University of Pennsylvania

  • Target tracking shows that millisecond-scale visual delays are faithfully preserved in the movement of the hand presented by Johannes Burge, University of Pennsylvania

  • Perceptual consequences of interocular imbalances in temporal integration presented by Benjamin M. Chin, University of Pennsylvania

  • The benefits of naturally moving over stabilized stimuli for acuity increase with longer presentation times presented by Alisa Braun, University of California, Berkeley

  • Assessing the neural coding of image blur using multivariate pattern analysis presented by Zoey J. Isherwood, University of Nevada, Reno

  • Neural model of lightness scaling in the staircase Gelb illusion presented by Michael E. Rudd, University of Nevada, Reno

  • Disentangling object color from illuminant color: The role of color shifts presented by Cehao Yu, Delft University of Technology

  • Phenomenological Assessment of Dynamic Fractals presented by Nathan Gonzales-Hess, University of Oregon

Moderators:

  • Kimberly Meier, University of Washington

  • Angela Brown, Ohio State University


Each data blast session features a series of short talks followed by ample time for questions and discussion. This data blast is hosted by the Vision and Color Technical Division and the Fall Vision Meeting Planning Committee.


Abstracts:

A surprisingly simple neural-based relationship predicts binocular facilitation

Presenter: Vincent A. Billock, ORISE at Naval Aerospace Medical Research Laboratory, NAMRU-D, Wright-Patterson AFB, OH

Co-authors: Micah J. Kinney, Naval Air Warfare Center, NAWCAD, Patuxent River, MD; Marc Winterbottom, Air Force Research Laboratory, 711 Human Performance Wing, Wright-Patterson AFB, OH

Two eyes are better than one – but not by much. Most binocular facilitation models incorporate incomplete binocular summation (e.g., the Minkowski equation). One complication is that the Minkowski exponent for binocular facilitation varies with spatial frequency. We find that a simple alternative models both psychophysical and neural data without taking spatial frequency into account. There are cortical neurons that only fire for one eye, but amplify their firing rates if both eyes are stimulated (Dougherty, 2019). We find that binocular amplification for these monocular neurons follows a power law, with a compressive exponent of 0.84 (r=0.96), remarkably close to power law multisensory interactions (Billock & Havig, 2018). Next, we modeled the relationship between the dominant eye’s firing rate and the binocular firing rate in binocular V1 facilitatory neurons. Unexpectedly, the same rule also applies to cortical facilitatory binocular neurons (exponent=0.87, r=0.97); binocular neurons can be approximated as gated amplifiers of dominant monocular responses, if both eyes are stimulated. Finally, the same power law models binocular facilitation of contrast sensitivity in Winterbottom (2016) and our unpublished psychophysical data (pooled data’s exponent is 0.89; r=0.98); binocular response is a gated amplification of the more sensitive eye. This unexpected result fits with growing evidence that non-driving modulatory neural interactions affect sensory processing.

Funding acknowledgement: Office of the Assistant Secretary of Defense for Health Affairs and the Defense Health Agency J9 Research and Health Directorate, Defense Health 6.7 Program #DP_67.2_17_J9_1757, work unit # H1814.
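As a rough illustration of the gated power-law rule described in the Billock abstract above, the following Python sketch fits the relation B = k·M^α between a dominant-eye firing rate M and the binocular rate B by regression in log-log coordinates. The sample firing rates, the gain k, and the function names are illustrative placeholders, not values from the study; only the form of the rule (a compressive exponent near 0.85 that applies when both eyes are stimulated) follows the abstract.

```python
import numpy as np

# Hypothetical paired firing rates (spikes/s): dominant eye alone vs. both eyes.
# These numbers are placeholders chosen to follow B ~ 2 * M**0.85, not study data.
monocular = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
binocular = np.array([7.8, 14.2, 25.6, 46.3, 83.6])

# Fit B = k * M**alpha by linear regression in log-log coordinates.
alpha, log_k = np.polyfit(np.log(monocular), np.log(binocular), 1)
k = np.exp(log_k)
print(f"compressive exponent alpha = {alpha:.2f}, gain k = {k:.2f}")

def binocular_response(dominant_rate, both_eyes_stimulated):
    """Gated amplification: the power law applies only when both eyes are driven."""
    return k * dominant_rate**alpha if both_eyes_stimulated else dominant_rate
```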


Central-Peripheral Dichotomy (CPD) in feedforward and feedback processes explored by depth perception in random-dot stereograms (RDSs)

Presenter: Li Zhaoping, University of Tübingen, Max Planck Institute for Biological Cybernetics

An information bottleneck limits the feedforward signals from the primary visual cortex (V1) to higher brain areas; these signals nevertheless provide initial perceptual hypotheses about visual scenes. Feedback from higher to lower visual areas verifies and re-weights these hypotheses in noisy or ambiguous situations via analysis-by-synthesis. We call this process Feedforward-Feedback-Verify-reWeight (FFVW). The CPD hypothesizes that this feedback is weaker or absent in the peripheral visual field (Zhaoping, 2017, 2019). Accordingly, peripheral vision is vulnerable to illusions caused by misleading V1 signals. CPD and FFVW are made manifest using RDSs that depict depth surfaces with contrast-reversed random dots (CRRDs), for which a black dot in one eye corresponds to a white dot in the other eye. V1 neurons respond to CRRDs as if their preferred binocular disparities become anti-preferred and vice versa, signalling reversed depths. I show that (1) the reversed depth is seen in peripheral but not central vision; (2) compromising feedback by brief viewing and backward masking makes reversed depth visible in central vision; and (3) adding reversed-depth signals to normal-depth ones enhances or degrades depth percepts in central vision when these depth signals from V1 responses agree or disagree, respectively, with each other. In (3), degradation occurs only when the feedback is compromised, whereas enhancement occurs regardless, revealing a nonlinearity in the feedback processes for perceptual decisions.

Funding acknowledgement: University of Tübingen, Max Planck Society


InFoRM: Rivalry generates reliable estimates of perceptual dynamics

Presenter: Jan Skerswetat, Department of Psychology, Northeastern University, Boston, MA, USA

Co-author: Peter J. Bex, Department of Psychology, Northeastern University, Boston, MA, USA

Binocular rivalry has traditionally been investigated using subjective reports among 2-4 pre-defined states (OS, OD, piecemeal, & superimposition). We used the 4-phase InFoRM method (Skerswetat & Bex, 2021, VSS), during which 28 observers moved a joystick to (1) Indicate and (2) Follow physically changing stimuli, (3) measure binocular rivalry, & (4) Replay those rivalry changes via physical stimulus changes. Stimuli (±45°, 2 c/°, 2° sine-wave apertures, 3 contrast conditions) were either physically blended to simulate continuous rivalry states or presented dichoptically to generate perceptual rivalry while leaving participants condition-blinded. Clustering and dynamics of joystick reports (3600 data points/trial) were analyzed to classify perceptual experiences continuously. We compared joystick reports from ‘Rivalry’ and ‘Replay’ to validate individual rivalry dynamics using pairwise data comparison. We applied a Hamming distance algorithm and fit each individual’s results with an exponential function to estimate agreement and perceptual delays. There was agreement between Rivalry and Replay responses (77% ± 3 standard deviation across trials/observers/conditions), and median response delays (161 ms ± 8 ms) were not significantly different across contrast conditions. InFoRM: Rivalry provides a tool to quantify perceptual dynamics for both basic and clinical research on neuro-typical and atypical populations.

Funding acknowledgement: Supported by NIH grant R01 EY029713. InFoRM is protected by a provisional patent that is owned by Northeastern University, Boston, USA. JS & PJB are founders of PerZeption Inc.
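One plausible way to read the Hamming-distance analysis in the InFoRM abstract above is sketched below in Python: two categorical state sequences (the joystick report and the physically replayed ground truth) are compared sample by sample, and agreement is recomputed over a range of temporal lags so that the best-aligning lag gives a crude response-delay estimate. The three-state coding, sampling details, and noise level are invented for illustration and this is not the authors' pipeline (which fit an exponential function to the results).

```python
import numpy as np

def hamming_agreement(a, b):
    """Fraction of samples on which two categorical state sequences agree
    (1 minus the normalized Hamming distance)."""
    a, b = np.asarray(a), np.asarray(b)
    return np.mean(a == b)

def agreement_vs_lag(reported, ground_truth, max_lag):
    """Recompute agreement after shifting the report back by 0..max_lag samples;
    the lag that maximizes agreement is a crude estimate of the response delay."""
    lags = np.arange(max_lag + 1)
    agree = [hamming_agreement(reported[lag:], ground_truth[:len(ground_truth) - lag])
             for lag in lags]
    return lags, np.array(agree)

# Toy example: a 3-state percept sequence reported with a 4-sample delay plus noise.
rng = np.random.default_rng(0)
truth = rng.integers(0, 3, size=3600)        # e.g. left eye / right eye / mixed
report = np.roll(truth, 4)                   # delayed copy of the percept
noisy = rng.random(report.size) < 0.1        # 10% of samples misreported
report[noisy] = rng.integers(0, 3, size=noisy.sum())

lags, agree = agreement_vs_lag(report, truth, max_lag=20)
print(f"peak agreement {agree.max():.2f} at a delay of {lags[np.argmax(agree)]} samples")
```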


Global luminance and local contrast modulate interocular delay in virtual reality

Presenter: Anthony LoPrete, Center for Neuroscience and Behavior, American University, USA; Department of Bioengineering, University of Pennsylvania, USA

Co-author: Arthur G Shapiro, Department of Psychology, Department of Computer Science, and Center for Neuroscience and Behavior, American University, USA

Virtual reality (VR) systems take advantage of depth cues to create immersive environments that simulate a three-dimensional world. Most often these cues provide consistent information about depth, but on occasion, such cues conflict with each other to create compelling phenomena that illustrate the function of underlying mechanisms. Here we examine the interaction of depth produced by local contrast, global luminance, and interocular luminance differences (i.e., a Pulfrich effect). We measure depth perception with two adjustment tasks. 1: Two moving streams of balls, one upwards (the standard) and the other sideways; the observer adjusts the depth of the sideways stream to lie in the same plane as the upward stream. 2: Balls move sinusoidally back and forth in the frontal plane; if depth is perceived, the pattern appears as a rotating helix. The observer adjusts the helical path until the balls appear to move in a single depth plane. We report that at low luminance levels dark balls are more effective at creating depth than bright balls; at higher luminance levels the effectiveness of dark balls decreases but the effectiveness of bright balls remains constant. Based on these data, we demonstrate how changes in the luminance of the background (or balls) can lead to shifts in the apparent depth order and the apparent direction of helical rotation. The results suggest a method for investigating potentially conflicting contributions of contrast and Pulfrich depth information.

Funding acknowledgement: This research was supported by a fellowship from the NASA DC Space Grant Consortium.


Target tracking shows that millisecond-scale visual delays are faithfully preserved in the movement of the hand

Presenter: Johannes Burge, Department of Psychology, University of Pennsylvania, USA

Co-author: Lawrence K. Cormack, Department of Psychology, University of Texas at Austin, USA

Image differences between the eyes can cause millisecond-scale interocular differences in processing speed. For moving objects, these differences can cause dramatic misperceptions of distance and 3D direction. Here, we develop a continuous target-tracking paradigm that shows these tiny differences are preserved in the movement dynamics of the hand. Human observers continuously tracked a target undergoing Brownian motion with various luminance differences between the eyes. From these data, we recover the time course of the visuomotor response. The difference in the visuomotor response across luminance conditions reveals temporal differences in visual processing between the eyes. Next, using traditional psychophysical methods, we measure interocular processing speed differences in equivalent conditions using a paradigm developed to study the Pulfrich effect. Target tracking and traditional psychophysics provide estimates of interocular delays that agree to within a fraction of a millisecond. Thus, despite the myriad signal transformations occurring between early visual and motor responses, millisecond-scale visual delays are preserved in the movement of the hand. This paradigm provides the potential for new predictive power, the application of analytical techniques from computational neuroscience, and rapid measurements in populations in which traditional psychophysics might be impractical. Further implications for future research will be discussed.

Funding acknowledgement: This work was supported by NIH grant R01-EY028571 from the National Eye Institute and the Office of Behavioral and Social Science Research to JB
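A standard way to recover a visuomotor response time course from continuous tracking data, consistent with the description in the Burge & Cormack abstract above, is to cross-correlate the target's velocity with the hand's velocity; when the target velocity is approximately white (as for Brownian motion), the cross-correlogram approximates the temporal impulse response. The Python sketch below simulates this with an invented filter, sampling rate, and latency; none of the specific values come from the study, and the interocular delay would in practice be read off as the shift between impulse responses estimated in different luminance conditions.

```python
import numpy as np

def impulse_response(target_vel, cursor_vel, max_lag_samples):
    """Cross-correlate target and cursor velocity over positive lags.
    For a white-noise target velocity this approximates the temporal
    impulse response of the visuomotor tracking system."""
    target_vel = target_vel - target_vel.mean()
    cursor_vel = cursor_vel - cursor_vel.mean()
    n = len(target_vel)
    return np.array([np.dot(target_vel[:n - lag], cursor_vel[lag:]) / n
                     for lag in range(max_lag_samples)])

# Toy simulation: white-noise target velocity passed through an exponential
# filter plus a 150 ms latency (all values are assumptions for illustration).
fs = 240                                             # samples per second
rng = np.random.default_rng(1)
tvel = rng.standard_normal(fs * 60)                  # 60 s of target velocity
kernel = np.exp(-np.arange(0.5 * fs) / (0.05 * fs))  # 50 ms decay constant
cvel = np.roll(np.convolve(tvel, kernel)[:tvel.size], int(0.15 * fs))

irf = impulse_response(tvel, cvel, max_lag_samples=fs)
print(f"recovered peak latency: {1000 * np.argmax(irf) / fs:.0f} ms")
```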


Perceptual consequences of interocular imbalances in temporal integration

Presenter: Benjamin M. Chin, Department of Psychology, University of Pennsylvania

Co-author: Johannes Burge, Department of Psychology, University of Pennsylvania

Temporal differences in visual information processing between the eyes can cause dramatic misperceptions of motion and depth. A simple processing delay between the eyes causes a target, oscillating in the frontal plane, to be misperceived as moving along a near-elliptical motion trajectory in depth. Here, we explain a previously reported but poorly understood variant in which the illusory near-elliptical motion trajectory appears to be rotated left- or right-side back in depth, rather than aligned with the true direction of motion. We hypothesized that this variant is caused by differences in the temporal integration periods (i.e. differences in temporal blurring) between the eyes. Differences in temporal blurring dampen the amplitude of motion in one eye relative to the other, and—in a dynamic analog of the ‘geometric’ effect (Ogle, 1950)—cause the apparent misalignment. A target-tracking experiment shows that temporal blurring depends on stimulus spatial frequency. A psychophysical experiment shows that when different spatial frequencies are presented to each eye, the direction of perceived rotation is predicted by the eye with the frequency associated with more temporal blur, as assessed by the target tracking task. The current findings add to the body of knowledge regarding the dependence of temporal processing on stimulus properties, while highlighting the perceptual consequences of interocular imbalances in temporal processing.
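The geometry behind the rotated-trajectory percept described in the abstract above can be sketched with a few lines of Python. A pure interocular delay turns frontoparallel sinusoidal motion into a near-elliptical path aligned with the true motion direction, while an additional interocular amplitude difference (a stand-in for differential temporal blurring) tilts that path in depth, the dynamic analog of the geometric effect. All parameter values below are illustrative assumptions, and disparity is used only as a qualitative proxy for depth.

```python
import numpy as np

A, f = 2.0, 1.0                 # amplitude (deg) and temporal frequency (Hz), assumed
delay = 0.005                   # 5 ms extra processing delay in the right eye, assumed
atten = 0.8                     # relative motion amplitude in the right eye, assumed
t = np.linspace(0.0, 1.0, 1000)

x_left = A * np.sin(2 * np.pi * f * t)
x_right = atten * A * np.sin(2 * np.pi * f * (t - delay))

disparity = x_left - x_right            # proxy for depth
mean_pos = 0.5 * (x_left + x_right)     # lateral position

# Principal-axis angle of the (lateral position, disparity) trajectory:
# zero for a frontoparallel path, nonzero when the path is tilted in depth.
cov = np.cov(mean_pos, disparity)
tilt = 0.5 * np.degrees(np.arctan2(2 * cov[0, 1], cov[0, 0] - cov[1, 1]))
print(f"apparent tilt of the motion path: {tilt:.1f} deg (in these proxy coordinates)")
```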


The benefits of naturally moving over stabilized stimuli for acuity increase with longer presentation times

Presenter: Alisa Braun, Vision Science Graduate Group, University of California, Berkeley, USA

Co-authors: Jorge Otero-Millan, School of Optometry and Vision Science Graduate Group, University of California, Berkeley, USA; Austin Roorda, School of Optometry and Vision Science Graduate Group, University of California, Berkeley, USA; Will Tuten, School of Optometry and Vision Science Graduate Group, University of California, Berkeley, USA

Longer stimulus durations aid in resolving fine spatial detail (Graham & Cook, 1937) - an effect partially attributed to fixational eye movements (FEM; Rucci et al., 2007, Anderson et al., 2020). While FEM do not appear to impact acuity at short presentations (6-300 ms; Tulunay-Keesey & Jones, 1976), recent work has shown that stabilizing stimuli on the retina, eliminating the effects of FEM, impedes acuity at longer durations (750 ms; Ratnam et al., 2017). At intermediate durations, the interplay between acuity and stimulus motion on the retina is not well understood. We used an adaptive optics scanning laser ophthalmoscope to stabilize stimuli on the retina at 1 degree from fixation. A 4 AFC tumbling E staircase was used to find the minimum angle of resolution (MAR) for 100, 375 and 750 ms presentations of unstabilized stimuli. The MAR-duration relationship followed an asymptotic shape, improving from 100 ms (mean MAR ± SEM: 1.84 ± 0.17 arcmin) to 375 ms (1.45 ± 0.09) but not to 750 ms (1.48 ± 0.08). Next, performance (% correct) on stabilized and unstabilized stimuli was measured at each duration’s MAR. For stabilized stimuli, we measured performance as a ratio relative to unstabilized performance and found that it degraded with increasing stimulus duration (1.1 ± 0.01, 0.91 ± 0.07, 0.65 ± 0.11 for 100, 375, and 750 ms durations, respectively). These results suggest that retinal stabilization may disrupt the mechanisms by which integration over time enhances visual acuity.

Funding acknowledgement: This project is supported by the Training Program in Vision Science, T32EY007043-44, R00EY027846 & 1R01EY023591.


Assessing the neural coding of image blur using multivariate pattern analysis

Presenter: Zoey J Isherwood, Department of Psychology, University of Nevada, Reno

Co-authors: Katherine EM Tregillus, Department of Psychology, University of Minnesota; Michael A Webster, Department of Psychology, University of Nevada, Reno

Blur is a fundamental perceptual attribute of images, but the way in which the visual system encodes this attribute remains poorly understood. Previously, we examined the neural correlates of blur by measuring BOLD responses to in-focus images and their blurred or sharpened counterparts, formed by varying the slope of the amplitude spectra but maintaining constant RMS contrast (Tregillus et al. 2014). In visual cortex (V1-V4), highest activation occurred for in-focus images compared to blurred or sharpened images – a finding which counters expectations from norm-based or predictive coding but is consistent with other studies examining the effects of manipulating the 1/f amplitude spectrum (Olman et al. 2004; Isherwood et al. 2017). To further examine the representation of blur, here we reanalysed this dataset using multivariate pattern analysis and also expanded the analysis to include additional visual areas (VO1, VO2, V3AB, LO, TO). A linear classifier trained to distinguish blurred vs. sharpened images provided significant decoding irrespective of visual area, suggesting that information about blur may be preserved across much of the visual hierarchy. The decoding may reflect larger scale differences in the representation of spatial frequency information within regions (e.g. as a function of eccentricity) or a finer scale columnar organization.

Funding acknowledgement: Supported by P20-GM-103650, EY-10834, and F32 EY031178-01A1.
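As a generic illustration of the multivariate pattern analysis described in the Isherwood abstract above, the Python sketch below cross-validates a linear classifier on simulated voxel patterns for one region of interest; with random data it should decode at chance. The array shapes, fold count, and use of scikit-learn's LinearSVC are illustrative choices, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def decode_blur(patterns, labels, n_folds=5):
    """Cross-validated decoding of blurred vs. sharpened trials from the
    (n_trials, n_voxels) response patterns of one visual area."""
    clf = LinearSVC(max_iter=10000)
    return cross_val_score(clf, patterns, labels, cv=n_folds).mean()

# Toy usage with random data: accuracy should hover around chance (0.5).
rng = np.random.default_rng(2)
patterns = rng.standard_normal((80, 300))    # 80 trials x 300 voxels (invented sizes)
labels = np.repeat([0, 1], 40)               # 0 = blurred, 1 = sharpened
print(f"mean decoding accuracy: {decode_blur(patterns, labels):.2f}")
```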


Neural model of lightness scaling in the staircase Gelb illusion

Presenter: Michael E. Rudd, Department of Psychology and Center for Integrative Neuroscience, University of Nevada, Reno, USA

In the staircase Gelb illusion, the range of perceived reflectances of grayscale papers arranged from darkest to lightest in a spotlight is highly compressed relative to the range of actual paper reflectances: close to the cube-root compression predicted from applying Stevens’ brightness law to the reflected light. Reordering the papers reveals additional effects of spatial paper arrangement on lightness. Here, I model both the perceptual scaling and spatial arrangement effects with a computational neural model based on the principle of edge integration (Rudd, J Vision, 2013; J Percept Imaging, 2020). Edge contrasts are encoded by ON and OFF cells described by Naka-Rushton input-output functions having different parameters for ON and OFF cells. These neural responses are subsequently log-transformed, then integrated across space, to compute lightness. Edges are thus neurally weighted on the basis of two independent factors: 1) distance of the edge from the paper whose lightness is computed, and 2) the edge contrast polarity (whether the edge is a luminance increment or decrement in the target direction). Polarity-dependent weightings of 0.27 for increments and 1.0 for decrements were derived from physiological ON and OFF cell data from macaque LGN. Distance-dependence was modeled as an exponential decay with space constant 1.78 deg. The model accounts to within <5% error for lightness matches made to staircase Gelb and reordered grayscale surfaces in an actual 3D environment.

Funding acknowledgement: The author is supported by NIH COBRE P20GM103650 (UNR Center for Integrative Neuroscience, Michael A. Webster, PI)
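The sketch below gives one simplified, single-row reading of the edge-integration computation in the Rudd abstract above: each luminance edge is encoded through a Naka-Rushton nonlinearity, log-transformed, weighted by contrast polarity relative to the target (0.27 for increments, 1.0 for decrements) and by an exponential fall-off with distance (space constant 1.78 deg), and the weighted edge signals are summed. The polarity weights and space constant follow the abstract; the Naka-Rushton parameters, the handling of edge direction, and the staircase luminances are placeholder assumptions, so this is not the fitted model.

```python
import numpy as np

def naka_rushton(c, r_max=1.0, c50=0.2, n=2.0):
    """Generic Naka-Rushton response to an unsigned edge contrast
    (placeholder parameters, not the fitted ON/OFF values)."""
    return r_max * c**n / (c**n + c50**n)

def lightness_log(luminances, positions, target,
                  w_inc=0.27, w_dec=1.0, space_const=1.78):
    """Distance- and polarity-weighted edge integration for one row of papers."""
    total = 0.0
    for i in range(len(luminances) - 1):
        log_step = np.log10(luminances[i + 1] / luminances[i])
        signed = log_step if i < target else -log_step    # sign the edge toward the target
        w_polarity = w_inc if signed > 0 else w_dec       # increment vs. decrement weight
        edge_pos = 0.5 * (positions[i] + positions[i + 1])
        w_distance = np.exp(-abs(positions[target] - edge_pos) / space_const)
        # Encode the unsigned contrast, log-transform, then weight and sum across space.
        total += (w_polarity * w_distance * np.sign(signed)
                  * np.log10(1.0 + naka_rushton(abs(signed))))
    return total

lums = np.array([3.0, 9.0, 27.0, 81.0, 243.0])   # staircase papers (81:1 range), assumed
pos = np.arange(5, dtype=float)                  # paper centers 1 deg apart, assumed
scale = np.array([lightness_log(lums, pos, k) for k in range(5)])
print(np.round(scale - scale.min(), 3))          # compressed perceived-lightness range
```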


Disentangling object color from illuminant color: The role of color shifts

Presenter: Cehao Yu, Perceptual Intelligence lab (π-lab), Delft University of Technology, The Netherlands

Co-authors: Maarten Wijntjes, Perceptual Intelligence lab (π-lab), Delft University of Technology, The Netherlands; Elmar Eisemann, Computer Graphics and Visualization Group, Delft University of Technology, The Netherlands; Sylvia Pont, Perceptual Intelligence lab (π-lab), Delft University of Technology, The Netherlands

Research has shown that disentangling surface and illuminant colors is possible based on various scene statistics. This study investigates the statistical cues induced by the chromatic effects of interreflections. We present a numerical analysis of ambiguous spectral pairs, in which the spectral power distribution of the illuminant in one scene matched the surface reflectance function in the other scene and vice versa. If the scenes are flat or convex and perfectly matte (Lambertian), the reflected light spectra of both cases are identical. For concave scenes, however, the incident light undergoes interreflections. The spectral power of the interreflected light is absorbed wavelength by wavelength, compounding exponentially with the number of interreflections. We found that this causes systematic shifts towards the spectral reflectance peaks, resulting in brightness, saturation and hue shifts. The color differences (CIEDE2000) between these paired cases are so large that humans would be able to observe them if the scenes were viewed simultaneously. In addition, we find that the color shifts cause qualitatively different gradients for chromatic materials under achromatic light and vice versa. Further psychophysical testing is necessary to see whether, viewed in isolation, the different color shifts for the two cases can be attributed to material or to light properties. Moreover, the light densities and light vectors are spectrally different for these cases, creating different appearances of 3D objects in non-empty rooms.

Funding acknowledgement: This work has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 765121; project "DyViTo".
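The exponential spectral absorption described in the Yu abstract above can be illustrated with a short Python sketch: each additional interreflection multiplies the light by the surface reflectance once more, so after n bounces the spectrum is weighted by reflectance(λ)^n and concentrates around the reflectance peak, shifting brightness and saturation. The flat illuminant, Gaussian reflectance, and wavelength sampling below are assumptions for illustration only.

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)                  # nm, assumed sampling
illuminant = np.ones(wavelengths.size)                 # flat ("white") light, assumed
reflectance = 0.2 + 0.7 * np.exp(-((wavelengths - 550) / 40.0) ** 2)  # toy green surface

def bounced_spectrum(n_bounces):
    """Light that has reflected off the same surface n times: absorption
    compounds, so the spectrum is attenuated by reflectance**n."""
    return illuminant * reflectance**n_bounces

for n in (1, 2, 4):
    s = bounced_spectrum(n)
    width = 10 * np.sum(s > 0.5 * s.max())             # crude half-height width, nm
    print(f"{n} bounce(s): peak at {wavelengths[np.argmax(s)]} nm, "
          f"half-height width ~{width} nm")
```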


Phenomenological assessment of dynamic fractals

Presenter: Nathan Gonzales-Hess, Department of Psychology, University of Oregon, USA

Co-authors: Richard Taylor, Department of Physics, University of Oregon, USA; Margaret Sereno, Department of Psychology, University of Oregon, USA

Fractal geometry can be used to describe structures once considered too irregular to be described mathematically. As such, fractals can approximate the visual properties of elements found in the natural world like plants, terrain, and clouds. Previous work has demonstrated that human subjects show aesthetic preference for static fractal patterns of low-to-moderate fractal dimension (fD), analogous to the level of complexity in a natural scene viewed at a distance. But natural scenes are dynamic, animated by features like water and wind. While the aesthetic judgement of static fractals has been studied extensively, there has been little research on the perception of dynamic fractals. Here, we have developed novel dynamic fractal stimuli to test whether dynamic fractals induce different phenomenological assessments than do static fractals of similar fD. Further, we analyze how animation speed may influence these assessments. Participants were asked to rate dynamic and static stimuli on a continuous scale from 0 to 1 across 5 judgement categories: complexity, relaxingness, naturalness, appealingness, and interestingness. We find that ratings for all categories besides complexity tended to decrease with increasing fD. Further, within these categories we find that dynamic stimuli tended to receive higher ratings at low fD, while static stimuli received higher ratings at high fD, indicating that dynamism increases the degree of perceived complexity in fractal patterns.