Welcome to the Crossmodal Perception and Plasticity laboratory (CPP-Lab)

What is the advantage of having multiple senses to sample the world? How does the brain integrate or segregate different sensory signals? What are the consequences of sensory deprivation for the mind and brain? Do blind people think differently about colors? What replaces voices or speech in deaf people?
In the CPP-Lab, we try to address these questions and many more.

News

  • Sep 27, 2023

    Automatic Brain Categorization of Discrete Auditory Emotion Expressions
    Talwar S., Barbero F.M., Calce R.P., Collignon O.
    Seamlessly extracting emotional information from voices is crucial for efficient interpersonal communication. However, it remains unclear how the brain categorizes vocal expressions of emotion beyond the processing of their acoustic features. In our study, we developed a new approach combining electroencephalographic recordings (EEG) in humans with a frequency-tagging paradigm to ‘tag’ automatic neural responses to specific categories of emotion expressions. Participants were presented with a periodic stream of heterogeneous non-verbal emotional vocalizations belonging to five emotion categories (anger, disgust, fear, happiness and sadness) at 2.5 Hz (stimulus length of 350 ms with a 50 ms silent gap between stimuli). Importantly, unknown to the participant, a specific emotion category appeared at a target presentation rate of 0.83 Hz that would elicit an additional response in the EEG spectrum only if the brain discriminates the target emotion category from the other emotion categories and generalizes across heterogeneous exemplars of the target emotion category. Stimuli were matched across emotion categories for harmonicity-to-noise ratio, spectral center of gravity and pitch. Additionally, participants were presented with a scrambled version of the stimuli with identical spectral content and periodicity but disrupted intelligibility. Both types of sequences had comparable envelopes and early auditory peripheral processing, computed via a simulation of the cochlear response. We observed that, in addition to the responses at the general presentation frequency (2.5 Hz) in both intact and scrambled sequences, a greater peak in the EEG spectrum at the target emotion presentation rate (0.83 Hz) and its harmonics emerged in the intact sequence in comparison to the scrambled sequence. The greater response at the target frequency in the intact sequence, together with our stimulus-matching procedure, suggests that the categorical brain response elicited by a specific emotion is at least partially independent from the low-level acoustic features of the sounds. Moreover, responses at the fearful and happy vocalization presentation rates elicited different topographies and different temporal dynamics, suggesting that different discrete emotions are represented differently in the brain. Our paradigm revealed the brain’s ability to automatically categorize non-verbal vocal emotion expressions objectively (at a predefined frequency of interest), behavior-free, rapidly (in a few minutes of recording time) and robustly (with a high signal-to-noise ratio), making it a useful tool to study vocal emotion processing and auditory categorization, in general and in populations where behavioral assessments are more challenging.
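
    The logic of this frequency-tagging analysis can be illustrated with a minimal sketch (illustrative only, not the paper's pipeline; the channel data, sampling rate and bin parameters below are assumptions): the response of interest appears as a peak at the tagged frequency, quantified relative to neighbouring frequency bins.

    ```python
    # Minimal frequency-tagging sketch (illustrative; variable names are hypothetical).
    import numpy as np

    fs = 512.0                            # sampling rate in Hz (assumed)
    eeg = np.random.randn(int(fs * 60))   # placeholder: 60 s of one EEG channel

    spectrum = np.abs(np.fft.rfft(eeg)) / len(eeg)   # amplitude spectrum
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)    # frequency axis

    def snr_at(f_target, n_neighbours=10, skip=1):
        """Amplitude at f_target divided by the mean of surrounding bins."""
        i = int(np.argmin(np.abs(freqs - f_target)))
        neighbours = np.r_[spectrum[i - skip - n_neighbours:i - skip],
                           spectrum[i + skip + 1:i + skip + 1 + n_neighbours]]
        return spectrum[i] / neighbours.mean()

    # Base stimulation rate, target (oddball) emotion rate, and one harmonic:
    for f in (2.5, 0.83, 1.66):
        print(f"SNR at {f} Hz: {snr_at(f):.2f}")
    ```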

    You can find the paper here.

  • Sep 27, 2023

    Vignali L., Xu Y., Turini J., Collignon O., Crepaldi D., Bottini R.

    Dual Coding Theories (DCT) suggest that meaning is represented in the brain by a double code: a language-derived code in the Anterior Temporal Lobe (ATL) and a sensory-derived code in perceptual and motor regions. Concrete concepts should activate both codes, while abstract ones rely solely on the linguistic code. To test these hypotheses, the present magnetoencephalography (MEG) experiment had participants judge whether visually presented words relate to the senses while we recorded brain responses to abstract and concrete semantic components obtained from 65 independently rated semantic features. Results evidenced early involvement of anterior-temporal and inferior-frontal brain areas in both abstract and concrete semantic information encoding. At later stages, occipital and occipito-temporal regions showed greater responses to concrete compared to abstract features. The present findings suggest that the concreteness of words is processed first with a transmodal/linguistic code, housed in frontotemporal brain systems, and only after with an imagistic/sensorimotor code in perceptual regions.

    You can find the paper here.

  • Sep 27, 2023

    Similar object shape representation encoded in the inferolateral occipitotemporal cortex of sighted and early blind people

    Xu Y., Vignali L., Sigismondi F., Crepaldi D., Bottini R., Collignon O.

    We can sense an object’s shape by vision or touch. Previous studies suggested that the inferolateral occipitotemporal cortex (ILOTC) implements supramodal shape representations as it responds more to seeing or touching objects than shapeless textures. However, such activation in the anterior portion of the ventral visual pathway could be due to the conceptual representation of an object or visual imagery triggered by touching an object. We addressed these possibilities by directly comparing shape and conceptual representations of objects in early blind (who lack visual experience/imagery) and sighted participants. We found that bilateral ILOTC in both groups showed stronger activation during a shape verification task than during a conceptual verification task made on the names of the same manmade objects. Moreover, the distributed activity in the ILOTC encoded shape similarity but not conceptual association among objects. Besides the ILOTC, we also found shape representation in both groups’ bilateral ventral premotor cortices and intraparietal sulcus (IPS), a frontoparietal circuit relating to object grasping and haptic processing. In contrast, the conceptual verification task activated both groups’ left perisylvian brain network relating to language processing and, interestingly, the cuneus in early blind participants only. The ILOTC had stronger functional connectivity to the frontoparietal circuit than to the left perisylvian network, forming a modular structure specialized in shape representation. Our results provide conclusive evidence that the ILOTC selectively implements shape representation independently of visual experience, and that this unique functionality likely derives from its privileged connection to the frontoparietal haptic circuit.

    You can find the paper here.

  • Dec 20, 2022

    We are delighted to announce that the next International Multisensory Research Forum (IMRF 2023) will take place in Brussels (Belgium) on June 27–30, 2023.

    IMRF is a vibrant, medium-sized conference dedicated to studying how the senses combine and (re)organise in the mind, brain and computational models.

    **SAVE THE DATES**

    Symposium submissions will open on February 1st and close on February 15th. Symposium proposals should focus on a thematic topic related to the IMRF. They can include between 4 and 6 individual presentations. Symposium proposals need to have a synopsis (max. 300 words) provided by the symposium chair(s) plus abstracts from up to 6 individual contributions (max. 250 words).

    Submissions for regular abstracts (Posters + Talks | max. 250 words) will open on February 1st and close on March 15th.

    More information and the conference website will follow in the coming weeks.

    We are looking forward to seeing you all in Brussels next year,

    The 2023 IMRF Organizing Committee

  • Dec 16, 2022

    Impact of blindness onset on the representation of sound categories in occipital and temporal cortices

    Mattioni S., Rezk M., Battal C., Vadlamudi J., Collignon O.

    The ventral occipito-temporal cortex (VOTC) reliably encodes auditory categories in people born blind using a representational structure partially similar to the one found in vision (Mattioni et al., 2020). Here, using a combination of uni- and multivoxel analyses applied to fMRI data, we extend our previous findings, comprehensively investigating how early and late acquired blindness impact the cortical regions coding for the deprived and the remaining senses. First, we show an enhanced univariate response to sounds in part of the occipital cortex of both blind groups that is concomitant with reduced auditory responses in temporal regions. We then reveal that the representation of the sound categories in the occipital and temporal regions is more similar in blind subjects compared to sighted subjects. What could drive this enhanced similarity? The multivoxel encoding of the ‘human voice’ category that we observed in the temporal cortex of all sighted and blind groups is enhanced in occipital regions in blind groups, suggesting that the representation of vocal information is more similar between the occipital and temporal regions in blind compared to sighted individuals. We additionally show that blindness does not affect the encoding of the acoustic properties of our sounds (e.g. pitch, harmonicity) in occipital and temporal regions but instead selectively alters the categorical coding of the voice category itself. These results suggest a functionally congruent interplay between the reorganization of occipital and temporal regions following visual deprivation, across the lifespan.

    You can find the paper here.

  • Mar 23, 2022

    Structural and functional network-level reorganization in the coding of auditory motion directions and sound source locations in the absence of vision

    Battal C., Gurtubay-Antolin A., Rezk M., Mattioni S., Bertonati G., Occelli V., Bottini R., Targher S., Maffei C., Jovicich J., Collignon O.

    hMT+/V5 is a region in the middle occipito-temporal cortex that responds preferentially to visual motion in sighted people. In the case of early visual deprivation, hMT+/V5 enhances its response to moving sounds. Whether hMT+/V5 contains information about motion directions and whether the functional enhancement observed in the blind is motion specific, or also involves sound source location, remains unresolved. Moreover, the impact of this crossmodal reorganization of hMT+/V5 on the regions typically supporting auditory motion processing, like the human Planum Temporale (hPT), remains equivocal. We used a combined functional and diffusion MRI approach and individual in-ear recordings to study the impact of early blindness on the brain networks supporting spatial hearing, in male and female humans. Whole-brain univariate analysis revealed that the anterior portion of hMT+/V5 responded to moving sounds in sighted and blind people, while the posterior portion was selective to moving sounds only in blind participants. Multivariate decoding analysis revealed that information about motion directions and sound source positions was higher in hMT+/V5 and lower in hPT in the blind group. While both groups showed axis-of-motion organization in hMT+/V5 and hPT, this organization was reduced in the hPT of blind people. Diffusion MRI revealed that the strength of hMT+/V5–hPT connectivity did not differ between groups, whereas the microstructure of the connections was altered by blindness. Our results suggest that the axis-of-motion organization of hMT+/V5 does not depend on visual experience, but that blindness alters the response properties of occipito-temporal networks supporting spatial hearing in the sighted.

    You can find the preprint of the paper here.

  • Sep 30, 2021

    GAC supports constructive debate between researchers with different points of view on the same subject (find the discussion here).

    Speakers: Maria Bedny, Nancy Kanwisher, Olivier Collignon, Ilker Yildirim, Elizabeth Saccone, Apurva Ratan Murty, Stefania Mattioni

    Scientific question: A key puzzle in cognitive neuroscience concerns the contribution of innate predispositions versus lifetime experience to cortical function. Addressing this puzzle has implications that reach far, from plasticity of the neural hardware to representations in the mind and their developmental trajectory, and even to building artificially intelligent systems. Yet, this is a notoriously difficult topic to study empirically. We propose to tackle this issue in the context of the high-level ‘visual’ representations through neural, behavioral, and computational studies of individuals who are sighted and congenitally blind. Congenital blindness represents a uniquely tractable and rich model to study how innate substrate and atypical experience interact to shape the functional tuning of the brain. This work aspires to reveal the origins, including the representational and computational basis, of high-level visual representations by addressing the following questions: How does visual experience impact representations and transformations along the ventral stream? How broad is the human brain’s capacity to ‘retool’ in the face of ‘atypical’ experience?

  • Apr 10, 2021

    Fast Periodic Auditory Stimulation Reveals a Robust Categorical Response to Voices in the Human Brain

    Barbero F.M., Calce R.P., Talwar S., Rossion B., Collignon O.

    Voices are arguably among the most relevant sounds in humans’ everyday life, and several studies have suggested the existence of voice-selective regions in the human brain. Despite two decades of research, defining the human brain regions supporting voice recognition remains challenging. Moreover, whether neural selectivity to voices is merely driven by acoustic properties specific to human voices (e.g., spectrogram, harmonicity), or whether it also reflects a higher-level categorization response is still under debate. Here, we objectively measured rapid automatic categorization responses to human voices with fast periodic auditory stimulation (FPAS) combined with electroencephalography (EEG). Participants were tested with stimulation sequences containing heterogeneous non-vocal sounds from different categories presented at 4 Hz (i.e., four stimuli/s), with vocal sounds appearing every three stimuli (1.333 Hz). A few minutes of stimulation are sufficient to elicit robust 1.333-Hz voice-selective focal brain responses over superior temporal regions of individual participants. This response is virtually absent for sequences using frequency-scrambled sounds, but is clearly observed when voices are presented among sounds from musical instruments matched for pitch and harmonicity-to-noise ratio (HNR). Overall, our FPAS paradigm demonstrates that the human brain seamlessly categorizes human voices when compared with other sounds including matched musical instruments and that voice-selective responses are at least partially independent from low-level acoustic features, making it a powerful and versatile tool to understand human auditory categorization in general.
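
    The structure of such a stimulation sequence is simple to reproduce in a short sketch (hypothetical filenames and parameters; not the study's actual code): sounds are presented at a 4 Hz base rate, with a vocal sound in every third position, so voices recur at 4/3 ≈ 1.333 Hz.

    ```python
    # Illustrative FPAS sequence construction (filenames are placeholders).
    import random

    nonvocal = [f"nonvocal_{i:03d}.wav" for i in range(100)]
    vocal = [f"vocal_{i:03d}.wav" for i in range(100)]

    base_rate_hz = 4.0
    n_stimuli = 240                         # 60 s of stimulation at 4 stimuli/s

    sequence = [random.choice(vocal if k % 3 == 2 else nonvocal)
                for k in range(n_stimuli)]  # a voice in every third position
    onsets = [k / base_rate_hz for k in range(n_stimuli)]  # onset times (s)
    ```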

    You can find the paper here.

  • Mar 10, 2021

    Gurtubay-Antolin A., Battal C., Maffei C., Rezk M., Mattioni S., Jovicich J., Collignon O.

    In humans, the occipital middle-temporal region (hMT+/V5) specializes in the processing of visual motion, while the planum temporale (hPT) specializes in auditory motion processing. It has been hypothesized that these regions might communicate directly to achieve fast and optimal exchange of multisensory motion information. Here we investigated, for the first time in humans (male and female), the presence of direct white matter connections between visual and auditory motion-selective regions using a combined fMRI and diffusion MRI approach. We found evidence supporting the potential existence of direct white matter connections between individually and functionally defined hMT+/V5 and hPT. We show that projections between hMT+/V5 and hPT do not overlap with large white matter bundles, such as the inferior longitudinal fasciculus and the inferior frontal occipital fasciculus. Moreover, we did not find evidence suggesting the presence of projections between the fusiform face area and hPT, supporting the functional specificity of hMT+/V5–hPT connections. Finally, the potential presence of hMT+/V5–hPT connections was corroborated in a large sample of participants (n=114) from the Human Connectome Project. Together, this study provides a first indication of potential direct occipitotemporal projections between hMT+/V5 and hPT, which may support the exchange of motion information between functionally specialized auditory and visual regions.

    You can find the paper here.

  • Jun 09, 2020

    Rezk M., Cattoir S., Battal C., Occelli V., Mattioni S., Collignon O.

    The human occipito-temporal region hMT+/V5 is well known for processing visual motion direction. Here, we demonstrate that hMT+/V5 also represents the direction of auditory motion in a format partially aligned with the one used to code visual motion. We show that auditory and visual motion directions can be reliably decoded in individually localized hMT+/V5 and that motion directions in one modality can be predicted from the activity patterns elicited by the other modality. Despite this shared motion-direction information, vision and audition nevertheless produce overall opposite voxel-wise responses in hMT+/V5. Our results reveal a multifaceted representation of multisensory motion signals in hMT+/V5 and have broader implications for how we conceive of the division of sensory labor between brain regions dedicated to specific perceptual functions.
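
    The cross-modal prediction reported here follows the usual cross-decoding recipe; a minimal sketch (with random placeholder data standing in for trial-by-voxel patterns from hMT+/V5) might look like this:

    ```python
    # Cross-modal decoding sketch (illustrative; data and labels are placeholders).
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 80, 200
    X_vis = rng.standard_normal((n_trials, n_voxels))  # visual-trial patterns
    y_vis = rng.integers(0, 4, n_trials)               # 4 motion directions
    X_aud = rng.standard_normal((n_trials, n_voxels))  # auditory-trial patterns
    y_aud = rng.integers(0, 4, n_trials)

    clf = make_pipeline(StandardScaler(), LinearSVC())
    clf.fit(X_vis, y_vis)                 # train on vision...
    print(f"cross-modal accuracy: {clf.score(X_aud, y_aud):.2f} (chance = 0.25)")
    ```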

  • Apr 22, 2020

    Variability in the analysis of a single neuroimaging dataset by many teams

    Botvinik-Nezer, R., Holzmeister F.,… Barilari M.,…Collignon O.,…Gau R. et al.

    Data analysis workflows in many scientific domains have become increasingly complex and flexible. To assess the impact of this flexibility on functional magnetic resonance imaging (fMRI) results, the same dataset was independently analyzed by 70 teams, testing nine ex-ante hypotheses. The flexibility of analytic approaches is exemplified by the fact that no two teams chose identical workflows to analyze the data. This flexibility resulted in sizeable variation in hypothesis test results, even for teams whose statistical maps were highly correlated at intermediate stages of their analysis pipeline. Variation in reported results was related to several aspects of analysis methodology. Importantly, meta-analytic approaches that aggregated information across teams yielded significant consensus in activated regions. Furthermore, prediction markets of researchers in the field revealed an overestimation of the likelihood of significant findings, even by researchers with direct knowledge of the dataset. Our findings show that analytic flexibility can have substantial effects on scientific conclusions, and demonstrate factors related to variability in fMRI. The results emphasize the importance of validating and sharing complex analysis workflows, and demonstrate the need for multiple analyses of the same data. Potential approaches to mitigate issues related to analytical variability are discussed.

  • Feb 07, 2020

    Categorical representation from sound and sight in the ventral occipito-temporal cortex of sighted and blind (Find here a 10 min. talk on the paper)

    Mattioni S., Rezk M., Battal C., Bottini R., Cuculiza Mendoza K.E., Oosterhof N.N., Collignon O.

    Is vision necessary for the development of the categorical organization of the Ventral Occipito-Temporal Cortex (VOTC)? We used fMRI to characterize VOTC responses to eight categories presented acoustically in sighted and early blind individuals, and visually in a separate sighted group. We observed that VOTC reliably encodes sound categories in sighted and blind people using a representational structure and connectivity partially similar to the one found in vision. Sound categories were, however, more reliably encoded in the blind than the sighted group, using a representational format closer to the one found in vision. Crucially, VOTC in the blind represents the categorical membership of sounds rather than their acoustic features. Our results suggest that sounds trigger categorical responses in the VOTC of congenitally blind and sighted people that partially match the topography and functional profile of the visual response, despite qualitative nuances in the categorical organization of VOTC between modalities and groups.
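
    Comparing "representational structures" of this kind is typically done with representational similarity analysis; a minimal sketch (placeholder category-by-voxel patterns; not the paper's actual code) is:

    ```python
    # Representational similarity sketch (illustrative; data are placeholders).
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(1)
    patterns_blind = rng.standard_normal((8, 300))    # 8 categories x voxels
    patterns_sighted = rng.standard_normal((8, 300))

    # Dissimilarity matrix (RDM) per group: 1 - Pearson r for each category pair
    rdm_blind = pdist(patterns_blind, metric="correlation")
    rdm_sighted = pdist(patterns_sighted, metric="correlation")

    rho, p = spearmanr(rdm_blind, rdm_sighted)        # second-order similarity
    print(f"RDM similarity (Spearman rho): {rho:.2f}")
    ```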

    You can find the paper here.

  • Dec 11, 2019

    Olivier will talk about his research at the BE Neuroscience & Technology Meetup, with the talk: Building a brain in the dark.

    What happens to the “visual cortex” of someone born blind? Are these regions unused, as they do not receive their preferred sensory input? No. On the contrary, I will show that these regions reorganise to process non-visual inputs in an organised fashion. These data shed new light on the old ‘nature versus nurture’ debate on brain development: while the recruitment of occipital (visual) regions by non-visual inputs in blind individuals highlights the ability of the brain to remodel itself through experience (nurture influence), the observation of specialized cognitive modules in the reorganised occipital cortex of the blind, similar to those observed in the sighted, highlights the intrinsic constraints imposed on such plasticity (nature influence).
    What, then, would happen if a congenitally blind individual were given the gift of sight? Would those reorganised regions switch back to their natural dedication to vision? We had the unique opportunity to track the behavioral and neurophysiological changes taking place in the occipital cortex of an early and severely visually impaired patient before, as well as 1.5 and 7 months after, sight restoration. An in-depth study of this exceptional patient highlighted the dynamic nature of the occipital cortex facing visual deprivation and restoration. Finally, I will present some data demonstrating that even a short period of visual deprivation (only a few weeks) during the early sensitive period of brain development leads to enduring large-scale crossmodal reorganization of the brain circuitry typically dedicated to vision, even years after visual inputs were restored.

  • Nov 12, 2019

    Investigating the respective contribution of sensory modalities and spatial disposition in numerical training

    (Crollen V., Noël M., Honoré N., Degroote V., Collignon O.)

    Recent studies have suggested that multisensory redundancy may improve cognitive learning. According to this view, information simultaneously available across two or more modalities is highly salient and, therefore, may be learned and remembered better than the same information presented to only one modality. In the current study, we wanted to evaluate whether training arithmetic with a multisensory intervention could induce larger learning improvements than a visual intervention alone. Moreover, because a left-to-right-oriented mental number line was for a long time considered as a core feature of numerical representation, we also wanted to compare left-to-right-organized and randomly organized arithmetic training. Therefore, five training programs were created and called (a) multisensory linear, (b) multisensory random, (c) visual linear, (d) visual random, and (e) control. A total of 85 preschoolers were randomly assigned to one of these five training conditions. Whereas children were trained to solve simple addition and subtraction operations in the first four training conditions, story understanding was the focus of the control training. Several numerical tasks (arithmetic, number-to-position, number comparison, counting, and subitizing) were used as pre- and post-test measures. Although the effect of spatial disposition was not significant, results demonstrated that the multisensory training condition led to a significantly larger performance improvement than the visual training and control conditions. This result was specific to the trained ability (arithmetic) and is discussed in light of the multisensory redundancy hypothesis.

  • Apr 15, 2019

    Olivier will talk about his research in the section Insane in the main brain, with the talk: Building a brain in the dark.

    The human brain evolved highly specialised regions dedicated to the refined processing of visual information. What happens to these regions if you are born blind? Are they simply left dormant and unused? No! In case of blindness, the brain reorganises itself, and the regions normally dedicated to vision become involved in the processing of information from the remaining senses. This demonstrates the fascinating ability of the brain to change the tuning of its neurons through experience, a mechanism called brain plasticity. But what happens then if a blind person recovers sight?

  • Feb 04, 2019

    Representation of auditory motion directions and sound source locations in the human planum temporale

    (Battal C., Rezk M., Mattioni S., Vadlamudi J., & Collignon O.)

    The ability to compute the location and direction of sounds is a crucial perceptual skill to efficiently interact with dynamic environments. How the human brain implements spatial hearing is, however, poorly understood. In our study, we used fMRI to characterize the brain activity of male and female humans listening to left, right, up and down moving as well as static sounds. Whole-brain univariate results contrasting moving and static sounds varying in their location revealed a robust functional preference for auditory motion in bilateral human Planum Temporale (hPT). Using independently localized hPT, we show that this region contains information about auditory motion directions and, to a lesser extent, sound source locations. Moreover, hPT showed an axis-of-motion organization reminiscent of the functional organization of the middle-temporal cortex (hMT+/V5) for vision. Importantly, whereas motion direction and location rely on partially shared pattern geometries in hPT, as demonstrated by successful cross-condition decoding, the responses elicited by static and moving sounds were nevertheless significantly distinct. Altogether, our results demonstrate that the hPT codes for auditory motion and location, but that the underlying neural computation linked to motion processing is more reliable and partially distinct from the one supporting sound source location.

  • Jan 18, 2019

    Sound symbolism in sighted and blind. The role of vision and orthography in sound-shape correspondences

    (Bottini R., Barilari M., & Collignon O.)

    Non-arbitrary sound-shape correspondences (SSC), such as the “bouba-kiki” effect, have been consistently observed across languages and, together with other sound-symbolic phenomena, challenge the classic linguistic dictum of the arbitrariness of the sign. Yet, it is unclear what makes a sound “round” or “spiky” to the human mind. Here we tested the hypothesis that visual experience is necessary for the emergence of SSC, supported by empirical evidence showing reduced SSC in visually impaired people. Results of two experiments comparing early blind and sighted individuals showed that SSC emerged strongly in both groups. Experiment 2, however, showed a partially different pattern of SSC in sighted and blind participants that was mostly explained by a different effect of orthographic letter shape: the shape of written letters (spontaneously activated by spoken words) influenced SSC in the sighted, but not in the blind, who are exposed to an orthography (Braille) in which letters do not have spiky or round outlines. In sum, early blindness does not prevent the emergence of SSC, and differences between sighted and visually impaired people may be due to the indirect influence (or lack thereof) of orthographic letter shape.

  • Dec 14, 2018

    Recruitment of the occipital cortex by arithmetic processing follows computational bias in the congenitally blind

    Arithmetic reasoning activates the occipital cortex of congenitally blind people (CB). This activation of visual areas may highlight the functional flexibility of occipital regions deprived of their dominant inputs or relate to the intrinsic computational role of specific occipital regions. We contrasted these competing hypotheses by characterising the brain activity of CB and sighted participants while performing subtraction, multiplication and a control letter task. In both groups, subtraction selectively activated a bilateral dorsal network commonly activated during spatial processing. Multiplication triggered activity in temporal regions thought to participate in memory retrieval. No between-group difference was observed for the multiplication task whereas subtraction induced enhanced activity in the right dorsal occipital cortex of the blind individuals only. As this area overlaps with regions showing selective tuning to auditory spatial processing and exhibits increased functional connectivity with a dorsal “spatial” network, our results suggest that the recruitment of occipital regions during high-level cognition in the blind actually relates to the intrinsic computational role of the activated regions.

  • Nov 13, 2018

    Hierarchical brain network for face and voice integration of emotion expression.

    (Davies-Thompson J., Elli G., Rezk M., Benetti S., van Ackeren M., Collignon O.)

    The brain has separate specialized computational units to process faces and voices, located in occipital and temporal cortices. However, humans seamlessly integrate signals from the faces and voices of others for optimal social interaction. How are emotional expressions, when delivered by different sensory modalities (faces and voices), integrated in the brain? In this study, we characterized the brain’s response to faces, voices, and combined face-voice information (congruent, incongruent), which varied in expression (neutral, fearful). Using a whole-brain approach, we found that only the right posterior superior temporal sulcus (rpSTS) responded more to bimodal stimuli than to face or voice alone, but only when the stimuli contained emotional expression. Face- and voice-selective regions of interest, extracted from independent functional localizers, similarly revealed multisensory integration in the face-selective rpSTS only; further, this was the only face-selective region that also responded significantly to voices. Dynamic Causal Modeling revealed that the rpSTS receives unidirectional information from the face-selective fusiform face area (FFA) and the voice-selective temporal voice area (TVA), with emotional expression affecting the strength of these connections. Our study supports a hierarchical model of face and voice integration, with convergence in the rpSTS, and suggests that such integration depends on the (emotional) salience of the stimuli.

  • Sep 10, 2018

    New paper accepted in Journal of Experimental Psychology: Human Perception and Performance

    How visual experience and task context modulate the use of internal and external spatial coordinates for perception and action.

    (Crollen V., Spruyt T., Mahau P., Bottini R., & Collignon O.)

    Recent studies proposed that the use of internal and external coordinate systems may be more flexible in congenitally blind than in sighted individuals. To investigate this hypothesis further, we asked congenitally blind and sighted people to perform, with the hands uncrossed and crossed over the body midline, a tactile temporal order judgment (TOJ) task and an auditory Simon task. Crucially, both tasks were carried out under task instructions either favoring the use of an internal (left vs. right hand) or an external (left vs. right hemispace) frame of reference. In the internal condition of the TOJ task, our results replicated previous findings (Röder et al., 2004) showing that hand crossing only impaired sighted participants’ performance, suggesting that blind people did not activate by default a (conflicting) external frame of reference. However, under external instructions, a decrease in performance was observed in both groups, suggesting that even blind people activated an external coordinate system in this condition. In the Simon task, and in contrast with a previous study (Röder et al., 2007), both groups responded more efficiently when the sound was presented from the same side as the response (the “Simon effect”), independently of hand position. This was true under both the internal and external conditions, suggesting that blind and sighted participants alike activated an external coordinate system by default in this task. Altogether, these data comprehensively demonstrate how visual experience shapes the default weight attributed to internal and external coordinate systems for action and perception, depending on task demands.

  • Jul 06, 2018

    New paper accepted in Scientific Reports

    Light modulates oscillatory alpha activity in the occipital cortex of totally visually blind individuals with intact non-image-forming photoreception.

    (Vandewalle G, van Ackeren M, Daneault V, Hull J, Albouy G, Lepore F, Doyon J, Czeisler C, Dumont M, Carrier J, Lockley S, and Collignon O)

    The discovery of intrinsically photosensitive retinal ganglion cells (ipRGCs) marked a major shift in our understanding of how light information is processed by the mammalian brain. These ipRGCs influence multiple functions not directly related to image formation, such as circadian resetting and entrainment, pupil constriction, enhancement of alertness, as well as the modulation of cognition. More recently, it was demonstrated that ipRGCs may also contribute to basic visual functions. The impact of ipRGCs on visual function, independently of image-forming photoreceptors, remains difficult to isolate, however, particularly in humans. We previously showed that exposure to intense monochromatic blue light (465 nm) induced non-conscious light perception in a forced-choice task in three rare totally visually blind individuals without detectable rod and cone function, but who retained non-image-forming responses to light, very likely via ipRGCs. The neural foundation of such light perception in the absence of conscious vision is unknown, however. In this study, we characterized the brain activity of these three participants using electroencephalography (EEG), and demonstrate that unconsciously perceived light triggers an early and reliable transient desynchronization (i.e. decreased power) of the alpha EEG rhythm (8-14 Hz) over the occipital cortex. These results provide compelling insight into how ipRGCs may contribute to transient changes in ongoing brain activity. They suggest that occipital alpha rhythm synchrony, which is typically linked to the visual system, is modulated by ipRGC photoreception; a process that may contribute to the non-conscious light perception in those blind individuals.
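
    The alpha-desynchronization measure reported here can be sketched in a few lines (illustrative only; the channel data, sampling rate and onset below are assumptions): band-pass the occipital signal at 8-14 Hz, take the Hilbert envelope, and express post-onset power relative to a pre-onset baseline.

    ```python
    # Alpha event-related desynchronization sketch (placeholders throughout).
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 500.0                                  # sampling rate in Hz (assumed)
    eeg = np.random.randn(int(fs * 10))         # placeholder: occipital channel
    onset = int(fs * 5)                         # light onset at 5 s

    b, a = butter(4, [8, 14], btype="bandpass", fs=fs)
    alpha = filtfilt(b, a, eeg)                 # 8-14 Hz alpha-band signal
    power = np.abs(hilbert(alpha)) ** 2         # instantaneous alpha power

    baseline = power[onset - int(fs):onset].mean()   # 1 s before onset
    post = power[onset:onset + int(fs)].mean()       # 1 s after onset
    print(f"alpha power change: {100 * (post - baseline) / baseline:+.1f}%")
    ```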
