Cortical Encoding of Language in an American Sign Language (ASL) User: An Electrocorticography Study
Saturday, January 24, 2026
4:35 PM - 4:45 PM PST
Location: Milano Ballroom V & VI
Introduction: While spoken language processing is well-established [1-2], our understanding of visual-manual language processing, such as American Sign Language (ASL), remains limited. Most sign language research relies on fMRI [3-5], which lacks the temporal resolution needed to track dynamic language processes. We mapped language-related neural activity with high spatiotemporal precision in posterior temporal and parietal cortex during visual-manual language processing to test whether semantic processing exhibits amodal convergence across communication modalities.
Methods: A 34-year-old congenitally deaf ASL-fluent woman underwent awake craniotomy for astrocytoma resection. High-density electrocorticography (ECoG) electrodes (256-channel grid) covered posterior superior temporal gyrus (STG) and parietal regions. We designed three tasks holding semantic content constant while varying modality: (1) ASL signing - copying pseudo-signs versus translating written words to ASL, (2) speech/mouthing - copying pseudo-word movements versus translating ASL to mouthed words, and (3) finger writing - copying Georgian letters versus translating lip-read words to air-traced letters. High-frequency broadband activity (70-110 Hz) was analyzed using cluster permutation testing, with direct electrical stimulation confirming functional language regions.
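The high-frequency broadband measure described above is commonly computed by band-pass filtering each electrode's signal and taking the Hilbert analytic-amplitude envelope. The sketch below illustrates this standard approach on synthetic data; the filter order, zero-phase filtering, and the synthetic burst are illustrative assumptions, not details taken from the study's analysis pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def hfb_envelope(x, fs, band=(70.0, 110.0), order=4):
    """Band-pass filter a signal (default 70-110 Hz) and return its
    analytic-amplitude (Hilbert) envelope."""
    nyq = fs / 2.0
    b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="bandpass")
    filtered = filtfilt(b, a, x)  # zero-phase filtering
    return np.abs(hilbert(filtered))

# Synthetic example: a 100 Hz burst embedded in low-amplitude noise
fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(0)
x = 0.1 * rng.standard_normal(t.size)
burst = (t > 0.8) & (t < 1.2)
x[burst] += np.sin(2 * np.pi * 100 * t[burst])

env = hfb_envelope(x, fs)
# The envelope should be clearly elevated inside the in-band burst
assert env[burst].mean() > 3 * env[~burst].mean()
```

In practice, such per-trial envelopes would then be compared between conditions with cluster permutation testing, which controls for multiple comparisons across electrodes and time points.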
Results: STG showed selective activation during linguistic movements across all modalities (100-400 ms latency) but not during non-linguistic control movements, suggesting cross-modal recruitment for language processing. Angular gyrus (AG) exhibited distinct topographical organization for different communication modalities, with separate subregions activating during ASL signing (inferior somatosensory cortex), speech/mouthing (inferior AG), and writing tasks (superior somatosensory and lateral AG). Time-frequency analysis revealed modality-specific patterns: occipital activation at 35 Hz during number processing, beta desynchronization in primary somatosensory cortex ~0.8 s after sign-production onset, and distinct frequency-specific changes across communication modalities in AG. Direct electrical stimulation confirmed functional language roles, disrupting speech production at anterolateral AG sites and comprehension at posterior AG sites.
Conclusion: Our findings provide electrophysiological evidence that visual languages recruit canonical language networks while maintaining modality-specific neural organization. This study represents the first high-density ECoG investigation of multiple communication modalities in temporal and parietal regions of a deaf signer, extending beyond previous ECoG work that focused on motor areas during sign production [6]. We found that STG demonstrates cross-modal recruitment for lexicosemantic processing in deaf signers. Critically, the distinct topographical organization within AG directly challenges models of amodal semantic convergence, suggesting that specialized subregions process semantic information in modality-dependent ways. These results have clinical implications for neurosurgical language mapping in deaf patients.