Language processing in the brain

«Language processing» redirects here. For the processing of language by computers, see Natural language processing.

Dual stream connectivity between the auditory cortex and frontal lobe of monkeys and humans. Top: The auditory cortex of the monkey (left) and human (right) is schematically depicted on the supratemporal plane and observed from above (with the parieto-frontal operculi removed). Bottom: The brain of the monkey (left) and human (right) is schematically depicted and displayed from the side. Orange frames mark the region of the auditory cortex, which is displayed in the top sub-figures. Top and Bottom: Blue colors mark regions affiliated with the ADS, and red colors mark regions affiliated with the AVS (dark red and blue regions mark the primary auditory fields). Material was copied from this source, which is available under a Creative Commons Attribution 4.0 International License.

In psycholinguistics, language processing refers to the way humans use words to communicate ideas and feelings, and how such communications are processed and understood. Language processing is considered a uniquely human ability that is not produced with the same grammatical understanding or systematicity even in humans’ closest primate relatives.[1]

Throughout the 20th century the dominant model[2] for language processing in the brain was the Wernicke-Lichtheim-Geschwind model, which is based primarily on the analysis of brain-damaged patients. However, due to improvements in intra-cortical electrophysiological recordings of monkey and human brains, as well as non-invasive techniques such as fMRI, PET, MEG and EEG, a dual auditory pathway[3][4] has been revealed and a two-streams model has been developed. In accordance with this model, there are two pathways that connect the auditory cortex to the frontal lobe, each pathway accounting for different linguistic roles. The auditory ventral stream (AVS) pathway is responsible for sound recognition, and is accordingly known as the auditory ‘what’ pathway. The auditory dorsal stream (ADS) in both humans and non-human primates is responsible for sound localization, and is accordingly known as the auditory ‘where’ pathway. In humans, this pathway (especially in the left hemisphere) is also responsible for speech production, speech repetition, lip-reading, and phonological working memory and long-term memory. In accordance with the ‘from where to what’ model of language evolution,[5][6] the reason the ADS is characterized by such a broad range of functions is that each indicates a different stage in language evolution.

The division of the two streams first occurs in the auditory nerve, where the anterior branch enters the anterior cochlear nucleus in the brainstem, giving rise to the auditory ventral stream. The posterior branch enters the dorsal and posteroventral cochlear nuclei, giving rise to the auditory dorsal stream.[7]: 8 

Language processing can also occur in relation to signed languages or written content.

Early neurolinguistic models

Throughout the 20th century, our knowledge of language processing in the brain was dominated by the Wernicke-Lichtheim-Geschwind model.[8][2][9] The Wernicke-Lichtheim-Geschwind model is primarily based on research conducted on brain-damaged individuals who were reported to possess a variety of language-related disorders. In accordance with this model, words are perceived via a specialized word reception center (Wernicke’s area) that is located in the left temporoparietal junction. This region then projects to a word production center (Broca’s area) that is located in the left inferior frontal gyrus. Because almost all language input was thought to funnel via Wernicke’s area and all language output to funnel via Broca’s area, it became extremely difficult to identify the basic properties of each region. This lack of clear definition for the contribution of Wernicke’s and Broca’s regions to human language rendered it extremely difficult to identify their homologues in other primates.[10] With the advent of fMRI and its application to lesion mapping, however, it was shown that this model is based on incorrect correlations between symptoms and lesions.[11][12][13][14][15][16][17] The refutation of such an influential and dominant model opened the door to new models of language processing in the brain.

Current neurolinguistic models

Anatomy

In the last two decades, significant advances have occurred in our understanding of the neural processing of sounds in primates. Initially by recordings of neural activity in the auditory cortices of monkeys[18][19] and later elaborated via histological staining[20][21][22] and fMRI scanning studies,[23] three auditory fields were identified in the primary auditory cortex, and nine associative auditory fields were shown to surround them (Figure 1 top left). Anatomical tracing and lesion studies further indicated a separation between the anterior and posterior auditory fields, with the anterior primary auditory fields (areas R-RT) projecting to the anterior associative auditory fields (areas AL-RTL), and the posterior primary auditory field (area A1) projecting to the posterior associative auditory fields (areas CL-CM).[20][24][25][26] Recently, evidence has accumulated that indicates homology between the human and monkey auditory fields. In humans, histological staining studies revealed two separate auditory fields in the primary auditory region of Heschl’s gyrus,[27][28] and by mapping the tonotopic organization of the human primary auditory fields with high-resolution fMRI and comparing it to the tonotopic organization of the monkey primary auditory fields, homology was established between the human anterior primary auditory field and monkey area R (denoted in humans as area hR) and between the human posterior primary auditory field and monkey area A1 (denoted in humans as area hA1).[29][30][31][32][33] Intra-cortical recordings from the human auditory cortex further demonstrated patterns of connectivity similar to those of the auditory cortex of the monkey. Recordings from the surface of the auditory cortex (supra-temporal plane) showed that the anterior Heschl’s gyrus (area hR) projects primarily to the middle-anterior superior temporal gyrus (mSTG-aSTG) and the posterior Heschl’s gyrus (area hA1) projects primarily to the posterior superior temporal gyrus (pSTG) and the planum temporale (area PT; Figure 1 top right).[34][35] Consistent with connections from area hR to the aSTG and from hA1 to the pSTG is an fMRI study of a patient with impaired sound recognition (auditory agnosia), who showed reduced bilateral activation in areas hR and aSTG but spared activation in the mSTG-pSTG.[36] This connectivity pattern is also corroborated by a study that recorded activation from the lateral surface of the auditory cortex and reported simultaneous non-overlapping activation clusters in the pSTG and mSTG-aSTG while listening to sounds.[37]

Downstream to the auditory cortex, anatomical tracing studies in monkeys delineated projections from the anterior associative auditory fields (areas AL-RTL) to ventral prefrontal and premotor cortices in the inferior frontal gyrus (IFG)[38][39] and the amygdala.[40] Cortical recording and functional imaging studies in macaque monkeys further elaborated on this processing stream by showing that acoustic information flows from the anterior auditory cortex to the temporal pole (TP) and then to the IFG.[41][42][43][44][45][46] This pathway is commonly referred to as the auditory ventral stream (AVS; Figure 1, bottom left-red arrows). In contrast to the anterior auditory fields, tracing studies reported that the posterior auditory fields (areas CL-CM) project primarily to dorsolateral prefrontal and premotor cortices (although some projections do terminate in the IFG).[47][39] Cortical recordings and anatomical tracing studies in monkeys further provided evidence that this processing stream flows from the posterior auditory fields to the frontal lobe via a relay station in the intra-parietal sulcus (IPS).[48][49][50][51][52][53] This pathway is commonly referred to as the auditory dorsal stream (ADS; Figure 1, bottom left-blue arrows). Comparing the white matter pathways involved in communication in humans and monkeys with diffusion tensor imaging techniques indicates similar connections of the AVS and ADS in the two species (Monkey,[52] Human[54][55][56][57][58][59]). In humans, the pSTG was shown to project to the parietal lobe (sylvian parietal-temporal junction-inferior parietal lobule; Spt-IPL), and from there to dorsolateral prefrontal and premotor cortices (Figure 1, bottom right-blue arrows), and the aSTG was shown to project to the anterior temporal lobe (middle temporal gyrus-temporal pole; MTG-TP) and from there to the IFG (Figure 1 bottom right-red arrows).

Auditory ventral stream

The auditory ventral stream (AVS) connects the auditory cortex with the middle temporal gyrus and temporal pole, which in turn connects with the inferior frontal gyrus. This pathway is responsible for sound recognition, and is accordingly known as the auditory ‘what’ pathway. The functions of the AVS include the following.

Sound recognition

Accumulative converging evidence indicates that the AVS is involved in recognizing auditory objects. At the level of the primary auditory cortex, recordings from monkeys showed a higher percentage of neurons selective for learned melodic sequences in area R than in area A1,[60] and a study in humans demonstrated more selectivity for heard syllables in the anterior Heschl’s gyrus (area hR) than in the posterior Heschl’s gyrus (area hA1).[61] In downstream associative auditory fields, studies from both monkeys and humans reported that the border between the anterior and posterior auditory fields (Figure 1-area PC in the monkey and mSTG in the human) processes pitch attributes that are necessary for the recognition of auditory objects.[18] The anterior auditory fields of monkeys were also demonstrated with selectivity for conspecific vocalizations with intra-cortical recordings[41][19][62] and functional imaging.[63][42][43] One fMRI monkey study further demonstrated a role of the aSTG in the recognition of individual voices.[42] The role of the human mSTG-aSTG in sound recognition was demonstrated via functional imaging studies that correlated activity in this region with isolation of auditory objects from background noise,[64][65] and with the recognition of spoken words,[66][67][68][69][70][71][72] voices,[73] melodies,[74][75] environmental sounds,[76][77][78] and non-speech communicative sounds.[79] A meta-analysis of fMRI studies[80] further demonstrated functional dissociation between the left mSTG and aSTG, with the former processing short speech units (phonemes) and the latter processing longer units (e.g., words, environmental sounds). A study that recorded neural activity directly from the left pSTG and aSTG reported that the aSTG, but not the pSTG, was more active when the patient listened to speech in her native language than to an unfamiliar foreign language.[81] Consistently, electro-stimulation to the aSTG of this patient resulted in impaired speech perception[81] (see also[82][83] for similar results). Intra-cortical recordings from the right and left aSTG further demonstrated that speech is processed laterally to music.[81] An fMRI study of a patient with impaired sound recognition (auditory agnosia) due to brainstem damage also showed reduced activation in areas hR and aSTG of both hemispheres when hearing spoken words and environmental sounds.[36] Recordings from the anterior auditory cortex of monkeys while maintaining learned sounds in working memory,[46] and the debilitating effect of induced lesions to this region on working memory recall,[84][85][86] further implicate the AVS in maintaining the perceived auditory objects in working memory. In humans, area mSTG-aSTG was also reported active during rehearsal of heard syllables with MEG[87] and fMRI.[88] The latter study further demonstrated that working memory in the AVS is for the acoustic properties of spoken words and that it is independent of working memory in the ADS, which mediates inner speech. Working memory studies in monkeys also suggest that in monkeys, in contrast to humans, the AVS is the dominant working memory store.[89]

In humans, downstream to the aSTG, the MTG and TP are thought to constitute the semantic lexicon, which is a long-term memory repository of audio-visual representations that are interconnected on the basis of semantic relationships (see also the reviews by[3][4] discussing this topic). The primary evidence for this role of the MTG-TP is that patients with damage to this region (e.g., patients with semantic dementia or herpes simplex virus encephalitis) are reported[90][91] to have an impaired ability to describe visual and auditory objects and a tendency to commit semantic errors when naming objects (i.e., semantic paraphasia). Semantic paraphasias were also expressed by aphasic patients with left MTG-TP damage[14][92] and were shown to occur in non-aphasic patients after electro-stimulation to this region[93][83] or to the underlying white matter pathway.[94] Two meta-analyses of the fMRI literature also reported that the anterior MTG and TP were consistently active during semantic analysis of speech and text,[66][95] and an intra-cortical recording study correlated neural discharge in the MTG with the comprehension of intelligible sentences.[96]

Sentence comprehension

In addition to extracting meaning from sounds, the MTG-TP region of the AVS appears to have a role in sentence comprehension, possibly by merging concepts together (e.g., merging the concepts ‘blue’ and ‘shirt’ to create the concept of a ‘blue shirt’). The role of the MTG in extracting meaning from sentences has been demonstrated in functional imaging studies reporting stronger activation in the anterior MTG when proper sentences are contrasted with lists of words, sentences in a foreign or nonsense language, scrambled sentences, sentences with semantic or syntactic violations, and sentence-like sequences of environmental sounds.[97][98][99][100][101][102][103][104] One fMRI study[105] in which participants were instructed to read a story further correlated activity in the anterior MTG with the amount of semantic and syntactic content each sentence contained. An EEG study[106] that contrasted cortical activity while reading sentences with and without syntactic violations in healthy participants and patients with MTG-TP damage concluded that the MTG-TP in both hemispheres participates in the automatic (rule-based) stage of syntactic analysis (ELAN component), and that the left MTG-TP is also involved in a later controlled stage of syntax analysis (P600 component). Patients with damage to the MTG-TP region have also been reported to have impaired sentence comprehension.[14][107][108] See review[109] for more information on this topic.

Bilaterality

In contradiction to the Wernicke-Lichtheim-Geschwind model, which implicates sound recognition as occurring solely in the left hemisphere, studies that examined the properties of the right or left hemisphere in isolation via unilateral hemispheric anesthesia (i.e., the Wada procedure[110]) or intra-cortical recordings from each hemisphere[96] provided evidence that sound recognition is processed bilaterally. Moreover, a study that instructed patients with disconnected hemispheres (i.e., split-brain patients) to match spoken words to written words presented to the right or left hemifields reported vocabulary in the right hemisphere that almost matches that of the left hemisphere in size[111] (the right-hemisphere vocabulary was equivalent to the vocabulary of a healthy 11-year-old child). This bilateral recognition of sounds is also consistent with the finding that unilateral lesion to the auditory cortex rarely results in a deficit in auditory comprehension (i.e., auditory agnosia), whereas a second lesion to the remaining hemisphere (which could occur years later) does.[112][113] Finally, as mentioned earlier, an fMRI scan of an auditory agnosia patient demonstrated bilateral reduced activation in the anterior auditory cortices,[36] and bilateral electro-stimulation to these regions in both hemispheres resulted in impaired speech recognition.[81]

Auditory dorsal stream

The auditory dorsal stream connects the auditory cortex with the parietal lobe, which in turn connects with the inferior frontal gyrus. In both humans and non-human primates, the auditory dorsal stream is responsible for sound localization, and is accordingly known as the auditory ‘where’ pathway. In humans, this pathway (especially in the left hemisphere) is also responsible for speech production, speech repetition, lip-reading, and phonological working memory and long-term memory.

Speech production

Studies of present-day humans have demonstrated a role for the ADS in speech production, particularly in the vocal expression of the names of objects. For instance, in a series of studies in which sub-cortical fibers were directly stimulated,[94] interference in the left pSTG and IPL resulted in errors during object-naming tasks, and interference in the left IFG resulted in speech arrest. Magnetic interference in the pSTG and IFG of healthy participants also produced speech errors and speech arrest, respectively.[114][115] One study has also reported that electrical stimulation of the left IPL caused patients to believe that they had spoken when they had not, and that IFG stimulation caused patients to unconsciously move their lips.[116] The contribution of the ADS to the process of articulating the names of objects could be dependent on the reception of afferents from the semantic lexicon of the AVS, as an intra-cortical recording study reported activation in the posterior MTG prior to activation in the Spt-IPL region when patients named objects in pictures.[117] Intra-cortical electrical stimulation studies also reported that electrical interference to the posterior MTG was correlated with impaired object naming.[118][82]

Vocal mimicry

Although sound perception is primarily ascribed to the AVS, the ADS appears associated with several aspects of speech perception. For instance, in a meta-analysis of fMRI studies[119] (Turkeltaub and Coslett, 2010), in which the auditory perception of phonemes was contrasted with closely matching sounds, and the studies were rated for the required level of attention, the authors concluded that attention to phonemes correlates with strong activation in the pSTG-pSTS region. An intra-cortical recording study in which participants were instructed to identify syllables also correlated the hearing of each syllable with its own activation pattern in the pSTG.[120] The involvement of the ADS in both speech perception and production has been further illuminated in several pioneering functional imaging studies that contrasted speech perception with overt or covert speech production.[121][122][123] These studies demonstrated that the pSTS is active only during the perception of speech, whereas area Spt is active during both the perception and production of speech. The authors concluded that the pSTS projects to area Spt, which converts the auditory input into articulatory movements.[124][125] Similar results have been obtained in a study in which participants’ temporal and parietal lobes were electrically stimulated. This study reported that electrically stimulating the pSTG region interferes with sentence comprehension and that stimulation of the IPL interferes with the ability to vocalize the names of objects.[83] The authors also reported that stimulation in area Spt and the inferior IPL induced interference during both object-naming and speech-comprehension tasks. The role of the ADS in speech repetition is also congruent with the results of other functional imaging studies that have localized activation during speech repetition tasks to ADS regions.[126][127][128] An intra-cortical recording study that recorded activity throughout most of the temporal, parietal and frontal lobes also reported activation in the pSTG, Spt, IPL and IFG when speech repetition is contrasted with speech perception.[129] Neuropsychological studies have also found that individuals with speech repetition deficits but preserved auditory comprehension (i.e., conduction aphasia) suffer from circumscribed damage to the Spt-IPL area[130][131][132][133][134][135][136] or damage to the projections that emanate from this area and target the frontal lobe.[137][138][139][140] Studies have also reported a transient speech repetition deficit in patients after direct intra-cortical electrical stimulation to this same region.[11][141][142] Insight into the purpose of speech repetition in the ADS is provided by longitudinal studies of children that correlated the learning of foreign vocabulary with the ability to repeat nonsense words.[143][144]

Speech monitoring

In addition to repeating and producing speech, the ADS appears to have a role in monitoring the quality of the speech output. Neuroanatomical evidence suggests that the ADS is equipped with descending connections from the IFG to the pSTG that relay information about motor activity (i.e., corollary discharges) in the vocal apparatus (mouth, tongue, vocal folds). This feedback marks the sound perceived during speech production as self-produced and can be used to adjust the vocal apparatus to increase the similarity between the perceived and emitted calls. Evidence for descending connections from the IFG to the pSTG has been offered by a study that electrically stimulated the IFG during surgical operations and reported the spread of activation to the pSTG-pSTS-Spt region.[145] A study[146] that compared the ability of aphasic patients with frontal, parietal or temporal lobe damage to quickly and repeatedly articulate a string of syllables reported that damage to the frontal lobe interfered with the articulation of both identical syllabic strings («Bababa») and non-identical syllabic strings («Badaga»), whereas patients with temporal or parietal lobe damage only exhibited impairment when articulating non-identical syllabic strings. Because the patients with temporal and parietal lobe damage were capable of repeating the syllabic string in the first task, their speech perception and production appear to be relatively preserved, and their deficit in the second task is therefore due to impaired monitoring. Demonstrating the role of the descending ADS connections in monitoring emitted calls, an fMRI study instructed participants to speak under normal conditions or when hearing a modified version of their own voice (delayed first formant) and reported that hearing a distorted version of one’s own voice results in increased activation in the pSTG.[147] Further demonstrating that the ADS facilitates motor feedback during mimicry is an intra-cortical recording study that contrasted speech perception and repetition.[129] The authors reported that, in addition to activation in the IPL and IFG, speech repetition is characterized by stronger activation in the pSTG than during speech perception.

Integration of phonemes with lip-movements

As noted above, the ADS is also associated with aspects of speech perception, including attention to phonemes (strong pSTG-pSTS activation in a meta-analysis of fMRI studies[119]) and the identification of syllables (syllable-specific activation patterns in the pSTG[148]). Consistent with the role of the ADS in discriminating phonemes,[119] studies have ascribed the integration of phonemes and their corresponding lip movements (i.e., visemes) to the pSTS of the ADS. For example, an fMRI study[149] has correlated activation in the pSTS with the McGurk illusion (in which hearing the syllable «ba» while seeing the viseme «ga» results in the perception of the syllable «da»). Another study has found that using magnetic stimulation to interfere with processing in this area further disrupts the McGurk illusion.[150] The association of the pSTS with the audio-visual integration of speech has also been demonstrated in a study that presented participants with pictures of faces and spoken words of varying quality. The study reported that the pSTS selects for the combined increase of the clarity of faces and spoken words.[151] Corroborating evidence has been provided by an fMRI study[152] that contrasted the perception of audio-visual speech with audio-visual non-speech (pictures and sounds of tools). This study reported the detection of speech-selective compartments in the pSTS. In addition, an fMRI study[153] that contrasted congruent audio-visual speech with incongruent speech (pictures of still faces) reported pSTS activation. For a review presenting additional converging evidence regarding the role of the pSTS and ADS in phoneme-viseme integration, see.[154]

Phonological long-term memory

A growing body of evidence indicates that humans, in addition to having a long-term store for word meanings located in the MTG-TP of the AVS (i.e., the semantic lexicon), also have a long-term store for the names of objects located in the Spt-IPL region of the ADS (i.e., the phonological lexicon). For example, a study[155][156] examining patients with damage to the AVS (MTG damage) or damage to the ADS (IPL damage) reported that MTG damage results in individuals incorrectly identifying objects (e.g., calling a «goat» a «sheep,» an example of semantic paraphasia). Conversely, IPL damage results in individuals correctly identifying the object but incorrectly pronouncing its name (e.g., saying «gof» instead of «goat,» an example of phonemic paraphasia). Semantic paraphasia errors have also been reported in patients receiving intra-cortical electrical stimulation of the AVS (MTG), and phonemic paraphasia errors have been reported in patients whose ADS (pSTG, Spt, and IPL) received intra-cortical electrical stimulation.[83][157][94] Further supporting the role of the ADS in object naming is an MEG study that localized activity in the IPL during the learning and during the recall of object names.[158] A study that induced magnetic interference in participants’ IPL while they answered questions about an object reported that the participants were capable of answering questions regarding the object’s characteristics or perceptual attributes but were impaired when asked whether the word contained two or three syllables.[159] An MEG study has also correlated recovery from anomia (a disorder characterized by an impaired ability to name objects) with changes in IPL activation.[160] Further supporting the role of the IPL in encoding the sounds of words are studies reporting that, compared to monolinguals, bilinguals have greater cortical density in the IPL but not the MTG.[161][162] Because evidence shows that, in bilinguals, different phonological representations of the same word share the same semantic representation,[163] this increase in density in the IPL verifies the existence of the phonological lexicon: the semantic lexicon of bilinguals is expected to be similar in size to the semantic lexicon of monolinguals, whereas their phonological lexicon should be twice the size. Consistent with this finding, cortical density in the IPL of monolinguals also correlates with vocabulary size.[164][165] Notably, the functional dissociation of the AVS and ADS in object-naming tasks is supported by cumulative evidence from reading research showing that semantic errors are correlated with MTG impairment and phonemic errors with IPL impairment. Based on these associations, the semantic analysis of text has been linked to the inferior-temporal gyrus and MTG, and the phonological analysis of text has been linked to the pSTG-Spt-IPL.[166][167][168]

Phonological working memory

Working memory is often treated as the temporary activation of the representations stored in long-term memory that are used for speech (phonological representations). This sharing of resources between working memory and speech is evidenced by the finding[169][170] that speaking during rehearsal results in a significant reduction in the number of items that can be recalled from working memory (articulatory suppression). The involvement of the phonological lexicon in working memory is also evidenced by the tendency of individuals to make more errors when recalling words from a recently learned list of phonologically similar words than from a list of phonologically dissimilar words (the phonological similarity effect).[169] Studies have also found that speech errors committed during reading are remarkably similar to speech errors made during the recall of recently learned, phonologically similar words from working memory.[171] Patients with IPL damage have also been observed to exhibit both speech production errors and impaired working memory.[172][173][174][175] Finally, the view that verbal working memory is the result of temporarily activating phonological representations in the ADS is compatible with recent models describing working memory as the combination of maintaining representations in the mechanism of attention in parallel to temporarily activating representations in long-term memory.[170][176][177][178] It has been argued that the role of the ADS in the rehearsal of lists of words is the reason this pathway is active during sentence comprehension.[179] For a review of the role of the ADS in working memory, see.[180]

The ‘from where to what’ model of language evolution hypothesizes seven stages of language evolution.

The evolution of language

The auditory dorsal stream also has non-language related functions, such as sound localization[181][182][183][184][185] and guidance of eye movements.[186][187] Recent studies also indicate a role of the ADS in localization of family/tribe members, as a study[188] that recorded from the cortex of an epileptic patient reported that the pSTG, but not the aSTG, is selective for the presence of new speakers. An fMRI study[189] of fetuses in their third trimester also demonstrated that area Spt is more selective to female speech than to pure tones, and that a sub-section of Spt is selective to the speech of their mother in contrast to unfamiliar female voices.

It is presently unknown why so many functions are ascribed to the human ADS. An attempt to unify these functions under a single framework was conducted in the ‘From where to what’ model of language evolution.[190][191] In accordance with this model, each function of the ADS indicates a different intermediate phase in the evolution of language. The roles of sound localization and integration of sound location with voices and auditory objects are interpreted as evidence that the origin of speech is the exchange of contact calls (calls used to report location in cases of separation) between mothers and offspring. The role of the ADS in the perception and production of intonations is interpreted as evidence that speech began by modifying the contact calls with intonations, possibly for distinguishing alarm contact calls from safe contact calls. The role of the ADS in encoding the names of objects (phonological long-term memory) is interpreted as evidence of a gradual transition from modifying calls with intonations to complete vocal control. The role of the ADS in the integration of lip movements with phonemes and in speech repetition is interpreted as evidence that spoken words were learned by infants mimicking their parents’ vocalizations, initially by imitating their lip movements. The role of the ADS in phonological working memory is interpreted as evidence that the words learned through mimicry remained active in the ADS even when not spoken. This resulted in individuals capable of rehearsing a list of vocalizations, which enabled the production of words with several syllables. Further developments in the ADS enabled the rehearsal of lists of words, which provided the infrastructure for communicating with sentences.

Sign language in the brain

Neuroscientific research has provided a scientific understanding of how sign language is processed in the brain. There are over 135 discrete sign languages around the world, making use of different accents formed by separate areas of a country.[192]

By resorting to lesion analyses and neuroimaging, neuroscientists have discovered that, whether it be spoken or sign language, human brains process language in a similar manner regarding which areas of the brain are being used.[192] Lesion analyses are used to examine the consequences of damage to specific brain regions involved in language, while neuroimaging explores regions that are engaged in the processing of language.[192]

It had previously been hypothesized that damage to Broca’s area or Wernicke’s area does not affect the perception of sign language; however, this is not the case. Studies have shown that damage to these areas produces effects in sign language similar to those in spoken language, with sign errors present and/or repeated.[192] Both types of language are affected by damage to the left hemisphere of the brain rather than the right hemisphere, which usually deals with the arts.

There are clear patterns for utilizing and processing language. In sign language, Broca’s area is activated, while processing sign language employs Wernicke’s area, similar to spoken language.[192]

There have been other hypotheses about the lateralization of the two hemispheres. Specifically, the right hemisphere was thought to contribute to the overall communication of a language globally, whereas the left hemisphere would be dominant in generating the language locally.[193] Through research on aphasias, signers with right hemisphere damage (RHD) were found to have a problem maintaining the spatial portion of their signs, confusing similar signs at different locations necessary to communicate with another properly.[193] Signers with left hemisphere damage (LHD), on the other hand, had results similar to those of hearing patients. Furthermore, other studies have emphasized that sign language is represented bilaterally, although further research is needed to reach a conclusion.[193]

Writing in the brain

There is a comparatively small body of research on the neurology of reading and writing.[194] Most of the studies performed deal with reading rather than writing or spelling, and the majority of both kinds focus solely on the English language.[195] English orthography is less transparent than that of other languages using a Latin script.[194] Another difficulty is that some studies focus on spelling words of English and omit the few logographic characters found in the script.[194]

In terms of spelling, English words can be divided into three categories – regular, irregular, and “novel words” or “nonwords.” Regular words are those in which there is a regular, one-to-one correspondence between grapheme and phoneme in spelling. Irregular words are those in which no such correspondence exists. Nonwords are those that exhibit the expected orthography of regular words but do not carry meaning, such as nonce words and onomatopoeia.[194]

An issue in the cognitive and neurological study of reading and spelling in English is whether a single-route or dual-route model best describes how literate speakers are able to read and write all three categories of English words according to accepted standards of orthographic correctness. Single-route models posit that lexical memory is used to store all spellings of words for retrieval in a single process. Dual-route models posit that lexical memory is employed to process irregular and high-frequency regular words, while low-frequency regular words and nonwords are processed using a sub-lexical set of phonological rules.[194]
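
To make the contrast concrete, the following is a minimal, hypothetical sketch of a dual-route speller in Python. The word lists and phoneme-to-grapheme rules are invented toy data for illustration only; they are not drawn from the models or studies cited here.

```python
# Toy sketch of the dual-route account of spelling (hypothetical data).

# Lexical route: whole-word spellings retrieved from lexical memory
# (irregular words and high-frequency regular words).
LEXICON = {
    "jot": "yacht",   # irregular: sub-lexical rules cannot recover this spelling
    "kat": "cat",     # high-frequency regular word, also stored lexically
}

# Sub-lexical route: phoneme-to-grapheme correspondence rules
# (used for low-frequency regular words and nonwords).
PHONEME_TO_GRAPHEME = {"k": "c", "a": "a", "t": "t", "b": "b", "i": "i", "g": "g"}

def spell(phonemes: str) -> str:
    """Return a spelling for a simplified phoneme string."""
    if phonemes in LEXICON:          # 1. lexical route: whole-word lookup
        return LEXICON[phonemes]
    # 2. sub-lexical route: assemble the spelling phoneme by phoneme
    return "".join(PHONEME_TO_GRAPHEME.get(p, p) for p in phonemes)

print(spell("jot"))  # -> "yacht" (irregular word, retrieved from lexical memory)
print(spell("big"))  # -> "big"   (nonword or regular word, assembled by rules)
```

A single-route account would instead handle all three word categories through one associative lexical mechanism, without a separate rule-based fallback.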

The single-route model for reading has found support in computer modelling studies, which suggest that readers identify words by their orthographic similarities to phonologically alike words.[194] However, cognitive and lesion studies lean towards the dual-route model. Cognitive spelling studies on children and adults suggest that spellers employ phonological rules in spelling regular words and nonwords, while lexical memory is accessed to spell irregular words and high-frequency words of all types.[194] Similarly, lesion studies indicate that lexical memory is used to store irregular words and certain regular words, while phonological rules are used to spell nonwords.[194]

More recently, neuroimaging studies using positron emission tomography and fMRI have suggested a balanced model in which the reading of all word types begins in the visual word form area, but subsequently branches off into different routes depending upon whether or not access to lexical memory or semantic information is needed (which would be expected with irregular words under a dual-route model).[194] A 2007 fMRI study found that subjects asked to produce regular words in a spelling task exhibited greater activation in the left posterior STG, an area used for phonological processing, while the spelling of irregular words produced greater activation of areas used for lexical memory and semantic processing, such as the left IFG and left SMG and both hemispheres of the MTG.[194] Spelling nonwords was found to access members of both pathways, such as the left STG and bilateral MTG and ITG.[194] Significantly, it was found that spelling induces activation in areas such as the left fusiform gyrus and left SMG that are also important in reading, suggesting that a similar pathway is used for both reading and writing.[194]

Far less information exists on the cognition and neurology of non-alphabetic and non-English scripts. Every language has a morphological and a phonological component, either of which can be recorded by a writing system. Scripts recording words and morphemes are considered logographic, while those recording phonological segments, such as syllabaries and alphabets, are phonographic.[195] Most systems combine the two and have both logographic and phonographic characters.[195]

In terms of complexity, writing systems can be characterized as “transparent” or “opaque” and as “shallow” or “deep.” A “transparent” system exhibits an obvious correspondence between grapheme and sound, while in an “opaque” system this relationship is less obvious. The terms “shallow” and “deep” refer to the extent that a system’s orthography represents morphemes as opposed to phonological segments.[195] Systems that record larger morphosyntactic or phonological segments, such as logographic systems and syllabaries, put greater demand on the memory of users.[195] It would thus be expected that an opaque or deep writing system would put greater demand on areas of the brain used for lexical memory than would a system with transparent or shallow orthography.

See also

  • Sign language
  • Phonology
  • Auditory processing disorder
  • Brodmann area
  • Cognitive science
  • Developmental verbal dyspraxia
  • FOXP2
  • Language disorder
  • Neurobiology
  • Neurolinguistics
  • Neuropsychology
  • Neuroscience
  • Origin of language
  • Visual word form area

References

  1. ^ Seidenberg MS, Petitto LA (1987). «Communication, symbolic communication, and language: Comment on Savage-Rumbaugh, McDonald, Sevcik, Hopkins, and Rupert (1986)». Journal of Experimental Psychology: General. 116 (3): 279–287. doi:10.1037/0096-3445.116.3.279. S2CID 18329599.
  2. ^ a b Geschwind N (June 1965). «Disconnexion syndromes in animals and man. I». review. Brain. 88 (2): 237–94. doi:10.1093/brain/88.2.237. PMID 5318481.
  3. ^ a b Hickok G, Poeppel D (May 2007). «The cortical organization of speech processing». review. Nature Reviews. Neuroscience. 8 (5): 393–402. doi:10.1038/nrn2113. PMID 17431404. S2CID 6199399.
  4. ^ a b Gow DW (June 2012). «The cortical organization of lexical knowledge: a dual lexicon model of spoken language processing». review. Brain and Language. 121 (3): 273–88. doi:10.1016/j.bandl.2012.03.005. PMC 3348354. PMID 22498237.
  5. ^ Poliva O (2017-09-20). «From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans». review. F1000Research. 4: 67. doi:10.12688/f1000research.6175.3. PMC 5600004. PMID 28928931. Material was copied from this source, which is available under a Creative Commons Attribution 4.0 International License.
  6. ^ Poliva O (2016). «From Mimicry to Language: A Neuroanatomically Based Evolutionary Model of the Emergence of Vocal Language». review. Frontiers in Neuroscience. 10: 307. doi:10.3389/fnins.2016.00307. PMC 4928493. PMID 27445676. Material was copied from this source, which is available under a Creative Commons Attribution 4.0 International License.
  7. ^ Pickles JO (2015). «Chapter 1: Auditory pathways: anatomy and physiology». In Aminoff MJ, Boller F, Swaab DF (eds.). Handbook of Clinical Neurology. review. Vol. 129. pp. 3–25. doi:10.1016/B978-0-444-62630-1.00001-9. ISBN 978-0-444-62630-1. PMID 25726260.
  8. ^ Lichtheim L (1885-01-01). «On Aphasia». Brain. 7 (4): 433–484. doi:10.1093/brain/7.4.433. hdl:11858/00-001M-0000-002C-5780-B.
  9. ^ Wernicke C (1974). Der aphasische Symptomenkomplex. Springer Berlin Heidelberg. pp. 1–70. ISBN 978-3-540-06905-8.
  10. ^ Aboitiz F, García VR (December 1997). «The evolutionary origin of the language areas in the human brain. A neuroanatomical perspective». Brain Research. Brain Research Reviews. 25 (3): 381–96. doi:10.1016/s0165-0173(97)00053-2. PMID 9495565. S2CID 20704891.
  11. ^ a b Anderson JM, Gilmore R, Roper S, Crosson B, Bauer RM, Nadeau S, Beversdorf DQ, Cibula J, Rogish M, Kortencamp S, Hughes JD, Gonzalez Rothi LJ, Heilman KM (October 1999). «Conduction aphasia and the arcuate fasciculus: A reexamination of the Wernicke-Geschwind model». Brain and Language. 70 (1): 1–12. doi:10.1006/brln.1999.2135. PMID 10534369. S2CID 12171982.
  12. ^ DeWitt I, Rauschecker JP (November 2013). «Wernicke’s area revisited: parallel streams and word processing». Brain and Language. 127 (2): 181–91. doi:10.1016/j.bandl.2013.09.014. PMC 4098851. PMID 24404576.
  13. ^ Dronkers NF (January 2000). «The pursuit of brain-language relationships». Brain and Language. 71 (1): 59–61. doi:10.1006/brln.1999.2212. PMID 10716807. S2CID 7224731.
  14. ^ a b c Dronkers NF, Wilkins DP, Van Valin RD, Redfern BB, Jaeger JJ (May 2004). «Lesion analysis of the brain areas involved in language comprehension». Cognition. 92 (1–2): 145–77. doi:10.1016/j.cognition.2003.11.002. hdl:11858/00-001M-0000-0012-6912-A. PMID 15037129. S2CID 10919645.
  15. ^ Mesulam MM, Thompson CK, Weintraub S, Rogalski EJ (August 2015). «The Wernicke conundrum and the anatomy of language comprehension in primary progressive aphasia». Brain. 138 (Pt 8): 2423–37. doi:10.1093/brain/awv154. PMC 4805066. PMID 26112340.
  16. ^ Poeppel D, Emmorey K, Hickok G, Pylkkänen L (October 2012). «Towards a new neurobiology of language». The Journal of Neuroscience. 32 (41): 14125–31. doi:10.1523/jneurosci.3244-12.2012. PMC 3495005. PMID 23055482.
  17. ^ Vignolo LA, Boccardi E, Caverni L (March 1986). «Unexpected CT-scan findings in global aphasia». Cortex; A Journal Devoted to the Study of the Nervous System and Behavior. 22 (1): 55–69. doi:10.1016/s0010-9452(86)80032-6. PMID 2423296. S2CID 4479679.
  18. ^ a b Bendor D, Wang X (August 2006). «Cortical representations of pitch in monkeys and humans». Current Opinion in Neurobiology. 16 (4): 391–9. doi:10.1016/j.conb.2006.07.001. PMC 4325365. PMID 16842992.
  19. ^ a b Rauschecker JP, Tian B, Hauser M (April 1995). «Processing of complex sounds in the macaque nonprimary auditory cortex». Science. 268 (5207): 111–4. Bibcode:1995Sci…268..111R. doi:10.1126/science.7701330. PMID 7701330.
  20. ^ a b de la Mothe LA, Blumell S, Kajikawa Y, Hackett TA (May 2006). «Cortical connections of the auditory cortex in marmoset monkeys: core and medial belt regions». The Journal of Comparative Neurology. 496 (1): 27–71. doi:10.1002/cne.20923. PMID 16528722. S2CID 38393074.
  21. ^ de la Mothe LA, Blumell S, Kajikawa Y, Hackett TA (May 2012). «Cortical connections of auditory cortex in marmoset monkeys: lateral belt and parabelt regions». Anatomical Record. 295 (5): 800–21. doi:10.1002/ar.22451. PMC 3379817. PMID 22461313.
  22. ^ Kaas JH, Hackett TA (October 2000). «Subdivisions of auditory cortex and processing streams in primates». Proceedings of the National Academy of Sciences of the United States of America. 97 (22): 11793–9. Bibcode:2000PNAS…9711793K. doi:10.1073/pnas.97.22.11793. PMC 34351. PMID 11050211.
  23. ^ Petkov CI, Kayser C, Augath M, Logothetis NK (July 2006). «Functional imaging reveals numerous fields in the monkey auditory cortex». PLOS Biology. 4 (7): e215. doi:10.1371/journal.pbio.0040215. PMC 1479693. PMID 16774452.
  24. ^ Morel A, Garraghty PE, Kaas JH (September 1993). «Tonotopic organization, architectonic fields, and connections of auditory cortex in macaque monkeys». The Journal of Comparative Neurology. 335 (3): 437–59. doi:10.1002/cne.903350312. PMID 7693772. S2CID 22872232.
  25. ^ Rauschecker JP, Tian B (October 2000). «Mechanisms and streams for processing of «what» and «where» in auditory cortex». Proceedings of the National Academy of Sciences of the United States of America. 97 (22): 11800–6. Bibcode:2000PNAS…9711800R. doi:10.1073/pnas.97.22.11800. PMC 34352. PMID 11050212.
  26. ^ Rauschecker JP, Tian B, Pons T, Mishkin M (May 1997). «Serial and parallel processing in rhesus monkey auditory cortex». The Journal of Comparative Neurology. 382 (1): 89–103. doi:10.1002/(sici)1096-9861(19970526)382:1<89::aid-cne6>3.3.co;2-y. PMID 9136813.
  27. ^ Sweet RA, Dorph-Petersen KA, Lewis DA (October 2005). «Mapping auditory core, lateral belt, and parabelt cortices in the human superior temporal gyrus». The Journal of Comparative Neurology. 491 (3): 270–89. doi:10.1002/cne.20702. PMID 16134138. S2CID 40822276.
  28. ^ Wallace MN, Johnston PW, Palmer AR (April 2002). «Histochemical identification of cortical areas in the auditory region of the human brain». Experimental Brain Research. 143 (4): 499–508. doi:10.1007/s00221-002-1014-z. PMID 11914796. S2CID 24211906.
  29. ^ Da Costa S, van der Zwaag W, Marques JP, Frackowiak RS, Clarke S, Saenz M (October 2011). «Human primary auditory cortex follows the shape of Heschl’s gyrus». The Journal of Neuroscience. 31 (40): 14067–75. doi:10.1523/jneurosci.2000-11.2011. PMC 6623669. PMID 21976491.
  30. ^ Humphries C, Liebenthal E, Binder JR (April 2010). «Tonotopic organization of human auditory cortex». NeuroImage. 50 (3): 1202–11. doi:10.1016/j.neuroimage.2010.01.046. PMC 2830355. PMID 20096790.
  31. ^ Langers DR, van Dijk P (September 2012). «Mapping the tonotopic organization in human auditory cortex with minimally salient acoustic stimulation». Cerebral Cortex. 22 (9): 2024–38. doi:10.1093/cercor/bhr282. PMC 3412441. PMID 21980020.
  32. ^ Striem-Amit E, Hertz U, Amedi A (March 2011). «Extensive cochleotopic mapping of human auditory cortical fields obtained with phase-encoding fMRI». PLOS ONE. 6 (3): e17832. Bibcode:2011PLoSO…617832S. doi:10.1371/journal.pone.0017832. PMC 3063163. PMID 21448274.
  33. ^ Woods DL, Herron TJ, Cate AD, Yund EW, Stecker GC, Rinne T, Kang X (2010). «Functional properties of human auditory cortical fields». Frontiers in Systems Neuroscience. 4: 155. doi:10.3389/fnsys.2010.00155. PMC 3001989. PMID 21160558.
  34. ^ Gourévitch B, Le Bouquin Jeannès R, Faucon G, Liégeois-Chauvel C (March 2008). «Temporal envelope processing in the human auditory cortex: response and interconnections of auditory cortical areas» (PDF). Hearing Research. 237 (1–2): 1–18. doi:10.1016/j.heares.2007.12.003. PMID 18255243. S2CID 15271578.
  35. ^ Guéguin M, Le Bouquin-Jeannès R, Faucon G, Chauvel P, Liégeois-Chauvel C (February 2007). «Evidence of functional connectivity between auditory cortical areas revealed by amplitude modulation sound processing». Cerebral Cortex. 17 (2): 304–13. doi:10.1093/cercor/bhj148. PMC 2111045. PMID 16514106.
  36. ^ a b c Poliva O, Bestelmeyer PE, Hall M, Bultitude JH, Koller K, Rafal RD (September 2015). «Functional Mapping of the Human Auditory Cortex: fMRI Investigation of a Patient with Auditory Agnosia from Trauma to the Inferior Colliculus» (PDF). Cognitive and Behavioral Neurology. 28 (3): 160–80. doi:10.1097/wnn.0000000000000072. PMID 26413744. S2CID 913296.
  37. ^ Chang EF, Edwards E, Nagarajan SS, Fogelson N, Dalal SS, Canolty RT, Kirsch HE, Barbaro NM, Knight RT (June 2011). «Cortical spatio-temporal dynamics underlying phonological target detection in humans». Journal of Cognitive Neuroscience. 23 (6): 1437–46. doi:10.1162/jocn.2010.21466. PMC 3895406. PMID 20465359.
  38. ^ Muñoz M, Mishkin M, Saunders RC (September 2009). «Resection of the medial temporal lobe disconnects the rostral superior temporal gyrus from some of its projection targets in the frontal lobe and thalamus». Cerebral Cortex. 19 (9): 2114–30. doi:10.1093/cercor/bhn236. PMC 2722427. PMID 19150921.
  39. ^ a b Romanski LM, Bates JF, Goldman-Rakic PS (January 1999). «Auditory belt and parabelt projections to the prefrontal cortex in the rhesus monkey». The Journal of Comparative Neurology. 403 (2): 141–57. doi:10.1002/(sici)1096-9861(19990111)403:2<141::aid-cne1>3.0.co;2-v. PMID 9886040. S2CID 42482082.
  40. ^ Tanaka D (June 1976). «Thalamic projections of the dorsomedial prefrontal cortex in the rhesus monkey (Macaca mulatta)». Brain Research. 110 (1): 21–38. doi:10.1016/0006-8993(76)90206-7. PMID 819108. S2CID 21529048.
  41. ^ a b Perrodin C, Kayser C, Logothetis NK, Petkov CI (August 2011). «Voice cells in the primate temporal lobe». Current Biology. 21 (16): 1408–15. doi:10.1016/j.cub.2011.07.028. PMC 3398143. PMID 21835625.
  42. ^ a b c Petkov CI, Kayser C, Steudel T, Whittingstall K, Augath M, Logothetis NK (March 2008). «A voice region in the monkey brain». Nature Neuroscience. 11 (3): 367–74. doi:10.1038/nn2043. PMID 18264095. S2CID 5505773.
  43. ^ a b Poremba A, Malloy M, Saunders RC, Carson RE, Herscovitch P, Mishkin M (January 2004). «Species-specific calls evoke asymmetric activity in the monkey’s temporal poles». Nature. 427 (6973): 448–51. Bibcode:2004Natur.427..448P. doi:10.1038/nature02268. PMID 14749833. S2CID 4402126.
  44. ^ Romanski LM, Averbeck BB, Diltz M (February 2005). «Neural representation of vocalizations in the primate ventrolateral prefrontal cortex». Journal of Neurophysiology. 93 (2): 734–47. doi:10.1152/jn.00675.2004. PMID 15371495.
  45. ^ Russ BE, Ackelson AL, Baker AE, Cohen YE (January 2008). «Coding of auditory-stimulus identity in the auditory non-spatial processing stream». Journal of Neurophysiology. 99 (1): 87–95. doi:10.1152/jn.01069.2007. PMC 4091985. PMID 18003874.
  46. ^ a b Tsunada J, Lee JH, Cohen YE (June 2011). «Representation of speech categories in the primate auditory cortex». Journal of Neurophysiology. 105 (6): 2634–46. doi:10.1152/jn.00037.2011. PMC 3118748. PMID 21346209.
  47. ^ Cusick CG, Seltzer B, Cola M, Griggs E (September 1995). «Chemoarchitectonics and corticocortical terminations within the superior temporal sulcus of the rhesus monkey: evidence for subdivisions of superior temporal polysensory cortex». The Journal of Comparative Neurology. 360 (3): 513–35. doi:10.1002/cne.903600312. PMID 8543656. S2CID 42281619.
  48. ^ Cohen YE, Russ BE, Gifford GW, Kiringoda R, MacLean KA (December 2004). «Selectivity for the spatial and nonspatial attributes of auditory stimuli in the ventrolateral prefrontal cortex». The Journal of Neuroscience. 24 (50): 11307–16. doi:10.1523/jneurosci.3935-04.2004. PMC 6730358. PMID 15601937.
  49. ^ Deacon TW (February 1992). «Cortical connections of the inferior arcuate sulcus cortex in the macaque brain». Brain Research. 573 (1): 8–26. doi:10.1016/0006-8993(92)90109-m. ISSN 0006-8993. PMID 1374284. S2CID 20670766.
  50. ^ Lewis JW, Van Essen DC (December 2000). «Corticocortical connections of visual, sensorimotor, and multimodal processing areas in the parietal lobe of the macaque monkey». The Journal of Comparative Neurology. 428 (1): 112–37. doi:10.1002/1096-9861(20001204)428:1<112::aid-cne8>3.0.co;2-9. PMID 11058227. S2CID 16153360.
  51. ^ Roberts AC, Tomic DL, Parkinson CH, Roeling TA, Cutter DJ, Robbins TW, Everitt BJ (May 2007). «Forebrain connectivity of the prefrontal cortex in the marmoset monkey (Callithrix jacchus): an anterograde and retrograde tract-tracing study». The Journal of Comparative Neurology. 502 (1): 86–112. doi:10.1002/cne.21300. PMID 17335041. S2CID 18262007.
  52. ^ a b Schmahmann JD, Pandya DN, Wang R, Dai G, D’Arceuil HE, de Crespigny AJ, Wedeen VJ (March 2007). «Association fibre pathways of the brain: parallel observations from diffusion spectrum imaging and autoradiography». Brain. 130 (Pt 3): 630–53. doi:10.1093/brain/awl359. PMID 17293361.
  53. ^ Seltzer B, Pandya DN (July 1984). «Further observations on parieto-temporal connections in the rhesus monkey». Experimental Brain Research. 55 (2): 301–12. doi:10.1007/bf00237280. PMID 6745368. S2CID 20167953.
  54. ^ Catani M, Jones DK, ffytche DH (January 2005). «Perisylvian language networks of the human brain». Annals of Neurology. 57 (1): 8–16. doi:10.1002/ana.20319. PMID 15597383. S2CID 17743067.
  55. ^ Frey S, Campbell JS, Pike GB, Petrides M (November 2008). «Dissociating the human language pathways with high angular resolution diffusion fiber tractography». The Journal of Neuroscience. 28 (45): 11435–44. doi:10.1523/jneurosci.2388-08.2008. PMC 6671318. PMID 18987180.
  56. ^ Makris N, Papadimitriou GM, Kaiser JR, Sorg S, Kennedy DN, Pandya DN (April 2009). «Delineation of the middle longitudinal fascicle in humans: a quantitative, in vivo, DT-MRI study». Cerebral Cortex. 19 (4): 777–85. doi:10.1093/cercor/bhn124. PMC 2651473. PMID 18669591.
  57. ^ Menjot de Champfleur N, Lima Maldonado I, Moritz-Gasser S, Machi P, Le Bars E, Bonafé A, Duffau H (January 2013). «Middle longitudinal fasciculus delineation within language pathways: a diffusion tensor imaging study in human». European Journal of Radiology. 82 (1): 151–7. doi:10.1016/j.ejrad.2012.05.034. PMID 23084876.
  58. ^ Turken AU, Dronkers NF (2011). «The neural architecture of the language comprehension network: converging evidence from lesion and connectivity analyses». Frontiers in Systems Neuroscience. 5: 1. doi:10.3389/fnsys.2011.00001. PMC 3039157. PMID 21347218.
  59. ^ Saur D, Kreher BW, Schnell S, Kümmerer D, Kellmeyer P, Vry MS, Umarova R, Musso M, Glauche V, Abel S, Huber W, Rijntjes M, Hennig J, Weiller C (November 2008). «Ventral and dorsal pathways for language». Proceedings of the National Academy of Sciences of the United States of America. 105 (46): 18035–40. Bibcode:2008PNAS..10518035S. doi:10.1073/pnas.0805234105. PMC 2584675. PMID 19004769.
  60. ^ Yin P, Mishkin M, Sutter M, Fritz JB (December 2008). «Early stages of melody processing: stimulus-sequence and task-dependent neuronal activity in monkey auditory cortical fields A1 and R». Journal of Neurophysiology. 100 (6): 3009–29. doi:10.1152/jn.00828.2007. PMC 2604844. PMID 18842950.
  61. ^ Steinschneider M, Volkov IO, Fishman YI, Oya H, Arezzo JC, Howard MA (February 2005). «Intracortical responses in human and monkey primary auditory cortex support a temporal processing mechanism for encoding of the voice onset time phonetic parameter». Cerebral Cortex. 15 (2): 170–86. doi:10.1093/cercor/bhh120. PMID 15238437.
  62. ^ Russ BE, Ackelson AL, Baker AE, Cohen YE (January 2008). «Coding of auditory-stimulus identity in the auditory non-spatial processing stream». Journal of Neurophysiology. 99 (1): 87–95. doi:10.1152/jn.01069.2007. PMC 4091985. PMID 18003874.
  63. ^ Joly O, Pallier C, Ramus F, Pressnitzer D, Vanduffel W, Orban GA (September 2012). «Processing of vocalizations in humans and monkeys: a comparative fMRI study» (PDF). NeuroImage. 62 (3): 1376–89. doi:10.1016/j.neuroimage.2012.05.070. PMID 22659478. S2CID 9441377.
  64. ^ Scheich H, Baumgart F, Gaschler-Markefski B, Tegeler C, Tempelmann C, Heinze HJ, Schindler F, Stiller D (February 1998). «Functional magnetic resonance imaging of a human auditory cortex area involved in foreground-background decomposition». The European Journal of Neuroscience. 10 (2): 803–9. doi:10.1046/j.1460-9568.1998.00086.x. PMID 9749748. S2CID 42898063.
  65. ^ Zatorre RJ, Bouffard M, Belin P (April 2004). «Sensitivity to auditory object features in human temporal neocortex». The Journal of Neuroscience. 24 (14): 3637–42. doi:10.1523/jneurosci.5458-03.2004. PMC 6729744. PMID 15071112.
  66. ^ a b Binder JR, Desai RH, Graves WW, Conant LL (December 2009). «Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies». Cerebral Cortex. 19 (12): 2767–96. doi:10.1093/cercor/bhp055. PMC 2774390. PMID 19329570.
  67. ^ Davis MH, Johnsrude IS (April 2003). «Hierarchical processing in spoken language comprehension». The Journal of Neuroscience. 23 (8): 3423–31. doi:10.1523/jneurosci.23-08-03423.2003. PMC 6742313. PMID 12716950.
  68. ^ Liebenthal E, Binder JR, Spitzer SM, Possing ET, Medler DA (October 2005). «Neural substrates of phonemic perception». Cerebral Cortex. 15 (10): 1621–31. doi:10.1093/cercor/bhi040. PMID 15703256.
  69. ^ Narain C, Scott SK, Wise RJ, Rosen S, Leff A, Iversen SD, Matthews PM (December 2003). «Defining a left-lateralized response specific to intelligible speech using fMRI». Cerebral Cortex. 13 (12): 1362–8. doi:10.1093/cercor/bhg083. PMID 14615301.
  70. ^ Obleser J, Boecker H, Drzezga A, Haslinger B, Hennenlotter A, Roettinger M, Eulitz C, Rauschecker JP (July 2006). «Vowel sound extraction in anterior superior temporal cortex». Human Brain Mapping. 27 (7): 562–71. doi:10.1002/hbm.20201. PMC 6871493. PMID 16281283.
  71. ^ Obleser J, Zimmermann J, Van Meter J, Rauschecker JP (October 2007). «Multiple stages of auditory speech perception reflected in event-related FMRI». Cerebral Cortex. 17 (10): 2251–7. doi:10.1093/cercor/bhl133. PMID 17150986.
  72. ^ Scott SK, Blank CC, Rosen S, Wise RJ (December 2000). «Identification of a pathway for intelligible speech in the left temporal lobe». Brain. 123 (12): 2400–6. doi:10.1093/brain/123.12.2400. PMC 5630088. PMID 11099443.
  73. ^ Belin P, Zatorre RJ (November 2003). «Adaptation to speaker’s voice in right anterior temporal lobe». NeuroReport. 14 (16): 2105–2109. doi:10.1097/00001756-200311140-00019. PMID 14600506. S2CID 34183900.
  74. ^ Benson RR, Whalen DH, Richardson M, Swainson B, Clark VP, Lai S, Liberman AM (September 2001). «Parametrically dissociating speech and nonspeech perception in the brain using fMRI». Brain and Language. 78 (3): 364–96. doi:10.1006/brln.2001.2484. PMID 11703063. S2CID 15328590.
  75. ^ Leaver AM, Rauschecker JP (June 2010). «Cortical representation of natural complex sounds: effects of acoustic features and auditory object category». The Journal of Neuroscience. 30 (22): 7604–12. doi:10.1523/jneurosci.0296-10.2010. PMC 2930617. PMID 20519535.
  76. ^ Lewis JW, Phinney RE, Brefczynski-Lewis JA, DeYoe EA (August 2006). «Lefties get it «right» when hearing tool sounds». Journal of Cognitive Neuroscience. 18 (8): 1314–30. doi:10.1162/jocn.2006.18.8.1314. PMID 16859417. S2CID 14049095.
  77. ^ Maeder PP, Meuli RA, Adriani M, Bellmann A, Fornari E, Thiran JP, Pittet A, Clarke S (October 2001). «Distinct pathways involved in sound recognition and localization: a human fMRI study» (PDF). NeuroImage. 14 (4): 802–16. doi:10.1006/nimg.2001.0888. PMID 11554799. S2CID 1388647.
  78. ^ Viceic D, Fornari E, Thiran JP, Maeder PP, Meuli R, Adriani M, Clarke S (November 2006). «Human auditory belt areas specialized in sound recognition: a functional magnetic resonance imaging study» (PDF). NeuroReport. 17 (16): 1659–62. doi:10.1097/01.wnr.0000239962.75943.dd. PMID 17047449. S2CID 14482187.
  79. ^ Shultz S, Vouloumanos A, Pelphrey K (May 2012). «The superior temporal sulcus differentiates communicative and noncommunicative auditory signals». Journal of Cognitive Neuroscience. 24 (5): 1224–32. doi:10.1162/jocn_a_00208. PMID 22360624. S2CID 10784270.
  80. ^ DeWitt I, Rauschecker JP (February 2012). «Phoneme and word recognition in the auditory ventral stream». Proceedings of the National Academy of Sciences of the United States of America. 109 (8): E505-14. doi:10.1073/pnas.1113427109. PMC 3286918. PMID 22308358.
  81. ^ a b c d Lachaux JP, Jerbi K, Bertrand O, Minotti L, Hoffmann D, Schoendorff B, Kahane P (October 2007). «A blueprint for real-time functional mapping via human intracranial recordings». PLOS ONE. 2 (10): e1094. Bibcode:2007PLoSO…2.1094L. doi:10.1371/journal.pone.0001094. PMC 2040217. PMID 17971857.
  82. ^ a b Matsumoto R, Imamura H, Inouchi M, Nakagawa T, Yokoyama Y, Matsuhashi M, Mikuni N, Miyamoto S, Fukuyama H, Takahashi R, Ikeda A (April 2011). «Left anterior temporal cortex actively engages in speech perception: A direct cortical stimulation study». Neuropsychologia. 49 (5): 1350–1354. doi:10.1016/j.neuropsychologia.2011.01.023. hdl:2433/141342. PMID 21251921. S2CID 1831334.
  83. ^ a b c d Roux FE, Miskin K, Durand JB, Sacko O, Réhault E, Tanova R, Démonet JF (October 2015). «Electrostimulation mapping of comprehension of auditory and visual words». Cortex; A Journal Devoted to the Study of the Nervous System and Behavior. 71: 398–408. doi:10.1016/j.cortex.2015.07.001. PMID 26332785. S2CID 39964328.
  84. ^ Fritz J, Mishkin M, Saunders RC (June 2005). «In search of an auditory engram». Proceedings of the National Academy of Sciences of the United States of America. 102 (26): 9359–64. Bibcode:2005PNAS..102.9359F. doi:10.1073/pnas.0503998102. PMC 1166637. PMID 15967995.
  85. ^ Stepien LS, Cordeau JP, Rasmussen T (1960). «The effect of temporal lobe and hippocampal lesions on auditory and visual recent memory in monkeys». Brain. 83 (3): 470–489. doi:10.1093/brain/83.3.470. ISSN 0006-8950.
  86. ^ Strominger NL, Oesterreich RE, Neff WD (June 1980). «Sequential auditory and visual discriminations after temporal lobe ablation in monkeys». Physiology & Behavior. 24 (6): 1149–56. doi:10.1016/0031-9384(80)90062-1. PMID 6774349. S2CID 7494152.
  87. ^ Kaiser J, Ripper B, Birbaumer N, Lutzenberger W (October 2003). «Dynamics of gamma-band activity in human magnetoencephalogram during auditory pattern working memory». NeuroImage. 20 (2): 816–27. doi:10.1016/s1053-8119(03)00350-1. PMID 14568454. S2CID 19373941.
  88. ^ Buchsbaum BR, Olsen RK, Koch P, Berman KF (November 2005). «Human dorsal and ventral auditory streams subserve rehearsal-based and echoic processes during verbal working memory». Neuron. 48 (4): 687–97. doi:10.1016/j.neuron.2005.09.029. PMID 16301183. S2CID 13202604.
  89. ^ Scott BH, Mishkin M, Yin P (July 2012). «Monkeys have a limited form of short-term memory in audition». Proceedings of the National Academy of Sciences of the United States of America. 109 (30): 12237–41. Bibcode:2012PNAS..10912237S. doi:10.1073/pnas.1209685109. PMC 3409773. PMID 22778411.
  90. ^ Noppeney U, Patterson K, Tyler LK, Moss H, Stamatakis EA, Bright P, Mummery C, Price CJ (April 2007). «Temporal lobe lesions and semantic impairment: a comparison of herpes simplex virus encephalitis and semantic dementia». Brain. 130 (Pt 4): 1138–47. doi:10.1093/brain/awl344. PMID 17251241.
  91. ^ Patterson K, Nestor PJ, Rogers TT (December 2007). «Where do you know what you know? The representation of semantic knowledge in the human brain». Nature Reviews. Neuroscience. 8 (12): 976–87. doi:10.1038/nrn2277. PMID 18026167. S2CID 7310189.
  92. ^ Schwartz MF, Kimberg DY, Walker GM, Faseyitan O, Brecher A, Dell GS, Coslett HB (December 2009). «Anterior temporal involvement in semantic word retrieval: voxel-based lesion-symptom mapping evidence from aphasia». Brain. 132 (Pt 12): 3411–27. doi:10.1093/brain/awp284. PMC 2792374. PMID 19942676.
  93. ^ Hamberger MJ, McClelland S, McKhann GM, Williams AC, Goodman RR (March 2007). «Distribution of auditory and visual naming sites in nonlesional temporal lobe epilepsy patients and patients with space-occupying temporal lobe lesions». Epilepsia. 48 (3): 531–8. doi:10.1111/j.1528-1167.2006.00955.x. PMID 17326797. S2CID 12642281.
  94. ^ a b c Duffau H (March 2008). «The anatomo-functional connectivity of language revisited. New insights provided by electrostimulation and tractography». Neuropsychologia. 46 (4): 927–34. doi:10.1016/j.neuropsychologia.2007.10.025. PMID 18093622. S2CID 40514753.
  95. ^ Vigneau M, Beaucousin V, Hervé PY, Duffau H, Crivello F, Houdé O, Mazoyer B, Tzourio-Mazoyer N (May 2006). «Meta-analyzing left hemisphere language areas: phonology, semantics, and sentence processing». NeuroImage. 30 (4): 1414–32. doi:10.1016/j.neuroimage.2005.11.002. PMID 16413796. S2CID 8870165.
  96. ^ a b Creutzfeldt O, Ojemann G, Lettich E (October 1989). «Neuronal activity in the human lateral temporal lobe. I. Responses to speech». Experimental Brain Research. 77 (3): 451–75. doi:10.1007/bf00249600. hdl:11858/00-001M-0000-002C-89EA-3. PMID 2806441. S2CID 19952034.
  97. ^ Mazoyer BM, Tzourio N, Frak V, Syrota A, Murayama N, Levrier O, Salamon G, Dehaene S, Cohen L, Mehler J (October 1993). «The cortical representation of speech» (PDF). Journal of Cognitive Neuroscience. 5 (4): 467–79. doi:10.1162/jocn.1993.5.4.467. PMID 23964919. S2CID 22265355.
  98. ^ Humphries C, Love T, Swinney D, Hickok G (October 2005). «Response of anterior temporal cortex to syntactic and prosodic manipulations during sentence processing». Human Brain Mapping. 26 (2): 128–38. doi:10.1002/hbm.20148. PMC 6871757. PMID 15895428.
  99. ^ Humphries C, Willard K, Buchsbaum B, Hickok G (June 2001). «Role of anterior temporal cortex in auditory sentence comprehension: an fMRI study». NeuroReport. 12 (8): 1749–52. doi:10.1097/00001756-200106130-00046. PMID 11409752. S2CID 13039857.
  100. ^ Vandenberghe R, Nobre AC, Price CJ (May 2002). «The response of left temporal cortex to sentences». Journal of Cognitive Neuroscience. 14 (4): 550–60. doi:10.1162/08989290260045800. PMID 12126497. S2CID 21607482.
  101. ^ Friederici AD, Rüschemeyer SA, Hahne A, Fiebach CJ (February 2003). «The role of left inferior frontal and superior temporal cortex in sentence comprehension: localizing syntactic and semantic processes». Cerebral Cortex. 13 (2): 170–7. doi:10.1093/cercor/13.2.170. PMID 12507948.
  102. ^ Xu J, Kemeny S, Park G, Frattali C, Braun A (2005). «Language in context: emergent features of word, sentence, and narrative comprehension». NeuroImage. 25 (3): 1002–15. doi:10.1016/j.neuroimage.2004.12.013. PMID 15809000. S2CID 25570583.
  103. ^ Rogalsky C, Hickok G (April 2009). «Selective attention to semantic and syntactic features modulates sentence processing networks in anterior temporal cortex». Cerebral Cortex. 19 (4): 786–96. doi:10.1093/cercor/bhn126. PMC 2651476. PMID 18669589.
  104. ^ Pallier C, Devauchelle AD, Dehaene S (February 2011). «Cortical representation of the constituent structure of sentences». Proceedings of the National Academy of Sciences of the United States of America. 108 (6): 2522–7. doi:10.1073/pnas.1018711108. PMC 3038732. PMID 21224415.
  105. ^ Brennan J, Nir Y, Hasson U, Malach R, Heeger DJ, Pylkkänen L (February 2012). «Syntactic structure building in the anterior temporal lobe during natural story listening». Brain and Language. 120 (2): 163–73. doi:10.1016/j.bandl.2010.04.002. PMC 2947556. PMID 20472279.
  106. ^ Kotz SA, von Cramon DY, Friederici AD (October 2003). «Differentiation of syntactic processes in the left and right anterior temporal lobe: Event-related brain potential evidence from lesion patients». Brain and Language. 87 (1): 135–136. doi:10.1016/s0093-934x(03)00236-0. S2CID 54320415.
  107. ^ Martin RC, Shelton JR, Yaffee LS (February 1994). «Language processing and working memory: Neuropsychological evidence for separate phonological and semantic capacities». Journal of Memory and Language. 33 (1): 83–111. doi:10.1006/jmla.1994.1005.
  108. ^ Magnusdottir S, Fillmore P, den Ouden DB, Hjaltason H, Rorden C, Kjartansson O, Bonilha L, Fridriksson J (October 2013). «Damage to left anterior temporal cortex predicts impairment of complex syntactic processing: a lesion-symptom mapping study». Human Brain Mapping. 34 (10): 2715–23. doi:10.1002/hbm.22096. PMC 6869931. PMID 22522937.
  109. ^ Bornkessel-Schlesewsky I, Schlesewsky M, Small SL, Rauschecker JP (March 2015). «Neurobiological roots of language in primate audition: common computational properties». Trends in Cognitive Sciences. 19 (3): 142–50. doi:10.1016/j.tics.2014.12.008. PMC 4348204. PMID 25600585.
  110. ^ Hickok G, Okada K, Barr W, Pa J, Rogalsky C, Donnelly K, Barde L, Grant A (December 2008). «Bilateral capacity for speech sound processing in auditory comprehension: evidence from Wada procedures». Brain and Language. 107 (3): 179–84. doi:10.1016/j.bandl.2008.09.006. PMC 2644214. PMID 18976806.
  111. ^ Zaidel E (September 1976). «Auditory Vocabulary of the Right Hemisphere Following Brain Bisection or Hemidecortication». Cortex. 12 (3): 191–211. doi:10.1016/s0010-9452(76)80001-9. ISSN 0010-9452. PMID 1000988. S2CID 4479925.
  112. ^ Poeppel D (October 2001). «Pure word deafness and the bilateral processing of the speech code». Cognitive Science. 25 (5): 679–693. doi:10.1016/s0364-0213(01)00050-7.
  113. ^ Ulrich G (May 1978). «Interhemispheric functional relationships in auditory agnosia. An analysis of the preconditions and a conceptual model». Brain and Language. 5 (3): 286–300. doi:10.1016/0093-934x(78)90027-5. PMID 656899. S2CID 33841186.
  114. ^ Stewart L, Walsh V, Frith U, Rothwell JC (March 2001). «TMS produces two dissociable types of speech disruption» (PDF). NeuroImage. 13 (3): 472–8. doi:10.1006/nimg.2000.0701. PMID 11170812. S2CID 10392466.
  115. ^ Acheson DJ, Hamidi M, Binder JR, Postle BR (June 2011). «A common neural substrate for language production and verbal working memory». Journal of Cognitive Neuroscience. 23 (6): 1358–67. doi:10.1162/jocn.2010.21519. PMC 3053417. PMID 20617889.
  116. ^ Desmurget M, Reilly KT, Richard N, Szathmari A, Mottolese C, Sirigu A (May 2009). «Movement intention after parietal cortex stimulation in humans». Science. 324 (5928): 811–3. Bibcode:2009Sci…324..811D. doi:10.1126/science.1169896. PMID 19423830. S2CID 6555881.
  117. ^ Edwards E, Nagarajan SS, Dalal SS, Canolty RT, Kirsch HE, Barbaro NM, Knight RT (March 2010). «Spatiotemporal imaging of cortical activation during verb generation and picture naming». NeuroImage. 50 (1): 291–301. doi:10.1016/j.neuroimage.2009.12.035. PMC 2957470. PMID 20026224.
  118. ^ Boatman D, Gordon B, Hart J, Selnes O, Miglioretti D, Lenz F (August 2000). «Transcortical sensory aphasia: revisited and revised». Brain. 123 (8): 1634–42. doi:10.1093/brain/123.8.1634. PMID 10908193.
  119. ^ a b c Turkeltaub PE, Coslett HB (July 2010). «Localization of sublexical speech perception components». Brain and Language. 114 (1): 1–15. doi:10.1016/j.bandl.2010.03.008. PMC 2914564. PMID 20413149.
  120. ^ Chang EF, Rieger JW, Johnson K, Berger MS, Barbaro NM, Knight RT (November 2010). «Categorical speech representation in human superior temporal gyrus». Nature Neuroscience. 13 (11): 1428–32. doi:10.1038/nn.2641. PMC 2967728. PMID 20890293.
  121. ^ Buchsbaum BR, Hickok G, Humphries C (September 2001). «Role of left posterior superior temporal gyrus in phonological processing for speech perception and production». Cognitive Science. 25 (5): 663–678. doi:10.1207/s15516709cog2505_2. ISSN 0364-0213.
  122. ^ Wise RJ, Scott SK, Blank SC, Mummery CJ, Murphy K, Warburton EA (January 2001). «Separate neural subsystems within ‘Wernicke’s area’». Brain. 124 (Pt 1): 83–95. doi:10.1093/brain/124.1.83. PMID 11133789.
  123. ^ Hickok G, Buchsbaum B, Humphries C, Muftuler T (July 2003). «Auditory-motor interaction revealed by fMRI: speech, music, and working memory in area Spt». Journal of Cognitive Neuroscience. 15 (5): 673–82. doi:10.1162/089892903322307393. PMID 12965041.
  124. ^ Warren JE, Wise RJ, Warren JD (December 2005). «Sounds do-able: auditory-motor transformations and the posterior temporal plane». Trends in Neurosciences. 28 (12): 636–43. doi:10.1016/j.tins.2005.09.010. PMID 16216346. S2CID 36678139.
  125. ^ Hickok G, Poeppel D (May 2007). «The cortical organization of speech processing». Nature Reviews. Neuroscience. 8 (5): 393–402. doi:10.1038/nrn2113. PMID 17431404. S2CID 6199399.
  126. ^ Karbe H, Herholz K, Weber-Luxenburger G, Ghaemi M, Heiss WD (June 1998). «Cerebral networks and functional brain asymmetry: evidence from regional metabolic changes during word repetition». Brain and Language. 63 (1): 108–21. doi:10.1006/brln.1997.1937. PMID 9642023. S2CID 31335617.
  127. ^ Giraud AL, Price CJ (August 2001). «The constraints functional neuroimaging places on classical models of auditory word processing». Journal of Cognitive Neuroscience. 13 (6): 754–65. doi:10.1162/08989290152541421. PMID 11564320. S2CID 13916709.
  128. ^ Graves WW, Grabowski TJ, Mehta S, Gupta P (September 2008). «The left posterior superior temporal gyrus participates specifically in accessing lexical phonology». Journal of Cognitive Neuroscience. 20 (9): 1698–710. doi:10.1162/jocn.2008.20113. PMC 2570618. PMID 18345989.
  129. ^ a b Towle VL, Yoon HA, Castelle M, Edgar JC, Biassou NM, Frim DM, Spire JP, Kohrman MH (August 2008). «ECoG gamma activity during a language task: differentiating expressive and receptive speech areas». Brain. 131 (Pt 8): 2013–27. doi:10.1093/brain/awn147. PMC 2724904. PMID 18669510.
  130. ^ Selnes OA, Knopman DS, Niccum N, Rubens AB (June 1985). «The critical role of Wernicke’s area in sentence repetition». Annals of Neurology. 17 (6): 549–57. doi:10.1002/ana.410170604. PMID 4026225. S2CID 12914191.
  131. ^ Axer H, von Keyserlingk AG, Berks G, von Keyserlingk DG (March 2001). «Supra- and infrasylvian conduction aphasia». Brain and Language. 76 (3): 317–31. doi:10.1006/brln.2000.2425. PMID 11247647. S2CID 25406527.
  132. ^ Bartha L, Benke T (April 2003). «Acute conduction aphasia: an analysis of 20 cases». Brain and Language. 85 (1): 93–108. doi:10.1016/s0093-934x(02)00502-3. PMID 12681350. S2CID 18466425.
  133. ^ Baldo JV, Katseff S, Dronkers NF (March 2012). «Brain Regions Underlying Repetition and Auditory-Verbal Short-term Memory Deficits in Aphasia: Evidence from Voxel-based Lesion Symptom Mapping». Aphasiology. 26 (3–4): 338–354. doi:10.1080/02687038.2011.602391. PMC 4070523. PMID 24976669.
  134. ^ Baldo JV, Klostermann EC, Dronkers NF (May 2008). «It’s either a cook or a baker: patients with conduction aphasia get the gist but lose the trace». Brain and Language. 105 (2): 134–40. doi:10.1016/j.bandl.2007.12.007. PMID 18243294. S2CID 997735.
  135. ^ Fridriksson J, Kjartansson O, Morgan PS, Hjaltason H, Magnusdottir S, Bonilha L, Rorden C (August 2010). «Impaired speech repetition and left parietal lobe damage». The Journal of Neuroscience. 30 (33): 11057–61. doi:10.1523/jneurosci.1120-10.2010. PMC 2936270. PMID 20720112.
  136. ^ Buchsbaum BR, Baldo J, Okada K, Berman KF, Dronkers N, D’Esposito M, Hickok G (December 2011). «Conduction aphasia, sensory-motor integration, and phonological short-term memory — an aggregate analysis of lesion and fMRI data». Brain and Language. 119 (3): 119–28. doi:10.1016/j.bandl.2010.12.001. PMC 3090694. PMID 21256582.
  137. ^ Yamada K, Nagakane Y, Mizuno T, Hosomi A, Nakagawa M, Nishimura T (March 2007). «MR tractography depicting damage to the arcuate fasciculus in a patient with conduction aphasia». Neurology. 68 (10): 789. doi:10.1212/01.wnl.0000256348.65744.b2. PMID 17339591.
  138. ^ Breier JI, Hasan KM, Zhang W, Men D, Papanicolaou AC (March 2008). «Language dysfunction after stroke and damage to white matter tracts evaluated using diffusion tensor imaging». AJNR. American Journal of Neuroradiology. 29 (3): 483–7. doi:10.3174/ajnr.A0846. PMC 3073452. PMID 18039757.
  139. ^ Zhang Y, Wang C, Zhao X, Chen H, Han Z, Wang Y (September 2010). «Diffusion tensor imaging depicting damage to the arcuate fasciculus in patients with conduction aphasia: a study of the Wernicke-Geschwind model». Neurological Research. 32 (7): 775–8. doi:10.1179/016164109x12478302362653. PMID 19825277. S2CID 22960870.
  140. ^ Jones OP, Prejawa S, Hope TM, Oberhuber M, Seghier ML, Leff AP, Green DW, Price CJ (2014). «Sensory-to-motor integration during auditory repetition: a combined fMRI and lesion study». Frontiers in Human Neuroscience. 8: 24. doi:10.3389/fnhum.2014.00024. PMC 3908611. PMID 24550807.
  141. ^ Quigg M, Fountain NB (March 1999). «Conduction aphasia elicited by stimulation of the left posterior superior temporal gyrus». Journal of Neurology, Neurosurgery, and Psychiatry. 66 (3): 393–6. doi:10.1136/jnnp.66.3.393. PMC 1736266. PMID 10084542.
  142. ^ Quigg M, Geldmacher DS, Elias WJ (May 2006). «Conduction aphasia as a function of the dominant posterior perisylvian cortex. Report of two cases». Journal of Neurosurgery. 104 (5): 845–8. doi:10.3171/jns.2006.104.5.845. PMID 16703895.
  143. ^ Service E, Kohonen V (April 1995). «Is the relation between phonological memory and foreign language learning accounted for by vocabulary acquisition?». Applied Psycholinguistics. 16 (2): 155–172. doi:10.1017/S0142716400007062. S2CID 143974128.
  144. ^ Service E (July 1992). «Phonology, working memory, and foreign-language learning». The Quarterly Journal of Experimental Psychology. A, Human Experimental Psychology. 45 (1): 21–50. doi:10.1080/14640749208401314. PMID 1636010. S2CID 43268252.
  145. ^ Matsumoto R, Nair DR, LaPresto E, Najm I, Bingaman W, Shibasaki H, Lüders HO (October 2004). «Functional connectivity in the human language system: a cortico-cortical evoked potential study». Brain. 127 (Pt 10): 2316–30. doi:10.1093/brain/awh246. PMID 15269116.
  146. ^ Kimura D, Watson N (November 1989). «The relation between oral movement control and speech». Brain and Language. 37 (4): 565–90. doi:10.1016/0093-934x(89)90112-0. PMID 2479446. S2CID 39913744.
  147. ^ Tourville JA, Reilly KJ, Guenther FH (February 2008). «Neural mechanisms underlying auditory feedback control of speech». NeuroImage. 39 (3): 1429–43. doi:10.1016/j.neuroimage.2007.09.054. PMC 3658624. PMID 18035557.
  148. ^ Chang EF, Rieger JW, Johnson K, Berger MS, Barbaro NM, Knight RT (November 2010). «Categorical speech representation in human superior temporal gyrus». Nature Neuroscience. 13 (11): 1428–32. doi:10.1038/nn.2641. PMC 2967728. PMID 20890293.
  149. ^ Nath AR, Beauchamp MS (January 2012). «A neural basis for interindividual differences in the McGurk effect, a multisensory speech illusion». NeuroImage. 59 (1): 781–7. doi:10.1016/j.neuroimage.2011.07.024. PMC 3196040. PMID 21787869.
  150. ^ Beauchamp MS, Nath AR, Pasalar S (February 2010). «fMRI-Guided transcranial magnetic stimulation reveals that the superior temporal sulcus is a cortical locus of the McGurk effect». The Journal of Neuroscience. 30 (7): 2414–7. doi:10.1523/JNEUROSCI.4865-09.2010. PMC 2844713. PMID 20164324.
  151. ^ McGettigan C, Faulkner A, Altarelli I, Obleser J, Baverstock H, Scott SK (April 2012). «Speech comprehension aided by multiple modalities: behavioural and neural interactions». Neuropsychologia. 50 (5): 762–76. doi:10.1016/j.neuropsychologia.2012.01.010. PMC 4050300. PMID 22266262.
  152. ^ Stevenson RA, James TW (February 2009). «Audiovisual integration in human superior temporal sulcus: Inverse effectiveness and the neural processing of speech and object recognition». NeuroImage. 44 (3): 1210–23. doi:10.1016/j.neuroimage.2008.09.034. PMID 18973818. S2CID 8342349.
  153. ^ Bernstein LE, Jiang J, Pantazis D, Lu ZL, Joshi A (October 2011). «Visual phonetic processing localized using speech and nonspeech face gestures in video and point-light displays». Human Brain Mapping. 32 (10): 1660–76. doi:10.1002/hbm.21139. PMC 3120928. PMID 20853377.
  154. ^ Campbell R (March 2008). «The processing of audio-visual speech: empirical and neural bases». Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences. 363 (1493): 1001–10. doi:10.1098/rstb.2007.2155. PMC 2606792. PMID 17827105.
  155. ^ Schwartz MF, Faseyitan O, Kim J, Coslett HB (December 2012). «The dorsal stream contribution to phonological retrieval in object naming». Brain. 135 (Pt 12): 3799–814. doi:10.1093/brain/aws300. PMC 3525060. PMID 23171662.
  156. ^ Schwartz MF, Kimberg DY, Walker GM, Faseyitan O, Brecher A, Dell GS, Coslett HB (December 2009). «Anterior temporal involvement in semantic word retrieval: voxel-based lesion-symptom mapping evidence from aphasia». Brain. 132 (Pt 12): 3411–27. doi:10.1093/brain/awp284. PMC 2792374. PMID 19942676.
  157. ^ Ojemann GA (June 1983). «Brain organization for language from the perspective of electrical stimulation mapping». Behavioral and Brain Sciences. 6 (2): 189–206. doi:10.1017/S0140525X00015491. ISSN 1469-1825. S2CID 143189089.
  158. ^ Cornelissen K, Laine M, Renvall K, Saarinen T, Martin N, Salmelin R (June 2004). «Learning new names for new objects: cortical effects as measured by magnetoencephalography». Brain and Language. 89 (3): 617–22. doi:10.1016/j.bandl.2003.12.007. PMID 15120553. S2CID 32224334.
  159. ^ Hartwigsen G, Baumgaertner A, Price CJ, Koehnke M, Ulmer S, Siebner HR (September 2010). «Phonological decisions require both the left and right supramarginal gyri». Proceedings of the National Academy of Sciences of the United States of America. 107 (38): 16494–9. Bibcode:2010PNAS..10716494H. doi:10.1073/pnas.1008121107. PMC 2944751. PMID 20807747.
  160. ^ Cornelissen K, Laine M, Tarkiainen A, Järvensivu T, Martin N, Salmelin R (April 2003). «Adult brain plasticity elicited by anomia treatment». Journal of Cognitive Neuroscience. 15 (3): 444–61. doi:10.1162/089892903321593153. PMID 12729495. S2CID 1597939.
  161. ^ Mechelli A, Crinion JT, Noppeney U, O’Doherty J, Ashburner J, Frackowiak RS, Price CJ (October 2004). «Neurolinguistics: structural plasticity in the bilingual brain». Nature. 431 (7010): 757. Bibcode:2004Natur.431..757M. doi:10.1038/431757a. hdl:11858/00-001M-0000-0013-D79B-1. PMID 15483594. S2CID 4338340.
  162. ^ Green DW, Crinion J, Price CJ (July 2007). «Exploring cross-linguistic vocabulary effects on brain structures using voxel-based morphometry». Bilingualism. 10 (2): 189–199. doi:10.1017/S1366728907002933. PMC 2312335. PMID 18418473.
  163. ^ Willness C (2016-01-08). «The Oxford handbook of organizational climate and culture By Benjamin Schneider & Karen M. Barbera (Eds.) New York, NY: Oxford University Press, 2014. ISBN 978-0-19-986071-5». Book Reviews. British Journal of Psychology. 107 (1): 201–202. doi:10.1111/bjop.12170.
  164. ^ Lee H, Devlin JT, Shakeshaft C, Stewart LH, Brennan A, Glensman J, Pitcher K, Crinion J, Mechelli A, Frackowiak RS, Green DW, Price CJ (January 2007). «Anatomical traces of vocabulary acquisition in the adolescent brain». The Journal of Neuroscience. 27 (5): 1184–9. doi:10.1523/JNEUROSCI.4442-06.2007. PMC 6673201. PMID 17267574.
  165. ^ Richardson FM, Thomas MS, Filippi R, Harth H, Price CJ (May 2010). «Contrasting effects of vocabulary knowledge on temporal and parietal brain structure across lifespan». Journal of Cognitive Neuroscience. 22 (5): 943–54. doi:10.1162/jocn.2009.21238. PMC 2860571. PMID 19366285.
  166. ^ Jobard G, Crivello F, Tzourio-Mazoyer N (October 2003). «Evaluation of the dual route theory of reading: a metanalysis of 35 neuroimaging studies». NeuroImage. 20 (2): 693–712. doi:10.1016/s1053-8119(03)00343-4. PMID 14568445. S2CID 739665.
  167. ^ Bolger DJ, Perfetti CA, Schneider W (May 2005). «Cross-cultural effect on the brain revisited: universal structures plus writing system variation». Human Brain Mapping (in French). 25 (1): 92–104. doi:10.1002/hbm.20124. PMC 6871743. PMID 15846818.
  168. ^ Brambati SM, Ogar J, Neuhaus J, Miller BL, Gorno-Tempini ML (July 2009). «Reading disorders in primary progressive aphasia: a behavioral and neuroimaging study». Neuropsychologia. 47 (8–9): 1893–900. doi:10.1016/j.neuropsychologia.2009.02.033. PMC 2734967. PMID 19428421.
  169. ^ a b Baddeley A, Lewis V, Vallar G (May 1984). «Exploring the Articulatory Loop». The Quarterly Journal of Experimental Psychology Section A. 36 (2): 233–252. doi:10.1080/14640748408402157. S2CID 144313607.
  170. ^ a b Cowan N (February 2001). «The magical number 4 in short-term memory: a reconsideration of mental storage capacity». The Behavioral and Brain Sciences. 24 (1): 87–114, discussion 114–85. doi:10.1017/S0140525X01003922. PMID 11515286.
  171. ^ Caplan D, Rochon E, Waters GS (August 1992). «Articulatory and phonological determinants of word length effects in span tasks». The Quarterly Journal of Experimental Psychology. A, Human Experimental Psychology. 45 (2): 177–92. doi:10.1080/14640749208401323. PMID 1410554. S2CID 32594562.
  172. ^ Waters GS, Rochon E, Caplan D (February 1992). «The role of high-level speech planning in rehearsal: Evidence from patients with apraxia of speech». Journal of Memory and Language. 31 (1): 54–73. doi:10.1016/0749-596x(92)90005-i.
  173. ^ Cohen L, Bachoud-Levi AC (September 1995). «The role of the output phonological buffer in the control of speech timing: a single case study». Cortex; A Journal Devoted to the Study of the Nervous System and Behavior. 31 (3): 469–86. doi:10.1016/s0010-9452(13)80060-3. PMID 8536476. S2CID 4480375.
  174. ^ Shallice T, Rumiati RI, Zadini A (September 2000). «The selective impairment of the phonological output buffer». Cognitive Neuropsychology. 17 (6): 517–46. doi:10.1080/02643290050110638. PMID 20945193. S2CID 14811413.
  175. ^ Shu H, Xiong H, Han Z, Bi Y, Bai X (2005). «The selective impairment of the phonological output buffer: evidence from a Chinese patient». Behavioural Neurology. 16 (2–3): 179–89. doi:10.1155/2005/647871. PMC 5478832. PMID 16410633.
  176. ^ Oberauer K (2002). «Access to information in working memory: Exploring the focus of attention». Journal of Experimental Psychology: Learning, Memory, and Cognition. 28 (3): 411–421. doi:10.1037/0278-7393.28.3.411. PMID 12018494.
  177. ^ Unsworth N, Engle RW (January 2007). «The nature of individual differences in working memory capacity: active maintenance in primary memory and controlled search from secondary memory». Psychological Review. 114 (1): 104–32. doi:10.1037/0033-295x.114.1.104. PMID 17227183.
  178. ^ Barrouillet P, Camos V (December 2012). «As Time Goes By». Current Directions in Psychological Science. 21 (6): 413–419. doi:10.1177/0963721412459513. S2CID 145540189.
  179. ^ Bornkessel-Schlesewsky I, Schlesewsky M, Small SL, Rauschecker JP (March 2015). «Neurobiological roots of language in primate audition: common computational properties». Trends in Cognitive Sciences. 19 (3): 142–50. doi:10.1016/j.tics.2014.12.008. PMC 4348204. PMID 25600585.
  180. ^ Buchsbaum BR, D’Esposito M (May 2008). «The search for the phonological store: from loop to convolution». Journal of Cognitive Neuroscience. 20 (5): 762–78. doi:10.1162/jocn.2008.20501. PMID 18201133. S2CID 17878480.
  181. ^ Miller LM, Recanzone GH (April 2009). «Populations of auditory cortical neurons can accurately encode acoustic space across stimulus intensity». Proceedings of the National Academy of Sciences of the United States of America. 106 (14): 5931–5. Bibcode:2009PNAS..106.5931M. doi:10.1073/pnas.0901023106. PMC 2667094. PMID 19321750.
  182. ^ Tian B, Reser D, Durham A, Kustov A, Rauschecker JP (April 2001). «Functional specialization in rhesus monkey auditory cortex». Science. 292 (5515): 290–3. Bibcode:2001Sci…292..290T. doi:10.1126/science.1058911. PMID 11303104. S2CID 32846215.
  183. ^ Alain C, Arnott SR, Hevenor S, Graham S, Grady CL (October 2001). ««What» and «where» in the human auditory system». Proceedings of the National Academy of Sciences of the United States of America. 98 (21): 12301–6. Bibcode:2001PNAS…9812301A. doi:10.1073/pnas.211209098. PMC 59809. PMID 11572938.
  184. ^ De Santis L, Clarke S, Murray MM (January 2007). «Automatic and intrinsic auditory «what» and «where» processing in humans revealed by electrical neuroimaging». Cerebral Cortex. 17 (1): 9–17. doi:10.1093/cercor/bhj119. PMID 16421326.
  185. ^ Barrett DJ, Hall DA (August 2006). «Response preferences for «what» and «where» in human non-primary auditory cortex». NeuroImage. 32 (2): 968–77. doi:10.1016/j.neuroimage.2006.03.050. PMID 16733092. S2CID 19988467.
  186. ^ Linden JF, Grunewald A, Andersen RA (July 1999). «Responses to auditory stimuli in macaque lateral intraparietal area. II. Behavioral modulation». Journal of Neurophysiology. 82 (1): 343–58. doi:10.1152/jn.1999.82.1.343. PMID 10400963. S2CID 5317446.
  187. ^ Mazzoni P, Bracewell RM, Barash S, Andersen RA (March 1996). «Spatially tuned auditory responses in area LIP of macaques performing delayed memory saccades to acoustic targets». Journal of Neurophysiology. 75 (3): 1233–41. doi:10.1152/jn.1996.75.3.1233. PMID 8867131.
  188. ^ Lachaux JP, Jerbi K, Bertrand O, Minotti L, Hoffmann D, Schoendorff B, Kahane P (October 2007). «A blueprint for real-time functional mapping via human intracranial recordings». PLOS ONE. 2 (10): e1094. Bibcode:2007PLoSO…2.1094L. doi:10.1371/journal.pone.0001094. PMC 2040217. PMID 17971857.
  189. ^ Jardri R, Houfflin-Debarge V, Delion P, Pruvo JP, Thomas P, Pins D (April 2012). «Assessing fetal response to maternal speech using a noninvasive functional brain imaging technique». International Journal of Developmental Neuroscience. 30 (2): 159–61. doi:10.1016/j.ijdevneu.2011.11.002. PMID 22123457. S2CID 2603226.
  190. ^ Poliva O (2017-09-20). «From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans». F1000Research. 4: 67. doi:10.12688/f1000research.6175.3. PMC 5600004. PMID 28928931.
  191. ^ Poliva O (2016-06-30). «From Mimicry to Language: A Neuroanatomically Based Evolutionary Model of the Emergence of Vocal Language». Frontiers in Neuroscience. 10: 307. doi:10.3389/fnins.2016.00307. PMC 4928493. PMID 27445676.
  192. ^ a b c d e Suri, Sana. «What sign language teaches us about the brain». The Conversation. Retrieved 2019-10-07.
  193. ^ a b c Scientific American. (2002). Sign language in the brain. [Brochure]. Retrieved from http://lcn.salk.edu/Brochure/SciAM%20ASL.pdf
  194. ^ a b c d e f g h i j k l Norton ES, Kovelman I, Petitto LA (March 2007). «Are There Separate Neural Systems for Spelling? New Insights into the Role of Rules and Memory in Spelling from Functional Magnetic Resonance Imaging». Mind, Brain and Education. 1 (1): 48–59. doi:10.1111/j.1751-228X.2007.00005.x. PMC 2790202. PMID 20011680.
  195. ^ a b c d e Treiman R, Kessler B (2007). Writing Systems and Spelling Development. The Science of Reading: A Handbook. Blackwell Publishing Ltd. pp. 120–134. doi:10.1002/9780470757642.ch7. ISBN 978-0-470-75764-2.

Introduction

Paul Broca (Broca, 1861) and Carl Wernicke (Wernicke, 1874) were among the most noted scientists to identify critical brain regions responsible for the production and comprehension of speech. Since their reports of patients with focal brain damage it has become evident that language processing involves a widely distributed network of distinct cortical areas (Belin et al., 2002; Binder et al., 2000; Démonet et al., 1994; Dronkers et al., 2004; Dronkers et al., 2007; Fecteau et al., 2004; Giraud and Price, 2001; Indefrey and Cutler, 2005; Mummery et al., 1999; Petersen et al., 1988; Price et al., 1992; Price et al., 1996; Scott and Wise, 2004; Vouloumanos et al., 2001; Wise et al., 1991; Wise et al., 2001; Wong et al., 2002; Zatorre et al., 1992) engaged in a complex pattern of activation during linguistic processing (Friederici et al., 1993; Kutas and Hillyard, 1980; Marinković et al., 2003; Marinković, 2004; Neville et al., 1991; Osterhout and Holcomb, 1992; Pulvermuller et al., 2003; Pulvermuller et al., 2006). Several theoretical models of language processing have been proposed to explain the spatiotemporal dynamics of cortical activity observed in empirical studies of language processing (Binder et al., 1994, 1997, 2000; Hickok and Poeppel, 2000, 2004, 2007; Pulvermuller, 2005).

Binder et al. (2000) propose a hierarchical model of language processing. Using functional magnetic resonance imaging (fMRI), Binder and colleagues generated a map of functional subdivisions within the human temporal cortex by having subjects listen to unstructured noise, frequency-modulated (FM) tones, reversed speech, pseudowords, and words. They demonstrated that cortical regions surrounding Heschl’s gyrus bilaterally – in particular, the planum temporale and dorsolateral superior temporal gyrus (STG) – were more strongly activated by FM tones than by noise, suggesting that these regions are involved in processing temporally structured auditory stimuli. Speech stimuli, on the other hand, showed greater bilateral activation of the cortical regions surrounding the superior temporal sulcus (STS). Their results suggest a hierarchical processing stream which projects from the dorsal temporal cortex ventrally to the STS, the middle temporal gyrus (MTG), the inferior temporal gyrus (ITG), and then posteriorly to the angular gyrus and anteriorly to the temporal pole. Binder and colleagues provide a spatial map of language-related activity, but the neuroimaging method used does not provide temporal information about the onset, duration, and offset of activity in these cortical regions.

In support of a functional subdivision of human lateral temporal cortex, Hickok and Poeppel (2007) have suggested that language is represented by two processing streams: (1) a bilaterally organized ventral stream, which is involved in mapping sound onto meaning and includes structures in the superior and middle portions of the temporal lobe; and (2) a left-dominant dorsal stream, which translates acoustic speech signals into motor representations of speech and includes the posterior frontal lobe and the dorsal-most aspect of the temporal lobe as well as the parietal operculum. Focusing on the ventral stream, Hickok and Poeppel propose a model which suggests that cortical speech processing first involves the spectrotemporal analysis of the acoustic signal by auditory cortices in the dorsal STG, while phonological-level processing involves the middle to posterior portions of the STS. Subsequently, the system diverges in parallel into the ventral and dorsal streams. The ventral stream projects toward the posterior middle and inferior portions of the temporal lobes, a region believed to link phonological and semantic information. These authors argue that the more anterior regions of the middle and inferior portions of the MTG are involved in a combinatorial network of speech processing. They further argue that parallel pathways are involved in mapping acoustic input into lexical phonological representations. They propose a multi-resolution model in which speech is processed concurrently on two different time scales (a slow and a fast rate), and information is then extracted and combined for lexical access. One pathway, which is right-lateralized, samples the acoustic input at a slow rate (theta range) and resolves syllable-level information. The other pathway samples at a fast rate (gamma range) and resolves segment-level information. According to their formulation, the fast pathway may be bilaterally organized, although this idea does not fit easily with the extant aphasia literature documenting a strong left-hemisphere bias for language. Under normal conditions, these two pathways interact between hemispheres as well as within hemispheres, and each appears to be capable of activating lexical phonological networks.

A different approach was taken by Pulvermuller (1999, 2005), who proposes that the lexicon is implemented by an associative network of activity in which distinct cell assemblies represent different words and word classes. According to his theory, content words (nouns, adjectives, and verbs) are represented by networks of neurons located in both hemispheres, whereas function words (pronouns, auxiliary verbs, conjunctions, and articles), which serve a grammatical purpose, are housed primarily in the left hemisphere. All word types include a perisylvian cell assembly. Within the content word class, Pulvermuller describes different networks of cell assemblies representing “action words” and “perception words”. According to Pulvermuller, action words (words which refer to the movement of one’s own body) are represented by a spatially extended reverberating circuit which includes perisylvian regions, premotor cortex, and the appropriate region of the motor cortex. In his theory, the word “blow” is represented by a distributed network of cell assemblies residing in perisylvian regions, premotor cortex, and the mouth portion of the motor homunculus, whereas the word “throw” is represented by perisylvian regions, the premotor cortex, and the hand portion of the motor homunculus. In contrast to these action words, perception words such as “tree” and “ocean” are represented by a perisylvian cell assembly linked to neuronal groups in the visual cortices of the occipital and temporal lobes. Pulvermuller and colleagues have provided evidence in support of this somatotopic cell assembly model of language using a variety of neuroimaging techniques, including fMRI, electroencephalography (EEG), magnetoencephalography (MEG), and transcranial magnetic stimulation (TMS) (Hauk et al., 2004a; Hauk and Pulvermuller, 2004b; Pulvermuller et al., 2005a). Pulvermuller also proposes that cell assembly activation results in a fast, coherent reverberation of neuronal activity occurring in the low gamma range. In support of this “reverberating circuit” hypothesis, several EEG and MEG studies have shown stronger responses in the 25–35 Hz range to words as opposed to pseudowords (Lutzenberger et al., 1994; Pulvermuller et al., 1994b, 1995b, 1996b) and in the 60–70 Hz range to words as opposed to nonwords (Eulitz et al., 1996).

It is difficult to fully evaluate the proposed models using noninvasive neuroimaging techniques alone. Multiple studies using fMRI, positron emission tomography (PET), and patient populations with brain lesions have identified key brain areas involved in language processing. However, these techniques lack the temporal resolution needed to identify the precise order of activation of distinct cortical regions required to test alternative models of linguistic processing. Scalp-recorded EEG and MEG can track the fast time course of language processing but cannot unambiguously determine the spatial location of activated cortical areas (Friederici et al., 1993; Kutas and Hillyard, 1980). One neuroimaging method with excellent combined spatial and temporal resolution is electrocorticography (ECoG), recorded directly from the human cortex using subdural electrodes. The ECoG technique has several advantages over EEG and MEG. The subdural ECoG signal is an order of magnitude stronger in amplitude than scalp-recorded EEG and is not affected by the ocular and muscle artifacts which contaminate scalp EEG. Furthermore, the source of the signal may be more precisely estimated. Most importantly, the ECoG signal provides access to high-frequency electrical brain activity (60–200 Hz) not readily seen in the scalp EEG. The high gamma (HG) band (80–200 Hz) has been shown to be a strong index of sensory-, motor-, and task-related cortical activation across multiple tasks including language processing (Crone et al., 1998a, 1998b; Crone et al., 2001a, 2001b; Edwards et al., 2005). HG is largely invisible to scalp EEG due to amplitude attenuation and spatial low-pass filtering (Nunez and Srinivasan, 2006). HG amplitude can be as high as 5–10 μV on the cortex and is likely at least an order of magnitude smaller on the scalp. This is due to the drop in field strength with distance from the cortical surface to the scalp, combined with the fact that HG dipole generators of the ECoG can be 180 degrees out of phase within ∼3 mm on the cortical surface. Thus, positive and negative voltages can cancel, resulting in no signal at the scalp. In this study, we used the high spatial and temporal resolution of ECoG HG activity to expand upon previous findings and constrain competing theories of language by examining the spatiotemporal dynamics of word processing.

Materials and Methods

Participants

The four patients (all female, age range 35–45 years) participating in this study were candidates for surgical treatment for medically refractory epilepsy. Each had undergone a craniotomy for chronic implantation of a subdural electrode array and depth electrodes. The placement of the electrodes was determined on clinical grounds and varied for each subject, but included coverage of an 8 cm × 8 cm area centered over the left frontotemporal region for each of the four subjects described here. Implantation was followed by approximately 1 week of continuous monitoring of the ECoG in order to more precisely localize (1) the seizure focus for later resection, and (2) critical language and motor areas to be avoided during resective surgery. Consenting patients participated in the research study during the week of ECoG monitoring. In addition to the language task discussed in this paper, several other sensory, motor, and cognitive tasks were performed by the subjects while the ongoing ECoG was continuously recorded. The study protocol, approved by the UC San Francisco and UC Berkeley Committees on Human Research, did not interfere with the ECoG recording made for clinical purposes, and presented minimal risk to the participating subjects.

Subject A was a 37-year-old right-handed woman with medically intractable complex partial seizures. MRI was normal and PET scan showed left temporal hypometabolism. She had a left anterior temporal lobectomy including the left amygdala and anterior hippocampus. Pathology showed left mesial temporal sclerosis. Subject B was a 45-year-old right-handed woman with intractable complex partial seizures. MRI showed abnormal signal and thinning of the left frontal opercular cortex and insular cortex as well as diminished size of the left hippocampus. She had resection of a portion of the left frontal lobe and left amygdala and hippocampus. Pathology showed cortical dysplasia. Subject C was a 35-year-old right-handed woman with a left temporal abscess in childhood resulting in intractable complex partial seizures. MRI showed a small resection cavity in the anterior inferior left temporal lobe, a small area of gliosis in the left cingulate gyrus, and subtle changes in the left hippocampal body and tail. She had a left anterior temporal lobectomy including amygdala and anterior hippocampus. Pathology showed gliosis and hippocampal sclerosis. Subject D was a 37-year-old right-handed woman with reflex epilepsy: she had reading-induced seizures consisting of word blindness and then a subjective feeling that she was losing awareness of her surroundings. MRI showed left mesial temporal sclerosis. She had a left posterior inferior temporal resection. Pathology was reported as gliosis and focal neuronal loss.

Stimuli and Task Description

As part of an auditory-linguistic target detection task, patients listened to three types of stimuli: mouth- or hand-related action verbs (babble, bark, blow, chew, grin, growl, hiss, howl, kiss, laugh, lick, sigh, sing, smile, spit, suck, clap, fold, hang, knock, mix, pinch, point, pour, scoop, sew, squeeze, stir, swat, type, write, zip; 45.25% occurrence), acoustically matched but unintelligible nonwords (45.25% occurrence), and proper names, which served as target stimuli (Alex, Barbara, Becky, Ben, Brad, Brenda, Chad, Charles, Chris, Cindy, Dan, David, Emily, Erik, George, Jake, James, Janet, Jason, Jen, John, Judy, Julie, Justin, Karen, Laura, Linda, Lisa, Liz, Martha, Megan, Mitch, Ryan, Sheila, Steve, Susan, Tom, Tony, Tracy, Vicky; 9.5% occurrence). Subjects were instructed to respond with a button press using their left index finger each time they heard a proper name and to ignore all other stimuli. All stimuli were presented via two speakers placed on a table over the subject’s bed approximately 1 meter from the subject’s head and were randomly mixed in presentation order with an inter-stimulus interval of 1063 ± 100 ms. All verbs and proper names were recorded by a female native English speaker. The recorded .wav files were opened in MATLAB and adjusted to have the same root-mean-square power (-15.86 dB) and duration (637 ms). Each nonword matched one of the action verbs (i.e., words) in duration, intensity, power spectrum, and temporal modulation but was rendered unintelligible by removing ripple sound components from the spectrogram of individual verbs. Briefly, a spectrogram was generated for each verb and a two-dimensional Fourier transform of the resulting image was performed. This process creates a list of amplitudes and phases for ripple sound components. Ripples corresponding to formants important for human speech discrimination were then removed. The remaining ripples were then summed to recreate a spectrogram. Since the spectrogram does not contain phase information, an iterative process was used to construct a sound waveform via spectrographic inversion (Singh and Theunissen, 2003). This approach permitted us to subtract the acoustically matched nonword response from the verb response, leaving the activity specifically related to word (verb) processing. Number of presentations of each stimulus type for each subject: Subject A, Nverb = 288, Nnonword = 288, Ntarget = 60; Subject B, Nverb = 192, Nnonword = 192, Ntarget = 40; Subject C, Nverb = 192, Nnonword = 192, Ntarget = 40; Subject D, Nverb = 224, Nnonword = 96, Ntarget = 40.
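The RMS equalization step described above is straightforward to reproduce. The sketch below is a minimal Python/NumPy version (the original analysis used MATLAB), assuming 16-bit PCM mono .wav files and interpreting -15.86 dB as decibels relative to full scale; the function name normalize_rms and the file names are illustrative, and duration matching would be a separate step.

```python
import numpy as np
from scipy.io import wavfile

def normalize_rms(path_in, path_out, target_db=-15.86):
    """Rescale a mono 16-bit PCM .wav file so its RMS power matches target_db (dB re full scale)."""
    rate, data = wavfile.read(path_in)
    x = data.astype(np.float64) / np.iinfo(data.dtype).max   # scale samples to roughly [-1, 1]
    rms = np.sqrt(np.mean(x ** 2))
    target_rms = 10.0 ** (target_db / 20.0)                   # dB -> linear amplitude
    y = x * (target_rms / rms)
    wavfile.write(path_out, rate, np.int16(np.clip(y, -1.0, 1.0) * np.iinfo(np.int16).max))

# Example (hypothetical file names): normalize_rms("blow.wav", "blow_norm.wav")
```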

ECoG Recording and Electrode Localization

The electrode grids used to record ECoG for this study were 64-channel 8 × 8 arrays of platinum–iridium electrodes. In these arrays, each electrode is a 4 mm diameter disk with 2.3 mm exposed (thus 2.3 mm effective diameter), with 10 mm center-to-center spacing between adjacent electrodes. The low-pass filter of the recording system used for clinical monitoring does not permit recording of the high frequency content of the ECoG signal. Therefore, the signal for the ECoG grid was split and sent to both the clinical system and a custom recording system. An electrode at the corner of the grid (see Figure 1A) was used as reference potential for all other grid electrodes. The ECoG for patients 1–3 was amplified ×10 000 and analog filtered in the range of 0.01–250 Hz, while the ECoG for patient 4 was amplified ×5000 and analog filtered in the range of 0.01–1000 Hz. Signals were digitized at 2003 Hz with 16-bit resolution. ECoG was recorded in separate blocks approximately 6 minutes in length. The process used to localize electrodes and coregister them with the structural MRI has been described in detail elsewhere (Dalal, 2007). Preoperative structural MR images were acquired on all patients with a 1.5T MRI scanner. Initial coregistrations were obtained using digital photographs taken immediately before and after the grid implantation and preoperative MRI scans using the Brain Extraction Tool (http://www.fmrib.ox.ac.uk/analysis/research/bet/), MRIcro (http://www.sph.sc.edu/comd/rorden/mricro.html), and SPM2 (http://www.fil.ion.ucl.ac.uk/spm/software/spm2). Using the gyri and sulci as landmarks, the photographs for each patient were matched to their structural MRI via a 3D–2D projective transform with manual correction (see Figure 1 for grid locations in all subjects). These coregistrations were used to create the MRI renderings with electrode locations shown in Figures 1 and 3A. We report subject A’s data in detail and list here the MNI coordinates of each electrode for this case (electrode, x, y, z): e1, -52, -14, -42; e2, -53, -6, -36; e3, -54, 2, -30; e4, -49, 8, -22; e5, -45, 15, -14; e6, -52, 22, -10; e7, -50, 29, -2; e8, -47, 36, 6; e9, -58, -20, -37; e10, -60, -12, -31; e11, -59, -5, -24; e12, -55, 1, -16; e13, -52, 8, -7; e14, -55, 16, 0; e15, -54, 23, 8; e16, -47, 29, 16; e17, -64, -26, -30; e18, -67, -18, -24; e19, -64, -12, -16; e20, -60, -5, -8; e21, -58, 2, 0; e22, -58, 9, 8; e23, -56, 16, 16; e24, -49, 22, 24; e25, -66, -32, -22; e26, -69, -25, -15; e27, -66, -18, -7; e28, -62, -12, 1; e29, -61, -5, 9; e30, -60, 2, 17; e31, -57, 9, 25; e32, -51, 15, 33; e33, -68, -40, -14; e34, -69, -32, -7; e35, -67, -26, 1; e36, -65, -19, 9; e37, -63, -12, 17; e38, -60, -5, 26; e39, -57, 2, 33; e40, -52, 7, 42; e41, -67, -47, -6; e42, -67, -39, 1; e43, -66, -33, 9; e44, -65, -26, 17; e45, -63, -19, 25; e46, -61, -13, 34; e47, -57, -6, 42; e48, -50, -1, 49; e49, -65, -53, 2; e50, -66, -46, 10; e51, -64, -39, 17; e52, -64, -32, 25; e53, -63, -26, 33; e54, -59, -20, 41; e55, -54, -13, 49; e56, -48, -8, 57; e57, -62, -59, 11; e58, -65, -52, 18; e59, -65, -45, 26; e60, -65, -38, 34; e61, -60, -32, 41; e62, -55, -26, 49; e63, -48, -21, 55; e64, -43, -15, 62.


Figure 1. A–D show structural MRI renderings with electrode locations for the four subjects studied. Electrodes that exhibited a significant pre- to post-stimulus increase in HG power following verb presentation are shown with green centers. Electrodes that also showed a greater increase in HG power for presentation of verbs than for presentation of acoustically matched nonwords are outlined in red. Verb processing compared to nonword processing activates a distributed network of cortical areas including the post-STG, the mid-STG, and the STS.

Analysis

All analyses were done using custom MATLAB scripts. Prior to any further processing, channels with a low signal-to-noise ratio (SNR) were identified and deleted. Reasons for low SNR included 60 Hz line interference, electromagnetic noise from hospital equipment, and poor contact with the cortical surface. The raw time series, voltage histograms, and power spectra were used to identify noisy channels. Two investigators both had to agree before a noisy channel was dropped. The multi-channel ECoG was digitally re-referenced to a common average and high-pass filtered above 2.3 Hz with a symmetrical (phase-true) finite impulse response (FIR) filter (∼35 dB/octave roll-off) in order to minimize heartbeat artifact. Single channels of this minimally processed ECoG are referred to as the “raw signal” xRAW(t) in the following analyses. The raw ECoG signal and the event markers for the auditory stimuli were used to determine the direction, magnitude, and significance of event-related changes in the analytic amplitudes of different frequency bands of the ECoG signal.
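As a rough illustration of this preprocessing stage, the following Python/NumPy sketch applies a common-average reference and a zero-phase FIR high-pass filter to a channels-by-samples array. The exact filter design (number of taps, window) is not specified in the text, so the values here, and the function name preprocess_ecog, are assumptions for demonstration only.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

def preprocess_ecog(raw, fs=2003.0, hp_cutoff=2.3, numtaps=4097):
    """Common-average re-reference and zero-phase (symmetric FIR) high-pass filter.

    raw: 2-D array (channels x samples) containing only the retained, non-noisy channels;
    assumes a long continuous recording (minutes), as in the blocks described above.
    """
    # Subtract the common average reference (mean across channels at each time point).
    car = raw - raw.mean(axis=0, keepdims=True)
    # Linear-phase FIR high-pass, applied forward and backward (filtfilt) for zero phase shift;
    # the resulting roll-off depends on numtaps and window choice, not necessarily 35 dB/octave.
    taps = firwin(numtaps, hp_cutoff, pass_zero=False, fs=fs)
    return filtfilt(taps, [1.0], car, axis=1)
```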

To isolate a single frequency band in a single channel, the raw ECoG signal was convolved with an analytic Gabor basis function (Gaussian-weighted complex-valued sinusoid) to produce an analytic amplitude and analytic phase for that band at every sample point. This time-domain convolution was performed as a frequency-domain multiplication for computational efficiency. For example, given the sampling rate of 2003 Hz, a 5-minute section of the raw, real-valued, time-domain ECoG signal xRAW(t) has N = 5 × 60 × 2003 = 600 900 sample points. An N-point, discrete-time complex Fourier transform (DTFT) of xRAW(t) generates a complex-valued, frequency-domain signal XRAW(f) with N = 600 900 points. Each (frequency-domain) sample point corresponds to the center frequency (CF) of a sinusoid whose time-domain representation has an integer number of cycles in the 5-minute (N = 600 900 sample point) section considered, from 0 cycles (DC offset) to ±N/2 cycles (Nyquist frequency). Likewise, the analytic Gabor basis function has dual time-domain and frequency-domain representations and is continuous in both domains. Each analytic Gabor basis function is completely defined by two parameters, namely a CF and a fractional bandwidth (FBW). By sampling the analytic Gabor in the frequency domain at the frequencies specified by XRAW(f), we generate an N-point discrete-frequency representation of the Gabor which we can call GCF,FBW(f). Since GCF,FBW(f) is analytic, it has non-zero weights only at non-negative frequencies. Multiplying XRAW(f) and GCF,FBW(f) generates a new frequency-domain signal ZCF,FBW(f). Applying an inverse DTFT to ZCF,FBW(f) completes the filtering process, generating a new, complex-valued time-domain signal zCF,FBW(t) = ACF,FBW(t) × exp[i × φCF,FBW(t)], where zCF,FBW(t) is the Hilbert transform of the band-passed ECoG signal, filtered with the given CF and FBW, ACF,FBW(t) is the analytic amplitude, and φCF,FBW(t) is the analytic phase. The description above does not specify how the CF and FBW parameters were chosen, but as Bruns points out in his excellent paper (Bruns, 2004), the short-time Fourier transform (STFT), the band-pass Hilbert transform (HT), and the wavelet transform (WT) as normally applied are mathematically identical to the process described above; each transform differs only in how it samples the available parameter space of CF and FBW. The full-width half-maximum (FWHM) bandwidth in units of Hertz is given by the CF (in Hz) multiplied by the FBW (a unitless parameter): BW = CF × FBW. For example, with a CF of 10 Hz and a FBW of 0.25, the -6 dB power level is reached at 8.75 and 11.25 Hz, while for a CF of 85 Hz the -6 dB level is reached at 74.375 and 95.625 Hz. The WT uses a constant FBW, while for the STFT, the product BW = CF × FBW remains constant. In the analyses conducted for this paper, a constant FBW of 0.25 was used for a set of nearly logarithmically spaced center frequencies, which corresponds to a nonorthogonal, overcomplete wavelet decomposition. In particular, the 50 CFs used were: 2.5, 3.7, 4.9, 6.2, 7.4, 8.7, 10.0, 11.4, 12.8, 14.2, 15.6, 17.1, 18.7, 20.3, 22.0, 23.8, 25.5, 27.4, 29.4, 31.45, 33.7, 36.0, 38.4, 41.0, 43.7, 46.6, 49.6, 52.9, 56.4, 60.2, 64.2, 68.5, 73.2, 78.2, 83.6, 89.4, 95.7, 102.6, 110.0, 118.0, 126.8, 136.3, 146.6, 157.9, 170.1, 183.5, 198.1, 214.1, 231.5, 250.5 Hz.
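The frequency-domain Gabor filtering described above can be sketched in a few lines of Python/NumPy. The version below assumes the Gaussian window acts on the amplitude spectrum with FWHM = CF × FBW and ignores the overall scale factor, which cancels in the later z-scoring; gabor_analytic is an illustrative name, not the authors' code.

```python
import numpy as np

def gabor_analytic(x, fs, cf, fbw=0.25):
    """Band-pass x around cf (Hz) with an analytic Gaussian (Gabor) kernel applied in the
    frequency domain; returns the complex analytic signal z(t) = A(t) * exp(i * phi(t)),
    its analytic amplitude A(t), and its analytic phase phi(t)."""
    n = len(x)
    freqs = np.fft.fftfreq(n, d=1.0 / fs)        # frequency of each DFT bin, in Hz
    X = np.fft.fft(x)
    # Gaussian amplitude response centred on cf with full width at half maximum = cf * fbw.
    fwhm = cf * fbw
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    G = np.exp(-0.5 * ((freqs - cf) / sigma) ** 2)
    G[freqs < 0] = 0.0                           # analytic: non-negative frequencies only
    z = np.fft.ifft(X * G)                       # complex-valued band-limited analytic signal
    return z, np.abs(z), np.angle(z)
```

With fbw = 0.25 and cf = 10 Hz, the half-amplitude (-6 dB power) points of this window fall at 8.75 and 11.25 Hz, matching the example quoted above.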

To determine the direction and magnitude of stimulus event-related changes in the analytic amplitude of a given frequency band, first the raw ECoG signal xRAW(t) was convolved with a complex-valued Gabor basis function gCF,FBW(t) to generate the real-valued analytic amplitude time series ACF,FBW(t), which has the same number of samples as the raw ECoG signal. Second, epochs from 500 ms before to 1500 ms after the onset of an auditory stimulus were extracted from the real-valued time series ACF,FBW(t). Third, these epochs were grouped according to stimulus type; that is, each individual epoch was assigned one of the labels VERBS, NONWORDS, or TARGET NAMES. Fourth, the mean amplitude as a function of time (mean across epochs for each sample point) was computed for each stimulus type. Fifth, the prestimulus mean (mean over time for the 500 ms interval before stimulus onset) was subtracted from each sample point of the trace in order to baseline-correct the amplitude level. For each stimulus type, call this baseline-corrected time series the real amplitude trace ATRACE(t), where -500 ms < t < 1500 ms around stimulus onset. To determine the significance of these stimulus event-related changes, an ensemble of surrogate mean amplitude values was created. In detail, first the sample points corresponding to the onset of actual stimuli were all shifted forward or backward by the same randomly chosen integer lag, modulo the length of the continuous analytic amplitude time series ACF,FBW(t). This procedure preserves the number of samples between successive epochs, but shifts the surrogate indices away from actual stimulus onsets. Second, the mean amplitude across these surrogate indices was determined and stored; this value is one member of the surrogate ensemble. Third, this procedure was repeated 10 000 times to create a complete ensemble of 10 000 surrogate values. Fourth, a Gaussian distribution was fit to the ensemble. Note that while the raw amplitude values are well fit by a Gamma distribution, the mean amplitude across epochs is well fit by a Gaussian, in accord with the Central Limit Theorem. Fifth, the real amplitude trace ATRACE(t) was divided by the standard deviation of the ensemble to create a normalized or z-scored amplitude trace ZTRACE(t). Since the standard deviation of the ensemble of amplitude means is a measure of the intrinsic variability of the across-epoch mean analytic amplitude of the frequency band under examination, ZTRACE(t) can be used to directly determine the uncorrected two-tailed probability that the deviation seen in the real amplitude trace ATRACE(t) at time t is due to chance (rather than evoked by the stimulus itself). Sixth, the above procedure was applied to all CFs and all electrodes in each subject and subjected to an FDR correction of q = 0.01 in order to determine a corrected significance threshold. That is, the uncorrected p-values from the time-frequency-channel-condition matrix were sorted in ascending order {p1, p2, p3, …, pM}, where M is the total number of separate comparisons for a single subject, and the threshold T = pa was set to the largest pa satisfying pa ≤ aq/M (so that pk > kq/M for all k > a). The corrected event-related time–frequency z-scores are plotted in Figures 3, 4B, and 5–7.
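
The circular-shift surrogate normalization and the FDR thresholding described above can be sketched as follows. This is a simplified NumPy illustration rather than the authors' code; the epoch bookkeeping, the number of surrogates, and the helper names zscore_trace and fdr_threshold are assumptions.

import numpy as np

def zscore_trace(amp, onsets, fs, pre=0.5, post=1.5, n_surr=1000, rng=None):
    """amp: 1-D analytic amplitude series; onsets: stimulus onsets (in samples),
    assumed to lie far enough from the edges for full epochs to be extracted."""
    rng = np.random.default_rng() if rng is None else rng
    onsets = np.asarray(onsets)
    n_pre, n_post = int(pre * fs), int(post * fs)

    # Real trace: mean across epochs, then baseline-correct with the prestimulus mean
    epochs = np.stack([amp[o - n_pre:o + n_post] for o in onsets])
    real = epochs.mean(axis=0)
    real -= real[:n_pre].mean()

    # Surrogate ensemble: shift every onset by the same random lag (circularly)
    surrogate = np.empty(n_surr)
    for i in range(n_surr):
        shifted = (onsets + rng.integers(len(amp))) % len(amp)
        surrogate[i] = amp[shifted].mean()             # one surrogate mean amplitude per lag

    return real / surrogate.std()                      # z-scored amplitude trace

def fdr_threshold(pvals, q=0.01):
    """Benjamini-Hochberg threshold: the largest p_a satisfying p_a <= a*q/M."""
    p = np.sort(np.ravel(pvals))
    below = p <= q * np.arange(1, p.size + 1) / p.size
    return p[below][-1] if below.any() else 0.0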

To compute the mean phase-locking value (PLV) as a function of frequency, inter-electrode distance, and preferred phase difference plotted in Figure 8, first the raw ECoG signal xRAW_A(t) from a given channel A was convolved with a complex-valued Gabor basis function gCF,FBW(t) to generate the complex-valued analytic time series zCF,FBW_A(t), which has the same number of samples as the raw ECoG signal. Second, each sample point in this time series was divided by its modulus to generate the unit-length, complex-valued phase time series φCF,FBW_A(t). Third, this process was repeated for a different channel B to generate φCF,FBW_B(t). Fourth, these two time series were divided in a pointwise fashion to generate a new, unit-length, complex-valued time series φCF,FBW_A_B_DIFF(t), where the angle of each sample point represents the phase difference between φCF,FBW_A(t) and φCF,FBW_B(t). Fifth, the mean of φCF,FBW_A_B_DIFF(t) over all time points was taken. The modulus of this mean is the PLV, while the angle of this mean is the preferred direction (the phase difference between φCF,FBW_A(t) and φCF,FBW_B(t) which occurs most often over time). Sixth, the distance between pairs of channels A and B was determined and the mean PLV of all pairs with this inter-electrode distance was determined for all frequencies between 2 and 32 Hz (Figure 8A). Seventh, a histogram of preferred directions was computed for all channel pairs and frequencies (Figure 8B).
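
A compact sketch of the PLV computation for one channel pair and one frequency band is shown below; phi_a and phi_b stand for the analytic phase series obtained as in the band-pass sketch above, and the function name plv is hypothetical.

import numpy as np

def plv(phi_a, phi_b):
    """phi_a, phi_b: analytic phase time series (radians) of channels A and B
    for one frequency band."""
    diff = np.exp(1j * (phi_a - phi_b))                # unit-length phase-difference vectors
    m = diff.mean()                                    # complex mean over time
    return np.abs(m), np.angle(m)                      # PLV and preferred phase difference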

Figures 1 and 2B require the direct comparison of verbs to nonwords, rather than a comparison of pre- to post-stimulus activity, as above. To compute this, first the same analysis steps as above, up to step three, were completed, generating ensembles of single-trial epochs of band-passed analytic amplitude time series labeled VERBS (with NVERBS single trials), NONWORDS (with NNONWORDS single trials), and TARGET NAMES (with NTARGET NAMES single trials). Second, the mean amplitude as a function of time (mean across epochs for each sample point) was computed for VERBS and NONWORDS and their difference taken; call this trace DREAL(t). Third, new surrogate single-trial ensembles were created by randomly permuting the set {VERBS, NONWORDS} and assigning the first NVERBS single-trial traces to the group SURROGATEVERBS and the remaining NNONWORDS single-trial traces to the group SURROGATENONWORDS. Fourth, the mean amplitude as a function of time (mean across epochs for each sample point) was computed for SURROGATEVERBS and SURROGATENONWORDS and their difference taken; call this trace DSURROGATE(t). Fifth, this process was repeated 2500 times to create a distribution of surrogate values at each time point. Sixth, a Gaussian distribution was fit to the distribution of surrogate values at each time point. Seventh, for each time point t, the value of the actual trace DREAL(t) was normalized by the Gaussian fit of surrogate values to create a normalized trace ZTRACE-DIFFERENCE(t), from which the uncorrected probability that the value seen at each sample point was due to chance could be estimated by referencing the standard normal cumulative distribution function. Eighth, the above procedure was applied to all CFs and all electrodes in each subject and subjected to an FDR correction of q = 0.01 in order to determine a corrected significance threshold.
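
The label-permutation test described in this paragraph can be sketched as follows (an illustrative NumPy version; the array layout of the single-trial epochs and the function name permutation_z are assumptions).

import numpy as np

def permutation_z(verbs, nonwords, n_perm=2500, rng=None):
    """verbs, nonwords: single-trial analytic amplitude epochs (trials x time).
    Returns the verbs-minus-nonwords difference trace normalized by a
    label-permutation surrogate distribution at each time point."""
    rng = np.random.default_rng() if rng is None else rng
    pooled = np.vstack([verbs, nonwords])
    n_v = verbs.shape[0]

    d_real = verbs.mean(axis=0) - nonwords.mean(axis=0)

    d_surr = np.empty((n_perm, pooled.shape[1]))
    for i in range(n_perm):
        idx = rng.permutation(pooled.shape[0])         # shuffle condition labels
        d_surr[i] = pooled[idx[:n_v]].mean(axis=0) - pooled[idx[n_v:]].mean(axis=0)

    # Normalize the real difference by the surrogate distribution at each time point
    return (d_real - d_surr.mean(axis=0)) / d_surr.std(axis=0)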

Figure 2. (A) Mean (±SE) percent signal change of HG analytic amplitude for verbs (red) and nonwords (green) for an electrode over the STS in patient A (electrode 49; see Figure 4A for location). Black vertical lines indicate onset and offset of the verb stimulus. (B) Processing of words, as opposed to acoustically matched nonwords, sequentially activates the post-STG, then the mid-STG, followed by the STS. Mean (±SE) onset time of significantly different HG activity for words versus acoustically matched nonwords in post-STG, mid-STG, and STS (*: p < 0.05; **: p < 0.001, FDR corrected). High gamma (HG, 80–200 Hz) is the most effective frequency band for the temporal tracking of cortical activity associated with word processing.

Results

Time–frequency analysis of the ECoG signals during processing of hand- and mouth-related verbs, acoustically matched nonword stimuli, and target names revealed three key observations.

Spatial Results

All subjects showed an increase in HG power following presentation of words relative to acoustically matched nonwords at electrodes located over the posterior superior temporal gyrus (post-STG), middle superior temporal gyrus (mid-STG), and the STS (electrodes with a red circle around a green center in Figure 1; p < 0.01, FDR corrected). Event-related power changes were also observed in the delta (2–4 Hz), theta (4–8 Hz), alpha (8–12 Hz), beta (12–30 Hz), and low gamma (30–80 Hz) bands in all subjects (p < 0.001, FDR corrected). As shown for one subject in Figure 3, the spatial and temporal patterns of power changes in the delta, theta, alpha, beta, and low gamma bands were distinct from the spatiotemporal maps of HG activity. Across all electrodes in all subjects, a greater number of electrodes exhibited significant power changes for low frequencies than for high frequencies following presentation of verbs, with 39.8% (96/241) of channels showing changes in the delta band, 25.7% (62/241) for theta, 19.5% (47/241) for alpha, 17.0% (41/241) for beta, 18.7% (45/241) for low gamma, and 13.7% (33/241) for HG. While a significant negative correlation between frequency and spatial extent exists (r2 = 0.63, p < 0.0001), HG channels exhibit a high SNR. Thus, HG is a strong, spatially specific signal, while lower frequency bands such as theta exhibit changes over a wider spatial area.

Figure 3. Example of the spatiotemporally complex oscillatory dynamics associated with verb processing. Spatial pattern of power changes in different frequency bands at successive times in response to verb presentation in subject A (see Figure 4A for electrode locations on the MRI rendering and Methods for MNI coordinates). Red indicates power increase and blue indicates power decrease. HG activity along the STG and STS has an early, strong onset, and in this subject is accompanied by activation of premotor regions. An initial beta power decrease occurs at and surrounding regions of strong HG activity, but note the late (850–975 ms) beta power increase over motor areas. Theta power shows a transient decrease over premotor/frontal areas (350–725 ms) and a late-onset increase over the inferior parietal lobule (e.g., 600–975 ms). Delta activity is late and spatially diffuse over prefrontal and middle temporal regions. Note that power changes in different frequency bands occupy overlapping but distinct cortical territories, and show distinct temporal patterns of onset, duration, and offset.

Temporal Results

Across subjects, HG power tracked a sequence of word-specific processing starting in the post-STG at 120 ± 13 ms (mean onset time ± standard error), moving to the mid-STG 73 ms later (193 ± 24 ms), before activating the STS at 268 ± 30 ms. Figure 2B shows that the onset time of the HG activity which differentiates words from acoustically matched nonwords in the STS is significantly later than in the mid-STG (p < 0.05, FDR corrected) or post-STG (p < 0.001, FDR corrected), and that mid-STG activity is significantly later than in the post-STG (p < 0.05, FDR corrected). The duration of HG activity associated with word processing was coupled to stimulus onset and offset, while the magnitude of change depended upon stimulus type. For example, Figure 2A shows the percent signal change in mean HG amplitude in response to verbs (red) and nonwords (green) with a duration of 637 ms for one electrode over the STS in one subject (electrode 49 in subject A). Presentation of simple tones of 180 ms duration resulted in a shorter duration of associated HG activity (p < 0.001, data not shown). Considering all electrodes in all subjects, we observed a negative correlation between frequency and the time of onset of significant power changes following presentation of verbs (r2 = 0.69, p < 0.0001), with HG activity occurring ∼600 ms before changes in theta power.

Stimulus- and Task-dependent Spectral Results

The spatiotemporal pattern of these frequency-specific oscillatory responses depends on both stimulus type and task demands. While similar results were observed for all subjects, below we consider the results from one subject in greater detail. Figures 5–7 show the event-related time–frequency responses for each electrode in subject A in response to the presentation of verbs, acoustically matched nonwords, and proper names, which served as targets in this target detection task. Note that while some electrodes show a similar HG response to all auditory stimuli (e.g., electrode 58 over post-STG), several other electrodes show a differential response to linguistic stimuli such as verbs and names versus nonlinguistic stimuli such as the acoustically matched nonwords (e.g., electrodes 49 over the STS and 55 over the premotor area). In contrast, other electrodes exhibit differential responses to targets (names) versus distractors (verbs and nonwords) (e.g., electrodes 8 and 15 over prefrontal cortex). The presentation of proper names (targets in this target detection task) evoked HG activity in electrodes over prefrontal sites in all subjects (p < 0.01, FDR corrected). While verbs, nonwords, and target names all produced distinct changes in the spatiotemporal patterns of spectral power, no significant differences in the ECoG response to the presentation of hand-related verbs alone versus mouth-related verbs alone were observed in any electrode, including those over motor and premotor cortices.

While the response for some frequency bands was similar for all stimulus types even when the HG response was not (e.g., the beta power drop in electrode 35 over mid-STG), other bands showed sensitivity to targets versus distractors or to linguistic category (e.g., theta at electrode 59 over the inferior parietal lobule, or delta at electrode 24 over the frontal lobe). This frequency-specific event-related activity occurred at different times in distinct cortical areas. In particular, note that (1) the power in a particular band can decrease in one local region while simultaneously increasing elsewhere (e.g., the theta power profile at 600 ms in Figure 3), and that (2) different bands can be active in different areas (e.g., delta in frontal and middle temporal areas, theta in the inferior parietal lobule, and beta in the STG and motor areas at 850 ms in Figure 3) or at different times (e.g., early HG vs. late theta activity).

Even a single, local cortical area can show a complex oscillatory response during the processing of words, suggesting that multiple, spatially overlapping, frequency-tagged neuronal assemblies may become active in parallel as they engage in selective communication with other cortical regions. As an example, Figure 4B shows the ECoG time–frequency response for an electrode over a premotor area in subject A in response to word presentation. Note that three key bands show sustained responses: a strong HG (∼110 Hz) power increase, quickly followed by a power drop in the beta (∼16 Hz) band, with a drop in theta power occurring 200 ms later.

Figure 4. (A) Close-up of the structural MRI for subject A showing numbered electrode positions (see also Figure 1A for the same subject). See Methods for MNI coordinates. (B) Event-related time–frequency plot for the ECoG response at electrode 55 (premotor region) following verb presentation. Verb onset (0 ms) and offset (637 ms) are marked by solid black vertical lines. Black horizontal lines mark frequencies of interest (6, 16, 40, and 110 Hz), which are shown in Figure 3. Note the strong HG (∼110 Hz) power increase (red), the initial beta (∼16 Hz) power decrease (blue) followed by a very late beta increase, and the late theta (∼6 Hz) power decrease. The outermost black (red) contour line indicates a significant power increase (decrease) (p < 0.001, FDR corrected).

Additionally, while the HG and beta responses end with stimulus offset, the theta response continues for several hundred milliseconds after stimulus offset and is followed by a late, transient increase in beta power.

In addition to the univariate analyses above, we also examined the frequency-specific phase-locking value between pairs of channels. PLVs between pairs of electrodes, considered as a joint function of frequency (2–32 Hz) and inter-electrode distance (10–100 mm), have local maxima (peaks) in the delta, theta, alpha, and beta bands (e.g., Figure 8A), with the strongest PLV occurring in the theta band for all inter-electrode distances. The preferred phase difference between electrodes clusters around 0 radians (0 degrees, in phase) and π radians (180 degrees, out of phase) for all frequencies between 2 and 32 Hz.

Discussion

This study employed direct cortical ECoG recording to examine event-related power changes in several frequency bands in order to evaluate models of language processing. These ECoG results demonstrate an orderly and automatic flow of word processing in the human temporal lobe. In particular, the HG band identifies the cortical regions involved in word processing with a greater spatial and temporal specificity than any other frequency band tested. Word processing involves sequential activation of the post-STG, mid-STG, and STS, and these results validate previous spatial findings regarding the cortical regions involved in word processing and, in turn, language comprehension. These neuroanatomical results support lesion and neuroimaging studies which have shown word-related activity to occur in the post-STG, mid-STG, and STS (Belin et al., 2002; Binder et al., 2000; Démonet et al., 1994; Dronkers et al., 2004; Dronkers et al., 2007; Fecteau et al., 2004; Giraud and Price, 2001; Indefrey and Cutler, 2005; Mummery et al., 1999; Petersen et al., 1988; Price et al., 1992; Price et al., 1996; Scott and Wise, 2004; Vouloumanos et al., 2001; Wise et al., 2001; Wong et al., 2002; Zatorre et al., 1992). However, these results also reveal the temporal flow of information between these distinct brain regions and support a component of serial processing in language. This study complements and extends Binder and colleagues (2000) by demonstrating that word processing first activates the post-STG, then the mid-STG, and finally the STS.

Hickok and Poeppel (2007) propose a hierarchical model of word processing with parallel analysis of a word for its acoustic-spectral content by auditory regions and for its phonetic content by the STS, and later for its meaning-based content by regions in the posterior middle and inferior portions of the temporal lobe. Importantly, they specify that both the left and the right hemispheres are involved in speech processing (i.e., predominately the ventral stream). In this study, we were unable to thoroughly address their argument for parallel processing, because all of our subjects had electrode grids placed over their left hemisphere for clinical purposes. With regard to information flow, however, we did observe a systematic flow of word processing beginning with acoustic processing in the auditory cortices and ending with meaning-based processing in the STS. Note that Hickok and Poeppel argue that the STS is involved in phonetic processing. Our paradigm did not include phonemes, so we cannot definitively conclude that the STS is solely involved in the processing of words for meaning and not involved in phonetic-level analysis. We can, however, conclude that our data support their proposal of a hierarchically organized ventral stream, which may not necessarily correspond with their functional subdivision of the temporal cortex.

In accord with Pulvermuller’s theory of speech perception, we found HG activity occurring in perisylvian regions with verb stimuli. However, we found no evidence in this dataset to support Pulvermuller’s (1999) hypothesis that hand- and mouth-related verbs activate two different networks: one including perisylvian language areas and the mouth region of the motor cortex, and the other including perisylvian language areas and the hand region of the motor cortex. This should not be taken as definitive evidence against Pulvermuller’s theory, however. For instance, EEG and ECoG electrodes are maximally sensitive to dipole sheets of different radii: while the signal from each 2.3 mm diameter ECoG electrode is largely generated by radially oriented cortex directly underneath it, EEG electrodes will record the largest signal from a properly oriented dipole sheet with a radius of 7–10 cm (Nunez and Srinivasan, 2006). This implies that a highly synchronized neuronal assembly distributed over several different cortical regions may generate a strong scalp-EEG signal but only a weak ECoG signal at a local electrode, while ECoG can detect the activation of a synchronous, spatially localized neuronal assembly which remains invisible to EEG, perhaps explaining the contrast between the results of this study and previous findings (Pulvermuller et al., 2005b). Nonetheless, word processing did activate electrodes over motor or premotor areas in all subjects examined (green electrodes in Figure 1), consistent with previous fMRI findings (Wilson et al., 2004).

It is difficult to model the activity observed at a single electrode in terms of a simple, monochromatic model of cortical “activation” and “inactivation”. A single cortical region can produce a spatiotemporally complex oscillatory response (e.g., Figure 4B), and the existence of several semi-autonomous but interacting rhythms would seem to require distinct but spatially overlapping neuronal cell assemblies operating at those frequencies. Furthermore, complex behavioral tasks such as language require the coordination and integration of information across several different anatomically segregated brain areas. One class of models for how this integration could be accomplished proposes an oscillatory hierarchy operating at several different scales which can control the effective connectivity between separate neuronal assemblies (Lakatos et al., 2005). In particular, the receptivity of neurons to post-synaptic input and the probability of spiking output can be modulated by locally adjusting the amplitude and phase of ongoing rhythms, which reflect the population activity of distinct neuronal populations (Fries, 2005; Jacobs et al., 2007; Schaefer et al., 2006).

Examining the ECoG response of subjects to different stimulus types and task demands provides additional insight into the functional roles of neuronal sub-populations. Figures 5–7 show the event-related time–frequency response for all electrodes in subject A following the presentation of hand- and mouth-related verbs, acoustically matched nonwords, and proper names. Importantly, the verbs and names were intelligible while the nonwords were not. However, verbs and nonwords served as distractors and proper names served as targets in the task. Thus, observed differences in the oscillatory response patterns for the three conditions provide insight into the functional role of different rhythms; that is, were some oscillatory dynamics particular to language use, or to target detection, or do these oscillations arise with cortical activation in general?

Figure 5. Event-related time–frequency plots for all electrodes in subject A in response to presentation of verbs. See Figure 4A and Methods for electrode locations. Vertical lines indicate stimulus onset and offset. Horizontal lines indicate frequencies of interest (theta, beta, low gamma, and HG). The outermost black (red) contour line indicates a significant power increase (decrease) (p < 0.001, FDR corrected). Note that some electrodes show a similar HG response to all auditory stimuli (e.g., 58 over STG), while the HG response of others depends on linguistic category (verbs and names vs. nonwords; e.g., 49 over STS or 55 over premotor areas) or task demands (targets vs. distractors; e.g., 8 and 15 over prefrontal cortex). Other bands also exhibit stimulus specificity: e.g., theta at 59 over the inferior parietal lobule, or delta at 41 over the middle temporal gyrus (cf. Figures 6 and 7).

Figure 6. Event-related time–frequency plots for all electrodes in subject A in response to presentation of acoustically matched (unintelligible) nonwords. See Figure 4A and Methods for electrode locations. Vertical lines indicate stimulus onset and offset. Horizontal lines indicate frequencies of interest (theta, beta, low gamma, and HG). The outermost black (red) contour line indicates a significant power increase (decrease) (p < 0.001, FDR corrected). See also the legend for Figure 5.

For example, consider the role of the theta rhythm in this task. The theta rhythm has been associated with many different functional roles in humans and animals, including navigation, working memory, attention, and executive control (Caplan et al., 2003; Ekstrom et al., 2005; Gevins et al., 1997; Ishii et al., 1999; Kahana et al., 1999; Onton et al., 2005; Sederberg et al., 2003). One notion is that the theta activity observed in this ECoG data set may be involved in maintaining task set and readiness. Alternatively, theta could be involved in linguistic or semantic consolidation, supporting the recently described role of theta phase in speech discrimination (Luo and Poeppel, 2007). If theta power were involved in semantic processing, then a similar response to both distractor verbs and target names would be expected.

Consider the response in electrode 59 in Figures 5–7 (situated over the inferior parietal lobule in subject A). This site has no response to the nonlinguistic nonword distractors. In contrast, this site shows a strong increase in theta power for verb distractors but a strong decrease in theta power for target names. In addition, targets produce a strong, sustained increase in HG power. Interestingly, while target detection requires an ipsilateral motor response, self-paced finger tapping generates only a brief, weak drop in theta power and no HG activity at this electrode (tapping data not shown). This supports the idea that the patterns of theta power change seen during components of this study are related to maintaining and regulating task-specific behavior rather than to semantic processing as such, consistent with the demonstrated role of the theta rhythm in regulating the top-down modulation required for complex behavioral tasks. Note, however, that Luo and Poeppel (2007), using MEG, report that theta phase, not power, was associated with speech discriminability when listening to sentences. Thus, while no theta phase resetting was observed in response to the presentation of single words in this study, it is possible that theta phase and power play different but complementary roles in modulating the activity in a cortical area, with power controlling the amount of activity and phase controlling the timing of neuronal spiking (Bartos et al., 2007; Klimesch et al., 2007). Indirect evidence for this is the observed coupling of low gamma and HG power to both theta phase and theta power in the human hippocampus and neocortex (Bruns and Eckhorn, 2004; Canolty et al., 2006; Mormann et al., 2005). Theta gating of single-unit activity in the human hippocampus (Jacobs et al., 2007) provides direct evidence for oscillatory control of neuronal activity. The fact that we observed strong phase-locking in the theta band, with phase differences clustered around 0 and π radians (optimal phase offsets for communication and isolation, respectively), suggests that the theta rhythm may be an important regulator of inter-regional communication during complex behavioral tasks (Fries, 2005).

Figure 7. Event-related time–frequency plots for all electrodes in subject A in response to presentation of proper names (targets in the target detection task). Note HG activity in electrodes 8 and 15 over prefrontal cortex. See Figure 4A and Methods for electrode locations. Vertical lines indicate stimulus onset and offset. Horizontal lines indicate frequencies of interest (theta, beta, low gamma, and HG). The outermost black (red) contour line indicates a significant power increase (decrease) (p < 0.001, FDR corrected). See also the legend for Figure 5.

Figure 8. (A) Mean PLV as a function of frequency and inter-electrode distance for all pairs of electrodes in subject A. Larger PLVs indicate that pairs of electrodes exhibit a greater degree of phase coherence at that frequency. Note that for all inter-electrode distances the strongest phase coherence occurs in the theta (4–8 Hz) band, with smaller peaks occurring in the delta (2–4 Hz), alpha (8–12 Hz), and beta (12–30 Hz) bands. The outermost contour line indicates a PLV of 0.15; other contours indicate steps of 0.05. (B) Normalized polar histogram of preferred phase differences between electrode pairs for all frequencies and inter-electrode distances in subject A. Note that phase differences are clustered around 0 degrees (in phase) and 180 degrees (out of phase). This has implications for the ease of communication between areas (see Discussion).

Unlike theta, HG activity appears to be a robust, unambiguous indicator of local cortical activity which can be used to infer functional engagement. HG tracked local neuronal activity specifically related to word processing. While this study and others have shown that HG can be used to track functional engagement, the neurophysiological origin of HG activity remains unknown. Simulation studies have shown that stable oscillations in the 100–200 Hz range can be generated by networks of conductance-based neurons, even when each individual neuron fires irregularly and at a much lower rate (Geisler et al., 2005). It is thus possible that HG reflects the oscillatory population activity generated by networks of neurons coupled via chemical synapses. However, in vitro studies suggest that HG may depend on the propagation of spikelets through axo-axonic gap junctions between local networks of pyramidal cells (Whittington and Traub, 2003). Note that in this respect HG differs from low gamma, which depends upon fast, strong, shunting synapses between GABAergic interneurons and is stabilized by dendro-dendritic gap junctions (Bartos et al., 2007). If this model of HG proves to be the case, then HG would be more closely related to the mean spiking density in a cortical area than to the local synaptic action density, as is the case for lower frequency bands (Nunez and Srinivasan, 2006). This interpretation is consistent with the observed correlation between the fMRI BOLD signal and HG in monkeys, cats, and humans (Lachaux et al., 2005; Logothetis et al., 2001; Mukamel et al., 2005; Niessing et al., 2005). In addition, unlike oscillations in lower frequency bands, which tend to have a narrow frequency range, the broad-band HG activity may be more aptly described as “fluctuations” rather than “oscillations”. Single-trial estimates of the instantaneous frequency generated from reassigned time–frequency representations (Gardner and Magnasco, 2006) show large trial-to-trial variations, and often change quickly within a single trial. Therefore, while low gamma is thought to play a role in synchronizing separate cortical areas (Varela et al., 2001), the observed wide-band variability makes it seem unlikely that HG frequencies play a direct role in synchronizing distinct brain regions.

In this study, HG was used to track the spatiotemporal pattern of local cortical activity associated with language comprehension and revealed that listening to words sequentially activates first the post-STG, then the mid-STG, followed by the STS. Although we provide novel data regarding the serial temporal flow of word-related processing across the temporal lobe, based on our data we cannot rule out the possibility that additional processing is also occurring in a parallel fashion. In sum, the spatiotemporal dynamics of the ECoG signal in different frequency bands reveal the relative roles played by both spiking and synaptic action in overlapping neuronal cell assemblies in widely separated brain areas during a complex behavioral task.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The authors thank Dr. Analia Arevalo for providing the word list and highlighting the ingestive versus communicative mouth verb distinction, Dr. Juliana Baldo for helpful feedback and discussions during analysis, Dr. Frederic Theunissen for advice and scripts used to generate the acoustically matched nonwords, and Emily Jacobs for helping with stimulus development. This work was supported by the Rauch family, an IBM Cognitive Computing Award, National Institute of Neurological Disorders and Stroke grant NS21135, National Science Foundation Fellowship 2004016118, and National Institute on Deafness and Other Communication Disorders grants F31DC006762, RO1 DC004855 and RO1 DC006435.

References

Binder, J. R., Frost, J. A., Hammeke, T. A., Bellgowan, P. S., Springer, J. A., Kaufman, J. N., and Possing, E. T. (2000). Human temporal lobe activation by speech and nonspeech sounds. Cereb. Cortex 10, 512-528.

Binder, J. R., Frost, J. A., Hammeke, T. A., Cox, R. W., Rao, S. M., and Prieto, T. (1997). Human brain language areas identified by functional magnetic resonance imaging. J. Neurosci. 17, 353-362.

Binder, J. R., Rao, S. M., Hammeke, T. A., Yetkin, F. Z., Jesmanowicz, A., Bandettini, P. A., Wong, E. C., Estkowski, L. D., Goldstein, M. D., Haughton, V. M., and Hyde, J. S. (1994). Functional magnetic resonance imaging of human auditory cortex. Ann. Neurol. 35, 662-672.

Broca, P. (1861). Remarques sur le siège de la faculté du langage articulé, suivies d’une observation d’aphémie (perte de la parole). Bull. Mém. Soc. Anat. Paris 36, 330-357.

Canolty, R. T., Edwards, E., Dalal, S. S., Soltani, M., Kirsch, H. E., Berger, M. S., Barbaro, N. M., and Knight, R. T. (2006). High gamma power is phase-locked to theta oscillations in human neocortex. Science 313, 1626-1628.

Caplan, J. B., Madsen, J. R., Schulze-Bonhage, A., Aschenbrenner-Scheibe, R., Newman, E. L., and Kahana, M. J. (2003). Human theta oscillations related to sensorimotor integration and spatial learning. J. Neurosci. 23, 4726-4736.

Crone, N. E., Hao, L., Hart, J., Boatman, D., Lesser, R. P., Irizarry, R., and Gordon, B. (2001b). Electrocorticographic gamma activity during word production in spoken and sign language. Neurology 57, 2045-2053.

Crone, N. E., Miglioretti, D. L., Gordon, B., Sieracki, J. M., Wilson, M. T., Uematsu, S., and Lesser, R. P. (1998a). Functional mapping of human sensorimotor cortex with electrocorticographic spectral analysis: I. Alpha and beta event-related desynchronization. Brain 121, 2271-2299.

Crone, N. E., Miglioretti, D. L., Gordon, B., and Lesser, R. P. (1998b). Functional mapping of human sensorimotor cortex with electrocorticographic spectral analysis: II. Event-related synchronization in the gamma band. Brain 121, 2301-2315.

Dalal, S. S., Guggisberg, A. G., Edwards, E., Sekihara, K., Findlay, A. M., Canolty, R. T., Knight, R. T., Barbaro, N. M., Kirsch, H. E., and Nagarajan, S. S. (2007). Spatial Localization of Cortical Time-Frequency Dynamics. Proceedings of the 29th IEEE EMBS Annual International Conference.

Démonet, J. F., Price, C., Wise, R., and Frackowiak, R. S. J. (1994). Differential activation of right and left posterior sylvian regions by semantic and phonological tasks: A positron-emission tomography study in normal human subjects. Neurosci. Let. 182, 25-28.

Dronkers, N. F., Wilkins, D. P., Van Valin, R. D., Redfern, B. B., and Jaeger, J. J. (2004). Lesion analysis of the brain areas involved in language comprehension. Cognition 92, 145-177.

Dronkers, N. F., Plaisant, O., Iba-Zizen, M. T., and Cabanis, E. A. (2007). Paul Broca’s historic cases: high resolution MR imaging of the brains of Leborgne and Lelong. Brain 130, 1432-1441.

Edwards, E., Soltani, M., Deouell, L. Y., Berger, M. S., and Knight, R. T. (2005). High gamma activity in response to deviant auditory stimuli recorded directly from human cortex. J. Neurophysiol. 94, 4269-4280.

Ekstrom, A. D., Caplan, J. B., Ho, E., Shattuck, K., Fried, I., and Kahana, M. J. (2005). Human hippocampal theta activity during virtual navigation, Hippocampus 15, 881-889.

Eulitz, C., Maβ, B., Pantev, C., Friederici, A. D., Feige, B., and Elbert, T. (1996). Oscillatory neuromagnetic activity induced by language and non-language stimuli. Cogn. Brain Res. 4, 121-132.

Friederici, A. D., Pfeifer, E., and Hahne, A. (1993). Event-related brain potentials during natural speech processing: effects of semantic, morphological and syntactic violations. Cogn. Brain Res. 1, 183-192.

Geisler, C., Brunel, N., and Wang, X. J. (2005). Contributions of intrinsic membrane dynamics to fast network oscillations with irregular neuronal discharges. J. Neurophysiol. 94, 4344-4361.

Gevins, A., Smith, M. E., McEvoy, L., and Yu, D. (1997). High-resolution EEG mapping of cortical activation related to working memory: effects of task difficulty, type of processing, and practice, Cereb. Cortex 7, 374-385.

Indefrey, P., and Cutler, A. (2005). Prelexical and lexical processing in listening. In The Cognitive Neurosciences III M. Gazzaniga, ed. (Cambridge, MA, MIT Press) PP. 759-774.

Ishii, R., Shinosaki, K., Ukai, S., Inouye, T., Ishihara, T., Yoshimine, T., Hirabuki, N., Asada, H., Kihara, T., Robinson, S. E., and Takeda, M. (1999). Medial prefrontal cortex generates frontal midline theta rhythm. NeuroReport 10, 675-679.

Kahana, M. J., Sekuler, R., Caplan, J. B., Kirschen, M., and Madsen, J. R. (1999). Human theta oscillations exhibit task dependence during virtual maze navigation, Nature 399, 781-784.

Klimesch, W., Sauseng, P., Hanslmayr, S., Gruber, W., Freunberger, R. 2007. Event-related phase reorganization may explain evoked neural dynamics. Neurosci. Biobehav. Rev. 31, 1003-1016.

Kutas, M., and Hillyard, S. A. (1980). Reading senseless sentences: Brain potentials reflect semantic anomaly. Science 207, 203-205.

Lakatos, P., Shah, A. S., Knuth, K. H., Ulbert, I., and Karmos, G. (2005). An oscillatory hierarchy controlling neuronal excitability and stimulus processing in the auditory cortex. J. Neurophysiol. 94, (3) 1904-1911.

Lachaux, J. P., George, N., Tallon-Baudry, C., Martinerie, J., Hugueville, L., Minotti, L., Kahane, P., and Renault, B. (2005). The many faces of gamma band response to complex visual stimuli. Neuroimage 25, 491-501.

Logothetis, N. K., Pauls, J., Augath, M., Trinath, T., and Oeltermann, A. (2001). Neurophysiological investigation of the basis of the fMRI signal. Nature 412, 150-156.

Lutzenberger, W., Pulvermuller, F., and Birbaumer, N. (1994). Words and pseudowords elicit distinct patterns of 30-Hz activity in humans. Neurosci. Lett. 183, 39-42.

Marinković, K., Dhond, R. P., Dale, A. M., Glessner, M., Carr, V., and Halgren, E. (2003). Spatiotemporal dynamics of modality-specific and supramodal word processing. Neuron 8, 487-497.

Mormann, F., Fell, J., Axmacher, N., Weber, B., Lehnertz, K., Elger, C. E., and Fernandez, G. (2005). Phase/amplitude reset and theta-gamma interaction in the human medial temporal lobe during a continuous word recognition memory task. Hippocampus 7, 890-900.

Mukamel, R., Gelbard, H., Arieli, A., Hasson, U., Fried, I., and Malach, R. (2005). Coupling between neuronal firing, field potentials, and fMRI in human auditory cortex. Science 309, 951-954.

Mummery, C. J., Ashburner, J., Scott, S. K., and Wise, R. J. S. (1999). Functional neuroimaging of speech perception in six normal and two aphasic subjects. J. Acoust. Soc. Am. 106, 449-457.

Neville, H. J., Nicol, J. L., Barss, A., Forster, K. I., and Garrett, M. F. (1991). Syntactically based sentence processing classes: evidence from event-related brain potentials. J. Cog. Neurosci. 3, 151-165.

Niessing, J., Ebisch, B., Schmidt, K. E., Niessing, M., Singer, W., and Galuske, R. A. W. (2005). Hemodynamic signals correlate tightly with synchronized gamma oscillations. Science 309, 948-951.

Nunez, P., and Srinivasan, R. (2006). Electric fields of the brain: the neurophysics of EEG. 2nd edn. (New York, Oxford University Press).

Osterhout, L., and Holcomb, P. J. (1992). Event-related brain potentials elicited by syntactic anomaly. J. Mem. Lang. 31, 785-806.

Petersen, S. E., Fox, P. T., Posner, M. I., Mintun, M., and Raichle, M. E. (1988). Positron emission tomographic studies of the cortical anatomy of single-word processing. Nature 331, 585-589.

Price, C. J., Wise, R., Ramsay, S., Friston, K., Howard, D., Patterson, K., and Frackowiak, R. (1992). Regional response differences within the human auditory cortex when listening to words. Neurosci. Lett. 146, 179-182.

Price, C. J., Wise, R. J. S., Warburton, E. A., Moore, C. J., Howard, D., Patterson, K., Frackowiak, R. S. J., and Friston, K. J. (1996). Hearing and saying: The functional neuro-anatomy of auditory word processing. Brain 119, 919-931.

Pulvermuller, F. (1994b). Syntax und Hirnmechanismen. Perspektiven einer multidisziplinären Sprachwissenschaft. Kognitionswissenschaft 4, 17-31.

Pulvermuller, F. (1995a). Agrammatism: behavioral description and neurobiological explanation. J. Cogn. Neurosci. 7, 165-181.

Pulvermuller, F. (1995b). What neurobiology can buy language theory. Stud. Sec. Lang. Acq. 17, 73-77.

Pulvermuller, F. (1996b). Neurobiologie der Sprache. Gehirntheoretische Überlegungen und empirische Befunde zur Sprachverarbeitung. Psychologia Universalis, Neue Reihe, Bd. 1. (Lengerich, Berlin u. a., Pabst Science Publishers).

Sederberg, P. B., Kahana, M. J., Howard, M. W., Donner, E. J., and Madsen, J. R. (2003). Theta and gamma oscillations during encoding predict subsequent recall, J. Neurosci. 23, 10809-10814.

Vouloumanos, A., Kiehl, K. A., Werker, J. F., and Liddle, P. F. (2001). Detection of sounds in the auditory stream: event-related fMRI evidence for differential activation to speech and nonspeech. J. Cog. Neurosci. 13, 994-1005.

Wernicke, C. (1874). Der aphasische Symptomencomplex: eine psychologische Studie auf anatomischer Basis. (Breslau, Cohn und Weigert).

Wise, R., Challet, F., Hadar, U., Friston, K., Hoffner, E., and Frackowiak, R. (1991). Distribution of cortical neural networks involved in word comprehension and word retrieval. Brain 114, 1803-1817.

Wise, R. J. S., Scott, S. K., Blank, S. C., Mummery, C. J., Murphy, K., and Warburton, E. A. (2001). Separate neural subsystems within ‘Wernicke’s area’. Brain 124, 83-95.

Wong, D., Pisoni, D. B., Learn, J., Gandour, J. T., Miyamoto, R. T., and Hutchins, G. D. (2002). PET imaging of differential cortical activation by monaural speech and nonspeech stimuli. Hearing Res. 166, 9-23.


Summary: The left and right hemispheres of the brain are characterized by different word processing strategies.

Source: HSE

When reading words on a screen, the human brain comprehends words placed on the right side of the screen faster. The total amount of information presented on the screen also affects the speed and accuracy of the brain’s ability to process words. These are the findings of HSE University researchers Elena Gorbunova and Maria Falikman, presented in an article published in the journal Advances in Cognitive Psychology.

The study, which was carried out at the Laboratory for the Cognitive Psychology of Digital Interface Users, allowed the researchers to determine how the speed and accuracy of the human brain’s ability to process words on a screen depend on the words’ placement and quantity. This is due to the fact that cognitive functions are divided between the hemispheres of the brain. In this case, information received from the left field of vision – the left side of a screen – is received by the brain’s right hemisphere, while an image in the right field of vision is received by the left hemisphere.

In a series of experiments, the HSE researchers used a visual search method. First, subjects were shown a target letter that they would subsequently be required to find. Then a word or a random set of letters was shown on the left or right side of a screen, or two stimuli were shown on both sides of the screen simultaneously. The participants’ task was to locate the target letter as quickly as possible and press a specified key on the console. The reaction time and accuracy of their answers made it possible to determine in which visual hemifield information of various types is processed faster.

The results showed that the left and right hemispheres are characterized by different word processing strategies. When a familiar word appeared on the screen, the left hemisphere of the right-handed participants processed it holistically and found the target letter faster, while the right hemisphere was engaged in a slower search, ‘checking’ each letter in sequence. When participants were presented with a meaningless set of letters, the opposite effect was observed: the left hemisphere processed the letter sets letter by letter, while the right processed them quickly and holistically.

Thus, words placed on the right of the screen were processed faster. Moreover, the word processing strategy the brain chooses depends on the total amount of information presented on the screen.

‘When there is a lot of information, that is, when we need to process two words on the left and on the right, our brain begins to save energy and processes the words simultaneously. When there is not enough information, the brain relaxes and processes the information sequentially,’ says study author Elena Gorbunova. ‘Therefore, when placing text on the screen, you need to monitor how much information you are presenting to the user.’

This and other laboratory studies build upon earlier discoveries in the field of usability. When developing interfaces, sites, and applications, experts advise taking factors of cognitive information processing into consideration.

These factors include:

  • Picture superiority effect. A user will notice and remember a picture more quickly and easily than text.
  • Banner blindness. Banner advertising is noticed more often when the content of the banner text matches the theme of the main material.
  • Masking. If a first and second object are presented in succession on a screen too quickly, the brain’s perception of the objects worsens.
  • Change blindness. The user will not notice a change in details if you do not draw their attention to it, especially if the change did not occur in the zone of the user’s focused attention.
  • Spatial cueing. If a hint is presented to a user in anticipation of the introduction of a new object, the user is more likely to pay attention to this object.

Scientists note that research in this area is useful not only for creating convenient interfaces, but also for understanding the nature of dyslexia and other reading disabilities, as well as for developing online courses and game projects.

About this neuroscience research article

Original Research: Closed access
“Visual Search for Letters in the Right Versus Left Visual Hemifields”. Elena S. Gorbunova, Maria V. Falikman.
Advances in Cognitive Psychology. doi:10.5709/acp-0258-5

Abstract

Visual Search for Letters in the Right Versus Left Visual Hemifields

The current study investigated the relationships between attention, word processing, and visual field asymmetries. There is a discussion on whether each brain hemisphere possesses its own attentional resources and on how attention allocation depends on hemispheric lateralization of functions. We used stimuli with lateralized processing in an attentional task presented across the two visual hemifields. Three experiments investigated the visual search for a prespecified letter in displays containing words or nonwords, placed left and right of fixation, with a variable target letter position within the strings. In Experiment 1, two letter strings of the same type (words or nonwords) were presented to both visual hemifields. In Experiment 2, there was only one letter string, presented right or left of fixation. In Experiment 3, two letter strings of different types were presented to both hemifields. Response times and accuracy data were collected. The results of Experiment 1 provide evidence for letter-by-letter search within a word in the left visual field (LVF), within a nonword in the right visual field (RVF), and for position-independent access to letters within a nonword in the LVF and within a word in the RVF. Experiment 3 produced similar results, except for letter-by-letter search within words in the RVF. In Experiment 2, for all types of letter strings in both hemifields, we observed the same letter-by-letter search. These results demonstrate that the presence of stimuli in one or both hemifields and the readiness to process a certain string type might contribute to the search for a letter within a letter string.

“My own brain is to me the most unaccountable of machinery — always buzzing, humming, soaring roaring diving, and then buried in mud. And why? What’s this passion for?” said Virginia Woolf, who was so talented at emulating consciousness and the duplicity of the human mind on the page.

Most writers forget that our brains have anything to do with the words we write — that writer’s block, passion, and creativity are not solely the property of our suspicious unconscious. Arranging words in an artfully syntactical manner is but one aspect of language processing — the way human beings process speech or writing and understand it as language, which is made completely by and inside the brain.

So how do we process language? And how does that neural activity translate into the art of writing?

The History Of Language Processing Research

Pioneering neuroscientists were studying the relationship of language and speech over 160 years ago. In 1861, while Abraham Lincoln was penning his famous inauguration address, French neurologist Paul Broca was busy discovering the parts of the brain behind Lincoln’s speech — the parts that handle language processing, comprehension, and speech production (along with controlling facial neurons).

What we now know as “Broca’s area” is located in the posterior inferior frontal gyrus. It’s where expressive language takes shape. Broca was the first person to associate the left hemisphere with language, which remains true for most of us today. (This can’t be said about every brain — it’s possible to have a language center on the right side, which is where the language loop lies in the brains of about 30% of left-handed people and approximately 10% of right-handers.)

Tucked in the back of Broca’s area is the pars triangularis, which is implicated in the semantics of language. When you stop to think about something someone’s said — a line in a poem, a jargon-heavy sentence — this is the part of your brain doing the heavy work. Because Broca studied patients who had various speech deficiencies, he also gave his name to “Broca’s aphasia,” or “expressive aphasia,” where patients often have right-sided weakness or paralysis of the arm and leg due to lesions to the medial insular cortex.

Another of Broca’s patients was a scientist who, after surgery, was missing Broca’s area. Though the scientist suffered minor language impediments, such as the inability to form complex sentences, his speech eventually recovered — which implied some neuroplasticity in terms of where language processing can take place.

Ten years after Broca’s discoveries, German neurologist Carl Wernicke found that damage to Broca’s area wasn’t the only place in the brain that could cause a language deficit. In the superior posterior temporal lobe, “Wernicke’s area” acts as the Broca’s area counterpart — handling “receptive language,” or language that we hear and process.

The arcuate fasciculus links Broca’s area to Wernicke’s area. If you damage this bundle of nerves you’ll find yourself having some trouble repeating what other people say.

Wernicke was also the first person to create a neurological model of language, mapping out various language processes in the brain — speech-to-comprehension, cognition-to-speech, and writing-to-reading — a model that was updated in 1965 by Dr. Norman Geschwind. Much of modern neurology as it relates to language is modeled on the Wernicke-Geschwind model, although the model is somewhat outdated today — it gives a broad overview but contains some inaccuracies, including the idea that language processing happens in sequential order, rather than in various parts of the brain simultaneously, which is what we know today.

In the 1960s, Geschwind discovered that the inferior parietal lobule has something important to do with language processing. Now, thanks to much improved imaging technology, we know there’s another route through which language travels between Broca’s area and Wernicke’s area in the inferior parietal lobule. This region of the brain is all about language acquisition and abstract use of language. This is where we collect and consider spoken and written words — not just understanding their meanings, but also how they sound and work grammatically. This part of the brain helps us classify things using auditory, visual, and sensory stimuli; its late maturation might be why children usually don’t learn to read and write until they’re somewhere around the age of 5.

The fusiform gyrus, which is found in the temporal and occipital lobes, plays an interesting role in language processing in the brain. This area helps you recognize words and classify things within other categories. Damage to this part of the brain can cause difficulty in recognizing words on the page.

Today, we’re constantly learning new things about how language works. For example, we believe that the right brain performs its fair share of language functions, including the ability to comprehend metaphors as well as patterns of intonation and poetic meters.

Whereas we used to believe that people who speak with signs used a different, more visually dependent model of language processing in the brain, we now believe that language happens similarly in verbal and nonverbal ways.

As it turns out, the brains of deaf people function much the same way as their hearing counterparts: The same parts of the brain are activated while speaking, whether that’s by using signs or not. This research was presented in an issue of NeuroImage, and Dr. Karen Emmorey, a professor of speech language at San Diego State University, has presented research at the American Association for the Advancement of Science in San Diego illustrating that the brain reacts to signs that are pantomimes — drinking, for example — in exactly the same way as if the word “drink” were spoken aloud.

“It suggests the brain is organized for language, not for speech,” Emmorey says.

When reading words on a screen, the human brain comprehends words placed on the right side of the screen faster. The total amount of information presented on the screen also affects the speed and accuracy with which the brain processes words. These are the findings of HSE researchers Elena Gorbunova and Maria Falikman, presented in an article published in the journal Advances in Cognitive Psychology.

The study, carried out at the Laboratory for the Cognitive Psychology of Digital Interface Users, allowed the researchers to determine how the speed and accuracy of word processing on a screen depend on the words’ placement and quantity. Placement matters because cognitive functions are divided between the hemispheres of the brain: information from the left field of vision, the left side of a screen, is received by the brain’s right hemisphere, while an image in the right field of vision is received by the left hemisphere.

In a series of experiments, the HSE researchers used a visual search method. First, subjects were shown a target letter that they would subsequently be required to find. Then a word or a random string of letters was shown on the left or right side of a screen, or two stimuli were shown on both sides of the screen simultaneously. The participants’ task was to locate the target letter as quickly as possible and press a specified key. The reaction time and accuracy of their answers made it possible to determine in which visual hemifield each type of information is processed faster.
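To make the procedure concrete, the sketch below shows roughly what a single trial of this kind of lateralized visual search could look like. It is a minimal illustration written with the PsychoPy library, not the authors’ actual experiment code; the durations, screen positions, stimulus text, and response keys are all assumptions made for the example.

```python
from psychopy import visual, core, event

# Minimal sketch of one lateralized visual-search trial (assumed parameters).
win = visual.Window(size=(1024, 768), color="grey", units="pix")
clock = core.Clock()

fixation = visual.TextStim(win, text="+", height=30)
target_cue = visual.TextStim(win, text="R", height=40)   # target letter to search for
stimulus = visual.TextStim(win, text="BRAIN", height=40,
                           pos=(300, 0))                  # right hemifield -> left hemisphere

# 1. Show the target letter the participant must find.
target_cue.draw()
win.flip()
core.wait(1.0)

# 2. Brief fixation so the eyes are at screen centre when the stimulus appears.
fixation.draw()
win.flip()
core.wait(0.5)

# 3. Show the word (or letter string) lateralized to one side and start timing.
stimulus.draw()
win.flip()
clock.reset()

# 4. Wait for a present/absent response and record the reaction time.
keys = event.waitKeys(maxWait=3.0, keyList=["y", "n"], timeStamped=clock)
if keys:
    key, rt = keys[0]
    print(f"response={key}, rt={rt:.3f} s")

win.close()
core.quit()
```

In a real session this trial would be repeated many times with the hemifield and stimulus type varied, and the logged reaction times and responses would form the data for the comparison described next.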

The results showed that the left and right hemispheres are characterized by different word processing strategies. When a familiar word appeared on the screen, the left hemisphere of the right-handed participants processed it holistically and found the target letter faster, while the right hemisphere carried out a slower search, ‘checking’ each letter in sequence. When participants were presented with a meaningless string of letters, the opposite effect was observed: the left hemisphere processed the string letter by letter, while the right hemisphere processed it quickly and holistically.

Thus, words placed on the right of the screen were processed faster. Moreover, the word processing strategy the brain chooses depends on the total amount of information presented on the screen.
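In practice, this kind of hemifield-by-stimulus-type pattern is read off a summary of per-trial reaction times and accuracies. The snippet below is a hypothetical analysis sketch, assuming a trial log with columns named hemifield, stimulus, rt, and correct; the file name and column names are illustrative, not taken from the study.

```python
import pandas as pd

# Hypothetical per-trial log: one row per trial with the hemifield the
# stimulus appeared in ('left'/'right'), the stimulus type ('word'/'letters'),
# the reaction time in seconds, and response accuracy (0/1).
trials = pd.read_csv("visual_search_trials.csv")

# Mean reaction time and accuracy for each hemifield x stimulus-type cell.
# Under the pattern described above, words would show faster mean RTs in the
# right hemifield (left hemisphere), and meaningless letter strings the reverse.
summary = (
    trials
    .groupby(["hemifield", "stimulus"])
    .agg(mean_rt=("rt", "mean"), accuracy=("correct", "mean"))
    .reset_index()
)
print(summary)
```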

‘When there is a lot of information, that is, when we need to process two words on the left and on the right, our brain begins to save energy and processes the words simultaneously. When there is not enough information, the brain relaxes and processes the information sequentially,’ says study author Elena Gorbunova. ‘Therefore, when placing text on the screen, you need to monitor how much information you are presenting to the user.’

This and other laboratory studies build upon earlier discoveries in the field of usability. When developing interfaces, sites, and applications, experts advise taking factors of cognitive information processing into consideration. These factors include:

Picture superiority effect. A user will notice and remember a picture more quickly and easily than text.

Banner blindness. Banner advertising is noticed more often when the content of the banner text matches the theme of the main material.

Masking. If two objects are presented on a screen in rapid succession, the brain’s perception of one or both objects worsens.

Change blindness. A user will not notice a change in detail unless their attention is drawn to it, especially if the change occurs outside the zone of their focused attention.

Spatial cueing. If a cue is shown where a new object is about to appear, the user is more likely to pay attention to that object.

Scientists note that research in this area is useful not only for creating convenient interfaces, but also for understanding the nature of dyslexia and other reading disabilities, as well as for developing online courses and game projects.