The spoken word signs

THE EFFICACY of using simultaneous signs and verbal language to facilitate early spoken words in hearing children with language delays has been documented in the literature (Baumann Leech & Cress, 2011; Dunst, Meter, & Hamby, 2011; Robertson, 2004; Wright, Kaiser, Reikowsky, & Roberts, 2013). In their systematic review, Dunst et al. (2011) concluded that using sign as an intervention to promote verbal language is promising, regardless of the population served (e.g., autism spectrum disorder, Down syndrome, developmental delays, physical disabilities) or the type of sign language used (e.g., American Sign Language [ASL], Signed English). Theoretical support comes from developmental research on the gesture–language continuum (Goodwyn, Acredolo, & Brown, 2000; McCune-Nicolich, 1981; McLaughlin, 1998), as well as language-learning theories such as the socially-based transactional model (Sameroff & Chandler, 1975; Yoder & Warren, 1993) and the cognitively-based information processing model (Ellis Weismer, 2000; Just & Carpenter, 1992). In this article, we review the developmental, theoretical, and empirical research that supports using sign language as an intervention in clinical populations. We further apply research to the tasks of choosing early word–sign targets and implementing word–sign intervention.

Guidance on signing with children is readily available for parents and practitioners in popular parenting books (e.g., Acredolo & Goodwyn, 2009), children’s board books (e.g., Acredolo & Goodwyn, 2002), and easily accessed websites (e.g., https://www.babysigns.com/; https://www.babysignlanguage.com/), as well as practitioner websites (e.g., the Center for Early Literacy Learning [CELL], 2010a, 2010b, 2010c, 2010d) and professional magazines (e.g., Seal, 2010). Although these works report translating research to practice, it must be noted that they have not undergone peer review (Nelson, White, & Grewe, 2012). For this reason, additional reflection on this topic is warranted.

Regarding recommendations on selecting a sign system, the aforementioned practice guidelines are in general agreement. Seal (2010) proposes the use of formal sign language signs (e.g., ASL) but accepts child modifications on the basis of motor skill. The Center for Early Literacy Learning (2010a, 2010b, 2010c, 2010d) uses a combination of ASL, ASL-modified, and homemade “baby signs.” Acredolo and Goodwyn (2009), the originators of the Baby Signs® program, added an ASL-only program in response to families who wish to teach universally consistent signs. Although no standard definition exists (Moores, 1978), ASL is a form of gestural communication used by individuals with profound hearing impairment (Nicolosi, Harryman, & Kresheck, 1996). American Sign Language is a distinct and formal language with an established system of morphology and syntax, different from that of spoken English. In ASL, some signs are iconic (i.e., the sign visually resembles the concept it conveys; Meuris, Maes, DeMeyer, & Zink, 2014) and motorically easy to produce, whereas others are not.

“Baby signs” are defined as stand-alone gestures made by infants and toddlers to communicate (Acredolo & Goodwyn, 2009; Acredolo, Goodwyn, Horobin, & Emmons, 1999). Baby signs are motorically simple, often generated by the toddler or created by the parent, and most often represent either an object or an activity (e.g., panting to represent “dog,” pulling at lower lip to represent “brush teeth”). Baby signs are also highly iconic. Iconicity has been discussed as a key factor in choosing early signs (Fristoe & Lloyd, 1980). Unlike ASL, baby signs are typically single words with no formal grammar.

Because Dunst et al. (2011) concluded that all sign interventions have value, we advocate here that, for teaching isolated vocabulary (i.e., key words), homemade baby signs, formal ASL signs, and ASL-adapted signs are all appropriate as long as the signs are iconic and used consistently. Hereafter, these signs will be referred to as key word signs (KS) to differentiate them from any trademarked baby signs programs or ASL.

Published practice guidelines have further addressed how to choose word–sign pairs (Acredolo & Goodwyn, 2009; CELL, 2010a, 2010b, 2010c, 2010d; Seal, 2010). Although all agree that targets should be pragmatically functional and developmentally appropriate, none has systematically considered the research on spoken lexical development. Developmental lexical data are available (Fenson et al., 1994; Tardif et al., 2008), as are guidelines for choosing first spoken words (Holland, 1975; Lahey & Bloom, 1977; Lederer, 2002, 2011). These are important resources in choosing first word–sign pairs when the goal is to produce spoken words.

Finally, these research-to-practice guidelines provide useful information for intervention. Tips include gaining joint attention, pairing signs with spoken words, and repeating targets within and across contexts, among others (Acredolo & Goodwyn, 2009; CELL, 2010a, 2010b, 2010c, 2010d; Seal, 2010). Collectively, these approaches combine the best of traditional language therapy with sign language intervention.

The purpose of this article is to integrate the aforementioned literature in an effort to guide clinical decision-making for young children with language delays in the absence of hearing loss. Specifically, this article will (a) review the developmental, theoretical, and empirical support for using signs to facilitate spoken words in children with language delays; (b) review guidelines for choosing first word–sign pairs, evaluate specific target recommendations, and offer a sample lexicon; and (c) combine recommended practices in sign intervention and early language intervention.

RESEARCH BASIS

Natural gestures have been defined as actions produced by the whole body, arms, hands, or fingers for the purpose of communicating (Centers for Disease Control and Prevention, 2012; Iverson & Thal, 1998). Natural gestures have been further categorized as either deictic or representational (Capone & McGregor, 2004; Crais, Watson, & Baranek, 2009; Iverson & Thal, 1998). Deictic gestures include pointing, showing, giving, and reaching, and they emerge between 10 and 13 months (Capone & McGregor, 2004). They are used to gain attention, and their meaning changes on the basis of the context. For example, a baby may point to a picture in a book to label a duck and point to a bottle to request it.

Representational gestures are used to express a specific language concept (e.g., nodding to signify agreement, waving to greet, and sniffing to signify “flower”) and, therefore, are not context-dependent. Representational gestures can stand alone. For example, if a child pretends to sniff a flower, and the flower is not present, the listener still knows what the child is attempting to communicate. These gestures begin to appear at 12 months (Bates, Benigni, Bretherton, Camaioni, & Volterra, 1979; Capone & McGregor, 2004).

The gesture–speech continuum

Researchers studying gestures in young children have noted a continuum from prelinguistic gestures to first words (McLaughlin, 1998) to multiword combinations (Goodwyn et al., 2000), as well as concomitant milestones such as first symbolic/pretend play gestures and first words (McCune-Nicolich, 1981). Longitudinal research in this area (Goodwyn et al., 2000; Rowe & Goldin-Meadow, 2009; Watt, Wetherby, & Shumway, 2006) indicates that gesture development predicts three critical early language-based domains: (a) lexical development (Acredolo & Goodwyn, 1988; Watt et al., 2006); (b) syntactic development in the transition to two-word utterances (Goodwyn et al., 2000); and (c) vocabulary size in kindergarten (Rowe & Goldin-Meadow, 2009).

Children with language impairments often have delays in gesture development (Luyster, Kadlec, Carter, & Tager-Flusberg, 2008; Sauer, Levine, & Goldin-Meadow, 2010). The nature of their gestural lexicons can be used to reliably predict who will and will not catch up in language development (i.e., late bloomers and late talkers, respectively; Thal, Tobias, & Morrison, 1991) and to differentiate among those with various disabilities such as autism (Zwaigenbaum et al., 2005) and Down syndrome (Mundy, Kasari, Sigman, & Ruskin, 1995). In two recent studies, both teaching gestures directly to children (McGregor, 2009) and increasing parent use of gestures (Longobardi, Rossi-Arnaud, & Spataro, 2012) supported verbal word learning.

Theoretical support

Two models of language acquisition, the transactional model (Sameroff & Chandler, 1975; Yoder & Warren, 1993) and the information processing model (Ellis Weismer, 2000; Just & Carpenter, 1992), provide further support for pairing spoken words with representational gestures/signs (i.e., KS). The transactional model (Sameroff & Chandler, 1975; Yoder & Warren, 1993) posits that the language-learning process is reciprocal and dynamic. A child-initiated gesture invites an adult to respond. Children with language delays who do not initiate or respond (either with gestures or words) risk diminished conversational efforts by adults, further compromising the language-learning experience (Rice, 1993). Kirk, Howlett, Pine, and Fletcher (2013) provided support for this model, reporting that the use of baby signs (vs. words alone) increased parents’ responsiveness to the nonverbal cues of their typically developing infants.

Information processing is a second model of language acquisition that may support the use of simultaneous speech/sign. This model places emphasis on the importance of a child’s cognitive processing abilities in the areas of attention, discrimination, organization, memory, and retrieval (Ellis Weismer, 2000; Just & Carpenter, 1992). Accordingly, a deficit in any one process, or task demands that exceed overall processing abilities, will cause the system to break down.

Simultaneous speech/sign intervention can address information processing problems in at least four different ways. First, from a neurological perspective, whereas verbal language engages only the auditory cortex, pairing it with sign engages both the visual and auditory cortices (Abrahamsen, Cavallo, & McCluer, 1985; Daniels, 1996). A child who has difficulty processing information solely by listening has the added opportunity to learn through the visual modality. This position is aligned with universal design for learning, in that educators and clinicians afford students multiple means of representation, engagement, and expression (McGuire, Scott, & Shaw, 2006). Second, words are more fleeting than signs. Although a spoken word quickly fades from a child’s auditory attention, gestures linger longer in the visual domain, thus providing more processing time (Abrahamsen et al., 1985; Gathercole & Baddeley, 1990; Just & Carpenter, 1992; Lahey & Bloom, 1994). Third, visual signs invite joint attention, an important prelinguistic precursor to communication development (Acredolo et al., 1999; Goodwyn et al., 2000; Tomasello & Farrar, 1986). The more signs presented, the more opportunities there are for the child to share attention and intention with the conversational partner. Fourth, both the sign and the word are symbolic. When used together, they essentially cross-train mental representation skills (Goodwyn & Acredolo, 1993; Petitto, 2000).

Empirical support

Inspired by the theoretical and developmental rationales for using signs to facilitate spoken words, researchers have sought to obtain empirical evidence to support use of KS as an intervention strategy in children with language delays (Baumann Leech & Cress, 2011; Dunst et al., 2011; Robertson, 2004; Wright et al., 2013). Dunst et al. (2011) conducted a critical review of 33 studies on the influence of sign/speech intervention on oral language production. Studies included in their review were investigations of clinical populations including autism spectrum disorders, social-emotional disorders, Down syndrome, intellectual disabilities, and physical disabilities. Their review concluded that, regardless of the type of sign system used (e.g., ASL, Signed English), the use of multimodal cues (i.e., sign paired with spoken words) yielded increased verbal communication. It must be noted that a critical review of these studies reveals limited numbers of participants overall (1–21) and primarily single-subject, within-group designs. No randomized, between-group comparisons (the gold standard for empirical research) were identified in this review.

Baumann Leech and Cress (2011) utilized a single-subject, multiple baseline research design to compare two different augmentative alternative communication (AAC) treatment approaches (i.e., picture symbol exchange vs. [unspecified form of] sign) in one participant diagnosed as a “Late Talker” (i.e., a child with expressive language delays only). The participant learned spoken target words using both methods of AAC and generalized these words to different communicative scenarios. Although no difference was noted between AAC intervention strategies, sign (as one of the two strategies) did facilitate spoken language.

Robertson (2004) reported the results of a single-subject, alternating treatment study in which two late-talking toddlers were presented with 20 novel vocabulary words. Ten spoken words were paired with signs, whereas the remaining 10 served as controls. Both children learned all 10 signed words and carried them over into conversational speech, compared with only half of the nonsigned words.

Wright et al. (2013) studied the effect of a speech/sign intervention on four toddlers with Down syndrome exposed to enhanced milieu teaching (EMT; Hancock & Kaiser, 2006) blended with joint attention, symbolic play, and emotional regulation (Kasari, Freeman, & Paparella, 2006). After participating in 20 biweekly sessions, all four children increased their use of signs and spoken words. However, without a control group, it is not possible to infer a cause–effect relationship.

Given this promising empirical research base, coupled with developmental and theoretical support, the authors here conclude that use of KS as an intervention strategy is supported. Two questions remain: (1) How can research guide the choice of first word–sign pairs? (2) What evidence-based strategies should be used to facilitate their production?

HOW TO CHOOSE FIRST WORD–SIGN PAIRS

Choosing first signs, similar to choosing first words, must be based on a variety of both context and content concerns (Holland, 1975; Lahey & Bloom, 1977; Lederer, 2001, 2002, 2011, 2013). In relation to context, targeted word–sign pairs should be useful for communicating an array of pragmatic functions (i.e., the reason why we send a message). For example, children can request to have their needs met, protest to express displeasure, comment to express ideas, and ask a question to obtain information, to name a few of the many pragmatic functions possible (Bloom & Lahey, 1978; Lahey, 1988).

Word–sign pairs should be highly motivating and suitable for use during a range of activities and across settings (e.g., home, school; Lahey & Bloom, 1977; Lederer, 2013). Furthermore, they should be easy to both demonstrate and understand (i.e., highly iconic; Fristoe & Lloyd, 1980; Lahey & Bloom, 1977). In terms of content, rationales for choosing individual word–sign targets and a core lexicon should be derived from both general lexical development and child- and family-specific vocabulary needs. Finally, lexical variety, which lays the foundation for syntax, must be considered (Bloom & Lahey, 1978; Lahey, 1988).

Research-to-practice guidelines for choosing word–sign pairs provided by CELL (2010a, 2010b, 2010c, 2010d), Acredolo and Goodwyn (2009), and Seal (2010) place emphasis on the contextually-based aspects of language. Regarding content, Seal (2010) consulted developmental ASL research (Anderson & Reilly, 2002) and considered motor development. Acredolo and Goodwyn (2009) referred to their own research on the natural development of Baby Signs® (Acredolo & Goodwyn, 1988). However, recommendations from the aforementioned experts do not systematically consider developmental spoken lexical research. Because the purpose of using signs with children with language delays is to facilitate first spoken words, the authors here conclude that the logical approach to selecting word–sign pairs is to identify the spoken targets first.

Early lexical development

Early spoken word targets should be drawn largely from developmental lexical research (e.g., Benedict, 1979; Fenson et al., 1994; Nelson, 1973; Tardif et al., 2008). For a child with typical development, a majority of his or her first 20 words will be nouns, greetings, and “no” (Tardif et al., 2008). As a child’s vocabulary approaches or exceeds 50 words, prepositions emerge (e.g., “up,” “down”), followed by action verbs (e.g., “go,” “eat”) and adjectives/modifiers (e.g., “more,” “all done,” “hot”; Bloom & Lahey, 1978; Fenson et al., 1994; Lahey, 1988). The lexicon at 50 words typically contains two-thirds substantive words (i.e., objects or classes of objects expressed with nouns and pronouns, such as names of people, toys, animals, and foods) and one-third relational words (i.e., words expressing relationships between objects, such as verbs, prepositions, adjectives, and other modifiers; Nelson, 1973; Owens, 2011). Late-talking toddlers (Rescorla, Alley, & Christine, 2001) and children with Down syndrome (Oliver & Buckley, 1994) have been reported to follow the same order of lexical acquisition as children developing language typically, but at a slower pace.

To help clinicians further facilitate semantic variety, Bloom and Lahey (1978) and Lahey (1988) developed a popular taxonomy to code substantive and relational words. They identified nine different early semantic categories of words and their meanings. Substantive words are contained in the category of existence, whereas relational words can be sorted into the following eight categories: nonexistence, recurrence, rejection, action, locative action, attribution, possession, and denial. Definitions and developmentally early verbal exemplars for each category can be found in Table 1.

Table 1: Taxonomy of Content Categories With Definitions and Earliest Exemplars
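
For clinicians who keep their target-selection materials in electronic form, the taxonomy above can also be stored as a simple data structure. The following is a minimal Python sketch, not part of the published taxonomy: the category names follow Bloom and Lahey (1978) and Lahey (1988) as summarized in the text, while the example words are illustrative placeholders drawn from the running text of this article where possible, not the exemplars printed in Table 1.

```python
# A minimal sketch (not from the source article) of the Bloom & Lahey (1978)/
# Lahey (1988) content categories as a Python dictionary. Example words are
# illustrative placeholders, not the exemplars from Table 1.
CONTENT_CATEGORIES = {
    "existence":       {"type": "substantive", "examples": ["mommy", "ball"]},
    "nonexistence":    {"type": "relational",  "examples": ["all gone"]},
    "recurrence":      {"type": "relational",  "examples": ["more"]},
    "rejection":       {"type": "relational",  "examples": ["no"]},
    "denial":          {"type": "relational",  "examples": ["no"]},
    "action":          {"type": "relational",  "examples": ["eat", "go"]},
    "locative action": {"type": "relational",  "examples": ["up", "down"]},
    "attribution":     {"type": "relational",  "examples": ["hot"]},
    "possession":      {"type": "relational",  "examples": ["mine"]},  # placeholder
}

if __name__ == "__main__":
    # Print a quick summary of each category and its placeholder examples.
    for name, info in CONTENT_CATEGORIES.items():
        print(f"{name:16s} ({info['type']}): {', '.join(info['examples'])}")
```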

Recommendations for choosing word–sign targets

To begin, Lederer (2001, 2002, 2011) and others (e.g., Girolametto, Pearce, & Weitzman, 1996) recommend choosing a small set of 10–12 developmentally early targets representing a range of semantic categories to express a variety of pragmatic intentions. All exemplars in Table 1 meet these criteria. For children who have more significant language impairments, fewer targets should be selected.

As mentioned in the introduction of this article, recommended targets can be found in popular and professional publications (Acredolo & Goodwyn, 2002; Seal, 2010) and websites (e.g., https://www.babysignlanguage.com/; CELL, 2010a, 2010b, 2010c, 2010d). Since CELL (2010a, 2010b, 2010c, 2010d) and Seal (2010) chose their word–sign targets for special populations, we will use these to hone clinical decision-making skills. Specifically, we will reflect on their strengths and weaknesses in relation to (a) spoken lexical development, (b) representation of substantive and relational targets, and (c) variety within and across semantic categories. CELL’s (2010a, 2010b, 2010c, 2010d) and Seal’s (2010) targets appear in Table 2. We will conclude with a sample lexicon for clinical intervention.

Table 2: Proposed Word–Sign Targets by Center for Early Literacy Learning (CELL; 2010) and Seal (2010)

Spoken lexical development

The majority of the targets offered by CELL (2010a, 2010b, 2010c, 2010d) and Seal (2010) are words acquired early by children developing spoken language typically (Fenson et al., 1994; Tardif et al., 2008). (These are bolded in Table 2.) However, both lists include some targets that are acquired after 24 months. (These are not bolded in Table 2.) Targets that are starred in Table 2 did not appear in the database generated by Fenson et al. (1994). Finally, an “X” denotes semantic categories for which neither CELL (2010a, 2010b, 2010c, 2010d) nor Seal (2010) provides a target.

In general, the authors here do not recommend choosing words for language intervention that appear developmentally after 24 months. By the age of two years, toddlers who are developing typically have a vocabulary of approximately 200 words and are generating (at least) two-word combinations (Paul & Norbury, 2012). Given that the single-word lexicon comprises approximately 50 words (Nelson, 1973; Owens, 2011), establishing a cutoff at two years of age provides a large enough pool from which to select developmentally early targets.
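
Where candidate targets and their reported ages of acquisition are kept in a spreadsheet or script, the 24-month cutoff can be applied mechanically. The sketch below assumes a hypothetical, clinician-supplied age-of-acquisition table: the ages for “mommy,” “daddy,” and “go” follow values cited later in this article, and the remaining entries are placeholders rather than published norms.

```python
# A minimal sketch of screening candidate targets against a 24-month cutoff.
# AGE_OF_ACQUISITION is a hypothetical clinician-supplied table (in months);
# "mommy"/"daddy" (12) and "go" (19) follow values cited in this article, and
# the other values are placeholders, not published norms.
AGE_OF_ACQUISITION = {"mommy": 12, "daddy": 12, "more": 16, "go": 19, "yesterday": 30}

def developmentally_early(candidates, cutoff_months=24):
    """Split candidates into developmentally early words and words to review."""
    kept, flagged = [], []
    for word in candidates:
        age = AGE_OF_ACQUISITION.get(word)
        if age is not None and age <= cutoff_months:
            kept.append(word)
        else:
            flagged.append(word)  # late-acquired or not in the table: review by hand
    return kept, flagged

print(developmentally_early(["mommy", "go", "yesterday", "zebra"]))
# -> (['mommy', 'go'], ['yesterday', 'zebra'])
```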

Substantive–relational representation

Both CELL (2010a, 2010b, 2010c, 2010d) and Seal (2010) provide word lists that contain a majority of relational words. Choosing relational words for children with language delays is highly recommended because they can be used more frequently across activities and settings than substantive words (Lahey & Bloom, 1977). In fact, CELL (2010a, 2010b, 2010c, 2010d) recommends only one substantive word (“book”). Because substantive words are easier to learn than relational words (i.e., they are more easily represented; Bloom & Lahey, 1978; Lahey, 1988), the authors recommend building early lexicons that include both substantive and relational words, with a greater emphasis on the latter, as did Seal (2010).

Semantic variety

Semantic variety refers to both within- and across-category considerations. With respect to within-category variety for substantive words, first nouns include names of people, toys, foods, animals, clothes, and body parts (Fenson et al., 1994). Seal (2010) includes sufficient semantic variety, with people, toys, and food represented.

With regard to within-category variety for relational words, both CELL (2010a, 2010b, 2010c, 2010d) and Seal (2010) include a large number of verbs, similar to the first 35 ASL signs of young children who are deaf (Anderson & Reilly, 2002) but dissimilar to the first words of children learning spoken language (Fenson et al., 1994; Tardif et al., 2008). Anderson and Reilly (2002) explain that these early verb concepts can be easily demonstrated with natural gestures (e.g., “clap,” “hug,” “kiss”). Spoken verbs are among the last categories of single words to be acquired (e.g., the first verb, “go,” appears at 19 months; the first nouns, “mommy” and “daddy,” appear at 12 months; Fenson et al., 1994). Bloom and Lahey (1978) and Lahey (1988) explain that verbs are harder to learn than nouns because they are not always easily represented, permanent, or perceptually distinct from the noun (e.g., “eat” means someone is eating something). Because verbs are harder to learn, but easier to gesture, the authors here recommend including a minimum of two action verbs when building a KS lexicon.

Finally, with respect to variety of relational words across semantic categories, we need to look for exemplars from each of the nine early categories (Bloom & Lahey, 1978; Lahey, 1988). Inspection of CELL’s (2010a, 2010b, 2010c, 2010d) and Seal’s (2010) recommended targets reveals missing lexical items from certain categories, as identified by an X in Table 2. According to Bloom and Lahey (1978), a first lexicon should include relational words from at least the categories of nonexistence, recurrence, rejection, action, and locative action.
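
The content criteria discussed in this section can be treated as a short checklist. The sketch below is illustrative only: it encodes three of the checks described above (a majority of relational words, at least two action verbs, and coverage of the five core relational categories named by Bloom and Lahey, 1978) for a candidate lexicon written as word–category pairs; the sample entries are examples, not a recommended lexicon.

```python
# A minimal sketch of checking a candidate first lexicon against the criteria
# discussed above; the sample entries are illustrative, not a recommended set.
CORE_RELATIONAL = {"nonexistence", "recurrence", "rejection", "action", "locative action"}
SUBSTANTIVE = {"existence"}

def check_lexicon(lexicon):
    """lexicon: list of (word, semantic_category) pairs."""
    categories = [category for _, category in lexicon]
    relational = [c for c in categories if c not in SUBSTANTIVE]
    return {
        "mostly_relational": len(relational) > len(lexicon) / 2,
        "two_or_more_action_verbs": categories.count("action") >= 2,
        "missing_core_categories": sorted(CORE_RELATIONAL - set(categories)),
    }

sample = [("mommy", "existence"), ("ball", "existence"), ("more", "recurrence"),
          ("all gone", "nonexistence"), ("no", "rejection"), ("eat", "action"),
          ("go", "action"), ("up", "locative action"), ("down", "locative action"),
          ("hot", "attribution")]
print(check_lexicon(sample))
# -> {'mostly_relational': True, 'two_or_more_action_verbs': True, 'missing_core_categories': []}
```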

Making decisions

Table 1 provides the earliest acquired words in each semantic category. The bolded targets in Table 2 that do not appear in Table 1 provide additional word–sign targets for consideration. In addition to these recommended developmental targets, child “favorites” and family-specific vocabulary must be included. These are obtained through family interviews about child-preferred items (e.g., toys and foods), as well as alternative labels (e.g., for people and foods) that have cultural significance to the family and the child. Child-specific targets (e.g., Elmo) are included because they involve individually motivating objects or events; culturally guided vocabulary (e.g., “ee-mah” for “mommy”) is included because it fosters positive rapport and respect (Robertson, 2007).

Taking spoken lexical development, substantive–relational representation, and semantic variety into account, Table 3 provides a sample first lexicon. These targets are adapted from Lederer (2002, 2011). Suggested KS descriptions are provided. The signs are derived from ASL and Baby Signs®. Users should modify as needed.

Table 3: Sample First Word–Sign Lexicon With Sign Instructions

STRATEGIES TO FACILITATE EARLY WORD–SIGN TARGETS

Many strategies that are effective in facilitating early spoken words can be expanded to include word–sign targets. Evidence-based practices for these shared objectives include the following: (a) focused stimulation (Ellis Weismer & Robertson, 2006; Ellis Weismer & Murray-Branch, 1989; Girolametto et al., 1996; Lederer, 2002; Wolfe & Heilmann, 2010); (b) enhanced milieu teaching (EMT; Hancock & Kaiser, 2006; Wright et al., 2013); and (c) embedded learning opportunities (ELOs; Horn & Banerjee, 2009; Lederer, 2013; Noh, Allen, & Squires, 2009). In addition, evidence-based strategies for facilitating sign language also must be considered (Seal, 2010). Regardless of the teaching strategy being used, parents and professionals should always pair the spoken word with the KS in short, grammatically correct phrases or sentences (Bredin-Oja & Fey, 2013). The child’s sign alone should be accepted fully, with the assumption that it will fade once the spoken word emerges (Iverson & Goldin-Meadow, 2005).

Focused stimulation

Focused stimulation is a language intervention approach in which a small pool of target words is preselected and each is modeled five to 10 times before another target is modeled (Ellis Weismer & Murray-Branch, 1989; Girolametto et al., 1996; Lederer, 2002; Wolfe & Heilmann, 2010). Repeating a limited set of targets is supported by information processing theories, which suggest minimizing demands on the processing system (Ellis Weismer, 2000; Just & Carpenter, 1992). The target is presented in short but natural phrases/sentences to help build the concept linguistically. Other modes of representation, such as pictures, signs, or demonstrations, are also used to build the concept. In focused stimulation, the KS is repeated each time the target is spoken. No verbal or signed production is expected or overtly elicited from the child in the classic form of focused stimulation. Exposure alone has been shown to be sufficient to facilitate learning, using both parents (Girolametto et al., 1996) and professionals (Ellis Weismer & Robertson, 2006; Wolfe & Heilmann, 2010) as intervention agents. A study by Lederer (2001) demonstrated that parents and professionals collaborating in the use of focused stimulation were effective in facilitating vocabulary development. Table 4 provides a sample focused stimulation dialog for facilitating the word–sign target “eat.”

Table 4: Sample Implementation for Three Treatment Strategies to Facilitate the Word–Sign Target “Eat”
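
As a simple planning and documentation aid, the five-to-ten-model guideline can also be tracked during a session. The sketch below is a hypothetical tally, not part of any published protocol: the clinician logs each model of a target (spoken word paired with its KS), and the report flags targets modeled fewer than five times.

```python
from collections import Counter

# A minimal, hypothetical tally for focused stimulation: each logged event is
# one model of a target (spoken word paired with its key word sign); targets
# modeled fewer than the suggested five times are flagged for more exposure.
TARGETS = ["eat", "more", "all gone"]

def session_report(modeled_events, minimum=5):
    counts = Counter(modeled_events)
    return {word: {"models": counts[word], "met_minimum": counts[word] >= minimum}
            for word in TARGETS}

logged = ["eat"] * 6 + ["more"] * 4 + ["all gone"] * 5
print(session_report(logged))
# "more" is flagged (4 models); "eat" and "all gone" meet the minimum.
```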

Enhanced milieu teaching

Enhanced milieu teaching is a group of language facilitation strategies that combines environmental arrangement (to stimulate a child’s initiations), responsive interaction, and milieu teaching. Examples of environmental arrangement include placing desired objects out of reach, providing small portions of preferred foods, giving objects or activities that require assistance (e.g., bubbles with the top sealed very tightly), or doing something silly (e.g., trying to pour juice with the cap still in place). Responsive interaction strategies include “following the child’s lead, responding to the child’s verbal and nonverbal initiations, providing meaningful semantic feedback, expanding the child’s utterances” both semantically and syntactically (Hancock & Kaiser, 2006, p. 209). These strategies are designed to engage the child and scaffold language. Milieu teaching strategies include, but are not limited to, asking questions, providing fill-ins, offering choices, and modeling word–sign targets in increasingly directive styles (Hancock & Kaiser, 2006).

Enhanced milieu teaching’s theoretical basis comes from both behaviorist (Hart & Rogers-Warren, 1978) and social interactionist theories (e.g., the transactional model; Sameroff & Chandler, 1975; Yoder & Warren, 1993). Enhanced milieu teaching uses operant conditioning (i.e., antecedent, behavior, consequence; Skinner, 1957) in prearranged but natural contexts (Hart & Rogers-Warren, 1978). The antecedent can be either nonverbal or verbal.

Both parents and professionals have been shown to implement EMT effectively (Hancock & Kaiser, 2006). Similar to focused stimulation, collaborative use of EMT between interventionists and parents has been shown to produce the greatest impact on vocabulary expansion (Kaiser & Roberts, 2013). See Table 4 for a sample EMT interaction to facilitate the word–sign “eat.”

Embedded learning opportunities

Although focused stimulation and EMT have been shown to help children generalize newly acquired vocabulary, these strategies alone cannot fully address generalization. To help children both acquire and generalize new vocabulary, children need to be exposed to words and signs across activities and settings. This is made possible through systematic ELOs (Horn & Banerjee, 2009; Lederer, 2013; Noh et al., 2009). To plan for ELOs, professionals and families must work together to identify opportunities across the child’s day in which the intended targets can be facilitated. Parents are made partners in the decision-making process for selecting targets and identifying multiple opportunities to facilitate these targets. See Table 4 for ELO opportunities to facilitate the word–sign target “eat.”
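
Because ELO planning amounts to a matrix of targets by daily routines, the plan can be laid out as a simple table. The sketch below is hypothetical: the routines and planned opportunities are examples a team might identify with a family, not program requirements.

```python
# A minimal, hypothetical sketch of an embedded learning opportunities (ELO)
# planning matrix: rows are word–sign targets, columns are daily routines
# identified with the family, and "X" marks a planned opportunity.
TARGETS = ["eat", "more", "all gone"]
ROUTINES = ["breakfast", "snack", "bath", "book time"]

def elo_matrix(targets, routines, opportunities):
    """opportunities: set of (target, routine) pairs the team has planned."""
    rows = [["target"] + routines]
    for target in targets:
        rows.append([target] + ["X" if (target, routine) in opportunities else "-"
                                for routine in routines])
    return rows

planned = {("eat", "breakfast"), ("eat", "snack"), ("more", "snack"),
           ("more", "book time"), ("all gone", "bath")}
for row in elo_matrix(TARGETS, ROUTINES, planned):
    print("".join(cell.ljust(11) for cell in row))
```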

Sign language strategies

In addition to traditional language facilitation strategies, recommended practices for teaching signs should be considered. Many of these practices are adapted from the practices of parents of young children who are deaf and learning ASL. Strategies include establishing joint attention, such as tapping a child who does not respond to his or her name (Clibbens, Powell, & Atkinson, 2002; Waxman & Spencer, 1997), and keeping the sign in front of the child for the duration of the spoken word or phrase (Iverson, Longobardi, Spampinato, & Caselli, 2006; Seal, 2010). In addition, Seal (2010) suggests sitting behind children for hand-over-hand facilitation to help with perspective, but also signing face-to-face so that children can see facial expressions and mouth movements. Just as parents of children developing language typically do with speech, Seal (2010) encourages both parents and professionals to use “motherese” with signs, that is, to present signs slowly, exaggerate their size, extend their duration, and increase their frequency.

SUMMARY

Regarding recommended practices in implementing a word–sign intervention, this article extends the work of previous guidelines and specific word–sign recommendations (Acredolo & Goodwyn, 2002; Acredolo & Goodwyn, 2009; CELL, 2010a, 2010b, 2010c, 2010d; Seal, 2010). This article more systematically considers the roles of spoken language development, as well as language and sign facilitation strategies, in choosing and facilitating early word–sign targets. In addition to the pragmatic and contextual considerations embraced by the reviewed guidelines, early spoken lexical research must be consulted regarding the acquisition of specific words within and across a variety of semantic categories. This process will help ensure the creation of a diverse early lexicon, one necessary for communication in the present and for the ultimate transition to syntax.

For children with language delays, combining signs with spoken words to facilitate spoken language has strong developmental and theoretical support. Empirical support is promising, but more controlled studies are needed. Specifically, researchers must study larger numbers of participants and employ between-group designs, ideally with randomization of participants. In addition, the late talker population has received little attention with respect to word–sign interventions. Because research suggests that these children are the mildest of those with language delays and may even “catch up” without intervention (Paul & Norbury, 2012), it is important to ascertain whether a KS intervention program could accelerate progress beyond language therapy without signs. Finally, given research that supports the use of parents as language facilitators (Girolametto et al., 1996; Hancock & Kaiser, 2006), an investigation of a parent-implemented home program using KS is warranted.

REFERENCES

Abrahamsen A. A., Cavallo M. M., McCluer J. A. (1985). Is the sign advantage a robust phenomenon? From gesture to language in two modalities. Merrill-Palmer Quarterly, 31, 177–209.

Acredolo L., Goodwyn S. (1988). Symbolic gesturing in normal infants. Child Development, 59, 450–466.

Acredolo L., Goodwyn S. (2002). My first baby signs. New York, NY: Harper Festival.

Acredolo L., Goodwyn S. (2009). Baby signs: How to talk with your baby before your baby can talk (3rd ed.). New York, NY: McGraw-Hill.

Acredolo L. P., Goodwyn S. W., Horobin K., Emmons Y. (1999). The signs and sounds of early language development. In Balter L., Tamis-LeMonda C. (Eds.), Child psychology: A handbook of contemporary issues (pp. 116–139). New York, NY: Psychology Press.

Anderson D., Reilly J. (2002). The MacArthur communicative development inventory: Normative data for American sign language. Journal of Deaf Studies and Deaf Education, 7(2), 83–119.

Bates E., Benigni L., Bretherton I., Camaioni L., Volterra V. (1979). The emergence of symbols: Cognition and communication in infancy. New York, NY: Academic Press.

Baumann Leech E. R., Cress C. J. (2011). Indirect facilitation of speech in a late talking child by prompted production of picture symbols or signs. Augmentative and Alternative Communication, 27(1), 40–52.

Benedict H. (1979). Early lexical development: Comprehension and production. Journal of Child Language, 6, 183–200.

Bloom L., Lahey M. (1978). Language development and language disorders. New York, NY: Wiley.

Bredin-Oja S. L., Fey M. E. (2013). Children’s responses to telegraphic and grammatically complete prompts to imitate. American Journal of Speech Language Pathology. Retrieved June 18, 2014, from http://ajslp.asha.org/cgi/content/abstract/1058-0360_2013_12-0155v1

Capone N. C., McGregor K. (2004). Gesture development: A review for clinical and research practices. Journal of Speech, Language, and Hearing Research, 47, 173–186.

Center for Early Literacy Learning. (2010a). Infant gestures. CELL practices. Asheville, NC: Orelena Hawks Puckett Institute. Retrieved June 18, 2014, from www.EarlyLiteracyLearning.org

Center for Early Literacy Learning. (2010b). Joint attention activities. CELL practices. Asheville, NC: Orelena Hawks Puckett Institute. Retrieved June 18, 2014, from www.EarlyLiteracyLearning.org

Center for Early Literacy Learning. (2010c). Sign language activities. CELL practices. Asheville, NC: Orelena Hawks Puckett Institute. Retrieved June 18, 2014, from www.EarlyLiteracyLearning.org

Center for Early Literacy Learning. (2010d). Infant sign language dictionary. CELL practices. Asheville, NC: Orelena Hawks Puckett Institute. Retrieved June 18, 2014, from www.EarlyLiteracyLearning.org

Centers for Disease Control and Prevention. (2012, March 1). National Center on Birth Defects and Developmental Disabilities. Retrieved June 18, 2014, from http://www.cdc.gov/ncbddd/hearingloss/parentsguide/building/natural-gestures.html

Clibbens J., Powell G. G., Atkinson E. (2002). Strategies for achieving joint attention when signing to children with Down’s syndrome. International Journal of Language & Communication Disorders, 37, 309–323.

Crais E. R., Watson L. R., Baranek G. T. (2009). Use of gesture development in profiling children’s prelinguistic communication skills. American Journal of Speech-Language Pathology, 18, 95–108.

Daniels M. (1996). Seeing language: The effect over time of sign language on vocabulary development in early childhood education. Child Study Journal, 26(3), 193–209.

Dunst C. J., Meter D., Hamby D. W. (2011). Influences of sign and oral language interventions on the speech and oral language production of young children with disabilities. CELL practices. Asheville, NC: Orelena Hawks Puckett Institute. Retrieved June 18, 2014, from www.EarlyLiteracyLearning.org

Ellis Weismer S. (2000). Language intervention for children with developmental language delay. In Bishop D., Leonard L. (Eds.), Speech and language impairments: From theory to practice (pp. 157–176). Philadelphia, PA: Psychology Press.

Ellis Weismer S., Murray-Branch J. (1989). Modeling versus modeling plus evoked production training: A comparison of two language intervention methods. The Journal of Speech and Hearing Disorders, 54, 269–281.

Ellis Weismer S., Robertson S. (2006). Focused stimulation. In McCauley R., Fey M. (Eds.), Treatment of language disorders in children (pp. 175–202). Baltimore, MD: Brookes.

Fenson L., Dale P., Reznick J., Bates E., Thal D., Pethick J. (1994). Variability in early communication development. Monographs of the Society for Research in Child Development, 59 (5, Serial No. 242).

Fristoe M., Lloyd L. (1980). Planning an initial expressive sign lexicon for persons with severe communication impairment. The Journal of Speech and Hearing Disorders, 45, 170–180.

Gathercole S., Baddeley A. (1990). Phonological memory deficits in language disordered children: Is there a causal connection? Journal of Memory and Language, 29, 336–360.

Girolametto L., Pearce P., Weitzman E. (1996). Interactive focused stimulation for toddlers with expressive vocabulary delays. The Journal of Speech and Hearing Research, 39, 1274–1283.

Goodwyn S., Acredolo L. (1993). Symbolic gesture versus word: Is there a modality advantage for onset of symbol use? Child Development, 64, 688–701.

Goodwyn S., Acredolo L., Brown C. (2000). Impact of symbolic gesturing on early language development. Journal of Nonverbal Behavior, 24, 81–103.

Hancock T. B., Kaiser A. P. (2006). Enhanced milieu teaching. In McCauley R., Fey M. (Eds.), Treatment of language disorders in children (pp. 203–236). Baltimore, MD: Brookes.

Hart B., Rogers-Warren A. (1978). A milieu approach to teaching language. In Schiefelbusch R. L. (Ed.), Language intervention strategies (pp. 193–235). Baltimore, MD: University Park Press.

Holland A. L. (1975). Language therapy for children: Some thoughts on context and content. The Journal of Speech and Hearing Disorders, 40, 514–523.

Horn E., Banerjee R. (2009). Understanding curriculum modifications and embedded learning opportunities in the context of supporting all children’s success. Language, Speech, and Hearing Services in Schools, 40, 406–415.

Iverson J. M., Goldin-Meadow S. (2005). Gesture paves the way for language development. American Psychological Society, 16, 367–371.

Iverson J. M., Longobardi E., Spampinato K., Caselli M. C. (2006). Gesture and speech in maternal input to children with Down syndrome. International Journal of Language & Communication Disorders, 41, 235–251.

Iverson J. M., Thal D. J. (1998). Communicative transitions: There’s more to the hand than meets the eye. In Wetherby A. M., Warren S. F., Reichle J. (Eds.), Transitions in prelinguistic communication: Preintentional to intentional and presymbolic to symbolic (pp. 59–86). Baltimore, MD: Paul H. Brookes.

Jocelyn M. (2010). Eats. Plattsburg, NY: Tundra Books.

Just M. A., Carpenter P. A. (1992). A capacity theory of comprehension: Individual differences in working memory. Psychological Review, 99(1), 122–149.

Kaiser A. P., Roberts M. Y. (2013). Parent-implemented enhanced milieu teaching with preschool children who have intellectual disabilities. Journal of Speech, Language, and Hearing Research, 56, 295–309.

Kasari C., Freeman S., Paparella T. (2006). Joint attention and symbolic play in young children with autism: A randomized controlled intervention study. Journal of Child Psychology and Psychiatry, and Allied Disciplines, 47(6), 611–620.

Kirk E., Howlett N., Pine K. J., Fletcher B. (2013). To sign or not to sign? The impact of encouraging infants to gesture on infant language and maternal mind-mindedness. Child Development, 84(2), 574–590.

Lahey M. (1988). Language disorders and language development. New York, NY: Macmillan.

Lahey M., Bloom L. (1977). Planning a first lexicon: Which words to teach first. The Journal of Speech and Hearing Disorders, 42, 340–350.

Lahey M., Bloom L. (1994). Variability and language learning disabilities. In Wallach G. P., Butler K. G. (Eds.), Language learning disabilities in school-age children and adolescents. New York, NY: Macmillan.

Lederer S. H. (2001). Efficacy of parent-child language group intervention for late talking toddlers. Infant-Toddler Intervention, 11, 223–235.

Lederer S. H. (2002). Selecting and facilitating the first vocabulary for children with developmental language delays: A focused stimulation approach. Young Exceptional Children, 6, 10–17.

Lederer S. H. (2011). Finding and facilitating early lexical targets. Retrieved June 18, 2014, from http://www.speechpathology.com/slp-ceus/course/finding-and-facilitating-early-lexical-4189

Lederer S. H. (2013). Integrating best practices in language intervention and curriculum design to facilitate first words. Young Exceptional Children. Retrieved June 18, 2014, from http://yec.sagepub.com/content/early/2013/06/18/1096250613493190.citation

Longobardi E., Rossi-Arnaud C., Spataro P. (2012). Individual differences in the prevalence of words and gestures in the second year of life: Developmental trends in Italian children. Infant Behavior Development, 35(4), 847–859.

Luyster R. J., Kadlec M. B., Carter A., Tager-Flusberg H. (2008). Language assessment and development in toddlers with autism spectrum disorders. Journal of Autism and Developmental Disorders, 38(8), 1426–1438.

McCune-Nicolich L. (1981). Toward symbolic functioning: Structure of early pretend games and potential parallels with language. Child Development, 52, 785–797.

McGregor W. B. (2009). Linguistics: An introduction. New York, NY: Continuum International Publishing Group.

McGuire J. M., Scott S. S., Shaw S. F. (2006). Universal design and its applications in educational environments. Remedial and Special Education, 27(3), 166–175.

McLaughlin R. (1998). Introduction to language development. San Diego, CA: Singular.

Meuris K., Maes B., DeMeyer A. M., Zink I. (2014). Manual signs in adults with intellectual disability: Influence of sign characteristics on functional sign vocabulary. Journal of Speech, Language, and Hearing Research, 57, 990–1010.

Moores D. F. (1978). Educating the deaf. Boston, MA: Houghton Mifflin.

Mundy P., Kasari C., Sigman M., Ruskin E. (1995). Nonverbal-communication and early language-acquisition in children with Down syndrome and in normally developing children. The Journal of Speech and Hearing Research, 38, 157–167.

Nelson K. (1973). Structure and strategy in learning to talk. Monographs of the Society for Research in Child Development, 38 (Serial No. 149).

Nelson L. H., White K. R., Grewe J. (2012). Evidence for website claims about the benefits of teaching sign language to infants and toddlers with normal hearing. Infant Child Development, 21, 474–502.

Nicolosi L., Harryman E., Kresheck J. (1996). Terminology of communication disorders: Speech-language-hearing (4th ed.). Baltimore, MD: Lippincott Williams & Wilkins.

Noh J., Allen D., Squires J. (2009). Use of embedded learning opportunities within daily routines by early intervention/early childhood special education teachers. International Journal of Special Education, 24, 1–10.

Oliver B., Buckley S. (1994). The language development of children with Down syndrome: First words to two-word phrases. Down syndrome Research and Practice, 2(2), 71–75.

Owens R. (2011). Language development: An introduction (8th ed.). Boston, MA: Allyn & Bacon.

Paul R., Norbury C. (2012). Language disorders from infancy through adolescence: Listening, speaking, reading, writing, and communicating (4th ed.). St. Louis, MO: Mosby.

Petitto L. A. (2000). On the biological foundations of human language. In Emmorey K., Lane H. (Eds.), The signs of language revisited: An anthology in honor of Ursula Bellugi and Edward Klima. Mahwah, NJ: Lawrence Erlbaum Associates.

Rescorla L., Alley A., Christine J. (2001). Word frequencies in toddlers’ lexicons. Journal of Speech, Language, and Hearing Research, 44, 598–609.

Rice M. (1993). “Don’t talk to him, he’s weird.” A social consequences account of language and social interactions. In Kaiser A. P., Gray D. B. (Eds.), Communication and language intervention issues: Volume 2. Enhancing children’s communication: Research foundations for intervention (pp. 139–158). Baltimore, MD: Paul H. Brookes Publishers.

Robertson S. (2004). Proceedings from ASHA convention ‘07: The effects of sign on the oral vocabulary of two late talking toddlers, Indiana, PA.

Robertson S. (2007). Got EQ? Increasing cultural and clinical competence through emotional intelligence. Communication Disorders Quarterly, 29(1), 14–19.

Rowe M. L, Goldin-Meadow S. (2009). Early gesture selectively predicts later language learning. Developmental Science, 12, 182–187.

Sameroff A., Chandler M. (1975). Reproductive risk and the continuum of caretaking casualty. In Horowitz M. F. D., Hetherington E. M., Scarr-Salapatek S., Seigel G. (Eds.), Review of child development research (pp. 187–244). Chicago, IL: University Park Press.

Sauer E., Levine S.C., Goldin-Meadow S. (2010). Early gesture predicts language delay in children with pre- or perinatal brain lesions. Child Development, 81(2), 528–539.

Seal B. (2010). About baby signing. The ASHA Leader. Retrieved June 18, 2014, from http://www.asha.org/publications/leader/2010/101102/about-baby-signing.htm

Skinner B. F. (1957). Verbal behavior. Cambridge, MA: Prentice Hall, Inc.

Tardif T., Liang W., Zhang Z., Fletcher P., Kaciroti N., Marchman V. A. (2008). Baby’s first 10 words. Developmental Psychology, 44, 929–938.

Thal D., Tobias S., Morrison D. (1991). Language and gesture in late talkers: A 1-year follow-up. The Journal of Speech and Hearing Research, 34(3), 604–612.

Tomasello M., Farrar M. (1986). Joint attention and early language. Child Development, 57, 1454–1463.

Watt N., Wetherby A., Shumway S. (2006). Prelinguistic predictors of language outcome at three years of age. Journal of Speech, Language, and Hearing Research, 49, 1224–1237.

Waxman R., Spencer P. (1997). What mothers do to support infant visual attention: Sensitivities to age and hearing status. Journal of Deaf Studies and Deaf Education, 2(2), 104–114.

Wolfe D., Heilmann J. (2010). Simplified and expanded input in a focused stimulation program for a child with expressive language delay (ELD). Child Language Teaching and Therapy, 26, 335–346.

Wright C. A., Kaiser A. P., Reikowsky D. I., Roberts M. Y. (2013). Effects of a naturalistic sign intervention on expressive language of toddlers with Down syndrome. Journal of Speech, Language, and Hearing Research, 56, 994–1008.

Yoder P. J., Warren S. F. (1993). Can developmentally delayed children’s language development be enhanced through prelinguistic intervention? In Kaiser A. P., Gray D. B. (Eds.), Enhancing children’s communication: Research foundations for intervention (pp. 35–62). Baltimore, MD: Brookes.

Zwaigenbaum L., Bryson S., Rogers T., Roberts W., Brian J., Szatmari P. (2005). Behavioral manifestations of autism in the first year of life. International Journal of Developmental Neuroscience, 23(2–3), 143–152.

Keywords:

children with language delays; key word signs; recommended practices; sign language

© 2015 Wolters Kluwer Health | Lippincott Williams & Wilkins.

Language Building

Learning ASL words is not the same as learning the language; learn the language beyond individual sign language words.

Contextual meaning: Some ASL signs in the dictionary do not mean the same thing in every context or ASL sentence; the meaning of a word or phrase can change with the sentence and context. You will see some examples in the video sentences.

Grammar: Many ASL words in the dictionary, especially verbs, are a «base» form; be aware that many of them can be grammatically inflected within ASL sentences. Some entries have sentence examples.

Sign production (pronunciation): A change or modification of one of the parameters of a sign, such as handshape, movement, palm orientation, location, or non-manual signals (e.g., facial expressions), can change the meaning, convey a subtle shade of meaning, or result in a mispronunciation.

Variation: Some ASL signs have regional (and generational) variations across North America. Common variations are included as much as possible, but for specifically local variations, interact with your local community to learn them.

Fingerspelling: When one language has no word for a concept, it may borrow a loanword from another language. In sign language, the manual alphabet is used to represent a word of the spoken/written language.

American Sign Language (ASL) is very much alive and as infinitely productive as any spoken language. The best way to use ASL correctly is to immerse yourself in daily language interactions and conversations with Deaf/Ameslan people (or ASLians).

Sentence building

Browse phrases and sentences to learn sign language, specifically vocabulary, grammar, and how its sentence structure works.

Sign Language Dictionary

According to online archives, this is the oldest sign language dictionary online, dating to 1997 (as DWW, which was renamed Handspeak in 2000).

Spoken word (from English, literally “the spoken word”) is a form of literary, and sometimes oratorical, art: an artistic performance in which texts, poems, stories, and essays are spoken rather than sung. The term is often used (especially in English-speaking countries) to refer to the corresponding non-musical CD releases.

Forms of spoken word include literary readings, readings of poems and stories, and lectures, as well as stream-of-consciousness pieces and the recently popular political and social commentary delivered by artists in an artistic or theatrical form. Spoken word artists are often poets and musicians. The voice is sometimes accompanied by music, but music is entirely optional in this genre.

Так же как и с музыкой, со «spoken word» выпускаются альбомы, видеорелизы, устраиваются живые выступления и турне.

Среди русскоязычных артистов в этом жанре можно отметить Дмитрия Гайдука и альбом Пожары, группы Сансара (Екб.) и рэп группу Marselle (L`One и Nel)

Некоторые представители жанра

(в алфавитном порядке)

  • Бликса Баргельд
  • Уильям Берроуз
  • Бойд Райс
  • Джелло Биафра
  • GG Allin
  • Дмитрий Гайдук
  • Аллен Гинзберг
  • Джек Керуак
  • Лидия Ланч
  • Евгений Гришковец
  • Егор Летов
  • Джим Моррисон
  • Лу Рид
  • Генри Роллинз
  • Патти Смит
  • Серж Танкян
  • Том Уэйтс
  • Дэвид Тибет
  • Levi The Poet
  • Listener

См. также

  • Декламационный стих
  • Мелодекламация
  • Речитатив
  • Художественное чтение

Introduction

Humans acquire language in an astonishingly diverse set of circumstances. Nearly everyone learns a spoken language from birth and a majority of individuals then follow this process by learning to read, an extension of their spoken language experience. In contrast to these two tightly-coupled modalities (written words are a visual representation of phonological forms, specific to a given language), there exists another language form that bears no inherent relationship to a spoken form: Sign language. When deaf children are raised by deaf parents and acquire sign as their native language from birth, they develop proficiency within the same time frame and in a similar manner to that of spoken language in hearing individuals (Anderson and Reilly, 2002; Mayberry and Squires, 2006). This is not surprising given that sign languages have sublexical and syntactic complexity similar to spoken languages (Emmorey, 2002; Sandler and Lillo-Martin, 2006). Neural investigations of sign languages have also shown a close correspondence between the processing of signed words in deaf (Petitto et al., 2000; MacSweeney et al., 2008; Mayberry et al., 2011; Leonard et al., 2012) and hearing native signers (MacSweeney et al., 2002, 2006) and spoken words in hearing individuals (many native signers are also fluent in a written language, although the neural basis of reading in deaf individuals is largely unknown). The predominant finding is that left anteroventral temporal, inferior prefrontal, and superior temporal cortex are the main loci of lexico-semantic processing in spoken/written (Marinkovic et al., 2003) and signed languages, as long as the language is learned early or to a high level of proficiency (Mayberry et al., 2011). However, it is unknown whether the same brain areas are used for sign language processing in hearing second language (L2) learners who are beginning to learn sign language. This is a key question for understanding the generalizability of L2 proficiency effects, and more broadly for understanding language mechanisms in the brain.

In contrast to the processing of word meaning, which occurs between ~200–400 ms after the word is seen or heard (Kutas and Federmeier, 2011), processing of the word form and sublexical structure appears to be modality-specific. Written words are encoded for their visual form primarily in left ventral occipitotemporal areas (McCandliss et al., 2003; Vinckier et al., 2007; Dehaene and Cohen, 2011; Price and Devlin, 2011). Spoken words are likewise encoded for their acoustic-phonetic and phonemic forms in left-lateralized superior temporal cortex, including the superior temporal gyrus/sulcus and planum temporale (Hickok and Poeppel, 2007; Price, 2010; DeWitt and Rauschecker, 2012; Travis et al., in press). Both of these processes occur within the first ~170 ms after the word is presented. While an analogous form encoding stage presumably exists with similar timing for sign language, no such process has been identified. The findings from monolingual users of spoken/written and signed languages to date suggest at least two primary stages of word processing: An early, modality-specific word form encoding stage (observed for spoken/written words and hypothesized for sign), followed by a longer latency response that converges on the classical left fronto-temporal language network where meaning is extracted and integrated independent of the original spoken, written, or signed form (Leonard et al., 2012).

Much of the world’s population is at least passingly familiar with more than one language, which provides a separate set of circumstances for learning and using words. Often, an L2 is acquired later with ultimately lower proficiency compared to the native language. Fluent, balanced speakers of two or more languages have little difficulty producing words in the contextually correct language, and they understand words as rapidly and efficiently as words in their native language (Duñabeitia et al., 2010). However, prior to fluent understanding, the brain appears to go through a learning process that uses the native language as a scaffold, but diverges in subtle, yet important ways from native language processing. The extent of these differences (both behaviorally and neurally) fluctuates in relation to the age at which L2 learning begins, the proficiency level at any given moment during L2 learning, the amount of time spent using each language throughout the course of the day, and possibly the modality of the newly-learned language (DeKeyser and Larson-Hall, 2005; van Heuven and Dijkstra, 2010). Thus, L2 learning provides a unique opportunity to examine the role of experience in how the brain processes words.

In agreement with many L2 speakers’ intuitive experiences, several behavioral studies using cross-language translation priming have found that proficiency and language dominance impact the extent and direction of priming (Basnight-Brown and Altarriba, 2007; Duñabeitia et al., 2010; Dimitropoulou et al., 2011). The most common finding is that priming is strongest in the dominant to non-dominant direction, although the opposite pattern has been observed (Duyck and Warlop, 2009). These results are consistent with models of bilingual lexical representations, including the Revised Hierarchical Model (Kroll and Stewart, 1994) and the Bilingual Interactive Activation + (BIA+) model (Dijkstra and van Heuven, 2002), both of which posit interactive and asymmetric connections between word (and sublexical) representations in both languages. The BIA+ model is particularly relevant here, in that it explains the proficiency-related differences as levels of activation of the integrated (i.e., shared) lexicon driven by the bottom-up input of phonological/orthographic and word-form representations.

An important question is how these behavioral proficiency effects manifest in neural activity patterns: Does the brain process less proficient words differently from more familiar words? Extensive neuroimaging and neurophysiological evidence supports these models, and shows a particularly strong role for proficiency in cortical organization (van Heuven and Dijkstra, 2010). Two recent studies that measured neural activity with magnetoencephalography (MEG) constrained by individual subject anatomy obtained with magnetic resonance imaging (MRI) found that, while both languages for Spanish-English bilinguals evoked activity in the classical left hemisphere fronto-temporal network, the non-dominant language additionally recruited posterior and right hemisphere regions (Leonard et al., 2010, 2011). These areas showed significant non-dominant > dominant activity during an early stage of word encoding (between ~100–200 ms), continuing through the time period typically associated with lexico-semantic processing (~200–400 ms). Crucially, these and other studies (e.g., van Heuven and Dijkstra, 2010) showed that language proficiency was the main factor in determining the recruitment of non-classical language areas. The order in which the languages were acquired did not greatly affect the activity.

These findings are consistent with the hemodynamic imaging and electrophysiological literatures. Using functional MRI (fMRI), proficiency-modulated differences in activity have been observed (Abutalebi et al., 2001; Chee et al., 2001; Perani and Abutalebi, 2005), and there is evidence for greater right hemisphere activity when processing the less proficient L2 (Dehaene et al., 1997; Meschyan and Hernandez, 2006). While fMRI provides spatial resolution on the order of millimeters, the hemodynamic response unfolds over the course of several seconds, far slower than the time course of linguistic processing in the brain. Electroencephalographic methods including event-related potentials (ERPs) are useful for elucidating the timing of activity, and numerous studies have found proficiency-related differences between bilinguals’ two languages. One measure of lexico-semantic processing, the N400 [or N400 m in MEG; (Kutas and Federmeier, 2011)] is delayed by ~40–50 ms in the L2 (Ardal et al., 1990; Weber-Fox and Neville, 1996; Hahne, 2001), and this effect is constrained by language dominance (Moreno and Kutas, 2005), in agreement with the behavioral and MEG studies discussed above. In general, greater occipito-temporal activity in the non-dominant language (particularly on the right), viewed in light of delayed processing, suggests that lower proficiency involves less efficient processing that requires recruitment of greater neural resources. While the exact neural coding mechanism is not known, this is a well-established phenomenon that applies to both non-linguistic (Carpenter et al., 1999) and high-level language tasks (St George et al., 1999) at the neuronal population level.

The research to date thus demonstrates two main findings: (1) In nearly all subject populations that have been examined, lexico-semantic processing is largely unaffected by language modality with respect to spoken, written, and signed language, and (2) lower proficiency involves the recruitment of a network of non-classical language regions that likewise appear to be modality-independent. In the present study, we sought to determine whether the effects of language proficiency extend to hearing individuals who are learning sign language as an L2. Although these individuals have extensive experience with a visual language form (written words), their highly limited exposure to dynamic sign language forms allows us to investigate proficiency (English vs. ASL) and modality (spoken vs. written, vs. signed) effects in a single subject population. We tested a group of individuals with a unique set of circumstances as they relate to these two factors. The subjects were undergraduate students who were native English speakers who began learning American Sign Language (ASL) as an L2 in college. They had at least 40 weeks of experience, and were the top academic performers in their ASL courses and hence able to understand simple ASL signs and phrases. They were, however, unbalanced bilinguals with respect to English/ASL proficiency. Although there have been a few previous investigations of highly proficient, hearing L2 signers (Neville et al., 1997; Newman et al., 2001), no studies have investigated sign language processing in L2 learners with so little instruction. Likewise, no studies have investigated this question using methods that afford high spatiotemporal resolution to determine both the cortical sources and timing of activity during specific processing stages. Similar to our previous studies on hearing bilinguals with two spoken languages, here we combined MEG and structural MRI to examine neural activity in these subjects while they performed a semantic task in two languages/modalities: spoken English, visual (written) English, and visual ASL.

While it is not possible to fully disentangle modality and proficiency effects within a single subject population, these factors have been systematically varied separately in numerous studies with cross-language and between-group comparisons (Marinkovic et al., 2003; Leonard et al., 2010, 2011, 2012), and are well-characterized in isolation. It is in this context that we examined both factors in this group of L2 learners. We hypothesized that a comparison between the magnitudes of MEG responses to spoken, written, and signed words would reveal a modality-specific word encoding stage between ~100–200 ms (left superior planar regions for spoken words, left ventral occipitotemporal regions for written words, and an unknown set of regions for signed words), followed by stronger responses for ASL (the lower proficiency language) in a more extended network of brain regions used to process lexico-semantic content between ~200–400 ms post-stimulus onset. These areas have previously been identified in spoken language L2 learners and include bilateral posterior visual and superior temporal areas (Leonard et al., 2010, 2011). Finding similar patterns for beginning ASL L2 learners would provide novel evidence that linguistic proficiency effects are generalizable, a particularly striking result given the vastly different sensory characteristics of spoken English and ASL. We further characterized the nature of lexico-semantic processing in this group by comparing the N400 effect across modalities, which would reveal differences in the loci of contextual integration for relatively inexperienced learners of a visual second language.

Materials and Methods

Participants

Eleven hearing native English speakers participated in this study (10 F; age range = 19.74–33.16 years, mean = 22.42). All were healthy adults with no history of neurological or psychological impairment, and had normal hearing and normal or corrected-to-normal vision (corrective lenses were worn in the MEG). All participants had at least four academic quarters (40 weeks) of instruction in ASL, having reached the highest level of instruction at either UCSD or Mesa College. Participants were either currently enrolled in a course taught in ASL or had been enrolled in such a course within the previous month, with the exception of one participant who had not taken an ASL course in the previous 4 months. Participants completed a self-assessment questionnaire that asked them to rate their ASL proficiency on a scale from 1 to 10. The average score was 7.1 ± 1.2 for ASL comprehension, 6.5 ± 1.9 for ASL production, 6.4 ± 1.6 for fingerspelling comprehension, and 6.8 ± 1.7 for fingerspelling production. Six participants reported using ASL on a daily basis at the time of enrollment in the study, while the remaining participants indicated weekly use (one participant indicated monthly use).

Participants gave written informed consent to participate in the study, and were paid $20/h for their time. This study was approved by the Institutional Review Board at the University of California, San Diego.

Stimuli and Procedure

In the MEG, participants performed a semantic decision task that involved detecting a match in meaning between a picture and a word. For each trial, subjects saw a photograph of an object for 700 ms, followed by a word that either matched (“congruent”) or mismatched (“incongruent”) the picture in meaning. Participants were instructed to press a button when there was a match; response hand was counterbalanced across blocks within subjects. Words were presented in blocks by language/modality for spoken English, written English, and ASL. Each word appeared once in the congruent and once in the incongruent condition, and did not repeat across modalities. All words were highly imageable concrete nouns that were familiar to the participants in both languages. Since no frequency norms exist for ASL, the stimuli were selected from ASL developmental inventories (Schick, 1997; Anderson and Reilly, 2002) and picture naming data (Bates et al., 2003; Ferjan Ramirez et al., 2013b). The ASL stimuli were piloted with four other subjects who had the same type of ASL instruction to confirm that they were familiar with the words. Stimulus length was the following: Spoken English mean = 473.98 ± 53.17 ms; Written English mean = 4.21 ± 0.86 letters; ASL video clips mean = 467.92 ± 62.88 ms. Written words appeared on the screen for 1500 ms. Auditory stimuli were delivered through earphones at an average amplitude of 65 dB SPL. Written and signed word videos subtended <5 degrees of visual angle on a screen in front of the subjects. For all stimulus types, the total trial duration varied randomly between 2600 and 2800 ms (700 ms picture + 1500 ms word container + 400–600 ms inter-trial interval).

Each participant completed three blocks of stimuli in each language/modality. Each block had 100 trials (50 stimuli in each of the congruent and incongruent conditions) for a total of 150 congruent and incongruent trials in each language/modality. The order of the languages/modalities was counterbalanced across participants. Prior to starting the first block in each language/modality, participants performed a practice run to ensure they understood the stimuli and task. The practice runs were repeated as necessary until subjects were confident in their performance (no subjects required more than one repetition of the practice blocks).
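For concreteness, the trial and block structure described above can be sketched in Python as follows. This is only an illustrative reconstruction of the design, not the authors' presentation script; the stimulus lists, function names, and the simple rotation used for counterbalancing modality order are assumptions.

    import random

    MODALITIES = ["spoken_english", "written_english", "asl"]

    def build_block(stimuli, n_trials=100):
        """Return one block: 50 congruent and 50 incongruent picture-word trials."""
        trials = []
        for i in range(n_trials):
            congruent = i < n_trials // 2
            trials.append({
                "picture_ms": 700,                    # photograph of an object
                "word_window_ms": 1500,               # spoken/written/signed word container
                "iti_ms": random.randint(400, 600),   # jittered inter-trial interval
                "congruent": congruent,
                "stimulus": stimuli[i % len(stimuli)],
            })
        random.shuffle(trials)                        # randomize trial order within the block
        return trials

    def build_session(subject_index, stimuli_by_modality):
        """Three blocks per language/modality; modality order rotated across subjects."""
        shift = subject_index % len(MODALITIES)
        order = MODALITIES[shift:] + MODALITIES[:shift]
        return {m: [build_block(stimuli_by_modality[m]) for _ in range(3)] for m in order}

With these numbers, each trial lasts 700 + 1500 + (400 to 600) ms, i.e., the 2600 to 2800 ms total stated above.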

MEG Recording

Participants sat in a magnetically shielded room (IMEDCO-AG, Switzerland) with the head in a Neuromag Vectorview helmet-shaped dewar containing 102 magnetometers and 204 gradiometers (Elekta AB, Helsinki, Finland). Data were collected at a continuous sampling rate of 1000 Hz with minimal filtering (0.1 to 200 Hz). The positions of four non-magnetic coils affixed to the subjects’ heads were digitized along with the main fiducial points (the nose, nasion, and preauricular points) for subsequent coregistration with high-resolution MR images. The average 3-dimensional Euclidean distance for head movement from the beginning to the end of the session was 7.38 mm (SD = 5.67 mm).
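A minimal sketch of reading such a recording with MNE-Python is shown below. The package choice and the file name are assumptions on our part and are not part of the original pipeline, which used the aMEG tools described in the next section; the parameters simply mirror the recording setup just described.

    import mne

    # Hypothetical file name; parameters follow the text above (Vectorview system,
    # 1000 Hz sampling, 0.1-200 Hz acquisition band).
    raw = mne.io.read_raw_fif("subject01_semantic_task_raw.fif", preload=True)

    print(raw.info["sfreq"])                              # expected: 1000.0
    grads = mne.pick_types(raw.info, meg="grad")          # 204 gradiometers
    mags = mne.pick_types(raw.info, meg="mag")            # 102 magnetometers
    print(len(grads), len(mags))

    # An equivalent band-pass, if it had not already been applied at acquisition.
    raw.filter(l_freq=0.1, h_freq=200.0)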

Anatomically-Constrained MEG (aMEG) Analysis

The data were analyzed using a multimodal imaging approach that constrains the MEG activity to the cortical surface as determined by high-resolution structural MRI (Dale et al., 2000). This noise-normalized linear inverse technique, known as dynamic statistical parametric mapping (dSPM) has been used extensively across a variety of paradigms, particularly language tasks that benefit from a distributed source analysis (Marinkovic et al., 2003; Leonard et al., 2010, 2011, 2012; Travis et al., in press), and has been validated by direct intracranial recordings (Halgren et al., 1994; McDonald et al., 2010).

The cortical surface of each participant was reconstructed from a T1-weighted structural MRI using FreeSurfer. The images were collected at the UCSD Radiology Imaging Laboratory with a 1.5T GE Signa HDx scanner using an eight-channel head coil (TR = 9.8 ms, TE = 4.1 ms, TI = 270 ms, flip angle = 8°, bandwidth = ± 15.63 kHz, FOV = 24 cm, matrix = 192 × 192, voxel size = 1.25 × 1.25 × 1.2 mm). All T1 scans were collected using online prospective motion correction (White et al., 2010). A boundary element method forward solution was derived from the inner skull boundary (Oostendorp and Van Oosterom, 1992), and the cortical surface was downsampled to ~2500 dipole locations per hemisphere (Dale et al., 1999; Fischl et al., 1999). The orientation-unconstrained MEG activity of each dipole was estimated every 4 ms, and, for the localization of the congruent-incongruent subtraction, the noise sensitivity at each location was estimated from the average pre-stimulus baseline from −190 to −20 ms.
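The steps in this paragraph (FreeSurfer surface, downsampled source space, inner-skull BEM, baseline noise estimate, orientation-unconstrained dSPM) could be approximated with MNE-Python roughly as follows. This is a sketch under stated assumptions, not the authors' actual dSPM implementation: the subject name, file paths, and the `epochs` object (produced as in the artifact-rejection sketch below) are hypothetical, and 'ico4' spacing (2562 sources per hemisphere) stands in for the ~2500 dipoles mentioned above.

    import mne
    from mne.minimum_norm import make_inverse_operator, apply_inverse

    subject = "subject01"                          # hypothetical FreeSurfer subject
    subjects_dir = "/path/to/freesurfer/subjects"  # hypothetical path

    # Cortical source space downsampled to roughly 2500 dipoles per hemisphere.
    src = mne.setup_source_space(subject, spacing="ico4", subjects_dir=subjects_dir)

    # Single-shell boundary element model derived from the inner skull surface.
    bem_model = mne.make_bem_model(subject, ico=4, conductivity=(0.3,),
                                   subjects_dir=subjects_dir)
    bem = mne.make_bem_solution(bem_model)

    fwd = mne.make_forward_solution(epochs.info, trans="subject01-trans.fif",
                                    src=src, bem=bem, meg=True, eeg=False)

    # Noise estimate from the average pre-stimulus baseline (-190 to -20 ms).
    noise_cov = mne.compute_covariance(epochs, tmin=-0.19, tmax=-0.02)

    # Orientation-unconstrained, noise-normalized (dSPM) inverse estimate.
    inv = make_inverse_operator(epochs.info, fwd, noise_cov, loose=1.0, depth=None)
    stc = apply_inverse(epochs.average(), inv, lambda2=1.0 / 9.0, method="dSPM")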

The data were inspected for bad channels (channels with excessive noise, no signal, or unexplained artifacts), which were excluded from all further analyses. Additionally, trials with large (>3000 fT for gradiometers) transients were rejected. Blink artifacts were removed using independent components analysis (Delorme and Makeig, 2004) by pairing each MEG channel with the electrooculogram (EOG) channel, and rejecting the independent component that contained the blink. On average, fewer than five trials were rejected for each condition.
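A hedged sketch of the epoching, rejection, and blink-removal steps follows, again using MNE-Python as a stand-in for the authors' pipeline. The event codes, the EOG channel name, and the conversion of the 3000 fT gradiometer threshold into MNE's units (interpreted here as 3000 fT/cm) are assumptions.

    import mne
    from mne.preprocessing import ICA

    # Hypothetical trigger codes marking word onset in each condition.
    events = mne.find_events(raw, stim_channel="STI 014")
    event_id = {"congruent": 1, "incongruent": 2}

    # Epoch around word onset and drop trials with large gradiometer transients
    # (3000 fT/cm expressed in T/m; the exact unit is an assumption).
    epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.8,
                        baseline=(None, 0), reject=dict(grad=3000e-13),
                        preload=True)

    # Remove the blink component identified via the EOG channel with ICA.
    ica = ICA(n_components=30, random_state=0)
    ica.fit(raw, picks="meg")
    eog_inds, _ = ica.find_bads_eog(raw, ch_name="EOG 061")  # channel name assumed
    ica.exclude = eog_inds
    ica.apply(epochs)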

Individual participant dSPMs were constructed from the averaged data in the trial epoch for each condition using only data from the gradiometers, and then these data were combined across subjects by taking the mean activity at each vertex on the cortical surface and plotting it on an average brain. Vertices were matched across subjects by morphing the reconstructed cortical surfaces into a common sphere, optimally matching gyral-sulcal patterns and minimizing shear (Sereno et al., 1996; Fischl et al., 1999).
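The morphing and group-averaging step can be sketched as follows; `subject_stcs` (one dSPM source estimate per participant) and `subjects_dir` are assumed to come from the earlier steps, and 'fsaverage' stands in for the study's average surface.

    import numpy as np
    import mne

    morphed = []
    for subject, stc in subject_stcs.items():
        # Morph each subject's source estimate onto the common average brain.
        morph = mne.compute_source_morph(stc, subject_from=subject,
                                         subject_to="fsaverage",
                                         subjects_dir=subjects_dir)
        morphed.append(morph.apply(stc))

    # Group map: mean dSPM value at each matched vertex across subjects.
    group_stc = morphed[0].copy()
    group_stc.data = np.mean([s.data for s in morphed], axis=0)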

All statistical comparisons were made on region of interest (ROI) timecourses from these group data. ROIs were based on a separate data set not included in this study that compared signed and spoken word processing in congenitally deaf and hearing subjects using the same task presented here (Figure 1; Leonard et al., 2012). These ROIs were originally drawn on the grand average activity across both deaf and hearing participants, and thus are not biased toward either signed or spoken words. In the 80–120 ms time window, we specifically tested bilateral planum temporale (PT) and superior temporal sulcus (STS) because these areas showed significant responses to spoken words, and are known to be involved in early word encoding in the auditory modality (Uusvuori et al., 2008; Travis et al., in press). For the 150–200 ms time window, we were specifically interested in ventral occipitotemporal (vOT) cortex because it is involved in written word form encoding (Vinckier et al., 2007). While there are no previous studies of this stage for signed words, we selected bilateral intraparietal sulcus (IPS) because it has been implicated in some studies of non-temporally specific sign processing (MacSweeney et al., 2002; Emmorey et al., 2005), and because it showed the strongest activity during this time window. For the lexico-semantic time window from 300 to 400 ms, we tested all ten bilateral ROIs. With the exceptions of IPS and lateral occipitotemporal (LOT) cortex, these areas are typically involved in lexico-semantic processing, including anteroventral temporal areas that are hypothesized to be largely supramodal. We also included LOT because it has been implicated as a lexico-semantic area that is modulated by proficiency (Leonard et al., 2010, 2011).
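As an illustration of the ROI analysis just described, the sketch below extracts a label timecourse, averages it within a time window, and runs a paired comparison. The label file name and the per-modality lists of source estimates are hypothetical; the specific ROIs and windows used in the study are those listed above.

    import numpy as np
    import mne
    from scipy import stats

    # Hypothetical ROI label drawn on the average surface (e.g., left PT).
    label = mne.read_label("lh.planum_temporale.label", subject="fsaverage")

    def roi_window_mean(stc, label, tmin, tmax):
        """Mean activity across the label's vertices within [tmin, tmax] seconds."""
        roi_tc = stc.in_label(label).data.mean(axis=0)   # average over vertices
        mask = (stc.times >= tmin) & (stc.times <= tmax)
        return roi_tc[mask].mean()

    # Example contrast in the 80-120 ms window: auditory vs. signed words.
    aud = np.array([roi_window_mean(s, label, 0.08, 0.12) for s in stcs_auditory])
    sign = np.array([roi_window_mean(s, label, 0.08, 0.12) for s in stcs_signed])
    t_val, p_val = stats.ttest_rel(aud, sign)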


Figure 1. Diagram of bilateral ROI locations. These ROIs were selected based on an independent data set that compared sign in native deaf individuals to speech in hearing individuals (Leonard et al., 2012). 1, Inferior Prefrontal; 2, Anterior Insula; 3, Planum Temporale; 4, Superior Temporal Sulcus; 5, Posterior STS; 6, Intraparietal Sulcus; 7, Lateral Occipito-temporal cortex; 8, Temporal Pole; 9, Inferior Temporal; 10, Ventral Occipito-temporal cortex.

Results

Reaction Time and Accuracy

The following analyses use these abbreviations: A = auditory words, W = written words, and S = signed words. Participants performed within ranges similar to those reported for native speakers and signers on the semantic decision task, in both reaction time and accuracy (Leonard et al., 2012; Travis et al., in press). Table 1 shows the averages and standard deviations for each language/modality. A one-way ANOVA comparing reaction times across modalities revealed a significant effect of modality [F(2, 30) = 12.21, p < 0.0002]. Consistent with the fact that English was the subjects’ native and dominant language and ASL was a recently learned L2, reaction times were significantly faster for A than for S [t(10) = 6.85, p < 0.0001], and for W than for S [t(10) = 8.22, p < 0.0001]. A and W were not significantly different. Similarly, there was a significant effect of modality for accuracy on the semantic task [F(2, 30) = 17.31, p < 0.00001]. Participants were more accurate for A than for S [t(10) = 5.13, p < 0.0001], and for W than for S [t(10) = 4.13, p = 0.002], although accuracy for S was still quite good (nearly 90%). Accuracy for A and W was not significantly different.
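The behavioral comparisons reported here follow a standard one-way ANOVA plus paired t-tests; a minimal SciPy sketch is shown below. The data file and its layout (one row per subject, columns for A, W, and S) are assumptions for illustration only.

    import numpy as np
    from scipy import stats

    # Hypothetical file: mean reaction time per subject (rows) for A, W, S (columns).
    rt = np.loadtxt("reaction_times.csv", delimiter=",")   # shape (11, 3)

    # One-way ANOVA across modalities, as in the reported F(2, 30).
    F, p = stats.f_oneway(rt[:, 0], rt[:, 1], rt[:, 2])

    # Planned paired comparisons (df = 10 with 11 subjects).
    t_as, p_as = stats.ttest_rel(rt[:, 0], rt[:, 2])   # A vs. S
    t_ws, p_ws = stats.ttest_rel(rt[:, 1], rt[:, 2])   # W vs. S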


Table 1. Mean reaction time and accuracy data across languages/modalities.

aMEG Results Summary

There were distinct patterns of neural activity related to the language/modality subjects saw or heard and language proficiency. These effects began during the earliest stages of word encoding (~100 ms for auditory, and ~150 ms for written and signed words), and continued through lexico-semantic encoding (~300–400 ms). Table 2 summarizes the main findings by time window, and the following sections statistically describe the effects shown in the table and figures. Figure 2 shows sensor-level data from a single representative subject.


Table 2. Summary of MEG effects by time window.


Figure 2. Individual subject waveforms showing sensor-level language/modality effects. (A) Congruent-Incongruent waveforms for each modality for a left temporal channel (top) show greater responses for auditory (“AUD”; blue) and written (“WRT”; red) than for signed (“SIGN”; black) words. In a right parietal channel (bottom), there are no strong responses in any condition. (B) Grand average waveforms for each modality for a left temporal channel (top) show an early word encoding peak at ~100 ms for auditory words, followed by overlap between all three conditions at ~400 ms. In the same right parietal channel (bottom), signed words evoke an early and persistent response that is stronger than the responses for both English modalities.

aMEG—80−120 ms (Early Word Encoding)

Previous investigations have identified an evoked potential peak at ~100 ms that shows selectivity for auditory speech stimuli compared to sensory controls in primarily left superior temporal and superior planar cortex (Uusvuori et al., 2008; Travis et al., in press). We tested the MEG response during this window in two areas to determine whether an auditory-selective modality effect was present. We found a main effect of modality in left planum temporale (PT) [F(1, 10) = 3.58, p = 0.047], and in left superior temporal sulcus (STS) [F(1, 10) = 6.22, p = 0.008] (Figure 3). The effect in PT was driven by trending A>W [t(10) = 1.95, p = 0.079] and A>S [t(10) = 1.93, p = 0.083] responses, and likewise for STS [t(10) = 2.77, p = 0.02; t(10) = 2.37, p = 0.039] (Figure 4). Similar effects were obtained in right PT [F(1, 10) = 6.15, p = 0.008] and STS [F(1, 10) = 10.74, p = 0.001]. While the right STS effect was driven by an A>W [t(10) = 4.00, p = 0.003] and A>S [t(10) = 2.81, p = 0.018] response, the right PT effect showed an overall smaller response to W compared with A [t(10) = 3.32, p = 0.008] and S [t(10) = 3.00, p = 0.013]. Thus, during the 80–120 ms time window, the brain showed a preferential response for auditory words primarily in superior temporal areas.


Figure 3. Grand average group dSPMs during the early encoding time window from 80 to 120 ms. (A) Auditory words (“AUD”) show strong responses in bilateral PT and STS. (B) Written (“WRT”) and (C) signed (“SIGN”) words show sensory processing at the occipital pole. F-values on the color bars represent signal-to-noise ratios.


Figure 4. ROI timecourses for the grand average across each language/modality. At 80–120 ms in left PT, auditory words (blue lines) show a strong modality-specific peak. From 150 to 200 ms, written words (red lines) show a word encoding peak in left vOT, and signed words (black lines) show a word encoding effect in right IPS. During a later time window from 300 to 400 ms (thick gray bars), all conditions show similar responses in most left hemisphere regions, but signed words show much stronger responses in right hemisphere regions, including LOT, IPS, and PT. Asterisks represent statistically significant differences. Abbreviations: IPS, intraparietal sulcus; LOT, lateral occipitotemporal; PT, planum temporale; STS, superior temporal sulcus; TP, temporal pole; vOT, ventral occipitotemporal.

aMEG—150−200 ms (Early Word Encoding)

The early word encoding response to written words occurs later than for auditory words, and is centered in a left posterior ventral occipitotemporal (vOT) region. During a window from 150 to 200 ms, we tested for a W>A and W>S effect in vOT (Figure 5). In the left hemisphere, there was a main effect of modality [F(1, 10) = 4.57, p = 0.023], driven by W>A [t(10) = 4.58, p = 0.001] and W>S [t(10) = 2.36, p = 0.04] responses (Figure 4). The homologous right hemisphere vOT region did not show significant effects (ps > 0.5).


Figure 5. Grand average group dSPMs during the early encoding time window from 150 to 200 ms. (A) Auditory words (“AUD”) continue to evoke activity in bilateral superior temporal cortex, while (B) Written words (“WRT”) show a modality-specific peak in left vOT. (C) Signed words (“SIGN”) show a modality-specific peak in right IPS. F-values on the color bars represent signal-to-noise ratios.

Given that there are early word encoding processes for auditory and written words, it is reasonable to ask whether such a process exists for signed words. We examined the response to signs from 150 to 200 ms, when we expect post-sensory, but pre-lexical processing to occur. The dSPM timecourses in Figure 4 revealed a S>A and S>W pattern in right intraparietal sulcus (IPS), and indeed this region showed a marginal main effect of modality [F(1, 10) = 3.20, p = 0.062]. Post-hoc tests revealed a significant S>W response [t(10) = 2.51, p = 0.031], but the differences between W & A and S & A were not significant (Figure 5).

aMEG—300−400 ms (Lexico-Semantic Processing)

Based on results from a previous study that compared sign processing in deaf native signers to spoken word processing in hearing native English speakers using the same task presented here (Leonard et al., 2012), and on previous work examining both early and late processing of spoken vs. written words (Marinkovic et al., 2003), we selected ten ROIs to investigate sensitivity to semantic congruity: Inferior prefrontal cortex, anterior insula, planum temporale, superior temporal sulcus, posterior superior temporal sulcus, intraparietal sulcus, lateral occipitotemporal cortex, temporal pole, inferior temporal cortex, and ventral occipitotemporal cortex (Leonard et al., 2012). For each language and modality, A, W, and S, we calculated dSPMs of the subtraction of incongruent-congruent words, extracted timecourses for each subtraction condition, and tested for within-subject effects of language and modality. Since this procedure isolates brain activity evoked by incongruent vs. congruent trials, it follows that any significant activity indicates the localization of N400-like semantic congruity effects.
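A sketch of the congruity subtraction is shown below, assuming the epoching and inverse operator from the earlier sketches; combining the evoked responses with weights [1, -1] gives the incongruent-congruent difference before source localization. The variable names are assumptions.

    import mne
    from mne.minimum_norm import apply_inverse

    # Incongruent minus congruent evoked response for one language/modality.
    evoked_incon = epochs["incongruent"].average()
    evoked_con = epochs["congruent"].average()
    evoked_diff = mne.combine_evoked([evoked_incon, evoked_con], weights=[1, -1])

    # Noise-normalized (dSPM) localization of the N400-like congruity effect.
    stc_diff = apply_inverse(evoked_diff, inv, lambda2=1.0 / 9.0, method="dSPM")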

We calculated an omnibus ANOVA with three within-subject factors: Language/modality (3), ROI (10), and hemisphere (2). There were highly significant main effects of language/modality [F(2, 20) = 6.96, p = 0.005], ROI [F(9, 90) = 6.76, p < 0.0001], and hemisphere [F(1, 10) = 10.07, p = 0.01]. There were significant interactions between language/modality and ROI [F(18, 180) = 2.35, p = 0.002], and language/modality and hemisphere [F(2, 20) = 9.75, p = 0.001], but no three-way interaction.
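A repeated-measures ANOVA with these three within-subject factors could be run, for example, with statsmodels' AnovaRM; the long-format table and its column names below are assumptions about how the ROI values might be organized, not the authors' actual analysis code.

    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # One row per subject x modality x ROI x hemisphere, with the mean
    # 300-400 ms subtraction value in the "dspm" column (hypothetical file).
    df = pd.read_csv("roi_subtraction_values_300_400ms.csv")

    aov = AnovaRM(df, depvar="dspm", subject="subject",
                  within=["modality", "roi", "hemisphere"]).fit()
    print(aov.anova_table)   # main effects and the two-way interactions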

Based on a priori hypotheses about specific ROIs from previous studies (see Materials and Methods), we tested a series of planned comparisons across modalities. Overall, there was a highly similar response to A, W, and S words (Figure 6). While A and W showed semantic effects of a similar magnitude, these were weaker for S across most regions (Figure 7). In the left hemisphere, there was a main effect in inferior frontal cortex [F(1, 10) = 9.92, p = 0.001], driven by A>S [t(10) = 3.81, p = 0.003] and W>S [t(10) = 3.29, p = 0.008] responses. Similarly, in inferior temporal (IT) cortex, there was an effect of modality [F(1, 10) = 5.94, p = 0.009] with A>S [t(10) = 2.40, p = 0.038] and W>S [t(10) = 3.50, p = 0.006]. In posterior STS (pSTS), there was a significant difference [F(1, 10) = 4.97, p = 0.018], driven primarily by a W>S response [t(10) = 3.09, p = 0.011] and a trend for A>S [t(10) = 1.98, p = 0.075]. Superior temporal regions showed main effects of modality where all three conditions differed significantly from one another [PT: F(1, 10) = 15.03, p < 0.0001; STS: F(1, 10) = 24.71, p < 0.0001]. None of the other five left hemisphere ROIs showed significant differences between language/modality effects.


Figure 6. Congruent-Incongruent subtraction dSPMs during the late lexico-semantic time window from 300 to 400 ms. (A–C) All three conditions show similar patterns of activity in predominantly left fronto-temporal regions, including PT, STS, inferior frontal, and anteroventral temporal. (C) Signed words (“SIGN”) show overall smaller subtraction effects. F-values on the color bar represent signal-to-noise ratios.


Figure 7. ROI timecourses for the Congruent-Incongruent subtraction across each language/modality. From 300 to 400 ms (thick gray bars), auditory (blue lines) and written (red lines) words evoke stronger effects than signed words (black lines). This difference is most prominent in the classical left fronto-temporal language network. Asterisks represent statistically significant differences.

In the right hemisphere, S elicited smaller responses in inferior frontal cortex [F(1, 10) = 4.70, p = 0.021], with W>S [t(10) = 2.66, p = 0.024] and a marginal A>S difference [t(10) = 2.14, p = 0.058]. In STS, there was a main effect of modality [F(1, 10) = 5.68, p = 0.011], driven primarily by a strong A>S response [t(10) = 3.51, p = 0.006] and a trend for A>W [t(10) = 1.88, p = 0.09]. None of the other eight right hemisphere ROIs showed significant language/modality effects. Thus, lexico-semantic congruity effects occurred in similar areas across languages/modalities, but with a smaller magnitude for signed words.

aMEG—300−400 ms (Overall Responses)

To understand which regions responded to words in each language/modality, but which were not necessarily influenced by semantic context, we also examined the grand average responses of congruent and incongruent trials together at 300–400 ms. While the previous analysis demonstrated small congruity effects for signed words, examination of the grand average revealed a different pattern (Figure 8). In the same ROIs, we tested these grand averages for language/modality effects (Figure 4).


Figure 8. Grand average group dSPMs during the late lexico-semantic time window from 300 to 400 ms. (A–C) All three conditions show a similar pattern of activity in bilateral regions. (C) Signed words (“SIGN”) show much stronger activity, particularly in the right hemisphere. F-values on the color bars represent signal-to-noise ratios.

In the left hemisphere, inferior frontal cortex showed a main effect of language/modality [F(1, 10) = 3.65, p = 0.044] with S>W [t(10) = 2.36, p = 0.04] and S>A [t(10) = 2.76, p = 0.02]. IT showed a similar marginal effect [F(1, 10) = 3.35, p = 0.056], driven by a marginal S>W effect [t(10) = 2.21, p = 0.052] and a trend for S>A [t(10) = 2.05, p = 0.067]. None of the other eight left hemisphere ROIs showed significant language/modality effects.

In the right hemisphere, we observed widespread effects where signs evoked greater activity than auditory or written words. Inferior frontal cortex showed this pattern [F(1, 10) = 10.78, p = 0.001] with S>W [t(10) = 3.19, p = 0.01] and S>A [t(10) = 3.85, p = 0.003]. The same pattern was found for IPS [F(1, 10) = 19.81, p < 0.0001] with S>W [t(10) = 7.03, p < 0.0001] and S>A [t(10) = 3.85, p = 0.003]. In lateral occipitotemporal (LOT) cortex, there was a main effect of language/modality [F(1, 10) = 6.21, p = 0.008] with S>W [t(10) = 2.89, p = 0.016] and S>A [t(10) = 2.62, p = 0.026]. Similarly, language/modality effects were apparent in PT ([F(1, 10) = 5.09, p = 0.016] with S>W [t(10) = 2.76, p = 0.02] and S>A [t(10) = 2.44, p = 0.035]) and in pSTS ([F(1, 10) = 4.97, p = 0.018] with S>W [t(10) = 3.38, p = 0.007] and S>A [t(10) = 2.01, p = 0.072]). The other five right hemisphere ROIs did not show significant language/modality effects. To summarize, although all languages/modalities showed similar lexico-semantic congruity effects in the classical left fronto-temporal language network, the overall response magnitude to signed words was greater primarily in right hemisphere regions.

It is possible that the overall aMEG responses contain a bias when looking at between-modality differences if the effects are not of similar magnitudes for both congruent and incongruent trials. Therefore, we conducted an additional analysis that compared S, W, and A words for congruent and incongruent trials separately (Figure 9). The one-way ANOVAs for each ROI showed that there were significant differences for congruent trials in right IPS [F(2, 30) = 5.33, p = 0.01] and right LOT [F(2, 30) = 5.68, p = 0.008]. For incongruent trials, the same pattern was significant for the following right hemisphere ROIs: IPS [F(2, 30) = 20.07, p < 0.0001], LOT [F(2, 30) = 6.36, p = 0.005], IFG [F(2, 30) = 10.37, p = 0.0004], PT [F(2, 30) = 4.84, p = 0.015], and pSTS [F(2, 30) = 5.116, p = 0.01]. Thus, the right hemisphere effects we observed with the combined congruent/incongruent grand average are observed consistently in analyses with only incongruent trials, and also for only congruent trials in two ROIs.


Figure 9. Mean dSPM values for all 10 right hemisphere ROIs, analyzed separately for congruent and incongruent trials. (A) One-Way ANOVAs testing for differences across modalities on congruent trials were significant in IPS and LOT. (B) Effects for incongruent trials were significant in IFG, IPS, LOT, PT, and pSTS. *p < 0.05; **p < 0.01; ***p < 0.001.

Discussion

In the present study we examined the spatiotemporal dynamics of word processing across spoken and written English and ASL in a group of hearing, English native speakers who were beginning L2 learners of ASL. During an early word encoding stage (~100 ms for spoken English, and ~150 ms for written English and ASL), words evoked activity in modality-specific brain regions. Responses to English words in the auditory and visual modalities conformed to previous findings in superior temporal and ventral occipitotemporal areas, respectively. ASL signs evoked a strong response in right IPS, although the activity was only marginally significantly larger than for written and spoken English. During a later time window associated with lexico-semantic processing, a distributed network of bilateral regions responded to a semantic congruity manipulation. Several classical left fronto-temporal language areas showed stronger modulation for English (the native language) in spoken and written modalities relative to the L2, ASL. However, when we examined the overall activity during this time window, by disregarding congruity effects, signed words evoked greater activity than both spoken and written words in a network of mostly right hemisphere regions. See Table 2 for a summary of the results.

The early modality-specific word encoding responses are consistent with a large number of previous studies using a variety of methodologies. For written words, we observed a peak in left vOT, a region that has been shown to be important for reading, and specifically for constructing written word-forms (McCandliss et al., 2003; Vinckier et al., 2007; Dehaene and Cohen, 2011; Price and Devlin, 2011). Although there is evidence that it is a multi-modal region (Price and Devlin, 2003), it does seem to play an important role in encoding written words. In addition to the location, the peak timing of the activity in this region at ~170 ms is consistent with previous electrophysiological and neuroimaging studies (McCandliss et al., 2003; McDonald et al., 2010). Additionally, although written and signed words are perceived through the visual modality, signs did not evoke activity in this region in this group of beginning L2 learners of ASL. It is therefore possible that early encoding activity in left vOT is specific to static written word forms.

Also consistent with previous studies, we observed that areas typically associated with encoding spoken words include a bilateral network of superior temporal and superior planar regions (Hickok and Poeppel, 2007; Price, 2010). Many of these areas are sensitive to subtle sublexical and phonetic manipulations (Uusvuori et al., 2008) including the presence of the fundamental frequency (Parviainen et al., 2005) and alterations in voice-onset time (Frye et al., 2007). Specific neural populations within superior temporal cortex have been found to encode categorical and phoneme-selective information within the first ~150 ms (Chang et al., 2010; Travis et al., in press). While the mechanisms and specific representations in superior temporal areas are unknown, research suggests that between ~60–150 ms, the brain encodes spoken word information at a sublexical level. The timing and location of the peak for spoken words in the present study is consistent with the majority of this previous work.

To date, there have not been any investigations into an analogous stage for sign encoding. In part, this may be due to the fact that most previous studies have used hemodynamic methods that do not afford sufficient temporal resolution to distinguish between early and late processing stages. During a time window analogous to the well-established encoding processes for written and spoken words, ASL signs showed an activity peak in right IPS, which was only marginally stronger than for English words. It is unclear whether such activity reflects linguistic encoding (analogous to sublexical amplitude envelope information in spoken language, for example) or quasi-gestural sensory characteristics related to space and motion (Decety and Grèzes, 1999; Grossman and Blake, 2002; Malaia et al., 2012). The early right IPS activity has multiple possible interpretations, and may not be related to the fact that the stimuli were signed words, but rather to the proficiency of the participants in ASL. While prior studies have not found right IPS to be modulated by language proficiency, the participants in those studies have typically possessed higher proficiency in L2 (Leonard et al., 2010, 2011). In case studies of deaf signers with scant proficiency in any language, we have observed right IPS activation later at 300–350 ms (Ferjan Ramirez et al., 2013a). It is possible that modality and proficiency interact in parietal regions, perhaps reflecting a neural processing strategy that is uniquely useful for the dynamic visual linguistic content of sign languages. To fully disentangle these effects, and to unambiguously identify the analogous word encoding stage for sign languages, it will be necessary to conduct studies with native deaf and hearing signers and low proficiency deaf and hearing signers using carefully controlled stimuli that separate linguistic and sensory levels of processing [similar to recent work with spoken words (Travis et al., in press)]. These experiments are yet to be carried out, and the present results provide both anatomical and functional brain regions that can be used to test the interaction between proficiency and modality.

The results for word meaning and higher-level language encoding processes were more definitive and demonstrated that proficiency effects translate across spoken, written, and signed words. Beginning at ~200 ms, all three word types were processed in a highly similar left-lateralized network including inferior frontal, superior temporal, and anteroventral temporal areas. These regions have been hypothesized to provide core support for lexico-semantic encoding at a supramodal level (Marinkovic et al., 2003; Lambon Ralph et al., 2010), and are the main neural generators of the N400 response (Halgren et al., 1994; Marinkovic et al., 2003), even in infants who are only beginning to acquire language (Travis et al., 2011). These areas all showed semantic modulation in the congruent/incongruent picture-word matching task (albeit to a lesser extent for ASL). Analyses of the overall magnitude of the response to words in each language and modality showed that spoken, written, and signed words all evoke strong activity in these regions, consistent with previous intracranial recordings showing locally-generated differential responses to different semantic categories of words in the anterior temporal lobe, regardless of modality (Chan et al., 2011). Previous hemodynamic studies have found activity related to lexico-semantic processing in these areas for sign language (Petitto et al., 2000; MacSweeney et al., 2008; Mayberry et al., 2011), and N400 responses have been observed to sign (Neville et al., 1997). To our knowledge, however, this is the first demonstration of such activation patterns after so little L2 instruction in ASL. These findings provide strong support for the hypothesis that these areas, especially in the anterior temporal lobe, function as supramodal hubs for high-level semantic representations (Patterson et al., 2007; Visser and Lambon Ralph, 2011), and seem difficult to explain as reflecting either knowledge of unique entities or social conceptual knowledge (Simmons and Martin, 2009).

The different patterns we observed for the congruent-incongruent subtraction and the grand average of all activity provide a window into the nature of lexico-semantic processing. Up to this point, we have focused on modality-specific effects. We now turn to how the design of this study provides insights into the role of language experience on the neural processing of words. The participants had an average of almost 23 years of extensive experience with spoken English, approximately 19 years of experience with written English, but only a few months of primarily classroom instruction in ASL. Proficiency has profound effects on neural activity, and even on brain structure (Maguire et al., 2000). Numerous studies have demonstrated experience-related differences in bilingual language processing (Abutalebi et al., 2001; Chee et al., 2001; Perani and Abutalebi, 2005; Leonard et al., 2010, 2011; van Heuven and Dijkstra, 2010). These studies further show that a surprisingly small amount of L2 exposure is required to elicit automatic lexico-semantic processing (McLaughlin et al., 2004). The present results demonstrate that this is true for beginning L2 learning of ASL as well.

As would be expected, an examination of the lexico-semantic effects in the present study indicates that proficiency-modulated activity also occurs in sign processing. In particular, we found that ASL words evoked greater grand average activity than both spoken and written English in a network of mostly right hemisphere regions (the two left hemisphere regions that were significant in the grand average, IFG and IT, were not significant when congruent and incongruent trials were analyzed separately). It is striking that some of these areas (right LOT, pSTS, and IFG) are nearly identical to those that showed a non-dominant > dominant pattern in hearing Spanish-English bilinguals (Leonard et al., 2010, 2011). The results for L2 ASL learners provide additional evidence that these areas play an important role in processing words in a less proficient language. The present results, together with our previous findings, demonstrate that word processing in a less proficient L2 shows increased activity in these regions (particularly for semantically incongruent words) relative to word processing in the native language. The recruitment of these areas for both spoken and sign language L2 processing indicates that they function as an additional supramodal resource for processing meaning in a non-dominant language.

The dissociation between semantic congruity and overall activity across languages provides a finer-grained characterization of how proficiency affects neural processing. The English > ASL congruity effects in left fronto-temporal areas could suggest shallower or less complete processing of semantic content in the non-dominant language. The slower reaction times and lower accuracy for ASL support this hypothesis. However, the fact that subjects performed the task relatively well indicates that some neural processing strategy was used successfully. The ASL > English responses in the grand average MEG activity across both hemispheres suggest that additional neural resources were recruited to perform the task, although perhaps not at the same semantic depth. The overall stronger ASL > English differences for incongruent words compared to congruent words support this hypothesis. As these L2 learners improve their ASL proficiency, we predict that the grand average activity will decrease to English-like levels and that the congruent/incongruent difference will increase. This represents a testable hypothesis for tracking neural processing strategies during development (Schlaggar et al., 2002; Brown et al., 2005) and later language acquisition in a bilingual context.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

Project funded by NSF grant BCS-0924539, NIH grant T-32 DC00041, an innovative research award from the Kavli Institute for Brain and Mind, NIH grant R01DC012797, and a UCSD Chancellor’s Collaboratories grant. We thank D. Hagler, A. Lieberman, K. Travis, T. Brown, P. Lott, M. Hall, and A. Dale for assistance.

References

Abutalebi, J., Cappa, S. F., and Perani, D. (2001). The bilingual brain as revealed by functional neuroimaging. Bilingual. Lang. Cogn. 4, 179–190. doi: 10.1017/S136672890100027X


Anderson, D., and Reilly, J. (2002). The MacArthur communicative development inventory: normative data for american sign language. J. Deaf Stud. Deaf Educ. 7, 83–106. doi: 10.1093/deafed/7.2.83


Ardal, S., Donald, M. W., Meuter, R., Muldrew, S., and Luce, M. (1990). Brain responses to semantic incongruity in bilinguals. Brain Lang. 39, 187–205. doi: 10.1016/0093-934X(90)90011-5


Basnight-Brown, D. M., and Altarriba, J. (2007). Differences in semantic and translation priming across languages: the role of language direction and language dominance. Mem. Cognit. 35, 953–965. doi: 10.3758/BF03193468


Bates, E., D’amico, S., Jacobsen, T., Szekely, A., Andonova, E., Devescovi, A., et al. (2003). Timed picture naming in seven languages. Psychon. Bull. Rev. 10, 344–380. doi: 10.3758/BF03196494


Brown, T. T., Lugar, H. M., Coalson, R. S., Miezin, F. M., Petersen, S. E., and Schlaggar, B. L. (2005). Developmental changes in human cerebral functional organization for word generation. Cereb. Cortex 15, 275–290. doi: 10.1093/cercor/bhh129


Carpenter, P. A., Just, M. A., Keller, T. A., and Eddy, W. (1999). Graded functional activation in the visuospatial system with the amount of task demand. J. Cogn. Neurosci. 11, 9–24. doi: 10.1162/089892999563210


Chan, A. M., Baker, J. M., Eskandar, E., Schomer, D., Ulbert, I., Marinkovic, K., et al. (2011). First-pass selectivity for semantic categories in human anteroventral temporal lobe. J. Neurosci. 32, 9700–9705.


Chang, E. F., Rieger, J. W., Johnson, K., Berger, M. S., Barbaro, N. M., and Knight, R. T. (2010). Categorical speech representation in human superior temporal gyrus. Nat. Neurosci. 13, 1428–1432. doi: 10.1038/nn.2641


Chee, M., Hon, N., Lee, H. L., and Soon, C. S. (2001). Relative language proficiency modulates BOLD signal change when bilinguals perform semantic judgments. Neuroimage 13, 1155–1163. doi: 10.1006/nimg.2001.0781


Dale, A. M., Liu, A. K., Fischl, B. R., Buckner, R. L., Belliveau, J. W., Lewine, J. D., et al. (2000). Dynamic statistical parametric mapping: combining fMRI and MEG for high-resolution imaging of cortical activity. Neuron 26, 55–67. doi: 10.1016/S0896-6273(00)81138-1


Dehaene, S., Dupoux, E., Mehler, J., Cohen, L., Paulesu, E., Perani, D., et al. (1997). Anatomical variability in the cortical representation of first and second language. Neuroreport 8, 3809–3815. doi: 10.1097/00001756-199712010-00030


DeKeyser, R., and Larson-Hall, J. (2005). “What does the critical period really mean?,” in Handbook of Bilingualism: Psycholinguistics Approaches, eds J. F. Kroll and A. M. B. D. Groot (New York, NY: Oxford University Press), 88–108.

Dijkstra, T., and van Heuven, W. J. B. (2002). The architecture of the bilingual word recognition system: from identification to decision. Bilingual. Lang. Cogn. 5, 175–197. doi: 10.1017/S1366728902003012


Duñabeitia, J. A., Perea, M., and Carreiras, M. (2010). Masked translation priming effects with highly proficient simultaneous bilinguals. Exp. Psychol. 57, 98–107. doi: 10.1027/1618-3169/a000013


Duyck, W., and Warlop, N. (2009). Translation priming between the native language and a second language: New evidence from Dutch-French bilinguals. Exp. Psychol. 56, 173–197. doi: 10.1027/1618-3169.56.3.173


Emmorey, K., Grabowski, T. J., McCullough, S., Ponto, L. L., Hichwa, R. D., and Damasio, H. (2005). The neural correlates of spatial language in English and American Sign Language: a PET study with hearing bilinguals. Neuroimage 24, 832–840. doi: 10.1016/j.neuroimage.2004.10.008


Emmorey, K. (2002). Language, Cognition and the Brain: Insights from Sign Language Research. Mahwah: Lawrence Erlbaum Associates.

Ferjan Ramirez, N., Leonard, M. K., Torres, C., Hatrak, M., Halgren, E., and Mayberry, R. I. (2013a). Neural language processing in adolescent first-language learners. Cereb. Cortex doi: 10.1093/cercor/bht137. [Epub ahead of print].


Ferjan Ramirez, N., Lieberman, A. M., and Mayberry, R. I. (2013b). The initial stages of first-language acquisition begun in adolescence: when late looks early. J. Child Lang. 40, 391–414. doi: 10.1017/S0305000911000535


Fischl, B. R., Sereno, M. I., Tootell, R. B. H., and Dale, A. M. (1999). High-resolution intersubject averaging and a coordinate system for the cortical surface. Hum. Brain Mapp. 8, 272–284.


Frye, R. E., Fisher, J. M., Coty, A., Zarella, M., Liederman, J., and Halgren, E. (2007). Linear coding of voice onset time. J. Cogn. Neurosci. 19, 1476–1487. doi: 10.1162/jocn.2007.19.9.1476


Halgren, E., Baudena, P., Heit, G., Clarke, J. M., Marinkovic, K., and Clarke, M. (1994). Spatio-temporal stages in face and word processing. 1. Depth-recorded potentials in the human occipital, temporal and parietal lobes. J. Physiol. (Paris) 88, 1–50. doi: 10.1016/0928-4257(94)90092-2


Kroll, J. F., and Stewart, E. (1994). Category interference in translation and picture naming: evidence for asymmetric connections between bilingual memory representations. J. Mem. Lang. 33, 149–174. doi: 10.1006/jmla.1994.1008


Kutas, M., and Federmeier, K. D. (2011). Thirty years and counting: finding meaning in the N400 component of the event-related brain potential (ERP). Annu. Rev. Psychol. 62, 621–647. doi: 10.1146/annurev.psych.093008.131123


Lambon Ralph, M. A., Sage, K., Jones, R. W., and Mayberry, E. J. (2010). Coherent concepts are computed in the anterior temporal lobes. Proc. Natl. Acad. Sci. U.S.A. 107, 2717–2722. doi: 10.1073/pnas.0907307107


Leonard, M. K., Brown, T. T., Travis, K. E., Gharapetian, L., Hagler, D. J. Jr., Dale, A. M., et al. (2010). Spatiotemporal dynamics of bilingual word processing. Neuroimage 49, 3286–3294. doi: 10.1016/j.neuroimage.2009.12.009

Pubmed Abstract | Pubmed Full Text | CrossRef Full Text

Leonard, M. K., Torres, C., Travis, K. E., Brown, T. T., Hagler, D. J. Jr., Dale, A. M., et al. (2011). Language proficiency modulates the recruitment of non-classical language areas in bilinguals. PLoS ONE 6:e18240. doi: 10.1371/journal.pone.0018240

Pubmed Abstract | Pubmed Full Text | CrossRef Full Text

Leonard, M. K., Ferjan Ramirez, N., Torres, C., Travis, K. E., Hatrak, M., Mayberry, R. I., et al. (2012). Signed words in the congenitally deaf evoke typical late lexicosemantic responses with no early visual responses in left superior temporal cortex. J. Neurosci. 32, 9700–9705. doi: 10.1523/JNEUROSCI.1002-12.2012

Pubmed Abstract | Pubmed Full Text | CrossRef Full Text

Macsweeney, M., Woll, B., Campbell, R., McGuire, P. K., David, A. S., Williams, S. C. R., et al. (2002). Neural systems underlying British Sign Language and audio-visual English processing in native users. Brain 125, 1583–1593. doi: 10.1093/brain/awf153

Pubmed Abstract | Pubmed Full Text | CrossRef Full Text

Macsweeney, M., Campbell, R., Woll, B., Brammer, M. J., Giampietro, V., David, A. S., et al. (2006). Lexical and sentential processing in British sign language. Hum. Brain Mapp. 27, 63–76. doi: 10.1002/hbm.20167

Pubmed Abstract | Pubmed Full Text | CrossRef Full Text

Maguire, E. A., Gadian, D. G., Johnsrude, I. S., Good, C. D., Ashburner, J., Frackowiak, R. S., et al. (2000). Navigation-related structural change in the hippocampi of taxi drivers. Proc. Natl. Acad. Sci. U.S.A. 97, 4398–4403. doi: 10.1073/pnas.070039597

Pubmed Abstract | Pubmed Full Text | CrossRef Full Text

Malaia, E., Ranaweera, R., Wilbur, R. B., and Talavage, T. M. (2012). Event segmentation in a visual language: neural bases of processing American Sign Language predicates. Neuroimage 59, 4094–4101. doi: 10.1016/j.neuroimage.2011.10.034

Pubmed Abstract | Pubmed Full Text | CrossRef Full Text

Marinkovic, K., Dhond, R. P., Dale, A. M., Glessner, M., Carr, V., and Halgren, E. (2003). Spatiotemporal dynamics of modality-specific and supramodal word processing. Neuron 38, 487–497. doi: 10.1016/S0896-6273(03)00197-1

Pubmed Abstract | Pubmed Full Text | CrossRef Full Text

Mayberry, R. I., and Squires, B. (2006). “Sign language: Acquisition,” in Encyclopedia of Language and Linguistics II, 2nd Edn., eds K. Brown (Oxford: Elsevier), 739–743.

Mayberry, R. I., Chen, J. K., Witcher, P., and Klein, D. (2011). Age of acquisition effects on the functional organization of language in the adult brain. Brain Lang. 119, 16–29. doi: 10.1016/j.bandl.2011.05.007

Pubmed Abstract | Pubmed Full Text | CrossRef Full Text

McCandliss, B. D., Cohen, L., and Dehaene, S. (2003). The visual word form area: Expertise for reading in the fusiform gyrus. Trends Cogn. Sci. 7, 293–299. doi: 10.1016/S1364-6613(03)00134-7

Pubmed Abstract | Pubmed Full Text | CrossRef Full Text

McDonald, C. R., Thesen, T., Carlson, C., Blumberg, M., Girard, H. M., Trongnetrpunya, A., et al. (2010). Multimodal imaging of repetition priming: using fMRI, MEG, and intracranial EEG to reveal spatiotemporal profiles of word processing. Neuroimage 53, 707–717. doi: 10.1016/j.neuroimage.2010.06.069

Pubmed Abstract | Pubmed Full Text | CrossRef Full Text

McLaughlin, J., Osterhout, L., and Kim, A. (2004). Neural correlates of second-language word learning: minimal instruction produces rapid change. Nat. Neurosci. 7, 703–704. doi: 10.1038/nn1264

Pubmed Abstract | Pubmed Full Text | CrossRef Full Text

Meschyan, G., and Hernandez, A. E. (2006). Impact of language proficiency and orthographic transparency on bilingual word reading: an fMRI investigation. Neuroimage 29, 1135–1140. doi: 10.1016/j.neuroimage.2005.08.055

Pubmed Abstract | Pubmed Full Text | CrossRef Full Text

Moreno, E. M., and Kutas, M. (2005). Processing semantic anomalies in two languages: an electrophysiological exploration in both languages of Spanish-English bilinguals. Cogn. Brain Res. 22, 205–220. doi: 10.1016/j.cogbrainres.2004.08.010

Pubmed Abstract | Pubmed Full Text | CrossRef Full Text

Neville, H., Coffey, S. A., Lawson, D. S., Fischer, A., Emmorey, K., and Bellugi, U. (1997). Neural systems mediating american sign language: effects of sensory experience and age of acquisition. Brain Lang. 57, 285–308. doi: 10.1006/brln.1997.1739

Pubmed Abstract | Pubmed Full Text | CrossRef Full Text

Newman, A., Bavelier, D., Corina, D., Jezzard, P., and Neville, H. (2001). A critical period for right hemisphere recruitment in American Sign Language. Nat. Neurosci. 5, 76–80. doi: 10.1038/nn775

Pubmed Abstract | Pubmed Full Text | CrossRef Full Text

Oostendorp, T. F., and Van Oosterom, A. (1992). Source Parameter Estimation using Realistic Geometry in Bioelectricity and Biomagnetism. Helsinki: Helsinki University of Technology.

Parviainen, T., Helenius, P., and Salmelin, R. (2005). Cortical differentiation of speech and nonspeech sounds at 100 ms: implications for dyslexia. Cereb. Cortex 15, 1054–1063. doi: 10.1093/cercor/bhh206

Pubmed Abstract | Pubmed Full Text | CrossRef Full Text

Patterson, K., Nestor, P. J., and Rogers, T. T. (2007). Where do you know what you know. The representation of semantic knowledge in the human brain. Nat. Rev. Neurosci. 8, 976–987. doi: 10.1038/nrn2277

Pubmed Abstract | Pubmed Full Text | CrossRef Full Text

Petitto, L. A., Zatorre, R. J., Gauna, K., Nikelski, E. J., Dostie, D., and Evans, A. C. (2000). Speech-like cerebral activity in profoundly deaf people processing signed languages: implications for the neural basis of human language. Proc. Natl. Acad. Sci. U.S.A. 97, 13961–13966. doi: 10.1073/pnas.97.25.13961

Pubmed Abstract | Pubmed Full Text | CrossRef Full Text

Sandler, W., and Lillo-Martin, D. (2006). Sign Language and Linguistic Universals. Cambridge: Cambridge University Press. doi: 10.1017/CBO9781139163910

CrossRef Full Text

Schick, B. (1997). The American Sign Language Vocabulary Test. Boulder, CO: University of Colorado at Boulder.

Schlaggar, B. L., Brown, T. T., Lugar, H. M., Visscher, K. M., Miezin, F. M., and Petersen, S. E. (2002). Functional neuroanatomical differences between adults and school-age children in the processing of single words. Science 296, 1476–1479. doi: 10.1126/science.1069464

Pubmed Abstract | Pubmed Full Text | CrossRef Full Text

Sereno, M. I., Dale, A. M., Liu, A., and Tootell, R. B. H. (1996). A surface-based coordinate system for a canonical cortex. Neuroimage 3, S252. doi: 10.1016/S1053-8119(96)80254-0

CrossRef Full Text

Simmons, W. K., and Martin, A. (2009). The anterior temporal lobes and the functional architecture of semantic memory. J. Int. Neuropsychol. Soc. 15, 645–649. doi: 10.1017/S1355617709990348

Pubmed Abstract | Pubmed Full Text | CrossRef Full Text

St George, M., Kutas, M., Martinez, A., and Sereno, M. I. (1999). Semantic integration in reading: engagement of the right hemisphere during discourse processing. Brain 122, 1317–1325. doi: 10.1093/brain/122.7.1317

Pubmed Abstract | Pubmed Full Text | CrossRef Full Text

Travis, K. E., Leonard, M. K., Brown, T. T., Hagler, D. J. Jr., Curran, M., Dale, A. M., et al. (2011). Spatiotemporal neural dynamics of word understanding in 12- to 18-month-old-infants. Cereb. Cortex 21, 1832–1839. doi: 10.1093/cercor/bhq259

Pubmed Abstract | Pubmed Full Text | CrossRef Full Text

Travis, K. E., Leonard, M. K., Chan, A. M., Torres, C., Sizemore, M. L., Qu, Z., et al. (in press). Independence of early speech processing from word meaning. Cereb. Cortex doi: 10.1093/cercor/bhs228. [Epub ahead of print].

Pubmed Abstract | Pubmed Full Text | CrossRef Full Text

Uusvuori, J., Parviainen, T., Inkinen, M., and Salmelin, R. (2008). Spatiotemporal interaction between sound form and meaning during spoken word perception. Cereb. Cortex 18, 456–466. doi: 10.1093/cercor/bhm076

Pubmed Abstract | Pubmed Full Text | CrossRef Full Text

van Heuven, W. J. B., and Dijkstra, T. (2010). Language comprehension in the bilingual brain: fMRI and ERP support for psycholinguistic models. Brain Res. Rev. 64, 104–122. doi: 10.1016/j.brainresrev.2010.03.002

Pubmed Abstract | Pubmed Full Text | CrossRef Full Text

Vinckier, F., Dehaene, S., Jobert, A., Dubus, J. P., Sigman, M., and Cohen, L. (2007). Hierarchical coding of letter strings in the ventral stream: dissecting the inner organization of the visual word-form system. Neuron 55, 143–156. doi: 10.1016/j.neuron.2007.05.031

Pubmed Abstract | Pubmed Full Text | CrossRef Full Text

Visser, M., and Lambon Ralph, M. A. (2011). Differential contributions of bilateral ventral anterior temporal lobe and left anterior superior temporal gyrus to semantic processes. J. Cogn. Neurosci. 23, 3121–3131. doi: 10.1162/jocn_a_00007

Pubmed Abstract | Pubmed Full Text | CrossRef Full Text

Weber-Fox, C., and Neville, H. J. (1996). Maturational constraints on functional specializations for language processing: ERP and behavioral evidence in bilingual speakers. J. Cogn. Neurosci. 8, 231–256. doi: 10.1162/jocn.1996.8.3.231

CrossRef Full Text

White, N., Roddey, C., Shankaranarayanan, A., Han, E., Rettmann, D., Santos, J., et al. (2010). PROMO: Real-time prospective motion correction in MRI using image-based tracking. Magn. Reson. Med. 63, 91–105. doi: 10.1002/mrm.22176

Pubmed Abstract | Pubmed Full Text | CrossRef Full Text

Table of Contents

1. Introduction

2. Main part
2.1 Ferdinand de Saussure’s model
2.1.1 signifiant and signifié
2.1.2 concept and sound pattern
2.1.3 relation & value
2.1.4 arbitrariness & convention
2.2 Charles Sanders Peirce’s model
2.2.1 triadic model I: Representamen, Interpretant, Object
2.2.2 triadic model II: sign vehicle, sense, referent
2.2.3 index, icon and symbol
2.3 Karl Bühler’s model
2.3.1 Bühler’s first model
2.3.2 Bühler’s second model

3. Conclusion

4. Bibliography

1. Introduction

We seem to be a species driven by “a desire to make meanings” (Chandler: 1995) through creating and interpreting signs. Indeed, as Peirce put it, “we think only in signs” (Peirce: 1931-58, II.302). These signs can take the shape of sounds, images, objects, acts or flavours. Since these things have no intrinsic meaning, we have to give them a meaning so that they can become signs. Peirce states that “Nothing is a sign unless it is interpreted as a sign” (Peirce: 1931-58, II.172). This means that anything can become a sign as long as it ‘signifies’ something – refers to or stands for “something other than itself” (Chandler: 1995). Our interpretation of signs is an unconscious process in our minds, as we constantly relate the signs we experience to a system of conventions that is familiar to us.

This system of conventions, and the use of signs in general, is what semiotics is about. Three major models give a detailed explanation of how a sign is constituted: those of Ferdinand de Saussure, Charles Sanders Peirce and Karl Bühler. They will first be presented in detail; a brief discussion of them follows.

2. Main Part

2.1 Ferdinand de Saussure’s model

2.1.1 signifiant and signifié


Saussure offered a “two-sided” (Saussure: 1983, 66) model of the linguistic sign, which may be represented by the following diagram (Fig. 1):

illustration not visible in this excerpt: Fig. 1 and Fig. 2 (Chandler: 1995)

According to this diagram, the linguistic sign consists of a signifier, or signifiant, and a signified, or signifié. The signifiant is the form which the sign takes; the signifié is the concept the sign stands for. The relationship between the two is called signification and is shown by the two arrows on the right and left side of the diagram. Because the arrows point in both directions, they indicate that the elements of the sign are “intimately linked” and that “each triggers the other” (ibid). Finally, the horizontal line between the signifiant and the signifié is called the bar.

A linguistic example is the word ‘book’: it is a sign that consists of, firstly, the signifiant – the word form ‘book’ – and, secondly, the signifié – the concept we have in mind when we hear or read the word ‘book’. This example shows that it is not sufficient to have only a signifiant or only a signifié; a sign must consist of both a signifiant AND a signifié. Only the combination of these two elements makes a linguistic sign.

Another example is Fig. 2: the word ‘tree’ is the signifiant, and what we have in mind when we hear or read the word ‘tree’ is the signifié.
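Purely as an illustration – not part of Saussure’s own work or of any established semiotics formalism – the two-sided structure of the sign can be restated as a small data structure: a sign exists only when a signifiant and a signifié are combined. The class name Sign and its field names below are hypothetical choices made for this sketch.

```python
# A minimal, hypothetical sketch of Saussure's two-sided sign as a data
# structure; the names Sign, signifiant and signifie are illustrative only.
from dataclasses import dataclass


@dataclass(frozen=True)
class Sign:
    signifiant: str  # the form the sign takes, e.g. the word "book"
    signifie: str    # the concept evoked when we hear or read that form

    def __post_init__(self):
        # Neither element alone is a sign: both must be present.
        if not self.signifiant or not self.signifie:
            raise ValueError("a sign requires both a signifiant and a signifie")


book = Sign("book", "the concept of a book")
tree = Sign("tree", "the concept of a tree")
```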


In addition, it is important to mention that one signifiant can have several different signifiés, as the German word ‘Pferd’ shows. In different contexts, this word can have three different meanings: it can refer to the animal, to a chess piece (the knight) or to a piece of gymnastics apparatus (the vaulting horse) (Fig. 3)[1].

illustration not visible in this excerpt: Fig. 3
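Again merely as a hypothetical sketch, the ‘Pferd’ example can be read as one signifiant mapping to several signifiés, with context selecting among them; the dictionary and function below are illustrative inventions, not an established model.

```python
# Hypothetical sketch: one signifiant ("Pferd") with several signifies,
# disambiguated by context.
PFERD_SIGNIFIES = {
    "everyday speech": "the animal (a horse)",
    "chess": "the knight piece",
    "gymnastics": "the vaulting apparatus",
}


def interpret_pferd(context: str) -> str:
    """Return the signifie that a given context selects for 'Pferd'."""
    return PFERD_SIGNIFIES[context]


print(interpret_pferd("chess"))  # -> "the knight piece"
```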

2.1.2 concept and sound pattern

For Saussure, the sign with its two components – signifiant and signifié – is something “psychological […] rather form than substance” (ibid). He states that a “linguistic sign is not a link between a thing and a name, but between a concept and a sound pattern. The sound pattern is not actually a sound, for a sound is something physical. A sound pattern is the hearer’s psychological impression of a sound, as given to him by the evidence of his senses. This sound pattern may be called a ‘material’ element only in that it is the representation of our sensory impressions. The sound pattern may thus be distinguished from the other element associated with it in a linguistic sign. This other element is generally of a more abstract kind: the concept” (ibid). This explanation brings us to another diagram, which is comparable to Fig. 1 (Fig. 4).

illustration not visible in this excerpt: Fig. 4 (Saussure: 1983, 67)

This diagram shows that Saussure preferred the spoken word to the written word; he uses the term image acoustique, or sound pattern, for it. According to his theory, writing is a ‘separate’ linguistic sign system, because it “is in itself not part of the internal system of the language” (Saussure: 1983, 24). Nevertheless, it is “impossible to ignore this way in which the language is constantly represented” (ibid.). Writing certainly matters to Saussure; he is aware that languages which are now dead are only known to us because they were written down. However, it is the spoken word that is important for semiotics, not the written one. Although there is an important connection between the written and the spoken word, one has to concentrate on the latter in order to study the linguistic sign. Saussure compares this to a person and their photograph: “It is rather as if people believed that in order to find out what a person looks like it is better to study his photograph than his face” (Saussure: 1983, 25).

Concerning the signifié in Saussure’s model (Fig. 3), it becomes obvious that it is a concept in the speaker’s mind; “it is not a thing, but the notion of a thing” (Chandler: 1995). To make clear what is meant by that, consider an example from Susanne Langer. She states that symbols – her word for Saussure’s linguistic sign – “are not proxy for their objects but are vehicles for the conception of objects […] In talking about things we have conceptions of them, not the things themselves; and it is the conceptions, not the things, that symbols directly mean” (Langer: 1951, 61). She gives an example: “If I say ‘Napoleon’, you do not bow to the conqueror of Europe as though I had introduced him, but merely think of him” (ibid.).

Nevertheless, Saussure decided to use the terms signifiant and signifié to indicate a “distinction which separates each from the other” (Saussure: 1983, 67). He compares this with a sheet of paper: the signifiant (sound) is on one side and the signifié (thought) is on the other. It is impossible to cut only one side of the sheet without cutting the other. Therefore, it is impossible to separate thought from sound.

2.1.3 relation and value

However, in a linguistic system, “everything depends on relations” (Saussure: 1983, 121). This means that no sign can make sense without a relation to other signs. Take the word ‘tree’ as a linguistic example: it makes sense for us, but only in a certain context and in relation to the other words that are used. Another example is the infinitive ‘to bark’. If we compare the two sentences (1) ‘the dog barks’ and (2) *‘the cat barks’, the first obviously makes more sense than the second, because the infinitive itself reminds us of a dog that barks and certainly not of a cat – cats, as we all know, do not bark. That is why the word ‘bark’ only makes full sense when it is used in connection with a word like ‘dog’.

Saussure uses the term ‘value’ for signs that stand in relation to other signs. He declares that signs do not have the same value in different contexts (see Fig. 5). He compares this thought with a game of chess, as “a state of the board in chess corresponds exactly to a state of the language” (Saussure: 1983, 88): first, each chess piece has a certain position on the board, on which its value depends. Second, this value is not fixed, as it changes from one position to the next. Third, the rules of chess are fixed and cannot be changed; everybody has to obey them. Finally, moving a single piece is enough to change the state of the game. Saussure sums the comparison up as follows: in a chess game, “any given state of the board is totally independent of any previous state of the board. It does not matter at all whether the state in question has been reached by one sequence of moves or another sequence. Anyone who has followed the whole game has not the least advantage over a passer-by who happens to look at the game […]. All this applies equally to a language […] Speech operates only upon a given linguistic state, and the changes which supervene between one state and another have no place in either” (ibid.). There is, however, one weak point in the comparison: in the chess game the player has an intention – he wants to make moves and change something on the board – whereas in the language system “there is no premeditation” (ibid.).

illustration not visible in this excerpt: Fig. 5 (Chandler: 1995)

There are many more examples that show the existence of value in language. Another one is the coinage of a country, such as a two-Euro coin: first, the coin can be exchanged for other things, e.g. a coffee to go. Second, it can be compared with other coins, and therefore other values, (a) within the same country, and thus the same system, such as a one-Euro coin or a fifty-cent coin, and (b) in another country, and thus another system, such as a dollar.

Another example, one which even shows that the meaning of a sign is different from its value, is the French word mouton. Although it has the same meaning as the English word sheep, it does not necessarily have the same value, because the English word for “the meat of this animal, as prepared and served for a meal, is not sheep but mutton” (Saussure: 1983, 114). Because English has the word mutton for the meat, there is a difference in value between mouton and sheep: mouton covers both – the animal itself AND the meat.
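To make the difference between meaning and value concrete, here is a purely illustrative sketch (not taken from Saussure or Chandler): the value of a word is modelled as the range of senses it covers within its own system, so the range of mouton equals the combined ranges of sheep and mutton. The dictionaries and the function name are hypothetical.

```python
# Hypothetical sketch: "value" as the range of senses a word covers within
# its own lexical system. French has one word where English has two.
FRENCH = {"mouton": {"the animal", "the meat served at a meal"}}
ENGLISH = {"sheep": {"the animal"}, "mutton": {"the meat served at a meal"}}


def value(word: str, system: dict) -> set:
    """The senses a word covers, given the other words in its system."""
    return system[word]


# Same rough meaning (the animal), different value within each system:
assert value("mouton", FRENCH) != value("sheep", ENGLISH)
assert value("mouton", FRENCH) == value("sheep", ENGLISH) | value("mutton", ENGLISH)
```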

2.1.4 arbitrariness and convention

Although it was stated before that the signifiant stands for or refers to the signifié, there “is no internal connection” (Saussure: 1983, 67) between a sound and the idea behind it: there is no reason why the sounds /teibl/ should indicate the idea of a ‘table’, no reason why a ‘tree’ should be called ‘tree’ – the word itself does not indicate that there really is anything ‘treeish’ about a ‘tree’ – and, above all, no reason why the letter ‘t’ is pronounced /ti:/ and not /bi:/. That is why Saussure stressed the arbitrariness of signs. Arbitrariness simply means that signs are unmotivated. Saussure argues that a language “is in no way limited in its choice of means. For there is nothing at all to prevent the association of any idea whatsoever with any sequence of sounds whatsoever” (Saussure: 1983, 76).

Even Plato was aware of the arbitrariness of signs and declared that “whatever name you give to a thing is its right name; and if you give up that name and change it for another, the later name is no less correct than the earlier, just as we change the name of our servants; for I think no name belongs to a particular thing by nature” (Harris: 1987, 67).

Plato’s statement points to one of the problems with arbitrariness that Saussure also saw. Concerning his arbitrariness principle, Saussure states that there cannot be complete arbitrariness, as this would lead to chaos in society and communication would no longer be possible. Therefore, a language “is not entirely arbitrary, for the system has a certain rationality” (Saussure: 1983, 73). And although it seems as if every signifiant were freely chosen by every linguistic community, this is not the case. Because “a language is always an inheritance from the past” (ibid.), society does not “establish a contract between concepts and sound patterns” (ibid.); that contract was established in the past and already exists. Although society is aware of the arbitrariness of the linguistic sign, every society’s language is inherited, and there is nothing society can do “but to accept” (ibid.). If an Englishman uses the words ‘book’ and ‘tree’, he does so only because his father and grandfather and so on have done it before him. This leads to the following conclusion: “it is because the linguistic sign is arbitrary that it knows no other law than that of tradition, and because it is founded upon tradition that it can be arbitrary” (Saussure: 1983, 74). It becomes obvious that convention also plays an important role in the relationship between signifiant and signifié; convention means that the link rests on a social and cultural agreement. Simply stated, “a word means what it does to us only because we collectively agree to let it do so” (Chandler: 1995).

There are, however, two kinds of words that seem to speak against the arbitrary nature of signs:

1. Onomatopoeic words:

Onomatopoeic words are introduced into a language after the language has come into existence. They are no “organic elements of a linguistic system” (Saussure: 1983, 69), as they are only imitations of certain sounds. These imitations are partly conventionalised, which becomes evident when we think of words like wauwau or kikeriki in German. Other languages have different onomatopoeic words, of course.

2. Interjections:

Interjections are spontaneous reactions of people in certain situations. As their signifiant often has nothing to do with their signifié, it is very difficult to accept that there always has to be a link between the two. An example is the French word ‘diable’: if someone exclaims this word, he does not necessarily call for the devil. Another example is the German ‘au!’; in another language this word does not have any meaning, and although the word itself has nothing in common with ‘being hurt’, everybody in our language community knows what is meant by it.

For Saussure, the arbitrariness principle was the most important and necessary point when talking about linguistic signs. He even declared that “signs which are entirely arbitrary convey better than others the ideal semiological process. That is why the most complex and the most widespread of all systems of expression, which is the one we find in human languages, is also the most characteristic of all. In this sense, linguistics serves as a model for the whole of semiology, even though languages represent only one type of semiological system” (Saussure: 1983, 68).

[…]


[1] All examples for which no source is given are my own or were given in the course.
