The importance of recognizing word boundaries is illustrated by an advertisement from the County Down Spectator.
In writing, word boundaries are conventionally represented by spaces between words. In speech, word boundaries are determined in various ways, as discussed below.
Related Grammatical and Rhetorical Terms
- Assimilation and Dissimilation
- Conceptual Meaning
- Connected Speech
- Intonation
- Metanalysis
- Mondegreen
- Morpheme and Phoneme
- Oronyms
- Pause
- Phonetics and Phonology
- Phonological Word
- Prosody
- Segment and Suprasegmental
- Slip of the Ear
- Sound Change
Examples of Word Boundaries
- «When I was very young, my mother scolded me for flatulating by saying, ‘Johnny, who made an odor?’ I misheard her euphemism as ‘who made a motor?’ For days I ran around the house amusing myself with those delicious words.» (John B. Lee, Building Bicycles in the Dark: A Practical Guide on How to Write. Black Moss Press, 2001)
- «I could have sworn I heard on the news that the Chinese were producing new trombones. No, it was neutron bombs.» (Doug Stone, quoted by Rosemarie Jarski in Dim Wit: The Funniest, Stupidest Things Ever Said. Ebury, 2008)
- «As far as input processing is concerned, we may also recognize slips of the ear, as when we start to hear a particular sequence and then realize that we have misperceived it in some way; e.g. perceiving the ambulance at the start of the yam balanced delicately on the top . . ..» (Michael Garman, Psycholinguistics. Cambridge University Press, 2000)
Word Recognition
- «The usual criterion for word recognition is that suggested by the linguist Leonard Bloomfield, who defined a word as ‘a minimal free form.’ . . .
- «The concept of a word as ‘a minimal free form’ suggests two important things about words. First, their ability to stand on their own as isolates. This is reflected in the space which surrounds a word in its orthographical form. And secondly, their internal integrity, or cohesion, as units. If we move a word around in a sentence, whether spoken or written, we have to move the whole word or none of it—we cannot move part of a word.» (Geoffrey Finch, Linguistic Terms and Concepts. Palgrave Macmillan, 2000)
- «[T]he great majority of English nouns begins with a stressed syllable. Listeners use this expectation about the structure of English and partition the continuous speech stream employing stressed syllables.» (Z.S. Bond, «Slips of the Ear.» The Handbook of Speech Perception, ed. by David Pisoni and Robert Remez. Wiley-Blackwell, 2005)
Tests of Word Identification
- Potential pause: Say a sentence out loud, and ask someone to ‘repeat it very slowly, with pauses.’ The pauses will tend to fall between words, and not within words. For example, the / three / little / pigs / went / to / market. . . .
- Indivisibility: Say a sentence out loud, and ask someone to ‘add extra words’ to it. The extra item will be added between the words and not within them. For example, the pig went to market might become the big pig once went straight to the market. . . .
- Phonetic boundaries: It is sometimes possible to tell from the sound of a word where it begins or ends. In Welsh, for example, long words generally have their stress on the penultimate syllable. . . . But there are many exceptions to such rules.
- Semantic units: In the sentence Dog bites vicar, there are plainly three units of meaning, and each unit corresponds to a word. But language is often not as neat as this. In I switched on the light, the has little clear ‘meaning,’ and the single action of ‘switching on’ involves two words.
(Adapted from The Cambridge Encyclopedia of Language, 3rd ed., by David Crystal. Cambridge University Press, 2010)
Explicit Segmentation
- «[E]xperiments in English have suggested that listeners segment speech at strong syllable onsets. For example, finding a real word in a spoken nonsense sequence is hard if the word is spread over two strong syllables (e.g., mint in [mɪntef]) but easier if the word is spread over a strong and a following weak syllable (e.g., mint in [mɪntəf]; Cutler & Norris, 1988).
The proposed explanation for this is that listeners divide the former sequence at the onset of the second strong syllable, so that detecting the embedded word requires recombination of speech material across a segmentation point, while the latter sequence offers no such obstacles to embedded word detection as the non-initial syllable is weak and so the sequence is simply not divided.
Similarly, when English speakers make slips of the ear that involve mistakes in word boundary placement, they tend most often to insert boundaries before strong syllables (e.g., hearing by loose analogy as by Luce and Allergy) or delete boundaries before weak syllables (e.g., hearing how big is it? as how bigoted?; Cutler & Butterfield, 1992).
These findings prompted the proposal of the Metrical Segmentation Strategy for English (Cutler & Norris, 1988; Cutler, 1990), whereby listeners are assumed to segment speech at strong syllable onsets because they operate on the assumption, justified by distributional patterns in the input, that strong syllables are highly likely to signal the onset of lexical words. . . .
Explicit segmentation has the strong theoretical advantage that it offers a solution to the word boundary problem both for the adult and for the infant listener. . . .
«Together these strands of evidence motivate the claim that the explicit segmentation procedures used by adult listeners may in fact have their origin in the infant’s exploitation of rhythmic structure to solve the initial word boundary problem.»
(Anne Cutler, «Prosody and the Word Boundary Problem.» Signal to Syntax: Bootstrapping from Speech to Grammar in Early Acquisition, ed. by James L. Morgan and Katherine Demuth. Lawrence Erlbaum, 1996)
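The Metrical Segmentation Strategy described above lends itself to a toy illustration. The sketch below is not Cutler and Norris’s actual model: it assumes the input has already been divided into syllables labelled strong or weak, and simply posits a word boundary before every strong syllable.

```python
# Toy illustration of the Metrical Segmentation Strategy (Cutler & Norris, 1988):
# posit a word boundary before every strong (full-vowel) syllable.
# Assumes the input has already been divided into syllables tagged "S" or "W".

def mss_segment(syllables):
    """Group syllables into candidate words, starting a new word at each strong syllable."""
    words, current = [], []
    for syll, strength in syllables:
        if strength == "S" and current:      # strong syllable: open a new candidate word
            words.append(current)
            current = []
        current.append(syll)
    if current:
        words.append(current)
    return words

# "how big is it" heard as "how bigoted": the boundary before the weak "is" is not posited
utterance = [("how", "S"), ("big", "S"), ("is", "W"), ("it", "W")]
print(mss_segment(utterance))   # [['how'], ['big', 'is', 'it']]
```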
Speech segmentation is the process of identifying the boundaries between words, syllables, or phonemes in spoken natural languages. The term applies both to the mental processes used by humans, and to artificial processes of natural language processing.
Speech segmentation is a subfield of general speech perception and an important subproblem of the technologically focused field of speech recognition, and cannot be adequately solved in isolation. As in most natural language processing problems, one must take into account context, grammar, and semantics, and even so the result is often a probabilistic division (statistically based on likelihood) rather than a categorical one. Though it seems that coarticulation—a phenomenon which may happen between adjacent words just as easily as within a single word—presents the main challenge in speech segmentation across languages, some other problems and strategies employed in solving those problems can be seen in the following sections.
This problem overlaps to some extent with the problem of text segmentation that occurs in some languages which are traditionally written without inter-word spaces, like Chinese and Japanese, compared to writing systems which indicate speech segmentation between words by a word divider, such as the space. However, even for those languages, text segmentation is often much easier than speech segmentation, because the written language usually has little interference between adjacent words, and often contains additional clues not present in speech (such as the use of Chinese characters for word stems in Japanese).
Lexical recognition
In natural languages, the meaning of a complex spoken sentence can be understood by decomposing it into smaller lexical segments (roughly, the words of the language), associating a meaning to each segment, and combining those meanings according to the grammar rules of the language.
Though lexical recognition is not thought to be used by infants in their first year, due to their highly limited vocabularies, it is one of the major processes involved in speech segmentation for adults. Three main models of lexical recognition exist in current research: first, whole-word access, which argues that words have a whole-word representation in the lexicon; second, decomposition, which argues that morphologically complex words are broken down into their morphemes (roots, stems, inflections, etc.) and then interpreted; and third, the view that whole-word and decomposition models are both used, but that the whole-word model provides some computational advantages and is therefore dominant in lexical recognition.[1]
To give an example, in a whole-word model, the word «cats» might be stored and searched for by letter, first «c», then «ca», «cat», and finally «cats». The same word, in a decompositional model, would likely be stored under the root word «cat» and could be searched for after removing the «s» suffix. «Falling», similarly, would be stored as «fall» and suffixed with the «ing» inflection.[2]
Though proponents of the decompositional model recognize that a morpheme-by-morpheme analysis may require significantly more computation, they argue that the unpacking of morphological information is necessary for other processes (such as syntactic structure) which may occur parallel to lexical searches.
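The contrast between the two access routes can be pictured with a rough sketch. The lexicon, root list, and suffix list below are invented for illustration and are not drawn from the cited studies.

```python
# Hypothetical contrast between whole-word access and decompositional access.
# The lexicon and suffix list here are invented for illustration only.

LEXICON = {"cat", "cats", "fall", "dog"}      # whole-word entries
ROOTS = {"cat", "fall", "dog"}                # root entries for the decompositional route
SUFFIXES = ["s", "ing", "ed"]

def whole_word_access(form):
    return form if form in LEXICON else None

def decompositional_access(form):
    if form in ROOTS:
        return (form, None)
    for suffix in SUFFIXES:                   # strip a suffix and retry on the remainder
        if form.endswith(suffix) and form[:-len(suffix)] in ROOTS:
            return (form[:-len(suffix)], suffix)
    return None

print(whole_word_access("cats"))           # 'cats'  (stored as a whole form)
print(decompositional_access("falling"))   # ('fall', 'ing')
```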
As a whole, research into systems of human lexical recognition is limited due to little experimental evidence that fully discriminates between the three main models.[1]
In any case, lexical recognition likely contributes significantly to speech segmentation through the contextual clues it provides, given that it is a heavily probabilistic system—based on the statistical likelihood of certain words or constituents occurring together. For example, one can imagine a situation where a person might say «I bought my dog at a ____ shop» and the missing word’s vowel is pronounced as in «net», «sweat», or «pet». While the probability of «netshop» is extremely low, since «netshop» isn’t currently a compound or phrase in English, and «sweatshop» also seems contextually improbable, «pet shop» is a good fit because it is a common phrase and is also related to the word «dog».[3]
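A minimal sketch of that probabilistic reasoning, with invented frequency counts standing in for real corpus statistics, might score each candidate split as follows.

```python
# Sketch of frequency-based disambiguation of a word boundary.
# The counts and the semantic association below are invented placeholders.

PHRASE_COUNTS = {
    ("pet", "shop"): 950,      # common collocation
    ("sweat", "shop"): 120,    # exists, but contextually unlikely after "dog"
    ("net", "shop"): 1,        # essentially unattested as a phrase
}
RELATED = {("dog", "pet")}     # toy semantic association

def best_split(candidates, context_word=None, context_bonus=100):
    """Rank candidate (word1, word2) splits by raw count, plus a bonus
    if the first word is semantically related to a context word."""
    def score(pair):
        s = PHRASE_COUNTS.get(pair, 0)
        if context_word and (context_word, pair[0]) in RELATED:
            s += context_bonus
        return s
    return max(candidates, key=score)

candidates = [("pet", "shop"), ("sweat", "shop"), ("net", "shop")]
print(best_split(candidates, context_word="dog"))    # ('pet', 'shop')
```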
Moreover, an utterance can have different meanings depending on how it is split into words. A popular example, often quoted in the field, is the phrase «How to wreck a nice beach», which sounds very similar to «How to recognize speech».[4] As this example shows, proper lexical segmentation depends on context and semantics, which draw on the whole of human knowledge and experience and would thus require advanced pattern recognition and artificial intelligence technologies to be implemented on a computer.
Lexical recognition is of particular value in the field of computer speech recognition, since the ability to build and search a network of semantically connected ideas would greatly increase the effectiveness of speech-recognition software. Statistical models can be used to segment and align recorded speech to words or phones. Applications include automatic lip-synch timing for cartoon animation, follow-the-bouncing-ball video sub-titling, and linguistic research. Automatic segmentation and alignment software is commercially available.
Phonotactic cues
For most spoken languages, the boundaries between lexical units are difficult to identify; phonotactics are one answer to this issue. One might expect that the inter-word spaces used by many written languages like English or Spanish would correspond to pauses in their spoken version, but that is true only in very slow speech, when the speaker deliberately inserts those pauses. In normal speech, one typically finds many consecutive words being said with no pauses between them, and often the final sounds of one word blend smoothly or fuse with the initial sounds of the next word.
The notion that speech is produced like writing, as a sequence of distinct vowels and consonants, may be a relic of alphabetic heritage for some language communities. In fact, the way vowels are produced depends on the surrounding consonants, just as consonants are affected by surrounding vowels; this is called coarticulation. For example, in the word «kit», the [k] is farther forward than in «caught». But also, the vowel in «kick» is phonetically different from the vowel in «kit», though we normally do not hear this. In addition, there are language-specific changes which occur in casual speech and which make it quite different from spelling. For example, in English, the phrase «hit you» could often be more appropriately spelled «hitcha».
From a decompositional perspective, in many cases phonotactics play a part in letting speakers know where to draw word boundaries. In English, the word «strawberry» is perceived by speakers as consisting (phonetically) of two parts: «straw» and «berry». Other interpretations such as «stra» and «wberry» are inhibited by English phonotactics, which does not allow the cluster «wb» word-initially. Other such examples are «day/dream» and «mile/stone», which are unlikely to be interpreted as «da/ydream» or «mil/estone» due to the phonotactic probability or improbability of certain clusters. The sentence «Five women left», which could be phonetically transcribed as [faɪvwɪmɘnlɛft], has clearly identifiable boundaries because neither /vw/ in /faɪvwɪmɘn/ nor /nl/ in /wɪmɘnlɛft/ is allowed as a syllable onset or coda in English phonotactics. These phonotactic cues often allow speakers to easily distinguish the boundaries in words.
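That cluster-based cue can be sketched in a few lines. The list of disallowed word-internal clusters below is a tiny hand-picked illustration, not an exhaustive statement of English phonotactics.

```python
# Sketch: insert word boundaries where adjacent consonants form a cluster
# that could not occur inside a single English word.
# The cluster list is a hand-picked illustration, not a full phonotactic grammar.

ILLEGAL_INTERNAL = {"vw", "nl", "wb"}   # e.g. five|women, women|left, straw|berry

def phonotactic_boundaries(phones):
    """Return indices where a boundary must fall because the adjacent
    phones form a cluster disallowed inside an English word."""
    boundaries = []
    for i in range(1, len(phones)):
        if phones[i - 1] + phones[i] in ILLEGAL_INTERNAL:
            boundaries.append(i)
    return boundaries

# "five women left", very roughly as a phone sequence
phones = ["f", "aɪ", "v", "w", "ɪ", "m", "ɘ", "n", "l", "ɛ", "f", "t"]
print(phonotactic_boundaries(phones))   # [3, 8] -> faɪv | wɪmɘn | lɛft
```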
Vowel harmony in languages like Finnish can also provide phonotactic cues. While the system does not allow front vowels and back vowels to exist together within one morpheme, compounds allow two morphemes to maintain their own vowel harmony while coexisting in a word. Therefore, in compounds such as «selkä/ongelma» (‘back problem’), where vowel harmony differs between the two constituents, the boundary will be wherever the switch in harmony takes place—between the «ä» and the «o» in this case.[5] Still, there are instances where phonotactics may not aid in segmentation. Words with unclear clusters or uncontrasted vowel harmony, as in «opinto/uudistus» (‘student reform’), do not offer phonotactic clues as to how they are segmented.[6]
From the perspective of the whole-word model, however, these words are thought to be stored as full words, so the constituent parts would not necessarily be relevant to lexical recognition.
Speech segmentation in infants and non-natives
Infants are one major focus of research in speech segmentation. Since infants have not yet acquired a lexicon capable of providing extensive contextual clues or probability-based word searches within their first year, as mentioned above, they must rely primarily upon phonotactic and rhythmic cues (with prosody being the dominant cue), all of which are language-specific. Between 6 and 9 months, infants begin to lose the ability to discriminate between sounds not present in their native language and grow sensitive to the sound structure of their native language, with word segmentation abilities appearing around 7.5 months.
Though much more research needs to be done on the exact processes that infants use to begin speech segmentation, current and past studies suggest that English-native infants treat stressed syllables as the beginnings of words. At 7.5 months, infants appear to be able to segment bisyllabic words with strong-weak stress patterns, though weak-strong stress patterns are often misinterpreted, e.g. interpreting «guiTAR is» as «GUI TARis». Infants also show some sophistication in tracking the frequency and probability of syllable sequences: for instance, recognizing that although the syllables «the» and «dog» occur together frequently, «the» also commonly occurs with other syllables, which may lead to the analysis that «dog» is an individual word or concept rather than the interpretation «thedog».[7][8]
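The frequency-tracking idea is often modelled as transitional probabilities between adjacent syllables. The sketch below is a simplified illustration in that spirit, not the procedure used in the cited studies; the syllable stream is invented.

```python
# Sketch: transitional probabilities between adjacent syllables.
# Syllable pairs inside a word recur with probability near 1; a lower
# probability for a pair suggests a word boundary between its members.
from collections import Counter

def transitional_probs(syllable_stream):
    pair_counts = Counter(zip(syllable_stream, syllable_stream[1:]))
    first_counts = Counter(syllable_stream[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# Invented stream: "pretty baby pretty doggy", split into syllables
stream = ["pre", "ty", "ba", "by", "pre", "ty", "dog", "gy"]
probs = transitional_probs(stream)
print(probs[("pre", "ty")])   # 1.0 -> likely word-internal
print(probs[("ty", "ba")])    # 0.5 -> likely word boundary
```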
Language learners are another population studied in speech segmentation research. In some ways, learning to segment speech may be more difficult for a second-language learner than for an infant, not only because of unfamiliarity with the sound probabilities and restrictions of the new language but particularly because of the overapplication of the native language’s patterns. While some patterns may be shared between languages, as in the syllable-based segmentation of French and English, they may not transfer well to languages such as Japanese, which has a mora-based segmentation system. Further, clusters such as /ld/, which mark word boundaries in German or Dutch, are permitted word-internally in English without marking boundaries. Even the relationship between stress and vowel length, which may seem intuitive to speakers of English, may not exist in other languages, so second-language learners face an especially great challenge when learning a language and its segmentation cues.[9]
See also
- Ambiguity
- Speech recognition
- Speech processing
- Hyphenation
- Mondegreen
- Speech perception
- Sentence boundary disambiguation
References
- ^ a b Badecker, William and Mark Allen. «Morphological Parsing and the Perception of Lexical Identity: A Masked Priming Study of Stem Homographs». Journal of Memory and Language 47.1 (2002): 125–144. Retrieved 27 April 2014.
- ^ Taft, Marcus and Kenneth I. Forster. «Lexical Storage and Retrieval of Polymorphemic and Polysyllabic Words». Journal of Verbal Learning and Verbal Behavior 15.6 (1976): 607–620. Retrieved 27 April 2014.
- ^ Lieberman, Henry; Alexander Faaborg; Waseem Daher; José Espinosa (January 9–12, 2005). «How to Wreck a Nice Beach You Sing Calm Incense» (PDF). MIT Media Library.
- ^ An often-used example in the literature of speech recognition. An early example is N. Rex Dixon, «Some Problems in Automatic Recognition of Continuous Speech and Their Implications for Pattern Recognition» Proceedings of the First International Joint Conference on Pattern Recognition, IEEE, 1973 as quoted in Mark Liberman, «Wrecking a nice beach», Language Log August 5, 2014
- ^ Bertram, Raymond; Alexander Pollatsek; and Jukka Hyönä. «Morphological Parsing and the Use of Segmentation Cues in Reading Finnish Compounds». Journal of Memory and Language 51.3 (2004): 325–345. Retrieved 27 April 2014.
- ^ «General Introduction» (PDF). Archived from the original (PDF) on 2014-04-27.
- ^ Jusczyk, Peter W. and Derek M. Houston. «The Beginnings of Word Segmentation in English-Learning Infants». Cognitive Psychology 39 (1999): 159–207. Retrieved 27 April 2014.
- ^ Johnson, Elizabeth K. and Peter W. Jusczyk. «Word Segmentation by 8-Month-Olds: When Speech Cues Count More Than Statistics». Journal of Memory and Language 44 (2001): 548–567. Retrieved 27 April 2014.
- ^ Tyler, Michael D. and Anne Cutler. «Cross-Language Differences in Cue Use for Speech Segmentation». Journal of the Acoustical Society of America 126 (2009): 367–376. Retrieved 27 April 2014.
External links
- «Phonolyze» speech segmentation software
- SPPAS — the automatic annotation and analysis of speech
The question relies on a number of unidentified assumptions about word boundaries, which are not totally alien but also are not obvious or obviously right. The main problem I see is the premise that there is this one thing, word boundary, that solves myriad problems.
The notion of there being a single «phonological tree» seems to be historically based on importing notions of structure from syntax (we wanted phonology to be more like syntax), but the properties of tree-like representations as used in syllable and foot structure are not the same as those employed in syntactic representations (prosodic structure is not seriously recursive in the way that syntactic trees are; phonological «trees» flout the single-mother convention). Attempting to align phonological grouping with morphosyntactic grouping just leads to tears, though that is not obvious if you consider just English. The problem is that combining a VC root with a VC prefix and a VC suffix typically leads to phonological V.C+V.C+VC, i.e. syllable boundaries seriously misaligned with morpheme boundaries.
In English and in contrast to other languages such as Arabic, there is not much evidence for resyllabification between words, so prosodic and syntactic constituency are not generally at odds. At the level of affixation, we do have mismatches involving V-initial suffixes (invite [ɪn.ˈvajʔ], invitee [ɪn.vaj.ˈtʰi]), but not at the phrasal level in e.g. «invite Igor». In asking about word boundaries in «the big house», «motorcycle» or «What are you going to do?», you have to have a theory of entities (are there both word and syllable boundaries? Are there also morpheme boundaries?), and what those entities do for you. Are there necessary or sufficient criteria for diagnosing «.», «+» or «#»?
The reason for positing word boundaries is usually syntactic: «the» is a word, it occupies a certain syntactic position, same with «big». We might claim that «motorcycle» has an internal word boundary because «motor» and «cycle» are words, and neither can reasonably be called a prefix or suffix. Phonologically speaking, there is nothing about «motorcycle» that demands a word boundary.
Certain concatenations that can be lumped together under the rubric «contraction», for example «going to» → «gonna», «will not» → «won’t», «got you» → «gotcha», also «Harry’s», behave phonologically more like affixational structures, even though they are syntactically more like word combinations. Just positing a readjustment of boundaries (removing the «#») does not solve all of the problems, especially in negative inflections (my analytic prejudice is now revealed).
The final complication in analyzing the aforementioned concatenations is that boundaries are also invoked to account for some facts of speech rhythm. The two syllables of «lighthouse» have a fixed rhythmic organization (prominence on the first syllable), but the phrase «light house» has variable rhythm (it depends on whether you’re shopping for a light house vs a heavy house, or whether the discussion is about a house that is light vs. a hose that is light). Again, attempting to reduce these speech rhythm properties to nothing more than differences in word boundaries has proven to be futile. Once you introduce some other mechanism for encoding rhythmic distinctions, manipulation of word boundaries becomes unnecessary – we can just posit that word boundaries are there if and only if we syntactically concatenate two words. You still have to have an account of whether «won’t» is two syntactic words (as opposed to two syntactically-mandated functions manifested within a single word).
In other words, manipulating word boundaries has not proven to be a useful method of analysis.
Codex Claromontanus in Latin. The practice of separating words with spaces was not universal when this manuscript was written.
A word is a basic element of language that carries an objective or practical meaning, can be used on its own, and is uninterruptible. Although speakers of a language usually have an intuitive grasp of what a word is, there is no consensus among linguists on its definition, and numerous attempts to find specific criteria for the concept remain controversial. Different standards have been proposed, depending on the theoretical background and descriptive context, and these do not converge on a single definition. Some specific definitions of the term «word» are employed to convey its different meanings at different levels of description, for example on a phonological, grammatical or orthographic basis. Others suggest that the concept is simply a convention used in everyday situations.
The concept of «word» is distinguished from that of a morpheme, which is the smallest unit of language that has a meaning, even if it cannot stand on its own. Words are made of at least one morpheme. Morphemes can also be joined to create other words in a process of morphological derivation. In English and many other languages, the morphemes that make up a word generally include at least one root (such as «rock», «god», «type», «writ», «can», «not») and possibly some affixes («-s», «un-», «-ly», «-ness»). Words with more than one root («[type][writ]er», «[cow][boy]s», «[tele][graph]ically») are called compound words. In turn, words are combined to form other elements of language, such as phrases («a red rock», «put up with»), clauses («I threw a rock»), and sentences («I threw a rock, but missed»).
In many languages, the notion of what constitutes a «word» may be learned as part of learning the writing system. This is the case for the English language, and for most languages that are written with alphabets derived from the ancient Latin or Greek alphabets. In English orthography, the letter sequences «rock», «god», «write», «with», «the», and «not» are considered to be single-morpheme words, whereas «rocks», «ungodliness», «typewriter», and «cannot» are words composed of two or more morphemes («rock»+»s», «un»+»god»+»li»+»ness», «type»+»writ»+»er», and «can»+»not»).
Definitions and meanings
Since the beginning of the study of linguistics, numerous attempts at defining what a word is have been made, using many different criteria. However, no satisfying definition has yet been found that applies to all languages and at all levels of linguistic analysis. It is, however, possible to find consistent definitions of «word» at particular levels of description. These include definitions on the phonetic and phonological level, as the smallest segment of sound that can be theoretically isolated by word accent and boundary markers; on the orthographic level, as a segment indicated by blank spaces in writing or print; on the basis of morphology, as the basic element of grammatical paradigms like inflection, distinct from word-forms; within semantics, as the smallest and relatively independent carrier of meaning in a lexicon; and syntactically, as the smallest permutable and substitutable unit of a sentence.
In some languages, these different types of words coincide, and one can analyze, for example, a «phonological word» as essentially the same as a «grammatical word». However, in other languages they may correspond to elements of different size. Much of the difficulty stems from a Eurocentric bias, as languages from outside of Europe may not follow the intuitions of European scholars, and some of the criteria developed for «word» may only be applicable to languages of broadly European synthetic structure. Because of this unclear status, some linguists propose avoiding the term «word» altogether, focusing instead on better defined terms such as morphemes.
Dictionaries categorize a language’s lexicon into individually listed forms called lemmas. These can be taken as an indication of what constitutes a «word» in the opinion of the writers of that language. This written form of a word constitutes a lexeme. The most appropriate means of measuring the length of a word is by counting its syllables or morphemes. When a word has multiple definitions or multiple senses, it may cause confusion in a debate or discussion.
Phonology
One distinguishable meaning of the term «word» can be defined on phonological grounds. It is a unit larger than or equal to a syllable, which can be distinguished based on segmental or prosodic features, or through its interactions with phonological rules. In Walmatjari, an Australian language, roots or suffixes may have only one syllable, but a phonological word must have at least two syllables. A disyllabic verb root may take a zero suffix, e.g. luwa-ø ‘hit!’, but a monosyllabic root must take a suffix, e.g. ya-nta ‘go!’, thus conforming to a segmental pattern of Walmatjari words. In the Pitjantjatjara dialect of the Wati language, another language from Australia, a word-medial syllable can end with a consonant but a word-final syllable must end with a vowel.
In most languages, stress may serve as a criterion for a phonological word. In languages with fixed stress, it is possible to ascertain word boundaries from the location of the stress. Although it is impossible to predict word boundaries from stress alone in languages with phonemic stress, there will be just one syllable with primary stress per word, which allows the total number of words in an utterance to be determined.
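As a minimal sketch of the primary-stress criterion: given a transcription in which primary stress is marked (here with ˈ), counting the stress marks yields the number of phonological words. The transcription in the example is rough and only illustrative.

```python
# Sketch: in a language with one primary stress per phonological word,
# counting primary stress marks (ˈ) in a transcription gives the number
# of phonological words in the utterance.

def count_phonological_words(transcription):
    return transcription.count("ˈ")

# A rough, illustrative broad transcription of "invite Igor"
print(count_phonological_words("ɪnˈvaɪt ˈiːɡɔr"))   # 2
```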
Many phonological rules operate only within a phonological word or specifically across word boundaries. In Hungarian, the dental consonants /d/, /t/, /l/ or /n/ assimilate to a following semi-vowel /j/, yielding the corresponding palatal sound, but only within one word. Conversely, external sandhi rules act across word boundaries. The prototypical example of such a rule comes from Sanskrit; however, initial consonant mutation in contemporary Celtic languages and the linking r phenomenon in some non-rhotic English dialects can also be used to illustrate word boundaries.
It is often the case that a phonological word does not correspond to our intuitive conception of a word. The Finnish compound word pääkaupunki ‘capital’ is phonologically two words (pää ‘head’ and kaupunki ‘city’) because it does not conform to Finnish patterns of vowel harmony within words. Conversely, a single phonological word may be made up of more than one syntactic element, as in the English phrase I’ll come, where I’ll forms one phonological word.
Lexemes
A word can be thought of as an item in a speaker’s internal lexicon; this is called a lexeme. A lexeme differs from a word as used in everyday speech, since it is taken to include inflected forms as well: the lexeme teapot covers the singular teapot as well as the plural teapots. There is also the question of to what extent inflected or compounded words should be included in a lexeme, especially in agglutinative languages. For example, there is little doubt that in Turkish the lexeme for house should include the nominative singular ev and the plural evler. However, it is not clear whether it should also encompass the word evlerinizden ‘from your houses’, formed through regular suffixation. There are also lexemes such as «black and white» or «do-it-yourself» which, although they consist of multiple words, still form a single collocation with a set meaning.
Grammar
Grammatical words are proposed to consist of a number of grammatical elements which occur together (not in separate places within a clause) in a fixed order and have a set meaning. However, there are exceptions to all of these criteria.
Single grammatical words have a fixed internal structure; when the structure is changed, the meaning of the word also changes. In Dyirbal, which can use many derivational affixes with its nouns, there are the dual suffix -jarran and the suffix -gabun meaning «another». With the noun yibi they can be arranged into yibi-jarran-gabun («another two women») or yibi-gabun-jarran («two other women»), but changing the suffix order also changes their meaning. Speakers of a language also usually associate a specific meaning with a word and not a single morpheme. For example, when asked to talk about untruthfulness they rarely focus on the meaning of morphemes such as -th or -ness.
Semantics
Leonard Bloomfield introduced the concept of «Minimal Free Forms» in 1928. Words are thought of as the smallest meaningful unit of speech that can stand by themselves. This correlates phonemes (units of sound) to lexemes (units of meaning). However, some written words are not minimal free forms as they make no sense by themselves (for example, the and of). Some semanticists have put forward a theory of so-called semantic primitives or semantic primes, indefinable words representing fundamental concepts that are intuitively meaningful. According to this theory, semantic primes serve as the basis for describing the meaning, without circularity, of other words and their associated conceptual denotations.
Features
In the Minimalist school of theoretical syntax, words (also called lexical items in the literature) are construed as «bundles» of linguistic features that are united into a structure with form and meaning. For example, the word «koalas» has semantic features (it denotes real-world objects, koalas), category features (it is a noun), number features (it is plural and must agree with verbs, pronouns, and demonstratives in its domain), phonological features (it is pronounced a certain way), etc.
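Such a feature bundle can be pictured informally as a simple record. The sketch below is only an illustration of the idea; the attribute names are chosen for readability and are not a formalization of Minimalist features.

```python
# Informal illustration of a lexical item as a "bundle" of features.
# The attribute names are invented for readability, not drawn from any formal theory.

koalas = {
    "phonology": "/koʊˈɑːləz/",              # how it is pronounced
    "category": "N",                          # it is a noun
    "number": "plural",                       # must agree with verbs, pronouns, demonstratives
    "semantics": "KOALA (real-world animals)",
}
print(koalas["category"], koalas["number"])   # N plural
```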
Orthography
Words made out of letters, divided by spaces
In languages with a literary tradition, the question of what is considered a single word is influenced by orthography. Word separators, typically spaces and punctuation marks, are common in the modern orthography of languages using alphabetic scripts, but they are a relatively modern development in the history of writing. In character encoding, word segmentation depends on which characters are defined as word dividers. In English orthography, compound expressions may contain spaces. For example, ice cream, air raid shelter and get up are each generally considered to consist of more than one word (as each of the components is a free form, with the possible exception of get), and so is no one, but the similarly compounded someone and nobody are considered single words.
Sometimes, languages which are close grammatically will consider the same order of words in different ways. For example, reflexive verbs in the French infinitive are separate from their respective particle, e.g. se laver («to wash oneself»), whereas in Portuguese they are hyphenated, e.g. lavar-se, and in Spanish they are joined, e.g. lavarse.
Not all languages delimit words expressly. Mandarin Chinese is a highly analytic language with few inflectional affixes, making it unnecessary to delimit words orthographically. However, there are many multiple-morpheme compounds in Mandarin, as well as a variety of bound morphemes that make it difficult to clearly determine what constitutes a word. Japanese uses orthographic cues to delimit words, such as switching between kanji (characters borrowed from Chinese writing) and the two kana syllabaries. This is a fairly soft rule, because content words can also be written in hiragana for effect, though if done extensively spaces are typically added to maintain legibility. Vietnamese orthography, although using the Latin alphabet, delimits monosyllabic morphemes rather than words.
Word boundaries
The task of defining what constitutes a «word» involves determining where one word ends and another word begins, that is, identifying word boundaries. There are several ways to determine where the word boundaries of spoken language should be placed:
- Potential pause: A speaker is told to repeat a given sentence slowly, allowing for pauses. The speaker will tend to insert pauses at the word boundaries. However, this method is not foolproof: the speaker could easily break up polysyllabic words, or fail to separate two or more closely linked words (e.g. «to a» in «He went to a house»).
- Indivisibility: A speaker is told to say a sentence out loud, and then is told to say the sentence again with extra words added to it. Thus, I have lived in this village for ten years might become My family and I have lived in this little village for about ten or so years. These extra words will tend to be added at the word boundaries of the original sentence. However, some languages have infixes, which are put inside a word. Similarly, some have separable affixes: in the German sentence «Ich komme gut zu Hause an», the verb ankommen is separated.
- Phonetic boundaries: Some languages have particular rules of pronunciation that make it easy to spot where a word boundary should be. For example, in a language that regularly stresses the last syllable of a word, a word boundary is likely to fall after each stressed syllable. Another example can be seen in a language that has vowel harmony (like Turkish): the vowels within a given word share the same quality, so a word boundary is likely to occur whenever the vowel quality changes (see the sketch after this list). Nevertheless, not all languages have such convenient phonetic rules, and even those that do present occasional exceptions.
- Orthographic boundaries: Word separators, such as spaces and punctuation marks, can be used to distinguish single words. However, this depends on the specific language. East Asian writing systems often do not separate words: this is the case with Chinese and Japanese writing, which use logographic characters, as well as Thai and Lao, whose scripts are abugidas.
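The vowel-harmony cue from the list above can be sketched as follows. The front/back classification is a simplified stand-in for a real harmony system (Finnish e and i, for instance, are actually neutral vowels), and the syllabification is rough.

```python
# Sketch: posit a word boundary where the vowel-harmony class switches.
# The front/back sets are a simplified stand-in for a real harmony system;
# e and i are treated as front here although Finnish treats them as neutral.

FRONT = set("äöyeiü")
BACK = set("aou")

def harmony_class(piece):
    for ch in piece:
        if ch in FRONT:
            return "front"
        if ch in BACK:
            return "back"
    return None

def harmony_boundary(syllables):
    """Return the index of the first syllable whose harmony class differs
    from the preceding syllable's class, i.e. a likely word boundary."""
    for i in range(1, len(syllables)):
        prev, cur = harmony_class(syllables[i - 1]), harmony_class(syllables[i])
        if prev and cur and prev != cur:
            return i
    return None

# Finnish selkä + ongelma 'back problem', split into rough syllables
syllables = ["sel", "kä", "on", "gel", "ma"]
print(harmony_boundary(syllables))   # 2 -> boundary before "on"
```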
Morphology
A morphology tree of the English word «independently»
Morphology is the study of word formation and structure. Words may undergo different morphological processes which are traditionally classified into two broad groups: derivation and inflection. Derivation is a process in which a new word is created from existing ones, often with a change of meaning. For example, in English the verb to convert may be modified into the noun a convert through stress shift and into the adjective convertible through affixation. Inflection adds grammatical information to a word, such as indicating case, tense, or gender.
In synthetic languages, a single word stem (for example, love) may inflect to have a number of different forms (for example, loves, loving, and loved). However, for some purposes these are not usually considered to be different words, but rather different forms of the same word. In these languages, words may be considered to be constructed from a number of morphemes.
In Indo-European languages in particular, the morphemes distinguished are:
- The root.
- Optional suffixes.
- An inflectional suffix.
Thus, the Proto-Indo-European *wr̥dhom would be analyzed as consisting of
- *wr̥-, the zero grade of the root *wer-.
- A root-extension *-dh- (diachronically a suffix), resulting in a complex root *wr̥dh-.
- The thematic suffix *-o-.
- The neuter gender nominative or accusative singular suffix *-m.
Philosophy
Philosophers have found words to be objects of fascination since at least the 5th century BC, with the foundation of the philosophy of language. Plato analyzed words in terms of their origins and the sounds making them up, concluding that there was some connection between sound and meaning, though words change a great deal over time. John Locke wrote that the use of words «is to be sensible marks of ideas», though they are chosen «not by any natural connexion that there is between particular articulate sounds and certain ideas, for then there would be but one language amongst all men; but by a voluntary imposition, whereby such a word is made arbitrarily the mark of such an idea». Wittgenstein’s thought transitioned from a word as representation of meaning to «the meaning of a word is its use in the language.»
Classes
Each word belongs to a category, based on shared grammatical properties. Typically, a language’s lexicon may be classified into several such groups of words. The total number of categories, as well as their types, is not universal and varies among languages. For example, English has a group of words called articles, such as the (the definite article) or a (the indefinite article), which mark definiteness or identifiability. This class is not present in Japanese, which depends on context to indicate this difference. On the other hand, Japanese has a class of words called particles which are used to mark noun phrases according to their grammatical function or thematic relation, which English marks using word order or prosody.
It is not clear if any categories other than interjection are universal parts of human language. The basic bipartite division that is ubiquitous in natural languages is that of nouns vs verbs. However, in some Wakashan and Salish languages, all content words may be understood as verbal in nature. In Lushootseed, a Salish language, all words with ‘noun-like’ meanings can be used predicatively, where they function like verbs. For example, the word sbiaw can be understood as ‘(is a) coyote’ rather than simply ‘coyote’. On the other hand, in Eskimo–Aleut languages all content words can be analyzed as nominal, with agentive nouns serving the role closest to verbs. Finally, in some Austronesian languages it is not clear whether the distinction is applicable, and all words can be best described as interjections which can perform the roles of other categories.
The current classification of words into classes is based on the work of Dionysius Thrax, who, in the 1st century BC, distinguished eight categories of Ancient Greek words: noun, verb, participle, article, pronoun, preposition, adverb, and conjunction. Later Latin authors, Apollonius Dyscolus and Priscian, applied his framework to their own language; since Latin has no articles, they replaced this class with interjection. Adjectives (‘happy’), quantifiers (‘few’), and numerals (‘eleven’) were not made separate in those classifications due to their morphological similarity to nouns in Latin and Ancient Greek. They were recognized as distinct categories only when scholars started studying later European languages.
In the Indian grammatical tradition, Pāṇini introduced a similar fundamental classification into a nominal (nāma, suP) and a verbal (ākhyāta, tiN) class, based on the set of suffixes taken by the word. Some words can be controversial, such as slang in formal contexts; misnomers, because they do not mean what they would imply; or polysemous words, because of the potential confusion between their various senses.
History
In ancient Greek and Roman grammatical tradition, the word was the basic unit of analysis. Different grammatical forms of a given lexeme were studied; however, there was no attempt to decompose them into morphemes. This may have been the result of the synthetic nature of these languages, where the internal structure of words may be harder to decode than in analytic languages. There was also no concept of different kinds of words, such as grammatical or phonological – the word was considered a unitary construct. The word (dictiō) was defined as the minimal unit of an utterance (ōrātiō), the expression of a complete thought.
See also
- Longest words
- Utterance
- Word (computer architecture)
- Word count, the number of words in a document or passage of text
- Wording
- Etymology
Chinese word segmentation (CWS) models have achieved very high performance when the training data is sufficient and in-domain. However, performance drops drastically when shifting to cross-domain and low-resource scenarios due to data sparseness. Considering that constructing large-scale manually annotated data is time-consuming and labor-intensive, in this work we propose, for the first time, to mine word boundary information from pauses in speech in order to efficiently obtain large-scale, naturally annotated CWS data. We present a simple yet effective complete-then-train method to utilize these natural annotations from speech for CWS model training. Extensive experiments demonstrate that CWS performance in cross-domain and low-resource scenarios can be significantly improved by leveraging our naturally annotated data extracted from speech.
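The abstract does not spell out the mining step, but the core idea of turning pauses into natural boundary annotations can be sketched roughly as follows; the pause threshold and the data format are assumptions made for illustration, not the paper’s actual procedure.

```python
# Rough sketch (not the paper's actual method): treat a sufficiently long
# pause between two transcribed characters as a naturally annotated word
# boundary, leaving all other positions unlabeled ("partial" annotation).

PAUSE_THRESHOLD = 0.30   # seconds; an assumed value for illustration

def pause_annotations(char_end_times, next_char_start_times):
    """Label each gap between adjacent transcribed characters:
    'B' for a boundary (long pause), None for unknown."""
    labels = []
    for end, start in zip(char_end_times, next_char_start_times):
        labels.append("B" if start - end >= PAUSE_THRESHOLD else None)
    return labels

# Toy time-aligned transcript of four characters; long pause after the third
ends   = [0.20, 0.45, 0.70]   # when each character's audio ends
starts = [0.22, 0.47, 1.10]   # when the next character's audio starts
print(pause_annotations(ends, starts))   # [None, None, 'B']
```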