What is the grammatical structure of language?

The grammatical structure of language is the system of means used to turn linguistic units into communicative ones, in other words, to turn units of language into units of speech. Such means include inflexions, affixation, word order, function words and phonological means.

Generally speaking, Indo-European languages are classified into two structural types: synthetic and analytic. Synthetic languages are languages of ‘internal’ grammar of the word: most grammatical meanings and grammatical relations of words are expressed with the help of inflexions (Ukrainian зроблю, Russian, Latin, etc.). Analytical languages are languages of ‘external’ grammar, because most grammatical meanings and grammatical forms are expressed with the help of separate words (will do). However, we cannot speak of languages as purely synthetic or analytic: in Modern English analytical forms prevail, while in Ukrainian synthetic devices are dominant. Over time English has become more analytical than Old English was, and analytical changes in Modern English (especially in American English) are still under way.

4. Morphology and syntax as two parts of linguistic description.

As the word is the main unit of traditional grammatical theory, it serves as the basis of the distinction that is frequently drawn between morphology and syntax. Morphology deals with the internal structure of words, the peculiarities of their grammatical categories and their semantics, while traditional syntax deals with the rules governing the combination of words in sentences (and, in modern linguistics, in texts). We can therefore say that the word is the main unit of morphology.

It is difficult to arrive at a one-sentence definition of such a complex linguistic unit as the word. First of all, it is the main expressive unit of human language, ensuring the thought-forming function of language. It is also the basic nominative unit of language, through which the naming function of language is realized. Like any linguistic sign, the word is a level unit: in the structure of language it belongs to the upper stage of the morphological level. It is a unit of the sphere of ‘language’, and it exists only through its actualization in speech. One of the most characteristic features of the word is its indivisibility. Like any other linguistic unit, the word is a bilateral entity: it unites a concept (idea) and a sound image and thus has two sides, the content side and the expression side (the plane of content and the plane of expression).

The noun

  1. General characteristics.

The noun is the central lexical unit of language. It is the main nominative unit of speech. Like any other part of speech, the noun can be characterised by three criteria: semantic (meaning), morphological (form and grammatical categories) and syntactic (functions, distribution).

Semantic features of the noun. The noun possesses the grammatical meaning of thingness, or substantiality. According to different principles of classification, nouns fall into several subclasses:

  1. According to the type of nomination, nouns may be proper and common;

  2. According to the form of existence, they may be animate and inanimate. Animate nouns in their turn fall into human and non-human;

  3. According to their quantitative structure, nouns can be countable and uncountable.

This set of subclasses cannot be put together into one table, because the subclasses follow different principles of classification.
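Because the principles cut across one another, the classification lends itself to representation as independent feature axes rather than a single tree. A minimal sketch in Python (the attribute names and example nouns are illustrative, not from the text):

```python
from dataclasses import dataclass

@dataclass
class Noun:
    """A noun cross-classified along three independent axes."""
    lemma: str
    nomination: str    # 'proper' or 'common'
    animacy: str       # 'animate' or 'inanimate'
    countability: str  # 'countable' or 'uncountable'

# The principles are independent, so any combination of values may occur.
nouns = [
    Noun("London", nomination="proper", animacy="inanimate", countability="uncountable"),
    Noun("teacher", nomination="common", animacy="animate", countability="countable"),
    Noun("water", nomination="common", animacy="inanimate", countability="uncountable"),
]

countable = [n.lemma for n in nouns if n.countability == "countable"]
print(countable)  # ['teacher']
```

Selecting by one axis leaves the others free, which is exactly why no single table can hold the whole classification.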

Morphological features of the noun. In accordance with the morphological structure of their stems, all nouns can be classified into: simple, derived (stem + affix, affix + stem: thingness), compound (stem + stem: armchair) and composite (the Hague).

Syntactic features of the noun. The noun can be used in the sentence in all syntactic functions but that of predicate. It can enter right-hand and left-hand connections with practically all parts of speech, which is why practically all parts of speech but the verb can act as noun determiners. However, the most common noun determiners are considered to be articles, pronouns, numerals, adjectives and nouns themselves in the common and genitive case.


Grammar: Functional Approaches

W. Croft, in International Encyclopedia of the Social & Behavioral Sciences, 2001

3.6 More Radically Functionalist Schools

Grammatical structure is commonly assumed to exist in speakers’ minds. However, grammatical structure is also directly involved in social interaction in language use, and language use is central to accounting for language acquisition, language variation and language change. In the more dynamic process of language acquisition and language change, functional factors have been argued to play a role. In both language acquisition and language change, it has been argued that competing motivations among functional principles play a major role.

The concept of competing motivations is that (functional) principles may come into conflict such that there is no grammatical system that satisfies all of the functional principles. As a result, change occurs over time in acquisition and in the history of a language, and the languages of the world exhibit structural diversity even though their speakers’ linguistic behavior conforms to the same functional principles. (Competing motivation models do not presuppose that the competing principles are functional, and in fact competing motivation models are now used by many formalists; see below).

The most general and commonly offered example of competing motivations is that between economy and iconicity (see Linguistics: Iconicity). Economy is the principle that a speaker uses the least effort necessary to express himself or herself. For example, English leaves the singular number of nouns unexpressed: book-ϕ vs. book-s. Economy is considered to be a speaker-oriented functional principle: its influence on language is for the benefit of the speaker. Iconicity is, in part, the principle that all of the relevant parts of the meaning conveyed are in fact conveyed by grammatical elements in the utterance (words, inflections, etc.). For example, in some languages, both the singular and the plural of a noun are expressed by overt suffixes. This aspect of iconicity is considered to be hearer-oriented: any aspect of meaning left out by the speaker may not be recoverable by the hearer. Economy and iconicity compete with each other: a linguistic expression that is economical will not be iconic (since it leaves some elements of meaning unexpressed), and an expression that is iconic will not be economical (since certain elements of meaning are not left unexpressed). Hence, across languages, there is diversity in the expression of the category of number, and languages change from one form of expression of number to another over time.

Perhaps the best known competing motivations model in language acquisition is the competition model of E. Bates and B. MacWhinney (1989). In language change, a number of linguists have put forward competing motivations models, in particular J. Haiman (1985).

More recently, some functionalist linguists (P. Hopper, J. Haiman, and J. Bybee) have emphasized the dynamic character of language in ordinary use, and have argued that a speaker’s grammatical knowledge should not be considered to be as static and immutable as is usually believed. They argue that a speaker’s grammatical knowledge is not a tightly integrated system, but rather a more loosely structured inventory of conventionalized routines that have emerged through language use. The empirical research of the functionalist linguists who advocate this view has focused on the role of frequency of use on the entrenchment of grammatical knowledge, and on inductive models of abstracting grammatical knowledge from exposure to language use, both in acquisition and in adult usage. There have been a number of studies of inflectional paradigms of words that support the hypothesis that frequency of use influences grammatical representation; however, studies of syntactic construction in this approach are in their infancy.

URL: https://www.sciencedirect.com/science/article/pii/B0080430767029466

Language and Society: Cultural Concerns

C. Goddard, A. Wierzbicka, in International Encyclopedia of the Social & Behavioral Sciences, 2001

The vocabulary, grammatical structure and usage conventions of any language are linked in innumerable ways with the social, cultural, and historical experience of its speakers. Drawing on examples from many languages, this article demonstrates the nature and range of these links. It highlights the culture-specific nature of many words, including terms for social categories, emotions, and value concepts. It explains the notion of cultural key words and the significance of lexical elaboration. In the domain of grammar, the article shows how grammatical constructions and marking may express and reflect culture-related meanings. In the realm of language in use, topics considered include discourse particles and interjections, linguistic routines, speech genres and speech styles, and broader discourse styles, demonstrating diverse ways in which language use can be related to differing cultural norms, values, and attitudes.

URL: https://www.sciencedirect.com/science/article/pii/B0080430767046118

Creativity in Computing and DataFlow SuperComputing

D. Bojić, M. Bojović, in Advances in Computers, 2017

Abstract

Parsing is the task of analyzing grammatical structures of an input sentence and deriving its parse tree. Efficient solutions for parsing are needed in many applications such as natural language processing, bioinformatics, and pattern recognition. The Cocke–Younger–Kasami (CYK) algorithm is a well-known parsing algorithm that operates on context-free grammars in Chomsky normal form and has been extensively studied for execution on parallel machines. In this chapter, we analyze the parallelizing opportunities for the CYK algorithm and give an overview of existing implementations on different hardware architectures. We propose a novel, efficient streaming dataflow implementation of the CYK algorithm on reconfigurable hardware (Maxeler dataflow engines), which achieves 18–76 × speedup over an optimized sequential implementation for real-life grammars for natural language processing, depending on the length of the input string.
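The recurrence that such parallel implementations accelerate is itself short. Below is a minimal sequential sketch of the CYK recognizer in Python, with a toy Chomsky-normal-form grammar (illustrative only, not one of the real-life NLP grammars benchmarked in the chapter):

```python
from itertools import product

def cyk(words, grammar, start="S"):
    """CYK recognizer. `grammar` maps each nonterminal to a set of
    right-hand sides: binary (B, C) or lexical (word,), all in CNF."""
    n = len(words)
    # table[i][j] holds the nonterminals deriving words[i : i + j + 1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][0] = {a for a, rhss in grammar.items() if (w,) in rhss}
    for span in range(2, n + 1):            # substring length
        for i in range(n - span + 1):       # substring start
            for split in range(1, span):    # split point
                left = table[i][split - 1]
                right = table[i + split][span - split - 1]
                for a, rhss in grammar.items():
                    if any((b, c) in rhss for b, c in product(left, right)):
                        table[i][span - 1].add(a)
    return start in table[0][n - 1]

# Toy CNF grammar: S -> A B, A -> 'a', B -> 'b'
g = {"S": {("A", "B")}, "A": {("a",)}, "B": {("b",)}}
print(cyk(["a", "b"], g))   # True
print(cyk(["b", "a"], g))   # False
```

The triple loop over span, start, and split point is the source of the parallelism the chapter exploits: all cells of a given span can be filled independently.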

URL: https://www.sciencedirect.com/science/article/pii/S0065245816300602

Parsers

Keith D. Cooper, Linda Torczon, in Engineering a Compiler (Third Edition), 2023

3.2.4 Encoding Meaning into Structure

The if-then-else ambiguity points out the relationship between meaning and grammatical structure. However, ambiguity is not the only situation where meaning and grammatical structure interact. Consider the parse tree that would be built from a rightmost derivation of the simple expression a + b × c.

One natural way to evaluate the expression is with a simple postorder treewalk. It would first compute a + b and then multiply that result by c to produce the result (a + b) × c. This evaluation order contradicts the classic rules of algebraic precedence, which would evaluate it as a + (b × c). Since the ultimate point of parsing the expression is to produce code that will implement it, the expression grammar should have the property that it builds a tree whose “natural” treewalk evaluation produces the correct result.

The real problem lies in the structure of the grammar on page 92. It treats all the arithmetic operators in the same way, ignoring precedence. A rightmost derivation of a + b × c generates a different parse tree than does a leftmost derivation of the same string. The grammar is ambiguous.

The example in Fig. 3.1 showed a string with parentheses. The parentheses forced the leftmost and rightmost derivations into the same parse tree. That extra production in the grammar added a level to the parse tree that, in turn, forces the same evaluation order independent of derivation order.

We can use this effect to encode levels of precedence into the grammar. First, we must decide how many levels of precedence are required. The simple expression grammar needs three precedence levels: highest precedence for ( and ), medium precedence for × and ÷, and lowest precedence for + and -. Next, we group the operators at distinct levels and use a nonterminal to isolate that part of the grammar. Fig. 3.2 shows the resulting grammar; it adds a start symbol, Goal, and a production for the terminal symbol name.

Figure 3.2. The Classic Expression Grammar.

In the classic expression grammar, Expr forms a level for + and -, Term forms a level for × and ÷, and Factor forms a level for ( and ). The modified grammar derives a parse tree for a + b × c that models standard algebraic precedence.

A postorder treewalk over this parse tree will first evaluate b × c and then add the result to a. The grammar enforces the standard rules of arithmetic precedence. This grammar is unambiguous; the leftmost derivation produces the same parse tree.
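The way each precedence level maps to one nonterminal can be made concrete with a small recursive-descent evaluator, one function per level. This is a sketch, not code from the chapter; the function names mirror the Expr/Term/Factor structure, and the token format (a list of strings, with / standing in for ÷) is an assumption:

```python
def evaluate(tokens):
    """Postorder evaluation via recursive descent, one function per
    precedence level: expr (+,-), term (*,/), factor (parens, numbers)."""
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def advance():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def factor():                      # Factor -> ( Expr ) | num
        if peek() == "(":
            advance()                  # consume '('
            value = expr()
            advance()                  # consume ')'
            return value
        return float(advance())

    def term():                        # Term -> Term (* | /) Factor | Factor
        value = factor()
        while peek() in ("*", "/"):
            op = advance()
            rhs = factor()
            value = value * rhs if op == "*" else value / rhs
        return value

    def expr():                        # Expr -> Expr (+ | -) Term | Term
        value = term()
        while peek() in ("+", "-"):
            op = advance()
            rhs = term()
            value = value + rhs if op == "+" else value - rhs
        return value

    return expr()

# Precedence falls out of the call structure: * binds before +
print(evaluate(["2", "+", "3", "*", "4"]))                # 14.0
print(evaluate(["(", "2", "+", "3", ")", "*", "4"]))      # 20.0
```

Because `expr` calls `term`, which calls `factor`, the multiplication is complete before the addition sees its result, exactly the evaluation order the layered grammar enforces.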

Representing the Precedence of Operators

Thompson’s construction must apply its three transformations in an order that is consistent with the precedence of the operators in the regular expression. To represent that order, an implementation of Thompson’s construction can build a tree that represents the regular expression and its internal precedence. The RE a(b|c)* produces a tree with a concatenation node at its root: its left subtree is the leaf a, and its right subtree is a closure node over the alternation b|c. The parentheses are folded into the structure of the tree and, thus, have no explicit representation.

The construction applies the individual transformations in a postorder walk over the tree. Since transformations correspond to operations, the postorder walk builds the following sequence of NFAs: a, b, c, b|c, (b|c)*, and, finally, a(b|c)*. Section 5.3 discusses a mechanism to build expression trees.
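That postorder ordering is easy to reproduce. A small sketch follows; the tuple encoding of the operator tree is an assumption for illustration, not the book's representation:

```python
def re_text(node):
    """Render a subtree back to regular-expression text."""
    if isinstance(node, str):
        return node
    op, *kids = node
    if op == "|":
        return re_text(kids[0]) + "|" + re_text(kids[1])
    if op == "*":
        return "(" + re_text(kids[0]) + ")*"
    return "".join(re_text(k) for k in kids)    # '.' = concatenation

def build_order(node):
    """Postorder walk: the order in which Thompson's construction
    would build an NFA for each subexpression."""
    order = []
    def walk(n):
        if isinstance(n, str):       # leaf: single symbol
            order.append(n)
            return
        for kid in n[1:]:            # visit operands first
            walk(kid)
        order.append(re_text(n))     # then the operator node itself
    walk(node)
    return order

# Operator tree for a(b|c)*: concatenation of 'a' with the closure of b|c
tree = (".", "a", ("*", ("|", "b", "c")))
print(build_order(tree))  # ['a', 'b', 'c', 'b|c', '(b|c)*', 'a(b|c)*']
```

The walk visits every operand before its operator, so each transformation is applied only after the NFAs it combines already exist.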

The changes affect derivation lengths and parse tree sizes. The new nonterminals that enforce precedence add steps to the derivation and interior nodes to the tree. At the same time, moving the operators inline eliminated one production and one node per operator.

Other operations require high precedence. For example, array subscripts should be applied before standard arithmetic operations. This ensures, for example, that a + b[i] evaluates b[i] to a value before adding it to a, as opposed to treating i as a subscript on some array whose location is computed as a + b. Similarly, operations that change the type of a value, known as type casts in languages such as C or Java, have higher precedence than arithmetic operations but lower precedence than parentheses or subscript operations.

If the language allows assignment inside expressions, the assignment operator should have low precedence. This ensures that the code completely evaluates both the left-hand side and the right-hand side of the assignment before performing the assignment. If assignment (←) had the same precedence as addition, for example, a left-to-right evaluation of a ← b + c would assign b’s value to a rather than evaluating b + c and then assigning that result to a.

URL: https://www.sciencedirect.com/science/article/pii/B9780128154120000097

Language Obsolescence

N.C. Dorian, in International Encyclopedia of the Social & Behavioral Sciences, 2001

1.2 Assessing Structural Changes by Sampling along the Proficiency Continuum

By sampling the usage of speakers at various points along the proficiency continuum, ongoing changes in phonological and grammatical structure can be identified in an obsolescent language. Caution is necessary both in identifying and in interpreting evidence of change, however. In assessing change, the norms against which any deviation is measured must be established strictly in terms of the most conservative local usage. Local norms are essential, since any standardized or official form of the language may be historically irrelevant to the local form, based mainly on quite different dialects, for example, or relatively recently devised. Furthermore, structural changes found to be in progress are not necessarily the unequivocal result of obsolescence as such since intensive contact with another language often promotes structural change, whether language shift is underway or not. Reports of structural convergence appear, for example, both where a language long in contact with others has become obsolescent (as with Tariana, cited above) and where languages long in contact with one another all continue to be maintained (as in the Urdu, Marathi, and Kannada case discussed by Gumperz and Wilson 1971). Generally speaking, the broader the age-range encompassed by the proficiency continuum and the larger the number of speakers sampled at various points along it, the greater the likelihood that changes to which obsolescence contributes substantially can be identified. Where circumstances permit, longitudinal sampling can offer important additional evidence, by determining the degree to which innovative structural features once absent or rare among the oldest and most conservative speakers establish themselves subsequently among those who were previously the younger speakers.

URL: https://www.sciencedirect.com/science/article/pii/B0080430767030345

Mobile digital storytelling in the second language classroom

Apostolos Koutropoulos, … Ronda Zelezny-Green, in The Plugged-In Professor, 2013

Abstract:

This assignment facilitates the acquisition of etic perspectives of the target culture, as well as new vocabulary and new grammatical structures, through the use of mobile digital storytelling. Students are asked to explore a target culture through the lens of a nonnative of that culture, and then asked to put together a short narrated video on a given topic. Students can use a smartphone (or iPod Touch) to record video clips and then edit them together on their phone to create a longer video. Videos can then be uploaded to YouTube where fellow students will be able to view and comment on them. Although the original assignment was aimed at intermediate to advanced students in a language-learning classroom, the assignment can be utilized in other disciplines that deal with issues of group identity and self-identity.

URL: https://www.sciencedirect.com/science/article/pii/B9781843346944500181

Gene Finding: Putting the Parts Together

Anders Krogh, in Guide to Human Genome Computing (Second Edition), 1998

3 STATE MODELS

The idea of combining the predictions into a complete gene structure is that the ‘grammatical’ constraints can rule out some wrong exon assemblies. The grammatical structure of the problem has been stressed by David Searls (Searls, 1992; Dong and Searls, 1994), who also proposed using the methods of formal grammars from computer science and linguistics. The dynamic programming can often be described conveniently by some sort of finite state automaton (Searls and Murphy, 1995; Durbin et al., 1998). A model might have a state for translation start (S), one for donor sites (D), one for acceptor sites (A) and one for translation termination (T). Each time a transition is made from one state to another, a score (or a penalty) is added. For the transition from the donor state to the acceptor state, the intron score is added to the total score, and so on. In Figure 11.2 the state diagram is shown for the simple dynamic programming algorithm above. For each variable in the algorithm there is a corresponding state with the same name; a begin state and an end state are also needed.

Figure 11.2. A finite state automaton corresponding to the simple DP algorithm.

The advantage of such a formulation is that the dynamic programming for finding the maximum score (or minimum penalty) is of a more general type, and therefore adding new states or new transitions is easy. For instance, drawing the state diagram for a more general dynamic programming algorithm that allows for any number of genes and also partial genes is straightforward (Figure 11.3), whereas writing it down explicitly would be laborious. Similarly, the state diagram for the frame-aware algorithm sketched out above is shown in Figure 11.4.

Figure 11.3. The model of Figure 11.2 with transitions added that allow for prediction of any number of genes and partial genes where the sequence starts or ends in the middle of an exon or intron.

Figure 11.4. A model that ensures frame consistency throughout a gene. As in the two previous figures, dotted lines correspond to intergenic regions, dashed to introns and full lines to coding regions (exons).

If the scores used are log probabilities or log odds, then a finite state automaton is essentially a hidden Markov model (HMM), and these have been introduced recently into gene finding by several groups. The only fundamental difference from the dynamic programming schemes discussed above is that these models are fully probabilistic, which has certain advantages. One of the advantages is that the weighting problem is easier.

VEIL (Henderson et al., 1997) is an application of an HMM to the gene finding problem. In this model all the sensors are HMMs. The exon module is essentially a first-order inhomogeneous Markov chain, which is described above. This is the natural order for implementation in an HMM, because then each of the conditional probabilities of the inhomogeneous Markov chain corresponds to the probability of a transition from one state to the next in the HMM. It is not possible to avoid stop codons in the reading frame when using a first-order model, but in VEIL a few more states are added in a clever way, which makes the probability of a stop codon zero. Sensors for splice sites are made in a similar way. The individual modules are then combined essentially as in Figure 11.2 (i.e. frame consistency is not enforced). The combined model is one big HMM, and all the transitions have associated probabilities. These probabilities can be estimated from a set of training data by a maximum likelihood method. For combining the models this essentially boils down to counting occurrences of the different types of transitions in the dataset. Therefore, the implicit weighting of the individual sensors is not really an issue.

Although the way the optimal gene structure is found is similar in spirit to the dynamic programming above, it looks quite different in practice. This is because the dynamic programming is done at the level of the individual states in all the submodels; there are more than 200 such states in VEIL. Because the model is fully probabilistic, one can calculate the probability of any sequence of states for a given DNA sequence. This state sequence (called a path) determines the assignment of exons and introns. If the path goes through the exon model, that part of the sequence is labelled as exon; if it goes through the intron model it is labelled intron, and so forth. The dynamic programming algorithm, which is called the Viterbi algorithm, finds the most probable path through the model for a given sequence, and from this the predicted gene structure is derived (see Rabiner (1989) for a general introduction to HMMs).
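The Viterbi recurrence itself is compact. Below is a minimal log-space sketch in Python with a toy two-state exon/intron model; the probabilities are invented for illustration and are not VEIL's parameters:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable state path for an observation sequence,
    computed in log space to avoid numerical underflow."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]])
          for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # best predecessor state for s at position t
            prev = max(states,
                       key=lambda p: V[t - 1][p] + math.log(trans_p[p][s]))
            V[t][s] = (V[t - 1][prev] + math.log(trans_p[prev][s])
                       + math.log(emit_p[s][obs[t]]))
            back[t][s] = prev
    # trace the best path backwards from the best final state
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

# Toy model: exon states GC-rich, intron states AT-rich (made-up numbers)
states = ("exon", "intron")
start = {"exon": 0.5, "intron": 0.5}
trans = {"exon": {"exon": 0.9, "intron": 0.1},
         "intron": {"exon": 0.1, "intron": 0.9}}
emit = {"exon": {"A": 0.1, "C": 0.4, "G": 0.4, "T": 0.1},
        "intron": {"A": 0.4, "C": 0.1, "G": 0.1, "T": 0.4}}

print(viterbi("GCGCATAT", states, start, trans, emit))
# ['exon', 'exon', 'exon', 'exon', 'intron', 'intron', 'intron', 'intron']
```

With these numbers the GC-rich prefix is labelled exon and the AT-rich suffix intron; in a real gene finder such as VEIL the same recurrence runs over hundreds of states rather than two.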

This probabilistic model has the advantage of solving the problem of weighting the individual sensors. The maximum likelihood estimation of the parameters can be shown to be optimal if there are sufficient training data, and if the statistical nature of genes can be described by such a model. A weak part of VEIL is the first-order exon model, which is probably not capable of capturing the statistics of coding regions, and most other methods use fourth- or fifth-order models.

An HMM-based gene finder called HMMgene is currently being developed. The basic method is the same as in VEIL, but it includes several extensions to the standard HMM methodology, which are described by Krogh (1997). One of the most important is that coding regions are modelled by a fourth-order inhomogeneous Markov chain instead of a first-order chain. This is done by an almost trivial extension of the standard HMM formalism, which allows a Markov chain of any order in a state of the model, whereas the standard HMM has a simple unconditional probability distribution over the four bases (corresponding to 0th order). The model is frame-aware and can predict any number of genes and partial genes, so the overall structure of the model is as in Figure 11.4 with transitions added to allow for begin and end in introns, as in Figure 11.3.

As already mentioned, the maximum likelihood estimation method works well if the model structure can describe the true statistics of genes. This is a very idealized assumption, and therefore HMMgene uses another method for estimating the parameters called conditional maximum likelihood (Juang and Rabiner, 1991; Krogh, 1994). Loosely speaking, maximum likelihood maximizes the probability of the DNA sequences in the training set, whereas conditional maximum likelihood maximizes the probability of the gene structures of these sequences, which, after all, is what we are interested in. This kind of optimization is conceptually similar to that used in GeneParser, where the prediction accuracy is also optimized. HMMgene also uses a dynamic programming algorithm different from the Viterbi algorithm for prediction of the gene structure. All of these methods have contributed to a high performance of HMMgene.

Genie is another example of a probabilistic state model which is called a generalized HMM (Kulp et al., 1996; Reese et al., 1997). Figure 11.4 is in fact Genie’s state structure, and both this figure and Figure 11.2 are essentially copied from Kulp et al. (1996). In Genie, the signal sensors (splice sites) and content sensors (coding potential) are neural networks, and the output of these networks is interpreted as probabilities. This interpretation requires estimation of additional probability parameters which work like weights on the sensors. So, although it is formulated as a probabilistic model, the weighting problem still appears in disguise. The algorithm for prediction is almost identical to the dynamic programming algorithm of the last section. A version of Genie also includes database similarities as part of the exon sensor (Kulp et al., 1997).

There are two main advantages of generalized HMMs compared with standard HMMs. First, the individual sensors can be of any type, such as neural networks, whereas in a standard HMM they are restricted by the HMM framework. Second, the length distribution (of, for example, coding regions) can be taken into account explicitly, whereas the natural length distribution for an HMM is a geometric distribution, which decays exponentially with the length. However, it is possible to have a fairly advanced length modelling in an HMM if several states are used (Durbin et al., 1998). The advantage of a system like HMMgene, on the other hand, is that it is one integrated model, which can be optimized all at once for maximum prediction accuracy.

Another gene finder based on a generalized HMM is GENSCAN (Burge and Karlin, 1997). The main difference between the GENSCAN state structure and that of Genie or HMMgene is that GENSCAN models the sequence in both directions simultaneously. In many gene finders, such as those described above, genes are first predicted on one strand, and then on the other. Modelling both strands simultaneously was done very successfully in GeneMark, and a similar method is implemented in GENSCAN. One advantage (and perhaps the main one) is that this construction avoids predictions of overlapping genes on the two strands, which presumably are very rare in the human genome. GENSCAN models any number of genes and partial genes, like HMMgene. The sensors in GENSCAN are similar to those used in HMMgene. For instance, the coding sensor is a fifth-order inhomogeneous Markov chain. The signal sensors are essentially position-dependent weight matrices, and thus are also very similar to those of HMMgene, but there are more advanced features in the splice site models. GENSCAN also models promoters and the 5′ and 3′ UTRs.

URL: https://www.sciencedirect.com/science/article/pii/B978012102051450012X

Sapir, Edward (1884–1939)

V. Golla, in International Encyclopedia of the Social & Behavioral Sciences, 2001

2.2 The Professionalization of American Linguistics

By 1921, Sapir was able to draw on the analytic details of the American Indian languages on which he had worked to illustrate, in his book Language, the wide variety of grammatical structures represented in human speech. One of the most significant impacts of this highly successful book was to provide a model for the professionalization of academic linguistics in the US during the 1920s and early 1930s.

Under Sapir’s guidance, a distinctive American School of linguistics arose, focused on the empirical documentation of language, primarily in field situations. Although most of the students Sapir himself trained at Chicago and Yale worked largely if not exclusively on American Indian languages, the methods that were developed were transferable to other languages. In the mid-1930s Sapir directed a project to analyze English, and during World War II many of Sapir’s former students were recruited to use linguistic methods to develop teaching materials for such strategically important languages as Thai, Burmese, Mandarin, and Russian.

A distinction is sometimes drawn between the prewar generation of American linguists—dominated by Sapir and his students and emphasizing holistic descriptions of American Indian languages—and the immediate postwar generation, whose more rigid and focused formal methods were codified by Leonard Bloomfield and others. Since many of the major figures of ‘Bloomfieldian’ linguistics were Sapir’s students, this distinction is somewhat artificial, and a single American Structuralist tradition can be identified extending from the late 1920s through 1960 (Hymes and Fought 1981). There is little doubt that Sapir’s influence on this tradition was decisive.

URL: https://www.sciencedirect.com/science/article/pii/B0080430767003296

Sentence Comprehension, Psychology of

C. CliftonJr., in International Encyclopedia of the Social & Behavioral Sciences, 2001

3.5 Effects of Context

Sentences are normally not interpreted in isolation, which raises the question of how context can affect the processes by which they are understood. One line of research has emphasized the referential requirements of grammatical structures. One use of a modifier, such as a relative clause, is to select one referent from a set of possible referents. For instance, if there are two books on a table, you can ask for the ‘book that has the red cover.’ Some researchers (e.g., Altmann and Steedman 1988) have suggested that the difficulty of the relative clause in a ‘horse raced’ type garden-path arises simply because the sentence is presented out of context and there is no set of referents for the relative clause to select from. This suggestion entails that the garden-path would disappear in a context in which two horses are introduced, one of which was raced past a barn. Most experimental research fails to give strong support to this claim (cf. Mitchell 1994, Tanenhaus and Trueswell 1995). However, the claim may be correct for weaker garden-paths, for example the prepositional phrase attachment discussed above (‘The doctor examined the patient with a broken wrist’), at least when the verb does not require a prepositional phrase argument. And a related claim may even be correct for relative clause modification when temporal relations, not simply referential context, affect the plausibility of main verb vs. relative clause interpretations (cf. Tanenhaus and Trueswell 1995).

URL: https://www.sciencedirect.com/science/article/pii/B0080430767015497

Algebra, Abstract

KiHang Kim, Fred W. Roush, in Encyclopedia of Physical Science and Technology (Third Edition), 2003

II.F Mathematical Linguistics

We start with a set W of basic units considered words. Mathematical linguistics is concerned with the formal theory of sentences, that is, with which sequences of words are grammatically allowed, and with the grammatical structure of sentences (or longer units). This is syntax. Meaning (semantics) is usually not dealt with.

For a set X, let X* be the set of finite sequences from X including the empty sequence e. For instance, if X is {0, 1}, then X* is {e, 0, 1, 00, 01, 10, 11, 000 ⋯}. For a more important example, we can consider the family of all sequences of logical variables p, q, r, and ∨ (or), ∧ (and), (,) (parentheses), → (if then), ~ (not). The set of logical formulas will be a subset of this.

A phrase structure grammar is a quadruple (N, W, ρ, ψ), where W (the set of words) is nonempty and finite, N (the nonterminals) is a finite set disjoint from W, ψ ∈ N, and ρ is a finite subset of ((N ∪ W)* \ W*) × (N ∪ W)*. The set N is a set of valid grammatical forms involving abstract concepts such as ψ (sentence), subject, predicate, or object. The set ρ (the productions) is a set of ways we can substitute into a valid grammatical form to obtain another, more specific one. The element ψ is called the starting symbol. If (x, y) ∈ ρ, we are allowed to replace any occurrence of x with y. Members of W are called terminals.

Consider logical formulas involving the operations ∨, ~ and the variables p, q, r. Let W = {p, q, r, ∨, (, ), ~}, N = {ψ}. We derive formulas by successive substitution: ψ, (ψ ∨ ψ), ((ψ ∨ ψ) ∨ ψ), ((ψ ∨ ψ) ∨ ~ψ), ((p ∨ ψ) ∨ ~ψ), ((p ∨ q) ∨ ~ψ), ((p ∨ q) ∨ ~r).

An element y ∈ (N ∪ W)* is said to be directly derived from x ∈ (N ∪ W)* if x = azb, y = awb for some (z, w) ∈ ρ and a, b ∈ (N ∪ W)*. An indirect derivation is a sequence of direct derivations. Here ρ = {(ψ, p), (ψ, q), (ψ, r), (ψ, ~ψ), (ψ, (ψ ∨ ψ))}.
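The substitution mechanism just described is easy to simulate. The sketch below is an illustrative assumption, not part of the source: it encodes sequences over N ∪ W as Python strings, writes ψ as `'S'` and ∨ as `'v'`, and searches breadth-first for an indirect derivation.

```python
# Sketch of direct/indirect derivation for the grammar above (assumptions:
# sequences over N ∪ W are Python strings, ψ is written 'S', ∨ is written 'v').

RHO = [("S", "p"), ("S", "q"), ("S", "r"),
       ("S", "~S"), ("S", "(SvS)")]

def direct_derivations(x, rho=RHO):
    """All y = awb directly derived from x = azb with (z, w) in rho."""
    out = set()
    for z, w in rho:
        i = x.find(z)
        while i != -1:
            out.add(x[:i] + w + x[i + len(z):])
            i = x.find(z, i + 1)
    return out

def derives(x, y, rho=RHO, max_steps=10):
    """Breadth-first search for an indirect derivation of y from x."""
    frontier = {x}
    for _ in range(max_steps):
        if y in frontier:
            return True
        # prune: every production in this grammar keeps or grows string length
        frontier = {d for s in frontier
                    for d in direct_derivations(s, rho) if len(d) <= len(y)}
    return y in frontier

print(derives("S", "((pvq)v~r)"))  # True, via the derivation shown above
```

The length-based pruning is safe only because no production here shortens the string; a general phrase structure grammar would need an unbounded search.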

The language determined by a phrase structure grammar is the set of all a ∈ W* that can be derived from ψ.

A grammar is called context free if and only if for all (a, b) ∈ ρ we have a ∈ N and b ≠ e. This means that which items can be substituted for a given grammatical element does not depend on other grammatical elements. The grammar above is context free.

A grammar is called regular if for all (a, b) ∈ ρ we have a ∈ N and b = tn, where t ∈ W and n ∈ N, or n = e. This means that at each derivation step we go from t₁t₂⋯tᵣn to t₁t₂⋯tᵣtᵣ₊₁m, where the tᵢ are terminals, n and m are nonterminals, and (n, tᵣ₊₁m) ∈ ρ. So we fill in one terminal at each step, going from left to right. The grammar mentioned above is not regular.

To recognize a grammar is to be able to tell whether or not a sequence from W* is in the language. A grammar is regular if and only if some finite state machine recognizes it. The elements of W are input one at a time and the outputs are “yes” or “no,” meaning that the symbols up to the present do or do not form a word in the language. Let the internal states of the machine be in 1–1 correspondence with the subsets of N, and let the initial state be {ψ}. For a set S1 of nonterminals and input x, let the next state be the set S2 of all nonterminals z such that for some u ∈ S1, (u, xz) is a production. Then at any time the state consists of all nonterminals that could occur after the given sequence of inputs. Let the output be “yes” if and only if for some u ∈ S1, (u, x) ∈ ρ; this holds if and only if the previous inputs together with the current input form a word in the language.
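The machine described in this paragraph can be sketched directly: the state is a set of nonterminals starting from {ψ}, and each input symbol maps it forward through the productions. The toy regular grammar below (S → aS | b, generating the language a*b) is an assumption chosen for illustration, not taken from the text.

```python
# Recognizer for a regular grammar via the subset construction described above.
# Toy regular grammar (an illustrative assumption): S -> aS | b, language a*b.

rho = [("S", ("a", "S")),   # pair (n1, t n2): from S, read 'a', stay in S
       ("S", ("b",))]       # pair (n1, t): from S, reading 'b' ends a word

def recognize(word, rho, start="S"):
    """Return the machine's output after the last input symbol of `word`."""
    state = {start}          # current set of possible nonterminals
    answer = False
    for x in word:
        # output "yes" iff some u in the state has a completing production (u, x)
        answer = any(u in state and rhs == (x,) for u, rhs in rho)
        # next state: all z with (u, x z) in rho for some u in the state
        state = {rhs[1] for u, rhs in rho
                 if u in state and len(rhs) == 2 and rhs[0] == x}
    return answer

print(recognize("aab", rho))  # True
print(recognize("aba", rho))  # False
```

An input that leaves the machine in the empty set of nonterminals can never be extended to a word, mirroring a dead state in the finite automaton.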

For the converse, if a finite state machine recognizes a language, let W be the alphabet of the language, N the set of internal states, and ψ the initial state; take as productions the set of pairs (n1, xn2) such that if the machine is in state n1 and x is input, n2 is the next state, together with the set of pairs (n1, x) such that in state n1, after input x, the machine answers “yes.”

A further characterization of regular languages is the Myhill–Nerode theorem. Regard W* as the semigroup of words. Then a language L ⊂ W* is regular if and only if the congruence {(x, y) ∈ W* × W* : axb ∈ L if and only if ayb ∈ L, for all a, b ∈ W*} has finitely many equivalence classes. This holds if and only if there exist a finite semigroup H, an onto homomorphism h : W* → H, and a subset S ⊂ H such that L = h−1(S).

URL: https://www.sciencedirect.com/science/article/pii/B0122274105000193

In English structure and grammar, there are specified ways of using words to achieve the right grammatical structure in sentences. In informal settings (and even in some professional situations), these specifics are often thrown out the window, either because they feel too formal or simply for lack of knowledge of them. If you write a lot, you should not be among those writers who think sticking to grammar and sentence structure rules is unnecessary “because, in the end, all that matters is that you get to communicate.” Communication is of utmost importance, but if you have to write something down and you do not do it the right way, you may end up not communicating at all; we all need help with sentence structure and grammar from time to time. Structure is really important in almost everything in this life, and spoken and written English are no exception. What exactly is it, and how can you check for it in your work? Do you need a quality simple, compound, or complex sentence checker tool?

What Is Grammatical Structure in English Language?

Grammatical structure in the English language is simply the arrangement of words, phrases, and clauses in a sentence. Naturally, this definition brings to mind the parts of speech, which are the building blocks of grammar and sentence structure. It is not enough to know what these parts of speech are; it pays to know how they are used in communicating and how they are organized to form phrases and sentence structures that make sense to the reader or listener. So the structure and organization of a sentence is what grammatical structure in English is all about, and it depends heavily on what is called syntax, or syntactic structure. Grammatical structure demands that words and phrases be arranged in a certain manner to create a well-formed sentence; that is what syntactic structure is all about.

Types of Sentence Structure

There are four basic types of sentence structure in English language.

They are as follows:

  • Simple sentence
  • Compound sentence
  • Complex sentence
  • Compound-complex sentence

A good understanding of these four basic sentence structure types will help with sentence structure and grammar. Here is a brief overview of what they are, with some examples.

Simple sentences

A simple sentence is a sentence whose grammar structure contains only one independent clause and no dependent clauses at all.

Examples:

  • He went home after the meeting.
  • Jimmy loves potatoes.
  • Mother is a teacher.

These are examples of the simplest grammar and sentence structures in the English language. They form the basis of other more complex sentences.

Compound sentences

Grammar and sentence structures that contain two or more simple sentences are defined as compound sentences. They are just like simple sentences, only at least doubled: a compound sentence has two or more independent clauses.

Examples:

  • Sarah went to school; Lily slept at home.
  • I was running late, but I decided to take the bus.
  • Ella swept the floor, and Tim washed the window.

The notable thing about compound sentences is that no matter how your grammar structure is presented, it signals to the reader that you are discussing two different ideas that are equally important.

Complex sentences

Another grammar structure considered one of the basic types of English sentence structure is the complex sentence. This structure contains an independent clause (or main clause) and at least one dependent clause. So the main clause and one or more dependent clauses, put together in a sentence with an appropriate conjunction or pronoun, is all it takes to come up with a complex grammar structure in English.

Examples:

  • Desmond laughed when his father dropped the hot kettle upside down on the floor.
  • Because she was so smart, Jane was always one of the best in her class.
  • She learned a memorable lesson about cheating after she changed the mark on her report card last year.

As you can see, the complex sentence is more involved than the structures discussed earlier. Knowing it is essential to better sentence structure in English grammar.

Compound-complex sentences

Of all the basic types of grammatical structure in English, the compound-complex structure is the most complex. It combines elements of the compound and complex structures and, as such, is the most sophisticated type of sentence you can use.

Examples:

  • Her blue car was washed clean, bright and sparkling on the road, but the tires did not run smoothly, as though they needed some repairs and replacements.
  • She gave me another of her cold, keen looks, and I could see that she was again asking herself whether I was responsible for breaking the air conditioner in the common room.
  • In the company, everyone is of the opinion that I have no superiors, since I exhibit so much confidence and interact assertively with everyone; but I do not think I have any inferiors either, because, to me, all men are equal.

Understanding the compound-complex grammatical structure will help you take your writing to a new level of complexity in a really good way, provided the English grammar sentence structure rules are followed. This is basically all you need to know about the most common sentence structures in the English language, and quite frankly, they can come through for you in any form of writing you may wish to do.

Other factors that are just as helpful as the basic structure types in forming good sentences and grammar are the:

  • Parts of speech
  • Sentence coordination
  • Adjective Clauses
  • Appositives
  • Adverb Clauses
  • Absolute phrases
  • Participial phrases
  • Four functional types of sentences

All of these can help with sentence structure and grammar formation in writing and speaking. It is one thing to know how to make correct English sentences; it is another thing to use them correctly in writing. With constant practice, anyone can learn all of this and get used to using perfect grammatical structures in the English language.


How to Check Grammatical Structures for Mistakes

If you pay close attention, you will notice from the examples above that punctuation can be everything. The meaning of a sentence can be totally altered by the punctuation used in it. As a writer, you may not readily see this in your own content, but that is no excuse to go right ahead and write wrong grammar structures all the time. You can always ask a second party to help you check for wrong grammatical structures, which can take more time than the writing took, or you can just use a grammar checker tool to check your sentences for incorrect grammar structures in as little time as possible.

Our free online English grammar checker is a simple-to-use software application developed to help writers do their best work with no errors. It is the sentence-structure-for-dummies tool you need to make your English better. Being human, you are bound to make mistakes every now and then as you write. And even after editing, there are times you miss important errors that need to be corrected. This is not always a result of an inability to write; as human beings, we are all susceptible to error, so we all need help with sentence structure and grammar formation. Even the most articulate professional writer needs a grammar checker tool to make sure their work is in perfect condition at all times.

Impeccable Sentence Structure Corrector Online

Improve your text by removing sentence structure mistakes through our specialized parts of speech checker tool online

Challenges in Correcting Sentence Structure

Before going into the challenges, let’s examine what sentence structure is. A meaningful clause with a subject, predicate, verb, and object, starting with a capital letter and ending with a period, is called a complete sentence. Correcting sentence structure in English is governed by numerous interlinked as well as independent sentence structure rules. Finding, fixing, and defining sentence structure issues while editing a professional document is a complex process. You need a full grasp of grammatical concepts and rules, as well as strong writing and editing skills and experience, to accomplish this task. Almost all writers, especially those who use English as a second language (ESL), face uphill challenges in correcting sentence structure; a few of the major ones include:

Having Extensive Command over Grammar and Linguistic Rules

The first and most important challenge for any new learner or writer correcting sentence structures is to understand and establish full command over the numerous grammatical and linguistic rules that govern the correct formation of a sentence. Sentence format, tenses, active and passive voice, direct and indirect narration, correct use of the parts of speech, proper punctuation, and sentence fragment rules are a few to name. For an ESL writer, this is the most fundamental challenge in correcting grammatical structures in writing.

Controlling Intervention of Native Language Paradigm

Outside a few native-speaking countries such as the UK, USA, Canada, Australia, and New Zealand, a sizeable number of English writers use English as a second language. For non-native writers, it is a big challenge to control the innate paradigm of sentence structure in their first language and the way it shapes their writing. They make mistakes by mixing native-language patterns into their writing. This is called interference of the native language, and it causes serious issues in correcting English sentence structures for non-native writers.

Overcoming Disparity in Inter-Language Grammar Rules

Similarly to native language interference, grammatical and writing rules differ across languages. Non-native writers first learn the grammatical rules of their native language, and those rules keep following them while they learn English. But in English, numerous rules are different, and some, such as the use of articles, are extra, being absent from many other languages. Thus, overcoming the disparity in inter-language grammar rules becomes a big challenge for ESL writers.

Differentiating Regular and Irregular Conjugation of Verb and Adjectives

Another unique type of challenge that writers, especially ESL writers, face is differentiating between regular and irregular changes in the formation of verb forms and degrees of adjectives. Irregular formation, or conjugation, of verbs and adjectives is not governed by any specific grammar rule; the forms were adopted as the language evolved. Where rules do apply, most of them have numerous exceptions. Thus, discerning between these two types of variation is a big challenge for writers.

Understanding the Correct Use of Articles in English Sentences

Articles are very special little words in the English language. They look simple and easy, but their correct use is tricky and complex. Everyone, especially an ESL writer, feels the heat of using them correctly in sentences. A large number of mistakes occur due to the misuse of articles, which poses a serious challenge for writers correcting sentence structures.

Our Grammar Checker Tool and Grammar Structures

There is a host of things our tool can do to help with sentence structure and grammar correctness:

  • Confidence: our grammar and sentence structure checker tool will not write for you or give you ideas on what to write; you need to do all that yourself. But once you know what you want and attempt to put it down in writing, the tool can go all the way to perfect it as you write and to convey the right meaning to your readers. This boosts your confidence in writing and in the English language generally (for people who may still be learning to speak English).
  • Saves time: it is understandable when all you want to do is write, not go back to check for one error or another. Proofreading can be a daunting task for a lot of people. Our tool has features that can proofread your work in very little time and produce flawless text with correct grammar and sentence structures. People who are short of time can really benefit from this.
  • Increases knowledge: some people have all the time in the world, as well as the best ideas and reasons to write, but they may lack knowledge of grammatical structures in English. Using this tool will help such people check grammar and sentence structure online and also enlighten them on the use of words and punctuation in English. This can help them become better in the language generally.

You can check grammatical structures in English and find the best free online grammar and sentence structure checker right here!

The word as a grammatical unit has its form (grammatical form) and meaning (lexical and grammatical). Grammatical forms of words (word forms) are typically constructed by morphemes added synthetically, or structurals added analytically:

Number: book – books, family – families, leaf – leaves.

Case: my sister’s children, the title of the book, the students’ papers.

Aspect: was drawing – drew, repaired – have repaired – have been repairing.

Degrees of comparison: cold – colder – the coldest, difficult – more difficult – the most difficult, less interesting – the least interesting.

By grammatical forms we understand variants of a word having the same lexical meaning but differing grammatically. In other words, the grammatical form (grameme) is the total of formal means to render a particular grammatical meaning.

There are the following ways of changing grammatical forms of words:

· The use of affixes as word-changing morphemic elements added to the root of the word: -(e)s (the plural of nouns, the possessive of nouns, the 3rd person singular of the Present Simple); -ing (Present Participle, Gerund); -er/-est (Comparative and Superlative degrees); -ed (the Past Simple of the Indicative Mood, the Subjunctive Mood, Past Participle).

· Sound interchange as the use of different root sounds in grammatical forms of a word, which may be either consonants or vowels (e.g. speak – spoke, crisis – crises, write – wrote, wife – wives, analysis – analyses).

· Suppletivity as creating grammatical forms of a word from different roots (e.g. far – further, he – him, bad – worse, was – been).

· Analytical forms being made up of two components: a notional word used as an unchanged element carrying a lexical meaning and a structural changed grammatically but expressing no lexical meaning (e.g. will be reading, can sing, will be able to translate, would bring, less expensive, the most beautiful).
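The synthetic/analytical contrast among these devices can be illustrated with a toy rule for degrees of comparison: short adjectives take the inflexion -er/-est (synthetic), while longer ones take the structural words more/most (analytical). The two-syllable cutoff and the vowel-group syllable count below are crude simplifying assumptions, not a statement of English morphology.

```python
import re

# Toy sketch of the synthetic/analytical contrast in degrees of comparison.
# Assumptions: adjectives of at most two "syllables" inflect with -er/-est;
# syllables are estimated by counting vowel-letter groups (very crude).

def syllables(adj):
    """Rough syllable estimate: number of maximal runs of vowel letters."""
    return len(re.findall(r"[aeiouy]+", adj))

def compare(adj):
    """Return the (comparative, superlative) pair for an adjective."""
    if syllables(adj) <= 2:
        base = adj[:-1] if adj.endswith("e") else adj
        return base + "er", base + "est"      # synthetic forms
    return "more " + adj, "most " + adj       # analytical forms

print(compare("cold"))       # ('colder', 'coldest')
print(compare("difficult"))  # ('more difficult', 'most difficult')
```

Real English has many exceptions (consonant doubling, y-to-i, suppletive forms such as far – further), which is exactly why such form-building devices have to be listed separately, as above.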

Grammatical forms, being on the plane of expression (form) and possessing morphemic features expressed either synthetically or analytically, convey certain grammatical meanings on the plane of content (meaning), shaped in morphology as meanings of number, case, degree, voice, tense, etc. The system of grammatical forms of a word is called a paradigm, with paradigmatic lines whose elements typically build up the so-called privative morphological opposition, based on a morphological differential feature (synthetical or analytical) present in its strong (marked) member and absent in its weak (unmarked) member. Compare: zero :: V-ed; zero :: shall/will + V; zero :: V-ing. Of minor types are the equipollent opposition (person forms of the verb ‘be’: am – is – are) and the gradual opposition (zero :: adj-er :: adj-est). Thus a grammatical paradigm is represented by the opposition of marked and non-marked members, specifically connected by paradigmatic relations, in order to express number, tense, mood, case, etc. The general grammatical meaning of two or more grammatical forms opposed to each other in a paradigm generates a grammatical category. The evidence is seen in the following examples:

the word forms ‘student, book’ denote singularity, while ‘books, students’ denote plurality; as opposed to each other in the paradigmatic series, they have one grammatical meaning, that of number; thus the opposition of grammatical forms makes up the category of number;

the word forms ‘swims, is working’ indicate reference to the present, including the moment of speaking, whereas ‘swam, was working’ indicate reference to the past, excluding the moment of speaking; the opposition of grammatical forms in the paradigmatic series having the grammatical meaning of reference to the moment of speaking makes up the category of tense.

Taking these assumptions into account, the grammatical category is defined as a system expressing a generalized grammatical meaning by means of the paradigmatic correlation of grammatical forms, analytical or synthetical, which constitutes a specific peculiarity of the language.

Key words:

levels of grammatical description уровни грамматического описания

constituent part конституирующая часть

grammatical system грамматическая система

prescriptive предписывающая без объяснения

explanatory объяснительная

kernel ядерная

theme тема (известная информация)

rheme рема (новая информация)

informative value информативная значимость

speech act речевой акт

coherent целостный

cohesive связный

grammatical formation of utterance грамматическая организация высказывания

grammatical structure of language грамматическая структура языка

coherent system целостная система

morpheme морфема

word слово

phrase фраза

sentence предложение

grammatical unit грамматическая единица

word form словоформа

morphological морфологический

categorical features категориальные признаки

parts of speech части речи

communicative unit коммуникативная единица

structural unit структурная единица

nominative unit номинативная единица

segmental сегментный

Morphology Морфология

Syntax Синтаксис

subject matter предмет изучения

paradigm парадигма

grammatical structure of language грамматическая структура языка

synthetical синтетический

analytical аналитический

grameme граммема (словоформа с грамматическим значением)

inflection инфлексия

affixation аффиксация

suppletivity суплетивизм

grammatical form грамматическая форма

grammatical meaning грамматическое значение

grammatical category грамматическая категория

functional words функциональные слова

auxiliary вспомогательный глагол

article артикль

preposition предлог

fixed word order фиксированный порядок слов

grammatical relations грамматические отношения

Number Число

Case Падеж

Aspect Вид

Degrees of comparison Степени сравнения

root of the word корень слова

plural множественное число

possessive притяжательный падеж

3rd person singular 3 лицо ед. число

sound change чередование

analytical form аналитическая форма

notional word знаменательное слово

paradigmatic line парадигматический ряд

privative morphological opposition привативная морфологическая оппозиция

strong (marked) member сильный (маркированный) компонент

weak (unmarked) member слабый (немаркированный) компонент

equipollent opposition эквиполентная оппозиция

gradual opposition последовательная оппозиция

paradigmatic relations парадигматические отношения

tense грамматическое время

mood наклонение

case падеж

singularity единичность

plurality множественность

reference соотнесенность
