
A semantic domain is an area of meaning and the words used to talk about it. A domain is often given a name consisting of a common word in the domain. For instance English has a domain ‘Rain’, which includes words such as rain, drizzle, downpour, raindrop, puddle. We use these words to talk about the rain.

The words within a domain are related to each other by lexical relations. Linguists use the term lexical relations to refer to various kinds of relationships that exist between words. There are two basic types of lexical relations. The first type is known as collocates: words that are frequently used together in a sentence. For instance, we often use the words bird and fly in the same sentence. Bird and fly are related by the lexical relation agent:typical action. The second type is known as paradigm forms and includes relations such as synonyms, antonyms, and the generic-specific relation. The words big and large are close synonyms. Kind and unkind are antonyms. Bird is a generic term that includes the more specific term chicken.
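
As a rough illustration, these paradigmatic relations can be read directly off a lexical database. A minimal sketch, assuming NLTK with its WordNet corpus downloaded (the exact synsets returned depend on the WordNet version):

```python
from nltk.corpus import wordnet as wn  # requires a prior nltk.download('wordnet')

# Paradigmatic relations for "big": synonyms and antonyms of its first adjective sense.
big = wn.synsets("big", pos=wn.ADJ)[0]
print([l.name() for l in big.lemmas()])                        # synonyms, e.g. 'large'
print([a.name() for l in big.lemmas() for a in l.antonyms()])  # antonyms, e.g. 'small'

# The generic-specific relation: hypernyms of a "chicken" sense are its more generic terms.
chicken = wn.synsets("chicken", pos=wn.NOUN)[0]
print([h.name() for h in chicken.hypernyms()])
```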

As a child learns to speak, he forms lexical relations in his mind. We need these lexical relations in order to speak correctly. Each of us has a mental dictionary which is organized into a giant network of lexical relations. Within the network are important clusters, like cities and towns linked by roads. So a semantic domain is a cluster of words in the mental network. The words within the domain are linked by lexical relations and the domains themselves are linked by lexical relations.


Contents

  • 1 Semantic Domains
  • 2 The Theory of Semantic Fields
  • 3 Semantic Domains
    • 3.1 Lexical Coherence Assumption
    • 3.2 Role of Semantic Domains
  • 4 Representation
    • 4.1 Domain Sets
    • 4.2 Domain Model
    • 4.3 Obtaining Domain Models
    • 4.4 WordNet Based Domain Model
    • 4.5 Corpus-Based Acquisition of Domain Models
    • 4.6 Domain Space
    • 4.7 Domain Kernel
  • 5 Usage
  • 6 References
  • 7 Sources

Semantic Domains

  • A semantic field corresponds to words grouped by their meaning
  • it consists of words from some domain
  • e.g. in English the field «Rain» may include words like «rain», «drizzle», «downpour», «raindrop», «puddle»; all of these words can be used to talk about rain

Words within one domain are related by lexical relations

  • there are two kinds of lexical relations:
  • collocates: words that are frequently used together in a sentence (e.g. «bird» and «fly»)
  • paradigm forms: synonyms, antonyms, etc. (e.g. «big» and «large»)

Forming lexical relations:

  • we learn lexical relations to speak correctly
  • each of us has a mental dictionary organized into a giant network of lexical relations
  • there are clusters in this network: semantic fields

Semantic Domains:

  • In Computational Linguistics and NLP, Semantic Domains are a computational model for Lexical Semantics
  • Semantic domains are a way of finding Semantic Fields

The Theory of Semantic Fields

The lexicon (the words of a natural language) is structured into Semantic Fields

  • Semantic Fields: «clusters» of semantically related terms
  • relations among concepts that belong to the same Semantic Field are very dense
  • and concepts from different Semantic Fields are typically unrelated
  • Theory of Semantic Fields: it claims that words are structured into a set of semantic fields

Structural Semantics models relations between words, like in WordNet

  • The Theory of Semantic Fields goes further by introducing an additional aggregation level
  • Semantic Fields form higher-level abstractions
  • relations between them are much more stable than between words
  • even if word senses change over time, the field still stays the same
  • so, Semantic Fields are usually consistent across languages, cultures and time
  • there’s a strong connection between Semantic Fields of different Languages: such connections don’t exist among the terms themselves

Limitation of this theory:

  • it doesn’t provide any objective criteria for identifying Semantic Fields in the language
  • solution to this problem: Semantic Domains, a computational approach for finding Semantic Fields
  • use the lexical coherence assumption: words from the same field should co-occur in texts

Semantic Domains

Semantic Domains are Semantic Fields that are characterized by a set of domain words which often occur in texts about the corresponding domain

  • words belonging to the same lexical field are called «domain words»
  • usually a large portion of the language's terminology consists of domain words

Lexical Coherence Assumption

Basic hypothesis:

  • a great percentage of the concepts expressed in the same text belongs to the same domain
  • it’s a basic property of any natural language: domain-specific words co-occur in the same texts; this property is called «lexical coherence»
  • There are common areas of human knowledge such as Economics, Politics, Law, Science, etc. All these areas demonstrate lexical coherence

So, what about Semantic Fields?

  • Semantic Fields are lexically coherent: words in one SF tend to co-occur in texts
  • We call these fields «Semantic Domains»: they are Semantic Fields characterized by lexically coherent words

Lexical coherence assumption:

  • We assume that real-world documents are lexically coherent
  • this guarantees the existence of Semantic Domains
  • it’s also confirmed by experiments: in real texts, if you count the percentage of words that belong to the same domain, you’ll see that most of them belong to one domain

There are 3 types of words

  • Text-Related Domain words: words that have at least one sense that contributes to determining the domain of the whole text
    • e.g. word «bank» in a text about economy
  • Text-Unrelated Domain words: words that are from some non-generic domain, but don’t contribute to the domain of the text
    • e.g. word «church» in a text about economy
  • Text-Unrelated Generic words: don’t bring any relevant domain information
    • e.g. «to be»

Let’s put the lexical coherence assumption more formally:

  • «One domain per discourse» ($\approx$ text, document) assumption
  • if a word is used in one sense in some discourse
  • then other occurrences of this word should also have the same sense
  • smart way of putting it: «multiple occurrences of a word in coherent portions of texts tend to share the same domain»

The lexical coherence assumption allows us to represent Semantic Domains by the set of domain-specific texts

Role of Semantic Domains

Characterizing word senses (i.e. lexical concepts)

  • typically by assigning domain labels to words in a lexicon
  • e.g. Crane has senses in Zoology and Construction
  • WordNet Domains — extension of WordNet that adds the information about domain

Characterizing texts

  • can use Semantic Domains for text categorization
  • at the textual level, semantic domains are clusters of texts on similar topics
  • so can see Semantic Domains as a collection of domain-specific texts

From a practical point of view, Semantic Domains are lists of related terms that describe a particular subject or area

Representation

Domain Sets

  • domain relations: two words are domain-related if they belong to the same domain
  • a domain set is used to describe the semantic classes of texts
  • semantic classes of strongly related lexical concepts are domain concepts
  • so a domain set should relate each word to one or more domains

Requirements of an «ideal» domain set:

  • completeness: all possible texts should be assigned to at least one domain
  • balancement: number of texts belonging to each domain should be uniform
  • separability: the same text/concept can’t be assigned to more than one domain

Usually not achievable:

  • it’s quite difficult to define a complete domain set, general enough to represent all possible aspects of human knowledge
  • and it’s also not possible to collect a corpus that contains all the human knowledge
  • a certain degree of overlapping is unavoidable (e.g. math/physics)

Domain Model

We can easily obtain a term-based representation of documents, e.g. by using Vector Space Models

  • but VSMs suffer from the lexical ambiguity problem
  • domain terms are typically highly correlated within texts: they tend to co-occur inside the same types of text
  • this is justified by the lexical coherence property of natural languages (Leacock96)

A domain model is a computational model of Semantic Domains that represents domain information

  • it describes relations at the term level
  • it does that by defining a set of term clusters (see also Term Clustering)
  • each cluster represents a semantic domain: a set of terms that often co-occur in texts with similar topics
  • it’s a way to represent domain information at the textual level

Domain Model:

  • is a matrix that describes the degree of association between terms in the vocabulary and Semantic Domains
  • rows are indexed by words
  • columns are the corresponding domains

A Domain Model is a shallow model for lexical semantics, but it captures ambiguity and variability

A DM is represented by an $n \times k$ rectangular matrix $D$

  • $D$ contains the domain relevance for each term w.r.t. each domain

E.g.

          Medicine   CS
  HIV     1          0
  AIDS    1          0
  virus   0.5        0.5
  laptop  0          1

Formally,

  • let $\mathcal D = \{ D_1, \ldots, D_k \}$ be a set of domains
  • and we have $n$ words $V = \{ w_1, \ldots, w_n \}$ ($n$ — vocabulary size)
  • then $D$ is an $n \times k$ matrix, where $D_{iz}$ is the domain relevance of term $w_i$ w.r.t. domain $D_z$
  • let $R(D_z, o)$ denote domain relevance of domain $D_z$ w.r.t. some linguistic object $o$ (text, term, concept)
  • it gives a measure of association between $D_z$ and $o$
  • typically higher values indicate higher association and often the value ranges from 0 to 1
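
A minimal sketch, assuming numpy, of how such a matrix can be stored and queried; the vocabulary, domains and relevance values below are just the toy numbers from the Medicine/CS example above, not a real model:

```python
import numpy as np

# Toy domain model: rows are terms, columns are domains; the values are the
# illustrative relevance scores from the Medicine/CS example above.
vocabulary = ["HIV", "AIDS", "virus", "laptop"]
domains = ["Medicine", "CS"]

D = np.array([
    [1.0, 0.0],   # HIV
    [1.0, 0.0],   # AIDS
    [0.5, 0.5],   # virus (ambiguous: relevant to both domains)
    [0.0, 1.0],   # laptop
])

def relevance(word, domain):
    """Domain relevance R(D_z, w_i) read off the domain model matrix."""
    return D[vocabulary.index(word), domains.index(domain)]

print(relevance("virus", "Medicine"))   # 0.5
print(relevance("laptop", "CS"))        # 1.0
```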

DMs can describe ambiguity and variability:

  • ambiguity: by associating one term to several domains
  • variability: by associating different terms to the same domain

A Domain Model defines a Domain Space

Obtaining Domain Models

  • Domain Models can be obtained from unsupervised learning or manual annotation
  • can use WordNet Domains
  • or by performing Term Clustering

domain relations among terms can be detected by analyzing co-occurrence in the corpus

  • motivated by the lexical coherence assumption
  • co-occurring terms have a good chance to show domain relations

WordNet Based Domain Model

WordNet Domains is an extension of WordNet:

  • each synset here is annotated with one or more domain labels
  • it has ~ 200 domain labels

Using WordNet Domains for building a domain model:

  • if $\mathcal D = \{ D_1, \ldots, D_k \}$ are the domains of WordNet Domains
  • and $\mathcal C = \{ c_1, \ldots, c_s \}$ are concepts (synsets) from WordNet
  • then let $\text{senses}(w)$ be the set of all synsets that contain $w$: $\text{senses}(w) = \{ c \mid c \in \mathcal C, \text{$c$ is a sense of $w$} \}$
  • let $R_s: \mathcal D \times \mathcal C \to \mathbb R$ be a domain relevance function for concepts
  • $\text{dom}(c)$ is a domain assignment function, $\text{dom}(c) \subseteq \mathcal D$: it returns the set of domains associated with a synset $c$
  • $R_s(D, c) = \begin{cases} 1 / |\text{dom}(c)| & \text{if } D \in \text{dom}(c) \\ 1 / k & \text{if } \text{dom}(c) \equiv \{ \text{Factotum} \} \\ 0 & \text{otherwise} \end{cases}$

  • Factotum = generic concept for all non-domain words
  • $k$ — cardinality of $\mathcal D$
  • $R_s(D, c) \approx$ estimated prior probability of the domain given the concept

This is for synsets, not words

  • now let $V = \{ w_1, \ldots, w_n \}$ be the vocabulary
  • then the domain relevance of a word is a function $R: \mathcal D \times V \to \mathbb R$
  • define $R$ as $$R(D_z, w_i) = \cfrac{1}{| \text{senses}(w_i) |} \sum\limits_{c \in \text{senses}(w_i)} R_s(D_z, c)$$
  • so it’s the average relevance over all of $w_i$’s senses
  • if $w$ has only one sense, then $R(D_z, w) = R_s(D_z, c)$
  • a word with several senses («polysemous») will be less relevant than a word with few senses
  • words with just one sense («monosemous») are the most relevant: they provide more information about the domain

This is consistent with the phenomenon that less frequent words are more informative: they tend to have fewer senses

The domain model $D$ is defined as $D_{ij} = R(D_j, w_i)$
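
A sketch of these two relevance functions, under the assumption that the synset-to-domain mapping dom(c) is already available as a plain dict; the synset names and labels below are made up for illustration, and the real WordNet Domains resource would be loaded separately:

```python
# Hypothetical dom(c) mapping: each synset -> its WordNet Domains labels.
DOMAINS = ["medicine", "computer_science", "zoology", "construction"]
k = len(DOMAINS)

dom = {
    "hiv.n.01":   {"medicine"},
    "virus.n.01": {"medicine"},            # infectious agent
    "virus.n.02": {"computer_science"},    # malicious program
    "crane.n.01": {"zoology"},             # the bird
    "crane.n.02": {"construction"},        # the lifting machine
    "be.v.01":    {"factotum"},            # generic, non-domain concept
}

senses = {                                 # senses(w): synsets that contain word w
    "hiv":   ["hiv.n.01"],
    "virus": ["virus.n.01", "virus.n.02"],
    "crane": ["crane.n.01", "crane.n.02"],
    "be":    ["be.v.01"],
}

def R_synset(domain, c):
    """R_s(D, c): estimated prior probability of domain D given concept c."""
    labels = dom[c]
    if labels == {"factotum"}:
        return 1.0 / k
    return 1.0 / len(labels) if domain in labels else 0.0

def R_word(domain, w):
    """R(D, w): average of R_s over all senses of w."""
    cs = senses[w]
    return sum(R_synset(domain, c) for c in cs) / len(cs)

print(R_word("medicine", "hiv"))     # 1.0: monosemous words are the most relevant
print(R_word("medicine", "virus"))   # 0.5: relevance split across two senses
print(R_word("zoology", "be"))       # 0.25 = 1/k: Factotum spreads mass over all domains
```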

Limitations:

  • $\mathcal D$ is fixed because WordNet Domains is fixed
  • WordNet Domains is limited: not complete
  • and lexicon in WordNet Domains is also limited

Corpus-Based Acquisition of Domain Models

We want to automatically extract domain models from a corpus:

  • to avoid subjectivity
  • to find more flexible models

Term Clustering techniques are usually used for this

  • usually need soft clustering techniques for this: want one term to be in several clusters
  • there are several ways:
  • Fuzzy C-Means, Information bottleneck method, etc
  • we’ll use Latent Semantic Analysis

LSA is done by projecting TermVSM and TextVSM to a common LSA space using some linear transformations

  • first-order (shallow) relations between terms: their direct co-occurrence in texts
  • it also takes into account second-order relations: terms that co-occur with the same other terms are semantically related, even if they never co-occur directly

Do SVD:

  • $T = W \Sigma P^T$
  • $W$ (for Words) are orthogonal eigenvectors of $T T^T$: word vectors
  • $P$ (for Passages) are orthogonal eigenvectors of $T^T T$: document vectors
  • Truncated SVD: use $\Sigma_k$: keep the first $k$ singular values and set the rest to 0
  • $T_k = W \Sigma_k P^T \approx T$: the best approximation

Now let’s define the domain matrix

  • $D = I^{\text{N}} W \sqrt{\Sigma}$
  • $I^{\text{N}}$ is a diagonal matrix s.t. $I^{\text{N}}_{ii} = \cfrac{1}{\| w_i' \|}$
  • $w_i'$ is the $i$-th row of $W \sqrt{\Sigma}$, i.e. the word's vector of loadings on the principal components ($W \sqrt{\Sigma}$ are the loadings for words)
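
A minimal numpy sketch of this construction, with an invented term-by-document matrix and k = 2 latent domains kept:

```python
import numpy as np

# Toy term-by-document matrix T (rows = terms, columns = texts); counts are invented.
T = np.array([
    [2, 3, 0, 0],   # "hiv"
    [1, 2, 0, 0],   # "aids"
    [1, 1, 1, 1],   # "virus"
    [0, 0, 2, 3],   # "laptop"
], dtype=float)

k = 2  # number of latent domains to keep

# SVD: T = W Sigma P^T, truncated to the first k singular values.
W, sigma, Pt = np.linalg.svd(T, full_matrices=False)
W_k, sigma_k = W[:, :k], sigma[:k]

# Word loadings W sqrt(Sigma); normalizing each row is the effect of I^N.
loadings = W_k * np.sqrt(sigma_k)                                # shape: n_terms x k
D = loadings / np.linalg.norm(loadings, axis=1, keepdims=True)   # the domain matrix

print(np.round(D, 2))  # row i: association of term i with each of the k latent domains
```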

Domain Space

Domain Models define the Domain Space

Once a DM is determined, we can define a Domain Space

  • it’s a geometric space where terms and documents can be represented as vectors
  • it’s a Vector Space Model

There are some problems of VSMs:

  • TextVSM can’t deal with lexical ambiguity and variability
  • e.g.: «he’s affected by AIDS» and «HIV is a virus» don’t have any words in common
  • so in the TextVSM the similarity is 0: these vectors are orthogonal even though the concepts are related
  • on the other hand, similarity between «the laptop has a virus» and «HIV is a virus» is not 0: due to the ambiguity of «virus»

Term VSM:

  • feature sparseness
  • if we want to model domain relations, we’re mostly interested in domain-specific words
  • such words are quite infrequent compared to non-domain words, so vectors for these words are very sparse, especially in a large corpus
  • so similarity between domain words would tend to 0
  • and the results overall will not be very meaningful and interesting

Domain Spaces address these problems

so a Domain Space is a cluster-based representation for estimating term and text meaning

  • it’s a vector space where both terms and texts can be compared
  • once a domain space is defined by a matrix $D$, can represent both terms and texts by domain vectors
  • domain vectors: vectors that represent the relevance of a linguistic object to each domain

The Domain Space:

  • is an instance of the Generalized Vector Space Model
  • for a text $t_i$ in the Text VSM
  • $t_i' = t_i (I^{\text{idf}} D)$ (TODO: why left multiplication?)
  • where $I^{\text{idf}}$ is a diagonal matrix s.t. $I^{\text{idf}}_{ii} = \text{idf}(w_i)$ — the inverse document frequency of word $w_i$ (see TF-IDF)
  • so we define a mapping function and thus have a generalized VSM (see the sketch below)

In the domain space the vector representation of terms and documents is «augmented» by domain relations represented by the domain model
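
A small numpy sketch of the mapping $t' = t (I^{\text{idf}} D)$; the 4-term vocabulary, the domain matrix and the document frequencies are toy values (the domain matrix could, for instance, come from the LSA sketch above), and idf is taken as 1/df as in the Domain Kernel section below:

```python
import numpy as np

# Toy domain model over the vocabulary [hiv, aids, virus, laptop], 2 latent domains.
D = np.array([
    [0.9, 0.1],   # hiv
    [0.9, 0.1],   # aids
    [0.5, 0.5],   # virus
    [0.1, 0.9],   # laptop
])
df = np.array([2, 2, 4, 2])           # in how many of 4 toy documents each term occurs
I_idf = np.diag(1.0 / df)             # idf as defined in the Domain Kernel section

t = np.array([1.0, 0.0, 1.0, 0.0])    # toy text containing "hiv" and "virus" once each
t_domain = t @ I_idf @ D              # the text's k-dimensional domain vector
print(np.round(t_domain, 3))          # leans strongly toward the first (medical) domain
```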

Geometrically:

  • Figure (not shown): terms and texts plotted together in a common Domain Space
  • source: Semantic Domains in Computational Linguistics (book), Fig 3.2
  • both terms and texts are represented in common vector space
  • so comparisons between terms and texts are possible
  • also, the dimensionality of Domain Space is generally lower

The Domain Space reduces the impact of ambiguity and variability:

  • by introducing a non-sparse space

So advantages of DS:

  • lower dimensionality
  • sparseness is avoided
  • duality: allows direct and uniform similarity between texts and terms

Domain Kernel

Domain Kernel is a similarity function for terms and documents in the domain space. Domain Kernel is a Mercer Kernel, so it can be used in any kernel-based algorithm.

This kernel is represented by a Domain Model matrix $D$

  • $K : \mathbb R^n \cup V \to \mathbb R^k$
  • maps texts $t \in \mathbb R^n$ and terms $w \in V$ into the Domain Space: $t' \in \mathbb R^k$ and $w' \in \mathbb R^k$

$K$ is defined as

  • $K(w) = w_i'$ if $w = w_i \in V$
  • $K(w) = \cfrac{\sum_{t \in T} \text{tf}(w, t) \cdot t'}{\| \sum_{t \in T} \text{tf}(w, t) \cdot t' \|}$ if $w \notin V$
  • $K(t) = t (I^{\text{idf}} D) = t'$ for documents
  • $\text{tf}(w, t)$ is the term frequency of $w$ in text $t$
  • $I^{\text{idf}}$ is a diagonal matrix with IDFs: $I^{\text{idf}}_{ii} = \cfrac{1}{| \{ t \in T \mid \text{tf}(w_i, t) > 0 \} |}$

Can compute the similarity using cosine

$K$ is defined for any term and text

  • $K$ is a Mercer kernel by construction: it's a dot product, but unlike many other kernels, it reduces the dimensionality instead of increasing it
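
A minimal sketch of the kernel for texts, computed as the normalized dot product of their domain vectors; the domain matrix, idf values and bag-of-words vectors are toy values chosen so that the AIDS/HIV/laptop examples from the Domain Space section behave as described:

```python
import numpy as np

# Toy domain model and idf over the vocabulary [hiv, aids, virus, laptop].
D = np.array([[0.9, 0.1], [0.9, 0.1], [0.5, 0.5], [0.1, 0.9]])
I_idf = np.diag(1.0 / np.array([2, 2, 4, 2]))

def domain_kernel(t1, t2):
    """K(t1, t2): cosine of the two texts' vectors in the Domain Space."""
    p1, p2 = t1 @ I_idf @ D, t2 @ I_idf @ D
    return float(p1 @ p2 / (np.linalg.norm(p1) * np.linalg.norm(p2)))

aids_text   = np.array([1.0, 1.0, 1.0, 0.0])   # "he's affected by AIDS" style text
hiv_text    = np.array([1.0, 0.0, 1.0, 0.0])   # "HIV is a virus"
laptop_text = np.array([0.0, 0.0, 1.0, 1.0])   # "the laptop has a virus"

print(domain_kernel(aids_text, hiv_text))      # high: both texts are in the medical domain
print(domain_kernel(hiv_text, laptop_text))    # lower: different domains despite sharing "virus"
```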

Usage

  • after that, we can use Domain Models for many NLP tasks
  • can use domain model to estimate topic similarity

Domain Kernels can be used for any instance-based algorithm in many NLP applications:

  • Document Classification
  • Document Clustering
  • Term Clustering
  • can use any Machine Learning algorithm with this kernel, e.g. SVM
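
For instance, a sketch of plugging the Domain Kernel into scikit-learn's SVC via a precomputed Gram matrix; the vocabulary, domain matrix and labels are toy values:

```python
import numpy as np
from sklearn.svm import SVC

# Toy domain model and idf over the vocabulary [hiv, aids, virus, laptop].
D = np.array([[0.9, 0.1], [0.9, 0.1], [0.5, 0.5], [0.1, 0.9]])
I_idf = np.diag(1.0 / np.array([2, 2, 4, 2]))

def project(X):
    """Map bag-of-words rows into the Domain Space and L2-normalize them."""
    P = X @ I_idf @ D
    return P / np.linalg.norm(P, axis=1, keepdims=True)

texts = np.array([
    [1, 1, 1, 0],   # medical text
    [1, 0, 1, 0],   # medical text
    [0, 0, 1, 1],   # CS text
    [0, 0, 0, 1],   # CS text
], dtype=float)
labels = np.array([0, 0, 1, 1])

P = project(texts)
gram = P @ P.T                                      # Domain Kernel Gram matrix
clf = SVC(kernel="precomputed").fit(gram, labels)

new = project(np.array([[2.0, 1.0, 1.0, 0.0]]))     # unseen medical-looking text
print(clf.predict(new @ P.T))                       # expected: [0] (the medical class)
```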

References

  • Leacock, Claudia, et al. «Towards building contextual representations of word senses using statistical models.» 1996.

Sources

  • http://www.semdom.org/description
  • Semantic Domains in Computational Linguistics (book)
