
Write a sentence for each word/phrase.
1. (at the moment)
2. (on Sundays)
3. (in the summer)
4. (always)
5. (right now)
6. (in the winter)
7. (never)


English, Grade 5 (workbook), Vaulina. 7 Grammar Practice. Exercise 3

Solution

Translation of the task
Write a sentence for each word / phrase.
1. (at the moment)
2. (on Sundays)
3. (in the summer)
4. (always)
5. (right now)
6. (in the winter)
7. (never)

 
ANSWER
1. (at the moment) I am doing my homework at the moment.
2. (on Sundays) We go swimming in the swimming pool on Sundays.
3. (in the summer) We go camping in the summer.
4. (always) I always help my mother in the kitchen.
5. (right now) My sister is having a picnic right now.
6. (in the winter) My dad goes skiing in the winter.
7. (never) My sister never walks our dog.

 
Translation of the answer
1. (at the moment) I am doing my homework right now.
2. (on Sundays) We swim in the pool on Sundays.
3. (in the summer) We go camping in the summer.
4. (always) I always help my mother in the kitchen.
5. (right now) My sister is at a picnic right now.
6. (in the winter) Dad goes skiing in the winter.
7. (never) My sister never walks our dog.

Assignment #3

A primer on named entity recognition

In this section, we will build several different models that perform named entity recognition (NER). NER is a subtask of information extraction that aims to locate and classify named entities in text into predefined categories such as person names, organizations, locations, time expressions, quantities, monetary values, and percentages. For each word in its context, the model predicts whether it represents one of the following four categories:

  • Person (PER): for example, "Martha Stewart", "Obama", "Tim Wagner"; pronouns such as "he" or "she" are not considered named entities.
  • Organization (ORG): for example, "American Airlines", "Goldman Sachs", "Department of Defense".
  • Location (LOC): for example, "Germany", "Panama Strait", "Brussels"; unnamed locations such as "the bar" or "the farm" are excluded.
  • Miscellaneous (MISC): for example, "Japanese", "USD", "1000", "Englishmen".

We define this as a five-class problem, using the four classes above plus a null class (O) for words that are not part of any named entity (most words fall into this category). For entities spanning more than one word ("Department of Defense"), each word is tagged separately, and every contiguous sequence of non-null tags is treated as a single entity.

Here is an example sentence, where each word is labeled with a gold named-entity tag y and a hypothetical prediction ŷ produced by a system:
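
The original assignment presents the example as a table; the reconstruction below is an assumption pieced together from the counts discussed in the following paragraphs (the exact sentence may differ slightly), but it illustrates the setup:

x: American Airlines , a unit of AMR Corp. , immediately matched the move , spokesman Tim Wagner said .
y (gold): ORG ORG O O O O ORG ORG O O O O O O O PER PER O O
ŷ (predicted): MISC O O O O O ORG O O O O O O O O PER PER O O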

In the example above, the system mistakenly predicts "American" to be of class MISC and misses "Airlines" and "Corp." entirely. Overall, it predicts three entities: "American", "AMR", and "Tim Wagner". To evaluate the quality of a NER system's output, we look at precision, recall, and F1. In particular, we report precision, recall, and F1 at both the token level and the named-entity level. In the example above:

  • Precision is calculated as the ratio of correctly predicted non-null labels to the total number of non-null labels predicted (p = 3/4 in the example above).

  • Recall is calculated as the ratio of correctly predicted non-null labels to the total number of non-null labels in the gold standard (r = 3/6 in the example above).

  • F1 is the harmonic mean of precision and recall: F1 = 2pr / (p + r) (in the example above, F1 = 6/10).

For entity-level F1:

  • Precision is the fraction of predicted entity-name spans that exactly match a span in the gold-standard evaluation data. In our example, "AMR" would be counted as incorrect because it does not cover the whole entity ("AMR Corp."), and so would "American", giving a precision of 1/3.
  • Recall is likewise the fraction of gold-standard entity names that appear at exactly the same span in the predictions; here we again get a recall of 1/3.
  • Finally, F1 is still the harmonic mean of the two, 1/3 in this example.

Our model also outputs a token-level confusion matrix. A confusion matrix is a specific table layout that makes classification performance easy to visualize: each column of the matrix represents the instances of a predicted class, and each row represents the instances of an actual class. The name comes from the fact that it makes it easy to see whether the system is confusing two classes (i.e., commonly mislabeling one class as another).

1. A window into NER

Let's look at a simple baseline model that predicts the label of each word independently, using features from a window of surrounding words.

Figure 1 shows an example of an input sequence and the first window of that sequence. Let x = x(1), x(2), …, x(T) be an input sequence of length T and y = y(1), y(2), …, y(T) the corresponding output sequence, also of length T. Each element x(t) and y(t) is a one-hot vector representing the word at index t in the sequence. In a window-based classifier, each input sequence is split into T new data points, each representing one window and its label. A new input x̃(t) is constructed from the window around x(t) by concatenating the w words to its left and right, x̃(t) = [x(t−w); …; x(t); …; x(t+w)], and we continue to use y(t) as its label. For windows centered on tokens at the beginning of a sentence, we pad the window on the left with a special start token (<START>); for windows centered on tokens at the end of a sentence, we pad on the right with a special end token (<END>). For example, consider building a window around "Jim" in the sentence from Figure 1. If the window size is 1, we add one start token to the window (producing the window [<START>, Jim, bought]); if the window size is 2, we add two start tokens (producing the window [<START>, <START>, Jim, bought, 300]).

With this, every input and output has a uniform length (2w + 1 and 1, respectively), and we can use a simple feedforward neural network to predict y(t) from x̃(t):

As a simple but effective model for predicting labels from each window, we use a single hidden layer with a ReLU activation, combined with a softmax output layer and the cross-entropy loss:

e(t) = [x(t−w) L, …, x(t) L, …, x(t+w) L]
h(t) = ReLU(e(t) W + b1)
ŷ(t) = softmax(h(t) U + b2)
J = CE(y(t), ŷ(t)) = − Σ_i y_i(t) log(ŷ_i(t))

where L ∈ R^(V×D) is the matrix of word vectors (embeddings), h(t) is H-dimensional, and ŷ(t) is C-dimensional; V is the size of the vocabulary, D is the size of the word vectors, H is the size of the hidden layer, and C is the number of predicted classes (5 here).
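
To make the shapes concrete, here is a minimal NumPy sketch of the forward pass for a single window (the toy sizes and variable names are illustrative assumptions, not part of the assignment):

import numpy as np

V, D, H, C, w = 100, 50, 200, 5, 1           # vocab size, embed size, hidden size, classes, window radius
L = np.random.randn(V, D)                     # word embedding matrix
W, b1 = np.random.randn((2 * w + 1) * D, H), np.zeros(H)
U, b2 = np.random.randn(H, C), np.zeros(C)

window_ids = [3, 17, 42]                      # token ids for [x(t-1), x(t), x(t+1)]
e = L[window_ids].reshape(-1)                 # e(t): concatenated embeddings, shape ((2w+1)*D,)
h = np.maximum(0.0, e @ W + b1)               # h(t) = ReLU(e W + b1), shape (H,)
scores = h @ U + b2                           # logits, shape (C,)
y_hat = np.exp(scores - scores.max())
y_hat /= y_hat.sum()                          # softmax over the C = 5 classes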

(a)

i. Provide two examples of sentences containing named entities with ambiguous types (e.g., an entity that could be either a person or an organization, or either an organization or not an entity at all).

1) "Spokesperson for Levis, Bill Murray, said…", where Levis may be a person's name or an organization.

2) "Heartbreak is a new virus", where Heartbreak may be a named entity (here, the name of a virus) or simply a common noun.

ii. Why is it important to use features other than the word itself to predict named entity labels?

Named entities are often rare words, such as person names or "Heartbreak", so features such as capitalization help the system generalize.

iii. Describe at least two features (other than the word itself) that would help predict whether a word is part of a named entity.

Word capitalization and part of speech.

(b)

i. If the window size is w, what are the dimensions of e(t), W, and U?

ii. What is the computational complexity of predicting labels for a sentence of length T?

(c) Implementing a window-based classifier model:

i. The make_windowed_data function converts a batch of input sequences into a batch of windowed input-output pairs.

def make_windowed_data(data, start, end, window_size = 1):
    """Uses the input sequences in @data to construct new windowed data points.

    TODO: In the code below, construct a window from each word in the
    input sentence by concatenating the words @window_size to the left
    and @window_size to the right to the word. Finally, add this new
    window data point and its label to windowed_data.

    Args:
        data: is a list of (sentence, labels) tuples. @sentence is a list
            containing the words in the sentence and @label is a list of
            output labels. Each word is itself a list of
            @n_features features. For example, the sentence "Chris
            Manning is amazing" and labels "PER PER O O" would become
            ([[1,9], [2,9], [3,8], [4,8]], [1, 1, 4, 4]). Here "Chris"
            the word has been featurized as "[1, 9]", and "[1, 1, 4, 4]"
            is the list of labels.
        start: the featurized `start' token to be used for windows at the very
            beginning of the sentence.
        end: the featurized `end' token to be used for windows at the very
            end of the sentence.
        window_size: the length of the window to construct.
    Returns:
        a new list of data points, corresponding to each window in the
        sentence. Each data point consists of a list of
        @n_window_features features (corresponding to words from the
        window) to be used in the sentence and its NER label.
        If start=[5,8] and end=[6,8], the above example should return
        the list
        [([5, 8, 1, 9, 2, 9], 1),
         ([1, 9, 2, 9, 3, 8], 1),
         ...
         ]
    """

    windowed_data = []
    for sentence, labels in data:
        # YOUR CODE HERE (5-20 lines)
        T = len(labels)  # Sequence Length T
        for t in range(T):  # Traversing through each word in a single sequence
            sen2fea = []
            for l in range(window_size, 0, -1):  # w Words in the Left Window
                if t-l < 0:
                    sen2fea.extend(start)
                else:
                    sen2fea.extend(sentence[t-l])
            sen2fea.extend(sentence[t])
            for r in range(1, window_size+1):  # w words in the right window
                if t+r >= T:
                    sen2fea.extend(end)
                else:
                    sen2fea.extend(sentence[t+r])
            windowed_data.append((sen2fea, labels[t]))
        # END YOUR CODE
    return windowed_data
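
As a quick sanity check, running the function on the example from the docstring should reproduce the documented output (a sketch; the featurized tokens and the start/end values are the ones given in the docstring above):

example = [([[1, 9], [2, 9], [3, 8], [4, 8]], [1, 1, 4, 4])]  # "Chris Manning is amazing"
print(make_windowed_data(example, start=[5, 8], end=[6, 8], window_size=1))
# [([5, 8, 1, 9, 2, 9], 1),
#  ([1, 9, 2, 9, 3, 8], 1),
#  ([2, 9, 3, 8, 4, 8], 4),
#  ([3, 8, 4, 8, 6, 8], 4)]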

ii. Implement the feedforward model described above in the WindowModel class.

class WindowModel(NERModel):
    """
    Implements a feedforward neural network with an embedding layer and
    single hidden layer.
    This network will predict what label (e.g. PER) should be given to a
    given token (e.g. Manning) by  using a featurized window around the token.
    """

    def add_placeholders(self):
        """Generates placeholder variables to represent the input tensors

        These placeholders are used as inputs by the rest of the model building and will be fed
        data during training.  Note that when "None" is in a placeholder's shape, it's flexible
        (so we can use different batch sizes without rebuilding the model).

        Adds following nodes to the computational graph

        input_placeholder: Input placeholder tensor of  shape (None, n_window_features), type tf.int32
        labels_placeholder: Labels placeholder tensor of shape (None,), type tf.int32
        dropout_placeholder: Dropout value placeholder (scalar), type tf.float32

        Add these placeholders to self as the instance variables
            self.input_placeholder
            self.labels_placeholder
            self.dropout_placeholder

        (Don't change the variable names)
        """
        # YOUR CODE HERE (~3-5 lines)
        self.input_placeholder = tf.placeholder(shape=[None, Config.n_window_features], dtype=tf.int32)
        self.labels_placeholder = tf.placeholder(shape=[None, ], dtype=tf.int32)
        self.dropout_placeholder = tf.placeholder(dtype=tf.float32)
        # END YOUR CODE

    def create_feed_dict(self, inputs_batch, labels_batch=None, dropout=1):
        """Creates the feed_dict for the model.
        A feed_dict takes the form of:
        feed_dict = {
                <placeholder>: <tensor of values to be passed for placeholder>,
                ....
        }

        Hint: The keys for the feed_dict should be a subset of the placeholder
                    tensors created in add_placeholders.
        Hint: When an argument is None, don't add it to the feed_dict.

        Args:
            inputs_batch: A batch of input data.
            labels_batch: A batch of label data.
            dropout: The dropout rate.
        Returns:
            feed_dict: The feed dictionary mapping from placeholders to values.
        """
        # YOUR CODE HERE (~5-10 lines)
        if labels_batch is None:
            feed_dict = {self.input_placeholder: inputs_batch,
                         self.dropout_placeholder: dropout}
        else:
            feed_dict = {self.input_placeholder: inputs_batch,
                         self.labels_placeholder: labels_batch,
                         self.dropout_placeholder: dropout}
        # END YOUR CODE
        return feed_dict

    def add_embedding(self):
        """Adds an embedding layer that maps from input tokens (integers) to vectors and then
        concatenates those vectors:
            - Creates an embedding tensor and initializes it with self.pretrained_embeddings.
            - Uses the input_placeholder to index into the embeddings tensor, resulting in a
              tensor of shape (None, n_window_features, embedding_size).
            - Concatenates the embeddings by reshaping the embeddings tensor to shape
              (None, n_window_features * embedding_size).

        Hint: You might find tf.nn.embedding_lookup useful.
        Hint: You can use tf.reshape to concatenate the vectors. See following link to understand
            what -1 in a shape means.
            https://www.tensorflow.org/api_docs/python/array_ops/shapes_and_shaping#reshape.
        Returns:
            embeddings: tf.Tensor of shape (None, n_window_features*embed_size)
        """
        # YOUR CODE HERE (~3-5 lines)
        embedding = tf.Variable(self.pretrained_embeddings, name='embedding')
        embeddings_3d = tf.nn.embedding_lookup(embedding, self.input_placeholder)
        embeddings = tf.reshape(embeddings_3d, shape=[-1, Config.n_window_features*Config.embed_size])
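        # Shape check (illustrative comment, not part of the starter code): with batch size B,
        # self.input_placeholder holds token ids of shape (B, n_window_features); embedding_lookup
        # yields (B, n_window_features, embed_size); the reshape with -1 flattens each window into
        # one vector of length n_window_features * embed_size, matching the classifier input e(t).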
        # END YOUR CODE
        return embeddings

    def add_prediction_op(self):
        """Adds the 1-hidden-layer NN:
            h = Relu(xW + b1)
            h_drop = Dropout(h, dropout_rate)
            pred = h_dropU + b2

        Recall that we are not applying a softmax to pred. The softmax will instead be done in
        the add_loss_op function, which improves efficiency because we can use
        tf.nn.softmax_cross_entropy_with_logits

        When creating a new variable, use the tf.get_variable function
        because it lets us specify an initializer.

        Use tf.contrib.layers.xavier_initializer to initialize matrices.
        This is TensorFlow's implementation of the Xavier initialization
        trick we used in last assignment.

        Note: tf.nn.dropout takes the keep probability (1 - p_drop) as an argument.
            The keep probability should be set to the value of dropout_rate.

        Returns:
            pred: tf.Tensor of shape (batch_size, n_classes)
        """

        x = self.add_embedding()
        dropout_rate = self.dropout_placeholder
        # YOUR CODE HERE (~10-20 lines)
        W = tf.get_variable(initializer=tf.contrib.layers.xavier_initializer(),
                            shape=[Config.n_window_features*Config.embed_size, Config.hidden_size],
                            name='W')
        b1 = tf.get_variable(initializer=tf.zeros(Config.hidden_size), name='b1')
        h = tf.nn.relu(tf.matmul(x, W) + b1)
        h_drop = tf.nn.dropout(h, keep_prob=dropout_rate)
        U = tf.get_variable(initializer=tf.contrib.layers.xavier_initializer(),
                            shape=[Config.hidden_size, Config.n_classes],
                            name='U')
        b2 = tf.get_variable(initializer=tf.zeros(Config.n_classes), name='b2')
        pred = tf.matmul(h_drop, U) + b2
        # END YOUR CODE
        return pred

    def add_loss_op(self, pred):
        """Adds Ops for the loss function to the computational graph.
        In this case we are using cross entropy loss.
        The loss should be averaged over all examples in the current minibatch.

        Remember that you can use tf.nn.sparse_softmax_cross_entropy_with_logits to simplify your
        implementation. You might find tf.reduce_mean useful.
        Args:
            pred: A tensor of shape (batch_size, n_classes) containing the output of the neural
                  network before the softmax layer.
        Returns:
            loss: A 0-d tensor (scalar)
        """
        # YOUR CODE HERE (~2-5 lines)
        loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=pred,
                                                                             labels=self.labels_placeholder))
        # END YOUR CODE
        return loss

    def add_training_op(self, loss):
        """Sets up the training Ops.

        Creates an optimizer and applies the gradients to all trainable variables.
        The Op returned by this function is what must be passed to the
        `sess.run()` call to cause the model to train. See

        https://www.tensorflow.org/versions/r0.7/api_docs/python/train.html#Optimizer

        for more information.

        Use tf.train.AdamOptimizer for this model.
        Calling optimizer.minimize() will return a train_op object.

        Args:
            loss: Loss tensor, from cross_entropy_loss.
        Returns:
            train_op: The Op for training.
        """
        # YOUR CODE HERE (~1-2 lines)
        train_op = tf.train.AdamOptimizer(learning_rate=Config.lr).minimize(loss)
        # END YOUR CODE
        return train_op

    def preprocess_sequence_data(self, examples):
        return make_windowed_data(examples, start=self.helper.START, end=self.helper.END, window_size=self.config.window_size)

    def consolidate_predictions(self, examples_raw, examples, preds):
        """Batch the predictions into groups of sentence length.
        """
        ret = []
        #pdb.set_trace()
        i = 0
        for sentence, labels in examples_raw:
            labels_ = preds[i:i+len(sentence)]
            i += len(sentence)
            ret.append([sentence, labels, labels_])
        return ret

    def predict_on_batch(self, sess, inputs_batch):
        """Make predictions for the provided batch of data

        Args:
            sess: tf.Session()
            input_batch: np.ndarray of shape (n_samples, n_features)
        Returns:
            predictions: np.ndarray of shape (n_samples, n_classes)
        """
        feed = self.create_feed_dict(inputs_batch)
        predictions = sess.run(tf.argmax(self.pred, axis=1), feed_dict=feed)
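        # Note (an observation, not part of the starter code): tf.argmax(self.pred, axis=1) adds a
        # new op to the graph on every call; a common refinement is to build it once at graph
        # construction time (e.g. self.pred_argmax) and pass that tensor to sess.run instead.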
        return predictions

    def train_on_batch(self, sess, inputs_batch, labels_batch):
        feed = self.create_feed_dict(inputs_batch, labels_batch=labels_batch,
                                     dropout=self.config.dropout)
        _, loss = sess.run([self.train_op, self.loss], feed_dict=feed)
        return loss

    def __init__(self, helper, config, pretrained_embeddings, report=None):
        super(WindowModel, self).__init__(helper, config, report)
        self.pretrained_embeddings = pretrained_embeddings

        # Defining placeholders.
        self.input_placeholder = None
        self.labels_placeholder = None
        self.dropout_placeholder = None

        self.build()

iii. Train the model. The model and its outputs will be stored in results/window/<timestamp>/; results.txt contains the formatted output of the model's predictions on the validation set, and the log file contains the printed output, i.e., the confusion matrices and F1 scores computed during training.

(d) Analyze the model's predictions, using the files generated above.

i. Briefly describe the information about the model's prediction errors that the confusion matrix reveals.

The confusion matrix shows that the model's biggest source of confusion is the organization class: many organizations are mistaken for person names or missed entirely. Person names, on the other hand, appear to be recognized well.

ii. Describe at least two modeling limitations of the window-based model.

The window-based model cannot use information from neighboring predictions to disambiguate labeling decisions, which leads to discontinuous entity predictions. In addition, a fixed window of size w cannot capture context that lies more than w words away from the token being labeled.

On the difference between tf.Variable and tf.get_variable:

https://blog.csdn.net/MrR1ght/article/details/81228087

On tf.nn.embedding_lookup:

https://blog.csdn.net/yinruiyang94/article/details/77600453

https://tensorflow.google.cn/api_docs/python/tf/nn/embedding_lookup

On tf.contrib.layers.xavier_initializer:

https://blog.csdn.net/yinruiyang94/article/details/78354257

https://tensorflow.google.cn/api_docs/python/tf/contrib/layers/xavier_initializer


I am trying to run word2vec (skip-gram model implemented in gensim with a default window size of 5) on a corpus of .txt files. The iterator that I use looks something like this:

import os
import nltk
from nltk.tokenize import TreebankWordTokenizer

class Corpus(object):
    """Iterator for feeding sentences to word2vec"""
    def __init__(self, dirname):
        self.dirname = dirname

    def __iter__(self):

        word_tokenizer = TreebankWordTokenizer()
        sent_tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
        text = ''

        for root, dirs, files in os.walk(self.dirname):

            for file in files:

                if file.endswith(".txt"):

                    file_path = os.path.join(root, file)


                    with open(file_path, 'r') as f:

                         text = f.read().decode('utf-8')
                         sentences = sent_tokenizer.tokenize(text)

                         for sent in sentences:
                             yield word_tokenizer.tokenize(sent)

Here I use the punkt tokenizer (which uses an unsupervised algorithm for detecting sentence boundaries) from the nltk package to split the text into sentences. However, when I replace this with a simple line.split(), i.e. just treating each line as one sentence and splitting it into words, iteration runs about 1.5 times faster than with the nltk tokenizer. The code inside the ‘with open’ block looks something like this:

                 with open(file_path, 'r') as f:
                     for line in f:
                         line = line.decode('utf-8')
                         yield line.split()

My question is: how important is it for the word2vec algorithm to be fed actual sentences (which is what I attempt to do with the punkt tokenizer)? Is it sufficient for each word to receive a context of surrounding words that lie on the same line (these words may not form an actual sentence when a sentence spans several lines), as opposed to the context the word would have within a full sentence spanning several lines? Also, what part does the window size play in this? When the window size is set to 5, for example, does the size of the sentences yielded by the iterator cease to matter? Will the window size alone decide the context words then? In that case, should I just use line.split() instead of trying to detect actual sentence boundaries with the punkt tokenizer?

I hope I have been able to describe the issue sufficiently, I would really appreciate any opinions or pointers or help regarding this.
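
For reference, gensim ships a LineSentence iterator that does essentially this line-based splitting; below is a minimal sketch using the pre-4.0 gensim API that appears elsewhere in this post (the corpus path is hypothetical):

from gensim.models.word2vec import Word2Vec, LineSentence

# LineSentence treats each line of the file as one "sentence" (essentially line.split()), so a
# word's context never crosses a line boundary and is further capped by the `window` parameter.
sentences = LineSentence('corpus.txt')  # hypothetical path
model = Word2Vec(sentences, size=100, window=5, min_count=5, workers=4)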

word2vec is not a singular algorithm, rather, it is a family of model architectures and optimizations that can be used to learn word embeddings from large datasets. Embeddings learned through word2vec have proven to be successful on a variety of downstream natural language processing tasks.

The original word2vec papers by Mikolov et al. proposed two methods for learning representations of words:

  • Continuous bag-of-words model: predicts the middle word based on surrounding context words. The context consists of a few words before and after the current (middle) word. This architecture is called a bag-of-words model as the order of words in the context is not important.
  • Continuous skip-gram model: predicts words within a certain range before and after the current word in the same sentence. A worked example of this is given below.

You’ll use the skip-gram approach in this tutorial. First, you’ll explore skip-grams and other concepts using a single sentence for illustration. Next, you’ll train your own word2vec model on a small dataset. This tutorial also contains code to export the trained embeddings and visualize them in the TensorFlow Embedding Projector.

Skip-gram and negative sampling

While a bag-of-words model predicts a word given the neighboring context, a skip-gram model predicts the context (or neighbors) of a word, given the word itself. The model is trained on skip-grams, which are n-grams that allow tokens to be skipped (see the diagram below for an example). The context of a word can be represented through a set of skip-gram pairs of (target_word, context_word) where context_word appears in the neighboring context of target_word.

Consider the following sentence of eight words:

The wide road shimmered in the hot sun.

The context words for each of the 8 words of this sentence are defined by a window size. The window size determines the span of words on either side of a target_word that can be considered a context word. Below is a table of skip-grams for target words based on different window sizes.

[Table: skip-grams for each target word in the example sentence, for different window sizes]
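
For instance (a single illustration consistent with the definition above, not the full table), with a window size of 2 the target word "road" yields the positive skip-gram pairs (road, the), (road, wide), (road, shimmered), and (road, in).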

The training objective of the skip-gram model is to maximize the probability of predicting context words given the target word. For a sequence of words w1, w2, … wT, the objective can be written as the average log probability

(1/T) Σ_{t=1..T} Σ_{−c ≤ j ≤ c, j ≠ 0} log p(w_{t+j} | w_t)

where c is the size of the training context. The basic skip-gram formulation defines this probability using the softmax function.

p(w_O | w_I) = exp(v'_{w_O}ᵀ v_{w_I}) / Σ_{w=1..W} exp(v'_wᵀ v_{w_I})

where v_w and v'_w are the target and context vector representations of word w, and W is the vocabulary size.

Computing the denominator of this formulation involves performing a full softmax over the entire vocabulary, which is often very large (on the order of 10^5 to 10^7 terms).

The noise contrastive estimation (NCE) loss function is an efficient approximation for a full softmax. With an objective to learn word embeddings instead of modeling the word distribution, the NCE loss can be simplified to use negative sampling.

The simplified negative sampling objective for a target word is to distinguish the context word from num_ns negative samples drawn from noise distribution Pn(w) of words. More precisely, an efficient approximation of full softmax over the vocabulary is, for a skip-gram pair, to pose the loss for a target word as a classification problem between the context word and num_ns negative samples.
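
Written out, the per-pair objective being maximized is the standard negative-sampling formulation from Mikolov et al. (2013), reproduced here for reference, where σ is the sigmoid function and k = num_ns:

log σ(v'_{w_O}ᵀ v_{w_I}) + Σ_{i=1..k} E_{w_i ∼ P_n(w)} [ log σ(−v'_{w_i}ᵀ v_{w_I}) ]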

A negative sample is defined as a (target_word, context_word) pair such that the context_word does not appear in the window_size neighborhood of the target_word. For the example sentence, these are a few potential negative samples (when window_size is 2).

(hot, shimmered)
(wide, hot)
(wide, sun)

In the next section, you’ll generate skip-grams and negative samples for a single sentence. You’ll also learn about subsampling techniques and train a classification model for positive and negative training examples later in the tutorial.

Setup

import io
import re
import string
import tqdm

import numpy as np

import tensorflow as tf
from tensorflow.keras import layers
2022-12-14 06:16:44.816296: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2022-12-14 06:16:44.816401: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2022-12-14 06:16:44.816412: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
# Load the TensorBoard notebook extension
%load_ext tensorboard
SEED = 42
AUTOTUNE = tf.data.AUTOTUNE

Vectorize an example sentence

Consider the following sentence:

The wide road shimmered in the hot sun.

Tokenize the sentence:

sentence = "The wide road shimmered in the hot sun"
tokens = list(sentence.lower().split())
print(len(tokens))
8

Create a vocabulary to save mappings from tokens to integer indices:

vocab, index = {}, 1  # start indexing from 1
vocab['<pad>'] = 0  # add a padding token
for token in tokens:
  if token not in vocab:
    vocab[token] = index
    index += 1
vocab_size = len(vocab)
print(vocab)
{'<pad>': 0, 'the': 1, 'wide': 2, 'road': 3, 'shimmered': 4, 'in': 5, 'hot': 6, 'sun': 7}

Create an inverse vocabulary to save mappings from integer indices to tokens:

inverse_vocab = {index: token for token, index in vocab.items()}
print(inverse_vocab)
{0: '<pad>', 1: 'the', 2: 'wide', 3: 'road', 4: 'shimmered', 5: 'in', 6: 'hot', 7: 'sun'}

Vectorize your sentence:

example_sequence = [vocab[word] for word in tokens]
print(example_sequence)
[1, 2, 3, 4, 5, 1, 6, 7]

Generate skip-grams from one sentence

The tf.keras.preprocessing.sequence module provides useful functions that simplify data preparation for word2vec. You can use the tf.keras.preprocessing.sequence.skipgrams function to generate skip-gram pairs from the example_sequence with a given window_size from tokens in the range [0, vocab_size).

window_size = 2
positive_skip_grams, _ = tf.keras.preprocessing.sequence.skipgrams(
      example_sequence,
      vocabulary_size=vocab_size,
      window_size=window_size,
      negative_samples=0)
print(len(positive_skip_grams))
26

Print a few positive skip-grams:

for target, context in positive_skip_grams[:5]:
  print(f"({target}, {context}): ({inverse_vocab[target]}, {inverse_vocab[context]})")
(3, 4): (road, shimmered)
(5, 1): (in, the)
(2, 1): (wide, the)
(5, 3): (in, road)
(4, 2): (shimmered, wide)

Negative sampling for one skip-gram

The skipgrams function returns all positive skip-gram pairs by sliding over a given window span. To produce additional skip-gram pairs that would serve as negative samples for training, you need to sample random words from the vocabulary. Use the tf.random.log_uniform_candidate_sampler function to sample num_ns negative samples for a given target word in a window. You can call the function on one skip-gram's target word and pass the context word as the true class to exclude it from being sampled.

# Get target and context words for one positive skip-gram.
target_word, context_word = positive_skip_grams[0]

# Set the number of negative samples per positive context.
num_ns = 4

context_class = tf.reshape(tf.constant(context_word, dtype="int64"), (1, 1))
negative_sampling_candidates, _, _ = tf.random.log_uniform_candidate_sampler(
    true_classes=context_class,  # class that should be sampled as 'positive'
    num_true=1,  # each positive skip-gram has 1 positive context class
    num_sampled=num_ns,  # number of negative context words to sample
    unique=True,  # all the negative samples should be unique
    range_max=vocab_size,  # pick index of the samples from [0, vocab_size]
    seed=SEED,  # seed for reproducibility
    name="negative_sampling"  # name of this operation
)
print(negative_sampling_candidates)
print([inverse_vocab[index.numpy()] for index in negative_sampling_candidates])
tf.Tensor([2 1 4 3], shape=(4,), dtype=int64)
['wide', 'the', 'shimmered', 'road']

Construct one training example

For a given positive (target_word, context_word) skip-gram, you now also have num_ns negative sampled context words that do not appear in the window size neighborhood of target_word. Batch the 1 positive context_word and num_ns negative context words into one tensor. This produces a set of positive skip-grams (labeled as 1) and negative samples (labeled as 0) for each target word.

# Reduce a dimension so you can use concatenation (in the next step).
squeezed_context_class = tf.squeeze(context_class, 1)

# Concatenate a positive context word with negative sampled words.
context = tf.concat([squeezed_context_class, negative_sampling_candidates], 0)

# Label the first context word as `1` (positive) followed by `num_ns` `0`s (negative).
label = tf.constant([1] + [0]*num_ns, dtype="int64")
target = target_word

Check out the context and the corresponding labels for the target word from the skip-gram example above:

print(f"target_index    : {target}")
print(f"target_word     : {inverse_vocab[target_word]}")
print(f"context_indices : {context}")
print(f"context_words   : {[inverse_vocab[c.numpy()] for c in context]}")
print(f"label           : {label}")
target_index    : 3
target_word     : road
context_indices : [4 2 1 4 3]
context_words   : ['shimmered', 'wide', 'the', 'shimmered', 'road']
label           : [1 0 0 0 0]

A tuple of (target, context, label) tensors constitutes one training example for training your skip-gram negative sampling word2vec model. Notice that the target is of shape (1,) while the context and label are of shape (1+num_ns,)

print("target  :", target)
print("context :", context)
print("label   :", label)
target  : 3
context : tf.Tensor([4 2 1 4 3], shape=(5,), dtype=int64)
label   : tf.Tensor([1 0 0 0 0], shape=(5,), dtype=int64)

Summary

This diagram summarizes the procedure of generating a training example from a sentence:

[Diagram: generating one (target, context, label) training example from a sentence using negative sampling]

Notice that the words temperature and code are not part of the input sentence. They belong to the vocabulary like certain other indices used in the diagram above.

Compile all steps into one function

Skip-gram sampling table

A large dataset means a larger vocabulary with a higher number of frequent words such as stopwords. Training examples obtained from sampling commonly occurring words (such as the, is, on) don't add much useful information for the model to learn from. Mikolov et al. suggest subsampling of frequent words as a helpful practice to improve embedding quality.

The tf.keras.preprocessing.sequence.skipgrams function accepts a sampling table argument to encode probabilities of sampling any token. You can use the tf.keras.preprocessing.sequence.make_sampling_table to generate a word-frequency rank based probabilistic sampling table and pass it to the skipgrams function. Inspect the sampling probabilities for a vocab_size of 10.

sampling_table = tf.keras.preprocessing.sequence.make_sampling_table(size=10)
print(sampling_table)
[0.00315225 0.00315225 0.00547597 0.00741556 0.00912817 0.01068435
 0.01212381 0.01347162 0.01474487 0.0159558 ]

sampling_table[i] denotes the probability of sampling the i-th most common word in a dataset. The function assumes a Zipf’s distribution of the word frequencies for sampling.
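
Inside skipgrams, the table is used roughly as follows (a sketch of the Keras behavior, assuming token ids correspond to frequency ranks, as they do for the TextVectorization vocabulary built below):

import random

def keep_token(token_id, sampling_table):
  # A token is kept as a skip-gram candidate with probability sampling_table[token_id],
  # so frequent (low-rank) tokens are dropped more often during pair generation.
  return sampling_table[token_id] >= random.random()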

Generate training data

Compile all the steps described above into a function that can be called on a list of vectorized sentences obtained from any text dataset. Notice that the sampling table is built before sampling skip-gram word pairs. You will use this function in the later sections.

# Generates skip-gram pairs with negative sampling for a list of sequences
# (int-encoded sentences) based on window size, number of negative samples
# and vocabulary size.
def generate_training_data(sequences, window_size, num_ns, vocab_size, seed):
  # Elements of each training example are appended to these lists.
  targets, contexts, labels = [], [], []

  # Build the sampling table for `vocab_size` tokens.
  sampling_table = tf.keras.preprocessing.sequence.make_sampling_table(vocab_size)

  # Iterate over all sequences (sentences) in the dataset.
  for sequence in tqdm.tqdm(sequences):

    # Generate positive skip-gram pairs for a sequence (sentence).
    positive_skip_grams, _ = tf.keras.preprocessing.sequence.skipgrams(
          sequence,
          vocabulary_size=vocab_size,
          sampling_table=sampling_table,
          window_size=window_size,
          negative_samples=0)

    # Iterate over each positive skip-gram pair to produce training examples
    # with a positive context word and negative samples.
    for target_word, context_word in positive_skip_grams:
      context_class = tf.expand_dims(
          tf.constant([context_word], dtype="int64"), 1)
      negative_sampling_candidates, _, _ = tf.random.log_uniform_candidate_sampler(
          true_classes=context_class,
          num_true=1,
          num_sampled=num_ns,
          unique=True,
          range_max=vocab_size,
          seed=seed,
          name="negative_sampling")

      # Build context and label vectors (for one target word)
      context = tf.concat([tf.squeeze(context_class,1), negative_sampling_candidates], 0)
      label = tf.constant([1] + [0]*num_ns, dtype="int64")

      # Append each element from the training example to global lists.
      targets.append(target_word)
      contexts.append(context)
      labels.append(label)

  return targets, contexts, labels

Prepare training data for word2vec

With an understanding of how to work with one sentence for a skip-gram negative sampling based word2vec model, you can proceed to generate training examples from a larger list of sentences!

Download text corpus

You will use a text file of Shakespeare’s writing for this tutorial. Change the following line to run this code on your own data.

path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt
1115394/1115394 [==============================] - 0s 0us/step

Read the text from the file and print the first few lines:

with open(path_to_file) as f:
  lines = f.read().splitlines()
for line in lines[:20]:
  print(line)
First Citizen:
Before we proceed any further, hear me speak.

All:
Speak, speak.

First Citizen:
You are all resolved rather to die than to famish?

All:
Resolved. resolved.

First Citizen:
First, you know Caius Marcius is chief enemy to the people.

All:
We know't, we know't.

First Citizen:
Let us kill him, and we'll have corn at our own price.

Use the non-empty lines to construct a tf.data.TextLineDataset object for the next steps:

text_ds = tf.data.TextLineDataset(path_to_file).filter(lambda x: tf.cast(tf.strings.length(x), bool))
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/autograph/pyct/static_analysis/liveness.py:83: Analyzer.lamba_check (from tensorflow.python.autograph.pyct.static_analysis.liveness) is deprecated and will be removed after 2023-09-23.
Instructions for updating:
Lambda fuctions will be no more assumed to be used in the statement where they are used, or at least in the same block. https://github.com/tensorflow/tensorflow/issues/56089

Vectorize sentences from the corpus

You can use the TextVectorization layer to vectorize sentences from the corpus. Learn more about using this layer in this Text classification tutorial. Notice from the first few sentences above that the text needs to be in one case and punctuation needs to be removed. To do this, define a custom_standardization function that can be used in the TextVectorization layer.

# Now, create a custom standardization function to lowercase the text and
# remove punctuation.
def custom_standardization(input_data):
  lowercase = tf.strings.lower(input_data)
  return tf.strings.regex_replace(lowercase,
                                  '[%s]' % re.escape(string.punctuation), '')


# Define the vocabulary size and the number of words in a sequence.
vocab_size = 4096
sequence_length = 10

# Use the `TextVectorization` layer to normalize, split, and map strings to
# integers. Set the `output_sequence_length` length to pad all samples to the
# same length.
vectorize_layer = layers.TextVectorization(
    standardize=custom_standardization,
    max_tokens=vocab_size,
    output_mode='int',
    output_sequence_length=sequence_length)

Call TextVectorization.adapt on the text dataset to create vocabulary.

vectorize_layer.adapt(text_ds.batch(1024))

Once the state of the layer has been adapted to represent the text corpus, the vocabulary can be accessed with TextVectorization.get_vocabulary. This function returns a list of all vocabulary tokens sorted (descending) by their frequency.

# Save the created vocabulary for reference.
inverse_vocab = vectorize_layer.get_vocabulary()
print(inverse_vocab[:20])
['', '[UNK]', 'the', 'and', 'to', 'i', 'of', 'you', 'my', 'a', 'that', 'in', 'is', 'not', 'for', 'with', 'me', 'it', 'be', 'your']

The vectorize_layer can now be used to generate vectors for each element in the text_ds (a tf.data.Dataset). Apply Dataset.batch, Dataset.prefetch, Dataset.map, and Dataset.unbatch.

# Vectorize the data in text_ds.
text_vector_ds = text_ds.batch(1024).prefetch(AUTOTUNE).map(vectorize_layer).unbatch()

Obtain sequences from the dataset

You now have a tf.data.Dataset of integer encoded sentences. To prepare the dataset for training a word2vec model, flatten the dataset into a list of sentence vector sequences. This step is required as you would iterate over each sentence in the dataset to produce positive and negative examples.

sequences = list(text_vector_ds.as_numpy_iterator())
print(len(sequences))
32777

Inspect a few examples from sequences:

for seq in sequences[:5]:
  print(f"{seq} => {[inverse_vocab[i] for i in seq]}")
[ 89 270   0   0   0   0   0   0   0   0] => ['first', 'citizen', '', '', '', '', '', '', '', '']
[138  36 982 144 673 125  16 106   0   0] => ['before', 'we', 'proceed', 'any', 'further', 'hear', 'me', 'speak', '', '']
[34  0  0  0  0  0  0  0  0  0] => ['all', '', '', '', '', '', '', '', '', '']
[106 106   0   0   0   0   0   0   0   0] => ['speak', 'speak', '', '', '', '', '', '', '', '']
[ 89 270   0   0   0   0   0   0   0   0] => ['first', 'citizen', '', '', '', '', '', '', '', '']

Generate training examples from sequences

sequences is now a list of int-encoded sentences. Just call the generate_training_data function defined earlier to generate training examples for the word2vec model. To recap, the function iterates over each word from each sequence to collect positive and negative context words. The lengths of targets, contexts, and labels should be the same, representing the total number of training examples.

targets, contexts, labels = generate_training_data(
    sequences=sequences,
    window_size=2,
    num_ns=4,
    vocab_size=vocab_size,
    seed=SEED)

targets = np.array(targets)
contexts = np.array(contexts)
labels = np.array(labels)

print('\n')
print(f"targets.shape: {targets.shape}")
print(f"contexts.shape: {contexts.shape}")
print(f"labels.shape: {labels.shape}")
100%|██████████| 32777/32777 [00:47<00:00, 696.80it/s]
targets.shape: (64953,)
contexts.shape: (64953, 5)
labels.shape: (64953, 5)

Configure the dataset for performance

To perform efficient batching for the potentially large number of training examples, use the tf.data.Dataset API. After this step, you would have a tf.data.Dataset object of (target_word, context_word), (label) elements to train your word2vec model!

BATCH_SIZE = 1024
BUFFER_SIZE = 10000
dataset = tf.data.Dataset.from_tensor_slices(((targets, contexts), labels))
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
print(dataset)
<BatchDataset element_spec=((TensorSpec(shape=(1024,), dtype=tf.int64, name=None), TensorSpec(shape=(1024, 5), dtype=tf.int64, name=None)), TensorSpec(shape=(1024, 5), dtype=tf.int64, name=None))>

Apply Dataset.cache and Dataset.prefetch to improve performance:

dataset = dataset.cache().prefetch(buffer_size=AUTOTUNE)
print(dataset)
<PrefetchDataset element_spec=((TensorSpec(shape=(1024,), dtype=tf.int64, name=None), TensorSpec(shape=(1024, 5), dtype=tf.int64, name=None)), TensorSpec(shape=(1024, 5), dtype=tf.int64, name=None))>

Model and training

The word2vec model can be implemented as a classifier to distinguish between true context words from skip-grams and false context words obtained through negative sampling. You can perform a dot product multiplication between the embeddings of target and context words to obtain predictions for labels and compute the loss function against true labels in the dataset.

Subclassed word2vec model

Use the Keras Subclassing API to define your word2vec model with the following layers:

  • target_embedding: A tf.keras.layers.Embedding layer, which looks up the embedding of a word when it appears as a target word. The number of parameters in this layer are (vocab_size * embedding_dim).
  • context_embedding: Another tf.keras.layers.Embedding layer, which looks up the embedding of a word when it appears as a context word. The number of parameters in this layer are the same as those in target_embedding, i.e. (vocab_size * embedding_dim).
  • dots: A tf.keras.layers.Dot layer that computes the dot product of target and context embeddings from a training pair.
  • flatten: A tf.keras.layers.Flatten layer to flatten the results of dots layer into logits.

With the subclassed model, you can define the call() function that accepts (target, context) pairs which can then be passed into their corresponding embedding layer. Reshape the context_embedding to perform a dot product with target_embedding and return the flattened result.

class Word2Vec(tf.keras.Model):
  def __init__(self, vocab_size, embedding_dim):
    super(Word2Vec, self).__init__()
    self.target_embedding = layers.Embedding(vocab_size,
                                      embedding_dim,
                                      input_length=1,
                                      name="w2v_embedding")
    self.context_embedding = layers.Embedding(vocab_size,
                                       embedding_dim,
                                       input_length=num_ns+1)

  def call(self, pair):
    target, context = pair
    # target: (batch, dummy?)  # The dummy axis doesn't exist in TF2.7+
    # context: (batch, context)
    if len(target.shape) == 2:
      target = tf.squeeze(target, axis=1)
    # target: (batch,)
    word_emb = self.target_embedding(target)
    # word_emb: (batch, embed)
    context_emb = self.context_embedding(context)
    # context_emb: (batch, context, embed)
    dots = tf.einsum('be,bce->bc', word_emb, context_emb)
    # dots: (batch, context)
    return dots

Define loss function and compile model

For simplicity, you can use tf.keras.losses.CategoricalCrossentropy as an alternative to the negative sampling loss. If you would like to write your own custom loss function, you can also do so as follows:

def custom_loss(x_logit, y_true):
      return tf.nn.sigmoid_cross_entropy_with_logits(logits=x_logit, labels=y_true)

It’s time to build your model! Instantiate your word2vec class with an embedding dimension of 128 (you could experiment with different values). Compile the model with the tf.keras.optimizers.Adam optimizer.

embedding_dim = 128
word2vec = Word2Vec(vocab_size, embedding_dim)
word2vec.compile(optimizer='adam',
                 loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
                 metrics=['accuracy'])

Also define a callback to log training statistics for TensorBoard:

tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="logs")

Train the model on the dataset for some number of epochs:

word2vec.fit(dataset, epochs=20, callbacks=[tensorboard_callback])
Epoch 1/20
63/63 [==============================] - 8s 112ms/step - loss: 1.6082 - accuracy: 0.2314
Epoch 2/20
63/63 [==============================] - 0s 3ms/step - loss: 1.5886 - accuracy: 0.5562
Epoch 3/20
63/63 [==============================] - 0s 3ms/step - loss: 1.5403 - accuracy: 0.5982
Epoch 4/20
63/63 [==============================] - 0s 3ms/step - loss: 1.4573 - accuracy: 0.5730
Epoch 5/20
63/63 [==============================] - 0s 3ms/step - loss: 1.3589 - accuracy: 0.5810
Epoch 6/20
63/63 [==============================] - 0s 3ms/step - loss: 1.2615 - accuracy: 0.6101
Epoch 7/20
63/63 [==============================] - 0s 3ms/step - loss: 1.1704 - accuracy: 0.6450
Epoch 8/20
63/63 [==============================] - 0s 3ms/step - loss: 1.0858 - accuracy: 0.6794
Epoch 9/20
63/63 [==============================] - 0s 3ms/step - loss: 1.0075 - accuracy: 0.7106
Epoch 10/20
63/63 [==============================] - 0s 3ms/step - loss: 0.9348 - accuracy: 0.7413
Epoch 11/20
63/63 [==============================] - 0s 3ms/step - loss: 0.8676 - accuracy: 0.7657
Epoch 12/20
63/63 [==============================] - 0s 3ms/step - loss: 0.8056 - accuracy: 0.7871
Epoch 13/20
63/63 [==============================] - 0s 3ms/step - loss: 0.7485 - accuracy: 0.8069
Epoch 14/20
63/63 [==============================] - 0s 3ms/step - loss: 0.6962 - accuracy: 0.8258
Epoch 15/20
63/63 [==============================] - 0s 3ms/step - loss: 0.6484 - accuracy: 0.8415
Epoch 16/20
63/63 [==============================] - 0s 3ms/step - loss: 0.6048 - accuracy: 0.8549
Epoch 17/20
63/63 [==============================] - 0s 3ms/step - loss: 0.5650 - accuracy: 0.8671
Epoch 18/20
63/63 [==============================] - 0s 3ms/step - loss: 0.5288 - accuracy: 0.8775
Epoch 19/20
63/63 [==============================] - 0s 3ms/step - loss: 0.4959 - accuracy: 0.8864
Epoch 20/20
63/63 [==============================] - 0s 3ms/step - loss: 0.4659 - accuracy: 0.8959
<keras.callbacks.History at 0x7f6bd0344f70>

TensorBoard now shows the word2vec model’s accuracy and loss:

#docs_infra: no_execute
%tensorboard --logdir logs

Embedding lookup and analysis

Obtain the weights from the model using Model.get_layer and Layer.get_weights. The TextVectorization.get_vocabulary function provides the vocabulary to build a metadata file with one token per line.

weights = word2vec.get_layer('w2v_embedding').get_weights()[0]
vocab = vectorize_layer.get_vocabulary()

Create and save the vectors and metadata files:

out_v = io.open('vectors.tsv', 'w', encoding='utf-8')
out_m = io.open('metadata.tsv', 'w', encoding='utf-8')

for index, word in enumerate(vocab):
  if index == 0:
    continue  # skip 0, it's padding.
  vec = weights[index]
  out_v.write('\t'.join([str(x) for x in vec]) + "\n")
  out_m.write(word + "\n")
out_v.close()
out_m.close()

Download the vectors.tsv and metadata.tsv to analyze the obtained embeddings in the Embedding Projector:

try:
  from google.colab import files
  files.download('vectors.tsv')
  files.download('metadata.tsv')
except Exception:
  pass

Next steps

This tutorial has shown you how to implement a skip-gram word2vec model with negative sampling from scratch and visualize the obtained word embeddings.

  • To learn more about word vectors and their mathematical representations, refer to these notes.

  • To learn more about advanced text processing, read the Transformer model for language understanding tutorial.

  • If you’re interested in pre-trained embedding models, you may also be interested in Exploring the TF-Hub CORD-19 Swivel Embeddings, or the Multilingual Universal Sentence Encoder.

  • You may also like to train the model on a new dataset (there are many available in TensorFlow Datasets).


#!/usr/bin/env python # -*- coding: utf-8 -*- # # Copyright (C) 2013 Radim Rehurek <me@radimrehurek.com> # Licensed under the GNU LGPL v2.1 — http://www.gnu.org/licenses/lgpl.html «»» Produce word vectors with deep learning via word2vec’s «skip-gram and CBOW models», using either hierarchical softmax or negative sampling [1]_ [2]_. NOTE: There are more ways to get word vectors in Gensim than just Word2Vec. See wrappers for FastText, VarEmbed and WordRank. The training algorithms were originally ported from the C package https://code.google.com/p/word2vec/ and extended with additional functionality. For a blog tutorial on gensim word2vec, with an interactive web app trained on GoogleNews, visit http://radimrehurek.com/2014/02/word2vec-tutorial/ **Make sure you have a C compiler before installing gensim, to use optimized (compiled) word2vec training** (70x speedup compared to plain NumPy implementation [3]_). Initialize a model with e.g.:: >>> model = Word2Vec(sentences, size=100, window=5, min_count=5, workers=4) Persist a model to disk with:: >>> model.save(fname) >>> model = Word2Vec.load(fname) # you can continue training with the loaded model! The word vectors are stored in a KeyedVectors instance in model.wv. This separates the read-only word vector lookup operations in KeyedVectors from the training code in Word2Vec. >>> model.wv[‘computer’] # numpy vector of a word array([-0.00449447, -0.00310097, 0.02421786, …], dtype=float32) The word vectors can also be instantiated from an existing file on disk in the word2vec C format as a KeyedVectors instance:: NOTE: It is impossible to continue training the vectors loaded from the C format because hidden weights, vocabulary frequency and the binary tree is missing. >>> from gensim.models.keyedvectors import KeyedVectors >>> word_vectors = KeyedVectors.load_word2vec_format(‘/tmp/vectors.txt’, binary=False) # C text format >>> word_vectors = KeyedVectors.load_word2vec_format(‘/tmp/vectors.bin’, binary=True) # C binary format You can perform various NLP word tasks with the model. Some of them are already built-in:: >>> model.wv.most_similar(positive=[‘woman’, ‘king’], negative=[‘man’]) [(‘queen’, 0.50882536), …] >>> model.wv.most_similar_cosmul(positive=[‘woman’, ‘king’], negative=[‘man’]) [(‘queen’, 0.71382287), …] >>> model.wv.doesnt_match(«breakfast cereal dinner lunch».split()) ‘cereal’ >>> model.wv.similarity(‘woman’, ‘man’) 0.73723527 Probability of a text under the model:: >>> model.score([«The fox jumped over a lazy dog».split()]) 0.2158356 Correlation with human opinion on word similarity:: >>> model.wv.evaluate_word_pairs(os.path.join(module_path, ‘test_data’,’wordsim353.tsv’)) 0.51, 0.62, 0.13 And on analogies:: >>> model.wv.accuracy(os.path.join(module_path, ‘test_data’, ‘questions-words.txt’)) and so on. If you’re finished training a model (i.e. no more updates, only querying), then switch to the :mod:`gensim.models.KeyedVectors` instance in wv >>> word_vectors = model.wv >>> del model to trim unneeded model memory = use much less RAM. Note that there is a :mod:`gensim.models.phrases` module which lets you automatically detect phrases longer than one word. Using phrases, you can learn a word2vec model where «words» are actually multiword expressions, such as `new_york_times` or `financial_crisis`: >>> bigram_transformer = gensim.models.Phrases(sentences) >>> model = Word2Vec(bigram_transformer[sentences], size=100, …) .. [1] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 
Efficient Estimation of Word Representations in Vector Space. In Proceedings of Workshop at ICLR, 2013. .. [2] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed Representations of Words and Phrases and their Compositionality. In Proceedings of NIPS, 2013. .. [3] Optimizing word2vec in gensim, http://radimrehurek.com/2013/09/word2vec-in-python-part-two-optimizing/ «»» from __future__ import division # py3 «true division» import logging import sys import os import heapq from timeit import default_timer from copy import deepcopy from collections import defaultdict import threading import itertools import warnings from gensim.utils import keep_vocab_item, call_on_class_only from gensim.models.keyedvectors import KeyedVectors, Vocab try: from queue import Queue, Empty except ImportError: from Queue import Queue, Empty from numpy import exp, log, dot, zeros, outer, random, dtype, float32 as REAL, double, uint32, seterr, array, uint8, vstack, fromstring, sqrt, newaxis, ndarray, empty, sum as np_sum, prod, ones, ascontiguousarray, vstack, logaddexp from scipy.special import expit from gensim import utils, matutils # utility fnc for pickling, common scipy operations etc from gensim.corpora.dictionary import Dictionary from six import iteritems, itervalues, string_types from six.moves import xrange from types import GeneratorType from scipy import stats logger = logging.getLogger(__name__) try: from gensim.models.word2vec_inner import train_batch_sg, train_batch_cbow from gensim.models.word2vec_inner import score_sentence_sg, score_sentence_cbow from gensim.models.word2vec_inner import FAST_VERSION, MAX_WORDS_IN_BATCH except ImportError: # failed… fall back to plain numpy (20-80x slower training than the above) FAST_VERSION = 1 MAX_WORDS_IN_BATCH = 10000 def train_batch_sg(model, sentences, alpha, work=None): «»» Update skip-gram model by training on a sequence of sentences. Each sentence is a list of string tokens, which are looked up in the model’s vocab dictionary. Called internally from `Word2Vec.train()`. This is the non-optimized, Python version. If you have cython installed, gensim will use the optimized version from word2vec_inner instead. «»» result = 0 for sentence in sentences: word_vocabs = [model.wv.vocab[w] for w in sentence if w in model.wv.vocab and model.wv.vocab[w].sample_int > model.random.rand() * 2**32] for pos, word in enumerate(word_vocabs): reduced_window = model.random.randint(model.window) # `b` in the original word2vec code # now go over all words from the (reduced) window, predicting each one in turn start = max(0, pos model.window + reduced_window) for pos2, word2 in enumerate(word_vocabs[start :( pos + model.window + 1 reduced_window)], start): # don’t train on the `word` itself if pos2 != pos: train_sg_pair(model, model.wv.index2word[word.index], word2.index, alpha) result += len(word_vocabs) return result def train_batch_cbow(model, sentences, alpha, work=None, neu1=None): «»» Update CBOW model by training on a sequence of sentences. Each sentence is a list of string tokens, which are looked up in the model’s vocab dictionary. Called internally from `Word2Vec.train()`. This is the non-optimized, Python version. If you have cython installed, gensim will use the optimized version from word2vec_inner instead. 
        result = 0
        for sentence in sentences:
            word_vocabs = [model.wv.vocab[w] for w in sentence if w in model.wv.vocab and
                           model.wv.vocab[w].sample_int > model.random.rand() * 2**32]
            for pos, word in enumerate(word_vocabs):
                reduced_window = model.random.randint(model.window)  # `b` in the original word2vec code
                start = max(0, pos - model.window + reduced_window)
                window_pos = enumerate(word_vocabs[start:(pos + model.window + 1 - reduced_window)], start)
                word2_indices = [word2.index for pos2, word2 in window_pos if (word2 is not None and pos2 != pos)]
                l1 = np_sum(model.wv.syn0[word2_indices], axis=0)  # 1 x vector_size
                if word2_indices and model.cbow_mean:
                    l1 /= len(word2_indices)
                train_cbow_pair(model, word, word2_indices, l1, alpha)
            result += len(word_vocabs)
        return result

    def score_sentence_sg(model, sentence, work=None):
        """
        Obtain likelihood score for a single sentence in a fitted skip-gram representation.

        The sentence is a list of Vocab objects (or None, when the corresponding
        word is not in the vocabulary). Called internally from `Word2Vec.score()`.

        This is the non-optimized, Python version. If you have cython installed, gensim
        will use the optimized version from word2vec_inner instead.

        """
        log_prob_sentence = 0.0
        if model.negative:
            raise RuntimeError("scoring is only available for HS=True")

        word_vocabs = [model.wv.vocab[w] for w in sentence if w in model.wv.vocab]
        for pos, word in enumerate(word_vocabs):
            if word is None:
                continue  # OOV word in the input sentence => skip

            # now go over all words from the window, predicting each one in turn
            start = max(0, pos - model.window)
            for pos2, word2 in enumerate(word_vocabs[start: pos + model.window + 1], start):
                # don't train on OOV words and on the `word` itself
                if word2 is not None and pos2 != pos:
                    log_prob_sentence += score_sg_pair(model, word, word2)

        return log_prob_sentence

    def score_sentence_cbow(model, sentence, alpha, work=None, neu1=None):
        """
        Obtain likelihood score for a single sentence in a fitted CBOW representation.

        The sentence is a list of Vocab objects (or None, where the corresponding
        word is not in the vocabulary). Called internally from `Word2Vec.score()`.

        This is the non-optimized, Python version. If you have cython installed, gensim
        will use the optimized version from word2vec_inner instead.

        """
        log_prob_sentence = 0.0
        if model.negative:
            raise RuntimeError("scoring is only available for HS=True")

        word_vocabs = [model.wv.vocab[w] for w in sentence if w in model.wv.vocab]
        for pos, word in enumerate(word_vocabs):
            if word is None:
                continue  # OOV word in the input sentence => skip

            start = max(0, pos - model.window)
            window_pos = enumerate(word_vocabs[start:(pos + model.window + 1)], start)
            word2_indices = [word2.index for pos2, word2 in window_pos if (word2 is not None and pos2 != pos)]
            l1 = np_sum(model.wv.syn0[word2_indices], axis=0)  # 1 x layer1_size
            if word2_indices and model.cbow_mean:
                l1 /= len(word2_indices)
            log_prob_sentence += score_cbow_pair(model, word, word2_indices, l1)

        return log_prob_sentence


def train_sg_pair(model, word, context_index, alpha, learn_vectors=True, learn_hidden=True,
                  context_vectors=None, context_locks=None):
    if context_vectors is None:
        context_vectors = model.wv.syn0
    if context_locks is None:
        context_locks = model.syn0_lockf

    if word not in model.wv.vocab:
        return
    predict_word = model.wv.vocab[word]  # target word (NN output)

    l1 = context_vectors[context_index]  # input word (NN input/projection layer)
    lock_factor = context_locks[context_index]

    neu1e = zeros(l1.shape)

    if model.hs:
        # work on the entire tree at once, to push as much work into numpy's C routines as possible (performance)
        l2a = deepcopy(model.syn1[predict_word.point])  # 2d matrix, codelen x layer1_size
        fa = expit(dot(l1, l2a.T))  # propagate hidden -> output
        ga = (1 - predict_word.code - fa) * alpha  # vector of error gradients multiplied by the learning rate
        if learn_hidden:
            model.syn1[predict_word.point] += outer(ga, l1)  # learn hidden -> output
        neu1e += dot(ga, l2a)  # save error

    if model.negative:
        # use this word (label = 1) + `negative` other random words not from this sentence (label = 0)
        word_indices = [predict_word.index]
        while len(word_indices) < model.negative + 1:
            w = model.cum_table.searchsorted(model.random.randint(model.cum_table[-1]))
            if w != predict_word.index:
                word_indices.append(w)
        l2b = model.syn1neg[word_indices]  # 2d matrix, k+1 x layer1_size
        fb = expit(dot(l1, l2b.T))  # propagate hidden -> output
        gb = (model.neg_labels - fb) * alpha  # vector of error gradients multiplied by the learning rate
        if learn_hidden:
            model.syn1neg[word_indices] += outer(gb, l1)  # learn hidden -> output
        neu1e += dot(gb, l2b)  # save error

    if learn_vectors:
        l1 += neu1e * lock_factor  # learn input -> hidden (mutates model.wv.syn0[word2.index], if that is l1)
    return neu1e


def train_cbow_pair(model, word, input_word_indices, l1, alpha, learn_vectors=True, learn_hidden=True):
    neu1e = zeros(l1.shape)

    if model.hs:
        l2a = model.syn1[word.point]  # 2d matrix, codelen x layer1_size
        fa = expit(dot(l1, l2a.T))  # propagate hidden -> output
        ga = (1. - word.code - fa) * alpha  # vector of error gradients multiplied by the learning rate
        if learn_hidden:
            model.syn1[word.point] += outer(ga, l1)  # learn hidden -> output
        neu1e += dot(ga, l2a)  # save error

    if model.negative:
        # use this word (label = 1) + `negative` other random words not from this sentence (label = 0)
        word_indices = [word.index]
        while len(word_indices) < model.negative + 1:
            w = model.cum_table.searchsorted(model.random.randint(model.cum_table[-1]))
            if w != word.index:
                word_indices.append(w)
        l2b = model.syn1neg[word_indices]  # 2d matrix, k+1 x layer1_size
        fb = expit(dot(l1, l2b.T))  # propagate hidden -> output
        gb = (model.neg_labels - fb) * alpha  # vector of error gradients multiplied by the learning rate
        if learn_hidden:
            model.syn1neg[word_indices] += outer(gb, l1)  # learn hidden -> output
        neu1e += dot(gb, l2b)  # save error

    if learn_vectors:
        # learn input -> hidden, here for all words in the window separately
        if not model.cbow_mean and input_word_indices:
            neu1e /= len(input_word_indices)
        for i in input_word_indices:
            model.wv.syn0[i] += neu1e * model.syn0_lockf[i]

    return neu1e


def score_sg_pair(model, word, word2):
    l1 = model.wv.syn0[word2.index]
    l2a = deepcopy(model.syn1[word.point])  # 2d matrix, codelen x layer1_size
    sgn = (-1.0) ** word.code  # ch function, 0 -> 1, 1 -> -1
    lprob = -logaddexp(0, -sgn * dot(l1, l2a.T))
    return sum(lprob)


def score_cbow_pair(model, word, word2_indices, l1):
    l2a = model.syn1[word.point]  # 2d matrix, codelen x layer1_size
    sgn = (-1.0) ** word.code  # ch function, 0 -> 1, 1 -> -1
    lprob = -logaddexp(0, -sgn * dot(l1, l2a.T))
    return sum(lprob)


class Word2Vec(utils.SaveLoad):
    """
    Class for training, using and evaluating neural networks described in https://code.google.com/p/word2vec/

    If you're finished training a model (=no more updates, only querying)
    then switch to the :mod:`gensim.models.KeyedVectors` instance in wv.

    The model can be stored/loaded via its `save()` and `load()` methods, or stored/loaded in a format
    compatible with the original word2vec implementation via `wv.save_word2vec_format()`
    and `KeyedVectors.load_word2vec_format()`.

    """

    def __init__(
            self, sentences=None, size=100, alpha=0.025, window=5, min_count=5,
            max_vocab_size=None, sample=1e-3, seed=1, workers=3, min_alpha=0.0001,
            sg=0, hs=0, negative=5, cbow_mean=1, hashfxn=hash, iter=5, null_word=0,
            trim_rule=None, sorted_vocab=1, batch_words=MAX_WORDS_IN_BATCH):
        """
        Initialize the model from an iterable of `sentences`. Each sentence is a
        list of words (unicode strings) that will be used for training.

        The `sentences` iterable can be simply a list, but for larger corpora,
        consider an iterable that streams the sentences directly from disk/network.
        See :class:`BrownCorpus`, :class:`Text8Corpus` or :class:`LineSentence` in
        this module for such examples.

        If you don't supply `sentences`, the model is left uninitialized -- use if
        you plan to initialize it in some other way.

        `sg` defines the training algorithm. By default (`sg=0`), CBOW is used.
        Otherwise (`sg=1`), skip-gram is employed.

        `size` is the dimensionality of the feature vectors.

        `window` is the maximum distance between the current and predicted word within a sentence.

        `alpha` is the initial learning rate (will linearly drop to `min_alpha` as training progresses).

        `seed` = for the random number generator. Initial vectors for each
        word are seeded with a hash of the concatenation of word + str(seed).
        Note that for a fully deterministically-reproducible run, you must also limit the model to
        a single worker thread, to eliminate ordering jitter from OS thread scheduling.
        (In Python 3, reproducibility between interpreter launches also requires use of the
        PYTHONHASHSEED environment variable to control hash randomization.)

        `min_count` = ignore all words with total frequency lower than this.

        `max_vocab_size` = limit RAM during vocabulary building; if there are more unique
        words than this, then prune the infrequent ones. Every 10 million word types
        need about 1GB of RAM. Set to `None` for no limit (default).

        `sample` = threshold for configuring which higher-frequency words are randomly downsampled;
        default is 1e-3, useful range is (0, 1e-5).

        `workers` = use this many worker threads to train the model (=faster training with multicore machines).

        `hs` = if 1, hierarchical softmax will be used for model training.
        If set to 0 (default), and `negative` is non-zero, negative sampling will be used.

        `negative` = if > 0, negative sampling will be used, the int for negative
        specifies how many "noise words" should be drawn (usually between 5-20).
        Default is 5. If set to 0, no negative sampling is used.

        `cbow_mean` = if 0, use the sum of the context word vectors. If 1 (default), use the mean.
        Only applies when cbow is used.

        `hashfxn` = hash function to use to randomly initialize weights, for increased
        training reproducibility. Default is Python's rudimentary built-in hash function.

        `iter` = number of iterations (epochs) over the corpus. Default is 5.

        `trim_rule` = vocabulary trimming rule, specifies whether certain words should remain
        in the vocabulary, be trimmed away, or handled using the default (discard if word count < min_count).
        Can be None (min_count will be used), or a callable that accepts parameters (word, count, min_count)
        and returns either `utils.RULE_DISCARD`, `utils.RULE_KEEP` or `utils.RULE_DEFAULT`.
        Note: The rule, if given, is only used to prune vocabulary during build_vocab()
        and is not stored as part of the model.

        `sorted_vocab` = if 1 (default), sort the vocabulary by descending frequency before
        assigning word indexes.

        `batch_words` = target size (in words) for batches of examples passed to worker threads
        (and thus cython routines). Default is 10000. (Larger batches will be passed if individual
        texts are longer than 10000 words, but the standard cython code truncates to that maximum.)
«»» self.load = call_on_class_only if FAST_VERSION == 1: logger.warning(‘Slow version of {0} is being used’.format(__name__)) else: logger.debug(‘Fast version of {0} is being used’.format(__name__)) self.initialize_word_vectors() self.sg = int(sg) self.cum_table = None # for negative sampling self.vector_size = int(size) self.layer1_size = int(size) if size % 4 != 0: logger.warning(«consider setting layer size to a multiple of 4 for greater performance») self.alpha = float(alpha) self.min_alpha_yet_reached = float(alpha) # To warn user if alpha increases self.window = int(window) self.max_vocab_size = max_vocab_size self.seed = seed self.random = random.RandomState(seed) self.min_count = min_count self.sample = sample self.workers = int(workers) self.min_alpha = float(min_alpha) self.hs = hs self.negative = negative self.cbow_mean = int(cbow_mean) self.hashfxn = hashfxn self.iter = iter self.null_word = null_word self.train_count = 0 self.total_train_time = 0 self.sorted_vocab = sorted_vocab self.batch_words = batch_words self.model_trimmed_post_training = False if sentences is not None: if isinstance(sentences, GeneratorType): raise TypeError(«You can’t pass a generator as the sentences argument. Try an iterator.») self.build_vocab(sentences, trim_rule=trim_rule) self.train(sentences, total_examples=self.corpus_count, epochs=self.iter, start_alpha=self.alpha, end_alpha=self.min_alpha) else : if trim_rule is not None : logger.warning(«The rule, if given, is only used to prune vocabulary during build_vocab() and is not stored as part of the model. «) logger.warning(«Model initialized without sentences. trim_rule provided, if any, will be ignored.» ) def initialize_word_vectors(self): self.wv = KeyedVectors() def make_cum_table(self, power=0.75, domain=2**31 1): «»» Create a cumulative-distribution table using stored vocabulary word counts for drawing random words in the negative-sampling training routines. To draw a word index, choose a random integer up to the maximum value in the table (cum_table[-1]), then finding that integer’s sorted insertion point (as if by bisect_left or ndarray.searchsorted()). That insertion point is the drawn index, coming up in proportion equal to the increment at that slot. Called internally from ‘build_vocab()’. «»» vocab_size = len(self.wv.index2word) self.cum_table = zeros(vocab_size, dtype=uint32) # compute sum of all power (Z in paper) train_words_pow = 0.0 for word_index in xrange(vocab_size): train_words_pow += self.wv.vocab[self.wv.index2word[word_index]].count**power cumulative = 0.0 for word_index in xrange(vocab_size): cumulative += self.wv.vocab[self.wv.index2word[word_index]].count**power self.cum_table[word_index] = round(cumulative / train_words_pow * domain) if len(self.cum_table) > 0: assert self.cum_table[1] == domain def create_binary_tree(self): «»» Create a binary Huffman tree using stored vocabulary word counts. Frequent words will have shorter binary codes. Called internally from `build_vocab()`. 
«»» logger.info(«constructing a huffman tree from %i words», len(self.wv.vocab)) # build the huffman tree heap = list(itervalues(self.wv.vocab)) heapq.heapify(heap) for i in xrange(len(self.wv.vocab) 1): min1, min2 = heapq.heappop(heap), heapq.heappop(heap) heapq.heappush(heap, Vocab(count=min1.count + min2.count, index=i + len(self.wv.vocab), left=min1, right=min2)) # recurse over the tree, assigning a binary code to each vocabulary word if heap: max_depth, stack = 0, [(heap[0], [], [])] while stack: node, codes, points = stack.pop() if node.index < len(self.wv.vocab): # leaf node => store its path from the root node.code, node.point = codes, points max_depth = max(len(codes), max_depth) else: # inner node => continue recursion points = array(list(points) + [node.index len(self.wv.vocab)], dtype=uint32) stack.append((node.left, array(list(codes) + [0], dtype=uint8), points)) stack.append((node.right, array(list(codes) + [1], dtype=uint8), points)) logger.info(«built huffman tree with maximum node depth %i», max_depth) def build_vocab(self, sentences, keep_raw_vocab=False, trim_rule=None, progress_per=10000, update=False): «»» Build vocabulary from a sequence of sentences (can be a once-only generator stream). Each sentence must be a list of unicode strings. «»» self.scan_vocab(sentences, progress_per=progress_per, trim_rule=trim_rule) # initial survey self.scale_vocab(keep_raw_vocab=keep_raw_vocab, trim_rule=trim_rule, update=update) # trim by min_count & precalculate downsampling self.finalize_vocab(update=update) # build tables & arrays def scan_vocab(self, sentences, progress_per=10000, trim_rule=None): «»»Do an initial scan of all words appearing in sentences.»»» logger.info(«collecting all words and their counts») sentence_no = 1 total_words = 0 min_reduce = 1 vocab = defaultdict(int) checked_string_types = 0 for sentence_no, sentence in enumerate(sentences): if not checked_string_types: if isinstance(sentence, string_types): logger.warning( «Each ‘sentences’ item should be a list of words (usually unicode strings).» «First item here is instead plain %s.», type(sentence) ) checked_string_types += 1 if sentence_no % progress_per == 0: logger.info(«PROGRESS: at sentence #%i, processed %i words, keeping %i word types», sentence_no, sum(itervalues(vocab)) + total_words, len(vocab)) for word in sentence: vocab[word] += 1 if self.max_vocab_size and len(vocab) > self.max_vocab_size: total_words += utils.prune_vocab(vocab, min_reduce, trim_rule=trim_rule) min_reduce += 1 total_words += sum(itervalues(vocab)) logger.info(«collected %i word types from a corpus of %i raw words and %i sentences», len(vocab), total_words, sentence_no + 1) self.corpus_count = sentence_no + 1 self.raw_vocab = vocab def scale_vocab(self, min_count=None, sample=None, dry_run=False, keep_raw_vocab=False, trim_rule=None, update=False): «»» Apply vocabulary settings for `min_count` (discarding less-frequent words) and `sample` (controlling the downsampling of more-frequent words). Calling with `dry_run=True` will only simulate the provided settings and report the size of the retained vocabulary, effective corpus length, and estimated memory requirements. Results are both printed via logging and returned as a dict. Delete the raw vocabulary after the scaling is done to free up RAM, unless `keep_raw_vocab` is set. 
«»» min_count = min_count or self.min_count sample = sample or self.sample drop_total = drop_unique = 0 if not update: logger.info(«Loading a fresh vocabulary») retain_total, retain_words = 0, [] # Discard words less-frequent than min_count if not dry_run: self.wv.index2word = [] # make stored settings match these applied settings self.min_count = min_count self.sample = sample self.wv.vocab = {} for word, v in iteritems(self.raw_vocab): if keep_vocab_item(word, v, min_count, trim_rule=trim_rule): retain_words.append(word) retain_total += v if not dry_run: self.wv.vocab[word] = Vocab(count=v, index=len(self.wv.index2word)) self.wv.index2word.append(word) else: drop_unique += 1 drop_total += v original_unique_total = len(retain_words) + drop_unique retain_unique_pct = len(retain_words) * 100 / max(original_unique_total, 1) logger.info(«min_count=%d retains %i unique words (%i%% of original %i, drops %i)», min_count, len(retain_words), retain_unique_pct, original_unique_total, drop_unique) original_total = retain_total + drop_total retain_pct = retain_total * 100 / max(original_total, 1) logger.info(«min_count=%d leaves %i word corpus (%i%% of original %i, drops %i)», min_count, retain_total, retain_pct, original_total, drop_total) else: logger.info(«Updating model with new vocabulary») new_total = pre_exist_total = 0 new_words = pre_exist_words = [] for word, v in iteritems(self.raw_vocab): if keep_vocab_item(word, v, min_count, trim_rule=trim_rule): if word in self.wv.vocab: pre_exist_words.append(word) pre_exist_total += v if not dry_run: self.wv.vocab[word].count += v else: new_words.append(word) new_total += v if not dry_run: self.wv.vocab[word] = Vocab(count=v, index=len(self.wv.index2word)) self.wv.index2word.append(word) else: drop_unique += 1 drop_total += v original_unique_total = len(pre_exist_words) + len(new_words) + drop_unique pre_exist_unique_pct = len(pre_exist_words) * 100 / max(original_unique_total, 1) new_unique_pct = len(new_words) * 100 / max(original_unique_total, 1) logger.info(«»»New added %i unique words (%i%% of original %i) and increased the count of %i pre-existing words (%i%% of original %i)»»», len(new_words), new_unique_pct, original_unique_total, len(pre_exist_words), pre_exist_unique_pct, original_unique_total) retain_words = new_words + pre_exist_words retain_total = new_total + pre_exist_total # Precalculate each vocabulary item’s threshold for sampling if not sample: # no words downsampled threshold_count = retain_total elif sample < 1.0: # traditional meaning: set parameter as proportion of total threshold_count = sample * retain_total else: # new shorthand: sample >= 1 means downsample all words with higher count than sample threshold_count = int(sample * (3 + sqrt(5)) / 2) downsample_total, downsample_unique = 0, 0 for w in retain_words: v = self.raw_vocab[w] word_probability = (sqrt(v / threshold_count) + 1) * (threshold_count / v) if word_probability < 1.0: downsample_unique += 1 downsample_total += word_probability * v else: word_probability = 1.0 downsample_total += v if not dry_run: self.wv.vocab[w].sample_int = int(round(word_probability * 2**32)) if not dry_run and not keep_raw_vocab: logger.info(«deleting the raw counts dictionary of %i items», len(self.raw_vocab)) self.raw_vocab = defaultdict(int) logger.info(«sample=%g downsamples %i most-common words», sample, downsample_unique) logger.info(«downsampling leaves estimated %i word corpus (%.1f%% of prior %i)», downsample_total, downsample_total * 100.0 / max(retain_total, 1), retain_total) # 
return from each step: words-affected, resulting-corpus-size report_values = {‘drop_unique’: drop_unique, ‘retain_total’: retain_total, ‘downsample_unique’: downsample_unique, ‘downsample_total’: int(downsample_total)} # print extra memory estimates report_values[‘memory’] = self.estimate_memory(vocab_size=len(retain_words)) return report_values def finalize_vocab(self, update=False): «»»Build tables and model weights based on final vocabulary settings.»»» if not self.wv.index2word: self.scale_vocab() if self.sorted_vocab and not update: self.sort_vocab() if self.hs: # add info about each word’s Huffman encoding self.create_binary_tree() if self.negative: # build the table for drawing random words (for negative sampling) self.make_cum_table() if self.null_word: # create null pseudo-word for padding when using concatenative L1 (run-of-words) # this word is only ever input – never predicted – so count, huffman-point, etc doesn’t matter word, v = », Vocab(count=1, sample_int=0) v.index = len(self.wv.vocab) self.wv.index2word.append(word) self.wv.vocab[word] = v # set initial input/projection and hidden weights if not update: self.reset_weights() else: self.update_weights() def sort_vocab(self): «»»Sort the vocabulary so the most frequent words have the lowest indexes.»»» if len(self.wv.syn0): raise RuntimeError(«cannot sort vocabulary after model weights already initialized.») self.wv.index2word.sort(key=lambda word: self.wv.vocab[word].count, reverse=True) for i, word in enumerate(self.wv.index2word): self.wv.vocab[word].index = i def reset_from(self, other_model): «»» Borrow shareable pre-built structures (like vocab) from the other_model. Useful if testing multiple models in parallel on the same corpus. «»» self.wv.vocab = other_model.wv.vocab self.wv.index2word = other_model.wv.index2word self.cum_table = other_model.cum_table self.corpus_count = other_model.corpus_count self.reset_weights() def _do_train_job(self, sentences, alpha, inits): «»» Train a single batch of sentences. Return 2-tuple `(effective word count after ignoring unknown words and sentence length trimming, total word count)`. «»» work, neu1 = inits tally = 0 if self.sg: tally += train_batch_sg(self, sentences, alpha, work) else: tally += train_batch_cbow(self, sentences, alpha, work, neu1) return tally, self._raw_word_count(sentences) def _raw_word_count(self, job): «»»Return the number of words in a given job.»»» return sum(len(sentence) for sentence in job) def train(self, sentences, total_examples=None, total_words=None, epochs=None, start_alpha=None, end_alpha=None, word_count=0, queue_factor=2, report_delay=1.0): «»» Update the model’s neural weights from a sequence of sentences (can be a once-only generator stream). For Word2Vec, each sentence must be a list of unicode strings. (Subclasses may accept other examples.) To support linear learning-rate decay from (initial) alpha to min_alpha, and accurate progres-percentage logging, either total_examples (count of sentences) or total_words (count of raw words in sentences) MUST be provided. (If the corpus is the same as was provided to `build_vocab()`, the count of examples in that corpus will be available in the model’s `corpus_count` property.) To avoid common mistakes around the model’s ability to do multiple training passes itself, an explicit `epochs` argument MUST be provided. In the common and recommended case, where `train()` is only called once, the model’s cached `iter` value should be supplied as `epochs` value. 
«»» if (self.model_trimmed_post_training): raise RuntimeError(«Parameters for training were discarded using model_trimmed_post_training method») if FAST_VERSION < 0: warnings.warn(«C extension not loaded for Word2Vec, training will be slow. « «Install a C compiler and reinstall gensim for fast training.») self.neg_labels = [] if self.negative > 0: # precompute negative labels optimization for pure-python training self.neg_labels = zeros(self.negative + 1) self.neg_labels[0] = 1. logger.info( «training model with %i workers on %i vocabulary and %i features, « «using sg=%s hs=%s sample=%s negative=%s window=%s», self.workers, len(self.wv.vocab), self.layer1_size, self.sg, self.hs, self.sample, self.negative, self.window) if not self.wv.vocab: raise RuntimeError(«you must first build vocabulary before training the model») if not len(self.wv.syn0): raise RuntimeError(«you must first finalize vocabulary before training the model») if not hasattr(self, ‘corpus_count’): raise ValueError( «The number of sentences in the training corpus is missing. Did you load the model via KeyedVectors.load_word2vec_format?» «Models loaded via load_word2vec_format don’t support further training. « «Instead start with a blank model, scan_vocab on the new corpus, intersect_word2vec_format with the old model, then train.») if total_words is None and total_examples is None: raise ValueError(«You must specify either total_examples or total_words, for proper alpha and progress calculations. The usual value is total_examples=model.corpus_count.») if epochs is None: raise ValueError(«You must specify an explict epochs count. The usual value is epochs=model.iter.») start_alpha = start_alpha or self.alpha end_alpha = end_alpha or self.min_alpha job_tally = 0 if epochs > 1: sentences = utils.RepeatCorpusNTimes(sentences, epochs) total_words = total_words and total_words * epochs total_examples = total_examples and total_examples * epochs def worker_loop(): «»»Train the model, lifting lists of sentences from the job_queue.»»» work = matutils.zeros_aligned(self.layer1_size, dtype=REAL) # per-thread private work memory neu1 = matutils.zeros_aligned(self.layer1_size, dtype=REAL) jobs_processed = 0 while True: job = job_queue.get() if job is None: progress_queue.put(None) break # no more jobs => quit this worker sentences, alpha = job tally, raw_tally = self._do_train_job(sentences, alpha, (work, neu1)) progress_queue.put((len(sentences), tally, raw_tally)) # report back progress jobs_processed += 1 logger.debug(«worker exiting, processed %i jobs», jobs_processed) def job_producer(): «»»Fill jobs queue using the input `sentences` iterator.»»» job_batch, batch_size = [], 0 pushed_words, pushed_examples = 0, 0 next_alpha = start_alpha if next_alpha > self.min_alpha_yet_reached: logger.warning( «Effective ‘alpha’ higher than previous training cycles» ) self.min_alpha_yet_reached = next_alpha job_no = 0 for sent_idx, sentence in enumerate(sentences): sentence_length = self._raw_word_count([sentence]) # can we fit this sentence into the existing job batch? 
if batch_size + sentence_length <= self.batch_words: # yes => add it to the current job job_batch.append(sentence) batch_size += sentence_length else: # no => submit the existing job logger.debug( «queueing job #%i (%i words, %i sentences) at alpha %.05f», job_no, batch_size, len(job_batch), next_alpha) job_no += 1 job_queue.put((job_batch, next_alpha)) # update the learning rate for the next job if end_alpha < next_alpha: if total_examples: # examples-based decay pushed_examples += len(job_batch) progress = 1.0 * pushed_examples / total_examples else: # words-based decay pushed_words += self._raw_word_count(job_batch) progress = 1.0 * pushed_words / total_words next_alpha = start_alpha (start_alpha end_alpha) * progress next_alpha = max(end_alpha, next_alpha) # add the sentence that didn’t fit as the first item of a new job job_batch, batch_size = [sentence], sentence_length # add the last job too (may be significantly smaller than batch_words) if job_batch: logger.debug( «queueing job #%i (%i words, %i sentences) at alpha %.05f», job_no, batch_size, len(job_batch), next_alpha) job_no += 1 job_queue.put((job_batch, next_alpha)) if job_no == 0 and self.train_count == 0: logger.warning( «train() called with an empty iterator (if not intended, « «be sure to provide a corpus that offers restartable « «iteration = an iterable).» ) # give the workers heads up that they can finish — no more work! for _ in xrange(self.workers): job_queue.put(None) logger.debug(«job loop exiting, total %i jobs», job_no) # buffer ahead only a limited number of jobs.. this is the reason we can’t simply use ThreadPool :( job_queue = Queue(maxsize=queue_factor * self.workers) progress_queue = Queue(maxsize=(queue_factor + 1) * self.workers) workers = [threading.Thread(target=worker_loop) for _ in xrange(self.workers)] unfinished_worker_count = len(workers) workers.append(threading.Thread(target=job_producer)) for thread in workers: thread.daemon = True # make interrupting the process with ctrl+c easier thread.start() example_count, trained_word_count, raw_word_count = 0, 0, word_count start, next_report = default_timer() 0.00001, 1.0 while unfinished_worker_count > 0: report = progress_queue.get() # blocks if workers too slow if report is None: # a thread reporting that it finished unfinished_worker_count -= 1 logger.info(«worker thread finished; awaiting finish of %i more threads», unfinished_worker_count) continue examples, trained_words, raw_words = report job_tally += 1 # update progress stats example_count += examples trained_word_count += trained_words # only words in vocab & sampled raw_word_count += raw_words # log progress once every report_delay seconds elapsed = default_timer() start if elapsed >= next_report: if total_examples: # examples-based progress % logger.info( «PROGRESS: at %.2f%% examples, %.0f words/s, in_qsize %i, out_qsize %i», 100.0 * example_count / total_examples, trained_word_count / elapsed, utils.qsize(job_queue), utils.qsize(progress_queue)) else: # words-based progress % logger.info( «PROGRESS: at %.2f%% words, %.0f words/s, in_qsize %i, out_qsize %i», 100.0 * raw_word_count / total_words, trained_word_count / elapsed, utils.qsize(job_queue), utils.qsize(progress_queue)) next_report = elapsed + report_delay # all done; report the final stats elapsed = default_timer() start logger.info( «training on %i raw words (%i effective words) took %.1fs, %.0f effective words/s», raw_word_count, trained_word_count, elapsed, trained_word_count / elapsed) if job_tally < 10 * self.workers: 
logger.warning( «under 10 jobs per worker: consider setting a smaller `batch_words’ for smoother alpha decay» ) # check that the input corpus hasn’t changed during iteration if total_examples and total_examples != example_count: logger.warning( «supplied example count (%i) did not equal expected count (%i)», example_count, total_examples ) if total_words and total_words != raw_word_count: logger.warning( «supplied raw word count (%i) did not equal expected count (%i)», raw_word_count, total_words ) self.train_count += 1 # number of times train() has been called self.total_train_time += elapsed self.clear_sims() return trained_word_count # basics copied from the train() function def score(self, sentences, total_sentences=int(1e6), chunksize=100, queue_factor=2, report_delay=1): «»» Score the log probability for a sequence of sentences (can be a once-only generator stream). Each sentence must be a list of unicode strings. This does not change the fitted model in any way (see Word2Vec.train() for that). We have currently only implemented score for the hierarchical softmax scheme, so you need to have run word2vec with hs=1 and negative=0 for this to work. Note that you should specify total_sentences; we’ll run into problems if you ask to score more than this number of sentences but it is inefficient to set the value too high. See the article by [taddy]_ and the gensim demo at [deepir]_ for examples of how to use such scores in document classification. .. [taddy] Taddy, Matt. Document Classification by Inversion of Distributed Language Representations, in Proceedings of the 2015 Conference of the Association of Computational Linguistics. .. [deepir] https://github.com/piskvorky/gensim/blob/develop/docs/notebooks/deepir.ipynb «»» if FAST_VERSION < 0: warnings.warn(«C extension compilation failed, scoring will be slow. « «Install a C compiler and reinstall gensim for fastness.») logger.info( «scoring sentences with %i workers on %i vocabulary and %i features, « «using sg=%s hs=%s sample=%s and negative=%s», self.workers, len(self.wv.vocab), self.layer1_size, self.sg, self.hs, self.sample, self.negative) if not self.wv.vocab: raise RuntimeError(«you must first build vocabulary before scoring new data») if not self.hs: raise RuntimeError(«We have currently only implemented score for the hierarchical softmax scheme, so you need to have run word2vec with hs=1 and negative=0 for this to work.») def worker_loop(): «»»Compute log probability for each sentence, lifting lists of sentences from the jobs queue.»»» work = zeros(1, dtype=REAL) # for sg hs, we actually only need one memory loc (running sum) neu1 = matutils.zeros_aligned(self.layer1_size, dtype=REAL) while True: job = job_queue.get() if job is None: # signal to finish break ns = 0 for sentence_id, sentence in job: if sentence_id >= total_sentences: break if self.sg: score = score_sentence_sg(self, sentence, work) else: score = score_sentence_cbow(self, sentence, work, neu1) sentence_scores[sentence_id] = score ns += 1 progress_queue.put(ns) # report progress start, next_report = default_timer(), 1.0 # buffer ahead only a limited number of jobs.. 
this is the reason we can’t simply use ThreadPool :( job_queue = Queue(maxsize=queue_factor * self.workers) progress_queue = Queue(maxsize=(queue_factor + 1) * self.workers) workers = [threading.Thread(target=worker_loop) for _ in xrange(self.workers)] for thread in workers: thread.daemon = True # make interrupting the process with ctrl+c easier thread.start() sentence_count = 0 sentence_scores = matutils.zeros_aligned(total_sentences, dtype=REAL) push_done = False done_jobs = 0 jobs_source = enumerate(utils.grouper(enumerate(sentences), chunksize)) # fill jobs queue with (id, sentence) job items while True: try: job_no, items = next(jobs_source) if (job_no 1) * chunksize > total_sentences: logger.warning( «terminating after %i sentences (set higher total_sentences if you want more).», total_sentences) job_no -= 1 raise StopIteration() logger.debug(«putting job #%i in the queue», job_no) job_queue.put(items) except StopIteration: logger.info( «reached end of input; waiting to finish %i outstanding jobs», job_no done_jobs + 1) for _ in xrange(self.workers): job_queue.put(None) # give the workers heads up that they can finish — no more work! push_done = True try: while done_jobs < (job_no + 1) or not push_done: ns = progress_queue.get(push_done) # only block after all jobs pushed sentence_count += ns done_jobs += 1 elapsed = default_timer() start if elapsed >= next_report: logger.info( «PROGRESS: at %.2f%% sentences, %.0f sentences/s», 100.0 * sentence_count, sentence_count / elapsed) next_report = elapsed + report_delay # don’t flood log, wait report_delay seconds else: # loop ended by job count; really done break except Empty: pass # already out of loop; continue to next push elapsed = default_timer() start self.clear_sims() logger.info( «scoring %i sentences took %.1fs, %.0f sentences/s», sentence_count, elapsed, sentence_count / elapsed) return sentence_scores[:sentence_count] def clear_sims(self): «»» Removes all L2-normalized vectors for words from the model. You will have to recompute them using init_sims method. «»» self.wv.syn0norm = None def update_weights(self): «»» Copy all the existing weights, and reset the weights for the newly added vocabulary. «»» logger.info(«updating layer weights») gained_vocab = len(self.wv.vocab) len(self.wv.syn0) newsyn0 = empty((gained_vocab, self.vector_size), dtype=REAL) # randomize the remaining words for i in xrange(len(self.wv.syn0), len(self.wv.vocab)): # construct deterministic seed from word AND seed argument newsyn0[ilen(self.wv.syn0)] = self.seeded_vector(self.wv.index2word[i] + str(self.seed)) # Raise an error if an online update is run before initial training on a corpus if not len(self.wv.syn0): raise RuntimeError(«You cannot do an online vocabulary-update of a model which has no prior vocabulary. 
« «First build the vocabulary of your model with a corpus « «before doing an online update.») self.wv.syn0 = vstack([self.wv.syn0, newsyn0]) if self.hs: self.syn1 = vstack([self.syn1, zeros((gained_vocab, self.layer1_size), dtype=REAL)]) if self.negative: self.syn1neg = vstack([self.syn1neg, zeros((gained_vocab, self.layer1_size), dtype=REAL)]) self.wv.syn0norm = None # do not suppress learning for already learned words self.syn0_lockf = ones(len(self.wv.vocab), dtype=REAL) # zeros suppress learning def reset_weights(self): «»»Reset all projection weights to an initial (untrained) state, but keep the existing vocabulary.»»» logger.info(«resetting layer weights») self.wv.syn0 = empty((len(self.wv.vocab), self.vector_size), dtype=REAL) # randomize weights vector by vector, rather than materializing a huge random matrix in RAM at once for i in xrange(len(self.wv.vocab)): # construct deterministic seed from word AND seed argument self.wv.syn0[i] = self.seeded_vector(self.wv.index2word[i] + str(self.seed)) if self.hs: self.syn1 = zeros((len(self.wv.vocab), self.layer1_size), dtype=REAL) if self.negative: self.syn1neg = zeros((len(self.wv.vocab), self.layer1_size), dtype=REAL) self.wv.syn0norm = None self.syn0_lockf = ones(len(self.wv.vocab), dtype=REAL) # zeros suppress learning def seeded_vector(self, seed_string): «»»Create one ‘random’ vector (but deterministic by seed_string)»»» # Note: built-in hash() may vary by Python version or even (in Py3.x) per launch once = random.RandomState(self.hashfxn(seed_string) & 0xffffffff) return (once.rand(self.vector_size) 0.5) / self.vector_size def intersect_word2vec_format(self, fname, lockf=0.0, binary=False, encoding=‘utf8’, unicode_errors=‘strict’): «»» Merge the input-hidden weight matrix from the original C word2vec-tool format given, where it intersects with the current vocabulary. (No words are added to the existing vocabulary, but intersecting words adopt the file’s weights, and non-intersecting words are left alone.) `binary` is a boolean indicating whether the data is in binary word2vec format. `lockf` is a lock-factor value to be set for any imported word-vectors; the default value of 0.0 prevents further updating of the vector during subsequent training. Use 1.0 to allow further training updates of merged vectors. «»» overlap_count = 0 logger.info(«loading projection weights from %s» % (fname)) with utils.smart_open(fname) as fin: header = utils.to_unicode(fin.readline(), encoding=encoding) vocab_size, vector_size = map(int, header.split()) # throws for invalid file format if not vector_size == self.vector_size: raise ValueError(«incompatible vector size %d in file %s» % (vector_size, fname)) # TOCONSIDER: maybe mismatched vectors still useful enough to merge (truncating/padding)? 
if binary: binary_len = dtype(REAL).itemsize * vector_size for line_no in xrange(vocab_size): # mixed text and binary: read text first, then binary word = [] while True: ch = fin.read(1) if ch == b’ ‘: break if ch != b’n: # ignore newlines in front of words (some binary files have) word.append(ch) word = utils.to_unicode(.join(word), encoding=encoding, errors=unicode_errors) weights = fromstring(fin.read(binary_len), dtype=REAL) if word in self.wv.vocab: overlap_count += 1 self.wv.syn0[self.wv.vocab[word].index] = weights self.syn0_lockf[self.wv.vocab[word].index] = lockf # lock-factor: 0.0 stops further changes else: for line_no, line in enumerate(fin): parts = utils.to_unicode(line.rstrip(), encoding=encoding, errors=unicode_errors).split(» «) if len(parts) != vector_size + 1: raise ValueError(«invalid vector on line %s (is this really the text format?)» % (line_no)) word, weights = parts[0], list(map(REAL, parts[1:])) if word in self.wv.vocab: overlap_count += 1 self.wv.syn0[self.wv.vocab[word].index] = weights self.syn0_lockf[self.wv.vocab[word].index] = lockf # lock-factor: 0.0 stops further changes logger.info(«merged %d vectors into %s matrix from %s» % (overlap_count, self.wv.syn0.shape, fname)) def most_similar(self, positive=[], negative=[], topn=10, restrict_vocab=None, indexer=None): «»» Deprecated. Use self.wv.most_similar() instead. Refer to the documentation for `gensim.models.KeyedVectors.most_similar` «»» return self.wv.most_similar(positive, negative, topn, restrict_vocab, indexer) def wmdistance(self, document1, document2): «»» Deprecated. Use self.wv.wmdistance() instead. Refer to the documentation for `gensim.models.KeyedVectors.wmdistance` «»» return self.wv.wmdistance(document1, document2) def most_similar_cosmul(self, positive=[], negative=[], topn=10): «»» Deprecated. Use self.wv.most_similar_cosmul() instead. Refer to the documentation for `gensim.models.KeyedVectors.most_similar_cosmul` «»» return self.wv.most_similar_cosmul(positive, negative, topn) def similar_by_word(self, word, topn=10, restrict_vocab=None): «»» Deprecated. Use self.wv.similar_by_word() instead. Refer to the documentation for `gensim.models.KeyedVectors.similar_by_word` «»» return self.wv.similar_by_word(word, topn, restrict_vocab) def similar_by_vector(self, vector, topn=10, restrict_vocab=None): «»» Deprecated. Use self.wv.similar_by_vector() instead. Refer to the documentation for `gensim.models.KeyedVectors.similar_by_vector` «»» return self.wv.similar_by_vector(vector, topn, restrict_vocab) def doesnt_match(self, words): «»» Deprecated. Use self.wv.doesnt_match() instead. Refer to the documentation for `gensim.models.KeyedVectors.doesnt_match` «»» return self.wv.doesnt_match(words) def __getitem__(self, words): «»» Deprecated. Use self.wv.__getitem__() instead. Refer to the documentation for `gensim.models.KeyedVectors.__getitem__` «»» return self.wv.__getitem__(words) def __contains__(self, word): «»» Deprecated. Use self.wv.__contains__() instead. Refer to the documentation for `gensim.models.KeyedVectors.__contains__` «»» return self.wv.__contains__(word) def similarity(self, w1, w2): «»» Deprecated. Use self.wv.similarity() instead. Refer to the documentation for `gensim.models.KeyedVectors.similarity` «»» return self.wv.similarity(w1, w2) def n_similarity(self, ws1, ws2): «»» Deprecated. Use self.wv.n_similarity() instead. 
Refer to the documentation for `gensim.models.KeyedVectors.n_similarity` «»» return self.wv.n_similarity(ws1, ws2) def predict_output_word(self, context_words_list, topn=10): «»»Report the probability distribution of the center word given the context words as input to the trained model.»»» if not self.negative: raise RuntimeError(«We have currently only implemented predict_output_word « «for the negative sampling scheme, so you need to have « «run word2vec with negative > 0 for this to work.») if not hasattr(self.wv, ‘syn0’) or not hasattr(self, ‘syn1neg’): raise RuntimeError(«Parameters required for predicting the output words not found.») word_vocabs = [self.wv.vocab[w] for w in context_words_list if w in self.wv.vocab] if not word_vocabs: warnings.warn(«All the input context words are out-of-vocabulary for the current model.») return None word2_indices = [word.index for word in word_vocabs] l1 = np_sum(self.wv.syn0[word2_indices], axis=0) if word2_indices and self.cbow_mean: l1 /= len(word2_indices) prob_values = exp(dot(l1, self.syn1neg.T)) # propagate hidden -> output and take softmax to get probabilities prob_values /= sum(prob_values) top_indices = matutils.argsort(prob_values, topn=topn, reverse=True) return [(self.wv.index2word[index1], prob_values[index1]) for index1 in top_indices] #returning the most probable output words with their probabilities def init_sims(self, replace=False): «»» init_sims() resides in KeyedVectors because it deals with syn0 mainly, but because syn1 is not an attribute of KeyedVectors, it has to be deleted in this class, and the normalizing of syn0 happens inside of KeyedVectors «»» if replace and hasattr(self, ‘syn1’): del self.syn1 return self.wv.init_sims(replace) def estimate_memory(self, vocab_size=None, report=None): «»»Estimate required memory for a model using current settings and provided vocabulary size.»»» vocab_size = vocab_size or len(self.wv.vocab) report = report or {} report[‘vocab’] = vocab_size * (700 if self.hs else 500) report[‘syn0’] = vocab_size * self.vector_size * dtype(REAL).itemsize if self.hs: report[‘syn1’] = vocab_size * self.layer1_size * dtype(REAL).itemsize if self.negative: report[‘syn1neg’] = vocab_size * self.layer1_size * dtype(REAL).itemsize report[‘total’] = sum(report.values()) logger.info(«estimated required memory for %i words and %i dimensions: %i bytes», vocab_size, self.vector_size, report[‘total’]) return report @staticmethod def log_accuracy(section): return KeyedVectors.log_accuracy(section) def accuracy(self, questions, restrict_vocab=30000, most_similar=None, case_insensitive=True): most_similar = most_similar or KeyedVectors.most_similar return self.wv.accuracy(questions, restrict_vocab, most_similar, case_insensitive) @staticmethod def log_evaluate_word_pairs(pearson, spearman, oov, pairs): «»» Deprecated. Use self.wv.log_evaluate_word_pairs() instead. Refer to the documentation for `gensim.models.KeyedVectors.log_evaluate_word_pairs` «»» return KeyedVectors.log_evaluate_word_pairs(pearson, spearman, oov, pairs) def evaluate_word_pairs(self, pairs, delimiter=t, restrict_vocab=300000, case_insensitive=True, dummy4unknown=False): «»» Deprecated. Use self.wv.evaluate_word_pairs() instead. 
Refer to the documentation for `gensim.models.KeyedVectors.evaluate_word_pairs` «»» return self.wv.evaluate_word_pairs(pairs, delimiter, restrict_vocab, case_insensitive, dummy4unknown) def __str__(self): return «%s(vocab=%s, size=%s, alpha=%s)» % (self.__class__.__name__, len(self.wv.index2word), self.vector_size, self.alpha) def _minimize_model(self, save_syn1 = False, save_syn1neg = False, save_syn0_lockf = False): warnings.warn(«This method would be deprecated in the future. Keep just_word_vectors = model.wv to retain just the KeyedVectors instance for read-only querying of word vectors.») if save_syn1 and save_syn1neg and save_syn0_lockf: return if hasattr(self, ‘syn1’) and not save_syn1: del self.syn1 if hasattr(self, ‘syn1neg’) and not save_syn1neg: del self.syn1neg if hasattr(self, ‘syn0_lockf’) and not save_syn0_lockf: del self.syn0_lockf self.model_trimmed_post_training = True def delete_temporary_training_data(self, replace_word_vectors_with_normalized=False): «»» Discard parameters that are used in training and score. Use if you’re sure you’re done training a model. If `replace_word_vectors_with_normalized` is set, forget the original vectors and only keep the normalized ones = saves lots of memory! «»» if replace_word_vectors_with_normalized: self.init_sims(replace=True) self._minimize_model() def save(self, *args, **kwargs): # don’t bother storing the cached normalized vectors, recalculable table kwargs[‘ignore’] = kwargs.get(‘ignore’, [‘syn0norm’, ‘table’, ‘cum_table’]) super(Word2Vec, self).save(*args, **kwargs) save.__doc__ = utils.SaveLoad.save.__doc__ @classmethod def load(cls, *args, **kwargs): model = super(Word2Vec, cls).load(*args, **kwargs) # update older models if hasattr(model, ‘table’): delattr(model, ‘table’) # discard in favor of cum_table if model.negative and hasattr(model.wv, ‘index2word’): model.make_cum_table() # rebuild cum_table from vocabulary if not hasattr(model, ‘corpus_count’): model.corpus_count = None for v in model.wv.vocab.values(): if hasattr(v, ‘sample_int’): break # already 0.12.0+ style int probabilities elif hasattr(v, ‘sample_probability’): v.sample_int = int(round(v.sample_probability * 2**32)) del v.sample_probability if not hasattr(model, ‘syn0_lockf’) and hasattr(model, ‘syn0’): model.syn0_lockf = ones(len(model.wv.syn0), dtype=REAL) if not hasattr(model, ‘random’): model.random = random.RandomState(model.seed) if not hasattr(model, ‘train_count’): model.train_count = 0 model.total_train_time = 0 return model def _load_specials(self, *args, **kwargs): super(Word2Vec, self)._load_specials(*args, **kwargs) # loading from a pre-KeyedVectors word2vec model if not hasattr(self, ‘wv’): wv = KeyedVectors() wv.syn0 = self.__dict__.get(‘syn0’, []) wv.syn0norm = self.__dict__.get(‘syn0norm’, None) wv.vocab = self.__dict__.get(‘vocab’, {}) wv.index2word = self.__dict__.get(‘index2word’, []) self.wv = wv @classmethod def load_word2vec_format(cls, fname, fvocab=None, binary=False, encoding=‘utf8’, unicode_errors=‘strict’, limit=None, datatype=REAL): «»»Deprecated. Use gensim.models.KeyedVectors.load_word2vec_format instead.»»» raise DeprecationWarning(«Deprecated. Use gensim.models.KeyedVectors.load_word2vec_format instead.») def save_word2vec_format(self, fname, fvocab=None, binary=False): «»»Deprecated. Use model.wv.save_word2vec_format instead.»»» raise DeprecationWarning(«Deprecated. 
Use model.wv.save_word2vec_format instead.») class BrownCorpus(object): «»»Iterate over sentences from the Brown corpus (part of NLTK data).»»» def __init__(self, dirname): self.dirname = dirname def __iter__(self): for fname in os.listdir(self.dirname): fname = os.path.join(self.dirname, fname) if not os.path.isfile(fname): continue for line in utils.smart_open(fname): line = utils.to_unicode(line) # each file line is a single sentence in the Brown corpus # each token is WORD/POS_TAG token_tags = [t.split(‘/’) for t in line.split() if len(t.split(‘/’)) == 2] # ignore words with non-alphabetic tags like «,», «!» etc (punctuation, weird stuff) words = [«%s/%s» % (token.lower(), tag[:2]) for token, tag in token_tags if tag[:2].isalpha()] if not words: # don’t bother sending out empty sentences continue yield words class Text8Corpus(object): «»»Iterate over sentences from the «text8″ corpus, unzipped from http://mattmahoney.net/dc/text8.zip .»»» def __init__(self, fname, max_sentence_length=MAX_WORDS_IN_BATCH): self.fname = fname self.max_sentence_length = max_sentence_length def __iter__(self): # the entire corpus is one gigantic line — there are no sentence marks at all # so just split the sequence of tokens arbitrarily: 1 sentence = 1000 tokens sentence, rest = [], with utils.smart_open(self.fname) as fin: while True: text = rest + fin.read(8192) # avoid loading the entire file (=1 line) into RAM if text == rest: # EOF words = utils.to_unicode(text).split() sentence.extend(words) # return the last chunk of words, too (may be shorter/longer) if sentence: yield sentence break last_token = text.rfind(b’ ‘) # last token may have been split in two… keep for next iteration words, rest = (utils.to_unicode(text[:last_token]).split(), text[last_token:].strip()) if last_token >= 0 else ([], text) sentence.extend(words) while len(sentence) >= self.max_sentence_length: yield sentence[:self.max_sentence_length] sentence = sentence[self.max_sentence_length:] class LineSentence(object): «»» Simple format: one sentence = one line; words already preprocessed and separated by whitespace. «»» def __init__(self, source, max_sentence_length=MAX_WORDS_IN_BATCH, limit=None): «»» `source` can be either a string or a file object. Clip the file to the first `limit` lines (or no clipped if limit is None, the default). 
Example:: sentences = LineSentence(‘myfile.txt’) Or for compressed files:: sentences = LineSentence(‘compressed_text.txt.bz2’) sentences = LineSentence(‘compressed_text.txt.gz’) «»» self.source = source self.max_sentence_length = max_sentence_length self.limit = limit def __iter__(self): «»»Iterate through the lines in the source.»»» try: # Assume it is a file-like object and try treating it as such # Things that don’t have seek will trigger an exception self.source.seek(0) for line in itertools.islice(self.source, self.limit): line = utils.to_unicode(line).split() i = 0 while i < len(line): yield line[i : i + self.max_sentence_length] i += self.max_sentence_length except AttributeError: # If it didn’t work like a file, use it as a string filename with utils.smart_open(self.source) as fin: for line in itertools.islice(fin, self.limit): line = utils.to_unicode(line).split() i = 0 while i < len(line): yield line[i : i + self.max_sentence_length] i += self.max_sentence_length # Example: ./word2vec.py -train data.txt -output vec.txt -size 200 -window 5 -sample 1e-4 -negative 5 -hs 0 -binary 0 -cbow 1 -iter 3 if __name__ == «__main__»: import argparse logging.basicConfig( format=‘%(asctime)s : %(threadName)s : %(levelname)s : %(message)s’, level=logging.INFO) logging.info(«running %s», » «.join(sys.argv)) logging.info(«using optimization %s», FAST_VERSION) # check and process cmdline input program = os.path.basename(sys.argv[0]) if len(sys.argv) < 2: print(globals()[‘__doc__’] % locals()) sys.exit(1) from gensim.models.word2vec import Word2Vec # avoid referencing __main__ in pickle seterr(all=‘raise’) # don’t ignore numpy errors parser = argparse.ArgumentParser() parser.add_argument(«-train», help=«Use text data from file TRAIN to train the model», required=True) parser.add_argument(«-output», help=«Use file OUTPUT to save the resulting word vectors») parser.add_argument(«-window», help=«Set max skip length WINDOW between words; default is 5», type=int, default=5) parser.add_argument(«-size», help=«Set size of word vectors; default is 100», type=int, default=100) parser.add_argument(«-sample», help=«Set threshold for occurrence of words. 
Those that appear with higher frequency in the training data will be randomly down-sampled; default is 1e-3, useful range is (0, 1e-5)», type=float, default=1e-3) parser.add_argument(«-hs», help=«Use Hierarchical Softmax; default is 0 (not used)», type=int, default=0, choices=[0, 1]) parser.add_argument(«-negative», help=«Number of negative examples; default is 5, common values are 3 — 10 (0 = not used)», type=int, default=5) parser.add_argument(«-threads», help=«Use THREADS threads (default 12)», type=int, default=12) parser.add_argument(«-iter», help=«Run more training iterations (default 5)», type=int, default=5) parser.add_argument(«-min_count», help=«This will discard words that appear less than MIN_COUNT times; default is 5», type=int, default=5) parser.add_argument(«-cbow», help=«Use the continuous bag of words model; default is 1 (use 0 for skip-gram model)», type=int, default=1, choices=[0, 1]) parser.add_argument(«-binary», help=«Save the resulting vectors in binary mode; default is 0 (off)», type=int, default=0, choices=[0, 1]) parser.add_argument(«-accuracy», help=«Use questions from file ACCURACY to evaluate the model») args = parser.parse_args() if args.cbow == 0: skipgram = 1 else: skipgram = 0 corpus = LineSentence(args.train) model = Word2Vec( corpus, size=args.size, min_count=args.min_count, workers=args.threads, window=args.window, sample=args.sample, sg=skipgram, hs=args.hs, negative=args.negative, cbow_mean=1, iter=args.iter) if args.output: outfile = args.output model.wv.save_word2vec_format(outfile, binary=args.binary) else: outfile = args.train model.save(outfile + ‘.model’) if args.binary == 1: model.wv.save_word2vec_format(outfile + ‘.model.bin’, binary=True) else: model.wv.save_word2vec_format(outfile + ‘.model.txt’, binary=False) if args.accuracy: model.accuracy(args.accuracy) logger.info(«finished running %s», program)
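
The listing above ends with the module's command-line entry point. As a quick sanity check of the Python API it documents, here is a minimal, self-contained usage sketch; it is not part of the original file. It assumes a gensim release matching the API shown above (pre-4.0 keyword names such as `size` and `iter`; newer gensim versions rename these to `vector_size` and `epochs`), and the file names `toy_w2v.model` and `toy_vectors.txt` are placeholders.

# Minimal usage sketch for the word2vec module shown above (assumes pre-4.0 gensim API).
from gensim.models.word2vec import Word2Vec

# Toy corpus: each sentence is a list of already-tokenized unicode strings.
sentences = [
    ["the", "quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog"],
    ["the", "dog", "barks", "at", "the", "fox"],
    ["a", "lazy", "dog", "sleeps", "all", "day"],
]

# Train a small skip-gram model; min_count=1 keeps every word of the toy corpus,
# workers=1 plus a fixed seed keeps the run closer to reproducible.
model = Word2Vec(sentences, size=50, window=3, min_count=1, sg=1, iter=20, workers=1, seed=42)

print(model.wv["dog"][:5])                   # first few dimensions of the vector for "dog"
print(model.wv.most_similar("dog", topn=3))  # nearest neighbours by cosine similarity

# Persist the full (trainable) model and the read-only vectors in C text format.
model.save("toy_w2v.model")
model.wv.save_word2vec_format("toy_vectors.txt", binary=False)

# Reload, then keep only the KeyedVectors to save RAM once training is done.
reloaded = Word2Vec.load("toy_w2v.model")
word_vectors = reloaded.wv
del reloaded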

5. Write the questions and then answer them.
1) where / you / go / now
Where are you going now? To the park.
2) what / you / wear / right now
3) what / be / the weather / like / today
4) what / your parents / do / at the moment
5) what time / you / get up / every day
6) which season / you / like / most
