CHECK YOUR ENGLISH VOCABULARY
FOR COMPUTER AND INFORMATION TECHNOLOGY
SECTION 2: SOFTWARE
2.6 Word processing

A. Write the numbers next to the words. (The numbered page-layout diagram is not reproduced here.)
top margin, bottom margin, left-hand margin, right-hand margin, heading (or title), body text, paragraph break, indent, illustration, border (or frame), page number, page border (or edge of the page)
B. Match the words with the types of lettering. (The lettering samples are not reproduced here.)
bold, bold italic, italic, lower case (or small letters), outline, plain text, shadow, upper case (or capital letters), strikethrough, underline
C. Choose the best words.
1. The text about typewriters is … (A) sections (B) paragraphs (C) chunks
2. Times, Arial and Courier are … (A) lettering (B) character (C) font
3. The text about typewriters is … (A) single spaced (B) double spaced (C) one-and-a-half spaced
4. «Inventions that Changed … (A) header (B) footer (C) footnote
5. Do you think the margins are … (A) big/small (B) wide/narrow (C) long/short
6. Do you like the page … (A) layout (B) organisation (C) pattern
ANSWER KEY
B: 1 upper case (or capital letters), 2 lower case (or small letters), 3 plain text, 4 bold, 5 italic, 6 bold italic, 7 underline, 8 strikethrough, 9 outline, 12 shadow
C: 1 b, 2 c, 3 b, 4 a, 5 b, 6 a
This is the third article in this series of articles on Python for Natural Language Processing. In the previous article, we saw how Python’s NLTK and spaCy libraries can be used to perform simple NLP tasks such as tokenization, stemming and lemmatization. We also saw how to perform part-of-speech tagging, named entity recognition and noun phrase parsing. However, all of these operations are performed on individual words.
In this article, we will move a step further and explore vocabulary and phrase matching using the spaCy library. We will define patterns and then see which phrases in a document match them. This is similar to defining regular expressions that involve parts of speech.
Rule-Based Matching
The spaCy library comes with a Matcher tool that can be used to specify custom rules for phrase matching. The process of using the Matcher tool is straightforward. First, define the patterns that you want to match. Next, add the patterns to the Matcher tool, and finally apply the Matcher tool to the document that you want to match your rules against. This is best explained with the help of an example.
For rule-based matching, you need to perform the following steps:
Creating Matcher Object
The first step is to create the matcher object:
import spacy
nlp = spacy.load('en_core_web_sm')
from spacy.matcher import Matcher
m_tool = Matcher(nlp.vocab)
Defining Patterns
The next step is to define the patterns that will be used to filter similar phrases. Suppose we want to find the phrases "quick-brown-fox", "quick brown fox", "quickbrownfox" or "quick brownfox". To do so, we need to create the following four patterns:
p1 = [{'LOWER': 'quickbrownfox'}]
p2 = [{'LOWER': 'quick'}, {'IS_PUNCT': True}, {'LOWER': 'brown'}, {'IS_PUNCT': True}, {'LOWER': 'fox'}]
p3 = [{'LOWER': 'quick'}, {'LOWER': 'brown'}, {'LOWER': 'fox'}]
p4 = [{'LOWER': 'quick'}, {'LOWER': 'brownfox'}]
In the above script:
- p1 looks for the phrase "quickbrownfox"
- p2 looks for the phrase "quick-brown-fox"
- p3 looks for the phrase "quick brown fox"
- p4 looks for the phrase "quick brownfox"
The token attribute LOWER specifies that the token text should be converted to lower case before matching.
Once the patterns are defined, we need to add them to the Matcher object that we created earlier.
m_tool.add('QBF', None, p1, p2, p3, p4)
Here "QBF" is the name of our matcher; you can give it any name. (In spaCy v3, the signature changed to m_tool.add('QBF', [p1, p2, p3, p4]).)
Applying Matcher to the Document
We have our matcher ready. The next step is to apply the matcher on a text document and see if we can get any match. Let’s first create a simple document:
sentence = nlp(u'The quick-brown-fox jumps over the lazy dog. The quick brown fox eats well. the quickbrownfox is dead. the dog misses the quick brownfox')
To apply the matcher to a document, pass the document as a parameter to the matcher object. The result is a list of tuples containing the ids of the matched phrases along with their starting and ending positions in the document. Execute the following script:
phrase_matches = m_tool(sentence)
print(phrase_matches)
The output of the script above looks like this:
[(12825528024649263697, 1, 6), (12825528024649263697, 13, 16), (12825528024649263697, 21, 22), (12825528024649263697, 29, 31)]
From the output, you can see that four phrases have been matched. The first long number in each tuple is the id of the matched phrase; the second and third numbers are its starting and (exclusive) ending token positions.
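The (match_id, start, end) triples follow ordinary Python slicing conventions; here is a tiny standalone sketch (the token list and match tuple are invented for illustration):

```python
# Invented token list and match tuple, mirroring spaCy's (match_id, start, end) format.
tokens = "The quick - brown - fox jumps over the lazy dog".split()
matches = [(12825528024649263697, 1, 6)]  # the end index is exclusive, like list slicing

for match_id, start, end in matches:
    span = tokens[start:end]  # the same slicing a spaCy Span performs on the Doc
    print(match_id, start, end, " ".join(span))  # ... 1 6 quick - brown - fox
```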
To view the results in a more readable way, we can iterate through each matched phrase and display its string value. Execute the following script:
for match_id, start, end in phrase_matches:
    string_id = nlp.vocab.strings[match_id]
    span = sentence[start:end]
    print(match_id, string_id, start, end, span.text)
Output:
12825528024649263697 QBF 1 6 quick-brown-fox
12825528024649263697 QBF 13 16 quick brown fox
12825528024649263697 QBF 21 22 quickbrownfox
12825528024649263697 QBF 29 31 quick brownfox
From the output, you can see all the matched phrases along with their vocabulary ids and start and end positions.
More Options for Rule-Based Matching
The official documentation of the spaCy library contains details of all the tokens and wildcards that can be used for phrase matching.
For instance, the '*' operator (supplied via the OP key) matches zero or more instances of a token, while '+' matches one or more.
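As a rough analogy (a pure-regex sketch, not spaCy itself), the OP values behave like regex quantifiers:

```python
import re

# spaCy's OP values mirror regex quantifiers: '*' = zero or more, '+' = one or more, '?' = zero or one
star = re.compile(r"quick-*brown-*fox")   # hyphens optional, like {'IS_PUNCT': True, 'OP': '*'}
plus = re.compile(r"quick-+brown-+fox")   # at least one hyphen required, like 'OP': '+'

print(bool(star.fullmatch("quickbrownfox")))      # True: '*' allows zero hyphens
print(bool(plus.fullmatch("quickbrownfox")))      # False: '+' needs at least one
print(bool(plus.fullmatch("quick--brown--fox")))  # True
```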
Let’s write a single pattern that can identify phrases such as "quick--brown--fox" or "quick-brown---fox".
Let’s first remove the previous matcher QBF:
m_tool.remove('QBF')
Next, we need to define our new pattern:
p1 = [{'LOWER': 'quick'}, {'IS_PUNCT': True, 'OP':'*'}, {'LOWER': 'brown'}, {'IS_PUNCT': True, 'OP':'*'}, {'LOWER': 'fox'}]
m_tool.add('QBF', None, p1)
The pattern p1 will match all phrases where 'quick', 'brown' and 'fox' are separated by zero or more punctuation tokens. Let’s now define our document for filtering:
sentence = nlp(u'The quick--brown--fox jumps over the quick-brown---fox')
Our document contains two phrases, "quick--brown--fox" and "quick-brown---fox", that should match our pattern. Let’s apply our matcher to the document and see the results:
phrase_matches = m_tool(sentence)

for match_id, start, end in phrase_matches:
    string_id = nlp.vocab.strings[match_id]
    span = sentence[start:end]
    print(match_id, string_id, start, end, span.text)
The output of the script above looks like this:
12825528024649263697 QBF 1 6 quick--brown--fox
12825528024649263697 QBF 10 15 quick-brown---fox
From the output, you can see that our matcher has successfully matched the two phrases.
Phrase-Based Matching
In the last section, we saw how we can define rules to identify phrases in a document. In addition to defining rules, we can directly specify the phrases that we are looking for.
This is a more efficient way of phrase matching.
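Conceptually, phrase matching just scans the token sequence for each exact sub-sequence of tokens. A pure-Python sketch of the idea (not spaCy’s optimized implementation; the sample sentence is invented):

```python
def find_phrases(tokens, phrases):
    """Return (phrase, start, end) for every occurrence of each phrase (a list of token lists)."""
    hits = []
    for phrase in phrases:
        n = len(phrase)
        for i in range(len(tokens) - n + 1):
            if tokens[i:i + n] == phrase:  # exact token sub-sequence match
                hits.append((" ".join(phrase), i, i + n))
    return hits

tokens = "research in machine learning produced robots and intelligent agents".split()
phrases = [p.split() for p in ['machine learning', 'robots', 'intelligent agents']]
print(find_phrases(tokens, phrases))
# [('machine learning', 2, 4), ('robots', 5, 6), ('intelligent agents', 7, 9)]
```

spaCy’s PhraseMatcher does this over Doc objects using the vocabulary’s hash ids rather than raw strings, which is what makes it fast on large phrase lists.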
In this section, we will be doing phrase matching inside a Wikipedia article on Artificial intelligence.
Before we see the steps to perform phrase-matching, let’s first parse the Wikipedia article that we will be using to perform phrase matching. Execute the following script:
import bs4 as bs
import urllib.request
import re

scrapped_data = urllib.request.urlopen('https://en.wikipedia.org/wiki/Artificial_intelligence')
article = scrapped_data.read()
parsed_article = bs.BeautifulSoup(article, 'lxml')
paragraphs = parsed_article.find_all('p')
article_text = ""

for p in paragraphs:
    article_text += p.text

processed_article = article_text.lower()
processed_article = re.sub('[^a-zA-Z]', ' ', processed_article)
processed_article = re.sub(r'\s+', ' ', processed_article)
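To see what these two substitutions do, here is a tiny standalone example on an invented sentence. Note that the whitespace pattern must be the raw string r'\s+' (the escaped \s); the pattern 's+' alone would only collapse runs of the letter s:

```python
import re

raw = "AI (artificial intelligence) was founded in 1956."
step1 = re.sub('[^a-zA-Z]', ' ', raw.lower())  # every non-letter character becomes a space
step2 = re.sub(r'\s+', ' ', step1)             # runs of whitespace collapse to a single space

print(repr(step2))  # 'ai artificial intelligence was founded in '
```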
The script has been explained in detail in my article on Implementing Word2Vec with Gensim Library in Python. You can read that article if you want to understand how parsing works in Python.
The processed_article variable contains the text that we will use for phrase matching.
The steps to perform phrase matching are quite similar to rule-based matching.
Create Phrase Matcher Object
As a first step, you need to create a PhraseMatcher object. The following script does that:
import spacy
nlp = spacy.load('en_core_web_sm')
from spacy.matcher import PhraseMatcher
phrase_matcher = PhraseMatcher(nlp.vocab)
Notice that in the previous section we created a Matcher object; here we are creating a PhraseMatcher object.
Create Phrase List
In the second step, you need to create a list of phrases to match and then convert the list to spaCy NLP documents as shown in the following script:
phrases = ['machine learning', 'robots', 'intelligent agents']
patterns = [nlp(text) for text in phrases]
Finally, you need to add your phrase list to the phrase matcher.
phrase_matcher.add('AI', None, *patterns)
Here the name of our matcher is AI. (In spaCy v3, the signature is phrase_matcher.add('AI', patterns).)
Applying Matcher to the Document
Like rule-based matching, we again need to apply our phrase matcher to the document. However, our parsed article is not in spaCy document format. Therefore, we will convert our article into spaCy document format and then apply our phrase matcher to the article.
sentence = nlp(processed_article)
matched_phrases = phrase_matcher(sentence)
In the output, we will have the ids of all the matched phrases along with their start and end indexes in the document, as shown below:
[(5530044837203964789, 37, 39),
(5530044837203964789, 402, 404),
(5530044837203964789, 693, 694),
(5530044837203964789, 1284, 1286),
(5530044837203964789, 3059, 3061),
(5530044837203964789, 3218, 3220),
(5530044837203964789, 3753, 3754),
(5530044837203964789, 5212, 5213),
(5530044837203964789, 5287, 5288),
(5530044837203964789, 6769, 6771),
(5530044837203964789, 6781, 6783),
(5530044837203964789, 7496, 7498),
(5530044837203964789, 7635, 7637),
(5530044837203964789, 8002, 8004),
(5530044837203964789, 9461, 9462),
(5530044837203964789, 9955, 9957),
(5530044837203964789, 10784, 10785),
(5530044837203964789, 11250, 11251),
(5530044837203964789, 12290, 12291),
(5530044837203964789, 12411, 12412),
(5530044837203964789, 12455, 12456)]
To see the string value of the matched phrases, execute the following script:
for match_id, start, end in matched_phrases:
    string_id = nlp.vocab.strings[match_id]
    span = sentence[start:end]
    print(match_id, string_id, start, end, span.text)
In the output, you will see the string values of the matched phrases as shown below:
5530044837203964789 AI 37 39 intelligent agents
5530044837203964789 AI 402 404 machine learning
5530044837203964789 AI 693 694 robots
5530044837203964789 AI 1284 1286 machine learning
5530044837203964789 AI 3059 3061 intelligent agents
5530044837203964789 AI 3218 3220 machine learning
5530044837203964789 AI 3753 3754 robots
5530044837203964789 AI 5212 5213 robots
5530044837203964789 AI 5287 5288 robots
5530044837203964789 AI 6769 6771 machine learning
5530044837203964789 AI 6781 6783 machine learning
5530044837203964789 AI 7496 7498 machine learning
5530044837203964789 AI 7635 7637 machine learning
5530044837203964789 AI 8002 8004 machine learning
5530044837203964789 AI 9461 9462 robots
5530044837203964789 AI 9955 9957 machine learning
5530044837203964789 AI 10784 10785 robots
5530044837203964789 AI 11250 11251 robots
5530044837203964789 AI 12290 12291 robots
5530044837203964789 AI 12411 12412 robots
5530044837203964789 AI 12455 12456 robots
From the output, you can see all three phrases that we tried to match, along with their start and end indexes and string ids.
Stop Words
Before we conclude this article, I wanted to briefly touch on the concept of stop words. Stop words are English words such as "the", "a" and "an" that carry little meaning of their own. Stop words are often not very useful for NLP tasks such as text classification or language modeling, so it is often better to remove them before further processing of the document.
The spaCy library contains 305 default English stop words (the exact count varies between versions). In addition, depending upon our requirements, we can add stop words to or remove stop words from this list.
To see the default spaCy stop words, we can use the stop_words attribute of the spaCy model as shown below:
import spacy
sp = spacy.load('en_core_web_sm')
print(sp.Defaults.stop_words)
In the output, you will see all the spaCy stop words:
{'less', 'except', 'top', 'me', 'three', 'fifteen', 'a', 'is', 'those', 'all', 'then', 'everyone', 'without', 'must', 'has', 'any', 'anyhow', 'keep', 'through', 'bottom', 'get', 'indeed', 'it', 'still', 'ten', 'whatever', 'doing', 'though', 'eight', 'various', 'myself', 'across', 'wherever', 'himself', 'always', 'thus', 'am', 'after', 'should', 'perhaps', 'at', 'down', 'own', 'rather', 'regarding', 'which', 'anywhere', 'whence', 'would', 'been', 'how', 'herself', 'now', 'might', 'please', 'behind', 'every', 'seems', 'alone', 'from', 'via', 'its', 'become', 'hers', 'there', 'front', 'whose', 'before', 'against', 'whereafter', 'up', 'whither', 'two', 'five', 'eleven', 'why', 'below', 'out', 'whereas', 'serious', 'six', 'give', 'also', 'became', 'his', 'anyway', 'none', 'again', 'onto', 'else', 'have', 'few', 'thereby', 'whoever', 'yet', 'part', 'just', 'afterwards', 'mostly', 'see', 'hereby', 'not', 'can', 'once', 'therefore', 'together', 'whom', 'elsewhere', 'beforehand', 'themselves', 'with', 'seem', 'many', 'upon', 'former', 'are', 'who', 'becoming', 'formerly', 'between', 'cannot', 'him', 'that', 'first', 'more', 'although', 'whenever', 'under', 'whereby', 'my', 'whereupon', 'anyone', 'toward', 'by', 'four', 'since', 'amongst', 'move', 'each', 'forty', 'somehow', 'as', 'besides', 'used', 'if', 'name', 'when', 'ever', 'however', 'otherwise', 'hundred', 'moreover', 'your', 'sometimes', 'the', 'empty', 'another', 'where', 'her', 'enough', 'quite', 'throughout', 'anything', 'she', 'and', 'does', 'above', 'within', 'show', 'in', 'this', 'back', 'made', 'nobody', 'off', 're', 'meanwhile', 'than', 'neither', 'twenty', 'call', 'you', 'next', 'thereupon', 'therein', 'go', 'or', 'seemed', 'such', 'latterly', 'already', 'mine', 'yourself', 'an', 'amount', 'hereupon', 'namely', 'same', 'their', 'of', 'yours', 'could', 'be', 'done', 'whole', 'seeming', 'someone', 'these', 'towards', 'among', 'becomes', 'per', 'thru', 'beyond', 'beside', 'both', 'latter', 'ours', 'well', 
'make', 'nowhere', 'about', 'were', 'others', 'due', 'yourselves', 'unless', 'thereafter', 'even', 'too', 'most', 'everything', 'our', 'something', 'did', 'using', 'full', 'while', 'will', 'only', 'nor', 'often', 'side', 'being', 'least', 'over', 'some', 'along', 'was', 'very', 'on', 'into', 'nine', 'noone', 'several', 'i', 'one', 'third', 'herein', 'but', 'further', 'here', 'whether', 'because', 'either', 'hereafter', 'really', 'so', 'somewhere', 'we', 'nevertheless', 'last', 'had', 'they', 'thence', 'almost', 'ca', 'everywhere', 'itself', 'no', 'ourselves', 'may', 'wherein', 'take', 'around', 'never', 'them', 'to', 'until', 'do', 'what', 'say', 'twelve', 'nothing', 'during', 'sixty', 'sometime', 'us', 'fifty', 'much', 'for', 'other', 'hence', 'he', 'put'}
You can also check whether a given word is a stop word. To do so, use the is_stop attribute as shown below:
sp.vocab['wonder'].is_stop
Since "wonder" is not a spaCy stop word, you will see False in the output.
To add or remove stop words in spaCy, you can use the sp.Defaults.stop_words.add() and sp.Defaults.stop_words.remove() methods respectively.
sp.Defaults.stop_words.add('wonder')
Next, we need to set the is_stop flag for wonder to True as shown below:
sp.vocab['wonder'].is_stop = True
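Removing stop words from a tokenized text then amounts to a simple filter. A minimal sketch with a hypothetical five-word stop list (spaCy’s real default list has a few hundred entries):

```python
# Hypothetical tiny stop list for illustration; spaCy's default list is much larger.
stop_words = {'the', 'a', 'an', 'is', 'of'}
tokens = "the quick brown fox is an expert of escape".split()

# Keep only the tokens that are not stop words.
filtered = [t for t in tokens if t not in stop_words]
print(filtered)  # ['quick', 'brown', 'fox', 'expert', 'escape']
```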
Conclusion
Phrase and vocabulary matching is one of the most important natural language processing tasks. In this article, we continued our discussion of how to use Python to perform rule-based and phrase-based matching. In addition, we also looked at spaCy’s stop words.
In the next article, we will see part-of-speech tagging and named entity recognition in detail.
1. How the text conforms to the left and right margins (right, center, left, or justified). A. Line Spacing B. Alignment C. Page Setup D. Word Wrap
2. A style of text that makes a letter or word darker and thicker to stand out in a document. A. Italics B. Text C. Bold D. Font
3. The letters, numbers, or symbols that appear in a document. A. Character B. Text C. Font D. Indent
4. A collection of picture files that can be inserted into a document. A. Retrieve B. Clip Art C. Spell Check D. Highlight
5. To make a duplicate of information in the document, so that you can place it in another location. A. Cut B. Paste C. Copy D. Enter
6. The blinking line that represents the current location in the document. A. Character B. Text C. Cursor D. File
7. To remove a highlighted section of a document. A. Paste B. Cut C. Copy D. Retrieve
8. A key used to erase characters. A. Delete B. Cut C. Cursor D. Enter
9. To make changes in a document. A. Paste B. Cut C. Delete D. Edit
10. The key used to begin a new line in a document. A. Delete B. Enter C. Indent D. Save
11. A word processing document. A. Thesaurus B. Text C. File D. Copy
12. The shape and style of text. A. Font B. Bold C. Italics D. Character
13. The printed copy. A. Hard Copy B. Text C. Copy D. Word Processing
14. To choose a part of a document by clicking and dragging over it with the mouse. A. Highlight B. Copy C. Retrieve D. Save As
15. To set the first line of a paragraph in from the margin. A. Delete B. Indent C. Line Spacing D. Page Setup
16. A typestyle that is evenly slanted towards the right for emphasis and appearance. A. Text B. Cursor C. Bold D. Italics
17. The page setup that prints a document in a horizontal position. A. Portrait B. Landscape C. Page Setup D. Line Spacing
18. The span or vertical space between lines of text. A. Page Setup B. Indent C. Line Spacing D. Underline
19. The term used in reference to the way a document is formatted to print. A. Hard Copy B. Line Spacing C. Page Setup D. Landscape
20. To insert the last information that was cut or copied into a document. A. Copy B. Cut C. Paste D. Clip Art
21. The default page setup that prints the document vertically. A. Landscape B. Portrait C. Line Spacing D. Page Setup
22. To generate a hard copy of a document. A. Hard Copy B. Retrieve C. WYSIWYG D. Print
23. Open a saved document. A. Save As B. Save C. Retrieve D. Paste
24. To store information for later use. A. Save B. Retrieve C. Save As D. Print
25. To save a document for the first time or to save a version with a different name. A. Save B. Save As C. Retrieve D. Alignment
Pamela Meyer – Liespotting: Vocabulary matching. Find the vocabulary in the transcript and match it with the definition.
Terms: 1. A liespotter, 2. Go the extra mile, 3. "Gotcha!", 4. Nitpicky, 5. Twitch, 6. Flare your nostrils, 7. Tough love, 8. Con man, 9. The crux (of the issue), 10. A white lie, 11. The plot thickens, 12. Bluff, 13. Flattery / to flatter, 14. A breadwinner, 15. A telltale sign, 16. A dead giveaway, 17. Chatter, 18. Squirm
Definitions: a. A small involuntary movement, b. An obvious sign of guilt, c. A fussy/picky person, d. Make an extra special effort, e. The story becomes more interesting, f. One last drink before bed, g. To become something very quickly, h. To trick somebody, i. Unmarried, j. Something you say when you catch someone doing something bad, k. Someone who is good at identifying liars, l. To pay someone compliments because you want something, m. To move to try to escape, n. An unimportant dishonesty, o. A gesture saying "I don't know", p. To talk informally, q. A decisive point, r. A dishonest person who … (truncated in the source)