Language is at the core of human communication, but teaching computers to “understand” it is a monumental challenge. Unlike humans, computers need to convert words and concepts into numbers to process them effectively. Today, we’ll look at a concept called word embeddings, where words are transformed into mathematical representations that capture their meaning, relationships, and context. But first, let’s explore how humans have historically organized and connected words.
Early Attempts at Organizing Language
In 1805, Peter Mark Roget began working on what would become one of the most influential tools in the English language: Roget’s Thesaurus. Unlike a dictionary that defines words, Roget’s Thesaurus organized words by concepts and ideas. For instance, words like “happy,” “joyful,” and “elated” would be grouped together not just as synonyms, but as part of a broader conceptual category of positive emotions. This hierarchical organization of language was revolutionary and, in many ways, presaged modern computational approaches to understanding language.
Fast forward to the digital age, where Princeton University’s WordNet project took Roget’s concept of organizing words by meaning and transformed it into a comprehensive digital database. WordNet is included in the Natural Language Toolkit (NLTK), a popular Python library for natural language processing (NLP).
NLTK
To imbue our Python notebooks with a touch of intelligence, we can use WordNet through NLTK. NLTK provides a set of tools and datasets for processing, analyzing, and understanding human language data.
With NLTK, you can perform various NLP tasks such as tokenization, stemming, lemmatization, part-of-speech tagging, named entity recognition, sentiment analysis, and more. It also offers access to a vast collection of language resources, including corpora, lexical resources, grammars, and pre-trained models. In short, NLTK lets you explore, analyze, and process textual data effectively, making it an invaluable asset whenever you deal with written language.
Installation is a straightforward process - we simply need to execute the following command:
! pip3 install nltk
Requirement already satisfied: nltk in /home/enric/miniconda3/envs/dsfb/lib/python3.11/site-packages (3.9.1)
Requirement already satisfied: click in /home/enric/miniconda3/envs/dsfb/lib/python3.11/site-packages (from nltk) (8.1.7)
Requirement already satisfied: joblib in /home/enric/miniconda3/envs/dsfb/lib/python3.11/site-packages (from nltk) (1.4.2)
Requirement already satisfied: regex>=2021.8.3 in /home/enric/miniconda3/envs/dsfb/lib/python3.11/site-packages (from nltk) (2024.11.6)
Requirement already satisfied: tqdm in /home/enric/miniconda3/envs/dsfb/lib/python3.11/site-packages (from nltk) (4.67.0)
At the core of WordNet lies the concept of a synset (synonym set), similar to how Roget grouped words by meaning. These sets represent groups of synonymous words that express a particular concept. Let’s explore some examples:
You can access them using the following syntax: wordnet.synsets('hello').
wordnet.synsets('hello')
[Synset('hello.n.01')]
This word belongs to just one synonym set; other words have more meanings:
wordnet.synsets('hi')
[Synset('hello.n.01'), Synset('hawaii.n.01')]
Note how it came up with Hawaii because ‘HI’ is the abbreviation of ‘Hawaii’. It’s usually better to use longer terms to avoid such conflicts.
Every synset is composed of words, not in their original form, but in their lemma form. Lemmas are the base or canonical forms of words, representing their dictionary entries or headwords. For instance, the lemma of ‘beautifully’ is ‘beautiful’, or of ‘walking’ is ‘walk’. They serve as the common form for inflected or derived words, capturing the core meaning of a word and facilitating semantic analysis and language processing tasks. You can retrieve these using the .lemmas() syntax. To get a nice human-friendly version of the lemma, use .name() on the lemma.
Each synset also carries a human-readable definition, retrieved with .definition():
wordnet.synsets('bank')[1].definition()
'a financial institution that accepts deposits and channels the money into lending activities'
You try it
Using a combination of wordnet.synsets() and synset.definition(), figure out all definitions of the word bank:
...
Ellipsis
Solution
for synset in wordnet.synsets("bank"):
    print(synset.definition())

# alternative
[synset.definition() for synset in wordnet.synsets("bank")]
sloping land (especially the slope beside a body of water)
a financial institution that accepts deposits and channels the money into lending activities
a long ridge or pile
an arrangement of similar objects in a row or in tiers
a supply or stock held in reserve for future use (especially in emergencies)
the funds held by a gambling house or the dealer in some gambling games
a slope in the turn of a road or track; the outside is higher than the inside in order to reduce the effects of centrifugal force
a container (usually with a slot in the top) for keeping money at home
a building in which the business of banking transacted
a flight maneuver; aircraft tips laterally about its longitudinal axis (especially in turning)
tip laterally
enclose with a bank
do business with a bank or keep an account at a bank
act as the banker in a game or in gambling
be in the banking business
put into a bank account
cover with ashes so to control the rate of burning
have confidence or faith in
['sloping land (especially the slope beside a body of water)',
'a financial institution that accepts deposits and channels the money into lending activities',
'a long ridge or pile',
'an arrangement of similar objects in a row or in tiers',
'a supply or stock held in reserve for future use (especially in emergencies)',
'the funds held by a gambling house or the dealer in some gambling games',
'a slope in the turn of a road or track; the outside is higher than the inside in order to reduce the effects of centrifugal force',
'a container (usually with a slot in the top) for keeping money at home',
'a building in which the business of banking transacted',
'a flight maneuver; aircraft tips laterally about its longitudinal axis (especially in turning)',
'tip laterally',
'enclose with a bank',
'do business with a bank or keep an account at a bank',
'act as the banker in a game or in gambling',
'be in the banking business',
'put into a bank account',
'cover with ashes so to control the rate of burning',
'have confidence or faith in']
The ‘net’ in WordNet comes from all the relations that are encoded between synsets: hypernyms (more general concepts), hyponyms (more specific ones), antonyms, and so on.
Following such a relation can lead you, for example, to the toothed-whale synset, whose definition reads:
'any of several whales having simple conical teeth and feeding on fish etc.'
The Modern Era: Embeddings
While WordNet and Roget’s Thesaurus organize words through human-crafted hierarchies and relationships, modern NLP has moved toward learning these relationships automatically from data. Word embeddings represent the cutting edge of this approach, transforming words into dense numerical vectors that capture semantic relationships based on how words are actually used in text.
Word embeddings are at the heart of most modern applications in natural language processing, from creating basic word clouds to powering sophisticated models like ChatGPT and advanced translation systems. Unlike the discrete categories of Roget or the explicit relationships in WordNet, embeddings capture subtle semantic relationships in a continuous mathematical space.
The wiki-news-300d-50K model we’ll be using is a pre-trained word embedding in Word2Vec format, built from Wikipedia and news data. It provides 300-dimensional vector representations for 50,000 of the most common words, capturing their semantic meanings and relationships based on real-world usage. This compact and efficient model is ideal for exploring word similarities, analogies, and clustering tasks in natural language processing.
You can download it from Virtual Campus.
%%time
import gensim.models.keyedvectors as word2vec

# Load the pre-trained Word2Vec model using Gensim.
model = word2vec.KeyedVectors.load_word2vec_format('./resources/wiki-news-300d-50K.vec')
CPU times: user 5.05 s, sys: 41 ms, total: 5.09 s
Wall time: 5.12 s
The %%time cell magic displays how long the cell takes to execute. If all went well, you should now have an embedding model of 50K words. Each word is associated with a numerical representation: a vector of 300 numbers.
model.vectors.shape
(50000, 300)
Remember, everything is pretrained. We can get the vector for a particular word with model.get_vector(). Try it yourself:
Use the get_vector() command to get the embeddings of the word ‘man’ and ‘potato’. Then, use the np.dot() function and the np.linalg.norm() function to calculate the cosine similarity.
Solution
import numpy as np

v1 = model.get_vector('man')
v2 = model.get_vector('potato')
np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
0.40033147
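The cosine similarity computed above can be packaged as a small helper. This is a minimal NumPy sketch that works independently of the embedding model, checked here on toy 2-dimensional vectors:

```python
import numpy as np

def cosine_similarity(v1, v2):
    """Cosine of the angle between two vectors: 1.0 = same direction, 0.0 = orthogonal."""
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))

# Toy checks: identical vectors score 1.0, perpendicular vectors score 0.0
print(cosine_similarity(np.array([1.0, 0.0]), np.array([1.0, 0.0])))  # → 1.0
print(cosine_similarity(np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # → 0.0
```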
Sometimes the results are not what we would expect. Let’s try to find the fruits most similar to ‘orange’:
To figure out how the model is “thinking”, we need to inspect similar items. Here’s what the following code does:
1. Find the 100 most similar items
2. Ask the model for each word’s embedding (a NumPy array) and compute the pairwise distances
3. Position similar words together (t-SNE)
4. Color similar words the same (clustering)
Steps 2-4 individually are advanced ML concepts which would require a class to explain each, so we’ll take it for granted that they work here.
INPUT_WORD = 'orange'
NUMBER_OF_CLUSTERS = 5

# 0. We need a couple more packages for all the ML and plotting functions
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
from sklearn.metrics.pairwise import cosine_distances
from sklearn.cluster import KMeans

# 1. Find the similar words
top_similar_words = [w for w, s in model.similar_by_vector(INPUT_WORD, topn=100)]
top_embeddings = [model.get_vector(w) for w in top_similar_words]

# 2. Calculate the distance from one word to another
distances = cosine_distances(top_embeddings)

# 3. Find a positioning on a 2D screen based on the distances
method = TSNE()
embedding_in_2D = method.fit_transform(distances)

# 4. Cluster / group words together
kmeans = KMeans(n_clusters=NUMBER_OF_CLUSTERS, random_state=42)
clusters = kmeans.fit_predict(embedding_in_2D)

# Plot
plt.figure(figsize=(12, 6))
ax = plt.gca()
for i in range(embedding_in_2D.shape[0]):
    ax.annotate(top_similar_words[i], xy=embedding_in_2D[i, :], ha='center',
                alpha=0.8, color=plt.cm.tab10(clusters[i] % 10), fontsize=12)
plt.xlim(embedding_in_2D.min(axis=0)[0], embedding_in_2D.max(axis=0)[0])
plt.ylim(embedding_in_2D.min(axis=0)[1], embedding_in_2D.max(axis=0)[1])
plt.axis('off');
Try it out with other input words …
Word math
One of the most astonishing results of word2vec models is that you can do word math. Let’s try to get the computer to guess what the capital of France is. The question could go something like this:
“Hello DSfBot, Madrid is to Spain as X is to France, what is X?”
At the time of its discovery in the original word2vec paper, this was completely unexpected and left the world in awe. Do note that this is not infallible, especially with the low-resolution embeddings that we’re dealing with here.
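To make the arithmetic concrete, here is a toy sketch with made-up 2-dimensional vectors (hypothetical values, not the real 300-dimensional embeddings): country/capital pairs share a similar offset, so Madrid − Spain + France should land near Paris.

```python
import numpy as np

# Hypothetical toy embeddings: each capital sits at a similar offset from its country
vectors = {
    'Spain':  np.array([1.0, 1.0]),
    'Madrid': np.array([1.0, 2.0]),
    'France': np.array([3.0, 1.0]),
    'Paris':  np.array([3.0, 2.0]),
    'banana': np.array([-2.0, -1.0]),
}

# "Madrid is to Spain as X is to France" as vector math
query = vectors['Madrid'] - vectors['Spain'] + vectors['France']

def cosine(v1, v2):
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))

# Nearest word to the query vector, excluding the input words themselves
candidates = {w: cosine(query, v) for w, v in vectors.items()
              if w not in ('Madrid', 'Spain', 'France')}
answer = max(candidates, key=candidates.get)
print(answer)  # → Paris
```

With the loaded model, the equivalent query would be model.most_similar(positive=['Madrid', 'France'], negative=['Spain']).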
You try it
You can do basic vector math to help the model on its way with our orange question. Try this procedure instead:
1. Find the vector embedding for ‘orange’ using model.get_vector()
2. Find the vector embedding for ‘fruit’ using model.get_vector()
3. Add ‘fruit’ to ‘orange’, resulting in a [300,]-shaped array which represents their sum
4. Now query the model using model.similar_by_vector()
Note: you could instead also have subtracted color from orange :)
We could also visualize what an “orange” plus “fruit” looks like:
INPUT_VECTOR = model.get_vector('orange') + model.get_vector('fruit')
NUMBER_OF_CLUSTERS = 5

# 0. We need a couple more packages for all the ML and plotting functions
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
from sklearn.metrics.pairwise import cosine_distances
from sklearn.cluster import KMeans

# 1. Find the similar words
top_similar_words = [w for w, s in model.similar_by_vector(INPUT_VECTOR, topn=50)]
top_embeddings = [model.get_vector(w) for w in top_similar_words]

# 2. Calculate the distance from one word to another
distances = cosine_distances(top_embeddings)

# 3. Find a positioning on a 2D screen based on the distances
method = TSNE()
embedding_in_2D = method.fit_transform(distances)

# 4. Cluster / group words together
kmeans = KMeans(n_clusters=NUMBER_OF_CLUSTERS, random_state=42)
clusters = kmeans.fit_predict(embedding_in_2D)

# Plot
plt.figure(figsize=(12, 6))
ax = plt.gca()
for i in range(embedding_in_2D.shape[0]):
    ax.annotate(top_similar_words[i], xy=embedding_in_2D[i, :], ha='center',
                alpha=0.8, color=plt.cm.tab10(clusters[i] % 10), fontsize=12)
plt.xlim(embedding_in_2D.min(axis=0)[0], embedding_in_2D.max(axis=0)[0])
plt.ylim(embedding_in_2D.min(axis=0)[1], embedding_in_2D.max(axis=0)[1])
plt.axis('off');