Bag of words (BoW) model in NLP

Last Updated : 08 Mar, 2019

In this article, we discuss a Natural Language Processing text-modeling technique known as the Bag of Words (BoW) model. Algorithms in NLP work on numbers, so we cannot feed raw text into them directly. The Bag of Words model therefore preprocesses the text by converting it into a "bag" of its words, keeping a count of the occurrences of the most frequently used words.

This model can be visualized as a table that pairs each word with its occurrence count.
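For instance, here is a minimal sketch of such a table built with Python's built-in collections.Counter; the two toy sentences are made up purely for illustration:

from collections import Counter

# two toy sentences, already lower-cased and free of punctuation
sentences = ["the cat sat on the mat", "the dog sat"]

# count how often each word occurs in each sentence
counts = [Counter(sentence.split()) for sentence in sentences]

# the combined vocabulary of all sentences
vocab = sorted(set(word for c in counts for word in c))

# one row per word: the word and its count in each sentence
for word in vocab:
    print(word, [c[word] for c in counts])

This prints, for example, "sat [1, 1]" and "the [2, 1]": each row of the table is a word together with its count in every sentence.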

Applying the Bag of Words model:

Let us take this sample paragraph for our task:

Beans. I was trying to explain to somebody as we were flying in, that’s corn. That’s beans. And they were very impressed at my agricultural knowledge. Please give it up for Amaury once again for that outstanding introduction. I have a bunch of good friends here today, including somebody who I served with, who is one of the finest senators in the country, and we’re lucky to have him, your Senator, Dick Durbin is here. I also noticed, by the way, former Governor Edgar here, who I haven’t seen in a long time, and somehow he has not aged and I have. And it’s great to see you, Governor. I want to thank President Killeen and everybody at the U of I System for making it possible for me to be here today. And I am deeply honored at the Paul Douglas Award that is being given to me. He is somebody who set the path for so much outstanding public service here in Illinois. Now, I want to start by addressing the elephant in the room. I know people are still wondering why I didn’t speak at the commencement.

Step #1: We will first preprocess the data in order to:

  • Convert the text to lower case.
  • Remove all non-word characters.
  • Remove all punctuation.




# Python3 code for preprocessing text
import nltk
import re
import numpy as np

# nltk.download('punkt')  # uncomment on the first run to fetch the tokenizer models

# place the sample paragraph in the variable below:
# text = """ # place text here """

# split the paragraph into sentences
dataset = nltk.sent_tokenize(text)
for i in range(len(dataset)):
    # convert the sentence to lower case
    dataset[i] = dataset[i].lower()
    # replace every non-word character (punctuation etc.) with a space
    dataset[i] = re.sub(r'\W', ' ', dataset[i])
    # collapse runs of whitespace into a single space
    dataset[i] = re.sub(r'\s+', ' ', dataset[i])


Output:

[Image: the preprocessed text]


You can further preprocess the text to suit your needs.
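For example, a common extra step is to drop English stopwords ('is', 'a', 'the', ...), which carry little meaning on their own. A minimal sketch using NLTK's stopword list (this step is optional and not part of the original pipeline):

from nltk.corpus import stopwords

# nltk.download('stopwords')  # uncomment on the first run to fetch the list
stop_words = set(stopwords.words('english'))

# drop stopwords from every preprocessed sentence
dataset = [' '.join(word for word in sentence.split()
                    if word not in stop_words)
           for sentence in dataset]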

Step #2: Obtaining the most frequent words in our text.

We will apply the following steps to generate our model.

  • We declare a dictionary to hold our bag of words.
  • Next, we tokenize each sentence into words.
  • Now, for each word in the sentence, we check whether it already exists in our dictionary.
  • If it does, we increment its count by 1. If it does not, we add it to the dictionary and set its count to 1.




# Creating the Bag of Words model
word2count = {}
for data in dataset:
    # tokenize each sentence into words
    words = nltk.word_tokenize(data)
    for word in words:
        # add unseen words with a count of 1, otherwise increment
        if word not in word2count:
            word2count[word] = 1
        else:
            word2count[word] += 1

    
    

Output:

[Image: the Bag of Words dictionary]

In our model, we have a total of 118 words. However, when processing large texts, the number of distinct words can reach millions, and we do not need to use them all. Hence, we select a particular number of the most frequently used words. To implement this we use:




import heapq

# keep the 100 words with the highest counts
freq_words = heapq.nlargest(100, word2count, key=word2count.get)

    
    

Here, 100 denotes the number of words we want to keep; if our text is large, we feed in a larger number.

[Image: the 100 most frequent words]

Step #3: Building the Bag of Words model

In this step, we construct a vector for each sentence that tells us whether each frequent word appears in it. If a frequent word occurs in the sentence, we set its entry to 1; otherwise we set it to 0. This can be implemented with the help of the following code:




# build one vector per sentence over the frequent-word vocabulary
X = []
for data in dataset:
    # tokenize the sentence once
    words = nltk.word_tokenize(data)
    vector = []
    for word in freq_words:
        # 1 if the frequent word occurs in this sentence, else 0
        if word in words:
            vector.append(1)
        else:
            vector.append(0)
    X.append(vector)
X = np.asarray(X)

    
    

Output:

[Image: the resulting BoW model]
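Note that the vectors built above are binary: each entry only records whether a frequent word occurs in the sentence. If you instead want each entry to hold the word's occurrence count, as described at the start of the article, a small variant of the same loop does it. A minimal sketch, reusing dataset and freq_words from the previous steps:

# count-based variant: store how often each frequent word
# occurs in the sentence instead of a 0/1 flag
X_counts = []
for data in dataset:
    words = nltk.word_tokenize(data)
    vector = [words.count(word) for word in freq_words]
    X_counts.append(vector)
X_counts = np.asarray(X_counts)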


