Deep Learning and Context Based Intelligent Search

Enterprises have a treasure trove of content in the form of Word documents, PDFs, emails, text files, and more. Finding valuable information in this unstructured data has always been difficult. Traditional enterprise search engines build indexes of all the words and phrases in the documents and use them to match queries and return results. Most search engines are rules-based, matching the search query against the text with techniques such as regular expressions.

The examples below assume a search over the claim notes of an auto insurance organization. For example, if the user searches for the word “whiplash”, the results will contain text that partially matches the search term, such as:

  • “Insured has Whiplash Head and Neck”
  • “Claimant had whiplash injury”

These searches don’t understand the context of what the user is looking for. Users typically start with a keyword or phrase, but what they actually want is information related to it.

What if the search itself were intelligent enough to find these results without building an index, and to surface information related to the keywords or phrases used in the search?

Business users need an effective tool for information retrieval. For example, claims adjusters may want to know how past whiplash claims were settled so they can resolve the policyholder’s losses in a timely fashion. Such a search tool increases customer satisfaction and improves the claims adjuster’s performance.

Understanding Text

This goes back to the age-old problem of getting computers to understand text when what they actually do is crunch numbers. A CPU does floating point calculations; if it has to understand text, it has to be explicitly instructed how to process that text. For search to be intelligent, text has to be represented in a form that computers understand: numbers. There are several ways to convert text to numbers, the most common being the “Bag of Words” model.

For example, take the sentence: “The insured vehicle ‘ambulance’ is a 2007 Chevrolet model G3500 Type III ambulance.”

Words are extracted from the above sentence – “insured”, “vehicle”, “2007”, “Chevrolet”, “model”, “G3500”, “type”, “ambulance” – and each is assigned a value in the table below based on its frequency in the text:

  Word        Frequency
  insured     1
  vehicle     1
  2007        1
  Chevrolet   1
  model       1
  G3500       1
  type        1
  ambulance   2
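
A minimal sketch of this counting step in Python (the tokenization and stop-word list here are simplifying assumptions, not from the original post):

    from collections import Counter
    import re

    sentence = ('The insured vehicle "ambulance" is a 2007 Chevrolet '
                'model G3500 Type III ambulance.')

    # Lowercase the text, split on non-alphanumeric characters,
    # then drop a few stop words before counting frequencies.
    tokens = re.findall(r"[a-z0-9]+", sentence.lower())
    stop_words = {"the", "is", "a", "iii"}
    bag_of_words = Counter(t for t in tokens if t not in stop_words)

    print(bag_of_words)
    # Counter({'ambulance': 2, 'insured': 1, 'vehicle': 1, '2007': 1,
    #          'chevrolet': 1, 'model': 1, 'g3500': 1, 'type': 1})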

The disadvantages of this approach are:

  • It says nothing about the order of the words in the original text.
  • It says nothing about the context of the text.
  • It says nothing about the meaning of the words.

For example, the computer may treat “type”, “ambulance”, and “vehicle” as interchangeable tokens, even though “ambulance” and “vehicle” are close in meaning while “type” is something different. To address this problem we need a model that understands the context of the text.

Deep Learning to the Rescue

Deep Learning is a branch of machine learning: neural networks with multiple hidden layers. Instead of being programmed with explicit rules, a deep learning model learns its own representation of the data.

Usually, in traditional machine learning algorithms, we try to predict a dependent variable “y” from an independent variable “x”. An autoencoder is a kind of neural network that uses an unsupervised learning method to reconstruct its own input: it predicts “x” from “x”. In doing so, the autoencoder learns the patterns in the data and creates a compact representation of it.
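
As a minimal sketch of this idea (a one-hidden-layer autoencoder in PyTorch; the layer sizes and the random training batch are illustrative assumptions):

    import torch
    import torch.nn as nn

    # Compress the input to a small hidden code, then try to
    # reconstruct the original input from that code.
    class AutoEncoder(nn.Module):
        def __init__(self, n_inputs=100, n_hidden=10):
            super().__init__()
            self.encoder = nn.Linear(n_inputs, n_hidden)
            self.decoder = nn.Linear(n_hidden, n_inputs)

        def forward(self, x):
            code = torch.relu(self.encoder(x))  # learned representation
            return self.decoder(code)           # reconstruction of x

    model = AutoEncoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    x = torch.rand(32, 100)            # a toy batch of input vectors
    for _ in range(100):
        optimizer.zero_grad()
        loss = loss_fn(model(x), x)    # the target is the input itself
        loss.backward()
        optimizer.step()

Once training is done, the decoder can be discarded and the hidden code used as the learned representation.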

Autoencoders can be used to create word embeddings (Word2Vec) or paragraph embeddings (Paragraph2Vec) that convert text into vector format.

How Word2Vec Works

  • Given a body of text, it looks at each word and the words around it.
  • In this way, it trains itself on the text, learning the order of the words and the structure of the sentences.
  • The training uses an autoencoder-style network with one hidden layer. Even though it is called Deep Learning, the network is actually quite shallow.

At the end of training, each word is represented by an N-dimensional vector, where N is typically in the hundreds.
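
A minimal sketch of training such vectors with the gensim library (the tiny corpus and the parameters are illustrative assumptions; the original post does not name an implementation):

    from gensim.models import Word2Vec

    # Each claim note is pre-tokenized into a list of lowercase words.
    claim_notes = [
        ["insured", "has", "whiplash", "head", "and", "neck"],
        ["claimant", "had", "whiplash", "injury"],
        ["claimant", "reports", "neck", "soreness", "and", "headache"],
    ]

    # vector_size is the "N" above: the dimensionality of each word vector.
    model = Word2Vec(claim_notes, vector_size=100, window=5, min_count=1)

    vector = model.wv["whiplash"]  # a 100-dimensional vector for the word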

Architecture for Search

The proposed architecture for using deep learning for search has four components (a code sketch follows the list):

  1. Parser: converts Word, PDF, and plain-text files into *.txt format for the vectorization process to read.
  2. Vectorization Engine: converts the *.txt files into vectors of “n” dimensions.
  3. Document Vector Database: persists the vectors for later use.
  4. Linear Algebra Based Search Engine: performs searches on the vectors using linear algebra operations.
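
A minimal sketch of steps 2 through 4 using gensim’s Doc2Vec (an implementation of Paragraph2Vec); step 1’s file parsing is stubbed out, and the corpus, parameters, and storage are illustrative assumptions:

    from gensim.models.doc2vec import Doc2Vec, TaggedDocument

    # Step 1 (stubbed): assume the parser already produced plain-text notes.
    notes = [
        "insured has whiplash head and neck",
        "claimant had whiplash injury",
        "vehicle rear-ended at low speed, driver reports neck soreness",
    ]

    # Step 2: vectorization engine - train Doc2Vec on the tokenized notes.
    corpus = [TaggedDocument(n.lower().split(), [i]) for i, n in enumerate(notes)]
    model = Doc2Vec(corpus, vector_size=50, min_count=1, epochs=40)

    # Step 3: document vector "database" (a dict stands in for real storage).
    doc_vectors = {i: model.dv[i] for i in range(len(notes))}

    # Step 4: search engine - embed the query, rank notes by cosine similarity.
    query_vector = model.infer_vector(["whiplash"])
    print(model.dv.most_similar([query_vector], topn=3))  # [(doc_id, score), ...]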

Deep Learning for Search

Cosine similarity between two vectors can be used to search for content in the documents that were converted to vectors.
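
Cosine similarity is the cosine of the angle between two vectors: their dot product divided by the product of their lengths. A minimal numpy version:

    import numpy as np

    def cosine_similarity(a, b):
        # cos(theta) = (a . b) / (|a| * |b|), ranging from -1 to 1;
        # values near 1 mean the vectors point in nearly the same direction.
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))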

For example, consider a search for the keyword “whiplash” in the claim notes example above, and the context the deep learning algorithm settles on for that term.

Based on how the words occurred in the original text, the autoencoder was able to construct a good representation of each word. It learned to associate “whiplash” with terms such as “neck/back”, “headache”, “concussion”, “soreness”, “neck”, and “spasms”.
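
With a trained model, this associative lookup is a one-line query. A sketch against the gensim model from earlier (the neighbors and scores shown are illustrative; they depend entirely on the training corpus):

    # Words closest to "whiplash" in the learned vector space.
    print(model.wv.most_similar("whiplash", topn=5))
    # illustrative output: [('neck', 0.91), ('soreness', 0.88), ...]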

In the returned results, rows are ranked by cosine similarity. The top four rows did not contain the word “whiplash” at all, yet reading through them shows they describe whiplash injuries. No deterministic rules were configured to equate whiplash with neck and back injuries.

Word embeddings learned from autoencoders can be used for intelligent search across a treasure trove of Word documents, PDFs, emails, and more. This enables accurate information retrieval, saving users a lot of time when looking for information.

Learn more about our solution Fluid Analytics, which uses machine learning for predictive analytics.
