Before proceeding with the Building Better Systems series, I thought I should write a quick post over the weekend about the underlying Natural Language Processing (NLP) and text engineering technologies proposed in the solution. I received a lot of questions about this when I posted “How to build better systems – the specification” and mentioned this key element, along with a few assumptions that are not quite correct about just what this technology can do. To recap, these technologies can help us measure document qualities such as:
- Structural Completeness/Alignment
- Text Ambiguity and Context
- Section Quantity/Size
- Volatility
- Lexicon Discovery
- Plain language: word complexity and the density of complex words (a minimal sketch of this measure follows this list)
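To make the last of these concrete before we go on, here is a minimal sketch in Python of a complex-word density measure. The three-or-more-syllable threshold (borrowed from readability indexes such as Gunning Fog) and the naive vowel-group syllable counter are simplifying assumptions of mine; production tools use pronunciation dictionaries and trained models.

```python
import re

def syllable_count(word):
    # Naive heuristic: count runs of consecutive vowels.
    # Real plain-language tools use pronunciation dictionaries instead.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def complex_word_density(text, threshold=3):
    # A word is "complex" here if it has `threshold` or more syllables,
    # the convention used by readability indexes such as Gunning Fog.
    words = re.findall(r"[A-Za-z]+", text)
    if not words:
        return 0.0
    complex_words = [w for w in words if syllable_count(w) >= threshold]
    return len(complex_words) / len(words)

text = ("Time Zones are used to manage the various campaigns that are "
        "executed to ensure customers are called within the allowed "
        "campaigning window which is 8 AM through 8 PM.")
print(f"Complex-word density: {complex_word_density(text):.1%}")
```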
NLP and text engineering can do this for us and much more. Rather than elaborate any further, I think it is better to just see it in action with a few good examples. Fortunately, the University of Illinois Cognitive Computation Group has already done this for all of us and created a site offering a little taste of what can be done. Their demonstration page presents several clear examples of the following key concepts:
Natural Language Analysis
- Coreference Resolution
- Part of Speech Tagging
- Semantic Role Labeling
- Shallow Parsing (tagging and chunking are sketched after this list)
- Text Analysis
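These concepts are easier to grasp when you run them yourself. The CCG demos are not packaged as a library, but open-source toolkits cover the same ground; here is a short stand-in sketch of part-of-speech tagging and shallow (chunk) parsing using Python's NLTK:

```python
import nltk

# One-time resource downloads for the tokenizer and tagger
# (resource names can differ slightly across NLTK versions).
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

sentence = "Time Zones are used to manage the various campaigns."
tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
print(tagged)  # e.g. [('Time', 'NNP'), ('Zones', 'NNP'), ('are', 'VBP'), ...]

# Shallow parsing: group determiner/adjective/noun runs into NP chunks.
chunker = nltk.RegexpParser("NP: {<DT>?<JJ>*<NN.*>+}")
print(chunker.parse(tagged))
```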
Entities and Information Extraction
- Named Entity Recognition (sketched after this list)
- Named Entity Recognizer (often using extended entity type sets)
- Number Quantization
- Temporal Extraction and Comparison
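Again as an open-source stand-in for the demos listed above (not the CCG tools themselves), here is what off-the-shelf named entity recognition looks like with spaCy. The sentence is an invented example; note how the date and money spans touch on the temporal extraction and number quantization ideas as well.

```python
import spacy

# Assumes the small English model has been installed first:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("Jane Smith joined Acme Corporation in Boston on 4 May 2021 "
          "and was paid $95,000.")

for ent in doc.ents:
    # Labels cover persons, organizations, places, dates, money, etc.
    print(f"{ent.text:20} {ent.label_}")
```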
Similarity
- Context Sensitive Verb Paraphrasing
- LLM (Lexical Level Matching)
- Named Entity Similarity
- Relation Identification
- Word Similarity (a short sketch follows this list)
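Word similarity, at its simplest, can be approximated with WordNet's taxonomy. This sketch uses NLTK's WordNet interface and, as a simplifying assumption, compares only the first (most common) sense of each verb:

```python
import nltk
nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

# path_similarity returns a score in (0, 1]; higher means the two
# senses sit closer together in the WordNet taxonomy.
manage = wn.synsets("manage", pos=wn.VERB)[0]
execute = wn.synsets("execute", pos=wn.VERB)[0]
call = wn.synsets("call", pos=wn.VERB)[0]

print("manage ~ execute:", manage.path_similarity(execute))
print("manage ~ call:   ", manage.path_similarity(call))
```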
For example, say we wanted to parse and process this simple sentence:
“Time Zones are used to manage the various campaigns that are executed to ensure customers are called within the allowed campaigning window which is 8 AM through 8 PM.”
Using the text analysis demonstration at the site, we can copy and paste this simple phrase into the dialog box and submit it for processing. The results are returned in the following diagram.
The parser has identified things (entities), guarantees, purpose, verbs (use, manage, call, and execute), and time within this simple sentence. I think you can now see how powerful this technology can be when used to collect, group, and evaluate text. And while this is impressive enough on its own, think of the possibilities for using it to process page after page of business and system requirements.
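You can approximate that demo result with a few lines of spaCy (again, an open-source stand-in, not the CCG parser) to pull the verbs and times out of the same sentence:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the model is installed
doc = nlp("Time Zones are used to manage the various campaigns that are "
          "executed to ensure customers are called within the allowed "
          "campaigning window which is 8 AM through 8 PM.")

verbs = sorted({tok.lemma_ for tok in doc if tok.pos_ == "VERB"})
times = [ent.text for ent in doc.ents if ent.label_ == "TIME"]
print("Verbs:", verbs)  # expect use, manage, execute, ensure, call
print("Times:", times)  # expect something like ['8 AM', '8 PM']
```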
There is also a terrific demonstration illustrating what the Cognitive Computation Group calls the Wikification system. Use their examples or plug in your own text for processing: you insert plain text, which is parsed and processed to identify entities that match Wikipedia articles. The result set includes live links to the corresponding Wikipedia pages and the categories associated with each entity. Here is an example from an architecture specification describing (at a high level) Grid Service Agent behavior.
“The Grid Service Agent (GSA) acts as a process manager that can spawn and manage Service Grid processes (Operating System level processes) such as Grid Service Manager and Grid Service Container. Usually, a single GSA is run per machine. The GSA can spawn Grid Service Managers, Grid Service Containers, and other processes. Once a process is spawned, the GSA assigns a unique id for it and manages its life cycle. The GSA will restart the process if it exits abnormally (exit code different than 0), or if a specific console output has been encountered (for example, an Out Of Memory Error).”
The result returned is illustrated in the following diagram. Note that the parser has categorized the entities and selected context-specific links to public Wikipedia articles (and related links) to elaborate on the objects identified, where such articles exist.
The more ambitious among us (with an internal wiki) can see how powerful this can be. Armed with the source code, this becomes a truly wonderful way to link dynamic content (think system specifications or requirements) in context back to an entire knowledge base.
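As a taste of what such an application might look like, here is a toy wikifier in Python. It simply queries the public MediaWiki opensearch endpoint for candidate articles; a real wikification system would also disambiguate each entity against its surrounding context.

```python
import requests

def wikify(term):
    """Return candidate (title, url) Wikipedia links for an entity."""
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={"action": "opensearch", "search": term,
                "limit": 3, "format": "json"},
        timeout=10,
    )
    # The opensearch endpoint returns [query, titles, descriptions, urls].
    _query, titles, _descriptions, urls = resp.json()
    return list(zip(titles, urls))

# Entities a parser might extract from the GSA passage above.
for entity in ["Grid Service Manager", "Operating system", "Out of memory"]:
    print(entity, "->", wikify(entity))
```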
If you really want to get your hands dirty and dive right in, there are two widely known frameworks for natural language processing.
- GATE (General Architecture for Text Engineering)
- UIMA (Unstructured Information Management Architecture)
GATE is a Java-based suite of tools originally developed at the University of Sheffield and now used worldwide by a wide community of scientists and companies for all sorts of natural language processing tasks. It is readily available and works well with Protégé (the semantic editor) for those of you into ontology development.
UIMA was originally developed by IBM and is now maintained by the Apache Software Foundation.
If you need to take in plain text and identify entities (such as persons, places, and organizations) or relations (such as works-for or located-at), one of these frameworks may be the way to do it. Access to both is open, and there is a large community of developers supporting both initiatives.
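As a final sketch before we move on: the works-for relation mentioned above can be prototyped with a simple token pattern in spaCy. This is a stand-in for what GATE or UIMA rules would express, not either framework's actual API, and the rule itself is deliberately toy-grade.

```python
import spacy
from spacy.matcher import Matcher

nlp = spacy.load("en_core_web_sm")  # assumes the model is installed
matcher = Matcher(nlp.vocab)

# A toy "works-for" rule: a PERSON token, the lemma "work",
# the word "for", then an ORG token.
matcher.add("WORKS_FOR", [[
    {"ENT_TYPE": "PERSON"},
    {"LEMMA": "work"},
    {"LOWER": "for"},
    {"ENT_TYPE": "ORG"},
]])

doc = nlp("Jane Smith works for Acme Corporation in Boston.")
for _match_id, start, end in matcher(doc):
    print("works-for relation:", doc[start:end].text)
```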
For myself, I’m going to use a wonderful commercial product, available in the cloud (and on-premise if needed), called The Visible Thread to illustrate many of the key concepts. Designed for just this need, it also comes with the