Natural Language Processing (NLP)
The procedures described in this chapter act as wrappers around cloud-based Natural Language APIs. These procedures extract entities, key phrases, categories, and sentiment from text stored as node properties.
This section includes:

- Why NLP?
- apoc.nlp.* procedures
- Other approaches
Neo4j offers powerful querying capabilities for structured data, but much of the world's data exists in text documents. NLP techniques can help extract the latent structure in these documents. This structure could be as simple as nodes representing tokens in a sentence, or as complex as nodes representing entities extracted by a named entity recognition algorithm.
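The following Cypher sketch shows what such a structure might look like. The Document, Token, and Entity labels and the CONTAINS, NEXT, and MENTIONS relationship types are illustrative choices, not a schema prescribed by the library.

    // Illustrative only: all labels and relationship types below are arbitrary.
    // Simple structure: one node per token, kept in order with NEXT relationships.
    CREATE (d:Document {id: 1, text: "Neo4j is a graph database"})
    CREATE (t1:Token {value: "Neo4j"}), (t2:Token {value: "is"})
    CREATE (d)-[:CONTAINS]->(t1), (d)-[:CONTAINS]->(t2), (t1)-[:NEXT]->(t2)
    // Richer structure: one node per entity found by named entity recognition.
    CREATE (e:Entity {value: "Neo4j", type: "ORGANIZATION"})
    CREATE (d)-[:MENTIONS]->(e)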
Why NLP?
Extracting structure from text documents and storing it in a graph enables several different use cases, including:
- Content-based recommendations (see the example query after this list)
- Natural Language search
- Document similarity
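As a sketch of the first use case, a content-based recommendation can be expressed as a query over the extracted structure. The Document and Entity labels and the MENTIONS relationship type are the same illustrative assumptions used above.

    // Assumed schema: (:Document)-[:MENTIONS]->(:Entity).
    // Recommend the documents that share the most entities with a given document.
    MATCH (d:Document {id: 1})-[:MENTIONS]->(e:Entity)<-[:MENTIONS]-(other:Document)
    WHERE other <> d
    RETURN other.id AS recommendation, count(e) AS sharedEntities
    ORDER BY sharedEntities DESC
    LIMIT 5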
apoc.nlp.* procedures
The apoc.nlp.* procedures call the Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure Natural Language APIs and create a graph based on the results returned.
These procedures support entity extraction, key phrase extraction, sentiment analysis, and document classification.
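As a rough sketch, a call to the GCP entity extraction procedure could look like the following. The Article label, the body property, the ENTITY relationship type, and the $apiKey parameter are placeholders; the supported configuration keys for each provider are documented in the corresponding sections.

    // Sketch only: extract entities from the 'body' property of an Article node
    // via the GCP Natural Language API and write them back into the graph.
    MATCH (a:Article {id: 1})
    CALL apoc.nlp.gcp.entities.graph(a, {
      key: $apiKey,                    // GCP API key (placeholder parameter)
      nodeProperty: "body",            // node property containing the text
      writeRelationshipType: "ENTITY", // relationship created to each entity node
      write: true                      // persist the results rather than streaming them
    })
    YIELD graph AS g
    RETURN g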
Other approaches
For completeness, other possible NLP approaches are described below.
Hume is a graph-powered Insights Engine made by GraphAware, a Neo4j partner. It can be used to build a knowledge graph that will help surface previously buried and undetected relevance in your organization.
Other approaches to NLP analysis, using Python libraries and Cypher, are described in the following articles: