Natural Language Processing (NLP)
Neo4j offers powerful querying capabilities for structured data, but a lot of the world’s data exists in text documents. NLP techniques can help to extract the latent structure in these documents. This structure could be as simple as nodes representing tokens in a sentence or as complicated as nodes representing entities extracted using a named entity recognition algorithm.
Extracting structure from text documents and storing it in a graph enables several different use cases, including:
Content-based recommendations
Natural language search
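The simplest structure mentioned above, a node per token, can be sketched in Cypher. The labels, relationship types, and the sample sentence here are illustrative, not a fixed schema:

```cypher
// Create a Document node and one Token node per word, in order.
// `Document`, `Token`, `CONTAINS`, and `NEXT` are hypothetical names.
CREATE (d:Document {id: 1})
WITH d, split("graphs connect data", " ") AS tokens
UNWIND range(0, size(tokens) - 1) AS i
CREATE (t:Token {text: tokens[i], position: i})
CREATE (d)-[:CONTAINS]->(t);

// Link consecutive tokens so the sentence order is queryable.
MATCH (t1:Token), (t2:Token)
WHERE t2.position = t1.position + 1
CREATE (t1)-[:NEXT]->(t2);
```

A named-entity pipeline would produce a richer version of the same idea: `Entity` nodes instead of raw tokens, linked back to the documents that mention them.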
There are several approaches for doing NLP analysis in Neo4j. We’ll learn about them in this section.
APOC is Neo4j’s standard library. It contains procedures that call the Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure Natural Language APIs, and create a graph based on the results returned.
These procedures support entity extraction, key phrase extraction, sentiment analysis, and document classification.
This library is a good choice for your first graph-based NLP project.
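As a sketch of how these procedures are used, the query below sends the text of each `Article` node to the GCP Natural Language API for entity extraction and writes the results back as a graph. The `Article` label, the `text` property, the `$apiKey` parameter, and the relationship type are assumptions for illustration; check the APOC NLP documentation for the full configuration options:

```cypher
// Hypothetical example: entity extraction with apoc.nlp.gcp.entities.graph.
// Requires the APOC NLP dependencies and a valid GCP API key.
MATCH (a:Article)
CALL apoc.nlp.gcp.entities.graph(a, {
  key: $apiKey,                      // GCP API key (assumed parameter)
  nodeProperty: "text",              // property holding the document text
  writeRelationshipType: "ENTITY",   // relationship from Article to Entity
  write: true                        // persist the extracted graph
})
YIELD graph
RETURN graph;
```

The AWS and Azure variants, and the key-phrase, sentiment, and classification procedures, follow the same calling pattern with provider-specific configuration.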
Hume is a graph-powered insights engine made by GraphAware, a Neo4j partner. It can be used to build a knowledge graph that surfaces relevant connections in your organization’s data that would otherwise remain buried.
Hume is a commercial product. You’ll need to get in contact with GraphAware to learn more and get a demo.
Other approaches to NLP analysis, using Python libraries and Cypher, are described in the following articles: