Going Meta: Two Years of Knowledge Graphs

Senior Developer Marketing Manager

We take a look back at past episodes of Going Meta, specifically those that cover knowledge graphs. This is our recap of Season 2 of Going Meta, about a year after I wrote the recap of Season 1.
But first, I want to thank you for your continued support over the years (crazy that I can say that now). Our GitHub repository, where we gather all the assets (code, queries, datasets, ontologies, notebooks, etc.) for each episode, is a great resource for following along.
With Season 2 coming to an end, we wanted to pick out one integral pillar of Going Meta: knowledge graphs. Plenty of episodes cover them across the season (and beyond in Season 1), so we wanted to offer a guide on what topics we covered in each episode in case you’re new to the series or you missed an episode.
Foundations
Toward the end of Season 1, we explored how semantics (another theme of Going Meta) can be captured in two fundamentally different ways: explicitly through knowledge graphs and implicitly through vector embeddings. We demonstrated how explicit graph-based semantics offer explainability and rich exploration capabilities, while vector embeddings provide robust semantic search. These approaches are highly complementary rather than competing: Graphs provide interpretable structure and context, while vectors enable efficient semantic similarity matching. Combining them creates powerful retrieval-augmented generation (RAG) systems that move beyond pure similarity to actual relevance.
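To make the complementarity concrete, here is a minimal, self-contained sketch of the two-step pattern: vector similarity (implicit semantics) finds entry-point nodes, then graph traversal (explicit semantics) enriches them with interpretable context. The toy graph, embeddings, and relationship names below are illustrative assumptions, not the episodes' actual data; in practice, the embeddings come from an embedding model and the graph lives in Neo4j.

```python
import math

# Toy in-memory knowledge graph: node -> list of (relationship, neighbor).
# Node and relationship names are made up for illustration.
GRAPH = {
    "Neo4j": [("CATEGORY", "Graph Database"), ("SUPPORTS", "Cypher")],
    "Cypher": [("QUERY_LANGUAGE_FOR", "Neo4j")],
    "Graph Database": [("STORES", "Knowledge Graph")],
}

# Toy 3-dimensional embeddings (a real system uses an embedding model).
EMBEDDINGS = {
    "Neo4j": [0.9, 0.1, 0.0],
    "Cypher": [0.7, 0.3, 0.1],
    "Graph Database": [0.8, 0.2, 0.1],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def retrieve(query_vector, top_k=1):
    # Step 1 (implicit semantics): vector similarity picks entry points.
    ranked = sorted(EMBEDDINGS,
                    key=lambda n: cosine(query_vector, EMBEDDINGS[n]),
                    reverse=True)
    entry_points = ranked[:top_k]
    # Step 2 (explicit semantics): traverse the graph around each entry
    # point to collect interpretable context triples.
    context = []
    for node in entry_points:
        for rel, neighbor in GRAPH.get(node, []):
            context.append((node, rel, neighbor))
    return entry_points, context
```

The retrieved triples can then be handed to an LLM as grounded context, which is what moves a RAG pipeline from "similar text" to "relevant facts."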
Episodes:
S01 Ep21 — Explicit and Implicit Semantics
S01 Ep22 — Basic RAG and RAG with Knowledge Graphs
Retrieval Patterns
We explored sophisticated retrieval techniques that combine vector search with graph traversals. Starting with basic vector search that lands on relevant nodes, the approach then navigates through graph relationships to contextualize and enrich results.
We introduced ontology-driven dynamic exploration, where the type of entity discovered determines the navigation strategy. Later episodes compared multiple retrieval methods available in Neo4j (vector, full-text, geospatial, and Cypher generation) and evolved toward agentic approaches where LLMs autonomously select appropriate retrieval tools based on query requirements. The progression moved from one-shot retrieval to multi-tool orchestration, demonstrating how to offer retrieval capabilities as functions that agents can intelligently combine.
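The multi-tool idea described above can be sketched in a few lines: retrieval methods are exposed as named tools with descriptions, and a router selects one per query. In the episodes, the router is an LLM choosing from the tool descriptions and the tools wrap Neo4j's vector, full-text, geospatial, and Cypher-generation retrievers; here, stub functions and a keyword heuristic stand in for both, purely for illustration.

```python
# Stub retrieval tools; real ones would query Neo4j.
def vector_search(query):
    return f"vector hits for '{query}'"

def fulltext_search(query):
    return f"full-text hits for '{query}'"

def cypher_generation(query):
    return f"Cypher results for '{query}'"

# Tool registry: name -> (description, callable). An LLM agent would choose
# a tool from the descriptions; the heuristic in route() is a stand-in.
TOOLS = {
    "vector": ("semantic similarity search", vector_search),
    "fulltext": ("exact keyword matching", fulltext_search),
    "cypher": ("aggregations and structured queries", cypher_generation),
}

def route(query):
    # Aggregation-style questions need a structured Cypher query.
    if any(w in query.lower() for w in ("how many", "count", "average")):
        name = "cypher"
    # Quoted phrases suggest exact keyword matching.
    elif query.startswith('"') and query.endswith('"'):
        name = "fulltext"
    # Everything else falls back to semantic search.
    else:
        name = "vector"
    return name, TOOLS[name][1](query)
```

Offering retrieval as a registry of described functions is exactly what lets an agent combine tools across turns instead of committing to one-shot retrieval.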
Episodes:
S01 Ep22 — RAG with Vectors and Traversals
S01 Ep24 — Adding Ontologies to the Mix
S02 Ep06 — Comparing Retrieval Methods
S02 Ep07 — Function Calling (Tools)
LLMs as Assistants for Domain Modeling
These episodes pioneered the use of LLMs for graph domain modeling, exploring how AI could assist in the traditionally complex task of designing graph schemas from structured data. We showed the LLMs’ ability to identify entities, relationships, and logical data organization patterns — essentially performing entity-relationship modeling.
We then implemented an agentic workflow in which one LLM acts as a modeling expert while another serves as a critic, providing feedback and iterating to improve model quality. This approach showed that LLMs can produce graph models indistinguishable from expert-designed ones. The pattern is now widely known as LLM-as-a-critic, an early example of agentic AI.
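The modeler/critic loop can be sketched as follows. Both roles are stubbed with plain functions here (real implementations call a chat model for each); the function names, the toy schema, and the `ACTED_IN` suggestion are illustrative assumptions, not the episode's actual prompts or output.

```python
# Stub "modeler LLM": proposes a graph model from dataset columns and
# refines it when the critic sends feedback.
def modeler(dataset_columns, feedback=None):
    model = {"nodes": sorted(set(dataset_columns))}
    if feedback:
        model["relationships"] = feedback.get("suggest", [])
    return model

# Stub "critic LLM": rejects models that define node labels but no
# relationships, and suggests a fix.
def critic(model):
    if "relationships" not in model:
        return {"approved": False, "suggest": ["ACTED_IN"]}
    return {"approved": True}

def design_model(columns, max_rounds=3):
    # Iterate modeler -> critic until approval or the round budget runs out.
    feedback = None
    for _ in range(max_rounds):
        model = modeler(columns, feedback)
        verdict = critic(model)
        if verdict["approved"]:
            return model
        feedback = verdict
    return model
```

The value of the loop is that quality criteria live in the critic rather than in one giant prompt, so each round converges on a model the critic can no longer fault.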
Episodes:
S01 Ep25 — Learn a Graph Model from a Denormalized Dataset
S01 Ep27 — Call a Critic, Get Feedback, and Iterate!
Knowledge Graph Construction
We spent most of the time on this topic and covered Neo4j’s unique dual-graph approach to knowledge graph construction, combining document/lexical graphs (representing document structure) with domain graphs (representing extracted entities and relationships). Our key message here is the importance of using ontologies or target schemas as guardrails. Knowledge graphs without well-defined schemas can easily lead to unmanageable results.
Across different episodes, we covered no-code visual approaches and programmatic methods for extracting knowledge from unstructured data (PDFs, web pages) and structured data (CSV files, databases). A key insight was the “mixed data” approach, which showed how a single ontology can drive construction from structured and unstructured sources simultaneously, with structured data providing scaffolding for unstructured content integration.
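The "ontology as guardrail" idea above can be sketched as a validation gate: every triple an LLM extracts is checked against the target schema before it enters the graph. The schema and entity types below are invented for illustration; a real pipeline would load them from an ontology.

```python
# Hypothetical target schema: allowed (subject type, relationship,
# object type) patterns, standing in for an ontology.
SCHEMA = {
    ("Person", "WORKS_FOR", "Company"),
    ("Company", "LOCATED_IN", "City"),
}

def conforms(triple, entity_types):
    # A triple is valid only if its typed pattern appears in the schema.
    subj, rel, obj = triple
    return (entity_types.get(subj), rel, entity_types.get(obj)) in SCHEMA

def filter_triples(extracted, entity_types):
    # Partition LLM-extracted triples into schema-conformant and rejected.
    kept, rejected = [], []
    for t in extracted:
        (kept if conforms(t, entity_types) else rejected).append(t)
    return kept, rejected
```

Without a gate like this, open-ended extraction accumulates ad hoc relationship types, which is exactly how a knowledge graph becomes unmanageable.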
Episodes:
S02 Ep01 — From Unstructured Data (No Code)
S02 Ep02 and Ep03 — From Unstructured Data (Programmatic)
S02 Ep05 — From Structured Data (No Code)
S01 Ep05 and S02 Ep07 — From Structured Data (Programmatic)

Season 3
In Season 3, we want to continue from there and focus on the consumption and retrieval side, along with agentic applications of AI. Going Meta returns with Episode 1 of Season 3 in October 2025. We hope you continue to join us on the journey ahead!
Resources
- Going Meta (all episodes)
- GitHub repo
- Going Meta: Wrapping Up GraphRAG, Vectors, and Knowledge Graphs
- 20 Episodes of Going Meta Recap
Going Meta: Two Years of Knowledge Graphs was originally published in the Neo4j Developer Blog on Medium.