
Building a Fullstack GenAI App with LangChain, Neo4j, and TypeScript

In this tutorial, we will build a minimal fullstack TypeScript application that serves a simple frontend and a LangChain-powered GenAI backend on the same Node.js server. The app will recommend books to users based on a query (e.g. their favorite books or genres), by performing semantic search over a Neo4j graph of books and authors and then formatting a concise recommendation via OpenAI’s Chat API. We’ll use Neo4j’s new vector index capability for semantic similarity search, and LangChain’s integration with OpenAI and Neo4j to tie everything together.

What we’ll implement:

  • Backend – A Node/TS server with a single API route (/recommend) that takes a query string and returns a book recommendation. The backend will:
    1. Embed the user query and perform a vector similarity search in Neo4j to find relevant Book/Author data (via LangChain’s Neo4j vector store).
    2. Feed the results into OpenAI’s Chat model to generate a short recommendation message.
  • Frontend – A static HTML+JS page (served by the same Node server) with a text input and button. The user enters a query (e.g. “I loved The Hobbit, what should I read next?”), the app calls our /recommend API, and displays the response. We’ll use Tailwind CSS (via CDN) for quick styling – no frameworks or build steps needed.

We assume you have a Neo4j instance loaded with a Goodreads books dataset (10k books, 5k authors, etc.) and that Neo4j’s vector index is available. For this tutorial, we’ll connect to Neo4j’s Aura demo database (neo4j+s://demo.neo4jlabs.com, user/pass “goodreads”, database “goodreads”). You’ll also need your own OpenAI API key. 

1. Project Setup

First, create a new Node.js project and install the required packages. We will use LangChain for JS/TS (community and OpenAI modules), the Neo4j driver, and dotenv for configuration:

npm init -y
npm install langchain @langchain/core @langchain/openai @langchain/community neo4j-driver dotenv
npm install -D typescript @types/node

This installs LangChain’s core and integrations we need. Next, set up package.json with scripts to build and run the project, and list our dependencies:

{
  "name": "genai-book-recommender",
  "version": "1.0.0",
  "type": "module",
  "scripts": {
    "build": "tsc",
    "start": "node dist/index.js"
  },
  "dependencies": {
    "langchain": "^0.3.30",
    "@langchain/core": "^0.3.0",
    "@langchain/openai": "^0.3.0",
    "@langchain/community": "^0.3.0",
    "neo4j-driver": "^5.8.0",
    "dotenv": "^16.3.1"
  },
  "devDependencies": {
    "typescript": "^5.1.3",
    "@types/node": "^20.0.0"
  }
}

We set the module type to "module" so we can use ES module import syntax in Node (alternatively, use CommonJS requires and adjust config if preferred). We also include a basic tsconfig.json to compile our TypeScript:

{
  "compilerOptions": {
    "target": "ES2020",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "rootDir": "src",
    "outDir": "dist",
    "esModuleInterop": true,
    "strict": true
  }
}

With this config, our TS source will be in src/ and compiled to dist/. Now, create a .env file (and a .env.example for reference) in the project root to store sensitive credentials:

# .env (fill in your actual keys and endpoints)
OPENAI_API_KEY=<your OpenAI API key>
NEO4J_URI=neo4j+s://demo.neo4jlabs.com
NEO4J_USERNAME=goodreads
NEO4J_PASSWORD=goodreads
NEO4J_DATABASE=goodreads

This will allow our code to connect to the Neo4j Aura Goodreads demo database and authenticate to OpenAI. Now, let’s build the backend server.
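A missing variable here tends to surface later as a confusing connection or auth error, so it can be worth validating eagerly at startup. Below is a minimal sketch (requireEnv is a hypothetical helper, not part of any library; in the real server you would pass process.env):

```typescript
// Fail fast if a required environment variable is missing.
function requireEnv(name: string, env: Record<string, string | undefined>): string {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Example with a stubbed environment (in the server, pass process.env):
const stubEnv: Record<string, string | undefined> = {
  NEO4J_URI: 'neo4j+s://demo.neo4jlabs.com',
};
console.log(requireEnv('NEO4J_URI', stubEnv)); // → "neo4j+s://demo.neo4jlabs.com"
```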

2. Backend: LangChain-Powered Recommendation API

Create src/index.ts (or server.ts) for our Node server. We will use Node’s built-in HTTP module to keep things minimal (no Express). The plan is:

  • Load environment variables and initialize connections (Neo4j and OpenAI).
  • Prepare a LangChain vector store for Neo4j, so we can do similarity search on our graph data.
  • Create an HTTP server that serves the frontend file(s) and handles the /recommend API route.
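The routing in this plan is simple enough to capture as a pure dispatch function. This is a sketch only, the server below inlines the same checks:

```typescript
type Route = 'home' | 'recommend' | 'notFound';

// Decide which handler a request path maps to (mirrors the plan above).
function matchRoute(pathname: string): Route {
  if (pathname === '/' || pathname === '/index.html') return 'home';
  if (pathname === '/recommend') return 'recommend';
  return 'notFound';
}

console.log(matchRoute('/recommend')); // → "recommend"
```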

Let’s start with imports and initialization at the top of index.ts:

import * as http from 'http';
import * as fs from 'fs';
import * as path from 'path';
import { fileURLToPath } from 'url';
import 'dotenv/config';

// LangChain + Neo4j imports
import { OpenAIEmbeddings, ChatOpenAI } from '@langchain/openai';
import { Neo4jVectorStore } from '@langchain/community/vectorstores/neo4j_vector';
import neo4j from 'neo4j-driver';

// __dirname is not defined in ES modules, so derive it from import.meta.url
const __dirname = path.dirname(fileURLToPath(import.meta.url));

// Load config from .env
const PORT = process.env.PORT || 3000;
const neo4jUrl = process.env.NEO4J_URI!;
const neo4jUser = process.env.NEO4J_USERNAME!;
const neo4jPass = process.env.NEO4J_PASSWORD!;
const neo4jDb = process.env.NEO4J_DATABASE || 'neo4j';
const openAiKey = process.env.OPENAI_API_KEY!;

// Initialize Neo4j driver (for any direct Cypher queries)
const driver = neo4j.driver(neo4jUrl, neo4j.auth.basic(neo4jUser, neo4jPass));

// Initialize LangChain components
const embeddings = new OpenAIEmbeddings(); // embeds the user's query (reads OPENAI_API_KEY from env)
const chatModel = new ChatOpenAI({ openAIApiKey: openAiKey, temperature: 0.7 });

Here we configure the Neo4j driver and OpenAI. We use ChatOpenAI from LangChain with a moderate temperature for a bit of creativity in the response.

Next, we set up the Neo4j vector store. This will let us search our Neo4j graph using vector similarity. We assume our Neo4j graph already contains embedded vectors for some text (like book reviews or descriptions) and a corresponding vector index. LangChain’s Neo4jVectorStore can interface with Neo4j’s vector index – creating one if needed – to enable similarity search.

For the Goodreads demo, the data includes Book and Author nodes, and a set of Review nodes with text content. We’ll perform vector search over Review nodes’ text to find books related to the user’s query (as user queries like “I love The Hobbit” are better matched via reviews or descriptions). Each Review node is connected to a Book (and that Book has one or more Authors). Below is a simplified schema of the graph (Users, Reviews, Books, Authors) for context:

Graph data model of the Goodreads dataset: users write reviews for books, and authors write books (one book can have multiple reviews and authors).
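For orientation, that schema can be sketched as TypeScript interfaces. The property names here are assumptions based on the dataset description above, not an exact Neo4j mapping:

```typescript
// A sketch of the Goodreads graph entities as TypeScript interfaces.
// Property names are assumptions; check the actual node properties in Neo4j.
interface Author { name: string }
interface Book { title: string; authors: Author[] }
interface Review { id: string; text: string; book: Book }

const example: Review = {
  id: 'r1',
  text: 'A charming adventure with dragons and dwarves.',
  book: { title: 'The Hobbit', authors: [{ name: 'J.R.R. Tolkien' }] },
};
console.log(example.book.title); // → "The Hobbit"
```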

We can initialize the vector store as follows:

// Initialize vector index interface for Neo4j (will use existing embeddings or add if missing)
const vectorStore = await Neo4jVectorStore.fromExistingGraph(
  embeddings,
  {
    url: neo4jUrl,
    username: neo4jUser,
    password: neo4jPass,
    database: neo4jDb,
    indexName: 'review-embedding-index', // name of the Neo4j vector index (assumed to exist)
    nodeLabel: 'Review',                 // label of nodes to search
    textNodeProperty: 'text',            // property name where review text is stored
    embeddingNodeProperty: 'embedding',  // property name of the stored vector embeddings
    searchType: 'vector'                 // pure vector search (cosine similarity by default)
  }
);

We pass the connection config and specify that we want to use Review nodes’ text and embedding properties. We also name the index (in this demo it’s "review-embedding-index" as preconfigured in the dataset). The call to fromExistingGraph will connect to Neo4j and ensure that any Review without an embedding gets one computed and stored. If a Neo4j vector index by that name doesn’t exist, it will create one behind the scenes. Once this promise resolves, our vectorStore is ready to perform similarity searches on the graph.
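If you prefer to create the vector index yourself rather than rely on fromExistingGraph, the Neo4j 5 statement looks roughly like the following. This is a sketch: 1536 dimensions assumes OpenAI's default embedding size, and the index name matches the one we configured above.

```typescript
// Cypher to create the vector index by hand (Neo4j 5.11+ syntax).
// 1536 dimensions matches OpenAI's default embedding size; adjust for other models.
const createIndexCypher = `
  CREATE VECTOR INDEX \`review-embedding-index\` IF NOT EXISTS
  FOR (r:Review) ON (r.embedding)
  OPTIONS { indexConfig: {
    \`vector.dimensions\`: 1536,
    \`vector.similarity_function\`: 'cosine'
  } }
`;

// You could run it once with the driver, e.g.:
// await driver.executeQuery(createIndexCypher, {}, { database: neo4jDb });
console.log(createIndexCypher.includes('CREATE VECTOR INDEX')); // → true
```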

Now let’s implement the HTTP server with the /recommend API route. We also need to serve the static frontend (index.html). We can use Node’s http and fs modules for simplicity:

// Create HTTP server
const server = http.createServer(async (req, res) => {
  try {
    const url = req.url ? new URL(req.url, `http://localhost:${PORT}`) : null;
    if (!url) {
      res.writeHead(400).end('Bad Request');
      return;
    }

    // Serve the frontend HTML (and any static files)
    if (url.pathname === '/' || url.pathname === '/index.html') {
      const filePath = path.join(__dirname, '../public/index.html');
      const data = fs.readFileSync(filePath);
      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.end(data);
      return;
    }

    // API endpoint: /recommend?query=<user query>
    if (url.pathname === '/recommend') {
      const queryParam = url.searchParams.get('query') || '';
      const userQuery = queryParam.trim();
      if (!userQuery) {
        res.writeHead(400).end(JSON.stringify({ error: 'Missing query' }));
        return;
      }

      // 1. Vector search in Neo4j for similar content
      const results = await vectorStore.similaritySearch(userQuery, 3);
      // results are Documents: review text in pageContent, node properties (incl. id) in metadata

      if (results.length === 0) {
        res.writeHead(200).end(JSON.stringify({ recommendation: "Sorry, I couldn't find any related books." }));
        return;
      }

      // 2. Find the Book + Author in the graph via a Cypher query,
      // falling back to the next result if a review has no match
      const session = driver.session({ database: neo4jDb });
      const cypher = `
        MATCH (r:Review {id: $rid})-[:WRITTEN_FOR]->(b:Book)<-[:AUTHORED]-(a:Author)
        RETURN b.title AS title, a.name AS author LIMIT 1
      `;
      let title: string | undefined, author: string | undefined;
      try {
        for (const doc of results) {
          const reviewId = doc.metadata.id;  // assuming the Review node has an 'id' property
          const lookup = await session.executeRead(tx => tx.run(cypher, { rid: reviewId }));
          const record = lookup.records[0];
          if (record) {
            title = record.get('title');
            author = record.get('author');
            break;
          }
        }
      } finally {
        await session.close();
      }
      if (!title || !author) {
        res.writeHead(200).end(JSON.stringify({ recommendation: "Sorry, I couldn't find a matching book." }));
        return;
      }

      // 3. Use OpenAI's chat model to format a concise recommendation message
      const messages = [
        { role: 'system', content: 'You are a helpful book recommendation assistant.' },
        { role: 'user', content:
            `A user said: "${userQuery}".\n` +
            `You found a book suggestion: "${title}" by ${author}.\n` +
            'Write a single-paragraph recommendation for this book, mentioning the title and author and why the user might like it. Be concise and upbeat.'
        }
      ];
      const response = await chatModel.invoke(messages);
      const answer = response.content;  // the generated recommendation text

      // 4. Return the answer as JSON
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ recommendation: answer }));
      return;
    }

    // If no route matched, return 404 for API or static requests
    res.writeHead(404).end('Not Found');
  } catch (err) {
    console.error('Error handling request', err);
    res.writeHead(500).end(JSON.stringify({ error: 'Internal Server Error' }));
  }
});

// Start the server
server.listen(PORT, () => {
  console.log(`Server running at http://localhost:${PORT}`);
});

A lot is happening here, so let’s break down the core logic in the /recommend handler:

  • We read the query parameter from the URL. If it’s empty, respond with a 400 error.
  • We call vectorStore.similaritySearch(userQuery, 3) to retrieve the top 3 most similar pieces of text from the Neo4j graph. Under the hood, this uses the vector index in Neo4j to find the nearest neighbors to the query embedding. The result is an array of LangChain Document objects. Each Document represents a piece of text from a node (in our case, a Review) and includes any node properties as metadata.
  • If we get no results, we return a friendly message indicating no matches.
  • Otherwise, we take each result’s metadata in turn (which we expect to include an id for the Review node) and run a Cypher query (using the Neo4j driver) to find the Book title and Author name, stopping at the first review with a match. The Cypher pattern matches the review to a Book ((:Review)-[:WRITTEN_FOR]->(:Book)) and then finds an Author connected to that Book ((:Author)-[:AUTHORED]->(:Book)). We limit to one author/book per review, just in case of multiple.
  • Next, we construct a prompt for OpenAI’s Chat API. We give a brief system role instruction, and then as the “user” message we feed in:
    • The original user query (their interests).
    • The book suggestion (title and author) we found.
    • An instruction to “Write a single-paragraph recommendation… mentioning the title and author and why the user might like it.”
  • We call chatModel.invoke(messages) to get the assistant’s reply. This uses the OpenAI API (a ChatGPT model by default) to generate a concise recommendation based on the info we provided.
  • Finally, we respond with JSON containing the recommendation text.
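To make the prompt step easier to unit test, the message construction can be pulled into a pure helper. A sketch mirroring the handler above:

```typescript
type ChatMessage = { role: 'system' | 'user'; content: string };

// Build the chat prompt from the user's query and the book we found.
function buildMessages(userQuery: string, title: string, author: string): ChatMessage[] {
  return [
    { role: 'system', content: 'You are a helpful book recommendation assistant.' },
    {
      role: 'user',
      content:
        `A user said: "${userQuery}".\n` +
        `You found a book suggestion: "${title}" by ${author}.\n` +
        'Write a single-paragraph recommendation for this book, mentioning the ' +
        'title and author and why the user might like it. Be concise and upbeat.',
    },
  ];
}

const msgs = buildMessages('I loved The Hobbit', 'The Name of the Wind', 'Patrick Rothfuss');
console.log(msgs[1].content.includes('Patrick Rothfuss')); // → true
```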

At this point, our backend logic is complete. Note that we kept everything minimal and synchronous where possible. For a production app, you might want to batch queries, handle multi-author books, cache results, etc., but those are beyond our current scope.
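As one concrete example of those production concerns, a tiny in-memory cache keyed by the normalized query would avoid repeating the embedding, graph, and LLM calls for identical requests. This is a sketch only, a real app would add a TTL and a size bound:

```typescript
// Minimal in-memory cache for recommendations, keyed by normalized query.
const cache = new Map<string, string>();

function cacheKey(query: string): string {
  return query.trim().toLowerCase();
}

function getCached(query: string): string | undefined {
  return cache.get(cacheKey(query));
}

function setCached(query: string, recommendation: string): void {
  cache.set(cacheKey(query), recommendation);
}

setCached('  I love FANTASY ', 'Try "The Name of the Wind".');
console.log(getCached('i love fantasy')); // → 'Try "The Name of the Wind".'
```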

Before moving on, make sure the public/index.html file served by our static route exists – we’ll create it next.

3. Frontend: HTML + Tailwind CSS UI

Our frontend is a single HTML page with no build tools. We’ll use Tailwind’s Play CDN to quickly style the interface. Create public/index.html as below:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <title>GenAI Book Recommender</title>
  <!-- Tailwind CSS via CDN (Play CDN for development) -->
  <script src="https://cdn.tailwindcss.com"></script>
</head>
<body class="bg-gray-100 text-gray-800 flex items-center justify-center min-h-screen">
  <div class="p-6 bg-white rounded shadow-md max-w-md w-full">
    <h1 class="text-2xl font-bold mb-4 text-center">Book Recommendation</h1>
    <p class="mb-2 text-sm text-gray-600">Ask for a book recommendation based on your favorite books or genres:</p>
    <input id="queryInput" type="text" 
           class="w-full px-3 py-2 border rounded mb-4" 
           placeholder="e.g. I love epic fantasy like Lord of the Rings" />
    <button id="askButton" 
            class="w-full bg-blue-600 text-white font-semibold py-2 rounded hover:bg-blue-700">
      Recommend a Book
    </button>
    <div id="result" class="mt-4 text-gray-900 font-medium"></div>
  </div>

  <script>
    const queryInput = document.getElementById('queryInput');
    const resultDiv = document.getElementById('result');
    document.getElementById('askButton').addEventListener('click', async () => {
      const query = queryInput.value;
      if (!query) return;
      resultDiv.textContent = "Finding recommendations...";
      try {
        const resp = await fetch('/recommend?query=' + encodeURIComponent(query));
        const data = await resp.json();
        resultDiv.textContent = data.recommendation || "No recommendation found.";
      } catch (err) {
        console.error(err);
        resultDiv.textContent = "Error getting recommendation.";
      }
    });
  </script>
</body>
</html>

Let’s unpack the frontend code:

  • We include the Tailwind CDN script in the <head>, which enables us to use Tailwind utility classes in our HTML without any build step. (This is fine for a demo or development, though for production you’d generate a static CSS file.)
  • The UI consists of a centered container with a title, a short instruction, an <input> for the query, and a button. We also have a <div id="result"> where the recommendation text will appear.
  • Basic Tailwind classes are used to style elements (e.g. a blue button, some padding and margin, etc.).
  • The script at the bottom attaches a click handler to the button. When clicked, it:
    • Reads the query text from the input.
    • Updates the result div to say “Finding recommendations…” (feedback while waiting).
    • Calls the /recommend API using fetch. We URL-encode the query and expect a JSON response.
    • Once the JSON arrives, we display data.recommendation inside the result div. If there’s an error (network issue, or the server returned an error), we catch it and show a generic error message.
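The URL construction in that fetch call is easy to get subtly wrong, so here is the same logic as a standalone, testable helper (a sketch):

```typescript
// Build the API URL for a recommendation query, safely URL-encoding it.
function buildRecommendUrl(query: string): string {
  return '/recommend?query=' + encodeURIComponent(query);
}

console.log(buildRecommendUrl('I love epic fantasy'));
// → "/recommend?query=I%20love%20epic%20fantasy"
```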

That’s it for the frontend! It’s minimal by design – just enough to capture input and show output. You can enhance it with loading spinners, better error display, etc., but we’ll keep it simple.

4. Running and Testing the App

Now that we have both backend and frontend in place, let’s run the application:

npm run build   # compile TypeScript to dist/
npm start       # start the Node server

Make sure your Neo4j database credentials in .env are correct and the database is running (for the Aura demo, it’s cloud-hosted so it should be available). Also ensure your OpenAI API key is set and valid.

Open your browser to http://localhost:3000 (or the PORT you set). You should see the Book Recommendation UI. Try entering something like:

Input: “I enjoyed reading The Hobbit and other fantasy novels”

Click Recommend a Book, and after a moment, you should get a response. For example, you might see:

Output: “You might enjoy ‘The Name of the Wind’ by Patrick Rothfuss – an epic fantasy adventure with rich world-building and captivating storytelling, much like the elements you loved in your previous reads.”

Each time you enter a query, the app will find semantically similar content in the graph and suggest a book accordingly, phrased as a friendly recommendation. Under the hood, Neo4j’s vector index is finding relevant books/authors by meaning, not just keywords, and OpenAI’s model is wording the recommendation for us.
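That “by meaning, not just keywords” matching boils down to cosine similarity between embedding vectors. A toy illustration with 3-dimensional vectors (real OpenAI embeddings have on the order of 1,536 dimensions):

```typescript
// Cosine similarity: 1 means identical direction, ~0 means unrelated.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy "embeddings": similar texts point in similar directions.
const hobbit = [0.9, 0.1, 0.2];
const fantasyQuery = [0.8, 0.2, 0.25];
const cookbook = [0.1, 0.9, 0.1];

console.log(cosineSimilarity(hobbit, fantasyQuery) > cosineSimilarity(hobbit, cookbook)); // → true
```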

5. Conclusion

In this tutorial, we built a fullstack GenAI application that combines graph-based search with LLM-based generation. We used Neo4j’s integrated vector search to find books related to a user’s interest and LangChain + OpenAI to produce a human-friendly recommendation. All components (frontend, backend, AI logic) run in one simple Node.js project.

This architecture can be extended in many ways. For example, you could index book descriptions or tags for more accurate vector search, handle multiple recommendations, or add user login to personalize results. You could also separate the frontend and backend into different services once the codebase grows (our modular setup makes this easier to do later). But even in this minimal form, we have a functional recommendation app powered by a knowledge graph and generative AI. Happy reading!

Sources:

  • Neo4j’s vector index introduction and LangChain integration docs
  • Goodreads dataset (10k books, authors, tags) used in Neo4j Aura demo
  • Tailwind CSS setup via CDN
  • Spring AI Goodreads example (data model)
