Mahabharata 2.0: From Scrolls to Soundwaves (Part 4)

Developer Relations Lead, APAC, Neo4j
5 min read

Leveling up the Mahabharata chatbot with multi-LLM support, Hindi audio, and faster hosting

In the previous three parts of this series, we journeyed through the making of the Mahabharata chatbot — from structuring ancient knowledge using Neo4j to enhancing relevance through GraphRAG to marrying the world of ancient epics with modern GenAI tooling. My last post showcased how I brought smart retrieval to life using GraphRAG and a knowledge graph of Mahabharata characters and events.
Now, it’s time for the next evolution. Welcome to Mahabharata 2.0! This update expands the chatbot’s powers through multi-model intelligence, faster performance via Cloud Run, and even Hindi audio support for its responses.
Let’s dive into what’s new!
🧠 Choose Your AI Sage: GPT-4o, Gemini, Claude — Who’s Your Favorite Rishi?
Why Multiple LLMs?
In the Mahabharata, many sages and scholars have offered their own interpretations of dharma and destiny — from Vyasa’s original narration to later commentaries by thinkers like Adi Shankaracharya and modern authors like C. Rajagopalachari. Why should our chatbot be any different? With the latest update, you can test how different LLMs (GPT-4o, Gemini 2.5 Flash/Pro, Claude 3.7 Sonnet) respond to the same query.
Each model has its strengths:
- GPT-4o excels at creative yet contextual, emotive responses.
- Gemini 2.5 Flash and Pro offer deeper reasoning.
- Claude 3.7 Sonnet handles longer contexts well.
Implementation Detail
A drop-down menu lets you switch between models on the fly. Behind the scenes, the prompt is dynamically routed to the selected provider’s API endpoint, with request and response formatting handled per provider.
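To make the routing idea concrete, here is a minimal sketch of how the selected model name might map to a per-provider request payload. The function names and dispatch table are illustrative assumptions, not the app’s actual code; only the general shapes of the OpenAI, Gemini, and Anthropic chat payloads follow their public APIs.

```python
# Hypothetical sketch: map the dropdown's model choice to a
# provider-specific request payload. Not the app's actual code.

def format_openai(prompt: str) -> dict:
    # OpenAI-style chat payload
    return {"model": "gpt-4o",
            "messages": [{"role": "user", "content": prompt}]}

def format_gemini(prompt: str) -> dict:
    # Gemini-style generateContent payload
    return {"contents": [{"parts": [{"text": prompt}]}]}

def format_claude(prompt: str) -> dict:
    # Anthropic Messages API-style payload (max_tokens is required)
    return {"model": "claude-3-7-sonnet-latest",
            "max_tokens": 1024,
            "messages": [{"role": "user", "content": prompt}]}

FORMATTERS = {
    "GPT-4o": format_openai,
    "Gemini 2.5 Flash": format_gemini,
    "Gemini 2.5 Pro": format_gemini,
    "Claude 3.7 Sonnet": format_claude,
}

def build_request(model_choice: str, prompt: str) -> dict:
    """Route the user's prompt to the payload format of the selected provider."""
    try:
        return FORMATTERS[model_choice](prompt)
    except KeyError:
        raise ValueError(f"Unsupported model: {model_choice}")
```

From here, the UI only needs to pass the dropdown value and the user’s question; each provider’s endpoint then receives a payload in the shape it expects.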
Sample Prompt Comparison
Prompt: “What was the role of Krishna during the Kurukshetra war? Did he fight?”
- GPT-4o: Frames Krishna as a divine strategist and moral guide who upheld his vow of non-combat.
- Gemini 2.5 Flash Preview: Provides a clear and factual explanation emphasizing Krishna’s vow and advisory role.
- Gemini 2.5 Pro Preview: Focuses on textual citations and moral consequences.
- Claude 3.7 Sonnet: Highlights Krishna’s neutrality and strategic presence using direct textual references.
Takeaway
Now, you don’t just ask a question — you consult multiple sages. And just like the epic, answers vary based on who you ask.
🚀 From Hugging Face to Cloud Run: A New Yuga in Performance
Why Move to Cloud Run?
Initially hosted on Hugging Face Spaces, the chatbot faced cold-start issues and sluggish loading times. As usage grew, it was clear I needed a more production-grade hosting setup.
Cloud Run benefits:
- Faster cold starts
- Auto-scaling based on traffic
- Easier API key management
- Integration with Google Cloud services
How to Deploy (Mini Guide)
# 1. Create the repository and build the container image
export AR_REPO='mb-aisage'       # Updated repo name
export SERVICE_NAME='mb-aisage'  # Updated service name

gcloud artifacts repositories create "$AR_REPO" \
  --location="$GCP_REGION" \
  --repository-format=Docker

gcloud auth configure-docker "$GCP_REGION-docker.pkg.dev"

gcloud builds submit \
  --tag "$GCP_REGION-docker.pkg.dev/$GCP_PROJECT/$AR_REPO/$SERVICE_NAME"

# 2. Deploy to Cloud Run
gcloud run deploy "$SERVICE_NAME" \
  --port=8080 \
  --image="$GCP_REGION-docker.pkg.dev/$GCP_PROJECT/$AR_REPO/$SERVICE_NAME" \
  --allow-unauthenticated \
  --region="$GCP_REGION" \
  --platform=managed \
  --project="$GCP_PROJECT" \
  --env-vars-file=.env.yaml
Environment Configurations
Environment variables in Cloud Run are used to securely manage API keys for the following services:
- OpenAI (API key required)
- Claude (Anthropic) (API key required)
- ElevenLabs (API key required)
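The `--env-vars-file=.env.yaml` flag used in the deploy command expects a flat YAML mapping of variable names to values. The key names below are illustrative assumptions based on the providers listed above, not the app’s actual variable names:

```yaml
# .env.yaml -- key names and values are placeholders
OPENAI_API_KEY: "sk-..."
ANTHROPIC_API_KEY: "sk-ant-..."
ELEVENLABS_API_KEY: "..."
GCP_PROJECT: "your-project-id"
```

Keeping keys in this file (and out of the container image) lets Cloud Run inject them as environment variables at runtime.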
✅ Note: Google Cloud’s Gemini and Translation APIs do not require separate API keys when used within the same Google Cloud project. However, the relevant services must be explicitly enabled before deploying:
gcloud services enable \
  cloudresourcemanager.googleapis.com \
  servicenetworking.googleapis.com \
  run.googleapis.com \
  cloudbuild.googleapis.com \
  cloudfunctions.googleapis.com \
  aiplatform.googleapis.com \
  translate.googleapis.com
This command activates the required services so that deployment, inference, and translation work seamlessly.
🎧 When Vyasa Speaks Hindi: TTS Meets Translation
What Happens When You Click to Listen in Hindi?
This feature translates the chatbot’s English response into Hindi and plays it out loud. The tech stack behind this:
- Google Cloud Translation API converts English text to Hindi.
- ElevenLabs Text-to-Speech API generates natural-sounding Hindi audio.
- Audio is embedded and played directly in the browser.
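As a rough sketch of this pipeline, the snippet below builds the two request payloads and calls the ElevenLabs REST endpoint. The function names, voice ID handling, and key handling are assumptions rather than the app’s actual code; the payload shapes follow the public Cloud Translation v2 and ElevenLabs text-to-speech REST documentation, and authentication for the Translation call is omitted.

```python
import json
import urllib.request

# Illustrative sketch only: function names, voice_id, and api_key
# handling are assumptions, not the app's actual implementation.
ELEVENLABS_TTS_URL = "https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"

def build_translate_payload(english_text: str) -> dict:
    # Cloud Translation v2 request body: English in, Hindi ("hi") out
    return {"q": english_text, "target": "hi", "format": "text"}

def build_tts_payload(hindi_text: str) -> dict:
    # ElevenLabs needs the text plus a multilingual model for Hindi
    return {"text": hindi_text, "model_id": "eleven_multilingual_v2"}

def synthesize_hindi(hindi_text: str, voice_id: str, api_key: str) -> bytes:
    # POST the Hindi text to ElevenLabs and return MP3 bytes
    # that the frontend can embed in an audio player.
    req = urllib.request.Request(
        ELEVENLABS_TTS_URL.format(voice_id=voice_id),
        data=json.dumps(build_tts_payload(hindi_text)).encode("utf-8"),
        headers={"xi-api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

The returned bytes can be served straight to the browser, which is what makes in-page playback possible without storing audio files.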
Why Hindi First?
Hindi is one of the most spoken languages in the world and is closely tied to the roots of the Mahabharata. Making the chatbot more inclusive starts with language.
Sample Demo
English: “Krishna’s role during the Kurukshetra war was primarily as a charioteer for Arjuna. He did not fight in the battle himself. Before the war, Krishna made a resolution not to take up arms for either side. Instead, he provided strategic advice and moral support to the Pandavas, particularly to Arjuna, guiding them throughout the conflict. His presence was crucial, but he remained a non-combatant.”
Hindi Audio: “कुरुक्षेत्र युद्ध के दौरान कृष्ण की भूमिका मुख्य रूप से अर्जुन के सारथी के रूप में थी। उन्होंने युद्ध में स्वयं भाग नहीं लिया। युद्ध से पहले, कृष्ण ने किसी भी पक्ष के लिए हथियार न उठाने का संकल्प लिया। इसके बजाय, उन्होंने पांडवों, विशेष रूप से अर्जुन को रणनीतिक सलाह और नैतिक समर्थन प्रदान किया, पूरे संघर्ष में उनका मार्गदर्शन किया। उनकी उपस्थिति महत्वपूर्ण थी, लेकिन वे एक गैर-लड़ाकू बने रहे।”
🎧 Listen to this response in Hindi
Challenges faced:
- Ensuring accurate cultural translation
- Tuning the TTS voice to sound more natural
- Retaining context in translation
🔮 Beyond Chat: Toward a Voice-First, Multi-Lingual Mahabharata
Conversational AI, Literally
I am now looking to go voice-first — where you can speak to the chatbot and get responses in audio. Tools like Whisper (for speech to text) and ElevenLabs (for speech synthesis) might make this possible.
Indic Language Expansion
The roadmap includes support for:
- Tamil
- Telugu
- Bengali
- Kannada
These languages will bring regional inclusivity and increase engagement with native speakers.
The End Goal: Build Your Own AI Sage
Beyond just choosing an LLM, imagine choosing your persona:
- Chat with Krishna for philosophical takes
- Ask Karna for a warrior’s perspective
- Debate with Draupadi on justice
Add multilingual audio on top of this, and I’m not just building a chatbot — I’m creating an epic digital sage.
🔊 Wrapping Up
From model switching and faster hosting to listening to the Mahabharata in Hindi, this chapter of my journey shows how tradition and technology can truly converge. Stay tuned as I step into a voice-native, multilingual future.
Until next time, may your questions be curious and your answers wise.
Massive thanks to Thorsten Schaeff (aka Thor) for generously helping me with ElevenLabs API credits — couldn’t have tested the Hindi audio feature without his support. Grateful to Michael Hunger for being a constant source of motivation throughout this project and the brilliant mind behind the amazing Neo4j LLM Graph Builder tool. 🙏
Explore Mahabharata 2.0: https://mb-aisage.netlify.app/.

Enjoyed this post? Give it a 👏 below and follow me on Medium and LinkedIn to get updates on upcoming blogs. And don’t forget to ★ the GitHub repo.
Mahabharata 2.0: From Scrolls to Soundwaves (Part 4) was originally published in Neo4j Developer Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.