AI legal research for Indian law.
Trained to reason like a senior advocate.
1.8 million advocates. Most still rely on manual keyword search across fragmented databases, or generic AI that hallucinates citations.
Lawyers juggle SCC Online, Manupatra, Indian Kanoon, bare act sites. Annual subscriptions run INR 15,000-50,000+ each, with no synthesis across sources.
Existing tools return document lists, not answers. Lawyers spend 2-4 hours per research task manually reading and synthesizing judgments.
ChatGPT and Gemini hallucinate case names, fabricate section numbers, and miss India-specific codes (BNS, BNSS, BSA replaced IPC, CrPC, IEA in 2024).
Registered advocates in India (Bar Council)
Pending cases across Indian courts
Solo practitioners or 2-3 person firms
BNS, BNSS, BSA (2024) created urgent reskilling demand
Initial SOM: 50,000 solo/small-firm advocates at INR 500-2,000/mo = INR 30-120 Cr ARR ($3.5-14M)
Judicra 1.0: Qwen3.5-27B fine-tuned with LoRA on 20,000+ expert-curated Indian legal QA pairs. Trained to reason with Indian statutes, case law, and the new criminal codes.
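For readers unfamiliar with LoRA: it freezes the base weights and trains only a low-rank delta, which is what makes fine-tuning a 27B model tractable. A minimal sketch of the update rule with toy matrices (dimensions, alpha, and rank here are illustrative, not Judicra's actual training config):

```python
# LoRA keeps the base weight W frozen and learns a low-rank delta B @ A,
# scaled by alpha / r. Only r * (d_in + d_out) parameters are trained.

def matmul(X, Y):
    """Naive matrix multiply for small illustrative matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_merge(W, A, B, alpha, r):
    """Return W + (alpha / r) * B @ A, the merged inference-time weight."""
    scale = alpha / r
    delta = matmul(B, A)
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy 2x2 base weight with rank-1 adapters (r = 1).
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]            # r x d_in
B = [[0.5], [0.25]]         # d_out x r
merged = lora_merge(W, A, B, alpha=2.0, r=1)
```

At inference time the delta can be merged back into W as above, so serving cost is identical to the base model.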
5M judgments from SC, all 25 HCs, 16 tribunals, 4,000+ statutes. BGE-M3 dense+sparse embeddings with cross-encoder reranking. Every answer is grounded in real documents.
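The dense+sparse retrieval with reranking described above is a standard two-stage design. A minimal sketch, with stand-in scoring functions (dot product and term overlap) in place of BGE-M3 and the cross-encoder, which are too heavy to inline here:

```python
# Stage 1: fuse dense and sparse scores for candidate recall.
# Stage 2: re-order the shortlist with a (stub) cross-encoder scorer.
# All scoring functions and weights below are illustrative assumptions.

def dense_score(q_vec, d_vec):
    return sum(a * b for a, b in zip(q_vec, d_vec))

def sparse_score(q_terms, d_terms):
    return len(set(q_terms) & set(d_terms))

def hybrid_retrieve(query, docs, k=2, w_dense=0.5, w_sparse=0.5):
    """Rank all docs by a weighted dense+sparse score, keep top-k."""
    scored = []
    for doc in docs:
        s = (w_dense * dense_score(query["vec"], doc["vec"])
             + w_sparse * sparse_score(query["terms"], doc["terms"]))
        scored.append((s, doc))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [doc for _, doc in scored[:k]]

def rerank(query, candidates, cross_encoder_score):
    """Re-order the shortlist by a query-document cross-encoder score."""
    return sorted(candidates,
                  key=lambda d: cross_encoder_score(query, d),
                  reverse=True)

docs = [
    {"id": "d1", "vec": [1.0, 0.0], "terms": ["bail", "bns"]},
    {"id": "d2", "vec": [0.0, 1.0], "terms": ["tax"]},
    {"id": "d3", "vec": [0.9, 0.1], "terms": ["bail"]},
]
query = {"vec": [1.0, 0.0], "terms": ["bail", "bns"]}
cands = hybrid_retrieve(query, docs, k=2)
```

The sparse leg catches exact statute and section references that dense embeddings can blur, which matters for legal queries.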
Adversarial training data (fake provisions, trick questions). Citation honesty training. Knowledge-grounded refusal for questions outside the corpus. The model says "I don't know" when it should.
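Knowledge-grounded refusal can be enforced outside the model as well: if retrieval returns nothing confident, decline rather than generate. A minimal sketch of that gate (the threshold and function names are illustrative, not Judicra's actual implementation):

```python
# If the best retrieval score for a query falls below a confidence
# threshold, return a refusal instead of an unsupported answer.

REFUSAL = "I don't know - this question is outside the indexed corpus."

def answer_or_refuse(query, retrieve, generate, min_score=0.35):
    hits = retrieve(query)  # list of (score, passage), best first
    if not hits or hits[0][0] < min_score:
        return REFUSAL
    return generate(query, [passage for _, passage in hits])
```

Combining this gate with refusal behavior trained into the model itself gives two independent layers against hallucination.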
17 custom QA generation pipelines. Each pipeline has domain-specific validation, fact-checking, and cross-referencing. This data doesn't exist anywhere else. Every iteration improves the model.
5M+ judgments across all Indian courts, cleaned, deduplicated, and quality-scored through a 5-phase pipeline. Coverage that took months to build and would take competitors months to replicate.
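Deduplication at this scale typically relies on near-duplicate detection rather than exact matching, since the same judgment appears across sources with minor formatting differences. An illustrative sketch using word-shingle Jaccard similarity (the actual 5-phase pipeline is not shown here; the threshold is an assumption):

```python
# Compare word 3-gram "shingles" with Jaccard similarity and keep only
# the first copy of each near-duplicate cluster.

def shingles(text, n=3):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

def dedupe(docs, threshold=0.8):
    """Keep the first document of each near-duplicate cluster."""
    kept = []
    for doc in docs:
        sig = shingles(doc)
        if all(jaccard(sig, shingles(k)) < threshold for k in kept):
            kept.append(doc)
    return kept
```

A production pipeline would use MinHash/LSH to avoid the pairwise comparison, but the similarity logic is the same.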
General-purpose LLMs fail on Indian legal nuance: new criminal codes, tribunal-specific procedures, state-specific amendments. Our fine-tuned model handles these natively.
Every query improves retrieval quality and reveals gaps in training data. Server-side audit logging captures real practitioner questions for targeted model improvement. The product gets better with every user.
| | Judicra | Incumbent databases | Generic AI (ChatGPT, etc.) |
|---|---|---|---|
| Answer format | Synthesized analysis with citations | Document list (user synthesizes) | Prose answer (often wrong) |
| Indian law depth | Fine-tuned on Indian legal corpus | Comprehensive but search-only | Superficial, hallucinates specifics |
| New codes (BNS/BNSS/BSA) | Trained with crosswalk mappings | Available as raw text | Frequently confuses with old codes |
| Citation reliability | RAG-grounded, adversarial-tested | Real citations (search results) | Fabricates case names and sections |
| Pricing | INR 500-2,000/mo | INR 15,000-50,000+/yr per DB | $20/mo (not specialized) |
| UX | Conversational, instant | 1990s search interface | Good UX, wrong answers |
| Tier | Price | Notes |
|---|---|---|
| Free Trial | Free | Limited queries; conversion funnel to Pro |
| Pro | ~$12/mo | Target: solo practitioners |
| Firm | ~$30/seat | 3+ users |
Unit economics: GPU inference cost ~$0.002/query (serverless scale-to-zero). At INR 999/mo and ~100 queries/user/mo, gross margin exceeds 85%.
Expansion revenue: Firm tier, API access for litigation support tools, report generation credits, specialized tribunal modules.
5M+ judgments scraped, cleaned, deduplicated. 10.3M vectors indexed in Qdrant.
Judicra 1.0 (27B) fine-tuned. 17 QA pipelines, 20K+ training pairs. Eval score: 6.4/10 (with RAG).
judicra.com in private beta. Full-stack: streaming chat, auth, conversation sync, PDF export, feedback loop.
RunPod serverless (scale-to-zero), Cloudflare Pages, Mac Mini backend. <$50/mo running cost.
Target: 9/10. Biggest gaps: citation quality (4.8), new criminal codes (4.7), adversarial robustness. Clear path to improvement with more training data + RAG tuning.
Now - May 2026
Q3 2026 (Jun - Sep)
Q4 2026 - Q1 2027
Key milestone: Eval score from 6.4 to 8+/10 is the unlock. At 8/10, the product is reliable enough for paying practitioners. Training data pipeline and eval suite are already built. The bottleneck is compute for iterative training runs and corpus expansion.
Founder
End-to-end builder across the stack: data pipelines, model training, RAG, backend, frontend, and infrastructure.
aditya@judicra.com
Legal expert (hiring): Training data quality, eval curation, practitioner feedback loop. Someone who practices law and can judge answer quality at an expert level.
GTM lead (hiring): Bar association partnerships, content marketing, community building. Someone who knows how Indian lawyers discover and adopt tools.
Pre-seed round
18-month runway to paid product with 5,000 users
| Use of funds | Allocation |
|---|---|
| GPU compute (training + inference) | 60% |
| Team (part-time legal expert + GTM) | 20% |
| Infrastructure + tools | 10% |
| Go-to-market spend | 10% |
Milestone for next round: 5,000 paying users, INR 50L+ MRR, eval score 9/10, Firm tier live.
The infrastructure is built. The model works. Now it's time to scale.
Try it
judicra.com
Contact
aditya@judicra.com