Human Hallucination > LLM Hallucination. The real reason most GenAI products fail. This is Part II of my blog series: Building Agentic AI That Actually Works. Read part I here. Sep 26
How to build agentic AI that actually works (part I). There’s a lot of buzz around agentic AI right now — systems that don’t just respond, but act. But how many such products are we actually… Jul 30
Why we’re building AI agents. We stopped all active product development when ChatGPT launched. For the first few days our (combined) jaws dropped at what it was capable… Feb 29, 2024
Turning 40. Bootstrapping for 5 years, and 15 years of marriage. I turned 40 last month, have been running a startup for 5 years, and have been married for 15. Thought this was a good time to spew some gyan on turning… Apr 28, 2020
Automated reviews for clean code — private beta launch. We are building a product that reviews code the way a developer would, for the parameters we’ve seen matter most in the real world… Apr 27, 2020
Hiring Customer Support Associate for Geektrust. We’re hiring a customer happiness associate to be part of Geektrust! Feb 5, 2020
How we automated code evaluation for clean code. “What? Automating for clean code?” followed by a look of skepticism is what happens when we talk about building Codu. Dec 10, 2019
Automated code reviews. For clean code. We’re building an ML product that’ll review for clean code. Dec 6, 2019