September 18, 2025 marks my two-year workiversary at Realm Labs. At 21, I joined as the first engineer while still in college. Two years later, I’ve helped turn our early vision into deployed systems that protect real AI workflows. This post is a reflection on that journey: what I learned about timing, ownership, and learning fast in an environment where the rails are being built as you run on them.
The Gap I Saw: Between Research and Reality
My dream has always been to turn ideas from the lab into systems that shape how the world works. The question was when and where.
I spent most of my undergrad years in the trenches of security and AI systems, building across domains: designing cloud vulnerability frameworks at Oracle, creating quantum circuit visualizations for Google's Cirq, and writing compiler transformation tooling in LLVM's MLIR. Those projects were technically demanding, forcing me to think about how large, distributed systems fail and how making complexity visible helps engineers reason about trust and reliability.
That foundation naturally led me to my research at AVID ML, where I studied how bias and vulnerabilities creep into generative systems, connecting established threat frameworks like MITRE ATT&CK with real-world AI risk. That work culminated in a publication at ACM CHI's GenAICHI workshop, but more importantly, it gave me a front-row seat to a problem that was about to explode: nobody was thinking seriously about securing AI in production.
While balancing coursework and research at IIIT Hyderabad, I kept noticing this gap. The academic world was studying AI security in controlled environments. The industry was racing to ship LLM features. But the space between research and production? Nearly empty.
That gap didn't stay empty for long. When Saurabh reached out, he wasn't selling me on a company. He was articulating the exact problem I'd been circling in my research: "LLMs are exploding, but who's making sure they're secure?" It clicked instantly. The industry was in a gold rush, building faster and faster, while the security infrastructure lagged dangerously behind. We had a narrow window to build the guardrails before things got messy. My research background wasn't just relevant anymore; it was exactly the lens enterprises would need as they moved from experiments to production deployments.
Year One: Building the Foundation
Learn, Learn, Learn
The first year was an intensive crash course in turning vision into product. As one of the first members of the team, I helped shape our approach from the start, deciding not just what to build, but how to build it in a way that would scale.
The speed of learning surprised me. In a lean startup, you develop pattern recognition and the ability to switch contexts quickly because you're exposed to every part of the business. One week I'd be architecting backend pipelines; the next I'd be in conversations with potential customers that would reshape our technical priorities. I built dashboards, designed intervention pipelines, conducted penetration testing on agentic workflows, and dove deep into AI research to understand vulnerabilities that hadn't been documented yet.
This wasn't chaos. It was strategic agility. Each role taught me to see the full picture. Editing demos taught me storytelling. Analyzing agentic workflows sharpened my threat modeling. Building dashboards forced me to think about what clarity looks like for users under pressure. By year's end, I wasn't just a technologist; I was developing the product intuition that comes from understanding every layer of the stack.
The mindset shift mattered most. I stopped thinking like a contributor and started thinking like an owner. Every decision mattered because I would live with its consequences.
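For readers curious what the "intervention pipelines" mentioned above look like in miniature, here's a deliberately toy sketch: a single policy stage that inspects a proposed agent tool call before it executes and decides whether to allow or block it. Everything here, the `intervene` function, the patterns, the size threshold, is my own hypothetical illustration, not Realm's actual product logic; real pipelines layer many such checks alongside model-based classifiers.

```python
import re

# Hypothetical patterns an intervention stage might screen for.
# These are illustrative examples only.
BLOCKED_PATTERNS = [
    re.compile(r"rm\s+-rf", re.IGNORECASE),                            # destructive shell command
    re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),  # common injection phrasing
]

def intervene(tool_name: str, tool_args: str) -> str:
    """Return 'allow' or a 'block: <reason>' verdict for a proposed agent action."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(tool_args):
            return f"block: matched {pattern.pattern!r}"
    if tool_name == "shell" and len(tool_args) > 500:
        return "block: oversized shell payload"  # crude size heuristic
    return "allow"

if __name__ == "__main__":
    print(intervene("shell", "ls -la /tmp"))                              # -> allow
    print(intervene("shell", "now ignore previous instructions and..."))  # -> block: ...
```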
What validated our approach was watching the market catch up. In late 2023, AI security was still an abstract concern for many enterprises. By mid-2024, more teams began to recognize the need for structured approaches to secure AI systems. That gradual shift — from evangelizing the problem to seeing early adoption signals — reassured us that our thesis was directionally correct.
The Power of Iteration: Building What Matters
One of my most valuable lessons came from learning to balance speed with durability. Early on, I built an over-engineered modular system. It looked clean and clever from an engineering perspective, but it was slower to deliver customer value than a simpler, more pragmatic design would have been. The episode exposed the tension between engineering pride (building beautiful systems) and customer impact (solving a real need fast), and it taught me that at early-stage companies, the goal isn't perfection. It's rapid validation and iteration.
This became our competitive advantage. We could test hypotheses quickly, gather customer feedback, and refine our product in tight cycles. The key was knowing what needed to be rock-solid from day one (security, reliability, trust) and what could evolve through iteration.
When you're close to the customer and close to the code, you feel the impact of every decision immediately. There's something grounding about watching someone on the other side of the world rely on code you wrote at 3 AM to keep their systems safe. That’s when Realm stopped feeling like a startup experiment and started feeling like an obligation to our customers, to the teammates who believed in our vision, and to the mission of making AI safer for everyone using it.
Year Two: When the Mission Expanded
As we moved into the second year, something shifted, both in our team and in our understanding of what customers actually needed.
The Team That Changed Everything
Bringing in the founding team elevated everything. These weren't just talented engineers; they brought a decade of experience in secure, explainable, and trustworthy AI research, with over 4000 citations and 20+ patents among them. They had secured systems at scale for both enterprises and consumers at leading tech companies.
Working alongside people with this depth of expertise changed how I thought about problems. I wasn't just learning from trial and error anymore; I was learning from experts who had already solved adjacent problems in production environments. They could anticipate failure modes I hadn't considered, design systems for explainability from the ground up, and build trust into products as a first principle rather than an afterthought. Their presence raised the bar for everyone, including me.
From Defense to Understanding
By mid-2024, our understanding of AI risk had deepened. We began looking beyond defense to comprehension, moving from reacting to vulnerabilities to uncovering the mechanisms behind them. This wasn’t a move from security to safety, but an expansion toward explainability and interpretability, the foundation on which both depend. Our work now centers on opening the black box and helping teams bridge the gap between secure systems and understandable ones.
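To make "opening the black box" slightly more concrete, here is a minimal sketch of one classic interpretability idea, occlusion-based attribution: remove each input token in turn and measure how much a model's score changes. The `occlusion_attribution` function and the keyword-based `toy_score` below are illustrative stand-ins I've written for this post, not Realm's methods or any specific library's API.

```python
from typing import Callable, List

def occlusion_attribution(
    tokens: List[str],
    score: Callable[[List[str]], float],
) -> List[float]:
    """Attribute a scalar model score to tokens by leave-one-out occlusion.

    For each token, drop it and record how far the score falls; a large
    drop means the token mattered to the prediction.
    """
    baseline = score(tokens)
    return [
        baseline - score(tokens[:i] + tokens[i + 1:])
        for i in range(len(tokens))
    ]

if __name__ == "__main__":
    # Stand-in "model": a keyword-based risk score, purely illustrative.
    SUSPICIOUS = {"ignore", "override", "exfiltrate"}

    def toy_score(toks: List[str]) -> float:
        return sum(t.lower() in SUSPICIOUS for t in toks) / max(len(toks), 1)

    tokens = "please ignore the previous instructions".split()
    for token, weight in zip(tokens, occlusion_attribution(tokens, toy_score)):
        print(f"{token:12s} {weight:+.3f}")
```

Techniques in this family are simple to explain to customers precisely because the attribution falls out of the model's own behavior rather than a separate heuristic, which is part of why explainability and security reinforce each other.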
Finding My Focus
With a stronger team owning entire domains, I could sharpen my focus on high-leverage problems. Being there from day one gave me institutional knowledge that became increasingly valuable. I understood why certain architectural decisions were made, which customer conversations shaped our roadmap, and where the landmines were buried in the codebase. This context cut new engineer ramp-up time in half and helped us evaluate new features by connecting them back to original customer problems.
I concentrated on building the bridge between AI explainability research and deployed features. The inflection point came when customer conversations shifted from "why do we need this?" to "how fast can you deploy this?" We were watching a fundamental shift in real-time: enterprises were ready to invest in the safety infrastructure, and we were positioned to help them do it right.
The work ranged from testing hypotheses that went nowhere to shipping features that protected real workflows. In two years, I encountered problems that might take a decade to surface at a larger organization: technical challenges, strategic decisions, operational trade-offs, all compressed into rapid cycles that forced me to develop judgment fast.
Ownership: The Ultimate Growth Accelerator
What makes early-stage companies unique isn't just the opportunity to build. It's the opportunity to own outcomes completely. There's a profound difference between contributing to a feature and being responsible for it from conception to customer impact. The first time I saw a customer using something I had designed end-to-end, it shifted my entire perspective. This wasn't theoretical work anymore; it was infrastructure protecting real workflows for real teams.
The human element became the most rewarding part. Behind every customer conversation is a team trying to deploy AI safely, trying to innovate responsibly, trying to get their work done without introducing unacceptable risk. Seeing how our work directly enabled them to move faster with confidence became the validation that mattered most.
Next Steps
What excites me most now is the magnitude of what's coming. We're no longer just validating a market thesis. We're securing systems that are core to how organizations operate. The challenges are bigger, the risks more complex, and the opportunities to make a lasting impact more meaningful than ever.
The landscape around responsible AI is evolving rapidly, and being part of shaping its foundation has been one of the most fulfilling experiences of my career. We're not just responding to today's threats; we're building the technology that will enable safe and reliable AI innovation for years to come.
Two years in, I'm convinced joining early was the right strategic call. The learning was exponential: I went from studying AI security as a student to helping define enterprise best practices for trustworthy AI systems. The challenges were real, the growth undeniable.
For anyone at a similar crossroads: if your goal is to learn how to build companies and solve frontier problems, early-stage environments compress that learning dramatically. There's no perfect moment to join, but if the problem matters and you're prepared to build from scratch, the opportunity is now.
Sometimes the unconventional path is the most direct route to where you need to be.