User Research, Observability, and Institutional Navigation: The Vegetables Before AI Dessert

The past two years have seen everyone rushing to add "AI" to their product. But here’s the thing: no matter how fancy your LLM demo looks, you still have to eat your engineering vegetables. User research, observability, and institutional navigation are three of the most nutritious. Skip them, and you're face-first in deep-fried ML pie. Delicious, sure. But it'll kill you in the long run.
I've always been a product engineer first. I build things for people. And, though I love using AI in all manner of flavors (I keep waiting for Gradient Boosted Decision Trees to have their Pedro Pascal moment, where everyone realizes they're quietly cool, and then they show up everywhere for the next decade), I'm noticing that AI actually makes some of the core vegetables of engineering more important than ever.
You don't get to avoid these pieces by "using AI." They're timeless, they'll always be important, and they are still very much tied to humans. So, I'm here to remind you to be a healthy human engineer and eat your vegetables.
User Research - LLMs Will Not Find You Product-Market Fit (By Themselves)
Were you surprised at the backlash to "AI" in a product feature label? Neither was I. There was just so much "the board is telling us to explore this AI thing, it's gonna be big... hey product owner, do an AI thing" over the past two years.
Tired: "We added a chatbox with RAG. Clap for us."
Wired: "Claude prototyped a hunch we had. We showed five users, it raised 20 new questions, and now we have three more demos in-flight."
This "wired" case is the trend of real AI usage I'm seeing as a product engineer, and it is worthy of hype. It's using AI not just as the feature, but to figure out the things we're paid as humans to figure out such as user insights, the real leading indicators of traction. User research is like broccoli. You can prepare it quickly, throw some furikake on it, and eat this all the time.
Observability - My New Favorite Beta-Carotene
Observability (the cool kids say o11y, which, sorry, just makes me think of 1990s MTV sock puppets) is carrots for your eyes. Not fancy, but it lets you actually see what's going on. Add to that, it pairs beautifully with the dishes of reliability AND user research.

Observability is a simple concept, and one of those things where you wonder why we didn't do this before. What if, instead of only monitoring the things you think might break, you had traces all the way through your systems, and at many points along the way you could dump GIANT AMOUNTS of context, so that when something goes wrong it's a heck of a lot easier to debug? Oh, is that expensive? Nope. Our friends at Honeycomb.io have a service designed not to charge you up the wazoo for dumping the kitchen sink at points along the way.
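What does "dumping the kitchen sink" actually look like? Here's a minimal sketch using the OpenTelemetry Python API, which is one common way to get this kind of data into Honeycomb. The service name, attribute keys, and the process_payment helper are all made up for illustration, and exporter setup is omitted:

```python
# A minimal sketch of a "wide event": one span per unit of work,
# loaded with all the context you might want to query at 3am.
# Assumes the OpenTelemetry SDK is configured elsewhere to export
# spans somewhere useful (e.g. Honeycomb); the names below are hypothetical.
from opentelemetry import trace

tracer = trace.get_tracer("checkout-service")

def checkout(cart, user):
    with tracer.start_as_current_span("checkout") as span:
        # Dump the kitchen sink: IDs, sizes, flags, anything you might
        # later want to slice and dice when things go sideways.
        span.set_attribute("user.id", user.id)
        span.set_attribute("user.plan", user.plan)
        span.set_attribute("cart.item_count", len(cart.items))
        span.set_attribute("cart.total_cents", cart.total_cents)
        span.set_attribute("feature_flag.new_pricing", user.flags.get("new_pricing", False))
        try:
            result = process_payment(cart, user)  # hypothetical helper
            span.set_attribute("payment.status", result.status)
            return result
        except Exception as exc:
            # The failure shows up attached to all that context above.
            span.record_exception(exc)
            raise
```

The point isn't the specific attributes; it's that every request carries enough context that a 3am query can actually answer "what was different about the ones that failed?"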
So with a feature flag, you can toss something onto production, and if it breaks at 3am, you can roll it back before the SRE screaming commences. But most importantly, you and Claude can query those kitchen sinks at 3:05am and figure out that, oh yes, an off-by-one error led to an n-squared problem. Oh look, Claude has the PR up already. Although some may proclaim, "Test in prod," maybe don't merge that Claude-produced PR at 3:06am... just keep that feature flag off and look again in the morning.
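The feature-flag half of that story is as simple as it sounds. A rough sketch, with a made-up flags client standing in for whatever flag service you actually use:

```python
# Hypothetical feature-flag guard; "flags" and "new_checkout_path" are
# stand-ins for your real flag service and flag name.
def handle_checkout(request, flags):
    if flags.is_enabled("new_checkout_path", user=request.user):
        return new_checkout(request)  # the shiny, 3am-risky path
    return old_checkout(request)      # the boring path you can fall back to
```

Rolling back at 3am is then a config change (flip the flag off), not a redeploy, and the traces you collected tell you exactly what the flag was doing before you turned it off.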
One more point. In every AI contract proposal I've looked at recently, the first thought I have is, "Is there an observability section?" It is literally my "Did you add vegetables to this meal?" question. If you want AI to help write code and solve more problems, it needs context. Context that is queryable, sorted, and nicely julienned with a sesame-ginger sauce. Then the AI magic can start to happen.
Institutional Navigation - It's Just a Squishy System
Whaaat? MIT says 95% of AI initiatives fail? You don't say. McKinsey has been saying something similar. But with the deep learning advances of the mid-2010s, a whole lot became possible. Basically, neural nets went from cool academic toy to "Whoa."

So why are AI pilots failing in organizations? If you have a business oriented around a people/automation process, and now you need a people/AI/automation process, that involves reshaping the business. Not an AI pilot. Not a chatbot. And 100% AI for most business processes is next to impossible... we need AI, and AI needs us for the foreseeable future.
But this is a known, exceedingly hard problem. To paraphrase how one friend aptly put it when I became rather loquacious describing why AI pilots don't work, "TL;DR: Organizational change is hard. No $H!7." And that is exactly what we're fighting here.
Did you know that engineers are supposed to be systems thinkers? Did you know that the systems you need to understand as an engineer extend to the institution you are a part of? And yes, your higher-ups are the ones with more say and responsibility, but it's going to save you a lot of heartache and disappointment if you can analyze those systems as well and understand what is possible from an institutional standpoint. Institutions need vegetables, too. Sometimes you need to tell your parents to eat healthier.
So, yes, TL;DR: current AI is never going to solve your organizational problems. That is firmly in squishy meat-bag space. But this is going to be more and more a part of an engineer's "managing up" toolkit. If you can clearly state to your boss and your boss's boss the institutional pressures affecting your work, you're more likely to get them addressed. Easy? Hell no. Necessary? It's increasingly part of the job of engineering: that vibe-coded script with a few ML calls you wrote in a night may solve a huge business problem, but you might not have thought about the new headaches it creates for some very important stakeholders.
A Plate Full of Engineering Vegetables to Offset the Fried

When a non-technical friend puts their hand on my shoulder and asks if I'm worried about AI taking my job, I say, "Oh far from it! I'm going to have WAY more work in the next decade. It's about to get weird!" My job is changing, sure, but there are core principles I care about that won't change. Vegetables are still the path to a healthy diet. And caring about user research, observability, and institutional awareness still makes for healthy products.
I've always adored a blog post from a decade ago that talked about machine learning as the great deep fryer of data. I don't think Maciej Cegłowski knew just how prescient that would be. I love tempura, deep-fried Oreos, and chicken-fried steak as much as the next person, but my family still has to remind me that I shouldn't eat them all the time. My LDL was a little high last time, and if I make my kids eat vegetables, I should too.