I spent the first quarter of 2026 interviewing for Staff+ SRE and platform roles, from recruiter screens through full panels and executive rounds. Across all of it, even at companies building AI infrastructure, I was surprised by how rarely anyone asked how I use, or would use, LLMs to do my job.
The assumption layer
I went into the process assuming everyone was using LLMs, on both sides of the table, and what I found was that nobody was really saying so out loud.
On the candidate side, I used LLMs for essentially every part of the preparation. The resume refresh was first. My CV had not been seriously updated in years, so I sat down with an LLM in a long interview-and-dictate loop: talk through a role, have it ask clarifying questions, tighten the wording, and cut anything more than five to ten years old.
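For the curious, that loop ran off a standing prompt along these lines. This is a reconstruction of the shape, not the exact wording I used:

    You are helping me rewrite my CV, one role at a time.
    For each role I describe, interview me first: ask clarifying
    questions about scope, impact, and numbers before drafting
    anything. Then write tight bullets in my voice, and flag
    anything more than five to ten years old as a candidate to cut.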
I built a job-prep skill of my own. The personal part of it was what mattered. It knew the level I was targeting (Senior Staff / Principal IC), knew what that level actually means in terms of strategic and technical scope, and knew what my real background could and could not support.
Per opportunity, I went deeper: company trajectory, market position, competitors, and recent news. I reviewed their values, what their interview loop usually looks like, what kinds of STAR stories I needed ready, and where my experience tied to what they actually needed.
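To make that concrete, here is roughly the shape of the skill. I am sketching it in the style of a Claude skill file, and the layout and every specific below are illustrative rather than a verbatim copy of mine:

    name: job-prep
    description: Interview prep for Senior Staff / Principal IC roles

    Target level: Senior Staff / Principal IC
      - Strategic scope: sets direction across teams and quarters,
        influences without authority.
      - Technical scope: deep in SRE and platform; honest about the
        edges my background does not support.

    Per-opportunity research:
      - Trajectory, market position, competitors, recent news.
      - Stated values and the usual shape of their interview loop.
      - STAR stories mapped to what they actually need.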
I never used any LLM as a prop during interviews. That was prep only.
On the company side, the signals were harder to read. AI notetakers showed up in almost every Zoom, Meet, and Teams call, transcribing and summarising in the background. That is increasingly just a default feature of meeting tools rather than evidence of anything in particular. Beyond that, I genuinely don't know how much LLM assistance was in the loop on their side, and nobody volunteered it.
So the shape of the exchange was: both sides heavily LLM-assisted, neither side really talking about it.
The live rule
In every loop I went through, the expectation was that I would not use any kind of AI during the interview itself. That is a reasonable rule and I agree with it. Using an LLM as a live prop mid-interview would feel like cheating, and it wouldn't measure anything useful.
What I found more interesting was what did not happen. I was not asked to walk through an LLM-assisted workflow. Nobody asked to see my tool setup. Nobody asked how I decide when to reach for a model and when not to.
As I wrote in The Harness is the Product, the work is no longer typing; it is operating the loop. Yet the interview format of coding screens and system design rounds has not shifted much in the past few years. We spent 45+ minutes on failure modes and never discussed how those decisions change when some of the code is generated.
To be fair, interviews are designed for consistency and comparability, which makes rapid change difficult. Still, the gap was noticeable.
What they were actually testing
The theme that did come up, repeatedly, at this level was judgment.
Judgment across business and technical trade-offs. Judgment about when to push back on a roadmap. Judgment about what to invest in operationally and what to let slide. Judgment about how to grow a team without being a manager. All IC-shaped, all appropriate for the level.
That is the part that connects to AI, even though the interviewers did not usually frame it that way.
Coding in early 2026 is not mostly typing; it is operating an LLM coding harness: reviewing what it produces, deciding what to keep, what to rewrite, and when not to generate anything at all.
The machine produces working code.
The human decides whether it should exist.
That's the job now.
If, as I've argued, code is becoming the "new assembler," then asking a Staff Engineer to hand-write boilerplate is like asking a structural engineer to do load calculations without software. It's not that we can't; it's just not where the value is. The valuable skill is taste: architectural taste, reliability taste, and a sense for where a generated thing falls short.
The junior engineer gap
The place I did push this into the conversation was around mentoring.
The traditional path from junior to experienced engineer ran through writing a lot of code, breaking things, and slowly developing taste. If much of that code is now generated, then the junior engineer's job shifts toward reviewing and steering model output. They are being asked to provide judgment, the very thing they have the least scaffolding for.
You cannot judge generated code well if you have never written and debugged enough code yourself to know what "good" looks like. You cannot review a system design if you have never watched one fail in production. You cannot decide when the model is confidently wrong if your own confidence is itself based on model output.
This is not going to fix itself. Senior engineers will need to step in.
That means real mentoring: shadowing, pairing, and deliberately putting juniors in front of problems without the harness. We have to make them articulate why a design is good or bad before the model does it for them.
None of the interview loops probed this directly either, but it was a theme I kept pulling into the conversation, because I think it is one of the real Staff+ problems of the next few years.
Thoughts
A lot of stages, a lot of companies, a lot of conversations, and the most AI-shaped thing about any of it was that it rarely surfaced explicitly.
From an SRE perspective, silence can be a signal. Either the problem space is moving faster than interview loops can adapt, or this is just the timeless gap between how people work and how they interview.
Either way, the bar did not move. It was already there. What has changed is that the parts of the job that used to be hidden under a lot of typing are now the only parts left worth testing for, and the interview format has not quite caught up.