From Secret Weapon to Standard Practice: Scaling AI in Engineering with Confidence

Christopher Carroll | February 15, 2026
Podcast interview with Martin on AI in engineering.

There was a time, not so long ago, when using artificial intelligence to do your job was something you kept quiet about. It felt like cheating. If you used an LLM to draft a technical document or debug a script, you likely scrubbed the evidence before hitting send.

But the tide has turned. As Martin Hempstock, VP of Platform Engineering at Markup AI, recently shared on the Markup AI Podcast, the stigma has vanished. In fact, it has flipped entirely. “There became this sort of flip… where people were like bragging about it,” Martin says. “And you were sort of almost stupid if you weren’t using AI.”

Key takeaways

The conversation between Chris and Martin offers a roadmap for any organization looking to mature its AI strategy. Here are the core lessons to take back to your teams:

  • Move from shame to fame: Stop hiding AI usage. Encourage your teams to share their prompts and workflows. Innovation happens when we iterate on each other’s successes.
  • Guardrails are non-negotiable: Blind trust in AI leads to embarrassing mistakes and reputational damage. You need systems that automatically scan and score output against your specific business rules.
  • Prepare for the Agentic Web: Your future customer might be a bot. Start thinking now about how your content and data are structured for machine retrieval.
  • Upskill, don’t just equip: Giving a junior employee a powerful AI tool without training is dangerous. Invest in mentorship to ensure they understand the fundamentals of what the AI is producing.
  • Look beyond traditional ROI: Measure success in velocity, quality consistency, and risk reduction, not just immediate headcount reduction or P&L impact.

In a wide-ranging conversation with our host, Chris, Martin dove deep into the realities of deploying AI across a technical organization. From the personal impact of AI in engineering and on neurodivergent professionals to the looming rise of “agentic SEO,” the discussion highlighted a critical truth: AI is no longer just a tool for generation; it’s the infrastructure of modern work.

However, with ubiquitous adoption comes new risk. Blind trust in LLMs, the “junior developer dilemma,” and the challenge of maintaining brand identity in an automated world are hurdles every leader must face.

Here’s a deep dive into the insights from our latest episode, exploring how engineering leaders are rewriting the rules of productivity and why guardrails are the key to scaling with confidence.

The personal impact: Acceleration and accessibility

For Martin, the value of AI isn’t just theoretical business ROI; it’s deeply personal. He opened the interview by sharing his experience as a professional with dyslexia. For years, getting thoughts down on paper was a struggle — a friction point between a brilliant engineering mind and the written word.

“I’ve literally struggled with writing,” Martin admits. “And as soon as I tried ChatGPT… took some thoughts in there and it came out with something concise that I could never possibly write, [it] was absolutely amazing.”

This is a microcosm of what AI achieves at the enterprise level. It removes friction. It allows professionals to bypass their weaknesses and double down on their strengths. Martin now uses a locally running Whisper model to dictate thoughts, which are then processed by a higher-tier model to structure the narrative, and finally polished for tone.
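That dictate-structure-polish workflow can be sketched as a simple three-stage pipeline. The stage bodies below are illustrative placeholders, not Martin's actual setup: in practice, the first stage would run a local Whisper model on audio, and the later stages would call an LLM.

```python
# Toy three-stage pipeline mirroring a dictate -> structure -> polish
# workflow. Each stage is a stand-in for a real model call.

def transcribe(audio_notes: str) -> str:
    # Placeholder: a real version would run speech-to-text
    # (e.g. a locally running Whisper model) on an audio file.
    return audio_notes.strip()

def structure(raw: str) -> str:
    # Placeholder: a real version would ask a stronger model to
    # organize the raw transcript into a coherent narrative.
    sentences = [s.strip() for s in raw.split(".") if s.strip()]
    return "\n".join(f"- {s}" for s in sentences)

def polish(draft: str) -> str:
    # Placeholder: a real version would adjust tone against a style guide.
    return draft.replace("gonna", "going to")

def dictation_pipeline(audio_notes: str) -> str:
    return polish(structure(transcribe(audio_notes)))
```

The point of the pipeline shape is separation of concerns: each stage can be swapped out (a different speech model, a different tone guide) without touching the others.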

The takeaway for business leaders is clear: AI in engineering isn’t just about output volume; it’s about accessibility. It unlocks the potential of team members who might otherwise be held back by the mechanics of execution, allowing them to focus on strategy and ideation.

The AI in engineering evolution: From writers to architects

One of the most persistent fears regarding AI in software development is the creation of “technical debt” — messy, unmaintainable code generated by bots and pasted into production by humans who don’t understand it.

Martin offers a counter-narrative. In a professional engineering environment, AI isn’t lowering the bar; it’s raising the floor.

“If I look here before we really started using generative AI for coding,” Martin explains, “you’d create a really short, simple script to do one task and it’d just sit there. It’d hardly have any documentation.”

Today, that same “hacky script” generated with the aid of tools like Cursor comes with comprehensive documentation, unit tests, and a robust structure. The AI ensures that even minor utilities meet enterprise standards. The role of the developer is shifting from writing lines of code to architecting solutions.

“The shift really is… everyone’s sort of an architect with about three junior developers underneath them,” Martin notes, referring to the AI agents that assist each engineer.

The junior developer dilemma

However, this shift introduces a specific challenge for those early in their careers. In the past, copying code from Stack Overflow without understanding it was a cardinal sin. Today, generating code with an LLM leads to the same trap if the human in the loop lacks discernment.

“It’s easier to be bad now,” Martin cautions. “You can produce… something really bad and not understand it, and it works.”

The solution isn’t to ban the tools but to change the training. Junior developers must be mentored to understand the why and how of the code they generate. They need to treat the AI not as an oracle, but as a subordinate that requires supervision. This reinforces the Markup AI philosophy: Speak boldly, build trust. You can’t trust your infrastructure if you don’t understand the foundation it’s built on.

The agentic future: Who optimizes for the bots?

Perhaps the most forward-looking segment of the discussion revolved around the future of web experiences and the rise of “Agentic AI.”

We are moving toward a world where humans are less involved in the transactional layer of the internet. Instead of browsing a travel site to book a flight to Amsterdam, you’ll simply tell your personal agent, “Book me a trip to Amsterdam,” and the agent will handle the browsing, comparison, and purchasing.

This fundamentally breaks the traditional SEO and marketing model.

“The web didn’t used to exist… then companies were like, ‘How do I get my company at the top of the search results?’ So SEO was born,” Martin reflects. “We’re going to have this equivalent with AI agents. But if it’s the AI agents selecting what to do… how do you get your company at the top of that?”

This is the dawn of Agent Experience Optimization (AEO) or Generative Engine Optimization (GEO).

If an AI agent is booking a flight, it doesn’t care about your banner ads. It doesn’t care about your emotional brand storytelling on the landing page. It cares about data structure, API accessibility, and trust scores.

Martin predicts a future where parts of the web are built specifically for agents — machine-readable, optimized for data retrieval, and stripped of human-centric UI. For marketing and content teams, this means a bifurcated strategy: Creating emotive experiences for humans and structured, authoritative data for the Content Guardian Agents℠ and external bots that serve them.
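One building block for this machine-readable layer already exists: schema.org structured data. A travel listing aimed at agent retrieval might be marked up like the hypothetical JSON-LD fragment below (all values are illustrative):

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Amsterdam City Break",
  "description": "Three-night trip including return flights.",
  "offers": {
    "@type": "Offer",
    "price": "499.00",
    "priceCurrency": "EUR",
    "availability": "https://schema.org/InStock"
  }
}
```

An agent comparing offers can parse fields like `price` and `availability` directly, with no need to scrape a human-centric page.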

The necessity of guardrails in a generative world

As organizations rush to deploy these technologies, the risk of “blind trust” grows. Martin shared a revealing anecdote about a cold email he received. The email was pitching an AI solution to guardrail brand content — a service Markup AI already provides.

“It was clearly automated… they’d seen my title… but it hadn’t done enough research,” Martin laughs. The AI had hallucinated a need or failed to check the competitive landscape, resulting in an embarrassing, reputation-damaging outreach.

This illustrates the critical need for Content Guardian Agents.

Generative AI is inherently generic. It’s trained on the average of the internet. It does not know your specific brand voice, your compliance mandates, or your strategic positioning unless you force it to.

“There’s always gonna have to be a place for like a central repository that knows the rules,” Martin argues.

When you rely on ungoverned AI to generate content — whether it’s code, marketing copy, or cold emails — you risk:

  • Hallucinations: Fabricating facts or products.
  • Tone misalignment: Sounding robotic or overly aggressive.
  • Compliance failures: Violating GDPR or industry regulations.

To scale AI confidently, you must integrate scanning, scoring, and rewriting into the workflow.

Scan, score, rewrite

At Markup AI, we believe in a workflow that treats AI guardrails as enablers, not blockers.

  1. Scan: As content is generated, it must be instantly analyzed against your specific criteria.
  2. Score: The asset is given an objective quality score. Is it on-brand? Is it compliant? Is it accurate?
  3. Rewrite: If the score is low, the system should automatically rewrite the content to meet the standard before a human ever has to fix it.
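The three steps above can be sketched as a simple loop. The rule set and scoring logic here are illustrative stand-ins, not Markup AI's actual API:

```python
# Minimal sketch of a scan -> score -> rewrite guardrail loop.
# BANNED_TERMS and PREFERRED are hypothetical brand rules.

BANNED_TERMS = {"cheap", "guaranteed"}
PREFERRED = {"cheap": "cost-effective", "guaranteed": "reliable"}

def scan(text: str) -> list[str]:
    """Return the rule violations found in the text."""
    return [t for t in BANNED_TERMS if t in text.lower()]

def score(text: str) -> float:
    """Score 1.0 for clean text, deducting 0.25 per violation."""
    return max(0.0, 1.0 - 0.25 * len(scan(text)))

def rewrite(text: str) -> str:
    """Replace off-brand terms with preferred alternatives."""
    for bad, good in PREFERRED.items():
        text = text.replace(bad, good)
    return text

def guardrail(text: str, threshold: float = 0.9) -> str:
    """Pass clean content through; rewrite anything below threshold."""
    return text if score(text) >= threshold else rewrite(text)
```

A real system would score against richer criteria (tone, terminology, compliance) and rewrite with a model rather than string substitution, but the control flow is the same: only content that fails the scan gets touched.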

Martin emphasizes that this applies to code as much as copy. “When AI gets things wrong and it’s not guarded… that can cause a huge amount of reputational loss.”

Redefining ROI: Beyond the P&L

One of the more contentious topics in the industry is the perceived lack of ROI from AI deployments. Chris cited a recent report suggesting a high percentage of AI projects fail to show returns. Martin disputes this, suggesting the issue lies in how we measure success.

“If the cost is really high… [but] you’re getting an incredible amount of value… it’s a good thing,” Martin says.

ROI calculations often look strictly at direct profit and loss. They fail to capture the “billboard effect” or the efficiency gains that don’t immediately show up on a spreadsheet.

Consider the developer who uses AI to document their code. That documentation might save a different engineer five hours of debugging six months from now. That saving doesn’t appear on this quarter’s P&L, but it’s a massive organizational win.

Similarly, if a content team uses Markup AI to ensure every piece of content is legally compliant, the ROI isn’t just in faster writing — it’s in the absence of legal fines and the preservation of brand trust. How do you calculate the ROI of a crisis that didn’t happen?

What AI can’t replace (yet)

Despite his enthusiastic adoption of AI for everything from coding to grocery shopping, Martin acknowledges there’s a “human spark” that remains elusive.

“You can look at something and instinctively, you know, that just feels right,” Martin says. “AI doesn’t feel.”

This gut instinct — the ability to look at a campaign and know it will resonate emotionally, or to look at a user interface and know it will delight — is still the domain of the human professional. AI can simulate emotion based on patterns, but it can’t empathize.

This reinforces the role of Markup AI not as a replacement for human creativity, but as a safety net for it. By automating the objective rules (grammar, terminology, and compliance), we free up the human to focus on the subjective magic (emotion, strategy, and connection).

Scale with confidence

The future Martin describes — where home automation orders our groceries and developers operate as architects of AI swarms — is not far off. To get there safely, we need to build trust in our systems today.

Markup AI enforces content guardrails, accelerates your workflow, and protects your brand — so you scale AI confidently. Whether you are generating code, marketing copy, or technical documentation, Content Guardian Agents are ready to help you move faster without breaking things.

Ready to integrate guardrails into your workflow? Sign up for access to Markup AI today and start building a future you can trust.


Frequently Asked Questions (FAQs)

Will using AI to generate code create technical debt?

It can, if used improperly. However, as Martin noted, when used correctly by skilled engineers, AI actually reduces debt by automatically generating documentation, unit tests, and standardized structures that might otherwise be skipped. The key is oversight and understanding the code being committed.

How do we maintain our brand voice when using generic LLMs?

Generic models are trained on the entire internet, not your brand guidelines. You need a layer of governance or guardrails that sits between the model and the output. Markup AI allows you to define your specific terminology, tone, and compliance rules, ensuring AI output is rewritten to sound like you.

What is the risk of “blind trust” in AI?

Blind trust leads to hallucinations (made-up facts) and context failures (like the cold email pitching a competitor’s product to the competitor). Automated scanning and scoring processes prevent these errors from reaching the public.

Can AI replace the need for human review entirely?

Not yet, and perhaps not ever for high-stakes content. AI lacks “gut feeling” and emotional resonance. The goal of Markup AI is to handle the objective quality checks (grammar, style, and compliance) so humans can focus on the subjective, creative, and strategic review.

Last updated: February 15, 2026

Christopher Carroll is a Product Marketing Director at Markup AI. With over 15 years of B2B enterprise marketing experience, he spends his time helping product and sales leaders build compelling stories for their audiences. He is an avid video content creator and visual storyteller.
