From Zero to AI-First: How We Scaled Internal AI Adoption
Christopher Carroll • February 10, 2026
Key takeaways
- Start with the problem: To boost AI adoption, don’t deploy AI for the sake of AI. Solve a specific pain point, like information silos.
- Prioritize flexibility: Using an interface like LibreChat allows you to switch between models (Claude, GPT-4, Gemini) based on the specific use case.
- Connect your data: LLMs become far more valuable when connected to your internal tools (Jira, Salesforce, Workspace) via MCP.
- Trust is currency: Ensure your internal tools are stable before a full rollout. A bad first impression creates an uphill battle for adoption.
- Make it fun: Gamifying the creation of agents encourages experimentation and helps employees feel comfortable with the technology.
Eighteen months ago, our internal AI usage was limited to a few custom GPTs and individual experimentation. Today, we are a fully-fledged AI-first organization where every employee builds agents to solve their unique challenges.
In the latest episode of the Markup AI podcast, I sat down with Martin Hempstock, our VP of Platform Engineering. Martin was the architect behind this massive cultural and technical shift. We discussed how we moved from information silos to a unified, AI-enabled ecosystem, the power of the Model Context Protocol (MCP), and why “gamification” was the secret weapon for adoption.
Here is how we did it.
Solving the “single view” problem
The initiative didn’t start with a desire to just “use AI.” It started with a specific business problem identified during a company offsite in Ireland. Our teams were struggling with information silos. Customer data, support tickets, and engineering updates lived in different systems, making it nearly impossible to get a single, unified view of a customer.
Martin’s team realized AI could be the bridge. But rather than buying expensive per-user subscriptions for every employee, they took a platform engineering approach.
The tech stack: LibreChat, MCP, Markup AI, and a whole lotta models
Martin and his team deployed LibreChat, an open-source interface that connects to various LLM providers via API. This offered two distinct advantages:
- Cost efficiency: Instead of paying flat fees per user, we pay for API usage. This democratized access across the entire company without bloating the budget.
- Model agnosticism: We aren’t tied to one model. Employees can toggle between OpenAI, Anthropic, and Google models depending on the task—whether they need creative writing or complex code generation.
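In practice, model agnosticism boils down to a routing decision: which provider and model handles which kind of task. Here is a minimal sketch of that idea; the provider names and the task-to-model mapping are illustrative assumptions, not our actual LibreChat configuration.

```python
# Illustrative sketch: route a request to a provider/model by task type.
# The mapping below is a hypothetical example, not a real configuration.

TASK_MODEL_MAP = {
    "creative_writing": ("anthropic", "claude-3-5-sonnet"),
    "code_generation": ("openai", "gpt-4o"),
    "long_context_summary": ("google", "gemini-1.5-pro"),
}

DEFAULT = ("openai", "gpt-4o")

def pick_model(task_type: str) -> tuple[str, str]:
    """Return a (provider, model) pair for a task, with a fallback default."""
    return TASK_MODEL_MAP.get(task_type, DEFAULT)
```

An interface like LibreChat makes this choice a dropdown for the user rather than a hard-coded dependency, which is what keeps the organization free to swap models as the landscape shifts.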
To connect this interface to our actual work, Martin leveraged MCP (Model Context Protocol). This allowed us to build custom tools that let the AI “reach into” our internal systems—Google Workspace, Jira, Zendesk, and even our own Markup AI platform—to retrieve real-time data securely.
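Under MCP, tool invocations travel as JSON-RPC 2.0 messages: a client asks a server to run a named tool via a `tools/call` request. The sketch below builds that request shape for a hypothetical internal Jira tool; the tool name and arguments are invented for illustration.

```python
# Illustrative sketch of the JSON-RPC 2.0 request an MCP client sends
# to invoke a server-side tool. The "jira_search" tool and its arguments
# are hypothetical examples of an internal integration.
import json
from itertools import count

_request_id = count(1)  # JSON-RPC requests need unique ids

def mcp_tool_call(tool_name: str, arguments: dict) -> str:
    """Serialize a `tools/call` request for an MCP server."""
    request = {
        "jsonrpc": "2.0",
        "id": next(_request_id),
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(request)

payload = mcp_tool_call("jira_search", {"jql": "project = SUPPORT AND status = Open"})
```

The point of the protocol is that the chat interface never needs bespoke glue per system: any backend wrapped in an MCP server exposes its tools in this one uniform shape.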
Soft launches and gamification for AI Adoption
Technology is easy; culture is hard. Martin didn’t just turn the lights on and hope for the best. He knew that if the first experience was buggy, trust would evaporate.
The team used a “soft launch” strategy, allowing a small group of employees to test the system and iron out the kinks. Once stability was proven, they rolled it out to the company—but with a twist. They gamified the adoption.
To encourage employees to build their own agents, we held a competition. The result? We saw agents for everything from analyzing release notes to navigating Berlin public transport and even office fitness challenges. This broke down the fear barrier and turned AI adoption into a community activity.
Real-world use cases
The impact on our daily workflows has been dramatic. Here are just a few ways our team is using this infrastructure:
- The CEO’s shadow board: Our CEO built a “dream board” of historical figures (like Steve Jobs and Warren Buffett) to stress-test strategic ideas and prepare for actual board meetings.
- Instant analytics: Instead of spending hours compiling reports, I now ask our Google Analytics agent to summarize traffic spikes, top pages, and campaign correlations in seconds.
- The content engine: We practice what we preach. This very blog post was generated from the podcast transcript, then run through Markup AI to ensure it met our brand voice, terminology, and style guidelines—reducing a multi-hour process to minutes.
The future: Agent orchestration
What’s next? Martin believes we are moving toward chains of agents or “agent orchestration.”
Currently, we have specific agents for specific tasks. The goal is to build a top-level “Executive Assistant” agent that knows which tool to call. You could simply say, “Book a meeting with Michael and write up the minutes,” and the central agent would coordinate the calendar agent and the transcription agent to complete the workflow autonomously.
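The orchestration pattern described above can be sketched as a top-level agent that routes a natural-language request to specialist sub-agents. Everything here is a deliberately simplified assumption: the agent names, the keyword-based routing, and the string outputs stand in for real LLM-driven planning.

```python
# Illustrative sketch of agent orchestration: a central "Executive
# Assistant" dispatches work to specialist sub-agents. Keyword routing
# is a hypothetical stand-in for real LLM-based intent detection.

def calendar_agent(request: str) -> str:
    return f"calendar: scheduled '{request}'"

def transcription_agent(request: str) -> str:
    return f"transcription: minutes drafted for '{request}'"

SUB_AGENTS = {
    "meeting": calendar_agent,
    "minutes": transcription_agent,
}

def executive_assistant(request: str) -> list[str]:
    """Dispatch the request to every sub-agent whose trigger keyword appears."""
    lowered = request.lower()
    return [agent(request) for keyword, agent in SUB_AGENTS.items() if keyword in lowered]

steps = executive_assistant("Book a meeting with Michael and write up the minutes")
```

The design question this raises is where the coordination logic lives: in a hand-written router like the one above, or in a planning model that decides the tool sequence itself.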
Watch the full episode
Want to see the full conversation and learn how to build your own internal AI ecosystem? Watch the full episode on our YouTube channel.
Last updated: February 10, 2026