Given the pace of technology advancement, it may feel like artificial intelligence (AI) has been part of our daily lives for decades. In reality, many organizations are still in the early stages of learning how to apply AI's power to complex systems that manage critical missions and sensitive data.
On a recent episode of the Federal Tech Podcast, Excella's AI Engineer Will Angel sat down with host John Gilroy to discuss how federal agencies can use AI effectively, balancing innovation with responsibility while keeping their mission and citizens at the center of every decision.
The Challenge: Balancing Innovation and Responsibility
AI tools are more accessible than ever, with platforms like ChatGPT now pursuing FedRAMP approval and being offered to federal agencies for as little as a dollar. That accessibility is both exciting and daunting. Leaders must decide whether to use off-the-shelf tools, adopt enterprise copilots, or build their own systems. Each choice brings tradeoffs between innovation, cost, and control.
Agencies also need to consider how employees use these systems and whether governance structures can keep up with the pace of change. Even with policies in place, compliance isn't perfect, and a single lapse can expose critical data. Angel encourages agencies to approach adoption deliberately, embracing the promise of AI while ensuring responsible use and effective change management. For example, establishing clear policies around data access and training staff on when and how to use AI systems can go a long way in preventing missteps.
Understanding Model Context Protocol
Model Context Protocol (MCP) is helping agencies embed AI more deeply into their technology stacks, making it easier for systems to interact with data and tools in smarter, more connected ways. Introduced by Anthropic in late 2024, MCP provides a framework for connecting AI models to servers that hold data or perform specific functions, essentially giving AI systems controlled access to parts of an organization's technology stack. It's designed to allow agencies to enhance existing tools with AI functionality without rebuilding from scratch.
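To make that concrete, here is a minimal sketch of what an MCP server can look like, assuming the open-source MCP Python SDK. The server name and the lookup_handbook_section tool are illustrative stand-ins, not part of any real agency system.

```python
# A minimal sketch of an MCP server, assuming the open-source MCP Python SDK
# (pip install mcp). The server name and the lookup_handbook_section tool are
# illustrative, not drawn from any real agency deployment.
from mcp.server.fastmcp import FastMCP

server = FastMCP("agency-handbook")

@server.tool()
def lookup_handbook_section(topic: str) -> str:
    """Return the employee-handbook text for a given topic."""
    sections = {
        "telework": "Employees may telework with supervisor approval...",
        "leave": "Annual leave requests should be submitted two weeks in advance...",
    }
    return sections.get(topic.lower(), "No handbook section found for that topic.")

if __name__ == "__main__":
    # Runs over stdio by default, so a desktop AI client can launch and query it.
    server.run()
```

An AI client can then discover and call lookup_handbook_section through the protocol, adding AI access to the handbook without the underlying system being rebuilt.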
Angel noted that while MCP is still young, it represents a revolutionary shift in how software systems interact with AI. It can make applications more intelligent and responsive when built responsibly. However, developers must pay close attention to permissions and security. For example, an AI connected to SharePoint should be able to access an employee handbook but not salary data. With the right design, MCP can extend an agency's capabilities while maintaining the rigor federal environments demand.
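The SharePoint example comes down to deliberate scoping: the connector decides what the model is allowed to see before any content is returned. The sketch below illustrates that idea with a simple allow list; the source names and helper functions are hypothetical, invented only for illustration.

```python
# Illustrative permission gate for an AI data connector. The source names and
# allow list below are hypothetical, not drawn from any real deployment.
ALLOWED_SOURCES = {"employee_handbook", "public_policies", "it_faq"}

def fetch_for_model(source: str, query: str) -> str:
    """Serve the model content only from explicitly allow-listed sources."""
    if source not in ALLOWED_SOURCES:
        raise PermissionError(f"Model access to '{source}' is not permitted.")
    return run_query(source, query)

def run_query(source: str, query: str) -> str:
    # Placeholder retrieval; a real connector would query the document store.
    return f"Results for '{query}' from {source}"

# fetch_for_model("employee_handbook", "telework policy")  -> returns results
# fetch_for_model("salary_data", "pay bands")              -> raises PermissionError
```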
The Promise and Pitfalls of Agentic AI
Another exciting development is agentic AI, which allows systems to take action on their own. These "agents" can handle multi-step processes like managing workflows or analyzing large datasets. Angel described them as "LLM-based tools that can accomplish a task," similar to the popcorn button on a microwave: when the tool is well built and the task is clear, it produces perfect results. But when the system is poorly designed, you might end up with burned popcorn. Worse yet, if someone without proper training uses the wrong tool entirely, they might try to make popcorn in the blender.
That's why design and oversight are key. When implemented carefully, agentic AI can significantly reduce manual work and help agencies focus on higher-value analysis and decision-making. Angel's advice is to pilot incrementally: start small, learn as you go, and evolve successful demos into production-ready systems. This approach helps agencies capture AI's potential while minimizing risk.
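Conceptually, an agent is a loop: a model decides which approved tool to invoke next, observes the result, and repeats until the task is done. The sketch below shows the shape of that loop with the model's planning step stubbed out; the tool names and planner logic are invented for illustration, not a description of any particular product.

```python
# Conceptual sketch of an agent loop. The planner is a stub standing in for an
# LLM call, and the tool names are illustrative only.
from typing import Callable, Optional, Tuple

TOOLS: dict[str, Callable[[str], str]] = {
    "summarize_dataset": lambda arg: f"Summary of {arg}: 3 anomalies flagged.",
    "draft_status_report": lambda arg: f"Draft status report covering {arg}.",
}

def plan_next_step(task: str, history: list[str]) -> Optional[Tuple[str, str]]:
    """Stub planner: a real agent would ask an LLM which tool to call next."""
    if not history:
        return ("summarize_dataset", task)
    if len(history) == 1:
        return ("draft_status_report", task)
    return None  # task complete

def run_agent(task: str) -> list[str]:
    history: list[str] = []
    while (step := plan_next_step(task, history)) is not None:
        tool_name, arg = step
        history.append(TOOLS[tool_name](arg))  # only registered tools can run
    return history

print(run_agent("Q3 grant spending data"))
```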
Moving Forward, One Experiment at a Time
The large language models (LLMs) that power AI are a revolutionary technology that will fundamentally change how we build and use software. Machines can now process and reason with information in ways that amplify human capability. "The faster you can iterate responsibly," Angel said, "and the more you can observe how your software and models are performing, the better you'll be able to use these technologies to further your mission."
When leaders feel overwhelmed by the pace of AI, Angel offers two guiding principles: focus on your mission and focus on what you can access easily. Start with the systems and tools you already have, learn by doing, and expand from there. Pilots and real-world experiments will always teach more than theory alone.