Closing the Agentic AI productionization gap: Bloomberg embraces MCP
July 14, 2025
After generative AI dominated headlines in 2024, 2025 is shaping up to be the Year of the AI Agent. Companies are racing to develop robust agentic AI systems that can iterate and use tools to solve complex, multi-step problems. Late last year, Anthropic released Model Context Protocol (MCP), which makes such solutions easier to build. This open standard defines how large language models (LLMs) and AI agents can interface with external tools and data sources through a shared protocol and universal interface.
The cross-functional team working on AI infrastructure at Bloomberg – which includes engineers and researchers from the company’s AI Engineering and Platform Services Engineering groups, technical product managers from the AI Strategy & Research team in the Office of the CTO, and members of the firm’s UX Design team – has long been ahead of the curve in AI infrastructure development. They were already standing up an internal solution to accelerate agentic AI development before MCP was introduced.
The team, which is deeply engaged in the AI and open source communities, closely monitored developments related to MCP. Once they saw the new protocol gaining adoption among both developers and enterprises, the team quickly aligned its own efforts with MCP to boost system interoperability and developer productivity. Their ultimate goal: delivering even better AI products to clients across the finance sector.
AI trailblazers
Bloomberg’s AI journey began more than 15 years ago, driven by its quest to deliver intelligence from the rapidly growing volume of structured data and unstructured documents in the financial markets.
“We started applying machine learning to news sentiment analysis in 2009, integrated search functionality into the Bloomberg Terminal by 2012, and built and deployed neural networks to discover news themes in 2020,” says Engineering Manager Sambhav Kothari, Head of AI Productivity in Bloomberg’s AI Engineering group.
Kothari leads a group of researchers and engineers whose goal is to help ensure that Bloomberg’s AI application teams can productively use the company’s AI infrastructure to deliver innovative applications. In turn, these AI solutions enable Bloomberg’s clients to make smarter, more informed business and investment decisions.
“When generative AI went mainstream in 2022, we already had an infrastructure mindset and were comfortable tackling the challenges to scale this technology,” says Kothari.
In early 2024, the team was deeply engaged in researching how to scale generative AI to support enterprise applications. In particular, they focused on the specific challenges of productionizing GenAI in finance responsibly, including how to incorporate continuous evaluation, guardrails, and traceability into their systems.
“This ‘productionization gap’ was killing our velocity,” notes Ania Musial, Head of AI Platforms in Bloomberg’s Office of the CTO. “Teams could build impressive demos in days, but it took much longer to get them production-ready for clients to use.”
Parallel-pathing an emerging protocol
The dynamic nature of agentic AI systems represents a scalability challenge. “In order for us to create maintainable systems, we need to regularly evaluate, improve, and add new capabilities to the agents. These systems are anything but static,” says Musial.
The team realized that the ability to make AI tools – the modular capabilities used by agents to gather information or take actions – “plug-and-play” was the missing piece for enterprise-scale agent development. “Agents need to interact with everything else in your system: tools, other agents, applications, and LLMs,” explains Kothari. “When you have hundreds of AI researchers and engineers building multiple applications across numerous business areas, you can’t just hard-code dependencies. You need configurable, swappable, loosely coupled components.”
The team hypothesized that protocol-based support for tools – along with improved discoverability and well-defined integration points – would speed up the development of production-ready AI applications. Standard protocols enable dependency inversion, which can spur rapid innovation. For example, HTTP standardized communication between browsers and websites, giving rise to a global web development ecosystem. Similarly, in the GenAI era, industry-wide convergence on the OpenAI API standard made it far easier to swap between LLMs.
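As a rough illustration of the dependency inversion such a protocol enables, the sketch below models a tool call in the JSON-RPC 2.0 shape MCP uses for `tools/call`, dispatched through a registry so the caller never depends on a concrete implementation. The `get_price` tool, its registry, and the simplified result payload are hypothetical; MCP’s real result payloads are richer.

```python
import json

# Hypothetical tool registry: agents depend only on the protocol-level
# call shape, not on concrete tool implementations (dependency inversion).
TOOLS = {}

def tool(name):
    """Register a callable under a protocol-level tool name."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("get_price")
def get_price(ticker: str) -> float:
    # Stand-in implementation; a real server would query live data.
    return {"AAPL": 190.0}.get(ticker, 0.0)

def handle_tools_call(request: str) -> str:
    """Dispatch a JSON-RPC 2.0 'tools/call' request to a registered tool."""
    req = json.loads(request)
    params = req["params"]
    result = TOOLS[params["name"]](**params["arguments"])
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

request = json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "get_price", "arguments": {"ticker": "AAPL"}},
})
print(handle_tools_call(request))
```

Because the agent only ever emits the protocol shape, any tool implementation registered under the same name can be swapped in without touching the caller.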
By Q3 of 2024, the Bloomberg team had built a prototype of its GenAI tools protocol with a server SDK, middleware, and infrastructure components. They were proving out their hypothesis with real-world implementations.
“Building better agents depends on swapping out clouds, hardware, models, providers, and application interfaces.”
– Ania Musial
The team’s hypothesis received further corroboration when Anthropic introduced MCP.
“From day one, we closely followed MCP’s progress because we realized this protocol had the same semantic mapping as our internal approach, but it was being built in the open,” says Kothari. “We had seen this play out before — when open source standards emerge and get adopted widely, they create network effects that benefit everyone. We quickly recognized that MCP had that same potential.”
A major revision of the MCP specification in March 2025 introduced a streamable HTTP transport, which uses Server-Sent Events (SSE) to stream responses, making the protocol scalable for the web. “We predicted MCP would create a wave of remote servers, which was validating for our approach,” says Kothari.
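To make the transport concrete, here is a minimal sketch of SSE framing: each message is serialized onto `event:` and `data:` lines terminated by a blank line, which is what lets a remote server stream incremental results over plain HTTP. The progress-notification payloads are illustrative, not a specific server’s output.

```python
import json

def sse_frame(message: dict, event: str = "message") -> str:
    """Frame a JSON message as a Server-Sent Events block.

    SSE is plain text: an 'event:' line, a 'data:' line carrying the
    payload, and a blank line terminating the event.
    """
    return f"event: {event}\ndata: {json.dumps(message)}\n\n"

# A server streaming incremental tool output could emit one frame per update.
chunks = [
    {"jsonrpc": "2.0", "method": "notifications/progress",
     "params": {"progress": p}}
    for p in (0.5, 1.0)
]
stream = "".join(sse_frame(c) for c in chunks)
print(stream)
```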
When OpenAI, Google, and Microsoft all announced plans to adopt MCP, it enabled the Bloomberg team to use its internally built tools with a variety of third-party client applications without needing to build any additional custom integration points.
After seeing the direction in which MCP’s potential was unfolding, the Bloomberg team converged its internal approach with the open standard. “Building better agents depends on swapping out clouds, hardware, models, providers, and application interfaces,” said Musial. “The more interoperable and composable these elements become, the more nimble we can be as platform engineers, application developers, and as a product team. We were thrilled to see this was going to be embraced as the industry standard.”
Watch Sambhav’s recent talk at the MCP Dev Summit to learn more about how Bloomberg’s AI team converged on MCP.
Making the system “enterprise ready”
Bloomberg’s MCP infrastructure is remote-first and multi-tenant, with robust identity awareness. Middleware handles identity, access control, and observability. On the client side, tools are exposed via a proxy that handles single sign-on and authentication, and translates inbound requests from MCP clients to appropriate remote server calls for backwards compatibility with the rest of the system.
Kothari says the power of the system comes from what is abstracted. “Private data and tokens never touch the developer experience because an identity exchange mechanism creates canonical user and workload identities.”
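A minimal sketch of what such an identity exchange might look like, assuming a hypothetical token store standing in for a real SSO provider: the opaque token is exchanged at the system boundary for canonical user and workload identities, and only those identities flow onward to tools.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CanonicalIdentity:
    user: str      # canonical user identity
    workload: str  # canonical workload (application) identity

# Hypothetical token store standing in for a real SSO/identity provider.
_SSO_TOKENS = {"sso-token-abc": ("jdoe", "research-agent")}

def exchange(sso_token: str) -> CanonicalIdentity:
    """Exchange an opaque SSO token for canonical identities.

    Downstream tools receive only the canonical identity; the raw token
    never crosses the developer-facing boundary.
    """
    user, workload = _SSO_TOKENS[sso_token]
    return CanonicalIdentity(user=user, workload=workload)

ident = exchange("sso-token-abc")
print(ident)
```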
The company’s internal generative AI infrastructure platform facilitates rapid experimentation and versioning of agents, creating a full-stack development loop. “We can now easily productionize capabilities and rapidly turn demos into dependable, connected systems at scale,” he says.
Bloomberg’s imperative to develop production-grade generative AI solutions within the fast-paced finance industry amplifies concerns around ground truth.
“You don’t have a fixed corpus of responses that are being returned to the LLMs,” says Kothari. “It’s dynamic content that’s being fetched from live systems, some of which are ‘open world’, and there’s potentially no governance or review processes on such publicly available web content.”
So, while MCP enabled connectivity, it was up to the Bloomberg team to address other issues. “Our system provides the missing middleware layer that includes systems for authentication, authorization, rate limiting, metering, and AI guardrails to make MCP usable, viable, and safe to use in enterprise settings,” says Kothari.
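The middleware layer Kothari describes can be sketched as a chain of wrappers around a tool call: authentication runs first, then rate limiting, and a guardrail post-processes the result before it reaches the client. All names and the toy redaction rule below are hypothetical, not Bloomberg’s implementation.

```python
from functools import reduce

def require_auth(next_handler):
    def handler(ctx, request):
        if not ctx.get("user"):
            raise PermissionError("unauthenticated")
        return next_handler(ctx, request)
    return handler

def rate_limit(max_calls):
    def middleware(next_handler):
        calls = {"n": 0}
        def handler(ctx, request):
            calls["n"] += 1
            if calls["n"] > max_calls:
                raise RuntimeError("rate limit exceeded")
            return next_handler(ctx, request)
        return handler
    return middleware

def guardrail(next_handler):
    def handler(ctx, request):
        result = next_handler(ctx, request)
        # Toy guardrail: redact a blocked term on the way out.
        return result.replace("SECRET", "[redacted]")
    return handler

def tool_call(ctx, request):
    return f"result for {request} (contains SECRET)"

# Compose: auth first, then rate limiting, then the guardrail on the result.
stack = [require_auth, rate_limit(max_calls=2), guardrail]
handler = reduce(lambda h, mw: mw(h), reversed(stack), tool_call)

print(handler({"user": "jdoe"}, "tools/call get_price"))
```

The same composition technique lets metering or observability hooks be added without touching either the agent or the tool.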
Closing the production gap: From construction to composition
The introduction of MCP has dramatically reduced the product development gap and accelerated agent development, as Bloomberg’s AI teams no longer need to build all of the integrations themselves. They can now rapidly compose them using intuitive, easy-to-use tooling the team has built together with the firm’s UX designers.
“We partnered closely with our UX team to make sure our internal developer tools were intuitive and efficient, and solving the right problems from the start,” says Musial. “In complex systems like this, the wrong abstraction can slow teams down or introduce hidden errors. Our UX teams’ expertise helped us get the journey just right — so our engineers could focus on building, not untangling, interfaces.”
With wide internal adoption of MCP, Bloomberg’s AI researchers and engineers now have access to a rapidly growing set of tools. The next challenge is managing the scale — dealing with complexities such as tool sprawl, discoverability issues, and governance.
“We’re at the early stage of building a scalable, internal version registry of tools, complete with metadata and usage policies,” says Kothari. “We’re also layering in our internal governance frameworks for access control, observability, and policy enforcement.”
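A versioned tool registry with usage policies could be sketched along these lines (all names are hypothetical): each entry carries metadata plus an allow-list policy, and the registry resolves the latest version of a tool and answers authorization queries.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolVersion:
    name: str
    version: str
    description: str
    allowed_roles: frozenset  # usage policy: which roles may call this version

class ToolRegistry:
    """Minimal sketch of a versioned tool registry with usage policies."""

    def __init__(self):
        self._tools = {}

    def register(self, tool: ToolVersion):
        self._tools[(tool.name, tool.version)] = tool

    def latest(self, name: str) -> ToolVersion:
        versions = [t for (n, _), t in self._tools.items() if n == name]
        return max(versions, key=lambda t: tuple(map(int, t.version.split("."))))

    def authorized(self, name: str, version: str, role: str) -> bool:
        return role in self._tools[(name, version)].allowed_roles

registry = ToolRegistry()
registry.register(ToolVersion("get_price", "1.0.0", "Spot price lookup",
                              frozenset({"research"})))
registry.register(ToolVersion("get_price", "1.2.0", "Spot price lookup v2",
                              frozenset({"research", "trading"})))

print(registry.latest("get_price").version)  # → 1.2.0
```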
“We can now easily productionize capabilities and rapidly turn demos into dependable, interoperable systems at scale.”
– Sambhav Kothari
Along the way, this shift also reduced experimentation time from days to minutes, closed the production gap, and created a rapid flywheel in which tools and agents reinforce each other in a common environment.
Most importantly, it has enabled the company’s Product teams to deliver new AI-driven benefits to Bloomberg’s customers far faster and more efficiently. For example, the company’s Document Search and Analysis product team was able to quickly build and test dozens of AI systems consisting of different tool variations while working to improve quality, latency, and throughput for their recently introduced offering.
By optimizing the path from idea to production, Bloomberg isn’t just modernizing its own engineering stack — it’s accelerating how financial intelligence is delivered and empowering its customers with smarter, faster, AI-powered tools in a rapidly evolving industry.
Interested in building the future of agentic AI? Join the teams driving Bloomberg’s innovative AI infrastructure platforms.