Report: Microsoft Develops Internet-Free AI Model for Spy Agencies


Microsoft has reportedly built a generative AI model designed for U.S. intelligence agencies.

The development marks a milestone in the artificial intelligence (AI) sector: Microsoft officials say it is a large language model (LLM) that can function entirely disconnected from the internet, Bloomberg News reported Tuesday (May 7).

“This is the first time we’ve ever had an isolated version — when isolated means it’s not connected to the internet — and it’s on a special network that’s only accessible by the U.S. government,” William Chappell, Microsoft’s chief technology officer for strategic missions and technology, told Bloomberg.

According to the report, most AI models depend on cloud services to glean patterns from data, but Microsoft wanted to provide a truly secure system for U.S. intelligence agencies like the CIA.
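Microsoft has not disclosed how the isolated system is built, but the basic idea of running an LLM with no network dependency can be illustrated in a few lines. The sketch below is purely hypothetical: it assumes a made-up local model directory and uses the open-source Hugging Face transformers library, which is not confirmed to be part of Microsoft's deployment. It loads weights from disk and forces the tooling to fail rather than reach out to the internet:

```python
# Hypothetical sketch of "air-gapped" LLM inference: the model weights live on
# local disk and the process never touches the network. The model path is
# invented for illustration; this does not reflect Microsoft's actual system.
import os

# Tell the Hugging Face libraries up front to refuse any network access.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "/opt/models/local-llm"  # hypothetical path to pre-copied weights

# local_files_only=True makes loading fail loudly if anything would need to
# be downloaded, rather than silently fetching it from the internet.
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, local_files_only=True)

prompt = "Summarize the following report:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In a genuinely air-gapped environment like the one Bloomberg describes, isolation would be enforced at the network and infrastructure level; software flags like these are only a complementary safeguard.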

Bloomberg said intelligence officials have stressed they want to use the same type of AI tools that proponents say will transform the business world. While the CIA last year debuted a ChatGPT-style service for unclassified information, agencies want something that can handle much more sensitive data.

“There is a race to get generative AI onto intelligence data,” Sheetal Patel, assistant director of the CIA for the Transnational and Technology Mission Center, said at a recent security conference at Vanderbilt University, per Bloomberg.

She added that the first country to use generative AI for intelligence would win that race. “And I want it to be us.”

Microsoft’s latest efforts follow reports from earlier this week that the company was working on a new, in-house AI model “far larger” than the open-source models it has previously trained.

The new model, MAI-1, is expected to have about 500 billion parameters and is designed to compete with models from companies such as Google, Anthropic and OpenAI (in which Microsoft is an investor).

Meanwhile, PYMNTS on Monday (May 6) examined some of the challenges and concerns arising from the use of AI LLMs.

“LLMs may make up information, affecting their credibility and reliability,” that report said. “The models can perpetuate biases found in their training data and generate misinformation. Their use to produce online content at scale may accelerate the spread of fake news and spam. Policymakers worry about the impact on jobs as LLMs encroach on knowledge work.”

In addition, questions have emerged about intellectual property, as these models are trained using copyrighted material.

