Building Robust RAI Programs as Third-Party AI Tools Proliferate

An article from MIT Sloan Management Review, sponsored by BCG. It highlights the broader concerns executives might have about the proliferation of technologies such as LLMs, then bundles those concerns into a sellable package.

Reading notes

This is an MIT Sloan Management Review research initiative sponsored by BCG. Key insights are, in my opinion:

  • Third-party AI use will be dominant in most firms
  • A lot of companies are writing policies on proper AI tool use
  • There are risks involved with using AI

The article goes out of its way to co-mingle these themes under the term “Responsible AI” (“RAI”) in a fairly transparent piece of marketing “thought leadership” → more in my rant essay on the matter.

Main points

The impact of LLMs has been wild:


OpenAI’s ChatGPT tool has catapulted the capabilities, as well as the ethical challenges and failures, of artificial intelligence into the spotlight.

And of course somebody is already suing a chatbot:


[… a chatbot] falsely accusing a law professor of sexual harassment and implicating an Australian mayor in a fake bribery scandal, leading to the first lawsuit against an AI chatbot for defamation.


  • P. Dixit, “U.S. Law Professor Claims ChatGPT Falsely Accused Him of Sexual Assault, Says ‘Cited Article Was Never Written,’” Business Today, April 8, 2023,
  • T. Gerken, “ChatGPT: Mayor Starts Legal Bid Over False Bribery Claim,” BBC, April 6, 2023,

Half of companies are working on AI policies


In fact, nearly half of the companies polled in a recent Bloomberg survey reported that they are actively working on policies for employee chatbot use […]


  • J. Constantz, “Nearly Half of Firms Are Drafting Policies on ChatGPT Use,” Bloomberg, March 20, 2023,

Most companies rely on external AI tools


The vast majority (78%) of organizations surveyed this year report accessing, buying, licensing, or otherwise using third-party AI tools, including commercial APIs, pretrained models, and data. In fact, more than half (53%) of organizations surveyed rely exclusively on third-party AI tools and have no internally designed or developed AI technologies of their own.

External AI tools are also a source of failure


[…] more than half (55%) of all AI-related failures stem from third-party AI tools, leaving organizations that use them vulnerable to unmitigated risks.

Also, a fifth of the surveyed organizations fail to evaluate AI-related risks at all. Many have not adapted their third-party risk management practices to the AI domain, and AI vendors are not vetted as rigorously as other vendors.

Regulation on AI is coming


For example, laws in New York, Illinois, and Maryland, as well as draft legislation introduced in California and a handful of other states, address the use of AI tools in the context of hiring and employment. In Europe, the much-anticipated AI Act and corresponding AI Liability Directive will impose stringent requirements on AI systems deemed to be “high risk,” as well as general-purpose systems like AI chatbots, and make vendors liable for any damage to consumers.


  • L. Mearian, “Legislation to Rein In AI’s Use in Hiring Grows,” Computerworld, April 1, 2023,
  • “Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts,” European Commission, April 21, 2021,
  • “Proposal for a Directive of the European Parliament and of the Council on Adapting Non-Contractual Civil Liability Rules to Artificial Intelligence (AI Liability Directive),” PDF file (Brussels: European Commission, Sept. 28, 2022),

You should just give all your money away

The article finishes with a call to action for you to (unsurprisingly) “invest” heavily in “RAI.” That way you can become a Pokémon Master “RAI Leader”.