Who’s Deploying AI, and Who’s Responsible for Safety?

A strategic overview of how AI deployment responsibility is shifting across enterprise segments, and what it means for AI safety efforts.

Overview

(Created in early 2024)

Historically, AI deployment was centralized, led by Big Tech firms that had mature Responsible AI and safety functions. But the generative AI wave has changed that.

Today, enterprises of all sizes are building and deploying their own AI systems, many doing so internally and without the safety infrastructure typically found at larger firms. In fact, some estimates suggest that smaller enterprises are deploying generative AI at three times the rate of their Fortune 500 counterparts.

This infographic offers a segmented view of:

  • Which organizations are building vs. buying models
  • Where AI safety expertise is present—and where it’s lacking
  • How AI safety standards bodies (like AISIC and MLCommons) may be missing key parts of the deployment landscape

Why it matters

If your organization is buying, integrating, or fine-tuning generative models, you may be taking on more risk than you realize. Reins AI helps organizations understand where safety responsibilities lie, and how to evaluate what they buy.
