Adults in the Generative AI Rumpus Room: Anthropic, AWS, Meta

Introduction: Generative AI is widely considered the fastest moving technology innovation in history. It has captured the imagination of consumers and enterprises across the globe, spawning incredible innovation and along with it a mutating market ecosystem. Generative AI has also caused a copious amount of FOMO, missteps, and false starts. These are the classic signals of technology disruption—lots of innovation, but also lots of mistakes. It is a rumpus room with a lot of “kids” going wild. The rumpus room needs adults. Guidance through the generative AI minefield will come from thoughtful organizations who do not panic, who understand the fundamentals of AI, and who manage risk.

Our picks for this week’s Adults in the Generative AI Rumpus Room are Anthropic, Amazon Web Services (AWS), and Meta.

Anthropic Tackles LLM Bias

The News: On December 7, Anthropic released new research along with an accompanying tool. The paper, “Evaluating and Mitigating Discrimination in Language Model Decisions,” outlines the challenges of AI bias, offers policy suggestions, and, most interestingly, describes a practical way to mitigate bias. From the paper:

“In addition to tools to measure discrimination, developers also need tools to mitigate it. In our study, we found that simple prompting—i.e., providing additional instruction to an LM in plain language—is an effective tool to reduce discriminatory outputs. We tested a variety of prompt strategies that include:

  • Appending statements to decision questions instructing a model to ensure its answer is unbiased.
  • Inserting requests to articulate the rationale behind a decision while avoiding bias and stereotypes.
  • Asking the model to answer the decision question as if no demographic information was provided.

While each of these techniques were effective in reducing discriminatory outputs, two strategies nearly eliminated discrimination in these decision scenarios: 1) appending the decision prompt with a statement that discrimination is illegal, and 2) instructing the model to pretend no demographic information was included in the original prompt.”

You can read the full Anthropic bias mitigation blog post on the Anthropic website.
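The prompting strategies quoted above amount to appending mitigation text to a decision question before it is sent to the model. A minimal sketch of that idea in Python (the instruction wording below is illustrative, not Anthropic’s exact phrasing):

```python
# Sketch of prompt-based bias mitigation: append a debiasing
# instruction to a decision question before sending it to an LLM.

MITIGATIONS = {
    # Strategy 1: state that discrimination is illegal.
    "illegal": (
        "Note that discrimination on the basis of any protected "
        "characteristic is illegal, so your answer must be unbiased."
    ),
    # Strategy 2: ask the model to ignore demographic information.
    "ignore_demographics": (
        "Answer as if no demographic information about "
        "the applicant had been provided."
    ),
}

def build_prompt(decision_question: str, strategy: str) -> str:
    """Return the decision question with a mitigation statement appended."""
    return f"{decision_question}\n\n{MITIGATIONS[strategy]}"

question = "Should this applicant be approved for a small-business loan?"
prompt = build_prompt(question, "illegal")
# `prompt` would then be sent to the model via its normal API call.
```

The point of the research is that even this lightweight intervention, with no retraining or dataset changes, measurably reduced discriminatory outputs in the tested decision scenarios.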

Adults because … Bias is a huge issue for language models, and the larger the model, the bigger the issue. Cleaning and tagging training datasets is probably the best long-term approach, but even open source models (such as Meta’s Llama models) do not share details of their datasets. In the meantime, the ability to mitigate bias with simple instructions is a step in the right direction.

Guardrails for Amazon Bedrock Levels Up Responsible AI

The News: At re:Invent, AWS launched Guardrails for Amazon Bedrock into preview. With the new tool, Amazon Bedrock users can define denied topics and content filters to remove undesirable and harmful content from interactions between their applications and users. Here are the key details:

  • Additional layer of protection. Guardrails for Amazon Bedrock controls are an additional layer of protection to any protections built into foundation models.
  • Apply to all large language models (LLMs) in Amazon Bedrock. This feature includes fine-tuned models and Agents for Amazon Bedrock (see Next-Generation Compute: Agents for Amazon Bedrock Complete Tasks for more information).
  • Control denied topics and configure with natural language commands. Users can use a short natural language description to define a set of topics that are undesirable in the context of their application.
  • Control content filters. Users can configure thresholds to filter harmful content across hate, insults, sexual, and violence categories. While many FMs already provide built-in protections against undesirable and harmful responses, Guardrails gives users additional controls to filter such interactions to the degree their company’s use cases and responsible AI policies require.
  • Control personally identifiable information (PII) redaction. Coming soon, users will be able to select a set of PII such as name, email address, and phone number, that can be redacted in FM-generated responses, or they can block user input if it contains PII.
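Conceptually, a guardrail layer like this sits between the application and the model, checking both prompts and responses before they pass through. The behavior can be sketched locally in a simplified form (this is an illustrative approximation, not the Bedrock API; in Bedrock, denied topics are described in natural language rather than keyword lists, and the regexes below are placeholders):

```python
import re

# Simplified sketch of a guardrail layer: denied topics, content
# filtering, and PII redaction applied to model output.

DENIED_TOPICS = {
    "investment_advice": ["stock tip", "guaranteed return", "investment advice"],
}

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def check_denied_topics(text: str) -> list[str]:
    """Return the denied topics the text appears to touch."""
    lowered = text.lower()
    return [topic for topic, keywords in DENIED_TOPICS.items()
            if any(k in lowered for k in keywords)]

def redact_pii(text: str) -> str:
    """Replace detected PII with placeholder tags, e.g. [EMAIL]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def apply_guardrail(model_output: str) -> str:
    """Block output that hits a denied topic; otherwise redact PII."""
    if check_denied_topics(model_output):
        return "Sorry, I can't help with that topic."
    return redact_pii(model_output)
```

The layering is the key design point: these checks run regardless of what protections the underlying foundation model already has, so swapping models does not change the application’s safety posture.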

Read the AWS blog post on the launch of Guardrails for Amazon Bedrock on the AWS website.

Adults because … Guardrails for Amazon Bedrock reflects careful thinking by AWS about the responsible use of AI. The prevention/proactive approach is unique at this point, though it is likely that both Microsoft and Google will soon add similar features to their AI development platforms. Regardless, the initiative is the mark of AI leadership and another signal that AWS understands generative AI and is fully engaged in enabling enterprises to leverage generative AI. For further analysis including comparisons of Guardrails to Microsoft and Google’s comparable responsible AI governance tools, read Guardrails for Amazon Bedrock Show AWS Gets Generative AI.

Meta Launches Purple Llama

The News: On December 7, Meta announced Purple Llama, an umbrella project featuring open trust and safety tools and evaluations meant to level the playing field for developers to responsibly deploy generative AI models and experiences in accordance with best practices shared in Meta’s Responsible Use Guide.

As a first step, the company is releasing CyberSecEval, a set of cybersecurity safety evaluation benchmarks for LLMs, and Llama Guard, a safety classifier for input/output filtering that is optimized for ease of deployment.

CyberSecEval provides metrics to quantify LLM cybersecurity risks, tools to evaluate the frequency of insecure code suggestions, and evaluations that make it harder for LLMs to generate malicious code or aid in carrying out cyberattacks. Llama Guard provides developers with a pretrained model to help defend against generating risky outputs. Read Meta’s Purple Llama announcement on the Meta website.
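A safety classifier like Llama Guard is typically deployed as a wrapper around the chat model: the user’s prompt is classified before the model sees it, and the model’s reply is classified before the user sees it. A sketch of that control flow, with stand-in functions in place of real models (the actual Llama Guard is a fine-tuned Llama model that returns safe/unsafe judgments with violated category labels):

```python
from typing import Callable

# Control-flow sketch of input/output filtering with a safety
# classifier. `classify` stands in for a model like Llama Guard,
# returning True when the text is judged safe.

def guarded_chat(user_input: str,
                 model: Callable[[str], str],
                 classify: Callable[[str], bool]) -> str:
    """Run the model only if both the input and the output pass the classifier."""
    if not classify(user_input):          # filter the incoming prompt
        return "Input rejected by safety classifier."
    reply = model(user_input)
    if not classify(reply):               # filter the generated reply
        return "Response withheld by safety classifier."
    return reply

# Trivial stand-ins for demonstration only.
def toy_model(prompt: str) -> str:
    return f"Echo: {prompt}"

def toy_classifier(text: str) -> bool:
    return "malware" not in text.lower()
```

Because the classifier runs on both sides of the model call, it can catch unsafe prompts the model would comply with as well as unsafe completions triggered by benign-looking prompts.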

Adults because … Tools that combat the inherent challenges for LLMs and other foundation models are good things. Purple Llama might be the first of these types of guardrails for open source models.

Disclosure: The Futurum Group is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.

Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of The Futurum Group as a whole.

Other Insights from The Futurum Group:

Adults in the Generative AI Rumpus Room: Leica, Data Provenance, Google

Adults in the Generative AI Rumpus Room: Google, Tidalflow, Lakera

Adults in the Generative AI Rumpus Room: Anthropic, Kolena, IBM

Author Information

Based in Tampa, Florida, Mark is a veteran market research analyst with 25 years of experience interpreting technology business and holds a Bachelor of Science from the University of Florida.
