3 min read

Beam AI Passes the NYC LL144 Bias Audit

Beam AI has completed an independent bias audit for its Agentic AI System under New York City’s Local Law 144, conducted and verified by Asenion (formerly Fairly AI). The results are in: our platform meets the standard for fairness and compliance.

We are proud of this, and we think it matters for anyone evaluating AI platforms for enterprise use.

What Is NYC LL144?

New York City’s Local Law 144, in effect since July 2023, is one of the first laws in the United States that requires independent bias audits for automated decision tools. It applies to AI systems used in employment-related decisions, like screening, scoring, or ranking candidates.

The law requires:

  • An independent bias audit conducted by a qualified, external auditor.

  • An assessment of disparate impact across race, ethnicity, and sex.

  • Public availability of the audit results.

  • Annual renewal of the audit.
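The core metric behind the disparate-impact assessment is the "impact ratio" defined in the NYC DCWP rules implementing LL144: for selection-style tools, a category's selection rate divided by the selection rate of the most-selected category. The sketch below is an illustrative calculation under that definition, not Beam's or Asenion's actual audit code; the data is hypothetical.

```python
# Illustrative sketch of the LL144 impact-ratio metric (DCWP rules):
# impact ratio = selection rate of a category
#              / selection rate of the most-selected category.
# This is NOT the auditor's actual implementation.

def selection_rates(outcomes_by_category):
    """Map each category to its selection rate.

    outcomes_by_category: dict mapping category -> list of 0/1
    outcomes (1 = candidate selected/advanced).
    """
    return {
        cat: sum(outcomes) / len(outcomes)
        for cat, outcomes in outcomes_by_category.items()
    }

def impact_ratios(outcomes_by_category):
    """Divide every category's rate by the highest rate observed."""
    rates = selection_rates(outcomes_by_category)
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

# Hypothetical screening outcomes for two demographic groups.
outcomes = {
    "group_a": [1, 1, 1, 1, 0, 1, 1, 0],  # rate 0.75
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0],  # rate 0.375
}
print(impact_ratios(outcomes))  # {'group_a': 1.0, 'group_b': 0.5}
```

An impact ratio well below 1.0 for any protected category is what an audit flags for further scrutiny; the law itself does not fix a pass/fail threshold.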

For enterprise AI, LL144 set an important precedent. It moved the conversation from “we take fairness seriously” to “prove it.”

Why This Matters for Our Customers

If you are running AI agents across HR, compliance, or customer-facing workflows, the question is not whether regulation is coming. It is already here.

LL144 is a US regulation, but it fits into a broader global pattern. The EU AI Act classifies employment AI as high-risk, requiring conformity assessments and ongoing monitoring. ISO/IEC 42001 provides a management systems framework for responsible AI governance. These standards are converging, and the bar is rising.

For Beam customers, passing this audit means:

  • Independent verification that our agentic AI system has been tested for bias across protected categories.

  • Compliance readiness for organizations operating in New York City or working with NYC-based clients.

  • A foundation for meeting broader regulatory requirements as AI laws expand across jurisdictions.

This is not a one-time effort. Responsible AI requires ongoing audits, and we plan to maintain this certification as the regulatory environment evolves.

Our Approach to Trustworthy AI

Beam’s platform architecture was built with auditability in mind. Every decision an AI agent makes is traceable. Logic and data flows are transparent. This is what made the LL144 audit process straightforward: when your system is designed for transparency, proving fairness is a natural extension.
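To make "every decision is traceable" concrete, the sketch below shows what a minimal decision-trace record could look like. The field names and structure are assumptions for illustration only, not Beam's actual schema.

```python
# Hypothetical decision-trace record; field names and structure are
# illustrative assumptions, NOT Beam's actual logging schema.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionTrace:
    agent_id: str    # which agent acted
    action: str      # what it decided to do
    inputs: dict     # the data the agent saw at decision time
    rationale: str   # human-readable reason for the step
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example: logging one step of a hypothetical screening workflow.
trace = DecisionTrace(
    agent_id="screening-agent-01",
    action="advance_candidate",
    inputs={"application_id": "a-123", "score": 0.82},
    rationale="score above configured threshold of 0.75",
)
print(json.dumps(asdict(trace), indent=2))
```

Records like this, kept per step, are what let an external auditor reconstruct how an outcome was produced rather than taking the system's behavior on faith.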

We are also GDPR compliant, ISO 27001 certified, and SOC 2 attested, with options for EU data residency, private cloud, and on-premise deployment. The LL144 bias audit adds another layer to that foundation.

About Asenion

The audit was conducted by Asenion (formerly Fairly AI and anch.AI), a firm specializing in AI governance, bias auditing, and compliance. Beam AI has been added to Asenion’s AI Trust Registry, a public directory of organizations that have completed independent AI audits.

We thank the Asenion team for a thorough and collaborative process.

What Comes Next

Responsible AI is a continuous commitment, not a milestone you pass and forget. We will continue investing in fairness testing, compliance readiness, and transparent AI as regulations evolve across the US, Europe, and beyond.

If you are evaluating AI platforms and compliance is on your checklist, we are happy to walk you through how Beam handles it.

Get Started Today

Start building AI agents to automate your processes

Join our platform and start building AI agents for all kinds of automation.
