Artificial intelligence can now create new content based on patterns it has learned from large datasets, generating original outputs that resemble human-created work. In the business context, AI can draft contracts, create ads and even recommend business strategies. But when generative AI gets it wrong, who pays the price?
Generative AI is a powerful tool, but it introduces legal and operational risks. In decision-making, AI-generated recommendations can reinforce a company’s historic biases, with discriminatory results. AI contract drafting can let ambiguous or unenforceable terms slip in. In marketing, AI might create misleading ads, repurpose copyrighted material or generate defamatory statements. AI-driven systems can act unpredictably, creating “black box” scenarios that challenge existing accountability structures.
Case law is beginning to clarify when and how companies bear responsibility for AI-generated harms. Companies can be held vicariously liable for employees’ or contractors’ misuse of AI, such as HR bots making discriminatory hiring decisions or sales teams using AI-generated content without proper review. If an AI-generated contract leads to a dispute, the business may face contractual liability for ambiguous or unenforceable terms. Reputational damage from a high-profile AI blunder may prompt shareholder lawsuits. Consider also the risk of AI tools mishandling customer data, running afoul of privacy laws.
As the law catches up, the specter of expanded liability is growing. The boundaries of corporate liability are being drawn in real time. The European Union’s sweeping AI Act imposes strict obligations on high-risk AI systems, including requirements for transparency, human oversight and risk assessment. In the U.S., states like California and New York are considering AI-specific rules. Federal agencies like the FTC are focusing on truth-in-advertising standards and bias mitigation in automated decision-making. Looking ahead, expect tighter regulation as lawmakers respond to high-profile AI mishaps.
Corporate counsel will play a pivotal role in helping businesses strike a balance between harnessing innovation and mitigating liability. A skilled attorney can help you develop clear AI governance policies that set rules for what tasks AI may and may not perform and that mandate human review of all critical AI outputs, such as contract language, strategic recommendations and ad copy, before they go live. An attorney can also help update vendor and contractor agreements to include AI-specific clauses addressing responsibility, data protection and indemnification. Finally, an attorney can assist in reviewing your insurance coverage, as some carriers now offer policies tailored to AI-related risks.
At the Law Offices of Donald W. Hudspeth, P.C. in Phoenix, we assist businesses in developing policies and procedures for using AI tools so as to substantially limit the risk of regulatory violations. Call us at 866-696-2033 or contact us online to schedule a consultation.