Moye Law, PC
HARRISON, NY · VOL. I
AI law · Article

Law at the edge

Navigating copyright, contracts, and governance in an AI-shaped landscape

By Christopher Moye, Esq.

Artificial intelligence does not eliminate the legal questions that govern commerce and creation — it relocates them to places where established doctrine reaches imperfectly, where contract language written before the technology existed does not mean what either party assumed, and where the gap between what the law currently says and what it will eventually say is wide enough that the planning decisions made now will determine outcomes long after the law has settled.

The legal framework governing artificial intelligence is not a single statute or a unified regulatory regime. It is a patchwork of existing intellectual property law applied to new factual contexts, contract doctrine tested by novel questions about authorship and performance, and emerging regulation that varies across jurisdictions in ways that create compliance complexity for any organization operating at scale. Copyright law does not clearly resolve whether training a model on copyrighted works constitutes infringement. Contract law does not clearly assign liability when an AI system produces output that causes harm. Employment law does not clearly categorize the relationship between an AI tool and the workers whose tasks it performs. The law is in motion, and the organizations navigating it are moving through territory that changes while they walk.

For the founders, creators, and enterprises building with AI or deploying AI-generated content, this uncertainty is not an abstraction. It has immediate operational implications. A company whose product incorporates content generated by an AI model may not be able to assert copyright protection for that content in jurisdictions that require human authorship. A company that uses a model trained on proprietary data must understand what rights it retains in the outputs and whether the model provider's terms of service transfer, share, or license those outputs in ways the company has not authorized. A creator who licenses their voice, likeness, or creative style to an AI model must ensure that the license agreement is precise enough to define what uses are permitted and what uses are not — because a license that is silent on a permitted use will be interpreted against the licensor in most jurisdictions.

This article surveys the primary legal frameworks implicated by artificial intelligence — copyright, contracts, and emerging regulatory governance — and identifies the planning decisions that carry the highest risk if deferred. It is written for informational purposes only and does not constitute legal advice. AI law is evolving rapidly, and the specific analysis applicable to your situation depends on the nature of your technology, the jurisdictions in which you operate, and the current state of the law at the time you are making decisions — all of which require counsel who is tracking the field as it develops.


Copyright at the frontier

The United States Copyright Office has taken the position that copyright protection requires human authorship, and has declined to register works produced entirely by artificial intelligence without meaningful human creative contribution. This position creates immediate practical questions for anyone using AI as a primary tool in content production. If an AI model generates the first draft and a human edits it, is the result copyrightable? The answer depends on the nature and extent of the human contribution — and the cases developing this doctrine are doing so in real time, with outcomes that are not yet consistent enough to predict with confidence. The safest working assumption, for now, is that the more the AI contributes and the less the human shapes the final expression, the weaker the copyright claim becomes.

The training data question is equally unsettled. Multiple large-scale copyright infringement actions are currently pending against AI companies whose models were trained on textual, visual, and audio works without licenses from the rights holders. The plaintiffs in these cases argue that training constitutes copying; the defendants argue that training is transformative and falls within fair use. Neither position has been definitively resolved at the appellate level. For businesses that operate their own AI models, this uncertainty argues strongly for a clear understanding of where training data came from, whether any licenses were obtained, and what indemnification obligations the model provider has assumed in its terms of service. Training data provenance is a compliance question today, not merely a future risk.

For creators and rights holders, the mirror-image concern is how to protect existing work from unauthorized use in AI training while preserving the ability to license it on acceptable terms. Technical measures — watermarking, opt-out signals, and metadata preservation — are increasingly relevant to this question. Some jurisdictions, including the European Union under its AI Act, have imposed disclosure and opt-out requirements on AI developers that create corresponding rights for rights holders. In the United States, there is as yet no comparable comprehensive framework, but state legislation and proposed federal bills are in active development. A rights holder who waits for the law to stabilize before addressing training data protections may find that significant unauthorized use has already occurred, and that the remedial options available after the fact are limited.
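
One concrete form an opt-out signal can take is a robots.txt directive. Several AI developers have published the user-agent names their training crawlers honor, and a rights holder can disallow them site-wide. The sketch below is illustrative only: the crawler names shown are those documented by their respective operators, coverage varies by operator, and robots.txt is a voluntary technical signal, not a legal guarantee.

```
# robots.txt — illustrative opt-out for known AI training crawlers
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```

A directive like this is most useful alongside, not instead of, contractual and statutory protections; it documents the rights holder's objection even where it is not technically enforced.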

The safest working assumption is that the more the AI contributes and the less the human shapes the final expression, the weaker the copyright claim becomes.

Contract law in an AI context

Contracts govern the relationship between AI companies and their customers, between creators and the platforms that deploy their work, and between enterprises and the vendors supplying AI tools used in operations. Most of those contracts were drafted before the specific questions AI raises were fully apparent — and many are being interpreted against their original intent as parties discover that language about content, outputs, data, and ownership does not map cleanly onto AI's production process. A software license that grants the licensee the right to use the software's outputs for commercial purposes may or may not grant the right to use those outputs to train a competing model. The answer depends on the precise language, the jurisdiction, and the interpretive approach the relevant court would apply — not on what the parties believed when they signed.

AI vendor agreements require careful negotiation on several points that are easily overlooked. Ownership of outputs: does the customer own outputs generated using the platform, or does the vendor retain a license to use those outputs for further training? Data handling: if a customer inputs proprietary data to generate outputs, what restrictions govern the vendor's use of that input? Liability and indemnification: if an AI model produces output that infringes a third party's intellectual property, who bears the cost of that claim — the vendor, the customer, or some allocated portion of both? These questions are standard in well-negotiated enterprise agreements and are frequently absent from the click-through terms that govern smaller deployments. The cost of a bad term discovered in litigation is almost always greater than the cost of a better-negotiated contract at the outset.

Agreements involving AI-generated content — licensing deals, publishing contracts, production agreements — must now address what AI contributed to the deliverable and who owns what portion of the resulting work. A publishing contract signed in 2018 almost certainly does not address AI-generated content because the parties had no reason to consider it. A publishing contract signed today that is silent on AI is being drafted without a key term, not left intentionally to implication. Creators entering agreements for work that incorporates AI tools should ensure that the contract allocates copyright ownership in any AI-assisted portions, addresses disclosure obligations that may apply under the law of the relevant jurisdiction, and protects the creator from liability for any infringement claim arising from the AI model's training data.

A contract that is silent on AI is being drafted without a key term — not left intentionally to implication.

Governance and compliance

The regulatory landscape for artificial intelligence is developing at different speeds in different jurisdictions. The European Union's AI Act creates a tiered regulatory framework based on the risk level of AI applications, imposing the heaviest obligations on systems that affect employment decisions, access to credit, and essential public services. The United Kingdom has adopted a sector-specific approach, applying existing regulatory frameworks to AI rather than creating new horizontal legislation. The United States has issued executive guidance and sector-specific agency actions but has not enacted comprehensive federal AI legislation. For organizations operating across jurisdictions, this fragmentation requires compliance analysis in each market rather than the application of a single governing framework.

Employment law presents some of the most immediate compliance questions for organizations deploying AI internally. Using AI to screen job applicants, evaluate employee performance, or make promotion decisions implicates state and local laws in several jurisdictions — most prominently New York City's Local Law 144, which requires employers using automated employment decision tools to conduct annual bias audits and provide candidates with notice of the tool's use. Organizations that are not tracking these requirements across their operating jurisdictions risk regulatory enforcement at a time when AI-related investigations are active priorities for state and local agencies. Compliance requires knowing which AI tools are in use in employment decisions, not just which tools have been officially approved for deployment.
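
For a sense of what a Local Law 144 bias audit measures, the central statistic is the impact ratio: the selection rate for each demographic category divided by the selection rate of the most-selected category. The following is a simplified sketch of that arithmetic only; the function name and sample figures are hypothetical, and the actual audit categories and methodology are defined in the implementing DCWP rules.

```python
# Illustrative impact-ratio calculation of the kind used in bias audits
# under NYC Local Law 144 (simplified; consult the DCWP rules for the
# required categories and methodology).

def impact_ratios(selected: dict, total: dict) -> dict:
    """Map each category to its selection rate divided by the highest rate."""
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rates[g] / best for g in rates}

# Hypothetical counts: 40 of 100 candidates selected in group_a,
# 25 of 100 in group_b.
ratios = impact_ratios(
    selected={"group_a": 40, "group_b": 25},
    total={"group_a": 100, "group_b": 100},
)
# group_a has the highest rate (0.40), so its ratio is 1.0;
# group_b's ratio is 0.25 / 0.40 = 0.625
```

A ratio well below 1.0 for a category is the kind of result an audit would flag for scrutiny, which is why employers need the underlying selection data for every automated tool in use.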

For founders and operators, the AI governance framework that matters most is the one built internally before a regulator or a plaintiff asks to see it. That framework should address how the organization has inventoried AI tools in use, what data each tool processes, whether human oversight is applied at critical decision points, how the organization responds to complaints related to AI-generated outcomes, and what documentation supports the legitimacy of each AI use case. Organizations that build governance frameworks proactively are better positioned to respond to regulatory inquiry, better positioned to negotiate terms with enterprise customers who conduct vendor AI audits, and better positioned to limit liability when an AI-related claim arises — because the documentation of a deliberate process is itself a defense.

Organizations that build AI governance frameworks proactively are better positioned to respond to regulatory inquiry, limit liability, and negotiate enterprise terms.
With composed counsel,
Christopher Moye
ATTORNEY · ADMITTED IN NEW YORK
[1] This article is for general informational purposes and does not constitute legal advice. AI law is a rapidly evolving field, and the legal positions described here reflect the state of the law as understood at the time of writing. Analysis of specific situations requires current counsel.
[2] Attorney advertising under NY Rules of Professional Conduct § 7.1. Prior results do not guarantee similar outcomes.
Set in Cormorant Garamond · Inter · JetBrains Mono
Moye Law, PC · Harrison, NY