A view into current trends and regulations in the US and Latin America.
As artificial intelligence continues to reshape the digital landscape, deepfake technology has emerged as one of the most complex challenges at the intersection of AI innovation and intellectual property law. While often associated with celebrity scandals or explicit content, deepfakes pose far broader and more serious risks to businesses across all industries, creating new legal, financial, and reputational concerns that demand immediate attention.
Understanding Deepfakes in the Modern Business Context
A deepfake is a form of manipulated media—whether video, audio, or still images—created using artificial intelligence that mimics or alters a person’s likeness and voice to create hyper-realistic depictions of someone saying or doing something they never actually did. This technology has evolved rapidly, becoming increasingly sophisticated and accessible to the general public.
The business implications extend far beyond entertainment or social media. Companies now face risks from deepfakes being used to fabricate product endorsements, spread misinformation, or impersonate key executives in ways that can mislead the public, investors, or employees. Recent examples include deepfakes used to trick staff into wiring funds or disclosing confidential information, as well as sophisticated phishing attacks in which deepfake audio or video lends credibility to seemingly legitimate messages from managers requesting sensitive actions.
United States: Federal and State Legislative Action
The United States is experiencing a sharp uptick in legislative activity at both federal and state levels aimed at regulating AI-generated media that uses an individual’s image, voice, or likeness.
A major federal development is the “Take It Down Act,” which amends Section 223 of the Communications Act to make it unlawful to use an “interactive computer service” to knowingly publish an intimate visual depiction (including digital forgeries) of an identifiable person, with an intent to cause harm to such identifiable person. The law applies when the person had a reasonable expectation of privacy or, in the case of a forgery, when published without consent. For minors, the only requirement is that the visual depiction is published to abuse, humiliate, harass or degrade the minor, or arouse or gratify the sexual desire of a person. Violators could face fines or imprisonment for up to two years, or three years if a minor is involved.
Congress has also reintroduced the “NO FAKES Act,” which proposes a federal right of publicity, a right currently only recognized at the state level. This bill would require consent before creating a “digital replica,” defined as a highly realistic, computer-generated, identifiable voice or visual likeness. Importantly, it would also obligate online service providers to maintain tools for detecting and removing such replicas in response to takedown notices.
At the state level, several jurisdictions are stepping in with their own regulations. States like Arkansas, California, Illinois, and Tennessee have updated their right of publicity laws to explicitly prohibit unauthorized use of "digital replicas." California has also passed the California AI Transparency Act, which requires large AI developers to watermark AI-generated images and offer free public detection tools to help consumers distinguish synthetic from real content.
Regional Regulatory Landscapes in Latin America
Across Brazil, Colombia, Mexico, and Argentina, a clear trend is emerging toward transparency and consent as foundational principles for AI in marketing and business operations.
In Brazil, while comprehensive AI legislation continues to develop, the country's Lei Geral de Proteção de Dados (LGPD), similar to Europe's GDPR, already demands explicit consent for data processing, a requirement that extends to data used for AI-generated content. Recent electoral resolutions have also prohibited political deepfakes and mandate clear labeling for any AI-edited content, establishing precedents that extend to commercial applications.
Colombia is taking a risk-based approach to AI regulation, with proposed laws categorizing systems by their potential for harm. Virtual influencers and similar applications will likely be classified as “limited risk,” requiring clear disclosure that users are interacting with AI. Colombia’s robust data privacy framework also means privacy impact assessments are crucial if AI content uses data derived from real individuals.
Mexico is pushing for explicit consent when using AI to mimic human voices or likenesses, partly driven by concerns from voice actors and other creative professionals. Proposed intellectual property legislation emphasizes clear labeling of AI-generated content and proper authorization for data used to train these systems.
In Argentina, while early legislative efforts focused on addressing election deepfakes or non-consensual images, proposed legislation now directly regulates influencer activity, including provisions for transparency regarding AI-generated images in advertising.
Business Challenges
The proliferation of deepfake technology creates multiple risk vectors for businesses. Reputational harm can occur when false content featuring company executives or spokespersons spreads across social media platforms. Financial risks emerge through sophisticated fraud schemes that use deepfake technology to impersonate decision-makers or bypass security protocols.
Legal liability represents another significant concern. Companies using AI-driven content in marketing, training, or customer engagement must navigate an evolving legal landscape while ensuring they don’t inadvertently violate emerging regulations around consent, transparency, and intellectual property rights.
The challenge is particularly acute for businesses leveraging digital marketing tools like virtual influencers and AI-generated content. These innovative strategies, while offering new opportunities for engagement and cost efficiency, must be implemented with careful attention to compliance requirements and ethical considerations.
Practical Strategies for Business Protection
Leading companies are implementing comprehensive strategies that fall into four key categories:
Crisis Response Planning. Companies are integrating deepfake-specific risks into their broader incident response strategies. This includes drafting pre-approved communications for employees, media, and business partners in case harmful deepfakes surface. Legal, communications, and cybersecurity teams are collaborating to create plans that allow for swift investigation, takedown, and mitigation.
Staying Ahead of Legislation. Forward-thinking businesses actively track emerging AI and deepfake laws at federal, state, and international levels. This proactive approach helps companies anticipate how new legislation might impact their use of AI in products, services, or marketing, allowing them to adapt gradually rather than scrambling for compliance later.
Establishing Internal AI Governance. Companies are developing robust internal policies that clearly define acceptable uses of AI, disclosure requirements for consumer-facing AI-generated content, and guardrails around the use of third-party likenesses. This includes implementing internal clearance protocols to ensure proper consent and enhancing employee training on phishing, impersonation, and deepfake threats.
Updating External Policies and Contracts. Externally, businesses are revising website terms of use to prohibit scraping, unauthorized AI use, and misuse of likenesses or media content, while also disclosing how AI is used to generate or enhance content on their platforms. Takedown policies are being updated to give rightsholders a mechanism to submit notice of unauthorized use of publicity rights on company platforms. Vendor and service agreements are also being revised to include provisions that restrict or govern the use of AI when companies receive services, especially when such services involve the use or creation of company intellectual property, trade secrets, or sensitive customer data.
Regional Considerations and Best Practices
For companies operating in Latin America or conducting business in the region, additional strategies are proving essential:
Strengthening Transparency and Consent Practices: Businesses are adapting their marketing and content creation practices to comply with new and expected regulations. When using AI to generate advertisements, virtual influencers, or branded content, many companies are including clear disclosures such as “AI-generated” or “virtual character,” especially for consumer-facing content.
There’s also growing emphasis on obtaining clear, informed consent from individuals whose likenesses—faces, voices, or styles—are used to train or produce AI content. This often requires updating media release forms and internal review protocols to specifically address generative AI applications.
Investing in Content Authentication Technology: To protect their brands from impersonation and build public trust, companies are exploring tools that add digital “signatures” or “stamps” to original content. These act as digital certificates for videos, images, or audio, making it easier to verify authenticity and detect tampering, which is particularly valuable in markets where misinformation spreads quickly and brand credibility is paramount.
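The "digital stamp" idea above can be sketched in a few lines of code. The snippet below is a minimal illustration under stated assumptions, not a production scheme: it uses a shared-secret HMAC from Python's standard library for simplicity, whereas real provenance systems (such as C2PA-style content credentials) rely on asymmetric signatures and embedded metadata. The key and media bytes are invented for the example.

```python
import hashlib
import hmac

# Hypothetical signing key held by the content publisher (illustrative only;
# production systems would use an asymmetric private key, not a shared secret).
SIGNING_KEY = b"example-brand-signing-key"


def stamp_content(media_bytes: bytes) -> str:
    """Return a hex 'stamp' binding the publisher's key to this exact content."""
    return hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).hexdigest()


def verify_content(media_bytes: bytes, stamp: str) -> bool:
    """Recompute the stamp; any edit to the media changes it and verification fails."""
    expected = stamp_content(media_bytes)
    return hmac.compare_digest(expected, stamp)


original = b"official product video bytes"
stamp = stamp_content(original)

assert verify_content(original, stamp)             # authentic content verifies
assert not verify_content(original + b"!", stamp)  # tampered content is detected
```

The design point the sketch captures is that the stamp is bound to the exact bytes of the content: verifying authenticity reduces to recomputing and comparing, so even a one-byte alteration is detectable without any human judgment about whether the media "looks" real.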
The Path Forward
As deepfake technology becomes more sophisticated and accessible, the legal and regulatory landscape continues to evolve rapidly. Companies across all industries need to stay ahead of these developments, regardless of their current use of AI technologies. The global nature of deepfake technology means businesses must understand regulatory approaches across multiple jurisdictions and implement comprehensive strategies that address both current risks and anticipated future challenges.
Success in this environment requires combining transparency efforts with robust internal governance and risk mitigation strategies. By doing so, businesses can better navigate the complex legal landscape while reinforcing consumer trust in a world where not everything we see or hear is what it seems. The key is proactive preparation: understanding the risks, staying informed about regulatory developments, implementing appropriate safeguards, and maintaining the flexibility to adapt as the technology and legal landscape continue to evolve. Companies that take these steps now will be better positioned to leverage AI innovations responsibly while protecting their brands, stakeholders, and customers from the potential harms of malicious deepfake use.