The launch of ChatGPT in November 2022 sparked intense debate about the potential damage that uncontrolled generative artificial intelligence systems might cause. As this conversational AI model demonstrated unprecedented capabilities, concerns mounted over how such technology could be misused to spread misinformation, automate online abuse, or support illegal activities. Yet even as the US looks to other countries and prominent industry voices to decide if and how it ought to regulate AI, some experts argue that new specialized regulation may matter less for mitigating current harms than applying the laws already on the books.

Many believe that existing content moderation, cybersecurity, privacy, and intellectual property laws can be applied to regulate AI, and that new AI-specific policies are therefore unnecessary. They concede that this approach could prove difficult: companies working with emerging technologies face numerous regulatory challenges, including the "moving target" problem of keeping policies relevant amid rapid advances, the difficulty of explaining complex subject matter around data and algorithms to policymakers, and the need to prevent digital tools from enabling harm even when deployed with good intentions. However, finding common ground on laws designed for a still-emerging industry could prove even more challenging.

On the other hand, well-designed regulations could benefit ethical AI developers – especially smaller startups aiming to compete with tech giants. Thoughtful oversight and accountability guardrails might counteract the network effects and data advantages held by companies like Google and Meta. Regulation that encourages industry-wide best practices through targeted subsidies and adjusted liability rules could further support responsible innovation among startups with limited resources. Ultimately, the right regulatory approach requires striking a balance among competing answers to complex questions, and the stakes could not be higher.

Approaches to AI Regulation

As early as 2016, jurisdictions at the forefront of technological advancement began assessing ways to mitigate the risks associated with AI systems. By 2021, more than 1,600 AI governance policies had been drafted. China, the United Kingdom, Japan, and the European Union have developed comprehensive regulatory frameworks. Momentum to establish oversight guardrails accelerated in April 2021 when the EU proposed its Artificial Intelligence Act – one of the first attempts to regulate all AI systems horizontally by classifying them into different risk categories. The law aims to impose obligations and restrictions on providers and users of AI that vary with each system's intended purpose and potential for harm. While the EU's proposal awaited approval, China formalized its own set of AI regulations with a similar focus on risk-proportionate accountability.
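
To make the Act's risk-based structure concrete, the toy sketch below maps a few illustrative intended purposes to the four proposed risk tiers and the kind of obligation each tier attaches. The tier names follow the EU proposal; the purpose examples and obligation summaries here are simplified assumptions for illustration, not legal guidance.

```python
# Toy illustration of the EU AI Act's risk-tier logic. Tier names follow
# the proposal; the example purposes and obligations are simplified and
# should not be read as legal advice.

RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring by public authorities"],
        "obligation": "prohibited outright",
    },
    "high": {
        "examples": ["credit scoring", "hiring screening"],
        "obligation": "conformity assessment, risk management, human oversight",
    },
    "limited": {
        "examples": ["customer-facing chatbot"],
        "obligation": "transparency: disclose that users are interacting with AI",
    },
    "minimal": {
        "examples": ["spam filter"],
        "obligation": "no new obligations; voluntary codes of conduct",
    },
}

def obligations_for(purpose: str) -> str:
    """Look up the tier and example obligation for an intended purpose."""
    for tier, info in RISK_TIERS.items():
        if purpose in info["examples"]:
            return f"{tier}: {info['obligation']}"
    return "minimal: no new obligations; voluntary codes of conduct"

print(obligations_for("hiring screening"))
# -> high: conformity assessment, risk management, human oversight
```

The point of the structure is that obligations scale with a system's intended purpose and potential harm rather than with the underlying technology.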

The US has lagged in enacting broad AI policies. As of this writing, the US has no single law specifically regulating artificial intelligence systems. Lawmakers have introduced several narrow, sectoral bills focusing on issues like algorithmic discrimination, and proposals to establish a federal AI oversight agency are gaining traction. In lieu of comprehensive oversight, existing civil rights statutes, consumer protection regulations, and cybersecurity policies still apply to AI systems and constrain misuse to some degree. As global powers sprint ahead in regulating AI, pressure mounts on the US to take a clear strategic stance – one that balances ethical progress and international competitiveness while protecting the public interest and preserving room for innovation.

The Velocity Problem

The rapid pace of AI advancement exacerbates regulatory challenges – a dynamic often called the velocity or "Red Queen" problem. Existing rules struggle to keep up with the speed of technological progress in the field, sparking concerns that policies could quickly become obsolete or stifle innovation if not carefully designed. This velocity dilemma explains much of the wariness toward regulation among emerging tech companies – especially startups aiming to experiment and compete in a volatile landscape. For instance, many firms in the EU have argued that the landmark AI Act ties their hands by restricting certain use cases and imposing often impractical transparency standards. Other stakeholders counter that self-regulation is insufficient given AI's societal impacts and the pressures of free-market competition. Still, the breakneck pace of innovation in generative AI and adjacent technologies currently outstrips most policymakers' oversight capabilities, so tensions around regulating "moving targets" will likely persist. More adaptive governance frameworks that emphasize best practices over rigid technical requirements could ease this tension, though open questions around enforcement and international alignment complicate matters. Ultimately, dynamic, thoughtful multi-stakeholder collaboration that balances ethical stewardship with sustained progress will prove essential.

Data Security and Privacy

Thoughtful oversight of data protection and security could also benefit AI companies. With no overarching international consensus or treaty guiding data protection standards, national regulators adopt widely varying, often conflicting governance standards – regulations that naturally address the needs of their specific populations and protect the interests of their domestic businesses. This poses complex compliance challenges for global AI firms. For instance, while the EU's General Data Protection Regulation (GDPR) represents a relatively harmonized regulatory framework, data rules in the United States are fragmented across the states and sector-specific across healthcare, finance, and retail. This lack of international cohesion gives AI developers more flexibility in some regards – unlike in the EU, no single set of strict top-down data handling requirements exists in the US. That does not mean companies have free rein, however; myriad regulations like breach disclosure laws still require security safeguards and transparency. Well-designed regulations could introduce shared best practices surrounding privacy and ethics while allowing innovation to continue responsibly. Because AI relies heavily on data, policymakers have a key role to play in clarifying standards around issues like consent, anonymization, and accountability.
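
To illustrate the kind of baseline practice such standards might codify, here is a minimal sketch that pseudonymizes a direct identifier before a record enters a hypothetical training pipeline. The field names and salt handling are illustrative assumptions; note that under the GDPR, keyed hashing is pseudonymization rather than anonymization, so the resulting record is still personal data and still requires a lawful basis for processing.

```python
# Minimal sketch: replace a direct identifier with a keyed, irreversible
# token before storage. Field names and salt handling are hypothetical.
import hashlib
import hmac

SECRET_SALT = b"store-and-rotate-this-in-a-key-vault"  # placeholder secret

def pseudonymize(value: str) -> str:
    """Map an identifier to a stable token; irreversible without the salt."""
    return hmac.new(SECRET_SALT, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "user@example.com", "age_band": "25-34", "query": "..."}
safe_record = {
    "user_token": pseudonymize(record["email"]),  # stable join key, no raw PII
    "age_band": record["age_band"],               # already coarse-grained
    "query": record["query"],                     # may itself contain PII; review separately
}
```

Under the GDPR this counts as pseudonymization, not anonymization (see Recital 26) – precisely the kind of distinction that clearer AI-era standards could help developers apply consistently.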

Subject Matter of Regulation

Clarifying exactly which aspects of AI systems any new oversight frameworks should cover presents another potential win for AI enterprises. Solving this "subject matter challenge" involves forging a consensus on whether AI-focused policies should regulate scopes of service, define harm potential and risk categories, assign liability for damages, or tackle other concerns. The ambiguity surrounding the appropriate subject matter for regulation is a mixed bag for AI developers. The lack of clear liability models or definitions of unacceptable system behavior gives companies more room to experiment without strict legal accountability guardrails in place. This does not imply firms can act irresponsibly without consequence; rather, it signals room for further clarification on standards and enforcement regimes. Well-designed regulations would establish better norms and transparency expectations while sustaining innovation. Progress, however, depends partly on a clear delineation of regulatory reach. Should policies govern system internals like data and algorithms, or just outputs and use cases? Should they prioritize privacy, bias, safety, or other issues? Resolving these open questions will prove critical for policymakers aiming to craft judicious, relevant AI laws that responsibly uphold the public interest.

Conclusion

The AI regulatory landscape is evolving rapidly. Although no single scheme currently governs AI and other generative technologies in the US, the situation is fluid, and Congress could enact a law tailored to AI at any time. Until then, AI will be regulated by a patchwork of existing federal and state laws. It is best to consult a law firm specializing in emerging technologies such as AI to understand the applicable compliance requirements. An AI law firm can also help you understand the present legal framework, the types of legal documents required, and the actual drafting of those documents. Spending resources on timely legal advice is one of the best investments you can make in your AI business; cutting corners in this area can prove costly in the long term.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.