During Ward and Smith’s annual In-House Counsel seminar, Mayukh Sircar, a cybersecurity, data privacy and technology attorney, shared comprehensive guidance on the strategic role of Artificial Intelligence (AI) in the modern business landscape, the key risks associated with implementation, the evolution of AI regulations, and the playbook for AI governance.
“Mayukh was well on his way to a research PhD, having earned his Master of Science in Physiology and Biophysics from Georgetown, but luckily for us, he discovered a passion for intellectual property and decided to become an attorney,” said Laura Hudson, the firm’s Chief Marketing Officer.
To set the cadence for the event, which was focused on providing attendees with practical, deployable strategies for AI governance, Sircar began with an outline of the various forms of AI technologies. The most common and “primitive” form of AI is automation, which executes pre-defined, rule-based tasks aimed at enhancing efficiency.
“Think of this like a thermostat that turns the heat on when the temperature hits a certain level,” Sircar explained. Other examples include workflow approvals, data entry, and chatbots.
“The legal risks with automation are relatively low, unlike with Generative AI, which can create potential IP infringement and factual inaccuracies,” added Sircar. “We’ve all heard about the hallucinations, data privacy violations, and breaches of confidentiality.”
Generative AI refers to models that create new content based on patterns and structures in large amounts of existing data. “This is a reactive tool that needs human prompting at each step…it can’t independently verify its output,” commented Sircar. “These are the systems that many are already familiar with, like ChatGPT, Gemini, and Claude.”
Agentic AI is an emerging category. This technology can autonomously pursue goals, make decisions, and execute tasks with minimal human intervention. Rather than simply reacting to a prompt, Agentic AI can develop strategies, execute plans, and adapt to changing outcomes.
Examples of Agentic AI include self-driving cars and Virtuoso QA, an autonomous tool that performs quality assurance for software development. “This magnifies the risks I mentioned with Generative AI, as it adds new layers of legal agency, delegation of authority, and ultimately, accountability for the actions of the tool, which often have binding legal effects,” noted Sircar.
Writing the Arrangement: What Regulations Apply Where
Staying current with the evolving regulatory landscape is challenging, but essential. “AI is no longer a future state technology. It is being increasingly integrated into multiple business sectors, so the role of the legal department has shifted from reactive gatekeeper to proactive strategic advisor,” said Sircar.
Legal teams are tasked with ensuring AI adoption is both strategic and defensible. “Understanding the existing regulations is one of the first steps that the legal department must take to build the guardrails that allow the business to innovate responsibly,” Sircar explained.
Similar to data privacy, AI rules are constantly changing in response to the technology. “There’s just not a single standard,” commented Sircar, “so the landscape of global AI regulations is highly complex. The only way to balance innovation with accountability and compliance across jurisdictions is to leverage a coordinated approach.”
European Union
The EU AI Act is often cited as the landmark jurisdictional approach. A comprehensive, risk-based framework, the Act applies across all business sectors. AI systems are classified into categories based on risk (Unacceptable, High, Limited, or Minimal), and the obligations scale accordingly.
“If you’re doing business in the EU and you have an AI tool, the EU AI Act is going to apply, much like GDPR,” mentioned Sircar. “This forces global companies to follow these standards, which are considered very stringent. That said, the EU Commission proposed amendments aimed at simplifying compliance for small- and mid-sized companies just this week.”
AI in the UK
The UK is pursuing a pro-innovation, context-based approach. Instead of creating new AI laws, the idea is to empower existing regulators, such as the Information Commissioner's Office, to manage the technology within their existing remits.
China
“China is the other major player, and their approach is not surprising as it is driven by state control,” noted Sircar. The regulations are focused on algorithmic transparency, content moderation, and explicit consent from the user.
The Cyberspace Administration of China leads enforcement actions with a stated intent of ensuring social and political stability.
United States
Similar to data privacy, the regulatory approach to AI in the US varies state to state. California, Colorado and Illinois, for example, are advancing their own privacy and automated decision-making laws.
Federal agencies are issuing guidance under existing statutes. “To elaborate on that, the FTC handles unfair and deceptive trade practices associated with AI, while the EEOC is offering guidance and bringing enforcement actions, since it is responsible for policing discrimination,” said Sircar.
Considering the variance of regulations across the country, a best practice for organizations to follow is to benchmark AI governance against the strictest applicable standards.
Common Regulatory Principles
A number of core principles are beginning to emerge in AI regulation. Transparency is a key issue; notably, the EU AI Act requires labeling of deepfakes.
“The FTC has made it clear that the deceptive use of AI for advertising is a violation,” added Sircar. “The general principle is that people have the right to know if they’re dealing with an AI tool.”
Fairness and non-discrimination is an emerging theme. The EU AI Act mandates bias detection and mitigation for high-risk systems, such as those used in hiring and resume screening. The US has issued similar guidance, with the EEOC affirming that the use of a biased AI tool creates a direct route to disparate impact claims under Title VII.
A real-world example occurred when Amazon scrapped an AI recruitment tool during internal testing after it showed bias against women. Because software development and some other technology roles were traditionally male-dominated, the system was inadvertently downgrading female candidates.
Accountability is another regulatory theme: the EU requires formal risk management systems for AI classified as high-risk. In the US, the National Institute of Standards and Technology has published an AI Risk Management Framework.
“This is voluntary but it’s rapidly becoming the de facto standard for demonstrating responsible AI governance. So, if you’re using AI, that’s something to either look forward to or not look forward to,” joked Sircar.
Human oversight is another theme that is increasingly top of mind. The EU and the Chinese are fairly aligned in this regard, stated Sircar: “EU regulations mandate that a human must be able to intervene and override high-risk systems. In China, AI-generated content must be reviewed by a human before it’s published.”
Even when AI regulations are in place, Sircar expects data privacy and protection laws to continue to be an underlying theme. The base for data privacy in the EU is the GDPR; in the US, the California Privacy Rights Act provides certain rights to consumers related to automated decision-making.
Navigating Dissonance: Legal Risks
For organizations considering using AI, or that have already implemented the technology, there are several high-priority legal risks. Jurisdictional compliance presents a challenge due to the patchwork of regulations across the US.
“As I previously mentioned, benchmarking your internal governance against the most stringent possible standard is going to be the most defensible, efficient strategy for broad compliance,” noted Sircar. “That will most likely be the EU AI Act.”
Intellectual property presents a risk, as the business may spend time and effort creating assets that it has no ability to own. “The US Copyright Office has made it clear that works generated solely by AI lack the human authorship necessary for copyright protection,” Sircar explained.
The rules for a work created by a human and then modified by AI are clear as mud. "There are a lot of complex questions about intellectual property," added Sircar. "As an example, if you're using a model that is exclusively trained on content from Time Magazine, there may be significant infringement risks."
Using a public facing AI tool presents data privacy and confidentiality risks. If an employee decides to paste sensitive customer personal information or a draft of a new patent application into the AI system, the result could be a data breach or the elimination of trade secret/patent protection.
Algorithmic bias poses a risk, as the Amazon example illustrates; AI trained on data that reflects existing biases can amplify them.
Contractual liabilities create another risk for organizations, as standard vendor agreements are likely insufficient. "The standard SaaS agreement is not fit for the AI era…many contracts fail to allocate risk for AI-generated errors, infringement, or data misuse," Sircar said. "AI addendums are becoming more prevalent."
Attorneys are required to understand the risks of relevant technology, as outlined in American Bar Association Model Rule 1.1 on competence. Under Rule 1.6, a breach resulting from inputting confidential client information into an AI tool could lead to professional discipline.
“We think of AI like a young child. It wants to give you an answer and make you happy, so if it doesn’t know something, it’s going to very confidently make something up,” noted Sircar. “Our professional responsibility as attorneys requires us to review the information and verify the research.”
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.