Last week, I had the privilege of attending one of the Midwest's largest conferences dedicated to AI developers, users, and enthusiasts: Cincy AI Week. During the three-day event, which brought together over 950 local professionals, I spoke on a panel entitled "Managing Risk in the Age of AI and Automation."
Here are six important observations I shared during that panel:
1. Organizations need to shift their risk posture from reactive to proactive.
AI technologies are increasingly influencing decision-making, data handling, and customer interactions. Accordingly, legal risks – from security incidents and data breaches to algorithmic bias – are no longer hypothetical, but immediate. Organizations must embed legal and ethical oversight into each stage of AI development and deployment, rather than addressing issues only after they arise. This means engaging legal counsel early, implementing clear accountability frameworks (see below), and ensuring transparency and auditability in AI systems. Doing so not only mitigates risk, but also builds trust with regulators, clients, and stakeholders in a rapidly evolving legal landscape.
2. Data governance, vendor risk, and internal misuse remain major vulnerabilities for organizations adopting AI tools.
As organizations rapidly adopt AI tools, many overlook vulnerabilities in data governance, vendor management, and internal misuse, each of which creates significant legal and regulatory exposure. Inadequate data governance can lead to the unauthorized use of sensitive or non-compliant data in model training, which could violate data protection laws such as the GDPR, the CCPA, HIPAA, and various state consumer privacy laws. Meanwhile, reliance on third-party AI vendors without thorough due diligence or contractual safeguards can expose organizations to hidden liabilities such as intellectual property infringement and undisclosed model risks. Internally, the misuse of AI – whether through employee error, lack of oversight, or improper prompts – can result in biased outputs, misinformation, or breaches of confidentiality. Without a comprehensive legal strategy to address these gaps, businesses risk serious reputational, financial, and legal consequences.
3. Legal and technical teams need to collaborate on AI risk management.
Effective AI risk management demands close collaboration between legal and technical teams because the challenges posed by AI straddle both legal obligations and technical complexities. Legal teams bring critical expertise in regulatory compliance, intellectual property, privacy, and liability. Technical teams, on the other hand, understand how AI models are built, trained, and deployed, giving them insight into a model's limitations and potential for unintended outcomes. Without alignment, organizations risk (a) legal teams overlooking how a system actually functions, and (b) technical teams missing key regulatory implications. By working together from the beginning of the AI development or adoption lifecycle, these teams can design and procure AI systems that are not only innovative and efficient, but also legally sound, transparent, and defensible under regulatory scrutiny. This integrated approach is necessary to mitigate risk, ensure accountability, and maintain trust in an increasingly AI-driven environment.
4. NIST and ISO frameworks are a great starting point for AI-specific cybersecurity planning.
The National Institute of Standards and Technology (NIST) AI Risk Management Framework offers structured guidance on identifying, assessing, and mitigating risks unique to AI systems, such as data poisoning, model inversion, and adversarial attacks. Adapting cybersecurity policies, such as those modeled on ISO/IEC 27001 or the NIST Cybersecurity Framework, to address AI-specific vulnerabilities ensures a more resilient security posture. In-house and outside counsel play a critical role in this process by ensuring that these frameworks are aligned with regulatory requirements and contractual obligations, particularly in sectors handling sensitive or high-risk data. Organizations that integrate these frameworks into their AI development lifecycle are better positioned to manage evolving threats while demonstrating compliance and due diligence to regulators, customers, and stakeholders.
5. Transparency and explainability are essential for managing AI risk.
Legal scrutiny around algorithmic decision-making has intensified over the last two years. Transparency and explainability are not just ethical ideals; they are practical safeguards. Transparent AI systems allow organizations to understand how decisions are made, which is essential for identifying and mitigating bias, ensuring compliance with anti-discrimination and data protection laws, and responding effectively to audits and litigation. Explainability strengthens accountability by enabling organizations to justify automated decisions to regulators, courts, and impacted individuals in high-risk environments like finance, healthcare, and employment. Lack of explainability, on the other hand, can lead to regulatory penalties, reputational damage, or invalidated outcomes. Building AI systems with transparency and explainability from the outset is a key step in reducing legal risk and fostering trust.
6. Security-first cultures do not need to slow innovation.
Building a security-first culture does not mean stifling progress. Instead, a security-first approach means building smart, scalable safeguards into the innovation process from day one (in other words, security-by-design). By aligning legal, security, and technical teams early in the development or adoption lifecycle, organizations can proactively address risks without creating bottlenecks. Clear policies, ongoing employee training, and cross-functional collaboration through an AI Governance Committee ensure that security is treated as a shared responsibility instead of a final checkpoint. This approach empowers teams to innovate with confidence, knowing that privacy, compliance, and risk management are integrated rather than obstructive.
As AI reshapes how organizations operate, the intersection of cybersecurity, legal compliance, and responsible innovation has never been more critical. The six observations I shared during Cincy AI Week underscore a central theme: managing AI risk is not just a technical or legal challenge; it is a strategic imperative that demands proactive, cross-disciplinary coordination. By embedding legal oversight in the development process, embracing transparency, leveraging established frameworks, and fostering a culture where security and innovation go hand in hand, organizations can mitigate risk and build resilient, trustworthy AI systems. In other words, as AI continues to evolve, so too must risk strategies.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.