As we have discussed in prior posts, AI-enabled smart glasses are rapidly evolving from niche wearables into powerful tools with broad workplace appeal — but their innovative capabilities bring equally significant legal and privacy concerns.
- In Part 1, we addressed compliance issues that arise when these wearables collect biometric information.
- In Part 2, we covered all-party consent requirements and AI notetaking technologies.
- In Part 3, we considered broader privacy and surveillance issues, including from a labor law perspective.
In this Part 4, we consider the potentially vast amount of personal and other confidential data that may be collected, visually and audibly, through everyday use of this technology. Cybersecurity, and data security risk more broadly, is another major and often underestimated exposure this technology creates.
The Risk
AI smart glasses collect, analyze, and transmit enormous volumes of sensitive data, often continuously and typically to cloud-based servers operated by third parties. This creates a perfect storm of cybersecurity risk, regulatory exposure, and breach notification obligations under laws in all 50 states, as well as the CCPA, the GDPR, and numerous sector-specific regulations, such as HIPAA for the healthcare industry.
Unlike traditional cameras or recording devices, AI glasses are designed to collect and process data in real time. Even when users believe they are not "recording," the devices may still be capturing visual, audio, and contextual information for AI analysis, transcription, translation, or object recognition. That data is frequently transmitted to third-party AI providers with unclear security controls, retention practices, and secondary-use restrictions.
Many AI glasses explicitly rely on third-party AI services. For example, Brilliant Labs' Frame glasses use ChatGPT to power their AI assistant, Noa, and disclose that multiple large language models may be involved in processing. In practice, this means sensitive business conversations, images, and metadata may leave the organization entirely—often without IT, security, or legal teams fully understanding where the data goes or how it is protected.
Use Cases at Risk
- Hospital workers making rounds with teams equipped with AI glasses that access, capture, view, and record patients, charts, wounds, and family members in electronic format, triggering obligations under the HIPAA Security Rule and state law
- Financial services employees wearing AI glasses that capture customer financial data, account numbers, or investment information
- Any workplace use involving personally identifiable information (PII), such as Social Security numbers, credit card data, or medical information, as well as confidential business information of the company and/or its customers
- Attorneys and legal professionals using AI glasses during privileged communications, potentially risking waiver of attorney-client privilege
- Employees connecting AI glasses to unsecured or public Wi-Fi networks, creating man-in-the-middle attack risks
- Lost or stolen AI glasses that store unencrypted audio, video, or contextual data
Why It Matters
Data breaches involving biometric data, health information, or financial data carry outsized legal and financial consequences. With AI glasses, as a practical matter, an entity is generally less likely to face a large-scale data breach affecting hundreds of thousands or millions of people. However, the exposure of sensitive patient images, discussions, or other data captured with AI glasses could be just as harmful to the reputation of a health system, for example, as an attack by a criminal threat actor, if not more so. Beyond reputational harm, incident response costs, litigation, and regulatory penalties remain significant risks.
Shadow AI (the unauthorized use of artificial intelligence tools by employees in the workplace) also poses potential data security, breach, and third-party risks. Many devices sync automatically to consumer cloud accounts with security practices that employers neither control nor audit. When an employee uses personal AI glasses for work, fundamental questions often go unanswered: Where is the data stored? Is it encrypted? Who has access? How long is it retained? Is it used to train AI models?
Finally, the use of AI glasses can undermine a powerful data security tool: data minimization. Businesses will need to grapple with whether constant, ambient data collection and recording aligns with data minimization, a principle woven into data privacy laws such as the California Consumer Privacy Act.
Practical Compliance Considerations
- Implement clear policies: Be deliberate about whether to permit these wearables in the workplace. If so, establish policies limiting when and where they may be used, which recording features can be activated, and under what circumstances.
- Perform an assessment: Conduct security and privacy assessments of specific AI glasses models before deployment
- Understand third-party service provider risks: Review security documentation, including encryption practices, access controls, and incident response commitments
- Understand obligations to customers: Review services agreements concerning the collection, processing, and security obligations for handling customer personal and confidential business information
- Update incident response plans: Factor in wearable device compromises
- For HIPAA Covered Entities and Business Associates: Confirm that AI glasses meet HIPAA requirements
- Evaluate cyber insurance coverage: Assess whether your policy (assuming you have a cyber policy!) covers breaches involving wearable technology and AI-related risks
Conclusion
AI smart glasses may feel futuristic and convenient, but from a data security and compliance perspective, they dramatically expand an organization's attack surface. Without careful controls, these devices can quietly introduce breach risks, third-party data sharing, and regulatory exposure that outweigh their perceived benefits.
The key is to approach the deployment of AI glasses (and similar technologies) with eyes wide open, understanding both the capabilities of the technology and the complex legal frameworks that govern its use. With thoughtful policies, robust technical controls, ongoing compliance monitoring, and respect for privacy rights, organizations can harness the benefits of AI glasses while managing the risks.