ARTICLE
27 October 2025

Conversations With The Industry: Tyler Cook, Georgia Institute Of Technology

Foley Hoag LLP

Contributor

Foley Hoag provides innovative, strategic legal services to public, private and government clients. We have premier capabilities in the life sciences, healthcare, technology, energy, professional services and private funds fields, and in cross-border disputes. The diverse experiences of our lawyers contribute to the exceptional senior-level service we deliver to clients.

Tyler Cook is the Assistant Program Director at Emory University's Center for AI Learning and a Professional Fellow with Emory University's Center for Ethics. Immediately prior to these appointments, Tyler was a Postdoctoral Fellow in the Jimmy and Rosalynn Carter School of Public Policy at the Georgia Institute of Technology, where he specialized in machine ethics, AI safety, and the philosophical foundations of artificial intelligence. With a background that bridges philosophy and computer science, Tyler's research explores the intersection of technology, ethics, and society, focusing on the responsible development and deployment of advanced AI and robotics. He is recognized for his thought leadership on algorithmic bias, explainability, and the ethical challenges posed by human-robot interaction, and is a frequent contributor to academic and industry discussions on the future of AI and robotics.

Q: John Lanza - What ethical issues do you foresee becoming more prominent in the robotics space, especially with the increase in human-robot interaction?

A: Tyler Cook -

As AI and robotics become more capable and integrated into daily life, a range of ethical issues will come to the forefront. Some of these, like algorithmic bias, are already well-known in the context of machine learning, but will take on new dimensions as robots assume social roles and interact with people in more personal settings. For example, domestic robots that provide care or companionship could inadvertently reinforce social biases if their behavior is shaped by skewed training data. Social norms and etiquette vary widely, and if robots are not designed to adapt to different environments, they risk exhibiting biased or inappropriate behavior.

Other ethical challenges include explainability—ensuring robots can justify their actions, especially in situations where their behavior may not be immediately understandable to humans. Deskilling is another concern: as robots automate more complex tasks, humans may lose proficiency in certain skills, potentially leading to overdependence on technology. Human autonomy could also be undermined if robots are empowered to override human decisions based on their own programmed judgments. Finally, the field of machine ethics is becoming increasingly important as robots are deployed in ethically sensitive domains. We must ensure that AI-powered robots are equipped to handle ethical dilemmas, though there is ongoing debate about the risks and merits of pursuing machine ethics.

Q: As robotics systems become more connected and integrated with IoT, what are the biggest cybersecurity risks you anticipate, and how do you see the industry addressing these risks to ensure safety and compliance?

A: The integration of robotics with IoT brings significant cybersecurity challenges. The most pressing risk is the potential for robots to be hacked by malicious actors, which could lead to anything from minor disruptions to catastrophic consequences, depending on the robot's capabilities and the context of the attack. As robots are deployed in more critical and sensitive environments, the stakes of such attacks increase dramatically.

Addressing these risks requires robust investment in cybersecurity. This includes proactive measures like red teaming—where experts attempt to breach systems to uncover vulnerabilities before adversaries do. It's also important to consider restricting the deployment of robots in especially high-risk domains or limiting their capabilities in those settings. The industry must prioritize cybersecurity at every stage of development and deployment, recognizing that as AI and robotics advance, so too will the sophistication of potential threats.

Q: As robotics systems become more sophisticated in gathering and analyzing data, how can the industry address potential concerns related to data ethics, AI bias, and user privacy?

A: Responsible data ethics are essential for building public trust in robotics. Privacy is a major concern, as robots often collect sensitive information about users and their environments. While robots themselves may not "judge" users, the risk lies in unauthorized access to the data they collect. Encryption and other security measures are critical to protect user data, and transparency about data collection and sharing practices is equally important. Companies should go beyond legal requirements to ensure users are fully informed about how their data is used.

AI bias is another significant issue. Robots trained on biased datasets can perpetuate or even amplify harmful stereotypes. Mitigating this risk requires careful curation of training data and ongoing efforts to identify and correct embedded biases. This is a complex challenge, but it is essential for ensuring that robotics technologies are fair and equitable.

Q: If robots could have hobbies, what kind of hobbies do you think they'd pick up, and how might they compete or collaborate with humans in those areas?

A: This is a fun question, but it also raises deep philosophical issues about consciousness and value. For a robot to truly have a hobby, it would need to be capable of interest or enjoyment—qualities that presuppose some form of consciousness or sentience. If we imagine a future where robots possess these capacities, predicting their hobbies becomes a fascinating challenge. Would they enjoy activities they were programmed to like, or would new forms of enjoyment emerge?

In terms of competition and collaboration, we already see AI outperforming humans in domains like chess and even simulated dogfights. It's likely that robots will eventually surpass humans in many physical and cognitive activities. However, research suggests that human-machine teams often outperform either humans or machines alone on certain tasks. The future will likely see a mix of competition and collaboration, with each side bringing unique strengths to the table.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
