In July 2023, three independent designers filed a lawsuit in California against Shein, the popular online fashion retailer. They allege that Shein sold exact copies of their work, infringing their copyright and violating the United States Racketeer Influenced and Corrupt Organizations ("RICO") Act.

The RICO Act was originally enacted to target organised crime, but it also provides for civil action against "racketeering", which includes certain acts relating to criminal copyright infringement.

The designers further allege that Shein uses a "secret" algorithm to manipulate market data and search results and to unfairly drive out competitors, leading to monopolistic practices. This legal action is important because it will provide a glimpse into the stance that courts may take in future when regulating AI, and will assist in the development of recommendations on the ethical use of AI systems.

For example, it is alleged that Shein's algorithms have been programmed to generate false or misleading information on the Shein app regarding product popularity, customer reviews and pricing trends. By artificially inflating its own performance metrics and suppressing negative feedback, Shein could have created a skewed perception of its products' desirability and quality.

Such manipulation of market data could have severe implications, including deceiving consumers into making purchasing decisions based on inaccurate or biased information. This not only undermines consumer trust but also prevents competitors from competing on a level playing field. By distorting market data, Shein's AI algorithms may have influenced customers' purchasing decisions, potentially giving the company an unfair advantage.

The alleged use of AI algorithms to manipulate market data therefore highlights the potential risks and challenges associated with deploying advanced technologies, which some commentators argue necessitate the creation of regulation governing the responsible use of AI.

Responsible AI refers to the framework of principles and practices aimed at ensuring the fair and ethical use of AI technologies. By integrating responsible AI practices, organisations can proactively minimise the risk of legal controversies such as the current Shein lawsuit. Actions that organisations can take include:

  • Governance: the board needs to ensure that proper structures and safeguards are put in place to support the adoption of Responsible AI. These may include establishing centres of excellence, dedicated task teams and/or other structures whose focus is ensuring that AI is adopted responsibly, in keeping with the values and culture of the company, and in order to mitigate legal, technical and financial risk;
  • Policy implementation: a sound policy for the adoption of Responsible AI needs to be implemented. This would include not only mechanisms to mitigate legal, technical and financial risk, but also measures to ensure that ethical boundaries have been established based on the company's own value system;
  • Training: companies should ensure that staff are trained at various levels and that training is adapted to the roles staff members undertake as part of the company's AI initiatives. For example: (i) legal and technical teams should be trained not only on the legal and technical risks of AI adoption but also on AI ethics and financial risks; and (ii) the board of directors should be trained on both ethical and legal considerations in order to establish a culture of Responsible AI;
  • Contracting: as companies will rely on third-party service providers to deploy AI solutions, they should establish sound contracting standards to mitigate the risk of a supplier providing tools and/or solutions that give rise to claims while escaping liability through restrictive liability provisions. The usual due diligence in supplier selection should also be applied;
  • Ethical impact assessments: although not mandatory, these are a useful tool for ensuring that any projects undertaken, or AI adopted, comply with the company's policies and applicable laws;
  • Ethical reviews: as part of this, companies may wish to establish a distinct AI ethics review board, which would also approve projects based on the ethical impact assessments undertaken;
  • Pioneering industry initiatives or codes of conduct: leading companies may wish to pioneer the adoption of industry-accepted codes of conduct, including obtaining approvals from regulatory authorities such as the Information Regulator; and
  • Auditing and monitoring: as with any compliance initiative, boards should ensure that proper resources are dedicated to ensuring compliance with the interventions adopted, as well as to dealing with violations of company policies.

The Shein RICO lawsuit serves as a wake-up call for organisations to adopt responsible AI practices and address the potential legal pitfalls associated with advanced algorithms. By adhering to ethical frameworks and regulations, implementing robust data governance, conducting continuous testing and monitoring, and fostering collaboration and accountability, organisations can mitigate the risks of legal controversies arising from AI technologies. For organisations and boards that may be grappling with where to begin in implementing Responsible AI, especially in the absence of regulation, our expert team at ENSafrica has developed a Responsible AI Toolkit to help fast-track AI usage and implementation.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.