On August 7, the Omidyar Network's Tech and Society Solutions Lab and the Institute for the Future ("IFTF") launched a new toolkit, The Ethical Operating System (Ethical OS), to help technologists anticipate, mitigate, and correct the "social downsides" of technology while also maximizing its positive impacts.
The guide, which has already been piloted by 20 companies, uses checklists, scenarios, and exercises to help companies anticipate problems and to develop appropriate strategies to mitigate risk. The guide focuses on eight key "risk zones" within which "hard-to-anticipate and unwelcome consequences" may emerge:
- Truth, Disinformation & Propaganda
- Addiction & the Dopamine Economy
- Economic & Asset Inequalities
- Machine Ethics & Algorithmic Biases
- Surveillance State
- Data Control & Monetization
- Implicit Trust & User Understanding
- Hateful & Criminal Actors
Examples of the many questions highlighted in the guide include:
- What type of data do users expect you to accurately share, measure, or collect?
- What data are you collecting from users? Do you really need to collect it? Are you selling it? If so, whom are you selling it to and do users know this? How can you be more transparent about this?
- How could you design a system that encourages moderate use?
- If you are reducing human employment, how might that impact overall economic well-being and social stability?
- Who is responsible for developing the algorithm? Is there a lack of diversity in the people responsible for the design of the technology?
- How might a government or military body utilize this technology to increase its capacity to surveil or otherwise infringe upon the rights of its citizens?
- Does the technology you're building have a clear code of rights for users? Are the terms of service easy to read, access, and understand?
Companies are encouraged to ensure that these questions are highlighted across teams and incorporated, as appropriate, into product design requirements.
Notably, the guide is intended to promote early-stage conversations regarding the potential impacts of technological innovations. Efforts to ensure that product managers, engineers, and others consider the potential downsides and adverse societal impacts of new technologies are fully consistent with the responsibility of technology companies to operate with respect for human rights, as set forth in the UN Guiding Principles on Business and Human Rights.
To view Foley Hoag's Corporate Social Responsibility Blog, please click here.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.