It's 2038, and, like most businesses these days, a tech company is using artificial intelligence to screen job applicants. The system was trained on public employment records, an ostensibly unbiased dataset.
But even 20 years after sex-based discrimination was thrust into the media spotlight, the tech industry still hasn't fully corrected its gender imbalance. The job screening system "learns" that most software engineers are men and starts favoring male candidates over women.
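The mechanism behind this failure is simple: a model trained on skewed historical records will reproduce the skew. Here's a minimal toy sketch (not from the book; the dataset, function names, and 80/20 split are invented for illustration) showing how a naive screening model can end up scoring candidates on historical base rates rather than merit.

```python
from collections import Counter

# Hypothetical, deliberately naive example: historical hiring records
# where 80% of past hires were men -- the skew, not any real signal,
# is what the "model" learns.
historical_hires = (["male"] * 80) + (["female"] * 20)

def train_screening_model(records):
    """Learn per-group hire rates from historical data."""
    counts = Counter(records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def score_candidate(model, gender):
    """The score reflects nothing about skill -- only the base rate."""
    return model.get(gender, 0.0)

model = train_screening_model(historical_hires)

# Two equally qualified candidates receive different scores:
print(score_candidate(model, "male"))    # 0.8
print(score_candidate(model, "female"))  # 0.2
```

Real screening systems are far more complex, but the failure mode is the same: if gender correlates with outcomes in the training data, the model can learn that correlation even when gender is never an explicit input.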
This dangerous scenario is one of many posited in "The Future Computed," a new book published by Microsoft, with a foreword by Brad Smith, Microsoft president and chief legal officer, and Harry Shum, executive vice president of Microsoft's Artificial Intelligence and Research group.
The book examines the use cases and potential dangers of AI technology, which will soon be integrated into many of the systems people use every day. Microsoft believes AI should be developed with six core principles: "fair, reliable and safe, private and secure, inclusive, transparent, and accountable."
Nimble policymaking and strong ethical guidelines are essential to ensuring AI doesn't threaten equity or security, Microsoft says. In other words, we need to start planning now to avoid a scenario like the one facing the imaginary tech company looking for software engineers.
Microsoft suggests governments lead the way in establishing best practices for AI by integrating the technology into systems that serve the public. "While enabling more effective delivery of services for citizens, this will also provide governments with firsthand experience in developing best practices to address the ethical principles identified," the book says.
Microsoft also suggests governments invest in better ways to make data public without connecting that data to individuals. AI relies on data, and large public datasets can be a useful tool for training the technology, but only if personally identifiable information is protected.
"Additional research to enhance 'de-identification' techniques and ongoing discussions about how to balance the risks of re-identification against the social benefits will be important," the book says.
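To make the idea concrete, here is a minimal sketch of basic de-identification (not a technique from the book; the field names and salting scheme are assumptions for illustration): direct identifiers are dropped or pseudonymized, while quasi-identifiers like ZIP code and birth year remain, which is exactly why re-identification risk must still be weighed.

```python
import hashlib

# Hypothetical example. In practice the salt would be a secret value,
# and de-identification standards go far beyond this sketch.
SALT = "example-salt"

def de_identify(record):
    """Return a copy of the record with direct identifiers removed."""
    cleaned = dict(record)
    cleaned.pop("ssn", None)  # drop the most sensitive field outright
    name = cleaned.pop("name", "")
    # Replace the name with a salted hash so records stay linkable
    # across datasets without exposing the identity.
    cleaned["pseudonym"] = hashlib.sha256((SALT + name).encode()).hexdigest()[:12]
    return cleaned

record = {"name": "Ada Lovelace", "ssn": "123-45-6789",
          "zip": "98052", "birth_year": 1990, "diagnosis": "flu"}
public_record = de_identify(record)
print(public_record)  # no name or SSN, but ZIP + birth year still narrow identity
```

The remaining quasi-identifiers illustrate the book's point: stripping obvious fields is not enough, and balancing re-identification risk against the social value of the data is an open research problem.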
But, Microsoft warns, governments shouldn't go overboard. The book says regulators should avoid "rigid approaches" because not all personal information is equally sensitive and AI needs some identifying data, such as public health records, to achieve societal benefits.
Competitiveness is also a concern. AI needs data to learn and the keepers of the largest datasets in the world are a small handful of tech companies, Microsoft included. The book's authors suggest government data as an antidote to this concern, allowing challengers to the Big Tech incumbents to feed their data-hungry AI systems.
"The question of the availability of data will arise most directly when one firm seeks to buy another and competition authorities need to consider whether the combined firms would possess datasets that are so valuable and unique that no other firms can compete effectively," Microsoft says in the book.
Microsoft's recommendations boil down to an ethical code, designed to ensure governments and technologists recognize the risks associated with AI and build the technology with safeguards in mind.
"In computer science, will concerns about the impact of AI mean that the study of ethics will become a requirement for computer programmers and researchers? We believe that's a safe bet," write Smith and Shum in the book's foreword. "Could we see a Hippocratic Oath for coders like we have for doctors? That could make sense."