by Gary Walker, interviewIA Head of Product and Innovation

The controversial exit of Timnit Gebru, co-lead of Google’s ethical AI team, https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/, has put more focus on the role of bias and ethics in AI. Ms. Gebru’s mission to bring the conversation about ethics and bias into AI research is critical because it highlights inherent problems with how some AI technology is being developed and used. For example, hiring may be the single most subjective, and therefore potentially biased, activity we do at work, and it is an area where AI is seeing significant use. As good corporate citizens, we need to develop AI that is fair at a minimum, which is far from the current norm in most companies.

Software development is an endeavor where a small group of developers can determine the substance of what ends up in the hands of thousands or even millions of users. AI is transforming almost every form of software, forcing companies to implement it to remain competitive. The non-deterministic nature of AI and machine learning algorithms potentially gives companies more unchecked influence than ever. AI governance, oversight, testing, and ethics are therefore paramount to the future of human resources, finance, law enforcement, education, and healthcare, as these technologies can substantially impact people’s lives.

As a 25+ year veteran in software development and product leadership, I’ve worked in every type of software organization imaginable, from a single developer working with a startup founder to Fortune 500 enterprise software projects consisting of multiple integrated systems built by hundreds of team members. All these organizations share one common property: they are composed of human beings, who are all biased. This is a truism because being biased is an unavoidable, universal human trait, not a personal fault. It is also true that we can’t turn bias off, whether we’re sitting on a church pew or at a desk at the office (or at home, given today’s circumstances).

How Bias in AI Is Created

In the pre-AI world of software development, business logic was easier to test because its instructions were typically coded in explicit if-then statements: when the software identifies a condition as “true,” then some actions are taken.

Those “some actions” are what could negatively impact an individual in an intentional, unintentional, or even non-obvious way. This type of impact is usually easy to catch in code reviews, and the results of such code would be relatively easy to observe in testing if monitored. The problem with the status quo occurs when these effects aren’t proactively measured and monitored, or when the code is implemented in black boxes or by algorithms that don’t use if-then code, both of which are true of AI. In those cases, the effects may never come to light.
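
To make that concrete, here is a deliberately crude, hypothetical screening rule of the kind a code review should catch. The function, field name, and threshold are invented for illustration:

    # Hypothetical pre-AI screening logic; the bias is explicit and easy
    # to spot in review. The rule penalizes any employment gap, which
    # disproportionately affects caregivers and others who step away
    # from the workforce.
    def passes_initial_screen(candidate: dict) -> bool:
        # If-then business logic: a condition evaluates to true,
        # then some actions are taken.
        if candidate["employment_gap_years"] > 1:
            return False  # the "some actions" that negatively impact a person
        return True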

Bias in real-world code isn’t this obvious and is almost never the result of a single line of code; it is caused by the interaction of many lines of code, potentially spread across multiple modules. The use of AI algorithms compounds this because there are no if-then lines of code, but rather algorithms that have been trained (or manipulated) to make decisions using sample data and parameters. With the right training data, one could train an algorithm to reach almost any outcome. An MIT Technology Review article provides several examples of how bias might be introduced into AI, https://www.technologyreview.com/2019/02/04/137602/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/.
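
As a sketch of how that happens, the following trains a simple classifier on synthetic, deliberately biased “historical hiring” data. Everything here is invented for illustration; the point is the mechanism: even with the protected attribute excluded, the model learns to discriminate through a correlated proxy feature.

    # A minimal sketch, using synthetic data, of biased training data
    # producing a biased model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    # "group" is a protected attribute; "zip_code_score" is a proxy
    # correlated with it (think residential segregation).
    group = rng.integers(0, 2, size=n)
    zip_code_score = group + rng.normal(0, 0.5, size=n)
    skill = rng.normal(0, 1, size=n)

    # Historical labels encode past human bias: group-1 candidates were
    # hired less often regardless of skill.
    hired = (skill + 1.0 - 1.5 * group + rng.normal(0, 0.5, size=n)) > 0

    # The protected attribute is deliberately excluded from the features...
    X = np.column_stack([skill, zip_code_score])
    model = LogisticRegression().fit(X, hired)

    # ...yet the proxy picks up a strong negative weight, reproducing
    # the historical bias against group 1.
    print("coefficients [skill, zip_code_score]:", model.coef_[0])

No if-then statement in that code expresses the bias; it lives entirely in the training data, which is exactly why code review alone cannot catch it.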

Beyond the code itself, many other parts of commercial software development can introduce bias through the stakeholders every product requires: those who conceive the system, those who specify it, those who fund its construction, and those who market and sell it and eventually feed findings back to those who conceived it, starting the cycle over again. In this software development and productization lifecycle ecosystem, it is hard to prevent bias from impacting software, and ultimately its users, even in the if-then code world. I believe it is orders of magnitude more difficult to do so in software built on AI algorithms.

For these reasons, software companies, especially those using AI, must approach projects with the assumption that, in today’s world, every step in the process presents an opportunity to introduce bias into their system (unconscious in the best case, conscious and intentional in the worst). Further, it is the responsibility of leaders to ensure that bias is mitigated at every step. We can’t stop being biased, but with effort and training we can become more aware of bias in order to mitigate it.

The Handle of the Spear is Much Longer than the Tip

The current focus of bias mitigation is on those who develop the software and select and train the machine learning algorithms. While this is a good and necessary start, since these professionals are at the tip of the software product development spear, it alone is not sufficient to ensure consumer and user protection. Every person filling a role in the software development and productization lifecycle ecosystem is equally important in this effort.

At interviewIA we’ve built a hiring platform around three core bias mitigation factors that help our customers implement better hiring processes, evaluate candidates, and make more successful hiring decisions: keeping people at the core of the decision-making loop, keeping DEI&B (Diversity, Equity, Inclusion, and Belonging) at the table, and providing bias training. The IA in our name stands for Intelligence Amplification: the use of machine learning to help people use their innate intelligence in better ways, rather than replacing humans in the process.

To climb out of this ethical morass, we must aspire to advance societal norms by championing systems that promote diversity, equity, and inclusion. Technical means like algorithm transparency and explainability are required, but a company culture espousing bias training, people in the loop, and intentional oversight, producing fairness and ethics in decisions made by software, especially AI, is the true enabler of such aspirations. This article discusses algorithm bias, discrimination, and the need for transparency, https://www.vox.com/recode/2020/2/18/21121286/algorithms-bias-discrimination-facial-recognition-transparency.
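
To illustrate what “explainability” can look like in practice, the sketch below uses permutation importance: shuffling one feature at a time and measuring how much a trained model’s accuracy drops. The model and data are illustrative stand-ins, not a description of any particular product:

    # A minimal sketch of one transparency technique: permutation
    # importance. Model and data are illustrative stand-ins.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 3))      # e.g., skill, tenure, zip_code_score
    y = (X[:, 0] + 0.8 * X[:, 2]) > 0   # labels driven by features 0 and 2

    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Shuffle each feature in turn; a large accuracy drop means the
    # model leans heavily on that feature.
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for name, importance in zip(["skill", "tenure", "zip_code_score"],
                                result.importances_mean):
        print(f"{name}: {importance:.3f}")
    # If a proxy like zip_code_score dominates, that is a flag for human
    # reviewers to investigate before the model's decisions are trusted.

Reports like this don’t remove bias by themselves, but they give the people in the loop something concrete to oversee.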

Companies will need to thoughtfully conceive and implement governance boards, policies, and processes around their AI, and throughout their cultures, to stop the perpetuation of exclusionary, bias-accepting societal norms and views, and to reverse the under-representation of some groups. An article in the Harvard Business Review gives concrete recommendations for mitigating bias in AI, https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai.

Doing what is fair and ethical is obviously right; maybe one day soon it will be considered best practice and good business.