It’s time we take responsibility for our AI

May 21, 2024 | AI

Back in 2018, when I started working on various aspects of what I later learned is called “Responsible AI”, the perception was that these AI systems had issues of bias, fairness, equity, and so on. Some of them were going “rogue”. One such example was COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), software used to estimate the likelihood that a convicted offender would commit another crime if released. It turned out that this software often assigned higher chances of criminality to people of color than to white offenders. In reality, once released, the white ex-felons ended up breaking the law more often than the rest. This was a version of what, in humans, we would call racial profiling; the algorithm behind this software was biased against people of color.

This rogue behavior can’t be allowed. We can’t have AI systems that are so “irresponsible”. Enter Responsible AI. This seemingly new branch of AI promised solutions: in many cases, guidelines, frameworks, and principles, all meant to fix that irresponsible behavior.

But as my thinking in this area matured, I realized that this was about more than simply fixing AI systems. In fact, most of the time the unexpected and undesired behaviors of AI systems were more a reflection of us than of a rogue algorithm.

One case that struck me the most was something that could easily be laughed off and ignored as a one-off example of stupidity. Someone used AI to judge a beauty contest. Surprise, surprise: the AI found only those with white skin ‘beautiful’. The organizers had to apologize and pull back from the whole endeavor. You could say that nobody got harmed, at least not in the way people were harmed by COMPAS, and we could move on. But something about this seemed different from all the other cases.

COMPAS had a clear need and application. Our criminal justice system is overwhelmed and under-resourced; it could use a good set of tools to support its work. But what problem does an AI beauty-judging system solve? Other than novelty, what are we gaining? And yet those organizers chose to use it. Why? Because they could. This is the worst kind of reason for deploying an untested, unfair, and potentially biased AI system. It shows our lack of maturity and responsibility.

And therein lies our real problem with AI: more than a system misbehaving because of bad or biased training data, or algorithms oblivious to human values, it’s us who are making these choices. It’s easy to blame a rogue system and focus all our energy on fixing it, as just another exercise in debugging software. It’s much harder to reflect on ourselves and ask tough questions about why we are doing this, what the risks are, and who benefits.

It’s time to understand these choices. It’s time to take responsibility.


