Over the years, we’ve heard a lot about the unintended consequences of AI programmes gone wrong. We’ve seen facial recognition algorithms misidentify people as criminal suspects and chatbots that don’t quite get the subtleties of slang. These are amongst countless examples where companies meant well but still ended up having to admit to an AI faux pas.

This is not to say that companies aren’t heavily invested in Responsible AI, but it does raise the question: is Responsible AI an afterthought? AI may have reached the mainstream, but is Responsible AI second nature? It feels as though there is still work to be done to integrate Responsible AI into most organisations’ existing AI lifecycles so as to mitigate the risks of misuse and unintended consequences.

The reality is that Responsible AI needs to be second nature and a practice to live by — not a checklist at the end. Integrating all that goes into ensuring Responsible AI right from the beginning is possible, and here’s how.  

Get Intentional With Data

Being intentional with data is the very start of Responsible AI. As I’ve said before, while data and numbers may appear indisputable, the reality is that information is always a product of the context it was created in. Data is inherently biased, and the potential to teach AI systems to make critical business and ethical mistakes is very real.

This is why it is so important to bring the right data into your practice from the get-go and to give this step just as much importance as the modelling, the final predictions, and all the tweaking of data that happens along the way.

So much focus gets put on the best new models, but if your data is bad, it doesn’t matter. Data ingestion, cleaning, and transformation — and being confident in that data — need to become priority number one. Companies need to stop relying on “magic AI” and take the time to understand their data upfront in order to integrate Responsible AI practices throughout the data science lifecycle.
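
To make “priority number one” concrete, here is a minimal sketch (in Python, using pandas) of the kind of upfront checks a team might run before any modelling. The file name, column expectations, and 5% missing-value tolerance are placeholders, not recommendations.

```python
import pandas as pd

# Hypothetical dataset and column names: substitute whatever your
# ingestion pipeline actually produces.
df = pd.read_csv("customers.csv")

# Basic confidence checks before any modelling happens.
report = {
    "rows": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    "missing_by_column": df.isna().mean().round(3).to_dict(),
}
print(report)

# Fail fast if the data is not in a state you would trust a model with.
# The 5% missing-value tolerance is a placeholder, not a recommendation.
assert report["duplicate_rows"] == 0, "unexpected duplicate records"
assert all(frac < 0.05 for frac in report["missing_by_column"].values()), \
    "one or more columns exceed 5% missing values"
```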

If you have a huge dataset and you throw it into an unsupervised cluster model without paying attention, you may get clusters that misrepresent the data and skew your conclusions. And all of a sudden, you are left with something that is not a representative sample, which may lead to inaccurate, biased, and/or poor business decisions.
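
As a rough illustration, the sketch below compares each group’s share of the training data against known population shares before any cluster model is fitted. The region column, the reference shares, the feature names, and the tolerance are all hypothetical.

```python
import pandas as pd
from sklearn.cluster import KMeans

df = pd.read_csv("customers.csv")  # hypothetical dataset

# Known (or estimated) population shares for a grouping we care about.
reference_shares = {"north": 0.30, "south": 0.25, "east": 0.25, "west": 0.20}
observed_shares = df["region"].value_counts(normalize=True)

for group, expected in reference_shares.items():
    observed = float(observed_shares.get(group, 0.0))
    if abs(observed - expected) > 0.10:  # arbitrary 10-point tolerance
        print(f"Warning: '{group}' is {observed:.0%} of the data "
              f"but roughly {expected:.0%} of the population")

# Only fit the unsupervised model once representation has been reviewed.
features = df[["age", "spend", "visits"]]  # hypothetical numeric features
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
```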

There are also many examples that indicate that companies don’t dedicate enough resources to finding and using the right data. Think of how inaccurate location or GPS data can be and how many companies rely on it for delivering localised ads. It may not be life-threatening, but it’s a huge waste of money. If companies took this same approach to train self-driving cars on non-localised data, imagine what the consequences would be. The best-case scenario may be that a self-driving car causes traffic jams because it stops too frequently, and the worst case could be a serious injury. 

If companies want to embed Responsible AI right from the start, they need to identify ways to collect high-fidelity data and use it appropriately.

Checking for Fairness

It may be frustrating that even tech giants struggle when it comes to checking data for potential biases. However, there are lessons to be learned, and companies are beginning to see that more work needs to be put in before models are built. 

The question companies really need to ask themselves is: are the predictions, recommendations, or decisions made by a model fair towards all populations? This is called “group fairness”. There is also a complementary approach, individual fairness, which checks whether individuals with similar profiles are treated the same by a model.
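
As a rough sketch, a group fairness check can be as simple as comparing positive-decision rates across populations. Everything below (the data, the column names, the decision) is hypothetical:

```python
import pandas as pd

# Hypothetical scored dataset: one row per person, a protected attribute,
# and the model's binary decision.
scored = pd.DataFrame({
    "group":    ["a", "a", "a", "a", "b", "b", "b", "b"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Group fairness: compare positive-decision rates across populations
# (a demographic-parity style check).
rates = scored.groupby("group")["approved"].mean()
gap = float(rates.max() - rates.min())
print(f"Approval rate per group: {rates.to_dict()}, gap: {gap:.0%}")
# Group "a" is approved 75% of the time, group "b" only 25%: a 50-point
# gap that would need to be investigated and justified.
```

An individual fairness check works the other way round: take pairs of near-identical profiles and confirm the model gives them near-identical scores.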

Model fairness tools and reports can give users an idea of what the bias in their model looks like. Data scientists and other key practitioners can work with these resources to define and implement fairness metrics that reflect the considered decisions their organisations have made about what levels of unfairness are acceptable and unacceptable.

For example, in many instances, a 5% difference in a fairness metric may not have a significant impact on end-users if the business use case is AI-driven ad targeting. However, that same 5% may have life-changing implications when the use case is health outcomes in a healthcare screening process. Exploring fairness in relation to the applied AI use case at hand is essential to integrating Responsible AI.
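
A minimal sketch of that reasoning, with entirely hypothetical tolerances, might look like this:

```python
# The same 5-point gap is treated very differently depending on what the
# model is used for. These tolerances are illustrative, not prescriptive.
fairness_tolerance = {
    "ad_targeting": 0.05,      # a 5% gap may be commercially tolerable
    "health_screening": 0.01,  # a 5% gap is unacceptable here
}

observed_gap = 0.05
for use_case, limit in fairness_tolerance.items():
    status = "within tolerance" if observed_gap <= limit else "REVIEW REQUIRED"
    print(f"{use_case}: gap {observed_gap:.0%} vs limit {limit:.0%} -> {status}")
```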

A Holistic Approach to Responsible AI

In an organisation of any size, AI cannot be a silo — it has to be part of a broader organisational goal. Data scientists and data engineers need to champion data quality and data acquisition, but the C-suite also needs to clearly understand its organisational AI goals, which concern not just the data itself but how that data serves a broader purpose.

Responsible AI must act as a foundational principle for an organisation’s holistic AI efforts. Without responsible development, deployment, and use of AI systems, organisations run the risk of causing unintended harm and bias to a given population, among other negative consequences.

Integrating Responsible AI into your organisation’s existing AI lifecycle and workflows isn’t far-fetched. If organisations are ready to explore recommended strategies and methodologies for building systems grounded in traceability, transparency, and explainable, human-in-the-loop AI, Responsible AI can absolutely become second nature.