
Only a decade ago, Artificial Intelligence (AI) was a research dream; it has since become an indispensable commodity in the modern world, especially in computer vision and natural language processing. Should they wish, developers can now incorporate a sophisticated cat detector into their programmes with just a few API calls.
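To illustrate how low the barrier has become, here is a minimal sketch of such a detector. It assumes a recent version of torchvision and a pretrained ImageNet classifier rather than any particular hosted vision API; the file name and the keyword match on the predicted label are purely illustrative.

```python
# A minimal sketch of a "cat detector" built from a pretrained classifier.
# Assumes torchvision >= 0.13; the model choice and label check are illustrative.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()  # the preprocessing bundled with the weights

def looks_like_a_cat(path: str) -> bool:
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)[0]
    label = weights.meta["categories"][int(probs.argmax())]
    # ImageNet labels such as "tiger cat" or "Persian cat" count as a match.
    return "cat" in label.lower()

print(looks_like_a_cat("photo.jpg"))  # hypothetical input image
```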

But it doesn’t stop there. The power of AI goes far beyond that. For example, DeepMind, the Alphabet subsidiary, recently released state-of-the-art precipitation forecasting results. It has always been difficult to generate high-resolution predictions for short lead times; “nowcasting” fills the performance gap in this crucial interval – a game-changer for industries such as water management, agriculture and aviation. Interestingly, the model achieves competitive performance using an entirely different approach: it models radar data directly, without a physics-informed simulation.

Such technological advancements demonstrate a promising direction for AI: computational instruments that enhance and facilitate scientific research. While these successes should certainly be applauded, it’s important for businesses to understand how AI is implemented and to acknowledge that operating automated systems responsibly presents many challenges.

A mask of invisibility 

With incredible AI breakthroughs shared across the media, many companies and products are labelling themselves as “powered by AI” in an attempt to jump on the bandwagon. Yet however many claim the title, the reality is that most dealings with AI technologies are nearly invisible: the offers consumers receive from online retailers, or the order of films displayed on their favoured streaming service. Almost invisible, that is, until they aren’t.

There is no shortage of high-profile misapplications of AI to exemplify this. A prominent example was Twitter’s image cropping algorithm. In 2020, users discovered that the algorithm for choosing which part of an image to display on Twitter timelines was biased along gender and racial lines.

Twitter addressed the issue responsibly. Its ML Ethics, Transparency and Accountability (META) team carried out its own evaluation, which it shared in a detailed paper and blog post. Consequently, drawing on the user reports and its internal investigation, Twitter rightfully altered the design of the cropping system, handing cropping responsibility back to the user. Bucking the broader AI wave, Twitter was willing to remove AI entirely so the platform could better serve its intended purpose. Businesses can learn from Twitter: to be successful, they need to ensure they are using AI responsibly, without abusing its power.

Honesty is key 

The type of bias found in Twitter’s AI is unfortunately not an isolated case. Similar biases are often found in AI applications but are seldom as visible.

Increasingly, AI systems are being used to make important decisions, such as who is the best candidate for a job vacancy. Identifying and removing undesirable biases therefore becomes a moral necessity. If that isn’t enough to force companies to take responsibility for fair AI systems, Article 21 of the EU Charter of Fundamental Rights prohibits any discrimination based on age, sex, race and other protected attributes. Even when companies don’t deliberately deploy discriminatory systems, bias can take many forms, and operating automated systems responsibly creates many challenges.

The first hurdle to overcome is defining fairness. There are multiple reasonable mathematical definitions of fairness, and some directly contradict others. As a result, optimising for fairness can be very tricky: different people in a system will benefit from different definitions.
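To make the contradiction concrete, here is a toy sketch (the decisions and group labels are invented for illustration) comparing two common definitions on the same set of outcomes: demographic parity, which asks whether groups are selected at the same rate, and equal opportunity, which asks whether qualified people in each group are selected equally often.

```python
# A toy comparison of two fairness definitions on invented data.
import numpy as np

# Hypothetical data: group membership, true outcome, model decision.
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_true = np.array([1, 1, 0, 0, 1, 0, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 0])

def selection_rate(pred, mask):
    return pred[mask].mean()

def true_positive_rate(true, pred, mask):
    positives = mask & (true == 1)
    return pred[positives].mean()

# Demographic parity: are the groups selected at the same rate?
dp_gap = abs(selection_rate(y_pred, group == 0)
             - selection_rate(y_pred, group == 1))

# Equal opportunity: are qualified people in each group selected equally often?
eo_gap = abs(true_positive_rate(y_true, y_pred, group == 0)
             - true_positive_rate(y_true, y_pred, group == 1))

print(f"demographic parity gap: {dp_gap:.2f}")  # 0.00
print(f"equal opportunity gap:  {eo_gap:.2f}")  # 0.50
```

In this toy example the decisions satisfy demographic parity exactly while leaving a large equal-opportunity gap; a system tuned to close one gap can easily widen the other.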

But sometimes even good intentions aren’t enough. Even with suitable measurements in place, AI-enabled systems can be multifaceted and laden with feedback loops, and implementing them responsibly requires constant awareness of unintended consequences. Good application monitoring and logging are unquestionably necessary, but they are not by themselves adequate. As the Twitter case showed, it can be the individuals affected by discrimination who first identify the problem.
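As a sketch of what such monitoring might look like in practice (the threshold, the grouping and the example numbers are assumptions, not a recommendation), one simple approach is to track the selection rate of an automated decision per group in each reporting window and raise an alert when the gap grows too large.

```python
# A minimal per-group disparity monitor for an automated decision.
from collections import defaultdict

ALERT_THRESHOLD = 0.10  # illustrative value, to be chosen per application

def disparity_alert(decisions):
    """decisions: iterable of (group, accepted) pairs for one reporting window."""
    totals, accepted = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        accepted[group] += int(ok)
    rates = {g: accepted[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    if gap > ALERT_THRESHOLD:
        print(f"ALERT: selection-rate gap {gap:.2f} across groups {rates}")
    return rates, gap

# Example window: group A accepted 3/4 times, group B accepted 1/4 times.
disparity_alert([("A", 1), ("A", 1), ("A", 1), ("A", 0),
                 ("B", 1), ("B", 0), ("B", 0), ("B", 0)])
```

A check like this catches drift that standard uptime and error-rate dashboards never surface, but only for the groups and metrics someone thought to monitor.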

A system may well be instrumented for perfect observability, but this doesn’t compensate for organisations not having the data necessary to measure discrimination. There is a fine balance between protecting user privacy by prohibiting the collection and use of sensitive demographic attributes, and the need to ensure systems are non-discriminatory. If everything is private, it’s significantly harder to catch discrimination; if nothing is private, users’ privacy is violated.
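One possible compromise, sketched below under the assumption that coarse aggregate statistics are acceptable to collect, is to add calibrated noise to the per-group counts before computing disparity, in the spirit of differential privacy; the counts and the privacy budget are purely illustrative.

```python
# Estimating a selection-rate gap from noised per-group counts.
import numpy as np

rng = np.random.default_rng(0)
epsilon = 1.0  # privacy budget; smaller means noisier and more private

def noisy_rate(accepted_count, total_count, eps):
    """Estimate a selection rate after adding Laplace noise to both counts."""
    noisy_accepted = accepted_count + rng.laplace(scale=1.0 / eps)
    noisy_total = total_count + rng.laplace(scale=1.0 / eps)
    return max(0.0, min(1.0, noisy_accepted / max(noisy_total, 1.0)))

# Illustrative counts: 300/1000 accepted in one group, 410/1000 in another.
gap = abs(noisy_rate(300, 1000, epsilon) - noisy_rate(410, 1000, epsilon))
print(f"estimated selection-rate gap: {gap:.3f}")
```

The point is not this particular mechanism but the trade-off it embodies: the more noise is added to protect individuals, the harder it becomes to detect a real disparity.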

On top of the challenges mentioned above, organisations using machine learning have few benchmarks for comparison. Whilst we can all learn from Twitter’s discriminatory system to be conscious of biases, only a small number of businesses have image-cropping issues. Where are the similar analyses for demand forecasting, customer segmentation and churn prediction? This is relatively new territory for everyone, and the practitioner community is still developing the practices and toolbox needed to deploy machine learning conscientiously.

It is time for acknowledgement and collaboration   

After rectifying the image-cropping biases, Twitter organised an algorithmic bias bounty challenge: as a one-off, the company offered monetary prizes for discovering algorithmic discrimination in its product. The idea mirrors the bug bounties companies have long run to uncover flaws in their software, encouraging outsiders to improve their security and promoting the development of best practices. Of course, Twitter isn’t the only company to employ bias bounties, but it is exciting to see the idea used by such a highly visible organisation.

If algorithms are to continue powering much of the modern world, we need greater acknowledgement and collaborative discussion of their shortcomings. By learning from prior mistakes, we can promote the development of better practices and build fairer AI systems.