We must build trust in AI in healthcare
As dramatically illustrated over the past 18 months, the healthcare industry is intrinsically linked to the health and wellbeing of our society. So commitments to ‘put people first’, provide ‘value-based care’ and build ‘integrated care systems’ are not nice-to-haves but necessary to ensure the industry and society continue to progress and thrive.
In recent years, technology has become essential to achieving these commitments – by providing the mechanisms to better listen to and act on employee feedback, save time, improve collaboration, increase care quality, improve diagnostic accuracy and reduce costs.
Digital technology has the power to enable a revolution in health and wellbeing. It provides the opportunity to automate many routine and repetitive tasks and to enable remote sensing and monitoring for a more impactful patient journey.
Even before the pandemic accelerated the adoption of new technologies, the use of intelligent embedded technologies in healthcare was growing at an exponential rate. These types of technologies are now increasingly being used by healthcare systems to provide patients with more efficient and precise services. For example, machine learning has been credited with earlier disease detection and in accelerating health science research and development.
Entering the intelligence era
Big gains have been made in many sectors, including media, finance, insurance and retail, but healthcare still lags behind in adopting technologies that maximise the value of its data assets. Overall, the digitisation of healthcare is progressing slowly, even as the amount of usable patient-level data has increased dramatically over the last decade.
As much as 30% of the entire world’s data volume is generated in the healthcare industry. A patient typically generates 80 megabytes each year in imaging and electronic health record data alone.
Yet healthcare systems still struggle to address their strategic priorities.
We are only starting to scratch the surface of the power and value intelligent technologies can bring to healthcare. AI could be the answer to many operational healthcare problems: delivering value-based, patient-centred care, reimagining the pharmaceutical supply network, and setting a new standard of care for our National Health Service.
But it is the wider socioeconomics of AI that are creating a roadblock to transformation.
Trust is imperative for widespread adoption and with that comes the need for an ethical framework to guide the healthcare experience.
The conundrum of ethics
While the benefits of AI are still being realised, concerns around ethics remain hotly debated. In a world that allows machines to make decisions that impact the lives of humans daily, who makes sure the machines are acting in each individual’s best interest, and what safeguards and policies need to be in place?
In this landscape, some tough questions need to be asked: how can we feel confident in AI’s ethical decision-making? Whose job is it to ensure autonomous systems are moral?
To those questions, we need thoughtfully considered and transparent answers. There are some best practices creators and healthcare organisations should consider implementing to embed ethical decision-making into their AI framework and help users feel secure in its results.
As a company, we employ guiding principles for artificial intelligence that form the bedrock of our AI software. It is paramount that all AI creators recognise the impact of AI on customers, individuals and wider society, and are driven by a commitment to align their values with their AI offerings. But what are some of the practical steps that can be taken to champion trustworthy AI?
Firstly, initiatives should be implemented on multiple levels, from organisation-specific guidelines to industry-wide, nationwide and international standards. This layered approach ensures cohesion.
Secondly, principled ethics should serve as a baseline template for all applications that aim to augment humanity. These should be used to address concerns about AI and to ensure the product portfolio maintains its integrity and remains aligned with those principles.
SAP’s guidelines for ethical AI are just one of many such initiatives. The company leans on an external AI ethics advisory panel and an open-source course on ethics in AI to ensure its technology experts are informed on best practices for developing responsible AI.
Ongoing contributions to the development of international policies and standards allow organisations to remain at the forefront of industry change, and will be imperative to staying ahead on data and privacy, quality and safety, and transparency and bias.
AI has the potential to reconstruct our healthcare services in ways we could once have only imagined. But to do that, we are going to have to trust in the integrity and accountability of its creators as they go above and beyond to provide a service that is transparent, reliable and ethical for everyone. Yes, there will be practical challenges to overcome, but with great potential come pitfalls.
Let us remember that ultimately, it is humans who define the future of AI, not the other way around. And for us, that definition is rooted in our guiding principles, our moral compass, and our dedication to improving humanity.