Artificial intelligence still carries an air of the Wild West: the policies and legislation governing its regulation and use remain both limited and fragmented, particularly in healthcare. But, staying a step ahead, the World Health Organization (WHO) is already prompting leaders to weigh the ethics, responsibilities, and dilemmas of integrating artificial intelligence into care, raising questions about preserving human autonomy, ensuring transparency, and establishing inclusiveness and equity, to name just a few.

The COVID-19 pandemic served as a wake-up call. While the technology is poised for deployment at massive scale, and is already being deployed, as the rapid escalation of telehealth services demonstrated, it was at peak utilization that patients, practitioners, and healthcare providers found themselves hitting barriers to quality care. We now have an opportunity to steer around the potholes and pave a cleaner, more streamlined route that works with artificial intelligence rather than against it, improving the healthcare experience for patients and providers alike.

We can do this by starting with three fundamental principles to guide the development of artificial intelligence, detailed below.

Retaining human beings at the core of healthcare decision-making 

It is only a matter of time before artificial intelligence becomes fundamental to everything we do in medicine, and beyond. AI is a remarkably capable "machine" with the capacity to make decisions that could considerably improve patient care while saving time and money, but that does not mean AI should be given the power, or the authority, to make the final decision. People, practitioners and patients alike, should be able to review and oversee activity at every step of the care continuum, regardless of how precise and seamless the AI integration becomes.

We should absolutely harness artificial intelligence to inform decisions, since it can extract and rapidly surface a wealth of invaluable data, but safeguards should be built in that require human input before proceeding to a prognosis.

Electronic Health Records (EHRs) are a prime example of this evolution: AI-based predictive tools help providers streamline workflows, medical decisions, and treatment plans. But the provider, the human agent, must remain at the center of the process, pulling the levers that unlock the next phase of care and treatment planning.

The standards we should hold health technologies to

A plethora of artificial intelligence-driven devices and services are available today, many of which are neither HIPAA compliant nor FDA-authorized, however innovative they may be. Lacking FDA authorization does not mean a technology is unsafe or useless, but, returning to the "Wild West" theme, it does warrant scrutiny when deciding which AI technologies to adopt into a practice and how much weight their output should carry in decision making. As with any diagnostic tool or healthcare device, artificial intelligence tools ought to be evaluated and required to prove their accuracy, and the FDA plays a major role in this.

That means publishing and documenting adequate data before the technology is deployed, enabling meaningful public consultation and debate about how it was developed and how it should, or should not, be used; the FDA can make or break the technology.

Ensuring equity and inclusiveness 

Artificial intelligence tools and frameworks ought to be monitored and assessed to detect disproportionate impacts on particular groups of people. No technology, AI or otherwise, should sustain or intensify existing forms of bias and discrimination, no matter how innovative.

When developing and deploying AI-driven technologies, it is crucial to account for differences in skin tone, gender, and other human traits, so that health providers can deliver consistent and accurate care. One study of three commercial gender-recognition systems reported error rates of up to 34% for dark-skinned women, a rate approximately 49 times higher than for Caucasian men.

AI in medicine must be developed to encourage the broadest possible appropriate and equitable use of, and access to, care, regardless of age, sex, income, gender, race, ethnicity, sexual orientation, ability, or other traits protected under human rights codes. That also means advancing health equity: affording the greatest possible accessibility to care and removing the need to travel long distances, or to buy numerous products and services, to benefit from the care AI can provide. Bias in any way, shape, or form undermines inclusiveness and equity, putting care, and potentially lives, at risk.

Healthcare has evolved with the adoption of artificial intelligence, and so should our ethics playbook. There are plenty of potential roadblocks ahead as we continue to harness this remarkable technology in one of the most sensitive sectors there is: healthcare. But there is no looking back, nor should there be. Artificial intelligence holds enormous potential to improve healthcare and the patient experience, but we must lay down ground rules now, before continuing the journey.


A broad array of exciting, forward-looking applications of machine learning and artificial intelligence techniques and platforms in healthcare were discussed, spanning subjects from radiology assistants to smart health operations management, and from personalized medicine to digital surveillance for public health.

Known hurdles around data privacy and legal frameworks will continue to impede the full implementation of these systems. It can be genuinely complex to determine what kinds of data may be lawfully viewed and used by third-party providers (for example, the owners of the ML tools, AI systems, physical devices, or platforms). Consequently, a major effort to rationalize law and policy-making is needed in parallel to tackle these hurdles.

As technologists and AI/ML practitioners, we should work toward a vibrant future in which the power of AI algorithms benefits billions of ordinary people, improving their basic health and well-being.