In the field of healthcare, artificial intelligence (AI) is becoming a force to be reckoned with. AI-based healthcare solutions have progressed past the proof-of-concept stage in the last decade or two, and have begun to rewrite our perceptions of what is feasible.
Here are a few examples: dermatologists have used deep learning algorithms to identify skin cancer, while radiologists have used them to interpret CT scans. Surgeons operate with AI-enabled robotics, and pharmaceutical companies use convolutional neural networks to identify drug candidates.
Patients are frequently monitored with AI-based wearable devices, which alert them to any changes in their vital signs. For Covid-19, there are also AI-based triage systems that can decide who requires a PCR test.
At least for the time being, the concept of a robot doctor appears to be a long way off. However, it is evident that these new digital technologies will become essential instruments in a physician's arsenal in the near future.
Regulating these technologies presents a number of challenges.
What's less obvious is how these new technologies can be used responsibly and ethically. The World Health Organization (WHO) has issued a report warning that AI technologies carry risks, including biases built into algorithms and unethical data collection.
However, one review of approved devices found that nearly all of them (126) had been assessed only retrospectively, and that none of the 54 high-risk devices had been studied prospectively.
More prospective trials, according to the authors, are needed to better capture real clinical outcomes. They also argued for increased post-market monitoring.
With a growing number of devices approaching clearance, regulators will have to figure out how to test and approve them. Many questions remain unanswered, including how to regulate a machine-learning system that is designed to evolve over time in response to fresh inputs.
The conventional medical device regulatory paradigm was not built to accommodate adaptive AI/ML technologies, which have the ability to adapt and optimise device performance in real time to continually enhance patient care.
AI in healthcare has the potential to be a massive sector, with sky-high expectations across many modalities. Given this, it's heartening to see that AI ethics is also a growing field of study.
Over the past decade, more than 100 proposals for AI principles have been put forward, according to the WHO. Many regulatory agencies are developing their own frameworks, even though "no formal guidelines for use of AI for health have yet been suggested for adoption internationally."
Although AI is still unknown territory, with its risks and limits yet to be fully revealed, authorities are cautiously optimistic about the path ahead. Put the appropriate controls in place, and patients will receive the advantages without being exposed to needless dangers, so the reasoning goes.