By: Catherine Liu
Modern healthcare organizations are adapting and innovating in response to the boom in artificial intelligence. A recent paper details two distinct branches of use for artificial intelligence in healthcare: virtual and physical.
The virtual branch encompasses the use of deep learning in information management, the management of electronic health records, and the guidance of physicians in clinical decision making. It focuses on technology that assists healthcare workers by processing and organizing information, so that less time is spent on routine tasks a computer could complete. For example, electronic health records make patient information easily accessible to doctors and nurses and consolidate important information in one location. The virtual branch also includes the many applications of machine learning to the imaging technology used by radiologists.
In contrast to the virtual branch, the physical branch focuses on tangible technologies that capitalize on artificial intelligence to complete a set of tasks. Examples include nanorobots that assist with drug delivery and robots used to care for elderly patients. Human-interactive robots, for instance, can provide assistance and guidance to older patients and support their psychological enrichment (Shibata et al., 2010).
Although artificial intelligence holds great promise, its use in healthcare raises myriad societal and ethical complexities, given concerns over reliability, safety, and accountability. As detailed by the Nuffield Council on Bioethics, artificial intelligence currently has many limitations in the medical field. For example, artificial intelligence relies on large amounts of data to learn how to behave, but the current availability and quality of medical data may not be sufficient for this purpose. Artificial intelligence may also propagate inequalities in healthcare if it is trained on biased data. For example, a recent study found that men and women receive different treatment after heart attacks. If the training data did not account for this difference and included primarily male patients, the treatment suggestions given by the artificial intelligence program would be biased toward what works for men and could thereby harm female patients.

On a practical note, artificial intelligence is limited by computing power, so the large, complex datasets inherent to healthcare may present a challenge, particularly for organizations that lack the financial resources to purchase and maintain computers capable of these calculations. Lastly, artificially intelligent systems may lack the empathy, or the ability to process a complex situation, needed to ensure that the correct further treatments are suggested, as in the case of palliative care.
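To make the bias concern concrete, here is a minimal, purely illustrative sketch (the data, patient attributes, and treatment labels are all invented; no real clinical model works this simply). A naive recommender trained on a 90%-male sample learns to suggest the treatment that worked for the majority of its training patients, and therefore serves the underrepresented group poorly:

```python
# Toy illustration of training-data bias (invented data, not a real model):
# patients respond differently to two treatments depending on sex, but the
# training sample is 90% male, mirroring the imbalance described above.
def make_patients(n_male, n_female):
    patients = []
    for _ in range(n_male):
        patients.append({"sex": "M", "best_treatment": "A"})  # males respond to "A"
    for _ in range(n_female):
        patients.append({"sex": "F", "best_treatment": "B"})  # females respond to "B"
    return patients

# A naive "model" that ignores sex and always recommends whichever
# treatment worked most often in its training data.
def train_majority_model(training_patients):
    counts = {}
    for p in training_patients:
        counts[p["best_treatment"]] = counts.get(p["best_treatment"], 0) + 1
    majority = max(counts, key=counts.get)
    return lambda patient: majority

train = make_patients(n_male=90, n_female=10)  # skewed training sample
model = train_majority_model(train)

test = make_patients(n_male=50, n_female=50)   # balanced real-world population

def accuracy(group):
    hits = [model(p) == p["best_treatment"] for p in test if p["sex"] == group]
    return sum(hits) / len(hits)

print(f"accuracy for male patients:   {accuracy('M'):.0%}")  # 100%
print(f"accuracy for female patients: {accuracy('F'):.0%}")  # 0%
```

The model looks accurate if evaluated only on data resembling its training sample, which is precisely why imbalanced medical datasets can hide harm to underrepresented patients until the system is deployed.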
Rather than using artificial intelligence on its own or abandoning it entirely, combining the predictions of machine learning algorithms with the expertise and empathy of healthcare providers may allow for better, more comprehensive treatment as we head into the future of modern healthcare.