1. Classifying brain tumors
Brain cancer, along with other types of nervous system cancers, is among the deadliest types of cancer. Conventionally, prior to the operation, patients suffering from a brain tumor are left in the dark along with their surgeons: neither knows what kind of tumor is present or what treatment the patient will have to undergo. The first step is to remove as much of the affected brain mass as possible. A tumor sample is obtained from this mass and analyzed to classify the tumor. This intraoperative pathology analysis lasts around 40 minutes while the pathologist processes and stains the sample. Meanwhile, the surgeon is idle. After receiving the results, they must quickly decide on the course of action.

Introducing AI in radiology to this mix dramatically reduces the tumor classification time, and the analysis can comfortably be done in the operating room. According to Todd Hollon, Chief Neurological Resident at Michigan Medicine, “It’s so quick that we can image many specimens from right by the patient’s bedside and better judge how successful we’ve been at removing the tumor.”

As another example, a study conducted in the UK discovered a non-invasive way of classifying brain tumors in children using machine learning in radiology and diffusion-weighted imaging. This approach uses the diffusion of water molecules to obtain contrast in MRI scans. Afterward, the apparent diffusion coefficient (ADC) map is extracted and fed to machine learning algorithms. This technique can distinguish the three main types of brain tumor found in the posterior fossa, a region at the base of the skull. Such tumors are the most common cancer-related cause of death among children. If surgeons know in advance which variant the patient has, they can prepare a more efficient treatment plan.
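To make the ADC-map pipeline concrete, here is a minimal sketch of the idea in Python. It is not the UK study's actual code: the feature set, the random-forest classifier, and the synthetic stand-in data are all illustrative assumptions.

```python
# A minimal sketch of the ADC-map idea: summarize diffusion values inside a
# tumor mask and feed them to a classifier. Everything below is illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def adc_features(adc_map, tumor_mask):
    """Simple first-order statistics of ADC values inside the tumor region."""
    values = adc_map[tumor_mask > 0]
    return [values.mean(), values.std(),
            np.percentile(values, 10), np.percentile(values, 90)]

# Synthetic stand-ins for a real, consented dataset of ADC maps and masks.
rng = np.random.default_rng(0)
adc_maps = rng.normal(1.0, 0.3, size=(60, 64, 64))  # 60 fake 2D ADC slices
masks = np.ones_like(adc_maps)                      # pretend whole-image masks
labels = rng.integers(0, 3, size=60)                # 3 posterior fossa tumor types

X = np.array([adc_features(m, msk) for m, msk in zip(adc_maps, masks)])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, labels, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")  # ~chance on random data
```

With real ADC maps and segmentation masks in place of the synthetic arrays, the same loop gives a first baseline before moving to deep models.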
2. Detecting hidden fractures

The FDA began clearing AI algorithms for clinical decision support in 2018. Imagen’s OsteoDetect software was among the first the agency approved. This program uses AI to detect distal radius fractures in wrist scans. The FDA granted its clearance after Imagen submitted a study of the software’s performance on 1,000 wrist images. Confidence in OsteoDetect grew after 24 healthcare providers using the tool confirmed that it helped them detect fractures.

Another use of AI in radiology is spotting hip fractures. This type of injury is common in elderly patients. Traditionally, radiologists use X-rays to detect it; however, such fractures are hard to spot as they can hide under soft tissue. A study published in the European Journal of Radiology demonstrates the potential of employing a deep convolutional neural network (DCNN) to help radiologists spot fractures. A DCNN can identify defects in MRI and CT scans that escape the human eye. Researchers conducted an experiment in which human radiologists attempted to identify hip fractures from X-rays while AI read CT and MRI scans of the same hips. The radiologists spotted 83% of the fractures, while the DCNN’s accuracy reached 91%.
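For readers curious what a DCNN classifier looks like in code, below is a minimal PyTorch sketch. The architecture, input size, and class labels are illustrative assumptions, not OsteoDetect's or the study's actual design.

```python
# A minimal sketch of a DCNN fracture classifier: two conv blocks feeding a
# linear head that outputs fracture / no-fracture logits.
import torch
import torch.nn as nn

class FractureNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, 2),  # two classes: fracture / no fracture
        )

    def forward(self, x):
        return self.head(self.features(x))

model = FractureNet()
scan = torch.randn(1, 1, 128, 128)   # stand-in for a grayscale wrist X-ray
logits = model(scan)
print(logits.softmax(dim=1))         # untrained, so roughly 50/50
```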
3. Recognizing breast cancer

Breast cancer is one of the most common cancers among women in the US. Despite the severity of this disease, doctors miss a significant share of cases during routine screenings. At the same time, only around 10% of women with suspicious mammograms turn out to have cancer. This results in frustration, depression, and even invasive procedures that healthy women are forced to undergo when wrongly diagnosed with cancer.

Radiology AI tools can improve this situation. A study conducted by Korean academic hospitals used an AI-based tool developed by Lunit to aid radiologists in mammography screenings. The study found that radiologists’ accuracy increased from 75.3% to 84.8% when they used AI. The algorithm was particularly good at detecting early-stage invasive cancers.

Some women with developing breast cancer don’t experience any symptoms, which is why women are generally advised to undergo regular mammogram screenings. However, due to the pandemic, many couldn’t make their routine checkups. According to Dr. Lehman, a radiologist at Massachusetts General Hospital, about 20,000 women skipped their screenings during the pandemic. On average, five out of every 1,000 screened women exhibit early signs of breast cancer, which equates to roughly 100 undetected cancer cases. To remedy the situation, Dr. Lehman and her colleagues used radiology AI to predict which patients were likely to develop cancer. The algorithm analyzed previous mammogram scans available at the hospital and combined them with relevant patient information, such as previous surgeries and hormone-related factors. The women whom the algorithm flagged as high risk were persuaded to come in for screening, and the results showed that many of them had early signs of cancer.
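A minimal sketch of this kind of risk flagging might look as follows, assuming a logistic regression over an image-derived suspicion score plus tabular patient factors. The features, threshold, and synthetic data are hypothetical; the MGH team's actual model is more sophisticated.

```python
# A minimal sketch of risk flagging that combines a mammogram-derived score
# with tabular patient factors. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0, 1, n),     # image-based suspicion score from prior mammograms
    rng.uniform(40, 75, n),   # age
    rng.integers(0, 2, n),    # previous breast surgery flag
    rng.integers(0, 2, n),    # hormone-related factor flag
])
# Synthetic outcome loosely tied to score and age, for illustration only.
y = (X[:, 0] + 0.01 * X[:, 1] + rng.normal(0, 0.3, n) > 1.1).astype(int)

risk_model = LogisticRegression().fit(X, y)
flagged = risk_model.predict_proba(X)[:, 1] > 0.5  # invite these women to screen
print(f"{flagged.sum()} of {n} patients flagged as high risk")
```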
4. Detecting neurological abnormalities

Artificial intelligence in radiology has the potential to diagnose neurodegenerative disorders such as Alzheimer’s, Parkinson’s, and amyotrophic lateral sclerosis (ALS) by tracking retinal movements. This analysis takes around 10 seconds.

Another approach to spotting neurological abnormalities is through speech analysis, since Alzheimer’s changes patients’ language patterns. For instance, people with this disorder tend to replace nouns with pronouns. Researchers at Stevens Institute of Technology developed an AI tool based on convolutional neural networks and trained it on text composed by both healthy and affected individuals. The tool recognized early signs of Alzheimer’s in elderly patients solely from their speech patterns with 95% accuracy. Such software helps doctors identify which patients with mild cognitive impairment will go on to develop degenerative diseases and how severely their cognitive and motor skills will decline over time. This gives at-risk patients an opportunity to arrange for care facilities while they still can.
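The noun-to-pronoun shift is easy to illustrate. The toy function below computes a crude pronoun ratio for a text sample; the real Stevens tool is a trained convolutional network over far richer features, so treat this purely as an intuition builder.

```python
# A toy illustration of the linguistic cue mentioned above: a rising
# pronoun-to-noun ratio in speech. Not a diagnostic tool.
PRONOUNS = {"i", "you", "he", "she", "it", "we", "they",
            "him", "her", "them", "this", "that", "these", "those"}

def pronoun_ratio(text):
    """Fraction of words that are pronouns: a crude language-pattern feature."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(w in PRONOUNS for w in words) / max(len(words), 1)

specific = "My daughter brought the groceries and put the milk in the fridge."
vague = "She brought them and put it in there, you know, with those things."
print(f"{pronoun_ratio(specific):.2f} vs {pronoun_ratio(vague):.2f}")
```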
5. Offering a second opinion

AI algorithms can run in the background, offering a second opinion when radiologists disagree on a problematic medical image. This practice reduces decision-making stress and helps radiologists learn to work side by side with AI and appreciate its benefits.

Mount Sinai Health System in New York City used AI to read radiology results alongside human specialists as a “second opinion” option for detecting COVID-19 in CT scans. They claim to be the first institution to combine AI and medical imaging for novel coronavirus detection. Researchers trained the AI algorithm on 900 scans. Even though CT scans are not the primary way of detecting COVID-19, the tool can pick up on mild signs of the disease that human eyes can’t notice. The AI model provides a second opinion when a CT scan shows negative results or nonspecific findings that radiologists can’t classify.
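The routing logic behind such a second-opinion setup can be as simple as the hypothetical sketch below: consult the background model's score only when the human read is negative or inconclusive. The threshold and labels are assumptions, not Mount Sinai's actual workflow.

```python
# A minimal sketch of second-opinion routing. The AI probability is consulted
# only when the radiologist's read is negative or nonspecific.
def second_opinion(radiologist_read, ai_probability, threshold=0.7):
    """Flag cases where a background AI model disagrees with a human read."""
    if radiologist_read == "positive":
        return "proceed with clinical workup"
    if radiologist_read in ("negative", "nonspecific") and ai_probability >= threshold:
        return "flag for expert review: AI suspects disease"
    return "no flag"

print(second_opinion("negative", ai_probability=0.85))     # flagged
print(second_opinion("nonspecific", ai_probability=0.40))  # no flag
```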
Availability of training datasets

To function properly, machine learning algorithms in radiology need to be trained on large amounts of medical images; the more, the better. But in the medical field, it is difficult to gain access to such datasets. For the sake of comparison, a typical non-medical imaging dataset can contain up to 100,000,000 images, while medical imaging datasets are usually several orders of magnitude smaller.
Labeling

Another problem is producing labeled datasets for supervised training. Medical image annotation is a time-consuming and labor-intensive process: radiologists and other medical experts must manually assign the labels appropriate for the given AI application. There is potential for automatically extracting structured labels from radiology reports using natural language processing, but even then, radiologists will most likely need to review the results.
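As a flavor of what such label extraction involves, here is a minimal keyword-rule sketch. Production systems use trained NLP models and handle negation far more carefully; the rules, labels, and sample report below are illustrative.

```python
# A minimal sketch of mining weak labels from free-text radiology reports
# with keyword rules and crude negation handling.
import re

LABEL_RULES = {
    "fracture": re.compile(r"\bfractures?\b", re.IGNORECASE),
    "pneumonia": re.compile(r"\bpneumonia\b", re.IGNORECASE),
}
NEGATION = re.compile(r"\b(no|without|negative for)\b[^.]*", re.IGNORECASE)

def extract_labels(report):
    """Assign a weak label per finding, skipping plainly negated phrases."""
    cleaned = NEGATION.sub("", report)  # drop negated spans before matching
    return {label: bool(rule.search(cleaned)) for label, rule in LABEL_RULES.items()}

report = "Impression: acute distal radius fracture. No evidence of pneumonia."
print(extract_labels(report))  # {'fracture': True, 'pneumonia': False}
```

Even with such automation, the article's caveat stands: a radiologist would still review the extracted labels before they feed a training set.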
Customization

Opting for existing algorithms instead of developing custom ones can also be problematic. Many successful deep learning models available on the market are trained on 2D images, while CT scans and MRIs are 3D. This extra dimension poses a problem, and the algorithms need to be adjusted.
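The adjustment is easy to see in code. In this minimal PyTorch sketch (shapes chosen arbitrarily), an off-the-shelf 2D convolution cannot consume a CT volume directly, so the layer, and every shape downstream, must move to its 3D counterpart.

```python
# The 2D-vs-3D mismatch in one snippet: a Conv2d expects (N, C, H, W),
# while a CT volume is (N, C, D, H, W) and needs Conv3d.
import torch
import torch.nn as nn

slice_2d = torch.randn(1, 1, 256, 256)       # one X-ray-like image
volume_3d = torch.randn(1, 1, 64, 256, 256)  # a CT volume: depth x H x W

conv2d = nn.Conv2d(1, 8, kernel_size=3, padding=1)
conv3d = nn.Conv3d(1, 8, kernel_size=3, padding=1)

print(conv2d(slice_2d).shape)   # torch.Size([1, 8, 256, 256])
print(conv3d(volume_3d).shape)  # torch.Size([1, 8, 64, 256, 256])
# conv2d(volume_3d) would raise a shape error -- the extra dimension is
# exactly why pretrained 2D models need rework for volumetric scans.
```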
Technological limitations

Finally, AI technology itself leaves room for doubt. Computer power has been doubling every two years. However, according to Wim Naude, a business professor from the Netherlands, this established pattern is slowing down. Consequently, AI may never gain the computing power and multitasking ability needed to take over the broad range of tasks an average radiologist performs. To achieve such capabilities, AI’s silicon-based transistors would have to be replaced with a technology such as organic biochips, which is still in its infancy.
Continuous learning

When an ML tool continues learning independently, it can start taking irrelevant criteria into account. And when the tool doesn’t explain its decision logic, radiologists can’t spot these self-added factors. For example, an algorithm may learn that implanted medical devices and scars are signs of health issues. This is a correct assumption, but the algorithm might then assume that patients lacking these marks are healthy, which is not always true.

Another example comes from the Icahn School of Medicine. A team of researchers developed a deep learning algorithm to identify pneumonia in X-ray scans and were baffled to see the software’s performance decline considerably when it was tested on scans from other institutions. After a lengthy investigation, they realized the program was factoring in how common pneumonia was at each institution, which is obviously not something the researchers wanted.
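A toy sketch of the safeguard that exposes such shortcuts: always validate on data from an institution the model never saw. The data below is synthetic, with a deliberately planted site cue; it is not the Icahn team's code.

```python
# Synthetic demonstration of shortcut learning: a model leans on a
# site-identifying cue that tracks disease prevalence, then fails at a new site.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def make_site(n, prevalence, marker):
    """Synthetic hospital: one weak true signal plus a site-identifying cue."""
    y = rng.binomial(1, prevalence, n)
    signal = y + rng.normal(0, 3.0, n)          # weak genuine radiological signal
    site_cue = marker + rng.normal(0, 0.05, n)  # scanner/site fingerprint
    return np.column_stack([signal, site_cue]), y

# Two training hospitals whose site cue happens to track disease prevalence.
X_hi, y_hi = make_site(500, 0.50, marker=1.0)
X_lo, y_lo = make_site(500, 0.05, marker=0.0)
X_train = np.vstack([X_hi, X_lo])
y_train = np.concatenate([y_hi, y_lo])
model = LogisticRegression().fit(X_train, y_train)

# A new hospital: scanners resemble the high-prevalence site, disease is rare.
X_new, y_new = make_site(500, 0.05, marker=1.0)
auc_train = roc_auc_score(y_train, model.predict_proba(X_train)[:, 1])
auc_new = roc_auc_score(y_new, model.predict_proba(X_new)[:, 1])
print(f"pooled training AUC: {auc_train:.2f}")  # inflated by the site cue
print(f"new hospital AUC:    {auc_new:.2f}")    # the shortcut stops working
```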
Biased datasets

Biased training datasets also present a problem. If a particular tool is mainly trained on medical images of a specific racial profile, it will not perform as well on others. For example, software trained on images of white patients will be less precise for people of color. Also, algorithms trained and used at one institution need to be treated with caution when transferred to another organization, as the labeling style will differ. A study by Harvard discovered that algorithms trained on CT scans can even become biased toward particular CT machine manufacturers. And when radiologists don’t receive an explanation for a particular AI decision, their trust in the system declines.
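One practical response is a subgroup audit: report the model's accuracy per patient group and per scanner manufacturer instead of one global number. The sketch below uses synthetic placeholder data and group names.

```python
# A minimal subgroup audit: break per-case correctness down by cohort and
# scanner vendor so a weak slice cannot hide in the overall average.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "correct": rng.binomial(1, 0.9, n),                  # per-case hit/miss
    "group": rng.choice(["group_a", "group_b"], n),      # demographic cohort
    "scanner": rng.choice(["vendor_x", "vendor_y"], n),  # CT manufacturer
})
# Simulate a blind spot: degrade performance for one under-represented slice.
blind_spot = (df["group"] == "group_b") & (df["scanner"] == "vendor_y")
df.loc[blind_spot, "correct"] = rng.binomial(1, 0.6, blind_spot.sum())

audit = df.groupby(["group", "scanner"])["correct"].agg(["mean", "count"])
print(audit)  # the group_b / vendor_y cell stands out well below the rest
```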
Loose ethics and regulations

There are several ethical and regulatory issues associated with the use of AI in radiology.
Changing behavior

Machine learning algorithms are challenging to regulate because their outcomes are hard to predict. A drug, for example, mostly works the same way every time, so we can anticipate its effects. In contrast, ML tools tend to learn on the fly and adapt their behavior.
Who is responsible?

Another issue up for debate is who carries the final responsibility if AI leads to a wrong diagnosis and the prescribed treatment causes harm. Due to AI’s black-box nature, the radiologist often can’t explain the recommendations delivered by artificial intelligence tools. So, should they follow these recommendations, no questions asked?
Permissions and credit sharing

The third hurdle is the use of patient data for AI training. There is a need to obtain (and re-obtain) patient consent and to offer reliable, compliant data storage. Also, if you trained AI algorithms on patient data, then sold the product and made a profit, are the patients entitled to a share of it? For now, we rely on the goodwill of the AI software developers and researchers who train these tools to deliver an unbiased, reliable product that meets the appropriate standards. Instead, healthcare facilities adopting AI should arrange regular audits of the product to make sure it is still useful and compliant.
Fundraising:
Coming up with a strong business case is a challenge. Focus on the long-term benefits of AI to the whole clinic, not only to the radiology department.
Development:
Consult experienced radiologists on the rules you want to hardwire into your algorithms, especially if your developers don’t have a medical background.
Diversify your training data: use medical images from different population cohorts to avoid bias.
Customize your training datasets to the location where you want to sell your software. If you are targeting a particular medical institution, gather as many details as possible. Information such as the type of CT scanners they use will help you deliver more effective algorithms.
Overcome the black-box problem by offering some degree of decision explanation. For example, you can use rule extraction, a data mining technique that can interpret models generated by shallow neural networks (see the sketch after this list).
Work on the user experience aspect of your tool. Most radiology software available on the market is not user-friendly. If you can pull it off, your product will stand out from the competition.
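To illustrate the rule-extraction tip, here is a minimal sketch using a surrogate decision tree: fit the tree to a small neural network's predictions and print its branches as readable rules. The feature names and synthetic data are hypothetical.

```python
# Rule extraction via a surrogate model: a shallow decision tree mimics the
# network's predictions, yielding human-checkable if/then rules.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 3))                      # e.g. lesion size, density, age
y = ((X[:, 0] > 0) & (X[:, 1] > -0.5)).astype(int)  # synthetic ground truth

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)

# The surrogate is trained on the *network's* outputs, not the true labels,
# so its rules approximate what the black box actually does.
surrogate = DecisionTreeClassifier(max_depth=2).fit(X, net.predict(X))
print(export_text(surrogate,
                  feature_names=["lesion_size", "density", "patient_age"]))
print("fidelity:", (surrogate.predict(X) == net.predict(X)).mean())
```

The printed fidelity score tells you how faithfully the extracted rules track the network; low fidelity means the explanation should not be trusted.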
Support:
Suggest organizing regular audits after clients deploy your tools. Machine learning algorithms in radiology continue to learn, and their behavior adapts accordingly. With audits, you will make sure they are still fit for the job.
Monitor updates on relevant regulations and new reimbursement options.

If you want to learn more about AI applications in radiology and how to overcome deployment challenges, feel free to reach out to our AI experts.