IBM Watson Health


IBM Watson Health is a division of the International Business Machines Corporation (IBM), an American multinational information technology company headquartered in Armonk, New York. It helps clients facilitate medical research, clinical research, and healthcare solutions through the use of artificial intelligence, data, analytics, cloud computing, and other advanced information technology.
IBM was founded in 1911 in Endicott, New York, as the Computing-Tabulating-Recording Company and was renamed "International Business Machines" in 1924. IBM is incorporated in New York.
IBM produces and sells computer hardware, middleware and software, and provides hosting and consulting services in areas ranging from mainframe computers to nanotechnology. IBM is also a major research organization, holding the record for most U.S. patents generated by a business for 26 consecutive years. Inventions by IBM include the automated teller machine, the floppy disk, the hard disk drive, the magnetic stripe card, the relational database, the SQL programming language, the UPC barcode, and dynamic random-access memory. The IBM mainframe, exemplified by the System/360, was the dominant computing platform during the 1960s and 1970s.

Advancements

In healthcare, Watson's natural language, hypothesis generation, and evidence-based learning capabilities are being investigated to see how Watson may contribute to clinical decision support systems and to the broader use of artificial intelligence in healthcare by medical professionals. To aid physicians in the treatment of their patients, once a physician has posed a query to the system describing symptoms and other related factors, Watson first parses the input to identify the most important pieces of information; then mines patient data to find facts relevant to the patient's medical and hereditary history; then examines available data sources to form and test hypotheses; and finally provides a list of individualized, confidence-scored recommendations. The sources of data that Watson uses for analysis can include treatment guidelines, electronic medical record data, notes from healthcare providers, research materials, clinical studies, journal articles, and patient information. Despite being developed and marketed as a "diagnosis and treatment advisor", Watson has never actually been involved in the medical diagnosis process, only in assisting with identifying treatment options for patients who have already been diagnosed.
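The following is a minimal, illustrative sketch (in Python) of the parse-mine-hypothesize-rank flow described above. All function and field names are hypothetical assumptions for illustration and do not reflect Watson's actual implementation or APIs.

# Minimal sketch of the clinical decision-support flow described above.
# All function and field names are hypothetical; this is not Watson's API.
from dataclasses import dataclass

@dataclass
class Recommendation:
    treatment: str
    confidence: float  # 0.0-1.0 evidence-based score

def recommend(query: str, patient_record: dict, evidence_sources: list) -> list:
    # 1. Parse the physician's query to pull out key clinical terms.
    key_terms = {t.strip(".,").lower() for t in query.split() if len(t) > 3}

    # 2. Mine the patient record for facts relevant to those terms.
    relevant_facts = {k: v for k, v in patient_record.items()
                      if any(term in str(v).lower() for term in key_terms)}

    # 3. Form a hypothesis per evidence source and score it by how much of
    #    the query and the patient's history the source's text supports.
    scored = []
    for source in evidence_sources:
        text = source["text"].lower()
        support = sum(term in text for term in key_terms) \
                  + sum(str(v).lower() in text for v in relevant_facts.values())
        total = len(key_terms) + len(relevant_facts) or 1
        scored.append(Recommendation(source["treatment"], support / total))

    # 4. Return an individualized, confidence-ranked list of options.
    return sorted(scored, key=lambda r: r.confidence, reverse=True)

In practice each step would be backed by trained language, retrieval, and scoring models rather than keyword overlap; the sketch only mirrors the order of operations described above.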
In February 2011, it was announced that IBM would be partnering with Nuance Communications for a research project to develop a commercial product during the next 18 to 24 months, designed to exploit Watson's clinical decision support capabilities. Physicians at Columbia University would help to identify critical issues in the practice of medicine where the system's technology may be able to contribute, and physicians at the University of Maryland would work to identify the best way that a technology like Watson could interact with medical practitioners to provide the maximum assistance.
In September 2011, IBM and WellPoint announced a partnership to utilize Watson's data crunching capability to help suggest treatment options to physicians. Then, in February 2013, IBM and WellPoint gave Watson its first commercial application, for utilization management decisions in lung cancer treatment at Memorial Sloan–Kettering Cancer Center.
IBM announced a partnership with Cleveland Clinic in October 2012. The company has sent Watson to the Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, where it will increase its health expertise and assist medical professionals in treating patients. The medical facility will utilize Watson's ability to store and process large quantities of information to help speed up and increase the accuracy of the treatment process. "Cleveland Clinic's collaboration with IBM is exciting because it offers us the opportunity to teach Watson to 'think' in ways that have the potential to make it a powerful tool in medicine", said C. Martin Harris, MD, chief information officer of Cleveland Clinic.
In 2013, IBM and MD Anderson Cancer Center began a pilot program to further the center's "mission to eradicate cancer". However, after spending $62 million, the project did not meet its goals and was stopped.
On February 8, 2013, IBM announced that oncologists at the Maine Center for Cancer Medicine and Westmed Medical Group in New York had started to test the Watson supercomputer system in an effort to recommend treatment for lung cancer.
On July 29, 2016, IBM and Manipal Hospitals announced the launch of IBM Watson for Oncology, for cancer patients. This product provides information and insights to physicians and cancer patients to help them identify personalized, evidence-based cancer care options. Manipal Hospitals is the second hospital in the world to adopt this technology and first in the world to offer it to patients online as an expert second opinion through their website. Manipal discontinued this contract in December 2018.
On January 7, 2017, IBM and Fukoku Mutual Life Insurance entered into a contract for IBM to deliver analysis of compensation payouts via its IBM Watson Explorer AI. This resulted in the loss of 34 jobs, and the company said it would speed up compensation payout analysis by analysing claims and medical records and increase productivity by 30%. The company also said it would save ¥140m in running costs.
IBM Watson is said to carry the knowledge base of 1,000 cancer specialists, which is expected to bring a revolution to the field of healthcare, and the technology is regarded as a disruptive innovation. However, its application to oncology is still at a nascent stage.
Several startups in the healthcare space have been effectively using seven business model archetypes to take solutions based on IBM Watson to the marketplace. These archetypes depend on the value generated for the target user and on the value-capturing mechanisms.
In 2019, Eliza Strickland called "the Watson Health story a cautionary tale of hubris and hype" and provided a "representative sample of projects" with their status.

Industry considerations and challenges

A motive for large health companies merging with other health companies is greater accessibility of health data. Greater health data may allow for more implementation of AI algorithms.
A large part of the industry's focus on implementing AI in the healthcare sector is on clinical decision support systems. As the amount of data increases, AI decision support systems become more efficient. Numerous companies are exploring the possibilities of incorporating big data into the healthcare industry.
IBM's Watson Oncology is in development at Memorial Sloan Kettering Cancer Center and Cleveland Clinic. IBM is also working with CVS Health on AI applications in chronic disease treatment and with Johnson & Johnson on analysis of scientific papers to find new connections for drug development. In May 2017, IBM and Rensselaer Polytechnic Institute began a joint project entitled Health Empowerment by Analytics, Learning and Semantics, to explore using AI technology to enhance healthcare.
Some other large companies that have contributed to AI algorithms for use in healthcare include:

Microsoft

Microsoft's Hanover project, in partnership with Oregon Health & Science University's Knight Cancer Institute, analyzes medical research to predict the most effective cancer drug treatment options for patients. Other projects include medical image analysis of tumor progression and the development of programmable cells.

Google

Google's DeepMind platform is being used by the UK National Health Service to detect certain health risks through data collected via a mobile app. A second project with the NHS involves analysis of medical images collected from NHS patients to develop computer vision algorithms to detect cancerous tissues.

Intel

Intel's venture capital arm, Intel Capital, has invested in the startup Lumiata, which uses AI to identify at-risk patients and develop care options.
Artificial intelligence in healthcare is the use of complex algorithms and software to emulate human cognition in the analysis of complicated medical data. Specifically, AI is the ability for computer algorithms to approximate conclusions without direct human input.
What distinguishes AI technology from traditional technologies in health care is the ability to gather information, process it, and give a well-defined output to the end user. AI does this through machine learning algorithms, which can recognize patterns in behavior and create their own logic. To reduce the margin of error, AI algorithms need to be tested repeatedly. AI algorithms behave differently from humans in two ways: first, algorithms are literal: given a goal, an algorithm cannot adjust itself and understands only what it has been told explicitly; second, algorithms are black boxes: they can make extremely precise predictions, but cannot explain the cause or reasoning behind them.
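As a hedged illustration of how such a machine learning algorithm learns a pattern from labelled examples, the sketch below uses scikit-learn and synthetic data; the library choice, feature names, and hidden rule are assumptions for illustration, not anything named in the article.

# Illustrative only: a classifier learns a decision rule ("pattern") from
# labelled examples; the features and labels here are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))              # e.g. [age, blood pressure, glucose]
y = ((X[:, 1] + X[:, 2]) > 0).astype(int)  # hidden rule the model must learn

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

# The model is literal: it reflects only the patterns in the data it was
# given, and repeated held-out testing is how its margin of error is gauged.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))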
The primary aim of health-related AI applications is to analyze relationships between prevention or treatment techniques and patient outcomes. AI programs have been developed and applied to practices such as diagnosis processes, treatment protocol development, drug development, personalized medicine, and patient monitoring and care. Medical institutions such as the Mayo Clinic, Memorial Sloan Kettering Cancer Center, and the National Health Service have developed AI algorithms for their departments. Large technology companies such as IBM and Google, and startups such as Welltok and Ayasdi, have also developed AI algorithms for healthcare. Additionally, hospitals are looking to AI solutions to support operational initiatives that increase cost savings, improve patient satisfaction, and satisfy their staffing and workforce needs. Companies are developing predictive analytics solutions that help healthcare managers improve business operations by increasing utilization, decreasing patient boarding, reducing length of stay, and optimizing staffing levels.
The following medical fields are of interest in artificial intelligence research:

Radiology

The ability of AI to interpret imaging results may aid clinicians in detecting minute changes in an image that a clinician might accidentally miss. A study at Stanford created an algorithm that could detect pneumonia, in the specific patients involved, with a better average F1 metric than the radiologists involved in that trial. The Radiological Society of North America has included presentations on AI in imaging at its annual meeting. The emergence of AI technology in radiology is perceived as a threat by some specialists, as the technology can outperform specialists on certain statistical metrics in isolated cases.
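For reference, the F1 metric cited in such studies is the harmonic mean of precision and recall. The counts in the short sketch below are illustrative and are not taken from the Stanford study.

# The F1 metric is the harmonic mean of precision and recall.
def f1_score(true_positives: int, false_positives: int, false_negatives: int) -> float:
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 80 pneumonia cases flagged correctly, 20 false alarms,
# 10 missed cases.
print(f1_score(80, 20, 10))  # ~0.84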

Imaging

Recent advances have suggested the use of AI to describe and evaluate the outcome of maxillo-facial surgery or the assessment of cleft palate therapy in regard to facial attractiveness or age appearance.
In 2018, a paper published in the journal Annals of Oncology mentioned that skin cancer could be detected more accurately by an artificial intelligence system than by dermatologists. On average, the human dermatologists accurately detected 86.6% of skin cancers from the images, compared to 95% for the convolutional neural network (CNN).

Disease diagnosis

AI has been applied in many ways to diagnose diseases efficiently and accurately. Some of the most prominent diseases, such as diabetes and cardiovascular disease, which are both among the top ten causes of death worldwide, have been the focus of much of the research and testing aimed at improving diagnostic accuracy. Because of the high mortality associated with these diseases, there have been efforts to integrate various methods to help obtain accurate diagnoses.
An article by Jiang et al. demonstrated that multiple types of AI techniques have been used for a variety of diseases. Techniques discussed by Jiang et al. include support vector machines, neural networks, and decision trees, among others. Each of these techniques is described as having a "training goal" so that "classifications agree with the outcomes as much as possible…".
As a specific example of disease diagnosis and classification, two techniques used in classifying these diseases are artificial neural networks (ANN) and Bayesian networks (BN). A review of multiple papers published between 2008 and 2017 examined which of the two techniques performed better. The conclusion drawn was that "the early classification of these diseases can be achieved developing machine learning models such as Artificial Neural Network and Bayesian Network." Another conclusion Alic et al. drew was that, of the two, ANN performed better and could more accurately classify diabetes and CVD, with a higher mean accuracy in both cases.
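A minimal sketch of how the two model families discussed above could be compared on a classification task; the scikit-learn stand-ins (MLPClassifier for an artificial neural network, GaussianNB as a simple Bayesian classifier) and the synthetic data are assumptions for illustration, not the review's actual setup.

# Illustrative comparison of an artificial neural network (ANN) and a simple
# Bayesian classifier on synthetic tabular data.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for diabetes/CVD-style data: features -> disease label.
X, y = make_classification(n_samples=1000, n_features=10, n_informative=6,
                           random_state=0)

models = {
    "ANN (MLPClassifier)": MLPClassifier(hidden_layer_sizes=(32,),
                                         max_iter=1000, random_state=0),
    "Bayesian (GaussianNB)": GaussianNB(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")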

Telehealth

The rise of telemedicine has brought with it possible AI applications. The ability to monitor patients using AI may allow for the communication of information to physicians if possible disease activity has occurred. A wearable device may allow for constant monitoring of a patient and the ability to notice changes that may be less distinguishable by humans.
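A minimal sketch of the kind of continuous monitoring described above, flagging readings that drift far from a patient's own baseline; the signal, baseline window, and threshold are illustrative assumptions rather than any deployed system's parameters.

# Flag readings that drift far from the patient's own baseline (z-score rule).
from statistics import mean, stdev

def flag_anomalies(heart_rate, z_threshold=3.0):
    baseline = heart_rate[:60]                     # e.g. the first hour
    mu, sigma = mean(baseline), stdev(baseline)
    return [i for i, hr in enumerate(heart_rate)
            if abs(hr - mu) > z_threshold * sigma]

readings = [70 + (i % 5) for i in range(60)] + [72, 74, 120, 71]
print(flag_anomalies(readings))  # -> [62], the spike at 120 bpm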

Electronic health records

Electronic health records (EHRs) are crucial to the digitalization and spread of information in the healthcare industry. However, logging all of this data comes with its own problems, such as cognitive overload and burnout for users. EHR developers are now automating much of the process and even starting to use natural language processing tools to improve it. One study conducted by the Centerstone Research Institute found that predictive modeling of EHR data achieved 70–72% accuracy in predicting individualized treatment response at baseline. In other words, an AI tool that scans EHR data can fairly accurately predict the course of disease in a person.
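A hedged sketch of baseline predictive modeling on EHR-style tabular data; the features, synthetic data, and model choice are illustrative assumptions and are not taken from the Centerstone study.

# Illustrative baseline prediction of treatment response from EHR-style fields.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
# Hypothetical columns: [age, prior_episodes, baseline_severity_score]
X = rng.normal(size=(800, 3))
# Hidden rule standing in for "responds to treatment", plus noise
y = (0.8 * X[:, 2] - 0.5 * X[:, 1] + rng.normal(scale=1.0, size=800) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
model = LogisticRegression().fit(X_train, y_train)
print("accuracy at baseline:", accuracy_score(y_test, model.predict(X_test)))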

Drug interactions

Improvements in Natural Language Processing led to the development of algorithms to identify drug-drug interactions in medical literature. Drug-drug interactions pose a threat to those taking multiple medications simultaneously, and the danger increases with the number of medications being taken. To address the difficulty of tracking all known or suspected drug-drug interactions, machine learning algorithms have been created to extract information on interacting drugs and their possible effects from medical literature. Efforts were consolidated in 2013 in the DDIExtraction Challenge, in which a team of researchers at Carlos III University assembled a corpus of literature on drug-drug interactions to form a standardized test for such algorithms. Competitors were tested on their ability to accurately determine, from the text, which drugs were shown to interact and what the characteristics of their interactions were. Researchers continue to use this corpus to standardize the measure of the effectiveness of their algorithms.
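A minimal sketch of the extraction task described above: given a sentence and a known drug lexicon, identify candidate interacting pairs. The lexicon, cue pattern, and example sentence are illustrative assumptions; the DDIExtraction systems use trained models rather than a single rule.

# Rule-based sketch: find candidate interacting drug pairs in a sentence.
import re
from itertools import combinations

DRUG_LEXICON = {"warfarin", "aspirin", "ibuprofen", "simvastatin"}   # illustrative
INTERACTION_CUES = re.compile(r"\b(interact\w*|increase[sd]?|inhibit\w*|potentiat\w*)\b", re.I)

def extract_ddi(sentence):
    words = {w.strip(".,;").lower() for w in sentence.split()}
    drugs = sorted(words & DRUG_LEXICON)
    if len(drugs) >= 2 and INTERACTION_CUES.search(sentence):
        return list(combinations(drugs, 2))
    return []

print(extract_ddi("Aspirin may increase the anticoagulant effect of warfarin."))
# -> [('aspirin', 'warfarin')]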
Other algorithms identify drug-drug interactions from patterns in user-generated content, especially electronic health records and adverse event reports. Reporting systems such as the FDA Adverse Event Reporting System and the World Health Organization's VigiBase allow doctors to submit reports of possible negative reactions to medications. Deep learning algorithms have been developed to parse these reports and detect patterns that imply drug-drug interactions.