Machine Learning and Laboratory Medicine: Now and the Road Ahead
By Thomas J.S. Durant, MD | Posted on 29 Jul 2019

(Photo courtesy of AACC)
As the demand for healthcare continues to grow exponentially, so does the volume of laboratory testing. Similar to other sectors, research in the field of laboratory medicine has begun to investigate the use of machine learning (ML) to ease the burden of increasing demand for services and to improve quality and safety.
Over the past decade, the statistical performance of ML on benchmark tasks has improved significantly due to increased availability of high-speed computing on graphic processing units, integration of convolutional neural networks, optimization of deep learning, and ever larger datasets (2). The details of these achievements are beyond the scope of this article.
However, the emerging consensus is that the general performance of supervised ML—algorithms which rely on labeled datasets—has reached a tipping point where clinical laboratorians should seriously consider enterprise-level, mission-critical applications (Table 1).
In recent years, research publications related to ML have increased significantly in pathology and laboratory medicine (Figure 1). However, despite recent strides in technology and the growing body of literature, few examples exist of ML implemented into routine clinical practice. In fact, some of the more prominent examples of ML in current practice were developed prior to the recent inflection in ML-related publications (3).
This underscores the possibility that despite technological advancements, progress in ML remains slow due to intrinsic limitations of available datasets, the state of ML technology itself, and other barriers.
As laboratory medicine continues to undergo digitalization and automation, clinical laboratorians will likely be confronted with the challenges associated with evaluating, implementing, and validating ML algorithms, both inside and outside their laboratories. Understanding what ML is good for, where it can be applied, and the ML field’s state-of-the-art and limitations will be useful to practicing laboratory professionals. This article discusses current implementations of ML technology in modern clinical laboratory workflows as well as potential barriers to aligning the two historically distant fields.
WHERE IS THE MACHINE LEARNING?
As ML continues to be adopted and integrated into the complex infrastructure of health information systems (HIS), how ML may influence laboratory medicine practice remains an open question. In particular, it is important to consider barriers to implementation and identify stakeholders for governance, development, validation, and maintenance. However, clinical laboratorians should first consider the context: is the ML application inside or downstream from a laboratory?
Machine Learning Inside Labs
Currently, there are just a handful of Food and Drug Administration (FDA)-cleared, ML-based commercial products available for clinical laboratories. The CellaVision DM96, marketed by CellaVision AB (Lund, Sweden), is a prominent example that has been adopted widely since gaining FDA clearance in 2001 (3). More recently, the Accelerate Pheno, marketed by Accelerate Diagnostics (Tucson, Arizona), uses a hierarchical system that combines multivariate logistic regression and computer vision (4,5). Both systems rely heavily on digital image acquisition and analysis to generate their results.
The recent appearance of FDA-cleared instruments that process digital images is not surprising considering current advancements in computer science, especially significant strides researchers have made with image-based data. Robust ML methods such as image convolution, neural networks, and deep learning have accelerated the performance of image-based ML in recent years (2). Digital images, however, are not as abundant in clinical laboratories as they are in other diagnostic specialties, such as radiology or anatomic pathology, possibly limiting future applications of image-based ML in laboratory medicine.
Beyond the limited number of commercial applications, ML research in laboratory medicine also has been growing, although the total number of publications remains relatively low. In recent years, researchers have investigated the utility of ML for a broad array of applications, such as analyzing erythrocyte morphology, bacterial colony morphology, thyroid panels, urine steroid profiles, and flow cytometry data, and reviewing test result reports for quality assurance (6-11).
While some institutions have integrated homegrown ML systems into their workflows, few of these models have successfully transitioned from research into routine clinical practice. Despite the development of better performing models, researchers for a variety of reasons often struggle with the proverbial last mile of clinical integration. In particular, the literature offers little to no guidance on the statistical performance metrics by which to evaluate ML models, on the design of clinical validation experiments, or on how to create more modular ML models that integrate with current laboratory medicine information technology (IT) infrastructures and workflows.
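To make the question of performance metrics concrete, the brief sketch below (in Python, using the open-source scikit-learn library) illustrates the kinds of summary statistics a laboratory might compute when evaluating a binary ML classifier: area under the ROC curve, sensitivity, specificity, and positive predictive value. The data, decision threshold, and choice of metrics are illustrative only, not a recommended validation protocol.

```python
# A minimal sketch of performance metrics a laboratory might report when
# evaluating a binary ML classifier. Data, threshold, and metric choices
# are illustrative, not a recommended validation protocol.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Hypothetical validation data: true labels and model probability scores.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])
y_score = np.array([0.1, 0.4, 0.8, 0.3, 0.2, 0.9, 0.6, 0.7, 0.55, 0.05])

auc = roc_auc_score(y_true, y_score)           # discrimination across all thresholds

y_pred = (y_score >= 0.5).astype(int)          # a single, arbitrary cutoff
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                   # analogous to diagnostic sensitivity
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                           # positive predictive value

print(f"AUC={auc:.2f} Se={sensitivity:.2f} Sp={specificity:.2f} PPV={ppv:.2f}")
```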
In all likelihood, the reason for clinical laboratories’ slow adoption of ML, both from commercial and research sources, is multifactorial, and arguably emanates from more than just the intrinsic limitations of the core technology itself. Similar to other technologies that receive a lot of attention, such as “big data” or blockchain, ML remains a tool that requires a supportive system architecture. While the core technology is demonstrating promising results, its prevalence in daily practice is likely to remain limited until developers and software engineers offer clinical IT systems that allow easy integration with existing workflows.
Machine Learning Outside Labs
As electronic health records (EHR) continue to evolve and accumulate more data, commercial EHR vendors are looking to expand their data access and analytic capabilities. They have begun offering ML models designed for use within their systems and in some cases are allowing access to third-party models. Vendors often package ML software into clinical decision support (CDS), an increasingly popular location for blending ML and clinical medicine.
While CDS tools traditionally rely on rule-based systems, vendors now are using ML in predictive alarms and syndrome surveillance tools, aimed at assisting clinical decision makers in complex scenarios.
In their current state, ML algorithms usually rely on structured data, both for training and for subsequently generating predictions. While a significant portion of EHRs contains unstructured and semistructured data, laboratory information remains one of the largest sources of structured data, and it is not uncommon for ML-based CDS tools to rely heavily on laboratory data as input. As CDS tools proliferate, the role of laboratory medicine in developing, validating, and maintaining these models remains important yet poorly defined.
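As an illustration of how an ML-based CDS score might consume structured laboratory data, the sketch below fits a simple logistic regression to a handful of hypothetical encounters whose features are discrete laboratory results. The analytes, values, and outcome labels are invented for illustration and do not represent any vendor's model.

```python
# A minimal sketch of an ML-based CDS score consuming structured laboratory
# results; the analytes, cohort, and model are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one patient encounter; columns are discrete lab results
# (e.g., lactate mmol/L, WBC x10^9/L, creatinine mg/dL), illustrative only.
X_train = np.array([
    [1.0,  6.2, 0.9],
    [4.5, 18.0, 2.1],
    [0.8,  5.1, 1.0],
    [3.9, 15.5, 1.8],
    [1.2,  7.0, 0.8],
    [5.2, 20.1, 2.5],
])
y_train = np.array([0, 1, 0, 1, 0, 1])   # 1 = outcome of interest occurred

model = LogisticRegression().fit(X_train, y_train)

# At prediction time the CDS tool maps incoming lab results into the same
# feature order and returns a probability, which the EHR can surface as an alert.
new_encounter = np.array([[2.8, 13.0, 1.5]])
risk = model.predict_proba(new_encounter)[0, 1]
print(f"Predicted risk: {risk:.2f}")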
In addition, similar to calculated laboratory results such as estimated glomerular filtration rate, probability scores generated by ML models that rely on laboratory data could arguably be subject to regulation by traditional governing and accrediting organizations such as the College of American Pathologists, the FDA, or the Joint Commission.
While the regulation of ML remains an openly debated topic in the field of computer science, the growing consensus among experts in the medical community is that rigorous oversight of these models is appropriate to ensure their safety and reliability in clinical medicine.
In 2017, FDA released draft guidance on CDS software in an attempt to provide clarity on the scope of its regulatory oversight (12). While these guidelines are still subject to change, it is clear the agency is committed to oversight in this area. Until guidelines are formalized, subjecting ML models to the rigor of the peer-review process may be the next best thing.
To deliver promising ML technology at the bedside, the IT and medical communities may need to collaborate with ML researchers and vendors to support validation studies. Clinical laboratorians may be particularly suited for guiding these types of efforts, owing not only to ML models’ frequent reliance on laboratory data but also laboratorians’ expertise in validating new technology for clinical purposes.
As ML in the post-analytic phase propagates, clinical laboratorians will need to become increasingly attentive to which laboratory data are being used and how. For example, changes within laboratory information systems may have unintended consequences on downstream applications that rely on properly mapped laboratory result data. Health systems will also benefit from clinical laboratorians’ insight on how ML can improve patient care using laboratory data.
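As a concrete example of the kind of safeguard this implies, the sketch below shows a simple pre-scoring check that a downstream application could run to confirm that the laboratory result codes and units it was built around are still present and mapped as expected. The codes, units, and message structure are illustrative placeholders, not a validated mapping or interface specification.

```python
# A minimal sketch of a defensive check a downstream ML application might run
# before scoring: verify that the laboratory result codes and units it was
# trained on are still present and mapped as expected. Codes, units, and
# message format are illustrative placeholders only.
EXPECTED_INPUTS = {
    "2524-7": "mmol/L",   # lactate (LOINC-style code, used here as an example)
    "6690-2": "10*3/uL",  # WBC
    "2160-0": "mg/dL",    # creatinine
}

def inputs_are_valid(result_message: dict) -> bool:
    """Return True only if every expected analyte arrives with the expected unit."""
    for code, expected_unit in EXPECTED_INPUTS.items():
        observed = result_message.get(code)
        if observed is None or observed["unit"] != expected_unit:
            return False   # a remapped or missing result should block scoring
    return True

msg = {"2524-7": {"value": 2.8, "unit": "mmol/L"},
       "6690-2": {"value": 13.0, "unit": "10*3/uL"},
       "2160-0": {"value": 1.5, "unit": "mg/dL"}}
print(inputs_are_valid(msg))   # True; a remapped code or changed unit would print False
```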
BARRIERS TO DEVELOPMENT AND ADOPTION
Three categories generally describe common approaches to ML: supervised, semisupervised, and unsupervised. Supervised ML relies on a large, accurately labeled dataset to train an ML model, such as labeling images of leukocytes as lymphocytes or neutrophils for subsequent classification. Currently, the consensus is that supervised ML generates the best models for targeted detection of known classes of data, but in many cases the available datasets are neither large enough nor labeled accurately enough, and curating such datasets is difficult and time-consuming.
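For readers less familiar with the mechanics, the sketch below shows supervised classification in miniature: every training example carries a curated label, and the model learns to reproduce those labels on new data. The numeric features stand in for image-derived measurements (for example, cell size or nuclear-to-cytoplasmic ratio) and are purely illustrative.

```python
# A minimal sketch of supervised classification: each example carries a
# human-assigned label, and the model learns to reproduce those labels.
# The numeric "features" stand in for image-derived measurements and are
# purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X = np.array([
    [9.5, 0.85], [10.1, 0.80], [9.8, 0.88],    # lymphocyte-like examples
    [13.2, 0.45], [14.0, 0.40], [12.8, 0.50],  # neutrophil-like examples
])
y = np.array(["lymphocyte"] * 3 + ["neutrophil"] * 3)  # the curated labels

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[9.9, 0.83]]))   # -> ['lymphocyte'] on this toy data
```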
With EHRs, researchers certainly have greater access to data than in years past. However, health information in its native state often is insufficiently structured for the rigorous development of ML models. For example, predictive alarms and syndrome surveillance tools that use supervised ML often rely on datasets delineated by the presence or absence of clinical disease. While ICD-10 codes are discrete data elements that could be used for labeling purposes, experience at our institution indicates that ICD-10 codes are not documented reliably enough to train supervised ML models.
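One way to put numbers on this problem is to measure agreement between code-derived labels and a manually chart-reviewed reference on the same encounters, for example with Cohen's kappa. The sketch below uses fabricated data purely to illustrate the comparison.

```python
# A minimal sketch of quantifying whether a coded data element is reliable
# enough to serve as a training label: compare labels derived from ICD-10
# codes against a chart-reviewed reference. The data are fabricated.
from sklearn.metrics import cohen_kappa_score

chart_review = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]   # reference labels from manual review
icd10_derived = [1, 0, 0, 0, 1, 1, 1, 0, 0, 0]  # labels inferred from coding

kappa = cohen_kappa_score(chart_review, icd10_derived)
print(f"Chance-corrected agreement (kappa): {kappa:.2f}")
# Low agreement suggests the codes alone are a poor ground truth for training.
```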
To avoid performance issues associated with inconsistent labels, data scientists can curate custom labels based on specific criteria to define the classes in their datasets. But criteria for defining classes are often subjective and may lack universal acceptance. For example, sepsis prediction algorithms may rely on the clinical criteria for sepsis used at one institution but not another. It will become increasingly important for clinical laboratorians to consider how models are trained and which specific clinical definitions serve as the functional ground truth for the classes or diseases being detected.
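The sketch below illustrates what curating such a custom label might look like in practice: an explicit, documented rule applied uniformly to every encounter, so the ground truth behind the model is reproducible and auditable. The cutoffs and variables are placeholders, not endorsed clinical criteria for sepsis or any other condition.

```python
# A minimal sketch of curating a custom label from explicit criteria, so the
# ground truth used for training is documented and reproducible. The cutoffs
# below are placeholders, not endorsed clinical criteria.
def meets_local_criteria(lactate_mmol_l: float,
                         map_mm_hg: float,
                         positive_culture: bool) -> int:
    """Return 1 if the encounter satisfies this institution's label definition."""
    return int(positive_culture and (lactate_mmol_l >= 2.0 or map_mm_hg < 65))

encounters = [
    {"lactate_mmol_l": 3.1, "map_mm_hg": 58, "positive_culture": True},
    {"lactate_mmol_l": 1.1, "map_mm_hg": 75, "positive_culture": False},
]
labels = [meets_local_criteria(**e) for e in encounters]
print(labels)   # [1, 0] -- these become the training labels for supervised ML
```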
In addition to issues with variable criteria for clinical disease, some labels also have intrinsic variability that may preclude optimal ML performance across institutions. Linear models such as logistic and linear regression have shown poor generalizability between institutions (13,14). In healthcare, the problem is multifactorial and may result from population heterogeneity or from discrepancies between the ML training population and the use-case or test population. Consequently, ML models trained outside one’s institution may benefit from retraining before go-live, although this practice is not yet well described or supported in the literature.
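By way of illustration, the sketch below compares an externally trained model against the same model class retrained on local data, with both scored on a local holdout set. All data are simulated, and the comparison merely stands in for the kind of local validation exercise a laboratory might consider before go-live; it is not a prescribed procedure.

```python
# A minimal sketch of a pre-go-live comparison: score an externally trained
# model on a local validation set and compare it with the same model class
# retrained on local data. All data here are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Simulated "external" training population, and a "local" population whose
# feature-outcome relationship differs (a stand-in for population heterogeneity).
X_ext = rng.normal(size=(500, 3))
y_ext = (X_ext[:, 0] + rng.normal(size=500) > 0).astype(int)
X_loc = rng.normal(size=(500, 3))
y_loc = (X_loc[:, 1] + rng.normal(size=500) > 0).astype(int)

external_model = LogisticRegression().fit(X_ext, y_ext)

# Split local data into a retraining portion and a held-out validation portion.
X_fit, y_fit = X_loc[:300], y_loc[:300]
X_val, y_val = X_loc[300:], y_loc[300:]
local_model = LogisticRegression().fit(X_fit, y_fit)

auc_external = roc_auc_score(y_val, external_model.predict_proba(X_val)[:, 1])
auc_local = roc_auc_score(y_val, local_model.predict_proba(X_val)[:, 1])
print(f"External model AUC: {auc_external:.2f}  Locally retrained AUC: {auc_local:.2f}")
```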
Lastly, the black box nature of ML models themselves poses a well-described barrier to adoption. Computer scientists have sought to elucidate how and why models arrive at the answers they generate and to show end users the decision points behind a given score or classification, an effort often referred to as explainable artificial intelligence (XAI).
Proponents for XAI argue that it may help investigate the source of bias in an ML model in a scenario where a model is producing erroneous results. Ideally such a tool would also include interactive features to allow correction of the bias identified. However, as ML models become more powerful and complex, the ability to derive meaningful insight into their inner logic becomes more difficult. The practice of investigating methods for XAI is young, and its utility remains an open question.
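As one simple, model-agnostic example of the tools being explored under the XAI umbrella, the sketch below uses permutation importance to estimate how much each input feature contributes to a fitted model's performance. This is only one of many explainability approaches, and the data are simulated for illustration.

```python
# A minimal sketch of one model-agnostic explainability technique: permutation
# importance, which estimates how much each feature contributes to a fitted
# model's performance. The data are simulated for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = (X[:, 0] > 0).astype(int)          # only the first feature is informative

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["feature_0", "feature_1", "feature_2"], result.importances_mean):
    print(f"{name}: importance {score:.3f}")   # feature_0 should dominate
```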
WHAT’S NEXT FOR MACHINE LEARNING?
The powerful technology of ML offers significant potential to improve the quality of services provided by laboratory medicine. Early commercial and research-driven applications have demonstrated promising results with ML-based applications in our field. Despite nagging problems with model generalizability, oversight, and physician adoption, we should expect a steady influx of ML-based technology into laboratory medicine in the coming years.
Laboratory medicine professionals will need to understand what can be done reliably with the technology and what its pitfalls are, and to help establish what constitutes best practices as we introduce ML models into clinical workflows.
Thomas J.S. Durant, MD, is a clinical fellow and resident physician in the department of laboratory medicine at Yale University School of Medicine in New Haven, Connecticut. +Email: thomas.durant@yale.edu
At the 71st AACC Annual Scientific Meeting & Clinical Lab Expo, scientific sessions cover a wide array of dynamic areas of clinical laboratory medicine. For sessions related to this and other Data Analytics topics visit https://2019aacc.org/conference-program.