Thursday, July 7, 2022

Automated Detection, Segmentation, and Classification of Pleural Effusion From Computed Tomography Scans Using Machine Learning

Objective This study trained and evaluated algorithms to detect, segment, and classify simple and complex pleural effusions on computed tomography (CT) scans. Materials and Methods For detection and segmentation, we randomly selected 160 chest CT scans out of all consecutive patients (January 2016–January 2021, n = 2659) with reported pleural effusion. Effusions were manually segmented, and a negative cohort of chest CTs from 160 patients without effusions was added. A deep convolutional neural network (nnU-Net) was trained and cross-validated (n = 224; 70%) for segmentation and tested on a separate subset (n = 96; 30%) with the same distribution of reported pleural complexity features as in the training cohort (eg, hyperdense fluid, gas, pleural thickening, and loculation). On a separate consecutive cohort with a high prevalence of pleural complexity features (n = 335), a random forest model was implemented for classification of segmented effusions with Hounsfield unit thresholds, density distribution, and radiomics-based features as input. As performance measures, sensitivity, specificity, and area under the curve (AUC) were used for detection/classifier evaluation (per-case level), and the Dice coefficient and volume analysis were used for the segmentation task. Results Sensitivity and specificity for detection of effusion were excellent at 0.99 and 0.98, respectively (n = 96; AUC, 0.996, test data). Segmentation was robust (median Dice, 0.89; median absolute volume difference, 13 mL), irrespective of size, complexity, or contrast phase. The sensitivity, specificity, and AUC for classification of simple versus complex effusions were 0.67, 0.75, and 0.77, respectively. Conclusion Using a dataset with different degrees of complexity, a robust model was developed for the detection, segmentation, and classification of effusion subtypes. The algorithms are openly available at https://github.com/usb-radiology/pleuraleffusion.git.
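The evaluation metrics reported above (per-case Dice coefficient, absolute volume difference, AUC) and the random forest classifier are standard components; the authors' actual implementation is in the linked repository. A minimal, self-contained sketch of how such metrics and a classifier might be computed (all arrays, feature counts, and values below are invented for illustration) could look like this:

```python
# Minimal sketch (not the authors' released pipeline): per-case Dice, absolute
# volume difference, and a random-forest classifier on hand-crafted features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def dice_coefficient(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks; defined as 1.0 if both are empty."""
    intersection = np.logical_and(pred_mask, true_mask).sum()
    denom = pred_mask.sum() + true_mask.sum()
    return 1.0 if denom == 0 else 2.0 * intersection / denom

def volume_ml(mask: np.ndarray, voxel_volume_mm3: float) -> float:
    """Segmented volume in millilitres (1 mL = 1000 mm^3)."""
    return mask.sum() * voxel_volume_mm3 / 1000.0

# Toy example: one 3D mask pair with 1 x 1 x 1 mm voxels (invented data).
rng = np.random.default_rng(0)
truth = rng.random((64, 64, 32)) > 0.7
pred = np.logical_and(truth, rng.random(truth.shape) > 0.1)
print(dice_coefficient(pred, truth), abs(volume_ml(pred, 1.0) - volume_ml(truth, 1.0)))

# Classification of simple vs complex effusion from per-case features
# (e.g. HU statistics / radiomics); feature values here are simulated.
X = rng.normal(size=(200, 8))          # 200 cases x 8 features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200)) > 0
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:150], y[:150])
print("AUC:", roc_auc_score(y[150:], clf.predict_proba(X[150:])[:, 1]))
```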

Coronary Computed Tomography Angiography-Based Calcium Scoring: In Vitro and In Vivo Validation of a Novel Virtual Noniodine Reconstruction Algorithm on a Clinical, First-Generation Dual-Source Photon Counting-Detector System

Purpose The aim of this study was to evaluate coronary computed tomography angiography (CCTA)-based in vitro and in vivo coronary artery calcium scoring (CACS) using a novel virtual noniodine reconstruction (PureCalcium) on a clinical first-generation photon-counting detector computed tomography system, compared with virtual noncontrast (VNC) reconstructions and true noncontrast (TNC) acquisitions. Materials and Methods Although CACS and CCTA are well-established techniques for the assessment of coronary artery disease, they are complementary acquisitions, translating into increased scan time and patient radiation dose. Hence, accurate CACS derived from a single CCTA acquisition would be highly desirable. In this study, CACS based on PureCalcium, VNC, and TNC reconstructions was evaluated in a CACS phantom and in 67 patients (age, 70 [59–80] years; 58.2% male) undergoing CCTA on a first-generation photon-counting detector computed tomography system. Coronary artery calcium scores were quantified for the 3 reconstructions and compared using the Wilcoxon test. Agreement was evaluated by Pearson and Spearman correlation and Bland-Altman analysis. Classification into coronary artery calcium score categories (0, 1–10, 11–100, 101–400, and >400) was compared using Cohen κ. Results Phantom studies demonstrated strong agreement between CACS_PureCalcium and CACS_TNC (60.7 ± 90.6 vs 67.3 ± 88.3, P = 0.01, r = 0.98, intraclass correlation [ICC] = 0.98; mean bias, 6.6; limits of agreement [LoA], −39.8/26.6), whereas CACS_VNC showed a significant underestimation (42.4 ± 75.3 vs 67.3 ± 88.3, P
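The agreement statistics described in the methods (Wilcoxon test, Pearson and Spearman correlation, Bland-Altman bias with limits of agreement, and Cohen κ across Agatston score categories) are standard; a minimal sketch using invented paired scores (not the study data) might look like this:

```python
# Minimal sketch (invented data, not the study's) of the agreement statistics
# described above: Wilcoxon test, Pearson/Spearman r, Bland-Altman, Cohen kappa.
import numpy as np
from scipy import stats
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(1)
cacs_tnc = rng.gamma(shape=1.0, scale=90.0, size=67)         # "true noncontrast" scores
cacs_vni = cacs_tnc * 0.9 + rng.normal(scale=10.0, size=67)  # "virtual noniodine" scores

# Paired comparison and correlation
print(stats.wilcoxon(cacs_vni, cacs_tnc))
print(stats.pearsonr(cacs_vni, cacs_tnc), stats.spearmanr(cacs_vni, cacs_tnc))

# Bland-Altman: mean bias and 95% limits of agreement
diff = cacs_vni - cacs_tnc
bias, half_width = diff.mean(), 1.96 * diff.std(ddof=1)
print(f"bias {bias:.1f}, LoA [{bias - half_width:.1f}, {bias + half_width:.1f}]")

# Risk-category agreement (0, 1-10, 11-100, 101-400, >400) via Cohen kappa;
# binning of the continuous toy scores is approximate.
bins = [0, 1, 11, 101, 401, np.inf]
cat_vni = np.digitize(np.clip(cacs_vni, 0, None), bins)
cat_tnc = np.digitize(np.clip(cacs_tnc, 0, None), bins)
print("kappa:", cohen_kappa_score(cat_vni, cat_tnc))
```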

From Dose Reduction to Contrast Maximization: Can Deep Learning Amplify the Impact of Contrast Media on Brain Magnetic Resonance Image Quality? A Reader Study

Objectives The aim of this study was to evaluate a deep learning method designed to increase the contrast-to-noise ratio in contrast-enhanced gradient echo T1-weighted brain magnetic resonance imaging (MRI) acquisitions. The processed images are quantitatively evaluated in terms of lesion detection performance. Materials and Methods A total of 250 multiparametric brain MRIs, acquired between November 2019 and March 2021 at Gustave Roussy Cancer Campus (Villejuif, France), were considered for inclusion in this retrospective monocentric study. Independent training (107 cases; age, 55 ± 14 years; 58 women) and test (79 cases; age, 59 ± 14 years; 41 women) samples were defined. Patients had glioma, brain metastasis, meningioma, or no enhancing lesion. Gradient echo and turbo spin echo with variable flip angles postcontrast T1 sequences were acquired in all cases. For the cases that formed the training sample, "low-dose" postcontrast gradient echo T1 images using 0.025 mmol/kg injections of contrast agent were also acquired. A deep neural network was trained to synthetically enhance the low-dose T1 acquisitions, taking standard-dose T1 MRI as reference. Once trained, the contrast enhancement network was used to process the test gradient echo T1 images. A read was then performed by 2 experienced neuroradiologists to evaluate the original and processed T1 MRI sequences in terms of contrast enhancement and lesion detection performance, taking the turbo spin echo sequences as reference. Results The processed images were superior to the original gradient echo and reference turbo spin echo T1 sequences in terms of contrast-to-noise ratio (44.5 vs 9.1 and 16.8; P 0.99). The same effect was observed when considering all lesions larger than 5 mm: sensitivity increased from 70% to 85% (P
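Contrast-to-noise ratio, the image-quality metric at the center of this study, is commonly computed from mean lesion and background signal and the background noise; exact definitions vary between papers, so the following is only an illustrative sketch with made-up ROI values:

```python
# Illustrative CNR computation (one common definition; the paper may use another).
import numpy as np

def contrast_to_noise_ratio(lesion_roi: np.ndarray, background_roi: np.ndarray) -> float:
    """CNR = |mean(lesion) - mean(background)| / std(background)."""
    return abs(lesion_roi.mean() - background_roi.mean()) / background_roi.std(ddof=1)

# Made-up ROI intensities from an enhancing lesion and normal-appearing background.
rng = np.random.default_rng(2)
lesion = rng.normal(loc=600.0, scale=40.0, size=500)
background = rng.normal(loc=300.0, scale=35.0, size=500)
print(f"CNR = {contrast_to_noise_ratio(lesion, background):.1f}")
```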

Severity of SARS-CoV-2 Infection in Pregnancy

Abstract
Background
Pregnancy represents a physiological state associated with increased vulnerability to severe outcomes from infectious diseases, both for the pregnant person and developing infant. The SARS-CoV-2 pandemic may have important health consequences for pregnant individuals, who may also be more reluctant than non-pregnant people to accept vaccination.
Methods
We sought to estimate the degree to which increased severity of SARS-CoV-2 outcomes can be attributed to pregnancy, using a population-based SARS-CoV-2 case file from Ontario, Canada. Because propensity to receive vaccination and the dominant circulating viral strains varied over time, a time-matched cohort study was performed to evaluate the relative risk of severe illness in pregnant women with SARS-CoV-2 compared with other SARS-CoV-2–infected women of childbearing age (10 to 49 years old). Risk of severe SARS-CoV-2 outcomes was evaluated in pregnant women and time-matched non-pregnant controls using multivariable conditional logistic regression.
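The two quantities named in this methods paragraph, a standardized morbidity ratio and a matched (conditional) logistic regression, can be illustrated with a short sketch; the counts, rates, and simulated matched sets below are entirely invented and are not the Ontario data:

```python
# Hypothetical illustration of an SMR (observed / expected cases) and a
# conditional logistic regression over time-matched sets. All numbers invented.
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

# --- SMR: observed infections vs. expected under stratum-specific population rates ---
observed = 4300                                          # infections observed in the cohort
person_counts = np.array([120_000, 150_000, 90_000])     # cohort size per age stratum
population_rates = np.array([0.030, 0.025, 0.020])       # infection risk per age stratum
expected = (person_counts * population_rates).sum()
print(f"SMR = {observed / expected:.2f}")

# --- Matched analysis: each "match_id" groups one pregnant case with two
# --- time-matched non-pregnant controls; outcome is hospitalization (0/1).
rng = np.random.default_rng(0)
n = 3000
df = pd.DataFrame({
    "match_id": np.repeat(np.arange(n // 3), 3),
    "pregnant": np.tile([1, 0, 0], n // 3),
    "comorbidity": rng.integers(0, 2, n),
})
logit = -4 + 1.6 * df["pregnant"] + 0.8 * df["comorbidity"]
df["hospitalized"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = ConditionalLogit(df["hospitalized"],
                         df[["pregnant", "comorbidity"]],
                         groups=df["match_id"])
res = model.fit()
print(np.exp(res.params))   # adjusted odds ratios within matched sets
```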
Results
Compared to the rest of the population, non-pregnant women of childbearing age had an elevated risk of infection (standardized morbidity ratio [SMR] 1.28), while risk of infection was reduced among pregnant women (SMR 0.43). After adjustment for confounding, pregnant women had a markedly elevated risk of hospitalization (adjusted OR 4.96, 95% CI 3.86 to 6.37) and ICU admission (adjusted OR 6.58, 95% CI 3.29 to 13.18). The relative increase in hospitalization risk associated with pregnancy was greater in women without comorbidities than in those with comorbidities (P for heterogeneity 0.004).
Conclusions
Given the safety of SARS-CoV-2 vaccines in pregnancy, risk-benefit calculus strongly favours SARS-CoV-2 vaccination in pregnant women.

Antibody and T-cell responses 6 months after COVID-19 mRNA-1273 vaccination in patients with chronic kidney disease, on dialysis, or living with a kidney transplant

Abstract
Background
The immune response to COVID-19 vaccination is inferior in kidney transplant recipients (KTR), and to a lesser extent in patients on dialysis or with chronic kidney disease (CKD). We assessed the immune response 6 months after mRNA-1273 vaccination in kidney patients and compared this to controls.
Methods
A total of 152 participants with CKD stages G4/5 (eGFR <30 mL/min/1.73 m²), 145 participants on dialysis, 267 KTR, and 181 controls were included. SARS-CoV-2 Spike S1-specific IgG antibodies were measured by a fluorescent bead-based multiplex immunoassay, neutralizing antibodies to the ancestral, Delta, and Omicron (BA.1) variants by plaque reduction, and T-cell responses by an IFN-γ release assay.
Results
At 6 months after vaccination, S1-specific antibodies were detected in 100% of controls, 98.7% of CKD G4/5 patients, 95.1% of dialysis patients, and 56.6% of KTR. These figures were comparable to the response rates at 28 days, but antibody levels waned significantly. Neutralization of the ancestral and Delta variants was detected in most participants, whereas neutralization of Omicron was mostly absent. S-specific T-cell responses were detected at 6 months in 75.0% of controls, 69.4% of CKD G4/5 patients, 52.6% of dialysis patients, and 12.9% of KTR. T-cell responses at 6 months were significantly lower than responses at 28 days.
Conclusions
Although seropositivity rates at 6 months were comparable to those at 28 days after vaccination, significantly decreased antibody levels and T-cell responses were observed. The combination of low antibody levels, reduced T-cell responses, and absent neutralization of newly emerging variants indicates the need for additional boosts or alternative vaccination strategies in KTR.

Exploring the impact of anterior chest wall scars from implantable venous ports in adolescent survivors of cancer


Abstract

Background

In children with cancer, port-a-caths (ports) are commonly placed in the right anterior chest wall, leaving a visible scar when removed. The psychological impact of port scars on survivors is unknown. It is unclear whether alternative sites should be considered. We assessed the impact of port scars on pediatric cancer survivors to determine whether a change in location is indicated.

Methods

We performed a cross-sectional single-center study of pediatric cancer survivors aged 13–18 years. A questionnaire explored participants' perceptions of their port scars. Four additional validated tools were used: Fitzpatrick scale, Patient and Observer Scar Assessment Scale (POSAS), Children's Dermatology Life Quality Index, and a Distress Thermometer.

Results

Among 100 participants (median age 15.8 years [13–18], median time since treatment 8 years [1.5–14.8]), 75 'never/occasionally' thought about their port scars, 85 were not bothered by the scar's location, and 87 would not have preferred another site. Eleven participants were highly impacted by their scars: six thought about their scar 'every day/all the time', four were highly bothered by its location, and nine would have preferred a different location. The desire for a different scar location was associated with how much the location bothered participants (p < 0.0001), female sex (p = 0.03), and Patient POSAS score (p = 0.04).

Conclusion

A port scar on the anterior chest wall was not a major concern for the majority of this cohort. A minority of participants were highly impacted by the scar and its location. Advance identification of those likely to be impacted by their scars may not be possible.


Strategies for Evaluating Anosmia Therapeutics in the COVID-19 Era



Two studies in this issue of JAMA Otolaryngology–Head & Neck Surgery evaluate the use of nasal theophylline, a phosphodiesterase inhibitor that may promote neural olfactory signaling and recovery from postviral olfactory dysfunction (OD). The first, a dose-modification Research Letter by Lee et al, was conducted in patients with non-COVID-19–related hyposmia or anosmia secondary to viral infection, confirmed by the objective University of Pennsylvania Smell Identification Test. This open-label, dose-escalation trial provides an educational description of how to evaluate the appropriate dose for an emerging pharmacologic intervention. Specifically, the authors identified 400 mg of theophylline twice daily as a tolerated dosage. This was calculated as an equivalent oral dose of 20 mg. A phase 2 pilot study by Gupta et al was designed using this valuable information. Patients with suspected COVID-19–related OD completed the University of Pennsylvania Smell Identification Test and were randomized to either theophylline or placebo nasal irrigations for 6 weeks. This study was inconclusive regarding the clinical benefit of theophylline nasal irrigations, although subjective assessments suggested improvement. The authors acknowledge several limitations, including the small sample size, the virtual nature of the study design and consequent inability to conduct endoscopic nasal examinations, the lack of information regarding participant COVID-19 vaccination status, the lack of polymerase chain reaction–confirmed COVID-19 infection, and the short-term participant follow-up. These acknowledgments are justified, and many can be overcome in subsequent studies with adequate funding and time. However, the heterogeneous nature of COVID-19 and the associated research make this area of work particularly challenging. Herein, we propose several approaches to improve the rigor of OD research in the COVID-19 era.

Safety of High-Dose Nasal Theophylline Irrigation in the Treatment of Postviral Olfactory Dysfunction



This case series aims to determine the maximum tolerable dose of theophylline delivered via high-volume, low-pressure nasal saline irrigation for treatment of postviral olfactory dysfunction.

Surgeon Thyroidectomy Case Volume Impacts Disease‐free Survival in the Management of Thyroid Cancer


In this population-based cohort study of 37,233 thyroidectomies performed in Ontario, Canada, between 1993 and 2017, both high-volume surgeons and high-volume hospitals were predictors of better disease-free survival (DFS) in patients with well-differentiated thyroid cancer. DFS was higher among patients whose surgeons performed more than 40 thyroidectomies a year.


Objectives

To assess the association between surgeon thyroidectomy case volume and disease-free survival (DFS) in patients with well-differentiated thyroid cancer (WDTC). A secondary objective was to identify a surgeon volume cutoff that optimizes outcomes in patients with WDTC. We hypothesized that surgeon volume would be an important predictor of DFS in patients with WDTC after adjusting for hospital volume and sociodemographic and clinical factors.

Methods

In this retrospective population-based cohort study, we identified WDTC patients in Ontario, Canada, who underwent thyroidectomy between 1993 and 2017, confirmed by both hospital-level and surgeon-level administrative data (N = 37,233). Surgeon and hospital volumes were calculated as the number of cases performed in the prior year by the operating surgeon and at the treating institution, respectively, and were divided into quartiles. A multilevel hierarchical Cox regression model was used to estimate the effect of volume on DFS.
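The analysis described here is a multilevel hierarchical Cox model; as a simplified stand-in, the sketch below fits a Cox proportional hazards model with surgeon-volume quartile covariates and standard errors clustered on surgeon (lifelines' cluster_col), using simulated data rather than the study cohort:

```python
# Simplified sketch (simulated data, not the study's multilevel model): Cox PH
# regression of disease-free survival on surgeon-volume quartile, with standard
# errors clustered by surgeon.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 2000
surgeon_id = rng.integers(0, 200, n)
annual_volume = rng.gamma(2.0, 15.0, 200)[surgeon_id]      # cases/year of each patient's surgeon
volume_q = pd.qcut(annual_volume, 4, labels=False)          # quartiles 0..3 (3 = highest)

# Simulate disease-free survival with a worse hazard for lower-volume quartiles.
hazard = 0.02 * np.exp(0.15 * (3 - volume_q))
time = rng.exponential(1.0 / hazard)
censor = rng.uniform(1, 25, n)
df = pd.DataFrame({
    "duration": np.minimum(time, censor),
    "recurrence": (time <= censor).astype(int),
    "low_volume": (volume_q == 0).astype(int),
    "mod_low_volume": (volume_q == 1).astype(int),
    "mod_high_volume": (volume_q == 2).astype(int),  # highest quartile is the reference
    "surgeon_id": surgeon_id,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="recurrence", cluster_col="surgeon_id")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```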

Results

A crude model without patient or treatment characteristics demonstrated that both higher surgeon volume quartiles (p < 0.001) and higher hospital volume quartiles (p < 0.001) were associated with better DFS. After controlling for clustering, patient/treatment covariates, and hospital volume, moderately low (18–39 cases/year) and low (0–17 cases/year) volume surgeons (hazard ratio [HR]: 1.23, 95% confidence interval [CI]: 1.09–1.39 and HR: 1.34, 95% CI: 1.17–1.53, respectively) remained independent, statistically significant negative predictors of DFS.

Conclusion

Both high-volume surgeons and high-volume hospitals are predictors of better DFS in patients with WDTC. DFS was higher among patients whose surgeons performed more than 40 thyroidectomies a year.

Level of Evidence

3 Laryngoscope, 2022


Characterizing Pediatric Bilateral Vocal Fold Dysfunction: Analysis with the Pediatric Health Information System Database


This database study represents the largest cohort analysis to date characterizing bilateral vocal fold dysfunction (BVFD). The majority of pediatric patients with BVFD have a complex chronic condition, with respiratory conditions being the most common, followed by gastrointestinal conditions. Prognostic indicators of improved hospital survival include gastrointestinal comorbidities and the presence of tracheostomy.


Objectives

The purpose of this study was to characterize pediatric bilateral vocal fold dysfunction and to examine the overall inpatient mortality.

Methods

Retrospective cohort analysis. Data from the Pediatric Health Information System were gathered for all pediatric patients with a diagnosis of bilateral vocal fold dysfunction between January 2008 and September 2020. Univariate and multivariate analyses were performed using Cox proportional hazards models.

Results

A total of 2395 patients accounted for 4799 hospitalizations with bilateral vocal fold dysfunction. Inpatient mortality occurred in 2.9% of the study sample. Chiari 2 malformation was found in 2.8% of patients. The most common associated diagnoses were related to comorbid respiratory conditions (61.1%). The median hospitalization cost, adjusted by the ratio of cost to charges, was $76,569. Aspiration was noted in 28 patients (1.2%). Gastrostomy was performed in 607 patients (25.3%). Tracheostomy was performed in 27% of patients. The overall 90-day readmission rate was 61%. On multivariate analysis, prognostic factors associated with increased hospital survival included gastrointestinal comorbidities (hazard ratio [HR]: 0.29; 95% confidence interval [CI]: 0.18–0.49) and tracheostomy (HR: 0.21; 95% CI: 0.12–0.37).

Conclusion

This database study represents the largest cohort analysis to date characterizing bilateral vocal fold dysfunction. Favorable prognostic indicators of overall hospital survival include gastrointestinal comorbidities and the presence of tracheostomy. Tracheostomy is associated with an increase in hospital costs, comorbidities, gastrostomy tube placement, and Chiari diagnosis.

Level of Evidence

4 Laryngoscope, 2022

