SCI Publications

2024


M. Han, J. Li, S. Sane, S. Gupta, B. Wang, S. Petruzza, C.R. Johnson. “Interactive Visualization of Time-Varying Flow Fields Using Particle Tracing Neural Networks,” Subtitled “arXiv preprint arXiv:2312.14973,” 2024.

ABSTRACT

Lagrangian representations of flow fields have gained prominence for enabling fast, accurate analysis and exploration of time-varying flow behaviors. In this paper, we present a comprehensive evaluation to establish a robust and efficient framework for Lagrangian-based particle tracing using deep neural networks (DNNs). Han et al. (2021) first proposed a DNN-based approach to learn Lagrangian representations and demonstrated accurate particle tracing for an analytic 2D flow field. In this paper, we extend and build upon this prior work in significant ways. First, we evaluate the performance of DNN models to accurately trace particles in various settings, including 2D and 3D time-varying flow fields, flow fields from multiple applications, flow fields with varying complexity, as well as structured and unstructured input data. Second, we conduct an empirical study to inform best practices with respect to particle tracing model architectures, activation functions, and training data structures. Third, we conduct a comparative evaluation of prior techniques that employ flow maps as input for exploratory flow visualization. Specifically, we compare our extended model against its predecessor by Han et al. (2021), as well as the conventional approach that uses triangulation and Barycentric coordinate interpolation. Finally, we consider the integration and adaptation of our particle tracing model with different viewers. We provide an interactive web-based visualization interface by leveraging the efficiencies of our framework, and perform high-fidelity interactive visualization by integrating it with an OSPRay-based viewer. Overall, our experiments demonstrate that using a trained DNN model to predict new particle trajectories requires a low memory footprint and results in rapid inference. 
Following best practices for large 3D datasets, our deep learning approach using GPUs for inference is shown to require approximately 46 times less memory while being more than 400 times faster than the conventional methods.
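As context for the conventional baseline the paper compares against, here is a minimal sketch of flow-map interpolation with barycentric coordinates, assuming a 2D triangle whose vertices store precomputed flow-map end positions (all names are illustrative, not from the paper's code):

```python
def barycentric(p, a, b, c):
    """Barycentric weights of point p inside the 2D triangle (a, b, c)."""
    det = (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])
    wb = ((p[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (p[1] - a[1])) / det
    wc = ((b[0] - a[0]) * (p[1] - a[1]) - (p[0] - a[0]) * (b[1] - a[1])) / det
    return 1.0 - wb - wc, wb, wc

def interpolate_end_position(p, tri, ends):
    """Blend the flow-map end positions stored at the triangle's three
    vertices to estimate the end position of a particle seeded at p."""
    wa, wb, wc = barycentric(p, *tri)
    return tuple(wa * ea + wb * eb + wc * ec
                 for ea, eb, ec in zip(*ends))
```

A seed at a vertex recovers that vertex's stored end position exactly, while interior seeds blend all three; the DNN model replaces this per-triangle lookup with a single learned function.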



M. Han, T. Athawale, J. Li, C.R. Johnson. “Accelerated Depth Computation for Surface Boxplots with Deep Learning,” In IEEE Workshop on Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks, IEEE, pp. 38--42. 2024.
DOI: 10.1109/UncertaintyVisualization63963.2024.00009

ABSTRACT

Functional depth is a well-known technique used to derive descriptive statistics (e.g., median, quartiles, and outliers) for 1D data. Surface boxplots extend this concept to ensembles of images, helping scientists and users identify representative and outlier images. However, the computational time for surface boxplots increases cubically with the number of ensemble members, making it impractical for integration into visualization tools. In this paper, we propose a deep-learning solution for efficient depth prediction and computation of surface boxplots for time-varying ensemble data. Our deep learning framework accurately predicts member depths in a surface boxplot, achieving average speedups of 6X on a CPU and 15X on a GPU for the 2D Red Sea dataset with 50 ensemble members compared to the traditional depth computation algorithm. Our approach achieves at least a 99% level of rank preservation, with order flipping occurring only at pairs with extremely similar depth values that show no statistically significant difference. This local flipping does not significantly impact the overall depth order of the ensemble members.
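For reference, the classical depth computation that the deep model replaces can be sketched as simple band depth, where a member's depth is the fraction of member pairs whose pointwise band fully contains it; the exact depth variant used by the paper is an assumption here:

```python
from itertools import combinations

def band_depth(curves):
    """Simple band depth (J = 2): for each curve, the fraction of curve
    pairs whose pointwise [min, max] band fully contains it.  Enumerating
    all pairs at every grid point is what makes the classical algorithm
    expensive as the ensemble grows."""
    pairs = list(combinations(curves, 2))
    depths = []
    for f in curves:
        inside = sum(
            1 for g, h in pairs
            if all(min(gv, hv) <= fv <= max(gv, hv)
                   for fv, gv, hv in zip(f, g, h)))
        depths.append(inside / len(pairs))
    return depths
```

The deepest member (here the middle curve) serves as the ensemble median; outliers score low.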



C. Han, K.E. Isaacs. “A Deixis-Centered Approach for Documenting Remote Synchronous Communication around Data Visualizations,” Subtitled “arXiv:2408.04041,” 2024.

ABSTRACT

Referential gestures, or as termed in linguistics, deixis, are an essential part of communication around data visualizations. Despite their importance, such gestures are often overlooked when documenting data analysis meetings. Transcripts, for instance, fail to capture gestures, and video recordings may not adequately capture or emphasize them. We introduce a novel method for documenting collaborative data meetings that treats deixis as a first-class citizen. Our proposed framework captures cursor-based gestural data along with audio and converts them into interactive documents. The framework leverages a large language model to identify word correspondences with gestures. These identified references are used to create context-based annotations in the resulting interactive document. We assess the effectiveness of our proposed method through a user study, finding that participants preferred our automated interactive documentation over recordings, transcripts, and manual note-taking. Furthermore, we derive a preliminary taxonomy of cursor-based deictic gestures from participant actions during the study. This taxonomy offers further opportunities for better utilizing cursor-based deixis in collaborative data analysis scenarios.
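A purely temporal toy stand-in for the correspondence step can illustrate the idea; the paper uses a large language model for word-gesture matching, whereas the window-based matcher below is hypothetical:

```python
def align_gestures(gestures, words, window=1.0):
    """Match each timestamped cursor gesture to the transcript word
    closest in time, provided it falls within `window` seconds.  A
    simplified, purely temporal stand-in for the LLM-based
    correspondence identification described in the paper."""
    matches = {}
    for g_id, g_time in gestures:
        word, w_time = min(words, key=lambda w: abs(w[1] - g_time))
        if abs(w_time - g_time) <= window:
            matches[g_id] = word
    return matches
```

Gestures with no nearby speech stay unmatched, mirroring how not every cursor movement is deictic.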



C. Han, J. Lieffers, C. Morrison, K.E. Isaacs. “An Overview+Detail Layout for Visualizing Compound Graphs,” Subtitled “arXiv:2408.04045,” 2024.

ABSTRACT

Compound graphs are networks in which vertices can be grouped into larger subsets, with these subsets capable of further grouping, resulting in a nesting that can be many levels deep. In several applications, including biological workflows, chemical equations, and computational data flow analysis, these graphs often exhibit a tree-like nesting structure, where sibling clusters are disjoint. Common compound graph layouts prioritize the lowest level of the grouping, down to the individual ungrouped vertices, which can make the higher level grouped structures more difficult to discern, especially in deeply nested networks. Leveraging the additional structure of the tree-like nesting, we contribute an overview+detail layout for this class of compound graphs that preserves the saliency of the higher level network structure when groups are expanded to show internal nested structure. Our layout draws inner structures adjacent to their parents, using a modified tree layout to place substructures. We describe our algorithm and then present case studies demonstrating the layout’s utility to a domain expert working on data flow analysis. Finally, we discuss network parameters and analysis situations in which our layout is well suited.



G. Hari, N. Joshi, Z. Wang, Q. Gong, D. Pugmire, K. Moreland, C.R. Johnson, S. Klasky, N. Podhorszki, T. Athawale. “FunM2C: A Filter for Uncertainty Visualization of Multivariate Data on Multi-Core Devices,” In IEEE Workshop on Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks, IEEE, pp. 43--47. 2024.
DOI: 10.1109/UncertaintyVisualization63963.2024.00010

ABSTRACT

Uncertainty visualization is an emerging research topic in data visualization because neglecting uncertainty in visualization can lead to inaccurate assessments. In this paper, we study the propagation of multivariate data uncertainty in visualization. Although there have been a few advancements in probabilistic uncertainty visualization of multivariate data, three critical challenges remain to be addressed. First, the state-of-the-art probabilistic uncertainty visualization framework is limited to bivariate data (two variables). Second, existing uncertainty visualization algorithms use computationally intensive techniques and lack support for cross-platform portability. Third, as a consequence of the computational expense, integration into production visualization tools is impractical. In this work, we address all three issues and make a threefold contribution. First, we take a step to generalize the state-of-the-art probabilistic framework for bivariate data to multivariate data with an arbitrary number of variables. Second, through utilization of VTK-m’s shared-memory parallelism and cross-platform compatibility features, we demonstrate acceleration of multivariate uncertainty visualization on different many-core architectures, including multi-core CPUs (via OpenMP) and AMD GPUs. Third, we demonstrate the integration of our algorithms with the ParaView software. We demonstrate the utility of our algorithms through experiments on multivariate simulation data with three and four variables.
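The per-cell quantity such a filter estimates can be illustrated, under the simplifying assumption of independent Gaussian uncertainty per variable, by a Monte Carlo probability of a multivariate sample landing inside a trait hyperrectangle; this is a sketch, not the paper's closed-form algorithm:

```python
import random

def trait_probability(means, sigmas, lows, highs, n=20000, seed=7):
    """Monte Carlo estimate of the probability that a multivariate data
    point, with independent Gaussian uncertainty per variable, falls
    inside the trait hyperrectangle [lows, highs].  A simplified
    stand-in for the per-cell probability a multivariate uncertainty
    visualization filter would map to color or opacity."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        if all(lo <= rng.gauss(mu, sd) <= hi
               for mu, sd, lo, hi in zip(means, sigmas, lows, highs)):
            hits += 1
    return hits / n
```

Because each variable contributes an independent factor, adding variables multiplies the acceptance tests, which hints at why the computation benefits from the shared-memory parallelism mentioned above.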



M.M. Ho, S. Dubey, Y. Chong, B. Knudsen, T. Tasdizen. “F2FLDM: Latent Diffusion Models with Histopathology Pre-Trained Embeddings for Unpaired Frozen Section to FFPE Translation,” Subtitled “arXiv:2404.12650v1,” 2024.

ABSTRACT

The Frozen Section (FS) technique is a rapid and efficient method, taking only 15-30 minutes to prepare slides for pathologists' evaluation during surgery, enabling immediate decisions on further surgical interventions. However, the FS process often introduces artifacts and distortions like folds and ice-crystal effects. In contrast, these artifacts and distortions are absent in the higher-quality formalin-fixed paraffin-embedded (FFPE) slides, which require 2-3 days to prepare. While Generative Adversarial Network (GAN)-based methods have been used to translate FS to FFPE images (F2F), they may leave morphological inaccuracies with remaining FS artifacts or introduce new artifacts, reducing the quality of these translations for clinical assessments. In this study, we benchmark recent generative models, focusing on GANs and Latent Diffusion Models (LDMs), to overcome these limitations. We introduce a novel approach that combines LDMs with Histopathology Pre-Trained Embeddings to enhance the restoration of FS images. Our framework leverages LDMs conditioned by both text and pre-trained embeddings to learn meaningful features of FS and FFPE histopathology images. Through diffusion and denoising techniques, our approach not only preserves essential diagnostic attributes like color staining and tissue morphology but also proposes an embedding translation mechanism to better predict the targeted FFPE representation of input FS images. As a result, this work achieves a significant improvement in classification performance, with the Area Under the Curve rising from 81.99% to 94.64%, accompanied by an advantageous CaseFD. This work establishes a new benchmark for FS to FFPE image translation quality, promising enhanced reliability and accuracy in histopathology FS image analysis.
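For orientation, the generic DDPM-style forward-noising step that diffusion models build on can be sketched as below; LDMs apply it in an autoencoder's latent space, and this flat-list version is purely illustrative, not the paper's model:

```python
import math
import random

def forward_diffuse(x0, alpha_bar, rng):
    """DDPM-style forward step applied elementwise to a flattened
    signal: x_t = sqrt(alpha_bar) * x0 + sqrt(1 - alpha_bar) * eps,
    with eps ~ N(0, 1).  The denoising network is trained to invert
    this corruption."""
    return [math.sqrt(alpha_bar) * v
            + math.sqrt(1.0 - alpha_bar) * rng.gauss(0.0, 1.0)
            for v in x0]
```

At alpha_bar = 1 the signal is untouched; as alpha_bar decreases toward 0 the signal is progressively replaced by unit-variance noise.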



M.M. Ho, E. Ghelichkhan, Y. Chong, Y. Zhou, B.S. Knudsen, T. Tasdizen. “DISC: Latent Diffusion Models with Self-Distillation from Separated Conditions for Prostate Cancer Grading,” Subtitled “arXiv:2404.13097,” 2024.

ABSTRACT

Latent Diffusion Models (LDMs) can generate high-fidelity images from noise, offering a promising approach for augmenting histopathology images for training cancer grading models. While previous works successfully generated high-fidelity histopathology images using LDMs, the generation of image tiles to improve prostate cancer grading has not yet been explored. Additionally, LDMs face challenges in accurately generating admixtures of multiple cancer grades in a tile when conditioned by a tile mask. In this study, we train specific LDMs to generate synthetic tiles that contain multiple Gleason Grades (GGs) by leveraging pixel-wise annotations in input tiles. We introduce a novel framework named Self-Distillation from Separated Conditions (DISC) that generates GG patterns guided by GG masks. Finally, we deploy a training framework for pixel-level and slide-level prostate cancer grading, where synthetic tiles are effectively utilized to improve the cancer grading performance of existing models. As a result, this work surpasses previous works in two domains: 1) our LDMs enhanced with DISC produce more accurate tiles in terms of GG patterns, and 2) our training scheme, incorporating synthetic data, significantly improves the generalization of the baseline model for prostate cancer grading, particularly in challenging cases of rare GG5, demonstrating the potential of generative models to enhance cancer grading when data is limited.



T. Hoefler, M. Copik, P. Beckman, A. Jones, I. Foster, M. Parashar, D. Reed, M. Troyer, T. Schulthess, D. Ernst, J. Dongarra. “XaaS: Acceleration as a Service to Enable Productive High-Performance Cloud Computing,” Subtitled “arXiv:2401.04552v1,” 2024.

ABSTRACT

HPC and Cloud have evolved independently, specializing their innovations into performance or productivity. Acceleration as a Service (XaaS) is a recipe to empower both fields with a shared execution platform that provides transparent access to computing resources, regardless of the underlying cloud or HPC service provider. Bridging HPC and cloud advancements, XaaS presents a unified architecture built on performance-portable containers. Our converged model concentrates on low-overhead, high-performance communication and computing, targeting resource-intensive workloads from climate simulations to machine learning. XaaS lifts the restricted allocation model of Function-as-a-Service (FaaS), allowing users to benefit from the flexibility and efficient resource utilization of serverless while supporting long-running and performance-sensitive workloads from HPC.



J.K. Holmen, M. García, A. Bagusetty, V. Madananth, A. Sanderson, M. Berzins. “Making Uintah Performance Portable for Department of Energy Exascale Testbeds,” In Euro-Par 2023: Parallel Processing, pp. 1--12. 2024.

ABSTRACT

To help ease ports to forthcoming Department of Energy (DOE) exascale systems, testbeds have been made available to select users. These testbeds are helpful for preparing codes to run on the same hardware and similar software as in their respective exascale systems. This paper describes how the Uintah Computational Framework, an open-source asynchronous many-task (AMT) runtime system, has been modified to be performance portable across the DOE Crusher, DOE Polaris, and DOE Sunspot testbeds in preparation for portable simulations across the exascale DOE Frontier and DOE Aurora systems. The Crusher, Polaris, and Sunspot testbeds feature the AMD MI250X, NVIDIA A100, and Intel PVC GPUs, respectively. This performance portability has been made possible by extending Uintah’s intermediate portability layer [18] to additionally support the Kokkos::HIP, Kokkos::OpenMPTarget, and Kokkos::SYCL back-ends. This paper also describes notable updates to Uintah’s support for Kokkos, which were required to make this extension possible. Results are shown for a challenging radiative heat transfer calculation, central to the University of Utah’s predictive boiler simulations. These results demonstrate single-source portability across AMD-, NVIDIA-, and Intel-based GPUs using various Kokkos back-ends.



Q. Huang, J. Le, S. Joshi, J. Mendes, G. Adluru, E. DiBella. “Arterial Input Function (AIF) Correction Using AIF Plus Tissue Inputs with a Bi-LSTM Network,” In Tomography, Vol. 10, pp. 660-673. 2024.

ABSTRACT

Background: The arterial input function (AIF) is vital for myocardial blood flow quantification in cardiac MRI to indicate the input time–concentration curve of a contrast agent. Inaccurate AIFs can significantly affect perfusion quantification. Purpose: When only saturated and biased AIFs are measured, this work investigates multiple ways of leveraging tissue curve information, including using AIF + tissue curves as inputs and optimizing the loss function for deep neural network training. Methods: Simulated data were generated using a 12-parameter AIF mathematical model for the AIF. Tissue curves were created from true AIFs combined with compartment-model parameters from a random distribution. Using Bloch simulations, a dictionary was constructed for a saturation-recovery 3D radial stack-of-stars sequence, accounting for deviations such as flip angle, T2* effects, and residual longitudinal magnetization after the saturation. A preliminary simulation study established the optimal tissue curve number using a bidirectional long short-term memory (Bi-LSTM) network with just AIF loss. Further optimization of the loss function involves comparing just AIF loss, AIF with compartment-model-based parameter loss, and AIF with compartment-model tissue loss. The optimized network was examined with both simulation and hybrid data, which included in vivo 3D stack-of-star datasets for testing. The AIF peak value accuracy and Ktrans results were assessed. Results: Increasing the number of tissue curves can be beneficial when added tissue curves can provide extra information. Using just the AIF loss outperforms the other two proposed losses, including adding either a compartment-model-based tissue loss or a compartment-model parameter loss to the AIF loss. With the simulated data, the Bi-LSTM network reduced the AIF peak error from −23.6 ± 24.4% of the AIF using the dictionary method to 0.2 ± 7.2% (AIF input only) and 0.3 ± 2.5% (AIF + ten tissue curve inputs) of the network AIF. The corresponding Ktrans error was reduced from −13.5 ± 8.8% to −0.6 ± 6.6% and 0.3 ± 2.1%. With the hybrid data (simulated data for training; in vivo data for testing), the AIF peak error was 15.0 ± 5.3% and the corresponding Ktrans error was 20.7 ± 11.6% for the AIF using the dictionary method. The hybrid data revealed that using the AIF + tissue inputs reduced errors, with peak error (1.3 ± 11.1%) and Ktrans error (−2.4 ± 6.7%). Conclusions: Integrating tissue curves with AIF curves into network inputs improves the precision of AI-driven AIF corrections. This result was seen both with simulated data and with applying the network trained only on simulated data to a limited in vivo test dataset.
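How tissue curves arise from an AIF can be illustrated with a generic one-compartment (Kety/Tofts-type) discrete convolution; the parameter names and the exact model form here are assumptions for illustration, not the paper's 12-parameter model:

```python
import math

def tissue_curve(aif, ktrans, kep, dt):
    """Generic one-compartment tissue concentration as a discrete
    convolution of the AIF with an exponential residue function:
    C_t(t_i) = Ktrans * sum_j AIF(t_j) * exp(-kep * (t_i - t_j)) * dt."""
    ct = []
    for i in range(len(aif)):
        acc = 0.0
        for j in range(i + 1):
            acc += aif[j] * math.exp(-kep * (i - j) * dt)
        ct.append(ktrans * acc * dt)
    return ct
```

A distorted AIF propagates directly into every tissue curve through this convolution, which is why the tissue curves carry recoverable information about the true AIF.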



X. Huang, H. Miao, A. Townsend, K. Champley, J. Tringe, V. Pascucci, P.T. Bremer. “Bimodal Visualization of Industrial X-Ray and Neutron Computed Tomography Data,” In IEEE Transactions on Visualization and Computer Graphics, IEEE, 2024.
DOI: 10.1109/TVCG.2024.3382607

ABSTRACT

Advanced manufacturing creates increasingly complex objects with material compositions that are often difficult to characterize by a single modality. Our collaborating domain scientists are going beyond traditional methods by employing both X-ray and neutron computed tomography to obtain complementary representations expected to better resolve material boundaries. However, the use of two modalities creates its own challenges for visualization, requiring either complex adjustments of bimodal transfer functions or the need for multiple views. Together with experts in nondestructive evaluation, we designed a novel interactive bimodal visualization approach to create a combined view of the co-registered X-ray and neutron acquisitions of industrial objects. Using an automatic topological segmentation of the bivariate histogram of X-ray and neutron values as a starting point, the system provides a simple yet effective interface to easily create, explore, and adjust a bimodal visualization. We propose a widget with simple brushing interactions that enables the user to quickly correct the segmented histogram results. Our semiautomated system enables domain experts to intuitively explore large bimodal datasets without the need for either advanced segmentation algorithms or knowledge of visualization techniques. We demonstrate our approach using synthetic examples, industrial phantom objects created to stress bimodal scanning techniques, and real-world objects, and we discuss expert feedback.
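The bivariate histogram that serves as the system's starting point can be sketched with a simple binning routine; the bin count and value ranges below are illustrative defaults:

```python
def bivariate_histogram(xray, neutron, bins=4,
                        x_range=(0.0, 1.0), n_range=(0.0, 1.0)):
    """Count co-registered (x-ray, neutron) value pairs into a
    bins x bins 2D histogram -- the structure the described system
    segments topologically and lets the user brush."""
    hist = [[0] * bins for _ in range(bins)]
    for xv, nv in zip(xray, neutron):
        i = min(int((xv - x_range[0]) / (x_range[1] - x_range[0]) * bins),
                bins - 1)
        j = min(int((nv - n_range[0]) / (n_range[1] - n_range[0]) * bins),
                bins - 1)
        hist[i][j] += 1
    return hist
```

Materials that are indistinguishable in one modality separate into distinct histogram peaks when both axes are used, which is the premise of the bimodal approach.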



Y. Hua, J. Pang, X. Zhang, Y. Liu, X. Shi, B. Wang, Y. Liu, C. Qian. “Towards Practical Overlay Networks for Decentralized Federated Learning,” Subtitled “arXiv:2409.05331v1,” 2024.

ABSTRACT

Decentralized federated learning (DFL) uses peer-to-peer communication to avoid the single point of failure problem in federated learning and has been considered an attractive solution for machine learning tasks on distributed devices. We provide the first solution to a fundamental network problem of DFL: what overlay network should DFL use to achieve fast training of highly accurate models, low communication cost, and decentralized construction and maintenance? Overlay topologies of DFL have been investigated, but no existing DFL topology includes decentralized protocols for network construction and topology maintenance. Without these protocols, DFL cannot run in practice. This work presents an overlay network, called FedLay, which provides fast training and low communication cost for practical DFL. FedLay is the first solution for constructing near-random regular topologies in a decentralized manner and maintaining the topologies under node joins and failures. Experiments based on a prototype implementation and simulations show that FedLay achieves the fastest model convergence and highest accuracy on real datasets compared to existing DFL solutions, while incurring small communication costs and being resilient to node joins and failures.
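A small, centralized sketch of the near-random regular topologies FedLay targets, using stub pairing with rejection; FedLay's actual contribution is constructing and maintaining such topologies decentrally under churn, which this toy does not attempt:

```python
import random

def random_regular_topology(n, d, seed=0):
    """Build a random d-regular overlay by pairing node 'stubs' and
    retrying until the result is a simple graph (no self-loops or
    duplicate edges).  Each node ends up with exactly d neighbors,
    bounding its communication cost."""
    assert (n * d) % 2 == 0 and d < n
    rng = random.Random(seed)
    while True:
        stubs = [v for v in range(n) for _ in range(d)]
        rng.shuffle(stubs)
        edges = {tuple(sorted(stubs[i:i + 2]))
                 for i in range(0, len(stubs), 2)}
        if len(edges) == n * d // 2 and all(u != v for u, v in edges):
            return edges
```

Random regular graphs are expanders with high probability, which is what gives such overlays fast gossip-style model mixing at constant per-node degree.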



K.E. Isaacs, H. Kaiser. “Halide Code Generation Framework in Phylanx,” In Euro-Par 2022: Parallel Processing Workshops , Springer, 2024.

ABSTRACT

Separating algorithms from their computation schedule has become a de facto solution to tackle the challenges of developing high performance code on modern heterogeneous architectures. Common approaches include domain-specific languages (DSLs), which provide familiar APIs to domain experts, code generation frameworks that automate the generation of fast and portable code, and runtime systems that manage threads for concurrency and parallelism. In this paper, we present the Halide code generation framework for the Phylanx distributed array processing platform. This extension enables compile-time optimization of Phylanx primitives for target architectures. To accomplish this, (1) we implemented new Phylanx primitives using Halide, (2) we partially exported Halide’s thread pool API to carry out parallelism on HPX (Phylanx’s runtime) threads, and (3) we showcased HPX performance analysis tools made available to Halide applications. The evaluation of the work has been done in two steps. First, we compare the performance of Halide applications running on its native runtime with that of the new HPX backend to verify there is no cost associated with using HPX threads. Next, we compare the performance of a number of original implementations of Phylanx primitives against the new Halide-based ones to verify the performance and portability benefits of Halide in the context of Phylanx.



Y. Ishidoya, E. Kwan, B. Hunt, M. Lange, T. Sharma, D. Dosdall, R.S. MacLeod, E. Kholmovski, T.J. Bunch, R. Ranjan. “Effective ablation settings that predict chronic scar after atrial ablation with HELIOSTAR™ multi-electrode radiofrequency balloon catheter,” In Journal of Interventional Cardiac Electrophysiology, Springer Nature, 2024.
DOI: https://doi.org/10.1007/s10840-024-01948-y

ABSTRACT

Background

Radiofrequency balloon (RFB) ablation (HELIOSTAR™, Biosense Webster) has been developed to improve pulmonary vein ablation efficiency over traditional point-by-point RF ablation approaches. We aimed to find effective parameters for RFB ablation that result in chronic scar verified by late gadolinium enhancement cardiac magnetic resonance (LGE-CMR).

Methods

A chronic canine model (n = 8) was used to ablate in the superior vena cava (SVC), the right superior and the left inferior pulmonary vein (RSPV and LIPV), and the left atrial appendage (LAA) with a circumferential ablation approach (RF energy was delivered to all electrodes simultaneously) for 20 s or 60 s. The electroanatomical map with the ablation tags was projected onto the 3-month post-ablation LGE-CMR. Tags were divided into two groups depending on whether they correlated with CMR-based scar (ScarTags) or non-scar tissue (Non-ScarTags). The effective parameters for scar formation were estimated by multivariate logistic regression.

Results

This study assessed 80 lesions in the SVC, 80 lesions in the RSPV, 20 lesions in the LIPV, and 30 lesions in the LAA (168 ScarTags and 42 Non-ScarTags). In the multivariate analysis, two variables were associated with chronic scar formation: the temperature of the electrode before energy application (odds ratio (OR) 0.805, p = 0.0075) and a long RF duration (OR 2.360, p = 0.0218), whereas impedance drop was not associated with scar formation (OR 0.986, p = 0.373).

Conclusion

Lower temperature of the electrode before ablation and long ablation duration are critical parameters for durable atrial scar formation with RFB ablation.



K. Iyer, J. Adams, S.Y. Elhabian. “SCorP: Statistics-Informed Dense Correspondence Prediction Directly from Unsegmented Medical Images,” Subtitled “arXiv preprint arXiv:2404.17967,” 2024.

ABSTRACT

Statistical shape modeling (SSM) is a powerful computational framework for quantifying and analyzing the geometric variability of anatomical structures, facilitating advancements in medical research, diagnostics, and treatment planning. Traditional methods for shape modeling from imaging data demand significant manual and computational resources. Additionally, these methods necessitate repeating the entire modeling pipeline to derive shape descriptors (e.g., surface-based point correspondences) for new data. While deep learning approaches have shown promise in streamlining the construction of SSMs on new data, they still rely on traditional techniques to supervise the training of the deep networks. Moreover, the predominant linearity assumption of traditional approaches restricts their efficacy, a limitation also inherited by deep learning models trained using optimized/established correspondences. Consequently, representing complex anatomies becomes challenging. To address these limitations, we introduce SCorP, a novel framework capable of predicting surface-based correspondences directly from unsegmented images. By leveraging the shape prior learned directly from surface meshes in an unsupervised manner, the proposed model eliminates the need for an optimized shape model for training supervision. The strong shape prior acts as a teacher and regularizes the feature learning of the student network to guide it in learning image-based features that are predictive of surface correspondences. The proposed model streamlines the training and inference phases by removing the supervision for the correspondence prediction task while alleviating the linearity assumption. Experiments on the LGE MRI left atrium dataset and Abdomen CT-1K liver datasets demonstrate that the proposed technique enhances the accuracy and robustness of image-driven SSM, providing a compelling alternative to current fully supervised methods.



K. Iyer, S.Y. Elhabian. “Probabilistic 3D Correspondence Prediction from Sparse Unsegmented Images,” Subtitled “arXiv preprint arXiv:2407.01931v1,” 2024.

ABSTRACT

The study of physiology demonstrates that the form (shape) of anatomical structures dictates their functions, and analyzing the form of anatomies plays a crucial role in clinical research. Statistical shape modeling (SSM) is a widely used tool for quantitative analysis of forms of anatomies, aiding in characterizing and identifying differences within a population of subjects. Despite its utility, the conventional SSM construction pipeline is often complex and time-consuming. Additionally, reliance on linearity assumptions further limits the model from capturing clinically relevant variations. Recent advancements in deep learning solutions enable the direct inference of SSM from unsegmented medical images, streamlining the process and improving accessibility. However, the new methods of SSM from images do not adequately account for situations where the imaging data quality is poor or where only sparse information is available. Moreover, quantifying aleatoric uncertainty, which represents inherent data variability, is crucial in deploying deep learning for clinical tasks to ensure reliable model predictions and robust decision-making, especially in challenging imaging conditions. Therefore, we propose SPI-CorrNet, a unified model that predicts 3D correspondences from sparse imaging data. It leverages a teacher network to regularize feature learning and quantifies data-dependent aleatoric uncertainty by adapting the network to predict intrinsic input variances. Experiments on the LGE MRI left atrium dataset and Abdomen CT-1K liver datasets demonstrate that our technique enhances the accuracy and robustness of sparse image-driven SSM.
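The data-dependent aleatoric uncertainty described above is commonly trained with a heteroscedastic Gaussian negative log-likelihood, where the network predicts a variance alongside each output; the sketch below shows that generic objective, not necessarily the paper's exact loss:

```python
import math

def gaussian_nll(y, mu, var):
    """Heteroscedastic Gaussian negative log-likelihood, averaged over
    outputs: 0.5 * (log(var) + (y - mu)^2 / var).  Predicting a larger
    variance down-weights the squared error but pays a log penalty, so
    the model is rewarded for honest uncertainty estimates."""
    return sum(0.5 * (math.log(v) + (yi - mi) ** 2 / v)
               for yi, mi, v in zip(y, mu, var)) / len(y)
```

A correct prediction with small variance scores best, while a confidently wrong prediction is penalized heavily, which is the behavior needed for reliable predictions in poor imaging conditions.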



K. Iyer, S. Elhabian, S. Joshi. “LEDA: Log-Euclidean Diffeomorphic Autoencoder for Efficient Statistical Analysis of Diffeomorphism,” Subtitled “arXiv preprint arXiv:2412.16129,” 2024.

ABSTRACT

Image registration is a core task in computational anatomy that establishes correspondences between images. Invertible deformable registration, which computes a deformation field and handles complex, non-linear transformations, is essential for tracking anatomical variations, especially in neuroimaging applications where inter-subject differences and longitudinal changes are key. Analyzing the resulting deformation fields is challenging due to their non-linearity, which limits statistical analysis; traditional approaches are, moreover, computationally expensive, sensitive to initialization, and prone to numerical errors, especially when the deformation is far from the identity. To address these limitations, we propose the Log-Euclidean Diffeomorphic Autoencoder (LEDA), an innovative framework designed to compute the principal logarithm of deformation fields by efficiently predicting consecutive square roots. LEDA operates within a linearized latent space that adheres to the diffeomorphism group action laws, enhancing our model’s robustness and applicability. We also introduce a loss function to enforce inverse consistency, ensuring accurate latent representations of deformation fields. Extensive experiments with the OASIS-1 dataset demonstrate the effectiveness of LEDA in accurately modeling and analyzing complex non-linear deformations while maintaining inverse consistency. Additionally, we evaluate its ability to capture and incorporate clinical variables, enhancing its relevance for clinical applications.
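The log-by-consecutive-square-roots principle behind LEDA can be seen in one dimension: for a positive scalar a, taking n successive square roots gives log(a) ≈ 2^n (a^(1/2^n) − 1). A scalar sketch follows; LEDA applies the analogous idea to deformation fields, with a network predicting the square roots:

```python
import math

def principal_log(a, n=20):
    """Approximate log(a) for a > 0 by n consecutive square roots:
    a^(1/2^n) approaches 1, and 2^n * (a^(1/2^n) - 1) approaches
    log(a).  The deformation-field analogue replaces scalar square
    roots with square roots under composition."""
    root = a
    for _ in range(n):
        root = math.sqrt(root)
    return (2 ** n) * (root - 1.0)
```

The repeated square roots bring the transformation close to the identity, where the logarithm is well behaved, which is why this route avoids the numerical trouble of deformations far from the identity.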



R. Jin, J.A. Bergquist, D. Dade, B. Zenger, X. Ye, R. Ranjan, R.S. MacLeod, B. Steinberg, T. Tasdizen. “Machine Learning Estimation of Myocardial Ischemia Severity Using Body Surface ECG,” In Computing in Cardiology 2024, 2024.

ABSTRACT

Acute myocardial ischemia (AMI) is one of the leading causes of cardiovascular deaths around the globe. Yet, clinical early detection and patient risk stratification of AMI remain an unmet need, in part due to poor performance of traditional electrocardiogram (ECG) interpretation. Machine learning (ML) techniques have shown promise in the analysis of ECGs, even detecting cardiac diseases not identifiable via traditional analysis. However, there has been limited usage of ML tools in the case of AMI due to a lack of high-quality training data, especially detailed ECG recordings throughout the evolution of ischemic events. In this study, we applied ML to predict the ischemic tissue volume directly from body surface ECGs in an AMI animal model. The developed ML networks performed favorably, with an average R2 value of 0.932, suggesting a robust prediction. The study also provides insights on how to create and utilize ML tools to enhance clinical risk stratification of patients experiencing AMI.
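The R2 metric reported above is the standard coefficient of determination; for completeness, a minimal implementation:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot.  A value of 1
    means perfect prediction; 0 means no better than predicting the
    mean of the observed values."""
    mean = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean) ** 2 for y in y_true)
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))
    return 1.0 - ss_res / ss_tot
```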



J. Johnson, L. McDonald, T. Tasdizen. “Improving uranium oxide pathway discernment and generalizability using contrastive self-supervised learning,” In Computational Materials Science, Vol. 223, Elsevier, 2024.

ABSTRACT

In the field of nuclear forensics, there exists a plethora of different tools to aid investigators when performing analysis of unknown nuclear materials. Many of these tools offer visual representations of uranium ore concentrate (UOC) materials that include complementary and contrasting information. In this paper, we present a novel technique drawing from state-of-the-art machine learning methods that allows information from scanning electron microscopy (SEM) images to be combined to create digital encodings of the material that can be used to determine the material’s processing route. Our technique can classify UOC processing routes with greater than 96% accuracy in a fraction of a second and can be adapted to unseen samples at similarly high accuracy. The technique’s high accuracy and speed allow forensic investigators to quickly obtain preliminary results, while its generalizability allows the model to be adapted to new materials or processing routes without the need for complete retraining.
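A generic contrastive (InfoNCE-style) objective of the kind used in such self-supervised pretraining can be sketched as follows; the exact loss, similarity measure, and temperature used by the paper are assumptions here:

```python
import math

def info_nce(anchor, positive, negatives, temp=0.1):
    """InfoNCE loss for one anchor embedding: the negative log of the
    softmax weight assigned to the positive pair among all candidates,
    using temperature-scaled cosine similarity.  Minimizing it pulls
    augmented views of the same sample together and pushes other
    samples apart."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)
    sims = [cos(anchor, positive)] + [cos(anchor, n) for n in negatives]
    exps = [math.exp(s / temp) for s in sims]
    return -math.log(exps[0] / sum(exps))
```

Embeddings trained this way cluster by underlying material signature, which is what lets a lightweight classifier on top adapt to unseen samples without full retraining.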



L.G. Johnson, J.D. Mozingo, P.R. Atkins, S. Schwab, A. Morris, S.Y. Elhabian, D.R. Wilson, H. Kim, A.E. Anderson. “A framework for three-dimensional statistical shape modeling of the proximal femur in Legg–Calvé–Perthes disease,” In International Journal of Computer Assisted Radiology and Surgery, Springer Nature Switzerland, 2024.

ABSTRACT

Purpose

The pathomorphology of Legg–Calvé–Perthes disease (LCPD) is a key contributor to poor long-term outcomes such as hip pain, femoroacetabular impingement, and early-onset osteoarthritis. Plain radiographs, commonly used for research and in the clinic, cannot accurately represent the full extent of LCPD deformity. The purpose of this study was to develop and evaluate a methodological framework for three-dimensional (3D) statistical shape modeling (SSM) of the proximal femur in LCPD.

Methods

We developed a framework consisting of three core steps: segmentation, surface mesh preparation, and particle-based correspondence. The framework aims to address challenges in modeling this rare condition, characterized by highly heterogeneous deformities across a wide age range and small sample sizes. We evaluated this framework by producing an SSM from clinical magnetic resonance images of 13 proximal femurs with LCPD deformity from 11 patients between the ages of 6 and 12 years.

Results

After removing differences in scale and pose, the dominant shape modes described morphological features characteristic of LCPD, including a broad and flat femoral head, high-riding greater trochanter, and reduced neck-shaft angle. The first four shape modes were chosen for the evaluation of the model’s performance, together describing 87.5% of the overall cohort variance. The SSM was generalizable to unfamiliar examples with an average point-to-point reconstruction error below 1 mm. We observed strong Spearman rank correlations (up to 0.79) between some shape modes, 3D measurements of femoral head asphericity, and clinical radiographic metrics.
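The Spearman rank correlation used in the evaluation is the Pearson correlation of ranks; a minimal tie-free sketch:

```python
def spearman(x, y):
    """Spearman rank correlation via the classic formula
    1 - 6 * sum(d^2) / (n * (n^2 - 1)), valid when there are no ties.
    It measures monotone (not necessarily linear) association, which
    suits comparing shape-mode scores to radiographic metrics."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n ** 2 - 1))
```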

Conclusion

In this study, we present a framework, based on SSM, for the objective description of LCPD deformity in three dimensions. Our methods can accurately describe overall shape variation using a small number of parameters, and are a step toward a widely accepted, objective 3D quantification of LCPD deformity.