SCI Publications
2026
T. M. Athawale, K. Moreland, D. Pugmire, C. R. Johnson, P. Rosen, M. Norman, A. Georgiadou, A. Entezari.
MAGIC: Marching Cubes Isosurface Uncertainty Visualization for Gaussian Uncertain Data with Spatial Correlation, In IEEE Transactions on Visualization and Computer Graphics, IEEE, 2026.
In this paper, we study the propagation of data uncertainty through the marching cubes algorithm for isosurface visualization of correlated uncertain data. Consideration of correlation has been shown to be paramount for avoiding errors in uncertainty quantification and visualization in multiple prior studies. Although the problem of isosurface uncertainty with spatial data correlation has been previously addressed, there are two major limitations to prior treatments. First, there are no analytical formulations for uncertainty quantification of isosurfaces when the data uncertainty is characterized by a Gaussian distribution with spatial correlation. Second, as a consequence of the lack of analytical formulations, existing techniques resort to a Monte Carlo sampling approach, which is expensive and difficult to integrate into visualization tools. To address these limitations, we present a closed-form framework to efficiently derive uncertainty in marching cubes level-sets for Gaussian uncertain data with spatial correlation (MAGIC). To derive closed-form solutions, we leverage Hinkley's derivation of the ratio of Gaussian distributions. With our analytical framework, we achieve a significant speed-up and enhanced accuracy of uncertainty quantification over classical Monte Carlo methods. We further accelerate our analytical solutions using many-core processors to achieve speed-ups up to 585× and integrability with production visualization tools for broader impact. We demonstrate the effectiveness of our correlation-aware uncertainty framework through experiments on meteorology, urban flow, and astrophysics simulation datasets.
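For intuition about what a closed-form, correlation-aware computation looks like, here is a minimal sketch (not the paper's MAGIC implementation) of the probability that an isosurface crosses a single cell edge whose endpoint values are jointly Gaussian; the means, standard deviations, correlation, and isovalue below are hypothetical placeholders, and the closed form is checked against the Monte Carlo baseline that the paper compares against.

```python
# Minimal sketch (not the paper's MAGIC implementation): probability that an
# isosurface crosses a single cell edge whose endpoint values are jointly
# Gaussian with correlation rho. All numbers below are hypothetical.
import numpy as np
from scipy.stats import multivariate_normal, norm

mu = np.array([0.4, 0.9])      # endpoint means (placeholder)
sd = np.array([0.2, 0.3])      # endpoint standard deviations (placeholder)
rho, c = 0.7, 0.6              # spatial correlation and isovalue (placeholder)

cov = np.array([[sd[0]**2, rho*sd[0]*sd[1]],
                [rho*sd[0]*sd[1], sd[1]**2]])
mvn = multivariate_normal(mean=mu, cov=cov)

# Closed form: the edge is crossed iff exactly one endpoint lies below c.
p_both_below = mvn.cdf([c, c])
p_both_above = 1 - norm.cdf(c, mu[0], sd[0]) - norm.cdf(c, mu[1], sd[1]) + p_both_below
p_cross_closed = 1 - p_both_below - p_both_above

# Monte Carlo estimate for comparison.
samples = mvn.rvs(size=200_000, random_state=0)
p_cross_mc = np.mean((samples[:, 0] - c) * (samples[:, 1] - c) < 0)

print(f"closed form: {p_cross_closed:.4f}  Monte Carlo: {p_cross_mc:.4f}")
```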
R.T. Black, S.A. Maas, W. Wu, J. Maheshwari, T. Kolev, J.A. Weiss, M.A. Jolley.
An open-source computational framework for immersed fluid-structure interaction modeling using FEBio and MFEM, Subtitled arXiv:2601.08266v1, 2026.
Fluid-structure interaction (FSI) simulation of biological systems presents significant computational challenges, particularly for applications involving large structural deformations and contact mechanics, such as heart valve dynamics. Traditional ALE methods encounter fundamental difficulties with such problems due to mesh distortion, motivating immersed techniques. This work presents a novel open-source immersed FSI framework that strategically couples two mature finite element libraries: MFEM, a GPU-ready and scalable library with state-of-the-art parallel performance developed at Lawrence Livermore National Laboratory, and FEBio, a nonlinear finite element solver with sophisticated solid mechanics capabilities designed for biomechanics applications developed at the University of Utah. This coupling creates a unique synergy wherein the fluid solver leverages MFEM's distributed-memory parallelization and pathway to GPU acceleration, while the immersed solid exploits FEBio's comprehensive suite of hyperelastic and viscoelastic constitutive models and advanced solid mechanics modeling targeted for biomechanics applications. FSI coupling is achieved using a fictitious domain methodology with variational multiscale stabilization for enhanced accuracy on under-resolved grids expected with unfitted meshes used in immersed FSI. A fully implicit, monolithic scheme provides robust coupling for strongly coupled FSI characteristic of cardiovascular applications. The framework's modular architecture facilitates straightforward extension to additional physics and element technologies. Several test problems are considered to demonstrate the capabilities of the proposed framework, including a 3D semilunar heart valve simulation. This platform addresses a critical need for open-source immersed FSI software combining advanced biomechanics modeling with high-performance computing infrastructure.
H. Csala, A. Arzani.
Decomposed sparse modal optimization: Interpretable reduced-order modeling of unsteady flows, In International Journal of Heat and Fluid Flow, Vol. 117, Elsevier, pp. 110124. 2026.
ISSN: 0142-727X
DOI: https://doi.org/10.1016/j.ijheatfluidflow.2025.110124
Modal analysis plays a crucial role in fluid dynamics, offering a powerful tool for reducing the complexity of high-dimensional fluid flow data while extracting meaningful insights into flow physics. This is particularly important in the study of cardiovascular flows, where modal techniques help characterize unsteady flow structures, improve reduced-order modeling, and inform disease diagnosis and rapid medical device design. The most commonly used method, proper orthogonal decomposition (POD), is highly interpretable but suffers from its linearity, which limits its ability to capture nonlinear interactions. In this work, we introduce decomposed sparse modal optimization (DESMO), a nonlinear, adaptive extension of POD that improves the accuracy of flow field reconstruction while requiring fewer modes. We use modern gradient descent-based optimization tools to optimize the spatial modes and temporal coefficients concurrently while using a sparsity-promoting loss term. We demonstrate these methods on a canonical fluid flow benchmark (flow over a cylinder), a real-world example (blood flow inside a brain aneurysm), and a turbulent channel flow. DESMO can identify spatial modes that resemble higher-order POD modes while uncovering entirely new spatial structures in some cases. Different versions of DESMO can leverage Fourier series for modeling temporal coefficients, an autoencoder for spatial mode optimization, and symbolic regression for discovering differential equations for temporal evolution. Our results demonstrate that DESMO not only provides a more accurate representation of fluid flows but also preserves the interpretability of classical POD by having an analytical modal decomposition equation, offering a promising approach for reduced-order modeling across engineering applications.
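As a rough illustration of the underlying idea (not the DESMO code itself), the following sketch jointly refines spatial modes and temporal coefficients by gradient descent with an L1 sparsity penalty, starting from a POD initialization; the snapshot matrix, rank, learning rate, and penalty weight are synthetic assumptions.

```python
# Illustrative sketch of sparsity-promoting modal optimization (not DESMO itself):
# jointly refine r spatial modes U and temporal coefficients A so that X ~ U @ A.T,
# starting from POD (truncated SVD) and adding an L1 penalty on the spatial modes.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 2*np.pi, 200)
t = np.linspace(0, 10, 100)
X = (np.outer(np.sin(x), np.cos(2*t)) +
     0.5*np.outer(np.sin(3*x), np.sin(5*t)) +
     0.01*rng.standard_normal((x.size, t.size)))     # synthetic snapshot matrix

r = 4
Uf, s, Vt = np.linalg.svd(X, full_matrices=False)
U, A = Uf[:, :r] * s[:r], Vt[:r, :].T                # POD initialization

lr, lam = 1e-4, 1e-3
for _ in range(2000):
    R = X - U @ A.T                                  # reconstruction residual
    U -= lr * (-2*R @ A + lam*np.sign(U))            # L1 subgradient -> sparser modes
    A -= lr * (-2*R.T @ U)

print("relative error:", np.linalg.norm(X - U @ A.T) / np.linalg.norm(X))
```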
Z. Cutler, J. Wilburn, H. Shrestha, Y. Ding, B. Bollen, K. Abrar Nadib, T. He, A. McNutt, L. Harrison, A. Lex.
ReVISit 2: A Full Experiment Life Cycle User Study Framework, In IEEE Transactions on Visualization and Computer Graphics, Vol. 32, IEEE, 2026.
Online user studies of visualizations, visual encodings, and interaction techniques are ubiquitous in visualization research. Yet, designing, conducting, and analyzing studies effectively is still a major burden. Although various packages support such user studies, most solutions address only facets of the experiment life cycle, make reproducibility difficult, or do not cater to nuanced study designs or interactions. We introduce reVISit 2, a software framework that supports visualization researchers at all stages of designing and conducting browser-based user studies. ReVISit supports researchers in the design, debug & pilot, data collection, analysis, and dissemination phases of an experiment by providing both technical affordances (such as replay of participant interactions) and sociotechnical aids (such as a mindfully maintained community of support). It is a proven system that can be (and has been) used in publication-quality studies---which we demonstrate through a series of experimental replications. We reflect on the design of the system via interviews and an analysis of its technical dimensions. Through this work, we seek to elevate the ease with which studies are conducted, improve the reproducibility of studies within our community, and support the construction of advanced interactive studies.
D. Dade, J.A. Bergquist, R.S. MacLeod, B.A. Steinberg, T. Tasdizen.
Self-Supervised Contrastive Learning Enables Robust ECG-Based Cardiac Classification, In Heart Rhythm O2, Elsevier, 2026.
E. Ghelichkhan, T. Tasdizen.
Beyond Standard Sampling: Metric-Guided Iterative Inference for Radiologists-Aligned Medical Counterfactual Generation, In Proceedings of Machine Learning Research, 2026.
Generative counterfactuals offer a promising avenue for explainable AI in medical imaging, yet ensuring these synthesized images are both anatomically faithful and clinically effective remains a significant challenge. This work presents a domain-specific diffusion framework for generating "healthy" counterfactuals from chest X-rays with cardiomegaly, underpinned by a systematic metric-guided inference strategy. In contrast to methods relying on static sampling parameters, our approach iteratively explores the inference hyperparameter space to maximize our composite selection criterion, the CF Score, which integrates our novel Faithfulness-Effectiveness Trade-off (FET) metric.
We extend the evaluation of counterfactual utility beyond simple classification shifts by conducting simultaneous validation against radiologist annotations and eye-tracking data. Using the REFLACX dataset, we demonstrate that difference maps derived from our counterfactuals exhibit strong spatial alignment with expert visual attention and annotations. Quantified by Normalized Cross-Correlation, Hit Rate, pixel-wise ROC-AUC, and AUC-IoU, our results confirm that metric-guided counterfactuals provide dense and clinically relevant localizations of pathology that closely mirror human diagnostic reasoning.
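For readers unfamiliar with these alignment measures, the sketch below gives generic implementations of two of them (not the paper's evaluation code): normalized cross-correlation against an attention heatmap and a top-k hit rate against a binary annotation mask, computed on synthetic arrays.

```python
# Generic sketch of two counterfactual-alignment metrics (not the paper's code):
# normalized cross-correlation (NCC) against an attention heatmap, and a hit rate
# of the top-k difference pixels against a binary annotation mask. Data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
diff_map = rng.random((64, 64))          # |original - counterfactual| (placeholder)
attention = rng.random((64, 64))         # eye-tracking heatmap (placeholder)
annotation = np.zeros((64, 64), bool)
annotation[20:40, 25:45] = True          # expert bounding-box mask (placeholder)

def ncc(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def hit_rate(diff, mask, k=100):
    # Fraction of the k largest-difference pixels that fall inside the annotation.
    top = np.argsort(diff.ravel())[-k:]
    return float(mask.ravel()[top].mean())

print("NCC:", ncc(diff_map, attention))
print("hit rate:", hit_rate(diff_map, annotation))
```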
J. Hart, B. van Bloemen Waanders, J. Li, T. A. J. Ouermi, C. R. Johnson.
Hyper-differential sensitivity analysis with respect to model discrepancy: Prior distributions, In International Journal for Uncertainty Quantification, Vol. 16, No. 1, Begell House, 2026.
Hyper-differential sensitivity analysis with respect to model discrepancy was recently developed to enable uncertainty quantification for optimization problems. The approach consists of two primary steps: (i) Bayesian calibration of the discrepancy between high- and low-fidelity models, and (ii) propagating the model discrepancy uncertainty through the optimization problem. When high-fidelity model evaluations are limited, as is common in practice, the prior discrepancy distribution plays a crucial role in the uncertainty analysis. However, specification of this prior is challenging due to its mathematical complexity and many hyper-parameters. This article presents a novel approach to specify the prior distribution. Our approach consists of two parts: (1) an algorithmic initialization of the prior hyper-parameters that uses existing data to initialize a hyper-parameter estimate, and (2) a visualization framework to systematically explore properties of the prior and guide tuning of the hyper-parameters to ensure that the prior captures the appropriate range of uncertainty. We provide detailed mathematical analysis and a collection of numerical examples that elucidate properties of the prior that are crucial to ensure uncertainty quantification.
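As a loose analogy for this kind of prior exploration (not the article's discrepancy prior), the following sketch draws samples from a squared-exponential Gaussian-process prior under two hypothetical hyperparameter settings to show how the hyperparameters control the range of uncertainty the prior admits.

```python
# Loose illustration (not the article's discrepancy prior): draw samples from a
# squared-exponential GP prior under two hyperparameter settings to see how the
# variance and length-scale hyperparameters control the admitted uncertainty.
import numpy as np

def se_cov(x, variance, length_scale):
    d = x[:, None] - x[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 100)

for variance, length_scale in [(1.0, 0.05), (0.1, 0.3)]:   # hypothetical settings
    K = se_cov(x, variance, length_scale) + 1e-10 * np.eye(x.size)
    samples = rng.multivariate_normal(np.zeros(x.size), K, size=5)
    print(f"var={variance}, ell={length_scale}: sample range "
          f"[{samples.min():.2f}, {samples.max():.2f}]")
```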
K. Kroupa, R. Kepecs, H. Zhang, J.A. Weiss, C.T. Hung, G.A. Ateshian.
Intrinsic Viscoelasticity of Type II Collagen Contributes to the Viscoelastic Response of Immature Bovine Articular Cartilage Under Unconfined Compression Stress Relaxation, In Journal of Biomechanical Engineering, 2026.
This study validates a finite deformation, nonlinear viscoelastic constitutive model for the collagen matrix of immature bovine articular cartilage, using reactive viscoelasticity. Tissue samples underwent proteoglycan (PG) digestion, losing more than 98% of their initial PG content to increase their hydraulic permeability. To verify that PG digestion eliminated flow-dependent viscoelasticity, samples were subjected to a gravitational permeation experiment, demonstrating that their hydraulic permeability, k = 268±152 mm^4/(N·s) (n=8), was five orders of magnitude greater than reported for untreated cartilage. Digested cartilage plugs were subjected to unconfined compression stress relaxation (four consecutive 10% strain ramp-hold profiles) to fit the load response and extract material properties (RMSE_fit = 1.86±0.61 kPa, n=8). Successful curve-fitting served as a necessary condition for validating the model. Then, a separate unconfined compression stress-relaxation test was performed on the same samples, to 40% compressive strain at the same ramp rate. The model was able to faithfully predict this experimental response using fitted material properties (RMSE_pred = 3.95±1.33 kPa, with stresses ranging from 0 to 155±37 kPa), providing a sufficient condition for validation in unconfined compression stress-relaxation. A computational model showed that flow-independent viscoelasticity of cartilage collagen can enhance the stress response by ~15% at fast strain rates, over flow-dependent effects. However, we estimate from prior studies that flow-independent viscoelasticity may enhance the stress response of cartilage by up to 200%, implying that PGs probably contribute significantly to the tissue's flow-independent viscoelasticity.
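In the same spirit, a minimal stress-relaxation curve fit might look like the sketch below, which uses a one-term Prony series on synthetic hold-phase data rather than the reactive viscoelasticity model employed in the study.

```python
# Illustrative stress-relaxation fit (a one-term Prony series, NOT the paper's
# reactive viscoelasticity model), applied to synthetic hold-phase data.
import numpy as np
from scipy.optimize import curve_fit

def relaxation(t, s_inf, s1, tau):
    # Stress during the hold phase: equilibrium term plus one decaying term.
    return s_inf + s1 * np.exp(-t / tau)

rng = np.random.default_rng(0)
t = np.linspace(0, 300, 150)                                   # seconds (synthetic)
stress = relaxation(t, 40.0, 60.0, 45.0) + rng.normal(0, 1.0, t.size)  # kPa

popt, _ = curve_fit(relaxation, t, stress, p0=[30.0, 50.0, 30.0])
rmse = np.sqrt(np.mean((relaxation(t, *popt) - stress) ** 2))
print(f"fitted (s_inf, s1, tau) = {popt.round(2)},  RMSE = {rmse:.2f} kPa")
```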
A. Liew, M. Strocchi, C. Rodero, K.K. Gillette, et al.
Leadless right ventricular pacing, In Advancing Our Understanding of the Cardiac Conduction System to Prevent Arrhythmias, Frontiers, 2026.
J. Maheshwari, W. Wu, C.N. Zelonis, S.A. Maas, K. Sunderland, Y. Barak-Corren, S. Ching, P. Sabin, A. Lasso, M. J. Gillespie, J. A. Weiss, M. A. Jolley.
Effect of Right Ventricular Outflow Tract Material Properties on Simulated Transcatheter Pulmonary Valve Placement, Subtitled arXiv:2601.05410v1, 2026.
Finite element (FE) simulations emulating transcatheter pulmonary valve (TPV) system deployment in patient-specific right ventricular outflow tracts (RVOT) assume material properties for the RVOT and adjacent tissues. Sensitivity of the deployment to variation in RVOT material properties is unknown. Moreover, the effect of transannular patch stiffness and location on simulated TPV deployment has not been explored. A sensitivity analysis on the material properties of a patient-specific RVOT during TPV deployment, modeled as an uncoupled Holzapfel-Gasser-Ogden (HGO) material, was conducted using FEBioUncertainSCI. Further, the effects of a transannular patch during TPV deployment were analyzed by considering two patch locations and four patch stiffnesses. Visualization of results and quantification were performed using custom metrics implemented in SlicerHeart and FEBio. Sensitivity analysis revealed that the shear modulus of the ground matrix (c), fiber modulus (k1), and fiber mean orientation angle (gamma) had the greatest effect on 95th percentile stress, whereas c alone had the greatest effect on 95th percentile Lagrangian strain. First-order sensitivity indices accounted for most of the total-order sensitivity indices. Simulations using a transannular patch revealed that peak stress and strain were dependent on patch location. As stiffness of the patch increased, greater stress was observed at the interface connecting the patch to the RVOT, and stress in the patch itself increased while strain decreased. The total enclosed volume by the TPV device remained unchanged across all simulated patch cases. This study highlights that while uncertainties in tissue material properties and patch locations may influence functional outcomes, FE simulations provide a reliable framework for evaluating these outcomes in TPVR.
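The variance-based sensitivity workflow can be sketched as follows using SALib's Saltelli sampling and Sobol analysis on a cheap synthetic surrogate that stands in for the finite element model; the parameter names mirror the abstract, but the ranges and the surrogate itself are hypothetical.

```python
# Sketch of a variance-based sensitivity analysis in the spirit of the study
# (Saltelli sampling + Sobol indices via SALib), with a cheap synthetic surrogate
# standing in for the finite element model; parameter ranges are hypothetical.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["c", "k1", "gamma"],   # ground-matrix modulus, fiber modulus, fiber angle
    "bounds": [[5.0, 50.0], [50.0, 500.0], [0.0, 45.0]],
}

X = saltelli.sample(problem, 1024)

# Hypothetical surrogate for "95th percentile stress" (NOT the FE model).
def surrogate(x):
    c, k1, gamma = x
    return 2.0*c + 0.05*k1 + 0.5*np.cos(np.radians(gamma))*c + 0.01*c*k1

Y = np.apply_along_axis(surrogate, 1, X)
Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: first-order {s1:.3f}, total-order {st:.3f}")
```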
W. Regli, R. Rajaraman, D. Lopresti, D. Jensen, M. Maher, M. Parasher, M. Singh, H. Yanco.
The Imperative for Grand Challenges in Computing, Subtitled arXiv:2601.00700, 2026.
Computing is an indispensable component of nearly all technologies and is ubiquitous for vast segments of society. It is also essential to discoveries and innovations in most disciplines. However, while past grand challenges in science have involved computing as one of the tools to address the challenge, these challenges have not been principally about computing. Why has the computing community not yet produced challenges at the scale of grandeur that we see in disciplines such as physics, astronomy, or engineering? How might we go about identifying similarly grand challenges? What are the grand challenges of computing that transcend our discipline's traditional boundaries and have the potential to dramatically improve our understanding of the world and positively shape the future of our society?
There is significant benefit in our taking, as a field, a more intentional approach to "grand challenges." We seek challenge problems that are sufficiently compelling both to ignite the imagination of computer scientists and to draw researchers from other disciplines to computational challenges.
This paper emphasizes the importance, now more than ever, of defining and pursuing grand challenges in computing as a field, and of being intentional about translation and about realizing their impacts on science and society. Building on lessons from prior grand challenges, the paper explores the nature of a grand challenge today, emphasizing both scale and impact, and how the community may tackle such a grand challenge given a rapidly changing innovation ecosystem in computing. The paper concludes with a call to action for our community to come together to define grand challenges in computing for the next decade and beyond.
A. Sahistan, S. Zellmann, H. Miao, N. Morrical, I. Wald, V. Pascucci.
Materializing Inter-Channel Relationships with Multi-Density Woodcock Tracking, In IEEE Transactions on Visualization and Computer Graphics, IEEE, 2026.
DOI: 10.1109/TVCG.2026.3653310
Volume rendering techniques for scientific visualization have recently shifted toward Monte Carlo (MC) methods for their flexibility and robustness, but their use in multi-channel visualization remains underexplored. Traditional multi-channel volume rendering often relies on arbitrary, non-physically-based color blending functions that hinder interpretation. We introduce multi-density Woodcock tracking, a simple extension of Woodcock tracking that leverages an MC method to produce high-fidelity, physically grounded multi-channel renderings without arbitrary blending. By generalizing Woodcock's distance tracking, we provide a unified blending modality that also integrates blending functions from prior works. We further implement effects that enhance boundary and feature recognition. By accumulating frames in real time, our approach delivers high-quality visualizations with perceptual benefits, demonstrated on diverse datasets.
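A minimal single-ray sketch of the idea (simplified placeholders, not the paper's renderer) is shown below: free-flight distances are sampled against a shared majorant, and real collisions are attributed to one of two channels in proportion to its local extinction.

```python
# Minimal single-ray sketch of multi-density Woodcock (delta) tracking with two
# channels: sample free-flight distances against a shared majorant, then attribute
# real collisions to a channel in proportion to its local extinction.
import numpy as np

rng = np.random.default_rng(0)

def sigma_a(x): return 0.8 * np.exp(-((x - 0.3) / 0.1) ** 2)   # channel A extinction (placeholder)
def sigma_b(x): return 0.6 * np.exp(-((x - 0.7) / 0.2) ** 2)   # channel B extinction (placeholder)

sigma_max = 1.4            # majorant bounding sigma_a + sigma_b on [0, 1]

def first_collision(n_max=1000):
    """Return (position, channel) of the first real collision, or (None, None)."""
    x = 0.0
    for _ in range(n_max):
        x += -np.log(1.0 - rng.random()) / sigma_max   # tentative free flight
        if x > 1.0:
            return None, None                           # ray left the volume
        s_a, s_b = sigma_a(x), sigma_b(x)
        if rng.random() < (s_a + s_b) / sigma_max:      # real (non-null) collision?
            channel = "A" if rng.random() < s_a / (s_a + s_b) else "B"
            return x, channel
    return None, None

hits = [first_collision() for _ in range(10_000)]
counts = {"A": sum(c == "A" for _, c in hits), "B": sum(c == "B" for _, c in hits)}
print(counts)
```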
X. Tang, X. Yue, H. Mane, D. Li, Q. Nguyen, T. Tasdizen.
How to Build Robust, Scalable Models for GSV-Based Indicators in Neighborhood Research, Subtitled arXiv:2601.06443v1, 2026.
A substantial body of health research demonstrates a strong link between neighborhood environments and health outcomes. Recently, there has been increasing interest in leveraging advances in computer vision to enable large-scale, systematic characterization of neighborhood built environments. However, the generalizability of vision models across fundamentally different domains remains uncertain, for example, transferring knowledge from ImageNet to the distinct visual characteristics of Google Street View (GSV) imagery. In applied fields such as social health research, several critical questions arise: which models are most appropriate, whether to adopt unsupervised training strategies, what training scale is feasible under computational constraints, and how much such strategies benefit downstream performance. These decisions are often costly and require specialized expertise.
In this paper, we answer these questions through empirical analysis and provide practical insights into how to select and adapt foundation models for datasets with limited size and labels, while leveraging larger, unlabeled datasets through unsupervised training. Our study includes comprehensive quantitative and visual analyses comparing model performance before and after unsupervised adaptation.
2025
B. Adcock, B. Hientzsch, A. Narayan, Y. Xu.
Hybrid least squares for learning functions from highly noisy data, Subtitled arXiv:2507.02215, 2025.
Motivated by the need for efficient estimation of conditional expectations, we consider a least-squares function approximation problem with heavily polluted data. Existing methods that are powerful in the small noise regime are suboptimal when large noise is present. We propose a hybrid approach that combines Christoffel sampling with certain types of optimal experimental design to address this issue. We show that the proposed algorithm enjoys appropriate optimality properties for both sample point generation and noise mollification, leading to improved computational efficiency and sample complexity compared to existing methods. We also extend the algorithm to convex-constrained settings with similar theoretical guarantees. When the target function is defined as the expectation of a random field, we extend our approach to leverage adaptive random subspaces and establish results on the approximation capacity of the adaptive procedure. Our theoretical findings are supported by numerical studies on both synthetic data and on a more challenging stochastic simulation problem in computational finance.
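A small sketch of the Christoffel-sampling ingredient (a standard optimal weighted least-squares construction, not the paper's full hybrid algorithm) is given below for Legendre polynomials on [-1, 1], with a hypothetical target function and heavy synthetic noise.

```python
# Sketch of Christoffel-weighted least squares for polynomial approximation on [-1,1]:
# sample points from the Christoffel density by rejection, then solve a weighted
# least-squares problem with noisy evaluations of a test function.
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)
m = 10                                   # number of basis functions (degree < m)
f = lambda x: np.exp(x) * np.sin(3 * x)  # hypothetical target function

def phi(x):
    """Orthonormal Legendre basis w.r.t. the uniform measure on [-1, 1]."""
    V = legendre.legvander(x, m - 1)
    return V * np.sqrt(2 * np.arange(m) + 1)

def christoffel_sample(n):
    """Rejection-sample n points from the density proportional to sum_j phi_j(x)^2."""
    out = []
    while len(out) < n:
        x = rng.uniform(-1, 1, size=4 * n)
        accept = rng.random(4 * n) < (phi(x) ** 2).sum(axis=1) / m**2
        out.extend(x[accept])
    return np.array(out[:n])

n = 200
x = christoffel_sample(n)
w = m / (phi(x) ** 2).sum(axis=1)                 # Christoffel weights
y = f(x) + rng.normal(0, 0.5, size=n)             # heavily noisy data

A = phi(x) * np.sqrt(w)[:, None]                  # weighted design matrix
c, *_ = np.linalg.lstsq(A, np.sqrt(w) * y, rcond=None)

xt = np.linspace(-1, 1, 500)
print("max approximation error:", np.max(np.abs(phi(xt) @ c - f(xt))))
```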
N. Alkhatani, I. Petri, O. Rana, M. Parashar.
Edge learning for energy-aware resource management, In 2025 IEEE International Conference on Edge Computing and Communications (EDGE), IEEE, 2025.
As the demand for intelligent systems grows, leveraging edge learning and autonomic self-management offers significant benefits for supporting real-time data analysis and resource management in edge environments. We describe and evaluate four distinct task allocation scenarios to demonstrate autonomic management of edge resources: random execution, autonomic broker-based scheduling, priority-driven execution, and energy-aware allocation. Our experiments reveal that while prioritization-based scheduling minimizes execution times by aligning with task criticality, the energy-aware approach presents a sustainable alternative. This method dynamically adapts task execution based on renewable energy availability, promoting environmentally conscious energy management without compromising operational efficiency. By harnessing renewable energy signals, our findings highlight the potential of edge autonomics to achieve a balance between performance, resource optimization, and sustainability. This work demonstrates how intelligent edge-cloud integration can foster resilient smart building infrastructures that meet the challenges of modern computing paradigms.
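A toy contrast between priority-driven and energy-aware allocation might look like the sketch below; the nodes, tasks, priorities, renewable-energy signal, and per-task power draw are all invented for illustration and are not the paper's broker implementation.

```python
# Toy contrast between priority-driven and energy-aware task allocation (invented
# nodes, tasks, and renewable-energy signal; not the paper's broker implementation).
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    renewable_kw: float     # currently available renewable power
    queue_len: int          # pending tasks

nodes = [Node("edge-1", 0.2, 1), Node("edge-2", 1.5, 3), Node("edge-3", 0.8, 0)]
tasks = [("video-analytics", 3), ("hvac-control", 1), ("occupancy-count", 2)]  # (name, priority)

def priority_driven(tasks, nodes):
    # Highest-priority tasks first, each assigned to the node with the shortest queue.
    plan = []
    for name, _ in sorted(tasks, key=lambda t: -t[1]):
        node = min(nodes, key=lambda n: n.queue_len)
        node.queue_len += 1
        plan.append((name, node.name))
    return plan

def energy_aware(tasks, nodes):
    # Prefer nodes with renewable headroom, falling back to queue length.
    plan = []
    for name, _ in sorted(tasks, key=lambda t: -t[1]):
        node = max(nodes, key=lambda n: (n.renewable_kw, -n.queue_len))
        node.renewable_kw -= 0.3        # assume each task draws ~0.3 kW (placeholder)
        node.queue_len += 1
        plan.append((name, node.name))
    return plan

print("priority-driven:", priority_driven(tasks, [Node(n.name, n.renewable_kw, n.queue_len) for n in nodes]))
print("energy-aware:  ", energy_aware(tasks, [Node(n.name, n.renewable_kw, n.queue_len) for n in nodes]))
```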
S. Aslan, NR. Mangine, D.W. Laurence, P.M. Sabin, W. Wu, C. Herz, J. S. Unger, S. A. Maas, M. J. Gillespie, J. A. Weiss, M. A. Jolley.
Simulation of Transcatheter Therapies for Atrioventricular Valve Regurgitation in an Open-Source Finite Element Simulation Framework, Subtitled arXiv:2509.22865v1, 2025.
Purpose: Transcatheter edge-to-edge repair (TEER) and annuloplasty devices are increasingly used to treat mitral valve regurgitation, yet their mechanical effects and interactions remain poorly understood. This study aimed to establish an open-source finite element modeling (FEM) framework for simulating patient-specific mitral valve repairs and to evaluate how TEER, annuloplasty, and combined strategies influence leaflet coaptation and valve mechanics. Methods: Four G4 MitraClip geometries were modeled and deployed in FEBio to capture leaflet grasp and subsequent clip-leaflet motion under physiologic pressurization. CardioBand annuloplasty was simulated by reducing annular circumference via displacement-controlled boundary conditions, and Mitralign suture annuloplasty was modeled using discrete nodal constraints. Simulations were performed for prolapse and dilated annulus cases. Valve competence (regurgitant orifice area, ROA), coaptation/contact area (CA), and leaflet stress and strain distributions were quantified. Results: In prolapse, TEER restored coaptation but increased leaflet stresses, whereas band and suture annuloplasty produced distinct valve morphologies with lower stress distributions. In dilation, TEER alone left residual regurgitation, while annuloplasty improved closure. Combined TEER & band annuloplasty minimized ROA, maximized CA, and reduced stresses relative to TEER alone, though stresses remained higher than annuloplasty alone. Conclusion: This study establishes a reproducible, open-source FEM framework for simulating transcatheter TEER and annuloplasty repairs, with the potential to be extended beyond the mitral valve. By quantifying the mechanical trade-offs of TEER, suture annuloplasty, band annuloplasty, and their combinations, this methodology highlights the potential of virtual repair to guide patient selection and optimize surgical planning.
T.M. Athawale, Z. Wang, D. Pugmire, K. Moreland, Q. Gong, S. Klasky, C.R. Johnson, P. Rosen.
Uncertainty Visualization of Critical Points of 2D Scalar Fields for Parametric and Nonparametric Probabilistic Models, In IEEE Transactions on Visualization and Computer Graphics, Vol. 31, No. 1, IEEE, pp. 108--118. 2025.
DOI: 10.1109/TVCG.2024.3456393
This paper presents a novel end-to-end framework for closed-form computation and visualization of critical point uncertainty in 2D uncertain scalar fields. Critical points are fundamental topological descriptors used in the visualization and analysis of scalar fields. The uncertainty inherent in data (e.g., observational and experimental data, approximations in simulations, and compression), however, creates uncertainty regarding critical point positions. Uncertainty in critical point positions, therefore, cannot be ignored, given their impact on downstream data analysis tasks. In this work, we study uncertainty in critical points as a function of uncertainty in data modeled with probability distributions. Although Monte Carlo (MC) sampling techniques have been used in prior studies to quantify critical point uncertainty, they are often expensive and are infrequently used in production-quality visualization software. We, therefore, propose a new end-to-end framework to address these challenges that comprises a threefold contribution. First, we derive the critical point uncertainty in closed form, which is more accurate and efficient than the conventional MC sampling methods. Specifically, we provide the closed-form and semianalytical (a mix of closed-form and MC methods) solutions for parametric (e.g., uniform, Epanechnikov) and nonparametric models (e.g., histograms) with finite support. Second, we accelerate critical point probability computations using a parallel implementation with the VTK-m library, which is platform portable. Finally, we integrate our implementation with the ParaView software system, demonstrating near-real-time results for real datasets.
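As a minimal example of the parametric case (not the paper's VTK-m implementation), the sketch below computes the probability that a single grid vertex is a local maximum under independent uniform per-vertex distributions, comparing a one-dimensional quadrature of the closed-form integrand with a Monte Carlo estimate; all ranges are hypothetical.

```python
# Minimal sketch: probability that a grid vertex is a local maximum when each vertex
# value is an independent uniform random variable. Compares a 1D quadrature of the
# closed-form integrand P = integral of f_0(x) * prod_j F_j(x) dx against Monte Carlo.
import numpy as np
from scipy.stats import uniform

rng = np.random.default_rng(0)
center_lo, center_w = 0.5, 0.4                                # vertex of interest (hypothetical)
nb_bounds = [(0.2, 0.5), (0.3, 0.4), (0.1, 0.6), (0.4, 0.3)]  # 4-neighborhood (lo, width)

center = uniform(loc=center_lo, scale=center_w)
neighbors = [uniform(loc=a, scale=w) for a, w in nb_bounds]

# Semi-analytical route: 1D quadrature of the closed-form integrand.
x = np.linspace(0.0, 1.2, 5001)
integrand = center.pdf(x) * np.prod([nb.cdf(x) for nb in neighbors], axis=0)
p_quad = float(np.sum(integrand) * (x[1] - x[0]))             # simple Riemann sum

# Monte Carlo baseline.
n = 200_000
c = rng.uniform(center_lo, center_lo + center_w, size=n)
nb = np.stack([rng.uniform(a, a + w, size=n) for a, w in nb_bounds])
p_mc = np.mean(c > nb.max(axis=0))

print(f"quadrature: {p_quad:.4f}  Monte Carlo: {p_mc:.4f}")
```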
W. Bangerth, C. R. Johnson, D. K. Njeru, B. van Bloemen Waanders.
Estimating and using information in inverse problems, In Inverse Problems and Imaging, 2025.
ISSN: 1930-8337
DOI: 10.3934/ipi.2026003
In inverse problems, one attempts to infer spatially variable functions from indirect measurements of a system. To practitioners of inverse problems, the concept of "information" is familiar when discussing key questions such as which parts of the function can be inferred accurately and which cannot. For example, it is generally understood that we can identify system parameters accurately only close to detectors, or along ray paths between sources and detectors, because we have "the most information" for these places.
Although referenced in many publications, the "information" that is invoked in such contexts is not a well understood and clearly defined quantity. Herein, we present a definition of information density that is based on the variance of coefficients as derived from a Bayesian reformulation of the inverse problem. We then discuss three areas in which this information density can be useful in practical algorithms for the solution of inverse problems, and illustrate the usefulness in one of these areas – how to choose the discretization mesh for the function to be reconstructed – using numerical experiments.
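A small linear-Gaussian toy problem (not the article's formulation) illustrates the variance-based notion: coefficients that the measurements constrain well end up with low posterior variance, and hence high "information density."

```python
# Small linear-Gaussian illustration of the variance-based notion of information
# (a toy problem, not the article's formulation): coefficients that many
# measurements constrain get low posterior variance, i.e. high "information".
import numpy as np

rng = np.random.default_rng(0)
n_params, n_obs, noise_sd, prior_sd = 20, 60, 0.1, 1.0

# Forward operator that observes mostly the first half of the coefficients,
# mimicking detectors that are "close" to only part of the domain.
G = np.zeros((n_obs, n_params))
G[:, :10] = rng.standard_normal((n_obs, 10))
G[:, 10:] = 0.05 * rng.standard_normal((n_obs, 10))

# Posterior covariance for a Gaussian prior N(0, prior_sd^2 I) and Gaussian noise.
post_cov = np.linalg.inv(G.T @ G / noise_sd**2 + np.eye(n_params) / prior_sd**2)
info_density = 1.0 / np.diag(post_cov)       # crude per-coefficient information proxy

print("well-observed coefficients:", info_density[:10].round(1))
print("poorly observed coefficients:", info_density[10:].round(1))
```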
Z. Bastiani, R.M. Kirby, J. Hochhalter, S. Zhe.
Diffusion-Based Symbolic Regression, Subtitled arXiv:2505.24776, 2025.
Diffusion has emerged as a powerful framework for generative modeling, achieving remarkable success in applications such as image and audio synthesis. Motivated by this progress, we propose a novel diffusion-based approach for symbolic regression. We construct a random mask-based diffusion and denoising process to generate diverse and high-quality equations. We integrate this generative process with a token-wise Group Relative Policy Optimization (GRPO) method to conduct efficient reinforcement learning on the given measurement dataset. In addition, we introduce a long short-term risk-seeking policy to expand the pool of top-performing candidates, further enhancing performance. Extensive experiments and ablation studies have demonstrated the effectiveness of our approach.
M. Belianovich, G.E. Fasshauer, A. Narayan, V. Shankar.
A Unified Framework for Efficient Kernel and Polynomial Interpolation, Subtitled arXiv:2507.12629v2, 2025.
We present a unified interpolation scheme that combines compactly-supported positive-definite kernels and multivariate polynomials. This unified framework generalizes interpolation with compactly-supported kernels and also classical polynomial least squares approximation. To facilitate the efficient use of this unified interpolation scheme, we present specialized numerical linear algebra procedures that leverage standard matrix factorizations. These procedures allow for efficient computation and storage of the unified interpolant. We also present a modification to the numerical linear algebra that allows us to generalize the application of the unified framework to target functions on manifolds with and without boundary. Our numerical experiments on both Euclidean domains and manifolds indicate that the unified interpolant is superior to polynomial least squares for the interpolation of target functions in settings with boundaries.
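A 1D sketch of the basic kernel-plus-polynomial interpolant (the standard augmented saddle-point system, not the paper's specialized factorizations) is shown below, combining a compactly supported Wendland C2 kernel with a linear polynomial tail; the target function, node count, and support radius are arbitrary choices.

```python
# Sketch of kernel-plus-polynomial interpolation in 1D (standard augmented system):
# a compactly-supported Wendland C2 kernel combined with a linear polynomial tail.
import numpy as np

def wendland_c2(r, support):
    q = np.clip(r / support, 0.0, 1.0)
    return (1 - q) ** 4 * (4 * q + 1)

f = lambda x: np.sin(2 * np.pi * x) + 0.5 * x           # hypothetical target
xc = np.linspace(0, 1, 25)                               # interpolation nodes
support = 0.3

K = wendland_c2(np.abs(xc[:, None] - xc[None, :]), support)
P = np.column_stack([np.ones_like(xc), xc])              # constant + linear terms
A = np.block([[K, P], [P.T, np.zeros((2, 2))]])          # augmented saddle-point system
rhs = np.concatenate([f(xc), np.zeros(2)])
coef = np.linalg.solve(A, rhs)
a, b = coef[:xc.size], coef[xc.size:]

xe = np.linspace(0, 1, 400)
Ke = wendland_c2(np.abs(xe[:, None] - xc[None, :]), support)
s = Ke @ a + np.column_stack([np.ones_like(xe), xe]) @ b
print("max interpolation error:", np.max(np.abs(s - f(xe))))
```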
