SCI Publications

2026


T. M. Athawale, K. Moreland, D. Pugmire, C. R. Johnson, P. Rosen, M. Norman, A. Georgiadou, A. Entezari. “MAGIC: Marching Cubes Isosurface Uncertainty Visualization for Gaussian Uncertain Data with Spatial Correlation,” In IEEE Transactions on Visualization and Computer Graphics (TVCG), IEEE, 2026.

ABSTRACT

In this paper, we study the propagation of data uncertainty through the marching cubes algorithm for isosurface visualization of correlated uncertain data. Consideration of correlation has been shown to be paramount for avoiding errors in uncertainty quantification and visualization in multiple prior studies. Although the problem of isosurface uncertainty with spatial data correlation has been previously addressed, there are two major limitations to prior treatments. First, there are no analytical formulations for uncertainty quantification of isosurfaces when the data uncertainty is characterized by a Gaussian distribution with spatial correlation. Second, as a consequence of the lack of analytical formulations, existing techniques resort to a Monte Carlo sampling approach, which is expensive and difficult to integrate into visualization tools. To address these limitations, we present a closed-form framework to efficiently derive uncertainty in marching cubes level-sets for Gaussian uncertain data with spatial correlation (MAGIC). To derive closed-form solutions, we leverage Hinkley’s derivation of the distribution of the ratio of Gaussian random variables. With our analytical framework, we achieve a significant speed-up and enhanced accuracy of uncertainty quantification over classical Monte Carlo methods. We further accelerate our analytical solutions using many-core processors to achieve speed-ups up to 585× and integrability with production visualization tools for broader impact. We demonstrate the effectiveness of our correlation-aware uncertainty framework through experiments on meteorology, urban flow, and astrophysics simulation datasets.
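
As a minimal illustration of why correlation matters here, the probability that an isosurface at level c crosses a single cell edge with jointly Gaussian endpoint values can be computed in closed form from the bivariate normal CDF and checked against the Monte Carlo estimate that such frameworks replace. This is a sketch of the underlying probability computation, not the MAGIC implementation; the crossing location itself is a ratio of correlated Gaussians, which is where Hinkley's closed form enters.

import numpy as np
from scipy.stats import norm, multivariate_normal

def edge_crossing_probability(mu1, mu2, s1, s2, rho, c):
    # P((X1 - c) and (X2 - c) have opposite signs) for jointly
    # Gaussian edge endpoints X1, X2 with correlation rho
    cov = [[s1 * s1, rho * s1 * s2], [rho * s1 * s2, s2 * s2]]
    p_both_below = multivariate_normal(mean=[mu1, mu2], cov=cov).cdf([c, c])
    p1 = norm.cdf(c, loc=mu1, scale=s1)   # P(X1 < c)
    p2 = norm.cdf(c, loc=mu2, scale=s2)   # P(X2 < c)
    # P(X1 < c, X2 > c) + P(X1 > c, X2 < c)
    return (p1 - p_both_below) + (p2 - p_both_below)

mu1, mu2, s1, s2, rho, c = 0.2, 0.9, 0.3, 0.3, 0.8, 0.5
exact = edge_crossing_probability(mu1, mu2, s1, s2, rho, c)

# Monte Carlo reference (the expensive route the closed form avoids)
rng = np.random.default_rng(0)
cov = np.array([[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]])
samples = rng.multivariate_normal([mu1, mu2], cov, size=200_000)
mc = np.mean((samples[:, 0] - c) * (samples[:, 1] - c) < 0)
print(f"analytical: {exact:.4f}, Monte Carlo: {mc:.4f}")

Setting rho = 0 in this sketch changes the crossing probability noticeably, which is exactly the error a correlation-unaware treatment incurs.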



T. Bidone. “Rethinking Contractility in Active Cytoskeletal Matter,” In Biophysical Journal, 2026.



R.T. Black, S.A. Maas, W. Wu, J. Maheshwari, T. Kolev, J.A. Weiss, M.A. Jolley. “An open-source computational framework for immersed fluid-structure interaction modeling using FEBio and MFEM,” Subtitled “arXiv:2601.08266v1,” 2026.

ABSTRACT

Fluid-structure interaction (FSI) simulation of biological systems presents significant computational challenges, particularly for applications involving large structural deformations and contact mechanics, such as heart valve dynamics. Traditional arbitrary Lagrangian-Eulerian (ALE) methods encounter fundamental difficulties with such problems due to mesh distortion, motivating immersed techniques. This work presents a novel open-source immersed FSI framework that strategically couples two mature finite element libraries: MFEM, a GPU-ready and scalable library with state-of-the-art parallel performance developed at Lawrence Livermore National Laboratory, and FEBio, a nonlinear finite element solver with sophisticated solid mechanics capabilities designed for biomechanics applications developed at the University of Utah. This coupling creates a unique synergy wherein the fluid solver leverages MFEM's distributed-memory parallelization and pathway to GPU acceleration, while the immersed solid exploits FEBio's comprehensive suite of hyperelastic and viscoelastic constitutive models and advanced solid mechanics modeling targeted for biomechanics applications. FSI coupling is achieved using a fictitious domain methodology with variational multiscale stabilization for enhanced accuracy on under-resolved grids expected with unfitted meshes used in immersed FSI. A fully implicit, monolithic scheme provides robust coupling for strongly coupled FSI characteristic of cardiovascular applications. The framework's modular architecture facilitates straightforward extension to additional physics and element technologies. Several test problems are considered to demonstrate the capabilities of the proposed framework, including a 3D semilunar heart valve simulation. This platform addresses a critical need for open-source immersed FSI software combining advanced biomechanics modeling with high-performance computing infrastructure.



A. Busatto, L.C.R. Tanner, J.A. Bergquist, G. Plank, K. Gillette, A. Narayan, R.S. MacLeod. “Uncertainty quantification of conduction velocity in models of cardiac spread of activation,” In Med Biol Eng Comput, Springer Nature, 2026.

ABSTRACT

This study quantified the effect of conduction velocity (CV) variability on cardiac electrical activation patterns, a key factor for cardiac digital twins. We examined how myocardial and endocardial longitudinal, transverse, and sheet CVs influence ventricular activation across multiple pacing sites. Three porcine biventricular heart models, each including a fast-conducting endocardial layer, were used to simulate electrical activation with an eikonal approach. Uncertainty quantification with polynomial chaos expansion systematically varied six CV parameters within physiological ranges. In total, 1,868 simulations from eight ventricular pacing sites were analyzed for activation time, variability, and global sensitivities. Myocardial longitudinal CV showed the greatest influence on activation timing (global sensitivity up to 0.98). Endocardial-layer longitudinal CV was similarly important for endocardial stimuli, while transverse and sheet CVs had minimal effects. Activation-time variability reached 15 ms, increasing with distance from the pacing origin. Longitudinal CVs, particularly myocardial and endocardial-layer, dominate ventricular activation dynamics and should be prioritized when personalizing cardiac digital twins. Accounting for CV uncertainty is essential for accurate prediction and therapy optimization.
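
The sensitivity machinery described above can be sketched generically: fit a polynomial chaos expansion by least squares on an orthonormal Legendre basis, then read first-order Sobol indices off the squared coefficients. The two-parameter toy model below is an illustrative stand-in, not the study's six-parameter eikonal pipeline.

import numpy as np
from itertools import product
from numpy.polynomial.legendre import legval

def legendre_orthonormal(n, x):
    # Legendre P_n scaled so that E[P_n(U)^2] = 1 for U ~ Uniform(-1, 1)
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    return np.sqrt(2 * n + 1) * legval(x, coeffs)

def model(u):
    # toy stand-in for an activation-time quantity of interest
    return 3.0 * u[:, 0] + 0.5 * u[:, 1] ** 2 + 0.2 * u[:, 0] * u[:, 1]

rng = np.random.default_rng(1)
U = rng.uniform(-1, 1, size=(500, 2))       # two normalized CV parameters
y = model(U)

degree = 2
alphas = [a for a in product(range(degree + 1), repeat=2) if sum(a) <= degree]
Psi = np.column_stack([
    legendre_orthonormal(a[0], U[:, 0]) * legendre_orthonormal(a[1], U[:, 1])
    for a in alphas])
coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# variance decomposition: squared coefficients of the orthonormal basis
var_total = sum(c**2 for a, c in zip(alphas, coef) if sum(a) > 0)
for i in range(2):
    s_i = sum(c**2 for a, c in zip(alphas, coef)
              if a[i] > 0 and sum(a) == a[i]) / var_total
    print(f"first-order Sobol index S_{i + 1} = {s_i:.3f}")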



A. Busatto, J.A. Bergquist, T. Tasdizen, B.A. Steinberg, R. Ranjan, R.S. MacLeod. “Predicting Ventricular Arrhythmia in Myocardial Ischemia Using Deep Learning,” In Heart Rhythm O2, Elsevier, 2026.

ABSTRACT

Background: Myocardial ischemia can trigger ventricular arrhythmias with life-threatening consequences. Current monitoring is largely reactive, limiting opportunities for preventive intervention. Objective: To determine whether high-resolution epicardial electrograms contain predictive signatures that enable forecasting the timing of premature ventricular contractions (PVCs) during acute ischemia, and to quantify subject-specific data requirements for effective personalization. Methods: We analyzed epicardial sock electrograms (247 electrodes, 1 kHz) from 21 porcine acute ischemia experiments comprising 2,252 spontaneous PVCs. Signals were segmented into overlapping sequences of 3, 5, or 7 consecutive non-PVC beats with a continuous target of time-to-next PVC. A 6-layer Long Short-Term Memory (LSTM) network (hidden size 128) with temporal attention was trained using mean absolute error (MAE). Performance was evaluated in (A) pooled 80/10/10 cross-validation and (B) leave-one-experiment-out testing with subject-specific fine-tuning using 10% or 15% of held-out data. Results: In Paradigm A, MAE decreased with longer context (6.50 s for 3 beats, 5.97 s for 5 beats, 4.73 s for 7 beats) with excellent calibration (R² > 0.996). In Paradigm B, increasing fine-tuning from 10% to 15% reduced mean MAE by 9.6–14.6 s and flattened error growth with prediction horizon, improving the fraction of predictions within 30–60 s windows. Conclusion: Epicardial electrograms support accurate PVC time-to-event forecasting during acute ischemia, and modest subject-specific adaptation substantially improves generalization, motivating development of real-time predictive monitoring tools.
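
The model family described, a 6-layer LSTM with hidden size 128, temporal attention, and an MAE objective, can be sketched in PyTorch as follows; the feature dimension and data are placeholders, not the study's 247-electrode preprocessing.

import torch
import torch.nn as nn

class PVCForecaster(nn.Module):
    # LSTM with temporal attention for time-to-next-PVC regression
    def __init__(self, n_features, hidden=128, layers=6):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=layers,
                            batch_first=True)
        self.attn = nn.Linear(hidden, 1)    # one score per time step
        self.head = nn.Linear(hidden, 1)    # time-to-event in seconds

    def forward(self, x):                   # x: (batch, time, features)
        h, _ = self.lstm(x)                 # (batch, time, hidden)
        w = torch.softmax(self.attn(h), dim=1)   # attention over time
        context = (w * h).sum(dim=1)        # (batch, hidden)
        return self.head(context).squeeze(-1)

model = PVCForecaster(n_features=32)        # placeholder beat-feature size
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 5, 32)                   # e.g. 5-beat input sequences
target = torch.rand(8) * 60.0               # seconds until the next PVC
opt.zero_grad()
loss = nn.L1Loss()(model(x), target)        # MAE objective
loss.backward()
opt.step()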



L. Chenarides, R. Ladislau, M. Parashar, S. Porter, J. Lane. “Data-usage descriptors as search metadata: the case of food security data and the National Data Platform (2015-2025),” Subtitled “Research Square Preprint,” 2026.
DOI: https://doi.org/10.21203/rs.3.rs-8569040/v1

ABSTRACT

Scientific data is a critical input into scientific research. Yet the research data landscape is constantly changing as new datasets emerge, others are retired, or some disappear altogether. Data-usage descriptors can substantially advance research productivity by reducing the time that researchers spend finding new and relevant datasets in their research field. This paper describes how to generate data usage descriptors by finding how datasets are used in publications and then linking the dataset information to the publication metadata. It also shows how usage descriptors can be used to find other related datasets and their usage. It concludes by arguing that the approach represents a critical piece of foundational infrastructure that could be deployed in repositories as part of a referenceable, navigable, and contextual data framework. This article contains a reproducible workflow for constructing data-usage descriptors, based on analyzing the full text of publications in the Dimensions database. The illustrative use case is research on food security. The illustrative repository is the National Data Platform.
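
The linking step at the core of the workflow can be illustrated with a small sketch that aggregates publication metadata into a per-dataset usage descriptor; the mention-record schema here is hypothetical, not the Dimensions database format.

from collections import Counter

# hypothetical mention records linking a dataset to publication metadata
mentions = [
    {"dataset": "Food Security Survey", "year": 2021,
     "field": "Agricultural Economics"},
    {"dataset": "Food Security Survey", "year": 2023,
     "field": "Public Health"},
    {"dataset": "Food Security Survey", "year": 2023,
     "field": "Agricultural Economics"},
]

def usage_descriptor(dataset, records):
    # aggregate how often, when, and in which fields a dataset was used
    used_in = [r for r in records if r["dataset"] == dataset]
    return {
        "dataset": dataset,
        "n_publications": len(used_in),
        "years": sorted({r["year"] for r in used_in}),
        "top_fields": Counter(r["field"] for r in used_in).most_common(3),
    }

print(usage_descriptor("Food Security Survey", mentions))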



L. Cicci, S. Qian, C. Rodero, M. Strocchi, C. Corrado, F. Campos, S. Malik, A. Lee, A. Qayyum, K. Gillette, J. Isbister, R. Sy, M. Lee, M. Noseda, R. Wilkinson, G. Plank, M. Bishop, S. Niederer. “Personalising cardiac electrophysiology models from CT and ECG for 3D activation imaging and tissue characterisation,” Subtitled “Research Square Preprint,” 2026.

ABSTRACT

Background: Electrocardiographic imaging maps cardiac electrical activity non-invasively but is restricted to the epicardium. Computational electrophysiology models can predict 3D activation and tissue properties but require extensive parameter calibration.
Methods: We introduce an unbiased workflow combining sensitivity analysis with emulator-based Bayesian history matching to calibrate over 100 organ- and tissue-scale parameters. The framework incorporates CT-scan images and 12-lead ECGs with a multi-scale electrophysiology model to generate personalised ventricular simulations.
Results: The framework was tested on seven subjects (four with synthetic and three with clinical ECGs), with validation performed using high-density body surface potentials from a 252-electrode vest for the clinical cases. Calibrated models reproduced individual ECG morphologies and showed strong agreement with independent measurements (Pearson’s correlation coefficient: 0.80 ± 0.04).
Conclusions: The study links non-invasive data with high-fidelity simulations to estimate spatially-varying properties, supporting personalised cardiac modelling for clinical use.



H. Csala, A. Arzani. “Decomposed sparse modal optimization: Interpretable reduced-order modeling of unsteady flows,” In International Journal of Heat and Fluid Flow, Vol. 117, Elsevier, pp. 110124. 2026.
ISSN: 0142-727X
DOI: https://doi.org/10.1016/j.ijheatfluidflow.2025.110124

ABSTRACT

Modal analysis plays a crucial role in fluid dynamics, offering a powerful tool for reducing the complexity of high-dimensional fluid flow data while extracting meaningful insights into flow physics. This is particularly important in the study of cardiovascular flows, where modal techniques help characterize unsteady flow structures, improve reduced-order modeling, and inform disease diagnosis and rapid medical device design. The most commonly used method, proper orthogonal decomposition (POD), is highly interpretable but suffers from its linearity, which limits its ability to capture nonlinear interactions. In this work, we introduce decomposed sparse modal optimization (DESMO), a nonlinear, adaptive extension of POD that improves the accuracy of flow field reconstruction while requiring fewer modes. We use modern gradient descent-based optimization tools to optimize the spatial modes and temporal coefficients concurrently while using a sparsity-promoting loss term. We demonstrate the method on a canonical fluid flow benchmark (flow over a cylinder), a real-world example (blood flow inside a brain aneurysm), and a turbulent channel flow. DESMO can identify spatial modes that resemble higher-order POD modes while uncovering entirely new spatial structures in some cases. Different versions of DESMO can leverage Fourier series for modeling temporal coefficients, an autoencoder for spatial mode optimization, and symbolic regression for discovering differential equations for temporal evolution. Our results demonstrate that DESMO not only provides a more accurate representation of fluid flows but also preserves the interpretability of classical POD by having an analytical modal decomposition equation, offering a promising approach for reduced-order modeling across engineering applications.
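
The core of the approach, joint gradient-based optimization of spatial modes and temporal coefficients under a sparsity-promoting penalty, can be sketched in a few lines of PyTorch; this minimal version uses synthetic data and omits the Fourier, autoencoder, and symbolic-regression variants described above.

import torch

torch.manual_seed(0)
X = torch.randn(200, 1024)        # snapshot matrix: (n_time, n_space)

r, lam = 8, 1e-3                  # number of modes, sparsity weight
Phi = torch.randn(1024, r, requires_grad=True)   # spatial modes
A = torch.randn(200, r, requires_grad=True)      # temporal coefficients
opt = torch.optim.Adam([Phi, A], lr=1e-2)

for step in range(2000):
    opt.zero_grad()
    recon = A @ Phi.T             # analytical modal reconstruction
    loss = ((X - recon) ** 2).mean() + lam * Phi.abs().mean()
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.4f}")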



Z. Cutler, J. Wilburn, H. Shrestha, Y. Ding, B. Bollen, K. Abrar Nadib, T. He, A. McNutt, L. Harrison, A. Lex. “ReVISit 2: A Full Experiment Life Cycle User Study Framework,” In IEEE Transactions on Visualization and Computer Graphics, Vol. 32, IEEE, 2026.

ABSTRACT

Online user studies of visualizations, visual encodings, and interaction techniques are ubiquitous in visualization research. Yet, designing, conducting, and analyzing studies effectively is still a major burden. Although various packages support such user studies, most solutions address only facets of the experiment life cycle, make reproducibility difficult, or do not cater to nuanced study designs or interactions. We introduce reVISit 2, a software framework that supports visualization researchers at all stages of designing and conducting browser-based user studies. ReVISit supports researchers in the design, debug & pilot, data collection, analysis, and dissemination experiment phases by providing both technical affordances (such as replay of participant interactions) and sociotechnical aids (such as a mindfully maintained community of support). It is a proven system that has been used in publication-quality studies, which we demonstrate through a series of experimental replications. We reflect on the design of the system via interviews and an analysis of its technical dimensions. Through this work, we seek to elevate the ease with which studies are conducted, improve the reproducibility of studies within our community, and support the construction of advanced interactive studies.



D. Dade, J.A. Bergquist, R.S. MacLeod, B.A. Steinberg, T. Tasdizen. “Self-Supervised Contrastive Learning Enables Robust ECG-Based Cardiac Classification,” In Heart Rhythm O2, Elsevier, 2026.

ABSTRACT

Background

Self-supervised contrastive learning has emerged as a powerful paradigm for learning generalizable representations from unlabeled data. In the context of electrocardiogram (ECG) analysis, such pretraining can significantly enhance classification performance, especially when labeled data is scarce.
 

Objective

We aimed to investigate and improve contrastive self-supervised learning techniques for ECGs by systematically combining recent methodological advances in augmentation design, contrastive loss formulation, and encoder architectures.
 

Methods

We implemented a contrastive pretraining framework combining vectorcardiography (VCG)-based, physiologically inspired augmentations, interlead and intersegment contrastive losses, and patient-aware positive sampling. In addition, we developed a dual-stream architecture, extending the TemporalNet model by processing grouped ECG leads independently. Pre-training was conducted on a large corpus of approximately 1 million unlabeled ECGs. We evaluated performance on two downstream classification tasks, low left ventricular ejection fraction (LVEF) and high serum potassium (KCL), using various levels of labeled supervision (1%, 5%, 10%, 50%, and 100%). The pre-trained models were compared with randomly initialized models under both frozen and fine-tuned conditions.
 

Results

Contrastive pretraining consistently improved performance across all supervision levels. In low-label settings (1%-10% supervision), the pretrained model achieved a 3–4% higher area under the receiver operating characteristic curve (AUROC) on the LVEF task and a 5–7% higher AUROC on the KCL task compared to the baseline. The performance gap narrowed with increased supervision but remained favorable toward pretrained models.
 

Conclusions

Our findings demonstrate that contrastive pretraining can substantially enhance ECG classification, especially when labeled data is limited. By unifying and extending ideas from recent literature into a scalable framework trained on 1 million ECGs, we provide practical guidance and architectural innovations for building strong ECG foundation models applicable to a broad range of clinical prediction tasks.
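
The contrastive objective behind this kind of pretraining can be illustrated with a standard NT-Xent (SimCLR-style) loss; the interlead/intersegment terms and patient-aware positive sampling described above are not reproduced in this sketch.

import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.1):
    # NT-Xent over paired embeddings: (z1[i], z2[i]) are two augmented
    # views of the same ECG and form the positive pair
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)               # (2N, d)
    sim = z @ z.T / temperature                  # cosine-similarity logits
    n = z1.shape[0]
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

z1 = torch.randn(16, 128)    # encoder output for view 1 of each ECG
z2 = torch.randn(16, 128)    # encoder output for view 2
print(nt_xent(z1, z2))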



M. Elhadidy, R.M. D'Souza, A. Arzani. “SLE-FNO: Single-Layer Extensions for Task-Agnostic Continual Learning in Fourier Neural Operators,” Subtitled “arXiv:2603.20410,” 2026.

ABSTRACT

Scientific machine learning is increasingly used to build surrogate models, yet most models are trained under a restrictive assumption in which future data follow the same distribution as the training set. In practice, new experimental conditions or simulation regimes may differ significantly, requiring extrapolation and model updates without re-access to prior data. This creates a need for continual learning (CL) frameworks that can adapt to distribution shifts while preventing catastrophic forgetting. Such challenges are pronounced in fluid dynamics, where changes in geometry, boundary conditions, or flow regimes induce non-trivial changes to the solution. Here, we introduce a new architecture-based approach (SLE-FNO) combining a Single-Layer Extension (SLE) with the Fourier Neural Operator (FNO) to support efficient CL. SLE-FNO was compared with a range of established CL methods, including Elastic Weight Consolidation (EWC), Learning without Forgetting (LwF), replay-based approaches, Orthogonal Gradient Descent (OGD), Gradient Episodic Memory (GEM), PiggyBack, and Low-Rank Approximation (LoRA), within an image-to-image regression setting. The models were trained to map transient concentration fields to time-averaged wall shear stress (TAWSS) in pulsatile aneurysmal blood flow. Tasks were derived from 230 computational fluid dynamics simulations grouped into four sequential and out-of-distribution configurations. Results show that replay-based methods and architecture-based approaches (PiggyBack, LoRA, and SLE-FNO) achieve the best retention, with SLE-FNO providing the strongest overall balance between plasticity and stability, achieving accuracy with zero forgetting and minimal additional parameters. Our findings highlight key differences between CL algorithms and introduce SLE-FNO as a promising strategy for adapting baseline models when extrapolation is required. 
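
The single-layer-extension idea can be sketched generically: freeze the trained base network so earlier tasks cannot be forgotten, then train only a small appended layer, initialized near identity, on the new task. The sketch below is a schematic of this architecture-based strategy with a plain MLP, not the FNO-specific SLE-FNO implementation.

import torch
import torch.nn as nn

# base surrogate trained on earlier tasks; freezing it guarantees
# zero forgetting by construction
base = nn.Sequential(nn.Linear(64, 128), nn.GELU(), nn.Linear(128, 64))
for p in base.parameters():
    p.requires_grad = False

# single-layer extension for the new task, initialized near identity
# so the original input-output map is preserved at step zero
extension = nn.Linear(64, 64)
nn.init.eye_(extension.weight)
nn.init.zeros_(extension.bias)

opt = torch.optim.Adam(extension.parameters(), lr=1e-3)
x, y_new = torch.randn(32, 64), torch.randn(32, 64)   # new-task batch
opt.zero_grad()
loss = nn.MSELoss()(extension(base(x)), y_new)
loss.backward()
opt.step()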



I.J. Eliza, X. Huang, A. Panta, A. Sahistan, Z. Li, A.A. Gooch, V. Pascucci. “Animating Petascale Time-varying Data on Commodity Hardware with LLM-assisted Scripting,” Subtitled “arXiv:2603.07053v1,” 2026.

ABSTRACT

Scientists face significant visualization challenges as time-varying datasets grow in speed and volume, often requiring specialized infrastructure and expertise to handle massive datasets. Petascale climate models generated in NASA laboratories require a dedicated group of graphics and media experts and access to high-performance computing resources. Scientists may need to share scientific results with the community iteratively and quickly. However, the time-consuming trial-and-error process incurs significant data transfer overhead and far exceeds the time and resources allocated for typical post-analysis visualization tasks, disrupting the production workflow. Our paper introduces a user-friendly framework for creating 3D animations of petascale, time-varying data on a commodity workstation. Our contributions: (i) Generalized Animation Descriptor (GAD) with a keyframe-based adaptable abstraction for animation, (ii) efficient data access from cloud-hosted repositories to reduce data management overhead, (iii) tailored rendering system, and (iv) an LLM-assisted conversational interface as a scripting module to allow domain scientists with no visualization expertise to create animations of their region of interest. We demonstrate the framework's effectiveness with two case studies: first, by generating animations in which sampling criteria are specified based on prior knowledge, and second, by generating AI-assisted animations in which sampling parameters are derived from natural-language user prompts. In all cases, we use large-scale NASA climate-oceanographic datasets that exceed 1PB in size yet achieve a fast turnaround time of 1 minute to 2 hours. Users can generate a rough draft of the animation within minutes, then seamlessly incorporate as much high-resolution data as needed for the final version. 



S.A. Faroughi, F. Mostajeran, A. Arzani, S. Faroughi. “Symbolic-KAN: Kolmogorov-Arnold Networks with Discrete Symbolic Structure for Interpretable Learning,” Subtitled “arXiv:2603.23854,” 2026.

ABSTRACT

Symbolic discovery of governing equations is a long-standing goal in scientific machine learning, yet a fundamental trade-off persists between interpretability and scalable learning. Classical symbolic regression methods yield explicit analytic expressions but rely on combinatorial search, whereas neural networks scale efficiently with data and dimensionality but produce opaque representations. In this work, we introduce Symbolic Kolmogorov-Arnold Networks (Symbolic-KANs), a neural architecture that bridges this gap by embedding discrete symbolic structure directly within a trainable deep network. Symbolic-KANs represent multivariate functions as compositions of learned univariate primitives applied to learned scalar projections, guided by a library of analytic primitives, hierarchical gating, and symbolic regularization that progressively sharpens continuous mixtures into one-hot selections. After gated training and discretization, each active unit selects a single primitive and projection direction, yielding compact closed-form expressions without post-hoc symbolic fitting. Symbolic-KANs further act as scalable primitive discovery mechanisms, identifying the most relevant analytic components that can subsequently inform candidate libraries for sparse equation-learning methods. We demonstrate that Symbolic-KAN reliably recovers correct primitive terms and governing structures in data-driven regression and inverse dynamical systems. Moreover, the framework extends to forward and inverse physics-informed learning of partial differential equations, producing accurate solutions directly from governing constraints while constructing compact symbolic representations whose selected primitives reflect the true analytical structure of the underlying equations. These results position Symbolic-KAN as a step toward scalable, interpretable, and mechanistically grounded learning of governing laws.
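
The gating mechanism described above can be sketched schematically: a unit applies a learned scalar projection, feeds it through a temperature-controlled softmax mixture over a library of analytic primitives, and is discretized to its winning primitive after training, yielding a closed-form term. The library and unit below are illustrative; the paper's hierarchical gating and symbolic regularization are omitted.

import torch
import torch.nn as nn

PRIMITIVES = [torch.sin, torch.exp, lambda z: z, lambda z: z ** 2]

class SymbolicUnit(nn.Module):
    # one unit: learned projection -> gated mixture of primitives
    def __init__(self, n_in):
        super().__init__()
        self.w = nn.Parameter(torch.randn(n_in))
        self.b = nn.Parameter(torch.zeros(1))
        self.gate = nn.Parameter(torch.zeros(len(PRIMITIVES)))

    def forward(self, x, temperature=1.0):
        z = x @ self.w + self.b                  # scalar projection
        # annealing temperature toward 0 sharpens probs toward one-hot
        probs = torch.softmax(self.gate / temperature, dim=0)
        return sum(p * f(z) for p, f in zip(probs, PRIMITIVES))

    def discretize(self):
        # keep only the winning primitive -> compact closed form
        k = int(self.gate.argmax())
        return k, self.w.detach(), self.b.detach()

unit = SymbolicUnit(n_in=3)
y = unit(torch.randn(16, 3), temperature=0.5)
print(y.shape, unit.discretize()[0])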



E. Ghelichkhan, T. Tasdizen. “Beyond Standard Sampling: Metric-Guided Iterative Inference for Radiologists-Aligned Medical Counterfactual Generation,” In Proceedings of Machine Learning Research, 2026.

ABSTRACT

Generative counterfactuals offer a promising avenue for explainable AI in medical imaging, yet ensuring these synthesized images are both anatomically faithful and clinically effective remains a significant challenge. This work presents a domain-specific diffusion framework for generating “healthy” counterfactuals from chest X-rays with cardiomegaly, underpinned by a systematic metric-guided inference strategy. In contrast to methods relying on static sampling parameters, our approach iteratively explores the inference hyperparameter space to maximize our composite selection criterion, CF Score, which integrates our novel Faithfulness-Effectiveness Trade-off (FET) metric.

We extend the evaluation of counterfactual utility beyond simple classification shifts by conducting simultaneous validation against radiologist annotations and eye-tracking data. Using the REFLACX dataset, we demonstrate that difference maps derived from our counterfactuals exhibit strong spatial alignment with expert visual attention and annotations. Quantified by Normalized Cross-Correlation, Hit Rate, pixel-wise ROC-AUC, and AUC-IoU, our results confirm that metric-guided counterfactuals provide dense and clinically relevant localizations of pathology that closely mirror human diagnostic reasoning.
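
In essence, the metric-guided strategy is a search over inference hyperparameters scored by a composite criterion. In the schematic sketch below, generate_counterfactual, faithfulness_fn, and effectiveness_fn are hypothetical placeholders for the diffusion sampler and the paper's metrics, and the weighted score shown is a generic form, not the published CF Score definition.

from itertools import product

def cf_score(faithfulness, effectiveness, alpha=0.5):
    # generic composite criterion balancing anatomical faithfulness
    # against classifier-measured effectiveness
    return alpha * faithfulness + (1 - alpha) * effectiveness

def search_inference_params(image, generate_counterfactual,
                            faithfulness_fn, effectiveness_fn):
    best = None
    for steps, guidance in product([25, 50, 100], [1.0, 2.5, 5.0]):
        cf = generate_counterfactual(image, steps=steps, guidance=guidance)
        score = cf_score(faithfulness_fn(image, cf), effectiveness_fn(cf))
        if best is None or score > best[0]:
            best = (score, steps, guidance, cf)
    return best

# toy usage with stand-in callables
score, steps, guidance, _ = search_inference_params(
    image=None,
    generate_counterfactual=lambda img, steps, guidance: (steps, guidance),
    faithfulness_fn=lambda img, cf: 1.0 / cf[0],   # toy: fewer steps
    effectiveness_fn=lambda cf: cf[1] / 5.0)       # toy: stronger guidance
print(score, steps, guidance)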



M. Rakibul Haque, V. Goudar, S. Elhabian, W.W. Pettine. “TimeSynth: A Framework for Uncovering Systematic Biases in Time Series Forecasting,” Subtitled “arXiv:2602.11413,” 2026.

ABSTRACT

Time series forecasting is a fundamental tool with wide-ranging applications, yet recent debates question whether complex nonlinear architectures truly outperform simple linear models. Prior claims of linear-model dominance often stem from benchmarks that lack diverse temporal dynamics and employ biased evaluation protocols. We revisit this debate through TimeSynth, a structured framework that emulates key properties of real-world time series, including non-stationarity, periodicity, trends, and phase modulation, by creating synthesized signals whose parameters are derived from real-world time series. Evaluating four model families (linear models, multilayer perceptrons (MLPs), convolutional neural networks (CNNs), and Transformers), we find a systematic bias in linear models: they collapse to simple oscillation regardless of signal complexity. Nonlinear models avoid this collapse and gain clear advantages as signal complexity increases. Notably, Transformer- and CNN-based models exhibit slightly greater adaptability to complex modulated signals compared to MLPs. Beyond clean forecasting, the framework highlights robustness differences under distribution and noise shifts and removes biases of prior benchmarks by using independent instances for train, test, and validation for each signal family. Collectively, TimeSynth provides a principled foundation for understanding when different forecasting approaches succeed or fail, moving beyond oversimplified claims of model equivalence.
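
The signal families described can be emulated with a small numpy sketch that combines a trend, a phase-modulated periodic component with non-stationary amplitude, and noise, drawing independent instances for each split; the parameter values are illustrative, not TimeSynth's.

import numpy as np

def synth_signal(n=2000, seed=0):
    # trend + amplitude- and phase-modulated periodicity + noise
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 20.0, n)
    trend = 0.05 * t
    amplitude = 1.0 + 0.5 * np.sin(0.3 * t)      # non-stationarity
    phase_mod = 0.8 * np.sin(0.7 * t)            # phase modulation
    periodic = amplitude * np.sin(2 * np.pi * 1.5 * t + phase_mod)
    return trend + periodic + 0.1 * rng.standard_normal(n)

# independent instances per split, mirroring the bias-free protocol
train = [synth_signal(seed=s) for s in range(100)]
val = [synth_signal(seed=s) for s in range(100, 120)]
test = [synth_signal(seed=s) for s in range(120, 140)]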



J. Hart, B. van Bloemen Waanders, J. Li, T. A. J. Ouermi, C. R. Johnson. “Hyper-differential sensitivity analysis with respect to model discrepancy: Prior distributions,” In International Journal for Uncertainty Quantification, Vol. 16, No. 1, Begell House, 2026.

ABSTRACT

Hyper-differential sensitivity analysis with respect to model discrepancy was recently developed to enable uncertainty quantification for optimization problems. The approach consists of two primary steps: (i) Bayesian calibration of the discrepancy between high- and low-fidelity models, and (ii) propagating the model discrepancy uncertainty through the optimization problem. When high-fidelity model evaluations are limited, as is common in practice, the prior discrepancy distribution plays a crucial role in the uncertainty analysis. However, specification of this prior is challenging due to its mathematical complexity and many hyper-parameters. This article presents a novel approach to specify the prior distribution. Our approach consists of two parts: (1) an algorithmic initialization of the prior hyper-parameters that uses existing data to initialize a hyper-parameter estimate, and (2) a visualization framework to systematically explore properties of the prior and guide tuning of the hyper-parameters to ensure that the prior captures the appropriate range of uncertainty. We provide detailed mathematical analysis and a collection of numerical examples that elucidate properties of the prior that are crucial to ensure uncertainty quantification.



M.H.H. Hisham, S. Elhabian, G. Adluru, J. Mendes, A. Arai, E. Kholmovski, R. Ranjan, E. DiBella. “Unrolled Reconstruction with Integrated Super-Resolution for Accelerated 3D LGE MRI,” Subtitled “arXiv:2603.18309v1,” 2026.

ABSTRACT

Accelerated 3D late gadolinium enhancement (LGE) MRI requires robust reconstruction methods to recover thin atrial structures from undersampled k-space data. While unrolled model-based networks effectively integrate physics-driven data consistency with learned priors, they operate at the acquired resolution and may fail to fully recover high-frequency detail. We propose a hybrid unrolled reconstruction framework in which an Enhanced Deep Super-Resolution (EDSR) network replaces the proximal operator within each iteration of the optimization loop, enabling joint super-resolution enhancement and data consistency enforcement. The model is trained end-to-end on retrospectively undersampled preclinical 3D LGE datasets and compared against compressed sensing, Model-Based Deep Learning (MoDL), and self-guided Deep Image Prior (DIP) baselines. Across acceleration factors, the proposed method consistently improves PSNR and SSIM over standard unrolled reconstruction and better preserves fine cardiac structures, leading to improved left atrium (LA) segmentation performance. These results demonstrate that integrating super-resolution priors directly within model-based reconstruction provides measurable gains in accelerated 3D LGE MRI.
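
One unrolled iteration of this style of reconstruction alternates a learned proximal/denoising step with k-space data consistency. The sketch below is a 2D single-coil schematic in numpy, with an identity function standing in for the trained EDSR network rather than the paper's 3D setting.

import numpy as np

def data_consistency(x, y_sampled, mask):
    # re-impose the acquired k-space samples on the current estimate
    k = np.fft.fft2(x)
    k[mask] = y_sampled[mask]
    return np.real(np.fft.ifft2(k))

def unrolled_reconstruction(y_sampled, mask, denoise, n_iters=10):
    x = np.real(np.fft.ifft2(y_sampled))    # zero-filled initial guess
    for _ in range(n_iters):
        x = denoise(x)                      # learned prior (EDSR in the paper)
        x = data_consistency(x, y_sampled, mask)
    return x

rng = np.random.default_rng(0)
img = rng.random((64, 64))
mask = rng.random((64, 64)) < 0.3           # 30% random k-space sampling
y = np.fft.fft2(img) * mask                 # undersampled measurements
recon = unrolled_reconstruction(y, mask, denoise=lambda x: x, n_iters=5)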



Y. Huang, S.H. Wang, A.L. Bertozzi, B. Wang. “RMFlow: Refined Mean Flow by a Noise-Injection Step for Multimodal Generation,” Subtitled “arXiv:2602.00849,” 2026.

ABSTRACT

Mean flow (MeanFlow) enables efficient, high-fidelity image generation, yet its single-function-evaluation (1-NFE) generation often cannot yield compelling results. We address this issue by introducing RMFlow, an efficient multimodal generative model that integrates a coarse 1-NFE MeanFlow transport with a subsequent tailored noise-injection refinement step. RMFlow approximates the average velocity of the flow path using a neural network trained with a new loss function that balances minimizing the Wasserstein distance between probability paths and maximizing sample likelihood. RMFlow achieves near state-of-the-art results on text-to-image, context-to-molecule, and time-series generation using only 1-NFE, at a computational cost comparable to the baseline MeanFlows.



R. Kolasangiani, O. Joshi, M.A. Schwartz, T.C. Bidone. “All-atom simulations reveal distinct pathways for αIIbβ3 activation by biochemical vs. mechanical cues,” In Cellular and Molecular Life Sciences, Springer Nature, 2026.

ABSTRACT

The conformational activation of αIIbβ3 integrin is crucial for platelet aggregation, a central event in hemostasis and thrombosis. Although activation can be triggered by extracellular arginine-glycine-aspartic acid (RGD)-containing ligands as well as mechanical forces, how these biochemical and mechanical cues exactly govern the structural dynamics of αIIbβ3 remains unclear. Here, using all-atom molecular dynamics simulations, we show that mechanical force and RGD binding promote activation of αIIbβ3 through distinct mechanisms. Mechanical force applied to the RGD-binding site induces long-range, correlated motions of distant parts of the receptor, facilitating head–leg separation. In contrast, RGD binding increases localized, non-correlated fluctuations that weaken leg coordination but do not generate long-range motions. Despite these differences, both cues stabilize the open, extended conformation of αIIbβ3. Together, these findings suggest that mechanical and biochemical stimuli play complementary yet distinct roles in integrin conformational activation. A balance between global coordination and local fluctuations likely governs integrin activation in complex environments where the dominance of mechanical or biochemical cues could lead to distinct activation pathways and functional outcomes.



K. Kroupa, R. Kepecs, H. Zhang, J.A. Weiss, C.T. Hung, G.A. Ateshian. “Intrinsic Viscoelasticity of Type II Collagen Contributes to The Viscoelastic Response of Immature Bovine Articular Cartilage Under Unconfined Compression Stress Relaxation,” In Journal of Biomechanical Engineering, 2026.

ABSTRACT

This study validates a finite deformation, nonlinear viscoelastic constitutive model for the collagen matrix of immature bovine articular cartilage, using reactive viscoelasticity. Tissue samples underwent proteoglycan (PG) digestion, losing more than 98% of their initial PG content to increase their hydraulic permeability. To verify that PG digestion eliminated flow-dependent viscoelasticity, samples were subjected to a gravitational permeation experiment, demonstrating that their hydraulic permeability, k = 268 ± 152 mm⁴/(N·s) (n=8), was five orders of magnitude greater than reported for untreated cartilage. Digested cartilage plugs were subjected to unconfined compression stress relaxation (four consecutive 10% strain ramp-hold profiles) to fit the load response and extract material properties (RMSE_fit = 1.86 ± 0.61 kPa, n=8). Successful curve-fitting served as a necessary condition for validating the model. Then, a separate unconfined compression stress-relaxation test was performed on the same samples, to 40% compressive strain at the same ramp rate. The model was able to faithfully predict this experimental response using fitted material properties (RMSE_pred = 3.95 ± 1.33 kPa, with stresses ranging from 0 to 155 ± 37 kPa), providing a sufficient condition for validation in unconfined compression stress-relaxation. A computational model showed that flow-independent viscoelasticity of cartilage collagen can enhance the stress response by ~15% at fast strain rates, over flow-dependent effects. However, we estimate from prior studies that flow-independent viscoelasticity may enhance the stress response of cartilage by up to 200%, implying that PGs probably contribute significantly to the tissue's flow-independent viscoelasticity.