SCI Publications

2026


T. M. Athawale, K. Moreland, D. Pugmire, C. R. Johnson, P. Rosen, M. Norman, A. Georgiadou, A. Entezari. “MAGIC: Marching Cubes Isosurface Uncertainty Visualization for Gaussian Uncertain Data with Spatial Correlation,” In IEEE Transactions on Visualization and Computer Graphics (TVCG), IEEE, 2026.

ABSTRACT

In this paper, we study the propagation of data uncertainty through the marching cubes algorithm for isosurface visualization of correlated uncertain data. Consideration of correlation has been shown to be paramount for avoiding errors in uncertainty quantification and visualization in multiple prior studies. Although the problem of isosurface uncertainty with spatial data correlation has been previously addressed, there are two major limitations to prior treatments. First, there are no analytical formulations for uncertainty quantification of isosurfaces when the data uncertainty is characterized by a Gaussian distribution with spatial correlation. Second, as a consequence of the lack of analytical formulations, existing techniques resort to a Monte Carlo sampling approach, which is expensive and difficult to integrate into visualization tools. To address these limitations, we present a closed-form framework to efficiently derive uncertainty in marching cubes level-sets for Gaussian uncertain data with spatial correlation (MAGIC). To derive closed-form solutions, we leverage Hinkley’s derivation for the ratio of Gaussian distributions. With our analytical framework, we achieve a significant speed-up and enhanced accuracy of uncertainty quantification over classical Monte Carlo methods. We further accelerate our analytical solutions using many-core processors to achieve speed-ups up to 585× and integrability with production visualization tools for broader impact. We demonstrate the effectiveness of our correlation-aware uncertainty framework through experiments on meteorology, urban flow, and astrophysics simulation datasets.
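
The effect of spatial correlation on level-crossing probabilities can be illustrated with a small sketch that is independent of the paper's closed-form MAGIC derivation: for two jointly Gaussian vertex values on a marching-cubes cell edge, the probability that the edge crosses an isovalue follows from the bivariate normal CDF, and ignoring the correlation changes the answer. The function name and the numbers below are illustrative, not taken from the paper.

    # Hedged illustration (not the paper's MAGIC formulation): probability that a
    # marching-cubes cell edge crosses isovalue k when its two vertex values
    # (A, B) are jointly Gaussian with correlation rho. The edge crosses k when
    # exactly one vertex value lies below the isovalue.
    import numpy as np
    from scipy.stats import norm, multivariate_normal

    def edge_crossing_probability(mu_a, mu_b, sigma_a, sigma_b, rho, k):
        cov = [[sigma_a**2, rho * sigma_a * sigma_b],
               [rho * sigma_a * sigma_b, sigma_b**2]]
        both_below = multivariate_normal(mean=[mu_a, mu_b], cov=cov).cdf([k, k])
        p_a_below = norm.cdf(k, loc=mu_a, scale=sigma_a)
        p_b_below = norm.cdf(k, loc=mu_b, scale=sigma_b)
        # P(exactly one below k) = P(A < k) + P(B < k) - 2 P(A < k, B < k)
        return p_a_below + p_b_below - 2.0 * both_below

    # In this example, treating the vertices as independent (rho = 0) overestimates
    # the crossing probability relative to a strongly correlated model (rho = 0.9).
    print(edge_crossing_probability(0.0, 0.2, 1.0, 1.0, 0.0, 0.1))
    print(edge_crossing_probability(0.0, 0.2, 1.0, 1.0, 0.9, 0.1))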



J. Hart, B. van Bloemen Waanders, J. Li, T. A. J. Ouermi, C. R. Johnson. “Hyper-differential sensitivity analysis with respect to model discrepancy: Prior distributions,” In International Journal for Uncertainty Quantification, Vol. 16, No. 1, Begell House, 2026.

ABSTRACT

Hyper-differential sensitivity analysis with respect to model discrepancy was recently developed to enable uncertainty quantification for optimization problems. The approach consists of two primary steps: (i) Bayesian calibration of the discrepancy between high- and low-fidelity models, and (ii) propagating the model discrepancy uncertainty through the optimization problem. When high-fidelity model evaluations are limited, as is common in practice, the prior discrepancy distribution plays a crucial role in the uncertainty analysis. However, specification of this prior is challenging due to its mathematical complexity and many hyper-parameters. This article presents a novel approach to specifying the prior distribution. Our approach consists of two parts: (1) an algorithmic initialization of the prior hyper-parameters that uses existing data to initialize a hyper-parameter estimate, and (2) a visualization framework to systematically explore properties of the prior and guide tuning of the hyper-parameters to ensure that the prior captures the appropriate range of uncertainty. We provide detailed mathematical analysis and a collection of numerical examples that elucidate properties of the prior that are crucial to ensure uncertainty quantification.
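
For orientation, the model-discrepancy setup the abstract builds on can be sketched generically (the paper's exact parameterization and hyper-parameters may differ): the low-fidelity model is corrected by an additive discrepancy endowed with a Gaussian prior,

    \[
      f_{\mathrm{hi}}(z) \;=\; f_{\mathrm{lo}}(z) + \delta(z),
      \qquad
      \delta \sim \mathcal{N}\!\left(m_{\theta},\, C_{\theta}\right),
    \]

where the hyper-parameters \theta (for example, the marginal variance and correlation structure encoded in C_\theta) must be chosen before the Bayesian calibration in step (i), and their choice governs how much uncertainty is propagated through the optimization problem in step (ii).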


2025


B. Adcock, B. Hientzsch, A. Narayan, Y. Xu. “Hybrid least squares for learning functions from highly noisy data,” Subtitled “arXiv:2507.02215,” 2025.

ABSTRACT

Motivated by the need for efficient estimation of conditional expectations, we consider a least-squares function approximation problem with heavily polluted data. Existing methods that are powerful in the small noise regime are suboptimal when large noise is present. We propose a hybrid approach that combines Christoffel sampling with certain types of optimal experimental design to address this issue. We show that the proposed algorithm enjoys appropriate optimality properties for both sample point generation and noise mollification, leading to improved computational efficiency and sample complexity compared to existing methods. We also extend the algorithm to convex-constrained settings with similar theoretical guarantees. When the target function is defined as the expectation of a random field, we extend our approach to leverage adaptive random subspaces and establish results on the approximation capacity of the adaptive procedure. Our theoretical findings are supported by numerical studies on both synthetic data and a more challenging stochastic simulation problem in computational finance.
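
As a hedged sketch of the Christoffel-sampling ingredient mentioned above (not the paper's full hybrid algorithm), the fragment below draws sample points with density proportional to the inverse Christoffel function of a Legendre polynomial space and solves the corresponding weighted least-squares problem; the function names, degrees, sample sizes, and noise level are illustrative.

    # Minimal sketch of Christoffel-weighted least squares for polynomial
    # approximation on [-1, 1].
    import numpy as np
    from numpy.polynomial import legendre

    def orthonormal_legendre_vandermonde(x, degree):
        # Legendre polynomials normalized so E[p_n(X)^2] = 1 for X ~ Uniform[-1, 1].
        V = legendre.legvander(x, degree)
        return V * np.sqrt(2 * np.arange(degree + 1) + 1)

    def christoffel_weighted_lsq(f, degree=10, n_samples=200, n_candidates=20000, seed=0):
        rng = np.random.default_rng(seed)
        # Draw sample points with density proportional to the inverse Christoffel
        # function K(x) = sum_n p_n(x)^2, approximated over a uniform candidate pool.
        candidates = rng.uniform(-1.0, 1.0, n_candidates)
        K = (orthonormal_legendre_vandermonde(candidates, degree) ** 2).sum(axis=1)
        idx = rng.choice(n_candidates, size=n_samples, replace=True, p=K / K.sum())
        x = candidates[idx]
        # Weighted least squares with weights w_i = (degree + 1) / K(x_i).
        V = orthonormal_legendre_vandermonde(x, degree)
        w = (degree + 1) / (V ** 2).sum(axis=1)
        y = f(x) + 0.5 * rng.standard_normal(n_samples)   # heavily polluted data
        coeffs, *_ = np.linalg.lstsq(np.sqrt(w)[:, None] * V, np.sqrt(w) * y, rcond=None)
        return coeffs

    coeffs = christoffel_weighted_lsq(np.cos, degree=8)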



N. Alkhatani, I. Petri, O. Rana, M. Parashar. “Edge learning for energy-aware resource management,” In 2025 IEEE International Conference on Edge Computing and Communications (EDGE), IEEE, 2025.

ABSTRACT

As the demand for intelligent systems grows, leveraging edge learning and autonomic self-management offers significant benefits for supporting real-time data analysis and resource management in edge environments. We describe and evaluate four distinct task allocation scenarios that demonstrate autonomic management of edge resources: random execution, autonomic broker-based scheduling, priority-driven execution, and energy-aware allocation. Our experiments reveal that while prioritization-based scheduling minimizes execution times by aligning with task criticality, the energy-aware approach presents a sustainable alternative. This method dynamically adapts task execution based on renewable energy availability, promoting environmentally conscious energy management without compromising operational efficiency. By harnessing renewable energy signals, our findings highlight the potential of edge autonomics to achieve a balance between performance, resource optimization, and sustainability. This work demonstrates how intelligent edge-cloud integration can foster resilient smart building infrastructures that meet the challenges of modern computing paradigms.



S. Aslan, N.R. Mangine, D.W. Laurence, P.M. Sabin, W. Wu, C. Herz, J. S. Unger, S. A. Maas, M. J. Gillespie, J. A. Weiss, M. A. Jolley. “Simulation of Transcatheter Therapies for Atrioventricular Valve Regurgitation in an Open-Source Finite Element Simulation Framework,” Subtitled “arXiv:2509.22865v1,” 2025.

ABSTRACT

Purpose: Transcatheter edge-to-edge repair (TEER) and annuloplasty devices are increasingly used to treat mitral valve regurgitation, yet their mechanical effects and interactions remain poorly understood. This study aimed to establish an open-source finite element modeling (FEM) framework for simulating patient-specific mitral valve repairs and to evaluate how TEER, annuloplasty, and combined strategies influence leaflet coaptation and valve mechanics. Methods: Four G4 MitraClip geometries were modeled and deployed in FEBio to capture leaflet grasp and subsequent clip-leaflet motion under physiologic pressurization. CardioBand annuloplasty was simulated by reducing annular circumference via displacement-controlled boundary conditions, and Mitralign suture annuloplasty was modeled using discrete nodal constraints. Simulations were performed for prolapse and dilated annulus cases. Valve competence (regurgitant orifice area, ROA), coaptation/contact area (CA), and leaflet stress and strain distributions were quantified. Results: In prolapse, TEER restored coaptation but increased leaflet stresses, whereas band and suture annuloplasty produced distinct valve morphologies with lower stress distributions. In dilation, TEER alone left residual regurgitation, while annuloplasty improved closure. Combined TEER & band annuloplasty minimized ROA, maximized CA, and reduced stresses relative to TEER alone, though stresses remained higher than annuloplasty alone. Conclusion: This study establishes a reproducible, open-source FEM framework for simulating transcatheter TEER and annuloplasty repairs, with the potential to be extended beyond the mitral valve. By quantifying the mechanical trade-offs of TEER, suture annuloplasty, band annuloplasty, and their combinations, this methodology highlights the potential of virtual repair to guide patient selection and optimize surgical planning.



T.M. Athawale, Z. Wang, D. Pugmire, K. Moreland, Q. Gong, S. Klasky, C.R. Johnson, P. Rosen. “Uncertainty Visualization of Critical Points of 2D Scalar Fields for Parametric and Nonparametric Probabilistic Models,” In IEEE Transactions on Visualization and Computer Graphics, Vol. 31, No. 1, IEEE, pp. 108--118. 2025.
DOI: 10.1109/TVCG.2024.3456393

ABSTRACT

This paper presents a novel end-to-end framework for closed-form computation and visualization of critical point uncertainty in 2D uncertain scalar fields. Critical points are fundamental topological descriptors used in the visualization and analysis of scalar fields. The uncertainty inherent in data (e.g., observational and experimental data, approximations in simulations, and compression), however, creates uncertainty regarding critical point positions. Uncertainty in critical point positions, therefore, cannot be ignored, given their impact on downstream data analysis tasks. In this work, we study uncertainty in critical points as a function of uncertainty in data modeled with probability distributions. Although Monte Carlo (MC) sampling techniques have been used in prior studies to quantify critical point uncertainty, they are often expensive and are infrequently used in production-quality visualization software. We, therefore, propose a new end-to-end framework to address these challenges that comprises a threefold contribution. First, we derive the critical point uncertainty in closed form, which is more accurate and efficient than the conventional MC sampling methods. Specifically, we provide the closed-form and semianalytical (a mix of closed-form and MC methods) solutions for parametric (e.g., uniform, Epanechnikov) and nonparametric models (e.g., histograms) with finite support. Second, we accelerate critical point probability computations using a parallel implementation with the VTK-m library, which is platform portable. Finally, we integrate our implementation with the ParaView software system to demonstrate near-real-time results for real datasets.
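
For intuition about what a critical point probability means in this setting, the following sketch computes the chance that a grid vertex is a local minimum when each vertex value carries an independent uniform distribution with finite support; the paper's closed-form and semianalytical derivations and its VTK-m/ParaView implementation go well beyond this toy quadrature, and the function name and example intervals are illustrative.

    # Simplified illustration: probability that a vertex value falls below all of
    # its grid neighbors, assuming independent uniform distributions per vertex.
    import numpy as np
    from scipy import integrate
    from scipy.stats import uniform

    def local_minimum_probability(center, neighbors):
        """center, neighbors: (lo, hi) support intervals of uniform distributions."""
        c = uniform(loc=center[0], scale=center[1] - center[0])
        nbrs = [uniform(loc=lo, scale=hi - lo) for lo, hi in neighbors]
        # P(local minimum) = integral of f_center(x) * prod_j P(neighbor_j > x) dx
        def integrand(x):
            p = c.pdf(x)
            for dist in nbrs:
                p *= dist.sf(x)
            return p
        prob, _ = integrate.quad(integrand, center[0], center[1])
        return prob

    # A vertex whose support sits mostly below its four neighbors is a likely minimum.
    print(local_minimum_probability((0.0, 1.0),
                                    [(0.5, 1.5), (0.6, 1.6), (0.4, 1.4), (0.7, 1.7)]))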



W. Bangerth, C. R. Johnson, D. K. Njeru, B. van Bloemen Waanders. “Estimating and using information in inverse problems,” In Inverse Problems and Imaging, 2025.
ISSN: 1930-8337
DOI: 10.3934/ipi.2026003

ABSTRACT

In inverse problems, one attempts to infer spatially variable functions from indirect measurements of a system. To practitioners of inverse problems, the concept of "information" is familiar when discussing key questions such as which parts of the function can be inferred accurately and which cannot. For example, it is generally understood that we can identify system parameters accurately only close to detectors, or along ray paths between sources and detectors, because we have "the most information" for these places.

Although referenced in many publications, the "information" that is invoked in such contexts is not a well understood and clearly defined quantity. Herein, we present a definition of information density that is based on the variance of coefficients as derived from a Bayesian reformulation of the inverse problem. We then discuss three areas in which this information density can be useful in practical algorithms for the solution of inverse problems, and illustrate the usefulness in one of these areas – how to choose the discretization mesh for the function to be reconstructed – using numerical experiments.
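
For readers unfamiliar with variance-based notions of information, the standard construction behind such a definition can be sketched for a linear Gaussian inverse problem (the paper's precise definition of information density may differ): with data d = Gm + ε, ε ~ N(0, Σ_noise), and prior m ~ N(m_0, C_prior), the posterior covariance of the coefficient vector and the pointwise posterior variance of the reconstructed function m(x) = Σ_i m_i φ_i(x) are

    \[
      C_{\mathrm{post}}
      = \left( G^{\top} \Sigma_{\mathrm{noise}}^{-1} G + C_{\mathrm{prior}}^{-1} \right)^{-1},
      \qquad
      \operatorname{Var}\!\left[ m(x) \right]
      = \sum_{i,j} \varphi_i(x)\, \big(C_{\mathrm{post}}\big)_{ij}\, \varphi_j(x).
    \]

Locations where the posterior variance is small relative to the prior variance are the places for which the data carry "the most information," which is the intuition the paper makes precise and then exploits, for example in choosing the discretization mesh.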



Z. Bastiani, R.M. Kirby, J. Hochhalter, S. Zhe. “Diffusion-Based Symbolic Regression,” Subtitled “arXiv:2505.24776,” 2025.

ABSTRACT

Diffusion has emerged as a powerful framework for generative modeling, achieving remarkable success in applications such as image and audio synthesis. Enlightened by this progress, we propose a novel diffusion-based approach for symbolic regression. We construct a random mask-based diffusion and denoising process to generate diverse and high-quality equations. We integrate this generative process with a token-wise Group Relative Policy Optimization (GRPO) method to conduct efficient reinforcement learning on the given measurement dataset. In addition, we introduce a long short-term risk-seeking policy to expand the pool of top-performing candidates, further enhancing performance. Extensive experiments and ablation studies have demonstrated the effectiveness of our approach.
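
A minimal sketch of the group-relative advantage computation at the heart of GRPO-style reinforcement learning follows; the paper couples this with a masked diffusion generator and a long short-term risk-seeking policy, and the names and numbers below are illustrative.

    # Group-relative advantages: each candidate equation's reward is compared to
    # the mean reward of its sampled group, and the normalized advantage is then
    # applied token-wise during the policy update.
    import numpy as np

    def group_relative_advantages(rewards):
        """rewards: per-equation fitness scores for one group of sampled candidates."""
        rewards = np.asarray(rewards, dtype=float)
        baseline = rewards.mean()
        scale = rewards.std() + 1e-8          # avoid division by zero
        return (rewards - baseline) / scale   # shared by every token of each candidate

    # Candidates whose fit to the measurement data beats the group average receive
    # positive advantages; the rest receive negative advantages.
    print(group_relative_advantages([0.91, 0.40, 0.75, 0.10]))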



M. Belianovich, G.E. Fasshauer, A. Narayan, V. Shankar. “A Unified Framework for Efficient Kernel and Polynomial Interpolation,” Subtitled “arXiv:2507.12629v2,” 2025.

ABSTRACT

We present a unified interpolation scheme that combines compactly-supported positive-definite kernels and multivariate polynomials. This unified framework generalizes interpolation with compactly-supported kernels and also classical polynomial least squares approximation. To facilitate the efficient use of this unified interpolation scheme, we present specialized numerical linear algebra procedures that leverage standard matrix factorizations. These procedures allow for efficient computation and storage of the unified interpolant. We also present a modification to the numerical linear algebra that allows us to generalize the application of the unified framework to target functions on manifolds with and without boundary. Our numerical experiments on both Euclidean domains and manifolds indicate that the unified interpolant is superior to polynomial least squares for the interpolation of target functions in settings with boundaries.
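
As a baseline for what the unified framework generalizes, the classical kernel-plus-polynomial interpolant solves an augmented linear system; the sketch below uses a compactly supported Wendland C2 kernel and a linear polynomial tail. The paper's unified scheme and its specialized factorizations differ, and the function names and problem sizes here are illustrative.

    # Classical kernel + polynomial interpolation: solve the augmented system
    #   [K   P] [c]   [f]
    #   [P^T 0] [d] = [0]
    # with a compactly supported Wendland C2 kernel and a linear polynomial basis.
    import numpy as np

    def wendland_c2(r):
        r = np.clip(r, 0.0, 1.0)
        return (1.0 - r) ** 4 * (4.0 * r + 1.0)

    def fit_kernel_poly_interpolant(X, f, support_radius=0.5):
        # X: (n, d) sites, f: (n,) samples; polynomial basis [1, x_1, ..., x_d].
        n, d = X.shape
        r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1) / support_radius
        K = wendland_c2(r)
        P = np.hstack([np.ones((n, 1)), X])
        A = np.block([[K, P], [P.T, np.zeros((d + 1, d + 1))]])
        rhs = np.concatenate([f, np.zeros(d + 1)])
        coeffs = np.linalg.solve(A, rhs)
        return coeffs[:n], coeffs[n:]        # kernel weights, polynomial weights

    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(200, 2))
    kernel_w, poly_w = fit_kernel_poly_interpolant(X, np.sin(np.pi * X[:, 0]) * X[:, 1])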



F. Bělík, J. Chan, A. Narayan. “Efficient and Robust Carathéodory-Steinitz Pruning of Positive Discrete Measures,” Subtitled “arXiv:2510.14916,” 2025.

ABSTRACT

In many applications, one seeks to approximate integration against a positive measure of interest by a positive discrete measure: a numerical quadrature rule with positive weights. One common desired discretization property is moment preservation over a finite dimensional function space, e.g., bounded-degree polynomials. Carathéodory's theorem asserts that if there is any finitely supported quadrature rule with more nodes than the dimension of the given function space, one can form a smaller (and hence more efficient) positive, nested, quadrature rule that preserves the moments of the original rule.


We describe an efficient streaming procedure for Carathéodory-Steinitz pruning, a numerical procedure that implements Carathéodory's theorem for this measure compression. The new algorithm makes use of Givens rotations and on-demand storage of arrays to successfully prune very large rules whose storage complexity only depends on the dimension of the function space. This approach improves on a naive implementation of Carathéodory-Steinitz pruning whose runtime and storage complexity are quadratic and linear, respectively, in the size of the original measure. We additionally prove mathematical stability properties of our method with respect to a set of admissible, total-variation perturbations of the original measure. Our method is compared to two alternate approaches with larger storage requirements: non-negative least squares and linear programming, and we demonstrate comparable runtimes, with improved stability and storage robustness. Finally, we demonstrate practical usage of this algorithm to generate quadrature for discontinuous Galerkin finite element simulations on cut-cell meshes.
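
For concreteness, a naive dense implementation of Carathéodory-Steinitz pruning looks as follows; it exhibits exactly the runtime and node-proportional storage that the streaming, Givens-rotation-based algorithm of the paper avoids. The function names and the toy moment basis are illustrative.

    # Naive Caratheodory-Steinitz pruning: repeatedly step along a null vector of
    # the (basis x node) matrix so that one weight hits zero, preserving moments,
    # until only dim-many positively weighted nodes remain.
    import numpy as np

    def caratheodory_prune(V, w, tol=1e-12):
        """V: (n_basis, n_nodes) basis evaluations; w: positive weights.
        Returns (indices, weights) with V[:, idx] @ w_idx matching V @ w."""
        active = np.flatnonzero(w > tol)
        w = w.astype(float).copy()
        n_basis = V.shape[0]
        while active.size > n_basis:
            # Null vector of the active columns (last right singular vector).
            _, _, Vt = np.linalg.svd(V[:, active], full_matrices=True)
            c = Vt[-1]
            if not np.any(c > tol):
                c = -c
            # Largest step keeping all weights nonnegative; one weight hits zero.
            t = (w[active[c > tol]] / c[c > tol]).min()
            w[active] -= t * c
            w[np.abs(w) < tol] = 0.0
            active = np.flatnonzero(w > tol)
        return active, w[active]

    # Example: prune a 300-node rule while preserving its moments of 1, x, x^2, x^3.
    x = np.linspace(-1, 1, 300)
    V = np.vander(x, 4, increasing=True).T
    w0 = np.full_like(x, 2.0 / x.size)
    idx, w = caratheodory_prune(V, w0)
    assert np.allclose(V[:, idx] @ w, V @ w0)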



L. F. Bittencourt, R. Rodrigues-Filho, J. Spillner, F. De Turck, J. Santos, N. L.S. da Fonseca, O. Rana, M. Parashar, I. Foster. “The computing continuum: Past, present, and future,” In Computer Science Review, Vol. 58, 2025.
ISSN: 1574-0137
DOI: https://doi.org/10.1016/j.cosrev.2025.100782

ABSTRACT

The development of network-connected computing resources has led to various computing paradigms over the years, each bringing its own set of challenges for creating efficient distributed systems. Currently, there is an increasing need to integrate the evolving Internet of Things (IoT) with the established Cloud infrastructure. This integration often requires adding intermediate layers to address Cloud limitations such as latency, bandwidth, security, cost, and control. This configuration, known as the computing continuum, involves a diverse array of distributed devices with unique characteristics working together to meet the demands of both current and emerging applications. This paper explores the path that has led to the development of the computing continuum, offering a technology-agnostic definition from a historical perspective. It also examines applications that can benefit from the computing continuum and identifies research challenges that need to be addressed to fully realize its potential.



A. Busatto, J.A. Bergquist, T. Tasdizen, B.A. Steinberg, R. Ranjan, R.S. MacLeod. “Predicting Ventricular Arrhythmia in Myocardial Ischemia Using Machine Learning,” In 2025 Computing in Cardiology Conference, 2025.

ABSTRACT

Ventricular arrhythmia frequently complicates myocardial ischemic events, sometimes to devastating ends. Accurate arrhythmia prediction in this setting could improve outcomes, yet traditional models struggle with the temporal complexity of the data. This study employs a Long Short-Term Memory (LSTM) network to predict the time to the next premature ventricular contraction (PVC) using high-resolution experimental data. We analyzed electrograms from 11 large animal experiments, identifying 1832 PVCs, and computed time-to-PVC. An LSTM model (247 inputs, 1024 hidden units) was trained on 10 experiments, with one held out for testing, achieving a validation MAE of 8.6 seconds and a test MAE of 135 seconds (loss 68.5). Scatter plots showed strong validation correlation and a positive test trend, suggesting the potential of this approach.
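
A hedged sketch of the kind of recurrent regressor described above, using the input and hidden sizes reported in the abstract (247 features, 1024 hidden units); the sequence length, optimizer-free training step, loss choice, and synthetic data are illustrative assumptions, not details taken from the paper.

    # LSTM regressor mapping a window of 247-feature time steps to a scalar
    # time-to-next-PVC prediction.
    import torch
    import torch.nn as nn

    class TimeToPVCModel(nn.Module):
        def __init__(self, n_features=247, hidden_size=1024):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
            self.head = nn.Linear(hidden_size, 1)

        def forward(self, x):                      # x: (batch, time, 247)
            _, (h_n, _) = self.lstm(x)             # h_n: (1, batch, hidden)
            return self.head(h_n[-1]).squeeze(-1)  # predicted seconds to next PVC

    model = TimeToPVCModel()
    x = torch.randn(8, 50, 247)                # 8 windows of 50 time steps (synthetic)
    target = torch.rand(8) * 300.0             # synthetic time-to-PVC labels, seconds
    loss = nn.L1Loss()(model(x), target)       # MAE, matching the reported metric
    loss.backward()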



L. Carnevale, D. Balouek, S. Sebbio, M. Parashar, M. Villari. “Private Distributed Resource Management Data: Predicting CPU Utilization with Bi-LSTM and Federated Learning,” In 2025 IEEE 25th International Symposium on Cluster, Cloud and Internet Computing (CCGrid), IEEE, pp. 266-275. 2025.
DOI: 10.1109/CCGRID64434.2025.00048

ABSTRACT

Artificial intelligence is increasingly pervasive in many sectors. In this regard, IT operations depend heavily on extracting useful information from the large amount of resource data available (e.g., CPU, memory, disk, energy). The problem becomes harder when multiple cloud tiers are considered. Artificial intelligence is a key technology when the main goal is to improve microservice migration through offload management. However, it struggles in distributed contexts where data transfer must be reduced and data privacy must be increased. There is therefore a need for novel solutions that predict resource utilization (e.g., CPU) while maintaining data privacy and reducing data communication. In this paper, we present a Bi-LSTM model with attention trained with Federated Learning on historical CPU data. The dataset comes from multiple Microsoft Azure traces. The results are compared with the literature and show good generalization and prediction performance for metrics collected from multiple virtual machines. The model is evaluated in terms of R-squared, MSE, RMSE, and MAE.
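
A minimal sketch of the federated-aggregation step implied by the abstract: each site trains the Bi-LSTM locally on its own CPU traces, and only model weights, not raw data, leave the site. The weighting by local sample count, the helper name, and the toy models are illustrative, not the paper's exact protocol.

    # FedAvg-style aggregation of locally trained model parameters.
    import copy
    import torch

    def federated_average(client_state_dicts, client_sample_counts):
        """Weighted average of client model parameters."""
        total = float(sum(client_sample_counts))
        global_state = copy.deepcopy(client_state_dicts[0])
        for key in global_state:
            global_state[key] = sum(
                sd[key].float() * (n / total)
                for sd, n in zip(client_state_dicts, client_sample_counts)
            )
        return global_state

    # Toy round: two sites, each with a locally trained model (stand-in layers here).
    site_a, site_b = torch.nn.Linear(4, 1), torch.nn.Linear(4, 1)
    new_state = federated_average([site_a.state_dict(), site_b.state_dict()], [120, 80])
    site_a.load_state_dict(new_state)   # the aggregated model is pushed back to each site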



B. Charoenwong, R.M. Kirby, J. Reiter. “Tradeoffs in automated financial regulation of decentralized finance due to limits on mutable Turing machines,” In Scientific Reports, Vol. 15, No. 3016, 2025.
DOI: https://doi.org/10.1038/s41598-024-84612-9

ABSTRACT

We examine which decentralized finance architectures enable meaningful regulation by combining financial and computational theory. We show via deduction that a decentralized and permissionless Turing-complete system cannot provably comply with regulations concerning anti-money laundering, know-your-client obligations, some securities restrictions and forms of exchange control. Any system that claims to follow regulations must choose either a form of permission or a less-than-Turing-complete update facility. Compliant decentralized systems can be constructed only by compromising on the richness of permissible changes. Regulatory authorities must accept new tradeoffs that limit their enforcement powers if they want to approve permissionless platforms formally. Our analysis demonstrates that the fundamental constraints of computation theory have direct implications for financial regulation. By mapping regulatory requirements onto computational models, we characterize which types of automated compliance are achievable and which are provably impossible. This framework allows us to move beyond traditional debates about regulatory effectiveness to establish concrete boundaries for automated enforcement.



P. Chen, S. Jernigan, K. Zhao, G.V. PJ, M. Saha, C. Kim, A. Arzani, G. Buckner, J. Hu. “Image-guided embolization using Ta@Ca-Alg microspheres with optimized mechanical performance,” In Biomaterials Science, Vol. 13, pp. 4786-4802. 2025.

ABSTRACT

Transcatheter arterial embolization (TAE) is a minimally invasive technique used to treat hypervascular tumors, hemorrhage, and vascular abnormalities. Though microspheres (MSs) have achieved widespread clinical use as embolic agents, they often lack imaging opacity, optimal morphology and mechanical properties which can lead to unpredictable trajectories, non-target delivery, and suboptimal embolization. This study developed tantalum-loaded calcium alginate (Ta@Ca-Alg) MSs with intrinsic radiopacity, tunable density, and mechanical properties. Ta@Ca-Alg MSs were synthesized using a gas-shearing method and analyzed for size, morphology, swelling behavior, density, radiopacity, and optimized mechanical properties. The results demonstrated that Ta@Ca-Alg MSs maintained a narrow size distribution, with increasing Ta concentration enhancing radiopacity to levels comparable with the clinical contrast agent OMNIPAQUE 350. Density and Young's modulus corresponding to different Ta concentrations were also investigated. Phantom model testing validated effective vessel occlusion and controlled penetration. In vitro hemocompatibility, sterility, and cytotoxicity studies confirmed excellent biocompatibility. These findings suggest that Ta@Ca-Alg MSs are a promising radiopaque embolic agent with optimized radiopacity, density, and mechanical properties, offering excellent potential for TAE procedures.



J.H. Choi, M. Elhadidy, M. Kim, W. Park, J.C. Park, B. D. Kwun, S. Joo, S. H. Lee, S. U. Lee, J. S. Bang, M. T. Lawton, A. Arzani, J. S. Ahn. “Flow alteration strategies for complex basilar apex aneurysms: multicenter experience, systematic review, and insights from computational fluid dynamics,” Subtitled “Research Square Preprint,” 2025.

ABSTRACT

Complex basilar apex aneurysms (CBAAs) present a significant challenge due to their unfavorable natural history and difficulty with conventional treatments. This study aimed to provide insights into flow alteration strategies by combining a systematic review using PRISMA methodology with a multicenter experience from South Korea. We analyzed 57 cases, finding that flow preservation with aneurysm obliteration was performed in 12.7%, while mild, moderate, and maximum flow reduction were applied in 77.2%, 7.0%, and 3.5% respectively. Outcomes showed that 75.8% of patients with available imaging achieved satisfactory aneurysm obliteration. A good clinical outcome (mRS 0–2) was observed in 49.1% of cases. However, poor outcomes (mRS 4–6) were reported in 31.6%, with a mortality rate of 17.5%. Beyond simply reducing intra-aneurysmal flow, computational fluid dynamics (CFD) simulations revealed that alterations in flow balance and direction significantly influenced hemodynamic stress. Given the severe prognosis of CBAAs, flow alteration strategies can serve as viable alternatives when conventional treatments are not feasible. Furthermore, CFD simulations might hold promise in identifying optimal strategies that can maximize aneurysm control while minimizing procedural risks.



R.E. Coffman, R. Kolasangiani, T.C. Bidone. “Mn2+ accelerates ligand binding-site activation of αIIbβ3 integrin: insight from all-atom simulation,” In Biophysical Journal, Vol. 124, No. 17, pp. 2854-2864. 2025.

ABSTRACT

The activation of integrins by Mn2+ is a crucial area of research, yet the underlying mechanisms remain poorly understood. Previous studies have shown that substituting Mg2+ with Mn2+ at the metal ion-dependent adhesion site (MIDAS) enhances the affinities of high-affinity open and low-affinity closed integrins. However, the molecular effect of Mn2+ and how it compares to physiological activation mediated by Mg2+/Ca2+ remain unclear. This is partly due to the lack of experimental techniques capable of detecting these processes dynamically. In this study, we used equilibrium molecular dynamics simulations to examine the effects of Mn2+ on the binding site of platelet integrin αIIbβ3. Our findings show that Mn2+ accelerates conformational changes related to activation. Specifically, Mn2+ promotes an earlier displacement of M335 in the β6-α7 loop away from the ADMIDAS site (adjacent to the MIDAS site) and a rapid downward movement of the α7 helix in the βI domain. Additionally, Mn2+ leads to faster stabilization of the α1 helix, strengthening the interactions between the αIIbβ3 ligand-binding site and the RGD motif. These results suggest that Mn2+ accelerates high-affinity rearrangements at the ligand-binding site, resembling those seen in physiological activation, but occurring more rapidly than with Mg2+/Ca2+. Overall, our data suggest that Mn2+-induced affinity modulation proceeds through similar early activation steps, even without full integrin extension.



Z. Cutler, L. Harrison, C. Nobre, A. Lex. “Crowdsourced Think-Aloud Studies,” Subtitled “OSF Preprints,” 2025.

ABSTRACT

The think-aloud (TA) protocol is a useful method for evaluating user interfaces, including data visualizations. However, TA studies are time-consuming to conduct and hence often have a small number of participants. Crowdsourcing TA studies would help alleviate these problems, but the technical overhead and the unknown quality of results have restricted TA to synchronous studies.

To address this gap, we introduce CrowdAloud, a system for creating and analyzing asynchronous, crowdsourced TA studies. CrowdAloud captures audio and provenance (log) data as participants interact with a stimulus. Participant audio is automatically transcribed and visualized together with event data and a full recreation of the state of the stimulus as seen by participants.

To gauge the value of crowdsourced TA studies, we conducted two experiments: one to compare lab-based and crowdsourced TA studies, and one to compare crowdsourced TA studies with crowdsourced text prompts. Our results suggest that crowdsourcing is a viable approach for conducting TA studies at scale.



H. Dai, S. Joshi. “Refining Skewed Perceptions in Vision-Language Contrastive Models through Visual Representations,” Subtitled “arXiv:2405.14030,” 2025.

ABSTRACT

Large vision-language contrastive models (VLCMs), such as CLIP, have become foundational, demonstrating remarkable success across a variety of downstream tasks. Despite their advantages, these models, akin to other foundational systems, inherit biases from the disproportionate distribution of real-world data, leading to misconceptions about the actual environment. Prevalent datasets like ImageNet are often riddled with non-causal, spurious correlations that can diminish VLCM performance in scenarios where these contextual elements are absent. This study presents an investigation into how a simple linear probe can effectively distill task-specific core features from CLIP’s embedding for downstream applications. Our analysis reveals that the CLIP text representations are often tainted by spurious correlations inherited from the biased pre-training dataset. Empirical evidence suggests that relying on visual representations from CLIP, as opposed to text embeddings, is more effective for refining the skewed perceptions in VLCMs, emphasizing the superior utility of visual representations in overcoming embedded biases. Our code can be found here.
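
A minimal sketch of the linear-probe idea discussed above: fit a simple linear classifier on (precomputed) CLIP image embeddings so that the probe isolates task-relevant directions of the visual representation. The synthetic embeddings, labels, and probe settings below are placeholders, not the paper's pipeline.

    # Linear probe on image embeddings; in practice the features would come from
    # a CLIP image encoder, and the labels from the core task (not the spurious
    # context) one wants the probe to capture.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    image_features = rng.standard_normal((1000, 512))            # stand-in embeddings
    image_features /= np.linalg.norm(image_features, axis=1, keepdims=True)
    labels = rng.integers(0, 2, size=1000)                        # stand-in task labels

    probe = LogisticRegression(max_iter=1000).fit(image_features, labels)
    # The probe's weight vector spans a task-specific direction of the visual
    # embedding; projecting onto it discards directions tied to spurious context.
    core_direction = probe.coef_ / np.linalg.norm(probe.coef_)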



T. Dixon, A. Gorodetsky, J. Jakeman, A. Narayan, Y. Xu. “Optimally balancing exploration and exploitation to automate multi-fidelity statistical estimation,” Subtitled “arXiv:2505.09828v1,” 2025.

ABSTRACT

Multi-fidelity methods that use an ensemble of models to compute a Monte Carlo estimator of the expectation of a high-fidelity model can significantly reduce computational costs compared to single-model approaches. These methods use oracle statistics, specifically the covariance between models, to optimally allocate samples to each model in the ensemble. However, in practice, the oracle statistics are estimated using additional model evaluations, whose computational cost and induced error are typically ignored. To address this issue, this paper proposes an adaptive algorithm to optimally balance the resources between oracle statistics estimation and final multi-fidelity estimator construction, leveraging ideas from multilevel best linear unbiased estimators in Schaden and Ullmann (2020) and a bandit-learning procedure in Xu et al. (2022). Under mild assumptions, we demonstrate that the multi-fidelity estimator produced by the proposed algorithm exhibits mean-squared error commensurate with that of the best linear unbiased estimator under the optimal allocation computed with oracle statistics. Our theoretical findings are supported by detailed numerical experiments, including a parametric elliptic PDE and an ice-sheet mass-change modeling problem.
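
To see why the covariance between models (the "oracle statistics") enters the optimal allocation, recall the classical two-model control-variate estimator that multilevel best linear unbiased estimators and related multi-fidelity schemes generalize:

    \[
      \hat{\mu}_{\mathrm{MF}}
      = \frac{1}{n_{\mathrm{hi}}} \sum_{i=1}^{n_{\mathrm{hi}}} f_{\mathrm{hi}}(z_i)
      + \alpha \left(
          \frac{1}{n_{\mathrm{lo}}} \sum_{j=1}^{n_{\mathrm{lo}}} f_{\mathrm{lo}}(z_j)
        - \frac{1}{n_{\mathrm{hi}}} \sum_{i=1}^{n_{\mathrm{hi}}} f_{\mathrm{lo}}(z_i)
        \right),
      \qquad
      \alpha^{*} = \frac{\operatorname{Cov}\!\left(f_{\mathrm{hi}}, f_{\mathrm{lo}}\right)}
                        {\operatorname{Var}\!\left(f_{\mathrm{lo}}\right)},
    \]

with n_lo ≥ n_hi and the low-fidelity model evaluated at all inputs. The variance-optimal weight α* depends on the unknown covariance, so in practice it must itself be estimated from pilot model evaluations; the proposed algorithm decides adaptively how much of the computational budget to spend on that estimation versus on the final multi-fidelity estimator.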