SCI Publications

2025


F. Bělík, J. Chan, A. Narayan. “Efficient and Robust Carathéodory-Steinitz Pruning of Positive Discrete Measures,” Subtitled “arXiv:2510.14916,” 2025.

ABSTRACT

In many applications, one seeks to approximate integration against a positive measure of interest by a positive discrete measure: a numerical quadrature rule with positive weights. One common desired discretization property is moment preservation over a finite-dimensional function space, e.g., bounded-degree polynomials. Carathéodory's theorem asserts that if there is any finitely supported quadrature rule with more nodes than the dimension of the given function space, one can form a smaller (and hence more efficient) positive, nested quadrature rule that preserves the moments of the original rule.


We describe an efficient streaming procedure for Carathéodory-Steinitz pruning, a numerical procedure that implements Carathéodory's theorem for this measure compression. The new algorithm uses Givens rotations and on-demand storage of arrays to prune very large rules with storage complexity that depends only on the dimension of the function space. This approach improves on a naive implementation of Carathéodory-Steinitz pruning, whose runtime and storage complexity are quadratic and linear, respectively, in the size of the original measure. We additionally prove mathematical stability properties of our method with respect to a set of admissible, total-variation perturbations of the original measure. We compare our method to two alternative approaches with larger storage requirements, non-negative least squares and linear programming, and demonstrate comparable runtimes with improved stability and storage robustness. Finally, we demonstrate practical usage of this algorithm to generate quadrature for discontinuous Galerkin finite element simulations on cut-cell meshes.
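
The pruning step itself is compact enough to sketch. Below is a minimal, dense-matrix Python/NumPy illustration of Carathéodory-Steinitz pruning for monomial moments; it is the naive (non-streaming) variant for intuition only, and the function name, tolerances, and basis choice are illustrative rather than taken from the paper.

```python
import numpy as np

def caratheodory_prune(x, w, m):
    """Prune nodes x with positive weights w down to m nodes while
    preserving the monomial moments sum_i w_i * x_i**k for k < m."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float).copy()
    keep = np.arange(len(x))
    while len(keep) > m:
        # Rows of A: the m basis functions evaluated at surviving nodes.
        A = np.vander(x[keep], m, increasing=True).T          # m x n, n > m
        # Any null vector of A perturbs the weights without changing moments.
        c = np.linalg.svd(A)[2][-1]
        pos = c > 1e-14
        if not pos.any():                                     # flip sign if needed
            c = -c
            pos = c > 1e-14
        # Largest step that keeps all weights nonnegative; one weight hits 0.
        t = np.min(w[keep][pos] / c[pos])
        w[keep] = w[keep] - t * c
        keep = keep[w[keep] > 1e-14]
    return x[keep], w[keep]

# Example: prune a 1000-node rule to 5 nodes, preserving 5 moments.
rng = np.random.default_rng(0)
x, w = rng.uniform(-1, 1, 1000), rng.uniform(0.1, 1.0, 1000)
xs, ws = caratheodory_prune(x, w, 5)
moments = lambda pts, wts: [np.sum(wts * pts**k) for k in range(5)]
assert np.allclose(moments(x, w), moments(xs, ws))
```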



F. Bělík, Y. Chen, A. Narayan. “Greedy Rational Approximation for Frequency-Domain Model Reduction of Parametric LTI Systems,” Subtitled “arXiv:2512.23814v1,” 2025.

ABSTRACT

We investigate model reduction of parametric linear time-invariant (LTI) dynamical systems. When posed in the frequency domain, this problem can be formulated as seeking a low-order rational function approximation of a high-order rational function. We propose to use a standard reduced basis method (RBM) to construct this low-order rational function. Algorithmically, this procedure is an iterative greedy approach, where the greedy objective is evaluated through an error estimator that exploits the linearity of the frequency domain representation. The greedy framework is motivated by theoretical results on the rational approximability of functions. This framework provides a principled approach to rational compression of high-order rational functions and a computational pathway for model reduction of parametric LTI systems.
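
As an illustration of the greedy mechanics, the following toy Python/NumPy sketch builds a reduced basis for a non-parametric SISO system H(s) = c^T (sI - A)^{-1} b, using the full-order residual norm as a stand-in for the paper's error estimator; the system, candidate frequencies, and iteration count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
A = -np.diag(rng.uniform(0.1, 10.0, n)) + 0.01 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
c = rng.standard_normal(n)
I = np.eye(n)
freqs = 1j * np.linspace(0.1, 10.0, 100)        # candidate frequencies

V = np.zeros((n, 0), dtype=complex)             # reduced basis
for _ in range(10):                             # greedy iterations
    residuals = []
    for s in freqs:
        if V.shape[1] == 0:
            residuals.append(np.linalg.norm(b))
            continue
        # Galerkin-reduced solve at frequency s, then full-order residual.
        Ar = V.conj().T @ ((s * I - A) @ V)
        xr = np.linalg.solve(Ar, V.conj().T @ b)
        residuals.append(np.linalg.norm((s * I - A) @ (V @ xr) - b))
    s_star = freqs[int(np.argmax(residuals))]   # worst-approximated frequency
    snapshot = np.linalg.solve(s_star * I - A, b)
    V, _ = np.linalg.qr(np.column_stack([V, snapshot]))

# Reduced transfer function: H_r(s) = c^T V (V^* (sI - A) V)^{-1} V^* b.
```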



J.A. Bergquist, B. Orkild, E. Kwan, K. Gillette, K. Yazaki, S. Jaroonpipatkul, E. Dibella, R. Shelton, E. Beiging, L. Chang, G. Plank, S. Elhabian, R. S. MacLeod, R. Ranjan. “Comparison of LGE MRI Scar Identification Methods for Atrial Computational Modeling,” In Computing in Cardiology 2025, Vol. 52, 2025.

ABSTRACT

Identification of patient-specific scar and fibrosis is a critical step in the personalization of cardiac computational models. Late gadolinium enhanced cardiac magnetic resonance imaging (LGE-cMRI) is often used to identify patient anatomy, as well as tissue fibrosis and scar. Automated methods to identify scar from LGE-cMRI exist; however, there is no clear consensus as to which is best in the context of patient-specific computational modeling of atrial fibrillation, and there has been no substantial investigation into the effects that variability in scar may have on downstream patient-specific simulations. This study compares the distribution of scar patterns generated via automated LGE-cMRI analysis alongside human-guided scar identification. We assess the effects each identified scar pattern has on downstream computational modeling outputs by comparing the number of stable re-entrant arrhythmias induced in silico in atrial fibrillation simulations. We find both substantial disagreement between scar patterns identified via automated and human-guided methods and sensitivity of the arrhythmia simulation outcomes across scar patterns. These results highlight the sensitivity of such computational models to these input parameters and reinforce the need for robust personalization tools in the cardiac modeling field.



L. F. Bittencourt, R. Rodrigues-Filho, J. Spillner, F. De Turck, J. Santos, N. L.S. da Fonseca, O. Rana, M. Parashar, I. Foster. “The computing continuum: Past, present, and future,” In Computer Science Review, Vol. 58, 2025.
ISSN: 1574-0137
DOI: https://doi.org/10.1016/j.cosrev.2025.100782

ABSTRACT

The development of network-connected computing resources has led to various computing paradigms over the years, each bringing its own set of challenges for creating efficient distributed systems. Currently, there is an increasing need to integrate the evolving Internet of Things (IoT) with the established Cloud infrastructure. This integration often requires adding intermediate layers to address Cloud limitations such as latency, bandwidth, security, cost, and control. This configuration, known as the computing continuum, involves a diverse array of distributed devices with unique characteristics working together to meet the demands of both current and emerging applications. This paper explores the path that has led to the development of the computing continuum, offering a technology-agnostic definition from a historical perspective. It also examines applications that can benefit from the computing continuum and identifies research challenges that need to be addressed to fully realize its potential.



A. Busatto, J.A. Bergquist, T. Tasdizen, B.A. Steinberg, R. Ranjan, R.S. MacLeod. “Predicting Ventricular Arrhythmia in Myocardial Ischemia Using Machine Learning,” In 2025 Computing in Cardiology Conference, 2025.

ABSTRACT

Ventricular arrhythmia frequently complicates myocardial ischemic events, sometimes to devastating ends. Accurate arrhythmia prediction in this setting could improve outcomes, yet traditional models struggle with the temporal complexity of the data. This study employs a Long Short-Term Memory (LSTM) network to predict the time to the next premature ventricular contraction (PVC) using high-resolution experimental data. We analyzed electrograms from 11 large animal experiments, identifying 1832 PVCs, and computed time-to-PVC. An LSTM model (247 inputs, 1024 hidden units) was trained on 10 experiments, with one held out for testing, achieving a validation MAE of 8.6 seconds and a test MAE of 135 seconds (loss 68.5). Scatter plots showed strong validation correlation and a positive test trend, suggesting the potential of this approach.
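
A minimal PyTorch sketch of this kind of regressor is given below; the 247 input channels and 1024 hidden units follow the abstract, while the sequence length, optimizer, loss, and dummy data are illustrative assumptions, not the paper's training setup.

```python
import torch
import torch.nn as nn

class TimeToPVC(nn.Module):
    def __init__(self, n_channels=247, hidden=1024):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)           # regress seconds to next PVC

    def forward(self, x):                           # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(-1)   # predict from last time step

model = TimeToPVC()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
x = torch.randn(8, 50, 247)                         # dummy electrogram windows
y = torch.rand(8) * 300.0                           # dummy time-to-PVC targets (s)
opt.zero_grad()
loss = nn.functional.l1_loss(model(x), y)           # MAE, as reported above
loss.backward()
opt.step()
```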



L. Carnevale, D. Balouek, S. Sebbio, M. Parashar, M. Villari. “Private Distributed Resource Management Data: Predicting CPU Utilization with Bi-LSTM and Federated Learning,” In 2025 IEEE 25th International Symposium on Cluster, Cloud and Internet Computing (CCGrid), IEEE, pp. 266-275. 2025.
DOI: 10.1109/CCGRID64434.2025.00048

ABSTRACT

Artificial intelligence is increasingly pervasive in many sectors. In IT operations in particular, there is much to gain from extracting useful information from the large resource-usage datasets available (e.g., CPU, memory, disk, energy). The challenge grows when multiple cloud tiers are considered. Artificial intelligence is a key technology when the main goal is to improve microservice migration through offload management. However, it struggles in distributed contexts where data transfer must be reduced and data privacy must be increased. There is therefore a need for novel solutions that predict resource utilization (e.g., CPU) while maintaining data privacy and reducing data communication. In this paper, we present a Bi-LSTM model with attention trained via Federated Learning on historical CPU data. The dataset comes from multiple Microsoft Azure traces. The results are compared with the literature and show good generalization and prediction performance for metrics collected from multiple virtual machines. The model is evaluated in terms of R-squared, MSE, RMSE, and MAE.
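
A minimal federated-averaging sketch in PyTorch appears below: each client fits a local copy of a (simplified, attention-free) Bi-LSTM on its private CPU series, and only model weights travel to the server. The architecture details, round counts, and data are illustrative assumptions, not the paper's implementation.

```python
import copy
import torch
import torch.nn as nn

class CPUForecaster(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):                            # x: (batch, time, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(-1)

def local_update(global_model, series, steps=20):
    model = copy.deepcopy(global_model)              # train on private data only
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        x, y = series[:, :-1, None], series[:, -1]   # next-step prediction
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    return model.state_dict()

def fed_avg(states):
    # Server averages client weights; raw traces never leave the clients.
    return {k: torch.stack([s[k] for s in states]).mean(0) for k in states[0]}

global_model = CPUForecaster()
clients = [torch.rand(32, 25) for _ in range(3)]     # dummy per-VM CPU traces
for rnd in range(5):                                 # communication rounds
    states = [local_update(global_model, data) for data in clients]
    global_model.load_state_dict(fed_avg(states))
```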



K.R. Carney, R. Sondaz, W. Sturgess, K. Sakthivel, J. Kim, V. Swaminathan, T.C. Bidone. “Stabilization of adhesions controls F-actin architecture in mechanotransduction,” In Communications Materials, Vol. 6, No. 288, Nature, 2025.
DOI: 10.1038/s43246-025-01006-8

ABSTRACT

A cell’s ability to sense and respond to the mechanical properties of the extracellular matrix (ECM) is essential for maintaining tissue homeostasis, and its disruption contributes to diseases such as fibrosis, cardiovascular disorders, and cancer. Effective mechanical coupling between the plasma membrane, the underlying filamentous actin (F-actin) cytoskeleton, and integrin-based adhesion complexes (IACs) is required to link ECM mechanics to cell morphology, yet the underlying mechanisms remain incompletely understood. Here, we combine computational modeling and high-resolution imaging to show that integrin–ECM bonds determine F-actin cytoskeleton organization. On soft substrates, short-lived IAC bonds allow rapid actin retrograde flow and dense branching, restricting protrusion and limiting cell spreading. In contrast, stiff substrates or Mn²⁺-mediated integrin activation stabilize adhesions, promote filament alignment, and drive membrane protrusion for cell spreading. These cytoskeletal transitions arise from feedback between adhesion strength and the spatial positioning of the F-actin barbed ends relative to the leading-edge membrane. This positioning determines whether filaments polymerize into linear bundles or branch into dendritic networks, each generating distinct protrusive forces that regulate cell spreading. Collectively, our findings establish integrin–ECM bond stability as a key regulator of F-actin cytoskeleton organization and cell morphology.



D. Cavinatto, T. Webb, S. Joshi, D. Christensen, A. Payne. “MR-Compatible Ultrasound Through Transmission for Focused Ultrasound Thermal Therapy,” In 2025 IEEE International Ultrasonics Symposium (IUS), IEEE, 2025.
DOI: 10.1109/IUS62464.2025.11201505

ABSTRACT

Focused ultrasound (FUS) therapies for cancer provide non-invasive precise thermal treatment to tissues. Current FUS treatment systems rely on magnetic resonance imaging (MRI), B-mode ultrasound imaging, or harmonic motion imaging for guidance. MRI has the advantage of quantitatively measuring temperature, but its high cost and limited availability limit the ultimate impact of these therapies. An MR-compatible Ultrasound Through Transmission (UTT) system that can quantitatively assess tissue changes caused by temperature is being investigated as a replacement for the current standard of guidance. Two FUS 256-element transducers are mounted in a container with a sample placed at their coinciding geometric center. A FUS transmitter system is connected to the transmitter transducer, while a FUS research system samples the signal at the receiver transducer. A UTT protocol of 256 sequential single-element transmissions with reception on 256 elements is performed on three different samples: homogeneous gelatin phantoms, gelatin phantoms with attenuative inclusions, and porcine meat samples before and after thermal ablation. Hybrid Angular Spectrum (HAS) acoustic simulations are performed on the unheated gelatin samples segmented into regions with measured properties. Simulations are also done on the heated porcine sample segmented into regions with ablative temperatures based on MR temperature imaging. Measured UTT datasets are compared to HAS-predicted transmission data. Complex regression shows good agreement between the UTT-measured and HAS-predicted datasets. On homogeneous samples, the average complex correlation coefficient across all receiver elements is 0.8897. In 1 cm and 2 cm diameter inclusion samples, the correlation is 0.7994 and 0.6934, respectively. On the porcine meat sample, the pre-ablation average correlation is 0.6636, with a post-ablation average correlation of 0.6595. The data indicate that ablation of the tissue causes measurable changes in the received signal, but our simplified model is inadequate to capture the true tissue changes. Future work is investigating this discrepancy with more advanced modeling. The ultimate goal is to use physics-informed neural networks to predict tissue changes from the UTT received signal.
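
For readers unfamiliar with the metric, a small Python/NumPy sketch of a complex correlation coefficient between measured and predicted receive data follows; we assume the standard complex Pearson form here, which may differ in detail from the paper's regression-based computation.

```python
import numpy as np

def complex_corr(measured, predicted):
    """Magnitude of the complex Pearson correlation between two signals."""
    m = measured - measured.mean()
    p = predicted - predicted.mean()
    return np.abs(np.vdot(m, p)) / (np.linalg.norm(m) * np.linalg.norm(p))

rng = np.random.default_rng(0)
truth = rng.standard_normal(256) + 1j * rng.standard_normal(256)
noisy = truth + 0.3 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))
print(complex_corr(truth, noisy))   # close to 1 for well-matched signals
```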



B. Charoenwong, R.M. Kirby, J. Reiter. “Tradeoffs in automated financial regulation of decentralized finance due to limits on mutable Turing machines,” In Scientific Reports, Vol. 15, No. 3016, 2025.
DOI: https://doi.org/10.1038/s41598-024-84612-9

ABSTRACT

We examine which decentralized finance architectures enable meaningful regulation by combining financial and computational theory. We show via deduction that a decentralized and permissionless Turing-complete system cannot provably comply with regulations concerning anti-money laundering, know-your-client obligations, some securities restrictions and forms of exchange control. Any system that claims to follow regulations must choose either a form of permission or a less-than-Turing-complete update facility. Compliant decentralized systems can be constructed only by compromising on the richness of permissible changes. Regulatory authorities must accept new tradeoffs that limit their enforcement powers if they want to approve permissionless platforms formally. Our analysis demonstrates that the fundamental constraints of computation theory have direct implications for financial regulation. By mapping regulatory requirements onto computational models, we characterize which types of automated compliance are achievable and which are provably impossible. This framework allows us to move beyond traditional debates about regulatory effectiveness to establish concrete boundaries for automated enforcement.



P. Chen, S. Jernigan, K. Zhao, G.V. PJ, M. Saha, C. Kim, A. Arzani, G. Buckner, J. Hu. “Image-guided embolization using Ta@Ca-Alg microspheres with optimized mechanical performance,” In Biomaterials Science, Vol. 13, pp. 4786-4802. 2025.

ABSTRACT

Transcatheter arterial embolization (TAE) is a minimally invasive technique used to treat hypervascular tumors, hemorrhage, and vascular abnormalities. Though microspheres (MSs) have achieved widespread clinical use as embolic agents, they often lack imaging opacity, optimal morphology, and suitable mechanical properties, which can lead to unpredictable trajectories, non-target delivery, and suboptimal embolization. This study developed tantalum-loaded calcium alginate (Ta@Ca-Alg) MSs with intrinsic radiopacity, tunable density, and mechanical properties. Ta@Ca-Alg MSs were synthesized using a gas-shearing method and analyzed for size, morphology, swelling behavior, density, radiopacity, and mechanical properties. The results demonstrated that Ta@Ca-Alg MSs maintained a narrow size distribution, with increasing Ta concentration enhancing radiopacity to levels comparable with the clinical contrast agent OMNIPAQUE 350. Density and Young's modulus corresponding to different Ta concentrations were also investigated. Phantom model testing validated effective vessel occlusion and controlled penetration. In vitro hemocompatibility, sterility, and cytotoxicity studies confirmed excellent biocompatibility. These findings suggest that Ta@Ca-Alg MSs are a promising radiopaque embolic agent with optimized radiopacity, density, and mechanical properties, offering excellent potential for TAE procedures.



J.H. Choi, M. Elhadidy, M. Kim, W. Park, J.C. Park, B. D. Kwun, S. Joo, S. H. Lee, S. U. Lee, J. S. Bang, M. T. Lawton, A. Arzani, J. S. Ahn. “Flow alteration strategies for complex basilar apex aneurysms: multicenter experience, systematic review, and insights from computational fluid dynamics,” Subtitled “Research Square Preprint,” 2025.

ABSTRACT

Complex basilar apex aneurysms (CBAAs) present a significant challenge due to their unfavorable natural history and difficulty with conventional treatments. This study aimed to provide insights into flow alteration strategies by combining a systematic review using PRISMA methodology with a multicenter experience from South Korea. We analyzed 57 cases, finding that flow preservation with aneurysm obliteration was performed in 12.7% of cases, while mild, moderate, and maximum flow reduction were applied in 77.2%, 7.0%, and 3.5% of cases, respectively. Outcomes showed that 75.8% of patients with available imaging achieved satisfactory aneurysm obliteration. A good clinical outcome (mRS 0–2) was observed in 49.1% of cases. However, poor outcomes (mRS 4–6) were reported in 31.6%, with a mortality rate of 17.5%. Beyond simply reducing intra-aneurysmal flow, computational fluid dynamics (CFD) simulations revealed that alterations in flow balance and direction significantly influenced hemodynamic stress. Given the severe prognosis of CBAAs, flow alteration strategies can serve as viable alternatives when conventional treatments are not feasible. Furthermore, CFD simulations might hold promise in identifying optimal strategies that can maximize aneurysm control while minimizing procedural risks.



R.E. Coffman, R. Kolasangiani, T.C. Bidone. “Mn2+ accelerates ligand binding-site activation of αIIbβ3 integrin: insight from all-atom simulation,” In Biophysical Journal, Vol. 124, No. 17, pp. 2854-2864. 2025.

ABSTRACT

The activation of integrins by Mn2+ is a crucial area of research, yet the underlying mechanisms remain poorly understood. Previous studies have shown that substituting Mg2+ with Mn2+ at the metal ion-dependent adhesion site (MIDAS) enhances the affinities of high-affinity open and low-affinity closed integrins. However, the molecular effect of Mn2+ and how it compares to physiological activation mediated by Mg2+/Ca2+ remain unclear. This is partly due to the lack of experimental techniques capable of detecting these processes dynamically. In this study, we used equilibrium molecular dynamics simulations to examine the effects of Mn2+ on the binding site of platelet integrin αIIbβ3. Our findings show that Mn2+ accelerates conformational changes related to activation. Specifically, Mn2+ promotes an earlier displacement of M335 in the β6-α7 loop away from the ADMIDAS site (adjacent to the MIDAS site) and a rapid downward movement of the α7 helix in the βI domain. Additionally, Mn2+ leads to faster stabilization of the α1 helix, strengthening the interactions between the αIIbβ3 ligand-binding site and the RGD motif. These results suggest that Mn2+ accelerates high-affinity rearrangements at the ligand-binding site, resembling those seen in physiological activation, but occurring more rapidly than with Mg2+/Ca2+. Overall, our data suggest that Mn2+-induced affinity modulation proceeds through similar early activation steps, even without full integrin extension.



Z. Cutler, L. Harrison, C. Nobre, A. Lex. “Crowdsourced Think-Aloud Studies,” Subtitled “OSF Preprints,” 2025.

ABSTRACT

The think-aloud (TA) protocol is a useful method for evaluating user interfaces, including data visualizations. However, TA studies are time-consuming to conduct and hence often have a small number of participants. Crowdsourcing TA studies would help alleviate these problems, but the technical overhead and the unknown quality of results have restricted TA to synchronous studies.

To address this gap, we introduce CrowdAloud, a system for creating and analyzing asynchronous, crowdsourced TA studies. CrowdAloud captures audio and provenance (log) data as participants interact with a stimulus. Participant audio is automatically transcribed and visualized together with event data and a full recreation of the state of the stimulus as seen by participants.

To gauge the value of crowdsourced TA studies, we conducted two experiments: one to compare lab-based and crowdsourced TA studies, and one to compare crowdsourced TA studies with crowdsourced text prompts. Our results suggest that crowdsourcing is a viable approach for conducting TA studies at scale.



H. Dai, S. Joshi. “Refining Skewed Perceptions in Vision-Language Contrastive Models through Visual Representations,” Subtitled “arXiv:2405.14030,” 2025.

ABSTRACT

Large vision-language contrastive models (VLCMs), such as CLIP, have become foundational, demonstrating remarkable success across a variety of downstream tasks. Despite their advantages, these models, akin to other foundational systems, inherit biases from the disproportionate distribution of real-world data, leading to misconceptions about the actual environment. Prevalent datasets like ImageNet are often riddled with non-causal, spurious correlations that can diminish VLCM performance in scenarios where these contextual elements are absent. This study presents an investigation into how a simple linear probe can effectively distill task-specific core features from CLIP’s embedding for downstream applications. Our analysis reveals that the CLIP text representations are often tainted by spurious correlations inherited from the biased pre-training dataset. Empirical evidence suggests that relying on visual representations from CLIP, as opposed to text embeddings, is more effective in refining the skewed perceptions in VLCMs, emphasizing the superior utility of visual representations in overcoming embedded biases.
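
A linear probe in this sense is simply a linear classifier fit on frozen embeddings. The Python/scikit-learn sketch below uses random arrays as stand-ins for CLIP image features; the feature dimension, labels, and solver settings are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
train_feats = rng.standard_normal((1000, 512))   # stand-in CLIP image embeddings
train_labels = rng.integers(0, 2, 1000)          # stand-in task labels
test_feats = rng.standard_normal((200, 512))

probe = LogisticRegression(max_iter=1000)        # the "simple linear probe"
probe.fit(train_feats, train_labels)             # encoder stays frozen
preds = probe.predict(test_feats)                # task-specific predictions
```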



T. Dixon, A. Gorodetsky, J. Jakeman, A. Narayan, Y. Xu. “Optimally balancing exploration and exploitation to automate multi-fidelity statistical estimation,” Subtitled “arXiv:2505.09828v1,” 2025.

ABSTRACT

Multi-fidelity methods that use an ensemble of models to compute a Monte Carlo estimator of the expectation of a high-fidelity model can significantly reduce computational costs compared to single-model approaches. These methods use oracle statistics, specifically the covariance between models, to optimally allocate samples to each model in the ensemble. However, in practice, the oracle statistics are estimated using additional model evaluations, whose computational cost and induced error are typically ignored. To address this issue, this paper proposes an adaptive algorithm to optimally balance the resources between oracle statistics estimation and final multi-fidelity estimator construction, leveraging ideas from multilevel best linear unbiased estimators in Schaden and Ullmann (2020) and a bandit-learning procedure in Xu et al. (2022). Under mild assumptions, we demonstrate that the multi-fidelity estimator produced by the proposed algorithm exhibits mean-squared error commensurate with that of the best linear unbiased estimator under the optimal allocation computed with oracle statistics. Our theoretical findings are supported by detailed numerical experiments, including a parametric elliptic PDE and an ice-sheet mass-change modeling problem. 
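
A two-model control-variate sketch conveys the basic mechanism in Python/NumPy: the model covariance (the "oracle statistics") is itself estimated from a pilot set of paired evaluations, which is exactly the cost the proposed algorithm budgets against the final estimator. The models and sample sizes below are illustrative assumptions, not the paper's multilevel estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
f_hi = lambda x: np.sin(x) + 0.05 * x**2          # expensive high-fidelity model
f_lo = lambda x: np.sin(x)                        # cheap low-fidelity surrogate

# Exploration phase: paired evaluations to estimate the oracle statistics.
xp = rng.uniform(-1, 1, 50)
hp, lp = f_hi(xp), f_lo(xp)
alpha = np.cov(hp, lp)[0, 1] / np.var(lp, ddof=1) # control-variate coefficient

# Exploitation phase: few high-fidelity, many low-fidelity samples.
xh = rng.uniform(-1, 1, 100)
xl = rng.uniform(-1, 1, 10000)
estimate = (np.mean(f_hi(xh))
            + alpha * (np.mean(f_lo(xl)) - np.mean(f_lo(xh))))
print(estimate)                                   # estimate of E[f_hi]
```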



M. Floca, K. O'Laughlin, P. Ramonetti Vega, A. Gupta, I. Altintas, M. Parashar. “Toward an Education Hub Linking Research Data and Compute to Learning Workflows in the National Data Platform,” In PEARC '25: Practice and Experience in Advanced Research Computing 2025, ACM, 2025.



M. Garcia, J.K. Holmen, M. Berzins. “Scaling Uintah on the Aurora Exascale System up to 122,880 Intel Ponte Vecchio Xe Stacks,” In Practice and Experience in Advanced Research Computing 2025: The Power of Collaboration, No. 4, ACM, 2025.
ISBN: 9798400713989
DOI: 10.1145/3708035.3736001

ABSTRACT

The challenge of scaling application codes based on the Asynchronous Many-Task (AMT) Uintah framework on the Department of Energy (DOE) Aurora exascale system is addressed in this work by considering a challenging Reverse Monte Carlo Ray Tracing radiation benchmark calculation. This benchmark involves potentially global all-to-all communication and uses adaptive mesh refinement and ray tracing to achieve scalability. It has been used as part of previous scalability studies on a number of pre-exascale systems and on the DOE Frontier exascale system. This paper describes steps taken to enable this benchmark to run successfully on up to 10,240 nodes and 122,880 Intel® Ponte Vecchio Xe stacks on the DOE Aurora exascale system. This scalability was achieved through a limited number of experiments on Aurora, given machine loads and the uniqueness of the system. These experiments constitute valuable lessons learned for achieving scalability at this level. The resulting scalability runs, while few in number, demonstrate relatively good strong-scaling characteristics. A detailed analysis of these results provides important indications about the path to scalability on Aurora for future work. Overall, these results continue the remarkable ability of this AMT approach to produce scalable solutions for challenging problems at extreme scale on heterogeneous architectures.



T. Gautam, R.M. Kirby, J. Hochhalter, S. Zhe. “SIFBench: An Extensive Benchmark for Fatigue Analysis,” Subtitled “arXiv:2506.01173,” 2025.

ABSTRACT

Fatigue-induced crack growth is a leading cause of structural failure across critical industries such as aerospace, civil engineering, automotive, and energy. Accurate prediction of stress intensity factors (SIFs) -- the key parameters governing crack propagation in linear elastic fracture mechanics -- is essential for assessing fatigue life and ensuring structural integrity. While machine learning (ML) has shown great promise in SIF prediction, its advancement has been severely limited by the lack of rich, transparent, well-organized, and high-quality datasets.
To address this gap, we introduce SIFBench, an open-source, large-scale benchmark database designed to support ML-based SIF prediction. SIFBench contains over 5 million different crack and component geometries derived from high-fidelity finite element simulations across 37 distinct scenarios, and provides a unified Python interface for seamless data access and customization. We report baseline results using a range of popular ML models -- including random forests, support vector machines, feedforward neural networks, and Fourier neural operators -- alongside comprehensive evaluation metrics and template code for model training, validation, and assessment. By offering a standardized and scalable resource, SIFBench substantially lowers the entry barrier and fosters the development and application of ML methods in damage tolerance design and predictive maintenance. 
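
As an illustration of the kind of baseline reported, the Python sketch below fits a random forest mapping geometry/load parameters to an SIF value. The arrays are synthetic placeholders and this is not SIFBench's actual interface, which the paper describes but we do not reproduce here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(size=(5000, 6))      # stand-in crack/geometry/load parameters
y = X[:, 0] * np.sqrt(X[:, 1]) + 0.1 * rng.standard_normal(5000)  # stand-in SIF

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(mean_absolute_error(y_te, model.predict(X_te)))   # baseline test error
```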



E. Ghelichkhan, T. Tasdizen. “A Comparison of Object Detection and Phrase Grounding Models in Chest X-ray Abnormality Localization using Eye-tracking Data,” Subtitled “arXiv:2503.01037,” 2025.

ABSTRACT

Chest diseases rank among the most prevalent and dangerous global health issues. Object detection and phrase grounding deep learning models interpret complex radiology data to assist healthcare professionals in diagnosis. Object detection locates abnormalities for classes, while phrase grounding locates abnormalities for textual descriptions. This paper investigates how text enhances abnormality localization in chest X-rays by comparing the performance and explainability of these two tasks. To establish an explainability benchmark, we propose an automatic pipeline to generate image regions for report sentences using radiologists’ eye-tracking data. The better performance (mIoU 36% vs. 20%) and explainability (containment ratio 48% vs. 26%) of the phrase grounding model indicate the effectiveness of text in enhancing chest X-ray abnormality localization.
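
The mIoU figures above reduce to per-region intersection-over-union computations; a small, self-contained Python sketch with illustrative axis-aligned boxes follows.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))   # overlap width
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))   # overlap height
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

# Predicted abnormality box vs. a reference region (e.g., eye-tracking derived).
print(iou((10, 10, 50, 50), (30, 30, 70, 70)))       # ~0.143
```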



N. Gorski, X. Liang, H. Guo, L. Yan, B. Wang. “A General Framework for Augmenting Lossy Compressors with Topological Guarantees,” Subtitled “arXiv:2502.14022,” 2025.

ABSTRACT

Topological descriptors such as contour trees are widely utilized in scientific data analysis and visualization, with applications from materials science to climate simulations. It is desirable to preserve topological descriptors when data compression is part of the scientific workflow for these applications. However, classic error-bounded lossy compressors for volumetric data do not guarantee the preservation of topological descriptors, despite imposing strict pointwise error bounds. In this work, we introduce a general framework for augmenting any lossy compressor to preserve the topology of the data during compression. Specifically, our framework quantifies the adjustments (to the decompressed data) needed to preserve the contour tree and then employs a custom variable-precision encoding scheme to store these adjustments. We demonstrate the utility of our framework in augmenting classic compressors (such as SZ3, TTHRESH, and ZFP) and deep learning-based compressors (such as Neurcomp) with topological guarantees.
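
A 1-D toy in Python/NumPy illustrates the core idea: an error-bounded quantizer can still destroy critical points, and targeted per-location adjustments restore them. The real framework preserves contour trees of volumetric data and stores the adjustments with a variable-precision encoding; this sketch only preserves 1-D local maxima, as a deliberately simplified analogue.

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.cumsum(rng.standard_normal(200)) * 0.05      # toy 1-D scalar field
eps = 0.1
decomp = np.round(f / (2 * eps)) * (2 * eps)        # quantizer: |f - decomp| <= eps

def local_maxima(a):
    """Indices of strict interior local maxima."""
    return set(np.flatnonzero((a[1:-1] > a[:-2]) & (a[1:-1] > a[2:])) + 1)

# Quantify adjustments: wherever compression changed the set of maxima,
# revert a small neighborhood to exact values and record those locations.
adjusted = set()
while local_maxima(decomp) != local_maxima(f):
    bad = local_maxima(decomp) ^ local_maxima(f)
    idx = sorted({j for i in bad for j in (i - 1, i, i + 1)})
    decomp[idx] = f[idx]
    adjusted.update(idx)

assert local_maxima(decomp) == local_maxima(f)      # topology restored
print(f"adjusted {len(adjusted)} of {f.size} values")
```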