SCI Publications
2026
A. Liew, M. Strocchi, C. Rodero, K.K. Gillette, et al.
Leadless right ventricular pacing, In Advancing Our Understanding of the Cardiac Conduction System to Prevent Arrhythmias, Frontiers, 2026.
J. Maheshwari, W. Wu, C.N. Zelonis, S.A. Maas, K. Sunderland, Y. Barak-Corren, S. Ching, P. Sabin, A. Lasso, M. J. Gillespie, J. A. Weiss, M. A. Jolley.
Effect of Right Ventricular Outflow Tract Material Properties on Simulated Transcatheter Pulmonary Valve Placement, Subtitled arXiv:2601.05410v1, 2026.
Finite element (FE) simulations emulating transcatheter pulmonary valve (TPV) system deployment in patient-specific right ventricular outflow tracts (RVOT) assume material properties for the RVOT and adjacent tissues. Sensitivity of the deployment to variation in RVOT material properties is unknown. Moreover, the effect of transannular patch stiffness and location on simulated TPV deployment has not been explored. A sensitivity analysis on the material properties of a patient-specific RVOT during TPV deployment, modeled as an uncoupled HGO material, was conducted using FEBioUncertainSCI. Further, the effects of a transannular patch during TPV deployment were analyzed by considering two patch locations and four patch stiffnesses. Visualization and quantification of results were performed using custom metrics implemented in SlicerHeart and FEBio. Sensitivity analysis revealed that the shear modulus of the ground matrix (c), fiber modulus (k1), and fiber mean orientation angle (gamma) had the greatest effect on 95th %ile stress, whereas only c had the greatest effect on 95th %ile Lagrangian strain. First-order sensitivity indices contributed the greatest to the total-order sensitivity indices. Simulations using a transannular patch revealed that peak stress and strain were dependent on patch location. As stiffness of the patch increased, greater stress was observed at the interface connecting the patch to the RVOT, and stress in the patch itself increased while strain decreased. The total volume enclosed by the TPV device remained unchanged across all simulated patch cases. This study highlights that while uncertainties in tissue material properties and patch locations may influence functional outcomes, FE simulations provide a reliable framework for evaluating these outcomes in TPVR.
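For orientation on the parameters varied in the sensitivity analysis (c, k1, and the fiber angle gamma), a commonly used textbook form of the uncoupled HGO strain-energy density is sketched below; the study's exact formulation and parameterization may differ.

```latex
% Illustrative (uncoupled) HGO strain-energy density; the study's exact form may differ.
% c: ground-matrix shear modulus, k1: fiber modulus, k2: dimensionless exponent,
% kappa: fiber dispersion, U(J): volumetric term of the uncoupled formulation.
% The mean fiber angle gamma enters through the fiber directions defining \bar{I}_4 and \bar{I}_6.
\Psi = \frac{c}{2}\left(\bar{I}_1 - 3\right)
     + \frac{k_1}{2 k_2} \sum_{\alpha = 4,6}
       \left\{ \exp\!\left[k_2 \,\langle \bar{E}_\alpha \rangle^2\right] - 1 \right\}
     + U(J),
\qquad
\bar{E}_\alpha = \kappa\left(\bar{I}_1 - 3\right) + \left(1 - 3\kappa\right)\left(\bar{I}_\alpha - 1\right)
```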
H. Mynampaty, N. Josephine, K.E. Isaacs, A.M. McNutt.
Linting Style and Substance in READMEs, Subtitled arXiv:2603.00331, 2026.
READMEs shape first impressions of software projects, yet what constitutes a good README varies across audiences and contexts. Research software needs reproducibility details, while open-source libraries might prioritize quick-start guides. Through a design probe, LintMe, we explore how linting can be used to improve READMEs given these diverse contexts, addressing style and content issues while preserving authorial agency. Users create context-specific checks using a lightweight DSL that uses a novel combination of programmatic operations (e.g., for broken links) with LLM-based content evaluation (e.g., for detecting jargon), yielding checks that would be challenging for prior linters. Through a user study (N=11), comparison with naive LLM usage, and an extensibility case study, we find that our design is approachable, flexible, and well matched to the needs of this domain. This work opens the door for linting more complex documentation and other culturally mediated text-based documents.
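LintMe's actual DSL is not reproduced in the abstract; the sketch below is a hypothetical Python illustration of the general idea of pairing a programmatic check (broken links) with an LLM-based content check (jargon detection). The helper `llm_judge` is a placeholder, not LintMe's API.

```python
# Hypothetical sketch of a README check combining a programmatic rule (broken links)
# with an LLM-based content rule (jargon detection). This is NOT LintMe's actual DSL;
# `llm_judge` is a stand-in for whatever model call a real linter would make.
import re
import urllib.request


def find_broken_links(readme_text):
    """Programmatic check: return Markdown link targets that fail to respond with HTTP < 400."""
    broken = []
    for url in re.findall(r"\[[^\]]*\]\((https?://[^)\s]+)\)", readme_text):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status >= 400:
                    broken.append(url)
        except Exception:
            broken.append(url)
    return broken


def llm_judge(prompt):
    """Placeholder for an LLM call; a real implementation would query a model here."""
    return "no findings"


def find_jargon(readme_text):
    """LLM-based check: ask a model to flag unexplained domain jargon in the introduction."""
    intro = readme_text.split("\n\n")[0]
    return llm_judge(f"List any unexplained jargon in this README introduction:\n{intro}")


if __name__ == "__main__":
    sample = "# MyTool\n\nFast HPC-aware ETL for WSIs. [Docs](https://example.com/docs)\n"
    print("Broken links:", find_broken_links(sample))
    print("Jargon report:", find_jargon(sample))
```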
A. Panta, G. Scorzelli, A.A. Gooch, W. Sun, K.S. Shanks, S. Sarker, D. Bougie, K. Soloway, R. Verberg, T. Berman, G. Tarcea, J. Allison, M. Taufer, V. Pascucci.
Large Data Acquisition and Analytics at Synchrotron Radiation Facilities, Subtitled arXiv:2602.05837v1, 2026.
Synchrotron facilities like the Cornell High Energy Synchrotron Source (CHESS) generate massive data volumes from complex beamline experiments, but face challenges such as limited access time, the need for on-site experiment monitoring, and managing terabytes of data per user group. We present the design, deployment, and evaluation of a framework that addresses CHESS's data acquisition and management issues. Deployed on a secure CHESS server, our system provides real time, web-based tools for remote experiment monitoring and data quality assessment, improving operational efficiency. Implemented across three beamlines (ID3A, ID3B, ID4B), the framework managed 50-100 TB of data and over 10 million files in late 2024. Testing with 43 research groups and 86 dashboards showed reduced overhead, improved accessibility, and streamlined data workflows. Our paper highlights the development, deployment, and evaluation of our framework and its transformative impact on synchrotron data acquisition.
D.J. Pope, J. Elowitt, B. Zhang, M. Parashar, A. Clark.
ChemNetworks: New Capabilities for High-Throughput, Real-Time Chemical Graph Construction and Analysis, Subtitled chemrxiv.15001158/v1, 2026.
A new release of ChemNetworks (Journal of Computational Chemistry, 2013, 35, 495-505) is described. Updates have made the software performant in supercomputing environments and capable of real-time graph construction and analysis alongside running simulations, and have incorporated graph theory libraries for diverse and customizable analysis workflows. Here, we describe newly implemented algorithms (and examples) in ChemNetworks that include a recursive Z-matrix algorithm for structure identification and graph construction, the incorporation of the DataSpaces data staging framework as one of its optional I/O engines, and user-contributed graph theory analyses that leverage the igraph library. The new release of ChemNetworks better enables ongoing development through community contributions.
M.D. Rahman, D. Lange, G.J. Quadri, P. Rosen.
Designing Annotations in Visualization: Considerations from Visualization Practitioners and Educators, In Computer Graphics Forum, Vol. 45, No. 3, Eurographics, 2026.
Annotation is a central mechanism in visualization design that enables people to communicate key insights. Prior research has provided essential accounts of the visual forms annotations take, but less attention has been paid to the decisions behind them. This paper examines how annotations are designed in practice and how educators reflect on those practices. We conducted a two-phase qualitative study: interviews with ten practitioners from diverse backgrounds revealed the heuristics they draw on when creating annotations, and interviews with seven visualization educators offered complementary perspectives situated within broader concerns of clarity, guidance, and viewer agency. These studies provide a systematic account of annotation design knowledge in professional settings, highlighting the considerations, trade-offs, and contextual judgments that shape the use of annotations. By making this tacit expertise explicit, our work complements prior form-focused studies, strengthens understanding of annotation as a design activity, and points to opportunities for improved tool and guideline support.
W. Regli, R. Rajaraman, D. Lopresti, D. Jensen, M. Maher, M. Parashar, M. Singh, H. Yanco.
The Imperative for Grand Challenges in Computing, Subtitled arXiv:2601.00700, 2026.
Computing is an indispensable component of nearly all technologies and is ubiquitous for vast segments of society. It is also essential to discoveries and innovations in most disciplines. However, while past grand challenges in science have involved computing as one of the tools to address the challenge, these challenges have not been principally about computing. Why has the computing community not yet produced challenges at the scale of grandeur that we see in disciplines such as physics, astronomy, or engineering? How might we go about identifying similarly grand challenges? What are the grand challenges of computing that transcend our discipline's traditional boundaries and have the potential to dramatically improve our understanding of the world and positively shape the future of our society?
There is a significant benefit in us, as a field, taking a more intentional approach to "grand challenges." We are seeking challenge problems that are sufficiently compelling as to both ignite the imagination of computer scientists and draw researchers from other disciplines to computational challenges.
This paper emphasizes the importance, now more than ever, of defining and pursuing grand challenges in computing as a field, and of being intentional about translation and realizing its impacts on science and society. Building on lessons from prior grand challenges, the paper explores the nature of a grand challenge today, emphasizing both scale and impact, and how the community may tackle such a grand challenge given a rapidly changing innovation ecosystem in computing. The paper concludes with a call to action for our community to come together to define grand challenges in computing for the next decade and beyond.
R. Basu Roy, D. Tiwari.
LowCarb: Carbon-Aware Scheduling of Serverless Functions, In 2026 IEEE International Symposium on High Performance Computer Architecture (HPCA), pp. 1--16. 2026.
DOI: 10.1109/HPCA68181.2026.11408586
Serverless computing is seeing rapid adoption in cloud computing platforms. Prior works have extensively focused on improving the performance of serverless computing platforms by proactively “keeping alive” functions in memory to lower function execution latency, but the potential environmental sustainability aspects of such performance-enhancing strategies remain underexplored. This work highlights that serverless computing introduces unique carbon footprint sources and trade-offs between performance and sustainability. We present LowCarb, a novel reinforcement learning-based solution that co-optimizes serverless function performance and carbon footprint. LowCarb effectively quantifies and resolves the inherent conflict between performance and sustainability to achieve results within 15% of optimality.
A. Sahistan, S. Zellmann, H. Miao, N. Morrical, I. Wald, V. Pascucci.
Materializing Inter-Channel Relationships with Multi-Density Woodcock Tracking, In IEEE Transactions on Visualization and Computer Graphics, 2026.
DOI: 10.1109/TVCG.2026.3653310
Volume rendering techniques for scientific visualization have recently shifted toward Monte Carlo (MC) methods for their flexibility and robustness, but their use in multi-channel visualization remains underexplored. Traditional multi-channel volume rendering often relies on arbitrary, non-physically-based color blending functions that hinder interpretation. We introduce multi-density Woodcock tracking, a simple extension of Woodcock tracking that leverages an MC method to produce high-fidelity, physically grounded multi-channel renderings without arbitrary blending. By generalizing Woodcock's distance tracking, we provide a unified blending modality that also integrates blending functions from prior works. We further implement effects that enhance boundary and feature recognition. By accumulating frames in real time, our approach delivers high-quality visualizations with perceptual benefits, demonstrated on diverse datasets.
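The paper's multi-density extension is not reproduced here; the minimal sketch below only illustrates the general idea of Woodcock (delta) tracking generalized to several density channels, where the majorant bounds the summed channel extinctions and a collision is attributed to one channel in proportion to its local density. The channel fields, coefficients, and 1D setting are invented for illustration.

```python
# Minimal, illustrative sketch of Woodcock (delta) tracking over multiple density channels.
# Not the paper's algorithm: the channel fields, extinction scales, and 1D setting are made up
# purely to show how a collision can be attributed to one channel in proportion to its density.
import math
import random


def channel_densities(x):
    """Two made-up, bounded channel densities along a 1D ray parameter x in [0, 1]."""
    return [0.8 * math.exp(-((x - 0.3) ** 2) / 0.02),   # channel 0
            0.5 * math.exp(-((x - 0.7) ** 2) / 0.05)]   # channel 1


def multi_density_woodcock(x_max=1.0, sigma_majorant=1.5, rng=random):
    """Track until a real collision occurs or the ray leaves [0, x_max].

    Returns (collision_position, channel_index), or (None, None) if no collision.
    sigma_majorant must bound the SUM of channel extinctions everywhere.
    """
    x = 0.0
    while True:
        # Sample a tentative free-flight distance against the constant majorant.
        x += -math.log(1.0 - rng.random()) / sigma_majorant
        if x >= x_max:
            return None, None
        sigmas = channel_densities(x)
        total = sum(sigmas)
        # Accept a real collision with probability total / majorant; otherwise it is a null collision.
        if rng.random() < total / sigma_majorant:
            # Attribute the collision to a channel in proportion to its local density.
            u, acc = rng.random() * total, 0.0
            for i, s in enumerate(sigmas):
                acc += s
                if u <= acc:
                    return x, i


if __name__ == "__main__":
    random.seed(0)
    print([multi_density_woodcock() for _ in range(5)])
```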
X. Tang, X. Yue, H. Mane, D. Li, Q. Nguyen, T. Tasdizen.
How to Build Robust, Scalable Models for GSV-Based Indicators in Neighborhood Research, Subtitled arXiv:2601.06443v1, 2026.
A substantial body of health research demonstrates a strong link between neighborhood environments and health outcomes. Recently, there has been increasing interest in leveraging advances in computer vision to enable large-scale, systematic characterization of neighborhood built environments. However, the generalizability of vision models across fundamentally different domains remains uncertain, for example, transferring knowledge from ImageNet to the distinct visual characteristics of Google Street View (GSV) imagery. In applied fields such as social health research, several critical questions arise: which models are most appropriate, whether to adopt unsupervised training strategies, what training scale is feasible under computational constraints, and how much such strategies benefit downstream performance. These decisions are often costly and require specialized expertise.
In this paper, we answer these questions through empirical analysis and provide practical insights into how to select and adapt foundation models for datasets with limited size and labels, while leveraging larger, unlabeled datasets through unsupervised training. Our study includes comprehensive quantitative and visual analyses comparing model performance before and after unsupervised adaptation.
J. Y. Xing, B. Xia, D. Renner, C. D. Cantwell, D. Moxey, R. M. Kirby, S. J. Sherwin.
Architecture-aware h-to-p optimisation: spectral/hp element operators for mixed-element meshes, Subtitled arXiv:2604.04644v1, 2026.
We extend earlier international efforts to optimise hexahedral-based spectral element methods on GPUs and vectorised CPUs to mixed-element meshes additionally involving prismatic, pyramidic, and tetrahedral shapes using tensorial expansions. We demonstrate that common finite element operators (such as the mass and Helmholtz matrices) benefit from alternative implementation strategies depending on the element shape, choice of polynomial order, and system architecture in order to achieve optimal performance. In addition, we introduce a new approach/interpretation to efficiently evaluate more complex operations involving inner products with the derivative of the expansions as part of the integrand, such as the stiffness matrix. This approach seeks to maximise operations using the collocation properties of the nodal tensorial expansion associated with classical quadrature rules. Our GPU performance tests demonstrate that the throughput of the Helmholtz operator on tetrahedral elements is at most 2.5 times slower than on hexahedral elements, despite tetrahedra requiring a factor of six more floating-point operations.
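As background on the tensorial evaluation such operators build on (a standard spectral/hp sum-factorization technique, not the paper's new kernels), the hexahedral case applies a tensor-product basis one direction at a time:

```latex
% Standard sum-factorized evaluation on a hexahedral element (background only, not the paper's
% architecture-specific strategies). With \phi_{pqr}(\xi) = \phi_p(\xi_1)\phi_q(\xi_2)\phi_r(\xi_3),
% interpolation to quadrature points
u(\xi_{1i},\xi_{2j},\xi_{3k}) \;=\; \sum_{p}\phi_p(\xi_{1i})\sum_{q}\phi_q(\xi_{2j})\sum_{r}\phi_r(\xi_{3k})\,\hat{u}_{pqr}
% is applied direction by direction, reducing the per-element cost from O(P^6) to O(P^4)
% for polynomial order P.
```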
B. Zhang, X. Li, H. Manoochehri, X. Tang, D. Sirohi, B.S. Knudsen, T. Tasdizen.
Weakly Supervised Contrastive Learning for Histopathology Patch Embeddings, Subtitled arXiv:2602.09477v2, 2026.
Digital histopathology whole slide images (WSIs) provide gigapixel-scale high-resolution images that are highly useful for disease diagnosis. However, digital histopathology image analysis faces significant challenges due to limited training labels, since manually annotating specific regions or small patches cropped from large WSIs requires substantial time and effort. Weakly supervised multiple instance learning (MIL) offers a practical and efficient solution by requiring only bag-level (slide-level) labels, while each bag typically contains multiple instances (patches). Most MIL methods directly use frozen image patch features generated by various image encoders as inputs and primarily focus on feature aggregation. However, feature representation learning for encoder pretraining in MIL settings has largely been neglected.
In our work, we propose a novel feature representation learning framework called weakly supervised contrastive learning (WeakSupCon) that incorporates bag-level label information during training. Our method does not rely on instance-level pseudo-labeling, yet it effectively separates patches with different labels in the feature space. Experimental results demonstrate that the image features generated by our WeakSupCon method lead to improved downstream MIL performance compared to self-supervised contrastive learning approaches on three datasets.
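The WeakSupCon objective itself is not given in the abstract; for context only, the standard supervised contrastive (SupCon) loss that label-aware contrastive pretraining methods commonly adapt is:

```latex
% Standard supervised contrastive (SupCon) loss, shown only for context; WeakSupCon's exact
% objective is not reproduced here. For an anchor i with embedding z_i, P(i) are the other
% samples in the batch A(i) sharing its label, and \tau is a temperature.
\mathcal{L} \;=\; \sum_{i} \frac{-1}{|P(i)|} \sum_{p \in P(i)}
\log \frac{\exp\!\left(z_i \cdot z_p / \tau\right)}{\sum_{a \in A(i)} \exp\!\left(z_i \cdot z_a / \tau\right)}
```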
R.G. Zitnay, S. Alizada, T. Moustafa, D. Lange, L. Schreiber, A. Lex, R. L. Judson-Torres, T. A. Zangle, R. L. Belote.
QUILPEN decouples pigment absorption and organelle scatter in live melanocytic cells, In Proceedings Volume 13863, Label-free Biomedical Imaging and Sensing (LBIS) 2026, SPIE, 2026.
Pigmentation is a defining feature of melanocytic cells, and melanogenesis, the biosynthesis and compartmentalization of pigment into lysosome-like melanosomes, is tightly linked to cell state and developmental programs. However, the optical complexity of melanin-rich organelles complicates efforts to study melanogenesis in live cells. Traditional optical measurements conflate pigment absorption with light scattering from intracellular granularity, limiting biological insight. Our pipeline, QUILPEN (QUantitative Imaging of Label-free Pigment-associated ENtities), uses a custom LED-array microscope to independently quantify transmitted, scattered, and absorbed light in live, unlabeled melanocytic cells. QUILPEN builds on our lab's prior work in LED-based differential phase contrast (DPC) and Quadrant Dark Field (QDF) imaging, which reduce scatter-related edge effects. Our imaging workflow produces temporally resolved, multi-channel images to characterize the biophysical signature at single-cell resolution. Using melanoma cell lines with diverse pigmentation states, we show that acral melanoma cells display tightly correlated scatter and absorption signals, consistent with pigment-laden melanosomes driving both granularity and light absorption. By analyzing well-characterized melanoma models and perturbing pigment synthesis, we define how QUILPEN signals relate to melanin production, melanosome abundance, and general organelle content. Because QUILPEN enables longitudinal tracking without labels, it allows direct measurement of melanosome dynamics within individual cells and lineages. By separately quantifying absorption and scatter, QUILPEN reveals the sources of optical heterogeneity in melanocytic cells and links optical signatures to underlying cell physiology.
2025
B. Adcock, B. Hientzsch, A. Narayan, Y. Xu.
Hybrid least squares for learning functions from highly noisy data, Subtitled arXiv:2507.02215, 2025.
Motivated by the need for efficient estimation of conditional expectations, we consider a least-squares function approximation problem with heavily polluted data. Existing methods that are powerful in the small noise regime are suboptimal when large noise is present. We propose a hybrid approach that combines Christoffel sampling with certain types of optimal experimental design to address this issue. We show that the proposed algorithm enjoys appropriate optimality properties for both sample point generation and noise mollification, leading to improved computational efficiency and sample complexity compared to existing methods. We also extend the algorithm to convex-constrained settings with similar theoretical guarantees. When the target function is defined as the expectation of a random field, we extend our approach to leverage adaptive random subspaces and establish results on the approximation capacity of the adaptive procedure. Our theoretical findings are supported by numerical studies on both synthetic data and on a more challenging stochastic simulation problem in computational finance.
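The hybrid algorithm itself is not spelled out in the abstract; the sketch below only illustrates plain Christoffel sampling for weighted least squares with a Legendre basis on [-1, 1] (one ingredient the abstract names), with an invented noisy target. It is not the paper's method.

```python
# Illustrative Christoffel sampling for weighted least squares with a Legendre basis on [-1, 1].
# This shows only one ingredient named in the abstract (Christoffel sampling); it is NOT the
# paper's hybrid algorithm, and the target function and noise level are made up.
import numpy as np
from numpy.polynomial import legendre


def legendre_basis(x, n):
    """Legendre basis orthonormal w.r.t. the uniform probability measure on [-1, 1]."""
    V = np.stack([legendre.legval(x, [0] * k + [1]) for k in range(n)], axis=1)
    return V * np.sqrt(2 * np.arange(n) + 1)


def christoffel_sample(n_basis, n_samples, rng):
    """Rejection-sample points from the density proportional to the inverse Christoffel function."""
    samples = []
    bound = n_basis  # (1/n) * sum_k p_k(x)^2 is bounded by n on [-1, 1], attained at the endpoints
    while len(samples) < n_samples:
        x = rng.uniform(-1, 1)
        k_x = np.sum(legendre_basis(np.array([x]), n_basis) ** 2) / n_basis
        if rng.uniform(0, bound) < k_x:
            samples.append(x)
    return np.array(samples)


rng = np.random.default_rng(0)
n_basis, n_samples = 8, 200
x = christoffel_sample(n_basis, n_samples, rng)
y = np.cos(3 * x) + 0.5 * rng.standard_normal(n_samples)      # noisy synthetic data
V = legendre_basis(x, n_basis)
w = n_basis / np.sum(V ** 2, axis=1)                           # Christoffel weights
coef, *_ = np.linalg.lstsq(V * np.sqrt(w)[:, None], np.sqrt(w) * y, rcond=None)
print("fitted Legendre coefficients:", np.round(coef, 3))
```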
N. Alkhatani, I. Petri, O. Rana, M. Parashar.
Edge learning for energy-aware resource management, In 2025 IEEE International Conference on Edge Computing and Communications (EDGE), IEEE, 2025.
As the demand for intelligent systems grows, leveraging edge learning and autonomic self-management offers significant benefits for supporting real-time data analysis and resource management in edge environments. We describe and evaluate four distinct task allocation scenarios to demonstrate autonomic management of edge resources: random execution, autonomic broker-based scheduling, priority-driven execution, and energy-aware allocation. Our experiments reveal that while prioritization-based scheduling minimizes execution times by aligning with task criticality, the energy-aware approach presents a sustainable alternative. This method dynamically adapts task execution based on renewable energy availability, promoting environmentally conscious energy management without compromising operational efficiency. By harnessing renewable energy signals, our findings highlight the potential of edge autonomics to achieve a balance between performance, resource optimization, and sustainability. This work demonstrates how intelligent edge-cloud integration can foster resilient smart building infrastructures that meet the challenges of modern computing paradigms.
S. Aslan, NR. Mangine, D.W. Laurence, P.M. Sabin, W. Wu, C. Herz, J. S. Unger, S. A. Maas, M. J. Gillespie, J. A. Weiss, M. A. Jolley.
Simulation of Transcatheter Therapies for Atrioventricular Valve Regurgitation in an Open-Source Finite Element Simulation Framework, Subtitled arXiv:2509.22865v1, 2025.
Purpose: Transcatheter edge-to-edge repair (TEER) and annuloplasty devices are increasingly used to treat mitral valve regurgitation, yet their mechanical effects and interactions remain poorly understood. This study aimed to establish an open-source finite element modeling (FEM) framework for simulating patient-specific mitral valve repairs and to evaluate how TEER, annuloplasty, and combined strategies influence leaflet coaptation and valve mechanics. Methods: Four G4 MitraClip geometries were modeled and deployed in FEBio to capture leaflet grasp and subsequent clip-leaflet motion under physiologic pressurization. CardioBand annuloplasty was simulated by reducing annular circumference via displacement-controlled boundary conditions, and Mitralign suture annuloplasty was modeled using discrete nodal constraints. Simulations were performed for prolapse and dilated annulus cases. Valve competence (regurgitant orifice area, ROA), coaptation/contact area (CA), and leaflet stress and strain distributions were quantified. Results: In prolapse, TEER restored coaptation but increased leaflet stresses, whereas band and suture annuloplasty produced distinct valve morphologies with lower stress distributions. In dilation, TEER alone left residual regurgitation, while annuloplasty improved closure. Combined TEER & band annuloplasty minimized ROA, maximized CA, and reduced stresses relative to TEER alone, though stresses remained higher than annuloplasty alone. Conclusion: This study establishes a reproducible, open-source FEM framework for simulating transcatheter TEER and annuloplasty repairs, with the potential to be extended beyond the mitral valve. By quantifying the mechanical trade-offs of TEER, suture annuloplasty, band annuloplasty, and their combinations, this methodology highlights the potential of virtual repair to guide patient selection and optimize surgical planning.
T.M. Athawale, Z. Wang, D. Pugmire, K. Moreland, Q. Gong, S. Klasky, C.R. Johnson, P. Rosen.
Uncertainty Visualization of Critical Points of 2D Scalar Fields for Parametric and Nonparametric Probabilistic Models, In IEEE Transactions on Visualization and Computer Graphics, Vol. 31, No. 1, IEEE, pp. 108--118. 2025.
DOI: 10.1109/TVCG.2024.3456393
This paper presents a novel end-to-end framework for closed-form computation and visualization of critical point uncertainty in 2D uncertain scalar fields. Critical points are fundamental topological descriptors used in the visualization and analysis of scalar fields. The uncertainty inherent in data (e.g., observational and experimental data, approximations in simulations, and compression), however, creates uncertainty regarding critical point positions. Uncertainty in critical point positions, therefore, cannot be ignored, given their impact on downstream data analysis tasks. In this work, we study uncertainty in critical points as a function of uncertainty in data modeled with probability distributions. Although Monte Carlo (MC) sampling techniques have been used in prior studies to quantify critical point uncertainty, they are often expensive and are infrequently used in production-quality visualization software. We, therefore, propose a new end-to-end framework to address these challenges that comprises a threefold contribution. First, we derive the critical point uncertainty in closed form, which is more accurate and efficient than the conventional MC sampling methods. Specifically, we provide the closed-form and semianalytical (a mix of closed-form and MC methods) solutions for parametric (e.g., uniform, Epanechnikov) and nonparametric models (e.g., histograms) with finite support. Second, we accelerate critical point probability computations using a parallel implementation with the VTK-m library, which is platform portable. Finally, we integrate our implementation with the ParaView software system to demonstrate near-real-time results for real datasets.
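As a concrete illustration of the kind of closed-form expression involved (the independent-noise case for a local minimum; the paper's derivations for specific parametric and nonparametric models are not reproduced here):

```latex
% Illustrative independent-noise case only, not the paper's full derivation: the probability that
% grid vertex v is a local minimum, given per-vertex densities f and CDFs F with finite support,
% where N(v) denotes the neighbors of v.
\Pr[\text{$v$ is a local minimum}]
  \;=\; \int_{-\infty}^{\infty} f_v(x)\, \prod_{u \in N(v)} \bigl(1 - F_u(x)\bigr)\, dx
```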
W. Bangerth, C. R. Johnson, D. K. Njeru, B. van Bloemen Waanders.
Estimating and using information in inverse problems, In Inverse Problems and Imaging, 2025.
ISSN: 1930-8337
DOI: 10.3934/ipi.2026003
In inverse problems, one attempts to infer spatially variable functions from indirect measurements of a system. To practitioners of inverse problems, the concept of "information" is familiar when discussing key questions such as which parts of the function can be inferred accurately and which cannot. For example, it is generally understood that we can identify system parameters accurately only close to detectors, or along ray paths between sources and detectors, because we have "the most information" for these places.
Although referenced in many publications, the "information" that is invoked in such contexts is not a well understood and clearly defined quantity. Herein, we present a definition of information density that is based on the variance of coefficients as derived from a Bayesian reformulation of the inverse problem. We then discuss three areas in which this information density can be useful in practical algorithms for the solution of inverse problems, and illustrate the usefulness in one of these areas – how to choose the discretization mesh for the function to be reconstructed – using numerical experiments.
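The paper's precise definition of information density is not reproduced in the abstract; for orientation, in a linear-Gaussian Bayesian formulation the coefficient variances it draws on come from the standard posterior covariance:

```latex
% Standard linear-Gaussian posterior covariance (orientation only; the paper's definition of
% information density in terms of coefficient variances is not reproduced here).
% A: forward operator acting on the expansion coefficients m of the reconstructed function;
% Gamma_noise, Gamma_prior: noise and prior covariances.
\Gamma_{\text{post}} \;=\; \bigl(A^{\top}\,\Gamma_{\text{noise}}^{-1} A + \Gamma_{\text{prior}}^{-1}\bigr)^{-1},
\qquad
\operatorname{Var}[m_j] \;=\; \bigl(\Gamma_{\text{post}}\bigr)_{jj}
```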
Z. Bastiani, R.M. Kirby, J. Hochhalter, S. Zhe.
Diffusion-Based Symbolic Regression, Subtitled arXiv:2505.24776, 2025.
Diffusion has emerged as a powerful framework for generative modeling, achieving remarkable success in applications such as image and audio synthesis. Motivated by this progress, we propose a novel diffusion-based approach for symbolic regression. We construct a random mask-based diffusion and denoising process to generate diverse and high-quality equations. We integrate this generative process with a token-wise Group Relative Policy Optimization (GRPO) method to conduct efficient reinforcement learning on the given measurement dataset. In addition, we introduce a long short-term risk-seeking policy to expand the pool of top-performing candidates, further enhancing performance. Extensive experiments and ablation studies have demonstrated the effectiveness of our approach.
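The paper's token-wise GRPO variant and risk-seeking policy are not detailed in the abstract; for context, the group-relative advantage at the core of standard GRPO normalizes each sampled candidate's reward against its group:

```latex
% Group-relative advantage used in standard GRPO (context only; the paper's token-wise variant
% is not reproduced here). r_i is the reward of the i-th sampled equation among G candidates
% drawn for the same measurement dataset.
A_i \;=\; \frac{r_i - \operatorname{mean}(r_1,\dots,r_G)}{\operatorname{std}(r_1,\dots,r_G)}
```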
M. Belianovich, G.E. Fasshauer, A. Narayan, V. Shankar.
A Unified Framework for Efficient Kernel and Polynomial Interpolation, Subtitled arXiv:2507.12629v2, 2025.
We present a unified interpolation scheme that combines compactly-supported positive-definite kernels and multivariate polynomials. This unified framework generalizes interpolation with compactly-supported kernels and also classical polynomial least squares approximation. To facilitate the efficient use of this unified interpolation scheme, we present specialized numerical linear algebra procedures that leverage standard matrix factorizations. These procedures allow for efficient computation and storage of the unified interpolant. We also present a modification to the numerical linear algebra that allows us to generalize the application of the unified framework to target functions on manifolds with and without boundary. Our numerical experiments on both Euclidean domains and manifolds indicate that the unified interpolant is superior to polynomial least squares for the interpolation of target functions in settings with boundaries.
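The unified scheme's specialized factorizations are not given in the abstract; for background, the classical augmented interpolation system combining a kernel part and a polynomial part (which the unified framework generalizes in its handling of compactly-supported kernels and polynomial least squares) reads:

```latex
% Classical kernel interpolation augmented with a polynomial tail, shown for orientation only;
% the paper's unified scheme and its specialized factorizations are not reproduced here.
% K_{ij} = k(x_i, x_j) for a kernel k, P_{ij} = p_j(x_i) for a polynomial basis {p_j}.
\begin{pmatrix} K & P \\ P^{\top} & 0 \end{pmatrix}
\begin{pmatrix} c \\ d \end{pmatrix}
=
\begin{pmatrix} f \\ 0 \end{pmatrix},
\qquad
s(x) \;=\; \sum_i c_i\, k(x, x_i) + \sum_j d_j\, p_j(x)
```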
