SCI Publications
2026
T. M. Athawale, K. Moreland, D. Pugmire, C. R. Johnson, P. Rosen, M. Norman, A. Georgiadou, A. Entezari.
MAGIC: Marching Cubes Isosurface Uncertainty Visualization for Gaussian Uncertain Data with Spatial Correlation, In TVCG, IEEE, 2026.
In this paper, we study the propagation of data uncertainty through the marching cubes algorithm for isosurface visualization of correlated uncertain data. Consideration of correlation has been shown to be paramount for avoiding errors in uncertainty quantification and visualization in multiple prior studies. Although the problem of isosurface uncertainty with spatial data correlation has been previously addressed, there are two major limitations to prior treatments. First, there are no analytical formulations for uncertainty quantification of isosurfaces when the data uncertainty is characterized by a Gaussian distribution with spatial correlation. Second, as a consequence of the lack of analytical formulations, existing techniques resort to a Monte Carlo sampling approach, which is expensive and difficult to integrate into visualization tools. To address these limitations, we present a closed-form framework to efficiently derive uncertainty in marching cubes level-sets for Gaussian uncertain data with spatial correlation (MAGIC). To derive closed-form solutions, we leverage Hinkley's derivation of the distribution of the ratio of Gaussian random variables. With our analytical framework, we achieve a significant speed-up and enhanced accuracy of uncertainty quantification over classical Monte Carlo methods. We further accelerate our analytical solutions using many-core processors to achieve speed-ups up to 585× and integrability with production visualization tools for broader impact. We demonstrate the effectiveness of our correlation-aware uncertainty framework through experiments on meteorology, urban flow, and astrophysics simulation datasets.
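The following minimal sketch (not the paper's full MAGIC framework) illustrates the kind of closed-form quantity involved: the probability that an isovalue crosses a single cell edge whose endpoint values are jointly Gaussian with spatial correlation, checked against the Monte Carlo baseline the paper seeks to avoid. All means, standard deviations, and the correlation value are hypothetical.

```python
# Illustration only: probability that an isovalue crosses one cell edge when the
# two endpoint values are jointly Gaussian with correlation rho. This is a much
# simpler quantity than the paper's full level-set framework, but it shows why
# correlation cannot be ignored. All parameter values are hypothetical.
import numpy as np
from scipy.stats import norm, multivariate_normal

mu1, mu2 = 0.2, 0.8        # endpoint means
s1, s2 = 0.3, 0.3          # endpoint standard deviations
rho = 0.9                  # spatial correlation between the endpoints
c = 0.5                    # isovalue

cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
joint = multivariate_normal(mean=[mu1, mu2], cov=cov)

# P(cross) = P(X1<c, X2>c) + P(X1>c, X2<c)
#          = P(X1<c) + P(X2<c) - 2*P(X1<c, X2<c)
p_both_below = joint.cdf([c, c])
p_cross = norm.cdf(c, mu1, s1) + norm.cdf(c, mu2, s2) - 2.0 * p_both_below
print(f"closed-form crossing probability: {p_cross:.4f}")

# Monte Carlo check (the expensive baseline the closed form replaces)
samples = joint.rvs(size=200_000, random_state=0)
mc = np.mean(np.sign(samples[:, 0] - c) != np.sign(samples[:, 1] - c))
print(f"Monte Carlo estimate:             {mc:.4f}")
```

Re-running the sketch with rho set to 0 shows how ignoring correlation changes the estimated crossing probability.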
J. Hart, B. van Bloemen Waanders, J. Li, T. A. J. Ouermi, C. R. Johnson.
Hyper-differential sensitivity analysis with respect to model discrepancy: Prior distributions, In International Journal for Uncertainty Quantification, Vol. 16, No. 1, Begell House, 2026.
Hyper-differential sensitivity analysis with respect to model discrepancy was recently developed to enable uncertainty quantification for optimization problems. The approach consists of two primary steps: (i) Bayesian calibration of the discrepancy between high- and low-fidelity models, and (ii) propagating the model discrepancy uncertainty through the optimization problem. When high-fidelity model evaluations are limited, as is common in practice, the prior discrepancy distribution plays a crucial role in the uncertainty analysis. However, specification of this prior is challenging due to its mathematical complexity and many hyper-parameters. This article presents a novel approach to specify the prior distribution. Our approach consists of two parts: (1) an algorithmic initialization of the prior hyper-parameters that uses existing data to initialize a hyper-parameter estimate, and (2) a visualization framework to systematically explore properties of the prior and guide tuning of the hyper-parameters to ensure that the prior captures the appropriate range of uncertainty. We provide detailed mathematical analysis and a collection of numerical examples that elucidate properties of the prior that are crucial to ensure uncertainty quantification.
2025
T.M. Athawale, Z. Wang, D. Pugmire, K. Moreland, Q. Gong, S. Klasky, C.R. Johnson, P. Rosen.
Uncertainty Visualization of Critical Points of 2D Scalar Fields for Parametric and Nonparametric Probabilistic Models, In IEEE Transactions on Visualization and Computer Graphics, Vol. 31, No. 1, IEEE, pp. 108--118. 2025.
DOI: 10.1109/TVCG.2024.3456393
This paper presents a novel end-to-end framework for closed-form computation and visualization of critical point uncertainty in 2D uncertain scalar fields. Critical points are fundamental topological descriptors used in the visualization and analysis of scalar fields. The uncertainty inherent in data (e.g., observational and experimental data, approximations in simulations, and compression), however, creates uncertainty regarding critical point positions. Uncertainty in critical point positions, therefore, cannot be ignored, given their impact on downstream data analysis tasks. In this work, we study uncertainty in critical points as a function of uncertainty in data modeled with probability distributions. Although Monte Carlo (MC) sampling techniques have been used in prior studies to quantify critical point uncertainty, they are often expensive and are infrequently used in production-quality visualization software. We, therefore, propose a new end-to-end framework to address these challenges that comprises a threefold contribution. First, we derive the critical point uncertainty in closed form, which is more accurate and efficient than the conventional MC sampling methods. Specifically, we provide the closed-form and semianalytical (a mix of closed-form and MC methods) solutions for parametric (e.g., uniform, Epanechnikov) and nonparametric models (e.g., histograms) with finite support. Second, we accelerate critical point probability computations using a parallel implementation with the VTK-m library, which is platform portable. Finally, we demonstrate the integration of our implementation with the ParaView software system to demonstrate near-real-time results for real datasets.
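As a hedged illustration of the closed-form idea for independent parametric models (not the paper's full derivation or its VTK-m implementation), the sketch below computes the probability that the center vertex of a 2D stencil is a local maximum under independent uniform noise, using P(center is max) = E[∏ F_i(X_center)] and simple numerical integration; all distributions are hypothetical.

```python
# Minimal sketch: probability that the center vertex of a 2D grid stencil is a
# local maximum, assuming independent uniform noise at each vertex. For
# independent noise, P(center is max) = E[ prod_i F_i(X_center) ], evaluated
# here by numerical integration over the center's support. Values are hypothetical.
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import uniform

center = uniform(loc=0.4, scale=0.3)        # uniform on [0.4, 0.7]
neighbors = [uniform(loc=lo, scale=0.3) for lo in (0.1, 0.2, 0.35, 0.5)]

x = np.linspace(0.4, 0.7, 2001)             # support of the center distribution
integrand = center.pdf(x)
for nb in neighbors:
    integrand = integrand * nb.cdf(x)
p_local_max = trapezoid(integrand, x)
print(f"P(center vertex is a local maximum) = {p_local_max:.4f}")
```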
W. Bangerth, C. R. Johnson, D. K. Njeru, B. van Bloemen Waanders.
Estimating and using information in inverse problems, In Inverse Problems and Imaging, 2025.
ISSN: 1930-8337
DOI: 10.3934/ipi.2026003
In inverse problems, one attempts to infer spatially variable functions from indirect measurements of a system. To practitioners of inverse problems, the concept of "information" is familiar when discussing key questions such as which parts of the function can be inferred accurately and which cannot. For example, it is generally understood that we can identify system parameters accurately only close to detectors, or along ray paths between sources and detectors, because we have "the most information" for these places.
Although referenced in many publications, the "information" that is invoked in such contexts is not a well understood and clearly defined quantity. Herein, we present a definition of information density that is based on the variance of coefficients as derived from a Bayesian reformulation of the inverse problem. We then discuss three areas in which this information density can be useful in practical algorithms for the solution of inverse problems, and illustrate the usefulness in one of these areas – how to choose the discretization mesh for the function to be reconstructed – using numerical experiments.
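For readers unfamiliar with the variance-based viewpoint, the following LaTeX sketch records the standard linear-Gaussian posterior covariance from which such an information density can be built; the normalized ratio shown is an illustrative choice, not necessarily the paper's exact definition.

```latex
% Illustrative only: the standard linear--Gaussian Bayesian posterior on which a
% variance-based information density can be built. The ratio \iota_i below is an
% assumption for illustration, not necessarily the paper's definition.
\[
  d = A m + \varepsilon, \qquad
  \varepsilon \sim \mathcal{N}(0, \Gamma_{\text{noise}}), \qquad
  m \sim \mathcal{N}(m_0, \Gamma_{\text{prior}}),
\]
\[
  \Gamma_{\text{post}}
    = \left( A^{\top} \Gamma_{\text{noise}}^{-1} A + \Gamma_{\text{prior}}^{-1} \right)^{-1},
  \qquad
  \iota_i = 1 - \frac{(\Gamma_{\text{post}})_{ii}}{(\Gamma_{\text{prior}})_{ii}} .
\]
```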
J. Li, T. A. J. Ouermi, M. Han, C. R. Johnson.
Uncertainty Tube Visualization of Particle Trajectories, In 2025 IEEE Workshop on Uncertainty Visualization: Unraveling Relationships of Uncertainty, AI, and Decision-Making, 2025.
DOI: 10.48550/arXiv.2508.13505
Predicting particle trajectories with neural networks (NNs) has substantially enhanced many scientific and engineering domains. However, effectively quantifying and visualizing the inherent uncertainty in predictions remains challenging. Without an understanding of the uncertainty, the reliability of NN models in applications where trustworthiness is paramount is significantly compromised. This paper introduces the uncertainty tube, a novel, computationally efficient visualization method designed to represent this uncertainty in NN-derived particle paths. Our key innovation is the design and implementation of a superelliptical tube that accurately captures and intuitively conveys nonsymmetric uncertainty. By integrating well-established uncertainty quantification techniques, such as Deep Ensembles, Monte Carlo Dropout (MC Dropout), and Stochastic Weight Averaging-Gaussian (SWAG), we demonstrate the practical utility of the uncertainty tube, showcasing its application on both synthetic and simulation datasets.
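A hedged sketch of the core construction: given an ensemble of predicted positions at one time step (e.g., from Deep Ensembles or MC Dropout), fit a superelliptical cross-section to the spread around the mean. The 2D setting, principal-axis choice, 95th-percentile semi-axes, and exponent n are illustrative assumptions rather than the paper's exact design.

```python
# Hedged sketch: one superelliptical cross-section of an "uncertainty tube"
# built from an ensemble of predicted particle positions at a single time step.
import numpy as np

rng = np.random.default_rng(0)
positions = rng.normal(loc=[1.0, 2.0], scale=[0.05, 0.2], size=(50, 2))  # ensemble at time t

center = positions.mean(axis=0)
dev = positions - center
_, _, Vt = np.linalg.svd(dev, full_matrices=False)   # principal axes of the spread
proj = dev @ Vt.T
a, b = np.percentile(np.abs(proj), 95, axis=0)       # semi-axes from the spread

n = 4.0                                              # superellipse exponent
t = np.linspace(0.0, 2.0 * np.pi, 200)
x = a * np.sign(np.cos(t)) * np.abs(np.cos(t)) ** (2.0 / n)
y = b * np.sign(np.sin(t)) * np.abs(np.sin(t)) ** (2.0 / n)
boundary = center + np.column_stack([x, y]) @ Vt     # back to world coordinates
print(boundary[:3])
```

Stacking such cross-sections along the mean trajectory yields the tube geometry; nonsymmetric extents per axis direction would refine this basic construction.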
N. X. Marshak, K. Simotas, Z. Lukić, H. Park, J. Ahrens, C. R. Johnson.
Nyx-RT: Adaptive Ray Tracing in the Nyx Hydrodynamical Code, Subtitled arXiv:2512.12466, 2025.
Numerical methods for radiative transfer play a key role in modern-day astrophysics and cosmology, including study of the inhomogeneous reionization process. In this context, ray tracing methods are well-regarded for accuracy but notorious for high computational cost. In this work, we extend the capabilities of the Nyx N-body / hydrodynamics code, coupling radiation to gravitational and gas dynamics. We formulate adaptive ray tracing as a novel series of filters and transformations that can be used with AMReX particle abstractions, simplifying implementation and enabling portability across Exascale GPU architectures. To address computational cost, we present a new algorithm for merging sources, which significantly accelerates computation once reionization is well underway. Furthermore, we develop a novel prescription for geometric overlap correction with low-density neighbor cells. We perform verification and validation against standard analytic and numerical test problems. Finally, we demonstrate scaling to up to 1024 nodes and 4096 GPUs running multiphysics cosmological simulations, with 4096^3 Eulerian gas cells, 4096^3 dark matter particles, and ray tracing on a 1024^3 coarse grid. For these full cosmological simulations, we demonstrate convergence in terms of reionization history and post-ionization Lyman-alpha forest flux.
T. A. J. Ouermi, E. Li, K. Moreland, D. Pugmire, C. R. Johnson, T. M. Athawale.
Efficient Probabilistic Visualization of Local Divergence of 2D Vector Fields with Independent Gaussian Uncertainty, In 2025 IEEE Workshop on Uncertainty Visualization: Unraveling Relationships of Uncertainty, AI, and Decision-Making, 2025.
This work focuses on visualizing uncertainty of local divergence of two-dimensional vector fields. Divergence is one of the fundamental attributes of fluid flows, as it can help domain scientists analyze potential positions of sources (positive divergence) and sinks (negative divergence) in the flow. However, uncertainty inherent in vector field data can lead to erroneous divergence computations, adversely impacting downstream analysis. While Monte Carlo (MC) sampling is a classical approach for estimating divergence uncertainty, it suffers from slow convergence and poor scalability with increasing data size and sample counts. Thus, we present a two-fold contribution that tackles the challenges of slow convergence and limited scalability of the MC approach. (1) We derive a closed-form approach for highly efficient and accurate uncertainty visualization of local divergence, assuming independently Gaussian-distributed vector uncertainties. (2) We further integrate our approach into Viskores, a platform-portable parallel library, to accelerate uncertainty visualization. In our results, we demonstrate significantly enhanced efficiency and accuracy of our serial analytical (speed-up up to 1946×) and parallel Viskores (speed-up up to 19698×) algorithms over the classical serial MC approach. We also demonstrate qualitative improvements of our probabilistic divergence visualizations over traditional mean-field visualization, which disregards uncertainty. We validate the accuracy and efficiency of our methods on wind forecast and ocean simulation datasets.
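The closed-form reasoning can be illustrated at a single grid point: a central-difference divergence estimate is a linear combination of the (assumed independent) Gaussian velocity components, so the divergence is itself Gaussian and source/sink probabilities follow from the normal CDF. The sketch below uses hypothetical grid values and is not the Viskores implementation.

```python
# Minimal closed-form sketch for one grid point under the independent-Gaussian
# assumption: the central-difference divergence is a linear combination of
# Gaussians, hence Gaussian, and P(divergence > 0) follows from the normal CDF.
import numpy as np
from scipy.stats import norm

dx = dy = 1.0
# means and standard deviations of the four neighboring components used by
# central differences: u(i+1,j), u(i-1,j), v(i,j+1), v(i,j-1)  (hypothetical)
mu = np.array([1.2, 0.7, -0.3, -0.9])
sigma = np.array([0.2, 0.2, 0.3, 0.3])
w = np.array([1/(2*dx), -1/(2*dx), 1/(2*dy), -1/(2*dy)])  # central-difference weights

mean_div = np.dot(w, mu)
std_div = np.sqrt(np.dot(w**2, sigma**2))                 # independence assumption
p_source = 1.0 - norm.cdf(0.0, loc=mean_div, scale=std_div)  # P(div > 0)
print(f"div ~ N({mean_div:.3f}, {std_div:.3f}^2), P(source) = {p_source:.3f}")
```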
T. Patel, T.A.J. Ouermi, T. Athawale, C.R. Johnson.
Fast HARDI Uncertainty Quantification and Visualization with Spherical Sampling, In Computer Graphics Forum, Vol. 44, No. 3, pp. 1--12. 2025.
In this paper, we study uncertainty quantification and visualization of orientation distribution functions (ODFs), which correspond to the diffusion profile of high angular resolution diffusion imaging (HARDI) data. The shape inclusion probability (SIP) function is the state-of-the-art method for capturing the uncertainty of ODF ensembles. The current method of computing the SIP function with a volumetric basis exhibits high computational and memory costs, which can be a bottleneck to integrating uncertainty into HARDI visualization techniques and tools. We propose a novel spherical sampling framework for faster computation of the SIP function with lower memory usage and increased accuracy. In particular, we propose direct extraction of SIP isosurfaces, which represent confidence intervals indicating spatial uncertainty of HARDI glyphs, by performing spherical sampling of ODFs. Our spherical sampling approach requires much less sampling than the state-of-the-art volume sampling method, thus providing significantly enhanced performance, scalability, and the ability to perform implicit ray tracing. Our experiments demonstrate that the SIP isosurfaces extracted with our spherical sampling approach can achieve up to 8164× speedup, 37282× memory reduction, and 50.2% less SIP isosurface error compared to the classical volume sampling approach. We demonstrate the efficacy of our methods through experiments on synthetic and human-brain HARDI datasets.
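A minimal sketch of the spherical-sampling idea (with synthetic stand-ins for real ODFs): sample each ensemble member's ODF only along spherical directions, so the SIP value at a point r·u is simply the fraction of ensemble glyphs whose radius along direction u is at least r. The direction set, ensemble model, and confidence level below are illustrative assumptions.

```python
# Hedged sketch of spherical sampling for the shape inclusion probability (SIP):
# SIP at a point p = r * u is the fraction of ensemble glyphs whose radius along
# direction u is at least r. The ensemble radii are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
n_members, n_dirs = 100, 64

# unit directions on the sphere (random here; a quadrature set in practice)
dirs = rng.normal(size=(n_dirs, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

# synthetic ensemble of ODF radii, one radius per (member, direction)
base = 1.0 + 0.5 * dirs[:, 2] ** 2                  # an anisotropic "mean" glyph
radii = base[None, :] * rng.lognormal(0.0, 0.1, size=(n_members, n_dirs))

def sip(r, dir_index):
    """SIP at the point r * dirs[dir_index]."""
    return np.mean(radii[:, dir_index] >= r)

# radius at which the SIP drops to 0.5 along direction 0 (a 50% confidence boundary)
r_grid = np.linspace(0.0, radii[:, 0].max(), 512)
sip_vals = np.array([sip(r, 0) for r in r_grid])
r_half = r_grid[np.argmin(np.abs(sip_vals - 0.5))]
print(f"50% SIP radius along direction 0: {r_half:.3f}")
```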
2024
M. Han, J. Li, S. Sane, S. Gupta, B. Wang, S. Petruzza, C.R. Johnson.
Interactive Visualization of Time-Varying Flow Fields Using Particle Tracing Neural Networks, Subtitled arXiv preprint arXiv:2312.14973, 2024.
Lagrangian representations of flow fields have gained prominence for enabling fast, accurate analysis and exploration of time-varying flow behaviors. In this paper, we present a comprehensive evaluation to establish a robust and efficient framework for Lagrangian-based particle tracing using deep neural networks (DNNs). Han et al. (2021) first proposed a DNN-based approach to learn Lagrangian representations and demonstrated accurate particle tracing for an analytic 2D flow field. In this paper, we extend and build upon this prior work in significant ways. First, we evaluate the performance of DNN models to accurately trace particles in various settings, including 2D and 3D time-varying flow fields, flow fields from multiple applications, flow fields with varying complexity, as well as structured and unstructured input data. Second, we conduct an empirical study to inform best practices with respect to particle tracing model architectures, activation functions, and training data structures. Third, we conduct a comparative evaluation of prior techniques that employ flow maps as input for exploratory flow visualization. Specifically, we compare our extended model against its predecessor by Han et al. (2021), as well as the conventional approach that uses triangulation and Barycentric coordinate interpolation. Finally, we consider the integration and adaptation of our particle tracing model with different viewers. We provide an interactive web-based visualization interface by leveraging the efficiencies of our framework, and perform high-fidelity interactive visualization by integrating it with an OSPRay-based viewer. Overall, our experiments demonstrate that using a trained DNN model to predict new particle trajectories requires a low memory footprint and results in rapid inference. Following best practices for large 3D datasets, our deep learning approach using GPUs for inference is shown to require approximately 46 times less memory while being more than 400 times faster than the conventional methods.
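A hedged, self-contained sketch of the underlying flow-map learning idea (not the paper's architecture or training pipeline): a small MLP learns the Lagrangian map from a seed position and integration time to the end position, trained here on an analytic rotating flow so exact trajectories are available for comparison. Hyper-parameters are illustrative.

```python
# Minimal sketch: learn a Lagrangian flow map (x0, y0, t) -> (x_t, y_t) with a
# small MLP, using an analytic solid-body rotation so ground truth is available.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
omega = 1.0                                      # angular speed of the flow

def flow_map(xy0, t):
    """Exact flow map of solid-body rotation about the origin."""
    c, s = np.cos(omega * t), np.sin(omega * t)
    x, y = xy0[:, 0], xy0[:, 1]
    return np.column_stack([c * x - s * y, s * x + c * y])

# training data: random seed positions and integration times
xy0 = rng.uniform(-1.0, 1.0, size=(20_000, 2))
t = rng.uniform(0.0, 2.0, size=(20_000, 1))
X = np.hstack([xy0, t])
Y = flow_map(xy0, t[:, 0])

model = MLPRegressor(hidden_layer_sizes=(128, 128), activation="relu",
                     max_iter=300, random_state=0)
model.fit(X, Y)

# inference: new particle end positions come from cheap forward passes
test = np.array([[0.5, 0.0, 1.0]])
print("predicted:", model.predict(test)[0],
      " exact:", flow_map(test[:, :2], test[0, 2])[0])
```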
M. Han, T. Athawale, J. Li, C.R. Johnson.
Accelerated Depth Computation for Surface Boxplots with Deep Learning, In IEEE Workshop on Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks, IEEE, pp. 38--42. 2024.
DOI: 10.1109/UncertaintyVisualization63963.2024.00009
Functional depth is a well-known technique used to derive descriptive statistics (e.g., median, quartiles, and outliers) for 1D data. Surface boxplots extend this concept to ensembles of images, helping scientists and users identify representative and outlier images. However, the computational time for surface boxplots increases cubically with the number of ensemble members, making it impractical for integration into visualization tools. In this paper, we propose a deep-learning solution for efficient depth prediction and computation of surface boxplots for time-varying ensemble data. Our deep learning framework accurately predicts member depths in a surface boxplot, achieving average speedups of 6× on a CPU and 15× on a GPU for the 2D Red Sea dataset with 50 ensemble members compared to the traditional depth computation algorithm. Our approach achieves at least a 99% level of rank preservation, with order flipping occurring only at pairs with extremely similar depth values that show no statistically significant differences. This local flipping does not significantly impact the overall depth order of the ensemble members.
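For reference, the exact computation being accelerated is classical functional band depth; the sketch below implements modified band depth (j = 2) for a synthetic 1D ensemble, whose pairwise loop over members is the source of the cubic cost mentioned above. Surface boxplots apply the same idea to image ensembles.

```python
# Reference sketch of modified band depth (j = 2) for a synthetic 1D ensemble:
# for each member, the average fraction of the domain where it lies inside the
# band spanned by a pair of other members. This is the exact (cubic-cost)
# computation that the learned surrogate approximates.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n_members, n_points = 30, 200
x = np.linspace(0.0, 1.0, n_points)
ensemble = np.sin(2 * np.pi * x)[None, :] + 0.2 * rng.normal(size=(n_members, n_points))

def modified_band_depth(funcs):
    n = funcs.shape[0]
    depth = np.zeros(n)
    pairs = list(combinations(range(n), 2))
    for j, k in pairs:
        lo = np.minimum(funcs[j], funcs[k])
        hi = np.maximum(funcs[j], funcs[k])
        inside = (funcs >= lo) & (funcs <= hi)          # broadcast over members
        depth += inside.mean(axis=1)
    return depth / len(pairs)

depth = modified_band_depth(ensemble)
print("median member:", int(np.argmax(depth)))
print("most outlying members:", np.argsort(depth)[:3])
```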
G. Hari, N. Joshi, Z. Wang, Q. Gong, D. Pugmire, K. Moreland, C.R. Johnson, S. Klasky, N. Podhorszki, T. Athawale.
FunM2C: A Filter for Uncertainty Visualization of Multivariate Data on Multi-Core Devices, In IEEE Workshop on Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks, IEEE, pp. 43--47. 2024.
DOI: 10.1109/UncertaintyVisualization63963.2024.00010
Uncertainty visualization is an emerging research topic in data visualization because neglecting uncertainty in visualization can lead to inaccurate assessments. In this paper, we study the propagation of multivariate data uncertainty in visualization. Although there have been a few advancements in probabilistic uncertainty visualization of multivariate data, three critical challenges remain to be addressed. First, the state-of-the-art probabilistic uncertainty visualization framework is limited to bivariate data (two variables). Second, existing uncertainty visualization algorithms use computationally intensive techniques and lack support for cross-platform portability. Third, as a consequence of the computational expense, integration into production visualization tools is impractical. In this work, we address all three issues and make a threefold contribution. First, we take a step to generalize the state-of-the-art probabilistic framework for bivariate data to multivariate data with an arbitrary number of variables. Second, through utilization of VTK-m’s shared-memory parallelism and cross-platform compatibility features, we demonstrate acceleration of multivariate uncertainty visualization on different many-core architectures, including OpenMP and AMD GPUs. Third, we demonstrate the integration of our algorithms with the ParaView software. We demonstrate the utility of our algorithms through experiments on multivariate simulation data with three and four variables.
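In the independent-noise case, the multivariate generalization has a particularly simple form: the probability that all variables at a grid point simultaneously fall inside their trait ranges factors into a product of marginal probabilities. The sketch below is illustrative only, with hypothetical distributions and trait ranges, and does not reflect the VTK-m filter itself.

```python
# Illustrative sketch: with independent per-variable uncertainty at a grid
# point, the probability that all k variables fall inside their trait ranges is
# the product of marginal probabilities. Distributions and ranges are hypothetical.
from scipy.stats import norm, uniform

variables = [
    (norm(loc=0.3, scale=0.1),    (0.25, 0.45)),   # variable 1: Gaussian
    (norm(loc=1.1, scale=0.2),    (0.90, 1.30)),   # variable 2: Gaussian
    (uniform(loc=4.0, scale=2.0), (4.50, 5.50)),   # variable 3: uniform on [4, 6]
]

p = 1.0
for dist, (lo, hi) in variables:
    p *= dist.cdf(hi) - dist.cdf(lo)
print(f"P(all variables inside their trait ranges) = {p:.4f}")
```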
J. Li, T.A.J. Ouermi, C.R. Johnson.
Visualizing Uncertainties in Ensemble Wildfire Forecast Simulations, In IEEE Workshop on Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks, IEEE, pp. 84--88. 2024.
DOI: 10.1109/UncertaintyVisualization63963.2024.00016
Wildfires pose substantial risks to our health, environment, and economy. Studying wildfires is challenging due to their complex interaction with atmospheric dynamics and the terrain. Researchers have employed ensemble simulations to study the relationships among variables and to mitigate uncertainties arising from unpredictable initial conditions. However, many wildfire researchers are unaware of the advanced visualization techniques available for conveying uncertainty. We designed and implemented an interactive visualization system for studying the uncertainties of fire spread patterns using band-depth-based order statistics and contour boxplots. We also augment the visualization system with a summary of changes in the burned area and fuel content to help scientists identify interesting temporal events. In this paper, we demonstrate how our system can support wildfire experts in studying fire spread patterns, identifying outlier simulations, and navigating to interesting times based on a summary of events.
T.A.J. Ouermi, J. Li, T. Athawale, C.R. Johnson.
Estimation and Visualization of Isosurface Uncertainty from Linear and High-Order Interpolation Methods, In IEEE Workshop on Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks, IEEE, pp. 51--61. 2024.
DOI: 10.1109/UncertaintyVisualization63963.2024.00012
Isosurface visualization is fundamental for exploring and analyzing 3D volumetric data. Marching cubes (MC) algorithms with linear interpolation are commonly used for isosurface extraction and visualization. Although linear interpolation is easy to implement, it has limitations when the underlying data is complex and high-order, which is the case for most real-world data. Linear interpolation can output vertices at the wrong location. Its inability to deal with sharp features and features smaller than grid cells can lead to an incorrect isosurface with holes and broken pieces. Despite these limitations, isosurface visualizations typically do not include insight into the spatial location and the magnitude of these errors. We utilize high-order interpolation methods with MC algorithms and interactive visualization to highlight these uncertainties. Our visualization tool helps identify the regions of high interpolation errors. It also allows users to query local areas for details and compare the differences between isosurfaces from different interpolation methods. In addition, we employ high-order methods to identify and reconstruct possible features that linear methods cannot detect. We showcase how our visualization tool helps explore and understand the extracted isosurface errors through synthetic and real-world data.
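A small sketch of the kind of discrepancy the tool highlights: the isovalue-crossing location on one cell edge estimated by linear interpolation versus a higher-order (cubic) reconstruction of the same samples, compared against the exact crossing of a hypothetical smooth field.

```python
# Sketch: linear vs. cubic estimates of the isovalue-crossing location on one
# grid edge of a smooth (hypothetical) field, compared with the exact crossing.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import brentq

f = lambda x: np.sin(1.5 * x)                    # "true" field along the edge
iso = 0.5
xs = np.array([0.0, 1.0, 2.0, 3.0])              # grid samples along the edge
ys = f(xs)

# linear interpolation between the two samples that bracket the isovalue (x in [1, 2])
t_lin = xs[1] + (iso - ys[1]) / (ys[2] - ys[1]) * (xs[2] - xs[1])

# higher-order estimate: cubic through the four samples, then root finding
spline = CubicSpline(xs, ys)
t_cub = brentq(lambda x: spline(x) - iso, xs[1], xs[2])

t_true = brentq(lambda x: f(x) - iso, xs[1], xs[2])
print(f"linear: {t_lin:.4f}  cubic: {t_cub:.4f}  exact: {t_true:.4f}")
print(f"linear error: {abs(t_lin - t_true):.4f}  cubic error: {abs(t_cub - t_true):.4f}")
```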
T.A.J. Ouermi, J. Li, Z. Morrow, B. Waanders, C.R. Johnson.
Glyph-Based Uncertainty Visualization and Analysis of Time-Varying Vector Fields, In IEEE Workshop on Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks, IEEE, pp. 73--77. 2024.
DOI: 10.1109/UncertaintyVisualization63963.2024.00014
Uncertainty is inherent to most data, including vector field data, yet it is often omitted in visualizations and representations. Effective uncertainty visualization can enhance the understanding and interpretability of vector field data. For instance, in the context of severe weather events such as hurricanes and wildfires, effective uncertainty visualization can provide crucial insights about fire spread or hurricane behavior and aid in resource management and risk mitigation. Glyphs are commonly used for representing vector uncertainty but are often limited to 2D. In this work, we present a glyph-based technique for accurately representing 3D vector uncertainty and a comprehensive framework for visualization, exploration, and analysis using our new glyphs. We employ hurricane and wildfire examples to demonstrate the efficacy of our glyph design and visualization tool in conveying vector field uncertainty.
S. Saklani, C. Goel, S. Bansal, Z. Wang, S. Dutta, T. Athawale, D. Pugmire, C.R. Johnson.
Uncertainty-Informed Volume Visualization using Implicit Neural Representation, In IEEE Workshop on Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks, IEEE, pp. 62--72. 2024.
DOI: 10.1109/UncertaintyVisualization63963.2024.00013
The increasing adoption of Deep Neural Networks (DNNs) has led to their application in many challenging scientific visualization tasks. While advanced DNNs offer impressive generalization capabilities, understanding factors such as model prediction quality, robustness, and uncertainty is crucial. These insights can enable domain scientists to make informed decisions about their data. However, DNNs inherently lack the ability to estimate prediction uncertainty, necessitating new research to construct robust uncertainty-aware visualization techniques tailored to various visualization tasks. In this work, we propose uncertainty-aware implicit neural representations to model scalar field data sets effectively and comprehensively study the efficacy and benefits of estimated uncertainty information for volume visualization tasks. We evaluate the effectiveness of two principled deep uncertainty estimation techniques: (1) Deep Ensemble and (2) Monte Carlo Dropout (MC-Dropout). These techniques enable uncertainty-informed volume visualization in scalar field data sets. Our extensive exploration across multiple data sets demonstrates that uncertainty-aware models produce informative volume visualization results. Moreover, integrating prediction uncertainty enhances the trustworthiness of our DNN model, making it suitable for robustly analyzing and visualizing real-world scientific volumetric data sets.
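Both estimators reduce to the same aggregation step, sketched below with synthetic placeholders for network outputs: given K predictions of the scalar field (K ensemble members for Deep Ensemble, or K stochastic forward passes for MC-Dropout), the per-voxel mean serves as the reconstruction and the per-voxel standard deviation as the prediction uncertainty that informs the volume rendering.

```python
# Hedged sketch of the shared aggregation step: per-voxel mean (reconstruction)
# and standard deviation (prediction uncertainty) over K field predictions.
# The predictions below are synthetic placeholders for network outputs.
import numpy as np

rng = np.random.default_rng(0)
K, shape = 20, (64, 64, 64)
predictions = np.stack([rng.normal(size=shape) for _ in range(K)])

mean_field = predictions.mean(axis=0)            # rendered scalar field
uncertainty = predictions.std(axis=0, ddof=1)    # per-voxel predictive uncertainty
print("voxel with highest uncertainty:",
      np.unravel_index(np.argmax(uncertainty), shape))
```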
S. Subramaniam, M. Akay, M. A. Anastasio, V. Bailey, D. Boas, P. Bonato, A. Chilkoti, J. R. Cochran, V. Colvin, T. A. Desai, J. S. Duncan, F. H. Epstein, S. Fraley, C. Giachelli, K. J. Grande-Allen, J. Green, X. E. Guo, I. B. Hilton, J. D. Humphrey, C. R. Johnson, G. Karniadakis, M. R. King, R. F. Kirsch, S. Kumar, C. T. Laurencin, S. Li, R. L. Lieber, N. Lovell, P. Mali, S. S. Margulies, D. F. Meaney, B. Ogle, B. Palsson, N. A. Peppas, E. J. Perreault, R. Rabbitt, L. A. Setton, L. D. Shea, S. G. Shroff, K. Shung, A. S. Tolias, M. C. H. van der Meulen, S. Varghese, G. Vunjak-Novakovic, J. A. White, R. Winslow, J. Zhang, K. Zhang, C. Zukoski, M. I. Miller.
Grand Challenges at the Interface of Engineering and Medicine, In IEEE Open Journal of Engineering in Medicine and Biology, Vol. 5, IEEE, pp. 1--13. Feb, 2024.
ISSN: 2644-1276
DOI: 10.1109/ojemb.2024.3351717
Over the past two decades Biomedical Engineering has emerged as a major discipline that bridges societal needs of human health care with the development of novel technologies. Every medical institution is now equipped at varying degrees of sophistication with the ability to monitor human health in both non-invasive and invasive modes. The multiple scales at which human physiology can be interrogated provide a profound perspective on health and disease. We are at the nexus of creating "avatars" (herein defined as an extension of "digital twins") of human patho/physiology to serve as paradigms for interrogation and potential intervention. Motivated by the emergence of these new capabilities, the IEEE Engineering in Medicine and Biology Society, the Departments of Biomedical Engineering at Johns Hopkins University and Bioengineering at University of California at San Diego sponsored an interdisciplinary workshop to define the grand challenges that face biomedical engineering and the mechanisms to address these challenges. The Workshop identified five grand challenges with cross-cutting themes and provided a roadmap for new technologies, identified new training needs, and defined the types of interdisciplinary teams needed for addressing these challenges. The themes presented in this paper include: 1) accumedicine through creation of avatars of cells, tissues, organs and whole human; 2) development of smart and responsive devices for human function augmentation; 3) exocortical technologies to understand brain function and treat neuropathologies; 4) the development of approaches to harness the human immune system for health and wellness; and 5) new strategies to engineer genomes and cells.
M.C. Whitton, C.R. Johnson, D. Kasik, A. Stork.
The Making of “The Big 50: Celebrating 50 ACM SIGGRAPH Conferences”, In IEEE Computer Graphics and Applications, Vol. 44, IEEE, pp. 89--97. Aug, 2024.
ISSN: 0272-1716
DOI: 10.1109/mcg.2024.3414048
The article offers a behind-the-scenes look at the creation process of last year's special issue article commemorating five decades of SIGGRAPH conferences. In this edited transcription of an interview, the article's editors, Mary Whitton, Chris Johnson, and Dave Kasik, share insights on how the project evolved from a conventional idea into an unconventional, groundbreaking collection of personal stories. The interview delves into the inception of the project, the challenges faced in gathering contributions, the innovative methods employed for outreach, and the surprises encountered along the way. From the initial concept to the final publication, the editors recount the collaborative effort involved in producing a comprehensive and visually captivating retrospective. The article's impact on the SIGGRAPH community in particular, and on the computer graphics community in general, is reflected in the feedback received, highlighting its value as a repository of memories and a testament to the enduring legacy of the conference. Through personal anecdotes and reflections, the interview captures the spirit of camaraderie and innovation that has characterized SIGGRAPH over the past half-century.
2023
T. M. Athawale, C.R. Johnson, S. Sane, D. Pugmire.
Fiber Uncertainty Visualization for Bivariate Data With Parametric and Nonparametric Noise Models, In IEEE Transactions on Visualization and Computer Graphics, Vol. 29, No. 1, IEEE, pp. 613--623. 2023.
Visualization and analysis of multivariate data and their uncertainty are top research challenges in data visualization. Constructing fiber surfaces is a popular technique for multivariate data visualization that generalizes the idea of level-set visualization for univariate data to multivariate data. In this paper, we present a statistical framework to quantify positional probabilities of fibers extracted from uncertain bivariate fields. Specifically, we extend the state-of-the-art Gaussian models of uncertainty for bivariate data to other parametric distributions (e.g., uniform and Epanechnikov) and more general nonparametric probability distributions (e.g., histograms and kernel density estimation) and derive corresponding spatial probabilities of fibers. In our proposed framework, we leverage Green's theorem for closed-form computation of fiber probabilities when bivariate data are assumed to have independent parametric and nonparametric noise. Additionally, we present a nonparametric approach combined with numerical integration to study the positional probability of fibers when bivariate data are assumed to have correlated noise. For uncertainty analysis, we visualize the derived probability volumes for fibers via volume rendering and by extracting level sets based on probability thresholds. We present the utility of our proposed techniques via experiments on synthetic and simulation datasets.
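As a hedged illustration of the independent-noise case (the paper's Green's-theorem treatment of general traits is not reproduced here), the probability that an uncertain bivariate data point lies inside a rectangular fiber trait factors into marginal CDF differences; the sketch below uses a uniform marginal and a hand-written Epanechnikov CDF with hypothetical parameters.

```python
# Minimal sketch of the independent-noise case: for a rectangular trait, the
# probability that an uncertain bivariate point lies inside the fiber trait
# factors into marginal CDF differences. All parameters are hypothetical.
import numpy as np
from scipy.stats import uniform

def epanechnikov_cdf(x, mu, h):
    """CDF of the Epanechnikov kernel centered at mu with half-width h."""
    u = np.clip((x - mu) / h, -1.0, 1.0)
    return 0.5 + 0.75 * (u - u**3 / 3.0)

# variable 1: uniform noise, variable 2: Epanechnikov noise (independent)
a1, b1 = 0.2, 0.6                                # trait range for variable 1
a2, b2 = 1.0, 1.4                                # trait range for variable 2
var1 = uniform(loc=0.1, scale=0.6)               # uniform on [0.1, 0.7]

p_fiber = (var1.cdf(b1) - var1.cdf(a1)) * \
          (epanechnikov_cdf(b2, mu=1.1, h=0.3) - epanechnikov_cdf(a2, mu=1.1, h=0.3))
print(f"P(point lies inside the fiber trait) = {p_fiber:.4f}")
```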
N. Boukhelifa, C. R. Johnson, K. Potter.
Visualization and Decision Making Design Under Uncertainty, In IEEE Computer Graphics and Applications, Vol. 43, IEEE, pp. 23--25. Sep, 2023.
ISSN: 0272-1716
DOI: 10.1109/mcg.2023.3302172
Uncertainty is an important aspect of data understanding. Without awareness of the variability, error, or reliability of a dataset, the ability to make decisions based on that data is limited. However, practices around uncertainty visualization remain domain-specific, rooted in convention, and in many instances, absent entirely. Part of the reason for this may be a lack of established guidelines for navigating difficult choices of when uncertainty should be added, how to visualize uncertainty, and how to evaluate its effectiveness. Unsurprisingly, the inclusion of uncertainty in visualizations remains a major challenge for the field. As work concerned with uncertainty visualization grows, it has become clear that simple visual additions of uncertainty information to traditional visualization methods do not appropriately convey the meaning of the uncertainty, pose many perceptual challenges, and, in the worst case, can lead a viewer to a completely wrong understanding of the data. These challenges are the driving motivator for this special issue.
