SCI Publications
2012
Y. Hong, S. Joshi, M. Sanchez, M. Styner, M. Niethammer.
Metamorphic Geodesic Regression, In Proceedings of Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2012, pp. 197--205. 2012.
J. Huang, W. Pei, C. Wen, G. Chen, W. Chen, H. Bao.
Output-Coherent Image-Space LIC for Surface Flow Visualization, In Proceedings of the IEEE Pacific Visualization Symposium 2012, Korea, pp. 137--144. 2012.
Image-space line integral convolution (LIC) is a popular approach for visualizing surface vector fields due to its simplicity and high efficiency. To avoid inconsistencies or color blur during user interactions, some image-space methods use surface parameterization or a 3D volume texture to achieve smooth transitions, but these often incur substantial computational or memory costs. Furthermore, such methods cannot achieve LIC results that are consistent in both granularity and color distribution across different scales.
This paper introduces a novel image-space LIC for surface flows that preserves texture coherence during user interactions. To make the noise textures coherent across different viewpoints, we propose a simple texture mapping technique that is local, robust, and effective. Meanwhile, our approach pre-computes a sequence of mipmap noise textures in a coarse-to-fine manner, leading to consistent transitions when the model is zoomed. Before performing LIC in image space, the mipmap noise textures are mapped onto each triangle with randomly assigned texture coordinates. Then, a standard image-space LIC based on the projected vector fields is performed to generate the flow texture. The proposed approach is simple and well suited to GPU acceleration. Our implementation demonstrates consistent and highly efficient LIC visualization on a variety of datasets.
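For context, below is a minimal, unoptimized sketch of the basic image-space LIC operation this paper builds on; the paper's mipmap noise and texture-mapping stages are omitted. The inputs `vx`, `vy` (projected screen-space vector field) and `noise` are hypothetical, and a GPU implementation would parallelize the per-pixel loop.

```python
# Minimal image-space LIC sketch (NumPy). Hypothetical inputs:
# vx, vy : screen-space (projected) vector field components, shape (H, W)
# noise  : float noise texture, shape (H, W)
import numpy as np

def lic(vx, vy, noise, length=15, step=0.5):
    """Convolve noise along streamlines of the projected vector field."""
    H, W = noise.shape
    out = np.zeros(noise.shape, dtype=float)
    for py in range(H):
        for px in range(W):
            total, count = 0.0, 0
            for direction in (1.0, -1.0):          # trace forward and backward
                x, y = float(px), float(py)
                for _ in range(length):
                    ix, iy = int(round(x)), int(round(y))
                    if not (0 <= ix < W and 0 <= iy < H):
                        break                      # streamline left the image
                    total += noise[iy, ix]
                    count += 1
                    u, v = vx[iy, ix], vy[iy, ix]
                    mag = np.hypot(u, v)
                    if mag < 1e-6:                 # stagnant flow, stop tracing
                        break
                    x += direction * step * u / mag
                    y += direction * step * v / mag
            out[py, px] = total / max(count, 1)
    return out
```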
A.H. Huang, B.M. Baker, G.A. Ateshian, R.L. Mauck.
Sliding Contact Loading Enhances the Tensile Properties of Mesenchymal Stem Cell-Seeded Hydrogels, In European Cells and Materials, Vol. 24, pp. 29--45. 2012.
PubMed ID: 22791371
A. Humphrey, Q. Meng, M. Berzins, T. Harman.
Radiation Modeling Using the Uintah Heterogeneous CPU/GPU Runtime System, SCI Technical Report, No. UUSCI-2012-003, SCI Institute, University of Utah, 2012.
The Uintah Computational Framework was developed to provide an environment for solving fluid-structure interaction problems on structured adaptive grids for large-scale, long-running, data-intensive problems. Uintah uses a combination of fluid-flow solvers and particle-based methods for solids, together with a novel asynchronous task-based approach with fully automated load balancing. Uintah demonstrates excellent weak and strong scalability at full machine capacity on XSEDE resources such as Ranger and Kraken, and through the use of a hybrid memory approach based on a combination of MPI and Pthreads, Uintah now runs on up to 262k cores on the DOE Jaguar system. In order to extend Uintah to heterogeneous systems, with ever-increasing CPU core counts and additional on-node GPUs, a new dynamic CPU-GPU task scheduler is designed and evaluated in this study. This new scheduler enables Uintah to fully exploit these architectures with support for asynchronous, out-of-order scheduling of both CPU and GPU computational tasks. A new runtime system has also been implemented with an added multi-stage queuing architecture for efficient scheduling of CPU and GPU tasks. This new runtime system automatically handles the details of asynchronous memory copies to and from the GPU and introduces a novel method of pre-fetching and preparing GPU memory prior to GPU task execution. In this study this new design is examined in the context of a developing, hierarchical GPU-based ray tracing radiation transport model that provides Uintah with additional capabilities for heat transfer and electromagnetic wave propagation. The capabilities of this new scheduler design are tested by running at large scale on the modern heterogeneous systems Keeneland and TitanDev, with up to 360 and 960 GPUs respectively. On these systems, we demonstrate significant speedups per GPU against a standard CPU core for our radiation problem.
Keywords: csafe, uintah
A. Humphrey, Q. Meng, M. Berzins, T. Harman.
Radiation Modeling Using the Uintah Heterogeneous CPU/GPU Runtime System, In Proceedings of the first conference of the Extreme Science and Engineering Discovery Environment (XSEDE'12), Association for Computing Machinery, 2012.
DOI: 10.1145/2335755.2335791
The Uintah Computational Framework was developed to provide an environment for solving fluid-structure interaction problems on structured adaptive grids for large-scale, long-running, data-intensive problems. Uintah uses a combination of fluid-flow solvers and particle-based methods for solids, together with a novel asynchronous task-based approach with fully automated load balancing. Uintah demonstrates excellent weak and strong scalability at full machine capacity on XSEDE resources such as Ranger and Kraken, and through the use of a hybrid memory approach based on a combination of MPI and Pthreads, Uintah now runs on up to 262k cores on the DOE Jaguar system. In order to extend Uintah to heterogeneous systems, with ever-increasing CPU core counts and additional on-node GPUs, a new dynamic CPU-GPU task scheduler is designed and evaluated in this study. This new scheduler enables Uintah to fully exploit these architectures with support for asynchronous, out-of-order scheduling of both CPU and GPU computational tasks. A new runtime system has also been implemented with an added multi-stage queuing architecture for efficient scheduling of CPU and GPU tasks. This new runtime system automatically handles the details of asynchronous memory copies to and from the GPU and introduces a novel method of pre-fetching and preparing GPU memory prior to GPU task execution. In this study this new design is examined in the context of a developing, hierarchical GPU-based ray tracing radiation transport model that provides Uintah with additional capabilities for heat transfer and electromagnetic wave propagation. The capabilities of this new scheduler design are tested by running at large scale on the modern heterogeneous systems Keeneland and TitanDev, with up to 360 and 960 GPUs respectively. On these systems, we demonstrate significant speedups per GPU against a standard CPU core for our radiation problem.
Keywords: Uintah, hybrid parallelism, scalability, parallel, adaptive, GPU, heterogeneous systems, Keeneland, TitanDev
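The multi-stage queuing idea can be illustrated with a small sketch. This is not Uintah's implementation (which is a C++ runtime integrated with its task graph); it is a hypothetical Python skeleton, with placeholder Task methods standing in for CUDA event queries and copies, showing how GPU tasks might move through copy-in, ready, and copy-out stages while CPU tasks run from their own queue.

```python
# Hypothetical illustration of a multi-stage CPU/GPU task queue, loosely
# modeled on the runtime design described above; NOT Uintah's actual
# C++ scheduler. All Task methods are placeholder stand-ins.
from collections import deque

class Task:
    """Placeholder task; real Uintah tasks carry data-warehouse dependencies."""
    def __init__(self, name, uses_gpu=False):
        self.name, self.uses_gpu = name, uses_gpu
    def start_async_copy_to_device(self): print(f"{self.name}: H2D copy issued")
    def copies_complete(self): return True   # stand-in for a CUDA event query
    def launch_kernel(self): print(f"{self.name}: kernel launched")
    def kernel_complete(self): return True   # stand-in for a CUDA event query
    def start_async_copy_to_host(self): print(f"{self.name}: D2H copy issued")
    def run(self): print(f"{self.name}: ran on a CPU core")

class HybridScheduler:
    """GPU tasks flow through copy-in, ready, and copy-out stages;
    CPU tasks execute out-of-order from their own ready queue."""
    def __init__(self):
        self.cpu_ready = deque()
        self.gpu_copy_in = deque()   # awaiting host-to-device pre-fetch
        self.gpu_ready = deque()     # inputs resident on the GPU
        self.gpu_copy_out = deque()  # kernel done, results to copy back

    def submit(self, task):
        (self.gpu_copy_in if task.uses_gpu else self.cpu_ready).append(task)

    def step(self):
        if self.gpu_copy_in:                      # stage 1: pre-fetch inputs
            task = self.gpu_copy_in.popleft()
            task.start_async_copy_to_device()
            self.gpu_ready.append(task)
        if self.gpu_ready and self.gpu_ready[0].copies_complete():
            task = self.gpu_ready.popleft()       # stage 2: launch the kernel
            task.launch_kernel()
            self.gpu_copy_out.append(task)
        if self.gpu_copy_out and self.gpu_copy_out[0].kernel_complete():
            self.gpu_copy_out.popleft().start_async_copy_to_host()  # stage 3
        if self.cpu_ready:
            self.cpu_ready.popleft().run()        # CPU queue drains independently

sched = HybridScheduler()
for t in (Task("cpu_advect"), Task("gpu_radiation", uses_gpu=True)):
    sched.submit(t)
for _ in range(3):
    sched.step()
```

Staging the copies separately from the kernel launches is what lets host-to-device transfers overlap with whatever work is already running, which is the overlap the abstract's pre-fetching method exploits.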
A. Irimia, M.C. Chambers, C.M. Torgerson, M. Filippou, D.A. Hovda, J.R. Alger, G. Gerig, A.W. Toga, P.M. Vespa, R. Kikinis, J.D. Van Horn.
Patient-tailored connectomics visualization for the assessment of white matter atrophy in traumatic brain injury, In Frontiers in Neurology, Note: http://www.frontiersin.org/neurotrauma/10.3389/fneur.2012.00010/abstract, 2012.
DOI: 10.3389/fneur.2012.00010
A. Irimia, B. Wang, S.R. Aylward, M.W. Prastawa, D.F. Pace, G. Gerig, D.A. Hovda, R. Kikinis, P.M. Vespa, J.D. Van Horn.
Neuroimaging of Structural Pathology and Connectomics in Traumatic Brain Injury: Toward Personalized Outcome Prediction, In NeuroImage: Clinical, Vol. 1, No. 1, Elsevier, pp. 1--17. 2012.
DOI: 10.1016/j.nicl.2012.08.002
S.K. Iyer, T. Tasdizen, E.V.R. DiBella.
Edge Enhanced Spatio-Temporal Constrained Reconstruction of Undersampled Dynamic Contrast Enhanced Radial MRI, In Magnetic Resonance Imaging, Vol. 30, No. 5, pp. 610--619. 2012.
Keywords: MRI, Reconstruction, Edge enhanced, Compressed sensing, Regularization, Cardiac perfusion
C.R. Johnson.
Biomedical Visual Computing: Case Studies and Challenges, In IEEE Computing in Science and Engineering, Vol. 14, No. 1, pp. 12--21. 2012.
PubMed ID: 22545005
PubMed Central ID: PMC3336198
Computer simulation and visualization are having a substantial impact on biomedicine and other areas of science and engineering. Advanced simulation and data acquisition techniques allow biomedical researchers to investigate increasingly sophisticated biological function and structure. A continuing trend in all computational science and engineering applications is the increasing size of resulting datasets. This trend is also evident in data acquisition, especially in image acquisition in biology and medical image databases.
For example, in a collaboration between neuroscientist Robert Marc and our research team at the University of Utah's Scientific Computing and Imaging (SCI) Institute (www.sci.utah.edu), we're creating datasets of brain electron microscopy (EM) mosaics that are 16 terabytes in size. However, while there's no foreseeable end to the increase in our ability to produce simulation data or record observational data, our ability to use this data in meaningful ways is inhibited by current data analysis capabilities, which already lag far behind. Indeed, as the NIH-NSF Visualization Research Challenges report notes, effectively understanding and making use of the vast amounts of data researchers are producing is one of the greatest scientific challenges of the 21st century.
Visual data analysis involves creating images that convey salient information about underlying data and processes, enabling the detection and validation of expected results while leading to unexpected discoveries in science. This allows for the validation of new theoretical models, provides comparison between models and datasets, enables quantitative and qualitative querying, improves interpretation of data, and facilitates decision making. Scientists can use visual data analysis systems to explore "what if" scenarios, define hypotheses, and examine data under multiple perspectives and assumptions. In addition, they can identify connections between numerous attributes and quantitatively assess the reliability of hypotheses. In essence, visual data analysis is an integral part of scientific problem solving and discovery.
As applied to biomedical systems, visualization plays a crucial role in our ability to comprehend large and complex data: data that, in two, three, or more dimensions, convey insight into many diverse biomedical applications, including understanding neural connectivity within the brain, interpreting bioelectric currents within the heart, characterizing white-matter tracts by diffusion tensor imaging, and understanding morphology differences among different genetic mouse phenotypes.
Keywords: kaust
E. Jurrus, S. Watanabe, R.J. Giuly, A.R.C. Paiva, M.H. Ellisman, E.M. Jorgensen, T. Tasdizen.
Semi-Automated Neuron Boundary Detection and Nonbranching Process Segmentation in Electron Microscopy Images, In Neuroinformatics, pp. (published online). 2012.
T. Kapur, S. Pieper, R.T. Whitaker, S. Aylward, M. Jakab, W. Schroeder, R. Kikinis.
The National Alliance for Medical Image Computing, a roadmap initiative to build a free and open source software infrastructure for translational research in medical image analysis, In Journal of the American Medical Informatics Association, Vol. 19, No. 2, pp. 176--180. 2012.
DOI: 10.1136/amiajnl-2011-000493
M. Kim, G. Chen, C.D. Hansen.
Dynamic particle system for mesh extraction on the GPU, In Proceedings of the 5th Annual Workshop on General Purpose Processing with Graphics Processing Units, London, England, GPGPU-5, ACM, New York, NY, USA, pp. 38--46. 2012.
ISBN: 978-1-4503-1233-2
DOI: 10.1145/2159430.215943
Keywords: CUDA, GPGPU, particle systems, volumetric data mesh extraction
J. King, H. Mirzaee, J.K. Ryan, R.M. Kirby.
Smoothness-Increasing Accuracy-Conserving (SIAC) Filtering for Discontinuous Galerkin Solutions: Improved Errors Versus Higher-Order Accuracy, In Journal of Scientific Computing, Vol. 53, pp. 129--149. 2012.
DOI: 10.1007/s10915-012-9593-8
Smoothness-increasing accuracy-conserving (SIAC) filtering has demonstrated its effectiveness in raising the convergence rate of discontinuous Galerkin solutions from order k + 1/2 to order 2k + 1 for specific types of translation invariant meshes (Cockburn et al. in Math. Comput. 72:577--606, 2003; Curtis et al. in SIAM J. Sci. Comput. 30(1):272--289, 2007; Mirzaee et al. in SIAM J. Numer. Anal. 49:1899--1920, 2011). Additionally, it improves the weak continuity in the discontinuous Galerkin method to k - 1 continuity. Typically this improvement has a positive impact on the error quantity in the sense that it also reduces the absolute errors. However, not enough emphasis has been placed on the difference between superconvergent accuracy and improved errors. This distinction is particularly important when it comes to understanding the interplay, introduced through meshing, between geometry and filtering. The underlying mesh over which the DG solution is built is important because the tool used in SIAC filtering, convolution, is scaled by the geometric mesh size. This heavily contributes to the effectiveness of the post-processor. In this paper, we present a study of this mesh scaling and how it factors into the theoretical errors. To accomplish the large volume of post-processing necessary for this study, commodity streaming multiprocessors were used; we demonstrate for structured meshes up to a 50× speedup in computational time over traditional CPU implementations of the SIAC filter.
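To make the mesh-scaling point concrete: in the standard symmetric SIAC formulation (following the Cockburn et al. and Mirzaee et al. references cited in the abstract), the post-processed solution is the DG solution convolved with a B-spline-based kernel whose argument is scaled by the mesh size h, so the filter footprint grows and shrinks with the element geometry:

```latex
u^{\star}(x) = \left(K_h \ast u_h\right)(x)
             = \frac{1}{h}\int_{\mathbb{R}} K\!\left(\frac{x-y}{h}\right) u_h(y)\,dy,
\qquad
K(x) = \sum_{\gamma=-k}^{k} c_{\gamma}\, \psi^{(k+1)}(x-\gamma),
```

where ψ^(k+1) is the central B-spline of order k+1 and the 2k+1 weights c_γ are chosen so that K reproduces polynomials of degree up to 2k. It is this 1/h scaling of the convolution that ties the filter's behavior, and hence the observed errors, to the underlying mesh.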
J. Knezevic, R.-P. Mundani, E. Rank, A. Khan, C.R. Johnson.
Extending the SCIRun Problem Solving Environment to Large-Scale Applications, In Proceedings of Applied Computing 2012, IADIS, pp. 171--178. October, 2012.
Keywords: scirun
S. Kole, N.P. Singh, R. King.
Whole Brain Fractal Analysis of the Cerebral Cortex across the Adult Lifespan, In Neurology, Meeting Abstracts I, Vol. 78, pp. P03.104. 2012.
D. Kopta, T. Ize, J. Spjut, E. Brunvand, A. Davis, A. Kensler.
Fast, Effective BVH Updates for Animated Scenes, In Proceedings of the Symposium on Interactive 3D Graphics and Games (I3D '12), pp. 197--204. 2012.
DOI: 10.1145/2159616.2159649
S. Kumar, V. Vishwanath, P. Carns, J.A. Levine, R. Latham, G. Scorzelli, H. Kolla, R. Grout, R. Ross, M.E. Papka, J. Chen, V. Pascucci.
Efficient data restructuring and aggregation for I/O acceleration in PIDX, In Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis, IEEE Computer Society Press, pp. 50:1--50:11. 2012.
ISBN: 978-1-4673-0804-5
Hierarchical, multiresolution data representations enable interactive analysis and visualization of large-scale simulations. One promising application of these techniques is to store high performance computing simulation output in a hierarchical Z (HZ) ordering that translates data from a Cartesian coordinate scheme to a one-dimensional array ordered by locality at different resolution levels. However, when the dimensions of the simulation data are not an even power of 2, parallel HZ ordering produces sparse memory and network access patterns that inhibit I/O performance. This work presents a new technique for parallel HZ ordering of simulation datasets that restructures simulation data into large (power of 2) blocks to facilitate efficient I/O aggregation. We perform both weak and strong scaling experiments using the S3D combustion application on both Cray-XE6 (65,536 cores) and IBM Blue Gene/P (131,072 cores) platforms. We demonstrate that data can be written in hierarchical, multiresolution format with performance competitive to that of native data-ordering methods.
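The HZ mapping itself is compact. The following sketch (ours, not PIDX code) computes the HZ index of a sample from its 3D coordinates, assuming a power-of-2 cubic volume, which is the case the paper's restructuring step produces; non-power-of-2 domains are where the sparse access patterns described above arise.

```python
# Sketch of hierarchical Z (HZ) ordering for a power-of-2 volume.
# Ours, not PIDX code; assumes a cubic 2^bits-per-axis domain.

def morton3d(x, y, z, bits):
    """Interleave the bits of (x, y, z) into a single Z-order (Morton) index."""
    m = 0
    for b in range(bits):
        m |= ((x >> b) & 1) << (3 * b)
        m |= ((y >> b) & 1) << (3 * b + 1)
        m |= ((z >> b) & 1) << (3 * b + 2)
    return m

def hz_index(z_order, total_bits):
    """Map a Z-order index to its HZ index: the level-0 sample comes first,
    then each finer level's samples in Z order. The resolution level is
    determined by the position of the lowest set bit."""
    if z_order == 0:
        return 0
    tz = (z_order & -z_order).bit_length() - 1   # count of trailing zeros
    level = total_bits - tz
    return (1 << (level - 1)) + (z_order >> (tz + 1))

# Example: an 8x8x8 volume uses 3 bits per axis, 9 bits total.
bits_per_axis, total_bits = 3, 9
print(hz_index(morton3d(3, 5, 1, bits_per_axis), total_bits))
```

With this mapping, the coarse levels of the volume occupy a contiguous prefix of the one-dimensional array, which is what lets the restructured power-of-2 blocks be aggregated into dense, contiguous writes.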
A.G. Landge, J.A. Levine, A. Bhatele, K.E. Isaacs, T. Gamblin, S. Langer, M. Schulz, P.-T. Bremer, V. Pascucci.
Visualizing Network Traffic to Understand the Performance of Massively Parallel Simulations, In IEEE Transactions on Visualization and Computer Graphics, Vol. 18, No. 12, IEEE, pp. 2467--2476. Dec, 2012.
DOI: 10.1109/TVCG.2012.286
The performance of massively parallel applications is often heavily impacted by the cost of communication among compute nodes. However, determining how best to use the network is a formidable task, made challenging by the ever-increasing size and complexity of modern supercomputers. This paper applies visualization techniques to aid parallel application developers in understanding network activity by enabling a detailed exploration of the flow of packets through the hardware interconnect. In order to visualize this large and complex data, we employ two linked views of the hardware network. The first is a 2D view that represents the network structure as one of several simplified planar projections. This view is designed to allow a user to easily identify trends and patterns in the network traffic. The second is a 3D view that augments the 2D view by preserving the physical network topology and providing a context that is familiar to the application developers. Using the massively parallel multi-physics code pF3D as a case study, we demonstrate that our tool provides valuable insight that we use to explain and optimize pF3D's performance on an IBM Blue Gene/P system.
C.H. Lee, B.O. Alpert, P. Sankaranarayanan, O. Alter.
GSVD Comparison of Patient-Matched Normal and Tumor aCGH Profiles Reveals Global Copy-Number Alterations Predicting Glioblastoma Multiforme Survival, In PLoS ONE, Vol. 7, No. 1, Public Library of Science, pp. e30098. 2012.
DOI: 10.1371/journal.pone.0030098
J.A. Levine, S. Jadhav, H. Bhatia, V. Pascucci, P.-T. Bremer.
A Quantized Boundary Representation of 2D Flows, In Computer Graphics Forum, Vol. 31, No. 3 Pt. 1, pp. 945--954. June, 2012.
DOI: 10.1111/j.1467-8659.2012.03087.x
