Potential for unreliable interpretation of EEG recorded with microelectrodes
W.C. Stacey, S. Kellis, B. Greger, C.R. Butson, P.R. Patel, T. Assaf, T. Mihaylova, S. Glynn.
Potential for unreliable interpretation of EEG recorded with microelectrodes, In Epilepsia, May, 2013.
ISSN: 0013-9580
DOI: 10.1111/epi.12202
Surgical technique: talar neck osteotomy to lengthen the medial column after a malunited talar neck fracture
T. Suter, A. Barg, M. Knupp, H.B. Henninger, B. Hintermann.
Surgical technique: talar neck osteotomy to lengthen the medial column after a malunited talar neck fracture, In Clinical Orthopaedics and Related Research, Vol. 471, No. 4, pp. 1356--1364. 2013.
DOI: 10.1080/10255842.2013.809711
PubMed ID: 23809004
ABSTRACT
Understanding the mechanical behaviour of chondrocytes as a result of cartilage tissue mechanics has significant implications both for evaluating mechanobiological function and for elaborating on damage mechanisms. A common procedure for prediction of chondrocyte mechanics (and of cell mechanics in general) relies on a computational post-processing approach where tissue-level deformations drive cell-level models. Potential loss of information in this numerical coupling approach may cause erroneous cellular-scale results, particularly during multiphysics analysis of cartilage. The goal of this study was to evaluate the capacity of first- and second-order data passing to predict chondrocyte mechanics by analysing cartilage deformations obtained for loading scenarios of varying complexity. A tissue-scale model with a sub-region incorporating representation of chondron size and distribution served as control. The post-processing approach first required solution of a homogeneous tissue-level model, the results of which were used to drive a separate cell-level model (with the same characteristics as the sub-region of the control model). First-order data passing appeared to be adequate for simplified loading of the cartilage and for a subset of cell deformation metrics, for example, change in aspect ratio. The second-order data passing scheme was more accurate, particularly when asymmetric permeability of the tissue boundaries was considered. Yet, the method exhibited limitations for predictions of instantaneous metrics related to the fluid phase, for example, mass exchange rate. Nonetheless, employing higher-order data exchange schemes may be necessary to understand the biphasic mechanics of cells under lifelike tissue loading states for the whole time history of the simulation.
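For intuition about the two data-passing schemes, one plausible reading is in terms of the local Taylor expansion of the tissue-level displacement field that drives the cell-level model's boundary. A minimal numerical sketch of that reading follows; the field u_tissue and all numbers are hypothetical stand-ins for a finite element solution, not the study's actual models.

```python
import numpy as np

def u_tissue(x):
    """Hypothetical smooth tissue-level displacement field, standing in for
    the homogeneous tissue model's finite element solution."""
    return np.array([0.02 * x[0] ** 2, -0.01 * x[0] * x[1]])

def jacobian(f, x, h=1e-5):
    """Central-difference Jacobian of a vector field f at x."""
    cols = []
    for j in range(2):
        e = np.zeros(2)
        e[j] = h
        cols.append((f(x + e) - f(x - e)) / (2 * h))
    return np.stack(cols, axis=1)

x0 = np.array([1.0, 0.5])    # centroid of the cell-level sub-region
xb = np.array([1.1, 0.45])   # a point on the cell model's boundary
d = xb - x0

J = jacobian(u_tissue, x0)   # first derivatives of the displacement
H = [jacobian(lambda y, k=k: jacobian(u_tissue, y)[k], x0) for k in range(2)]

u_first = u_tissue(x0) + J @ d                                         # first-order passing
u_second = u_first + 0.5 * np.array([d @ H[k] @ d for k in range(2)])  # adds curvature
print("first-order error :", np.linalg.norm(u_tissue(xb) - u_first))
print("second-order error:", np.linalg.norm(u_tissue(xb) - u_second))
```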
Four‐dimensional tissue deformation reconstruction (4D TDR) validation using a real tissue phantom
M. Szegedi, J. Hinkle, P. Rassiah, V. Sarkar, B. Wang, S. Joshi, B. Salter.
Four‐dimensional tissue deformation reconstruction (4D TDR) validation using a real tissue phantom, In Journal of Applied Clinical Medical Physics, Vol. 14, No. 1, pp. 115--132. 2013.
DOI: 10.1120/jacmp.v14i1.4012
ABSTRACT
Calculation of four‐dimensional (4D) dose distributions requires the remapping of dose calculated on each available binned phase of the 4D CT onto a reference phase for summation. Deformable image registration (DIR) is usually used for this task, but unfortunately almost always considers only endpoints rather than the whole motion path. A new algorithm, 4D tissue deformation reconstruction (4D TDR), which uses either CT projection data or all available 4D CT images to reconstruct 4D motion data, was developed. The purpose of this work is to verify the accuracy of the fit of this new algorithm using a realistic tissue phantom. A previously described fresh tissue phantom with implanted electromagnetic tracking (EMT) fiducials was used for this experiment. The phantom was animated using a sinusoidal and a real patient‐breathing signal. Four‐dimensional computed tomography (4D CT) and EMT tracking were performed. Deformation reconstruction was conducted using the 4D TDR and a modified 4D TDR that takes real tissue hysteresis (4D TDRHysteresis) into account. Deformation estimation results were compared to the EMT and 4D CT coordinate measurements. To eliminate the possibility of the high-contrast markers driving the 4D TDR, a comparison was made using the original 4D CT data and data in which the fiducials were electronically masked. For the sinusoidal animation, the average deviation of the 4D TDR compared to the manually determined coordinates from 4D CT data was 1.9 mm, albeit with deviations as large as 4.5 mm. The 4D TDR calculation traces matched 95% of the EMT trace within 2.8 mm. The motion hysteresis generated by real tissue is not properly projected other than at endpoints of motion. Sinusoidal animation resulted in 95% of EMT-measured locations falling within 1.2 mm of the measured 4D CT motion path, enabling accurate motion characterization of the tissue hysteresis. The 4D TDRHysteresis calculation traces accounted well for the hysteresis and matched 95% of the EMT trace within 1.6 mm. An irregular (in amplitude and frequency) recorded patient trace applied to the same tissue resulted in 95% of the EMT trace points falling within 4.5 mm of both the 4D CT and 4D TDRHysteresis motion paths. The average deviation of 4D TDRHysteresis compared to 4D CT datasets was 0.9 mm under regular sinusoidal and 1.0 mm under irregular patient-trace animation. The EMT trace data fit to the 4D TDRHysteresis was within 1.6 mm for sinusoidal and 4.5 mm for patient-trace animation. While various algorithms have been validated for end‐to‐end accuracy, one can only be fully confident in the performance of a predictive algorithm if one looks at data along the full motion path. The 4D TDR, calculating the whole motion path rather than only phase‐ or endpoints, allows us to fully characterize the accuracy of a predictive algorithm, minimizing assumptions. This algorithm went one step further by allowing for the inclusion of tissue hysteresis effects, a real‐world effect that is neglected when endpoint‐only validation is performed. Our results show that the 4D TDRHysteresis correctly models the deformation at the endpoints and at any intermediate points along the motion path.
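The accuracy figures quoted above (mean deviation, "95% of the EMT trace within X mm") can be reproduced in form, if not in substance, from two matched motion traces. A small sketch, assuming the traces are sampled at matching time points; all data here are synthetic.

```python
import numpy as np

def deviation_stats(trace_a, trace_b):
    """Pointwise Euclidean deviations between two (N, 3) motion traces
    sampled at matching times; returns the mean and the 95th percentile,
    the two summary statistics quoted in the abstract."""
    d = np.linalg.norm(np.asarray(trace_a) - np.asarray(trace_b), axis=1)
    return d.mean(), np.percentile(d, 95)

# Synthetic stand-ins for an EMT-measured trace and a 4D TDR prediction.
t = np.linspace(0, 2 * np.pi, 200)
emt = np.stack([np.sin(t), np.zeros_like(t), 0.5 * np.cos(t)], axis=1)
tdr = emt + np.random.default_rng(0).normal(0, 0.05, emt.shape)

mean_dev, p95 = deviation_stats(emt, tdr)
print(f"mean deviation {mean_dev:.2f} mm; 95% of points within {p95:.2f} mm")
```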
Biomechanical evaluation of subpectoral biceps tenodesis: dual suture anchor versus interference screw fixation
R.Z. Tashjian, H.B. Henninger.
Biomechanical evaluation of subpectoral biceps tenodesis: dual suture anchor versus interference screw fixation, In Journal of Shoulder and Elbow Surgery, Vol. 22, No. 10, pp. 1408--1412. 2013.
DOI: 10.1016/j.jse.2012.12.039
ABSTRACT
Background: Subpectoral biceps tenodesis has been reliably used to treat a variety of biceps tendon pathologies. Interference screws have been shown to have superior biomechanical properties compared with suture anchors, although only single-anchor constructs have been evaluated in the subpectoral region. The purpose of this study was to compare interference screw fixation with a suture anchor construct using 2 anchors for a subpectoral tenodesis.
Methods: A subpectoral biceps tenodesis was performed using either an interference screw (8 × 12 mm; Arthrex) or 2 suture anchors (Mitek G4) with #2 FiberWire (Arthrex) in a Krackow and Bunnell configuration in seven pairs of human cadavers. The humerus was inverted in an Instron testing machine and the biceps tendon was loaded vertically. Displacement-driven cyclic loading was performed, followed by failure loading.
Results: Suture anchor constructs had lower stiffness upon initial loading (P = .013). After 100 cycles, the stiffness of the suture anchor construct "softened" (decreased 9%, P < .001), whereas the screw construct was unchanged (0.4%, P = .078). Suture anchors had significantly higher ultimate failure strain than the screws (P = .003), but ultimate failure loads were similar between constructs: 280 ± 95 N (screw) vs 310 ± 91 N (anchors) (P = .438).
Conclusion: The interference screw was significantly stiffer than the suture anchor construct. Ultimate failure loads were similar between constructs, unlike previous reports indicating that interference screws had higher ultimate failure loads than suture anchors. Neither construct was superior with regard to stress, although suture anchors could withstand greater elongation prior to failure.
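As a side note on the stiffness comparison, construct stiffness in such tests is commonly reported as the slope of the linear region of the load-displacement curve. A hedged sketch with made-up numbers chosen to mimic the reported ~9% softening; the actual study's processing may differ.

```python
import numpy as np

def stiffness(displacement_mm, load_n):
    """Construct stiffness (N/mm) as the least-squares slope of the
    load-displacement curve's linear region."""
    slope, _intercept = np.polyfit(displacement_mm, load_n, 1)
    return slope

disp = np.linspace(0.0, 2.0, 50)
rng = np.random.default_rng(1)
load_initial = 60 * disp + rng.normal(0, 2, disp.size)  # hypothetical initial cycle
load_cycled = 0.91 * load_initial                       # ~9% softer after cycling

k0, k100 = stiffness(disp, load_initial), stiffness(disp, load_cycled)
print(f"initial {k0:.1f} N/mm, after cycling {k100:.1f} N/mm "
      f"({100 * (k100 - k0) / k0:+.1f}%)")
```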
Modeling Longitudinal MRI Changes in Populations Using a Localized, Information-Theoretic Measure of Contrast
A. Vardhan, M.W. Prastawa, J. Piven, G. Gerig.
Modeling Longitudinal MRI Changes in Populations Using a Localized, Information-Theoretic Measure of Contrast, In Proceedings of the 2013 IEEE 10th International Symposium on Biomedical Imaging (ISBI), pp. 1396--1399. 2013.
DOI: 10.1109/ISBI.2013.6556794
ABSTRACT
Longitudinal MR imaging during early brain development provides important information about growth patterns and the development of neurological disorders. We propose a new framework for studying brain growth patterns within and across populations based on MRI contrast changes, measured at each time point of interest and at each voxel. Our method uses regression in the LogOdds space and an information-theoretic measure of distance between distributions to capture contrast in a manner that is robust to imaging parameters and without requiring intensity normalization. We apply our method to a clinical neuroimaging study on early brain development in autism, where we obtain a 4D spatiotemporal model of contrast changes in multimodal structural MRI.
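The abstract does not name the specific information-theoretic distance, so the sketch below uses the symmetrized Kullback-Leibler divergence between tissue intensity histograms as one representative choice of a contrast measure that needs no intensity normalization. All data are synthetic.

```python
import numpy as np

def symmetric_kl(p, q, eps=1e-12):
    """Symmetrized Kullback-Leibler divergence between two discrete
    distributions -- one possible normalization-free contrast measure."""
    p = p / p.sum()
    q = q / q.sum()
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return kl(p, q) + kl(q, p)

rng = np.random.default_rng(0)
# Synthetic gray- and white-matter intensity histograms in a local region.
gm = np.histogram(rng.normal(70, 8, 5000), bins=64, range=(0, 255))[0].astype(float)
wm = np.histogram(rng.normal(110, 8, 5000), bins=64, range=(0, 255))[0].astype(float)
print("GM/WM contrast:", round(symmetric_kl(gm, wm), 3))
```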
A longitudinal structural MRI study of change in regional contrast in Autism Spectrum Disorder
A. Vardhan, J. Piven, M. Prastawa, G. Gerig.
A longitudinal structural MRI study of change in regional contrast in Autism Spectrum Disorder, In Proceedings of the 19th Annual Meeting of the Organization for Human Brain Mapping (OHBM), pp. (in press). 2013.
ABSTRACT
The brain undergoes tremendous changes in shape, size, structure, and chemical composition between birth and 2 years of age [Rutherford, 2001]. Existing studies have focused on morphometric and volumetric changes to study the early developing brain. Although there have been some recent appearance studies based on intensity changes [Serag et al., 2011], these are highly dependent on the quality of normalization. The study we present here uses the changes in contrast between gray and white matter tissue intensities in structural MRI of the brain as a measure of regional growth [Vardhan et al., 2011]. Kernel regression was used to generate continuous curves characterizing the changes in contrast with time. A statistical analysis was then performed on these curves, comparing two population groups: (i) HR+: high-risk subjects who tested positive for Autism Spectrum Disorder (ASD), and (ii) HR-: high-risk subjects who tested negative for ASD.
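Nadaraya-Watson regression with a Gaussian kernel is one standard form of the kernel regression the abstract mentions; whether the study used this exact estimator is an assumption. A toy sketch of producing a continuous contrast-vs-age curve (ages, contrast values, and bandwidth are hypothetical):

```python
import numpy as np

def kernel_regression(ages, values, query_ages, bandwidth=2.0):
    """Nadaraya-Watson estimator with a Gaussian kernel: a weighted average
    of observed values, yielding a continuous contrast-vs-age curve."""
    curve = []
    for q in np.atleast_1d(query_ages):
        w = np.exp(-0.5 * ((ages - q) / bandwidth) ** 2)
        curve.append(np.sum(w * values) / np.sum(w))
    return np.array(curve)

ages = np.array([6.0, 6.0, 12.0, 12.0, 24.0, 24.0])        # months (synthetic)
contrast = np.array([0.10, 0.12, 0.25, 0.22, 0.35, 0.33])  # per-subject contrast
print(np.round(kernel_regression(ages, contrast, np.linspace(6, 24, 7)), 3))
```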
Proper Ordered Meshing of Complex Shapes and Optimal Graph Cuts Applied to Atrial-Wall Segmentation from DE-MRI
G. Veni, Z. Fu, S.P. Awate, R.T. Whitaker.
Proper Ordered Meshing of Complex Shapes and Optimal Graph Cuts Applied to Atrial-Wall Segmentation from DE-MRI, In Proceedings of the 2013 IEEE 10th International Symposium on Biomedical Imaging (ISBI), pp. 1296--1299. 2013.
DOI: 10.1109/ISBI.2013.6556769
ABSTRACT
Segmentation of the left atrium wall from delayed enhancement MRI is challenging because of inconsistent contrast combined with noise and high variation in atrial shape and size. This paper presents a method for left-atrium wall segmentation using a novel, sophisticated mesh-generation strategy and graph cuts on a proper-ordered graph. The mesh is part of a template/model that has an associated set of learned intensity features. When this mesh is overlaid onto a test image, it produces a set of costs on the graph vertices which eventually leads to an optimal segmentation. The novelty also lies in the construction of proper-ordered graphs on complex shapes and in choosing among distinct classes of base shapes/meshes for automatic segmentation. We evaluate the proposed segmentation framework quantitatively on simulated and clinical cardiac MRI.
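The optimization primitive underlying both this paper and the IPMI paper below is a minimum s-t cut on a multi-column graph built over the mesh. The toy sketch below shows the idea on two 3-level columns with a hard one-level smoothness constraint; the capacities are hypothetical stand-ins for learned intensity costs, and the full proper-ordered construction on complex shapes is well beyond this sketch.

```python
import networkx as nx

INF = 10 ** 9
G = nx.DiGraph()
# Hypothetical per-column costs of placing the wall boundary at levels 0-2
# along the mesh normal (stand-ins for learned intensity features).
costs = {"a": [4.0, 1.0, 6.0], "b": [5.0, 7.0, 2.0]}

for col, c in costs.items():
    chain = ["s", f"{col}1", f"{col}2", "t"]
    for level, (u, v) in enumerate(zip(chain, chain[1:])):
        G.add_edge(u, v, capacity=c[level])  # cutting this arc = boundary at `level`

# Hard smoothness: boundary levels in adjacent columns may differ by at most 1.
G.add_edge("a2", "b1", capacity=INF)
G.add_edge("b2", "a1", capacity=INF)

cut_value, (source_side, _sink_side) = nx.minimum_cut(G, "s", "t")
levels = {col: sum(1 for n in source_side if n.startswith(col)) for col in costs}
print(cut_value, levels)  # total cost 3.0; boundary at level 1 in a, level 2 in b
```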
Bayesian Segmentation of Atrium Wall using Globally-Optimal Graph Cuts on 3D Meshes
G. Veni, S.P. Awate, Z. Fu, R.T. Whitaker.
Bayesian Segmentation of Atrium Wall using Globally-Optimal Graph Cuts on 3D Meshes, In Proceedings of the International Conference on Information Processing in Medical Imaging (IPMI), Lecture Notes in Computer Science (LNCS), Vol. 7917, pp. 656--667. 2013.
PubMed ID: 24684007
ABSTRACT
Efficient segmentation of the left atrium (LA) wall from delayed enhancement MRI is challenging due to inconsistent contrast combined with noise and high variation in atrial shape and size. We present a surface-detection method that is capable of extracting the atrial wall by computing an optimal a-posteriori estimate. This estimation is done on a set of nested meshes constructed from an ensemble of segmented training images, using graph cuts on an associated multi-column, proper-ordered graph. The graph/mesh is part of a template/model that has an associated set of learned intensity features. When this mesh is overlaid onto a test image, it produces a set of costs which lead to an optimal segmentation. The 3D mesh has an associated weighted, directed multi-column graph with edges that encode smoothness and inter-surface penalties. Unlike previous graph-cut methods that impose hard constraints on the surface properties, the proposed method follows from a Bayesian formulation, resulting in soft penalties on spatial variation of the cuts through the mesh. The novelty of this method also lies in the construction of proper-ordered graphs on complex shapes for choosing among distinct classes of base shapes for automatic LA segmentation. We evaluate the proposed segmentation framework on simulated and clinical cardiac MRI.
UNC-Utah NA-MIC DTI framework: atlas based fiber tract analysis with application to a study of nicotine smoking addiction
A.R. Verde, J.-B. Berger, A. Gupta, M. Farzinfar, A. Kaiser, V.W. Chanon, C. Boettiger, H. Johnson, J. Matsui, A. Sharma, C. Goodlett, Y. Shi, H. Zhu, G. Gerig, S. Gouttard, C. Vachet, M. Styner.
UNC-Utah NA-MIC DTI framework: atlas based fiber tract analysis with application to a study of nicotine smoking addiction, In Proc. SPIE Medical Imaging 2013: Image Processing, Vol. 8669, pp. 86692D-1--86692D-8. 2013.
DOI: 10.1117/12.2007093
ABSTRACT
Purpose: The UNC-Utah NA-MIC DTI framework is a coherent, open-source, atlas-based fiber tract analysis framework that addresses the lack of a standardized fiber-tract-based DTI analysis workflow in the field. Most steps utilize graphical user interfaces (GUIs) to simplify interaction and provide an extensive DTI analysis framework for non-technical researchers/investigators. Data: We illustrate the use of our framework on a 54-direction DWI neuroimaging study contrasting 15 smokers and 14 controls. Method(s): At the heart of the framework is a set of tools anchored around the multi-purpose image analysis platform 3D Slicer. Several workflow steps are handled via external modules called from Slicer in order to provide an integrated approach. Our workflow starts with conversion from DICOM, followed by thorough automatic and interactive quality control (QC), which is essential for a sound DTI study. Our framework is centered on a DTI atlas that is either provided as a template or computed directly as an unbiased average atlas from the study data via deformable atlas building. Fiber tracts are defined via interactive tractography and clustering on that atlas. DTI fiber profiles are extracted automatically using the atlas mapping information. These tract parameter profiles are then analyzed using our statistics toolbox (FADTTS). The statistical results are then mapped back onto the fiber bundles and visualized with 3D Slicer. Results: This framework provides a coherent set of tools for DTI quality control and analysis. Conclusions: This framework will provide the field with a uniform process for DTI quality control and analysis.
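"DTI fiber profiles" are commonly computed by parameterizing each bundle by arc length and averaging a scalar (e.g., FA) per bin; whether the framework does exactly this is an assumption, but the sketch conveys the shape of the data handed to FADTTS. All inputs are synthetic.

```python
import numpy as np

def tract_profile(points, fa_values, n_bins=20):
    """Mean FA profile along a fiber bundle: samples binned by normalized
    arc length so profiles are comparable across subjects."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    s /= s[-1]                                   # normalized arc length in [0, 1]
    bins = np.minimum((s * n_bins).astype(int), n_bins - 1)
    return np.array([fa_values[bins == b].mean() for b in range(n_bins)])

# Synthetic straight "tract" with a smooth FA bump along its length.
pts = np.column_stack([np.linspace(0, 50, 200), np.zeros(200), np.zeros(200)])
fa = 0.4 + 0.2 * np.sin(np.linspace(0, np.pi, 200))
print(np.round(tract_profile(pts, fa), 2))
```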
Analyzing Imaging Biomarkers for Traumatic Brain Injury Using 4D Modeling of Longitudinal MRI
Bo Wang, M.W. Prastawa, A. Irimia, M.C. Chambers, N. Sadeghi, P.M. Vespa, J.D. van Horn, G. Gerig.
Analyzing Imaging Biomarkers for Traumatic Brain Injury Using 4D Modeling of Longitudinal MRI, In Proceedings of the 2013 IEEE 10th International Symposium on Biomedical Imaging (ISBI), pp. 1392--1395. 2013.
DOI: 10.1109/ISBI.2013.6556793
ABSTRACT
Quantitative imaging biomarkers are important for assessment of impact, recovery, and treatment efficacy in patients with traumatic brain injury (TBI). To our knowledge, the identification of such biomarkers characterizing disease progression and recovery has been insufficiently explored in TBI, owing to difficulties in registering baseline and follow-up data and in automatically segmenting tissue and lesions from multimodal, longitudinal MR image data. We propose a new methodology for computing imaging biomarkers in TBI by extending a recently proposed spatiotemporal 4D modeling approach in order to compute quantitative features of tissue change. The proposed method computes surface-based and voxel-based measurements such as cortical thickness, volume changes, and geometric deformation. We analyze the potential for clinical use of these biomarkers by correlating them with TBI-specific patient scores at the level of the whole brain and of individual regions. Our preliminary results indicate that the proposed voxel-based biomarkers are correlated with clinical outcomes.
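The biomarker-outcome correlation analysis described here reduces, per region, to correlating a scalar imaging measurement with a clinical score. A minimal sketch with simulated data (variable names and effect sizes are hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-patient values: a voxel-based biomarker (regional volume
# change between baseline and follow-up) and a TBI-specific outcome score.
volume_change = rng.normal(-5.0, 2.0, 20)
outcome_score = 0.8 * volume_change + rng.normal(0.0, 1.5, 20)

r, p = stats.pearsonr(volume_change, outcome_score)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```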
Inverse Electrocardiographic Source Localization of Ischemia: An Optimization Framework and Finite Element Solution
D. Wang, R.M. Kirby, R.S. MacLeod, C.R. Johnson.
Inverse Electrocardiographic Source Localization of Ischemia: An Optimization Framework and Finite Element Solution, In Journal of Computational Physics, Vol. 250, Academic Press, pp. 403--424. 2013.
ISSN: 0021-9991
DOI: 10.1016/j.jcp.2013.05.027
ABSTRACT
With the goal of non-invasively localizing cardiac ischemic disease using body-surface potential recordings, we attempted to reconstruct the transmembrane potential (TMP) throughout the myocardium with the bidomain heart model. The task is an inverse source problem governed by partial differential equations (PDE). Our main contribution is solving the inverse problem within a PDE-constrained optimization framework that enables various physically-based constraints in both equality and inequality forms. We formulated the optimality conditions rigorously in the continuum before deriving the finite element discretization, thereby making the optimization independent of the discretization choice. Such a formulation was derived for the L2-norm Tikhonov regularization and the total variation minimization. The subsequent numerical optimization was carried out by a primal-dual interior-point method tailored to our problem's specific structure. Our simulations used realistic, fiber-included heart models consisting of up to 18,000 nodes, much finer than any inverse models previously reported. With synthetic ischemia data we localized ischemic regions with roughly a 10% false-negative rate or a 20% false-positive rate under conditions of up to 5% input noise. With ischemia data measured from animal experiments, we reconstructed TMPs with roughly 0.9 correlation with the ground truth. While precisely estimating the TMP in general cases remains an open problem, our study shows the feasibility of reconstructing TMP during the ST interval as a means of ischemia localization.
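The paper's solver is a PDE-constrained primal-dual interior-point method; as a much smaller illustration of just the L2-norm Tikhonov ingredient, the sketch below solves a generic regularized linear inverse problem via an augmented least-squares system. The operator and "ischemic" source are synthetic.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Minimize ||Ax - b||^2 + lam * ||x||^2 by stacking the regularizer
    into an augmented least-squares system (L2 Tikhonov, identity operator)."""
    n = A.shape[1]
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    return np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 25))                 # generic forward operator (synthetic)
x_true = np.zeros(25)
x_true[8:12] = 1.0                            # a compact "ischemic" source
b = A @ x_true + rng.normal(0, 0.05, 40)      # noisy body-surface measurements

x_hat = tikhonov_solve(A, b, lam=0.1)
print("recovery correlation:", round(float(np.corrcoef(x_true, x_hat)[0, 1]), 2))
```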
Visualizing Robustness of Critical Points for 2D Time-Varying Vector Fields
ABSTRACT
Analyzing critical points and their temporal evolutions plays a crucial role in understanding the behavior of vector fields. A key challenge is to quantify the stability of critical points: more stable points may represent more important phenomena or vice versa. The topological notion of robustness is a tool which allows us to quantify rigorously the stability of each critical point. Intuitively, the robustness of a critical point is the minimum amount of perturbation necessary to cancel it within a local neighborhood, measured under an appropriate metric. In this paper, we introduce a new analysis and visualization framework which enables interactive exploration of robustness of critical points for both stationary and time-varying 2D vector fields. This framework allows the end-users, for the first time, to investigate how the stability of a critical point evolves over time. We show that this depends heavily on the global properties of the vector field and that structural changes can correspond to interesting behavior. We demonstrate the practicality of our theories and techniques on several datasets involving combustion and oceanic eddy simulations and obtain some key insights regarding their stable and unstable features.
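As a concrete entry point, critical points of a sampled 2D vector field can be located (up to cell resolution) by sign-change tests; computing the robustness the paper defines additionally requires tracking how perturbations pair and cancel critical points, which this sketch does not attempt. The toy field is hypothetical.

```python
import numpy as np

def critical_cells(u, v):
    """Grid cells where both vector components change sign -- a standard
    necessary condition for a critical point inside the cell. This is
    detection only; topological robustness would additionally require
    analyzing how perturbations pair and cancel these points."""
    def sign_change(f):
        corners = np.stack([f[:-1, :-1], f[1:, :-1], f[:-1, 1:], f[1:, 1:]])
        return (corners.min(axis=0) < 0) & (corners.max(axis=0) > 0)
    return sign_change(u) & sign_change(v)

y, x = np.mgrid[-1:1:64j, -1:1:64j]
u, v = x ** 2 + y - 0.3, y ** 2 - x          # toy field with isolated zeros
print("candidate critical cells:", int(critical_cells(u, v).sum()))
```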
Synthetic Brainbows
Y. Wan, H. Otsuna, C.D. Hansen.
Synthetic Brainbows, In Computer Graphics Forum, Vol. 32, No. 3pt4, Wiley-Blackwell, pp. 471--480. June, 2013.
DOI: 10.1111/cgf.12134
ABSTRACT
Brainbow is a genetic engineering technique that randomly colorizes cells. Biological samples processed with this technique and imaged with confocal microscopy have distinctive colors for individual cells. Complex cellular structures can then be easily visualized. However, the complexity of the Brainbow technique limits its applications. In practice, most confocal microscopy scans use different fluorescence stains, typically labeling at most three distinct cellular structures. These structures are often packed and obscure each other in rendered images, making analysis difficult. In this paper, we leverage a process known as GPU framebuffer feedback loops to synthesize Brainbow-like images. In addition, we incorporate ID shuffling and Monte Carlo sampling into our technique, so that it can be applied to single-channel confocal microscopy data. The synthesized Brainbow images were presented to domain experts, who gave positive feedback. A user survey demonstrates that our synthetic Brainbow technique improves visualizations of volume data with complex structures for biologists.
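The core colorization idea, assigning each structure ID a random color through a shuffled lookup table, is easy to illustrate on the CPU; the paper's contribution is doing the equivalent on the GPU via framebuffer feedback loops, which this sketch does not reproduce.

```python
import numpy as np

def shuffle_colorize(labels, seed=0):
    """Map each structure ID to a random RGB color via a shuffled lookup
    table -- the core of Brainbow-like colorization of label data (done on
    the GPU with framebuffer feedback loops in the paper)."""
    rng = np.random.default_rng(seed)
    lut = rng.integers(0, 256, size=(int(labels.max()) + 1, 3), dtype=np.uint8)
    lut[0] = 0  # keep background black
    return lut[labels]

labels = np.random.default_rng(1).integers(0, 5, size=(4, 4))  # toy label image
print(shuffle_colorize(labels))
```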
Modeling 4D changes in pathological anatomy using domain adaptation: analysis of TBI imaging using a tumor database
Bo Wang, M. Prastawa, A. Saha, S.P. Awate, A. Irimia, M.C. Chambers, P.M. Vespa, J.D. Van Horn, V. Pascucci, G. Gerig.
Modeling 4D changes in pathological anatomy using domain adaptation: analysis of TBI imaging using a tumor database, In Proceedings of the 2013 MICCAI-MBIA Workshop, Lecture Notes in Computer Science (LNCS), Vol. 8159, Note: Awarded Best Paper!, pp. 31--39. 2013.
DOI: 10.1007/978-3-319-02126-3_4
ABSTRACT
Analysis of 4D medical images presenting pathology (i.e., lesions) is significantly challenging due to the presence of complex changes over time. Image analysis methods for 4D images with lesions need to account for changes in brain structures due to deformation, as well as the formation and deletion of structures (e.g., edema, bleeding) due to the physiological processes associated with damage, intervention, and recovery. We propose a novel framework that models 4D changes in pathological anatomy across time and provides an explicit mapping from a healthy template to subjects with pathology. Moreover, our framework uses transfer learning to leverage rich information from a known source domain, where we have a collection of completely segmented images, to yield effective appearance models for the input target domain. The automatic 4D segmentation method uses a novel domain adaptation technique for generative kernel density models to transfer information between different domains, resulting in a fully automatic method that requires no user interaction. We demonstrate the effectiveness of our novel approach with the analysis of 4D images of traumatic brain injury (TBI), using a synthetic tumor database as the source domain.
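A generative kernel density appearance model of the kind the method builds on can be sketched with off-the-shelf KDEs; the paper's actual contribution, adapting these densities from the tumor source domain to the TBI target domain, is not shown here, and all samples are synthetic.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Source domain (a fully segmented database): per-class intensity samples.
samples = {"lesion": rng.normal(150, 10, 400), "healthy": rng.normal(90, 15, 400)}
kdes = {c: gaussian_kde(s) for c, s in samples.items()}

def classify(intensities):
    """Label each voxel by the class whose kernel density model assigns
    the highest likelihood."""
    classes = list(kdes)
    scores = np.array([kdes[c](intensities) for c in classes])
    return [classes[i] for i in np.argmax(scores, axis=0)]

print(classify(np.array([95.0, 145.0, 60.0])))  # -> healthy, lesion, healthy
```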
FluoRender, An Interactive Tool for Confocal Microscopy Data Visualization and Analysis
Y. Wan.
FluoRender, An Interactive Tool for Confocal Microscopy Data Visualization and Analysis, Note: Ph.D. Thesis, School of Computing, University of Utah, June, 2013.
ABSTRACT
Confocal microscopy has become a popular imaging technique in biology research in recent years. It is often used to study three-dimensional (3D) structures of biological samples. Confocal data are commonly multi-channel, with each channel resulting from a different fluorescent stain. The technique also resolves finely detailed structures in 3D, such as neuron fibers. Despite the plethora of volume rendering techniques that have been available for many years, there is a demand from biologists for a flexible tool that allows interactive visualization and analysis of multi-channel confocal data. Together with biologists, we have designed and developed FluoRender. It incorporates volume rendering techniques such as a two-dimensional (2D) transfer function and multi-channel intermixing. Rendering results can be enhanced through tone mapping and overlays. To facilitate analyses of confocal data, FluoRender provides interactive operations for extracting complex structures. Furthermore, we developed the Synthetic Brainbow technique, which takes advantage of the asynchronous behavior in Graphics Processing Unit (GPU) framebuffer loops and generates random colorizations for different structures in single-channel confocal data. The results from our Synthetic Brainbows, when applied to a sequence of developing cells, can then be used for tracking the movements of these cells. Finally, we present an application of FluoRender in the workflow of constructing anatomical atlases.
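A 2D transfer function of the sort mentioned here typically indexes an RGBA table by intensity and gradient magnitude. A CPU-side sketch of just the lookup (the table and volume are random placeholders; FluoRender's GPU implementation is far more involved):

```python
import numpy as np

def apply_2d_transfer_function(volume, tf, n_bins=64):
    """Look up RGBA values from a 2D transfer function indexed by intensity
    and gradient magnitude. `tf` is an (n_bins, n_bins, 4) RGBA table."""
    grad = np.linalg.norm(np.gradient(volume.astype(float)), axis=0)
    def to_bin(f):
        return np.minimum((f / (f.max() + 1e-12) * n_bins).astype(int), n_bins - 1)
    return tf[to_bin(volume), to_bin(grad)]

vol = np.random.default_rng(0).random((16, 16, 16))   # placeholder volume
tf = np.random.default_rng(1).random((64, 64, 4))     # placeholder RGBA table
print(apply_2d_transfer_function(vol, tf).shape)      # (16, 16, 16, 4)
```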
Comprehensible Presentation of Topological Information
G.H. Weber, K. Beketayev, P.-T. Bremer, B. Hamann, M. Haranczyk, M. Hlawitschka, V. Pascucci.
Comprehensible Presentation of Topological Information, No. LBNL-5693E, Lawrence Berkeley National Laboratory, 2013.
ABSTRACT
Topological information has proven very valuable in the analysis of scientific data. An important challenge that remains is presenting this highly abstract information in a way that is comprehensible even to those without an in-depth background in topology. Furthermore, it is often desirable to combine the structural insight gained by topological analysis with complementary information, such as geometric information. We present an overview of methods that use metaphors to make topological information more accessible to non-expert users, and we demonstrate their applicability to a range of scientific data sets. With the increasingly complex output of exascale simulations, the importance of having effective means of providing a comprehensible, abstract overview of data will grow. The techniques that we present will serve as an important foundation for this purpose.
Contour Boxplots: A Method for Characterizing Uncertainty in Feature Sets from Simulation Ensembles
R.T. Whitaker, M. Mirzargar, R.M. Kirby.
Contour Boxplots: A Method for Characterizing Uncertainty in Feature Sets from Simulation Ensembles, In IEEE Transactions on Visualization and Computer Graphics, Vol. 19, No. 12, pp. 2713--2722. December, 2013.
DOI: 10.1109/TVCG.2013.143
PubMed ID: 24051838
ABSTRACT
Ensembles of numerical simulations are used in a variety of applications, such as meteorology or computational solid mechanics, in order to quantify the uncertainty or possible error in a model or simulation. Deriving robust statistics and visualizing the variability of an ensemble is a challenging task and is usually accomplished through direct visualization of ensemble members or by providing aggregate representations such as an average or pointwise probabilities. In many cases, the interesting quantities in a simulation are not dense fields, but are sets of features that are often represented as thresholds on physical or derived quantities. In this paper, we introduce a generalization of boxplots, called contour boxplots, for visualization and exploration of ensembles of contours or level sets of functions. Conventional boxplots have been widely used as an exploratory or communicative tool for data analysis, and they typically show the median, mean, confidence intervals, and outliers of a population. The proposed contour boxplots are a generalization of functional boxplots, which build on the notion of data depth. Data depth approximates the extent to which a particular sample is centrally located within its density function. This produces a center-outward ordering that gives rise to the statistical quantities that are essential to boxplots. Here we present a generalization of functional data depth to contours and demonstrate methods for displaying the resulting boxplots for two-dimensional simulation data in weather forecasting and computational fluid dynamics.
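The center-outward ordering that drives a functional (and, by extension, contour) boxplot comes from data depth. A compact sketch of band depth for one-dimensional functional data, with a synthetic ensemble; extending the containment test from function graphs to contours is the paper's contribution and is not attempted here.

```python
import numpy as np

def band_depth(curves):
    """Band depth (j = 2): for each curve, the fraction of curve pairs whose
    pointwise envelope fully contains it. Sorting by depth gives the
    center-outward ordering behind functional boxplots."""
    n = len(curves)
    depth = np.zeros(n)
    for i in range(n):
        for j in range(n):
            for k in range(j + 1, n):
                lo = np.minimum(curves[j], curves[k])
                hi = np.maximum(curves[j], curves[k])
                depth[i] += np.all((curves[i] >= lo) & (curves[i] <= hi))
    return depth / (n * (n - 1) / 2)

t = np.linspace(0, 1, 50)
ensemble = np.array([np.sin(2 * np.pi * t) + s for s in np.linspace(-1, 1, 9)])
d = band_depth(ensemble)
print("most central member:", int(np.argmax(d)))  # the middle offset, index 4
```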
A Flexible Framework for Fusing Image Collections into Panoramas
W. Widanagamaachchi, P. Rosen, V. Pascucci.
A Flexible Framework for Fusing Image Collections into Panoramas, In Proceedings of the 2013 SIBGRAPI Conference on Graphics, Patterns, and Images, Note: Awarded Best Paper., pp. 195--202. 2013.
DOI: 10.1109/SIBGRAPI.2013.35
ABSTRACT
Panoramas create summary views of multiple images, which makes them a valuable means of analyzing huge quantities of image and video data. This paper introduces the Ray Graph, a general framework for panorama construction. With rays as its vertices, the Ray Graph uses its edges to specify a set of coherency relationships among all input rays. Consequently, by using a set of simple graph traversal rules, a diverse set of panorama structures can be enumerated, which can be used to efficiently and robustly generate panoramic images from image collections. To demonstrate this framework, we first use it to recreate both 360° and street panoramas. We further introduce two new panorama models: the centipede panorama, a hybrid of the 360° and street panoramas, and the storytelling panorama, a time-encoding panorama. Finally, we demonstrate the flexibility of this framework by enabling interactive brushing of panoramic regions for removal of undesired features such as occlusions and moving objects.
Adaptive Sparsity in Gaussian Graphical Models
E. Wong, S.P. Awate, P.T. Fletcher.
Adaptive Sparsity in Gaussian Graphical Models, In Proceedings of the 30th International Conference on Machine Learning (ICML), pp. (accepted). 2013.
ABSTRACT
An effective approach to structure learning and parameter estimation for Gaussian graphical models is to impose a sparsity prior, such as a Laplace prior, on the entries of the precision matrix. Such an approach involves a hyperparameter that must be tuned to control the amount of sparsity. In this paper, we introduce a parameter-free method for estimating a precision matrix with sparsity that adapts to the data automatically. We achieve this by formulating a hierarchical Bayesian model of the precision matrix with a noninformative Jeffreys' hyperprior. We also naturally enforce the symmetry and positive-definiteness constraints on the precision matrix by parameterizing it with the Cholesky decomposition. Experiments on simulated and real (cell signaling) data demonstrate that the proposed approach not only automatically adapts the sparsity of the model, but also results in improved estimates of the precision matrix compared to the Laplace prior model with sparsity parameter chosen by cross-validation.
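The baseline the experiments compare against, an L1-penalized (Laplace-prior-like) precision estimate with the sparsity weight chosen by cross-validation, is available off the shelf; the sketch below shows that baseline, not the paper's hierarchical Bayesian method, and assumes scikit-learn is installed.

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
true_prec = np.eye(5)
true_prec[0, 1] = true_prec[1, 0] = 0.4     # one true conditional dependency
X = rng.multivariate_normal(np.zeros(5), np.linalg.inv(true_prec), size=500)

# Cross-validated choice of the L1 penalty -- exactly the tuning step the
# paper's Jeffreys-hyperprior formulation removes.
model = GraphicalLassoCV().fit(X)
print("chosen alpha:", round(float(model.alpha_), 4))
print("nonzero off-diagonal entries:",
      int((np.abs(model.precision_) > 1e-3).sum() - 5))
```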
Bayesian Estimation of Regularization and Atlas Building in Diffeomorphic Image Registration
M. Zhang, N.P. Singh, P.T. Fletcher.
Bayesian Estimation of Regularization and Atlas Building in Diffeomorphic Image Registration, In Proceedings of the International Conference on Information Processing in Medical Imaging (IPMI), Lecture Notes in Computer Science (LNCS), pp. (accepted). 2013.
ABSTRACT
This paper presents a generative Bayesian model for diffeomorphic image registration and atlas building. We develop an atlas estimation procedure that simultaneously estimates the parameters controlling the smoothness of the diffeomorphic transformations. To achieve this, we introduce a Monte Carlo Expectation Maximization algorithm, where the expectation step is approximated via Hamiltonian Monte Carlo sampling on the manifold of diffeomorphisms. An added benefit of this stochastic approach is that it can successfully solve difficult registration problems involving large deformations, where direct geodesic optimization fails. Using synthetic data generated from the forward model with known parameters, we demonstrate the ability of our model to successfully recover the atlas and regularization parameters. We also demonstrate the effectiveness of the proposed method in the atlas estimation problem for 3D brain images.
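Hamiltonian Monte Carlo, the sampler family used for the approximate E-step, is easiest to see on a toy target. A minimal leapfrog-plus-Metropolis sketch on a 1D standard normal (the paper runs HMC on the manifold of diffeomorphisms, which this does not attempt):

```python
import numpy as np

def hmc_step(x, logp, grad, rng, step=0.15, n_leap=20):
    """One HMC step: leapfrog integration of Hamiltonian dynamics followed
    by a Metropolis accept/reject."""
    p0 = rng.normal()
    x_new, p = x, p0 + 0.5 * step * grad(x)       # initial half-step in momentum
    for i in range(n_leap):
        x_new = x_new + step * p                  # full position step
        if i < n_leap - 1:
            p = p + step * grad(x_new)            # full momentum step
    p = p + 0.5 * step * grad(x_new)              # final half-step in momentum
    log_alpha = (logp(x_new) - 0.5 * p ** 2) - (logp(x) - 0.5 * p0 ** 2)
    return x_new if np.log(rng.uniform()) < log_alpha else x

logp = lambda x: -0.5 * x ** 2                    # standard normal target
grad = lambda x: -x
rng, x, samples = np.random.default_rng(0), 0.0, []
for _ in range(3000):
    x = hmc_step(x, logp, grad, rng)
    samples.append(x)
print(f"sample mean {np.mean(samples):.2f}, std {np.std(samples):.2f}")
```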