Person:
Yezzi, Anthony


Publication Search Results

Now showing 1 - 10 of 82
  • Item
    Tracking deforming objects by filtering and prediction in the space of curves
    (Georgia Institute of Technology, 2009-12) Sundaramoorthi, Ganesh ; Mennucci, Andrea C. ; Soatto, Stefano ; Yezzi, Anthony
    We propose a dynamical model-based approach for tracking the shape and deformation of highly deforming objects from time-varying imagery. Previous works have assumed that the object deformation is smooth, which is realistic for the tracking problem, but most have restricted the deformation to belong to a finite-dimensional group, such as affine motions, or to finitely parameterized models. This, however, limits the accuracy of the tracking scheme. We exploit the smoothness assumption implicit in previous work, but we lift the restriction to finite-dimensional motions/deformations. To do so, we derive analytical tools to define a dynamical model on the (infinite-dimensional) space of curves. To demonstrate the application of these ideas to object tracking, we construct a simple dynamical model on shapes, which is a first-order approximation to any dynamical system. We then derive an associated nonlinear filter that estimates and predicts the shape and deformation of an object from image measurements.
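
A rough sketch of the predict/correct idea (not the authors' infinite-dimensional formulation): a first-order, constant-velocity model on a uniformly sampled polygonal curve, blended with a synthetic measured curve. The sampling, blending gain, and toy measurements are all hypothetical.

```python
import numpy as np

def predict(curve, velocity):
    """Propagate each curve point by its current velocity estimate."""
    return curve + velocity

def correct(predicted, measured, gain=0.5):
    """Blend the prediction with a measured curve (same parameterization assumed)."""
    return (1.0 - gain) * predicted + gain * measured

# Toy example: a circle that translates and slowly deforms over time.
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
curve = np.stack([np.cos(theta), np.sin(theta)], axis=1)
velocity = np.zeros_like(curve)

for t in range(1, 20):
    # Synthetic "measurement": a translated, slightly deformed circle.
    radius = 1.0 + 0.05 * np.sin(3.0 * theta + 0.1 * t)
    measured = np.stack([radius * np.cos(theta) + 0.1 * t,
                         radius * np.sin(theta)], axis=1)
    predicted = predict(curve, velocity)
    updated = correct(predicted, measured)
    velocity = updated - curve          # first-order (constant-velocity) re-estimate
    curve = updated

print("final centroid:", curve.mean(axis=0))
```
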
  • Item
    Non-Euclidean Image-Adaptive Radial Basis Functions for 3D Interactive Segmentation
    (Georgia Institute of Technology, 2009-09) Mory, Benoit ; Ardon, Roberto ; Yezzi, Anthony ; Thiran, Jean-Philippe
    In the context of variational image segmentation, we propose a new finite-dimensional implicit surface representation. The key idea is to span a subset of implicit functions with linear combinations of spatially-localized kernels that follow image features. This is achieved by replacing the Euclidean distance in conventional Radial Basis Functions with non-Euclidean, image-dependent distances. For the minimization of an objective region-based criterion, this representation yields more accurate results with fewer control points than its Euclidean counterpart. If the user positions these control points, the non-Euclidean distance makes it possible to further tailor our localized kernels to a target object in the image. Moreover, an intuitive control of the segmentation result is obtained by casting inside/outside labels as linear inequality constraints. Finally, we discuss several algorithmic aspects needed for a responsive interactive workflow. We have applied this framework to 3D medical imaging and built a real-time prototype with which the segmentation of whole organs is only a few clicks away.
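
A minimal sketch of the image-adaptive kernel idea, under strong simplifications: the "image-dependent distance" below just penalizes intensity variation accumulated along the straight segment to each control point, a crude stand-in for a true image geodesic distance; kernel shape, bandwidth, weights, and the level offset are hypothetical.

```python
import numpy as np

def image_distance(image, x, c, samples=32, beta=5.0):
    """Euclidean length penalized by intensity variation along the segment x -> c."""
    ts = np.linspace(0.0, 1.0, samples)
    pts = x[None, :] * (1.0 - ts[:, None]) + c[None, :] * ts[:, None]
    vals = image[pts[:, 0].astype(int), pts[:, 1].astype(int)]
    euclid = np.linalg.norm(x - c)
    variation = np.abs(np.diff(vals)).sum()
    return euclid * (1.0 + beta * variation)

def implicit_value(image, x, centers, weights, sigma=10.0):
    """phi(x) = sum_i w_i * exp(-d_I(x, c_i)^2 / (2 sigma^2)) - offset; zero level set = surface."""
    d = np.array([image_distance(image, x, c) for c in centers])
    return np.dot(weights, np.exp(-d ** 2 / (2.0 * sigma ** 2))) - 0.5

# Toy image: a bright square on a dark background, with two control points.
image = np.zeros((64, 64))
image[20:44, 20:44] = 1.0
centers = np.array([[32.0, 32.0], [10.0, 10.0]])
weights = np.array([1.0, -0.2])
print(implicit_value(image, np.array([30.0, 30.0]), centers, weights))  # positive: inside
print(implicit_value(image, np.array([5.0, 5.0]), centers, weights))    # negative: outside
```
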
  • Item
    Brain MRI T₁-Map and T₁-weighted Image Segmentation in a Variational Framework
    (Georgia Institute of Technology, 2009-06) Cheng, Ping-Feng ; Steen, R.Grant ; Yezzi, Anthony ; Krim, Hamid
    In this paper we propose a constrained version of Mumford-Shah's [1] segmentation with an information-theoretic point of view [2] in order to devise a systematic procedure to segment brain MRI data for two modalities, parametric T₁-Map and T₁-weighted images, in both 2-D and 3-D settings. The incorporation of a tuning weight in particular adds a probabilistic flavor to our segmentation method and makes the three-tissue segmentation possible. Our method uses region-based active contours, which have proven to be robust. The method is validated on two real objects which were used to generate T₁-Maps and also on two simulated brains of T₁-weighted data from the BrainWeb [3] public database.
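
A toy illustration of piecewise-constant, three-class segmentation in the Mumford-Shah / Chan-Vese spirit, alternating between class means and label assignment. The "tuning weight" here is only a data-vs-local-smoothness trade-off, a hypothetical stand-in for the probabilistic weighting described above, not the authors' constrained formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def segment_three_tissue(image, weight=0.2, iters=20):
    means = np.percentile(image, [25, 50, 75])          # initial class means
    labels = np.zeros(image.shape, dtype=int)
    for _ in range(iters):
        # Data term: squared distance of each pixel to each class mean.
        data = np.stack([(image - m) ** 2 for m in means], axis=0)
        # Smoothness term: favor agreement with the locally averaged label map.
        local = np.stack([uniform_filter((labels == k).astype(float), size=5)
                          for k in range(3)], axis=0)
        labels = np.argmin(data - weight * local, axis=0)
        means = [image[labels == k].mean() if np.any(labels == k) else means[k]
                 for k in range(3)]
    return labels, means

# Toy "T1-map" with three intensity levels plus noise.
rng = np.random.default_rng(0)
img = np.concatenate([np.full((32, 96), v) for v in (0.3, 0.6, 0.9)], axis=0)
img += 0.05 * rng.standard_normal(img.shape)
labels, means = segment_three_tissue(img)
print("estimated class means:", np.round(means, 2))
```
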
  • Item
    Non-Rigid 2D-3D Pose Estimation and 2D Image Segmentation
    (Georgia Institute of Technology, 2009-06) Sandhu, Romeil ; Dambreville, Samuel ; Yezzi, Anthony ; Tannenbaum, Allen R.
    In this work, we present a non-rigid approach to jointly solve the tasks of 2D-3D pose estimation and 2D image segmentation. In general, most frameworks which couple both pose estimation and segmentation assume exact knowledge of the 3D object. However, in non-ideal conditions, this assumption may be violated if only a general class to which a given shape belongs is given (e.g., cars, boats, or planes). Thus, the key contribution in this work is to solve the 2D-3D pose estimation and 2D image segmentation for a general class of objects or deformations for which one may not be able to associate a skeleton model. Moreover, the resulting scheme can be viewed as an extension of a previously presented framework, in which we include the knowledge of multiple 3D models rather than assuming exact knowledge of a single 3D shape prior. We provide experimental results that highlight the algorithm's robustness to noise, clutter, and occlusion, as well as its shape recovery, on several challenging pose estimation and segmentation scenarios.
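
A speculative sketch of the multiple-model idea: several 3D point-cloud models of a class are blended with weights, projected under a rigid pose, and scored against a 2D silhouette. The blending, orthographic projection, rasterization, and scoring are all simplified assumptions, not the paper's joint energy.

```python
import numpy as np

def rotation_z(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def project_blend(models, alphas, angle, translation, grid=64):
    """Blend models, rotate about z, orthographically project, rasterize a mask."""
    blend = sum(a * m for a, m in zip(alphas, models))   # same vertex count assumed
    pts = blend @ rotation_z(angle).T
    xy = pts[:, :2] + translation
    ij = np.clip(np.round(xy * grid / 4.0 + grid / 2.0).astype(int), 0, grid - 1)
    mask = np.zeros((grid, grid), dtype=bool)
    mask[ij[:, 1], ij[:, 0]] = True
    return mask

def silhouette_score(mask, target):
    """Simple region-overlap score (higher is better)."""
    return np.logical_and(mask, target).sum() - 0.5 * np.logical_xor(mask, target).sum()

# Toy "class" of shapes: two ellipsoid-like point clouds with shared topology.
rng = np.random.default_rng(1)
sphere = rng.standard_normal((500, 3))
sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)
models = [sphere * np.array(s) for s in ([1.0, 0.6, 0.6], [0.6, 1.0, 0.6])]
target = project_blend(models, [0.5, 0.5], 0.3, np.array([0.1, 0.0]))
for angle in (0.3, 0.6):                                 # correct vs. perturbed pose
    trial = project_blend(models, [0.5, 0.5], angle, np.array([0.1, 0.0]))
    print(angle, silhouette_score(trial, target))
```
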
  • Item
    Joint brain parametric T1-map segmentation and RF inhomogeneity calibration
    (Georgia Institute of Technology, 2009) Chen, Ping-Feng ; Steen, R. Grant ; Yezzi, Anthony ; Krim, Hamid
    We propose a constrained version of Mumford and Shah's (1989) segmentation model with an information-theoretic point of view in order to devise a systematic procedure to segment brain magnetic resonance imaging (MRI) data for parametric T1-Map and T1-weighted images, in both 2-D and 3-D settings. Incorporation of a tuning weight in particular adds a probabilistic flavor to our segmentation method and makes the three-tissue segmentation possible. Moreover, we propose a novel method to jointly segment the T1-Map and calibrate RF inhomogeneity (JSRIC). This method assumes that the average T1 value of white matter is the same across transverse slices in the central brain region, and JSRIC is able to rectify the flip angles to generate calibrated T1-Maps. In order to generate an accurate T1-Map, the determination of optimal flip angles and the registration of flip-angle images are examined. Our JSRIC method is validated on two human subjects in the 2-D T1-Map modality, and our segmentation method is validated on two public databases, BrainWeb and IBSR, of the T1-weighted modality in the 3-D setting.
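
For context on how parametric T1-Maps are commonly computed from multi-flip-angle acquisitions, here is the standard variable-flip-angle (DESPOT1-style) linearization. It is not the authors' JSRIC method, and the flip angles and TR below are arbitrary.

```python
import numpy as np

def t1_map_from_flip_angles(s1, s2, alpha1, alpha2, tr):
    """S/sin(a) = E1 * S/tan(a) + M0*(1 - E1), with E1 = exp(-TR/T1); solve for T1."""
    y1, y2 = s1 / np.sin(alpha1), s2 / np.sin(alpha2)
    x1, x2 = s1 / np.tan(alpha1), s2 / np.tan(alpha2)
    slope = (y2 - y1) / (x2 - x1 + 1e-12)                 # = E1, pixelwise
    slope = np.clip(slope, 1e-6, 1.0 - 1e-6)
    return -tr / np.log(slope)                            # T1 in the units of TR

# Toy example: simulate two spoiled gradient-echo signals from a known T1 and recover it.
tr, t1_true, m0 = 15.0, 1000.0, 1.0                       # ms, ms, arbitrary units
e1 = np.exp(-tr / t1_true)

def spgr(alpha):
    # Spoiled gradient-echo signal model for a single distant T1/M0.
    return m0 * np.sin(alpha) * (1.0 - e1) / (1.0 - e1 * np.cos(alpha))

a1, a2 = np.deg2rad(4.0), np.deg2rad(18.0)                # hypothetical flip angles
print(t1_map_from_flip_angles(spgr(a1), spgr(a2), a1, a2, tr))  # ~1000 ms
```
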
  • Item
    TAC: Thresholding active contours
    (Georgia Institute of Technology, 2008-10) Dambreville, Samuel ; Yezzi, Anthony ; Lankton, Shawn ; Tannenbaum, Allen R.
    In this paper, we describe a region-based active contour technique to perform image segmentation. We propose an energy functional that realizes an explicit trade-off between the (current) image segmentation obtained from a curve and the (implied) segmentation obtained from dynamically thresholding the image. In contrast with standard region-based techniques, the resulting variational approach bypasses the need to fit (a priori chosen) statistical models to the object and the background. Our technique performs segmentation based on geometric considerations of the image and contour, instead of statistical ones. The resulting flow leads to very reasonable segmentations as shown by several illustrative examples.
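
A crude sketch of the trade-off described above: alternate between the threshold implied by the current inside/outside means and a smoothed update of the region toward that thresholded image. The smoothing scale and blend factor are hypothetical; this is not the authors' variational flow.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def thresholding_segmentation(image, iters=25, smooth=1.5, blend=0.5):
    region = image > image.mean()                       # initial guess
    soft = region.astype(float)
    for _ in range(iters):
        mu_in = image[region].mean()
        mu_out = image[~region].mean()
        threshold = 0.5 * (mu_in + mu_out)              # segmentation implied by thresholding
        target = (image > threshold).astype(float)
        soft = (1.0 - blend) * soft + blend * gaussian_filter(target, smooth)
        region = soft > 0.5
    return region

# Toy image: a bright square corrupted by noise.
rng = np.random.default_rng(2)
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
img += 0.3 * rng.standard_normal(img.shape)
seg = thresholding_segmentation(img)
print("foreground fraction:", seg.mean())
```
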
  • Item
    Robust 3D pose estimation and efficient 2D region-based segmentation from a 3D shape prior
    (Georgia Institute of Technology, 2008-10) Dambreville, Samuel ; Sandhu, Romeil ; Yezzi, Anthony ; Tannenbaum, Allen R.
    In this work, we present an approach to jointly segment a rigid object in a 2D image and estimate its 3D pose, using the knowledge of a 3D model. We naturally couple the two processes together into a unique energy functional that is minimized through a variational approach. Our methodology differs from the standard monocular 3D pose estimation algorithms since it does not rely on local image features. Instead, we use global image statistics to drive the pose estimation process. This confers a satisfying level of robustness to noise and initialization for our algorithm, and bypasses the need to establish correspondences between image and object features. Moreover, our methodology possesses the typical qualities of region-based active contour techniques with shape priors, such as robustness to occlusions or missing information, without the need to evolve an infinite-dimensional curve. Another novelty of the proposed contribution is the use of a single 3D surface model of the object, instead of learning a large collection of 2D shapes to accommodate the diverse aspects that a 3D object can take when imaged by a camera. Experimental results on both synthetic and real images are provided, which highlight the robust performance of the technique on challenging tracking and segmentation applications.
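
A simplified stand-in for the idea that global region statistics drive the pose fit: a known 3D point cloud is projected under candidate in-plane poses and each silhouette is scored with a Chan-Vese-like variance energy; a coarse grid search replaces the paper's variational optimization, and the projection and rasterization are hypothetical.

```python
import numpy as np

def silhouette(points, angle, tx, ty, size=64):
    """Rasterize the orthographic projection of the model under an in-plane pose."""
    c, s = np.cos(angle), np.sin(angle)
    xy = points[:, :2] @ np.array([[c, -s], [s, c]]).T + np.array([tx, ty])
    ij = np.clip(np.round(xy * 10 + size / 2).astype(int), 0, size - 1)
    mask = np.zeros((size, size), dtype=bool)
    mask[ij[:, 1], ij[:, 0]] = True
    return mask

def region_energy(image, mask):
    """Chan-Vese-like energy: intensity variance inside plus outside the silhouette."""
    inside, outside = image[mask], image[~mask]
    return ((inside - inside.mean()) ** 2).sum() + ((outside - outside.mean()) ** 2).sum()

def fit_pose(image, points, angles, shifts):
    """Coarse search over candidate poses, keeping the lowest region energy."""
    candidates = [(a, tx, ty) for a in angles for tx in shifts for ty in shifts]
    return min(candidates, key=lambda p: region_energy(image, silhouette(points, *p)))

rng = np.random.default_rng(3)
model = rng.standard_normal((400, 3))                    # stand-in 3D point cloud
image = silhouette(model, 0.4, 0.5, -0.3).astype(float)  # synthetic target image
print(fit_pose(image, model, np.linspace(0, 1, 6), np.linspace(-0.6, 0.6, 7)))
```
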
  • Item
    Dynamic shape and appearance modeling via moving and deforming layers
    (Georgia Institute of Technology, 2008-08) Jackson, Jeremy D. ; Yezzi, Anthony ; Soatto, Stefano
    We propose a model of dynamic shape and appearance based on a collection of overlapping layers that can move and deform, each supporting an intensity function that can change over time. We discuss the generality and limitations of this model in relation to existing ones such as traditional optical flow or motion segmentation, layers, deformable templates, and deformotion. We then illustrate how this model can be used for inference of shape, motion, deformation, and appearance of the scene from a collection of images. The layering structure allows for automatic inpainting of partially occluded regions. We illustrate the model on synthetic and real sequences where existing schemes fail, and show how suitable choices of constants in the model yield existing schemes, from optical flow to motion segmentation.
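
A minimal generative sketch of the layer idea: each layer carries its own support (shape) and intensity function and can translate over time, compositing in depth order produces the image, and occluded parts of a layer remain available from the layer itself. Layer contents and motions are hypothetical; the paper's layers also deform and are inferred from data rather than specified.

```python
import numpy as np

def composite(layers, size=64):
    """Render layers back-to-front; later layers occlude earlier ones."""
    image = np.zeros((size, size))
    for support, intensity in layers:
        image = np.where(support, intensity, image)
    return image

def translate(arr, dy, dx):
    return np.roll(np.roll(arr, dy, axis=0), dx, axis=1)

size = 64
yy, xx = np.mgrid[0:size, 0:size]
background = (np.ones((size, size), dtype=bool), 0.2 + 0.3 * (xx / size))
disk = (yy - 32) ** 2 + (xx - 20) ** 2 < 100                 # moving foreground layer
disk_intensity = np.where(disk, 0.9, 0.0)

frames = []
for t in range(5):
    fg = (translate(disk, 0, 4 * t), translate(disk_intensity, 0, 4 * t))
    frames.append(composite([background, fg]))
print("mean intensity per frame:", [round(f.mean(), 3) for f in frames])
```
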
  • Item
    Coarse-to-Fine Segmentation and Tracking Using Sobolev Active Contours
    (Georgia Institute of Technology, 2008-05) Sundaramoorthi, Ganesh ; Yezzi, Anthony ; Mennucci, Andrea C.
    Recently proposed Sobolev active contours introduced a new paradigm for minimizing energies defined on curves by changing the traditional cost of perturbing a curve and thereby redefining the gradients associated to these energies. Sobolev active contours evolve more globally and are less attracted to certain intermediate local minima than traditional active contours, and they are based on a well-structured Riemannian metric, which is important for shape analysis and shape priors. In this paper, we analyze Sobolev active contours using scale-space analysis in order to understand their evolution across different scales. This analysis shows an extremely important and useful behavior of Sobolev contours, namely, that they move successively from coarse to increasingly finer scale motions in a continuous manner. This property illustrates that one justification for using the Sobolev technique is for applications where coarse-scale deformations are preferred over fine-scale deformations. Along with other properties to be discussed, the coarse-to-fine observation reveals that Sobolev active contours are, in particular, ideally suited for tracking algorithms that use active contours. We also justify our assertion that the Sobolev metric should be used over the traditional metric for active contours in tracking problems by experimentally showing how a variety of active-contour-based tracking methods can be significantly improved merely by evolving the active contour according to the Sobolev method.
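
A small sketch of how a Sobolev-type gradient can be obtained from the ordinary L2 gradient along a closed curve: solving (I - lambda * d^2/ds^2) g = f with an FFT damps high-frequency components of the force, which is the coarse-to-fine behavior described above. The uniform arc-length parameterization and the lambda value are simplifying assumptions.

```python
import numpy as np

def sobolev_gradient(f, lam=50.0):
    """f: (N, 2) per-point L2 gradient on a closed curve, uniformly sampled in arc length."""
    n = f.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)          # integer frequencies on the circle
    damp = 1.0 + lam * (2.0 * np.pi * k / n) ** 2
    return np.real(np.fft.ifft(np.fft.fft(f, axis=0) / damp[:, None], axis=0))

# Toy force: a smooth global component plus high-frequency noise.
n = 200
s = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
f = np.stack([np.cos(s), np.sin(s)], axis=1)
f += 0.5 * np.random.default_rng(4).standard_normal((n, 2))
g = sobolev_gradient(f)
print("L2 force roughness     :", np.abs(np.diff(f, axis=0)).mean().round(3))
print("Sobolev force roughness:", np.abs(np.diff(g, axis=0)).mean().round(3))
```
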
  • Item
    3-D Reconstruction of Shaded Objects from Multiple Images Under Unknown Illumination
    (Georgia Institute of Technology, 2008-03) Jin, Hailin ; Wang, Dejun ; Cremers, Daniel ; Prados, Emmanuel ; Yezzi, Anthony ; Soatto, Stefano
    We propose a variational algorithm to jointly estimate the shape, albedo, and light configuration of a Lambertian scene from a collection of images taken from different vantage points. Our work can be thought of as extending classical multi-view stereo to cases where point correspondence cannot be established, or extending classical shape from shading to the case of multiple views with unknown light sources. We show that a first naive formalization of this problem yields algorithms that are numerically unstable, no matter how close the initialization is to the true geometry. We then propose a computational scheme to overcome this problem, resulting in provably stable algorithms that converge to (local) minima of the cost functional. We develop a new model that explicitly enforces positivity in the light sources, under the assumption that the object is Lambertian and its albedo is piecewise constant, and show that the new model significantly improves the accuracy and robustness relative to existing approaches.
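
A toy version of the Lambertian ingredient only: with known normals and a single distant light, I = albedo * max(0, n . l), and the albedo-scaled light can be recovered from the lit pixels by linear least squares. This is just the forward model and a trivial inverse step, not the paper's joint shape/albedo/light estimation; the positivity and piecewise-constant-albedo constraints are only noted in comments.

```python
import numpy as np

def render_lambertian(normals, albedo, light):
    # Lambertian shading; the paper additionally enforces positivity of the light
    # sources and a piecewise-constant albedo, which this toy model does not.
    return albedo * np.clip(normals @ light, 0.0, None)

# Synthetic "scene": random unit normals, constant albedo, one distant light.
rng = np.random.default_rng(5)
normals = rng.standard_normal((2000, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
albedo, light = 0.8, np.array([0.3, 0.5, 0.8])
intensities = render_lambertian(normals, albedo, light)

# Estimate the (albedo-scaled) light from the lit points by linear least squares.
lit = intensities > 0
est, *_ = np.linalg.lstsq(normals[lit], intensities[lit], rcond=None)
print("true albedo*light:", albedo * light)
print("estimated        :", est.round(3))
```
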