Mapping 2-d Image Onto A 3-d Surface Patents (Class 382/285)
  • Patent number: 7800627
    Abstract: Mesh quilting for geometric texture synthesis involves synthesizing a geometric texture by quilting a mesh texture swatch. In an example embodiment, geometry is matched between a mesh texture swatch and a portion of a synthesized geometric texture. Correspondences are ascertained between elements of the mesh texture swatch and the portion of the synthesized geometric texture. The ascertained corresponding elements of the mesh texture swatch and the portion of the synthesized geometric texture are aligned via local deformation to create a new patch. The new patch is merged into an output texture space to grow the synthesized geometric texture.
    Type: Grant
    Filed: June 8, 2007
    Date of Patent: September 21, 2010
    Assignee: Microsoft Corporation
    Inventors: Kun Zhou, Xin Huang, Xi Wang, Baining Guo, Heung-Yeung Shum
  • Publication number: 20100232727
    Abstract: An apparatus for providing an estimate for a 3D camera pose relative to a scene from 2D image data of a 2D image frame provided by said camera. A candidate 2D key points detector determines candidate 2D key points from the 2D image frame. A detected 3D observations detector determines detected 3D observations from pre-recorded scene data and the candidate 2D key points. A detected 3D camera pose estimator determines a detected 3D camera pose estimate from the camera data, the detected 3D observations and the candidate 2D key points. A first storage stores the detected 2D candidate key points and the 2D image data, and outputs, in response to a 3D camera pose estimate output, previous 2D image data and candidate 2D key points related to a previous 3D camera pose estimate output. A second storage stores and outputs a previous 3D camera pose estimate.
    Type: Application
    Filed: May 22, 2008
    Publication date: September 16, 2010
    Applicant: METAIO GMBH
    Inventor: Torbjorn Engedal
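Illustrative sketch (not taken from this application): the abstract above describes estimating a 3D camera pose from candidate 2D key points matched against pre-recorded 3D scene data. One common way to solve that 2D-3D step in practice is a PnP solve with RANSAC; the function names and parameters below are assumptions for illustration only.

```python
# Hypothetical illustration: 2D-3D camera pose estimation via PnP + RANSAC.
import numpy as np
import cv2

def estimate_camera_pose(points_3d, points_2d, camera_matrix, dist_coeffs=None):
    """points_3d: Nx3 scene points; points_2d: Nx2 matched 2D key points."""
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float32), points_2d.astype(np.float32),
        camera_matrix.astype(np.float32), dist_coeffs,
        reprojectionError=3.0)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)   # 3x3 rotation matrix from the rotation vector
    return R, tvec               # camera pose estimate relative to the scene
```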
  • Publication number: 20100225740
    Abstract: A metadata generating method including: receiving sub-region dividing information to divide an object into a plurality of sub-regions; and generating sub-region indicating information to indicate each of the plurality of sub-regions divided according to the sub-region dividing information.
    Type: Application
    Filed: September 10, 2009
    Publication date: September 9, 2010
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Kil-soo JUNG, Hye-young Jun, Dae-jong Lee
  • Patent number: 7792341
    Abstract: The present invention is related to a method for performing a cephalometric or anthropometric analysis comprising the steps of: acquiring a 3D scan of a person's head using a 3D medical image modality, generating a 3D surface model using data from the 3D scan, generating from the 3D scan at least one 2D cephalogram geometrically linked to the 3D surface model, indicating anatomical landmarks on the at least one 2D cephalogram and/or on the 3D surface model, performing the analysis using the anatomical landmarks.
    Type: Grant
    Filed: June 27, 2005
    Date of Patent: September 7, 2010
    Assignee: Medicim N.V.
    Inventor: Filip Schutyser
  • Publication number: 20100214291
    Abstract: For generating a 3D geometric model (44) and/or a definition of the 3D geometric model from a single digital image of a building facade (4), a facade structure is detected from the digital image by dividing the facade (4) along horizontal lines into horizontal layers representative of floors (41), and by dividing the horizontal layers along vertical lines into tiles (42). The tiles (42) are further subdivided into a hierarchy of rectangular image regions (43). 3D architectural objects (45) corresponding to the image regions (43) are determined in an architectural element library. The 3D geometric model (44) or the definition of the 3D geometric model is generated based on the facade structure, the hierarchy and the 3D architectural objects (45). The library-based generation of the 3D geometric model makes it possible to enhance simple textured building models constructed from aerial images and/or ground-based photographs.
    Type: Application
    Filed: July 22, 2008
    Publication date: August 26, 2010
    Applicant: ETH ZÜRICH
    Inventors: Pascal Müller, Gang Zeng, Luc Van Gool
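A minimal sketch of the facade-subdivision idea only (floors along horizontal lines, tiles along vertical lines); it does not attempt the publication's architectural-element library matching, and the edge-energy heuristic, thresholds, and function names are assumptions.

```python
# Hypothetical illustration: cut a rectified facade image into floors and tiles.
import numpy as np

def cut_positions(profile, min_gap=20):
    """Cut at strong local maxima of a 1D edge-energy profile, at least min_gap apart."""
    thresh = profile.mean() + profile.std()
    cuts, last = [0], -min_gap
    for i in range(1, len(profile) - 1):
        if (profile[i] >= thresh and profile[i] >= profile[i - 1]
                and profile[i] > profile[i + 1] and i - last >= min_gap):
            cuts.append(i)
            last = i
    cuts.append(len(profile))
    return cuts

def subdivide_facade(gray):
    """gray: HxW facade image; returns (top, bottom, left, right) cells per floor/tile."""
    gy, gx = np.gradient(gray.astype(float))
    row_energy = np.abs(gy).sum(axis=1)   # peaks near horizontal floor boundaries
    col_energy = np.abs(gx).sum(axis=0)   # peaks near vertical tile boundaries
    floors, tiles = cut_positions(row_energy), cut_positions(col_energy)
    return [(floors[i], floors[i + 1], tiles[j], tiles[j + 1])
            for i in range(len(floors) - 1) for j in range(len(tiles) - 1)]
```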
  • Patent number: 7778491
    Abstract: An “Oblique Image Stitcher” provides a technique for constructing a photorealistic oblique view from a set of input images representing a series of partially overlapping views of a scene. The Oblique Image Stitcher first projects each input image onto a geometric proxy of the scene and renders the images from a desired viewpoint. Once the images have been projected onto the geometric proxy, the rendered images are evaluated to identify optimum seams along which the various images are to be blended. Once the optimum seams are selected, the images are remapped relative to those seams by leaving the mapping unchanged at the seams and interpolating a smooth mapping between the seams. The remapped images are then composited to construct the final mosaiced oblique view of the scene. The result is a mosaic image constructed by warping the input images in a photorealistic manner which agrees at seams between images.
    Type: Grant
    Filed: April 10, 2006
    Date of Patent: August 17, 2010
    Assignee: Microsoft Corporation
    Inventors: Drew Steedly, Richard Szeliski, Matthew Uyttendaele, Michael Cohen
  • Patent number: 7773085
    Abstract: The present invention is a system that grids original data, maps the data at the grid locations to height values at corresponding landscape image pixel locations and renders the landscape pixels into a three-dimensional (3D) landscape image. The landscape pixels can have arbitrary shapes and can be augmented with additional 3D information from the original data, such as an offset providing additional information, or generated from processing of the original data, such as to alert when a threshold is exceeded, or added for other purposes such as to point out a feature. The pixels can also convey additional information from the original data using other pixel characteristics such as texture, color, transparency, etc.
    Type: Grant
    Filed: March 7, 2006
    Date of Patent: August 10, 2010
    Assignee: Graphics Properties Holdings, Inc.
    Inventor: David William Hughes
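A rough sketch, under assumptions, of the gridding-and-height-mapping step described above: scattered samples are averaged into grid cells and the cell values are rescaled into height (z) values for a landscape-style rendering. The normalization and names are illustrative, not the patented method.

```python
# Hypothetical illustration: grid scattered samples and map them to height values.
import numpy as np

def grid_to_heights(x, y, values, nx=128, ny=128, height_scale=10.0):
    """x, y, values: 1D arrays of samples; returns an ny x nx height field."""
    span_x = (x.max() - x.min()) or 1.0
    span_y = (y.max() - y.min()) or 1.0
    xi = np.clip(((x - x.min()) / span_x * (nx - 1)).astype(int), 0, nx - 1)
    yi = np.clip(((y - y.min()) / span_y * (ny - 1)).astype(int), 0, ny - 1)
    sums, counts = np.zeros((ny, nx)), np.zeros((ny, nx))
    np.add.at(sums, (yi, xi), values)     # accumulate samples per grid cell
    np.add.at(counts, (yi, xi), 1)
    grid = np.divide(sums, counts, out=np.zeros_like(sums), where=counts > 0)
    rng = (grid.max() - grid.min()) or 1.0
    return (grid - grid.min()) / rng * height_scale   # heights for the 3D landscape
```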
  • Patent number: 7773799
    Abstract: A method for performing automatic stereo measure of a selected point in a scene. A sensor image is initially obtained from a sensor providing an image of a scene. First and second reference images of the scene that are a stereo pair of images are also provided. The sensor image is registered with the first reference image and a point of interest is selected from one of the sensor or first reference images. A stereo point measurement using the selected point and the two reference images is performed to determine a point in the second reference image that represents a stereo mate of the selected point in the first reference image. An analysis of a three dimensional grid volume centered about the selected point in the first reference image is then performed. The analysis uses both reference images, the selected point and its stereo mate to determine the three dimensional coordinates of the point in the grid that best matches the location of the stereo mate point in the second reference image.
    Type: Grant
    Filed: February 6, 2007
    Date of Patent: August 10, 2010
    Assignee: The Boeing Company
    Inventor: Lawrence A. Oldroyd
  • Publication number: 20100182400
    Abstract: Methods, systems, and apparatus, including computer program products, for aligning images are disclosed. In one aspect, a method includes receiving an inaccurate three-dimensional (3D) position of a physical camera, where the physical camera captured a photographic image; basing an initial 3D position of a virtual camera in a 3D virtual environment on the inaccurate 3D position of the physical camera; correlating one or more markers in the photographic image with one or more markers in the 3D virtual environment that appear in the virtual camera's field of view; and adjusting the initial 3D position of the virtual camera in the 3D virtual environment based on a disparity between the one or more markers' 3D positions in the photographic image as compared to the one or more markers' 3D positions in the virtual camera's field of view.
    Type: Application
    Filed: January 16, 2009
    Publication date: July 22, 2010
    Applicant: WORLD GOLF TOUR, INC.
    Inventors: Chad M. Nelson, Phil Gorrow, David Montgomery
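A hedged sketch of the adjustment idea in this application: given an inaccurate initial camera position and markers visible both in the photograph and in the virtual environment, refine the virtual camera's position by minimizing the marker reprojection disparity. The axis-aligned pinhole model, solver choice, and names are assumptions for illustration.

```python
# Hypothetical illustration: nudge a virtual camera position to reduce the
# disparity between observed and projected marker positions.
import numpy as np
from scipy.optimize import least_squares

def project(points_3d, cam_pos, K):
    """Pinhole projection with an axis-aligned camera at cam_pos (no rotation)."""
    p = points_3d - cam_pos            # world points in camera coordinates
    uv = (K @ p.T).T
    return uv[:, :2] / uv[:, 2:3]      # perspective divide

def refine_camera_position(initial_pos, markers_3d, markers_2d, K):
    """markers_3d: Nx3 marker positions; markers_2d: Nx2 positions in the photo."""
    def residual(pos):
        return (project(markers_3d, pos, K) - markers_2d).ravel()
    result = least_squares(residual, np.asarray(initial_pos, dtype=float))
    return result.x                    # adjusted 3D camera position
```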
  • Patent number: 7758799
    Abstract: A solid imaging apparatus and method employing sub-pixel shifting in multiple exposures of the digitally light projected image of a cross-section of a three-dimensional object on a solidifiable liquid medium. The multiple exposures provide increased resolution, preserving image features in a three-dimensional object and smoothing out rough or uneven edges that would otherwise occur using digital light projectors that are limited by the number of pixels in an image projected over the size of the image. Algorithms are used to select pixels to be illuminated within the boundary of each image projected in the cross-section being exposed.
    Type: Grant
    Filed: April 1, 2005
    Date of Patent: July 20, 2010
    Assignee: 3D Systems, Inc.
    Inventors: Charles W. Hull, Jouni P. Partanen, Charles R. Sperry, Patrick Dunne, Suzanne M. Scott, Dennis F. McNamara, Chris R. Manners
  • Patent number: 7755618
    Abstract: There is provided an image processing apparatus and its method allowing a user to specify a desired 3D area without cutting volume data. The present invention provides an image processing apparatus specifying a desired 3D area on volume data, which includes a display section that displays the volume data, an input section through which a user inputs a 2D path for the volume data displayed on the display section, a path rendering section that converts the 2D path into a 3D path according to analysis of the relation between the 2D path and volume data, and a 3D area specification section that specifies the desired 3D area based on the 3D path.
    Type: Grant
    Filed: March 22, 2006
    Date of Patent: July 13, 2010
    Assignee: Sony Corporation
    Inventors: Frank Nielsen, Shigeru Owada
  • Patent number: 7755635
    Abstract: A system and method for combining computer generated 3D environments (virtual environments) with satellite images. In a specific application, the system enables users to see and communicate with each other as live avatars in the computer generated environment in real time.
    Type: Grant
    Filed: February 23, 2007
    Date of Patent: July 13, 2010
    Inventor: William J. Benman
  • Patent number: 7751650
    Abstract: The present invention relates to an image processing apparatus and an image processing program, and an image processing method for allowing various operations to be performed more comfortably. A rotation input section 14 is provided substantially at the center of a mobile phone 1. When the rotation input section 14 is rotated in a clockwise or counterclockwise direction, an image displayed on a display section 13 is rotated and displayed. In addition, when the rotation input section 14 is pressed at an upper, lower, left or right portion thereof toward the inside of the mobile phone 1, a display range of the image displayed on the display section 13 is switched. For example, in a case where a map is displayed, the map is rotated when the rotation input section 14 is rotated, and the map is scaled up/down when a predetermined position of the rotation input section 14 is pressed. The present invention may be applied to mobile phones.
    Type: Grant
    Filed: March 3, 2003
    Date of Patent: July 6, 2010
    Assignees: Sony Corporation, Sony Ericsson Mobile Communications Japan
    Inventors: Naoki Tada, Kouichiro Takashima
  • Patent number: 7751066
    Abstract: An apparatus is disclosed for projecting patterned electromagnetic waves onto an object. This apparatus includes: an electromagnetic-wave source; a modulating element allowing at least part of an electromagnetic wave incoming from the source to be modulated; and a selector for allowing a selected one of angular components of an electromagnetic wave outgoing from the modulating element, to pass through the selector. The modulating element is shaped to include at least one pair of two portions having different surface shapes. One of the two portions allows one of the angular components which has a radiant angle characteristic that achieves a predetermined entrance numerical aperture, to go out as a component which will be selected by the selector. The other allows one of the angular components which has a radiant angle characteristic that does not achieve the entrance numerical aperture, to go out as a component which will not be selected by the selector.
    Type: Grant
    Filed: December 14, 2007
    Date of Patent: July 6, 2010
    Assignee: Brother Kogyo Kabushiki Kaisha
    Inventor: Takeo Iwasaki
  • Publication number: 20100166338
    Abstract: An image processing method, including: extracting compensation information comprising one from among a depth compensation value and a depth value compensated for by using the depth compensation value; when the compensation information comprises the depth compensation value, compensating for a depth value to be applied to a pixel of a two-dimensional (2D) image by using the depth compensation value, and generating a depth map for the 2D image by using the compensated depth value; when the compensation information comprises the compensated depth value, generating the depth map for the 2D image by using the compensated depth value; obtaining positions in a left-eye image and a right-eye image by using the depth map, wherein the pixel of the 2D image is mapped to the positions; and generating the left-eye image and the right-eye image comprising the positions to which the pixel is mapped.
    Type: Application
    Filed: September 21, 2009
    Publication date: July 1, 2010
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Dae-jong Lee, Sung-wook Park, Hyun-kwon Chung, Kil-soo Jung, Hye-young Jun
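A simplified sketch, under assumptions, of the depth-image-based rendering flow the abstract outlines: apply a depth compensation value, turn compensated depth into a horizontal disparity, and map each pixel of the 2D image into left-eye and right-eye images (hole filling omitted). Parameter names and the linear depth-to-disparity mapping are illustrative.

```python
# Hypothetical illustration: depth compensation plus left/right pixel mapping.
import numpy as np

def render_stereo(image, depth, depth_offset=0.0, max_disparity=16):
    """image: HxWx3; depth: HxW in [0, 255]; returns (left_eye, right_eye) images."""
    h, w = depth.shape
    d = np.clip(depth.astype(float) + depth_offset, 0, 255)    # compensated depth
    disparity = ((d / 255.0) * (max_disparity / 2)).astype(int)
    left, right = np.zeros_like(image), np.zeros_like(image)
    cols = np.arange(w)
    for y in range(h):
        xl = np.clip(cols + disparity[y], 0, w - 1)   # where each pixel lands, left eye
        xr = np.clip(cols - disparity[y], 0, w - 1)   # where each pixel lands, right eye
        left[y, xl] = image[y, cols]
        right[y, xr] = image[y, cols]
    return left, right   # unassigned positions are holes that would still need filling
```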
  • Patent number: 7747105
    Abstract: A method for producing a rotation-compensated image sequence that allows simplified reconstruction of the translation of a camera, by real-time rotation compensation of images recorded sequentially by an electronic camera that is randomly displaced in a scene and provided with a rotation sensor, each image resulting from an association of color and/or brightness values of the pixels of a camera pixel field indicated by rectangular coordinates.
    Type: Grant
    Filed: October 10, 2006
    Date of Patent: June 29, 2010
    Assignee: Christian-Albrechts-Universitaet Zu Kiel
    Inventor: Reinhard Koch
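A minimal sketch of rotation compensation in general (not the patented method): if the camera rotation R for a frame is known from a rotation sensor, its effect on the image can be undone with the pure-rotation homography H = K R^T K^-1, leaving mainly translation effects for later reconstruction. The intrinsics K and the OpenCV-based warp are assumptions.

```python
# Hypothetical illustration: undo a sensed camera rotation with a homography.
import numpy as np
import cv2

def rotation_compensate(image, R, K):
    """image: HxW or HxWx3; R: 3x3 camera rotation; K: 3x3 intrinsic matrix."""
    H = K @ R.T @ np.linalg.inv(K)     # pure-rotation homography that removes R
    h, w = image.shape[:2]
    return cv2.warpPerspective(image, H, (w, h))
```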
  • Patent number: 7742629
    Abstract: Embodiments of the present invention include methods and systems for three-dimensional reconstruction of a tubular organ (for example, coronary artery) using a plurality of two-dimensional images.
    Type: Grant
    Filed: September 24, 2004
    Date of Patent: June 22, 2010
    Assignee: Paieon Inc.
    Inventors: Michael Zarkh, Moshe Klaiman
  • Patent number: 7739623
    Abstract: A method and system for 3D data editing are disclosed. 3D volumetric data is rendered in a rendering space. A 2D graphical drawing tool is selected and used to create a 2D structure. A 3D operation is then applied to the 3D volumetric data based on the 2D structure.
    Type: Grant
    Filed: April 14, 2005
    Date of Patent: June 15, 2010
    Assignee: Edda Technology, Inc.
    Inventors: Cheng-Chung Liang, Jian-Zhong Qian, Guo-Qing Wei, Li Fan, Xiaolan Zeng
  • Patent number: 7738733
    Abstract: The various embodiments generally describe systems and methods related to 3-dimensional (3-D) imaging. In one exemplary embodiment, an imaging system incorporates a 2-dimensional (2-D) image capture system that generates 2-D digital image information representing an object, a signal transmitter that transmits a ranging signal towards the object, and a signal receiver that receives the ranging signal returned by the object. Also included is an image processor that computes distance information from the time difference between signal transmission and reception of the ranging signal. The image processor combines the distance information and 2-D digital image information to produce 3-D digital image information representing the object.
    Type: Grant
    Filed: September 29, 2005
    Date of Patent: June 15, 2010
    Assignee: Avago Technologies ECBU IP (Singapore) Pte. Ltd.
    Inventors: Marshall T. DePue, Tong Xie
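A toy sketch of the time-of-flight arithmetic described above: the distance to the object is the speed of light times the transmit-to-receive time difference, divided by two for the round trip, and the per-pixel distances can be stacked with the 2-D image data to form RGB-D style 3-D image information. One time sample per pixel is an assumption made here for illustration.

```python
# Hypothetical illustration: distance from round-trip time, fused with 2-D pixels.
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0   # metres per second

def distance_from_round_trip(t_transmit, t_receive):
    """Out-and-back signal: distance = c * (t_receive - t_transmit) / 2."""
    return SPEED_OF_LIGHT * (t_receive - t_transmit) / 2.0

def make_rgbd(image_rgb, t_transmit, t_receive_per_pixel):
    """Stack a per-pixel distance channel onto HxWx3 image data -> HxWx4."""
    depth = distance_from_round_trip(t_transmit, np.asarray(t_receive_per_pixel))
    return np.dstack([np.asarray(image_rgb, dtype=float), depth])
```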
  • Patent number: 7738687
    Abstract: A multi-level contraband detection system. At a first level, the system obtains volumetric information about an item under inspection. The volumetric information provides a basis for identifying suspect objects in the item under inspection and their locations. The location information is expressed in a first coordinate system relative to the device used for first level scanning. When first level scanning identifies a suspicious object, the item under inspection is passed to a second level scanner that can take further measurements on the suspicious objects. The second level machine is controlled in a second coordinate system. A translation between the two coordinate systems is obtained by registering a multi-dimensional image obtained at the first level with positioning information obtained at the second level. Registration is performed using a coarse and then a fine registration process for quick and accurate registration.
    Type: Grant
    Filed: December 11, 2006
    Date of Patent: June 15, 2010
    Assignee: L-3 Communications Security and Detection Systems, Inc.
    Inventors: John O. Tortora, Jeffrey H. Stillson, Kristoph D. Krug
  • Publication number: 20100142852
    Abstract: There is provided an image analysis system which captures image data of an arbitrary pair of a first image RI and a second image LI among images obtained by color-photographing a single object from different positions into an analysis computer, wherein the computer includes corresponding point extraction means for assigning a weighting factor to a pixel information value based on the contrast size of the pixel information value in each of a first local area ROI1 set around an arbitrary reference point in RI and second local areas ROI2s at which scanning is performed on LI, calculating the similarity between ROI1 and ROI2s, and extracting a corresponding point which corresponds to the reference point from a ROI2 having the highest similarity, and depth information calculating means for calculating depth information of the object based on coordinates of the reference point and the corresponding point.
    Type: Application
    Filed: March 10, 2008
    Publication date: June 10, 2010
    Inventors: Hiroshi Fujita, Toshiaki Nakagawa, Yoshinori Hayashi
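A hedged sketch of the matching-and-depth idea: compare a window around a reference point in the first image against candidate windows scanned in the second image, weighting pixels by a crude local-contrast measure, take the best-scoring window as the corresponding point, and convert the resulting disparity to depth with focal length and baseline. The weighting scheme, row-only scan, and names are assumptions, not the application's exact procedure.

```python
# Hypothetical illustration: contrast-weighted window matching and depth recovery.
import numpy as np

def weighted_similarity(roi1, roi2):
    """Similarity of two equal-sized windows, pixels weighted by local contrast."""
    w = np.abs(roi1 - roi1.mean()) + 1e-6          # crude contrast-based weights
    a, b = roi1 - roi1.mean(), roi2 - roi2.mean()
    num = np.sum(w * a * b)
    den = np.sqrt(np.sum(w * a * a) * np.sum(w * b * b)) + 1e-12
    return num / den                               # weighted correlation in [-1, 1]

def depth_at(point, img_first, img_second, focal_px, baseline_m, half=7):
    """point: (row, col) reference point in img_first; returns estimated depth in metres."""
    y, x = point
    roi1 = img_first[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    best_x, best_score = x, -np.inf
    for cx in range(half, img_second.shape[1] - half):     # scan along the same row
        roi2 = img_second[y - half:y + half + 1, cx - half:cx + half + 1].astype(float)
        score = weighted_similarity(roi1, roi2)
        if score > best_score:
            best_score, best_x = score, cx
    disparity = abs(best_x - x) or 1                       # avoid division by zero
    return focal_px * baseline_m / disparity               # depth by triangulation
```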
  • Publication number: 20100134487
    Abstract: A 3D face model construction method is disclosed herein, which includes a training step and a face model reconstruction step. In the training step, a neutral shape model is built from multiple training faces, and a manifold-based approach is proposed for processing 3D expression deformation data of training faces in 2D manifold space. In the face model reconstruction step, first, a 2D face image is entered and a 3D face model is initialized. Then, texture, illumination and shape of the model are optimized until error converges. The present invention enables reconstruction of a 3D face model from a single face image, reducing the complexity of building the 3D face model by processing high dimensional 3D expression deformation data in a low dimensional manifold space, and removal or substitution of an expression by a learned expression for the reconstructed 3D model built from the 2D image.
    Type: Application
    Filed: January 6, 2009
    Publication date: June 3, 2010
    Inventors: Shang-Hong Lai, Shu-Fan Wang
  • Publication number: 20100135596
    Abstract: Methods and systems for creating three-dimensional models from two-dimensional images are provided. According to one embodiment, a method of creating an inflatable icon involves a vectorizing module polygonizing an input image to produce an inflatable image by representing a set of pixels making up the input image as polygons. The inflatable image is then extruded by an extrusion module by generating appropriate z-coordinate values for a reference point associated with each polygon of the inflatable image based upon a biased diffusion process. End-user controlled pressure modulation is supported by an interface module by (i) adjusting one or more modulation functions employed by the biased diffusion process based upon end-user input regarding relative modulation bias for a selected set of one or more pixels associated with the inflatable image or (ii) applying the biased diffusion process to only the selected set of one or more pixels.
    Type: Application
    Filed: December 7, 2009
    Publication date: June 3, 2010
    Applicant: AGENTSHEETS, INC. A Colorado Corporation
    Inventor: Alexander Repenning
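A compact sketch of the inflation idea shared by this publication and the related patent 7630580 later in this listing: starting from the icon's silhouette mask, z-values inside the mask are repeatedly diffused with an upward bias while the boundary is held at zero, so the interior puffs up into a smooth dome. Passing a per-pixel bias array would correspond to the end-user pressure modulation mentioned above; the iteration count and bias values are illustrative assumptions.

```python
# Hypothetical illustration: biased diffusion "inflation" of a silhouette mask.
import numpy as np

def inflate(mask, iterations=200, bias=0.5):
    """mask: HxW boolean silhouette; bias: scalar or HxW array (per-pixel pressure)."""
    z = np.zeros(mask.shape, dtype=float)
    for _ in range(iterations):
        # 4-neighbour average: a simple diffusion step over the z field
        avg = 0.25 * (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
                      np.roll(z, 1, 1) + np.roll(z, -1, 1))
        z = np.where(mask, avg + bias, 0.0)   # bias pushes interior z upward;
                                              # outside the mask z stays pinned at 0
    return z                                  # height field for the inflated icon
```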
  • Publication number: 20100124368
    Abstract: An apparatus and method of reconstructing a three-dimensional (3D) image from two-dimensional (2D) images are disclosed. Three dimensional (3D) data may be reconstructed in an x-ray generation tube at a limited angle, and repeatedly updated for each pixel. A median from among each pixel of reconstruction data may be selected. Backprojecting may be performed using a search direction weight calculated using a reprojection image and residual image. A 3D image satisfying a Level 1 (L1) norm fidelity and sparsity constraint of the reconstruction data may be reconstructed.
    Type: Application
    Filed: September 29, 2009
    Publication date: May 20, 2010
    Applicants: SAMSUNG ELECTRONICS CO., LTD, KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Jong Chul Ye, Young Hun Sung, Dong Goo Kang, Jae Duck Jang, Ji Young Choi, Min Woo Kim
  • Patent number: 7719531
    Abstract: A two-dimensional text editing mode is used when editing three-dimensional text. Once the three-dimensional text is selected for editing a two-dimensional text editing mode is automatically entered such that the user may easily edit the text. The two dimensional properties that are associated with the text are displayed within an outline of the shape such that the text may be edited in place. The 2-D properties, such as font, text color, shape color, and the like, are maintained during the editing. After the two-dimensional text editing has been completed, the text is redisplayed according to its 3-D properties.
    Type: Grant
    Filed: May 5, 2006
    Date of Patent: May 18, 2010
    Assignee: Microsoft Corporation
    Inventors: Lutz Gerhard, Christopher D. Dickens, Craig L. Daw, Damien N. Berger, Jason E. Long
  • Patent number: 7720308
    Abstract: Disclosed herein is a 3-D image display unit that can be controlled flexibly to protect the user from eyestrain and operated easily. The 3-D image display unit includes measuring means for measuring a display time of a 3-D image, and parallax adjusting means for instructing 3-D image forming means to adjust the parallax of the 3-D image. In the case where the 3-D image display time measured by the measuring means exceeds a predetermined time, the parallax adjusting means instructs the 3-D image forming means to reduce the parallax of the 3-D image to be formed, whereby the display means comes to display a 3-D image having a small parallax. The user can thus be protected from eyestrain.
    Type: Grant
    Filed: September 26, 2003
    Date of Patent: May 18, 2010
    Assignee: Sharp Kabushiki Kaisha
    Inventors: Ryuji Kitaura, Hiroyuki Katata, Toshio Nomura, Norio Ito, Tomoko Aono, Maki Takahashi, Shinya Hasegawa, Tadashi Uchiumi, Motohiro Ito, Masatoshi Tsujimoto, Hiroshi Kusao
  • Publication number: 20100119156
    Abstract: An image processing apparatus includes a characteristic region detecting section that detects a characteristic region in an image, an image dividing section that divides the image into the characteristic region and a background region other than the characteristic region, and a compressing section that compresses a characteristic region image which is an image of the characteristic region and a background region image which is an image of the background region at different strengths from each other.
    Type: Application
    Filed: January 19, 2010
    Publication date: May 13, 2010
    Applicant: FUJIFILM Corporation
    Inventors: Yukinori NOGUCHI, Hirokazu Kameyama
  • Publication number: 20100118125
    Abstract: A method of generating three-dimensional (3D) image data from first and second image data obtained by photographing the same subject at different points of time, the method including generating third image data by adjusting locations of pixels in the second image data so that the second image data corresponds to the first image data, and generating the 3D image data based on a relationship between the third image data and the first image data.
    Type: Application
    Filed: November 4, 2009
    Publication date: May 13, 2010
    Applicant: Samsung Electronics Co., Ltd.
    Inventor: Hyun-soo PARK
  • Publication number: 20100111444
    Abstract: A stochastic method and system for fast stereoscopic ranging includes selecting a pair of images for stereo processing, in which the pair of images is a frame pair and one of the images is a reference frame, seeding estimated values for a range metric at each pixel of the reference frame, initializing one or more search stage constraints, stochastically computing local influence for each valid pixel in the reference frame, aggregating local influences for each valid pixel in the reference frame, refining the estimated values for the range metric at each valid pixel in the reference frame based on the aggregated local influence, and post-processing range metric data. A valid pixel is a pixel in the reference frame that has a corresponding pixel in the other frame of the frame pair. The method repeats the steps from the stochastic computation through the post-processing for n iterations.
    Type: Application
    Filed: April 24, 2008
    Publication date: May 6, 2010
    Inventor: Thayne R. Coffman
  • Patent number: 7711155
    Abstract: The present invention is a system and method for modeling faces from images captured from a single or a plurality of image capturing systems at different times. The method first determines the demographics of the person being imaged. This demographic classification is then used to select an approximate three dimensional face model from a set of models. Using this initial model and properties of camera projection, the model is adjusted leading to a more accurate face model.
    Type: Grant
    Filed: April 12, 2004
    Date of Patent: May 4, 2010
    Assignee: VideoMining Corporation
    Inventors: Rajeev Sharma, Kuntal Sengupta
  • Patent number: 7711495
    Abstract: For automatic identification of microorganisms collected on a carrier, a color image of the carrier surface with collected microorganisms is recorded and digitalized. The digitalized image is converted into a grayscale image and optionally converted subsequently into a silhouette image. When microorganisms are present, an image is produced with full-surface labeled objects of a first grayscale and a background of a second grayscale. Objects are identified in the grayscale and/or silhouette image by a model-based comparative method. Contours of the objects are marked in the color or grayscale image. Features of the objects in the color image and/or grayscale image are determined. The objects are classified based on the features. The classified objects are indicated and/or saved as species, name and/or code. Non-classified objects are indicated and/or saved as color, grayscale and/or silhouette image. Non-classified objects are subsequently discarded or added as a new case to the classification system.
    Type: Grant
    Filed: October 1, 2004
    Date of Patent: May 4, 2010
    Inventor: Petra Perner
  • Patent number: 7711180
    Abstract: The present invention provides a three-dimensional image measuring apparatus and method capable of measuring projections and depressions on a surface of an object with fine precision, as well as ensuring stable convergence, even for stereo images with significant projection distortion.
    Type: Grant
    Filed: April 20, 2005
    Date of Patent: May 4, 2010
    Assignee: Topcon Corporation
    Inventors: Tadayuki Ito, Hitoshi Otani, Nobuo Kochi
  • Publication number: 20100104219
    Abstract: An image processing method including: obtaining points on left-eye and right-eye images to be generated from a two-dimensional (2D) image, to which a predetermined pixel of the 2D image is to be mapped, by using the sizes of holes to be generated in the left-eye and right-eye images; and generating the left-eye and right-eye images respectively having the obtained points to which the predetermined pixel of the 2D image is mapped.
    Type: Application
    Filed: September 10, 2009
    Publication date: April 29, 2010
    Applicant: Samsung Electronics Co., Ltd.
    Inventor: Alexander LIMONOV
  • Patent number: 7706600
    Abstract: Virtual navigation (2255) and examination of virtual objects are enhanced using methods of ensuring that an entire surface to be examined has been properly viewed. A user interface (FIG. 23) identifies regions which have not been subject to examination and provides a mechanism (2250) to route the user to these regions in the 3D display. Virtual examination is further improved by the use of measuring disks (905) to enhance quantitative measurements such as diameter, distance, volume and angle. Yet another enhancement to the virtual examination of objects is a method of electronic segmentation, or cleaning, which corrects for partial volume effects occurring in an object.
    Type: Grant
    Filed: October 2, 2001
    Date of Patent: April 27, 2010
    Assignee: The Research Foundation of State University of New York
    Inventors: Kevin Kreeger, Sarang Lakare, Zhenrong Liang, Mark R. Wax, Ingmar Bitter, Frank Dachille, Dongqing Chen, Arie E. Kaufman
  • Patent number: 7706602
    Abstract: An apparatus for generating a three-dimensional model of an object includes a storage unit that stores three-dimensional coordinates of plural vertices on a standard model of the object, an image input unit that inputs plural input images acquired by photographing the object, a first detection unit that detects a coordinate of a first point corresponding to a vertex on the standard model, from a first image selected from among the plural input images, a second detection unit that detects a coordinate of a second point corresponding to the coordinate of the first point, from a second image other than the first image, a depth computation unit that computes a depth of the first point by using the coordinates of the first and second points, and a first update unit that updates the three-dimensional coordinate on the standard model based on the coordinate of the first point and the calculated depth.
    Type: Grant
    Filed: March 6, 2006
    Date of Patent: April 27, 2010
    Assignee: Kabushiki Kaisha Toshiba
    Inventor: Akiko Nakashima
  • Publication number: 20100092105
    Abstract: An information processing apparatus comprises a data conversion unit for converting second 3D image information, to which first image information can be pasted, into 3D photo frame data including three-dimensional object information representing a three-dimensional shape of an object included in the second 3D image information and parameter information including a pasting position of the first image information, a parse calculation unit for calculating an image of the 3D photo frame data projected onto a display screen, an image pasting unit for pasting the first image information to the 3D photo frame data, and a display control unit for outputting to the display screen the 3D photo frame data or the 3D photo frame data pasted with the first image information.
    Type: Application
    Filed: October 5, 2009
    Publication date: April 15, 2010
    Applicant: Sony Corporation
    Inventors: Shunsuke Aoki, Keigo Ihara, Shigeki Nakamura
  • Patent number: 7697787
    Abstract: The present invention concerns a method for replacing the face of an actor in a video clip with that of a user (U) of an entertainment video system (4), consisting in: during a preparation phase, taking a first fixed picture of the face of the user; building a 3D-model of that face; replacing a first video picture of the actor with a reproduction of the face of the user; and while the clip is going on, replacing the face of the actor in the successive pictures of the video clip with successive pictures reproducing the face of the user, the transitions of the face of the actor being followed by applying at least orientation, size and displacement vectors to the 3D-model of the face of the user on the basis of orientation, size and displacement vectors calculated for the face of the actor in the clip.
    Type: Grant
    Filed: June 6, 2003
    Date of Patent: April 13, 2010
    Assignee: Accenture Global Services GmbH
    Inventor: Martin Illsley
  • Patent number: 7693318
    Abstract: The invention provides improvements in reconstructive imaging of the type in which a volume is reconstructed from a series of measured projection images (or other two-dimensional representations) by utilizing the capabilities of graphics processing units (GPUs). In one aspect, the invention configures a GPU to reconstruct a volume by initializing an estimated density distribution of that volume to arbitrary values in a three-dimensional voxel-based matrix and, then, determining the actual density distribution iteratively by, for each of the measured projections, (a) forward-projecting the estimated volume computationally and comparing the forward-projection with the measured projection, (b) generating a correction term for each pixel in the forward-projection based on that comparison, and (c) back-projecting the correction term for each pixel in the forward-projection onto all voxels of the volume that were mapped into that pixel in the forward-projection.
    Type: Grant
    Filed: October 31, 2005
    Date of Patent: April 6, 2010
    Assignee: PME IP Australia Pty Ltd
    Inventors: Detlev Stalling, Malte Westerhoff, Martin Seebass, Ralf Kubis
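A toy sketch of the iterative loop the abstract describes, with the forward projection written as a matrix A over a voxel vector (a GPU implementation would replace these dense products with texture-based projection): forward-project the estimate, compare with the measured projection, form a per-pixel correction, and back-project the corrections onto the contributing voxels. The relaxation factor and normalization are assumptions.

```python
# Hypothetical illustration: forward-project, compare, correct, back-project.
import numpy as np

def iterative_reconstruct(A, measured, n_iters=50, relax=0.1):
    """A: (n_pixels, n_voxels) projection matrix; measured: (n_pixels,) projection data."""
    volume = np.zeros(A.shape[1])                 # estimated density distribution
    col_weights = A.sum(axis=0) + 1e-12           # how much each voxel contributes overall
    for _ in range(n_iters):
        forward = A @ volume                      # computational forward projection
        correction = measured - forward           # per-pixel correction term
        volume += relax * (A.T @ correction) / col_weights   # back-project corrections
        volume = np.maximum(volume, 0.0)          # keep densities non-negative
    return volume
```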
  • Publication number: 20100080489
    Abstract: The first image may be displayed adjacent to the second image, where the second image is a three dimensional image. An element may be selected in the first image and a matching element may be selected in the second image. A selection may be permitted to view a merged view, where the merged view is the first image displayed over the second image by varying the opaqueness of the images. If the merged view is not acceptable, the method may repeat; if the merged view is acceptable, the merged view (the first image displayed over the second image) may be stored as a merged image.
    Type: Application
    Filed: September 30, 2008
    Publication date: April 1, 2010
    Applicant: MICROSOFT CORPORATION
    Inventors: Billy Chen, Eyal Ofek, Gonzalo Ramos, Michael F. Cohen, Steven M. Drucker
  • Patent number: 7680322
    Abstract: Printed material for stereoscopic viewing is fabricated by creating images for the left eye and the right eye IL1 and IR1 for stereoscopic viewing, performing correction processing to remove perspective of images in the IL1 and IR1 at a base surface, creating images for the left eye and the right eye IL2 and IR2, and performing anaglyph processing or the like, based on the IL2 and IR2. An image IL for the left eye is created by rendering a projection of each point of a graphic object onto a base surface BS in a projection direction linking a viewpoint position for the left eye VPL to each point of the graphic object OB, and an image IR for the right eye is created by rendering a projection of each point of the graphic object onto the base surface BS in a projection direction linking a viewpoint position for the right eye VPR to each point of the graphic object OB. Printed material for stereoscopic viewing is fabricated by anaglyph processing or the like, based on the IL and IR.
    Type: Grant
    Filed: November 12, 2003
    Date of Patent: March 16, 2010
    Assignee: Namco Bandai Games Inc.
    Inventors: Shigeki Tooyama, Atsushi Miyazawa
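A small sketch of the final anaglyph step mentioned in the abstract: once left-eye and right-eye images IL and IR exist, a red-cyan anaglyph can be formed by taking the red channel from the left image and the green/blue channels from the right image. The channel assignment follows the common red-cyan convention and is not asserted to be the patent's exact processing.

```python
# Hypothetical illustration: red-cyan anaglyph from left/right eye images.
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """left_rgb, right_rgb: HxWx3 uint8 images of identical size."""
    anaglyph = np.empty_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., 0]     # red channel from the left-eye image
    anaglyph[..., 1:] = right_rgb[..., 1:]  # green and blue from the right-eye image
    return anaglyph
```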
  • Patent number: 7664315
    Abstract: An integrated image processor implemented on a substrate is disclosed. An input interface is configured to receive pixel data from two or more images. A pixel handling processor disposed on the substrate is configured to convert the pixel data into depth and intensity pixel data. In some embodiments, a foreground detector processor disposed on the substrate is configured to classify pixels as background or not background. In some embodiments, a projection generator disposed on the substrate is configured to generate a projection in space of the depth and intensity pixel data.
    Type: Grant
    Filed: October 31, 2005
    Date of Patent: February 16, 2010
    Assignee: Tyzx, Inc.
    Inventors: John Iselin Woodfill, Ronald John Buck, Gaile Gibson Gordon, David Walter Jurasek, Terrence Lee Brown
  • Publication number: 20100034439
    Abstract: A projected image generating unit generates a projected image of a two-dimensional image that expresses three-dimensional information, based on three-dimensional data stored in an original image storing unit. A position information storage unit records therein three-dimensional position information of a target pixel that has been detected by the projected image generating unit and the coordinates of the target pixel within the projected image, while keeping them in correspondence with each other. A user inputs a position of a specified point within the projected image by using an input unit. By referring to the position information storage unit, a position obtaining unit obtains three-dimensional position information of the specified point. An area extracting unit extracts a three-dimensional image of a target area containing the specified point, based on the three-dimensional position information of the specified point that has been obtained by the position obtaining unit.
    Type: Application
    Filed: July 22, 2009
    Publication date: February 11, 2010
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventor: Mieko Asano
  • Patent number: 7657079
    Abstract: A system to capture an image and determine a position of an object utilizes a camera. A first processing module recognizes a set of predetermined landmarks, including a first landmark and remainder landmarks, in the image. A second processing module determines an actual location of the first landmark in the image, and applies at least one filtering scheme to estimate positions of the remainder landmarks in the image. A third processing module determines a pose of the object based on the actual location of the first landmark and the estimated positions of the remainder landmarks.
    Type: Grant
    Filed: June 28, 2002
    Date of Patent: February 2, 2010
    Assignee: Intel Corporation
    Inventors: Adam T. Lake, Carl S. Marshall
  • Publication number: 20100014781
    Abstract: An example-based 2D to 3D image conversion method, a computer readable medium therefor, and a system are provided. The embodiments are based on an image database with depth information or with which depth information can be generated. With respect to a 2D image to be converted into 3D content, a matched background image is found from the database. In addition, graph-based segmentation and comparison techniques are employed to detect the foreground of the 2D image so that the relative depth map can be generated from the foreground and background information. Therefore, the 3D content can be provided with the 2D image plus the depth information. Thus, users can rapidly obtain the 3D content from the 2D image automatically and the rendering of the 3D content can be achieved.
    Type: Application
    Filed: January 9, 2009
    Publication date: January 21, 2010
    Applicant: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventors: Kai-Che Liu, Fu-Chiang Jan, Wen-Chao Chen, Cheng-Feng Wu, Tsu-Han Chen, Qi Wu
  • Publication number: 20100002910
    Abstract: A method and apparatus for modeling an object in software are disclosed. The method includes generating a three-dimensional geometry of the object from a plurality of points obtained from a plurality of images of the object, the images having been acquired from a plurality of perspectives; and generating a three-dimensional model from the three-dimensional geometry for integration into an object recognition system. The apparatus may be a program storage medium encoded with instructions that, when executed by a computer, perform such a method or a computer programmed to perform such a method.
    Type: Application
    Filed: September 15, 2009
    Publication date: January 7, 2010
    Inventors: Walter H. Delashmit, JR., James T. Jack, JR.
  • Patent number: 7643662
    Abstract: Systems and methods are provided for accessing a three-dimensional representation of an anatomical surface and flattening the anatomical surface so as to produce a two-dimensional representation of the anatomical surface. The two-dimensional surface can be augmented with computed properties such as thickness, curvature, thickness and curvature, or user-defined properties. A user can interact with the rendered two-dimensional representation of the anatomical surface so as to derive quantitative measurements such as diameter, area, volume, and number of voxels.
    Type: Grant
    Filed: August 15, 2006
    Date of Patent: January 5, 2010
    Assignee: General Electric Company
    Inventor: David T Gering
  • Patent number: 7643673
    Abstract: Data-driven guarded evaluation of conditional-data associated with data objects is used to control activation and processing of the data objects in an interactive geographic information system. Methods of evaluating conditional-data to control activation of the data objects are disclosed herein. Data-structures to specify conditional data are also disclosed herein.
    Type: Grant
    Filed: June 12, 2007
    Date of Patent: January 5, 2010
    Assignee: Google Inc.
    Inventors: John Rohlf, Bent Hagemark, Brian McClendon, Michael T. Jones
  • Patent number: 7634298
    Abstract: The present invention provides for a mobile device component used in a 4DHelp information distribution system. The mobile device enables a user to utilize the 4DHelp information distribution system in a wireless environment such that even if the user is physically isolated from self-help data or instructional materials, the user can still access useful 4DHelp information and receive significant benefits. The mobile device includes an input device for which the user can enter a user request, a transmitting device that transmits and retrieves 4DHelp data based upon the user's request, a receiver that accepts the 4DHelp data transmitted, and a processing device that provides a visual output display to the user and allows the user to adjust the 4DHelp visual output display.
    Type: Grant
    Filed: December 5, 2006
    Date of Patent: December 15, 2009
    Inventor: Richard D. Kaplan
  • Publication number: 20090304265
    Abstract: In one embodiment, a system and method for modeling a three-dimensional object includes capturing two-dimensional images of the object from multiple different viewpoints to obtain multiple views of the object, estimating slices of the object that lie in parallel planes that cut through the object, and computing a surface of the object from the estimated slices.
    Type: Application
    Filed: February 5, 2009
    Publication date: December 10, 2009
    Inventors: Saad M. Khan, Pingkun Yan, Mubarak Shah
  • Patent number: 7630580
    Abstract: Systems and methods are provided for performing diffusion-based image extrusion. According to one embodiment, a three dimensional model is created by polygonizing an input image to produce an inflatable image. The input image may be either a 2D image or icon or a 3D image or icon. The set of pixels making up the input image are represented as a plurality of polygons. Then, an initial value is assigned to a z-coordinate of each pixel in the set of pixels. After polygonizing the input image to create the inflatable image, the inflatable image is extruded by applying a biased image-based diffusion process to generate appropriate z-coordinate values for a reference point associated with each polygon of the plurality of polygons. In various embodiments, an end-user may be provided with the ability to interactively change one or more parameters associated with the inflatable image and/or the diffusion process.
    Type: Grant
    Filed: May 4, 2005
    Date of Patent: December 8, 2009
    Assignee: AgentSheets, Inc.
    Inventor: Alexander Repenning