Mapping 2-d Image Onto A 3-d Surface Patents (Class 382/285)
  • Patent number: 8224122
    Abstract: A dynamic wide-angle image viewing technique is presented which provides a way to view a wide-angle image while zooming between a wide-angle view and a narrower-angle view, employing both perspective and non-perspective projection models. In general, this involves first establishing the field of view for the view of the wide-angle image that is to be displayed. The view is then rendered and displayed based on the established field of view, such that the projection transitions between a perspective projection associated with narrower-angle views and a non-perspective projection (e.g., cylindrical, spherical or some other parameterization) associated with wider-angle views.
    Type: Grant
    Filed: April 23, 2007
    Date of Patent: July 17, 2012
    Assignee: Microsoft Corporation
    Inventors: Michael Cohen, Matthew Uyttendaele, Johannes Kopf
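The projection transition the abstract describes can be sketched as a field-of-view-driven blend between the two models. This is an illustrative sketch only: the blend endpoints and the restriction to a horizontal coordinate are assumptions, not details from the patent.

```python
import math

def project_x(phi, f, fov,
              fov_narrow=math.radians(60), fov_wide=math.radians(120)):
    # Horizontal image coordinate of a ray at angle phi for focal
    # length f: pure perspective (f*tan(phi)) at narrow fields of view,
    # pure cylindrical (f*phi) at wide ones, blended in between.
    t = (fov - fov_narrow) / (fov_wide - fov_narrow)
    t = min(max(t, 0.0), 1.0)  # 0 = perspective, 1 = cylindrical
    return (1.0 - t) * f * math.tan(phi) + t * f * phi
```

At the narrow end the function reduces exactly to the perspective model, and at the wide end to the cylindrical one, so zooming animates smoothly between them.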
  • Patent number: 8224065
    Abstract: The present disclosure describes a system and method for transforming a two-dimensional image of an object into a three-dimensional representation, or model, that recreates the three-dimensional contour of the object. In one example, three pairs of symmetric points establish an initial relationship between the original image and a virtual image, then additional pairs of symmetric points in the original image are reconstructed. In each pair, a visible point and an occluded point are mapped into 3-space with a single free variable characterizing the mapping for all pairs. A value for the free variable is then selected to maximize compactness of the model, where compactness is defined as a function of the model's volume and its surface area. “Noise” correction derives from enforcing symmetry and selecting best-fitting polyhedra for the model. Alternative embodiments extend this to additional polyhedra, add image segmentation, use perspective, and generalize to asymmetric polyhedra and non-polyhedral objects.
    Type: Grant
    Filed: January 9, 2008
    Date of Patent: July 17, 2012
    Assignee: Purdue Research Foundation
    Inventors: Zygmunt Pizlo, Yunfeng Li, Robert M. Steinman
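As a toy illustration of the compactness criterion: the abstract only states that compactness is a function of the model's volume and surface area; V^2/S^3 is one common scale-invariant choice, and the box below is a stand-in for the reconstructed polyhedron with the depth playing the role of the single free variable.

```python
def compactness(volume, surface_area):
    # One scale-invariant choice of compactness measure: V^2 / S^3.
    return volume ** 2 / surface_area ** 3

def best_depth(width, height, candidate_depths):
    # Vary a box's depth (the free variable) and keep the value that
    # maximizes compactness of the resulting model.
    def score(d):
        v = width * height * d
        s = 2.0 * (width * height + width * d + height * d)
        return compactness(v, s)
    return max(candidate_depths, key=score)
```

For a unit-square cross-section this criterion picks the cube (depth 1), which matches the intuition that compactness favors the "fattest" consistent interpretation.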
  • Patent number: 8224037
    Abstract: A method for face model fitting comprising, receiving a first observed image, receiving a second observed image, and fitting an active appearance model of a third image to the second observed image and the first observed image with an algorithm that includes a first function of a mean-square-error between a warped image of the second observed image and a synthesis of the active appearance model and a second function of a mean-square-error between the warped image of the second observed image and an appearance data of the first observed image.
    Type: Grant
    Filed: April 10, 2008
    Date of Patent: July 17, 2012
    Assignee: UTC Fire & Security Americas Corporation, Inc.
    Inventors: Xiaoming Liu, Peter Henry Tu, Frederick Wilson Wheeler
  • Patent number: 8218903
    Abstract: A system creates three-dimensional computer models of physical objects by displaying illumination patterns on a display device to incidentally illuminate a physical object. A video camera acquires images of the object illuminated by the patterns. The patterns can include motion and multiple colors for acquiring images with large variations in surface shading of the object. Shading values from acquired images of the object are analyzed to determine the orientations of points on the object's surface. The system evaluates the quality of acquired images and selects patterns tailored to specific attributes of objects. The points' orientations are determined by comparing the points' shading values with an illumination model or shading values acquired from a calibration shading object. A model surface is fitted to the points' orientations. Applications may utilize the model for any purpose, including creating and exchanging customized virtual objects, enhanced object tracking, and videoconferencing applications.
    Type: Grant
    Filed: August 17, 2007
    Date of Patent: July 10, 2012
    Assignee: Sony Computer Entertainment Inc.
    Inventor: Steven Osman
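The shading-to-orientation step resembles classic Lambertian photometric stereo. A minimal sketch follows, assuming three known illumination directions and omitting the patent's calibration shading object; this is a generic technique, not necessarily the patented procedure.

```python
def det3(m):
    # 3x3 determinant by cofactor expansion along the first row.
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def solve_normal(lights, shading):
    # Lambertian model: shading_k = lights_k . n (albedo folded into n).
    # Solve the 3x3 system L n = s for the normal n by Cramer's rule.
    D = det3(lights)
    n = []
    for k in range(3):
        M = [[shading[r] if c == k else lights[r][c] for c in range(3)]
             for r in range(3)]
        n.append(det3(M) / D)
    return n
```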
  • Patent number: 8218825
    Abstract: Capturing and processing facial motion data includes: coupling a plurality of sensors to target points on a facial surface of an actor; capturing frame by frame images of the plurality of sensors disposed on the facial surface of the actor using at least one motion capture camera disposed on a head-mounted system; performing, in the head-mounted system, a tracking function on the frame by frame images of the plurality of sensors to accurately map the plurality of sensors for each frame; and generating, in the head-mounted system, a modeled surface representing the facial surface of the actor.
    Type: Grant
    Filed: August 25, 2009
    Date of Patent: July 10, 2012
    Assignees: Sony Corporation, Sony Pictures Entertainment Inc.
    Inventors: Demian Gordon, Remington Scott, Parag Havaldar, Dennis J. Hauck, Jr.
  • Patent number: 8218836
    Abstract: A system and methods for generating 3D images (24) from 2D bioluminescent images (22) and visualizing tumor locations are provided. A plurality of 2D bioluminescent images of a subject are acquired during a complete revolution of an imaging system about the subject, using any suitable bioluminescent imaging system. After imaging, the 2D images are registered (20) according to the rotation axis to align each image and to compensate for differences between adjacent images. After registration (20), corresponding features are identified between consecutive sets of 2D images (22). For each corresponding feature identified in each set of 2D images, an orthographic projection model (24) is applied, such that rays are projected through each point in the feature. The intersection points of the rays are plotted, and a 3D image of the tumor is generated. The 3D image can be registered with a reference image of the subject, so that the shape and location of the tumor can be precisely visualized with respect to the subject.
    Type: Grant
    Filed: August 31, 2006
    Date of Patent: July 10, 2012
    Assignee: Rutgers, The State University of New Jersey
    Inventors: Dimitris Metaxas, Debabrata Banerjee, Xiaolei Huang
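The ray-intersection step can be illustrated in a single axial slice, assuming ideal orthographic projection (a simplification of the abstract's model): each view constrains the point to a line, and two views pin it down.

```python
import math

def back_project(u1, theta1, u2, theta2):
    # In the view at rotation angle theta, a slice point p projects
    # orthographically to coordinate u = p . n with image axis
    # n = (-sin theta, cos theta); two views give a 2x2 system for p.
    n1 = (-math.sin(theta1), math.cos(theta1))
    n2 = (-math.sin(theta2), math.cos(theta2))
    det = n1[0] * n2[1] - n1[1] * n2[0]
    x = (u1 * n2[1] - u2 * n1[1]) / det
    y = (n1[0] * u2 - n2[0] * u1) / det
    return x, y
```

Intersecting the rays from every pair of consecutive views, feature point by feature point, accumulates the 3D point cloud from which the tumor image is built.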
  • Patent number: 8218854
    Abstract: A method for synthesizing an image with multi-view images includes inputting multiple reference images, each corresponding to a reference viewing-angle for photographing; synthesizing an image corresponding to a viewpoint and an intended viewing-angle; and segmenting the intended synthesized image to obtain a plurality of meshes and a plurality of vertices of the meshes. Each vertex together with the viewpoint establishes a viewing-angle, and the method further includes searching for a plurality of neighboring images among the reference images according to the viewing-angle. If at least one of the neighboring images falls within an adjacent region of the vertex, a first mode is adopted, without interpolation, to synthesize the intended synthesized image; when none of the neighboring images falls within the adjacent region of the vertex, a second mode is adopted, in which a weighting-based interpolation mechanism is used for synthesizing the intended synthesized image.
    Type: Grant
    Filed: May 26, 2008
    Date of Patent: July 10, 2012
    Assignee: Industrial Technology Research Institute
    Inventors: Kai-Che Liu, Jen-Tse Huang, Hong-Zeng Yeh, Fu-Chiang Jan
  • Patent number: 8213742
    Abstract: An image reading apparatus for reading an image of a rectangular document set on a setting board is provided, capable of assigning a correct edge image to an output image. In the image reading apparatus, an image data storing section stores image data for a reading area larger than the rectangular document; an edge feature extracting section extracts edge features from the image data; a rectangle feature extracting section extracts, from the edge features, edge features of side regions respectively corresponding to each side of the rectangular document; a region selecting section selects two regions from the side regions; a coordinates calculating section calculates coordinates specifying the positions of the straight lines representing the four sides, using inclination information of the selected regions; and a compounded-image outputting section replaces the feature region with a frame image according to the coordinates, compounds the frame image with the image data, and outputs the compounded image.
    Type: Grant
    Filed: February 6, 2008
    Date of Patent: July 3, 2012
    Assignee: Oki Data Corporation
    Inventor: Tomonori Kondo
  • Patent number: 8207964
    Abstract: The present invention provides methods and apparatus for generating a three dimensional output which includes a continuum of image data sprayed over three-dimensional models. The three-dimensional models can be representative of features captured by the image data wherein image data can be captured at multiple disparate points along another continuum. The user interface can also include multiple modalities of image data and statistical analysis of the image data.
    Type: Grant
    Filed: February 22, 2008
    Date of Patent: June 26, 2012
    Inventors: William D. Meadow, Randall A. Gordie, Jr., Matthew Pavelle
  • Patent number: 8204340
    Abstract: A method for a computer system includes receiving a first camera image of a 3D object having sensor markers, captured from a first location, at a first instance, receiving a second camera image of the 3D object from a second location, at a different instance, determining points from the first camera image representing sensor markers of the 3D object, determining points from the second camera image representing sensor markers of the 3D object, determining approximate correspondence between points from the first camera image and points from the second camera image, determining approximate 3D locations of some sensor markers of the 3D object, and rendering an image including the 3D object in response to the approximate 3D locations.
    Type: Grant
    Filed: September 29, 2008
    Date of Patent: June 19, 2012
    Assignee: Two Pic MC LLC
    Inventors: Nori Kanazawa, Douglas Epps
  • Patent number: 8200041
    Abstract: Disclosed herein are approaches for detecting and/or generating silhouettes, in graphics processing applications, of objects (e.g., convex objects such as polyhedrons).
    Type: Grant
    Filed: December 18, 2008
    Date of Patent: June 12, 2012
    Assignee: Intel Corporation
    Inventors: David Bookout, Rahul P. Sathe
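One standard silhouette test for convex polyhedra marks an edge as a silhouette edge whenever its two adjacent faces face opposite ways relative to the viewer. This is a generic sketch of the technique the abstract names, not necessarily the patented variant.

```python
def silhouette_edges(edges, face_normals, view_dir):
    # edges: (face_a, face_b) index pairs naming the two faces that
    # share each edge. An edge lies on the silhouette exactly when one
    # adjacent face is front-facing and the other back-facing.
    def front_facing(f):
        n = face_normals[f]
        return sum(a * b for a, b in zip(n, view_dir)) < 0
    return [e for e in edges if front_facing(e[0]) != front_facing(e[1])]
```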
  • Patent number: 8195006
    Abstract: The invention relates to a method and a device for representing a two-dimensional digital image on a projection surface, whereby at least one projector projects the content of an image buffer onto the projection surface. The aim of the invention is to provide a method and a device which allow to project a digital image onto a background having any surface structure and/or color in such a manner that any influences on the image caused by the background are compensated to the last pixel for at least one special observer perspective. In a special embodiment, definition of the representation is optimized to the last pixel even on an uneven background. For this purpose, the digital image is processed during a rendering step by geometrically distorting it using a two-dimensional pixel-offset field which contains information on the projection surface, and by manipulating the color of the image by means of a two-dimensional surface texture of the projection surface.
    Type: Grant
    Filed: August 2, 2005
    Date of Patent: June 5, 2012
    Assignee: Bauhaus-Universitaet Weimar
    Inventors: Thomas Klemmer, Oliver Bimber, Andreas Emmerling
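The color-manipulation step can be sketched per pixel and per channel, assuming a measured scalar reflectance from the two-dimensional surface texture; the geometric pixel-offset warp is omitted here, and the clipping behavior is an illustrative assumption.

```python
def compensate(desired, reflectance, eps=1e-6):
    # Projector drive value = desired intensity / measured surface
    # reflectance at that pixel, clipped to the displayable range.
    # eps guards against division by zero on near-black surface patches.
    out = desired / max(reflectance, eps)
    return min(out, 1.0)
```

Dark or colored surface patches get proportionally brighter projector output, so the observed image matches the intended one up to the projector's dynamic range.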
  • Patent number: 8184144
    Abstract: The present invention calibrates interior orientation parameters (IOPs) and exterior orientation parameters (EOPs). With the calibrated IOPs and EOPs, a remotely controlled camera can quickly obtain the corresponding IOPs and EOPs for any pan, tilt or zoom setting. The remotely controlled camera thus achieves accurate imaging and measurement and finds wide application.
    Type: Grant
    Filed: July 19, 2009
    Date of Patent: May 22, 2012
    Assignee: National Central University
    Inventors: Chi-Farn Chen, Li-Yu Chang, Su-Rung Yang
  • Patent number: 8170367
    Abstract: A design image is transformed into a projection design image comprising the design image as it will appear when projected onto a physical 3-dimensional (3-D) curved object. In an embodiment, pixels of the design image are mapped into corresponding mapped pixels in a projection design image according to how the design image will appear in a flattened image of the design projected or printed onto the object having 3-dimension curves. The projection design image may be combined with a product image of the object having 3-dimension curves to generate a customized product image of the object having 3-dimension curves incorporating the design image. The customized product image is displayed to a user when customizing a product with a design to ensure that the user understands how the physical product will appear when the design is printed or projected onto the physical product.
    Type: Grant
    Filed: January 28, 2008
    Date of Patent: May 1, 2012
    Assignee: Vistaprint Technologies Limited
    Inventors: Jay T. Moody, Michael P. Daugherty, Terence M. Tirella
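A minimal sketch of such a pixel mapping for a cylindrical product, assuming the design's width equals the wrapped arc length and the product is viewed head-on; this is an illustrative simplification of the abstract's general curved-object mapping.

```python
import math

def warp_to_cylinder(u, width, radius):
    # Treat the design's horizontal span as arc length on the cylinder;
    # the apparent (projected) x position of design column u is then
    # radius * sin(angle), recentred on the design midline, so spacing
    # compresses toward the sides exactly as on the flattened photo.
    angle = (u - width / 2.0) / radius
    return radius * math.sin(angle) + width / 2.0
```

Columns near the midline map almost unchanged, while columns near the edges are foreshortened, which is what makes the preview look like a photograph of the printed mug or bottle.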
  • Publication number: 20120099804
    Abstract: Interactive three-dimensional (3D) virtual tours are generated from ordinary two-dimensional (2D) still images such as photographs. Two or more 2D images are combined to form a 3D scene, which defines a relationship among the 2D images in 3D space. 3D pipes connect the 2D images with one another according to defined spatial relationships and for guiding virtual camera movement from one image to the next. A user can then take a 3D virtual tour by traversing images within the 3D scene, for example by moving from one image to another, either in response to user input or automatically. In various embodiments, some or all of the 2D images can be selectively distorted to enhance the 3D effect, and thereby reinforce the impression that the user is moving within a 3D space. Transitions from one image to the next can take place automatically without requiring explicit user interaction.
    Type: Application
    Filed: September 7, 2011
    Publication date: April 26, 2012
    Applicant: 3DITIZE SL
    Inventors: Jaime Aguilera, Fernando Alonso, Juan Bautista Gomez
  • Patent number: 8150205
    Abstract: An image processing apparatus comprises first and second eyes-and-mouth detecting units, first and second skin model generating units, an inside-of-mouth model generating unit, an eyeball model generating unit, first and second deformation parameter generating units, and an output unit. The first and second eyes-and-mouth detecting units detect eye boundaries and a mouth boundary from first and second face images, respectively. The first and second skin model generating units generate first and second skin models, respectively. The inside-of-mouth model generating unit and the eyeball model generating unit generate an inside-of-mouth model and an eyeball model, respectively. The first and second deformation parameter generating units generate first and second deformation parameters, respectively. The output unit outputs the first and second skin models, the first and second deformation parameters, the inside-of-mouth model, and the eyeball model as animation data.
    Type: Grant
    Filed: December 1, 2006
    Date of Patent: April 3, 2012
    Assignee: Sony Corporation
    Inventor: Osamu Watanabe
  • Publication number: 20120070101
    Abstract: A method for estimating three-dimensional structure from a two-dimensional image (502) includes obtaining first and second vanishing points (120, 122); comparing image patches (202) along first and second virtual lines (204, 208) extending from the first and second vanishing points (120, 122), respectively, and through a pixel; generating values for each of the first and second virtual lines (204, 208) based on the comparison of the image patches (202); accumulating the values for each the pixel in the two-dimensional image (502); and determining a corner pixel (106) based on a highest of the accumulated values.
    Type: Application
    Filed: December 16, 2009
    Publication date: March 22, 2012
    Inventors: Hadas Kogan, Ron Maurer, Renato Keshet
  • Patent number: 8140295
    Abstract: An auto-referenced sensing device for scanning an object to provide three-dimensional surface points, including: a Light-Emitting Diode (LED) light source emitting light for illuminating and enabling image acquisition of retro-reflective target positioning features provided at a fixed position on the object; a laser pattern projector, additional to the LED light source, for providing a projected laser pattern on a surface of the object for illuminating and enabling image acquisition of dense points between the retro-reflective target positioning features; and at least a pair of cameras for simultaneously acquiring 2D images of the object, on which the projected laser pattern and the retro-reflective target positioning features are apparent, wherein the simultaneous images contain both positioning measurements from the retro-reflective target positioning features and dense surface measurements from the points enabled by the projected laser pattern.
    Type: Grant
    Filed: December 3, 2010
    Date of Patent: March 20, 2012
    Assignee: Creaform Inc.
    Inventors: Patrick Hebert, Éric Saint-Pierre, Dragan Tubic
  • Patent number: 8134551
    Abstract: Embodiments of the invention provide a renderer-agnostic method for representing materials independently from an underlying rendering engine. Advantageously, materials libraries may be extended with new materials for rendering with an existing rendering engine and implementation. Also, new rendering engines and implementations may be added for existing materials. Thus, at run-time, rather than limiting the rendering to being performed on a pre-determined rendering engine, the rendering application may efficiently and conveniently manage rendering a graphics scene on a plurality of rendering engines or implementations.
    Type: Grant
    Filed: February 29, 2008
    Date of Patent: March 13, 2012
    Assignee: AUTODESK, Inc.
    Inventors: Jerome Maillot, Andre Gauthier, Daniel Levesque
  • Patent number: 8134570
    Abstract: A system, method and computer program product are provided for packing graphics attributes. In use, a plurality of graphics attributes is identified. Such graphics attributes are packed, such that the packed graphics attributes are capable of being processed utilizing a pixel shader.
    Type: Grant
    Filed: September 18, 2006
    Date of Patent: March 13, 2012
    Assignee: NVIDIA Corporation
    Inventors: Jerome F. Duluk, Jr., Andrew J. Tao, Roger L. Allen, Svetoslav D. Tzvetkov, Yan Yan Tang, Elena M. Ing
  • Patent number: 8126273
    Abstract: A method for reconstructing three-dimensional, plural views of images from two-dimensional image data. The method includes: obtaining two-dimensional, stereo digital data from images of an object; processing the digital data to generate an initial three-dimensional candidate of the object, such processing using projective geometric constraints imposed on edge points of the object; and refining the initial candidate by examining the spatial coherency of neighboring edge points along a surface of the candidate.
    Type: Grant
    Filed: March 10, 2008
    Date of Patent: February 28, 2012
    Assignee: Siemens Corporation
    Inventors: Gang Li, Yakup Genc
  • Patent number: 8126289
    Abstract: Some embodiments of the present invention may relate to a device and a method of enabling an automatic global matching of a plurality of images to provide a substantially consistent planar representation of a fundus. According to some embodiments of the invention, a device for enabling an automatic global matching of a plurality of images to provide a substantially consistent planar representation of a fundus may include a local matching module and a global matching module. The local matching module may be adapted to locally match a pair of overlapping images. As part of locally matching the images, the local matching module may be adapted to provide a best offset vector for the images based upon a matching of features from overlapping portions of the images. The global matching module may be adapted to globally match at least a triplet of locally matching pairs of images whose best offset vector sum is substantially zero.
    Type: Grant
    Filed: June 20, 2006
    Date of Patent: February 28, 2012
    Assignee: Ophthalmic Imaging Systems
    Inventors: Noam Allon, Nizan Horesh
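The triplet condition amounts to a cycle-consistency check on the pairwise best-offset vectors: around a closed loop of three overlapping images, the offsets of a globally consistent triplet should sum to (nearly) zero. A minimal sketch, with an illustrative tolerance:

```python
def consistent_triplet(v_ab, v_bc, v_ca, tol=2.0):
    # Sum the best offset vectors around the loop A->B->C->A and
    # accept the triplet when the residual magnitude is small.
    sx = v_ab[0] + v_bc[0] + v_ca[0]
    sy = v_ab[1] + v_bc[1] + v_ca[1]
    return (sx * sx + sy * sy) ** 0.5 <= tol
```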
  • Patent number: 8126225
    Abstract: A method is disclosed for generating 2D reconstruction images, in the scope of image post-processing, from a 3D image data set of a study object recorded in particular by use of a magnetic resonance device. In the method, the position of the 2D reconstruction layers in which the 2D reconstruction images lie is defined with the aid of layer position information. This information either defines the position of individual 2D recording layers in which 2D layer images are recorded after the 3D image data set of the study object is recorded (or have already been recorded beforehand), and is optionally stored in an operating mode for automatic layer position adaptation; or it directly defines 2D reconstruction layers and is stored in an operating mode for automatic layer position adaptation.
    Type: Grant
    Filed: September 18, 2006
    Date of Patent: February 28, 2012
    Assignee: Siemens Aktiengesellschaft
    Inventor: Gudrun Graf
  • Patent number: 8116591
    Abstract: Methods and systems for creating three-dimensional models from two-dimensional images are provided. According to one embodiment, a computer-implemented method of creating a polygon-based three-dimensional (3D) model from a two-dimensional (2D) pixel-based image involves creating an inflatable polygon-based 3D image and extruding the inflatable polygon-based 3D image. The inflatable polygon-based 3D image is created based on a 2D pixel-based input image by representing pixels making up the 2D pixel-based input image as polygons. The inflatable polygon-based 3D image is extruded by generating z-coordinate values for reference points associated with the polygons based upon a biased diffusion process.
    Type: Grant
    Filed: August 4, 2011
    Date of Patent: February 14, 2012
    Assignee: AgentSheets, Inc.
    Inventor: Alexander Repenning
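The biased-diffusion extrusion can be sketched on a binary mask: interior heights repeatedly diffuse toward their neighbor average, amplified by a bias, while the boundary stays clamped at zero so the region bulges. The unit pressure term and the bias value below are illustrative assumptions, not parameters from the patent.

```python
def inflate(mask, iters=50, bias=1.02):
    # Each interior masked cell takes the 4-neighbor average of the
    # height field, scaled by bias and pushed up by a unit "pressure"
    # term; unmasked and boundary cells stay at height 0, so the
    # masked region inflates like a balloon.
    h, w = len(mask), len(mask[0])
    z = [[0.0] * w for _ in range(h)]
    for _ in range(iters):
        nz = [[0.0] * w for _ in range(h)]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                if mask[y][x]:
                    avg = (z[y-1][x] + z[y+1][x]
                           + z[y][x-1] + z[y][x+1]) / 4.0
                    nz[y][x] = bias * avg + 1.0
        z = nz
    return z
```

The resulting height field supplies the z-coordinate values for the polygon reference points; cells far from the mask boundary end up highest, giving the rounded, inflated look.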
  • Patent number: 8111906
    Abstract: A stereoscopic image display device is provided. The stereoscopic image display device includes a display unit including a plurality of pixels arranged in a matrix, the respective pixels including right/left-eye pixels arranged in a row direction, an area detector for detecting first data respectively corresponding to at least a part of the pixels from a plurality of input data, a data converter for converting the first data to the right/left-eye data corresponding to the right/left-eye pixels, a data combiner for combining stereoscopic image data by arranging the right/left-eye data, and a data driver for applying a data signal corresponding to the stereoscopic image data to the display unit.
    Type: Grant
    Filed: June 29, 2006
    Date of Patent: February 7, 2012
    Assignee: Samsung Mobile Display Co., Ltd.
    Inventors: Myoung-Seop Song, Jang-Doo Lee, Hyoung-Wook Jang, Woo-Jong Lee, Hyun-Sook Kim
  • Patent number: 8107677
    Abstract: A computer implemented method, apparatus, and computer program product for identifying positional data for an object moving in an area of interest. Positional data for each camera in a set of cameras associated with the object is retrieved. The positional data identifies a location of each camera in the set of cameras within the area of interest. The object is within an image capture range of each camera in the set of cameras. Metadata describing video data captured by the set of cameras is analyzed using triangulation analytics and the positional data for the set of cameras to identify a location of the object. The metadata is generated in real time as the video data is captured by the set of cameras. The positional data for the object is identified based on locations of the object over a given time interval. The positional data describes motion of the object.
    Type: Grant
    Filed: February 20, 2008
    Date of Patent: January 31, 2012
    Assignee: International Business Machines Corporation
    Inventors: Robert Lee Angell, David Wayne Cosby, Robert R. Friedlander, James R. Kraemer
  • Patent number: 8107722
    Abstract: A system for performing a three dimensional stereo measurement that uses a sensor for obtaining a sensor image of a scene, and a database for providing first and second reference images of the scene that are a stereo pair of images. At least one processing system is responsive to an output of the sensor and in communication with the database. The processing system registers the sensor image with the first reference image, and also selects a point of interest from one of the sensor image and the first reference image. The processing system performs a stereo point measurement from the selected point of interest and the first reference image to determine a point in the second reference image that represents a stereo mate of the selected point in the first reference image.
    Type: Grant
    Filed: August 9, 2010
    Date of Patent: January 31, 2012
    Assignee: The Boeing Company
    Inventor: Lawrence A. Oldroyd
  • Patent number: 8107684
    Abstract: The subject of the invention is a method for geolocalization of one or more stationary targets from an aircraft by means of a passive optronic sensor. The sensor acquires at least one image I1 containing the target P from a position C1 of the aircraft and an image I2 containing the target P from a position C2 of the aircraft. The images I1 and I2 have an area of overlap. The overlap area has at least one target P identified which is common to the two images I1 and I2. The position of each target P is determined in each of the two images. The distance d is calculated between each target P and a point C, situated for example in the vicinity of C1 and C2, as a function of the angle θ1 between a reference direction and the line of sight of the image I1, the angle θ2 between the same reference direction and the line of sight of the image I2, and of the position of each target P in the image I1 and in the image I2.
    Type: Grant
    Filed: February 7, 2007
    Date of Patent: January 31, 2012
    Assignee: Thales
    Inventors: Gilles Bridenne, Michel Prenat
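The two-angle localization reduces to intersecting two bearing rays. A planar sketch, assuming both angles are measured from a common +x reference direction (the full method works with lines of sight in 3D):

```python
import math

def locate(c1, theta1, c2, theta2):
    # Each observation defines a ray from an aircraft position along
    # the measured bearing; the target P is the ray intersection.
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    # Solve c1 + t*d1 = c2 + s*d2 for t (2x2 linear system, Cramer).
    det = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    rx, ry = c2[0] - c1[0], c2[1] - c1[1]
    t = (rx * (-d2[1]) - ry * (-d2[0])) / det
    return (c1[0] + t * d1[0], c1[1] + t * d1[1])
```

The distance d to the target is then just the norm of P minus the chosen reference point C.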
  • Patent number: 8107715
    Abstract: The invention relates to a device for detecting a sample in a longitudinal sample container (10), comprising a sample container holder (30) for holding the sample container in a housing. Said sample container holder comprises a side viewing window in the region of at least one longitudinal side of the sample container, and a front viewing window (35) in the region at least one front side of the sample container. Said device also comprises a first illumination arrangement for illuminating the sample container through the front viewing window, a second illumination arrangement for illuminating the sample container through the side viewing window, and an imaging photodetector for detecting a first image of the sample illuminated by means of the first illumination arrangement, and a second image of the sample illuminated by means of the second illumination arrangement. Said detection takes place through the side viewing window.
    Type: Grant
    Filed: April 24, 2007
    Date of Patent: January 31, 2012
    Assignee: Sartorius Stedim Biotech GmbH
    Inventors: Reinhard Baumfalk, Oscar-Werner Reif, Florian Wurm, Maria de Jesus, Martin Jordan, Matthieu Stettler, Stefan Obermann
  • Patent number: 8103126
    Abstract: A method of presenting information, capable of displaying an image including one or more objects being in the vicinity of the ground, the method including the steps of: acquiring viewpoint information; acquiring visual line information; acquiring posture information; acquiring additional information related to the object position information; calculating horizon line information in the image; determining a reference line on the image on the basis of the horizon line information and the posture information; calculating distance information from the viewpoint position to the object; determining a display attribute of the additional information including a display mode of the additional information and a display position of the additional information in the image with respect to the reference line; and presenting the additional information on the basis of the display mode so as to reveal a relationship between the additional information and the object when displaying the image on the display unit.
    Type: Grant
    Filed: March 27, 2009
    Date of Patent: January 24, 2012
    Assignee: Sony Corporation
    Inventor: Shunsuke Hayashi
  • Patent number: 8098960
    Abstract: An image processing apparatus includes: a table generation unit that generates a table in which a coefficient set including predetermined weighting coefficients and pixels contained in a resolution converted image are related to each other on the basis of a size of an input image and a size of a resolution converted image; a coefficient selecting unit that selects a coefficient set to be applied for a calculation of a pixel value in the resolution converted image out of plural coefficient sets on the basis of a table generated by the table generation unit; and a pixel value calculating unit that calculates pixel values to be used in the resolution converted image resulting from the resolution conversion of the input image on the basis of the coefficient set selected by the coefficient selecting unit and plural pixel values contained in the input image.
    Type: Grant
    Filed: April 23, 2007
    Date of Patent: January 17, 2012
    Assignee: Fuji Xerox Co., Ltd.
    Inventor: Kanya Ishizaka
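The table-driven scheme can be illustrated in 1-D with linear-interpolation weights; the patent's coefficient sets are more general, so this is a minimal instance of the idea, not the claimed apparatus.

```python
def build_table(n_in, n_out):
    # For each output pixel, precompute the source index plus the pair
    # of linear-interpolation weights (a simple coefficient set).
    table = []
    for j in range(n_out):
        pos = j * (n_in - 1) / (n_out - 1) if n_out > 1 else 0.0
        i = min(int(pos), n_in - 2)
        f = pos - i
        table.append((i, (1.0 - f, f)))
    return table

def resize_row(row, table):
    # Apply the selected coefficient set to compute each output pixel.
    return [w0 * row[i] + w1 * row[i + 1] for (i, (w0, w1)) in table]
```

Because the table depends only on the input and output sizes, it is built once and reused for every row of the image, which is the point of the table-generation unit.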
  • Patent number: 8098891
    Abstract: Systems and methods are disclosed to perform multi-human 3D tracking with a plurality of cameras. At each view, a detection module receives the camera output and provides 2D human detection candidates. A plurality of 2D tracking modules are connected to these detection modules, each 2D tracking module managing 2D tracking independently. A 3D tracking module is connected to the 2D tracking modules to receive promising 2D tracking hypotheses. The 3D tracking module selects trajectories from the 2D tracking modules to generate 3D tracking hypotheses.
    Type: Grant
    Filed: November 24, 2008
    Date of Patent: January 17, 2012
    Assignee: NEC Laboratories America, Inc.
    Inventors: Fengjun Lv, Wei Xu, Yihong Gong
  • Patent number: 8098258
    Abstract: A method for a computer system including receiving a file comprising textures including a first and a second texture, and metadata, wherein the first texture need not have a predetermined geometric relationship to the second texture, wherein the metadata includes identifiers associated with textures and includes adjacency data, associating the first texture with a first location on an object in response to an identifier associated with the first texture, associating the second texture with a second location on the object in response to an identifier associated with the second texture, determining an edge of the first texture is adjacent to an edge of the second texture in response to the adjacency data, and performing a rendering operation with respect to the first and the second locations on the object to determine rendering data in response to the first texture and to the second texture.
    Type: Grant
    Filed: July 18, 2008
    Date of Patent: January 17, 2012
    Assignee: Disney Enterprises, Inc.
    Inventors: Brent D. Burley, J. Dylan Lacewell
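The file layout described above — textures keyed by identifier, with adjacency carried in metadata rather than in any geometric arrangement — can be sketched as a small data structure. The layout and field names here are hypothetical illustrations, not the patented file format.

```python
# Hypothetical in-file layout: textures keyed by identifier, plus
# metadata giving per-texture placement and edge adjacency. No
# geometric relationship between textures is implied by the file.
tex_file = {
    "textures": {"tex_a": [[1, 2], [3, 4]], "tex_b": [[5, 6], [7, 8]]},
    "metadata": {
        "placement": {"tex_a": "hood", "tex_b": "fender"},
        "adjacency": [("tex_a", "right", "tex_b", "left")],
    },
}

def adjacent_edges(f, tex):
    """Edges of `tex` that the metadata declares adjacent to an edge
    of another texture, so rendering can filter across the seam."""
    return [(e1, other, e2)
            for t1, e1, other, e2 in f["metadata"]["adjacency"]
            if t1 == tex]
```

A renderer would consult `adjacent_edges` when a filter kernel reaches a texture border, instead of relying on textures being laid out side by side in an atlas.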
  • Patent number: 8094148
    Abstract: A texture processing apparatus includes a CG data acquisition unit acquiring computer graphics (CG) data including CG model data, camera data, light data, texture data items, and a preset emphasis parameter for texture mapping processing, the CG model data, the camera data and the light data composing data for rendering a CG image, the texture data items being acquired or produced under different conditions, a calculation unit calculating, using the CG data, an emphasis texture processing condition corresponding to the preset emphasis parameter, the emphasis texture processing condition being used to perform texture mapping processing on a CG model, an extraction unit extracting a particular texture data item from the acquired texture data items in accordance with the texture processing condition, and a processing unit performing emphasis processing on the particular texture data item in accordance with the preset emphasis parameter to obtain an emphasized texture data item.
    Type: Grant
    Filed: March 18, 2008
    Date of Patent: January 10, 2012
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Masahiro Sekine, Yasunobu Yamauchi, Isao Mihara
  • Patent number: 8090160
    Abstract: A novel method and system for 3D-aided 2D face recognition under large pose and illumination variations is disclosed. The method and system includes enrolling a face of a subject into a gallery database using raw 3D data. The method also includes verifying and/or identifying a target face from data produced by a 2D imaging or scanning device. A statistically derived annotated face model is fitted using a subdivision-based deformable model framework to the raw 3D data. The annotated face model is capable of being smoothly deformed into any face so it acts as a universal facial template. During authentication or identification, only a single 2D image is required. The subject specific fitted annotated face model from the gallery is used to lift a texture of a face from a 2D probe image, and a bidirectional relighting algorithm is employed to change the illumination of the gallery texture to match that of the probe.
    Type: Grant
    Filed: October 13, 2008
    Date of Patent: January 3, 2012
    Assignee: The University of Houston System
    Inventors: Ioannis A. Kakadiaris, George Toderici, Theoharis Theoharis, Georgios Passalis
  • Patent number: 8089479
    Abstract: A method of associating a computer generated camera with an object in a three-dimensional computer generated space. The method receives a command to associate the camera with an object in the simulated space. Based on the command the method determines a path for moving the camera to a position near the object and aiming the camera at the object. The method creates a video from the simulated camera's perspective of the three-dimensional simulated space.
    Type: Grant
    Filed: April 11, 2008
    Date of Patent: January 3, 2012
    Assignee: Apple Inc.
    Inventors: Sidhartha Deb, Gregory Niles, Stephen Sheeler, Guido Hucking
  • Patent number: 8090031
    Abstract: A method for use in video compression is disclosed. In particular, the claimed invention relates to a method of more efficient fractional-pixel interpolation in two steps by a fixed filter (240) and an adaptive filter (250) for fractional-pixel motion compensation.
    Type: Grant
    Filed: October 5, 2007
    Date of Patent: January 3, 2012
    Assignee: Hong Kong Applied Science and Technology Research Institute Company Limited
    Inventors: Hoi Ming Wong, Yan Jenny Huo
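The two-step interpolation described above can be sketched in one dimension: a fixed filter produces half-pel samples, and a second stage refines to quarter-pel positions. This sketch uses the well-known H.264-style 6-tap filter for the fixed stage and a plain averaging stage standing in for the patent's adaptive filter, whose coefficients would be derived per frame.

```python
def half_pel(samples, i):
    """Half-pel value between samples[i] and samples[i+1] using the
    fixed 6-tap filter (1,-5,20,20,-5,1)/32, clamping at borders."""
    taps = (1, -5, 20, 20, -5, 1)
    n = len(samples)
    acc = 0
    for k, t in enumerate(taps):
        j = min(max(i - 2 + k, 0), n - 1)  # clamp index at borders
        acc += t * samples[j]
    return (acc + 16) >> 5                  # round, divide by 32

def quarter_pel(samples, i):
    """Second-stage quarter-pel value between samples[i] and the
    half-pel position; a fixed average here stands in for the
    patent's adaptive filter."""
    return (samples[i] + half_pel(samples, i) + 1) >> 1
```

In the claimed scheme the second filter adapts to the content; only the fixed/adaptive split is illustrated here.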
  • Patent number: 8082120
    Abstract: A method and hand-held scanning apparatus for three-dimensional scanning of an object is described. The hand-held self-referenced scanning apparatus has a light source for illuminating retro-reflective markers, the retro-reflective markers being provided at fixed positions on or around the object, a photogrammetric high-resolution camera, a pattern projector for providing a projected pattern on a surface of the object; at least a pair of basic cameras, the basic cameras cooperating with light sources, the projected pattern and at least a portion of the retro-reflective markers being apparent on the 2D images, a frame for holding all components in position within the hand-held apparatus, the frame having a handle, the frame allowing support and free movement of the scanning apparatus by a user.
    Type: Grant
    Filed: December 2, 2009
    Date of Patent: December 20, 2011
    Assignee: Creaform Inc.
    Inventors: Éric St-Pierre, Pierre-Luc Gagné, Antoine Thomas Caron, Nicolas Beaupré, Dragan Tubic, Patrick Hébert
  • Patent number: 8081195
    Abstract: The present invention relates to a method for increasing operation speed in virtual three-dimensional (3D) applications and an operational method thereof. The method includes the steps of: providing a display frame, which includes a plurality of scan lines; dividing the display frame into at least one first area and a second area; providing a plurality of virtual 3D parameters according to the scan lines; and truncating a preset number of least significant bits (LSBs) of the virtual 3D parameters corresponding to the scan lines in the first area.
    Type: Grant
    Filed: May 16, 2008
    Date of Patent: December 20, 2011
    Assignee: Generalplus Technology Inc.
    Inventor: Yu Cheng Liao
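The LSB-truncation step above is simple to sketch: for scan lines falling in the first (reduced-precision) area, a preset number of least significant bits of each fixed-point virtual-3D parameter is zeroed, while the second area keeps full precision. Names below are illustrative, not from the patent.

```python
def truncate_lsbs(value, n):
    """Zero the n least significant bits of a fixed-point parameter."""
    return value & ~((1 << n) - 1)

def process_frame(params_per_line, first_area_lines, n_lsbs):
    """Truncate virtual-3D parameters only for scan lines in the
    first area; scan lines in the second area are left intact."""
    out = []
    for line, params in enumerate(params_per_line):
        if line in first_area_lines:
            params = [truncate_lsbs(p, n_lsbs) for p in params]
        out.append(params)
    return out
```

Dropping LSBs lets subsequent per-scan-line arithmetic run at lower precision where the display area tolerates it, which is the claimed source of the speedup.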
  • Patent number: 8081841
    Abstract: A system, including a computer system running image processing software, receives an identification of a desired area to be imaged and collected into an oblique-mosaic image. The computer system creates a mathematical model of a virtual camera having a sensor higher in elevation from which the source oblique images were captured and looking down at an oblique angle, the mathematical model having an oblique-mosaic pixel map for the sensor of the desired area encompassing multiple source images. The computer system assigns a surface location to each pixel included in the oblique-mosaic pixel map and selects source oblique images of the geographic coordinates captured at an oblique angle and compass direction similar to the oblique angle and compass direction of the virtual camera. The computer system reprojects at least one source oblique image pixel of the area to be imaged for each pixel included in the oblique-mosaic pixel map to create the oblique-mosaic image.
    Type: Grant
    Filed: January 10, 2011
    Date of Patent: December 20, 2011
    Assignee: Pictometry International Corp.
    Inventors: Stephen Schultz, Frank Giuffrida, Robert Gray
  • Patent number: 8078396
    Abstract: The present invention provides methods and apparatus for generating a continuum of image data sprayed over three-dimensional models. The three-dimensional models can be representative of features captured by the image data and based upon multiple image data sets capturing the features. The image data can be captured at multiple disparate points along another continuum.
    Type: Grant
    Filed: February 21, 2008
    Date of Patent: December 13, 2011
    Inventors: William D. Meadow, Randall A. Gordie, Jr., Matthew Pavelle
  • Patent number: 8077964
    Abstract: A two dimensional/three dimensional (2D/3D) digital acquisition and display device for enabling users to capture 3D information using a single device. In an embodiment, the device has a single movable lens with a sensor. In another embodiment, the device has a single lens with a beam splitter and multiple sensors. In another embodiment, the device has multiple lenses and multiple sensors. In yet another embodiment, the device is a standard digital camera with additional 3D software. In some embodiments, 3D information is generated from 2D information using a depth map generated from the 2D information. In some embodiments, 3D information is acquired directly using the hardware configuration of the camera. The 3D information is then able to be displayed on the device, sent to another device to be displayed or printed.
    Type: Grant
    Filed: March 19, 2007
    Date of Patent: December 13, 2011
    Assignees: Sony Corporation, Sony Electronics Inc.
    Inventors: Alexander Berestov, Chuen-Chien Lee
  • Patent number: 8072448
    Abstract: The present invention relates to annotating images. In an embodiment, the present invention enables users to create annotations corresponding to three-dimensional objects while viewing two-dimensional images. In one embodiment, this is achieved by projecting a selecting object onto a three-dimensional model created from a plurality of two-dimensional images. The selecting object is input by a user while viewing a first image corresponding to a portion of the three-dimensional model. A location corresponding to the projection on the three-dimensional model is determined, and content entered by the user while viewing the first image is associated with the location. The content is stored together with the location information to form an annotation. The annotation can be retrieved and displayed together with other images corresponding to the location.
    Type: Grant
    Filed: January 15, 2008
    Date of Patent: December 6, 2011
    Assignee: Google Inc.
    Inventors: Jiajun Zhu, Daniel Filip, Luc Vincent
  • Patent number: 8064723
    Abstract: The invention provides a method and apparatus for fast volume rendering of 3D ultrasound image, comprising a dividing step, a calculating step, a determining step, a morphological closing operation step and a filling step, wherein each flat grid can be entirely filled by obtaining gray-scale values for all pixels inside the grid through interpolating; for the non-flat grid, the steps of dividing, calculating, determining and filling are performed repeatedly until the non-flat grid is subdivided into atomic grids to calculate gray-scale values for remaining pixels by projection. As the method can be finished in the rear end, no difficulty occurs in the implementation, and no process for being adapted to previous frames is required. Therefore, the method can improve the rendering speed effectively without degrading the quality of image to put 3D ultrasound imaging to the best use.
    Type: Grant
    Filed: October 22, 2007
    Date of Patent: November 22, 2011
    Assignee: Shenzhen Mindray Bio-Medical Electronics Co., Ltd.
    Inventors: Yong Tian, Bin Yao
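The divide/judge/fill recursion above can be sketched with a small 2-D example (my own simplification, not the patented renderer): a grid whose corner gray values agree within a tolerance is judged "flat" and filled entirely by interpolation; otherwise it is subdivided and the steps repeat until the grid is atomic.

```python
def render_grid(img, x0, y0, x1, y1, sample, tol=1.0):
    """Fill the grid [x0,x1] x [y0,y1] of `img`. A 'flat' grid (corner
    gray values within tol of each other) is filled entirely by
    bilinear interpolation; otherwise subdivide until atomic."""
    c00, c10 = sample(x0, y0), sample(x1, y0)
    c01, c11 = sample(x0, y1), sample(x1, y1)
    corners = (c00, c10, c01, c11)
    atomic = x1 - x0 <= 1 and y1 - y0 <= 1
    if max(corners) - min(corners) <= tol or atomic:
        for y in range(y0, y1 + 1):
            fy = (y - y0) / max(y1 - y0, 1)
            for x in range(x0, x1 + 1):
                fx = (x - x0) / max(x1 - x0, 1)
                top = c00 * (1 - fx) + c10 * fx
                bot = c01 * (1 - fx) + c11 * fx
                img[(x, y)] = top * (1 - fy) + bot * fy
        return
    mx, my = (x0 + x1) // 2, (y0 + y1) // 2
    render_grid(img, x0, y0, mx, my, sample, tol)
    render_grid(img, mx, y0, x1, my, sample, tol)
    render_grid(img, x0, my, mx, y1, sample, tol)
    render_grid(img, mx, my, x1, y1, sample, tol)
```

In the patented method, `sample` would be the projection of the 3D ultrasound volume; the saving comes from interpolating whole flat grids instead of projecting every pixel.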
  • Patent number: 8064686
    Abstract: A device for the contactless optical determination of the position of an object. In particular, the present invention provides a method and device for the contactless optical determination of the 3D position of an object wherein an image of the object is generated by means of a camera and the 3D position of the object is calculated from the camera image based on the image information about detected geometrical characteristics. Determination of the 3D position of the object includes determination of the 3D position and the 3D orientation of the object.
    Type: Grant
    Filed: March 24, 2006
    Date of Patent: November 22, 2011
    Assignee: Micro-Epsilon Messtechnik GmbH & Co. KG
    Inventors: Robert Wagner, Rainer Hesse
  • Patent number: 8064685
    Abstract: A method, device, system, and computer program for object recognition of a 3D object of a certain object class using a statistical shape model for recovering 3D shapes from a 2D representation of the 3D object and comparing the recovered 3D shape with known 3D to 2D representations of at least one object of the object class.
    Type: Grant
    Filed: August 11, 2005
    Date of Patent: November 22, 2011
    Assignee: Apple Inc.
    Inventors: Jan Erik Solem, Fredrik Kahl
  • Patent number: 8059914
    Abstract: The present invention provides a method and module for preprocessing ultrasound imaging. The method comprises a calculation step for constructing a multivalue vector field and a smoothing step for smoothing the whole volume data. The method further comprises a judgement step and minification and magnification steps. The module includes a calculation unit, a smoothing unit, a judgement unit, a minification unit and a magnification unit. According to the method for preprocessing ultrasound imaging, speckle noise can be eliminated effectively by calculating a mean value of a plurality of nodes distributed over the surface, so as to implement the smoothing. Therefore, this method is capable of smoothing data without compromising details.
    Type: Grant
    Filed: October 9, 2007
    Date of Patent: November 15, 2011
    Assignee: Shenzhen Mindray Bio-Medical Electronics Co., Ltd.
    Inventors: Yong Tian, Bin Yao, Qinjun Hu
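The smoothing idea above — suppressing speckle by averaging over a plurality of nodes distributed on the surface — can be sketched with a toy neighbourhood-mean filter. The graph representation is my own; the patent's multivalue vector field and judgement/scaling steps are not modelled.

```python
def smooth_nodes(node_values, neighbors):
    """Replace each surface node's value by the mean of itself and its
    neighbouring nodes, suppressing speckle without a global blur."""
    out = {}
    for node, value in node_values.items():
        vals = [value] + [node_values[n] for n in neighbors[node]]
        out[node] = sum(vals) / len(vals)
    return out
```

Because each output value is a local mean, isolated speckle spikes are damped while broad surface structure survives, which is the property the abstract claims.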
  • Patent number: 8055061
    Abstract: A method forms a region image extracting a region of a physical object which generates three-dimensional model information from a video image of physical space. Then, from position and orientation information for the physical object and from the region image, a primitive virtual object of a size to encompass the region image is generated. From the primitive virtual object a virtual object having a shape according to the region image is generated, and the three-dimensional model information is generated as three-dimensional model information representing the physical object.
    Type: Grant
    Filed: February 22, 2008
    Date of Patent: November 8, 2011
    Assignee: Canon Kabushiki Kaisha
    Inventor: Yasuo Katano
  • Patent number: 8045806
    Abstract: A method and a device determine material interfaces in a test object. The novel method generates three-dimensional image data of the test object or uses already existing three-dimensional image data of the test object. Image values of the image data are, or were, obtained by invasive radiation. An evaluation line for evaluating the image data relative to the test object is determined, a location of a material interface of the test object is determined by evaluating the image data of image values along the evaluation line so that the value of the first partial derivative of the image values in the direction of the evaluation line has a local maximum at the location of the material interface.
    Type: Grant
    Filed: November 19, 2007
    Date of Patent: October 25, 2011
    Assignee: Carl Zeiss Industrielle Messtechnik GmbH
    Inventors: Hubert Lettenbauer, Andreas Lotze, Steffen Kunzmann
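The criterion above — the material interface lies where the first partial derivative of the image values along the evaluation line has a local maximum — can be sketched with a 1-D finite-difference example (helper names are mine):

```python
def interface_location(values):
    """Locate a material interface along an evaluation line: return
    the index of the largest local maximum of the first
    (central-difference) derivative of the image values."""
    n = len(values)
    deriv = [0.0] * n
    for i in range(1, n - 1):
        deriv[i] = (values[i + 1] - values[i - 1]) / 2.0
    best, best_i = None, None
    for i in range(1, n - 1):
        if deriv[i] >= deriv[i - 1] and deriv[i] >= deriv[i + 1]:
            if best is None or deriv[i] > best:
                best, best_i = deriv[i], i
    return best_i
```

On CT-style data the image values rise steeply where the radiation crosses from one material to another, so the derivative peak marks the interface.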
  • Patent number: 8040355
    Abstract: Textures are transferred between different object models using a point cloud. In a first phase, a point cloud in 3-D space is created to represent a texture map as applied to a first, or “source,” object model. In a second phase, a value for a target texel of a texture map associated with a second, or “target,” object model, is determined by identifying the 3-D location on a surface defined by the target object model that maps to the location of the target texel and assigning a value based on the nearest point (or points) to that location in the 3-D point cloud. To the extent that differences between the source and target object models are minor, the texture transfer can be accomplished without loss of information or manual cleanup.
    Type: Grant
    Filed: July 22, 2008
    Date of Patent: October 18, 2011
    Assignee: Disney Enterprises, Inc.
    Inventors: Brent Burley, Charles Tappan, Daniel Teece
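The two phases above can be sketched directly (function names and surface callables are illustrative, not Disney's API): phase one bakes the source texture into a 3-D point cloud, phase two looks up, for each target texel's 3-D location, the nearest cloud point and copies its value.

```python
def build_point_cloud(source_surface, source_texture):
    """Phase 1: one 3-D point per source texel, carrying its value.
    `source_surface` maps a texel (u, v) to a 3-D location."""
    return [(source_surface(u, v), val)
            for (u, v), val in source_texture.items()]

def transfer(target_surface, target_texels, cloud):
    """Phase 2: for each target texel, find its 3-D location on the
    target surface and copy the value of the nearest cloud point
    (brute force here; a spatial index would be used in practice)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    out = {}
    for uv in target_texels:
        p = target_surface(*uv)
        _, val = min(cloud, key=lambda pv: dist2(pv[0], p))
        out[uv] = val
    return out
```

When the source and target models nearly coincide in 3-D space, every target texel lands next to a baked point, which is why the abstract says the transfer needs no manual cleanup in that case.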