Mapping 2-d Image Onto A 3-d Surface Patents (Class 382/285)
  • Patent number: 10271034
    Abstract: In a method of coding video data, a first depth value of a depth look up table (DLT) is determined, where the first depth value is associated with a first pixel of the video data, and a second depth value of the DLT is determined, where the second depth value is associated with a second pixel of the video data. Coding of the second depth value relative to the first depth value is performed during coding of the DLT.
    Type: Grant
    Filed: March 4, 2014
    Date of Patent: April 23, 2019
    Assignee: Qualcomm Incorporated
    Inventors: Li Zhang, Ying Chen, Marta Karczewicz
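The relative coding of DLT entries described in this abstract can be sketched as plain delta coding of a sorted depth table. This is a minimal illustration only, not the claimed codec; the function names and the sorted-table assumption are mine:

```python
def encode_dlt(depth_values):
    """Delta-encode a depth look-up table: the first entry is coded
    absolutely, every later entry relative to the previous one."""
    values = sorted(depth_values)
    return [values[0]] + [b - a for a, b in zip(values, values[1:])]

def decode_dlt(deltas):
    """Invert the delta coding by cumulative summation."""
    values = []
    total = 0
    for d in deltas:
        total += d
        values.append(total)
    return values
```

Because a DLT is monotonically increasing once sorted, the deltas are small non-negative numbers, which is what makes relative coding cheaper than coding each depth value outright.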
  • Patent number: 10269148
    Abstract: A system provides image undistortion in 3D reconstruction. The system receives an image produced by a sensor, and determines whether correction values are cached for the sensor, where each correction value is configured to place a corresponding pixel into a corrected location. When there are no cached correction values, the system calculates correction values for pixels in the image, generates a correction grid for the image including vertices corresponding to texture coordinates from the image, where each vertex in the correction grid includes a corresponding correction value, partitions the correction grid into partitioned grids, and caches the partitioned grids. The system then renders the image using the partitioned grids.
    Type: Grant
    Filed: May 1, 2017
    Date of Patent: April 23, 2019
    Assignee: Lockheed Martin Corporation
    Inventor: Michael Jones
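The cache-or-compute flow in this abstract (check for cached correction values per sensor, compute them once, reuse afterwards) can be sketched as follows. The radial correction model and coefficient here are hypothetical placeholders, not the patented correction:

```python
_correction_cache = {}  # sensor_id -> {pixel coordinate: corrected location}

def radial_correction(x, y, k1=-0.1):
    """Toy radial model placing a pixel into a corrected location;
    k1 is a made-up distortion coefficient for illustration."""
    scale = 1.0 + k1 * (x * x + y * y)
    return (x * scale, y * scale)

def get_correction_grid(sensor_id, coords):
    """Return correction values for a sensor, computing and caching
    them on first use so later frames skip the computation."""
    if sensor_id not in _correction_cache:
        _correction_cache[sensor_id] = {p: radial_correction(*p) for p in coords}
    return _correction_cache[sensor_id]
```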
  • Patent number: 10269147
    Abstract: A system provides camera position and point cloud estimation for 3D reconstruction. The system receives images and attempts existing structure integration to integrate the images into an existing reconstruction under a sequential image reception assumption. If existing structure integration fails, the system attempts dictionary overlap detection by accessing a dictionary database and searching to find a closest matching frame in the existing reconstruction. If overlaps are found, the system matches the images with the overlaps to determine a highest probability frame from the overlaps, and attempts existing structure integration again. If overlaps are not found or existing structure integration fails again, the system attempts bootstrapping based on the images. If any of existing structure integration, dictionary overlap detection, or bootstrapping succeeds, and if multiple disparate tracks have come to exist, the system attempts reconstructed track merging.
    Type: Grant
    Filed: May 1, 2017
    Date of Patent: April 23, 2019
    Assignee: Lockheed Martin Corporation
    Inventors: Michael Jones, Adam James Dickin
  • Patent number: 10262414
    Abstract: Systems, methods, and computer program products for classifying a brain are disclosed. An embodiment method includes processing image data to generate segmented image data of a brain cortex. The method further includes generating a statistical analysis of the brain based on a three dimensional (3D) model of the brain cortex generated from the segmented image data. The method further includes using the statistical analysis to classify the brain cortex and to identify the brain as being associated with a particular neurological condition. According to a further embodiment, generating the 3D model of the brain further includes registering a 3D volume associated with the model with a corresponding reference volume and generating a 3D mesh associated with the registered 3D volume. The method further includes generating the statistical analysis by analyzing individual mesh nodes of the registered 3D mesh based on a spherical harmonic shape analysis of the 3D model.
    Type: Grant
    Filed: July 29, 2016
    Date of Patent: April 16, 2019
    Assignee: University of Louisville Research Foundation, Inc.
    Inventors: Matthew J. Nitzken, Ayman S. El-Baz, Manuel F. Casanova
  • Patent number: 10235795
    Abstract: A method of compressing a texture image for use in generating a 360 degree panoramic video is provided. The method includes the steps of: receiving an original texture image for a sphere model, wherein the original texture image is an image with a rectangular shape and includes a plurality of pixel lines and each of the pixel lines has a corresponding spherical position on the sphere model; determining a compression ratio of each of the pixel lines according to the corresponding spherical position of each of the pixel lines; and compressing the pixel lines with the compression ratios corresponding thereto to generate a compressed texture image with a non-rectangular shape, wherein the compressed texture image is further mapped to the sphere model to generate the 360 degree panoramic video.
    Type: Grant
    Filed: May 19, 2017
    Date of Patent: March 19, 2019
    Inventors: Yiting Yi, Gong Chen, Rong Xie
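A per-line compression ratio driven by spherical position, as this abstract describes, can be sketched with a cosine weighting: pixel lines near the poles of an equirectangular texture cover a smaller circumference on the sphere and can keep fewer pixels. The cosine rule is an illustrative choice of ratio, not the patented one:

```python
import math

def line_compression_ratios(height):
    """One ratio per pixel line of an equirectangular texture,
    proportional to the circumference at that line's latitude."""
    ratios = []
    for row in range(height):
        # latitude in (-pi/2, pi/2); row 0 sits at the north pole
        lat = math.pi * (0.5 - (row + 0.5) / height)
        ratios.append(math.cos(lat))
    return ratios

def compressed_width(width, ratio):
    """Width of a pixel line after compression (at least one pixel),
    so the compressed texture ends up non-rectangular."""
    return max(1, round(width * ratio))
```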
  • Patent number: 10229542
    Abstract: Techniques are provided for 3D analysis of a scene including detection, segmentation and registration of objects within the scene. The analysis results may be used to implement augmented reality operations including removal and insertion of objects and the generation of blueprints. An example method may include receiving 3D image frames of the scene, each frame associated with a pose of a depth camera, and creating a 3D reconstruction of the scene based on depth pixels that are projected and accumulated into a global coordinate system. The method may also include detecting objects, and associated locations within the scene, based on the 3D reconstruction, the camera pose and the image frames. The method may further include segmenting the detected objects into points of the 3D reconstruction corresponding to contours of the object and registering the segmented objects to 3D models of the objects to determine their alignment.
    Type: Grant
    Filed: February 18, 2016
    Date of Patent: March 12, 2019
    Assignee: Intel Corporation
    Inventors: Gershom Kutliroff, Yaron Yanai, Shahar Fleishman, Mark Kliger
  • Patent number: 10210668
    Abstract: Techniques are described for generating a three dimensional (3D) object from complete or partial 3D data. Image data defining or partially defining a 3D object may be obtained. Using that data, a common plane facing surface of the 3D object may be defined that is substantially parallel to a common plane (e.g., ground plane). One or more edges of the common plane facing surface may be determined, and extended to the common plane. A bottom surface, which is bound by the one or more extended edges and is parallel with the common plane, may be generated based on the common-plane facing surface. In some aspects, defining the common plane facing surface may include segmenting the image data into a plurality of polygons, orienting at least one of the polygons to face the common plane, and discarding occluding polygons.
    Type: Grant
    Filed: February 24, 2016
    Date of Patent: February 19, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kristofer N. Iverson, Emmett Lalish, Gheorghe Marius Gheorghescu, Jan Jakubovic, Martin Kusnier, Vladimir Sisolak, Tibor Szaszi
  • Patent number: 10198858
    Abstract: A method based on Structure from Motion for processing a plurality of sparse images acquired by one or more acquisition devices to generate a sparse 3D point cloud and a plurality of internal and external parameters of the acquisition devices includes the steps of collecting the images; extracting keypoints therefrom and generating keypoint descriptors; organizing the images in a proximity graph; performing pairwise image matching and generating tracks connecting keypoints according to maximum proximity between keypoints; performing an autocalibration between image clusters to extract internal and external parameters of the acquisition devices, wherein calibration groups are defined that contain a plurality of image clusters and wherein a clustering algorithm iteratively merges the clusters into a model expressed in a common local reference system, starting from clusters belonging to the same calibration group; and performing a Euclidean reconstruction of the object as a sparse 3D point cloud based on the extracted parameters.
    Type: Grant
    Filed: March 27, 2017
    Date of Patent: February 5, 2019
    Inventors: Yash Singh, Roberto Toldo, Luca Magri, Simone Fantoni, Andrea Fusiello
  • Patent number: 10115035
    Abstract: A vision system is configured to dynamically inspect an object in a field of view. This includes capturing, using a camera, three-dimensional (3D) point cloud data of the field of view and transforming each of the points of the 3D point cloud data into a plurality of tangential surface vectors. Surface normal vectors are determined for each of the points of the 3D point cloud data based upon the plurality of tangential surface vectors. Distribution peaks in the surface normal vectors are detected employing a unit sphere mesh. Parallel planes are separated using the distance distribution peaks. A radially bounded nearest neighbor strategy combined with a process of nearest neighbor searching based upon cell division is executed to segment a planar patch. A planar surface is identified based upon the segmented planar patch.
    Type: Grant
    Filed: January 8, 2015
    Date of Patent: October 30, 2018
    Assignees: SungKyunKwan University Foundation for Corporation Collaboration, GM Global Technology Operations LLC
    Inventors: Sukhan Lee, Hung Huu Nguyen, Jaewoong Kim, Jianying Shi
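The first step in this abstract, turning point cloud samples into surface normal vectors via tangential vectors, is the standard cross-product construction for an organized point cloud. A minimal sketch (neighbor selection and the later unit-sphere-mesh peak detection are omitted):

```python
def subtract(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def surface_normal(p, p_right, p_down):
    """Estimate the surface normal at point p from two tangential
    vectors to neighboring points of an organized point cloud."""
    t1 = subtract(p_right, p)  # tangential vector along the row
    t2 = subtract(p_down, p)   # tangential vector along the column
    n = cross(t1, t2)
    length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    return tuple(c / length for c in n)
```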
  • Patent number: 10086955
    Abstract: A camera pose estimation system is provided for estimating the position of a camera within an environment. The system may be configured to receive a 2D image captured by a camera within the environment, and interpret metadata of the 2D image to identify an estimated position of the camera. A synthetic 2D image from a 3D model of the environment may be rendered by a synthetic camera within the model at the estimated position. A correlation between the 2D image and synthetic 2D image may identify a 2D point of correlation, and the system may project a line from the synthetic camera through the 2D point on the synthetic 2D image rendered in an image plane of the synthetic camera such that the line intersects the 3D model at a corresponding 3D point therein. A refined position may be determined based on the 2D point and corresponding 3D point.
    Type: Grant
    Filed: October 23, 2015
    Date of Patent: October 2, 2018
    Assignee: The Boeing Company
    Inventor: John H. Aughey
  • Patent number: 10003769
    Abstract: A video telephony system includes a first image display apparatus and a second image display apparatus which makes a video call with the first image display apparatus. The first image display apparatus transmits a first image including a photographed image of a first user to the second image display apparatus, and receives a second image in which a background of a photographed image of a second user is substituted with a virtual background image, from the second image display apparatus. The second image display apparatus changes the virtual background image of the second image according to a change in a location of the first user and transmits the second image.
    Type: Grant
    Filed: July 18, 2016
    Date of Patent: June 19, 2018
    Inventors: Eun-seo Kim, Sang-yoon Kim
  • Patent number: 9979961
    Abstract: The invention relates to an image processing device and an image processing method capable of collectively encoding a color image and a depth image of different resolutions. The image processing device comprises an image frame converting unit that converts the resolution of the depth image to the same resolution as that of the color image, and an additional information generating unit that generates additional information including information to specify the color image or the depth image, image frame conversion information indicating an area of a black image included in the resolution-converted depth image, and resolution information to distinguish whether the resolutions of the color image and the depth image are different from each other. This technology may be applied to the image processing of images of multiple viewpoints.
    Type: Grant
    Filed: March 9, 2012
    Date of Patent: May 22, 2018
    Assignee: Sony Corporation
    Inventors: Yoshitomo Takahashi, Shinobu Hattori
  • Patent number: 9967525
    Abstract: A monitoring camera apparatus, having a camera for recording a monitoring region with at least one object from a recording position of the camera. The apparatus also includes an actuator for changing the recording position of the camera and a control device for driving the actuator and the camera. A first image is recordable in a first recording position of the camera, and a second image is recordable in a second recording position of the camera. The first and second images show at least one identical subsection with the at least one object of the monitoring region as a common subsection. An evaluation device for evaluating the images of the camera is configured to determine depth information concerning the at least one object in the common subsection from the first and second images.
    Type: Grant
    Filed: December 16, 2014
    Date of Patent: May 8, 2018
    Assignee: Robert Bosch GmbH
    Inventors: Wolfgang Niehsen, Dieter Joecker, Michael Meyer
  • Patent number: 9875545
    Abstract: Provided is a camera pose estimation apparatus that estimates an initial camera pose using one of an input depth image and an input color image, and refines the initial camera pose using the other image. When the initial camera pose is estimated using the input depth image, the radius of a first area, in which color information is matched for refinement, can be adaptively set according to the distribution of the depth value of at least one first point that is subject to matching.
    Type: Grant
    Filed: May 15, 2014
    Date of Patent: January 23, 2018
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Seon Min Rhee, Hyong Euk Lee, Yong Beom Lee
  • Patent number: 9868256
    Abstract: A three-dimensional printing system includes a light source unit, at least one image-capturing module, a processing unit and a printing device. The image-capturing module includes an image-capturing unit and a focus-adjusting lens group. The processing unit controls a zooming lens to change a shooting focal length and controls the image-capturing unit to capture multiple images of the object to be measured under the different shooting focal lengths, wherein each of the images includes focused and unfocused local images, and the processing unit calculates three-dimensional profile data of the object to be measured according to the focused local images and the shooting focal lengths corresponding to the focused local images. The printing device prints multiple cross-sectional profiles corresponding to the object to be measured. Further, a method for three-dimensional printing is also provided.
    Type: Grant
    Filed: November 11, 2013
    Date of Patent: January 16, 2018
    Inventor: Ling-Yuan Tseng
  • Patent number: 9858638
    Abstract: A set of spherical harmonics is defined which is an operationally optimal small finite subset of the infinite number of spherical harmonics allowed to exist mathematically. The composition of the subset differs depending on its position on the virtual hemisphere. The subsets are further divided into small spherical tesserae whose dimensions vary depending on the distance from the hemispherical center. The images of the outside visual scenes are projected on the flat surface of the webcam and from there are read and recalculated programmatically as if the images had been projected on the hemisphere. Rotational invariants are then computed in the smallest tesserae using numerical integration, and then invariants from neighboring tesserae are added to compute the rotational invariant of their union. Every computed invariant is checked against the library and stored there if there is no match. The rotational invariants are solely used for visual recognition and classification and operational decision making.
    Type: Grant
    Filed: September 20, 2017
    Date of Patent: January 2, 2018
    Inventor: Alex Simon Blaivas
  • Patent number: 9811880
    Abstract: An apparatus, system, and method for increasing points in a point cloud. In one illustrative embodiment, a two-dimensional image of a scene and the point cloud of the scene are received. At least a portion of the points in the point cloud are mapped to the two-dimensional image to form transformed points. A fused data array is created using the two-dimensional image and the transformed points. New points for the point cloud are identified using the fused data array. The new points are added to the point cloud to form a new point cloud.
    Type: Grant
    Filed: November 9, 2012
    Date of Patent: November 7, 2017
    Inventors: Terrell Nathan Mundhenk, Yuri Owechko, Kyungnam Kim
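Mapping point cloud points onto a two-dimensional image, the core of this abstract, can be sketched with a pinhole projection followed by a per-pixel fusion that keeps the nearest depth. The camera model and the dictionary-based "fused data array" layout are my simplifications:

```python
def project_points(points, focal, cx, cy):
    """Project 3D points in camera coordinates (z > 0) onto the image
    plane with a pinhole model; yields (u, v, depth) per point."""
    projected = []
    for x, y, z in points:
        if z <= 0:
            continue  # behind the camera
        projected.append((focal * x / z + cx, focal * y / z + cy, z))
    return projected

def fuse(image_shape, projected):
    """Build a sparse per-pixel depth map from the transformed points,
    keeping the nearest depth when several points land on one pixel."""
    h, w = image_shape
    fused = {}
    for u, v, z in projected:
        px, py = int(round(u)), int(round(v))
        if 0 <= px < w and 0 <= py < h:
            key = (py, px)
            if key not in fused or z < fused[key]:
                fused[key] = z
    return fused
```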
  • Patent number: 9800896
    Abstract: A method and apparatus for depth lookup table (DLT) signaling in a three-dimensional and multi-view coding system are disclosed. According to the present invention, if the pictures contain only texture data, no DLT information is incorporated in the picture parameter set (PPS) corresponding to the pictures. On the other hand, if the pictures contain depth data, the DLT associated with the pictures is determined. If a previous DLT required for predicting the DLT exists, the DLT will be predicted based on the previous DLT. Syntax related to the DLT is included in the PPS. Furthermore, first bit-depth information related to first depth samples of the DLT is also included in the PPS and the first bit-depth information is consistent with second bit-depth information signaled in a sequence level data for second depth samples of a sequence containing the pictures.
    Type: Grant
    Filed: March 17, 2015
    Date of Patent: October 24, 2017
    Inventors: Kai Zhang, Jicheng An, Xianguo Zhang, Han Huang
  • Patent number: 9646264
    Abstract: An input time-series is decomposed into a set of constituent frequencies. For each constituent frequency in a subset of the set of constituent frequencies, a corresponding forecasting model is selected in a subset from a set of forecasting models. From a set of component forecasts produced by the subset of forecasting models, a subset of component forecasts is selected. A component forecast in the subset of component forecasts is selected according to a component forecast selection condition. The subset of component forecasts is output to revise the forecast selection condition. A revised forecast selection condition increases a relevance of a future subset of component forecasts.
    Type: Grant
    Filed: February 25, 2015
    Date of Patent: May 9, 2017
    Inventor: Aaron K. Baughman
  • Patent number: 9607387
    Abstract: A method for optimizing fiducial marker and camera positions/orientations, realized by simulating camera and fiducial positions and a pose estimation algorithm to find the best possible marker/camera placement, comprises the steps of: acquiring mesh data representing possible camera positions and feasible orientation boundaries of cameras in the environment of the tracked object; acquiring mesh data representing possible active marker positions and feasible marker orientation placements on the tracked object; acquiring pose data representing possible poses of the tracked object under working conditions; initializing the control parameter for camera placement; creating initial solution strings for camera placement; solving the marker placement problem for the current camera placement; evaluating the quality of the current LED and camera placement, taking pose coverage, pose accuracy, the number of placed markers, the number of placed cameras, etc. into account; and determining if a stopping criterion is satisfied.
    Type: Grant
    Filed: April 12, 2013
    Date of Patent: March 28, 2017
    Inventors: Erkan Okuyan, Ozgur Yilmaz
  • Patent number: 9595106
    Abstract: A calibration apparatus calibrating a projection apparatus projecting a projection image includes a captured image acquiring unit acquiring a captured image at each change of at least one of a relative position between the projection apparatus and a plane body and a relative posture between the projection apparatus and the plane body, a reflection position estimating unit acquiring reflection positions at each change of at least one of a position of the plane body and a posture of the plane body using a predetermined correspondence relationship between a pixel of the captured image and a position on the plane body, a plane body position posture estimating unit estimating positions and postures of the plane body so as to minimize a degree of misfit of the reflection positions from a straight line of the reflection positions, and a projection light beam identifying unit identifying an equation of the light beam.
    Type: Grant
    Filed: November 13, 2015
    Date of Patent: March 14, 2017
    Assignee: Ricoh Company, Ltd.
    Inventor: Takayuki Hara
  • Patent number: 9525862
    Abstract: A method for estimating a camera motion and for determining a three-dimensional model of an environment is provided that includes the steps of: providing intrinsic parameters of a camera; providing a set of reference two-dimensional imaged points captured by the camera at a first camera pose and reference depth samples; determining a three-dimensional model of the environment; providing a set of current two-dimensional imaged points captured by the camera at a second camera pose and current depth samples associated with the set of current two-dimensional imaged points and determining a current three-dimensional model; estimating a camera motion between the first camera pose and the second camera pose; determining a similarity measure between the three-dimensional model and the current three-dimensional model, and if it is determined that the similarity measure meets a first condition, updating the three-dimensional model of the environment and adding the set of current two-dimensional imaged points to the set of reference two-dimensional imaged points.
    Type: Grant
    Filed: August 31, 2011
    Date of Patent: December 20, 2016
    Assignee: Metaio GmbH
    Inventors: Selim Benhimane, Sebastian Lieberknecht, Andrea Huber
  • Patent number: 9489724
    Abstract: A method of displaying. The method includes using a system including three-dimensional stereoscopic projection equipment to project an image onto a complex surface of a physical object disposed in an area. The complex surface includes at least one curve or angle. The system also includes a device worn by a user or disposed on a mobile device. The method further includes warping the image based on a geometry of the complex surface, tracking a position and orientation of the user or the mobile device, and further warping the image based on the position and orientation.
    Type: Grant
    Filed: March 31, 2014
    Date of Patent: November 8, 2016
    Assignee: The Boeing Company
    Inventors: Paul Robert Davies, Steven Matthew Gunther
  • Patent number: 9483829
    Abstract: A structure for determining a plane in a depth image includes dividing a portion of a depth image into a plurality of areas, fitting a two-dimensional line to depth points in each of the plurality of areas, and combining two or more of the plurality of two-dimensional lines to form a three-dimensional plane estimate.
    Type: Grant
    Filed: September 4, 2013
    Date of Patent: November 1, 2016
    Assignee: International Business Machines Corporation
    Inventor: Jonathan H. Connell, II
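The idea in this abstract, fitting two-dimensional lines to depth points in image areas and combining them into a plane estimate, can be sketched as two least-squares line fits (depth vs. column in one strip, depth vs. row in another) whose slopes become the plane coefficients. This is a minimal illustration; how the claimed method partitions areas and merges lines is not reproduced here:

```python
def fit_line(xs, zs):
    """Least-squares fit z = m*x + b to depth samples along one axis."""
    n = len(xs)
    mx = sum(xs) / n
    mz = sum(zs) / n
    num = sum((x - mx) * (z - mz) for x, z in zip(xs, zs))
    den = sum((x - mx) ** 2 for x in xs)
    m = num / den
    return m, mz - m * mx

def plane_from_lines(row_fit, col_fit):
    """Combine a depth-vs-column fit and a depth-vs-row fit into a
    plane estimate z = a*x + b*y + c, averaging the two intercepts."""
    a, c1 = row_fit
    b, c2 = col_fit
    return a, b, (c1 + c2) / 2.0
```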
  • Patent number: 9466045
    Abstract: Shipments of ordered items may be optimized by minimizing geographic and time constraints, thereby improving efficiencies and reducing costs. Geographic locations of items included in an order may be considered when generating an optimal path for retrieving such items and for processing the order. Upon identifying the items included in an order, locations of an origin, a destination and each of the items may be determined by any means, such as through a photogrammetric analysis of one or more images, and an optimal path for picking each of the items may be determined based on the respective locations of each of the items. Additionally, orders for items for delivery to a common destination may be combined into a single shipment if the orders are received within a window of time, and if the items are sufficiently compatible with one another.
    Type: Grant
    Filed: December 11, 2013
    Date of Patent: October 11, 2016
    Assignee: Amazon Technologies, Inc.
    Inventor: Nirvay Kumar
  • Patent number: 9460238
    Abstract: A method for determining a form for a headphone part using a representative model includes receiving data for at least one ear model. The ear model may be oriented within a coordinate system based on the data, where orienting is focused on the alignment with respect to one or more areas of the ear. A representative model is determined from a plurality of oriented ear models, where the representative model is a representation for a volume determined to be common to at least two oriented ear models. The size and/or shape for the headphone part is determined based upon the representative model.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: October 4, 2016
    Assignee: APPLE INC.
    Inventors: M. Evans Hankey, Jonathan S. Aase, Matthew D. Rohrbach, Daniele G. De Iuliis, Kristi E. Bauerly
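The representative model in this abstract is the volume common to several oriented ear models. With each model discretized into occupied voxels, that common volume is a set intersection; the voxel representation is my simplification of the claimed geometry:

```python
def common_volume(models):
    """Representative model as the set of voxels occupied by every
    oriented ear model (each model is a set of voxel coordinates)."""
    common = set(models[0])
    for m in models[1:]:
        common &= set(m)
    return common
```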
  • Patent number: 9456196
    Abstract: Provided are an apparatus and a method of providing a multiview still image service. The method includes: configuring a multiview still image file format including a plurality of image areas into which a plurality of pieces of image information forming a multiview still image are inserted; inserting the plurality of pieces of image information into the plurality of image areas, respectively; inserting three-dimensional (3D) basic attribute information to three-dimensionally reproduce the multiview still image into a first image area of the plurality of image areas into which main-view image information from among the plurality of pieces of image information is inserted; and outputting multiview still image data comprising the plurality of pieces of image information based on the multiview still image file format.
    Type: Grant
    Filed: February 23, 2011
    Date of Patent: September 27, 2016
    Inventors: Yong-tae Kim, Ha-joong Park, Gun-ill Lee, Houng-sog Min, Sung-bin Hong, Kwang-cheol Choi
  • Patent number: 9454796
    Abstract: Systems and methods for aligning ground based images of a geographic area taken from a perspective at or near ground level and a set of aerial images taken from, for instance, an oblique perspective, are provided. More specifically, candidate aerial imagery can be identified for alignment with the ground based image. Geometric data associated with the ground based image can be obtained and used to warp the ground based image to a perspective associated with the candidate aerial imagery. One or more feature matches between the warped image and the candidate aerial imagery can then be identified using a feature matching technique. The matched features can be used to align the ground based image with the candidate aerial imagery.
    Type: Grant
    Filed: August 21, 2015
    Date of Patent: September 27, 2016
    Assignee: Google Inc.
    Inventors: Steven Maxwell Seitz, Carlos Hernandez Esteban, Qi Shan
  • Patent number: 9436973
    Abstract: In image processing for correcting a distorted image obtained by photography using a super-wide-angle optical system such as a fisheye lens or an omnidirectional mirror, to obtain an image according to the perspective projection method, a composite index (Rn) combining a height on the projection sphere with a distance from the optical axis is computed (301), and a distance (Rf) from an origin in the distorted image is computed (302) using the composite index (Rn). Further, two-dimensional coordinates (p, q) in the distorted image are computed (303) using the distance (Rf) from the origin, and a pixel value in an output image is determined using a pixel at the position in the distorted image specified by the two-dimensional coordinates, or a pixel or pixels neighboring the specified position. It is thus possible to perform the projection from the projection sphere to the image plane, that is, the computation of the coordinates on the coordinate plane, while restricting the amount of computation.
    Type: Grant
    Filed: May 23, 2014
    Date of Patent: September 6, 2016
    Inventors: Toru Aoki, Narihiro Matoba
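The backward mapping this abstract describes, from an output perspective pixel to coordinates (p, q) in the distorted image, can be sketched for the equidistant fisheye model: the distance from the principal point fixes the angle from the optical axis, and the fisheye radius is focal length times that angle. The equidistant model is an illustrative choice; the patent covers super-wide-angle optics generally and uses its own composite index:

```python
import math

def perspective_to_fisheye(u, v, f_persp, f_fish):
    """Map a pixel (u, v) of the desired perspective image (principal
    point at the origin) to coordinates (p, q) in an equidistant
    fisheye image."""
    r_persp = math.hypot(u, v)
    if r_persp == 0:
        return 0.0, 0.0
    theta = math.atan2(r_persp, f_persp)  # angle from the optical axis
    r_fish = f_fish * theta               # distance from origin in the fisheye image
    return u * r_fish / r_persp, v * r_fish / r_persp
```

An output image is then filled by evaluating this mapping per pixel and sampling the distorted image at (or around) the returned coordinates.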
  • Patent number: 9430038
    Abstract: Embodiments that relate to communicating to a user of a head-mounted display device an estimated quality level of a world-lock display mode are disclosed. For example, in one disclosed embodiment sensor data is received from one or more sensors of the device. Using the sensor data, an estimated pose of the device is determined. Using the estimated pose, one or more virtual objects are displayed via the device in either the world-lock display mode or in a body-lock display mode. One or more of input uncertainty values of the sensor data and pose uncertainty values of the estimated pose are determined. The input uncertainty values and/or pose uncertainty values are mapped to the estimated quality level of the world-lock display mode. Feedback of the estimated quality level is communicated to a user via the device.
    Type: Grant
    Filed: May 1, 2014
    Date of Patent: August 30, 2016
    Inventors: Michael John Ebstyne, Frederik Schaffalitzky, Drew Steedly, Ethan Eade, Martin Shetter, Michael Grabner
  • Patent number: 9418475
    Abstract: The present disclosure describes systems and techniques relating to generating three dimensional (3D) models from range sensor data. According to an aspect, multiple 3D point clouds, which are captured using one or more 3D cameras, are obtained. At least two of the 3D point clouds correspond to different positions of a body relative to at least a single one of the one or more 3D cameras. Two or more of the 3D point clouds are identified as corresponding to two or more predefined poses, and a segmented representation of the body is generated, in accordance with a 3D part-based volumetric model including cylindrical representations, based on the two 3D point clouds identified as corresponding to the two predefined poses.
    Type: Grant
    Filed: March 13, 2013
    Date of Patent: August 16, 2016
    Assignee: University of Southern California
    Inventors: Gerard Guy Medioni, Jongmoo Choi, Ruizhe Wang
  • Patent number: 9367919
    Abstract: A method for estimating a position of a target by using an image acquired from a camera is provided. The method includes the steps of: (a) setting multiple virtual estimated reference points by dividing a view-path; (b) comparing altitude values of the respective estimated reference points with those of respective points on terrain; (c) searching neighboring virtual estimated reference points among the multiple virtual estimated reference points to satisfy a requirement under which a difference between an altitude zk of one point among the neighboring estimated reference points and that of the terrain corresponding thereto and a difference between an altitude zk+1 of the other point among the neighboring estimated reference points and that of the terrain corresponding thereto have different signs; and (d) determining that the actual position of the target exists between the searched estimated reference points Pk.
    Type: Grant
    Filed: January 6, 2016
    Date of Patent: June 14, 2016
    Assignee: National Institute of Meteorological Research
    Inventors: Kyu Young Choi, Jong Chul Ha, Kwang Deuk Ahn, Hee Choon Lee
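Steps (c) and (d) amount to a classic sign-change (bracketing) search along the view path: the target lies between two consecutive samples where path altitude minus terrain altitude changes sign. A minimal illustrative sketch (the names and sampling scheme are assumptions, not the patented method):

```python
def locate_target(view_path, terrain_altitude):
    """Find where a view path crosses the terrain surface.

    view_path: list of (x, z) samples along the line of sight, where z is
    the path altitude at horizontal position x. terrain_altitude: function
    x -> terrain height. Returns the indices (k, k+1) bracketing the first
    sign change of path-minus-terrain, or None if the path never crosses.
    """
    diffs = [z - terrain_altitude(x) for x, z in view_path]
    for k in range(len(diffs) - 1):
        if diffs[k] * diffs[k + 1] < 0:  # opposite signs: crossing between
            return k, k + 1
    return None

# A descending ray over flat terrain at altitude 0 crosses between x=2 and x=3.
path = [(x, 2.5 - x) for x in range(6)]
bracket = locate_target(path, lambda x: 0.0)
```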
  • Patent number: 9367918
    Abstract: A multi-view stereo approach generates an inventory of objects located on an object holder. An object may be a sample tube, and an object holder may be a tube rack as used in lab automation for healthcare diagnostics. A processor performs 3D tracking of the object holder and geometric analysis of multiple images generated by a calibrated camera. A homography mapping between images is utilized to warp a second image to the viewpoint of a first image. Plane-induced parallax makes the normalized cross-correlation score between the first image and the warped second image at a holder location that contains an object differ significantly from the score at a location that does not, enabling the processor to infer tube inventory and the presence or absence of a tube at each location in a rack.
    Type: Grant
    Filed: February 29, 2012
    Date of Patent: June 14, 2016
    Assignee: Siemens Healthcare Diagnostics Inc.
    Inventors: Gang Li, Yakup Genc, Siddharth Ram Chhatpar, Sandeep M. Naik, Roy Barr, Daniel Sacco, Alexander Gelbman
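The occupancy cue above reduces to comparing normalized cross-correlation scores between the reference image and the homography-warped second image. A sketch of the NCC score itself (the homography warp step is omitted; the names and patches are illustrative):

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equally sized image patches.

    Near +1 when the warped view matches the reference (e.g., an empty rack
    slot on the holder plane); plane-induced parallax from a tube rising
    above that plane lowers the score.
    """
    a = patch_a.astype(float).ravel()
    b = patch_b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

patch = np.arange(16.0).reshape(4, 4)
same = ncc(patch, patch)       # identical patches score +1
flipped = ncc(patch, -patch)   # contrast-inverted patch scores -1
```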
  • Patent number: 9361703
    Abstract: A dynamic image is generated. An image processing device includes a moving object acquisition unit, a moving direction acquisition unit, a rear region detection unit, and a smoothing processing unit. The moving object acquisition unit acquires a region of a moving object in a target image, which is at least one image among a plurality of temporally consecutive images. The moving direction acquisition unit acquires a moving direction of the moving object. The rear region detection unit detects, as a rear region, a region of a rear portion with respect to the moving direction in the region of the moving object. The smoothing processing unit performs a predetermined image process on the rear region.
    Type: Grant
    Filed: February 22, 2013
    Date of Patent: June 7, 2016
    Inventor: Mitsuharu Ohki
  • Patent number: 9355462
    Abstract: A system and method of estimating motion of a machine are disclosed. The method may include determining a first point cloud and a second point cloud corresponding to an environment in a vicinity of the machine. The method may further include generating a first extended Gaussian image (EGI) for the first point cloud and a second EGI for the second point cloud. The method may further include determining a first EGI segment based on the first EGI and a second EGI segment based on the second EGI. The method may further include determining a first two-dimensional distribution for points in the first EGI segment and a second two-dimensional distribution for points in the second EGI segment. The method may further include estimating motion of the machine based on the first and second two-dimensional distributions.
    Type: Grant
    Filed: May 8, 2013
    Date of Patent: May 31, 2016
    Assignee: Caterpillar Inc.
    Inventor: Qi Chen
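An extended Gaussian image is, in essence, a histogram of surface normal directions over the sphere. A minimal sketch of building one from point cloud normals (the bin counts and names are illustrative assumptions):

```python
import numpy as np

def extended_gaussian_image(normals, n_theta=8, n_phi=16):
    """Build an extended Gaussian image: a 2D histogram of unit surface
    normals binned by inclination (theta) and azimuth (phi) on the sphere.
    """
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    theta = np.arccos(np.clip(n[:, 2], -1.0, 1.0))         # [0, pi]
    phi = np.mod(np.arctan2(n[:, 1], n[:, 0]), 2 * np.pi)  # [0, 2*pi)
    hist, _, _ = np.histogram2d(
        theta, phi, bins=(n_theta, n_phi),
        range=((0, np.pi), (0, 2 * np.pi)))
    return hist

# All normals point straight up (+z): every sample lands in the theta=0 bin.
normals = np.tile([0.0, 0.0, 1.0], (100, 1))
egi = extended_gaussian_image(normals)
```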
  • Patent number: 9336625
    Abstract: Digitizing objects in a picture is discussed herein. A user presents the object to a camera, which captures the image comprising color and depth data for the front and back of the object. The object is recognized and digitized using color and depth data of the image. The user's client queries a server managing images uploaded by other users for virtual renditions of the object, as recognized in the other images. The virtual renditions from the other images are merged with the digitized version of the object in the image captured by the user to create a composite rendition of the object.
    Type: Grant
    Filed: October 25, 2011
    Date of Patent: May 10, 2016
    Inventors: Jeffrey Jesus Evertt, Justin Avram Clark, Christopher Harley Willoughby, Joel Deaguero, Relja Markovic
  • Patent number: 9330466
    Abstract: Methods and apparatus for three-dimensional (3D) camera positioning using a two-dimensional (2D) vanishing point grid. A vanishing point grid in a scene and initial camera parameters may be obtained. A new 3D camera may be calculated according to the vanishing point grid that places the grid as a ground plane in a scene. A 3D object may then be placed on the ground plane in the scene as defined by the 3D camera. The 3D object may be placed at the center of the vanishing point grid. Once placed, the 3D object can be moved to other locations on the ground plane or otherwise manipulated. The 3D object may be added as a layer in the image.
    Type: Grant
    Filed: November 29, 2012
    Date of Patent: May 3, 2016
    Assignee: Adobe Systems Incorporated
    Inventors: Peter F. Falco, Jr., Radomir Mech, Nikolai A. Svakhin, Zorana Gee
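Obtaining a vanishing point in the first place is commonly done by intersecting image lines through parallel scene edges in homogeneous coordinates; a minimal sketch of that standard construction (not the patent's grid or camera calculation):

```python
import numpy as np

def vanishing_point(p1, p2, q1, q2):
    """Intersect the image lines through (p1, p2) and (q1, q2) using
    homogeneous coordinates; for images of parallel scene edges this
    intersection is the vanishing point."""
    to_h = lambda p: np.array([p[0], p[1], 1.0])
    l1 = np.cross(to_h(p1), to_h(p2))  # homogeneous line through p1, p2
    l2 = np.cross(to_h(q1), to_h(q2))
    v = np.cross(l1, l2)               # intersection of the two lines
    if abs(v[2]) < 1e-12:
        return None                    # lines are parallel in the image
    return v[:2] / v[2]

# The lines y = x and y = 0.5*x + 1 meet at (2, 2).
vp = vanishing_point((0.0, 0.0), (1.0, 1.0), (0.0, 1.0), (4.0, 3.0))
```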
  • Patent number: 9330490
    Abstract: Methods and systems for visualization of 3D parametric data in a 2D image. The set of 3D parametric data includes a plurality of voxels in 3D space each associated with at least one parametric value, and the set of 2D image data includes information about a known camera position and a known camera orientation at which the 2D image was obtained. A graphical representation is generated of the parametric values of the voxels corresponding to a viewing surface in 3D space. A virtual 2D view of the viewing surface is determined. The 2D image is displayed registered with the graphical representation of the parametric values of the voxels corresponding to the virtual 2D view.
    Type: Grant
    Filed: April 26, 2012
    Date of Patent: May 3, 2016
    Inventors: Robert Weersink, David A. Jaffray, Jimmy Qiu, Andrew Hope, John Cho, Michael B. Sharpe
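Registering the parametric voxels with the 2D image relies on projecting 3D points through the known camera position and orientation. A minimal pinhole projection sketch (the intrinsics K and pose (R, t) are assumed known, as the abstract states; names are illustrative):

```python
import numpy as np

def project_voxels(centers, K, R, t):
    """Project 3D voxel centers into the 2D image of a known camera.

    centers: (N, 3) world points; K: 3x3 intrinsics; (R, t): world-to-camera
    rotation and translation. Returns (N, 2) pixel coordinates.
    """
    cam = centers @ R.T + t          # world frame -> camera frame
    pix = cam @ K.T                  # camera frame -> homogeneous pixels
    return pix[:, :2] / pix[:, 2:3]  # perspective divide

K = np.array([[100.0, 0.0, 50.0], [0.0, 100.0, 50.0], [0.0, 0.0, 1.0]])
# A voxel on the optical axis, 5 units ahead, projects to the principal point.
uv = project_voxels(np.array([[0.0, 0.0, 5.0]]), K, np.eye(3), np.zeros(3))
```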
  • Patent number: 9317938
    Abstract: A bidirectional texture function image, representing a bidirectional reflectance distribution function for each pixel of a target object, is input. An average bidirectional reflectance distribution function within at least a partial region of the bidirectional texture function image data, a power of the bidirectional reflectance distribution function of each pixel of the bidirectional texture function image data, and a peak position of each pixel of the bidirectional texture function image data are stored in a storage apparatus as information indicating the bidirectional reflectance distribution function for each pixel.
    Type: Grant
    Filed: April 25, 2013
    Date of Patent: April 19, 2016
    Inventor: Kosei Takahashi
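The three stored quantities above (a region-average BRDF, a per-pixel power, and a per-pixel peak position) can be sketched over a stack of per-pixel reflectance samples; the array layout here is an assumption for illustration, not the patent's storage format:

```python
import numpy as np

def summarize_brdf(btf):
    """Compress per-pixel reflectance data as in the abstract: an average
    BRDF over the region, a per-pixel power, and a per-pixel peak position.

    btf: (H, W, S) array with S reflectance samples per pixel (one per
    view/light direction combination).
    """
    average = btf.mean(axis=(0, 1))   # region-average BRDF (length S)
    power = (btf ** 2).sum(axis=2)    # per-pixel reflectance energy
    peak = btf.argmax(axis=2)         # per-pixel peak sample index
    return average, power, peak

btf = np.zeros((2, 2, 4))
btf[0, 0, 3] = 2.0   # one pixel with a strong highlight in sample 3
avg, power, peak = summarize_brdf(btf)
```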
  • Patent number: 9314692
    Abstract: One example embodiment includes a method for creating an avatar from an image. The method includes receiving an image including a face from a user. The method also includes constructing a 3D model of the face from the image. The method further includes animating the 3D model. The method additionally includes attaching the 3D model to an animated character.
    Type: Grant
    Filed: September 21, 2012
    Date of Patent: April 19, 2016
    Inventors: Aleksey Konoplev, Yury Volkov, Aleksey Orlov
  • Patent number: 9305364
    Abstract: A motion estimation system is disclosed. The motion estimation system may include one or more memories storing instructions, and one or more processors configured to execute the instructions to receive, from a scanning device, scan data representing at least one object obtained by a scan over at least one of the plurality of sub-scanning regions, and generate, from the scan data, a sub-pointcloud for one of the sub-scanning regions. The sub-pointcloud includes a plurality of surface points of the at least one object in the sub-scanning region. The one or more processors may be further configured to execute the instructions to estimate the motion of the machine relative to the at least one object by comparing the sub-pointcloud with a reference sub-pointcloud.
    Type: Grant
    Filed: February 19, 2013
    Date of Patent: April 5, 2016
    Assignee: Caterpillar Inc.
    Inventors: Qi Chen, Paul Russell Friend
  • Patent number: 9292928
    Abstract: A method of forming a refined depth map D_R of an image I using a binary depth map D_I of the image, said method comprising segmenting (315) the image into a superpixel image S_REP, defining (330) a foreground and a background in the superpixel image S_REP to form a superpixel depth map D_S, intersecting (450) the respective foreground and background of the superpixel depth map D_S with the binary depth map D_I determined independently of the superpixel image S_REP to define a trimap T consisting of a foreground region, a background region and an unknown region, and forming the refined binary depth map D_R of the image from the trimap T by reclassifying (355, 365) the pixels in the unknown region as either foreground or background based on a comparison (510) of the pixel values in the unknown region with pixel values in at least one of the other trimap regions.
    Type: Grant
    Filed: June 27, 2014
    Date of Patent: March 22, 2016
    Assignee: Canon Kabushiki Kaisha
    Inventor: Ernest Yiu Cheong Wan
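The final reclassification step (355, 365) compares unknown-region pixel values with those of the known trimap regions. A simplified stand-in for that comparison (510), assigning each unknown pixel to the nearer of the two known-region means (the trimap codes and data are assumptions):

```python
import numpy as np

def refine_trimap(image, trimap):
    """Reclassify 'unknown' trimap pixels as foreground or background by
    comparing each pixel value with the means of the known regions.

    Assumed trimap codes: 1 = foreground, 0 = background, -1 = unknown.
    Returns a refined binary map (1 = foreground).
    """
    fg_mean = image[trimap == 1].mean()
    bg_mean = image[trimap == 0].mean()
    out = (trimap == 1).astype(int)
    unknown = trimap == -1
    closer_to_fg = np.abs(image - fg_mean) < np.abs(image - bg_mean)
    out[unknown & closer_to_fg] = 1
    return out

# Bright unknown pixels near the bright foreground get pulled into it.
image = np.array([[0.9, 0.8, 0.1], [0.95, 0.7, 0.05]])
trimap = np.array([[1, -1, 0], [1, -1, 0]])
refined = refine_trimap(image, trimap)
```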
  • Patent number: 9286717
    Abstract: According to an example, 3D modeling motion parameters may be simultaneously determined for video frames according to different first and second motion estimation techniques. In response to detecting a failure of the first motion estimation technique, the 3D modeling motion parameters determined according to the second motion estimation technique may be used to re-determine the 3D modeling motion parameters according to the first motion estimation technique.
    Type: Grant
    Filed: July 30, 2013
    Date of Patent: March 15, 2016
    Inventors: Vuong Le, Wei Hong, Kar-Han Tan, John Apostolopoulos
  • Patent number: 9269187
    Abstract: Various disclosed embodiments include methods, systems, and computer-readable media for generating a 3-dimensional (3D) panorama. A method includes receiving images of a 3D scene. The method includes reconstructing geometry of a plurality of 3D bubble-views from the images. Reconstructing includes using a structure from motion framework for camera localization, generating a 3D surface mesh model of the scene using multi-view stereo via cylindrical surface sweeping for each bubble-view, and registering multiple 3D bubble-views in a common coordinate system. The method includes displaying the surface mesh model.
    Type: Grant
    Filed: July 11, 2013
    Date of Patent: February 23, 2016
    Assignee: Siemens Product Lifecycle Management Software Inc.
    Inventors: Yao-Jen Chang, Ronny Bismark
  • Patent number: 9225955
    Abstract: The invention relates to a method and an apparatus for processing media data (3), as well as to corresponding management of the processed media data (3). Whereas the prior art makes it possible to furnish media data with additional product and advertising information for selected objects, the proposed system processes media data (3) with current product and advertising information retrieved in real time. This is made possible by generating an XML file (4) for each media file to be processed, in which the data are compiled not statically but dynamically, for example in the form of links. Furthermore, the system offers dynamic management of the system-related data in the form of tables that are dynamically linked with one another.
    Type: Grant
    Filed: November 21, 2012
    Date of Patent: December 29, 2015
    Assignee: nrichcontent UG
    Inventors: Marc Langner, Ilhan Sakinc, Serjik Margosian Khoygani
  • Patent number: 9183631
    Abstract: Three-dimensional data are registered by selecting a first set of primitives from the data in a first coordinate system, wherein the first set of primitives includes at least one plane, at least one point, and a third primitive that is either a point or a plane, and selecting a second set of primitives from the data in a second coordinate system, wherein the second set of primitives includes at least one plane, at least one point, and a third primitive corresponding to the third primitive in the first set of primitives. Then, the planes are registered with each other, as are the points, to obtain registered primitives.
    Type: Grant
    Filed: June 29, 2012
    Date of Patent: November 10, 2015
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Yuichi Taguchi, Srikumar Ramalingam, Yong-Dian Jian, Chen Feng
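For the point-only special case, registering corresponding points reduces to the standard Kabsch/Procrustes solution via SVD; a sketch of that step (the patented method is more general, mixing plane and point primitives):

```python
import numpy as np

def register_points(p, q):
    """Rigid registration of corresponding 3D point sets via SVD (Kabsch).

    Returns R, t minimizing ||q_i - (R p_i + t)|| over all correspondences.
    """
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    H = (p - pc).T @ (q - qc)                # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # reflection guard
    R = Vt.T @ D @ U.T
    t = qc - R @ pc
    return R, t

# Recover a 90-degree rotation about z plus a translation.
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
p = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]])
q = p @ Rz.T + np.array([1.0, 2.0, 3.0])
R_est, t_est = register_points(p, q)
```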
  • Patent number: 9164723
    Abstract: Techniques for displaying content using an augmented reality device are described. Embodiments provide a visual scene for display, the visual scene captured using one or more camera devices of the augmented reality device. Embodiments adjust physical display geometry characteristics of the visual scene to correct for optimal projection. Additionally, illumination characteristics of the visual scene are modified based on environmental illumination data to improve realism of the visual scene when it is displayed. Embodiments further adjust display characteristics of the visual scene to improve tone mapping output. The adjusted visual scene is then output for display on the augmented reality device.
    Type: Grant
    Filed: June 30, 2011
    Date of Patent: October 20, 2015
    Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
    Inventors: Stefan C. Geiger, Wojciech Jarosz, Manuel J. Lang, Kenneth J. Mitchell, Derek Nowrouzezahrai, Robert W. Sumner, Thomas Williams
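The tone-mapping adjustment step can be illustrated with a standard global operator such as Reinhard's L/(1+L) curve (a stand-in for illustration; the patent does not specify which operator is used):

```python
import numpy as np

def reinhard_tonemap(luminance):
    """Global Reinhard operator L / (1 + L): compresses unbounded scene
    luminance into the displayable range [0, 1) while preserving order."""
    L = np.asarray(luminance, dtype=float)
    return L / (1.0 + L)

# Dark, mid, and very bright scene luminances map into [0, 1).
hdr = np.array([0.0, 1.0, 100.0])
ldr = reinhard_tonemap(hdr)
```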
  • Patent number: 9159162
    Abstract: A method of constructing a bounding box comprises: acquiring a set of sensed data points; adding, for each sensed data point, at least one calculated data point; and defining a bounding box containing the sensed and calculated data points. A method of identifying voxels in a voxel grid corresponding to a plurality of data points comprises: calculating, for each data point, a distance between it and each voxel; creating a subset of voxels comprising voxels having a distance from one data point that is less than a predetermined distance; creating another subset comprising those voxels that neighbor a voxel in the first subset; computing, for each voxel in the second subset, a distance between it and each voxel in the first subset; and identifying each voxel in the first subset that is a distance away from each voxel in the second subset that exceeds a predetermined distance.
    Type: Grant
    Filed: December 28, 2011
    Date of Patent: October 13, 2015
    Assignee: St. Jude Medical, Atrial Fibrillation Division, Inc.
    Inventors: Carlos Carbonera, Vasily Vylkov, Daniel R. Starks, Jiang Qian, Eric J. Voth
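The first method's bounding box over sensed plus calculated points can be sketched with an axis-aligned box; here the "calculated data points" are assumed to be fixed-offset copies of each sensed point along every axis, which is only one plausible reading of the abstract:

```python
import numpy as np

def padded_bounding_box(points, offset):
    """Bounding box containing sensed and calculated data points.

    For each sensed point, calculated points are added at +/- offset along
    each axis (an assumed construction), and the axis-aligned box
    containing all points is returned as (min_corner, max_corner).
    """
    pts = np.asarray(points, dtype=float)
    shifts = offset * np.vstack([np.eye(3), -np.eye(3)])   # +/- each axis
    calculated = (pts[:, None, :] + shifts[None, :, :]).reshape(-1, 3)
    allpts = np.vstack([pts, calculated])
    return allpts.min(axis=0), allpts.max(axis=0)

lo, hi = padded_bounding_box([[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]], offset=0.5)
```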
  • Patent number: 9154772
    Abstract: A method and apparatus for converting two-dimensional (2D) contents into three-dimensional (3D) contents are disclosed. The method includes: displaying a frame containing an object to be extracted, from among plural frames contained in the 2D contents; designating a boundary region of the object to be extracted on the displayed frame, in accordance with a user command through a user interface (UI) for collectively designating a region; generating a trimap, based on the designated boundary region, including inner and outer regions of the object to be extracted; and extracting the object based on the generated trimap. With this, a user can more conveniently and efficiently convert 2D contents into 3D contents.
    Type: Grant
    Filed: September 4, 2012
    Date of Patent: October 6, 2015
    Inventors: Ji-bum Moon, Han-soo Kim, Oh-jae Kwon, Won-seok Ahn, Seung-hoon Han
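Generating a trimap from a designated boundary commonly uses erosion for the definite foreground and dilation for the unknown band; a minimal sketch with assumed codes (0 background, 128 unknown, 255 foreground; border handling is simplified):

```python
import numpy as np

def dilate(mask, iterations=1):
    """Binary dilation with a 4-connected structuring element."""
    m = mask.astype(bool)
    for _ in range(iterations):
        p = np.pad(m, 1)
        m = (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
             | p[1:-1, :-2] | p[1:-1, 2:])
    return m

def make_trimap(mask, band=1):
    """Trimap from a user-designated object mask: eroded definite
    foreground, definite background, and an unknown boundary band."""
    erode = lambda m, k: ~dilate(~m.astype(bool), k)
    fg = erode(mask, band)
    unknown = dilate(mask, band) & ~fg
    trimap = np.zeros(mask.shape, dtype=np.uint8)   # 0 = background
    trimap[unknown] = 128                           # 128 = unknown band
    trimap[fg] = 255                                # 255 = foreground
    return trimap

mask = np.zeros((7, 7), bool)
mask[2:5, 2:5] = True   # a 3x3 object in a 7x7 frame
tm = make_trimap(mask)
```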
  • Patent number: 9153059
    Abstract: A method for displaying an image onto a relief projection surface using a projection system includes: automatically measuring, with a fixed camera, the distance separating virtual reference frame points projected onto the projection surface from real reference frame points produced on a three-dimensional screen, in order to set the projection system; acquiring, with the same fixed camera used for the setting, an image of a rear face of the three-dimensional screen; identifying, in the acquired image, the position of a finger in contact with a front face of the three-dimensional screen; and controlling an appliance according to the identified finger position.
    Type: Grant
    Filed: December 13, 2012
    Date of Patent: October 6, 2015
    Assignee: Commissariat à l'énergie atomique et aux énergies alternatives
    Inventors: Gorka Arrizabalaga, Jean-Francois Mainguet