Single Camera From Multiple Positions Patents (Class 348/50)
  • Patent number: 10834372
    Abstract: A method and system of converting stereo video content to multi-view video content combines an Eulerian approach with a Lagrangian approach. The method comprises generating a disparity map for each of the left and right views of a received stereoscopic frame. For each corresponding pair of left and right scanlines of the received stereoscopic frame, the method further comprises decomposing the left and right scanlines into a left sum of wavelets or other basis functions and a right sum of wavelets or other basis functions. The method further comprises establishing an initial disparity correspondence between left wavelets and right wavelets based on the generated disparity maps, and refining the initial disparity between each left wavelet and the corresponding right wavelet using the phase difference between the corresponding wavelets. The method further comprises reconstructing at least one novel view based on the left and right wavelets.
    Type: Grant
    Filed: June 5, 2018
    Date of Patent: November 10, 2020
    Assignee: Massachusetts Institute of Technology
    Inventors: Wojciech Matusik, Piotr K. Didyk, William T. Freeman, Petr Kellnhofer, Pitchaya Sitthi-Amorn, Frederic Durand, Szu-Po Wang
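    The phase-based refinement step described in this abstract can be illustrated with a small sketch (not from the patent itself; NumPy is assumed): a coarse integer disparity d0 between a left and a right scanline is refined using the phase difference of complex Gabor responses at the corresponding positions, since a sub-sample shift delta shows up as a phase offset of omega * delta at the wavelet's centre frequency omega.

    ```python
    import numpy as np

    OMEGA = 0.5    # wavelet centre frequency (radians per sample); illustrative value
    SIGMA = 6.0    # Gaussian envelope width of the Gabor kernel

    def gabor_phase(signal, p, omega=OMEGA, sigma=SIGMA):
        """Phase of a complex Gabor response centred at sample p."""
        k = np.arange(-int(4 * sigma), int(4 * sigma) + 1)
        kernel = np.exp(-k**2 / (2 * sigma**2)) * np.exp(-1j * omega * k)
        return np.angle(np.sum(signal[p + k] * kernel))

    def refine_disparity(left, right, p, d0, omega=OMEGA):
        """Refine an integer disparity d0 at left-scanline position p using
        the phase difference of corresponding Gabor responses."""
        dphi = gabor_phase(left, p) - gabor_phase(right, p + d0)
        dphi = (dphi + np.pi) % (2 * np.pi) - np.pi   # wrap to (-pi, pi]
        return d0 + dphi / omega

    # synthetic scanline pair shifted by a non-integer disparity of 3.3 samples
    x = np.arange(400, dtype=float)
    true_d = 3.3
    left = np.cos(OMEGA * x)
    right = np.cos(OMEGA * (x - true_d))
    d = refine_disparity(left, right, p=200, d0=3)   # recovers roughly 3.3
    ```

    The wrap to (-pi, pi] matters: phase is only known modulo 2*pi, so the refinement is reliable only when the residual shift is smaller than half a wavelength of the chosen wavelet.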
  • Patent number: 10809053
    Abstract: A movement assisting device includes an imaging unit that takes first and second images from first and second viewpoints; a first calculator that uses the first and second images to calculate a first position and orientation of the imaging unit at the first viewpoint and a second position and orientation of the imaging unit at the second viewpoint; a second calculator that uses the first and second positions and orientations to calculate a first angle between a first axis based on a first optical axis of the imaging unit at the first viewpoint and a second axis based on a second optical axis at the second viewpoint; a third calculator that calculates the next viewpoint, positioned in the direction of increase of the first angle; and an output unit that outputs information prompting movement of the imaging unit in the direction of the next viewpoint.
    Type: Grant
    Filed: September 9, 2015
    Date of Patent: October 20, 2020
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Hideaki Uchiyama, Akihito Seki
  • Patent number: 10726616
    Abstract: An image processing system configured to process perceived images of an environment includes a central processing unit (CPU) including a memory storage device having stored thereon a computer model of the environment, at least one sensor configured and disposed to capture a perceived environment including at least one of visual images of the environment and range data to objects in the environment, and a rendering unit (RU) configured and disposed to render the computer model of the environment forming a rendered model of the environment. The image processing system compares the rendered model of the environment to the perceived environment to update the computer model of the environment.
    Type: Grant
    Filed: November 12, 2019
    Date of Patent: July 28, 2020
    Assignee: ROSEMOUNT AEROSPACE INC.
    Inventors: Julian C. Ryde, Xuchu Ding
  • Patent number: 10719953
    Abstract: A system tracks poses of a passive object using fiducial markers on fiducial surfaces of a polygonal structure of the object using image data captured by a camera. The system includes an object tracking controller that generates an estimated pose for a frame of the image data using an approximate pose estimation (APE), and then updates the estimated pose using a dense pose refinement (DPR) of pixels. The APE may include minimizing reprojection error between projected image points of the fiducial markers and observed image points of the fiducial markers in the frame. The DPR may include minimizing appearance error between image pixels of the fiducial markers in the frame and projected model pixels of the fiducial markers determined from the estimated pose and the object model. In some embodiments, an inter-frame corner tracking (ICT) of the fiducial markers may be used to facilitate the APE.
    Type: Grant
    Filed: March 27, 2018
    Date of Patent: July 21, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Yuting Ye, Robert Y. Wang, Christopher David Twigg, Shangchen Han, Po-Chen Wu
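    The approximate pose estimation (APE) described above minimizes reprojection error between projected and observed fiducial points. A minimal sketch of that objective (not the patent's implementation; a pinhole camera and a z-axis rotation parameterization are assumed for brevity, with NumPy) is:

    ```python
    import numpy as np

    def project(points, angle, t, f=100.0):
        """Project 3-D model points with a pinhole camera after rotating the
        model about z by `angle` and translating it by t = (tx, ty, tz)."""
        c, s = np.cos(angle), np.sin(angle)
        R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
        p = points @ R.T + t
        return f * p[:, :2] / p[:, 2:3]

    def reprojection_error(angle, t, model, observed):
        """Mean pixel distance between projected model points and observations."""
        return np.linalg.norm(project(model, angle, t) - observed, axis=1).mean()

    model = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]])  # square marker
    true_angle, true_t = 0.3, np.array([0.2, -0.1, 5.0])
    observed = project(model, true_angle, true_t)   # synthetic "detections"

    # coarse search over the rotation angle (translation assumed known here);
    # a real APE would minimize over the full 6-DoF pose
    angles = np.linspace(-np.pi, np.pi, 721)
    best = min(angles, key=lambda a: reprojection_error(a, true_t, model, observed))
    ```

    In practice this objective is minimized with a nonlinear least-squares solver rather than a grid, and the dense pose refinement then swaps the point-wise error for a per-pixel appearance error.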
  • Patent number: 10686973
    Abstract: An image pickup apparatus includes a first optical system and a second optical system, which generate two optical images having mutual parallax, and one image sensor, which captures the two optical images. Each of the first optical system and the second optical system has its own focusing unit. The apparatus further includes a first frame which holds some of the lenses in the first optical system; a second frame which holds the image sensor; and a third frame which holds the lenses in the first optical system other than those held by the first frame, as well as the lenses in the second optical system. The first frame and the second frame are each movable in the direction of the optical axes with respect to the third frame.
    Type: Grant
    Filed: June 8, 2018
    Date of Patent: June 16, 2020
    Assignee: OLYMPUS CORPORATION
    Inventor: Takeshi Suga
  • Patent number: 10643389
    Abstract: A system for allowing a virtual object to interact with other virtual objects across different spaces within an augmented reality (AR) environment and to transition between the different spaces is described. An AR environment may include a plurality of spaces, each comprising a bounded area or volume within the AR environment. In one example, an AR environment may be associated with a three-dimensional world space and a two-dimensional object space corresponding with a page of a book within the AR environment. A virtual object within the AR environment may be assigned to the object space and transition from the two-dimensional object space to the three-dimensional world space upon the detection of a space transition event. In some cases, a dual representation of the virtual object may be used to detect interactions between the virtual object and other virtual objects in both the world space and the object space.
    Type: Grant
    Filed: March 28, 2016
    Date of Patent: May 5, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mathew J. Lamb, Ben J. Sugden, Robert L. Crocco, Jr., Brian E. Keane, Christopher E. Miles, Kathryn Stone Perez, Laura K. Massey, Alex Aben-Athar Kipman
  • Patent number: 10643360
    Abstract: Some systems include a memory, and a processor coupled to the memory, wherein the processor is configured to: identify one or more spatial markers in a medical data-based image of a patient, identify one or more spatial markers in a real-time perceived image of the patient, wherein the one or more spatial markers in the medical data-based image correspond to an anatomical feature of the patient and the one or more spatial markers in the real-time perceived image correspond to the anatomical feature of the patient, superimpose the medical data-based image of the patient with the real-time perceived image of the patient, and align the one or more spatial markers in the medical data-based image with the respective one or more spatial markers in the real-time perceived image.
    Type: Grant
    Filed: February 9, 2018
    Date of Patent: May 5, 2020
    Assignee: ARIZONA BOARD OF REGENTS ON BEHALF OF ARIZONA STATE UNIVERSITY
    Inventors: David Frakes, Ross Maciejewski, Mark Spano, Dustin Plaas, Alison Van Putten, Joseph Sansone, Matthew Mortensen, Nathaniel Kirkpatrick, Jonah Thomas
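    Aligning the spatial markers of the medical image with those of the real-time image, as this abstract describes, is commonly done with a least-squares rigid registration. A sketch of one standard approach, the Kabsch/SVD method (an illustration, not necessarily the patented method; NumPy assumed):

    ```python
    import numpy as np

    def align_markers(src, dst):
        """Least-squares rigid transform (R, t) mapping src markers onto dst.
        src, dst: (N, 3) arrays of corresponding marker coordinates."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)       # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                        # proper rotation (det = +1)
        t = dst_c - R @ src_c
        return R, t

    # synthetic markers from a scan, and the same anatomy seen live,
    # related by a rotation about z plus a translation
    rng = np.random.default_rng(0)
    scan = rng.normal(size=(6, 3))
    theta = 0.4
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
    live = scan @ R_true.T + np.array([1.0, -2.0, 0.5])
    R, t = align_markers(scan, live)              # recovers R_true and the offset
    ```

    With the transform recovered, superimposing the medical image onto the live view is a matter of applying (R, t) to the rendered overlay.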
  • Patent number: 10600240
    Abstract: A system for acquiring a 3D digital representation of a physical object, the system comprising: a scanning station comprising an object support for receiving a physical object; an image capturing device operable to capture two or more images of a physical object placed on the object support, wherein the two or more images are taken from different viewpoints relative to the physical object; and a processor configured to process the captured two or more images and to create a 3D digital model of the physical object.
    Type: Grant
    Filed: March 30, 2017
    Date of Patent: March 24, 2020
    Assignee: LEGO A/S
    Inventors: Morgan James Walker, Jonathan B. Bennink, Luka Kapeter, Henrik Munk Storm
  • Patent number: 10587796
    Abstract: Systems and methods for analyzing image data to automatically ensure image capture consistency are described. According to certain aspects, a server may access aerial and other image data and identify a set of parameters associated with the capture of the image data. The server may access a corresponding set of acceptable image capture parameters, and may compare the set of parameters to the set of acceptable parameters to determine whether image data is consistent with the set of acceptable parameters. In some embodiments, if the image data is not consistent, the server may generate a notification or instruction to cause an image capture component to recapture additional image data.
    Type: Grant
    Filed: February 20, 2018
    Date of Patent: March 10, 2020
    Assignee: STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY
    Inventors: Bradley A. Sliz, Lucas Allen, Jeremy T. Cunningham
  • Patent number: 10552666
    Abstract: A candidate human head is found in depth video using a head detector. A head region of light intensity video is spatially resolved with a three-dimensional location of the candidate human head in the depth video. Facial recognition is performed on the head region of the light intensity video using a face recognizer.
    Type: Grant
    Filed: August 1, 2017
    Date of Patent: February 4, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Robert M. Craig, Vladimir Tankovich, Craig Peeper, Ketan Dalal, Bhaven Dedhia, Casey Meekhof
  • Patent number: 10529128
    Abstract: A sensor apparatus may include an array of protruding members that each extend outward in a different radial direction from a central axis, each protruding member including a projector that projects structured light into a local environment, one or more cameras that capture reflections of the structured light from the local environment, and another camera that captures visible-spectrum light from the local environment. The sensor apparatus may also include one or more localization devices for determining a location of the sensor apparatus. Various other apparatuses, systems, and methods are also disclosed.
    Type: Grant
    Filed: April 27, 2018
    Date of Patent: January 7, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Yuheng Ren, Julian Straub, Shobhit Verma, Richard Andrew Newcombe, Renzo De Nardi
  • Patent number: 10499046
    Abstract: A camera system captures images from a set of cameras to generate binocular panoramic views of an environment. The cameras are oriented in the camera system to maximize the minimum number of cameras viewing a set of randomized test points. To calibrate the system, matching features between images are identified and used to estimate three-dimensional points external to the camera system. Calibration parameters are modified to improve the three-dimensional point estimates. When images are captured, a pipeline generates a depth map for each camera using reprojected views from adjacent cameras and an image pyramid that includes individual pixel depth refinement and filtering between levels of the pyramid. The images may be used to generate views of the environment from different perspectives (relative to the image capture location) by generating depth surfaces corresponding to the depth maps and blending the depth surfaces.
    Type: Grant
    Filed: April 18, 2017
    Date of Patent: December 3, 2019
    Assignee: Facebook, Inc.
    Inventors: Michael John Toksvig, Forrest Samuel Briggs, Brian Keith Cabral
  • Patent number: 10484561
    Abstract: A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured in different orientations and at different distances from the object and combined into a composite image representing a three-dimensional image of the object. The image frames may be formed into the composite image by representing the features of each image frame as a set of points in a three-dimensional depth map. Coordinates of the points in the depth map may be estimated with a level of certainty. The level of certainty may be used to determine which points are included in the composite image. The selected points may be smoothed and a mesh model may be formed by creating a convex hull of the selected points. The mesh model and associated texture information may be used to render a three-dimensional representation of the object on a two-dimensional display.
    Type: Grant
    Filed: May 11, 2015
    Date of Patent: November 19, 2019
    Assignee: ML Netherlands C.V.
    Inventors: Alexander Ilic, Benedikt Koeppel
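    The certainty-gated convex-hull meshing described in this abstract can be sketched in a few lines (an illustration only, not the patented pipeline; NumPy and SciPy are assumed to be available):

    ```python
    import numpy as np
    from scipy.spatial import ConvexHull

    def hull_mesh(points, certainty, threshold=0.5):
        """Keep only depth-map points whose certainty exceeds the threshold,
        then mesh the survivors as their convex hull."""
        kept = points[certainty > threshold]
        hull = ConvexHull(kept)
        return kept, hull.simplices          # triangles as point-index triples

    # unit-cube corners measured confidently, plus two low-certainty outliers
    corners = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                       dtype=float)
    noise = np.array([[5.0, 5.0, 5.0], [-4.0, 0.0, 0.0]])
    pts = np.vstack([corners, noise])
    cert = np.array([0.9] * 8 + [0.1, 0.2])
    kept, tris = hull_mesh(pts, cert)        # the outliers are filtered out
    ```

    The certainty gate is what keeps spurious depth estimates from inflating the hull; the abstract's smoothing step would sit between the filter and the hull computation.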
  • Patent number: 10477195
    Abstract: Different full-screen content is displayed on the same television at the same time, from the perspective of the viewer, by displaying the content as two sequential full-screen frames. The different full-screen content may be provided as a single combined frame signal, such as a side-by-side, top-bottom, or checkerboard signal, which is then displayed as two sequential full-screen frames. Configured glasses, such as polarized or shutter glasses, are used to view the different content full screen: one pair of configured glasses views the initial one of the sequential frames but blocks the subsequent one, while another pair blocks the initial frame and views the subsequent one. Shutter glasses have both lenses open during the initial frame and both closed during the subsequent one. For polarized glasses, the initial frame has a polarization matching both lenses of one pair of glasses, while the subsequent frame has a polarization that differs from both lenses.
    Type: Grant
    Filed: July 13, 2017
    Date of Patent: November 12, 2019
    Inventor: Jeramie J. Keys
  • Patent number: 10444021
    Abstract: Some embodiments of location estimation methods may (1) facilitate the task of efficiently finding the location of a mobile platform in scenarios in which the uncertainties associated with the coordinates of the map features are anisotropic and/or non-proportional, and/or (2) facilitate decoupling of location estimation from feature estimation. Some embodiments of feature estimation methods may (1) facilitate the combining of environmental descriptions provided by two or more mobile platforms, and/or (2) facilitate decoupling of a data aggregation from feature re-estimation.
    Type: Grant
    Filed: February 4, 2019
    Date of Patent: October 15, 2019
    Assignee: Reification Inc.
    Inventor: Gabriel Archacki Hare
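    The abstract's point about anisotropic, non-proportional feature uncertainties can be made concrete with an inverse-covariance-weighted fusion sketch (my reading of the idea, not the patented method; NumPy assumed). Each map feature, together with its observed offset from the platform, yields a position hint, and hints are weighted by the inverse of their covariance so that each feature constrains the estimate only along its well-localised directions:

    ```python
    import numpy as np

    def estimate_location(map_feats, offsets, covariances):
        """Fuse per-feature position hints p_i = map_i - offset_i, weighting
        each hint by the inverse of its (possibly anisotropic) covariance."""
        A = np.zeros((2, 2))
        b = np.zeros(2)
        for m, o, cov in zip(map_feats, offsets, covariances):
            W = np.linalg.inv(cov)               # information matrix
            A += W
            b += W @ (m - o)
        return np.linalg.solve(A, b)

    # two features: one well-localised in x, the other in y
    map_feats = [np.array([5.0, 0.0]), np.array([0.0, 7.0])]
    true_pos = np.array([2.0, 3.0])
    offsets = [m - true_pos for m in map_feats]  # noise-free relative observations
    covs = [np.diag([0.01, 100.0]), np.diag([100.0, 0.01])]
    pos = estimate_location(map_feats, offsets, covs)
    ```

    A simple scalar weight per feature would mishandle this case; the full covariance lets each feature contribute strongly along one axis and weakly along the other, which is exactly the anisotropic scenario the abstract targets.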
  • Patent number: 10432842
    Abstract: A movement recognition system includes an inertial sensor, a depth sensor, and a processor. The inertial sensor is coupled to an object and configured to measure a first unit of inertia of the object. The depth sensor is configured to measure a three dimensional shape of the object using projected light patterns and a camera. The processor is configured to receive a signal representative of the measured first unit of inertia from the inertial sensor and a signal representative of the measured shape from the depth sensor and to determine a type of movement of the object based on the measured first unit of inertia and the measured shape utilizing a classification model.
    Type: Grant
    Filed: April 6, 2016
    Date of Patent: October 1, 2019
    Assignee: THE TEXAS A&M UNIVERSITY SYSTEM
    Inventors: Nasser Kehtarnavaz, Roozbeh Jafari, Kui Liu, Chen Chen, Jian Wu
  • Patent number: 10373337
    Abstract: A method is provided for calibrating a stereo imaging system by using at least one camera and a planar mirror. The method involves obtaining at least two images with the camera, each of the images being captured from a different camera position and containing the mirror view of the camera and a mirror view of an object, thereby obtaining multiple views of the object. The method further involves finding the center of the picture of the camera in each of the images, obtaining a relative focal length of the camera, determining an aspect ratio in each of the images, determining the mirror plane equation in the coordinate system of the camera, defining an up-vector in the mirror's plane, selecting a reference point in the mirror's plane, determining the coordinate transformation from the coordinate system of the image capturing camera into the mirror coordinate system, and determining a coordinate transformation.
    Type: Grant
    Filed: March 18, 2016
    Date of Patent: August 6, 2019
    Assignee: Lateral Reality KFT.
    Inventor: Péter Torma
  • Patent number: 10356394
    Abstract: An apparatus and method for measuring the position of a stereo camera. The apparatus for measuring a position of the camera according to an embodiment includes a feature point extraction unit for extracting feature points from images captured by a first camera and a second camera and generating a first feature point list based on the feature points, a feature point recognition unit for extracting feature points from images captured by the cameras after the cameras have moved, generating a second feature point list based on the feature points, and recognizing actual feature points based on the first feature point list and the second feature point list, and a position variation measurement unit for measuring variation in positions of the cameras based on variation in relative positions of the actual feature points.
    Type: Grant
    Filed: February 18, 2016
    Date of Patent: July 16, 2019
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jae-Hean Kim, Hyun Kang, Soon-Chul Jung, Young-Mi Cha, Jin-Sung Choi
  • Patent number: 10353191
    Abstract: Described embodiments provide a method of generating an image of a region of interest of a target object. A plurality of concentric circular scan trajectories are determined to sample the region of interest. Each of the concentric circular scan trajectories have a radius incremented from an innermost concentric circular scan trajectory having a minimum radius to an outermost concentric circular scan trajectory having a maximum radius. A number of samples are determined for each of the concentric circular scan trajectories. A location of each sample is determined for each of the concentric circular scan trajectories. The locations of each sample are substantially uniformly distributed in a Cartesian coordinate system of the target object. The target object is iteratively rotated along each of the concentric circular scan trajectories and images are captured at the determined sample locations to generate a reconstructed image from the captured images.
    Type: Grant
    Filed: January 13, 2016
    Date of Patent: July 16, 2019
    Assignee: Massachusetts Institute of Technology
    Inventors: Brian W. Anthony, Xian Du
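    The sampling scheme in this abstract, with per-ring sample counts chosen so the samples stay roughly uniformly dense in Cartesian coordinates, amounts to making the count proportional to each circle's circumference. A minimal sketch (illustrative only; NumPy assumed):

    ```python
    import numpy as np

    def ring_samples(r_min, r_max, n_rings, spacing):
        """Sample locations on concentric circular scan trajectories, with the
        per-ring count proportional to circumference so that arc spacing, and
        hence Cartesian sample density, stays roughly constant across rings."""
        radii = np.linspace(r_min, r_max, n_rings)
        rings = []
        for r in radii:
            n = max(1, int(round(2 * np.pi * r / spacing)))
            theta = 2 * np.pi * np.arange(n) / n
            rings.append(np.column_stack([r * np.cos(theta), r * np.sin(theta)]))
        return rings

    rings = ring_samples(r_min=1.0, r_max=4.0, n_rings=4, spacing=0.5)
    counts = [len(r) for r in rings]   # grows linearly with radius
    ```

    Holding the angular count fixed instead would oversample the inner rings and undersample the outer ones, which is the distortion this construction avoids.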
  • Patent number: 10347029
    Abstract: A three-dimensional shape measurement apparatus includes an imaging unit that successively outputs two-dimensional images captured, a memory unit that stores the two-dimensional images outputted by the imaging unit, a three-dimensional shape model generation unit which generates a three-dimensional shape model, based on the two-dimensional images and stores the three-dimensional shape model in the memory unit, a region calculation unit that calculates, based on the two-dimensional images and the three-dimensional shape model stored in the memory unit, a measurement-completed region in the two-dimensional images, and a display image generation unit that generates, based on the measurement-completed region, a display image from the two-dimensional images.
    Type: Grant
    Filed: May 6, 2016
    Date of Patent: July 9, 2019
    Assignee: TOPPAN PRINTING CO., LTD.
    Inventors: Hiroki Unten, Tatsuya Ishii
  • Patent number: 10341568
    Abstract: Methods and apparatuses are disclosed for assisting a user in performing a three dimensional scan of an object. An example user device to assist with scanning may include a processor. The user device may further include a scanner coupled to the processor and configured to perform a three dimensional scan of an object. The user device may also include a display to display a graphical user interface, wherein the display is coupled to the processor. The user device may further include a memory coupled to the processor and the display, the memory including one or more instructions that when executed by the processor cause the graphical user interface to display a target marker for a three dimensional (3D) scan and display a scanner position marker to assist in moving the scanner to a preferred location and direction.
    Type: Grant
    Filed: March 15, 2017
    Date of Patent: July 2, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Matthew Fischler, Arthur Pajak, Mithun Kumar Ranganath, Sairam Sundaresan, Michel Adib Sarkis, Scott Beith
  • Patent number: 10237532
    Abstract: A method of colorizing a 3D point cloud includes receiving the 3D point cloud, receiving a 2D color image acquired by a camera, creating a 2D intensity image of the 3D point cloud based on intrinsic and extrinsic parameters of the camera, generating a set of refined camera parameters by matching the 2D intensity image and the 2D color image, creating a depth buffer for the 3D point cloud using the set of refined camera parameters, determining a foreground depth for each respective pixel of the depth buffer, and coloring the point cloud by, for each respective point of the 3D point cloud: upon determining that the respective point is in the foreground, assigning a color of a corresponding pixel in the 2D color image to the respective point; and upon determining that the respective point is not in the foreground, not assigning any color to the respective point.
    Type: Grant
    Filed: March 7, 2017
    Date of Patent: March 19, 2019
    Assignee: Trimble AB
    Inventors: Fabrice Monnier, Thomas Chaperon, Guillaume Tremblay
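    The two-pass foreground test at the heart of this colorization method can be sketched directly: a first pass over the projected points builds a depth buffer holding the nearest depth per pixel, and a second pass colours only the points at (or near) that foreground depth. This is a simplified illustration, not the patented implementation; a 3x3 intrinsic matrix K and NumPy are assumed:

    ```python
    import numpy as np

    def colorize(points, colors_img, K, depth_tol=0.05):
        """Colour a point cloud from one image using a depth buffer: only
        points at (or near) the foreground depth of their pixel get a colour."""
        h, w, _ = colors_img.shape
        uvz = (K @ points.T).T                       # pinhole projection
        u = np.round(uvz[:, 0] / uvz[:, 2]).astype(int)
        v = np.round(uvz[:, 1] / uvz[:, 2]).astype(int)
        z = uvz[:, 2]
        depth = np.full((h, w), np.inf)
        for ui, vi, zi in zip(u, v, z):              # pass 1: foreground depth
            if 0 <= vi < h and 0 <= ui < w:
                depth[vi, ui] = min(depth[vi, ui], zi)
        out = np.full((len(points), 3), np.nan)      # NaN = "no colour assigned"
        for i, (ui, vi, zi) in enumerate(zip(u, v, z)):   # pass 2: colour
            if 0 <= vi < h and 0 <= ui < w and zi <= depth[vi, ui] + depth_tol:
                out[i] = colors_img[vi, ui]
        return out

    K = np.array([[100.0, 0, 2.0], [0, 100.0, 2.0], [0, 0, 1.0]])
    img = np.zeros((4, 4, 3))
    img[..., 0] = 1.0                                # all-red 4x4 image
    pts = np.array([[0.0, 0.0, 1.0],                 # foreground point
                    [0.0, 0.0, 2.0]])                # occluded point, same pixel
    rgb = colorize(pts, img, K)                      # only the first point is red
    ```

    Leaving occluded points uncoloured, rather than giving them the foreground pixel's colour, is exactly the behaviour the abstract specifies for points failing the foreground test.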
  • Patent number: 10198811
    Abstract: An image processing apparatus includes: a detecting unit that detects regions of interest that are estimated as an object to be detected, from a group of a series of images acquired by sequentially imaging a lumen of a living body, and to extract images of interest including the regions of interest; a neighborhood range setting unit that sets, as a time-series neighborhood range, a neighborhood range of the images of interest in the group of the series of images arranged in time series so as to be wider than an interval between images that are continuous in time series in the group of the series of images; an image-of-interest group extracting unit that extracts an image-of-interest group including identical regions of interest from the extracted images of interest, based on the time-series neighborhood range; and a representative-image extracting unit that extracts a representative image from the image-of-interest group.
    Type: Grant
    Filed: September 16, 2016
    Date of Patent: February 5, 2019
    Assignee: OLYMPUS CORPORATION
    Inventors: Toshiya Kamiyama, Yamato Kanda, Makoto Kitamura
  • Patent number: 10198842
    Abstract: The methods and systems described herein comprise: receiving a first image and a second image, with a first timestamp and a second timestamp, the first image and the second image depicting a common object; determining an actual position of first common-object pixels and second common-object pixels; computing common-object motion information based on the actual positions of the first and second common-object pixels and the first and second timestamps; receiving a third image and a third timestamp, the third image depicting the common object; determining an actual position of third common-object pixels; computing an estimated position of the third common-object pixels based on the common-object motion information, the third timestamp, and the third pixels; and, if the actual position of the third common-object pixels and their estimated position do not match, generating a synthetic image.
    Type: Grant
    Filed: March 15, 2017
    Date of Patent: February 5, 2019
    Assignee: YANDEX EUROPE AG
    Inventor: Anton Vasilyevich Korzunov
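    The mismatch test in this abstract reduces, in its simplest form, to a constant-velocity prediction: two timestamped observations define a velocity, the third frame's position is extrapolated from it, and a synthetic image is warranted when the observation disagrees with the extrapolation. A small sketch under that constant-velocity assumption (mine, not the patent's stated motion model; NumPy assumed):

    ```python
    import numpy as np

    def predict_position(p1, t1, p2, t2, t3):
        """Constant-velocity prediction of the object's pixel position at t3
        from its observed positions at t1 and t2."""
        velocity = (p2 - p1) / (t2 - t1)
        return p2 + velocity * (t3 - t2)

    def needs_synthetic_frame(p1, t1, p2, t2, p3, t3, tol=2.0):
        """True when the observed third position disagrees with the
        motion-model estimate by more than tol pixels."""
        return np.linalg.norm(p3 - predict_position(p1, t1, p2, t2, t3)) > tol

    p1, t1 = np.array([10.0, 20.0]), 0.0
    p2, t2 = np.array([14.0, 20.0]), 1.0     # moving 4 px/s along x
    smooth = needs_synthetic_frame(p1, t1, p2, t2, np.array([18.0, 20.0]), 2.0)
    jumped = needs_synthetic_frame(p1, t1, p2, t2, np.array([30.0, 20.0]), 2.0)
    ```

    Here `smooth` is the consistent case (no synthetic image needed) and `jumped` is the mismatch case that would trigger synthetic-image generation.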
  • Patent number: 10181202
    Abstract: A control apparatus includes a projection unit configured to project a first pattern onto an object; and a selection unit configured to select a single first pattern from a plurality of first patterns. After the projection unit projects each of the plurality of first patterns having different resolutions onto the object, the projection unit projects a second pattern onto the object, the second pattern having the same resolution as that of the selected single first pattern.
    Type: Grant
    Filed: May 22, 2015
    Date of Patent: January 15, 2019
    Assignee: Seiko Epson Corporation
    Inventors: Tomoki Harada, Koichi Hashimoto, Shogo Arai, Toshiki Boshu
  • Patent number: 10140702
    Abstract: Provided herein is a method of quantifying dentin tubules in a dentin surface. Also provided are uses of the method and a computer program which performs the method.
    Type: Grant
    Filed: October 4, 2013
    Date of Patent: November 27, 2018
    Assignee: Colgate-Palmolive Company
    Inventor: Richard Sullivan
  • Patent number: 10110817
    Abstract: Provided is an image processing device including an acquisition unit configured to acquire information on an imaging position and an imaging direction in units of frame images that constitute a moving image obtained through capturing by an imaging unit, a converted image generation unit configured to generate converted images having different imaging directions for each frame image that constitutes the moving image based on the frame image itself and preceding and succeeding frame images of the frame image, an evaluation value calculation unit configured to calculate an evaluation value for each converted moving image constituted by combining the converted image and the original frame image, the evaluation value being used to evaluate a blur between the converted images or between the original frame images, and a selection unit configured to select a converted moving image with less blur based on an evaluation value calculated by the evaluation value calculation unit.
    Type: Grant
    Filed: December 27, 2016
    Date of Patent: October 23, 2018
    Assignee: SONY CORPORATION
    Inventors: Yasuhiro Sutou, Hideki Shimomura, Atsushi Okubo, Kazumi Aoyama, Akichika Tanaka
  • Patent number: 10082663
    Abstract: A novel method is disclosed to allow for the simultaneous capture of image data from multiple depths of a volumetric sample. The method allows for the seamless acquisition of a 2D or 3D image, while changing on the fly the acquisition depth in the sample. This method can also be used for auto focusing. Additionally this method of capturing image data from the sample allows for optimal efficiency in terms of speed, and light sensitivity, especially for the herein mentioned purpose of 2D or 3D imaging of samples when using a tilted configuration as depicted in FIG. 2. The method may be particularly used with an imaging sensor comprising a 2D array of pixels in an orthogonal XY coordinate system where gaps for electronic circuitry are present. Also other imaging sensor may be used. Further, an imaging device is presented which automatically carries out the method.
    Type: Grant
    Filed: March 5, 2018
    Date of Patent: September 25, 2018
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventor: Bas Hulsken
  • Patent number: 10080012
    Abstract: Disclosed herein are methods, systems, and computer-readable storage media for generating three-dimensional (3D) images of a scene. According to an aspect, a method includes capturing a real-time image and a first still image of a scene. Further, the method includes displaying the real-time image of the scene on a display. The method also includes determining one or more properties of the captured images. The method also includes calculating an offset in a real-time display of the scene to indicate a target camera positional offset with respect to the first still image. Further, the method includes determining that a capture device is in a position of the target camera positional offset. The method also includes capturing a second still image. Further, the method includes correcting the captured first and second still images. The method also includes generating the three-dimensional image based on the corrected first and second still images.
    Type: Grant
    Filed: June 27, 2016
    Date of Patent: September 18, 2018
    Assignee: 3DMedia Corporation
    Inventors: Michael McNamer, Marshall Robers, Tassos Markas, Jason Paul Hurst
  • Patent number: 10013812
    Abstract: A method includes defining a virtual space and a virtual camera for determining a field of view region at a first position in the virtual space. The method includes specifying a reference sight line of the user and a direction of the virtual camera. The method includes generating a field of view image corresponding to the field of view region and outputting the field of view image. The method includes receiving a movement input for specifying a movement destination of the virtual camera. The method includes specifying a temporal state of the movement input. The method includes moving the virtual camera from the first position to a second position in the virtual space based on the temporal state. The method includes generating an updated field of view image based on the virtual camera reaching the second position and outputting the updated field of view image.
    Type: Grant
    Filed: March 9, 2017
    Date of Patent: July 3, 2018
    Assignee: COLOPL, INC.
    Inventor: Sadaharu Muta
  • Patent number: 9990861
    Abstract: A method of providing artificial vision to a visually-impaired user implanted with a visual prosthesis. The method includes configuring, in response to selection information received from the user, a smart prosthesis to perform at least one function of a plurality of functions in order to facilitate performance of a visual task. The method further includes extracting, from an input image signal generated in response to optical input representative of a scene, item information relating to at least one item within the scene relevant to the visual task. The smart prosthesis then generates image data corresponding to an abstract representation of the scene wherein the abstract representation includes a representation of the at least one item. Pixel information based upon the image data is then provided to the visual prosthesis.
    Type: Grant
    Filed: May 28, 2014
    Date of Patent: June 5, 2018
    Assignee: Pixium Vision
    Inventors: Eduardo-Jose Chichilnisky, Martin Greschner, Lauren Jepson
  • Patent number: 9984499
    Abstract: Systems and methods for image based location estimation are described. In one example embodiment, a first positioning system is used to generate a first position estimate. A 3D point cloud data describing an environment is then accessed. A first image of an environment is captured, and a portion of the image is matched to a portion of key points in the 3D point cloud data. An augmented reality object is then aligned within one or more images of the environment based on the match of the 3D point cloud with the image. In some embodiments, building façade data may additionally be used to determine a device location and place the augmented reality object within an image.
    Type: Grant
    Filed: November 30, 2015
    Date of Patent: May 29, 2018
    Assignee: Snap Inc.
    Inventors: Nathan Jurgenson, Linjie Luo, Jonathan M Rodriguez, II, Rahul Sheth, Jia Li, Xutao Lv
  • Patent number: 9967537
    Abstract: An intermediate image (161) is generated from stereo data (105) comprising a left image (101), left disparity data (111), a right image (102) and right disparity data (112). The intermediate image (161) corresponds to an intermediate view (155). A mixing policy (156) is determined based on a predicted image quality of the intermediate image (161). When the determined mixing policy (156) so requires, a left intermediate image (131) is generated from the left data (103) for the intermediate view (155). When the determined mixing policy (156) so requires, a right intermediate image (141) is generated from the right data (104) for the intermediate view (155). The intermediate image (161) is generated by mixing (180) the left intermediate image (131) and the right intermediate image (141), according to the mixing policy (156).
    Type: Grant
    Filed: November 13, 2013
    Date of Patent: May 8, 2018
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Wilhelmus Hendrikus Alfonsus Bruls, Meindert Onno Wildeboer
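    The mixing step described in the Philips abstract above can be illustrated with a minimal sketch. All function names, the quality scores, and the threshold below are hypothetical and chosen only to show the general idea of a quality-driven mixing policy; they are not taken from the patent.

    ```python
    # Hypothetical sketch: blend two warped intermediate images according to a
    # simple quality-driven mixing policy. Images are modeled as flat lists of
    # pixel intensities; all thresholds are illustrative.

    def choose_mixing_policy(left_quality, right_quality, threshold=0.5):
        """Return the blend weight for the left image: 1.0 = left only, 0.0 = right only."""
        if right_quality < threshold <= left_quality:
            return 1.0  # right view predicted poor: use the left view only
        if left_quality < threshold <= right_quality:
            return 0.0  # left view predicted poor: use the right view only
        return left_quality / (left_quality + right_quality)  # blend proportionally

    def mix_intermediate(left_img, right_img, weight):
        """Per-pixel blend of two equally sized images (lists of floats)."""
        return [weight * l + (1.0 - weight) * r for l, r in zip(left_img, right_img)]

    left = [10.0, 20.0, 30.0]
    right = [20.0, 20.0, 10.0]
    w = choose_mixing_policy(0.8, 0.8)          # equal quality -> weight 0.5
    intermediate = mix_intermediate(left, right, w)  # -> [15.0, 20.0, 20.0]
    ```

    In practice the per-view quality prediction would come from disparity reliability, and the policy could vary per pixel or per region rather than per image.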
  • Patent number: 9961321
    Abstract: A video is generated from multi-aspect images. For each frame of the output video, a provisional video is created with provisional images which are used to designate a position of a target object on which to focus and to acquire a depth map showing a depth of the object, and which are reconstructed using default values from LFIs as the frames. Then, information designating coordinates of focal positions of the target object on the provisional video is acquired. A list creator acquires the depth value of the target object using a depth map created from the LFI of the current frame, and records this in a designation list after obtaining the reconstruction distance from the depth value. A corrector corrects the designation list. A main video creator creates and outputs a main video with reconstructed images focused at a focal length designated by the post-correction designation list as frames.
    Type: Grant
    Filed: May 28, 2013
    Date of Patent: May 1, 2018
    Assignee: CASIO COMPUTER CO., LTD.
    Inventor: Kouichi Nakagome
  • Patent number: 9910258
    Abstract: A novel method is disclosed to allow for the simultaneous capture of image data from multiple depths of a volumetric sample. The method allows for the seamless acquisition of a 2D or 3D image while changing the acquisition depth in the sample on the fly. This method can also be used for auto focusing. Additionally, this method of capturing image data from the sample allows for optimal efficiency in terms of speed and light sensitivity, especially for the herein mentioned purpose of 2D or 3D imaging of samples when using a tilted configuration as depicted in FIG. 2. The method may be particularly used with an imaging sensor comprising a 2D array of pixels in an orthogonal XY coordinate system where gaps for electronic circuitry are present. Other imaging sensors may also be used. Further, an imaging device is presented which automatically carries out the method.
    Type: Grant
    Filed: December 11, 2015
    Date of Patent: March 6, 2018
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventor: Bas Hulsken
  • Patent number: 9900595
    Abstract: The present technique relates to an encoding device, an encoding method, a decoding device, and a decoding method capable of improving encoding efficiency of a parallax image using information about the parallax image. The correction unit corrects a prediction image of a parallax image of a reference viewpoint using information about the parallax image of the reference viewpoint. The arithmetic operation unit encodes the parallax image of the reference viewpoint using the corrected prediction image. The encoded parallax image of the reference viewpoint and the information about the parallax image of the reference viewpoint are transmitted. The present technique can be applied to, for example, an encoding device of the parallax image.
    Type: Grant
    Filed: August 21, 2012
    Date of Patent: February 20, 2018
    Assignee: Sony Corporation
    Inventor: Yoshitomo Takahashi
  • Patent number: 9894276
    Abstract: System and method can support three-dimensional display. The system can receive a plurality of image frames, which are captured by an imaging device on a movable object. Furthermore, the system can obtain state information of the imaging device on the movable object, and use the state information to configure a pair of image frames based on the plurality of image frames for supporting a three-dimensional first person view (FPV). Additionally, an image frame selected from the plurality of image frames can be used for a first image frame in the pair of image frames.
    Type: Grant
    Filed: November 11, 2016
    Date of Patent: February 13, 2018
    Assignee: SZ DJI TECHNOLOGY CO., LTD.
    Inventors: Zisheng Cao, Linchao Bao, Pan Hu, Mingyu Wang
  • Patent number: 9824436
    Abstract: A method of comparing measured three-dimensional (3D) measurement data to an object is provided. The method includes the steps of providing a three dimensional measurement device configured to measure 3D coordinates of points on the object and a computing device having a camera and display. During an inspection process, the method measures the object with the 3D measurement device which provides a first collection of 3D coordinates. The first collection of 3D coordinates is stored on the computer network and is associated with an AR marker. During an observation process the method reads the AR marker and transmits from the computer network the first collection of 3D coordinates and a dimensional representation of the object to the computing device. A portion of the first collection of 3D coordinates is registered to the camera image. On the integrated display the registered collection of 3D coordinates and the camera image are shown.
    Type: Grant
    Filed: March 21, 2017
    Date of Patent: November 21, 2017
    Assignee: FARO TECHNOLOGIES, INC.
    Inventor: Robert M. Persely
  • Patent number: 9824450
    Abstract: A method generates a three-dimensional map of a region from successive images of the region captured from different camera poses. The method detects feature points within the captured images and designates a subset of the images as a set of keyframes, each having camera pose data and respective sets of measurement data representing image positions of landmark points detected as feature points in that image. The method also includes performing bundle adjustment to generate bundle-adjusted landmark points by iteratively refining the three-dimensional spatial positions of the landmarks and the camera pose data associated with at least a subset of the keyframes. For a feature point not corresponding to a bundle-adjusted landmark point, detected in an intervening image which is not a keyframe and present in another intervening image which is not a keyframe, the method generates a non-bundle-adjusted point corresponding to that feature point and derives a camera pose.
    Type: Grant
    Filed: July 25, 2013
    Date of Patent: November 21, 2017
    Assignee: Sony Interactive Entertainment Europe Limited
    Inventor: Antonio Martini
  • Patent number: 9826216
    Abstract: A pattern projection system includes a coherent light source, a repositionable DOE disposed to receive coherent light from said coherent light source and disposed to output at least one pattern of projectable light onto a scene to be imaged by an (x,y) two-dimensional optical acquisition system. Coherent light speckle artifacts in the projected pattern are reduced by rapidly controllably repositioning the DOE or the entire pattern projection system. Different projectable patterns are selected from a set of M patterns that are related to each other by a translation and/or rotation operation in two-dimensional cosine space. A resultant (x,y,z) depth map has improved quality and robustness due to projection of the selected patterns. Three-dimensional (x,y,z) depth data obtained from two-dimensional imaged data including despeckling is higher quality data than if projected patterns without despeckling were used.
    Type: Grant
    Filed: September 23, 2016
    Date of Patent: November 21, 2017
    Assignee: AQUIFI, INC.
    Inventors: Aryan Hazeghi, Carlo Dal Mutto, Giulio Marin, Francesco Peruch, Michele Stoppa, Abbas Rafii
  • Patent number: 9813621
    Abstract: Systems and methods for capturing omnistereo content for a mobile device may include receiving an indication to capture a plurality of images of a scene, capturing the plurality of images using a camera associated with a mobile device and displaying on a screen of the mobile device and during capture, a representation of the plurality of images and presenting a composite image that includes a target capture path and an indicator that provides alignment information corresponding to a source capture path associated with the mobile device during capture of the plurality of images. The system may detect that a portion of the source capture path does not match a target capture path. The system can provide an updated indicator in the screen that may include a prompt to a user of the mobile device to adjust the mobile device to align the source capture path with the target capture path.
    Type: Grant
    Filed: May 26, 2015
    Date of Patent: November 7, 2017
    Assignee: Google LLC
    Inventors: Robert Anderson, Steven Maxwell Seitz, Carlos Hernandez Esteban
  • Patent number: 9644973
    Abstract: Disclosed here are methods and systems that relate to determining a location of a mobile device in an indoor environment. The indoor environment is installed with one or more light sources at disparate locations. Each light source has a unique light attribute different from that of other light sources. The mobile device is equipped with a photosensor to detect light emissions by the light sources. The methods and systems described herein may determine the location of the mobile device based on the light detection by the photosensor.
    Type: Grant
    Filed: March 26, 2014
    Date of Patent: May 9, 2017
    Assignee: Google Inc.
    Inventor: Martyn James
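    The localization idea in the entry above reduces to matching a detected light attribute against a known installation map. The sketch below assumes each light source is distinguished by a unique modulation frequency; the map, labels, and tolerance are invented for illustration and are not from the patent.

    ```python
    # Hypothetical sketch: locate a mobile device by matching the modulation
    # frequency its photosensor detects against a known map of installed lights.
    # All frequencies, labels, and positions here are made up.

    LIGHT_MAP = {
        1000.0: ("lobby", (0.0, 0.0)),
        1250.0: ("aisle 3", (12.5, 4.0)),
        1500.0: ("checkout", (25.0, 8.0)),
    }

    def locate(detected_freq_hz, tolerance_hz=50.0):
        """Return the (label, position) of the closest known light source,
        or None when no source matches within the tolerance."""
        best = min(LIGHT_MAP, key=lambda f: abs(f - detected_freq_hz))
        if abs(best - detected_freq_hz) > tolerance_hz:
            return None
        return LIGHT_MAP[best]

    print(locate(1240.0))  # matches the 1250 Hz source within tolerance
    print(locate(3000.0))  # no source close enough -> None
    ```

    A unique attribute could equally be a color signature or a coded blink pattern; the lookup-with-tolerance structure stays the same.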
  • Patent number: 9648302
    Abstract: An RGB-D imaging system having an ultrasonic array for generating images that include depth data, and methods for manufacturing and using same. The RGB-D imaging system includes an ultrasonic sensor array positioned on a housing that includes an ultrasonic emitter and a plurality of ultrasonic sensors. The RGB-D imaging system also includes an RGB camera assembly positioned on the housing in a parallel plane with, and operably connected to, the ultrasonic sensor. The RGB-D imaging system thereby provides improved imaging in a wide variety of lighting conditions compared to conventional systems.
    Type: Grant
    Filed: December 17, 2015
    Date of Patent: May 9, 2017
    Assignee: SZ DJI TECHNOLOGY CO., LTD.
    Inventors: Jiebin Xie, Wei Ren, Guyue Zhou
  • Patent number: 9638989
    Abstract: Example embodiments disclosed herein relate to determining a motion based on projected image information. Image information is projected onto an external surface from a device. Sensor information about the external surface and/or projection is received. Motion of the device is determined based on the sensor information.
    Type: Grant
    Filed: April 17, 2015
    Date of Patent: May 2, 2017
    Assignee: QUALCOMM INCORPORATED
    Inventors: Stefan J. Marti, Eric Liu, Seung Wook Kim
  • Patent number: 9633435
    Abstract: A computer-implemented method for automatically calibrating an RGB-D sensor and an imaging device using a transformation matrix includes using a medical image scanner to acquire a first dataset representative of an apparatus attached to a downward facing surface of a patient table, wherein corners of the apparatus are located at a plurality of corner locations. The plurality of corner locations are identified based on the first dataset and the RGB-D sensor is used to acquire a second dataset representative of a plurality of calibration markers displayed on an upward facing surface of the patient table at the corner locations. A plurality of calibration marker locations are identified based on the second dataset and the transformation matrix is generated by aligning the first dataset and the second dataset using the plurality of corner locations and the plurality of calibration marker locations.
    Type: Grant
    Filed: September 25, 2015
    Date of Patent: April 25, 2017
    Inventors: Kai Ma, Yao-jen Chang, Vivek Kumar Singh, Thomas O'Donnell, Michael Wels, Tobias Betz, Andreas Wimmer, Terrence Chen
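    The calibration entry above generates a transformation matrix by aligning two sets of corresponding point locations. A standard way to do this (not necessarily the patent's specific procedure) is the Kabsch/Procrustes rigid alignment; the sketch below shows that general step with illustrative data.

    ```python
    import numpy as np

    # Illustrative sketch of rigid point-set alignment (Kabsch algorithm):
    # estimate a 4x4 transformation mapping one set of 3D corner locations
    # onto corresponding calibration-marker locations.

    def rigid_transform(src, dst):
        """Return a 4x4 matrix T such that dst ~= (T @ [src, 1])[:3] (Nx3 arrays)."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)      # cross-covariance of centered sets
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = dst_c - R @ src_c
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = R, t
        return T

    # Example: recover a known 90-degree rotation about z plus a translation.
    src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
    R_true = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
    t_true = np.array([1.0, 2, 3])
    dst = src @ R_true.T + t_true
    T = rigid_transform(src, dst)
    ```

    With at least three non-collinear correspondences this recovers the exact rigid transform; with noisy markers it gives the least-squares best fit.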
  • Patent number: 9633481
    Abstract: A method of comparing measured three-dimensional (3D) measurement data to an object is provided. The method includes the steps of providing a three dimensional measurement device configured to measure 3D coordinates of points on the object and a computing device having a camera and display. During an inspection process, the method measures the object with the 3D measurement device which provides a first collection of 3D coordinates. The first collection of 3D coordinates is stored on the computer network and is associated with an AR marker. During an observation process the method reads the AR marker and transmits from the computer network the first collection of 3D coordinates and a dimensional representation of the object to the computing device. A portion of the first collection of 3D coordinates is registered to the camera image. On the integrated display the registered collection of 3D coordinates and the camera image are shown.
    Type: Grant
    Filed: March 25, 2016
    Date of Patent: April 25, 2017
    Assignee: FARO TECHNOLOGIES, INC.
    Inventor: Robert M. Persely
  • Patent number: 9615075
    Abstract: The invention relates to a method and a device for improving the depth impression of stereoscopic images and image sequences. In autostereoscopic multi-viewer display devices, generally a plurality of intermediate perspectives are generated, which lead to a reduced stereo base upon perception by the viewers. The stereo base widening presented in this application leads to a significant improvement and thus to a more realistic depth impression. It can either be effected during recording in the camera or be integrated into a display device. The improvement in the depth impression is achieved by generating synthetic perspectives situated, in the viewing direction of the camera lenses, to the left and right of the leftmost and rightmost recorded camera perspectives, on the extension of the connecting line formed by those extreme perspectives.
    Type: Grant
    Filed: January 17, 2012
    Date of Patent: April 4, 2017
    Assignee: STERRIX TECHNOLOGIES UG
    Inventor: Rolf-Dieter Naske
  • Patent number: 9591284
    Abstract: A method of indicating a suitable pose for a camera for obtaining a stereoscopic image, with the camera comprising an imaging sensor. The method comprises obtaining and storing a first image of a scene using the imaging sensor when the camera is in a first pose; moving the camera to a second pose; and obtaining a second image of the scene when the camera is in the second pose. One or more disparity vectors are determined, each disparity vector being determined between a feature identified within the first image and a corresponding feature identified in the second image. On the basis of the one or more disparity vectors, a determination is made of whether the second image of the scene is suitable for use, together with the first image, as a stereoscopic image pair.
    Type: Grant
    Filed: December 23, 2013
    Date of Patent: March 7, 2017
    Assignee: OPTIS CIRCUIT TECHNOLOGY, LLC
    Inventor: Stéphane Valente
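    The suitability test in the entry above comes down to examining disparity vectors between matched features in the two candidate images. A minimal sketch, assuming the feature matching is done elsewhere and using illustrative thresholds (not the patent's criteria):

    ```python
    # Hypothetical sketch: decide whether two images form a usable stereoscopic
    # pair by checking the disparity vectors of matched feature points.
    # min_dx/max_dx bound the useful horizontal baseline; max_dy bounds
    # vertical misalignment. All thresholds are illustrative.

    def is_suitable_stereo_pair(matches, min_dx=5.0, max_dx=60.0, max_dy=2.0):
        """matches: list of ((x1, y1), (x2, y2)) feature correspondences
        between the first and second image, in pixel coordinates."""
        if not matches:
            return False
        dxs = [x2 - x1 for (x1, _), (x2, _) in matches]
        dys = [y2 - y1 for (_, y1), (_, y2) in matches]
        mean_dx = sum(dxs) / len(dxs)
        mean_abs_dy = sum(abs(d) for d in dys) / len(dys)
        # Suitable: enough (but not too much) horizontal disparity,
        # and negligible vertical disparity.
        return min_dx <= abs(mean_dx) <= max_dx and mean_abs_dy <= max_dy

    good = [((10.0, 5.0), (30.0, 5.0)), ((40.0, 12.0), (60.0, 12.5))]
    too_close = [((10.0, 5.0), (11.0, 5.0))]
    ```

    A fuller implementation would also reject pairs with large rotation between poses, which shows up as spatially varying disparity directions.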
  • Patent number: 9380292
    Abstract: Disclosed herein are methods, systems, and computer-readable storage media for generating three-dimensional (3D) images of a scene. According to an aspect, a method includes capturing a real-time image and a first still image of a scene. Further, the method includes displaying the real-time image of the scene on a display. The method also includes determining one or more properties of the captured images. The method also includes calculating an offset in a real-time display of the scene to indicate a target camera positional offset with respect to the first still image. Further, the method includes determining that a capture device is in a position of the target camera positional offset. The method also includes capturing a second still image. Further, the method includes correcting the captured first and second still images. The method also includes generating the three-dimensional image based on the corrected first and second still images.
    Type: Grant
    Filed: May 25, 2011
    Date of Patent: June 28, 2016
    Assignee: 3DMedia Corporation
    Inventors: Michael McNamer, Marshall Robers, Tassos Markas, Jason Paul Hurst, Jon Boyette
  • Patent number: 9191648
    Abstract: In a system or method where three-dimensional data is acquired as a sequence of frames of data along a camera path, disparate sequences are related to one another through a number of geometric stitches obtained by direct registration of three-dimensional data between frames in the disparate sequences. These geometric stitches are then used in addition to or in place of other camera path relationships to form a single virtual stitch graph for the combined model, upon which an integrated global path optimization can be performed.
    Type: Grant
    Filed: February 17, 2012
    Date of Patent: November 17, 2015
    Assignee: 3M INNOVATIVE PROPERTIES COMPANY
    Inventors: Ilya A. Kriveshko, Benjamin Frantzdale