3-D or Stereo Imaging Analysis Patents (Class 382/154)
  • Patent number: 10260862
    Abstract: A method and system estimate a three-dimensional (3D) pose of a sensor by first acquiring scene data of a 3D scene with the sensor. Two-dimensional (2D) lines are detected in the scene data and matched to 3D lines of a 3D model of the scene to produce matching lines. Then, the 3D pose of the sensor is estimated using the matching lines. (See the code sketch following this entry.)
    Type: Grant
    Filed: November 2, 2015
    Date of Patent: April 16, 2019
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Srikumar Ramalingam, Jay Thornton, Wenzhen Yuan
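    As a rough illustration of the line-based pose estimation the abstract above describes, the sketch below projects 3D model line endpoints under a candidate pose and scores how far the projections fall from the matched 2D image lines; an estimator would search for the pose minimizing this score. The intrinsics `K`, the residual choice, and all names are illustrative assumptions, not the patented method.

```python
import numpy as np

def project_point(K, R, t, P):
    """Project a 3D point P to pixel coordinates under pose (R, t) and intrinsics K."""
    x = K @ (R @ P + t)
    return x[:2] / x[2]

def line_residual(K, R, t, P1, P2, q1, q2):
    """Distance of the projected 3D segment (P1, P2) from the 2D line through q1, q2."""
    d = q2 - q1
    n = np.array([-d[1], d[0]])
    n = n / np.linalg.norm(n)          # unit normal of the detected 2D line
    c = -n @ q1                        # line in normal form: n . p + c = 0
    return abs(n @ project_point(K, R, t, P1) + c) + abs(n @ project_point(K, R, t, P2) + c)

def pose_score(K, R, t, matches):
    """Sum of residuals over matched (3D segment, 2D segment) pairs; lower is better."""
    return sum(line_residual(K, R, t, P1, P2, q1, q2) for P1, P2, q1, q2 in matches)

if __name__ == "__main__":
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # assumed pinhole intrinsics
    R, t = np.eye(3), np.zeros(3)
    matches = [(np.array([0.0, 0.0, 5.0]), np.array([1.0, 0.0, 5.0]),
                np.array([320.0, 240.0]), np.array([480.0, 240.0]))]
    print(pose_score(K, R, t, matches))   # ~0 for a consistent pose
```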
  • Patent number: 10264238
    Abstract: According to an aspect of an embodiment, a method may include obtaining a first digital image via a mapping application. The method may also include obtaining a second digital image via the mapping application. The method may additionally include determining a displacement factor between the first digital image and the second digital image. Further, the method may include generating a third digital image by cropping the second digital image based on the displacement factor, adjusting an aspect ratio of the third digital image, and resizing the third digital image. Moreover, the method may include generating a stereoscopic map image of the setting that includes a first-eye image and a second-eye image. The first-eye image may be based on the first digital image and the second-eye image may be based on the third digital image. (See the code sketch following this entry.)
    Type: Grant
    Filed: November 11, 2016
    Date of Patent: April 16, 2019
    Assignee: BITANIMATE, INC.
    Inventors: Behrooz Maleki, Sarvenaz Sarkhosh
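    A minimal sketch of the crop/aspect-ratio/resize step described above, assuming the two map renderings arrive as NumPy arrays and that OpenCV is available for resizing; the horizontal-crop interpretation of the displacement factor is an assumption made for illustration.

```python
import numpy as np
import cv2  # assumed available; used only for resizing

def make_stereo_pair(first_img, second_img, displacement_px):
    """Build a (left-eye, right-eye) pair from two map renderings of the same area.

    first_img / second_img: H x W x 3 uint8 arrays (e.g. two map tiles offset slightly).
    displacement_px: assumed horizontal displacement factor between the two renderings.
    """
    h, w = second_img.shape[:2]
    # Crop the second image by the displacement factor (here: drop columns on the left).
    cropped = second_img[:, displacement_px:]
    # Restore the original aspect ratio and size so both eye images match.
    third_img = cv2.resize(cropped, (w, h), interpolation=cv2.INTER_LINEAR)
    return first_img, third_img   # left eye, right eye

if __name__ == "__main__":
    a = np.zeros((240, 320, 3), dtype=np.uint8)
    b = np.zeros((240, 320, 3), dtype=np.uint8)
    left, right = make_stereo_pair(a, b, displacement_px=12)
    print(left.shape, right.shape)
```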
  • Patent number: 10262692
    Abstract: A method is provided for a computerized server to autonomously produce a TV show of a sports game in a scene. The method includes receiving from several video cameras a stream of video images of the scene for capturing a panoramic view of the scene, analyzing the stream of video images for allowing definition of several frame streams, determining location data of the frame streams accordingly, and rendering an active frame stream with images imaging a respective portion of the panoramic view of the scene. The method also includes transmitting for broadcasting a stream of image frames imaging the respective portion of the panoramic view. The step of analyzing the stream of video images includes identifying a playing object, tracking the playing object, and identifying players. The method also includes calibrating the cameras using points in the playing field.
    Type: Grant
    Filed: March 31, 2015
    Date of Patent: April 16, 2019
    Assignee: PIXELLOT LTD.
    Inventors: Miki Tamir, Gal Oz, Tal Ridnik
  • Patent number: 10262428
    Abstract: A scanner system is configured for acquiring three dimensional image information of an object. The scanner includes a projector, a camera, a graphics processing device, and a processor. The projector projects one of several pre-defined patterns upon the object. The camera captures an image from the object, which is received by the processor. The processor approximates mutual information from the object and the pattern using the graphics processing device, and selects a second pattern for projecting on the object. (See the code sketch following this entry.)
    Type: Grant
    Filed: April 7, 2017
    Date of Patent: April 16, 2019
    Assignee: Massachusetts Institute of Technology
    Inventors: Guy Rosman, Daniela Rus, John W. Fisher, III
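    The mutual-information step can be pictured with a crude stand-in: estimate mutual information between the captured image and each candidate pattern from a joint histogram, then pick the candidate expected to add the most new information. The histogram estimator and the argmin selection heuristic are illustrative assumptions, not the patent's GPU-based approximation.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based MI estimate between two equally sized grayscale images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def select_next_pattern(captured, candidate_patterns):
    """Naive heuristic: choose the pattern least redundant with what was just captured."""
    scores = [mutual_information(captured, p) for p in candidate_patterns]
    return int(np.argmin(scores))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    captured = rng.integers(0, 256, (120, 160)).astype(np.float64)
    patterns = [rng.integers(0, 256, (120, 160)).astype(np.float64) for _ in range(4)]
    print(select_next_pattern(captured, patterns))
```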
  • Patent number: 10262238
    Abstract: A camera system captures images from a set of cameras to generate binocular panoramic views of an environment. The cameras are oriented in the camera system to maximize the minimum number of cameras viewing a set of randomized test points. To calibrate the system, matching features between images are identified and used to estimate three-dimensional points external to the camera system. Calibration parameters are modified to improve the three-dimensional point estimates. When images are captured, a pipeline generates a depth map for each camera using reprojected views from adjacent cameras and an image pyramid that includes individual pixel depth refinement and filtering between levels of the pyramid. The images may be used to generate views of the environment from different perspectives (relative to the image capture location) by generating depth surfaces corresponding to the depth maps and blending the depth surfaces.
    Type: Grant
    Filed: April 18, 2017
    Date of Patent: April 16, 2019
    Assignee: Facebook, Inc.
    Inventors: Forrest Samuel Briggs, Michael John Toksvig, Brian Keith Cabral
  • Patent number: 10255686
    Abstract: During a training phase, a machine accesses reference images with corresponding depth information. The machine calculates visual descriptors and corresponding depth descriptors from this information. The machine then generates a mapping that correlates these visual descriptors with their corresponding depth descriptors. After the training phase, the machine may perform depth estimation based on a single query image devoid of depth information. The machine may calculate one or more visual descriptors from the single query image and obtain a corresponding depth descriptor for each visual descriptor from the generated mapping. Based on obtained depth descriptors, the machine creates depth information that corresponds to the submitted single query image. (See the code sketch following this entry.)
    Type: Grant
    Filed: January 18, 2017
    Date of Patent: April 9, 2019
    Assignee: eBay Inc.
    Inventors: Anurag Bhardwaj, Mohammad Haris Baig, Robinson Piramuthu, Vignesh Jagadeesh, Wei Di
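    A toy version of the mapping stage is a lookup table from visual descriptors to depth descriptors answered by nearest-neighbour search; the brute-force search and the class name below are placeholders, not the patent's learned mapping.

```python
import numpy as np

class VisualToDepthMap:
    """Toy lookup: store (visual descriptor, depth descriptor) pairs from training images,
    then answer a query visual descriptor with the depth descriptor of its nearest neighbour."""

    def __init__(self, visual_descs, depth_descs):
        self.visual = np.asarray(visual_descs, dtype=np.float64)   # (n, dv)
        self.depth = np.asarray(depth_descs, dtype=np.float64)     # (n, dd)

    def query(self, visual_desc):
        dists = np.linalg.norm(self.visual - visual_desc, axis=1)
        return self.depth[int(np.argmin(dists))]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    mapping = VisualToDepthMap(rng.normal(size=(500, 64)), rng.normal(size=(500, 16)))
    print(mapping.query(rng.normal(size=64)).shape)   # -> (16,)
```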
  • Patent number: 10254111
    Abstract: The invention relates to a device for optical 3D measurement of an object using an optical depth-scanning measurement method, comprising at least two light sources, at least two optical means for producing textured patterns, and at least one recording means. A first pattern is produced with the aid of a first optical means and projected onto the object to be recorded as a first projection beam. A second pattern is produced with the aid of a second optical means and projected onto the object to be recorded as a second projection beam. The imaging optics are controlled and adjusted in such a way that a sharp focal plane is incrementally varied along an optical axis of the device.
    Type: Grant
    Filed: May 23, 2016
    Date of Patent: April 9, 2019
    Assignee: DENTSPLY SIRONA Inc.
    Inventors: Frank Thiel, Gerrit Kocherscheidt
  • Patent number: 10250802
    Abstract: An apparatus for processing a wide viewing angle image may include: a correction parameter generating unit for analyzing an image input from a camera to generate a correction parameter, a projection geometry generating unit for generating a projection geometry to output a wide viewing angle image by using the correction parameter, and a wide viewing angle image packaging unit for encoding the input image, the correction parameter and the projection geometry to generate a wide viewing angle image package. A method for processing a wide viewing angle image may be performed using the apparatus.
    Type: Grant
    Filed: November 12, 2014
    Date of Patent: April 2, 2019
    Assignee: FXGear Inc.
    Inventors: Kwang Jin Choi, Yeong Jun Park, Kyung Gun Na
  • Patent number: 10249052
    Abstract: Stereo correspondence model fitting techniques are described. In one or more implementations, a model may be fit to a region in at least one of a plurality of stereoscopic images of an image scene. The model may then be used as part of a stereo correspondence calculation, which may include computing disparities for the region based at least in part on correspondence to the model. (See the code sketch following this entry.)
    Type: Grant
    Filed: December 19, 2012
    Date of Patent: April 2, 2019
    Assignee: Adobe Systems Incorporated
    Inventors: Scott D. Cohen, Brian L. Price, Chenxi Zhang
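    One concrete instance of 'fitting a model to a region' is a least-squares plane d(x, y) = a*x + b*y + c over the region's sparse disparity samples, which could then regularize the per-pixel correspondence; the plane model and all names are assumptions made for illustration.

```python
import numpy as np

def fit_disparity_plane(xs, ys, disparities):
    """Least-squares plane d = a*x + b*y + c over sparse disparity samples in a region."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    coeffs, *_ = np.linalg.lstsq(A, disparities, rcond=None)
    return coeffs  # (a, b, c)

def model_disparity(coeffs, x, y):
    a, b, c = coeffs
    return a * x + b * y + c

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    xs = rng.uniform(0, 100, 200)
    ys = rng.uniform(0, 100, 200)
    d = 0.05 * xs - 0.02 * ys + 12.0 + rng.normal(0, 0.1, 200)
    coeffs = fit_disparity_plane(xs, ys, d)
    print(coeffs, model_disparity(coeffs, 50, 50))
```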
  • Patent number: 10242438
    Abstract: An apparatus comprises a unit configured to obtain an image of an assembled object that is constituted by first and second objects that have been assembled; a unit configured to obtain a three-dimensional shape model of the assembled object that has at least one area to which an attribute that corresponds to the first object or the second object is added; a unit configured to obtain a position and orientation of the assembled object based on the image; a unit configured to obtain, from the three-dimensional shape model of the position and orientation, first and second evaluation values that are for evaluating a state of assembly in areas that correspond to the first and second objects; and a unit configured to determine whether or not the assembly was successful based on the first and second evaluation values.
    Type: Grant
    Filed: February 10, 2017
    Date of Patent: March 26, 2019
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Daisuke Watanabe
  • Patent number: 10244353
    Abstract: An approach is provided for determining location offset information. A correction manager determines to present, at a device, a location-based display including one or more representations of one or more location-based features. Next, the correction manager receives an input for specifying offset information for at least one of the one or more representations with respect to the location-based display. Then, the correction manager determines to present the one or more representations in the location-based display based, at least in part, on the offset information.
    Type: Grant
    Filed: April 24, 2017
    Date of Patent: March 26, 2019
    Assignee: Nokia Technologies Oy
    Inventors: Ciprian Cudalbu, Mari Joller, James Mulholland
  • Patent number: 10242294
    Abstract: An example apparatus for classifying target objects using three-dimensional geometric filtering includes a patch receiver to receive patches with objects to be classified. The apparatus also includes a geometric filter to filter out patches including objects with sizes outside a target range using three dimensional geometry to generate filtered patches. The apparatus further includes a background remover to remove background pixels from the filtered patches to generate preprocessed patches. The apparatus includes a classification score calculator to calculate a classification score for each of the preprocessed patches.
    Type: Grant
    Filed: May 1, 2017
    Date of Patent: March 26, 2019
    Assignee: Intel Corporation
    Inventors: Avigdor Eldar, Ovadya Menadeva, Kfir Viente
  • Patent number: 10242458
    Abstract: Systems and methods configured to generate virtual gimbal information for range images produced from 3D depth scans are described. In operation according to embodiments, known and advantageous spatial geometries of features of a scanned volume are exploited to generate virtual gimbal information for a pose. The virtual gimbal information of embodiments may be used to align a range image of the pose with one or more other range images for the scanned volume, such as for combining the range images for use in indoor mapping, gesture recognition, object scanning, etc. Implementations of range image registration using virtual gimbal information provide a realtime one shot direct pose estimator by detecting and estimating the normal vectors for surfaces of features between successive scans which effectively imparts a coordinate system for each scan with an orthogonal set of gimbal axes and defines the relative camera attitude.
    Type: Grant
    Filed: April 21, 2017
    Date of Patent: March 26, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: James Nash, Kalin Atanassov, Albrecht Johannes Lindner
  • Patent number: 10235605
    Abstract: Image labeling is described, for example, to recognize body organs in a medical image, to label body parts in a depth image of a game player, to label objects in a video of a scene. In various embodiments an automated classifier uses geodesic features of an image, and optionally other types of features, to semantically segment an image. For example, the geodesic features relate to a distance between image elements, the distance taking into account information about image content between the image elements. In some examples the automated classifier is an entangled random decision forest in which data accumulated at earlier tree levels is used to make decisions at later tree levels. In some examples the automated classifier has auto-context by comprising two or more random decision forests. In various examples parallel processing and look up procedures are used.
    Type: Grant
    Filed: April 10, 2013
    Date of Patent: March 19, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Antonio Criminisi, Peter Kontschieder, Pushmeet Kohli, Jamie Daniel Joseph Shotton
  • Patent number: 10234856
    Abstract: A system for communicating between a machine and a remote system generates machine data and image data on-board the machine. The data is segmented into machine data subsets and image data subsets. At least some of the machine data subsets and only some of the image data subsets are transmitted off-board the machine to an off-board controller. The off-board controller updates an electronic map based upon the image data subsets, and generates a machine movement plan based upon the updated electronic map.
    Type: Grant
    Filed: May 12, 2016
    Date of Patent: March 19, 2019
    Assignee: Caterpillar Inc.
    Inventors: Oluseun Aremu, Michael Schilling
  • Patent number: 10233615
    Abstract: A position measurement system includes: at least a pair of imaging devices mounted on a work machine; a calculation unit provided at the work machine and configured to perform stereo measurement by using information of an image of an object captured by at least the pair of imaging devices; and a determination unit configured to determine a condition related to image capturing by the imaging devices based on a performance result of the stereo measurement.
    Type: Grant
    Filed: October 15, 2015
    Date of Patent: March 19, 2019
    Assignee: Komatsu Ltd.
    Inventors: Hiroyoshi Yamaguchi, Taiki Sugawara, Shun Kawamoto
  • Patent number: 10235747
    Abstract: An accurate camera pose is determined by pairing a first camera with a second camera in proximity to one another, and by developing a known spatial relationship between them. An image from the first camera and an image from the second camera are analyzed to determine corresponding features in both images, and a relative homography is calculated from these corresponding features. A relative parameter, such as a focal length or an extrinsic parameter, is used to calculate a first camera's parameter based on a second camera's parameter and the relative homography.
    Type: Grant
    Filed: March 9, 2012
    Date of Patent: March 19, 2019
    Assignee: Disney Enterprises, Inc.
    Inventors: Howard J. Kennedy, Smadar Gefen
  • Patent number: 10235752
    Abstract: Slice selection for interpolation-based 3D manual segmentation is provided such that propagation error is minimized during 3D reconstruction. In various embodiments, a plurality of 2D images is read. Each of the plurality of 2D images represents a slice of a 3D volume. Deformable registration is performed between each adjacent pair of the plurality of 2D images. From the deformable registration, propagation error is estimated between each pair of the plurality of 2D images. The plurality of 2D images is clustered into a predetermined number of clusters. A slice is selected for annotation from each of the predetermined number of clusters. (See the code sketch following this entry.)
    Type: Grant
    Filed: November 21, 2016
    Date of Patent: March 19, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Hongzhi Wang
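    Very roughly, the selection step can be approximated by scoring how much adjacent slices disagree (standing in for the registration-derived propagation error), clustering slices on the accumulated score, and annotating one slice per cluster. The mean-absolute-difference proxy and k-means (scikit-learn assumed available) are illustrative substitutes for the patent's deformable registration.

```python
import numpy as np
from sklearn.cluster import KMeans  # assumed available

def select_slices(volume, n_clusters=4):
    """volume: (n_slices, H, W) float array. Returns slice indices to annotate, one per cluster."""
    n = volume.shape[0]
    # Proxy for propagation error between adjacent slices.
    pair_err = np.array([np.mean(np.abs(volume[i + 1] - volume[i])) for i in range(n - 1)])
    cum_err = np.concatenate([[0.0], np.cumsum(pair_err)])          # (n,)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(cum_err.reshape(-1, 1))
    picks = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        center = km.cluster_centers_[c, 0]
        picks.append(int(members[np.argmin(np.abs(cum_err[members] - center))]))
    return sorted(picks)

if __name__ == "__main__":
    vol = np.random.default_rng(3).normal(size=(40, 64, 64)).astype(np.float32)
    print(select_slices(vol))
```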
  • Patent number: 10237537
    Abstract: Methods and systems are disclosed for creating a virtual reality (VR) movie having at least one live action element. A live action element is an element that is not computer-generated, but is instead filmed (e.g. the filmed performance of a real human actor). The VR movie may be interactive in that small movements of the viewer's head, when viewing the live action elements, may result in different visual points-of-view that match the point-of-view changes expected by the viewer. In one embodiment, at least one live action element is filmed using at least two cameras to obtain a stereoscopic video recording. A stereoscopic digital still image of the background is also obtained separate from the live action elements. The stereoscopic video recording and the stereoscopic digital still image of the background are stored in memory, as separate files, for later compositing in a home device.
    Type: Grant
    Filed: January 17, 2017
    Date of Patent: March 19, 2019
    Assignee: ALEXANDER SEXTUS LIMITED
    Inventor: Paul Donovan
  • Patent number: 10229349
    Abstract: A camera system captures images from a set of cameras to generate binocular panoramic views of an environment. The cameras are oriented in the camera system to maximize the minimum number of cameras viewing a set of randomized test points. To calibrate the system, matching features between images are identified and used to estimate three-dimensional points external to the camera system. Calibration parameters are modified to improve the three-dimensional point estimates. When images are captured, a pipeline generates a depth map for each camera using reprojected views from adjacent cameras and an image pyramid that includes individual pixel depth refinement and filtering between levels of the pyramid. The images may be used to generate views of the environment from different perspectives (relative to the image capture location) by generating depth surfaces corresponding to the depth maps and blending the depth surfaces.
    Type: Grant
    Filed: April 18, 2017
    Date of Patent: March 12, 2019
    Assignee: Facebook, Inc.
    Inventors: Forrest Samuel Briggs, Michael John Toksvig, Brian Keith Cabral
  • Patent number: 10230932
    Abstract: In one embodiment of the present invention, a hybrid subsystem orchestrates animated transitions between stereoscopic imaging and non-stereoscopic imaging. In operation, the hybrid subsystem receives frames that represent a three-dimensional object over time. The hybrid subsystem renders the first frame based on a left eye position and then re-renders the first frame based on a right eye position. The left eye position and the right eye position are separated by a predetermined distance that is optimized for stereoscopic viewing. As part of rendering and re-rendering subsequent frames, the hybrid subsystem gradually decreases the distance between the left eye position and the right eye position. Upon receiving a final frame in the transition, the hybrid subsystem renders only once, to a single eye position. Advantageously, because the rendered three-dimensional object image gradually loses depth throughout the animated transition, the hybrid subsystem minimizes disruptions to the viewing experience. (See the code sketch following this entry.)
    Type: Grant
    Filed: February 12, 2015
    Date of Patent: March 12, 2019
    Assignee: AUTODESK, INC.
    Inventors: Tovi Grossman, George Fitzmaurice, Natalia Bogdan
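    The transition reduces to interpolating the eye separation toward zero across the transition frames; the smoothstep easing and the frame count below are arbitrary illustration choices, not the patent's schedule.

```python
import numpy as np

def eye_positions(center, right_dir, ipd, frame, n_frames):
    """Left/right eye positions for a given transition frame.

    ipd: full interpupillary separation used for stereoscopic frames.
    The separation eases from ipd down to 0 across n_frames (smoothstep easing).
    """
    s = frame / max(n_frames - 1, 1)
    ease = 1.0 - (3 * s**2 - 2 * s**3)          # 1 -> 0, smooth at both ends
    half = 0.5 * ipd * ease
    right_dir = np.asarray(right_dir, dtype=float)
    right_dir = right_dir / np.linalg.norm(right_dir)
    center = np.asarray(center, dtype=float)
    return center - half * right_dir, center + half * right_dir

if __name__ == "__main__":
    for f in (0, 5, 11):
        left, right = eye_positions([0, 0, 0], [1, 0, 0], ipd=0.064, frame=f, n_frames=12)
        print(f, np.linalg.norm(right - left))   # separation shrinks to 0 by the last frame
```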
  • Patent number: 10225265
    Abstract: Systems and methods for end to end encryption are provided. In example embodiments, a computer accesses an image including a geometric shape. The computer determines that the accessed image includes a candidate shape inside the geometric shape. The computer determines, using the candidate shape, an orientation of the geometric shape. The computer determines a public key of a communication partner device by decoding, based on the determined orientation, data encoded within the geometric shape. The computer receives a message. The computer verifies, based on the public key of the communication partner device, whether the message is from the communication partner device. The computer provides an output including the message and an indication of the communication partner device if the message is verified to be from the communication partner device. The computer provides an output indicating an error if the message is not verified to be from the communication partner device.
    Type: Grant
    Filed: April 15, 2016
    Date of Patent: March 5, 2019
    Assignee: Snap Inc.
    Inventor: Subhash Sankuratripati
  • Patent number: 10223805
    Abstract: A coded tracking system includes an imaging device and a target object that includes a plurality of locators emitting light according to a first pattern. An image of the target object captured by the imaging device includes light received by the imaging device from a subset of the plurality of locators. A pattern controller is configured to determine a resolution value for an adjacent pair of light sources in the captured image. The resolution value is indicative of the pattern controller being able to resolve the adjacent pair of light sources as two separate sources. The pattern controller determines a second pattern for the locators based on the resolution value. The second pattern improves a likelihood that the pattern controller can resolve between individual light sources emitting light in the second pattern. The pattern controller instructs the target object for the locators to emit light according to the second pattern.
    Type: Grant
    Filed: August 7, 2017
    Date of Patent: March 5, 2019
    Assignee: Facebook Technologies, LLC
    Inventor: Nicholas Daniel Trail
  • Patent number: 10217277
    Abstract: A method, system, and apparatus provide the ability to globally register point cloud scans. A first and a second three-dimensional (3D) point cloud are acquired. The point clouds have a subset of points in common and there is no prior knowledge on an alignment between the point clouds. Particular points that are likely to be identified in the other point cloud are detected. Information about a normal of each of the detected particular points is retrieved. A descriptor (that only describes 3D information) is built on each of the detected particular points. Matching pairs of descriptors are determined. Rigid transformation hypotheses are estimated (based on the matching pairs) and represent a transformation. The hypotheses are accumulated into a fitted space, selected based on density, and validated based on a scoring. One of the hypotheses is then selected as a registration. (See the code sketch following this entry.)
    Type: Grant
    Filed: December 2, 2016
    Date of Patent: February 26, 2019
    Assignee: AUTODESK, INC.
    Inventors: Luc Franck Robert, Nicolas Gros, Yann Noutary, Lucas Malleus, Frederic Precioso, Diane Lingrand
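    The 'estimate rigid hypotheses from matched pairs, then score them' loop can be sketched as a small RANSAC around a Kabsch/SVD rigid fit; the thresholds, iteration count, and inlier scoring are placeholder choices rather than the patent's hypothesis accumulation in a fitted space.

```python
import numpy as np

def rigid_fit(A, B):
    """Least-squares rigid transform (R, t) mapping point set A onto B (Kabsch/SVD)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cb - R @ ca

def ransac_registration(A, B, iters=500, inlier_thresh=0.05, seed=0):
    """A, B: (n, 3) arrays of matched points (A[i] <-> B[i]). Returns the best (R, t) hypothesis."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        idx = rng.choice(len(A), size=3, replace=False)
        R, t = rigid_fit(A[idx], B[idx])
        err = np.linalg.norm(A @ R.T + t - B, axis=1)
        inliers = int((err < inlier_thresh).sum())
        if inliers > best_inliers:
            best, best_inliers = (R, t), inliers
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    A = rng.normal(size=(200, 3))
    R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(R_true) < 0:
        R_true[:, 0] *= -1                      # make it a proper rotation
    B = A @ R_true.T + np.array([0.2, -0.1, 0.5])
    R, t = ransac_registration(A, B)
    print(np.allclose(R, R_true, atol=1e-6), t)
```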
  • Patent number: 10217286
    Abstract: Motion sickness resulting from use of a virtual reality headset can be mitigated by displaying a virtual nose in the field of view of the user. Nose data can be obtained by a user selecting various options, or determined dynamically using various image analysis algorithms. Image data captured of the user's face can be data analyzed to determine aspects such as the size, shape, color, texture, and reflectivity of the user's nose. A three-dimensional nose model is generated, which is treated as an object in the virtual world and can have lighting, shadows, and textures applied accordingly. The pupillary distance can be determined from the image data and used to determine the point of view from which to render each nose portion. Changes in lighting or expression can cause the appearance of the nose to change, as well as the level of detail of the rendering.
    Type: Grant
    Filed: September 21, 2015
    Date of Patent: February 26, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Justin-Josef Angel, Eric Alan Breitbard, Colin Neil Swann, Robert Steven Murdock
  • Patent number: 10215856
    Abstract: A method for determining whether a distance that a CW-TOF range camera provides for a scene is degraded by multipath interference (MPI), comprising: operating the camera to determine a propagation phase delay and a phase delay coefficient for each of a plurality of modulation frequencies of light that illuminates the scene; and using the phase delay coefficient and/or the phase delay to determine whether a distance provided by the camera is compromised by MPI.
    Type: Grant
    Filed: November 27, 2017
    Date of Patent: February 26, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Zhanping Xu
  • Patent number: 10218956
    Abstract: A method of generating a depth cue for three dimensional video content is disclosed. The method comprises the steps of (a) detecting three dimensional video content that will appear in observer space when displayed (110); (b) identifying a reference projection parameter (120); (c) estimating a location of a shadow that would be generated by the detected content as a consequence of a light source emitting light according to the reference projection parameter (130); and (d) projecting light content imitating a shadow to the estimated location to coincide with display of the three dimensional video content (140). Also disclosed are a computer program product for carrying out a method of generating a depth cue for three dimensional video content and an apparatus (800) for generating a depth cue for three dimensional video content. (See the code sketch following this entry.)
    Type: Grant
    Filed: October 1, 2012
    Date of Patent: February 26, 2019
    Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
    Inventor: Julien Michot
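    For a point light and a flat floor, step (c) reduces to intersecting the ray from the light through each content point with the floor plane; the point-light model and the floor at z = 0 are simplifying assumptions for illustration.

```python
import numpy as np

def shadow_point(light_pos, content_point, floor_z=0.0):
    """Intersect the ray from a point light through a content point with the plane z = floor_z."""
    L = np.asarray(light_pos, dtype=float)
    P = np.asarray(content_point, dtype=float)
    t = (L[2] - floor_z) / (L[2] - P[2])   # ray parameter where the shadow ray reaches the floor
    return L + t * (P - L)

if __name__ == "__main__":
    # Light above and behind the viewer, content point floating 0.3 m above the floor.
    print(shadow_point([0.0, -1.0, 2.0], [0.1, 0.5, 0.3]))
```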
  • Patent number: 10217282
    Abstract: Movies of volume-rendered medical images, which give an impression of the anatomy, are becoming increasingly important because this type of visualization comes close to reality. However, it is time consuming to compose these movies if a path other than a geometric primitive (such as a circle) is preferred. Moreover, it is virtually impossible to reproduce comparably complex, manually composed fly-paths. The proposed apparatus focuses on volume-rendered movies of whole-heart MR scans. It solves the problems mentioned above by automatically deriving a fly-path from the segmentation data of the coronary arteries. A method, computer-readable medium and use are also provided.
    Type: Grant
    Filed: October 27, 2008
    Date of Patent: February 26, 2019
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Cornelis Pieter Visser, Hubrecht Lambertus Tjalling De Bliek
  • Patent number: 10210618
    Abstract: Within examples, object image masking is provided. An example method includes receiving a depth mask of an object; projecting the depth mask of the object onto an image of the object in a background so as to generate a depth image of the object in the background; determining portions of the depth image of the object in the background that are representative of the object and that are representative of the background; determining a foreground mask of the object based on the portions of the depth image that are representative of the object; and using the foreground mask of the object to identify portions of the image representative of the object. (See the code sketch following this entry.)
    Type: Grant
    Filed: December 22, 2014
    Date of Patent: February 19, 2019
    Assignee: Google LLC
    Inventors: James Joseph Kuffner, James Robert Bruce, Ken Conley, Arshan Poursohi
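    The masking idea can be illustrated with plain array operations: given a depth image already aligned to the color image, mark pixels whose depth falls in the object's expected range as foreground and cut the object out with that mask. The fixed depth range is an assumption for illustration, not the patent's segmentation rule.

```python
import numpy as np

def foreground_mask(depth_image, near=0.3, far=1.5):
    """Boolean mask of pixels whose depth lies inside the object's assumed range (metres)."""
    return (depth_image > near) & (depth_image < far)

def cut_out_object(color_image, depth_image, near=0.3, far=1.5):
    mask = foreground_mask(depth_image, near, far)
    out = np.zeros_like(color_image)
    out[mask] = color_image[mask]
    return out, mask

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    color = rng.integers(0, 256, (120, 160, 3), dtype=np.uint8)
    depth = rng.uniform(0.0, 3.0, (120, 160))
    cut, mask = cut_out_object(color, depth)
    print(mask.mean(), cut.shape)
```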
  • Patent number: 10210660
    Abstract: An image processing system is designed to generate a canvas view that has a smooth transition between binocular views and monocular views. Initially, the image processing system receives top/bottom images and side images of a scene and calculates offsets to generate synthetic side images for the left and right views of a user. To realize a smooth transition between binocular and monocular views, the image processing system first warps the top/bottom images onto the corresponding synthetic side images to generate warped top/bottom images, which realizes the smooth transition in terms of shape. The image processing system then morphs the warped top/bottom images onto the corresponding synthetic side images to generate blended images for the left and right eye views. The image processing system creates the canvas view, which has a smooth transition between binocular and monocular views in terms of image shape and color, based on the blended images.
    Type: Grant
    Filed: April 11, 2016
    Date of Patent: February 19, 2019
    Assignee: Facebook, Inc.
    Inventors: Brian Keith Cabral, Forrest Samuel Briggs
  • Patent number: 10212412
    Abstract: A method of increasing the photographing speed of a photographing device that captures an image through a combination of two or more photographing devices and generates and provides an image by using the captured images. An RGB image obtaining device and a depth image obtaining device alternately perform photographing to obtain images. A second depth image and a second RGB image, respectively corresponding to a first RGB image and a first depth image obtained by the alternate photographing, are synthesized and output, effectively doubling the photographing speed.
    Type: Grant
    Filed: March 9, 2016
    Date of Patent: February 19, 2019
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Ju Yong Chang, Jin Seo Kim, Hee Kwon Kim, Soon Chan Park, Ji Young Park, Kwang Hyun Shim, Moon Wook Ryu, Ho Wook Jang, Hyuk Jeong
  • Patent number: 10212409
    Abstract: The present invention provides a depth generation method. The method includes obtaining a left two-dimensional (2D) image and a right two-dimensional image, each having a first image resolution; scaling the left 2D image and the right 2D image to obtain a scaled left 2D image and a scaled right 2D image, each having a second image resolution; and generating an output depth map based on the scaled left 2D image and the scaled right 2D image. (See the code sketch following this entry.)
    Type: Grant
    Filed: December 18, 2015
    Date of Patent: February 19, 2019
    Assignee: BOE TECHNOLOGY GROUP CO., LTD
    Inventors: Xingxing Zhao, Jibo Zhao
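    A compact way to picture the pipeline: downscale the rectified left/right images to a working resolution, run a block-matching disparity algorithm, and upscale the result. OpenCV's SGBM matcher below is only a convenient stand-in for the patent's disparity generation, and the scale and matcher parameters are arbitrary.

```python
import cv2
import numpy as np

def depth_map_from_pair(left_gray, right_gray, scale=0.5, num_disparities=64, block_size=5):
    """left_gray/right_gray: rectified uint8 grayscale images at the first resolution."""
    h, w = left_gray.shape
    small = (int(w * scale), int(h * scale))
    l = cv2.resize(left_gray, small, interpolation=cv2.INTER_AREA)
    r = cv2.resize(right_gray, small, interpolation=cv2.INTER_AREA)
    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=num_disparities,
                                    blockSize=block_size)
    disp = matcher.compute(l, r).astype(np.float32) / 16.0   # SGBM returns fixed-point x16
    # Bring the disparity map back to the original resolution (values rescaled accordingly).
    return cv2.resize(disp, (w, h), interpolation=cv2.INTER_LINEAR) / scale

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    left = rng.integers(0, 256, (480, 640), dtype=np.uint8)
    right = np.roll(left, -8, axis=1)   # toy horizontal shift standing in for parallax
    print(depth_map_from_pair(left, right).shape)
```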
  • Patent number: 10204400
    Abstract: An imaging unit images an object through an imaging optical system so as to acquire image data. A depth map acquiring unit acquires information relating to the depth distribution of the object as depth map data. The resolution of the depth map data is lower than the resolution of the captured image data. A depth map shaping unit references the image data of the object so as to conform to the resolution of the depth map when it performs shaping of the depth map based on the image data of the object.
    Type: Grant
    Filed: June 22, 2016
    Date of Patent: February 12, 2019
    Assignee: Canon Kabushiki Kaisha
    Inventor: Masaaki Matsuoka
  • Patent number: 10204454
    Abstract: Image georegistration method and system. An imaging sensor acquires a sensor-image of a scene. Imaging parameters of the acquired sensor-image are obtained, the imaging parameters including at least the detected 3D position and orientation of the imaging sensor when acquiring the sensor-image, as detected using a location measurement unit. A model-image of the scene is generated from a textured 3D geographic model, the model-image representing a texture-based 2D image of the scene as acquired in the 3D model using the imaging parameters. The sensor-image and the model-image are compared and the discrepancies between them determined. An updated 3D position and orientation of the imaging sensor is determined in accordance with the discrepancies. The updated position and orientation may be used to display supplementary content overlaid on the sensor-image in relation to a selected location on the sensor-image, or for determining the geographic location coordinates of a scene element.
    Type: Grant
    Filed: May 28, 2015
    Date of Patent: February 12, 2019
    Assignee: ELBIT SYSTEMS LAND AND C4I LTD.
    Inventors: Benny Goldman, Eli Haham
  • Patent number: 10204402
    Abstract: Disclosed are a drone-mounted imaging hyperspectral geometric correction method and system, comprising: collecting position attitude information of the drone's low-precision POS sensor in real time; based on the position attitude information, parsing precise photography center position attitude information of a digital photograph, and generating a DEM of the area covered by the photograph; based on the precise photography center position attitude information, performing correction on position attitude data corresponding to multiple imaging hyperspectral scan lines between the photography centers of adjacent digital photographs, and obtaining high-precision linear array position attitude information of the multiple imaging hyperspectral scan lines; and, based on the high-precision linear array position attitude information and the DEM, establishing a collinearity equation and generating a hyperspectral image.
    Type: Grant
    Filed: January 23, 2014
    Date of Patent: February 12, 2019
    Assignee: BEIJING RESEARCH CENTER FOR INFORMATION TECHNOLOGY IN AGRICULTURE
    Inventors: Guijun Yang, Chunjiang Zhao, Haiyang Yu, Xiaodong Yang, Xingang Xu, Xiaohe Gu, Haikuan Feng, Hao Yang, Hua Yan
  • Patent number: 10200667
    Abstract: A recorder creates an encoded data stream comprising an encoded video stream and an encoded graphics stream, the video stream comprising an encoded 3D (three-dimensional) video object, and the graphics stream comprising at least a first encoded segment and a second encoded segment, where the first segment comprises 2D (two-dimensional) graphics data and the second segment comprises a depth map for the 2D graphics data. A graphics decoder decodes the first and second encoded segments to form respective first and second decoded sequences, which are output separately to a 3D display unit. The 3D display unit combines the first and second decoded sequences and renders the combination as a 3D graphics image overlaying a 3D video image simultaneously rendered from a decoded 3D video object decoded from the encoded 3D video object.
    Type: Grant
    Filed: April 29, 2014
    Date of Patent: February 5, 2019
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Darwin He, Li Hong, Philip Steven Newton
  • Patent number: 10198633
    Abstract: A solar power measurement method is provided. A method may include determining an azimuth of a reference roof edge relative to an orientation of an aerial image of a structure. The method may include capturing at least one spherical image at at least one determined measurement location proximate the structure. Further, the method may include determining a relative azimuth of the reference roof edge from a downward view of a lower hemisphere of the at least one image. In addition, the method may include determining an orientation of an upper hemisphere of the at least one image based on the azimuth of the reference roof edge, the relative azimuth of the reference roof edge, and a known tilt of a roof edge of the structure. Furthermore, the method may include calculating shading conditions for a time period for known sun positions during the time period based on the orientation of the upper hemisphere of the at least one spherical image.
    Type: Grant
    Filed: October 26, 2016
    Date of Patent: February 5, 2019
    Assignee: Vivint Solar, Inc.
    Inventors: Roger L. Jungerman, Mark Galli, Judd Reed, Willard S. MacDonald
  • Patent number: 10200804
    Abstract: Embodiments of the present invention relate to video content assisted audio object extraction. A method of audio object extraction from channel-based audio content is disclosed. The method comprises extracting at least one video object from video content associated with the channel-based audio content, and determining information about the at least one video object. The method further comprises extracting from the channel-based audio content an audio object to be rendered as an upmixed audio signal based on the determined information. Corresponding system and computer program product are also disclosed.
    Type: Grant
    Filed: February 24, 2016
    Date of Patent: February 5, 2019
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Lianwu Chen, Xuejing Sun, Lie Lu
  • Patent number: 10192328
    Abstract: A method of computing statistical weights for a computed tomography (CT) iterative reconstruction process is provided. The method includes obtaining detector count data from a CT scan of an object; calculating variance data based on the count data and an electronic noise variance; transforming the calculated variance data to obtain statistical weight data; and performing the CT iterative reconstruction process using the statistical weight data and raw projection data to obtain a reconstructed CT image. (See the code sketch following this entry.)
    Type: Grant
    Filed: October 24, 2013
    Date of Patent: January 29, 2019
    Assignee: Toshiba Medical Systems Corporation
    Inventors: Alexander A. Zamyatin, Daxin Shi, Thomas Labno
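    Under a common Gaussian approximation for post-log CT data (not necessarily the transform used in the patent), the variance of a log projection with detected counts N and electronic noise variance sigma_e^2 is roughly (N + sigma_e^2) / N^2, and the statistical weight is its reciprocal. The sketch below simply evaluates that formula.

```python
import numpy as np

def statistical_weights(counts, electronic_noise_var):
    """Per-ray weights for penalized weighted least-squares CT reconstruction.

    counts: detected photon counts per detector element (array-like).
    Uses the approximation var(post-log datum) ~ (N + sigma_e^2) / N^2, weight = 1 / var.
    """
    counts = np.maximum(np.asarray(counts, dtype=np.float64), 1.0)  # guard against zero counts
    variance = (counts + electronic_noise_var) / counts**2
    return 1.0 / variance   # equivalently counts**2 / (counts + electronic_noise_var)

if __name__ == "__main__":
    n = np.array([10.0, 1_000.0, 100_000.0])
    print(statistical_weights(n, electronic_noise_var=11.0))
```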
  • Patent number: 10194077
    Abstract: The invention relates to a method for operating a camera assembly, in which a first camera and a second camera capture images (36, 42). Respective fields of view of the two cameras overlap at least in a partial region (24). At least in an image (36) captured by the first camera, at least one contamination region (38) including a plurality of pixels is detected within the partial region (24). Thereupon, data values specifying the respective transparency of the pixels in the at least one contamination region (38) are varied with respect to respective reference values of transparency, wherein those reference values increase in the partial region (24) towards an edge (28) of the respective images upon superimposition of the images. Furthermore, the invention relates to a camera assembly.
    Type: Grant
    Filed: February 9, 2017
    Date of Patent: January 29, 2019
    Assignee: Connaught Electronics Ltd.
    Inventors: Michael Burke, Patrick Eoghan Denny
  • Patent number: 10192311
    Abstract: A structured light active sensing system may be configured to transmit and receive codewords to generate a depth map by analyzing disparities between the locations of the transmitted and received codewords. To determine the locations of received codewords, an image of the projected codewords is identified, from which one or more codeword boundaries are detected. The codeword boundaries may be detected based upon a particular codeword bit of each codeword. Each detected codeword boundary may be constrained from overlapping with other detected codeword boundaries, such that no pixel of the received image is associated with more than one codeword boundary.
    Type: Grant
    Filed: August 5, 2016
    Date of Patent: January 29, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Yunke Pan, Stephen Michael Verrall
  • Patent number: 10191265
    Abstract: An image generation apparatus includes a plurality of irradiators, and a control circuit. The control circuit performs an operation including generating an in-focus image of an object in each of a plurality of predetermined focal planes, extracting a contour of at least one or more cross sections of the object represented in the plurality of in-focus images, generating at least one or more circumferences based on the contour of the at least one or more cross sections, generating a sphere image in the form of a three-dimensional image of at least one or more spheres, each sphere having one of the circumferences, generating a synthetic image by processing the sphere image such that a cross section appears, and displaying the resultant synthetic image on a display.
    Type: Grant
    Filed: March 6, 2017
    Date of Patent: January 29, 2019
    Assignee: Panasonic Intellectual Property Management Co., Ltd.
    Inventors: Yumiko Kato, Taichi Sato, Yoshihide Sawada
  • Patent number: 10192145
    Abstract: A method of providing a set of feature descriptors configured to be used in matching an object in an image of a camera is provided. The method includes: a) providing at least two images of a first object; b) extracting in at least two of the images at least one feature from the respective image, c) providing at least one descriptor for an extracted feature, and storing the descriptors; d) matching descriptors in the first set of descriptors; e) computing a score parameter based on the result of the matching process; f) selecting at least one descriptor based on its score parameter; g) adding the selected descriptor(s) to a second set of descriptors; and h) updating the score parameter of descriptors in the first set based on a selection process and to the result of the matching process.
    Type: Grant
    Filed: February 28, 2017
    Date of Patent: January 29, 2019
    Assignee: Apple Inc.
    Inventors: Mohamed Selim Ben Himane, Daniel Kurz, Thomas Olszamowski
  • Patent number: 10183659
    Abstract: A camera is configured to be mounted facing a front of a vehicle. A computer is programmed to receive first and second images from the camera, determine a height of an obstacle located to the front of the vehicle using at least the first and second images, and, based at least in part on the height of the obstacle, send an instruction via a communications bus to a component controller to control a speed of the vehicle.
    Type: Grant
    Filed: November 24, 2014
    Date of Patent: January 22, 2019
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventor: Aaron L. Mills
  • Patent number: 10187589
    Abstract: A system and method for mixing a scene with a virtual scenario. An image capturing unit is arranged to capture at least one image so as to cover the scene from a first viewpoint. An image representation generation unit is arranged to generate at least one image representation based on the captured image. A game engine unit is arranged to generate a virtual scenario. An image processing unit is arranged to adapt the at least one image representation based on the generated virtual scenario so as to provide a virtual video sequence.
    Type: Grant
    Filed: December 19, 2008
    Date of Patent: January 22, 2019
    Assignee: SAAB AB
    Inventors: Måns Hagström, Ulf Erlandsson, Johan Borg, Folke Isaksson, Ingmar Andersson, Adam Tengblad
  • Patent number: 10183398
    Abstract: A point cloud system having two separate sets of points, each set captured from a different point of view, creating data with potentially occluded points in the point cloud. An accelerated approach based on close sister points is used to determine which occluded points can be removed: looking out from an assumed non-occluded point, finding the closest point in the other set of points, then looking back into the first set of points (or jumping to the closest non-occluded point and looking back); if this second sister is close to the initial point, it is a close sister. (See the code sketch following this entry.)
    Type: Grant
    Filed: September 14, 2016
    Date of Patent: January 22, 2019
    Assignee: SKUR, Inc.
    Inventors: Adam Cohen, James Creasy, Alan Gushurst
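    The back-and-forth 'close sister' test resembles a mutual nearest-neighbour check between the two point sets, sketched below with KD-trees; the distance threshold and the reading of a failed check as possible occlusion are assumptions, not the patent's accelerated procedure.

```python
import numpy as np
from scipy.spatial import cKDTree

def close_sister_mask(points_a, points_b, sister_tol=0.05):
    """For each point in set A, look to its nearest point in set B, then back into A.

    Returns a boolean mask: True where the round trip lands close to the starting
    point (a 'close sister'), False where it does not (candidate occluded point).
    """
    tree_a, tree_b = cKDTree(points_a), cKDTree(points_b)
    _, idx_b = tree_b.query(points_a)              # nearest B point for each A point
    _, idx_back = tree_a.query(points_b[idx_b])    # look back into A from that B point
    round_trip = np.linalg.norm(points_a - points_a[idx_back], axis=1)
    return round_trip < sister_tol

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    a = rng.uniform(0, 1, (1000, 3))
    b = a + rng.normal(0, 0.005, a.shape)          # second scan: same surface, small jitter
    print(close_sister_mask(a, b).mean())          # most points should pass
```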
  • Patent number: 10187378
    Abstract: An authentication server 2 stores, for each of one or more objects to be registered, unique pattern information of a surface of the object to be registered and a personal identification number in a database in association with each other. The server acquires unique pattern information of a surface of an object to be authenticated, which is related to an authentication request, and a personal identification number; extracts from the database the unique pattern information stored in association with the personal identification number related to the authentication request; and determines whether the extracted unique pattern information includes unique pattern information corresponding to the unique pattern information related to the authentication request.
    Type: Grant
    Filed: August 5, 2015
    Date of Patent: January 22, 2019
    Assignee: FUJI XEROX CO., LTD.
    Inventor: Kensuke Ito
  • Patent number: 10178303
    Abstract: A process is provided for guiding a capture device (e.g., smartphone, tablet, drone, etc.) to capture a series of images of a building. Images are captured as the camera device moves around the building—taking a plurality of images (e.g., video) from multiple angles and distances. Quality of the image may be determined to prevent low quality images from being captured or to provide instructions on how to improve the quality of the image capture. The series of captured images are uploaded to an image processing system to generate a 3D building model that is returned to the user. The returned 3D building model may incorporate scaled measurements of building architectural elements and may include a dataset of measurements for one or more architectural elements such as siding (e.g., aluminum, vinyl, wood, brick and/or paint), windows, doors or roofing.
    Type: Grant
    Filed: March 31, 2017
    Date of Patent: January 8, 2019
    Assignee: HOVER INC.
    Inventors: William Castillo, Derek Halliday, Manish Upendran
  • Patent number: 10176644
    Abstract: Simulating a 3D audio environment, including receiving a visual representation of an object at a location in a scene, wherein the location represents a point in 3D space, receiving a sound element, and binding the sound element to the visual representation of the object such that a characteristic of the sound element is dynamically modified coincident with a change in location in the scene of the visual representation of the object in 3D space.
    Type: Grant
    Filed: June 7, 2015
    Date of Patent: January 8, 2019
    Assignee: Apple Inc.
    Inventors: Thomas Goossens, Sebastien Metrot
  • Patent number: 10166923
    Abstract: An image generation device for referencing a correspondence relationship and generating a line-of-sight-converted image from a captured image of an in-vehicle camera mounted to a vehicle is provided. The image generation device includes a first region updating unit that, upon sensing deviation of at least one of the mounting position and the mounting angle of the in-vehicle camera and calculating a new mounting position and mounting angle, updates a correspondence relationship of a predetermined first region in the line-of-sight-converted image in accordance with the new mounting position and mounting angle, and a second region updating unit that, upon satisfaction of a predetermined updating condition after updating the correspondence relationship of the first region, updates the correspondence relationship for a second region in the line-of-sight-converted image in accordance with the new mounting position and mounting angle.
    Type: Grant
    Filed: September 24, 2015
    Date of Patent: January 1, 2019
    Assignee: DENSO CORPORATION
    Inventors: Hitoshi Tanaka, Youji Morishita, Muneaki Matsumoto