3-D or Stereo Imaging Analysis Patents (Class 382/154)
  • Patent number: 10690489
    Abstract: Systems, methods, and media for performing shape measurement are provided. In some embodiments, systems for performing shape measurement are provided, the systems comprising: a projector that projects onto a scene a plurality of illumination patterns, wherein each of the illumination patterns has a given frequency, each of the illumination patterns is projected onto the scene during a separate period of time, three different illumination patterns are projected with a first given frequency, and only one or two different illumination patterns are projected with a second given frequency; a camera that detects an image of the scene during each of the plurality of periods of time; and a hardware processor that is configured to: determine the given frequencies of the plurality of illumination patterns; and measure a shape of an object in the scene.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: June 23, 2020
    Assignee: The Trustees of Columbia University in the City of New York
    Inventors: Mohit Gupta, Shree K. Nayar
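The "three patterns at one frequency" arrangement is characteristic of three-step phase-shifting profilometry. As an illustration only (a textbook sketch, not the patented method), the phase at a pixel can be recovered from three intensity samples taken under sinusoidal patterns shifted by 2π/3:

```python
import math

def three_step_phase(i1, i2, i3):
    """Recover the phase of a sinusoidal fringe from three samples
    taken at phase shifts of -2*pi/3, 0, and +2*pi/3."""
    return math.atan2(math.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic check: intensities at one pixel for ambient A, amplitude B, phase phi.
A, B, phi = 0.5, 0.4, 1.2
shifts = (-2 * math.pi / 3, 0.0, 2 * math.pi / 3)
i1, i2, i3 = (A + B * math.cos(phi + s) for s in shifts)
recovered = three_step_phase(i1, i2, i3)
```

In multi-frequency schemes, an additional frequency is typically used to disambiguate this 2π-periodic phase; the one- or two-pattern second frequency in the abstract plays a role of that kind.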
  • Patent number: 10687727
    Abstract: The present disclosure provides systems and methods for generating an electrophysiological map of a geometric structure. The system includes a computer-based model construction system configured to acquire electrical information at a plurality of diagnostic landmark points, assign a color value, based on the acquired electrical information, to each of the diagnostic landmark points, create a first 3D texture region storing floats for a weighted physiological metric, create a second 3D texture region storing floats for a total weight, for each diagnostic landmark point, additively blend the color value of the diagnostic landmark point into voxels of the first 3D texture region that are within a predetermined distance, normalize the colored voxels using the second 3D texture region to generate a normalized 3D texture map, generate the electrophysiological map from the normalized 3D texture map and a surface of the geometric structure, and display the generated electrophysiological map.
    Type: Grant
    Filed: February 26, 2019
    Date of Patent: June 23, 2020
    Assignee: ST. JUDE MEDICAL, CARDIOLOGY DIVISION, INC.
    Inventors: Eric J. Voth, Cable Patrick Thompson
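The two-texture accumulate-then-normalize scheme is essentially weighted scatter interpolation into a voxel grid. A minimal NumPy sketch follows; the grid size, the linear falloff weight, and the function names are illustrative assumptions, not the patent's GPU textures:

```python
import numpy as np

def blend_points(points, values, grid_shape, radius):
    """Additively blend point values into nearby voxels (one accumulator for
    the weighted metric, one for total weight), then normalize: a simplified
    sketch of the two-texture scheme described in the abstract."""
    weighted = np.zeros(grid_shape)   # first "texture": weighted metric
    weights = np.zeros(grid_shape)    # second "texture": total weight
    idx = np.indices(grid_shape).reshape(3, -1).T  # all voxel coordinates
    for p, v in zip(points, values):
        d = np.linalg.norm(idx - p, axis=1)
        near = d <= radius
        w = 1.0 - d[near] / radius               # simple linear falloff
        coords = tuple(idx[near].T)
        np.add.at(weighted, coords, w * v)       # additive blend
        np.add.at(weights, coords, w)
    out = np.zeros(grid_shape)
    nz = weights > 0
    out[nz] = weighted[nz] / weights[nz]         # normalized map
    return out

m = blend_points([np.array([2, 2, 2])], [5.0], (5, 5, 5), radius=1.5)
```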
  • Patent number: 10691976
    Abstract: Implementations are directed to receiving a set of training data including a plurality of data points, at least a portion of which are to be labeled for subsequent supervised training of a computer-executable machine learning (ML) model, providing at least one visualization based on the set of training data, the at least one visualization including a graphical representation of at least a portion of the set of training data, receiving user input associated with the at least one visualization, the user input indicating an action associated with a label assigned to a respective data point in the set of training data, executing a transformation on data points of the set of training data based on one or more heuristics representing the user input to provide labeled training data in a set of labeled training data, and transmitting the set of labeled training data for training the ML model.
    Type: Grant
    Filed: November 16, 2017
    Date of Patent: June 23, 2020
    Assignee: Accenture Global Solutions Limited
    Inventors: Phillip Henry Rogers, Andrew E. Fano, Joshua Neland, Allan Enemark, Tripti Saxena, Jana A. Thompson, David William Vinson
  • Patent number: 10685424
    Abstract: Determining three-dimensional structure in a road environment using a system mountable in a host vehicle including a camera connectable to a processor. Multiple image frames are captured in the field of view of the camera. In the image frames, a line is selected below which the road is imaged. The line separates between upper images essentially excluding images of the road and lower images essentially including images of the road. One or more of the lower images is warped, according to a road homography to produce at least one warped lower image. The three-dimensional structure may be provided from motion of a matching feature within the upper images or from motion of a matching feature within at least one of the lower images and at least one warped lower image.
    Type: Grant
    Filed: April 19, 2018
    Date of Patent: June 16, 2020
    Assignee: Mobileye Vision Technologies Ltd.
    Inventors: Harel Livyatan, Oded Berberian, Barak Cohen, Gideon Stein
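The road warp in this abstract relies on a planar homography. A small sketch of applying a 3x3 homography H to pixel coordinates; the matrix here is an arbitrary translation, not a calibrated road homography:

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 pixel coordinates through a 3x3 homography
    (the kind of road-plane warp described in the abstract)."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coords
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]              # back to pixel coords

# A pure translation homography shifts every pixel by (3, -2).
H = np.array([[1.0, 0.0, 3.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0, 1.0]])
warped = apply_homography(H, [[10.0, 10.0]])
```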
  • Patent number: 10685221
    Abstract: Methods and apparatus to monitor environments are disclosed. Example audience measurement devices disclosed herein include means for executing a first recognition analysis on three-dimensional data representative of a first object within a threshold distance from a three-dimensional sensor. Disclosed example audience measurement devices also include means for executing a second recognition analysis on two-dimensional data representative of a second object outside the threshold distance from the three-dimensional sensor. Disclosed example audience measurement devices further include means for combining a first detection of a first person provided by the first recognition analysis and a second detection of a second person provided by the second recognition analysis to generate a people count for an environment.
    Type: Grant
    Filed: August 7, 2018
    Date of Patent: June 16, 2020
    Assignee: The Nielsen Company (US), LLC
    Inventors: Morris Lee, Alejandro Terrazas
  • Patent number: 10684537
    Abstract: In described examples, a geometric progression of structured light elements is iteratively projected for display on a projection screen surface. The displayed progression is for determining a three-dimensional characterization of the projection screen surface. Points of the three-dimensional characterization of a projection screen surface are respaced in accordance with a spacing grid and an indication of an observer position. A compensated depth for each of the respaced points is determined in response to the three-dimensional characterization of the projection screen surface. A compensated image can be projected on the projection screen surface in response to the respaced points and respective compensated depths.
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: June 16, 2020
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Jaime Rene De La Cruz, Jeffrey Mathew Kempf
  • Patent number: 10684358
    Abstract: A situation awareness sensor includes a plurality of N sensor channels, each channel including an optical phased array (OPA) having a plurality of solid-state laser emitters, a command circuit and a detector. The command circuit controls the relative phase between the laser emitters to command a divergence, shape and exit angle of a spot-beam to scan a channel field-of-view (FOV). The OPAs may be controlled individually or in combination to command one or more spot-beams to scan an aggregate sensor FOV and to track one or more objects.
    Type: Grant
    Filed: November 11, 2016
    Date of Patent: June 16, 2020
    Assignee: Raytheon Company
    Inventors: Gerald P. Uyeno, Sean D. Keller
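Steering a spot-beam with an OPA comes down to applying a linear phase ramp across the emitters. A textbook uniform-linear-array sketch; emitter count, spacing, and wavelength are made-up values, and the patented command circuit also shapes divergence and spot shape:

```python
import numpy as np

def steering_phases(n, spacing, wavelength, theta):
    """Per-emitter phases that steer a uniform linear array of n emitters
    (spacing in metres) toward exit angle theta. A textbook phased-array
    sketch, not the patent's command circuit."""
    k = 2 * np.pi / wavelength
    return -k * spacing * np.arange(n) * np.sin(theta)

def array_factor(phases, spacing, wavelength, theta):
    """Far-field magnitude of the array response at observation angle theta."""
    k = 2 * np.pi / wavelength
    geometric = k * spacing * np.arange(len(phases)) * np.sin(theta)
    return abs(np.sum(np.exp(1j * (phases + geometric))))

n, d, lam = 16, 0.5e-6, 1.0e-6
ph = steering_phases(n, d, lam, np.deg2rad(20))
on_target = array_factor(ph, d, lam, np.deg2rad(20))   # coherent sum = n
off_target = array_factor(ph, d, lam, np.deg2rad(-20))
```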
  • Patent number: 10683596
    Abstract: A method of generating an image that shows an embroidery area in an embroidery frame for computerized embroidery includes: acquiring a first image of a calibration board that has multiple feature points forming a contour approximating an embroidery area defined by the embroidery frame; identifying the feature points of the calibration board in the first image to acquire multiple graphical feature points; acquiring a geometric curved surface that fits the graphical feature points and that includes multiple curved surface points; acquiring a second image in which the embroidery area of the embroidery frame is shown; and performing image correction on the second image based on the geometric curved surface to generate a corrected image.
    Type: Grant
    Filed: June 7, 2018
    Date of Patent: June 16, 2020
    Assignee: ZENG HSING INDUSTRIAL CO., LTD.
    Inventors: Zheng-Hau Lin, Kun-Lung Hsu
  • Patent number: 10685229
    Abstract: Systems and methods for image based localization for unmanned aerial vehicles (UAVs) are disclosed. In one embodiment, a method for navigating a UAV includes: flying a UAV along a flight path; acquiring an image of a ground area along the flight path with a camera carried by the UAV; and sending the image to a base station. The method further includes receiving navigation data from the base station, based upon a comparison of the image of the ground area to at least one terrestrial map of the flight path.
    Type: Grant
    Filed: April 25, 2018
    Date of Patent: June 16, 2020
    Assignee: Wing Aviation LLC
    Inventors: Dinuka Abeywardena, Damien Jourdan
  • Patent number: 10681330
Abstract: A display processing device, a display processing method, and a display apparatus are disclosed. The display processing device includes a 3D image processing chip and a 2D image processing chip. The 3D image processing chip is configured to receive a 3D image signal and process it into a 2D image signal in which left and right eye images are arranged in alternate rows; the 3D image processing chip transmits the processed 2D image signal to the 2D image processing chip, and the 2D image processing chip is configured to perform image processing on the 2D image signal.
    Type: Grant
    Filed: December 20, 2017
    Date of Patent: June 9, 2020
    Assignee: BOE TECHNOLOGY GROUP CO., LTD.
    Inventor: Xinshe Yin
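The alternate-row layout can be sketched in a few lines of NumPy; frame sizes and pixel values are illustrative, and the actual chips operate on video signals, not arrays:

```python
import numpy as np

def interleave_rows(left, right):
    """Pack left- and right-eye frames into one frame whose rows alternate
    left/right: the 3D-to-2D signal layout described in the abstract."""
    assert left.shape == right.shape
    out = np.empty_like(left)
    out[0::2] = left[0::2]    # even rows from the left-eye image
    out[1::2] = right[1::2]   # odd rows from the right-eye image
    return out

L = np.full((4, 6), 1)        # stand-in left-eye frame
R = np.full((4, 6), 2)        # stand-in right-eye frame
frame = interleave_rows(L, R)
```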
  • Patent number: 10678253
    Abstract: Systems and methods are provided for controlling an autonomous vehicle (AV). A map generator module processes sensor data to generate a world representation of a particular driving scenario (PDS). A scene understanding module (SUM) processes navigation route data, position information and a feature map to define an autonomous driving task (ADT), and decomposes the ADT into a sequence of sub-tasks. The SUM selects a particular combination of sensorimotor primitive modules (SPMs) to be enabled and executed for the PDS. Each one of the SPMs addresses a sub-task in the sequence. A primitive processor module executes the particular combination of the SPMs such that each generates a vehicle trajectory and speed (VTS) profile. A selected one of the VTS profiles is then processed to generate the control signals, which are then processed at a low-level controller to generate commands that control one or more of actuators of the AV.
    Type: Grant
    Filed: May 24, 2018
    Date of Patent: June 9, 2020
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Shuqing Zeng, Wei Tong, Upali P. Mudalige
  • Patent number: 10681318
    Abstract: In described examples, structured light elements are projected for display on a projection screen surface. The projected light elements are captured for determining a three-dimensional characterization of the projection screen surface. A three-dimensional characterization of the projection screen surface is generated in response to the displayed structured light elements. An observer perspective characterization of the projection screen surface is generated in response to an observer position and the three-dimensional characterization. A depth for at least one point of the observer perspective characterization is determined in response to depth information of respective neighboring points of the at least one point of the observer perspective characterization. A compensated image can be projected on the projection screen surface in response to the observer perspective characterization and depth information of respective neighboring points of the at least one point of the observer perspective characterization.
    Type: Grant
    Filed: November 26, 2018
    Date of Patent: June 9, 2020
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Jaime Rene De La Cruz, Jeffrey Mathew Kempf
  • Patent number: 10679090
    Abstract: A method for estimating 6-DOF relative displacement using vision-based localization and apparatus therefor are disclosed. A method for estimating 6-DOF relative displacement may include acquiring images of a first marker attached to a fixing member and a second marker attached to a dynamic member for assembling to the fixing member by using a camera, extracting a feature point of the first marker and a feature point of the second marker through image processing for the acquired images, and estimating 6-DOF relative displacement of the dynamic member for the fixing member based on the extracted feature point of the first marker and feature point of the second marker.
    Type: Grant
    Filed: May 14, 2018
    Date of Patent: June 9, 2020
    Assignee: KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Hyeon Myeong, Su Young Choi, Wancheol Myeong
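Once each marker's pose is estimated, the 6-DOF relative displacement is a composition of rigid transforms. A linear-algebra sketch with hypothetical 4x4 poses; marker detection and feature-point extraction are omitted:

```python
import numpy as np

def relative_pose(T_fix, T_dyn):
    """Relative rigid transform of the dynamic member in the fixing member's
    frame, given each marker's 4x4 pose in the camera frame. A linear-algebra
    sketch of the displacement estimate, not the patent's pipeline."""
    return np.linalg.inv(T_fix) @ T_dyn

def translation_4x4(t):
    """Build a 4x4 homogeneous transform that is a pure translation."""
    T = np.eye(4)
    T[:3, 3] = t
    return T

# Both markers seen by the same camera; the dynamic marker sits 0.1 m
# further along x than the fixing marker.
T_fix = translation_4x4([1.0, 0.0, 2.0])
T_dyn = translation_4x4([1.1, 0.0, 2.0])
T_rel = relative_pose(T_fix, T_dyn)
```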
  • Patent number: 10668621
Abstract: Techniques described herein include a system and methods for implementing fast motion planning with collision detection. In some embodiments, an area voxel map is generated with respect to a three-dimensional space within which a repositioning event is to occur. A number of movement voxel maps are then identified as being related to potential repositioning options. The area voxel map is then compared to each of the movement voxel maps to identify collisions that may occur with respect to the repositioning options. In some embodiments, each voxel map includes a number of bits which each represent voxels in a volume of space. The comparison between the area voxel map and each of the movement voxel maps may include a logical conjunction (e.g., an AND operation). Movement voxel maps for which the comparison result includes a value of 1 are then removed from a set of valid repositioning options.
    Type: Grant
    Filed: June 23, 2017
    Date of Patent: June 2, 2020
    Assignee: Amazon Technologies, Inc.
    Inventor: Stephen A. Caldara
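With voxel maps packed as bit strings, the collision test in the abstract is one AND plus a zero check. A small sketch using Python integers as bit-packed voxel maps; the maps and move names here are made up:

```python
def collides(area_bits, movement_bits):
    """A movement is invalid when its voxel map shares any occupied voxel
    with the area voxel map: a single logical conjunction over the bits."""
    return (area_bits & movement_bits) != 0

# Voxel maps packed into integers, one bit per voxel (bit i set = voxel i occupied).
area = 0b0011_0100                     # obstacles in the workspace
moves = {"lift": 0b0000_0011, "slide": 0b0001_0100, "rotate": 0b1100_0000}

# Keep only the repositioning options whose AND with the area map is all zeros.
valid = [name for name, bits in moves.items() if not collides(area, bits)]
```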
  • Patent number: 10674117
    Abstract: A method and system for enhancement of video systems using wireless device proximity detection. The enhanced video system consists of one or more video capture devices along with one or more sensors detecting the presence of devices with some form of wireless communications enabled. The proximity of a device communicating wirelessly is sensed and cross referenced with received video image information. Through time, movement of wirelessly communicating mobile devices through a venue or set of venues can be deduced and additionally cross referenced to and augmented over image data from the set of video capture devices.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: June 2, 2020
    Assignee: INPIXON CANADA, INC.
    Inventors: James Francis Hallett, Kirk Arnold Moir
  • Patent number: 10672147
Abstract: Disclosed is a method to calibrate an on-board stereo system. The method includes correlating the depth deviation of a point of a scene observed by the system with respect to a supposedly planar scene and the corresponding yaw deviation between the cameras of the system, then deducing therefrom a yaw calibration correction for the cameras. The comparison between the scene as observed and as expected consists in determining, via spatio-temporal filtering, a depth deviation between the observed depth of at least one point of a planar scene image formed in the image plane of the first camera, as positioned in the coordinate system of the other camera, and the expected depth of this point projected onto the planar scene from the first camera, then in determining the yaw calibration deviation between the cameras as a function of the deviation in depth averaged over a sufficient set of points.
    Type: Grant
    Filed: August 24, 2015
    Date of Patent: June 2, 2020
    Assignees: CONTINENTAL AUTOMOTIVE FRANCE, CONTINENTAL AUTOMOTIVE GMBH
    Inventor: Lucien Garcia
  • Patent number: 10671890
Abstract: Techniques for generating 3D gaze predictions based on a deep learning system are described. In an example, the deep learning system includes a neural network. The neural network is trained with training images generated by cameras and showing the eyes of users while gazing at stimulus points. Some of the stimulus points are in the planes of the cameras. The remaining stimulus points are not in the planes of the cameras. The training includes inputting a first training image associated with a stimulus point in a camera plane and inputting a second training image associated with a stimulus point outside the camera plane. The training minimizes a loss function of the neural network based on a distance between at least one of the stimulus points and a gaze line.
    Type: Grant
    Filed: March 30, 2018
    Date of Patent: June 2, 2020
    Assignee: Tobii AB
    Inventor: Erik Linden
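The loss in the abstract is built on the distance from a stimulus point to a predicted gaze line. A sketch of just that geometric term; the network and training loop are omitted and the inputs are hypothetical:

```python
import numpy as np

def gaze_line_distance(stimulus, origin, direction):
    """Euclidean distance from a 3D stimulus point to the gaze line
    origin + t * direction: the kind of distance the abstract's loss
    function is built on."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)                  # unit gaze direction
    v = np.asarray(stimulus, dtype=float) - np.asarray(origin, dtype=float)
    return np.linalg.norm(v - np.dot(v, d) * d)  # reject the along-line part

# Gaze along +z from the origin; a stimulus 3 units off-axis.
dist = gaze_line_distance([3.0, 0.0, 5.0], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0])
```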
  • Patent number: 10672181
Abstract: Previously, our 3DSOM software solved the problem of extracting the object of interest from the scene by allowing the user to cut out the object shape in several photographs, a step known as manual "masking". However, we wish to avoid this step so that the process is as simple as possible, even for an unskilled user. We propose a method of extracting a complete closed model of an object without the user being required to do anything other than capture the shots.
    Type: Grant
    Filed: October 2, 2017
    Date of Patent: June 2, 2020
    Assignee: ULSee Inc.
    Inventor: Adam Michael Baumberg
  • Patent number: 10663594
Abstract: Some embodiments are directed to a processing method of a three-dimensional point cloud, including: obtaining a 3D point cloud from a predetermined view point of a depth sensor; extracting 3D coordinates and intensity data from each point of the 3D point cloud with respect to the view point; and transforming the 3D coordinates and intensity data into at least three two-dimensional spaces, namely an intensity 2D space as a function of the intensity data of each point, a height 2D space as a function of the elevation data of each point, and a distance 2D space as a function of the distance between each point of the 3D point cloud and the view point, together defining a single multi-channel 2D space.
    Type: Grant
    Filed: March 14, 2017
    Date of Patent: May 26, 2020
    Assignee: IMRA EUROPE S.A.S.
    Inventors: Dzmitry Tsishkou, Frédéric Abad, Rémy Bendahan
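The three 2D channels can be sketched as a projection plus per-pixel writes. This toy version uses orthographic binning and made-up grid parameters; the patent's projection from the sensor's view point would differ:

```python
import numpy as np

def cloud_to_channels(points, intensities, grid=(8, 8), extent=4.0):
    """Project a 3D point cloud (viewed from the origin) into three 2D
    channels (intensity, height z, distance to the view point): a simplified
    take on the single multi-channel 2D space in the abstract."""
    img = np.zeros((3,) + grid)
    for (x, y, z), i in zip(points, intensities):
        # Crude orthographic binning of (x, y) into the grid.
        u = int((x + extent) / (2 * extent) * grid[0])
        v = int((y + extent) / (2 * extent) * grid[1])
        if 0 <= u < grid[0] and 0 <= v < grid[1]:
            img[0, u, v] = i                          # intensity channel
            img[1, u, v] = z                          # height channel
            img[2, u, v] = np.sqrt(x*x + y*y + z*z)   # distance channel
    return img

ch = cloud_to_channels([(0.0, 0.0, 3.0)], [0.7])
```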
  • Patent number: 10664570
    Abstract: A method and system for providing a data analysis in the form of a customized geographic visualization on a graphical user interface (GUI) on a remote client computing device using only a web browser on the remote client device. The system receives a user's selected data analysis to be performed by the system for display on the remote client device. The system verifies the data access permissions of the user to render a data analysis solution customized to that particular user, and automatically prevents that user from gaining access to data analysis solutions to which that user is prohibited. The system is configured to respond to the user's data analysis request, perform the necessary computations on the server side on the fly, and send a dataset interpretable by the client device's web browser for display on the client device or on a device associated with the client device.
    Type: Grant
    Filed: October 27, 2016
    Date of Patent: May 26, 2020
    Assignee: Blue Cross Blue Shield Institute, Inc.
    Inventors: Teresa Nguyen Clark, Michael Steven Weinberg, Carlos Ricardo Villarreal, Nathania Hau, Jelani Akil McLean, Abigail Berube, Trent Tyrone Haywood
  • Patent number: 10666933
Abstract: A 3D image display device and method are provided. The 3D image display device divides a first 3D image into multiple depth layers, determines irregular pixels corresponding to the divided depth layers, generates second 3D images corresponding to the depth layers, respectively, using the corresponding irregular pixels, and synthesizes the generated images, thereby providing a final high-resolution 3D image.
    Type: Grant
    Filed: May 14, 2015
    Date of Patent: May 26, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jiao Shaohui, Zhou Mingcai, Hong Tao, Li Weiming, Wang Haitao, Dong Kyung Nam, Wang Xiying
  • Patent number: 10664969
Abstract: Targeting of a lesion by a stereoscopic biopsy device or the like is performed simply and with high accuracy. When a stereoscopic image is displayed, designation of a predetermined position in the stereoscopic image is received to acquire position information; radiological images from the radiographing directions are displayed as two-dimensional images; a mark based on the position information designated in the stereoscopic image is displayed in the two-dimensional images; and, after the mark is displayed, designation of a predetermined position in the two-dimensional images is further received to acquire its position information.
    Type: Grant
    Filed: April 19, 2017
    Date of Patent: May 26, 2020
    Assignee: FUJIFILM Corporation
    Inventors: Hiroki Nakayama, Akira Hasegawa, Harlan Romsdahl
  • Patent number: 10663750
    Abstract: Apparatus and methods for super-resolution imaging of extended objects utilize a scanned illumination source with a small spot size. Sub-images composed of highly-overlapping point spread functions are captured and each sub-image is iteratively compared to a series of brightness combinations of template point spread functions in a dictionary. The dictionary is composed of highly-overlapping point spread functions. Each sub-image is associated with a best-match template combination solution, and the best-match reconstructions created by best-match solutions are combined into a super-resolution image of the object.
    Type: Grant
    Filed: March 15, 2017
    Date of Patent: May 26, 2020
    Assignee: The Regents of the University of Colorado, a body
    Inventors: Juin-Yann Yu, Carol J. Cogswell, Simeng Chen, Robert H. Cormack, Jian Xing
  • Patent number: 10665028
    Abstract: In one embodiment, a method includes determining, using one or more location sensors of a computing device, an approximate location of the computing device, identifying a content object located within a threshold distance of the approximate location, wherein an augmented-reality map associates the content object with a stored model of a real-world object and specifies a location of the content object on or relative to the stored model of the real-world object, obtaining an image from a camera of the device, identifying, in the image, a target real-world object that matches the stored model of the real-world object, determining a content object location based on a location of the target real-world object and the location of the content object on or relative to the model of the real-world object, and displaying the content object at the content object location.
    Type: Grant
    Filed: August 9, 2018
    Date of Patent: May 26, 2020
    Assignee: Facebook, Inc.
    Inventors: Matthew Adam Simari, Alvaro Collet Romea, Krishnan Kumar Ramnath
  • Patent number: 10656724
    Abstract: Embodiments described herein includes a system comprising a processor coupled to display devices, sensors, remote client devices, and computer applications. The computer applications orchestrate content of the remote client devices simultaneously across at least one of the display devices and the remote client devices, and allow simultaneous control of the display devices. The simultaneous control includes automatically detecting a gesture of at least one object from gesture data received via the sensors. The gesture data is absolute three-space location data of an instantaneous state of the at least one object at a point in time and space. The detecting comprises aggregating the gesture data, and identifying the gesture using only the gesture data. The computer applications translate the gesture to a gesture signal, and control at least one of the display devices and the remote client devices in response to the gesture signal.
    Type: Grant
    Filed: March 12, 2018
    Date of Patent: May 19, 2020
    Assignee: Oblong Industries, Inc.
    Inventors: Kwindla Hultman Kramer, John Underkoffler, Carlton Sparrell, Navjot Singh, Kate Hollenbach, Paul Yarin
  • Patent number: 10659677
    Abstract: A camera parameter set calculation apparatus calculates three-dimensional coordinate sets based on a first image obtained by a first camera mounted on a mobile apparatus, a second image obtained by a second camera arranged on or in an object different from the mobile apparatus, a camera parameter set of the first camera, and a camera parameter set of the second camera, determines first pixel coordinate pairs obtained by projecting the three-dimensional coordinate sets onto the first image based on the first camera parameter set and second pixel coordinate pairs obtained by projecting the three-dimensional coordinate sets onto the second image based on the second camera parameter set, calculates an evaluation value based on pixel values at the first pixel coordinate pairs and pixel values at the second pixel coordinate pairs, and updates the camera parameter set of the first camera based on the evaluation value.
    Type: Grant
    Filed: July 10, 2018
    Date of Patent: May 19, 2020
Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Kunio Nobori, Nobuhiko Wakai, Satoshi Sato, Takeo Azuma
  • Patent number: 10659764
    Abstract: Apparatuses, methods and storage media for providing a depth image of an object are described. In some embodiments, the apparatus may include a projector to project a light pattern on an object, and to move the projected light pattern over the object, to swipe the object with the light pattern, and a camera coupled with the projector. The camera may include a dynamic vision sensor (DVS) device, to capture changes in at least some image elements that correspond to an image of the object, during the swipe of the object with the light pattern. The apparatus may further include a processor coupled with the projector and the camera, to generate a depth image of the object, based at least in part on the changes in the at least some image elements. Other embodiments may be described and claimed.
    Type: Grant
    Filed: June 20, 2016
    Date of Patent: May 19, 2020
    Assignee: Intel Corporation
    Inventor: Nizan Horesh
  • Patent number: 10659753
Abstract: A photogrammetry system and method are provided. The photogrammetry system includes a two-dimensional (2D) camera operable to acquire a 2D image at a first resolution and a second resolution, and a 2D video image at the second resolution. A controller performs a method that includes acquiring a first 2D image of an object with the 2D camera at the first resolution. At least one feature on the object is identified in the first 2D image. An image sequence having a second position is determined. A plurality of second 2D images are acquired with the 2D camera at the second resolution. The 2D camera is tracked, and a direction of movement is indicated on the display. A third 2D image of the object is acquired when the 2D camera reaches the second position. Three-dimensional coordinates of the object are determined based on the first 2D image and the third 2D image.
    Type: Grant
    Filed: October 8, 2019
    Date of Patent: May 19, 2020
    Assignee: FARO TECHNOLOGIES, INC.
    Inventors: Matthias Wolke, Rolf Heidemann
  • Patent number: 10650042
    Abstract: Systems and methods of the present disclosure can use machine-learned image descriptor models for image retrieval applications and other applications. A trained image descriptor model can be used to analyze a plurality of database images to create a large-scale index of keypoint descriptors associated with the database images. An image retrieval application can provide a query image as input to the trained image descriptor model, resulting in receipt of a set of keypoint descriptors associated with the query image. Keypoint descriptors associated with the query image can be analyzed relative to the index to determine matching descriptors (e.g., by implementing a nearest neighbor search). Matching descriptors can then be geometrically verified and used to identify one or more matching images from the plurality of database images to retrieve and provide as output (e.g., by providing for display) within the image retrieval application.
    Type: Grant
    Filed: September 3, 2019
    Date of Patent: May 12, 2020
    Assignee: Google LLC
    Inventors: Andre Filgueiras de Araujo, Jiwoong Sim, Bohyung Han, Hyeonwoo Noh
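The retrieval step reduces to nearest-neighbour search over keypoint descriptor vectors. A brute-force NumPy sketch; real systems use approximate indexes over millions of descriptors, and the geometric verification stage is omitted:

```python
import numpy as np

def nearest_descriptors(query, index):
    """Brute-force nearest-neighbour search from query keypoint descriptors
    into an index of database descriptors, returning the index of the best
    match for each query descriptor."""
    # Squared Euclidean distances between every query/index pair.
    d2 = ((query[:, None, :] - index[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

# Tiny 2-D stand-ins for real high-dimensional descriptors.
index = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
query = np.array([[0.9, 0.1], [0.1, 0.8]])
matches = nearest_descriptors(query, index)
```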
  • Patent number: 10650531
Abstract: A system, computer-readable medium, and method for improving semantic mapping and traffic participant detection for an autonomous vehicle are provided. The methods and systems may include obtaining a two-dimensional image, obtaining a three-dimensional point cloud comprising a plurality of points, performing semantic segmentation on the image to map objects to discrete pixel colors, overlaying the semantic segmentation on the image to generate an updated image, generating superpixel clusters from the semantic segmentation to group like pixels together, projecting the point cloud onto the updated image comprising the superpixel clusters, and removing points determined to be noise or errors from the point cloud based on noisy points identified within each superpixel cluster.
    Type: Grant
    Filed: March 16, 2018
    Date of Patent: May 12, 2020
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Athmanarayanan Lakshmi Narayanan, Yi-Ting Chen
  • Patent number: 10650608
Abstract: A system and method for constructing a 3D scene model comprising 3D objects and representing a scene, based upon a prior 3D scene model. The method comprises the steps of acquiring an image of the scene; initializing the computed 3D scene model to the prior 3D scene model; and modifying the computed 3D scene model to be consistent with the image. The step of modifying the computed 3D scene model consists of the sub-steps of comparing data of the image with objects of the 3D scene model, resulting in associated data and unassociated data; using the unassociated data to compute new objects that are not in the 3D scene model and adding the new objects to the 3D scene model; and using the associated data to detect objects in the prior 3D scene model that are absent and removing the absent objects from the 3D scene model. The present invention may also be used to construct multiple alternative 3D scene models.
    Type: Grant
    Filed: October 8, 2008
    Date of Patent: May 12, 2020
    Assignee: Strider Labs, Inc.
    Inventors: Eliot Leonard Wegbreit, Gregory D. Hager
  • Patent number: 10650223
Abstract: Described are techniques for indoor mapping and navigation. A reference mobile device includes sensors to capture range, depth, and position data, and processes such data. The reference mobile device further includes a processor that is configured to process the captured data to generate a 2D or 3D mapping of localization information of the device that is rendered on a display unit, execute object recognition to identify types of installed devices of interest in a part of the 2D or 3D device mapping, and integrate the 3D device mapping of the built environment to objects in the environment through capturing point cloud data along with 2D image or video frame data of the built environment.
    Type: Grant
    Filed: November 6, 2018
    Date of Patent: May 12, 2020
    Assignee: TYCO FIRE & SECURITY GMBH
    Inventors: Manjuprakash Rama Rao, Rambabu Chinta, Surajit Borah
  • Patent number: 10648799
    Abstract: The present disclosure relates to a method for generating a monochrome permutation structured-light pattern and a structured-light system using the method. The method for generating a monochrome structured-light pattern that is represented with two or more luminous intensities of a single color includes selecting two different luminous intensities of a single color, generating a stripe for each selected luminous intensity, generating, apart from the two luminous intensities selected previously, at least one combination stripe by combining regions of two or more different luminous intensities that are identical to or different from the previously selected two luminous intensities, generating permutations using the generated stripes as elements, setting each permutation as a subpattern, and introducing permutation overlapping into the set subpatterns to generate a monochrome structured-light pattern in which each subpattern is connected in a sequentially overlapping manner.
    Type: Grant
    Filed: July 16, 2018
    Date of Patent: May 12, 2020
    Assignee: SOGANG UNIVERSITY RESEARCH FOUNDATION
    Inventors: Changsoo Je, Hyung-Min Park
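The pattern construction described above (permutations of intensity stripes as subpatterns, chained with overlap) can be sketched as below. This is a rough illustration, not the patented construction: the intensity levels, the greedy suffix/prefix overlap, and the subpattern length `k` are all assumptions.

```python
from itertools import permutations

def build_pattern(levels, k=3):
    """Generate length-k permutations of intensity levels as subpatterns,
    then greedily chain them, overlapping consecutive subpatterns by up to
    k-1 stripes where a suffix of the sequence matches the next prefix."""
    subs = [list(p) for p in permutations(levels, k)]
    seq = subs[0][:]
    for sub in subs[1:]:
        overlap = 0
        # longest suffix of seq that matches a prefix of sub
        for o in range(min(len(seq), k - 1), 0, -1):
            if seq[-o:] == sub[:o]:
                overlap = o
                break
        seq.extend(sub[overlap:])
    return subs, seq

subs, seq = build_pattern([64, 128, 192], k=3)
# 6 subpatterns; the chained sequence is shorter than simple concatenation
```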
  • Patent number: 10645417
    Abstract: Video blocks of stereo or non-stereo video sequences are coded using a parameterized motion model. For example, encoding a current block of a stereo video sequence can include determining a block-level disparity between first and second frames and identifying plane normal candidates within the current block of the first frame based on the block-level disparity. One of the plane normal candidates is selected based on rate-distortion values, and warping parameters are determined for predicting motion within the current block using the selected plane normal candidate. The current block is then encoded using a reference block generated by applying the warping parameters. Decoding that encoded block can include receiving a bitstream representing an encoded stereo video sequence, determining warping parameters for predicting motion within the encoded block based on syntax elements encoded to the bitstream, and decoding the encoded block using a reference block generated by applying the warping parameters.
    Type: Grant
    Filed: October 9, 2017
    Date of Patent: May 5, 2020
    Assignee: GOOGLE LLC
    Inventors: Yunqing Wang, Yiming Qian
  • Patent number: 10645299
    Abstract: The present application provides a method for tracking and shooting a moving target and a tracking device. First, feature points of a template image corresponding to each shooting angle are extracted in advance; thereafter, only the feature points of the currently shot target image need to be calculated when matching. The feature points of the target image are then matched with the feature points of the template image corresponding to each shooting angle, and the shooting angle corresponding to the matched template image is determined as the shooting angle of the currently shot target image. If the shooting angle of the currently shot target image does not coincide with the preset shooting angle of the target, the tracking device is moved to track and shoot the target so that the shooting angle of the target coincides with the preset shooting angle of the target, greatly reducing the calculation amount of feature point detection and matching and improving the real-time performance of the tracking and shooting.
    Type: Grant
    Filed: July 31, 2017
    Date of Patent: May 5, 2020
    Assignee: GOERTEK INC.
    Inventors: Weiyu Cao, Xiang Chen, Dachuan Zhao
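The matching step described above (compare the current frame's feature descriptors against precomputed per-angle template descriptors and pick the best-matching angle) might look roughly like this. The descriptor format, the mean nearest-neighbour score, and the angle keys are illustrative assumptions, not the patented matching scheme.

```python
import numpy as np

def best_angle(frame_desc, template_descs):
    """Return the shooting angle whose precomputed template descriptors best
    match the current frame's descriptors (smallest mean nearest-neighbour
    distance)."""
    best, best_score = None, np.inf
    for angle, tdesc in template_descs.items():
        # pairwise distances: each frame descriptor vs. each template descriptor
        d = np.linalg.norm(frame_desc[:, None, :] - tdesc[None, :, :], axis=2)
        score = d.min(axis=1).mean()
        if score < best_score:
            best, best_score = angle, score
    return best

rng = np.random.default_rng(0)
t0 = rng.normal(size=(20, 32))    # hypothetical descriptors for angle 0
t90 = rng.normal(size=(20, 32))   # hypothetical descriptors for angle 90
frame = t0[:10] + 0.01            # frame resembling the angle-0 template
angle = best_angle(frame, {0: t0, 90: t90})
```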
  • Patent number: 10643397
    Abstract: A virtual reality system, comprising an electronic 2D interface having a depth sensor, the depth sensor allowing a user to provide input to the system to instruct the system to create a virtual 3D object in a real-world environment. The virtual 3D object is created with reference to at least one external physical object in the real-world environment, with the external physical object concurrently displayed with the virtual 3D object by the interface. The virtual 3D object is based on physical artifacts of the external physical object.
    Type: Grant
    Filed: March 19, 2018
    Date of Patent: May 5, 2020
    Assignee: Purdue Research Foundation
    Inventors: Ke Huo, Vinayak Raman Krishnamurthy, Karthik Ramani
  • Patent number: 10638115
    Abstract: In this calibration device, manual entry of real-space coordinate information is not required. An imaging system is configured to include an image acquisition unit which acquires an image from an imaging device, an object extraction unit which extracts a plurality of objects from the image, a characteristic information addition unit which adds geometry information that indicates geometric relationships among the plurality of objects to each set of objects as characteristic information, a camera parameter estimation unit which obtains image coordinates of the objects in accordance with the characteristic information to estimate camera parameters based on the characteristic information and the image coordinates, and a camera parameter output unit which outputs the camera parameters.
    Type: Grant
    Filed: October 16, 2015
    Date of Patent: April 28, 2020
    Assignee: HITACHI, LTD.
    Inventors: So Sasatani, Ryou Yumiba, Masaya Itoh
  • Patent number: 10635979
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for determining a clustering of images into a plurality of semantic categories. In one aspect, a method comprises: training a categorization neural network, comprising, at each of a plurality of iterations: processing an image depicting an object using the categorization neural network to generate (i) a current prediction for whether the image depicts an object or a background region, and (ii) a current embedding of the image; determining a plurality of current cluster centers based on the current values of the categorization neural network parameters, wherein each cluster center represents a respective semantic category; and determining a gradient of an objective function that includes a classification loss and a clustering loss, wherein the clustering loss depends on a similarity between the current embedding of the image and the current cluster centers.
    Type: Grant
    Filed: July 15, 2019
    Date of Patent: April 28, 2020
    Assignee: Google LLC
    Inventors: Steven Hickson, Anelia Angelova, Irfan Aziz Essa, Rahul Sukthankar
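The objective described in this abstract combines a classification loss with a clustering loss that depends on the similarity between an image's embedding and the cluster centers. A minimal sketch of such a joint objective is below; the softmax cross-entropy, the squared-distance-to-nearest-center clustering term, and the weighting factor `alpha` are assumptions, not the patented formulation.

```python
import numpy as np

def joint_loss(logits, label, embedding, centers, alpha=0.5):
    """Classification loss (softmax cross-entropy over object-vs-background
    logits) plus a clustering loss (squared distance from the embedding to
    its nearest cluster center), weighted by alpha."""
    z = logits - logits.max()                 # stabilized softmax
    probs = np.exp(z) / np.exp(z).sum()
    cls_loss = -np.log(probs[label])
    dists = ((centers - embedding) ** 2).sum(axis=1)
    clu_loss = dists.min()                    # pull toward nearest center
    return cls_loss + alpha * clu_loss

logits = np.array([2.0, 0.0])                 # object vs. background
centers = np.array([[1.0, 1.0], [5.0, 5.0]])  # hypothetical cluster centers
loss = joint_loss(logits, 0, np.array([1.0, 1.0]), centers)
```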
  • Patent number: 10636165
    Abstract: An information processing apparatus configured to store a plurality of images captured by an imaging device, store first position information and first orientation information indicating positions and orientations of the imaging device in capturing of each of the plurality of images, identify, from among the plurality of images, a first image resembling a second image, identify a first area included in the first image, identify a second area, included in the second image, corresponding to the first area, identify second position information and second orientation information indicating a position and an orientation of the imaging device respectively in capturing of the second image, based on a comparison between a first luminance of a first pixel included in the first area and a second luminance of a second pixel included in the second area and the first position information and the first orientation information of the first image.
    Type: Grant
    Filed: February 23, 2018
    Date of Patent: April 28, 2020
    Assignee: FUJITSU LIMITED
    Inventors: Atsunori Moteki, Nobuyasu Yamaguchi, Taichi Murase
  • Patent number: 10636193
    Abstract: A virtual reality (VR) or augmented reality (AR) head mounted display (HMD) includes various image capture devices that capture images of portions of the user's face and body. Through image analysis, points of each portion of the user's face and body are identified from the images and their movement is tracked. The identified points are mapped to a three dimensional model of a face and to a three dimensional model of a body. From the identified points, animation parameters describing positioning of various points of the user's face and body are determined for each captured image. From the animation parameters and transforms mapping the captured images to three dimensions, the three dimensional model of the face and the three dimensional model of the body is altered to render movement of the user's face and body.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: April 28, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Yaser Sheikh, Hernan Badino, Alexander Trenor Hypes, Dawei Wang, Mohsen Shahmohammadi, Michal Perdoch, Jason Saragih, Shih-En Wei
  • Patent number: 10636115
    Abstract: An information processing apparatus that receives image data from a server apparatus, comprising: a generation unit configured to generate a push instruction that includes identification information regarding one or more projection methods of a plurality of projection methods that are applicable to a projection target image; a transmitting unit configured to transmit a push instruction generated by the generation unit to the server apparatus; and a receiving unit configured to receive image data pushed from the server apparatus in response to a push instruction transmitted by the transmitting unit, the image data being generated by projecting a projection target image, using a projection method that is decided based on identification information that is included in the push instruction.
    Type: Grant
    Filed: March 13, 2019
    Date of Patent: April 28, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventor: Tomoya Sakai
  • Patent number: 10636155
    Abstract: A method for depth mapping includes acquiring first depth data with respect to an object using a first depth mapping technique and providing first candidate depth coordinates for a plurality of pixels, and acquiring second depth data with respect to the object using a second depth mapping technique, different from the first depth mapping technique, and providing second candidate depth coordinates for the plurality of pixels. A weighted voting process is applied to the first and second depth data in order to select one of the candidate depth coordinates at each pixel. A depth map of the object is output, including the selected one of the candidate depth coordinates at each pixel.
    Type: Grant
    Filed: November 5, 2018
    Date of Patent: April 28, 2020
    Assignee: APPLE INC.
    Inventors: Alexander Shpunt, Gerard Medioni, Daniel Cohen, Erez Sali, Ronen Deitch
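The weighted voting described in this abstract, which selects one candidate depth coordinate per pixel from two depth-mapping techniques, can be sketched as follows. The per-pixel confidence maps used as voting weights here are an assumption for illustration; the patent's actual voting process is not specified in this abstract.

```python
import numpy as np

def fuse_depth(depth_a, weight_a, depth_b, weight_b):
    """At each pixel, select the candidate depth whose vote carries the
    larger weight, yielding a single fused depth map."""
    pick_a = weight_a >= weight_b
    return np.where(pick_a, depth_a, depth_b)

# hypothetical candidate depth maps and confidence weights
depth_a = np.array([[1.0, 2.0], [3.0, 4.0]])
depth_b = np.array([[1.5, 2.5], [3.5, 4.5]])
weight_a = np.array([[0.9, 0.2], [0.8, 0.1]])
weight_b = np.array([[0.1, 0.8], [0.2, 0.9]])
fused = fuse_depth(depth_a, weight_a, depth_b, weight_b)
# each pixel takes the depth from whichever technique had the higher weight
```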
  • Patent number: 10636110
    Abstract: One embodiment provides for a graphics processing apparatus comprising first logic to rasterize pixel regions associated with multiple interleaved primitives; second logic to shade pixel regions covered by one or more of the multiple interleaved primitives; and third logic to interleave output of the second logic for the multiple interleaved primitives to a single render target, the single render target including output associated with the multiple interleaved primitives.
    Type: Grant
    Filed: June 28, 2016
    Date of Patent: April 28, 2020
    Assignee: Intel Corporation
    Inventor: Rahul P. Sathe
  • Patent number: 10638038
    Abstract: An embodiment method for enhancing the intrinsic spatial resolution of an optical sensor includes projecting, by an optical source of the optical sensor, a plurality of illumination patterns onto an object and detecting, by an optical detector of the optical sensor, reflected radiation for each of the plurality of illumination patterns. The method further includes generating, by a processor of the optical sensor, a plurality of sub-images, where each sub-image corresponds to a respective illumination pattern of the plurality of illumination patterns, each sub-image having a first image resolution. The method additionally includes reconstructing, by the processor and from the plurality of sub-images, an image having a second image resolution, the second image resolution being finer than the first image resolution.
    Type: Grant
    Filed: July 27, 2017
    Date of Patent: April 28, 2020
    Assignee: STMICROELECTRONICS (RESEARCH & DEVELOPMENT) LIMITED
    Inventor: James Peter Drummond Downing
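The reconstruction step this abstract describes (combining low-resolution sub-images, one per illumination pattern, into a finer image) can be sketched with a simple interleaving scheme. This assumes the illumination patterns sample a regular sub-pixel shift grid, which is an illustrative simplification rather than the patented reconstruction.

```python
import numpy as np

def reconstruct(sub_images, factor):
    """Interleave sub-images captured under shifted illumination patterns
    onto a grid `factor` times finer than the sensor resolution."""
    h, w = sub_images[0].shape
    hi = np.zeros((h * factor, w * factor))
    for idx, sub in enumerate(sub_images):
        dy, dx = divmod(idx, factor)  # sub-pixel shift for this pattern
        hi[dy::factor, dx::factor] = sub
    return hi

# four hypothetical 2x2 sub-images -> one 4x4 reconstruction (factor 2)
subs = [np.full((2, 2), float(i)) for i in range(4)]
hi = reconstruct(subs, factor=2)
```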
  • Patent number: 10628948
    Abstract: An image registration device includes a mapping section for deciding a first mapping for transforming the first image to an environmental map and a second mapping for transforming the second image to an environmental map, a corresponding point pair extractor for extracting a pair of corresponding points by detecting one point in the first image and the corresponding one point in the second image, a rotational mapping deriver for deriving a rotational mapping for registering an image of the first image in the environmental map and an image of the second image in the environmental map with each other, based on positions and local feature amounts of the points in the first and second images, and a registration section for registering the data of the first image with the data of the second image based on the first mapping, the rotational mapping, and the second mapping.
    Type: Grant
    Filed: January 24, 2017
    Date of Patent: April 21, 2020
    Assignee: Japan Science and Technology Agency
    Inventors: Atsushi Nakazawa, Christian Nitschke
  • Patent number: 10628967
    Abstract: An image processing system calibrates multiple cameras of a camera rig system. The image processing system may correct barrel distortion of raw images to generate rectilinear images. The image processing system identifies key points in neighboring rectilinear image pairs captured by adjacent cameras. The image processing system may generate a parameter vector by solving an optimization problem using a gradient descent loop and the key points. The optimization problem reduces a displacement error to align the key points of the rectilinear images by adjusting calibration of the cameras or a transform of the images (which corresponds to camera calibration). The image processing system may jointly rectify, i.e., calibrate, multiple cameras simultaneously.
    Type: Grant
    Filed: December 21, 2017
    Date of Patent: April 21, 2020
    Assignee: Facebook, Inc.
    Inventor: Forrest Samuel Briggs
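The optimization this abstract describes (a gradient descent loop that reduces the displacement error between key points of neighboring rectilinear images) can be sketched in a toy form where the parameter vector is just a 2D offset for one camera. The full method optimizes richer calibration parameters; the learning rate, step count, and squared-error objective here are assumptions.

```python
import numpy as np

def align_offsets(points_a, points_b, lr=0.1, steps=200):
    """Gradient-descend a 2D offset applied to camera B's key points so they
    align with camera A's corresponding key points (mean squared error)."""
    offset = np.zeros(2)
    for _ in range(steps):
        residual = points_b + offset - points_a   # per-point displacement
        grad = 2 * residual.mean(axis=0)          # d(MSE)/d(offset)
        offset -= lr * grad
    return offset

pts_b = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
pts_a = pts_b + np.array([1.0, -2.0])   # camera A sees points shifted
offset = align_offsets(pts_a, pts_b)
# offset converges to approximately [1.0, -2.0]
```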
  • Patent number: 10628982
    Abstract: A visual characteristic on a physical surface in the physical space is determined from an image of a physical space, with the physical surface including a first area inside the visual characteristic and a second area outside of the visual characteristic. A first region of virtual space that corresponds to the first area and a second region of virtual space that corresponds to the second area are determined based at least in part on the visual characteristic. Information that includes a position and orientation of a digital object relative to the first region and the second region in the virtual space is determined based at least in part on the visual characteristic. The digital object is rendered at a position and orientation in the virtual space in accordance with the information.
    Type: Grant
    Filed: May 14, 2018
    Date of Patent: April 21, 2020
    Assignee: Vulcan Inc.
    Inventors: Richard Earl Simpkinson, Carlos Miguel Lugtu, Eric Thomas Smith
  • Patent number: 10621758
    Abstract: This invention provides a technique of appropriately managing a high-resolution tomographic image and effectively displaying an arbitrary cross section even under an environment where a 3D texture has a size limitation. An information processing apparatus includes a determination unit adapted to determine whether a size of a plurality of tomographic images acquired from a single object is not more than a predetermined size, a management unit adapted to, upon determining that the size is not more than the predetermined size, manage the plurality of tomographic images as three-dimensional voxel data, a decision unit adapted to decide a cross-sectional image as a display target of an object image managed as the three-dimensional voxel data, and a display control unit adapted to cause a display unit to display the decided cross-sectional image.
    Type: Grant
    Filed: February 27, 2019
    Date of Patent: April 14, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventors: Yuichi Nishii, Nobu Miyazawa
  • Patent number: 10617489
    Abstract: Creating a digital tooth model of a patient's tooth using interproximal information is provided. Interproximal information is received that represents a space between adjacent physical teeth of the patient. A digital teeth model of a set of physical teeth of the patient that includes the adjacent physical teeth is received. One or more digital tooth models is created that more accurately depicts one or more of the physical teeth than the corresponding digital teeth included in the digital teeth model based on the interproximal information.
    Type: Grant
    Filed: March 5, 2013
    Date of Patent: April 14, 2020
    Assignee: Align Technology, Inc.
    Inventors: Bob Grove, Eric Kuo
  • Patent number: 10623714
    Abstract: A stereoscopic display device is capable of adjusting visual effects. The display device has a display module, an optical modulator, a storage element, and a controller. The display module has a plurality of pixels. The optical modulator is disposed on the display module and modulates light emitted from the display module to corresponding directions. The optical modulator has a plurality of lenses each having a reference line. The storage element stores a pixel offset map containing pixel offsets between the center of each pixel of the plurality of pixels and the closest reference line of the plurality of lenses. The controller is coupled to the display module and the storage element, and is used to adjust data of each pixel according to the pixel offset map.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: April 14, 2020
    Assignee: InnoLux Corporation
    Inventor: Naoki Sumi