3-D or Stereo Imaging Analysis Patents (Class 382/154)
  • Patent number: 11907414
    Abstract: An animation system includes an animated figure, multiple sensors, and an animation controller that includes a processor and a memory. The memory stores instructions executable by the processor. The instructions cause the animation controller to receive guest detection data from the multiple sensors, receive shiny object detection data from the multiple sensors, determine an animation sequence of the animated figure based on the guest detection data and shiny object detection data, and transmit a control signal indicative of the animation sequence to cause the animated figure to execute the animation sequence. The guest detection data is indicative of a presence of a guest near the animated figure. The animation sequence is responsive to a shiny object detected on or near the guest based on the guest detection data and the shiny object detection data.
    Type: Grant
    Filed: May 31, 2022
    Date of Patent: February 20, 2024
    Assignee: Universal City Studios LLC
    Inventors: David Michael Churchill, Clarisse Vamos, Jeffrey A. Bardt
  • Patent number: 11902500
    Abstract: A light field (LF) display system presents holographic content to one or more viewers in a public setting for digital signage applications. In some embodiments, the LF display system includes a sensory feedback assembly, a tracking system and/or a viewer profiling module. The sensory feedback assembly may comprise sensory feedback devices that provide sensory feedback to viewers of the LF display system in tandem with the presented holographic content. The tracking system may comprise cameras used to track the viewers of the LF display system. Based on a viewer's tracked position and/or tracked gaze, the LF display system may generate holographic content that is perceivable by certain viewers but not viewable by others. The viewer profiling module may identify each viewer for providing personalized holographic content and may further monitor and record behavior of viewers for informing subsequent presentations of holographic content by the LF display system.
    Type: Grant
    Filed: August 9, 2019
    Date of Patent: February 13, 2024
    Assignee: Light Field Lab, Inc.
    Inventors: Jonathan Sean Karafin, Brendan Elwood Bevensee, John Dohm
  • Patent number: 11900535
    Abstract: The following relates generally to light detection and ranging (LIDAR) and artificial intelligence (AI). In some embodiments, a system: receives LIDAR data generated from a LIDAR camera; measures a plurality of dimensions of a landscape based upon processor analysis of the LIDAR data; builds a 3D model of the landscape based upon the measured plurality of dimensions, the 3D model including: (i) a structure, and (ii) a vegetation; and displays a representation of the 3D model.
    Type: Grant
    Filed: April 26, 2021
    Date of Patent: February 13, 2024
    Assignee: STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY
    Inventors: Nicholas Carmelo Marotta, Laura Kennedy, J D Johnson Willingham
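    Illustrative sketch (Python/numpy, not from the patent): the structure/vegetation split described in the abstract above can be approximated by classifying LIDAR returns by height over a per-cell ground estimate. The grid size and thresholds below are assumed values, and the height-only rule is a simplification of whatever the patented analysis does.
      import numpy as np

      def classify_landscape(points, cell=1.0, veg_max_height=4.0):
          """Split LIDAR points (N x 3, x/y/z in meters) into ground, vegetation,
          and structure classes using a per-cell minimum-z ground estimate."""
          ij = np.floor(points[:, :2] / cell).astype(int)
          ij -= ij.min(axis=0)                      # shift cell indices to start at 0
          keys = ij[:, 0] * (ij[:, 1].max() + 1) + ij[:, 1]

          ground = np.full(keys.max() + 1, np.inf)  # per-cell ground = lowest return in the cell
          np.minimum.at(ground, keys, points[:, 2])

          height = points[:, 2] - ground[keys]      # height above local ground
          labels = np.where(height < 0.3, "ground",
                   np.where(height < veg_max_height, "vegetation", "structure"))
          return labels, height

      # Usage with synthetic points:
      pts = np.random.rand(1000, 3) * [50, 50, 10]
      labels, h = classify_landscape(pts)
      print({c: int((labels == c).sum()) for c in ("ground", "vegetation", "structure")})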
  • Patent number: 11896441
    Abstract: A measurement system accesses first and second images captured respectively from first and second vantage points by first and second cameras included within a stereoscopic endoscope located at a surgical area associated with a patient. The measurement system receives user input designating a user-selected two-dimensional ("2D") endpoint corresponding to a feature within the surgical area as represented in the first image, and identifies, based on the user-selected 2D endpoint, a matched 2D endpoint corresponding to the feature as represented in the second image. Based on the user-selected and matched 2D endpoints, the measurement system defines a three-dimensional ("3D") endpoint corresponding to the feature within the surgical area. The measurement system then determines a distance from the 3D endpoint to an additional 3D endpoint corresponding to an additional feature within the surgical area. Corresponding systems and methods are also described.
    Type: Grant
    Filed: May 2, 2019
    Date of Patent: February 13, 2024
    Assignee: Intuitive Surgical Operations, Inc.
    Inventors: Rohitkumar Godhani, Brian D. Hoffman
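    Illustrative sketch (Python/numpy, not from the patent): the measurement described above reduces to two-view triangulation followed by a Euclidean distance. The code uses the standard linear (DLT) method and assumes known 3x4 projection matrices for the two endoscope cameras; it is not necessarily the patented procedure.
      import numpy as np

      def triangulate(P1, P2, pt1, pt2):
          """Linear (DLT) triangulation of one 3D point from two 2D observations.
          P1, P2: 3x4 camera projection matrices; pt1, pt2: (u, v) pixel coordinates."""
          A = np.stack([
              pt1[0] * P1[2] - P1[0],
              pt1[1] * P1[2] - P1[1],
              pt2[0] * P2[2] - P2[0],
              pt2[1] * P2[2] - P2[1],
          ])
          _, _, vt = np.linalg.svd(A)
          X = vt[-1]
          return X[:3] / X[3]                       # dehomogenize

      def endpoint_distance(P1, P2, a1, a2, b1, b2):
          """Distance between two 3D endpoints, each given by a stereo pair of 2D points."""
          return np.linalg.norm(triangulate(P1, P2, a1, a2) - triangulate(P1, P2, b1, b2))

      # Toy usage: two cameras separated by a 10 mm baseline along x.
      K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
      P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
      P2 = K @ np.hstack([np.eye(3), np.array([[-10.], [0.], [0.]])])
      project = lambda P, X: (P @ X)[:2] / (P @ X)[2]
      A, B = np.array([5., 2., 100., 1.]), np.array([5., 14., 100., 1.])
      print(endpoint_distance(P1, P2, project(P1, A), project(P2, A),
                              project(P1, B), project(P2, B)))   # ~12.0 (same units as the baseline)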
  • Patent number: 11902577
    Abstract: A three-dimensional data encoding method includes: obtaining geometry information which includes first three-dimensional positions on a measurement target, and is generated by a measurer that radially emits an electromagnetic wave in different directions and obtains a reflected wave which is the electromagnetic wave that is reflected by the measurement target; generating a two-dimensional image including first pixels corresponding to the directions, based on the geometry information; and encoding the two-dimensional image to generate a bitstream. Each of the first pixels has a pixel value indicating a first three-dimensional position or attribute information of a three-dimensional point which is included in a three-dimensional point cloud and corresponds to a direction to which the first pixel corresponds among the directions.
    Type: Grant
    Filed: July 21, 2021
    Date of Patent: February 13, 2024
    Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
    Inventors: Toshiyasu Sugio, Noritaka Iguchi, Pongsak Lasang, Chi Wang, Chung Dean Han
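    Illustrative sketch (Python/numpy, not from the patent): the key idea of mapping radially emitted directions to pixels so a 2D codec can compress the geometry can be shown with a simple spherical (range-image) projection. The 64 x 1024 angular resolution is an assumed parameter, and a real encoder would also carry attribute channels and the bitstream packaging the claim describes.
      import numpy as np

      def points_to_range_image(points, h=64, w=1024):
          """Project an (N, 3) point cloud into an h x w range image whose pixel
          coordinates correspond to azimuth/elevation emission directions.
          Pixel value = distance to the point (0 where there is no return)."""
          x, y, z = points[:, 0], points[:, 1], points[:, 2]
          r = np.linalg.norm(points, axis=1)
          azimuth = np.arctan2(y, x)                          # [-pi, pi]
          elevation = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1, 1))

          col = ((azimuth + np.pi) / (2 * np.pi) * w).astype(int) % w
          lo, hi = elevation.min(), elevation.max()
          row = ((elevation - lo) / (hi - lo + 1e-9) * (h - 1)).astype(int)

          img = np.zeros((h, w), dtype=np.float32)
          img[row, col] = r              # last write wins; a real encoder keeps the nearest return
          return img

      # The resulting image can be handed to any 2D image/video codec; decoding inverts the mapping.
      cloud = np.random.randn(5000, 3) * [10, 10, 2]
      print(points_to_range_image(cloud).shape)               # (64, 1024)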
  • Patent number: 11893680
    Abstract: A computing device for video object detection. Images from a camera are transferred in parallel to a first processor running object detection and a second processor running a 3D reconstruction. The object detection identifies a semantic object of interest and assigns a label to it and outputs the label information to an object mapper. The object mapper assigns the label to a component in the 3D model representing the object. The computing device can form part of a subsea or other harsh environment imaging system.
    Type: Grant
    Filed: October 31, 2019
    Date of Patent: February 6, 2024
    Assignee: Rovco Limited
    Inventors: Iain Wallace, Pep Lluis Negre Carrasco, Lyndon Hill
  • Patent number: 11893313
    Abstract: A computer-implemented method of machine-learning including obtaining a dataset of 3D point clouds. Each 3D point cloud includes at least one object. Each 3D point cloud is equipped with a specification of one or more graphical user-interactions each representing a respective selection operation of a same object in the 3D point cloud. The method further includes teaching, based on the dataset, a neural network configured for segmenting an input 3D point cloud including an object. The segmenting is based on the input 3D point cloud and on a specification of one or more input graphical user-interactions each representing a respective selection operation of the object in the 3D point cloud.
    Type: Grant
    Filed: December 16, 2020
    Date of Patent: February 6, 2024
    Assignee: DASSAULT SYSTEMES
    Inventors: Asma Rejeb Sfar, Tom Durand, Malika Boulkenafed
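    Illustrative sketch (Python/numpy, not from the patent): one common way to hand a click-style user interaction to a point-cloud segmentation network is to append a per-point proximity channel to the XYZ coordinates; the Gaussian encoding below is an assumed choice, not necessarily how the patented specification is represented.
      import numpy as np

      def encode_click_channel(points, click_points, sigma=0.5):
          """Augment an (N, 3) point cloud with one channel per point encoding
          proximity to the user's click(s): exp(-d^2 / (2 sigma^2)).
          A segmentation network then consumes (N, 4) features instead of (N, 3)."""
          d = np.min(np.linalg.norm(points[:, None, :] - np.asarray(click_points)[None, :, :],
                                    axis=-1), axis=1)
          heat = np.exp(-(d ** 2) / (2 * sigma ** 2))
          return np.concatenate([points, heat[:, None]], axis=1)

      # Usage: a single click near the object of interest.
      cloud = np.random.rand(2000, 3) * 5.0
      features = encode_click_channel(cloud, click_points=[[2.5, 2.5, 2.5]])
      print(features.shape)   # (2000, 4)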
  • Patent number: 11893087
    Abstract: A multimodal perception system for an autonomous vehicle includes a first sensor that is one of a video, RADAR, LIDAR, or ultrasound sensor, and a controller. The controller may be configured to receive a first signal from the first sensor, a second signal from a second sensor, and a third signal from a third sensor, extract a first feature vector from the first signal, extract a second feature vector from the second signal, extract a third feature vector from the third signal, determine an odd-one-out vector from the first, second, and third feature vectors via an odd-one-out network of a machine learning network, based on inconsistent modality prediction, fuse the first, second, and third feature vectors and odd-one-out vector into a fused feature vector, output the fused feature vector, and control the autonomous vehicle based on the fused feature vector.
    Type: Grant
    Filed: June 16, 2021
    Date of Patent: February 6, 2024
    Inventors: Karren Yang, Wan-Yi Lin, Manash Pratim, Filipe J. Cabrita Condessa, Jeremy Kolter
  • Patent number: 11893705
    Abstract: Moving images of a space, which includes objects 34 and 35 of a display target, as viewed from reference points are created in advance as reference images, and they are combined in response to actual positions of the points of view to draw a moving image. When the object 35 is displaced as indicated by an arrow mark in the space, reference points of view 30a to 30e are fixed as depicted in (a). Alternatively, the reference points of view are displaced in response to the displacement like reference points of view 36a to 36e in (b). Then, the moving images from the reference points of view are generated as the reference images.
    Type: Grant
    Filed: March 1, 2022
    Date of Patent: February 6, 2024
    Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
    Inventor: Yuki Karasawa
  • Patent number: 11893785
    Abstract: Embodiments of this application disclose an object annotation method and apparatus, a movement control method and apparatus, a device, and a storage medium. The method includes: obtaining a reference image recorded by an image sensor from an environment space, the reference image comprising at least one reference object; obtaining target point cloud data obtained by a three-dimensional space sensor by scanning the environment space, the target point cloud data indicating a three-dimensional space region occupied by a target object in the environment space; determining a target reference object corresponding to the target object from the reference image; determining a projection size of the three-dimensional space region corresponding to the target point cloud data and the three-dimensional space region being projected onto the reference image; and performing three-dimensional annotation on the target reference object in the reference image according to the determined projection size.
    Type: Grant
    Filed: October 15, 2020
    Date of Patent: February 6, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Chao Zeng
  • Patent number: 11891091
    Abstract: An example driver assistance system includes an object detection (OD) network, a semantic segmentation network, a processor, and a memory. In an example method, an image is received and stored in the memory. An object detection (OD) polygon is generated for each object detected in the image, and each OD polygon encompasses at least a portion of the corresponding object detected in the image. A region of interest (ROI) is associated with each OD polygon. Such method may further comprise generating a mask for each ROI, each mask configured as a bitmap approximating a size of the corresponding ROI; generating at least one boundary polygon for each mask based on the corresponding mask, each boundary polygon having multiple vertices and enclosing the corresponding mask; and reducing a number of vertices of the boundary polygons based on a comparison between points of the boundary polygons and respective points on the bitmaps.
    Type: Grant
    Filed: February 22, 2023
    Date of Patent: February 6, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Soyeb Noormohammed Nagori, Deepak Poddar, Hrushikesh Tukaram Garud
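    Illustrative sketch (Python/numpy, not from the patent): the vertex-reduction step is the kind of job Douglas-Peucker polygon simplification does, dropping vertices whose deviation from the remaining chord stays under a pixel tolerance. The patent compares boundary points against the bitmap; the classic algorithm below is an assumed stand-in for that comparison.
      import numpy as np

      def point_line_dist(p, a, b):
          """Perpendicular distance from 2D point p to the line through a and b."""
          if np.allclose(a, b):
              return float(np.linalg.norm(p - a))
          cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
          return abs(cross) / float(np.linalg.norm(b - a))

      def simplify(poly, tol=2.0):
          """Douglas-Peucker: drop vertices that deviate from the chord by less than tol pixels."""
          poly = np.asarray(poly, dtype=float)
          if len(poly) < 3:
              return poly
          dists = np.array([point_line_dist(p, poly[0], poly[-1]) for p in poly[1:-1]])
          i = int(dists.argmax()) + 1
          if dists.max() > tol:
              left, right = simplify(poly[:i + 1], tol), simplify(poly[i:], tol)
              return np.vstack([left[:-1], right])
          return np.vstack([poly[0], poly[-1]])

      # A jagged 100-vertex boundary collapses to a handful of vertices.
      t = np.linspace(0, np.pi, 100)
      boundary = np.stack([100 * np.cos(t), 100 * np.sin(t) + np.random.rand(100)], axis=1)
      print(len(boundary), "->", len(simplify(boundary, tol=3.0)))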
  • Patent number: 11887253
    Abstract: Embodiments of the systems and methods described herein provide a terrain generation and population system that can determine terrain population rules for terrain population objects and features when placing objects and features in a three dimensional virtual space. As such, the terrain generation and population system can generate realistic terrain for use in a game. The terrain generation and population system can receive an image, such as a satellite image, and utilize artificial intelligence to perform image segmentation at the pixel level to segment features and/or objects in the image. The game terrain system can automatically detect and apply feature and object masks based on the identified features and/or objects from the image segmentation. The game terrain system can place the features and/or objects in corresponding masks in the three dimensional space according to the application of terrain population rules.
    Type: Grant
    Filed: July 24, 2019
    Date of Patent: January 30, 2024
    Assignee: Electronic Arts Inc.
    Inventors: Han Liu, Mohsen Sardari, Harold Henry Chaput, Navid Aghdaie, Kazi Atif-Uz Zaman
  • Patent number: 11889173
    Abstract: A topographical measurement system uses an imaging cartridge formed of a rigid optical element and a clear, elastomeric sensing surface configured to capture high-resolution topographical data from a measurement surface. The imaging cartridge may be configured as a removable cartridge for the system so that the imaging cartridge, including the rigid optical element and elastomeric sensing surface can be removed and replaced as a single, integral component that is robust/stable over multiple uses, and easily user-replaceable as frequently as necessary or desired. The cartridge may also usefully incorporate a number of light shaping and other features to support optimal illumination and image capture.
    Type: Grant
    Filed: May 17, 2021
    Date of Patent: January 30, 2024
    Assignee: GelSight, Inc.
    Inventors: Janos Rohaly, Edward H. Adelson
  • Patent number: 11880993
    Abstract: An image processing device used in determining a distance to an object includes a memory that stores a stereo image of the object including first and second images, and a processor configured to detect first and second reference lines in the first image, calculate disparity between the first and second images, correct, using the calculated disparity, a position of the first reference line in the second image, and calculate a parameter for determining the distance to the object, the parameter indicating a difference between the first and second images based on a distance between the first and second reference lines in the first image and disparity between the first reference line in the first image and the corrected first reference line in the second image.
    Type: Grant
    Filed: February 25, 2019
    Date of Patent: January 23, 2024
    Assignees: Kabushiki Kaisha Toshiba, Toshiba Electronic Devices & Storage Corporation
    Inventor: Yutaka Oki
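    Illustrative sketch (Python, not from the patent): once a disparity is available, the distance determination follows the standard stereo relation Z = f * B / d (focal length in pixels times baseline, divided by disparity). The intrinsics and disparities below are assumed example numbers, used only to show how a reference-line correction shifts the estimated distance.
      # Standard pinhole-stereo depth from disparity: Z = f * B / d.
      f_px = 1200.0          # focal length in pixels (assumed)
      baseline_m = 0.35      # camera separation in meters (assumed)

      def distance_from_disparity(disparity_px):
          return f_px * baseline_m / disparity_px

      raw_disparity = 21.0                 # disparity between the two images
      reference_line_correction = 1.5      # shift applied to the reference line in the 2nd image
      corrected = raw_disparity - reference_line_correction
      print(distance_from_disparity(raw_disparity))   # 20.0 m
      print(distance_from_disparity(corrected))       # ~21.5 m after the correction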
  • Patent number: 11881001
    Abstract: A jig holds an imaging apparatus including a plurality of cameras with different optical axis orientations and a chart including a plurality of planes with different angles and changes the orientation of the imaging apparatus relative to the chart. A calibration apparatus obtains camera parameters of the imaging apparatus by sequentially acquiring captured images captured by adjacent cameras when these cameras have obtained predetermined fields-of-view relative to the chart and extracting images of feature points of chart patterns.
    Type: Grant
    Filed: May 23, 2019
    Date of Patent: January 23, 2024
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Yoshihiro Myokan
  • Patent number: 11880430
    Abstract: A computer-implemented method for predicting a cropland data layer (CDL) for a current year includes: retrieving a first set of records from a historical CDL database, where the first set corresponds to sampled areas of a region taken over a period for a number of years; retrieving a second set of records from a historical imagery database, where the second set corresponds to the sampled areas of the region, the period, and the number of years; employing the second set as inputs to train a deep learning network to generate the first set; retrieving a third set of records from a current imagery database, where the third set corresponds to a prescribed region, and where the third set corresponds to the time period and the current year; and using the third set as inputs and executing the trained deep learning network to generate a predicted CDL for the current year.
    Type: Grant
    Filed: May 31, 2021
    Date of Patent: January 23, 2024
    Assignee: CIBO Technologies, Inc.
    Inventors: Ernesto Brau, R. Shane Bussmann, Ethan Sargent
  • Patent number: 11880208
    Abstract: A method for drivable area detection and autonomous obstacle avoidance of unmanned haulage equipment in deep confined spaces is disclosed, which includes the following steps: acquiring 3D point cloud data of a roadway; computing a 2D image drivable area of the coal mine roadway; acquiring a 3D point cloud drivable area of the coal mine roadway; establishing a 2D grid map and a risk map, and performing autonomous obstacle avoidance path planning by using a particle swarm path planning method designed for deep confined roadways; and acquiring an optimal end point to be selected of a driving path by using a greedy strategy, and enabling an unmanned auxiliary haulage vehicle to drive according to the optimal end point and an optimal path. Images of a coal mine roadway are acquired actively by use of a single-camera sensor device.
    Type: Grant
    Filed: February 23, 2022
    Date of Patent: January 23, 2024
    Assignee: CHINA UNIVERSITY OF MINING AND TECHNOLOGY
    Inventors: Chunyu Yang, Zhencai Zhu, Yidong Zhang, Xin Zhang, Zhen Gu, Qingguo Wang
  • Patent number: 11881000
    Abstract: This invention applies dynamic weighting between a point-to-plane and point-to-edge metric on a per-edge basis in an acquired image using a vision system. This allows an applied ICP technique to be significantly more robust to a variety of object geometries and/or occlusions. A system and method herein provides an energy function that is minimized to generate candidate 3D poses for use in alignment of runtime 3D image data of an object with model 3D image data. Since normals are much more accurate than edges, the use of normals is desirable when possible. However, in some use cases, such as a plane, edges provide information in relative directions that the normals do not. Hence the system and method defines a "normal information matrix", which represents the directions in which sufficient information is present. Performing (e.g.) a principal component analysis (PCA) on this matrix provides a basis for the available information.
    Type: Grant
    Filed: March 22, 2021
    Date of Patent: January 23, 2024
    Assignee: Cognex Corporation
    Inventors: Andrew Hoelscher, Simon Barker, Adam Wagman, David J. Michael
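    Illustrative sketch (Python/numpy, not from the patent): the "normal information matrix" can be read as the scatter matrix of the model's unit surface normals; its small eigenvalues mark directions the point-to-plane metric cannot constrain, which is where point-to-edge terms should carry more weight. The weighting rule at the end is an illustrative choice, not the patent's exact formula.
      import numpy as np

      def normal_information(normals):
          """Scatter matrix of unit surface normals plus its PCA (eigen-decomposition).
          Near-zero eigenvalues flag directions with no point-to-plane information."""
          N = np.asarray(normals, dtype=float)
          N /= np.linalg.norm(N, axis=1, keepdims=True)
          M = N.T @ N / len(N)                      # 3x3 "normal information matrix"
          eigvals, eigvecs = np.linalg.eigh(M)      # ascending eigenvalues
          return M, eigvals, eigvecs

      # A planar object: every normal points (roughly) along +z.
      normals = np.tile([0.0, 0.0, 1.0], (500, 1)) + 0.01 * np.random.randn(500, 3)
      M, eigvals, eigvecs = normal_information(normals)
      print(np.round(eigvals, 3))       # two near-zero eigenvalues: x/y translation unconstrained
      edge_weight = 1.0 - eigvals / eigvals.max()
      print(np.round(edge_weight, 2))   # lean on point-to-edge terms in the degenerate directions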
  • Patent number: 11875523
    Abstract: The present disclosure provides an adaptive stereo matching optimization method, apparatus, and device, and a storage medium. The method includes: acquiring images of at least two perspectives of the same target scene, accordingly obtaining, through calculation, disparity value ranges corresponding to pixels in the target scene; and obtaining optimized depth value ranges by adjusting the disparity value ranges of the pixels in the target scene in real time through an adaptive stereo matching model; adjusting an execution cycle in the adaptive stereo matching model in real time through a DVFS algorithm according to a resource constraint condition of the processing system; and/or training on a plurality of scene image data sets through a convolutional neural network, so that the specific function parameters in the adaptive stereo matching model are correspondingly adjusted in real time according to the acquired different scene images.
    Type: Grant
    Filed: September 20, 2019
    Date of Patent: January 16, 2024
    Assignee: ShanghaiTech University
    Inventors: Fupeng Chen, Heng Yu, Yajun Ha
  • Patent number: 11868896
    Abstract: The AI engine operates with the common API. The common API supports i) any of multiple different training sources and/or prediction sources installed on ii) potentially different sets of customer computing hardware in a plurality of on-premises environments, where the training sources, prediction sources, as well as the set of customer computing hardware may differ amongst the on-premises environments. The common API via its cooperation with a library of base classes is configured to allow users and third-party developers to interface with the AI-engine modules of the AI engine in an easy and predictable manner through the three or more base classes available from the library. The common API via its cooperation with the library of base classes is configured to be adaptable to the different kinds of training sources, prediction sources, and the different sets of hardware found in a particular on-premises environment.
    Type: Grant
    Filed: August 16, 2018
    Date of Patent: January 9, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Matthew Brown, Michael Estee
  • Patent number: 11869143
    Abstract: Provided are a cutting method, apparatus and system for a point cloud model. In an embodiment, the method includes: using one two-dimensional first cutting window to select a point cloud structure comprising a target object from among one point cloud model; adjusting the depth of the first cutting window, the length, width and depth of the first cutting window constituting one three-dimensional second cutting window, the target object being located in the second cutting window; identifying and marking all point cloud structures in the second cutting window to form a plurality of three-dimensional third cutting windows, the target object being located in one of the third cutting windows; and calculating the volume ratio of the point cloud structure in each third cutting window relative to the second cutting window, and selecting the third cutting window having the largest volume ratio.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: January 9, 2024
    Assignee: Siemens Ltd., China
    Inventors: Hai Feng Wang, Tao Fei
  • Patent number: 11861876
    Abstract: A three-dimensional (3D) video reconstruction method, an encoder and a decoder are provided, comprising obtaining a list of video content screens or video content frames of an object from the 3D video; obtaining a list of depth screens of the 3D video; adding a shape screen to each video frame of the 3D video; superimposing each of the video content screens or video content frames with the depth screen and the shape screen to form a shape identification library; and storing the shape identification library at a header of a compressed file for unmasking of the object. The shape recognition list format may significantly reduce the storage size and increase the compression ratio by replacing the original shape with the identifications, and help improve the rendering quality.
    Type: Grant
    Filed: March 13, 2020
    Date of Patent: January 2, 2024
    Assignee: Marvel Research Limited
    Inventors: Ying Chiu Herbert Lee, Chang Yuen Chan
  • Patent number: 11861829
    Abstract: The present disclosure provides a deep learning based medical image detection method and apparatus, a computer-readable medium, and an electronic device. The method includes: acquiring a to-be-detected medical image comprising a plurality of slices; for each slice in the to-be-detected medical image: extracting N basic feature maps of the slice by a deep neural network, N being an integer greater than 1, merging features of the N basic feature maps by the deep neural network, to obtain M enhanced feature maps, M being an integer greater than 1, and respectively performing a hierarchically dilated convolutions operation on the M enhanced feature maps by the deep neural network, to generate a superposed feature map of each enhanced feature map; and predicting position information of a region of interest and a confidence score thereof in the to-be-detected medical image by the deep neural network based on the superposed feature map.
    Type: Grant
    Filed: May 12, 2021
    Date of Patent: January 2, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Lijun Gong
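    Illustrative sketch (Python/numpy, not from the patent): "hierarchically dilated convolutions" aggregate context at several dilation rates and superpose the responses. The single-channel version below, with an assumed rate set of (1, 2, 4) and one shared 3x3 kernel, is a simplification of the multi-channel operation the abstract describes.
      import numpy as np

      def dilated_conv2d(x, kernel, dilation):
          """Single-channel, 'same'-size 2D convolution with the given dilation rate
          (assumes a square, odd-sized kernel)."""
          kh, kw = kernel.shape
          pad = dilation * (kh // 2)
          xp = np.pad(x, pad)
          out = np.zeros_like(x, dtype=float)
          for i in range(kh):
              for j in range(kw):
                  di, dj = i * dilation, j * dilation
                  out += kernel[i, j] * xp[di:di + x.shape[0], dj:dj + x.shape[1]]
          return out

      def hierarchically_dilated(feature_map, kernel, rates=(1, 2, 4)):
          """Superpose responses computed at several dilation rates."""
          return sum(dilated_conv2d(feature_map, kernel, d) for d in rates)

      fmap = np.random.rand(64, 64)
      k = np.full((3, 3), 1.0 / 9.0)
      print(hierarchically_dilated(fmap, k).shape)   # (64, 64)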
  • Patent number: 11861859
    Abstract: Method and systems are provided for robust disparity estimation based on cost-volume attention. A method includes extracting first feature maps from left images captured by a first camera; extracting second feature maps from right images captured by a second camera; calculating a matching cost based on a comparison of the first and second feature maps to generate a cost volume; generating an attention-aware cost volume from the generated cost volume; and aggregating the attention-aware cost volume to generate an output disparity.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: January 2, 2024
    Assignee: Samsung Electronics Co., Ltd
    Inventors: Mostafa El-Khamy, Jungwon Lee, Haoyu Ren
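    Illustrative sketch (Python/numpy, not from the patent): the cost-volume construction compares the left feature map with the right feature map shifted across a disparity range. The mean-L1 matching cost below is an assumed choice, and the attention re-weighting and aggregation steps claimed in the patent are not reproduced.
      import numpy as np

      def build_cost_volume(left_feat, right_feat, max_disp=32):
          """left_feat, right_feat: (C, H, W) feature maps from the two cameras.
          Returns a (max_disp, H, W) cost volume; lower cost = better match."""
          C, H, W = left_feat.shape
          volume = np.full((max_disp, H, W), np.inf, dtype=np.float32)
          for d in range(max_disp):
              # Right-image content at column x-d should match left-image content at column x.
              diff = np.abs(left_feat[:, :, d:] - right_feat[:, :, :W - d])
              volume[d, :, d:] = diff.mean(axis=0)          # mean L1 over channels
          return volume

      left = np.random.rand(8, 48, 96).astype(np.float32)
      right = np.roll(left, shift=-5, axis=2)               # synthetic 5-pixel disparity
      disp = build_cost_volume(left, right).argmin(axis=0)  # winner-take-all disparity
      print(np.bincount(disp[:, 10:].ravel()).argmax())     # prints 5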
  • Patent number: 11853117
    Abstract: In one implementation, an apparatus includes a display having a front surface and a back surface. The display includes a plurality of pixel regions that emit light from the front surface to display a displayed image and a plurality of apertures that transmit light from the front surface to the back surface. The apparatus includes a camera disposed on a side of the back surface of the display. The camera is configured to capture a captured image. The apparatus includes a processor coupled to the display and the camera. The processor is configured to receive the captured image and apply a first digital filter to a first portion of the captured image and a second digital filter, different than the first digital filter, to a second portion of the captured image to reduce image distortion caused by the display.
    Type: Grant
    Filed: January 26, 2023
    Date of Patent: December 26, 2023
    Assignee: APPLE INC.
    Inventors: Manohar B. Srikanth, Marduke Yousefpor, Ting Sun, Kathrin Berkner-Cieslicki, Mohammad Yeke Yazdandoost, Ricardo J. Motta
  • Patent number: 11854279
    Abstract: A vehicle exterior environment recognition apparatus includes a monocular distance calculator, a relaxation distance calculator, and an updated distance calculator. The monocular distance calculator calculates a monocular distance of a three-dimensional object from a luminance image generated by an imaging unit. The relaxation distance calculator calculates a relaxation distance of the three-dimensional object from two luminance images generated by two imaging units based on a degree of image matching between the two luminance images determined using a threshold more lenient than another threshold used to determine the degree of image matching to generate a stereo distance of the three-dimensional object. The updated distance calculator calculates an updated distance of the three-dimensional object by mixing the monocular distance and the relaxation distance at a predetermined ratio.
    Type: Grant
    Filed: April 14, 2021
    Date of Patent: December 26, 2023
    Assignee: SUBARU CORPORATION
    Inventor: Naoki Takahashi
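    Illustrative sketch (Python, not from the patent): the updated distance is a fixed-ratio blend of the monocular estimate and the leniently matched ("relaxation") stereo estimate; the 0.3/0.7 split below is an assumed ratio, not a value from the patent.
      def updated_distance(monocular_m, relaxation_m, monocular_weight=0.3):
          """Mix the two distance estimates (in meters) at a predetermined ratio."""
          return monocular_weight * monocular_m + (1.0 - monocular_weight) * relaxation_m

      print(updated_distance(monocular_m=42.0, relaxation_m=38.5))   # 39.55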
  • Patent number: 11854224
    Abstract: A system includes processing hardware and a memory storing software code. When executed, the software code receives first skeleton data including a first location of each of multiple skeletal key-points from the perspective of a first camera, receives second skeleton data including a second location of each of the skeletal key-points from the perspective of a second camera, correlates first and second locations of some or all of the multiple skeletal key-points to produce correlated skeletal key-point location data for each of at least some skeletal key-points. The software code further merges the correlated skeletal key-point location data for each of those at least some skeletal key-points to provide merged location data, and generates, using the merged location data and the locations of the first, second, and third cameras, a mapping of the 3D pose of a skeleton.
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: December 26, 2023
    Assignee: Disney Enterprises, Inc.
    Inventors: Jeremie A. Papon, Andrew James Kilkenny, David R. Rose
  • Patent number: 11853070
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for updating data of a map. One of the methods includes storing a map of an environment, the map comprising surfels and a road graph; receiving new surfel data for the surfels; adjusting the surfels based on the new surfel data; determining a vector field difference between the surfels of the stored map and the adjusted surfels; adjusting a portion of the road graph based on the vector field difference; generating an updated map comprising the adjusted surfels and the adjusted portion of the road graph; and storing the updated map.
    Type: Grant
    Filed: September 4, 2020
    Date of Patent: December 26, 2023
    Assignee: Waymo LLC
    Inventors: Michael Montemerlo, Peter Michal Pawlowski, Joy Weng Zhang
  • Patent number: 11847730
    Abstract: Systems and methods of automatic orientation detection in fluoroscopic images using deep learning enable local registration for correction of initial CT-to-body registration in Electromagnetic Navigation Bronchoscopy (ENB) systems.
    Type: Grant
    Filed: November 12, 2020
    Date of Patent: December 19, 2023
    Assignee: COVIDIEN LP
    Inventors: Daniel Ovadia, Guy Alexandroni, Ariel Birenbaum
  • Patent number: 11847736
    Abstract: The consistent use of lighting in different instances of digital media may help ensure that objects are depicted in a similar manner in the digital media. However, in some cases, a three-dimensional (3D) model may depict an object under lighting conditions that differ from the lighting conditions depicted in other digital media. The present disclosure provides systems and methods for generating 3D models to include lighting that is consistent with the lighting used in other digital media. According to an embodiment, a lighting template is determined based on digital media depicting a first physical object. A modified 3D model of a second physical object is then generated based on the lighting template to light the second physical object according to the lighting template.
    Type: Grant
    Filed: April 22, 2021
    Date of Patent: December 19, 2023
    Assignee: SHOPIFY INC.
    Inventors: Byron Leonel Delgado, Stephan Leroux, Daniel Beauchamp
  • Patent number: 11847756
    Abstract: A messaging system processes three-dimensional (3D) models to generate ground truths for training machine learning models for applications of the messaging system. A method of generating ground truths for machine learning includes generating a plurality of first rendered images from a first 3D base model where each first rendered image includes the 3D base model modified by first augmentations of a plurality of augmentations.
    Type: Grant
    Filed: October 20, 2021
    Date of Patent: December 19, 2023
    Assignee: SNAP INC.
    Inventors: Gleb Dmukhin, Egor Nemchinov, Yurii Volkov
  • Patent number: 11841924
    Abstract: The present disclosure is directed to a software tool that engages in an image matching technique. In one implementation, the software tool (i) accesses a set of two-dimensional drawings representative of a construction project, (ii) determines multiple candidate pixel regions for each of the two-dimensional drawings, (iii) compares the candidate pixel regions to identify a set of landmark pixel regions that appear in the set of two-dimensional drawings at a threshold rate, (iv) compares a first subset of landmark pixel regions from a first two-dimensional drawing with a second subset of landmark pixel regions from a second two-dimensional drawing to identify matching landmark pixel regions, (v) projects the first and second two-dimensional drawings onto a projection space such that a maximum number of the matching landmark pixel regions align, and (vi) determines an extent of similarity between the projected first and second two-dimensional drawings in the projection space.
    Type: Grant
    Filed: April 22, 2021
    Date of Patent: December 12, 2023
    Assignee: Procore Technologies, Inc.
    Inventor: Lei Wu
  • Patent number: 11842464
    Abstract: Techniques are described for using computing devices to perform automated operations to generate mapping information of a defined area via analysis of visual data of images, including by using attribute information exchanged between paired or otherwise grouped images of multiple types to generate enhanced images, and for using the generated mapping information in further automated manners, including to use the generated mapping information for automated navigation and/or to display or otherwise present the generated mapping information. In some situations, the defined area includes an interior of a multi-room building, and the generated information includes at least one or more enhanced images and/or a partial floor plan and/or other modeled representation of the building, with the generating performed in some cases without having measured depth information about distances from the images' acquisition locations to walls or other objects in the surrounding building.
    Type: Grant
    Filed: September 22, 2021
    Date of Patent: December 12, 2023
    Assignee: MFTB Holdco, Inc.
    Inventors: Naji Khosravan, Sing Bing Kang, Ivaylo Boyadzhiev
  • Patent number: 11836883
    Abstract: There is provided a display control device including a display controller configured to place a virtual object within an augmented reality space corresponding to a real space in accordance with a recognition result of a real object shown in an image captured by an imaging part, and an operation acquisition part configured to acquire a user operation. When the user operation is a first operation, the display controller causes the virtual object to move within the augmented reality space.
    Type: Grant
    Filed: May 16, 2022
    Date of Patent: December 5, 2023
    Assignee: SONY CORPORATION
    Inventor: Shingo Tsurumi
  • Patent number: 11833759
    Abstract: A method and a system for making an orthodontic appliance are provided, the method comprising: obtaining a 3D digital model comprising a plurality of vertices representative of surfaces of upper and lower teeth of a subject in a current occlusion therebetween; obtaining an indication of a desired occlusion between the upper and lower arch forms; determining, based on the desired occlusion, a shift value between the current occlusion and the desired occlusion; generating, based on the desired occlusion, an outer surface of the appliance 3D digital model that corresponds to the respective occlusal surface portion of the given one of the upper and lower teeth having been repositioned towards the desired occlusion by the shift value.
    Type: Grant
    Filed: September 13, 2022
    Date of Patent: December 5, 2023
    Assignee: Oxilio Ltd
    Inventor: Islam Khasanovich Raslambekov
  • Patent number: 11830136
    Abstract: A method includes creating a point cloud model of an environment, applying at least one filter to the point cloud model to produce a filtered model of the environment and defining a plane in the filtered model corresponding to a horizontal expanse associated with a floor of the environment.
    Type: Grant
    Filed: October 14, 2020
    Date of Patent: November 28, 2023
    Assignee: CARNEGIE MELLON UNIVERSITY
    Inventor: Steven Huber
  • Patent number: 11830211
    Abstract: Embodiments of the disclosure provide a disparity map acquisition method and apparatus, a device, a control system and a storage medium. The method includes: respectively performing feature extraction on left-view images and right-view images of a captured object layer by layer through M cascaded feature extraction layers, to obtain a left-view feature map set and a right-view feature map set of each layer, M being a positive integer greater than or equal to 2; constructing an initial disparity map based on the left-view feature map set and the right-view feature map set extracted by an Mth feature extraction layer; and iteratively refining, starting from an (M−1)th layer, the disparity map through the left-view feature map set and the right-view feature map set extracted by each feature extraction layer in sequence until a final disparity map is obtained based on an iteratively refined disparity map of a first layer.
    Type: Grant
    Filed: April 22, 2021
    Date of Patent: November 28, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Ze Qun Jie
  • Patent number: 11830151
    Abstract: Disclosed is an approach for managing and displaying virtual content in a mixed reality environment on a one-on-one basis independently by each application, where each virtual content is rendered by its respective application into a bounded volume referred to herein as a "Prism." Each Prism may have characteristics and properties that allow a universe application to manage and display the Prism in the mixed reality environment such that the universe application may manage the placement and display of the virtual content in the mixed reality environment by managing the Prism itself.
    Type: Grant
    Filed: April 27, 2021
    Date of Patent: November 28, 2023
    Assignee: Magic Leap, Inc.
    Inventors: June Tate-Gans, Eric Norman Yiskis, Mark Ashley Rushton, David William Hover, Praveen Babu J D
  • Patent number: 11826111
    Abstract: A method of tracking motion of a body part, the method comprising: (a) gathering motion data from a body part repositioned within a range of motion, the body part having mounted thereto a motion sensor; (b) gathering a plurality of radiographic images taken of the body part while the body part is in different positions within the range of motion, the plurality of radiographic images having the body part and the motion sensor within a field of view; and, (c) constructing a virtual three dimensional model of the body part from the plurality of radiographic images using a structure of the motion sensor identifiable within at least two of the plurality of radiographic images to calibrate the radiographic images.
    Type: Grant
    Filed: April 12, 2022
    Date of Patent: November 28, 2023
    Assignee: TECHMAH MEDICAL LLC
    Inventor: Mohamed R. Mahfouz
  • Patent number: 11830141
    Abstract: In an embodiment, a 3D facial modeling system includes a plurality of cameras configured to capture images from different viewpoints, a processor, and a memory containing a 3D facial modeling application and parameters defining a face detector, wherein the 3D facial modeling application directs the processor to obtain a plurality of images of a face captured from different viewpoints using the plurality of cameras, locate a face within each of the plurality of images using the face detector, wherein the face detector labels key feature points on the located face within each of the plurality of images, determine disparity between corresponding key feature points of located faces within the plurality of images, and generate a 3D model of the face using the depth of the key feature points.
    Type: Grant
    Filed: February 22, 2022
    Date of Patent: November 28, 2023
    Assignee: Adela Imaging LLC
    Inventor: Kartik Venkataraman
  • Patent number: 11823402
    Abstract: A method and apparatus for correcting an error in depth information estimated from a two-dimensional (2D) image are disclosed. The method includes diagnosing an error in depth information by inputting a color image and depth information estimated using the color image to a depth error detection network, and determining enhanced depth information by maintaining or correcting the depth information based on the diagnosed error.
    Type: Grant
    Filed: May 3, 2021
    Date of Patent: November 21, 2023
    Assignees: Electronics and Telecommunications Research Institute, The Trustees of Indiana University
    Inventors: Soon Heung Jung, Jeongil Seo, Jagpreet Singh Chawla, Nikhil Thakurdesai, David Crandall, Md Reza, Anuj Godase
  • Patent number: 11823415
    Abstract: An autoencoder may be trained to predict 3D pose labels using simulation data extracted from a simulated environment, which may be configured to represent an environment in which the 3D pose estimator is to be deployed. Assets may be used to mimic the deployment environment such as 3D models or textures and parameters used to define deployment scenarios and/or conditions that the 3D pose estimator will operate under in the environment. The autoencoder may be trained to predict a segmentation image from an input image that is invariant to occlusions. Further, the autoencoder may be trained to exclude areas of the input image from the object that correspond to one or more appendages of the object. The 3D pose may be adapted to unlabeled real-world data using a GAN, which predicts whether output of the 3D pose estimator was generated from real-world data or simulated data.
    Type: Grant
    Filed: March 3, 2021
    Date of Patent: November 21, 2023
    Assignee: NVIDIA Corporation
    Inventors: Sravya Nimmagadda, David Weikersdorfer
  • Patent number: 11823365
    Abstract: The present invention provides a computer-based method for automatically evaluating validity and extent of at least one damaged object from image data, comprising the steps of: (a) receive image data comprising one or more images of at least one damaged object; (b) inspect any one of said one or more images for existing image alteration utilising an image alteration detection algorithm, and remove any image comprising image alterations from said one or more images; (c) identify and classify said at least one damaged object in any one of said one or more images, utilising at least one first machine learning algorithm; (d) detect at least one damaged area of said classified damaged object, utilising at least one second machine learning algorithm; (e) classify, quantitatively and/or qualitatively, an extent of damage of said at least one damaged area, utilising at least one third machine learning algorithm, and characteristic information of said damaged object and/or an undamaged object that is at least equivalent
    Type: Grant
    Filed: September 15, 2017
    Date of Patent: November 21, 2023
    Assignee: Emergent Network Intelligence Ltd.
    Inventors: Christopher Campbell, Karl Hewitson, Karl Brown, Jon Wilson, Sam Warren
  • Patent number: 11818303
    Abstract: A computer-implemented method of detecting an object depicted in a digital image includes: detecting a plurality of identifying features of the object, wherein the plurality of identifying features are located internally with respect to the object; projecting a location of region(s) of interest of the object based on the plurality of identifying features, where each region of interest depicts content; building and/or selecting an extraction model configured to extract the content based at least in part on: the location of the region(s) of interest, the identifying feature(s), or both; and extracting some or all of the content from the digital image using the extraction model. Corresponding system and computer program product embodiments are disclosed. The inventive concepts enable reliable extraction of data from digital images where portions of an object are obscured/missing, and/or depicted on a complex background.
    Type: Grant
    Filed: August 27, 2020
    Date of Patent: November 14, 2023
    Assignee: KOFAX, INC.
    Inventors: Jiyong Ma, Stephen M. Thompson, Jan W. Amtrup
  • Patent number: 11816795
    Abstract: The photo-video based spatial-temporal volumetric capture system more efficiently produces high frame rate and high resolution 4D dynamic human videos, without a need for 2 separate 3D and 4D scanner systems, by combining a set of high frame rate machine vision video cameras with a set of high resolution photography cameras. It reduces a need for manual CG work by temporally up-sampling shape and texture resolution of 4D scanned video data from a temporally sparse set of higher resolution 3D scanned keyframes that are reconstructed both by using machine vision cameras and photography cameras. Unlike a typical performance capture system that uses a single static template model at initialization (e.g. A or T pose), the photo-video based spatial-temporal volumetric capture system stores multiple keyframes of high resolution 3D template models for robust and dynamic shape and texture refinement of the 4D scanned video sequence.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: November 14, 2023
    Assignee: SONY GROUP CORPORATION
    Inventors: Kenji Tashiro, Chuen-Chien Lee, Qing Zhang
  • Patent number: 11816854
    Abstract: A three-dimensional shape of a subject is analyzed by inputting captured images of a depth camera and a visible light camera. There is provided an image processing unit configured to input captured images of the depth camera and the visible light camera, to analyze a three-dimensional shape of the subject. The image processing unit generates a depth map based TSDF space (TSDF Volume) by using a depth map acquired from a captured image of the depth camera, and generates a visible light image based TSDF space by using a captured image of the visible light camera. Moreover, an integrated TSDF space is generated by integration processing on the depth map based TSDF space and the visible light image based TSDF space, and three-dimensional shape analysis processing on the subject is executed using the integrated TSDF space.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: November 14, 2023
    Assignee: SONY GROUP CORPORATION
    Inventor: Hiroki Mizuno
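    Illustrative sketch (Python/numpy, not from the patent): integrating a depth-map-based TSDF space with a visible-light-image-based TSDF space is typically done as a per-voxel weighted average over a shared voxel grid; the formulation below assumes that, and assumes equal grid sizes, which the abstract does not spell out.
      import numpy as np

      def integrate_tsdf(tsdf_depth, w_depth, tsdf_rgb, w_rgb):
          """Fuse two truncated signed distance fields defined over the same voxel grid.
          Each voxel of the result is the weight-averaged signed distance; weights add up."""
          w_sum = w_depth + w_rgb
          fused = np.where(w_sum > 0,
                           (tsdf_depth * w_depth + tsdf_rgb * w_rgb) / np.maximum(w_sum, 1e-9),
                           1.0)                     # unobserved voxels stay at +truncation
          return fused, w_sum

      # Two 64^3 volumes (values in [-1, 1]); weights count observations per voxel.
      grid = (64, 64, 64)
      d_tsdf, d_w = np.random.uniform(-1, 1, grid), np.random.randint(0, 5, grid).astype(float)
      c_tsdf, c_w = np.random.uniform(-1, 1, grid), np.random.randint(0, 5, grid).astype(float)
      fused, weights = integrate_tsdf(d_tsdf, d_w, c_tsdf, c_w)
      print(fused.shape, float(fused.min()), float(fused.max()))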
  • Patent number: 11816829
    Abstract: A novel disparity computation technique is presented which comprises multiple orthogonal disparity maps, generated from approximately orthogonal decomposition feature spaces, collaboratively generating a composite disparity map. Using an approximately orthogonal feature set extracted from such feature spaces produces an approximately orthogonal set of disparity maps that can be composited together to produce a final disparity map. Various methods for dimensioning scenes and objects are presented. One approach extracts the top and bottom vertices of a cuboid, along with the set of lines, whose intersections define such points. It then defines a unique box from these two intersections as well as the associated lines. Orthographic projection is then attempted, to recenter the box perspective. This is followed by the extraction of the three-dimensional information that is associated with the box, and finally, the dimensions of the box are computed. The same concepts can apply to hallways, rooms, and any other object.
    Type: Grant
    Filed: December 4, 2022
    Date of Patent: November 14, 2023
    Assignee: Golden Edge Holding Corporation
    Inventors: Tarek El Dokor, Jordan Cluster
  • Patent number: 11809526
    Abstract: The disclosure relates to an object identification method and device as well as a non-transitory computer-readable storage medium. The object identification method includes: acquiring a first object image, and generating a first identification result group according to the first object image, wherein the first identification result group includes one or more first identification results arranged in order of confidence from high to low; acquiring a second object image, and generating a second identification result group based on the second object image, wherein the second identification result group includes one or more second identification results arranged in order of confidence from high to low; and determining whether the first object image and the second object image correspond to the same object to be identified according to the first identification result group and the second identification result group.
    Type: Grant
    Filed: February 1, 2021
    Date of Patent: November 7, 2023
    Assignee: Hangzhou Glority Software Limited
    Inventors: Qingsong Xu, Qing Li
  • Patent number: 11809524
    Abstract: Systems and methods for training an adapter network that adapts a model pre-trained on synthetic images to real-world data are disclosed herein. A system may include a processor and a memory in communication with the processor and having machine-readable instructions that cause the processor to output, using a neural network, a predicted scene that includes a three-dimensional bounding box having pose information of an object, generate a rendered map of the object that includes a rendered shape of the object and a rendered surface normal of the object, and train the adapter network, which adapts the predicted scene to adjust for a deformation of the input image by comparing the rendered map to the output map acting as a ground truth.
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: November 7, 2023
    Assignees: Woven Planet North America, Inc., Toyota Research Institute, Inc.
    Inventors: Sergey Zakharov, Wadim Kehl, Vitor Guizilini, Adrien David Gaidon
  • Patent number: 11812007
    Abstract: An apparatus including an interface and a processor. The interface may be configured to receive pixel data. The processor may be configured to generate a reference image and a target image from the pixel data, perform disparity operations on the reference image and the target image and build a disparity map in response to the disparity operations. The disparity operations may comprise selecting a guide node from the pixel data comprising a pixel and a plurality of surrounding pixels, determining a peak location for the pixel by performing a full range search, calculating a shift offset peak location for each of the surrounding pixels by performing block matching operations in a local range near the peak location and generating values in a disparity map for the pixel data in response to the peak location for the pixel and the shift offset peak location for each of the surrounding pixels.
    Type: Grant
    Filed: January 12, 2021
    Date of Patent: November 7, 2023
    Assignee: Ambarella International LP
    Inventors: Ke-ke Ren, Zhi He
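    Illustrative sketch (Python/numpy, not from the patent): the two-stage search, a full-range search at the guide pixel followed by a short local search around that peak for the surrounding pixels, can be shown with plain SAD block matching. The window size, search range and +/-2 local offset below are assumed values.
      import numpy as np

      def sad(ref, tgt, r, c, d, win=3):
          """Sum of absolute differences between the block around (r, c) in the reference
          image and the block around (r, c - d) in the target image."""
          a = ref[r - win:r + win + 1, c - win:c + win + 1]
          b = tgt[r - win:r + win + 1, c - d - win:c - d + win + 1]
          return np.abs(a.astype(float) - b.astype(float)).sum()

      def guide_node_disparity(ref, tgt, r, c, neighbors, full_range=64, local=2):
          """Full-range search at the guide pixel, then only a +/-local refinement around
          that peak for each surrounding (row, col) pixel in `neighbors`."""
          costs = [sad(ref, tgt, r, c, d) for d in range(full_range)]
          peak = int(np.argmin(costs))
          out = {(r, c): peak}
          for (nr, nc) in neighbors:
              candidates = range(max(0, peak - local), peak + local + 1)
              out[(nr, nc)] = min(candidates, key=lambda d: sad(ref, tgt, nr, nc, d))
          return out

      ref = np.random.randint(0, 255, (120, 200)).astype(np.uint8)
      tgt = np.roll(ref, shift=-7, axis=1)           # synthetic 7-pixel disparity
      print(guide_node_disparity(ref, tgt, r=60, c=100, neighbors=[(59, 100), (61, 101)]))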