Mapping 2-d Image Onto A 3-d Surface Patents (Class 382/285)
  • Patent number: 11961298
Abstract: Systems and methods for detecting objects in a video are provided. A method can include inputting a video comprising a plurality of frames into an interleaved object detection model comprising a plurality of feature extractor networks and a shared memory layer. For each of one or more frames, the method can include selecting one of the plurality of feature extractor networks to analyze the one or more frames, analyzing the one or more frames by the selected feature extractor network to determine one or more features of the one or more frames, determining an updated set of features based at least in part on the one or more features and one or more previously extracted features extracted from a previous frame stored in the shared memory layer, and detecting an object in the one or more frames based at least in part on the updated set of features.
    Type: Grant
    Filed: February 22, 2019
    Date of Patent: April 16, 2024
    Assignee: GOOGLE LLC
    Inventors: Menglong Zhu, Mason Liu, Marie Charisse White, Dmitry Kalenichenko, Yinxiao Li
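The interleaving-with-shared-memory idea in the abstract above can be illustrated with a minimal sketch (not the patented model): run an expensive extractor periodically, a cheap one otherwise, and blend each result into a shared feature memory. The function names, the fixed period, and the exponential blend are illustrative assumptions.

```python
def run_interleaved(frames, heavy_fn, light_fn, period=3, blend=0.5):
    """Interleaved extraction: run the expensive extractor every `period`
    frames, the cheap one otherwise, and fuse each result with a shared
    feature memory by exponential blending (illustrative, not the patent's
    learned memory layer)."""
    memory = None
    fused = []
    for i, frame in enumerate(frames):
        # Select which feature extractor analyzes this frame.
        extractor = heavy_fn if i % period == 0 else light_fn
        feats = extractor(frame)
        # Update the shared memory with the newly extracted features.
        memory = feats if memory is None else [
            blend * f + (1 - blend) * m for f, m in zip(feats, memory)]
        fused.append(list(memory))
    return fused
```

Detection would then consume the fused features rather than the raw per-frame ones, which is what lets the cheap extractor's frames still benefit from the heavy extractor's context.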
  • Patent number: 11941827
Abstract: A computer-implemented method of performing a three-dimensional (3D) point cloud registration with multiple two-dimensional (2D) images may include estimating a mathematical relationship between 3D roto-translations of dominant planes of objects in a 3D point cloud and bi-dimensional homographies in a 2D image plane, thereby resulting in a 3D point cloud registration using multiple 2D images. A trained classifier may be used to determine correspondence between homography matrices and inferred motion of the dominant plane(s) on a 3D point cloud for paired image frames. A homography matrix between the paired images of the dominant plane(s) on the 2D image plane may be selected based on the correspondence between the inferred motions and measured motion of the dominant plane(s) on the 3D point cloud for the paired image frames. The process may be less computationally intensive than conventional 2D-3D registration approaches.
    Type: Grant
    Filed: October 19, 2021
    Date of Patent: March 26, 2024
    Assignee: Datalogic IP Tech S.R.L.
    Inventors: Francesco D'Ercoli, Marco Cumoli
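The homography matrices the abstract above relies on are conventionally estimated with the direct linear transform (DLT). A minimal sketch of that standard technique (not the patented registration pipeline; the function name is an assumption) from four or more point correspondences:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst by the direct
    linear transform (DLT); needs at least 4 point pairs."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on the
        # 9 entries of H (the cross-product form of dst ~ H @ src).
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.array(rows, dtype=float)
    # H is the null vector of A: the right singular vector with the
    # smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

In practice the correspondences come from feature matching and the estimate is wrapped in RANSAC; the sketch assumes exact, outlier-free pairs.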
  • Patent number: 11922659
Abstract: A coordinate calculation apparatus 10 includes: an image selection unit 11 configured to select, when a specific portion is designated in an object, two or more images including the specific portion from the images of the object; a three-dimensional coordinate calculation unit 12 configured to specify, for each of the selected images, the location of points corresponding to each other at the specific portion, and to calculate a three-dimensional coordinate of the specific portion by using the location of the point specified for each of the images and the camera matrix calculated in advance for each of the images; and a three-dimensional model display unit 13 configured to display, using the point cloud data of the object, a three-dimensional model of the object on a screen, and to display the designated specific portion on the three-dimensional model based on the calculated three-dimensional coordinates.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: March 5, 2024
    Assignee: NEC Solution Innovators, Ltd.
    Inventor: Yoshihiro Yamashita
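Recovering a 3D coordinate from corresponding image points and precomputed camera matrices, as the abstract above describes, is classically done by linear (DLT) triangulation. A minimal sketch of that standard method, not the patented apparatus (the function name is an assumption):

```python
import numpy as np

def triangulate_point(projections, image_points):
    """Linear (DLT) triangulation: recover a 3D point from its 2D
    locations in two or more images with known 3x4 camera matrices."""
    rows = []
    for P, (u, v) in zip(projections, image_points):
        # Each view gives two linear constraints on the homogeneous point X:
        # u * (P[2] @ X) = P[0] @ X  and  v * (P[2] @ X) = P[1] @ X
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # Homogeneous least-squares solution: right singular vector with the
    # smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```

With more than two selected images the same linear system simply gains rows, which is why the apparatus benefits from selecting "two or more" images containing the designated portion.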
  • Patent number: 11922580
    Abstract: Various implementations disclosed herein include devices, systems, and methods that generate floorplans and measurements using a three-dimensional (3D) representation of a physical environment generated based on sensor data.
    Type: Grant
    Filed: January 12, 2021
    Date of Patent: March 5, 2024
    Assignee: Apple Inc.
    Inventors: Feng Tang, Afshin Dehghan, Kai Kang, Yang Yang, Yikang Liao, Guangyu Zhao
  • Patent number: 11908162
    Abstract: A handheld three-dimensional (3D) measuring system operates in a target mode and a geometry mode. In the target mode, a target-mode projector projects a first line of light onto an object, and a first illuminator sends light to markers on or near the object. A first camera captures an image of the first line of light and the illuminated markers. In the geometry mode, a geometry-mode projector projects onto the object a first multiplicity of lines, which are captured by the first camera and a second camera. One or more processors determines 3D coordinates in the target mode and the geometry mode.
    Type: Grant
    Filed: December 20, 2021
    Date of Patent: February 20, 2024
    Assignee: FARO Technologies, Inc.
    Inventors: Francesco Bonarrigo, Paul C. Atwell, John Lucas Creachbaum, Nitesh Dhasmana, Fabiano Kovalski, Andrea Riccardi, William E. Schoenfeldt, Marco Torsello, Christopher Michael Wilson
  • Patent number: 11893675
    Abstract: Various implementations set forth a computer-implemented method for scanning a three-dimensional (3D) environment. The method includes generating, in a first time interval, a first extended reality (XR) stream based on a first set of meshes representing a 3D environment, transmitting, to a remote device, the first XR stream for rendering a 3D representation of a first portion of the 3D environment in a remote XR environment, determining that the 3D environment has changed based on a second set of meshes representing the 3D environment and generated subsequent to the first time interval, generating a second XR stream based on the second set of meshes, and transmitting, to the remote device, the second XR stream for rendering a 3D representation of at least a portion of the changed 3D environment in the remote XR environment.
    Type: Grant
    Filed: October 29, 2021
    Date of Patent: February 6, 2024
    Assignee: SPLUNK INC.
    Inventors: Devin Bhushan, Caelin Thomas Jackson-King, Stanislav Yazhenskikh, Jim Jiaming Zhu
  • Patent number: 11847732
    Abstract: Various implementations set forth a computer-implemented method for scanning a three-dimensional (3D) environment. The method includes generating, in a first time interval, a first extended reality (XR) stream based on a first set of meshes representing a 3D environment, transmitting, to a remote device, the first XR stream for rendering a 3D representation of a first portion of the 3D environment in a remote XR environment, determining that the 3D environment has changed based on a second set of meshes representing the 3D environment and generated subsequent to the first time interval, generating a second XR stream based on the second set of meshes, and transmitting, to the remote device, the second XR stream for rendering a 3D representation of at least a portion of the changed 3D environment in the remote XR environment.
    Type: Grant
    Filed: October 29, 2021
    Date of Patent: December 19, 2023
    Assignee: SPLUNK INC.
    Inventors: Devin Bhushan, Caelin Thomas Jackson-King, Stanislav Yazhenskikh, Jim Jiaming Zhu
  • Patent number: 11841434
Abstract: An annotation system uses annotations for a first set of sensor measurements from a first sensor to identify annotations for a second set of sensor measurements from a second sensor. The annotation system identifies reference annotations in the first set of sensor measurements that indicate a location of a characteristic object in the two-dimensional space. The annotation system determines a spatial region in the three-dimensional space of the second set of sensor measurements that corresponds to a portion of the scene represented in the annotation of the first set of sensor measurements. The annotation system determines annotations within the spatial region of the second set of sensor measurements that indicate a location of the characteristic object in the three-dimensional space.
    Type: Grant
    Filed: June 10, 2022
    Date of Patent: December 12, 2023
    Assignee: Tesla, Inc.
    Inventor: Anting Shen
  • Patent number: 11836879
Abstract: An information processing apparatus is provided for correcting a shift between a first three-dimensional position in an image capturing region that is identified based on a position of a first feature point on a first image and a second three-dimensional position in an image capturing region that is identified based on a position of a second feature point corresponding to the first feature point on a second image.
    Type: Grant
    Filed: February 3, 2021
    Date of Patent: December 5, 2023
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Nozomu Kasuya
  • Patent number: 11836960
    Abstract: An object detection device (1) includes an object detection unit (2) that detects an object from an image including the object by neural computation using a CNN. The object detection unit (2) includes: a feature amount extraction unit (2a) that extracts a feature amount of the object from the image; an information acquisition unit (2b) that obtains a plurality of object rectangles indicating candidates for the position of the object on the basis of the feature amount and obtains information and a certainty factor of a category of the object for each of the object rectangles; and an object tag calculation unit (2c) that calculates, for each of the object rectangles, an object tag indicating which object in the image the object rectangle is linked to, on the basis of the feature amount.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: December 5, 2023
    Assignee: Konica Minolta, Inc.
    Inventor: Fumiaki Sato
  • Patent number: 11783493
    Abstract: Some implementations of the disclosure are directed to techniques for facial reconstruction from a sparse set of facial markers. In one implementation, a method comprises: obtaining data comprising a captured facial performance of a subject with a plurality of facial markers; determining a three-dimensional (3D) bundle corresponding to each of the plurality of facial markers of the captured facial performance; using at least the determined 3D bundles to retrieve, from a facial dataset comprising a plurality of facial shapes of the subject, a local geometric shape corresponding to each of the plurality of the facial markers; and merging the retrieved local geometric shapes to create a facial reconstruction of the subject for the captured facial performance.
    Type: Grant
    Filed: November 5, 2021
    Date of Patent: October 10, 2023
    Assignee: Lucasfilm Entertainment Company Ltd. LLC
    Inventors: Matthew Cong, Ronald Fedkiw, Lana Lan
  • Patent number: 11776123
    Abstract: A method, user device, and system for displaying augmented anatomical features is disclosed. The method includes detecting a target individual, displaying a visual representation of the body, and determining an anatomical profile of the target individual based on a plurality of reference markers. The method further includes displaying, on the display, a graphical representation of the inner anatomical features onto the visual representation of the body so as to assist in the identification of the inner anatomical features. In another aspect, an initial three-dimensional representation of the body is mapped and a preferred anatomical profile is determined based upon the reference markers. The initial three-dimensional representation of the body is modified to be the shape of the preferred anatomical profile and displayed.
    Type: Grant
    Filed: February 16, 2022
    Date of Patent: October 3, 2023
    Inventor: Gustav Lo
  • Patent number: 11734790
    Abstract: Disclosed are a method and apparatus for recognizing a landmark in a panoramic image. The method includes steps of performing projection transformation on the panoramic image so as to generate a projection image; conducting semantic segmentation on the projection image so as to determine a landmark region and a road surface region; correcting distortion in the landmark region so as to produce a corrected landmark region; and recognizing the landmark in the corrected landmark region.
    Type: Grant
    Filed: March 9, 2021
    Date of Patent: August 22, 2023
    Assignee: Ricoh Company, Ltd.
    Inventors: Ke Liao, Weitao Gong, Hong Yi, Wei Wang
  • Patent number: 11727575
    Abstract: A system and method for recognizing objects in an image is described. The system can receive an image from a sensor and detect one or more objects in the image. The system can further detect one or more components of each detected object. Subsequently, the system can create a segmentation map based on the components detected for each detected object and determine whether the segmentation map matches a plurality of 3-D models (or projections thereof). Additionally, the system can display a notification through a user interface indicating whether the segmentation map matches at least one of the plurality of 3-D models.
    Type: Grant
    Filed: September 28, 2021
    Date of Patent: August 15, 2023
    Assignee: CoVar LLC
    Inventors: Peter A. Torrione, Mark Hibbard
  • Patent number: 11710282
Abstract: Methods for rendering augmented reality (AR) content are presented. An a priori defined 3D albedo model of an object is leveraged to adjust AR content so that it appears as a natural part of a scene. Disclosed devices recognize a known object having a corresponding albedo model. The devices compare the observed object to the known albedo model to determine a content transformation referred to as an estimated shading (environmental shading) model. The transformation is then applied to the AR content to generate adjusted content, which is then rendered and presented for consumption by a user.
    Type: Grant
    Filed: October 19, 2021
    Date of Patent: July 25, 2023
    Assignee: Nant Holdings IP, LLC
    Inventors: Matheen Siddiqui, Kamil Wnuk
  • Patent number: 11693390
    Abstract: There is provided a computer implemented method of manufacturing a truss of a three dimensional (3D) object representation, comprising: receiving a definition of the 3D object representation, arranging within an interior space of the 3D object representation, a plurality of instances of a sphere having a common radius to create a packed sphere arrangement, computing nodes of a truss for the 3D object representation, each respective node positioned at a center of each respective instance of each sphere of the packed sphere arrangement, computing beams of the truss by connecting adjacent nodes with respective beams, and providing code instructions for execution by a manufacturing device controller of a manufacturing device for manufacturing the truss.
    Type: Grant
    Filed: May 6, 2021
    Date of Patent: July 4, 2023
    Assignees: Technion Research & Development Foundation Limited, BCAM—Basque Center for Applied Mathematics
    Inventors: Gershon Elber, Boris Van Sosin, Daniil Rodin, Michael Barton, Hanna Sliusarenko
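The node-and-beam construction in the abstract above has a simple geometric core: nodes sit at the packed sphere centres, and two nodes are adjacent exactly when their equal-radius spheres touch, i.e. their centres are about 2r apart. A minimal sketch of that step (not the patented manufacturing method; the function name and tolerance are assumptions):

```python
import numpy as np

def truss_from_packed_spheres(centers, radius, tol=1e-6):
    """Build truss nodes and beams from a packed arrangement of spheres
    with a common radius: nodes are the sphere centres, and a beam joins
    two nodes whose spheres touch (centre distance ~= 2 * radius)."""
    centers = np.asarray(centers, dtype=float)
    beams = []
    n = len(centers)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(centers[i] - centers[j])
            if abs(d - 2.0 * radius) <= tol:
                beams.append((i, j))
    return centers, beams
```

The O(n²) pair scan is only for clarity; a spatial index (k-d tree or grid hashing) would be used for large packings.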
  • Patent number: 11694402
    Abstract: Systems and methods are provided for receiving a two-dimensional (2D) image comprising a 2D object; identifying a contour of the 2D object; generating a three-dimensional (3D) mesh based on the contour of the 2D object; and applying a texture of the 2D object to the 3D mesh to output a 3D object representing the 2D object.
    Type: Grant
    Filed: October 20, 2021
    Date of Patent: July 4, 2023
    Assignee: Snap Inc.
    Inventors: Samuel Edward Hare, Andrew James McPhee, Daniel Moreno, Kyle Goodrich
  • Patent number: 11683458
    Abstract: Systems and methods for projecting a multi-faceted image onto a convex polyhedron based on an input image are described. A system can include a controller configured to determine a mapping between pixels within a wide-angle image and a multi-faceted image, and generate the multi-faceted image based on the mapping.
    Type: Grant
    Filed: July 29, 2021
    Date of Patent: June 20, 2023
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Michael Slutsky, Albert Shalumov
  • Patent number: 11682142
    Abstract: Subject matter regards colorizing a three-dimensional (3D) point set. A method of colorizing a 3D point can include voxelizing 3D points including the 3D point into voxels such that a voxel of the voxels including the 3D point includes a voxel subset of the 3D points, projecting the voxel subset to respective image spaces of first and second images used to generate the 3D points, and associating a color value, determined based on a respective number of pixels of the first and second images to which the voxel subset projects, with the 3D point.
    Type: Grant
    Filed: September 14, 2020
    Date of Patent: June 20, 2023
    Assignee: Raytheon Company
    Inventors: Stephen J. Raif, Allen Hainline
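The first step of the abstract above, voxelizing the point set so each voxel holds a subset of the 3D points, is standard. A minimal sketch of that grouping (not the patented colorization; the function name and dictionary layout are assumptions):

```python
import numpy as np
from collections import defaultdict

def voxelize(points, voxel_size):
    """Group 3D points into cubic voxels of edge `voxel_size`.
    Returns a dict mapping integer voxel index -> list of point indices,
    so each voxel carries its 'voxel subset' of the point cloud."""
    points = np.asarray(points, dtype=float)
    keys = np.floor(points / voxel_size).astype(int)
    voxels = defaultdict(list)
    for i, key in enumerate(map(tuple, keys)):
        voxels[key].append(i)
    return dict(voxels)
```

Each voxel subset would then be projected into the source images and a color chosen from the pixels it covers, per the abstract.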
  • Patent number: 11677920
    Abstract: This application generally relates to capturing and aligning panoramic image and depth data. In one embodiment, a device is provided that comprises a housing and a plurality of cameras configured to capture two-dimensional images, wherein the cameras are arranged at different positions on the housing and have different azimuth orientations relative to a center point such that the cameras have a collective field-of-view spanning up to 360° horizontally. The device further comprises a plurality of depth detection components configured to capture depth data, wherein the depth detection components are arranged at different positions on the housing and have different azimuth orientations relative to the center point such that the depth detection components have the collective field-of-view spanning up to 360° horizontally.
    Type: Grant
    Filed: September 3, 2019
    Date of Patent: June 13, 2023
    Assignee: Matterport, Inc.
    Inventors: Kyle Simek, David Gausebeck, Matthew Tschudy Bell
  • Patent number: 11666222
    Abstract: A monitoring method and system for providing visual enhancements during a medical procedure is described. The method includes capturing current visual information of a site during the medical procedure in real time, storing at least a portion of the captured visual information as stored visual information, identifying a feature of interest in at least one of the current visual information and the stored visual information; generating feedback data associated with the feature of interest, and displaying a virtual representation of the feedback data overlaid on the current visual information.
    Type: Grant
    Filed: June 2, 2021
    Date of Patent: June 6, 2023
    Assignee: SYNAPTIVE MEDICAL INC.
    Inventors: Bradley Allan Fernald, Gal Sela, Neil Jeffrey Witcomb
  • Patent number: 11663733
Abstract: A method includes obtaining a reference image and a target image each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
    Type: Grant
    Filed: March 23, 2022
    Date of Patent: May 30, 2023
    Assignee: Google LLC
    Inventors: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
  • Patent number: 11663380
    Abstract: The invention relates to a method for transferring a stress state of an FE simulation result to a new FE mesh geometry of a simulated construction system, such as a component for motor vehicles that has a 3-D shape, in a simulation chain of production operations, comprising: a) providing a first data set, which describes the FE simulation result with a stress state of the FE simulation of the construction system or component of a first production operation, b) creating the new FE mesh geometry of the simulated construction system or component, which new FE mesh geometry is associated with a second production operation, c) transferring the stress state of the provided first data set to the new FE mesh geometry of the construction system or component, d) performing an equilibrium calculation by using the stress tensor in the FE mesh geometry, wherein deformation of the construction system or component results, which deformation differs from the deformation in the FE mesh by a shape alteration u>tolerance val
    Type: Grant
    Filed: March 2, 2017
    Date of Patent: May 30, 2023
    Assignee: inpro Innovationsgesellschaft für fortgeschrittene Produktionssysteme in der Fahrzeugindustrie mbH
    Inventors: Martin Nitsche, Heribert Wessels
  • Patent number: 11651533
    Abstract: Aspects of the disclosure include methods, apparatuses, and non-transitory computer-readable storage mediums for generating a floor plan from a point cloud model. An apparatus includes processing circuitry that receives an input three-dimensional point cloud corresponding to a three-dimensional space. The processing circuitry determines a plurality of wall planes in the received input three-dimensional point cloud. The processing circuitry generates a plurality of line segments. Each line segment is generated by projecting a respective wall plane of the plurality of wall planes to a floor plane in the three-dimensional space. The processing circuitry represents the plurality of wall planes in the three-dimensional space using the plurality of line segments in a two-dimensional space corresponding to the floor plan. The processing circuitry adjusts the plurality of line segments in the two-dimensional space to improve the floor plan.
    Type: Grant
    Filed: October 15, 2021
    Date of Patent: May 16, 2023
    Assignee: TENCENT AMERICA LLC
    Inventors: Xiang Zhang, Bing Jian, Lu He, Haichao Zhu, Shan Liu, Kelin Liu, Weiwei Feng
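Projecting a wall plane onto the floor plane to obtain a 2D line segment, as in the abstract above, can be sketched simply: drop the height coordinate of the wall's inlier points and take the extent along their dominant direction. This is an illustrative reduction, not the patented pipeline; the function name and the PCA-based direction fit are assumptions.

```python
import numpy as np

def wall_plane_to_segment(wall_points):
    """Project a wall plane's 3D inlier points to the floor (drop z) and
    return the 2D segment endpoints along the dominant direction (PCA)."""
    pts2d = np.asarray(wall_points, dtype=float)[:, :2]
    centroid = pts2d.mean(axis=0)
    # Principal direction of the projected points via SVD.
    _, _, vt = np.linalg.svd(pts2d - centroid)
    direction = vt[0]
    # Extent of the points along that direction gives the segment ends.
    t = (pts2d - centroid) @ direction
    return centroid + t.min() * direction, centroid + t.max() * direction
```

The subsequent adjustment step in the abstract (snapping and joining segments into a clean floor plan) operates entirely on these 2D segments.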
  • Patent number: 11619724
    Abstract: According to an aspect of an embodiment, operations may comprise (a) accessing a portion of a high definition (HD) map comprising a point cloud of a region through which a vehicle is driving, (b) identifying a base LIDAR from a plurality of LIDARs mounted on the vehicle, (c) for each of the LIDARs: receiving a LIDAR scan comprising a point cloud of the region, and determining a pose for the LIDAR, (d) for each LIDAR other than the base LIDAR, determining a transform for the LIDAR with respect to the base LIDAR, (e) repeating (c) to generate a plurality of samples, (f) for each of the samples, repeating (d) to determine a plurality of transforms for each LIDAR with respect to the base LIDAR, and (g) calibrating each of the LIDARs other than the base LIDAR by determining an aggregate transform for the LIDAR.
    Type: Grant
    Filed: June 25, 2020
    Date of Patent: April 4, 2023
    Assignee: NVIDIA CORPORATION
    Inventors: Di Zeng, Mengxi Wu
  • Patent number: 11615594
    Abstract: A method by an extended reality (XR) display device includes accessing image data and sparse depth points corresponding to a plurality of image frames to be displayed on one or more displays of the XR display device. The method further includes determining a plurality of sets of feature points for a current image frame of the plurality of image frames, constructing a cost function configured to propagate the sparse depth points corresponding to the current image frame based on the plurality of sets of feature points, and generating a dense depth map corresponding to the current image frame based on an evaluation of the cost function. The method thus includes rendering the current image frame on the one or more displays of the XR display device based on the dense depth map.
    Type: Grant
    Filed: June 3, 2021
    Date of Patent: March 28, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yingen Xiong, Christopher A. Peri
  • Patent number: 11615582
    Abstract: Images may be captured at an image capture device mounted on an image capture device gimbal capable of rotating the image capture device around a nodal point in one or more dimensions. Each of the plurality of images may be captured from a respective rotational position. The images may be captured by a designated camera that is not located at the nodal point in one or more of the respective rotational positions. A designated three-dimensional point cloud may be determined based on the plurality of images. The designated three-dimensional point cloud may include a plurality of points each having a respective position in a virtual three-dimensional space.
    Type: Grant
    Filed: June 8, 2021
    Date of Patent: March 28, 2023
    Assignee: Fyusion, Inc.
    Inventors: Nico Gregor Sebastian Blodow, Martin Saelzle, Matteo Munaro, Krunal Ketan Chande, Rodrigo Ortiz Cayon, Stefan Johannes Josef Holzer
  • Patent number: 11577748
Abstract: A small-object perception system, for use in a vehicle, includes a stereo vision system that captures stereo images and outputs information identifying an object having a dimension in a range of ˜20 cm to ˜100 cm in a perception range of ˜3 meters to ˜150 meters from the vehicle, and a system controller configured to receive output signals from the stereo vision system and to provide control signals to control a path of movement of the vehicle. The stereo vision system includes cameras separated by a baseline of ˜1 meter to ˜4 meters. The stereo vision system includes a stereo matching module configured to perform stereo matching on left and right initial images and to output a final disparity map based on a plurality of preliminary disparity maps generated from the left and right initial images, with the preliminary disparity maps having different resolutions from each other.
    Type: Grant
    Filed: February 28, 2022
    Date of Patent: February 14, 2023
    Assignee: NODAR Inc.
    Inventors: Jing Wang, Leaf Alden Jiang
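The role of the disparity maps and the wide baseline in the abstract above follows from basic stereo geometry: depth Z = f·B/d, so at a given focal length a larger baseline B yields a larger disparity d for the same depth, improving range resolution at long range. A minimal sketch of the conversion (the function name is an assumption):

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Stereo geometry: depth Z = f * B / d, with focal length f in
    pixels, baseline B in metres, and disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, at f = 1000 px a 1 px disparity error corresponds to a much smaller depth error with a 4 m baseline than with the ~10 cm baselines of consumer stereo cameras.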
  • Patent number: 11562715
Abstract: When a graphics processor is processing data for an application on a host processor, the graphics processor generates in advance of their being required for display by the application a plurality of frame sequences corresponding to a plurality of different possible "future states" for the application. The graphics processing system, when producing a frame in a sequence of frames corresponding to a given future state for the application, determines one or more region(s) of the frame that are to be produced at a first, higher quality, and produces the determined region(s) of the frame at that first, higher quality, whereas other regions of the frame are produced at a second, lower quality.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: January 24, 2023
    Assignee: Arm Limited
    Inventors: Daren Croxford, Guy Larri
  • Patent number: 11551422
    Abstract: Various implementations disclosed herein include devices, systems, and methods that generate floorplans and measurements using a three-dimensional (3D) representation of a physical environment generated based on sensor data.
    Type: Grant
    Filed: January 12, 2021
    Date of Patent: January 10, 2023
    Assignee: Apple Inc.
    Inventors: Feng Tang, Afshin Dehghan, Kai Kang, Yang Yang, Yikang Liao, Guangyu Zhao
  • Patent number: 11548504
    Abstract: A driver assistance system according to an embodiment of the present disclosure includes: a radar provided in the vehicle to have an external sensing field for the vehicle and configured to acquire radar data; a memory configured to store a first graph stored in advance; and a processor configured to determine a static target based on the radar data and driving information comprising a driving velocity, generate a second graph based on the determined static target, and correct the driving velocity based on the first graph and the second graph.
    Type: Grant
    Filed: December 18, 2020
    Date of Patent: January 10, 2023
    Assignee: HL KLEMOVE CORP.
    Inventor: Joungchel Moon
  • Patent number: 11517099
    Abstract: A method for processing images includes: detecting a plurality of human face key points of a three-dimensional human face in a target image; acquiring a virtual makeup image, wherein the virtual makeup image includes a plurality of reference key points, the reference key points indicating human face key points of a two-dimensional human face; and acquiring a target image fused with the virtual makeup image by fusing the virtual makeup image and the target image with each of the reference key points in the virtual makeup image aligned with a corresponding human face key point.
    Type: Grant
    Filed: November 20, 2020
    Date of Patent: December 6, 2022
    Assignee: Beijing Dajia Internet Information Technology Co., Ltd.
    Inventors: Shanshan Wu, Paliwan Pahaerding, Bo Wang
  • Patent number: 11508077
    Abstract: A processor-implemented method of detecting a moving object includes: estimating a depth image of a current frame; determining an occlusion image of the current frame by calculating a depth difference value between the estimated depth image of the current frame and an estimated depth image of a previous frame; determining an occlusion accumulation image of the current frame by adding a depth difference value of the occlusion image of the current frame to a depth difference accumulation value of an occlusion accumulation image of the previous frame; and outputting an area of a moving object based on the occlusion accumulation image.
    Type: Grant
    Filed: February 4, 2021
    Date of Patent: November 22, 2022
    Assignees: Samsung Electronics Co., Ltd., SNU R&DB FOUNDATION
    Inventors: Hyoun Jin Kim, Haram Kim
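The per-frame update described in the abstract above (depth difference → occlusion image → occlusion accumulation image → moving-object area) can be illustrated with a minimal sketch. This is not the patented method; the function name, threshold, and decay factor are illustrative assumptions.

```python
import numpy as np

def update_occlusion_accumulation(depth_prev, depth_curr, accum,
                                  thresh=0.1, decay=1.0):
    """One frame of occlusion accumulation: keep per-pixel depth
    decreases (something moved closer, occluding the background), add
    them to the running accumulation image, and flag pixels whose
    accumulated change exceeds a threshold as belonging to a moving
    object."""
    diff = depth_prev - depth_curr          # positive where depth decreased
    occlusion = np.where(diff > thresh, diff, 0.0)
    accum = decay * accum + occlusion
    moving_mask = accum > thresh
    return accum, moving_mask
```

Accumulating over frames, rather than thresholding a single difference, is what suppresses per-frame depth-estimation noise.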
  • Patent number: 11495026
Abstract: A technique facilitates selecting and designating an arbitrary one of a plurality of aerial lines. The aerial line extraction system includes: an area-of-interest cropping unit that crops a region where an aerial line is assumed to exist as an area of interest by setting a support of the aerial line as a reference from three-dimensional point cloud data; an element segmenting unit that segments the area of interest into a plurality of subdivided areas, obtains a histogram by counting three-dimensional point clouds existing in each of the subdivided areas, and obtains a segmentation plane of the area of interest on the basis of the histogram; and an element display unit that segments the area of interest into a plurality of segmented areas by the segmentation plane and displays the three-dimensional point clouds included in each of the segmented areas in a distinguishable manner.
    Type: Grant
    Filed: January 16, 2019
    Date of Patent: November 8, 2022
    Assignee: HITACHI SOLUTIONS, LTD.
    Inventors: Sadaki Nakano, Nobutaka Kimura, Kishiko Maruyama, Nobuhiro Chihara
  • Patent number: 11483468
    Abstract: Methods and systems for capturing a three dimensional image are described. An image capture process is performed while moving a lens to capture image data across a range of focal depths, and a three dimensional image reconstruction process generates a three dimensional image based on the image data. A two-dimensional image is also rendered including focused image data from across the range of focal depths. The two dimensional image and the three dimensional image are fused to generate a focused three dimensional model.
    Type: Grant
    Filed: July 1, 2020
    Date of Patent: October 25, 2022
    Assignee: Taiwan Semiconductor Manufacturing Company, Ltd.
    Inventor: Chih-Min Liu
  • Patent number: 11481878
    Abstract: Systems, computer program products, and techniques for detecting and/or reconstructing objects depicted in digital image data within a three-dimensional space are disclosed. The concepts utilize internal features for detection and reconstruction, avoiding reliance on information derived from location of edges. The inventive concepts provide an improvement over conventional techniques since objects may be detected and/or reconstructed even when edges are obscured or not depicted in the digital image data. In one aspect, detecting a document depicted in a digital image includes: detecting a plurality of identifying features of the document, wherein the plurality of identifying features are located internally with respect to the object; projecting a location of one or more edges of the document based at least in part on the plurality of identifying features; and outputting the projected location of the one or more edges of the document to a display of a computer, and/or a memory.
    Type: Grant
    Filed: August 19, 2020
    Date of Patent: October 25, 2022
    Assignee: KOFAX, INC.
    Inventors: Jiyong Ma, Stephen Michael Thompson, Jan W. Amtrup
  • Patent number: 11465772
    Abstract: An exterior aircraft image projector includes at least one light source, providing a light output in operation; an optical system configured for transforming the light output of the at least one light source into a light beam and projecting said light beam onto the ground below the aircraft and the exterior of the aircraft; a photo detector arranged to detect a brightness level (Iambient) of the ground or the exterior and configured to provide a corresponding brightness signal; and a controller, coupled to the photo detector and to the at least one light source, configured to control an intensity of the light output of the at least one light source as a function of the brightness level (Iambient), as provided by the photo detector via the brightness signal.
    Type: Grant
    Filed: October 21, 2020
    Date of Patent: October 11, 2022
    Assignee: GOODRICH LIGHTING SYSTEMS GMBH
    Inventors: Bjoern Schallenberg, Carsten Pawliczek
  • Patent number: 11453130
    Abstract: A robot system, including: a robot; a base supporting the robot; a controller connected to the robot; a processor connected to the controller; a depth camera connected to the processor; a flange plate; a coupling shaft including a first end and a second end; a mounting base including an elongated hole, a first side wall, and a second side wall; a sprayer including a mounting shaft; a first positioning bolt; a limit arm including a first end and a second end; an axis pin; a limit shaft; a second positioning bolt; a gas cylinder; a piston rod; a connector; a shifter lever; and a trigger. The robot is connected to the first end of the coupling shaft via the flange plate. The second end of the coupling shaft is connected to the mounting base. The mounting shaft of the sprayer is disposed in the elongated hole of the mounting base.
    Type: Grant
    Filed: June 28, 2020
    Date of Patent: September 27, 2022
    Assignee: DALIAN NEWSTAR AUTOMOBILE EQUIPMENT CO., LTD.
    Inventors: Kedong Bi, Chaoping Qin, Long Cui, Wentao Li
  • Patent number: 11436735
    Abstract: A volume of an object is extracted from a three-dimensional image to generate a three-dimensional object image, where the three-dimensional object image represents the object but little to no other aspects of the three-dimensional image. The three-dimensional image is yielded from an examination in which the object, such as a suitcase, is situated within a volume, such as a luggage bin, that may contain other aspects or objects that are not of interest, such as sidewalls of the luggage bin. The three-dimensional image is projected to generate a two-dimensional image, and a two-dimensional boundary of the object is defined, where the two-dimensional boundary excludes or cuts off at least some of the uninteresting aspects. In some embodiments, the two-dimensional boundary is reprojected over the three-dimensional image to generate a three-dimensional boundary, and voxels comprised within the three-dimensional boundary are extracted to generate the three-dimensional object image.
    Type: Grant
    Filed: February 11, 2015
    Date of Patent: September 6, 2022
    Assignee: ANALOGIC CORPORATION
    Inventors: David Lieblich, Nirupam Sarkar, Daniel B. Keesing
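The project-then-reproject extraction described in the abstract above can be illustrated with a small sketch. This is an assumption-laden simplification: the max-projection along one axis, the bounding-box "boundary", and the function names are all hypothetical, whereas the patent covers general 2D boundaries reprojected into 3D.

```python
import numpy as np

def extract_object(volume, threshold=0.0):
    """Project a 3D volume to 2D (max along the last axis), take the 2D
    bounding box of above-threshold pixels as the object boundary, then
    reproject that box over the volume and keep only voxels inside it."""
    proj = volume.max(axis=2)                  # 2D projection of the 3D image
    ys, xs = np.nonzero(proj > threshold)      # pixels belonging to the object
    y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
    out = np.zeros_like(volume)                # everything outside is cut off
    out[y0:y1 + 1, x0:x1 + 1, :] = volume[y0:y1 + 1, x0:x1 + 1, :]
    return out, (y0, y1, x0, x1)

# A toy "suitcase" occupying part of a larger scanned volume:
vol = np.zeros((5, 5, 4))
vol[1:3, 2:4, :] = 1.0
obj, box = extract_object(vol)
```

The extracted volume retains the object's voxels while zeroing the surroundings (e.g., a luggage-bin sidewall outside the box would be dropped).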
  • Patent number: 11417020
    Abstract: A method includes: obtaining a stereo pair of images from a stereo camera assembly of a mobile computing device, the stereo pair of images depicting a first marker and a second marker each associated with the mobile computing device; determining, from the stereo pair of images, a distance between the first and second markers; comparing a threshold to a difference between the determined distance and a reference distance corresponding to the first and second reference markers; and when the difference exceeds the threshold, generating an alert notification.
    Type: Grant
    Filed: April 7, 2020
    Date of Patent: August 16, 2022
    Assignee: Zebra Technologies Corporation
    Inventors: Serguei Zolotov, Lawrence Allen Stone
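The distance-versus-threshold check in the abstract above reduces to a short comparison. A minimal sketch, with hypothetical names; the patent's marker detection from the stereo pair is assumed to have already produced 3D marker positions.

```python
import math

def check_marker_distance(p1, p2, reference, threshold):
    """Compare the measured distance between two markers against a
    reference distance; return (alert, measured) where alert is True
    when the absolute difference exceeds the threshold."""
    measured = math.dist(p1, p2)  # Euclidean distance (Python >= 3.8)
    return abs(measured - reference) > threshold, measured

# Markers 5.0 units apart, as expected -> no alert:
ok_alert, d = check_marker_distance((0, 0, 0), (3, 4, 0),
                                    reference=5.0, threshold=0.1)
# Same markers against a stale reference -> alert:
bad_alert, _ = check_marker_distance((0, 0, 0), (3, 4, 0),
                                     reference=4.5, threshold=0.1)
```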
  • Patent number: 11386616
    Abstract: A spatial indexing system receives a sequence of images depicting an environment, such as a floor of a construction site, and performs a spatial indexing process to automatically identify the spatial locations at which each of the images were captured. The spatial indexing system also generates an immersive model of the environment and provides a visualization interface that allows a user to view each of the images at its corresponding location within the model.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: July 12, 2022
    Assignee: OPEN SPACE LABS, INC.
    Inventors: Michael Ben Fleischman, Philip DeCamp, Jeevan Kalanithi, Thomas Friel Allen
  • Patent number: 11380050
    Abstract: A face image generation method includes: determining, according to a first face image, a three dimensional morphable model (3DMM) corresponding to the first face image as a first model; determining, according to a reference element, a 3DMM corresponding to the reference element as a second model, the reference element representing a posture and/or an expression of a target face image; determining, according to the first model and the second model, an initial optical flow map corresponding to the first face image, and deforming the first face image according to the initial optical flow map to obtain an initial deformation map; obtaining, through a convolutional neural network, an optical flow increment map and a visibility probability map that correspond to the first face image; and generating the target face image according to the first face image, the initial optical flow map, the optical flow increment map, and the visibility probability map.
    Type: Grant
    Filed: April 20, 2021
    Date of Patent: July 5, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xue Fei Zhe, Yonggen Ling, Lin Chao Bao, Yi Bing Song, Wei Liu
  • Patent number: 11367264
    Abstract: A computer implemented method or system including a map conversion toolkit and a map population toolkit. The map conversion toolkit allows one to quickly trace the layout of a floor plan, generating a file (e.g., GeoJSON file) that can be rendered in two dimensions (2D) or three dimensions (3D) using web tools such as Mapbox. The map population toolkit takes the scan (e.g., 3D scan) of a room in the building (taken from an RGB-D camera), and, through a semi-automatic process, generates individual objects, which are correctly dimensioned and positioned in the (e.g., GeoJSON) representation of the building. In another example, a computer implemented method for diagraming a space comprises obtaining a layout of the space; and annotating or decorating the layout with meaningful labels that are translatable to glanceable visual signals or audio signals.
    Type: Grant
    Filed: July 17, 2020
    Date of Patent: June 21, 2022
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Viet Trinh, Roberto Manduchi
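The traced-layout-to-GeoJSON step mentioned in the abstract above can be sketched as follows. The corner-list input and function name are assumptions; coordinate-system handling and the 3D rendering path are omitted.

```python
def layout_to_geojson(corners):
    """Turn a traced floor-plan outline (list of (x, y) corners) into a
    GeoJSON Polygon feature, renderable by web tools such as Mapbox.
    GeoJSON polygon rings must be closed (first point == last point)."""
    ring = [list(c) for c in corners]
    if ring[0] != ring[-1]:
        ring.append(list(ring[0]))  # close the ring
    return {
        "type": "Feature",
        "geometry": {"type": "Polygon", "coordinates": [ring]},
        "properties": {},
    }

# A rectangular room traced as four corners:
feature = layout_to_geojson([(0, 0), (10, 0), (10, 5), (0, 5)])
```

Labels for the annotation step could then be attached under the feature's `properties` member.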
  • Patent number: 11361457
    Abstract: An annotation system uses annotations for a first set of sensor measurements from a first sensor to identify annotations for a second set of sensor measurements from a second sensor. The annotation system identifies reference annotations in the first set of sensor measurements that indicate a location of a characteristic object in the two-dimensional space. The annotation system determines a spatial region in the three-dimensional space of the second set of sensor measurements that corresponds to a portion of the scene represented in the annotation of the first set of sensor measurements. The annotation system determines annotations within the spatial region of the second set of sensor measurements that indicate a location of the characteristic object in the three-dimensional space.
    Type: Grant
    Filed: July 17, 2019
    Date of Patent: June 14, 2022
    Assignee: Tesla, Inc.
    Inventor: Anting Shen
  • Patent number: 11321852
    Abstract: A method for initializing a tracking algorithm for target objects, includes generating a 3D point cloud of the target object and iteratively determining a spatial position and orientation of the target object using a 3D model. A spatial position and orientation of the target object is first determined using an artificial neural network, thereafter the tracking algorithm is initialized with a result of this determination. A method for training an artificial neural network for initializing a tracking algorithm for target objects includes generating a 3D point cloud of the target object by a scanning method, and iteratively determining a spatial position and orientation of the target object using a 3D model of the target object. The artificial neural network is trained using training data to initially determine a spatial position and orientation of the target object and thereafter initialize the tracking algorithm with a result of this initial determination.
    Type: Grant
    Filed: March 8, 2019
    Date of Patent: May 3, 2022
    Assignee: Jena-Optronik GmbH
    Inventors: Christoph Schmitt, Johannes Both, Florian Kolb
  • Patent number: 11315274
    Abstract: A method includes obtaining a reference image and a target image each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
    Type: Grant
    Filed: September 20, 2019
    Date of Patent: April 26, 2022
    Assignee: Google LLC
    Inventors: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
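The relationship between the mask, the static depth image, and the dynamic depth image in the abstract above can be illustrated with a naive compositing sketch. This is only an assumption-labeled analogy: the patented method learns the combination with a machine learning model rather than applying a hard mask, and all names here are hypothetical.

```python
def composite_depth(static_depth, predicted_depth, object_mask):
    """Naive stand-in for the learned combination: take model-predicted
    depth where the object mask marks a moving feature, and the
    parallax-derived static depth elsewhere."""
    return [
        [pred if moving else stat
         for stat, pred, moving in zip(s_row, p_row, m_row)]
        for s_row, p_row, m_row in zip(static_depth, predicted_depth,
                                       object_mask)
    ]

# 2x2 toy depth maps: mask marks two "moving" pixels on the diagonal.
dyn = composite_depth([[1, 1], [1, 1]],
                      [[5, 5], [5, 5]],
                      [[True, False], [False, True]])
```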
  • Patent number: 11301953
    Abstract: Disclosed are a panoramic video asymmetrical mapping method and a corresponding inverse mapping method that include mapping a spherical surface corresponding to a panoramic image or video A onto a two-dimensional image or video B, projecting the spherical surface onto an isosceles quadrangular pyramid with a square bottom plane, and further projecting the isosceles quadrangular pyramid onto a planar surface, using isometric projection on a main viewpoint region in the projection and using a relatively high sampling density to ensure that the video quality of the main viewpoint region is high, while using a relatively low sampling density for non-main viewpoint regions so as to reduce bit rate. The panoramic video asymmetrical inverse mapping technique provides a method for mapping from a planar surface to a spherical surface, and a planar surface video may be mapped back to a spherical surface for rendering and viewing.
    Type: Grant
    Filed: May 29, 2018
    Date of Patent: April 12, 2022
    Assignee: PEKING UNIVERSITY SHENZHEN GRADUATE SCHOOL
    Inventors: Ronggang Wang, Yueming Wang, Zhenyu Wang, Wen Gao
  • Patent number: 11282275
    Abstract: A method for generating a storybook includes generating metadata including shape information which is a predefined value for specifying a shape that a character model has in each of scenes in which a character of storybook content appears, receiving a facial image of a user, generating a user model based on a user face by applying texture information of the facial image to the character, generating a model image of the user model having a predefined shape in each of the scenes by reflecting shape information predefined in each of the scenes into the user model, and generating a file printable on a certain actual object to include at least one of the model images.
    Type: Grant
    Filed: December 9, 2020
    Date of Patent: March 22, 2022
    Assignee: ILLUNI INC.
    Inventors: Byunghwa Park, Youngjun Kwon, Gabee Jo
  • Patent number: 11263820
    Abstract: A method of operating a computing system to generate a model of an environment represented by a mesh is provided. The method allows 3D meshes to be updated for client applications in real time with low latency, supporting on-the-fly environment changes. The method provides 3D meshes adaptive to different levels of simplification requested by various client applications. The method provides local update, for example, updating only the mesh parts that have changed since the last update. The method also provides 3D meshes with planarized surfaces to support robust physics simulations. The method includes segmenting a 3D mesh into mesh blocks. The method also includes performing a multi-stage simplification on selected mesh blocks. The multi-stage simplification includes a pre-simplification operation, a planarization operation, and a post-simplification operation.
    Type: Grant
    Filed: October 26, 2020
    Date of Patent: March 1, 2022
    Assignee: Magic Leap, Inc.
    Inventors: David Geoffrey Molyneaux, Frank Thomas Steinbrücker, Zhongle Wu, Xiaolin Wei, Jianyuan Min, Yifu Zhang
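The planarization stage mentioned in the abstract above can be sketched in a toy form. A heavily simplified illustration under stated assumptions: it snaps near-coplanar vertices to an axis-aligned z-plane, whereas the patented pipeline fits general planes and runs pre- and post-simplification stages around this step; the names are hypothetical.

```python
def planarize_z(vertices, tol=0.05):
    """If a mesh block's vertices are nearly coplanar in z (within tol of
    their mean), snap them to the mean z so the surface is exactly flat,
    which keeps downstream physics simulations stable. Otherwise return
    the vertices unchanged."""
    zs = [v[2] for v in vertices]
    mean_z = sum(zs) / len(zs)
    if max(abs(z - mean_z) for z in zs) <= tol:
        return [(v[0], v[1], mean_z) for v in vertices]
    return vertices

# A slightly noisy, nearly flat quad gets snapped to one plane:
flat = planarize_z([(0, 0, 0.01), (1, 0, -0.01), (0, 1, 0.0), (1, 1, 0.02)])
# A genuinely non-flat patch is left alone:
kept = planarize_z([(0, 0, 0.0), (1, 0, 1.0), (0, 1, 0.0), (1, 1, 1.0)])
```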
  • Patent number: 11252430
    Abstract: The present disclosure is directed to a system and method for exploiting camera and depth information associated with rendered video frames, such as those rendered by a server operating as part of a cloud gaming service, to more efficiently encode the rendered video frames for transmission over a network. The method and system of the present disclosure can be used in a server operating in a cloud gaming service to reduce, for example, the latency, downstream bandwidth, and/or computational processing power associated with playing a video game over its service. The method and system of the present disclosure can be further used in other applications where camera and depth information of a rendered or captured video frame is available.
    Type: Grant
    Filed: November 1, 2019
    Date of Patent: February 15, 2022
    Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Khaled Mammou, Ihab Amer, Gabor Sines, Lei Zhang, Michael Schmit, Daniel Wong