Patents Examined by Martin Mushambo
  • Patent number: 11928780
    Abstract: In one implementation, a method of enriching a three-dimensional scene model with a three-dimensional object model based on a semantic label is performed at a device including one or more processors and non-transitory memory. The method includes obtaining a three-dimensional scene model of a physical environment including a plurality of points, wherein each of the plurality of points is associated with a set of coordinates in a three-dimensional space, wherein a subset of the plurality of points is associated with a particular cluster identifier and a particular semantic label. The method includes retrieving a three-dimensional object model based on the particular semantic label, the three-dimensional object model including at least a plurality of points. The method includes updating the three-dimensional scene model by replacing the subset of the plurality of points with the three-dimensional object model.
    Type: Grant
    Filed: July 20, 2022
    Date of Patent: March 12, 2024
    Assignee: APPLE INC.
    Inventor: Payal Jotwani
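    The replacement step this abstract describes (swap a semantically labeled point cluster for a retrieved object model) can be sketched in a few lines. Everything below — the point layout, the `MODEL_CATALOG` store, and the function name — is invented for illustration and is not from the patent:

    ```python
    # A point: (x, y, z, cluster_id, semantic_label)
    scene = [
        (0.0, 0.0, 0.0, None, None),
        (1.0, 0.1, 0.2, 7, "chair"),
        (1.1, 0.2, 0.2, 7, "chair"),
        (5.0, 0.0, 3.0, None, None),
    ]

    # Hypothetical model store keyed by semantic label.
    MODEL_CATALOG = {"chair": [(1.05, 0.15, 0.2), (1.05, 0.15, 0.9)]}

    def enrich(scene, cluster_id, label):
        """Replace the labeled cluster's points with the retrieved object model."""
        kept = [p for p in scene if p[3] != cluster_id]
        model = [(x, y, z, cluster_id, label) for (x, y, z) in MODEL_CATALOG[label]]
        return kept + model

    updated = enrich(scene, 7, "chair")
    ```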
  • Patent number: 11925860
    Abstract: This application discloses techniques for generating and querying projective hash maps. More specifically, projective hash maps can be used for spatial hashing of data related to N-dimensional points. Each point is projected onto a projection surface to convert the three-dimensional (3D) coordinates for the point to two-dimensional (2D) coordinates associated with the projection surface. Hash values based on the 2D coordinates are then used as an index to store data in the projective hash map. Utilizing the 2D coordinates rather than the 3D coordinates allows for more efficient searches to be performed to locate points in the 3D space. In particular, projective hash maps can be utilized by graphics applications for generating images, and the improved efficiency can, for example, enable a game streaming application on a server to render images transmitted to a user device via a network at faster frame rates.
    Type: Grant
    Filed: June 9, 2021
    Date of Patent: March 12, 2024
    Assignee: NVIDIA Corporation
    Inventors: Marco Salvi, Jacopo Pantaleoni, Aaron Eliot Lefohn, Christopher Ryan Wyman, Pascal Gautron
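    A minimal sketch of the projective-hashing idea in the abstract: project a 3D point onto an image plane, quantize the resulting 2D coordinates into a grid cell, and use that cell as the hash key. The projection surface (the plane z = 1), cell size, and class name are assumptions for illustration:

    ```python
    def project(point, focal=1.0):
        # Perspective projection of a 3D point onto the z = 1 image plane.
        x, y, z = point
        return (focal * x / z, focal * y / z)

    def cell(uv, cell_size=0.25):
        # Quantize 2D coordinates into a grid cell used as the hash key.
        u, v = uv
        return (int(u // cell_size), int(v // cell_size))

    class ProjectiveHashMap:
        """Toy spatial hash keyed by projected 2D coordinates."""
        def __init__(self):
            self.buckets = {}

        def insert(self, point, payload):
            self.buckets.setdefault(cell(project(point)), []).append(payload)

        def query(self, point):
            return self.buckets.get(cell(project(point)), [])

    phm = ProjectiveHashMap()
    phm.insert((1.0, 2.0, 4.0), "a")
    phm.insert((1.02, 2.0, 4.0), "b")  # projects into the same 2D cell
    print(phm.query((1.0, 2.0, 4.0)))  # → ['a', 'b']
    ```

    Hashing the 2D cell rather than a 3D voxel is what makes lookups along a view ray cheap, which is the efficiency claim in the abstract.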
  • Patent number: 11928787
    Abstract: Systems, apparatuses and methods may provide for technology that estimates poses of a plurality of input images, reconstructs a proxy three-dimensional (3D) geometry based on the estimated poses and the plurality of input images, detects a user selection of a virtual viewpoint, encodes, via a first neural network, the plurality of input images with feature maps, warps the feature maps of the encoded plurality of input images based on the virtual viewpoint and the proxy 3D geometry, and blends, via a second neural network, the warped feature maps into a single image, wherein the first neural network is a deep convolutional network and the second neural network is a recurrent convolutional network.
    Type: Grant
    Filed: September 22, 2020
    Date of Patent: March 12, 2024
    Assignee: Intel Corporation
    Inventors: Gernot Riegler, Vladlen Koltun
  • Patent number: 11908080
    Abstract: The various embodiments described herein include methods, devices, and systems for generating object meshes. In some embodiments, a method includes obtaining a trained classifier, and an input observation of a 3D object. The method further includes generating a three-pole signed distance field from the input observation using the trained classifier. The method also includes generating an output mesh of the 3D object from the three-pole signed distance field; and generating a display of the 3D object from the output mesh.
    Type: Grant
    Filed: April 4, 2022
    Date of Patent: February 20, 2024
    Assignee: TENCENT AMERICA LLC
    Inventors: Weikai Chen, Weiyang Li, Bo Yang
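    The "three-pole" field adds a third state to the usual inside/outside signs of a signed distance field. A toy classifier showing the three poles — the thresholds are invented, and the patent's actual field comes from a trained classifier, not a rule:

    ```python
    def three_pole_label(distance, confidence, tau=0.5):
        """Classify a sample as inside (-1), outside (+1), or the third
        'uncertain' pole (0) when the observation is ambiguous."""
        if confidence < tau:
            return 0  # third pole: not enough evidence either way
        return -1 if distance < 0 else 1
    ```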
  • Patent number: 11908098
    Abstract: Various implementations disclosed herein include devices, systems, and methods that generate a combined 3D representation of a user based on an alignment that uses a 3D reference point. For example, a process may include obtaining a predetermined three-dimensional (3D) representation that is associated with a 3D reference point defined relative to a skeletal representation of the user. The process may further include obtaining a sequence of frame-specific 3D representations corresponding to multiple instants in a period of time, each of the frame-specific 3D representations representing a second portion of the user at a respective instant of the multiple instants in the period of time. The process may further include generating combined 3D representations of the user by combining the predetermined 3D representation with a respective frame-specific 3D representation, based on an alignment that uses the 3D reference point.
    Type: Grant
    Filed: September 20, 2023
    Date of Patent: February 20, 2024
    Assignee: Apple Inc.
    Inventor: Michael S. Hutchinson
  • Patent number: 11900535
    Abstract: The following relates generally to light detection and ranging (LIDAR) and artificial intelligence (AI). In some embodiments, a system: receives LIDAR data generated from a LIDAR camera; measures a plurality of dimensions of a landscape based upon processor analysis of the LIDAR data; builds a 3D model of the landscape based upon the measured plurality of dimensions, the 3D model including: (i) a structure, and (ii) vegetation; and displays a representation of the 3D model.
    Type: Grant
    Filed: April 26, 2021
    Date of Patent: February 13, 2024
    Assignee: STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY
    Inventors: Nicholas Carmelo Marotta, Laura Kennedy, J D Johnson Willingham
  • Patent number: 11893698
    Abstract: An electronic device according to various embodiments of the disclosure includes: a communication module comprising communication circuitry and a processor operatively connected to the communication module. The processor may be communicatively connected to an augmented reality (AR) device through the communication module, and be configured to receive image information obtained by a camera of the AR device from the AR device, to detect an object based on the received image information, to acquire virtual information corresponding to the object, to control the communication module to transmit the virtual information to the AR device, to determine, based on the received image information, whether the object is out of a viewing range of the AR device, and to change a transfer interval of the virtual information for the AR device based on the determination.
    Type: Grant
    Filed: December 23, 2021
    Date of Patent: February 6, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Seungbum Lee, Seungseok Hong, Donghyun Yeom
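    The interval-adjustment logic in this abstract reduces to a simple rule: lengthen the transfer interval while the object is outside the viewing range. A sketch with invented angles and interval values:

    ```python
    def in_view(object_angle_deg, fov_deg=90.0):
        # The object is within the AR device's viewing range if its bearing
        # falls inside half the field of view on either side of center.
        return abs(object_angle_deg) <= fov_deg / 2

    def transfer_interval_ms(object_angle_deg, base_ms=100, slow_ms=1000):
        """Lengthen the virtual-information transfer interval when the
        detected object leaves the viewing range, saving bandwidth."""
        return base_ms if in_view(object_angle_deg) else slow_ms
    ```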
  • Patent number: 11886767
    Abstract: The disclosed system receives a request from a user to interact with an agent of a wireless telecommunication network including a 5G wireless telecommunication network or higher generation wireless telecommunication network. The system determines whether the user is associated with a first AR/VR device including a camera configured to capture an object proximate to the first AR/VR device and a display configured to show a virtual object, which is not part of a surrounding associated with the first AR/VR device. Upon determining that the user is associated with the first AR/VR device, the system creates, over the wireless telecommunication network, a high-bandwidth communication channel between the first AR/VR device and a second AR/VR device, and a virtual room enabling the user and the agent to share visual information over the high-bandwidth communication channel.
    Type: Grant
    Filed: June 17, 2022
    Date of Patent: January 30, 2024
    Assignee: T-Mobile USA, Inc.
    Inventor: Phi Nguyen
  • Patent number: 11887260
    Abstract: Aspects of the present disclosure involve a system for presenting augmented reality (AR) items. The system performs operations including receiving a video that includes a depiction of a real-world environment and generating a 3D model of the real-world environment based on the video. The operations include determining, based on the 3D model of the real-world environment, that an AR item has been placed in the video at a particular 3D position and identifying a portion of the 3D model corresponding to the real-world environment currently being displayed on a screen. The operations include determining that the 3D position of the AR item is excluded from the portion of the 3D model currently being displayed on the screen and in response, displaying an indicator that identifies the 3D position of the AR item in the 3D model relative to the portion of the 3D model currently being displayed on a screen.
    Type: Grant
    Filed: December 30, 2021
    Date of Patent: January 30, 2024
    Assignee: Snap Inc.
    Inventors: Avihay Assouline, Itamar Berger, Gal Dudovitch, Peleg Harel, Gal Sasson
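    The off-screen indicator the abstract describes can be sketched in normalized screen space: if the AR item's position falls outside the visible portion, clamp it to the nearest screen-border point and draw the indicator there. Coordinates and the unit screen are assumptions for illustration:

    ```python
    def offscreen_indicator(item_xy, screen_w=1.0, screen_h=1.0):
        """Return None if the item is on-screen, else the clamped
        screen-edge position where a direction indicator could be drawn."""
        x, y = item_xy
        if 0.0 <= x <= screen_w and 0.0 <= y <= screen_h:
            return None
        # Clamp to the nearest point on the screen border.
        return (min(max(x, 0.0), screen_w), min(max(y, 0.0), screen_h))
    ```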
  • Patent number: 11880939
    Abstract: Techniques related to embedding a 3D object model within a 3D scene are discussed. Such techniques include determining two or more object mask images for two or more corresponding cameras trained on the 3D scene, projecting 3D points from the 3D object model to the image planes of the two or more cameras, and determining a position and orientation of the 3D object model in the scene using the object mask images and the projected 3D points.
    Type: Grant
    Filed: August 20, 2020
    Date of Patent: January 23, 2024
    Assignee: Intel Corporation
    Inventors: Danny Khazov, Itay Kaufman, Or Weiser, Zohar Avnat, Roee Lazar
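    The mask test at the heart of this abstract — project the object model's 3D points into a camera and count how many land inside that camera's object mask — can serve as a placement score. Camera intrinsics and the tiny mask below are invented:

    ```python
    def project_pinhole(point, focal, cx, cy):
        # Pinhole projection of a camera-space 3D point to integer pixel coords.
        x, y, z = point
        return (int(focal * x / z + cx), int(focal * y / z + cy))

    def points_inside_mask(points, mask, focal=1.0, cx=2.0, cy=2.0):
        """Count projected model points that land inside the object mask;
        one camera's term of a placement score like the abstract's."""
        h, w = len(mask), len(mask[0])
        count = 0
        for p in points:
            u, v = project_pinhole(p, focal, cx, cy)
            if 0 <= v < h and 0 <= u < w and mask[v][u]:
                count += 1
        return count

    mask = [[0] * 5 for _ in range(5)]
    mask[2][2] = 1  # object occupies one pixel of this 5x5 mask
    score = points_inside_mask([(0.0, 0.0, 1.0), (3.0, 0.0, 1.0)], mask)
    ```

    Maximizing this count jointly over two or more cameras is what pins down the object's position and orientation in the scene.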
  • Patent number: 11869148
    Abstract: The present disclosure provides a three-dimensional object modeling method, an image processing method, and an image processing device.
    Type: Grant
    Filed: June 11, 2020
    Date of Patent: January 9, 2024
    Assignee: BEIJING CHENGSHI WANGLIN INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Lianjiang Zhou, Haibo Guan, Xiaojun Duan, Zhongfeng Wang, Haiyang Li, Yi Yang, Chen Zhu, Hu Tian
  • Patent number: 11867917
    Abstract: Various implementations disclosed herein include devices, systems, and methods that enable improved display of virtual content in computer generated reality (CGR) environments. In some implementations, the CGR environment is provided at an electronic device based on a field of view (FOV) of the device and a position of virtual content within the FOV. A display characteristic of the virtual object is adjusted to minimize or negate any adverse effects of the virtual object or a portion of the virtual object falling outside of the FOV of the electronic device.
    Type: Grant
    Filed: May 24, 2022
    Date of Patent: January 9, 2024
    Assignee: Apple Inc.
    Inventor: Luis R. Deliz Centeno
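    One plausible reading of "adjusting a display characteristic" is fading the virtual object in proportion to how much of it remains inside the FOV, so it never hard-clips at the display edge. A 1D sketch with an invented unit-interval FOV:

    ```python
    def visible_fraction(obj_min, obj_max, fov_min=0.0, fov_max=1.0):
        # Fraction of the object's 1D extent that lies inside the FOV.
        overlap = min(obj_max, fov_max) - max(obj_min, fov_min)
        return max(0.0, overlap) / (obj_max - obj_min)

    def adjusted_opacity(obj_min, obj_max):
        """Dim the virtual object as it slides out of the field of view."""
        return round(visible_fraction(obj_min, obj_max), 3)
    ```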
  • Patent number: 11860367
    Abstract: A display system for a head mounted device (HMD) including a lens comprising a display area on the lens of the HMD, the lens having a base angle and a pantoscopic tilt, a display engine and optics, and a prism to redirect output from the optics to the display area on the lens of the HMD, accounting for the base angle and the pantoscopic tilt.
    Type: Grant
    Filed: July 12, 2022
    Date of Patent: January 2, 2024
    Assignee: Avegant Corp.
    Inventors: Aaron Matthew Eash, Andrew John Gross, Edward Chia Ning Tang, Warren Cornelius Welch, III, Christopher David Westra
  • Patent number: 11861773
    Abstract: Provided are an information processing apparatus and an information processing method in which data of content is acquired, and a first visual field image corresponding to a visual field of a first user is cut out from a content image based on the data of the content. In addition, visual field information representing a visual field of a second user viewing the content image is acquired. Furthermore, in a display apparatus, the first visual field image is displayed, and the visual field of the second user is displayed based on the visual field information of the second user.
    Type: Grant
    Filed: March 6, 2023
    Date of Patent: January 2, 2024
    Assignee: SONY CORPORATION
    Inventors: Tsuyoshi Ishikawa, Kei Takahashi, Daisuke Sasaki
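    The cut-out step — extracting the first user's visual-field image from the full content image — is a rectangular crop. A sketch over a nested-list "image" with invented dimensions:

    ```python
    def cut_out_fov(content, top, left, height, width):
        """Cut the first user's visual-field image out of the full
        content image (a nested list of pixel values)."""
        return [row[left:left + width] for row in content[top:top + height]]

    # 4x6 toy content image; pixel value encodes (row, column).
    content = [[r * 10 + c for c in range(6)] for r in range(4)]
    view = cut_out_fov(content, top=1, left=2, height=2, width=3)
    ```

    The second user's visual-field information would then be rendered as an overlay (e.g. a frame) at its corresponding rectangle within the first user's view.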
  • Patent number: 11862095
    Abstract: An OLED display system having compensation for loss of brightness is provided, including OLED-based display pixels, a sensing system having sensors, and a processor having a lock-in amplifier (LIA), a low-pass filter (LPF), and analog-to-digital circuitry (ADC) connected to each sensor to provide a sensor signal for each sensor. The processor is adapted to apply a drive signal having a periodic signal to at least one OLED pixel in the display, receive the sensor signal, provide a primary frequency component from the sensor signals using the LIA based on the periodic signal, provide secondary frequency components from the sensor signals using the LPF, convert the secondary frequency components to a digital signal using the ADC, provide the digital signal to the processor as a sensing signal, and determine compensation for the drive signal. A method is also provided.
    Type: Grant
    Filed: June 14, 2022
    Date of Patent: January 2, 2024
    Assignee: eMagin Corporation
    Inventors: Seonki Kim, Amalkumar P. Ghosh, Olivier Prache, Hyuk Sang Kwon
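    The LIA step of this abstract — recovering the drive-frequency component of a sensor signal — is classic lock-in detection: multiply by quadrature references at the drive frequency and average. A numeric sketch with an invented test signal (0.7 amplitude at 50 Hz plus a DC offset):

    ```python
    import math

    def lock_in_amplitude(signal, ref_freq, sample_rate):
        """Recover the amplitude of the reference-frequency component by
        mixing with quadrature references and averaging (toy LIA)."""
        n = len(signal)
        i = sum(s * math.cos(2 * math.pi * ref_freq * k / sample_rate)
                for k, s in enumerate(signal)) / n
        q = sum(s * math.sin(2 * math.pi * ref_freq * k / sample_rate)
                for k, s in enumerate(signal)) / n
        return 2 * math.hypot(i, q)

    fs, f = 1000, 50
    sig = [0.7 * math.sin(2 * math.pi * f * k / fs) + 0.1 for k in range(1000)]
    amp = lock_in_amplitude(sig, f, fs)  # recovers ~0.7 despite the DC offset
    ```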
  • Patent number: 11861900
    Abstract: Images of an object may be captured via a camera at a mobile computing device at different viewpoints. The images may be used to identify components of the object and to identify damage estimates estimating damage to some or all of the components. Capture coverage levels corresponding with the components may be determined, and then recording guidance may be provided for capturing additional images to increase the capture coverage levels.
    Type: Grant
    Filed: November 12, 2021
    Date of Patent: January 2, 2024
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Matteo Munaro, Pavel Hanchar, Rodrigo Ortiz-Cayon, Aidas Liaudanskas
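    The coverage bookkeeping in this abstract can be sketched as a per-component score that each captured image raises, with guidance naming the components still below a threshold. Component names, deltas, and the threshold are invented:

    ```python
    def update_coverage(coverage, detections):
        # Each image's detections raise the capture coverage of the
        # components it shows, capped at fully covered (1.0).
        for component, delta in detections:
            coverage[component] = min(1.0, coverage.get(component, 0.0) + delta)
        return coverage

    def recording_guidance(coverage, threshold=0.8):
        """Name the components that still need more captured images."""
        return sorted(c for c, v in coverage.items() if v < threshold)

    cov = update_coverage({}, [("front_bumper", 0.9), ("left_door", 0.4)])
    ```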
  • Patent number: 11836882
    Abstract: Provided are an initial view angle control and presentation method and system based on a three-dimensional point cloud. A three-dimensional media stream is read and parsed, and an initial viewpoint, a normal vector, and a forward direction vector in the three-dimensional media stream are extracted. When initially consuming the three-dimensional media content, a user thus views the initial angle, that is, the region of interest designated by the content producer. Moreover, scaling, that is, scale transformation, of the three-dimensional media content is supported in an optimized manner. In a real-time interactive scene, the view range of the user can be adjusted based on the position of the user relative to the initial viewpoint. The degree of freedom of visual media consumption is thereby improved according to the interactive behavior of the user, providing an immersive user experience.
    Type: Grant
    Filed: June 28, 2020
    Date of Patent: December 5, 2023
    Assignee: SHANGHAI JIAO TONG UNIVERSITY
    Inventors: Yiling Xu, Linyao Gao, Wenjie Zhu, Yunfeng Guan
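    The scale transformation the abstract permits — scaling the three-dimensional content about the producer-designated initial viewpoint — is a one-liner per point. The coordinates below are invented:

    ```python
    def scale_about_viewpoint(points, viewpoint, s):
        """Scale-transform a point cloud about the initial viewpoint."""
        vx, vy, vz = viewpoint
        return [(vx + s * (x - vx), vy + s * (y - vy), vz + s * (z - vz))
                for x, y, z in points]
    ```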
  • Patent number: 11823348
    Abstract: A method and system for training a neural network to perform processing of digital data. The input data can be heterogeneous, and the method or system obtains multiple loss signals. The input data can be selected so that the loss signals are balanced and the output data fulfills several conditions. When running a trained neural network on digital frame images, intermediate results from processing one frame can be reused on later frames, decreasing processing delay.
    Type: Grant
    Filed: May 17, 2019
    Date of Patent: November 21, 2023
    Assignee: BARCO N.V.
    Inventor: Karel Jan Willem Moens
  • Patent number: 11823346
    Abstract: Aspects of the present disclosure involve a system for presenting AR items. The system performs operations including: receiving an image that includes a depiction of a first real-world body part in a real-world environment; applying a machine learning technique to the image to generate a plurality of dense outputs each associated with a respective pixel of a plurality of pixels in the image; applying a first task-specific decoder to the plurality of dense outputs to identify a pixel corresponding to a center of the first real-world body part; applying a second task-specific decoder using the identified pixel to retrieve a 3D rotation, translation and scale of first real-world body part from the plurality of dense outputs; modifying an AR object based on the 3D rotation, translation, and scale of first real-world body part; and modifying the image to include a depiction of the modified AR object.
    Type: Grant
    Filed: March 9, 2022
    Date of Patent: November 21, 2023
    Assignee: Snap Inc.
    Inventors: Daniel Monteiro Stoddart, Efstratios Skordos, Iason Kokkinos
  • Patent number: 11804015
    Abstract: The present invention provides a method for determining a plane, a method for displaying Augmented Reality (AR) display information, and corresponding devices. The method comprises the steps of: performing region segmentation and depth estimation on multimedia information; determining, according to the results of region segmentation and depth estimation, 3D plane information of the multimedia information; and displaying AR display information according to the 3D plane information corresponding to the multimedia information. With the method for determining a plane, the method for displaying AR display information, and the corresponding devices provided by the present invention, virtual display information can be added onto a 3D plane, the realism of the AR display effect can be improved, and the user experience can be enhanced.
    Type: Grant
    Filed: October 30, 2018
    Date of Patent: October 31, 2023
    Inventors: Zhenbo Luo, Shu Wang, Xiangyu Zhu, Yingying Jiang
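    Once a segmented region has depth estimates, a 3D plane can be recovered from as few as three of its points via a cross product. A minimal stand-in for the abstract's plane-determination step (the real method aggregates a whole region, not three points):

    ```python
    def plane_from_points(p0, p1, p2):
        """Plane through three non-collinear 3D points, returned as
        (normal n, offset d) with n . p + d = 0 on the plane."""
        ax, ay, az = (p1[i] - p0[i] for i in range(3))
        bx, by, bz = (p2[i] - p0[i] for i in range(3))
        n = (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)  # cross product
        d = -sum(n[i] * p0[i] for i in range(3))
        return n, d

    n, d = plane_from_points((0, 0, 0), (1, 0, 0), (0, 1, 0))  # the z = 0 plane
    ```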