Patents Examined by Michael J. Cobb
  • Patent number: 11967033
    Abstract: Certain aspects of the present disclosure provide techniques for rendering visual artifacts in virtual worlds using machine learning models. An example method generally includes identifying, based on a machine learning model and a streaming natural language input, an intent associated with the streaming natural language input; generating, based on the identified intent associated with the streaming natural language input, one or more virtual objects for rendering in a virtual environment displayed on one or more displays of an electronic device; and rendering the generated one or more virtual objects in the virtual environment.
    Type: Grant
    Filed: June 30, 2023
    Date of Patent: April 23, 2024
    Assignee: INTUIT INC.
    Inventors: David A. Pisoni, Nigel T. Menendez, Richard J. Becker
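The abstract's pipeline (streaming input → intent → virtual objects) can be sketched minimally. This is not Intuit's implementation: a keyword table stands in for the machine learning model, and the intent names and object catalog are invented for illustration.

```python
# Hypothetical intent table standing in for the ML model in the abstract.
INTENT_KEYWORDS = {
    "show_chart": {"chart", "graph", "plot"},
    "show_form": {"form", "document", "receipt"},
}

# Hypothetical catalog mapping each intent to virtual objects to render.
OBJECTS_FOR_INTENT = {
    "show_chart": ["3d_bar_chart"],
    "show_form": ["floating_form_panel"],
}

def identify_intent(stream_tokens):
    """Pick the intent whose keywords best match the streamed tokens."""
    seen = {t.lower() for t in stream_tokens}
    best, best_hits = None, 0
    for intent, words in INTENT_KEYWORDS.items():
        hits = len(seen & words)
        if hits > best_hits:
            best, best_hits = intent, hits
    return best

def objects_to_render(stream_tokens):
    """Generate virtual objects for the identified intent."""
    intent = identify_intent(stream_tokens)
    return OBJECTS_FOR_INTENT.get(intent, [])

print(objects_to_render("please show my expenses as a chart".split()))
# ['3d_bar_chart']
```

A real system would replace the keyword lookup with a trained classifier over the streaming input, but the control flow (classify, then map intent to renderable objects) is the same.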
  • Patent number: 11961177
    Abstract: A method of controlling a display device includes rendering a plurality of viewpoint images, generating a plurality of sub-images based on the plurality of viewpoint images and a plurality of mapping pattern images corresponding to the plurality of viewpoint images, generating a single light-field image based on the plurality of sub-images, and outputting the single light-field image.
    Type: Grant
    Filed: October 4, 2021
    Date of Patent: April 16, 2024
    Assignees: SAMSUNG DISPLAY CO., LTD., MAXST CO., LTD.
    Inventors: Rang Kyun Mok, Ji Young Choi, Gi Seok Kwon, Jae Joong Kwon, Beom Shik Kim, Jae Wan Park
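The claimed pipeline can be sketched in miniature: each viewpoint image is masked by its mapping pattern image to produce a sub-image, and the sub-images are combined into a single light-field image. Images here are tiny flat pixel lists and the complementary masks are invented; this is an illustration, not the patented display logic.

```python
def make_sub_image(viewpoint, pattern):
    """Keep a pixel only where the mapping pattern is 1."""
    return [p if m else 0 for p, m in zip(viewpoint, pattern)]

def combine(sub_images):
    """Merge sub-images; each output pixel comes from exactly one view."""
    return [sum(px) for px in zip(*sub_images)]

viewpoints = [[10, 10, 10, 10], [20, 20, 20, 20]]  # two viewpoint "images"
patterns   = [[1, 0, 1, 0],     [0, 1, 0, 1]]      # complementary masks

subs = [make_sub_image(v, p) for v, p in zip(viewpoints, patterns)]
light_field = combine(subs)
print(light_field)  # [10, 20, 10, 20]
```

The interleaved result is the essence of a light-field image: neighboring output pixels carry different viewpoints, so optics in front of the panel can steer each viewpoint to a different eye position.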
  • Patent number: 11948244
    Abstract: Methods and systems for providing a dynamic product presentation are disclosed. In one example, a method comprises providing, by a processor, a three-dimensional representation of a product in a virtual environment for display on a customer device; and responsive to the processor identifying a surface in a camera feed of the customer device having a dimensionality suitable for the product, generating, by the processor, an augmented media containing an augmented reality representation of a three-dimensional model for the product on the surface.
    Type: Grant
    Filed: February 11, 2022
    Date of Patent: April 2, 2024
    Assignee: SHOPIFY INC.
    Inventors: Russ Maschmeyer, Adam Debreczeni, Eric Andrew Florenzano, Brennan Letkeman, Sarah Hurtgen, James Harold Hall, Jr.
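The "dimensionality suitable for the product" test from the abstract can be sketched as a simple footprint comparison. This is a minimal assumption-laden illustration (dimensions as width and depth in meters), not Shopify's detection logic.

```python
def surface_suitable(surface_dims, product_footprint):
    """True if the detected surface can hold the product's 3D model."""
    return (surface_dims[0] >= product_footprint[0]
            and surface_dims[1] >= product_footprint[1])

# A 1.2 m x 0.8 m table easily fits a 0.4 m x 0.4 m lamp model:
print(surface_suitable((1.2, 0.8), (0.4, 0.4)))  # True
```

Only when this check passes would the system generate the augmented media placing the product's model on the surface.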
  • Patent number: 11941727
    Abstract: Systems and methods for facial image generation are described. One aspect of the systems and methods includes receiving an image depicting a face, wherein the face has an identity non-related attribute and a first identity-related attribute; encoding the image to obtain an identity non-related attribute vector in an identity non-related attribute vector space, wherein the identity non-related attribute vector represents the identity non-related attribute; selecting an identity-related vector from an identity-related vector space, wherein the identity-related vector represents a second identity-related attribute different from the first identity-related attribute; generating a modified latent vector in a latent vector space based on the identity non-related attribute vector and the identity-related vector; and generating a modified image based on the modified latent vector, wherein the modified image depicts a face that has the identity non-related attribute and the second identity-related attribute.
    Type: Grant
    Filed: July 21, 2022
    Date of Patent: March 26, 2024
    Assignee: ADOBE INC.
    Inventors: Saeid Motiian, Wei-An Lin, Shabnam Ghadar
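The vector manipulation described in the abstract can be illustrated with toy stand-ins for the learned encoder and decoder: keep the identity non-related attribute vector from the input, swap in a different identity-related vector, and decode the combination. Every function and number below is invented for illustration; Adobe's actual networks are learned models.

```python
def encode_non_identity(image_vec):
    """Toy encoder: pretend the first half of the 'image' carries the
    identity non-related attributes (pose, lighting, expression)."""
    return image_vec[: len(image_vec) // 2]

# Toy identity-related vector space with two selectable identities.
IDENTITY_SPACE = {
    "identity_a": [1.0, 0.0],
    "identity_b": [0.0, 1.0],
}

def modified_latent(non_id_vec, identity_vec):
    """Simplest possible combination: concatenation."""
    return non_id_vec + identity_vec

def decode(latent):
    """Toy decoder: the 'image' is just the latent echoed back."""
    return list(latent)

image = [0.3, 0.7, 1.0, 0.0]          # first half: non-identity attributes
non_id = encode_non_identity(image)    # [0.3, 0.7] is preserved
latent = modified_latent(non_id, IDENTITY_SPACE["identity_b"])
print(decode(latent))  # [0.3, 0.7, 0.0, 1.0]: same attributes, new identity
```

The point of the structure is disentanglement: because the two vector spaces are separate, swapping the identity-related vector changes the depicted identity without disturbing pose or expression.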
  • Patent number: 11928785
    Abstract: Techniques (e.g., systems, apparatus, methods) for context-based management of tokens are described. In an example, a geographical location of a device is used as one possible context. This location can correspond to a physical location associated with an AR virtual object container. This container can be associated with a set of virtual object container information applicable to the context, such as to the device's location. Based on separately maintained virtual object information, virtual objects to be shown as being available from the container are determined. Each of such virtual objects can be associated with a set of tokens. In an AR session, the container and the virtual objects are presented. An interaction with the container or a virtual object can result in associating a relevant set of tokens with a user account by recording information about the container, the virtual object, the user account, and/or the context(s).
    Type: Grant
    Filed: September 13, 2023
    Date of Patent: March 12, 2024
    Assignee: Nant Holdings IP, LLC
    Inventors: Nicholas J. Witchey, John Wiacek, Jake Fyfe, Patrick Soon-Shiong
  • Patent number: 11908059
    Abstract: A server device is configured to provide a combined environment and includes processor circuitry configured to determine first parameters indicative of a first location, generate first environment data indicative of the first location, determine second parameters indicative of a second location, and associate the second parameters with the first environment data for providing combined environment data, and output the combined environment data.
    Type: Grant
    Filed: March 15, 2022
    Date of Patent: February 20, 2024
    Assignee: SONY GROUP CORPORATION
    Inventors: Hannes Bergkvist, Peter Exner, Peter Blomqvist, Anders Isberg
  • Patent number: 11900552
    Abstract: A method for generating virtual pseudo three-dimensional (3D) 360-degree outputs from 2D images of an object is provided. An image viewer plane of the object in the 3D image to be rendered on a user device is detected using an augmented reality technique. The image viewer plane is placed facing the user device rendering ‘Image 0’, and movement coordinates of the user device with respect to the image viewer plane are detected to calculate the virtual pseudo 3D image set to be displayed based on at least one angle of view, by performing interpolation between two consecutive virtual pseudo 3D images. The image viewer plane is changed with respect to the movement of the user device to change the virtual pseudo 3D image and the interpolated virtual pseudo 3D image on the plane, and that image is displayed as an augmented reality object to the user device in real time.
    Type: Grant
    Filed: March 26, 2022
    Date of Patent: February 13, 2024
    Inventor: Eobin Alex George
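The interpolation step described above can be sketched as follows: given the device's angle of view, pick the two consecutive pseudo-3D views that bracket it and blend linearly. The flat pixel lists and the 30-degree view spacing are assumptions for illustration.

```python
VIEW_STEP_DEG = 30  # angular spacing between captured views (assumed)

def interpolate(img_a, img_b, t):
    """Linear blend between two consecutive views, t in [0, 1]."""
    return [(1 - t) * a + t * b for a, b in zip(img_a, img_b)]

def view_for_angle(views, angle_deg):
    """Select the bracketing views for this angle and blend between them."""
    i = int(angle_deg // VIEW_STEP_DEG)
    t = (angle_deg % VIEW_STEP_DEG) / VIEW_STEP_DEG
    if i >= len(views) - 1:          # clamp at the last captured view
        return views[-1]
    return interpolate(views[i], views[i + 1], t)

views = [[0, 0], [10, 20], [20, 40]]  # three captured pseudo-3D views
print(view_for_angle(views, 15.0))    # halfway between views 0 and 1: [5.0, 10.0]
```

Blending between the two nearest captured views is what lets a sparse set of 2D images masquerade as a continuous 360-degree rotation.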
  • Patent number: 11893700
    Abstract: Spatial information that describes spatial locations of visual objects as in a three-dimensional (3D) image space as represented in one or more multi-view unlayered images is accessed. Based on the spatial information, a cinema image layer and one or more device image layers are generated from the one or more multi-view unlayered images. A multi-layer multi-view video signal comprising the cinema image layer and the device image layers is sent to downstream devices for rendering.
    Type: Grant
    Filed: April 28, 2022
    Date of Patent: February 6, 2024
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Ajit Ninan, Neil Mammen, Tyrome Y. Brown
  • Patent number: 11887251
    Abstract: A computing device in communication with an immersive content generation system can generate and present images of a virtual environment on one or more light-emitting diode (LED) displays at least partially surrounding a performance area. The device may capture a plurality of images of a performer or a physical object in the performance area along with at least some portion of the images of the virtual environment by a taking camera. The device may identify a color mismatch between a portion of the performer or the physical object and a virtual image of the performer or the physical object in the images of the virtual environment. The device may generate a patch for the images of the virtual environment to correct the color mismatch. The device may insert the patch into the images of the virtual environment. Also, the device may generate content based on the plurality of captured images.
    Type: Grant
    Filed: April 8, 2022
    Date of Patent: January 30, 2024
    Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: Michael Jutan, David Hirschfield, Alan Bucior
  • Patent number: 11880951
    Abstract: A method for representing virtual information in a view of a real environment comprises providing a virtual object having a global position and orientation with respect to a geographic global coordinate system, with first pose data on the global position and orientation of the virtual object, in a database of a server, taking an image of a real environment by a mobile device and providing second pose data as to at which position and with which orientation with respect to the geographic global coordinate system the image was taken. The method further includes displaying the image on a display of the mobile device, accessing the virtual object in the database and positioning the virtual object in the image on the basis of the first and second pose data, manipulating the virtual object or adding a further virtual object, and providing the manipulated virtual object with modified first pose data or the further virtual object with third pose data in the database.
    Type: Grant
    Filed: August 8, 2022
    Date of Patent: January 23, 2024
    Assignee: Apple Inc.
    Inventors: Peter Meier, Michael Kuhn, Frank Angermann
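The positioning step from the abstract — combining the object's first pose data with the image's second pose data — amounts to expressing a global position in the camera's frame. A 2-D, yaw-only sketch with invented numbers, not Apple's implementation:

```python
import math

def to_camera_frame(obj_xy, cam_xy, cam_yaw_deg):
    """Express the object's global position in the camera's coordinate
    frame: translate by the camera position, then rotate by the inverse
    of the camera's yaw."""
    dx, dy = obj_xy[0] - cam_xy[0], obj_xy[1] - cam_xy[1]
    a = math.radians(-cam_yaw_deg)
    return (dx * math.cos(a) - dy * math.sin(a),
            dx * math.sin(a) + dy * math.cos(a))

x, y = to_camera_frame(obj_xy=(3.0, 4.0), cam_xy=(3.0, 2.0), cam_yaw_deg=90.0)
# The object lands 2 units ahead along the camera's viewing axis.
```

A full implementation works in 3-D with a complete rotation (and a camera projection to reach pixel coordinates), but the frame change is the core of positioning the virtual object in the image.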
  • Patent number: 11875439
    Abstract: Embodiments described herein relate to an augmented expression system to generate and cause display of a specially configured interface to present an augmented reality perspective. The augmented expression system receives image and video data of a user and tracks facial landmarks of the user based on the image and video data, in real-time to generate and present a 3-dimensional (3D) bitmoji of the user.
    Type: Grant
    Filed: April 15, 2020
    Date of Patent: January 16, 2024
    Assignee: Snap Inc.
    Inventors: Chen Cao, Yang Gao, Zehao Xue
  • Patent number: 11875600
    Abstract: The subject technology captures first image data by a computing device, the first image data comprising a target face of a target actor and facial expressions of the target actor, the facial expressions including lip movements. The subject technology generates, based at least in part on frames of a source media content, sets of source pose parameters. The subject technology receives a selection of a particular facial expression from a set of facial expressions. The subject technology generates, based at least in part on sets of source pose parameters and the selection of the particular facial expression, an output media content. The subject technology provides augmented reality content based at least in part on the output media content for display on the computing device.
    Type: Grant
    Filed: March 29, 2022
    Date of Patent: January 16, 2024
    Assignee: Snap Inc.
    Inventors: Roman Golobokov, Alexandr Marinenko, Aleksandr Mashrabov, Aleksei Bromot, Grigoriy Tkachenko
  • Patent number: 11861798
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating composite images. One of the methods includes maintaining first data associating each location within an environment with a particular time; obtaining an image depicting the environment from a point of view of a display device; obtaining second data characterizing one or more virtual objects; and processing the obtained image and the second data to generate a composite image depicting the one or more virtual objects at respective locations in the environment from the point of view of the display device, wherein the composite image depicts each virtual object according to the particular time that the first data associates with the location of the virtual object in the environment.
    Type: Grant
    Filed: July 26, 2021
    Date of Patent: January 2, 2024
    Inventor: Stephen Wilkes
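The "first data" idea above — each environment location carries its own particular time, and each virtual object is drawn according to the time at its location — can be sketched with invented lookup tables:

```python
# Hypothetical first data: each location is associated with a time of day.
time_at_location = {(0, 0): "dawn", (5, 0): "noon", (9, 0): "dusk"}

def lighting_for(time_of_day):
    """Hypothetical per-time rendering style."""
    return {"dawn": "warm_low", "noon": "bright", "dusk": "warm_dim"}[time_of_day]

def composite(objects):
    """objects: list of (name, location); render each object according to
    the particular time at its location."""
    return [(name, lighting_for(time_at_location[loc])) for name, loc in objects]

print(composite([("tree", (0, 0)), ("car", (5, 0))]))
# [('tree', 'warm_low'), ('car', 'bright')]
```

The effect is a single composite image in which different regions of the scene appear to exist at different times of day.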
  • Patent number: 11854152
    Abstract: Wearable systems for privacy preserving expression generation for augmented or virtual reality client applications. An example method includes receiving, by an expression manager configured to communicate expression information to client applications, a request from a client application for access to the expression information. The expression information reflects information derived from one or more sensors of the wearable system, with the client application being configured to present virtual content including an avatar rendered based on the expression information. A user interface is output for presentation which requests user authorization for the client application to access the expression information. In response to receiving user input indicating user authorization, access to the expression information is enabled. The client application obtains periodic updates to the expression information, and the avatar is rendered based on the periodic updates.
    Type: Grant
    Filed: January 30, 2023
    Date of Patent: December 26, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Tomislav Pejsa, Dushan Vasilevski, Victor Ng-Thow-Hing, Koichi Mori
  • Patent number: 11847727
    Abstract: A computer-implemented method for generating a machine-learned model to generate facial position data based on audio data comprising training a conditional variational autoencoder having an encoder and decoder. The training comprises receiving a set of training data items, each training data item comprising a facial position descriptor and an audio descriptor; processing one or more of the training data items using the encoder to obtain distribution parameters; sampling a latent vector from a latent space distribution based on the distribution parameters; processing the latent vector and the audio descriptor using the decoder to obtain a facial position output; calculating a loss value based at least in part on a comparison of the facial position output and the facial position descriptor of at least one of the one or more training data items; and updating parameters of the conditional variational autoencoder based at least in part on the calculated loss value.
    Type: Grant
    Filed: December 21, 2022
    Date of Patent: December 19, 2023
    Assignee: ELECTRONIC ARTS INC.
    Inventors: Jorge del Val Santos, Linus Gisslen, Martin Singh-Blom, Kristoffer Sjöö, Mattias Teye
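One training step from the abstract can be sketched end to end: encode to distribution parameters, sample a latent vector via the reparameterization trick, decode with the audio descriptor, and compute a loss against the target facial positions. The encoder/decoder below are toy stand-in functions with invented descriptors, not Electronic Arts' networks.

```python
import math
import random

def encoder(facial, audio):
    """Toy encoder: produce latent distribution parameters (mu, log_var)."""
    mu = sum(facial) / len(facial)
    log_var = 0.0
    return mu, log_var

def sample_latent(mu, log_var):
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, 1)."""
    eps = random.gauss(0.0, 1.0)
    return mu + math.exp(0.5 * log_var) * eps

def decoder(z, audio):
    """Toy decoder: latent + audio descriptor -> facial position output."""
    return [z + a for a in audio]

def loss(output, target):
    """Reconstruction term only; a real CVAE adds a KL-divergence term."""
    return sum((o - t) ** 2 for o, t in zip(output, target)) / len(output)

facial = [0.2, 0.4, 0.6]   # facial position descriptor (toy)
audio = [0.1, 0.0, -0.1]   # audio descriptor (toy)
mu, log_var = encoder(facial, audio)
z = sample_latent(mu, log_var)
out = decoder(z, audio)
print(loss(out, facial) >= 0.0)  # True: the loss is a non-negative scalar
```

In training, this loss value would drive gradient updates to the autoencoder's parameters; at inference time only the decoder is needed, sampling latents and feeding in new audio to animate the face.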
  • Patent number: 11842444
    Abstract: Embodiments include systems and methods for visualizing the position of a capturing device within a 3D mesh, generated from a video stream from the capturing device. A capturing device may provide a video stream along with point cloud data and camera pose data. This video stream, point cloud data, and camera pose data are then used to progressively generate a 3D mesh. The camera pose data and point cloud data can further be used, in conjunction with a SLAM algorithm, to indicate the position and orientation of the capturing device within the generated 3D mesh.
    Type: Grant
    Filed: June 2, 2021
    Date of Patent: December 12, 2023
    Assignee: STREEM, LLC
    Inventors: Sean M. Adkinson, Teressa Chizeck, Ryan R. Fink
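The final step described above — using camera pose data to indicate the capturing device inside the generated mesh — can be sketched as placing a small oriented marker in the mesh's coordinate frame. The pose format (position plus yaw) is an assumption for illustration.

```python
import math

def device_marker(position, yaw_deg, length=0.5):
    """Return the marker origin and a tip point showing view direction."""
    yaw = math.radians(yaw_deg)
    tip = (position[0] + length * math.cos(yaw),
           position[1] + length * math.sin(yaw),
           position[2])
    return position, tip

origin, tip = device_marker((1.0, 2.0, 0.0), 90.0)
print(origin, tip)  # the tip sits 0.5 units along +y from the origin
```

In the actual system, the SLAM algorithm supplies this pose continuously, so the marker moves through the progressively built mesh as the user walks the space.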
  • Patent number: 11830154
    Abstract: The application provides an AR-based information display method, an AR apparatus, an electronic device, and a storage medium, applicable to the field of computing. The method comprises: acquiring voice information and a user image of a user; recognizing the voice information and extracting user characteristics; and, when the user image matches the user characteristics, displaying, by an AR display device, target information associated with the user at a display position corresponding to the user image, wherein the target information comprises at least one of user information and voice-associated information.
    Type: Grant
    Filed: September 29, 2021
    Date of Patent: November 28, 2023
    Assignees: Beijing BOE Optoelectronics Technology Co., Ltd., BOE Technology Group Co., Ltd.
    Inventors: Jinghua Miao, Yanqiu Zhao, Qingwen Fan, Xuefeng Wang, Wenyu Li, Lili Chen, Hao Zhang
  • Patent number: 11816770
    Abstract: Ontological graph creation via a user interface is disclosed, including: receiving a selection to import an imported ontological subgraph into a current ontological graph; presenting at least a portion of the imported ontological subgraph in a user interface associated with editing the current ontological graph; receiving, via the user interface, a user input to associate a newly defined node associated with the current ontological graph with a previously defined node or edge associated with the presented at least portion of the imported ontological subgraph; and updating a graph database associated with the current ontological graph based at least in part on the user input and the imported ontological subgraph.
    Type: Grant
    Filed: February 8, 2022
    Date of Patent: November 14, 2023
    Inventors: Jefferson Barlew, Christopher Riley
  • Patent number: 11816787
    Abstract: The invention relates to a method for representing an environmental region of a motor vehicle in an image, in which real images of the environmental region are captured by a plurality of real cameras of the motor vehicle and the image is generated from these real images, which at least partially represents the environmental region, wherein the image is represented from a perspective of a virtual camera arranged in the environmental region, and the image is generated as a bowl shape, wherein at least one virtual elongated distance marker is represented in the image, by which a distance to the motor vehicle is symbolized in the virtual bowl shape. The invention also relates to a computer program product and a display system for a motor vehicle.
    Type: Grant
    Filed: January 10, 2019
    Date of Patent: November 14, 2023
    Assignee: Connaught Electronics Ltd.
    Inventors: Huanqing Guo, Fergal O'Malley, Guenter Bauer, Felix Ruhl
  • Patent number: 11803999
    Abstract: Systems, methods, and techniques utilize reinforcement learning to efficiently schedule a sequence of jobs for execution by one or more processing threads. A first sequence of execution jobs associated with rendering a target frame of a sequence of frames is received. One or more reward metrics related to rendering the target frame are selected. A modified sequence of execution jobs for rendering the target frame is generated, such as by reordering the first sequence of execution jobs. The modified sequence is evaluated with respect to the selected reward metric(s), and rendering of the target frame is initiated based at least in part on that evaluation.
    Type: Grant
    Filed: November 18, 2021
    Date of Patent: October 31, 2023
    Assignees: Advanced Micro Devices, Inc., ATI TECHNOLOGIES ULC
    Inventors: Thomas Daniel Perry, Steven Tovey, Mehdi Saeedi, Andrej Zdravkovic, Zhuo Chen
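The scheduling loop from the abstract, simplified: propose modified orderings of the render jobs, score each against a selected reward metric, and keep the best. Exhaustive search over permutations stands in for the reinforcement-learning policy here, and the job names and costs are invented.

```python
from itertools import permutations

# Hypothetical render jobs with per-job execution costs (ms).
jobs = {"shadow_pass": 3, "geometry_pass": 1, "post_fx": 2}

def reward(order):
    """Reward metric: negative sum of completion times. Running shorter
    jobs first lowers total completion time, so this favors
    shortest-job-first orderings."""
    t, total = 0, 0
    for job in order:
        t += jobs[job]
        total += t
    return -total

best = max(permutations(jobs), key=reward)
print(list(best))  # ['geometry_pass', 'post_fx', 'shadow_pass']
```

A reinforcement-learning scheduler replaces the brute-force search with a learned policy, which matters because the permutation space grows factorially with the number of jobs; the evaluate-against-a-reward-metric step is the same.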