Patents Examined by Nicholas R Wilson
  • Patent number: 11321800
    Abstract: A method for graphics processing. The method includes rendering graphics for an application using a plurality of graphics processing units (GPUs). The method includes dividing responsibility for rendering the geometry of the graphics between the plurality of GPUs based on a plurality of screen regions, each GPU having a corresponding division of the responsibility which is known to the plurality of GPUs. The method includes generating information regarding a piece of geometry with respect to a first screen region for which a first GPU has a first division of responsibility, while rendering the piece of geometry at a second GPU for an image. The method includes rendering the piece of geometry at the first GPU using the information.
    Type: Grant
    Filed: February 3, 2020
    Date of Patent: May 3, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Mark E. Cerny, Florian Strauss, Tobias Berghoff
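    A minimal Python sketch of the kind of screen-region responsibility split the abstract above describes; all names, the grid size, and the interleaved ownership rule are hypothetical and not taken from the patent. Each GPU owns a known set of screen regions, and a GPU rendering a piece of geometry can pass a hint to the GPU responsible for another region the geometry touches.
```python
# Hypothetical sketch (not the patented implementation): screen regions are
# statically assigned to GPUs, ownership is known to every GPU, and a GPU
# that renders a piece of geometry emits hints about regions owned by others.
from dataclasses import dataclass, field

NUM_GPUS = 4
REGIONS_PER_AXIS = 8  # screen divided into an 8x8 grid of regions (made up)


def region_owner(region_x: int, region_y: int) -> int:
    """Interleave region ownership across GPUs -- one possible static split."""
    return (region_x + region_y * REGIONS_PER_AXIS) % NUM_GPUS


@dataclass
class GeometryHint:
    """Information about a piece of geometry for another GPU's screen region."""
    piece_id: int
    region: tuple
    overlaps: bool


@dataclass
class Gpu:
    index: int
    inbox: list = field(default_factory=list)  # hints received from other GPUs

    def process_geometry(self, piece_id, covered_regions):
        """While rendering a piece, emit hints for regions owned by other GPUs."""
        hints = []
        for region in covered_regions:
            owner = region_owner(*region)
            if owner != self.index:
                hints.append((owner, GeometryHint(piece_id, region, True)))
        return hints


# GPU 1 renders piece 42, which also touches region (0, 0) owned by GPU 0.
gpus = [Gpu(i) for i in range(NUM_GPUS)]
for owner, hint in gpus[1].process_geometry(42, [(0, 0), (1, 0)]):
    gpus[owner].inbox.append(hint)
print(gpus[0].inbox)  # GPU 0 can now render piece 42 for its own region
```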
  • Patent number: 11307653
    Abstract: Various embodiments of an apparatus, methods, systems and computer program products described herein are directed to an Interaction Engine. According to various embodiments, the Interaction Engine generates within a unified three-dimensional (3D) coordinate space: (i) a virtual 3D model container; (ii) a virtual 3D medical model positioned according to a model pose within the virtual 3D model container; and (iii) a virtual 3D representation of at least a portion of at least one of a user's hands. The Interaction Engine renders an Augmented Reality (AR) display that includes concurrent display of the virtual 3D model container, the virtual 3D medical model and the virtual 3D representation of the user's hands. The Interaction Engine detects one or more physical gestures associated with the user and one or more types of virtual interactions associated with the detected physical gesture(s).
    Type: Grant
    Filed: March 5, 2021
    Date of Patent: April 19, 2022
    Assignee: Medivis, Inc.
    Inventors: Long Qian, Wenbo Lan, Christopher Morley
  • Patent number: 11308327
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for providing augmented reality content with a captured image in association with traveling, in accordance with some example embodiments. The program and method provide for receiving, by a messaging application running on a device of a user, a request to scan an image captured by a device camera; determining, in response to the receiving, a travel parameter associated with the request and an attribute of an object depicted in the image; obtaining supplemental content based on the travel parameter and on the attribute; and displaying an augmented reality content item, which includes the supplemental content, with the captured image.
    Type: Grant
    Filed: January 14, 2021
    Date of Patent: April 19, 2022
    Assignee: Snap Inc.
    Inventors: Virginia Drummond, Jean Luo, Alek Matthiessen, Celia Nicole Mourkogiannis
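    One way to read the scan-to-supplemental-content flow above, sketched in Python with entirely hypothetical function names and catalog entries; the patent does not publish an API.
```python
# Hypothetical sketch -- function names and the content catalog are invented,
# not taken from the patent or from any real messaging-application API.
def handle_scan_request(image, request_context):
    travel_parameter = infer_travel_parameter(request_context)  # e.g. "airport"
    attribute = classify_object(image)                          # e.g. "boarding pass"
    supplemental = lookup_supplemental_content(travel_parameter, attribute)
    return overlay(image, supplemental)


def infer_travel_parameter(ctx):
    # Stand-in: a real system might use location, itinerary, or time of year.
    return ctx.get("travel_parameter", "unknown")


def classify_object(image):
    # Stand-in for whatever image classifier determines the object attribute.
    return "boarding pass"


def lookup_supplemental_content(travel_parameter, attribute):
    catalog = {("airport", "boarding pass"): "gate and departure-time overlay"}
    return catalog.get((travel_parameter, attribute), "generic travel overlay")


def overlay(image, content):
    return {"base_image": image, "ar_item": content}


print(handle_scan_request("captured.jpg", {"travel_parameter": "airport"}))
```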
  • Patent number: 11295500
    Abstract: A mobile device comprises one or more processors, a display, and a camera configured to capture an image of a live scene. The one or more processors are configured to determine a location of the mobile device and display an augmented image based on the captured image. The augmented image includes at least a portion of the image of the live scene and a map including an indication of the determined location of the mobile device. The one or more processors are also configured to display the at least a portion of the image of the live scene in a first portion of the display and to display the map in a second portion of the display. The augmented image is updated as the mobile device is moved, and the map is docked to the second portion of the display as the augmented image is updated.
    Type: Grant
    Filed: June 24, 2021
    Date of Patent: April 5, 2022
    Assignee: QUALCOMM Incorporated
    Inventor: Arnold Jason Gum
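    A small Python sketch, with made-up values, of the split-display behavior above: the live camera view occupies one portion of the screen while the location map stays docked in the other, and both refresh as the device moves.
```python
# Hypothetical sketch: the live camera view stays in one portion of the
# display while the map remains docked in the other; both refresh per frame.
def compose_augmented_image(camera_frame, device_location):
    return {
        "first_portion": camera_frame,                    # live scene
        "second_portion": render_map(device_location),    # docked map
    }


def render_map(location):
    return f"map centered on {location}"


# Two consecutive updates as the device moves (coordinates are made up).
for frame, loc in [("frame_0", (37.77, -122.42)), ("frame_1", (37.78, -122.41))]:
    print(compose_augmented_image(frame, loc))
```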
  • Patent number: 11288874
    Abstract: A system and method are provided for robustly displaying augmented reality (AR) applications on a vehicle. The system has a mobile terminal that is configured to execute an AR application on the vehicle, wherein the AR application includes a display of AR contents on the vehicle using a three-dimensional grid. The mobile terminal has at least one sensor that is configured to capture physical sensor data relating to a position of the mobile terminal. The mobile terminal has a capture unit that is configured to capture a predefinable fixed point on the vehicle. The mobile terminal further includes a computing unit that is configured to evaluate the physical sensor data and the captured fixed point and to display the AR contents on the vehicle in a contact-analog manner.
    Type: Grant
    Filed: September 4, 2020
    Date of Patent: March 29, 2022
    Assignee: Bayerische Motoren Werke Aktiengesellschaft
    Inventor: Andreas Stroka
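    A rough Python sketch of the contact-analog anchoring idea above, assuming a simple rigid-transform model; the pose math and all values are illustrative only and are not taken from the patent.
```python
# Hypothetical sketch: evaluate the device pose from sensor data, express the
# captured fixed point in world coordinates, and anchor AR content to it so
# the content appears attached to the vehicle ("contact-analog").
import numpy as np


def device_pose_from_sensors(position, rotation):
    """4x4 pose of the mobile terminal in world coordinates (stand-in)."""
    pose = np.eye(4)
    pose[:3, :3] = rotation
    pose[:3, 3] = position
    return pose


def anchor_content(device_pose, fixed_point_in_device_coords, content_offset):
    """Place AR content relative to the vehicle fixed point, in world coords."""
    fixed_point_world = device_pose @ np.append(fixed_point_in_device_coords, 1.0)
    return fixed_point_world[:3] + content_offset


pose = device_pose_from_sensors(np.array([0.0, 0.0, 1.5]), np.eye(3))
print(anchor_content(pose, np.array([0.0, 0.0, 2.0]), np.array([0.1, 0.0, 0.0])))
```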
  • Patent number: 11288871
    Abstract: Example implementations described herein are directed to the transmission of hand information from a user hand or other object to a remote device via browser-to-browser connections, such that the hand or other object is oriented correctly on the remote device based on orientation measurements received from the remote device. Such example implementations can facilitate remote assistance in which the user of the remote device needs to view the hand or object movement as provided by an expert for guidance.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: March 29, 2022
    Assignee: FUJIFILM Business Innovation Corp.
    Inventors: Chelhwon Kim, Patrick Chiu, Yulius Tjahjadi, Donald Kimber, Qiong Liu
  • Patent number: 11288854
    Abstract: An information processing apparatus according to an aspect of the present technology includes an acquisition unit, a generation unit, and a generation control unit. The acquisition unit acquires an image of a target object. The generation unit is able to execute each of a first generation process and a second generation process different from the first generation process as a generation process of generating a model of the target object on the basis of the acquired image of the target object. The generation control unit controls switching of execution of the first generation process and execution of the second generation process by the generation unit.
    Type: Grant
    Filed: February 11, 2021
    Date of Patent: March 29, 2022
    Assignee: SONY CORPORATION
    Inventor: Masato Shimakawa
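    A minimal Python sketch of the switching behavior above: a control unit selects between two different model-generation processes for the same target-object image. The switching criterion is represented by a plain boolean because the abstract does not specify it.
```python
# Hypothetical sketch: a generation control unit switches between two model
# generation processes for the same acquired image of the target object.
def first_generation_process(image):
    return f"coarse model from {image}"


def second_generation_process(image):
    return f"detailed model from {image}"


def generate_model(image, use_second_process):
    # The abstract does not state the switching criterion; a boolean stands in
    # for whatever condition the generation control unit evaluates.
    process = second_generation_process if use_second_process else first_generation_process
    return process(image)


print(generate_model("object.png", use_second_process=False))
print(generate_model("object.png", use_second_process=True))
```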
  • Patent number: 11282250
    Abstract: The present disclosure describes systems and methods directed to updating pre-generated content by applying secondary effects associated with detected and/or collected environmental data corresponding to a display environment. In operation, a sensor device detects environmental data corresponding to a display environment. A computing device may identify a secondary effect corresponding to the detected environmental data. The secondary effect may be applied to pre-generated content to update the content, and the updated pre-generated content may be displayed on a display. Accordingly, systems and methods described herein enable an improved immersive viewing experience by incorporating features of a user's environment into pre-generated content.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: March 22, 2022
    Assignee: DISNEY ENTERPRISES, INC.
    Inventors: Alice J. Taylor, Steven M. Chapman, Alexandra J. Lewis Christiansen, Jackson Rogow
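    A short Python sketch, with hypothetical effect names, of the pipeline above: detected environmental data is mapped to a secondary effect, which is then applied to pre-generated content before display.
```python
# Hypothetical sketch: a sensor reading from the display environment selects
# a secondary effect, which is applied to pre-generated content before display.
def identify_secondary_effect(environment_reading):
    effects = {"dim_room": "deepen shadows", "loud_room": "boost ambient audio"}
    return effects.get(environment_reading, "no-op")


def apply_effect(content, effect):
    return f"{content} + {effect}"


sensor_reading = "dim_room"  # detected by a sensor device in the viewer's environment
print(apply_effect("pre-rendered scene", identify_secondary_effect(sensor_reading)))
```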
  • Patent number: 11282285
    Abstract: Embodiments described herein provide approaches for enabling visual location of a real-world object. Specifically, an object location service is initiated in a service orchestration layer of a 5G telecom network in response to a request from a user corresponding to the real-world object. This object location service collects a stream of three-dimensional location coordinates from both the user's device and the real-world object's device. Based on these sets of location coordinates, the object location service calculates a continuously updated three-dimensional vector from the user to the real-world object. The object location service uses this continuously updated three-dimensional vector to apply an augmented reality indicator that is continuously updated in real time to the real-world object on a display of the user equipment (UE) device corresponding to the user.
    Type: Grant
    Filed: January 8, 2021
    Date of Patent: March 22, 2022
    Assignee: International Business Machines Corporation
    Inventors: Craig M. Trim, Kimberly Greene Starks, Gandhi Sivakumar, Kushal S. Patel, Sarvesh S. Patel
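    The continuously updated user-to-object vector above reduces to a repeated 3D subtraction; a minimal Python sketch with made-up coordinates follows.
```python
# Hypothetical sketch: subtract streamed user coordinates from streamed
# object coordinates to keep the AR indicator's direction and range current.
import numpy as np


def user_to_object_vector(user_xyz, object_xyz):
    return np.asarray(object_xyz, dtype=float) - np.asarray(user_xyz, dtype=float)


# Two consecutive coordinate samples (values are made up).
for user, obj in [((0, 0, 0), (3, 4, 0)), ((1, 0, 0), (3, 4, 0))]:
    v = user_to_object_vector(user, obj)
    print("direction:", v, "distance:", np.linalg.norm(v))
```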
  • Patent number: 11275946
    Abstract: Receiving data recorded during a remotely-assisted augmented reality session held between a remote user and a local user, the data including: drawn graphic annotations that are associated with locations in a 3D model representing a physical scene adjacent the local user, and a transcript of a conversation between the remote and local users. Generating at least one candidate label for each location, each candidate label being textually descriptive of a physical entity that is located, in the physical scene, at a location corresponding to the respective location in the 3D model. The generation of each candidate label includes: for each graphic annotation, automatically analyzing the transcript to detect at least one potential entity name that was mentioned, by the remote and/or local user, temporally adjacent the drawing of the respective graphic annotation. Accepting or rejecting each candidate label, to define it as a true label of the respective physical entity.
    Type: Grant
    Filed: September 15, 2020
    Date of Patent: March 15, 2022
    Assignee: International Business Machines Corporation
    Inventors: Adi Raz Goldfarb, Erez Lev Meir Bilgory
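    A small Python sketch of the candidate-label step above, using an invented transcript, entity list, and time window (the patent does not state a specific threshold): entity names mentioned temporally adjacent to an annotation become candidate labels for that annotation's location.
```python
# Hypothetical sketch: entity names mentioned within a time window around the
# drawing of an annotation become candidate labels for that location.
ANNOTATION = {"id": 1, "drawn_at": 12.0, "location": (0.4, 0.2, 1.1)}
TRANSCRIPT = [
    (10.5, "remote", "tighten the intake valve there"),
    (30.0, "local", "ok, moving on to the panel"),
]
ENTITY_NAMES = {"intake valve", "panel"}
WINDOW_SECONDS = 5.0  # "temporally adjacent"; the real threshold is not stated


def candidate_labels(annotation):
    labels = []
    for timestamp, _speaker, utterance in TRANSCRIPT:
        if abs(timestamp - annotation["drawn_at"]) <= WINDOW_SECONDS:
            labels.extend(name for name in ENTITY_NAMES if name in utterance)
    return labels


print(candidate_labels(ANNOTATION))  # ['intake valve'] -> accept or reject as the true label
```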
  • Patent number: 11270491
    Abstract: A computer system receives user selection of an avatar story template. User-specific parameters relating to the user are determined and real-time data, based at least in part on the user-specific parameters, is retrieved. Specific media or digital assets are obtained based on at least one of the real-time data and the user-specific parameters. An avatar story is then generated by combining the avatar story template and the specific media or digital assets. The avatar story is then displayed on a display of a computing device.
    Type: Grant
    Filed: June 24, 2021
    Date of Patent: March 8, 2022
    Assignee: Snap Inc.
    Inventors: Andrés Monroy-Hernández, Yu Jiang Tham
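    A minimal Python sketch, with hypothetical template and data, of the composition step above: user-specific parameters drive retrieval of real-time data and assets, which are combined with the story template.
```python
# Hypothetical sketch: a selected template is filled with real-time data and
# assets chosen from user-specific parameters.
def generate_avatar_story(template, user_params):
    realtime = fetch_realtime_data(user_params)    # stand-in for a live lookup
    assets = select_assets(realtime, user_params)  # stand-in for asset selection
    return template.format(**user_params, **realtime, **assets)


def fetch_realtime_data(user_params):
    return {"weather": "rainy"}


def select_assets(realtime, user_params):
    return {"asset": "umbrella_avatar.png"}


template = "{name}'s avatar braves the {weather} day ({asset})"
print(generate_avatar_story(template, {"name": "Alex"}))
```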
  • Patent number: 11270522
    Abstract: An exemplary method includes an augmented reality system acquiring an image of an event from a viewpoint of a camera of a computing device in proximity to the event, the event including a performance area; identifying at least part of the performance area of the event within the image of the event; determining a three-dimensional (3D) pose of the performance area of the event within the image of the event; and providing, for concurrent display by a display device of the computing device, the image of the event and augmented reality content that is oriented according to the 3D pose of the performance area of the event within the image of the event.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: March 8, 2022
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Zachary Tauber, Herve Bizira, Viktor Kyriazakos, Christos Papapavlou, Sean McCall, Guy Dassa, Jeffrey Scholz, Praveen Mareedu, Xingyue Zhou, Haoxin Guo
  • Patent number: 11237392
    Abstract: A system and method for displaying a set of indicator coordinates in relation to a building information model includes a system administrator, a registration base, and a headset unit. The headset includes a display and a set of stereo cameras, and is initially placed on the registration base. The cameras capture an image of the construction site and overlay an image which includes the indicator coordinates. A position of the headset relative to the registration base is determined from motion sensors. A set of indicator coordinates is downloaded and projected to the display based on the position and orientation of the headset.
    Type: Grant
    Filed: March 29, 2021
    Date of Patent: February 1, 2022
    Inventor: Timothy A. Cummings
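    A rough Python sketch of the tracking idea above, assuming simple translational dead reckoning from the registration base; the class and values are illustrative, not from the patent.
```python
# Hypothetical sketch: the headset starts on the registration base, motion
# sensors accumulate its displacement, and downloaded indicator coordinates
# are expressed relative to the headset for projection onto the display.
import numpy as np


class HeadsetTracker:
    def __init__(self, base_position):
        self.position = np.asarray(base_position, dtype=float)  # starts on the base

    def apply_motion(self, delta):
        """Accumulate a translation reported by the motion sensors."""
        self.position += np.asarray(delta, dtype=float)

    def indicator_in_headset_frame(self, indicator_world):
        """Indicator coordinates expressed relative to the headset."""
        return np.asarray(indicator_world, dtype=float) - self.position


tracker = HeadsetTracker(base_position=(0.0, 0.0, 0.0))
tracker.apply_motion((2.0, 0.0, 1.0))  # wearer walks away from the base
print(tracker.indicator_in_headset_frame((5.0, 0.0, 1.0)))  # [3. 0. 0.]
```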
  • Patent number: 11231895
    Abstract: An electronic device may include: a foldable housing; at least one sensor; a first display having a first size and a first pixel density; a second display having a second size and a second pixel density; a processor; and a memory. The memory may store instructions that, when executed, cause the processor to control the electronic device to: display, through the first display, at least one first surface image generated based on a window having the first size; change the size of the window to a third size based on information associated with the first display and the second display, in response to detecting a first event through the at least one sensor while the at least one first surface image is displayed; and display, through the second display, at least one second surface image generated based on the window whose size has been changed to the third size.
    Type: Grant
    Filed: February 19, 2020
    Date of Patent: January 25, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Seonghoon Choi, Yongjin Kwon, Jeongwon Yang, Byungseok Jung, Il Jung, Jinwan An
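    The window resizing above amounts to rescaling between displays of different size and pixel density; a small Python sketch with purely illustrative numbers follows.
```python
# Hypothetical sketch with illustrative numbers only: resize a window from
# the first display to a third size appropriate for the second display by
# keeping its physical dimensions constant across pixel densities.
def rescale_window(window_px, src_dpi, dst_dpi):
    width_in = window_px[0] / src_dpi   # convert to inches on the first display
    height_in = window_px[1] / src_dpi
    return (round(width_in * dst_dpi), round(height_in * dst_dpi))


# An 840x600 px window on a 420 dpi display maps to 760x543 px at 380 dpi.
print(rescale_window((840, 600), src_dpi=420, dst_dpi=380))
```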
  • Patent number: 11232639
    Abstract: Systems, methods, devices, and other techniques for placing and rendering virtual objects in three-dimensional environments. The techniques include providing, by a device, a view of an environment of a first user. A first computing system associated with the first user receives an instruction to display, within the view of the environment of the first user, a virtual marker at a specified position of the environment of the first user, the specified position derived from a second user's interaction with a three-dimensional (3D) model of at least a portion of the environment of the first user. The device displays, within the view of the environment of the first user, the virtual marker at the specified position of the environment of the first user.
    Type: Grant
    Filed: August 5, 2020
    Date of Patent: January 25, 2022
    Assignee: Accenture Global Solutions Limited
    Inventors: Matthew Thomas Short, Sunny Webb, Joshua Opel, Theo E. Christensen
  • Patent number: 11232621
    Abstract: Systems and methods are provided for enhanced animation generation based on conditional modeling. An example method includes accessing an autoencoder trained based on poses and conditional information associated with the poses, each pose being defined based on location information associated with joints, and the conditional information for each pose reflecting the poses that precede it. The autoencoder is trained to reconstruct, via a latent variable space, each pose based on the conditional information. Poses in a sequence of poses are obtained via an interactive user interface, and the latent variable space is sampled. An output pose is generated based on the sampling, the output pose being included in the interactive user interface.
    Type: Grant
    Filed: April 6, 2020
    Date of Patent: January 25, 2022
    Assignee: Electronic Arts Inc.
    Inventors: Elaheh Akhoundi, Fabio Zinno
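    A minimal Python sketch of the inference-time use described above: sample the latent variable space and decode it together with conditional information (here, the previous pose) into an output pose. The decoder weights below are random stand-ins; a real system would load the trained conditional autoencoder.
```python
# Hypothetical sketch: decode a sample from the latent variable space together
# with conditional information (the previous pose) into an output pose. The
# linear "decoder" below uses random stand-in weights; a real system would
# load the trained conditional autoencoder instead.
import numpy as np

NUM_JOINTS, LATENT_DIM = 4, 3
rng = np.random.default_rng(0)
W_latent = rng.normal(size=(NUM_JOINTS * 3, LATENT_DIM)) * 0.1    # stand-in weights
W_cond = rng.normal(size=(NUM_JOINTS * 3, NUM_JOINTS * 3)) * 0.1  # stand-in weights


def decode(latent_sample, prior_pose):
    """Map a latent sample plus conditional info (the prior pose) to a pose."""
    flat = W_latent @ latent_sample + W_cond @ prior_pose.ravel()
    return flat.reshape(NUM_JOINTS, 3)


prior_pose = np.zeros((NUM_JOINTS, 3))  # joint locations of the previous frame
z = rng.standard_normal(LATENT_DIM)     # sample the latent variable space
next_pose = decode(z, prior_pose)
print(next_pose.shape)                  # (4, 3): one new pose for the sequence
```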
  • Patent number: 11232641
    Abstract: Methods, computer program products, and systems are presented. The methods, computer program products, and systems can include, for instance: obtaining virtual image data representing a virtual object; and encoding the virtual image data with physical image data to provide a formatted image file, wherein the encoding includes, for a plurality of spatial image elements, providing one or more data fields that specify physical image information and one or more data fields that specify virtual image information based on the virtual image data, so that the formatted image file provides physical image information and virtual image information for each of the plurality of spatial image elements, and wherein the encoding includes providing indexing data that associates an identifier for the virtual object with the spatial image elements for the virtual object.
    Type: Grant
    Filed: October 5, 2020
    Date of Patent: January 25, 2022
    Assignee: Wayfair LLC
    Inventors: David C. Bastian, Aaron K. Baughman, Nicholas A. McCrory, Todd R. Whitman
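    A small Python sketch of the encoding described above, using an invented in-memory layout rather than any real file format: each spatial image element carries a physical-image field and a virtual-image field, and indexing data maps a virtual-object identifier to the elements it occupies.
```python
# Hypothetical in-memory layout, not a real file format: every spatial image
# element stores a physical field and a virtual field, and indexing data maps
# a virtual-object identifier to the elements that object occupies.
def encode(physical_pixels, virtual_pixels, virtual_object_id):
    elements = []
    index = {virtual_object_id: []}
    for i, (phys, virt) in enumerate(zip(physical_pixels, virtual_pixels)):
        elements.append({"physical": phys, "virtual": virt})
        if virt is not None:
            index[virtual_object_id].append(i)
    return {"elements": elements, "index": index}


formatted = encode(
    physical_pixels=[(10, 10, 10), (200, 0, 0), (200, 0, 0)],
    virtual_pixels=[None, (0, 255, 0), (0, 255, 0)],  # virtual object covers elements 1-2
    virtual_object_id="sofa_01",
)
print(formatted["index"])  # {'sofa_01': [1, 2]}
```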
  • Patent number: 11222468
    Abstract: In one embodiment, a method includes instructing, at a first time, a camera with multiple pixel sensors to capture a first image of an environment comprising an object to determine a first object pose of the object. Based on the first object pose, the method determines a predicted object pose of the object at a second time. The method determines a predicted camera pose of the camera at the second time. The method generates pixel-activation instructions based on a projection of a 3D model of the object having the predicted object pose onto a virtual image plane associated with the predicted camera pose. The method instructs, at the second time, the camera to use a subset of its pixel sensors to capture a second image of the environment according to the pixel-activation instructions. The method determines, based on the second image, a second object pose of the object.
    Type: Grant
    Filed: November 2, 2020
    Date of Patent: January 11, 2022
    Assignee: Facebook Technologies, LLC.
    Inventors: Steven John Lovegrove, Richard Andrew Newcombe, Andrew Samuel Berkovich, Lingni Ma, Chao Li
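    A rough Python sketch of the pixel-activation step above, assuming a simple pinhole camera; the focal length, resolution, and model points are made up.
```python
# Hypothetical sketch: project 3D model points placed at the predicted object
# pose through the predicted camera pose, and activate only the pixel sensors
# that the projection covers for the second capture.
import numpy as np

FOCAL, WIDTH, HEIGHT = 100.0, 64, 64  # made-up pinhole camera parameters


def project(point_cam):
    """Pinhole projection of a camera-space point to pixel coordinates."""
    x, y, z = point_cam
    return int(FOCAL * x / z + WIDTH / 2), int(FOCAL * y / z + HEIGHT / 2)


def pixel_activation_mask(model_points_world, world_to_camera):
    mask = np.zeros((HEIGHT, WIDTH), dtype=bool)
    for p in model_points_world:
        cam = world_to_camera @ np.append(p, 1.0)
        u, v = project(cam[:3])
        if 0 <= u < WIDTH and 0 <= v < HEIGHT:
            mask[v, u] = True  # only these sensors capture the second image
    return mask


points = [np.array([0.0, 0.0, 2.0]), np.array([0.1, 0.0, 2.0])]
print(pixel_activation_mask(points, np.eye(4)).sum(), "pixel sensors activated")
```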
  • Patent number: 11222475
    Abstract: Disclosed herein is software technology that leverages improved AR technology to facilitate presentation of virtual content overlaid on a view of a real-world environment. Also disclosed herein is an "insights" software application that functions to provide insights about the real-world environment. In one aspect, disclosed herein is a method that involves an AR-enabled device that includes one or more sensors, a user input interface, and a display screen, and that is configured to (1) based on user input, determine an initial position and orientation of the computing device within a virtual 3D model of a real-world environment; (2) align the virtual 3D model of the real-world environment with the real-world environment; and (3) cause the display screen to present the aligned virtual 3D model as overlaid virtual content on a view of the real-world environment.
    Type: Grant
    Filed: July 2, 2020
    Date of Patent: January 11, 2022
    Assignee: Procore Technologies, Inc.
    Inventors: Kevin McKee, Jon Hoover, Christopher Bindloss, David McCool, Winson Chu, Christopher Myers
  • Patent number: 11217003
    Abstract: Systems and methods are provided for enhanced pose generation based on conditional modeling of inverse kinematics. An example method includes accessing an autoencoder trained based on poses, with each pose being defined based on location information of joints, and the autoencoder being trained based on conditional information indicating positions of a subset of the joints. The autoencoder is trained to reconstruct, via a latent variable space, each pose based on the conditional information. Information specifying positions of the subset of the joints is obtained via an interactive user interface and the latent variable space is sampled. An output is generated for inclusion in the interactive user interface based on the sampling and the positions.
    Type: Grant
    Filed: April 30, 2020
    Date of Patent: January 4, 2022
    Assignee: Electronic Arts Inc.
    Inventors: Elaheh Akhoundi, Fabio Zinno