Patents Examined by Nicholas R Wilson
  • Patent number: 11210835
    Abstract: A computer generated (CG) hair groom for a virtual character can include strand-based (also referred to as instanced) hair in which many thousands of digital strands represent real human hair strands. Embodiments of systems and methods for transferring CG hair groom data from a first (or source) virtual character to a second (or target) virtual character are provided. Some embodiments can factor in a difference between a hairline of the first virtual character and a hairline of the second virtual character to improve the overall appearance or fit of the hair groom on the second virtual character.
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: December 28, 2021
    Assignee: Magic Leap, Inc.
    Inventor: Takashi Kuribayashi
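The hairline-offset idea in the abstract above can be illustrated with a toy sketch. Everything here is an assumption for illustration (strands as lists of 3D points, hairlines reduced to single anchor points); it is not Magic Leap's actual method.

```python
def transfer_strands(strands, source_hairline, target_hairline):
    """Transfer strand-based hair from a source to a target character by
    offsetting every strand point by the difference between the two
    hairline anchor points (a crude stand-in for a full hairline fit)."""
    dx = [t - s for s, t in zip(source_hairline, target_hairline)]
    moved = []
    for strand in strands:
        moved.append([tuple(c + d for c, d in zip(point, dx))
                      for point in strand])
    return moved
```

A real groom-transfer pipeline would warp strands over the target scalp surface rather than apply a single rigid offset; the sketch only shows where the hairline difference enters the computation.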
  • Patent number: 11202036
    Abstract: A merged reality system comprises servers in a cloud-to-edge infrastructure configured to store and process data and models of virtual replicas of real-world elements that provide self-computing capabilities and autonomous behavior to the virtual replicas. The data and models are input through a plurality of software platforms, software engines, and sensors connected to things and user devices. The server is further configured to merge the real and virtual data and models in order to augment the real data with the virtual data. A method thereof comprises mapping the real world into a virtual world; generating virtual replicas of the real world; adding models and data of the virtual replicas; connecting the virtual replicas to corresponding real elements in order to enrich and synchronize the virtual replicas with the real-world elements; merging the real and virtual data; and augmenting the real data with the virtual data.
    Type: Grant
    Filed: June 16, 2020
    Date of Patent: December 14, 2021
    Assignee: THE CALANY Holding S. À R.L.
    Inventor: Cevat Yerli
  • Patent number: 11200736
    Abstract: Systems and methods for synthesizing an image of the face by a head-mounted device (HMD) are disclosed. The HMD may not be able to observe a portion of the face. The systems and methods described herein can generate a mapping from a conformation of the portion of the face that is not imaged to a conformation of the portion of the face observed. The HMD can receive an image of a portion of the face and use the mapping to determine a conformation of the portion of the face that is not observed. The HMD can combine the observed and unobserved portions to synthesize a full face image.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: December 14, 2021
    Assignee: Magic Leap, Inc.
    Inventor: Adrian Kaehler
  • Patent number: 11189095
    Abstract: Systems and methods include determination of a first component of a set of components under assembly in a physical environment, determination of a first physical position of a user with respect to the first component in the physical environment, determination of a second component of the set of components under assembly based on assembly information associated with the set of components, determination of three-dimensional surface data of the second component, determination of a physical relationship between the first component and the second component based on a model associated with the set of components, determination of a graphical representation of the second component based on the first physical position of the user with respect to the first component, the physical relationship between the first component and the second component, and the three-dimensional surface data of the second component, and presentation of the graphical representation to the user in a view including the first component in the physical environment.
    Type: Grant
    Filed: January 5, 2021
    Date of Patent: November 30, 2021
    Assignee: SAP SE
    Inventor: Stephan Kohlhoff
  • Patent number: 11176637
    Abstract: A method for providing imagery to a user on a display includes receiving eye tracking data. The method also includes determining a gaze location on the display and at least one of a confidence factor of the gaze location, or a speed of the change of the gaze location using the eye tracking data. The method also includes establishing multiple tiles using the gaze location and at least one of the confidence factor or the speed of the change of the gaze location. The method also includes providing a foveated rendered image using the multiple tiles.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: November 16, 2021
    Assignee: FACEBOOK TECHNOLOGIES, LLC
    Inventors: Behnam Bastani, Tianxin Ning, Haomiao Jiang
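The tile-based foveation described above can be sketched in a few lines. The radius formula and LOD thresholds below are made up for illustration; the patent's point is only that lower gaze confidence or faster gaze motion should widen the full-resolution region.

```python
import math

def tile_lod(tile_center, gaze, confidence, gaze_speed):
    """Pick a level of detail for one tile: farther from the gaze point,
    lower tracking confidence, or faster gaze motion all push toward
    coarser rendering (higher LOD number)."""
    dist = math.dist(tile_center, gaze)
    # Widen the full-resolution region when the gaze estimate is uncertain
    # or moving quickly, so foveation artifacts stay outside the fovea.
    radius = 0.1 + 0.2 * (1.0 - confidence) + 0.05 * gaze_speed
    if dist <= radius:
        return 0      # full resolution
    elif dist <= 3 * radius:
        return 1      # half resolution
    return 2          # quarter resolution
```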
  • Patent number: 11176757
    Abstract: An augmented reality (AR) display device can display a virtual assistant character that interacts with the user of the AR device. The virtual assistant may be represented by a robot (or other) avatar that assists the user with contextual objects and suggestions depending on what virtual content the user is interacting with. Animated images may be displayed above the robot's head to display its intents to the user. For example, the robot can run up to a menu and suggest an action and show the animated images. The robot can materialize virtual objects that appear on its hands. The user can remove such an object from the robot's hands and place it in the environment. If the user does not interact with the object, the robot can dematerialize it. The robot can rotate its head to keep looking at the user and/or an object that the user has picked up.
    Type: Grant
    Filed: October 1, 2020
    Date of Patent: November 16, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Kristofer Ryan Whitney, Andrew Moran, Danielle Marie Price, Jonathan Wells Mangagil, Minal Luxman Kalkute
  • Patent number: 11170666
    Abstract: A dental treatment training apparatus allows practice of complex treatments involving high accuracy and skill levels. A dental treatment training apparatus for providing a simulated treatment in which a treatment instrument is applied onto a tooth model includes a display unit that displays, in a superimposed manner, 3D image information based on predefined 3D information about the tooth model and the treatment instrument on a 3D view image, a position detector that detects 3D positional information about the tooth model, the treatment instrument, and the display unit, and a control unit that causes the display unit to display, in a superimposed manner, 3D image information corresponding to an item selected on a superimposed selection operation display for allowing selection of an item associated with the simulated treatment based on the 3D positional information detected by the position detector.
    Type: Grant
    Filed: September 12, 2019
    Date of Patent: November 9, 2021
    Assignee: J.MORITA CORPORATION
    Inventors: Tsutomu Kubota, Gaku Yoshimoto, Toshitaka Sekioka, Tomohisa Takagi
  • Patent number: 11170550
    Abstract: A retargeting engine automatically performs a retargeting operation. The retargeting engine generates an anatomical local model of a digital character based on performance capture data and/or a 3D model of the digital character. The anatomical local model includes an anatomical model corresponding to internal features of the digital character and a local model corresponding to external features of the digital character. The retargeting engine includes a Machine Learning model that maps a set of locations associated with the face of a performer to a corresponding set of locations associated with the face of the digital character. The retargeting engine includes a solver that modifies a set of parameters associated with the anatomical local model to cause the digital character to exhibit one or more facial expressions enacted by the performer, thereby retargeting those facial expressions onto the digital character.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: November 9, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Derek Edward Bradley, Dominik Thabo Beeler
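The landmark-mapping stage of the retargeting pipeline above can be caricatured as follows. The patent uses a trained machine-learning model; this sketch stands in a fixed affine transform, and the scale and offset values are invented for illustration.

```python
def retarget_landmarks(performer_pts, scale=1.2, offset=(0.0, 0.05)):
    """Map 2D performer face landmarks onto a character's face proportions
    with a hypothetical affine transform (a placeholder for the learned
    performer-to-character mapping)."""
    return [(x * scale + offset[0], y * scale + offset[1])
            for x, y in performer_pts]
```

The solver in the patent would then adjust the anatomical local model's parameters until the character's landmarks match these retargeted positions.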
  • Patent number: 11164385
    Abstract: A method for establishing a virtual reality (VR) call between a caller VR device and a callee VR device includes determining which of the caller VR device or the callee VR device should perform a stitching operation associated with the VR call, based on a first plurality of parameters associated with the callee VR device and a second plurality of parameters associated with the caller VR device, and causing transmission of one of a plurality of media contents or a stitched media content from the caller VR device to the callee VR device after establishment of the VR call, based on the determining.
    Type: Grant
    Filed: November 4, 2019
    Date of Patent: November 2, 2021
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Praveen Chebolu, Varun Bharadwaj Santhebenur Vasudevamurthy, Srinivas Chinthalapudi, Tushar Vrind, Abhishek Bhan, Nila Rajan
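The stitcher-selection step above amounts to comparing capability parameters of the two endpoints. A minimal sketch, with hypothetical parameter names and weights (the patent does not disclose a specific scoring formula):

```python
def choose_stitcher(caller, callee):
    """Decide which endpoint performs the panorama-stitching operation for
    a VR call, by scoring each device's capability parameters.
    Ties go to the caller."""
    def score(dev):
        # Weight compute capability most heavily; all fields assumed.
        return 2 * dev["compute"] + dev["battery"] + dev["bandwidth"]
    return "caller" if score(caller) >= score(callee) else "callee"
```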
  • Patent number: 11158102
    Abstract: Embodiments of the present disclosure provide a method and apparatus for processing information. A method may include: generating voice response information based on voice information sent by a user; generating a phoneme sequence based on the voice response information; generating mouth movement information based on the phoneme sequence, the mouth movement information being used for controlling a mouth movement of a displayed three-dimensional human image when playing the voice response information; and playing the voice response information, and controlling the mouth movement of the three-dimensional human image based on the mouth movement information.
    Type: Grant
    Filed: October 30, 2019
    Date of Patent: October 26, 2021
    Assignee: Beijing Baidu Netcom Science and Technology Co., Ltd.
    Inventors: Xiao Liu, Fuqiang Lyu, Jianxiang Wang, Jianchao Ji
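The phoneme-to-mouth-movement step described above is commonly implemented as a phoneme-to-viseme lookup plus timing. The tiny table and 40 ms frame duration below are assumptions for illustration; real systems use far larger viseme inventories and coarticulation smoothing.

```python
# Hypothetical phoneme-to-viseme table (mouth shapes for the 3D avatar).
VISEME = {"AA": "open", "M": "closed", "F": "teeth-on-lip", "SIL": "rest"}

def mouth_movements(phonemes, frame_ms=40):
    """Turn a phoneme sequence into a timed list of mouth shapes that can
    drive the displayed three-dimensional human image during playback."""
    timeline, t = [], 0
    for ph in phonemes:
        timeline.append({"t_ms": t, "shape": VISEME.get(ph, "rest")})
        t += frame_ms
    return timeline
```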
  • Patent number: 11151766
    Abstract: Systems and methods for displaying a virtual character in a mixed reality environment are disclosed. In some embodiments, a view of the virtual character is based on an animation rig comprising primary joints and helper joints. The animation rig may be in a pose defined by spatial relationships between the primary joints and helper joints. The virtual character may be moving in the mixed reality environment. In some instances, the virtual character may be moving based on a comparison of interestingness values associated with elements in the mixed reality environment. The spatial relationship transformation associated with the movement may be indicated by movement information. In some embodiments, the movement information is received from a neural network.
    Type: Grant
    Filed: June 5, 2020
    Date of Patent: October 19, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Thomas Marshall Miller, IV, Nafees Bin Zafar, Sean Michael Comer, James Jonathan Bancroft
  • Patent number: 11151797
    Abstract: An electronic device (1) is configured to obtain an image captured with a camera, determine a location for a sensor, determine a detection zone of the sensor in relation to the image based on the location determined for the sensor, and display the image, a virtual representation (54) of the sensor and a virtual representation (55) of the detection zone superimposed over the image. The electronic device is configured to allow a user to specify or adapt at least one property for the sensor (17). This at least one property includes the location for the sensor and may further include the orientation and/or the settings of the sensor.
    Type: Grant
    Filed: April 1, 2019
    Date of Patent: October 19, 2021
    Assignee: SIGNIFY HOLDING B.V.
    Inventors: Berent Willem Meerbeek, Dirk Valentinus René Engelen, Jochen Renaat Van Gheluwe, Bartel Marinus Van De Sluis, Anthonie Hendrik Bergman
  • Patent number: 11151801
    Abstract: An electronic device includes a communication module, a display, and at least one processor operatively connected with the communication module and the display. The at least one processor may be configured to receive an augmented reality image, via the communication module, from at least one external device which performs wireless communication with the communication module, display a running screen of an application associated with the augmented reality image on the display, determine whether an object associated with input information input to the electronic device is included in the augmented reality image, and display an additional object on the object based at least in part on the input information. Various other embodiments recognized through the specification are also possible.
    Type: Grant
    Filed: August 5, 2019
    Date of Patent: October 19, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Stephanie Kim Ahn
  • Patent number: 11145133
    Abstract: An illustrative volumetric capture system accesses a two-dimensional (“2D”) image captured by a capture device and depicting a first subject of a particular subject type. The volumetric capture system generates a custom three-dimensional (“3D”) model of the first subject by identifying a parameter representative of a characteristic of the first subject, applying the parameter to a parametric 3D model to generate a custom mesh, and applying a custom texture based on the 2D image to the custom mesh. The volumetric capture system also accesses a motion capture video depicting motion performed by a second subject of the particular subject type. Based on the motion capture video, the volumetric capture system animates the custom 3D model of the first subject to cause the custom 3D model to perform the motion performed by the second subject. Corresponding methods and systems are also disclosed.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: October 12, 2021
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Daniel Kopeinigg, Solmaz Hajmohammadi, Sourabh Khire, Trevor Howarth
  • Patent number: 11144197
    Abstract: An electronic device according to certain embodiments may include: a camera module, a display including a touch panel, a processor, and memory including instructions, wherein the instructions are executable by the processor to cause the electronic device to: acquire a first image using the camera module, display the acquired first image through the display, receive a user input to the touch panel designating a partial area of the displayed first image, generate a second image from the first image by processing image information included in the designated partial area using a function associated with a gesture included in the user input, and display the generated second image through the display.
    Type: Grant
    Filed: January 22, 2020
    Date of Patent: October 12, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sungoh Kim, Prushinskiy Valeriy, Hyungsok Yeo, Junghyeon Kim, Hyunhee Park, Kihuk Lee, Jungeun Lee
  • Patent number: 11138435
    Abstract: Empowered by augmented reality (AR) technologies, the present disclosure allows a user to display virtual content in a physical reality and turn an AR-ready handheld mobile device into a dimension measuring tool. The present disclosure allows the user to first display a virtual container asset, with its actual size in physical reality, in any given configuration, and then create a virtual dimensional equivalent of an item-to-be-fit based on dimensional data captured by a 6-degree-of-freedom (6DoF) device or the like. Finally, the user can place the virtual item into the virtual container to evaluate the capacity and fit in the given configuration.
    Type: Grant
    Filed: May 1, 2020
    Date of Patent: October 5, 2021
    Assignee: Volvo Car Corporation
    Inventors: Qinzi Tan, Garrett Gonzales, Caitlyn Mowry
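The capacity-and-fit evaluation above reduces, in its simplest form, to checking whether the virtual item's bounding box fits the virtual container in some axis-aligned orientation. This sketch assumes both are given as (width, height, depth) triples; the patent's AR placement is of course richer than a box test.

```python
from itertools import permutations

def fits(item_dims, container_dims):
    """Check whether an item of dimensions (w, h, d) fits inside a
    container, trying all six axis-aligned rotations of the item."""
    return any(all(i <= c for i, c in zip(p, container_dims))
               for p in permutations(item_dims))
```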
  • Patent number: 11132168
    Abstract: A display method for displaying a plurality of images on a display includes: generating each image with a texture having a resolution corresponding to the size of the image displayed on the display, the texture being selected from a texture memory storing a plurality of textures having different resolutions for the same image; and, when storing a plurality of new textures having different resolutions into the texture memory to generate and display the image with a new texture, if the texture memory does not have sufficient free space to store the new textures, deleting textures from the texture memory in a lexicographic order, starting from the texture having an old history of use and the texture having a high resolution, and storing the new textures into the free space secured in the texture memory.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: September 28, 2021
    Inventor: Yusuke Yamada
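The eviction policy above, that is, deleting in lexicographic order of (oldest use, highest resolution) until the new textures fit, can be sketched directly. The dictionary fields and units are assumptions for illustration.

```python
def evict_for(textures, needed, capacity):
    """Evict cached textures until 'needed' bytes of new textures fit
    within 'capacity'. textures: dicts with 'last_use' (timestamp),
    'res' (resolution) and 'size' (bytes). Eviction order is
    lexicographic: oldest last_use first, then highest resolution."""
    used = sum(t["size"] for t in textures)
    order = sorted(textures, key=lambda t: (t["last_use"], -t["res"]))
    evicted = []
    for t in order:
        if used + needed <= capacity:
            break
        evicted.append(t)
        used -= t["size"]
    return evicted
```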
  • Patent number: 11127223
    Abstract: Various implementations or examples set forth a method for scanning a three-dimensional (3D) environment. The method includes generating, based on sensor data captured by a depth sensor on a device, a 3D mesh representing a physical space; dividing the 3D mesh into a plurality of sub-meshes, wherein each of the plurality of sub-meshes comprises a corresponding set of vertices and a corresponding set of faces comprising edges between pairs of vertices; determining that at least a portion of a first sub-mesh in the plurality of sub-meshes is in a current frame captured by an image sensor on the device; and updating the 3D mesh by texturing the at least a portion of the first sub-mesh with one or more pixels in the current frame onto which the first sub-mesh is projected.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: September 21, 2021
    Assignee: SPLUNK INC.
    Inventors: Devin Bhushan, Seunghee Han, Caelin Thomas Jackson-King, Jamie Kuppel, Stanislav Yazhenskikh, Jim Jiaming Zhu
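The mesh-division step above can be approximated by bucketing faces into spatial cells, a simplified stand-in for however the patent actually partitions the scanned mesh.

```python
def split_mesh(vertices, faces, cell=1.0):
    """Divide a 3D mesh into sub-meshes by assigning each face to the grid
    cell containing its centroid. vertices: list of (x, y, z); faces:
    index triples into vertices. Returns {cell_key: [faces]}."""
    subs = {}
    for face in faces:
        cx = sum(vertices[i][0] for i in face) / len(face)
        cy = sum(vertices[i][1] for i in face) / len(face)
        cz = sum(vertices[i][2] for i in face) / len(face)
        key = (int(cx // cell), int(cy // cell), int(cz // cell))
        subs.setdefault(key, []).append(face)
    return subs
```

Per the abstract, only the sub-meshes that project into the current camera frame would then be re-textured from that frame's pixels.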
  • Patent number: 11126851
    Abstract: In example implementations, an augmented reality (AR) labeler is provided. The AR labeler includes a camera, a processor, a graphical user interface (GUI), and a display. The camera is to capture an image of an object. The processor is communicatively coupled to the camera to receive the image and determine object information. The GUI is communicatively coupled to the processor to receive print parameters. The display is communicatively coupled to the processor to display an AR image of the object with the print parameters, wherein the print parameters are modified in the AR image based on the object information.
    Type: Grant
    Filed: April 19, 2018
    Date of Patent: September 21, 2021
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Ian N. Robinson, Mithra Vankipuram
  • Patent number: 11127179
    Abstract: A mobile device comprises one or more processors, a display, and a camera configured to capture an image of a live scene. The one or more processors are configured to determine a location of the mobile device and display an augmented image based on the captured image. The augmented image includes at least a portion of the image of the live scene and a map including an indication of the determined location of the mobile device. The one or more processors are also configured to display the at least a portion of the image of the live scene in a first portion of the display and to display the map in a second portion of the display. The augmented image is updated as the mobile device is moved, and the map is docked to the second portion of the display as the augmented image is updated.
    Type: Grant
    Filed: October 30, 2019
    Date of Patent: September 21, 2021
    Assignee: QUALCOMM Incorporated
    Inventor: Arnold Jason Gum