Patents Examined by Yanna Wu
  • Patent number: 11170569
    Abstract: A method for determining a virtual representation of a visual scene and a highly accurate, scene-aligned geometric representation for virtual interaction.
    Type: Grant
    Filed: March 18, 2020
    Date of Patent: November 9, 2021
    Assignee: GEOMAGICAL LABS, INC.
    Inventors: Brian Totty, Kevin Wong, Jianfeng Yin, Luis Puig Morales, Paul Gauthier, Salma Jiddi, Qiqin Dai, Brian Pugh, Konstantinos Nektarios Lianos, Angus Dorbie, Yacine Alami, Marc Eder, Christopher Sweeney, Javier Civera
  • Patent number: 11151789
    Abstract: The present development is a method for the visualization and automatic examination of the inner surface of tubular objects. The method uses a virtual camera rig arranged in a specific pattern within the tubular object's inner surface. The rig can be physical, virtual, or hypothetical (graphics-based), providing the same functionality as a sequence of virtual cameras. This “Fly-In” method is a more general visualization technique than those of the prior art: it is more flexible, does not create distortion, does not require alteration of the surface for viewing, and can handle multiple branches of variable diameter. It also provides a clear assessment of the inner surface for immediate examination of the object. (A minimal camera-placement sketch follows this entry.)
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: October 19, 2021
    Assignee: Kentucky Imaging Technologies
    Inventors: Aly Farag, Mostafa Mohamad, Amal Farag, Asem Ali, Salwa Elshazly
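A minimal sketch (not the patented method) of the "Fly-In" idea from patent 11151789 above: virtual cameras placed along the centerline of a tubular object, each looking down the lumen toward the next centerline point. The centerline input, look-at convention, and up-vector handling are illustrative assumptions.

```python
import numpy as np

def fly_in_cameras(centerline, up_hint=np.array([0.0, 0.0, 1.0])):
    """Place one virtual camera at each centerline point of a tubular object.

    centerline: (N, 3) array of points tracing the tube's medial axis.
    Returns a list of (position, forward, up) triples usable as look-at cameras.
    """
    cameras = []
    for i in range(len(centerline) - 1):
        position = centerline[i]
        forward = centerline[i + 1] - position
        forward = forward / np.linalg.norm(forward)
        # Build an orthonormal frame so the camera roll stays stable.
        right = np.cross(forward, up_hint)
        if np.linalg.norm(right) < 1e-6:  # forward is parallel to up_hint
            right = np.cross(forward, np.array([0.0, 1.0, 0.0]))
        right = right / np.linalg.norm(right)
        up = np.cross(right, forward)
        cameras.append((position, forward, up))
    return cameras
```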
  • Patent number: 11145031
    Abstract: A display apparatus including a first display or projector for displaying first images for a first eye; a second display or projector for displaying second images for a second eye; a first portion and a second portion arranged to face the first and second eyes; means for tracking poses of the first and second eyes relative to first and second optical axes, respectively; and a processor. The processor or an external processor is configured to: obtain a given pose of a given eye relative to a given optical axis; generate information pertaining to a given visual artifact that is formed over a given image at an image plane when the given image is displayed; determine an artifact-superposing portion of the given image; and process the given image based on the generated information and the artifact-superposing portion, to generate a given artifact-compensated image. The processor displays the given artifact-compensated image via the given display or projector.
    Type: Grant
    Filed: January 31, 2020
    Date of Patent: October 12, 2021
    Assignee: Varjo Technologies Oy
    Inventor: Ari Antti Peuhkurinen
  • Patent number: 11145099
    Abstract: At least some embodiments of the present disclosure relate to a method of visualizing objects having frictional contact. The method includes transferring masses and momentums of a plurality of particles of an object to a grid including a plurality of grid nodes; updating momentums of the grid nodes based on the transferred masses and momentums of the particles; transferring the updated momentums of the grid nodes to the particles of the object; updating positions of the particles and deformation gradients of the particles based on the updated momentums of the grid nodes; projecting the deformation gradients for plasticity of the object; updating elastic components and plastic components of the object; and outputting a visualization of the object based on the positions of the particles of the object.
    Type: Grant
    Filed: May 17, 2018
    Date of Patent: October 12, 2021
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Joseph M. Teran, Chenfanfu Jiang, Theodore F. Gast
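A minimal sketch of the first step described in the abstract of patent 11145099 above: transferring particle masses and momentums to grid nodes. This 1D version uses nearest-node weights for brevity (material point method implementations typically use higher-order kernels); all names and the setup are assumptions, not the patented method.

```python
import numpy as np

def particles_to_grid(positions, velocities, masses, grid_size, dx):
    """Scatter particle mass and momentum onto a 1D grid (nearest-node weights).

    positions, velocities, masses: (N,) arrays describing the particles.
    grid_size: number of grid nodes; dx: grid spacing.
    Returns per-node mass and velocity (momentum divided by mass).
    """
    grid_mass = np.zeros(grid_size)
    grid_momentum = np.zeros(grid_size)
    for x, v, m in zip(positions, velocities, masses):
        node = int(round(x / dx))                  # nearest grid node
        node = min(max(node, 0), grid_size - 1)
        grid_mass[node] += m
        grid_momentum[node] += m * v
    grid_velocity = np.divide(grid_momentum, grid_mass,
                              out=np.zeros_like(grid_momentum),
                              where=grid_mass > 0)
    return grid_mass, grid_velocity
```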
  • Patent number: 11137875
    Abstract: The disclosed technology is generally directed to mixed reality devices. In one example of the technology, a mixed-reality view is caused to be provided to an operator. The mixed-reality view includes both a real-world environment of the operator and holographic aspects. The mixed-reality view is caused to include a visual tether to a tether location. The tether location is a real-world location at which work is to be performed. A gaze determination associated with a gaze of the operator is made. Responsive to a positive gaze determination, the visual tether is caused to pulse.
    Type: Grant
    Filed: May 28, 2019
    Date of Patent: October 5, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kevin Thomas Mather, Sean Robert Puller, Tsz Fung Wan
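A hedged sketch of one way the gaze determination in patent 11137875 above could be made: compare the operator's gaze direction with the direction to the tether location and report a positive determination when the angle falls under a threshold. The threshold and vector math are assumptions for illustration, not Microsoft's implementation.

```python
import numpy as np

def gaze_hits_tether(head_position, gaze_direction, tether_location,
                     max_angle_deg=10.0):
    """Return True when the operator's gaze points (roughly) at the tether location."""
    to_tether = np.asarray(tether_location, dtype=float) - np.asarray(head_position, dtype=float)
    to_tether = to_tether / np.linalg.norm(to_tether)
    gaze = np.asarray(gaze_direction, dtype=float)
    gaze = gaze / np.linalg.norm(gaze)
    cos_angle = float(np.clip(np.dot(gaze, to_tether), -1.0, 1.0))
    return np.degrees(np.arccos(cos_angle)) <= max_angle_deg

# Responsive to a positive determination, the visual tether would be pulsed, e.g.:
# if gaze_hits_tether(head, gaze, tether_location): tether.pulse()   # hypothetical API
```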
  • Patent number: 11137874
    Abstract: The disclosed technology is generally directed to mixed reality devices. In one example of the technology, a mixed-reality view is caused to be provided to an operator. The mixed-reality view includes both a real-world environment of the operator and holographic aspects. While the operator is navigated to a step of the task, the mixed-reality view is caused to include a step card, such that the step card includes at least one instruction associated with the step. The operator is enabled to adjust a state associated with the step card. While the state associated with the step card is a first state: a gaze determination associated with a gaze of the operator is made; and responsive to a positive gaze determination, the step card is caused to move to a location that is associated with a real-world location of the gaze of the operator.
    Type: Grant
    Filed: May 28, 2019
    Date of Patent: October 5, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Andrew Jackson Klein, Ethan Harris Arnowitz, Kevin Thomas Mather, Kyle Mouritsen
  • Patent number: 11127194
    Abstract: There is provided an image processing apparatus and an image processing method that are capable of improving the accuracy of a depth image of a 3D model. A depth image generation unit generates a depth image of a plurality of viewpoints for each object included in a 3D model. The present disclosure is applicable to, for example, an encoding device or the like configured to generate a color image and a depth image of each object of each of a plurality of viewpoints on the basis of 3D data of a 3D model, generate an encoded stream by encoding the images, and generate object range information indicating the range of each object.
    Type: Grant
    Filed: October 11, 2017
    Date of Patent: September 21, 2021
    Assignee: SONY CORPORATION
    Inventors: Goh Kobayashi, Junichi Tanaka, Yuichi Araki
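A minimal, assumed sketch related to patent 11127194 above: rendering a depth image for one object at one viewpoint by projecting the object's 3D points through a pinhole camera and keeping the nearest depth per pixel. The pinhole model and point-cloud input are illustrative assumptions; the patent covers multiple objects and viewpoints plus encoding.

```python
import numpy as np

def depth_image_for_object(points_cam, fx, fy, cx, cy, width, height):
    """Render a simple z-buffer depth image for one object's 3D points.

    points_cam: (N, 3) points already expressed in the viewpoint's camera frame.
    fx, fy, cx, cy: pinhole intrinsics; width, height: image size in pixels.
    """
    depth = np.full((height, width), np.inf)
    for x, y, z in points_cam:
        if z <= 0:                                  # behind the camera
            continue
        u = int(round(fx * x / z + cx))
        v = int(round(fy * y / z + cy))
        if 0 <= u < width and 0 <= v < height:
            depth[v, u] = min(depth[v, u], z)       # keep the nearest surface
    return depth
```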
  • Patent number: 11120598
    Abstract: A system or method for training may display a student avatar and an expert avatar. A method may include capturing movement of a user attempting a technique, and generating a 3D student avatar animation from the captured movement. The method may include retrieving a 3D expert avatar animation corresponding to the technique. The method may include displaying the 3D student avatar animation and the 3D expert avatar animation. For example, the animations may be displayed concurrently.
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: September 14, 2021
    Assignee: Visyn Inc.
    Inventors: Jeffrey Thielen, Andrew John Blaylock
  • Patent number: 11113860
    Abstract: The present disclosure provides embodiments of a particle-based inverse kinematic analysis system. The inverse kinematic system can utilize a neural network, also referred to as a deep neural network, which utilizes machine learning processes in order to create poses that are more life-like and realistic. The system can generate prediction models using motion capture data. The motion capture data can be aggregated and analyzed in order to train the neural network. The neural network can determine rules and constraints that govern how joints and connectors of a character model move in order to create realistic motion of the character model within the game application.
    Type: Grant
    Filed: January 13, 2020
    Date of Patent: September 7, 2021
    Assignee: ELECTRONIC ARTS INC.
    Inventors: Paolo Rigiroli, Hitoshi Nishimura
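A small, illustrative sketch of the general idea in patent 11113860 above: train a neural network on motion-capture pairs so it predicts joint rotations from a target pose. The tiny MLP, tensor shapes, and random stand-in data are assumptions; the patent's particle-based formulation and learned joint constraints are not reproduced here.

```python
import torch
from torch import nn

# Stand-in for aggregated motion-capture data: each sample maps an
# end-effector target (x, y, z) to the joint rotations that achieved it.
NUM_JOINTS = 20
targets = torch.randn(1024, 3)                         # placeholder targets
joint_rotations = torch.randn(1024, NUM_JOINTS * 3)    # placeholder Euler angles

model = nn.Sequential(
    nn.Linear(3, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, NUM_JOINTS * 3),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):                               # minimal training loop
    optimizer.zero_grad()
    loss = loss_fn(model(targets), joint_rotations)
    loss.backward()
    optimizer.step()

# At runtime the trained model proposes a pose for a new end-effector target.
predicted_pose = model(torch.tensor([[0.3, 1.1, 0.2]]))
```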
  • Patent number: 11107262
    Abstract: Examples of the disclosed systems and methods may provide for improved and more realistic rendering of virtual characters and a more realistic interaction between a user and virtual characters. For example, the systems and methods describe techniques for mathematically generating a map used for animating facial expressions in a multidimensional animation blendspace. As another example, the systems and methods describe a transition system for dynamically transitioning facial expressions across a face of the virtual character. As another example, realistic physical movements can be added to a virtual character's facial expressions to provide interactivity with other virtual characters.
    Type: Grant
    Filed: August 24, 2020
    Date of Patent: August 31, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Thomas Marshall Miller, IV, The Hung Quach, Jeffrey Lin
  • Patent number: 11093103
    Abstract: Disclosed herein are system, method, and computer program product embodiments for generating collaborative AR workspaces. An embodiment operates by identifying a first user that is participating in an augmented reality (AR) meeting space from a first location. A second user participating in the AR meeting space from a second location is identified. A selection of a room configuration for the AR meeting space based on at least one of the first location or the second location is received. A digital canvas is configured in the AR meeting space for at least one of the first user or the second user based on the selected room configuration, wherein a size or shape of the digital canvas is adjusted based on a first wall or a second wall corresponding to the selected room configuration.
    Type: Grant
    Filed: April 3, 2019
    Date of Patent: August 17, 2021
    Assignee: SPATIAL SYSTEMS INC.
    Inventors: Anand Agarawala, Jinha Lee, Peter Ng, Roman Revzin, Waldo Bronchart, Donghyeon Kim
  • Patent number: 11087517
    Abstract: In particular embodiments, a 2D representation of an object may be provided. A first method may comprise: receiving sketch input identifying a target position for a specified portion of the object; computing a deformation for the object within the context of a character rig specification for the object; and displaying an updated version of the object. A second method may comprise detecting sketch input; classifying the sketch input, based on the 2D representation, as an instantiation of the object; instantiating the object using a 3D model of the object; and displaying a 3D visual representation of the object.
    Type: Grant
    Filed: June 2, 2016
    Date of Patent: August 10, 2021
    Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Robert Walker Sumner, Maurizio Nitti, Stelian Coros, Bernhard Thomaszewski, Fabian Andreas Hahn, Markus Gross, Frederik Rudolf Mutzel
  • Patent number: 11086474
    Abstract: Disclosed herein are system, method, and computer program product embodiments for providing a local scene recreation of an augmented reality meeting space to a mobile device, laptop computer, or other computing device. By decoupling the augmented reality meeting space from virtual reality headsets, the user-base expands to include users that could otherwise not participate in the collaborative augmented reality meeting spaces. Users participating on mobile devices and laptops may choose between multiple modes of interaction including an auto-switch view and manual views as well as interacting with the augmented reality meeting space by installing an augmented reality toolkit. Users may deploy and interact with various forms of avatars representing other users in the augmented reality meeting space.
    Type: Grant
    Filed: April 3, 2019
    Date of Patent: August 10, 2021
    Assignee: SPATIAL SYSTEMS INC.
    Inventors: Jinha Lee, Anand Agarawala, Peter Ng, Tyler Hatch
  • Patent number: 11081080
    Abstract: A display device includes: a host which transmits a first signal through a first interface, and transmits a second signal through a second interface different from the first interface; a display driver integrated circuit including a first interface unit which receives the first signal through the first interface, and a second interface unit which receives the second signal through the second interface; and a display panel which receives a data signal corresponding to the first signal and the second signal from the display driver integrated circuit, and displays an image.
    Type: Grant
    Filed: July 23, 2020
    Date of Patent: August 3, 2021
    Assignee: SAMSUNG DISPLAY CO., LTD.
    Inventors: Ho Seok Han, Hyun Gu Kim, Jun Yong Park
  • Patent number: 11074747
    Abstract: In various embodiments, a sketching application generates models of three-dimensional (3D) objects. In operation, the sketching application generates a first virtual geometry based on a first free-form gesture. Subsequently, the sketching application generates a second virtual geometry based on a first constrained gesture associated with a two-dimensional (2D) physical surface. The sketching application then generates a model of a 3D object based on the first virtual geometry and the second virtual geometry. Advantageously, because the sketching application generates virtual geometries based on a combination of free-form and constrained gestures, the sketching application efficiently generates accurate models of detailed 3D objects.
    Type: Grant
    Filed: October 9, 2018
    Date of Patent: July 27, 2021
    Assignee: AUTODESK, INC.
    Inventors: Karansher Singh, Tovi Grossman, Kazi Rubaiat Habib, George Fitzmaurice, Rahul Arora
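A hedged sketch of the two gesture types described in patent 11074747 above: a free-form stroke is kept as captured in 3D, while a constrained stroke is snapped onto the plane of a 2D physical surface before both are combined into one model. The plane projection and the data layout are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def project_onto_surface(points, plane_point, plane_normal):
    """Constrain a stroke to a physical 2D surface by projecting it onto the surface plane."""
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)
    points = np.asarray(points, dtype=float)
    distances = (points - plane_point) @ n          # signed distance of each point to the plane
    return points - np.outer(distances, n)

def build_model(free_form_stroke, constrained_stroke, plane_point, plane_normal):
    """Combine a free-form gesture and a surface-constrained gesture into one 3D model."""
    free_geometry = np.asarray(free_form_stroke, dtype=float)      # used as captured
    constrained_geometry = project_onto_surface(
        constrained_stroke, plane_point, plane_normal)
    return {"curves": [free_geometry, constrained_geometry]}
```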
  • Patent number: 11069123
    Abstract: Cloud-based real-time rendering.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: July 20, 2021
    Assignee: INTEL CORPORATION
    Inventors: Carson Brownlee, Joshua Barczak, Kai Xiao, Michael Apodaca, Philip Laws, Thomas Raoux, Travis Schluessler
  • Patent number: 11062499
    Abstract: In one embodiment, a method for determining the color for a sampling location may include using a computing system to determine a sampling location within a texture that comprises a plurality of texels. Each texel may encode a distance field indicating a distance between the texel and an edge depicted in the texture and an indicator indicating whether the texel is on a first predetermined side of the edge or a second predetermined side of the edge. The system may select, based on the sampling location, a set of texels in the plurality of texels to use to determine a color for the sampling location. The system may determine that the set of texels have indicators that are the same. The system may then determine, using the indicator of any texel in the set of texels, the color for the sampling location.
    Type: Grant
    Filed: September 26, 2019
    Date of Patent: July 13, 2021
    Assignee: Facebook Technologies, LLC
    Inventor: Larry Seiler
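A minimal sketch of the sampling decision described in patent 11062499 above: gather the 2x2 texels around the sampling location and, when their inside/outside indicators all agree, take the color implied by that indicator without filtering the distance field. The texel layout, the two colors, and the fallback path are assumptions for illustration.

```python
import numpy as np

INSIDE_COLOR = np.array([0.0, 0.0, 0.0])     # assumed color for one side of the edge
OUTSIDE_COLOR = np.array([1.0, 1.0, 1.0])    # assumed color for the other side

def sample_color(u, v, distances, indicators):
    """Pick a color for a sampling location in a distance-field texture.

    distances: (H, W) signed-distance values per texel (unused on the fast path).
    indicators: (H, W) booleans, True if the texel lies on the 'inside' of the edge.
    u, v: sampling location in texel coordinates.
    """
    h, w = indicators.shape
    x0 = int(np.clip(np.floor(u), 0, w - 2))
    y0 = int(np.clip(np.floor(v), 0, h - 2))
    block = indicators[y0:y0 + 2, x0:x0 + 2]          # the 2x2 texels around (u, v)
    if block.all() or not block.any():
        # All texels sit on the same side of the edge: the indicator alone
        # determines the color, so the distance field need not be evaluated.
        return INSIDE_COLOR if block.all() else OUTSIDE_COLOR
    # Otherwise fall back to the distance field across the edge.
    d = distances[y0:y0 + 2, x0:x0 + 2].mean()        # crude stand-in for filtering
    return INSIDE_COLOR if d < 0 else OUTSIDE_COLOR
```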
  • Patent number: 11010978
    Abstract: A method and system for generating interactive AR content. The method and system disclosed herein allow a user to digitally capture printed content and interact with that printed content in an AR session. AR session interaction provides access to dynamic, up-to-date, and contextually relevant information based on the characteristics of the printed content, as well as current date, time, and location information collected either from the user or from the devices used to implement the AR session.
    Type: Grant
    Filed: June 10, 2019
    Date of Patent: May 18, 2021
    Assignee: KYOCERA Document Solutions Inc.
    Inventor: Naohiko Kosaka
  • Patent number: 11004252
    Abstract: Real-time, ray-tracing-based adaptive multi-frequency shading. For example, one embodiment of an apparatus comprises: rasterization hardware logic to process input data for an image in a deferred rendering pass and to responsively update one or more graphics buffers with first data to be used in a subsequent rendering pass; ray tracing hardware logic to perform ray tracing operations using the first data to generate reflection ray data and to store the reflection ray data in a reflection buffer; and image rendering circuitry to perform texture sampling in a texture buffer based on the reflection ray data in the reflection buffer to render an output image.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: May 11, 2021
    Assignee: Intel Corporation
    Inventors: Carson Brownlee, Gabor Liktor, Joshua Barczak, Kai Xiao, Michael Apodaca, Thomas Raoux
  • Patent number: 10977865
    Abstract: Various embodiments are described herein for allowing a user in a vehicle to view at least one AR image of a portion of the vehicle's surroundings. At least one real world camera may be used to obtain at least one real world image of the portion of the vehicle's surroundings and at least one display may be used to display the at least one AR image to the user. Location, orientation and field of view data for the at least one real world camera is obtained and a virtual world camera having similar characteristics is generated to obtain at least one virtual world image of a portion of the virtual world data that corresponds to the real world data. The at least one AR image is generated by combining the at least one virtual world image and the at least one real world image. (A minimal compositing sketch follows this entry.)
    Type: Grant
    Filed: August 4, 2016
    Date of Patent: April 13, 2021
    Inventor: Seyed-Nima Yasrebi
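A simple, assumed sketch of the final compositing step described in patent 10977865 above: a virtual camera matching the real camera's pose and field of view renders the virtual world, and the result is overlaid on the real-world frame wherever virtual content exists. The alpha-mask blend shown here is one common choice, not necessarily the patented one.

```python
import numpy as np

def composite_ar_frame(real_image, virtual_image, virtual_alpha):
    """Blend a rendered virtual-world image over the real-world camera image.

    real_image, virtual_image: (H, W, 3) float arrays with values in [0, 1].
    virtual_alpha: (H, W) coverage mask from the virtual render (1.0 = virtual content).
    Returns the combined AR image shown to the user.
    """
    alpha = virtual_alpha[..., None]                 # broadcast mask over color channels
    return alpha * virtual_image + (1.0 - alpha) * real_image
```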