Patents Examined by Phi Hoang
  • Patent number: 10991146
    Abstract: A processor receives a request to access one or more levels of a partially resident texture (PRT) resource. The levels represent a texture at different levels of detail (LOD) and the request includes normalized coordinates indicating a location in the texture. The processor accesses a texture descriptor that includes dimensions of a first level of the levels and one or more offsets between a reference level and one or more second levels that are associated with one or more residency maps that indicate texels that are resident in the PRT resource. The processor translates the normalized coordinates to texel coordinates in the one or more residency maps based on the offset and accesses, in response to the request, the one or more residency maps based on the texel coordinates to determine whether texture data indicated by the normalized coordinates is resident in the PRT resource.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: April 27, 2021
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Maxim V. Kazakov, Mark Fowler
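A minimal sketch of the coordinate translation described in the abstract of patent 10991146 above: normalized coordinates are scaled to the requested mip level and then mapped, via a per-level offset, into a packed residency map. The descriptor layout, field names, and map granularity here are assumptions for illustration, not the patented format.

```python
# Illustrative sketch of the residency lookup described in patent 10991146.
# The descriptor layout, field names, and map granularity are assumptions.

def make_descriptor(width, height, residency_offsets):
    """Texture descriptor: level-0 dimensions plus per-level offsets into
    a packed residency-map array (one flag per residency-map texel)."""
    return {"width": width, "height": height, "offsets": residency_offsets}

def residency_texel(descriptor, level, u, v, map_scale=64):
    """Translate normalized (u, v) coordinates into the texel index of the
    residency map that covers the requested mip level."""
    # Dimensions of the requested level, derived from the level-0 dimensions.
    lw = max(1, descriptor["width"] >> level)
    lh = max(1, descriptor["height"] >> level)
    # One residency-map texel covers a map_scale x map_scale block of texels.
    x = int(u * lw) // map_scale
    y = int(v * lh) // map_scale
    row = (lw + map_scale - 1) // map_scale
    return descriptor["offsets"][level] + y * row + x

def is_resident(residency_map, descriptor, level, u, v):
    """True if the texture data at (u, v) in the given level is resident."""
    return bool(residency_map[residency_texel(descriptor, level, u, v)])

if __name__ == "__main__":
    desc = make_descriptor(4096, 4096, residency_offsets=[0, 4096, 5120])
    residency = bytearray(8192)                          # toy packed residency flags
    residency[residency_texel(desc, 1, 0.25, 0.5)] = 1
    print(is_resident(residency, desc, 1, 0.25, 0.5))    # True
    print(is_resident(residency, desc, 1, 0.90, 0.9))    # False
```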
  • Patent number: 10990240
    Abstract: An artificial reality system is described that renders, presents, and controls user interface elements within an artificial reality environment, and performs actions in response to one or more detected gestures of the user. The artificial reality system captures image data representative of a physical environment and outputs artificial reality content. The artificial reality system renders a container that includes application content items as an overlay to the artificial reality content. The artificial reality system identifies, from the image data, a selection gesture comprising a configuration of a hand that is substantially stationary for a threshold period of time at a first location corresponding to a first application content item within the container, and a subsequent movement of the hand from the first location to a second location outside the container. The artificial reality system renders the first application content item at the second location in response.
    Type: Grant
    Filed: June 7, 2019
    Date of Patent: April 27, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Jonathan Ravasz, Jasper Stevens, Adam Tibor Varga, Etienne Pinchon, Simon Charles Tickner, Jennifer Lynn Spurlock, Kyle Eric Sorge-Toomey, Robert Ellis, Barrett Fox
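A schematic sketch of the "hold inside the container, then drag out" selection gesture that the abstract of patent 10990240 describes. The frame format, thresholds, and coordinate conventions are invented for illustration; this is not Facebook's implementation.

```python
# Toy state machine for the hold-then-drag-out selection gesture of 10990240.
import math

HOLD_SECONDS = 0.5      # hand must stay still this long over an item
STILL_RADIUS = 0.02     # movement below this (normalized units) counts as stationary

def _inside(rect, p):
    (x0, y0, x1, y1), (x, y) = rect, p
    return x0 <= x <= x1 and y0 <= y <= y1

def detect_drag_out(frames, container):
    """frames: iterable of (t_seconds, (x, y)) hand samples.
    Returns (grab_point, drop_point) after the hand holds still inside the
    container for HOLD_SECONDS and then moves outside it, else None."""
    hold_start, anchor, grab = None, None, None
    for t, p in frames:
        if grab is None:
            if not _inside(container, p):
                hold_start, anchor = None, None
            elif anchor is None or math.dist(p, anchor) > STILL_RADIUS:
                hold_start, anchor = t, p           # (re)start the hold timer
            elif t - hold_start >= HOLD_SECONDS:
                grab = anchor                       # selection gesture recognized
        elif not _inside(container, p):
            return grab, p                          # re-render the item at the drop point
    return None

samples = [(0.0, (0.3, 0.3)), (0.3, (0.3, 0.31)), (0.6, (0.3, 0.3)), (0.9, (0.8, 0.5))]
print(detect_drag_out(samples, container=(0.2, 0.2, 0.5, 0.5)))
```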
  • Patent number: 10977851
    Abstract: A computer system is used to host a virtual reality universe process in which multiple avatars are independently controlled in response to client input. The host provides coordinated motion information defining coordinated movement between designated portions of multiple avatars, and an application detects conditions triggering a coordinated movement sequence between two or more avatars. During coordinated movement, user commands for controlling avatar movement may be in part used normally and in part ignored or otherwise processed to cause the involved avatars to respond in part to respective client input and in part to predefined coordinated movement information. Thus, users may be assisted with executing coordinated movement between multiple avatars.
    Type: Grant
    Filed: February 27, 2019
    Date of Patent: April 13, 2021
    Assignee: PFAQUTRUMA RESEARCH LLC
    Inventor: Brian Mark Shuster
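One way to read the abstract of patent 10977851 is that per-user input is partially honored and partially overridden while a coordinated sequence is active. The sketch below makes that concrete with a simple blend; the blend weight and the motion representation are assumptions, not the patented mechanism.

```python
# Minimal sketch of partially-honored user input during coordinated movement (10977851).

def blended_motion(user_command, coordinated_motion, blend=0.6):
    """user_command / coordinated_motion: (dx, dy, dz) avatar displacements.
    While a coordinated sequence is active, the avatar follows the predefined
    motion with weight `blend` and the user's input with the remainder."""
    return tuple((1 - blend) * u + blend * c
                 for u, c in zip(user_command, coordinated_motion))

# User tries to walk forward while a coordinated sequence pulls the avatars together.
print(blended_motion(user_command=(1.0, 0.0, 0.0), coordinated_motion=(0.2, 0.0, -0.5)))
```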
  • Patent number: 10976806
    Abstract: Embodiments are disclosed herein for providing an immersive reality experience to a patient during a medical procedure. In one example, a method includes determining a location of a stressful object in a medical environment that includes a patient undergoing a medical procedure. The method further includes, upon determining that the stressful object is in a field of view (FOV) of the patient, adjusting display content based on the location of the stressful object within the FOV and outputting the display content for display on an immersive device.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: April 13, 2021
    Assignee: GE Precision Healthcare LLC
    Inventors: Laurence Vancamberg, Caroline DeCock, Ludovic Avot, Serge Muller, Julie Manzano
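A compact sketch of the field-of-view test and content adjustment described in the abstract of patent 10976806. The angular FOV model and the "overlay an occluder" adjustment are assumptions standing in for the patented method.

```python
# Toy FOV check and display adjustment for a stressful object (10976806).
import math

def in_fov(gaze_dir, object_dir, half_angle_deg=45.0):
    """Both directions are 2D vectors in the room's horizontal plane."""
    dot = sum(g * o for g, o in zip(gaze_dir, object_dir))
    norm = math.hypot(*gaze_dir) * math.hypot(*object_dir)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm)))) <= half_angle_deg

def adjust_content(frame_objects, gaze_dir, stressful_dir):
    """Overlay calming content where the stressful object sits in the FOV."""
    if in_fov(gaze_dir, stressful_dir):
        frame_objects.append({"type": "occluder", "anchor": tuple(stressful_dir)})
    return frame_objects

print(adjust_content([], gaze_dir=(1.0, 0.0), stressful_dir=(0.9, 0.3)))
```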
  • Patent number: 10964035
    Abstract: A device is provided for encrypting and/or decrypting a point cloud having a plurality of data points that collectively produce a three-dimensional (“3D”) image. Each data point may have a set of elements with values that define a position of the data point in 3D space and visual characteristics of the data point. Encrypting the point cloud may include deterministically selecting a set of data points to encrypt, and deterministically changing the data point element values of the selected data points so that the 3D image produced by the encrypted data points is different than the 3D image produced from the unencrypted data points. Decrypting the resulting encrypted point cloud may include deterministically reselecting the encrypted data points using an encryption key, and deterministically reversing the changes made to the data point element values of the selected data points based on transformations that are specified as part of the encryption key.
    Type: Grant
    Filed: November 20, 2020
    Date of Patent: March 30, 2021
    Assignee: Illuscio, Inc.
    Inventors: Robert Monaghan, Venkatarao Maruvada, Joseph Bogacz
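A simplified sketch of the key-driven select-and-transform idea in the abstract of patent 10964035. The keyed selection (a seeded PRNG) and the reversible transform (an additive positional offset) are illustrative stand-ins, not the patented transformations.

```python
# Simplified deterministic select-and-transform point cloud encryption (10964035).
import random

def _selection(key, n_points, fraction=0.5):
    rng = random.Random(key)                         # the key drives point selection
    return sorted(rng.sample(range(n_points), int(n_points * fraction)))

def encrypt(points, key, offset=(5.0, -3.0, 2.0)):
    """points: list of [x, y, z, r, g, b]. Shifts the positions of a
    key-selected subset so the rendered 3D image no longer matches."""
    out = [p[:] for p in points]
    for i in _selection(key, len(points)):
        out[i][:3] = [c + d for c, d in zip(out[i][:3], offset)]
    return out

def decrypt(points, key, offset=(5.0, -3.0, 2.0)):
    """Reselects the same points with the key and reverses the transform."""
    out = [p[:] for p in points]
    for i in _selection(key, len(points)):
        out[i][:3] = [c - d for c, d in zip(out[i][:3], offset)]
    return out

cloud = [[0.0, 0.0, 0.0, 255, 0, 0], [1.0, 2.0, 3.0, 0, 255, 0]]
assert decrypt(encrypt(cloud, key="secret"), key="secret") == cloud
```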
  • Patent number: 10964201
    Abstract: A system includes a processing device and memory device storing instructions that result in accessing a first dataset including aerial imagery data, accessing a second dataset including property boundary data, and identifying property boundaries associated with a geographic area. A plurality of artificial-intelligence (AI) models are applied to the datasets to identify and compute information of interest. Based on the first dataset and constrained by the property boundaries, a building detection model can be applied to identify a building footprint, and a tree detection model can be applied to identify one or more trees. An estimated distance can be determined between each of the trees and a nearest portion of the building footprint as separation data, which can be compared to a defensible space guideline to determine a defensible space adherence score. A wildfire risk map can be generated, including the defensible space adherence score associated with the geographic area.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: March 30, 2021
    Assignee: THE TRAVELERS INDEMNITY COMPANY
    Inventors: Hoa Ton-That, James Dykstra, John Han, Stefanie M. Walker, Joseph Amuso, George Lee, Kyle J. Kelsey
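A toy version of the tree-to-building separation check behind the defensible space adherence score in the abstract of patent 10964201. The footprint representation, distance measure, and scoring rule are simplifying assumptions.

```python
# Toy defensible-space adherence scoring (10964201).
import math

DEFENSIBLE_SPACE_FT = 30.0      # assumed guideline separation distance

def nearest_distance(tree, footprint):
    """Distance (ft) from a tree point to the nearest footprint vertex.
    A real system would measure to the nearest point on the polygon edges."""
    return min(math.dist(tree, corner) for corner in footprint)

def adherence_score(trees, footprint, guideline=DEFENSIBLE_SPACE_FT):
    """Fraction of detected trees that satisfy the separation guideline."""
    if not trees:
        return 1.0
    ok = sum(1 for t in trees if nearest_distance(t, footprint) >= guideline)
    return ok / len(trees)

footprint = [(0, 0), (40, 0), (40, 30), (0, 30)]        # building corners, feet
trees = [(10, 45), (80, 80), (42, 5)]
print(round(adherence_score(trees, footprint), 2))       # 0.33
```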
  • Patent number: 10964070
    Abstract: An augmented reality display method of applying color of hair to eyebrows includes the steps of capturing a head image (30) of a user (2), retrieving an original eyebrow image from the head image (30), extracting a hair representation image (31) covering at least part of the user's hair from the head image (30), executing a process of extracting hair region on the hair representation image (31) for obtaining a hair mask indicating the hair's position, computing a hair color parameter according to the hair mask and the hair representation image (31), executing a process of coloring eyebrows on the head image (30) according to the hair color parameter and the position of the original eyebrow image for obtaining an AR (Augmented Reality) head image whose eyebrow color corresponds to the hair color parameter, and displaying the AR head image.
    Type: Grant
    Filed: November 20, 2019
    Date of Patent: March 30, 2021
    Assignee: CAL-COMP BIG DATA, INC.
    Inventor: Yung-Hsuan Lin
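A minimal sketch of the hair-to-eyebrow recoloring flow in the abstract of patent 10964070: average the pixels under a hair mask to obtain a hair color parameter, then blend that color into the eyebrow region. The mask format, color statistic, and blending weight are assumptions, not the patented process.

```python
# Minimal hair-color-to-eyebrows recoloring sketch (10964070).
import numpy as np

def hair_color_parameter(image, hair_mask):
    """Mean RGB of the pixels marked as hair (mask is boolean, HxW)."""
    return image[hair_mask].mean(axis=0)

def recolor_eyebrows(image, eyebrow_mask, hair_color, strength=0.7):
    """Blend the hair color into the eyebrow pixels of a copy of the image."""
    out = image.astype(float).copy()
    out[eyebrow_mask] = (1 - strength) * out[eyebrow_mask] + strength * hair_color
    return out.astype(np.uint8)

# Synthetic 4x4 head image: top row is "hair", bottom-left pixel is "eyebrow".
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[0, :] = (90, 60, 30)                        # brown hair
hair = np.zeros((4, 4), dtype=bool); hair[0, :] = True
brow = np.zeros((4, 4), dtype=bool); brow[3, 0] = True
ar_img = recolor_eyebrows(img, brow, hair_color_parameter(img, hair))
print(ar_img[3, 0])                             # eyebrow pixel pulled toward the hair color
```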
  • Patent number: 10960760
    Abstract: Provided are a vehicle control device capable of displaying a high-quality seamless image without data transmission delay or a difference in image quality between a plurality of displays, and a method thereof. The vehicle control device includes a plurality of different displays installed in a vehicle and a controller generating a first image having a plurality of pieces of first information, generating a second image having a plurality of pieces of second information, merging the first and second images, dividing the merged image into a plurality of images, and displaying the plurality of divided images on the plurality of displays, respectively.
    Type: Grant
    Filed: March 21, 2019
    Date of Patent: March 30, 2021
    Assignee: LG ELECTRONICS INC.
    Inventors: Honggul Jun, Sujin Kim, Kihyung Lee
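The merge-then-divide flow in the abstract of patent 10960760 can be sketched in a few lines: compose the two generated information layers into one wide frame, then slice that frame into per-display regions. The frame layout and split points below are assumptions.

```python
# Schematic merge-then-divide flow for a multi-display vehicle cockpit (10960760).

def merge(first_image, second_image):
    """Place the two generated images side by side in one seamless frame.
    Images are lists of rows; each row is a list of pixel values."""
    return [row_a + row_b for row_a, row_b in zip(first_image, second_image)]

def divide(frame, widths):
    """Cut the merged frame into vertical strips, one per physical display."""
    strips, x = [], 0
    for w in widths:
        strips.append([row[x:x + w] for row in frame])
        x += w
    return strips

cluster = [[1, 1], [1, 1]]                 # 2x2 image with first information
infotainment = [[2, 2, 2], [2, 2, 2]]      # 2x3 image with second information
frame = merge(cluster, infotainment)       # 2x5 merged frame
left, right = divide(frame, widths=[3, 2])
print(left, right)
```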
  • Patent number: 10955929
    Abstract: An artificial reality system is described that renders, presents, and controls user interface elements within an artificial reality environment, and performs actions in response to one or more detected gestures of the user. The artificial reality system captures image data representative of a physical environment and outputs the artificial reality content. The artificial reality system identifies, from the image data, a gesture comprising a motion of a first digit of a hand and a second digit of the hand to form a pinching configuration a particular number of times within a threshold amount of time. The artificial reality system assigns one or more input characters to one or more of a plurality of digits of the hand and processes a selection of a first input character of the one or more input characters assigned to the second digit of the hand in response to the identified gesture.
    Type: Grant
    Filed: June 7, 2019
    Date of Patent: March 23, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Jonathan Ravasz, Jasper Stevens, Adam Tibor Varga, Etienne Pinchon, Simon Charles Tickner, Jennifer Lynn Spurlock, Kyle Eric Sorge-Toomey, Robert Ellis, Barrett Fox
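A toy mapping from "which digit was pinched, how many times, and how quickly" to an input character, in the spirit of the abstract of patent 10955929 and of multi-tap text entry. The character assignment per digit and the time window are invented for illustration.

```python
# Toy pinch-count character selection (10955929).

PINCH_WINDOW = 1.0                      # seconds allowed for one pinch burst
DIGIT_CHARS = {                         # characters assumed assigned per digit
    "index": "abc",
    "middle": "def",
    "ring": "ghi",
    "pinky": "jkl",
}

def select_character(digit, pinch_times):
    """pinch_times: timestamps of thumb-to-digit pinches forming one gesture.
    Returns the character selected by the pinch count, or None."""
    if not pinch_times or pinch_times[-1] - pinch_times[0] > PINCH_WINDOW:
        return None                     # too slow: not a single gesture
    chars = DIGIT_CHARS.get(digit, "")
    count = len(pinch_times)
    return chars[count - 1] if 0 < count <= len(chars) else None

print(select_character("middle", [0.0, 0.2]))   # 'e' (two pinches on the middle finger)
print(select_character("ring", [0.0, 1.6]))     # None (outside the time window)
```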
  • Patent number: 10952488
    Abstract: Sensor assisted head mounted displays for welding are disclosed. Disclosed example head mounted devices include an optical sensor, a lens, an augmented reality controller, a graphics processing unit, and a semi-transparent display. The optical sensor collects an image of a weld environment. The augmented reality controller determines a simulated object to be presented in a field of view, a position in the field of view, and a perspective of the simulated object in the field of view. The graphics processing unit renders the simulated object based on the perspective to represent the simulated object being present in the field of view and in the weld environment. The display presents the rendered simulated object within the field of view based on the position. At least a portion of the weld environment is observable through the display and the lens when the display is presenting the rendered simulated object.
    Type: Grant
    Filed: October 22, 2019
    Date of Patent: March 23, 2021
    Assignee: ILLINOIS TOOL WORKS
    Inventor: Christopher Hsu
  • Patent number: 10958854
    Abstract: An output video is created by at least two cameras recording respective source videos, each having multiple video frames containing video objects imaged by the cameras corresponding to multiple instances of one or more respective source objects traversing a site. Output video objects having a new start display time are computed such that a total duration of display times of all video objects from all source videos is shorter than a cumulative duration of the source videos. The output video objects or graphical representations thereof are rendered at new display times over a background image such that (i) instances imaged by different cameras at different times are represented simultaneously; (ii) at least two output video objects originating from a common camera have different relative display times to their respective source objects; and (iii) in at least one location there are represented instances imaged by two different cameras.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: March 23, 2021
    Assignee: Briefcam Ltd.
    Inventor: Elhanan Hayim Elboher
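A greedy retiming sketch in the spirit of the abstract of patent 10958854: each video object gets a new start time so the combined output is much shorter than the source footage. The overlap rule (a fixed number of parallel lanes) and the data layout are simplifications, not Briefcam's method.

```python
# Greedy retiming of video objects for a shorter output video (10958854).

def retime(objects, max_parallel=3):
    """objects: list of dicts with 'camera', 'duration', and 'location'.
    Assigns a new 'start' so at most max_parallel objects share a time slot."""
    lanes = [0.0] * max_parallel             # next free time in each display lane
    for obj in sorted(objects, key=lambda o: o["duration"], reverse=True):
        lane = min(range(max_parallel), key=lambda i: lanes[i])
        obj["start"] = lanes[lane]           # new display time in the output video
        lanes[lane] += obj["duration"]
    return max(lanes)                        # total output duration

clips = [
    {"camera": "A", "duration": 8.0, "location": "gate"},
    {"camera": "B", "duration": 5.0, "location": "gate"},
    {"camera": "A", "duration": 4.0, "location": "lobby"},
    {"camera": "B", "duration": 3.0, "location": "gate"},
]
print(retime(clips))          # 8.0 seconds of output vs 20.0 seconds of source
```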
  • Patent number: 10957101
    Abstract: A system and method for generating models from digital images in an interactive environment comprising a memory and a processor in communication with the memory. The processor captures or derives metadata for one or more digital images. The processor derives transforms from the metadata to align the digital images with one or more three-dimensional (“3D”) models of objects/structures represented in the digital image. The processor generates an interactive environment which allows a user to view a contextual model of each of the objects/structures in two dimensional (“2D”) and 3D views.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: March 23, 2021
    Assignee: Geomni, Inc.
    Inventors: Corey David Reed, Ron Richardson, Lyons Jorgensen, Jacob Jenson
  • Patent number: 10937200
    Abstract: In implementations of object-based color adjustment, an image editing system adjusts hue and saturation of a digital image so that objects in the digital image do not appear unnatural. The image editing system quantizes a CIELAB color space into classes that represent pairs of a and b channel values. The image editing system determines probabilities that pixels of a digital image belong to each of the classes, and based on the probabilities, determines a range of hue and a range of saturation for each pixel. An object detector segments objects in the digital image to determine ranges of hue and saturation for each segmented object. The image editing system selectively adjusts the hue and saturation for objects of the digital image based on whether the hue and saturation range for the object include a value of hue and saturation, respectively, selected in a user interface.
    Type: Grant
    Filed: March 19, 2019
    Date of Patent: March 2, 2021
    Assignee: Adobe Inc.
    Inventors: Nishant Kumar, Neeraj Chaudhary
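A small sketch of the CIELAB a/b quantization idea in the abstract of patent 10937200: bin the a and b channels into a fixed grid of classes, and map a class back to the hue angle and saturation-like chroma a selective adjustment would compare against. The bin size and channel ranges are assumptions.

```python
# CIELAB a/b quantization into classes and back to hue/chroma (10937200).
import math

BIN = 10                                  # width of one a/b bin
A_RANGE = B_RANGE = (-110, 110)           # assumed usable a*/b* extent

def ab_to_class(a, b):
    """Index of the (a, b) pair in a row-major grid of BIN-wide cells."""
    cols = (A_RANGE[1] - A_RANGE[0]) // BIN
    i = (int(a) - A_RANGE[0]) // BIN
    j = (int(b) - B_RANGE[0]) // BIN
    return i * cols + j

def class_to_hue_chroma(cls):
    """Hue angle (degrees) and chroma of the class center."""
    cols = (A_RANGE[1] - A_RANGE[0]) // BIN
    a = A_RANGE[0] + (cls // cols) * BIN + BIN / 2
    b = B_RANGE[0] + (cls % cols) * BIN + BIN / 2
    return math.degrees(math.atan2(b, a)) % 360, math.hypot(a, b)

cls = ab_to_class(60, 40)                 # a warm, saturated color
print(cls, class_to_hue_chroma(cls))
```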
  • Patent number: 10928898
    Abstract: Embodiments of the present disclosure relate to augmented reality (AR) safety enhancement. In embodiments, an eye-gaze time indicating a period in which a user using an AR application is viewing a screen of a mobile device running the AR application can be determined. The eye-gaze time can then be compared to an eye-gaze threshold. In response to a determination that the eye-gaze time exceeds the eye-gaze threshold, an alert can be issued to the mobile device running the AR application. In embodiments, a set of proximity data can be received. The set of proximity data can be analyzed to determine a number of nearby devices. A determination can be made whether the number of nearby devices exceeds a safety threshold. When a determination is made that the number of nearby devices exceeds the safety threshold, an alert can be issued to a device having a running AR application.
    Type: Grant
    Filed: January 3, 2019
    Date of Patent: February 23, 2021
    Assignee: International Business Machines Corporation
    Inventors: Rebecca D. Young, Stewart J. Hyman, Manvendra Gupta, Rhonda L. Childress
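A condensed sketch of the two safety checks described in the abstract of patent 10928898: an eye-gaze timer compared against a gaze threshold, and a nearby-device count compared against a safety threshold. The threshold values, proximity-data format, and alert mechanism are invented.

```python
# Two AR safety checks: eye-gaze time and nearby-device count (10928898).

EYE_GAZE_THRESHOLD_S = 5.0        # assumed: max continuous seconds looking at the screen
NEARBY_DEVICE_THRESHOLD = 10      # assumed: crowd size that triggers a warning

def check_eye_gaze(gaze_seconds):
    """Alert when the user has watched the AR screen too long."""
    return "ALERT: look up from the AR app" if gaze_seconds > EYE_GAZE_THRESHOLD_S else None

def check_proximity(proximity_data):
    """proximity_data: list of (device_id, distance_m) readings. Counts nearby
    devices and alerts when the crowd exceeds the safety threshold."""
    nearby = sum(1 for _, distance in proximity_data if distance < 5.0)
    if nearby > NEARBY_DEVICE_THRESHOLD:
        return f"ALERT: {nearby} devices nearby, watch your surroundings"
    return None

print(check_eye_gaze(7.2))
print(check_proximity([("d%03d" % i, 1.5) for i in range(12)]))
```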
  • Patent number: 10930032
    Abstract: Methods, systems, and computer program products for generating concept images of human poses using machine learning models are provided herein. A computer-implemented method includes identifying one or more events from input data by applying a machine learning recognition model to the input data, wherein the identifying comprises (i) detecting multiple entities from the input data and (ii) determining one or more behavioral relationships among the multiple entities in the input data; generating, using a machine learning interpretability model and the identified events, one or more images illustrating one or more human poses related to the identified events; outputting the one or more generated images to at least one user; and updating the machine learning recognition model based at least in part on (i) the one or more generated images and (ii) input from the at least one user.
    Type: Grant
    Filed: August 22, 2019
    Date of Patent: February 23, 2021
    Assignee: International Business Machines Corporation
    Inventors: Samarth Bharadwaj, Saneem Chemmengath, Suranjana Samanta, Karthik Sankaranarayanan
  • Patent number: 10922339
    Abstract: Portable globes may be provided for viewing regions of interest in a Geographical Information System (GIS). A method for providing a portable globe for a GIS may include determining one or more selected regions corresponding to a geographical region of a master globe. The method may further include organizing geospatial data from the master globe based on the selected region and creating the portable globe based on the geospatial data. The portable globe may be smaller in data size than the master globe. The method may include transmitting the portable globe to a local device that may render the selected region at a higher resolution than the remainder of the portable globe in the GIS. A system for providing a portable globe may include a selection module, a fusion module and a transmitter. A system for updating a portable globe may include a packet bundler and a globe cutter.
    Type: Grant
    Filed: April 13, 2020
    Date of Patent: February 16, 2021
    Assignee: Google LLC
    Inventors: Manas Ranjan Jagadev, Eli Dylan Lorimer, Bret Peterson, Vijay Raman, Mark Wheeler
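A sketch of the "globe cutter" idea in the abstract of patent 10922339: keep full-resolution geospatial tiles only inside the selected region and coarse tiles everywhere else, so the portable globe is much smaller than the master globe. The tile model (level, bounding box, byte size) is a simplification, not Google's data format.

```python
# Toy globe cutter: high-resolution tiles only inside the selected region (10922339).

def cut_portable_globe(master_tiles, region, max_coarse_level=4):
    """master_tiles: list of dicts with 'level', 'bounds' (lon0, lat0, lon1, lat1),
    and 'bytes'. Returns the tiles to bundle into the portable globe."""
    def overlaps(b):
        lon0, lat0, lon1, lat1 = b
        rlon0, rlat0, rlon1, rlat1 = region
        return lon0 < rlon1 and rlon0 < lon1 and lat0 < rlat1 and rlat0 < lat1

    return [t for t in master_tiles
            if t["level"] <= max_coarse_level or overlaps(t["bounds"])]

master = [
    {"level": 2, "bounds": (-180, -90, 180, 90), "bytes": 10_000},
    {"level": 12, "bounds": (-122.5, 37.7, -122.3, 37.9), "bytes": 900_000},
    {"level": 12, "bounds": (2.2, 48.8, 2.4, 49.0), "bytes": 900_000},
]
portable = cut_portable_globe(master, region=(-123.0, 37.0, -122.0, 38.0))
print(sum(t["bytes"] for t in portable), "of", sum(t["bytes"] for t in master), "bytes kept")
```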
  • Patent number: 10922868
    Abstract: Improvements in the graphics processing pipeline that allow multiple pipelines to cooperate to render a single frame are disclosed. Two approaches are provided. In a first approach, world-space pipelines for the different graphics processing pipelines process all work for draw calls received from a central processing unit (CPU). In a second approach, the world-space pipelines divide up the work. Work that is divided is synchronized and redistributed at various points in the world-space pipeline. In either approach, the triangles output by the world-space pipelines are distributed to the screen-space pipelines based on the portions of the render surface overlapped by the triangles. Triangles are rendered by screen-space pipelines associated with the render surface portions overlapped by those triangles.
    Type: Grant
    Filed: June 26, 2019
    Date of Patent: February 16, 2021
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Mangesh P. Nijasure, Todd Martin, Michael Mantor
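A rough sketch of the final distribution step the abstract of patent 10922868 describes: each triangle goes to the screen-space pipeline(s) that own the render-surface portions it overlaps. The tile checkerboard and the bounding-box overlap test below are assumptions standing in for the hardware distribution logic.

```python
# Distributing triangles to screen-space pipelines by overlapped surface tiles (10922868).

TILE = 32                                   # assumed tile size in pixels

def owning_pipeline(tile_x, tile_y, num_pipelines):
    """Checkerboard assignment of render-surface tiles to pipelines."""
    return (tile_x + tile_y) % num_pipelines

def pipelines_for_triangle(verts, num_pipelines=2):
    """verts: three (x, y) screen-space vertices. Returns the set of
    screen-space pipelines whose tiles the triangle's bounding box touches."""
    xs, ys = [v[0] for v in verts], [v[1] for v in verts]
    targets = set()
    for ty in range(int(min(ys)) // TILE, int(max(ys)) // TILE + 1):
        for tx in range(int(min(xs)) // TILE, int(max(xs)) // TILE + 1):
            targets.add(owning_pipeline(tx, ty, num_pipelines))
    return targets

print(pipelines_for_triangle([(5, 5), (20, 10), (10, 25)]))      # fits one tile: {0}
print(pipelines_for_triangle([(5, 5), (120, 10), (60, 90)]))     # spans tiles: {0, 1}
```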
  • Patent number: 10916046
    Abstract: Techniques are disclosed for estimating poses from images. In one embodiment, a machine learning model, referred to herein as the “detector,” is trained to estimate animal poses from images in a bottom-up fashion. In particular, the detector may be trained using rendered images depicting animal body parts scattered over realistic backgrounds, as opposed to renderings of full animal bodies. In order to make appearances of the rendered body parts more realistic so that the detector can be trained to estimate poses from images of real animals, the body parts may be rendered using textures that are determined from a translation of rendered images of the animal into corresponding images with more realistic textures via adversarial learning. Three-dimensional poses may also be inferred from estimated joint locations using, e.g., inverse kinematics.
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: February 9, 2021
    Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
    Inventors: Martin Guay, Dominik Tobias Borer, Ahmet Cengiz Öztireli, Robert W. Sumner, Jakob Joachim Buhmann
  • Patent number: 10902605
    Abstract: An apparatus and method for performing multisampling anti-aliasing. For example, one embodiment of an apparatus samples multiple locations within each pixel of an image frame to generate a plurality of image slices. Each image slice comprises a different set of samples for each of the pixels of the image frame. Anti-aliasing is then performed on the image frame using the image slices by first subdividing the plurality of image slices into equal-sized pixel blocks and determining whether each pixel block has one or more different pixel values in different image slices. If so, then edge detection and simple shape detection are performed using pixel data from a pixel block in a single image slice; if not, then edge detection and simple shape detection are performed using the pixel block in multiple image slices.
    Type: Grant
    Filed: June 21, 2018
    Date of Patent: January 26, 2021
    Assignee: Intel Corporation
    Inventor: Filip Strugar
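A sketch of the slice-comparison step in the abstract of patent 10902605: the frame is stored as several sample slices, and each pixel block is checked for values that differ between slices before the anti-aliasing pass chooses how to process it. The block size and slice layout are assumptions for illustration.

```python
# Per-block comparison of multisample image slices (10902605).

def block_differs_across_slices(slices, x0, y0, block=4):
    """slices: list of 2D pixel arrays (lists of rows), one per sample slice.
    Returns True if any pixel in the block has different values in different
    slices; per the abstract, such blocks take the single-slice detection path
    and uniform blocks take the multi-slice path."""
    first = slices[0]
    for s in slices[1:]:
        for y in range(y0, y0 + block):
            if s[y][x0:x0 + block] != first[y][x0:x0 + block]:
                return True
    return False

# Two 4x8 slices whose right halves disagree (sub-pixel coverage differs there).
slice_a = [[0] * 8 for _ in range(4)]
slice_b = [[0, 0, 0, 0, 1, 1, 1, 1] for _ in range(4)]
print(block_differs_across_slices([slice_a, slice_b], x0=0, y0=0))  # False
print(block_differs_across_slices([slice_a, slice_b], x0=4, y0=0))  # True
```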
  • Patent number: 10902671
    Abstract: A method and system for 3D modeling of a construction or structure based on producing a 3D image from digital 2D images of the construction or structure, transforming the 3D image into a data point cloud representation, selecting a collection of data points, identifying an object of the construction or structure that matches the collection of data points, attaching corresponding visual images to a record of the identified object, and repeating these steps for any collection of data points until completing the construction of a 3D model based on a combination of all identified objects. This 3D modeling is based on photos obtained from digital photographing means mounted on a UAV (Unmanned Aerial Vehicle) and launched to survey the construction or structure.
    Type: Grant
    Filed: November 6, 2019
    Date of Patent: January 26, 2021
    Assignee: MANAM APPLICATIONS LTD.
    Inventors: Gilad Shloosh, Avihu Shagan
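A rough sketch of the match-and-attach loop in the abstract of patent 10902671: take a collection of point cloud data points, compare it against a small catalog of known construction objects, and attach the source photos to the matched object's record. The descriptor (bounding-box extents) and the catalog are illustrative assumptions, not the patented matching method.

```python
# Matching a point cloud cluster to a catalog object and attaching imagery (10902671).

CATALOG = {
    "column": (0.5, 0.5, 3.0),      # rough width, depth, height in meters
    "slab": (6.0, 6.0, 0.3),
}

def extents(points):
    """Bounding-box extents of a collection of (x, y, z) data points."""
    xs, ys, zs = zip(*points)
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))

def identify(points):
    """Return the catalog object whose extents best match the point collection."""
    e = extents(points)
    return min(CATALOG, key=lambda name: sum((a - b) ** 2 for a, b in zip(CATALOG[name], e)))

def build_record(points, photos):
    """One identified object of the 3D model, with its source imagery attached."""
    return {"object": identify(points), "points": len(points), "photos": photos}

cluster = [(0.0, 0.0, 0.0), (0.4, 0.5, 0.1), (0.3, 0.2, 2.9)]
print(build_record(cluster, photos=["uav_0413.jpg", "uav_0414.jpg"]))
```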