Patents Examined by Phong X Nguyen
  • Patent number: 12086376
    Abstract: This application generally relates to defining, displaying and interacting with tags in a 3D model. In an embodiment, a method includes generating, by a system including a processor, a three-dimensional model of an environment based on sets of aligned three-dimensional data captured from the environment, and associating tags with defined locations of the three-dimensional model, wherein the tags are respectively represented by tag icons that are spatially aligned with the defined locations of the three-dimensional model as included in different representations of the three-dimensional model rendered via an interface of a device, wherein the different representations correspond to different perspectives of the three-dimensional model, and wherein selection of the tag icons causes the tags respectively associated therewith to be rendered at the device.
    Type: Grant
    Filed: August 16, 2022
    Date of Patent: September 10, 2024
    Assignee: Matterport, Inc.
    Inventors: James Mildrew, Matthew Tschudy Bell, Dustin Michael Cook, Preston Cowley, Lester Lee, Peter McColgan, Daniel Prochazka, Brian Schulman, James Sundra, Alan Tan
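    Illustrative sketch: one way the spatial alignment of tag icons can be pictured is to anchor each tag at a fixed 3D location in the model and project that location with whichever camera renders the current representation. The Python below is a hedged sketch with invented names, not Matterport's implementation.
      from dataclasses import dataclass
      import numpy as np

      @dataclass
      class Tag:
          label: str            # content shown when the tag icon is selected
          position: np.ndarray  # fixed location in model coordinates, shape (3,)

      def project_tag_icon(tag, view_matrix, proj_matrix):
          """Return 2D screen-space coordinates for the tag icon in the current view."""
          p = np.append(tag.position, 1.0)        # homogeneous model-space point
          clip = proj_matrix @ (view_matrix @ p)  # model -> camera -> clip space
          ndc = clip[:3] / clip[3]                # perspective divide
          return ndc[:2]                          # icon stays aligned with its 3D anchor

      tags = [Tag("HVAC unit", np.array([2.0, 1.5, 0.3]))]
      icon_xy = project_tag_icon(tags[0], np.eye(4), np.eye(4))  # placeholder matrices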
  • Patent number: 12062128
    Abstract: Described herein is a computer-implemented method for simulating texture features of an n-layer target coating, the method including at least the steps of: a) providing known real geometrical properties and known individual ingredients with known real material properties; b) modelling the n-layer target coating in a virtual environment; c) virtually tracing rays of light from one or more light sources towards an aim region defined on a surface of the n-layer target coating; d) virtually collecting rays of light that interacted with the n-layer target coating; e) virtually determining at least one of an angular, a spectral and a spatial distribution of intensity of the rays of light re-emitted from or reflected by the n-layer target coating; and f) evaluating the determined distribution(s) of intensity and outputting, by an output device, at least one image based on the evaluation.
    Type: Grant
    Filed: July 13, 2020
    Date of Patent: August 13, 2024
    Assignee: BASF COATINGS GMBH
    Inventors: Markus Mundus, Thomas Kantimm
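    Illustrative sketch: steps c) through e) can be pictured as a Monte Carlo walk of rays through the layer stack, binning the intensity of rays re-emitted toward the viewer by exit angle. The Python below uses placeholder layer reflectances and absorptions, not BASF's material model.
      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical n-layer coating: per-layer interface reflectance and absorption.
      layers = [
          {"reflectance": 0.04, "absorption": 0.10},  # clearcoat (placeholder values)
          {"reflectance": 0.30, "absorption": 0.25},  # basecoat with effect pigments
          {"reflectance": 0.90, "absorption": 0.00},  # primer / substrate
      ]

      def trace_ray(theta_in):
          """Follow one ray through the layer stack; return (exit_angle, intensity) or None."""
          intensity = 1.0
          for depth, layer in enumerate(layers):
              if rng.random() < layer["reflectance"]:
                  # Reflected back out; attenuate again through the layers above.
                  for upper in layers[:depth]:
                      intensity *= 1.0 - upper["absorption"]
                  exit_angle = theta_in + rng.normal(scale=0.05)  # crude scattering term
                  return exit_angle, intensity
              intensity *= 1.0 - layer["absorption"]  # transmitted, partly absorbed
          return None                                 # absorbed / lost in the substrate

      # Collect the angular distribution of re-emitted intensity (step e).
      hist = np.zeros(32)
      edges = np.linspace(-np.pi / 2, np.pi / 2, 33)
      for _ in range(50_000):
          result = trace_ray(theta_in=np.deg2rad(45))
          if result:
              angle, i = result
              hist[np.clip(np.searchsorted(edges, angle) - 1, 0, 31)] += i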
  • Patent number: 12022357
    Abstract: A method includes causing background content to be displayed on a display device with a first virtual object and a second virtual object; causing augmented reality (AR) content to be rendered based on a location of an AR device relative to the display device; determining that the AR content is in front of the first virtual object in the scene when viewed through the AR device and rendering the background content with a cutout in the first virtual object when the first virtual object overlaps with the AR content; and determining that the AR content is behind the second virtual object in the scene when viewed through the AR device and rendering the AR content with a cutout in the AR content when the AR content overlaps with the second virtual object.
    Type: Grant
    Filed: September 10, 2021
    Date of Patent: June 25, 2024
    Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: John Gaeta, Michael Koperwas, Nicholas Rasmussen
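    Illustrative sketch: assuming per-pixel depths from the AR device's viewpoint are available, the cutout decision reduces to a depth comparison wherever the AR content and a virtual object overlap. Names below are invented for illustration, not Lucasfilm's code.
      import numpy as np

      def occlusion_masks(ar_depth, object_depth, overlap):
          """Decide where to cut holes, per pixel, for one virtual object.

          ar_depth, object_depth: depth from the AR device's viewpoint (smaller = closer).
          overlap: boolean mask of pixels where the AR content and the object overlap.
          Returns (cutout_in_object, cutout_in_ar_content) boolean masks.
          """
          ar_in_front = overlap & (ar_depth < object_depth)   # cut the virtual object
          ar_behind = overlap & (ar_depth >= object_depth)    # cut the AR content
          return ar_in_front, ar_behind

      # Toy 4x4 example: the AR element is closer in the left half, farther in the right.
      ar_depth = np.full((4, 4), 2.0)
      object_depth = np.where(np.arange(4) < 2, 3.0, 1.0) * np.ones((4, 4))
      overlap = np.ones((4, 4), dtype=bool)
      cut_object, cut_ar = occlusion_masks(ar_depth, object_depth, overlap)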
  • Patent number: 12008703
    Abstract: A computer-implemented method of creating a bounding volume hierarchy (BVH) for a model defined with respect to a local coordinate system for the model. The method includes defining BVH branch nodes within the model; establishing a plurality of local transformation matrices for the BVH; and, for each BVH branch node, determining a first bounding volume and associating the branch node with one of the plurality of local transformation matrices that maps between the first bounding volume and a second bounding volume in the local coordinate system.
    Type: Grant
    Filed: July 14, 2021
    Date of Patent: June 11, 2024
    Assignee: Imagination Technologies Limited
    Inventor: Simon Fenney
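    Illustrative sketch: a minimal reading of the data structure, with invented names, stores a compact first bounding volume per branch node plus the index of one shared local transformation matrix that maps it to the second bounding volume in the model's local coordinate system.
      from dataclasses import dataclass
      import numpy as np

      @dataclass
      class BVHBranchNode:
          first_bounds: np.ndarray   # (2, 3) min/max corners of the first bounding volume
          transform_index: int       # which shared local transformation matrix to apply
          children: list

      def to_local_bounds(node, transforms):
          """Map the node's first bounding volume into the model's local coordinate system."""
          mins, maxs = node.first_bounds
          # All 8 corners of the axis-aligned box, as homogeneous points.
          corners = np.array([[x, y, z, 1.0]
                              for x in (mins[0], maxs[0])
                              for y in (mins[1], maxs[1])
                              for z in (mins[2], maxs[2])])
          mapped = corners @ transforms[node.transform_index].T
          pts = mapped[:, :3] / mapped[:, 3:4]
          return np.stack([pts.min(axis=0), pts.max(axis=0)])  # second bounding volume

      transforms = [np.eye(4)]   # the plurality of shared local transformation matrices
      node = BVHBranchNode(np.array([[0., 0., 0.], [1., 1., 1.]]), 0, [])
      second_bounds = to_local_bounds(node, transforms)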
  • Patent number: 12002162
    Abstract: A method for providing virtual contents in a virtual space based on a common coordinate system includes: detecting a base marker for identifying a fixed point of an actual operation space, and a target marker for identifying an actual operation object, from initial image data indicating an initial state of the actual operation object in the actual operation space; calculating an initial model matrix of a target marker expressing an initial position and orientation of the target marker, and a model matrix of the base marker expressing a position and an orientation of the detected base marker in a virtual operation space having a common coordinate system which has the detected base marker as a reference; and calculating a current model matrix of the target marker expressing a current position and orientation of the target marker by using the calculated model matrix of the base marker.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: June 4, 2024
    Assignee: KOREA INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Byoung Hyun Yoo, Yong Jae Lee
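    Illustrative sketch: one plausible reading of the matrix relationship, assuming the target marker keeps its initial pose relative to the fixed base marker, recovers the current model matrix of the target from the current model matrix of the base. Names are illustrative, not the patent's notation.
      import numpy as np

      def current_target_model_matrix(base_model, initial_base_model, initial_target_model):
          """Express the target marker in the common coordinate system anchored at the base marker.

          The target's pose relative to the base marker is assumed unchanged since the
          initial image data, so re-applying that relative pose to the current base
          pose gives the current model matrix of the target marker.
          """
          relative = np.linalg.inv(initial_base_model) @ initial_target_model
          return base_model @ relative

      # Toy example: identity initial poses, base marker translated in the current frame.
      initial_base = np.eye(4)
      initial_target = np.eye(4); initial_target[:3, 3] = [0.2, 0.0, 0.1]
      current_base = np.eye(4); current_base[:3, 3] = [1.0, 0.0, 0.0]
      current_target = current_target_model_matrix(current_base, initial_base, initial_target)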
  • Patent number: 11922567
    Abstract: The present invention facilitates efficient and effective image processing. A network can comprise: a first system configured to perform a first portion of lighting calculations for an image and combine results of the first portion of lighting calculations for the image with results of a second portion of lighting calculations; and a second system configured to perform the second portion of lighting calculations and forward the results of the second portion of the lighting calculations to the first system. The first and second portions of lighting calculations can be associated with indirect lighting calculations and direct lighting calculations, respectively. The first system can be a client in a local location and the second system can be a server in a remote location (e.g., a cloud computing environment). The first system and second system can also both be in a cloud, with a video transmitted to a local system.
    Type: Grant
    Filed: April 4, 2022
    Date of Patent: March 5, 2024
    Assignee: NVIDIA Corporation
    Inventors: Morgan McGuire, Cyril Crassin, David Luebke, Michael Mara, Brent Oster, Peter Shirley, Peter-Pike Sloan, Christopher Wyman
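    Illustrative sketch: the combining step on the first system, assuming both portions arrive as per-pixel radiance buffers and are composited additively; the names and the additive rule are assumptions, not NVIDIA's implementation.
      import numpy as np

      def combine_lighting(indirect, direct):
          """First system: combine its indirect-lighting result with the direct-lighting
          result forwarded by the second (remote) system."""
          return indirect + direct   # simple additive composite of the two portions

      # Placeholder buffers standing in for the two portions of the lighting calculations.
      height, width = 720, 1280
      indirect_portion = np.zeros((height, width, 3)) + 0.1   # computed locally (client)
      direct_portion = np.zeros((height, width, 3)) + 0.5     # received from the server
      frame = np.clip(combine_lighting(indirect_portion, direct_portion), 0.0, 1.0)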
  • Patent number: 11922556
    Abstract: Apparatuses, systems, and techniques to render images. In at least one embodiment, at least one visibility parameter determined for a first image region is reused for a different second image region that neighbors the first image region (e.g., spatially and/or temporally).
    Type: Grant
    Filed: April 12, 2021
    Date of Patent: March 5, 2024
    Assignee: NVIDIA CORPORATION
    Inventor: Alexey Yuryevich Panteleev
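    Illustrative sketch: under obvious simplifications, visibility is determined for one image region and a neighboring region borrows it instead of recomputing (spatially here; the same shape works temporally against the previous frame). Names are invented for illustration.
      def visibility_with_reuse(regions, compute_visibility):
          """Compute visibility for every other region and reuse it for the neighbor.

          regions: list of image regions (any per-region payload).
          compute_visibility: callable doing the expensive visibility determination.
          """
          visibility = [None] * len(regions)
          for i, region in enumerate(regions):
              if i % 2 == 0:
                  visibility[i] = compute_visibility(region)   # determined for this region
              else:
                  visibility[i] = visibility[i - 1]            # reused from the neighbor
          return visibility

      vis = visibility_with_reuse(list(range(8)), compute_visibility=lambda r: r / 8.0)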
  • Patent number: 11887236
    Abstract: An OSS animated display system for an interventional device (40) including an integration of one or more optical shape sensors and one or more interventional tools. The OSS animated display system employs a monitor (121) and a display controller (110) for controlling a real-time display on the monitor (121) of an animation of a spatial positional relationship between the OSS interventional device (40) and an object (50). The display controller (110) derives the animation of the spatial positional relationship between the OSS interventional device (40) and the object (50) from a shape of the optical shape sensor(s).
    Type: Grant
    Filed: December 29, 2018
    Date of Patent: January 30, 2024
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Paul Thienphrapa, Neriman Nicoletta Kahya, Olivier Pierre Nempont, Pascal Yves François Cathier, Molly Lara Flexman, Torre Michelle Bydlon, Raoul Florent
  • Patent number: 11880943
    Abstract: The data set receiving unit 13 of the information processing apparatus 1 of an aspect example receives a data set that includes at least BIM data. The route setting processor 151 sets a route, which is arranged inside and/or outside a virtual building represented by the BIM data, based on the data set received. The virtual image set generating processor 152 generates a virtual image set of the virtual building along the route, based on the received data set and the set route. The inference model creating processor 153 creates an inference model by applying machine learning with training data that includes at least the generated virtual image set to a neural network. The inference model created is used to identify data of a building material from data acquired by measuring a building.
    Type: Grant
    Filed: April 25, 2022
    Date of Patent: January 23, 2024
    Assignee: TOPCON CORPORATION
    Inventors: Yasufumi Fukuma, Satoshi Yanobe
  • Patent number: 11783524
    Abstract: A method for providing visual sequences using one or more images comprising: receiving one or more person images showing at least one face; receiving a message to be enacted by the person, wherein the message comprises at least a text or an emotional and movement command; processing the message to extract or receive audio data related to the voice of the person, and facial movement data related to the expression to be carried on the face of the person; processing the image(s), the audio data, and the facial movement data; and generating an animation of the person enacting the message. The emotional and movement command is a GUI- or multimedia-based instruction to invoke the generation of facial expression(s) and/or body part movement(s).
    Type: Grant
    Filed: February 10, 2017
    Date of Patent: October 10, 2023
    Inventor: Nitin Vats
  • Patent number: 11756255
    Abstract: The accelerating structure for hybrid ray tracing is characterized by high locality, wherein scene changes are updated locally in one of its hierarchies without affecting other locations in the structure. Reconstructions of prior-art accelerating structures are replaced by low-cost updates. The efficiency of traversals is improved by a double-step traversal.
    Type: Grant
    Filed: August 25, 2021
    Date of Patent: September 12, 2023
    Assignee: Snap Inc.
    Inventors: Reuven Bakalash, Ron Weitzman
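    Illustrative sketch: when a leaf's geometry changes, only the bounding boxes on the path from that leaf up to the root are refit, leaving the rest of the accelerating structure untouched. The structures and names below are invented for illustration, not Snap's implementation.
      from dataclasses import dataclass, field
      import numpy as np

      @dataclass
      class Node:
          bounds: np.ndarray                 # (2, 3) min/max corners
          parent: "Node | None" = None
          children: list = field(default_factory=list)

      def update_locally(leaf, new_bounds):
          """Low-cost local update: refit ancestors of the changed leaf only."""
          leaf.bounds = new_bounds
          node = leaf.parent
          while node is not None:            # walk one hierarchy path up to the root
              mins = np.min([c.bounds[0] for c in node.children], axis=0)
              maxs = np.max([c.bounds[1] for c in node.children], axis=0)
              node.bounds = np.stack([mins, maxs])
              node = node.parent             # siblings and other subtrees are untouched

      root = Node(np.array([[0., 0., 0.], [2., 2., 2.]]))
      leaf = Node(np.array([[0., 0., 0.], [1., 1., 1.]]), parent=root)
      root.children = [leaf]
      update_locally(leaf, np.array([[0., 0., 0.], [3., 1., 1.]]))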
  • Patent number: 11736756
    Abstract: A method for providing visual sequences using one or more images comprising: receiving one or more person images showing at least one face; using human body information to identify the requirement of the other body part(s); receiving at least one image or photograph of other human body part(s) based on the identified requirement; processing the image(s) of the person with the image(s) of other human body part(s) using the human body information to generate a body model of the person, the body model comprising the face of the person; receiving a message to be enacted by the person, wherein the message comprises at least a text or an emotional and movement command; processing the message to extract or receive audio data related to the voice of the person, and facial movement data related to the expression to be carried on the face of the person; processing the body model, the audio data, and the facial movement data; and generating an animation of the body model of the person enacting the message, wherein emotional and movement
    Type: Grant
    Filed: February 10, 2017
    Date of Patent: August 22, 2023
    Inventor: Nitin Vats
  • Patent number: 11734805
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that utilize context-aware sensors and multi-dimensional gesture inputs across a digital image to generate enhanced digital images. In particular, the disclosed systems can provide a dynamic sensor over a digital image within a digital enhancement user interface (e.g., a user interface without visual elements for modifying parameter values). In response to selection of a sensor location, the disclosed systems can determine one or more digital image features at the sensor location. Based on these features, the disclosed systems can select and map parameters to movement directions. Moreover, the disclosed systems can identify a user input gesture comprising movements in one or more directions across the digital image. Based on the movements and the one or more features at the sensor location, the disclosed systems can modify parameter values and generate an enhanced digital image.
    Type: Grant
    Filed: September 8, 2021
    Date of Patent: August 22, 2023
    Assignee: Adobe Inc.
    Inventors: Gregg Wilensky, Mark Nichoson, Edward Wright
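    Illustrative sketch: features detected at the selected sensor location choose which parameters map to the horizontal and vertical movement directions, and gesture movements then adjust those parameter values. The features, parameter names, and scaling below are invented, not Adobe's.
      def select_parameter_mapping(features_at_sensor):
          """Map movement directions to parameters based on features at the sensor location."""
          if features_at_sensor.get("is_sky"):
              return {"horizontal": "saturation", "vertical": "exposure"}
          if features_at_sensor.get("is_skin"):
              return {"horizontal": "warmth", "vertical": "smoothness"}
          return {"horizontal": "contrast", "vertical": "brightness"}

      def apply_gesture(parameter_values, mapping, dx, dy, sensitivity=0.01):
          """Modify parameter values from a multi-dimensional gesture across the image."""
          updated = dict(parameter_values)
          updated[mapping["horizontal"]] = updated.get(mapping["horizontal"], 0.0) + dx * sensitivity
          updated[mapping["vertical"]] = updated.get(mapping["vertical"], 0.0) + dy * sensitivity
          return updated

      mapping = select_parameter_mapping({"is_sky": True})
      values = apply_gesture({}, mapping, dx=120, dy=-40)   # drag right and slightly up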
  • Patent number: 11710276
    Abstract: In one implementation, a method for improved motion planning. The method includes: obtaining a macro task for a virtual agent within a virtual environment; generating a search-tree based on at least one of the macro task, a state of the virtual environment, and a state of the virtual agent, wherein the search-tree includes a plurality of task nodes corresponding to potential tasks for performance by the virtual agent in furtherance of the macro task; and determining physical motion plans (PMPs) for at least some of the plurality of task nodes within the search-tree in order to generate a lookahead planning gradient for the first time, wherein a granularity of a PMP for a respective task node in the first search-tree is a function of the temporal distance of the respective task node from the first time.
    Type: Grant
    Filed: June 25, 2021
    Date of Patent: July 25, 2023
    Assignee: Apple Inc.
    Inventors: Daniel Laszlo Kovacs, Siva Chandra Mouli Sivapurapu, Payal Jotwani, Noah Jonathan Gamboa
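    Illustrative sketch: one way to read the lookahead planning gradient is that plan granularity decays with a task node's temporal distance from the first time; the decay rule below is an invented stand-in, not Apple's function.
      def pmp_granularity(temporal_distance, max_steps=64, min_steps=2):
          """Granularity of a physical motion plan as a function of temporal distance.

          Task nodes near the current time get finely resolved plans; nodes far in the
          future get coarse placeholders (the decay rule here is an illustrative choice).
          """
          steps = int(max_steps / (1.0 + temporal_distance))
          return max(min_steps, steps)

      # Nodes deeper in the search-tree (further from the first time) get coarser plans.
      lookahead_gradient = [pmp_granularity(t) for t in range(6)]   # [64, 32, 21, 16, 12, 10]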
  • Patent number: 11704859
    Abstract: A graphics processing unit (GPU) includes one or more processor cores adapted to execute a software-implemented shader program, and one or more hardware-implemented ray tracing units (RTU) adapted to traverse an acceleration structure to calculate intersections of rays with bounding volumes and graphics primitives. The RTU implements traversal logic to traverse the acceleration structure, stack management, and other tasks to relieve burden on the shader, communicating intersections to the shader which then calculates whether the intersection hit a transparent or opaque portion of the object intersected. Thus, one or more processing cores within the GPU perform accelerated ray tracing by offloading aspects of processing to the RTU, which traverses the acceleration structure within which the 3D environment is represented.
    Type: Grant
    Filed: August 20, 2020
    Date of Patent: July 18, 2023
    Assignee: Sony Interactive Entertainment LLC
    Inventor: Mark Evan Cerny
  • Patent number: 11672602
    Abstract: A method for determining surgical port placement for minimally invasive surgery. Based on received measurements, an instance of a parametric torso model is determined that defines an external surface and a visceral surface, each having a dome shape that takes an insufflation effect into account. Normalized surgical target locations in the parametric torso model are determined in response to an identification of a surgical procedure, and are mapped to un-normalized surgical target locations. Permissible port locations on the instance of the parametric torso model are computed based on the characteristics of a surgical tool and on the un-normalized surgical target locations. Other aspects are also described and claimed.
    Type: Grant
    Filed: June 5, 2020
    Date of Patent: June 13, 2023
    Assignee: Verb Surgical Inc.
    Inventors: David R. Monteverde, Danyal Fer, Andrew Bzostek
  • Patent number: 11663775
    Abstract: Methods, systems, and computer storage media are provided for generating physical-based materials for rendering digital objects with an appearance of a real-world material. Images depicting the real-world material, including diffuse component images and specular component images, are captured using different lighting patterns, which may include area lights. From the captured images, approximations of one or more material maps are determined using a photometric stereo technique. Based on the approximations and the captured images, a neural network system generates a set of material maps, such as a diffuse albedo material map, a normal material map, a specular albedo material map, and a roughness material map. The material maps from the neural network may be optimized based on a comparison of the input images of the real-world material and images rendered from the material maps.
    Type: Grant
    Filed: April 19, 2021
    Date of Patent: May 30, 2023
    Assignee: Adobe, Inc.
    Inventors: Akshat Dave, Kalyan Krishna Sunkavalli, Yannick Hold-Geoffroy, Milos Hasan
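    Illustrative sketch: the photometric stereo approximation step can be pictured as the textbook Lambertian least-squares recovery of per-pixel normals and diffuse albedo from images under known light directions; this is a generic sketch, not Adobe's network or its area-light patterns.
      import numpy as np

      def photometric_stereo(images, light_dirs):
          """Approximate normal and diffuse albedo maps from images under known lights.

          images: (m, h, w) intensities for m lighting conditions.
          light_dirs: (m, 3) unit light directions.
          Solves I = L @ (albedo * normal) per pixel in the least-squares sense.
          """
          m, h, w = images.shape
          intensities = images.reshape(m, -1)                           # (m, h*w)
          g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)  # (3, h*w)
          albedo = np.linalg.norm(g, axis=0)                            # approximate albedo map
          normals = g / np.maximum(albedo, 1e-8)                        # approximate normal map
          return normals.T.reshape(h, w, 3), albedo.reshape(h, w)

      # Placeholder captures; the approximations (plus the images) would then seed the
      # neural network that produces the full set of material maps described above.
      imgs = np.random.rand(4, 8, 8)
      L = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1], [-1, 0, 1]], dtype=float)
      L /= np.linalg.norm(L, axis=1, keepdims=True)
      normal_map, albedo_map = photometric_stereo(imgs, L)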
  • Patent number: 11660142
    Abstract: A method for creating surgical simulation information by a computer includes creating a virtual body model corresponding to a body state of a patient for surgery, simulating a specific surgical process on the virtual body model to obtain virtual surgical data, dividing the virtual surgical data into minimum surgical operation units, each unit representing one specific operation, and creating cue sheet data composed of the minimum surgical operation units, wherein the cue sheet data represents the specific surgical process.
    Type: Grant
    Filed: June 26, 2020
    Date of Patent: May 30, 2023
    Assignee: HUTOM CO., LTD.
    Inventors: Jong Hyuck Lee, Woo Jin Hyung, Hoon Mo Yang, Ho Seung Kim
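    Illustrative sketch: the cue sheet data can be pictured as an ordered list of minimum surgical operation units, each describing one specific operation; the field names below are invented, not the patent's schema.
      from dataclasses import dataclass

      @dataclass
      class MinimumSurgicalOperationUnit:
          instrument: str       # e.g. "grasper" (illustrative fields)
          action: str           # the one specific operation this unit represents
          target_tissue: str
          duration_s: float

      @dataclass
      class CueSheet:
          procedure: str
          units: list           # ordered minimum operation units composing the surgical process

      cue_sheet = CueSheet(
          procedure="simulated gastrectomy step",
          units=[
              MinimumSurgicalOperationUnit("grasper", "retract", "stomach wall", 4.0),
              MinimumSurgicalOperationUnit("stapler", "staple", "greater curvature", 2.5),
          ],
      )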
  • Patent number: 11645801
    Abstract: A method for synthesizing a figure of a virtual object includes: obtaining a figure image of the virtual object, and original face images corresponding to a speech segment; extracting a first face key point of the face of the virtual object, and a second face key point of each of the original face images; processing the first face key point to generate position and posture information of a first three-dimensional (3D) face; processing each second face key point to generate vertex information of a second 3D face; generating a target face image corresponding to each original face image based on the position and posture information of the first 3D face and the vertex information of each second 3D face; and synthesizing a speaking figure segment of the virtual object, corresponding to the speech segment, based on the figure image of the virtual object and each target face image.
    Type: Grant
    Filed: June 15, 2021
    Date of Patent: May 9, 2023
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Hanqi Guo, Tianshu Hu, Mingming Ma, Zhibin Hong
  • Patent number: 11645809
    Abstract: Systems and methods for user selection of a virtual object in a virtual scene. A user input may be received via a user input device. The user input may be an attempt to select a virtual object from a plurality of virtual objects rendered in a virtual scene on a display of a display system. A position and orientation of the user input device may be determined in response to the first user input. A probability that the user input selects each virtual object may be calculated via a probability model. Based on the position and orientation of the user input device, a ray-cast procedure and a sphere-cast procedure may be performed to determine the virtual object being selected. The probability of selection may also be considered in determining the virtual object. A virtual beam may be rendered from the user input device to the virtual object.
    Type: Grant
    Filed: March 2, 2021
    Date of Patent: May 9, 2023
    Assignee: zSpace, Inc.
    Inventors: Jonathan J. Hosenpud, Clifford S. Champion, David A. Chavez, Kevin S. Yamada, Alexandre R. Lelievre
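    Illustrative sketch: a ray cast from the input device's position and orientation is tested against each object, a wider sphere-cast catches near misses, and the probability model weights the final choice; the scoring rule and names below are assumptions, not zSpace's method.
      import numpy as np

      def ray_point_distance(origin, direction, point):
          """Shortest distance from a ray to a point (direction assumed unit length)."""
          to_point = point - origin
          t = max(np.dot(to_point, direction), 0.0)
          return np.linalg.norm(to_point - t * direction)

      def pick_virtual_object(origin, direction, objects, probabilities, sphere_radius=0.15):
          """Choose the selected object from ray-cast, sphere-cast, and selection probability."""
          best, best_score = None, 0.0
          for obj, prob in zip(objects, probabilities):
              d = ray_point_distance(origin, direction, obj["center"])
              if d <= obj["radius"]:                     # ray-cast: direct hit
                  geometric = 1.0
              elif d <= obj["radius"] + sphere_radius:   # sphere-cast: near miss still counts
                  geometric = 0.5
              else:
                  continue
              score = geometric * prob                   # weight by the probability model
              if score > best_score:
                  best, best_score = obj, score
          return best

      objects = [{"name": "cube", "center": np.array([0., 0., -1.]), "radius": 0.1},
                 {"name": "sphere", "center": np.array([0.2, 0., -1.]), "radius": 0.1}]
      picked = pick_virtual_object(np.zeros(3), np.array([0., 0., -1.]), objects, [0.7, 0.3])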