Patents Examined by Matthew Salvucci
  • Patent number: 11961202
    Abstract: Disclosed is an editing system for postprocessing three-dimensional (“3D”) image data to realistically recreate the effects associated with viewing or imaging a represented scene with different camera settings or lenses. The system receives an original image and an edit command specifying a camera setting or a camera lens. The system associates the selection with multiple image adjustments. The system performs a first of the multiple image adjustments on a first set of 3D image data from the original image in response to the first set of 3D image data satisfying specific positional or non-positional values defined for the first image adjustment, and performs a second of the multiple image adjustments on a second set of 3D image data from the original image in response to the second set of 3D image data satisfying specific positional or non-positional values defined for the second image adjustment.
    Type: Grant
    Filed: August 22, 2023
    Date of Patent: April 16, 2024
    Assignee: Illuscio, Inc.
    Inventors: Max Good, Joseph Bogacz
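The selective-adjustment idea in this abstract can be illustrated with a small sketch. This is not Illuscio's implementation; the data layout, depth criterion, and brightness factors below are all hypothetical, chosen only to show two different adjustments being applied to two subsets of 3D image data that satisfy different positional values:

```python
# Hypothetical sketch: apply different adjustments to subsets of 3D points
# depending on whether they satisfy a positional (depth) criterion,
# loosely mimicking a depth-of-field style lens edit.

def apply_camera_edit(points, focal_depth, tolerance=1.0):
    """Each point is (x, y, z, brightness). Points near the focal plane
    get one adjustment (brightness boost); points far from it get
    another (dimming)."""
    edited = []
    for x, y, z, brightness in points:
        if abs(z - focal_depth) <= tolerance:   # first adjustment's positional criterion
            edited.append((x, y, z, brightness * 1.2))
        else:                                   # second adjustment's positional criterion
            edited.append((x, y, z, brightness * 0.5))
    return edited

scene = [(0, 0, 5.0, 1.0), (1, 0, 9.0, 1.0)]
result = apply_camera_edit(scene, focal_depth=5.0)
```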
  • Patent number: 11954814
    Abstract: A computer graphics production control system is configured to generate scenes (including three-dimensional, deformable characters (“3DD characters”)) that can be manipulated to produce still images and/or animated videos. Such control systems may utilize 3DD characters that are controlled by a series of control points that are positioned and/or moved under the control of the artist. Body characteristics of 3DD characters are modeled as a series of inter-related points (e.g., skin triangles) that can be manipulated under the control of the model and the reference points (e.g., bones) of the body.
    Type: Grant
    Filed: February 17, 2023
    Date of Patent: April 9, 2024
    Assignee: Wombat Studio, Inc.
    Inventors: Tianxin Dai, Aric G. S. Bartle, Alexis R. Haraux
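Bone-driven skin points of the kind this abstract describes are commonly realized with linear blend skinning. The 2D sketch below is a generic illustration of that technique, not Wombat Studio's system; the bones, weights, and vertices are invented for the example:

```python
import numpy as np

# Minimal linear-blend-skinning sketch: skin vertices follow weighted
# combinations of per-bone rigid transforms (2D rotation + translation here).

def skin_vertices(vertices, bone_transforms, weights):
    """vertices: (N, 2); bone_transforms: list of (R (2,2), t (2,));
    weights: (N, B) binding each vertex to each bone."""
    out = np.zeros_like(vertices)
    for b, (R, t) in enumerate(bone_transforms):
        out += weights[:, b:b + 1] * (vertices @ R.T + t)
    return out

verts = np.array([[1.0, 0.0], [2.0, 0.0]])
identity = (np.eye(2), np.zeros(2))
shifted = (np.eye(2), np.array([0.0, 1.0]))   # second bone moved up by 1
w = np.array([[1.0, 0.0], [0.0, 1.0]])        # vertex 0 -> bone 0, vertex 1 -> bone 1
posed = skin_vertices(verts, [identity, shifted], w)
```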
  • Patent number: 11925310
    Abstract: A method for generating and updating a three-dimensional representation of a surgical site based on imaging data from an imaging system is disclosed. The method comprises the steps of generating a first image of the surgical site based on structured electromagnetic radiation emitted from the imaging system, receiving a second image of the surgical site, aligning the first image and the second image, generating a three-dimensional representation of the surgical site based on the first image and the second image as aligned, displaying the three-dimensional representation on a display screen, receiving a user selection to manipulate the three-dimensional representation, and updating the three-dimensional representation as displayed on the display screen from a first state to a second state according to the received user selection.
    Type: Grant
    Filed: June 15, 2021
    Date of Patent: March 12, 2024
    Assignee: Cilag GmbH International
    Inventors: Frederick E. Shelton, IV, Jason L. Harris, Daniel J. Mumaw, Kevin M. Fiebig
  • Patent number: 11922645
    Abstract: Disclosed is a system and method for operating an imaging system. The imaging system may move or be moved to acquire image data of a subject at different positions relative to the subject. The image data may, thereafter, be combined to form a single image.
    Type: Grant
    Filed: March 18, 2021
    Date of Patent: March 5, 2024
    Assignee: Medtronic Navigation, Inc.
    Inventors: Xavier Tomas Fernandez, Andre Souza, Robert A. Simpson, Kyo C. Jin, Hong Li, Xiaodong Tao, Patrick A. Helm, Michael P. Marrama
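Combining image data acquired at different positions into a single image can be sketched in one dimension. This is only an illustration of the general idea (place each acquisition at its known offset and average where acquisitions overlap), not Medtronic's reconstruction method:

```python
# Illustrative sketch: merge two overlapping image strips acquired at
# different positions, averaging pixels where the strips overlap.

def combine_strips(strip_a, strip_b, offset):
    """strip_a, strip_b: lists of pixel values; strip_b starts `offset`
    pixels into the combined image."""
    length = max(len(strip_a), offset + len(strip_b))
    total = [0.0] * length
    count = [0] * length
    for i, v in enumerate(strip_a):
        total[i] += v
        count[i] += 1
    for i, v in enumerate(strip_b):
        total[offset + i] += v
        count[offset + i] += 1
    return [t / c for t, c in zip(total, count)]

merged = combine_strips([1.0, 1.0, 3.0], [5.0, 2.0, 2.0], offset=2)
```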
  • Patent number: 11922593
    Abstract: Methods are disclosed for generating a training dataset of concealed shapes and corresponding unveiled shapes of a body for training a neural network. These methods may include generating, with the aid of computing means, a first dataset comprising a plurality of first surface representations representative of a plurality of bare shapes of a plurality of bodies. The plurality of bare shapes are concealed virtually by means of a computer-implemented program in order to obtain a plurality of simulated concealed shapes of the plurality of bodies. The plurality of simulated concealed shapes are applied to a scanning simulator, the scanning simulator generating a second dataset comprising a plurality of second surface representations representative of the plurality of simulated concealed shapes.
    Type: Grant
    Filed: July 10, 2020
    Date of Patent: March 5, 2024
    Assignees: VRIJE UNIVERSITEIT BRUSSEL, Treedy's SPRL
    Inventors: Pengpeng Hu, Adrian Munteanu, Nourbakhsh Nastaran, Stephan Sturges
  • Patent number: 11915369
    Abstract: Apparatus and method for box-box testing. For example, one embodiment of a processor comprises: a bounding volume hierarchy (BVH) generator to construct a BVH comprising a plurality of hierarchically arranged BVH nodes; traversal circuitry to traverse query boxes through the BVH, the traversal circuitry to read a BVH node from a top of a BVH node stack and to read a query box from a local storage or memory, the traversal circuitry further comprising: box-box testing circuitry and/or logic to compare maximum and minimum X, Y, and Z coordinates of the BVH node and the query box and to generate an overlap indication if overlap is detected for each of the X, Y, and Z dimensions; distance determination circuitry and/or logic to generate a distance value representing an extent of overlap between the BVH node and the query box; and sorting circuitry and/or logic to sort the BVH node within a set of one or more additional BVH nodes based on the distance value.
    Type: Grant
    Filed: March 15, 2020
    Date of Patent: February 27, 2024
    Assignee: Intel Corporation
    Inventors: Karthik Vaidyanathan, Carsten Benthin, Sven Woop
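The box-box test this abstract describes can be sketched in software. The patent claims dedicated circuitry; the code below is only an illustrative software analogue, and the "extent of overlap" metric used for sorting is a hypothetical choice: axis-aligned boxes overlap iff their intervals overlap on every one of the X, Y, and Z axes, and overlapping nodes are then sorted by an overlap-derived value.

```python
# Illustrative software sketch of a BVH box-box test: per-axis interval
# overlap check, an overlap-extent value, and distance-based sorting.

def boxes_overlap(a_min, a_max, b_min, b_max):
    """Axis-aligned boxes overlap iff their intervals overlap on every axis."""
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

def overlap_extent(a_min, a_max, b_min, b_max):
    """Product of per-axis interval overlaps; one possible overlap metric."""
    extent = 1.0
    for i in range(3):
        extent *= max(0.0, min(a_max[i], b_max[i]) - max(a_min[i], b_min[i]))
    return extent

q_min, q_max = (0.0, 0.0, 0.0), (2.0, 2.0, 2.0)          # query box
nodes = [((1.0, 1.0, 1.0), (3.0, 3.0, 3.0)),             # BVH node boxes
         ((1.5, 0.0, 0.0), (2.5, 2.0, 2.0))]
hits = [n for n in nodes if boxes_overlap(n[0], n[1], q_min, q_max)]
hits.sort(key=lambda n: -overlap_extent(n[0], n[1], q_min, q_max))
```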
  • Patent number: 11900582
    Abstract: Estimating a material property parameter of fabric involves receiving information including a three-dimensional (3D) contour shape of fabric placed over a 3D geometric object, estimating a material property parameter of the fabric used for representing drape shapes of 3D clothes made by the fabric by applying the information to a trained artificial neural network, and providing the material property parameter of the fabric.
    Type: Grant
    Filed: September 6, 2021
    Date of Patent: February 13, 2024
    Assignee: CLO VIRTUAL FASHION INC.
    Inventors: Myung Geol Choi, Eun Jung Ju
  • Patent number: 11887209
    Abstract: A computer-implemented method for generating a 2D or 3D object, including training an autoencoder on a first set of training data to identify a first set of latent variables and generate a first set of output data; training an hourglass predictor on a second set of training data, where the hourglass predictor encoder converts a set of related but different training input data to a second set of latent variables, which decode into a second set of output data of the same type as the first set of output data; and using the hourglass predictor to predict a 2D or 3D object of the same type as the first set of output data based on a 2D or 3D object of the same type as the second set of input data.
    Type: Grant
    Filed: February 25, 2020
    Date of Patent: January 30, 2024
    Assignee: 3SHAPE A/S
    Inventors: Jens Peter Träff, Jens Christian Jørgensen, Alejandro Alonso Diaz, Mathias Bøgh Stokholm, Asger Vejen Hoedt
  • Patent number: 11887235
    Abstract: A method includes receiving a first facial framework and a first captured image of a face. The first facial framework corresponds to the face at a first frame and includes a first facial mesh of facial information. The method also includes projecting the first captured image onto the first facial framework and determining a facial texture corresponding to the face based on the projected first captured image. The method also includes receiving a second facial framework at a second frame that includes a second facial mesh of facial information and updating the facial texture based on the received second facial framework. The method also includes displaying the updated facial texture as a three-dimensional avatar. The three-dimensional avatar corresponds to a virtual representation of the face.
    Type: Grant
    Filed: November 23, 2022
    Date of Patent: January 30, 2024
    Assignee: Google LLC
    Inventors: Tarek Hefny, Nicholas Reiter, Brandon Young, Arun Kandoor, Dillon Cower
  • Patent number: 11887289
    Abstract: A system and method of obtaining an occlusion key using a background pixel map is disclosed. A target image containing a target location suitable for displaying a virtual augmentation is obtained. A stream of current images is transformed into a stationary stream having the camera pose of the target image. These are segmented using a trained neural network. The background pixel map is then the color values of background pixels found at each position within the target location. An occlusion key for a new current image is obtained by first transforming it to conform to the target image and then comparing each pixel in the target location with the color values of background pixels in the background pixel map. The occlusion key is then transformed back to conform to the current image and used for virtual augmentation of the current image.
    Type: Grant
    Filed: May 15, 2023
    Date of Patent: January 30, 2024
    Inventors: Oran Gilad, Samuel Chenillo, Oren Steinfeld
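The per-pixel comparison step of the occlusion key can be illustrated with a toy example. This omits the pose transforms and neural-network segmentation from the patented pipeline, and the grayscale values and tolerance are hypothetical; it shows only the idea that a pixel matching the stored background color is background, and any other pixel occludes the augmentation:

```python
# Illustrative occlusion-key sketch: compare each current pixel against the
# background pixel map; mismatches are foreground (occluding) pixels.

def occlusion_key(current, background_map, tolerance=10):
    """current, background_map: 2D lists of grayscale values.
    Returns a mask where 1 marks a foreground (occluding) pixel."""
    mask = []
    for row_c, row_b in zip(current, background_map):
        mask.append([0 if abs(c - b) <= tolerance else 1
                     for c, b in zip(row_c, row_b)])
    return mask

bg = [[100, 100], [100, 100]]
frame = [[102, 200], [100, 98]]   # one pixel occluded by a foreground object
key = occlusion_key(frame, bg)
```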
  • Patent number: 11880927
    Abstract: A three-dimensional (3D) object reconstruction neural network system learns to predict a 3D shape representation of an object from a video that includes the object. The 3D reconstruction technique may be used for content creation, such as generation of 3D characters for games, movies, and 3D printing. When 3D characters are generated from video, the content may also include motion of the character, as predicted based on the video. The 3D object construction technique exploits temporal consistency to reconstruct a dynamic 3D representation of the object from an unlabeled video. Specifically, an object in a video has a consistent shape and consistent texture across multiple frames. Texture, base shape, and part correspondence invariance constraints may be applied to fine-tune the neural network system. The reconstruction technique generalizes well—particularly for non-rigid objects.
    Type: Grant
    Filed: May 19, 2023
    Date of Patent: January 23, 2024
    Assignee: NVIDIA Corporation
    Inventors: Xueting Li, Sifei Liu, Kihwan Kim, Shalini De Mello, Jan Kautz
  • Patent number: 11868402
    Abstract: Systems and methods that provide visualization of networks. Data is input into a table structure that represents any hierarchy of entities, relationships and their attributes. The content of the table is processed to extract the entities, relationships and their attributes. These are turned into nodes, edges and a visual representation of their attributes using color gradients, categorical colors, shapes, thickness, text labels, etc.
    Type: Grant
    Filed: February 11, 2020
    Date of Patent: January 9, 2024
    Assignee: Kinaxis Inc.
    Inventors: Jeremie Boudin, Rishad Khan, Ivy Blackmore, Andrew Dunbar
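Extracting entities and relationships from a table into nodes and edges can be sketched simply. The column names and attributes below are hypothetical, not Kinaxis's schema; the point is only the table-to-graph step, after which attributes such as weight could drive visual properties like edge thickness:

```python
# Illustrative sketch: turn rows of a relationship table into graph
# nodes and attributed edges.

def table_to_graph(rows):
    """rows: dicts with 'source', 'target', and a 'weight' attribute.
    Returns (nodes, edges); edge attributes could map to visuals."""
    nodes = set()
    edges = []
    for row in rows:
        nodes.add(row["source"])
        nodes.add(row["target"])
        edges.append((row["source"], row["target"], {"weight": row["weight"]}))
    return sorted(nodes), edges

table = [
    {"source": "plant", "target": "warehouse", "weight": 3},
    {"source": "warehouse", "target": "store", "weight": 1},
]
nodes, edges = table_to_graph(table)
```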
  • Patent number: 11869347
    Abstract: The traffic monitoring system is provided with: a camera which captures an image of a monitoring area including a road and generates image data; a millimeter-wave radar which scans a scanning area included in the monitoring area and generates millimeter-wave data; and an information processing server which is connected to the camera and the millimeter-wave radar and acquires the image data and the millimeter-wave data. The information processing server is provided with: a data synchronization unit which synchronizes the image data with the millimeter-wave data so that the difference between a timing at which the image data is generated and a timing at which the millimeter-wave data is generated is equal to or smaller than a certain value; and a screen generation unit which associates the synchronized image data and millimeter-wave data with each other and generates a monitoring screen that indicates road conditions.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: January 9, 2024
    Assignee: PANASONIC HOLDINGS CORPORATION
    Inventors: Yoji Yokoyama, Makoto Yasugi
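The synchronization-unit behavior can be sketched as timestamp matching. The timestamps and threshold below are invented for illustration, not Panasonic's design: each camera frame is paired with the nearest radar sample, and the pair is kept only when the timing difference is at or below the threshold.

```python
# Illustrative synchronization sketch: pair each camera timestamp with the
# nearest radar timestamp, keeping pairs within a maximum time difference.

def synchronize(image_ts, radar_ts, max_diff=0.05):
    pairs = []
    for t_img in image_ts:
        t_radar = min(radar_ts, key=lambda t: abs(t - t_img))
        if abs(t_radar - t_img) <= max_diff:
            pairs.append((t_img, t_radar))
    return pairs

camera = [0.00, 0.10, 0.20]      # seconds
radar = [0.01, 0.12, 0.40]
synced = synchronize(camera, radar)
```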
  • Patent number: 11861855
    Abstract: System and method for registering aerial and ground data including locating rigid features such as walls in both aerial and ground data, registering the ground rigid data to the aerial rigid data, and transforming the ground data using the transform from the registration, including breaking the data into sectors and aligning the sectors. Deformities in the ground data are accommodated.
    Type: Grant
    Filed: June 16, 2021
    Date of Patent: January 2, 2024
    Assignee: DEKA Products Limited Partnership
    Inventors: Shikhar Dev Gupta, Kartik Khanna
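The registration step (estimate a rigid transform from ground features to aerial features, then apply it to the ground data) is commonly done with a Kabsch-style least-squares fit. The 2D sketch below is a generic illustration under that assumption, not DEKA's method, and omits the sector splitting and deformity handling:

```python
import numpy as np

# Minimal rigid-registration sketch (Kabsch-style): estimate the rotation
# and translation aligning ground points to aerial points, then transform
# the ground data with the result.

def register(ground, aerial):
    """ground, aerial: (N, 2) corresponding points. Returns (R, t)
    such that aerial ~= ground @ R.T + t."""
    g_mean, a_mean = ground.mean(axis=0), aerial.mean(axis=0)
    H = (ground - g_mean).T @ (aerial - a_mean)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = a_mean - R @ g_mean
    return R, t

ground = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
R_true = np.array([[0.0, -1.0], [1.0, 0.0]])   # 90-degree rotation
aerial = ground @ R_true.T + np.array([2.0, 3.0])
R, t = register(ground, aerial)
aligned = ground @ R.T + t
```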
  • Patent number: 11861762
    Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that generate synthesized digital images using class-specific generators for objects of different classes. The disclosed system modifies a synthesized digital image by utilizing a plurality of class-specific generator neural networks to generate a plurality of synthesized objects according to object classes identified in the synthesized digital image. The disclosed system determines object classes in the synthesized digital image such as via a semantic label map corresponding to the synthesized digital image. The disclosed system selects class-specific generator neural networks corresponding to the classes of objects in the synthesized digital image. The disclosed system also generates a plurality of synthesized objects utilizing the class-specific generator neural networks based on contextual data associated with the identified objects.
    Type: Grant
    Filed: August 12, 2021
    Date of Patent: January 2, 2024
    Assignee: Adobe Inc.
    Inventors: Yuheng Li, Yijun Li, Jingwan Lu, Elya Shechtman, Krishna Kumar Singh
  • Patent number: 11861788
    Abstract: One or more computing devices implement a mesh analysis for evaluating meshes to be rendered when rendering immersive content. The mesh analysis identifies objects in a three-dimensional scene and determines geometrical complexity values for the objects. Objects with similar geometrical complexities are grouped into areas and a mesh vertices budget is determined for the respective areas. Metadata indicating the area definitions and corresponding mesh vertices budgets are generated. The metadata may be uploaded to a server to simplify meshes in the scene prior to streaming to a client, or the metadata may be provided to a client for use in simplifying the meshes as part of rendering the scene.
    Type: Grant
    Filed: June 14, 2021
    Date of Patent: January 2, 2024
    Assignee: Apple Inc.
    Inventors: Afshin Taghavi Nasrabadi, Maneli Noorkami
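The grouping-and-budgeting idea can be sketched with toy numbers. The complexity scores, the two-group threshold, and the proportional budget policy below are all hypothetical, not Apple's metadata scheme; the sketch only shows objects with similar geometrical complexity being grouped and each group receiving a mesh-vertices budget:

```python
# Illustrative sketch: group scene objects by geometric complexity and
# split a total mesh-vertex budget across the groups proportionally.

def vertex_budgets(objects, total_budget):
    """objects: dict of name -> complexity score."""
    groups = {"low": [], "high": []}
    for name, score in objects.items():
        groups["high" if score >= 100 else "low"].append(name)
    totals = {g: sum(objects[n] for n in names) for g, names in groups.items()}
    grand = sum(totals.values())
    return {g: {"objects": names,
                "budget": round(total_budget * totals[g] / grand)}
            for g, names in groups.items()}

scene = {"statue": 300, "wall": 20, "floor": 30, "tree": 150}
budgets = vertex_budgets(scene, total_budget=10000)
```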
  • Patent number: 11842442
    Abstract: In one embodiment, one or more computing systems may access a plurality of images corresponding to a portion of a face of a user. The plurality of images is captured from different viewpoints by a plurality of cameras coupled to an artificial-reality system worn by the user. The one or more computing systems may use a machine-learning model to generate a synthesized image corresponding to the portion of the face of the user. The one or more computing systems may access a three-dimensional (3D) facial model representative of the face of the user and generate a texture image by projecting the synthesized image onto the 3D facial model from a specific camera pose. The one or more computing systems may cause an output image of a facial representation of the user to be rendered using at least the 3D facial model and the texture image.
    Type: Grant
    Filed: December 22, 2022
    Date of Patent: December 12, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: James Allan Booth, Elif Albuz, Peihong Guo, Tong Xiao
  • Patent number: 11836848
    Abstract: A system for real-time updates to a display based upon the location of a camera or a detected location of a human viewing the display or both is disclosed. The system enables real-time filming of an augmented reality display that reflects realistic perspective shifts. The display may be used for filming, or may be used as a “game” or informational screen in a physical location, or other applications. The system also enables the use of real-time special effects that are centered upon an actor or other human to be visualized on a display, with appropriate perspective shift for the location of the human relative to the display and the location of the camera relative to the display.
    Type: Grant
    Filed: July 6, 2022
    Date of Patent: December 5, 2023
    Assignee: ARWALL, INC.
    Inventors: Leon Hui, Rene Amador, William Hellwarth, Michael Plescia
  • Patent number: 11822708
    Abstract: A method comprising: in response to a determination that a user is not consuming or not fully consuming virtual content, rendering to the user a real-time notification in response to real-time virtual content consumable by the user, wherein the real-time notification directs the user to adopt a particular orientation in the real space for starting or augmenting consumption of the real-time virtual content.
    Type: Grant
    Filed: June 13, 2018
    Date of Patent: November 21, 2023
    Assignee: NOKIA TECHNOLOGIES OY
    Inventors: Lasse Juhani Laaksonen, Miikka Vilermo, Mikko Tammi, Arto Lehtiniemi
  • Patent number: 11816806
    Abstract: The proposed approach is a system and method that allows a user to calculate a 3D model for each of his or her feet using a simple reference object and a mobile computing device with one or more cameras and/or one or more sensors. The mobile computing device moves around the user's feet to scan and/or capture data of his or her feet via the camera and/or the sensors. The captured sensor data is then processed by the mobile computing device to create two 3D point sets (also referred to as “point clouds”). These point clouds are then matched to a 3D model of an average foot to establish a correspondence between the point clouds and the 3D model. Once the correspondence is established, the mobile computing device is configured to fit one or more morphable models to the user's feet.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: November 14, 2023
    Assignee: Neatsy, Inc.
    Inventors: Konstantin Semianov, Anton Lebedev, Artem Semyanov
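The correspondence step (matching scanned point clouds to an average-foot model) can be illustrated with nearest-neighbor matching. This is a generic sketch with invented points, not Neatsy's pipeline, and it shows only the first correspondence pass that would precede fitting a morphable model:

```python
# Illustrative sketch: match each scanned point to its nearest point on a
# template ("average foot") model to establish correspondences.

def correspondences(scan, template):
    """scan, template: lists of (x, y, z). Returns index pairs (i, j)
    where scan[i] corresponds to template[j]."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return [(i, min(range(len(template)), key=lambda j: dist2(p, template[j])))
            for i, p in enumerate(scan)]

template = [(0, 0, 0), (10, 0, 0)]
scan = [(1, 0, 0), (9, 1, 0)]
matches = correspondences(scan, template)
```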