Patents Examined by Vincent Peren
  • Patent number: 11699243
    Abstract: Paired images of substantially the same scene are captured with the same freestanding sensor. The paired images include reflected light illuminated with controlled polarization states that are different between the paired images. Information from the images is applied to a convolutional neural network (CNN) configured to derive a spatially varying bi-directional reflectance distribution function (SVBRDF) for objects in the paired images. Alternatively, the sensor is fixed and oriented to capture images of an object of interest in the scene while a light source traverses a path that intersects the sensor's field of view. Information from the paired images of the scene and from the images captured of the object of interest while the light source traverses the field of view is applied to a CNN to derive an SVBRDF for the object of interest. The image information and the SVBRDF are used to render a representation with artificial lighting conditions.
    Type: Grant
    Filed: December 6, 2021
    Date of Patent: July 11, 2023
    Assignee: Photopotech LLC
    Inventor: Benjamin von Cramon
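Patent 11699243 feeds image pairs captured under different polarization states into a CNN that regresses an SVBRDF. As a rough illustration of that data flow only, here is a minimal PyTorch sketch; the layer sizes, the 6-channel stacking of the pair, and the 10-channel output layout (diffuse albedo, normal, roughness, specular) are assumptions, not the patented network.

```python
import torch
import torch.nn as nn

class SVBRDFNet(nn.Module):
    """Toy encoder: two RGB captures (different polarization states) stacked
    into 6 channels -> per-pixel SVBRDF maps (3 albedo, 3 normal, 1 roughness,
    3 specular = 10 channels)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 10, 3, padding=1),
        )

    def forward(self, img_pol_a, img_pol_b):
        x = torch.cat([img_pol_a, img_pol_b], dim=1)  # (N, 6, H, W)
        return self.net(x)

# Usage: two polarization-conditioned captures of the same scene.
pair_a = torch.rand(1, 3, 256, 256)
pair_b = torch.rand(1, 3, 256, 256)
svbrdf = SVBRDFNet()(pair_a, pair_b)   # (1, 10, 256, 256)
```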
  • Patent number: 11670043
    Abstract: A virtual viewpoint foreground image generating unit generates a virtual viewpoint foreground image, which is an image of a foreground object seen from a virtual viewpoint without a shadow, based on received multi-viewpoint images and a received virtual viewpoint parameter. A virtual viewpoint background image generating unit generates a virtual viewpoint background image, which is an image of a background object seen from the virtual viewpoint, based on the received multi-viewpoint images and virtual viewpoint parameter. A shadow mask image generating unit generates shadow mask images from the received multi-viewpoint images. A shadow-added virtual viewpoint background image generating unit renders a shadow in the virtual viewpoint background image based on the received virtual viewpoint background image, shadow mask images, and virtual viewpoint parameter.
    Type: Grant
    Filed: May 7, 2021
    Date of Patent: June 6, 2023
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Keigo Yoneda
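The pipeline in 11670043 composites a shadow-free virtual viewpoint foreground, a virtual viewpoint background, and a shadow mask into the final image. A minimal NumPy sketch of that last compositing step, assuming the foreground comes with an alpha matte and the shadow mask is a single-channel weight in [0, 1] already reprojected to the virtual viewpoint:

```python
import numpy as np

def composite(fg_rgb, fg_alpha, bg_rgb, shadow_mask, shadow_strength=0.5):
    """Darken the background where the shadow mask is set, then lay the
    shadow-free foreground over the shadow-added background."""
    shadowed_bg = bg_rgb * (1.0 - shadow_strength * shadow_mask[..., None])
    alpha = fg_alpha[..., None]
    return fg_rgb * alpha + shadowed_bg * (1.0 - alpha)

h, w = 270, 480
frame = composite(
    fg_rgb=np.random.rand(h, w, 3),
    fg_alpha=np.random.rand(h, w),
    bg_rgb=np.random.rand(h, w, 3),
    shadow_mask=(np.random.rand(h, w) > 0.8).astype(float),
)
```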
  • Patent number: 11631209
    Abstract: A method for computer animation includes receiving, by a processor executing a multi-platform deformation engine system in association with a third-party game engine system, an input file that includes an asset geometry, where the asset geometry defines an asset mesh structure, where the asset geometry may exclude an internal support frame, and where logic for custom deformation steps may be included, all in a portable fashion made to produce consistent results across multiple different software and/or hardware platform environments and across real-time and/or offline scenarios. The method also includes applying at least one deformer to the asset mesh structure, where the at least one deformer includes a plurality of user-selectable deformer channels, and where each deformer channel is associated with at least a portion of the asset mesh structure and configured to adjust a visual appearance of the associated portion of the asset mesh structure.
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: April 18, 2023
    Assignee: KITESTRING, INC.
    Inventors: Eric A. Soulvie, Richard R. Hurrey, R. Jason Bickerstaff, Clifford S. Champion, Peter E. McGowan, Robert Ernest Schnurstein
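Patent 11631209 describes deformers whose user-selectable channels each act on a portion of the asset mesh. An engine-agnostic Python sketch of that idea, where a channel is reduced to a vertex subset, per-vertex weights, and an offset direction (the real channel semantics are not specified here):

```python
import numpy as np

class DeformerChannel:
    """One channel: a subset of vertices, per-vertex weights, and a direction."""
    def __init__(self, vertex_ids, weights, direction):
        self.vertex_ids = np.asarray(vertex_ids)
        self.weights = np.asarray(weights)
        self.direction = np.asarray(direction, dtype=float)

    def apply(self, vertices, amount):
        out = vertices.copy()
        out[self.vertex_ids] += amount * self.weights[:, None] * self.direction
        return out

def apply_deformer(vertices, channels, amounts):
    """Apply each channel in order; each adjusts its associated mesh portion."""
    for channel, amount in zip(channels, amounts):
        vertices = channel.apply(vertices, amount)
    return vertices

mesh = np.zeros((100, 3))                       # asset mesh vertex positions
smile = DeformerChannel(range(0, 10), np.linspace(0, 1, 10), [0, 1, 0])
mesh = apply_deformer(mesh, [smile], amounts=[0.3])
```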
  • Patent number: 11632535
    Abstract: The present disclosure provides a light field imaging system based on a multifocal microlens array that projects near-infrared spots for remote sensing. The light field imaging system includes a near-infrared spot projection apparatus (100) and a light field imaging component (200), where the near-infrared spot projection apparatus (100) is configured to scatter near-infrared spots onto a to-be-observed object to add texture information to a target image, and the light field imaging component (200) is configured to image target scene light rays carrying the additional texture information. The present disclosure can extend the target depth-of-field (DOF) detection range and, in particular, reconstruct the surface of a weak-texture object.
    Type: Grant
    Filed: December 31, 2020
    Date of Patent: April 18, 2023
    Assignee: Peking University
    Inventors: Lei Yan, Shoujiang Zhao, Peng Yang, Yi Lin, Hongying Zhao, Kaiwen Jiang, Feizhou Zhang, Wenjie Fan, Haimeng Zhao, Jinfa Yang, Fan Liu
  • Patent number: 11624802
    Abstract: Provided herein are methods and systems for generating a model that maps a monitored space and is used to augment the locations and paths of devices in the monitored space, using orientation and obstruction analysis. The disclosure comprises moving a device having a camera and one or more wireless transceivers through the monitored space, exchanging signal transmissions with one or more wireless transceivers present in the monitored space, and taking images, video sequences, or other optical readings. Either the mobile wireless device, the wireless transceivers, or both may have non-isotropic transmission and reception characteristics, due to antenna structure, occlusions, other objects with radiation impact, and/or the like. The images, videos, and/or optical readings, in addition to the received signal characteristics, are stored and processed to generate the model, which one or more verification units, configured to verify the location of objects or devices in the monitored space, may use.
    Type: Grant
    Filed: February 14, 2021
    Date of Patent: April 11, 2023
    Assignee: NEC Corporation Of America
    Inventor: Tsvi Lev
  • Patent number: 11620781
    Abstract: A system and method for controlling the animation and movement of in-game objects. In some embodiments, the system includes one or more data-driven animation building blocks that can be used to define any character movements. In some embodiments, the data-driven animation blocks are conditioned by how their data is described separately from any explicit code in the core game engine. These building blocks can accept certain inputs from the core code system (e.g., movement direction, desired velocity of movement, and so on). But the game itself is agnostic as to why particular building blocks are used and what animation data (e.g., single animation, parametric blend, defined by user, and so on) the blocks may be associated with.
    Type: Grant
    Filed: October 23, 2020
    Date of Patent: April 4, 2023
    Assignee: TAKE-TWO INTERACTIVE SOFTWARE, INC.
    Inventors: Tobias Kleanthous, Mike Jones, Chris Swinhoe, Arran Cartie, James Stuart Miller, Sven Louis Julia van Soom
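Patent 11620781 describes animation building blocks defined by data rather than by explicit code in the core game engine, which only hands them generic inputs such as movement direction and desired velocity. A minimal sketch of that separation; the block schema, field names, and the blend heuristic are assumptions made for illustration:

```python
# Data describing blocks lives outside the core engine, e.g. loaded from JSON.
ANIMATION_BLOCKS = {
    "locomotion": {
        "kind": "parametric_blend",
        "clips": ["walk", "jog", "run"],
        "blend_on": "desired_speed",        # which core input drives the blend
    },
    "turn_in_place": {
        "kind": "single_animation",
        "clips": ["turn_90_left"],
    },
}

def evaluate_block(block, core_inputs):
    """The core engine is agnostic: it forwards its inputs and plays whatever
    the block's data describes, without knowing why that block was chosen."""
    if block["kind"] == "single_animation":
        return {"play": block["clips"][0], "weight": 1.0}
    if block["kind"] == "parametric_blend":
        t = min(max(core_inputs[block["blend_on"]] / 6.0, 0.0), 1.0)
        i = min(int(t * (len(block["clips"]) - 1)), len(block["clips"]) - 2)
        return {"play": block["clips"][i : i + 2], "weight": t}
    raise ValueError(f"unknown block kind {block['kind']!r}")

core_inputs = {"desired_speed": 4.2, "move_direction": (0.0, 1.0)}
print(evaluate_block(ANIMATION_BLOCKS["locomotion"], core_inputs))
```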
  • Patent number: 11615579
    Abstract: Various implementations disclosed herein include devices, systems, and methods that change a surface attribute of a first object to include a surface attribute of a second object at an intersection of the two objects. For example, changing a surface attribute may include identifying an intersection between a displayed first object and a displayed second object in a graphical environment, determining whether the graphical environment is operating in a first mode or a different second mode, and, in accordance with a determination that the graphical environment is operating in the first mode and responsive to the intersection between the displayed first object and the displayed second object, identifying a portion of the first object, wherein the identified portion is intersecting the second object, and changing a surface attribute of the portion of the first object to include a surface attribute of the second object at the intersection.
    Type: Grant
    Filed: January 21, 2021
    Date of Patent: March 28, 2023
    Assignee: Apple Inc.
    Inventor: Eric G. Thivierge
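Patent 11615579 changes the surface attribute of the intersecting portion of one object to take on an attribute of the object it intersects. A crude geometric sketch of that step, using an axis-aligned bounding box as a stand-in for the real intersection test (an assumption made for brevity):

```python
import numpy as np

def recolor_intersection(verts_a, colors_a, bounds_b, color_b):
    """Give vertices of object A that fall inside object B's bounds
    the surface color of object B at the intersection."""
    lo, hi = bounds_b
    inside = np.all((verts_a >= lo) & (verts_a <= hi), axis=1)
    out = colors_a.copy()
    out[inside] = color_b
    return out

verts_a = np.random.rand(500, 3) * 2.0          # object A vertex positions
colors_a = np.tile([0.8, 0.2, 0.2], (500, 1))   # object A starts out red
bounds_b = (np.array([0.5, 0.5, 0.5]), np.array([1.5, 1.5, 1.5]))
colors_a = recolor_intersection(verts_a, colors_a, bounds_b,
                                color_b=[0.2, 0.4, 0.9])
```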
  • Patent number: 11610360
    Abstract: A real-time neural radiance caching technique for path-traced global illumination is implemented using a neural network for caching scattered radiance components of global illumination. The neural (network) radiance cache handles fully dynamic scenes, and makes no assumptions about the camera, lighting, geometry, and materials. In contrast with conventional caching, the data-driven approach sidesteps many difficulties of caching algorithms, such as locating, interpolating, and updating cache points. The neural radiance cache is trained via online learning during rendering. Advantages of the neural radiance cache are noise reduction and real-time performance. Importantly, the runtime overhead and memory footprint of the neural radiance cache are stable and independent of scene complexity.
    Type: Grant
    Filed: June 7, 2021
    Date of Patent: March 21, 2023
    Assignee: NVIDIA Corporation
    Inventors: Thomas Müller, Fabrice Pierre Armand Rousselle, Jan Novák, Alexander Georg Keller
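NVIDIA's 11610360 caches scattered radiance in a small neural network trained online during rendering. A toy PyTorch sketch of the query-and-update loop; the input encoding (position plus direction), network size, and training signal are placeholders, not the patented design:

```python
import torch
import torch.nn as nn

# Tiny MLP cache: (position, direction) -> RGB radiance.
cache = nn.Sequential(nn.Linear(6, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 3))
opt = torch.optim.Adam(cache.parameters(), lr=1e-3)

def render_frame(num_queries=4096):
    # Short paths terminate into the cache instead of being traced further.
    query = torch.rand(num_queries, 6)
    with torch.no_grad():
        cached_radiance = cache(query)

    # A small subset of paths is traced longer to produce training targets
    # (random stand-ins here), and the cache is updated online.
    train_query = torch.rand(256, 6)
    target = torch.rand(256, 3)
    loss = nn.functional.mse_loss(cache(train_query), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return cached_radiance, loss.item()

for _ in range(3):              # training happens while rendering, per frame
    _, loss = render_frame()
```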
  • Patent number: 11587287
    Abstract: A method of forming a physical model of a geographic area. The method includes the steps of generating a digital elevation model of a topography of a defined geographic area, scaling the digital elevation model to a predetermined size, determining a parametric function from the scaled digital elevation model according to a predetermined shape, determining a linear function from the parametric function, determining a machine path from the linear function, and forming one or more material portions according to the machine path, thereby forming one or more portions representative of the defined geographic area.
    Type: Grant
    Filed: November 9, 2020
    Date of Patent: February 21, 2023
    Inventor: Thomas Percy
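Patent 11587287 goes from a digital elevation model to a machine path that forms a physical terrain model. A minimal NumPy sketch that scales a DEM to a physical footprint and flattens it into a serpentine path of linear moves; the claim's explicit parametric-function and linear-function steps are simplified away here:

```python
import numpy as np

def dem_to_toolpath(dem, target_width, target_height):
    """Scale a DEM to a physical footprint and emit a serpentine sequence of
    (x, y, z) moves a machine could follow row by row."""
    rows, cols = dem.shape
    span = dem.max() - dem.min()
    z = (dem - dem.min()) / (span if span > 0 else 1.0) * target_height
    xs = np.linspace(0.0, target_width, cols)
    ys = np.linspace(0.0, target_width * rows / cols, rows)

    path = []
    for r in range(rows):
        order = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        path.extend((xs[c], ys[r], z[r, c]) for c in order)  # alternate direction
    return np.array(path)

dem = np.random.rand(50, 80) * 300.0            # elevations in metres
toolpath = dem_to_toolpath(dem, target_width=0.4, target_height=0.05)
```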
  • Patent number: 11574416
    Abstract: A method includes obtaining a set of images that correspond to a person. The method includes generating a body pose model of the person defined by a branched plurality of neural network systems. Each neural network system models a respective portion of the person between a first body-joint and a second body-joint as dependent on an adjacent portion of the person sharing the first body-joint. The method includes providing the set of images of the respective portion to a first one and a second one of the neural network systems, where the first one and the second one correspond to adjacent body portions. The method includes determining, jointly by at least the first one and the second one of the plurality of neural network systems, pose information for the first respective body-joint and the second respective body-joint.
    Type: Grant
    Filed: April 28, 2021
    Date of Patent: February 7, 2023
    Assignee: APPLE INC.
    Inventors: Andreas N. Bigontina, Behrooz Mahasseni, Gutemberg B. Guerra Filho, Saumil B. Patel, Stefan Auer
  • Patent number: 11562468
    Abstract: Apparatus and method for denoising of images generated by a rendering engine such as a ray tracing engine. For example, one embodiment of a system or apparatus comprises: a plurality of nodes to perform ray tracing operations; a dispatcher node to dispatch graphics work to the plurality of nodes, each node to perform ray tracing to render a region of an image frame; at least a first node of the plurality comprising: a ray-tracing renderer to perform ray tracing to render a first region of the image frame; and a denoiser to perform denoising of the first region using a combination of data associated with the first region and data associated with a region outside of the first region, at least some of the data associated with the region outside of the first region to be retrieved from at least one other node.
    Type: Grant
    Filed: February 9, 2021
    Date of Patent: January 24, 2023
    Assignee: INTEL CORPORATION
    Inventors: Carson Brownlee, Ingo Wald, Attila Afra, Johannes Guenther, Jefferson Amstutz, Carsten Benthin
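Intel's 11562468 has each node denoise its own region using data from just outside that region, retrieved from other nodes. A small NumPy sketch of the halo idea: the node's tile is padded with a border fetched elsewhere before filtering, so edge pixels are denoised with full neighborhoods (a box filter stands in for the actual denoiser):

```python
import numpy as np

def denoise_with_halo(tile, halo, radius=2):
    """tile: this node's rendered region. halo: a border of width `radius`
    around the tile, retrieved from other nodes. Returns the denoised tile."""
    h, w, _ = tile.shape
    padded = halo.copy()
    padded[radius:radius + h, radius:radius + w] = tile

    out = np.zeros_like(tile)
    k = 2 * radius + 1
    for dy in range(k):                      # simple box filter as stand-in
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

radius = 2
tile = np.random.rand(64, 64, 3)                            # this node's region
halo = np.random.rand(64 + 2 * radius, 64 + 2 * radius, 3)  # fetched neighborhood
clean = denoise_with_halo(tile, halo, radius)
```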
  • Patent number: 11558551
    Abstract: A system and method for a master platform includes receiving first pose data associated with an image sensor of a first device, and a first semantic map generated by the first device, the first semantic map including a simplified object representation in a coordinate space of the first device. The master platform also receives second pose data associated with an image sensor of a second device, and a second semantic map generated by the second device, the second semantic map including a simplified object representation in a coordinate space of the second device. A shared simplified object representation common to the first and second semantic maps is identified. The master platform further combines the first and second semantic maps based on the first and second pose data. The first pose data, first semantic map, second pose data, and second semantic map are associated with a common time interval.
    Type: Grant
    Filed: September 3, 2020
    Date of Patent: January 17, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Michael Sapienza, Ankur Gupta, Abhijit Bendale, Fannie Fontanel
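Samsung's 11558551 merges two devices' semantic maps by finding a simplified object representation they share and bringing both maps into one frame. A rough NumPy sketch, reducing each object to a 3D centroid and the alignment to a translation; the real method would use the pose data to solve a full transform, so this is only illustrative:

```python
import numpy as np

def merge_semantic_maps(map_a, map_b, shared_id):
    """Each map: {object_id: centroid in that device's coordinate space}.
    Align map_b onto map_a via the object both maps contain, then union."""
    offset = map_a[shared_id] - map_b[shared_id]   # translation between frames
    merged = dict(map_a)
    for obj_id, centroid in map_b.items():
        merged.setdefault(obj_id, centroid + offset)  # keep map_a's copy if shared
    return merged

map_a = {"table": np.array([1.0, 0.0, 2.0]), "chair": np.array([0.5, 0.0, 1.0])}
map_b = {"table": np.array([4.0, 0.0, -1.0]), "lamp": np.array([5.0, 0.0, 0.0])}
combined = merge_semantic_maps(map_a, map_b, shared_id="table")
```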
  • Patent number: 11551397
    Abstract: A media sequence includes media items arranged in a sequence. A graph is generated to represent animations available for the media items in the media sequence. The graph includes nodes that represent the available animations. The animations to be used in generating the media sequence are selected via selection of a path through the graph, and the media sequence is generated using the selected animations.
    Type: Grant
    Filed: May 27, 2021
    Date of Patent: January 10, 2023
    Assignee: GoPro, Inc.
    Inventors: Guillaume Abbe, Guillaume Oulès, David Sherpa
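GoPro's 11551397 represents the animations available for each media item as graph nodes and chooses the animations by selecting a path through that graph. A small sketch with one layer of nodes per media item and a seeded random path selection; the node set and selection policy are invented for illustration:

```python
import random

media_items = ["clip_01", "photo_02", "clip_03"]

# One layer of nodes per media item; each node is an available animation.
graph = {item: ["cut", "zoom_in", "pan_left", "fade"] for item in media_items}

def select_path(graph, items, seed=7):
    """Pick one animation node per item, i.e. a path through the layered graph."""
    rng = random.Random(seed)
    return [(item, rng.choice(graph[item])) for item in items]

def generate_sequence(path):
    # Stand-in for rendering: list each item with its chosen animation.
    return [f"{item} -> {anim}" for item, anim in path]

print(generate_sequence(select_path(graph, media_items)))
```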
  • Patent number: 11532112
    Abstract: The present disclosure generally relates to generating and modifying virtual avatars. An electronic device having a camera and a display apparatus displays a virtual avatar that changes appearance in response to changes in a face in a field of view of the camera. In response to detecting changes in one or more physical features of the face in the field of view of the camera, the electronic device modifies one or more features of the virtual avatar.
    Type: Grant
    Filed: April 1, 2021
    Date of Patent: December 20, 2022
    Assignee: Apple Inc.
    Inventors: Guillaume Pierre André Barlier, Sebastian Bauer, Aurelio Guzman, Nicolas Scapel, Giancarlo Yerkes
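Apple's 11532112 modifies a virtual avatar in response to detected changes in the face seen by the camera. A toy sketch of that response step: tracked facial feature values are mapped onto avatar blendshape weights (the feature names and the simple clamped mapping are assumptions):

```python
# Map detected facial feature changes onto avatar blendshape weights.
FEATURE_TO_BLENDSHAPE = {
    "mouth_open": "jaw_open",
    "brow_raise": "brows_up",
    "smile": "mouth_smile",
}

def update_avatar(face_features, avatar_weights, gain=1.0):
    """face_features: {feature: value in 0..1 from the camera tracker}.
    Returns new blendshape weights so the avatar mirrors the face."""
    new_weights = dict(avatar_weights)
    for feature, value in face_features.items():
        shape = FEATURE_TO_BLENDSHAPE.get(feature)
        if shape is not None:
            new_weights[shape] = max(0.0, min(1.0, gain * value))
    return new_weights

avatar = {"jaw_open": 0.0, "brows_up": 0.0, "mouth_smile": 0.0}
avatar = update_avatar({"mouth_open": 0.4, "smile": 0.9}, avatar)
```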
  • Patent number: 11521287
    Abstract: A system comprising: analyzing a building model, wherein a set of wall panels are isolated from other assemblies; processing a first set of data associated with the coordinates of the wall panels; processing a second set of data associated with the assembly of the wall panels; creating a set of data associated with the assembly of the wall panel and the coordinates of a set of wall panel members; formulating an assembly of the wall panel, wherein the assembly is a predetermined organization of the wall panels based on the first set of data and the second set of data; calculating the assembly based on a set of limitations, wherein the limitations are based on the shipping vessel; manipulating the assembly, wherein the manipulated assembly is within the limitations of the shipping vessel; and generating a graphical representation of the manipulated assembly.
    Type: Grant
    Filed: March 27, 2020
    Date of Patent: December 6, 2022
    Inventor: Maharaj Jalla
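Patent 11521287 arranges wall-panel assemblies subject to the limits of the shipping vessel. A toy sketch of the "calculate and manipulate within limits" step as a greedy stacking check; the panel fields and the single height constraint are assumptions, not the patented procedure:

```python
def pack_panels(panels, vessel_height):
    """panels: list of dicts with an 'id' and a 'thickness' in metres. Greedily
    build stacks whose total thickness stays within the vessel's height limit."""
    stacks, current, height = [], [], 0.0
    for panel in sorted(panels, key=lambda p: p["thickness"], reverse=True):
        if height + panel["thickness"] > vessel_height and current:
            stacks.append(current)          # start a new stack when over limit
            current, height = [], 0.0
        current.append(panel["id"])
        height += panel["thickness"]
    if current:
        stacks.append(current)
    return stacks

panels = [{"id": f"W{i}", "thickness": t}
          for i, t in enumerate([0.30, 0.25, 0.40, 0.35, 0.20, 0.45])]
print(pack_panels(panels, vessel_height=1.0))
```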
  • Patent number: 11501480
    Abstract: The disclosed embodiments relate to a method for controlling a virtual character (or “avatar”) using a multi-modal model. The multi-modal model may process various input information relating to a user and process the input information using multiple internal models. The multi-modal model may combine the internal models to make believable and emotionally engaging responses by the virtual character. The link to a virtual character may be embedded on a web browser and the avatar may be dynamically generated based on a selection to interact with the virtual character by a user. A report may be generated for a client, the report providing insights as to characteristics of users interacting with a virtual character associated with the client.
    Type: Grant
    Filed: June 4, 2020
    Date of Patent: November 15, 2022
    Assignee: ARTIE, INC.
    Inventors: Armando McIntyre-Kirwin, Ryan Horrigan, Josh Eisenberg
  • Patent number: 11481948
    Abstract: The present disclosure discloses a method, device and storage medium for generating an animation. The method includes: acquiring a configuration file corresponding to a configuration file identifier; determining behavior information and animated resources based on the configuration file; acquiring first animated resources based on first animated resource identifiers in the behavior information; and generating the animation by synthesizing the behavior information and the first animated resources.
    Type: Grant
    Filed: July 22, 2020
    Date of Patent: October 25, 2022
    Assignee: BEIJING DAJIA INTERNET INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Xuan Liu, Zhenlong Bai, Kaijian Jiang, Chao Wang
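Patent 11481948 generates an animation from a configuration file that names behavior information and animated resources. A toy sketch of the lookup-and-synthesize flow in the abstract; the config schema and resource store here are invented for illustration:

```python
# Stand-ins for the configuration store and the animated-resource store.
CONFIG_FILES = {
    "cfg_fireworks": {
        "behavior": {"loop": True, "duration_ms": 1500,
                     "resource_ids": ["res_burst", "res_sparkle"]},
    },
}
RESOURCES = {"res_burst": "<burst frames>", "res_sparkle": "<sparkle frames>"}

def generate_animation(config_id):
    config = CONFIG_FILES[config_id]                  # acquire configuration file
    behavior = config["behavior"]                     # determine behavior info
    resources = [RESOURCES[r] for r in behavior["resource_ids"]]  # first resources
    # Synthesize: pair the behavior information with the resolved resources.
    return {"loop": behavior["loop"],
            "duration_ms": behavior["duration_ms"],
            "tracks": resources}

print(generate_animation("cfg_fireworks"))
```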
  • Patent number: 11443483
    Abstract: Systems, methods and instructions for creating building models of physical structures are disclosed. The building model may be a collection of floors defined by outlines containing regions that may be offset relative to a main region, and a collection of connectors. Connectors may have connection points for tracking, routing and sizing. Connectors may indicate elevation changes through georeferenced structural features. Signal elements may also be features that provide corrections when tracking. Feature descriptors are data that describe the structural configuration and signal elements, enabling them to be matched to previously collected data in a database. User interface elements assist a user of a tracking device in collecting floor information, structural features and signal features and in validating certain collected information based on previously known information. The height of floors may also be inferred based on sensor data from the tracking device.
    Type: Grant
    Filed: August 10, 2020
    Date of Patent: September 13, 2022
    Assignee: TRX SYSTEMS, INC.
    Inventors: Travis Young, Daniel Hakim, Daniel Franchy, Jared Napora, John Karvounis, Jonathan Fetter Degges, Tim Wang, Benjamin Funk, Carole Teolis, Carol Politi, Stuart Woodbury
  • Patent number: 11386587
    Abstract: A line drawing automatic coloring method according to the present disclosure includes: acquiring line drawing data of a target to be colored; receiving at least one local style designation for applying a selected local style to at least one place in the acquired line drawing data; and performing coloring processing that reflects the local style designation on the line drawing data, based on a learned model for coloring that has been trained in advance using line drawing data and local style designations as inputs.
    Type: Grant
    Filed: September 19, 2018
    Date of Patent: July 12, 2022
    Assignee: PREFERRED NETWORKS, INC.
    Inventor: Eiichi Matsumoto
  • Patent number: 11348301
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and a method for transforming a motion style of an avatar from a first style to a second style. The program and method include: retrieving, by a processor from a storage device, an avatar depicting motion in a first style; receiving user input selecting a second style; obtaining, based on the user input, a trained machine learning model that performs a non-linear transformation of motion from the first style to the second style; and applying the obtained trained machine learning model to the retrieved avatar to transform the avatar from depicting motion in the first style to depicting motion in the second style.
    Type: Grant
    Filed: December 16, 2020
    Date of Patent: May 31, 2022
    Assignee: Snap Inc.
    Inventors: Harrison Jesse Smith, Chen Cao, Yingying Wang
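Snap's 11348301 applies a trained machine learning model to transform an avatar's motion from one style to another. A toy PyTorch sketch of the application step, running a per-frame pose-to-pose network over the motion sequence; the skeleton size, model lookup, and the untrained placeholder MLP are assumptions:

```python
import torch
import torch.nn as nn

JOINTS = 20                                   # assumed skeleton size

def load_style_model(first_style, second_style):
    # Placeholder for retrieving the trained non-linear style-transfer model.
    return nn.Sequential(nn.Linear(JOINTS * 3, 128), nn.ReLU(),
                         nn.Linear(128, JOINTS * 3))

def restyle_motion(motion, model):
    """motion: (frames, joints, 3) joint positions in the first style.
    Returns the sequence transformed frame by frame into the second style."""
    frames = motion.shape[0]
    flat = motion.reshape(frames, -1)
    with torch.no_grad():
        return model(flat).reshape(frames, JOINTS, 3)

walk_cycle = torch.rand(120, JOINTS, 3)        # avatar motion, first style
model = load_style_model("casual", "zombie")   # user-selected second style
restyled = restyle_motion(walk_cycle, model)
```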