Patents by Inventor Ruben Villegas

Ruben Villegas has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230360320
    Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that generate realistic shading for three-dimensional objects inserted into digital images. The disclosed system utilizes a light encoder neural network to generate a representation embedding of lighting in a digital image. Additionally, the disclosed system determines points of the three-dimensional object visible within a camera view. The disclosed system generates a self-occlusion map for the digital three-dimensional object by determining whether fixed sets of rays uniformly sampled from the points intersect with the digital three-dimensional object. The disclosed system utilizes a generator neural network to determine a shading map for the digital three-dimensional object based on the representation embedding of lighting in the digital image and the self-occlusion map.
    Type: Application
    Filed: July 18, 2023
    Publication date: November 9, 2023
    Inventors: Ruben Villegas, Yunseok Jang, Duygu Ceylan Aksit, Jimei Yang, Xin Sun
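The self-occlusion map described in the abstract above can be illustrated with a minimal numpy sketch. The function names, the hemisphere sampling scheme, and the use of a sphere as the stand-in geometry are all illustrative assumptions, not the patented implementation: for each visible surface point, a fixed set of uniformly sampled rays is cast and the fraction that re-intersects the object is recorded.

```python
import numpy as np

def sphere_hit(origin, direction, center, radius):
    # Ray-sphere test: solve |o + t*d - c|^2 = r^2 and look for a root t > 0.
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return False
    t_lo = (-b - np.sqrt(disc)) / 2.0
    t_hi = (-b + np.sqrt(disc)) / 2.0
    return t_lo > 1e-6 or t_hi > 1e-6

def self_occlusion_map(points, normals, center, radius, n_rays=64, seed=0):
    # For every visible surface point, cast a fixed set of uniformly sampled
    # hemisphere rays and record the fraction that re-intersect the object.
    rng = np.random.default_rng(seed)
    occlusion = np.zeros(len(points))
    for i, (p, n) in enumerate(zip(points, normals)):
        dirs = rng.normal(size=(n_rays, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        dirs[dirs @ n < 0] *= -1.0  # flip rays into the normal's hemisphere
        hits = sum(sphere_hit(p + 1e-4 * n, d, center, radius) for d in dirs)
        occlusion[i] = hits / n_rays
    return occlusion
```

For a convex object such as this sphere, outward-facing points receive zero occlusion; concavities (here simulated by an inverted normal) drive the value toward one. In the patented system this map is one input to the generator network, alongside the lighting embedding.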
  • Publication number: 20230260182
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing unsupervised learning of discrete human motions to generate digital human motion sequences. The disclosed system utilizes an encoder of a discretized motion model to extract a sequence of latent feature representations from a human motion sequence in an unlabeled digital scene. The disclosed system also determines sampling probabilities from the sequence of latent feature representations in connection with a codebook of discretized feature representations associated with human motions. The disclosed system converts the sequence of latent feature representations into a sequence of discretized feature representations by sampling from the codebook based on the sampling probabilities. Additionally, the disclosed system utilizes a decoder to reconstruct a human motion sequence from the sequence of discretized feature representations.
    Type: Application
    Filed: February 16, 2022
    Publication date: August 17, 2023
    Inventors: Jun Saito, Nitin Saini, Ruben Villegas
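The codebook step in the entry above can be sketched in a few lines of numpy. This is a hedged illustration, not the patented model: the encoder and decoder networks are omitted, and turning negative squared distances into softmax sampling probabilities is an assumed (VQ-style) instantiation of "determining sampling probabilities ... in connection with a codebook".

```python
import numpy as np

def discretize_latents(latents, codebook, temperature=1.0, seed=0):
    # Map each continuous latent to a discretized codebook entry: convert
    # negative squared distances into sampling probabilities (softmax),
    # then sample an entry index per latent.
    rng = np.random.default_rng(seed)
    d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    logits = -d2 / temperature
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    idx = np.array([rng.choice(len(codebook), p=p) for p in probs])
    return codebook[idx], idx, probs
```

A decoder would then reconstruct the motion sequence from the returned sequence of discretized feature representations; low temperatures make the sampling approach nearest-neighbor quantization.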
  • Patent number: 11704865
    Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that generate realistic shading for three-dimensional objects inserted into digital images. The disclosed system utilizes a light encoder neural network to generate a representation embedding of lighting in a digital image. Additionally, the disclosed system determines points of the three-dimensional object visible within a camera view. The disclosed system generates a self-occlusion map for the digital three-dimensional object by determining whether fixed sets of rays uniformly sampled from the points intersect with the digital three-dimensional object. The disclosed system utilizes a generator neural network to determine a shading map for the digital three-dimensional object based on the representation embedding of lighting in the digital image and the self-occlusion map.
    Type: Grant
    Filed: July 22, 2021
    Date of Patent: July 18, 2023
    Assignee: Adobe Inc.
    Inventors: Ruben Villegas, Yunseok Jang, Duygu Ceylan Aksit, Jimei Yang, Xin Sun
  • Patent number: 11657546
    Abstract: Introduced here are techniques for relighting an image by automatically segmenting a human object in an image. The segmented image is input to an encoder that transforms it into a feature space. The feature space is concatenated with coefficients of a target illumination for the image and input to an albedo decoder and a light transport decoder to predict an albedo map and a light transport matrix, respectively. In addition, the output of the encoder is concatenated with outputs of residual parts of each decoder and fed to a light coefficients block, which predicts coefficients of the illumination for the image. The light transport matrix and predicted illumination coefficients are multiplied to obtain a shading map that can sharpen details of the image. The resulting image is scaled by the albedo map to produce the relit image, which can then be refined to reduce noise.
    Type: Grant
    Filed: May 24, 2022
    Date of Patent: May 23, 2023
    Assignee: Adobe Inc.
    Inventors: Xin Sun, Ruben Villegas, Manuel Lagunas Arto, Jimei Yang, Jianming Zhang
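The final compositing step in the relighting abstract above reduces to two array operations, sketched below with numpy. The networks that predict the albedo map, light transport matrix, and illumination coefficients are omitted; shapes and names are illustrative assumptions.

```python
import numpy as np

def relight(albedo, light_transport, light_coeffs):
    # Shading map = light transport matrix x illumination coefficients;
    # the relit image is the albedo scaled per pixel by the shading.
    h, w, _ = albedo.shape
    shading = (light_transport @ light_coeffs).reshape(h, w, 1)
    return albedo * shading
```

Here `light_transport` has one row per pixel and one column per illumination coefficient (e.g., a spherical-harmonic lighting basis), so the matrix-vector product yields a per-pixel shading value broadcast across color channels.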
  • Publication number: 20230088912
    Abstract: In various examples, historical trajectory information of objects in an environment may be tracked by an ego-vehicle and encoded into a state feature. The encoded state features for each of the objects observed by the ego-vehicle may be used—e.g., by a bi-directional long short-term memory (LSTM) network—to encode a spatial feature. The encoded spatial feature and the encoded state feature for an object may be used to predict lateral and/or longitudinal maneuvers for the object, and the combination of this information may be used to determine future locations of the object. The future locations may be used by the ego-vehicle to determine a path through the environment, or may be used by a simulation system to control virtual objects—according to trajectories determined from the future locations—through a simulation environment.
    Type: Application
    Filed: September 26, 2022
    Publication date: March 23, 2023
    Inventors: Ruben Villegas, Alejandro Troccoli, Iuri Frosio, Stephen Tyree, Wonmin Byeon, Jan Kautz
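The last step of the pipeline above, turning predicted lateral and longitudinal maneuvers into future locations, can be sketched as a simple kinematic rollout. This is a stand-in for illustration only: the state and spatial encoders (the bi-directional LSTM) are omitted, and the maneuver vocabulary, rates, and time step are assumed values, not the patented design.

```python
import numpy as np

def rollout_future(pos, vel, maneuver, dt=0.1, steps=10, lat_rate=0.5, accel=2.0):
    # Roll out future (x, y) locations for one object given a predicted
    # lateral maneuver ('keep'/'left'/'right') and a longitudinal maneuver
    # ('cruise'/'accelerate'/'brake').
    lat = {'keep': 0.0, 'left': lat_rate, 'right': -lat_rate}[maneuver[0]]
    lon = {'cruise': 0.0, 'accelerate': accel, 'brake': -accel}[maneuver[1]]
    x, y = pos
    vx, _ = vel
    out = []
    for _ in range(steps):
        vx += lon * dt          # longitudinal maneuver adjusts forward speed
        x += vx * dt
        y += lat * dt           # lateral maneuver drifts toward the new lane
        out.append((x, y))
    return np.array(out)
```

An ego-vehicle planner (or a simulation system driving virtual objects) would consume such rolled-out locations when choosing a path through the environment.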
  • Publication number: 20230037339
    Abstract: One example method involves a processing device that performs operations that include receiving a request to retarget a source motion into a target object. Operations further include providing the target object to a contact-aware motion retargeting neural network trained to retarget the source motion into the target object. The contact-aware motion retargeting neural network is trained by accessing training data that includes a source object performing the source motion. The contact-aware motion retargeting neural network generates retargeted motion for the target object, based on a self-contact having a pair of input vertices. The retargeted motion is subject to motion constraints that: (i) preserve a relative location of the self-contact and (ii) prevent self-penetration of the target object.
    Type: Application
    Filed: July 26, 2021
    Publication date: February 9, 2023
    Inventors: Ruben Villegas, Jun Saito, Jimei Yang, Duygu Ceylan Aksit, Aaron Hertzmann
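The two motion constraints named in the entry above, (i) preserving self-contacts and (ii) preventing self-penetration, can be expressed as simple penalty terms. The sketch below is an assumed formulation on raw vertex positions, not the patented training objective; the epsilon threshold and the pairwise distance check are illustrative.

```python
import numpy as np

def contact_losses(vertices, contact_pairs, penetration_eps=0.01):
    # (i) contact loss: vertex pairs marked as self-contacts should remain
    #     coincident (distance ~ 0) after retargeting.
    # (ii) penetration penalty: any unmarked vertex pair closer than a small
    #      epsilon is treated as self-penetration and penalized.
    contact = sum(np.linalg.norm(vertices[i] - vertices[j])
                  for i, j in contact_pairs)
    marked = set(map(tuple, contact_pairs)) | {(j, i) for i, j in contact_pairs}
    pen = 0.0
    for i in range(len(vertices)):
        for j in range(i + 1, len(vertices)):
            if (i, j) in marked:
                continue
            d = np.linalg.norm(vertices[i] - vertices[j])
            pen += max(0.0, penetration_eps - d)
    return contact, pen
```

During retargeting, minimizing the first term keeps a hand touching a hip where the source motion had it, while the second term pushes apart body parts that the new skeleton's proportions would otherwise drive into each other.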
  • Publication number: 20230037591
    Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that generate realistic shading for three-dimensional objects inserted into digital images. The disclosed system utilizes a light encoder neural network to generate a representation embedding of lighting in a digital image. Additionally, the disclosed system determines points of the three-dimensional object visible within a camera view. The disclosed system generates a self-occlusion map for the digital three-dimensional object by determining whether fixed sets of rays uniformly sampled from the points intersect with the digital three-dimensional object. The disclosed system utilizes a generator neural network to determine a shading map for the digital three-dimensional object based on the representation embedding of lighting in the digital image and the self-occlusion map.
    Type: Application
    Filed: July 22, 2021
    Publication date: February 9, 2023
    Inventors: Ruben Villegas, Yunseok Jang, Duygu Ceylan Aksit, Jimei Yang, Xin Sun
  • Patent number: 11514293
    Abstract: In various examples, historical trajectory information of objects in an environment may be tracked by an ego-vehicle and encoded into a state feature. The encoded state features for each of the objects observed by the ego-vehicle may be used—e.g., by a bi-directional long short-term memory (LSTM) network—to encode a spatial feature. The encoded spatial feature and the encoded state feature for an object may be used to predict lateral and/or longitudinal maneuvers for the object, and the combination of this information may be used to determine future locations of the object. The future locations may be used by the ego-vehicle to determine a path through the environment, or may be used by a simulation system to control virtual objects—according to trajectories determined from the future locations—through a simulation environment.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: November 29, 2022
    Assignee: NVIDIA Corporation
    Inventors: Ruben Villegas, Alejandro Troccoli, Iuri Frosio, Stephen Tyree, Wonmin Byeon, Jan Kautz
  • Publication number: 20220284640
    Abstract: Introduced here are techniques for relighting an image by automatically segmenting a human object in an image. The segmented image is input to an encoder that transforms it into a feature space. The feature space is concatenated with coefficients of a target illumination for the image and input to an albedo decoder and a light transport decoder to predict an albedo map and a light transport matrix, respectively. In addition, the output of the encoder is concatenated with outputs of residual parts of each decoder and fed to a light coefficients block, which predicts coefficients of the illumination for the image. The light transport matrix and predicted illumination coefficients are multiplied to obtain a shading map that can sharpen details of the image. The resulting image is scaled by the albedo map to produce the relit image, which can then be refined to reduce noise.
    Type: Application
    Filed: May 24, 2022
    Publication date: September 8, 2022
    Inventors: Xin Sun, Ruben Villegas, Manuel Lagunas Arto, Jimei Yang, Jianming Zhang
  • Patent number: 11380023
    Abstract: Introduced here are techniques for relighting an image by automatically segmenting a human object in an image. The segmented image is input to an encoder that transforms it into a feature space. The feature space is concatenated with coefficients of a target illumination for the image and input to an albedo decoder and a light transport decoder to predict an albedo map and a light transport matrix, respectively. In addition, the output of the encoder is concatenated with outputs of residual parts of each decoder and fed to a light coefficients block, which predicts coefficients of the illumination for the image. The light transport matrix and predicted illumination coefficients are multiplied to obtain a shading map that can sharpen details of the image. The resulting image is scaled by the albedo map to produce the relit image, which can then be refined to reduce noise.
    Type: Grant
    Filed: March 18, 2020
    Date of Patent: July 5, 2022
    Assignee: Adobe Inc.
    Inventors: Xin Sun, Ruben Villegas, Manuel Lagunas Arto, Jimei Yang, Jianming Zhang
  • Publication number: 20210295571
    Abstract: Introduced here are techniques for relighting an image by automatically segmenting a human object in an image. The segmented image is input to an encoder that transforms it into a feature space. The feature space is concatenated with coefficients of a target illumination for the image and input to an albedo decoder and a light transport decoder to predict an albedo map and a light transport matrix, respectively. In addition, the output of the encoder is concatenated with outputs of residual parts of each decoder and fed to a light coefficients block, which predicts coefficients of the illumination for the image. The light transport matrix and predicted illumination coefficients are multiplied to obtain a shading map that can sharpen details of the image. The resulting image is scaled by the albedo map to produce the relit image, which can then be refined to reduce noise.
    Type: Application
    Filed: March 18, 2020
    Publication date: September 23, 2021
    Inventors: Xin Sun, Ruben Villegas, Manuel Lagunas Arto, Jimei Yang, Jianming Zhang
  • Publication number: 20200082248
    Abstract: In various examples, historical trajectory information of objects in an environment may be tracked by an ego-vehicle and encoded into a state feature. The encoded state features for each of the objects observed by the ego-vehicle may be used—e.g., by a bi-directional long short-term memory (LSTM) network—to encode a spatial feature. The encoded spatial feature and the encoded state feature for an object may be used to predict lateral and/or longitudinal maneuvers for the object, and the combination of this information may be used to determine future locations of the object. The future locations may be used by the ego-vehicle to determine a path through the environment, or may be used by a simulation system to control virtual objects—according to trajectories determined from the future locations—through a simulation environment.
    Type: Application
    Filed: September 9, 2019
    Publication date: March 12, 2020
    Inventors: Ruben Villegas, Alejandro Troccoli, Iuri Frosio, Stephen Tyree, Wonmin Byeon, Jan Kautz
  • Patent number: 10546408
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use a motion synthesis neural network with a forward kinematics layer to generate a motion sequence for a target skeleton based on an initial motion sequence for an initial skeleton. In certain embodiments, the methods, non-transitory computer readable media, and systems use a motion synthesis neural network comprising an encoder recurrent neural network, a decoder recurrent neural network, and a forward kinematics layer to retarget motion sequences. To train the motion synthesis neural network to retarget such motion sequences, in some implementations, the disclosed methods, non-transitory computer readable media, and systems modify parameters of the motion synthesis neural network based on one or both of an adversarial loss and a cycle consistency loss.
    Type: Grant
    Filed: March 20, 2018
    Date of Patent: January 28, 2020
    Assignee: Adobe Inc.
    Inventors: Jimei Yang, Duygu Ceylan, Ruben Villegas
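The forward kinematics layer in the entry above maps per-joint rotations and bone offsets to global joint positions, which is what lets the network be trained on positional losses while predicting rotations. The numpy sketch below shows one common FK convention (offsets expressed in the parent's frame); it is an illustrative assumption, not the patented layer, and the differentiable/recurrent machinery around it is omitted.

```python
import numpy as np

def forward_kinematics(rotations, offsets, parents):
    # Walk the skeleton hierarchy root-first, accumulating each joint's
    # global rotation and position from its parent's.
    n = len(parents)
    global_rot = [None] * n
    positions = np.zeros((n, 3))
    for j in range(n):
        if parents[j] < 0:
            global_rot[j] = rotations[j]        # root joint
            positions[j] = offsets[j]
        else:
            p = parents[j]
            global_rot[j] = global_rot[p] @ rotations[j]
            positions[j] = positions[p] + global_rot[p] @ offsets[j]
    return positions
```

Because every operation here is differentiable, gradients from a loss on the output positions flow back to the rotations, which is the property the motion synthesis network relies on when retargeting a sequence to a skeleton with different bone lengths.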
  • Publication number: 20190295305
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use a motion synthesis neural network with a forward kinematics layer to generate a motion sequence for a target skeleton based on an initial motion sequence for an initial skeleton. In certain embodiments, the methods, non-transitory computer readable media, and systems use a motion synthesis neural network comprising an encoder recurrent neural network, a decoder recurrent neural network, and a forward kinematics layer to retarget motion sequences. To train the motion synthesis neural network to retarget such motion sequences, in some implementations, the disclosed methods, non-transitory computer readable media, and systems modify parameters of the motion synthesis neural network based on one or both of an adversarial loss and a cycle consistency loss.
    Type: Application
    Filed: March 20, 2018
    Publication date: September 26, 2019
    Inventors: Jimei Yang, Duygu Ceylan, Ruben Villegas