Patents by Inventor Nathan Carr

Nathan Carr has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11344648
    Abstract: A system and method for an air purification assembly that creates high-volume, sterilized, straight-line airflow with a significant reduction in electricity consumption. Two propellers mounted in reverse counter-rotate to create linear airflow and thrust, drawing air in through an inlet and blowing it out through an outlet. The air purification assembly may also sterilize the air as it passes through a light core system: a ring-shaped assembly with one or more UV-C LEDs that may kill bio-organisms within proximity to the assembly while dissipating the heat created by the UV-C LEDs.
    Type: Grant
    Filed: February 11, 2020
    Date of Patent: May 31, 2022
    Assignee: Ventorlux, LLC
    Inventor: Nathan Carr
  • Publication number: 20220122222
    Abstract: An improved system architecture uses a Generative Adversarial Network (GAN) including a specialized generator neural network to generate multiple resolution output images. The system produces a latent space representation of an input image. The system generates a first output image at a first resolution by providing the latent space representation of the input image as input to a generator neural network comprising an input layer, an output layer, and a plurality of intermediate layers, and taking the first output image from an intermediate layer of the plurality of intermediate layers of the generator neural network. The system generates a second output image at a second resolution different from the first resolution by providing the latent space representation of the input image as input to the generator neural network and taking the second output image from the output layer of the generator neural network.
    Type: Application
    Filed: July 23, 2021
    Publication date: April 21, 2022
    Inventors: Cameron Smith, Ratheesh Kalarot, Wei-An Lin, Richard Zhang, Niloy Mitra, Elya Shechtman, Shabnam Ghadar, Zhixin Shu, Yannick Hold-Geoffroy, Nathan Carr, Jingwan Lu, Oliver Wang, Jun-Yan Zhu
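The multi-resolution trick this abstract describes, taking one image from an intermediate layer and a second from the output layer in a single pass over the same latent representation, can be illustrated with a toy generator. Everything below is a sketch: nearest-neighbour upsampling stands in for the patent's learned synthesis layers, and all names are invented for illustration.

```python
import random

def upsample2x(img):
    """A stand-in 'layer': doubles a square image's resolution (nearest
    neighbour). A real generator layer would also apply learned filters."""
    return [[img[r // 2][c // 2] for c in range(2 * len(img))]
            for r in range(2 * len(img))]

def generate(latent, num_layers, tap_index):
    """Run the latent through the layers once; return (intermediate, final).

    Mirrors the abstract: the first, lower-resolution output image is taken
    from an intermediate layer, the second from the output layer, both from
    a single forward pass.
    """
    x = latent
    tapped = None
    for i in range(num_layers):
        x = upsample2x(x)
        if i == tap_index:
            tapped = x                     # first output image, lower resolution
    return tapped, x                       # x is the full-resolution output

latent = [[random.random() for _ in range(4)] for _ in range(4)]
low, high = generate(latent, num_layers=4, tap_index=1)
print(len(low), len(high))   # 16 64
```

The point of the tap is that the lower-resolution image costs nothing extra: it is an activation the network computes anyway on the way to the full-resolution output.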
  • Publication number: 20220122305
    Abstract: An improved system architecture uses a pipeline including an encoder and a Generative Adversarial Network (GAN) including a generator neural network to generate edited images with improved speed, realism, and identity preservation. The encoder produces an initial latent space representation of an input image by encoding the input image. The generator neural network generates an initial output image by processing the initial latent space representation of the input image. The system generates an optimized latent space representation of the input image using a loss minimization technique that minimizes a loss between the input image and the initial output image. The loss is based on target perceptual features extracted from the input image and initial perceptual features extracted from the initial output image. The system outputs the optimized latent space representation of the input image for downstream use.
    Type: Application
    Filed: July 23, 2021
    Publication date: April 21, 2022
    Inventors: Cameron Smith, Ratheesh Kalarot, Wei-An Lin, Richard Zhang, Niloy Mitra, Elya Shechtman, Shabnam Ghadar, Zhixin Shu, Yannick Hold-Geoffroy, Nathan Carr, Jingwan Lu, Oliver Wang, Jun-Yan Zhu
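The latent-optimization loop in this abstract can be sketched with a toy generator and plain squared error standing in for the perceptual-feature loss; the names, the tiny linear generator, and the finite-difference gradient are illustrative assumptions, not the patented implementation.

```python
def generator(z):
    """Toy generator: maps a 2-D latent to four 'pixels'."""
    return [z[0] + z[1], z[0] - z[1], 2.0 * z[0], 3.0 * z[1]]

def loss(z, target):
    """Squared error between generated and target pixels; the patent instead
    compares perceptual features extracted from the two images."""
    return sum((p - t) ** 2 for p, t in zip(generator(z), target))

def refine_latent(z, target, lr=0.05, steps=500, eps=1e-5):
    """Minimise the loss over the latent via finite-difference gradients."""
    z = list(z)
    for _ in range(steps):
        grad = []
        for i in range(len(z)):
            zp = list(z)
            zp[i] += eps
            grad.append((loss(zp, target) - loss(z, target)) / eps)
        z = [zi - lr * gi for zi, gi in zip(z, grad)]
    return z

target = generator([0.7, -0.3])      # the image to invert
z_init = [0.0, 0.0]                  # crude stand-in for the encoder's output
z_opt = refine_latent(z_init, target)
print(loss(z_opt, target) < loss(z_init, target))   # True: loss minimised
```

The encoder's role in the pipeline is exactly what `z_init` stands in for: a fast initial guess that the optimization then refines, which is where the speed and identity-preservation gains come from.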
  • Publication number: 20220122221
    Abstract: An improved system architecture uses a pipeline including a Generative Adversarial Network (GAN) including a generator neural network and a discriminator neural network to generate an image. An input image in a first domain and information about a target domain are obtained. The domains correspond to image styles. An initial latent space representation of the input image is produced by encoding the input image. An initial output image is generated by processing the initial latent space representation with the generator neural network. Using the discriminator neural network, a score is computed indicating whether the initial output image is in the target domain. A loss is computed based on the computed score. The loss is minimized to compute an updated latent space representation. The updated latent space representation is processed with the generator neural network to generate an output image in the target domain.
    Type: Application
    Filed: July 23, 2021
    Publication date: April 21, 2022
    Inventors: Cameron Smith, Ratheesh Kalarot, Wei-An Lin, Richard Zhang, Niloy Mitra, Elya Shechtman, Shabnam Ghadar, Zhixin Shu, Yannick Hold-Geoffroy, Nathan Carr, Jingwan Lu, Oliver Wang, Jun-Yan Zhu
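The discriminator-guided update described above can be sketched in miniature: a toy discriminator scores how close a generated "image" is to a target domain, and gradient descent on the latent minimizes a loss based on that score. The one-dimensional setup and all names are illustrative assumptions.

```python
def generator(z):
    """Toy generator: maps a scalar latent to a one-number 'image'."""
    return 2.0 * z + 1.0

def discriminator(img, domain_mean):
    """Toy discriminator: score in (0, 1], high when the image lies in the
    target domain (modelled here as closeness to a single mean value)."""
    return 1.0 / (1.0 + (img - domain_mean) ** 2)

def move_to_domain(z, domain_mean, lr=0.2, steps=500, eps=1e-5):
    """Compute a loss from the discriminator score, minimise it over the
    latent, then decode the updated latent with the generator."""
    def score_loss(zz):
        return (1.0 - discriminator(generator(zz), domain_mean)) ** 2
    for _ in range(steps):
        grad = (score_loss(z + eps) - score_loss(z)) / eps   # finite diff.
        z -= lr * grad
    return z

z0 = 0.0
z1 = move_to_domain(z0, domain_mean=5.0)
print(discriminator(generator(z1), 5.0) > discriminator(generator(z0), 5.0))
# True: the edited image scores as in-domain
```

The generator's weights never change here; only the latent moves, which is what lets one pretrained GAN serve many target styles.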
  • Patent number: 11244502
    Abstract: Techniques are disclosed for generation of 3D structures. A methodology implementing the techniques according to an embodiment includes initializing vertex-rule-graph (VRG) systems configured to provide rules that specify edge connections between vertices and parametric properties of the vertices. The rules are applied to an initial set of vertices to generate a 3D graph for each VRG system. The initial set of vertices is associated with provided interaction surfaces of a 3D model. Skeleton geometries are generated for the 3D graphs, and an associated objective function is calculated. The objective function is configured to evaluate the fitness of the skeleton geometries based on given geometric and functional constraints. A 3D structure is generated through an iterative application of genetic programming techniques applied to the VRG systems to minimize the objective function. Updated constraints and interaction surfaces may be received for incorporation in the iterative process.
    Type: Grant
    Filed: November 29, 2017
    Date of Patent: February 8, 2022
    Assignee: Adobe Inc.
    Inventors: Vojtěch Krs, Radomir Mech, Nathan A. Carr
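The evolutionary loop the abstract describes, evaluate an objective, keep fit candidates, mutate, and iterate, can be sketched with numeric genomes standing in for VRG systems; the target lengths, population parameters, and annealed mutation schedule are illustrative choices, not taken from the patent.

```python
import random

random.seed(0)

def objective(genome):
    """Toy fitness: squared distance of 'skeleton' segment lengths from
    target lengths (a stand-in for the geometric and functional constraints
    evaluated by the patent's objective function)."""
    target = [1.0, 2.0, 3.0]
    return sum((g - t) ** 2 for g, t in zip(genome, target))

def evolve(pop_size=30, generations=80, sigma=0.3):
    """Selection plus mutation over numeric genomes; the patented method
    evolves rule systems instead, but the loop shape is the same:
    evaluate, keep the fittest, mutate, repeat."""
    pop = [[random.uniform(0.0, 4.0) for _ in range(3)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)
        parents = pop[:pop_size // 3]             # keep the fittest third
        children = [[g + random.gauss(0.0, sigma)
                     for g in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
        sigma *= 0.95                             # anneal the mutation size
    return min(pop, key=objective)

best = evolve()
print(objective(best))
```

Because the fittest candidates are carried over unchanged (elitism), the best objective value can only decrease across generations, which is what makes the iterative refinement with updated constraints well-behaved.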
  • Patent number: 11164343
    Abstract: Techniques are disclosed for populating a region of an image with a plurality of brush strokes. For instance, the image is displayed, with the region of the image bounded by a boundary. A user input is received that is indicative of a user-defined brush stroke within the region. One or more synthesized brush strokes are generated within the region, based on the user-defined brush stroke. In some examples, the one or more synthesized brush strokes fill at least a part of the region of the image. The image is displayed, along with the user-defined brush stroke and the one or more synthesized brush strokes within the region of the image.
    Type: Grant
    Filed: October 10, 2020
    Date of Patent: November 2, 2021
    Assignee: Adobe Inc.
    Inventors: Vineet Batra, Praveen Kumar Dhanuka, Nathan Carr, Ankit Phogat
  • Patent number: 11158117
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use a local-lighting-estimation-neural network to estimate lighting parameters for specific positions within a digital scene for augmented reality. For example, based on a request to render a virtual object in a digital scene, a system uses a local-lighting-estimation-neural network to generate location-specific-lighting parameters for a designated position within the digital scene. In certain implementations, the system also renders a modified digital scene comprising the virtual object at the designated position according to the parameters. In some embodiments, the system generates such location-specific-lighting parameters to spatially vary and adapt lighting conditions for different positions within a digital scene.
    Type: Grant
    Filed: May 18, 2020
    Date of Patent: October 26, 2021
    Assignee: Adobe Inc.
    Inventors: Kalyan Sunkavalli, Sunil Hadap, Nathan Carr, Mathieu Garon
  • Publication number: 20210319256
    Abstract: The present disclosure is directed toward systems, methods, and non-transitory computer readable media for generating a modified digital image by identifying patch matches within a digital image utilizing a Gaussian mixture model. For example, the systems described herein can identify sample patches and corresponding matching portions within a digital image. The systems can also identify transformations between the sample patches and the corresponding matching portions. Based on the transformations, the systems can generate a Gaussian mixture model, and the systems can modify a digital image by replacing a target region with target matching portions identified in accordance with the Gaussian mixture model.
    Type: Application
    Filed: May 27, 2021
    Publication date: October 14, 2021
    Inventors: Xin Sun, Sohrab Amirghodsi, Nathan Carr, Michal Lukac
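The core idea above, model the transformations between sample patches and their matching portions with a Gaussian mixture, can be sketched in one dimension, with a hard-assignment fit standing in for full EM; the offsets and the two-component setup are invented for illustration.

```python
import random
import statistics

random.seed(1)

# 1-D offsets between sample patches and their matching portions; the
# patent models such transformations with a Gaussian mixture.
offsets = ([random.gauss(10.0, 1.0) for _ in range(50)] +
           [random.gauss(-20.0, 1.0) for _ in range(50)])

def fit_two_gaussians(data, iters=20):
    """Tiny two-component mixture fit via hard assignment (k-means style),
    a stand-in for full EM over patch transformations."""
    centers = [min(data), max(data)]
    groups = [[], []]
    for _ in range(iters):
        groups = [[], []]
        for x in data:                    # assign to the nearest center
            groups[abs(x - centers[0]) > abs(x - centers[1])].append(x)
        centers = [statistics.mean(g) for g in groups]
    return [(statistics.mean(g), statistics.stdev(g)) for g in groups]

components = fit_two_gaussians(offsets)
means = sorted(m for m, _ in components)
print(means[0] < -15 and means[1] > 5)   # True: both modes recovered
```

Once fitted, the mixture summarizes the dominant transformations in the image, so target matching portions for hole filling can be proposed by sampling transformations from the mixture rather than searching blindly.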
  • Publication number: 20210279916
    Abstract: Techniques and systems are provided for generating a video from texture images, and for reconstructing the texture images from the video. For example, a texture image can be divided into a number of tiles, and the number of tiles can be sorted into a sequence of ordered tiles. The sequence of ordered tiles can be provided to a video coder for generating a coded video. The number of tiles can be encoded based on the sequence of ordered tiles. The encoded video including the encoded sequence of ordered tiles can be decoded. At least a portion of the decoded video can include the number of tiles sorted into a sequence of ordered tiles. A data file associated with at least the portion of the decoded video can be used to reconstruct the texture image using the tiles.
    Type: Application
    Filed: May 26, 2021
    Publication date: September 9, 2021
    Inventors: Gwendal Simon, Viswanathan Swaminathan, Nathan Carr, Stefano Petrangeli
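The tile pipeline above can be sketched end to end: split an image into tiles, order them so consecutive tiles are similar (cheap for a video coder to encode as small frame-to-frame differences), and use the stored permutation to reconstruct. The greedy nearest-neighbour ordering is an illustrative stand-in; the patent does not mandate a particular sort.

```python
def split_tiles(img, t):
    """Split a square image (a list of rows) into t x t tiles, row-major."""
    n = len(img)
    return [[row[c:c + t] for row in img[r:r + t]]
            for r in range(0, n, t) for c in range(0, n, t)]

def order_tiles(tiles):
    """Greedy nearest-neighbour ordering so consecutive tiles are similar."""
    def dist(a, b):
        return sum(abs(x - y) for ra, rb in zip(a, b)
                   for x, y in zip(ra, rb))
    order, rest = [0], set(range(1, len(tiles)))
    while rest:
        nxt = min(rest, key=lambda i: dist(tiles[i], tiles[order[-1]]))
        order.append(nxt)
        rest.remove(nxt)
    return order

img = [[(r // 2) * 10 for _ in range(4)] for r in range(4)]  # two flat bands
tiles = split_tiles(img, 2)
order = order_tiles(tiles)
sequence = [tiles[i] for i in order]     # the frames handed to the coder
restored = [None] * len(tiles)
for k, frame in enumerate(sequence):     # decode side: the stored `order`
    restored[order[k]] = frame           # (the 'data file') undoes the sort
print(restored == tiles)   # True: the texture image is reconstructed
```

The permutation is the only side information needed, which is why the abstract's "data file associated with the decoded video" suffices to put every tile back in place.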
  • Patent number: 11049290
    Abstract: Techniques and systems are provided for generating a video from texture images, and for reconstructing the texture images from the video. For example, a texture image can be divided into a number of tiles, and the number of tiles can be sorted into a sequence of ordered tiles. The sequence of ordered tiles can be provided to a video coder for generating a coded video. The number of tiles can be encoded based on the sequence of ordered tiles. The encoded video including the encoded sequence of ordered tiles can be decoded. At least a portion of the decoded video can include the number of tiles sorted into a sequence of ordered tiles. A data file associated with at least the portion of the decoded video can be used to reconstruct the texture image using the tiles.
    Type: Grant
    Filed: September 26, 2019
    Date of Patent: June 29, 2021
    Assignee: Adobe Inc.
    Inventors: Gwendal Simon, Viswanathan Swaminathan, Nathan Carr, Stefano Petrangeli
  • Patent number: 11037019
    Abstract: The present disclosure is directed toward systems, methods, and non-transitory computer readable media for generating a modified digital image by identifying patch matches within a digital image utilizing a Gaussian mixture model. For example, the systems described herein can identify sample patches and corresponding matching portions within a digital image. The systems can also identify transformations between the sample patches and the corresponding matching portions. Based on the transformations, the systems can generate a Gaussian mixture model, and the systems can modify a digital image by replacing a target region with target matching portions identified in accordance with the Gaussian mixture model.
    Type: Grant
    Filed: February 27, 2018
    Date of Patent: June 15, 2021
    Assignee: Adobe Inc.
    Inventors: Xin Sun, Sohrab Amirghodsi, Nathan Carr, Michal Lukac
  • Patent number: 10964100
    Abstract: According to one general aspect, systems and techniques for rendering a painting stroke of a three-dimensional digital painting include receiving a painting stroke input on a canvas, where the painting stroke includes a plurality of pixels. For each of the pixels in the plurality of pixels, a neighborhood patch of pixels is selected and input into a neural network and a shading function is output from the neural network. The painting stroke is rendered on the canvas using the shading function.
    Type: Grant
    Filed: September 10, 2018
    Date of Patent: March 30, 2021
    Assignee: Adobe Inc.
    Inventors: Xin Sun, Zhili Chen, Nathan Carr, Julio Marco Murria, Jimei Yang
  • Publication number: 20210032118
    Abstract: The present disclosure provides for compositions, methods of making the compositions, and methods of using the compositions. In an aspect, the composition can be a reactive material that can be used to split a gas such as water vapor or carbon dioxide.
    Type: Application
    Filed: July 30, 2020
    Publication date: February 4, 2021
    Inventors: Helena Hagelin-Weaver, Samantha Roberts, Nathan Carr
  • Publication number: 20200302684
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use a local-lighting-estimation-neural network to estimate lighting parameters for specific positions within a digital scene for augmented reality. For example, based on a request to render a virtual object in a digital scene, a system uses a local-lighting-estimation-neural network to generate location-specific-lighting parameters for a designated position within the digital scene. In certain implementations, the system also renders a modified digital scene comprising the virtual object at the designated position according to the parameters. In some embodiments, the system generates such location-specific-lighting parameters to spatially vary and adapt lighting conditions for different positions within a digital scene.
    Type: Application
    Filed: May 18, 2020
    Publication date: September 24, 2020
    Inventors: Kalyan Sunkavalli, Sunil Hadap, Nathan Carr, Mathieu Garon
  • Publication number: 20200302658
    Abstract: Techniques and systems are provided for generating a video from texture images, and for reconstructing the texture images from the video. For example, a texture image can be divided into a number of tiles, and the number of tiles can be sorted into a sequence of ordered tiles. The sequence of ordered tiles can be provided to a video coder for generating a coded video. The number of tiles can be encoded based on the sequence of ordered tiles. The encoded video including the encoded sequence of ordered tiles can be decoded. At least a portion of the decoded video can include the number of tiles sorted into a sequence of ordered tiles. A data file associated with at least the portion of the decoded video can be used to reconstruct the texture image using the tiles.
    Type: Application
    Filed: September 26, 2019
    Publication date: September 24, 2020
    Inventors: Gwendal Simon, Viswanathan Swaminathan, Nathan Carr, Stefano Petrangeli
  • Patent number: 10783431
    Abstract: Image search techniques and systems involving emotions are described. In one or more implementations, a digital medium environment of a content sharing service is described for image search result configuration and control based on a search request that indicates an emotion. The search request is received that includes one or more keywords and specifies an emotion. Images available for licensing are located by matching one or more tags associated with each image against the one or more keywords and by determining that the image corresponds to the emotion. The emotion of the images is identified using one or more models that are trained using machine learning based at least in part on training images having tagged emotions. Output of a search result is controlled; the search result has one or more representations of the images that are selectable to license respective images from the content sharing service.
    Type: Grant
    Filed: November 11, 2015
    Date of Patent: September 22, 2020
    Assignee: Adobe Inc.
    Inventors: Zeke Koch, Gavin Stuart Peter Miller, Jonathan W. Brandt, Nathan A. Carr, Radomir Mech, Walter Wei-Tuh Chang, Scott D. Cohen, Hailin Jin
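The search behaviour the abstract describes, matching keyword tags and a requested emotion, reduces to a simple filter once per-image emotion labels exist; in the patent those labels come from machine-learned models, which this sketch replaces with hand-written fields on a toy catalogue.

```python
# Toy catalogue; in the patent, each image's emotion is identified by models
# trained with machine learning on images having tagged emotions, not stored
# by hand as it is here.
images = [
    {"id": 1, "tags": {"beach", "sunset"}, "emotion": "calm"},
    {"id": 2, "tags": {"beach", "party"}, "emotion": "joy"},
    {"id": 3, "tags": {"city", "rain"}, "emotion": "calm"},
]

def search(keywords, emotion):
    """Locate licensable images whose tags match the keywords and whose
    identified emotion matches the requested one."""
    return [img["id"] for img in images
            if img["tags"] & keywords and img["emotion"] == emotion]

print(search({"beach"}, "calm"))   # [1]
```

Treating the emotion as a separate filter, rather than just another keyword, is what lets the same tagged image surface for "calm beach" but not "joyful beach".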
  • Publication number: 20200254133
    Abstract: A system and method for an air purification assembly that creates high-volume, sterilized, straight-line airflow with a significant reduction in electricity consumption. Two propellers mounted in reverse counter-rotate to create linear airflow and thrust, drawing air in through an inlet and blowing it out through an outlet. The air purification assembly may also sterilize the air as it passes through a light core system: a ring-shaped assembly with one or more UV-C LEDs that may kill bio-organisms within proximity to the assembly while dissipating the heat created by the UV-C LEDs.
    Type: Application
    Filed: February 11, 2020
    Publication date: August 13, 2020
    Inventor: Nathan Carr
  • Patent number: 10692277
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use a local-lighting-estimation-neural network to estimate lighting parameters for specific positions within a digital scene for augmented reality. For example, based on a request to render a virtual object in a digital scene, a system uses a local-lighting-estimation-neural network to generate location-specific-lighting parameters for a designated position within the digital scene. In certain implementations, the system also renders a modified digital scene comprising the virtual object at the designated position according to the parameters. In some embodiments, the system generates such location-specific-lighting parameters to spatially vary and adapt lighting conditions for different positions within a digital scene.
    Type: Grant
    Filed: March 21, 2019
    Date of Patent: June 23, 2020
    Assignee: Adobe Inc.
    Inventors: Kalyan Sunkavalli, Sunil Hadap, Nathan Carr, Mathieu Garon
  • Patent number: 10665011
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that use a local-lighting-estimation-neural network to render a virtual object in a digital scene by using a local-lighting-estimation-neural network to analyze both global and local features of the digital scene and generate location-specific-lighting parameters for a designated position within the digital scene. For example, the disclosed systems extract and combine such global and local features from a digital scene using global network layers and local network layers of the local-lighting-estimation-neural network. In certain implementations, the disclosed systems can generate location-specific-lighting parameters using a neural-network architecture that combines global and local feature vectors to spatially vary lighting for different positions within a digital scene.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: May 26, 2020
    Assignees: Adobe Inc., Université Laval
    Inventors: Kalyan Sunkavalli, Sunil Hadap, Nathan Carr, Jean-François Lalonde, Mathieu Garon
  • Patent number: 10650599
    Abstract: The present disclosure includes methods and systems for rendering digital images of a virtual environment utilizing full path space learning. In particular, one or more embodiments of the disclosed systems and methods estimate a global light transport function based on sampled paths within a virtual environment. Moreover, in one or more embodiments, the disclosed systems and methods utilize the global light transport function to sample additional paths. Accordingly, the disclosed systems and methods can iteratively update an estimated global light transport function and utilize the estimated global light transport function to focus path sampling on regions of a virtual environment most likely to impact rendering a digital image of the virtual environment from a particular camera perspective.
    Type: Grant
    Filed: July 6, 2018
    Date of Patent: May 12, 2020
    Assignee: Adobe Inc.
    Inventors: Xin Sun, Nathan Carr, Hao Qin
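The iterative scheme in this last abstract, estimate the transport function from sampled paths and then refocus sampling where contributions are high, can be sketched as adaptive importance sampling over a one-dimensional "path space"; the contribution function, the binned representation, and the defensive mixture with uniform sampling are illustrative choices, not the patented renderer.

```python
import random

random.seed(2)

def contribution(x):
    """Toy path contribution on [0, 1): most light arrives through paths
    near x = 0.8 (a stand-in for the global light transport function)."""
    return 1.0 if 0.75 <= x < 0.85 else 0.01

def render(iterations=5, samples=2000, bins=10):
    """Each iteration re-fits the per-bin sampling weights to the observed
    contributions, so later samples focus on high-contribution regions."""
    weights = [1.0] * bins                  # start from uniform sampling
    estimate = 0.0
    for _ in range(iterations):
        total = sum(weights)
        # mix with uniform sampling so no region is starved (keeps the
        # importance-sampling estimator stable and unbiased)
        probs = [0.9 * w / total + 0.1 / bins for w in weights]
        hits = [0.0] * bins
        acc = 0.0
        for b in random.choices(range(bins), weights=probs, k=samples):
            x = (b + random.random()) / bins        # sample inside bin b
            f = contribution(x)
            acc += f / (probs[b] * bins)            # importance sampling
            hits[b] += f
        estimate = acc / samples
        weights = hits                              # refit to observations
    return estimate

# true integral: 0.1 * 1.0 + 0.9 * 0.01 = 0.109
print(abs(render() - 0.109) < 0.03)   # True
```

The estimate stays unbiased at every iteration because the sampling density is always known; what the refitting buys is lower variance, by spending most samples on the regions of path space that actually matter to the rendered image.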