Patents by Inventor John Quan

John Quan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240122820
    Abstract: An aerosol personal care product comprising a composition comprising a compressed gas propellant and a single-phase liquid concentrate; wherein the concentrate comprises at least about 10%, by weight of the concentrate, of one or more emollients and wherein at least one emollient has a viscosity of at least about 20 cP; and wherein the concentrate has a viscosity (cP) to surface tension (dyn/cm) ratio of at most about 1.
    Type: Application
    Filed: October 17, 2023
    Publication date: April 18, 2024
    Inventors: Elton Luis Menon, Julie Beth Hipp, Matthew John Martin, Ke Ming Quan, Julie Savchenko, David Frederick Swaile
  • Patent number: 11948300
    Abstract: Machine learning systems and methods are disclosed for prediction of wound healing, such as for diabetic foot ulcers or other wounds, and for assessment implementations such as segmentation of images into wound regions and non-wound regions. Systems for assessing or predicting wound healing can include a light detection element configured to collect light of at least a first wavelength reflected from a tissue region including a wound, and one or more processors configured to generate an image based on a signal from the light detection element having pixels depicting the tissue region, determine reflectance intensity values for at least a subset of the pixels, determine one or more quantitative features of the subset of pixels based on the reflectance intensity values, and generate a predicted or assessed healing parameter associated with the wound over a predetermined time interval.
    Type: Grant
    Filed: March 2, 2023
    Date of Patent: April 2, 2024
    Assignee: Spectral MD, Inc.
    Inventors: Wensheng Fan, John Michael DiMaio, Jeffrey E. Thatcher, Peiran Quan, Faliu Yi, Kevin Plant, Ronald Baxter, Brian McCall, Zhicun Gao, Jason Dwight
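The processing pipeline the abstract above describes (pixel reflectance values → quantitative features → healing parameter) can be sketched in Python. This is an illustrative toy only, not the patented method: the feature set, thresholds, and scoring rule are all assumptions made for the example.

```python
import statistics

def quantitative_features(reflectance_values):
    """Simple summary features of pixel reflectance intensities in a region."""
    return {
        "mean": statistics.mean(reflectance_values),
        "stdev": statistics.pstdev(reflectance_values),
        "max": max(reflectance_values),
    }

def predicted_healing_parameter(features, threshold=0.5):
    # Toy rule (invented for this sketch): brighter, more uniform
    # reflectance maps to a higher predicted healing score.
    return 1.0 if features["mean"] > threshold and features["stdev"] < 0.2 else 0.0

# Reflectance intensities (normalized to [0, 1]) for a subset of pixels.
pixels = [0.62, 0.58, 0.65, 0.60, 0.59]
features = quantitative_features(pixels)
score = predicted_healing_parameter(features)
```

In the actual system these features would be computed per wavelength over segmented wound regions; the sketch only shows the feature-then-score structure.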
  • Publication number: 20240092670
    Abstract: The present invention generally relates to systems and methods for the separation and removal of methane from an agricultural methane digestate, for example, agricultural waste. The systems and methods include an extraction system that exposes the methane digestate to agricultural commodities and microbial additives, resulting in products that can be recycled.
    Type: Application
    Filed: September 16, 2023
    Publication date: March 21, 2024
    Inventors: Julie Sannar, James White, Jim Quan, Ronald Helland, John Woods
  • Publication number: 20230252288
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an action selection neural network used to select actions to be performed by an agent interacting with an environment. One of the systems includes (i) a plurality of actor computing units, in which each of the actor computing units is configured to maintain a respective replica of the action selection neural network and to perform a plurality of actor operations, and (ii) one or more learner computing units, in which each of the one or more learner computing units is configured to perform a plurality of learner operations.
    Type: Application
    Filed: April 6, 2023
    Publication date: August 10, 2023
    Inventors: David Budden, Gabriel Barth-Maron, John Quan, Daniel George Horgan
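The actor/learner split described in the entry above (each actor maintains a replica of the action selection network and generates experience; a learner updates the shared parameters) can be sketched as a toy Python program. The `Network`, `Actor`, and `Learner` classes, the sync scheme, and the update rule are all assumptions made for illustration, not the patented system.

```python
class Network:
    def __init__(self, weights=0.0):
        self.weights = weights

    def select_action(self, observation):
        # Toy policy: parity of the observation shifted by the current weights.
        return int(observation + self.weights) % 2

class Actor:
    def __init__(self, shared):
        self.replica = Network(shared.weights)  # local replica of the shared network

    def act(self, shared, steps=5):
        self.replica.weights = shared.weights   # periodically re-sync the replica
        experience = []
        for obs in range(steps):
            action = self.replica.select_action(obs)
            reward = 1.0 if action == obs % 2 else 0.0
            experience.append((obs, action, reward))
        return experience

class Learner:
    def update(self, shared, batch):
        # Toy "gradient step": nudge the shared weights toward the mean reward.
        mean_reward = sum(r for _, _, r in batch) / len(batch)
        shared.weights += 0.1 * (mean_reward - shared.weights)

shared_net = Network()
actors = [Actor(shared_net) for _ in range(4)]   # several actor computing units
learner = Learner()                              # one learner computing unit

for _ in range(3):
    batch = [t for actor in actors for t in actor.act(shared_net)]
    learner.update(shared_net, batch)
```

In a real distributed setup the actors and learner run on separate machines and communicate through a shared parameter store and replay buffer; the sketch keeps everything in one process to show only the replica/learner structure.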
  • Publication number: 20230244933
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network used to select actions performed by a reinforcement learning agent interacting with an environment. In one aspect, a method includes maintaining a replay memory, where the replay memory stores pieces of experience data generated as a result of the reinforcement learning agent interacting with the environment. Each piece of experience data is associated with a respective expected learning progress measure that is a measure of an expected amount of progress made in the training of the neural network if the neural network is trained on the piece of experience data. The method further includes selecting a piece of experience data from the replay memory by prioritizing for selection pieces of experience data having relatively higher expected learning progress measures and training the neural network on the selected piece of experience data.
    Type: Application
    Filed: January 30, 2023
    Publication date: August 3, 2023
    Inventors: Tom Schaul, John Quan, David Silver
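The replay-memory method in the entry above (each experience carries an expected-learning-progress measure; sampling favors experiences with higher measures) can be sketched with priority-proportional sampling. This is a hedged illustration, not the patented method: the priority values and the linear-scan sampler are assumptions for the example.

```python
import random

class ReplayMemory:
    def __init__(self):
        self.experience = []   # list of [transition, priority] pairs

    def add(self, transition, priority):
        self.experience.append([transition, priority])

    def sample(self):
        # Draw with probability proportional to priority (linear scan).
        total = sum(p for _, p in self.experience)
        r = random.uniform(0, total)
        cumulative = 0.0
        for i, (transition, p) in enumerate(self.experience):
            cumulative += p
            if r <= cumulative:
                return i, transition
        return len(self.experience) - 1, self.experience[-1][0]

    def update_priority(self, index, new_priority):
        # After training on an experience, its measure is typically revised.
        self.experience[index][1] = new_priority

memory = ReplayMemory()
for t in range(10):
    memory.add(transition=t, priority=float(t + 1))  # later items "more surprising"

random.seed(0)
counts = [0] * 10
for _ in range(5000):
    i, _ = memory.sample()
    counts[i] += 1
# High-priority experiences are sampled far more often than low-priority ones.
```

Production implementations usually replace the linear scan with a sum-tree so sampling and priority updates run in O(log n).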
  • Patent number: 11625604
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an action selection neural network used to select actions to be performed by an agent interacting with an environment. One of the systems includes (i) a plurality of actor computing units, in which each of the actor computing units is configured to maintain a respective replica of the action selection neural network and to perform a plurality of actor operations, and (ii) one or more learner computing units, in which each of the one or more learner computing units is configured to perform a plurality of learner operations.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: April 11, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: David Budden, Gabriel Barth-Maron, John Quan, Daniel George Horgan
  • Patent number: 11568250
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network used to select actions performed by a reinforcement learning agent interacting with an environment. In one aspect, a method includes maintaining a replay memory, where the replay memory stores pieces of experience data generated as a result of the reinforcement learning agent interacting with the environment. Each piece of experience data is associated with a respective expected learning progress measure that is a measure of an expected amount of progress made in the training of the neural network if the neural network is trained on the piece of experience data. The method further includes selecting a piece of experience data from the replay memory by prioritizing for selection pieces of experience data having relatively higher expected learning progress measures and training the neural network on the selected piece of experience data.
    Type: Grant
    Filed: May 4, 2020
    Date of Patent: January 31, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Tom Schaul, John Quan, David Silver
  • Publication number: 20220346717
    Abstract: Systems and methods for tourniquet monitoring and control are provided. A system includes at least one sensor, a housing, a processor, and a user communication module. The at least one sensor is configured to monitor at least one of deployment or operation of the tourniquet. The housing is configured to removably engage the tourniquet to position the at least one sensor to monitor the at least one of deployment or operation of the tourniquet. The processor is configured to receive feedback from the at least one sensor, compare the feedback to at least one of deployment or operation parameters for the tourniquet, and generate a user report. The user communication module is configured to communicate the user report.
    Type: Application
    Filed: August 7, 2020
    Publication date: November 3, 2022
    Inventors: John Quan Nguyen, Avery Lee Goss, Conor Lee Evans, Lilian Witthauer, Matthias Muller
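The processor's role in the entry above (receive sensor feedback, compare it to deployment/operation parameters, generate a user report) is a simple comparison loop. The sketch below is illustrative only; the parameter names and acceptable ranges are assumptions, not values from the patent.

```python
def generate_user_report(sensor_feedback, parameters):
    """Compare each sensor reading to its allowed range and flag deviations."""
    report = {}
    for name, value in sensor_feedback.items():
        low, high = parameters[name]
        report[name] = "ok" if low <= value <= high else "out of range"
    return report

# Hypothetical operation parameters for a deployed tourniquet.
parameters = {"pressure_mmHg": (250, 400), "strap_tension_N": (40, 120)}
feedback = {"pressure_mmHg": 310, "strap_tension_N": 25}

report = generate_user_report(feedback, parameters)
```

The user communication module would then transmit `report` (here, pressure within range but strap tension flagged) to the caregiver.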
  • Publication number: 20200265305
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an action selection neural network used to select actions to be performed by an agent interacting with an environment. One of the systems includes (i) a plurality of actor computing units, in which each of the actor computing units is configured to maintain a respective replica of the action selection neural network and to perform a plurality of actor operations, and (ii) one or more learner computing units, in which each of the one or more learner computing units is configured to perform a plurality of learner operations.
    Type: Application
    Filed: October 29, 2018
    Publication date: August 20, 2020
    Inventors: David Budden, Gabriel Barth-Maron, John Quan, Daniel George Horgan
  • Publication number: 20200265312
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network used to select actions performed by a reinforcement learning agent interacting with an environment. In one aspect, a method includes maintaining a replay memory, where the replay memory stores pieces of experience data generated as a result of the reinforcement learning agent interacting with the environment. Each piece of experience data is associated with a respective expected learning progress measure that is a measure of an expected amount of progress made in the training of the neural network if the neural network is trained on the piece of experience data. The method further includes selecting a piece of experience data from the replay memory by prioritizing for selection pieces of experience data having relatively higher expected learning progress measures and training the neural network on the selected piece of experience data.
    Type: Application
    Filed: May 4, 2020
    Publication date: August 20, 2020
    Inventors: Tom Schaul, John Quan, David Silver
  • Patent number: 10650310
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network used to select actions performed by a reinforcement learning agent interacting with an environment. In one aspect, a method includes maintaining a replay memory, where the replay memory stores pieces of experience data generated as a result of the reinforcement learning agent interacting with the environment. Each piece of experience data is associated with a respective expected learning progress measure that is a measure of an expected amount of progress made in the training of the neural network if the neural network is trained on the piece of experience data. The method further includes selecting a piece of experience data from the replay memory by prioritizing for selection pieces of experience data having relatively higher expected learning progress measures and training the neural network on the selected piece of experience data.
    Type: Grant
    Filed: November 11, 2016
    Date of Patent: May 12, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Tom Schaul, John Quan, David Silver
  • Patent number: 10282662
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network used to select actions performed by a reinforcement learning agent interacting with an environment. In one aspect, a method includes maintaining a replay memory, where the replay memory stores pieces of experience data generated as a result of the reinforcement learning agent interacting with the environment. Each piece of experience data is associated with a respective expected learning progress measure that is a measure of an expected amount of progress made in the training of the neural network if the neural network is trained on the piece of experience data. The method further includes selecting a piece of experience data from the replay memory by prioritizing for selection pieces of experience data having relatively higher expected learning progress measures and training the neural network on the selected piece of experience data.
    Type: Grant
    Filed: May 11, 2018
    Date of Patent: May 7, 2019
    Assignee: DeepMind Technologies Limited
    Inventors: Tom Schaul, John Quan, David Silver
  • Publication number: 20180260707
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network used to select actions performed by a reinforcement learning agent interacting with an environment. In one aspect, a method includes maintaining a replay memory, where the replay memory stores pieces of experience data generated as a result of the reinforcement learning agent interacting with the environment. Each piece of experience data is associated with a respective expected learning progress measure that is a measure of an expected amount of progress made in the training of the neural network if the neural network is trained on the piece of experience data. The method further includes selecting a piece of experience data from the replay memory by prioritizing for selection pieces of experience data having relatively higher expected learning progress measures and training the neural network on the selected piece of experience data.
    Type: Application
    Filed: May 11, 2018
    Publication date: September 13, 2018
    Inventors: Tom Schaul, John Quan, David Silver
  • Publication number: 20170140269
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network used to select actions performed by a reinforcement learning agent interacting with an environment. In one aspect, a method includes maintaining a replay memory, where the replay memory stores pieces of experience data generated as a result of the reinforcement learning agent interacting with the environment. Each piece of experience data is associated with a respective expected learning progress measure that is a measure of an expected amount of progress made in the training of the neural network if the neural network is trained on the piece of experience data. The method further includes selecting a piece of experience data from the replay memory by prioritizing for selection pieces of experience data having relatively higher expected learning progress measures and training the neural network on the selected piece of experience data.
    Type: Application
    Filed: November 11, 2016
    Publication date: May 18, 2017
    Applicant: Google Inc.
    Inventors: Tom Schaul, John Quan, David Silver
  • Publication number: 20140213910
    Abstract: Frequent monitoring of early-stage burns is necessary for deciding optimal treatment and management. Superficial partial-thickness and deep partial-thickness burns, while visually similar, differ dramatically in terms of clinical treatment and are known to progress in severity over time. The disclosed method uses spatial frequency domain imaging (SFDI) for noninvasively mapping quantitative changes in chromophore and optical properties that may be indicative of burn wound severity. A controlled protocol of graded burn severity is developed and applied to 17 rats. SFDI data is acquired at multiple near-infrared wavelengths over the course of 3 h. Burn severity is verified using hematoxylin and eosin histology. Changes in water concentration (edema), deoxygenated hemoglobin concentration, and optical scattering (tissue denaturation) are statistically significant measures, which are used to differentiate superficial partial-thickness burns from deep partial-thickness burns.
    Type: Application
    Filed: January 24, 2014
    Publication date: July 31, 2014
    Applicant: The Regents of the University of California
    Inventors: Anthony J. Durkin, Amaan Mazhar, John Quan Minh Nguyen