Patents by Inventor John Quan
John Quan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240122820
Abstract: An aerosol personal care product comprising a composition comprising a compressed gas propellant and a single-phase liquid concentrate; wherein the concentrate comprises at least about 10%, by weight of the concentrate, of one or more emollients and wherein at least one emollient has a viscosity of at least about 20 cP; and wherein the concentrate has a viscosity (cP) to surface tension (dyn/cm) ratio of at most about 1.
Type: Application
Filed: October 17, 2023
Publication date: April 18, 2024
Inventors: Elton Luis Menon, Julie Beth Hipp, Matthew John Martin, Ke Ming Quan, Julie Savchenko, David Frederick Swaile
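The claimed concentrate property above is a simple dimensionless ratio. A minimal sketch of the check, using illustrative viscosity and surface-tension values that are not taken from the patent:

```python
def meets_ratio_spec(viscosity_cp, surface_tension_dyn_cm, max_ratio=1.0):
    """Check the claimed property: viscosity (cP) divided by surface
    tension (dyn/cm) is at most about 1. Input values are illustrative."""
    return viscosity_cp / surface_tension_dyn_cm <= max_ratio

# A 25 cP concentrate with 30 dyn/cm surface tension has ratio ~0.83.
ok = meets_ratio_spec(viscosity_cp=25.0, surface_tension_dyn_cm=30.0)
```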
-
Patent number: 11948300
Abstract: Machine learning systems and methods are disclosed for prediction of wound healing, such as for diabetic foot ulcers or other wounds, and for assessment implementations such as segmentation of images into wound regions and non-wound regions. Systems for assessing or predicting wound healing can include a light detection element configured to collect light of at least a first wavelength reflected from a tissue region including a wound, and one or more processors configured to generate an image based on a signal from the light detection element having pixels depicting the tissue region, determine reflectance intensity values for at least a subset of the pixels, determine one or more quantitative features of the subset of the plurality of pixels based on the reflectance intensity values, and generate a predicted or assessed healing parameter associated with the wound over a predetermined time interval.
Type: Grant
Filed: March 2, 2023
Date of Patent: April 2, 2024
Assignee: Spectral MD, Inc.
Inventors: Wensheng Fan, John Michael DiMaio, Jeffrey E. Thatcher, Peiran Quan, Faliu Yi, Kevin Plant, Ronald Baxter, Brian McCall, Zhicun Gao, Jason Dwight
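The pixel-level step described above (reflectance intensities over a subset of pixels, reduced to quantitative features) can be sketched as follows. The feature names and toy values are assumptions for illustration, not the patent's actual feature set:

```python
from statistics import mean, median, pstdev

def reflectance_features(image, wound_pixels):
    """Quantitative features over a subset of pixels (hypothetical sketch).

    image: 2-D list of reflectance intensity values at one wavelength.
    wound_pixels: iterable of (row, col) coordinates in the wound region.
    """
    values = [image[r][c] for r, c in wound_pixels]
    return {
        "mean": mean(values),      # average reflectance of the subset
        "median": median(values),
        "stdev": pstdev(values),   # population spread, a simple texture proxy
    }

# Toy 3x3 "image" with a two-pixel wound subset.
img = [[0.1, 0.2, 0.3],
       [0.4, 0.5, 0.6],
       [0.7, 0.8, 0.9]]
feats = reflectance_features(img, [(0, 0), (2, 2)])
```

In the patented system these features would feed a trained model that outputs the healing parameter; here they are simply returned.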
-
Publication number: 20240092670
Abstract: The present invention generally relates to systems and methods for the separation and removal of methane from an agricultural methane digestate, for example, agricultural waste. The systems and methods include an extraction system that exposes the methane digestate to agricultural commodities and microbial additives, resulting in products that can be recycled.
Type: Application
Filed: September 16, 2023
Publication date: March 21, 2024
Inventors: Julie Sannar, James White, Jim Quan, Ronald Helland, John Woods
-
Publication number: 20230252288
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an action selection neural network used to select actions to be performed by an agent interacting with an environment. One of the systems includes (i) a plurality of actor computing units, in which each of the actor computing units is configured to maintain a respective replica of the action selection neural network and to perform a plurality of actor operations, and (ii) one or more learner computing units, in which each of the one or more learner computing units is configured to perform a plurality of learner operations.
Type: Application
Filed: April 6, 2023
Publication date: August 10, 2023
Inventors: David Budden, Gabriel Barth-Maron, John Quan, Daniel George Horgan
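The actor/learner split described in this family of applications can be illustrated with a toy single-process sketch. The linear "policy", the update rule, and the parameter names are hypothetical stand-ins for illustration, not the patented method:

```python
from collections import deque

class Actor:
    """Actor computing unit: maintains a replica of the parameters and
    performs actor operations (generate experience, push to a shared queue)."""
    def __init__(self, replica, queue):
        self.replica = replica
        self.queue = queue

    def run(self, observations):
        for obs in observations:
            action = self.replica["w"] * obs      # toy linear policy
            self.queue.append((obs, action))

class Learner:
    """Learner computing unit: consumes queued experience and updates the
    shared parameters, which actors later copy back into their replicas."""
    def __init__(self, params, queue):
        self.params = params
        self.queue = queue

    def step(self, lr=0.1, target=2.0):
        # Toy learner op: one nudge of w toward `target` per experience item.
        while self.queue:
            self.queue.popleft()
            self.params["w"] += lr * (target - self.params["w"])

params = {"w": 0.0}
queue = deque()
actors = [Actor(dict(params), queue) for _ in range(2)]
for a in actors:
    a.run([1.0, 2.0])          # two actors each generate two experiences
Learner(params, queue).step()
for a in actors:
    a.replica.update(params)   # refresh each actor's replica
```

In the real systems the actors and learners run as separate computing units with asynchronous parameter refreshes; the synchronous loop above only shows the division of labor.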
-
Publication number: 20230244933
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network used to select actions performed by a reinforcement learning agent interacting with an environment. In one aspect, a method includes maintaining a replay memory, where the replay memory stores pieces of experience data generated as a result of the reinforcement learning agent interacting with the environment. Each piece of experience data is associated with a respective expected learning progress measure that is a measure of an expected amount of progress made in the training of the neural network if the neural network is trained on the piece of experience data. The method further includes selecting a piece of experience data from the replay memory by prioritizing for selection pieces of experience data having relatively higher expected learning progress measures and training the neural network on the selected piece of experience data.
Type: Application
Filed: January 30, 2023
Publication date: August 3, 2023
Inventors: Tom Schaul, John Quan, David Silver
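The prioritization idea in this family can be sketched minimally: each stored experience carries a learning-progress measure, and sampling probability grows with that measure. The proportional scheme with an α exponent follows one common formulation of prioritized experience replay; the measures and items here are toy values:

```python
import random

class PrioritizedReplay:
    """Replay memory sketch: experiences are sampled with probability
    proportional to measure**alpha, so higher expected-learning-progress
    items are selected more often."""
    def __init__(self, alpha=0.6):
        self.alpha = alpha
        self.items = []                      # list of (experience, measure)

    def add(self, experience, measure):
        self.items.append((experience, measure))

    def sample(self):
        weights = [m ** self.alpha for _, m in self.items]
        r = random.random() * sum(weights)   # inverse-CDF style draw
        for (exp, _), w in zip(self.items, weights):
            r -= w
            if r <= 0:
                return exp
        return self.items[-1][0]             # guard against float round-off

random.seed(0)
buf = PrioritizedReplay()
buf.add("low-progress", 0.01)
buf.add("high-progress", 5.0)
picks = [buf.sample() for _ in range(1000)]
```

With these measures the high-progress item dominates the draws (roughly 98% of samples), which is exactly the "prioritizing for selection" behavior the abstract describes.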
-
Patent number: 11625604
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an action selection neural network used to select actions to be performed by an agent interacting with an environment. One of the systems includes (i) a plurality of actor computing units, in which each of the actor computing units is configured to maintain a respective replica of the action selection neural network and to perform a plurality of actor operations, and (ii) one or more learner computing units, in which each of the one or more learner computing units is configured to perform a plurality of learner operations.
Type: Grant
Filed: October 29, 2018
Date of Patent: April 11, 2023
Assignee: DeepMind Technologies Limited
Inventors: David Budden, Gabriel Barth-Maron, John Quan, Daniel George Horgan
-
Patent number: 11568250
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network used to select actions performed by a reinforcement learning agent interacting with an environment. In one aspect, a method includes maintaining a replay memory, where the replay memory stores pieces of experience data generated as a result of the reinforcement learning agent interacting with the environment. Each piece of experience data is associated with a respective expected learning progress measure that is a measure of an expected amount of progress made in the training of the neural network if the neural network is trained on the piece of experience data. The method further includes selecting a piece of experience data from the replay memory by prioritizing for selection pieces of experience data having relatively higher expected learning progress measures and training the neural network on the selected piece of experience data.
Type: Grant
Filed: May 4, 2020
Date of Patent: January 31, 2023
Assignee: DeepMind Technologies Limited
Inventors: Tom Schaul, John Quan, David Silver
-
Publication number: 20220346717
Abstract: Systems and methods for tourniquet monitoring and control are provided. A system includes at least one sensor, a housing, a processor, and a user communication module. The at least one sensor is configured to monitor at least one of deployment or operation of the tourniquet. The housing is configured to removably engage the tourniquet to position the at least one sensor to monitor the at least one of deployment or operation of the tourniquet. The processor is configured to receive feedback from the at least one sensor, compare the feedback to at least one of deployment or operation parameters for the tourniquet, and generate a user report. The user communication module is configured to communicate the user report.
Type: Application
Filed: August 7, 2020
Publication date: November 3, 2022
Inventors: John Quan Nguyen, Avery Lee Goss, Conor Lee Evans, Lilian Witthauer, Matthias Muller
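The processor's compare-and-report step above can be sketched as a range check of sensor feedback against operating parameters. The field names and numeric ranges are illustrative assumptions, not values from the application:

```python
def generate_report(feedback, parameters):
    """Compare each sensor reading to its allowed (low, high) operating
    range and flag violations. Field names and ranges are hypothetical."""
    report = {}
    for name, value in feedback.items():
        low, high = parameters[name]
        report[name] = "ok" if low <= value <= high else "out of range"
    return report

report = generate_report(
    feedback={"pressure_mmHg": 310, "applied_minutes": 130},
    parameters={"pressure_mmHg": (250, 400), "applied_minutes": (0, 120)},
)
```

The resulting report would then be handed to the user communication module for display or transmission.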
-
Publication number: 20200265305
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an action selection neural network used to select actions to be performed by an agent interacting with an environment. One of the systems includes (i) a plurality of actor computing units, in which each of the actor computing units is configured to maintain a respective replica of the action selection neural network and to perform a plurality of actor operations, and (ii) one or more learner computing units, in which each of the one or more learner computing units is configured to perform a plurality of learner operations.
Type: Application
Filed: October 29, 2018
Publication date: August 20, 2020
Inventors: David Budden, Gabriel Barth-Maron, John Quan, Daniel George Horgan
-
Publication number: 20200265312
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network used to select actions performed by a reinforcement learning agent interacting with an environment. In one aspect, a method includes maintaining a replay memory, where the replay memory stores pieces of experience data generated as a result of the reinforcement learning agent interacting with the environment. Each piece of experience data is associated with a respective expected learning progress measure that is a measure of an expected amount of progress made in the training of the neural network if the neural network is trained on the piece of experience data. The method further includes selecting a piece of experience data from the replay memory by prioritizing for selection pieces of experience data having relatively higher expected learning progress measures and training the neural network on the selected piece of experience data.
Type: Application
Filed: May 4, 2020
Publication date: August 20, 2020
Inventors: Tom Schaul, John Quan, David Silver
-
Patent number: 10650310
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network used to select actions performed by a reinforcement learning agent interacting with an environment. In one aspect, a method includes maintaining a replay memory, where the replay memory stores pieces of experience data generated as a result of the reinforcement learning agent interacting with the environment. Each piece of experience data is associated with a respective expected learning progress measure that is a measure of an expected amount of progress made in the training of the neural network if the neural network is trained on the piece of experience data. The method further includes selecting a piece of experience data from the replay memory by prioritizing for selection pieces of experience data having relatively higher expected learning progress measures and training the neural network on the selected piece of experience data.
Type: Grant
Filed: November 11, 2016
Date of Patent: May 12, 2020
Assignee: DeepMind Technologies Limited
Inventors: Tom Schaul, John Quan, David Silver
-
Patent number: 10282662
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network used to select actions performed by a reinforcement learning agent interacting with an environment. In one aspect, a method includes maintaining a replay memory, where the replay memory stores pieces of experience data generated as a result of the reinforcement learning agent interacting with the environment. Each piece of experience data is associated with a respective expected learning progress measure that is a measure of an expected amount of progress made in the training of the neural network if the neural network is trained on the piece of experience data. The method further includes selecting a piece of experience data from the replay memory by prioritizing for selection pieces of experience data having relatively higher expected learning progress measures and training the neural network on the selected piece of experience data.
Type: Grant
Filed: May 11, 2018
Date of Patent: May 7, 2019
Assignee: DeepMind Technologies Limited
Inventors: Tom Schaul, John Quan, David Silver
-
Publication number: 20180260707
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network used to select actions performed by a reinforcement learning agent interacting with an environment. In one aspect, a method includes maintaining a replay memory, where the replay memory stores pieces of experience data generated as a result of the reinforcement learning agent interacting with the environment. Each piece of experience data is associated with a respective expected learning progress measure that is a measure of an expected amount of progress made in the training of the neural network if the neural network is trained on the piece of experience data. The method further includes selecting a piece of experience data from the replay memory by prioritizing for selection pieces of experience data having relatively higher expected learning progress measures and training the neural network on the selected piece of experience data.
Type: Application
Filed: May 11, 2018
Publication date: September 13, 2018
Inventors: Tom Schaul, John Quan, David Silver
-
Publication number: 20170140269
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network used to select actions performed by a reinforcement learning agent interacting with an environment. In one aspect, a method includes maintaining a replay memory, where the replay memory stores pieces of experience data generated as a result of the reinforcement learning agent interacting with the environment. Each piece of experience data is associated with a respective expected learning progress measure that is a measure of an expected amount of progress made in the training of the neural network if the neural network is trained on the piece of experience data. The method further includes selecting a piece of experience data from the replay memory by prioritizing for selection pieces of experience data having relatively higher expected learning progress measures and training the neural network on the selected piece of experience data.
Type: Application
Filed: November 11, 2016
Publication date: May 18, 2017
Applicant: Google Inc.
Inventors: Tom Schaul, John Quan, David Silver
-
Publication number: 20140213910
Abstract: Frequent monitoring of early-stage burns is necessary for deciding optimal treatment and management. Superficial partial-thickness and deep partial-thickness burns, while visually similar, differ dramatically in terms of clinical treatment and are known to progress in severity over time. The disclosed method uses spatial frequency domain imaging (SFDI) for noninvasively mapping quantitative changes in chromophore and optical properties that may be indicative of burn wound severity. A controlled protocol of graded burn severity is developed and applied to 17 rats. SFDI data is acquired at multiple near-infrared wavelengths over the course of 3 h. Burn severity is verified using hematoxylin and eosin histology. Changes in water concentration (edema), deoxygenated hemoglobin concentration, and optical scattering (tissue denaturation) are statistically significant measures, which are used to differentiate superficial partial-thickness burns from deep partial-thickness burns.
Type: Application
Filed: January 24, 2014
Publication date: July 31, 2014
Applicant: The Regents of the University of California
Inventors: Anthony J. Durkin, Amaan Mazhar, John Quan Minh Nguyen