Patents by Inventor Karl Scholz

Karl Scholz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11929391
Abstract: Described herein is an electronic component that may include a substrate, wherein the substrate may include at least two electrodes, wherein the at least two electrodes are each spaced apart from each other on and/or within the substrate. When the electronic component is in a first operating state, an electrolytic material may be disposed at least in a spatial region between the at least two electrodes, wherein the electrolytic material comprises at least one polymerizable material. When the electronic component is in a second operating state, at least one electrical connection may be made between the at least two electrodes, wherein the at least one electrical connection comprises an electrically conductive polymer. The electrically conductive polymer may comprise one or more fiber structures, wherein the one or more fiber structures are in physical contact with the at least two electrodes.
    Type: Grant
    Filed: October 1, 2020
    Date of Patent: March 12, 2024
Assignee: Technische Universität Dresden
    Inventors: Hans Kleemann, Matteo Cucchi, Karl Leo, Veronika Scholz, Hsin Tseng, Alexander Lee
  • Publication number: 20240042600
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for data-driven robotic control. One of the methods includes maintaining robot experience data; obtaining annotation data; training, on the annotation data, a reward model; generating task-specific training data for the particular task, comprising, for each experience in a second subset of the experiences in the robot experience data: processing the observation in the experience using the trained reward model to generate a reward prediction, and associating the reward prediction with the experience; and training a policy neural network on the task-specific training data for the particular task, wherein the policy neural network is configured to receive a network input comprising an observation and to generate a policy output that defines a control policy for a robot performing the particular task.
    Type: Application
    Filed: June 8, 2023
    Publication date: February 8, 2024
    Inventors: Serkan Cabi, Ziyu Wang, Alexander Novikov, Ksenia Konyushkova, Sergio Gomez Colmenarejo, Scott Ellison Reed, Misha Man Ray Denil, Jonathan Karl Scholz, Oleg O. Sushkov, Rae Chan Jeong, David Barker, David Budden, Mel Vecerik, Yusuf Aytar, Joao Ferdinando Gomes de Freitas
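The annotation step this abstract describes (and which recurs in patent 11712799 below) can be illustrated with a minimal sketch: a trained reward model scores the observation in each stored experience, and the predicted reward is attached to build task-specific training data for the policy network. All names here (`Experience`, `build_task_data`, the toy model) are illustrative, not from the patent.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Experience:
    observation: List[float]
    action: int
    reward: Optional[float] = None  # filled in by the trained reward model

def build_task_data(experiences: List[Experience],
                    reward_model: Callable[[List[float]], float]) -> List[Experience]:
    """Annotate each stored experience with a predicted task-specific reward."""
    for exp in experiences:
        exp.reward = reward_model(exp.observation)
    return experiences

# Toy stand-in for the trained reward model: reward is high when the first
# observation feature exceeds a threshold.
toy_model = lambda obs: 1.0 if obs[0] > 0.5 else 0.0
data = build_task_data([Experience([0.9], 1), Experience([0.1], 0)], toy_model)
```

The annotated experiences would then serve as the task-specific training set for the policy neural network.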
  • Patent number: 11886997
    Abstract: An off-policy reinforcement learning actor-critic neural network system configured to select actions from a continuous action space to be performed by an agent interacting with an environment to perform a task. An observation defines environment state data and reward data. The system has an actor neural network which learns a policy function mapping the state data to action data. A critic neural network learns an action-value (Q) function. A replay buffer stores tuples of the state data, the action data, the reward data and new state data. The replay buffer also includes demonstration transition data comprising a set of the tuples from a demonstration of the task within the environment. The neural network system is configured to train the actor neural network and the critic neural network off-policy using stored tuples from the replay buffer comprising tuples both from operation of the system and from the demonstration transition data.
    Type: Grant
    Filed: October 7, 2022
    Date of Patent: January 30, 2024
    Assignee: DeepMind Technologies Limited
    Inventors: Olivier Pietquin, Martin Riedmiller, Wang Fumin, Bilal Piot, Mel Vecerik, Todd Andrew Hester, Thomas Rothoerl, Thomas Lampe, Nicolas Manfred Otto Heess, Jonathan Karl Scholz
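The replay-buffer arrangement in this abstract, mixing the agent's own transitions with demonstration transitions, can be sketched as follows. This is a simplified illustration under assumed names, not the patented implementation: demonstrations are retained permanently while agent experience is evicted first-in-first-out, and training batches are sampled from the combined pool.

```python
import random
from collections import deque

class MixedReplayBuffer:
    """Replay buffer holding agent transitions alongside demonstration
    transitions; demonstrations are never evicted, agent data is FIFO."""

    def __init__(self, capacity, demo_transitions):
        self.demos = list(demo_transitions)   # from task demonstrations
        self.agent = deque(maxlen=capacity)   # from system operation

    def add(self, transition):
        self.agent.append(transition)

    def sample(self, batch_size):
        # Off-policy updates draw from both sources indiscriminately.
        pool = self.demos + list(self.agent)
        return random.sample(pool, min(batch_size, len(pool)))

# One demonstration tuple (state, action, reward, new state), then agent data.
buffer = MixedReplayBuffer(capacity=2, demo_transitions=[("s0", "a0", 1.0, "s1")])
buffer.add(("s1", "a1", 0.0, "s2"))
buffer.add(("s2", "a2", 0.0, "s3"))
buffer.add(("s3", "a3", 1.0, "s4"))  # evicts the oldest agent transition
```

Sampled batches would feed the off-policy updates of both the actor and the critic networks.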
  • Patent number: 11868882
    Abstract: An off-policy reinforcement learning actor-critic neural network system configured to select actions from a continuous action space to be performed by an agent interacting with an environment to perform a task. An observation defines environment state data and reward data. The system has an actor neural network which learns a policy function mapping the state data to action data. A critic neural network learns an action-value (Q) function. A replay buffer stores tuples of the state data, the action data, the reward data and new state data. The replay buffer also includes demonstration transition data comprising a set of the tuples from a demonstration of the task within the environment. The neural network system is configured to train the actor neural network and the critic neural network off-policy using stored tuples from the replay buffer comprising tuples both from operation of the system and from the demonstration transition data.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: January 9, 2024
    Assignee: DeepMind Technologies Limited
    Inventors: Olivier Claude Pietquin, Martin Riedmiller, Wang Fumin, Bilal Piot, Mel Vecerik, Todd Andrew Hester, Thomas Rothoerl, Thomas Lampe, Nicolas Manfred Otto Heess, Jonathan Karl Scholz
  • Patent number: 11836599
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for improving operational efficiency within a data center by modeling data center performance and predicting power usage efficiency. An example method receives a state input characterizing a current state of a data center. For each data center setting slate, the state input and the data center setting slate are processed through an ensemble of machine learning models. Each machine learning model is configured to receive and process the state input and the data center setting slate to generate an efficiency score that characterizes a predicted resource efficiency of the data center if the data center settings defined by the data center setting slate are adopted. The method selects, based on the efficiency scores for the data center setting slates, new values for the data center settings.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: December 5, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Richard Andrew Evans, Jim Gao, Michael C. Ryan, Gabriel Dulac-Arnold, Jonathan Karl Scholz, Todd Andrew Hester
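The slate-selection step in this abstract can be sketched in a few lines: every candidate settings slate is scored by each model in the ensemble, and the slate with the best aggregate predicted efficiency is adopted. The ensemble members, the scalar "load" state, and the setpoint slates below are toy assumptions for illustration only.

```python
def select_slate(state, slates, ensemble):
    """Score each candidate settings slate with every model in the ensemble
    and pick the slate with the highest mean predicted efficiency."""
    def mean_score(slate):
        scores = [model(state, slate) for model in ensemble]
        return sum(scores) / len(scores)
    return max(slates, key=mean_score)

# Toy ensemble: each model scores a cooling setpoint by its closeness to an
# (assumed) load-dependent optimum; the models disagree slightly.
ensemble = [
    lambda load, setpoint: -abs(setpoint - (18 + load)),
    lambda load, setpoint: -abs(setpoint - (19 + load)),
]
best = select_slate(4, [20, 22, 24, 26], ensemble)  # best is 22
```

Averaging over an ensemble rather than trusting a single model is a common way to smooth out individual prediction errors before committing to a settings change.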
  • Publication number: 20230281966
    Abstract: A method for training a neural network to predict keypoints of unseen objects using a training data set including labeled and unlabeled training data is described. The method comprising: receiving the training data set comprising a plurality of training samples, each training sample comprising a set of synchronized images of one or more objects from a respective scene, wherein each image in the set is synchronously taken by a respective camera from a different point of view, and wherein a subset of the set of synchronized images is labeled with ground-truth keypoints and the remaining images in the set are unlabeled; and for each of one or more training samples of the plurality of training samples: training the neural network on the training sample by updating current values of parameters of the neural network to minimize a loss function which is a combination of a supervised loss function and an unsupervised loss function.
    Type: Application
    Filed: July 28, 2021
    Publication date: September 7, 2023
    Inventors: Mel VECERIK, Jonathan Karl SCHOLZ, Jean-Baptiste REGLI
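The combined loss in this abstract pairs a supervised term on labeled views with an unsupervised term on the remaining views. A minimal one-dimensional sketch, with all weights and the variance-style consistency term assumed for illustration (the patent does not specify this exact form):

```python
def keypoint_loss(pred, label, view_preds, w_sup=1.0, w_unsup=0.5):
    """Combined loss for one keypoint coordinate across synchronized views."""
    # Supervised term: squared error against the ground-truth keypoint,
    # applied only when this view carries a label.
    sup = (pred - label) ** 2 if label is not None else 0.0
    # Unsupervised term: variance of predictions across the synchronized
    # views -- perfectly consistent multi-view predictions cost nothing.
    mean = sum(view_preds) / len(view_preds)
    unsup = sum((p - mean) ** 2 for p in view_preds) / len(view_preds)
    return w_sup * sup + w_unsup * unsup
```

For a labeled view predicted exactly right with agreeing views, the loss is zero; an unlabeled sample still incurs a penalty whenever the views disagree, which is what lets unlabeled data shape the network.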
  • Publication number: 20230264734
    Abstract: Disclosed is an electric power steering system for a vehicle, the power steering system including: an electric motor configured to provide mechanical power to a steering mechanism of the vehicle for steering the vehicle; a housing in which the electric motor is disposed, wherein the housing comprises an inner housing in which the electric motor is disposed, and an outer housing disposed around the inner housing, and wherein an air flow path is defined within the housing, the air flow path being defined at least in part by a gap between the inner housing and the outer housing; and an air flow source arranged to generate an air flow along the air flow path to cool the electric motor, wherein the air flow source is independent from the electric motor.
    Type: Application
    Filed: July 28, 2021
    Publication date: August 24, 2023
    Inventor: George Karl SCHOLZ
  • Patent number: 11712799
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for data-driven robotic control. One of the methods includes maintaining robot experience data; obtaining annotation data; training, on the annotation data, a reward model; generating task-specific training data for the particular task, comprising, for each experience in a second subset of the experiences in the robot experience data: processing the observation in the experience using the trained reward model to generate a reward prediction, and associating the reward prediction with the experience; and training a policy neural network on the task-specific training data for the particular task, wherein the policy neural network is configured to receive a network input comprising an observation and to generate a policy output that defines a control policy for a robot performing the particular task.
    Type: Grant
    Filed: September 14, 2020
    Date of Patent: August 1, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Serkan Cabi, Ziyu Wang, Alexander Novikov, Ksenia Konyushkova, Sergio Gomez Colmenarejo, Scott Ellison Reed, Misha Man Ray Denil, Jonathan Karl Scholz, Oleg O. Sushkov, Rae Chan Jeong, David Barker, David Budden, Mel Vecerik, Yusuf Aytar, Joao Ferdinando Gomes de Freitas
  • Publication number: 20230023189
    Abstract: An off-policy reinforcement learning actor-critic neural network system configured to select actions from a continuous action space to be performed by an agent interacting with an environment to perform a task. An observation defines environment state data and reward data. The system has an actor neural network which learns a policy function mapping the state data to action data. A critic neural network learns an action-value (Q) function. A replay buffer stores tuples of the state data, the action data, the reward data and new state data. The replay buffer also includes demonstration transition data comprising a set of the tuples from a demonstration of the task within the environment. The neural network system is configured to train the actor neural network and the critic neural network off-policy using stored tuples from the replay buffer comprising tuples both from operation of the system and from the demonstration transition data.
    Type: Application
    Filed: October 7, 2022
    Publication date: January 26, 2023
    Inventors: Olivier Pietquin, Martin Riedmiller, Wang Fumin, Bilal Piot, Mel Vecerik, Todd Andrew Hester, Thomas Rothoerl, Thomas Lampe, Nicolas Manfred Otto Heess, Jonathan Karl Scholz
  • Publication number: 20220363323
    Abstract: A downforce system for a vehicle includes: a restrictor configured to restrict a flow of air into a region that is defined at least in part by the restrictor and a ground surface, a rim disposed on the restrictor and configured to form at least a partial seal with the ground surface; a dedicated pressure source disposed outside the restrictor and connected to the restrictor via an air flow path, the pressure source being configured to generate a pressure differential across the restrictor; and a dust and debris removal system configured to prevent dust and debris from exiting the downforce system via the air flow path. By generating a pressure differential across the restrictor, a downforce which acts on the vehicle may be generated. The downforce may result in an improved grip or traction of the vehicle, which may improve handling and safety of the vehicle.
    Type: Application
    Filed: October 16, 2020
    Publication date: November 17, 2022
    Inventor: George Karl SCHOLZ
  • Patent number: 11468321
    Abstract: An off-policy reinforcement learning actor-critic neural network system configured to select actions from a continuous action space to be performed by an agent interacting with an environment to perform a task. An observation defines environment state data and reward data. The system has an actor neural network which learns a policy function mapping the state data to action data. A critic neural network learns an action-value (Q) function. A replay buffer stores tuples of the state data, the action data, the reward data and new state data. The replay buffer also includes demonstration transition data comprising a set of the tuples from a demonstration of the task within the environment. The neural network system is configured to train the actor neural network and the critic neural network off-policy using stored tuples from the replay buffer comprising tuples both from operation of the system and from the demonstration transition data.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: October 11, 2022
    Assignee: DeepMind Technologies Limited
    Inventors: Olivier Claude Pietquin, Martin Riedmiller, Wang Fumin, Bilal Piot, Mel Vecerik, Todd Andrew Hester, Thomas Rothoerl, Thomas Lampe, Nicolas Manfred Otto Heess, Jonathan Karl Scholz
  • Publication number: 20210287072
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for improving operational efficiency within a data center by modeling data center performance and predicting power usage efficiency. An example method receives a state input characterizing a current state of a data center. For each data center setting slate, the state input and the data center setting slate are processed through an ensemble of machine learning models. Each machine learning model is configured to receive and process the state input and the data center setting slate to generate an efficiency score that characterizes a predicted resource efficiency of the data center if the data center settings defined by the data center setting slate are adopted. The method selects, based on the efficiency scores for the data center setting slates, new values for the data center settings.
    Type: Application
    Filed: May 26, 2021
    Publication date: September 16, 2021
    Inventors: Richard Andrew Evans, Jim Gao, Michael C. Ryan, Gabriel Dulac-Arnold, Jonathan Karl Scholz, Todd Andrew Hester
  • Publication number: 20210078169
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for data-driven robotic control. One of the methods includes maintaining robot experience data; obtaining annotation data; training, on the annotation data, a reward model; generating task-specific training data for the particular task, comprising, for each experience in a second subset of the experiences in the robot experience data: processing the observation in the experience using the trained reward model to generate a reward prediction, and associating the reward prediction with the experience; and training a policy neural network on the task-specific training data for the particular task, wherein the policy neural network is configured to receive a network input comprising an observation and to generate a policy output that defines a control policy for a robot performing the particular task.
    Type: Application
    Filed: September 14, 2020
    Publication date: March 18, 2021
    Inventors: Serkan Cabi, Ziyu Wang, Alexander Novikov, Ksenia Konyushkova, Sergio Gomez Colmenarejo, Scott Ellison Reed, Misha Man Ray Denil, Jonathan Karl Scholz, Oleg O. Sushkov, Rae Chan Jeong, David Barker, David Budden, Mel Vecerik, Yusuf Aytar, Joao Ferdinando Gomes de Freitas
  • Patent number: 10872294
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection policy neural network. In one aspect, a method comprises: obtaining an expert observation; processing the expert observation using a generative neural network system to generate a given observation-given action pair, wherein the generative neural network system has been trained to be more likely to generate a particular observation-particular action pair if performing the particular action in response to the particular observation is more likely to result in the environment later reaching the state characterized by a target observation; processing the given observation using the action selection policy neural network to generate a given action score for the given action; and adjusting the current values of the action selection policy neural network parameters to increase the given action score for the given action.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: December 22, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Mel Vecerik, Yannick Schroecker, Jonathan Karl Scholz
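The final adjustment step in this abstract, increasing the policy's score for the action proposed by the generative system, can be reduced to a toy sketch. A tabular policy and a fixed step size stand in for the neural network and its gradient update; these simplifications are assumptions, not the patented method.

```python
def update_policy(policy, observation, target_action, lr=0.1):
    """Nudge the policy's score for the generatively proposed action upward,
    a tabular stand-in for the gradient step on the network parameters."""
    scores = policy.setdefault(observation, {})
    scores[target_action] = scores.get(target_action, 0.0) + lr
    return policy

# Two updates toward the same observation-action pair proposed by the
# (hypothetical) generative system accumulate score for that action.
policy = {}
update_policy(policy, "obs_a", "grasp")
update_policy(policy, "obs_a", "grasp")
```

Repeated over many expert observations, this drives the policy toward actions the generative system predicts will lead the environment toward the target states.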
  • Publication number: 20200272889
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for improving operational efficiency within a data center by modeling data center performance and predicting power usage efficiency. An example method receives a state input characterizing a current state of a data center. For each data center setting slate, the state input and the data center setting slate are processed through an ensemble of machine learning models. Each machine learning model is configured to receive and process the state input and the data center setting slate to generate an efficiency score that characterizes a predicted resource efficiency of the data center if the data center settings defined by the data center setting slate are adopted. The method selects, based on the efficiency scores for the data center setting slates, new values for the data center settings.
    Type: Application
    Filed: April 30, 2020
    Publication date: August 27, 2020
    Inventors: Richard Andrew Evans, Jim Gao, Michael C. Ryan, Gabriel Dulac-Arnold, Jonathan Karl Scholz, Todd Andrew Hester
  • Publication number: 20200151562
    Abstract: An off-policy reinforcement learning actor-critic neural network system configured to select actions from a continuous action space to be performed by an agent interacting with an environment to perform a task. An observation defines environment state data and reward data. The system has an actor neural network which learns a policy function mapping the state data to action data. A critic neural network learns an action-value (Q) function. A replay buffer stores tuples of the state data, the action data, the reward data and new state data. The replay buffer also includes demonstration transition data comprising a set of the tuples from a demonstration of the task within the environment. The neural network system is configured to train the actor neural network and the critic neural network off-policy using stored tuples from the replay buffer comprising tuples both from operation of the system and from the demonstration transition data.
    Type: Application
    Filed: June 28, 2018
    Publication date: May 14, 2020
    Inventors: Olivier Pietquin, Martin Riedmiller, Wang Fumin, Bilal Piot, Mel Vecerik, Todd Andrew Hester, Thomas Rothörl, Thomas Lampe, Nicolas Manfred Otto Heess, Jonathan Karl Scholz
  • Patent number: 10643121
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for improving operational efficiency within a data center by modeling data center performance and predicting power usage efficiency. An example method receives a state input characterizing a current state of a data center. For each data center setting slate, the state input and the data center setting slate are processed through an ensemble of machine learning models. Each machine learning model is configured to receive and process the state input and the data center setting slate to generate an efficiency score that characterizes a predicted resource efficiency of the data center if the data center settings defined by the data center setting slate are adopted. The method selects, based on the efficiency scores for the data center setting slates, new values for the data center settings.
    Type: Grant
    Filed: January 19, 2017
    Date of Patent: May 5, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Richard Andrew Evans, Jim Gao, Michael C. Ryan, Gabriel Dulac-Arnold, Jonathan Karl Scholz, Todd Andrew Hester
  • Publication number: 20200104684
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection policy neural network. In one aspect, a method comprises: obtaining an expert observation; processing the expert observation using a generative neural network system to generate a given observation-given action pair, wherein the generative neural network system has been trained to be more likely to generate a particular observation-particular action pair if performing the particular action in response to the particular observation is more likely to result in the environment later reaching the state characterized by a target observation; processing the given observation using the action selection policy neural network to generate a given action score for the given action; and adjusting the current values of the action selection policy neural network parameters to increase the given action score for the given action.
    Type: Application
    Filed: September 27, 2019
    Publication date: April 2, 2020
    Inventors: Mel Vecerik, Yannick Schroecker, Jonathan Karl Scholz
  • Publication number: 20180204116
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for improving operational efficiency within a data center by modeling data center performance and predicting power usage efficiency. An example method receives a state input characterizing a current state of a data center. For each data center setting slate, the state input and the data center setting slate are processed through an ensemble of machine learning models. Each machine learning model is configured to receive and process the state input and the data center setting slate to generate an efficiency score that characterizes a predicted resource efficiency of the data center if the data center settings defined by the data center setting slate are adopted. The method selects, based on the efficiency scores for the data center setting slates, new values for the data center settings.
    Type: Application
    Filed: January 19, 2017
    Publication date: July 19, 2018
    Inventors: Richard Andrew Evans, Jim Gao, Michael C. Ryan, Gabriel Dulac-Arnold, Jonathan Karl Scholz, Todd Andrew Hester
  • Publication number: 20060206299
    Abstract: A computer software product is used to create applications for enabling a dialogue between a human and a computer. The software product provides a programming tool that insulates software developers from time-consuming, technically-challenging programming tasks by enabling the developer to specify generalized instructions to a Dialogue Flow Interpreter, which invokes functions to implement a speech application, automatically populating a library with dialogue objects that are available to other applications. The speech applications created through the DFI may be implemented as COM (component object model) objects, and so the applications can be easily integrated into a variety of different platforms. In addition, “translator” object classes are provided to handle specific types of data, such as currency, numeric data, dates, times, string variables, etc. These translator object classes have utility either as part of the DFI library or as a sub-library separate from dialogue implementation.
    Type: Application
    Filed: January 3, 2006
    Publication date: September 14, 2006
    Applicant: Unisys Corporation
    Inventors: Karl Scholz, James Irwin, Samir Tamri
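The "translator" object classes in this last abstract can be pictured with a small sketch: per-type classes that normalize spoken-form data, registered in a shared library so any dialogue application can reuse them. The class, registry, and vocabulary below are hypothetical illustrations, not the Dialogue Flow Interpreter's actual API.

```python
class CurrencyTranslator:
    """Converts a spoken currency phrase into a numeric amount."""
    WORDS = {"one": 1, "two": 2, "five": 5, "ten": 10}

    def translate(self, phrase):
        # e.g. "five dollars" -> 5.0 (toy vocabulary, leading word only)
        words = phrase.lower().split()
        return float(self.WORDS[words[0]])

# Shared registry standing in for the library of reusable dialogue objects.
TRANSLATORS = {"currency": CurrencyTranslator()}

def translate(kind, phrase):
    """Dispatch a phrase to the registered translator for its data type."""
    return TRANSLATORS[kind].translate(phrase)
```

Additional translator classes for dates, times, or numeric strings would be registered the same way, which is what makes the library reusable across applications.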