Patents by Inventor Frank Hutter

Frank Hutter has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230334371
    Abstract: A method for training a machine learning algorithm taking into account at least one inequality constraint. Each of the at least one inequality constraint represents a secondary constraint. The method includes: optimizing hyperparameters for the machine learning algorithm by applying a tree-structured Parzen estimator, wherein the tree-structured Parzen estimator is based on an acquisition function adapted on the basis of the at least one inequality constraint; and training the machine learning algorithm on the basis of the optimized hyperparameters.
    Type: Application
    Filed: April 12, 2023
    Publication date: October 19, 2023
    Inventors: Frank Hutter, Shuhei Watanabe
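The constrained-TPE idea above can be illustrated with a toy one-dimensional sketch: split past trials into good and bad sets, model each with a kernel density, and weight the usual TPE density ratio by a density fitted to the constraint-satisfying trials. The function names and the exact feasibility weighting are illustrative assumptions, not the patented formulation.

```python
from statistics import NormalDist

def kde(points, bandwidth=0.1):
    """Kernel density estimate: an equal-weight mixture of Gaussians
    centred on the observed points."""
    dists = [NormalDist(p, bandwidth) for p in points]
    return lambda x: sum(d.pdf(x) for d in dists) / len(dists)

def constrained_tpe_suggest(trials, candidates, gamma=0.25):
    """trials: list of (x, loss, feasible). Rank candidates by the TPE
    density ratio l(x)/g(x), multiplied by a density over feasible points
    so configurations resembling constraint violations are penalised."""
    by_loss = sorted(trials, key=lambda t: t[1])
    n_good = max(1, int(gamma * len(by_loss)))
    good = [t[0] for t in by_loss[:n_good]]          # low-loss trials
    bad = [t[0] for t in by_loss[n_good:]] or good   # the rest
    feasible = [t[0] for t in trials if t[2]] or [t[0] for t in trials]
    l, g, f = kde(good), kde(bad), kde(feasible)
    return max(candidates, key=lambda x: l(x) / (g(x) + 1e-12) * f(x))
```

With trials clustered around a low-loss, feasible region near 0.2 and a high-loss, infeasible region near 0.8, the acquisition prefers the feasible region.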
  • Publication number: 20230306265
    Abstract: A method for determining an optimal architecture of a neural network. The method includes: defining a search space by means of a context-free grammar; training neural networks with candidate architectures on the training data, and validating the trained neural networks on the validation data; initializing a Gaussian process, wherein the Gaussian process comprises a Weisfeiler-Lehman graph kernel; adapting the Gaussian process such that, given the candidate architectures, the Gaussian process predicts the validation performance achieved with these candidate architectures; and performing a Bayesian optimization for finding the candidate architecture that achieves the best performance.
    Type: Application
    Filed: March 15, 2023
    Publication date: September 28, 2023
    Inventors: Danny Stoll, Frank Hutter, Simon Schrodi
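The Weisfeiler-Lehman graph kernel named in the abstract can be sketched as follows: iteratively relabel each node with a tuple of its own label and its neighbours' sorted labels, count every label seen at every iteration, and take the dot product of the resulting count vectors. This is a generic WL-kernel sketch with invented names, not the patented Gaussian-process surrogate itself.

```python
from collections import Counter

def wl_features(adj, labels, iters=2):
    """Weisfeiler-Lehman refinement: each round, a node's label becomes a
    tuple of its own label plus its neighbours' sorted labels; the feature
    vector counts every label seen in every round."""
    feats = Counter(labels)
    cur = list(labels)
    for _ in range(iters):
        cur = [(cur[v],) + tuple(sorted(cur[u] for u in adj[v]))
               for v in range(len(adj))]
        feats.update(cur)
    return feats

def wl_kernel(g1, g2, iters=2):
    """Dot product of WL feature counts -- usable as a covariance between
    candidate architecture graphs in a Gaussian-process surrogate."""
    f1, f2 = wl_features(*g1, iters), wl_features(*g2, iters)
    return sum(f1[k] * f2[k] for k in f1)
```

A graph is passed as `(adjacency_lists, node_labels)`; similar graphs share refined labels and so score higher.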
  • Patent number: 11727277
    Abstract: A method for automatically generating an artificial neural network that encompasses modules and connections that link those modules, successive modules and/or connections being added to a current starting network. Modules and/or connections that are to be added are selected randomly from a predefinable plurality of possible modules and connections that can be added. A plurality of possible refinements of the current starting network respectively are generated by adding to the starting network modules and/or connections that are to be added. One of the refinements from the plurality of possible refinements is then selected in order to serve as a current starting network in a subsequent execution of the method.
    Type: Grant
    Filed: October 24, 2018
    Date of Patent: August 15, 2023
    Assignee: ROBERT BOSCH GMBH
    Inventors: Frank Hutter, Jan Hendrik Metzen, Thomas Elsken
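In its simplest reading, the refinement loop above repeatedly samples random additions and keeps one refinement as the next starting network. A minimal sketch with invented names, representing networks as plain module lists and selecting the best-scoring refinement:

```python
import random

def refine(network, candidate_modules, n_refinements, score, rng):
    """Generate several refinements by randomly adding one module each,
    then keep the best-scoring refinement as the new starting network."""
    refinements = [network + [rng.choice(candidate_modules)]
                   for _ in range(n_refinements)]
    return max(refinements, key=score)

def grow(start, candidate_modules, steps, score, seed=0):
    """Repeat the refine-and-select loop; each winner becomes the current
    starting network for the next execution."""
    rng = random.Random(seed)
    net = start
    for _ in range(steps):
        net = refine(net, candidate_modules, 4, score, rng)
    return net
```

Here `score` is any cheap proxy for network quality; real modules would be layers or connections rather than strings.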
  • Patent number: 11628562
    Abstract: A method for producing a strategy for a robot. The method includes the following steps: initializing the strategy and an episode length; repeated execution of the loop including the following steps: producing a plurality of further strategies as a function of the strategy; applying the plurality of the further strategies for the length of the episode length; ascertaining respectively a cumulative reward, which is obtained in the application of the respective further strategy; updating the strategy as a function of a second plurality of the further strategies that obtained the greatest cumulative rewards. After each execution of the loop, the episode length is increased. A computer program, a device for carrying out the method, and a machine-readable memory element on which the computer program is stored, are also described.
    Type: Grant
    Filed: July 6, 2020
    Date of Patent: April 18, 2023
    Assignee: ROBERT BOSCH GMBH
    Inventors: Frank Hutter, Lior Fuks, Marius Lindauer, Noor Awad
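The growing-episode-length scheme can be sketched as a population-based loop over a toy one-parameter policy. Aggregating the elite by averaging and all constants are assumptions for illustration, not the patented procedure:

```python
import random

def evolve_policy(env_reward, init_policy, perturb, pop_size=12,
                  loops=8, init_episode_len=2, seed=0):
    """Each loop: perturb the current policy into a population, roll every
    variant out for the current episode length, average the top half, then
    increase the episode length (short rollouts first, long ones later)."""
    rng = random.Random(seed)
    policy, ep_len = init_policy, init_episode_len
    for _ in range(loops):
        population = [perturb(policy, rng) for _ in range(pop_size)]
        ranked = sorted(population, key=lambda p: env_reward(p, ep_len),
                        reverse=True)
        elite = ranked[: pop_size // 2]
        policy = sum(elite) / len(elite)   # aggregate the best variants
        ep_len += 1                        # grow episodes after each loop
    return policy
```

On a toy task whose cumulative reward peaks at policy value 3, the loop drifts the policy toward that optimum.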
  • Publication number: 20220351498
    Abstract: A method for generating training data for training a machine learning algorithm. The method includes the following steps: providing first training data and generating additional training data from at least one portion of the first training data, wherein the additional training data are generated in each case by applying, to all the training data included in the at least one portion of the first training data, an augmentation function randomly selected from a set of possible augmentation functions; providing the first training data and the additional training data for training the machine learning algorithm.
    Type: Application
    Filed: March 31, 2022
    Publication date: November 3, 2022
    Inventors: Frank Hutter, Samuel Gabriel Mueller
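The key detail in this abstract is that one randomly selected augmentation function is applied to all training data in a portion, rather than a fresh augmentation per sample. A minimal sketch with invented names:

```python
import random

def augment(batch, augmentations, rng):
    """Apply ONE randomly selected augmentation to every sample in the
    batch, so the whole portion is transformed consistently."""
    fn = rng.choice(augmentations)
    return [fn(x) for x in batch]

def build_training_set(first_data, augmentations, n_extra_batches, seed=0):
    """Return the first training data plus additional batches, each batch
    generated by one randomly drawn augmentation function."""
    rng = random.Random(seed)
    extra = []
    for _ in range(n_extra_batches):
        extra.extend(augment(first_data, augmentations, rng))
    return first_data + extra
```

Each extra batch is internally consistent: every element went through the same randomly drawn function.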
  • Publication number: 20220327390
    Abstract: A method for training a neural network that includes a first number of layers. The network is trained on a training sequence comprising a plurality of training patterns using a backpropagation algorithm. Each time the backpropagation algorithm is applied during one of the training patterns, a second number of layers of the neural network is disregarded. The absolute value of the second number is variable and is randomly selected before each training pattern, under the condition that it is greater than or equal to zero and smaller than the absolute value of the first number. The disregarded layers are the input layer of the neural network and the layers immediately following it.
    Type: Application
    Filed: March 31, 2022
    Publication date: October 13, 2022
    Inventors: Ben Wilhelm, Frank Hutter, Matilde Gargiani
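The layer-skipping rule can be sketched with a toy optimizer in which each "layer" is a single weight: per training step, a random number of leading layers is excluded from the update. The gradient model below is a stand-in, not the patented training procedure:

```python
import random

def sgd_with_layer_skipping(weights, grad_fn, steps, lr=0.1, seed=0):
    """Per step, draw k uniformly from [0, n_layers) and update only
    layers k..end; the input side of the network (layers 0..k-1) is
    disregarded by backpropagation for that training pattern."""
    rng = random.Random(seed)
    for _ in range(steps):
        k = rng.randrange(len(weights))   # 0 <= k < number of layers
        grads = grad_fn(weights)
        for i in range(k, len(weights)):
            weights[i] -= lr * grads[i]
    return weights
```

Later layers are updated at least as often as earlier ones, which is exactly the asymmetry the abstract describes.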
  • Publication number: 20220292349
    Abstract: A device, computer-implemented method for the processing of digital sensor data and training methods therefor. A plurality of training tasks from a distribution of training tasks are provided, the training tasks characterizing the processing of digital sensor data. A parameter set for an architecture and for weights of an artificial neural network is determined with a first gradient-based learning algorithm and with a second gradient-based algorithm as a function of at least one first training task from the distribution of training tasks. The artificial neural network is trained with the first gradient-based learning algorithm as a function of the parameter set and as a function of a second training task.
    Type: Application
    Filed: June 24, 2020
    Publication date: September 15, 2022
    Applicant: Robert Bosch GmbH
    Inventors: Danny Oliver Stoll, Frank Hutter, Jan Hendrik Metzen, Thomas Elsken
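The two-loop gradient scheme resembles first-order meta-learning: adapt a copy of the shared parameter set to one sampled task with ordinary gradient steps, then move the shared parameters toward the adapted copy, and finally fine-tune on a new task. A Reptile-style toy sketch over a scalar "parameter set" (an illustrative analogy, not the patented algorithm):

```python
import random

def reptile(task_targets, grad, params, inner_steps=5, inner_lr=0.1,
            meta_lr=0.5, meta_steps=50, seed=0):
    """First-order meta-learning sketch: adapt a copy of the shared
    parameters to one sampled task with plain gradient steps, then move
    the shared parameters toward the adapted copy."""
    rng = random.Random(seed)
    for _ in range(meta_steps):
        task = rng.choice(task_targets)
        p = params
        for _ in range(inner_steps):
            p -= inner_lr * grad(p, task)   # inner, task-specific loop
        params += meta_lr * (p - params)    # outer, meta update
    return params
```

With quadratic tasks centred at 1 and 3, the meta-parameters settle between the task optima, a good starting point for fine-tuning on either.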
  • Publication number: 20220230416
    Abstract: A computer-implemented method for training a machine learning system including: initializing parameters of the machine learning system and a metaparameter. Repeatedly carrying out the following as a loop: providing a batch of training data points and manipulating the provided training data points or a training method for optimizing the parameters of the machine learning system or a structure of the machine learning system based on the metaparameter. Ascertaining a cost function as a function of instantaneous parameters of the machine learning system and of the instantaneous metaparameters. Adapting the instantaneous parameters as a function of an ascertained first gradient, which has been ascertained with respect to the instantaneous parameters via the ascertained cost function for the training data points, and adapting the metaparameter as a function of a second gradient, which has been ascertained with respect to the metaparameter used in the preceding step via the ascertained cost function.
    Type: Application
    Filed: January 12, 2022
    Publication date: July 21, 2022
    Inventors: Samuel Gabriel Mueller, Andre Biedenkapp, Frank Hutter
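The two-gradient update can be illustrated on a toy scalar cost: first the model parameter is adapted with the gradient taken with respect to it, then the metaparameter with its own gradient on the same cost. The cost function and constants below are invented for illustration:

```python
def train_with_meta(grad_params, grad_meta, theta, lam,
                    lr=0.05, steps=500):
    """Alternating updates on one cost function: first the model
    parameter theta via its gradient, then the metaparameter lam via the
    gradient taken with respect to it (toy stand-in for the loop in the
    abstract)."""
    for _ in range(steps):
        theta -= lr * grad_params(theta, lam)
        lam -= lr * grad_meta(theta, lam)
    return theta, lam
```

Both quantities descend the same cost, so they converge jointly rather than the metaparameter being tuned in a separate outer search.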
  • Publication number: 20220222493
    Abstract: A computer-implemented method for learning a strategy and/or method for learning a synthetic environment. The strategy is configured to control an agent, and the method includes: providing synthetic environment parameters, a real environment, and a population of strategies. Subsequently, repeating the following steps for a predetermined number of repetitions as a first loop: carrying out for each strategy of the population of strategies the following steps as a second loop: disturbing the synthetic environment parameters with random noise; training the strategy for a first given number of steps on the disturbed synthetic environment; evaluating the trained strategy on the real environment by determining rewards of the trained strategies; and updating the synthetic environment parameters depending on the noise and the rewards. Finally, outputting the evaluated strategy with the highest reward on the real environment or the strategy best trained on the disturbed synthetic environment.
    Type: Application
    Filed: December 14, 2021
    Publication date: July 14, 2022
    Inventors: Thomas Nierhoff, Fabio Ferreira, Frank Hutter
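The outer loop resembles an evolution-strategies update on the synthetic-environment parameters: disturb with noise, train on the disturbed environment, score on the real environment, and move toward the rewarded noise directions. A toy sketch with a one-parameter environment and a "training" step that simply reads off the parameter (all specifics invented):

```python
import random

def learn_synthetic_env(real_reward, train_on_env, s, pop=10, loops=30,
                        sigma=0.5, lr=0.3, seed=0):
    """Outer evolution-strategies loop: disturb the synthetic-environment
    parameter with noise, train a strategy on each disturbed environment,
    score the trained strategies on the real environment, and move the
    parameter toward noise directions that earned high real reward."""
    rng = random.Random(seed)
    best_policy, best_r = None, float("-inf")
    for _ in range(loops):
        noises = [rng.gauss(0, sigma) for _ in range(pop)]
        rewards = []
        for n in noises:
            policy = train_on_env(s + n)
            r = real_reward(policy)
            rewards.append(r)
            if r > best_r:
                best_policy, best_r = policy, r
        mean_r = sum(rewards) / pop
        s += lr / (pop * sigma) * sum(n * (r - mean_r)
                                      for n, r in zip(noises, rewards))
    return s, best_policy
```

The synthetic environment ends up tuned so that strategies trained on it do well on the real environment.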
  • Publication number: 20220114446
    Abstract: A method for creating a neural network, which includes an encoder that is connected to a decoder. The optimization method DARTS is used, a further cell type being added to the cell types of DARTS. A computer program and a device for carrying out the method, and a machine-readable memory element, on which the computer program is stored, are also described.
    Type: Application
    Filed: April 8, 2020
    Publication date: April 14, 2022
    Inventors: Arber Zela, Frank Hutter, Thomas Brox, Tonmoy Saikia, Yassine Marrakchi
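DARTS, which this method extends with a further cell type for the encoder-decoder setting, relies on a continuous relaxation: each edge in a cell computes a softmax-weighted mixture of all candidate operations. A minimal sketch of that mixed operation; the added cell type would simply use its own candidate-operation set (e.g. upsampling operations, an assumption here):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of architecture weights."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def mixed_op(x, alphas, ops):
    """DARTS continuous relaxation: an edge's output is the
    softmax(alpha)-weighted sum of every candidate operation's output."""
    return sum(w * op(x) for w, op in zip(softmax(alphas), ops))
```

Because the mixture is differentiable in `alphas`, the architecture weights can be trained by gradient descent alongside the network weights.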
  • Publication number: 20220051753
    Abstract: A method for creating a strategy, which is configured to determine a placement of nucleotides within a primary RNA structure as a function of a detail of a predefined secondary structure. The method includes the following steps: initializing the strategy; providing a task representation, the task representation including structural restrictions of the secondary RNA structure and sequential restrictions of the primary RNA structure; determining a primary candidate RNA sequence with the aid of the strategy as a function of the task representation; adapting the strategy with the aid of a reinforcement learning algorithm in such a way that a total loss is optimized.
    Type: Application
    Filed: June 22, 2021
    Publication date: February 17, 2022
    Inventors: Rolf Backofen, Frank Hutter, Frederic Runge
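The design loop can be caricatured as proposing candidate primary sequences and scoring them by how many required base pairs are Watson-Crick complementary. Below, a hill-climbing stand-in replaces the reinforcement-learning update; names and the scoring detail are invented:

```python
import random

BASES = "ACGU"
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def design_rna(pairs, length, iters=500, seed=0):
    """Hill-climbing stand-in for the RL loop: propose point mutations to
    a candidate primary sequence and keep those that do not lower the
    fraction of required base pairs that are Watson-Crick complementary."""
    rng = random.Random(seed)
    seq = [rng.choice(BASES) for _ in range(length)]

    def score(s):
        return sum(COMPLEMENT[s[i]] == s[j] for i, j in pairs) / len(pairs)

    best = score(seq)
    for _ in range(iters):
        cand = list(seq)
        cand[rng.randrange(length)] = rng.choice(BASES)
        if score(cand) >= best:
            seq, best = cand, score(cand)
    return "".join(seq), best
```

`pairs` plays the role of the structural restrictions of the secondary structure; sequential restrictions could be added by fixing certain positions.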
  • Publication number: 20220051138
    Abstract: A method for the transfer learning of hyperparameters of a machine learning algorithm. The method includes providing a current search space and a previous search space. A reduced search space is then created and candidate configurations are drawn repeatedly at random from the reduced search space and from the current search space, and the machine learning algorithm, parameterized in each case with the candidate configurations, is applied. A Tree Parzen Estimator (TPE) is then created as a function of the candidate configurations and the results of the machine learning algorithm parameterized with the candidate configurations, and the drawing of further candidate configurations from the current search space using the TPE is repeated multiple times, the TPE being updated upon each drawing.
    Type: Application
    Filed: August 5, 2021
    Publication date: February 17, 2022
    Inventors: Danny Stoll, Diane Wagner, Frank Hutter, Joerg Franke, Simon Selg
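One plausible reading of the reduced search space is the region spanned by the best configurations from the previous task, clipped to the current space, with candidates drawn alternately from the reduced and the full space. A one-dimensional sketch under that assumption (not the patented construction):

```python
import random

def reduced_space(prev_results, current_space, top_k=3):
    """Shrink the current 1-D search interval to the span of the best
    configurations from the previous task, clipped to the current space.
    prev_results: list of (config, loss) from the previous search space."""
    best = sorted(prev_results, key=lambda t: t[1])[:top_k]
    lo = max(current_space[0], min(x for x, _ in best))
    hi = min(current_space[1], max(x for x, _ in best))
    return (lo, hi) if lo < hi else current_space

def draw_candidates(prev_results, current_space, n, seed=0):
    """Alternate random draws between the reduced and full spaces, so the
    transferred region is exploited without locking out new territory."""
    rng = random.Random(seed)
    red = reduced_space(prev_results, current_space)
    return [rng.uniform(*(red if i % 2 == 0 else current_space))
            for i in range(n)]
```

The drawn candidates and their results would then seed the TPE described in the abstract.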
  • Publication number: 20220027743
    Abstract: A method for learning a strategy that optimally adapts at least one parameter of an evolutionary algorithm. The method includes the following steps: initializing the strategy, which ascertains a parameterization of the parameter as a function of pieces of state information; and learning the strategy with the aid of reinforcement learning, from interactions of the CMA-ES algorithm, parameterized as determined by the strategy from the pieces of state information, with the problem instance and with a reward signal, so that the learned parameterization is optimal for possible pieces of state information.
    Type: Application
    Filed: July 9, 2021
    Publication date: January 27, 2022
    Inventors: Steven Adriaenssen, Andre Biedenkapp, Frank Hutter, Gresa Shala, Marius Lindauer, Noor Awad
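The learned strategy is, in spirit, a reinforcement-learning controller that picks an algorithm parameter per step from state information (dynamic algorithm configuration). A tabular Q-learning toy on a stand-in optimization task, not on CMA-ES, with all constants invented:

```python
import random

def learn_controller(episodes=200, horizon=8, seed=0):
    """Tabular Q-learning sketch of dynamic algorithm configuration: the
    state summarizes optimizer progress, the action sets the step size for
    the next iteration, and the reward is the improvement achieved."""
    rng = random.Random(seed)
    actions = [0.25, 0.5, 1.0]                # candidate parameter values
    q = {}
    for _ in range(episodes):
        x = 4.0                               # toy task: drive |x| to zero
        for _ in range(horizon):
            s = min(int(abs(x)), 4)           # coarse state: bucketed |x|
            qs = q.setdefault(s, [0.0] * len(actions))
            if rng.random() < 0.2:            # epsilon-greedy exploration
                a = rng.randrange(len(actions))
            else:
                a = max(range(len(actions)), key=lambda i: qs[i])
            step = actions[a]
            nx = x - step if x > 0 else x + step if x < 0 else x
            r = abs(x) - abs(nx)              # reward: progress this step
            s2 = min(int(abs(nx)), 4)
            q2 = q.setdefault(s2, [0.0] * len(actions))
            qs[a] += 0.1 * (r + 0.9 * max(q2) - qs[a])
            x = nx
    return q
```

In the patented setting, the state would describe the CMA-ES run and the action its step-size parameterization.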
  • Publication number: 20220012636
    Abstract: Computer-implemented method for creating a system, which is suitable for creating in an automated manner a machine learning system for computer vision. The method includes: providing predefined hyperparameters; determining an optimal parameterization of the hyperparameters using BOHB (Bayesian optimization (BO) and Hyperband (HB)) for a plurality of different training data sets; assessing all optimal parameterizations on all training data sets of the plurality of different training data sets with the aid of a normalized metric; creating a matrix, the matrix including the evaluated normalized metric for each parameterization and for each training data set; determining meta-features for each of the training data sets; optimizing a decision tree, which outputs, as a function of the meta-features and of the matrix, which of the optimal parameterizations found using BOHB is suitable for the given meta-features.
    Type: Application
    Filed: July 2, 2021
    Publication date: January 13, 2022
    Inventors: Marius Lindauer, Arber Zela, Danny Oliver Stoll, Fabio Ferreira, Frank Hutter, Thomas Nierhoff
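The final step maps meta-features to a recommended parameterization via the performance matrix. As a stand-in for the optimized decision tree, a nearest-neighbour lookup shows the shape of that selection step (the patented method learns a decision tree instead):

```python
def select_config(matrix, meta_features, new_meta):
    """Stand-in for the learned decision tree: pick the training data set
    whose meta-features are closest to the new data set's, then return the
    configuration (row) that scored best on it in the performance matrix.
    matrix[c][d]: normalized metric of configuration c on data set d."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(range(len(meta_features)),
                  key=lambda d: dist(meta_features[d], new_meta))
    return max(range(len(matrix)), key=lambda c: matrix[c][nearest])
```

A decision tree would generalize this lookup by learning splits over the meta-features instead of memorizing them.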
  • Publication number: 20210383245
    Abstract: A computer-implemented method for planning an operation of a technical system within its environment. The method includes: obtaining state information comprising a current domain, a time step, and a current state; determining, via heuristics, costs for states reachable from the current state; selecting a heuristic via a policy from a set of predefined heuristics depending on the state information and costs; choosing, from the reachable states, the state with the lowest cost returned by the selected heuristic; and determining which operation, out of the set of possible operations, has to be carried out by the technical system to reach said state.
    Type: Application
    Filed: April 28, 2021
    Publication date: December 9, 2021
    Inventors: Jonathan Spitz, Andre Biedenkapp, David Speck, Frank Hutter, Marius Lindauer, Robert Mattmueller
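The per-step heuristic selection can be sketched as a greedy search in which a policy chooses, at each step, which heuristic ranks the reachable states (toy integer state space, all names invented):

```python
def plan(start, goal, successors, heuristics, policy, max_steps=50):
    """Greedy search: at each step the policy selects which heuristic to
    trust, and the reachable state with the lowest heuristic cost becomes
    the next state. Returns the visited path."""
    state, path = start, [start]
    for step in range(max_steps):
        if state == goal:
            return path
        h = policy(state, step, heuristics)
        state = min(successors(state), key=lambda s: h(s, goal))
        path.append(state)
    return path
```

Switching heuristics per step is the point: no single heuristic has to be good everywhere, only the selection policy.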
  • Publication number: 20210264256
    Abstract: A method for predicting a suitable configuration of a machine learning system for a first training data set. The method starts by training a plurality of machine learning systems on the first training data set, the machine learning systems and/or the training methods used being configured differently. This is followed by a creation of a second training data set including ascertained performances of the trained machine learning systems and the assigned configuration of the particular machine learning systems and/or training methods. This is followed by a training of a graph isomorphism network (GIN), depending on the second training data set, and a prediction in each case of the performance of a plurality of configurations not used for the training, with the aid of the GIN. A computer program and a device for carrying out the method and a machine-readable memory element, on which the computer program is stored, are also described.
    Type: Application
    Filed: November 17, 2020
    Publication date: August 26, 2021
    Inventors: Arber Zela, Frank Hutter, Julien Siems, Lucas Zimmer
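A graph isomorphism network aggregates node features as (1 + eps) * own + sum of neighbours' per layer and pools them into a graph-level embedding that a performance predictor can consume. A minimal untrained embedding sketch, for shape only (no learned weights, unlike the GIN in the abstract):

```python
def gin_embed(adj, feats, layers=2, eps=0.1):
    """GIN-flavoured embedding: each layer maps a node's feature to
    (1 + eps) * own + sum of neighbours'; the readout collects the
    per-layer sums over all nodes into a graph-level vector."""
    out, cur = [], list(feats)
    for _ in range(layers):
        cur = [(1 + eps) * cur[v] + sum(cur[u] for u in adj[v])
               for v in range(len(adj))]
        out.append(sum(cur))
    return out
```

In the real method, such embeddings of configuration graphs feed a regressor trained on the second training data set to rank unseen configurations.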
  • Publication number: 20210133576
    Abstract: A method for automatically generating an artificial neural network that encompasses modules and connections that link those modules, successive modules and/or connections being added to a current starting network. Modules and/or connections that are to be added are selected randomly from a predefinable plurality of possible modules and connections that can be added. A plurality of possible refinements of the current starting network respectively are generated by adding to the starting network modules and/or connections that are to be added. One of the refinements from the plurality of possible refinements is then selected in order to serve as a current starting network in a subsequent execution of the method.
    Type: Application
    Filed: October 24, 2018
    Publication date: May 6, 2021
    Inventors: Frank Hutter, Jan Hendrik Metzen, Thomas Elsken
  • Publication number: 20210008718
    Abstract: A method for producing a strategy for a robot. The method includes the following steps: initializing the strategy and an episode length; repeated execution of the loop including the following steps: producing a plurality of further strategies as a function of the strategy; applying the plurality of the further strategies for the length of the episode length; ascertaining respectively a cumulative reward, which is obtained in the application of the respective further strategy; updating the strategy as a function of a second plurality of the further strategies that obtained the greatest cumulative rewards. After each execution of the loop, the episode length is increased. A computer program, a device for carrying out the method, and a machine-readable memory element on which the computer program is stored, are also described.
    Type: Application
    Filed: July 6, 2020
    Publication date: January 14, 2021
    Inventors: Frank Hutter, Lior Fuks, Marius Lindauer, Noor Awad
  • Publication number: 20210012183
    Abstract: A method for ascertaining a suitable network configuration for a neural network.
    Type: Application
    Filed: April 17, 2019
    Publication date: January 14, 2021
    Inventors: Thomas Elsken, Frank Hutter, Jan Hendrik Metzen
  • Publication number: 20200410347
    Abstract: A method for ascertaining a suitable network configuration for a neural network for a predefined application that is determined in the form of training data.
    Type: Application
    Filed: April 17, 2019
    Publication date: December 31, 2020
    Inventors: Thomas Elsken, Frank Hutter, Jan Hendrik Metzen