Patents by Inventor Frank Hutter
Frank Hutter has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12321827
Abstract: A method for the transfer learning of hyperparameters of a machine learning algorithm. The method includes providing a current search space and a previous search space. A reduced search space is then created, and candidate configurations are drawn repeatedly at random from the reduced search space and from the current search space, and the machine learning algorithm, parameterized in each case with the candidate configurations, is applied. A Tree Parzen Estimator (TPE) is then created as a function of the candidate configurations and the results of the machine learning algorithm parameterized with them, and the drawing of further candidate configurations from the current search space using the TPE is repeated multiple times, the TPE being updated upon each drawing.
Type: Grant
Filed: August 5, 2021
Date of Patent: June 3, 2025
Assignee: ROBERT BOSCH GMBH
Inventors: Danny Stoll, Diane Wagner, Frank Hutter, Joerg Franke, Simon Selg
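For orientation, here is a minimal illustrative sketch of such a TPE loop, assuming a one-dimensional search space, a placeholder objective `evaluate`, and SciPy Gaussian KDEs standing in for a full tree-structured Parzen estimator; it sketches the general technique, not the patented method.

```python
# Hypothetical sketch of a TPE loop warm-started from a reduced search space.
# `evaluate`, the spaces, and all constants are illustrative placeholders.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

def evaluate(x):                       # stand-in for running the ML algorithm
    return (x - 0.3) ** 2              # lower is better

current_space = (0.0, 1.0)             # current hyperparameter range
reduced_space = (0.2, 0.5)             # "reduced" range from the previous task

# 1) random candidate configurations from the reduced and the current space
xs = np.concatenate([rng.uniform(*reduced_space, 8),
                     rng.uniform(*current_space, 8)])
ys = np.array([evaluate(x) for x in xs])

# 2) TPE iterations: model good vs. bad observations, then pick the
#    candidate maximizing the density ratio l(x)/g(x)
for _ in range(20):
    cutoff = np.quantile(ys, 0.25)
    good, bad = xs[ys <= cutoff], xs[ys > cutoff]
    l, g = gaussian_kde(good), gaussian_kde(bad)
    cand = rng.uniform(*current_space, 64)
    x_new = cand[np.argmax(l(cand) / np.maximum(g(cand), 1e-12))]
    xs = np.append(xs, x_new)
    ys = np.append(ys, evaluate(x_new))   # the TPE is refit on the next pass

print("best config:", xs[np.argmin(ys)])
```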
-
Patent number: 12254675
Abstract: A computer-implemented method for training a machine learning system, including: initializing parameters of the machine learning system and a metaparameter; repeatedly carrying out the following as a loop: providing a batch of training data points and manipulating, based on the metaparameter, the provided training data points, a training method for optimizing the parameters of the machine learning system, or a structure of the machine learning system; ascertaining a cost function as a function of the instantaneous parameters of the machine learning system and of the instantaneous metaparameter; adapting the instantaneous parameters as a function of a first gradient, ascertained with respect to the instantaneous parameters via the cost function for the training data points; and adapting the metaparameter as a function of a second gradient, ascertained with respect to the metaparameter used in the preceding step via the cost function.
Type: Grant
Filed: January 12, 2022
Date of Patent: March 18, 2025
Assignee: ROBERT BOSCH GMBH
Inventors: Samuel Gabriel Mueller, Andre Biedenkapp, Frank Hutter
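A minimal sketch of the alternating gradient updates described above, assuming a toy linear model and a single scalar metaparameter that scales an augmentation noise; all names and constants are illustrative, not from the patent.

```python
# Hypothetical sketch: one backward pass yields the first gradient (w.r.t.
# the model parameters) and the second gradient (w.r.t. the metaparameter).
import torch

model = torch.nn.Linear(4, 1)
meta = torch.tensor(0.1, requires_grad=True)       # the metaparameter
opt_w = torch.optim.SGD(model.parameters(), lr=1e-2)
opt_m = torch.optim.SGD([meta], lr=1e-3)

for step in range(100):
    x = torch.randn(32, 4)                         # a batch of data points
    y = x.sum(dim=1, keepdim=True)
    x_aug = x + meta * torch.randn_like(x)         # manipulation driven by meta
    loss = torch.nn.functional.mse_loss(model(x_aug), y)

    opt_w.zero_grad(); opt_m.zero_grad()
    loss.backward()                                # both gradients ascertained
    opt_w.step()                                   # adapt the parameters
    opt_m.step()                                   # adapt the metaparameter
```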
-
Publication number: 20230334371
Abstract: A method for training a machine learning algorithm taking into account at least one inequality constraint. Each of the at least one inequality constraint represents a secondary constraint. The method includes: optimizing hyperparameters for the machine learning algorithm by applying a tree-structured Parzen estimator, wherein the tree-structured Parzen estimator is based on an acquisition function adapted on the basis of the at least one inequality constraint; and training the machine learning algorithm on the basis of the optimized hyperparameters.
Type: Application
Filed: April 12, 2023
Publication date: October 19, 2023
Inventors: Frank Hutter, Suhei Watanabe
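One simple way to fold an inequality constraint into TPE is to require "good" observations to also be feasible; the sketch below does exactly that, with a placeholder `objective` and `constraint`. This is an assumed simplification for illustration, not necessarily the acquisition adaptation claimed in the application.

```python
# Hypothetical sketch of a constraint-aware TPE split: a point counts as
# "good" only if it is both low-cost and feasible. All names are placeholders.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
objective = lambda x: (x - 0.7) ** 2        # minimize
constraint = lambda x: 0.4 - x              # feasible iff c(x) <= 0

xs = rng.uniform(0, 1, 16)
for _ in range(25):
    ys, feas = objective(xs), constraint(xs) <= 0
    cutoff = np.quantile(ys, 0.3)
    good_mask = (ys <= cutoff) & feas       # "good" must also be feasible
    if good_mask.sum() < 2 or (~good_mask).sum() < 2:
        x_new = rng.uniform(0, 1)           # too little data: sample at random
    else:
        l = gaussian_kde(xs[good_mask])
        g = gaussian_kde(xs[~good_mask])
        cand = rng.uniform(0, 1, 64)
        x_new = cand[np.argmax(l(cand) / np.maximum(g(cand), 1e-12))]
    xs = np.append(xs, x_new)

feas = constraint(xs) <= 0
print("best feasible config:", xs[feas][np.argmin(objective(xs[feas]))])
```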
-
Publication number: 20230306265
Abstract: A method for determining an optimal architecture of a neural network. The method includes: defining a search space by means of a context-free grammar; training neural networks with candidate architectures on the training data, and validating the trained neural networks on the validation data; initializing a Gaussian process, the Gaussian process comprising a Weisfeiler-Lehman graph kernel; adapting the Gaussian process such that, given the candidate architectures, it predicts the validation performance achieved with these candidate architectures; and performing a Bayesian optimization to find the candidate architecture that achieves the best performance.
Type: Application
Filed: March 15, 2023
Publication date: September 28, 2023
Inventors: Danny Stoll, Frank Hutter, Simon Schrodi
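An illustrative sketch of the outer loop follows, assuming a toy context-free grammar, a shared-token kernel standing in for the Weisfeiler-Lehman graph kernel, and the GP posterior mean as a greedy acquisition; every function here is a placeholder, not the claimed method.

```python
# Hypothetical sketch of grammar-based architecture search with a GP
# surrogate. A shared-token kernel stands in for the WL graph kernel;
# the grammar, scorer, and constants are all illustrative.
import random
import numpy as np

random.seed(0)
GRAMMAR = {"S": [["conv", "S"], ["pool", "S"], ["dense"]]}  # toy CFG

def sample_arch(sym="S"):
    rule = random.choice(GRAMMAR[sym])
    return [t for s in rule for t in (sample_arch(s) if s in GRAMMAR else [s])]

def kernel(a, b):                      # stand-in for a WL graph kernel
    shared = sum(x == y for x, y in zip(a, b))
    return np.exp(shared - max(len(a), len(b)))

def validation_score(arch):            # stand-in for train + validate
    return -abs(len(arch) - 4) + random.gauss(0, 0.1)

archs = [sample_arch() for _ in range(5)]
ys = np.array([validation_score(a) for a in archs])

for _ in range(15):                    # BO loop, GP posterior mean as acquisition
    K = np.array([[kernel(a, b) for b in archs] for a in archs])
    coef = np.linalg.solve(K + 1e-3 * np.eye(len(archs)), ys)
    cands = [sample_arch() for _ in range(32)]
    means = [np.array([kernel(c, a) for a in archs]) @ coef for c in cands]
    best = cands[int(np.argmax(means))]
    archs.append(best)
    ys = np.append(ys, validation_score(best))

print("best architecture:", archs[int(np.argmax(ys))])
```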
-
Patent number: 11727277
Abstract: A method for automatically generating an artificial neural network that encompasses modules and connections that link those modules, successive modules and/or connections being added to a current starting network. Modules and/or connections that are to be added are selected randomly from a predefinable plurality of possible modules and connections that can be added. A plurality of possible refinements of the current starting network are generated by adding to the starting network the modules and/or connections that are to be added. One of the refinements from the plurality of possible refinements is then selected in order to serve as the current starting network in a subsequent execution of the method.
Type: Grant
Filed: October 24, 2018
Date of Patent: August 15, 2023
Assignee: ROBERT BOSCH GMBH
Inventors: Frank Hutter, Jan Hendrik Metzen, Thomas Elsken
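A minimal sketch of the refine-and-select loop, with networks encoded as layer lists plus skip connections and a placeholder `score` in place of actual training; the encoding is an assumption for illustration only.

```python
# Hypothetical sketch: random modules/connections are added to a starting
# network, several refinements are generated, and one is kept as the next
# starting network. The network encoding and scorer are placeholders.
import random
random.seed(0)

MODULES = [16, 32, 64]                       # widths of addable layers

def refine(net):                             # add a random module or connection
    net = {"layers": list(net["layers"]), "skips": set(net["skips"])}
    if random.random() < 0.5 or len(net["layers"]) < 3:
        net["layers"].insert(random.randrange(len(net["layers"]) + 1),
                             random.choice(MODULES))
    else:
        i, j = sorted(random.sample(range(len(net["layers"])), 2))
        net["skips"].add((i, j))             # new connection between modules
    return net

def score(net):                              # stand-in for train + validate
    return -abs(sum(net["layers"]) - 100) + 5 * len(net["skips"])

net = {"layers": [16], "skips": set()}
for _ in range(10):
    candidates = [refine(net) for _ in range(8)]
    net = max(candidates, key=score)         # becomes the next starting network

print("final network:", net)
```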
-
Patent number: 11628562
Abstract: A method for producing a strategy for a robot. The method includes the following steps: initializing the strategy and an episode length; and repeatedly executing a loop including the following steps: producing a plurality of further strategies as a function of the strategy; applying the plurality of further strategies for the duration of the episode length; ascertaining, in each case, a cumulative reward obtained in the application of the respective further strategy; and updating the strategy as a function of a second plurality of the further strategies that obtained the greatest cumulative rewards. After each execution of the loop, the episode length is increased. A computer program, a device for carrying out the method, and a machine-readable memory element on which the computer program is stored are also described.
Type: Grant
Filed: July 6, 2020
Date of Patent: April 18, 2023
Assignee: ROBERT BOSCH GMBH
Inventors: Frank Hutter, Lior Fuks, Marius Lindauer, Noor Awad
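A minimal sketch of the population loop with a growing episode length, assuming a toy `rollout` in place of a robot environment; constants and the elite-averaging update are illustrative assumptions.

```python
# Hypothetical sketch: perturbed strategies are rolled out for the current
# episode length, the best-rewarded ones update the strategy, and the
# episode length grows after every loop pass. All pieces are placeholders.
import numpy as np
rng = np.random.default_rng(0)

def rollout(policy, steps):                 # stand-in for applying a strategy
    return -np.sum((policy - 1.0) ** 2) * steps + rng.normal(0, 0.1)

policy, episode_len = np.zeros(4), 10
for generation in range(30):
    perturbed = [policy + 0.1 * rng.normal(size=4) for _ in range(16)]
    rewards = [rollout(p, episode_len) for p in perturbed]
    elite = np.argsort(rewards)[-4:]        # strategies with greatest reward
    policy = np.mean([perturbed[i] for i in elite], axis=0)
    episode_len += 5                        # increased after each execution

print("learned policy:", policy.round(2))
```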
-
Publication number: 20220351498
Abstract: A method for generating training data for training a machine learning algorithm. The method includes the following steps: providing first training data and generating additional training data from at least one portion of the first training data, wherein the additional training data are generated in each case by applying, to all the training data included in the at least one portion of the first training data, an augmentation function randomly selected from a set of possible augmentation functions; and providing the first training data and the additional training data for training the machine learning algorithm.
Type: Application
Filed: March 31, 2022
Publication date: November 3, 2022
Inventors: Frank Hutter, Samuel Gabriel Mueller
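A minimal sketch of the scheme, assuming toy array data and three placeholder augmentation functions; one randomly chosen function is applied to every point of the selected portion.

```python
# Hypothetical sketch: extra training data produced by applying one randomly
# selected augmentation function to all points of a portion of the data.
import random
import numpy as np

random.seed(0)
AUGMENTATIONS = [
    lambda x: np.flip(x, axis=-1),                      # horizontal flip
    lambda x: x + np.random.normal(0, 0.05, x.shape),   # additive noise
    lambda x: np.clip(x * 1.2, 0, 1),                   # brightness-like scaling
]

def augment_portion(portion):
    aug = random.choice(AUGMENTATIONS)    # one function for the whole portion
    return [aug(x) for x in portion]

first_data = [np.random.rand(8, 8) for _ in range(100)]
additional = augment_portion(first_data[:50])
training_set = first_data + additional    # both are provided for training
```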
-
Publication number: 20220327390
Abstract: A method for training a neural network that includes a first number of layers. In the method, the network is trained on a training sequence, which includes a plurality of training patterns, using a backpropagation algorithm. When the backpropagation algorithm is applied during each of the plurality of training patterns, a second number of layers of the neural network is disregarded in each case. The absolute value of the second number is variable and is randomly selected before each of the training patterns, under the condition that the absolute value is greater than or equal to zero and simultaneously smaller than the absolute value of the first number. The second number of layers comprises the input layer of the neural network and the layers of the neural network immediately following the input layer.
Type: Application
Filed: March 31, 2022
Publication date: October 13, 2022
Inventors: Ben Wilhelm, Frank Hutter, Matilde Gargiani
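A minimal sketch in PyTorch, where "disregarding" the lowest layers during backpropagation is approximated by disabling their parameter gradients before each batch; the model and constants are illustrative assumptions.

```python
# Hypothetical sketch: before each batch, a random number k of layers counted
# from the input is frozen, so backpropagation disregards their parameters.
import random
import torch

random.seed(0)
layers = torch.nn.ModuleList(torch.nn.Linear(8, 8) for _ in range(5))
model = torch.nn.Sequential(*layers, torch.nn.Linear(8, 1))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

for step in range(100):
    k = random.randrange(len(layers))        # 0 <= k < first number of layers
    for i, layer in enumerate(layers):       # input layer + immediate followers
        for p in layer.parameters():
            p.requires_grad_(i >= k)         # frozen layers get no gradient

    x = torch.randn(32, 8)
    loss = torch.nn.functional.mse_loss(model(x), x.sum(1, keepdim=True))
    opt.zero_grad(); loss.backward(); opt.step()
```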
-
Publication number: 20220292349
Abstract: A device and computer-implemented method for the processing of digital sensor data, and training methods therefor. A plurality of training tasks from a distribution of training tasks are provided, the training tasks characterizing the processing of digital sensor data. A parameter set for an architecture and for weights of an artificial neural network is determined with a first gradient-based learning algorithm and with a second gradient-based algorithm as a function of at least one first training task from the distribution of training tasks. The artificial neural network is trained with the first gradient-based learning algorithm as a function of the parameter set and as a function of a second training task.
Type: Application
Filed: June 24, 2020
Publication date: September 15, 2022
Applicant: Robert Bosch GmbH
Inventors: Danny Oliver Stoll, Frank Hutter, Jan Hendrik Metzen, Thomas Elsken
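A minimal sketch of the two nested gradient-based algorithms, in the spirit of MAML-style meta-learning, with toy "weights" and "architecture parameters"; the task distribution and model are illustrative placeholders.

```python
# Hypothetical sketch: an inner gradient step adapts the weights to a task,
# and an outer gradient step meta-updates weights + architecture parameters.
import torch

torch.manual_seed(0)
w = torch.zeros(4, requires_grad=True)       # network weights (toy)
arch = torch.zeros(2, requires_grad=True)    # architecture parameters (toy)

def task_loss(w, arch, target):
    mix = torch.softmax(arch, 0)             # soft choice between two "ops"
    pred = mix[0] * w.sum() + mix[1] * (w ** 2).sum()
    return (pred - target) ** 2

meta_opt = torch.optim.SGD([w, arch], lr=1e-2)
for _ in range(200):
    target = torch.randn(()) + 3.0           # a task from the distribution
    # first gradient-based algorithm: inner adaptation of the weights
    w_t = w - 0.1 * torch.autograd.grad(task_loss(w, arch, target), w,
                                        create_graph=True)[0]
    # second gradient-based algorithm: meta update of weights + architecture
    meta_opt.zero_grad()
    task_loss(w_t, arch, target).backward()
    meta_opt.step()
```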
-
Publication number: 20220230416
Abstract: A computer-implemented method for training a machine learning system, including: initializing parameters of the machine learning system and a metaparameter; repeatedly carrying out the following as a loop: providing a batch of training data points and manipulating, based on the metaparameter, the provided training data points, a training method for optimizing the parameters of the machine learning system, or a structure of the machine learning system; ascertaining a cost function as a function of the instantaneous parameters of the machine learning system and of the instantaneous metaparameter; adapting the instantaneous parameters as a function of a first gradient, ascertained with respect to the instantaneous parameters via the cost function for the training data points; and adapting the metaparameter as a function of a second gradient, ascertained with respect to the metaparameter used in the preceding step via the cost function.
Type: Application
Filed: January 12, 2022
Publication date: July 21, 2022
Inventors: Samuel Gabriel Mueller, Andre Biedenkapp, Frank Hutter
-
Publication number: 20220222493
Abstract: A computer-implemented method for learning a strategy and/or for learning a synthetic environment, the strategy being configured to control an agent. The method includes: providing synthetic environment parameters, a real environment, and a population of strategies; subsequently repeating the following for a predetermined number of repetitions as a first loop, carrying out the following steps for each strategy of the population of strategies as a second loop: disturbing the synthetic environment parameters with random noise; training the strategy on the disturbed synthetic environment for a first given number of steps; evaluating the trained strategy on the real environment by determining the rewards of the trained strategies; and updating the synthetic environment parameters depending on the noise and the rewards. Finally, the evaluated strategy with the highest reward on the real environment, or the strategy best trained on the disturbed synthetic environment, is output.
Type: Application
Filed: December 14, 2021
Publication date: July 14, 2022
Inventors: Thomas Nierhoff, Fabio Ferreira, Frank Hutter
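A minimal sketch of the two loops, using an evolution-strategies-style update of the synthetic environment parameters from noise and rewards; the environments and the trivial `train_strategy` are placeholders.

```python
# Hypothetical sketch: synthetic-environment parameters are perturbed with
# noise, strategies are trained on the perturbed environment, evaluated on
# the "real" environment, and the parameters are updated from noise x reward.
import numpy as np
rng = np.random.default_rng(0)

real_target = np.array([1.0, -2.0])           # defines the "real" environment
pop_size = 8

def train_strategy(env_params):               # toy: strategy mirrors the env
    return env_params.copy()

def real_reward(strategy):
    return -np.sum((strategy - real_target) ** 2)

env = np.zeros(2)                             # synthetic environment params
for _ in range(100):                          # first loop
    noises, rewards = [], []
    for _ in range(pop_size):                 # second loop over strategies
        eps = rng.normal(size=2)              # disturb with random noise
        strategy = train_strategy(env + 0.1 * eps)
        noises.append(eps)
        rewards.append(real_reward(strategy)) # evaluate on real environment
    r = np.array(rewards)
    r = (r - r.mean()) / (r.std() + 1e-8)
    env += 0.05 * np.mean([n * ri for n, ri in zip(noises, r)], axis=0)

print("learned synthetic env:", env.round(2))
```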
-
Publication number: 20220114446
Abstract: A method for creating a neural network which includes an encoder that is connected to a decoder. The optimization method DARTS is used, a further cell type being added to the cell types of DARTS. A computer program and a device for carrying out the method, and a machine-readable memory element on which the computer program is stored, are also described.
Type: Application
Filed: April 8, 2020
Publication date: April 14, 2022
Inventors: Arber Zela, Frank Hutter, Thomas Brox, Tonmoy Saikia, Yassine Marrakchi
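A minimal sketch of a DARTS-style mixed operation with a third, upsampling cell type added so that cells can be stacked into an encoder-decoder; the concrete operations are illustrative assumptions, not the claimed cell design.

```python
# Hypothetical sketch: DARTS-style mixed ops in three cell types, where the
# upsampling cell is the "further cell type" enabling an encoder-decoder.
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        self.alpha = nn.Parameter(torch.zeros(len(ops)))  # architecture weights
    def forward(self, x):
        w = torch.softmax(self.alpha, 0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

def normal_cell(c):      # keeps resolution
    return MixedOp([nn.Conv2d(c, c, 3, padding=1), nn.Identity()])

def reduction_cell(c):   # encoder side: halves resolution
    return MixedOp([nn.Conv2d(c, c, 3, stride=2, padding=1), nn.AvgPool2d(2)])

def upsampling_cell(c):  # the added cell type: decoder side, doubles resolution
    return MixedOp([nn.ConvTranspose2d(c, c, 2, stride=2),
                    nn.Upsample(scale_factor=2)])

net = nn.Sequential(normal_cell(8), reduction_cell(8),
                    normal_cell(8), upsampling_cell(8))
out = net(torch.randn(1, 8, 16, 16))   # encoder-decoder keeps the 16x16 output
```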
-
Publication number: 20220051753
Abstract: A method for creating a strategy which is configured to determine a placement of nucleotides within a primary RNA structure as a function of a detail of a predefined secondary structure. The method includes the following steps: initializing the strategy; providing a task representation, the task representation including structural restrictions of the secondary RNA structure and sequential restrictions of the primary RNA structure; determining a primary candidate RNA sequence with the aid of the strategy as a function of the task representation; and adapting the strategy with the aid of a reinforcement learning algorithm in such a way that a total loss is optimized.
Type: Application
Filed: June 22, 2021
Publication date: February 17, 2022
Inventors: Rolf Backofen, Frank Hutter, Frederic Runge
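A minimal sketch of the reinforcement learning step, using a REINFORCE update of per-position nucleotide probabilities and a toy reward that counts complementary bases at paired positions of a dot-bracket target; the task representation is heavily simplified relative to the application.

```python
# Hypothetical sketch: a policy over nucleotide placements is adapted by
# REINFORCE so that paired positions of the target structure receive
# complementary bases. The reward is a toy structural match score.
import numpy as np
rng = np.random.default_rng(0)

target = "((..))"                       # predefined secondary structure
BASES = "ACGU"
PAIRS = {("A","U"), ("U","A"), ("G","C"), ("C","G")}

stack, pairs = [], []
for i, ch in enumerate(target):         # extract the paired positions
    if ch == "(": stack.append(i)
    elif ch == ")": pairs.append((stack.pop(), i))

logits = np.zeros((len(target), 4))     # the strategy
for _ in range(2000):
    p = np.exp(logits); p /= p.sum(1, keepdims=True)
    seq = [rng.choice(4, p=pi) for pi in p]
    reward = sum((BASES[seq[i]], BASES[seq[j]]) in PAIRS for i, j in pairs)
    for pos, b in enumerate(seq):       # REINFORCE update of the strategy
        grad = -p[pos]
        grad[b] += 1.0
        logits[pos] += 0.05 * reward * grad

print("designed sequence:", "".join(BASES[b] for b in seq),
      "matched pairs:", reward)
```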
-
Publication number: 20220051138
Abstract: A method for the transfer learning of hyperparameters of a machine learning algorithm. The method includes providing a current search space and a previous search space. A reduced search space is then created, and candidate configurations are drawn repeatedly at random from the reduced search space and from the current search space, and the machine learning algorithm, parameterized in each case with the candidate configurations, is applied. A Tree Parzen Estimator (TPE) is then created as a function of the candidate configurations and the results of the machine learning algorithm parameterized with them, and the drawing of further candidate configurations from the current search space using the TPE is repeated multiple times, the TPE being updated upon each drawing.
Type: Application
Filed: August 5, 2021
Publication date: February 17, 2022
Inventors: Danny Stoll, Diane Wagner, Frank Hutter, Joerg Franke, Simon Selg
-
Publication number: 20220027743
Abstract: A method for learning a strategy which optimally adapts at least one parameter of an evolutionary algorithm. The method includes the following steps: initializing the strategy, which ascertains a parameterization of the parameter as a function of pieces of state information; and learning the strategy with the aid of reinforcement learning, it being learned from interactions of the CMA-ES algorithm, parameterized with the aid of the strategy as a function of the pieces of state information, with the problem instance and with a reward signal, which parameterization is optimal for possible pieces of state information.
Type: Application
Filed: July 9, 2021
Publication date: January 27, 2022
Inventors: Steven Adriaenssen, Andre Biedenkapp, Frank Hutter, Gresa Shala, Marius Lindauer, Noor Awad
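A minimal sketch of the idea, with a small Q-learning policy choosing the step size of a (1+1)-ES stand-in for CMA-ES based on the recent success rate; the states, actions, and problem instance are illustrative assumptions.

```python
# Hypothetical sketch: a Q-learning strategy picks the step size (the
# evolutionary algorithm's parameter) from the observed success rate.
import numpy as np
rng = np.random.default_rng(0)

SIGMAS = [0.01, 0.1, 1.0]                     # actions: parameterizations
Q = np.zeros((3, len(SIGMAS)))                # states: low/mid/high success

def run_es(sigma, x, f):                      # a few ES steps, reports success
    hits = 0
    for _ in range(10):
        y = x + sigma * rng.normal(size=x.size)
        if f(y) < f(x):
            x, hits = y, hits + 1
    return x, hits / 10

f = lambda x: np.sum(x ** 2)                  # the problem instance
for episode in range(300):
    x, state = rng.normal(size=5) * 3, 1
    for step in range(20):
        a = rng.integers(len(SIGMAS)) if rng.random() < 0.1 else Q[state].argmax()
        x_new, succ = run_es(SIGMAS[int(a)], x, f)
        reward = f(x) - f(x_new)              # improvement as reward signal
        new_state = 0 if succ < 0.2 else (2 if succ > 0.5 else 1)
        Q[state, a] += 0.1 * (reward + 0.9 * Q[new_state].max() - Q[state, a])
        x, state = x_new, new_state

print("preferred step size per state:", [SIGMAS[int(i)] for i in Q.argmax(1)])
```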
-
Publication number: 20220012636
Abstract: A computer-implemented method for creating a system which is suitable for creating, in an automated manner, a machine learning system for computer vision. The method includes: providing predefined hyperparameters; determining an optimal parameterization of the hyperparameters using BOHB (Bayesian optimization (BO) and Hyperband (HB)) for a plurality of different training data sets; assessing all optimal parameterizations on all training data sets of the plurality of different training data sets with the aid of a normalized metric; creating a matrix, the matrix including the evaluated normalized metric for each parameterization and for each training data set; determining meta-features for each of the training data sets; and optimizing a decision tree, which outputs, as a function of the meta-features and of the matrix, which of the optimal parameterizations found using BOHB is a suitable parameterization for the given meta-features.
Type: Application
Filed: July 2, 2021
Publication date: January 13, 2022
Inventors: Marius Lindauer, Arber Zela, Danny Oliver Stoll, Fabio Ferreira, Frank Hutter, Thomas Nierhoff
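A minimal sketch of the final step, assuming the performance matrix and meta-features already exist as synthetic arrays; a scikit-learn decision tree maps meta-features to the per-dataset best parameterization.

```python
# Hypothetical sketch: per-dataset optimal parameterizations are scored on
# all datasets into a matrix, and a decision tree maps dataset meta-features
# to the best-suited parameterization. All data here are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n_datasets, n_configs = 20, 5

meta_features = rng.random((n_datasets, 3))   # e.g. size, #classes, imbalance
# matrix[d, c]: normalized metric of parameterization c evaluated on dataset d
matrix = rng.random((n_datasets, n_configs))

best_config = matrix.argmax(axis=1)           # best parameterization per dataset
tree = DecisionTreeClassifier(max_depth=3).fit(meta_features, best_config)

new_meta = rng.random((1, 3))                 # meta-features of a new dataset
print("suggested parameterization:", int(tree.predict(new_meta)[0]))
```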
-
Publication number: 20210383245
Abstract: A computer-implemented method for planning an operation of a technical system within its environment. The method includes: obtaining state information comprising a current domain, a time step, and a current state; determining, by heuristics, costs for the states reachable from the current state; selecting a heuristic by a policy out of a set of predefined heuristics, depending on the state information and costs; choosing, from the reachable states, the state with the lowest cost returned by the selected heuristic; and determining the operation, out of the set of possible operations, that has to be carried out by the technical system to reach said state with the lowest cost returned by the selected heuristic.
Type: Application
Filed: April 28, 2021
Publication date: December 9, 2021
Inventors: Jonathan Spitz, Andre Biedenkapp, David Speck, Frank Hutter, Marius Lindauer, Robert Mattmueller
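A minimal sketch on a toy grid domain, with a trivial round-robin `policy` standing in for the learned selector; the heuristics and the domain are illustrative assumptions.

```python
# Hypothetical sketch: at each planning step, a policy picks one of several
# heuristics, and the successor state with the lowest heuristic cost is chosen.
goal = (4, 4)

def manhattan(s): return abs(s[0] - goal[0]) + abs(s[1] - goal[1])
def euclidean(s): return ((s[0] - goal[0]) ** 2 + (s[1] - goal[1]) ** 2) ** 0.5
HEURISTICS = [manhattan, euclidean]

def policy(state, t):                  # stand-in for a learned selector
    return HEURISTICS[t % len(HEURISTICS)]

state, t = (0, 0), 0
while state != goal:
    successors = [(state[0] + dx, state[1] + dy)
                  for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                  if 0 <= state[0] + dx <= 4 and 0 <= state[1] + dy <= 4]
    h = policy(state, t)               # heuristic selected by the policy
    state = min(successors, key=h)     # reachable state with the lowest cost
    t += 1

print("reached goal in", t, "operations")
```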
-
Publication number: 20210264256
Abstract: A method for predicting a suitable configuration of a machine learning system for a first training data set. The method starts by training a plurality of machine learning systems on the first training data set, the machine learning systems and/or the training methods used being configured differently. This is followed by the creation of a second training data set comprising the ascertained performances of the trained machine learning systems and the assigned configuration of the particular machine learning systems and/or training methods. This is followed by a training of a graph isomorphism network (GIN) on the second training data set, and a prediction of the performance of a plurality of configurations not used for the training, with the aid of the GIN. A computer program and a device for carrying out the method, and a machine-readable memory element on which the computer program is stored, are also described.
Type: Application
Filed: November 17, 2020
Publication date: August 26, 2021
Inventors: Arber Zela, Frank Hutter, Julien Siems, Lucas Zimmer
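A minimal sketch of the surrogate idea, with a single sum-aggregation message-passing layer (a much simplified GIN-style update) and synthetic (configuration, performance) pairs; everything here is a placeholder, not the patented network.

```python
# Hypothetical sketch: configurations are encoded as small graphs, a
# sum-aggregation layer embeds them (simplified GIN-style update: neighbor
# sum followed by an MLP), and the network predicts measured performance.
import torch
import torch.nn as nn

torch.manual_seed(0)

class TinyGraphNet(nn.Module):
    def __init__(self, d=8):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(4, d), nn.ReLU(), nn.Linear(d, d))
        self.out = nn.Linear(d, 1)
    def forward(self, feats, adj):
        h = self.mlp(feats + adj @ feats)    # neighbor sum, then MLP
        return self.out(h.sum(0)).squeeze()  # sum-pool nodes -> predicted score

def random_config():                         # a configuration encoded as a graph
    return torch.rand(5, 4), (torch.rand(5, 5) > 0.5).float()

def measured_performance(feats, adj):        # stand-in for training + evaluating
    return feats.mean() + 0.1 * adj.mean()

data = [random_config() for _ in range(64)]  # the "second training data set"
net = TinyGraphNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for epoch in range(100):
    loss = sum((net(f, a) - measured_performance(f, a)) ** 2 for f, a in data)
    opt.zero_grad(); loss.backward(); opt.step()

f, a = random_config()                       # a configuration not used in training
print("predicted performance:", float(net(f, a)))
```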
-
Publication number: 20210133576
Abstract: A method for automatically generating an artificial neural network that encompasses modules and connections that link those modules, successive modules and/or connections being added to a current starting network. Modules and/or connections that are to be added are selected randomly from a predefinable plurality of possible modules and connections that can be added. A plurality of possible refinements of the current starting network are generated by adding to the starting network the modules and/or connections that are to be added. One of the refinements from the plurality of possible refinements is then selected in order to serve as the current starting network in a subsequent execution of the method.
Type: Application
Filed: October 24, 2018
Publication date: May 6, 2021
Inventors: Frank Hutter, Jan Hendrik Metzen, Thomas Elsken
-
Publication number: 20210008718
Abstract: A method for producing a strategy for a robot. The method includes the following steps: initializing the strategy and an episode length; and repeatedly executing a loop including the following steps: producing a plurality of further strategies as a function of the strategy; applying the plurality of further strategies for the duration of the episode length; ascertaining, in each case, a cumulative reward obtained in the application of the respective further strategy; and updating the strategy as a function of a second plurality of the further strategies that obtained the greatest cumulative rewards. After each execution of the loop, the episode length is increased. A computer program, a device for carrying out the method, and a machine-readable memory element on which the computer program is stored are also described.
Type: Application
Filed: July 6, 2020
Publication date: January 14, 2021
Inventors: Frank Hutter, Lior Fuks, Marius Lindauer, Noor Awad