Patents by Inventor Sebastian Vogel

Sebastian Vogel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12379897
    Abstract: A processing unit for multiplying a first value by a first multiplicand, or for multiplying the first value by, in each instance, a second and third multiplicand. The processing unit receives the multiplicands in a logarithmic number format, so that the multiplicands are each present in the form of at least one exponent to a specifiable base. The processing unit includes a first register, in which either two exponents of the first multiplicand or the exponent of the second and the exponent of the third multiplicand are stored. A set configuration bit indicates whether the two exponents of the first multiplicand or the exponent of the second and the exponent of the third multiplicand are stored in the first register. The processing unit includes at least two bitshift operators. A method and a computer program for multiplying the value by the multiplicand are also described.
    Type: Grant
    Filed: July 14, 2020
    Date of Patent: August 5, 2025
    Assignee: ROBERT BOSCH GMBH
    Inventor: Sebastian Vogel
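
The multiply scheme in this abstract lends itself to a short illustration. The sketch below assumes base two, reads the "two exponents of the first multiplicand" as a sum-of-two-powers-of-two encoding, and uses invented names (`log_format_multiply`, `config_bit`); it is one plausible reading of the abstract, not the patented circuit.

```python
def log_format_multiply(value: int, exp_hi: int, exp_lo: int, config_bit: bool):
    """Sketch of a dual-mode shift-based multiplier (base 2, non-negative
    integer exponents assumed).

    config_bit == True : the register holds two exponents of ONE multiplicand,
                         read here as m1 = 2**exp_hi + 2**exp_lo, so a single
                         product is formed from two shifts.
    config_bit == False: the register holds one exponent each of TWO
                         multiplicands, so two independent products result.
    """
    shift_a = value << exp_hi          # first bitshift operator
    shift_b = value << exp_lo          # second bitshift operator
    if config_bit:
        return shift_a + shift_b       # value * (2**exp_hi + 2**exp_lo)
    return shift_a, shift_b            # value * 2**exp_hi, value * 2**exp_lo
```
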
  • Patent number: 12253918
    Abstract: A method for operating a hardware platform for the inference calculation of a layered neural network. In the method: a first portion of input data which are required for the inference calculation of a first layer of the neural network and redundancy information relating to the input data are read in from an external working memory into an internal working memory of the computing unit; the integrity of the input data is checked based on the redundancy information; in response to the input data being identified as error-free, the computing unit carries out at least part of the first-layer inference calculation for the input data to obtain a work result; redundancy information for the work result is determined, based on which the integrity of the work result can be verified; the work result and the redundancy information are written to the external working memory.
    Type: Grant
    Filed: February 12, 2021
    Date of Patent: March 18, 2025
    Assignee: ROBERT BOSCH GMBH
    Inventors: Andre Guntoro, Christoph Schorn, Jo Pletinckx, Leonardo Luiz Ecco, Sebastian Vogel
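
A minimal software sketch of the integrity-checked layer calculation described above, using CRC32 purely as an example of "redundancy information" and a dictionary standing in for the external working memory; all names (`checked_layer_inference`, `external_mem`) are illustrative.

```python
import zlib
import numpy as np

def checked_layer_inference(external_mem: dict, weights: np.ndarray) -> None:
    """Read checked input data, compute one layer, write back a checked result."""
    # Read input data and its redundancy information from external memory.
    x = external_mem["layer0_input"]
    stored_crc = external_mem["layer0_input_crc"]

    # Check the integrity of the input data before using it.
    if zlib.crc32(x.tobytes()) != stored_crc:
        raise RuntimeError("input data corrupted in external memory")

    # Carry out (part of) the first-layer inference calculation.
    result = np.maximum(weights @ x, 0.0)      # e.g. fully connected layer + ReLU

    # Determine redundancy information for the work result and write both back.
    external_mem["layer1_input"] = result
    external_mem["layer1_input_crc"] = zlib.crc32(result.tobytes())
```
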
  • Publication number: 20240211735
    Abstract: A method for operating a hardware platform for the inference calculation of a convolutional neural network. In the method: an input matrix having input data of the neural network is convolved by the acceleration module with a plurality of convolution kernels, so that a multiplicity of two-dimensional output matrices results; the convolution kernels are summed elementwise to form a control kernel; the input matrix is convolved by the acceleration module with the control kernel, so that a two-dimensional control matrix results; each element of the control matrix is compared with the sum of the elements corresponding thereto in the output matrices; if this comparison yields a deviation for an element of the control matrix, then in response it is checked, with at least one additional control calculation, whether an element of at least one output matrix corresponding to this element of the control matrix was correctly calculated.
    Type: Application
    Filed: May 25, 2021
    Publication date: June 27, 2024
    Inventors: Christoph SCHORN, Leonardo Luiz ECCO, Andre GUNTORO, Jo PLETINCKX, Sebastian VOGEL
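
The control-kernel check above rests on the linearity of convolution: convolving the input once with the elementwise sum of the kernels must equal the elementwise sum of the individual output matrices. A sketch of that check, assuming single-channel 2-D convolutions via `scipy.signal.convolve2d`; the function name and tolerance are illustrative.

```python
import numpy as np
from scipy.signal import convolve2d

def check_conv_layer(x: np.ndarray, kernels: list[np.ndarray], tol: float = 1e-5):
    """Convolve with each kernel, then verify the results against one control
    convolution with the elementwise sum of the kernels (equal kernel shapes)."""
    # Regular layer computation: one two-dimensional output matrix per kernel.
    outputs = [convolve2d(x, k, mode="valid") for k in kernels]

    # Control path: sum the kernels elementwise, convolve the input once.
    control_kernel = np.sum(kernels, axis=0)
    control = convolve2d(x, control_kernel, mode="valid")

    # Compare each control element with the sum of its corresponding outputs.
    deviation = np.abs(control - np.sum(outputs, axis=0)) > tol
    # Where a deviation shows up, the affected elements would be re-checked
    # with an additional control calculation (not shown here).
    return outputs, deviation
```
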
  • Patent number: 11715019
    Abstract: A method for operating a calculation system including a neural network, in particular a convolutional neural network, the calculation system including a processing unit for the sequential calculation of the neural network and a memory external thereto for buffering intermediate results of the calculations in the processing unit, including: incrementally calculating data sections, which each represent a group of intermediate results, with the aid of a neural network; lossy compression of one or multiple of the data sections to obtain compressed intermediate results; and transmitting the compressed intermediate results to the external memory.
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: August 1, 2023
    Assignee: ROBERT BOSCH GMBH
    Inventors: Andre Guntoro, Armin Runge, Christoph Schorn, Jaroslaw Topp, Sebastian Vogel, Juergen Schirmer
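
As a hedged illustration of the lossy compression step: the abstract does not name a codec, so the sketch below uses simple 8-bit quantization of each incrementally calculated data section before it is transmitted to the external memory; all names are illustrative.

```python
import numpy as np

def compress_section(section: np.ndarray) -> tuple[np.ndarray, float]:
    """Lossy 8-bit quantization as one possible compression of an
    intermediate-result section."""
    peak = float(np.max(np.abs(section)))
    scale = peak / 127.0 if peak > 0 else 1.0
    return np.round(section / scale).astype(np.int8), scale

def decompress_section(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# Incrementally calculated data sections (groups of intermediate results) are
# compressed before being transmitted to the bandwidth-limited external memory.
external_memory = []
for section in np.array_split(np.random.randn(64, 64).astype(np.float32), 8):
    external_memory.append(compress_section(section))
```
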
  • Patent number: 11698672
    Abstract: A hardware architecture for an artificial neural network ANN. The ANN includes a consecutive series made up of an input layer, multiple processing layers, and an output layer. Each layer maps a set of input variables onto a set of output variables, and output variables of the input layer and of each processing layer are input variables of the particular layer that follows in the series. The hardware architecture includes a plurality of processing units. The implementation of each layer is split among at least two of the processing units, and at least one resettable switch-off device is provided via which at least one processing unit is selectively deactivatable, independently of the input variables supplied to it, in such a way that at least one further processing unit remains activated in all layers whose implementation is contributed to by this processing unit.
    Type: Grant
    Filed: June 3, 2019
    Date of Patent: July 11, 2023
    Assignee: ROBERT BOSCH GMBH
    Inventors: Juergen Schirmer, Andre Guntoro, Armin Runge, Christoph Schorn, Jaroslaw Topp, Sebastian Vogel
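
A toy software model of the split-and-switch-off idea: each processing unit holds a share of every layer's multiply-accumulate work (here, a slice of the input columns), so switching one unit off merely removes its partial sums while the other unit keeps every layer running. The partitioning scheme and class names are assumptions for illustration, not the patented hardware.

```python
import numpy as np

class ProcessingUnit:
    """Toy model of one processing unit holding its share of every layer."""
    def __init__(self, weight_slices, columns):
        self.weight_slices = weight_slices   # per layer: W[:, columns]
        self.columns = columns               # the input columns this unit covers
        self.active = True                   # resettable switch-off state

    def partial(self, layer, x):
        return self.weight_slices[layer] @ x[self.columns]

def run_ann(units, x, n_layers):
    """Every layer is split among the units; a switched-off unit drops its
    partial sums while at least one further unit stays active in each layer."""
    for layer in range(n_layers):
        x = np.maximum(sum(u.partial(layer, x) for u in units if u.active), 0.0)
    return x

# Two units, each covering half the inputs of three 8x8 layers.
rng = np.random.default_rng(0)
W = [rng.normal(size=(8, 8)) for _ in range(3)]
units = [ProcessingUnit([w[:, :4] for w in W], slice(0, 4)),
         ProcessingUnit([w[:, 4:] for w in W], slice(4, 8))]
units[0].active = False                      # switch-off device resets unit 0
y = run_ann(units, rng.normal(size=8), 3)    # unit 1 still serves every layer
```
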
  • Publication number: 20230061541
    Abstract: A method for operating a hardware platform for the inference calculation of a layered neural network. In the method: a first portion of input data which are required for the inference calculation of a first layer of the neural network and redundancy information relating to the input data are read in from an external working memory into an internal working memory of the computing unit; the integrity of the input data is checked based on the redundancy information; in response to the input data being identified as error-free, the computing unit carries out at least part of the first-layer inference calculation for the input data to obtain a work result; redundancy information for the work result is determined, based on which the integrity of the work result can be verified; the work result and the redundancy information are written to the external working memory.
    Type: Application
    Filed: February 12, 2021
    Publication date: March 2, 2023
    Inventors: Andre Guntoro, Christoph Schorn, Jo Pletinckx, Leonardo Luiz Ecco, Sebastian Vogel
  • Patent number: 11593232
    Abstract: A method for verifying a calculation of a neuron value of multiple neurons of a neural network, including: carrying out or triggering a calculation of neuron functions of the multiple neurons, in each case to obtain a neuron value, the neuron functions being determined by individual weightings for each neuron input; calculating a first comparison value as the sum of the neuron values of the multiple neurons; carrying out or triggering a control calculation with one or multiple control neuron functions and with all neuron inputs of the multiple neurons, to obtain a second comparison value as a function of the neuron inputs of the multiple neurons and of the sum of the weightings of the multiple neurons assigned to the respective neuron input; and recognizing an error as a function of the first comparison value and of the second comparison value.
    Type: Grant
    Filed: January 4, 2019
    Date of Patent: February 28, 2023
    Assignee: Robert Bosch GmbH
    Inventors: Andre Guntoro, Armin Runge, Christoph Schorn, Sebastian Vogel, Jaroslaw Topp, Juergen Schirmer
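
The verification above exploits linearity: for the weighted-sum part of the neuron functions, the sum of all neuron values must equal the value of one control neuron whose weights are the per-input sums of the individual weightings. A minimal sketch under that linearity assumption; the function name and tolerance are illustrative.

```python
import numpy as np

def checksum_verify(W: np.ndarray, x: np.ndarray, tol: float = 1e-6) -> bool:
    """Compare the sum of neuron values with one control neuron whose weights
    are the column-wise sums of all neuron weights; False signals an error."""
    neuron_values = W @ x                  # ordinary per-neuron calculation
    first_cmp = float(np.sum(neuron_values))

    control_weights = np.sum(W, axis=0)    # per input: sum of the weightings
    second_cmp = float(control_weights @ x)

    return abs(first_cmp - second_cmp) <= tol
```
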
  • Patent number: 11537361
    Abstract: A processing unit and a method for multiplying at least two multiplicands. The multiplicands are present in an exponential notation, that is, each multiplicand is assigned an exponent and a base. The processing unit is configured to carry out a multiplication of the multiplicands and includes at least one bitshift unit, the bitshift unit shifting a binary number a specified number of places, in particular, to the left; an arithmetic unit, which carries out an addition of two input variables and a subtraction of two input variables; and a storage device. A computer program, which is configured to execute the method, and a machine-readable storage element, in which the computer program is stored, are also described.
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: December 27, 2022
    Assignee: Robert Bosch GmbH
    Inventor: Sebastian Vogel
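
A minimal sketch of the shift-based multiplier for multiplicands in exponential notation, assuming base two; using the subtractor for an exponent bias is an assumption for illustration, and the name `multiply_exponential` is invented.

```python
def multiply_exponential(exp_a: int, exp_b: int, exp_bias: int = 0) -> int:
    """Multiply a = 2**exp_a by b = 2**exp_b: the arithmetic unit adds (and,
    here, subtracts a bias from) the exponents, and the bitshift unit shifts
    a one to the left by the resulting number of places."""
    shift = exp_a + exp_b - exp_bias   # addition / subtraction of exponents
    return 1 << shift                  # 2**(exp_a + exp_b - exp_bias), shift >= 0
```
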
  • Publication number: 20220291899
    Abstract: A processing unit for multiplying a first value by a first multiplicand, or for multiplying the first value by, in each instance, a second and third multiplicand. The processing unit receives the multiplicands in a logarithmic number format, so that the multiplicands are each present in the form of at least one exponent to a specifiable base. The processing unit includes a first register, in which either two exponents of the first multiplicand or the exponent of the second and the exponent of the third multiplicand are stored. A set configuration bit indicates whether the two exponents of the first multiplicand or the exponent of the second and the exponent of the third multiplicand are stored in the first register. The processing unit includes at least two bitshift operators. A method and a computer program for multiplying the value by the multiplicand are also described.
    Type: Application
    Filed: July 14, 2020
    Publication date: September 15, 2022
    Applicant: Robert Bosch GmbH
    Inventor: Sebastian Vogel
  • Patent number: 11301749
    Abstract: A method for calculating an output of a neural network, including the steps of generating a first neural network that includes discrete edge weights from a neural network that includes precise edge weights by stochastic rounding; of generating a second neural network that includes discrete edge weights from the neural network that includes precise edge weights by stochastic rounding; and of calculating an output by adding together the output of the first neural network and of the second neural network.
    Type: Grant
    Filed: November 8, 2017
    Date of Patent: April 12, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Christoph Schorn, Sebastian Vogel
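
A short sketch of the stochastic-rounding ensemble described above: two discrete-weight copies are drawn from the same precise weights and their outputs are added. A single linear layer stands in for the full networks; names and sizes are illustrative.

```python
import numpy as np

def stochastic_round(w: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Round each weight up or down with probability given by its fractional part."""
    floor = np.floor(w)
    return floor + (rng.random(w.shape) < (w - floor))

rng = np.random.default_rng(0)
W_precise = rng.normal(size=(4, 8))    # precise edge weights of one layer
x = rng.normal(size=8)

# Two discrete-weight networks generated independently by stochastic rounding.
W1 = stochastic_round(W_precise, rng)
W2 = stochastic_round(W_precise, rng)
output = (W1 @ x) + (W2 @ x)           # outputs of the two networks are added
```
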
  • Patent number: 11251699
    Abstract: A relay including a relay coil and a relay switch. The relay coil including a coil beginning and a coil end and being connected to a relay driving circuit. The relay switch being arranged in a load circuit. A first parasitic capacitance between the coil beginning and the relay switch is different than a second parasitic capacitance between the coil end and the relay switch.
    Type: Grant
    Filed: June 11, 2019
    Date of Patent: February 15, 2022
    Assignee: SAMSUNG SDI CO., LTD.
    Inventors: Markus Lettner, Michael Schlick, Sebastian Vogel
  • Publication number: 20210390403
    Abstract: A computer-implemented method for calculating an output value of a neural network including multiple neurons as a function of neuron output values. The method includes: checking neuron functions of one or of multiple neurons of a neuron group; when an error is established in the neuron group, determining a criticality of the error; and correcting the neuron output values of at least one of the one or of the multiple neurons of the neuron group as a function of the criticality of an established error.
    Type: Application
    Filed: May 24, 2021
    Publication date: December 16, 2021
    Inventors: Andre Guntoro, Christoph Schorn, Jo Pletinckx, Leonardo Luiz Ecco, Sebastian Vogel
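
A heavily simplified sketch of the check-and-correct loop: here a duplicate reference calculation establishes errors, the deviation magnitude stands in for the error's criticality, and only sufficiently critical neuron output values are corrected. The threshold, names, and criticality measure are assumptions, not taken from the publication.

```python
import numpy as np

def check_and_correct(outputs: np.ndarray, reference: np.ndarray,
                      criticality_threshold: float = 0.5) -> np.ndarray:
    """Correct only those neuron output values whose error is critical enough."""
    deviation = np.abs(outputs - reference)        # check the neuron functions
    critical = deviation > criticality_threshold   # determine criticality
    corrected = outputs.copy()
    corrected[critical] = reference[critical]      # correct the critical errors
    return corrected
```
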
  • Publication number: 20210256376
    Abstract: A device and method for machine learning using an artificial neural network. For a calculation hardware for the artificial neural network, a layer description is provided, which defines at least one part of a layer of the artificial neural network, the layer description defining a tensor for input values of at least one part of this layer, a tensor for weights of at least one part of this layer, and a tensor for output values of at least one part of this layer, in particular its starting address. A message that includes a start address of the tensor for the input values, of the tensor for the weights, or of the tensor for the output values is sent by the calculation hardware for the transfer of the input values, the weights, or the output values.
    Type: Application
    Filed: February 10, 2021
    Publication date: August 19, 2021
    Inventors: Sebastian Vogel, Christoph Schorn, Michael Klaiber
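
A sketch of the layer-description and address-based transfer idea using plain Python dataclasses; the field names and message format are illustrative, not the patented interface.

```python
from dataclasses import dataclass

@dataclass
class TensorDescriptor:
    start_address: int      # where the tensor lives in the accelerator's memory
    shape: tuple            # e.g. (channels, height, width)

@dataclass
class LayerDescription:
    """Layer description handed to the calculation hardware: one descriptor
    each for the input-value, weight, and output-value tensors of (part of)
    a layer."""
    inputs: TensorDescriptor
    weights: TensorDescriptor
    outputs: TensorDescriptor

def transfer_message(tensor: TensorDescriptor, num_bytes: int) -> dict:
    # The calculation hardware announces a transfer by sending the tensor's
    # start address (plus, here, a length) rather than the data itself.
    return {"start_address": tensor.start_address, "length": num_bytes}
```
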
  • Publication number: 20210232208
    Abstract: A hardware architecture for an artificial neural network ANN. The ANN includes a consecutive series made up of an input layer, multiple processing layers, and an output layer. Each layer maps a set of input variables onto a set of output variables, and output variables of the input layer and of each processing layer are input variables of the particular layer that follows in the series. The hardware architecture includes a plurality of processing units. The implementation of each layer is split among at least two of the processing units, and at least one resettable switch-off device is provided via which at least one processing unit is selectively deactivatable, independently of the input variables supplied to it, in such a way that at least one further processing unit remains activated in all layers whose implementation is contributed to by this processing unit.
    Type: Application
    Filed: June 3, 2019
    Publication date: July 29, 2021
    Inventors: Juergen Schirmer, Andre Guntoro, Armin Runge, Christoph Schorn, Jaroslaw Topp, Sebastian Vogel
  • Publication number: 20210224037
    Abstract: A processing unit and a method for multiplying at least two multiplicands. The multiplicands are present in an exponential notation, that is, each multiplicand is assigned an exponent and a base. The processing unit is configured to carry out a multiplication of the multiplicands and includes at least one bitshift unit, the bitshift unit shifting a binary number a specified number of places, in particular, to the left; an arithmetic unit, which carries out an addition of two input variables and a subtraction of two input variables; and a storage device. A computer program, which is configured to execute the method, and a machine-readable storage element, in which the computer program is stored, are also described.
    Type: Application
    Filed: May 21, 2019
    Publication date: July 22, 2021
    Inventor: Sebastian Vogel
  • Publication number: 20190386562
    Abstract: A relay including a relay coil and a relay switch. The relay coil including a coil beginning and a coil end and being connected to a relay driving circuit. The relay switch being arranged in a load circuit. A first parasitic capacitance between the coil beginning and the relay switch is different than a second parasitic capacitance between the coil end and the relay switch.
    Type: Application
    Filed: June 11, 2019
    Publication date: December 19, 2019
    Inventors: Markus LETTNER, Michael SCHLICK, Sebastian VOGEL
  • Publication number: 20190279095
    Abstract: A method for operating a calculation system including a neural network, in particular a convolutional neural network, the calculation system including a processing unit for the sequential calculation of the neural network and a memory external thereto for buffering intermediate results of the calculations in the processing unit, including: incrementally calculating data sections, which each represent a group of intermediate results, with the aid of a neural network; lossy compression of one or multiple of the data sections to obtain compressed intermediate results; and transmitting the compressed intermediate results to the external memory.
    Type: Application
    Filed: March 11, 2019
    Publication date: September 12, 2019
    Inventors: Andre Guntoro, Armin Runge, Christoph Schorn, Jaroslaw Topp, Sebastian Vogel, Juergen Schirmer
  • Publication number: 20190266476
    Abstract: A method for calculating an output of a neural network, including the steps of generating a first neural network that includes discrete edge weights from a neural network that includes precise edge weights by stochastic rounding; of generating a second neural network that includes discrete edge weights from the neural network that includes precise edge weights by stochastic rounding; and of calculating an output by adding together the output of the first neural network and of the second neural network.
    Type: Application
    Filed: November 8, 2017
    Publication date: August 29, 2019
    Inventors: Christoph Schorn, Sebastian Vogel
  • Publication number: 20190251005
    Abstract: A method for verifying a calculation of a neuron value of multiple neurons of a neural network, including: carrying out or triggering a calculation of neuron functions of the multiple neurons, in each case to obtain a neuron value, the neuron functions being determined by individual weightings for each neuron input; calculating a first comparison value as the sum of the neuron values of the multiple neurons; carrying out or triggering a control calculation with one or multiple control neuron functions and with all neuron inputs of the multiple neurons, to obtain a second comparison value as a function of the neuron inputs of the multiple neurons and of the sum of the weightings of the multiple neurons assigned to the respective neuron input; and recognizing an error as a function of the first comparison value and of the second comparison value.
    Type: Application
    Filed: January 4, 2019
    Publication date: August 15, 2019
    Inventors: Andre Guntoro, Armin Runge, Christoph Schorn, Sebastian Vogel, Jaroslaw Topp, Juergen Schirmer