Patents by Inventor Andre GUNTORO

Andre GUNTORO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240152332
    Abstract: A method for approximatively determining at least one scalar product of at least one input vector with a weight vector. Input components of the input vector and weight components of the weight vector are present in binary form. At least one matrix circuit with memory cells arranged in rows and columns is used, the memory cells being programmed according to bits of the weight components. Bits with the same significance of at least a portion of the weight components are respectively programmed in memory cells of the same column. For each of one or more subsets of the input components, a bit sum determination is carried out: voltages are applied to a corresponding subset of the row lines according to bits with the same significance of the respective subset of the input components, and a limited bit sum is determined as the output value of the respective analog-to-digital converter.
    Type: Application
    Filed: November 2, 2023
    Publication date: May 9, 2024
    Inventors: Cecilia Eugenia De La Parra Aparicio, Andre Guntoro, Taha Soliman
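    Illustrative sketch (not part of the patent record): a minimal Python model of the bit-serial scalar product outlined in the abstract above, in which each weight bit column yields a bit sum that the analog-to-digital converter clips (the "limited bit sum") before digital shift-and-add. The bit widths and the clip level adc_max are illustrative assumptions.
        def approx_dot(inputs, weights, in_bits=4, w_bits=4, adc_max=7):
            """Approximate dot product of two unsigned integer vectors."""
            total = 0
            for ib in range(in_bits):            # one input bit plane at a time
                for wb in range(w_bits):         # one weight bit column at a time
                    # column current ~ number of cells where input bit and weight bit are both 1
                    bit_sum = sum(((x >> ib) & 1) & ((w >> wb) & 1)
                                  for x, w in zip(inputs, weights))
                    bit_sum = min(bit_sum, adc_max)   # ADC saturates: the "limited bit sum"
                    total += bit_sum << (ib + wb)     # shift-and-add by bit significance
            return total
        # with a large adc_max the result equals sum(x * w for x, w in zip(inputs, weights))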
  • Patent number: 11955197
    Abstract: A memory device comprising a cell field having memory cells, N bit lines, which are respectively connected to at least one of the memory cells of the cell field, N being a whole number greater than one, N sense amplifiers; a bit shift circuit, which has S switch element rows, S being a whole number greater than one and a row number in the range from zero to S−1 being assignable to each switch element row. Each switch element row includes at least one semiconductor switch element connected to one of the bit lines and one of the sense amplifiers. Switch elements of each row connect all bit lines whose bit line number is smaller than or equal to N minus the row number to sense amplifiers, so that the respective sense amplifier number is equal to the respective bit line number plus the row number.
    Type: Grant
    Filed: May 17, 2022
    Date of Patent: April 9, 2024
    Assignee: ROBERT BOSCH GMBH
    Inventors: Andre Guntoro, Chirag Sudarshan, Christian Weis, Leonardo Luiz Ecco, Taha Ibrahim Ibrahim Soliman, Norbert Wehn
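    Illustrative sketch (not part of the patent record): the routing rule from the abstract above, expressed in Python. Activating switch element row r connects bit line i (1-based, i <= N − r) to sense amplifier i + r, so the read-out word appears shifted by r positions; unconnected amplifiers are shown as 0 here purely for illustration.
        def route(bit_line_values, r):
            """Sense-amplifier values when switch element row r is active (0 <= r <= S-1)."""
            n = len(bit_line_values)              # N bit lines and N sense amplifiers
            sense = [0] * n
            for i in range(1, n - r + 1):         # bit lines 1 .. N - r
                sense[i + r - 1] = bit_line_values[i - 1]   # amplifier number = i + r
            return sense
        # route([1, 0, 1, 1], r=0) -> [1, 0, 1, 1]
        # route([1, 0, 1, 1], r=2) -> [0, 0, 1, 0]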
  • Publication number: 20240036825
    Abstract: A scalar product circuit for computing a binary scalar product of an input vector and a weight vector. The scalar product circuit includes one or multiple adders and at least one matrix circuit including memory cells that are arranged in multiple rows and multiple columns in the form of a matrix, each memory cell including a first memory state and a second memory state. Each matrix circuit includes at least one weight range including one or multiple bit sections, the matrix circuit including an analog-to-digital converter and a bit shifting unit connected thereto for each bit section, the column lines of the bit section being connected to the analog-to-digital converter, and a column selection switching element being provided for each column. The bit shifting units are connected to one of the adders, those bit shifting units that are included in a weight range being connected to the same adder.
    Type: Application
    Filed: September 16, 2021
    Publication date: February 1, 2024
    Inventors: Andre Guntoro, Taha Ibrahim Ibrahim Soliman, Tobias Kirchner
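    Illustrative sketch (not part of the patent record): how the per-bit-section outputs described above can be combined, assuming each bit section's analog-to-digital converter delivers one count and its bit shifting unit scales that count by the section's bit significance before a shared adder sums a weight range. The counts and significances below are made-up values.
        def weight_range_output(adc_counts, significances):
            """adc_counts[k]: ADC output of bit section k; significances[k]: its bit weight."""
            return sum(count << shift for count, shift in zip(adc_counts, significances))
        # three bit sections covering weight bits 0, 1 and 2:
        # weight_range_output([5, 3, 1], [0, 1, 2]) == 5 + 6 + 4 == 15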
  • Patent number: 11715019
    Abstract: A method for operating a calculation system including a neural network, in particular a convolutional neural network, the calculation system including a processing unit for the sequential calculation of the neural network and a memory external thereto for buffering intermediate results of the calculations in the processing unit, including: incrementally calculating data sections, which each represent a group of intermediate results, with the aid of a neural network; lossy compression of one or multiple of the data sections to obtain compressed intermediate results; and transmitting the compressed intermediate results to the external memory.
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: August 1, 2023
    Assignee: ROBERT BOSCH GMBH
    Inventors: Andre Guntoro, Armin Runge, Christoph Schorn, Jaroslaw Topp, Sebastian Vogel, Juergen Schirmer
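    Illustrative sketch (not part of the patent record): lossy compression of one data section of intermediate results by uniform re-quantization before it is written to the external memory, plus the matching decompression on read-back. The patent does not prescribe a particular compression scheme; the 4-bit quantization here is only an example.
        def compress(section, bits=4):
            lo, hi = min(section), max(section)
            scale = (hi - lo) / (2 ** bits - 1) or 1.0
            codes = [round((v - lo) / scale) for v in section]   # small integers sent to external memory
            return codes, lo, scale
        def decompress(codes, lo, scale):
            return [lo + c * scale for c in codes]               # approximate reconstruction
        section = [0.12, -0.4, 0.93, 0.05]            # one group of intermediate results
        codes, lo, scale = compress(section)
        restored = decompress(codes, lo, scale)       # lossy copy used by later calculation steps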
  • Patent number: 11698672
    Abstract: A hardware architecture for an artificial neural network ANN. The ANN includes a consecutive series made up of an input layer, multiple processing layers, and an output layer. Each layer maps a set of input variables onto a set of output variables, and output variables of the input layer and of each processing layer are input variables of the particular layer that follows in the series. The hardware architecture includes a plurality of processing units. The implementation of each layer is split among at least two of the processing units, and at least one resettable switch-off device is provided via which at least one processing unit is selectively deactivatable, independently of the input variables supplied to it, in such a way that at least one further processing unit remains activated in all layers whose implementation is contributed to by this processing unit.
    Type: Grant
    Filed: June 3, 2019
    Date of Patent: July 11, 2023
    Assignee: ROBERT BOSCH GMBH
    Inventors: Juergen Schirmer, Andre Guntoro, Armin Runge, Christoph Schorn, Jaroslaw Topp, Sebastian Vogel
  • Patent number: 11645502
    Abstract: A model calculation unit for calculating an RBF model is described, including a hard-wired processor core designed as hardware for calculating a fixedly predefined processing algorithm in coupled functional blocks, the processor core being designed to calculate an output variable for an RBF model as a function of one or multiple input variable(s), of nodes V[j,k], of length scales L[j,k], and of weighting parameters p3[j,k] predefined for each node, the output variable being formed as a sum of a value calculated for each node V[j,k], the value resulting from a product of the weighting parameter p3[j,k] assigned to the particular node V[j,k] and a result of an exponential function of a value resulting from the square distance of the input variable vector from the particular node V[j,k], weighted by the length scales L[j,k], the length scales L[j,k] being provided separately for each of the nodes as local length scales.
    Type: Grant
    Filed: September 5, 2017
    Date of Patent: May 9, 2023
    Assignee: ROBERT BOSCH GMBH
    Inventors: Andre Guntoro, Ernst Kloppenburg, Heiner Markert, Holger Ulmer
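    Illustrative sketch (not part of the patent record): the RBF output described above as a Python formula, with per-node weighting parameters p3[j], nodes V[j] and local length scales L[j]. The negative sign in the exponent follows the usual RBF convention and is an assumption, since the abstract does not spell it out.
        import math
        def rbf(x, V, L, p3):
            """x: input vector; V[j]: node j; L[j]: its length scales; p3[j]: its weighting parameter."""
            y = 0.0
            for Vj, Lj, pj in zip(V, L, p3):
                sq = sum(l * (xi - vi) ** 2 for xi, vi, l in zip(x, Vj, Lj))   # weighted square distance
                y += pj * math.exp(-sq)
            return y
        # rbf([0.0, 1.0], V=[[0.0, 1.0], [2.0, 0.0]], L=[[1.0, 1.0], [0.5, 0.5]], p3=[1.0, 2.0])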
  • Patent number: 11645499
    Abstract: A model calculating unit for calculating a neural layer of a multilayer perceptron model having a hardwired processor core developed in hardware for calculating a definitely specified computing algorithm in coupled functional blocks. The processor core is designed to calculate, as a function of one or multiple input variables of an input variable vector, of a weighting matrix having weighting factors and an offset value specified for each neuron, an output variable for each neuron for a neural layer of a multilayer perceptron model having a number of neurons, a sum of the values of the input variables weighted by the weighting factor, determined by the neuron and the input variable, and the offset value specified for the neuron being calculated for each neuron and the result being transformed using an activation function in order to obtain the output variable for the neuron.
    Type: Grant
    Filed: September 4, 2017
    Date of Patent: May 9, 2023
    Assignee: ROBERT BOSCH GMBH
    Inventors: Andre Guntoro, Ernst Kloppenburg, Heiner Markert, Martin Schiegg
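    Illustrative sketch (not part of the patent record): one neural layer of the kind described above in plain Python, i.e. per neuron a weighted sum of the input variables plus the neuron's offset value, transformed by an activation function. The sigmoid used here is only an example activation.
        import math
        def layer(x, W, b, act=lambda s: 1.0 / (1.0 + math.exp(-s))):
            """x: input variable vector; W[n]: weighting factors of neuron n; b[n]: its offset value."""
            return [act(sum(w * xi for w, xi in zip(Wn, x)) + bn)
                    for Wn, bn in zip(W, b)]
        # layer([1.0, 2.0], W=[[0.5, -0.25], [1.0, 1.0]], b=[0.1, -3.0]) -> two output variables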
  • Publication number: 20230072032
    Abstract: An arithmetic unit for calculating an approximate value for a product or a sum of two inputted numbers. The arithmetic unit includes arithmetic modules, at least one of the arithmetic modules being provided to calculate individual products or individual sums of digits of the inputted numbers, and the arithmetic modules being connected to an adder that is designed to calculate digits of the product or of the sum from the individual products or from the individual sums. An arithmetic module that is required for the calculation of at least one individual product or individual sum, and/or for the propagation of this individual product or individual sum onto the product or the sum, is absent in the arithmetic unit or is connected there in such a way that it can be selectively deactivated, completely or partially, for the running time of the arithmetic unit.
    Type: Application
    Filed: May 10, 2021
    Publication date: March 9, 2023
    Inventors: Taha Soliman, Andre Guntoro, Cecilia Eugenia De La Parra Aparicio
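    Illustrative sketch (not part of the patent record): the approximation idea from the abstract above applied to a multiplier, where the product is assembled from per-digit partial products and the modules for some digit positions are absent or switched off, so their contribution never reaches the result. Which positions are disabled is an illustrative choice.
        def approx_mul(a, b, bits=8, disabled=frozenset()):
            """Approximate a * b; 'disabled' lists (i, j) digit positions whose module is off."""
            result = 0
            for i in range(bits):
                for j in range(bits):
                    if (i, j) in disabled:
                        continue                                 # module absent or deactivated
                    pp = ((a >> i) & 1) & ((b >> j) & 1)         # individual product of two digits
                    result += pp << (i + j)                      # propagation onto the product
            return result
        # approx_mul(13, 11) == 143 (exact); dropping low positions, e.g.
        # approx_mul(13, 11, disabled={(0, 0), (0, 1), (1, 0)}) == 140, saves those modules.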
  • Patent number: 11599787
    Abstract: A hardware-implemented multi-layer perceptron model calculation unit includes: a processor core to calculate output quantities of a neuron layer based on input quantities of an input vector; a memory that has, for each neuron layer, a respective configuration segment for storing configuration parameters and a respective data storage segment for storing the input quantities of the input vector and the one or more output quantities; and a DMA unit to successively instruct the processor core to: calculate respective neuron layers based on the configuration parameters of each configuration segment, calculate input quantities of the input vector defined thereby, and store respectively resulting output quantities in a data storage segment defined by the corresponding configuration parameters, the configuration parameters of configuration segments successively taken into account indicating a data storage region for the resulting output quantities corresponding to the data storage region for the input quantities for …
    Type: Grant
    Filed: September 4, 2017
    Date of Patent: March 7, 2023
    Assignee: ROBERT BOSCH GMBH
    Inventors: Andre Guntoro, Heiner Markert
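    Illustrative sketch (not part of the patent record): the control flow described above, with a DMA-like loop walking per-layer configuration segments that tell the processor core where the layer's input vector lives and where its output quantities go, the output region of one layer doubling as the input region of the next. The dict-based memory and field names are invented for illustration.
        memory = {"seg0": [0.3, -1.2, 0.7], "seg1": None, "seg2": None}   # data storage segments
        config_segments = [
            {"in": "seg0", "out": "seg1", "params": "layer0"},
            {"in": "seg1", "out": "seg2", "params": "layer1"},
        ]
        def compute_layer(inputs, params):
            return [sum(inputs)] * 2          # stand-in for the processor core's layer calculation
        for cfg in config_segments:           # the DMA unit's successive instructions
            memory[cfg["out"]] = compute_layer(memory[cfg["in"]], cfg["params"])
        result = memory["seg2"]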
  • Publication number: 20230061541
    Abstract: A method for operating a hardware platform for the inference calculation of a layered neural network. In the method: a first portion of input data which are required for the inference calculation of a first layer of the neural network, and redundancy information relating to the input data, are read in from an external working memory into an internal working memory of the computing unit; the integrity of the input data is checked based on the redundancy information; in response to the input data being identified as error-free, the computing unit carries out at least part of the first-layer inference calculation for the input data to obtain a work result; redundancy information for the work result is determined, based on which the integrity of the work result can be verified; and the work result and the redundancy information are written to the external working memory.
    Type: Application
    Filed: February 12, 2021
    Publication date: March 2, 2023
    Inventors: Andre Guntoro, Christoph Schorn, Jo Pletinckx, Leonardo Luiz Ecco, Sebastian Vogel
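    Illustrative sketch (not part of the patent record): the read-check-compute-write cycle described above, using a CRC32 checksum as the redundancy information purely as an example (the patent does not name a particular code).
        import struct, zlib
        def checksum(values):
            return zlib.crc32(struct.pack(f"{len(values)}f", *values))
        external = {"inputs": [0.5, -0.1, 0.9]}               # external working memory
        external["inputs_crc"] = checksum(external["inputs"])
        inputs = list(external["inputs"])                     # read into the internal working memory
        if checksum(inputs) != external["inputs_crc"]:        # integrity check on the input data
            raise RuntimeError("input data corrupted")
        work_result = [2.0 * v for v in inputs]               # stand-in for part of the first-layer calculation
        external["result"] = work_result                      # write back together with fresh redundancy information
        external["result_crc"] = checksum(work_result)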
  • Patent number: 11593232
    Abstract: A method for verifying a calculation of a neuron value of multiple neurons of a neural network, including: carrying out or triggering a calculation of neuron functions of the multiple neurons, in each case to obtain a neuron value, the neuron functions being determined by individual weightings for each neuron input; calculating a first comparison value as the sum of the neuron values of the multiple neurons; carrying out or triggering a control calculation with one or multiple control neuron functions and with all neuron inputs of the multiple neurons, to obtain a second comparison value as a function of the neuron inputs of the multiple neurons and of the sum of the weightings of the multiple neurons assigned to the respective neuron input; and recognizing an error as a function of the first comparison value and of the second comparison value.
    Type: Grant
    Filed: January 4, 2019
    Date of Patent: February 28, 2023
    Assignee: Robert Bosch GmbH
    Inventors: Andre Guntoro, Armin Runge, Christoph Schorn, Sebastian Vogel, Jaroslaw Topp, Juergen Schirmer
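    Illustrative sketch (not part of the patent record): the comparison described above for weighted-sum neuron functions. The sum of the individual neuron values must match a control calculation whose weight for each input is the sum of the monitored neurons' weights for that input; a mismatch indicates an error. Activation functions are left out of this sketch.
        def verify(x, W, tol=1e-9):
            """x: shared neuron inputs; W[n][i]: weighting of neuron n for input i."""
            neuron_values = [sum(w * xi for w, xi in zip(Wn, x)) for Wn in W]
            first = sum(neuron_values)                                   # first comparison value
            control_weights = [sum(col) for col in zip(*W)]              # summed weightings per input
            second = sum(cw * xi for cw, xi in zip(control_weights, x))  # control calculation
            return abs(first - second) <= tol                            # False -> error recognized
        # verify([1.0, 2.0, 3.0], [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]) -> True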
  • Patent number: 11537090
    Abstract: A model calculating unit for the selective calculation of an RBF model or of a neural layer of a multilayer perceptron model having a hardwired processor core designed in hardware for calculating a fixedly specified computing algorithm in coupled function blocks. The processor core is designed to calculate an output variable for an RBF model as a function of one or multiple input variables of an input variable vector, of supporting points, of length scales, of parameters specified for each supporting point, the processor core furthermore being designed to calculate an output variable for each neuron for the neural layer of the multilayer perceptron model having a number of neurons as a function of the one or the multiple input variables of the input variable vector, of a weighting matrix having weighting factors and an offset value specified for each neuron.
    Type: Grant
    Filed: September 4, 2017
    Date of Patent: December 27, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Andre Guntoro, Ernst Kloppenburg, Heiner Markert, Martin Schiegg
  • Publication number: 20220383937
    Abstract: A memory device comprising a plurality of memory cells situated in a first cell field, multiple first bit lines, each respectively connected to multiple memory cells of the first cell field to enable access to the memory cells via the bit line, and multiple sense amplifier pairs which respectively comprise a first and a second sense amplifier. Each first bit line is assigned to a sense amplifier pair, each first bit line being connected to a respective first semiconductor switch element, through which the bit line is electroconductively connectible to and insulatable from the first sense amplifier of the sense amplifier pair, to which the bit line is assigned. Each first bit line is connected to a respective second semiconductor switch element, through which the bit line is electroconductively connectible to and insulatable from the second sense amplifier of the sense amplifier pair, to which the bit line is assigned.
    Type: Application
    Filed: May 17, 2022
    Publication date: December 1, 2022
    Inventors: Andre Guntoro, Chirag Sudarshan, Christian Weis, Leonardo Luiz Ecco, Taha Soliman, Norbert Wehn
  • Publication number: 20220383913
    Abstract: A memory device comprising a cell field having memory cells, N bit lines, which are respectively connected to at least one of the memory cells of the cell field, N being a whole number greater than one, N sense amplifiers; a bit shift circuit, which has S switch element rows, S being a whole number greater than one and a row number in the range from zero to S−1 being assignable to each switch element row. Each switch element row includes at least one semiconductor switch element connected to one of the bit lines and one of the sense amplifiers. Switch elements of each row connect all bit lines whose bit line number is smaller than or equal to N minus the row number to sense amplifiers, so that the respective sense amplifier number is equal to the respective bit line number plus the row number.
    Type: Application
    Filed: May 17, 2022
    Publication date: December 1, 2022
    Inventors: Andre Guntoro, Chirag Sudarshan, Christian Weis, Leonardo Luiz Ecco, Taha Soliman, Norbert Wehn
  • Patent number: 11449737
    Abstract: A model calculation unit for calculating a multilayer perceptron model, the model calculation unit being designed in hardware and being hardwired, including: a processor core; a memory; a DMA unit, which is designed to successively instruct the processor core to calculate a neuron layer, in each case based on input variables of an assigned input variable vector, and to store the respectively resulting output variables of an output variable vector in an assigned data memory section, the data memory section for the input variable vector assigned to at least one of the neuron layers at least partially including in each case the data memory sections of at least two of the output variable vectors of two different neuron layers.
    Type: Grant
    Filed: September 4, 2017
    Date of Patent: September 20, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Andre Guntoro, Heiner Markert, Martin Schiegg
  • Patent number: 11360443
    Abstract: A model calculation unit for calculating a gradient with respect to a certain input variable of input variables of a predefined input variable vector for an RBF model with the aid of a hard-wired processor core designed as hardware for calculating a fixedly predefined processing algorithm in coupled functional blocks, the processor core being designed to calculate the gradient with respect to the certain input variable for an RBF model as a function of one or multiple input variable(s) of the input variable vector of an input dimension, of a number of nodes, of length scales predefined for each node and each input dimension, and of parameters of the RBF function predefined for each node.
    Type: Grant
    Filed: September 4, 2017
    Date of Patent: June 14, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Andre Guntoro, Heiner Markert, Martin Schiegg
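    Illustrative sketch (not part of the patent record): the gradient described above, using the same RBF form assumed in the sketch for patent 11645502 earlier in this listing; d is the index of the input variable the gradient is taken with respect to.
        import math
        def rbf_gradient(x, V, L, p, d):
            """Partial derivative of the RBF output with respect to input variable d."""
            grad = 0.0
            for Vj, Lj, pj in zip(V, L, p):
                sq = sum(l * (xi - vi) ** 2 for xi, vi, l in zip(x, Vj, Lj))
                grad += pj * math.exp(-sq) * (-2.0 * Lj[d] * (x[d] - Vj[d]))
            return grad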
  • Patent number: 11216721
    Abstract: A method for calculating a neuron layer of a multi-layer perceptron model that includes a permanently hardwired processor core configured in hardware for calculating a permanently predefined processing algorithm in coupled functional blocks, a neuron of a neuron layer of the perceptron model being calculated with the aid of an activation function, the activation function corresponding to a simplified sigmoid function and to a simplified tanh function, the activation function being formed by zero-point mirroring of the negative definition range of the exponential function.
    Type: Grant
    Filed: September 4, 2017
    Date of Patent: January 4, 2022
    Assignee: Robert Bosch GmbH
    Inventor: Andre Guntoro
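    Illustrative sketch (not part of the patent record): one plausible reading of the zero-point mirroring mentioned above, in which the exponential is only ever evaluated on its negative range and values for positive arguments are obtained by mirroring about zero. The exact simplified formulas are assumptions consistent with the abstract, not quotations from the patent.
        import math
        def simplified_sigmoid(x):
            if x <= 0.0:
                return 0.5 * math.exp(x)         # exponential evaluated only for x <= 0
            return 1.0 - 0.5 * math.exp(-x)      # mirrored about the zero point
        def simplified_tanh(x):
            if x <= 0.0:
                return math.exp(x) - 1.0
            return 1.0 - math.exp(-x)            # mirrored about the zero point
        # both agree with the true sigmoid / tanh at x = 0 and share their asymptotes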
  • Publication number: 20210390403
    Abstract: A computer-implemented method for calculating an output value of a neural network including multiple neurons as a function of neuron output values. The method includes: checking neuron functions of one or of multiple neurons of a neuron group; when establishing an error in the neuron group, determining a criticality of the error; correcting the neuron output values of at least one of the one or of the multiple neurons of the neuron group as a function of the criticality of an established error.
    Type: Application
    Filed: May 24, 2021
    Publication date: December 16, 2021
    Inventors: Andre Guntoro, Christoph Schorn, Jo Pletinckx, Leonardo Luiz Ecco, Sebastian Vogel
  • Publication number: 20210295133
    Abstract: A model calculating unit for calculating a neural layer of a multilayer perceptron model having a hardwired processor core developed in hardware for calculating a definitely specified computing algorithm in coupled functional blocks. The processor core is designed to calculate, as a function of one or multiple input variables of an input variable vector, of a weighting matrix having weighting factors and an offset value specified for each neuron, an output variable for each neuron for a neural layer of a multilayer perceptron model having a number of neurons, a sum of the values of the input variables weighted by the weighting factor, determined by the neuron and the input variable, and the offset value specified for the neuron being calculated for each neuron and the result being transformed using an activation function in order to obtain the output variable for the neuron.
    Type: Application
    Filed: September 4, 2017
    Publication date: September 23, 2021
    Inventors: Andre Guntoro, Ernst Kloppenburg, Heiner Markert, Martin Schiegg
  • Publication number: 20210286327
    Abstract: A model calculation unit for calculating a gradient with respect to a certain input variable of input variables of a predefined input variable vector for an RBF model with the aid of a hard-wired processor core designed as hardware for calculating a fixedly predefined processing algorithm in coupled functional blocks, the processor core being designed to calculate the gradient with respect to the certain input variable for an RBF model as a function of one or multiple input variable(s) of the input variable vector of an input dimension, of a number of nodes, of length scales predefined for each node and each input dimension, and of parameters of the RBF function predefined for each node.
    Type: Application
    Filed: September 4, 2017
    Publication date: September 16, 2021
    Inventors: Andre Guntoro, Heiner Markert, Martin Schiegg