Patents by Inventor Olivier Bichler
Olivier Bichler has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11886719
Abstract: A memory circuit for storing parsimonious data and intended to receive an input vector of size Iz, includes an encoder, a memory block comprising a first memory region and a second memory region divided into a number Iz of FIFO memories, each FIFO memory being associated with one component of the input vector, only non-zero data being saved in the FIFO memories, a decoder, the encoder being configured to generate an indicator of non-zero data for each component of the input vector, the memory circuit being configured to write the non-zero data of the input data vector to the respective FIFO memories and to write the indicator of non-zero data to the first memory region, the decoder being configured to read the outputs of the FIFO memories and the associated indicator in the first memory region.
Type: Grant
Filed: June 18, 2022
Date of Patent: January 30, 2024
Assignee: COMMISSARIAT A L'ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES
Inventors: Vincent Lorrain, Olivier Bichler, David Briand, Johannes Christian Thiele
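The storage scheme described in the abstract can be modeled in software. The sketch below is a minimal Python model, not the patented circuit: one FIFO per vector component holds only non-zero values, and a per-vector bitmap plays the role of the "indicator of non-zero data". The class and method names are illustrative assumptions.

```python
from collections import deque

class SparseVectorMemory:
    """Software model of the sparse memory circuit for vectors of size Iz.

    Second memory region: one FIFO per component, storing only non-zero data.
    First memory region: a FIFO of indicator bitmaps, one bit per component.
    """
    def __init__(self, Iz):
        self.Iz = Iz
        self.fifos = [deque() for _ in range(Iz)]   # second memory region
        self.indicators = deque()                   # first memory region

    def write(self, vector):
        """Encoder side: save non-zero components and their indicator."""
        assert len(vector) == self.Iz
        indicator = 0
        for i, v in enumerate(vector):
            if v != 0:
                self.fifos[i].append(v)   # only non-zero data is saved
                indicator |= 1 << i
        self.indicators.append(indicator)

    def read(self):
        """Decoder side: rebuild the full vector from FIFOs and indicator."""
        indicator = self.indicators.popleft()
        return [self.fifos[i].popleft() if (indicator >> i) & 1 else 0
                for i in range(self.Iz)]
```

Sparse vectors cost storage proportional to their non-zero count plus Iz indicator bits, which is the point of the scheme.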
-
Publication number: 20230030058
Abstract: A memory circuit for storing parsimonious data and intended to receive an input vector of size Iz, includes an encoder, a memory block comprising a first memory region and a second memory region divided into a number Iz of FIFO memories, each FIFO memory being associated with one component of the input vector, only non-zero data being saved in the FIFO memories, a decoder, the encoder being configured to generate an indicator of non-zero data for each component of the input vector, the memory circuit being configured to write the non-zero data of the input data vector to the respective FIFO memories and to write the indicator of non-zero data to the first memory region, the decoder being configured to read the outputs of the FIFO memories and the associated indicator in the first memory region.
Type: Application
Filed: June 18, 2022
Publication date: February 2, 2023
Inventors: Vincent Lorrain, Olivier Bichler, David Briand, Johannes Christian Thiele
-
Publication number: 20230014185
Abstract: A computer-implemented method for coding a digital signal intended to be processed by a digital computing system includes the steps of: receiving a sample of the digital signal quantized on a number Nd of bits, decomposing the sample into a plurality of binary words of parameterizable bit size Np, coding the sample through a plurality of pairs of values, each pair comprising one of the binary words and an address corresponding to the position of the binary word in the sample, transmitting the pairs of values to an integration unit in order to carry out a MAC operation between the sample and a weighting coefficient.
Type: Application
Filed: December 10, 2020
Publication date: January 19, 2023
Inventors: Johannes Christian THIELE, Olivier BICHLER, Marc DURANTON, Vincent LORRAIN
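A minimal software sketch of this coding scheme, assuming the Np-bit words are taken from least-significant to most-significant position; the function names and the default sizes (Nd=8, Np=2) are illustrative assumptions, not taken from the publication.

```python
def encode_sample(sample, Nd=8, Np=2):
    """Decompose an Nd-bit sample into Nd/Np binary words of Np bits each,
    coding it as (word, address) pairs; the address is the word's position
    in the sample."""
    mask = (1 << Np) - 1
    return [((sample >> (addr * Np)) & mask, addr) for addr in range(Nd // Np)]

def mac_from_pairs(pairs, weight, Np=2, acc=0):
    """Integration unit: carry out the MAC between the coded sample and a
    weighting coefficient, one (word, address) pair at a time."""
    for word, addr in pairs:
        # word * 2**(addr*Np) restores the word's weight in the sample
        acc += (word << (addr * Np)) * weight
    return acc
```

Summing the pair contributions reproduces exactly `sample * weight`, so the wide multiplication is replaced by several narrow ones.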
-
Patent number: 11551073
Abstract: A modulation device includes at least one memristive device, and a control block, the modulation device having an equivalent conductance yi(t) produced by the at least one memristive device and the control block being configured to receive a clock signal and perform a first modification of the equivalent conductance yi(t) upon receipt of each clock signal, receive an input voltage pulse and perform a second modification of the equivalent conductance yi(t) upon receipt of each input voltage pulse, the first and second modifications being in opposite directions.
Type: Grant
Filed: December 4, 2017
Date of Patent: January 10, 2023
Assignees: COMMISSARIAT A L'ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES, INSTITUT NATIONAL DE LA SANTE ET DE LA RECHERCHE MEDICALE
Inventors: Thilo Werner, Olivier Bichler, Elisa Vianello, Blaise Yvert
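As a rough behavioural model (not the patented circuit), the two opposing modifications can be sketched as follows; the step sizes, the clamping bounds, and which event increases versus decreases the conductance are all illustrative assumptions, since the abstract only specifies that the two modifications go in opposite directions.

```python
class ConductanceModulator:
    """Behavioural sketch: equivalent conductance y(t) drifts one way on each
    clock signal and the opposite way on each input voltage pulse."""
    def __init__(self, y0=0.5, clock_step=0.01, pulse_step=0.05,
                 y_min=0.0, y_max=1.0):
        self.y = y0
        self.clock_step = clock_step
        self.pulse_step = pulse_step
        self.y_min, self.y_max = y_min, y_max

    def on_clock(self):
        # first modification: decrease on each clock signal (assumed direction)
        self.y = max(self.y_min, self.y - self.clock_step)

    def on_pulse(self):
        # second modification: increase on each input voltage pulse (opposite direction)
        self.y = min(self.y_max, self.y + self.pulse_step)
```

With these assumed directions, the equilibrium conductance encodes the ratio of pulse rate to clock rate.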
-
Publication number: 20230004351
Abstract: A computer-implemented method is provided for coding a digital signal quantized on a given number Nd of bits and intended to be processed by a digital computing system, the signal being coded on a predetermined number Np of bits which is strictly less than Nd, the method including the steps of: receiving a digital signal composed of a plurality of samples, decomposing each sample into a sum of k maximum values which are equal to 2^Np - 1 and a residual value, with k being a positive or zero integer, successively transmitting the values obtained after decomposition to an integration unit for carrying out a MAC operation between the sample and a weighting coefficient.
Type: Application
Filed: December 10, 2020
Publication date: January 5, 2023
Inventors: Johannes Christian THIELE, Olivier BICHLER, Vincent LORRAIN
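The decomposition can be sketched in a few lines of Python; the function names are illustrative, not from the publication.

```python
def decompose(sample, Np):
    """Decompose sample = k * (2**Np - 1) + residual, with k >= 0.

    Every emitted value fits on Np bits, so the downstream MAC only ever
    handles Np-bit operands."""
    vmax = (1 << Np) - 1          # the maximum Np-bit value, 2^Np - 1
    k, residual = divmod(sample, vmax)
    values = [vmax] * k
    if residual:
        values.append(residual)
    return values

def mac(values, weight, acc=0):
    """Integration unit: accumulate sample * weight one Np-bit value at a time."""
    for v in values:
        acc += v * weight
    return acc
```

Since the values sum back to the sample, accumulating `value * weight` over the sequence reproduces the full-precision product.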
-
Patent number: 11507804
Abstract: A processor is provided for computing at least one convolution layer of a convolutional neural network in response to an input event, the convolutional neural network comprising at least one convolution kernel, the convolution kernel containing weight coefficients. The processor comprises at least one convolution module configured to compute the one or more convolution layers, each convolution module comprising a set of elementary processing units for computing the internal value of the convolution-layer neurons that are triggered by the input event, each convolution module being configured to match the weight coefficients of the kernel with at least some of the elementary processing units of the module in parallel, the number of elementary processing units being independent of the number of neurons of the convolution layer.
Type: Grant
Filed: April 27, 2017
Date of Patent: November 22, 2022
Assignee: COMMISSARIAT A L'ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES
Inventors: Olivier Bichler, Antoine Dupret, Vincent Lorrain
-
Patent number: 11423296
Abstract: A device is provided for distributing the convolution coefficients of at least one convolutional kernel of a convolutional neural network, the coefficients being carried by an input bus, to a set of processing units in a processor based on a convolutional-neural-network architecture. The device comprises at least one switching network that is controlled by at least one control unit, the switching network comprising a set of switches that are arranged to apply circular shifts to at least one portion of the input bus. For each convolution kernel, each control unit is configured to dynamically control at least some of the switches of the switching networks in response to an input event applied to the convolution kernel and at least one parameter representing the maximum size of the convolution kernels.
Type: Grant
Filed: April 27, 2017
Date of Patent: August 23, 2022
Assignee: COMMISSARIAT A L'ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES
Inventors: Olivier Bichler, Antoine Dupret, Vincent Lorrain
-
Patent number: 11423287
Abstract: A computer based on a spiking neural network includes at least one maximum pooling layer. In response to an input spike received by a neuron of the maximum pooling layer, the device is configured so as to receive the address of the activated synapse. The device comprises an address comparator configured so as to compare the address of the activated synapse with a set of reference addresses. Each reference address is associated with a hardness value and with a pooling neuron. The device activates a neuron of the maximum pooling layer if the address of the activated synapse is equal to one of the reference addresses and the hardness value associated with this reference address has the highest value from among the hardness values associated with the other reference addresses of the set.
Type: Grant
Filed: July 11, 2018
Date of Patent: August 23, 2022
Assignee: COMMISSARIAT A L'ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES
Inventors: Vincent Lorrain, Olivier Bichler
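The activation rule can be modeled in a few lines. This is only a sketch of the address comparison described in the abstract; how the hardness values are updated is not specified there, so the sketch treats them as given, and the class name is an illustrative assumption.

```python
class MaxPoolingUnit:
    """Sketch of the spike-based max pooling rule: a pooling neuron fires
    only when the input spike arrives on the reference address whose
    hardness value is currently the highest in its set."""
    def __init__(self, references):
        # references: dict mapping synapse address -> hardness value
        self.references = dict(references)

    def on_spike(self, synapse_address):
        """Return True if this spike activates the pooling neuron."""
        if synapse_address not in self.references:
            return False                     # address comparator: no match
        best = max(self.references.values())
        return self.references[synapse_address] == best
```

The comparison replaces an explicit max over analog activations: only the synapse currently judged "maximal" (highest hardness) propagates its spike.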
-
Publication number: 20220092397
Abstract: This electronic calculator comprises a plurality of electronic calculation blocks, each of which is configured to implement one or more respective processing layers of an artificial neural network. The calculation blocks are of at least two different types among: a first type with fixed topology, fixed operation, and fixed parameters, a second type with fixed topology, fixed operation, and modifiable parameters, and a third type with modifiable topology, modifiable operation, and modifiable parameters. For each processing layer implemented by the respective calculation block, the topology is a connection topology for each artificial neuron; the operation is a type of processing to be performed for each artificial neuron; and the parameters include values able to be determined via training of the neural network.
Type: Application
Filed: September 15, 2021
Publication date: March 24, 2022
Applicant: Commissariat à l'énergie atomique et aux énergies alternatives
Inventors: Vincent LORRAIN, Olivier BICHLER, David BRIAND, Johannes Christian THIELE
-
Publication number: 20220092417
Abstract: A shift-and-add multiplier is able to perform multiplication operations by multiplicative values, and is configured to receive a binary value as input and to deliver the product of that value and a respective multiplicative value. It includes a set of shift units, each connected to the input and configured to perform a bit shift of the value received at the input, the shift varying from one shift unit to another; and a set of summation units configured to sum the outputs of the shift units. It also includes a set of multiplexing unit(s) connected between the set of shift units and the set of summation unit(s), and a control unit configured to control the set of multiplexing unit(s) to select respective outputs of the shift units according to the multiplicative value and to deliver them to the set of summation unit(s).
Type: Application
Filed: September 20, 2021
Publication date: March 24, 2022
Applicant: Commissariat à l'énergie atomique et aux énergies alternatives
Inventors: Vincent Lorrain, Olivier Bichler, David Briand, Johannes Christian Thiele
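The principle can be sketched in software: each set bit of the multiplicative value selects the corresponding shifted copy of the input, and the selected copies are summed. The fixed shift amounts below are an illustrative assumption (they bound the multiplicative values the sketch can handle).

```python
def shift_add_multiply(x, m, shifts=(0, 1, 2, 3)):
    """Multiply x by m using only bit shifts and additions.

    The shift units compute x << s for each s; the multiplexing step keeps
    only the shifted outputs whose bit is set in m; the summation step adds
    them. Valid for m expressible over the given shift positions."""
    # multiplexer selection: bit s of m decides whether x << s contributes
    selected = [x << s for s in shifts if (m >> s) & 1]
    result = 0
    for term in selected:       # summation units
        result += term
    return result
```

For example, m = 5 = 0b101 selects the shifts by 0 and 2, giving x + 4x = 5x without any multiplier.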
-
Patent number: 11263519
Abstract: A method for unsupervised learning of a multilevel hierarchical network of artificial neurons wherein each neuron is interconnected with artificial synapses to neurons of a lower hierarchical level and to neurons of an upper hierarchical level.
Type: Grant
Filed: November 20, 2018
Date of Patent: March 1, 2022
Assignee: COMMISSARIAT A L'ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES
Inventors: Johannes Christian Thiele, Olivier Bichler
-
Patent number: 11222254
Abstract: A neuron circuit is capable of producing a weighted sum of digitized input signals and applying an activation function to the weighted sum so as to produce a digitized activation signal as output. The circuit includes at least: one multiplier multiplying each input signal (x1 to xn) with a weight value (w1j to wnj), one accumulator accumulating the results of the multiplier so as to produce the weighted sum, and one activation unit executing the activation function. The activation unit comprises at least one shift unit and at least one saturation unit capable of approximating a non-linear activation function. The result of the approximated activation function is obtained by one or more arithmetic shifts applied to the weighted sum.
Type: Grant
Filed: December 7, 2016
Date of Patent: January 11, 2022
Assignee: COMMISSARIAT A L'ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES
Inventors: Alexandre Carbon, Olivier Bichler, Marc Duranton, Jean-Marc Philippe
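A minimal software sketch of such a neuron, assuming a single shift followed by symmetric saturation (the patent allows more general combinations); the shift amount and saturation bounds are illustrative assumptions.

```python
def neuron(inputs, weights, shift=4, sat_min=-128, sat_max=127):
    """Weighted sum followed by a shift-and-saturate activation.

    The arithmetic right shift scales the accumulated sum (a cheap stand-in
    for the linear region of a sigmoid/tanh), and the saturation clips it,
    yielding a hard-tanh-like non-linearity with no lookup table."""
    acc = 0
    for x, w in zip(inputs, weights):       # multiplier + accumulator
        acc += x * w
    acc >>= shift                            # arithmetic shift unit
    return max(sat_min, min(sat_max, acc))   # saturation unit
```

Python's `>>` on negative integers is an arithmetic (sign-preserving) shift, matching the hardware behaviour the abstract describes.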
-
Publication number: 20210397968
Abstract: A new implementation is provided for an error back-propagation algorithm that is suited to the hardware constraints of a device implementing a spiking neural network. The invention notably uses binary or ternary encoding of the errors calculated in the back-propagation phase to adapt its implementation to the constraints of the network, and thus to avoid having to use floating-point number multiplication operators. More generally, the invention proposes a global adaptation of the back-propagation algorithm to the specific constraints of a spiking neural network. In particular, the invention makes it possible to use the same propagation infrastructure to propagate the data and to back-propagate the errors in the training phase. The invention proposes a generic implementation of a spiking neuron that is suitable for implementing any type of spiking neural network, in particular convolutional networks.
Type: Application
Filed: October 22, 2019
Publication date: December 23, 2021
Inventors: Johannes THIELE, Olivier BICHLER
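The key idea, multiplier-free gradient accumulation via ternary errors, can be sketched as follows. This is a hedged illustration of the general technique, not the publication's exact algorithm; in particular the ternarization threshold is a free parameter not specified in the abstract.

```python
def ternarize(errors, threshold):
    """Encode back-propagated errors as {-1, 0, +1}.

    With ternary errors, the gradient term error * activation reduces to a
    sign-controlled add/subtract, so no floating-point multiplier is needed."""
    return [0 if abs(e) < threshold else (1 if e > 0 else -1) for e in errors]

def accumulate_gradient(grad, ternary_error, activation):
    """Multiply-free gradient update: add or subtract the activation."""
    if ternary_error == 1:
        return grad + activation
    if ternary_error == -1:
        return grad - activation
    return grad       # zero error: no update
```

The same spike-routing infrastructure can then carry these ternary events backward, just as it carries binary spikes forward.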
-
Publication number: 20210241071
Abstract: A computer for computing a convolutional layer of an artificial neural network includes at least one set of at least two partial sum computing modules connected in series, and a storage member for storing the coefficients of at least one convolution filter, each partial sum computing module comprising at least one computing unit configured so as to carry out a multiplication of an item of input data of the computer and a coefficient of a convolution filter, followed by an addition of the output of the preceding partial sum computing module in the series, each set furthermore comprising, for each partial sum computing module except the first in the series, a shift register connected at the input for storing the item of input data for the processing duration of the preceding partial sum computing modules in the series.
Type: Application
Filed: August 28, 2019
Publication date: August 5, 2021
Inventors: Vincent LORRAIN, Olivier BICHLER, Mickael GUIBERT
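As a software model of the dataflow only (not of the circuit), the serial chain of partial-sum modules computes a 1-D convolution: module i multiplies a suitably delayed input by filter coefficient i and adds the previous module's partial sum. The delays that the shift registers provide in hardware are modeled here by indexing; the function name is an illustrative assumption.

```python
def conv_partial_sum_chain(inputs, coeffs):
    """Model of a chain of partial-sum modules computing a 1-D convolution.

    At each output step, the partial sum flows through the chain: each
    stage adds coeff[i] * (input delayed by i) to the previous stage's sum."""
    assert len(inputs) >= len(coeffs)
    outputs = []
    for t in range(len(coeffs) - 1, len(inputs)):
        psum = 0
        for i, c in enumerate(coeffs):
            # stage i: multiply the delayed input, add the incoming partial sum
            psum = psum + c * inputs[t - i]
        outputs.append(psum)
    return outputs
```

In hardware the inner loop is unrolled into the physical chain, so one output is produced per cycle once the pipeline is full.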
-
Publication number: 20210232897
Abstract: A processor is provided for computing at least one convolution layer of a convolutional neural network in response to an input event, the convolutional neural network comprising at least one convolution kernel, the convolution kernel containing weight coefficients. The processor comprises at least one convolution module configured to compute the one or more convolution layers, each convolution module comprising a set of elementary processing units for computing the internal value of the convolution-layer neurons that are triggered by the input event, each convolution module being configured to match the weight coefficients of the kernel with at least some of the elementary processing units of the module in parallel, the number of elementary processing units being independent of the number of neurons of the convolution layer.
Type: Application
Filed: April 27, 2017
Publication date: July 29, 2021
Inventors: Olivier BICHLER, Antoine DUPRET, Vincent LORRAIN
-
Patent number: 11055608
Abstract: A convolutional neural network is provided comprising artificial neurons arranged in layers, each comprising output matrices. An output matrix comprises output neurons and is connected to an input matrix, comprising input neurons, by synapses associated with a convolution matrix comprising weight coefficients associated with the output neurons of an output matrix. Each synapse consists of a set of memristive devices storing a weight coefficient of the convolution matrix. In response to a change of the output value of an input neuron, the neural network dynamically associates each set of memristive devices with an output neuron connected to the input neuron. The neural network comprises accumulator(s) for each output neuron to accumulate the values of the weight coefficients stored in the sets of memristive devices dynamically associated with the output neuron, the output value of the output neuron being determined from the value accumulated in the accumulator(s).
Type: Grant
Filed: August 18, 2015
Date of Patent: July 6, 2021
Assignee: COMMISSARIAT A L'ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES
Inventor: Olivier Bichler
-
Patent number: 11017293
Abstract: A programming method for an artificial neuron network having synapses, each including a single resistive random-access memory having first and second electrodes on either side of an active zone, the method including determining a number N of conductance intervals, where N ≥ 3; for each memory: choosing a conductance interval from amongst the N intervals; a step i) for application of a voltage pulse of a first type between the first and second electrodes, and for reading the conductance value of the memory; if the conductance value does not belong to the previously chosen conductance interval, a sub-step ii) for application of a voltage pulse of a second type between the first and second electrodes, and for reading the conductance value; if the conductance value does not belong to the chosen conductance interval, a step according to which step i) is reiterated, with steps i) and ii) being repeated until the conductance value belongs to the interval.
Type: Grant
Filed: August 15, 2016
Date of Patent: May 25, 2021
Assignee: COMMISSARIAT À L'ÉNERGIE ATOMIQUE ET AUX ÉNERGIES ALTERNATIVES
Inventors: Elisa Vianello, Olivier Bichler
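The program-and-verify loop the abstract describes can be sketched as follows. The device callbacks (`read_g`, `apply_set`, `apply_reset`) stand in for the physical pulse application and conductance read-out and are assumptions of this sketch, as is the iteration cap.

```python
def program_conductance(read_g, apply_set, apply_reset, g_lo, g_hi,
                        max_iters=1000):
    """Program-and-verify loop for one RRAM cell, per the abstract:

    step i): apply a pulse of the first type, then read the conductance;
    if it is not in the chosen interval [g_lo, g_hi], sub-step ii): apply a
    pulse of the second type and read again; repeat i) and ii) until the
    conductance falls inside the interval."""
    for _ in range(max_iters):
        apply_set()                      # step i: pulse of the first type
        g = read_g()
        if g_lo <= g <= g_hi:
            return g
        apply_reset()                    # sub-step ii: pulse of the second type
        g = read_g()
        if g_lo <= g <= g_hi:
            return g
    raise RuntimeError("cell did not converge to the chosen interval")
```

Alternating opposing pulses lets the loop home in on one of the N narrow conductance intervals, which is what makes multi-level (N ≥ 3) storage per cell practical.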
-
Publication number: 20200210807
Abstract: A computer based on a spiking neural network includes at least one maximum pooling layer. In response to an input spike received by a neuron of the maximum pooling layer, the device is configured so as to receive the address of the activated synapse. The device comprises an address comparator configured so as to compare the address of the activated synapse with a set of reference addresses. Each reference address is associated with a hardness value and with a pooling neuron. The device activates a neuron of the maximum pooling layer if the address of the activated synapse is equal to one of the reference addresses and the hardness value associated with this reference address has the highest value from among the hardness values associated with the other reference addresses of the set.
Type: Application
Filed: July 11, 2018
Publication date: July 2, 2020
Inventors: Vincent LORRAIN, Olivier BICHLER
-
Publication number: 20200019850
Abstract: An impulse-neuron-type neuromorphic circuit comprises a capacitor (Cmem) having a membrane voltage (Vmem), a first action comparator (1) for comparing the membrane voltage with a first action voltage (Vact, Vthreshold_high), a first regulation comparator (4) for comparing the membrane voltage with a first regulation voltage (Vreg), a device for reinitialising the membrane voltage (3), a register of threshold exceedances (5), and a regulator (2). The regulator is configured: if the membrane voltage exceeds the first regulation voltage (Vreg, Vthreshold_low), to control the device for reinitialising the membrane voltage (3) and modify the register of threshold exceedances (5); and if the membrane voltage exceeds the first action voltage (Vact, Vthreshold_high), to control the device for reinitialising the membrane voltage (3) and query the register of threshold exceedances to decide whether or not to generate an action potential (Spa) on an output of the neuromorphic circuit.
Type: Application
Filed: July 11, 2019
Publication date: January 16, 2020
Inventors: Alexandre Valentian, Olivier Bichler, François Rummens
-
Publication number: 20190303751
Abstract: A modulation device includes at least one memristive device, and a control block, the modulation device having an equivalent conductance yi(t) produced by the at least one memristive device and the control block being configured to receive a clock signal and perform a first modification of the equivalent conductance yi(t) upon receipt of each clock signal, receive an input voltage pulse and perform a second modification of the equivalent conductance yi(t) upon receipt of each input voltage pulse, the first and second modifications being in opposite directions.
Type: Application
Filed: December 4, 2017
Publication date: October 3, 2019
Inventors: Thilo WERNER, Olivier BICHLER, Elisa VIANELLO, Blaise YVERT