Digital Neural Network Patents (Class 706/41)
  • Patent number: 11977972
    Abstract: Residual semi-recurrent neural networks (RSNN) can be configured to receive both time invariant input and time variant input data to generate one or more time series predictions. The time invariant input can be processed by a multilayer perceptron of the RSNN. The output of the multilayer perceptron can be used as an initial state for a recurrent neural network unit of the RSNN. The recurrent neural network unit can also receive time variant input, and process the time variant input with the initial state to generate an output. The outputs of the multilayer perceptron and the recurrent neural network unit can be combined to generate the one or more time series predictions.
    Type: Grant
    Filed: March 7, 2023
    Date of Patent: May 7, 2024
    Assignee: Sanofi
    Inventors: Qi Tang, Youran Qi
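A rough sketch of the architecture this abstract describes: the MLP's output seeds the recurrent unit's initial hidden state, and the two branch outputs are summed. All layer sizes, the tanh nonlinearities, and the exact residual combination are illustrative assumptions, not details from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x_static, W1, W2):
    # Multilayer perceptron over the time-invariant features.
    h = np.tanh(W1 @ x_static)
    return np.tanh(W2 @ h)

def rsnn_predict(x_static, x_series, params):
    """Residual semi-recurrent sketch: the MLP output seeds the RNN's
    initial hidden state, and the branch outputs are summed (residual)."""
    W1, W2, Wx, Wh, Wo = params
    h = mlp(x_static, W1, W2)          # initial RNN state from static input
    mlp_out = Wo @ h                   # MLP branch contribution
    for x_t in x_series:               # recurrent branch over time steps
        h = np.tanh(Wx @ x_t + Wh @ h)
    rnn_out = Wo @ h
    return mlp_out + rnn_out           # residual combination

d_static, d_t, d_h = 4, 3, 5
params = (rng.normal(size=(d_h, d_static)), rng.normal(size=(d_h, d_h)),
          rng.normal(size=(d_h, d_t)), rng.normal(size=(d_h, d_h)),
          rng.normal(size=(1, d_h)))
y = rsnn_predict(rng.normal(size=d_static), rng.normal(size=(6, d_t)), params)
```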
  • Patent number: 11954582
    Abstract: Disclosed is a neural network accelerator including a first bit operator generating a first multiplication result by performing multiplication on first feature bits of input feature data and first weight bits of weight data, a second bit operator generating a second multiplication result by performing multiplication on second feature bits of the input feature data and second weight bits of the weight data, an adder generating an addition result by performing addition based on the first multiplication result and the second multiplication result, a shifter shifting a number of digits of the addition result depending on a shift value to generate a shifted addition result, and an accumulator generating output feature data based on the shifted addition result.
    Type: Grant
    Filed: December 21, 2022
    Date of Patent: April 9, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sungju Ryu, Hyungjun Kim, Jae-Joon Kim
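The bit-operator/adder/shifter/accumulator path described above can be checked numerically: splitting each operand into bit groups, multiplying the groups, shifting each partial product by its digit position, and accumulating reproduces the full-precision dot product. This sketch assumes unsigned 8-bit operands split into two 4-bit groups, which is one plausible configuration rather than the patent's.

```python
def split_bits(value, group_bits=4, groups=2):
    """Split an unsigned integer into low-to-high bit groups."""
    mask = (1 << group_bits) - 1
    return [(value >> (group_bits * i)) & mask for i in range(groups)]

def bit_serial_mac(features, weights, group_bits=4, groups=2):
    """Accumulate feature*weight products from shifted partial products,
    mirroring the bit operator -> adder -> shifter -> accumulator path."""
    acc = 0
    for f, w in zip(features, weights):
        f_groups = split_bits(f, group_bits, groups)
        w_groups = split_bits(w, group_bits, groups)
        for i, fg in enumerate(f_groups):
            for j, wg in enumerate(w_groups):
                partial = fg * wg                         # bit operator
                acc += partial << (group_bits * (i + j))  # shifter + accumulator
    return acc

feats, wts = [200, 17, 255], [3, 128, 64]
assert bit_serial_mac(feats, wts) == sum(f * w for f, w in zip(feats, wts))
```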
  • Patent number: 11947873
    Abstract: An exemplary method for identifying media may include receiving user input associated with a request for media, where that user input includes unstructured natural language speech including one or more words; identifying at least one context associated with the user input; causing a search for the media based on the at least one context and the user input; determining, based on the at least one context and the user input, at least one media item that satisfies the request; and in accordance with a determination that the at least one media item satisfies the request, obtaining the at least one media item.
    Type: Grant
    Filed: April 9, 2021
    Date of Patent: April 2, 2024
    Assignee: Apple Inc.
    Inventors: Ryan M. Orr, Daniel J. Mandel, Andrew J. Sinesio, Connor J. Barnett
  • Patent number: 11943364
    Abstract: In one embodiment, a set of feature vectors can be derived from any biometric data, and then, using a deep neural network (“DNN”) on those one-way homomorphic encryptions (i.e., each biometric's feature vector), an authentication system can determine matches or execute searches on encrypted data. Each biometric's feature vector can then be stored and/or used in conjunction with respective classifications, for use in subsequent comparisons without fear of compromising the original biometric data. In various embodiments, the original biometric data is discarded responsive to generating the encrypted values. In another embodiment, the homomorphic encryption enables computations and comparisons on ciphertext without decryption of the encrypted feature vectors. Security of such privacy-enabled biometrics can be increased by implementing an assurance factor (e.g., liveness) to establish that a submitted biometric has not been spoofed or faked.
    Type: Grant
    Filed: February 28, 2022
    Date of Patent: March 26, 2024
    Assignee: Private Identity LLC
    Inventor: Scott Edward Streit
  • Patent number: 11893271
    Abstract: A computing-in-memory circuit includes a Resistive Random Access Memory (RRAM) array and a peripheral circuit. The RRAM array comprises a plurality of memory cells arranged in an array pattern, and each memory cell is configured to store a data of L bits, L being an integer not less than 2. The peripheral circuit is configured to, in a storage mode, write more than one convolution kernels into the RRAM array, and in a computation mode, input elements that need to be convolved in a pixel matrix into the RRAM array and read a current of each column of memory cells, wherein each column of memory cells stores one convolution kernel correspondingly, one element of the convolution kernel is stored in one memory cell correspondingly, and one element of the pixel matrix is correspondingly input into the word line to which a row of memory cells is connected.
    Type: Grant
    Filed: July 23, 2020
    Date of Patent: February 6, 2024
    Assignee: INSTITUTE OF MICROELECTRONICS OF THE CHINESE ACADEMY OF SCIENCES
    Inventors: Feng Zhang, Renjun Song
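The kernel-per-column mapping above has a simple numeric analogue: with one flattened kernel stored down each column as cell conductances and the pixel values driven onto the word lines, each column current is the dot product of the pixel vector with that kernel. The kernel count, sizes, and values below are illustrative, not from the patent.

```python
import numpy as np

# Each column stores one flattened 3x3 kernel; cell "conductance" = element.
kernels = np.arange(18, dtype=float).reshape(2, 9)   # 2 kernels (assumed values)
G = kernels.T                                        # rows = word lines, cols = kernels

patch = np.linspace(0.0, 1.0, 9)                     # pixel patch driven on word lines
column_currents = patch @ G                          # each column current = dot product

# Matches computing each convolution output directly.
direct = np.array([patch @ k for k in kernels])
assert np.allclose(column_currents, direct)
```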
  • Patent number: 11868874
    Abstract: A 2D array-based neuromorphic processor includes: axon circuits each being configured to receive a first input corresponding to one bit from among bits indicating n-bit activation; first direction lines extending in a first direction from the axon circuits; second direction lines intersecting the first direction lines; synapse circuits disposed at intersections of the first direction lines and the second direction lines, and each being configured to store a second input corresponding to one bit from among bits indicating an m-bit weight and to output operation values of the first input and the second input; and neuron circuits connected to the first or second direction lines, each of the neuron circuits being configured to receive an operation value output from at least one of the synapse circuits, based on time information assigned individually to the synapse circuits, and to perform an arithmetic operation by using the operation values.
    Type: Grant
    Filed: March 10, 2023
    Date of Patent: January 9, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sungho Kim, Cheheung Kim, Jaeho Lee
  • Patent number: 11868870
    Abstract: A neuromorphic apparatus configured to process a multi-bit neuromorphic operation includes a single axon circuit, a single synaptic circuit, a single neuron circuit, and a controller. The single axon circuit is configured to receive, as a first input, an i-th bit of an n-bit axon. The single synaptic circuit is configured to store, as a second input, a j-th bit of an m-bit synaptic weight and output a synaptic operation value between the first input and the second input. The single neuron circuit is configured to obtain each bit value of a multi-bit neuromorphic operation result between the n-bit axon and the m-bit synaptic weight, based on the output synaptic operation value. The controller is configured to respectively determine the i-th bit and the j-th bit to be sequentially assigned for each time period of different time periods to the single axon circuit and the single synaptic circuit.
    Type: Grant
    Filed: August 30, 2019
    Date of Patent: January 9, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sungho Kim, Cheheung Kim, Jaeho Lee
  • Patent number: 11853888
    Abstract: A processor-implemented method of performing convolution operations in a neural network includes generating a plurality of first sub-bit groups and a plurality of second sub-bit groups, respectively from at least one pixel value of an input feature map and at least one predetermined weight, performing a convolution operation on a first pair that includes a first sub-bit group including a most significant bit (MSB) of the at least one pixel value and a second sub-bit group including an MSB of the at least one predetermined weight, based on the plurality of second sub-bit groups, obtaining a maximum value of a sum of results for convolution operations of remaining pairs excepting the first pair, and based on a result of the convolution operation on the first pair and the maximum value, determining whether to perform the convolution operations of the remaining pairs.
    Type: Grant
    Filed: December 28, 2022
    Date of Patent: December 26, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Joonho Song, Namjoon Kim, Sehwan Lee, Deokjin Joo
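The MSB-first decision described above can be sketched in a few lines: convolve the MSB sub-bit pair first, bound the largest possible contribution of the remaining (MSB,LSB), (LSB,MSB), and (LSB,LSB) pairs, and skip them when they cannot change the outcome. The skip rule used here (the result cannot exceed a ReLU-style threshold even with the maximum remaining contribution) is an illustrative assumption about how the determination step might be used, not the patent's exact criterion.

```python
def split8(v):
    """Split a value into sign plus 4-bit high/low magnitude groups."""
    s = -1 if v < 0 else 1
    m = abs(v)
    return s, (m >> 4) & 0xF, m & 0xF

def msb_first_conv(pixels, weights, threshold=0):
    """Convolve the MSB pair first; skip the remaining sub-bit pairs when
    even their maximum possible contribution cannot lift the result past
    `threshold` (so a ReLU would clamp it anyway)."""
    msb_sum, max_rest = 0, 0
    for p, w in zip(pixels, weights):
        _, p_hi, p_lo = split8(p)
        s, w_hi, w_lo = split8(w)
        msb_sum += s * (p_hi * w_hi) << 8       # first (MSB, MSB) pair
        # Remaining pairs: (hi,lo), (lo,hi), (lo,lo); bound by magnitude.
        max_rest += (p_hi * w_lo << 4) + (p_lo * w_hi << 4) + p_lo * w_lo
    if msb_sum + max_rest <= threshold:
        return threshold, True                  # remaining pairs skipped
    full = sum(p * w for p, w in zip(pixels, weights))
    return max(full, threshold), False

# Strongly negative MSB result: remaining pairs cannot recover, so skip.
out, skipped = msb_first_conv([255, 255], [-128, -128])
```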
  • Patent number: 11853880
    Abstract: One aspect of this description relates to a convolutional neural network (CNN). The CNN includes a memory cell array including a plurality of memory cells. Each memory cell includes at least one first capacitive element of a plurality of first capacitive elements. Each memory cell is configured to multiply a weight bit and an input bit to generate a product. The at least one first capacitive element is enabled when the product satisfies a predetermined threshold. The CNN includes a reference cell array including a plurality of second capacitive elements. The CNN includes a memory controller configured to compare a first signal associated with the plurality of first capacitive elements with a second signal associated with at least one second capacitive element of the plurality of second capacitive elements, and, based on the comparison, determine whether the at least one first capacitive element is enabled.
    Type: Grant
    Filed: August 25, 2020
    Date of Patent: December 26, 2023
    Assignee: TAIWAN SEMICONDUCTOR MANUFACTURING COMPANY LIMITED
    Inventors: Jaw-Juinn Horng, Szu-Chun Tsao
  • Patent number: 11836608
    Abstract: Techniques and systems are provided for implementing a convolutional neural network. One or more convolution accelerators are provided that each include a feature line buffer memory, a kernel buffer memory, and a plurality of multiply-accumulate (MAC) circuits arranged to multiply and accumulate data. In a first operational mode the convolutional accelerator stores feature data in the feature line buffer memory and stores kernel data in the kernel buffer memory. In a second mode of operation, the convolutional accelerator stores kernel decompression tables in the feature line buffer memory.
    Type: Grant
    Filed: November 18, 2022
    Date of Patent: December 5, 2023
    Assignees: STMICROELECTRONICS S.r.l., STMicroelectronics International N.V.
    Inventors: Thomas Boesch, Giuseppe Desoli, Surinder Pal Singh, Carmine Cappetta
  • Patent number: 11838046
    Abstract: Examples described herein include systems and methods which include wireless devices and systems with examples of full duplex compensation with a self-interference noise calculator. The self-interference noise calculator may be coupled to antennas of a wireless device and configured to generate adjusted signals that compensate self-interference. The self-interference noise calculator may include a network of processing elements configured to combine transmission signals into intermediate results according to input data and delayed versions of the intermediate results. Each set of intermediate results may be combined in the self-interference noise calculator to generate a corresponding adjusted signal. The adjusted signal is received by a corresponding wireless receiver to compensate for the self-interference noise generated by a wireless transmitter transmitting on the same frequency band as the wireless receiver is receiving.
    Type: Grant
    Filed: April 7, 2021
    Date of Patent: December 5, 2023
    Inventor: Fa-Long Luo
  • Patent number: 11830244
    Abstract: An image recognition method and apparatus based on a systolic array, and a medium are disclosed. The method includes: converting obtained image feature information into a one-dimensional feature vector; converting an obtained weight matrix into a one-dimensional weight vector, and allocating a corresponding weight group to each node in a trained three-dimensional systolic array model; performing multiply-accumulate of the feature vector and a weight value on the one-dimensional feature vector in parallel by using the three-dimensional systolic array model, to obtain a feature value corresponding to each node, different feature values reflecting the article categories contained in an image; and determining an article category contained in the image according to the feature value corresponding to each node and a pre-established corresponding relationship between the feature value and the article category.
    Type: Grant
    Filed: April 26, 2021
    Date of Patent: November 28, 2023
    Assignee: INSPUR SUZHOU INTELLIGENT TECHNOLOGY CO., LTD.
    Inventors: Gang Dong, Yaqian Zhao, Rengang Li, Hongbin Yang, Haiwei Liu, Dongdong Jiang
  • Patent number: 11801636
    Abstract: A method and an apparatus for additive manufacturing pertaining to high efficiency, energy beam patterning and beam steering to effectively and efficiently utilize the source energy. In one embodiment recycling and reuse of unwanted light includes a source of multiple light patterns produced by one or more light valves, with at least one of the multiple light patterns being formed from rejected patterned light. An image relay is used to direct the multiple light patterns, and a beam routing system receives the multiple light patterns and respectively directs them toward defined areas on a powder bed.
    Type: Grant
    Filed: April 27, 2021
    Date of Patent: October 31, 2023
    Assignee: Seurat Technologies, Inc.
    Inventors: James A. DeMuth, Francis L. Leard, Erik Toomre
  • Patent number: 11797831
    Abstract: Disclosed are a method and apparatus that can improve the defect tolerance of a hardware-based neural network. In one embodiment, a method for performing a calculation of values on first neurons of a first layer in a neural network includes: receiving a first pattern of a memory cell array; determining a second pattern of the memory cell array according to a third pattern; determining at least one pair of columns of the memory cell array according to the first pattern and the second pattern; switching input data of two columns of each of the at least one pair of columns of the memory cell array; and switching output data of the two columns in each of the at least one pair of columns of the memory cell array so as to determine the values on the first neurons of the first layer.
    Type: Grant
    Filed: August 8, 2022
    Date of Patent: October 24, 2023
    Assignee: Taiwan Semiconductor Manufacturing Co., Ltd.
    Inventors: Win-San Khwa, Yu-Der Chih, Yi-Chun Shih, Chien-Yin Liu
  • Patent number: 11783187
    Abstract: An approach is provided for progressive training of long-lived, evolving machine learning architectures. The approach involves, for example, determining alternative paths for the evolution of the machine learning model from a first architecture to a second architecture. The approach also involves determining one or more migration step alternatives in the alternative paths. The migration steps, for instance, include architecture options for the evolution of the machine learning model. The approach further involves processing data using the options to determine respective model performance data. The approach further involves selecting a migration step from the one or more migration step alternatives based on the respective model performance data to control a rate of migration steps over a rate of training in the evolution of the machine learning model. The approach further involves initiating a deployment of the selected migration step to the machine learning model.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: October 10, 2023
    Assignee: HERE GLOBAL B.V.
    Inventor: Tero Juhani Keski-Valkama
  • Patent number: 11775612
    Abstract: In order to provide a learning data generating apparatus that can efficiently suppress erroneous detections, the learning data generating apparatus includes a data acquisition unit configured to acquire learning data including teacher data, and a generation unit configured to generate generated learning data based on the learning data and a generating condition, wherein the generation unit converts teacher data of a positive instance into teacher data of a negative instance according to a preset rule when generating the generated learning data.
    Type: Grant
    Filed: October 28, 2021
    Date of Patent: October 3, 2023
    Assignee: Canon Kabushiki Kaisha
    Inventors: Shinji Yamamoto, Takato Kimura
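The positive-to-negative conversion step above is simple to sketch: copy the learning data and flip a positive teacher label to negative whenever a preset rule matches. The rule shown (rejecting small regions) is a hypothetical example of such a rule, not one taken from the patent.

```python
def generate_learning_data(samples, rule):
    """Copy the learning data, converting a positive-instance teacher label
    to a negative instance whenever the preset `rule` matches."""
    out = []
    for features, label in samples:
        if label == 1 and rule(features):
            label = 0                 # positive instance -> negative instance
        out.append((features, label))
    return out

data = [({"area": 900}, 1), ({"area": 40}, 1), ({"area": 50}, 0)]
# Hypothetical rule: regions below 100 px are too small to trust as positives.
converted = generate_learning_data(data, rule=lambda f: f["area"] < 100)
```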
  • Patent number: 11769053
    Abstract: The present disclosure includes apparatuses and methods for operating neural networks. An example apparatus includes a plurality of neural networks, wherein the plurality of neural networks are configured to receive a particular portion of data and wherein each of the plurality of neural networks are configured to operate on the particular portion of data during a particular time period to make a determination regarding a characteristic of the particular portion of data.
    Type: Grant
    Filed: December 6, 2021
    Date of Patent: September 26, 2023
    Assignee: Micron Technology, Inc.
    Inventor: Perry V. Lea
  • Patent number: 11733970
    Abstract: An artificial intelligence system includes a neural network layer including an arithmetic operation circuit that performs an arithmetic operation of a sigmoid function. The arithmetic operation circuit includes a first circuit configured to perform an exponent arithmetic operation using Napier's constant e as a base and output a first calculation result when an exponent in the exponent arithmetic operation is a negative number, wherein an absolute value of the exponent is used in the exponent arithmetic operation, and a second circuit configured to subtract the first calculation result obtained by the first circuit from 1 and output the subtracted value.
    Type: Grant
    Filed: March 3, 2020
    Date of Patent: August 22, 2023
    Assignees: Kabushiki Kaisha Toshiba, Toshiba Electronic Devices & Storage Corporation
    Inventor: Masanori Nishizawa
  • Patent number: 11715280
    Abstract: Disclosed are an object detection device and a control method. A method for controlling an object detection device comprises the steps of: receiving one image; dividing the received image into a predetermined number of local areas on the basis of the size of a convolutional layer of a convolution neural network (CNN); identifying small objects at the same time by inputting a number of the divided local areas corresponding to the number of CNN channels to each of a plurality of CNN channels; sequentially repeating the identifying of the small objects for each of the remaining divided local areas; selecting MM mode or MB mode; setting an object detection target area corresponding to the number of CNN channels on the basis of the selected mode; and detecting the small objects at the same time by inputting each set object detection target area to each of the plurality of CNN channels.
    Type: Grant
    Filed: August 1, 2019
    Date of Patent: August 1, 2023
    Assignee: KYUNGPOOK NATIONAL UNIVERSITY INDUSTRY-ACADEMIC COOPERATION FOUNDATION
    Inventors: Min Young Kim, Byeong Hak Kim, Jong Hyeok Lee
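The tiling step above can be sketched directly: divide the image into fixed-size local areas sized to the CNN input layer, then group the tiles into batches of one tile per CNN channel, repeating until all local areas are covered. Image size, tile size, and channel count below are assumed for illustration.

```python
import numpy as np

def tile_image(image, tile, channels):
    """Divide an image into fixed-size local areas and group them into
    batches of `channels` tiles, one batch per parallel CNN pass."""
    H, W = image.shape
    tiles = [image[r:r + tile, c:c + tile]
             for r in range(0, H, tile) for c in range(0, W, tile)]
    return [tiles[i:i + channels] for i in range(0, len(tiles), channels)]

img = np.arange(64).reshape(8, 8)
# 8x8 image with 4x4 tiles -> 4 local areas; with 3 CNN channels, the
# first pass handles 3 areas and a second pass handles the remainder.
batches = tile_image(img, tile=4, channels=3)
```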
  • Patent number: 11675693
    Abstract: A novel and useful neural network (NN) processing core incorporating inter-device connectivity and adapted to implement artificial neural networks (ANNs). A chip-to-chip interface spreads a given ANN model across multiple devices in a seamless manner. The NN processor is constructed from self-contained computational units organized in a hierarchical architecture. The homogeneity enables simpler management and control of similar computational units, aggregated in multiple levels of hierarchy. Computational units are designed with as little overhead as possible, with additional features and capabilities aggregated at higher levels in the hierarchy. On-chip memory provides storage for content inherently required for basic operation at a particular hierarchy and is coupled with the computational resources in an optimal ratio. Lean control provides just enough signaling to manage only the operations required at a particular hierarchical level.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: June 13, 2023
    Inventors: Avi Baum, Or Danon, Hadar Zeitlin, Daniel Ciubotariu, Rami Feig
  • Patent number: 11676043
    Abstract: A mechanism is provided in a data processing system having a processor and a memory. The memory comprises instructions which are executed by the processor to cause the processor to implement a training system for finding an optimal surface for a hierarchical classification task on an ontology. The training system receives a training data set and a hierarchical classification ontology data structure. The training system generates a neural network architecture based on the training data set and the hierarchical classification ontology data structure. The neural network architecture comprises an indicative layer, a parent tier (PT) output and a lower leaf tier (LLT) output. The training system trains the neural network architecture to classify the training data set to leaf nodes at the LLT output and parent nodes at the PT output. The indicative layer in the neural network architecture determines a surface that passes through each path from a root to a leaf node in the hierarchical ontology data structure.
    Type: Grant
    Filed: March 4, 2019
    Date of Patent: June 13, 2023
    Assignee: International Business Machines Corporation
    Inventors: Pathirage Dinindu Sujan Udayanga Perera, Orna Raz, Ramani Routray, Vivek Krishnamurthy, Sheng Hua Bao, Eitan D. Farchi
  • Patent number: 11663040
    Abstract: The computer system includes a node including a processor and a memory, and the processor and the memory serve as arithmetic operation resources. The computer system has an application program that operates using the arithmetic operation resources, and a storage controlling program that operates using the arithmetic operation resources for processing data to be inputted to and outputted from a storage device by the application program. The computer system has use resource amount information that associates operation states of the application program and the arithmetic operation resources that are to be used by the application program and the storage controlling program. The computer system changes allocation of the arithmetic operation resources to the application program and the storage controlling program used by the application program on the basis of an operation state of the application program and the use resource amount information.
    Type: Grant
    Filed: March 19, 2021
    Date of Patent: May 30, 2023
    Assignee: HITACHI, LTD.
    Inventors: Kohei Tatara, Yoshinori Ohira, Masakuni Agetsuma
  • Patent number: 11663461
    Abstract: Instruction distribution in an array of neural network cores is provided. In various embodiments, a neural inference chip is initialized with core microcode. The chip comprises a plurality of neural cores. The core microcode is executable by the neural cores to execute a tensor operation of a neural network. The core microcode is distributed to the plurality of neural cores via an on-chip network. The core microcode is executed synchronously by the plurality of neural cores to compute a neural network layer.
    Type: Grant
    Filed: July 5, 2018
    Date of Patent: May 30, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Hartmut Penner, Dharmendra S. Modha, John V. Arthur, Andrew S. Cassidy, Rathinakumar Appuswamy, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Jun Sawada, Brian Taba
  • Patent number: 11663451
    Abstract: A 2D array-based neuromorphic processor includes: axon circuits each being configured to receive a first input corresponding to one bit from among bits indicating n-bit activation; first direction lines extending in a first direction from the axon circuits; second direction lines intersecting the first direction lines; synapse circuits disposed at intersections of the first direction lines and the second direction lines, and each being configured to store a second input corresponding to one bit from among bits indicating an m-bit weight and to output operation values of the first input and the second input; and neuron circuits connected to the second direction lines, each of the neuron circuits being configured to receive an operation value output from at least one of the synapse circuits, based on time information assigned individually to the synapse circuits, and to perform a multi-bit operation by using the operation values and the time information.
    Type: Grant
    Filed: February 13, 2019
    Date of Patent: May 30, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sungho Kim, Cheheung Kim, Jaeho Lee
  • Patent number: 11636883
    Abstract: Provided is a semiconductor device which can execute the product-sum operation. The semiconductor device includes a first memory cell, a second memory cell, and an offset circuit. First analog data is stored in the first memory cell, and reference analog data is stored in the second memory cell. The first memory cell and the second memory cell supply a first current and a second current, respectively, when a reference potential is applied as a selection signal. The offset circuit has a function of supplying a third current corresponding to a differential current between the first current and the second current. In the semiconductor device, the first memory cell and the second memory cell supply a fourth current and a fifth current, respectively, when a potential corresponding to second analog data is applied as a selection signal.
    Type: Grant
    Filed: November 2, 2020
    Date of Patent: April 25, 2023
    Assignee: Semiconductor Energy Laboratory Co., Ltd.
    Inventor: Yoshiyuki Kurokawa
  • Patent number: 11599777
    Abstract: In an example, an apparatus comprises a plurality of execution units comprising logic, at least partially including hardware logic, to traverse a solution space, score a plurality of solutions for scheduling deep learning network execution, and select a preferred solution from the plurality of solutions to implement the deep learning network. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: April 28, 2017
    Date of Patent: March 7, 2023
    Assignee: Intel Corporation
    Inventors: Eran Ben-Avi, Neta Zmora, Guy Jacob, Lev Faivishevsky, Jeremie Dreyfuss, Tomer Bar-On, Jacob Subag, Yaniv Fais, Shira Hirsch, Orly Weisel, Zigi Walter, Yarden Oren
  • Patent number: 11580366
    Abstract: An event-driven neural network including a plurality of interconnected core circuits is provided. Each core circuit includes an electronic synapse array that has multiple digital synapses interconnecting a plurality of digital electronic neurons. A synapse interconnects an axon of a pre-synaptic neuron with a dendrite of a post-synaptic neuron. A neuron integrates input spikes and generates a spike event in response to the integrated input spikes exceeding a threshold. Each core circuit also has a scheduler that receives a spike event and delivers the spike event to a selected axon in the synapse array based on a schedule for deterministic event delivery.
    Type: Grant
    Filed: October 28, 2019
    Date of Patent: February 14, 2023
    Assignee: International Business Machines Corporation
    Inventors: Filipp Akopyan, John V. Arthur, Rajit Manohar, Paul A. Merolla, Dharmendra S. Modha, Alyosha Molnar, William P. Risk, III
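The integrate-and-fire behavior described above has a compact software analogue: a scheduler holds spike events with delivery ticks, each tick the neuron integrates the synaptic weights of delivered spikes, and it emits a spike when the integrated potential crosses the threshold. The weights, threshold, reset-to-zero behavior, and event timings are all illustrative assumptions.

```python
import heapq

def run_core(events, weights, threshold, ticks):
    """Deterministic event delivery: spike events are queued with a delivery
    tick; each tick the neuron integrates the weights of delivered spikes
    and emits a spike when the membrane potential crosses the threshold."""
    schedule = list(events)
    heapq.heapify(schedule)               # (delivery_tick, axon) pairs
    potential, out_spikes = 0.0, []
    for t in range(ticks):
        while schedule and schedule[0][0] == t:
            _, axon = heapq.heappop(schedule)
            potential += weights[axon]    # synaptic integration
        if potential >= threshold:
            out_spikes.append(t)          # generated spike event
            potential = 0.0               # reset after firing (assumed)
    return out_spikes

spikes = run_core(events=[(0, 0), (1, 1), (3, 0)],
                  weights=[0.6, 0.5], threshold=1.0, ticks=5)
```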
  • Patent number: 11568243
    Abstract: A processor-implemented method of performing convolution operations in a neural network includes generating a plurality of first sub-bit groups and a plurality of second sub-bit groups, respectively from at least one pixel value of an input feature map and at least one predetermined weight, performing a convolution operation on a first pair that includes a first sub-bit group including a most significant bit (MSB) of the at least one pixel value and a second sub-bit group including an MSB of the at least one predetermined weight, based on the plurality of second sub-bit groups, obtaining a maximum value of a sum of results for convolution operations of remaining pairs excepting the first pair, and based on a result of the convolution operation on the first pair and the maximum value, determining whether to perform the convolution operations of the remaining pairs.
    Type: Grant
    Filed: December 5, 2019
    Date of Patent: January 31, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Joonho Song, Namjoon Kim, Sehwan Lee, Deokjin Joo
  • Patent number: 11568225
    Abstract: A signal processing method and apparatus, where the apparatus includes an input interface configured to receive an input signal matrix and a weight matrix, a processor configured to interleave the input signal matrix to obtain an interleaved signal matrix, partition the interleaved signal matrix, interleave the weight matrix to obtain an interleaved weight matrix, process the interleaved weight matrix to obtain a plurality of sparsified partitioned weight matrices, perform matrix multiplication on the sparsified partitioned weight matrices and a plurality of partitioned signal matrices to obtain a plurality of matrix multiplication results, and an output interface configured to output a signal processing result.
    Type: Grant
    Filed: December 6, 2019
    Date of Patent: January 31, 2023
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventor: Ruosheng Xu
  • Patent number: 11562215
    Abstract: An artificial neural network circuit includes a crossbar circuit, and a processing circuit. The crossbar circuit transmits a signal between layered neurons of an artificial neural network. The crossbar circuit includes input bars, output bars arranged intersecting the input bars, and memristors. The processing circuit calculates a sum of signals flowing into each of the output bars. The processing circuit calculates, as the sum of the signals, a sum of signals flowing into a plurality of separate output bars and conductance values of the corresponding memristors are set so as to cooperate to give a desired weight to the signal to be transmitted.
    Type: Grant
    Filed: October 1, 2019
    Date of Patent: January 24, 2023
    Assignee: DENSO CORPORATION
    Inventors: Irina Kataeva, Shigeki Otsuka
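The cooperating-output-bar idea above can be checked numerically: when a desired weight exceeds one memristor's conductance range, it is split across two parallel output bars whose currents the processing circuit sums, so the summed current equals the dot product with the full weight. The weight values and the per-device conductance limit below are assumptions for illustration.

```python
import numpy as np

# Desired weights exceed one memristor's conductance range (assumed limit),
# so each weight is split across two output bars whose currents are summed.
weights = np.array([1.7, -0.4, 2.3])
g_max = 1.0                                   # per-device conductance limit
g_a = np.clip(weights, -g_max, g_max)         # first output bar
g_b = weights - g_a                           # second bar takes the remainder

x = np.array([0.2, 1.0, -0.5])                # input-bar signals
current = x @ g_a + x @ g_b                   # processing circuit sums both bars
assert np.isclose(current, x @ weights)       # cooperating bars give full weight
```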
  • Patent number: 11562212
    Abstract: A method performs XNOR-equivalent operations by adjusting column thresholds of a compute-in-memory array of an artificial neural network. The method includes adjusting an activation threshold generated for each column of the compute-in-memory array based on a function of a weight value and an activation value. The method also includes calculating a conversion bias current reference based on an input value from an input vector to the compute-in-memory array, the compute-in-memory array being programmed with a set of weights. The adjusted activation threshold and the conversion bias current reference are used as a threshold for determining the output values of the compute-in-memory array.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: January 24, 2023
    Assignee: Qualcomm Incorporated
    Inventors: Zhongze Wang, Edward Teague, Max Welling
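The XNOR-equivalence above rests on a standard identity: with activations and weights stored as {0,1} bits encoding ±1 values, the bipolar (XNOR-style) dot product equals 4·(a·w) − 2·Σa − 2·Σw + N. The −2·Σw + N part is fixed once the weights are programmed, so it can fold into the per-column threshold, while −2·Σa is the input-dependent conversion bias. This sketch verifies the identity; the mapping of each term onto the patent's threshold and bias is an interpretation, not a claim from the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.integers(0, 2, size=32)          # binary activations encoding +/-1
w = rng.integers(0, 2, size=32)          # binary weights encoding +/-1
N = a.size

and_sum = int(a @ w)                     # what the compute-in-memory array sums
# -2*sum(w)+N folds into the per-column threshold (weights are programmed);
# -2*sum(a) is the input-dependent conversion bias.
xnor_dot = 4 * and_sum - 2 * int(a.sum()) - 2 * int(w.sum()) + N

bipolar = (2 * a - 1) @ (2 * w - 1)      # direct +/-1 (XNOR-style) dot product
assert xnor_dot == bipolar
```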
  • Patent number: 11556771
    Abstract: Novel connection between neurons of a neural network is provided. A perceptron included in the neural network includes a plurality of neurons; the neuron includes a synapse circuit and an activation function circuit; and the synapse circuit includes a plurality of memory cells. A bit line selected by address information for selecting a memory cell is shared by a plurality of perceptrons. The memory cell is supplied with a weight coefficient of an analog signal, and the synapse circuit is supplied with an input signal. The memory cell multiplies the input signal by the weight coefficient and converts the multiplied result into a first current. The synapse circuit generates a second current by adding a plurality of first currents and converts the second current into a first potential.
    Type: Grant
    Filed: April 2, 2018
    Date of Patent: January 17, 2023
    Assignee: SEMICONDUCTOR ENERGY LABORATORY CO., LTD.
    Inventors: Shintaro Harada, Hiroki Inoue, Takeshi Aoki
  • Patent number: 11551028
    Abstract: A novel and useful system and method of improved power performance and lowered memory requirements for an artificial neural network based on packing memory utilizing several structured sparsity mechanisms. The invention applies to neural network (NN) processing engines adapted to implement mechanisms to search for structured sparsity in weights and activations, resulting in a considerably reduced memory usage. The sparsity guided training mechanism synthesizes and generates structured sparsity weights. A compiler mechanism within a software development kit (SDK), manipulates structured weight domain sparsity to generate a sparse set of static weights for the NN. The structured sparsity static weights are loaded into the NN after compilation and utilized by both the structured weight domain sparsity mechanism and the structured activation domain sparsity mechanism.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: January 10, 2023
    Inventors: Avi Baum, Or Danon, Daniel Chibotero, Gilad Nahor
  • Patent number: 11537687
    Abstract: A method comprises accessing a flattened input stream that includes a set of parallel vectors representing a set of input values of a kernel-sized tile of an input tensor that is to be convolved with a kernel. An expanded kernel is received that is generated by permuting values from the kernel. A control pattern is received that includes a set of vectors each corresponding to the output value position for the kernel-sized tile of the output and indicating a vector of the flattened input stream to access input values. The method further comprises generating, for each output position of each kernel-sized tile of the output, a dot product between a first vector that includes values of the flattened input stream as selected by the control pattern, and a second vector corresponding to a vector in the expanded kernel corresponding to the output position.
    Type: Grant
    Filed: November 18, 2019
    Date of Patent: December 27, 2022
    Assignee: GROQ, INC.
    Inventors: Jonathan Alexander Ross, Tom Hawkins, Gregory Michael Thorson, Matt Boyd
  • Patent number: 11531868
    Abstract: Some embodiments provide a method for a neural network inference circuit that executes a neural network including computation nodes at multiple layers. Each of a set of the nodes includes a dot product of input values and weight values. The method reads multiple input values for a particular layer from a memory location of the circuit. A first set of the input values are used for a first dot product for a first node of the layer. The method stores the input values in a cache. The method computes the first dot product for the first node using the first set of input values. Without requiring a read of any input values from any additional memory locations, the method computes a second dot product for a second node of the particular layer using a subset of the first set of input values and a second set of the input values.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: December 20, 2022
    Assignee: PERCEIVE CORPORATION
    Inventors: Kenneth Duong, Jung Ko, Steven L. Teig
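    The caching idea — computing a second dot product from mostly the same inputs without another memory read — can be sketched with a 1-D sliding window. This is a hypothetical software simplification of a hardware inference circuit; names are illustrative:

    ```python
    def conv1d_with_cache(inputs, weights):
        """1-D convolution in which each output node reuses most of the
        previous node's input window from a local cache, so only one new
        value per output has to be fetched from memory."""
        k = len(weights)
        cache = list(inputs[:k])  # a single memory read fills the cache
        outputs = [sum(c * w for c, w in zip(cache, weights))]
        for nxt in inputs[k:]:
            cache.pop(0)          # drop the value that left the window
            cache.append(nxt)     # only one new value is fetched
            outputs.append(sum(c * w for c, w in zip(cache, weights)))
        return outputs

    print(conv1d_with_cache([1, 2, 3, 4], [1, 1]))  # [3, 5, 7]
    ```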
  • Patent number: 11474593
    Abstract: A system having sensor modules and a computing device. Each sensor module has an inertial measurement unit attached to a portion of a user to generate motion data identifying a sequence of orientations of the portion. The computing device provides the sequences of orientations measured by the sensor modules as input to an artificial neural network, obtains as output from the artificial neural network a predicted orientation measurement of a part of the user, and controls an application by setting an orientation of a rigid part of a skeleton model of the user according to the predicted orientation measurement. The artificial neural network can be trained to predict orientations measured using an optical tracking system based on orientations measured using inertial measurement units and/or to predict orientation measurements of some rigid parts in a kinematic chain based on orientation measurements of other rigid parts in the kinematic chain.
    Type: Grant
    Filed: May 7, 2018
    Date of Patent: October 18, 2022
    Assignee: Finch Technologies Ltd.
    Inventors: Viktor Vladimirovich Erivantcev, Alexander Sergeevich Lobanov, Alexey Ivanovich Kartashov, Daniil Olegovich Goncharov
  • Patent number: 11461625
    Abstract: Lossy tensor compression and decompression circuits compress and decompress tensor elements based on the values of neighboring tensor elements. The lossy compression circuit scales each tensor element of a tile by a scaling factor that is based on the maximum value that can be represented by the number of bits used to represent a compressed tensor element, and the greatest value and least value of the tensor elements of the tile. The lossy decompression circuit performs the inverse of the lossy compression. The compression circuit and decompression circuit have parallel multiplier circuits and parallel adder circuits to perform the lossy compression and lossy decompression, respectively.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: October 4, 2022
    Assignee: XILINX, INC.
    Inventors: Michael Wu, Christopher H. Dick
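    A software analogue of the min-max scaling the abstract describes, mapping each tile into the integer range representable by the compressed bit width. Function names and the rounding details are assumptions for illustration:

    ```python
    def compress_tile(tile, bits=8):
        """Scale each element of a tile into [0, 2**bits - 1] using the
        tile's least and greatest values, returning the integer codes plus
        the parameters needed to invert the mapping."""
        lo, hi = min(tile), max(tile)
        qmax = (1 << bits) - 1                     # max representable value
        scale = qmax / (hi - lo) if hi != lo else 0.0
        codes = [round((v - lo) * scale) for v in tile]
        return codes, lo, scale

    def decompress_tile(codes, lo, scale):
        """Inverse of the lossy compression; error is bounded by the
        quantization step."""
        return [lo + (c / scale if scale else 0.0) for c in codes]
    ```

    The compression is lossy because `round` discards sub-step detail; a hardware version would do the per-element multiplies in parallel, as the abstract notes.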
  • Patent number: 11423983
    Abstract: A memory device for in-memory computation includes data channels, a memory cell array, a maximum accumulated weight generating array, a minimum accumulated weight generating array, a reference generator and a comparator. The data channels are selectively enabled according to data input. The memory cell array generates an accumulated data weight value according to the quantity of enabled data channels, a first resistance and a second resistance. The maximum accumulated weight generating array generates a maximum accumulated weight value according to the quantity of enabled data channels and the first resistance. The minimum accumulated weight generating array generates a minimum accumulated weight value according to the quantity of enabled data channels and the second resistance. The reference generator generates reference value(s) according to the maximum and minimum accumulated weight values.
    Type: Grant
    Filed: May 17, 2021
    Date of Patent: August 23, 2022
    Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventors: Chih-Sheng Lin, Sih-Han Li, Yu-Hui Lin, Jian-Wei Su
  • Patent number: 11392802
    Abstract: In one embodiment, a set of feature vectors can be derived from any biometric data, and a deep neural network ("DNN") operating on one-way homomorphic encryptions of those feature vectors can then determine matches or execute searches on encrypted data. Each biometric's feature vector can then be stored and/or used in conjunction with respective classifications, for use in subsequent comparisons without fear of compromising the original biometric data. In various embodiments, the original biometric data is discarded responsive to generating the encrypted values. In another embodiment, the homomorphic encryption enables computations and comparisons on ciphertext without decryption. This improves security over conventional approaches, as searching biometrics in the clear on any system represents a significant security vulnerability. In various examples described herein, only the one-way encrypted biometric data is available on a given device.
    Type: Grant
    Filed: March 7, 2018
    Date of Patent: July 19, 2022
    Assignee: Private Identity LLC
    Inventor: Scott Edward Streit
  • Patent number: 11379707
    Abstract: A computer-implemented method that includes receiving, by a processing unit, an instruction that specifies data values for performing a tensor computation. In response to receiving the instruction, the method may include, performing, by the processing unit, the tensor computation by executing a loop nest comprising a plurality of loops, wherein a structure of the loop nest is defined based on one or more of the data values of the instruction. The tensor computation can be at least a portion of a computation of a neural network layer. The data values specified by the instruction may comprise a value that specifies a type of the neural network layer, and the structure of the loop nest can be defined at least in part by the type of the neural network layer.
    Type: Grant
    Filed: November 22, 2017
    Date of Patent: July 5, 2022
    Assignee: Google LLC
    Inventors: Ravi Narayanaswami, Dong Hyuk Woo, Olivier Temam, Harshit Khaitan
  • Patent number: 11354569
    Abstract: A neural network computation circuit includes in-area multiple-word line selection circuits that are provided in one-to-one correspondence to a plurality of word line areas into which a plurality of word lines included in a memory array are logically divided. Each of the in-area multiple-word line selection circuits sets one or more word lines in a selected state or a non-selected state, and includes a first latch and a second latch provided for each word line.
    Type: Grant
    Filed: March 2, 2020
    Date of Patent: June 7, 2022
    Assignee: PANASONIC CORPORATION
    Inventors: Masayoshi Nakayama, Kazuyuki Kouno, Yuriko Hayata, Takashi Ono, Reiji Mochida
  • Patent number: 11347964
    Abstract: A hardware circuit in which integer numbers are used to represent fixed-point numbers having an integer part and a fractional part is disclosed. The hardware circuit comprises a multiply-accumulate unit configured to perform convolution operations using input data and weights and, in dependence thereon, to generate an intermediate result. The hardware circuit comprises a bias bit shifter configured to shift a bias value bitwise by a bias shift value so as to provide a bit-shifted bias value, a carry bit shifter configured to shift a carry value bitwise by a carry shift value so as to provide a bit-shifted carry value, an adder tree configured to add the intermediate result, the bit-shifted bias value and the bit-shifted carry value so as to provide a multiply-accumulate result, and a multiply-accumulate bit shifter configured to shift the multiply-accumulate result bitwise by a multiply-accumulate shift value to provide a bit-shifted multiply-accumulate result.
    Type: Grant
    Filed: August 7, 2017
    Date of Patent: May 31, 2022
    Assignee: RENESAS ELECTRONICS CORPORATION
    Inventor: Matthias Nahr
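    A sketch of the fixed-point arithmetic the abstract describes: accumulate integer products, add a bias shifted into the accumulator's fixed-point scale, then shift the result to realign its binary point. Operand names and shift directions are illustrative assumptions:

    ```python
    def fixed_point_mac(inputs, weights, bias, bias_shift, mac_shift):
        """Integer multiply-accumulate with bit-shifted bias alignment.
        inputs, weights, and bias are integers representing fixed-point
        numbers; left-shifting the bias aligns its binary point with the
        products, and the final right shift realigns the result."""
        acc = sum(i * w for i, w in zip(inputs, weights))  # intermediate result
        acc += bias << bias_shift                          # bit-shifted bias value
        return acc >> mac_shift                            # realigned MAC result

    # (2*4 + 3*5) + (1 << 3) = 31; 31 >> 2 = 7
    print(fixed_point_mac([2, 3], [4, 5], 1, 3, 2))  # 7
    ```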
  • Patent number: 11270198
    Abstract: Disclosed is a neuromorphic-processing system including, in some embodiments, a special-purpose host processor operable as a stand-alone host processor; a neuromorphic co-processor including an artificial neural network; and a communications interface between the host processor and the co-processor configured to transmit information therebetween. The co-processor is configured to enhance special-purpose processing of the host processor with the artificial neural network. Also disclosed is a method of a neuromorphic-processing system having the special-purpose host processor and the neuromorphic co-processor including, in some embodiments, enhancing the special-purpose processing of the host processor with the artificial neural network of the co-processor. In some embodiments, the host processor is a hearing-aid processor.
    Type: Grant
    Filed: July 28, 2018
    Date of Patent: March 8, 2022
    Assignee: Syntiant
    Inventors: Kurt F. Busch, Jeremiah H. Holleman, III, Pieter Vorenkamp, Stephen W. Bailey
  • Patent number: 11263077
    Abstract: A novel and useful system and methods providing several functional safety mechanisms for use in an artificial neural network (ANN) processor. The mechanisms can be deployed individually or in combination to provide a desired level of safety in neural networks. Multiple strategies are applied involving redundancy by design, redundancy through spatial mapping, as well as self-tuning procedures that modify static behavior (weights) and monitor dynamic behavior (activations). The various mechanisms of the present invention address ANN system level safety in situ, as a system level strategy that is tightly coupled with the processor architecture. The NN processor incorporates several functional safety concepts which reduce the risk that a failure occurring during operation goes unnoticed. The mechanisms function to detect and promptly flag and report the occurrence of an error, with some mechanisms capable of correction as well.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: March 1, 2022
    Inventors: Roi Seznayov, Guy Kaminitz, Daniel Chibotero, Ori Katz, Amir Shmul, Yuval Adelstein, Nir Engelberg, Or Danon, Avi Baum
  • Patent number: 11252417
    Abstract: A method of configuring an image encoder emulator. Input image data is encoded at an encoding stage comprising a network of inter-connected weights, and decoded at a decoding stage to generate a first distorted version of the input image data. The first distorted version is compared with a second distorted version of the input image data generated using an external encoder to determine a distortion difference score. A rate prediction model is used to predict an encoding bitrate associated with encoding the input image data to a quality corresponding to the first distorted version. A rate difference score is determined by comparing the predicted encoding bitrate with an encoding bitrate used by the external encoder to encode the input image data to a quality corresponding to the second distorted version. The weights of the encoding stage are trained based on the distortion difference score and the rate difference score.
    Type: Grant
    Filed: September 30, 2020
    Date of Patent: February 15, 2022
    Assignee: Size Limited
    Inventor: Ioannis Andreopoulos
  • Patent number: 11240177
    Abstract: Among other aspects, the present invention relates to a network comprising a plurality of interconnected core circuits (10) particularly arranged on several units (6), wherein each core circuit (10) comprises: an electronic array (8, 9) comprising a plurality of computing nodes (90) and a plurality of memory circuits (80) which is configured to receive incoming events, wherein each computing node (90) is configured to generate an event comprising a data packet when incoming events received by the respective computing node (90) satisfy a pre-defined criterion, and a circuit which is configured to append destination address and additional source information, particularly source core ID, to the respective data packet, and a local first router (R1) for providing intra-core connectivity and/or delivering events to intermediate level second router (R2) for inter-core connectivity and to higher level third router (R3) for inter-unit connectivity, and a broadcast driver (7) for broadcasting incoming events to all the
    Type: Grant
    Filed: April 27, 2016
    Date of Patent: February 1, 2022
    Inventors: Saber Moradi, Giacomo Indiveri, Ning Qiao, Fabio Stefanini
  • Patent number: 11222260
    Abstract: The present disclosure includes apparatuses and methods for operating neural networks. An example apparatus includes a plurality of neural networks, wherein the plurality of neural networks are configured to receive a particular portion of data and wherein each of the plurality of neural networks are configured to operate on the particular portion of data during a particular time period to make a determination regarding a characteristic of the particular portion of data.
    Type: Grant
    Filed: March 22, 2017
    Date of Patent: January 11, 2022
    Assignee: Micron Technology, Inc.
    Inventor: Perry V. Lea
  • Patent number: 11195092
    Abstract: The present disclosure includes apparatuses and methods for operating neural networks. An example apparatus includes a plurality of neural networks, wherein the plurality of neural networks are configured to receive a particular portion of data and wherein each of the plurality of neural networks are configured to operate on the particular portion of data during a particular time period to make a determination regarding a characteristic of the particular portion of data.
    Type: Grant
    Filed: March 22, 2017
    Date of Patent: December 7, 2021
    Assignee: Micron Technology, Inc.
    Inventor: Perry V. Lea
  • Patent number: 11195096
    Abstract: Techniques that facilitate improving an efficiency of a neural network are described. In one embodiment, a system is provided that comprises a memory that stores computer-executable components and a processor that executes computer-executable components stored in the memory. In one implementation, the computer-executable components comprise an initialization component that selects an initial value of an output limit, wherein the output limit indicates a range for an output of an activation function of a neural network. The computer-executable components further comprise a training component that modifies the initial value of the output limit during training to a second value of the output limit, the second value of the output limit being provided as a parameter to the activation function. The computer-executable components further comprise an activation function component that determines the output of the activation function based on the second value of the output limit as the parameter.
    Type: Grant
    Filed: October 24, 2017
    Date of Patent: December 7, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jungwook Choi, Kailash Gopalakrishnan, Charbel Sakr, Swagath Venkataramani, Zhuo Wang
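    The trainable output limit can be sketched as a clipped activation whose upper bound is itself a parameter with a well-defined gradient. A minimal illustration; the function and parameter names are not from the patent:

    ```python
    def clipped_activation(x, limit):
        """ReLU-style activation whose output range is bounded above by a
        trainable limit parameter."""
        return max(0.0, min(x, limit))

    def grad_wrt_limit(x, limit):
        """Gradient of the output with respect to the limit: 1 where the
        input is clipped by the limit, 0 elsewhere -- the signal that lets
        training move the limit from its initial value to a learned one."""
        return 1.0 if x >= limit else 0.0
    ```

    During training, the limit would be updated alongside the network weights using `grad_wrt_limit`, trading clipping error against quantization range.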
  • Patent number: 11170291
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for computing a layer output for a convolutional neural network layer, the method comprising: receiving a plurality of activation inputs; forming a plurality of vector inputs from the plurality of activation inputs, each vector input comprising values from a distinct region within the multi-dimensional matrix; sending the plurality of vector inputs to one or more cells along a first dimension of the systolic array; generating a plurality of rotated kernel structures from each of the plurality of kernels; sending each kernel structure and each rotated kernel structure to one or more cells along a second dimension of the systolic array; causing the systolic array to generate an accumulated output based on the plurality of value inputs and the plurality of kernels; and generating the layer output from the accumulated output.
    Type: Grant
    Filed: April 24, 2020
    Date of Patent: November 9, 2021
    Assignee: Google LLC
    Inventors: Jonathan Ross, Gregory Michael Thorson
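    Functionally, the layer output the systolic array accumulates is a set of dot products between kernel-sized regions of the input and the kernel. A plain-software reference of that computation (no systolic scheduling or kernel rotation shown; names are illustrative):

    ```python
    def conv2d_valid(image, kernel):
        """Layer output as dot products between each kernel-sized region of
        the input matrix and the kernel -- the accumulation a systolic
        array performs in hardware."""
        kh, kw = len(kernel), len(kernel[0])
        out = []
        for r in range(len(image) - kh + 1):
            row = []
            for c in range(len(image[0]) - kw + 1):
                row.append(sum(image[r + i][c + j] * kernel[i][j]
                               for i in range(kh) for j in range(kw)))
            out.append(row)
        return out

    print(conv2d_valid([[1, 2], [3, 4]], [[1, 0], [0, 1]]))  # [[5]]
    ```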