Digital Neural Network Patents (Class 706/41)
  • Patent number: 12260630
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed to implement parallel architectures for neural network classifiers. An example non-transitory computer readable medium comprises instructions that, when executed, cause a machine to at least: process a first stream using first neural network blocks, the first stream based on an input image; process a second stream using second neural network blocks, the second stream based on the input image; fuse a result of the first neural network blocks and the second neural network blocks; perform average pooling on the fused result; process a fully connected layer based on the result of the average pooling; and classify the image based on the output of the fully connected layer.
    Type: Grant
    Filed: June 25, 2021
    Date of Patent: March 25, 2025
    Assignee: Intel Corporation
    Inventors: Ankit Goyal, Alexey Bochkovskiy, Vladlen Koltun
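The staged pipeline in the abstract above (two parallel streams from one input image, fusion, average pooling, a fully connected layer, then classification) can be sketched in plain Python. Everything concrete below — the block transforms, the class count, and the fully connected weights — is a hypothetical stand-in, not taken from the patent:

```python
def block(x, scale):
    # Hypothetical stand-in for a stack of neural network blocks:
    # a fixed elementwise transform of a flat feature vector.
    return [scale * v for v in x]

def classify_two_stream(image, n_classes=3):
    # Both streams start from the same input image (here, a flat list).
    s1 = block(image, 0.5)                     # first neural network blocks
    s2 = block(image, 2.0)                     # second neural network blocks
    fused = [a + b for a, b in zip(s1, s2)]    # fuse the two results
    pooled = sum(fused) / len(fused)           # average pooling
    fc_weights = [0.1, -0.3, 0.7]              # hypothetical FC layer, one weight per class
    logits = [w * pooled for w in fc_weights]
    # Classify: arg-max over the fully connected layer's output.
    return max(range(n_classes), key=lambda c: logits[c])

label = classify_two_stream([0.2, 0.4, 0.6, 0.8])
```

A real implementation would use convolutional blocks and a learned fully connected layer; the sketch only mirrors the data flow the claim describes.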
  • Patent number: 12210952
    Abstract: A reorganizable neural network computing device is provided. The computing device includes a data processing array unit including a plurality of operators disposed at locations corresponding to a row and a column. One or more chaining paths which transfer the first input data from the operator of the first row of the data processing array to the operator of the second row are optionally formed. The plurality of first data input processors of the computing device transfer the first input data for a layer of the neural network to the operators along rows of the data processing array unit, and the plurality of second data input processors of the computing device transfer the second input data to the operators along the columns of the data processing array.
    Type: Grant
    Filed: November 27, 2018
    Date of Patent: January 28, 2025
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Young-Su Kwon, Chan Kim, Hyun Mi Kim, Jeongmin Yang, Chun-Gi Lyuh, Jaehoon Chung, Yong Cheol Peter Cho
  • Patent number: 12211582
    Abstract: An in-memory computation (IMC) circuit includes a memory array formed by memory cells arranged in a row-by-column matrix. Computational weights for an IMC operation are stored in the memory cells. Each column includes a bit line connected to the memory cells. A switching circuit is connected between each bit line and a corresponding column output. The switching circuit is controlled to turn on to generate an analog signal dependent on the computational weight, for a time duration controlled by the coefficient data signal. A column combining circuit combines (by addition and/or subtraction) and integrates the analog signals at the column outputs of the biasing circuits. The addition/subtraction depends on one or more of a sign of the coefficient data and a sign of the computational weight, and may further implement a binary weighting function.
    Type: Grant
    Filed: April 12, 2022
    Date of Patent: January 28, 2025
    Assignee: STMicroelectronics S.r.l.
    Inventors: Marco Pasotti, Marcella Carissimi, Alessio Antolini, Eleonora Franchi Scarselli, Antonio Gnudi, Andrea Lico
  • Patent number: 12205017
    Abstract: Disclosed are a method and apparatus that can improve the defect tolerance of a hardware-based neural network. In one embodiment, a method for performing a calculation of values on first neurons of a first layer in a neural network includes: receiving a first pattern of a memory cell array; determining a second pattern of the memory cell array according to a third pattern; determining at least one pair of columns of the memory cell array according to the first pattern and the second pattern; switching input data of two columns of each of the at least one pair of columns of the memory cell array; and switching output data of the two columns in each of the at least one pair of columns of the memory cell array so as to determine the values on the first neurons of the first layer.
    Type: Grant
    Filed: August 8, 2023
    Date of Patent: January 21, 2025
    Assignee: Taiwan Semiconductor Manufacturing Co., Ltd.
    Inventors: Win-San Khwa, Yu-Der Chih, Yi-Chun Shih, Chien-Yin Liu
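The column-pair switching above can be illustrated with a small sketch. The interpretation is an assumption on my part: each column of a weight matrix computes one neuron value, and "switching input data" and "switching output data" of a column pair are modeled as remapping the columns and then swapping the outputs back, so the first-layer neuron values are unchanged (which is what lets a defective column be avoided):

```python
def column_dot(weights, x):
    # Each column of `weights` computes one neuron value: dot(column, x).
    rows, cols = len(weights), len(weights[0])
    return [sum(weights[r][c] * x[r] for r in range(rows)) for c in range(cols)]

def swapped_compute(weights, x, pair):
    # Hypothetical defect-avoidance scheme: route the data of column i
    # through column j (and vice versa), then swap the outputs back so
    # the computed neuron values are unchanged.
    i, j = pair
    remapped = [row[:] for row in weights]
    for row in remapped:
        row[i], row[j] = row[j], row[i]     # "switch input data" of the pair
    out = column_dot(remapped, x)
    out[i], out[j] = out[j], out[i]         # "switch output data" back
    return out

W = [[1, 2, 3], [4, 5, 6]]
x = [1.0, 0.5]
assert swapped_compute(W, x, (0, 2)) == column_dot(W, x)
```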
  • Patent number: 12190079
    Abstract: A semiconductor device having a novel structure is provided. The semiconductor device includes a plurality of operation circuits that can switch different kinds of operation processing; a plurality of switch circuits that can switch a connection state between the operation circuits; and a controller. The operation circuit includes a first memory that stores data corresponding to a weight parameter used in the plurality of kinds of operation processing. The operation circuit executes a product-sum operation by switching weight data in accordance with a context. The switch circuit includes a second memory that stores data for switching a plurality of connection states in response to switching of a second context signal. The controller generates a second context signal on the basis of a first context signal. The amount of data stored in the second memory can be smaller than the amount of data stored in the first memory in the operation circuit.
    Type: Grant
    Filed: April 8, 2022
    Date of Patent: January 7, 2025
    Assignee: Semiconductor Energy Laboratory Co., Ltd.
    Inventors: Munehiro Kozuma, Takeshi Aoki, Seiichi Yoneda, Yoshiyuki Kurokawa
  • Patent number: 12190591
    Abstract: A method for extracting an oil storage tank based on a high-spatial-resolution remote sensing image is provided, including: acquiring an oil storage tank sample, and randomly dividing the oil storage tank sample into a training set and a testing set; building an oil storage tank extraction model based on a Res2-Unet model structure, wherein the Res2-Unet is a deep learning network based on a UNet semantic segmentation structure, and a Res2Net convolution block is configured to change a feature interlayer learning to a granular learning and is arranged in a residual mode; and performing a precision verification on the testing set.
    Type: Grant
    Filed: April 19, 2022
    Date of Patent: January 7, 2025
    Assignee: Aerospace Information Research Institute, Chinese Academy of Sciences
    Inventors: Bo Yu, Yu Wang, Fang Chen, Lei Wang
  • Patent number: 12169769
    Abstract: Systems and methods for performing a quantization of artificial neural networks (ANNs) are provided. An example method may include receiving a description of an ANN and sets of inputs to neurons of the ANN, the description including sets of weights of the inputs, the weights being of a first data type, determining a first interval of the first data type to be mapped to a second interval of a second data type; performing computations of sums of products of the weights and the inputs to obtain a set of sum results, wherein the computations are performed using at least one number within the second interval, the number being a result of mapping of a number of the first interval to a number of the second interval, determining a measure of saturations in sum results, and adjusting, based on the measure of saturations, one of the first and second intervals.
    Type: Grant
    Filed: January 20, 2020
    Date of Patent: December 17, 2024
    Assignee: MIPSOLOGY SAS
    Inventors: Benoit Chappet de Vangel, Gabriel Gouvine
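The calibration loop in the abstract — map a first (float) interval onto a second (integer) interval, measure saturations in the sum results, and adjust an interval accordingly — can be sketched as follows. The int8-style target range, the saturation threshold, and the doubling policy are all hypothetical choices, not the patent's:

```python
def quantize_sums(weights, inputs, clip):
    # Map floats in [-clip, clip] (the first interval) to values in
    # [-127, 127] (the second interval), saturating anything outside.
    def q(v):
        v = max(-clip, min(clip, v))
        return round(v / clip * 127)
    saturations = sum(1 for row in weights for v in row if abs(v) > clip)
    sums = [sum(q(w) * q(x) for w, x in zip(row, inputs)) for row in weights]
    return sums, saturations

def calibrate(weights, inputs, clip=1.0, max_sat_ratio=0.05):
    # Adjust the clipping interval until the measured saturation
    # ratio falls below a (hypothetical) threshold.
    n = sum(len(row) for row in weights)
    while True:
        _, sat = quantize_sums(weights, inputs, clip)
        if sat / n <= max_sat_ratio:
            return clip
        clip *= 2.0   # widen the first interval

clip = calibrate([[0.5, 3.0], [0.2, -2.5]], [1.0, 1.0])
```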
  • Patent number: 12165042
    Abstract: Neural network hardware acceleration data parallelism is performed by an integrated circuit including a plurality of memory banks, each memory bank among the plurality of memory banks configured to store values and to transmit stored values, a plurality of computation units, each computation unit among the plurality of computation units including one of a channel pipeline and a multiply-and-accumulate (MAC) element configured to perform a mathematical operation on an input data value and a weight value to produce a resultant data value, and a computation controller configured to cause a value transmission to be received by more than one computation unit or memory bank.
    Type: Grant
    Filed: April 13, 2023
    Date of Patent: December 10, 2024
    Assignee: EDGECORTIX INC.
    Inventors: Nikolay Nez, Oleg Khavin, Tanvir Ahmed, Jens Huthmann, Sakyasingha Dasgupta
  • Patent number: 12156221
    Abstract: A method of wireless communication by a transmitting device transforms a transmit waveform by an encoder neural network to control power amplifier (PA) operation with respect to non-linearities. The method also transmits the transformed transmit waveform across a propagation channel. A method of wireless communication by a receiving device receives a waveform transformed by an encoder neural network. The method also recovers, with a decoder neural network, the encoder input symbols from the received waveform. A transmitting device for wireless communication calculates distortion error based on a non-distorted digital transmit waveform and a distorted digital transmit waveform. The transmitting device also compresses the distortion error with an encoder neural network of an auto-encoder. The transmitting device transmits to a receiving device the compressed distortion error to compensate for power amplifier (PA) non-linearity.
    Type: Grant
    Filed: February 22, 2021
    Date of Patent: November 26, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: June Namgoong, Taesang Yoo, Naga Bhushan, Krishna Kiran Mukkavilli, Tingfang Ji
  • Patent number: 12153930
    Abstract: A processing device is provided which comprises memory configured to store data and a processor configured to execute a forward activation of the neural network using a low precision floating point (FP) format, scale up values of numbers represented by the low precision FP format and process the scaled up values of the numbers as non-zero values for the numbers. The processor is configured to scale up the values of one or more numbers, via scaling parameters, to a scaled up value equal to or greater than a floor of a dynamic range of the low precision FP format. The scaling parameters are, for example, static parameters or alternatively, parameters determined during execution of the neural network.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: November 26, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Hai Xiao
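The scale-up idea above — lift small values above the floor of the low-precision format's dynamic range so they are processed as non-zero — can be shown with a toy model. The floor value and scaling parameter below are invented for illustration; a real low-precision FP format has more structure than a simple flush-to-zero cutoff:

```python
FLOOR = 2.0 ** -14   # hypothetical smallest representable magnitude

def low_precision(v):
    # Values whose magnitude falls below the dynamic-range floor of the
    # low-precision format are flushed to zero.
    return 0.0 if 0 < abs(v) < FLOOR else v

def scaled_forward(activations, scale=2.0 ** 16):
    # Scale up so small values stay non-zero in the low-precision format,
    # run the (identity) forward step, then scale back down.
    return [low_precision(a * scale) / scale for a in activations]

tiny = 2.0 ** -20
assert low_precision(tiny) == 0.0              # lost without scaling
assert scaled_forward([tiny]) == [tiny]        # preserved with scaling
```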
  • Patent number: 12136031
    Abstract: A method of flattening channel data of an input feature map in an inference system includes retrieving pixel values of a channel of a plurality of channels of the input feature map from a memory and storing the pixel values in a buffer, extracting first values of a first region having a first size from among the pixel values stored in the buffer, the first region corresponding to an overlap region of a kernel of the inference system with channel data of the input feature map, rearranging second values corresponding to the overlap region of the kernel from among the first values in the first region, and identifying a first group of consecutive values from among the rearranged second values for supplying to a first dot-product circuit of the inference system.
    Type: Grant
    Filed: May 18, 2023
    Date of Patent: November 5, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Ali Shafiee Ardestani, Joseph Hassoun
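The flattening described above — extract each overlap region of the kernel with the channel data and rearrange it into a consecutive run of values for a dot-product circuit — is essentially an im2col step. A minimal sketch, assuming row-major rearrangement and a square kernel with stride 1:

```python
def flatten_channel(channel, k):
    # Extract each k-by-k overlap region of the kernel with the channel
    # data and rearrange it into one consecutive run of values, ready to
    # feed a dot-product circuit.
    h, w = len(channel), len(channel[0])
    runs = []
    for r in range(h - k + 1):
        for c in range(w - k + 1):
            run = [channel[r + i][c + j] for i in range(k) for j in range(k)]
            runs.append(run)
    return runs

chan = [[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]]
runs = flatten_channel(chan, 2)
# First overlap region (top-left 2x2) flattened row-major:
assert runs[0] == [1, 2, 4, 5]
assert len(runs) == 4
```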
  • Patent number: 12125563
    Abstract: A system for predicting subject enrollment for a study includes a time-to-first-enrollment (TTFE) model and a first-enrollment-to-last-enrollment (FELE) model for each site in the study. The TTFE model includes a Gaussian distribution with a generalized linear mixed effects model solved with maximum likelihood point estimation or with Bayesian regression, and the FELE model includes a negative binomial distribution with a generalized linear mixed effects model solved with maximum likelihood point estimation or with Bayesian regression estimation.
    Type: Grant
    Filed: October 4, 2022
    Date of Patent: October 22, 2024
    Assignee: MEDIDATA SOLUTIONS, INC.
    Inventors: Hrishikesh Karvir, Fanyi Zhang, Jingshu Liu, Michael Elashoff, Christopher Bound
  • Patent number: 12086567
    Abstract: A natural result of the different quantities of inputs and outputs in computational elements such as logic gates is a triangular shape when embedded in a material. The duplication of this shape while reducing empty space produces a curved structure, which taken to its ultimate expression is spherical. The inputs of the gates create a surface which is larger than the surface created by the outputs. In its spherical expression the input surface is the outside surface of the sphere, while the output is an inside surface of this sphere. Computation occurs as entropy transport from the outside surface, through layers, to an inside surface. Current digital logic designs are often composed of two-input, one output designs. Several logic functions including NAND, NOR, AND, OR, XNOR, XOR are characterized by multiple inputs and one output. Other types of computational functions such as those of neurons and artificial neurons also may take multiple inputs and fewer or singular outputs.
    Type: Grant
    Filed: March 14, 2022
    Date of Patent: September 10, 2024
    Inventor: Jesse Forrest Fabian
  • Patent number: 12067480
    Abstract: A signal processing method and apparatus, where the apparatus includes an input interface configured to receive an input signal matrix and a weight matrix, a processor configured to interleave the input signal matrix to obtain an interleaved signal matrix, partition the interleaved signal matrix, interleave the weight matrix to obtain an interleaved weight matrix, process the interleaved weight matrix to obtain a plurality of sparsified partitioned weight matrices, perform matrix multiplication on the sparsified partitioned weight matrices and a plurality of partitioned signal matrices to obtain a plurality of matrix multiplication results, and an output interface configured to output a signal processing result.
    Type: Grant
    Filed: January 27, 2023
    Date of Patent: August 20, 2024
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventor: Ruosheng Xu
  • Patent number: 12061968
    Abstract: A computer-implemented method that includes receiving, by a processing unit, an instruction that specifies data values for performing a tensor computation. In response to receiving the instruction, the method may include, performing, by the processing unit, the tensor computation by executing a loop nest comprising a plurality of loops, wherein a structure of the loop nest is defined based on one or more of the data values of the instruction. The tensor computation can be at least a portion of a computation of a neural network layer. The data values specified by the instruction may comprise a value that specifies a type of the neural network layer, and the structure of the loop nest can be defined at least in part by the type of the neural network layer.
    Type: Grant
    Filed: June 21, 2022
    Date of Patent: August 13, 2024
    Assignee: Google LLC
    Inventors: Ravi Narayanaswami, Dong Hyuk Woo, Olivier Temam, Harshit Khaitan
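The key idea above — the instruction's data values, including the layer type, define the structure of the loop nest that performs the tensor computation — can be sketched in software. The layer-type names and instruction field layout below are hypothetical:

```python
def run_loop_nest(instr, data):
    # The instruction's data values pick the loop-nest structure: a
    # "fully_connected" layer gets a nest over outputs and inputs, a
    # "conv" layer a nest over output positions and kernel taps.
    if instr["layer_type"] == "fully_connected":
        x, w = data
        return [sum(w[o][i] * x[i] for i in range(len(x)))
                for o in range(len(w))]
    if instr["layer_type"] == "conv":
        x, w = data            # 1-D signal and 1-D kernel
        k = len(w)
        return [sum(w[t] * x[p + t] for t in range(k))
                for p in range(len(x) - k + 1)]
    raise ValueError("unknown layer type")

fc = run_loop_nest({"layer_type": "fully_connected"}, ([1, 2], [[3, 4]]))
cv = run_loop_nest({"layer_type": "conv"}, ([1, 2, 3], [1, 1]))
```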
  • Patent number: 12056602
    Abstract: A circuit structure for implementing a multilayer artificial neural network, the circuit comprising: a plurality of memristors implementing a synaptic grid array, the memristors storing weights of the network; and a calculation and control module configured to calculate the value of weight adjustments within the network.
    Type: Grant
    Filed: September 26, 2020
    Date of Patent: August 6, 2024
    Assignee: Qatar Foundation for Education, Science, and Community Development
    Inventors: Yin Yang, Shiping Wen, Tingwen Huang
  • Patent number: 12050913
    Abstract: A processing core for the efficient execution of a directed graph is disclosed. The processing core includes a memory and a first and a second data tile stored in the memory. The first and second data tiles include a first and a second set of data elements stored contiguously in the memory. The processing core also includes metadata relationally stored with the first data tile in the memory. The processing core also includes an execution engine, a control unit, and an instruction. Execution of the instruction uses the execution engine, a first data element in the first set of data elements, and a second data element in the second set of data elements. The control unit conditions execution of the instruction using the metadata. A standard execution of the instruction generates a standard output. A conditional execution of the instruction operation generates a conditionally executed output.
    Type: Grant
    Filed: October 26, 2020
    Date of Patent: July 30, 2024
    Assignee: Tenstorrent Inc.
    Inventors: Ljubisa Bajic, Milos Trajkovic, Ivan Hamer
  • Patent number: 12040821
    Abstract: Systems and methods for processing data for a neural network are described. The system comprises non-transitory memory configured to receive data bits defining a kernel of weights, the data bits being suitable for processing input data; and a data processing unit, configured to: receive bits defining a kernel of weights for the neural network, the kernel of weights comprising one or more non-zero value weights and one or more zero-valued weights; generate a set of mask bits, a position of each bit in the set of mask bits corresponds to a position within the kernel of weights and the value of each bit indicates whether a weight in the corresponding position is a zero-valued weight or a non-zero value weight; and transmit the non-zero value weights and the set of mask bits for storage, the non-zero value weights and the set of mask bits represent the kernel of weights.
    Type: Grant
    Filed: August 3, 2022
    Date of Patent: July 16, 2024
    Assignee: Arm Limited
    Inventor: John Wakefield Brothers, III
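The mask-bit representation above — one bit per kernel position marking zero versus non-zero, stored alongside only the non-zero weight values — can be sketched directly. The 1-D kernel layout is a simplification of the patent's kernel of weights:

```python
def compress_kernel(kernel):
    # One mask bit per kernel position: 1 marks a non-zero weight.
    # Only the non-zero values are stored alongside the mask bits.
    mask = [1 if w != 0 else 0 for w in kernel]
    values = [w for w in kernel if w != 0]
    return mask, values

def decompress_kernel(mask, values):
    # Reconstruct the full kernel of weights from mask bits and values.
    it = iter(values)
    return [next(it) if bit else 0 for bit in mask]

kernel = [0, 5, 0, 0, -2, 7, 0, 0, 1]
mask, values = compress_kernel(kernel)
assert mask == [0, 1, 0, 0, 1, 1, 0, 0, 1]
assert values == [5, -2, 7, 1]
assert decompress_kernel(mask, values) == kernel
```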
  • Patent number: 12033063
    Abstract: In an example, an apparatus comprises a plurality of execution units comprising logic, at least partially including hardware logic, to traverse a solution space, score a plurality of solutions for scheduling deep learning network execution, and select a preferred solution from the plurality of solutions to implement the deep learning network. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: February 24, 2023
    Date of Patent: July 9, 2024
    Assignee: Intel Corporation
    Inventors: Eran Ben-Avi, Neta Zmora, Guy Jacob, Lev Faivishevsky, Jeremie Dreyfuss, Tomer Bar-On, Jacob Subag, Yaniv Fais, Shira Hirsch, Orly Weisel, Zigi Walter, Yarden Oren
  • Patent number: 12026606
    Abstract: A fractal calculating device according to an embodiment of the present application is included in an integrated circuit device. The integrated circuit device includes a universal interconnect interface and other processing devices. The calculating device interacts with the other processing devices to jointly complete a user-specified calculation operation. The integrated circuit device may also comprise a storage device. The storage device is respectively connected with the calculating device and the other processing devices and is used for data storage of the calculating device and the other processing devices.
    Type: Grant
    Filed: April 26, 2020
    Date of Patent: July 2, 2024
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli Liu, Guang Jiang, Yongwei Zhao, Jun Liang
  • Patent number: 12020141
    Abstract: A deep learning apparatus for an artificial neural network (ANN) having pipeline architecture. The deep learning apparatus for an ANN simultaneously performs output value processing, corrected input data processing, corrected output value processing, weight correction, input bias correction, and output bias correction using pipeline architecture, thereby reducing calculation time for learning and reducing required memory capacity.
    Type: Grant
    Filed: August 16, 2018
    Date of Patent: June 25, 2024
    Assignee: DEEPX CO., LTD.
    Inventor: Lokwon Kim
  • Patent number: 11996883
    Abstract: A computer-implemented method for recovering one or more active sub-signals from a composite signal of a blind source is provided. The method includes: performing an offline preparing process to generate a lookup table of Continuous Wavelet Transform (CWT)-basic elements (WBEs); when receiving raw data of the composite signal of the blind source, performing a real-time process to recover the one or more active sub-signals of the composite signal according to the lookup table of WBEs by formulating the composite signal by a Realistic Adaptive Harmonic Model (RAHM); and generating analysis data comprising one or more attributes of the active sub-signals.
    Type: Grant
    Filed: August 9, 2022
    Date of Patent: May 28, 2024
    Inventor: Ka Luen Fung
  • Patent number: 11977972
    Abstract: Residual semi-recurrent neural networks (RSNN) can be configured to receive both time invariant and time variant input data to generate one or more time series predictions. The time invariant input can be processed by a multilayer perceptron of the RSNN. The output of the multilayer perceptron can be used as an initial state for a recurrent neural network unit of the RSNN. The recurrent neural network unit also receives the time variant input and processes it, together with the initial state, to generate an output. The outputs of the multilayer perceptron and the recurrent neural network unit can be combined to generate the one or more time series predictions.
    Type: Grant
    Filed: March 7, 2023
    Date of Patent: May 7, 2024
    Assignee: Sanofi
    Inventors: Qi Tang, Youran Qi
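The RSNN data flow above — a multilayer perceptron maps the time invariant input to the recurrent unit's initial state, the recurrent unit consumes the time variant series, and the two outputs are combined — can be sketched with a single hidden unit. All weights and the tanh cell below are hypothetical illustration values, not the patent's architecture:

```python
import math

def mlp(static_features, w):
    # Multilayer perceptron (one layer here) maps the time-invariant
    # input to the RNN unit's initial hidden state.
    return math.tanh(sum(f * wi for f, wi in zip(static_features, w)))

def rsnn_predict(static_features, series):
    # One-unit sketch: the MLP output seeds the hidden state, the
    # recurrent unit consumes the time-variant series, and the two
    # outputs are combined (residual-style) at each time step.
    h0 = mlp(static_features, [0.5, -0.25])   # hypothetical MLP weights
    h, preds = h0, []
    for x in series:
        h = math.tanh(0.8 * h + 0.6 * x)      # recurrent unit
        preds.append(h + h0)                  # combine MLP and RNN outputs
    return preds

preds = rsnn_predict([1.0, 2.0], [0.1, 0.2, 0.3])
assert len(preds) == 3
```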
  • Patent number: 11954582
    Abstract: Disclosed is a neural network accelerator including a first bit operator generating a first multiplication result by performing multiplication on first feature bits of input feature data and first weight bits of weight data, a second bit operator generating a second multiplication result by performing multiplication on second feature bits of the input feature data and second weight bits of the weight data, an adder generating an addition result by performing addition based on the first multiplication result and the second multiplication result, a shifter shifting a number of digits of the addition result depending on a shift value to generate a shifted addition result, and an accumulator generating output feature data based on the shifted addition result.
    Type: Grant
    Filed: December 21, 2022
    Date of Patent: April 9, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sungju Ryu, Hyungjun Kim, Jae-Joon Kim
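The bit-operator/shifter/accumulator structure above can be modeled in software. Assuming (as an illustration, not from the patent) that each 8-bit operand is split into 4-bit groups, each bit operator multiplies one pair of groups, the shifter aligns the digits of each partial product, and the accumulator sums them into the full product:

```python
def split_nibbles(v):
    # Split an 8-bit value into its high and low 4-bit groups.
    return v >> 4, v & 0xF

def bit_serial_multiply(feature, weight):
    # Each partial product comes from one pair of 4-bit slices; a shifter
    # aligns its digits and an accumulator sums the results.
    fh, fl = split_nibbles(feature)
    wh, wl = split_nibbles(weight)
    partials = [(fh * wh, 8), (fh * wl, 4), (fl * wh, 4), (fl * wl, 0)]
    acc = 0
    for product, shift in partials:
        acc += product << shift     # shift the number of digits, then accumulate
    return acc

assert bit_serial_multiply(200, 123) == 200 * 123
```

Splitting the multiply this way lets narrow hardware multipliers cover wide operands, which is the point of the accelerator's adder/shifter/accumulator arrangement.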
  • Patent number: 11947873
    Abstract: An exemplary method for identifying media may include receiving user input associated with a request for media, where that user input includes unstructured natural language speech including one or more words; identifying at least one context associated with the user input; causing a search for the media based on the at least one context and the user input; determining, based on the at least one context and the user input, at least one media item that satisfies the request; and in accordance with a determination that the at least one media item satisfies the request, obtaining the at least one media item.
    Type: Grant
    Filed: April 9, 2021
    Date of Patent: April 2, 2024
    Assignee: Apple Inc.
    Inventors: Ryan M. Orr, Daniel J. Mandel, Andrew J. Sinesio, Connor J. Barnett
  • Patent number: 11943364
    Abstract: In one embodiment, a set of feature vectors can be derived from any biometric data, and then, using a deep neural network (“DNN”) on those one-way homomorphic encryptions (i.e., each biometric's feature vector), an authentication system can determine matches or execute searches on encrypted data. Each biometric's feature vector can then be stored and/or used in conjunction with respective classifications, for use in subsequent comparisons without fear of compromising the original biometric data. In various embodiments, the original biometric data is discarded responsive to generating the encrypted values. In another embodiment, the homomorphic encryption enables computations and comparisons on cypher text without decryption of the encrypted feature vectors. Security of such privacy-enabled biometrics can be increased by implementing an assurance factor (e.g., liveness) to establish that a submitted biometric has not been spoofed or faked.
    Type: Grant
    Filed: February 28, 2022
    Date of Patent: March 26, 2024
    Assignee: Private Identity LLC
    Inventor: Scott Edward Streit
  • Patent number: 11893271
    Abstract: A computing-in-memory circuit includes a Resistive Random Access Memory (RRAM) array and a peripheral circuit. The RRAM array comprises a plurality of memory cells arranged in an array pattern, and each memory cell is configured to store a data of L bits, L being an integer not less than 2. The peripheral circuit is configured to, in a storage mode, write more than one convolution kernel into the RRAM array, and in a computation mode, input elements that need to be convolved in a pixel matrix into the RRAM array and read a current of each column of memory cells, wherein each column of memory cells correspondingly stores one convolution kernel, one element of the convolution kernel is correspondingly stored in one memory cell, and one element of the pixel matrix is correspondingly input into a word line to which a row of memory cells is connected.
    Type: Grant
    Filed: July 23, 2020
    Date of Patent: February 6, 2024
    Assignee: INSTITUTE OF MICROELECTRONICS OF THE CHINESE ACADEMY OF SCIENCES
    Inventors: Feng Zhang, Renjun Song
  • Patent number: 11868874
    Abstract: A 2D array-based neuromorphic processor includes: axon circuits each being configured to receive a first input corresponding to one bit from among bits indicating n-bit activation; first direction lines extending in a first direction from the axon circuits; second direction lines intersecting the first direction lines; synapse circuits disposed at intersections of the first direction lines and the second direction lines, and each being configured to store a second input corresponding to one bit from among bits indicating an m-bit weight and to output operation values of the first input and the second input; and neuron circuits connected to the first or second direction lines, each of the neuron circuits being configured to receive an operation value output from at least one of the synapse circuits, based on time information assigned individually to the synapse circuits, and to perform an arithmetic operation by using the operation values.
    Type: Grant
    Filed: March 10, 2023
    Date of Patent: January 9, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sungho Kim, Cheheung Kim, Jaeho Lee
  • Patent number: 11868870
    Abstract: A neuromorphic apparatus configured to process a multi-bit neuromorphic operation including a single axon circuit, a single synaptic circuit, a single neuron circuit, and a controller. The single axon circuit is configured to receive, as a first input, an i-th bit of an n-bit axon. The single synaptic circuit is configured to store, as a second input, a j-th bit of an m-bit synaptic weight and output a synaptic operation value between the first input and the second input. The single neuron circuit is configured to obtain each bit value of a multi-bit neuromorphic operation result between the n-bit axon and the m-bit synaptic weight, based on the output synaptic operation value. The controller is configured to respectively determine the i-th bit and the j-th bit to be sequentially assigned for each time period of different time periods to the single axon circuit and the single synaptic circuit.
    Type: Grant
    Filed: August 30, 2019
    Date of Patent: January 9, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sungho Kim, Cheheung Kim, Jaeho Lee
  • Patent number: 11853880
    Abstract: One aspect of this description relates to a convolutional neural network (CNN). The CNN includes a memory cell array including a plurality of memory cells. Each memory cell includes at least one first capacitive element of a plurality of first capacitive elements. Each memory cell is configured to multiply a weight bit and an input bit to generate a product. The at least one first capacitive element is enabled when the product satisfies a predetermined threshold. The CNN includes a reference cell array including a plurality of second capacitive elements. The CNN includes a memory controller configured to compare a first signal associated with the plurality of first capacitive elements with a second signal associated with at least one second capacitive element of the plurality of second capacitive elements, and, based on the comparison, determine whether the at least one first capacitive element is enabled.
    Type: Grant
    Filed: August 25, 2020
    Date of Patent: December 26, 2023
    Assignee: TAIWAN SEMICONDUCTOR MANUFACTURING COMPANY LIMITED
    Inventors: Jaw-Juinn Horng, Szu-Chun Tsao
  • Patent number: 11853888
    Abstract: A processor-implemented method of performing convolution operations in a neural network includes generating a plurality of first sub-bit groups and a plurality of second sub-bit groups, respectively from at least one pixel value of an input feature map and at least one predetermined weight, performing a convolution operation on a first pair that includes a first sub-bit group including a most significant bit (MSB) of the at least one pixel value and a second sub-bit group including an MSB of the at least one predetermined weight, based on the plurality of second sub-bit groups, obtaining a maximum value of a sum of results for convolution operations of remaining pairs excepting the first pair, and based on a result of the convolution operation on the first pair and the maximum value, determining whether to perform the convolution operations of the remaining pairs.
    Type: Grant
    Filed: December 28, 2022
    Date of Patent: December 26, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Joonho Song, Namjoon Kim, Sehwan Lee, Deokjin Joo
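The early-termination scheme above — convolve the MSB-containing sub-bit groups first, bound the maximum the remaining pairs could contribute, and skip them when that maximum cannot change the decision — can be sketched as follows. The 4-bit grouping and the use of a plain threshold as the decision rule are simplifying assumptions:

```python
def maybe_convolve(pixels, weights, threshold):
    # Split each 8-bit value into a high nibble (containing the MSB) and
    # a low nibble. Convolve the high-nibble pair first, then bound what
    # the remaining sub-bit pairs could possibly add; skip them when even
    # that maximum cannot reach `threshold`.
    hi = lambda v: v >> 4
    msb_sum = sum(hi(p) * hi(w) for p, w in zip(pixels, weights)) << 8
    # Upper bound on the remaining pairs, using only the high nibbles:
    # rest = ph*wl*16 + pl*wh*16 + pl*wl <= ph*15*16 + 15*wh*16 + 15*15.
    bound = sum(hi(p) * 240 + hi(w) * 240 + 225
                for p, w in zip(pixels, weights))
    if msb_sum + bound < threshold:
        return None                      # remaining convolutions skipped
    return sum(p * w for p, w in zip(pixels, weights))

assert maybe_convolve([200, 100], [10, 20], 10000) is None
assert maybe_convolve([200, 100], [10, 20], 3000) == 4000
```

Because the bound is computed from the MSB groups alone, the skip decision costs far less than the convolutions it avoids.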
  • Patent number: 11836608
    Abstract: Techniques and systems are provided for implementing a convolutional neural network. One or more convolution accelerators are provided that each include a feature line buffer memory, a kernel buffer memory, and a plurality of multiply-accumulate (MAC) circuits arranged to multiply and accumulate data. In a first operational mode the convolutional accelerator stores feature data in the feature line buffer memory and stores kernel data in the kernel data buffer memory. In a second mode of operation, the convolutional accelerator stores kernel decompression tables in the feature line buffer memory.
    Type: Grant
    Filed: November 18, 2022
    Date of Patent: December 5, 2023
    Assignees: STMICROELECTRONICS S.r.l., STMicroelectronics International N.V.
    Inventors: Thomas Boesch, Giuseppe Desoli, Surinder Pal Singh, Carmine Cappetta
  • Patent number: 11838046
    Abstract: Examples described herein include systems and methods which include wireless devices and systems with examples of full duplex compensation with a self-interference noise calculator. The self-interference noise calculator may be coupled to antennas of a wireless device and configured to generate adjusted signals that compensate self-interference. The self-interference noise calculator may include a network of processing elements configured to combine transmission signals into intermediate results according to input data and delayed versions of the intermediate results. Each set of intermediate results may be combined in the self-interference noise calculator to generate a corresponding adjusted signal. The adjusted signal is received by a corresponding wireless receiver to compensate for the self-interference noise generated by a wireless transmitter transmitting on the same frequency band as the wireless receiver is receiving.
    Type: Grant
    Filed: April 7, 2021
    Date of Patent: December 5, 2023
    Inventor: Fa-Long Luo
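The core idea, combining delayed versions of the transmit signal into an interference estimate and subtracting it at the receiver, can be sketched as a simple tapped-delay model. The tap structure and function name are assumptions for illustration; the patent's calculator is a network of processing elements:

```python
def cancel_self_interference(received, transmitted, taps):
    """Estimate self-interference as a weighted sum of delayed
    transmit samples and subtract it from the received signal."""
    adjusted = []
    for n in range(len(received)):
        est = 0.0
        for d, w in enumerate(taps):       # delayed versions of the tx signal
            if n - d >= 0:
                est += w * transmitted[n - d]
        adjusted.append(received[n] - est)  # adjusted signal for the receiver
    return adjusted
```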
  • Patent number: 11830244
    Abstract: An image recognition method and apparatus based on a systolic array, and a medium are disclosed. The method includes: converting obtained image feature information into a one-dimensional feature vector; converting an obtained weight matrix into a one-dimensional weight vector, and allocating a corresponding weight group to each node in a trained three-dimensional systolic array model; performing multiply-accumulate operations on the one-dimensional feature vector and the weight values in parallel by using the three-dimensional systolic array model, to obtain a feature value corresponding to each node, with different feature values reflecting the article categories contained in an image; and determining an article category contained in the image according to the feature value corresponding to each node and a pre-established corresponding relationship between the feature value and the article category.
    Type: Grant
    Filed: April 26, 2021
    Date of Patent: November 28, 2023
    Assignee: INSPUR SUZHOU INTELLIGENT TECHNOLOGY CO., LTD.
    Inventors: Gang Dong, Yaqian Zhao, Rengang Li, Hongbin Yang, Haiwei Liu, Dongdong Jiang
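Functionally, each node's multiply-accumulate and the feature-value-to-category lookup described above reduce to the following sketch. Names and the dictionary lookup are illustrative; the patent performs the MACs in parallel in a three-dimensional systolic array, not in sequential code:

```python
def classify_by_node_mac(feature_vector, node_weight_groups, category_map):
    """Each node multiply-accumulates the shared one-dimensional
    feature vector with its allocated weight group; the resulting
    feature value indexes the pre-established value-to-category map."""
    values = [sum(f * w for f, w in zip(feature_vector, group))
              for group in node_weight_groups]
    return [category_map.get(v) for v in values]
```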
  • Patent number: 11801636
    Abstract: A method and an apparatus for additive manufacturing pertaining to high-efficiency energy beam patterning and beam steering that effectively and efficiently utilize the source energy. In one embodiment, recycling and reuse of unwanted light include a source of multiple light patterns produced by one or more light valves, with at least one of the multiple light patterns being formed from rejected patterned light. An image relay is used to direct the multiple light patterns, and a beam routing system receives the multiple light patterns and respectively directs them toward defined areas on a powder bed.
    Type: Grant
    Filed: April 27, 2021
    Date of Patent: October 31, 2023
    Assignee: Seurat Technologies, Inc.
    Inventors: James A. DeMuth, Francis L. Leard, Erik Toomre
  • Patent number: 11797831
    Abstract: Disclosed are a method and an apparatus that can improve the defect tolerance of a hardware-based neural network. In one embodiment, a method for performing a calculation of values on first neurons of a first layer in a neural network includes: receiving a first pattern of a memory cell array; determining a second pattern of the memory cell array according to a third pattern; determining at least one pair of columns of the memory cell array according to the first pattern and the second pattern; switching input data of two columns of each of the at least one pair of columns of the memory cell array; and switching output data of the two columns in each of the at least one pair of columns of the memory cell array so as to determine the values on the first neurons of the first layer.
    Type: Grant
    Filed: August 8, 2022
    Date of Patent: October 24, 2023
    Assignee: Taiwan Semiconductor Manufacturing Co., Ltd.
    Inventors: Win-San Khwa, Yu-Der Chih, Yi-Chun Shih, Chien-Yin Liu
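A plausible software model of the input/output column switching: both the inputs and the outputs of each paired column are swapped, so the logical result is preserved while defective cells end up under different data. The permutation-based framing and all names here are assumptions for illustration:

```python
def layer_with_remapping(x_cols, logical_weights, pairs):
    """Swap paired columns on the way in, compute per-column dot
    products, then swap the outputs back to the logical ordering."""
    n = len(x_cols)
    perm = list(range(n))
    for a, b in pairs:                     # switch input data of paired columns
        perm[a], perm[b] = perm[b], perm[a]
    # physical column p receives logical column perm[p]'s input and weights
    phys_out = [sum(x * w for x, w in zip(x_cols[perm[p]], logical_weights[perm[p]]))
                for p in range(n)]
    out = [0] * n
    for p in range(n):                     # switch output data back
        out[perm[p]] = phys_out[p]
    return out
```

Swapping a pair at both the input and the output leaves the layer's logical result unchanged, which is why the technique can route around defective columns.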
  • Patent number: 11783187
    Abstract: An approach is provided for progressive training of long-lived, evolving machine learning architectures. The approach involves, for example, determining alternative paths for the evolution of the machine learning model from a first architecture to a second architecture. The approach also involves determining one or more migration step alternatives in the alternative paths. The migration steps, for instance, include architecture options for the evolution of the machine learning model. The approach further involves processing data using the options to determine respective model performance data. The approach further involves selecting a migration step from the one or more migration step alternatives based on the respective model performance data to control a rate of migration steps over a rate of training in the evolution of the machine learning model. The approach further involves initiating a deployment of the selected migration step to the machine learning model.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: October 10, 2023
    Assignee: HERE GLOBAL B.V.
    Inventor: Tero Juhani Keski-Valkama
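The selection step can be sketched as evaluating each migration alternative and keeping the best performer. The scoring callback and names are hypothetical, not from the abstract:

```python
def choose_migration_step(step_alternatives, evaluate):
    """Evaluate the model performance obtained with each candidate
    architecture option and select the best migration step."""
    scored = [(evaluate(step), step) for step in step_alternatives]
    best_score, best_step = max(scored)
    return best_step
```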
  • Patent number: 11775612
    Abstract: In order to provide a learning data generating apparatus that is able to efficiently suppress erroneous detections, the learning data generating apparatus includes a data acquisition unit configured to acquire learning data including teacher data, and a generation unit configured to generate generated learning data based on the learning data and a generating condition, wherein the generation unit converts teacher data of a positive instance into teacher data of a negative instance according to a preset rule when generating the generated learning data.
    Type: Grant
    Filed: October 28, 2021
    Date of Patent: October 3, 2023
    Assignee: Canon Kabushiki Kaisha
    Inventors: Shinji Yamamoto, Takato Kimura
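The positive-to-negative conversion can be sketched as a filter over labeled samples. The label encoding (1 = positive, 0 = negative) and the rule callback are assumptions for illustration:

```python
def generate_learning_data(samples, rule):
    """Copy the learning data, converting positive-instance teacher
    labels to negative when the preset rule matches."""
    generated = []
    for data, label in samples:
        if label == 1 and rule(data):   # preset rule flips a positive instance
            generated.append((data, 0))
        else:
            generated.append((data, label))
    return generated
```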
  • Patent number: 11769053
    Abstract: The present disclosure includes apparatuses and methods for operating neural networks. An example apparatus includes a plurality of neural networks, wherein the plurality of neural networks are configured to receive a particular portion of data and wherein each of the plurality of neural networks is configured to operate on the particular portion of data during a particular time period to make a determination regarding a characteristic of the particular portion of data.
    Type: Grant
    Filed: December 6, 2021
    Date of Patent: September 26, 2023
    Assignee: Micron Technology, Inc.
    Inventor: Perry V. Lea
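One way such parallel determinations might be combined is a majority vote. Note that the combination rule below is an assumption; the abstract only states that each network makes a determination on the same portion of data:

```python
from collections import Counter

def parallel_determination(networks, data):
    """Run every network on the same portion of data and return the
    majority determination (combination rule assumed, not claimed)."""
    votes = [net(data) for net in networks]
    return Counter(votes).most_common(1)[0][0]
```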
  • Patent number: 11733970
    Abstract: An artificial intelligence system includes a neural network layer including an arithmetic operation circuit that performs an arithmetic operation of a sigmoid function. The arithmetic operation circuit includes a first circuit configured to perform an exponent arithmetic operation using a Napier's constant e as a base and output a first calculation result when an exponent in the exponent arithmetic operation is a negative number, wherein an absolute value of the exponent is used in the exponent arithmetic operation, and a second circuit configured to subtract the first calculation result obtained by the first circuit from 1 and output the subtracted value.
    Type: Grant
    Filed: March 3, 2020
    Date of Patent: August 22, 2023
    Assignees: Kabushiki Kaisha Toshiba, Toshiba Electronic Devices & Storage Corporation
    Inventor: Masanori Nishizawa
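The arithmetic flow described, computing e raised to minus the absolute value of a negative exponent and then subtracting the result from 1, can be written directly. Only the negative-exponent path is described in the abstract, so only that path is sketched:

```python
import math

def circuit_output(x):
    """First circuit: exponent arithmetic with base e, using the
    absolute value of a negative exponent. Second circuit: subtract
    the first result from 1."""
    if x < 0:
        first = math.exp(-abs(x))   # first circuit's calculation result
        return 1.0 - first          # second circuit's subtraction
    raise ValueError("the abstract describes only the negative-exponent path")
```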
  • Patent number: 11715280
    Abstract: Disclosed are an object detection device and a control method. A method for controlling an object detection device comprises the steps of: receiving an image; dividing the received image into a predetermined number of local areas on the basis of the size of a convolutional layer of a convolutional neural network (CNN); identifying small objects at the same time by inputting a number of the divided local areas corresponding to the number of CNN channels to each of a plurality of CNN channels; sequentially repeating the identifying of the small objects for each of the remaining divided local areas; selecting MM mode or MB mode; setting an object detection target area corresponding to the number of CNN channels on the basis of the selected mode; and detecting the small objects at the same time by inputting each set object detection target area to each of the plurality of CNN channels.
    Type: Grant
    Filed: August 1, 2019
    Date of Patent: August 1, 2023
    Assignee: KYUNGPOOK NATIONAL UNIVERSITY INDUSTRY-ACADEMIC COOPERATION FOUNDATION
    Inventors: Min Young Kim, Byeong Hak Kim, Jong Hyeok Lee
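The division of an image into equally sized local areas matching the CNN input size can be sketched as plain tiling. The row-list representation and function name are illustrative:

```python
def split_into_local_areas(image, tile_h, tile_w):
    """Divide an image (list of rows) into local areas of the
    convolutional layer's input size, so batches of tiles can be
    fed to parallel CNN channels."""
    h, w = len(image), len(image[0])
    tiles = []
    for r in range(0, h, tile_h):
        for c in range(0, w, tile_w):
            tiles.append([row[c:c + tile_w] for row in image[r:r + tile_h]])
    return tiles
```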
  • Patent number: 11676043
    Abstract: A mechanism is provided in a data processing system having a processor and a memory. The memory comprises instructions which are executed by the processor to cause the processor to implement a training system for finding an optimal surface for a hierarchical classification task on an ontology. The training system receives a training data set and a hierarchical classification ontology data structure. The training system generates a neural network architecture based on the training data set and the hierarchical classification ontology data structure. The neural network architecture comprises an indicative layer, a parent tier (PT) output and a lower leaf tier (LLT) output. The training system trains the neural network architecture to classify the training data set to leaf nodes at the LLT output and parent nodes at the PT output. The indicative layer in the neural network architecture determines a surface that passes through each path from a root to a leaf node in the hierarchical ontology data structure.
    Type: Grant
    Filed: March 4, 2019
    Date of Patent: June 13, 2023
    Assignee: International Business Machines Corporation
    Inventors: Pathirage Dinindu Sujan Udayanga Perera, Orna Raz, Ramani Routray, Vivek Krishnamurthy, Sheng Hua Bao, Eitan D. Farchi
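The relationship between the lower leaf tier (LLT) and parent tier (PT) outputs can be illustrated by aggregating leaf scores into their parents. The sum-based aggregation is an assumption for illustration, not taken from the abstract:

```python
def parent_tier_scores(leaf_scores, parent_of):
    """Aggregate lower-leaf-tier (LLT) scores into parent-tier (PT)
    scores by summing each leaf's score into its parent node."""
    parents = {}
    for leaf, score in leaf_scores.items():
        p = parent_of[leaf]
        parents[p] = parents.get(p, 0) + score
    return parents
```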
  • Patent number: 11675693
    Abstract: A novel and useful neural network (NN) processing core incorporating inter-device connectivity and adapted to implement artificial neural networks (ANNs). A chip-to-chip interface spreads a given ANN model across multiple devices in a seamless manner. The NN processor is constructed from self-contained computational units organized in a hierarchical architecture. The homogeneity enables simpler management and control of similar computational units, aggregated in multiple levels of hierarchy. Computational units are designed with as little overhead as possible, with additional features and capabilities aggregated at higher levels in the hierarchy. On-chip memory provides storage for content inherently required for basic operation at a particular hierarchy and is coupled with the computational resources in an optimal ratio. Lean control provides just enough signaling to manage only the operations required at a particular hierarchical level.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: June 13, 2023
    Inventors: Avi Baum, Or Danon, Hadar Zeitlin, Daniel Ciubotariu, Rami Feig
  • Patent number: 11663461
    Abstract: Instruction distribution in an array of neural network cores is provided. In various embodiments, a neural inference chip is initialized with core microcode. The chip comprises a plurality of neural cores. The core microcode is executable by the neural cores to execute a tensor operation of a neural network. The core microcode is distributed to the plurality of neural cores via an on-chip network. The core microcode is executed synchronously by the plurality of neural cores to compute a neural network layer.
    Type: Grant
    Filed: July 5, 2018
    Date of Patent: May 30, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Hartmut Penner, Dharmendra S. Modha, John V. Arthur, Andrew S. Cassidy, Rathinakumar Appuswamy, Pallab Datta, Steven K. Esser, Myron D. Flickner, Jennifer Klamo, Jun Sawada, Brian Taba
  • Patent number: 11663040
    Abstract: The computer system includes a node including a processor and a memory, and the processor and the memory serve as arithmetic operation resources. The computer system has an application program that operates using the arithmetic operation resources, and a storage controlling program that operates using the arithmetic operation resources for processing data to be inputted to and outputted from a storage device by the application program. The computer system has use resource amount information that associates operation states of the application program with the arithmetic operation resources to be used by the application program and the storage controlling program. The computer system changes allocation of the arithmetic operation resources to the application program and the storage controlling program used by the application program on the basis of an operation state of the application program and the use resource amount information.
    Type: Grant
    Filed: March 19, 2021
    Date of Patent: May 30, 2023
    Assignee: HITACHI, LTD.
    Inventors: Kohei Tatara, Yoshinori Ohira, Masakuni Agetsuma
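The allocation change can be sketched as a lookup in the use resource amount information keyed by operation state. The table contents, core counts, and names are illustrative:

```python
def reallocate_resources(operation_state, use_resource_info, total_cores):
    """Look up the resource amount associated with the application's
    current operation state and assign the remainder of the
    arithmetic operation resources to the storage controlling program."""
    app_cores = use_resource_info[operation_state]
    return {"application": app_cores,
            "storage_control": total_cores - app_cores}
```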
  • Patent number: 11663451
    Abstract: A 2D array-based neuromorphic processor includes: axon circuits each being configured to receive a first input corresponding to one bit from among bits indicating n-bit activation; first direction lines extending in a first direction from the axon circuits; second direction lines intersecting the first direction lines; synapse circuits disposed at intersections of the first direction lines and the second direction lines, and each being configured to store a second input corresponding to one bit from among bits indicating an m-bit weight and to output operation values of the first input and the second input; and neuron circuits connected to the second direction lines, each of the neuron circuits being configured to receive an operation value output from at least one of the synapse circuits, based on time information assigned individually to the synapse circuits, and to perform a multi-bit operation by using the operation values and the time information.
    Type: Grant
    Filed: February 13, 2019
    Date of Patent: May 30, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sungho Kim, Cheheung Kim, Jaeho Lee
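The multi-bit operation reconstructed from 1-bit synapse outputs can be modeled in software: each activation bit plane meets each weight bit plane, and the partial sums are weighted by their combined bit significance. The time-information-based scheduling is abstracted away here; only the arithmetic is shown:

```python
def bit_serial_mac(activations, weights, n_bits, m_bits):
    """Recover a full multi-bit dot product from 1-bit partial
    products, as in a crossbar where each axon carries one activation
    bit and each synapse stores one weight bit."""
    total = 0
    for i in range(n_bits):               # activation bit planes
        for j in range(m_bits):           # weight bit planes
            partial = sum(((a >> i) & 1) * ((w >> j) & 1)
                          for a, w in zip(activations, weights))
            total += partial << (i + j)   # weight by combined bit significance
    return total
```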
  • Patent number: 11636883
    Abstract: To provide a semiconductor device which can execute the product-sum operation. The semiconductor device includes a first memory cell, a second memory cell, and an offset circuit. First analog data is stored in the first memory cell, and reference analog data is stored in the second memory cell. The first memory cell and the second memory cell supply a first current and a second current, respectively, when a reference potential is applied as a selection signal. The offset circuit has a function of supplying a third current corresponding to a differential current between the first current and the second current. In the semiconductor device, the first memory cell and the second memory cell supply a fourth current and a fifth current, respectively, when a potential corresponding to second analog data is applied as a selection signal.
    Type: Grant
    Filed: November 2, 2020
    Date of Patent: April 25, 2023
    Assignee: Semiconductor Energy Laboratory Co., Ltd.
    Inventor: Yoshiyuki Kurokawa
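One plausible reading of the differential scheme is that the offset circuit's difference current cancels the common reference term, realizing a reference-subtracted product-sum. The formula below is an interpretation for illustration, not the patent's circuit:

```python
def differential_product_sum(stored_values, reference_value, inputs):
    """Interpretation: subtracting the reference cell's contribution
    from each first cell's contribution leaves
    sum((value - reference) * input)."""
    return sum((v - reference_value) * x for v, x in zip(stored_values, inputs))
```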
  • Patent number: 11599777
    Abstract: In an example, an apparatus comprises a plurality of execution units comprising logic, at least partially including hardware logic, to traverse a solution space, score a plurality of solutions for scheduling deep learning network execution, and select a preferred solution from the plurality of solutions to implement the deep learning network. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: April 28, 2017
    Date of Patent: March 7, 2023
    Assignee: Intel Corporation
    Inventors: Eran Ben-Avi, Neta Zmora, Guy Jacob, Lev Faivishevsky, Jeremie Dreyfuss, Tomer Bar-On, Jacob Subag, Yaniv Fais, Shira Hirsch, Orly Weisel, Zigi Walter, Yarden Oren
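The traverse-score-select loop can be sketched directly. The candidate representation and scoring function are hypothetical:

```python
def select_schedule(candidate_schedules, score):
    """Traverse the solution space, score each candidate schedule,
    and keep the best-scoring one as the preferred solution."""
    best, best_score = None, float("-inf")
    for cand in candidate_schedules:
        s = score(cand)
        if s > best_score:
            best, best_score = cand, s
    return best
```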
  • Patent number: 11580366
    Abstract: An event-driven neural network including a plurality of interconnected core circuits is provided. Each core circuit includes an electronic synapse array that has multiple digital synapses interconnecting a plurality of digital electronic neurons. A synapse interconnects an axon of a pre-synaptic neuron with a dendrite of a post-synaptic neuron. A neuron integrates input spikes and generates a spike event in response to the integrated input spikes exceeding a threshold. Each core circuit also has a scheduler that receives a spike event and delivers the spike event to a selected axon in the synapse array based on a schedule for deterministic event delivery.
    Type: Grant
    Filed: October 28, 2019
    Date of Patent: February 14, 2023
    Assignee: International Business Machines Corporation
    Inventors: Filipp Akopyan, John V. Arthur, Rajit Manohar, Paul A. Merolla, Dharmendra S. Modha, Alyosha Molnar, William P. Risk, III
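The integrate-and-fire behavior stated in the abstract, accumulating weighted input spikes and emitting a spike event when the integrated value exceeds a threshold, can be sketched as follows. The reset-to-zero behavior after firing is an assumption:

```python
def integrate_and_fire(spike_inputs, weights, threshold):
    """Integrate weighted input spikes into a membrane potential and
    record a spike event whenever the threshold is exceeded."""
    potential = 0.0
    events = []
    for t, spikes in enumerate(spike_inputs):   # one time step per row
        potential += sum(w for s, w in zip(spikes, weights) if s)
        if potential > threshold:
            events.append(t)                    # spike event at step t
            potential = 0.0                     # reset after firing (assumed)
    return events
```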
  • Patent number: 11568225
    Abstract: A signal processing method and apparatus, where the apparatus includes an input interface configured to receive an input signal matrix and a weight matrix, a processor configured to interleave the input signal matrix to obtain an interleaved signal matrix, partition the interleaved signal matrix, interleave the weight matrix to obtain an interleaved weight matrix, process the interleaved weight matrix to obtain a plurality of sparsified partitioned weight matrices, perform matrix multiplication on the sparsified partitioned weight matrices and a plurality of partitioned signal matrices to obtain a plurality of matrix multiplication results, and an output interface configured to output a signal processing result.
    Type: Grant
    Filed: December 6, 2019
    Date of Patent: January 31, 2023
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventor: Ruosheng Xu
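The interleave, sparsify, and partitioned-multiply stages can each be sketched in a few lines. The group size, threshold, and plain-Python matrix representation are illustrative; the patent applies these steps to both the signal and weight matrices before multiplying the partitions:

```python
def interleave_rows(matrix, group):
    """Reorder rows so that rows `group` apart become adjacent."""
    n = len(matrix)
    order = [i for start in range(group) for i in range(start, n, group)]
    return [matrix[i] for i in order]

def sparsify(matrix, threshold):
    """Zero out entries below the threshold to sparsify a partition."""
    return [[w if abs(w) >= threshold else 0 for w in row] for row in matrix]

def matmul(a, b):
    """Multiply a partitioned weight matrix by a partitioned signal matrix."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]
```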