Patents Examined by Tan V. Mai
  • Patent number: 11314844
    Abstract: A singular value decomposition (SVD) is computed of a first matrix to define a left matrix, a diagonal matrix, and a right matrix. The left matrix, the diagonal matrix, and the right matrix are updated using an arrowhead matrix structure defined from the diagonal matrix and by adding a next observation vector to a last row of the first matrix. The updated left matrix, the updated diagonal matrix, and the updated right matrix are updated again using a diagonal-plus-rank-one (DPR1) matrix structure defined from the updated diagonal matrix and by removing an observation vector from a first row of the first matrix. Eigenpairs of the DPR1 matrix are computed based on whether a value computed from the updated left matrix is positive or negative. The twice-updated left matrix, diagonal matrix, and right matrix are output.
    Type: Grant
    Filed: October 19, 2021
    Date of Patent: April 26, 2022
    Assignee: SAS Institute Inc.
    Inventors: Hansi Jiang, Arin Chaudhuri
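The abstract above describes incrementally updating a sliding-window SVD as one observation row is appended and the oldest row is dropped. As a point of reference only, a naive NumPy version that recomputes the factors from scratch (rather than using the patent's arrowhead/DPR1 updates; the function name `sliding_window_svd` is ours, not the patent's) might look like:

```python
import numpy as np

def sliding_window_svd(A, new_row):
    """Naive reference for a sliding-window SVD: drop the oldest (first)
    row, append the new observation as the last row, and recompute
    U, S, Vt from scratch. The patented scheme updates the existing
    factors incrementally instead of recomputing them."""
    A_new = np.vstack([A[1:], new_row])
    U, S, Vt = np.linalg.svd(A_new, full_matrices=False)
    return A_new, U, S, Vt

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
A_new, U, S, Vt = sliding_window_svd(A, rng.standard_normal(4))
```

The recomputation costs O(mn^2) per window slide, which is exactly the expense the arrowhead/DPR1 update structure is designed to avoid.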
  • Patent number: 11301545
    Abstract: Disclosed herein are a system, a method, and a device for multiply-accumulate operation. In one aspect, an input operand is received by control circuitry. In one aspect, the control circuitry determines a sparsity of the input operand, where the sparsity may indicate whether a value of the input operand has a predetermined value or not. In one aspect, the control circuitry determines a stationarity of the input operand, where the stationarity may indicate whether the value of the input operand changes over one or more clock cycles. In one aspect, the input operand is provided to multiply-accumulate circuitry as an input, according to the determined sparsity and stationarity of the input operand.
    Type: Grant
    Filed: July 11, 2019
    Date of Patent: April 12, 2022
    Assignee: FACEBOOK TECHNOLOGIES, LLC
    Inventor: Liangzhen Lai
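The sparsity/stationarity idea above can be modeled in software. A minimal sketch (a behavioral model of the idea, not the patented circuit; the function name and flag are ours) that skips zero products and flags an unchanged weight vector as stationary:

```python
def gated_mac(acc, weights, activations, prev_weights=None):
    """Behavioral model of a sparsity/stationarity-aware MAC:
    zero operands are skipped (sparsity), and a weight vector that is
    unchanged since the previous cycle is flagged as stationary, modeling
    the reuse opportunity the control circuitry exploits."""
    stationary = prev_weights is not None and weights == prev_weights
    for w, x in zip(weights, activations):
        if w == 0 or x == 0:  # sparsity: no multiply issued
            continue
        acc += w * x
    return acc, stationary
```

In hardware, the skipped multiplications translate into gated clocks or idle multiplier lanes rather than a Python `continue`, but the control decision is the same.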
  • Patent number: 11301264
    Abstract: Processing cores with the ability to suppress operations based on a contribution estimate for those operations for purposes of increasing the overall performance of the core are disclosed. Associated methods that can be conducted by such processing cores are also disclosed. One such method includes generating a reference value for a composite computation. A complete execution of the composite computation generates a precise output and requires execution of a set of component computations. The method also includes generating a component computation approximation. The method also includes evaluating the component computation approximation with the reference value. The method also includes executing a partial execution of the composite computation using the component computation approximation to produce an estimated output.
    Type: Grant
    Filed: February 11, 2020
    Date of Patent: April 12, 2022
    Assignee: Tenstorrent Inc.
    Inventors: Ljubisa Bajic, Milos Trajkovic, Ivan Hamer, Syed Gilani
  • Patent number: 11294985
    Abstract: Techniques are provided for efficient matrix multiplication using in-memory analog parallel processing, with applications for neural networks and artificial intelligence processors. A methodology implementing the techniques according to an embodiment includes storing two matrices in-memory. The first matrix is stored in transposed form such that the transposed first matrix has the same number of rows as the second matrix. The method further includes reading columns of the matrices from the memory in parallel, using disclosed bit line functional read operations and cross bit line functional read operations, which are employed to generate analog dot products between the columns. Each of the dot products corresponds to an element of the matrix multiplication product of the two matrices. In some embodiments, one of the matrices may be used to store neural network weighting factors, and the other matrix may be used to store input data to be processed by the neural network.
    Type: Grant
    Filed: October 30, 2018
    Date of Patent: April 5, 2022
    Assignee: Intel Corporation
    Inventors: Amrita Mathuriya, Sasikanth Manipatruni, Dmitri Nikonov, Ian Young, Ram Krishnamurthy
  • Patent number: 11295200
    Abstract: Some embodiments provide a method for a neural network inference circuit that executes a neural network including multiple nodes. The method loads a first set of weight values into a first set of weight value buffers, a second set of weight values into a second set of weight value buffers, a first set of input values into a first set of input value buffers, and a second set of input values into a second set of input value buffers. In a first clock cycle, the method computes a first dot product of the first set of weight values and the first set of input values. In a second clock cycle, the method computes a second dot product of the second set of weight values and the second set of input values. The method adds the first and second dot products to compute a dot product for the node.
    Type: Grant
    Filed: December 6, 2018
    Date of Patent: April 5, 2022
    Assignee: PERCEIVE CORPORATION
    Inventors: Jung Ko, Kenneth Duong, Steven L. Teig
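The two-cycle accumulation described above is straightforward to model. A minimal sketch (names are ours; the patent describes dedicated weight/input buffers, modeled here as plain lists):

```python
def node_dot_product(w1, x1, w2, x2):
    """Model of the two-cycle dot product: the first buffered
    weight/input sets are multiplied in cycle 1, the second sets in
    cycle 2, and the two partial dot products are summed to produce
    the node's full dot product."""
    cycle1 = sum(w * x for w, x in zip(w1, x1))  # first buffers
    cycle2 = sum(w * x for w, x in zip(w2, x2))  # second buffers
    return cycle1 + cycle2
```

Splitting the operands across two cycles lets a fixed-width multiplier array serve a dot product wider than the array itself.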
  • Patent number: 11294986
    Abstract: Techniques regarding an iterative energy-scaled variational quantum eigensolver process are provided. For example, one or more embodiments described herein can comprise a system, which can comprise a memory that can store computer executable components. The system can also comprise a processor, operably coupled to the memory, and that can execute the computer executable components stored in the memory. The computer executable components can comprise a read-out component that determines a ground state energy value of a quantum Hamiltonian by employing a variational quantum eigensolver (VQE) algorithm, wherein VQE algorithm utilizes a symmetry that emerges at an energy scale of the quantum Hamiltonian.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: April 5, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Antonio Mezzacapo, Richard Chen, Marco Pistoia
  • Patent number: 11281432
    Abstract: A true random number generator (TRNG) is disclosed that includes an enclosure. The enclosure encloses a radioactive source defining a radioactive source surface and a cavity separating the radioactive source from an array of cells that define an array surface with an edge. Each cell in the array comprises a detector constructed to detect electrons within the cavity from the decay of the radioactive source and constructed to produce a signal for the detected energy. A projection of the radioactive source surface onto the array surface extends beyond the edge and encompasses the array surface.
    Type: Grant
    Filed: October 28, 2021
    Date of Patent: March 22, 2022
    Assignee: RANDAEMON SP. Z O.O.
    Inventor: Jan Jakub Tatarkiewicz
  • Patent number: 11281746
    Abstract: An arithmetic operation method for a convolutional layer in a neural network includes: generating a coefficient matrix by converting a kernel used in the convolutional layer such that the coefficient matrix is associated with an input vector obtained by expanding, into one column, a feature map input to the convolutional layer; searching for non-zero elements included in the coefficient matrix; assigning multiplications of the non-zero elements included in the coefficient matrix and corresponding elements of the input vector to a plurality of calculators with each of the multiplications being handled as a unit of process, so as to level out the numbers of units of process among the calculators, each of the calculators being capable of performing a process in parallel with one another; and sequentially performing, by the calculators, the assigned multiplications, and sequentially adding, by the calculators, results of the multiplications to corresponding elements of an output vector.
    Type: Grant
    Filed: September 14, 2017
    Date of Patent: March 22, 2022
    Assignee: MITSUBISHI ELECTRIC CORPORATION
    Inventors: Susumu Tanaka, Masashi Mori, Kazushige Hashimoto
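The work-leveling step in the abstract above (distributing non-zero multiplications evenly across parallel calculators) can be sketched as follows. This is an illustration under our own naming, using a simple round-robin assignment as one way to level out the unit counts:

```python
def assign_nonzero_products(coeff_matrix, num_calculators):
    """Enumerate the non-zero (row, col, value) entries of the
    coefficient matrix and deal them round-robin across calculators,
    so each calculator receives a near-equal number of multiplications
    (each multiplication being one unit of process)."""
    nonzeros = [(i, j, v)
                for i, row in enumerate(coeff_matrix)
                for j, v in enumerate(row) if v != 0]
    buckets = [[] for _ in range(num_calculators)]
    for k, entry in enumerate(nonzeros):
        buckets[k % num_calculators].append(entry)
    return buckets
```

Each calculator would then multiply its assigned coefficients against the corresponding input-vector elements and accumulate into the output vector in parallel.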
  • Patent number: 11275998
    Abstract: The present disclosure relates generally to techniques for improving the implementation of certain operations on an integrated circuit. In particular, deep learning techniques, which may use a deep neural network (DNN) topology, may be implemented more efficiently using low-precision weights and activation values by efficiently performing down conversion of data to a lower precision and by preventing data overflow during suitable computations. Further, by more efficiently mapping multipliers to programmable logic on the integrated circuit device, the resources used by the DNN topology to perform, for example, inference tasks may be reduced, resulting in improved integrated circuit operating speeds.
    Type: Grant
    Filed: May 31, 2018
    Date of Patent: March 15, 2022
    Assignee: Intel Corporation
    Inventors: Martin Langhammer, Sudarshan Srinivasan, Gregg William Baeckler, Duncan Moss, Sasikanth Avancha, Dipankar Das
  • Patent number: 11269594
    Abstract: Adder circuits and associated methods for processing a set of at least three floating-point numbers to be added together include identifying, from among the at least three numbers, at least two numbers that have the same sign, that is, at least two numbers that are both positive or both negative. The identified at least two numbers are added together using one or more same-sign floating-point adders. A same-sign floating-point adder comprises circuitry configured to add together floating-point numbers having the same sign and does not include circuitry configured to add together numbers having different signs.
    Type: Grant
    Filed: July 20, 2020
    Date of Patent: March 8, 2022
    Assignee: Imagination Technologies Limited
    Inventors: Sam Elliott, Jonas Olof Gunnar Källén, Casper Van Benthem
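The key observation behind the abstract above is that among any three numbers, at least two must share a sign, so a cheaper same-sign adder can always handle the first addition. A small sketch of that pairing logic (the function is ours; in Python both additions use the same operator, whereas the patent's point is that the same-sign add needs simpler hardware):

```python
def add_three(a, b, c):
    """Pair two same-signed operands first (the cheap same-sign add),
    then combine the result with the remaining operand. By pigeonhole,
    two of any three numbers always share a sign (zero counted as
    non-negative)."""
    nums = [a, b, c]
    for i in range(3):
        for j in range(i + 1, 3):
            if (nums[i] >= 0) == (nums[j] >= 0):  # same sign found
                same = nums[i] + nums[j]          # same-sign adder
                rest = nums[3 - i - j]            # the leftover operand
                return same + rest                # general adder
```

A same-sign adder avoids the subtraction path (operand swap, two's-complement of mantissas, leading-zero count for renormalization), which is where the area saving comes from.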
  • Patent number: 11269973
    Abstract: Repeating patterns are identified in a matrix. Based on the identification of the repeating patterns, instructions are generated, which are executable by processing cores of a dot product engine to allocate analog multiplication crossbars of the dot product engine to perform multiplication of the matrix with a vector.
    Type: Grant
    Filed: April 28, 2020
    Date of Patent: March 8, 2022
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Mashood Abdulla Kodavanji, Soumitra Chatterjee, Chinmay Ghosh, Mohan Parthasarathy
  • Patent number: 11263292
    Abstract: A method for performing a matrix multiplication operation is provided. The method includes: obtaining a matrix B1, a matrix A2, and an index matrix, wherein the index matrix comprises indexes, in a matrix A1, of elements in the matrix A2; generating m matrices B2 based on the index matrix and the matrix B1, wherein the m matrices B2 are all matrices with t rows and n columns, and each row of each matrix B2 is a row indicated in the matrix B1 by a corresponding element in the index matrix; and generating a matrix C based on the matrix A2 and the m matrices B2, wherein the matrix C is a product of the matrix A1 and the matrix B1.
    Type: Grant
    Filed: May 19, 2021
    Date of Patent: March 1, 2022
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Leijun He, Bin Xu, Kaixing Wang
  • Patent number: 11262980
    Abstract: A computing accelerator using a lookup table. The accelerator may accelerate floating-point multiplications by retrieving the fraction portion of the product of two floating-point operands from a lookup table, by retrieving the full product of two floating-point operands from a lookup table, or by retrieving dot products of floating-point vectors from a lookup table. The accelerator may be implemented in a three-dimensional memory assembly. It may use approximation, the symmetry of a multiplication lookup table, and zero-skipping to improve performance.
    Type: Grant
    Filed: July 1, 2020
    Date of Patent: March 1, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Krishna T. Malladi, Peng Gu, Hongzhong Zheng, Robert Brennan
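Two of the optimizations named above, table symmetry and zero-skipping, are easy to illustrate for small integer operands (our own simplified setting; the patent targets floating-point fractions stored in memory, not a Python dict):

```python
def make_mult_lut(n_bits=4):
    """Multiplication lookup table exploiting symmetry (a*b == b*a):
    only pairs with a <= b are stored, roughly halving the table.
    Zero operands are never stored; they are skipped at lookup time."""
    lut = {}
    for a in range(1, 1 << n_bits):
        for b in range(a, 1 << n_bits):
            lut[(a, b)] = a * b
    return lut

def lut_multiply(lut, a, b):
    if a == 0 or b == 0:   # zero-skipping: no table access at all
        return 0
    return lut[(a, b)] if a <= b else lut[(b, a)]  # symmetry
```

Halving the table matters because, in a memory-resident accelerator, table size directly trades against the operand width that can be supported.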
  • Patent number: 11256779
    Abstract: A calculation apparatus according to an embodiment includes one or more processing circuits configured to function as an interaction unit, a first addition unit, and a time evolution unit. The interaction unit generates N first intermediate variables by performing a matrix computation on the N first variables and the coefficient matrix at the first time. The first addition unit calculates N second variables at the second time, at which the sampling period has elapsed from the first time. The time evolution unit executes a time evolution process on the N second variables at the first time to generate N first variables at the second time. If the N first variables at the second time do not satisfy a predetermined constraint condition, the time evolution unit changes the N second variables at the second time in a direction that satisfies the constraint condition.
    Type: Grant
    Filed: February 26, 2020
    Date of Patent: February 22, 2022
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Taro Kanao, Kosuke Tatsumura, Hayato Goto
  • Patent number: 11258456
    Abstract: A method for compressing a quantum state vector includes: aggregating a group of several neighboring states of the vector into a cluster of states of the vector, a parameter representative of the probability of this cluster being associated with it and corresponding to the sum of the probabilities of the aggregated neighboring states in this cluster, the probability of each aggregated neighboring state being below a given aggregation threshold, and/or the sum of the probabilities of the aggregated neighboring states in a cluster being below another given aggregation threshold; and preserving a state of the vector not aggregated in a cluster, the parameter representative of its probability remaining unchanged. The method includes several steps of aggregating several distinct groups of several neighboring states of the vector, respectively into several clusters of states of the vector, and/or an aggregation step and a preservation step.
    Type: Grant
    Filed: September 23, 2019
    Date of Patent: February 22, 2022
    Assignee: BULL SAS
    Inventor: Jean Noël Quintin
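The aggregation rule in the abstract above can be sketched on a plain probability vector. This is our own minimal reading (fixed-size neighbor groups, a single threshold; the patent also covers a threshold on the cluster sum and clusters of varying extent):

```python
def compress_state_probs(probs, threshold, cluster_size=2):
    """Walk the state-probability vector in groups of neighboring
    states. If every state in a group falls below the aggregation
    threshold, replace the group by one cluster whose probability is
    the group's sum; otherwise preserve the states unchanged."""
    out = []
    for i in range(0, len(probs), cluster_size):
        group = probs[i:i + cluster_size]
        if all(p < threshold for p in group):
            out.append(('cluster', sum(group)))      # aggregate
        else:
            out.extend(('state', p) for p in group)  # preserve
    return out
```

Since low-probability amplitudes dominate most simulated quantum state vectors, aggregating them can shrink the representation dramatically while keeping the high-probability states exact.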
  • Patent number: 11250108
    Abstract: A matrix processing method includes: determining a quantity of non-zero elements in a to-be-processed matrix, where the to-be-processed matrix is a one-dimensional matrix; generating a distribution matrix of the to-be-processed matrix, where the distribution matrix is used to indicate a position of a non-zero element in the to-be-processed matrix; combining the quantity of non-zero elements, values of all non-zero elements in the to-be-processed matrix arranged sequentially, and the distribution matrix, to obtain a compressed matrix of the to-be-processed matrix.
    Type: Grant
    Filed: May 8, 2020
    Date of Patent: February 15, 2022
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Zhenjiang Dong, Chio In Ieong, Hu Liu, Hai Chen
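The compressed format described above (non-zero count, the non-zero values in order, and a distribution matrix marking their positions) round-trips cleanly. A small sketch under our own naming, with the one-dimensional matrix as a Python list and the distribution matrix as a 0/1 list:

```python
def compress_vector(v):
    """Compress a one-dimensional matrix into (count, values,
    distribution): the number of non-zeros, the non-zero values in
    order, and a 0/1 vector marking where the non-zeros sit."""
    distribution = [1 if x != 0 else 0 for x in v]
    values = [x for x in v if x != 0]
    return len(values), values, distribution

def decompress_vector(count, values, distribution):
    """Invert the compression by walking the distribution vector and
    consuming the stored non-zero values in order."""
    it = iter(values)
    return [next(it) if flag else 0 for flag in distribution]
```

With one bit per position plus storage for the non-zeros only, the format wins whenever the matrix is sparse enough that the bitmap costs less than the elided zeros.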
  • Patent number: 11249725
    Abstract: A true random number generator (TRNG) is disclosed, comprising an enclosure enclosing, a radiation source (preferably radioactive nickel), and a cavity separating the radioactive nickel from a linear array of cells. The cells include a silicon substrate with a detector constructed to detect electrons within the cavity from the decay of the nickel and to produce a signal for the detected energy. The amplifier connected to the detector amplifies the signal and passes it to the memory for storage. A control block is connected to each cell in the linear array (a) sends a word line signal to each cell, causing the memory to report its contents to an output buffer/memory via a bit line, and also (b) sends a reset signal to each cell, causing the memory to erase.
    Type: Grant
    Filed: August 24, 2021
    Date of Patent: February 15, 2022
    Assignee: RANDAEMON SP. Z O.O.
    Inventors: Wieslaw Bohdan Kuźmicz, Jan Jakub Tatarkiewicz
  • Patent number: 11249723
    Abstract: A method related to posit tensor processing can include receiving, by a plurality of multiply-accumulator (MAC) units coupled to one another, a plurality of universal number (unum) or posit bit strings organized in a matrix and to be used as operands in a plurality of respective recursive operations performed using the plurality of MAC units and performing, using the MAC units, the plurality of respective recursive operations. Iterations of the respective recursive operations are performed using at least one bit string that is a same bit string as was used in a preceding iteration of the respective recursive operations. The method can further include prior to receiving the plurality of unum or posit bit strings, performing an operation to organize the plurality of unum or posit bit strings to achieve a threshold bandwidth ratio, a threshold latency, or both during performance of the plurality of respective recursive operations.
    Type: Grant
    Filed: April 2, 2020
    Date of Patent: February 15, 2022
    Assignee: Micron Technology, Inc.
    Inventor: Vijay S. Ramesh
  • Patent number: 11250105
    Abstract: A computation unit that comprises (i) a multiplicand vector decomposer that generates a decomposed multiplicand vector which uses a sequence of first and second concatenated multiplicand sub-elements (1st2ndCMCSE) in a lower-precision format (LPF) to represent corresponding ones of multiplicand elements in a multiplicand vector in a higher-precision format (HPF), (ii) a multiplier vector decomposer that generates a decomposed multiplier vector which uses a sequence of first and second concatenated multiplier sub-elements (1st2ndCMLSE) in the LPF to represent corresponding ones of multiplier elements in a multiplier vector in the HPF, (iii) a multiplicand tensor encoder that encodes double reads of the sequence of the 1st2ndCMCSE in a decomposed multiplicand tensor, and (iv) a product vector generator that generates a product vector containing a sequence of first and second concatenated product sub-elements by executing general matrix-matrix multiplication (GeMM) operations between the double reads of the 1st2ndCMCSE.
    Type: Grant
    Filed: May 12, 2020
    Date of Patent: February 15, 2022
    Assignee: SambaNova Systems, Inc.
    Inventors: Mingran Wang, Xiaoyan Li, Yongning Sheng
  • Patent number: 11238130
    Abstract: A signal processing method and apparatus, where the method includes partitioning a signal matrix to obtain X×H fractal signal matrices, partitioning a weight matrix to obtain H×Y fractal weight matrices, obtaining an operation sequence of the X×H×Y matrix multiplications based on performance parameters, and processing the X×H×Y matrix multiplications in that operation sequence to obtain X×Y result matrices.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: February 1, 2022
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventor: Ruosheng Xu
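The fractal partitioning above is a blocked matrix multiplication: X×H signal blocks times H×Y weight blocks accumulate into X×Y result blocks. A NumPy sketch under our own naming, assuming dimensions divisible by the block size; the fixed i, j, k loop order here stands in for the performance-tuned operation sequence the patent derives:

```python
import numpy as np

def blocked_matmul(S, W, bs):
    """Partition signal matrix S into X*H blocks and weight matrix W
    into H*Y blocks of size bs, then accumulate the X*H*Y block
    multiplications into the X*Y blocks of the result."""
    X, H = S.shape[0] // bs, S.shape[1] // bs
    Y = W.shape[1] // bs
    C = np.zeros((S.shape[0], W.shape[1]))
    for i in range(X):
        for j in range(Y):
            for k in range(H):
                C[i*bs:(i+1)*bs, j*bs:(j+1)*bs] += (
                    S[i*bs:(i+1)*bs, k*bs:(k+1)*bs]
                    @ W[k*bs:(k+1)*bs, j*bs:(j+1)*bs])
    return C
```

Reordering the X×H×Y block multiplications changes which blocks stay resident in fast memory between steps, which is why the operation sequence is chosen from performance parameters rather than fixed.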