Patents by Inventor Abbas Rahimi

Abbas Rahimi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240143693
    Abstract: A composite vector is received. A first candidate component vector is generated and evaluated. The first candidate component vector is selected, based on the evaluating, as an accurate component vector. The first candidate component vector is unbundled from the composite vector. The unbundling results in a first reduced vector.
    Type: Application
    Filed: November 1, 2022
    Publication date: May 2, 2024
    Inventors: Zuzanna Dominika Domitrz, Michael Andreas Hersche, Kumudu Geethan Karunaratne, Abu Sebastian, Abbas Rahimi
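The evaluate–select–unbundle loop in the abstract can be illustrated with a minimal NumPy sketch. Everything concrete here is an assumption for illustration (bipolar components, integer-sum bundling, the candidate pool and dimensions), not the patented implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1000

# Assumed setup: a composite vector bundled (summed) from bipolar components.
components = rng.choice([-1, 1], size=(3, D))
composite = components.sum(axis=0)

# Candidate pool: the true components plus random distractors.
candidates = np.vstack([components, rng.choice([-1, 1], size=(5, D))])

# Evaluate each candidate component vector by similarity to the composite,
# then select the best-scoring one as the accurate component vector.
scores = candidates @ composite
best = int(np.argmax(scores))

# Unbundle: subtracting the selected component yields a first reduced vector.
reduced = composite - candidates[best]
```

Because a true component correlates strongly with the composite while distractors do not, the argmax reliably lands on a genuine component at this dimensionality.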
  • Publication number: 20240127009
    Abstract: A probability distribution corresponding to a given kernel function is determined, and weights are sampled from that distribution. Memristive devices of an analog crossbar are programmed based on the sampled weights, where each memristive device of the analog crossbar is configured to represent a corresponding weight. Two matrix-vector multiplication operations are performed on an analog input x and an analog input y using the programmed crossbar, and a dot product is computed on the results of the matrix-vector multiplication operations.
    Type: Application
    Filed: September 30, 2022
    Publication date: April 18, 2024
    Inventors: Julian Röttger Büchel, Abbas Rahimi, Manuel Le Gallo-Bourdeau, Irem Boybat Kara, Abu Sebastian
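The sample-weights, two-MVM, then dot-product flow matches the classic random-features approximation of a kernel: for an RBF kernel the matching spectral distribution is Gaussian (Bochner's theorem). A digital sketch, with the crossbar stood in for by an ordinary matrix and all sizes chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 8, 4096  # input dimension, number of sampled features

# For the RBF kernel exp(-||x - y||^2 / 2), the matching probability
# distribution for the weights is a standard Gaussian.
W = rng.normal(size=(m, d))            # weights that would be programmed on the crossbar
b = rng.uniform(0, 2 * np.pi, size=m)

def features(v):
    # One matrix-vector multiplication per input, as on the analog crossbar.
    return np.sqrt(2.0 / m) * np.cos(W @ v + b)

x = rng.normal(size=d) * 0.3
y = rng.normal(size=d) * 0.3

approx = features(x) @ features(y)          # dot product of the two MVM results
exact = np.exp(-np.sum((x - y) ** 2) / 2)   # true RBF kernel value
```

With a few thousand sampled features the dot product of the two feature maps tracks the exact kernel value closely.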
  • Publication number: 20240086682
    Abstract: A 3D compute-in-memory accelerator system and method for efficient inference of Mixture of Expert (MoE) neural network models. The system includes a plurality of compute-in-memory cores, each in-memory core including multiple tiers of in-memory compute cells. One or more tiers of in-memory compute cells correspond to an expert sub-model of the MoE model. One or more expert sub-models are selected for activation propagation based on a function-based routing, the tiers of the corresponding experts being activated based on this function. In one embodiment, this function is a hash-based tier selection function used for dynamic routing of inputs and output activations. In embodiments, the function is applied to select a single expert or multiple experts with input data-based or with layer-activation-based MoEs for single tier activation. Further, the system is configured as a multi-model system with single expert model selection or with a multi-model system with multi-expert selection.
    Type: Application
    Filed: September 13, 2022
    Publication date: March 14, 2024
    Inventors: Julian Roettger Buechel, Manuel Le Gallo-Bourdeau, Irem Boybat Kara, Abbas Rahimi, Abu Sebastian
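The hash-based expert (tier) selection can be sketched in software. The expert models, the SHA-256 hash, and the sizes below are illustrative assumptions, not the claimed hardware mapping of tiers of in-memory compute cells:

```python
import hashlib
import numpy as np

rng = np.random.default_rng(2)
num_experts = 4

# Assumed expert sub-models: one small weight matrix per expert, each
# standing in for one tier of in-memory compute cells.
experts = [rng.normal(size=(3, 5)) for _ in range(num_experts)]

def route(x):
    # Hash-based tier selection: a stable hash of the input activation
    # deterministically picks which expert (tier) to activate.
    digest = hashlib.sha256(x.tobytes()).digest()
    return digest[0] % num_experts

def moe_forward(x):
    k = route(x)              # only the selected expert's tier is activated
    return k, experts[k] @ x

x = rng.normal(size=5)
k, y = moe_forward(x)
```

Because the routing function is a pure hash of the input, the same activation always reaches the same expert, which is what makes function-based routing usable for dynamic tier activation.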
  • Publication number: 20240054317
    Abstract: A computerized neuro-vector-symbolic architecture, that: receives image data associated with an artificial intelligence (AI) task; processes the image data using a frontend that comprises an artificial neural network (ANN) and a vector-symbolic architecture (VSA); and processes an output of the frontend using a backend that comprises a symbolic logical reasoning engine, to solve the AI task. The AI task, for example, may be an abstract visual reasoning task.
    Type: Application
    Filed: August 4, 2022
    Publication date: February 15, 2024
    Inventors: Michael Andreas Hersche, Abu Sebastian, Abbas Rahimi
  • Publication number: 20240054178
    Abstract: The disclosure includes a computer-implemented method of factorizing a vector by utilizing resonator network modules. Such modules include an unbinding module, as well as search-in-superposition modules. The method includes the following steps. A product vector is fed to the unbinding module to obtain unbound vectors. The latter represent estimates of codevectors of the product vector. A first operation is performed on the unbound vectors to obtain quasi-orthogonal vectors. The first operation is reversible. The quasi-orthogonal vectors are fed to the search-in-superposition modules, which rely on a single codebook. In this way, transformed vectors are obtained, utilizing a single codebook. A second operation is performed on the transformed vectors. The second operation is an inverse operation of the first operation, which makes it possible to obtain refined estimates of the codevectors.
    Type: Application
    Filed: August 11, 2022
    Publication date: February 15, 2024
    Inventors: Jovin Langenegger, Kumudu Geethan Karunaratne, Michael Andreas Hersche, Abu Sebastian, Abbas Rahimi
  • Publication number: 20230419091
    Abstract: Embodiments are disclosed for a method. The method includes determining a granularity of hypervectors. The method also includes receiving an input hypervector representing a data structure. Additionally, the method includes performing an iterative process to factorize the input hypervector into individual hypervectors representing the cognitive concepts. The iterative process includes, for each concept: determining an unbound version of a hypervector representing the concept by a blockwise unbinding operation between the input hypervector and estimate hypervectors of other concepts. The iterative process further includes determining a similarity vector indicating a similarity of the unbound version of the hypervector with each candidate code hypervector of the concept. Additionally, the iterative process includes generating an estimate of a hypervector representing the concept by a linear combination of the candidate code hypervectors, and weights of the similarity vector.
    Type: Application
    Filed: June 27, 2022
    Publication date: December 28, 2023
    Inventors: Michael Andreas Hersche, Abu Sebastian, Abbas Rahimi
  • Publication number: 20230419088
    Abstract: Embodiments are disclosed for a method. The method includes bundling a set of M code hypervectors, each of dimension D, where M>1. The bundling includes receiving an M-dimensional vector comprising weights for weighting the set of code hypervectors. The bundling further includes mapping the M-dimensional vector to an S-dimensional vector, s_k, such that each element of the S-dimensional vector, s_k, indicates one of the set of code hypervectors, where S=D/L and L≥1. Additionally, the bundling includes building a hypervector such that an ith element of the built hypervector is an ith element of the code hypervector indicated in an ith element of the S-dimensional vector, s_k.
    Type: Application
    Filed: June 27, 2022
    Publication date: December 28, 2023
    Inventors: Michael Andreas Hersche, Abbas Rahimi
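The selection-based bundling can be sketched as follows. The block length, weights, and sampling-proportional-to-weight mapping from the M weights to the S-dimensional selection vector are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
M, D, L = 3, 1200, 4
S = D // L  # one selection per block of L elements (S = D/L)

codebook = rng.choice([-1, 1], size=(M, D))
weights = np.array([0.6, 0.3, 0.1])

# Map the M weights to an S-dimensional selection vector s_k: each entry
# indicates one code hypervector, sampled proportionally to its weight.
s_k = rng.choice(M, size=S, p=weights)

# Build the bundled hypervector: block i copies the corresponding L
# elements of the code hypervector that s_k[i] indicates.
sel = np.repeat(s_k, L)                 # expand block selections to all D positions
bundled = codebook[sel, np.arange(D)]
```

A useful property of this construction: the similarity between the bundled hypervector and code hypervector j is roughly proportional to weights[j], so heavier-weighted components remain more retrievable.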
  • Publication number: 20230325435
    Abstract: The present disclosure relates to a resonator network system comprising a set of N resonator networks, each configured to receive an input hypervector representing a data structure and to perform an iterative process in order to factorize the input hypervector into individual hypervectors representing a set of concepts, the N resonator networks being associated with N permutations respectively. The resonator network system is configured to apply the N permutations to N first hypervectors respectively, the N first hypervectors representing a set of N data structures, and to combine the N permuted hypervectors into a bundled hypervector. The resonator networks are configured to process the bundled hypervector, thereby factorizing the first hypervectors.
    Type: Application
    Filed: April 8, 2022
    Publication date: October 12, 2023
    Inventor: Abbas Rahimi
  • Publication number: 20230297816
    Abstract: Predefined concepts are represented by codebooks. Each codebook includes candidate code hypervectors that represent items of a respective concept of the predefined concepts. A neuromorphic memory device with a crossbar array structure comprising row lines and column lines stores the values of the respective code hypervectors of a codebook. An input hypervector is stored in an input buffer. A plurality of estimate buffers are each associated with a different subset of row lines and a different codebook and initially store estimated hypervectors. An unbound hypervector is computed using the input hypervector and all the estimated hypervectors. An attention vector is computed that indicates a similarity of the unbound hypervector with one estimated hypervector. A linear combination of the one estimated hypervector, weighted by the attention vector, is computed and is stored in the estimate buffer that is associated with the one estimated hypervector.
    Type: Application
    Filed: March 16, 2022
    Publication date: September 21, 2023
    Inventors: Kumudu Geethan Karunaratne, Michael Andreas Hersche, Giovanni Cherubini, Abu Sebastian, Abbas Rahimi
  • Publication number: 20230206057
    Abstract: A computer-implemented method for performing a classification of an input signal by a neural network includes: computing, by a feature extraction unit of the neural network, a D-dimensional query vector, wherein D is an integer; generating, by a classification unit of the neural network, a set of C fixed D-dimensional quasi-orthogonal bipolar vectors as a fixed classification matrix, wherein C is an integer corresponding to a number of classes of the classification unit; and performing a classification of a query vector based, at least in part, on the fixed classification matrix.
    Type: Application
    Filed: December 29, 2021
    Publication date: June 29, 2023
    Inventors: Michael Andreas Hersche, Kumudu Geethan Karunaratne, Giovanni Cherubini, Abu Sebastian, Abbas Rahimi
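The fixed classification matrix relies on random bipolar vectors being quasi-orthogonal in high dimensions, so the final layer needs no training. A minimal sketch under assumed sizes (the feature extractor is mocked by placing the query near one class vector):

```python
import numpy as np

rng = np.random.default_rng(4)
C, D = 10, 2000  # number of classes, query/feature dimension

# Fixed classification matrix: C random bipolar D-dimensional vectors are
# quasi-orthogonal with high probability, so this layer stays untrained.
classification_matrix = rng.choice([-1, 1], size=(C, D)).astype(float)

def classify(query):
    # Class score = similarity between the query vector and each class vector.
    return int(np.argmax(classification_matrix @ query))

# A query that the (not modeled) feature extraction unit would ideally
# map near class 7's vector, plus noise.
query = classification_matrix[7] + 0.5 * rng.normal(size=D)
```

Because cross-similarities between random bipolar vectors concentrate around zero while the matching class scores near D, classification is robust to substantial query noise.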
  • Publication number: 20230206035
    Abstract: A computer-implemented method for performing a classification of an input signal utilizing a neural network includes: computing, by a feature extraction unit of the neural network, a query vector; and performing, by a classification unit, a factorization of the query vector to a plurality of codebook vectors of a plurality of codebooks to determine a corresponding class of a number of classes. A set of combinations of vector products of the plurality of codebook vectors of the plurality of codebooks establishes a number of classes of the classification unit.
    Type: Application
    Filed: December 29, 2021
    Publication date: June 29, 2023
    Inventors: Michael Andreas Hersche, Kumudu Geethan Karunaratne, Giovanni Cherubini, Abu Sebastian, Abbas Rahimi
  • Publication number: 20230206056
    Abstract: A computer-implemented method for factorizing hypervectors in a resonator network includes: receiving an input hypervector representing a data structure; performing an iterative process for each concept in a set of concepts associated with the data structure in order to factorize the input hypervector into a plurality of individual hypervectors representing the set of concepts, wherein the iterative process includes: generating a first estimate of an individual hypervector representing a concept in the set of concepts; generating a similarity vector indicating a similarity of the estimate of the individual hypervector with each candidate attribute hypervector of a plurality of candidate attribute hypervectors representing an attribute associated with the concept; and generating a second estimate of the individual hypervector based, at least in part, on a linear combination of the plurality of candidate attribute hypervectors and performing a non-linear function on the linear combination of the plurality of candidate attribute hypervectors.
    Type: Application
    Filed: December 29, 2021
    Publication date: June 29, 2023
    Inventors: Michael Andreas Hersche, Kumudu Geethan Karunaratne, Giovanni Cherubini, Abu Sebastian, Abbas Rahimi
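The estimate–similarity–recombine loop is the standard resonator network iteration. A two-concept sketch under assumed bipolar codebooks, Hadamard binding, and a sign nonlinearity (illustrative choices, not the exact claimed update):

```python
import numpy as np

rng = np.random.default_rng(5)
D, K = 1024, 8  # hypervector dimension, candidates per codebook

def bip(v):
    # Bipolar sign nonlinearity, ties broken to +1.
    return np.where(v >= 0, 1, -1)

# Two codebooks of candidate attribute hypervectors, one per concept.
A = rng.choice([-1, 1], size=(K, D))
B = rng.choice([-1, 1], size=(K, D))

# Input hypervector: elementwise (Hadamard) binding of one candidate from each.
s = A[2] * B[5]

# First estimates: superposition of all candidates of each concept.
a_hat = bip(A.sum(axis=0))
b_hat = bip(B.sum(axis=0))

for _ in range(20):
    # Unbind the other concept's estimate, compute the similarity vector
    # against the codebook, and form the next estimate as a non-linear
    # function of a linear combination of the candidates.
    a_hat = bip(A.T @ (A @ (s * b_hat)))
    b_hat = bip(B.T @ (B @ (s * a_hat)))

factor_a = int(np.argmax(A @ a_hat))
factor_b = int(np.argmax(B @ b_hat))
```

At this dimensionality and codebook size the iteration converges quickly, recovering the bound factors without an explicit search over all K×K combinations.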
  • Publication number: 20230092627
    Abstract: The invention is notably directed to a sensor system for performing distributed sensing and classification of sensor data. The sensor system comprises a set of distributed sensor nodes for sensing the sensor data. The sensor system is configured to encode the sensor data of each sensor node as high-dimensional vectors and to transmit the high-dimensional vectors over a respective link between the sensor node and a receiver system. The sensor system is further configured to superpose the high-dimensional vectors of the sensor data from the set of sensor nodes by physical superposition, thereby generating a superposed high-dimensional vector, and to classify the superposed high-dimensional vector at the receiver system.
    Type: Application
    Filed: September 21, 2021
    Publication date: March 23, 2023
    Inventors: Abbas Rahimi, Abu Sebastian
  • Patent number: 11574209
    Abstract: A system for hyper-dimensional computing for inference tasks may be provided. The system comprises an item memory for storing hyper-dimensional item vectors, a query transformation unit connected to the item memory, the query transformation unit being adapted for forming a hyper-dimensional query vector from a query input and hyper-dimensional base vectors stored in the item memory, and an associative memory adapted for storing a plurality of hyper-dimensional profile vectors and for determining a distance between the hyper-dimensional query vector and the plurality of hyper-dimensional profile vectors, wherein the item memory and the associative memory are adapted for in-memory computing using memristive devices.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: February 7, 2023
    Assignees: International Business Machines Corporation, ETH ZURICH (EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZURICH)
    Inventors: Kumudu Geethan Karunaratne, Manuel Le Gallo-Bourdeau, Giovanni Cherubini, Abu Sebastian, Abbas Rahimi, Luca Benini
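The item-memory / query-transformation / associative-memory flow can be mocked in software. Everything concrete below is an assumption for illustration (binary vectors, majority-vote bundling as the query transformation, toy profiles); the patent itself targets in-memory computing with memristive devices:

```python
import numpy as np

rng = np.random.default_rng(8)
D = 1000

# Item memory: one random binary base hypervector per symbol.
item_memory = {c: rng.choice([0, 1], size=D) for c in "abcd"}

def encode_query(text):
    # Query transformation (illustrative): bundle the item vectors of the
    # query input by positionwise majority vote.
    stack = np.array([item_memory[c] for c in text])
    return (stack.sum(axis=0) * 2 >= len(text)).astype(int)

# Associative memory: one stored profile hypervector per class.
profiles = {"x": encode_query("aab"), "y": encode_query("ccd")}

def infer(text):
    q = encode_query(text)
    # Distance computation: Hamming distance between the query hypervector
    # and each profile hypervector; the nearest profile wins.
    return min(profiles, key=lambda k: np.count_nonzero(profiles[k] != q))
```

In the claimed system both the item memory and this distance search are executed in-memory on memristive crossbars rather than with explicit loops.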
  • Publication number: 20220383063
    Abstract: The present disclosure relates to a method for representing an ordered group of symbols with a hypervector. The method comprises sequentially applying on at least part of the input hypervector associated with a current symbol a predefined number of circular shift operations associated with the current symbol, resulting in a shifted hypervector. A rotate operation may be applied on the shifted hypervector, resulting in an output hypervector. If the current symbol is not the last symbol of the ordered group of symbols the output hypervector may be provided as the input hypervector associated with a subsequent symbol of the current symbol; otherwise, the output hypervector of the last symbol of the ordered group of symbols may be provided as a hypervector that represents the ordered group of symbols.
    Type: Application
    Filed: May 27, 2021
    Publication date: December 1, 2022
    Inventors: Kumudu Geethan Karunaratne, Abbas Rahimi, Manuel Le Gallo-Bourdeau, Giovanni Cherubini, Abu Sebastian
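The shift-then-bind recurrence over an ordered group of symbols is the familiar permutation-based sequence encoding. A simplified sketch (one circular shift per step and multiplicative binding are assumptions; the claim allows a symbol-dependent number of shifts):

```python
import numpy as np

rng = np.random.default_rng(6)
D = 1000

# Item memory: a random bipolar hypervector per symbol.
item = {c: rng.choice([-1, 1], size=D) for c in "abc"}

def encode(sequence):
    # Per symbol: circularly shift (rotate) the running hypervector, then
    # bind in the current symbol's hypervector. The output for the last
    # symbol represents the whole ordered group of symbols.
    hv = np.ones(D, dtype=int)
    for sym in sequence:
        hv = np.roll(hv, 1) * item[sym]
    return hv

# Order matters: the same multiset of symbols in a different order
# yields a quasi-orthogonal hypervector.
h_abc, h_cba = encode("abc"), encode("cba")
```

The shifts tag each symbol with its position, which is what distinguishes this from plain commutative bundling.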
  • Publication number: 20220180167
    Abstract: The present disclosure relates to a method for classifying a query information element using the similarity between the query information element and a set of support information elements. A resulting set of similarity scores is transformed using a sharpening function such that the transformed scores are decreasing as negative similarity scores increase and the transformed scores are increasing as positive similarity scores increase. A class of the query information element is determined based on the transformed similarity scores.
    Type: Application
    Filed: December 3, 2020
    Publication date: June 9, 2022
    Inventors: Kumudu Geethan Karunaratne, Manuel Le Gallo-Bourdeau, Giovanni Cherubini, Abu Sebastian, Abbas Rahimi
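The described behavior, with transformed scores growing as similarities become either more strongly negative or more strongly positive, is that of a soft-absolute sharpening. A hedged sketch (the logistic form, the β steepness, and the 0.5 threshold are illustrative choices, not the claimed function):

```python
import numpy as np

def sharpen(scores, beta=10.0):
    # Soft-absolute style sharpening: similarity scores near zero are
    # damped, while strongly negative and strongly positive scores are
    # both amplified toward 1.
    return 1.0 / (1.0 + np.exp(-beta * (np.abs(scores) - 0.5)))

def classify(sims, support_labels, num_classes):
    # Weight each support element's class vote by its sharpened similarity.
    votes = np.zeros(num_classes)
    for w, label in zip(sharpen(np.asarray(sims, dtype=float)), support_labels):
        votes[label] += w
    return int(np.argmax(votes))

# Similarities of a query to three support elements with labels 0, 1, 1:
# the strong positive match dominates the two near-zero spurious scores.
predicted = classify([0.9, 0.05, -0.1], [0, 1, 1], num_classes=2)
```

Sharpening suppresses the many small spurious similarity scores that otherwise accumulate when the number of support elements is large.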
  • Patent number: 11244723
    Abstract: The invention is directed to a device for high-dimensional encoding of a plurality of sequences of quantitative data signals. The device comprises a plurality of input channels for receiving the plurality of sequences of quantitative data signals and an encoding unit. The encoding unit is configured to perform a temporal high-dimensional encoding of n-grams of the plurality of sequences of quantitative data signals; thereby creating a plurality of temporally encoded hypervectors for the plurality of input channels. The encoding unit is further configured to perform a spatial high-dimensional encoding of the plurality of temporally encoded hypervectors, thereby creating a temporally and spatially encoded hypervector. The device further comprises a configuration controller. The configuration controller is adapted to configure the high-dimensional encoding in dependence on one or more hyperparameter values.
    Type: Grant
    Filed: October 5, 2020
    Date of Patent: February 8, 2022
    Assignees: International Business Machines Corporation, ETH ZURICH
    Inventors: Kumudu Geethan Karunaratne, Manuel Le Gallo-Bourdeau, Giovanni Cherubini, Abu Sebastian, Abbas Rahimi, Luca Benini
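The temporal-then-spatial encoding pipeline can be sketched digitally. The n-gram size (a hyperparameter in the claim), the level quantization, and the channel keys below are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
D, N = 1000, 3  # hypervector dimension, n-gram size (a hyperparameter)

def temporal_encode(signal_levels, basis):
    # Temporal encoding: bind each n-gram by rolling older samples further
    # and multiplying the quantized-level hypervectors together.
    grams = []
    for t in range(len(signal_levels) - N + 1):
        g = np.ones(D, dtype=int)
        for j in range(N):
            g = g * np.roll(basis[signal_levels[t + j]], N - 1 - j)
        grams.append(g)
    # Bundle all n-grams into one temporally encoded hypervector.
    return np.where(np.sum(grams, axis=0) >= 0, 1, -1)

# Assumed setup: 2 input channels, signals quantized to 4 levels.
basis = rng.choice([-1, 1], size=(4, D))         # level hypervectors
channel_keys = rng.choice([-1, 1], size=(2, D))  # one key per input channel

channels = [[0, 1, 2, 3, 2, 1], [3, 3, 0, 1, 0, 2]]
temporal = [temporal_encode(ch, basis) for ch in channels]

# Spatial encoding: bind each channel's temporal hypervector to its
# channel key, then bundle across channels.
spatial = np.where(sum(k * t for k, t in zip(channel_keys, temporal)) >= 0, 1, -1)
```

The result is a single temporally and spatially encoded hypervector; the n-gram size N is exactly the kind of hyperparameter the claimed configuration controller would tune.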
  • Patent number: 11227656
    Abstract: The invention is directed to a device for high-dimensional encoding of a plurality of sequences of quantitative data signals. The device comprises a memory crossbar array comprising a plurality of resistive devices, a first peripheral circuit connected to the memory crossbar array, and a second peripheral circuit connected to the first peripheral circuit. The device is configured to receive the plurality of sequences of quantitative data signals via a plurality of input channels and to store elements of a plurality of precomputed basis hypervectors as conductance states of the resistive devices. The plurality of basis hypervectors are bound to respective input channels. The first peripheral circuit performs a temporal encoding of n-grams of the quantitative data signals, thereby creating a plurality of temporally encoded hypervectors. The second peripheral circuit performs a spatial encoding of the plurality of temporally encoded hypervectors. This creates a temporally and spatially encoded hypervector.
    Type: Grant
    Filed: October 5, 2020
    Date of Patent: January 18, 2022
    Assignee: International Business Machines Corporation
    Inventors: Kumudu Geethan Karunaratne, Manuel Le Gallo-Bourdeau, Giovanni Cherubini, Abu Sebastian, Abbas Rahimi, Luca Benini
  • Patent number: 11226763
    Abstract: The invention is notably directed at a device for high-dimensional computing comprising an associative memory module. The associative memory module comprises one or more planar crossbar arrays. The one or more planar crossbar arrays comprise a plurality of resistive memory elements. The device is configured to program profile vector elements of profile hypervectors as conductance states of the resistive memory elements and to apply query vector elements of query hypervectors as read voltages to the one or more crossbar arrays. The device is further configured to perform a distance computation between the profile hypervectors and the query hypervectors by measuring output current signals of the one or more crossbar arrays. The invention further concerns a related method and a related computer program product.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: January 18, 2022
    Assignees: International Business Machines Corporation, ETH ZURICH (EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZURICH)
    Inventors: Manuel Le Gallo-Bourdeau, Kumudu Geethan Karunaratne, Giovanni Cherubini, Abu Sebastian, Abbas Rahimi, Luca Benini
  • Patent number: 10971226
    Abstract: The invention provides a resistive memory device for storing elements of hyper-dimensional vectors, in particular digital hyper-dimensional vectors, as conductance states of components of the resistive memory device, in particular of 2D memristors. The resistive memory device provides a first crossbar array of the components, wherein the components are memristive 2D components addressable by word-lines and bit-lines, and a peripheral circuit connected to the word-lines and bit-lines and adapted for encoding operations by activating the word-lines and bit-lines sequentially in a predefined manner.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: April 6, 2021
    Assignees: International Business Machines Corporation, ETH ZURICH (EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZURICH)
    Inventors: Manuel Le Gallo-Bourdeau, Kumudu Geethan Karunaratne, Giovanni Cherubini, Abu Sebastian, Abbas Rahimi, Luca Benini