Patents by Inventor Michael Andreas Hersche
Michael Andreas Hersche has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250103849
Abstract: An embodiment establishes a neural network that comprises a plurality of layers. The embodiment receives a plurality of input data sequences into a layer of the neural network, where the plurality of input data sequences comprises a first input data sequence and a second input data sequence. The embodiment superposes the first input data sequence and the second input data sequence, thereby creating a superposed embedding. The embodiment transforms the superposed embedding by applying a function to the superposed embedding, thereby creating a transformed superposed embedding. The embodiment infers a first output data element corresponding to the first input data sequence and a second output data element corresponding to the second input data sequence via application of an unbinding operation on the transformed superposed embedding.
Type: Application
Filed: September 21, 2023
Publication date: March 27, 2025
Applicant: International Business Machines Corporation
Inventors: Michael Andreas Hersche, Kumudu Geethan Karunaratne, Abu Sebastian, Abbas Rahimi
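For intuition, here is a minimal numpy sketch of the superpose-transform-unbind idea. The dimension, the bipolar binding keys, and the element-wise transform are illustrative assumptions, not the claimed method:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4096                                 # embedding dimension (assumed)

key1 = rng.choice([-1.0, 1.0], size=D)   # binding key for sequence 1
key2 = rng.choice([-1.0, 1.0], size=D)   # binding key for sequence 2
x1 = rng.standard_normal(D)              # embedding of the first sequence
x2 = rng.standard_normal(D)              # embedding of the second sequence

# Superpose the key-bound embeddings into a single vector.
superposed = key1 * x1 + key2 * x2

# Apply an element-wise transform as a stand-in for a network layer.
w = rng.standard_normal(D)
transformed = w * superposed

# Unbind with key1: since key1 * key1 == 1 element-wise, this yields
# w * x1 plus quasi-orthogonal cross-talk from the second sequence.
y1 = key1 * transformed

cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(cos(y1, w * x1))   # large: sequence 1's output dominates
print(cos(y1, w * x2))   # near zero: little interference
```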
-
Publication number: 20250086250
Abstract: An approach for bundling a set of hypervectors may be provided herein. The approach may involve encoding a data structure into a plurality of hypervectors. The approach may further involve calculating the element-wise sum of a set of hypervectors to generate a sum hypervector. A plurality of blocks may be produced from the sum hypervector. The block elements of the sum hypervector may be selected based on a selection criterion. A selection criterion may include a threshold value or may simply be the largest element per block. Additionally, the approach may involve setting the non-selected elements of the sum hypervector to zero.
Type: Application
Filed: September 11, 2023
Publication date: March 13, 2025
Inventors: Aleksandar Terzic, Jovin Langenegger, Michael Andreas Hersche, Abu Sebastian, Abbas Rahimi, Kumudu Geethan Karunaratne
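A minimal sketch of the bundling steps the abstract describes, using the largest-element-per-block criterion; the number of hypervectors, dimension, and block count are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
M, D, B = 5, 1024, 64                # M hypervectors, dimension D, B blocks
L = D // B                           # elements per block

hvs = rng.standard_normal((M, D))    # hypervectors encoding a data structure
s = hvs.sum(axis=0)                  # element-wise sum -> sum hypervector

# Select one element per block (the largest) and zero the rest.
blocks = s.reshape(B, L)
keep = blocks.argmax(axis=1)         # selection criterion: largest per block
sparse = np.zeros_like(blocks)
sparse[np.arange(B), keep] = blocks[np.arange(B), keep]
bundled = sparse.reshape(D)          # non-selected elements set to zero
```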
-
Publication number: 20250086251
Abstract: An approach for factorizing hypervectors using a resonator network may be provided herein. The approach may involve providing alternative implementations for each step of the iterative process of the resonator network. An input hypervector representing a data structure may be received by the resonator network. The approach may further involve selecting, for each step of the iterative process, one of the provided implementations. The iterative process may be executed based on the selected implementations, thereby factorizing the input hypervector.
Type: Application
Filed: September 11, 2023
Publication date: March 13, 2025
Inventors: Aleksandar Terzic, Jovin Langenegger, Michael Andreas Hersche, Abu Sebastian, Abbas Rahimi, Kumudu Geethan Karunaratne
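For context, this is a sketch of the generic resonator-network iteration being configured, not the selectable-implementation scheme claimed here; dimension, codebook sizes, and iteration count are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 2048, 10                          # dimension, codevectors per factor
X, Y, Z = (rng.choice([-1.0, 1.0], size=(K, D)) for _ in range(3))

bip = lambda v: np.where(v >= 0, 1.0, -1.0)   # bipolar clean-up

# Product hypervector to factorize: one codevector per codebook.
s = X[3] * Y[7] * Z[1]

# Initialize each estimate as the superposition of its whole codebook.
xh, yh, zh = (bip(C.sum(axis=0)) for C in (X, Y, Z))

for _ in range(50):
    # Unbind the other two estimates, project onto the codebook
    # (search in superposition), then clean up.
    xh = bip(X.T @ (X @ (s * yh * zh)))
    yh = bip(Y.T @ (Y @ (s * xh * zh)))
    zh = bip(Z.T @ (Z @ (s * xh * yh)))

# Typically recovers the indices (3, 7, 1).
print((X @ xh).argmax(), (Y @ yh).argmax(), (Z @ zh).argmax())
```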
-
Publication number: 20240202515
Abstract: The present disclosure relates to training a classifier. The classifier includes a controller and an explicit memory. The training may include iteratively: receiving one or more second training datasets, each comprising second data samples of a set of one or more associated novel classes; adding to the explicit memory, in response to providing the one or more second training datasets to the classifier, one or more second output vectors indicative of the set of one or more associated novel classes; retraining the classifier using the one or more second training datasets and the first training dataset by minimizing a distance between the one or more second output vectors and the one or more prototype vectors; determining a set of updated prototype vectors indicative of the first training dataset and the one or more second training datasets; and updating the explicit memory with the set of updated prototype vectors.
Type: Application
Filed: December 2, 2022
Publication date: June 20, 2024
Inventors: Kumudu Geethan Karunaratne, Michael Andreas Hersche, Giovanni Cherubini, Abu Sebastian, Abbas Rahimi
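A minimal sketch of the explicit-memory idea: novel classes append prototype vectors to a memory, and queries are classified against the stored prototypes. The `embed` stand-in and the shapes are assumptions, and the retraining step of the abstract is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64
embed = lambda x: np.tanh(x)       # stand-in for the controller/encoder

memory = {}                        # explicit memory: label -> prototype

def add_class(label, samples):
    """Store the mean embedded support sample as the class prototype."""
    memory[label] = np.mean([embed(s) for s in samples], axis=0)

def classify(x):
    """Return the label of the most similar stored prototype."""
    q = embed(x)
    sims = {c: q @ p / (np.linalg.norm(q) * np.linalg.norm(p) + 1e-9)
            for c, p in memory.items()}
    return max(sims, key=sims.get)

# Sessions with novel classes simply append prototypes to the memory.
add_class("cat", rng.standard_normal((5, D)))
add_class("dog", rng.standard_normal((5, D)))
```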
-
Publication number: 20240143693
Abstract: A composite vector is received. A first candidate component vector is generated and evaluated. The first candidate component vector is selected, based on the evaluating, as an accurate component vector. The first candidate component vector is unbundled from the composite vector. The unbundling results in a first reduced vector.
Type: Application
Filed: November 1, 2022
Publication date: May 2, 2024
Inventors: Zuzanna Dominika Domitrz, Michael Andreas Hersche, Kumudu Geethan Karunaratne, Abu Sebastian, Abbas Rahimi
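A minimal sketch of evaluating a candidate component and unbundling it from the composite; the candidate source, the similarity score, and the acceptance threshold are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 2048
codebook = rng.choice([-1.0, 1.0], size=(8, D))
composite = codebook[2] + codebook[5]     # bundled composite vector

candidate = codebook[2]                   # first candidate component vector
score = candidate @ composite / D         # evaluate the candidate

if score > 0.5:                           # assumed acceptance test
    reduced = composite - candidate       # unbundle -> first reduced vector
```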
-
Publication number: 20240054178
Abstract: The disclosure includes a computer-implemented method of factorizing a vector by utilizing resonator network modules. Such modules include an unbinding module, as well as search-in-superposition modules. The method includes the following steps. A product vector is fed to the unbinding module to obtain unbound vectors. The latter represent estimates of codevectors of the product vector. A first operation is performed on the unbound vectors to obtain quasi-orthogonal vectors. The first operation is reversible. The quasi-orthogonal vectors are fed to the search-in-superposition modules, which rely on a single codebook. In this way, transformed vectors are obtained, utilizing a single codebook. A second operation is performed on the transformed vectors. The second operation is an inverse operation of the first operation, which makes it possible to obtain refined estimates of the codevectors.
Type: Application
Filed: August 11, 2022
Publication date: February 15, 2024
Inventors: Jovin Langenegger, Kumudu Geethan Karunaratne, Michael Andreas Hersche, Abu Sebastian, Abbas Rahimi
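One plausible reading, sketched below: the reversible first operation is a factor-specific random permutation that maps each unbound estimate into the space of one shared codebook, the search runs there, and the inverse permutation maps the result back. The permutations and the shared codebook are assumptions, not the patented construction:

```python
import numpy as np

rng = np.random.default_rng(0)
D, K, F = 2048, 10, 3                           # dim, codevectors, factors
shared = rng.choice([-1.0, 1.0], size=(K, D))   # single shared codebook
perms = [rng.permutation(D) for _ in range(F)]  # reversible first operation

def search(v):
    """Search-in-superposition against the shared codebook."""
    return shared.T @ (shared @ v)

def refine(unbound, f):
    """Refine factor f's unbound estimate via the single codebook."""
    forward = unbound[perms[f]]                 # first (reversible) operation
    transformed = search(forward)               # single-codebook search
    inv = np.empty_like(perms[f])
    inv[perms[f]] = np.arange(D)                # build the inverse permutation
    return transformed[inv]                     # second (inverse) operation
```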
-
Publication number: 20240054317
Abstract: A computerized neuro-vector-symbolic architecture that: receives image data associated with an artificial intelligence (AI) task; processes the image data using a frontend that comprises an artificial neural network (ANN) and a vector-symbolic architecture (VSA); and processes an output of the frontend using a backend that comprises a symbolic logical reasoning engine, to solve the AI task. The AI task, for example, may be an abstract visual reasoning task.
Type: Application
Filed: August 4, 2022
Publication date: February 15, 2024
Inventors: Michael Andreas Hersche, Abu Sebastian, Abbas Rahimi
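A high-level skeleton of the frontend/backend split the abstract describes; every function body here is a placeholder stub to show the dataflow, not IBM's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def ann_frontend(image):
    """Perceptual stub: map an image to a hypervector of its attributes."""
    return np.sign(rng.standard_normal(1024))

def vsa_decode(hv):
    """VSA stub: factorize the hypervector into symbolic attributes."""
    return {"shape": "triangle", "color": "red"}

def symbolic_backend(panels):
    """Reasoning stub: apply logical rules over decoded panel attributes."""
    return panels[0]                      # placeholder decision

# Frontend perceives each panel; backend reasons over the symbols.
attrs = [vsa_decode(ann_frontend(img)) for img in [None] * 3]
answer = symbolic_backend(attrs)
```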
-
Publication number: 20230419091
Abstract: Embodiments are disclosed for a method. The method includes determining a granularity of hypervectors. The method also includes receiving an input hypervector representing a data structure. Additionally, the method includes performing an iterative process to factorize the input hypervector into individual hypervectors representing the cognitive concepts. The iterative process includes, for each concept: determining an unbound version of a hypervector representing the concept by a blockwise unbinding operation between the input hypervector and estimate hypervectors of other concepts. The iterative process further includes determining a similarity vector indicating a similarity of the unbound version of the hypervector with each candidate code hypervector of the concept. Additionally, the iterative process includes generating an estimate of a hypervector representing the concept by a linear combination of the candidate code hypervectors and weights of the similarity vector.
Type: Application
Filed: June 27, 2022
Publication date: December 28, 2023
Inventors: Michael Andreas Hersche, Abu Sebastian, Abbas Rahimi
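A minimal sketch of blockwise binding and unbinding on sparse block codes, one common way to realize the blockwise operations the abstract names: each block holds a one-hot pattern, binding is blockwise circular convolution, and unbinding is blockwise circular correlation. Block count and length are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
B, L = 4, 16                     # blocks per hypervector, block length

def rand_hv():
    """A sparse block-code hypervector: one-hot pattern per block."""
    hv = np.zeros((B, L))
    hv[np.arange(B), rng.integers(0, L, size=B)] = 1.0
    return hv

def bind(a, b):
    # Blockwise circular convolution; for one-hot blocks this shifts
    # each block of `a` by the offset encoded in the block of `b`.
    return np.stack([np.real(np.fft.ifft(np.fft.fft(a[i]) * np.fft.fft(b[i])))
                     for i in range(B)])

def unbind(c, b):
    # Blockwise circular correlation, the inverse of `bind`.
    return np.stack([np.real(np.fft.ifft(np.fft.fft(c[i]) * np.conj(np.fft.fft(b[i]))))
                     for i in range(B)])

x, y = rand_hv(), rand_hv()
assert np.allclose(unbind(bind(x, y), y), x)   # unbinding recovers x
```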
-
Publication number: 20230419088
Abstract: Embodiments are disclosed for a method. The method includes bundling a set of M code hypervectors, each of dimension D, where M > 1. The bundling includes receiving an M-dimensional vector comprising weights for weighting the set of code hypervectors. The bundling further includes mapping the M-dimensional vector to an S-dimensional vector, s_k, such that each element of the S-dimensional vector, s_k, indicates one of the set of code hypervectors, where S = D/L and L ≥ 1. Additionally, the bundling includes building a hypervector such that an ith element of the built hypervector is an ith element of the code hypervector indicated in an ith element of the S-dimensional vector, s_k.
Type: Application
Filed: June 27, 2022
Publication date: December 28, 2023
Inventors: Michael Andreas Hersche, Abbas Rahimi
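A minimal sketch of the selection-based bundling the abstract describes, assuming the weights are mapped to s_k by sampling each entry in proportion to its weight; the sizes and the sampling rule are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
M, D, L = 4, 1024, 4                     # codevectors, dimension, segment size
S = D // L
codebook = rng.choice([-1.0, 1.0], size=(M, D))

alpha = np.array([0.4, 0.3, 0.2, 0.1])   # weights for the M code hypervectors

# Map the M weights to an S-dimensional selection vector s_k: each entry
# names one code hypervector, drawn in proportion to its weight.
s_k = rng.choice(M, size=S, p=alpha)

# Build the bundled hypervector by copying, for each segment i, the
# corresponding elements of the code hypervector that s_k[i] indicates.
built = np.empty(D)
for i in range(S):
    seg = slice(i * L, (i + 1) * L)
    built[seg] = codebook[s_k[i], seg]
```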
-
Publication number: 20230297816
Abstract: Predefined concepts are represented by codebooks. Each codebook includes candidate code hypervectors that represent items of a respective concept of the predefined concepts. A neuromorphic memory device with a crossbar array structure that includes row lines and column lines stores the values of the respective code hypervectors of each codebook. An input hypervector is stored in an input buffer. A plurality of estimate buffers are each associated with a different subset of row lines and a different codebook, and initially store estimated hypervectors. An unbound hypervector is computed using the input hypervector and all the estimated hypervectors. An attention vector is computed that indicates a similarity of the unbound hypervector with one estimated hypervector. A linear combination of the one estimated hypervector, weighted by the attention vector, is computed and stored in the estimate buffer that is associated with the one estimated hypervector.
Type: Application
Filed: March 16, 2022
Publication date: September 21, 2023
Inventors: Kumudu Geethan Karunaratne, Michael Andreas Hersche, Giovanni Cherubini, Abu Sebastian, Abbas Rahimi
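A software sketch of the per-iteration compute: the two matrix-vector products below are exactly what a crossbar storing the codebook rows performs in place. The shapes, the noise model, and the values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 1024, 8
codebook = rng.choice([-1.0, 1.0], size=(K, D))  # rows stored on the crossbar

# A noisy unbound hypervector, as produced from the input buffer and
# the other concepts' estimate buffers.
unbound = codebook[2] + 0.3 * rng.standard_normal(D)

attention = codebook @ unbound / D     # MVM 1: similarity / attention vector
estimate = attention @ codebook        # MVM 2: attention-weighted combination
# `estimate` is what gets written back to this concept's estimate buffer.
```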
-
Publication number: 20230206056
Abstract: A computer-implemented method for factorizing hypervectors in a resonator network includes: receiving an input hypervector representing a data structure; performing an iterative process for each concept in a set of concepts associated with the data structure in order to factorize the input hypervector into a plurality of individual hypervectors representing the set of concepts, wherein the iterative process includes: generating a first estimate of an individual hypervector representing a concept in the set of concepts; generating a similarity vector indicating a similarity of the estimate of the individual hypervector with each candidate attribute hypervector of a plurality of candidate attribute hypervectors representing an attribute associated with the concept; and generating a second estimate of the individual hypervector based, at least in part, on a linear combination of the plurality of candidate attribute hypervectors and performing a non-linear function on the linear combination of the plurality of candidate attribute hypervectors.
Type: Application
Filed: December 29, 2021
Publication date: June 29, 2023
Inventors: Michael Andreas Hersche, Kumudu Geethan Karunaratne, Giovanni Cherubini, Abu Sebastian, Abbas Rahimi
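One update step of the iterative process, sketched from the abstract: similarity vector, then linear combination, then a non-linear function on that combination. The codebook, the noise on the first estimate, and the choice of tanh as the non-linearity are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 2048, 12
attrib = rng.choice([-1.0, 1.0], size=(K, D))    # candidate attribute hvs

# A noisy first estimate of the individual hypervector for one concept.
first_estimate = attrib[4] + 0.5 * rng.standard_normal(D)

similarity = attrib @ first_estimate / D         # similarity vector
combo = similarity @ attrib                      # linear combination
second_estimate = np.tanh(combo)                 # non-linear function
```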
-
Publication number: 20230206057
Abstract: A computer-implemented method for performing a classification of an input signal by a neural network includes: computing, by a feature extraction unit of the neural network, a D-dimensional query vector, wherein D is an integer; generating, by a classification unit of the neural network, a set of C fixed D-dimensional quasi-orthogonal bipolar vectors as a fixed classification matrix, wherein C is an integer corresponding to a number of classes of the classification unit; and performing a classification of a query vector based, at least in part, on the fixed classification matrix.
Type: Application
Filed: December 29, 2021
Publication date: June 29, 2023
Inventors: Michael Andreas Hersche, Kumudu Geethan Karunaratne, Giovanni Cherubini, Abu Sebastian, Abbas Rahimi
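A minimal sketch of the fixed-classifier idea: C random bipolar rows are quasi-orthogonal in high dimension, so they can serve as a never-trained classification matrix, with classification as a best-matching-row lookup. C, D, and the synthetic query are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
C, D = 100, 512
W = rng.choice([-1.0, 1.0], size=(C, D))   # fixed classification matrix

# A query vector near class 17's row (stand-in for the feature extractor).
query = W[17] + rng.standard_normal(D)

predicted = int((W @ query).argmax())      # -> 17 with high probability
```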
-
Publication number: 20230206035
Abstract: A computer-implemented method for performing a classification of an input signal utilizing a neural network includes: computing, by a feature extraction unit of the neural network, a query vector; and performing, by a classification unit, a factorization of the query vector to a plurality of codebook vectors of a plurality of codebooks to determine a corresponding class of a number of classes. A set of combinations of vector products of the plurality of codebook vectors of the plurality of codebooks establishes a number of classes of the classification unit.
Type: Application
Filed: December 29, 2021
Publication date: June 29, 2023
Inventors: Michael Andreas Hersche, Kumudu Geethan Karunaratne, Giovanni Cherubini, Abu Sebastian, Abbas Rahimi
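A brute-force sketch of the combinatorial class space: each class is the element-wise product of one codevector per codebook, so two codebooks of size K define K*K classes. The factorization step of the abstract is replaced here by exhaustive search purely for clarity; the sizes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 1024, 8
A = rng.choice([-1.0, 1.0], size=(K, D))   # codebook 1
B = rng.choice([-1.0, 1.0], size=(K, D))   # codebook 2

query = A[3] * B[5]                        # query vector for class (3, 5)

# Score every combination of codevector products against the query.
scores = np.array([[(A[i] * B[j]) @ query for j in range(K)]
                   for i in range(K)])
print(np.unravel_index(scores.argmax(), scores.shape))  # -> (3, 5)
```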