Patents by Inventor Saeed JAHROMI

Saeed JAHROMI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240160899
    Abstract: A system and method for improving a convolutional neural network (CNN) are described herein. The system includes a processor receiving a weight tensor having N parameters, the weight tensor corresponding to a convolutional layer of the CNN. The processor factorizes the weight tensor to obtain a corresponding factorized weight tensor, the factorized weight tensor having M parameters, where M<N. The processor supplies the factorized weight tensor to a classification layer of the CNN, thereby generating an improved CNN. In an embodiment, the processor (a) determines a rank of the weight tensor and (b) decomposes the weight tensor into a core tensor and a number R of factor matrices, where R corresponds to the rank of the weight tensor. In another embodiment, the processor (a) determines a decomposition rank R and (b) factorizes the weight tensor as a sum of a number R of tensor products.
    Type: Application
    Filed: December 15, 2022
    Publication date: May 16, 2024
    Applicant: Multiverse Computing SL
    Inventors: Saeed Jahromi, Román Orús
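
The Tucker-style embodiment above (determine a rank, then decompose the weight tensor into a core tensor and factor matrices) can be illustrated with a short sketch. This is not the patented method: the truncated-HOSVD routine, the layer shape, and the ranks below are all assumptions chosen for the demonstration.

```python
import numpy as np

def unfold(t, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def tucker_factorize(weight, ranks):
    """Truncated HOSVD: weight ~ core contracted with one factor per mode."""
    # One factor matrix per mode (R = 4 here, matching the tensor's order)
    factors = [np.linalg.svd(unfold(weight, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = weight
    for mode, u in enumerate(factors):
        # Project each mode of the weight tensor onto its factor basis
        core = np.moveaxis(
            np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

# Hypothetical conv layer weight: (out_channels, in_channels, kH, kW)
w = np.random.randn(64, 32, 3, 3)
core, factors = tucker_factorize(w, ranks=(16, 8, 3, 3))

n_params = w.size                                    # N in the abstract
m_params = core.size + sum(f.size for f in factors)  # M in the abstract
print(f"N = {n_params}, M = {m_params}, M < N: {m_params < n_params}")
```

The printed counts confirm the compression claim M < N for these illustrative ranks.
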
  • Patent number: 11907325
    Abstract: A computer-implemented method is provided whereby an equation with a cost function for minimization is solved by a tensor network. Coefficients of tensors of the tensor network are modified so as to reduce the value of the cost function in an iterative process until convergence is reached, at which point the unconstrained optimization problem concerned is solved and the values of the variables of the cost function are provided.
    Type: Grant
    Filed: March 26, 2021
    Date of Patent: February 20, 2024
    Assignee: MULTIVERSE COMPUTING S.L.
    Inventors: Samuel Mugel, Román Orús, Saeed Jahromi, Serkan Sahin
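
A toy illustration of the idea in this abstract, in which coefficients of small tensors are updated iteratively until the cost converges. The least-squares cost, the two-tensor factorization of the solution vector, and the alternating least-squares updates are stand-ins invented for the demo, not the patented tensor-network sweep.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative unconstrained problem: minimize f(x) = ||A x - b||^2
A = rng.normal(size=(8, 6))
b = rng.normal(size=8)

# Represent the solution x as the contraction x = L @ R of two small
# tensors; their coefficients are what the iteration updates.
L = rng.normal(size=(6, 3))
R = rng.normal(size=3)

prev = np.inf
for sweep in range(100):
    # Update R with L fixed (a linear least-squares solve), then L with
    # R fixed; each update can only lower the cost.
    R, *_ = np.linalg.lstsq(A @ L, b, rcond=None)
    L = np.linalg.lstsq(np.kron(A, R[None, :]), b,
                        rcond=None)[0].reshape(6, 3)
    cost = float(np.sum((A @ (L @ R) - b) ** 2))
    if prev - cost < 1e-12:            # stop once the cost has converged
        break
    prev = cost

print(f"converged after {sweep + 1} sweeps, cost = {cost:.3e}")
```
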
  • Publication number: 20230409665
    Abstract: A method of embedding ordinary differential equations (ODEs) into tensor radial basis networks is presented herein. The method involves receiving a tensored basis function having D dimensions and zeroth-, first-, and second-derivative coefficients A_d, B_d, and C_d; defining A_hat, B_hat, and C_hat as functions of (A, D), (A, B, D), and (A, C, D), respectively; defining an orthogonal exotic algebra a, b, c; applying a, b, and c, along with A_hat, B_hat, and C_hat, as coefficients for the zeroth-derivative, first-derivative, and second-derivative terms; and embedding the updated tensored basis function by forming a matrix product state (MPS). The MPS can be trained by initializing the MPS 3-tensors with random coefficients, then sweeping left and right along the MPS and updating the 3-tensors.
    Type: Application
    Filed: July 5, 2022
    Publication date: December 21, 2023
    Applicant: Multiverse Computing SL
    Inventors: Samuel Palmer, Raj Patel, Román Orús, Saeed Jahromi, Chia-Wei Hsing, Serkan Sahin
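
The training loop described at the end of the abstract (initialize the MPS 3-tensors randomly, then sweep left and right updating them) can be sketched as follows. The sketch fits sampled data rather than embedding an ODE, and the polynomial feature map, target function, and tensor shapes are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
D, p, chi, n = 3, 4, 2, 200     # sites, local basis size, bond dim, samples

# Polynomial features stand in for radial basis functions here.
X = rng.uniform(-1, 1, size=(n, D))
y = np.sin(X.sum(axis=1))                        # illustrative target
F = np.stack([X**k for k in range(p)], axis=2)   # (n, D, p) features

# MPS of 3-tensors A[d] (bond, physical, bond), random initialization
dims = [1] + [chi] * (D - 1) + [1]
A = [rng.normal(scale=0.3, size=(dims[d], p, dims[d + 1])) for d in range(D)]

def left_env(k):
    """Contract sites 0..k-1 with their features -> (n, bond) environment."""
    e = np.ones((n, 1))
    for d in range(k):
        e = np.einsum('na,apb,np->nb', e, A[d], F[:, d])
    return e

def right_env(k):
    """Contract sites k+1..D-1 with their features -> (n, bond)."""
    e = np.ones((n, 1))
    for d in range(D - 1, k, -1):
        e = np.einsum('apb,np,nb->na', A[d], F[:, d], e)
    return e

def update_site(k):
    """The model output is linear in A[k]; refit it by least squares."""
    M = np.einsum('na,np,nb->napb', left_env(k), F[:, k], right_env(k))
    sol, *_ = np.linalg.lstsq(M.reshape(n, -1), y, rcond=None)
    A[k] = sol.reshape(A[k].shape)

for sweep in range(4):               # sweep right, then left, along the MPS
    for k in list(range(D)) + list(range(D - 2, 0, -1)):
        update_site(k)
    mse = np.mean((left_env(D)[:, 0] - y) ** 2)
    print(f"sweep {sweep}: mse = {mse:.2e}")
```

Each site update is an ordinary least-squares solve, because the model output is linear in any single 3-tensor once the rest of the MPS is held fixed.
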
  • Publication number: 20230409961
    Abstract: A method of applying non-linear regression to a set of data points to obtain an estimate is described herein. The method includes receiving a set of N datapoints, separating the set of N datapoints into Nb batches, receiving a family of fitting functions, and minimizing a log-cosh cost function for each batch by selecting parameters that minimize the log-cosh cost function. The parameters are obtained by receiving a matrix product state (MPS) model and training the MPS to minimize loss over all the Nb batches, including choosing an MPS with M+D tensors. All tensors except the D extra tensors correspond to one datapoint in each of the Nb batches; the D extra tensors in the MPS have a physical dimension of size M, corresponding to the number of possible outputs for a given batch; and the coefficients of the tensors in the MPS minimize the log-cosh cost function sequentially over all the Nb batches.
    Type: Application
    Filed: July 5, 2022
    Publication date: December 21, 2023
    Applicant: Multiverse Computing SL
    Inventors: Chia-Wei Hsing, Román Orús, Samuel Mugel, Saeed Jahromi, Serkan Sahin, Samuel Palmer
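
A minimal sketch of the batched log-cosh minimization described above, with a two-parameter linear model standing in for the MPS model of the claims; the data, batch count, and learning rate are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative data: N points split into Nb batches
N, Nb = 400, 8
x = rng.uniform(-2, 2, size=N)
y = 1.5 * x - 0.7 + rng.normal(scale=0.2, size=N)
batches = np.array_split(np.arange(N), Nb)

def logcosh(r):
    # log(cosh(r)) written stably: |r| + log1p(exp(-2|r|)) - log 2
    a = np.abs(r)
    return a + np.log1p(np.exp(-2 * a)) - np.log(2)

# Family of fitting functions: f(x) = w*x + b (an assumption; the
# patent trains an MPS model instead of this two-parameter line).
w, b, lr = 0.0, 0.0, 0.1
for epoch in range(200):
    for idx in batches:                    # minimize sequentially per batch
        r = w * x[idx] + b - y[idx]
        g = np.tanh(r)                     # d/dr log cosh(r) = tanh(r)
        w -= lr * np.mean(g * x[idx])
        b -= lr * np.mean(g)

loss = np.mean(logcosh(w * x + b - y))
print(f"w = {w:.3f}, b = {b:.3f}, log-cosh loss = {loss:.4f}")
```

The log-cosh cost behaves quadratically for small residuals and linearly for large ones, which makes it a robust choice for regression.
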
  • Publication number: 20230341824
    Abstract: A device or system configured to: receive a set of data associated with a monitored target, the set of data having N features, where N is a natural number greater than one; input the N features of the received set of data into a neural network trained, with a plurality of sets of historical data associated with the target, for determining a condition or characteristic of the target, each set of the plurality of sets of historical data having N features, the neural network having at least N inputs and one or more outputs, the neural network having one or more hidden layers, each hidden layer being a tensor network in the form of a matrix product operator, MPO, with a respective plurality of tensors and having a respective predetermined activation function per hidden layer or per tensor in the MPO; and determine a condition or characteristic of the target by processing the one or more outputs. Also described is a device or system configured to train such a neural network.
    Type: Application
    Filed: April 26, 2022
    Publication date: October 26, 2023
    Inventors: Román ORÚS, Samuel MUGEL, Serkan SAHIN, Saeed JAHROMI, Chia-Wei HSING, Raj PATEL, Samuel PALMER
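
A sketch of a hidden layer realized as a matrix product operator, under assumed sizes and a tanh activation. For clarity the MPO is contracted into a dense weight matrix before the activation is applied; a practical implementation would instead contract the input through the MPO site by site.

```python
import numpy as np

rng = np.random.default_rng(3)

# N = 2**4 = 16 input features factorized over 4 MPO sites (assumed sizes)
d_in, d_out, chi, sites = 2, 2, 2, 4
dims = [1] + [chi] * (sites - 1) + [1]
W = [rng.normal(scale=0.5, size=(dims[k], d_out, d_in, dims[k + 1]))
     for k in range(sites)]

def mpo_to_matrix(W):
    """Contract the MPO tensors into the full (out, in) weight matrix."""
    m = W[0]                                    # (1, out, in, bond)
    for w in W[1:]:
        m = np.einsum('aoib,bpjc->aopijc', m, w)
        a, o, p, i, j, c = m.shape
        m = m.reshape(a, o * p, i * j, c)       # merge output and input legs
    return m[0, :, :, 0]

def mpo_layer(x, W, act=np.tanh):
    """One hidden layer: MPO weight matrix followed by its activation."""
    return act(mpo_to_matrix(W) @ x)

x = rng.normal(size=d_in ** sites)              # the N input features
h = mpo_layer(x, W)

full = (d_out ** sites) * (d_in ** sites)       # dense layer parameters
mpo = sum(w.size for w in W)
print(f"hidden units: {h.size}, dense params: {full}, MPO params: {mpo}")
```
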
  • Publication number: 20230342644
    Abstract: A computer-implemented method including: receiving data including a probability distribution of a dataset or a multivariate probability distribution about a target, the probability distribution relating to a plurality of discrete random variables; providing a tensor codifying the probability distribution such that each configuration of the plurality of discrete random variables has its respective probability codified therein; encoding the tensor into a tensor network in the form of a matrix product state, where an external index of each tensor of the tensor network represents one discrete random variable of the plurality of discrete random variables, and an internal index or internal indices of each tensor of the tensor network represent correlation between the tensor and the corresponding adjacent tensor of the tensor network; and computing at least one moment of the probability distribution by processing the tensor network for sampling of the probability distribution.
    Type: Application
    Filed: April 26, 2022
    Publication date: October 26, 2023
    Inventors: Román ORÚS, Samuel MUGEL, Saeed JAHROMI
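
A small sketch of the encoding described above: a joint distribution of three binary random variables is stored as a tensor, split into a matrix product state by sequential SVDs (so each tensor carries one external index per random variable and internal indices to its neighbours), and a first moment is computed by contracting the network. The distribution and sizes are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(4)

# Joint distribution of 3 binary random variables as a (2, 2, 2) tensor
P = rng.random((2, 2, 2))
P /= P.sum()

def to_mps(T):
    """Split tensor T into an MPS by sequential SVDs (exact, untruncated)."""
    tensors, rest = [], T.reshape(1, *T.shape)
    for _ in range(T.ndim - 1):
        chi_l, d = rest.shape[0], rest.shape[1]
        u, s, vt = np.linalg.svd(rest.reshape(chi_l * d, -1),
                                 full_matrices=False)
        tensors.append(u.reshape(chi_l, d, s.size))   # external leg = d
        rest = (np.diag(s) @ vt).reshape(s.size, *rest.shape[2:])
    tensors.append(rest.reshape(*rest.shape, 1))
    return tensors

def moment(mps, site, power=1):
    """E[X_site**power] for 0/1-valued variables, by contracting the MPS."""
    vals = np.array([0.0, 1.0]) ** power
    e = np.ones(1)
    for k, a in enumerate(mps):
        v = vals if k == site else np.ones(2)   # sum out the other sites
        e = np.einsum('a,apb,p->b', e, a, v)
    return e[0]

mps = to_mps(P)
print("E[X0] from MPS :", moment(mps, 0))
print("E[X0] directly :", P.sum(axis=(1, 2)) @ np.array([0.0, 1.0]))
```
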
  • Publication number: 20220269751
    Abstract: A computer-implemented method is provided whereby an equation with a cost function for minimization is solved by a tensor network. Coefficients of tensors of the tensor network are modified so as to reduce the value of the cost function in an iterative process until convergence is reached, at which point the unconstrained optimization problem concerned is solved and the values of the variables of the cost function are provided.
    Type: Application
    Filed: March 26, 2021
    Publication date: August 25, 2022
    Inventors: Samuel MUGEL, Román ORÚS, Saeed JAHROMI, Serkan SAHIN