Abstract: A system and method for data compression using quantum computing are provided. The system receives an initial set of assets and corresponding asset weights. The asset weights are encoded using binary asset holding variables. Cardinality constraints are generated for the asset weights. The cardinality constraints are encoded into qubits. An optimization objective function is minimized using the qubits encoding the cardinality constraints. A subset of assets that replicates the behavior of the initial set of assets is obtained based on the minimized optimization objective function.
Type:
Application
Filed:
October 20, 2023
Publication date:
March 13, 2025
Applicant:
Multiverse Computing SL
Inventors:
Román Orús, Asier Rodriguez, Samuel Palmer, Samuel Mugel
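As a rough illustration of the idea in the abstract above, the sketch below sets up the same ingredients classically: binary asset holding variables, a cardinality penalty standing in for the qubit-encoded cardinality constraints, and a tracking-error objective. Exhaustive search replaces the quantum optimizer, and all sizes, the penalty weight, and the renormalization of subset weights are assumptions for the toy example, not the patented encoding.

```python
# Hypothetical classical sketch of portfolio compression: choose a K-asset
# subset whose reweighted returns replicate a full index. Binary variables
# and the cardinality penalty mirror the abstract; exhaustive search stands
# in for the quantum optimizer.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_days, K = 8, 250, 3                   # toy sizes (assumptions)
returns = rng.normal(0, 0.01, (n_days, n_assets))
weights = rng.dirichlet(np.ones(n_assets))        # initial asset weights
index = returns @ weights                         # behavior to replicate

def tracking_error(x):
    """Objective for a binary holding vector x: how closely the selected
    subset, reweighted proportionally, tracks the full index."""
    if x.sum() == 0:
        return np.inf
    w = weights * x
    w = w / w.sum()                               # renormalize subset weights
    return np.mean((returns @ w - index) ** 2)

penalty = 1.0                                     # cardinality penalty strength
best = min(
    (np.array(bits) for bits in itertools.product([0, 1], repeat=n_assets)),
    key=lambda x: tracking_error(x) + penalty * (x.sum() - K) ** 2,
)
print("selected assets:", np.flatnonzero(best), "error:", tracking_error(best))
```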
Abstract: A system and method for improving a convolutional neural network (CNN) are described herein. The system includes a processor receiving a weight tensor having N parameters, the weight tensor corresponding to a convolutional layer of the CNN. The processor factorizes the weight tensor to obtain a corresponding factorized weight tensor, the factorized weight tensor having M parameters, where M<N. The processor supplies the factorized weight tensor to a classification layer of the CNN, thereby generating an improved CNN. In an embodiment, the processor (a) determines a rank of the weight tensor and (b) decomposes the weight tensor into a core tensor and a number R of factor matrices, where R corresponds to the rank of the weight tensor. In another embodiment, the processor (a) determines a decomposition rank R and (b) factorizes the weight tensor as a sum of a number R of tensor products.
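The first embodiment above (a core tensor plus R factor matrices) has the structure of a Tucker decomposition. The sketch below, a non-authoritative illustration, computes one via higher-order SVD for a toy convolutional weight tensor; the layer shape and the ranks are assumptions, and a random tensor compresses far less cleanly than trained convolutional weights would.

```python
# Tucker-style factorization of a conv weight tensor via HOSVD: a core
# tensor plus one factor matrix per mode, with M factored params < N.
import numpy as np

rng = np.random.default_rng(1)
C_out, C_in, kH, kW = 16, 8, 3, 3
W = rng.normal(size=(C_out, C_in, kH, kW))   # stand-in conv weights, N params
ranks = (8, 4, 2, 2)                         # illustrative Tucker ranks

def unfold(T, mode):
    """Matricize tensor T along the given mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

# One factor matrix per mode, from the unfolding's leading singular vectors.
factors = [np.linalg.svd(unfold(W, m), full_matrices=False)[0][:, :r]
           for m, r in enumerate(ranks)]

core = W
for mode, U in enumerate(factors):
    core = mode_dot(core, U.T, mode)         # project onto the factor subspaces

approx = core
for mode, U in enumerate(factors):
    approx = mode_dot(approx, U, mode)       # reconstruct the factorized tensor

M_params = core.size + sum(U.size for U in factors)
print(f"N={W.size} -> M={M_params}, M<N: {M_params < W.size}")
print("relative error:", np.linalg.norm(approx - W) / np.linalg.norm(W))
```

The second embodiment, a sum of R tensor products, corresponds instead to a CP-style decomposition; the same parameter-count comparison applies, with M equal to R times the sum of the mode dimensions.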
Abstract: A method of applying non-linear regression to a set of data points to obtain an estimate is described herein. The method includes receiving a set of N datapoints, separating the set of N datapoints into Nb batches, receiving a family of fitting functions, and minimizing a log-cosh cost function for each batch by selecting parameters that minimize the log-cosh cost function. The parameters are obtained by receiving a matrix product state (MPS) model and training the MPS to minimize loss over all the Nb batches, including choosing an MPS with M+D tensors. Each tensor other than the D extra tensors corresponds to one datapoint in each of the Nb batches; the D extra tensors have a physical dimension of size M, corresponding to the number of possible outputs for a given batch; and the coefficients of the tensors in the MPS minimize the log-cosh cost function sequentially over all the Nb batches.
Type:
Application
Filed:
July 5, 2022
Publication date:
December 21, 2023
Applicant:
Multiverse Computing SL
Inventors:
Chia-Wei Hsing, Román Orús, Samuel Mugel, Saeed Jahromi, Serkan Sahin, Samuel Palmer
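The MPS training itself is too involved for a short sketch, but the log-cosh objective and the sequential sweep over the Nb batches are easy to show. In the hedged toy below, a quadratic model stands in for the MPS; the data, learning rate, and model are all assumptions.

```python
# Batched minimization of a log-cosh cost, as in the abstract, with a toy
# quadratic model in place of the MPS. log-cosh behaves like L2 for small
# residuals and like L1 for large ones.
import numpy as np

rng = np.random.default_rng(2)
N, Nb = 200, 4
x = rng.uniform(-1, 1, N)
y = 2.0 * x**2 - 0.5 * x + rng.normal(0, 0.05, N)
batches = np.array_split(np.argsort(x), Nb)     # N datapoints into Nb batches

theta = np.zeros(3)                             # quadratic coefficients [a, b, c]
lr = 0.3
for _ in range(500):
    for idx in batches:                         # sweep the batches sequentially
        X = np.vander(x[idx], 3)                # design matrix [x^2, x, 1]
        resid = X @ theta - y[idx]
        # gradient of mean(log(cosh(r))) w.r.t. theta is X^T tanh(r) / n
        theta -= lr * X.T @ np.tanh(resid) / len(idx)

print("fitted coefficients:", theta)            # expect roughly [2.0, -0.5, 0.0]
```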
Abstract: A method of embedding ordinary differential equations (ODEs) into tensor radial basis networks is presented herein. The method involves receiving a tensored basis function having D dimensions and zeroth-, first-, and second-derivative coefficients A_d, B_d, and C_d; defining A_hat and B_hat as functions of A, B, and D, and C_hat as a function of A, C, and D; defining an orthogonal exotic algebra a, b, c; applying a, b, and c, together with A_hat, B_hat, and C_hat, as coefficients for the zeroth-, first-, and second-derivative terms; and embedding the updated tensored basis function by forming a matrix product state (MPS). The MPS can be trained by initializing its 3-tensors with random coefficients and then sweeping left and right along the MPS, updating the 3-tensors at each sweep.
Type:
Application
Filed:
July 5, 2022
Publication date:
December 21, 2023
Applicant:
Multiverse Computing SL
Inventors:
Samuel Palmer, Raj Patel, Román Orús, Saeed Jahromi, Chia-Wei Hsing, Serkan Sahin
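The exotic algebra and the MPS embedding are not reproduced here, but the underlying move of treating an ODE's zeroth-, first-, and second-derivative terms as coefficients on a radial basis expansion can be sketched with ordinary collocation. Everything below, including the 1-D Gaussian basis and the test problem u'' + u = 0 with u(0)=1, u'(0)=0, is an assumption made for illustration.

```python
# Radial-basis collocation for a linear second-order ODE: expand the
# solution in Gaussian basis functions and solve a least-squares system
# built from the zeroth- and second-derivative terms plus initial data.
import numpy as np

xs = np.linspace(0, np.pi, 40)                  # collocation points
centers = np.linspace(0, np.pi, 15)             # basis centers
gamma = 4.0

def basis(x):
    """Gaussian RBF values and their first and second derivatives."""
    d = x[:, None] - centers[None, :]
    g = np.exp(-gamma * d**2)
    return g, -2 * gamma * d * g, (4 * gamma**2 * d**2 - 2 * gamma) * g

G, _, G2 = basis(xs)
G0, G0p, _ = basis(np.array([0.0]))             # rows enforcing u(0) and u'(0)

rows = np.vstack([G2 + G, G0, G0p])             # ODE residual u'' + u, then ICs
rhs = np.concatenate([np.zeros(len(xs)), [1.0, 0.0]])
coef, *_ = np.linalg.lstsq(rows, rhs, rcond=None)

print("max error vs cos(x):", np.abs(G @ coef - np.cos(xs)).max())
```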
Abstract: Systems and methods for performing tensor contractions are provided. The system includes a processing system and programmable logic in communication with the processing system via a controller. The processing system includes a processing unit and a memory for storing tensors. The programmable logic includes an input data arbitrator for routing a first input tensor and a second input tensor from the controller to a tensor contraction block; the tensor contraction block, which includes a network of arrays of processing elements for performing matrix multiplication operations on the first and second input tensors; and an output data arbitrator for routing an output of the tensor contraction block to the processing system. The network of arrays of processing elements may include N arrays of processing elements, where N corresponds to the rank of the output tensor.
Type:
Application
Filed:
December 28, 2021
Publication date:
June 29, 2023
Applicant:
Multiverse Computing SL
Inventors:
Soydan Eskisan, Samuel Palmer, Samuel Mugel, Román Orús
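At its core, the contraction block described above lowers a general tensor contraction to the matrix multiplications that the processing-element arrays execute. The sketch below shows that standard reshape-transpose-matmul lowering; the index layout and tensor shapes are assumptions, and the arbitration and PE-array details are not modeled.

```python
# Lowering a tensor contraction to a single matrix multiplication, the
# operation the arrays of processing elements would perform in hardware.
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 5, 6))                  # first input tensor  A[i, j, k]
B = rng.normal(size=(6, 5, 7))                  # second input tensor B[k, j, m]

# Contract over shared indices j and k: C[i, m] = sum_{j,k} A[i,j,k] * B[k,j,m]
A_mat = A.transpose(0, 2, 1).reshape(4, 6 * 5)  # rows: i, cols: fused (k, j)
B_mat = B.reshape(6 * 5, 7)                     # rows: fused (k, j), cols: m
C = A_mat @ B_mat                               # the PE arrays' matmul

# Verify against a direct einsum contraction.
assert np.allclose(C, np.einsum("ijk,kjm->im", A, B))
print("output tensor shape:", C.shape)
```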