Patents Assigned to Multiverse Computing SL
-
Publication number: 20250208834
Abstract: The system and method for generating random numbers involves a quantum device, a data input module, a quantum annealing device, a readout module, an error mitigation module, and an output module. The quantum device obtains the random numbers. The data input module enters numerical data corresponding to the magnetic fields to be applied to each quantum bit of the quantum device, a time parameter, and a state count to be sampled by the quantum device. The quantum annealing device implements a quantum evolution with a quantum operator consisting only of magnetic fields. The readout module measures the quantum bits at the end of the evolution. The error mitigation module minimizes the effect of a temperature parameter by fine-tuning the magnetic fields of the annealing. The output module measures the quantum bits after the whole procedure and produces a random string of output bits.
Type: Application
Filed: December 27, 2023
Publication date: June 26, 2025
Applicant: Multiverse Computing SL
Inventor: Román Orús
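The error-mitigation idea (fine-tuning per-qubit fields so the measured bias vanishes despite a finite temperature) can be illustrated classically. This is a minimal sketch assuming a single-qubit thermal (Boltzmann) distribution; the function names and the simulation itself are illustrative, not the patented device:

```python
import numpy as np

def sample_bits(h, T, n_bits, rng):
    # Boltzmann probability of reading |1> for a qubit with energies -h/+h at
    # temperature T: p(1) = 1 / (1 + exp(2h/T)); a positive field biases toward 0
    p_one = 1.0 / (1.0 + np.exp(2.0 * h / T))
    return (rng.random(n_bits) < p_one).astype(int)

# Error mitigation, schematically: fine-tune h toward 0 so p(1) -> 1/2 and
# the output string is unbiased even at nonzero temperature T
rng = np.random.default_rng(42)
bits = sample_bits(h=0.0, T=1.0, n_bits=16, rng=rng)
```

In this toy model the residual bias shrinks as the tuned field approaches zero, which is the role the abstract assigns to the error mitigation module.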
-
Publication number: 20250086144
Abstract: A system and method for data compression using quantum computing are provided. The system receives an initial set of assets and corresponding asset weights. The asset weights are encoded using binary asset holding variables. Cardinality constraints are generated for the asset weights. The cardinality constraints are encoded into qubits. An optimization objective function is minimized using the qubits encoding the cardinality constraints. A subset of assets that replicates the behavior of the initial set of assets is obtained based on the minimized optimization objective function.
Type: Application
Filed: October 20, 2023
Publication date: March 13, 2025
Applicant: Multiverse Computing SL
Inventors: Román Orús, Asier Rodriguez, Samuel Palmer, Samuel Mugel
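The optimization can be pictured as an objective over binary asset-holding variables with a quadratic cardinality penalty. A classical stand-in, not the patent's qubit encoding; the function name and the equal-weighting choice are assumptions:

```python
import numpy as np

def compression_objective(x, returns, target, penalty, K):
    # x: binary holding vector (1 = asset kept in the compressed subset)
    # tracking error of the equal-weighted subset against the full-index target
    w = x / max(x.sum(), 1)
    tracking = np.mean((returns @ w - target) ** 2)
    # quadratic penalty encoding the cardinality constraint sum(x) == K
    return tracking + penalty * (x.sum() - K) ** 2
```

A quantum annealer would minimize such an objective over all 2^n binary vectors at once; here the function is only evaluated for a given candidate subset.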
-
Publication number: 20240160899
Abstract: A system and method for improving a convolutional neural network (CNN) are described herein. The system includes a processor receiving a weight tensor having N parameters, the weight tensor corresponding to a convolutional layer of the CNN. The processor factorizes the weight tensor to obtain a corresponding factorized weight tensor, the factorized weight tensor having M parameters, where M<N. The processor supplies the factorized weight tensor to a classification layer of the CNN, thereby generating an improved CNN. In an embodiment, the processor (a) determines a rank of the weight tensor and (b) decomposes the weight tensor into a core tensor and a number R of factor matrices, where R corresponds to the rank of the weight tensor. In another embodiment, the processor (a) determines a decomposition rank R and (b) factorizes the weight tensor as a sum of a number R of tensor products.
Type: Application
Filed: December 15, 2022
Publication date: May 16, 2024
Applicant: Multiverse Computing SL
Inventors: Saeed Jahromi, Román Orús
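The second embodiment, factorizing the weight tensor as a sum of R tensor products, is a CP-style decomposition, and the parameter saving M&lt;N is easy to see. A sketch with illustrative layer sizes (helper names are assumptions):

```python
import numpy as np

def cp_param_count(shape, R):
    # a rank-R CP factorization stores R vectors per tensor mode
    return R * sum(shape)

def cp_reconstruct(factors):
    # rebuild the tensor as a sum of R outer products (one per CP component)
    R = factors[0].shape[1]
    out = 0
    for r in range(R):
        term = factors[0][:, r]
        for f in factors[1:]:
            term = np.multiply.outer(term, f[:, r])
        out = out + term
    return out

shape = (64, 32, 3, 3)          # (out_ch, in_ch, kH, kW) -- illustrative sizes
N = int(np.prod(shape))         # parameters in the dense weight tensor
M = cp_param_count(shape, R=8)  # parameters after a rank-8 factorization, M < N
```

For these sizes the dense layer holds 18432 parameters while the rank-8 factorization holds 816, a roughly 22x reduction.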
-
Publication number: 20230409961
Abstract: A method of applying non-linear regression on a set of data points to obtain an estimate is described herein. The method includes receiving a set of N datapoints, separating the set of N datapoints into Nb batches, receiving a family of fitting functions, and minimizing a log-cosh cost function for each batch by selecting parameters that minimize the log-cosh cost function. The parameters are obtained by receiving a matrix product state (MPS) model and training the MPS to minimize loss over all the Nb batches, including choosing an MPS with M+D tensors. All tensors except the D extra ones correspond to one datapoint in each of the Nb batches; the D extra tensors in the MPS have a physical dimension of size M, corresponding to the number of possible outputs for a given batch; and the coefficients of the tensors in the MPS minimize the log-cosh cost function sequentially over all the Nb batches.
Type: Application
Filed: July 5, 2022
Publication date: December 21, 2023
Applicant: Multiverse Computing SL
Inventors: Chia-Wei Hsing, Román Orús, Samuel Mugel, Saeed Jahromi, Serkan Sahin, Samuel Palmer
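The log-cosh cost at the heart of the method is a standard robust regression loss. A minimal sketch; only the batching helper below is an assumption about how the Nb per-batch costs are combined:

```python
import numpy as np

def log_cosh_loss(pred, y):
    # smooth, outlier-robust loss: ~ r**2 / 2 for small residuals r, ~ |r| for large
    return np.mean(np.log(np.cosh(pred - y)))

def batched_loss(preds, ys, n_batches):
    # split the N datapoints into Nb batches and average the per-batch cost
    return np.mean([log_cosh_loss(p, t)
                    for p, t in zip(np.array_split(preds, n_batches),
                                    np.array_split(ys, n_batches))])
```

Unlike mean squared error, the gradient of log-cosh saturates for large residuals, so a few outlying datapoints cannot dominate the sequential sweeps over the batches.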
-
Publication number: 20230409665
Abstract: A method of embedding ordinary differential equations (ODEs) into tensor radial basis networks is presented herein. The method involves receiving a tensored basis function having D dimensions and zeroth-, first-, and second-derivative coefficients A_d, B_d, and C_d; defining A_hat as a function of A and D, B_hat as a function of A, B, and D, and C_hat as a function of A, C, and D, respectively; defining an orthogonal exotic algebra a, b, c; applying a, b, and c, along with A_hat, B_hat, and C_hat, as coefficients for the zeroth-derivative, first-derivative, and second-derivative terms; and embedding the updated tensored basis function by forming a matrix product state (MPS). The MPS can be trained by initializing MPS 3-tensors with random coefficients and sweeping left and right along the MPS and updating the MPS 3-tensors.
Type: Application
Filed: July 5, 2022
Publication date: December 21, 2023
Applicant: Multiverse Computing SL
Inventors: Samuel Palmer, Raj Patel, Román Orús, Saeed Jahromi, Chia-Wei Hsing, Serkan Sahin
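What makes the embedding workable is that the zeroth-, first-, and second-derivative terms are available in closed form for a radial basis function, so an ODE becomes an algebraic expression in the basis coefficients. A one-dimensional sketch; the Gaussian choice and all names are assumptions, and the patent combines such terms across D dimensions inside an MPS:

```python
import numpy as np

def gaussian_rbf(x, c, s):
    # basis function phi and its first two derivatives, in closed form
    phi = np.exp(-(x - c) ** 2 / (2 * s ** 2))
    dphi = -(x - c) / s ** 2 * phi
    d2phi = ((x - c) ** 2 / s ** 4 - 1.0 / s ** 2) * phi
    return phi, dphi, d2phi

def ode_term(x, c, s, A, B, C):
    # A*u + B*u' + C*u'' evaluated on a single basis mode; an ODE solution
    # is sought by driving the summed residual of such terms to zero
    phi, dphi, d2phi = gaussian_rbf(x, c, s)
    return A * phi + B * dphi + C * d2phi
```

Because the derivatives multiply the same basis value phi, the coefficients A, B, C can be absorbed into the tensor coefficients that the left-right MPS sweeps update.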
-
Publication number: 20230205838
Abstract: Systems and methods for performing tensor contractions are provided. The system includes a processing system and a programmable logic in communication with the processing system via a controller. The processing system includes a processing unit and a memory for storing tensors. The programmable logic includes an input data arbitrator for routing a first input tensor and a second input tensor from the controller to a tensor contraction block; the tensor contraction block, which includes a network of arrays of processing elements for performing matrix multiplication operations on the first and second input tensors; and an output data arbitrator for routing an output of the tensor contraction block to the processing system. The network of arrays of processing elements may include N arrays of processing elements, where N corresponds to the rank of the output tensor.
Type: Application
Filed: December 28, 2021
Publication date: June 29, 2023
Applicant: Multiverse Computing SL
Inventors: Soydan Eskisan, Samuel Palmer, Samuel Mugel, Román Orús
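The reason a tensor contraction block can be built from matrix-multiplication arrays is that any pairwise tensor contraction reduces to a single matrix product after permuting and reshaping the operands. A software sketch of that reduction (the function name is assumed; in the patent's hardware the product is routed through the arrays of processing elements):

```python
import numpy as np

def contract(a, b, axes_a, axes_b):
    # reduce a general tensor contraction to one matrix multiplication:
    # move contracted axes to the back of `a` and the front of `b`, then reshape
    free_a = [i for i in range(a.ndim) if i not in axes_a]
    free_b = [i for i in range(b.ndim) if i not in axes_b]
    k = int(np.prod([a.shape[i] for i in axes_a]))
    A = a.transpose(free_a + list(axes_a)).reshape(-1, k)
    B = b.transpose(list(axes_b) + free_b).reshape(k, -1)
    out_shape = [a.shape[i] for i in free_a] + [b.shape[i] for i in free_b]
    return (A @ B).reshape(out_shape)
```

The free axes of the two operands become the axes of the output tensor, which is why the abstract ties the number N of processing-element arrays to the rank of the output.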