Patents by Inventor Brian Leo Quanz

Brian Leo Quanz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250139486
    Abstract: Techniques are described herein regarding utilizing a quantum transformation to generate a transformed dataset from an original dataset and a quantum feature map. For example, one or more embodiments described herein can comprise a system, which can comprise a memory that can store computer executable components. The system can also comprise a processor, operably coupled to the memory, and that can execute the computer executable components stored in the memory. The computer executable components can include operations to transform a qubit of a quantum feature map from an initial state to a transformed state based on an input value of a first classical dataset. The computer executable components can further include operations that generate a second classical dataset based on the input value and the transformed state.
    Type: Application
    Filed: October 30, 2023
    Publication date: May 1, 2025
    Inventors: Brian Leo Quanz, Noriaki Shimada, Shungo Miyabe, Jae-Eun Park, Das Pemmaraju, Chee-Kong Lee, Takahiro Yamamoto
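The transformation described above can be illustrated with a minimal numerical sketch. The patent does not specify the gate, so a single-qubit RY rotation is assumed here as the quantum feature map, and the probability of measuring |1⟩ is used (illustratively) as the transformed value paired with each input:

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def transform_dataset(xs):
    """Map each classical input x to (x, P(|1>)) after rotating |0> by RY(x)."""
    rows = []
    for x in xs:
        state = ry(x) @ np.array([1.0, 0.0])  # qubit starts in the initial state |0>
        p1 = abs(state[1]) ** 2               # probability of measuring |1>
        rows.append((x, p1))
    return rows

transformed = transform_dataset([0.0, np.pi / 2, np.pi])
```

Each input value drives the qubit from its initial state to a transformed state, and the second classical dataset is derived from both the input and that transformed state.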
  • Publication number: 20250139494
    Abstract: A computer-implemented method for forecasting a future value of one or more elements of a time-series of data includes obtaining a time-series of data, obtaining a loss function library (LFL) having a plurality of selected loss functions, obtaining at least one Business Specification Rule (BSR), each BSR including a Context, a Metric and a Priority, for each selected loss function, generating input-associated perturbed outputs based on the BSRs and the time-series of data by training a deep learning artificial intelligence (DLAI) model to learn a set of learned weights to be given to each of the selected loss functions, deriving a custom composite loss function based on the sets of learned weights for the plurality of selected loss functions in the LFL, and using the custom composite loss function to train a final DLAI model on the time-series of data. The final DLAI model may then be used to forecast future outcomes.
    Type: Application
    Filed: November 1, 2023
    Publication date: May 1, 2025
    Inventors: Sumanta Mukherjee, Arindam Jati, Vijay Ekambaram, Brian Leo Quanz
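The composite-loss idea can be sketched in a few lines. The library contents, the softmax normalisation of the learned weights, and the loss names below are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def softmax(w):
    e = np.exp(w - np.max(w))
    return e / e.sum()

# A hypothetical loss function library (LFL) of selected losses.
loss_library = {
    "mse": lambda y, yhat: np.mean((y - yhat) ** 2),
    "mae": lambda y, yhat: np.mean(np.abs(y - yhat)),
    "over_forecast": lambda y, yhat: np.mean(np.maximum(yhat - y, 0.0)),
}

def composite_loss(y, yhat, learned_weights):
    """Weighted sum of library losses; weights normalised with softmax."""
    w = softmax(learned_weights)
    return sum(wi * fn(y, yhat) for wi, fn in zip(w, loss_library.values()))

y = np.array([1.0, 2.0, 3.0])
yhat = np.array([1.5, 2.0, 2.5])
val = composite_loss(y, yhat, np.array([0.0, 0.0, 0.0]))  # equal weights
```

In the described method the weight vector itself would be learned by the DLAI model; here it is passed in directly to keep the sketch self-contained.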
  • Publication number: 20240428108
    Abstract: Systems and methods for quantum machine learning are described. A plurality of qubits can be entangled to create a cluster state. The plurality of qubits can include at least an input qubit, an output qubit, and at least one ancilla qubit. The input qubit can represent data among a training data set of a machine learning model represented by a unitary operation. Sequential local measurements of the cluster state can be performed to generate a plurality of measurement outcomes. At least one of the plurality of qubits can be rotated according to the plurality of measurement outcomes and rotation parameters of the unitary operation. The sequential local measurements and rotation of the plurality of qubits can transform an input state of the input qubit into an output state of the output qubit. The machine learning model can be trained based on the output state of the output qubit.
    Type: Application
    Filed: June 26, 2023
    Publication date: December 26, 2024
    Inventors: Chee-Kong Lee, Jae-Eun Park, Brian Leo Quanz, Vaibhaw Kumar
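The entangle-measure-correct pattern can be sketched with plain linear algebra on a two-qubit cluster state. The basis convention and byproduct-correction rule below follow the standard single-step measurement-based computation (which implements H·Rz(θ) on the input qubit) and are an assumption, not details from the patent:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
CZ = np.diag([1, 1, 1, -1]).astype(complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)    # ancilla qubit |+>

def rz(theta):
    return np.diag([1.0, np.exp(-1j * theta)])

def mbqc_step(psi, theta, outcome):
    """One measurement step on a two-qubit cluster: entangle, measure, correct."""
    cluster = CZ @ np.kron(psi, plus)                   # entangle input with ancilla
    sign = 1 if outcome == 0 else -1
    basis = np.array([1, sign * np.exp(1j * theta)]) / np.sqrt(2)
    out = np.kron(basis.conj(), np.eye(2)) @ cluster    # project qubit 1
    out = out / np.linalg.norm(out)                     # renormalise
    return np.linalg.matrix_power(X, outcome) @ out     # byproduct correction

theta = 0.7
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)      # input qubit state
expected = H @ rz(theta) @ psi                          # gate the step implements
out0 = mbqc_step(psi, theta, outcome=0)
out1 = mbqc_step(psi, theta, outcome=1)
```

Either measurement outcome yields the same output state on the output qubit once the outcome-dependent rotation (here an X correction) is applied, which is what makes the scheme deterministic despite random measurement results.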
  • Patent number: 12165057
    Abstract: A machine learning system that uses a split net configuration to incorporate arbitrary constraints receives a set of input data and a set of functional constraints. The machine learning system jointly optimizes a deep learning model by using the set of input data and a wide learning model by using the set of constraints. The deep learning model includes an input layer, an output layer, and an intermediate layer between the input layer and the output layer. The wide learning model includes an input layer and an output layer but no intermediate layer. The machine learning system provides a machine learning model comprising the optimized deep learning model and the optimized wide learning model.
    Type: Grant
    Filed: December 28, 2020
    Date of Patent: December 10, 2024
    Assignee: International Business Machines Corporation
    Inventors: Pavithra Harsha, Brian Leo Quanz, Shivaram Subramanian, Wei Sun, Max Biggs
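The split net structure can be sketched as a forward pass. The layer sizes, ReLU activation, and additive combination of the two branches are illustrative assumptions; the patent specifies only that the deep branch has an intermediate layer and the wide branch does not:

```python
import numpy as np

rng = np.random.default_rng(0)

# Deep branch: input -> hidden -> output (has an intermediate layer).
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

# Wide branch: constraint features -> output directly (no intermediate layer).
Ww, bw = rng.normal(size=(1, 2)), np.zeros(1)

def deep(x):
    h = np.maximum(W1 @ x + b1, 0.0)   # ReLU intermediate layer
    return W2 @ h + b2

def wide(c):
    return Ww @ c + bw

def split_net(x, c):
    """Combined model: deep branch on input data plus wide branch on constraints."""
    return deep(x) + wide(c)

y = split_net(np.ones(3), np.ones(2))
```

Joint optimization would update both branches' weights against a shared loss; only the forward structure is shown here.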
  • Publication number: 20240211801
    Abstract: Mechanisms are provided for automatic identification of a reconciliation computer tool for producing coherent reconciled data from base data generated by a computer model. A machine learning training operation is executed on one or more performance prediction computer model(s) (PPCMs) based on first input features of at least one hierarchical dataset, and second input features of a plurality of different reconciliation computer tools. The PPCM(s) generate a prediction of performance of a corresponding reconciliation computer tool based on the first and second input features. Features are extracted from a runtime hierarchical dataset and input into the trained PPCM(s) which generate predictions of performance of a plurality of reconciliation computer tools based on the extracted features of the runtime hierarchical dataset. The reconciliation computer tools are ranked relative to one another based on the predictions of performance.
    Type: Application
    Filed: December 27, 2022
    Publication date: June 27, 2024
    Inventors: Anna Yanchenko, Wesley M. Gifford, Brian Leo Quanz, Nam H. Nguyen, Pavithra Harsha
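The final ranking step is straightforward to sketch. The tool names and scores below are hypothetical placeholders for the trained performance prediction computer models' outputs:

```python
def rank_tools(predicted_performance):
    """Order reconciliation computer tools by predicted performance, best first."""
    return sorted(predicted_performance, key=predicted_performance.get, reverse=True)

# Hypothetical PPCM predictions for three candidate reconciliation tools.
preds = {"bottom_up": 0.72, "top_down": 0.65, "min_trace": 0.81}
ranking = rank_tools(preds)
```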
  • Publication number: 20240211835
    Abstract: Mechanisms are provided for performing automated and dynamic reconciliation of forecasts for hierarchical datasets. A machine learning training is executed on a dynamic reconciliation computer model engine to train the dynamic reconciliation computer model engine, based on historical data and forecast data, to learn an association of reconciliation computer models with structural changes in a hierarchical dataset. Runtime forecast data is generated based on a runtime hierarchical dataset, and the trained dynamic reconciliation computer model engine is executed on the runtime forecast data to reconcile the runtime forecast data across a hierarchy of the runtime forecast data. The trained dynamic reconciliation computer model applies different reconciliation computer models to the runtime forecast data based on structural changes in the runtime forecast data.
    Type: Application
    Filed: December 27, 2022
    Publication date: June 27, 2024
    Inventors: Anna Yanchenko, Wesley M. Gifford, Brian Leo Quanz, Nam H. Nguyen, Pavithra Harsha
  • Publication number: 20240135312
    Abstract: Mechanisms are provided for generating a resource allocation in an omnichannel distribution network. Demand forecast data and current inventory data related to a resource and the omnichannel distribution network are obtained and an ally-adversary bimodal inventory optimization (BIO) computer model is instantiated that includes an adversary component that simulates, through a computer simulation, a worst-case scenario of resource demand and resource availability, and an ally component that limits the adversary component based on a simulation of a limited best-case scenario of resource demand and resource availability. The BIO computer model is applied to the demand forecast data and current inventory data, to generate a predicted consumption for the resource. A resource allocation recommendation is generated for allocating the resource to locations of the omnichannel distribution network based on the predicted consumption, which is output to a downstream computing system for further processing.
    Type: Application
    Filed: October 13, 2022
    Publication date: April 25, 2024
    Inventors: Shivaram Subramanian, Pavithra Harsha, Ali Koc, Brian Leo Quanz, Mahesh Ramakrishna, Dhruv Shah
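The ally-adversary interplay can be reduced to a toy calculation: the adversary pushes demand toward a worst case, the ally caps how extreme that scenario may be, and predicted consumption is bounded by availability. The multiplicative-spread parameterisation is an illustrative assumption, not the patent's formulation:

```python
def predicted_consumption(forecast, inventory, adversary_spread, ally_cap):
    """Adversary widens demand toward a worst case; ally limits how far it may go."""
    spread = min(adversary_spread, ally_cap)   # ally component bounds the adversary
    worst_demand = forecast * (1.0 + spread)   # adversarial high-demand scenario
    return min(worst_demand, inventory)        # cannot consume more than is on hand

# Forecast 100 units with 120 in stock; adversary wants +50% but ally caps at +10%.
consumption = predicted_consumption(100.0, 120.0, 0.5, 0.1)
```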
  • Publication number: 20240104396
    Abstract: An example operation may include one or more of storing a hierarchical data set, receiving a plurality of predicted outputs from a plurality of nodes in a distributed computing environment, respectively, wherein each predicted output is generated by a different node via execution of a time-series forecasting model on a different subset of lowest level data in the hierarchical data set, combining the plurality of predicted outputs via bottom-up aggregation to generate one or more additional predicted outputs for the time-series forecasting model based on one or more levels above the lowest level in the hierarchical time-series data set, determining error values for the time-series forecasting model at each level of the hierarchical data set based on the received and the one or more additional generated predicted outputs, and modifying a parameter of the time-series forecasting model based on the determined error values.
    Type: Application
    Filed: September 27, 2022
    Publication date: March 28, 2024
    Inventors: Arindam Jati, Vijay Ekambaram, Sumanta Mukherjee, Brian Leo Quanz, Pavithra Harsha
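The bottom-up aggregation and per-level error computation can be sketched with a tiny two-level hierarchy. The grouping and the use of mean squared error are illustrative assumptions:

```python
import numpy as np

# Lowest-level forecasts from four distributed nodes (two time steps each)...
bottom_pred = np.array([[10., 12.], [5., 6.], [8., 7.], [4., 5.]])
bottom_true = np.array([[11., 12.], [5., 5.], [9., 7.], [4., 6.]])

# ...aggregated bottom-up into two mid-level groups and one top-level total.
groups = [(0, 1), (2, 3)]
mid_pred = np.stack([bottom_pred[list(g)].sum(axis=0) for g in groups])
mid_true = np.stack([bottom_true[list(g)].sum(axis=0) for g in groups])
top_pred, top_true = bottom_pred.sum(axis=0), bottom_true.sum(axis=0)

def mse(a, b):
    return float(np.mean((a - b) ** 2))

# Error values at each level of the hierarchy, used to modify model parameters.
errors = {"bottom": mse(bottom_pred, bottom_true),
          "mid": mse(mid_pred, mid_true),
          "top": mse(top_pred, top_true)}
```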
  • Publication number: 20240045926
    Abstract: An example operation may include one or more of storing a hierarchical time-series data set in memory, initially training a first time-series forecasting model based on a lower level of time-series data in the hierarchical data set, training a second time-series teaching forecasting model based on an upper level of time-series data from the hierarchical data set which includes an additional level of aggregation with respect to the lower level of time-series data, optimizing one or more parameters of the initially trained first time-series forecasting model based on predicted outputs from the trained second time-series forecasting model in comparison to predicted outputs from the initially trained first time-series forecasting model, and storing the modified first time-series forecasting model in the memory.
    Type: Application
    Filed: August 2, 2022
    Publication date: February 8, 2024
    Inventors: Arindam Jati, Vijay Ekambaram, Sumanta Mukherjee, Brian Leo Quanz, Wesley M. Gifford, Pavithra Harsha
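The teacher-student optimization can be sketched as a blended loss: the lower-level model is pulled toward the ground truth and toward agreement with the upper-level teacher's forecast after aggregation. The blend weight `alpha` and the MSE terms are illustrative assumptions:

```python
import numpy as np

def aggregate(bottom):
    """Sum lower-level forecasts up to the teacher's level of aggregation."""
    return bottom.sum(axis=0)

def student_loss(student_pred, y_true, teacher_pred, alpha=0.5):
    """Blend of ground-truth fit and disagreement with the upper-level teacher."""
    fit = np.mean((student_pred - y_true) ** 2)
    consistency = np.mean((aggregate(student_pred) - teacher_pred) ** 2)
    return alpha * fit + (1 - alpha) * consistency

student = np.array([[10., 12.], [5., 6.]])   # lower-level forecasts
truth = np.array([[11., 12.], [5., 5.]])
teacher = np.array([16., 17.])               # upper-level teacher forecast
loss = student_loss(student, truth, teacher)
```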
  • Publication number: 20230325469
    Abstract: One or more systems, devices, computer program products and/or computer-implemented methods of use provided herein relate to classifying the accuracy of an analytical model, such as a neural network. A system can comprise a memory that stores computer executable components, and a processor that executes the computer executable components stored in the memory, wherein the computer executable components can comprise an accessing component that accesses an analytical model, a deviation component that generates combined results of the analytical model in response to a set of inputs that vary in degree of perturbation of a set of test data, and an analysis component that compares a range of the combined results to a range of ideal results.
    Type: Application
    Filed: April 7, 2022
    Publication date: October 12, 2023
    Inventors: Yair Zvi Schiff, Brian Leo Quanz, Payel Das, Pin-Yu Chen
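The perturbation-based evaluation can be sketched directly: run the model on inputs perturbed at increasing degrees, collect the combined results, and compare their range against the unperturbed (ideal) result. The stand-in model and Gaussian perturbations are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    """Stand-in analytical model (the real system would load a neural network)."""
    return float(np.tanh(x).sum())

def result_range(test_data, degrees, n_samples=200):
    """Range of combined outputs as inputs are perturbed at increasing degrees."""
    outputs = []
    for eps in degrees:
        for _ in range(n_samples):
            outputs.append(model(test_data + eps * rng.normal(size=test_data.shape)))
    return min(outputs), max(outputs)

ideal = model(np.zeros(4))                        # unperturbed reference result
lo, hi = result_range(np.zeros(4), [0.0, 0.1, 0.5])
```

A model whose result range stays narrow as the perturbation degree grows would be classified as more robustly accurate than one whose range balloons.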
  • Publication number: 20230214764
    Abstract: A processor may estimate uncensored demand from historical supply chain data. The processor may ingest historical data. The processor may convert the historical data to a dataset of multiple time series, corresponding to sales for different products, locations, and channels across multiple time points, that is usable by an uncensored demand estimation machine learning model. The processor may train the uncensored demand estimation machine learning model by applying optimization solver techniques for deep learning.
    Type: Application
    Filed: December 31, 2021
    Publication date: July 6, 2023
    Inventors: Brian Leo Quanz, Pavithra Harsha, Dhruv Shah, Mahesh Ramakrishna, Ali Koc
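The conversion step can be sketched as a simple pivot from raw transaction records into one time series per (product, location, channel) key. The record layout below is an illustrative assumption:

```python
from collections import defaultdict

# Raw historical records: (date, product, location, channel, units_sold).
records = [
    ("2021-01-01", "shirt", "nyc", "web", 5),
    ("2021-01-02", "shirt", "nyc", "web", 7),
    ("2021-01-01", "shirt", "nyc", "store", 3),
    ("2021-01-02", "shirt", "nyc", "store", 0),
]

def to_time_series(rows):
    """Group sales into one time series per (product, location, channel) key."""
    series = defaultdict(list)
    for date, product, location, channel, units in sorted(rows):
        series[(product, location, channel)].append((date, units))
    return dict(series)

dataset = to_time_series(records)
```

The resulting per-key series (note the zero-sales point, which may reflect a stock-out rather than zero demand) is what the uncensored demand estimation model would consume.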
  • Publication number: 20230196278
    Abstract: A processor in an omnichannel environment, over a specific network with transaction level operations, may receive one or more input configurations. The processor may identify, based on the one or more input configurations, one or more articles. The processor may identify one or more key performance indicators (KPIs) associated with the one or more articles. The processor may compute, based on an uncensored demand trajectory, an impact on the KPIs over a specified period in the omnichannel environment. The processor may provide the impact to a user.
    Type: Application
    Filed: December 16, 2021
    Publication date: June 22, 2023
    Inventors: Pavithra Harsha, Brian Leo Quanz, Ali Koc, Dhruv Shah, Shivaram Subramanian, Ajay Ashok Deshpande, Chandrasekhar Narayanaswami
  • Publication number: 20230186371
    Abstract: In an approach to improve order management by performing sustainable order fulfillment optimization through computer analysis, embodiments receive an order from a user through the order management system. Further, embodiments estimate the carbon emissions and economic costs of fulfilling the order from a plurality of nodes and output an optimal sustainable order fulfillment. Additionally, responsive to receiving confirmation to implement the output optimal sustainable order fulfillment, embodiments place the optimal sustainable order fulfillment for the received order.
    Type: Application
    Filed: December 14, 2021
    Publication date: June 15, 2023
    Inventors: Kedar Kulkarni, Reginald Eugene Bryant, Isaac Waweru Wambugu, Pavithra Harsha, Brian Leo Quanz, Chandrasekhar Narayanaswami
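The core trade-off can be sketched as scoring each candidate fulfillment node on both estimated economic cost and estimated carbon emissions. The linear blend and the node data below are illustrative assumptions:

```python
def choose_fulfillment_node(nodes, carbon_weight=0.5):
    """Pick the node minimising a blend of economic cost and carbon emissions."""
    def score(node):
        return (1 - carbon_weight) * node["cost"] + carbon_weight * node["co2_kg"]
    return min(nodes, key=score)

# Hypothetical estimates for fulfilling one order from each candidate node.
nodes = [
    {"name": "warehouse_a", "cost": 8.0, "co2_kg": 12.0},
    {"name": "store_b", "cost": 10.0, "co2_kg": 4.0},
    {"name": "warehouse_c", "cost": 6.0, "co2_kg": 20.0},
]
best = choose_fulfillment_node(nodes)
```

Shifting `carbon_weight` toward 0 recovers a purely cost-driven fulfillment choice; toward 1, a purely emissions-driven one.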
  • Publication number: 20230186331
    Abstract: In an aspect, input data can be received, including at least time series data associated with purchases of at least one product and causal influencer data associated with the purchases. The causal influencer data can include at least non-stationary data, where lost shares associated with said at least one product are unobserved. An artificial neural network can be trained based on the received input data to predict a future global demand associated with at least one product and individual market shares associated with at least one product. The artificial neural network can include at least a first temporal network to predict the global demand and a second temporal network to predict each of the individual market shares. The first temporal network and the second temporal network can be trained simultaneously.
    Type: Application
    Filed: December 13, 2021
    Publication date: June 15, 2023
    Inventors: Shivaram Subramanian, Brian Leo Quanz, Pavithra Harsha, Ajay Ashok Deshpande, Markus Ettl
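The two-head structure can be sketched by combining the outputs of the two temporal networks: one scalar global-demand prediction and one vector of market shares. The softmax share parameterisation (which keeps shares non-negative and summing to one) is an illustrative assumption:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def joint_forecast(global_demand, share_logits):
    """Combine a global-demand head with a market-share head."""
    shares = softmax(share_logits)     # individual shares sum to 1 across products
    return global_demand * shares      # per-product demand forecast

# Hypothetical head outputs: 1000 units of global demand, three equal-share products.
per_product = joint_forecast(1000.0, np.array([1.0, 1.0, 1.0]))
```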
  • Publication number: 20230041035
    Abstract: A computer implemented method of improving parameters of a critic approximator module includes receiving, by a mixed integer program (MIP) actor, (i) a current state and (ii) a predicted performance of an environment from the critic approximator module. The MIP actor solves a mixed integer mathematical problem based on the received current state and the predicted performance of the environment. The MIP actor selects an action and applies it to the environment based on the solved mixed integer mathematical problem. A long-term reward is determined and compared to the predicted performance of the environment by the critic approximator module. The parameters of the critic approximator module are iteratively updated based on an error between the determined long-term reward and the predicted performance.
    Type: Application
    Filed: May 23, 2022
    Publication date: February 9, 2023
    Inventors: Pavithra Harsha, Ashish Jagmohan, Brian Leo Quanz, Divya Singhvi
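The iterative critic update can be sketched on a toy problem. A real MIP actor would call an integer-programming solver; here exhaustive search over a small discrete action set stands in for it, and a tabular critic stands in for the approximator (both are illustrative assumptions):

```python
import numpy as np

n_states, n_actions = 4, 3
critic = np.zeros((n_states, n_actions))      # predicted long-term reward

def actor(state):
    """Stand-in for the MIP actor: exhaustive search over a small action set."""
    return int(np.argmax(critic[state]))

def reward(state, action):
    """Toy environment: the determined long-term reward for (state, action)."""
    return float((state + action) % 3)

# Iteratively shrink the error between the determined reward and the prediction.
lr = 0.5
for _ in range(20):
    for s in range(n_states):
        a = actor(s)
        r = reward(s, a)
        critic[s, a] += lr * (r - critic[s, a])   # update on the prediction error
```

Each update moves the critic's prediction toward the observed reward, so the error between the two shrinks geometrically for the (state, action) pairs the actor keeps selecting.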
  • Patent number: 11568267
    Abstract: Embodiments relate to a system, program product, and method for inducing creativity in an artificial neural network (ANN) having an encoder and decoder. Neurons are automatically selected and manipulated from one or more layers of the encoder. An encoded vector is sampled for an encoded image. Decoder neurons and a corresponding activation pattern are evaluated with respect to the encoded image. The decoder neurons that correspond to the activation pattern are selected, and an activation setting of the selected decoder neurons is changed. One or more novel data instances are automatically generated from an original latent space of the selectively changed decoder neurons.
    Type: Grant
    Filed: March 12, 2020
    Date of Patent: January 31, 2023
    Assignee: International Business Machines Corporation
    Inventors: Payel Das, Brian Leo Quanz, Pin-Yu Chen, Jae-Wook Ahn
  • Publication number: 20220207412
    Abstract: A machine learning system that incorporates arbitrary constraints is provided. The machine learning system selects a set of domain-specific constraints from a plurality of sets of domain-specific constraints. The machine learning system selects a set of general functional relationships from a plurality of sets of general functional relationships. The machine learning system maps the selected set of general functional relationships and the selected set of domain-specific constraints to a set of learning transforms. The machine learning system modifies a machine learning specification according to the set of learning transforms, wherein the machine learning specification specifies a model construction, a model setup, and a training objective function. The machine learning system optimizes a machine learning model according to the modified machine learning specification.
    Type: Application
    Filed: December 28, 2020
    Publication date: June 30, 2022
    Inventors: Pavithra Harsha, Brian Leo Quanz, Shivaram Subramanian, Wei Sun, Max Biggs
  • Publication number: 20220207413
    Abstract: A machine learning system that incorporates arbitrary constraints into a deep learning model is provided. The machine learning system provides a set of penalty data points based on a set of arbitrary constraints in addition to a set of original training data points. The machine learning system assigns a penalty to each penalty data point in the set of penalty data points. The machine learning system optimizes a machine learning model by solving an objective function based on an original loss function and a penalty loss function. The original loss function is evaluated over a set of original training data points and the penalty loss function is evaluated over the set of penalty data points. The machine learning system provides the optimized machine learning model based on a solution of the objective function.
    Type: Application
    Filed: December 28, 2020
    Publication date: June 30, 2022
    Inventors: Pavithra Harsha, Brian Leo Quanz, Shivaram Subramanian, Wei Sun, Max Biggs
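The two-term objective can be sketched on a toy price-demand example: the original loss is evaluated on training points, and a penalty loss is evaluated on extra penalty points where a constraint (here, assumed to be monotonicity: demand must not increase with price) is checked. The linear model, constraint choice, and finite-difference check are all illustrative assumptions:

```python
import numpy as np

X = np.array([1.0, 2.0, 3.0])                 # original training inputs (price)
y = np.array([9.0, 7.0, 4.0])                 # observed demand
penalty_pts = np.array([0.5, 1.5, 2.5, 3.5])  # where the constraint is enforced

def model(theta, x):
    w, b = theta
    return w * x + b

def objective(theta, lam=10.0, delta=0.1):
    """Original loss over training data plus penalty loss over penalty points."""
    fit = np.mean((model(theta, X) - y) ** 2)
    # Monotonicity check: demand predictions must not rise as price rises.
    rise = model(theta, penalty_pts + delta) - model(theta, penalty_pts)
    penalty = np.mean(np.maximum(rise, 0.0) ** 2)
    return fit + lam * penalty

good = objective(np.array([-2.5, 11.0]))   # downward-sloping: no penalty incurred
bad = objective(np.array([2.5, 2.0]))      # upward-sloping: constraint penalised
```

An optimizer minimising this objective is steered toward models that both fit the training data and satisfy the constraint at the penalty points.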
  • Publication number: 20220207347
    Abstract: A machine learning system that uses a split net configuration to incorporate arbitrary constraints receives a set of input data and a set of functional constraints. The machine learning system jointly optimizes a deep learning model by using the set of input data and a wide learning model by using the set of constraints. The deep learning model includes an input layer, an output layer, and an intermediate layer between the input layer and the output layer. The wide learning model includes an input layer and an output layer but no intermediate layer. The machine learning system provides a machine learning model comprising the optimized deep learning model and the optimized wide learning model.
    Type: Application
    Filed: December 28, 2020
    Publication date: June 30, 2022
    Inventors: Pavithra Harsha, Brian Leo Quanz, Shivaram Subramanian, Wei Sun, Max Biggs
  • Publication number: 20220147669
    Abstract: In various embodiments, a computing device, a non-transitory storage medium, and a computer implemented method improve the computational efficiency of a computing platform in processing time series data. The method includes receiving the time series data and grouping it into a hierarchy of partitions of related time series. The hierarchy has different partition levels. A computation capability of the computing platform is determined. A partition level, from the different partition levels, is selected based on the determined computation capability. One or more modeling tasks are defined based on the selected partition level, each modeling task including a group of the related time series. The one or more modeling tasks are executed in parallel on the computing platform by, for each modeling task, training a model using all the time series in the group of time series of the corresponding modeling task.
    Type: Application
    Filed: April 15, 2021
    Publication date: May 12, 2022
    Inventors: Brian Leo Quanz, Wesley M. Gifford, Stuart Siegel, Dhruv Shah, Jayant R. Kalagnanam, Chandrasekhar Narayanaswami, Vijay Ekambaram, Vivek Sharma
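The level-selection and parallel-execution flow can be sketched end to end. The hierarchy contents, the capability measure (a maximum parallel task count), and the stand-in training function are illustrative assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

# A hierarchy of partitions: coarser levels have fewer, larger groups of series.
hierarchy = {
    0: [["s1", "s2", "s3", "s4", "s5", "s6"]],        # one group of all series
    1: [["s1", "s2", "s3"], ["s4", "s5", "s6"]],      # two groups
    2: [["s1", "s2"], ["s3", "s4"], ["s5", "s6"]],    # three groups
}

def select_level(max_parallel_tasks):
    """Pick the finest level whose task count fits the platform's capability."""
    best = 0
    for level, groups in hierarchy.items():
        if len(groups) <= max_parallel_tasks:
            best = max(best, level)
    return best

def train_group(series_group):
    """Stand-in for training one model on all series in the group."""
    return {"series": series_group, "model": "trained"}

level = select_level(max_parallel_tasks=2)   # platform can run 2 tasks at once
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(train_group, hierarchy[level]))
```

Each modeling task trains one model on every series in its group, and all tasks at the chosen level run in parallel.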