Patents by Inventor Elliot Meyerson

Elliot Meyerson has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230351162
    Abstract: A residual estimation with an I/O kernel (“RIO”) framework provides estimates of predictive uncertainty of neural networks, and reduces their point-prediction errors. The process captures neural network (“NN”) behavior by estimating their residuals with an I/O kernel using a modified Gaussian process (“GP”). RIO is applicable to real-world problems, and, by using a sparse GP approximation, scales well to large datasets. RIO can be applied directly to any pretrained NNs without modifications to model architecture or training pipeline.
    Type: Application
    Filed: May 9, 2023
    Publication date: November 2, 2023
    Applicant: Cognizant Technology Solutions U.S. Corporation
    Inventors: Xin Qiu, Risto Miikkulainen, Elliot Meyerson
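    Illustrative sketch: a minimal NumPy example of the RIO idea, fitting an exact Gaussian process with a sum-of-RBFs I/O kernel to the residuals of a fixed point predictor; the toy predictor, data, kernel hyperparameters, and noise level are hypothetical stand-ins, and a real deployment would use a sparse GP approximation on large datasets.

      import numpy as np

      def rbf(A, B, lengthscale=1.0, variance=1.0):
          # Squared-exponential kernel between the rows of A and B.
          d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

      def io_kernel(X1, Y1, X2, Y2):
          # I/O kernel: an RBF over the inputs plus an RBF over the NN's outputs.
          return rbf(X1, X2) + rbf(Y1, Y2)

      # Stand-in for a pretrained NN: a fixed, imperfect point predictor.
      nn_predict = lambda X: np.sin(X).sum(axis=1, keepdims=True)

      rng = np.random.default_rng(0)
      X_train = rng.uniform(-3, 3, size=(50, 2))
      y_train = (np.sin(X_train).sum(axis=1, keepdims=True)
                 + 0.3 * X_train[:, :1] + 0.1 * rng.normal(size=(50, 1)))

      # Residuals of the pretrained network on the training data.
      f_train = nn_predict(X_train)
      residuals = y_train - f_train

      # Exact GP posterior over the residuals with the I/O kernel (noise variance 0.01).
      K = io_kernel(X_train, f_train, X_train, f_train) + 0.01 * np.eye(len(X_train))
      alpha = np.linalg.solve(K, residuals)

      X_test = rng.uniform(-3, 3, size=(5, 2))
      f_test = nn_predict(X_test)
      K_star = io_kernel(X_test, f_test, X_train, f_train)
      K_ss = io_kernel(X_test, f_test, X_test, f_test)

      corrected_mean = f_test + K_star @ alpha          # point prediction with reduced error
      cov = K_ss - K_star @ np.linalg.solve(K, K_star.T)
      std = np.sqrt(np.clip(np.diag(cov), 0, None))     # predictive uncertainty
      print(corrected_mean.ravel(), std)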
  • Patent number: 11681901
    Abstract: A residual estimation with an I/O kernel (“RIO”) framework provides estimates of predictive uncertainty of neural networks, and reduces their point-prediction errors. The process captures neural network (“NN”) behavior by estimating their residuals with an I/O kernel using a modified Gaussian process (“GP”). RIO is applicable to real-world problems, and, by using a sparse GP approximation, scales well to large datasets. RIO can be applied directly to any pretrained NNs without modifications to model architecture or training pipeline.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: June 20, 2023
    Assignee: Cognizant Technology Solutions U.S. Corporation
    Inventors: Xin Qiu, Risto Miikkulainen, Elliot Meyerson
  • Patent number: 11669716
    Abstract: A process for training and sharing generic functional modules across multiple diverse (architecture, task) pairs for solving multiple diverse problems is described. The process is based on decomposing the general multi-task learning problem into several fine-grained and equally-sized subproblems, or pseudo-tasks. Training a set of (architecture, task) pairs then corresponds to solving a set of related pseudo-tasks, whose relationships can be exploited by shared functional modules. An efficient search algorithm is introduced for optimizing the mapping between pseudo-tasks and the modules that solve them, while simultaneously training the modules themselves.
    Type: Grant
    Filed: March 12, 2020
    Date of Patent: June 6, 2023
    Assignee: Cognizant Technology Solutions U.S. Corp.
    Inventors: Elliot Meyerson, Risto Miikkulainen
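    Illustrative sketch: a minimal NumPy example of the decomposition idea, alternating between a greedy search over the mapping from pseudo-tasks to shared modules and gradient updates of the modules themselves; the linear modules, synthetic pseudo-tasks, and greedy reassignment rule are simplifying assumptions, not the patented search algorithm.

      import numpy as np

      rng = np.random.default_rng(0)
      d_in, d_out, n = 4, 3, 40

      # Synthetic pseudo-tasks: equally-sized regression subproblems carved out of larger
      # (architecture, task) pairs; some of them share an underlying mapping.
      true_W = [rng.normal(size=(d_in, d_out)) for _ in range(2)]
      pseudo_tasks = []
      for p in range(6):
          X = rng.normal(size=(n, d_in))
          Y = X @ true_W[p % 2] + 0.05 * rng.normal(size=(n, d_out))
          pseudo_tasks.append((X, Y))

      # Shared generic modules (plain linear maps here), deliberately more than two.
      modules = [0.1 * rng.normal(size=(d_in, d_out)) for _ in range(3)]
      assignment = [0] * len(pseudo_tasks)          # pseudo-task -> module index

      def loss(W, X, Y):
          return float(np.mean((X @ W - Y) ** 2))

      for step in range(50):
          # (1) Search the mapping: each pseudo-task greedily picks its best module.
          for p, (X, Y) in enumerate(pseudo_tasks):
              assignment[p] = int(np.argmin([loss(W, X, Y) for W in modules]))
          # (2) Train the modules on the pseudo-tasks currently routed to them.
          for m in range(len(modules)):
              for p, (X, Y) in enumerate(pseudo_tasks):
                  if assignment[p] == m:
                      grad = 2 * X.T @ (X @ modules[m] - Y) / len(X)
                      modules[m] = modules[m] - 0.05 * grad

      print("final assignment:", assignment)
      print("final losses:", [round(loss(modules[assignment[p]], X, Y), 4)
                              for p, (X, Y) in enumerate(pseudo_tasks)])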
  • Publication number: 20230025388
    Abstract: A system and method for combining and improving sets of diverse prescriptors for an Evolutionary Surrogate-assisted Prescription (ESP) model is described. The prescriptors are distilled into neural networks and evolved further using ESP. The system and method can handle diverse sets of prescriptors because no assumptions are made about the form of the input (i.e., contexts) of the initial prescriptors; only the prescriptions made are used to distill each prescriptor into a neural network with a fixed form. The resulting set of high-performing prescriptors provides a practical way for ESP to incorporate external human and machine knowledge and to generate a more accurate and better-fitting set of solutions.
    Type: Application
    Filed: June 8, 2022
    Publication date: January 26, 2023
    Applicant: Cognizant Technology Solutions U.S. Corporation
    Inventors: Elliot Meyerson, Risto Miikkulainen, Olivier Francon, Babak Hodjat, Darren Sargent
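    Illustrative sketch: a minimal NumPy example of the distillation step, fitting a fixed-form neural network to the prescriptions of an arbitrary black-box prescriptor so that it can then be evolved further with ESP; the black-box rule, network size, and training loop are hypothetical stand-ins.

      import numpy as np

      rng = np.random.default_rng(1)

      # A hypothetical external prescriptor: any black box mapping contexts to actions.
      # Its internal form is unknown; only its prescriptions are observed.
      def external_prescriptor(context):
          return np.tanh(2.0 * context[:, :1] - context[:, 1:2])

      # Collect (context, prescription) pairs by querying the black box.
      contexts = rng.uniform(-1, 1, size=(200, 2))
      prescriptions = external_prescriptor(contexts)

      # Fixed-form neural network (one hidden layer) to be distilled and later evolved.
      W1 = 0.1 * rng.normal(size=(2, 8)); b1 = np.zeros(8)
      W2 = 0.1 * rng.normal(size=(8, 1)); b2 = np.zeros(1)

      def forward(X):
          h = np.tanh(X @ W1 + b1)
          return h @ W2 + b2, h

      for step in range(2000):
          out, h = forward(contexts)
          err = out - prescriptions                    # imitate the black box's actions
          dW2 = h.T @ err / len(contexts); db2 = err.mean(0)
          dh = err @ W2.T * (1 - h ** 2)
          dW1 = contexts.T @ dh / len(contexts); db1 = dh.mean(0)
          W1 -= 0.5 * dW1; b1 -= 0.5 * db1; W2 -= 0.5 * dW2; b2 -= 0.5 * db2

      print("distillation MSE:", float(np.mean((forward(contexts)[0] - prescriptions) ** 2)))
      # The distilled weights (W1, b1, W2, b2) can now seed an ESP population and be
      # evolved further against a learned outcome predictor.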
  • Publication number: 20220207362
    Abstract: A general prediction model is based on an observer traveling around a continuous space, measuring values at some locations, and predicting them at others. The observer is completely agnostic about any particular task being solved; it cares only about measurement locations and their values. A machine learning framework in which seemingly unrelated tasks can be solved by a single model is proposed, whereby input and output variables are embedded into a shared space. The approach is shown to (1) recover intuitive locations of variables in space and time, (2) exploit regularities across related datasets with completely disjoint input and output spaces, and (3) exploit regularities across seemingly unrelated tasks, outperforming task-specific single-task models and multi-task learning alternatives.
    Type: Application
    Filed: December 17, 2021
    Publication date: June 30, 2022
    Applicant: Cognizant Technology Solutions U.S. Corporation
    Inventors: Elliot Meyerson, Risto Miikkulainen
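    Illustrative sketch: a minimal NumPy example in the spirit of the observer model, embedding measured and queried variables into a shared 2-D space and predicting the queried value as a distance-weighted average of the measurements; the kernel weighting and the random-search optimizer are simplifying assumptions, not the patented model.

      import numpy as np

      rng = np.random.default_rng(2)

      # Toy dataset: 3 measured variables and 1 queried variable per sample.
      # The queried variable really depends only on variables 0 and 2.
      n = 300
      V = rng.normal(size=(n, 3))
      target = 0.5 * V[:, 0] + 0.5 * V[:, 2] + 0.05 * rng.normal(size=n)

      def predict(embed, V):
          # Observer: the value at the queried location is a weighted average of measured
          # values, with weights set by distance in the shared embedding space.
          d2 = ((embed[:3] - embed[3]) ** 2).sum(axis=1)   # distances to the query location
          w = np.exp(-d2); w = w / w.sum()
          return V @ w

      def mse(embed):
          return float(np.mean((predict(embed, V) - target) ** 2))

      # Locations of the 3 measured variables and the 1 queried variable, learned jointly.
      embed = rng.normal(size=(4, 2))
      best = mse(embed)
      for step in range(3000):                             # simple random-search hill climbing
          cand = embed + 0.1 * rng.normal(size=embed.shape)
          if (m := mse(cand)) < best:
              embed, best = cand, m

      print("learned variable locations:\n", np.round(embed, 2), "\nMSE:", round(best, 4))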
  • Patent number: 11250314
    Abstract: The technology disclosed identifies parallel ordering of shared layers as a common assumption underlying existing deep multitask learning (MTL) approaches. This assumption restricts the kinds of shared structure that can be learned between tasks. The technology disclosed demonstrates how direct approaches to removing this assumption can ease the integration of information across plentiful and diverse tasks. The technology disclosed introduces soft ordering as a method for learning how to apply layers in different ways at different depths for different tasks, while simultaneously learning the layers themselves. Soft ordering outperforms parallel ordering methods as well as single-task learning across a suite of domains. Results show that deep MTL can be improved while generating a compact set of multipurpose functional primitives, thus aligning more closely with our understanding of complex real-world processes.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: February 15, 2022
    Assignee: Cognizant Technology Solutions U.S. Corporation
    Inventors: Elliot Meyerson, Risto Miikkulainen
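    Illustrative sketch: a forward pass through a soft-ordering network in NumPy, where each task applies a learned mixture of the same pool of shared layers at every depth; the layer weights and the mixing scalars are random here, whereas the patented method trains both jointly across tasks.

      import numpy as np

      rng = np.random.default_rng(3)
      n_tasks, n_layers, depth, width = 3, 4, 4, 8

      # Shared layers: every task reuses the same pool of weight matrices.
      layers = [rng.normal(size=(width, width)) / np.sqrt(width) for _ in range(n_layers)]

      # Soft-ordering tensor s[t, d, l]: how strongly task t applies shared layer l at depth d.
      logits = rng.normal(size=(n_tasks, depth, n_layers))
      s = np.exp(logits) / np.exp(logits).sum(axis=2, keepdims=True)   # softmax over layers

      def forward(task, x):
          h = x
          for d in range(depth):
              # Soft ordering: each depth applies a task-specific mixture of ALL shared
              # layers, instead of a single fixed layer in a fixed (parallel) order.
              h = sum(s[task, d, l] * np.tanh(h @ layers[l]) for l in range(n_layers))
          return h

      x = rng.normal(size=(5, width))
      for t in range(n_tasks):
          print(f"task {t} output norm:", round(float(np.linalg.norm(forward(t, x))), 3))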
  • Patent number: 11247100
    Abstract: Roughly described, a computer system uses a behavior-driven algorithm that is better able to find optimum solutions to a problem by balancing the use of fitness and novelty measures in evolutionary optimization. In competition among candidate individuals, a domination estimate between a pair of individuals is determined by both their fitness estimate difference and their behavior difference relative to one another. An increase in the fitness estimate difference of one individual of the pair over the other increases the domination estimate of the first individual. An increase in the behavior distance between the pair of individuals decreases the domination estimate of both of the individuals. Individuals with a higher domination estimate are more likely to survive competitions among the candidate individuals.
    Type: Grant
    Filed: July 21, 2020
    Date of Patent: February 15, 2022
    Assignee: Cognizant Technology Solutions U.S. Corporation
    Inventors: Elliot Meyerson, Risto Miikkulainen
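    Illustrative sketch: a minimal NumPy example of balancing fitness and novelty with one plausible scoring rule, in which a pair's domination estimate grows with the fitness difference and shrinks with the behavior distance; the specific formula, novelty weight, and round-robin tournament are illustrative assumptions, not the claimed method.

      import numpy as np

      rng = np.random.default_rng(4)

      # Candidate individuals: each has a fitness estimate and a behavior vector.
      n = 10
      fitness = rng.uniform(0, 1, size=n)
      behavior = rng.normal(size=(n, 3))

      def domination_estimate(i, j, novelty_weight=0.5):
          # Higher fitness relative to the opponent raises i's domination estimate;
          # a larger behavior distance lowers the estimate (illustrative form).
          fit_diff = fitness[i] - fitness[j]
          beh_dist = float(np.linalg.norm(behavior[i] - behavior[j]))
          return fit_diff - novelty_weight * beh_dist

      # Round-robin competition: individuals with higher accumulated domination
      # estimates are more likely to survive.
      scores = np.zeros(n)
      for i in range(n):
          for j in range(n):
              if i != j:
                  scores[i] += domination_estimate(i, j)

      survivors = np.argsort(scores)[::-1][:5]             # keep the top half
      print("survivor indices:", survivors)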
  • Publication number: 20220013241
    Abstract: The present invention relates to an ESP decision optimization system for epidemiological modeling. An ESP-based modeling approach is used to predict how non-pharmaceutical interventions (NPIs) affect a given pandemic, and then to automatically discover effective NPI strategies as control measures. The ESP decision optimization system comprises a data-driven Predictor, a supervised machine learning model trained with historical data on how given actions in given contexts led to specific outcomes. The Predictor is then used as a surrogate in order to evolve Prescriptors, i.e., neural networks that implement decision policies (i.e., NPIs) resulting in the best possible outcomes. Using the data-driven LSTM model as the Predictor, a Prescriptor is evolved in a multi-objective setting to minimize the pandemic impact.
    Type: Application
    Filed: June 23, 2021
    Publication date: January 13, 2022
    Applicant: Cognizant Technology Solutions U.S. Corporation
    Inventors: Elliot Meyerson, Olivier Francon
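    Illustrative sketch: a minimal NumPy example of the Predictor/Prescriptor loop, in which a hand-coded surrogate stands in for the data-driven LSTM Predictor and a tiny neural-network Prescriptor is evolved against it; the toy dynamics, mutation scheme, and scalarized objective are simplifying assumptions (the patented approach keeps both objectives and selects along the Pareto front).

      import numpy as np

      rng = np.random.default_rng(5)

      # Surrogate Predictor: maps (context, NPI stringency) -> predicted new cases.
      # A hand-coded stand-in for the LSTM trained on historical intervention data.
      def predictor(context, stringency):
          return context * np.exp(-3.0 * stringency)       # stricter NPIs -> fewer predicted cases

      def prescriptor(weights, context):
          # Tiny fixed-form neural network: context -> NPI stringency in [0, 1].
          W1, b1, W2, b2 = weights
          h = np.tanh(context * W1 + b1)
          return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

      def random_weights():
          return [rng.normal(size=4), rng.normal(size=4), rng.normal(size=4), rng.normal()]

      contexts = rng.uniform(0.5, 2.0, size=20)            # e.g. current infection levels

      def objectives(weights):
          # Two objectives to minimize: predicted cases and the cost of the interventions.
          s = np.array([prescriptor(weights, c) for c in contexts])
          return float(np.mean(predictor(contexts, s))), float(np.mean(s))

      population = [random_weights() for _ in range(30)]
      for generation in range(40):
          population.sort(key=lambda w: sum(objectives(w)))
          parents = population[:10]
          children = []
          for _ in range(20):
              parent = parents[rng.integers(len(parents))]
              children.append([p + 0.1 * rng.normal(size=np.shape(p)) for p in parent])
          population = parents + children

      best = min(population, key=lambda w: sum(objectives(w)))
      print("best (predicted cases, NPI cost):", tuple(round(v, 3) for v in objectives(best)))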
  • Patent number: 11030529
    Abstract: Evolution and coevolution of neural networks via multitask learning is described. The foundation is (1) the original soft ordering, which uses a fixed architecture for the modules and a fixed routing (i.e. network topology) that is shared among all tasks. This architecture is then extended in two ways with CoDeepNEAT: (2) by coevolving the module architectures (CM), and (3) by coevolving both the module architectures and a single shared routing for all tasks (CMSR). An alternative evolutionary process (4) keeps the module architecture fixed, but evolves a separate routing for each task during training (CTR). Finally, approaches (2) and (4) are combined into (5), where both modules and task routing are coevolved (CMTR).
    Type: Grant
    Filed: December 13, 2018
    Date of Patent: June 8, 2021
    Assignee: Cognizant Technology Solutions U.S. Corporation
    Inventors: Jason Zhi Liang, Elliot Meyerson, Risto Miikkulainen
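    Illustrative sketch: a minimal NumPy example of coevolving two populations, one of module architectures and one of routings, where a synthetic scoring function stands in for assembling and training the combined multitask network; the encodings, mutation operators, and fitness proxy are hypothetical, not CoDeepNEAT itself.

      import numpy as np

      rng = np.random.default_rng(6)

      def evaluate(module, routing):
          # Stand-in for assembling a network from (module architectures, routing),
          # training it on all tasks, and measuring validation performance; here the
          # score simply rewards a target total capacity of 60 hidden units.
          capacity = sum(module[slot] for slot in routing)
          return -abs(capacity - 60)

      def evolve(pop, fitnesses, mutate):
          order = np.argsort(fitnesses)[::-1]
          parents = [pop[i] for i in order[:4]]
          children = [mutate(parents[rng.integers(4)]) for _ in range(len(pop) - 4)]
          return parents + children

      mutate_module = lambda m: [max(4, w + int(rng.integers(-8, 9))) for w in m]
      mutate_routing = lambda r: [int(rng.integers(0, 2)) if rng.random() < 0.3 else s for s in r]

      # Two coevolving populations: module architectures (hidden widths for 2 module slots)
      # and routings (which module slot is applied at each of 3 depths).
      modules = [[int(w) for w in rng.integers(4, 64, size=2)] for _ in range(8)]
      routings = [[int(s) for s in rng.integers(0, 2, size=3)] for _ in range(8)]

      for generation in range(30):
          # Each population is scored by how well its members combine with the other's.
          m_fit = [evaluate(m, routings[rng.integers(8)]) for m in modules]
          r_fit = [evaluate(modules[rng.integers(8)], r) for r in routings]
          modules = evolve(modules, m_fit, mutate_module)
          routings = evolve(routings, r_fit, mutate_routing)

      best = max(((m, r) for m in modules for r in routings), key=lambda mr: evaluate(*mr))
      print("best module widths:", best[0], "best routing:", best[1])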
  • Patent number: 11003994
    Abstract: A system and method for evolving a deep neural network structure that solves a provided problem includes: a memory storing a candidate supermodule genome database having a pool of candidate supermodules having values for hyperparameters for identifying a plurality of neural network modules in the candidate supermodule and further storing fixed multitask neural networks; a training module that assembles and trains N enhanced fixed multitask neural networks and trains each enhanced fixed multitask neural network using training data; an evaluation module that evaluates a performance of each enhanced fixed multitask neural network using validation data; a competition module that discards supermodules in accordance with assigned fitness values and saves others in an elitist pool; an evolution module that evolves the supermodules in the elitist pool; and a solution harvesting module providing for deployment of a selected one of the enhanced fixed multitask neural networks, instantiated with supermodules selected from the elitist pool.
    Type: Grant
    Filed: December 7, 2018
    Date of Patent: May 11, 2021
    Assignee: Cognizant Technology Solutions U.S. Corporation
    Inventors: Jason Zhi Liang, Elliot Meyerson, Risto Miikkulainen
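    Illustrative sketch: a minimal NumPy example mapping the claimed system components (genome database, training/evaluation, competition with an elitist pool, evolution, and solution harvesting) onto simple functions; the supermodule hyperparameters and the synthetic evaluation score are hypothetical stand-ins for training real multitask networks.

      import numpy as np

      rng = np.random.default_rng(7)

      # Candidate supermodule genome: hyperparameter values identifying the neural-network
      # modules it contributes (layer count, width, dropout), kept in a simple "database".
      def random_supermodule():
          return {"layers": int(rng.integers(1, 5)),
                  "width": int(rng.integers(8, 129)),
                  "dropout": float(rng.uniform(0.0, 0.5))}

      def train_and_evaluate(supermodule):
          # Training and evaluation modules: stand-in for assembling the enhanced fixed
          # multitask network, training it, and scoring it on validation data.
          return (-abs(supermodule["width"] - 64) / 64
                  - supermodule["dropout"]
                  - 0.1 * abs(supermodule["layers"] - 3))

      def competition(population, fitnesses, elite_size=5):
          # Competition module: discard low-fitness supermodules, keep the rest in an elitist pool.
          order = np.argsort(fitnesses)[::-1]
          return [population[i] for i in order[:elite_size]]

      def evolve(elitist_pool, pop_size=20):
          # Evolution module: mutate members of the elitist pool to refill the population.
          children = []
          for _ in range(pop_size - len(elitist_pool)):
              child = dict(elitist_pool[rng.integers(len(elitist_pool))])
              child["width"] = max(8, child["width"] + int(rng.integers(-16, 17)))
              child["dropout"] = float(np.clip(child["dropout"] + rng.normal(0, 0.05), 0, 0.5))
              children.append(child)
          return elitist_pool + children

      population = [random_supermodule() for _ in range(20)]
      for generation in range(15):
          fitnesses = [train_and_evaluate(s) for s in population]
          elitist_pool = competition(population, fitnesses)
          population = evolve(elitist_pool)

      # Solution harvesting module: deploy the best supermodule-instantiated network.
      best = max(population, key=train_and_evaluate)
      print("harvested supermodule:", best)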
  • Publication number: 20200372327
    Abstract: A residual estimation with an I/O kernel (“RIO”) framework provides estimates of predictive uncertainty of neural networks, and reduces their point-prediction errors. The process captures neural network (“NN”) behavior by estimating their residuals with an I/O kernel using a modified Gaussian process (“GP”). RIO is applicable to real-world problems, and, by using a sparse GP approximation, scales well to large datasets. RIO can be applied directly to any pretrained NNs without modifications to model architecture or training pipeline.
    Type: Application
    Filed: May 21, 2020
    Publication date: November 26, 2020
    Applicant: Cognizant Technology Solutions U.S. Corporation
    Inventors: Xin Qiu, Risto Miikkulainen, Elliot Meyerson
  • Publication number: 20200346073
    Abstract: Roughly described, a computer system uses a behavior-driven algorithm that is better able to find optimum solutions to a problem by balancing the use of fitness and novelty measures in evolutionary optimization. In competition among candidate individuals, a domination estimate between a pair of individuals is determined by both their fitness estimate difference and their behavior difference relative to one another. An increase in the fitness estimate difference of one individual of the pair over the other increases the domination estimate of the first individual. An increase in the behavior distance between the pair of individuals decreases the domination estimate of both of the individuals. Individuals with a higher domination estimate are more likely to survive competitions among the candidate individuals.
    Type: Application
    Filed: July 21, 2020
    Publication date: November 5, 2020
    Applicant: Cognizant Technology Solutions U.S. Corporation
    Inventors: Elliot Meyerson, Risto Miikkulainen
  • Publication number: 20200293888
    Abstract: A process for training and sharing generic functional modules across multiple diverse (architecture, task) pairs for solving multiple diverse problems is described. The process is based on decomposing the general multi-task learning problem into several fine-grained and equally-sized subproblems, or pseudo-tasks. Training a set of (architecture, task) pairs then corresponds to solving a set of related pseudo-tasks, whose relationships can be exploited by shared functional modules. An efficient search algorithm is introduced for optimizing the mapping between pseudo-tasks and the modules that solve them, while simultaneously training the modules themselves.
    Type: Application
    Filed: March 12, 2020
    Publication date: September 17, 2020
    Applicant: Cognizant Technology Solutions U.S. Corporation
    Inventors: Elliot Meyerson, Risto Miikkulainen
  • Patent number: 10744372
    Abstract: Roughly described, a computer system uses a behavior-driven algorithm that is better able to find optimum solutions to a problem by balancing the use of fitness and novelty measures in evolutionary optimization. In competition among candidate individuals, a domination estimate between a pair of individuals is determined by both their fitness estimate difference and their behavior difference relative to one another. An increase in the fitness estimate difference of one individual of the pair over the other increases the domination estimate of the first individual. An increase in the behavior distance between the pair of individuals decreases the domination estimate of both of the individuals. Individuals with a higher domination estimate are more likely to survive competitions among the candidate individuals.
    Type: Grant
    Filed: March 5, 2018
    Date of Patent: August 18, 2020
    Assignee: Cognizant Technology Solutions U.S. Corporation
    Inventors: Elliot Meyerson, Risto Miikkulainen
  • Publication number: 20200143243
    Abstract: An evolutionary AutoML framework called LEAF optimizes hyperparameters, network architectures and the size of the network. LEAF makes use of both evolutionary algorithms (EAs) and distributed computing frameworks. A multiobjective evolutionary algorithm is used to maximize the performance and minimize the complexity of the evolved networks simultaneously by calculating the Pareto front given a group of individuals that have been evaluated for multiple objectives.
    Type: Application
    Filed: November 1, 2019
    Publication date: May 7, 2020
    Applicant: Cognizant Technology Solutions U.S. Corporation
    Inventors: Jason Zhi Liang, Elliot Meyerson, Risto Miikkulainen
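    Illustrative sketch: a minimal NumPy example of the multiobjective selection step, computing the Pareto front over candidates scored on performance (to be maximized) and complexity (to be minimized); the synthetic scores and the simple quadratic-time dominance check are illustrative assumptions, not the LEAF framework.

      import numpy as np

      rng = np.random.default_rng(8)

      # Evaluated individuals: each candidate network has a measured performance (maximize)
      # and a complexity such as its parameter count (minimize). Values here are synthetic.
      performance = rng.uniform(0.6, 0.95, size=30)
      complexity = rng.uniform(1e5, 5e6, size=30)

      def pareto_front(perf, comp):
          # A candidate is on the front if no other candidate is at least as good on both
          # objectives and strictly better on at least one.
          front = []
          for i in range(len(perf)):
              dominated = any(
                  perf[j] >= perf[i] and comp[j] <= comp[i]
                  and (perf[j] > perf[i] or comp[j] < comp[i])
                  for j in range(len(perf)) if j != i)
              if not dominated:
                  front.append(i)
          return front

      front = pareto_front(performance, complexity)
      # Selection in the multiobjective EA favors these non-dominated networks, pushing the
      # population toward architectures that are both accurate and compact.
      for i in sorted(front, key=lambda i: complexity[i]):
          print(f"candidate {i}: accuracy={performance[i]:.3f}, params={int(complexity[i])}")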
  • Publication number: 20190244108
    Abstract: A multi-task learning (MTL) process is adapted to the single-task learning (STL) case, i.e., when only a single task is available for training. The process is formalized as pseudo-task augmentation (PTA), in which a single task has multiple distinct decoders projecting the output of the shared structure to task predictions. By training the shared structure to solve the same problem in multiple ways, PTA simulates the effect of training towards distinct but closely related tasks drawn from the same universe. Training dynamics with multiple pseudo-tasks strictly subsume training with just one, and a class of algorithms is introduced for controlling pseudo-tasks in practice.
    Type: Application
    Filed: February 8, 2019
    Publication date: August 8, 2019
    Applicant: Cognizant Technology Solutions U.S. Corporation
    Inventors: Elliot Meyerson, Risto Miikkulainen
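    Illustrative sketch: a minimal NumPy example of pseudo-task augmentation, training one shared hidden layer through several distinct linear decoders that all predict the same single-task target; the network sizes, toy data, and hand-written gradient updates are simplifying assumptions.

      import numpy as np

      rng = np.random.default_rng(9)
      n, d_in, d_hidden, n_decoders = 200, 5, 16, 4

      X = rng.normal(size=(n, d_in))
      y = np.sin(X[:, :1]) + 0.5 * X[:, 1:2]               # a single task, single target

      # Shared structure (one hidden layer) plus several distinct decoders that all
      # project the shared representation to predictions for the SAME task.
      W_shared = 0.3 * rng.normal(size=(d_in, d_hidden))
      decoders = [0.3 * rng.normal(size=(d_hidden, 1)) for _ in range(n_decoders)]

      for step in range(500):
          H = np.tanh(X @ W_shared)
          grad_H = np.zeros_like(H)
          for k, D in enumerate(decoders):
              err = H @ D - y                              # each decoder acts as a pseudo-task
              decoders[k] = D - 0.1 * (H.T @ err) / n
              grad_H += err @ D.T / n
          # The shared structure is trained to solve the same problem in several distinct
          # ways at once, simulating multiple closely related tasks.
          W_shared -= 0.1 * X.T @ (grad_H * (1 - H ** 2))

      losses = [float(np.mean((np.tanh(X @ W_shared) @ D - y) ** 2)) for D in decoders]
      print("per-decoder MSE:", [round(v, 4) for v in losses])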
  • Publication number: 20190180186
    Abstract: A system and method for evolving a deep neural network structure that solves a provided problem includes: a memory storing a candidate supermodule genome database having a pool of candidate supermodules having values for hyperparameters for identifying a plurality of neural network modules in the candidate supermodule and further storing fixed multitask neural networks; a training module that assembles and trains N enhanced fixed multitask neural networks and trains each enhanced fixed multitask neural network using training data; an evaluation module that evaluates a performance of each enhanced fixed multitask neural network using validation data; a competition module that discards supermodules in accordance with assigned fitness values and saves others in an elitist pool; an evolution module that evolves the supermodules in the elitist pool; and a solution harvesting module providing for deployment of a selected one of the enhanced fixed multitask neural networks, instantiated with supermodules selected from the elitist pool.
    Type: Application
    Filed: December 7, 2018
    Publication date: June 13, 2019
    Applicant: Sentient Technologies (Barbados) Limited
    Inventors: Jason Zhi Liang, Elliot Meyerson, Risto Miikkulainen
  • Publication number: 20190180188
    Abstract: Evolution and coevolution of neural networks via multitask learning is described. The foundation is (1) the original soft ordering, which uses a fixed architecture for the modules and a fixed routing (i.e. network topology) that is shared among all tasks. This architecture is then extended in two ways with CoDeepNEAT: (2) by coevolving the module architectures (CM), and (3) by coevolving both the module architectures and a single shared routing for all tasks (CMSR). An alternative evolutionary process (4) keeps the module architecture fixed, but evolves a separate routing for each task during training (CTR). Finally, approaches (2) and (4) are combined into (5), where both modules and task routing are coevolved (CMTR).
    Type: Application
    Filed: December 13, 2018
    Publication date: June 13, 2019
    Applicant: Cognizant Technology Solutions U.S. Corporation
    Inventors: Jason Zhi Liang, Elliot Meyerson, Risto Miikkulainen
  • Publication number: 20190130257
    Abstract: The technology disclosed identifies parallel ordering of shared layers as a common assumption underlying existing deep multitask learning (MTL) approaches. This assumption restricts the kinds of shared structure that can be learned between tasks. The technology disclosed demonstrates how direct approaches to removing this assumption can ease the integration of information across plentiful and diverse tasks. The technology disclosed introduces soft ordering as a method for learning how to apply layers in different ways at different depths for different tasks, while simultaneously learning the layers themselves. Soft ordering outperforms parallel ordering methods as well as single-task learning across a suite of domains. Results show that deep MTL can be improved while generating a compact set of multipurpose functional primitives, thus aligning more closely with our understanding of complex real-world processes.
    Type: Application
    Filed: October 26, 2018
    Publication date: May 2, 2019
    Applicant: Sentient Technologies (Barbados) Limited
    Inventors: Elliot Meyerson, Risto Miikkulainen
  • Publication number: 20180250554
    Abstract: Roughly described, a computer system uses a behavior-driven algorithm that is better able to find optimum solutions to a problem by balancing the use of fitness and novelty measures in evolutionary optimization. In competition among candidate individuals, a domination estimate between a pair of individuals is determined by both their fitness estimate difference and their behavior difference relative to one another. An increase in the fitness estimate difference of one individual of the pair over the other increases the domination estimate of the first individual. An increase in the behavior distance between the pair of individuals decreases the domination estimate of both of the individuals. Individuals with a higher domination estimate are more likely to survive competitions among the candidate individuals.
    Type: Application
    Filed: March 5, 2018
    Publication date: September 6, 2018
    Applicant: Sentient Technologies (Barbados) Limited
    Inventors: Elliot Meyerson, Risto Miikkulainen