Patents Examined by Fen Christopher Tamulonis
  • Patent number: 11973841
    Abstract: Systems and methods are provided for building a user model. The system includes a processor and a non-transitory storage medium accessible to the processor. The processor is configured to obtain user data from a database, where the user data include user behavior for a plurality of apps installed on one or more user terminals. The processor selects at least one rating parameter using the user data, where the at least one rating parameter indicates a rating of relevant app usage. The system builds the user model based on a rating matrix comprising the at least one rating parameter.
    Type: Grant
    Filed: December 29, 2015
    Date of Patent: April 30, 2024
    Assignee: Yahoo Ad Tech LLC
    Inventors: Ayman Farahat, Tarun Bhatia
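    A minimal Python sketch of the rating-matrix idea in 11973841, assuming hypothetical field names (user_id, app_id, usage_minutes) and per-user normalized usage time as the rating parameter; the patent itself does not fix these choices.
```python
import pandas as pd

# Hypothetical per-app usage records pulled from the user-data database.
usage = pd.DataFrame({
    "user_id": ["u1", "u1", "u2", "u2", "u3"],
    "app_id": ["news", "mail", "news", "maps", "mail"],
    "usage_minutes": [120, 30, 10, 200, 45],
})

# One possible rating parameter: usage normalized per user, indicating how
# relevant each app is to that user's overall activity.
usage["rating"] = usage.groupby("user_id")["usage_minutes"].transform(
    lambda m: m / m.sum()
)

# Rating matrix (users x apps) on which a user model could then be built,
# e.g. by low-rank factorization.
rating_matrix = usage.pivot_table(index="user_id", columns="app_id",
                                  values="rating", fill_value=0.0)
print(rating_matrix)
```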
  • Patent number: 11921861
    Abstract: Methods, systems, and computer program products for providing the status of model extraction in the presence of colluding users are provided herein. A computer-implemented method includes generating, for each of multiple users, a summary of user input to a machine learning model; comparing the generated summaries to boundaries of multiple feature classes within an input space of the machine learning model; computing correspondence metrics based at least in part on the comparisons; identifying, based at least in part on the computed metrics, one or more of the multiple users as candidates for extracting portions of the machine learning model in an adversarial manner; and generating and outputting an alert, based on the identified users, to an entity related to the machine learning model.
    Type: Grant
    Filed: May 21, 2018
    Date of Patent: March 5, 2024
    Assignee: International Business Machines Corporation
    Inventors: Manish Kesarwani, Vijay Arya, Sameep Mehta
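    An illustrative sketch of the monitoring idea in 11921861, assuming a scikit-learn classifier as a stand-in for the protected model, class coverage of a user's queries as the correspondence metric, and an arbitrary alerting threshold; the patent's actual query summaries and boundary comparisons are not reproduced here.
```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Stand-in for the protected machine learning model.
X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           n_classes=3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

rng = np.random.default_rng(0)
# Hypothetical query logs: one user probes the input space broadly, one narrowly.
queries = {
    "probing_user": rng.uniform(X.min(axis=0), X.max(axis=0), size=(200, X.shape[1])),
    "normal_user": X[y == 0][:40] + rng.normal(0, 0.05, size=(40, X.shape[1])),
}

def correspondence(user_queries):
    """Fraction of the model's feature classes touched by this user's queries,
    a crude proxy for how much of the input space the summarized queries cover."""
    preds = model.predict(user_queries)
    return len(np.unique(preds)) / len(model.classes_)

THRESHOLD = 0.9  # assumed alerting threshold
for user, q in queries.items():
    score = correspondence(q)
    flag = "ALERT: extraction candidate" if score >= THRESHOLD else "ok"
    print(f"{user}: coverage={score:.2f} -> {flag}")
```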
  • Patent number: 11915138
    Abstract: Methods and apparatus for reducing a size of a neural network model, the method including: compressing data of the neural network model; identifying structure information of a vector register, wherein the structure information includes a number of registers included in the vector register; comparing a number of elements in the compressed data with a first condition, wherein the first condition is determined based on the number of registers in the vector register; and in response to the number of elements satisfying the first condition, associating the compressed data with the vector register to enable loading the compressed data to the vector register.
    Type: Grant
    Filed: February 18, 2020
    Date of Patent: February 27, 2024
    Assignee: Alibaba Group Holding Limited
    Inventors: Weifeng Zhang, Guoyang Chen, Yu Pu, Yongzhi Zhang, Yuan Xie
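    A toy sketch of the size check in 11915138, with assumed register counts, lane widths, and a magnitude-threshold stand-in for the patent's compression scheme.
```python
import numpy as np

# Assumed hardware description: a vector register made up of several
# fixed-width registers (the numbers are illustrative, not from the patent).
NUM_REGISTERS = 4
LANES_PER_REGISTER = 8                 # e.g. 8 x 32-bit elements per register
CAPACITY = NUM_REGISTERS * LANES_PER_REGISTER

def compress(weights, threshold=0.1):
    """Toy compression: keep only weights above a magnitude threshold,
    standing in for the patent's (unspecified) compression scheme."""
    mask = np.abs(weights) > threshold
    return weights[mask], np.flatnonzero(mask)

def try_associate_with_vector_register(values):
    """First condition (assumed form): the number of compressed elements must
    fit within the registers making up the vector register."""
    if len(values) <= CAPACITY:
        padded = np.zeros(CAPACITY, dtype=values.dtype)
        padded[:len(values)] = values
        # Split across the individual registers so it can be loaded in one shot.
        return padded.reshape(NUM_REGISTERS, LANES_PER_REGISTER)
    return None  # condition not satisfied; fall back to another loading path

weights = np.random.default_rng(0).normal(0, 0.1, size=64)
values, indices = compress(weights)
layout = try_associate_with_vector_register(values)
print("fits in vector register" if layout is not None else "does not fit")
```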
  • Patent number: 11893485
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing inputs using a neural network system that includes a batch normalization layer. One of the methods includes receiving a respective first layer output for each training example in the batch; computing a plurality of normalization statistics for the batch from the first layer outputs; normalizing each component of each first layer output using the normalization statistics to generate a respective normalized layer output for each training example in the batch; generating a respective batch normalization layer output for each of the training examples from the normalized layer outputs; and providing the batch normalization layer output as an input to the second neural network layer.
    Type: Grant
    Filed: January 22, 2021
    Date of Patent: February 6, 2024
    Assignee: Google LLC
    Inventors: Sergey Ioffe, Corinna Cortes
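    A minimal numpy sketch of the batch-normalization forward pass summarized in 11893485; the learned scale and shift (gamma, beta) are the standard way the normalized outputs are turned into the layer output and are assumed here.
```python
import numpy as np

def batch_norm_forward(first_layer_outputs, gamma, beta, eps=1e-5):
    """Batch normalization over a batch of first-layer outputs.

    first_layer_outputs: (batch, features) array, one row per training example.
    gamma, beta: learned scale and shift used to generate the final layer output.
    """
    # Normalization statistics computed from the batch.
    mean = first_layer_outputs.mean(axis=0)
    var = first_layer_outputs.var(axis=0)

    # Normalize each component of each first-layer output.
    normalized = (first_layer_outputs - mean) / np.sqrt(var + eps)

    # Batch-normalization layer output, provided as input to the second layer.
    return gamma * normalized + beta

x = np.random.default_rng(0).normal(3.0, 2.0, size=(32, 4))
out = batch_norm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
print(out.mean(axis=0).round(3), out.std(axis=0).round(3))  # roughly 0 and 1
```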
  • Patent number: 11842261
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for reinforcement learning. One of the methods includes selecting an action to be performed by the agent using both a slow updating recurrent neural network and a fast updating recurrent neural network that receives a fast updating input that includes the hidden state of the slow updating recurrent neural network.
    Type: Grant
    Filed: December 14, 2020
    Date of Patent: December 12, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Iain Robert Dunning, Wojciech Czarnecki, Maxwell Elliot Jaderberg
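    A sketch of the two-timescale arrangement in 11842261, using plain tanh RNN cells, an assumed update period of four steps for the slow network, and a greedy action head; sizes and parameters are illustrative only.
```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_cell(x, h, Wx, Wh):
    """Vanilla tanh RNN cell, standing in for the recurrent networks."""
    return np.tanh(x @ Wx + h @ Wh)

OBS, SLOW_H, FAST_H, ACTIONS = 8, 16, 16, 4
SLOW_PERIOD = 4   # assumed: the slow network updates once every 4 steps

# Randomly initialized parameters, for illustration only.
Wx_slow = rng.normal(0, 0.1, (OBS, SLOW_H))
Wh_slow = rng.normal(0, 0.1, (SLOW_H, SLOW_H))
Wx_fast = rng.normal(0, 0.1, (OBS + SLOW_H, FAST_H))
Wh_fast = rng.normal(0, 0.1, (FAST_H, FAST_H))
W_pi = rng.normal(0, 0.1, (FAST_H, ACTIONS))

h_slow, h_fast = np.zeros(SLOW_H), np.zeros(FAST_H)
for t in range(12):
    obs = rng.normal(size=OBS)                  # stand-in observation
    if t % SLOW_PERIOD == 0:                    # slow updating recurrent network
        h_slow = rnn_cell(obs, h_slow, Wx_slow, Wh_slow)
    # Fast updating recurrent network: its input includes the slow hidden state.
    fast_in = np.concatenate([obs, h_slow])
    h_fast = rnn_cell(fast_in, h_fast, Wx_fast, Wh_fast)
    action = int(np.argmax(h_fast @ W_pi))      # action selected by the agent
    print(t, action)
```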
  • Patent number: 11836630
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network. In one aspect, a method includes maintaining data specifying, for each of the network parameters, current values of a respective set of distribution parameters that define a posterior distribution over possible values for the network parameter. A respective current training value for each of the network parameters is determined from a respective temporary gradient value for the network parameter. The current values of the respective sets of distribution parameters for the network parameters are updated in accordance with the respective current training values for the network parameters. The trained values of the network parameters are determined based on the updated current values of the respective sets of distribution parameters.
    Type: Grant
    Filed: September 17, 2020
    Date of Patent: December 5, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Meire Fortunato, Charles Blundell, Oriol Vinyals
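    A sketch of maintaining per-parameter distribution parameters as described in 11836630, on a toy linear model: each weight keeps a mean and a softplus-parameterized standard deviation, a training value is sampled, a temporary gradient is computed, and both distribution parameters are updated. The reparameterization-style update and the omission of any prior/KL term are simplifying assumptions.
```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data standing in for the training set.
X = rng.normal(size=(64, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(0, 0.1, size=64)

# Distribution parameters maintained for each network parameter: a mean mu and
# a softplus-parameterized standard deviation rho defining the posterior.
mu = np.zeros(3)
rho = np.full(3, -3.0)
lr = 0.05

softplus = lambda a: np.log1p(np.exp(a))
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

for step in range(200):
    noise = rng.normal(size=3)
    w = mu + softplus(rho) * noise            # current training values, sampled from the posterior
    grad_w = 2 * X.T @ (X @ w - y) / len(y)   # temporary gradient for the network parameters
    # Update the distribution parameters from the temporary gradient
    # (reparameterization rule; any prior/KL term is omitted for brevity).
    mu -= lr * grad_w
    rho -= lr * grad_w * noise * sigmoid(rho)

# Trained values determined from the updated distribution parameters.
print("posterior means:", mu.round(2), "posterior stds:", softplus(rho).round(3))
```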
  • Patent number: 11803747
    Abstract: A method for determining a placement for machine learning model operations across multiple hardware devices is described.
    Type: Grant
    Filed: May 20, 2020
    Date of Patent: October 31, 2023
    Assignee: Google LLC
    Inventors: Samuel Bengio, Mohammad Norouzi, Benoit Steiner, Jeffrey Adgate Dean, Hieu Hy Pham, Azalia Mirhoseini, Quoc V. Le, Naveen Kumar, Yuefeng Zhou, Rasmus Munk Larsen
  • Patent number: 11651270
    Abstract: A method and system are provided for combining models. The method includes forming, by a computer having a processor and a memory, model pairs from a model ensemble that includes a plurality of models. The method further includes comparing the model pairs based on sets of output results produced by the model pairs to provide comparison results. The method also includes constructing, by the computer, a combination model from at least one of the model pairs based on the comparison results. The comparing step is performed using user-generated set-based feedback.
    Type: Grant
    Filed: March 22, 2016
    Date of Patent: May 16, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Vittorio Castelli, Radu Florian, Taesun Moon, Avirup Sil
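    A toy sketch of the pairing-and-combining flow in 11651270. Label agreement on the examples where a pair disagrees stands in for the user-generated set-based feedback, and averaging stands in for the construction of the combination model; both are assumptions.
```python
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# A toy model ensemble: each model is a weight vector with a thresholded output.
ensemble = {f"m{i}": rng.normal(size=4) for i in range(4)}
predict = lambda w, data: ((data @ w) > 0).astype(int)

def pair_score(wa, wb):
    """Compare a model pair on the set of examples where their outputs differ."""
    pa, pb = predict(wa, X), predict(wb, X)
    disagree = pa != pb
    if not disagree.any():
        return 1.0
    return max((pa[disagree] == y[disagree]).mean(),
               (pb[disagree] == y[disagree]).mean())

pairs = list(combinations(ensemble, 2))
best = max(pairs, key=lambda p: pair_score(ensemble[p[0]], ensemble[p[1]]))

# Construct a combination model from the best-scoring pair (simple averaging).
combined = (ensemble[best[0]] + ensemble[best[1]]) / 2
print("combined pair:", best, "accuracy:", (predict(combined, X) == y).mean())
```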
  • Patent number: 11593704
    Abstract: Techniques for tuning a machine learning algorithm using automatically determined optimal hyperparameters are described. An exemplary method includes receiving a request to determine a search space for at least one hyperparameter of a machine learning algorithm; determining, according to the request, optimal hyperparameter values from the search space for the at least one hyperparameter of the machine learning algorithm based on an evaluation of hyperparameters from the same machine learning algorithm on different datasets; and tuning the machine learning algorithm using the determined optimal hyperparameter values for the at least one hyperparameter of the machine learning algorithm to generate a machine learning model.
    Type: Grant
    Filed: June 27, 2019
    Date of Patent: February 28, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Rodolphe Jenatton, Miroslav Miladinovic, Valerio Perrone
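    A sketch of the search-space-then-tune flow in 11593704, with a hypothetical table of prior evaluations on other datasets, a simple rule bounding the space around the best prior values, and log-uniform random search as the tuner; none of these specifics come from the patent.
```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical prior evaluations of the same algorithm's hyperparameter
# (say, a regularization strength) on different datasets: (value, score) pairs.
prior_evals = {
    "dataset_a": [(0.001, 0.71), (0.01, 0.80), (0.1, 0.78), (1.0, 0.60)],
    "dataset_b": [(0.001, 0.65), (0.01, 0.74), (0.1, 0.79), (1.0, 0.70)],
}

# Determine a search space by bracketing the best-performing values seen
# across datasets (one simple way to transfer the prior evaluations).
best_values = [max(evals, key=lambda e: e[1])[0] for evals in prior_evals.values()]
low, high = min(best_values) / 10, max(best_values) * 10
print(f"search space: [{low:.4g}, {high:.4g}]")

def train_and_score(value):
    """Stand-in for training the algorithm with this hyperparameter value."""
    return -(np.log10(value) + 1.5) ** 2 + rng.normal(0, 0.01)

# Tune within the determined space (log-uniform random search for illustration).
candidates = 10 ** rng.uniform(np.log10(low), np.log10(high), size=20)
best_value = max(candidates, key=train_and_score)
print("tuned hyperparameter value:", round(float(best_value), 4))
```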
  • Patent number: 11586897
    Abstract: According to an embodiment, a reinforcement learning system includes a memristor array in which each of a plurality of first direction lines corresponds to one of a plurality of states, and each of a plurality of second direction lines corresponds to one of a plurality of actions, a first voltage application unit that individually applies voltage to the first direction lines, a second voltage application unit that individually applies voltage to the second direction lines, an action decision circuit that decides an action to be selected by an agent in a state corresponding to a first direction line to which a readout voltage is applied, an action storage unit that stores the action selected by the agent in each state that can be caused in an environment, and a trace storage unit that stores a time at which the state is caused by an action selected by the agent.
    Type: Grant
    Filed: March 4, 2019
    Date of Patent: February 21, 2023
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Yoshifumi Nishi, Radu Berdan, Takao Marukame, Kumiko Nomura
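    A numpy sketch of the array-read view of 11586897: rows stand for first-direction (state) lines, columns for second-direction (action) lines, and the conductance at each crossing is read out as a current under a readout voltage. The greedy selection and all numbers are illustrative; the device physics and write circuitry are not modeled.
```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 5, 3

# Conductance at each crossing of a first-direction (state) line and a
# second-direction (action) line; the values here are arbitrary.
conductance = rng.uniform(0.1, 1.0, size=(N_STATES, N_ACTIONS))

READOUT_V = 0.2  # readout voltage applied to the selected state line

def decide_action(state):
    """Action-decision circuit: read the currents on the action lines for the
    state line under readout voltage and pick the strongest (greedy choice)."""
    currents = READOUT_V * conductance[state]   # Ohm's law per crossing
    return int(np.argmax(currents))

# Action storage and trace storage: the action chosen in each state, and when.
action_store, trace_store = {}, {}
for t, state in enumerate([0, 2, 2, 4, 1]):
    a = decide_action(state)
    action_store[state] = a
    trace_store[state] = t    # time at which this state occurred
print(action_store, trace_store)
```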
  • Patent number: 11580426
    Abstract: Systems and methods for determining relative importance of one or more variables in a non-parametric model include: receiving raw values of the variables corresponding to one or more entities; processing the raw values using a statistical model to obtain probability values for the variables and an overall prediction value for each entity; determining a plurality of cumulative distributions for the variables based on the raw values and the number of entities having a specific raw value; grouping the variables into a plurality of equally sized buckets based on the cumulative distributions; determining a mean probability value for each bucket; assigning a rank number for each bucket based on the mean probability values; compiling a table for the entities based on the raw values and the buckets corresponding to the raw values; and determining the relative importance of the variables for the entities based on the rank numbers.
    Type: Grant
    Filed: October 8, 2020
    Date of Patent: February 14, 2023
    Assignee: CAPITAL ONE SERVICES, LLC
    Inventors: Ruoyo Shao, Kurt Adrian Wolf, Sang Jin Park, Jacky Huang Zheng Kwok, Cheng Jiang
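    A pandas sketch of the bucketing-and-ranking procedure in 11580426, using a logistic form as a stand-in for the statistical model and ten equally sized buckets; the specific model and bucket count are assumptions.
```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500

# Raw variable values for each entity, plus a stand-in statistical model
# (a logistic form) giving an overall prediction probability per entity.
raw = pd.DataFrame({"income": rng.lognormal(10, 0.5, n),
                    "utilization": rng.uniform(0, 1, n)})
overall = 1 / (1 + np.exp(-(0.8 * raw["utilization"]
                            - 0.3 * np.log(raw["income"]) + 2)))

n_buckets = 10
rank_table = {}
for var in raw.columns:
    # Cumulative distribution of the raw values -> equally sized buckets.
    cdf = raw[var].rank(method="first") / n
    bucket = pd.cut(cdf, bins=n_buckets, labels=False)
    # Mean prediction probability within each bucket, then a rank per bucket.
    mean_prob = overall.groupby(bucket).mean()
    bucket_rank = mean_prob.rank(ascending=False).astype(int)
    rank_table[var] = bucket.map(bucket_rank)   # rank number for each entity

# Table per entity: a lower rank means the entity sits in a bucket with a
# higher mean probability, i.e. that variable matters more for this entity.
print(pd.DataFrame(rank_table).head())
```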
  • Patent number: 11556810
    Abstract: A method, computer system, and a computer program product for assessing a likelihood of success associated with developing at least one machine learning (ML) solution is provided. The present invention may include generating a set of questions based on a set of raw training data. The present invention may also include computing a feasibility score based on an answer corresponding with each question from the generated set of questions. The present invention may then include, in response to determining that the computed feasibility score satisfies a threshold, computing a level of effort associated with developing the at least one ML solution to address a problem. The present invention may further include presenting, to a user, a plurality of results associated with assessing the likelihood of success of the at least one ML solution.
    Type: Grant
    Filed: July 11, 2019
    Date of Patent: January 17, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Pathirage Dinindu Sujan Udayanga Perera, Orna Raz, Ramani Routray, Eitan Daniel Farchi
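    An illustrative sketch of the assessment flow in 11556810: questions generated from a profile of the raw training data, a feasibility score computed from the answers, and a level-of-effort estimate produced once the threshold is met. The questions, weights, and threshold are all invented for illustration.
```python
# Pure-Python sketch: question generation, feasibility scoring, and a
# level-of-effort estimate. Questions, weights, and the threshold are invented.

def generate_questions(raw_data_profile):
    questions = ["Is the prediction target clearly defined?"]
    if raw_data_profile["n_rows"] < 10_000:
        questions.append("Can more labeled data be collected?")
    if raw_data_profile["missing_fraction"] > 0.1:
        questions.append("Can missing values be recovered from another source?")
    return questions

def feasibility_score(answers):
    # Fraction of favourable answers, scaled to 0-100.
    return 100 * sum(answers.values()) / len(answers)

def level_of_effort(raw_data_profile):
    # Toy effort model: more cleaning work when more data is missing.
    return 2 + 10 * raw_data_profile["missing_fraction"]   # person-weeks

profile = {"n_rows": 4_000, "missing_fraction": 0.25}      # raw training data profile
questions = generate_questions(profile)
answers = {q: True for q in questions}                     # user's answers
score = feasibility_score(answers)

THRESHOLD = 60
results = {"questions": questions, "feasibility": score}
if score >= THRESHOLD:                                     # threshold satisfied
    results["effort_person_weeks"] = level_of_effort(profile)
print(results)                                             # results presented to the user
```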
  • Patent number: 11544605
    Abstract: A question and answer (QA) system, computer program product, and computer-implemented method are provided to determine an answer to a question that includes a measurement value. In one example, the QA system receives a question and analyzes the question to identify a measurement value specified in the question. The QA system determines relevant passages to the question. The QA system assigns a measurement value confidence score to a relevant passage based on a comparison of the measurement value specified in the question and a second measurement value specified in the relevant passage. The QA system determines an order of the relevant passages using the measurement value confidence score of each of the relevant passages. The QA system determines an answer to the question based on the order of the relevant passages.
    Type: Grant
    Filed: March 7, 2018
    Date of Patent: January 3, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kyle M. Brake, Stephen A. Boxwell, Keith G. Frost, Stanley J. Vernier
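    A small sketch of the measurement-based passage scoring in 11544605, assuming regex extraction of the first numeric value and a relative-error closeness score; the patent does not prescribe either.
```python
import re

def extract_measurement(text):
    """Pull the first numeric measurement value out of a question or passage."""
    m = re.search(r"(\d+(?:\.\d+)?)", text)
    return float(m.group(1)) if m else None

def measurement_confidence(q_value, p_value):
    """Assumed closeness score: 1.0 for an exact match, decaying with relative error."""
    if q_value is None or p_value is None:
        return 0.0
    return 1.0 / (1.0 + abs(q_value - p_value) / max(q_value, 1e-9))

question = "Which bolt is 12.7 mm in diameter?"
relevant_passages = [
    "The small bolt measures 8 mm across the threads.",
    "A half-inch bolt has a 12.7 mm nominal diameter.",
    "Torque the fastener to 25 Nm after seating.",
]

q_val = extract_measurement(question)
ordered = sorted(relevant_passages,
                 key=lambda p: measurement_confidence(q_val, extract_measurement(p)),
                 reverse=True)
print("best-supported passage:", ordered[0])   # basis for the answer
```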
  • Patent number: 11461683
    Abstract: Methods, apparatuses and systems directed to pattern learning, recognition, and metrology are described. In some particular implementations, the invention provides a flexible pattern recognition platform including pattern recognition engines that can be dynamically adjusted to implement specific pattern recognition configurations for individual pattern recognition applications. In certain implementations, the present invention provides for methods and systems suitable for analyzing and recognizing patterns in biological signals such as multi-electrode array waveform data. In other implementations, the present invention also provides for a partition configuration where knowledge elements can be grouped and pattern recognition operations can be individually configured and arranged to allow for multi-level pattern recognition schemes. In other implementations, the present invention provides methods and systems for dynamic learning of patterns in supervised and unsupervised manners.
    Type: Grant
    Filed: April 9, 2020
    Date of Patent: October 4, 2022
    Assignee: DataShapes, Inc.
    Inventors: Tyson J. Thomas, Kristopher Robert Buschelman, Frank G. Evans, Karl P. Geiger, Michael P. Kelley, Eric C. Schneider, Timothy J. Carruthers, Jeffrey Brian Adams
  • Patent number: 11449793
    Abstract: An artificial intelligence platform system includes at least a server designed and configured to receive training data. Receiving training data includes receiving a first training set including a plurality of first data entries, each first data entry of the plurality of first data entries including at least an element of user data and at least a correlated first constitutional label. At least a server receives at least a user input datum from a user client device. At least a server generates at least an output as a function of the at least a user input datum and the training data. At least a server retrieves at least a stored user datum as a function of the at least a user input datum and the at least an output. At least a server transmits the at least a stored user datum to a user client device.
    Type: Grant
    Filed: July 3, 2019
    Date of Patent: September 20, 2022
    Assignee: KPN INNOVATIONS, LLC.
    Inventor: Kenneth Neumann
  • Patent number: 11410059
    Abstract: A bias estimation apparatus according to an embodiment estimates a bias included in the measured values from each sensor. The bias estimation apparatus includes a reference model builder, a temporary bias generator, a corrected measured value calculator, a similarity calculator, a similarity selector, a score calculator, and an estimated bias determiner. The reference model builder builds a reference model of the measured value packs. The temporary bias generator generates a temporary bias pack. The corrected measured value calculator calculates corrected measured value packs. The similarity calculator calculates a similarity of each corrected measured value pack. The similarity selector selects a part of the similarities according to their values from among the similarities. The score calculator calculates a score based on the selected similarities. The estimated bias determiner determines an estimated bias which is an estimated value of the bias based on the score.
    Type: Grant
    Filed: January 31, 2017
    Date of Patent: August 9, 2022
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Takuro Moriyama, Hideyuki Aisu, Hisaaki Hatano, Kenichi Fujiwara
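    A numpy sketch of the estimation loop in 11410059: temporary biases are generated, measured-value packs are corrected, each corrected pack's similarity to a reference model is computed, the best similarities are kept, a score is formed, and the bias with the best score is taken as the estimate. The nominal-pack reference model, negative-distance similarity, and selection size are assumptions.
```python
import numpy as np

rng = np.random.default_rng(0)

# Measured-value packs from one sensor: true readings plus a constant bias and noise.
TRUE_BIAS = 0.8
packs = rng.normal(20.0, 0.3, size=(30, 4)) + TRUE_BIAS     # 30 packs of 4 readings

# Reference model builder: here simply a nominal pack (an assumption; the patent
# does not fix the model form).
reference_pack = np.full(4, 20.0)

def score_bias(b, keep=10):
    corrected = packs - b                                    # corrected measured values
    similarities = -np.linalg.norm(corrected - reference_pack, axis=1)
    selected = np.sort(similarities)[-keep:]                 # select part of the similarities
    return selected.mean()                                   # score from the selected ones

temporary_biases = np.linspace(-2.0, 2.0, 81)                # temporary bias generator
scores = [score_bias(b) for b in temporary_biases]
estimated_bias = temporary_biases[int(np.argmax(scores))]    # estimated bias determiner
print("estimated bias:", round(float(estimated_bias), 2))    # should land near 0.8
```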
  • Patent number: 11353833
    Abstract: A method includes using a computational network to learn and predict time-series data. The computational network includes one or more layers, each having an encoder and a decoder. The encoder of each layer multiplicatively combines (i) current feed-forward information from a lower layer or a computational network input and (ii) past feedback information from a higher layer or that layer. The encoder of each layer generates current feed-forward information for the higher layer or that layer. The decoder of each layer multiplicatively combines (i) current feedback information from the higher layer or that layer and (ii) at least one of the current feed-forward information from the lower layer or the computational network input or past feed-forward information from the lower layer or the computational network input. The decoder of each layer generates current feedback information for the lower layer or a computational network output.
    Type: Grant
    Filed: August 21, 2017
    Date of Patent: June 7, 2022
    Assignee: Goldman Sachs & Co. LLC
    Inventor: Paul Burchard
  • Patent number: 11281973
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing inputs using a neural network system that includes a batch normalization layer. One of the methods includes receiving a respective first layer output for each training example in the batch; computing a plurality of normalization statistics for the batch from the first layer outputs; normalizing each component of each first layer output using the normalization statistics to generate a respective normalized layer output for each training example in the batch; generating a respective batch normalization layer output for each of the training examples from the normalized layer outputs; and providing the batch normalization layer output as an input to the second neural network layer.
    Type: Grant
    Filed: July 30, 2021
    Date of Patent: March 22, 2022
    Assignee: Google LLC
    Inventors: Sergey Ioffe, Corinna Cortes
  • Patent number: 11276002
    Abstract: Hybrid training of a deep network that includes a multi-layer neural network is described. The training includes setting a current learning algorithm for the multi-layer neural network to a first learning algorithm. The training further includes iteratively applying training data to the neural network, determining a gradient for parameters of the neural network based on the applying of the training data, updating the parameters based on the current learning algorithm, and determining whether the current learning algorithm should be switched to a second learning algorithm based on the updating. The training further includes, in response to the determining that the current learning algorithm should be switched to a second learning algorithm, changing the current learning algorithm to the second learning algorithm and initializing a learning rate of the second learning algorithm based on the gradient and a step used by the first learning algorithm to update the parameters of the neural network.
    Type: Grant
    Filed: March 20, 2018
    Date of Patent: March 15, 2022
    Assignee: salesforce.com, inc.
    Inventors: Nitish Shirish Keskar, Richard Socher
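    A sketch of the algorithm switch in 11276002 on a toy quadratic objective: training starts with Adam, a switch test is checked after each update, and plain SGD then takes over with a learning rate initialized from the gradient and the step the first algorithm used. The particular switch test and all constants are assumptions.
```python
import numpy as np

# Toy quadratic objective standing in for the network loss; the switch rule and
# all constants here are assumptions used only to illustrate the flow.
A = np.diag([1.0, 10.0])
grad = lambda w: A @ w

w = np.array([5.0, 5.0])
m, v = np.zeros(2), np.zeros(2)
beta1, beta2, eps, adam_lr = 0.9, 0.999, 1e-8, 0.1
algorithm, sgd_lr, prev_step_norm = "adam", None, None

for t in range(1, 201):
    g = grad(w)
    if algorithm == "adam":                          # first learning algorithm
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat, v_hat = m / (1 - beta1 ** t), v / (1 - beta2 ** t)
        step = adam_lr * m_hat / (np.sqrt(v_hat) + eps)
        w = w - step
        # Assumed switch test, checked after each update: the adaptive step
        # size has stabilized, so hand over to plain SGD.
        step_norm = np.linalg.norm(step)
        if prev_step_norm is not None and t > 10 and \
                abs(step_norm - prev_step_norm) / step_norm < 0.01:
            # Initialize the SGD learning rate from the gradient and the step
            # the first algorithm used to update the parameters.
            sgd_lr = step_norm / (np.linalg.norm(g) + eps)
            algorithm = "sgd"                        # second learning algorithm
        prev_step_norm = step_norm
    else:
        w = w - sgd_lr * g

print("finished with:", algorithm, " loss:", round(0.5 * w @ A @ w, 4))
```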
  • Patent number: 11270194
    Abstract: Artificial neural networks (ANNs) are a distributed computing model in which computation is accomplished with many simple processing units, called neurons, with data embodied by the connections between neurons, called synapses, and by the strength of these connections, the synaptic weights. An attractive implementation of ANNs uses the conductance of non-volatile memory (NVM) elements to record the synaptic weight, with the important multiply-accumulate step performed in place, at the data. In such an implementation, non-idealities in the response of the NVM such as nonlinearity, saturation, stochasticity and asymmetry in response to programming pulses lead to reduced network performance compared to an ideal network implementation. A method is shown that improves performance by distributing the synaptic weight across multiple conductances of varying significance, implementing carry operations from less-significant signed analog conductance-pairs to more-significant analog conductance-pairs.
    Type: Grant
    Filed: July 26, 2017
    Date of Patent: March 8, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Geoffrey W Burr
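    A sketch of the multi-conductance scheme in 11270194: one synaptic weight is spread over a more-significant and a less-significant signed conductance pair, updates are applied to the less-significant pair, and a carry operation folds its accumulated value into the more-significant pair before saturation. Ranges, the significance factor, and the carry trigger are illustrative, not taken from the patent.
```python
import numpy as np

# One synaptic weight distributed across two signed conductance pairs
# (G+, G-) of different significance; the numbers are illustrative only.
G_MAX = 1.0          # conductance range of a single device
SIGNIFICANCE = 4.0   # weight of the more-significant pair relative to the less

# [more-significant pair, less-significant pair], each stored as (g_plus, g_minus)
pairs = np.zeros((2, 2))

def weight():
    msp, lsp = pairs
    return SIGNIFICANCE * (msp[0] - msp[1]) + (lsp[0] - lsp[1])

def program(delta):
    """Apply a weight update to the less-significant pair only (as in training)."""
    if delta >= 0:
        pairs[1, 0] = min(pairs[1, 0] + delta, G_MAX)
    else:
        pairs[1, 1] = min(pairs[1, 1] - delta, G_MAX)

def carry():
    """Carry: fold the less-significant pair's net value into the
    more-significant pair and reset the less-significant devices."""
    net = pairs[1, 0] - pairs[1, 1]
    if net >= 0:
        pairs[0, 0] = min(pairs[0, 0] + net / SIGNIFICANCE, G_MAX)
    else:
        pairs[0, 1] = min(pairs[0, 1] - net / SIGNIFICANCE, G_MAX)
    pairs[1] = 0.0

for i, delta in enumerate([0.3, 0.3, 0.3, -0.1, 0.3]):
    program(delta)
    # Trigger a carry whenever either less-significant device nears saturation.
    if pairs[1].max() > 0.8 * G_MAX:
        carry()
    print(f"after update {i}: weight = {weight():.2f}")
```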