Patents Examined by Michael J Huntley
  • Patent number: 11461655
    Abstract: A system and method for controlling a nodal network. The method includes estimating an effect on the objective caused by the existence or non-existence of a direct connection between a pair of nodes and changing a structure of the nodal network based at least in part on the estimate of the effect. A nodal network includes a strict partially ordered set, a weighted directed acyclic graph, an artificial neural network, and/or a layered feed-forward neural network.
    Type: Grant
    Filed: January 28, 2019
    Date of Patent: October 4, 2022
    Assignee: D5AI LLC
    Inventors: James K. Baker, Bradley J. Baker
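The edge-effect estimate this abstract describes can be sketched as a brute-force comparison of a toy objective with and without each direct connection, pruning connections whose removal lowers the objective. The single-output network, squared-error objective, and function names are illustrative assumptions, not taken from the patent:

```python
def objective(output, target):
    """Toy objective: squared error at the output node."""
    return (output - target) ** 2

def forward(edge_weights, inputs):
    """Single output node fed by direct connections from input nodes."""
    return sum(edge_weights[i] * inputs[i] for i in edge_weights)

def edge_effect(edge_weights, inputs, target, edge):
    """Estimate the effect on the objective of the existence of one direct
    connection by comparing the objective with and without that connection."""
    with_edge = objective(forward(edge_weights, inputs), target)
    pruned = {e: w for e, w in edge_weights.items() if e != edge}
    without_edge = objective(forward(pruned, inputs), target)
    return without_edge - with_edge   # positive: the connection helps

def prune_harmful_edges(edge_weights, inputs, target):
    """Change the network structure: drop connections whose estimated effect
    on the objective is negative (i.e. removing them lowers the objective)."""
    return {e: w for e, w in edge_weights.items()
            if edge_effect(edge_weights, inputs, target, e) >= 0}

edges = {0: 0.5, 1: -2.0}
inputs = {0: 1.0, 1: 1.0}
pruned = prune_harmful_edges(edges, inputs, target=0.5)
```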
  • Patent number: 11455569
    Abstract: Handshake protocol layer features are extracted from training data associated with encrypted network traffic of a plurality of classified devices. Record protocol layer features are extracted from the training data. One or more models are trained based on the extracted handshake protocol layer features and the extracted record protocol layer features. The one or more models are applied to an observed encrypted network traffic stream associated with a device to determine a predicted device classification of the device.
    Type: Grant
    Filed: January 9, 2019
    Date of Patent: September 27, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Enriquillo Valdez, Pau-Chen Cheng, Ian Michael Molloy, Dimitrios Pendarakis
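The two-layer feature split this abstract describes might be illustrated with a toy nearest-centroid classifier over one handshake-layer feature and two record-layer features; the specific features and the centroid classifier are stand-in assumptions, since the abstract does not specify the models:

```python
def extract_features(stream):
    """Toy features: one from the handshake layer (cipher-suite count) and two
    from the record layer (mean and max record length)."""
    handshake, records = stream["cipher_suites"], stream["record_lengths"]
    return (float(len(handshake)), sum(records) / len(records), float(max(records)))

def train_centroids(training_streams):
    """One centroid per device class, averaged over that class's streams."""
    sums = {}
    for label, stream in training_streams:
        feats = extract_features(stream)
        total, count = sums.get(label, ((0.0,) * len(feats), 0))
        sums[label] = (tuple(t + f for t, f in zip(total, feats)), count + 1)
    return {label: tuple(t / n for t in total) for label, (total, n) in sums.items()}

def classify(stream, centroids):
    """Predict the device class whose centroid is nearest in feature space."""
    feats = extract_features(stream)
    return min(centroids,
               key=lambda label: sum((f - c) ** 2 for f, c in zip(feats, centroids[label])))

training = [
    ("camera", {"cipher_suites": [1, 2], "record_lengths": [100, 120]}),
    ("camera", {"cipher_suites": [1, 2], "record_lengths": [110, 130]}),
    ("thermostat", {"cipher_suites": [1, 2, 3, 4, 5, 6], "record_lengths": [500, 700]}),
]
centroids = train_centroids(training)
prediction = classify({"cipher_suites": [1, 2], "record_lengths": [105, 125]}, centroids)
```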
  • Patent number: 11449731
    Abstract: Provided are a computer program product, a learning apparatus and a learning method. The method includes calculating a first propagation value that is propagated from a propagation source node to a propagation destination node in a neural network including nodes, based on node values of the propagation source node at a plurality of time points and a weight corresponding to the passage of time points based on a first attenuation coefficient. The method also includes updating the first attenuation coefficient by using a first update parameter that is based on the first propagation value and an error of the node value of the propagation destination node.
    Type: Grant
    Filed: January 16, 2020
    Date of Patent: September 20, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Takayuki Osogami
  • Patent number: 11449743
    Abstract: Methods and a system for generating and employing dimensionality reduction for generating statistical models. Traditionally, constructing models used for statistical analysis has been a laborious process requiring significant amounts of analyst input and processing time. Embodiments of the invention address this problem by providing an efficient method for pre-screening variables that automatically prunes the dimensionality of the input space to a tractable size while retaining as much predictive power as possible.
    Type: Grant
    Filed: June 17, 2015
    Date of Patent: September 20, 2022
    Assignee: HRB Innovations, Inc.
    Inventor: Laurie Croslin Scott
  • Patent number: 11449754
    Abstract: The present invention discloses a neural network training method for a memristor memory that accounts for memristor errors, mainly used to address the decrease in inference accuracy of a neural network based on the memristor memory caused by a process error and a dynamic error. The method comprises the following steps: modeling the conductance value of a memristor under the influence of the process error and the dynamic error, and performing conversion to obtain a distribution of the corresponding neural network weights; constructing a prior distribution of the weights from the weight distribution obtained by modeling, and performing Bayesian neural network training based on variational inference to obtain a variational posterior distribution of the weights; and converting the mean value of the variational posterior of the weights into a target conductance value for the memristor memory.
    Type: Grant
    Filed: February 16, 2022
    Date of Patent: September 20, 2022
    Assignee: ZHEJIANG UNIVERSITY
    Inventors: Cheng Zhuo, Xunzhao Yin, Qingrong Huang, Di Gao
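The modeling and conversion steps in this abstract might be sketched as follows, under an assumed linear conductance-to-weight mapping and Gaussian process/dynamic errors; the variational training itself is omitted, and only the error modeling and the posterior-mean-to-conductance conversion are shown:

```python
import random

def conductance_to_weight(g, alpha=1.0, g_ref=0.5):
    """Linear mapping from a memristor conductance to a network weight
    (the mapping itself is an illustrative assumption)."""
    return alpha * (g - g_ref)

def weight_distribution_under_error(g_target, process_std, dynamic_std,
                                    samples=10000, seed=0):
    """Model the programmed conductance under process and dynamic errors and
    convert the samples into the induced distribution of the weight."""
    rng = random.Random(seed)
    weights = []
    for _ in range(samples):
        g = g_target + rng.gauss(0, process_std) + rng.gauss(0, dynamic_std)
        weights.append(conductance_to_weight(g))
    mean = sum(weights) / samples
    var = sum((w - mean) ** 2 for w in weights) / samples
    return mean, var

def posterior_mean_to_conductance(w_mean, alpha=1.0, g_ref=0.5):
    """Invert the mapping: the variational posterior mean of a weight becomes
    the target conductance to program into the memristor memory."""
    return w_mean / alpha + g_ref

mean, var = weight_distribution_under_error(g_target=0.8,
                                            process_std=0.05, dynamic_std=0.02)
target_g = posterior_mean_to_conductance(0.3)
```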
  • Patent number: 11449788
    Abstract: Systems and methods for the annotation of source data in accordance with embodiments of the invention are disclosed. In one embodiment, a data annotation server system obtains a set of source data, provides at least one subset of source data to at least one annotator device, obtains a set of annotation data from the at least one annotator device for each subset of source data, classifies the source data based on the annotation data using a machine classifier for each subset of source data, generates annotator model data describing the characteristics of the at least one annotator device, and generates source data model data describing at least one piece of source data in the set of source data, where the source data model data includes label data identifying the estimated ground truth for each piece of source data in the set of source data.
    Type: Grant
    Filed: March 19, 2018
    Date of Patent: September 20, 2022
    Assignee: California Institute of Technology
    Inventors: Pietro Perona, Grant Van Horn, Steven J. Branson
  • Patent number: 11443169
    Abstract: A computer implemented method for adapting a model for recognition processing to a target-domain is disclosed. The method includes preparing a first distribution in relation to a part of the model, in which the first distribution is derived from data of a training-domain for the model. The method also includes obtaining a second distribution in relation to the part of the model by using data of the target-domain. The method further includes tuning one or more parameters of the part of the model so that the difference between the first and the second distributions becomes small.
    Type: Grant
    Filed: February 19, 2016
    Date of Patent: September 13, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Gakuto Kurata
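The tuning step that makes the difference between the two distributions small can be illustrated with one-dimensional Gaussians and a single shift parameter driven down a KL-divergence gradient; the Gaussian model, the shift parameter, and the gradient-descent loop are illustrative assumptions:

```python
import math

def gaussian_kl(mu1, var1, mu2, var2):
    """KL divergence between two 1-D Gaussians: a simple stand-in for the
    difference between the training-domain and target-domain distributions."""
    return (math.log(math.sqrt(var2 / var1))
            + (var1 + (mu1 - mu2) ** 2) / (2 * var2) - 0.5)

def tune_shift(mu_train, var_train, mu_target, var_target, lr=0.1, steps=200):
    """Gradient-descend a shift applied to the training-domain mean so the
    divergence to the target-domain distribution becomes small."""
    shift = 0.0
    for _ in range(steps):
        # d/dshift KL(N(mu_train+shift, var_train) || N(mu_target, var_target))
        grad = (mu_train + shift - mu_target) / var_target
        shift -= lr * grad
    return shift

shift = tune_shift(mu_train=0.0, var_train=1.0, mu_target=2.0, var_target=1.0)
kl_after = gaussian_kl(0.0 + shift, 1.0, 2.0, 1.0)
```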
  • Patent number: 11436494
    Abstract: An optimal power flow computation method based on multi-task deep learning is provided, which is related to the field of smart power grids.
    Type: Grant
    Filed: April 10, 2022
    Date of Patent: September 6, 2022
    Assignee: Zhejiang Lab
    Inventors: Gang Huang, Longfei Liao, Wei Hua
  • Patent number: 11429854
    Abstract: A method for training a computerized mechanical device, comprising: receiving data documenting actions of an actuator performing a task in a plurality of iterations; calculating, using the data, a neural network dataset used for performing the task; gathering, in a plurality of reward iterations, a plurality of scores given by an instructor to a plurality of states, each comprising at least one sensor value, while a robotic actuator performs the task according to the neural network; calculating, using the plurality of scores, a reward dataset used for computing a reward function; updating at least some of the neural network's plurality of parameters by receiving, in each of a plurality of policy iterations, a reward value computed by applying the reward function to another state comprising at least one sensor value, while the robotic actuator performs the task according to the neural network; and outputting the updated neural network.
    Type: Grant
    Filed: December 4, 2017
    Date of Patent: August 30, 2022
    Assignee: Technion Research & Development Foundation Limited
    Inventors: Ran El-Yaniv, Bar Hilleli
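The reward dataset and reward function in this abstract might be sketched as a least-squares fit of instructor scores to single-sensor states; the linear form of the reward and the sample values are illustrative assumptions:

```python
def fit_reward(states, scores):
    """Fit a linear reward r(s) = a*s + b to instructor scores by ordinary
    least squares; a stand-in for the learned reward dataset and function."""
    n = len(states)
    mean_s = sum(states) / n
    mean_r = sum(scores) / n
    cov = sum((s - mean_s) * (r - mean_r) for s, r in zip(states, scores))
    var = sum((s - mean_s) ** 2 for s in states)
    a = cov / var
    b = mean_r - a * mean_s
    return lambda s: a * s + b

# Instructor scores for single-sensor states observed during task execution.
states = [0.0, 1.0, 2.0, 3.0]
scores = [0.1, 1.1, 1.9, 3.1]
reward = fit_reward(states, scores)
policy_update_signal = reward(2.5)   # reward value fed back during policy iterations
```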
  • Patent number: 11429871
    Abstract: Embodiments include techniques for detection of data offloading through instrumentation analysis, where the techniques include monitoring, via a processor, an execution of a job, and analyzing processes associated with the job to determine a pattern. The techniques also include determining whether the pattern of the job is associated with a pattern for a workload type, and classifying the job based at least in part on the determination.
    Type: Grant
    Filed: May 18, 2017
    Date of Patent: August 30, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Nicholas P. Sardino, Anthony Sofia, Robert W. St. John
  • Patent number: 11423337
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a distilled machine learning model. One of the methods includes training a cumbersome machine learning model, wherein the cumbersome machine learning model is configured to receive an input and generate a respective score for each of a plurality of classes; and training a distilled machine learning model on a plurality of training inputs, wherein the distilled machine learning model is also configured to receive inputs and generate scores for the plurality of classes, comprising: processing each training input using the cumbersome machine learning model to generate a cumbersome target soft output for the training input; and training the distilled machine learning model to, for each of the training inputs, generate a soft output that matches the cumbersome target soft output for the training input.
    Type: Grant
    Filed: April 7, 2020
    Date of Patent: August 23, 2022
    Assignee: Google LLC
    Inventors: Oriol Vinyals, Jeffrey Adgate Dean, Geoffrey E. Hinton
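The soft-target training this abstract describes is commonly implemented with a temperature-softened softmax; a minimal sketch, in which the temperature value and function names are assumptions rather than claim language:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax: higher temperature yields softer scores."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_targets(cumbersome_logits, temperature=2.0):
    """Soft targets produced by the cumbersome model for a training input."""
    return softmax(cumbersome_logits, temperature)

def cross_entropy(soft_targets, student_probs):
    """Loss driving the distilled model to match the cumbersome soft output."""
    return -sum(t * math.log(p) for t, p in zip(soft_targets, student_probs))

# The cumbersome model is confident in class 0 but leaks similarity structure
# to class 1; the softened targets preserve that structure for the student.
teacher_logits = [6.0, 3.0, -2.0]
targets = distillation_targets(teacher_logits, temperature=2.0)
```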
  • Patent number: 11416765
    Abstract: Methods and systems for training a machine learning algorithm (MLA) comprising: acquiring a first set of training samples having a plurality of features; iteratively training a first predictive model based on the plurality of features and generating a respective first prediction error indicator; analyzing the respective first prediction error indicator for each iteration to determine an overfitting point, and determining at least one evaluation starting point; acquiring an indication of a new set of training objects; iteratively retraining the first predictive model with at least one training object from the at least one evaluation starting point to obtain a plurality of retrained first predictive models and generating a respective retrained prediction error indicator; and, based on a plurality of retrained prediction error indicators and a plurality of the associated first prediction error indicators, selecting one of the first set of training samples and the at least one training object.
    Type: Grant
    Filed: February 9, 2018
    Date of Patent: August 16, 2022
    Assignee: YANDEX EUROPE AG
    Inventor: Pavel Aleksandrovich Burangulov
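The overfitting point and evaluation starting point might be determined as sketched below, using a simple patience rule over the per-iteration error indicator; the patience and margin values are illustrative assumptions:

```python
def find_overfitting_point(errors, patience=2):
    """Return the iteration at which the prediction error indicator stops
    improving for `patience` consecutive iterations (the overfitting point)."""
    best, best_iter, since = float("inf"), 0, 0
    for i, err in enumerate(errors):
        if err < best:
            best, best_iter, since = err, i, 0
        else:
            since += 1
            if since >= patience:
                return best_iter
    return best_iter

def evaluation_starting_point(overfit_iter, margin=2):
    """Start retraining a few iterations before the overfitting point so the
    retrained models can be compared against the original indicators."""
    return max(0, overfit_iter - margin)

errors = [0.9, 0.7, 0.55, 0.5, 0.52, 0.56, 0.6]
overfit = find_overfitting_point(errors)
start = evaluation_starting_point(overfit)
```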
  • Patent number: 11410072
    Abstract: Systems and methods are provided for the detection of sentiment in writing. A plurality of texts is received from a larger collection of writing samples with a computer system. A set of seed words from the plurality of texts are labeled as being of positive sentiment or of negative sentiment with the computer system. The set of seed words is expanded in size with the computer system to provide an expanded set of seed words. Intensity values are assigned to words of the expanded set of seed words. Each of the words of the expanded set of seed words is assigned three intensity values: a value corresponding to the strength of the word's association with a positive polarity class, a value corresponding to the strength of the word's association with a negative polarity class, and a value corresponding to the strength of the word's association with a neutral polarity class.
    Type: Grant
    Filed: October 19, 2012
    Date of Patent: August 9, 2022
    Assignee: Educational Testing Service
    Inventors: Jill Burstein, Beata Beigman Klebanov, Joel Tetreault, Nitin Madnani, Adam Faulkner
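The seed expansion and three-way intensity assignment can be sketched with a synonym-based expansion and per-class occurrence counts; both mechanisms are simplified stand-ins for whatever corpus-based methods the patent covers:

```python
def expand_seeds(seeds, synonyms):
    """Expand the labelled seed set via synonyms: a simple stand-in for the
    corpus-based expansion described in the abstract."""
    expanded = dict(seeds)
    for word, label in seeds.items():
        for syn in synonyms.get(word, []):
            expanded.setdefault(syn, label)
    return expanded

def intensity_values(word, counts):
    """Three intensities per word: strength of association with the positive,
    negative, and neutral polarity classes, from per-class occurrence counts."""
    pos, neg, neu = counts.get(word, (0, 0, 0))
    total = pos + neg + neu
    if total == 0:
        return (0.0, 0.0, 0.0)
    return (pos / total, neg / total, neu / total)

seeds = {"excellent": "positive", "awful": "negative"}
synonyms = {"excellent": ["superb"], "awful": ["dreadful"]}
lexicon = expand_seeds(seeds, synonyms)
counts = {"superb": (8, 1, 1)}   # occurrences in positive/negative/neutral texts
pos, neg, neu = intensity_values("superb", counts)
```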
  • Patent number: 11410062
    Abstract: The present teaching generally relates to removing perturbations from predictive scoring. In one embodiment, data representing a plurality of events detected by a content provider may be received, the data indicating a time that a corresponding event occurred and whether the corresponding event was fraudulent. First category data may be generated by grouping each event into one of a number of categories, each category being associated with a range of times. A first measure of risk for each category may be determined, where the first measure of risk indicates a likelihood that a future event occurring at a future time is fraudulent. Second category data may be generated by processing the first category data and a second measure of risk for each category may be determined. Measure data representing the second measure of risk for each category and the range of times associated with that category may be stored.
    Type: Grant
    Filed: December 19, 2017
    Date of Patent: August 9, 2022
    Assignee: YAHOO AD TECH LLC
    Inventors: Liang Wang, Angus Xianen Qiu, Shengjun Pan
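The two category-level risk measures might be sketched as a raw per-time-range fraud rate followed by a smoothing pass; the moving-average form of the second measure is an assumption, since the abstract does not specify how perturbations are removed:

```python
def first_measure(events, bucket_seconds):
    """Group (timestamp, is_fraud) events into time ranges and compute the
    fraction of fraudulent events per category as a first measure of risk."""
    buckets = {}
    for ts, is_fraud in events:
        key = ts // bucket_seconds          # category = time-range index
        total, fraud = buckets.get(key, (0, 0))
        buckets[key] = (total + 1, fraud + int(is_fraud))
    return {k: fraud / total for k, (total, fraud) in buckets.items()}

def second_measure(first, window=3):
    """Illustrative second measure: a moving average over neighbouring
    categories to damp perturbations in the raw per-category rates."""
    keys = sorted(first)
    smoothed = {}
    for i, k in enumerate(keys):
        lo = max(0, i - window // 2)
        hi = min(len(keys), i + window // 2 + 1)
        neighbourhood = [first[keys[j]] for j in range(lo, hi)]
        smoothed[k] = sum(neighbourhood) / len(neighbourhood)
    return smoothed

events = [(10, True), (20, False), (70, False), (80, False), (130, True), (140, True)]
raw = first_measure(events, bucket_seconds=60)
smooth = second_measure(raw)
```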
  • Patent number: 11409535
    Abstract: A processing device and related products are disclosed. The processing device includes a main unit and a plurality of basic units in communication with the main unit. The main unit is configured to perform a first set of operations in a neural network in series, and transmit data to the plurality of basic units. The plurality of basic units are configured to receive the data transmitted from the main unit, perform a second set of operations in the neural network in parallel based on the data received from the main unit, and return operation results to the main unit.
    Type: Grant
    Filed: October 23, 2018
    Date of Patent: August 9, 2022
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli Liu, Tianshi Chen, Bingrui Wang, Yao Zhang
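The main-unit/basic-unit split can be sketched as a matrix-vector product in which the main unit performs a serial first-set operation, broadcasts the data, and the basic units compute dot products in parallel; the specific scaling operation and the thread-based parallelism are illustrative assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def basic_unit(row, vector):
    """A basic unit performs one second-set operation: a single dot product."""
    return sum(w * x for w, x in zip(row, vector))

def main_unit(weights, vector):
    """The main unit performs its serial work (here, an input scaling),
    transmits the data to the basic units, and collects their results."""
    scaled = [x * 0.5 for x in vector]                 # serial first-set operation
    with ThreadPoolExecutor(max_workers=4) as pool:    # parallel second set
        partials = pool.map(lambda row: basic_unit(row, scaled), weights)
    return list(partials)

weights = [[1.0, 2.0], [3.0, 4.0]]
output = main_unit(weights, [2.0, 2.0])
```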
  • Patent number: 11403513
    Abstract: A computer-implemented method of training a student machine learning system comprises receiving data indicating execution of an expert, determining one or more actions performed by the expert during the execution and a corresponding state-action Jacobian, and training the student machine learning system using a linear-feedback-stabilized policy. The linear-feedback-stabilized policy may be based on the state-action Jacobian. Also disclosed is a neural network system, implemented by one or more computers, for representing a space of probabilistic motor primitives. The neural network system comprises an encoder configured to generate latent variables based on a plurality of inputs, each input comprising a plurality of frames, and a decoder configured to generate an action based on one or more of the latent variables and a state.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: August 2, 2022
    Assignee: DeepMind Technologies Limited
    Inventors: Leonard Hasenclever, Vu Pham, Joshua Merel, Alexandre Galashov
  • Patent number: 11403545
    Abstract: A pattern recognition apparatus for discriminative training includes: a similarity calculator that calculates similarities among training data; a statistics calculator that calculates statistics from the similarities in accordance with current labels for the training data; and a discriminative probabilistic linear discriminant analysis (PLDA) trainer that receives the training data, the statistics of the training data, the current labels and PLDA parameters, and updates the PLDA parameters and the labels of the training data.
    Type: Grant
    Filed: March 9, 2017
    Date of Patent: August 2, 2022
    Assignee: NEC CORPORATION
    Inventors: Qiongqiong Wang, Takafumi Koshinaka
  • Patent number: 11379746
    Abstract: A method of processing image data in a connectionist network that comprises a plurality of units, wherein the method implements a multi-channel unit forming a respective one of the plurality of units, and wherein the method comprises: receiving, at the data input, a plurality of input picture elements representing an image acquired by means of a multi-channel image sensor, wherein the plurality of input picture elements comprises a first and at least a second portion of input picture elements, wherein the first portion of input picture elements represents a first channel of the image sensor and the second portion of input picture elements represents a second channel of the image sensor; processing the first and at least second portion of input picture elements separately from each other; and outputting, at the data output, the processed first and second portions of input picture elements.
    Type: Grant
    Filed: November 28, 2018
    Date of Patent: July 5, 2022
    Assignee: Aptiv Technologies Limited
    Inventors: Farzin G. Rajabizadeh, Narges Milani, Daniel Schugk, Lutz Roese-Koerner, Yu Su, Dennis Mueller
  • Patent number: 11373090
    Abstract: In automated assistant systems, a deep-learning model in the form of a long short-term memory (LSTM) classifier is used for mapping questions to classes, with each class having a manually curated answer. A team of experts manually creates the training data used to train this classifier. Relying on human curation often lets linguistic training biases creep into the training data, since every individual has a specific style of writing natural language and uses some words only in specific contexts. Deep models end up learning these biases instead of the core concept words of the target classes. To correct these biases, meaningful sentences are automatically generated using a generative model and then used for training a classification model. For example, a variational autoencoder (VAE) is used as the generative model for generating novel sentences, and a language model (LM) is utilized for selecting sentences based on likelihood.
    Type: Grant
    Filed: September 18, 2018
    Date of Patent: June 28, 2022
    Assignee: Tata Consultancy Services Limited
    Inventors: Puneet Agarwal, Mayur Patidar, Lovekesh Vig, Gautam Shroff
  • Patent number: 11366954
    Abstract: A text preparation apparatus is configured to, in the decoding processing: perform first-layer recurrent neural network processing for phrase types to be used in the text and second-layer recurrent neural network processing for words appropriate for each of the phrase types; determine a phrase appropriate for each of the phrase types based on outputs of the second-layer recurrent neural network processing; generate a first vector set from a state vector of a previous step in the first-layer recurrent neural network processing and the feature vector sets, each vector of the first vector set being generated based on similarity degrees between individual vectors in one of the feature vector sets and the state vector; generate a second vector based on similarity degrees between individual vectors in the first vector set and the state vector; and input the second vector to a given step in the first-layer recurrent neural network processing.
    Type: Grant
    Filed: February 13, 2018
    Date of Patent: June 21, 2022
    Assignee: HITACHI, LTD.
    Inventors: Bin Tong, Makoto Iwayama