Patents Examined by Abdullah-Al Kawsar
  • Patent number: 11960990
    Abstract: Disclosed are systems, methods, and other implementations that include a machine-implemented artificial neural network including a plurality of nodes, with the nodes forming a plurality of layers including an input layer, at least one hidden layer, and an output layer, and a plurality of links, with each link coupling a corresponding source node and a receiving node. At least one link is configured to evaluate a piecewise linear function of a value provided by a first source node, from the plurality of nodes, to yield a value for a first receiving node coupled to the at least one link. Each node of a hidden layer is configured to aggregate values provided by links for which that node is the receiving node of the link, with the first receiving node providing non-linear output resulting, in part, from the at least one link configured to evaluate the piecewise linear function.
    Type: Grant
    Filed: July 31, 2020
    Date of Patent: April 16, 2024
    Assignee: NanoSemi, Inc.
    Inventors: Alexandre Megretski, Alexandre Marques
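    The first abstract above can be made concrete with a small sketch: a link that evaluates a piecewise linear function of its source node's value, with the receiving node summing its incoming links. This is a minimal, hypothetical illustration; the segment breakpoints and slopes are invented for the example and are not taken from the patent.

    ```python
    import numpy as np

    def pwl(x, breakpoints, slopes, intercepts):
        # Evaluate a piecewise linear function: find the segment whose
        # interval contains x, then apply that segment's affine map.
        i = int(np.searchsorted(breakpoints, x))
        return slopes[i] * x + intercepts[i]

    def hidden_node_output(source_values, link_params):
        # A receiving node aggregates (here: sums) the values produced by
        # the links for which it is the receiving node.
        return sum(pwl(v, *p) for v, p in zip(source_values, link_params))

    # Two links into one hidden node, each carrying a ReLU-like piecewise
    # linear function (one breakpoint at 0; slopes 0 then 1).
    relu_like = (np.array([0.0]), np.array([0.0, 1.0]), np.array([0.0, 0.0]))
    out = hidden_node_output([-2.0, 3.0], [relu_like, relu_like])
    ```

    Because the link functions are piecewise linear rather than plain scalar weights, the node's output is non-linear in its inputs even before any separate activation function is applied.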
  • Patent number: 11960975
    Abstract: A method for multi-instance learning (MIL)-based classification of a streaming input is described. The method includes running a first biased MIL model using extracted features from a subset of instances received in the streaming input to obtain a first classification result. The method also includes running a second biased MIL model using the extracted features to obtain a second classification result. The first biased MIL model is biased opposite the second biased MIL model. The method further includes classifying the streaming input based on the classification results of the first biased MIL model and the second biased MIL model.
    Type: Grant
    Filed: March 2, 2017
    Date of Patent: April 16, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Dineel Sule, Subrato Kumar De, Wei Ding
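    The oppositely biased MIL models in the preceding abstract can be sketched as follows. The bias scheme (any-instance vs. every-instance) and the score threshold are hypothetical stand-ins chosen for illustration, not details from the patent.

    ```python
    def positively_biased_mil(instance_scores, threshold=0.5):
        # Biased toward the positive class: the bag is positive if ANY
        # instance looks positive.
        return max(instance_scores) > threshold

    def negatively_biased_mil(instance_scores, threshold=0.5):
        # Biased the opposite way: the bag is positive only if EVERY
        # instance looks positive.
        return min(instance_scores) > threshold

    def classify_stream(instance_scores):
        # Combine the classification results of the two biased models.
        first = positively_biased_mil(instance_scores)
        second = negatively_biased_mil(instance_scores)
        if first and second:
            return "positive"
        if not first and not second:
            return "negative"
        return "uncertain"  # the oppositely biased models disagree
    ```

    Agreement between the two biased models yields a confident label; disagreement signals that more instances from the stream may be needed.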
  • Patent number: 11948352
    Abstract: The exchange of weight gradients among the processing nodes can introduce a substantial bottleneck to the training process. Instead of remaining idle during the weight gradients exchange process, a processing node can update its own set of weights for the next iteration of the training process using the processing node's local weight gradients. The next iteration of training can be started by using these speculative weights until the weight gradients exchange process completes and a global weights update is available. If the speculative weights are close enough to the weight values from the global weights update, the training process at the processing node can continue training using the results computed from the speculative weights to reduce the overall training time.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: April 2, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Patricio Kaplan, Randy Renfu Huang
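    The speculative-weights idea above can be sketched in a few lines. The learning rate, tolerance, and closeness test (an L2-norm comparison) are hypothetical choices for illustration; the patent does not prescribe them.

    ```python
    import numpy as np

    def speculative_step(weights, local_grad, all_grads, lr=0.1, tol=1e-2):
        # Update immediately with the node's own (local) gradient instead
        # of idling while the gradient exchange completes.
        speculative = weights - lr * local_grad
        # ... the next training iteration can already start on `speculative` ...
        # Once the exchange finishes, the global weights become available.
        global_weights = weights - lr * np.mean(all_grads, axis=0)
        if np.linalg.norm(speculative - global_weights) <= tol:
            return speculative, True   # close enough: keep speculative results
        return global_weights, False   # too far off: fall back to global update

    w = np.array([1.0, 1.0])
    g = np.array([0.1, 0.1])
    new_w, kept_speculative = speculative_step(w, g, [g, g])
    ```

    When the speculative and global weights agree within tolerance, the work already done on the speculative weights is kept, hiding the communication latency behind useful compute.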
  • Patent number: 11922292
    Abstract: Methods, systems, and apparatus, including computer-readable media, are described for a hardware circuit configured to implement a neural network. The circuit includes a first memory, respective first and second processor cores, and a shared memory. The first memory provides data for performing computations to generate an output for a neural network layer. Each of the first and second cores include a vector memory for storing vector values derived from the data provided by the first memory. The shared memory is disposed generally intermediate the first memory and at least one core and includes: i) a direct memory access (DMA) data path configured to route data between the shared memory and the respective vector memories of the first and second cores and ii) a load-store data path configured to route data between the shared memory and respective vector registers of the first and second cores.
    Type: Grant
    Filed: May 14, 2020
    Date of Patent: March 5, 2024
    Assignee: Google LLC
    Inventors: Thomas Norrie, Andrew Everett Phelps, Norman Paul Jouppi, Matthew Leever Hedlund
  • Patent number: 11916769
    Abstract: Example methods and apparatus to onboard return path data providers for audience measurement are disclosed herein. Example apparatus disclosed herein to predict return path data quality include a classification engine to compute a first data set of model features from validation tuning data reported from media metering devices and a second data set of model features from return path data reported from return path data devices. The example apparatus also include a prediction engine to train a machine learning algorithm based on the first data set, apply the trained machine learning algorithm to the second data set to predict quality of the return path data reported from the return path data devices, and determine an onboarding status for a return path data provider based on an aggregate predicted quality of the return path data reported from the return path data devices.
    Type: Grant
    Filed: April 30, 2020
    Date of Patent: February 27, 2024
    Assignee: The Nielsen Company (US), LLC
    Inventors: David J. Kurzynski, Samantha M. Mowrer, Michael Grotelueschen, Vince Tambellini, Demetrios Fassois, Jean Guerrettaz
  • Patent number: 11886989
    Abstract: Using a deep learning inference system, respective similarities are measured for each of a set of intermediate representations to input information used as an input to the deep learning inference system. The deep learning inference system includes multiple layers, each layer producing one or more associated intermediate representations. Selection is made of a subset of the set of intermediate representations that are most similar to the input information. Using the selected subset of intermediate representations, a partitioning point is determined in the multiple layers used to partition the multiple layers into two partitions defined so that information leakage for the two partitions will meet a privacy parameter when a first of the two partitions is prevented from leaking information. The partitioning point is output for use in partitioning the multiple layers of the deep learning inference system into the two partitions.
    Type: Grant
    Filed: September 10, 2018
    Date of Patent: January 30, 2024
    Assignee: International Business Machines Corporation
    Inventors: Zhongshu Gu, Heqing Huang, Jialong Zhang, Dong Su, Dimitrios Pendarakis, Ian Michael Molloy
  • Patent number: 11842260
    Abstract: A computer-implemented method, a computer program product, and a computer system for incremental and decentralized pruning of a machine learning model in federated learning. A federated learning system determines a serial sequence of participating in model pruning by agents in the federated learning system. A server in the federated learning system sends, to a first agent in the serial sequence, an initial model to trigger a federated pruning process for the machine learning model. Each agent in the serial sequence prunes the machine learning model and generates an intermediately pruned model for the immediately next agent to prune. A final agent in the serial sequence prunes the machine learning model and generates a finally pruned model. The final agent sends the finally pruned model to the server.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: December 12, 2023
    Assignee: International Business Machines Corporation
    Inventors: Wei-Han Lee, Changchang Liu, Shiqiang Wang, Bong Jun Ko, Yuang Jiang
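    The serial federated pruning flow above can be sketched with magnitude pruning as a stand-in for whatever criterion each agent actually applies; the per-agent pruning fractions here are invented for illustration.

    ```python
    import numpy as np

    def prune_fraction(weights, fraction):
        # Zero out the smallest-magnitude `fraction` of the weights.
        k = int(len(weights) * fraction)
        out = weights.copy()
        if k:
            out[np.argsort(np.abs(weights))[:k]] = 0.0
        return out

    def federated_serial_pruning(initial_weights, agent_fractions):
        # The server sends the initial model to the first agent; each agent
        # prunes the model it received and passes the intermediately pruned
        # model to the next; the final agent returns the finally pruned model.
        model = initial_weights
        for fraction in agent_fractions:
            model = prune_fraction(model, fraction)
        return model

    pruned = federated_serial_pruning(np.array([0.1, -2.0, 0.05, 3.0]),
                                      [0.25, 0.5])
    ```

    Each agent only ever sees the model handed to it by its predecessor, which is what makes the process decentralized and incremental.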
  • Patent number: 11836638
    Abstract: Organizations are constantly flooded with questions, ranging from the mundane to the unanswerable. It is therefore the respective departments that actively look for automated assistance, especially to alleviate the burden of routine but time-consuming tasks. The embodiments of the present disclosure provide a BiLSTM-Siamese network based classifier for identifying the target class of queries and providing responses to queries pertaining to the identified target class, which acts as an automated assistant that alleviates the burden of answering queries in well-defined domains. The Siamese Model (SM) is trained for a epochs, and then the same Base-Network is used to train the Classification Model (CM) for b epochs, iteratively, until the best accuracy is observed on the validation set; the SM ensures it learns which sentences are semantically similar/dissimilar, while the CM learns to predict the target class of every user query. Here a and b are hyperparameters tuned for best performance on the validation set.
    Type: Grant
    Filed: March 5, 2018
    Date of Patent: December 5, 2023
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Puneet Agarwal, Prerna Khurana, Gautam Shroff, Lovekesh Vig, Ashwin Srinivasan
  • Patent number: 11816586
    Abstract: A method for event identification including receiving event information pertaining to events occurring with respect to a computing environment, each event having a measurement metric; evaluating, by a probability function, the measurement metric for each event to determine when the measurement metric is above a predetermined upper probability threshold or below a predetermined lower probability threshold, in which case the event is classified as alarm data; processing the alarm data through a decision tree to determine, based on historical data, when the alarm data is significant or not significant, and to reduce the alarm data to a predetermined number of significant alarm data; and displaying the predetermined number of significant alarm data to a user.
    Type: Grant
    Filed: November 13, 2017
    Date of Patent: November 14, 2023
    Assignee: International Business Machines Corporation
    Inventors: Xue Feng Gao, Hui Qing Shi, James C. Thorburn, Yu Fen Yuan, Qing Feng Zhang
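    The threshold-then-reduce pipeline in the abstract above can be sketched as follows. The probability thresholds, the event fields, and the ranking rule (a stand-in for the decision tree) are all hypothetical.

    ```python
    def flag_alarms(events, lower=0.05, upper=0.95):
        # An event whose metric probability falls below the lower threshold
        # or above the upper threshold is classified as alarm data.
        return [e for e in events if e["p"] < lower or e["p"] > upper]

    def significant_alarms(alarms, history, top_k=2):
        # Stand-in for the decision tree: rank alarms by how rarely their
        # metric has fired historically, and cap the list at top_k.
        ranked = sorted(alarms, key=lambda e: history.get(e["metric"], 0))
        return ranked[:top_k]

    events = [
        {"metric": "cpu", "p": 0.99},
        {"metric": "mem", "p": 0.50},
        {"metric": "disk", "p": 0.01},
    ]
    alarms = flag_alarms(events)
    top = significant_alarms(alarms, history={"cpu": 10, "disk": 1}, top_k=1)
    ```

    Capping the output at a predetermined number keeps the user from being flooded with alarms that historical data suggests are routine.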
  • Patent number: 11816539
    Abstract: A method and system to determine a rating for evaluation of a health care procedure is disclosed. The system includes a user interface accepting a request for a rating of the health care procedure. A database includes input data metrics each related to one of health care provider quality metrics, health care provider cost metrics, health care facility quality metrics, and health care facility cost metrics. A machine learning system is trained to provide a rating target value for the health care procedure based on the neural processing of the data factors. The system evaluates a number of machine learning algorithms to determine the best learning algorithm for a particular rating target value based on test data supplied to the machine learning algorithms. The best learning algorithm for the particular rating target value is used by the system to determine the rating for evaluation.
    Type: Grant
    Filed: March 24, 2017
    Date of Patent: November 14, 2023
    Assignee: SurgeonCheck LLC
    Inventors: Marc Granson, Jennifer Shields, Thomas A. Woolman
  • Patent number: 11783181
    Abstract: A method for executing a multi-task deep learning model for learning trends in multivariate time series is presented. The method includes collecting multi-variate time series data from a plurality of sensors, jointly learning both local and global contextual features for predicting a trend of the multivariate time series by employing a tensorized long short-term memory (LSTM) with adaptive shared memory (TLASM) to learn historical dependency of historical trends, and employing a multi-task one-dimensional convolutional neural network (1dCNN) to extract salient features from local raw time series data to model a short-term dependency between local time series data and subsequent trends.
    Type: Grant
    Filed: August 7, 2020
    Date of Patent: October 10, 2023
    Inventors: Wei Cheng, Haifeng Chen, Jingchao Ni, Dongkuan Xu, Wenchao Yu
  • Patent number: 11763132
    Abstract: Detecting sequences of computer-executed operations, including training a BLSTM to determine forward and backward probabilities of encountering each computer-executed operation within a training set of consecutive computer-executed operations in forward and backward execution directions of the operations, and identifying reference sequences of operations within the training set where for each given one of the sequences the forward probability of encountering a first computer-executed operation in the given sequence is below a predefined lower threshold, the forward probability of encountering a last computer-executed operation in the given sequence is above a predefined upper threshold, the backward probability of encountering the last computer-executed operation in the given sequence is below the predefined lower threshold, and the backward probability of encountering the first computer-executed operation in the given sequence is above the predefined upper threshold, and where the predefined lower threshold
    Type: Grant
    Filed: June 11, 2019
    Date of Patent: September 19, 2023
    Assignee: International Business Machines Corporation
    Inventors: Guy Lev, Boris Rozenberg, Yehoshua Sagron
  • Patent number: 11734568
    Abstract: The present disclosure provides systems and methods for modification (e.g., pruning, compression, quantization, etc.) of artificial neural networks based on estimations of the utility of network connections (also known as “edges”). In particular, the present disclosure provides novel techniques for estimating the utility of one or more edges of a neural network in a fashion that requires far less expenditure of resources than calculation of the actual utility. Based on these estimated edge utilities, a computing system can make intelligent decisions regarding network pruning, network quantization, or other modifications to a neural network. In particular, these modifications can reduce resource requirements associated with the neural network. By making these decisions with knowledge of and based on the utility of various edges, this reduction in resource requirements can be achieved with only a minimal, if any, degradation of network performance (e.g., prediction accuracy).
    Type: Grant
    Filed: February 13, 2019
    Date of Patent: August 22, 2023
    Assignee: GOOGLE LLC
    Inventors: Jyrki Alakuijala, Ruud van Asseldonk, Robert Obryk, Krzysztof Potempa
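    The edge-utility estimation idea above can be sketched with one common cheap proxy; the particular estimator (weight magnitude scaled by mean source-activation magnitude) and the keep ratio are illustrative assumptions, not the patent's actual estimator.

    ```python
    import numpy as np

    def estimated_edge_utility(weights, activations):
        # Cheap proxy for edge utility: |w_ij| scaled by the mean magnitude
        # of the source activation, avoiding the far larger cost of
        # measuring the actual utility (e.g., by ablating each edge).
        mean_act = np.mean(np.abs(activations), axis=0)[:, None]
        return np.abs(weights) * mean_act

    def prune_by_estimated_utility(weights, activations, keep_ratio=0.5):
        # Keep only the edges whose estimated utility clears the cutoff.
        utility = estimated_edge_utility(weights, activations)
        cutoff = np.quantile(utility, 1.0 - keep_ratio)
        return weights * (utility >= cutoff)

    w = np.array([[1.0, 0.1], [0.1, 1.0]])
    acts = np.ones((4, 2))
    pruned = prune_by_estimated_utility(w, acts)
    ```

    The same estimated utilities could equally drive quantization decisions (fewer bits for low-utility edges) rather than outright removal.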
  • Patent number: 11720795
    Abstract: Disclosed is a neural network structure enabling efficient training of the network and a method thereto. The structure is a ladder-type structure wherein one or more lateral input(s) is/are taken to decoding functions. By minimizing one or more cost function(s) belonging to the structure the neural network structure may be trained in an efficient way.
    Type: Grant
    Filed: November 26, 2014
    Date of Patent: August 8, 2023
    Assignee: Canary Capital LLC
    Inventor: Harri Valpola
  • Patent number: 11676025
    Abstract: A method for training an automated learning system includes processing training input with a first neural network and processing the output of the first neural network with a second neural network. The input layer of the second neural network corresponds to the output layer of the first neural network, and the output layer of the second neural network corresponds to the input layer of the first neural network. An objective function is determined using the output of the second neural network and a predetermined modification magnitude. The objective function is approximated using random Cauchy projections which are propagated through the second neural network.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: June 13, 2023
    Assignees: Robert Bosch GmbH, Carnegie Mellon University
    Inventors: Jeremy Zico Kolter, Eric Wong, Frank R. Schmidt, Jan Hendrik Metzen
  • Patent number: 11645512
    Abstract: Memory layout and conversion are disclosed to improve neural network (NN) inference performance. For one example, a memory layout is selected for a NN among a plurality of different memory layouts based on thresholds derived from performance simulations of the NN. The NN stores multi-dimensional NN kernel computation data using the selected memory layout during NN inference. The memory layouts to be selected can be a channel, height, width, and batches (CHWN) layout, a batches, height, width and channel (NHWC) layout, and a batches, channel, height and width (NCHW) layout. If the multi-dimensional NN kernel computation data is not in the selected memory layout, the NN transforms the multi-dimensional NN kernel computation data for the selected memory layout.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: May 9, 2023
    Assignee: BAIDU USA LLC
    Inventor: Min Guo
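    The layout selection and transformation described above can be sketched as below. The threshold values and the selection rule are placeholders; the patent derives its thresholds from performance simulations of the NN.

    ```python
    import numpy as np

    def select_layout(batch, channels, thresholds):
        # Threshold-based layout selection (rule and values are
        # hypothetical stand-ins for the simulation-derived thresholds).
        if batch >= thresholds["large_batch"]:
            return "CHWN"
        if channels >= thresholds["many_channels"]:
            return "NCHW"
        return "NHWC"

    def to_layout(data_nchw, layout):
        # Transform NCHW kernel-computation data into the selected layout.
        if layout == "NCHW":
            return data_nchw
        if layout == "NHWC":
            return np.transpose(data_nchw, (0, 2, 3, 1))
        if layout == "CHWN":
            return np.transpose(data_nchw, (1, 2, 3, 0))
        raise ValueError(f"unknown layout: {layout}")

    thresholds = {"large_batch": 32, "many_channels": 128}
    layout = select_layout(batch=1, channels=3, thresholds=thresholds)
    data = to_layout(np.zeros((1, 3, 224, 224)), layout)
    ```

    The transform is only applied when the data is not already in the selected layout, so the common case pays no conversion cost.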
  • Patent number: 11640533
    Abstract: A system, an apparatus and methods for utilizing software and hardware portions of a neural network to fix, or hardwire, certain portions, while modifying other portions are provided. A first set of weights for layers of the first neural network are established, and selected weights are modified to generate a second set of weights, based on a second dataset. The second set of weights is then used to train a second neural network.
    Type: Grant
    Filed: August 3, 2018
    Date of Patent: May 2, 2023
    Assignee: Arm Limited
    Inventors: Paul Nicholas Whatmough, Matthew Mattina, Jesse Garrett Beu
  • Patent number: 11636309
    Abstract: Systems and methods for modeling complex probability distributions are described. One embodiment includes a method for training a restricted Boltzmann machine (RBM), wherein the method includes generating, from a first set of visible values, a set of hidden values in a hidden layer of a RBM and generating a second set of visible values in a visible layer of the RBM based on the generated set of hidden values. The method includes computing a set of likelihood gradients based on the first set of visible values and the generated set of visible values, computing a set of adversarial gradients using an adversarial model based on at least one of the set of hidden values and the set of visible values, computing a set of compound gradients based on the set of likelihood gradients and the set of adversarial gradients, and updating the RBM based on the set of compound gradients.
    Type: Grant
    Filed: January 16, 2019
    Date of Patent: April 25, 2023
    Assignee: Unlearn.AI, Inc.
    Inventors: Charles Kenneth Fisher, Aaron Michael Smith, Jonathan Ryan Walsh
  • Patent number: 11630994
    Abstract: A method of training a neural network includes, at a local computing node, receiving remote parameters from a set of one or more remote computing nodes, initiating execution of a forward pass in a local neural network in the local computing node to determine a final output based on the remote parameters, initiating execution of a backward pass in the local neural network to determine updated parameters for the local neural network, and prior to completion of the backward pass, transmitting a subset of the updated parameters to the set of remote computing nodes.
    Type: Grant
    Filed: February 17, 2018
    Date of Patent: April 18, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Khaled Hamidouche, Michael W LeBeane, Walter B Benton, Michael L Chu
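    The early-transmission idea in the abstract above can be sketched as follows: because the backward pass produces updated parameters layer by layer (from output to input), each layer's subset can be sent as soon as it is ready. The layer names and the string stand-in for parameters are purely illustrative.

    ```python
    def backward_with_early_transmit(layer_names, send):
        # The backward pass visits layers from output to input; as soon as
        # a layer's parameters are updated, that subset is transmitted to
        # the remote nodes instead of waiting for the full pass to finish.
        updated = {}
        for name in reversed(layer_names):
            updated[name] = f"updated::{name}"  # stand-in for the real update
            send(name, updated[name])           # overlaps comms with compute
        return updated

    sent = []
    backward_with_early_transmit(["input", "hidden", "output"],
                                 lambda name, params: sent.append(name))
    ```

    Overlapping per-layer transmission with the remaining backward computation hides much of the communication latency that a bulk end-of-pass exchange would expose.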
  • Patent number: 11620506
    Abstract: Techniques are described herein for training and applying memory neural networks, such as “condensed” memory neural networks (“C-MemNN”) and/or “average” memory neural networks (“A-MemNN”). In various embodiments, the memory neural networks may be iteratively trained using training data in the form of free form clinical notes and clinical reference documents. In various embodiments, during each iteration of the training, a so-called “condensed” memory state may be generated and used as part of the next iteration. Once trained, a free form clinical note associated with a patient may be applied as input across the memory neural network to predict one or more diagnoses or outcomes of the patient.
    Type: Grant
    Filed: September 18, 2017
    Date of Patent: April 4, 2023
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Aaditya Prakash, Sheikh Sadid AL Hasan, Oladimeji Feyisetan Farri, Kathy Mi Young Lee, Vivek Varma Datla, Ashequl Qadir, Junyi Liu