Patents Examined by Ying Yu Chen
  • Patent number: 11694122
    Abstract: A distributed, online machine learning system is presented. Contemplated systems include many private data servers, each having local private data. Researchers can request that relevant private data servers train implementations of machine learning algorithms on their local private data without requiring de-identification of the private data and without exposing the private data to unauthorized computing systems. The private data servers also generate synthetic or proxy data according to the data distributions of the actual data. The servers then use the proxy data to train proxy models. When the proxy models are sufficiently similar to the trained actual models, the proxy data, proxy model parameters, or other learned knowledge can be transmitted to one or more non-private computing devices. The learned knowledge from many private data servers can then be aggregated into one or more trained global models without exposing private data.
    Type: Grant
    Filed: August 18, 2022
    Date of Patent: July 4, 2023
    Assignees: NANTOMICS, LLC, NANT HOLDINGS IP, LLC
    Inventors: Christopher W. Szeto, Stephen Charles Benz, Nicholas J. Witchey
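    Illustrative sketch (not part of the patent text): a minimal, hedged rendering of the proxy-data idea described above, in which each private server trains on its local data, samples synthetic proxy data from a fitted distribution, trains a proxy model, and releases proxy parameters only when the two models are sufficiently similar, after which the released parameters are averaged into a global model. The linear model, Gaussian proxy-data generator, and similarity tolerance are assumptions for illustration, not the patented method.
```python
import numpy as np

rng = np.random.default_rng(0)

def fit_linear(X, y):
    # Ordinary least squares; stands in for any local training routine.
    return np.linalg.lstsq(X, y, rcond=None)[0]

def private_server_update(X_private, y_private, similarity_tol=0.1):
    # 1. Train the "actual" model on local private data (data never leaves the server).
    w_actual = fit_linear(X_private, y_private)

    # 2. Model the local data distribution and sample proxy (synthetic) data from it.
    mean, cov = X_private.mean(axis=0), np.cov(X_private, rowvar=False)
    X_proxy = rng.multivariate_normal(mean, cov, size=len(X_private))
    y_proxy = X_proxy @ w_actual + rng.normal(scale=0.1, size=len(X_proxy))

    # 3. Train the proxy model on the proxy data only.
    w_proxy = fit_linear(X_proxy, y_proxy)

    # 4. Release the proxy parameters only if the proxy model is sufficiently
    #    similar to the privately trained model.
    if np.linalg.norm(w_proxy - w_actual) <= similarity_tol * np.linalg.norm(w_actual):
        return w_proxy            # learned knowledge allowed to leave the server
    return None                   # otherwise keep refining locally

# Aggregate learned knowledge from several private servers into a global model.
updates = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)
    update = private_server_update(X, y)
    if update is not None:
        updates.append(update)
if updates:
    w_global = np.mean(updates, axis=0)
    print(w_global)
```
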
  • Patent number: 11693562
    Abstract: Systems, methods and apparatus of intelligent bandwidth allocation to different types of operations to access storage media in a data storage device. For example, a data storage device of a vehicle includes: storage media components; a controller configured to store data into and retrieve data from the storage media components according to commands received in the data storage device; and an artificial neural network configured to receive, as input and as a function of time, operating parameters indicative of a data access pattern, and generate, based on the input, a prediction to determine an optimized bandwidth allocation scheme for controlling access by different types of operations in the data storage device to the storage media components. The controller is configured to schedule the operations of the different types to access the storage media components according to the optimized bandwidth allocation scheme.
    Type: Grant
    Filed: September 5, 2019
    Date of Patent: July 4, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Poorna Kale, Robert Richard Noel Bielby
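    Illustrative sketch (not part of the patent text): a toy version of the predictor described above, where a small neural network maps a time window of operating parameters to a bandwidth split across operation types that the controller could then enforce. The feature set, network size, operation types, and placeholder (untrained) weights are all assumptions.
```python
import numpy as np

rng = np.random.default_rng(1)
OP_TYPES = ["host_read", "host_write", "background"]

# Operating parameters as a function of time: 8 time steps x 6 features
# (e.g. queue depths and arrival rates per operation type).
window = rng.random((8, 6))
x = window.flatten()

# Tiny two-layer perceptron; a real device would train or load these weights.
W1, b1 = rng.normal(size=(48, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 3)), np.zeros(3)

def predict_allocation(x):
    h = np.tanh(x @ W1 + b1)
    logits = h @ W2 + b2
    shares = np.exp(logits - logits.max())
    return shares / shares.sum()              # softmax -> fractions of bandwidth

shares = predict_allocation(x)
total_bandwidth_mb_s = 2000.0                 # illustrative media bandwidth
allocation = {op: share * total_bandwidth_mb_s for op, share in zip(OP_TYPES, shares)}

# The controller would then admit commands of each type up to its budget.
print(allocation)
```
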
  • Patent number: 11693392
    Abstract: Example implementations described herein are directed to a system for manufacturing dispatching using reinforcement learning and transfer learning. The systems and methods described herein can be deployed in factories for manufacturing dispatching for reducing job-due related costs. In particular, example implementations described herein can be used to reduce massive data collection and reduce model training time, which can eventually improve dispatching efficiency and reduce factory cost.
    Type: Grant
    Filed: January 30, 2019
    Date of Patent: July 4, 2023
    Assignee: HITACHI, LTD.
    Inventors: Shuai Zheng, Chetan Gupta, Susumu Serita
  • Patent number: 11681916
    Abstract: A system maintains a knowledge layout to support the building of event and analytics models in parity. The system uses the event models to provide a snapshot of the relevant conditions present when a challenge event occurs. The system uses the analytics models to select one or more actions (which may include robotic tasks) to respond to the challenge condition. In some cases, the system may render continued response compulsory until a successful response to the challenge event is achieved.
    Type: Grant
    Filed: July 24, 2020
    Date of Patent: June 20, 2023
    Assignee: ACCENTURE GLOBAL SOLUTIONS LIMITED
    Inventors: Michael Thomas Giba, Teresa Sheausan Tung, Colin Anil Puri
  • Patent number: 11675693
    Abstract: A novel and useful neural network (NN) processing core incorporating inter-device connectivity and adapted to implement artificial neural networks (ANNs). A chip-to-chip interface spreads a given ANN model across multiple devices in a seamless manner. The NN processor is constructed from self-contained computational units organized in a hierarchical architecture. The homogeneity enables simpler management and control of similar computational units, aggregated in multiple levels of hierarchy. Computational units are designed with as little overhead as possible, with additional features and capabilities aggregated at higher levels in the hierarchy. On-chip memory provides storage for content inherently required for basic operation at a particular hierarchy and is coupled with the computational resources in an optimal ratio. Lean control provides just enough signaling to manage only the operations required at a particular hierarchical level.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: June 13, 2023
    Inventors: Avi Baum, Or Danon, Hadar Zeitlin, Daniel Ciubotariu, Rami Feig
  • Patent number: 11657264
    Abstract: Media content is received for streaming to a user device. A neural network is trained based on a first portion of the media content. Weights of the neural network are updated to overfit the first portion of the media content to provide a first overfitted neural network. The neural network or the first overfitted neural network is trained based on a second portion of the media content. Weights of the neural network or the first overfitted neural network are updated to overfit the second portion of the media content to provide a second overfitted neural network. The first portion and the second portion of the media content are sent to the user device with associations to the first overfitted neural network and the second overfitted neural network.
    Type: Grant
    Filed: April 9, 2018
    Date of Patent: May 23, 2023
    Assignee: Nokia Technologies Oy
    Inventors: Francesco Cricri, Caglar Aytekin, Emre Baris Aksu, Miika Sakari Tupala, Xingyang Ni
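    Illustrative sketch (not part of the patent text): a small demonstration of per-portion overfitting, in which a 1-D stand-in for the media content is split into two portions, a tiny network is deliberately overfitted to the first portion, the second portion's network is warm-started from the first, and each portion is paired with its associated weights for sending. The signal, network size, and training schedule are assumptions.
```python
import numpy as np

rng = np.random.default_rng(2)
signal = np.sin(np.linspace(0, 8 * np.pi, 400))    # stand-in for media content
portions = np.split(signal, 2)                      # first / second portion

def overfit(y, params=None, hidden=32, steps=3000, lr=0.05):
    """Overfit a coordinate -> sample mapping with a one-hidden-layer network."""
    x = np.linspace(-1, 1, len(y))[:, None]
    y = y[:, None]
    if params is None:
        W1, b1 = rng.normal(scale=0.5, size=(1, hidden)), np.zeros(hidden)
        W2, b2 = rng.normal(scale=0.5, size=(hidden, 1)), np.zeros(1)
    else:
        W1, b1, W2, b2 = [p.copy() for p in params]   # warm start from prior portion
    n = len(y)
    for _ in range(steps):
        H = np.tanh(x @ W1 + b1)
        e = (H @ W2 + b2) - y
        dout = 2 * e / n
        dW2, db2 = H.T @ dout, dout.sum(0)
        dpre = (dout @ W2.T) * (1 - H ** 2)
        dW1, db1 = x.T @ dpre, dpre.sum(0)
        W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2
    return [W1, b1, W2, b2]

params_1 = overfit(portions[0])                       # first overfitted network
params_2 = overfit(portions[1], params=params_1)      # trained further on second portion

# What gets sent to the user device: portions with their associated weights.
stream = [(portions[0], params_1), (portions[1], params_2)]
```
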
  • Patent number: 11645358
    Abstract: In an example, a neural network program corresponding to a neural network model is received. The neural network program includes matrices, vectors, and matrix-vector multiplication (MVM) operations. A computation graph corresponding to the neural network model is generated. The computation graph includes a plurality of nodes, each node representing a MVM operation, a matrix, or a vector. Further, a class model corresponding to the neural network model is populated with a data structure pointing to the computation graph. The computation graph is traversed based on the class model. Based on the traversal, the plurality of MVM operations are assigned to MVM units of a neural network accelerator. Each MVM unit can perform a MVM operation. Based on assignment of the plurality of MVM operations, an executable file is generated for execution by the neural network accelerator.
    Type: Grant
    Filed: January 29, 2019
    Date of Patent: May 9, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Soumitra Chatterjee, Sunil Vishwanathpur Lakshminarasimha, Mohan Parthasarathy
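    Illustrative sketch (not part of the patent text): a simplified version of the compile flow described above; it builds a computation graph of matrix, vector, and MVM nodes, traverses it, assigns each MVM operation to one of the accelerator's MVM units, and emits a flat instruction list standing in for the executable file. The node structure, round-robin assignment, and instruction format are assumptions rather than the patent's class model.
```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str                      # "matrix", "vector", or "mvm"
    inputs: list = field(default_factory=list)

# Two-layer model: h = W1 * x ; y = W2 * h
x  = Node("x",  "vector")
W1 = Node("W1", "matrix")
h  = Node("h",  "mvm", inputs=[W1, x])
W2 = Node("W2", "matrix")
y  = Node("y",  "mvm", inputs=[W2, h])
graph = [x, W1, h, W2, y]          # already in topological order here

NUM_MVM_UNITS = 4

def compile_graph(graph, num_units):
    """Assign each MVM node to an MVM unit and emit an instruction list."""
    instructions, next_unit = [], 0
    for node in graph:             # traversal of the computation graph
        if node.kind != "mvm":
            continue
        matrix, vector = node.inputs
        instructions.append({
            "op": "MVM",
            "unit": next_unit,     # each MVM unit can perform one MVM operation
            "matrix": matrix.name,
            "vector": vector.name,
            "dest": node.name,
        })
        next_unit = (next_unit + 1) % num_units
    return instructions

executable = compile_graph(graph, NUM_MVM_UNITS)
for instruction in executable:
    print(instruction)
```
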
  • Patent number: 11645508
    Abstract: A method for generating a trained model is provided. The method for generating a trained model includes: receiving learning data; generating an asymmetric multi-task feature network including a parameter matrix of the trained model which permits an asymmetric knowledge transfer between tasks and a feedback matrix for a feedback connection from the tasks to features; computing a parameter matrix of the asymmetric multi-task feature network using the received learning data to minimize a predetermined objective function; and generating an asymmetric multi-task feature trained model using the computed parameter matrix as the parameter of the generated asymmetric multi-task feature network.
    Type: Grant
    Filed: June 7, 2018
    Date of Patent: May 9, 2023
    Assignee: Korea Advanced Institute of Science and Technology
    Inventors: Sungju Hwang, Haebum Lee, Donghyun Na, Eunho Yang
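    Illustrative sketch (not part of the patent text): a much-simplified stand-in for the setup described above, jointly fitting a parameter matrix (features to tasks) and a feedback matrix (task predictions back to features) by minimizing a combined task-loss, feedback-reconstruction, and regularization objective. The specific losses, weights, and optimizer are assumptions; the patented objective function is not reproduced here.
```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n, d, t = 100, 8, 3                        # samples, features, tasks
X = rng.normal(size=(n, d))
W_true = rng.normal(size=(d, t))
Y = X @ W_true + 0.1 * rng.normal(size=(n, t))

gamma, lam = 0.1, 0.01                     # feedback and regularization weights

def objective(theta):
    W = theta[: d * t].reshape(d, t)       # parameter matrix (features -> tasks)
    B = theta[d * t:].reshape(t, d)        # feedback matrix (tasks -> features)
    pred = X @ W
    task_loss = np.mean((Y - pred) ** 2)
    feedback_loss = np.mean((X - pred @ B) ** 2)   # feedback connection
    return task_loss + gamma * feedback_loss + lam * (W ** 2).sum()

theta0 = 0.01 * rng.normal(size=2 * d * t)
result = minimize(objective, theta0, method="L-BFGS-B")
W_learned = result.x[: d * t].reshape(d, t)
B_learned = result.x[d * t:].reshape(t, d)
print("objective value:", round(float(result.fun), 4))
```
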
  • Patent number: 11645501
    Abstract: Systems for distributed, event-based computation are provided. In various embodiments, the systems include a plurality of neurosynaptic processors and a network interconnecting the plurality of neurosynaptic processors. Each neurosynaptic processor includes a clock uncoupled from the clock of each other neurosynaptic processor. Each neurosynaptic processor is adapted to receive an input stream, the input stream comprising a plurality of inputs and a clock value associated with each of the plurality of inputs. Each neurosynaptic processor is adapted to compute, for each clock value, an output based on the inputs associated with that clock value. Each neurosynaptic processor is adapted to send to another of the plurality of neurosynaptic processors, via the network, the output and an associated clock value.
    Type: Grant
    Filed: February 28, 2018
    Date of Patent: May 9, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Arnon Amir, David Berg, Pallab Datta, Jeffrey A. Kusnitz, Hartmut Penner
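    Illustrative sketch (not part of the patent text): a toy model of the event-based scheme described above, where each processor keeps its own clock, groups incoming events by the clock value they carry, computes one output per clock value, and forwards (clock value, output) pairs to another processor over a network abstraction. The placeholder computation and queue-based network are assumptions.
```python
from collections import defaultdict, deque

class Network:
    """Delivers (clock_value, payload) events between processors."""
    def __init__(self):
        self.queues = defaultdict(deque)
    def send(self, dest, clock_value, payload):
        self.queues[dest].append((clock_value, payload))
    def receive(self, dest):
        q = self.queues[dest]
        return q.popleft() if q else None

class NeurosynapticProcessor:
    def __init__(self, name, network, downstream=None):
        self.name = name
        self.network = network
        self.downstream = downstream
        self.local_clock = 0                 # uncoupled from other processors

    def process_stream(self, input_stream):
        # Group the inputs by the clock value associated with them.
        by_clock = defaultdict(list)
        for clock_value, value in input_stream:
            by_clock[clock_value].append(value)
        # Compute one output per clock value (placeholder: thresholded sum).
        for clock_value in sorted(by_clock):
            self.local_clock += 1            # advances independently
            output = int(sum(by_clock[clock_value]) > 1)
            if self.downstream is not None:
                self.network.send(self.downstream, clock_value, output)

net = Network()
p0 = NeurosynapticProcessor("p0", net, downstream="p1")
p0.process_stream([(0, 1), (0, 1), (1, 0), (2, 1)])
print(list(net.queues["p1"]))                # events addressed to processor p1
```
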
  • Patent number: 11645110
    Abstract: Aspects of the present disclosure relate to automatically generating a user manual using a technique that includes training a first model with a first set of training data. The technique further includes generating, by the first model, a set of operations and a set of windows, where the set of operations and the set of windows are functions of the program. The technique further includes generating a plurality of tasks, where a first task comprises a first operation being performed on a first window. The technique further includes determining an order of the plurality of tasks and calculating a level score for the first operation on the first window. The technique further includes assembling the user manual having the plurality of tasks in the determined order.
    Type: Grant
    Filed: March 13, 2019
    Date of Patent: May 9, 2023
    Assignee: International Business Machines Corporation
    Inventors: Xiao Feng Ji, Yuan Jin, Li ping Wang, Xiao Rui Shao
  • Patent number: 11645513
    Abstract: Methods and systems are described for populating knowledge graphs. A processor can identify a set of data in a knowledge graph. The processor can identify a plurality of portions of an unannotated corpus, where a portion includes at least one entity. The processor can cluster the plurality of portions into at least one data set based on the at least one entity of the plurality of portions. The processor can train a model using the at least one data set and the set of data identified from the knowledge graph. The processor can apply the model to a set of entities in the unannotated corpus to predict unary relations associated with the set of entities. The processor can convert the predicted unary relations into a set of binary relations associated with the set of entities. The processor can add the set of binary relations to the knowledge graph.
    Type: Grant
    Filed: July 3, 2019
    Date of Patent: May 9, 2023
    Assignee: International Business Machines Corporation
    Inventors: Michael Robert Glass, Alfio Massimiliano Gliozzo
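    Illustrative sketch (not part of the patent text): a toy pipeline in the spirit of the abstract above; it seeds labels from existing knowledge-graph triples, clusters unannotated text portions by entity, trains a deliberately tiny nearest-centroid model to predict unary relations of the form relation:value, and converts predicted unary relations into binary triples added back to the graph. The corpus, features, and model are assumptions.
```python
import numpy as np
from collections import defaultdict

kg = {("France", "located_in", "Europe")}                  # existing triples
corpus = [
    ("France",  "france is a country in europe"),
    ("Germany", "germany is a country in europe"),          # entity to enrich
]

vocab = sorted({w for _, text in corpus for w in text.split()})

def bow(text):
    return np.array([text.split().count(w) for w in vocab], dtype=float)

# Cluster the unannotated portions by the entity they mention.
portions_by_entity = defaultdict(list)
for entity, text in corpus:
    portions_by_entity[entity].append(bow(text))

# Unary labels from the knowledge graph: relation and object folded together.
unary_label = {head: f"{rel}:{tail}" for head, rel, tail in kg}

# "Train" a tiny model: one centroid per unary label from the labelled entities.
centroids = {label: np.mean(portions_by_entity[entity], axis=0)
             for entity, label in unary_label.items()}

def predict_unary(entity, threshold=0.5):
    v = np.mean(portions_by_entity[entity], axis=0)
    best_label, best_score = None, 0.0
    for label, c in centroids.items():
        score = v @ c / (np.linalg.norm(v) * np.linalg.norm(c) + 1e-9)
        if score > best_score:
            best_label, best_score = label, score
    return best_label if best_score >= threshold else None

# Predict unary relations for unlabelled entities and convert them to binary triples.
for entity in portions_by_entity:
    if entity in unary_label:
        continue
    label = predict_unary(entity)
    if label is not None:
        relation, obj = label.split(":", 1)
        kg.add((entity, relation, obj))                     # new binary relation

print(kg)
```
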
  • Patent number: 11645510
    Abstract: An example method for accelerating neuron computations in an artificial neural network (ANN) comprises receiving a plurality of pairs of first values and second values associated with a neuron of an ANN, selecting pairs from the plurality of pairs, wherein a count of the selected pairs is less than a count of all pairs in the plurality of pairs, performing mathematical operations on the selected pairs to obtain a result, determining that the result does not satisfy a criterion, and, until the result satisfies the criterion, selecting further pairs from the plurality, performing the mathematical operations on the selected further pairs to obtain further results, and determining, based on the result and the further results, an output of the neuron.
    Type: Grant
    Filed: April 8, 2019
    Date of Patent: May 9, 2023
    Assignee: MIPSOLOGY SAS
    Inventor: Ludovic Larzul
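    Illustrative sketch (not part of the patent text): one way to realize the early-exit idea above for a ReLU neuron, processing (input, weight) pairs in small batches and stopping once the partial sum plus an upper bound on the remaining contributions shows the pre-activation must be negative, so the output is zero without touching the remaining pairs. The batch size, bound, and stopping criterion are assumptions.
```python
import numpy as np

rng = np.random.default_rng(4)

def relu_neuron_early_exit(inputs, weights, batch=8):
    """Return (neuron output, number of pairs actually processed)."""
    contrib = inputs * weights
    # remaining[i] = upper bound on what the pairs after index i could still add.
    tail = np.cumsum(np.abs(contrib)[::-1])[::-1]
    remaining = np.concatenate([tail[1:], [0.0]])
    partial = 0.0
    for i, c in enumerate(contrib):
        partial += c
        at_batch_end = (i + 1) % batch == 0 or i == len(contrib) - 1
        # Criterion: the pre-activation is certainly negative, so the ReLU
        # output is exactly 0 and the remaining pairs can be skipped.
        if at_batch_end and partial + remaining[i] < 0:
            return 0.0, i + 1
    return max(partial, 0.0), len(contrib)

inputs = rng.random(256)                       # nonnegative prior-layer activations
weights = rng.normal(loc=-0.3, scale=1.0, size=256)
out, used = relu_neuron_early_exit(inputs, weights)
print(f"output={out:.3f}, pairs processed={used}/256")
```
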
  • Patent number: 11640534
    Abstract: Backpropagation of an artificial neural network can be triggered based on the input data. The input data are received into the artificial neural network, and the input data are forward propagated through the artificial neural network, which generates output values at classifier layer perceptrons of the network. Classifier layer perceptrons that have the largest output values after the input data have been forward propagated through the artificial neural network are identified. The output difference between the classifier layer perceptrons that have the largest output values is determined. It is then determined whether the output difference transgresses a threshold, and if the output difference does not transgress the threshold, the artificial neural network is backpropagated.
    Type: Grant
    Filed: November 15, 2019
    Date of Patent: May 2, 2023
    Assignee: Raytheon Company
    Inventor: John E. Mixter
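    Illustrative sketch (not part of the patent text): a single softmax classifier layer standing in for the full network, showing the gating logic described above; the input is forward propagated, the two largest classifier-layer outputs are compared, and backpropagation runs only when their difference does not exceed the threshold. The layer shape, threshold, and learning rate are assumptions.
```python
import numpy as np

rng = np.random.default_rng(5)
num_features, num_classes = 20, 5
W = 0.01 * rng.normal(size=(num_features, num_classes))
b = np.zeros(num_classes)

def forward(x):
    logits = x @ W + b
    p = np.exp(logits - logits.max())
    return p / p.sum()                       # classifier-layer outputs

def maybe_backprop(x, label, threshold=0.2, lr=0.1):
    global W, b
    p = forward(x)
    top2 = np.sort(p)[-2:]                   # two largest output values
    difference = top2[1] - top2[0]
    if difference >= threshold:              # confident enough: skip backprop
        return False
    grad_logits = p.copy()
    grad_logits[label] -= 1.0                # softmax + cross-entropy gradient
    W -= lr * np.outer(x, grad_logits)
    b -= lr * grad_logits
    return True

x = rng.normal(size=num_features)
updated = maybe_backprop(x, label=2)
print("backpropagated" if updated else "skipped")
```
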
  • Patent number: 11636386
    Abstract: Methods, systems, and computer program products for determining data representative of bias within a model are provided herein. A computer-implemented method includes obtaining a first dataset on which a model was trained, wherein the first dataset contains one or more protected attributes, and a second dataset on which the model was trained, wherein the protected attributes have been removed from the second dataset; identifying, for each of the one or more protected attributes in the first dataset, one or more attributes in the second dataset correlated therewith; determining one or more instances of bias among at least a portion of the identified correlated attributes; and outputting, to at least one user, identifying information pertaining to the one or more instances of bias.
    Type: Grant
    Filed: November 21, 2019
    Date of Patent: April 25, 2023
    Assignee: International Business Machines Corporation
    Inventors: Pranay Kumar Lohia, Diptikalyan Saha, Manish Anand Bhide, Sameep Mehta
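    Illustrative sketch (not part of the patent text): a small numeric example of the proxy-bias check described above; attributes in the de-identified dataset are scored by correlation with a protected attribute, and for each strongly correlated attribute the gap in favourable-outcome rates across the groups it induces is reported. The synthetic data, decision rule, and thresholds are assumptions.
```python
import numpy as np

rng = np.random.default_rng(6)
n = 2000

protected = rng.integers(0, 2, n)                       # protected attribute (first dataset)
zip_region = (protected + (rng.random(n) < 0.2)) % 2    # proxy correlated with it
income = rng.normal(size=n)                             # unrelated attribute
second_dataset = {"zip_region": zip_region, "income": income}

# A model trained without the protected attribute (placeholder decision rule).
predictions = (0.8 * zip_region + 0.2 * rng.random(n) > 0.5).astype(int)

# 1. Identify attributes correlated with the protected attribute.
correlated = {name: abs(np.corrcoef(protected, values)[0, 1])
              for name, values in second_dataset.items()}
proxies = [name for name, r in correlated.items() if r > 0.4]

# 2. Measure bias (difference in favourable-outcome rate) across proxy groups.
for name in proxies:
    values = second_dataset[name]
    groups = values > values.mean()
    gap = abs(predictions[groups].mean() - predictions[~groups].mean())
    print(f"{name}: correlation={correlated[name]:.2f}, outcome-rate gap={gap:.2f}")
```
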
  • Patent number: 11636001
    Abstract: Embodiments of the invention provide a method and system for determining an error threshold value when a vector distance based error measure is to be used for machine failure prediction. The method comprises: identifying a plurality of basic memory depth values based on a target sequence to be used for machine failure prediction; calculating an average depth value based on the plurality of basic memory depth values; retrieving an elementary error threshold value, based on the average depth value, from a pre-stored table which is stored in a memory and includes a plurality of mappings wherein each mapping associates a predetermined depth value of an elementary sequence to an elementary error threshold value; and calculating an error threshold value corresponding to the target sequence based on both the retrieved elementary error threshold value and a standard deviation of the plurality of basic memory depth values.
    Type: Grant
    Filed: April 24, 2019
    Date of Patent: April 25, 2023
    Assignee: Avanseus Holdings Pte. Ltd.
    Inventor: Chiranjib Bhandary
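    Illustrative sketch (not part of the patent text): the threshold workflow described above with made-up numbers; basic memory-depth values are taken here to be the gaps between failure events in the target sequence, their average is used to look up an elementary threshold in a pre-stored table, and the final threshold combines that value with the spread of the depths. The table contents and the combination rule are assumptions.
```python
import numpy as np

# Target sequence: 1 marks a machine failure event (hypothetical data).
target_sequence = np.array([0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1])

# Basic memory depth values: distances between consecutive failures (assumed definition).
failure_idx = np.flatnonzero(target_sequence == 1)
basic_depths = np.diff(failure_idx)

average_depth = basic_depths.mean()
depth_std = basic_depths.std()

# Pre-stored table: predetermined depth of an elementary sequence -> elementary threshold.
elementary_table = {3: 0.05, 4: 0.04, 5: 0.03, 6: 0.025, 8: 0.02}

# Retrieve the elementary threshold for the depth closest to the average depth.
nearest_depth = min(elementary_table, key=lambda d: abs(d - average_depth))
elementary_threshold = elementary_table[nearest_depth]

# Combine the elementary threshold with the depth spread (one plausible rule).
error_threshold = elementary_threshold * (1.0 + depth_std / average_depth)
print(round(error_threshold, 4))
```
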
  • Patent number: 11630982
    Abstract: Aspects of the present disclosure address systems and methods for fixed-point quantization using a dynamic quantization level adjustment scheme. Consistent with some embodiments, a method comprises accessing a neural network comprising floating-point representations of filter weights corresponding to one or more convolution layers. The method further includes determining a peak value of interest from the filter weights and determining a quantization level for the filter weights based on a number of bits in a quantization scheme. The method further includes dynamically adjusting the quantization level based on one or more constraints. The method further includes determining a quantization scale of the filter weights based on the peak value of interest and the adjusted quantization level. The method further includes quantizing the floating-point representations of the filter weights using the quantization scale to generate fixed-point representations of the filter weights.
    Type: Grant
    Filed: September 14, 2018
    Date of Patent: April 18, 2023
    Assignee: Cadence Design Systems, Inc.
    Inventors: Ming Kai Hsu, Sandip Parikh
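    Illustrative sketch (not part of the patent text): the quantization steps described above on random filter weights; a peak value of interest is taken from the weights, the quantization level follows from the bit width, the level is adjusted under an illustrative headroom constraint, and the resulting scale converts the weights to fixed point. The adjustment rule and layer shape are assumptions.
```python
import numpy as np

rng = np.random.default_rng(7)
weights = rng.normal(scale=0.2, size=(64, 3, 3, 3))    # conv filter weights (float)

bits = 8
peak = np.abs(weights).max()                           # peak value of interest
level = 2 ** (bits - 1) - 1                            # 127 for 8-bit signed

# Dynamic adjustment: leave headroom so accumulators do not saturate
# (illustrative constraint, not the patent's rule).
headroom_fraction = 0.05
adjusted_level = int(level * (1.0 - headroom_fraction))

scale = adjusted_level / peak                          # quantization scale
fixed_point = np.clip(np.round(weights * scale),
                      -adjusted_level, adjusted_level).astype(np.int8)

# Dequantize to inspect the error introduced by quantization.
recovered = fixed_point / scale
print("max abs error:", np.abs(recovered - weights).max())
```
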
  • Patent number: 11625640
    Abstract: In one embodiment, a device distributes sets of training records from a training dataset for a random forest-based classifier among a plurality of workers of a computing cluster. Each worker determines whether it can perform a node split operation locally on the random forest by comparing a number of training records at the worker to a predefined threshold. The device determines, for each of the split operations, a data size and entropy measure of the training records to be used for the split operation. The device applies a machine learning-based predictor to the determined data size and entropy measure of the training records to be used for the split operation, to predict its completion time. The device coordinates the workers of the computing cluster to perform the node split operations in parallel such that the node split operations in a given batch are grouped based on their predicted completion times.
    Type: Grant
    Filed: October 5, 2018
    Date of Patent: April 11, 2023
    Assignee: Cisco Technology, Inc.
    Inventors: Radek Starosta, Jan Brabec, Lukas Machlica
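    Illustrative sketch (not part of the patent text): the scheduling step described above in miniature; each pending node split is summarized by the size and label entropy of its training records, a stand-in linear predictor estimates its completion time, and splits with similar predicted times are grouped into batches. The predictor coefficients, batch size, and synthetic workload are assumptions.
```python
import numpy as np

rng = np.random.default_rng(8)

def label_entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Pending split operations: each has a set of training-record labels at a worker.
pending_splits = [rng.integers(0, 2, size=rng.integers(50, 5000)) for _ in range(12)]

features = np.array([[len(labels), label_entropy(labels)] for labels in pending_splits])

# Hypothetical learned predictor: completion time grows with size and entropy.
coef = np.array([0.002, 30.0])
predicted_ms = features @ coef + 5.0

# Group splits into batches of similar predicted completion time.
order = np.argsort(predicted_ms)
batch_size = 4
batches = [order[i:i + batch_size].tolist() for i in range(0, len(order), batch_size)]
for b, idx in enumerate(batches):
    print(f"batch {b}: splits {idx}, predicted {predicted_ms[idx].round(1)} ms")
```
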
  • Patent number: 11625631
    Abstract: An apparatus for implementing a computing system to predict preferences includes at least one processor device operatively coupled to a memory. The at least one processor device is configured to calculate a parameter relating to a density of a prior distribution at each sample of a set of samples associated with the prior distribution, the parameter including a distance from each sample to at least one neighboring sample. The at least one processor device is further configured to estimate, for the set of samples, at least one differential entropy of at least one posterior distribution associated with at least one observation based on the parameter relating to the density of the prior distribution at each sample and the likelihood of observation for each sample. The estimation is performed without sampling the at least one posterior distribution to reduce consumption of resources of the computing system.
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: April 11, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Takayuki Osogami, Rudy Raymond Harry Putra
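    Illustrative sketch (not part of the patent text): one concrete way to estimate a posterior's differential entropy from prior samples only, approximating the prior density at each sample from its nearest-neighbour distance (with the usual bias correction), weighting samples by their observation likelihood, and forming a self-normalized importance-weighted entropy estimate that never samples the posterior. The 1-D Gaussian example and this particular estimator are assumptions.
```python
import numpy as np

rng = np.random.default_rng(9)
N = 5000

# Samples from the prior (1-D Gaussian) and the likelihood of one observation.
prior_samples = rng.normal(loc=0.0, scale=1.0, size=N)
observation, noise_std = 0.7, 0.5
lik = np.exp(-0.5 * ((observation - prior_samples) / noise_std) ** 2)

# Prior density at each sample from the distance to its nearest neighbour
# (1-nearest-neighbour spacing estimate with the standard bias correction).
xs = np.sort(prior_samples)
gaps = np.diff(xs)
nn_dist = np.minimum(np.append(gaps, np.inf), np.insert(gaps, 0, np.inf))
density_sorted = np.exp(-np.euler_gamma) / (2.0 * N * nn_dist)
prior_density = density_sorted[np.argsort(np.argsort(prior_samples))]

# Posterior density at the prior samples and self-normalized importance weights;
# the posterior itself is never sampled.
evidence = lik.mean()
posterior_density = prior_density * lik / evidence
weights = lik / lik.sum()
entropy_estimate = -(weights * np.log(posterior_density)).sum()

# Closed-form differential entropy of the true Gaussian posterior, for comparison.
posterior_var = 1.0 / (1.0 / 1.0 ** 2 + 1.0 / noise_std ** 2)
print(entropy_estimate, 0.5 * np.log(2 * np.pi * np.e * posterior_var))
```
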
  • Patent number: 11625598
    Abstract: Systems, devices, methods, and computer readable media for training a machine learning architecture include: receiving one or more observation data sets representing one or more observations associated with at least a portion of a state; and training the machine learning architecture with the one or more observation data sets, where the training includes updating a plurality of weights based on an error value and at least one time-varying step-size value, wherein the at least one step-size value is based on a set of meta-weights which vary based on stochastic meta-descent.
    Type: Grant
    Filed: March 5, 2019
    Date of Patent: April 11, 2023
    Assignee: ROYAL BANK OF CANADA
    Inventor: Alexandra Kathleen Kearney
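    Illustrative sketch (not part of the patent text): a classic instance of time-varying, per-weight step sizes driven by meta-weights and stochastic meta-descent, namely Sutton's IDBD rule for a linear learner; it is shown as one concrete example of the mechanism described above, not the patented update.
```python
import numpy as np

rng = np.random.default_rng(10)
d = 10
w_true = rng.normal(size=d)

w = np.zeros(d)                  # learner weights
beta = np.full(d, np.log(0.05))  # meta-weights: log step-size per weight
h = np.zeros(d)                  # trace used by the meta-update
meta_lr = 0.01                   # meta step size (theta in IDBD)

for step in range(5000):
    x = rng.normal(size=d)
    y = w_true @ x + 0.1 * rng.normal()

    delta = y - w @ x                         # prediction error
    beta += meta_lr * delta * x * h           # meta-descent on the meta-weights
    alpha = np.exp(beta)                      # time-varying per-weight step sizes
    w += alpha * delta * x                    # weight update with adapted steps
    h = h * np.maximum(0.0, 1.0 - alpha * x * x) + alpha * delta * x

print("mean step size:", np.exp(beta).mean(), "max weight error:", np.abs(w - w_true).max())
```
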
  • Patent number: 11625099
    Abstract: Systems, methods, and protocols for developing invasive brain computer interface (iBCI) decoders non-invasively by using emulated brain data are provided. A human operator can interact in real-time with control algorithms designed for iBCI. An operator can provide input to one or more computer models (e.g., via body gestures), and this process can generate emulated brain signals that would otherwise require invasive brain electrodes to obtain.
    Type: Grant
    Filed: July 11, 2022
    Date of Patent: April 11, 2023
    Assignee: THE FLORIDA INTERNATIONAL UNIVERSITY BOARD OF TRUSTEES
    Inventors: Tzu-Hsiang Lin, Zachary Danziger