Abstract: Systems, methods, and computer-executable instructions for determining a computation schedule for a recurrent neural network (RNN). A matrix multiplication (MM) directed-acyclic graph (DAG) is received for the RNN. Valid phased computation schedules for the RNN are generated. Each of the valid phased computation schedules includes an ordering of MM operations. For each of the valid phased computation schedules, each of the MM operations is partitioned across processor cores based on L3 cache to L2 cache data movement. The RNN is executed based on the valid phased computation schedules. A final computation schedule is stored and used for future executions of the RNN.
Type:
Grant
Filed:
June 26, 2018
Date of Patent:
December 13, 2022
Assignee:
Microsoft Technology Licensing, LLC
Inventors:
Minjia Zhang, Samyam Rajbhandari, Wenhan Wang, Yuxiong He
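The scheduling idea in the abstract above can be sketched minimally: the valid phased schedules correspond to topological orderings of the MM DAG. The toy RNN-cell operations (`Wx`, `Uh`, `gate`), the dependency map, and the brute-force enumeration below are illustrative assumptions, not the patented method, which would additionally score each schedule with a cache cost model.

```python
from itertools import permutations

def topological_orders(nodes, deps):
    """Enumerate all orderings of MM operations that respect DAG dependencies."""
    orders = []
    for perm in permutations(nodes):
        seen = set()
        valid = True
        for op in perm:
            if any(d not in seen for d in deps.get(op, ())):
                valid = False
                break
            seen.add(op)
        if valid:
            orders.append(perm)
    return orders

# Toy MM DAG for one RNN cell: the input GEMM (Wx) and the recurrent
# GEMM (Uh) must both finish before the gate update consumes them.
deps = {"gate": {"Wx", "Uh"}}
schedules = topological_orders(["Wx", "Uh", "gate"], deps)
```

With three operations and one join, only the two orderings that leave `gate` last survive; a cost model over L3-to-L2 data movement would then pick between them.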
Abstract: An information processing apparatus comprises a storage unit configured to store correct answer data used to detect at least one portion of a detection object from an image and detection data detected as the at least one portion of the detection object from the image; a target determination unit configured to extract mismatching data between the correct answer data and the detection data, which exists within a predetermined range from a region in which the correct answer data and the detection data match, and determine the mismatching data as evaluation target data; an investigation unit configured to investigate property information of the evaluation target data; and an error determination unit configured to determine, based on the property information, whether the evaluation target data is error candidate data of the correct answer data.
Abstract: Methods, apparatus, systems, and articles of manufacture for training a neural network are disclosed. An example apparatus includes a training data segmenter to generate a partial set of labeled training data from a set of labeled training data. A matrix constructor is to create a design of experiments matrix identifying permutations of hyperparameters to be tested. A training controller is to cause a neural network trainer to train a neural network using a plurality of the permutations of hyperparameters in the design of experiments matrix and the partial set of labeled training data, and to access results of the training corresponding to each of the permutations of hyperparameters. A result comparator is to select a permutation of hyperparameters based on the results, the training controller to instruct the neural network trainer to train the neural network using the selected permutation of hyperparameters and the full set of labeled training data.
Type:
Grant
Filed:
December 28, 2017
Date of Patent:
November 15, 2022
Assignee:
Intel Corporation
Inventors:
LayWai Kong, Takeshi Nakazawa, Anne Hansen-Musakwa
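The design-of-experiments screening described above can be sketched as a full-factorial matrix of hyperparameter permutations, each scored on the partial data, with the winner kept for full training. The grid values and the stand-in scoring function below are assumptions for illustration only.

```python
from itertools import product

def doe_matrix(hyperparams):
    """Full-factorial design-of-experiments matrix: one row per permutation."""
    keys = sorted(hyperparams)
    return [dict(zip(keys, vals))
            for vals in product(*(hyperparams[k] for k in keys))]

def select_best(rows, evaluate):
    """Screen every permutation (here, on the partial labeled set) and
    keep the permutation with the best result."""
    return max(rows, key=evaluate)

grid = {"lr": [0.1, 0.01], "batch": [16, 32]}
rows = doe_matrix(grid)
# Stand-in for training on the partial set: pretend small lr and large
# batch score best, instead of running a real trainer.
best = select_best(rows, lambda r: r["batch"] - 100 * r["lr"])
```

In the patented flow, `evaluate` would be an actual training run on the partial set, and `best` would then drive a final run on the full labeled data.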
Abstract: A neural network includes inputs for receiving input signals, synapses connected to the inputs and having corrective weights, and neurons having outputs connected with the inputs via the synapses. Each neuron generates a neuron sum by summing corrective weights selected from the respective synapse. A controller receives a desired output signal, determines a deviation of the neuron sum from the desired output signal value, and modifies respective corrective weights using the determined deviation. Adding up the modified corrective weights to determine the neuron sum minimizes the deviation and trains the network. A structure-forming module rearranges connections between network elements during the training and a signal allocation module distributes the input signals among the network elements during the training.
Type:
Grant
Filed:
July 26, 2019
Date of Patent:
November 8, 2022
Inventors:
Boris Zlotin, Dmitri Pescianschi, Vladimir Proseanic
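The corrective-weight mechanism in the abstract above admits a very small numeric sketch: the neuron sum is the sum of the selected corrective weights, and the deviation from the desired output is spread back across those weights. The equal-split update and the learning rate below are simplifying assumptions, not the patented training rule.

```python
def train_step(selected_weights, desired, rate=1.0):
    """One corrective-weight update: compute the neuron sum, measure its
    deviation from the desired output, and spread the correction equally
    across the selected corrective weights."""
    neuron_sum = sum(selected_weights)
    deviation = desired - neuron_sum
    correction = rate * deviation / len(selected_weights)
    return [w + correction for w in selected_weights]

weights = [0.2, 0.5, 0.1]          # corrective weights selected per synapse
updated = train_step(weights, desired=2.0)
```

After one full-rate step the modified weights sum exactly to the desired output, which is the sense in which "adding up the modified corrective weights ... minimizes the deviation."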
Abstract: A method for training a model and an information recommendation system are provided. The method includes the following. Multiple types of features of a target model are obtained and a feature group sequence of the multiple types of features is generated, where the feature group sequence includes multiple feature groups and a sequence relation between the multiple feature groups, and each feature group contains at least one of the multiple types of features. The target model is classified into a multi-level model according to the feature group sequence. A trained target model is obtained by executing a preset training operation on the feature weight values corresponding to each level of the multi-level model.
Type:
Grant
Filed:
October 29, 2018
Date of Patent:
November 8, 2022
Assignee:
GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a machine learning model to generate embeddings of inputs to the machine learning model, the machine learning model having an encoder that generates the embeddings from the inputs and a decoder that generates outputs from the generated embeddings, wherein the embedding is partitioned into a sequence of embedding partitions that each includes one or more dimensions of the embedding, the operations comprising: for a first embedding partition in the sequence of embedding partitions: performing initial training to train the encoder and a decoder replica corresponding to the first embedding partition; for each particular embedding partition that is after the first embedding partition in the sequence of embedding partitions: performing incremental training to train the encoder and a decoder replica corresponding to the particular partition.
Type:
Grant
Filed:
September 27, 2019
Date of Patent:
November 8, 2022
Assignee:
Google LLC
Inventors:
Robert Andrew James Clark, Chun-an Chan, Vincent Ping Leung Wan
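The partitioned-embedding training schedule described above can be sketched as a plan: an initial phase for the first partition, then one incremental phase per later partition, each training the shared encoder together with that partition's decoder replica. The phase-plan representation below is an illustrative assumption; the patent's actual training procedure operates on real encoder/decoder networks.

```python
def training_phases(num_partitions):
    """Build the phase plan from the abstract: initial training for the
    first embedding partition, then one incremental phase for each
    subsequent partition in the sequence."""
    phases = []
    for p in range(num_partitions):
        phases.append({
            "phase": "initial" if p == 0 else "incremental",
            # Each phase trains the shared encoder plus decoder replica p,
            # with the embedding dimensions of partitions 0..p active.
            "decoder_replica": p,
            "active_partitions": list(range(p + 1)),
        })
    return phases

plan = training_phases(3)
```

Because each phase only widens the set of active embedding dimensions, earlier partitions keep their meaning while later ones add refinement.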
Abstract: In an aspect, provided is a method comprising monitoring one or more data analysis sessions, determining, based on the monitoring, a common data analysis technique performed across common data analysis sessions, identifying the common data analysis technique as a precedent, and providing the precedent to a precedent engine.
Abstract: Systems and methods for automating information extraction from piping and instrumentation diagrams are provided. Traditional systems and methods do not provide for end-to-end, automated data extraction from such diagrams.
Abstract: Systems, devices, and methodologies are provided for training and controlling a neural network. The neural network is trained using definitive and random training modes to train neurons in a monolithic network. The neural network output is used to control an autonomous or semi-autonomous vehicle.
Abstract: Provided are processes for balancing exploration and optimization in knowledge discovery over unstructured data under tight interrogation budgets. A process may include determining a relevance probability distribution of responses and scores as an explanatory diagnostic. A distribution curve may be determined based on a probabilistic graphical network, and a result may be audited relative to the distribution curve to determine noise measurements. The distribution curve may be determined based on a distribution of posterior predictions of entities to score ranking-entity bias and the noisiness of ranking-entity feedback.
Type:
Grant
Filed:
October 1, 2021
Date of Patent:
September 27, 2022
Assignee:
CrowdSmart, Inc.
Inventors:
Thomas Kehler, Markus Guehrs, Sonali Sinha
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for a graph processing system. In one aspect, the graph processing system obtains data identifying a first node and a second node from a graph of nodes and edges. The system processes numeric embeddings of the first node and the second node using a manifold neural network to generate respective manifold coordinates of the first node and the second node. The system applies a learned edge function to the manifold coordinates of the first node and the manifold coordinates of the second node to generate an edge score that represents a likelihood that an entity represented by the first node and an entity represented by the second node have a particular relationship.
Type:
Grant
Filed:
April 5, 2018
Date of Patent:
September 27, 2022
Assignee:
Google LLC
Inventors:
Rami Al-rfou', Sami Ahmad Abu-El-Haija, Bryan Thomas Perozzi
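The edge-scoring step above can be sketched with a stand-in learned edge function: a logistic score on the (negated) squared distance between the two nodes' manifold coordinates, so that nearby nodes score a higher relationship likelihood. The logistic form and the `weight`/`bias` parameters are illustrative assumptions; the patent's edge function and manifold network are learned.

```python
import math

def edge_score(coords_u, coords_v, weight, bias):
    """Stand-in learned edge function: logistic score on the negative
    squared manifold distance between two node coordinate vectors."""
    d2 = sum((a - b) ** 2 for a, b in zip(coords_u, coords_v))
    return 1.0 / (1.0 + math.exp(-(weight * -d2 + bias)))

# Nodes that land close together on the manifold should score higher
# than nodes that land far apart.
near = edge_score([0.1, 0.2], [0.1, 0.25], weight=4.0, bias=0.0)
far = edge_score([0.1, 0.2], [2.0, -1.5], weight=4.0, bias=0.0)
```

The score is a probability-like value in (0, 1), matching the abstract's "likelihood that ... have a particular relationship."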
Abstract: In one aspect, the invention comprises a system and method for control of a transaction state system utilizing a distributed ledger. First, the system and method include an application plane layer adapted to receive instructions regarding operation of the transaction state system; preferably, the application plane layer is coupled to an application plane layer interface. Second, a control plane layer is provided, the control plane layer including an adaptive control unit, such as a cognitive computing unit, artificial intelligence unit, or machine-learning unit. Third, a data plane layer includes an input interface to receive data input from one or more data sources and to provide output coupled to a decentralized distributed ledger; the data plane layer is coupled to the control plane layer. Optionally, the system and method implement a smart contract on the decentralized distributed ledger.
Abstract: The present disclosure provides a computation method and product thereof. The computation method adopts a fusion method to perform machine learning computations. Technical effects of the present disclosure include fewer computations and less power consumption.
Type:
Grant
Filed:
December 19, 2019
Date of Patent:
September 13, 2022
Assignee:
SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
Abstract: Embodiments for efficient machine and deep learning hyperparameter tuning in a distributed computing system. Runtime metrics of each training iteration are collected to identify candidate jobs to merge during an execution phase. The candidate jobs are grouped into job groups, and the job groups containing the candidate jobs are merged together subsequent to each iteration boundary for execution during the execution phase.
Type:
Grant
Filed:
June 21, 2018
Date of Patent:
September 13, 2022
Assignee:
INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors:
Junfeng Liu, Kuan Feng, Zhichao Su, Yi Zhao
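The merge step described above can be sketched as grouping jobs whose collected per-iteration runtimes are close enough to execute together after an iteration boundary. The greedy leader-based grouping and the tolerance threshold below are illustrative assumptions, not the patented merging criterion.

```python
def group_candidates(jobs, tolerance):
    """Group candidate jobs whose per-iteration runtime is within
    `tolerance` of a group leader; each group is merged for execution
    after the next iteration boundary."""
    groups = []
    for name, runtime in sorted(jobs.items(), key=lambda kv: kv[1]):
        for group in groups:
            leader_runtime = jobs[group[0]]
            if abs(runtime - leader_runtime) <= tolerance:
                group.append(name)
                break
        else:
            groups.append([name])
    return groups

# Runtime metrics (seconds per training iteration) collected at runtime.
metrics = {"job-a": 1.00, "job-b": 1.05, "job-c": 3.2}
merged = group_candidates(metrics, tolerance=0.1)
```

Jobs with similar iteration times share a group (and can amortize setup and data movement when merged), while the outlier runs alone.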
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for facilitating generation of prediction models. In some embodiments, a predetermined number of parameter value sets is identified. Each parameter value set includes a plurality of parameter values that represent corresponding parameters within a time series model. The parameter values can be selected in accordance with stratified sampling to increase a likelihood of prediction accuracy. The parameter value sets are input into a time series model to generate a prediction value in accordance with observed time series data, and the parameter value set resulting in a least amount of prediction error can be selected and used to generate a time series prediction model (ARIMA, AR, MA, ARMA) with corresponding model parameters, such as p, q, and/or k, subsequently used to predict values.
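The selection loop above can be sketched for the AR case: each candidate parameter value set is plugged into the model, its prediction error is measured against the observed series, and the set with the least error is kept. The toy doubling series, the fixed-coefficient candidates, and the one-step-ahead error metric are illustrative assumptions (a real implementation would fit full ARIMA models and sample parameter sets by stratified sampling).

```python
def ar_forecast_error(series, coeffs):
    """Mean one-step-ahead squared error of an AR model whose
    coefficients are fixed to a candidate parameter value set."""
    p = len(coeffs)
    errs = []
    for t in range(p, len(series)):
        pred = sum(c * series[t - i - 1] for i, c in enumerate(coeffs))
        errs.append((series[t] - pred) ** 2)
    return sum(errs) / len(errs)

# Observed series doubles each step, so the AR(1) coefficient 2.0 is exact.
series = [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
candidates = [(0.5,), (2.0,), (1.0, 1.0)]
best = min(candidates, key=lambda c: ar_forecast_error(series, c))
```

The candidate with the least prediction error wins, mirroring the abstract's "parameter value set resulting in a least amount of prediction error can be selected."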
Abstract: A method for accelerating a neural network includes identifying neural network layers that meet a locality constraint. Code implementing depth-first processing is generated for different hardware based on the identified layers, and the generated code is then used to perform the depth-first processing on the neural network.
Type:
Grant
Filed:
February 6, 2018
Date of Patent:
August 30, 2022
Assignee:
NEC CORPORATION
Inventors:
Nicolas Weber, Felipe Huici, Mathias Niepert
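The layer-identification step above can be sketched as fusing consecutive layers that satisfy the locality constraint into chunks a code generator could process depth-first (tile by tile), while non-local layers break the chain. The layer names and the membership-based locality test below are illustrative assumptions, not the patented analysis.

```python
def fuse_local_layers(layers, is_local):
    """Group consecutive layers meeting the locality constraint into
    chunks suitable for depth-first (tile-wise) processing; a layer that
    fails the constraint ends the current chunk and stands alone."""
    chunks, current = [], []
    for layer in layers:
        if is_local(layer):
            current.append(layer)
        else:
            if current:
                chunks.append(current)
                current = []
            chunks.append([layer])
    if current:
        chunks.append(current)
    return chunks

layers = ["conv1", "relu1", "conv2", "fc"]
local = {"conv1", "relu1", "conv2"}   # e.g., elementwise/small-window ops
chunks = fuse_local_layers(layers, lambda name: name in local)
```

Generated code would then iterate tiles through each fused chunk before moving on, keeping intermediate results in fast memory.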
Abstract: A method for accelerated decision tree execution in a processor of a digital system is provided that includes receiving at least some attribute values of a plurality of attribute values of a query for the decision tree in a pre-processing component, evaluating the received attribute values in the pre-processing component according to first early termination conditions corresponding to a first decision to determine whether or not the received attribute values fulfill the first early termination conditions, and querying the decision tree with the plurality of attribute values when the received attribute values do not fulfill the first early termination conditions.
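The pre-processing step above can be sketched as checking the attribute values received so far against early-termination conditions and returning immediately on a match, with the full tree traversal as the fallback. The dictionary tree encoding and the example condition are illustrative assumptions, not the patented representation.

```python
def query(tree, attrs, early_conditions):
    """Pre-processing: if the received attribute values fulfill an
    early-termination condition, return its result without walking the
    decision tree; otherwise fall through to the ordinary traversal."""
    for condition, result in early_conditions:
        if condition(attrs):
            return result
    node = tree
    while isinstance(node, dict):          # internal node
        branch = "left" if attrs[node["attr"]] < node["threshold"] else "right"
        node = node[branch]
    return node                            # leaf label

tree = {"attr": "x", "threshold": 5, "left": "reject", "right": "accept"}
early = [(lambda a: a.get("x", 0) > 100, "accept")]
fast = query(tree, {"x": 500}, early)   # short-circuits in pre-processing
slow = query(tree, {"x": 3}, early)     # falls through to the tree walk
```

The win is that extreme attribute values skip the (potentially deep) tree entirely.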
Abstract: In accordance with embodiments, an encoder neural network is configured to receive a one-hot representation of a real text and output a latent representation generated from that one-hot representation. A decoder neural network is configured to receive the latent representation of the real text and output a reconstructed softmax representation of the real text from it, wherein the reconstructed softmax representation of the real text is a soft-text. A generator neural network is configured to generate artificial text based on random noise data. A discriminator neural network is configured to receive the soft-text, receive a softmax representation of the artificial text, and output a probability indicating whether the softmax representation of the artificial text received by the discriminator neural network is not from the generator neural network.
Abstract: A method for ensemble machine learning includes: receiving input data and input models, the input models each having learning properties; generating perturbed data by adding noise to the input data; performing a landmarking operation on the perturbed data to generate meta-features that correlate with the learning properties of the input models; generating decision trees based on the input models and the meta-features.
Type:
Grant
Filed:
March 28, 2019
Date of Patent:
August 23, 2022
Assignee:
NEC CORPORATION
Inventors:
Jihed Khiari, Luis Moreira-Matias, Saso Dzeroski, Bernard Zenko
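The perturbation and landmarking steps above can be sketched as adding Gaussian noise to the input data and then computing cheap summary statistics as stand-ins for landmarker scores. Real landmarking would run simple probe learners on the perturbed data; the statistics below are an illustrative simplification, and all names are hypothetical.

```python
import random
import statistics

def perturb(data, noise_scale, rng):
    """Generate perturbed data by adding Gaussian noise to each value."""
    return [x + rng.gauss(0.0, noise_scale) for x in data]

def landmark_meta_features(data):
    """Cheap landmarkers: summary statistics standing in for the scores
    of simple probe learners evaluated on the perturbed data."""
    return {
        "mean": statistics.mean(data),
        "stdev": statistics.pstdev(data),
        "range": max(data) - min(data),
    }

rng = random.Random(0)                     # fixed seed for reproducibility
noisy = perturb([1.0, 2.0, 3.0, 4.0], noise_scale=0.1, rng=rng)
meta = landmark_meta_features(noisy)
```

The resulting meta-features would then feed, together with the input models, into the decision-tree construction step of the ensemble.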