Patents Examined by Shien Ming Chou
  • Patent number: 11966836
    Abstract: According to one embodiment, a detection system includes an acquirer, a trainer, and a detector. The acquirer acquires first data, second data, and third data. The first data is based on an action of a first body part in a first work of a first worker having a first proficiency. The second data is based on an action of the first body part in the first work of a second worker having a second proficiency. The third data is based on an action of the first body part in the first work of a third worker. The trainer trains a recurrent neural network, including a first output layer having a first neuron and a second neuron, using the first data and the second data. The detector inputs the third data to the trained recurrent neural network and detects a response of the first neuron or the second neuron.
    Type: Grant
    Filed: August 13, 2018
    Date of Patent: April 23, 2024
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventor: Yasuo Namioka
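    Illustrative sketch: a minimal PyTorch sketch of the general idea only (all shapes, names, and data are invented, not the patented system): an RNN whose two output neurons correspond to two proficiency levels, trained on motion data from the first and second workers and then probed with the third worker's data.
      # Hypothetical sketch: an RNN with two output neurons, one per proficiency
      # level, trained on body-part motion sequences and probed at detection time.
      import torch
      import torch.nn as nn

      class ProficiencyRNN(nn.Module):
          def __init__(self, n_features=3, hidden=32):
              super().__init__()
              self.rnn = nn.GRU(n_features, hidden, batch_first=True)
              self.head = nn.Linear(hidden, 2)   # neuron 0: first proficiency, neuron 1: second

          def forward(self, x):
              _, h = self.rnn(x)                 # final hidden state: (1, batch, hidden)
              return self.head(h.squeeze(0))     # logits over the two output neurons

      model = ProficiencyRNN()
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss_fn = nn.CrossEntropyLoss()

      # Synthetic stand-ins for the first/second workers' data: (batch, time, features).
      x = torch.randn(16, 50, 3)
      y = torch.randint(0, 2, (16,))             # 0 = first proficiency, 1 = second
      for _ in range(100):
          opt.zero_grad()
          loss_fn(model(x), y).backward()
          opt.step()

      # Detection: feed the third worker's data and see which output neuron responds.
      third_worker = torch.randn(1, 50, 3)
      print(model(third_worker).softmax(dim=-1))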
  • Patent number: 11934944
    Abstract: Methods and systems are provided for training a neural network with augmented data. A dataset comprising a plurality of classes is obtained for training a neural network. Prior to initiation of training, the dataset may be augmented by performing affine transformations of the data in the dataset, wherein the amount of augmentation is determined by a data augmentation variable. The neural network is trained with the augmented dataset. A training loss and a difference of class accuracy for each class are determined. The data augmentation variable is updated based on the total loss and class accuracy for each class. The dataset is augmented by performing affine transformations of the data in the dataset according to the updated data augmentation variable, and the neural network is trained with the augmented dataset.
    Type: Grant
    Filed: October 4, 2018
    Date of Patent: March 19, 2024
    Assignee: International Business Machines Corporation
    Inventors: Takuya Goto, Masaharu Sakamoto, Hiroki Nakano
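    Illustrative sketch: a numpy-only sketch of the control loop (the stand-in model, the update rule, and all constants are invented for illustration, not the patented method): affine augmentation whose strength is a single variable updated from the spread of per-class accuracies.
      import numpy as np

      rng = np.random.default_rng(0)

      def affine_augment(x, strength):
          """Random rotation and scaling of 2-D points; magnitude set by `strength`."""
          theta = rng.uniform(-strength, strength)
          scale = 1.0 + rng.uniform(-strength, strength)
          rot = np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])
          return (x @ rot.T) * scale

      # Toy two-class dataset of 2-D points.
      x = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
      y = np.array([0] * 50 + [1] * 50)

      aug_var = 0.1                                   # the data augmentation variable
      for epoch in range(5):
          x_aug = np.vstack([affine_augment(xi[None, :], aug_var) for xi in x])
          # Stand-in "model": nearest class mean fit on the augmented data.
          means = np.stack([x_aug[y == c].mean(axis=0) for c in (0, 1)])
          pred = np.argmin(((x[:, None, :] - means) ** 2).sum(-1), axis=1)
          acc = np.array([(pred[y == c] == c).mean() for c in (0, 1)])
          # Illustrative update: widen augmentation when class accuracies diverge.
          aug_var += 0.05 * abs(acc[0] - acc[1])
          print(f"epoch {epoch}: per-class accuracy {acc}, aug_var {aug_var:.3f}")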
  • Patent number: 11922313
    Abstract: A system may include a processor and a memory. The memory may include program code that provides operations when executed by the processor. The operations may include: partitioning, based at least on a resource constraint of a platform, a global machine learning model into a plurality of local machine learning models; transforming training data to at least conform to the resource constraint of the platform; and training the global machine learning model by at least processing, at the platform, the transformed training data with a first of the plurality of local machine learning models.
    Type: Grant
    Filed: February 6, 2017
    Date of Patent: March 5, 2024
    Assignee: WILLIAM MARSH RICE UNIVERSITY
    Inventors: Bita Darvish Rouhani, Azalia Mirhoseini, Farinaz Koushanfar
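    Illustrative sketch: a numpy sketch (the sizes, the column-block split, and the random projection are invented for illustration) of carving a global linear model into smaller local pieces under a parameter budget and projecting the training data to conform to the same budget.
      import numpy as np

      rng = np.random.default_rng(1)
      d_in, d_out = 256, 64
      global_W = rng.normal(size=(d_in, d_out))        # "global model" parameters

      max_params_on_platform = 4096                    # resource constraint of the platform
      cols_per_block = max(1, max_params_on_platform // d_in)
      local_models = [global_W[:, i:i + cols_per_block]
                      for i in range(0, d_out, cols_per_block)]

      # Transform the training data so it also conforms to the constraint,
      # here via a random projection to a smaller feature dimension.
      x = rng.normal(size=(32, 1024))
      projection = rng.normal(size=(1024, d_in)) / np.sqrt(1024)
      x_platform = x @ projection

      # Process the transformed data with the first local model on the platform.
      y_partial = x_platform @ local_models[0]
      print(len(local_models), y_partial.shape)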
  • Patent number: 11853860
    Abstract: Systems, methods, devices, and other techniques are described herein for training and using neural networks to encode inputs and to process encoded inputs, e.g., to reconstruct inputs from the encoded inputs. A neural network system can include an encoder neural network, a trusted decoder neural network, and an adversary decoder neural network. The encoder neural network processes a primary neural network input and a key input to generate an encoded representation of the primary neural network input. The trusted decoder neural network processes the encoded representation and the key input to generate a first estimated reconstruction of the primary neural network input. The adversary decoder neural network processes the encoded representation without the key input to generate a second estimated reconstruction of the primary neural network input. The encoder and trusted decoder neural networks can be trained jointly, and these networks trained adversarially to the adversary decoder neural network.
    Type: Grant
    Filed: March 3, 2022
    Date of Patent: December 26, 2023
    Assignee: Google LLC
    Inventors: Martin Abadi, David Godbe Andersen
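    Illustrative sketch: a PyTorch sketch of the three-network layout only (the architectures, sizes, and +/-1 bit encoding are invented): an encoder conditioned on a key, a trusted decoder that also receives the key, and an adversary decoder that does not. An illustrative training objective appears under related patent 11308385 below.
      import torch
      import torch.nn as nn

      N = 16                                 # bits in the message and in the key

      def mlp(d_in, d_out):
          return nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(),
                               nn.Linear(64, d_out), nn.Tanh())

      encoder = mlp(2 * N, N)                # (message, key) -> encoded representation
      trusted_decoder = mlp(2 * N, N)        # (encoding, key) -> first reconstruction
      adversary_decoder = mlp(N, N)          # encoding alone  -> second reconstruction

      msg = torch.randint(0, 2, (8, N)).float() * 2 - 1    # +/-1 message bits
      key = torch.randint(0, 2, (8, N)).float() * 2 - 1
      code = encoder(torch.cat([msg, key], dim=1))
      print(trusted_decoder(torch.cat([code, key], dim=1)).shape,
            adversary_decoder(code).shape)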
  • Patent number: 11741346
    Abstract: Devices and methods for systolically processing data according to a neural network. In one aspect, a first arrangement of processing units includes at least first, second, third, and fourth processing units. The first and second processing units are connected to systolically pulse data to one another, and the third and fourth processing units are connected to systolically pulse data to one another. A second arrangement of processing units includes at least fifth, sixth, seventh, and eighth processing units. The fifth and sixth processing units are connected to systolically pulse data to one another, and the seventh and eighth processing units are connected to systolically pulse data to one another. The second processing unit is configured to systolically pulse data to the seventh processing unit along a first interconnect and the third processing unit is configured to systolically pulse data to the sixth processing unit along a second interconnect.
    Type: Grant
    Filed: May 16, 2018
    Date of Patent: August 29, 2023
    Assignee: Western Digital Technologies, Inc.
    Inventor: Luiz M. Franca-Neto
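    Illustrative sketch: a numpy simulation of systolic data movement in general (an output-stationary matrix multiply where operands pulse one hop per cycle); it illustrates the data-flow style, not the patent's specific arrangement of processing units and interconnects.
      import numpy as np

      A = np.arange(9, dtype=float).reshape(3, 3)
      B = np.arange(9, dtype=float).reshape(3, 3)[::-1]
      n = 3
      acc = np.zeros((n, n))                  # output-stationary accumulators, one per PU

      # Operands are skewed so the k-th pair for PU (i, j) arrives at cycle i + j + k.
      for cycle in range(3 * n - 2):
          for i in range(n):
              for j in range(n):
                  k = cycle - i - j           # which operand pair reaches PU (i, j) this cycle
                  if 0 <= k < n:
                      acc[i, j] += A[i, k] * B[k, j]   # multiply-accumulate on pulsed data

      assert np.allclose(acc, A @ B)
      print(acc)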
  • Patent number: 11741361
    Abstract: A method and an apparatus to build a machine learning based network model are described. For example, processing circuitry of an information processing apparatus obtains a data processing procedure of a first network model and a reference dataset that is generated by the first network model in the data processing procedure. The data processing procedure includes a first data processing step. Further, the processing circuitry builds a first sub-network in a second network model of a neural network type. The second network model is the machine learning based network model to be built. The first sub-network performs the first data processing step. Then, the processing circuitry performs optimization training on the first sub-network by using the reference dataset.
    Type: Grant
    Filed: May 21, 2018
    Date of Patent: August 29, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Bo Zheng, Zhibin Liu, Rijia Liu, Qian Chen
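    Illustrative sketch: a PyTorch sketch (the processing step, architecture, and training schedule are invented) of recording a reference dataset from one data processing step of an existing model and fitting a small sub-network to reproduce that step.
      import torch
      import torch.nn as nn

      def first_data_processing_step(x):
          # Stand-in for one step of the first network model.
          return torch.sin(x) + 0.5 * x

      x_ref = torch.linspace(-3, 3, 256).unsqueeze(1)
      y_ref = first_data_processing_step(x_ref)            # the reference dataset

      sub_network = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
      opt = torch.optim.Adam(sub_network.parameters(), lr=1e-2)
      for _ in range(500):
          opt.zero_grad()
          loss = nn.functional.mse_loss(sub_network(x_ref), y_ref)
          loss.backward()
          opt.step()
      print(loss.item())                                   # optimization training on the reference data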
  • Patent number: 11715009
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network including a first subnetwork followed by a second subnetwork on training inputs by optimizing an objective function. In one aspect, a method includes processing a training input using the neural network to generate a training model output, including processing a subnetwork input for the training input using the first subnetwork to generate a subnetwork activation for the training input in accordance with current values of parameters of the first subnetwork, and providing the subnetwork activation as input to the second subnetwork; determining a synthetic gradient of the objective function for the first subnetwork by processing the subnetwork activation using a synthetic gradient model in accordance with current values of parameters of the synthetic gradient model; and updating the current values of the parameters of the first subnetwork using the synthetic gradient.
    Type: Grant
    Filed: May 19, 2017
    Date of Patent: August 1, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Oriol Vinyals, Alexander Benjamin Graves, Wojciech Czarnecki, Koray Kavukcuoglu, Simon Osindero, Maxwell Elliot Jaderberg
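    Illustrative sketch: a PyTorch sketch of the decoupled-update idea (the architectures, the linear synthetic gradient model, and the update schedule are invented): the first subnetwork is updated from a predicted gradient of the objective with respect to its activation, while the synthetic gradient model is regressed toward the true gradient.
      import torch
      import torch.nn as nn

      subnet1 = nn.Sequential(nn.Linear(10, 32), nn.ReLU())
      subnet2 = nn.Sequential(nn.Linear(32, 1))
      sg_model = nn.Linear(32, 32)          # predicts dLoss/dActivation from the activation

      opt1 = torch.optim.SGD(subnet1.parameters(), lr=0.1)
      opt2 = torch.optim.SGD(list(subnet2.parameters()) + list(sg_model.parameters()), lr=0.1)

      x, y = torch.randn(64, 10), torch.randn(64, 1)
      for _ in range(200):
          # Update subnet1 right away using the synthetic gradient of the objective.
          h = subnet1(x)
          opt1.zero_grad()
          h.backward(sg_model(h.detach()))  # no need to wait for subnet2's backward pass
          opt1.step()

          # Update subnet2 on the true loss and fit the synthetic gradient model
          # to the true gradient of the objective with respect to the activation.
          h = subnet1(x).detach().requires_grad_(True)
          loss = nn.functional.mse_loss(subnet2(h), y)
          true_grad = torch.autograd.grad(loss, h, retain_graph=True)[0]
          sg_loss = nn.functional.mse_loss(sg_model(h.detach()), true_grad.detach())
          opt2.zero_grad()
          (loss + sg_loss).backward()
          opt2.step()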
  • Patent number: 11651271
    Abstract: Computer systems and associated methods are disclosed to detect a future change point in time series data used as input to a machine learning model. A forecast for the time series data is generated. In some embodiments, a fitting model is generated from the time series data, and residuals of the fitting model are obtained for respective portions of the data both before and after a potential change point in the future. The change point is determined based on a ratio of residual metrics for the two portions. In some embodiments, data features are extracted from individual segments in the time series data, and the segments are clustered based on their data features. A change point is determined based on a dissimilarity in cluster assignments for segments before and after the point. In some embodiments, when a change point is predicted, an update of the machine learning model is triggered.
    Type: Grant
    Filed: July 3, 2018
    Date of Patent: May 16, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Qiuping Xu, Joshua Allen Edgerton
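    Illustrative sketch: a numpy sketch of the residual-ratio variant only (the linear fitting model, candidate point, and threshold are invented): fit the series up to a candidate change point, compare residual metrics after versus before that point, and flag a change when the ratio is large.
      import numpy as np

      rng = np.random.default_rng(2)
      t = np.arange(200)
      series = 0.05 * t + rng.normal(0, 1, 200)
      series[150:] += 8.0                            # regime shift late in the series

      candidate = 150
      coef = np.polyfit(t[:candidate], series[:candidate], deg=1)   # fit on data before the point
      residuals = series - np.polyval(coef, t)

      rmse = lambda r: np.sqrt(np.mean(r ** 2))
      ratio = rmse(residuals[candidate:]) / rmse(residuals[:candidate])
      print(f"residual ratio at t={candidate}: {ratio:.2f}",
            "-> change point" if ratio > 2.0 else "-> no change")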
  • Patent number: 11651227
    Abstract: In general, the disclosure describes techniques for facilitating trust in neural networks using a trusted neural network system. For example, described herein are multi-headed, trusted neural network systems that can be trained to satisfy one or more constraints as part of the training process. Such constraints may take the form of one or more logical rules and cause the objective function of at least one of the heads of the trusted neural network system to steer, during machine learning model training, the overall objective function for the system toward an optimal solution that satisfies the constraints. The constraints may be non-temporal, temporal, or a combination of non-temporal and temporal. The constraints may be directly compiled to a neural network or otherwise used to train the machine learning model.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: May 16, 2023
    Assignee: SRI INTERNATIONAL
    Inventors: Shalini Ghosh, Patrick Lincoln, Ashish Tiwari, Susmit Jha
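    Illustrative sketch: a PyTorch sketch that collapses the constraint head into a penalty term added to the objective (the rule "outputs must be non-negative", the weighting, and the architecture are invented); it shows how a logical constraint can steer the overall objective during training.
      import torch
      import torch.nn as nn

      body = nn.Sequential(nn.Linear(4, 32), nn.ReLU())
      task_head = nn.Linear(32, 1)
      opt = torch.optim.Adam(list(body.parameters()) + list(task_head.parameters()), lr=1e-3)

      x = torch.randn(128, 4)
      y = torch.rand(128, 1)                 # targets that happen to satisfy the rule

      for _ in range(200):
          opt.zero_grad()
          pred = task_head(body(x))
          task_loss = nn.functional.mse_loss(pred, y)
          constraint_loss = torch.relu(-pred).mean()    # penalize rule violations (pred < 0)
          (task_loss + 10.0 * constraint_loss).backward()
          opt.step()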
  • Patent number: 11537868
    Abstract: A method includes a computing system accessing a training sample that includes first sensor data obtained using a first sensor at a first geographic location, and first metadata comprising information relating to the first sensor. The system may train a machine-learning model by generating first map data by processing the training sample using the model and updating the model based on the generated first map data and target map data associated with the first geographic location. The system may then access second sensor data and second metadata, where the second sensor data is obtained using a second sensor. The system may generate second map data associated with a second geographic location by processing the second sensor data and the second metadata using the trained model. A high-definition map may be generated using the second map data.
    Type: Grant
    Filed: November 13, 2017
    Date of Patent: December 27, 2022
    Assignee: Lyft, Inc.
    Inventor: Gil Arditi
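    Illustrative sketch: a PyTorch sketch of the training step only (the feature sizes, metadata encoding, and regression loss are invented): a model consumes sensor data together with its metadata and is fit against the target map data for the same location; second-sensor data would then be processed the same way by the trained model.
      import torch
      import torch.nn as nn

      sensor_dim, meta_dim, map_dim = 64, 8, 256
      model = nn.Sequential(nn.Linear(sensor_dim + meta_dim, 128), nn.ReLU(),
                            nn.Linear(128, map_dim))
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)

      sensor_data = torch.randn(32, sensor_dim)   # first sensor at the first location
      metadata = torch.randn(32, meta_dim)        # e.g. sensor type, pose, calibration
      target_map = torch.randn(32, map_dim)       # target map data for that location

      for _ in range(100):
          opt.zero_grad()
          map_pred = model(torch.cat([sensor_data, metadata], dim=1))
          nn.functional.mse_loss(map_pred, target_map).backward()
          opt.step()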
  • Patent number: 11468331
    Abstract: An information processing unit includes: an attention module, for a neural network including plural levels of layers, having an attention layer and a computation section, the attention layer being configured to compute an output feature corresponding to an input feature from a predetermined layer and based on a parameter; the computation section, which multiplies the input feature by the output feature and outputs the computed result to a layer at the next level; a first learning unit, connected to the neural network, that learns the parameter in a state in which learning has been suspended at least for the predetermined layer and the next-level layer; and a channel selection section that selects, as a redundant channel, a channel satisfying a predetermined relationship between the output feature and a predetermined threshold value.
    Type: Grant
    Filed: June 6, 2019
    Date of Patent: October 11, 2022
    Assignee: Oki Electric Industry Co., Ltd.
    Inventors: Kohei Yamamoto, Kurato Maeno
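    Illustrative sketch: a PyTorch sketch of the channel-selection idea (the gating function, threshold, and shapes are invented): an attention parameter scores each channel, the input feature is multiplied by those scores, and channels whose score stays below a threshold are marked redundant.
      import torch
      import torch.nn as nn

      class ChannelAttention(nn.Module):
          def __init__(self, channels):
              super().__init__()
              self.score = nn.Parameter(torch.randn(channels))   # the learned attention parameter

          def forward(self, x):                    # x: (batch, channels, H, W)
              gate = torch.sigmoid(self.score).view(1, -1, 1, 1)
              return x * gate                      # multiply the input feature by the output feature

      attn = ChannelAttention(channels=16)
      x = torch.randn(4, 16, 8, 8)                 # feature from the predetermined layer
      y = attn(x)                                  # passed on to the layer at the next level

      # Channel selection: channels whose gate falls below a threshold are redundant.
      threshold = 0.4
      redundant = torch.nonzero(torch.sigmoid(attn.score) < threshold).flatten()
      print(y.shape, redundant)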
  • Patent number: 11461614
    Abstract: A novel and useful system and method of data-driven quantization optimization of weights and input data in an artificial neural network (ANN). The system reduces quantization implications (i.e., error) in a limited-resource system by employing the information available in the data actually observed by the system. Data counters in the layers of the network observe the data input thereto. The distribution of the data is used to determine an optimum quantization scheme to apply to the weights, input data, or both. The mechanism is sensitive to the data observed at the input layer of the network. As a result, the network auto-tunes to optimize the instance-specific representation of the network. The network becomes customized (i.e., specialized) to the inputs it observes and better fits itself to the subset of the sample space that is applicable to its actual data flow. As a result, nominal process noise is reduced and detection accuracy improves.
    Type: Grant
    Filed: December 12, 2017
    Date of Patent: October 4, 2022
    Inventors: Avi Baum, Or Danon, Daniel Ciubotariu, Mark Grobman, Alex Finkelstein
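    Illustrative sketch: a numpy sketch of the data-driven idea (the percentile rule, symmetric int8 scheme, and constants are invented): log the values a layer actually observes, derive a clipping range from that distribution, and set the quantization scale from it rather than from the theoretical worst case.
      import numpy as np

      rng = np.random.default_rng(3)
      observed = rng.normal(0, 0.2, 10_000)        # values logged by the layer's data counter

      clip = np.percentile(np.abs(observed), 99.9) # range chosen from the observed distribution
      scale = clip / 127.0

      def quantize(x):
          return np.clip(np.round(x / scale), -127, 127).astype(np.int8)

      def dequantize(q):
          return q.astype(np.float32) * scale

      x = rng.normal(0, 0.2, 8)
      print(np.max(np.abs(x - dequantize(quantize(x)))))   # quantization error on observed-like data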
  • Patent number: 11455522
    Abstract: A mobile electronic device such as a smartphone is used in conjunction with a deep learning system to detect and respond to personal danger. The deep learning system monitors current information (such as location, audio, biometrics, etc.) from the smartphone and generates a risk score by comparing the information to a routine profile for the user. If the risk score exceeds a predetermined threshold, an alert is sent to the smartphone which presents an alert screen to the user. The alert screen allows the user to cancel the alert (and notify the deep learning system) or confirm the alert (and immediately transmit an emergency message). Multiple emergency contacts can be designated, e.g., one for a low-level risk, another for an intermediate-level risk, and another for a high-level risk, and the emergency message can be sent to a selected contact depending upon the severity of the risk score.
    Type: Grant
    Filed: November 17, 2017
    Date of Patent: September 27, 2022
    Assignee: International Business Machines Corporation
    Inventors: Steven A. Cordes, Michael S. Gordon, Nigel Hinds, Maja Vukovic
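    Illustrative sketch: a plain-Python sketch of the alert routing only (the thresholds and contact tiers are invented; the risk score itself would come from the deep learning system comparing current data to the user's routine profile).
      RISK_TIERS = [                      # highest tier first
          (0.9, "emergency services"),
          (0.6, "family member"),
          (0.3, "trusted friend"),
      ]

      def route_alert(risk_score, user_confirmed):
          if risk_score < RISK_TIERS[-1][0]:
              return None                 # below every threshold: no alert screen shown
          if not user_confirmed:
              return None                 # user cancelled the alert on the phone
          for threshold, contact in RISK_TIERS:
              if risk_score >= threshold:
                  return f"emergency message sent to {contact}"

      print(route_alert(0.72, user_confirmed=True))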
  • Patent number: 11366999
    Abstract: It is possible to improve estimation accuracy for data in which significance is attached to relative phase. Provided is an information processing device including an estimation unit configured to estimate a status by using a neural network. The neural network includes a first complex-valued neural network to which complex data is input, a phase difference computation layer that outputs the element-wise phase difference between a plurality of sets of the complex data, and a second complex-valued neural network that outputs complex data on the basis of the phase difference.
    Type: Grant
    Filed: November 21, 2017
    Date of Patent: June 21, 2022
    Assignee: Oki Electric Industry Co., Ltd.
    Inventors: Kohei Yamamoto, Kurato Maeno
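    Illustrative sketch: a numpy sketch of the phase-difference computation alone (the feature sizes and the re-encoding step are invented): given two sets of complex-valued features, output the element-wise phase difference that a second complex-valued network would then consume.
      import numpy as np

      rng = np.random.default_rng(4)

      def random_complex(shape):
          return rng.normal(size=shape) + 1j * rng.normal(size=shape)

      set_a = random_complex((8,))                 # complex features from one set
      set_b = random_complex((8,))                 # complex features from another set

      # Element-wise phase difference, wrapped to (-pi, pi]: arg(a * conj(b)).
      phase_diff = np.angle(set_a * np.conj(set_b))

      # Re-encode the difference as unit-magnitude complex data for the second network.
      second_input = np.exp(1j * phase_diff)
      print(phase_diff, np.abs(second_input))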
  • Patent number: 11308385
    Abstract: Systems, methods, devices, and other techniques are described herein for training and using neural networks to encode inputs and to process encoded inputs, e.g., to reconstruct inputs from the encoded inputs. A neural network system can include an encoder neural network, a trusted decoder neural network, and an adversary decoder neural network. The encoder neural network processes a primary neural network input and a key input to generate an encoded representation of the primary neural network input. The trusted decoder neural network processes the encoded representation and the key input to generate a first estimated reconstruction of the primary neural network input. The adversary decoder neural network processes the encoded representation without the key input to generate a second estimated reconstruction of the primary neural network input. The encoder and trusted decoder neural networks can be trained jointly, and these networks trained adversarially to the adversary decoder neural network.
    Type: Grant
    Filed: August 3, 2017
    Date of Patent: April 19, 2022
    Assignee: Google LLC
    Inventors: Martin Abadi, David Godbe Andersen
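    Illustrative sketch: complementing the architecture sketch under related patent 11853860 above (same abstract family), an invented training objective in PyTorch: the adversary minimizes its own reconstruction error, while the encoder and trusted decoder jointly minimize theirs and push the adversary toward chance-level recovery of +/-1 message bits.
      import torch
      import torch.nn.functional as F

      def adversary_objective(msg, adversary_out):
          return F.l1_loss(adversary_out, msg)

      def encoder_trusted_objective(msg, trusted_out, adversary_out, n_bits):
          reconstruction = F.l1_loss(trusted_out, msg)
          # n_bits / 2 wrong bits corresponds to random guessing on +/-1 bits.
          bits_wrong = F.l1_loss(adversary_out, msg, reduction="sum") / (2 * msg.shape[0])
          return reconstruction + ((n_bits / 2 - bits_wrong) ** 2) / (n_bits / 2) ** 2

      msg = torch.randint(0, 2, (8, 16)).float() * 2 - 1
      print(encoder_trusted_objective(msg, torch.tanh(torch.randn(8, 16)),
                                      torch.tanh(torch.randn(8, 16)), n_bits=16))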