Patents Examined by Vasyl Dykyy
  • Patent number: 11205110
    Abstract: An electronic device is described which has at least one input interface to receive at least one item of a sequence of items. The electronic device is able to communicate with a server, the server storing a neural network and a process which generates item embeddings of the neural network. The electronic device has a memory storing a copy of the neural network and a plurality of item embeddings of the neural network. When the item embedding corresponding to the received at least one item is unavailable at the electronic device, the electronic device triggers transfer of the corresponding item embedding from the server to the electronic device. A processor at the electronic device predicts at least one candidate next item in the sequence by processing the corresponding item embedding with the copy of the neural network and the plurality of item embeddings.
    Type: Grant
    Filed: October 24, 2016
    Date of Patent: December 21, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Matthew James Willson, Marco Fiscato, Juha Iso-Sipilä, Douglas Alexander Harper Orr
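The fetch-on-miss scheme described in the abstract can be sketched compactly. Everything below (the class names, and a dot-product scorer standing in for the neural network's prediction step) is an illustrative assumption, not taken from the patent itself:

```python
# Hypothetical sketch: a device holds a partial copy of the server's item
# embeddings, transfers a missing embedding on demand, then predicts the
# next item. The dot-product scorer is a stand-in for the real network.
import numpy as np

class ServerStore:
    """Server-side store of all item embeddings."""
    def __init__(self, vocab, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.embeddings = {item: rng.normal(size=dim) for item in vocab}

class DeviceCache:
    """Device-side partial copy; pulls missing embeddings from the server."""
    def __init__(self, server, preloaded):
        self.server = server
        self.local = {i: server.embeddings[i] for i in preloaded}
        self.transfers = 0

    def embed(self, item):
        if item not in self.local:          # unavailability triggers transfer
            self.local[item] = self.server.embeddings[item]
            self.transfers += 1
        return self.local[item]

    def predict_next(self, item):
        """Score every locally known item against the received item."""
        q = self.embed(item)
        scores = {i: float(v @ q) for i, v in self.local.items() if i != item}
        return max(scores, key=scores.get)
```

A second lookup of the same item hits the local copy, so only the first miss costs a transfer.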
  • Patent number: 11113605
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for reinforcement learning using agent curricula. One of the methods includes maintaining data specifying a plurality of candidate agent policy neural networks; initializing mixing data that assigns a respective weight to each of the candidate agent policy neural networks; training the candidate agent policy neural networks using a reinforcement learning technique to generate combined action selection policies that result in improved performance on a reinforcement learning task; and during the training, repeatedly adjusting the weights in the mixing data to favor higher-performing candidate agent policy neural networks.
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: September 7, 2021
    Assignee: DeepMind Technologies Limited
    Inventors: Wojciech Czarnecki, Siddhant Jayakumar
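The two core operations in this abstract, mixing candidate policies by weight and shifting the weights toward better performers, admit a small sketch. The exponential reweighting rule below is an illustrative assumption; the patent does not specify it:

```python
# Hypothetical sketch of a policy mixture with adaptive mixing weights.
import numpy as np

def mix_policies(policies, weights, state):
    """Combined action-selection distribution: weighted sum of candidates."""
    probs = sum(w * p(state) for p, w in zip(policies, weights))
    return probs / probs.sum()

def reweight(weights, returns, lr=0.5):
    """Shift mixing weights toward higher-performing candidates
    (multiplicative-weights style update, assumed for illustration)."""
    w = np.asarray(weights, float) * np.exp(lr * np.asarray(returns, float))
    return w / w.sum()
```

Candidates with larger observed returns end up with larger mixing weights, so the combined policy gradually favors the stronger networks.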
  • Patent number: 11100388
    Abstract: An apparatus, a computer readable medium, and a learning method for learning a model corresponding to time-series input data, including acquiring the time-series input data, which is a time series of input data including a plurality of input values, propagating, to a plurality of nodes in a model, each of a plurality of propagation values obtained by weighting each input value at a plurality of time points before one time point according to passage of time points, in association with the plurality of input values at the one time point, calculating a node value of a first node among the plurality of nodes by using each propagated value propagated to the first node, and updating a weight parameter used to calculate each propagation value propagated to the first node, by using a corresponding input value and a calculated error of the node value at the one time point.
    Type: Grant
    Filed: November 22, 2016
    Date of Patent: August 24, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Takayuki Osogami
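The propagation step in this abstract (weighting past inputs by the passage of time, then updating the weight from the node's error) can be illustrated numerically. The geometric decay and the error-driven update rule here are assumptions for the sketch, not the patent's exact formulas:

```python
# Hypothetical sketch: time-weighted propagation of past inputs to a node,
# plus an error-driven update of the weight parameter.
def propagation_value(past_inputs, decay):
    """Weight each past input according to the passage of time
    (geometric decay assumed; most recent input weighted most)."""
    T = len(past_inputs)
    return sum(decay ** (T - t) * x for t, x in enumerate(past_inputs, 1))

def node_value(weight, prop):
    """Node value from the propagated value reaching the node."""
    return weight * prop

def update_weight(weight, prop, target, value, lr=0.1):
    """Adjust the weight using the node value's error at the one time point."""
    return weight + lr * (target - value) * prop
```

Repeated updates drive the node value toward its target, using only the propagated value and the current error.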
  • Patent number: 11074480
    Abstract: A learning method for acquiring at least one personalized reward function, used for performing a Reinforcement Learning (RL) algorithm, corresponding to a personalized optimal policy for a subject driver is provided. The method includes steps of: (a) a learning device performing a process of instructing an adjustment reward network to generate first adjustment rewards, by referring to the information on actual actions and actual circumstance vectors in driving trajectories, a process of instructing a common reward module to generate first common rewards by referring to the actual actions and the actual circumstance vectors, and a process of instructing an estimation network to generate actual prospective values by referring to the actual circumstance vectors; and (b) the learning device instructing a first loss layer to generate an adjustment reward and to perform backpropagation to learn parameters of the adjustment reward network.
    Type: Grant
    Filed: January 10, 2020
    Date of Patent: July 27, 2021
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10997496
    Abstract: A method, computer program product, and system perform computations using a sparse convolutional neural network accelerator. Compressed-sparse data is received for input to a processing element, wherein the compressed-sparse data encodes non-zero elements and corresponding multi-dimensional positions. The non-zero elements are processed in parallel by the processing element to produce a plurality of result values. The corresponding multi-dimensional positions are processed in parallel by the processing element to produce destination addresses for each result value in the plurality of result values. Each result value is transmitted to a destination accumulator associated with the destination address for the result value.
    Type: Grant
    Filed: March 14, 2017
    Date of Patent: May 4, 2021
    Assignee: NVIDIA Corporation
    Inventors: William J. Dally, Angshuman Parashar, Joel Springer Emer, Stephen William Keckler, Larry Robert Dennison
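The accelerator's dataflow, multiplying non-zero elements in parallel and scattering each product to an accumulator addressed by the operands' positions, reduces in one dimension to sparse convolution. The sketch below is a minimal software analogue, not the hardware design; the position-sum addressing is the standard 1-D case:

```python
# Hypothetical software analogue of the compressed-sparse dataflow:
# only non-zero (value, position) pairs are processed, and each product
# is accumulated at a destination address derived from the positions.
import numpy as np

def sparse_multiply_accumulate(acts, weights, out_shape):
    """acts/weights: lists of (value, position). Each pairwise product
    lands in the accumulator addressed by the sum of positions (1-D conv)."""
    acc = np.zeros(out_shape)
    for a_val, a_pos in acts:
        for w_val, w_pos in weights:
            acc[a_pos + w_pos] += a_val * w_val   # destination accumulator
    return acc
```

Because zeros are never stored or multiplied, the work scales with the number of non-zero pairs rather than the dense tensor sizes, which is the point of the compressed-sparse encoding.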
  • Patent number: 10902311
    Abstract: Regularization of neural networks. Neural networks can be regularized by obtaining an original neural network having a plurality of first-in-first-out (FIFO) queues, each FIFO queue located between a pair of nodes among a plurality of nodes of the original neural network, generating at least one modified neural network, the modified neural network being equivalent to the original neural network with a modified length of at least one FIFO queue, evaluating each neural network among the original neural network and the at least one modified neural network, and determining which neural network among the original neural network and the at least one modified neural network is most accurate, based on the evaluation.
    Type: Grant
    Filed: December 28, 2016
    Date of Patent: January 26, 2021
    Assignee: International Business Machines Corporation
    Inventors: Sakyasingha Dasgupta, Takayuki Osogami
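A FIFO queue between two nodes acts as a delay line, and the regularization procedure amounts to evaluating variants with different queue lengths and keeping the most accurate. A toy single-edge version, with illustrative names and a squared-error evaluation assumed for the sketch:

```python
# Hypothetical sketch: a FIFO queue between two nodes delays the signal;
# candidate queue lengths are evaluated and the most accurate one kept.
from collections import deque

class DelayedEdge:
    """FIFO queue between two nodes: output is the input delayed by `length` steps."""
    def __init__(self, length):
        self.q = deque([0.0] * length, maxlen=length) if length else None

    def push(self, x):
        if self.q is None:                 # zero-length queue: pass through
            return x
        y = self.q[0]                      # oldest element leaves the queue
        self.q.append(x)
        return y

def best_delay(data_in, target, candidate_lengths):
    """Evaluate each modified network (queue length) and keep the most accurate."""
    def error(L):
        edge = DelayedEdge(L)
        return sum((edge.push(x) - t) ** 2 for x, t in zip(data_in, target))
    return min(candidate_lengths, key=error)
```

When the target really is the input shifted by two steps, the length-2 variant wins the evaluation, mirroring the selection step in the abstract.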
  • Patent number: 10891542
    Abstract: An individual neuron circuit calculates a first value based on a sum of products each obtained by multiplying one of weight values, each representing connection or disconnection between a corresponding neuron circuit and one of the other neuron circuits, by a corresponding one of output signals of the other neuron circuits and outputs 0 or 1, based on a result of comparison between a second value obtained by adding a noise value to the first value and a threshold. An arbitration circuit allows, when first output signals of first neuron circuits interconnected among the neuron circuits simultaneously change based on the weight values, updating of only one of the first output signals of the first neuron circuits and allows, when second output signals of second neuron circuits not interconnected simultaneously change, updating of the second output signals.
    Type: Grant
    Filed: April 10, 2017
    Date of Patent: January 12, 2021
    Assignee: FUJITSU LIMITED
    Inventors: Satoshi Matsubara, Hirotaka Tamura
  • Patent number: 10885441
    Abstract: The present disclosure includes methods and systems for generating digital predictive models by progressively sampling a repository of data samples. In particular, one or more embodiments of the disclosed systems and methods identify initial attributes for predicting a target attribute and utilize the initial attributes to identify a coarse sample set. Moreover, the disclosed systems and methods can utilize the coarse sample set to identify focused attributes pertinent to predicting the target attribute. Utilizing the focused attributes, the disclosed systems and methods can identify refined data samples and utilize the refined data samples to identify final attributes and generate a digital predictive model.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: January 5, 2021
    Assignee: ADOBE INC.
    Inventors: Wei Zhang, Scott Tomko
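The progressive-sampling pipeline, coarse sample, focused attributes, refined data, can be sketched with a correlation-based attribute ranking. The ranking criterion and function names are illustrative assumptions; the disclosure does not commit to a particular attribute scorer:

```python
# Hypothetical sketch of progressive sampling: rank attributes on a coarse
# sample, then restrict the full data to the focused attributes.
import numpy as np

def top_attributes(X, y, k):
    """Rank attributes by absolute correlation with the target (assumed scorer)."""
    corrs = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return sorted(range(X.shape[1]), key=lambda j: -corrs[j])[:k]

def progressive_sample(X, y, coarse_n, k, seed=0):
    """Coarse sample -> focused attributes -> refined data restricted to
    those attributes, ready for fitting a final predictive model."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=coarse_n, replace=False)
    focused = top_attributes(X[idx], y[idx], k)
    return X[:, focused], focused
```

The cheap coarse pass narrows the attribute set, so the expensive refined pass touches far fewer columns of the repository.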
  • Patent number: 10853719
    Abstract: A data collecting device includes a receiver configured to receive an optical signal; an optical-to-electrical converter configured to convert the optical signal received by the receiver into an electrical signal; an analog-to-digital converter configured to convert the electrical signal into a digital signal; a data reducing circuit configured to reduce the digital signal output from the analog-to-digital converter; and a transmitter configured to transmit, to a managing device that manages the data collecting device, a signal obtained by reducing the digital signal by the data reducing circuit.
    Type: Grant
    Filed: December 2, 2016
    Date of Patent: December 1, 2020
    Assignee: FUJITSU LIMITED
    Inventors: Takahito Tanimura, Takeshi Hoshida
  • Patent number: 10846611
    Abstract: A data processing system is disclosed for machine learning. The system comprises a sampling module (13) and a computational module (15) interconnected by a data communications link (17). The computational module is configured to store a parameter vector representing an energy function of a network having a plurality of visible units connected using links to a plurality of hidden units, each link being a relationship between two units. The sampling module is configured to receive the parameter vector from the computational module and to sample from the probability distribution defined by the parameter vector to produce state vectors for the network. The computational module is further configured to receive the state vectors from the sampling module and to apply an algorithm to produce new data. The sampling and computational modules are configured to operate independently from one another.
    Type: Grant
    Filed: June 16, 2014
    Date of Patent: November 24, 2020
    Assignee: Nokia Technologies Oy
    Inventors: Joachim Wabnig, Antti Niskanen
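The visible/hidden network with an energy function parameterized by a weight vector is the shape of a restricted Boltzmann machine, so the two-module split can be sketched as Gibbs sampling in one module and a contrastive-divergence-style parameter update in the other. The RBM interpretation and the update rule are assumptions for illustration:

```python
# Hypothetical sketch: a sampling module produces state vectors from the
# distribution defined by the parameter vector; a computational module
# holds the parameters and updates them from the returned states.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SamplingModule:
    """Samples hidden/visible state vectors via one Gibbs step."""
    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)

    def sample(self, W, v):
        h = (self.rng.random(W.shape[1]) < sigmoid(v @ W)).astype(float)
        v2 = (self.rng.random(W.shape[0]) < sigmoid(W @ h)).astype(float)
        return h, v2

class ComputationalModule:
    """Stores the parameter vector and updates it from sampled states."""
    def __init__(self, n_vis, n_hid):
        self.W = np.zeros((n_vis, n_hid))

    def update(self, v, h, v2, h2, lr=0.1):
        # Contrastive-divergence-style update (assumed for the sketch).
        self.W += lr * (np.outer(v, h) - np.outer(v2, h2))
```

The two objects share nothing but the parameter vector and the state vectors exchanged between them, echoing the independent-operation claim.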
  • Patent number: 10839312
    Abstract: Techniques for generating a warning filter to filter the warnings output from a static program analysis tool are provided. In one example, a computer-implemented method comprises determining feature vector data for a set of warnings, wherein the set of warnings is generated in response to static analysis of a computer program, and wherein the feature vector data comprises a feature vector indicative of an attribute of a warning of the set of warnings. The computer-implemented method also comprises determining a warning filter that identifies a first subset of the set of warnings as representing true positives based on the feature vector data and classified warning data, wherein the classified warning data represents a second subset of the set of warnings that have been classified to indicate whether respective members of the second subset are indicative of true positives.
    Type: Grant
    Filed: August 9, 2016
    Date of Patent: November 17, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Aleksandr Y. Aravkin, Salvatore Angelo Guarnieri, Marco Pistoia, Omer Tripp
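The filter learns from the classified subset and then labels the rest by feature similarity. A nearest-centroid classifier is used below purely as a stand-in; the patent does not name a specific learning algorithm:

```python
# Hypothetical sketch: learn a warning filter from a classified subset of
# warnings (feature vectors + true/false-positive labels), then keep only
# warnings that look like true positives. Nearest-centroid is assumed.
import numpy as np

def train_filter(features, labels):
    """Fit centroids of true-positive and false-positive warnings
    from the classified subset (labels: True = true positive)."""
    f = np.asarray(features, float)
    l = np.asarray(labels, bool)
    return f[l].mean(axis=0), f[~l].mean(axis=0)

def apply_filter(centroids, features):
    """Return indices of warnings closer to the true-positive centroid."""
    tp_c, fp_c = centroids
    f = np.asarray(features, float)
    keep = np.linalg.norm(f - tp_c, axis=1) < np.linalg.norm(f - fp_c, axis=1)
    return [i for i, k in enumerate(keep) if k]
```

Only the manually classified subset needs human labels; the trained filter then triages the remaining warnings automatically.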
  • Patent number: 10789510
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using dynamic minibatch sizes during neural network training. One of the methods includes receiving, by each of a plurality of host computers, a respective batch of training examples, each training example having zero or more features, computing, by each host computer, a minimum number of minibatches into which the host computer can divide the respective batch of training examples so that the host computer can process each minibatch using an embedding layer of the neural network without exceeding available computing resources, determining a largest minimum number of minibatches (N) into which any host computer can divide its respective batch of training examples, generating, by each host computer, N minibatches from the respective batch of training examples received by the host computer, and processing, by each host computer, the N minibatches using the embedding layer.
    Type: Grant
    Filed: January 11, 2019
    Date of Patent: September 29, 2020
    Assignee: Google LLC
    Inventors: Jeremiah Willcock, George Kurian
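The sizing rule in this abstract is concrete: each host computes the fewest minibatches that fit its resources, all hosts adopt the largest such minimum N, and each splits its batch into N pieces. The round-robin split below is an illustrative assumption; any split keeping every minibatch within capacity would do:

```python
# Sketch of the dynamic minibatch rule: N = max over hosts of the minimum
# number of minibatches each host needs, then every host produces N.
import math

def min_minibatches(batch_size, capacity):
    """Fewest minibatches so each fits within this host's capacity."""
    return math.ceil(batch_size / capacity)

def dynamic_minibatches(host_batches, capacities):
    """All hosts split into N minibatches, N = the largest minimum
    (round-robin split assumed for illustration)."""
    N = max(min_minibatches(len(b), c)
            for b, c in zip(host_batches, capacities))
    return [[b[i::N] for i in range(N)] for b in host_batches]
```

Using a common N keeps the hosts in lockstep: every host performs the same number of embedding-layer passes per batch, while no host's minibatch exceeds its own capacity.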
  • Patent number: 10679119
    Abstract: The present disclosure provides for generating a spiking neural network. Generating a spiking neural network can include determining that a first input fan-in from a plurality of input neurons to each of a plurality of output neurons is greater than a threshold, generating a plurality of intermediate neurons based on a determination that the first input fan-in is greater than the threshold, and coupling the plurality of intermediate neurons to the plurality of input neurons and the plurality of output neurons, wherein each of the plurality of intermediate neurons has a second input fan-in that is less than the first input fan-in and each of the plurality of output neurons has a third input fan-in that is less than the first input fan-in.
    Type: Grant
    Filed: March 24, 2017
    Date of Patent: June 9, 2020
    Assignee: INTEL CORPORATION
    Inventors: Arnab Paul, Narayan Srinivasa
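The fan-in reduction generalizes to a recursive grouping: whenever a node's fan-in exceeds the threshold, its inputs are partitioned under intermediate neurons, and the procedure repeats on the intermediate layer. A minimal sketch, with the tuple representation of intermediate neurons assumed for illustration:

```python
# Hypothetical sketch: recursively insert intermediate neurons so that no
# node's input fan-in exceeds the threshold.
def split_fanin(inputs, threshold):
    """Return the (possibly layered) inputs feeding one output neuron,
    with every node's fan-in capped at `threshold`."""
    if len(inputs) <= threshold:
        return inputs
    groups = [inputs[i:i + threshold]
              for i in range(0, len(inputs), threshold)]
    # Each group becomes an intermediate neuron; recurse on that layer.
    return split_fanin([("intermediate", tuple(g)) for g in groups], threshold)
```

For 10 inputs and a threshold of 3, one pass yields four intermediate neurons, still over the threshold, so a second pass produces two, giving the output neuron a fan-in of 2.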
  • Patent number: 10614355
    Abstract: A method for updating a weight of a synapse of a neuromorphic device is provided. The synapse may include a transistor and a memristor. The memristor may have a first electrode coupled to a source electrode of the transistor. The method may include inputting a row spike to a drain electrode of the transistor at a first time; inputting a column spike to a second electrode of the memristor at a second time; inputting a row pulse to the drain electrode of the transistor at a third time that is delayed by a first delay time from the second time; inputting a column pulse to the second electrode of the memristor at a fourth time that is delayed by a second delay time from the second time; and inputting a gating pulse to a gate electrode of the transistor at a fifth time that is delayed by a third delay time from the fourth time.
    Type: Grant
    Filed: December 21, 2016
    Date of Patent: April 7, 2020
    Assignee: SK hynix Inc.
    Inventor: Hyung-Dong Lee
  • Patent number: 10592802
    Abstract: An electronic synapse is disclosed, comprising a heavy metal layer having a high spin orbit coupling, a domain wall magnet layer having a bottom surface adjacent to a top surface of the heavy metal layer, the domain wall magnet layer having a perpendicular magnetic anisotropy, the domain wall magnet layer having a domain wall, the domain wall running parallel to a longitudinal axis of the domain wall magnet layer, a pinned layer having perpendicular magnetic anisotropy, and an oxide tunnel barrier connected between the domain wall magnet layer and the pinned layer, wherein the pinned layer, the oxide tunnel barrier, and the domain wall magnet layer (serving as the free layer) form a magnetic tunnel junction.
    Type: Grant
    Filed: February 28, 2017
    Date of Patent: March 17, 2020
    Assignee: Purdue Research Foundation
    Inventors: Abhronil Sengupta, Zubair Al Azim, Xuanyao Kelvin Fong, Kaushik Roy