Learning Method Patents (Class 706/25)
-
Patent number: 10872290
Abstract: A dynamically adaptive neural network processing system includes memory to store instructions representing a neural network in contiguous blocks, hardware acceleration (HA) circuitry to execute the neural network, direct memory access (DMA) circuitry to transfer the instructions from the contiguous blocks of the memory to the HA circuitry, and a central processing unit (CPU) to dynamically modify a linked list representing the neural network during execution of the neural network by the HA circuitry to perform machine learning, and to generate the instructions in the contiguous blocks of the memory based on the linked list.
Type: Grant
Filed: September 21, 2017
Date of Patent: December 22, 2020
Assignee: Raytheon Company
Inventors: John R. Goulding, John E. Mixter, David R. Mucha
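The abstract's central move, flattening a CPU-side linked-list description of the network into contiguous memory that DMA hardware can stream to an accelerator, can be illustrated with a small sketch. This is a hypothetical illustration, not the patented implementation; the node fields and the flatten helper are invented for the example.

```python
import numpy as np

class LayerNode:
    """One node of a linked list describing a neural-network layer (hypothetical fields)."""
    def __init__(self, op_code, in_size, out_size, nxt=None):
        self.op_code = op_code      # e.g. 0 = dense, 1 = relu (invented encoding)
        self.in_size = in_size
        self.out_size = out_size
        self.next = nxt

def flatten_to_contiguous(head):
    """Walk the linked list and emit one contiguous instruction buffer,
    suitable for a single DMA transfer to the accelerator."""
    words = []
    node = head
    while node is not None:
        words.extend([node.op_code, node.in_size, node.out_size])
        node = node.next
    return np.asarray(words, dtype=np.int32)   # contiguous block of instructions

# The CPU can modify the list (insert or remove layers) during training,
# then regenerate the contiguous block for the hardware accelerator.
net = LayerNode(0, 784, 128, LayerNode(1, 128, 128, LayerNode(0, 128, 10)))
print(flatten_to_contiguous(net))
```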
-
Patent number: 10861545
Abstract: A programmable artificial neuron emitting an output signal controlled by at least one control parameter, includes, for each control parameter, a capacitor and at least one block including at least one multiplexer configured to be in two states: a programming state and an operating state; a transistor; and a non-volatile resistive random access memory connected in series with the transistor, the capacitor and the resistive random access memory being mounted in parallel. The multiplexer is configured to, when it is in the programming state, set a resistance value of the resistive random access memory to set the value of the control parameter; when it is in the operating state, conserve the set resistance value of the resistive random access memory.
Type: Grant
Filed: July 31, 2019
Date of Patent: December 8, 2020
Assignee: COMMISSARIAT A L'ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES
Inventors: Thomas Dalgaty, Elisa Vianello
-
Patent number: 10853726
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for determining neural network architectures. One of the methods includes obtaining training data for a dense image prediction task; and determining an architecture for a neural network configured to perform the dense image prediction task, comprising: searching a space of candidate architectures to identify one or more best performing architectures using the training data, wherein each candidate architecture in the space of candidate architectures comprises (i) the same first neural network backbone that is configured to receive an input image and to process the input image to generate a plurality of feature maps and (ii) a different dense prediction cell configured to process the plurality of feature maps and to generate an output for the dense image prediction task; and determining the architecture for the neural network based on the best performing candidate architectures.
Type: Grant
Filed: May 29, 2019
Date of Patent: December 1, 2020
Assignee: Google LLC
Inventors: Barret Zoph, Jonathon Shlens, Yukun Zhu, Maxwell Donald Emmet Collins, Liang-Chieh Chen, Gerhard Florian Schroff, Hartwig Adam, Georgios Papandreou
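The search loop described here, a shared backbone with only the dense prediction cell varying, can be sketched with a plain random search. The operation names, the candidate encoding, and the scoring stub below are assumptions made only for illustration; the patent does not prescribe random search.

```python
import random

def evaluate(cell_config):
    """Stub for training/validating a candidate dense prediction cell on top of
    the shared backbone; returns a validation score (higher is better)."""
    random.seed(str(cell_config))           # deterministic dummy score for the sketch
    return random.random()

# Candidate space: each cell is a few branch operations applied to the backbone's feature maps.
OPS = ["conv3x3", "conv3x3_rate6", "conv3x3_rate12", "avg_pool"]
candidates = [tuple(random.choices(OPS, k=3)) for _ in range(20)]

ranked = sorted(candidates, key=evaluate, reverse=True)
best_cells = ranked[:3]                     # best performing candidate architectures
print("best dense prediction cells:", best_cells)
```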
-
Patent number: 10846188
Abstract: A device for producing test data stores a plurality of simulated test data each of which substantially conforms to the data format accepted by a device under test (DUT). The data format includes different data blocks. The device for producing test data also mutates each of the simulated test data in one of a plurality of mutation forms to generate a plurality of first test data for testing the DUT. Each of the mutation forms refers to mutating one of the data blocks in one of a plurality of mutation ways.
Type: Grant
Filed: November 30, 2018
Date of Patent: November 24, 2020
Assignee: Institute For Information Industry
Inventors: Chia-Wei Tien, Pei-Yi Lin, Chin-Wei Tien
-
Patent number: 10846590
Abstract: A spike timing dependent plasticity (STDP) rule is applied in a spiking neural network (SNN) that includes artificial synapses bi-directionally connecting artificial neurons in the SNN to model locations within a physical environment. A first neuron is activated to cause a spike wave to propagate from the first neuron to other neurons in the SNN. Propagation of the spike wave causes synaptic weights of a subset of the synapses to be increased based on the STDP rule. A second neuron is activated after propagation of the spike wave to cause a spike chain to propagate along a path from the second neuron to the first neuron, based on the changes to the synaptic weights. A physical path is determined from the second to the first neuron based on the spike chain, and a signal may be sent to a controller of an autonomous device to cause the autonomous device to navigate the physical path.
Type: Grant
Filed: December 20, 2016
Date of Patent: November 24, 2020
Assignee: Intel Corporation
Inventors: Nabil Imam, Narayan Srinivasa
-
Patent number: 10839543
Abstract: Presented are systems and methods for improving speed and quality of real-time per-pixel depth estimation of scene layouts from a single image by using a 3D end-to-end Convolutional Spatial Propagation Network (CSPN). An efficient linear propagation model performs propagation using a recurrent convolutional operation. The affinity among neighboring pixels may be learned through a deep convolutional neural network (CNN). The CSPN may be applied to two depth estimation tasks, given a single image: (1) to refine the depth output of existing methods, and (2) to convert sparse depth samples to a dense depth map, e.g., by embedding the depth samples within the propagation procedure. For stereo depth estimation, the 3D CSPN is applied to stereo matching by adding a diffusion dimension over discrete disparity space and feature scale space. This aids the recovered stereo depth to generate more details and to avoid error matching from noisy appearance caused by sunlight, shadow, and similar effects.
Type: Grant
Filed: February 26, 2019
Date of Patent: November 17, 2020
Assignee: Baidu USA LLC
Inventors: Xinjing Cheng, Peng Wang, Ruigang Yang
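One linear propagation step of this kind can be sketched as an affinity-weighted local averaging that keeps the residual weight on the centre pixel. The random affinities and the normalisation below are illustrative assumptions, not the patented formulation.

```python
import numpy as np

def cspn_step(depth, affinity):
    """One propagation iteration: each pixel becomes an affinity-weighted
    mix of its 8 neighbours plus the remaining weight on itself.
    depth:    (H, W) current depth estimate
    affinity: (H, W, 8) learned (here random) neighbour affinities
    """
    H, W = depth.shape
    # Normalise so the absolute neighbour weights sum to less than 1;
    # the residual weight stays on the centre pixel.
    norm = np.sum(np.abs(affinity), axis=2, keepdims=True) + 1e-8
    kappa = affinity / norm
    centre_w = 1.0 - np.sum(kappa, axis=2)

    padded = np.pad(depth, 1, mode="edge")
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    out = centre_w * depth
    for k, (dy, dx) in enumerate(offsets):
        out += kappa[:, :, k] * padded[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
    return out

depth = np.random.rand(32, 32)               # coarse depth from an existing method
affinity = np.random.randn(32, 32, 8) * 0.1  # stands in for CNN-predicted affinities
for _ in range(8):                           # a few recurrent propagation steps
    depth = cspn_step(depth, affinity)
```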
-
Patent number: 10839208
Abstract: A system and method to detect fraudulent documents is disclosed. The system uses a generative adversarial network to generate synthetic document data including new fraud patterns. The synthetic document data is used to train a fraud classifier to detect potentially fraudulent documents as part of a document validation workflow. The method includes extracting document features from sample data corresponding to target regions of the documents, such as logo regions and watermark regions. The method may include updating a cost function of the generators to reduce the tendency of the system to generate repeated fraud patterns.
Type: Grant
Filed: December 10, 2018
Date of Patent: November 17, 2020
Assignee: Accenture Global Solutions Limited
Inventors: Julian Addison Anthony Samy, Kamal Mannar, Thanh Hai Le, Tau Herng Lim
-
Patent number: 10839316
Abstract: A method includes using a computational network to learn and predict time-series data. The computational network is configured to receive the time-series data and perform transformation-invariant encoding of the time-series data. The computational network includes one or more encoding layers. The method also includes feeding back future predictions of the time-series data through inertial adjustments of transformations. The inertial adjustments preserve the invariants in the computational network. The computational network could further include one or more pooling layers each configured to reduce dimensionality of data, where the one or more pooling layers provide the transformation invariance for the encoding.
Type: Grant
Filed: August 1, 2017
Date of Patent: November 17, 2020
Assignee: Goldman Sachs & Co. LLC
Inventor: Paul Burchard
-
Patent number: 10838848
Abstract: Computer implemented methods and systems are provided for generating one or more test cases based on received one or more natural language strings. An example system comprises a natural language classification unit that utilizes a trained neural network in conjunction with a reinforcement learning model, the system receiving as inputs various natural language strings and providing as outputs mapped test actions, mapped by the neural network.
Type: Grant
Filed: June 1, 2018
Date of Patent: November 17, 2020
Assignee: ROYAL BANK OF CANADA
Inventor: Cory Fong
-
Patent number: 10839700
Abstract: A method for optimized data transmission between an unmanned aerial vehicle and a telecommunications network includes: transmitting, by the unmanned aerial vehicle, command and control data and/or payload communication data in an uplink direction from the unmanned aerial vehicle to the telecommunications network; and/or receiving, by the unmanned aerial vehicle, command and control data and/or payload communication data in a downlink direction from the telecommunications network to the unmanned aerial vehicle. The unmanned aerial vehicle comprises a communication interface to transmit the command and control data and/or the payload communication data in the uplink direction and/or to receive the command and control data and/or the payload communication data in the downlink direction, wherein the communication interface comprises a first air interface towards an optimized cellular narrowband mobile network and a second air interface towards a cellular broadband mobile network.
Type: Grant
Filed: August 29, 2017
Date of Patent: November 17, 2020
Assignee: DEUTSCHE TELEKOM AG
Inventors: Andreas Frisch, Andreas Lassak
-
Patent number: 10831860
Abstract: Zero-shifting techniques in analog crosspoint arrays are provided. In one aspect, an analog array-based vector-matrix multiplication includes: a weight array connected to a reference array, each including a crossbar array having a set of conductive row wires and a set of conductive column wires intersecting the set of conductive row wires, and optimizable crosspoint devices at intersections of the set of conductive column wires and the set of conductive row wires. A method for analog array-based vector-matrix computing is also provided that includes: applying repeated voltage pulses to the crosspoint devices in the weight array until all of the crosspoint devices in the weight array converge to their own symmetry point; and copying conductance values for each crosspoint device from the weight array to the reference array.
Type: Grant
Filed: October 11, 2018
Date of Patent: November 10, 2020
Assignee: International Business Machines Corporation
Inventors: Seyoung Kim, Hyungjun Kim, Tayfun Gokmen, Malte Rasch
-
Patent number: 10834486
Abstract: A network distribution point (1) for operation as a node in a telecommunications system intermediate between a remote access server (41) and a plurality of individual termination points (1) served from the remote access server (41) by respective digital subscriber loops (30), in particular at an optical fibre/copper wire interface (17), incorporates a dynamic line management system (18) for processing data relating to the capabilities of each of the digital subscriber loops (30), and generating a profile of each digital subscriber loop (30) for transmission to the remote access server (41) to allow control of the transmission of data to the individual termination points.
Type: Grant
Filed: January 7, 2010
Date of Patent: November 10, 2020
Assignee: BRITISH TELECOMMUNICATIONS PUBLIC LIMITED COMPANY
Inventor: Ian Robert Cooper
-
Patent number: 10832135
Abstract: An embodiment includes a method, comprising: pruning a layer of a neural network having multiple layers using a threshold; and repeating the pruning of the layer of the neural network using a different threshold until a pruning error of the pruned layer reaches a pruning error allowance.
Type: Grant
Filed: April 14, 2017
Date of Patent: November 10, 2020
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Zhengping Ji, John Wakefield Brothers, Ilia Ovsiannikov, Eunsoo Shim
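The loop in this abstract, pruning with a threshold and then repeating with a different threshold until the pruning error reaches an allowance, can be sketched with magnitude pruning on a single weight matrix. The error measure (relative L2 change) and the threshold schedule are assumptions for the sketch.

```python
import numpy as np

def prune_layer(weights, error_allowance, step=0.01):
    """Repeatedly raise the magnitude threshold until the pruning error
    (relative L2 change of the layer weights) would exceed the allowance."""
    threshold = step
    pruned = weights.copy()
    while True:
        candidate = np.where(np.abs(weights) < threshold, 0.0, weights)
        error = np.linalg.norm(candidate - weights) / np.linalg.norm(weights)
        if error > error_allowance:
            break                      # the last threshold overshot the allowance
        pruned, threshold = candidate, threshold + step
    return pruned, threshold - step

W = np.random.randn(256, 256)
W_pruned, final_t = prune_layer(W, error_allowance=0.05)
print(f"threshold={final_t:.2f}, sparsity={np.mean(W_pruned == 0):.2%}")
```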
-
Patent number: 10810457
Abstract: Systems and methods directed to utilizing a first camera system to capture first images of one or more people in proximity to a tabletop; utilizing a second camera system to capture second images of one or more documents in proximity to the tabletop; generating a query for a database derived from people recognition conducted on the first images and text extraction on the second images; determining a first ranked list of people and a second ranked list of documents based on results of the query, the results based on a calculated ranked list of two-mode networks; and providing an interface on a display to access information about one or more people from the first ranked list of people and one or more documents from the second ranked list of documents.
Type: Grant
Filed: May 9, 2018
Date of Patent: October 20, 2020
Assignee: FUJI XEROX CO., LTD.
Inventors: Patrick Chiu, Chelhwon Kim, Hajime Ueno, Yulius Tjahjadi, Anthony Dunnigan, Francine Chen, Jian Zhao, Bee-Yian Liew, Scott Carter
-
Patent number: 10803359
Abstract: In some examples, processing circuitry obtains a shallow hash neural network (SHNN) model that has been trained from a HNN model and based on a set of SHNN training images that aggregates image recognition results from at least two reference hash neural network (HNN) models. Further, the processing circuitry performs an image recognition on the image according to the SHNN model, to obtain an image class vector in an image class space. The image class vector includes probability values of respective image classes in the image class space. A probability value of an image class in the image class space is a combination of intermediate probability values of the image class that are resulted from the at least two reference HNN models. Further, the processing circuitry determines one of the image classes for the image according to the probability values of the respective image classes in the image class space.
Type: Grant
Filed: March 15, 2019
Date of Patent: October 13, 2020
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Pai Peng, Xiaowei Guo
-
Patent number: 10803401
Abstract: The multiple independent processes run in an AI engine on its cloud-based platform. The multiple independent processes are configured as an independent process wrapped in its own container so that multiple instances of the same processes can run simultaneously to scale to handle one or more users to perform actions. The actions to solve AI problems can include 1) running multiple training sessions on two or more AI models at the same time, 2) creating two or more AI models at the same time, 3) running a training session on one or more AI models while creating one or more AI models at the same time, 4) deploying and using two or more trained AI models to do predictions on data from one or more data sources, 5) etc. A service handles scaling by dynamically calling in additional computing devices to load on and run additional instances of one or more of the independent processes as needed.
Type: Grant
Filed: January 26, 2017
Date of Patent: October 13, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Mark Isaac Hammond, Keen McEwan Browne, Shane Arney, Matthew Haigh, Jett Evan Jones, Matthew James Brown, Ruofan Kong, Chetan Desh
-
Patent number: 10796139
Abstract: A gesture recognition system using a siamese neural network executes a gesture recognition method. The gesture recognition method includes steps of: receiving a first training signal to calculate a first feature; receiving a second training signal to calculate a second feature; determining a distance between the first feature and the second feature in a feature space; adjusting the distance between the first feature and the second feature in the feature space according to a predetermined parameter. Two neural networks are used to generate the first feature and the second feature, and determine the distance between the first feature and the second feature in the feature space for training the neural networks. Therefore, the gesture recognition system does not need a large amount of data to train one neural network for classifying a sensing signal. A user may easily define a new personalized gesture.
Type: Grant
Filed: September 4, 2018
Date of Patent: October 6, 2020
Assignee: KAIKUTEK INC.
Inventors: Tsung-Ming Tai, Yun-Jie Jhang, Wen-Jyi Hwang, Chun-Hsuan Kuo
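The distance adjustment described here has the shape of a contrastive loss for a siamese pair: pull features of same-gesture pairs together and push different-gesture pairs at least a margin apart. The linear "network" below is a stand-in, and the margin and loss form are assumptions for illustration.

```python
import numpy as np

def embed(x, W):
    """Stand-in for the shared (siamese) neural network: a fixed nonlinear feature map."""
    return np.tanh(W @ x)

def contrastive_loss(f1, f2, same, margin=1.0):
    """same=1: minimise the distance; same=0: keep the distance above the margin."""
    d = np.linalg.norm(f1 - f2)
    return same * d ** 2 + (1 - same) * max(0.0, margin - d) ** 2

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))                             # shared weights of both branches
sig_a, sig_b = rng.normal(size=64), rng.normal(size=64)   # two training signals
loss = contrastive_loss(embed(sig_a, W), embed(sig_b, W), same=1)
print(f"pair loss: {loss:.3f}")
```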
-
Patent number: 10789505
Abstract: A method is described that includes executing a convolutional neural network layer on an image processor having an array of execution lanes and a two-dimensional shift register. The executing of the convolutional neural network includes loading a plane of image data of a three-dimensional block of image data into the two-dimensional shift register.
Type: Grant
Filed: June 23, 2017
Date of Patent: September 29, 2020
Assignee: Google LLC
Inventors: Ofer Shacham, David Patterson, William R. Mark, Albert Meixner, Daniel Frederic Finchelstein, Jason Rupert Redgrave
-
Patent number: 10783900
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for identifying the language of a spoken utterance. One of the methods includes receiving input features of an utterance; and processing the input features using an acoustic model that comprises one or more convolutional neural network (CNN) layers, one or more long short-term memory network (LSTM) layers, and one or more fully connected neural network layers to generate a transcription for the utterance.
Type: Grant
Filed: September 8, 2015
Date of Patent: September 22, 2020
Assignee: Google LLC
Inventors: Tara N. Sainath, Andrew W. Senior, Oriol Vinyals, Hasim Sak
-
Patent number: 10785745
Abstract: Embodiments of the invention provide a system for scaling multi-core neurosynaptic networks. The system comprises multiple network circuits. Each network circuit comprises a plurality of neurosynaptic core circuits. Each core circuit comprises multiple electronic neurons interconnected with multiple electronic axons via a plurality of electronic synapse devices. An interconnect fabric couples the network circuits. Each network circuit has at least one network interface. Each network interface for each network circuit enables data exchange between the network circuit and another network circuit by tagging each data packet from the network circuit with corresponding routing information.
Type: Grant
Filed: December 19, 2017
Date of Patent: September 22, 2020
Assignee: International Business Machines Corporation
Inventors: Rodrigo Alvarez Icaza Rivera, John V. Arthur, Andrew S. Cassidy, Bryan L. Jackson, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada
-
Patent number: 10783452
Abstract: A computer-implemented method is provided for learning a model corresponding to a target function that changes in time series. The method includes acquiring a time-series parameter that is a time series of input parameters including parameter values expressing the target function. The method further includes propagating propagation values, which are obtained by weighting parameter values at time points before one time point according to passage of the time points, to nodes in the model associated with the parameter values at the one time point. The method also includes calculating a node value of each node using each propagation value propagated to each node. The method additionally includes updating a weight parameter used for calculating the propagation values propagated to each node, using a difference between the target function at the one time point and a prediction function obtained by making a prediction from the node values of the nodes.
Type: Grant
Filed: April 28, 2017
Date of Patent: September 22, 2020
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: Hiroshi Kajino
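The phrase "weighting parameter values at time points before one time point according to passage of the time points" suggests a weight that shrinks with the age of the value. The exponential decay below is only one possible reading, used here purely as an illustrative sketch.

```python
import numpy as np

def propagate(history, decay=0.8):
    """Weight each past parameter vector by decay**age and sum, producing
    the propagation value fed to a node at the current time point."""
    T = len(history)
    weights = np.array([decay ** (T - t) for t in range(T)])  # older values get smaller weights
    return np.sum(weights[:, None] * np.asarray(history), axis=0)

history = [np.random.randn(4) for _ in range(10)]  # parameter values before the current time point
print(propagate(history))
```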
-
Patent number: 10776692
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an actor neural network used to select actions to be performed by an agent interacting with an environment. One of the methods includes obtaining a minibatch of experience tuples; and updating current values of the parameters of the actor neural network, comprising: for each experience tuple in the minibatch: processing the training observation and the training action in the experience tuple using a critic neural network to determine a neural network output for the experience tuple, and determining a target neural network output for the experience tuple; updating current values of the parameters of the critic neural network using errors between the target neural network outputs and the neural network outputs; and updating the current values of the parameters of the actor neural network using the critic neural network.
Type: Grant
Filed: July 22, 2016
Date of Patent: September 15, 2020
Assignee: DeepMind Technologies Limited
Inventors: Timothy Paul Lillicrap, Jonathan James Hunt, Alexander Pritzel, Nicolas Manfred Otto Heess, Tom Erez, Yuval Tassa, David Silver, Daniel Pieter Wierstra
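The critic/actor update structure can be sketched with tiny linear function approximators and target copies of both networks. Everything below (dimensions, learning rate, the synthetic minibatch) is a simplified stand-in for illustration, not the claimed system.

```python
import numpy as np

rng = np.random.default_rng(1)
S, A = 4, 2                                       # state and action dimensions
actor = rng.normal(size=(A, S)) * 0.1             # a(s) = actor @ s
critic = rng.normal(size=(S + A,)) * 0.1          # Q(s, a) = critic . [s, a]
actor_t, critic_t = actor.copy(), critic.copy()   # target networks held fixed here
gamma, lr = 0.99, 1e-2

def q(w, s, a):
    return w @ np.concatenate([s, a])

# One minibatch of experience tuples (observation, action, reward, next observation).
batch = [(rng.normal(size=S), rng.normal(size=A), rng.normal(), rng.normal(size=S))
         for _ in range(32)]

for s, a, r, s2 in batch:
    # Target network output for the tuple: r + gamma * Q_target(s', actor_target(s')).
    target = r + gamma * q(critic_t, s2, actor_t @ s2)
    td_err = q(critic, s, a) - target
    critic -= lr * td_err * np.concatenate([s, a])   # critic update on the squared error
    # Deterministic policy gradient: move the actor toward actions the critic scores higher.
    dq_da = critic[S:]                               # dQ/da for a linear critic
    actor += lr * np.outer(dq_da, s)                 # chain rule through a(s) = actor @ s
```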
-
Patent number: 10769528
Abstract: A computer trains a neural network model. (B) A neural network is executed to compute a post-iteration gradient vector and a current iteration weight vector. (C) A search direction vector is computed using a Hessian approximation matrix and the post-iteration gradient vector. (D) A step size value is initialized. (E) An objective function value is computed that indicates an error measure of the executed neural network. (F) When the computed objective function value is greater than an upper bound value, the step size value is updated using a predefined backtracking factor value. The upper bound value is computed as a sliding average of a predefined upper bound updating interval value number of previous upper bound values. (G) (E) and (F) are repeated until the computed objective function value is not greater than the upper bound value. (H) An updated weight vector is computed to describe a trained neural network model.
Type: Grant
Filed: October 2, 2019
Date of Patent: September 8, 2020
Assignee: SAS Institute Inc.
Inventors: Ben-hao Wang, Joshua David Griffin, Seyedalireza Yektamaram, Yan Xu
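Steps (B) through (H) can be sketched on a toy quadratic "objective", backtracking the step size while the objective exceeds an upper bound taken as a sliding average of recent values. The identity Hessian approximation, the bound over previous objective values, and the toy problem are simplifying assumptions, not the patented procedure.

```python
import numpy as np

c = np.array([0.5, 1.0, 3.0])                 # curvatures of a toy quadratic objective
def objective(w):                             # error measure of the "trained network"
    return float(np.sum(c * (w - 3.0) ** 2))

w = np.zeros(3)
beta = 0.5                                    # predefined backtracking factor
interval = 5                                  # upper-bound updating interval
history = [objective(w)]                      # previous upper-bound values

for _ in range(50):
    grad = 2.0 * c * (w - 3.0)                # (B) post-iteration gradient vector
    direction = -grad                         # (C) search direction (identity Hessian approx.)
    step = 1.0                                # (D) initialise the step size
    upper_bound = np.mean(history[-interval:])  # sliding average of previous values
    # (E)-(G): backtrack while the objective exceeds the upper bound.
    while objective(w + step * direction) > upper_bound and step > 1e-8:
        step *= beta
    w = w + step * direction                  # (H) updated weight vector
    history.append(objective(w))

print(np.round(w, 3), round(history[-1], 6))  # w approaches the minimiser [3, 3, 3]
```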
-
Patent number: 10755162
Abstract: A method to reduce a neural network includes: adding a reduced layer, which is reduced from a layer in the neural network, to the neural network; computing a layer loss and a result loss with respect to the reduced layer based on the layer and the reduced layer; and determining a parameter of the reduced layer based on the layer loss and the result loss.
Type: Grant
Filed: May 2, 2017
Date of Patent: August 25, 2020
Assignee: Samsung Electronics Co., Ltd.
Inventors: Seung Ju Han, Jung Bae Kim, Jae Joon Han, Chang Kyu Choi
-
Patent number: 10747886
Abstract: A computer implemented method to determine whether a target virtual machine (VM) in a virtualized computing environment is susceptible to a security attack, the method comprising: training a machine learning algorithm as a classifier based on a plurality of training data items, each training data item corresponding to a training VM and including a representation of parameters for a configuration of the training VM and a representation of characteristics of security attacks for the training VM; generating a data structure for storing one or more relationships between VM configuration parameters and attack characteristics, wherein the data structure is generated by sampling the trained machine learning algorithm to identify the relationships; determining a set of configuration parameters for the target VM; and identifying attack characteristics in the data structure associated with configuration parameters of the target VM as characteristics of attacks to which the target VM is susceptible.
Type: Grant
Filed: August 15, 2017
Date of Patent: August 18, 2020
Assignee: British Telecommunication Public Limited Company
Inventors: Fadi El-Moussa, Ian Herwono
-
Patent number: 10748057
Abstract: Methods, apparatus, and computer readable media related to combining and/or training one or more neural network modules based on version identifier(s) assigned to the neural network module(s). Some implementations are directed to using version identifiers of neural network modules in determining whether and/or how to combine multiple neural network modules to generate a combined neural network model for use by a robot and/or other apparatus. Some implementations are additionally or alternatively directed to assigning a version identifier to an endpoint of a neural network module based on one or more other neural network modules to which the neural network module is joined during training of the neural network module.
Type: Grant
Filed: September 21, 2016
Date of Patent: August 18, 2020
Assignee: X DEVELOPMENT LLC
Inventors: Adrian Li, Mrinal Kalakrishnan
-
Patent number: 10741184
Abstract: According to an embodiment, an arithmetic operation apparatus for a neural network includes an input layer calculator, a correction unit calculator, a hidden layer calculator, and an output layer calculator. The input layer calculator is configured to convert an input pattern into features as outputs of an input layer. The correction unit calculator is configured to perform calculation on N unit groups corresponding respectively to N classes of the input pattern and including correction units that each multiply a value based on inputs by a weight determined for the corresponding class. The hidden layer calculator is configured to perform calculation in a hidden layer based on the outputs of the input layer, another hidden layer, or the correction unit calculator. The output layer calculator is configured to perform calculation in an output layer based on the calculation for the hidden layer or the outputs of the correction unit calculator.
Type: Grant
Filed: March 15, 2016
Date of Patent: August 11, 2020
Assignee: KABUSHIKI KAISHA TOSHIBA
Inventors: Hiroshi Fujimura, Takashi Masuko
-
Patent number: 10740674
Abstract: A method of configuring a System-on-Chip (SoC) to execute a Convolutional Neural Network (CNN) by (i) receiving scheduling schemes each specifying a sequence of operations executable by Processing Units (PUs) of the SoC; (ii) selecting a scheduling scheme for a current layer of the CNN; (iii) determining a current state of memory for a storage location in the SoC allocated for storing feature map data from the CNN; (iv) selecting, from the plurality of scheduling schemes and dependent upon the scheduling scheme for the current layer of the CNN, a set of candidate scheduling schemes for a next layer of the CNN; and (v) selecting, from the set of candidate scheduling schemes dependent upon the determined current state of memory, a scheduling scheme for the next layer of the CNN.
Type: Grant
Filed: May 25, 2017
Date of Patent: August 11, 2020
Assignee: Canon Kabushiki Kaisha
Inventors: Jude Angelo Ambrose, Iftekhar Ahmed, Yusuke Yachide, Haseeb Bokhari, Jorgen Peddersen, Sridevan Parameswaran
-
Patent number: 10740656
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training and using machine learning clustering models to determine conditions of a satellite communication system. In some implementations, feature vectors for a time period are obtained. Each feature vector includes feature values that represent properties of a satellite communication system at a respective time during the time period. Each feature vector is provided as input to a machine learning model that assigns the feature vector to a cluster based on the properties of the satellite communication system represented by the feature vector. Each cluster corresponds to a respective potential operating condition of the satellite communication system.
Type: Grant
Filed: September 19, 2018
Date of Patent: August 11, 2020
Assignee: Hughes Network Systems, LLC
Inventors: Amit Arora, Archana Gharpuray, John Kenyon
-
Patent number: 10726328
Abstract: A method for implementing a convolutional neural network (CNN) accelerator on a target includes identifying characteristics and parameters for the CNN accelerator. Resources on the target are identified. A design for the CNN accelerator is generated in response to the characteristics and parameters of the CNN accelerator and the resources on the target.
Type: Grant
Filed: October 9, 2015
Date of Patent: July 28, 2020
Assignee: Altera Corporation
Inventors: Andrew Chaang Ling, Gordon Raymond Chiu, Utku Aydonat
-
Patent number: 10720217
Abstract: A memory device includes a plurality of memory cells and a controller. The controller is configured to program each of the memory cells to one of a plurality of program states, and to read the memory cells using a read operation of applied voltages to the memory cells. During the read operation, separations between adjacent ones of the program states vary based on frequencies of use of the program states in the plurality of memory cells.
Type: Grant
Filed: April 11, 2019
Date of Patent: July 21, 2020
Assignee: Silicon Storage Technology, Inc.
Inventors: Hieu Van Tran, Steven Lemke, Vipin Tiwari, Nhan Do, Mark Reiten
-
Patent number: 10721254
Abstract: Systems and methods for threat detection in a network are provided. The system obtains records for entities that access a network. The records include attributes associated with the entities. The system identifies features for each of the entities based on the attributes. The system generates a feature set for each of the entities. The feature set is generated from the features identified based on the attributes of each of the entities. The system forms clusters of entities based on the feature set for each of the entities. The system classifies each of the clusters with a threat severity score calculated based on scores associated with entities forming each of the clusters. The system determines to generate an alert for an entity in a cluster in response to the threat severity score of the cluster being greater than a threshold.
Type: Grant
Filed: March 1, 2018
Date of Patent: July 21, 2020
Assignee: Crypteia Networks S.A.
Inventors: Ilias Kotinas, Theocharis Tsigkritis, Giorgos Gkroumas
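The cluster-then-score-then-alert flow can be sketched in a few lines. K-means is used here only as a stand-in clustering algorithm, and the per-entity scores, cluster count, and alert threshold are invented for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 6))           # one feature set per entity
entity_scores = rng.uniform(0, 10, size=200)   # stand-in per-entity risk scores

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(features)

THRESHOLD = 6.0
for c in range(5):
    members = np.flatnonzero(labels == c)
    severity = entity_scores[members].mean()   # cluster threat severity score
    if severity > THRESHOLD:
        print(f"alert: cluster {c} ({len(members)} entities), severity {severity:.2f}")
```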
-
Patent number: 10720235
Abstract: A method for improving food-related personalization for a user including determining food-related preferences associated with a plurality of users to generate a user food preferences database; collecting dietary inputs from a subject matter expert (SME) at an SME interface associated with the user food preferences database; determining personalized food parameters for the user based on the user food-related preferences and the dietary inputs; receiving feedback associated with the personalized food parameters from the user; and updating the user food preferences database based on the feedback.
Type: Grant
Filed: January 25, 2019
Date of Patent: July 21, 2020
Assignee: KRAFT FOODS GROUP BRANDS LLC
Inventors: Tjarko Leifer, Erik Andrejko, Sivan Aldor-Noiman
-
Patent number: 10706352
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection neural network. One of the methods includes maintaining a replay memory that stores trajectories generated as a result of interaction of an agent with an environment; and training an action selection neural network having policy parameters on the trajectories in the replay memory, wherein training the action selection neural network comprises: sampling a trajectory from the replay memory; and adjusting current values of the policy parameters by training the action selection neural network on the trajectory using an off-policy actor critic reinforcement learning technique.
Type: Grant
Filed: May 3, 2019
Date of Patent: July 7, 2020
Assignee: DeepMind Technologies Limited
Inventors: Ziyu Wang, Nicolas Manfred Otto Heess, Victor Constant Bapst
-
Patent number: 10706327
Abstract: An information processing apparatus is provided. A processing unit obtains output data by inputting training data to a recognition unit. A determination unit determines an error in a discrimination result for the training data obtained by inputting the output data to a plurality of discriminators. A first training unit trains the recognition unit based on the error in the discrimination result.
Type: Grant
Filed: July 27, 2017
Date of Patent: July 7, 2020
Assignee: CANON KABUSHIKI KAISHA
Inventor: Yuki Saito
-
Patent number: 10691133
Abstract: Methods and systems that allow neural network systems to maintain or increase operational accuracy while being able to operate in various settings. A set of training data is collected over each of at least two different settings. Each setting has a set of characteristics. Examples of setting characteristic types can be time, geographical location, and/or weather condition. Each set of training data is used to train a neural network resulting in a set of coefficients. For each setting, the setting characteristics are associated with the corresponding neural network having the resulting coefficients and neural network structure. A neural network, having the coefficients and neural network structure resulted after training using the training data collected over a setting, would yield optimal results when operated in/under the setting. A database management system can store information relating to, for example, the setting characteristics, neural network coefficients, and/or neural network structures.
Type: Grant
Filed: January 3, 2020
Date of Patent: June 23, 2020
Assignee: Apex Artificial Intelligence Industries, Inc.
Inventor: Kenneth Austin Abeloe
-
Patent number: 10685278
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for implementing long-short term memory cells with saturating gating functions. One of the systems includes a first Long Short-Term Memory (LSTM) cell, wherein the first LSTM cell is configured to, for each of the plurality of time steps, generate a new cell state and a new cell output by applying a plurality of gates to a current cell input, a current cell state, and a current cell output, each of the plurality of gates being configured to, for each of the plurality of time steps: receive a gate input vector, generate a respective intermediate gate output vector from the gate input, and apply a respective gating function to each component of the respective intermediate gate output vector, wherein the respective gating function for at least one of the plurality of gates is a saturating gating function.
Type: Grant
Filed: December 30, 2019
Date of Patent: June 16, 2020
Assignee: Google LLC
Inventors: Sergey Ioffe, Raymond Wensley Smith
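A saturating gating function reaches exactly 0 or 1 outside a finite input range, unlike the ordinary sigmoid, which only approaches those values asymptotically. The "hard sigmoid" below is one common saturating choice, used here in a minimal LSTM step as an illustration; the cut-off points and the single-matrix parameterisation are assumptions, not the patented cell.

```python
import numpy as np

def saturating_gate(x, lower=-2.5, upper=2.5):
    """A saturating gating function: linear in the middle, exactly 0 or 1
    outside [lower, upper] (a 'hard sigmoid')."""
    return np.clip((x - lower) / (upper - lower), 0.0, 1.0)

def lstm_step(x, h, c, W):
    """One LSTM time step whose input/forget/output gates use the saturating gate."""
    z = W @ np.concatenate([x, h])            # all gate pre-activations at once
    i, f, o, g = np.split(z, 4)
    i, f, o = saturating_gate(i), saturating_gate(f), saturating_gate(o)
    c_new = f * c + i * np.tanh(g)            # new cell state
    h_new = o * np.tanh(c_new)                # new cell output
    return h_new, c_new

H, X = 8, 4
W = np.random.randn(4 * H, X + H) * 0.5
h, c = np.zeros(H), np.zeros(H)
h, c = lstm_step(np.random.randn(X), h, c, W)
```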
-
Patent number: 10679120
Abstract: Embodiments of the present invention relate to providing power minimization in a multi-core neurosynaptic network. In one embodiment of the present invention, a method of and computer program product for power-driven synaptic network synthesis is provided. Power consumption of a neurosynaptic network is modeled as wire length. The neurosynaptic network comprises a plurality of neurosynaptic cores. An arrangement of the synaptic cores is determined by minimizing the wire length.
Type: Grant
Filed: November 10, 2014
Date of Patent: June 9, 2020
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Charles J. Alpert, Pallab Datta, Myron D. Flickner, Zhuo Li, Dharmendra S. Modha, Gi-Joon Nam
-
Patent number: 10679118
Abstract: A spiking neural network (SNN) is defined that includes artificial neurons interconnected by artificial synapses, the SNN defined to correspond to one or more numerical matrices in an equation such that weight values of the synapses correspond to values in the numerical matrices. An input vector is provided to the SNN to correspond to a numerical vector in the equation. A steady state spiking rate is determined for at least a portion of the neurons in the SNN and an approximate result of a matrix inverse problem corresponding to the equation is determined based on values of the steady state spiking rates.
Type: Grant
Filed: December 20, 2016
Date of Patent: June 9, 2020
Assignee: Intel Corporation
Inventors: Tsung-Han Lin, Narayan Srinivasa
-
Patent number: 10671922
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for implementing a neural network. In one aspect, the neural network includes a batch renormalization layer between a first neural network layer and a second neural network layer. The first neural network layer generates first layer outputs having multiple components. The batch renormalization layer is configured to, during training of the neural network on a current batch of training examples, obtain respective current moving normalization statistics for each of the multiple components and determine respective affine transform parameters for each of the multiple components from the current moving normalization statistics. The batch renormalization layer receives a respective first layer output for each training example in the current batch and applies the affine transform to each component of a normalized layer output to generate a renormalized layer output for the training example.
Type: Grant
Filed: July 1, 2019
Date of Patent: June 2, 2020
Assignee: Google LLC
Inventor: Sergey Ioffe
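Batch renormalization's per-component correction can be sketched directly: normalise with the batch statistics, then apply an affine (r, d) transform computed from the moving statistics. The clipping limits and epsilon below are conventional hyperparameter choices for this technique, not values taken from the patent.

```python
import numpy as np

def batch_renorm(x, moving_mean, moving_var, r_max=3.0, d_max=5.0, eps=1e-5):
    """x: (batch, components) first-layer outputs.
    Normalise with batch statistics, then apply the affine correction (r, d)
    computed from the current moving normalisation statistics."""
    mu, sigma = x.mean(axis=0), x.std(axis=0) + eps
    moving_sigma = np.sqrt(moving_var) + eps
    # r and d are treated as constants (no gradient flows through them during training).
    r = np.clip(sigma / moving_sigma, 1.0 / r_max, r_max)
    d = np.clip((mu - moving_mean) / moving_sigma, -d_max, d_max)
    x_hat = (x - mu) / sigma            # normalized layer output
    return x_hat * r + d                # renormalized layer output

x = np.random.randn(64, 16) * 2.0 + 1.0
out = batch_renorm(x, moving_mean=np.zeros(16), moving_var=np.ones(16))
```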
-
Patent number: 10664725
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for data-efficient reinforcement learning. One of the systems is a system for training an actor neural network used to select actions to be performed by an agent that interacts with an environment by receiving observations characterizing states of the environment and, in response to each observation, performing an action selected from a continuous space of possible actions, wherein the actor neural network maps observations to next actions in accordance with values of parameters of the actor neural network, and wherein the system comprises: a plurality of workers, wherein each worker is configured to operate independently of each other worker, wherein each worker is associated with a respective agent replica that interacts with a respective replica of the environment during the training of the actor neural network.
Type: Grant
Filed: July 31, 2019
Date of Patent: May 26, 2020
Assignee: DeepMind Technologies Limited
Inventors: Martin Riedmiller, Roland Hafner, Mel Vecerik, Timothy Paul Lillicrap, Thomas Lampe, Ivaylo Popov, Gabriel Barth-Maron, Nicolas Manfred Otto Heess
-
Patent number: 10657175
Abstract: Methods and a computer-readable storage device are disclosed for generating a frequency representation of a query audio file. The frequency representation represents information about at least a number of frequencies within a time range containing a number of time frames of the audio content information and a level associated with each of said frequencies. At least one area of data points in the frequency representation is selected. A fingerprint for each selected area of data points is generated by applying a trained neural network onto said selected area of data points, thereby generating a vector in a metric space. A distance between at least one of the generated query fingerprints and at least one reference fingerprint is calculated using a specified distance metric. A reference audio file having associated reference fingerprints which have produced at least one associated distance satisfying a predetermined threshold is identified.
Type: Grant
Filed: October 26, 2018
Date of Patent: May 19, 2020
Assignee: Spotify AB
Inventors: Jonathan Donier, Till Hoffmann
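The matching step, embedding selected spectrogram areas into a metric space and comparing distances against a threshold, can be sketched with a frozen random projection standing in for the trained network. The patch sizes, threshold, and reference catalogue are invented for the example.

```python
import numpy as np

def fingerprint(patch, W):
    """Stand-in for the trained network: map a selected area of the frequency
    representation to a unit-length vector in the metric space."""
    v = np.tanh(W @ patch.ravel())
    return v / np.linalg.norm(v)

rng = np.random.default_rng(0)
W = rng.normal(size=(32, 64 * 16))                  # frozen 'trained' projection
query_patches = rng.random((5, 64, 16))             # selected areas of the query spectrogram
reference = {
    "track_a": rng.random((5, 64, 16)),                               # unrelated audio
    "track_b": query_patches + 0.01 * rng.random((5, 64, 16)),        # near-duplicate audio
}

THRESHOLD = 0.2                                      # predetermined distance threshold
q_prints = [fingerprint(p, W) for p in query_patches]
for name, patches in reference.items():
    dists = [np.linalg.norm(q - fingerprint(p, W)) for q, p in zip(q_prints, patches)]
    if min(dists) <= THRESHOLD:                      # any fingerprint pair close enough
        print(f"match: {name} (min distance {min(dists):.3f})")
```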
-
Patent number: 10656605
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating a target sequence from a source sequence. In one aspect, the system includes a recurrent neural network configured to, at each time step, receive an input for the time step and process the input to generate a progress score and a set of output scores; and a subsystem configured to, at each time step, generate the recurrent neural network input and provide the input to the recurrent neural network; determine, from the progress score, whether or not to emit a new output at the time step; and, in response to determining to emit a new output, select an output using the output scores and emit the selected output as the output at a next position in the output order.
Type: Grant
Filed: May 2, 2019
Date of Patent: May 19, 2020
Assignee: Google LLC
Inventors: Chung-Cheng Chiu, Navdeep Jaitly, Ilya Sutskever, Yuping Luo
-
Patent number: 10652617
Abstract: In one embodiment, a method separates subscriber features generated from subscriber interaction with a video delivery service into feature dimensions and inputs the feature dimensions into a respective prediction network. Each prediction network is trained to output a respective dimension score. The method outputs dimension scores using parameters in the plurality of prediction networks that are trained using a variance term to control a variance of the plurality of feature dimensions and using a de-correlation term to control a correlation of the plurality of feature dimensions. The dimension scores are combined into a retention prediction score and an action is performed on the video delivery service for the subscriber based on the retention score.
Type: Grant
Filed: April 4, 2018
Date of Patent: May 12, 2020
Assignee: HULU, LLC
Inventors: Nathan Becker, Colin Zhou, Matthew Holcombe, Atul Arun Phadnis, Hang Li, Sridhar Srinivasa Subramanian, Kristen Huff
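The variance and de-correlation terms can be sketched as two penalty terms computed from the per-dimension scores and added to the training loss. The exact form of both terms, the weighting factors, and the synthetic data below are assumptions made only for illustration.

```python
import numpy as np

def regularisers(dim_scores):
    """dim_scores: (batch, n_dims) outputs of the per-dimension prediction networks.
    Returns a variance term and a de-correlation term that can be added to the
    training loss to control the spread and the correlation of the dimensions."""
    variance_term = np.mean(dim_scores.var(axis=0))
    corr = np.corrcoef(dim_scores, rowvar=False)
    off_diag = corr - np.diag(np.diag(corr))
    decorrelation_term = np.sum(off_diag ** 2)
    return variance_term, decorrelation_term

scores = np.random.rand(128, 4)          # 4 feature dimensions, one score each per subscriber
retention = scores.mean(axis=1)          # dimension scores combined into a retention score
labels = np.random.rand(128)             # synthetic retention targets for the sketch
v, d = regularisers(scores)
loss = np.mean((retention - labels) ** 2) + 0.1 * v + 0.1 * d
```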
-
Patent number: 10650830
Abstract: Processing circuitry of an information processing apparatus obtains a set of identity vectors that are calculated according to voice samples from speakers. The identity vectors are classified into speaker classes respectively corresponding to the speakers. The processing circuitry selects, from the identity vectors, first subsets of interclass neighboring identity vectors respectively corresponding to the identity vectors and second subsets of intraclass neighboring identity vectors respectively corresponding to the identity vectors. The processing circuitry determines an interclass difference based on the first subsets of interclass neighboring identity vectors and the corresponding identity vectors; and determines an intraclass difference based on the second subsets of intraclass neighboring identity vectors and the corresponding identity vectors.
Type: Grant
Filed: April 16, 2018
Date of Patent: May 12, 2020
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Wei Li, Binghua Qian, Xingming Jin, Ke Li, Fuzhang Wu, Yongjian Wu, Feiyue Huang
-
Patent number: 10650045
Abstract: An apparatus includes a processor to: train a first neural network of a chain to generate first configuration data including first trained parameters, wherein the chain performs an analytical function generating a set of output values from a set of input values, each neural network has inputs to receive the set of input values and outputs to output a portion of the set of output values, and the neural networks are ordered from the first at the head to a last neural network at the tail, and are interconnected so that each neural network additionally receives the outputs of a preceding neural network; train, using the first configuration data, a next neural network in the chain ordering to generate next configuration data including next trained parameters; and use at least the first and next configuration data and data indicating the interconnections to instantiate the chain to perform the analytical function.
Type: Grant
Filed: August 30, 2019
Date of Patent: May 12, 2020
Assignee: SAS INSTITUTE INC.
Inventors: Henry Gabriel Victor Bequet, Jacques Rioux, John Alejandro Izquierdo, Huina Chen, Juan Du
-
Patent number: 10635975
Abstract: A disclosed machine learning method includes: calculating a first output error between a label and an output in a case where dropout, in which values are replaced with 0, is executed for a last layer of a first channel among plural channels in a parallel neural network; calculating a second output error between the label and an output in a case where the dropout is not executed for the last layer of the first channel; and identifying at least one channel from the plural channels based on a difference between the first output error and the second output error to update parameters of the identified channel.
Type: Grant
Filed: December 19, 2016
Date of Patent: April 28, 2020
Assignee: FUJITSU LIMITED
Inventor: Yuhei Umeda
-
Patent number: 10635944
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an object representation neural network. One of the methods includes obtaining training sets of images, each training set comprising: (i) a before image of a before scene of the environment, (ii) an after image of an after scene of the environment after the robot has removed a particular object, and (iii) an object image of the particular object, and training the object representation neural network on the batch of training data, comprising determining an update to the object representation parameters that encourages the vector embedding of the particular object in each training set to be closer to a difference between (i) the vector embedding of the after scene in the training set and (ii) the vector embedding of the before scene in the training set.
Type: Grant
Filed: June 17, 2019
Date of Patent: April 28, 2020
Assignee: Google LLC
Inventors: Eric Victor Jang, Sergey Vladimir Levine, Coline Manon Devin
-
Patent number: 10628710
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing images or features of images using an image classification system that includes a batch normalization layer. One of the systems includes a convolutional neural network configured to receive an input comprising an image or image features of the image and to generate a network output that includes respective scores for each object category in a set of object categories, the score for each object category representing a likelihood that the image contains an image of an object belonging to the category, and the convolutional neural network comprising: a plurality of neural network layers, the plurality of neural network layers comprising a first convolutional neural network layer and a second neural network layer; and a batch normalization layer between the first convolutional neural network layer and the second neural network layer.
Type: Grant
Filed: December 19, 2018
Date of Patent: April 21, 2020
Assignee: Google LLC
Inventors: Sergey Ioffe, Corinna Cortes
-
Patent number: 10628262
Abstract: A first static server configured to perform at least one first node process and a second static server configured to perform at least one second node process may be instantiated. A conglomerate server may periodically analyze the at least one first node process and the at least one second node process to identify a network process state based on the at least one first node process and the at least one second node process. The conglomerate server may store the network process state in a memory. A failure may be detected in the first static server. In response to the detecting, the first static server may be reinstantiated. The reinstantiating may comprise restarting the at least one first node process according to the network process state from the memory.
Type: Grant
Filed: October 24, 2018
Date of Patent: April 21, 2020
Assignee: Capital One Services, LLC
Inventors: Austin Walters, Jeremy Goodsitt, Fardin Abdi Taghi Abad