Patents Examined by Benjamin J Buss
-
Patent number: 11501195
Abstract: Systems, methods, aspects, and embodiments thereof relate to unsupervised or semi-supervised feature learning using a quantum processor. To achieve unsupervised or semi-supervised feature learning, the quantum processor is programmed to achieve Hierarchical Deep Learning (referred to as HDL) over one or more data sets. Systems and methods search for, parse, and detect maximally repeating patterns in one or more data sets or across data or data sets. Embodiments and aspects regard using sparse coding to detect maximally repeating patterns in or across data. Examples of sparse coding include L0 and L1 sparse coding. Some implementations may involve appending, incorporating, or attaching labels to dictionary elements, or constituent elements of one or more dictionaries. There may be a logical association between the label and the labeled element such that the process of unsupervised or semi-supervised feature learning spans both the elements and the incorporated, attached, or appended label.
Type: Grant
Filed: July 3, 2017
Date of Patent: November 15, 2022
Assignee: D-WAVE SYSTEMS INC.
Inventors: Geordie Rose, Suzanne Gildert, William G. Macready, Dominic Christoph Walliman
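The abstract names L0/L1 sparse coding as the pattern-detection tool. As a rough illustration of classical L1 sparse coding only (not the patented quantum formulation), the sketch below solves for a sparse code against a fixed dictionary with ISTA; the dictionary, signal, and sparsity weight are made-up example values.

```python
# Illustrative classical L1 sparse coding with a fixed dictionary (ISTA).
# This is NOT the patented quantum approach; D, x, and lam are hypothetical.
import numpy as np

def ista_sparse_code(D, x, lam=0.1, steps=200):
    """Minimize 0.5*||x - D@w||^2 + lam*||w||_1 over the code w."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    w = np.zeros(D.shape[1])
    for _ in range(steps):
        grad = D.T @ (D @ w - x)           # gradient of the quadratic term
        z = w - grad / L                   # gradient step
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return w

rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64))          # overcomplete dictionary (64 atoms)
x = D[:, 3] * 2.0 + D[:, 17] * -1.5        # signal built from two atoms
w = ista_sparse_code(D, x)
print("non-zero atoms:", np.flatnonzero(np.abs(w) > 0.05))
```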
-
Patent number: 11455545
Abstract: A computer-implemented system and method for building context models in real time is provided. A database of models for a user is maintained. Each model represents a contextual situation and includes one or more actions. Contextual data is collected for the user and a contextual situation is identified for that user based on the collected contextual data. Models related to the identified situation are selected and merged. One or more actions from the merged model are then selected.
Type: Grant
Filed: August 10, 2016
Date of Patent: September 27, 2022
Assignee: Palo Alto Research Center Incorporated
Inventor: Simon Tucker
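A minimal sketch of the select-merge-act flow described above, assuming a hypothetical representation in which each model is a situation tag plus an action-weight table; the patent does not prescribe this structure.

```python
# Minimal sketch of selecting and merging per-situation context models.
# The model structure and situation tags are hypothetical examples.
from collections import defaultdict

# User's model database: each model is tagged with a situation and maps actions to weights.
models = [
    {"situation": "commute", "actions": {"play_podcast": 0.8, "mute_phone": 0.3}},
    {"situation": "commute", "actions": {"play_podcast": 0.5, "open_maps": 0.7}},
    {"situation": "office",  "actions": {"mute_phone": 0.9}},
]

def merge_models(situation):
    """Merge all models matching the identified situation by summing action weights."""
    merged = defaultdict(float)
    for m in models:
        if m["situation"] == situation:
            for action, w in m["actions"].items():
                merged[action] += w
    return dict(merged)

merged = merge_models("commute")           # situation identified from collected contextual data
best_action = max(merged, key=merged.get)  # select one action from the merged model
print(merged, "->", best_action)           # play_podcast wins (0.8 + 0.5)
```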
-
Patent number: 11256984
Abstract: A machine learning (ML) task system trains a neural network model that learns a compressed representation of acquired data and performs a ML task using the compressed representation. The neural network model is trained to generate a compressed representation that balances the objectives of achieving a target codelength and achieving a high accuracy of the output of the performed ML task. During deployment, an encoder portion and a task portion of the neural network model are separately deployed. A first system acquires data, applies the encoder portion to generate a compressed representation, performs an encoding process to generate compressed codes, and transmits the compressed codes. A second system regenerates the compressed representation from the compressed codes and applies the task model to determine the output of a ML task.
Type: Grant
Filed: December 15, 2017
Date of Patent: February 22, 2022
Assignee: WaveOne Inc.
Inventors: Lubomir Bourdev, Carissa Lew, Sanjay Nair, Oren Rippel
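To make the rate-accuracy balancing concrete, here is a hedged sketch of the kind of combined objective such training could use: a task loss plus a penalty for exceeding a target codelength. The weighting scheme and the names (lam, target_bits) are assumptions for illustration, not the patent's actual training procedure.

```python
# Sketch of a joint objective balancing task accuracy against a target codelength.
def joint_loss(task_loss, code_bits, target_bits, lam=0.01):
    """Penalize codes longer than the target; shorter codes incur no rate penalty."""
    rate_penalty = max(code_bits - target_bits, 0.0)
    return task_loss + lam * rate_penalty

# Example: a classifier's cross-entropy plus the entropy-coded size of the representation.
task_loss = 0.42       # e.g. cross-entropy of the ML task head
code_bits = 5120.0     # length of the compressed representation in bits
print(joint_loss(task_loss, code_bits, target_bits=4096.0))
```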
-
Patent number: 11244225
Abstract: Implementing a neural network can include receiving a macro instruction for implementing the neural network within a control unit of a neural network processor. The macro instruction can indicate a first data set, a second data set, a macro operation for the neural network, and a mode of operation for performing the macro operation. The macro operation can be automatically initiated using a processing unit of the neural network processor by applying the second data set to the first data set based on the mode of operation.
Type: Grant
Filed: June 27, 2016
Date of Patent: February 8, 2022
Inventors: John W. Brothers, Joohoon Lee
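A hedged sketch of the decode-and-dispatch idea: a control unit receives a macro instruction carrying two data sets, an operation, and a mode, and a processing unit applies the second data set to the first according to that mode. The instruction fields and mode names below are illustrative, not the patented encoding.

```python
# Hypothetical macro-instruction structure and a control-unit style dispatcher.
from dataclasses import dataclass
import numpy as np

@dataclass
class MacroInstruction:
    first_data: np.ndarray    # e.g. input activations
    second_data: np.ndarray   # e.g. layer weights
    operation: str            # macro operation, e.g. "fully_connected"
    mode: str                 # mode of operation, e.g. "matmul" or "elementwise"

def execute(instr: MacroInstruction) -> np.ndarray:
    """Initiate the macro operation on a processing unit based on the mode of operation."""
    if instr.mode == "matmul":
        return instr.second_data @ instr.first_data
    if instr.mode == "elementwise":
        return instr.second_data * instr.first_data
    raise ValueError(f"unknown mode {instr.mode!r}")

x = np.ones(4)
W = np.arange(12, dtype=float).reshape(3, 4)
print(execute(MacroInstruction(x, W, "fully_connected", "matmul")))
```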
-
Patent number: 11176449
Abstract: Neural network accelerator hardware-specific division of inference may be performed by operations including obtaining a computational graph and a hardware chip configuration. The operations also include dividing inference of the plurality of layers of the neural network into a plurality of groups. Each group includes a number of sequential layers based on an estimate of duration and energy consumption by the hardware chip to perform inference of the neural network by performing the mathematical operations on activation data, sequentially by layer, of corresponding portions of layers of each group. The operations further include generating instructions for the hardware chip to perform inference of the neural network, sequentially by group, of the plurality of groups.
Type: Grant
Filed: February 26, 2021
Date of Patent: November 16, 2021
Assignee: EDGECORTIX PTE. LTD.
Inventors: Nikolay Nez, Antonio Tomas Nevado Vilchez, Hamid Reza Zohouri, Mikhail Volkov, Oleg Khavin, Sakyasingha Dasgupta
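As an illustration of grouping sequential layers under estimated hardware costs, here is a hedged sketch that greedily packs consecutive layers until a per-group duration or energy budget is exceeded. The cost numbers, budgets, and greedy policy are assumptions; the patent's estimator and grouping logic are not reproduced here.

```python
# Illustrative greedy grouping of sequential layers under a per-group cost budget,
# using hypothetical per-layer duration (ms) and energy (mJ) estimates.
layers = [
    {"name": "conv1", "ms": 1.2, "mj": 3.0},
    {"name": "conv2", "ms": 2.5, "mj": 6.5},
    {"name": "conv3", "ms": 2.0, "mj": 5.0},
    {"name": "fc1",   "ms": 0.8, "mj": 1.5},
]

def group_layers(layers, ms_budget=3.0, mj_budget=8.0):
    """Pack consecutive layers into groups while the estimated group cost fits the budget."""
    groups, current, ms, mj = [], [], 0.0, 0.0
    for layer in layers:
        if current and (ms + layer["ms"] > ms_budget or mj + layer["mj"] > mj_budget):
            groups.append(current)
            current, ms, mj = [], 0.0, 0.0
        current.append(layer["name"])
        ms += layer["ms"]
        mj += layer["mj"]
    if current:
        groups.append(current)
    return groups

print(group_layers(layers))   # [['conv1'], ['conv2'], ['conv3', 'fc1']]
```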
-
Patent number: 11170310
Abstract: Systems and methods for automatically analyzing and selecting prominent channels from multi-dimensional biomedical signals in order to detect particular diseases or ailments are provided. Such systems and methods may be applied in different ways to obtain numerous benefits, such as lowering of power and processing requirements, reducing an amount of data acquired, simplifying hardware deployment, detecting non-trivial patterns, obtaining clinical episode prognosis, improving patient care, and/or the like.
Type: Grant
Filed: January 25, 2013
Date of Patent: November 9, 2021
Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Majid Sarrafzadeh, Mars Lan
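A minimal sketch of the channel-selection idea: score each channel of a multi-channel recording and keep only the most prominent ones before further processing, which is what enables the reduced data and power claims. Scoring by variance is an illustrative choice, not the patented selection criterion.

```python
# Sketch of automatically ranking channels of a multi-channel biomedical signal.
import numpy as np

def select_prominent_channels(signal, keep=2):
    """signal: array of shape (channels, samples); returns indices of top channels."""
    scores = signal.var(axis=1)                 # per-channel prominence score (assumed)
    return np.argsort(scores)[::-1][:keep]      # indices of the highest-scoring channels

rng = np.random.default_rng(1)
signal = rng.standard_normal((8, 500)) * np.array([0.1, 2.0, 0.1, 0.5, 3.0, 0.2, 0.1, 0.3])[:, None]
selected = select_prominent_channels(signal, keep=2)
print("acquire only channels:", selected)       # fewer channels -> less data, lower power
```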
-
Patent number: 11164084
Abstract: A device, system, and method is provided for training or prediction using a cluster-connected neural network. The cluster-connected neural network may be divided into a plurality of clusters of artificial neurons connected by weights or convolutional channels connected by convolutional filters. Within each cluster is a locally dense sub-network of intra-cluster weights or filters with a majority of pairs of neurons or channels connected by intra-cluster weights or filters that are co-activated together as an activation block during training or prediction. Outside each cluster is a globally sparse network of inter-cluster weights or filters with a minority of pairs of neurons or channels separated by a cluster border across different clusters connected by inter-cluster weights or filters. Training or predicting is performed using the cluster-connected neural network.
Type: Grant
Filed: November 11, 2020
Date of Patent: November 2, 2021
Assignee: DEEPCUBE LTD.
Inventors: Eli David, Eri Rubin
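The "locally dense, globally sparse" structure can be pictured as a block mask over the weight matrix: dense blocks on the diagonal (intra-cluster) and a few scattered entries elsewhere (inter-cluster). The sketch below builds such a mask; cluster size and inter-cluster density are illustrative parameters, not values from the patent.

```python
# Sketch of a cluster-connected weight mask: dense inside clusters, sparse between them.
import numpy as np

def cluster_mask(n_neurons, cluster_size, inter_density=0.05, seed=0):
    rng = np.random.default_rng(seed)
    mask = (rng.random((n_neurons, n_neurons)) < inter_density)  # sparse inter-cluster links
    for start in range(0, n_neurons, cluster_size):
        end = start + cluster_size
        mask[start:end, start:end] = True                        # dense intra-cluster block
    return mask

mask = cluster_mask(n_neurons=16, cluster_size=4)
weights = np.random.default_rng(1).standard_normal((16, 16)) * mask  # prune masked-out weights
print("fraction of active weights:", mask.mean())
```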
-
Title: Neural network processing with the neural network model pinned to on-chip memories of hardware nodes
Patent number: 11157801
Abstract: Systems and methods for neural network processing are provided. A method in a system comprising a plurality of nodes interconnected via a network, where each node includes a plurality of on-chip memory blocks and a plurality of compute units, is provided. The method includes, upon service activation, receiving an N by M matrix of coefficients corresponding to the neural network model. The method includes loading the coefficients corresponding to the neural network model into the plurality of the on-chip memory blocks for processing by the plurality of compute units. The method includes, regardless of a utilization of the plurality of the on-chip memory blocks as part of an evaluation of the neural network model, maintaining the coefficients corresponding to the neural network model in the plurality of the on-chip memory blocks until the service is interrupted or the neural network model is modified or replaced.
Type: Grant
Filed: June 29, 2017
Date of Patent: October 26, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Eric S. Chung, Douglas C. Burger, Jeremy Fowers, Kalin Ovtcharov
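A hedged sketch of the pinning idea: split the N by M coefficient matrix across fixed-capacity on-chip memory blocks once at service activation and keep those blocks resident between evaluations. The block capacity and row-major partitioning are assumptions for illustration only.

```python
# Sketch of loading model coefficients into on-chip-block-sized chunks and pinning them.
import numpy as np

BLOCK_CAPACITY = 1024          # coefficients per on-chip memory block (hypothetical)

def pin_model(coefficients):
    """Split the flattened coefficient matrix into on-chip-block-sized chunks."""
    flat = coefficients.ravel()
    return [flat[i:i + BLOCK_CAPACITY].copy() for i in range(0, flat.size, BLOCK_CAPACITY)]

coeffs = np.random.default_rng(0).standard_normal((128, 64))   # N x M model coefficients
on_chip_blocks = pin_model(coeffs)                             # loaded once at service activation
print(len(on_chip_blocks), "memory blocks pinned")

# Evaluations reuse the pinned blocks; they are not evicted between requests,
# only replaced when the service stops or the model is updated.
```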
-
Patent number: 11157814
Abstract: The present disclosure provides systems and methods to reduce computational costs associated with convolutional neural networks. In addition, the present disclosure provides a class of efficient models termed “MobileNets” for mobile and embedded vision applications. MobileNets are based on a straightforward architecture that uses depthwise separable convolutions to build lightweight deep neural networks. The present disclosure further provides two global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the entity building the model to select the appropriately sized model for the particular application based on the constraints of the problem. MobileNets and associated computational cost reduction techniques are effective across a wide range of applications and use cases.
Type: Grant
Filed: September 18, 2017
Date of Patent: October 26, 2021
Assignee: Google LLC
Inventors: Andrew Gerald Howard, Bo Chen, Dmitry Kalenichenko, Tobias Christoph Weyand, Menglong Zhu, Marco Andreetto, Weijun Wang
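The cost reduction from depthwise separable convolutions is easy to see in multiply-accumulate counts. The sketch below compares a standard convolution against a depthwise separable one and applies width and resolution multipliers in the spirit of the two global hyper-parameters mentioned above; the layer shape is an arbitrary example, not a layer from the patent.

```python
# Back-of-the-envelope MAC-count comparison: standard vs. depthwise separable convolution.
def standard_conv_macs(h, w, cin, cout, k=3):
    return k * k * cin * cout * h * w

def depthwise_separable_macs(h, w, cin, cout, k=3):
    depthwise = k * k * cin * h * w          # one k x k filter per input channel
    pointwise = cin * cout * h * w           # 1x1 convolution mixes channels
    return depthwise + pointwise

h = w = 112
cin, cout, alpha, rho = 64, 128, 0.75, 0.857  # width and resolution multipliers (example values)

full = standard_conv_macs(h, w, cin, cout)
sep = depthwise_separable_macs(int(rho * h), int(rho * w), int(alpha * cin), int(alpha * cout))
print(f"standard: {full:,} MACs, separable (alpha={alpha}, rho={rho}): {sep:,} MACs")
print(f"reduction: {full / sep:.1f}x")
```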
-
Patent number: 11157815
Abstract: The present disclosure provides systems and methods to reduce computational costs associated with convolutional neural networks. In addition, the present disclosure provides a class of efficient models termed “MobileNets” for mobile and embedded vision applications. MobileNets are based on a straightforward architecture that uses depthwise separable convolutions to build lightweight deep neural networks. The present disclosure further provides two global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the entity building the model to select the appropriately sized model for the particular application based on the constraints of the problem. MobileNets and associated computational cost reduction techniques are effective across a wide range of applications and use cases.
Type: Grant
Filed: July 29, 2019
Date of Patent: October 26, 2021
Assignee: Google LLC
Inventors: Andrew Gerald Howard, Bo Chen, Dmitry Kalenichenko, Tobias Christoph Weyand, Menglong Zhu, Marco Andreetto, Weijun Wang
-
Patent number: 11144820
Abstract: Processors and methods for neural network processing are provided. A method in a processor including a pipeline having a matrix vector unit (MVU), a first multifunction unit connected to receive an input from the matrix vector unit, a second multifunction unit connected to receive an output from the first multifunction unit, and a third multifunction unit connected to receive an output from the second multifunction unit is provided. The method includes decoding a chain of instructions received via an input queue, where the chain of instructions comprises a first instruction that can only be processed by the matrix vector unit and a sequence of instructions that can only be processed by a multifunction unit. The method includes processing the first instruction using the MVU and processing each instruction in the sequence of instructions depending upon its position in the sequence.
Type: Grant
Filed: June 29, 2017
Date of Patent: October 12, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Eric S. Chung, Douglas C. Burger, Jeremy Fowers
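To illustrate the pipeline shape described above, here is a hedged software sketch: the first instruction in a chain runs on a matrix-vector unit, and each following instruction runs on the multifunction unit determined by its position in the chain. The unit behaviors and instruction encoding are made up for illustration; this is not the patented hardware design.

```python
# Sketch of decoding and executing a chain: MVU first, then positional multifunction units.
import numpy as np

def mvu(vec, matrix):                 # matrix-vector unit
    return matrix @ vec

MULTIFUNCTION_UNITS = [               # three multifunction units, connected in series
    lambda v: np.maximum(v, 0.0),     # MFU 1: ReLU-style activation
    lambda v: v + 1.0,                # MFU 2: bias add
    lambda v: v * 0.5,                # MFU 3: scaling
]

def run_chain(chain, vec):
    """chain = [("mvu", matrix), ("mfu", None), ("mfu", None), ...] processed in order."""
    for position, (kind, arg) in enumerate(chain):
        if kind == "mvu":
            vec = mvu(vec, arg)                           # only the MVU handles this instruction
        else:
            vec = MULTIFUNCTION_UNITS[position - 1](vec)  # position selects the MFU in the pipeline
    return vec

M = np.array([[1.0, -2.0], [3.0, 4.0]])
print(run_chain([("mvu", M), ("mfu", None), ("mfu", None)], np.array([1.0, 1.0])))
```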
-
Patent number: 11132599
Abstract: Processors and methods for neural network processing are provided. A method in a processor including a pipeline having a matrix vector unit (MVU), a first multifunction unit connected to receive an input from the MVU, a second multifunction unit connected to receive an output from the first multifunction unit, and a third multifunction unit connected to receive an output from the second multifunction unit is provided. The method includes decoding instructions including a first type of instruction for processing by only the MVU and a second type of instruction for processing by only one of the multifunction units. The method includes mapping a first instruction to the matrix vector unit or to any one of the first multifunction unit, the second multifunction unit, or the third multifunction unit, depending on whether the first instruction is the first type of instruction or the second type of instruction.
Type: Grant
Filed: June 29, 2017
Date of Patent: September 28, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Eric S. Chung, Douglas C. Burger, Jeremy Fowers
-
Patent number: 11107005
Abstract: A method for relative temperature preference learning is described. In one embodiment, the method includes identifying one or more current settings of a thermostat located at a premises, identifying one or more current indoor and outdoor conditions, calculating a current indoor differential between the current indoor temperature and the current target temperature, calculating a current outdoor differential between the current outdoor temperature and the current target temperature, and learning temperature preferences based on an analysis of the one or more current indoor conditions and the one or more current outdoor conditions. The one or more current settings of the thermostat include at least one of a current target temperature, current runtime settings, and current airflow settings.
Type: Grant
Filed: August 19, 2019
Date of Patent: August 31, 2021
Assignee: Vivint, Inc.
Inventor: JonPaul Vega
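The two differentials in the abstract are simple subtractions, sketched below with made-up temperatures; the preference-learning step that consumes them is not shown.

```python
# Sketch of the indoor and outdoor differentials relative to the current target temperature.
def temperature_differentials(indoor, outdoor, target):
    indoor_diff = indoor - target      # positive: indoors warmer than the target
    outdoor_diff = outdoor - target    # positive: outdoors warmer than the target
    return indoor_diff, outdoor_diff

indoor_diff, outdoor_diff = temperature_differentials(indoor=74.0, outdoor=91.0, target=70.0)
print(indoor_diff, outdoor_diff)       # 4.0, 21.0 -> inputs to the preference-learning step
```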
-
Patent number: 11080610
Abstract: A numerical control system detects a state amount indicating an operation state of a machine tool, creates a characteristic amount that characterizes the state of a machining operation from the detected state amount, infers an evaluation value of the operation state of the machine tool from the characteristic amount, and detects an abnormality in the operation state of the machine tool on the basis of the inferred evaluation value. The numerical control system generates and updates a learning model by machine learning that uses the characteristic amount, and stores the learning model in correlation with a combination of conditions of the machining operation of the machine tool.
Type: Grant
Filed: September 25, 2018
Date of Patent: August 3, 2021
Assignee: Fanuc Corporation
Inventors: Kazunori Iijima, Kazuhiro Satou, Yohei Kamiya
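A hedged sketch of the monitoring loop described above: derive a characteristic amount from the detected state amount, score it against a model stored per machining-condition combination, and flag an abnormality when the score deviates too far. The RMS feature, the statistics-based model, and the threshold are illustrative stand-ins for the patent's machine-learned model.

```python
# Sketch: state amount -> characteristic amount -> evaluation value -> abnormality decision.
import numpy as np

learned_models = {                       # keyed by a combination of machining conditions
    ("steel", "roughing"): {"mean": 2.1, "std": 0.3},
}

def characteristic_amount(spindle_load_trace):
    return float(np.sqrt(np.mean(np.square(spindle_load_trace))))    # RMS of the state amount

def is_abnormal(trace, condition, threshold=3.0):
    model = learned_models[condition]
    feature = characteristic_amount(trace)
    score = abs(feature - model["mean"]) / model["std"]               # inferred evaluation value
    return score > threshold

normal = np.full(100, 2.0)
chatter = np.full(100, 4.5)
print(is_abnormal(normal, ("steel", "roughing")))    # False
print(is_abnormal(chatter, ("steel", "roughing")))   # True
```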
-
Patent number: 11074510
Abstract: One embodiment provides an apparatus, including: a sensor subsystem comprising i) a plurality of sensors that collect information about the apparatus' immediate environment and ii) at least one agent that fuses and interprets the collected information; a model subsystem comprising i) a plurality of models, including a model for each of the apparatus' immediate environment, sentient beings, and the apparatus itself, the models receiving the collected information and storing other information, and ii) at least one agent that uses the collected information and the stored other information to deduce information about the apparatus' immediate environment; an actuator subsystem comprising a plurality of actuators that interact with the apparatus' immediate environment based upon the collected information and the information deduced by the model subsystem; and an agency subsystem comprising a plurality of agents that carry out plans according to goals identifying at least one desired outcome in relation to the apparatus…
Type: Grant
Filed: March 6, 2017
Date of Patent: July 27, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Ernest Grady Booch, Raphael P. Chancey
-
Patent number: 11049001
Abstract: The present invention provides a system comprising multiple core circuits. Each core circuit comprises multiple electronic axons for receiving event packets, multiple electronic neurons for generating event packets, and a fanout crossbar including multiple electronic synapse devices for interconnecting the neurons with the axons. The system further comprises a routing system for routing event packets between the core circuits. The routing system virtually connects each neuron with one or more programmable target axons for the neuron by routing each event packet generated by the neuron to the target axons. Each target axon for each neuron of each core circuit is an axon located on the same core circuit as, or a different core circuit than, the neuron.
Type: Grant
Filed: July 30, 2018
Date of Patent: June 29, 2021
Assignee: International Business Machines Corporation
Inventors: Rodrigo Alvarez-Icaza Rivera, John V. Arthur, Andrew S. Cassidy, Bryan L. Jackson, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada
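A minimal sketch of the routing idea: each source neuron is programmed with a list of target axons, which may sit on the same core or on a different one, and every event packet it generates is delivered to all of them. The table layout and core/axon indices are illustrative, not the patented packet format.

```python
# Sketch of routing spike event packets from neurons to their programmable target axons.
from collections import defaultdict

# Programmable routing table: (source core, source neuron) -> list of (target core, target axon)
routing_table = {
    (0, 5): [(0, 12), (3, 7)],     # fan out to an axon on the same core and one on core 3
    (1, 2): [(2, 0)],
}

def route(source_core, source_neuron):
    """Deliver one event packet from a firing neuron to all of its target axons."""
    inbox = defaultdict(list)
    for target_core, target_axon in routing_table.get((source_core, source_neuron), []):
        inbox[target_core].append(target_axon)   # packet delivered to that core's axon
    return dict(inbox)

print(route(0, 5))    # {0: [12], 3: [7]}
```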
-
Patent number: 11037071
Abstract: A machine learning engine may be used to identify items in a second item category that have a visual appearance similar to the visual appearance of a first item selected from a first item category. Image data and text data associated with a large number of items from different item categories may be processed and used by an association model created by a machine learning engine. The association model may extract item attributes from the image data and text data of the first item. The machine learning engine may determine weights for parameter types, and the weights may calibrate the influence of the respective parameter types on the search results. The association model may be deployed to identify items from different item categories that have a visual appearance similar to the first item. The association model may be updated over time by the machine learning engine as data correlations evolve.
Type: Grant
Filed: March 6, 2017
Date of Patent: June 15, 2021
Assignee: Amazon Technologies, Inc.
Inventors: Karolina Tekiela, Gabriel Blanco Saldana, Rui Luo
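The weighted influence of parameter types can be sketched as a weighted sum of per-attribute similarities between the selected item and a candidate from another category. The attribute vectors, parameter types, and weights below are made-up examples, not the model described in the patent.

```python
# Sketch of weighted cross-category visual similarity per parameter type.
import numpy as np

weights = {"color": 0.5, "pattern": 0.3, "shape": 0.2}     # learned per parameter type (assumed)

def weighted_similarity(item_a, item_b):
    score = 0.0
    for ptype, w in weights.items():
        a, b = item_a[ptype], item_b[ptype]
        cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
        score += w * cosine                                 # weight calibrates this type's influence
    return score

dress   = {"color": np.array([0.9, 0.1]), "pattern": np.array([1.0, 0.0]), "shape": np.array([0.2, 0.8])}
handbag = {"color": np.array([0.8, 0.2]), "pattern": np.array([0.9, 0.1]), "shape": np.array([0.7, 0.3])}
print(f"cross-category similarity: {weighted_similarity(dress, handbag):.3f}")
```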
-
Patent number: 11037070
Abstract: A framework for diagnostic test planning is described herein. In accordance with one aspect, the framework receives data representing one or more sample patients, diagnostic tests administered to the one or more sample patients, diagnostic test results, and confirmed medical conditions associated with the administered diagnostic tests. The framework trains one or more classifiers based on the data to identify diagnostic test plans from the diagnostic tests. The one or more classifiers may then be applied to current patient data to generate a diagnostic test plan for a given patient.
Type: Grant
Filed: April 21, 2016
Date of Patent: June 15, 2021
Assignee: Siemens Healthcare GmbH
Inventors: Marcos Salganicoff, Xiang Sean Zhou, Gerardo Hermosillo Valadez, Luca Bogoni
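As a very rough stand-in for the classifier-training step, the sketch below builds a frequency table from historical sample-patient records and proposes the tests that most often confirmed each condition. A real implementation would train classifiers on richer patient data; the records and plan size here are hypothetical.

```python
# Minimal sketch: derive a per-condition diagnostic test plan from sample-patient records.
from collections import Counter, defaultdict

records = [   # (suspected condition, administered test, result confirmed the condition?)
    ("pneumonia", "chest_xray", True),
    ("pneumonia", "cbc", True),
    ("pneumonia", "mri_brain", False),
    ("pneumonia", "chest_xray", True),
]

def learn_test_plans(records, plan_size=2):
    hits = defaultdict(Counter)
    for condition, test, informative in records:
        if informative:
            hits[condition][test] += 1
    return {c: [t for t, _ in counter.most_common(plan_size)] for c, counter in hits.items()}

plans = learn_test_plans(records)
print(plans["pneumonia"])     # ['chest_xray', 'cbc'] -> proposed plan for a new patient
```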
-
Patent number: 11023827
Abstract: A machine learning device performs machine learning with respect to a servo control device including at least two feedforward calculation units among a position feedforward calculation unit configured to calculate a position feedforward term on the basis of a position command, a velocity feedforward calculation unit configured to calculate a velocity feedforward term on the basis of a position command, and a current feedforward calculation unit configured to calculate a current feedforward term on the basis of a position command. Machine learning related to the coefficients of a transfer function of one feedforward calculation unit among the at least two feedforward calculation units is performed earlier than machine learning related to the coefficients of a transfer function of the other feedforward calculation unit.
Type: Grant
Filed: February 11, 2019
Date of Patent: June 1, 2021
Assignee: FANUC CORPORATION
Inventors: Ryoutarou Tsuneki, Satoshi Ikai
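A hedged sketch of staged coefficient tuning: compute velocity and acceleration feedforward terms from the position command, learn the velocity-feedforward coefficient first, then learn a second coefficient with the first held fixed. The first-order toy plant, the acceleration term standing in for the current feedforward, and the grid search all replace the patent's servo model and machine-learning step.

```python
# Sketch of feedforward terms from a position command and staged coefficient tuning.
import numpy as np

dt, tau = 0.001, 0.02                                  # sample time and plant lag (toy values)
t = np.arange(0.0, 1.0, dt)
cmd = np.sin(2 * np.pi * t)                            # position command
vel = np.gradient(cmd, dt)                             # velocity from the position command
acc = np.gradient(vel, dt)                             # acceleration from the position command

def tracking_error(a_vel, a_acc):
    """Simulate a first-order plant driven by the command plus feedforward terms."""
    y, err = 0.0, 0.0
    for k in range(len(t)):
        u = cmd[k] + a_vel * vel[k] + a_acc * acc[k]   # position + velocity + accel feedforward
        y += (u - y) * dt / tau                        # first-order lag plant
        err += (cmd[k] - y) ** 2
    return err / len(t)

# Stage 1: learn the velocity-feedforward coefficient; Stage 2: learn the second coefficient.
a_vel_best = min(np.linspace(0.0, 0.05, 26), key=lambda a: tracking_error(a, 0.0))
a_acc_best = min(np.linspace(0.0, 1e-3, 26), key=lambda a: tracking_error(a_vel_best, a))
print(a_vel_best, a_acc_best)                          # a_vel_best lands near the plant lag tau
```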
-
Patent number: 11017319
Abstract: A method for training an obfuscation network and a surrogate network is provided. The method includes steps of: a 1st learning device (a) inputting original data of a 1st party, corresponding thereto, into the obfuscation network to generate obfuscated data, wherein the 1st party owns the original data or is an entity to whom the original data is delegated; (b) transmitting the obfuscated data and the ground truth to a 2nd learning device corresponding to a 2nd party, and instructing the 2nd learning device to (i) input the obfuscated data into the surrogate network to generate characteristic information, (ii) calculate 1st losses using the ground truth and one of the characteristic information and task specific outputs, and (iii) train the surrogate network minimizing the 1st losses, and transmit the 1st losses to the 1st learning device; and (c) training the obfuscation network minimizing the 1st losses and maximizing 2nd losses.
Type: Grant
Filed: June 23, 2020
Date of Patent: May 25, 2021
Assignee: Deeping Source Inc.
Inventor: Tae Hoon Kim
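A hedged sketch of the two-part objective: the surrogate network minimizes a task loss on obfuscated data (the 1st losses), while the obfuscation network keeps that task loss low and simultaneously maximizes a dissimilarity loss between original and obfuscated data (standing in for the 2nd losses). The loss functions, the weighting factor, and the data below are illustrative; no networks are actually trained.

```python
# Sketch of the combined objective for the obfuscation network.
import numpy as np

def first_loss(surrogate_output, ground_truth):
    """Task loss of the surrogate network on obfuscated data (cross-entropy)."""
    return float(-np.sum(ground_truth * np.log(surrogate_output + 1e-9)))

def second_loss(original, obfuscated):
    """Dissimilarity between original and obfuscated data (to be maximized)."""
    return float(np.mean((original - obfuscated) ** 2))

def obfuscator_objective(surrogate_output, ground_truth, original, obfuscated, beta=0.1):
    # Minimize the 1st losses (keep the data useful) while maximizing the 2nd losses
    # (hide the content), i.e. minimize: first_loss - beta * second_loss.
    return first_loss(surrogate_output, ground_truth) - beta * second_loss(original, obfuscated)

y_true = np.array([0.0, 1.0, 0.0])
y_pred = np.array([0.1, 0.8, 0.1])
x = np.array([0.2, 0.9, 0.4])
x_obf = np.array([0.7, 0.1, 0.5])
print(obfuscator_objective(y_pred, y_true, x, x_obf))
```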