Patents Examined by Benjamin P. Geib
  • Patent number: 11429862
    Abstract: Techniques are disclosed for training a deep neural network (DNN) for reduced computational resource requirements. A computing system includes a memory for storing a set of weights of the DNN. The DNN includes a plurality of layers. For each layer of the plurality of layers, the set of weights includes weights of the layer and a set of bit precision values includes a bit precision value of the layer. The weights of the layer are represented in the memory using values having bit precisions equal to the bit precision value of the layer. The weights of the layer are associated with inputs to neurons of the layer. Additionally, the computing system includes processing circuitry for executing a machine learning system configured to train the DNN. Training the DNN comprises optimizing the set of weights and the set of bit precision values. (See the illustrative sketch after this entry.)
    Type: Grant
    Filed: September 17, 2018
    Date of Patent: August 30, 2022
    Assignee: SRI INTERNATIONAL
    Inventors: Sek Meng Chai, Aswin Nadamuni Raghavan, Samyak Parajuli
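    Illustrative sketch: a minimal Python sketch of the per-layer bit-precision idea, not the patented training procedure. It assumes a user-supplied evaluate() function that scores a candidate set of weights on held-out data, and it replaces the patent's joint optimization of weights and bit precisions with a simple greedy search over per-layer bit widths.
```python
import numpy as np

def quantize(weights, bits):
    """Uniformly quantize a weight tensor to 2**bits representable values."""
    if bits >= 32:
        return weights                      # treat 32 bits as full precision
    lo, hi = weights.min(), weights.max()
    if hi == lo:
        return weights                      # constant tensor, nothing to quantize
    step = (hi - lo) / (2 ** bits - 1)
    return lo + np.round((weights - lo) / step) * step

def choose_bit_precisions(layers, evaluate, tolerance=0.01, min_bits=2):
    """Greedily lower each layer's bit width while the score stays within
    `tolerance` of the full-precision baseline.  `layers` maps layer name to
    its weight array; `evaluate` maps such a dict to a validation score."""
    baseline = evaluate(layers)
    bits = {name: 32 for name in layers}
    for name in layers:
        while bits[name] > min_bits:
            trial = dict(bits)
            trial[name] -= 1
            quantized = {n: quantize(w, trial[n]) for n, w in layers.items()}
            if evaluate(quantized) >= baseline - tolerance:
                bits = trial                # keep the lower precision
            else:
                break                       # this layer cannot go lower
    return bits
```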
  • Patent number: 11429851
    Abstract: Disclosed circuits and methods involve a first register configured to store a first convolutional neural network (CNN) instruction during processing of the first CNN instruction and a second register configured to store a second CNN instruction during processing of the second CNN instruction. Each of a plurality of address generation circuits is configured to generate one or more addresses in response to an input CNN instruction. Control circuitry is configured to select one of the first CNN instruction or the second CNN instruction as input to the address generation circuits.
    Type: Grant
    Filed: December 13, 2018
    Date of Patent: August 30, 2022
    Assignee: XILINX, INC.
    Inventors: Xiaoqian Zhang, Ephrem C. Wu, David Berman
  • Patent number: 11429847
    Abstract: Mechanisms including: receiving a first set of observed spike counts (FSoOSCs) for a plurality of spiking cells; determining a set of probabilities (SoPs) by: retrieving the SoPs from stored information (SI); or calculating the SoPs based on the SI, wherein the SI regards possible biological states (BSs) of a subject, wherein each of the possible BSs belongs to at least one of a plurality of time sequences (PoTSs) of BSs, wherein each of the PoTSs of BSs corresponds to a possible action of the subject, and wherein each probability in the set of probabilities indicates a likelihood of observing a possible spike count for one of the plurality of spiking cells; identifying using a hardware processor a first identified BS of the subject from the possible BSs based on the FSoOSCs and the set of probabilities; and determining an action to be performed based on the first identified BS. (See the illustrative sketch after this entry.)
    Type: Grant
    Filed: January 31, 2022
    Date of Patent: August 30, 2022
    Assignee: The Trustees of Columbia University in the City of New York
    Inventors: Sean Perkins, Mark Churchland, Karen Schroeder, John Patrick Cunningham
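    Illustrative sketch: a hedged Python sketch of the decoding step only: given observed spike counts and a stored table of per-state spike-count probabilities, pick the most likely biological state. The table contents and state names are invented for illustration; the patent's probability model and action selection are not reproduced.
```python
import numpy as np

def identify_state(observed_counts, count_probabilities):
    """Return the biological state with the highest likelihood.
    count_probabilities[state][cell][count] is the probability of seeing
    `count` spikes from spiking cell `cell` when the subject is in `state`."""
    best_state, best_loglik = None, -np.inf
    for state, per_cell in count_probabilities.items():
        loglik = sum(np.log(per_cell[cell].get(count, 1e-12))
                     for cell, count in enumerate(observed_counts))
        if loglik > best_loglik:
            best_state, best_loglik = state, loglik
    return best_state

# Invented example: two candidate states, two spiking cells, counts 0-2.
table = {
    "reach_left":  [{0: 0.7, 1: 0.2, 2: 0.1}, {0: 0.1, 1: 0.3, 2: 0.6}],
    "reach_right": [{0: 0.2, 1: 0.3, 2: 0.5}, {0: 0.6, 1: 0.3, 2: 0.1}],
}
print(identify_state([2, 0], table))        # -> "reach_right"
```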
  • Patent number: 11409996
    Abstract: Software that performs the following operations: (i) receiving descriptive information associated with a domain expert; (ii) receiving a question and a corresponding candidate answer for the question; (iii) determining a set of scoring features to be used to evaluate the candidate answer, wherein the set of scoring features includes: at least one scoring feature pertaining to the question, at least one scoring feature pertaining to the candidate answer, and at least one scoring feature pertaining to the descriptive information; (iv) receiving a score from the domain expert, wherein the score is based, at least in part, on the domain expert's evaluation of the candidate answer; (v) generating a feature vector based on the set of scoring features; (vi) cross-correlating the feature vector with the score; and (vii) clustering the domain expert with one or more other domain experts according to the cross-correlation, thereby creating a first cluster. (See the illustrative sketch after this entry.)
    Type: Grant
    Filed: November 29, 2017
    Date of Patent: August 9, 2022
    Assignee: International Business Machines Corporation
    Inventors: Corville O. Allen, Andrew R. Freed, Joseph Kozhaya, Dwi Sianto Mansjur
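    Illustrative sketch: a minimal Python sketch of the cross-correlation and clustering steps, assuming each expert has scored several candidate answers. The feature construction is simplified to a numeric matrix per expert; correlation profiles are clustered with k-means, which is an assumption since the abstract does not name a clustering algorithm.
```python
import numpy as np
from sklearn.cluster import KMeans

def correlation_profile(feature_vectors, scores):
    """Correlate each scoring feature with the expert's scores (one value per feature)."""
    X = np.asarray(feature_vectors, dtype=float)
    y = np.asarray(scores, dtype=float)
    profile = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        if X[:, j].std() > 0 and y.std() > 0:
            profile[j] = np.corrcoef(X[:, j], y)[0, 1]
    return profile

def cluster_experts(expert_data, n_clusters=2):
    """expert_data maps expert id -> (feature_vectors, scores); returns id -> cluster label."""
    ids = list(expert_data)
    profiles = np.vstack([correlation_profile(*expert_data[i]) for i in ids])
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(profiles)
    return dict(zip(ids, labels))
```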
  • Patent number: 11410020
    Abstract: In one aspect, a computerized method for using machine learning methods to model time in traffic for a vehicle on a delivery route includes the step of collecting a set of traffic feature values from a database. The method includes the step of normalizing the set of traffic feature values. The method includes the step of providing a machine learning model. The method includes the step of inputting the set of normalized traffic features into the machine learning model. The method includes the step of training the machine learning model with the set of normalized traffic features. The method includes the step of determining a target time for the vehicle on the delivery route. (See the illustrative sketch after this entry.)
    Type: Grant
    Filed: July 7, 2019
    Date of Patent: August 9, 2022
    Inventors: Geet Garg, Vittal Sai Prasad Sirigiri, Farhat Abbas Habib
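    Illustrative sketch: the claimed steps map naturally onto a small scikit-learn pipeline; the feature names, toy data, and choice of regressor below are assumptions, not the patented model.
```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import GradientBoostingRegressor

# Invented traffic features: hour of day, day of week, route length (km),
# historical congestion index.  Targets are minutes spent in traffic.
X = np.array([[8, 1, 12.0, 0.8], [14, 3, 12.0, 0.4], [18, 5, 25.0, 0.9], [2, 6, 25.0, 0.1]])
y = np.array([35.0, 18.0, 70.0, 20.0])

# Normalize the traffic feature values and train the machine learning model.
model = make_pipeline(StandardScaler(), GradientBoostingRegressor(random_state=0))
model.fit(X, y)

# Determine a target time in traffic for the vehicle on a new delivery route.
print(model.predict([[17, 4, 25.0, 0.85]]))
```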
  • Patent number: 11392832
    Abstract: Methods and computer systems improve a trained base deep neural network by structurally changing the base deep neural network to create an updated deep neural network, such that the updated deep neural network has no degradation in performance relative to the base deep neural network on the training data. The updated deep neural network is subsequently trained. Also, an asynchronous agent for use in a machine learning system comprises a second machine learning system ML2 that is to be trained to perform some machine learning task. The asynchronous agent further comprises a learning coach LC and an optional data selection machine learning system DS. The purpose of the data selection machine learning system DS is to make the second stage machine learning system ML2 more efficient in its learning (by selecting a set of training data that is smaller but sufficient) and/or more effective (by selecting a set of training data that is focused on an important task).
    Type: Grant
    Filed: March 1, 2022
    Date of Patent: July 19, 2022
    Assignee: D5AI LLC
    Inventor: James K. Baker
  • Patent number: 11373089
    Abstract: Most artificial neural networks are implemented electronically using graphics processing units to compute products of input signals and predetermined weights. The number of weights scales as the square of the number of neurons in the neural network, causing the power and bandwidth associated with retrieving and distributing the weights in an electronic architecture to scale poorly. Switching from an electronic architecture to an optical architecture for storing and distributing weights alleviates the communications bottleneck and reduces the power per transaction for much better scaling. The weights can be distributed at terabits per second at a power cost of picojoules per bit (versus gigabits per second and femtojoules per bit for electronic architectures). The bandwidth and power advantages are even better when distributing the same weights to many optical neural networks running simultaneously.
    Type: Grant
    Filed: February 6, 2019
    Date of Patent: June 28, 2022
    Assignee: Massachusetts Institute of Technology
    Inventor: Dirk Robert Englund
  • Patent number: 11354566
    Abstract: A treatment model that is a first neural network is trained to optimize a treatment loss function based on a treatment variable t using a plurality of observation vectors, by regressing t on x^(1) and z. The trained treatment model is executed to compute an estimated treatment variable value t̂_i for each observation vector. An outcome model that is a second neural network is trained to optimize an outcome loss function by regressing y on x^(2) and the estimated treatment variable t̂. The trained outcome model is executed to compute, for each observation vector, estimated values of a first unknown function and of a second unknown function evaluated at x_i^(2). An influence function value is computed for a predefined parameter of interest using those estimated function values, and a value of the parameter of interest is computed using the computed influence function value. (See the illustrative sketch after this entry.)
    Type: Grant
    Filed: October 21, 2021
    Date of Patent: June 7, 2022
    Assignee: SAS Institute Inc.
    Inventors: Xilong Chen, Douglas Allan Cairns, Jan Chvosta, David Bruce Elsheimer, Yang Zhao, Ming-Chun Chang, Gunce Eryuruk Walton, Michael Thomas Lamm
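    Illustrative sketch: a minimal Python sketch of the two regression stages only, with scikit-learn MLPs standing in for the two neural networks and synthetic data replacing real observation vectors. The influence-function step depends on the chosen parameter of interest and is deliberately left as a comment.
```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=(n, 3))                      # covariates x^(1) for the treatment model
x2 = rng.normal(size=(n, 2))                      # covariates x^(2) for the outcome model
z = rng.normal(size=(n, 1))                       # the additional regressor z
t = x1 @ [0.5, -0.2, 0.1] + z[:, 0] + rng.normal(scale=0.1, size=n)
y = 2.0 * t + x2 @ [1.0, -1.0] + rng.normal(scale=0.1, size=n)

# First neural network: regress the treatment variable t on x^(1) and z.
treatment_model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
treatment_model.fit(np.hstack([x1, z]), t)
t_hat = treatment_model.predict(np.hstack([x1, z]))   # estimated treatment values

# Second neural network: regress the outcome y on x^(2) and the estimated treatment.
outcome_model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
outcome_model.fit(np.hstack([x2, t_hat[:, None]]), y)

# The patented method then evaluates the two estimated unknown functions and an
# influence function for the parameter of interest; that step is not reproduced here.
```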
  • Patent number: 11348010
    Abstract: Methods and systems are disclosed to reduce the memory requirement of neural networks by encoding the coefficients of a neural network during training stage and decoding them during the inference. The disclosed embodiment consists of a neural network coefficient decoder (NNCD), a genetic algorithm-based encoding system (GAbES), a coefficient encoding method (CEM), and a genetic algorithm-based neural network coefficient encoding/decoding (GANCED) system. The design is consistent with both hardware and firmware and it can be implemented by bitwise operation as a hardware accelerator or easily computed by a traditional processing unit. The disclosed embodiment reduces the memory storage requirement of hardware implementation of neural networks. This reduction speeds up the processing of neural networks and reduces the dynamic power consumption of the circuit.
    Type: Grant
    Filed: May 25, 2021
    Date of Patent: May 31, 2022
    Inventor: Mohammad Solgi
  • Patent number: 11341404
    Abstract: Using training data, machine learning is performed to construct a learning model which is a non-linear function for discrimination or regression analysis (S2). A degree of contribution of each input dimension is calculated from a partial differential value of that function. Input dimensions to be invalidated are determined using a threshold defined by a Gaussian distribution function based on the degrees of contribution (S3-S5). Machine learning is once more performed using the training data with the partially-invalidated input dimensions (S6). A new value of the degree of contribution of each input dimension is determined from the obtained learning model, and the degree of contribution is updated using the old and new values (S7-S8). After the processes of Steps S5-S8 are iterated a specified number of times (S9), useful dimensions are determined based on the finally obtained degrees of contribution, and the machine-learning model is constructed (S10). (See the illustrative sketch after this entry.)
    Type: Grant
    Filed: March 14, 2018
    Date of Patent: May 24, 2022
    Assignee: SHIMADZU CORPORATION
    Inventor: Akira Noda
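    Illustrative sketch: a rough Python sketch of the iterative dimension-selection loop. Contributions are approximated with finite differences rather than exact partial derivatives, and the Gaussian-based threshold is reduced to a draw from a normal distribution fitted to the contributions, which is only an assumption about what the abstract describes.
```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def contributions(model, X, eps=1e-3):
    """Mean absolute partial derivative of the model output per input dimension,
    approximated with central differences."""
    grads = np.zeros(X.shape[1])
    for d in range(X.shape[1]):
        Xp, Xm = X.copy(), X.copy()
        Xp[:, d] += eps
        Xm[:, d] -= eps
        grads[d] = np.mean(np.abs(model.predict(Xp) - model.predict(Xm)) / (2 * eps))
    return grads

def select_dimensions(X, y, iterations=3, seed=0):
    rng = np.random.default_rng(seed)
    active = np.ones(X.shape[1], dtype=bool)
    contrib = None
    for _ in range(iterations):
        model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=seed)
        model.fit(X * active, y)                        # invalidated dimensions are zeroed
        new = contributions(model, X * active)
        contrib = new if contrib is None else 0.5 * (contrib + new)   # update old with new
        threshold = rng.normal(contrib.mean(), contrib.std())         # assumed threshold rule
        active = contrib >= threshold
        if not active.any():
            active[np.argmax(contrib)] = True           # always keep the strongest dimension
    return np.flatnonzero(active)
```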
  • Patent number: 11341415
    Abstract: A method and apparatus for compressing a neural network are provided. A specific embodiment of the method includes: acquiring a to-be-compressed trained neural network; selecting at least one layer from layers of the neural network as a to-be-compressed layer; performing the following processing steps sequentially on each of the to-be-compressed layers in descending order of the level number of the to-be-compressed layer in the neural network: quantifying parameters of the to-be-compressed layer based on a specified number, and training the quantified neural network based on a preset training sample using a machine learning method; and determining the neural network obtained after performing the processing steps on the selected at least one to-be-compressed layer as a compressed neural network, and storing the compressed neural network. This embodiment achieves efficient compression of the neural network. (See the illustrative sketch after this entry.)
    Type: Grant
    Filed: September 27, 2018
    Date of Patent: May 24, 2022
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventor: Gang Zhang
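    Illustrative sketch: a minimal Python sketch of the per-layer quantization step, using a small k-means-style codebook of the "specified number" of values; the codebook choice is an assumption, and the retraining of the quantified network on preset training samples described in the abstract is only noted in a comment.
```python
import numpy as np

def quantify_layer(weights, specified_number=16, iterations=20):
    """Replace a layer's parameters with the nearest of `specified_number` shared values."""
    flat = weights.ravel()
    centers = np.linspace(flat.min(), flat.max(), specified_number)
    codes = np.zeros(flat.size, dtype=int)
    for _ in range(iterations):
        codes = np.abs(flat[:, None] - centers[None, :]).argmin(axis=1)
        for k in range(specified_number):
            if np.any(codes == k):
                centers[k] = flat[codes == k].mean()
    # Storing `codes` plus the small codebook `centers` is what compresses the layer;
    # per the abstract, the quantified network is then retrained before moving on to
    # the next layer (deepest layers first).
    return centers[codes].reshape(weights.shape), codes, centers
```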
  • Patent number: 11341399
    Abstract: A deep neural network ("DNN") module can determine whether processing of certain values in an input buffer or a weight buffer by neurons can be skipped. For example, the DNN module might determine whether neurons can skip the processing of values in entire columns of a neuron buffer. Processing of these values might be skipped if an entire column of an input buffer or a weight buffer contains only zeros, for example. The DNN module can also determine whether processing of single values in rows of the input buffer or the weight buffer can be skipped (e.g. if the values are zero). Neurons that complete their processing early as a result of skipping operations can assist other neurons with their processing. A combination operation can be performed following the completion of processing that transfers the results of the processing operations performed by a neuron to their correct owner. (See the illustrative sketch after this entry.)
    Type: Grant
    Filed: April 13, 2018
    Date of Patent: May 24, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Amol Ashok Ambardekar, Chad Balling McBride, George Petre, Larry Marvin Wall, Kent D. Cedola, Boris Bobrov
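    Illustrative sketch: a software analogue of the column-skipping idea. Because an all-zero column of the input buffer or the weight buffer contributes nothing to any neuron's accumulation, its multiply-accumulate work can be skipped without changing the result; the buffer layout below is an assumption.
```python
import numpy as np

def accumulate_with_skipping(inputs, weights):
    """inputs and weights are (n_neurons, n_columns) buffers; returns per-neuron sums
    and the number of columns whose processing was skipped."""
    outputs = np.zeros(inputs.shape[0])
    skipped = 0
    for col in range(inputs.shape[1]):
        if not inputs[:, col].any() or not weights[:, col].any():
            skipped += 1                    # entire column is zeros: skip the work
            continue
        outputs += inputs[:, col] * weights[:, col]
    return outputs, skipped
```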
  • Patent number: 11341408
    Abstract: Memristive learning concepts for neuromorphic circuits are described. In one example case, a neuromorphic circuit includes a first oscillatory-based neuron that generates a first oscillatory signal, a diode that rectifies the first oscillatory signal, and a synapse coupled to the diode and including a long-term potentiation (LTP) memristor arranged in parallel with a long-term depression (LTD) memristor. The circuit further includes a difference amplifier coupled to the synapse that generates a difference signal based on a difference between output signals from the LTP and LTD memristors, and a second oscillatory-based neuron electrically coupled to the difference amplifier that generates a second oscillatory signal based on the difference signal. The circuit also includes a feedback circuit that provides a feedback signal to the LTP and LTD memristors based on a difference or error between a target signal and the second oscillatory signal.
    Type: Grant
    Filed: October 27, 2017
    Date of Patent: May 24, 2022
    Assignee: University of Florida Research Foundation, Inc.
    Inventors: Jack D. Kendall, Juan C. Nino
  • Patent number: 11334801
    Abstract: A device for obtaining a local optimal AI model may include an artificial intelligence (AI) chip and a processing device configured to receive a first initial AI model from a host device. The device may load the initial AI model into the AI chip to determine a performance value of the AI model based on a dataset, and determine a probability that a current AI model should be replaced by the initial AI model. The device may determine, based on the probability, whether to replace the current AI model with the initial AI model. If it is determined that the current AI model should be replaced, the device may replace the current AI model with the initial AI model. The device may repeat the above processes and obtain a final current AI model. The device may transmit the final current AI model to the host device. (See the illustrative sketch after this entry.)
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: May 17, 2022
    Assignee: Gyrfalcon Technology Inc.
    Inventors: Yequn Zhang, Yongxiong Ren, Baohua Sun, Lin Yang, Qi Dong
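    Illustrative sketch: the abstract specifies only that a replacement probability is computed and used to decide whether a candidate model replaces the current one. The simulated-annealing-style acceptance rule below is purely an illustrative stand-in for that unspecified probability.
```python
import math
import random

def maybe_replace(current_perf, candidate_perf, temperature=0.05):
    """Accept a better candidate always; accept a worse one with a probability that
    shrinks as the performance gap grows (an assumed rule, not the patented one)."""
    if candidate_perf >= current_perf:
        return True
    return random.random() < math.exp((candidate_perf - current_perf) / temperature)

def search(candidate_models, evaluate):
    """Repeat the load/evaluate/replace cycle and return the final current model,
    which would then be transmitted back to the host device."""
    current = candidate_models[0]
    for candidate in candidate_models[1:]:
        if maybe_replace(evaluate(current), evaluate(candidate)):
            current = candidate
    return current
```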
  • Patent number: 11334815
    Abstract: Disclosed are various embodiments for implementing computational tasks in a cloud environment in one or more operating system level virtualized containers. A parameter file can specify different parameters including hardware parameters, library parameters, user code parameters, and job parameters (e.g., sets of hyperparameters). The parameter file can be converted via a mapping and implemented in a cloud-based container platform. (See the illustrative sketch after this entry.)
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: May 17, 2022
    Assignee: Snap Inc.
    Inventors: Eric Buehl, Jordan Hurwitz, Sergey Tulyakov, Shubham Vij
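    Illustrative sketch: a hypothetical parameter file and mapping; the real file format, field names, and container platform API are not given in the abstract, so everything below is an assumption about how such a mapping could look.
```python
parameters = {
    "hardware": {"gpus": 1, "memory_gb": 16},
    "libraries": {"framework": "tensorflow", "version": "2.15"},
    "user_code": {"entrypoint": "train.py"},
    "job": {"hyperparameters": [{"lr": 0.01}, {"lr": 0.001}]},   # one container per set
}

def to_container_specs(params):
    """Map the parameter file into one container spec per hyperparameter set."""
    image = f'{params["libraries"]["framework"]}:{params["libraries"]["version"]}'
    for hp in params["job"]["hyperparameters"]:
        yield {
            "image": image,
            "command": ["python", params["user_code"]["entrypoint"]]
                       + [f"--{k}={v}" for k, v in hp.items()],
            "resources": {"gpu": params["hardware"]["gpus"],
                          "memory": f'{params["hardware"]["memory_gb"]}Gi'},
        }

for spec in to_container_specs(parameters):
    print(spec)
```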
  • Patent number: 11328217
    Abstract: A noise reduction and smart ticketing application for social media-based communication systems may identify social media-based communications from users who are attempting to engage with a brand or entity on a social media platform as actionable, and distinguish other communications as noise. The noise reduction and smart ticketing system may use machine learning to determine which social media communications are actionable for a given company or other organization, and generates tickets for actionable communications. Actionable communications may include, but are not limited to, technical support issues, inquiries about a product release date, grievances, incidents, suggestions to improve service, critiques of company policies, etc. Non-actionable communications (i.e., "noise") may include, but are not limited to, suggestions to other users, promotions, coupons, offers, marketing campaigns, affiliate marketing, statements that a user is attending an event, etc. (See the illustrative sketch after this entry.)
    Type: Grant
    Filed: September 28, 2017
    Date of Patent: May 10, 2022
    Assignee: Freshworks, Inc.
    Inventors: Anuj Gupta, Saurabh Arora, Satyam Saxena, Navaneethan Santhanam
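    Illustrative sketch: a minimal text classifier separating actionable communications from noise and opening a ticket for the former. The tiny training set and the TF-IDF/logistic-regression pipeline are illustrative choices; the abstract does not name a specific model.
```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Invented examples: 1 = actionable (generate a ticket), 0 = noise.
messages = [
    "my order arrived broken, please help",          # grievance -> actionable
    "when is the new model being released?",         # product inquiry -> actionable
    "the app crashes every time I log in",           # technical support -> actionable
    "use code SAVE20 for 20% off this weekend",      # promotion -> noise
    "excited to attend the conference next week",    # event statement -> noise
    "check out my affiliate link for deals",         # affiliate marketing -> noise
]
labels = [1, 1, 1, 0, 0, 0]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(messages, labels)

incoming = "your checkout page keeps rejecting my card"
if classifier.predict([incoming])[0] == 1:
    print("create ticket:", incoming)                # smart ticketing step
else:
    print("filtered as noise:", incoming)
```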
  • Patent number: 11308388
    Abstract: A circuit comprises a series of calculating blocks that can each implement a group of neurons; a transformation block that is linked to the calculating blocks by a communication means and that can be linked at the input of the circuit to an external data bus, the transformation block transforming the format of the input data and transmitting the data to said calculating blocks by means of K independent communication channels, an input data word being cut up into sub-words such that the sub-words are transmitted over multiple successive communication cycles, one sub-word being transmitted per communication cycle over a communication channel dedicated to the word such that the K channels can transmit K words in parallel. (See the illustrative sketch after this entry.)
    Type: Grant
    Filed: December 7, 2016
    Date of Patent: April 19, 2022
    Assignee: COMMISSARIAT A L'ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES
    Inventors: Jean-Marc Philippe, Alexandre Carbon, Marc Duranton
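    Illustrative sketch: a behavioral model (in Python rather than hardware) of cutting each input word into sub-words and sending one sub-word per communication cycle on the channel dedicated to that word, so that K channels carry K words in parallel. Word and sub-word widths are assumptions.
```python
def split_into_subwords(word, word_bits=32, subword_bits=8):
    """Cut one input word into sub-words, most significant sub-word first."""
    cycles = word_bits // subword_bits
    mask = (1 << subword_bits) - 1
    return [(word >> (subword_bits * (cycles - 1 - c))) & mask for c in range(cycles)]

def schedule(words):
    """Channel k carries the sub-words of word k; one sub-word per cycle per channel."""
    per_channel = [split_into_subwords(w) for w in words]
    return [[per_channel[k][c] for k in range(len(words))]
            for c in range(len(per_channel[0]))]

# Example: K = 2 words of 32 bits, each sent as four 8-bit sub-words over four cycles.
for cycle, subwords in enumerate(schedule([0x11223344, 0xAABBCCDD])):
    print(f"cycle {cycle}: {[hex(s) for s in subwords]}")
```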
  • Patent number: 11308382
    Abstract: Neuromorphic synapse apparatus is provided comprising a synaptic device and a control signal generator. The synaptic device comprises a memory element, disposed between first and second terminals, for conducting a signal between those terminals with an efficacy which corresponds to a synaptic weight in a read mode of operation, and a third terminal operatively coupled to the memory element. The memory element has a non-volatile characteristic, which is programmable to vary the efficacy in response to programming signals applied via the first and second terminals in a write mode of operation, and a volatile characteristic which is controllable to vary the efficacy in response to control signals applied to the third terminal. The control signal generator is responsive to input signals and is adapted to apply control signals to the third terminal in the read and write modes, in dependence on the input signals, to implement predetermined synaptic dynamics.
    Type: Grant
    Filed: August 25, 2017
    Date of Patent: April 19, 2022
    Assignee: International Business Machines Corporation
    Inventors: Wabe W. Koelmans, Timoleon Moraitis, Abu Sebastian
  • Patent number: 11295210
    Abstract: Methods and computer systems improve a trained base deep neural network by structurally changing the base deep neural network to create an updated deep neural network, such that the updated deep neural network has no degradation in performance relative to the base deep neural network on the training data. The updated deep neural network is subsequently trained. Also, an asynchronous agent for use in a machine learning system comprises a second machine learning system ML2 that is to be trained to perform some machine learning task. The asynchronous agent further comprises a learning coach LC and an optional data selection machine learning system DS. The purpose of the data selection machine learning system DS is to make the second stage machine learning system ML2 more efficient in its learning (by selecting a set of training data that is smaller but sufficient) and/or more effective (by selecting a set of training data that is focused on an important task).
    Type: Grant
    Filed: May 31, 2018
    Date of Patent: April 5, 2022
    Assignee: D5AI LLC
    Inventor: James K. Baker
  • Patent number: 11281967
    Abstract: An integrated circuit is configurable to generate a notification message when an indicator of an event used to synchronize the execution of different functional blocks of the integrated circuit changes status. The indicator of the event is cleared when an operation is triggered and is set when the operation completes. The notification message includes a timestamp indicating the time when the indicator of the event changes status. The notification message is used to determine the execution timeline of a set of instructions executed by the integrated circuit and to identify bottlenecks in the set of instructions or the integrated circuit. (See the illustrative sketch after this entry.)
    Type: Grant
    Filed: December 11, 2018
    Date of Patent: March 22, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Thomas A. Volpe, Nafea Bshara
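    Illustrative sketch: a software model of how timestamped notification messages could be paired to reconstruct an execution timeline; the message fields and the pairing logic are assumptions, not the patented message format.
```python
import time
from dataclasses import dataclass, field

@dataclass
class EventNotification:
    event_id: int
    status: str                                  # "cleared" when the operation is triggered,
                                                 # "set" when the operation completes
    timestamp_ns: int = field(default_factory=time.monotonic_ns)

def operation_durations(notifications):
    """Pair cleared/set notifications per event to measure each operation's duration,
    which is how bottlenecks in the instruction stream could be identified."""
    started, durations = {}, {}
    for n in sorted(notifications, key=lambda n: n.timestamp_ns):
        if n.status == "cleared":
            started[n.event_id] = n.timestamp_ns
        elif n.status == "set" and n.event_id in started:
            durations[n.event_id] = n.timestamp_ns - started.pop(n.event_id)
    return durations
```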