Patents Assigned to Perceive Corporation
  • Patent number: 11586902
    Abstract: Some embodiments provide a method for training a machine-trained (MT) network that processes input data using network parameters. The method maps input instances to output values by propagating the instances through the network. The input instances include instances for each of multiple categories. For a particular instance selected as an anchor instance, the method identifies each instance in a different category as a negative instance. The method calculates, for each negative instance of the anchor, a surprise function that probabilistically measures the surprise of finding an output value for an instance in the same category as the anchor that is a greater distance from the output value for the anchor instance than the output value for the negative instance. The method calculates a loss function that emphasizes the maximum surprise calculated for the anchor. The method trains the network parameters using the calculated loss function value to minimize the maximum surprise.
    Type: Grant
    Filed: March 14, 2018
    Date of Patent: February 21, 2023
    Assignee: PERCEIVE CORPORATION
    Inventors: Eric A. Sather, Steven L. Teig, Andrew C. Mihal
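As an illustration of the idea in this abstract (not the patented method itself), a minimal sketch of a surprise-style loss: `surprise` here is an assumed empirical form, measuring the fraction of same-category instances that land farther from the anchor than a given negative does, and the loss takes a hard maximum over negatives rather than whatever smooth emphasis the patent may use.

```python
import numpy as np

def surprise(anchor, positives, negative):
    """Empirical probability that a same-category (positive) instance lies
    farther from the anchor's output value than the negative instance does.
    This concrete form is a hypothetical stand-in for the patent's
    probabilistic surprise function."""
    d_neg = np.linalg.norm(anchor - negative)
    d_pos = np.linalg.norm(positives - anchor, axis=1)
    return np.mean(d_pos > d_neg)

def max_surprise_loss(anchor, positives, negatives):
    """Loss emphasizing the maximum surprise over all negatives of the anchor."""
    return max(surprise(anchor, positives, n) for n in negatives)
```

A well-separated embedding (all positives closer than every negative) yields zero loss; training would push the parameters toward that regime.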
  • Patent number: 11568227
    Abstract: Some embodiments provide a neural network inference circuit for executing a neural network with multiple layers. The neural network inference circuit includes a set of processing circuits for executing the layers of the neural network, a set of memories for storing data used by the set of processing circuits to execute the neural network layers, and a read controller for retrieving the data from the set of memories and storing the data in a cache for use by the set of processing circuits. The read controller retrieves the data in one of (i) a first mode for retrieving the data from sequential memory locations within the set of memories to store in the cache and (ii) a second mode for retrieving the data from non-sequential memory locations within the set of memories to store in the cache.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: January 31, 2023
    Assignee: PERCEIVE CORPORATION
    Inventors: Jung Ko, Kenneth Duong, Steven L. Teig
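The two read-controller modes described above can be sketched in software terms; the function below is a hypothetical model, not the circuit, with "sequential" streaming a contiguous block into the cache and "gather" collecting from an explicit list of non-sequential addresses.

```python
def fill_cache(memory, mode, start=0, count=0, addresses=None):
    """Hypothetical read-controller model: fill a cache either from
    sequential memory locations or from non-sequential ones."""
    if mode == "sequential":
        # First mode: contiguous block starting at `start`.
        return [memory[a] for a in range(start, start + count)]
    if mode == "gather":
        # Second mode: arbitrary (non-sequential) address list.
        return [memory[a] for a in addresses]
    raise ValueError(f"unknown mode: {mode}")
```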
  • Patent number: 11537870
    Abstract: Some embodiments provide a method for training a machine-trained (MT) network. The method propagates multiple inputs through the MT network to generate an output for each of the inputs. Each of the inputs is associated with an expected output, the MT network uses multiple network parameters to process the inputs, and each network parameter of a set of the network parameters is defined during training as a probability distribution across a discrete set of possible values for the network parameter. The method calculates a value of a loss function for the MT network that includes (i) a first term that measures network error based on the expected outputs compared to the generated outputs and (ii) a second term that penalizes divergence of the probability distribution for each network parameter in the set of network parameters from a predefined probability distribution for the network parameter.
    Type: Grant
    Filed: March 14, 2018
    Date of Patent: December 27, 2022
    Assignee: PERCEIVE CORPORATION
    Inventors: Steven L. Teig, Eric A. Sather
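The two-term loss structure can be sketched numerically. The use of KL divergence for the second term is an assumption here (the abstract says only "divergence"), and the distributions are assumed strictly positive.

```python
import numpy as np

def kl(q, p):
    """KL divergence between discrete distributions q and p.
    Assumes strictly positive probabilities."""
    q = np.asarray(q, dtype=float)
    p = np.asarray(p, dtype=float)
    return float(np.sum(q * np.log(q / p)))

def regularized_loss(network_error, weight_dists, prior):
    """Two-term loss as in the abstract: a network-error term plus a
    penalty for each parameter's distribution diverging from a
    predefined prior distribution."""
    return network_error + sum(kl(q, prior) for q in weight_dists)
```

When every parameter's distribution matches the prior, the penalty vanishes and the loss reduces to the error term alone.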
  • Patent number: 11531868
    Abstract: Some embodiments provide a method for a neural network inference circuit that executes a neural network including computation nodes at multiple layers. Each of a set of the nodes includes a dot product of input values and weight values. The method reads multiple input values for a particular layer from a memory location of the circuit. A first set of the input values are used for a first dot product for a first node of the layer. The method stores the input values in a cache. The method computes the first dot product for the first node using the first set of input values. Without requiring a read of any input values from any additional memory locations, the method computes a second dot product for a second node of the particular layer using a subset of the first set of input values and a second set of the input values.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: December 20, 2022
    Assignee: PERCEIVE CORPORATION
    Inventors: Kenneth Duong, Jung Ko, Steven L. Teig
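The input-reuse idea above (one memory read feeding two overlapping dot products, as in a sliding convolution window) can be sketched as follows; the stride-1 window layout is an illustrative assumption.

```python
import numpy as np

def two_node_dot_products(cached_inputs, weights, stride=1):
    """From a single cached read of input values, compute dot products for
    two adjacent nodes whose input windows overlap, without any further
    memory reads (a software model of the abstract's reuse scheme)."""
    k = len(weights)
    first = np.dot(cached_inputs[:k], weights)               # first node
    second = np.dot(cached_inputs[stride:stride + k], weights)  # reuses the cache
    return first, second
```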
  • Patent number: 11531879
    Abstract: Some embodiments provide a method for training a machine-trained (MT) network. The method uses a first set of training inputs to train parameters of the MT network. The method uses a set of validation inputs to measure error for the MT network as trained by the first set of training inputs. The method adds at least a subset of the validation inputs to the first set of training inputs to create a second set of training inputs. The method uses the second set of training inputs to train the parameters of the MT network. The error measurement is used to modify the training with the second set of training inputs.
    Type: Grant
    Filed: June 26, 2019
    Date of Patent: December 20, 2022
    Assignee: PERCEIVE CORPORATION
    Inventors: Steven L. Teig, Eric A. Sather
  • Patent number: 11531727
    Abstract: Some embodiments provide a method for a circuit that executes a neural network including multiple nodes. The method loads a set of weight values for a node into a set of weight value buffers, a first set of bits of each input value of a set of input values for the node into a first set of input value buffers, and a second set of bits of each of the input values into a second set of input value buffers. The method computes a first dot product of the weight values and the first set of bits of each input value and a second dot product of the weight values and the second set of bits of each input value. The method shifts the second dot product by a particular number of bits and adds the first dot product with the bit-shifted second dot product to compute a dot product for the node.
    Type: Grant
    Filed: December 6, 2018
    Date of Patent: December 20, 2022
    Assignee: PERCEIVE CORPORATION
    Inventors: Jung Ko, Kenneth Duong, Steven L. Teig
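The bit-split dot product described above is straightforward to model: split each input into low and high bit groups, compute two narrower dot products, then shift and add. The 4-bit split below is an illustrative choice.

```python
import numpy as np

def split_dot_product(weights, inputs, low_bits=4):
    """Compute a dot product of wide inputs as two narrower dot products:
    one over the low bits and one over the high bits of each input,
    recombined by shifting the high-bit result (as in the abstract)."""
    inputs = np.asarray(inputs, dtype=np.int64)
    low = inputs & ((1 << low_bits) - 1)   # first set of bits
    high = inputs >> low_bits              # second set of bits
    dp_low = int(np.dot(weights, low))
    dp_high = int(np.dot(weights, high))
    return dp_low + (dp_high << low_bits)  # shift and add
```

The result is bit-exact with the full-width dot product, which is what lets the hardware use narrower multiplier circuits.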
  • Patent number: 11501138
    Abstract: Some embodiments provide a neural network inference circuit (NNIC) for executing a neural network that includes multiple computation nodes at multiple layers. The NNIC includes a set of clusters of core computation circuits and a channel, connecting the core computation circuits, that includes separate segments corresponding to each of the clusters. The NNIC includes a fabric controller circuit, a cluster controller circuit for each of the clusters, and a core controller circuit for each of the core computation circuits. The fabric controller circuit receives high-level neural network instructions from a microprocessor and parses the high-level neural network instructions.
    Type: Grant
    Filed: December 6, 2018
    Date of Patent: November 15, 2022
    Assignee: PERCEIVE CORPORATION
    Inventors: Kenneth Duong, Jung Ko, Steven L. Teig
  • Patent number: 11494657
    Abstract: Some embodiments of the invention provide a novel method for training a quantized machine-trained network. Some embodiments provide a method of scaling a feature map of a pre-trained floating-point neural network in order to match the range of output values provided by quantized activations in a quantized neural network. A quantization function is modified, in some embodiments, to be differentiable to fix the mismatch between the loss function computed in forward propagation and the loss gradient used in backward propagation. Variational information bottleneck, in some embodiments, is incorporated to train the network to be insensitive to multiplicative noise applied to each channel. In some embodiments, channels that finish training with large noise, for example, exceeding 100%, are pruned.
    Type: Grant
    Filed: October 8, 2019
    Date of Patent: November 8, 2022
    Assignee: PERCEIVE CORPORATION
    Inventors: Eric A. Sather, Steven L. Teig
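The feature-map scaling step mentioned in this abstract can be sketched as below. The max-absolute-value scaling rule and the uniform 8-bit quantizer are assumptions for illustration; the patent's differentiable quantization surrogate is not reproduced here.

```python
import numpy as np

def scale_to_quantized_range(feature_map, num_bits=8):
    """Rescale a floating-point feature map so its range matches the output
    range of a quantized activation (hypothetical max-abs scaling rule)."""
    qmax = (1 << num_bits) - 1
    scale = qmax / np.max(np.abs(feature_map))
    return feature_map * scale, scale

def quantize(x, num_bits=8):
    """Uniform quantizer to [0, 2^num_bits - 1]. A differentiable surrogate
    would replace the hard round during backpropagation to avoid the
    forward/backward mismatch the abstract describes."""
    qmax = (1 << num_bits) - 1
    return np.clip(np.round(x), 0, qmax)
```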
  • Patent number: 11481612
    Abstract: Some embodiments provide a neural network inference circuit (NNIC) for executing a neural network that includes multiple computation nodes at multiple layers. Each of a set of the computation nodes includes a dot product of input values and weight values. The NNIC includes dot product cores, each of which includes (i) partial dot product computation circuits to compute dot products between input values and weight values and (ii) memories to store the weight values and input values for a layer of the NN. The input values for a particular layer of the NN are stored in the memories of multiple cores. A starting memory location in a first core for the input values of the layer stored in the first core is the same as a starting memory location for the input values in each of the other cores that store the input values for the layer.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: October 25, 2022
    Assignee: PERCEIVE CORPORATION
    Inventors: Kenneth Duong, Jung Ko, Steven L. Teig
  • Patent number: 11475310
    Abstract: Some embodiments provide a method for configuring a machine-trained (MT) network that includes multiple configurable weights to train. The method propagates a set of inputs through the MT network to generate a set of output probability distributions. Each input has a corresponding expected output probability distribution. The method calculates a value of a continuously-differentiable loss function that includes a term approximating an extremum function of the difference between the expected output probability distributions and generated set of output probability distributions. The method trains the weights by back-propagating the calculated value of the continuously-differentiable loss function.
    Type: Grant
    Filed: November 28, 2017
    Date of Patent: October 18, 2022
    Assignee: PERCEIVE CORPORATION
    Inventors: Steven L. Teig, Andrew C. Mihal
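A standard way to build a continuously-differentiable approximation of an extremum, as the loss term above requires, is log-sum-exp; the sketch below uses that form as an assumption (the patent's exact term may differ).

```python
import numpy as np

def smooth_max(values, beta=10.0):
    """Continuously differentiable approximation of max(values) via
    log-sum-exp; larger beta tightens the approximation. The max is
    subtracted first for numerical stability."""
    values = np.asarray(values, dtype=float)
    m = values.max()
    return float(m + np.log(np.sum(np.exp(beta * (values - m)))) / beta)
```

Unlike a hard max, this function has nonzero gradient with respect to every input, so back-propagation (as in the abstract's final step) distributes updates across all terms rather than only the largest one.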
  • Patent number: 11468145
    Abstract: Some embodiments provide a neural network inference circuit (NNIC) for executing a NN that includes multiple computation nodes at multiple layers. Each of a set of the computation nodes includes a dot product of input values and weight values. The NNIC includes a set of dot product cores, each of which includes (i) partial dot product computation circuits to compute dot products between input values and weight values and (ii) memories to store the sets of weight values and sets of input values for a layer of the neural network. The input values for a particular layer are arranged in a plurality of two-dimensional grids. A particular core stores all of the input values of a subset of the two-dimensional grids. Input values having a same set of coordinates in each respective grid of the subset of the two-dimensional grids are stored sequentially within the memories of the particular core.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: October 11, 2022
    Assignee: PERCEIVE CORPORATION
    Inventors: Kenneth Duong, Jung Ko, Steven L. Teig
  • Patent number: 11429861
    Abstract: Some embodiments provide an electronic device that includes a set of processing units and a set of machine-readable media. The set of machine-readable media stores sets of instructions for applying a network of computation nodes to an input received by the device. The set of machine-readable media stores at least two sets of machine-trained parameters for configuring the network for different types of inputs. A first of the sets of parameters is used for applying the network to a first type of input and a second of the sets of parameters is used for applying the network to a second type of input.
    Type: Grant
    Filed: November 16, 2017
    Date of Patent: August 30, 2022
    Assignee: PERCEIVE CORPORATION
    Inventors: Steven L. Teig, Eric A. Sather
  • Patent number: 11403530
    Abstract: Some embodiments provide a method for compiling a neural network program for a neural network inference circuit. The method receives a neural network definition including multiple weight values arranged as multiple filters. For each filter, each of the weight values is one of a set of weight values associated with the filter. At least one of the filters has more than three different associated weight values. The method generates program instructions for instructing the neural network inference circuit to execute the neural network. The neural network inference circuit includes circuitry for executing neural networks with a maximum of three different weight values per filter.
    Type: Grant
    Filed: December 6, 2018
    Date of Patent: August 2, 2022
    Assignee: PERCEIVE CORPORATION
    Inventors: Jung Ko, Kenneth Duong, Steven L. Teig
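One plausible reading of the compilation step above is decomposing a many-valued filter into a sum of filters that each use at most three values (e.g., {-s, 0, +s}). The greedy residual-matching scheme below is purely a hypothetical sketch of that idea, not the patented compiler.

```python
import numpy as np

def decompose_filter(weights, scales):
    """Hypothetical decomposition: approximate a filter with many distinct
    weight values as a sum of ternary filters (values in {-s, 0, +s}),
    greedily matching the remaining residual at each scale."""
    residual = np.asarray(weights, dtype=float).copy()
    ternary_filters = []
    for s in scales:
        # Quantize the residual to {-1, 0, +1} at threshold s/2.
        t = np.where(residual > s / 2, 1, np.where(residual < -s / 2, -1, 0))
        ternary_filters.append(s * t)
        residual = residual - s * t
    return ternary_filters, residual
```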
  • Patent number: 11373325
    Abstract: Some embodiments of the invention provide a novel method for training a multi-layer node network to reliably determine depth based on a plurality of input sources (e.g., cameras, microphones, etc.) that may be arranged with deviations from an ideal alignment or placement. Some embodiments train the multi-layer network using a set of inputs generated with random misalignments incorporated into the training set. In some embodiments, the training set includes (i) a synthetically generated training set based on a three-dimensional ground truth model as it would be sensed by a sensor array from different positions and with different deviations from ideal alignment and placement, and/or (ii) a training set generated by a set of actual sensor arrays augmented with an additional sensor (e.g., additional camera or time of flight measurement device such as lidar) to collect ground truth data.
    Type: Grant
    Filed: October 2, 2019
    Date of Patent: June 28, 2022
    Assignee: PERCEIVE CORPORATION
    Inventors: Andrew Mihal, Steven Teig
  • Patent number: 11361213
    Abstract: Some embodiments provide a neural network inference circuit for implementing a neural network that includes multiple computation nodes at multiple layers. Each of a set of the computation nodes includes (i) a linear function that includes a dot product of input values and weight values and (ii) a non-linear activation function. The neural network inference circuit includes (i) a set of dot product circuits to compute dot products for the plurality of computation nodes and (ii) at least one computation node post-processing circuit to (i) receive a dot product for a computation node computed by the set of dot product circuits, (ii) compute a result of the linear function for the computation node based on the dot product, and (iii) use a lookup table to compute the non-linear activation function of the computation node from the result of the linear function to determine an output of the computation node.
    Type: Grant
    Filed: December 6, 2018
    Date of Patent: June 14, 2022
    Assignee: PERCEIVE CORPORATION
    Inventors: Kenneth Duong, Jung Ko, Steven L. Teig
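The lookup-table activation in this abstract can be modeled in a few lines. The table size, input range, and choice of sigmoid below are illustrative assumptions, not the patented circuit's parameters.

```python
import numpy as np

# Hypothetical 256-entry lookup table for a sigmoid activation,
# covering linear-function results in [-8, 8].
TABLE_SIZE = 256
LO, HI = -8.0, 8.0
_xs = np.linspace(LO, HI, TABLE_SIZE)
SIGMOID_LUT = 1.0 / (1.0 + np.exp(-_xs))

def lut_activation(linear_result):
    """Map a linear-function result to an activation output by table lookup
    instead of evaluating the non-linearity directly."""
    idx = int(np.clip((linear_result - LO) / (HI - LO) * (TABLE_SIZE - 1),
                      0, TABLE_SIZE - 1))
    return SIGMOID_LUT[idx]
```

The appeal in hardware is that one small ROM replaces per-node evaluation of an expensive non-linear function, at the cost of quantization error bounded by the table resolution.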
  • Patent number: 11348006
    Abstract: Some embodiments of the invention provide a novel method for training a multi-layer node network that mitigates against overfitting the adjustable parameters of the network for a particular problem. During training, the method of some embodiments adjusts the modifiable parameters of the network by iteratively identifying different interior-node, influence-attenuating masks that effectively specify different sampled networks of the multi-layer node network. An interior-node, influence-attenuating mask specifies attenuation parameters that are applied (1) to the outputs of the interior nodes of the network in some embodiments, (2) to the inputs of the interior nodes of the network in other embodiments, or (3) to the outputs and inputs of the interior nodes in still other embodiments. In each mask, the attenuation parameters can be any one of several values (e.g., three or more values) within a range of values (e.g., between 0 and 1).
    Type: Grant
    Filed: March 8, 2020
    Date of Patent: May 31, 2022
    Assignee: PERCEIVE CORPORATION
    Inventor: Steven L. Teig
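The multi-valued attenuation masks described above generalize binary dropout; a minimal sketch, assuming a uniform choice over a small illustrative value set {0, 0.5, 1}:

```python
import numpy as np

def sample_mask(shape, values=(0.0, 0.5, 1.0), rng=None):
    """Sample an interior-node, influence-attenuating mask whose entries
    take one of several values within [0, 1] (a generalization of binary
    dropout's {0, 1} masks). The value set here is an assumption."""
    rng = rng or np.random.default_rng(0)
    return rng.choice(values, size=shape)

def attenuate(node_outputs, mask):
    """Apply the mask to interior-node outputs during a training pass;
    per the abstract it could equally be applied to inputs, or both."""
    return node_outputs * mask
```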
  • Patent number: 11347297
    Abstract: For a neural network inference circuit that executes a neural network including multiple computation nodes at multiple layers for which data is stored in a plurality of memory banks, some embodiments provide a method for dynamically putting memory banks into a sleep mode of operation to conserve power. The method tracks the accesses to individual memory banks and, if a certain number of clock cycles elapse with no access to a particular memory bank, sends a signal to the memory bank indicating that it should operate in a sleep mode. Circuit components involved in dynamic memory sleep, in some embodiments, include a core RAM pipeline, a core RAM sleep controller, a set of core RAM bank select decoders, and a set of core RAM memory bank wrappers.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: May 31, 2022
    Assignee: PERCEIVE CORPORATION
    Inventors: Jung Ko, Kenneth Duong, Steven L. Teig
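The idle-cycle tracking described above can be modeled with a per-bank counter; the class below is a behavioral sketch of that bookkeeping, not the core RAM sleep controller circuitry itself.

```python
class BankSleepController:
    """Hypothetical sleep controller: after `threshold` clock cycles with no
    access to a memory bank, signal that bank to enter sleep mode."""

    def __init__(self, num_banks, threshold):
        self.threshold = threshold
        self.idle = [0] * num_banks
        self.asleep = [False] * num_banks

    def tick(self, accessed_banks):
        """Advance one clock cycle given the set of banks accessed this cycle."""
        for b in range(len(self.idle)):
            if b in accessed_banks:
                self.idle[b] = 0
                self.asleep[b] = False   # an access wakes the bank
            else:
                self.idle[b] += 1
                if self.idle[b] >= self.threshold:
                    self.asleep[b] = True  # idle too long: go to sleep
```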
  • Patent number: 11341397
    Abstract: Some embodiments provide a method for a neural network inference circuit (NNIC) that implements a neural network including multiple computation nodes at multiple layers. Each computation node includes a dot product of input values and weight values and a set of post-processing operations. The method retrieves a set of weight values and a set of input values for a computation node from a set of memories of the NNIC. The method computes a dot product of the retrieved sets of weight values and input values. The method performs the post-processing operations for the computation node on a result of the dot product computation to compute an output value for the computation node. The method stores the output value in the set of memories. No intermediate results of the dot product or the set of post-processing operations are stored in any RAM of the NNIC during the computation.
    Type: Grant
    Filed: December 6, 2018
    Date of Patent: May 24, 2022
    Assignee: PERCEIVE CORPORATION
    Inventors: Kenneth Duong, Jung Ko, Steven L. Teig
  • Patent number: 11295200
    Abstract: Some embodiments provide a method for a neural network inference circuit that executes a neural network including multiple nodes. The method loads a first set of weight values into a first set of weight value buffers, a second set of weight values into a second set of weight value buffers, a first set of input values into a first set of input value buffers, and a second set of input values into a second set of input value buffers. In a first clock cycle, the method computes a first dot product of the first set of weight values and the first set of input values. In a second clock cycle, the method computes a second dot product of the second set of weight values and the second set of input values. The method adds the first and second dot products to compute a dot product for the node.
    Type: Grant
    Filed: December 6, 2018
    Date of Patent: April 5, 2022
    Assignee: PERCEIVE CORPORATION
    Inventors: Jung Ko, Kenneth Duong, Steven L. Teig
  • Patent number: 11250840
    Abstract: Some embodiments provide a method of training a MT network to detect a wake expression that directs a digital assistant to perform an operation based on a request that follows the expression. The MT network includes processing nodes with configurable parameters. The method iteratively selects different sets of input values with known sets of output values. Each of a first group of input value sets includes a vocative use of the expression. Each of a second group of input value sets includes a non-vocative use of the expression. For each set of input values, the method uses the MT network to process the input set to produce an output value set and computes an error value that expresses an error between the produced output value set and the known output value set. Based on the error values, the method adjusts configurable parameters of the processing nodes of the MT network.
    Type: Grant
    Filed: April 5, 2019
    Date of Patent: February 15, 2022
    Assignee: PERCEIVE CORPORATION
    Inventor: Steven L. Teig