Patents by Inventor Steven L. Teig

Steven L. Teig has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250148279
    Abstract: Some embodiments of the invention provide a method for configuring a network with multiple nodes. Each node generates an output value based on received input values and a set of weights that are previously trained to each have an initial value. For each weight, the method calculates a factor that represents a loss of accuracy to the network due to changing the weight from its initial value to a different value in a set of allowed values for the weight. Based on the factors, the method identifies a subset of the weights that have factors with values below a threshold. The method changes the value of each weight from its initial value to one of the values in its set of allowed values. The values of the identified subset are all changed to zero. The method trains the weights beginning with the changed values for each weight.
    Type: Application
    Filed: September 4, 2024
    Publication date: May 8, 2025
    Inventors: Alexandru F. Drimbarean, Steven L. Teig
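The flow in this abstract can be illustrated with a short Python sketch. All names here are hypothetical, and a first-order gradient sensitivity stands in for the patent's accuracy-loss factor, which the abstract does not define concretely:

```python
import numpy as np

def quantize_and_prune(weights, grads, allowed_values, threshold):
    """Snap each weight to its nearest allowed value, forcing weights
    whose accuracy-loss factor falls below the threshold to zero."""
    # First-order stand-in for the accuracy-loss factor: how much the
    # loss is expected to change if this weight moves.
    factors = np.abs(grads * weights)
    snapped = np.array([allowed_values[np.argmin(np.abs(allowed_values - w))]
                        for w in weights])
    snapped[factors < threshold] = 0.0  # the identified low-impact subset
    return snapped  # training then resumes from these changed values

weights = np.array([0.9, -0.05, 0.4, -0.7])
grads = np.array([0.5, 0.5, 0.01, 0.6])
allowed = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
print(quantize_and_prune(weights, grads, allowed, threshold=0.1))
```

Here the second weight snaps to zero because it is already near zero, while the third is zeroed despite snapping to 0.5, because its small gradient gives it a factor below the threshold.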
  • Publication number: 20250148644
    Abstract: Some embodiments of the invention provide a novel method for training a multi-layer node network. Some embodiments train the multi-layer network using a set of inputs generated with random misalignments incorporated into the training data set. In some embodiments, the training data set is a synthetically generated training set based on a three-dimensional ground truth model as it would be sensed by a sensor array from different positions and with different deviations from ideal alignment and placement. Some embodiments dynamically generate training data sets when a determination is made that more training is required. Training data sets, in some embodiments, are generated based on training data sets for which the multi-layer node network has produced bad results.
    Type: Application
    Filed: June 7, 2024
    Publication date: May 8, 2025
    Inventors: Andrew C. Mihal, Steven L. Teig
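The synthetic-misalignment idea in this abstract can be sketched minimally, here in 2D with hypothetical names; the patent's data is rendered from a 3D ground-truth model by a sensor array:

```python
import random

def generate_misaligned_set(ground_truth, num_samples, max_offset=0.1, seed=0):
    """Render the ground-truth model as seen from sensor positions that
    deviate randomly from ideal alignment and placement."""
    rng = random.Random(seed)
    samples = []
    for _ in range(num_samples):
        dx = rng.uniform(-max_offset, max_offset)  # random misalignment
        dy = rng.uniform(-max_offset, max_offset)
        samples.append([(x + dx, y + dy) for (x, y) in ground_truth])
    return samples

model = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # toy 2D stand-in for the 3D model
batch = generate_misaligned_set(model, num_samples=4)
```

Each sample is the same model viewed with a different placement error, so the network trains against the deviations it will see in deployment.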
  • Patent number: 12293993
    Abstract: Some embodiments of the invention provide a three-dimensional (3D) circuit that is formed by stacking two or more integrated circuit (IC) dies to at least partially overlap and to share one or more interconnect layers that distribute power, clock and/or data-bus signals. The shared interconnect layers include interconnect segments that carry power, clock and/or data-bus signals. In some embodiments, the shared interconnect layers are higher level interconnect layers (e.g., the top interconnect layer of each IC die). In some embodiments, the stacked IC dies of the 3D circuit include first and second IC dies. The first die includes a first semiconductor substrate and a first set of interconnect layers defined above the first semiconductor substrate. Similarly, the second IC die includes a second semiconductor substrate and a second set of interconnect layers defined above the second semiconductor substrate.
    Type: Grant
    Filed: October 13, 2023
    Date of Patent: May 6, 2025
    Assignee: Adeia Semiconductor Inc.
    Inventors: Javier A. DeLaCruz, Steven L. Teig, Ilyas Mohammed
  • Publication number: 20250142942
    Abstract: Some embodiments of the invention provide a three-dimensional (3D) circuit that is formed by stacking two or more integrated circuit (IC) dies to at least partially overlap and to share one or more interconnect layers that distribute power, clock and/or data-bus signals. The shared interconnect layers include interconnect segments that carry power, clock and/or data-bus signals. In some embodiments, the shared interconnect layers are higher level interconnect layers (e.g., the top interconnect layer of each IC die). In some embodiments, the stacked IC dies of the 3D circuit include first and second IC dies. The first die includes a first semiconductor substrate and a first set of interconnect layers defined above the first semiconductor substrate. Similarly, the second IC die includes a second semiconductor substrate and a second set of interconnect layers defined above the second semiconductor substrate.
    Type: Application
    Filed: October 3, 2024
    Publication date: May 1, 2025
    Inventors: Javier A. DeLaCruz, Steven L. Teig, Ilyas Mohammed, Eric M. Nequist
  • Patent number: 12265905
    Abstract: Some embodiments provide a method for a circuit that executes a neural network including multiple nodes. The method loads a set of weight values for a node into a set of weight value buffers, a first set of bits of each input value of a set of input values for the node into a first set of input value buffers, and a second set of bits of each of the input values into a second set of input value buffers. The method computes a first dot product of the weight values and the first set of bits of each input value and a second dot product of the weight values and the second set of bits of each input value. The method shifts the second dot product by a particular number of bits and adds the first dot product with the bit-shifted second dot product to compute a dot product for the node.
    Type: Grant
    Filed: November 9, 2022
    Date of Patent: April 1, 2025
    Assignee: Amazon Technologies, Inc.
    Inventors: Jung Ko, Kenneth Duong, Steven L. Teig
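The bit-split dot product described above can be checked with a small sketch (hypothetical function name; a 4-bit low group over 8-bit inputs is assumed for illustration):

```python
def split_dot_product(weights, inputs, low_bits=4):
    """Compute a dot product by splitting each input into low and high
    bit groups, taking two smaller dot products, and recombining."""
    mask = (1 << low_bits) - 1
    low = [x & mask for x in inputs]        # first set of bits of each input
    high = [x >> low_bits for x in inputs]  # second set of bits of each input
    dot_low = sum(w * x for w, x in zip(weights, low))
    dot_high = sum(w * x for w, x in zip(weights, high))
    return dot_low + (dot_high << low_bits)  # shift second dot product, then add

weights = [1, -2, 3]
inputs = [200, 17, 255]  # 8-bit input values
print(split_dot_product(weights, inputs))  # → 931
```

Because x = (high << 4) + low, the shifted-and-added result equals the direct dot product while each partial dot product only ever handles 4-bit operands.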
  • Publication number: 20250103341
    Abstract: Some embodiments provide a neural network inference circuit (NNIC) for executing a neural network that includes multiple computation nodes at multiple layers. The NNIC includes multiple core circuits including memories for storing input values for the computation nodes. The NNIC includes a set of post-processing circuits for computing output values of the computation nodes. The output values for a first layer are for storage in the core circuits as input values for a second layer. The NNIC includes an output bus that connects the post-processing circuits to the core circuits. The output bus is for (i) receiving a set of output values from the post-processing circuits, (ii) transporting the output values of the set to the core circuits based on configuration data specifying a core circuit at which each of the output values is to be stored, and (iii) aligning the output values for storage in the core circuits.
    Type: Application
    Filed: September 4, 2024
    Publication date: March 27, 2025
    Inventors: Kenneth Duong, Jung Ko, Steven L. Teig
  • Patent number: 12260317
    Abstract: Some embodiments provide a compiler for optimizing the implementation of a machine-trained network (e.g., a neural network) on an integrated circuit (IC). The compiler of some embodiments receives a specification of a machine-trained network including multiple layers of computation nodes and generates a graph representing options for implementing the machine-trained network in the IC. In some embodiments, the compiler also generates instructions for gating operations. Gating operations, in some embodiments, include gating at multiple levels (e.g., gating of clusters, cores, or memory units). Gating operations conserve power in some embodiments by gating signals so that they do not reach the gated element or so that they are not propagated within the gated element. In some embodiments, a clock signal is gated such that a register that transmits data on a rising (or falling) edge of a clock signal is not triggered.
    Type: Grant
    Filed: July 29, 2019
    Date of Patent: March 25, 2025
    Assignee: Amazon Technologies, Inc.
    Inventors: Brian Thomas, Steven L. Teig
  • Patent number: 12248880
    Abstract: Some embodiments provide a method for training a machine-trained (MT) network that processes inputs using network parameters. The method propagates a set of input training items through the MT network to generate a set of output values. The set of input training items comprises multiple training items for each of multiple categories. The method identifies multiple training item groupings in the set of input training items. Each grouping includes at least two training items in a first category and at least one training item in a second category. The method calculates a value of a loss function as a summation of individual loss functions for each of the identified training item groupings. The individual loss function for each particular training item grouping is based on the output values for the training items of the grouping. The method trains the network parameters using the calculated loss function value.
    Type: Grant
    Filed: August 27, 2023
    Date of Patent: March 11, 2025
    Assignee: Amazon Technologies, Inc.
    Inventors: Eric A. Sather, Steven L. Teig, Andrew C. Mihal
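The grouping-based loss above resembles a triplet loss: each grouping holds two same-category items and one cross-category item, and the overall loss sums the per-grouping losses. A minimal sketch under that assumption (names and margin value are hypothetical):

```python
def triplet_loss(anchor, positive, negative, margin=1.0):
    """Individual loss for one grouping: the two same-category outputs
    should sit closer together than the cross-category pair."""
    d_pos = sum((a - p) ** 2 for a, p in zip(anchor, positive))
    d_neg = sum((a - n) ** 2 for a, n in zip(anchor, negative))
    return max(0.0, d_pos - d_neg + margin)

def total_loss(groupings):
    """Overall loss: summation of the individual per-grouping losses."""
    return sum(triplet_loss(a, p, n) for (a, p, n) in groupings)

groupings = [
    ([0.0, 0.0], [0.1, 0.0], [2.0, 0.0]),  # well-separated grouping
    ([0.0, 0.0], [1.0, 0.0], [1.1, 0.0]),  # hard grouping: negative too close
]
print(total_loss(groupings))
```

Only the second grouping contributes, since its cross-category distance barely exceeds the same-category distance.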
  • Patent number: 12248869
    Abstract: Some embodiments provide a three-dimensional (3D) circuit structure that has two or more vertically stacked bonded layers with a machine-trained network on at least one bonded layer. As described above, each bonded layer can be an IC die or an IC wafer in some embodiments with different embodiments encompassing different combinations of wafers and dies for the different bonded layers. The machine-trained network in some embodiments includes several stages of machine-trained processing nodes with routing fabric that supplies the outputs of earlier stage nodes to drive the inputs of later stage nodes. In some embodiments, the machine-trained network is a neural network and the processing nodes are neurons of the neural network. In some embodiments, one or more parameters associated with each processing node (e.g., each neuron) is defined through machine-trained processes that define the values of these parameters in order to allow the machine-trained network (e.g.
    Type: Grant
    Filed: September 19, 2023
    Date of Patent: March 11, 2025
    Assignee: Adeia Semiconductor Inc.
    Inventors: Steven L. Teig, Kenneth Duong
  • Publication number: 20250068912
    Abstract: Some embodiments provide a method for configuring a machine-trained (MT) network that includes multiple configurable weights to train. The method propagates a set of inputs through the MT network to generate a set of output probability distributions. Each input has a corresponding expected output probability distribution. The method calculates a value of a continuously-differentiable loss function that includes a term approximating an extremum function of the difference between the expected output probability distributions and generated set of output probability distributions. The method trains the weights by back-propagating the calculated value of the continuously-differentiable loss function.
    Type: Application
    Filed: July 29, 2024
    Publication date: February 27, 2025
    Inventors: Steven L. Teig, Andrew C. Mihal
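One standard continuously-differentiable approximation of an extremum function, which could serve as the loss term the abstract describes, is a scaled log-sum-exp (the patent does not name its approximation; this is an assumed stand-in):

```python
import math

def soft_max_term(diffs, beta=10.0):
    """Smooth stand-in for max(diffs): (1/beta) * log(sum(exp(beta*d)))
    approaches the true maximum as beta grows, yet stays differentiable."""
    m = max(diffs)  # subtract the max inside exp for numerical stability
    return m + math.log(sum(math.exp(beta * (d - m)) for d in diffs)) / beta

diffs = [0.1, 0.9, 0.3]  # per-input divergences from the expected distributions
approx = soft_max_term(diffs)
```

The smooth term upper-bounds the hard maximum and converges to it, so back-propagating through it penalizes the worst-case divergence without a non-differentiable max.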
  • Patent number: 12217160
    Abstract: Some embodiments provide a method that receives a specification of a neural network for execution by an integrated circuit. The integrated circuit includes a neural network inference circuit for executing the neural network to generate an output based on an input, an input processing circuit for providing the input to the neural network inference circuit, a microprocessor circuit for controlling the neural network inference circuit and the input processing circuit, and a unified memory accessible by the microprocessor circuit, the neural network inference circuit, and the input processing circuit. The method determines usage of the unified memory by the neural network inference circuit while executing the neural network. Based on the determined usage by the neural network inference circuit, the method allocates portions of the unified memory to the microprocessor circuit and input processing circuit.
    Type: Grant
    Filed: May 3, 2021
    Date of Patent: February 4, 2025
    Assignee: Amazon Technologies, Inc.
    Inventors: Jung Ko, Kenneth Duong, Steven L. Teig, Won Rhee
  • Patent number: 12218059
    Abstract: Some embodiments of the invention provide a three-dimensional (3D) circuit that is formed by stacking two or more integrated circuit (IC) dies to at least partially overlap and to share one or more interconnect layers that distribute power, clock and/or data-bus signals. The shared interconnect layers include interconnect segments that carry power, clock and/or data-bus signals. In some embodiments, the shared interconnect layers are higher level interconnect layers (e.g., the top interconnect layer of each IC die). In some embodiments, the stacked IC dies of the 3D circuit include first and second IC dies. The first die includes a first semiconductor substrate and a first set of interconnect layers defined above the first semiconductor substrate. Similarly, the second IC die includes a second semiconductor substrate and a second set of interconnect layers defined above the second semiconductor substrate.
    Type: Grant
    Filed: December 28, 2023
    Date of Patent: February 4, 2025
    Assignee: Adeia Semiconductor Inc.
    Inventors: Ilyas Mohammed, Steven L. Teig, Javier A. DeLaCruz
  • Publication number: 20250028945
    Abstract: Some embodiments provide a method for executing a layer of a neural network, for a circuit that restricts the number of weight values used per layer. The method applies a first set of weights to a set of inputs to generate a first set of results. The first set of weights is restricted to a first set of allowed values. For each of one or more additional sets of weights, the method applies the respective additional set of weights to the same set of inputs to generate a respective additional set of results. The respective additional set of weights is restricted to a respective additional set of allowed values that is related to the first set of allowed values and the other additional sets of allowed values. The method generates outputs for the layer by combining the first set of results with each respective additional set of results.
    Type: Application
    Filed: May 17, 2024
    Publication date: January 23, 2025
    Inventors: Eric A. Sather, Steven L. Teig
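One way to read this abstract is as residual quantization: each weight set draws from a small alphabet, with the additional sets' allowed values related to the first by scaling. A minimal sketch under that assumption (names, the ternary alphabet, and the scale factors are all hypothetical):

```python
def layer_output(inputs, weight_sets, scales):
    """Apply each restricted weight set to the same inputs and combine
    the partial results into the layer's outputs."""
    total = 0.0
    for ws, scale in zip(weight_sets, scales):
        assert all(w in (-1, 0, 1) for w in ws)  # small allowed alphabet per set
        total += scale * sum(w * x for w, x in zip(ws, inputs))
    return total  # first set's result combined with each additional set's

inputs = [0.5, -1.0, 2.0]
weight_sets = [[1, 0, -1], [0, 1, 1]]  # each set restricted to {-1, 0, 1}
scales = [1.0, 0.5]  # additional set's values related to the first by scaling
print(layer_output(inputs, weight_sets, scales))
```

Combining several cheaply-restricted weight sets this way lets the circuit approximate a richer weight alphabet than any single set allows.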
  • Patent number: 12190230
    Abstract: Some embodiments provide a neural network inference circuit (NNIC) for executing a neural network that includes multiple computation nodes at multiple layers. The NNIC includes a set of clusters of core computation circuits and a channel, connecting the core computation circuits, that includes separate segments corresponding to each of the clusters. The NNIC includes a fabric controller circuit, a cluster controller circuit for each of the clusters, and a core controller circuit for each of the core computation circuits. The fabric controller circuit receives high-level neural network instructions from a microprocessor and parses the high-level neural network instructions.
    Type: Grant
    Filed: November 7, 2022
    Date of Patent: January 7, 2025
    Assignee: Amazon Technologies, Inc.
    Inventors: Kenneth Duong, Jung Ko, Steven L. Teig
  • Patent number: 12175368
    Abstract: Some embodiments provide a method for training a machine-trained (MT) network. The method propagates multiple inputs through the MT network to generate an output for each of the inputs. Each of the inputs is associated with an expected output; the MT network uses multiple network parameters to process the inputs, and each network parameter of a set of the network parameters is defined during training as a probability distribution across a discrete set of possible values for the network parameter. The method calculates a value of a loss function for the MT network that includes (i) a first term that measures network error based on the expected outputs compared to the generated outputs and (ii) a second term that penalizes divergence of the probability distribution for each network parameter in the set of network parameters from a predefined probability distribution for the network parameter.
    Type: Grant
    Filed: November 7, 2022
    Date of Patent: December 24, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Steven L. Teig, Eric A. Sather
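The two-term loss above can be sketched concretely. The abstract does not name its divergence measure; KL divergence is assumed here, and the penalty weight and uniform prior are hypothetical:

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions over a parameter's possible values."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def total_loss(network_error, param_dists, prior_dist, penalty_weight=0.1):
    """First term: measured network error. Second term: penalty on each
    parameter distribution's divergence from the predefined distribution."""
    penalty = sum(kl_divergence(p, prior_dist) for p in param_dists)
    return network_error + penalty_weight * penalty

prior = [1 / 3, 1 / 3, 1 / 3]            # predefined distribution per parameter
dists = [[0.7, 0.2, 0.1]]                # one trained parameter's distribution
loss = total_loss(1.0, dists, prior)
```

The penalty vanishes only when every parameter distribution matches the predefined one, so training trades network error against staying near the prior.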
  • Patent number: 12165066
    Abstract: Some embodiments provide a method for training a machine-trained (MT) network that processes input data using network parameters. The method maps a set of input instances to a set of output values by propagating the set of input instances through the MT network. The set of input instances includes input instances for each of multiple categories. For a particular input instance selected as an anchor instance, the method calculates a true positive rate (TPR) for the MT network as a function of a distance between the output value for the anchor instance and the output value for each input instance not in the same category as the anchor instance. The method calculates a loss function for the anchor instance that maximizes the TPR for the MT network at a low false positive rate. The method trains the network parameters using the calculated loss function.
    Type: Grant
    Filed: March 14, 2018
    Date of Patent: December 10, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Eric A. Sather, Steven L. Teig, Andrew C. Mihal
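The quantity being optimized above, TPR at a fixed low FPR, can be computed directly for evaluation. This sketch is only illustrative: the patent trains through a differentiable surrogate of this quantity, and all names here are hypothetical:

```python
def tpr_at_low_fpr(pos_dists, neg_dists, fpr=0.05):
    """Set the distance threshold so that only `fpr` of cross-category
    (negative) distances fall inside it, then report the fraction of
    same-category (positive) distances that still match."""
    neg_sorted = sorted(neg_dists)
    k = max(0, int(fpr * len(neg_sorted)) - 1)
    threshold = neg_sorted[k]  # tightest threshold meeting the FPR budget
    return sum(d <= threshold for d in pos_dists) / len(pos_dists)

pos = [0.5, 0.8, 1.5]        # distances from the anchor to same-category outputs
neg = [1.0, 2.0, 3.0, 4.0]   # distances from the anchor to other categories
print(tpr_at_low_fpr(pos, neg))
```

With the threshold pinned at the nearest negative, two of the three positives match, giving a TPR of 2/3 at (approximately) zero false positives.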
  • Patent number: 12165069
    Abstract: Some embodiments provide a compiler for optimizing the implementation of a machine-trained network (e.g., a neural network) on an integrated circuit (IC). The compiler of some embodiments receives a specification of a machine-trained network including multiple layers of computation nodes and generates a graph representing options for implementing the machine-trained network in the IC. In some embodiments, the graph includes nodes representing options for implementing each layer of the machine-trained network and edges between nodes for different layers representing different implementations that are compatible. In some embodiments, the graph is populated according to rules relating to memory use and the numbers of cores necessary to implement a particular layer of the machine-trained network such that nodes for a particular layer, in some embodiments, represent fewer than all the possible groupings of sets of clusters.
    Type: Grant
    Filed: July 29, 2019
    Date of Patent: December 10, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Brian Thomas, Steven L. Teig
  • Patent number: 12165043
    Abstract: Some embodiments provide a neural network inference circuit for executing a neural network that includes multiple layers of computation nodes. At least a subset of the layers include non-convolutional layers. The neural network inference circuit includes multiple cores with memories that store input values for the layers. The cores are grouped into multiple clusters. For each cluster, the neural network inference circuit includes a set of processing circuits for receiving input values from the cores of the cluster and executing the computation nodes of the non-convolutional layers.
    Type: Grant
    Filed: October 8, 2023
    Date of Patent: December 10, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Jung Ko, Kenneth Duong, Steven L. Teig
  • Patent number: 12165055
    Abstract: Some embodiments of the invention provide a method for implementing a temporal convolution network (TCN) that includes several layers of machine-trained processing nodes. While processing one set of inputs that is provided to the TCN at a particular time, some of the processing nodes of the TCN use intermediate values computed by the processing nodes for other sets of inputs that were provided to the TCN at earlier times. To speed up the operation of the TCN and improve its efficiency, the method of some embodiments stores intermediate values computed by the TCN processing nodes for earlier sets of TCN inputs, so that these values can later be used for processing later sets of TCN inputs.
    Type: Grant
    Filed: November 9, 2020
    Date of Patent: December 10, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Ryan J. Cassidy, Steven L. Teig
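The caching idea above can be sketched as a node that keeps a bounded history of its own intermediates. The class, kernel, and per-input computation here are hypothetical stand-ins for the patent's processing nodes:

```python
from collections import deque

class CachedTemporalNode:
    """Processing node that stores intermediate values computed for
    earlier inputs so later time steps can reuse them."""
    def __init__(self, kernel, history=3):
        self.kernel = kernel                    # temporal convolution taps
        self.cache = deque(maxlen=history)      # intermediates from earlier inputs

    def step(self, x):
        intermediate = x * 2.0  # stand-in for the node's per-input computation
        self.cache.append(intermediate)
        # Convolve over cached intermediates instead of recomputing them.
        # Simplification: a partially filled cache uses the newest taps.
        vals = list(self.cache)
        taps = self.kernel[-len(vals):]
        return sum(k * v for k, v in zip(taps, vals))

node = CachedTemporalNode(kernel=[0.25, 0.5, 1.0])
outputs = [node.step(t) for t in (1, 2, 3, 4)]
```

Each step computes only the newest intermediate; the two older ones are read back from the cache, which is the speedup the method targets.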
  • Patent number: 12159214
    Abstract: Some embodiments provide a method for executing a neural network. The method writes a first input to a first set of physical memory banks in a unified memory shared by an input processing circuit and a neural network inference circuit that executes the neural network. While the neural network inference circuit is executing the network a first time to generate a first output for the first input, the method writes a second input to a second set of physical memory banks in the unified memory. The neural network inference circuit executes a same set of instructions to read the first input from the first set of memory banks in order to execute the network the first time and to read the second input from the second set of memory banks in order to execute the network a second time to generate a second output for the second input.
    Type: Grant
    Filed: May 3, 2021
    Date of Patent: December 3, 2024
    Assignee: Perceive Corporation
    Inventors: Jung Ko, Kenneth Duong, Steven L. Teig, Won Rhee
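The double-buffering scheme above can be sketched as a ping-pong buffer: the input circuit fills one set of banks while the inference circuit reads the other, and the roles swap on each execution. Class and method names are hypothetical:

```python
class PingPongBuffer:
    """Two sets of memory banks in a shared unified memory: one is
    written with the next input while the other is read for execution."""
    def __init__(self):
        self.banks = [None, None]
        self.write_idx = 0

    def write_input(self, data):
        self.banks[self.write_idx] = data  # input circuit fills this set

    def execute(self):
        read_idx = self.write_idx
        self.write_idx = 1 - self.write_idx  # next input lands in the other set
        # Stand-in for the inference circuit running the same instructions
        # against whichever bank set currently holds the input.
        return f"output({self.banks[read_idx]})"

buf = PingPongBuffer()
buf.write_input("frame0")
print(buf.execute())       # reads frame0 from the first bank set
buf.write_input("frame1")  # written while the first set was being read
print(buf.execute())       # reads frame1 from the second bank set
```

Because the swap is handled by bank selection rather than by different code paths, the inference circuit can execute the same instructions for both runs, as the abstract describes.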