Patents by Inventor Andrew C. Mihal
Andrew C. Mihal has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250148644
Abstract: Some embodiments of the invention provide a novel method for training a multi-layer node network. Some embodiments train the multi-layer network using a set of inputs generated with random misalignments incorporated into the training data set. In some embodiments, the training data set is a synthetically generated training set based on a three-dimensional ground truth model as it would be sensed by a sensor array from different positions and with different deviations from ideal alignment and placement. Some embodiments dynamically generate training data sets when a determination is made that more training is required. Training data sets, in some embodiments, are generated based on training data sets for which the multi-layer node network has produced bad results.
Type: Application
Filed: June 7, 2024
Publication date: May 8, 2025
Inventors: Andrew C. Mihal, Steven L. Teig
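The misalignment-augmentation idea in this abstract can be pictured with a minimal sketch. This is an assumption-heavy simplification: the patent describes a three-dimensional ground-truth model sensed by a sensor array, while the toy below uses 2-D points and a pure translation offset, and every name and numeric choice is hypothetical.

```python
import random

def generate_misaligned_samples(ground_truth_points, n_samples, max_offset=0.1, seed=0):
    """Derive a training set from one ground-truth model by applying a
    random misalignment (here, a small 2-D translation) to each sample,
    as if the sensor deviated from its ideal placement."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        # Random deviation from ideal alignment, one offset per sample.
        dx = rng.uniform(-max_offset, max_offset)
        dy = rng.uniform(-max_offset, max_offset)
        samples.append([(x + dx, y + dy) for (x, y) in ground_truth_points])
    return samples

ideal = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # hypothetical ground truth
training_set = generate_misaligned_samples(ideal, n_samples=4)
```

Each generated sample is the same geometry viewed through a different random placement error, which is the property the abstract says the training set should exhibit.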
-
Patent number: 12248880
Abstract: Some embodiments provide a method for training a machine-trained (MT) network that processes inputs using network parameters. The method propagates a set of input training items through the MT network to generate a set of output values. The set of input training items comprises multiple training items for each of multiple categories. The method identifies multiple training item groupings in the set of input training items. Each grouping includes at least two training items in a first category and at least one training item in a second category. The method calculates a value of a loss function as a summation of individual loss functions for each of the identified training item groupings. The individual loss function for each particular training item grouping is based on the output values for the training items of the grouping. The method trains the network parameters using the calculated loss function value.
Type: Grant
Filed: August 27, 2023
Date of Patent: March 11, 2025
Assignee: Amazon Technologies, Inc.
Inventors: Eric A. Sather, Steven L. Teig, Andrew C. Mihal
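The grouping-based loss described in this abstract resembles a triplet-style objective: each grouping pairs two same-category items with a cross-category item, and the total loss sums a per-grouping term. The sketch below is hedged, not the patented formulation; the hinge form, `margin` parameter, and Euclidean distance are all assumptions made for illustration.

```python
import math

def grouping_loss(outputs, groupings, margin=1.0):
    """Total loss as a summation of individual per-grouping losses.
    Each grouping is (anchor, positive, negative): anchor and positive
    share a category, negative does not. The per-grouping hinge is low
    when same-category outputs sit closer than cross-category outputs
    by at least `margin`."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    total = 0.0
    for anchor, positive, negative in groupings:
        total += max(0.0, dist(outputs[anchor], outputs[positive])
                     - dist(outputs[anchor], outputs[negative]) + margin)
    return total

# Hypothetical network outputs keyed by training-item id.
outputs = {"a1": (0.0, 0.0), "a2": (0.1, 0.0), "b1": (3.0, 0.0)}
loss = grouping_loss(outputs, [("a1", "a2", "b1")])
```

With the positive much closer than the negative, the grouping contributes zero loss; swap the geometry and the hinge activates.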
-
Publication number: 20250068912
Abstract: Some embodiments provide a method for configuring a machine-trained (MT) network that includes multiple configurable weights to train. The method propagates a set of inputs through the MT network to generate a set of output probability distributions. Each input has a corresponding expected output probability distribution. The method calculates a value of a continuously-differentiable loss function that includes a term approximating an extremum function of the difference between the expected output probability distributions and the generated set of output probability distributions. The method trains the weights by back-propagating the calculated value of the continuously-differentiable loss function.
Type: Application
Filed: July 29, 2024
Publication date: February 27, 2025
Inventors: Steven L. Teig, Andrew C. Mihal
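A common continuously differentiable surrogate for an extremum is a log-sum-exp, and the sketch below uses one to approximate the maximum per-element difference between two distributions. The log-sum-exp choice and the `beta` sharpness parameter are assumptions for illustration only; the patent does not commit to this particular smooth approximation here.

```python
import math

def soft_extremum_loss(expected, generated, beta=20.0):
    """Continuously differentiable stand-in for max_i |e_i - g_i|
    via log-sum-exp: it upper-bounds the true maximum and converges
    to it as beta grows, while remaining smooth for backpropagation."""
    diffs = [abs(e - g) for e, g in zip(expected, generated)]
    return math.log(sum(math.exp(beta * d) for d in diffs)) / beta

expected = [0.7, 0.2, 0.1]   # hypothetical target distribution
generated = [0.5, 0.3, 0.2]  # hypothetical network output
loss = soft_extremum_loss(expected, generated)
```

Unlike a hard `max`, this term has nonzero gradient with respect to every element, which is what makes it usable in the back-propagation step the abstract describes.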
-
Patent number: 12165066
Abstract: Some embodiments provide a method for training a machine-trained (MT) network that processes input data using network parameters. The method maps a set of input instances to a set of output values by propagating the set of input instances through the MT network. The set of input instances includes input instances for each of multiple categories. For a particular input instance selected as an anchor instance, the method calculates a true positive rate (TPR) for the MT network as a function of a distance between the output value for the anchor instance and the output value for each input instance not in a same category as the anchor instance. The method calculates a loss function for the anchor instance that maximizes the TPR for the MT network at low false positive rate. The method trains the network parameters using the calculated loss function.
Type: Grant
Filed: March 14, 2018
Date of Patent: December 10, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Eric A. Sather, Steven L. Teig, Andrew C. Mihal
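The TPR-at-low-FPR quantity can be pictured with a small evaluation helper for a single anchor. This is only a sketch under stated assumptions: the thresholding rule, names, and numbers are invented for illustration, and the patent builds a trainable loss around this quantity rather than merely evaluating it.

```python
def tpr_at_fpr(pos_dists, neg_dists, fpr=0.05):
    """True-positive rate for one anchor at a fixed low false-positive
    rate: choose the distance threshold that admits at most `fpr` of
    the negatives (other-category items), then count the fraction of
    positives (same-category items) falling inside it."""
    neg_sorted = sorted(neg_dists)
    allowed = int(fpr * len(neg_sorted))  # negatives tolerated inside threshold
    threshold = neg_sorted[allowed]       # first negative that must stay outside
    return sum(1 for d in pos_dists if d < threshold) / len(pos_dists)

pos = [0.10, 0.30, 0.505, 0.90]             # distances to same-category outputs
neg = [0.50 + 0.01 * i for i in range(20)]  # distances to other-category outputs
rate = tpr_at_fpr(pos, neg, fpr=0.05)
```

Maximizing this rate pushes same-category outputs inside a threshold that almost no cross-category output crosses, which is the operating point the abstract targets.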
-
Patent number: 12051000
Abstract: Some embodiments provide a method for configuring a machine-trained (MT) network that includes multiple configurable weights to train. The method propagates a set of inputs through the MT network to generate a set of output probability distributions. Each input has a corresponding expected output probability distribution. The method calculates a value of a continuously-differentiable loss function that includes a term approximating an extremum function of the difference between the expected output probability distributions and the generated set of output probability distributions. The method trains the weights by back-propagating the calculated value of the continuously-differentiable loss function.
Type: Grant
Filed: October 10, 2022
Date of Patent: July 30, 2024
Assignee: PERCEIVE CORPORATION
Inventors: Steven L. Teig, Andrew C. Mihal
-
Patent number: 12001948
Abstract: Some embodiments of the invention provide a machine-trained method that selects an output from a plurality of outputs by processing an input. The method uses layers of machine-trained processing nodes to process the input to produce a multi-dimensional codeword. The method generates a set of affinity scores with each affinity score identifying the proximity of the produced codeword to a codeword in a first set of previously defined codewords. The method compares the set of affinity scores generated for the produced codeword with sets of affinity scores previously generated for the first-set codewords that express the proximity of the first-set codewords to a second set of codewords. The method identifies the first-set codeword that has the affinity score set that best matches the affinity score set generated for the produced codeword. The method selects the associated output of the identified first-set codeword as the output of the network.
Type: Grant
Filed: December 8, 2017
Date of Patent: June 4, 2024
Assignee: PERCEIVE CORPORATION
Inventors: Steven L. Teig, Andrew C. Mihal
-
Patent number: 11995537
Abstract: Some embodiments provide a method for training a machine-trained (MT) network that processes input data using network parameters. The method maps a set of input instances to a set of output values by propagating the set of input instances through the MT network. The set of input instances includes input instances for each of multiple categories. The method selects multiple input instances as anchor instances. For each anchor instance, the method computes a loss function as a comparison between the output value for the anchor instance and each output value for an input instance in a different category than the anchor. The method computes a total loss function for the MT network as a sum of the loss function computed for each anchor instance. The method trains the network parameters using the computed total loss function.
Type: Grant
Filed: March 14, 2018
Date of Patent: May 28, 2024
Assignee: PERCEIVE CORPORATION
Inventors: Eric A. Sather, Steven L. Teig, Andrew C. Mihal
-
Publication number: 20240153044
Abstract: Some embodiments provide a neural network inference circuit for executing a neural network that includes multiple nodes that use state data from previous executions of the neural network. The neural network inference circuit includes (i) a set of computation circuits configured to execute the nodes of the neural network and (ii) a set of memories configured to implement a set of one or more registers to store, while executing the neural network for a particular input, state data generated during at least two executions of the network for previous inputs. The state data is for use by the set of computation circuits when executing a set of the nodes of the neural network for the particular input.
Type: Application
Filed: January 5, 2024
Publication date: May 9, 2024
Inventors: Andrew C. Mihal, Steven L. Teig, Eric A. Sather
-
Patent number: 11868871
Abstract: Some embodiments provide a neural network inference circuit for executing a neural network that includes multiple nodes that use state data from previous executions of the neural network. The neural network inference circuit includes (i) a set of computation circuits configured to execute the nodes of the neural network and (ii) a set of memories configured to implement a set of one or more registers to store, while executing the neural network for a particular input, state data generated during at least two executions of the network for previous inputs. The state data is for use by the set of computation circuits when executing a set of the nodes of the neural network for the particular input.
Type: Grant
Filed: September 26, 2019
Date of Patent: January 9, 2024
Assignee: PERCEIVE CORPORATION
Inventors: Andrew C. Mihal, Steven L. Teig, Eric A. Sather
-
Publication number: 20230409918
Abstract: Some embodiments provide a method for training a machine-trained (MT) network that processes inputs using network parameters. The method propagates a set of input training items through the MT network to generate a set of output values. The set of input training items comprises multiple training items for each of multiple categories. The method identifies multiple training item groupings in the set of input training items. Each grouping includes at least two training items in a first category and at least one training item in a second category. The method calculates a value of a loss function as a summation of individual loss functions for each of the identified training item groupings. The individual loss function for each particular training item grouping is based on the output values for the training items of the grouping. The method trains the network parameters using the calculated loss function value.
Type: Application
Filed: August 27, 2023
Publication date: December 21, 2023
Inventors: Eric A. Sather, Steven L. Teig, Andrew C. Mihal
-
Patent number: 11741369
Abstract: Some embodiments provide a method for training a machine-trained (MT) network that processes inputs using network parameters. The method propagates a set of input training items through the MT network to generate a set of output values. The set of input training items comprises multiple training items for each of multiple categories. The method identifies multiple training item groupings in the set of input training items. Each grouping includes at least two training items in a first category and at least one training item in a second category. The method calculates a value of a loss function as a summation of individual loss functions for each of the identified training item groupings. The individual loss function for each particular training item grouping is based on the output values for the training items of the grouping. The method trains the network parameters using the calculated loss function value.
Type: Grant
Filed: October 29, 2021
Date of Patent: August 29, 2023
Assignee: PERCEIVE CORPORATION
Inventors: Eric A. Sather, Steven L. Teig, Andrew C. Mihal
-
Patent number: 11620495
Abstract: Some embodiments provide a method for executing a neural network that includes multiple nodes. The method receives an input for a particular execution of the neural network. The method receives state data that includes data generated from at least two previous executions of the neural network. The method executes the neural network to generate a set of output data for the received input. A set of the nodes performs computations using (i) data output from other nodes of the particular execution of the neural network and (ii) the received state data generated from at least two previous executions of the neural network.
Type: Grant
Filed: September 26, 2019
Date of Patent: April 4, 2023
Assignee: PERCEIVE CORPORATION
Inventors: Andrew C. Mihal, Steven L. Teig, Eric A. Sather
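The notion of consuming state from at least two previous executions can be modeled with a two-slot register. The scalar "network" and the fixed 0.5/0.25 weights below are pure illustration, not the patented circuit; only the state-carrying structure mirrors the abstract.

```python
from collections import deque

class StatefulNet:
    """Toy stand-in for a network whose current execution uses state
    data saved from its two most recent previous executions."""
    def __init__(self):
        self.registers = deque(maxlen=2)  # state from up to 2 prior runs
    def run(self, x):
        prior_state = sum(self.registers)  # 0.0 on the very first run
        y = 0.5 * x + 0.25 * prior_state   # illustrative computation
        self.registers.append(y)           # persist this run's state
        return y

net = StatefulNet()
results = [net.run(1.0) for _ in range(3)]
```

The third call's output depends on the outputs of both earlier calls, which is exactly the "at least two previous executions" dependency the abstract describes.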
-
Patent number: 11586902
Abstract: Some embodiments provide a method for training a machine-trained (MT) network that processes input data using network parameters. The method maps input instances to output values by propagating the instances through the network. The input instances include instances for each of multiple categories. For a particular instance selected as an anchor instance, the method identifies each instance in a different category as a negative instance. The method calculates, for each negative instance of the anchor, a surprise function that probabilistically measures a surprise of finding an output value for an instance in the same category as the anchor that is a greater distance from the output value for the anchor instance than the output value for the negative instance. The method calculates a loss function that emphasizes a maximum surprise calculated for the anchor. The method trains the network parameters using the calculated loss function value to minimize the maximum surprise.
Type: Grant
Filed: March 14, 2018
Date of Patent: February 21, 2023
Assignee: PERCEIVE CORPORATION
Inventors: Eric A. Sather, Steven L. Teig, Andrew C. Mihal
-
Publication number: 20230040889
Abstract: Some embodiments provide a method for configuring a machine-trained (MT) network that includes multiple configurable weights to train. The method propagates a set of inputs through the MT network to generate a set of output probability distributions. Each input has a corresponding expected output probability distribution. The method calculates a value of a continuously-differentiable loss function that includes a term approximating an extremum function of the difference between the expected output probability distributions and the generated set of output probability distributions. The method trains the weights by back-propagating the calculated value of the continuously-differentiable loss function.
Type: Application
Filed: October 10, 2022
Publication date: February 9, 2023
Inventors: Steven L. Teig, Andrew C. Mihal
-
Patent number: 11475310
Abstract: Some embodiments provide a method for configuring a machine-trained (MT) network that includes multiple configurable weights to train. The method propagates a set of inputs through the MT network to generate a set of output probability distributions. Each input has a corresponding expected output probability distribution. The method calculates a value of a continuously-differentiable loss function that includes a term approximating an extremum function of the difference between the expected output probability distributions and the generated set of output probability distributions. The method trains the weights by back-propagating the calculated value of the continuously-differentiable loss function.
Type: Grant
Filed: November 28, 2017
Date of Patent: October 18, 2022
Assignee: PERCEIVE CORPORATION
Inventors: Steven L. Teig, Andrew C. Mihal
-
Publication number: 20220051002
Abstract: Some embodiments provide a method for training a machine-trained (MT) network that processes inputs using network parameters. The method propagates a set of input training items through the MT network to generate a set of output values. The set of input training items comprises multiple training items for each of multiple categories. The method identifies multiple training item groupings in the set of input training items. Each grouping includes at least two training items in a first category and at least one training item in a second category. The method calculates a value of a loss function as a summation of individual loss functions for each of the identified training item groupings. The individual loss function for each particular training item grouping is based on the output values for the training items of the grouping. The method trains the network parameters using the calculated loss function value.
Type: Application
Filed: October 29, 2021
Publication date: February 17, 2022
Inventors: Eric A. Sather, Steven L. Teig, Andrew C. Mihal
-
Patent number: 11163986
Abstract: Some embodiments provide a method for training a machine-trained (MT) network that processes inputs using network parameters. The method propagates a set of input training items through the MT network to generate a set of output values. The set of input training items comprises multiple training items for each of multiple categories. The method identifies multiple training item groupings in the set of input training items. Each grouping includes at least two training items in a first category and at least one training item in a second category. The method calculates a value of a loss function as a summation of individual loss functions for each of the identified training item groupings. The individual loss function for each particular training item grouping is based on the output values for the training items of the grouping. The method trains the network parameters using the calculated loss function value.
Type: Grant
Filed: April 17, 2020
Date of Patent: November 2, 2021
Assignee: PERCEIVE CORPORATION
Inventors: Eric A. Sather, Steven L. Teig, Andrew C. Mihal
-
Patent number: 11151695
Abstract: Some embodiments provide a method for processing a video that includes a sequence of images using a neural network. The method receives a set of video images as a set of inputs to successive executions of the neural network. The method executes the neural network for each successive video image of the set of video images to reduce an amount of noise in the video image by (i) identifying spatial features of the video image and (ii) storing a set of state data representing identified spatial features for use in identifying spatial features of subsequent video images in the set of video images. Identifying spatial features of a particular video image includes using the stored sets of spatial features of video images previous to the particular video image.
Type: Grant
Filed: September 26, 2019
Date of Patent: October 19, 2021
Assignee: PERCEIVE CORPORATION
Inventors: Andrew C. Mihal, Steven L. Teig, Eric A. Sather
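The use of stored features from earlier frames can be sketched as a tiny temporal averager. Treating the raw frame itself as its "spatial features" and averaging over a fixed three-frame window are simplifying assumptions made only for this illustration; the patent stores learned feature state rather than raw pixels.

```python
class TemporalDenoiser:
    """Toy frame processor that keeps state across executions: each
    output pixel averages the current frame with the stored 'features'
    (here, simply the raw frames) of up to two prior frames, which
    suppresses frame-to-frame noise."""
    def __init__(self):
        self.history = []  # state carried across successive executions
    def process(self, frame):
        self.history.append(frame)
        recent = self.history[-3:]  # current frame + up to 2 previous
        return [sum(px) / len(recent) for px in zip(*recent)]

d = TemporalDenoiser()
out1 = d.process([4.0, 8.0])  # first frame: nothing stored yet
out2 = d.process([0.0, 0.0])  # second frame: averaged with stored state
```

Independent noise across frames tends to cancel under this averaging, while stable image content survives, which is the noise-reduction effect the abstract targets.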
-
Publication number: 20200250476
Abstract: Some embodiments provide a method for training a machine-trained (MT) network that processes inputs using network parameters. The method propagates a set of input training items through the MT network to generate a set of output values. The set of input training items comprises multiple training items for each of multiple categories. The method identifies multiple training item groupings in the set of input training items. Each grouping includes at least two training items in a first category and at least one training item in a second category. The method calculates a value of a loss function as a summation of individual loss functions for each of the identified training item groupings. The individual loss function for each particular training item grouping is based on the output values for the training items of the grouping. The method trains the network parameters using the calculated loss function value.
Type: Application
Filed: April 17, 2020
Publication date: August 6, 2020
Inventors: Eric A. Sather, Steven L. Teig, Andrew C. Mihal
-
Patent number: 10671888
Abstract: Some embodiments provide a method for training a machine-trained (MT) network that processes inputs using network parameters. The method propagates a set of input training items through the MT network to generate a set of output values. The set of input training items comprises multiple training items for each of multiple categories. The method identifies multiple training item groupings in the set of input training items. Each grouping includes at least two training items in a first category and at least one training item in a second category. The method calculates a value of a loss function as a summation of individual loss functions for each of the identified training item groupings. The individual loss function for each particular training item grouping is based on the output values for the training items of the grouping. The method trains the network parameters using the calculated loss function value.
Type: Grant
Filed: February 21, 2018
Date of Patent: June 2, 2020
Assignee: PERCEIVE CORPORATION
Inventors: Eric A. Sather, Steven L. Teig, Andrew C. Mihal