Patents by Inventor Kenneth W. Waters

Kenneth W. Waters has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11934941
    Abstract: A neural processor circuit includes one or more planar engine circuits that perform non-convolution operations in parallel with convolution operations performed by one or more neural engine circuits. The neural engine circuits perform the convolution operations on neural input data corresponding to one or more neural engine tasks to generate neural output data. The planar engine circuits perform non-convolution operations on planar input data corresponding to one or more planar engine tasks to generate planar output data. A data processor circuit in the neural processor circuit addresses data dependency between the one or more neural engine tasks and the one or more planar engine tasks by controlling reading of the neural output data as the planar input data by the planar engine circuits or reading of the planar output data as the neural input data by the neural engine circuits.
    Type: Grant
    Filed: November 17, 2022
    Date of Patent: March 19, 2024
    Assignee: Apple Inc.
    Inventors: Christopher L. Mills, Kenneth W. Waters
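The data-dependency handling described in patent 11934941 can be modeled in software. The sketch below is illustrative only — the class and field names are invented for this example and do not reflect Apple's implementation — but it shows the core idea: a data processor buffers each engine's output and lets a dependent task on the other engine read it as input.

```python
# Minimal sketch (not Apple's implementation): a data processor that
# sequences neural-engine and planar-engine tasks so one engine only
# reads the other's output after that output has been produced.

def neural_engine(task):
    # stand-in for a convolution: scale each input element by a kernel value
    return [x * task["kernel"] for x in task["input"]]

def planar_engine(task):
    # stand-in for a non-convolution op: elementwise add of a bias
    return [x + task["bias"] for x in task["input"]]

class DataProcessor:
    """Buffers engine outputs and resolves cross-engine data dependency."""
    def __init__(self):
        self.buffer = {}

    def run(self, tasks):
        results = {}
        for task in tasks:
            # a task may depend on another task's output held in the buffer
            if "reads" in task:
                task["input"] = self.buffer[task["reads"]]
            engine = neural_engine if task["engine"] == "neural" else planar_engine
            out = engine(task)
            self.buffer[task["name"]] = out
            results[task["name"]] = out
        return results

dp = DataProcessor()
out = dp.run([
    {"name": "conv1", "engine": "neural", "input": [1, 2, 3], "kernel": 2},
    {"name": "bias1", "engine": "planar", "reads": "conv1", "bias": 1},
])
print(out["bias1"])  # conv output [2, 4, 6] plus bias -> [3, 5, 7]
```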
  • Publication number: 20230236799
    Abstract: Embodiments relate to a neural processor circuit that includes a neural engine and a post-processing circuit. The neural engine performs a computational task related to a neural network to generate a processed value. The post-processing circuit includes a random bit generator, an adder circuit and a rounding circuit. The random bit generator generates a random string of bits. The adder circuit adds the random string of bits to a version of the processed value to generate an added value. The rounding circuit truncates the added value to generate an output value of the computational task. The random bit generator may include a linear-feedback shift register (LFSR) that generates random numbers based on a seed. The seed may be derived from a master seed that is specific to a task of the neural network.
    Type: Application
    Filed: January 24, 2022
    Publication date: July 27, 2023
    Inventor: Kenneth W. Waters
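The rounding scheme in publication 20230236799 — add a random string of bits, then truncate — is a form of stochastic rounding, and can be sketched in software. This is an illustrative model rather than the patented circuit; the 16-bit LFSR taps and the seed below are arbitrary choices, not values from the application.

```python
# Minimal sketch (not the patented circuit): stochastic rounding by
# adding random low-order bits before truncating. A 16-bit Fibonacci
# LFSR stands in for the random bit generator.

def lfsr16(seed):
    """Yield pseudo-random 16-bit states from a Fibonacci LFSR
    (taps 16, 14, 13, 11: x^16 + x^14 + x^13 + x^11 + 1)."""
    state = seed & 0xFFFF
    while True:
        bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        yield state

def stochastic_round(value, frac_bits, rng):
    """Round a fixed-point value with frac_bits fractional bits by
    adding random bits in the fractional range, then truncating."""
    rand = next(rng) & ((1 << frac_bits) - 1)  # random string of bits
    return (value + rand) >> frac_bits         # add, then truncate

rng = lfsr16(seed=0xACE1)
# 2.5 in Q8 fixed point (640/256) rounds to 2 or 3, each roughly half the time
samples = [stochastic_round(640, 8, rng) for _ in range(1000)]
print(min(samples), max(samples))  # both 2 and 3 occur
```

Truncating after adding random bits makes the rounding unbiased on average, which matters when quantized values are accumulated across many layers of a neural network.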
  • Publication number: 20230206051
    Abstract: Embodiments relate to a neural processor that includes one or more neural engine circuits and planar engine circuits. The neural engine circuits can perform convolution operations of input data with one or more kernels to generate outputs. The planar engine circuit is coupled to the plurality of neural engine circuits. A planar engine circuit can be configured in multiple modes. In an elementwise mode, the planar engine circuit may combine two tensors by performing operations element by element. The planar engine circuit may support elementwise operations for two tensors of different sizes and ranks. The planar engine circuit may perform a broadcasting operation that duplicates one or more values across one or more channels so that the smaller tensor matches the size of the larger tensor.
    Type: Application
    Filed: March 10, 2023
    Publication date: June 29, 2023
    Inventors: Christopher L. Mills, Kenneth W. Waters, Youchang Kim
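The elementwise mode with broadcasting described in publication 20230206051 can be illustrated with plain Python. This is a sketch under simplifying assumptions (nested lists standing in for hardware tensors, a rank-1 "smaller tensor" holding one value per channel), not the planar engine's actual datapath.

```python
# Minimal sketch (illustrative): elementwise add of two tensors of
# different ranks, duplicating the smaller tensor's per-channel values
# across every spatial position of the larger tensor.

def broadcast_add(big, small):
    """big: channels x height x width (nested lists);
    small: one value per channel, broadcast across that channel."""
    return [
        [[x + small[c] for x in row] for row in big[c]]
        for c in range(len(big))
    ]

big = [[[1, 2], [3, 4]],   # channel 0
       [[5, 6], [7, 8]]]   # channel 1
small = [10, 100]          # rank-1 tensor, one value per channel
print(broadcast_add(big, small))
# every element of channel 0 gains 10, every element of channel 1 gains 100
```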
  • Patent number: 11630991
    Abstract: Embodiments relate to a neural processor that includes one or more neural engine circuits and planar engine circuits. The neural engine circuits can perform convolution operations of input data with one or more kernels to generate outputs. The planar engine circuit is coupled to the plurality of neural engine circuits. A planar engine circuit can be configured in multiple modes. In an elementwise mode, the planar engine circuit may combine two tensors by performing operations element by element. The planar engine circuit may support elementwise operations for two tensors of different sizes and ranks. The planar engine circuit may perform a broadcasting operation that duplicates one or more values across one or more channels so that the smaller tensor matches the size of the larger tensor.
    Type: Grant
    Filed: February 4, 2020
    Date of Patent: April 18, 2023
    Assignee: Apple Inc.
    Inventors: Christopher L. Mills, Kenneth W. Waters, Youchang Kim
  • Publication number: 20230081023
    Abstract: Embodiments relate to a neural processor circuit including one or more planar engine circuits that perform non-convolution operations in parallel with convolution operations performed by one or more neural engine circuits. The neural engine circuits perform the convolution operations on neural input data corresponding to one or more neural engine tasks to generate neural output data. The planar engine circuits perform non-convolution operations on planar input data corresponding to one or more planar engine tasks to generate planar output data. A data processor circuit in the neural processor circuit addresses data dependency between the one or more neural engine tasks and the one or more planar engine tasks by controlling reading of the neural output data as the planar input data by the planar engine circuits or reading of the planar output data as the neural input data by the neural engine circuits.
    Type: Application
    Filed: November 17, 2022
    Publication date: March 16, 2023
    Inventors: Christopher L. Mills, Kenneth W. Waters
  • Patent number: 11604975
    Abstract: A neural processor includes one or more neural engine circuits and a planar engine circuit. The neural engine circuits can perform convolution operations of first input data with one or more kernels to generate a first output. The planar engine circuit receives second input data that corresponds to a version of the first input data. The planar engine circuit also receives third input data that includes fourth input data and fifth input data stored together in a dimension of the third input data. The planar engine circuit performs a first elementwise operation between a version of the second input data and a version of the fourth input data to generate intermediate data. The planar engine circuit performs a second elementwise operation between the intermediate data and a version of the fifth input data to generate a second output.
    Type: Grant
    Filed: April 9, 2020
    Date of Patent: March 14, 2023
    Assignee: Apple Inc.
    Inventors: Christopher L. Mills, Kenneth W. Waters, Youchang Kim
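The chained elementwise operations in patent 11604975 — where two operands are stored together along one dimension of a single input — resemble a fused scale-and-bias. The sketch below is an illustrative reading, not the patented design: the "fourth" and "fifth" input data become a scale tensor and a bias tensor packed into one array.

```python
# Minimal sketch (illustrative): two chained elementwise operations,
# with the second and third operands packed together along the first
# dimension of a single input.

def fused_elementwise(x, packed):
    """packed stores two operands together in its first dimension:
    packed[0] is a scale tensor, packed[1] is a bias tensor."""
    scale, bias = packed[0], packed[1]
    intermediate = [a * s for a, s in zip(x, scale)]    # first elementwise op
    return [i + b for i, b in zip(intermediate, bias)]  # second elementwise op

x = [1.0, 2.0, 3.0]
packed = [[2.0, 2.0, 2.0],   # scale
          [0.5, 0.5, 0.5]]   # bias
print(fused_elementwise(x, packed))  # [2.5, 4.5, 6.5]
```

Packing both operands into one input halves the number of tensor fetches the engine needs for the fused operation.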
  • Patent number: 11599780
    Abstract: A neural processor circuit includes one or more planar engine circuits that perform non-convolution operations in parallel with convolution operations performed by one or more neural engine circuits. The neural engine circuits perform the convolution operations on neural input data corresponding to one or more neural engine tasks to generate neural output data. The planar engine circuits perform non-convolution operations on planar input data corresponding to one or more planar engine tasks to generate planar output data. A data processor circuit in the neural processor circuit addresses data dependency between the one or more neural engine tasks and the one or more planar engine tasks by controlling reading of the neural output data as the planar input data by the planar engine circuits or reading of the planar output data as the neural input data by the neural engine circuits.
    Type: Grant
    Filed: March 2, 2020
    Date of Patent: March 7, 2023
    Assignee: Apple Inc.
    Inventors: Christopher L. Mills, Kenneth W. Waters
  • Patent number: 11537864
    Abstract: Embodiments relate to a neural processor that includes one or more neural engine circuits and planar engine circuits. The neural engine circuits can perform convolution operations of input data with one or more kernels to generate outputs. The planar engine circuit is coupled to the plurality of neural engine circuits. A planar engine circuit can be configured in multiple modes. In a reduction mode, the planar engine circuit may process values arranged in one or more dimensions of the input to generate a reduced value. The reduced values from multiple input data may be accumulated. The planar engine circuit may program a filter circuit as a reduction tree to gradually reduce the data into a reduced value. The reduction operation reduces the size of one or more dimensions of a tensor.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: December 27, 2022
    Assignee: Apple Inc.
    Inventors: Christopher L. Mills, Kenneth W. Waters, Youchang Kim
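The reduction mode of patent 11537864 — a filter circuit programmed as a reduction tree that gradually collapses the data to one value, with reduced values accumulated across inputs — can be sketched as pairwise combination. This is a software illustration, not the circuit itself.

```python
# Minimal sketch (illustrative): a reduction tree that pairwise-sums
# values until one reduced value remains, then accumulates the reduced
# values across multiple input tiles.

def reduce_tree(values, op=lambda a, b: a + b):
    """Gradually reduce a list to one value via pairwise combination."""
    while len(values) > 1:
        reduced = [op(a, b) for a, b in zip(values[0::2], values[1::2])]
        if len(values) % 2:          # carry an unpaired leftover forward
            reduced.append(values[-1])
        values = reduced
    return values[0]

# accumulate the reduced value across multiple input tiles
tiles = [[1, 2, 3, 4], [5, 6, 7, 8]]
acc = sum(reduce_tree(t) for t in tiles)
print(acc)  # 10 + 26 = 36
```

A tree of depth log2(n) exposes parallelism that a sequential running sum does not, which is why hardware reductions are typically structured this way.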
  • Publication number: 20220241641
    Abstract: Systems and methods of analyzing a user's motion during a swimming session are described. One or more motion sensors can collect motion data of the user. A processor circuit can perform motion analysis based on the motion data. The processor circuit can determine whether the user's arm swing is a genuine swim stroke. The processor circuit can also determine whether the user is swimming or turning. The processor circuit can also classify the user's swim stroke style. The processor circuit can also determine the user's swim stroke phase. The processor circuit can also determine the user's stroke orbit consistency.
    Type: Application
    Filed: August 27, 2021
    Publication date: August 4, 2022
    Inventors: Craig Mermel, Karthik Jayaraman Raghuram, Hung A. Pham, Adam S. Howell, James P. Ochs, Alexander Singh Alvarado, Sunny K. Chow, Ronald K. Huang, Gunes Dervisoglu, Kenneth W. Waters
  • Publication number: 20220237439
    Abstract: A neural processor includes neural engines for performing convolution operations on input data corresponding to one or more tasks to generate output data. The neural processor circuit also includes a data processor circuit that is coupled to the one or more neural engines. The data processor circuit receives the output data from the neural engines and generates a branching command from the output data. The neural processor circuit further includes a task manager that is coupled to the data processor circuit. The task manager receives the branching command from the data processor circuit. The task manager enqueues one of two or more segment branches according to the received branching command. The two or more segment branches are subsequent to a pre-branch task segment that includes the pre-branch task. The task manager transmits a task from the selected segment branch to the data processor circuit to perform the task.
    Type: Application
    Filed: January 22, 2021
    Publication date: July 28, 2022
    Inventors: Kenneth W. Waters, Christopher L. Mills
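The task-branching flow in publication 20220237439 can be modeled with a simple queue: a pre-branch task produces output, the output is turned into a branching command, and the selected segment branch is enqueued. The branching condition and task format below are invented for illustration; the application does not specify them.

```python
# Minimal sketch (illustrative): a task manager that enqueues one of
# two segment branches based on a branching command derived from the
# engines' output data.

from collections import deque

def run(pre_branch, branches):
    queue = deque(pre_branch)
    output = None
    while queue:
        task = queue.popleft()
        output = task["fn"](output)
        if task.get("branch"):                 # data processor derives the
            cmd = 0 if output > 0 else 1       # branching command from output
            queue.extend(branches[cmd])        # enqueue the selected segment
    return output

pre = [{"fn": lambda _: 5, "branch": True}]    # pre-branch task segment
branches = [
    [{"fn": lambda x: x * 2}],                 # taken when output > 0
    [{"fn": lambda x: x - 1}],                 # taken otherwise
]
print(run(pre, branches))  # 5 > 0, so branch 0 doubles it -> 10
```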
  • Publication number: 20220237438
    Abstract: A neural processor includes neural engines for performing convolution operations on input data corresponding to one or more tasks to generate output data. The neural processor also includes a data processor circuit coupled to external system memory. The data processor circuit includes a buffer for storing the output data from the neural engines. The neural processor further includes a task manager coupled to the data processor circuit. The task manager receives a context-switch task. The context-switch task specifies a switch of the data processor circuit from handling an outgoing task to an incoming task. The task manager sends configuration data of the context-switch task to cause the data processor circuit to transmit the output data corresponding to the outgoing task from the buffer to the external system memory. The data processor circuit also fetches data corresponding to the incoming task from the external system memory to the buffer.
    Type: Application
    Filed: January 22, 2021
    Publication date: July 28, 2022
    Inventors: Christopher L. Mills, Kenneth W. Waters
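The context switch in publication 20220237438 amounts to spilling the outgoing task's buffered output to external memory and fetching the incoming task's data into the buffer. The class below is a software analogy with invented names, not the data processor circuit.

```python
# Minimal sketch (illustrative): a context switch that flushes the
# outgoing task's buffered output to external memory and fetches the
# incoming task's data into the buffer.

class DataProcessor:
    def __init__(self, system_memory):
        self.memory = system_memory   # stand-in for external system memory
        self.buffer = {}              # small on-chip buffer

    def context_switch(self, outgoing, incoming):
        # spill the outgoing task's state so no work is lost
        self.memory[outgoing] = self.buffer.pop(outgoing, None)
        # restore (or initialize) the incoming task's state
        self.buffer[incoming] = self.memory.get(incoming, [])

memory = {"taskB": [7, 8]}
dp = DataProcessor(memory)
dp.buffer["taskA"] = [1, 2, 3]
dp.context_switch(outgoing="taskA", incoming="taskB")
print(dp.buffer, memory["taskA"])
# buffer now holds taskB's data; taskA's output persisted to memory
```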
  • Publication number: 20210319290
    Abstract: A neural processor includes one or more neural engine circuits and a planar engine circuit. The neural engine circuits can perform convolution operations of first input data with one or more kernels to generate a first output. The planar engine circuit receives second input data that corresponds to a version of the first input data. The planar engine circuit also receives third input data that includes fourth input data and fifth input data stored together in a dimension of the third input data. The planar engine circuit performs a first elementwise operation between a version of the second input data and a version of the fourth input data to generate intermediate data. The planar engine circuit performs a second elementwise operation between the intermediate data and a version of the fifth input data to generate a second output.
    Type: Application
    Filed: April 9, 2020
    Publication date: October 14, 2021
    Inventors: Christopher L. Mills, Kenneth W. Waters, Youchang Kim
  • Publication number: 20210271958
    Abstract: Embodiments relate to a neural processor circuit including one or more planar engine circuits that perform non-convolution operations in parallel with convolution operations performed by one or more neural engine circuits. The neural engine circuits perform the convolution operations on neural input data corresponding to one or more neural engine tasks to generate neural output data. The planar engine circuits perform non-convolution operations on planar input data corresponding to one or more planar engine tasks to generate planar output data. A data processor circuit in the neural processor circuit addresses data dependency between the one or more neural engine tasks and the one or more planar engine tasks by controlling reading of the neural output data as the planar input data by the planar engine circuits or reading of the planar output data as the neural input data by the neural engine circuits.
    Type: Application
    Filed: March 2, 2020
    Publication date: September 2, 2021
    Inventors: Christopher L. Mills, Kenneth W. Waters
  • Publication number: 20210241079
    Abstract: Embodiments relate to a neural processor that includes one or more neural engine circuits and planar engine circuits. The neural engine circuits can perform convolution operations of input data with one or more kernels to generate outputs. The planar engine circuit is coupled to the plurality of neural engine circuits. A planar engine circuit can be configured in multiple modes. In an elementwise mode, the planar engine circuit may combine two tensors by performing operations element by element. The planar engine circuit may support elementwise operations for two tensors of different sizes and ranks. The planar engine circuit may perform a broadcasting operation that duplicates one or more values across one or more channels so that the smaller tensor matches the size of the larger tensor.
    Type: Application
    Filed: February 4, 2020
    Publication date: August 5, 2021
    Inventors: Christopher L. Mills, Kenneth W. Waters, Youchang Kim
  • Publication number: 20210158135
    Abstract: Embodiments relate to a neural processor that includes one or more neural engine circuits and planar engine circuits. The neural engine circuits can perform convolution operations of input data with one or more kernels to generate outputs. The planar engine circuit is coupled to the plurality of neural engine circuits. A planar engine circuit can be configured in multiple modes. In a reduction mode, the planar engine circuit may process values arranged in one or more dimensions of the input to generate a reduced value. The reduced values from multiple input data may be accumulated. The planar engine circuit may program a filter circuit as a reduction tree to gradually reduce the data into a reduced value. The reduction operation reduces the size of one or more dimensions of a tensor.
    Type: Application
    Filed: November 26, 2019
    Publication date: May 27, 2021
    Inventors: Christopher L. Mills, Kenneth W. Waters, Youchang Kim
  • Publication number: 20210103803
    Abstract: Embodiments relate to a neural processor that includes a plurality of neural engine circuits and one or more planar engine circuits. The plurality of neural engine circuits can perform convolution operations of input data of the neural engine circuits with one or more kernels to generate outputs. The planar engine circuit is coupled to the plurality of neural engine circuits. The planar engine circuit generates an output from input data that corresponds to the output of the neural engine circuits or a version of input data of the neural processor. The planar engine circuit can be configured in multiple modes. In a pooling mode, the planar engine circuit reduces a spatial size of a version of the input data. In an elementwise mode, the planar engine circuit performs an elementwise operation on the input data. In a reduction mode, the planar engine circuit reduces the rank of a tensor.
    Type: Application
    Filed: October 8, 2019
    Publication date: April 8, 2021
    Inventors: Christopher L. Mills, Kenneth W. Waters, Youchang Kim
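The pooling mode named in publication 20210103803 reduces the spatial size of the input. As a concrete illustration (the kernel size and stride are arbitrary choices for this sketch, not values from the application), here is 2x2 max pooling with stride 2:

```python
# Minimal sketch (illustrative): a pooling mode that reduces spatial
# size; here, 2x2 max pooling with stride 2 over a height x width grid.

def max_pool_2x2(grid):
    """Reduce a grid by taking the max of each non-overlapping 2x2 tile."""
    return [
        [max(grid[r][c], grid[r][c + 1],
             grid[r + 1][c], grid[r + 1][c + 1])
         for c in range(0, len(grid[0]), 2)]
        for r in range(0, len(grid), 2)
    ]

grid = [[1, 2, 5, 6],
        [3, 4, 7, 8],
        [9, 1, 2, 3],
        [1, 1, 4, 4]]
print(max_pool_2x2(grid))  # [[4, 8], [9, 4]]
```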