Patents by Inventor James Michael Bodwin

James Michael Bodwin has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12001508
    Abstract: A plurality of chiplets may be used to multiply two matrices A and B. Matrix A may be decomposed into horizontal stripes and matrix B may be decomposed into vertical stripes. Each of the horizontal stripes may be multiplied by each of the vertical stripes to form the output matrix C. Specifically, horizontal stripes may be stored in a stationary, distributed manner across the chiplets, while the vertical stripes (or sub-vertical stripes) may be passed between respective pairs of the chiplets until each of the vertical stripes (or sub-vertical stripes) of matrix B has been received and processed by each of the chiplets. The vertical stripes may be passed along one or more paths that interconnect the chiplets. Similar techniques can be applied to an arrangement in which the vertical stripes are stationary and the horizontal stripes (or sub-horizontal stripes) are passed between respective pairs of the chiplets.
    Type: Grant
    Filed: October 23, 2023
    Date of Patent: June 4, 2024
    Assignee: Persimmons, Inc.
    Inventor: James Michael Bodwin
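    The stationary-stripe scheme summarized in the abstract above can be illustrated with a short simulation. The sketch below is an editorial illustration rather than code from the patent: it keeps one horizontal stripe of A resident on each simulated chiplet and rotates the vertical stripes of B so that every chiplet eventually processes every stripe. Names such as striped_matmul and num_chiplets are hypothetical.

```python
# Illustrative sketch (not from the patent): simulating the stationary-A /
# rotating-B striping scheme with NumPy. In real hardware the B stripes would
# travel over links between chiplets; here a modulo index stands in for that.
import numpy as np

def striped_matmul(A, B, num_chiplets=4):
    """Compute A @ B by striping A into row blocks (one per chiplet, stationary)
    and rotating column blocks of B around a ring of chiplets."""
    M, K = A.shape
    K2, N = B.shape
    assert K == K2 and M % num_chiplets == 0 and N % num_chiplets == 0

    # Horizontal stripes of A: chiplet i permanently holds A_stripes[i].
    A_stripes = np.split(A, num_chiplets, axis=0)
    # Vertical stripes of B: stripe j starts on chiplet j and is passed along.
    B_stripes = np.split(B, num_chiplets, axis=1)

    C = np.zeros((M, N), dtype=A.dtype)
    rows = M // num_chiplets
    cols = N // num_chiplets

    # After num_chiplets steps, every chiplet has seen every vertical stripe.
    for step in range(num_chiplets):
        for chiplet in range(num_chiplets):
            stripe_idx = (chiplet + step) % num_chiplets  # stripe currently resident
            C[chiplet * rows:(chiplet + 1) * rows,
              stripe_idx * cols:(stripe_idx + 1) * cols] = (
                A_stripes[chiplet] @ B_stripes[stripe_idx])
        # In hardware, each chiplet would now forward its B stripe to a neighbor.
    return C

A = np.random.rand(8, 6)
B = np.random.rand(6, 8)
assert np.allclose(striped_matmul(A, B), A @ B)
```

    In the patented arrangement the stripes travel along paths that interconnect the chiplets rather than being indexed from shared memory; the modulo indexing above simply stands in for that ring-style hand-off, and the roles of the horizontal and vertical stripes could be swapped as the abstract notes.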
  • Publication number: 20240143988
    Abstract: Dynamic data quantization may be applied to minimize the power consumption of a system that implements a convolutional neural network (CNN). Under such a quantization scheme, a quantized representation of a 3×3 array of m-bit activation values may include 9 n-bit mantissa values and one exponent shared between the n-bit mantissa values (n<m); and a quantized representation of a 3×3 kernel with p-bit parameter values may include 9 q-bit mantissa values and one exponent shared between the q-bit mantissa values (q<p). Convolution of the kernel with the activation data may include computing a dot product of the 9 n-bit mantissa values with the 9 q-bit mantissa values, and summing the shared exponents. In a CNN with multiple kernels, multiple computing units (each corresponding to one of the kernels) may receive the quantized representation of the 3×3 array of m-bit activation values from the same quantization-alignment module.
    Type: Application
    Filed: January 11, 2024
    Publication date: May 2, 2024
    Inventors: Jian hui Huang, James Michael Bodwin, Pradeep R. Joginipally, Shabarivas Abhiram, Gary S. Goldman, Martin Stefan Patz, Eugene M. Feinberg, Berend Ozceri
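    The shared-exponent quantization described in the abstract above (9 mantissa values plus one shared exponent per 3×3 block, with convolution reduced to an integer dot product and an exponent sum) can be sketched as follows. This is an editorial illustration under assumed bit widths and rounding behavior, not the patented implementation; quantize_shared_exponent and the chosen mantissa widths are hypothetical.

```python
# Illustrative sketch (not from the patent): block-floating-point style
# quantization of a 3x3 patch into 9 signed mantissas plus one shared exponent,
# and a convolution step as an integer dot product plus exponent addition.
# Bit widths and rounding choices here are assumptions for illustration only.
import numpy as np

def quantize_shared_exponent(block, mantissa_bits):
    """Return (mantissas, exponent) such that block ~= mantissas * 2**exponent."""
    max_abs = np.max(np.abs(block))
    if max_abs == 0.0:
        return np.zeros(block.shape, dtype=np.int32), 0
    # Shared exponent chosen so the largest value fits in the signed mantissa range.
    exponent = int(np.ceil(np.log2(max_abs))) - (mantissa_bits - 1)
    mantissas = np.clip(np.round(block / 2.0 ** exponent),
                        -(2 ** (mantissa_bits - 1)),
                        2 ** (mantissa_bits - 1) - 1).astype(np.int32)
    return mantissas, exponent

# 3x3 activation patch (m-bit values) and 3x3 kernel (p-bit parameters).
activations = np.random.uniform(-4, 4, (3, 3)).astype(np.float32)
kernel = np.random.uniform(-1, 1, (3, 3)).astype(np.float32)

act_m, act_e = quantize_shared_exponent(activations, mantissa_bits=4)  # n = 4 < m
ker_m, ker_e = quantize_shared_exponent(kernel, mantissa_bits=4)       # q = 4 < p

# Convolution at this position: integer dot product of the 9 mantissa pairs,
# scaled by 2 raised to the sum of the two shared exponents.
dot = int(np.sum(act_m.astype(np.int64) * ker_m))
approx = dot * 2.0 ** (act_e + ker_e)
exact = float(np.sum(activations * kernel))
print(f"exact={exact:.4f}  quantized={approx:.4f}")
```

    In a layer with multiple kernels, the same quantized activation mantissas and shared exponent could be broadcast from a single quantization-alignment module to several such dot-product units, one per kernel, as the abstract describes.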
  • Patent number: 11915126
    Abstract: Dynamic data quantization may be applied to minimize the power consumption of a system that implements a convolutional neural network (CNN). Under such a quantization scheme, a quantized representation of a 3×3 array of m-bit activation values may include 9 n-bit mantissa values and one exponent shared between the n-bit mantissa values (n<m); and a quantized representation of a 3×3 kernel with p-bit parameter values may include 9 q-bit mantissa values and one exponent shared between the q-bit mantissa values (q<p). Convolution of the kernel with the activation data may include computing a dot product of the 9 n-bit mantissa values with the 9 q-bit mantissa values, and summing the shared exponents. In a CNN with multiple kernels, multiple computing units (each corresponding to one of the kernels) may receive the quantized representation of the 3×3 array of m-bit activation values from the same quantization-alignment module.
    Type: Grant
    Filed: September 4, 2020
    Date of Patent: February 27, 2024
    Assignee: Recogni Inc.
    Inventors: Jian hui Huang, James Michael Bodwin, Pradeep R. Joginipally, Shabarivas Abhiram, Gary S. Goldman, Martin Stefan Patz, Eugene M. Feinberg, Berend Ozceri
  • Publication number: 20220076104
    Abstract: Dynamic data quantization may be applied to minimize the power consumption of a system that implements a convolutional neural network (CNN). Under such a quantization scheme, a quantized representation of a 3×3 array of m-bit activation values may include 9 n-bit mantissa values and one exponent shared between the n-bit mantissa values (n<m); and a quantized representation of a 3×3 kernel with p-bit parameter values may include 9 q-bit mantissa values and one exponent shared between the q-bit mantissa values (q<p). Convolution of the kernel with the activation data may include computing a dot product of the 9 n-bit mantissa values with the 9 q-bit mantissa values, and summing the shared exponents. In a CNN with multiple kernels, multiple computing units (each corresponding to one of the kernels) may receive the quantized representation of the 3×3 array of m-bit activation values from the same quantization-alignment module.
    Type: Application
    Filed: September 4, 2020
    Publication date: March 10, 2022
    Inventors: Jian hui Huang, James Michael Bodwin, Pradeep R. Joginipally, Shabarivas Abhiram, Gary S. Goldman, Martin Stefan Patz, Eugene M. Feinberg, Berend Ozceri