Patents by Inventor Edwin James

Edwin James has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250131237
    Abstract: Techniques in wavelet filtering for advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element comprises a compute element to execute programmed instructions using the data and a router to route the wavelets in accordance with virtual channel specifiers. Each processing element is enabled to perform local filtering of wavelets received at the processing element, selectively, conditionally, and/or optionally discarding zero or more of the received wavelets, thereby preventing further processing of the discarded wavelets. The wavelet filtering is performed by one or more configurable wavelet filters operable in various modes, such as counter, sparse, and range modes. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: December 24, 2024
    Publication date: April 24, 2025
    Inventors: Michael Morrison, Michael Edwin James, Sean Lie, Srikanth Arekapudi, Gary R. Lauterbach
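
The abstract above (publication 20250131237) describes per-processing-element wavelet filters operable in counter, sparse, and range modes. The Python sketch below is a minimal, hypothetical illustration of that idea, not the patented implementation; the `Wavelet` fields, `FilterMode` names, and mode semantics are assumptions made for illustration only.

```python
from dataclasses import dataclass
from enum import Enum, auto


class FilterMode(Enum):
    COUNTER = auto()  # keep every Nth wavelet
    SPARSE = auto()   # discard wavelets whose payload is zero
    RANGE = auto()    # keep wavelets whose index falls in [lo, hi)


@dataclass
class Wavelet:
    virtual_channel: int
    index: int
    payload: float


class WaveletFilter:
    """Decides, per incoming wavelet, whether to forward it to the
    compute element or discard it before any further processing."""

    def __init__(self, mode, every_nth=1, lo=0, hi=2**16):
        self.mode = mode
        self.every_nth = every_nth
        self.lo, self.hi = lo, hi
        self._count = 0

    def accept(self, w: Wavelet) -> bool:
        if self.mode is FilterMode.COUNTER:
            self._count += 1
            return self._count % self.every_nth == 0
        if self.mode is FilterMode.SPARSE:
            return w.payload != 0.0
        if self.mode is FilterMode.RANGE:
            return self.lo <= w.index < self.hi
        return True


# Usage: a sparse-mode filter drops zero-valued activations so the
# compute element never spends cycles on them.
f = WaveletFilter(FilterMode.SPARSE)
kept = [w for w in (Wavelet(3, i, v) for i, v in enumerate([0.0, 1.5, 0.0, 2.0]))
        if f.accept(w)]
```
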
  • Publication number: 20250110808
    Abstract: Techniques in placement of compute and memory for accelerated deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element comprises a compute element to execute programmed instructions using the data and a router to route the wavelets. The routing is in accordance with virtual channel specifiers of the wavelets and controlled by routing configuration information of the router. A software stack determines placement of compute resources and memory resources based on a description of a neural network. The determined placement is used to configure the routers including usage of the respective colors. The determined placement is used to configure the compute elements including the respective programmed instructions each is configured to execute. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: December 12, 2024
    Publication date: April 3, 2025
    Inventors: Vladimir Kibardin, Michael Edwin James, Michael Morrison, Sean Lie, Gary R. Lauterbach, Stanislav Funiak
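
Publication 20250110808 above describes a software stack that maps a neural network's compute and memory onto an array of processing elements before configuring routers and compute elements. The sketch below is a deliberately simplified, hypothetical greedy placement over a 2D grid; the cost model, function name, and data structures are invented for illustration and are not the patented placement algorithm.

```python
# Minimal sketch: assign each layer a contiguous block of columns on a
# grid_w x grid_h array of processing elements, proportional to its
# relative cost. The cost model here is an illustrative assumption.

def place_layers(layer_costs, grid_w, grid_h):
    """Return {layer_name: list of (col, row) PE coordinates}."""
    total = sum(layer_costs.values())
    placement, next_col = {}, 0
    for name, cost in layer_costs.items():
        n_cols = max(1, round(grid_w * cost / total))
        n_cols = min(n_cols, grid_w - next_col)  # don't run off the grid
        cols = range(next_col, next_col + n_cols)
        placement[name] = [(c, r) for c in cols for r in range(grid_h)]
        next_col += n_cols
    return placement


# Example: three layers with relative compute costs placed on a 12x4 array.
layers = {"conv1": 2.0, "conv2": 5.0, "fc": 1.0}
plan = place_layers(layers, grid_w=12, grid_h=4)
for name, pes in plan.items():
    print(name, "->", len(pes), "processing elements")
```
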
  • Publication number: 20250080477
    Abstract: Techniques in dynamic routing for advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element comprises a compute element enabled to execute programmed instructions using the data and a router enabled to route the wavelets via static routing, dynamic routing, or both. The routing is in accordance with a respective virtual channel specifier of each of the wavelets and controlled by routing configuration information of the router. The static techniques enable statically specifiable neuron connections. The dynamic techniques enable information from the wavelets to alter the routing configuration information during neural network processing. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: November 12, 2024
    Publication date: March 6, 2025
    Inventors: Michael Morrison, Michael Edwin James, Sean Lie, Srikanth Arekapudi, Gary R. Lauterbach, Vijay Anand Reddy Korthikanti
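
Publication 20250080477 above describes routers that forward wavelets per virtual channel using routing configuration that information carried by the wavelets can alter at run time. The sketch below is a hypothetical illustration of that static-plus-dynamic idea; the table layout and the "command wavelet" convention are assumptions, not the patented mechanism.

```python
# Minimal sketch of a per-virtual-channel routing table that ordinary
# wavelets are forwarded by, and that special "command" wavelets may
# rewrite at run time (the dynamic part). All details are illustrative.

class Router:
    def __init__(self, static_routes):
        # static_routes: {virtual_channel: set of output ports}
        self.routes = {vc: set(ports) for vc, ports in static_routes.items()}

    def route(self, wavelet):
        vc = wavelet["vc"]
        if wavelet.get("is_command"):
            # Dynamic routing: the wavelet carries new routing configuration.
            self.routes[vc] = set(wavelet["new_ports"])
            return []
        return sorted(self.routes.get(vc, set()))


r = Router({0: {"east"}, 1: {"local"}})
print(r.route({"vc": 0, "data": 1.0}))                        # ['east']
r.route({"vc": 0, "is_command": True, "new_ports": ["north", "local"]})
print(r.route({"vc": 0, "data": 2.0}))                        # ['local', 'north']
```
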
  • Patent number: 12217147
    Abstract: Techniques in wavelet filtering for advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element comprises a compute element to execute programmed instructions using the data and a router to route the wavelets in accordance with virtual channel specifiers. Each processing element is enabled to perform local filtering of wavelets received at the processing element, selectively, conditionally, and/or optionally discarding zero or more of the received wavelets, thereby preventing further processing of the discarded wavelets. The wavelet filtering is performed by one or more configurable wavelet filters operable in various modes, such as counter, sparse, and range modes.
    Type: Grant
    Filed: October 15, 2020
    Date of Patent: February 4, 2025
    Assignee: Cerebras Systems Inc.
    Inventors: Michael Morrison, Michael Edwin James, Sean Lie, Srikanth Arekapudi, Gary R. Lauterbach
  • Patent number: 12204954
    Abstract: Techniques in placement of compute and memory for accelerated deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element comprises a compute element to execute programmed instructions using the data and a router to route the wavelets. The routing is in accordance with virtual channel specifiers of the wavelets and controlled by routing configuration information of the router. A software stack determines placement of compute resources and memory resources based on a description of a neural network. The determined placement is used to configure the routers including usage of the respective colors. The determined placement is used to configure the compute elements including the respective programmed instructions each is configured to execute.
    Type: Grant
    Filed: October 29, 2020
    Date of Patent: January 21, 2025
    Assignee: Cerebras Systems Inc.
    Inventors: Vladimir Kibardin, Michael Edwin James, Michael Morrison, Sean Lie, Gary R. Lauterbach, Stanislav Funiak
  • Patent number: 12177133
    Abstract: Techniques in dynamic routing for advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element comprises a compute element enabled to execute programmed instructions using the data and a router enabled to route the wavelets via static routing, dynamic routing, or both. The routing is in accordance with a respective virtual channel specifier of each of the wavelets and controlled by routing configuration information of the router. The static techniques enable statically specifiable neuron connections. The dynamic techniques enable information from the wavelets to alter the routing configuration information during neural network processing.
    Type: Grant
    Filed: October 14, 2020
    Date of Patent: December 24, 2024
    Assignee: Cerebras Systems Inc.
    Inventors: Michael Morrison, Michael Edwin James, Sean Lie, Srikanth Arekapudi, Gary R. Lauterbach, Vijay Anand Reddy Korthikanti
  • Patent number: 12169771
    Abstract: Techniques in wavelet filtering for advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element comprises a compute element to execute programmed instructions using the data and a router to route the wavelets in accordance with virtual channel specifiers. Each processing element is enabled to perform local filtering of wavelets received at the processing element, selectively, conditionally, and/or optionally discarding zero or more of the received wavelets, thereby preventing further processing of the discarded wavelets. The wavelet filtering is performed by one or more configurable wavelet filters operable in various modes, such as counter, sparse, and range modes.
    Type: Grant
    Filed: October 15, 2020
    Date of Patent: December 17, 2024
    Assignee: Cerebras Systems Inc.
    Inventors: Michael Morrison, Michael Edwin James, Sean Lie, Srikanth Arekapudi, Gary R. Lauterbach
  • Publication number: 20240370522
    Abstract: Embodiments of the present disclosure are directed to methods and systems for computing depth-wise convolutions on a two-dimensional array (or grid) of compute cores. In a first aspect of the disclosure, a method for running the depth-wise convolution on a two-dimensional array of compute cores comprises: (a) partitioning the input image among the compute cores, such that the compute core array collectively stores the whole image; (b) allocating a memory buffer (an accumulator) that holds the subimage plus some frame (or padding) around the subimage; (c) receiving convolution filter weights, multiplying the input subimage by each weight, and adding the result to the accumulator at an offset; and (d) exchanging the information from the subimage frame with the neighboring compute cores. The advantage of the method is that the depth-wise convolution operation can be parallelized across a two-dimensional array of compute cores. (An illustrative sketch of steps (a) through (d) follows this entry.)
    Type: Application
    Filed: May 1, 2023
    Publication date: November 7, 2024
    Applicant: Cerebras Systems Inc.
    Inventors: Vladimir V. Kibardin, Michael Edwin James
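
Publication 20240370522 above spells out steps (a) through (d) of a halo-exchange scheme for depth-wise convolution across a grid of cores. The NumPy sketch below mimics those steps on a single machine, standing in for the neighbor exchange with a padded frame; it is an illustration of the general scheme under assumed tile sizes, not the patented implementation.

```python
import numpy as np

# Steps (a)-(d) of the abstract, emulated on one machine:
# (a) partition the image into per-core subimages,
# (b) give each core an accumulator sized to the subimage plus a frame,
# (c) for each filter weight, multiply-and-accumulate at an offset,
# (d) the frame (halo) stands in for data exchanged with neighbor cores.

def depthwise_conv_tiled(image, kernel, tiles=(2, 2)):
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))           # global halo
    th, tw = image.shape[0] // tiles[0], image.shape[1] // tiles[1]
    out = np.zeros_like(image, dtype=float)
    for ti in range(tiles[0]):
        for tj in range(tiles[1]):
            r0, c0 = ti * th, tj * tw
            # (a)+(b): subimage plus its frame, as one core would hold it
            sub = padded[r0:r0 + th + 2 * ph, c0:c0 + tw + 2 * pw]
            acc = np.zeros((th, tw))
            # (c): one multiply-accumulate per filter weight, at an offset
            for di in range(kh):
                for dj in range(kw):
                    acc += kernel[di, dj] * sub[di:di + th, dj:dj + tw]
            out[r0:r0 + th, c0:c0 + tw] = acc
    return out


img = np.arange(64, dtype=float).reshape(8, 8)
k = np.ones((3, 3)) / 9.0                                   # 3x3 box filter
print(depthwise_conv_tiled(img, k).shape)                   # (8, 8)
```
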
  • Patent number: 11987003
    Abstract: A method of making a three-dimensional object, including the steps of: (a) providing a carrier platform; (b) producing the three-dimensional object adhered to the carrier platform; (c) immersing the object in a wash liquid with the object remaining adhered to the carrier platform; (d) agitating the object in the wash liquid and/or the wash liquid with the object immersed therein to at least partially remove residual resin from the object; (e) separating the object from the wash liquid, with the object remaining adhered to the carrier platform; (f) agitating the object to at least partially remove residual wash liquid; and (g) repeating steps (c) through (f) at least once to remove additional polymerizable resin, wherein steps (c) through (f) are carried out in the same vessel, the immersing step (c) includes filling the vessel with the wash liquid, and the separating step (e) includes draining the wash liquid from the vessel.
    Type: Grant
    Filed: September 2, 2022
    Date of Patent: May 21, 2024
    Assignee: Carbon, Inc.
    Inventors: Courtney F. Converse, W. Ryan Powell, Sherwood Forlee, David Eliot Scheinman, Scott Heines, Edwin James Sabathia, Jr.
  • Patent number: 11934945
    Abstract: Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency, such as accuracy of learning, accuracy of prediction, speed of learning, performance of learning, and energy efficiency of learning. An array of processing elements performs flow-based computations on wavelets of data. Each processing element has a respective compute element and a respective routing element. Each compute element has processing resources and memory resources. Each router enables communication via wavelets with at least nearest neighbors in a 2D mesh. Stochastic gradient descent, mini-batch gradient descent, and continuous propagation gradient descent are techniques usable to train weights of a neural network modeled by the processing elements. Reverse checkpoint is usable to reduce memory usage during the training. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: February 23, 2018
    Date of Patent: March 19, 2024
    Assignee: Cerebras Systems Inc.
    Inventors: Sean Lie, Michael Morrison, Michael Edwin James, Gary R. Lauterbach, Srikanth Arekapudi
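
Patent 11934945 above names stochastic, mini-batch, and continuous propagation gradient descent as techniques usable to train the modeled network. The sketch below shows only plain mini-batch gradient descent on a toy least-squares problem as a point of reference (batch size 1 reduces it to the stochastic case); it does not depict continuous propagation or reverse checkpointing, and every name and constant in it is illustrative.

```python
import numpy as np

# Plain mini-batch gradient descent on a toy least-squares problem.
# Purely illustrative; not the patented continuous-propagation scheme.

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_w + 0.01 * rng.normal(size=256)

w = np.zeros(4)
lr, batch_size = 0.1, 32
for epoch in range(20):
    order = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        # Gradient of the mean squared error over the mini-batch.
        grad = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
        w -= lr * grad

print(np.round(w, 2))   # close to [ 1.  -2.   0.5  3. ]
```
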
  • Patent number: 11853867
    Abstract: Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements performs flow-based computations on wavelets of data. Each processing element has a compute element and a routing element. Each router enables communication via wavelets with at least nearest neighbors in a 2D mesh. Routing is controlled by virtual channel specifiers in each wavelet and routing configuration information in each router. Execution of an activate instruction or completion of a fabric vector operation activates one of the virtual channels. A virtual channel is selected from a pool comprising previously activated virtual channels and virtual channels associated with previously received wavelets. A task corresponding to the selected virtual channel is activated by executing instructions corresponding to the selected virtual channel. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: October 19, 2021
    Date of Patent: December 26, 2023
    Assignee: Cerebras Systems Inc.
    Inventors: Sean Lie, Michael Morrison, Srikanth Arekapudi, Michael Edwin James, Gary R. Lauterbach
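
Patent 11853867 above describes activating virtual channels (via an activate instruction or a completed fabric vector operation), selecting one from a pool of activated and wavelet-associated channels, and running the task bound to it. The sketch below is a hypothetical scheduler loop illustrating that selection; the preference rule and data structures are assumptions, not the patented selection logic.

```python
from collections import deque

# Hypothetical illustration of task activation: virtual channels become
# ready either because an "activate" instruction ran or because a wavelet
# arrived on them; a scheduler repeatedly picks a ready channel and runs
# the instruction sequence (task) bound to it.

class TaskScheduler:
    def __init__(self, tasks):
        self.tasks = tasks              # {virtual_channel: callable}
        self.activated = deque()        # channels activated by instructions
        self.with_wavelets = deque()    # channels with queued wavelets

    def activate(self, vc):
        self.activated.append(vc)

    def wavelet_arrived(self, vc):
        self.with_wavelets.append(vc)

    def step(self):
        # Selection from the combined pool; plain FIFO preference for
        # explicitly activated channels (an arbitrary illustrative rule).
        pool = self.activated or self.with_wavelets
        if not pool:
            return False
        vc = pool.popleft()
        self.tasks[vc]()                # "execute instructions" for the task
        return True


sched = TaskScheduler({0: lambda: print("run task on vc 0"),
                       1: lambda: print("run task on vc 1")})
sched.wavelet_arrived(1)
sched.activate(0)
while sched.step():
    pass
```
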
  • Patent number: 11727257
    Abstract: Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements performs flow-based computations on wavelets of data. Each processing element has a respective compute element and a respective routing element. Instructions executed by the compute element include operand specifiers, some specifying a data structure register storing a data structure descriptor describing an operand as a fabric vector or a memory vector. The data structure descriptor further describes the memory vector as one of a one-dimensional vector, a four-dimensional vector, or a circular buffer vector. Optionally, the data structure descriptor specifies an extended data structure register storing an extended data structure descriptor. The extended data structure descriptor specifies parameters relating to a four-dimensional vector or a circular buffer vector. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: January 24, 2022
    Date of Patent: August 15, 2023
    Assignee: Cerebras Systems Inc.
    Inventors: Sean Lie, Michael Morrison, Srikanth Arekapudi, Gary R. Lauterbach, Michael Edwin James
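
Patent 11727257 above describes operand specifiers that name data structure registers holding descriptors for fabric vectors or memory vectors (one-dimensional, four-dimensional, or circular buffer), with an optional extended descriptor for the latter two. The dataclasses below are a hypothetical paraphrase of that layout; the field names, widths, and register-file representation are invented for illustration.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class MemoryVectorKind(Enum):
    ONE_D = auto()
    FOUR_D = auto()
    CIRCULAR_BUFFER = auto()


@dataclass
class ExtendedDescriptor:
    # Extra parameters a 4D vector or circular buffer needs (illustrative).
    dims: tuple = (1, 1, 1, 1)
    strides: tuple = (1, 1, 1, 1)
    wrap_length: int = 0


@dataclass
class DataStructureDescriptor:
    is_fabric_vector: bool                   # fabric vector vs memory vector
    virtual_channel: Optional[int] = None    # used when fabric vector
    kind: Optional[MemoryVectorKind] = None  # used when memory vector
    base_address: int = 0
    length: int = 0
    extended: Optional[ExtendedDescriptor] = None


# A register file of descriptors; an operand specifier in an instruction
# would simply be an index into this file.
dsr_file = [
    DataStructureDescriptor(is_fabric_vector=True, virtual_channel=5, length=16),
    DataStructureDescriptor(is_fabric_vector=False,
                            kind=MemoryVectorKind.CIRCULAR_BUFFER,
                            base_address=0x4000, length=64,
                            extended=ExtendedDescriptor(wrap_length=64)),
]
```
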
  • Patent number: 11727254
    Abstract: Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements performs flow-based computations on wavelets of data. Each processing element has a compute element and a routing element. Each compute element has memory. Each router enables communication via wavelets with nearest neighbors in a 2D mesh. A compute element receives a wavelet. If a control specifier of the wavelet is a first value, then instructions are read from the memory of the compute element in accordance with an index specifier of the wavelet. If the control specifier is a second value, then instructions are read from the memory of the compute element in accordance with a virtual channel specifier of the wavelet. Then the compute element initiates execution of the instructions. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: August 27, 2020
    Date of Patent: August 15, 2023
    Assignee: Cerebras Systems Inc.
    Inventors: Sean Lie, Gary R. Lauterbach, Michael Edwin James, Michael Morrison, Srikanth Arekapudi
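
Patent 11727254 above describes a compute element that fetches instructions from one of two places depending on a wavelet's control specifier: by the wavelet's index specifier in one case, or by its virtual channel specifier in the other. The function below is a toy illustration of that dispatch; the table names, specifier encoding, and mock instructions are assumptions.

```python
# Toy illustration of the dispatch described in the abstract: a wavelet's
# control specifier picks which table supplies the instruction sequence.
# Table names and specifier encoding are illustrative assumptions.

instructions_by_index = {7: ["load r1", "add r1, r2", "store r1"]}
instructions_by_vc = {3: ["recv r3", "mul r3, r4"]}


def fetch_and_execute(wavelet):
    if wavelet["control"] == 0:          # "first value": use the index specifier
        program = instructions_by_index[wavelet["index"]]
    else:                                # "second value": use the virtual channel
        program = instructions_by_vc[wavelet["vc"]]
    for insn in program:                 # the compute element initiates execution
        print("execute:", insn)


fetch_and_execute({"control": 0, "index": 7, "vc": 3})
fetch_and_execute({"control": 1, "index": 7, "vc": 3})
```
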
  • Publication number: 20230125522
    Abstract: Techniques in optimized placement for efficiency for accelerated deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element comprises a compute element to execute programmed instructions using the data and a router to route the wavelets. The routing is in accordance with virtual channel specifiers of the wavelets and controlled by routing configuration information of the router. A software stack determines optimized placement based on a description of a neural network. The determined placement is used to configure the routers including usage of the respective colors. The determined placement is used to configure the compute elements including the respective programmed instructions each is configured to execute.
    Type: Application
    Filed: October 30, 2020
    Publication date: April 27, 2023
    Inventors: Vladimir Kibardin, Michael Edwin James, Michael Morrison, Sean Lie, Gary R. Lauterbach, Stanislav Funiak
  • Publication number: 20230105188
    Abstract: Described herein are devices, systems, and methods to aid in the manipulation of cells. The devices, methods, and systems disclosed herein can be applied towards, for example, automation of the in vitro fertilization process.
    Type: Application
    Filed: August 12, 2022
    Publication date: April 6, 2023
    Inventors: Jose Antonio Horcajadas Almansa, Tamara Martin Villalba, Santiago Munne, Hannah Victoria Hare, Edwin James Stone, Michael Ian Walker, Jonathan Patrick Casey, Peter Lee Crossley, Alan James Judd, Gary Keith Jepps
  • Publication number: 20230071424
    Abstract: Techniques in placement of compute and memory for accelerated deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element comprises a compute element to execute programmed instructions using the data and a router to route the wavelets. The routing is in accordance with virtual channel specifiers of the wavelets and controlled by routing configuration information of the router. A software stack determines placement of compute resources and memory resources based on a description of a neural network. The determined placement is used to configure the routers including usage of the respective colors. The determined placement is used to configure the compute elements including the respective programmed instructions each is configured to execute.
    Type: Application
    Filed: October 29, 2020
    Publication date: March 9, 2023
    Inventors: Vladimir Kibardin, Michael Edwin James, Michael Morrison, Sean Lie, Gary R. Lauterbach, Stanislav Funiak
  • Publication number: 20230069536
    Abstract: Techniques in dynamic routing for advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element comprises a compute element enabled to execute programmed instructions using the data and a router enabled to route the wavelets via static routing, dynamic routing, or both. The routing is in accordance with a respective virtual channel specifier of each of the wavelets and controlled by routing configuration information of the router. The static techniques enable statically specifiable neuron connections. The dynamic techniques enable information from the wavelets to alter the routing configuration information during neural network processing.
    Type: Application
    Filed: October 14, 2020
    Publication date: March 2, 2023
    Inventors: Michael Morrison, Michael Edwin James, Sean Lie, Srikanth Arekapudi, Gary R. Lauterbach, Vijay Anand Reddy Korthikanti
  • Patent number: 11580394
    Abstract: Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency, such as accuracy of learning, accuracy of prediction, speed of learning, performance of learning, and energy efficiency of learning. An array of processing elements performs flow-based computations on wavelets of data. Each processing element has a respective compute element and a respective routing element. Each compute element has processing resources and memory resources. Each router enables communication via wavelets with at least nearest neighbors in a 2D mesh. Stochastic gradient descent, mini-batch gradient descent, and continuous propagation gradient descent are techniques usable to train weights of a neural network modeled by the processing elements. Reverse checkpoint is usable to reduce memory usage during the training.
    Type: Grant
    Filed: June 24, 2020
    Date of Patent: February 14, 2023
    Assignee: Cerebras Systems Inc.
    Inventors: Sean Lie, Michael Morrison, Michael Edwin James, Gary R. Lauterbach, Srikanth Arekapudi
  • Publication number: 20220410480
    Abstract: A method of making a three-dimensional object from a polymerizable resin, the method comprising the steps of: (a) providing a carrier platform on which the three-dimensional object can be formed; (b) producing the three-dimensional object adhered to the carrier platform from the polymerizable resin by stereolithography, the object having residual resin on the surface thereof; (c) immersing the object in a wash liquid with the object remaining adhered to the carrier platform; (d) agitating (i) the object in said wash liquid (e.g., by spinning), (ii) the wash liquid with the object immersed therein (e.g., by sonication of the wash liquid), or (iii) both the object in the wash liquid and the wash liquid with the object immersed therein, to at least partially remove residual resin from the surface of the object; (e) separating the object from the wash liquid, with the object remaining adhered to the carrier platform, the object having residual wash liquid on the surface thereof; (f) agitating the object (e.g.
    Type: Application
    Filed: September 2, 2022
    Publication date: December 29, 2022
    Inventors: Courtney F. Converse, W. Ryan Powell, Sherwood Forlee, David Eliot Scheinman, Scott Heines, Edwin James Sabathia, Jr.
  • Publication number: 20220398443
    Abstract: Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements performs flow-based computations on wavelets of data. Each processing element has a respective compute element and a respective routing element. Instructions executed by the compute element include operand specifiers, some specifying a data structure register storing a data structure descriptor describing an operand as a fabric vector or a memory vector. The data structure descriptor further describes the memory vector as one of a one-dimensional vector, a four-dimensional vector, or a circular buffer vector. Optionally, the data structure descriptor specifies an extended data structure register storing an extended data structure descriptor. The extended data structure descriptor specifies parameters relating to a four-dimensional vector or a circular buffer vector.
    Type: Application
    Filed: January 24, 2022
    Publication date: December 15, 2022
    Inventors: Sean Lie, Michael Morrison, Srikanth Arekapudi, Gary R. Lauterbach, Michael Edwin James