Patents by Inventor Edwin James
Edwin James has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11979489
Abstract: A database stores a document as a plurality of encrypted records, where each record is indicative of an incremental change to the state of the document, and encrypted using a document key. The document key is stored with encryption decryptable using a group key, and the group key is stored with encryption decryptable using a first access key. In response to a request to rotate from the first access key to a second access key, the database decrypts the group key using the first access key, and stores a group key re-encrypted with the second access key.
Type: Grant
Filed: May 10, 2022
Date of Patent: May 7, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Edwin Robbins, Bala Murali Krishna Ummaneni, Carr James Onstott, Thomas Barton, John Richter, Rong Xiao, Caroline Gordon, Shayna Weinstein
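The rotation scheme in this abstract can be sketched as envelope encryption with two wrapping layers. The sketch below is illustrative only: the class and function names are invented, and a toy SHA-256-based XOR keystream stands in for whatever cipher the patent actually contemplates. The point it shows is that rotating the access key touches only the outermost wrapper; the encrypted records and the wrapped document key never change.

```python
# Illustrative sketch (hypothetical names): rotating the outer access key
# re-wraps only the group key; document records stay untouched.
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    # Derive n pseudorandom bytes from the key (toy construction, not a real cipher).
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR with a key-derived keystream; symmetric, so decrypt == encrypt.
    return bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))

decrypt = encrypt

class DocumentStore:
    def __init__(self, access_key: bytes, group_key: bytes, doc_key: bytes):
        # Envelope encryption: the document key is wrapped by the group key,
        # and the group key is wrapped by the access key.
        self.wrapped_doc_key = encrypt(group_key, doc_key)
        self.wrapped_group_key = encrypt(access_key, group_key)

    def rotate_access_key(self, old_key: bytes, new_key: bytes) -> None:
        # Decrypt the group key with the first access key, then store it
        # re-encrypted with the second access key (per the abstract).
        group_key = decrypt(old_key, self.wrapped_group_key)
        self.wrapped_group_key = encrypt(new_key, group_key)
```

After rotation, the document key is still reachable by unwrapping the group key with the new access key, which is why no record needs re-encryption.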
-
Patent number: 11934945
Abstract: Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency, such as accuracy of learning, accuracy of prediction, speed of learning, performance of learning, and energy efficiency of learning. An array of processing elements performs flow-based computations on wavelets of data. Each processing element has a respective compute element and a respective routing element. Each compute element has processing resources and memory resources. Each router enables communication via wavelets with at least nearest neighbors in a 2D mesh. Stochastic gradient descent, mini-batch gradient descent, and continuous propagation gradient descent are techniques usable to train weights of a neural network modeled by the processing elements. Reverse checkpoint is usable to reduce memory usage during the training.
Type: Grant
Filed: February 23, 2018
Date of Patent: March 19, 2024
Assignee: Cerebras Systems Inc.
Inventors: Sean Lie, Michael Morrison, Michael Edwin James, Gary R. Lauterbach, Srikanth Arekapudi
-
Patent number: 11853867
Abstract: Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements performs flow-based computations on wavelets of data. Each processing element has a compute element and a routing element. Each router enables communication via wavelets with at least nearest neighbors in a 2D mesh. Routing is controlled by virtual channel specifiers in each wavelet and routing configuration information in each router. Execution of an activate instruction or completion of a fabric vector operation activates one of the virtual channels. A virtual channel is selected from a pool comprising previously activated virtual channels and virtual channels associated with previously received wavelets. A task corresponding to the selected virtual channel is activated by executing instructions corresponding to the selected virtual channel.
Type: Grant
Filed: October 19, 2021
Date of Patent: December 26, 2023
Assignee: Cerebras Systems Inc.
Inventors: Sean Lie, Michael Morrison, Srikanth Arekapudi, Michael Edwin James, Gary R. Lauterbach
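The task-activation scheme this abstract describes can be sketched as a small scheduler. Everything below is a hypothetical model, not the patented hardware: virtual channels enter a ready pool either when an activate instruction (or fabric-vector completion) names them or when a wavelet arrives on them, and the compute element then selects a channel from the pool and runs the task bound to it.

```python
# Minimal sketch (illustrative names) of selecting a virtual channel from a
# pool of activated channels and running its associated task.
from collections import deque

class ComputeElement:
    def __init__(self, tasks):
        self.tasks = tasks    # virtual channel -> task (a callable)
        self.ready = deque()  # pool of activated virtual channels

    def activate(self, vc):
        # Models an activate instruction or fabric-vector completion.
        if vc not in self.ready:
            self.ready.append(vc)

    def receive_wavelet(self, vc, payload):
        # A received wavelet also makes its virtual channel selectable.
        self.activate(vc)

    def step(self):
        # Select one channel from the pool and execute its task.
        if not self.ready:
            return None
        vc = self.ready.popleft()
        return self.tasks[vc](vc)
```

The FIFO pool here is one arbitrary selection policy; the abstract only says a channel is selected from the pool, not how.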
-
Patent number: 11727254
Abstract: Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements performs flow-based computations on wavelets of data. Each processing element has a compute element and a routing element. Each compute element has memory. Each router enables communication via wavelets with nearest neighbors in a 2D mesh. A compute element receives a wavelet. If a control specifier of the wavelet is a first value, then instructions are read from the memory of the compute element in accordance with an index specifier of the wavelet. If the control specifier is a second value, then instructions are read from the memory of the compute element in accordance with a virtual channel specifier of the wavelet. Then the compute element initiates execution of the instructions.
Type: Grant
Filed: August 27, 2020
Date of Patent: August 15, 2023
Assignee: Cerebras Systems Inc.
Inventors: Sean Lie, Gary R. Lauterbach, Michael Edwin James, Michael Morrison, Srikanth Arekapudi
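The two-way instruction-fetch selection in this abstract reduces to a simple dispatch rule, sketched below with invented names and a dictionary standing in for a per-virtual-channel table: the wavelet's control specifier chooses whether the fetch address comes from the wavelet's own index specifier or from state associated with its virtual channel.

```python
# Illustrative sketch: choosing an instruction-fetch address from a wavelet's
# control specifier (all names and encodings are hypothetical).
CONTROL_INDEXED = 0  # "first value": fetch via the wavelet's index specifier
CONTROL_CHANNEL = 1  # "second value": fetch via the virtual channel specifier

def instruction_address(wavelet: dict, vc_table: dict) -> int:
    if wavelet["control"] == CONTROL_INDEXED:
        return wavelet["index"]
    return vc_table[wavelet["vc"]]
```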
-
Patent number: 11727257
Abstract: Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements performs flow-based computations on wavelets of data. Each processing element has a respective compute element and a respective routing element. Instructions executed by the compute element include operand specifiers, some specifying a data structure register storing a data structure descriptor describing an operand as a fabric vector or a memory vector. The data structure descriptor further describes the memory vector as one of a one-dimensional vector, a four-dimensional vector, or a circular buffer vector. Optionally, the data structure descriptor specifies an extended data structure register storing an extended data structure descriptor. The extended data structure descriptor specifies parameters relating to a four-dimensional vector or a circular buffer vector.
Type: Grant
Filed: January 24, 2022
Date of Patent: August 15, 2023
Assignee: Cerebras Systems Inc.
Inventors: Sean Lie, Michael Morrison, Srikanth Arekapudi, Gary R. Lauterbach, Michael Edwin James
-
Publication number: 20230125522
Abstract: Techniques in optimized placement for efficiency for accelerated deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element comprises a compute element to execute programmed instructions using the data and a router to route the wavelets. The routing is in accordance with virtual channel specifiers of the wavelets and controlled by routing configuration information of the router. A software stack determines optimized placement based on a description of a neural network. The determined placement is used to configure the routers including usage of the respective colors. The determined placement is used to configure the compute elements including the respective programmed instructions each is configured to execute.
Type: Application
Filed: October 30, 2020
Publication date: April 27, 2023
Inventors: Vladimir KIBARDIN, Michael Edwin JAMES, Michael MORRISON, Sean LIE, Gary R. LAUTERBACH, Stanislav FUNIAK
-
Publication number: 20230105188
Abstract: Described herein are devices, systems, and methods to aid in the manipulation of cells. The devices, methods, and systems disclosed herein can be applied towards, for example, automation of the in vitro fertilization process.
Type: Application
Filed: August 12, 2022
Publication date: April 6, 2023
Inventors: Jose Antonio HORCAJADAS ALMANSA, Tamara MARTIN VILLALBA, Santiago MUNNE, Hannah Victoria HARE, Edwin James STONE, Michael Ian WALKER, Jonathan Patrick CASEY, Peter Lee CROSSLEY, Alan James JUDD, Gary Keith JEPPS
-
Publication number: 20230071424
Abstract: Techniques in placement of compute and memory for accelerated deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element comprises a compute element to execute programmed instructions using the data and a router to route the wavelets. The routing is in accordance with virtual channel specifiers of the wavelets and controlled by routing configuration information of the router. A software stack determines placement of compute resources and memory resources based on a description of a neural network. The determined placement is used to configure the routers including usage of the respective colors. The determined placement is used to configure the compute elements including the respective programmed instructions each is configured to execute.
Type: Application
Filed: October 29, 2020
Publication date: March 9, 2023
Inventors: Vladimir KIBARDIN, Michael Edwin JAMES, Michael MORRISON, Sean LIE, Gary R. LAUTERBACH, Stanislav FUNIAK
-
Publication number: 20230069536
Abstract: Techniques in dynamic routing for advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element comprises a compute element enabled to execute programmed instructions using the data and a router enabled to route the wavelets via static routing, dynamic routing, or both. The routing is in accordance with a respective virtual channel specifier of each of the wavelets and controlled by routing configuration information of the router. The static techniques enable statically specifiable neuron connections. The dynamic techniques enable information from the wavelets to alter the routing configuration information during neural network processing.
Type: Application
Filed: October 14, 2020
Publication date: March 2, 2023
Inventors: Michael MORRISON, Michael Edwin JAMES, Sean LIE, Srikanth AREKAPUDI, Gary R. LAUTERBACH, Vijay Anand Reddy KORTHIKANTI
-
Patent number: 11580394
Abstract: Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency, such as accuracy of learning, accuracy of prediction, speed of learning, performance of learning, and energy efficiency of learning. An array of processing elements performs flow-based computations on wavelets of data. Each processing element has a respective compute element and a respective routing element. Each compute element has processing resources and memory resources. Each router enables communication via wavelets with at least nearest neighbors in a 2D mesh. Stochastic gradient descent, mini-batch gradient descent, and continuous propagation gradient descent are techniques usable to train weights of a neural network modeled by the processing elements. Reverse checkpoint is usable to reduce memory usage during the training.
Type: Grant
Filed: June 24, 2020
Date of Patent: February 14, 2023
Assignee: Cerebras Systems Inc.
Inventors: Sean Lie, Michael Morrison, Michael Edwin James, Gary R. Lauterbach, Srikanth Arekapudi
-
Publication number: 20220410480
Abstract: A method of making a three-dimensional object from a polymerizable resin, the method comprising the steps of: (a) providing a carrier platform on which the three-dimensional object can be formed; (b) producing the three-dimensional object adhered to the carrier platform from the polymerizable resin by stereolithography, the object having residual resin on the surface thereof; (c) immersing the object in a wash liquid with the object remaining adhered to the carrier platform; (d) agitating (i) the object in said wash liquid (e.g., by spinning), (ii) the wash liquid with the object immersed therein (e.g., by sonication of the wash liquid), or (iii) both the object in the wash liquid and the wash liquid with the object immersed therein, to at least partially remove residual resin from the surface of the object; (e) separating the object from the wash liquid, with the object remaining adhered to the carrier platform, the object having residual wash liquid on the surface thereof; (f) agitating the object (e.g.
Type: Application
Filed: September 2, 2022
Publication date: December 29, 2022
Inventors: Courtney F. Converse, W. Ryan Powell, Sherwood Forlee, David Eliot Scheinman, Scott Heines, Edwin James Sabathia, JR.
-
Publication number: 20220398443
Abstract: Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements performs flow-based computations on wavelets of data. Each processing element has a respective compute element and a respective routing element. Instructions executed by the compute element include operand specifiers, some specifying a data structure register storing a data structure descriptor describing an operand as a fabric vector or a memory vector. The data structure descriptor further describes the memory vector as one of a one-dimensional vector, a four-dimensional vector, or a circular buffer vector. Optionally, the data structure descriptor specifies an extended data structure register storing an extended data structure descriptor. The extended data structure descriptor specifies parameters relating to a four-dimensional vector or a circular buffer vector.
Type: Application
Filed: January 24, 2022
Publication date: December 15, 2022
Inventors: Sean LIE, Michael MORRISON, Srikanth AREKAPUDI, Gary R. LAUTERBACH, Michael Edwin JAMES
-
Publication number: 20220374288
Abstract: Techniques in distributed placement of linear operators for accelerated deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element comprises a compute element to execute programmed instructions using the data and a router to route the wavelets. The routing is in accordance with virtual channel specifiers of the wavelets and controlled by routing configuration information of the router. A software stack determines distributed placement of linear operators based on a description of a neural network. The determined placement is used to configure the routers including usage of the respective colors. The determined placement is used to configure the compute elements including the respective programmed instructions each is configured to execute.
Type: Application
Filed: October 30, 2020
Publication date: November 24, 2022
Inventors: Vladimir KIBARDIN, Michael Edwin JAMES, Michael MORRISON, Sean LIE, Gary R. LAUTERBACH, Stanislav FUNIAK
-
Patent number: 11488004
Abstract: Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements performs flow-based computations on wavelets of data. Each processing element has a respective compute element and a respective routing element. Each compute element has memory. At least a first single neuron is implemented using resources of a plurality of the array of processing elements. At least a portion of a second neuron is implemented using resources of one or more of the plurality of processing elements. In some usage scenarios, the foregoing neuron implementation enables greater performance by enabling a single neuron to use the computational resources of multiple processing elements and/or computational load balancing across the processing elements while maintaining locality of incoming activations for the processing elements.
Type: Grant
Filed: April 15, 2018
Date of Patent: November 1, 2022
Assignee: Cerebras Systems Inc.
Inventors: Sean Lie, Michael Morrison, Srikanth Arekapudi, Michael Edwin James, Gary R. Lauterbach
-
Publication number: 20220343136
Abstract: Techniques in wavelet filtering for advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element comprises a compute element to execute programmed instructions using the data and a router to route the wavelets in accordance with virtual channel specifiers. Each processing element is enabled to perform local filtering of wavelets received at the processing element, selectively, conditionally, and/or optionally discarding zero or more of the received wavelets, thereby preventing further processing of the discarded wavelets. The wavelet filtering is performed by one or more configurable wavelet filters operable in various modes, such as counter, sparse, and range modes.
Type: Application
Filed: October 15, 2020
Publication date: October 27, 2022
Inventors: Michael MORRISON, Michael Edwin JAMES, Sean LIE, Srikanth AREKAPUDI, Gary R. LAUTERBACH
-
Patent number: 11478987
Abstract: Method of making a three-dimensional object, comprising the steps of: (a) providing a carrier platform on which the three-dimensional object can be formed; (b) producing the three-dimensional object; (c) immersing the object in a wash liquid with the object remaining adhered to the carrier platform; (d) agitating (i) the object in said wash liquid, (ii) the wash liquid with the object immersed therein, or (iii) both the object and the wash liquid, to at least partially remove residual resin from the surface of the object; (e) separating the object from the wash liquid, with the object remaining adhered to the carrier platform; (f) agitating the object to at least partially remove residual wash liquid from the surface thereof; and (g) repeating steps (c) through (f) to remove additional polymerizable resin from the surface thereof, wherein steps (c) through (f) are carried out in the same vessel.
Type: Grant
Filed: November 29, 2017
Date of Patent: October 25, 2022
Assignee: Carbon, Inc.
Inventors: Courtney F. Converse, W. Ryan Powell, Sherwood Forlee, David Eliot Scheinman, Scott Heines, Edwin James Sabathia, Jr.
-
Patent number: 11475282
Abstract: Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of compute elements and routers performs flow-based computations on wavelets of data. Some instructions are performed in iterations, such as one iteration per element of a fabric vector or FIFO. When sources for an iteration of an instruction are unavailable, and/or there is insufficient space to store results of the iteration, indicators associated with operands of the instruction are checked to determine whether other work can be performed. In some scenarios, other work cannot be performed and processing stalls. Alternatively, information about the instruction is saved, the other work is performed, and sometime after the sources become available and/or sufficient space to store the results becomes available, the iteration is performed using the saved information.
Type: Grant
Filed: April 17, 2018
Date of Patent: October 18, 2022
Assignee: Cerebras Systems Inc.
Inventors: Sean Lie, Michael Morrison, Michael Edwin James, Gary R. Lauterbach, Srikanth Arekapudi
-
Patent number: 11449574
Abstract: Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element has a respective compute element and a respective routing element. Each compute element has a respective floating-point unit enabled to perform stochastic rounding, thus in some circumstances enabling reducing systematic bias in long dependency chains of floating-point computations. The long dependency chains of floating-point computations are performed, e.g., to train a neural network or to perform inference with respect to a trained neural network.
Type: Grant
Filed: April 13, 2018
Date of Patent: September 20, 2022
Assignee: Cerebras Systems Inc.
Inventors: Sean Lie, Michael Edwin James, Michael Morrison, Gary R. Lauterbach, Srikanth Arekapudi
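Stochastic rounding, the technique this abstract names, is easy to illustrate outside any hardware context: a value is rounded up with probability equal to its fractional part, so the rounding error has zero mean and does not accumulate as systematic bias over long chains. The toy sketch below rounds a float to an integer; a hardware floating-point unit would round to a nearby representable value instead, but the principle is the same.

```python
# Toy illustration of stochastic rounding (not the patented FPU design):
# round x up with probability equal to its fractional part, so that the
# expected value of the rounded result equals x.
import math
import random

def stochastic_round(x: float, rng: random.Random) -> int:
    floor = math.floor(x)
    frac = x - floor
    return floor + (1 if rng.random() < frac else 0)
```

Averaged over many calls, `stochastic_round(2.25, rng)` tends toward 2.25 (about three quarters of results are 2 and one quarter are 3), whereas round-to-nearest would always return 2 and bias a long accumulation downward.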
-
Patent number: 11440259
Abstract: A rotor for separating residual resin from additively manufactured objects in a centrifugal separator, the rotor including a rotor base and a plurality of engagement members configured to secure additively manufactured, light polymerized, objects to the rotor base, each object carrying unpolymerized resin on a surface thereof. The improvement includes a plurality of catch pans removably connected to the base, each catch pan configured to receive unpolymerized resin therein upon centrifugal separation of the resin from the additively manufactured, light polymerized, objects.
Type: Grant
Filed: January 26, 2021
Date of Patent: September 13, 2022
Assignee: Carbon, Inc.
Inventors: R. Griffin Price, Edwin James Sabathia, Jr., Bob E. Feller
-
Publication number: 20220284275
Abstract: Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements performs flow-based computations on wavelets of data. Each processing element has a compute element and a routing element. Each router enables communication via wavelets with at least nearest neighbors in a 2D mesh. Routing is controlled by virtual channel specifiers in each wavelet and routing configuration information in each router. Execution of an activate instruction or completion of a fabric vector operation activates one of the virtual channels. A virtual channel is selected from a pool comprising previously activated virtual channels and virtual channels associated with previously received wavelets. A task corresponding to the selected virtual channel is activated by executing instructions corresponding to the selected virtual channel.
Type: Application
Filed: October 19, 2021
Publication date: September 8, 2022
Inventors: Sean LIE, Michael MORRISON, Srikanth AREKAPUDI, Michael Edwin JAMES, Gary R. LAUTERBACH