Patents by Inventor Michael Morrison

Michael Morrison has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11934945
    Abstract: Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency, such as accuracy of learning, accuracy of prediction, speed of learning, performance of learning, and energy efficiency of learning. An array of processing elements performs flow-based computations on wavelets of data. Each processing element has a respective compute element and a respective routing element. Each compute element has processing resources and memory resources. Each router enables communication via wavelets with at least nearest neighbors in a 2D mesh. Stochastic gradient descent, mini-batch gradient descent, and continuous propagation gradient descent are techniques usable to train weights of a neural network modeled by the processing elements. Reverse checkpoint is usable to reduce memory usage during the training.
    Type: Grant
    Filed: February 23, 2018
    Date of Patent: March 19, 2024
    Assignee: Cerebras Systems Inc.
    Inventors: Sean Lie, Michael Morrison, Michael Edwin James, Gary R. Lauterbach, Srikanth Arekapudi
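The abstract above names three weight-training techniques. A minimal sketch of the first two, stochastic gradient descent and mini-batch gradient descent, on a toy linear model is shown below; the model, data, and hyperparameters are illustrative only and are not drawn from the patent.

```python
# Sketch of two of the training techniques named in the abstract:
# stochastic gradient descent (one update per example) and mini-batch
# gradient descent (one update per batch, using the mean gradient).

def grad(w, x, y):
    # Gradient of the squared error 0.5 * (w*x - y)**2 with respect to w.
    return (w * x - y) * x

def sgd(w, data, lr=0.1):
    # One weight update per (x, y) example.
    for x, y in data:
        w -= lr * grad(w, x, y)
    return w

def minibatch_gd(w, data, lr=0.1, batch=2):
    # One weight update per batch of examples.
    for i in range(0, len(data), batch):
        chunk = data[i:i + batch]
        g = sum(grad(w, x, y) for x, y in chunk) / len(chunk)
        w -= lr * g
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]  # y = 2x
w_sgd = sgd(0.0, data)
w_mb = minibatch_gd(0.0, data)
```

After one pass over the data, both variants move the weight close to the true value 2.0; the continuous propagation variant and reverse checkpointing named in the abstract are not modeled here.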
  • Publication number: 20240025222
    Abstract: A support ring for an air-suspension strut. In embodiments, the support ring has a central longitudinal axis and includes a wall. In a radially inward direction, the wall has or forms an internal abutment geometry that projects in the radial direction, and/or, in a radially outward direction, the wall has or forms an external abutment geometry that projects in the radial direction.
    Type: Application
    Filed: July 25, 2023
    Publication date: January 25, 2024
    Inventors: Jan Ole Maack, Michael Morrison
  • Patent number: 11853867
    Abstract: Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements performs flow-based computations on wavelets of data. Each processing element has a compute element and a routing element. Each router enables communication via wavelets with at least nearest neighbors in a 2D mesh. Routing is controlled by virtual channel specifiers in each wavelet and routing configuration information in each router. Execution of an activate instruction or completion of a fabric vector operation activates one of the virtual channels. A virtual channel is selected from a pool comprising previously activated virtual channels and virtual channels associated with previously received wavelets. A task corresponding to the selected virtual channel is activated by executing instructions corresponding to the selected virtual channel.
    Type: Grant
    Filed: October 19, 2021
    Date of Patent: December 26, 2023
    Assignee: Cerebras Systems Inc.
    Inventors: Sean Lie, Michael Morrison, Srikanth Arekapudi, Michael Edwin James, Gary R. Lauterbach
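The abstract above describes selecting a virtual channel from a pool of previously activated channels and channels with previously received wavelets, then running the task bound to that channel. A hypothetical sketch of that selection step follows; the names and the lowest-number selection policy are illustrative assumptions, not the patent's actual scheme.

```python
# Hypothetical sketch of virtual-channel task activation: pick a channel
# from the pool (activated channels plus channels with pending wavelets),
# then execute the instruction sequence bound to it.

def select_channel(activated, with_wavelets):
    # Pool is the union of both sources; the lowest-numbered channel
    # stands in for whatever priority scheme a real router would use.
    pool = activated | with_wavelets
    return min(pool) if pool else None

def run_task(tasks, channel):
    # Execute the instructions bound to the selected virtual channel.
    return [instr() for instr in tasks[channel]]

tasks = {0: [lambda: "relu"], 2: [lambda: "matmul", lambda: "add"]}
ch = select_channel(activated={2}, with_wavelets={0})
```

Here `ch` is channel 0, and `run_task(tasks, ch)` executes that channel's instruction list.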
  • Patent number: 11827283
    Abstract: A tailgate stop apparatus. The apparatus includes a base member configured to be fixed with respect to a primary gate assembly of a multiple-gate tailgate. The apparatus includes a stop member coupled to the base member, wherein the stop member is movable between a tailgate-engaged position configured to inhibit an inner gate panel of an inner gate assembly of the multiple-gate tailgate from being moved to a closed position, and a tailgate-disengaged position configured to allow the inner gate panel to be moved to the closed position.
    Type: Grant
    Filed: October 9, 2020
    Date of Patent: November 28, 2023
    Assignee: Banks Morrison Innovations LLC
    Inventors: James E. Banks, Jr., Michael A. Morrison
  • Patent number: 11801994
    Abstract: A receptacle is provided for holding a quantity of solid material. The receptacle includes a container having an open end, and a gate operably connected to the container so as to be rotatable to a closed position structured to prevent solid material from exiting the container through the open end. The gate includes a locking arm mounted thereon. A latching mechanism is operably connected to the container and includes a latch rotatable to a latched orientation structured to contact the locking arm to prevent the gate from being rotated out of the closed position. The locking arm is structured to move rearwardly and upwardly during rotation of the gate out of the closed position. The latch is structured so that contact between the latch and the locking arm when the latch is in the latched orientation prevents rearward and upward movement of the locking arm.
    Type: Grant
    Filed: May 11, 2022
    Date of Patent: October 31, 2023
    Assignees: Royal Truck and Trailer Sales and Service, Inc., Edw. C. Levy Co.
    Inventors: Michael Morrison, Scott Chapman, Sean Chapman, Brian Billings, John Battles, Michael Robert Pollock, Brian Matthew Clark
  • Publication number: 20230322309
    Abstract: A tailgate deactivation system. A switch includes two terminals configured to be electrically coupled to a tailgate power circuit that supplies power to at least a portion of a tailgate of a vehicle, and an actuator configured to electrically couple the two terminals in an on state to allow power to flow in the tailgate power circuit, and to electrically decouple the two terminals in an off state to inhibit power from flowing in the tailgate power circuit.
    Type: Application
    Filed: May 30, 2023
    Publication date: October 12, 2023
    Inventors: James E. Banks, Jr., Michael A. Morrison
  • Patent number: 11727254
    Abstract: Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements performs flow-based computations on wavelets of data. Each processing element has a compute element and a routing element. Each compute element has memory. Each router enables communication via wavelets with nearest neighbors in a 2D mesh. A compute element receives a wavelet. If a control specifier of the wavelet is a first value, then instructions are read from the memory of the compute element in accordance with an index specifier of the wavelet. If the control specifier is a second value, then instructions are read from the memory of the compute element in accordance with a virtual channel specifier of the wavelet. Then the compute element initiates execution of the instructions.
    Type: Grant
    Filed: August 27, 2020
    Date of Patent: August 15, 2023
    Assignee: Cerebras Systems Inc.
    Inventors: Sean Lie, Gary R. Lauterbach, Michael Edwin James, Michael Morrison, Srikanth Arekapudi
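The abstract above describes a two-way dispatch: the wavelet's control specifier determines whether the instruction fetch address comes from its index specifier or its virtual channel specifier. The sketch below illustrates that rule; the field names, base addresses, and address mapping are assumptions for illustration only.

```python
# Sketch of the wavelet-dispatch rule in the abstract: one control value
# selects an address derived from the index specifier, the other selects
# an address derived from the virtual channel specifier.

from dataclasses import dataclass

@dataclass
class Wavelet:
    control: int   # selects which specifier names the instruction address
    index: int     # index specifier
    channel: int   # virtual channel specifier

def fetch_address(w, index_base=0x100, channel_base=0x200):
    if w.control == 0:
        return index_base + w.index      # control is the "first value"
    return channel_base + w.channel      # control is the "second value"

addr = fetch_address(Wavelet(control=0, index=3, channel=7))
```

With `control=0` the fetch address follows the index specifier (`0x103` here); flipping the control specifier to the second value routes the fetch through the channel specifier instead.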
  • Patent number: 11727257
    Abstract: Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements performs flow-based computations on wavelets of data. Each processing element has a respective compute element and a respective routing element. Instructions executed by the compute element include operand specifiers, some specifying a data structure register storing a data structure descriptor describing an operand as a fabric vector or a memory vector. The data structure descriptor further describes the memory vector as one of a one-dimensional vector, a four-dimensional vector, or a circular buffer vector. Optionally, the data structure descriptor specifies an extended data structure register storing an extended data structure descriptor. The extended data structure descriptor specifies parameters relating to a four-dimensional vector or a circular buffer vector.
    Type: Grant
    Filed: January 24, 2022
    Date of Patent: August 15, 2023
    Assignee: Cerebras Systems Inc.
    Inventors: Sean Lie, Michael Morrison, Srikanth Arekapudi, Gary R. Lauterbach, Michael Edwin James
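The abstract above describes operand specifiers that name a data structure register whose descriptor classifies the operand as a fabric vector or a memory vector, with memory vectors further typed as one-dimensional, four-dimensional, or circular buffer. A minimal sketch of that lookup follows; all field and register names are illustrative assumptions.

```python
# Sketch of the operand-descriptor idea: a register holds a descriptor
# that says whether the operand is a fabric vector or a memory vector,
# and for memory vectors, which of three shapes it has.

from dataclasses import dataclass

@dataclass
class DataStructureDescriptor:
    kind: str                # "fabric" or "memory"
    memory_shape: str = ""   # "1d", "4d", or "circular" when kind == "memory"

def describe(regs, operand_reg):
    # Resolve an operand specifier through its data structure register.
    d = regs[operand_reg]
    if d.kind == "fabric":
        return "fabric vector"
    return f"memory vector ({d.memory_shape})"

regs = {
    0: DataStructureDescriptor("fabric"),
    1: DataStructureDescriptor("memory", "circular"),
}
```

The extended descriptors mentioned in the abstract (extra parameters for four-dimensional and circular buffer vectors) would hang off such a descriptor in the same way and are omitted here.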
  • Patent number: 11702148
    Abstract: A tailgate deactivation system. A switch includes two terminals configured to be electrically coupled to a tailgate power circuit that supplies power to at least a portion of a tailgate of a vehicle, and an actuator configured to electrically couple the two terminals in an on state to allow power to flow in the tailgate power circuit, and to electrically decouple the two terminals in an off state to inhibit power from flowing in the tailgate power circuit.
    Type: Grant
    Filed: October 29, 2021
    Date of Patent: July 18, 2023
    Assignee: Banks Morrison Innovations LLC
    Inventors: James E. Banks, Jr., Michael A. Morrison
  • Publication number: 20230125522
    Abstract: Techniques in optimized placement for efficiency for accelerated deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element comprises a compute element to execute programmed instructions using the data and a router to route the wavelets. The routing is in accordance with virtual channel specifiers of the wavelets and controlled by routing configuration information of the router. A software stack determines optimized placement based on a description of a neural network. The determined placement is used to configure the routers including usage of the respective colors. The determined placement is used to configure the compute elements including the respective programmed instructions each is configured to execute.
    Type: Application
    Filed: October 30, 2020
    Publication date: April 27, 2023
    Inventors: Vladimir Kibardin, Michael Edwin James, Michael Morrison, Sean Lie, Gary R. Lauterbach, Stanislav Funiak
  • Publication number: 20230071424
    Abstract: Techniques in placement of compute and memory for accelerated deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element comprises a compute element to execute programmed instructions using the data and a router to route the wavelets. The routing is in accordance with virtual channel specifiers of the wavelets and controlled by routing configuration information of the router. A software stack determines placement of compute resources and memory resources based on a description of a neural network. The determined placement is used to configure the routers including usage of the respective colors. The determined placement is used to configure the compute elements including the respective programmed instructions each is configured to execute.
    Type: Application
    Filed: October 29, 2020
    Publication date: March 9, 2023
    Inventors: Vladimir Kibardin, Michael Edwin James, Michael Morrison, Sean Lie, Gary R. Lauterbach, Stanislav Funiak
  • Publication number: 20230069536
    Abstract: Techniques in dynamic routing for advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element comprises a compute element enabled to execute programmed instructions using the data and a router enabled to route the wavelets via static routing, dynamic routing, or both. The routing is in accordance with a respective virtual channel specifier of each of the wavelets and controlled by routing configuration information of the router. The static techniques enable statically specifiable neuron connections. The dynamic techniques enable information from the wavelets to alter the routing configuration information during neural network processing.
    Type: Application
    Filed: October 14, 2020
    Publication date: March 2, 2023
    Inventors: Michael Morrison, Michael Edwin James, Sean Lie, Srikanth Arekapudi, Gary R. Lauterbach, Vijay Anand Reddy Korthikanti
  • Patent number: 11580394
    Abstract: Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency, such as accuracy of learning, accuracy of prediction, speed of learning, performance of learning, and energy efficiency of learning. An array of processing elements performs flow-based computations on wavelets of data. Each processing element has a respective compute element and a respective routing element. Each compute element has processing resources and memory resources. Each router enables communication via wavelets with at least nearest neighbors in a 2D mesh. Stochastic gradient descent, mini-batch gradient descent, and continuous propagation gradient descent are techniques usable to train weights of a neural network modeled by the processing elements. Reverse checkpoint is usable to reduce memory usage during the training.
    Type: Grant
    Filed: June 24, 2020
    Date of Patent: February 14, 2023
    Assignee: Cerebras Systems Inc.
    Inventors: Sean Lie, Michael Morrison, Michael Edwin James, Gary R. Lauterbach, Srikanth Arekapudi
  • Publication number: 20220398443
    Abstract: Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements performs flow-based computations on wavelets of data. Each processing element has a respective compute element and a respective routing element. Instructions executed by the compute element include operand specifiers, some specifying a data structure register storing a data structure descriptor describing an operand as a fabric vector or a memory vector. The data structure descriptor further describes the memory vector as one of a one-dimensional vector, a four-dimensional vector, or a circular buffer vector. Optionally, the data structure descriptor specifies an extended data structure register storing an extended data structure descriptor. The extended data structure descriptor specifies parameters relating to a four-dimensional vector or a circular buffer vector.
    Type: Application
    Filed: January 24, 2022
    Publication date: December 15, 2022
    Inventors: Sean Lie, Michael Morrison, Srikanth Arekapudi, Gary R. Lauterbach, Michael Edwin James
  • Publication number: 20220374288
    Abstract: Techniques in distributed placement of linear operators for accelerated deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element comprises a compute element to execute programmed instructions using the data and a router to route the wavelets. The routing is in accordance with virtual channel specifiers of the wavelets and controlled by routing configuration information of the router. A software stack determines distributed placement of linear operators based on a description of a neural network. The determined placement is used to configure the routers including usage of the respective colors. The determined placement is used to configure the compute elements including the respective programmed instructions each is configured to execute.
    Type: Application
    Filed: October 30, 2020
    Publication date: November 24, 2022
    Inventors: Vladimir Kibardin, Michael Edwin James, Michael Morrison, Sean Lie, Gary R. Lauterbach, Stanislav Funiak
  • Publication number: 20220372791
    Abstract: A door latch clasp assembly configured for use with a door having a door latch assembly and a door latch plate is provided. The door latch clasp assembly includes a first aperture section having a first aperture. The first aperture section has an arcuate cross-sectional shape. A second aperture section has a second aperture and the second aperture section has an arcuate cross-sectional shape. An intermediate section extends from the first aperture section to the second aperture section. The intermediate section has an arcuate cross-sectional shape and a latch assembly aperture. The arcuate cross-sectional shapes of the first and the second aperture sections and the arcuate cross-sectional shape of the intermediate section are configured to approximate an arcuate cross-sectional shape of a perimeter wall of a face bore of the door.
    Type: Application
    Filed: May 19, 2022
    Publication date: November 24, 2022
    Inventor: Michael Morrison
  • Publication number: 20220363471
    Abstract: A receptacle is provided for holding a quantity of solid material. The receptacle includes a container having an open end, and a gate operably connected to the container so as to be rotatable to a closed position structured to prevent solid material from exiting the container through the open end. The gate includes a locking arm mounted thereon. A latching mechanism is operably connected to the container and includes a latch rotatable to a latched orientation structured to contact the locking arm to prevent the gate from being rotated out of the closed position. The locking arm is structured to move rearwardly and upwardly during rotation of the gate out of the closed position. The latch is structured so that contact between the latch and the locking arm when the latch is in the latched orientation prevents rearward and upward movement of the locking arm.
    Type: Application
    Filed: May 11, 2022
    Publication date: November 17, 2022
    Inventors: Michael Morrison, Scott Chapman, Sean Chapman, Brian Billings, John Battles, Michael Robert Pollock, Brian Matthew Clark
  • Patent number: 11488004
    Abstract: Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements performs flow-based computations on wavelets of data. Each processing element has a respective compute element and a respective routing element. Each compute element has memory. At least a first single neuron is implemented using resources of a plurality of the array of processing elements. At least a portion of a second neuron is implemented using resources of one or more of the plurality of processing elements. In some usage scenarios, the foregoing neuron implementation enables greater performance by enabling a single neuron to use the computational resources of multiple processing elements and/or computational load balancing across the processing elements while maintaining locality of incoming activations for the processing elements.
    Type: Grant
    Filed: April 15, 2018
    Date of Patent: November 1, 2022
    Assignee: Cerebras Systems Inc.
    Inventors: Sean Lie, Michael Morrison, Srikanth Arekapudi, Michael Edwin James, Gary R. Lauterbach
  • Publication number: 20220343136
    Abstract: Techniques in wavelet filtering for advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element comprises a compute element to execute programmed instructions using the data and a router to route the wavelets in accordance with virtual channel specifiers. Each processing element is enabled to perform local filtering of wavelets received at the processing element, selectively, conditionally, and/or optionally discarding zero or more of the received wavelets, thereby preventing further processing of the discarded wavelets. The wavelet filtering is performed by one or more configurable wavelet filters operable in various modes, such as counter, sparse, and range modes.
    Type: Application
    Filed: October 15, 2020
    Publication date: October 27, 2022
    Inventors: Michael Morrison, Michael Edwin James, Sean Lie, Srikanth Arekapudi, Gary R. Lauterbach
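The abstract above names three configurable filter modes (counter, sparse, and range) for locally discarding wavelets before further processing. The sketch below gives one plausible reading of each mode; the exact semantics in the patent may differ, and the keyword names are illustrative.

```python
# Hypothetical sketch of the three wavelet-filter modes named in the
# abstract: counter keeps every Nth wavelet, sparse keeps a listed set
# of indices, and range keeps indices inside [lo, hi]. Discarded
# wavelets receive no further processing.

def filter_wavelets(wavelets, mode, **cfg):
    if mode == "counter":
        n = cfg["every"]
        return [w for i, w in enumerate(wavelets) if i % n == 0]
    if mode == "sparse":
        keep = set(cfg["indices"])
        return [w for i, w in enumerate(wavelets) if i in keep]
    if mode == "range":
        lo, hi = cfg["lo"], cfg["hi"]
        return [w for i, w in enumerate(wavelets) if lo <= i <= hi]
    raise ValueError(mode)

ws = list("abcdef")
kept = filter_wavelets(ws, "counter", every=2)
```

With six wavelets, counter mode with `every=2` keeps indices 0, 2, and 4; sparse and range modes keep explicit index sets and contiguous index spans respectively.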
  • Patent number: 11475282
    Abstract: Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of compute elements and routers performs flow-based computations on wavelets of data. Some instructions are performed in iterations, such as one iteration per element of a fabric vector or FIFO. When sources for an iteration of an instruction are unavailable, and/or there is insufficient space to store results of the iteration, indicators associated with operands of the instruction are checked to determine whether other work can be performed. In some scenarios, other work cannot be performed and processing stalls. Alternatively, information about the instruction is saved, the other work is performed, and sometime after the sources become available and/or sufficient space to store the results becomes available, the iteration is performed using the saved information.
    Type: Grant
    Filed: April 17, 2018
    Date of Patent: October 18, 2022
    Assignee: Cerebras Systems Inc.
    Inventors: Sean Lie, Michael Morrison, Michael Edwin James, Gary R. Lauterbach, Srikanth Arekapudi
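The abstract above describes a scheduling choice: when an iteration's sources or result space are unavailable, the processing element either stalls or saves the instruction's state, performs other work, and resumes the iteration later. The toy scheduler below sketches the save-and-resume path; the queue model and readiness predicates are illustrative assumptions, not the patent's mechanism.

```python
# Sketch of the iteration-scheduling choice in the abstract: iterations
# whose sources and result space are available execute immediately;
# blocked iterations are saved and resumed after other work completes.

def run(iterations, source_ready, space_ready):
    done, saved = [], []
    for it in iterations:
        if source_ready(it) and space_ready(it):
            done.append(it)    # sources and space available: execute now
        else:
            saved.append(it)   # save state, switch to other work
    # Later, once sources/space become available, resume saved iterations.
    done.extend(saved)
    return done

order = run([1, 2, 3, 4],
            source_ready=lambda i: i != 2,   # iteration 2 blocked at first
            space_ready=lambda i: True)
```

Iteration 2 is skipped while its sources are unavailable and completes last, after the other work; a pure stall policy would instead halt at iteration 2 until its sources arrived.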