Patents by Inventor Mark Alan Lovell

Mark Alan Lovell has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230324980
    Abstract: Described are context-aware low-power systems and methods that reduce power consumption in compute circuits such as commonly available machine learning hardware accelerators, which carry out a large number of arithmetic operations when performing convolution operations and related computations. Various embodiments exploit the fact that power demand for a series of computation steps, and for many other functions a hardware accelerator performs, is highly deterministic, thus allowing energy needs to be anticipated or even calculated to a certain degree. Accordingly, power supply output may be optimized according to the actual energy needs of compute circuits. In certain embodiments, this is accomplished by proactively and dynamically adjusting power-related parameters according to high-power and low-power operations to benefit a machine learning circuit and to avoid wasting valuable power resources, especially in embedded computing systems.
    Type: Application
    Filed: June 8, 2023
    Publication date: October 12, 2023
    Applicant: Maxim Integrated Products, Inc.
    Inventors: Mark Alan LOVELL, Robert Michael MUCHSEL
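The core idea above — that a deterministic operation schedule lets energy needs be computed ahead of time — can be illustrated with a minimal Python sketch. All names and values here are purely illustrative and are not taken from the patent:

```python
# Relative power demand per accelerator phase (assumed values for
# illustration only).
PHASE_POWER = {"load_weights": 0.3, "convolve": 1.0, "pool": 0.4, "idle": 0.05}

def plan_supply(schedule, headroom=0.1):
    """Because the phase sequence is deterministic, a supply setting can
    be planned proactively for each phase, with a small safety headroom,
    rather than reacting to demand after the fact."""
    return [PHASE_POWER[phase] * (1.0 + headroom) for phase in schedule]

# The supply is raised just before the high-power convolve phase and
# dropped during idle.
settings = plan_supply(["load_weights", "convolve", "pool", "idle"])
```

The point of the sketch is only the shape of the approach: a lookup over a known schedule replaces worst-case provisioning.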
  • Patent number: 11747887
    Abstract: Described are context-aware low-power systems and methods that reduce power consumption in compute circuits such as commonly available machine learning hardware accelerators, which carry out a large number of arithmetic operations when performing convolution operations and related computations. Various embodiments exploit the fact that power demand for a series of computation steps, and for many other functions a hardware accelerator performs, is highly deterministic, thus allowing energy needs to be anticipated or even calculated to a certain degree. Accordingly, power supply output may be optimized according to the actual energy needs of compute circuits. In certain embodiments, this is accomplished by proactively and dynamically adjusting power-related parameters according to high-power and low-power operations to benefit a machine learning circuit and to avoid wasting valuable power resources, especially in embedded computing systems.
    Type: Grant
    Filed: August 18, 2022
    Date of Patent: September 5, 2023
    Assignee: Maxim Integrated Products, Inc.
    Inventors: Mark Alan Lovell, Robert Michael Muchsel
  • Patent number: 11709911
    Abstract: Described herein are systems and methods that increase the utilization and performance of computational resources, such as storage space and computation time, thereby reducing computational cost. Various embodiments of the invention provide a hardware structure that allows both streaming of source data, which eliminates redundant data transfer, and in-memory computations, which eliminate the need to transfer data to and from intermediate storage. In certain embodiments, computational cost is reduced by using a hardware structure that enables mathematical operations, such as the element-wise matrix multiplications employed by convolutional neural networks, to be performed automatically and efficiently.
    Type: Grant
    Filed: October 1, 2019
    Date of Patent: July 25, 2023
    Assignee: Maxim Integrated Products, Inc.
    Inventors: Mark Alan Lovell, Robert Michael Muchsel
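The streaming idea in the abstract above can be sketched in a few lines of Python: each source value is consumed once as it streams by and is accumulated directly into every output position it affects, so no intermediate buffer is ever written or re-read. This is a toy software model, not the patented hardware structure:

```python
def stream_conv1d(source, kernel):
    """1-D correlation computed in a single streaming pass: each sample
    updates all partial sums it contributes to, in place."""
    out = [0] * (len(source) - len(kernel) + 1)
    for i, x in enumerate(source):        # one pass over the stream
        for k, w in enumerate(kernel):
            j = i - k                     # output position this sample feeds
            if 0 <= j < len(out):
                out[j] += x * w           # in-place accumulation
    return out
```

In hardware the inner accumulations happen in parallel; the sketch only shows that no source value needs to be fetched twice.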
  • Publication number: 20230222315
    Abstract: An energy-efficient sequencer comprising inline multipliers and adders causes a read source that contains matching values to output an enable signal to enable a data item prior to using a multiplier to multiply the data item with a weight to obtain a product for use in a matrix-multiplication in hardware. A second enable signal causes the output to be written to the data item.
    Type: Application
    Filed: February 27, 2023
    Publication date: July 13, 2023
    Applicant: Maxim Integrated Products, Inc.
    Inventors: Mark Alan Lovell, Robert Michael Muchsel, Donald Wood Loomis, III
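One way to read the sequencer abstract above is as gating each multiply behind an enable signal so that operations whose result cannot contribute are never issued. The following Python sketch models that gating in software; it is an interpretation for illustration, not the patented circuit:

```python
def sequenced_mac(data, weights):
    """Multiply-accumulate with per-element enable: the multiplier only
    fires when the enable signal is asserted (here, when both operands
    are nonzero), saving the energy of multiplies that add nothing."""
    acc = 0
    for d, w in zip(data, weights):
        enable = d != 0 and w != 0   # enable signal derived from the read source
        if enable:
            acc += d * w             # inline multiplier + adder
    return acc
```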
  • Publication number: 20230108883
    Abstract: Low-power systems and methods increase computational efficiency in neural network processing by allowing hardware accelerators to perform processing steps on large amounts of data at reduced execution times without significantly increasing hardware cost. In various embodiments, this is accomplished by accessing locations in a source memory coupled to a hardware accelerator and using a resource optimizer that, based on storage availability and network parameters, determines target locations in a number of distributed memory elements. The target storage locations are selected according to one or more memory access metrics to reduce power consumption. A read/write synchronizer then schedules simultaneous read and write operations to reduce idle time and further increase computational efficiency.
    Type: Application
    Filed: October 5, 2021
    Publication date: April 6, 2023
    Applicant: Maxim Integrated Products, Inc.
    Inventors: Mark Alan LOVELL, Robert Michael MUCHSEL
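The resource-optimizer step described above — choosing target locations across distributed memories by an access metric — can be sketched as a greedy placement. Bank names, sizes, and the energy metric below are invented for illustration:

```python
def place_tensors(tensors, banks):
    """Assign each tensor to the cheapest memory bank (by an assumed
    energy-per-access metric) that still has room for it."""
    placement = {}
    for name, size in tensors:
        candidates = [b for b in banks if b["free"] >= size]
        best = min(candidates, key=lambda b: b["energy_per_access"])
        best["free"] -= size
        placement[name] = best["id"]
    return placement

banks = [
    {"id": "local",  "free": 4,  "energy_per_access": 1},
    {"id": "shared", "free": 16, "energy_per_access": 3},
]
# The large tensor overflows to the shared bank; the small one stays local.
layout = place_tensors([("weights", 8), ("bias", 2)], banks)
```

A real optimizer would also account for the network parameters and the synchronizer's scheduling; the sketch shows only the metric-driven selection.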
  • Patent number: 11610095
    Abstract: An energy-efficient sequencer comprising inline multipliers and adders causes a read source that contains matching values to output an enable signal to enable a data item prior to using a multiplier to multiply the data item with a weight to obtain a product for use in a matrix-multiplication in hardware. A second enable signal causes the output to be written to the data item.
    Type: Grant
    Filed: October 1, 2019
    Date of Patent: March 21, 2023
    Assignee: Maxim Integrated Products, Inc.
    Inventors: Mark Alan Lovell, Robert Michael Muchsel, Donald Wood Loomis, III
  • Publication number: 20230077454
    Abstract: Dynamic data-dependent neural network processing systems and methods increase computational efficiency in neural network processing by uniquely processing data based on the data itself and/or configuration parameters for processing the data. In embodiments, this is accomplished by receiving, at a controller, input data that is to be processed by a first device in a first layer of a sequence of processing layers of a neural network using a first set of parameters. The input data is analyzed to determine whether to modify it, whether processing the (modified) data in a second layer would conserve at least one computational resource, or whether to apply a different set of parameters. Depending on the determination, the sequence of processing layers is modified, and the (modified) data are processed according to the modified sequence to reduce data movements and transitions, thereby, conserving computational resources.
    Type: Application
    Filed: September 10, 2021
    Publication date: March 16, 2023
    Applicant: Maxim Integrated Products, Inc.
    Inventors: Mark Alan LOVELL, Robert Michael MUCHSEL
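The data-dependent control flow described above can be reduced to a toy sketch: a controller inspects the data before each layer and skips work it can prove unnecessary. The all-zero test below is just one easily checkable condition, chosen for illustration; it is valid only for layers that map zero to zero:

```python
def run_network(layers, x):
    """Controller loop: before each layer runs, decide from the data
    itself whether the layer can be skipped (assumes zero-preserving
    layers, e.g. bias-free ReLU stages)."""
    for layer in layers:
        if all(v == 0 for v in x):
            continue        # skip: saves compute and data movement
        x = layer(x)
    return x
```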
  • Publication number: 20230079229
    Abstract: Non-intrusive, low-cost systems and methods allow designers to reduce headroom and safety margin requirements in the context of compute circuits, such as machine learning circuits, without increasing footprint or having to sacrifice computing capacity and other valuable resources. Various embodiments accomplish this by taking advantage of certain properties of machine learning circuits and using a CNN as a diagnostic tool for evaluating circuit behavior and adjusting circuit parameters to fully exploit available computing resources.
    Type: Application
    Filed: September 10, 2021
    Publication date: March 16, 2023
    Applicant: Maxim Integrated Products, Inc.
    Inventors: Mark Alan LOVELL, Robert Michael MUCHSEL
  • Publication number: 20220413590
    Abstract: Systems and methods increase computational efficiency in machine learning accelerators. In embodiments, this is accomplished by evaluating, partitioning, and selecting computational resources to uniquely process, accumulate, and store data based on the type of the data and the configuration parameters that are used to process the data. Various embodiments take advantage of the zeroing feature of a Built-In Self-Test (BIST) controller to cause a BIST circuit to create a known state for a hardware accelerator, e.g., during a startup and/or wakeup phase, thereby reducing data movements and transitions to save both time and energy.
    Type: Application
    Filed: June 23, 2021
    Publication date: December 29, 2022
    Applicant: Maxim Integrated Products, Inc.
    Inventors: Mark Alan LOVELL, Robert Michael MUCHSEL
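The BIST-zeroing trick above can be modeled in a few lines: at power-on the memory content is unknown, and a single hardware zeroization pass establishes a known state without the host issuing one write per word. The class below is a toy model with invented names:

```python
class ToyAccelerator:
    """Models an accelerator data memory whose wakeup state is unknown
    until the BIST controller's zeroing pass runs."""
    def __init__(self, words):
        self.mem = [None] * words       # unknown power-on contents
    def bist_zeroize(self):
        # Repurposes the BIST zeroing feature: one hardware-driven pass
        # instead of per-word software writes.
        self.mem = [0] * len(self.mem)

acc = ToyAccelerator(8)
acc.bist_zeroize()
```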
  • Publication number: 20220397954
    Abstract: Described are context-aware low-power systems and methods that reduce power consumption in compute circuits such as commonly available machine learning hardware accelerators that carry out a large number of arithmetic operations when performing convolution operations and related computations. Various embodiments exploit the fact that power demand for a series of computation steps and many other functions a hardware accelerator performs is highly deterministic, thus, allowing for energy needs to be anticipated or even calculated to a certain degree. Accordingly, power supply output may be optimized according to actual energy needs of compute circuits. In certain embodiments this is accomplished by proactively and dynamically adjusting power-related parameters according to high-power and low-power operations to benefit a machine learning circuit and to avoid wasting valuable power resources, especially in embedded computing systems.
    Type: Application
    Filed: August 18, 2022
    Publication date: December 15, 2022
    Applicant: Maxim Integrated Products, Inc.
    Inventors: Mark Alan LOVELL, Robert Michael MUCHSEL
  • Publication number: 20220382361
    Abstract: In-flight operations in an inbound data path from a source memory to a convolution hardware circuit increase computational throughput when performing convolution calculations, such as pooling and element-wise operations. Various operations may be performed in-line within an outbound data path to a target memory. Advantageously, this drastically reduces extraneous memory access and associated read-write operations, thereby, significantly reducing overall power consumption in a computing system.
    Type: Application
    Filed: May 25, 2021
    Publication date: December 1, 2022
    Applicant: Maxim Integrated Products, Inc.
    Inventors: Mark Alan LOVELL, Robert Michael MUCHSEL
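The in-flight idea above — performing pooling on the inbound data path rather than as a separate memory-to-memory pass — can be sketched with a streaming max-pool. This is a software analogy, not the patented data path:

```python
def inbound_maxpool(read_stream, window=2):
    """Max pooling applied 'in flight': each pooled value is emitted as
    data streams past, so the unpooled tensor is never written back to
    memory and re-read."""
    out, buf = [], []
    for x in read_stream:
        buf.append(x)
        if len(buf) == window:
            out.append(max(buf))
            buf.clear()
    return out
```

Each input word is touched exactly once, which is where the memory-access savings come from.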
  • Publication number: 20220366225
    Abstract: Systems and methods allow existing hardware, such as commonly available hardware accelerators, to process fully connected network (FCN) layers in an energy-efficient manner and without having to implement additional expensive hardware. Various embodiments accomplish this by using a “flattening” method that converts a channel associated with a number of pixels into a number of channels that equals the number of pixels.
    Type: Application
    Filed: May 14, 2021
    Publication date: November 17, 2022
    Applicant: Maxim Integrated Products, Inc.
    Inventors: Mark Alan LOVELL, Robert Michael MUCHSEL
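The "flattening" described above can be made concrete: a channel holding N pixels becomes N channels of one pixel each, after which a 1x1 convolution engine evaluates the fully connected layer with no dedicated FC hardware. A minimal sketch (function names are illustrative):

```python
def flatten_for_conv(channel_pixels):
    """One channel of N pixels -> N channels of one pixel each."""
    return [[p] for p in channel_pixels]

def fc_as_1x1_conv(channels, weights, bias=0):
    """A fully connected output is then just a 1x1 convolution summed
    over all (single-pixel) channels."""
    return sum(w * ch[0] for w, ch in zip(weights, channels)) + bias

chans = flatten_for_conv([1, 2, 3])   # -> [[1], [2], [3]]
```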
  • Publication number: 20220366261
    Abstract: Storage-efficient, low-cost systems and methods provide embedded systems with the ability to dynamically perform on-device learning to modify or customize a trained model to improve computing and detection accuracy in small-scale devices. In certain embodiments, this is accomplished by repurposing storage elements from inference to training and performing partial back-propagation in embedded devices in the final layers of an existing network. In various embodiments, replacing weights in the final layers while using hardware components to iteratively perform forward-propagation calculations advantageously reduces the need to store intermediate results, thus allowing for on-device training without significantly increasing hardware requirements or requiring excessive computational memory resources when compared to conventional machine learning methods.
    Type: Application
    Filed: May 14, 2021
    Publication date: November 17, 2022
    Applicant: Maxim Integrated Products, Inc.
    Inventors: Mark Alan LOVELL, Robert Michael MUCHSEL, Brian Gregory RUSH
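Partial back-propagation over only the final layer, as described above, can be sketched with a single linear output neuron under a squared-error loss. Earlier layers run forward-only (their output is `features` here), so no intermediate activations need to be stored. All names and the loss choice are assumptions for illustration:

```python
def train_last_layer(features, target, w, b, lr=0.1):
    """One update step that back-propagates only through the final
    layer: the frozen network supplies `features`, and only w and b
    change."""
    y = sum(wi * f for wi, f in zip(w, features)) + b
    err = y - target                           # dLoss/dy for squared error
    w = [wi - lr * err * f for wi, f in zip(w, features)]
    b = b - lr * err
    return w, b
```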
  • Publication number: 20220334634
    Abstract: Systems and methods reduce power consumption in embedded machine learning hardware accelerators and enable cost-effective embedded at-the-edge machine learning and related applications. In various embodiments, this may be accomplished by using hardware accelerators that comprise a programmable pre-processing circuit that operates in the same clock domain as the accelerator. In some embodiments, tightly coupled data-loading first-in-first-out registers (FIFOs) eliminate clock synchronization issues and reduce unnecessary address writes. In other embodiments, a data transformation may gather source data bits in a manner that allows loading full words of native bus width to reduce the number of writes and, thus, overall power consumption.
    Type: Application
    Filed: April 16, 2021
    Publication date: October 20, 2022
    Applicant: Maxim Integrated Products, Inc.
    Inventors: Mark Alan LOVELL, Robert Michael MUCHSEL
  • Patent number: 11449126
    Abstract: Described are context-aware low-power systems and methods that reduce power consumption in compute circuits such as commonly available machine learning hardware accelerators, which carry out a large number of arithmetic operations when performing convolution operations and related computations. Various embodiments exploit the fact that power demand for a series of computation steps, and for many other functions a hardware accelerator performs, is highly deterministic, thus allowing energy needs to be anticipated or even calculated to a certain degree. Accordingly, power supply output may be optimized according to the actual energy needs of compute circuits. In certain embodiments, this is accomplished by proactively and dynamically adjusting power-related parameters according to high-power and low-power operations to benefit a machine learning circuit and to avoid wasting valuable power resources, especially in embedded computing systems.
    Type: Grant
    Filed: June 1, 2021
    Date of Patent: September 20, 2022
    Assignee: Maxim Integrated Products, Inc.
    Inventors: Mark Alan Lovell, Robert Michael Muchsel
  • Publication number: 20210216868
    Abstract: Described herein are systems and methods for efficiently processing large amounts of data when performing complex neural network operations, such as convolution and pooling operations. Given cascaded convolutional neural network layers, various embodiments allow for commencing processing of a downstream layer prior to completing processing of a current or previous network layer. In certain embodiments, this is accomplished by utilizing a handshaking mechanism or asynchronous logic to determine an active neural network layer in a neural network and using that active neural layer to process a subset of a set of input data of a first layer prior to processing all of the set of input data.
    Type: Application
    Filed: December 21, 2020
    Publication date: July 15, 2021
    Applicant: Maxim Integrated Products, Inc.
    Inventors: Mark Alan LOVELL, Robert Michael MUCHSEL
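The cascaded-layer idea above — letting a downstream layer start on a subset of its input before the upstream layer has finished everything — can be modeled with lazy, tile-at-a-time hand-off between stages. Generators stand in for the hardware handshake; this is an analogy, not the patented mechanism:

```python
def pipelined(layers, tiles):
    """Tile-level pipelining: each downstream stage consumes a tile as
    soon as the upstream stage produces it, instead of waiting for the
    complete output of the previous layer."""
    stream = iter(tiles)
    for layer in layers:
        stream = map(layer, stream)   # each stage pulls tiles lazily
    yield from stream
```

Because `map` is lazy, the first tile traverses every layer before the second tile is even read, mirroring the handshake-driven forwarding in the abstract.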
  • Patent number: 10771062
    Abstract: Presented are systems and methods that allow hardware designers to protect valuable IP and information in the hardware domain in order to increase overall system security. In various embodiments of the invention this is accomplished by configuring logic gates of existing logic circuitry based on a key input. In certain embodiments, a logic function provides results that are dependent not only on input values but also on an encrypted logic key that determines connections for a given logic building block, such that the functionality of the logic function cannot be determined by reverse engineering. In some embodiments, the logic key is created by decrypting a piece of data using a secret or private key. Advantages of automatic encryption include that existing circuitry need not be re-implemented or re-built, and that the systems and methods presented are backward compatible with standard manufacturing tools.
    Type: Grant
    Filed: August 9, 2018
    Date of Patent: September 8, 2020
    Assignee: Maxim Integrated Products, Inc.
    Inventors: Robert Michael Muchsel, Donald Wood Loomis, III, Edward Tangkwai Ma, Hung Thanh Nguyen, Nancy Kow Iida, Mark Alan Lovell
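The key-dependent logic described above can be illustrated with a keyed lookup-table gate: the key is the truth table, so inspecting the netlist without the decrypted key does not reveal which function the gate computes. A minimal sketch (the 2-input LUT formulation is an illustrative choice, not the patented construction):

```python
def keyed_gate(a, b, key):
    """A 2-input logic block whose function is selected by a 4-bit key
    acting as its truth table. key = 0b1000 makes it an AND gate;
    key = 0b0110 makes it an XOR gate."""
    index = (a << 1) | b          # which truth-table row the inputs select
    return (key >> index) & 1

# The same silicon implements different functions for different keys.
```

Reverse engineering recovers only the LUT wiring; without the key (decrypted from a secret at run time), the implemented function stays hidden.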
  • Publication number: 20200110979
    Abstract: An energy-efficient sequencer comprising inline multipliers and adders causes a read source that contains matching values to output an enable signal to enable a data item prior to using a multiplier to multiply the data item with a weight to obtain a product for use in a matrix-multiplication in hardware. A second enable signal causes the output to be written to the data item.
    Type: Application
    Filed: October 1, 2019
    Publication date: April 9, 2020
    Applicant: Maxim Integrated Products, Inc.
    Inventors: Mark Alan Lovell, Robert Michael Muchsel, Donald Wood Loomis, III
  • Publication number: 20200110604
    Abstract: Described herein are systems and methods that increase the utilization and performance of computational resources, such as storage space and computation time, thereby reducing computational cost. Various embodiments of the invention provide a hardware structure that allows both streaming of source data, which eliminates redundant data transfer, and in-memory computations, which eliminate the need to transfer data to and from intermediate storage. In certain embodiments, computational cost is reduced by using a hardware structure that enables mathematical operations, such as the element-wise matrix multiplications employed by convolutional neural networks, to be performed automatically and efficiently.
    Type: Application
    Filed: October 1, 2019
    Publication date: April 9, 2020
    Applicant: Maxim Integrated Products, Inc.
    Inventors: Mark Alan Lovell, Robert Michael Muchsel
  • Patent number: 10063231
    Abstract: Presented are systems and methods that allow hardware designers to protect valuable IP and information in the hardware domain in order to increase overall system security. In various embodiments of the invention this is accomplished by configuring logic gates of existing logic circuitry based on a key input. In certain embodiments, a logic function provides results that are dependent not only on input values but also on an encrypted logic key that determines connections for a given logic building block, such that the functionality of the logic function cannot be determined by reverse engineering. In some embodiments, the logic key is created by decrypting a piece of data using a secret or private key. Advantages of automatic encryption include that existing circuitry need not be re-implemented or re-built, and that the systems and methods presented are backward compatible with standard manufacturing tools.
    Type: Grant
    Filed: July 10, 2017
    Date of Patent: August 28, 2018
    Assignee: Maxim Integrated Products, Inc.
    Inventors: Robert Michael Muchsel, Donald Wood Loomis, III, Edward Tangkwai Ma, Hung Thanh Nguyen, Nancy Kow Iida, Mark Alan Lovell