Patents by Inventor Mark Alan Lovell
Mark Alan Lovell has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230324980
Abstract: Described are context-aware low-power systems and methods that reduce power consumption in compute circuits such as commonly available machine learning hardware accelerators that carry out a large number of arithmetic operations when performing convolution operations and related computations. Various embodiments exploit the fact that power demand for a series of computation steps and many other functions a hardware accelerator performs is highly deterministic, thus allowing energy needs to be anticipated or even calculated to a certain degree. Accordingly, power supply output may be optimized according to the actual energy needs of compute circuits. In certain embodiments, this is accomplished by proactively and dynamically adjusting power-related parameters according to high-power and low-power operations to benefit a machine learning circuit and to avoid wasting valuable power resources, especially in embedded computing systems.
Type: Application
Filed: June 8, 2023
Publication date: October 12, 2023
Applicant: Maxim Integrated Products, Inc.
Inventors: Mark Alan Lovell, Robert Michael Muchsel
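The idea in the abstract above (which recurs in several related filings below) can be illustrated with a minimal sketch: if the sequence of accelerator operations is known in advance, a power-related parameter can be raised or lowered before each step instead of reacting after the fact. The operation names, energy table, and set_supply_level function are hypothetical stand-ins, not details from the patent.

```python
# Illustrative sketch only: proactive power adjustment for a deterministic
# compute schedule. Operation names, energy values, and supply levels are
# hypothetical; they are not taken from the patent.

# Estimated relative energy demand of each operation type.
ENERGY_DEMAND = {"load_weights": 1, "convolve": 8, "pool": 2, "writeback": 1}

def set_supply_level(level: str) -> None:
    """Stand-in for a driver call that adjusts a power-related parameter."""
    print(f"supply level -> {level}")

def run_schedule(schedule):
    """Walk a known-in-advance schedule, raising the supply level *before*
    high-power steps and lowering it before low-power steps."""
    for op in schedule:
        level = "high" if ENERGY_DEMAND[op] >= 4 else "low"
        set_supply_level(level)   # adjust proactively, not reactively
        print(f"executing {op}")

run_schedule(["load_weights", "convolve", "pool", "convolve", "writeback"])
```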
-
Patent number: 11747887
Abstract: Described are context-aware low-power systems and methods that reduce power consumption in compute circuits such as commonly available machine learning hardware accelerators that carry out a large number of arithmetic operations when performing convolution operations and related computations. Various embodiments exploit the fact that power demand for a series of computation steps and many other functions a hardware accelerator performs is highly deterministic, thus allowing energy needs to be anticipated or even calculated to a certain degree. Accordingly, power supply output may be optimized according to the actual energy needs of compute circuits. In certain embodiments, this is accomplished by proactively and dynamically adjusting power-related parameters according to high-power and low-power operations to benefit a machine learning circuit and to avoid wasting valuable power resources, especially in embedded computing systems.
Type: Grant
Filed: August 18, 2022
Date of Patent: September 5, 2023
Assignee: Maxim Integrated Products, Inc.
Inventors: Mark Alan Lovell, Robert Michael Muchsel
-
Patent number: 11709911
Abstract: Described herein are systems and methods that increase the utilization and performance of computational resources, such as storage space and computation time, thereby reducing computational cost. Various embodiments of the invention provide a hardware structure that allows both streaming of source data, which eliminates redundant data transfer, and in-memory computations, which eliminate the need for data transfer to and from intermediate storage. In certain embodiments, computational cost is reduced by using a hardware structure that enables mathematical operations, such as the element-wise matrix multiplications employed by convolutional neural networks, to be performed automatically and efficiently.
Type: Grant
Filed: October 1, 2019
Date of Patent: July 25, 2023
Assignee: Maxim Integrated Products, Inc.
Inventors: Mark Alan Lovell, Robert Michael Muchsel
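A rough software analogue of the streaming idea above: each source element is multiplied and accumulated as it arrives, so no intermediate product buffer is ever written and re-read. The function name and data are assumptions for illustration, not the hardware structure the patent claims.

```python
# Illustrative sketch only: streamed multiply-accumulate that never
# materializes an intermediate product array. Names are hypothetical.

def streamed_elementwise_mac(data, weights):
    """Consume source data as a stream and accumulate products directly,
    avoiding writes to and reads from intermediate storage."""
    acc = 0
    for d, w in zip(data, weights):
        acc += d * w        # each product is consumed immediately, never stored
    return acc

print(streamed_elementwise_mac([1, 2, 3, 4], [4, 3, 2, 1]))  # -> 20
```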
-
Publication number: 20230222315
Abstract: An energy-efficient sequencer comprising inline multipliers and adders causes a read source that contains matching values to output an enable signal to enable a data item prior to using a multiplier to multiply the data item with a weight to obtain a product for use in a matrix-multiplication in hardware. A second enable signal causes the output to be written to the data item.
Type: Application
Filed: February 27, 2023
Publication date: July 13, 2023
Applicant: Maxim Integrated Products, Inc.
Inventors: Mark Alan Lovell, Robert Michael Muchsel, Donald Wood Loomis, III
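A minimal software analogue of the enable-signal idea above: each multiply is gated by a check on the operands, so multiplications that would not contribute to the result are skipped. The zero test and names are assumptions for illustration; the filing describes the mechanism in hardware terms.

```python
# Illustrative sketch only: gate each multiply with an "enable" check so that
# non-contributing (zero) operands leave the multiplier idle.

def gated_dot(data, weights):
    acc = 0
    skipped = 0
    for d, w in zip(data, weights):
        enable = (d != 0) and (w != 0)   # enable signal: both operands contribute
        if enable:
            acc += d * w
        else:
            skipped += 1                  # multiplier stays idle, saving energy
    return acc, skipped

print(gated_dot([0, 5, 0, 2], [3, 0, 7, 4]))  # -> (8, 3): one multiply, three skipped
```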
-
Publication number: 20230108883
Abstract: Low-power systems and methods increase computational efficiency in neural network processing by allowing hardware accelerators to perform processing steps on large amounts of data at reduced execution times without significantly increasing hardware cost. In various embodiments, this is accomplished by accessing locations in a source memory coupled to a hardware accelerator and using a resource optimizer that, based on storage availability and network parameters, determines target locations in a number of distributed memory elements. The target storage locations are selected according to one or more memory access metrics to reduce power consumption. A read/write synchronizer then schedules simultaneous read and write operations to reduce idle time and further increase computational efficiency.
Type: Application
Filed: October 5, 2021
Publication date: April 6, 2023
Applicant: Maxim Integrated Products, Inc.
Inventors: Mark Alan Lovell, Robert Michael Muchsel
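A small sketch of the placement decision described above: each data block goes to the distributed memory with the lowest access-cost metric that still has room. The memory names, costs, and block sizes are hypothetical and not taken from the application.

```python
# Illustrative sketch only: choose a target memory per block using a simple
# access-cost metric, subject to available capacity.

memories = [
    {"name": "near_sram", "free": 4, "access_cost": 1},
    {"name": "far_sram",  "free": 8, "access_cost": 3},
]

def place(block_size):
    """Return the cheapest memory that can still hold the block."""
    candidates = [m for m in memories if m["free"] >= block_size]
    target = min(candidates, key=lambda m: m["access_cost"])
    target["free"] -= block_size
    return target["name"]

for size in [2, 2, 2, 2]:
    print(size, "->", place(size))   # near_sram fills up, then far_sram is used
```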
-
Patent number: 11610095
Abstract: An energy-efficient sequencer comprising inline multipliers and adders causes a read source that contains matching values to output an enable signal to enable a data item prior to using a multiplier to multiply the data item with a weight to obtain a product for use in a matrix-multiplication in hardware. A second enable signal causes the output to be written to the data item.
Type: Grant
Filed: October 1, 2019
Date of Patent: March 21, 2023
Assignee: Maxim Integrated Products, Inc.
Inventors: Mark Alan Lovell, Robert Michael Muchsel, Donald Wood Loomis, III
-
Publication number: 20230077454
Abstract: Dynamic data-dependent neural network processing systems and methods increase computational efficiency in neural network processing by uniquely processing data based on the data itself and/or configuration parameters for processing the data. In embodiments, this is accomplished by receiving, at a controller, input data that is to be processed by a first device in a first layer of a sequence of processing layers of a neural network using a first set of parameters. The input data is analyzed to determine whether to modify it, whether processing the (modified) data in a second layer would conserve at least one computational resource, or whether to apply a different set of parameters. Depending on the determination, the sequence of processing layers is modified, and the (modified) data are processed according to the modified sequence to reduce data movements and transitions, thereby conserving computational resources.
Type: Application
Filed: September 10, 2021
Publication date: March 16, 2023
Applicant: Maxim Integrated Products, Inc.
Inventors: Mark Alan Lovell, Robert Michael Muchsel
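A toy sketch of the data-dependent rerouting described above: the input is inspected and, if it is mostly zeros, routed through a shorter, cheaper layer sequence. The "mostly_zero" test and the layer functions are hypothetical stand-ins, not the analysis the application describes.

```python
# Illustrative sketch only: modify the layer sequence based on the data itself.

def conv_layer(x):   return [v * 2 for v in x]
def cheap_layer(x):  return [v + 1 for v in x]

def run(x, layers):
    for layer in layers:
        x = layer(x)
    return x

def run_data_dependent(x):
    """If the input is mostly zeros, route it through a cheaper sequence."""
    mostly_zero = sum(1 for v in x if v == 0) > len(x) // 2
    layers = [cheap_layer] if mostly_zero else [conv_layer, cheap_layer]
    return run(x, layers)

print(run_data_dependent([0, 0, 0, 5]))   # sparse input: shortened sequence
print(run_data_dependent([1, 2, 3, 4]))   # dense input: full sequence
```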
-
Publication number: 20230079229
Abstract: Non-intrusive, low-cost systems and methods allow designers to reduce headroom and safety margin requirements in the context of compute circuits, such as machine learning circuits, without increasing footprint or having to sacrifice computing capacity and other valuable resources. Various embodiments accomplish this by taking advantage of certain properties of machine learning circuits and using a convolutional neural network (CNN) as a diagnostic tool for evaluating circuit behavior and adjusting circuit parameters to fully exploit available computing resources.
Type: Application
Filed: September 10, 2021
Publication date: March 16, 2023
Applicant: Maxim Integrated Products, Inc.
Inventors: Mark Alan Lovell, Robert Michael Muchsel
-
Publication number: 20220413590
Abstract: Systems and methods increase computational efficiency in machine learning accelerators. In embodiments, this is accomplished by evaluating, partitioning, and selecting computational resources to uniquely process, accumulate, and store data based on the type of the data and the configuration parameters that are used to process the data. Various embodiments take advantage of the zeroing feature of a Built-In Self-Test (BIST) controller to cause a BIST circuit to create a known state for a hardware accelerator, e.g., during a startup and/or wakeup phase, thereby reducing data movements and transitions to save both time and energy.
Type: Application
Filed: June 23, 2021
Publication date: December 29, 2022
Applicant: Maxim Integrated Products, Inc.
Inventors: Mark Alan Lovell, Robert Michael Muchsel
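A very small model of the benefit described above: starting from a known all-zero state means writes of zero values can simply be skipped. The memory model and sizes are hypothetical; the filing describes the mechanism at the BIST-circuit level.

```python
# Illustrative sketch only: a known zeroed state lets zero-valued writes be skipped.

def bist_zero(size):
    """Model of a BIST pass that leaves memory in a known all-zero state."""
    return [0] * size

def load(memory, values):
    """Skip writes whose value already matches the known zeroed state."""
    writes = 0
    for i, v in enumerate(values):
        if v != 0:
            memory[i] = v
            writes += 1
    return writes

mem = bist_zero(8)
print(load(mem, [0, 7, 0, 0, 3, 0, 0, 0]), "writes")   # only 2 of 8 writes needed
print(mem)
```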
-
Publication number: 20220397954
Abstract: Described are context-aware low-power systems and methods that reduce power consumption in compute circuits such as commonly available machine learning hardware accelerators that carry out a large number of arithmetic operations when performing convolution operations and related computations. Various embodiments exploit the fact that power demand for a series of computation steps and many other functions a hardware accelerator performs is highly deterministic, thus allowing energy needs to be anticipated or even calculated to a certain degree. Accordingly, power supply output may be optimized according to the actual energy needs of compute circuits. In certain embodiments, this is accomplished by proactively and dynamically adjusting power-related parameters according to high-power and low-power operations to benefit a machine learning circuit and to avoid wasting valuable power resources, especially in embedded computing systems.
Type: Application
Filed: August 18, 2022
Publication date: December 15, 2022
Applicant: Maxim Integrated Products, Inc.
Inventors: Mark Alan Lovell, Robert Michael Muchsel
-
Publication number: 20220382361
Abstract: In-flight operations in an inbound data path from a source memory to a convolution hardware circuit increase computational throughput when performing convolution calculations, such as pooling and element-wise operations. Various operations may be performed in-line within an outbound data path to a target memory. Advantageously, this drastically reduces extraneous memory access and associated read-write operations, thereby significantly reducing overall power consumption in a computing system.
Type: Application
Filed: May 25, 2021
Publication date: December 1, 2022
Applicant: Maxim Integrated Products, Inc.
Inventors: Mark Alan Lovell, Robert Michael Muchsel
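A software analogue of the in-flight processing described above: a pooling step is applied to the data stream as it flows toward the compute stage, instead of writing an intermediate buffer and reading it back. The window size and the max-pool choice are assumptions for illustration.

```python
# Illustrative sketch only: apply pooling "in flight" on the inbound stream.

def stream_with_inflight_pool(source, window=2):
    """Yield max-pooled values directly from the inbound data stream."""
    buf = []
    for value in source:
        buf.append(value)
        if len(buf) == window:
            yield max(buf)      # pooled value goes straight to the consumer
            buf.clear()

source = iter([3, 1, 4, 1, 5, 9, 2, 6])
print(list(stream_with_inflight_pool(source)))   # -> [3, 4, 9, 6]
```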
-
Publication number: 20220366225
Abstract: Systems and methods allow existing hardware, such as commonly available hardware accelerators, to process fully connected network (FCN) layers in an energy-efficient manner and without having to implement additional expensive hardware. Various embodiments accomplish this by using a “flattening” method that converts a channel associated with a number of pixels into a number of channels that equals the number of pixels.
Type: Application
Filed: May 14, 2021
Publication date: November 17, 2022
Applicant: Maxim Integrated Products, Inc.
Inventors: Mark Alan Lovell, Robert Michael Muchsel
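A small sketch of the flattening step described above: a feature map of C channels with N pixels each becomes C*N single-pixel channels, so a fully connected layer reduces to the per-channel weighted-sum pattern that convolution-oriented hardware already provides. Shapes and function names are hypothetical.

```python
# Illustrative sketch only: flatten channels so an FCN layer maps onto a
# pointwise (1x1-style) multiply-accumulate pattern.

def flatten_channels(feature_map):
    """[[pixels of ch0], [pixels of ch1], ...] -> one value per new channel."""
    return [pixel for channel in feature_map for pixel in channel]

def fc_as_pointwise(flat, weight_rows):
    """Each output neuron is a weighted sum over the single-pixel channels."""
    return [sum(w * x for w, x in zip(row, flat)) for row in weight_rows]

fmap = [[1, 2], [3, 4]]                 # 2 channels x 2 pixels
flat = flatten_channels(fmap)           # 4 channels x 1 pixel each
weights = [[1, 0, 1, 0], [0, 1, 0, 1]]  # 2 output neurons
print(fc_as_pointwise(flat, weights))   # -> [4, 6]
```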
-
Publication number: 20220366261
Abstract: Storage-efficient, low-cost systems and methods provide embedded systems with the ability to dynamically perform on-device learning to modify or customize a trained model to improve computing and detection accuracy in small-scale devices. In certain embodiments, this is accomplished by repurposing storage elements from inference to training and performing partial back-propagation in embedded devices in the final layers of an existing network. In various embodiments, replacing weights in the final layers while using hardware components to iteratively perform forward-propagation calculations advantageously reduces the need to store intermediate results, thus allowing for on-device training without significantly increasing hardware requirements or requiring excessive computational memory resources when compared to conventional machine learning methods.
Type: Application
Filed: May 14, 2021
Publication date: November 17, 2022
Applicant: Maxim Integrated Products, Inc.
Inventors: Mark Alan Lovell, Robert Michael Muchsel, Brian Gregory Rush
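A toy illustration of training only the final layer, as described above: the earlier layers stay frozen and only the last-layer weights are updated. The frozen feature extractor, squared-error loss, and learning rate are hypothetical choices, not details from the application.

```python
# Illustrative sketch only: on-device adaptation that updates just the final layer.

def frozen_features(x):
    """Stand-in for the fixed, already-trained part of the network."""
    return [x, x * x]

def predict(w, feats):
    return sum(wi * fi for wi, fi in zip(w, feats))

def train_last_layer(samples, w, lr=0.01, epochs=2000):
    """Gradient steps on the final-layer weights w only."""
    for _ in range(epochs):
        for x, target in samples:
            feats = frozen_features(x)          # forward pass through frozen layers
            err = predict(w, feats) - target
            w = [wi - lr * err * fi for wi, fi in zip(w, feats)]
    return w

samples = [(1.0, 2.0), (2.0, 6.0), (3.0, 12.0)]   # consistent with y = x + x*x
w = train_last_layer(samples, [0.0, 0.0])
print([round(wi, 2) for wi in w])                  # -> close to [1.0, 1.0]
```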
-
Publication number: 20220334634
Abstract: Systems and methods reduce power consumption in embedded machine learning hardware accelerators and enable cost-effective embedded at-the-edge machine learning and related applications. In various embodiments, this may be accomplished by using hardware accelerators that comprise a programmable pre-processing circuit that operates in the same clock domain as the accelerator. In some embodiments, tightly coupled data-loading first-in-first-out registers (FIFOs) eliminate clock synchronization issues and reduce unnecessary address writes. In other embodiments, a data transformation may gather source data bits in a manner that allows loading full words of native bus width to reduce the number of writes and, thus, overall power consumption.
Type: Application
Filed: April 16, 2021
Publication date: October 20, 2022
Applicant: Maxim Integrated Products, Inc.
Inventors: Mark Alan Lovell, Robert Michael Muchsel
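A sketch of the word-gathering transformation mentioned above: narrow source samples are packed into full native-bus-width words so each write carries a complete word. The 8-bit sample width, 32-bit bus, and little-endian packing are assumptions for illustration.

```python
# Illustrative sketch only: pack 8-bit samples into 32-bit words to cut the
# number of bus writes.

def pack_bytes_to_words(samples):
    """Pack 8-bit samples into 32-bit words, four samples per write."""
    words = []
    for i in range(0, len(samples), 4):
        chunk = samples[i:i + 4] + [0] * (4 - len(samples[i:i + 4]))  # pad tail
        word = 0
        for j, s in enumerate(chunk):
            word |= (s & 0xFF) << (8 * j)
        words.append(word)
    return words

samples = [0x11, 0x22, 0x33, 0x44, 0x55]
print([hex(w) for w in pack_bytes_to_words(samples)])  # 2 writes instead of 5
```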
-
Patent number: 11449126
Abstract: Described are context-aware low-power systems and methods that reduce power consumption in compute circuits such as commonly available machine learning hardware accelerators that carry out a large number of arithmetic operations when performing convolution operations and related computations. Various embodiments exploit the fact that power demand for a series of computation steps and many other functions a hardware accelerator performs is highly deterministic, thus allowing energy needs to be anticipated or even calculated to a certain degree. Accordingly, power supply output may be optimized according to the actual energy needs of compute circuits. In certain embodiments, this is accomplished by proactively and dynamically adjusting power-related parameters according to high-power and low-power operations to benefit a machine learning circuit and to avoid wasting valuable power resources, especially in embedded computing systems.
Type: Grant
Filed: June 1, 2021
Date of Patent: September 20, 2022
Assignee: Maxim Integrated Products, Inc.
Inventors: Mark Alan Lovell, Robert Michael Muchsel
-
Publication number: 20210216868
Abstract: Described herein are systems and methods for efficiently processing large amounts of data when performing complex neural network operations, such as convolution and pooling operations. Given cascaded convolutional neural network layers, various embodiments allow for commencing processing of a downstream layer prior to completing processing of a current or previous network layer. In certain embodiments, this is accomplished by utilizing a handshaking mechanism or asynchronous logic to determine an active neural network layer in a neural network and using that active layer to process a subset of a set of input data of a first layer prior to processing all of the set of input data.
Type: Application
Filed: December 21, 2020
Publication date: July 15, 2021
Applicant: Maxim Integrated Products, Inc.
Inventors: Mark Alan Lovell, Robert Michael Muchsel
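A small model of the cascaded processing described above: the downstream layer begins work on whatever subset of the previous layer's output is already available. Python generators stand in here for the handshaking between layers; the layer functions are hypothetical.

```python
# Illustrative sketch only: a downstream layer consumes results as soon as the
# upstream layer produces them, rather than waiting for the whole layer.

def layer_one(inputs):
    for x in inputs:
        print(f"layer 1 produced {x * 2}")
        yield x * 2                      # hand each result downstream immediately

def layer_two(stream):
    for y in stream:
        print(f"layer 2 consumed {y}, produced {y + 1}")
        yield y + 1

result = list(layer_two(layer_one([1, 2, 3])))
print(result)   # interleaved prints show layer 2 starting before layer 1 finishes
```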
-
Patent number: 10771062
Abstract: Presented are systems and methods that allow hardware designers to protect valuable IP and information in the hardware domain in order to increase overall system security. In various embodiments of the invention, this is accomplished by configuring logic gates of existing logic circuitry based on a key input. In certain embodiments, a logic function provides results that are dependent not only on input values but also on an encrypted logic key that determines connections for a given logic building block, such that the functionality of the logic function cannot be determined by reverse engineering. In some embodiments, the logic key is created by decrypting a piece of data using a secret or private key. Advantages of automatic encryption include that existing circuitry need not be re-implemented or rebuilt, and that the systems and methods presented are backward compatible with standard manufacturing tools.
Type: Grant
Filed: August 9, 2018
Date of Patent: September 8, 2020
Assignee: Maxim Integrated Products, Inc.
Inventors: Robert Michael Muchsel, Donald Wood Loomis, III, Edward Tangkwai Ma, Hung Thanh Nguyen, Nancy Kow Iida, Mark Alan Lovell
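A toy sketch in the spirit of the key-dependent logic described above: each key bit decides whether a stage inverts its signal, so the block only computes the intended function when the correct key is supplied. The 3-bit function and the key value are hypothetical, not the connection scheme the patent claims.

```python
# Illustrative sketch only: a small key-gated logic block. With the wrong key,
# the function computes different results for some inputs.

def locked_logic(a, b, c, key):
    """Three cascaded stages, each XOR-gated by one key bit."""
    s1 = (a & b) ^ (key & 1)
    s2 = (s1 | c) ^ ((key >> 1) & 1)
    s3 = (s2 ^ a) ^ ((key >> 2) & 1)
    return s3

CORRECT_KEY = 0b101   # e.g. obtained by decrypting stored data with a secret key

for bits in [(0, 0, 1), (1, 1, 0), (1, 0, 1)]:
    good = locked_logic(*bits, CORRECT_KEY)
    bad = locked_logic(*bits, 0b010)
    print(bits, "correct key ->", good, "| wrong key ->", bad)
```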
-
Publication number: 20200110979
Abstract: An energy-efficient sequencer comprising inline multipliers and adders causes a read source that contains matching values to output an enable signal to enable a data item prior to using a multiplier to multiply the data item with a weight to obtain a product for use in a matrix-multiplication in hardware. A second enable signal causes the output to be written to the data item.
Type: Application
Filed: October 1, 2019
Publication date: April 9, 2020
Applicant: Maxim Integrated Products, Inc.
Inventors: Mark Alan Lovell, Robert Michael Muchsel, Donald Wood Loomis, III
-
Publication number: 20200110604
Abstract: Described herein are systems and methods that increase the utilization and performance of computational resources, such as storage space and computation time, thereby reducing computational cost. Various embodiments of the invention provide a hardware structure that allows both streaming of source data, which eliminates redundant data transfer, and in-memory computations, which eliminate the need for data transfer to and from intermediate storage. In certain embodiments, computational cost is reduced by using a hardware structure that enables mathematical operations, such as the element-wise matrix multiplications employed by convolutional neural networks, to be performed automatically and efficiently.
Type: Application
Filed: October 1, 2019
Publication date: April 9, 2020
Applicant: Maxim Integrated Products, Inc.
Inventors: Mark Alan Lovell, Robert Michael Muchsel
-
Patent number: 10063231
Abstract: Presented are systems and methods that allow hardware designers to protect valuable IP and information in the hardware domain in order to increase overall system security. In various embodiments of the invention, this is accomplished by configuring logic gates of existing logic circuitry based on a key input. In certain embodiments, a logic function provides results that are dependent not only on input values but also on an encrypted logic key that determines connections for a given logic building block, such that the functionality of the logic function cannot be determined by reverse engineering. In some embodiments, the logic key is created by decrypting a piece of data using a secret or private key. Advantages of automatic encryption include that existing circuitry need not be re-implemented or rebuilt, and that the systems and methods presented are backward compatible with standard manufacturing tools.
Type: Grant
Filed: July 10, 2017
Date of Patent: August 28, 2018
Assignee: Maxim Integrated Products, Inc.
Inventors: Robert Michael Muchsel, Donald Wood Loomis, III, Edward Tangkwai Ma, Hung Thanh Nguyen, Nancy Kow Iida, Mark Alan Lovell