Patents by Inventor Robert Michael Muchsel

Robert Michael Muchsel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11829864
    Abstract: An energy-efficient multiplication circuit uses analog multipliers and adders to reduce the distance that data has to move and the number of times that the data has to be moved when performing matrix multiplications in the analog domain. The multiplication circuit is tailored to bitwise multiply the innermost product of a rearranged matrix formula to generate a matrix multiplication result in the form of a current that is then digitized for further processing.
    Type: Grant
    Filed: November 8, 2022
    Date of Patent: November 28, 2023
    Assignee: Analog Devices, Inc.
    Inventors: Sung Ung Kwak, Robert Michael Muchsel
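
The bit-level decomposition referenced in the abstract above can be illustrated in software: the matrix product is rearranged so that the innermost operations are single-bit multiplications whose partial sums, weighted by powers of two, reconstruct the full result (in the patented circuit those partial sums would accumulate as analog currents rather than digital integers). The following is a minimal digital sketch of that idea; the function name and bit widths are illustrative, not taken from the patent.

```python
import numpy as np

def bitplane_matmul(A, B, a_bits=8, b_bits=8):
    """Compute A @ B for unsigned integer matrices by decomposing the
    product into bit-plane (single-bit) multiplications.

    The innermost products are 1-bit operations; their partial sums are
    weighted by powers of two and accumulated, mirroring in digital form
    the bitwise analog accumulation described in the abstract.
    """
    acc = np.zeros((A.shape[0], B.shape[1]), dtype=np.int64)
    for i in range(a_bits):
        a_plane = (A >> i) & 1          # i-th bit plane of A
        for j in range(b_bits):
            b_plane = (B >> j) & 1      # j-th bit plane of B
            # 1-bit "multiplications" summed along the inner dimension;
            # in the analog circuit this sum would appear as a current.
            acc += (a_plane @ b_plane) << (i + j)
    return acc

# Quick check against a direct integer matrix multiplication.
rng = np.random.default_rng(0)
A = rng.integers(0, 256, size=(4, 6), dtype=np.int64)
B = rng.integers(0, 256, size=(6, 3), dtype=np.int64)
assert np.array_equal(bitplane_matmul(A, B), A @ B)
```
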
  • Patent number: 11797994
    Abstract: Various embodiments of the present disclosure provide systems and methods for securing electronic devices, such as financial payment terminals, to protect sensitive data and prevent unauthorized access to confidential information. In embodiments, this is achieved without having to rely on the availability of backup energy sources. In certain embodiments, tampering attempts are thwarted by using a virtually perfect PUF circuit and PUF-generated secret or private key within a payment terminal that does not require a battery backup system and, thus, eliminates the cost associated with common battery-backed security systems. In certain embodiments, during regular operation, sensors constantly monitor the to-be-protected electronic device for tampering attempts and physical attacks to ensure its physical integrity.
    Type: Grant
    Filed: July 12, 2017
    Date of Patent: October 24, 2023
    Assignee: Maxim Integrated Products, Inc.
    Inventors: Robert Michael Muchsel, Gregory Guez
  • Publication number: 20230324980
    Abstract: Described are context-aware low-power systems and methods that reduce power consumption in compute circuits such as commonly available machine learning hardware accelerators that carry out a large number of arithmetic operations when performing convolution operations and related computations. Various embodiments exploit the fact that power demand for a series of computation steps and many other functions a hardware accelerator performs is highly deterministic, thus, allowing for energy needs to be anticipated or even calculated to a certain degree. Accordingly, power supply output may be optimized according to actual energy needs of compute circuits. In certain embodiments this is accomplished by proactively and dynamically adjusting power-related parameters according to high-power and low-power operations to benefit a machine learning circuit and to avoid wasting valuable power resources, especially in embedded computing systems.
    Type: Application
    Filed: June 8, 2023
    Publication date: October 12, 2023
    Applicant: Maxim Integrated Products, Inc.
    Inventors: Mark Alan Lovell, Robert Michael Muchsel
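
Because the abstract above rests on the observation that an accelerator's per-step energy demand is deterministic, a rough software model of the idea is possible: precompute the expected energy of each scheduled step from the known workload and derive a power setting ahead of time instead of reacting to load transients. The sketch below is a hypothetical illustration; the energy constants, threshold, and field names are assumptions, not values from the patent.

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    macs: int          # arithmetic operations in this step (known ahead of time)
    weight_bytes: int  # data moved for this step

def power_schedule(steps, mac_energy_pj=0.5, byte_energy_pj=10.0, step_time_us=100.0):
    """Derive a per-step power target from a deterministic workload description.

    Because the sequence of convolution steps is fixed by the network
    configuration, the expected energy of each step can be computed up front
    and used to raise or lower supply settings proactively. The energy
    constants and the 1 mW high/low threshold are purely illustrative.
    """
    schedule = []
    for s in steps:
        energy_pj = s.macs * mac_energy_pj + s.weight_bytes * byte_energy_pj
        power_mw = energy_pj / step_time_us * 1e-3   # pJ/us equals uW; scale to mW
        schedule.append((s.name, "high" if power_mw > 1.0 else "low", power_mw))
    return schedule

steps = [Step("conv1", 2_000_000, 9_408), Step("pool1", 50_000, 0), Step("conv2", 8_000_000, 36_864)]
for name, mode, mw in power_schedule(steps):
    print(f"{name}: {mode}-power phase, ~{mw:.2f} mW")
```
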
  • Patent number: 11747887
    Abstract: Described are context-aware low-power systems and methods that reduce power consumption in compute circuits such as commonly available machine learning hardware accelerators that carry out a large number of arithmetic operations when performing convolution operations and related computations. Various embodiments exploit the fact that power demand for a series of computation steps and many other functions a hardware accelerator performs is highly deterministic, thus, allowing for energy needs to be anticipated or even calculated to a certain degree. Accordingly, power supply output may be optimized according to actual energy needs of compute circuits. In certain embodiments this is accomplished by proactively and dynamically adjusting power-related parameters according to high-power and low-power operations to benefit a machine learning circuit and to avoid wasting valuable power resources, especially in embedded computing systems.
    Type: Grant
    Filed: August 18, 2022
    Date of Patent: September 5, 2023
    Assignee: Maxim Integrated Products, Inc.
    Inventors: Mark Alan Lovell, Robert Michael Muchsel
  • Patent number: 11709911
    Abstract: Described herein are systems and methods that increase the utilization and performance of computational resources, such as storage space and computation time, thereby, reducing computational cost. Various embodiments of the invention provide for a hardware structure that allows both streaming of source data that eliminates redundant data transfer and allows for in-memory computations that eliminate requirements for data transfer to and from intermediate storage. In certain embodiments, computational cost is reduced by using a hardware structure that enables mathematical operations, such as element-wise matrix multiplications employed by convolutional neural networks, to be performed automatically and efficiently.
    Type: Grant
    Filed: October 1, 2019
    Date of Patent: July 25, 2023
    Assignee: Maxim Integrated Products, Inc.
    Inventors: Mark Alan Lovell, Robert Michael Muchsel
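
A software analogue of the streaming, in-memory computation described above is to consume source data as it arrives and accumulate element-wise multiply results directly into the output buffer, so no intermediate product tensor is ever written out and re-read. The sketch below shows this for a small 2-D convolution; the shapes and names are illustrative only.

```python
import numpy as np

def streaming_conv2d(input_rows, weights, out_h, out_w):
    """Consume input rows one at a time and accumulate convolution results
    directly into the output buffer, without materializing intermediate
    im2col matrices or product tensors.

    `input_rows` is any iterable yielding rows of shape (in_w,); `weights`
    has shape (k, k). Dimensions and names are illustrative.
    """
    k = weights.shape[0]
    out = np.zeros((out_h, out_w))
    window = []                       # holds the last k input rows
    for r, row in enumerate(input_rows):
        window.append(row)
        if len(window) < k:
            continue
        oy = r - k + 1                # output row completed by this input row
        for ox in range(out_w):
            patch = np.stack(window)[:, ox:ox + k]
            out[oy, ox] += np.sum(patch * weights)   # element-wise multiply-accumulate
        window.pop(0)
    return out

# The streamed result matches a straightforward (buffered) convolution.
rng = np.random.default_rng(1)
x = rng.standard_normal((6, 6))
w = rng.standard_normal((3, 3))
ref = np.array([[np.sum(x[i:i+3, j:j+3] * w) for j in range(4)] for i in range(4)])
assert np.allclose(streaming_conv2d(iter(x), w, 4, 4), ref)
```
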
  • Publication number: 20230222315
    Abstract: An energy-efficient sequencer comprising inline multipliers and adders causes a read source that contains matching values to output an enable signal to enable a data item prior to using a multiplier to multiply the data item with a weight to obtain a product for use in a matrix-multiplication in hardware. A second enable signal causes the output to be written to the data item.
    Type: Application
    Filed: February 27, 2023
    Publication date: July 13, 2023
    Applicant: Maxim Integrated Products, Inc.
    Inventors: Mark Alan Lovell, Robert Michael Muchsel, Donald Wood Loomis, III
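
One way to picture the enable-gated sequencing described in the abstract above is a multiply-accumulate loop in which a data item reaches the multiplier only when its read-enable is asserted and the product is written back only when a second, write-enable signal is asserted. The sketch below is a hypothetical software stand-in for that behavior; the signal names and list-based "memories" are assumptions.

```python
def gated_mac(data, weights, read_enable, write_enable, out):
    """Sequence multiply-accumulate operations under explicit enable signals.

    A data item is multiplied with its weight only when its read-enable is
    asserted, and the product is written back only when the corresponding
    write-enable is asserted; disabled items cause no multiplier activity
    and no memory write.
    """
    for i, (x, w) in enumerate(zip(data, weights)):
        if not read_enable[i]:
            continue                 # the multiplier never sees this item
        product = x * w
        if write_enable[i]:
            out[i] += product        # write-back gated by the second enable
    return out

out = gated_mac([3, 0, 5, 7], [2, 4, 6, 8],
                read_enable=[True, False, True, True],
                write_enable=[True, True, False, True],
                out=[0, 0, 0, 0])
print(out)   # [6, 0, 0, 56]
```
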
  • Publication number: 20230121532
    Abstract: An energy-efficient multiplication circuit uses analog multipliers and adders to reduce the distance that data has to move and the number of times that the data has to be moved when performing matrix multiplications in the analog domain. The multiplication circuit is tailored to bitwise multiply the innermost product of a rearranged matrix formula to generate a matrix multiplication result in the form of a current that is then digitized for further processing.
    Type: Application
    Filed: November 8, 2022
    Publication date: April 20, 2023
    Applicant: Analog Devices, Inc.
    Inventors: Sung Ung Kwak, Robert Michael Muchsel
  • Publication number: 20230108883
    Abstract: Low-power systems and methods increase computational efficiency in neural network processing by allowing hardware accelerators to perform processing steps on large amounts of data at reduced execution times without significantly increasing hardware cost. In various embodiments, this is accomplished by accessing locations in a source memory coupled to a hardware accelerator and using a resource optimizer that, based on storage availability and network parameters, determines target locations in a number of distributed memory elements. The target storage locations are selected according to one or more memory access metrics to reduce power consumption. A read/write synchronizer then schedules simultaneous read and write operations to reduce idle time and further increase computational efficiency.
    Type: Application
    Filed: October 5, 2021
    Publication date: April 6, 2023
    Applicant: Maxim Integrated Products, Inc.
    Inventors: Mark Alan Lovell, Robert Michael Muchsel
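
The resource-optimizer idea in the abstract above can be approximated in software as a placement problem: assign each buffer to the distributed memory instance with the lowest access cost that still has room. The greedy sketch below is purely illustrative; the cost model, sizes, and memory names are assumptions rather than details from the application.

```python
def place_outputs(layer_outputs, memories):
    """Greedily assign each layer's output buffer to the distributed memory
    instance with the lowest access cost that still has room.

    `layer_outputs` maps a layer name to its required bytes; `memories` is a
    list of dicts with 'free' bytes and a relative 'access_cost' (for example,
    energy per access). Names and the cost model are illustrative.
    """
    placement = {}
    for name, size in sorted(layer_outputs.items(), key=lambda kv: -kv[1]):
        candidates = [m for m in memories if m['free'] >= size]
        if not candidates:
            raise MemoryError(f"no memory instance can hold {name}")
        best = min(candidates, key=lambda m: m['access_cost'])
        best['free'] -= size           # reserve space in the chosen memory
        placement[name] = best['name']
    return placement

memories = [{'name': 'sram0', 'free': 32768, 'access_cost': 1.0},
            {'name': 'sram1', 'free': 32768, 'access_cost': 1.2},
            {'name': 'ext',   'free': 1 << 20, 'access_cost': 8.0}]
print(place_outputs({'conv1': 16384, 'conv2': 24576, 'fc': 4096}, memories))
```
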
  • Patent number: 11610095
    Abstract: An energy-efficient sequencer comprising inline multipliers and adders causes a read source that contains matching values to output an enable signal to enable a data item prior to using a multiplier to multiply the data item with a weight to obtain a product for use in a matrix-multiplication in hardware. A second enable signal causes the output to be written to the data item.
    Type: Grant
    Filed: October 1, 2019
    Date of Patent: March 21, 2023
    Assignee: Maxim Integrated Products, Inc.
    Inventors: Mark Alan Lovell, Robert Michael Muchsel, Donald Wood Loomis, III
  • Publication number: 20230077454
    Abstract: Dynamic data-dependent neural network processing systems and methods increase computational efficiency in neural network processing by uniquely processing data based on the data itself and/or configuration parameters for processing the data. In embodiments, this is accomplished by receiving, at a controller, input data that is to be processed by a first device in a first layer of a sequence of processing layers of a neural network using a first set of parameters. The input data is analyzed to determine whether to modify it, whether processing the (modified) data in a second layer would conserve at least one computational resource, or whether to apply a different set of parameters. Depending on the determination, the sequence of processing layers is modified, and the (modified) data are processed according to the modified sequence to reduce data movements and transitions, thereby, conserving computational resources.
    Type: Application
    Filed: September 10, 2021
    Publication date: March 16, 2023
    Applicant: Maxim Integrated Products, Inc.
    Inventors: Mark Alan Lovell, Robert Michael Muchsel
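
The data-dependent sequence modification described above can be sketched as a controller that inspects each incoming activation map and bypasses layers marked as skippable when the data carries little information, trading a small change in output for saved computation. The example below is a hypothetical illustration; the activity metric, threshold, and layer set are assumptions.

```python
import numpy as np

def run_dynamic(x, layers, skippable, activity_threshold=0.05):
    """Run a sequence of layer functions, dropping layers marked as skippable
    whenever the incoming activation map is nearly inactive.

    The activity test (mean absolute activation) and the threshold are
    illustrative; the point is that the processing sequence is modified per
    input, so low-information data takes a cheaper path.
    """
    for layer, can_skip in zip(layers, skippable):
        if can_skip and np.mean(np.abs(x)) < activity_threshold:
            continue                  # conserve compute: bypass this layer
        x = layer(x)
    return x

relu = lambda x: np.maximum(x, 0.0)
layers = [lambda x: x * 2.0, relu, lambda x: x - 1.0]
skippable = [False, True, False]
print(run_dynamic(np.array([0.01, -0.02]), layers, skippable))  # ReLU skipped
print(run_dynamic(np.array([1.0, 2.0]), layers, skippable))     # full sequence
```
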
  • Publication number: 20230079229
    Abstract: Non-intrusive, low-cost systems and methods allow designers to reduce headroom and safety margin requirements in the context of compute circuits, such as machine learning circuits, without increasing footprint or having to sacrifice computing capacity and other valuable resources. Various embodiments accomplish this by taking advantage of certain properties of machine learning circuits and using a CNN as a diagnostic tool for evaluating circuit behavior and adjusting circuit parameters to fully exploit available computing resources.
    Type: Application
    Filed: September 10, 2021
    Publication date: March 16, 2023
    Applicant: Maxim Integrated Products, Inc.
    Inventors: Mark Alan Lovell, Robert Michael Muchsel
  • Publication number: 20220413590
    Abstract: Systems and methods increase computational efficiency in machine learning accelerators. In embodiments, this is accomplished by evaluating, partitioning, and selecting computational resources to uniquely process, accumulate, and store data based on the type of the data and configuration parameters that are used to process the data. Various embodiments take advantage of the zeroing feature of a Built-In Self-Test (BIST) controller to cause a BIST circuit to create a known state for a hardware accelerator, e.g., during a startup and/or wakeup phase, thereby, reducing data movements and transitions to save both time and energy.
    Type: Application
    Filed: June 23, 2021
    Publication date: December 29, 2022
    Applicant: Maxim Integrated Products, Inc.
    Inventors: Mark Alan Lovell, Robert Michael Muchsel
  • Publication number: 20220397954
    Abstract: Described are context-aware low-power systems and methods that reduce power consumption in compute circuits such as commonly available machine learning hardware accelerators that carry out a large number of arithmetic operations when performing convolution operations and related computations. Various embodiments exploit the fact that power demand for a series of computation steps and many other functions a hardware accelerator performs is highly deterministic, thus, allowing for energy needs to be anticipated or even calculated to a certain degree. Accordingly, power supply output may be optimized according to actual energy needs of compute circuits. In certain embodiments this is accomplished by proactively and dynamically adjusting power-related parameters according to high-power and low-power operations to benefit a machine learning circuit and to avoid wasting valuable power resources, especially in embedded computing systems.
    Type: Application
    Filed: August 18, 2022
    Publication date: December 15, 2022
    Applicant: Maxim Integrated Products, Inc.
    Inventors: Mark Alan Lovell, Robert Michael Muchsel
  • Publication number: 20220382361
    Abstract: In-flight operations in an inbound data path from a source memory to a convolution hardware circuit increase computational throughput when performing convolution calculations, such as pooling and element-wise operations. Various operations may be performed in-line within an outbound data path to a target memory. Advantageously, this drastically reduces extraneous memory access and associated read-write operations, thereby, significantly reducing overall power consumption in a computing system.
    Type: Application
    Filed: May 25, 2021
    Publication date: December 1, 2022
    Applicant: Maxim Integrated Products, Inc.
    Inventors: Mark Alan Lovell, Robert Michael Muchsel
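
The in-flight processing described above can be modeled as a pipeline of generators: a pooling step applied on the inbound path as values stream from source memory, and an element-wise step applied on the outbound path before results reach target memory, so neither stage requires an extra memory round-trip. The sketch below is illustrative; the 1-D pooling, scaling step, and stand-in "convolution" are assumptions.

```python
def inbound_path(source, pool=2):
    """Apply 1-D max pooling 'in flight' while data streams from the source
    memory toward the convolution unit, instead of writing a pooled copy
    back to memory and reading it again.
    """
    buf = []
    for value in source:              # values arrive one at a time
        buf.append(value)
        if len(buf) == pool:
            yield max(buf)            # pooled value goes straight to the consumer
            buf.clear()

def outbound_path(results, scale=0.5):
    """Apply an element-wise operation in-line on the way to target memory."""
    for r in results:
        yield r * scale

source = iter([1, 4, 2, 2, 7, 5, 3, 0])
convolved = (v + 10 for v in inbound_path(source))   # stand-in for the conv unit
print(list(outbound_path(convolved)))                 # [7.0, 6.0, 8.5, 6.5]
```
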
  • Publication number: 20220366261
    Abstract: Storage-efficient, low-cost systems and methods provide embedded systems with the ability to dynamically perform on-device learning to modify or customize a trained model to improve computing and detection accuracy in small-scale devices. In certain embodiments, this is accomplished by repurposing storage elements from inference to training and performing partial back-propagation in embedded devices in the final layers of an existing network. In various embodiments, replacing weights in the final layers while using hardware components to iteratively perform forward-propagation calculations advantageously reduces the need to store intermediate results, thus allowing for on-device training without significantly increasing hardware requirements or requiring excessive computational memory resources when compared to conventional machine learning methods.
    Type: Application
    Filed: May 14, 2021
    Publication date: November 17, 2022
    Applicant: Maxim Integrated Products, Inc.
    Inventors: Mark Alan Lovell, Robert Michael Muchsel, Brian Gregory Rush
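
The partial back-propagation scheme described above amounts to keeping the earlier layers of a trained network frozen (they are only run forward, as in inference) and retraining the small final layer on-device. The numpy sketch below fine-tunes a final softmax classifier on frozen features; the layer sizes, learning rate, and toy data are assumptions for illustration.

```python
import numpy as np

def train_final_layer(features, labels, n_classes, lr=0.1, epochs=20):
    """Customize a pre-trained model on-device by retraining only the final
    (fully connected) layer.

    `features` are the frozen outputs of the existing network's earlier
    layers, produced by the inference hardware as usual; only the small
    weight matrix of the last layer is updated, so no intermediate
    activations from earlier layers need to be stored for back-propagation.
    """
    n, d = features.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = features @ W + b
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
        grad = (probs - onehot) / n          # softmax cross-entropy gradient
        W -= lr * (features.T @ grad)        # only final-layer weights change
        b -= lr * grad.sum(axis=0)
    return W, b

# Toy example: 2-D "features" separable by class.
rng = np.random.default_rng(2)
feats = np.vstack([rng.normal(-1, 0.1, (20, 2)), rng.normal(1, 0.1, (20, 2))])
labels = np.array([0] * 20 + [1] * 20)
W, b = train_final_layer(feats, labels, n_classes=2)
print((np.argmax(feats @ W + b, axis=1) == labels).mean())  # expect 1.0
```
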
  • Publication number: 20220366225
    Abstract: Systems and methods allow existing hardware, such as commonly available hardware accelerators, to process fully connected network (FCN) layers in an energy-efficient manner and without having to implement additional expensive hardware. Various embodiments accomplish this by using a “flattening” method that converts a channel associated with a number of pixels into a number of channels that equals the number of pixels.
    Type: Application
    Filed: May 14, 2021
    Publication date: November 17, 2022
    Applicant: Maxim Integrated Products, Inc.
    Inventors: Mark Alan Lovell, Robert Michael Muchsel
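
The "flattening" method described above can be read as reinterpreting a (channels × height × width) feature map as (channels·height·width) single-pixel channels, so a fully connected layer becomes a bank of 1×1 convolutions that existing convolution hardware already supports. The sketch below demonstrates the equivalence; shapes and names are illustrative.

```python
import numpy as np

def fc_as_pointwise_conv(feature_map, fc_weights):
    """Run a fully connected layer on convolution hardware by 'flattening'
    the spatial pixels of each channel into additional channels.

    A (C, H, W) feature map becomes a (C*H*W, 1, 1) map, and the FC weight
    matrix of shape (out_features, C*H*W) becomes a bank of 1x1 convolution
    kernels, so no dedicated FC datapath is required.
    """
    c, h, w = feature_map.shape
    flattened = feature_map.reshape(c * h * w, 1, 1)       # pixels -> channels
    out = np.einsum('oc,chw->ohw', fc_weights, flattened)  # 1x1 convolution
    return out.reshape(-1)                                  # (out_features,)

rng = np.random.default_rng(3)
fmap = rng.standard_normal((8, 4, 4))          # 8 channels of 4x4 pixels
W = rng.standard_normal((10, 8 * 4 * 4))       # FC layer: 128 -> 10
assert np.allclose(fc_as_pointwise_conv(fmap, W), W @ fmap.reshape(-1))
```
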
  • Patent number: 11494625
    Abstract: A novel energy-efficient multiplication circuit using analog multipliers and adders reduces the distance data has to move and the number of times the data has to be moved when performing matrix multiplications in the analog domain. The multiplication circuit is tailored to bitwise multiply the innermost product of a rearranged matrix formula to generate a matrix multiplication result in the form of a current that is then digitized for further processing.
    Type: Grant
    Filed: October 1, 2019
    Date of Patent: November 8, 2022
    Assignee: Maxim Integrated Products, Inc.
    Inventors: Sung Ung Kwak, Robert Michael Muchsel
  • Publication number: 20220334634
    Abstract: Systems and methods reduce power consumption in embedded machine learning hardware accelerators and enable cost-effective embedded at-the-edge machine-learning and related applications. In various embodiments this may be accomplished by using hardware accelerators that comprise a programmable pre-processing circuit that operates in the same clock domain as the accelerator. In some embodiments, tightly coupled data loading first-in-first-out registers (FIFOs) eliminate clock synchronization issues and reduce unnecessary address writes. In other embodiments, a data transformation may gather source data bits in a manner that allows loading full words of native bus width to reduce the number of writes and, thus, overall power consumption.
    Type: Application
    Filed: April 16, 2021
    Publication date: October 20, 2022
    Applicant: Maxim Integrated Products, Inc.
    Inventors: Mark Alan Lovell, Robert Michael Muchsel
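
The word-packing transformation mentioned above (gathering narrow source samples into full native-bus-width words so fewer writes are needed) can be shown in a few lines. The sketch below packs 8-bit samples into 32-bit words; the sample and bus widths are illustrative assumptions.

```python
def pack_samples(samples, sample_bits=8, bus_bits=32):
    """Gather narrow source samples into full native-bus-width words so each
    memory write transfers a complete word instead of one padded sample.

    With 8-bit samples and a 32-bit bus this cuts the number of writes by
    four. Parameters are illustrative.
    """
    per_word = bus_bits // sample_bits
    mask = (1 << sample_bits) - 1
    words = []
    for i in range(0, len(samples), per_word):
        word = 0
        for j, s in enumerate(samples[i:i + per_word]):
            word |= (s & mask) << (j * sample_bits)   # little-endian packing
        words.append(word)
    return words

print([hex(w) for w in pack_samples([0x11, 0x22, 0x33, 0x44, 0x55])])
# ['0x44332211', '0x55']
```
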
  • Patent number: 11449126
    Abstract: Described are context-aware low-power systems and methods that reduce power consumption in compute circuits such as commonly available machine learning hardware accelerators that carry out a large number of arithmetic operations when performing convolution operations and related computations. Various embodiments exploit the fact that power demand for a series of computation steps and many other functions a hardware accelerator performs is highly deterministic, thus, allowing for energy needs to be anticipated or even calculated to a certain degree. Accordingly, power supply output may be optimized according to actual energy needs of compute circuits. In certain embodiments this is accomplished by proactively and dynamically adjusting power-related parameters according to high-power and low-power operations to benefit a machine learning circuit and to avoid wasting valuable power resources, especially in embedded computing systems.
    Type: Grant
    Filed: June 1, 2021
    Date of Patent: September 20, 2022
    Assignee: Maxim Integrated Products, Inc.
    Inventors: Mark Alan Lovell, Robert Michael Muchsel
  • Patent number: 11341472
    Abstract: Various embodiments of the present invention relate to a point-of-sale (POS) system, and more particularly, to systems, devices and methods of making secure payments using a mobile device in addition to a POS terminal that may be an insecure payment device exposed to various tamper attempts under certain circumstances. The mobile device is involved in a trusted transaction between a central financial entity, e.g., a bank, and the payment terminal, such that the insecure payment terminal may be further authenticated based on rolling codes, two-way or three-way authentication, or an off-line mode enabled by incorporation of the mobile device. Although either the mobile device or the payment terminal alone provides only limited security, a POS system incorporating both demonstrates an enhanced level of security.
    Type: Grant
    Filed: December 21, 2017
    Date of Patent: May 24, 2022
    Assignee: Maxim Integrated Products, Inc.
    Inventors: Donald Wood Loomis, III, Edward Tangkwai Ma, Robert Michael Muchsel