Patents by Inventor Mohammed A. ZIDAN

Mohammed A. ZIDAN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11562788
    Abstract: An in-memory computing system for computing vector-matrix multiplications includes an array of resistive memory devices arranged in columns and rows, such that the resistive memory devices in each row of the array are interconnected by a respective wordline and the resistive memory devices in each column are interconnected by a respective bitline. The in-memory computing system also includes an interface circuit that is electrically coupled to each bitline of the array and that computes the vector-matrix multiplication between an input vector applied to a given set of wordlines and the data values stored in the array. For each bitline, the interface circuit receives an output in response to the input being applied to the given set of wordlines, compares the output to a threshold, and increments a count maintained for that bitline when the output exceeds the threshold. The count for a given bitline represents the corresponding dot-product. (A brief simulation sketch of this counting scheme appears after the listing.)
    Type: Grant
    Filed: March 5, 2021
    Date of Patent: January 24, 2023
    Assignee: The Regents of the University of Michigan
    Inventors: Wei Lu, Mohammed A. Zidan
  • Patent number: 11488650
    Abstract: A memory processing unit architecture can include a plurality of memory regions and a plurality of processing regions interleaved between the memory regions. The processing regions can be configured to perform computation functions of a model such as an artificial neural network. Data can be transferred between the computation functions within a given processing region. In addition, the memory regions can be utilized to transfer data between a computation function in one processing region and a computation function in another processing region adjacent to a given memory region. (A toy dataflow sketch of this interleaved layout appears after the listing.)
    Type: Grant
    Filed: April 6, 2020
    Date of Patent: November 1, 2022
    Assignee: MemryX Incorporated
    Inventors: Mohammed A. Zidan, Jacob Christopher Botimer, Chester Liu, Fan-hsuan Meng, Timothy Alan Wesley, Zhengya Zhang, Wei Lu
  • Publication number: 20210312977
    Abstract: A memory processing unit architecture can include a plurality of memory regions and a plurality of processing regions interleaved between the memory regions. The processing regions can be configured to perform computation functions of a model such as an artificial neural network. Data can be transferred between the computation functions within a given processing region. In addition, the memory regions can be utilized to transfer data between a computation function in one processing region and a computation function in another processing region adjacent to a given memory region.
    Type: Application
    Filed: April 6, 2020
    Publication date: October 7, 2021
    Inventors: Mohammed A. ZIDAN, Wei LU, Fan-hsuan MENG, Timothy Alan WESLEY
  • Publication number: 20210210138
    Abstract: An in-memory computing system for computing vector-matrix multiplications includes an array of resistive memory devices arranged in columns and rows, such that the resistive memory devices in each row of the array are interconnected by a respective wordline and the resistive memory devices in each column are interconnected by a respective bitline. The in-memory computing system also includes an interface circuit that is electrically coupled to each bitline of the array and that computes the vector-matrix multiplication between an input vector applied to a given set of wordlines and the data values stored in the array. For each bitline, the interface circuit receives an output in response to the input being applied to the given set of wordlines, compares the output to a threshold, and increments a count maintained for that bitline when the output exceeds the threshold. The count for a given bitline represents the corresponding dot-product.
    Type: Application
    Filed: March 5, 2021
    Publication date: July 8, 2021
    Inventors: Wei LU, Mohammed A. ZIDAN
  • Patent number: 10943652
    Abstract: An in-memory computing system for computing vector-matrix multiplications includes an array of resistive memory devices arranged in columns and rows, such that the resistive memory devices in each row of the array are interconnected by a respective wordline and the resistive memory devices in each column are interconnected by a respective bitline. The in-memory computing system also includes an interface circuit that is electrically coupled to each bitline of the array and that computes the vector-matrix multiplication between an input vector applied to a given set of wordlines and the data values stored in the array. For each bitline, the interface circuit receives an output in response to the input being applied to the given set of wordlines, compares the output to a threshold, and increments a count maintained for that bitline when the output exceeds the threshold. The count for a given bitline represents the corresponding dot-product.
    Type: Grant
    Filed: May 22, 2018
    Date of Patent: March 9, 2021
    Assignee: The Regents of the University of Michigan
    Inventors: Wei Lu, Mohammed A. Zidan
  • Publication number: 20190362787
    Abstract: An in-memory computing system for computing vector-matrix multiplications includes an array of resistive memory devices arranged in columns and rows, such that the resistive memory devices in each row of the array are interconnected by a respective wordline and the resistive memory devices in each column are interconnected by a respective bitline. The in-memory computing system also includes an interface circuit that is electrically coupled to each bitline of the array and that computes the vector-matrix multiplication between an input vector applied to a given set of wordlines and the data values stored in the array. For each bitline, the interface circuit receives an output in response to the input being applied to the given set of wordlines, compares the output to a threshold, and increments a count maintained for that bitline when the output exceeds the threshold. The count for a given bitline represents the corresponding dot-product.
    Type: Application
    Filed: May 22, 2018
    Publication date: November 28, 2019
    Inventors: Wei LU, Mohammed A. ZIDAN
  • Patent number: 10346347
    Abstract: For decades, advances in electronics were directly related to the scaling of CMOS transistors according to Moore's law. However, both CMOS scaling and the classical computer architecture are approaching fundamental and practical limits. A novel memory-centric, reconfigurable, general-purpose computing platform is proposed to handle the explosive growth of data in a fast and energy-efficient manner. The proposed computing architecture is based on a single physical resistive memory-centric fabric that can be optimally reconfigured and utilized to perform different computing and data storage tasks in a massively parallel manner. The system can be tailored for maximal energy efficiency based on the data flow by dynamically allocating the basic computing fabric to storage, arithmetic, and analog computing tasks, including neuromorphic computing. (A toy allocation sketch of this reconfigurable fabric appears after the listing.)
    Type: Grant
    Filed: October 3, 2017
    Date of Patent: July 9, 2019
    Inventors: Wei Lu, Mohammed A. Zidan
  • Publication number: 20180095930
    Abstract: For decades, advances in electronics were directly related to the scaling of CMOS transistors according to Moore's law. However, both CMOS scaling and the classical computer architecture are approaching fundamental and practical limits. A novel memory-centric, reconfigurable, general-purpose computing platform is proposed to handle the explosive growth of data in a fast and energy-efficient manner. The proposed computing architecture is based on a single physical resistive memory-centric fabric that can be optimally reconfigured and utilized to perform different computing and data storage tasks in a massively parallel manner. The system can be tailored for maximal energy efficiency based on the data flow by dynamically allocating the basic computing fabric to storage, arithmetic, and analog computing tasks, including neuromorphic computing.
    Type: Application
    Filed: October 3, 2017
    Publication date: April 5, 2018
    Inventors: Wei LU, Mohammed A. ZIDAN
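
Patent 11562788 and its companion filings (10943652, 20210210138, 20190362787) describe the counting-based read-out only structurally, so the Python snippet below is a minimal behavioural sketch rather than the patented circuit. It assumes binary stored conductances, binary inputs driven one wordline at a time, an idealized noiseless read-out, and a fixed 0.5 threshold; the array G, the input x, and the array dimensions are illustrative choices, not values from the patents.

    import numpy as np

    # Behavioural sketch of the counting read-out: a crossbar of binary
    # conductances G (rows = wordlines, columns = bitlines). Input bits are
    # applied one wordline at a time; each bitline's output is compared to a
    # threshold and a per-bitline counter is incremented on every crossing.
    # After all wordlines have been driven, each counter holds the dot-product
    # of the input vector with that bitline's column of stored values.
    rng = np.random.default_rng(0)

    G = rng.integers(0, 2, size=(8, 4))       # stored binary weights, 8 wordlines x 4 bitlines
    x = rng.integers(0, 2, size=8)            # binary input vector, one bit per wordline

    threshold = 0.5                           # separates "conducting" from "non-conducting"
    counts = np.zeros(G.shape[1], dtype=int)  # one counter per bitline

    for i, bit in enumerate(x):               # drive wordlines one at a time
        bitline_outputs = bit * G[i, :]       # idealized read-out per bitline
        counts += bitline_outputs > threshold # count threshold crossings

    assert np.array_equal(counts, x @ G)      # counters equal the vector-matrix product
    print(counts)

For multi-bit inputs or weights, the same counting loop would presumably be repeated per bit-slice and combined with shift-and-add accumulation; that extension is not shown here.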
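
Patent 11488650 and publication 20210312977 specify the interleaved memory/processing layout architecturally, without reference code. The toy Python sketch below only illustrates the data movement the abstract describes: processing regions P0..P2 sit between memory regions M0..M3, and each computation function reads from the memory region on one side and writes to the memory region on the other. The 16-element activations, the dense-layer-plus-ReLU stage functions, and the region counts are assumptions made for the example, not details from the filings.

    import numpy as np

    # Toy dataflow sketch of the interleaved layout (illustration only, not the
    # MemryX design): memory regions M0..M3 alternate with processing regions
    # P0..P2. Each processing region consumes from the adjacent memory region on
    # one side and produces into the adjacent memory region on the other side.
    rng = np.random.default_rng(0)
    weights = [rng.standard_normal((16, 16)) for _ in range(3)]

    def stage(k):
        """Computation function mapped to processing region Pk (dense layer + ReLU)."""
        return lambda a: np.maximum(weights[k] @ a, 0.0)

    memory_regions = [None] * 4                  # M0..M3, interleaved with P0..P2
    processing_regions = [stage(k) for k in range(3)]

    memory_regions[0] = rng.standard_normal(16)  # input activation placed in M0

    for k, fn in enumerate(processing_regions):
        # Pk reads from Mk and writes to M(k+1), which is the adjacency the
        # abstract describes for passing data between computation functions.
        memory_regions[k + 1] = fn(memory_regions[k])

    print(memory_regions[-1])                    # output of the final computation function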
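
Patent 10346347 and publication 20180095930 describe dynamic allocation of a single resistive fabric at a conceptual level. The short sketch below is only a toy allocator meant to make that idea concrete: a fixed pool of crossbar tiles is partitioned among storage, arithmetic, and neuromorphic roles in proportion to a workload mix, and re-partitioned when the mix changes. The tile count, the role names, and the proportional policy are assumptions for illustration, not mechanisms from the patent.

    # Toy allocator for the reconfigurable-fabric idea (illustrative only).
    def allocate(num_tiles, mix):
        """Split num_tiles among roles in proportion to the requested workload mix."""
        total = sum(mix.values())
        plan = {role: (num_tiles * share) // total for role, share in mix.items()}
        # Hand any rounding remainder to the most heavily weighted role.
        plan[max(mix, key=mix.get)] += num_tiles - sum(plan.values())
        return plan

    fabric = allocate(64, {"storage": 3, "arithmetic": 2, "neuromorphic": 1})
    print(fabric)   # {'storage': 33, 'arithmetic': 21, 'neuromorphic': 10}

    # A change in the data flow triggers reallocation of the same physical tiles.
    fabric = allocate(64, {"storage": 1, "arithmetic": 1, "neuromorphic": 4})
    print(fabric)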