Patents by Inventor Mohammed Zidan

Mohammed Zidan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240086257
    Abstract: A computing system including an application processor and a direct dataflow compute-in-memory accelerator. The direct dataflow compute-in-memory accelerator is configured to execute an accelerator task on accelerator data to generate an accelerator task result. An accelerator driver is configured to stream the accelerator task data from the application processor to the direct dataflow compute-in-memory accelerator without placing a load on the application processor. The accelerator driver can also return the accelerator task result to the application processor.
    Type: Application
    Filed: September 14, 2022
    Publication date: March 14, 2024
    Inventors: Wei LU, Keith KRESSIN, Mohammed ZIDAN, Jacob BOTIMER, Timothy WESLEY, Chester LIU
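    The abstract above describes a driver that streams task data to the accelerator without loading the application processor. The sketch below is only a loose software analogue: the AcceleratorDriver class, its submit method, and the thread-plus-queue stand-in for DMA-style streaming are illustrative assumptions, not the disclosed hardware mechanism.

      import queue
      import threading
      from concurrent.futures import Future

      class AcceleratorDriver:
          """Streams task data to a (simulated) compute-in-memory accelerator."""

          def __init__(self, accelerator_fn):
              self._accelerator_fn = accelerator_fn  # stands in for the accelerator itself
              self._tasks = queue.Queue()
              threading.Thread(target=self._stream_loop, daemon=True).start()

          def _stream_loop(self):
              # Runs off the "application processor" thread, so submitting work
              # places essentially no load on the caller.
              while True:
                  task_data, future = self._tasks.get()
                  future.set_result(self._accelerator_fn(task_data))

          def submit(self, task_data):
              # Returns immediately; the accelerator task result arrives asynchronously.
              future = Future()
              self._tasks.put((task_data, future))
              return future

      # Usage: the application processor submits a task and keeps running.
      driver = AcceleratorDriver(lambda data: sum(data))  # toy accelerator task
      print(driver.submit([1, 2, 3, 4]).result())         # -> 10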
  • Publication number: 20230305807
    Abstract: A multi-accumulator multiply-and-accumulate (MAC) unit can include a multiplier and a plurality of accumulators. The multiplier can be configured to multiply a given element of a corresponding column of a first matrix and a plurality of elements of a corresponding row of a second matrix to generate a plurality of corresponding partial product elements that can be accumulated by corresponding ones of the plurality of accumulators.
    Type: Application
    Filed: February 14, 2023
    Publication date: September 28, 2023
    Inventors: Mohammed ZIDAN, Jacob BOTIMER, Timothy WESLY, Chester LIU, Zhengya ZHANG, Wei LU
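    Read as an algorithm, the multi-accumulator MAC above pairs one element of a column of the first matrix with several elements of a row of the second matrix and lets a bank of accumulators collect the partial products. The sketch below is a plain-Python reading of that loop structure; the matrix sizes and the time-shared inner loop are assumptions for illustration.

      def multi_accumulator_mac(A, B):
          """Compute C = A x B with one accumulator per output element of a row."""
          n, k_dim, m = len(A), len(A[0]), len(B[0])
          C = [[0] * m for _ in range(n)]
          for i in range(n):
              accumulators = C[i]            # the accumulator bank for output row i
              for k in range(k_dim):
                  a_ik = A[i][k]             # the "given element" of column k of A
                  for j in range(m):         # one multiplier, time-shared across row k of B
                      accumulators[j] += a_ik * B[k][j]   # partial product element
          return C

      print(multi_accumulator_mac([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # -> [[19, 22], [43, 50]]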
  • Publication number: 20230273729
    Abstract: A memory processing unit (MPU) can include a first memory, a second memory, a plurality of processing regions and control logic. The first memory can include a plurality of memory regions. The plurality of memory regions can be organized in a plurality of memory blocks. The plurality of memory regions can be configured to store integer, B-float, and/or Group B-float encoded data. The plurality of processing regions can be interleaved between the plurality of memory regions of the first memory. The plurality of processing regions can be organized in a plurality of core groups that include a plurality of compute cores. The core groups in the processing regions can be coupled to a plurality of adjacent memory blocks in the adjacent memory regions. The second memory can be coupled to the plurality of processing regions.
    Type: Application
    Filed: February 14, 2023
    Publication date: August 31, 2023
    Inventors: Mohammed ZIDAN, Jacob BOTIMER, Timothy WESLY, Chester LIU, Zhengya ZHANG, Wei LU
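    As a rough data model of the layout described above (interleaved memory and processing regions, core groups wired to adjacent memory blocks), the sketch below may help; region counts, block sizes, and every field name are assumptions for exposition, not the claimed structure.

      from dataclasses import dataclass, field

      @dataclass
      class MemoryRegion:
          # A memory region organized as a plurality of memory blocks.
          blocks: list = field(default_factory=lambda: [bytearray(1024) for _ in range(4)])
          encoding: str = "B-float"      # e.g. integer, B-float, or Group B-float

      @dataclass
      class ProcessingRegion:
          core_groups: list              # each group is a list of compute-core ids
          left: MemoryRegion             # adjacent memory region on one side
          right: MemoryRegion            # adjacent memory region on the other side

      def build_mpu(num_processing_regions=4, groups_per_region=2, cores_per_group=8):
          """Interleave processing regions between memory regions of the first memory."""
          memory = [MemoryRegion() for _ in range(num_processing_regions + 1)]
          processing = []
          for r in range(num_processing_regions):
              groups = [list(range(g * cores_per_group, (g + 1) * cores_per_group))
                        for g in range(groups_per_region)]
              processing.append(ProcessingRegion(groups, memory[r], memory[r + 1]))
          return memory, processing

      memory_regions, processing_regions = build_mpu()
      print(len(memory_regions), len(processing_regions))   # -> 5 4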
  • Publication number: 20230259282
    Abstract: A memory processing unit (MPU) can include a first memory, a second memory, a plurality of processing regions and control logic. The first memory can include a plurality of memory regions. The plurality of memory regions can be organized in a plurality of memory blocks. The plurality of processing regions can be interleaved between the plurality of memory regions of the first memory. The plurality of processing regions can be organized in a plurality of core groups that include a plurality of compute cores. The core groups in the processing regions can be coupled to a plurality of adjacent memory blocks in the adjacent memory regions. The second memory can be coupled to the plurality of processing regions.
    Type: Application
    Filed: February 14, 2023
    Publication date: August 17, 2023
    Inventors: Mohammed ZIDAN, Jacob BOTIMER, Timothy WESLY, Chester LIU, Zhengya ZHANG, Wei LU
  • Publication number: 20230073012
    Abstract: A memory processing unit (MPU) can include a first memory, a second memory, a plurality of processing regions and control logic. The first memory can include a plurality of regions. The plurality of processing regions can be interleaved between the plurality of regions of the first memory. The processing regions can include a plurality of compute cores. The second memory can be coupled to the plurality of processing regions. The control logic can configure data flow between compute cores of one or more of the processing regions and corresponding adjacent regions of the first memory. The control logic can also configure data flow between the second memory and the compute cores of one or more of the processing regions. The control logic can also configure data flow between compute cores within one or more respective ones of the processing regions. The control logic can also configure array data for storage in memory of the MPU.
    Type: Application
    Filed: September 12, 2022
    Publication date: March 9, 2023
    Inventors: Jacob Botimer, Mohammed Zidan, Timothy Wesley, Chester Liu, Wei Lu
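    The control-logic behavior described above (configuring data flow between cores, adjacent first-memory regions, and the second memory) could be pictured in software as a connection table; the endpoint names and dictionary representation below are purely illustrative assumptions.

      def configure_dataflow():
          """Return a toy list of producer-to-consumer links a control block might set up."""
          links = []

          def connect(src, dst):
              links.append({"from": src, "to": dst})

          # Core <-> adjacent region of the first memory.
          connect("memory_region0.block2", "processing_region0.core1")
          # Second (shared) memory <-> a compute core.
          connect("second_memory", "processing_region1.core3")
          # Core -> core within one processing region.
          connect("processing_region1.core3", "processing_region1.core4")
          return links

      for link in configure_dataflow():
          print(link["from"], "->", link["to"])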
  • Publication number: 20230075069
    Abstract: A memory processing unit (MPU) can include a first memory, a second memory, a plurality of processing regions and control logic. The first memory can include a plurality of regions. The plurality of processing regions can be interleaved between the plurality of regions of the first memory. The processing regions can include a plurality of compute cores. The second memory can be coupled to the plurality of processing regions. The control logic can configure data flow between compute cores of one or more of the processing regions and corresponding adjacent regions of the first memory. The control logic can also configure data flow between the second memory and the compute cores of one or more of the processing regions. The control logic can also configure data flow between compute cores within one or more respective ones of the processing regions.
    Type: Application
    Filed: September 12, 2022
    Publication date: March 9, 2023
    Inventors: Mohammed Zidan, Jacob Botimer, Timothy Wesley, Chester Liu, Zhengya Zhang, Wei Lu
  • Publication number: 20230076473
    Abstract: A memory processing unit (MPU) configuration method can include mapping operations of one or more neural network models to sets of cores in a plurality of processing regions. In addition, dataflow of the one or more neural network models can be mapped to the sets of cores in the plurality of processing regions. Furthermore, configuration information can be generated based on the mapping of the operations of the one or more neural network models to the sets of cores in the plurality of processing regions and the mapping of dataflow of the one or more neural network models to the sets of cores in the plurality of processing regions. The method can be implemented by generating an initial graph from a neural network model. A mapping graph can then be generated from this graph. One or more configuration files can then be generated from the mapping graph.
    Type: Application
    Filed: September 12, 2022
    Publication date: March 9, 2023
    Inventors: Mohammed ZIDAN, Jacob BOTIMER, Timothy WESLEY, Chester LIU, Wei LU
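    The configuration flow above (model to graph, graph to core mapping, mapping to configuration files) is sketched below in a deliberately simplified form; the model format, the round-robin placement heuristic, and the JSON output are all assumptions, since the actual graph transformations are not spelled out in this listing.

      import json

      def build_initial_graph(model_layers):
          """One node per model operation, chained by dataflow edges."""
          nodes = [{"id": i, "op": op} for i, op in enumerate(model_layers)]
          edges = [[i, i + 1] for i in range(len(nodes) - 1)]
          return {"nodes": nodes, "edges": edges}

      def map_graph_to_cores(graph, num_regions=4, cores_per_region=8):
          """Round-robin each operation onto a (processing region, core) pair."""
          return {node["id"]: {"region": node["id"] % num_regions,
                               "core": (node["id"] // num_regions) % cores_per_region}
                  for node in graph["nodes"]}

      def emit_configuration(graph, mapping):
          """Serialize operations, dataflow, and placement into one configuration file."""
          return json.dumps({"operations": graph["nodes"],
                             "dataflow": graph["edges"],
                             "placement": mapping}, indent=2)

      graph = build_initial_graph(["conv2d", "relu", "conv2d", "relu", "dense"])
      print(emit_configuration(graph, map_graph_to_cores(graph)))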
  • Publication number: 20230061711
    Abstract: A memory processing unit (MPU) can include a first memory, a second memory, a plurality of processing regions and control logic. The first memory can include a plurality of regions. The plurality of processing regions can be interleaved between the plurality of regions of the first memory. The processing regions can include a plurality of compute cores. The second memory can be coupled to the plurality of processing regions. The control logic can configure data flow between compute cores of one or more of the processing regions and corresponding adjacent regions of the first memory. The control logic can also configure data flow between the second memory and the compute cores of one or more of the processing regions. The control logic can also configure data flow between compute cores within one or more respective ones of the processing regions.
    Type: Application
    Filed: September 12, 2022
    Publication date: March 2, 2023
    Inventors: Jacob BOTIMER, Mohammed ZIDAN, Chester LIU, Timothy WESLEY, Wei LU
  • Patent number: 11562788
    Abstract: An in-memory computing system for computing vector-matrix multiplications includes an array of resistive memory devices arranged in columns and rows, such that resistive memory devices in each row of the array are interconnected by a respective word line and resistive memory devices in each column of the array are interconnected by a respective bitline. The in-memory computing system also includes an interface circuit electrically coupled to each bitline of the array of resistive memory devices and computes the vector-matrix multiplication between an input vector applied to a given set of word lines and data values stored in the array. For each bitline, the interface circuit receives an output in response to the input being applied to the given wordline, compares the output to a threshold, and increments a count maintained for each bitline when the output exceeds the threshold. The count for a given bitline represents a dot-product.
    Type: Grant
    Filed: March 5, 2021
    Date of Patent: January 24, 2023
    Assignee: The Regents of the University of Michigan
    Inventors: Wei Lu, Mohammed A. Zidan
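    One way to picture the thresholded counting in the abstract above is a residue loop: each bitline's analog sum is compared against a threshold over many cycles, and a counter advances on each crossing, so the final count tracks the dot product divided by the threshold. The sketch below is only a loose software analogue of that idea, not the disclosed interface circuit; the cycle count, threshold, and residue bookkeeping are assumptions.

      def counting_vmm(input_vector, conductances, threshold=0.01, cycles=100):
          """Per-bitline counts approximating (input . column) / threshold."""
          num_cols = len(conductances[0])
          counts = [0] * num_cols
          residue = [0.0] * num_cols
          for _ in range(cycles):
              for col in range(num_cols):
                  # Analog bitline output: weighted sum over the driven wordlines.
                  bitline_out = sum(x * conductances[row][col]
                                    for row, x in enumerate(input_vector))
                  residue[col] += bitline_out / cycles
                  if residue[col] >= threshold:   # comparator fires
                      counts[col] += 1            # increment this bitline's count
                      residue[col] -= threshold
          return counts

      x = [1, 0, 1]
      G = [[0.2, 0.5],
           [0.4, 0.1],
           [0.3, 0.3]]
      # Exact per-column dot products are [0.5, 0.8]; the counts come out near [50, 80].
      print(counting_vmm(x, G))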
  • Patent number: 11537535
    Abstract: A monolithic integrated circuit (IC) including one or more compute circuitry, one or more non-volatile memory circuits, one or more communication channels and one or more communication interface. The one or more communication channels can communicatively couple the one or more compute circuitry, the one or more non-volatile memory circuits and the one or more communication interface together. The one or more communication interfaces can communicatively couple one or more circuits of the monolithic integrated circuit to one or more circuits external to the monolithic integrated circuit.
    Type: Grant
    Filed: June 5, 2020
    Date of Patent: December 27, 2022
    Assignee: MemryX Incorporated
    Inventors: Zhengya Zhang, Mohammed Zidan, Fan-hsuan Meng, Chester Liu, Jacob Botimer, Timothy Wesley, Wei Lu
  • Patent number: 11488650
    Abstract: A memory processing unit architecture can include a plurality of memory regions and a plurality of processing regions interleaved between the plurality of memory regions. The plurality of processing regions can be configured to perform computation functions of a model such as an artificial neural network. Data can be transferred between the computation functions in respective memory processing regions. In addition, the memory regions can be utilized to transfer data between a computation function in one processing region and a computation function in another processing region adjacent to the given memory region.
    Type: Grant
    Filed: April 6, 2020
    Date of Patent: November 1, 2022
    Assignee: MemryX Incorporated
    Inventors: Mohammed A. Zidan, Jacob Christopher Botimer, Chester Liu, Fan-hsuan Meng, Timothy Alan Wesley, Zhengya Zhang, Wei Lu
  • Publication number: 20220188492
    Abstract: A processing unit can include a plurality of chiplets coupled in a cascade topology by a plurality of interfaces. A set of the plurality of cascade coupled chiplets can be configured to execute a plurality of layers or blocks of layers of an artificial intelligence model. The set of cascade coupled chiplets can also be configured with parameter data of corresponding ones of the plurality of layers or blocks of layers of the artificial intelligence model.
    Type: Application
    Filed: December 10, 2020
    Publication date: June 16, 2022
    Inventors: Ching-Yu KO, Chester LIU, Mohammed ZIDAN, Jacob BOTIMER, Timothy WESLEY, Zhengya ZHANG, Wei LU
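    As a toy illustration of the cascade topology above, the sketch below passes activations chiplet to chiplet, each chiplet holding the parameters for its own block of layers; the dense-plus-ReLU math and the Chiplet class are assumptions used only to make the dataflow concrete.

      class Chiplet:
          def __init__(self, weight_matrices):
              self.weights = weight_matrices       # parameters for this chiplet's layers

          def forward(self, activations):
              for W in self.weights:               # run this chiplet's block of layers
                  activations = [max(0.0, sum(w * a for w, a in zip(row, activations)))
                                 for row in W]     # dense layer followed by ReLU
              return activations

      def run_cascade(chiplets, inputs):
          for chiplet in chiplets:                 # activations stream down the cascade
              inputs = chiplet.forward(inputs)
          return inputs

      # Two cascade-coupled chiplets, each configured with one 2x2 layer of a toy model.
      cascade = [Chiplet([[[1.0, -1.0], [0.5, 0.5]]]),
                 Chiplet([[[2.0, 0.0], [0.0, 2.0]]])]
      print(run_cascade(cascade, [1.0, 2.0]))      # -> [0.0, 3.0]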
  • Publication number: 20220057993
    Abstract: A matrix multiplication engine can include a plurality of processing elements configured to compute a matrix dot product as a summation of a sequence of vector-vector outer-products.
    Type: Application
    Filed: August 21, 2020
    Publication date: February 24, 2022
    Inventors: Fan-hsuan MENG, Mohammed ZIDAN, Zhengya ZHANG, Wei LU
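    The identity behind the engine above, that a matrix product equals the sum of outer products of the first matrix's columns with the second matrix's rows, can be checked in a few lines; the NumPy snippet below is just that identity, not the engine's implementation.

      import numpy as np

      def matmul_by_outer_products(A, B):
          """C = sum_k outer(A[:, k], B[k, :])."""
          C = np.zeros((A.shape[0], B.shape[1]))
          for k in range(A.shape[1]):
              C += np.outer(A[:, k], B[k, :])   # one vector-vector outer product per step
          return C

      A = np.random.rand(3, 4)
      B = np.random.rand(4, 5)
      assert np.allclose(matmul_by_outer_products(A, B), A @ B)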
  • Publication number: 20210312977
    Abstract: A memory processing unit architecture can include a plurality of memory regions and a plurality of processing regions interleaved between the plurality of memory regions. The plurality of processing regions can be configured to perform computation functions of a model such as an artificial neural network. Data can be transferred between the computation functions in respective memory processing regions. In addition, the memory regions can be utilized to transfer data between a computation function in one processing region and a computation function in another processing region adjacent to the given memory region.
    Type: Application
    Filed: April 6, 2020
    Publication date: October 7, 2021
    Inventors: Mohammed A. ZIDAN, Wei LU, Fan-hsuan MENG, Timothy Alan Wesley
  • Publication number: 20210210138
    Abstract: An in-memory computing system for computing vector-matrix multiplications includes an array of resistive memory devices arranged in columns and rows, such that resistive memory devices in each row of the array are interconnected by a respective word line and resistive memory devices in each column of the array are interconnected by a respective bitline. The in-memory computing system also includes an interface circuit electrically coupled to each bitline of the array of resistive memory devices and computes the vector-matrix multiplication between an input vector applied to a given set of word lines and data values stored in the array. For each bitline, the interface circuit receives an output in response to the input being applied to the given wordline, compares the output to a threshold, and increments a count maintained for each bitline when the output exceeds the threshold. The count for a given bitline represents a dot-product.
    Type: Application
    Filed: March 5, 2021
    Publication date: July 8, 2021
    Inventors: Wei LU, Mohammed A. ZIDAN
  • Patent number: 10998037
    Abstract: A memory processing unit can be configured to compute partial products between one or more elements of a first matrix stored in a given row of a memory cell array and sequential bits of one or more elements of a second matrix. The partial products can be calculated first sequentially across the set of rows and second sequentially across the bit positions of the elements of the second matrix. Alternatively, the partial products can be calculated first sequentially across the bit positions of the elements of the second matrix and second sequentially across the set of rows. The partial products for each column of elements can be accumulated and bit shifted to compute the dot product of the first and second matrix.
    Type: Grant
    Filed: December 24, 2019
    Date of Patent: May 4, 2021
    Assignee: MemryX Incorporated
    Inventors: Mohammed Zidan, Chester Liu, Zhengya Zhang, Wei Lu
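    A minimal bit-serial reading of the abstract above: combine the stored elements with one bit of the second matrix's elements at a time, and shift the accumulator between bit positions. The 8-bit width, the MSB-first order, and the non-negative inputs are simplifying assumptions; the patented unit operates on rows of a memory cell array rather than Python lists.

      def bit_serial_dot(a, b, bits=8):
          """Dot product of non-negative integer vectors, processing b bit by bit."""
          acc = 0
          for bit in reversed(range(bits)):          # most-significant bit first
              acc <<= 1                              # bit shift the accumulated sum
              for a_i, b_i in zip(a, b):
                  if (b_i >> bit) & 1:               # partial product: a_i * (one bit of b_i)
                      acc += a_i
          return acc

      a, b = [3, 5, 7], [2, 4, 6]
      print(bit_serial_dot(a, b), sum(x * y for x, y in zip(a, b)))   # -> 68 68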
  • Patent number: 10943652
    Abstract: An in-memory computing system for computing vector-matrix multiplications includes an array of resistive memory devices arranged in columns and rows, such that resistive memory devices in each row of the array are interconnected by a respective wordline and resistive memory devices in each column of the array are interconnected by a respective bitline. The in-memory computing system also includes an interface circuit electrically coupled to each bitline of the array of resistive memory devices and computes the vector-matrix multiplication between an input vector applied to a given set of wordlines and data values stored in the array. For each bitline, the interface circuit receives an output in response to the input being applied to the given wordline, compares the output to a threshold, and increments a count maintained for each bitline when the output exceeds the threshold. The count for a given bitline represents a dot-product.
    Type: Grant
    Filed: May 22, 2018
    Date of Patent: March 9, 2021
    Assignee: The Regents of the University of Michigan
    Inventors: Wei Lu, Mohammed A. Zidan
  • Publication number: 20210011732
    Abstract: Techniques for computing matrix convolutions in a plurality of multiply and accumulate units including data reuse of adjacent values. The data reuse can include reading a current value of the first matrix in from memory for concurrent use by the plurality of multiply and accumulate units. The data reuse can also include reading a current value of the second matrix in from memory to a serial shift buffer coupled to the plurality of multiply and accumulate units. The data reuse can also include reading a current value of the second matrix in from memory for concurrent use by the plurality of multiply and accumulate units.
    Type: Application
    Filed: December 31, 2019
    Publication date: January 14, 2021
    Inventors: Jacob Botimer, Mohammed Zidan, Chester Liu, Fan-hsuan Meng, Timothy Wesley, Wei Lu, Zhengya Zhang
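    For the data-reuse idea above, a 1-D stand-in is enough to show the shape: each input value is fetched once and broadcast to every MAC accumulator that needs it, rather than being re-read per output. The 1-D case, the accumulator array, and the loop order are simplifications of the 2-D convolution and serial shift buffer the application describes.

      def conv1d_with_reuse(inputs, kernel):
          """1-D correlation where each input is read once and shared across MAC units."""
          num_outputs = len(inputs) - len(kernel) + 1
          accumulators = [0] * num_outputs           # one MAC accumulator per output
          for step, x in enumerate(inputs):          # each input value fetched once
              for tap, w in enumerate(kernel):       # broadcast x to every MAC that uses it
                  out_idx = step - tap
                  if 0 <= out_idx < num_outputs:
                      accumulators[out_idx] += w * x
          return accumulators

      print(conv1d_with_reuse([1, 2, 3, 4, 5], [1, 0, -1]))   # -> [-2, -2, -2]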
  • Publication number: 20210011863
    Abstract: A monolithic integrated circuit (IC) including one or more compute circuitry, one or more non-volatile memory circuits, one or more communication channels and one or more communication interface. The one or more communication channels can communicatively couple the one or more compute circuitry, the one or more non-volatile memory circuits and the one or more communication interface together. The one or more communication interfaces can communicatively couple one or more circuits of the monolithic integrated circuit to one or more circuits external to the monolithic integrated circuit.
    Type: Application
    Filed: June 5, 2020
    Publication date: January 14, 2021
    Inventors: Zhengya ZHANG, Mohammed Zidan, Fan-hsuan MENG, Chester LIU, Jacob BOTIMER, Timothy WESLEY, Wei LU
  • Publication number: 20200379758
    Abstract: A memory processing unit can be configured to compute partial products between one or more elements of a first matrix stored in a first storage location and sequential bits of one or more elements of a second matrix stored in a second storage location. The partial products can be calculated utilizing zero bit skipping to increase throughput and or reduce energy consumption. The partial products for each column of elements can be accumulated and bit shifted to compute the dot product of the first and second matrix.
    Type: Application
    Filed: December 24, 2019
    Publication date: December 3, 2020
    Inventors: Chester Liu, Mohammed Zidan, Wei Lu, Zhengya Zhang
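    The zero-bit-skipping point above can be shown with a small variation on a bit-serial multiply: only the set bits of each second-matrix element trigger an add, so sparse bit patterns cost fewer operations. The operation counter and the integer inputs below are illustrative assumptions.

      def dot_with_zero_bit_skipping(a, b):
          """Dot product where zero bits of b contribute no work."""
          acc, adds = 0, 0
          for a_i, b_i in zip(a, b):
              while b_i:                        # iterate over set bits only
                  bit = b_i & -b_i              # lowest set bit (a power of two)
                  acc += a_i * bit              # shifted partial product
                  adds += 1
                  b_i ^= bit                    # clear that bit, i.e. skip the zeros
          return acc, adds

      result, adds = dot_with_zero_bit_skipping([3, 5, 7], [2, 4, 6])
      print(result, adds)   # -> 68 4  (only the 4 set bits of [2, 4, 6] are processed)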