Patents by Inventor Mohammed Fouda

Mohammed Fouda has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240005162
    Abstract: The present disclosure presents neural network learning systems and methods. One such method comprises receiving an input current signal; converting the input current signal to an input voltage pulse signal utilized by a memristive neuromorphic hardware of a multi-layered spiked neural network module; transmitting the input voltage pulse signal to the memristive neuromorphic hardware of the multi-layered spiked neural network module; performing a layer-by-layer calculation and conversion on the input voltage pulse signal to complete an on-chip learning to obtain an output signal; sending the output signal to a weight update circuitry module; and/or calculating, by the weight update circuitry module, an error signal and based on a magnitude of the error signal, triggering an adjustment of a conductance value of the memristive neuromorphic hardware so as to update synaptic weight values stored by the memristive neuromorphic hardware. Other methods and systems are also provided.
    Type: Application
    Filed: November 19, 2021
    Publication date: January 4, 2024
    Inventors: Mohammed FOUDA, Emre NEFTCI, Fadi KURDAHI, Ahmed ELTAWIL, Melika PAYVAND
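    The learning loop this abstract describes (convert an input current to voltage pulses, evaluate the pulses through memristive crossbar layers, compute an error, and trigger a conductance update only when the error magnitude is large enough) can be sketched in software. The sketch below is purely illustrative and is not the patented circuitry: the rate coding, the linear crossbar model, the delta-rule update, and all sizes, rates, and thresholds are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def current_to_pulses(i_in, n_steps=64):
        """Rate-code an input current vector (assumed normalized to [0, 1])
        as a binary voltage-pulse train of shape (n_steps, n_inputs)."""
        rate = np.clip(i_in, 0.0, 1.0)
        return (rng.random((n_steps, i_in.size)) < rate).astype(float)

    def forward(pulses, g):
        """One memristive crossbar layer: pulse train times conductance
        matrix (Ohm's law + current summation), read out as a mean rate."""
        return (pulses @ g).mean(axis=0)

    def learn_step(g, i_in, target, lr=0.05, trigger=0.01):
        """Compute the error signal and, only if its magnitude exceeds a
        trigger threshold, adjust the (non-negative) conductances in place."""
        pulses = current_to_pulses(i_in)
        out = forward(pulses, g)
        err = out - target
        if np.abs(err).max() > trigger:  # error-magnitude-triggered update
            g[:] = np.clip(g - lr * np.outer(pulses.mean(axis=0), err), 0.0, None)
        return err

    g = rng.uniform(0.0, 0.2, size=(4, 2))   # conductance matrix: 4 inputs, 2 outputs
    x = np.array([0.9, 0.1, 0.8, 0.2])       # input currents (normalized)
    t = np.array([0.5, 0.1])                 # target output rates
    for _ in range(200):
        err = learn_step(g, x, t)
    ```

    After a few hundred steps the output rate settles near the target, limited by the pulse-sampling noise; the key point mirrored from the abstract is that the conductance update fires only when the error magnitude crosses the trigger threshold.
    
    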
  • Patent number: 11714727
    Abstract: A stuck-at fault mitigation method for resistive random access memory (ReRAM)-based deep learning accelerators, includes: confirming a distorted output value (Y0) due to a stuck-at fault (SAF) by using a correction data set in a pre-trained deep learning network, by means of ReRAM-based deep learning accelerator hardware; updating an average (μ) and a standard deviation (σ) of a batch normalization (BN) layer by using the distorted output value (Y0), by means of the ReRAM-based deep learning accelerator hardware; folding the batch normalization (BN) layer in which the average (μ) and the standard deviation (σ) are updated into a convolution layer or a fully-connected layer, by means of the ReRAM-based deep learning accelerator hardware; and deriving a normal output value (Y1) by using the deep learning network in which the batch normalization (BN) layer is folded, by means of the ReRAM-based deep learning accelerator hardware.
    Type: Grant
    Filed: January 21, 2022
    Date of Patent: August 1, 2023
    Assignees: UNIST ACADEMY-INDUSTRY RESEARCH CORPORATION, THE REGENTS OF THE UNIVERSITY OF CALIFORNIA, KING ABDULLAH UNIVERSITY OF SCIENCE AND TECHNOLOGY
    Inventors: Jong Eun Lee, Su Gil Lee, Gi Ju Jung, Mohammed Fouda, Fadi Kurdahi, Ahmed M. Eltawil
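    The granted method reduces to a short piece of linear algebra: run a correction set through the faulty hardware to observe the distorted output Y0, re-estimate the BN statistics (μ, σ) from Y0, then fold the updated BN layer into the preceding linear layer. A minimal sketch, assuming a hypothetical fully-connected layer with one stuck-at-zero cell (sizes, fault location, and correction set are invented for illustration, and a software matrix stands in for the ReRAM crossbar):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Fully-connected layer whose weights live in a ReRAM crossbar; one
    # cell is stuck at zero conductance regardless of what was programmed.
    W = rng.normal(0.0, 1.0, size=(8, 4))
    W_faulty = W.copy()
    W_faulty[3, 2] = 0.0                      # stuck-at-zero cell

    # BN parameters learned on the fault-free network.
    gamma, beta = np.ones(4), np.zeros(4)

    # Step 1: pass a correction data set through the faulty layer to
    # observe the distorted output Y0 and update the BN statistics.
    X = rng.normal(0.0, 1.0, size=(256, 8))   # correction data set
    Y0 = X @ W_faulty
    mu = Y0.mean(axis=0)
    sigma = Y0.std(axis=0) + 1e-5             # epsilon for stability

    # Step 2: fold the updated BN layer into the FC layer.
    #   y = gamma * (x @ W - mu) / sigma + beta
    #     = x @ (W * gamma / sigma) + (beta - gamma * mu / sigma)
    W_folded = W_faulty * (gamma / sigma)
    b_folded = beta - gamma * mu / sigma

    # Step 3: the folded layer yields the re-normalized output Y1.
    Y1 = X @ W_folded + b_folded
    ```

    Because μ and σ were re-estimated on the distorted output, the folded layer re-centers and re-scales each channel, compensating for the shift the stuck-at fault introduced.
    
    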
  • Patent number: 11681577
    Abstract: Disclosed are various approaches for a controller that can generate and use non-stationary polar codes for encoding and decoding information. In one example, a method includes performing, by an encoder of the controller, a linear operation on at least one vector of information to be stored in a memory. The linear operation includes generating a polar encoded representation from the at least one vector of information. The linear operation also includes generating an output using at least one permutation that is based on a statistical characterization analysis of channels of the memory and a channel dependent permutation that is applied to the polar encoded representation. In some aspects, the statistical characterization analysis includes a respective reliability level of each one of the plurality of channels, and the channel dependent permutation includes an ordered permutation that orders the channels according to their respective reliability level.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: June 20, 2023
    Assignee: The Regents of the University of California
    Inventors: Marwen Zorgui, Mohammed Fouda, Ahmed M. Eltawil, Zhiying Wang, Fadi Kurdahi
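    The encoder this abstract describes composes a polar transform with a channel-dependent permutation derived from a reliability characterization of the memory's channels. A minimal sketch, assuming a length-8 code, a standard iterative polar transform, and made-up per-channel reliability figures; the controller's actual characterization procedure and decoder are not shown.

    ```python
    import numpy as np

    def polar_transform(u):
        """Multiply u by the polar kernel [[1,0],[1,1]] tensored with
        itself log2(n) times, via the usual in-place butterfly (mod 2)."""
        x = u.copy()
        n = x.size
        step = 1
        while step < n:
            for i in range(0, n, 2 * step):
                x[i:i + step] = (x[i:i + step] + x[i + step:i + 2 * step]) % 2
            step *= 2
        return x

    # Hypothetical per-channel reliability estimates from statistically
    # characterizing the memory (higher = more reliable); invented values.
    reliability = np.array([0.91, 0.62, 0.83, 0.55, 0.97, 0.70, 0.88, 0.60])

    # Channel-dependent permutation: an ordered permutation of the
    # channels according to their respective reliability level.
    perm = np.argsort(reliability)

    u = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # information/frozen bit vector
    x = polar_transform(u)                    # polar encoded representation
    stored = x[perm]                          # output after the permutation
    ```

    Both steps are linear over GF(2), matching the abstract's framing of the encoder as a single linear operation; the transform is also its own inverse mod 2, which is convenient for checking the sketch.
    
    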
  • Publication number: 20220245038
    Abstract: A stuck-at fault mitigation method for resistive random access memory (ReRAM)-based deep learning accelerators, includes: confirming a distorted output value (Y0) due to a stuck-at fault (SAF) by using a correction data set in a pre-trained deep learning network, by means of ReRAM-based deep learning accelerator hardware; updating an average (μ) and a standard deviation (σ) of a batch normalization (BN) layer by using the distorted output value (Y0), by means of the ReRAM-based deep learning accelerator hardware; folding the batch normalization (BN) layer in which the average (μ) and the standard deviation (σ) are updated into a convolution layer or a fully-connected layer, by means of the ReRAM-based deep learning accelerator hardware; and deriving a normal output value (Y1) by using the deep learning network in which the batch normalization (BN) layer is folded, by means of the ReRAM-based deep learning accelerator hardware.
    Type: Application
    Filed: January 21, 2022
    Publication date: August 4, 2022
    Applicants: UNIST Academy-Industry Research Corporation, THE REGENTS OF THE UNIVERSITY OF CALIFORNIA, KING ABDULLAH UNIVERSITY OF SCIENCE AND TECHNOLOGY
    Inventors: Jong Eun LEE, Su Gil LEE, Gi Ju JUNG, Mohammed FOUDA, Fadi KURDAHI, Ahmed M. ELTAWIL
  • Publication number: 20210240565
    Abstract: Disclosed are various approaches for a controller that can generate and use non-stationary polar codes for encoding and decoding information. In one example, a method includes performing, by an encoder of the controller, a linear operation on at least one vector of information to be stored in a memory. The linear operation includes generating a polar encoded representation from the at least one vector of information. The linear operation also includes generating an output using at least one permutation that is based on a statistical characterization analysis of channels of the memory and a channel dependent permutation that is applied to the polar encoded representation. In some aspects, the statistical characterization analysis includes a respective reliability level of each one of the plurality of channels, and the channel dependent permutation includes an ordered permutation that orders the channels according to their respective reliability level.
    Type: Application
    Filed: January 29, 2021
    Publication date: August 5, 2021
    Inventors: Marwen Zorgui, Mohammed Fouda, Ahmed M. Eltawil, Zhiying Wang, Fadi Kurdahi