Patents by Inventor Deliang Fan

Deliang Fan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240145036
    Abstract: A method of calculating an abundance of an mRNA sequence within a gene comprises storing an index table of the gene in a non-volatile memory, obtaining a short read of the mRNA sequence, generating a set of input fragments from the mRNA sequence, initializing a compatibility table in a volatile memory, for each input fragment in the set of input fragments, searching for an exact match of the input fragment in the index table, calculating a final result from the compatibility table, and calculating an abundance of the mRNA sequence in the gene by aggregating the transcripts compatible with the short read, wherein the calculating step is performed on the same integrated circuit as the non-volatile memory. A system for in-memory calculation of an abundance of an mRNA sequence within a gene is also disclosed.
    Type: Application
    Filed: March 21, 2023
    Publication date: May 2, 2024
    Applicant: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Deliang Fan, Fan Zhang, Shaahin Angizi
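
A minimal Python sketch of the compatibility-table flow in the abstract above, run in ordinary software rather than on the in-memory hardware; the fragment (k-mer) length, the example transcripts, and all helper names are illustrative assumptions, not taken from the patent.

```python
K = 5  # illustrative fragment (k-mer) length

def build_index(transcripts):
    """Map each length-K fragment of the gene's transcripts to the
    set of transcripts containing it (the stored index table)."""
    index = {}
    for name, seq in transcripts.items():
        for i in range(len(seq) - K + 1):
            index.setdefault(seq[i:i + K], set()).add(name)
    return index

def compatible_transcripts(read, index, transcripts):
    """Intersect per-fragment exact matches (the compatibility table)."""
    compat = set(transcripts)                 # initialized: all compatible
    for i in range(len(read) - K + 1):
        compat &= index.get(read[i:i + K], set())
    return compat

transcripts = {"t1": "ACGTACGTAC", "t2": "TTACGTACGG", "t3": "GGGGGCCCCC"}
index = build_index(transcripts)
compat = compatible_transcripts("ACGTACG", index, transcripts)
# Aggregate: split one unit of abundance across compatible transcripts.
abundance = {t: 1.0 / len(compat) for t in compat}
print(sorted(compat), abundance)
```

Splitting one unit of abundance evenly across the compatible transcripts stands in for the aggregation step, which the abstract does not specify in detail.
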
  • Publication number: 20240144998
    Abstract: A system for in-memory computing comprises a volatile memory comprising at least a first layered subarray, wherein each subarray comprises a plurality of memory cells, and a plurality of sub-sense amplifiers connected to a read bitline of the first subarray of the memory, configured to compare a measured voltage of the read bitline to at least one threshold and provide at least one binary output corresponding to a logic operation based on whether the voltage of the read bitline is above or below the threshold. A method for in-memory computing is also disclosed.
    Type: Application
    Filed: November 1, 2023
    Publication date: May 2, 2024
    Applicant: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Deliang Fan, Shaahin Angizi
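
A behavioral Python sketch of the thresholded-readout idea above: when two cells share a read bitline, the sensed voltage scales with how many activated cells are high, and comparing it against two thresholds yields OR and AND in a single access. The voltage levels and threshold values are illustrative assumptions.

```python
def sense(a, b, v_cell=0.2):
    """Two cells drive one read bitline; each stored '1' adds v_cell volts."""
    v_bl = (a + b) * v_cell       # 0.0, 0.2, or 0.4 V on the read bitline
    or_out = int(v_bl > 0.1)      # threshold between the 0- and 1-cell levels
    and_out = int(v_bl > 0.3)     # threshold between the 1- and 2-cell levels
    return or_out, and_out

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "-> OR,AND =", sense(a, b))
```
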
  • Publication number: 20240135256
    Abstract: A method of training a machine learning algorithm comprises providing a set of input data, performing transforms on the input data to generate augmented data, to provide transformed base paths into machine learning algorithm encoders, segmenting the augmented data, calculating main base path outputs by applying a weighting to the segmented augmented data, calculating pruning masks from the input and augmented data to apply to the base paths of the machine learning algorithm encoders, the pruning masks having a binary value for each segment in the segmented augmented data, calculating sparse conditional path outputs by performing a computation on the segments of the segmented augmented data, and calculating a final output as a sum of the main base path outputs and the sparse conditional path outputs. A computer-implemented system for learning sparse features of a dataset is also disclosed.
    Type: Application
    Filed: October 24, 2023
    Publication date: April 25, 2024
    Applicant: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Jae-sun Seo, Jian Meng, Li Yang, Deliang Fan
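
A toy Python sketch of the base-path/conditional-path split described above: the augmented input is segmented, a dense base path weights every segment, and a binary pruning mask selects which segments also receive the sparse conditional computation. Shapes, weights, and the mask rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
segments = rng.normal(size=(8, 16))     # 8 segments of augmented data
w_base = rng.normal(size=(16, 4))       # main base-path weighting
w_cond = rng.normal(size=(16, 4))       # sparse conditional-path weighting

base_out = segments @ w_base            # main base-path outputs

# One binary mask value per segment; here the mask keeps the
# higher-energy half of the segments (an illustrative rule).
energy = np.linalg.norm(segments, axis=1)
mask = (energy > np.median(energy)).astype(float)

cond_out = (segments * mask[:, None]) @ w_cond  # computed only where mask = 1
final = base_out + cond_out             # sum of the two path outputs
print(final.shape, mask)
```
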
  • Publication number: 20240095528
    Abstract: A method for increasing the temperature-resiliency of a neural network, the method comprising loading a neural network model into a resistive nonvolatile in-memory-computing chip, training the deep neural network model using a progressive knowledge distillation algorithm as a function of a teacher model, the algorithm comprising injecting, with a clean model serving as the teacher model, low-temperature noise values into a student model and then changing, with the student model now serving as the teacher model, the low-temperature noises to high-temperature noises, and training the deep neural network model using a batch normalization adaptation algorithm, wherein the batch normalization adaptation algorithm includes training a plurality of batch normalization parameters with respect to a plurality of thermal variations.
    Type: Application
    Filed: September 8, 2023
    Publication date: March 21, 2024
    Applicant: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Jae-sun Seo, Jian Meng, Li Yang, Deliang Fan
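
A schematic Python sketch of the progressive distillation schedule (the batch-normalization adaptation step is omitted): a clean teacher first guides a student trained under low-temperature weight noise, then the noisy student itself becomes the teacher while the injected noise is raised to the high-temperature level. The linear model, data, and noise magnitudes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(256, 8))
w_teacher = rng.normal(size=8)                  # "clean" teacher weights

def distill(w_t, noise_std, steps=300, lr=0.05):
    """Train a student to match the teacher while noise is injected
    into the student's weights on every forward pass."""
    w_s = np.zeros(8)
    for _ in range(steps):
        w_noisy = w_s + rng.normal(0, noise_std, 8)  # injected thermal noise
        grad = X.T @ (X @ w_noisy - X @ w_t) / len(X)
        w_s -= lr * grad
    return w_s

w_stage1 = distill(w_teacher, noise_std=0.05)   # low-temperature noise
w_stage2 = distill(w_stage1, noise_std=0.20)    # student becomes the teacher
print(np.linalg.norm(w_stage2 - w_teacher))     # distance from the clean model
```
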
  • Publication number: 20240037394
    Abstract: A neural network accelerator architecture for multiple task adaptation comprises a volatile memory comprising a plurality of subarrays, each subarray comprising M rows and N columns of volatile memory cells; a source line driver connected to a plurality of N source lines, each source line corresponding to a column in the subarray; a binary mask buffer memory having size at least N bits, each bit corresponding to a column in the subarray, where a 0 corresponds to turning off the column for a convolution operation and a 1 corresponds to turning on the column for the convolution operation; and a controller configured to selectively drive each of the N source lines with a corresponding value from the mask buffer; wherein each column in the subarray is configured to store a convolution kernel.
    Type: Application
    Filed: July 27, 2023
    Publication date: February 1, 2024
    Applicant: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Deliang Fan, Fan Zhang, Li Yang
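
A functional Python sketch of the masked-column idea: each crossbar column stores one convolution kernel, and a per-column binary mask (standing in for the source-line drive) decides which columns participate for the current task. Array sizes and the example mask are illustrative assumptions.

```python
import numpy as np

M, N = 16, 8                                # rows x columns in one subarray
rng = np.random.default_rng(2)
subarray = rng.integers(0, 2, size=(M, N))  # N kernels, one per column
mask = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # 1 = drive column, 0 = gate off

x = rng.integers(0, 2, size=M)              # input activations on the rows
out = (x @ subarray) * mask                 # gated columns contribute zero
print(out)
```
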
  • Publication number: 20240012641
    Abstract: A model construction method and an apparatus, and a medium and an electronic device are disclosed. The method is applied to a first participant platform, and includes: associating first configuration information pre-created by a first participant with second configuration information pre-created by a second participant; verifying the first configuration information; sending, to a second participant platform corresponding to the second participant, a second creation request for requesting the creation of the federated learning model, to cause the second participant platform to verify the second configuration information; creating a first model task on the basis of a first parameter corresponding to the first configuration information; and performing co-training on the basis of the first model task and a second model task, to obtain the federated learning model.
    Type: Application
    Filed: November 16, 2021
    Publication date: January 11, 2024
    Inventors: Ruoxing HUANG, Junyuan XIE, Longyijia LI, Chenliaohui FANG, Shihao SHEN, Lei SHI, Lingyuan ZHANG, Peng ZHAO, Deliang FAN, Di WU, Xiaobing LIU
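
A minimal Python sketch of the two-platform handshake the abstract outlines: associate the pre-created configurations, verify locally, issue the second creation request to the peer, then pair the resulting model tasks for co-training. The class, field, and method names are illustrative assumptions, not the patent's API.

```python
class Platform:
    def __init__(self, name, config):
        self.name, self.config = name, config

    def verify(self, config):
        # Stand-in for configuration verification.
        return "model" in config and "data" in config

    def create_task(self, params):
        return {"owner": self.name, "params": params, "state": "ready"}

first = Platform("A", {"model": "lr", "data": "bank_a"})
second = Platform("B", {"model": "lr", "data": "bank_b"})

# Associate the two pre-created configurations and verify the first locally.
assert first.verify(first.config)
# Second creation request: the peer platform verifies its own configuration.
assert second.verify(second.config)

task_first = first.create_task(first.config["model"])     # first model task
task_second = second.create_task(second.config["model"])  # second model task
federated_model = ("co-trained", task_first, task_second) # co-training stub
print(federated_model)
```
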
  • Publication number: 20240005976
    Abstract: A Processing-in-Memory (PIM) design is disclosed that converts any memory sub-array based on non-volatile resistive bit-cells into a potential processing unit. The memory includes the data matrix stored in terms of resistive states of memory cells. Through modified peripheral circuits, the address decoder receives three addresses and activates three memory rows of resistive bit-cells (i.e., data operands). In this way, three bit-cells are activated on each memory bit-line and sensed simultaneously, leading to different parallel resistive levels at the sense amplifier side. By selecting different reference resistance levels and a modified sense amplifier, a full set of single-cycle 1-/2-/3-input reconfigurable complete Boolean logic and full-adder outputs can be intrinsically read out based on the input operand data in the memory array.
    Type: Application
    Filed: August 11, 2022
    Publication date: January 4, 2024
    Applicant: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Deliang Fan, Shaahin Angizi
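
A behavioral Python model of the three-row activation scheme: the bitline sees the parallel combination of three resistive cells, so the sensed level encodes how many operands are '1', and different reference levels yield OR, majority (the full-adder carry), and AND in one cycle, with the full-adder sum derived as the 3-input parity. The discrete levels are an illustrative abstraction of the analog resistances.

```python
def pim_readout(a, b, c):
    """Three activated rows: the sensed level encodes a + b + c."""
    level = a + b + c             # four distinguishable sense levels, 0..3
    or3 = int(level >= 1)         # reference between levels 0 and 1
    maj3 = int(level >= 2)        # reference between 1 and 2 -> carry-out
    and3 = int(level >= 3)        # reference between 2 and 3
    sum_ = level % 2              # full-adder sum = 3-input XOR (parity)
    return or3, maj3, and3, sum_

for bits in [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]:
    print(bits, "-> OR,MAJ,AND,SUM =", pim_readout(*bits))
```
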
  • Publication number: 20230401422
    Abstract: A full-stack neural network obfuscation framework obfuscates a neural network architecture while preserving its functionality with very limited performance overhead. The framework provides obfuscating parameters, or "knobs", including layer branching, layer widening, selective fusion, and schedule pruning, that increase the number of operators, reduce or increase the latency, and change the number of cache and DRAM accesses. In addition, a genetic algorithm-based approach is adopted to orchestrate the combination of obfuscating knobs to achieve the best obfuscating effect on the layer sequence and dimension parameters, so that the architecture information cannot be successfully extracted.
    Type: Application
    Filed: June 9, 2023
    Publication date: December 14, 2023
    Applicant: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Jingtao Li, Chaitali Chakrabarti, Deliang Fan, Adnan Siraj Rakin
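
A toy Python genetic search over the four obfuscating knobs named above (layer branching, layer widening, selective fusion, schedule pruning). The gene encoding and the fitness function, which trades obfuscation strength against a latency penalty, are illustrative assumptions; a real system would measure extraction error and overhead on hardware.

```python
import random

random.seed(0)
KNOBS = 4  # genes: branching, widening, fusion, schedule pruning (0..3 each)

def fitness(genes):
    obfuscation = sum(genes)                     # more knob use, harder to extract
    overhead = 0.05 * sum(g * g for g in genes)  # quadratic latency penalty
    return obfuscation - overhead

population = [[random.randint(0, 3) for _ in range(KNOBS)] for _ in range(20)]
for _ in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, KNOBS)
        child = a[:cut] + b[cut:]                # one-point crossover
        if random.random() < 0.2:                # occasional mutation
            child[random.randrange(KNOBS)] = random.randint(0, 3)
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
print(best, fitness(best))
```
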
  • Publication number: 20230342604
    Abstract: Dynamic additive attention adaption for memory-efficient multi-domain on-device learning is provided. Almost all conventional methods for multi-domain learning in deep neural networks (DNNs) focus only on improving accuracy with minimal parameter updates, while ignoring the high computing and memory cost during training, which makes it difficult to deploy multi-domain learning into resource-limited edge devices such as mobile phones, internet-of-things (IoT) devices, and embedded systems. To reduce training memory usage while maintaining domain adaption accuracy, Dynamic Additive Attention Adaption (DA3) is proposed as a novel memory-efficient on-device multi-domain learning approach. Embodiments of DA3 learn a novel additive attention adaptor module while freezing the weights of the pre-trained backbone model for each domain.
    Type: Application
    Filed: April 21, 2023
    Publication date: October 26, 2023
    Applicant: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Li Yang, Deliang Fan, Adnan Siraj Rakin
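
A schematic Python sketch of the additive-adaptor idea: the pre-trained backbone weights stay frozen, and a small per-domain attention module adds a gated correction, so only the adaptor's parameters consume training memory. The shapes, the sigmoid gate, and the adaptor form are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
W_backbone = rng.normal(size=(32, 32))       # frozen, shared across domains

def adaptor_forward(x, a_domain):
    base = x @ W_backbone                    # frozen backbone path
    gate = 1.0 / (1.0 + np.exp(-(x @ a_domain)))  # lightweight attention gate
    return base + gate * base                # additive attention correction

x = rng.normal(size=(4, 32))
a_domain = 0.01 * rng.normal(size=(32, 32))  # the only trainable tensor
print(adaptor_forward(x, a_domain).shape)
```
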
  • Publication number: 20230297331
    Abstract: A method of calculating a boundary value of a set of numerical values in a volatile memory comprises storing a set of numerical values in a volatile memory, initializing a comparison vector, initializing a matching vector, transpose-copying a first bit of each of the set of numerical values into a buffer, calculating a result vector, updating the matching vector, repeating the previous steps for each of the bits in the set of numerical values, and returning the matching vector, where the position of each 1 remaining in the matching vector corresponds to an index of the boundary value in the set of numerical values, wherein the computation and the memory storage take place on the same integrated circuit. A system for calculating a boundary value of a set of numerical values is also disclosed.
    Type: Application
    Filed: March 21, 2023
    Publication date: September 21, 2023
    Applicant: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Deliang Fan, Fan Zhang, Shaahin Angizi
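
A software model, in Python/NumPy, of the bit-serial boundary search described above: the values are scanned in transposed (bit-column) order from the most significant bit down, and a matching vector keeps a 1 only for values still tied for the boundary. The 8-bit width and the choice of maximum (rather than minimum) are illustrative assumptions.

```python
import numpy as np

values = np.array([23, 200, 97, 200, 5], dtype=np.uint8)
bits = (values[:, None] >> np.arange(7, -1, -1)) & 1  # transposed bit columns

matching = np.ones(len(values), dtype=np.uint8)       # all values start matched
for b in range(8):                                    # MSB first
    result = matching & bits[:, b]    # candidates with a 1 in this bit column
    if result.any():                  # some candidate has a 1: drop the rest
        matching = result

# 1s mark the indices of the boundary (maximum) value: [0 1 0 1 0]
print(matching)
```
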
  • Publication number: 20230085867
    Abstract: Methods, systems, and devices disclosed herein can leverage noise and aggressive quantization of in-memory computing (IMC) to provide robust deep neural network (DNN) hardware against adversarial input and weight attacks. IMC substantially improves the energy efficiency of DNN hardware by activating many rows together and performing analog computing. The noisy analog IMC induces some amount of accuracy drop in hardware acceleration, which is generally considered a negative effect. However, this disclosure demonstrates that such hardware-intrinsic noise can, on the contrary, play a positive role in enhancing adversarial robustness. To achieve this, a new DNN training scheme is proposed that integrates measured IMC hardware noise and aggressive partial-sum quantization at the IMC crossbar. It is shown that this effectively improves the robustness of IMC DNN hardware against both adversarial input and weight attacks.
    Type: Application
    Filed: September 13, 2022
    Publication date: March 23, 2023
    Inventors: Adnan Siraj Rakin, Deliang Fan, Sai Kiran Cherupally, Jae-sun Seo
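
A Python sketch of the forward path this abstract relies on: a layer's dot product is split into crossbar-sized partial sums, each perturbed with analog-style noise and then aggressively quantized before accumulation. The crossbar height, noise level, and 3-bit quantizer are illustrative assumptions rather than measured values.

```python
import numpy as np

rng = np.random.default_rng(4)
CROSSBAR_ROWS, NOISE_STD, LEVELS = 64, 0.5, 8  # 8 levels = 3-bit partial sums

def imc_linear(x, W):
    out = np.zeros(W.shape[1])
    for start in range(0, len(x), CROSSBAR_ROWS):    # one crossbar per slice
        ps = x[start:start + CROSSBAR_ROWS] @ W[start:start + CROSSBAR_ROWS]
        ps = ps + rng.normal(0, NOISE_STD, ps.shape) # analog-style noise
        step = 2 * (np.abs(ps).max() + 1e-9) / LEVELS
        ps = np.round(ps / step) * step              # aggressive quantization
        out += ps                                    # digital accumulation
    return out

x = rng.normal(size=256)
W = rng.normal(size=(256, 10)) / 16
print(imc_linear(x, W))
```
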
  • Publication number: 20230078473
    Abstract: A robust and accurate binary neural network, referred to as RA-BNN, is provided to simultaneously defend against adversarial noise injection and improve accuracy. The recently developed adversarial weight attack, a.k.a. the bit-flip attack (BFA), has shown enormous success in compromising deep neural network (DNN) performance with an extremely small amount of model parameter perturbation. To defend against this threat, embodiments of RA-BNN adopt a complete binary neural network (BNN) to significantly improve DNN model robustness (defined as the number of bit-flips required to degrade the accuracy to as low as a random guess). To improve clean inference accuracy, a novel and efficient two-stage network growing method is proposed, referred to as early growth. Early growth selectively grows the channel size of each BNN layer based on channel-wise binary mask training with a Gumbel-Sigmoid function.
    Type: Application
    Filed: September 14, 2022
    Publication date: March 16, 2023
    Applicant: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Deliang Fan, Adnan Siraj Rakin, Li Yang, Chaitali Chakrabarti, Yu Cao, Jae-sun Seo, Jingtao Li
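
A small Python sketch of the Gumbel-Sigmoid masking step behind early growth: each candidate channel has a trainable logit, a Gumbel-Sigmoid (binary-concrete) relaxation gives a differentiable near-binary mask, and channels whose mask settles at 1 are grown. The temperature, threshold, and layer size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
logits = rng.normal(size=16)                # one trainable logit per channel
tau = 0.5                                   # relaxation temperature

u = rng.uniform(1e-6, 1 - 1e-6, size=16)
logistic_noise = np.log(u) - np.log(1 - u)  # difference of two Gumbels
soft_mask = 1 / (1 + np.exp(-(logits + logistic_noise) / tau))

hard_mask = (soft_mask > 0.5).astype(int)   # channels grown at this stage
print(hard_mask, int(hard_mask.sum()), "of 16 channels grown")
```
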
  • Publication number: 20220318628
    Abstract: Hardware noise-aware training for improving accuracy of in-memory computing (IMC)-based deep neural network (DNN) hardware is provided. DNNs have been very successful in large-scale recognition tasks, but they exhibit large computation and memory requirements. To address the memory bottleneck of digital DNN hardware accelerators, IMC designs have been presented to perform analog DNN computations inside the memory. Recent IMC designs have demonstrated high energy-efficiency, but this is achieved by trading off the noise margin, which can degrade the DNN inference accuracy. The present disclosure proposes hardware noise-aware DNN training to largely improve the DNN inference accuracy of IMC hardware. During DNN training, embodiments perform noise injection at the partial sum level, which matches with the crossbar structure of IMC hardware, and the injected noise data is directly based on measurements of actual IMC prototype chips.
    Type: Application
    Filed: April 6, 2022
    Publication date: October 6, 2022
    Applicant: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Sai Kiran Cherupally, Jian Meng, Shihui Yin, Deliang Fan, Jae-sun Seo
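
A Python sketch of the noise-aware training loop described above: during each forward pass the crossbar-level partial sums are perturbed with noise whose scale stands in for prototype-chip measurements, so the learned weights tolerate the analog noise at inference. The regression model, data, and noise scale are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(128, 64))
y = X @ rng.normal(size=64)                  # synthetic regression target
w = np.zeros(64)
MEASURED_STD = 0.3                           # stand-in for chip measurements

for _ in range(500):
    # Two "crossbars": noise is injected at the partial-sum level, matching
    # where the analog accumulation happens in the IMC hardware.
    ps1 = X[:, :32] @ w[:32] + rng.normal(0, MEASURED_STD, len(X))
    ps2 = X[:, 32:] @ w[32:] + rng.normal(0, MEASURED_STD, len(X))
    grad = X.T @ ((ps1 + ps2) - y) / len(X)
    w -= 0.05 * grad

print(float(np.mean((X @ w - y) ** 2)))      # noiseless inference error
```
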