Patents by Inventor Chaitali Chakrabarti

Chaitali Chakrabarti has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240103908
    Abstract: Provided herein are dynamic adaptive scheduling (DAS) systems. In some embodiments, the DAS systems include a first scheduler, a second scheduler that is slower than the first scheduler, and a runtime preselection classifier that is operably connected to the first scheduler and the second scheduler, which runtime preselection classifier is configured to effect selective use of the first scheduler or the second scheduler to perform a given scheduling task. Related systems, computer readable media, and additional methods are also provided.
    Type: Application
    Filed: September 19, 2023
    Publication date: March 28, 2024
    Applicants: ARIZONA BOARD OF REGENTS ON BEHALF OF ARIZONA STATE UNIVERSITY, WISCONSIN ALUMNI RESEARCH FOUNDATION, UNIVERSITY OF ARIZONA, BOARD OF REGENTS, THE UNIVERSITY OF TEXAS SYSTEM
    Inventors: Chaitali CHAKRABARTI, Umit OGRAS, Ahmet GOKSOY, Anish KRISHNAKUMAR, Ali AKOGLU, Md Sahil HASSAN, Radu MARCULESCU, Allen-Jasmin FARCAS
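The abstract above describes a runtime preselection classifier that routes each scheduling task to either a fast scheduler or a slower (presumably higher-quality) one. A minimal sketch of that decision path follows; the scheduler heuristics, feature names, and the load-threshold rule are illustrative assumptions, not taken from the patent (a real preselection classifier would be learned, not hand-written):

```python
# Minimal sketch of a dynamic adaptive scheduling (DAS) decision path.
# All resource/feature names and both heuristics are hypothetical.

def fast_scheduler(task):
    # Low-overhead heuristic: pick the resource with the shortest queue
    return min(task["idle"], key=lambda r: r["queue"])["name"]

def slow_scheduler(task):
    # Slower but better decision: weigh queue length against resource speed
    return min(task["idle"], key=lambda r: r["queue"] / r["speed"])["name"]

def preselect(task, load_threshold=4):
    # Runtime preselection: under light load the fast scheduler is good
    # enough; under heavy load the slow scheduler's better decision can
    # pay for its overhead.
    load = sum(r["queue"] for r in task["idle"])
    return slow_scheduler(task) if load >= load_threshold else fast_scheduler(task)

task = {"idle": [{"name": "cpu", "queue": 1, "speed": 1.0},
                 {"name": "acc", "queue": 2, "speed": 4.0}]}
print(preselect(task))  # light load (1 + 2 < 4) -> fast scheduler picks "cpu"
```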
  • Publication number: 20240004776
    Abstract: A user-space emulation framework for heterogeneous system-on-chip (SoC) design is provided. Embodiments described herein propose a portable, Linux-based emulation framework to provide an ecosystem for hardware-software co-design of heterogeneous SoCs (e.g., domain-specific SoCs (DSSoCs)) and enable their rapid evaluation during the pre-silicon design phase. This framework holistically targets three key challenges of heterogeneous SoC design: accelerator integration, resource management, and application development. These challenges are addressed via a flexible and lightweight user-space runtime environment that enables easy integration of new accelerators, scheduling heuristics, and user applications, and the utility of each is illustrated through various case studies. A prototype compilation toolchain is introduced that enables automatic mapping of unlabeled C code to heterogeneous SoC platforms.
    Type: Application
    Filed: October 22, 2021
    Publication date: January 4, 2024
    Inventors: Umit Ogras, Radu Marculescu, Ali Akoglu, Chaitali Chakrabarti, Daniel Bliss, Samet Egemen Arda, Anderson Sartor, Nirmal Kumbhare, Anish Krishnakumar, Joshua Mack, Ahmet Goksoy, Sumit Mandal
  • Publication number: 20230401092
    Abstract: Runtime task scheduling using imitation learning (IL) for heterogeneous many-core systems is provided. Domain-specific systems-on-chip (DSSoCs) are recognized as a key approach to narrow down the performance and energy-efficiency gap between custom hardware accelerators and programmable processors. Reaching the full potential of these architectures depends critically on optimally scheduling the applications to available resources at runtime. Existing optimization-based techniques cannot achieve this objective at runtime due to the combinatorial nature of the task scheduling problem. In an exemplary aspect described herein, scheduling is posed as a classification problem, and embodiments propose a hierarchical IL-based scheduler that learns from an Oracle to maximize the performance of multiple domain-specific applications. Extensive evaluations show that the proposed IL-based scheduler approximates an offline Oracle policy with more than 99% accuracy for performance- and energy-based optimization objectives.
    Type: Application
    Filed: October 22, 2021
    Publication date: December 14, 2023
    Inventors: Umit Ogras, Radu Marculescu, Ali Akoglu, Chaitali Chakrabarti, Daniel Bliss, Samet Egemen Arda, Anderson Sartor, Nirmal Kumbhare, Anish Krishnakumar, Joshua Mack, Ahmet Goksoy, Sumit Mandal
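The core idea in the abstract above is posing runtime scheduling as a classification problem: a policy trained offline to imitate an Oracle's resource assignments. The sketch below uses a 1-nearest-neighbor lookup as a stand-in for the hierarchical IL policy; the task features and resource labels are illustrative assumptions:

```python
# Sketch of scheduling-as-classification: imitate an Oracle's labels.
# Feature names, labels, and the 1-NN stand-in are hypothetical.
oracle_examples = [
    # (task features: (estimated exec time, data size), Oracle's resource label)
    ((1.0, 10.0), "big_core"),
    ((0.2, 2.0), "little_core"),
    ((0.8, 50.0), "accelerator"),
]

def imitation_schedule(features):
    # 1-nearest-neighbor stand-in for a learned classifier: assign the
    # incoming task the label of the most similar Oracle example.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(oracle_examples, key=lambda ex: dist(ex[0], features))[1]

print(imitation_schedule((0.25, 3.0)))  # closest example is "little_core"
```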
  • Publication number: 20230401422
    Abstract: A full-stack neural network obfuscation framework obfuscates a neural network architecture while preserving its functionality with very limited performance overhead. The framework includes obfuscating parameters, or "knobs", including layer branching, layer widening, selective fusion, and schedule pruning, which increase the number of operators and alter the latency and the number of cache and DRAM accesses. In addition, a genetic algorithm-based approach is adopted to orchestrate the combination of obfuscating knobs to achieve the best obfuscating effect on the layer sequence and dimension parameters so that the architecture information cannot be successfully extracted.
    Type: Application
    Filed: June 9, 2023
    Publication date: December 14, 2023
    Applicant: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Jingtao Li, Chaitali Chakrabarti, Deliang Fan, Adnan Siraj Rakin
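One of the obfuscation knobs named above, layer widening, must enlarge a layer without changing the network's output. A minimal sketch of one way this can work, assuming a plain two-layer ReLU network: duplicate each hidden unit, then halve and duplicate the corresponding outgoing weights so the doubled activations cancel. The duplicate-and-halve construction is an illustrative assumption, not the patented method:

```python
# Sketch of functionality-preserving layer widening (hypothetical construction).

def relu(x):
    return [max(0.0, v) for v in x]

def linear(W, x):
    # Matrix-vector product: one output per row of W
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def widen(W1, W2):
    # Duplicate each hidden unit (rows of W1), then halve and duplicate the
    # outgoing weights (columns of W2) so the outputs are unchanged.
    W1_wide = [row for row in W1 for _ in range(2)]
    W2_wide = [[w / 2 for w in row for _ in range(2)] for row in W2]
    return W1_wide, W2_wide

W1, W2 = [[1.0, 0.0], [0.0, 1.0]], [[1.0, 2.0]]
x = [3.0, -1.0]
W1w, W2w = widen(W1, W2)
y = linear(W2, relu(linear(W1, x)))
y_wide = linear(W2w, relu(linear(W1w, x)))
print(y, y_wide)  # identical outputs: [3.0] [3.0]
```

The widened network has twice the hidden operators (confusing an architecture extractor) but computes the same function.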
  • Publication number: 20230393637
    Abstract: Hierarchical and lightweight imitation learning (IL) for power management of embedded systems-on-chip (SoCs), also referred to herein as HiLITE, is provided. Modern SoCs use dynamic power management (DPM) techniques to improve energy efficiency. However, existing techniques are unable to efficiently adapt the runtime decisions considering multiple objectives (e.g., energy and real-time requirements) simultaneously on heterogeneous platforms. To address this need, embodiments described herein propose HiLITE, a hierarchical IL framework that maximizes energy efficiency while satisfying soft real-time constraints on embedded SoCs. This approach first trains DPM policies using IL; then, it applies a regression policy at runtime to minimize deadline misses. HiLITE improves the energy-delay product by 40% on average, and reduces deadline misses by up to 76%, compared to state-of-the-art approaches.
    Type: Application
    Filed: October 22, 2021
    Publication date: December 7, 2023
    Inventors: Umit Ogras, Radu Marculescu, Ali Akoglu, Chaitali Chakrabarti, Daniel Bliss, Samet Egemen Arda, Anderson Sartor, Nirmal Kumbhare, Anish Krishnakumar, Joshua Mack, Ahmet Goksoy, Sumit Mandal
  • Publication number: 20230129133
    Abstract: Hierarchical coarse-grain sparsity for deep neural networks is provided. An algorithm-hardware co-optimized memory compression technique is proposed to compress deep neural networks in a hardware-efficient manner, which is referred to herein as hierarchical coarse-grain sparsity (HCGS). HCGS provides a new long short-term memory (LSTM) training technique which enforces hierarchical structured sparsity by randomly dropping static block-wise connections between layers. HCGS maintains the same hierarchical structured sparsity throughout training and inference; this reduces weight storage for both training and inference hardware systems.
    Type: Application
    Filed: October 18, 2022
    Publication date: April 27, 2023
    Applicant: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Jae-sun Seo, Deepak Kadetotad, Chaitali Chakrabarti, Visar Berisha
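The HCGS abstract above describes randomly dropping static block-wise connections and keeping that mask fixed through training and inference. The sketch below builds one level of such a block mask; the block size, keep ratio, and per-block-row sampling are illustrative assumptions (the patented scheme applies the idea hierarchically, with blocks nested inside blocks):

```python
# Sketch of a static, random block-wise sparsity mask (one level of HCGS).
# Block size and keep ratio are hypothetical.
import random

def block_mask(rows, cols, block, keep_ratio, seed=0):
    rng = random.Random(seed)
    n_block_rows, n_block_cols = rows // block, cols // block
    mask = [[0] * cols for _ in range(rows)]
    for br in range(n_block_rows):
        # Randomly keep a fixed fraction of blocks in each block-row;
        # the same mask is reused for training and inference.
        kept = rng.sample(range(n_block_cols), max(1, int(n_block_cols * keep_ratio)))
        for bc in kept:
            for r in range(br * block, (br + 1) * block):
                for c in range(bc * block, (bc + 1) * block):
                    mask[r][c] = 1
    return mask

m = block_mask(8, 8, block=4, keep_ratio=0.5)
# Each of the two block-rows keeps exactly one of its two 4x4 blocks
print(sum(sum(row) for row in m))  # 32 nonzeros out of 64
```

Because the kept blocks are contiguous, the surviving weights can be stored densely, which is what makes the scheme hardware-efficient.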
  • Publication number: 20230078473
    Abstract: A robust and accurate binary neural network, referred to as RA-BNN, is provided to simultaneously defend against adversarial noise injection and improve accuracy. Recently developed adversarial weight attack, a.k.a. bit-flip attack (BFA), has shown enormous success in compromising deep neural network (DNN) performance with an extremely small amount of model parameter perturbation. To defend against this threat, embodiments of RA-BNN adopt a complete binary neural network (BNN) to significantly improve DNN model robustness (defined as the number of bit-flips required to degrade the accuracy to as low as a random guess). To improve clean inference accuracy, a novel and efficient two-stage network growing method is proposed and referred to as early growth. Early growth selectively grows the channel size of each BNN layer based on channel-wise binary masks training with Gumbel-Sigmoid function.
    Type: Application
    Filed: September 14, 2022
    Publication date: March 16, 2023
    Applicant: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Deliang Fan, Adnan Siraj Rakin, Li Yang, Chaitali Chakrabarti, Yu Cao, Jae-sun Seo, Jingtao Li
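The early-growth step above trains channel-wise binary masks with a Gumbel-Sigmoid function, a differentiable relaxation of a hard keep/drop decision. A minimal sketch of that sampling operation follows; the temperature value and the way the sample would be used during training are illustrative assumptions:

```python
# Sketch of Gumbel-Sigmoid sampling for a soft channel-selection mask.
import math
import random

def gumbel_sigmoid(logit, temperature=0.5, rng=None):
    # Add the difference of two Gumbel noise samples to the mask logit,
    # then squash with a sigmoid: a differentiable stand-in for a hard
    # {0, 1} channel keep/drop decision.
    rng = rng or random.Random(0)
    g1 = -math.log(-math.log(rng.random()))
    g2 = -math.log(-math.log(rng.random()))
    return 1.0 / (1.0 + math.exp(-(logit + g1 - g2) / temperature))

samples = [gumbel_sigmoid(2.0, rng=random.Random(s)) for s in range(100)]
# A positive mask logit makes the soft mask land near 1 most of the time
print(sum(s > 0.5 for s in samples) > 60)
```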
  • Patent number: 10614798
    Abstract: Aspects disclosed in the detailed description include memory compression in a deep neural network (DNN). To support a DNN application, a fully connected weight matrix associated with a hidden layer(s) of the DNN is divided into a plurality of weight blocks to generate a weight block matrix with a first number of rows and a second number of columns. A selected number of weight blocks are randomly designated as active weight blocks in each of the first number of rows and updated exclusively during DNN training. The weight block matrix is compressed to generate a sparsified weight block matrix including exclusively active weight blocks. The second number of columns is compressed to reduce memory footprint and computation power, while the first number of rows is retained to maintain accuracy of the DNN, thus providing the DNN in an efficient hardware implementation without sacrificing accuracy of the DNN application.
    Type: Grant
    Filed: July 27, 2017
    Date of Patent: April 7, 2020
    Assignee: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Jae-sun Seo, Deepak Kadetotad, Sairam Arunachalam, Chaitali Chakrabarti
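The abstract above compresses a weight matrix by keeping only the randomly designated active blocks in each block-row, shrinking the column count while preserving the row count. A sketch of that packed storage follows; the block size, number of active blocks, and index layout are illustrative assumptions:

```python
# Sketch of packing a weight matrix down to its active blocks.
# Block size and layout are hypothetical.
import random

def compress(weight, block, active_per_row, seed=0):
    rng = random.Random(seed)
    n_block_rows = len(weight) // block
    n_block_cols = len(weight[0]) // block
    packed, index = [], []
    for br in range(n_block_rows):
        # Randomly designate the active blocks for this block-row and
        # remember their column positions for decompression.
        cols = sorted(rng.sample(range(n_block_cols), active_per_row))
        index.append(cols)
        for r in range(br * block, (br + 1) * block):
            row = []
            for bc in cols:
                row.extend(weight[r][bc * block:(bc + 1) * block])
            packed.append(row)
    return packed, index

W = [[float(r * 8 + c) for c in range(8)] for r in range(4)]
packed, index = compress(W, block=2, active_per_row=2)
# 4x8 -> 4x4: rows retained for accuracy, columns halved for storage
print(len(packed), len(packed[0]))  # 4 4
```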
  • Publication number: 20190164538
    Abstract: Aspects disclosed in the detailed description include memory compression in a deep neural network (DNN). To support a DNN application, a fully connected weight matrix associated with a hidden layer(s) of the DNN is divided into a plurality of weight blocks to generate a weight block matrix with a first number of rows and a second number of columns. A selected number of weight blocks are randomly designated as active weight blocks in each of the first number of rows and updated exclusively during DNN training. The weight block matrix is compressed to generate a sparsified weight block matrix including exclusively active weight blocks. The second number of columns is compressed to reduce memory footprint and computation power, while the first number of rows is retained to maintain accuracy of the DNN, thus providing the DNN in an efficient hardware implementation without sacrificing accuracy of the DNN application.
    Type: Application
    Filed: July 27, 2017
    Publication date: May 30, 2019
    Applicant: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Jae-sun Seo, Deepak Kadetotad, Sairam Arunachalam, Chaitali Chakrabarti