Patents by Inventor Chaitali Chakrabarti
Chaitali Chakrabarti has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240103908
Abstract: Provided herein are dynamic adaptive scheduling (DAS) systems. In some embodiments, the DAS systems include a first scheduler, a second scheduler that is slower than the first scheduler, and a runtime preselection classifier that is operably connected to the first scheduler and the second scheduler, which runtime preselection classifier is configured to effect selective use of the first scheduler or the second scheduler to perform a given scheduling task. Related systems, computer readable media, and additional methods are also provided.
Type: Application
Filed: September 19, 2023
Publication date: March 28, 2024
Applicants: ARIZONA BOARD OF REGENTS ON BEHALF OF ARIZONA STATE UNIVERSITY, WISCONSIN ALUMNI RESEARCH FOUNDATION, UNIVERSITY OF ARIZONA, BOARD OF REGENTS, THE UNIVERSITY OF TEXAS SYSTEM
Inventors: Chaitali CHAKRABARTI, Umit OGRAS, Ahmet GOKSOY, Anish KRISHNAKUMAR, Ali AKOGLU, Md Sahil HASSAN, Radu MARCULESCU, Allen-Jasmin FARCAS
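A minimal Python sketch of the preselection idea in this abstract: a lightweight runtime classifier decides, per task, whether the fast scheduler suffices or the slower, higher-quality scheduler is worth its overhead. The feature names (`ready_queue_len`, `avg_exec_time`) and the hand-written threshold standing in for the trained classifier are illustrative assumptions, not the patent's actual design.

```python
# Sketch of dynamic adaptive scheduling (DAS): a runtime preselection
# classifier routes each scheduling decision to a fast or a slow scheduler.
from dataclasses import dataclass
import random

@dataclass
class Task:
    task_id: int
    ready_queue_len: int   # hypothetical system-state feature
    avg_exec_time: float   # hypothetical task feature

def fast_scheduler(task, resources):
    # Low-overhead heuristic: earliest-available resource.
    return min(resources, key=lambda r: r["available_at"])

def slow_scheduler(task, resources):
    # Costlier policy that also weighs expected execution time.
    return min(resources, key=lambda r: r["available_at"] + task.avg_exec_time / r["speed"])

def preselect(task):
    # Stand-in for the trained classifier: under heavy load the slow
    # scheduler's better decisions pay for its overhead.
    return "slow" if task.ready_queue_len > 8 else "fast"

resources = [{"id": i, "available_at": random.uniform(0, 5), "speed": 1.0 + i} for i in range(4)]
for tid in range(5):
    task = Task(tid, random.randint(0, 16), random.uniform(1, 4))
    choice = preselect(task)
    scheduler = slow_scheduler if choice == "slow" else fast_scheduler
    assignment = scheduler(task, resources)
    print(f"task {task.task_id}: {choice} scheduler -> resource {assignment['id']}")
```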
-
Publication number: 20240004776
Abstract: A user-space emulation framework for heterogeneous system-on-chip (SoC) design is provided. Embodiments described herein propose a portable, Linux-based emulation framework to provide an ecosystem for hardware-software co-design of heterogeneous SoCs (e.g., domain-specific SoCs (DSSoCs)) and enable their rapid evaluation during the pre-silicon design phase. This framework holistically targets three key challenges of heterogeneous SoC design: accelerator integration, resource management, and application development. These challenges are addressed via a flexible and lightweight user-space runtime environment that enables easy integration of new accelerators, scheduling heuristics, and user applications; the utility of each is illustrated through various case studies. A prototype compilation toolchain is introduced that enables automatic mapping of unlabeled C code to heterogeneous SoC platforms.
Type: Application
Filed: October 22, 2021
Publication date: January 4, 2024
Inventors: Umit Ogras, Radu Marculescu, Ali Akoglu, Chaitali Chakrabarti, Daniel Bliss, Samet Egemen Arda, Anderson Sartor, Nirmal Kumbhare, Anish Krishnakumar, Joshua Mack, Ahmet Goksoy, Sumit Mandal
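The "easy integration of new accelerators, scheduling heuristics, and user applications" suggests a plugin-style runtime. The registry API below is a hypothetical sketch of that pattern, not the framework's actual interface: accelerator implementations and a scheduling heuristic register with a user-space runtime, and applications submit work without knowing which backend executes it.

```python
# Hypothetical sketch of a plugin-style user-space runtime for SoC emulation.
class Runtime:
    def __init__(self):
        self.accelerators = {}   # task kind -> list of candidate implementations
        self.scheduler = None

    def register_accelerator(self, kind, fn):
        self.accelerators.setdefault(kind, []).append(fn)

    def set_scheduler(self, fn):
        self.scheduler = fn

    def submit(self, kind, payload):
        impls = self.accelerators.get(kind, [])
        fn = self.scheduler(kind, impls) if self.scheduler else impls[0]
        return fn(payload)

rt = Runtime()
rt.register_accelerator("fft", lambda x: f"cpu-fft({x})")   # software fallback
rt.register_accelerator("fft", lambda x: f"hw-fft({x})")    # emulated accelerator
rt.set_scheduler(lambda kind, impls: impls[-1])             # heuristic: prefer hardware
print(rt.submit("fft", "frame0"))                           # -> hw-fft(frame0)
```

Swapping the scheduler or adding an accelerator is a one-line registration, which is the kind of decoupling the abstract's three challenges call for.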
-
Publication number: 20230401422
Abstract: A full-stack neural network obfuscation framework obfuscates a neural network architecture while preserving its functionality with very limited performance overhead. The framework includes obfuscating parameters, or "knobs", including layer branching, layer widening, selective fusion, and schedule pruning, that increase the number of operators and increase or decrease the latency and the number of cache and DRAM accesses. In addition, a genetic algorithm-based approach is adopted to orchestrate the combination of obfuscating knobs to achieve the best obfuscating effect on the layer sequence and dimension parameters, so that the architecture information cannot be successfully extracted.
Type: Application
Filed: June 9, 2023
Publication date: December 14, 2023
Applicant: Arizona Board of Regents on behalf of Arizona State University
Inventors: Jingtao Li, Chaitali Chakrabarti, Deliang Fan, Adnan Siraj Rakin
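To illustrate one of the named knobs, here is a sketch of layer widening on a linear layer: extra channels are introduced and a merge step recombines them, so the observable architecture changes while the computed function does not. This is an illustrative construction under my own assumptions, not the patent's exact transformation.

```python
# Sketch of the "layer widening" obfuscation knob: widen a linear layer by
# duplicating and halving output channels, then sum duplicates back together.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))          # original layer: 8 inputs -> 4 outputs
x = rng.normal(size=8)

# Widened layer: each output channel appears twice at half weight.
W_wide = np.repeat(W, 2, axis=0) / 2.0          # 8 outputs instead of 4
merge = np.kron(np.eye(4), np.ones((1, 2)))     # sums each duplicated pair

original = W @ x
obfuscated = merge @ (W_wide @ x)
print(np.allclose(original, obfuscated))        # True: same function, wider layer
```

An attacker profiling operator counts or memory traffic now sees a different layer dimension, which is the side-channel confusion the abstract targets.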
-
Publication number: 20230401092
Abstract: Runtime task scheduling using imitation learning (IL) for heterogeneous many-core systems is provided. Domain-specific systems-on-chip (DSSoCs) are recognized as a key approach to narrow down the performance and energy-efficiency gap between custom hardware accelerators and programmable processors. Reaching the full potential of these architectures depends critically on optimally scheduling the applications to available resources at runtime. Existing optimization-based techniques cannot achieve this objective at runtime due to the combinatorial nature of the task scheduling problem. In an exemplary aspect described herein, scheduling is posed as a classification problem, and embodiments propose a hierarchical IL-based scheduler that learns from an Oracle to maximize the performance of multiple domain-specific applications. Extensive evaluations show that the proposed IL-based scheduler approximates an offline Oracle policy with more than 99% accuracy for performance- and energy-based optimization objectives.
Type: Application
Filed: October 22, 2021
Publication date: December 14, 2023
Inventors: Umit Ogras, Radu Marculescu, Ali Akoglu, Chaitali Chakrabarti, Daniel Bliss, Samet Egemen Arda, Anderson Sartor, Nirmal Kumbhare, Anish Krishnakumar, Joshua Mack, Ahmet Goksoy, Sumit Mandal
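A minimal sketch of "scheduling posed as a classification problem": an offline Oracle labels system states with the best processing element, and a compact classifier (scikit-learn's DecisionTreeClassifier here, as a stand-in) imitates it for cheap runtime decisions. The toy feature set and Oracle rule are my assumptions; the patent's features and hierarchical policy are richer.

```python
# Sketch of imitation-learning-based scheduling: learn a classifier that
# maps task/system features to the processing element an Oracle would pick.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
N_PE = 3   # processing elements, e.g., big core, little core, accelerator

# Toy features: [task size, queue length at each PE]
X = rng.uniform(0, 10, size=(500, 1 + N_PE))

def oracle(row):
    # Hypothetical offline Oracle: send large tasks to the accelerator (PE 2)
    # unless its queue is long; otherwise pick the shortest queue.
    size, queues = row[0], row[1:]
    if size > 6 and queues[2] < 5:
        return 2
    return int(np.argmin(queues))

y = np.array([oracle(row) for row in X])

clf = DecisionTreeClassifier(max_depth=6).fit(X, y)   # the imitation policy
print("agreement with Oracle:", (clf.predict(X) == y).mean())
```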
-
Publication number: 20230393637
Abstract: Hierarchical and lightweight imitation learning (IL) for power management of embedded systems-on-chip (SoCs), also referred to herein as HiLITE, is provided. Modern SoCs use dynamic power management (DPM) techniques to improve energy efficiency. However, existing techniques are unable to efficiently adapt the runtime decisions considering multiple objectives (e.g., energy and real-time requirements) simultaneously on heterogeneous platforms. To address this need, embodiments described herein propose HiLITE, a hierarchical IL framework that maximizes energy efficiency while satisfying soft real-time constraints on embedded SoCs. This approach first trains DPM policies using IL; then, it applies a regression policy at runtime to minimize deadline misses. HiLITE improves the energy-delay product by 40% on average, and reduces deadline misses by up to 76%, compared to state-of-the-art approaches.
Type: Application
Filed: October 22, 2021
Publication date: December 7, 2023
Inventors: Umit Ogras, Radu Marculescu, Ali Akoglu, Chaitali Chakrabarti, Daniel Bliss, Samet Egemen Arda, Anderson Sartor, Nirmal Kumbhare, Anish Krishnakumar, Joshua Mack, Ahmet Goksoy, Sumit Mandal
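A sketch of the two-stage structure the abstract describes: an IL-trained policy picks a power state (a DVFS frequency level here), then a lightweight regression estimates slack and escalates the frequency when a deadline miss looks likely. Both policies below are hand-written stand-ins for the learned models, and the frequency levels and timing model are assumptions for illustration.

```python
# Sketch of HiLITE-style two-stage DPM: IL policy + runtime slack regression.
FREQ_LEVELS = [0.5, 1.0, 1.5, 2.0]   # GHz, illustrative

def il_policy(utilization):
    # Stand-in for the IL-trained DPM policy: scale frequency with load.
    return min(int(utilization * len(FREQ_LEVELS)), len(FREQ_LEVELS) - 1)

def slack_regressor(work_cycles, freq_ghz, deadline_us):
    # Stand-in regression: predicted slack = deadline - estimated runtime.
    runtime_us = work_cycles / (freq_ghz * 1e3)   # cycles at GHz -> microseconds
    return deadline_us - runtime_us

def choose_frequency(utilization, work_cycles, deadline_us):
    idx = il_policy(utilization)
    # Second stage: escalate frequency while the regressor predicts a miss.
    while idx < len(FREQ_LEVELS) - 1 and slack_regressor(work_cycles, FREQ_LEVELS[idx], deadline_us) < 0:
        idx += 1
    return FREQ_LEVELS[idx]

# Light load but a tight deadline: the IL stage picks 1.0 GHz, the slack
# check bumps it to 1.5 GHz to avoid the predicted miss.
print(choose_frequency(utilization=0.3, work_cycles=1_500, deadline_us=1.2))
```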
-
Publication number: 20230129133
Abstract: Hierarchical coarse-grain sparsity for deep neural networks is provided. An algorithm-hardware co-optimized memory compression technique is proposed to compress deep neural networks in a hardware-efficient manner, which is referred to herein as hierarchical coarse-grain sparsity (HCGS). HCGS provides a new long short-term memory (LSTM) training technique which enforces hierarchical structured sparsity by randomly dropping static block-wise connections between layers. HCGS maintains the same hierarchical structured sparsity throughout training and inference; this reduces weight storage for both training and inference hardware systems.
Type: Application
Filed: October 18, 2022
Publication date: April 27, 2023
Applicant: Arizona Board of Regents on behalf of Arizona State University
Inventors: Jae-sun Seo, Deepak Kadetotad, Chaitali Chakrabarti, Visar Berisha
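A minimal sketch of the hierarchical block-sparsity idea: tile the weight matrix into coarse blocks, randomly keep a subset, then tile again at a finer granularity inside what remains. The mask is fixed before training and reused at inference, matching "the same hierarchical structured sparsity throughout training and inference". Block sizes and keep ratios are arbitrary examples.

```python
# Sketch of hierarchical coarse-grain sparsity (HCGS): a two-level random
# block mask that is fixed once and applied to a weight matrix.
import numpy as np

rng = np.random.default_rng(0)

def block_mask(rows, cols, block, keep_ratio, rng):
    br, bc = rows // block, cols // block
    keep = rng.random((br, bc)) < keep_ratio          # choose blocks at random
    return np.kron(keep, np.ones((block, block)))     # expand to element mask

R, C = 64, 64
coarse = block_mask(R, C, block=16, keep_ratio=0.5, rng=rng)   # level 1
fine = block_mask(R, C, block=4, keep_ratio=0.5, rng=rng)      # level 2
mask = coarse * fine                                           # hierarchical mask

W = rng.normal(size=(R, C)) * mask    # weights outside kept blocks stay zero
print("density:", mask.mean())        # ~0.25 of weights stored and updated
```

Because the kept blocks are static and regular, the hardware only needs block indices rather than per-element indices, which is the "hardware-efficient" part of the claim.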
-
Publication number: 20230078473
Abstract: A robust and accurate binary neural network, referred to as RA-BNN, is provided to simultaneously defend against adversarial noise injection and improve accuracy. The recently developed adversarial weight attack, a.k.a. bit-flip attack (BFA), has shown enormous success in compromising deep neural network (DNN) performance with an extremely small amount of model parameter perturbation. To defend against this threat, embodiments of RA-BNN adopt a complete binary neural network (BNN) to significantly improve DNN model robustness (defined as the number of bit-flips required to degrade the accuracy to as low as a random guess). To improve clean inference accuracy, a novel and efficient two-stage network growing method, referred to as early growth, is proposed. Early growth selectively grows the channel size of each BNN layer based on channel-wise binary mask training with a Gumbel-Sigmoid function.
Type: Application
Filed: September 14, 2022
Publication date: March 16, 2023
Applicant: Arizona Board of Regents on behalf of Arizona State University
Inventors: Deliang Fan, Adnan Siraj Rakin, Li Yang, Chaitali Chakrabarti, Yu Cao, Jae-sun Seo, Jingtao Li
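A sketch of the mask-training ingredient named in the abstract: per-channel binary masks relaxed with a Gumbel-Sigmoid so they stay differentiable during training, with channels whose mask ends up high selected to grow. Shapes, temperature, and the growth threshold are illustrative choices, and the logits would be trainable in the real method.

```python
# Sketch of Gumbel-Sigmoid channel-mask sampling for "early growth".
import numpy as np

rng = np.random.default_rng(0)

def gumbel_sigmoid(logits, temperature, rng):
    # Relaxed Bernoulli: sigmoid((logits + logistic noise) / T), where the
    # logistic noise is the difference of two Gumbel samples.
    u1 = rng.uniform(1e-9, 1.0, logits.shape)
    u2 = rng.uniform(1e-9, 1.0, logits.shape)
    noise = np.log(np.log(u2) / np.log(u1))
    return 1.0 / (1.0 + np.exp(-(logits + noise) / temperature))

n_channels = 16
mask_logits = rng.normal(size=n_channels)      # trainable in the real method

soft_mask = gumbel_sigmoid(mask_logits, temperature=0.5, rng=rng)
grown = soft_mask > 0.5                        # channels selected to grow
print(f"growing {grown.sum()} of {n_channels} candidate channels")
```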
-
Patent number: 10614798
Abstract: Aspects disclosed in the detailed description include memory compression in a deep neural network (DNN). To support a DNN application, a fully connected weight matrix associated with a hidden layer(s) of the DNN is divided into a plurality of weight blocks to generate a weight block matrix with a first number of rows and a second number of columns. A selected number of weight blocks are randomly designated as active weight blocks in each of the first number of rows and updated exclusively during DNN training. The weight block matrix is compressed to generate a sparsified weight block matrix including exclusively active weight blocks. The second number of columns is compressed to reduce memory footprint and computation power, while the first number of rows is retained to maintain accuracy of the DNN, thus providing the DNN in an efficient hardware implementation without sacrificing accuracy of the DNN application.
Type: Grant
Filed: July 27, 2017
Date of Patent: April 7, 2020
Assignee: Arizona Board of Regents on behalf of Arizona State University
Inventors: Jae-sun Seo, Deepak Kadetotad, Sairam Arunachalam, Chaitali Chakrabarti
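A minimal sketch of the compression scheme this abstract describes: tile a fully connected weight matrix into blocks, randomly mark a fixed number of blocks per block-row as active, and store only the active blocks with their column indices, so every row stays represented while the column dimension shrinks. The sizes below are illustrative.

```python
# Sketch of per-row random active-block selection for weight compression.
import numpy as np

rng = np.random.default_rng(0)

ROWS, COLS, BLOCK = 8, 16, 4          # weight matrix tiled into 4x4 blocks
ACTIVE_PER_ROW = 2                    # blocks kept (and trained) per block-row

br, bc = ROWS // BLOCK, COLS // BLOCK
W = rng.normal(size=(ROWS, COLS))

# Randomly pick which blocks in each block-row stay active.
active = np.zeros((br, bc), dtype=bool)
for r in range(br):
    active[r, rng.choice(bc, size=ACTIVE_PER_ROW, replace=False)] = True

# Compressed storage: only the active blocks, plus their block coordinates.
packed = [(r, c, W[r*BLOCK:(r+1)*BLOCK, c*BLOCK:(c+1)*BLOCK])
          for r in range(br) for c in range(bc) if active[r, c]]
print(f"stored {len(packed)} of {br*bc} blocks "
      f"({len(packed)*BLOCK*BLOCK} of {W.size} weights)")
```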
-
Publication number: 20190164538
Abstract: Aspects disclosed in the detailed description include memory compression in a deep neural network (DNN). To support a DNN application, a fully connected weight matrix associated with a hidden layer(s) of the DNN is divided into a plurality of weight blocks to generate a weight block matrix with a first number of rows and a second number of columns. A selected number of weight blocks are randomly designated as active weight blocks in each of the first number of rows and updated exclusively during DNN training. The weight block matrix is compressed to generate a sparsified weight block matrix including exclusively active weight blocks. The second number of columns is compressed to reduce memory footprint and computation power, while the first number of rows is retained to maintain accuracy of the DNN, thus providing the DNN in an efficient hardware implementation without sacrificing accuracy of the DNN application.
Type: Application
Filed: July 27, 2017
Publication date: May 30, 2019
Applicant: Arizona Board of Regents on behalf of Arizona State University
Inventors: Jae-sun Seo, Deepak Kadetotad, Sairam Arunachalam, Chaitali Chakrabarti