Patents by Inventor Suhas Parlathaya KUDRAL

Suhas Parlathaya KUDRAL has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11740941
    Abstract: The present invention describes a method of accelerating execution of one or more application tasks in a computing device using a machine learning (ML) based model. According to one embodiment, a neural accelerating engine present in the computing device receives an ML input task from a user for execution on the computing device. The neural accelerating engine then retrieves a trained ML model and a corresponding optimal configuration file based on the received ML input task, and obtains the current performance status of the computing device for executing that task. Finally, the neural accelerating engine dynamically schedules and dispatches parts of the ML input task to one or more processing units in the computing device for execution, based on the retrieved configuration file and the obtained performance status. (A schematic sketch of this scheduling flow appears after the listing below.)
    Type: Grant
    Filed: February 23, 2018
    Date of Patent: August 29, 2023
    Inventors: Arun Abraham, Suhas Parlathaya Kudral, Balaji Srinivas Holur, Sarbojit Ganguly, Venkappa Mala, Suneel Kumar Surimani, Sharan Kumar Allur
  • Publication number: 20220366217
    Abstract: Embodiments herein provide a method and system for network- and hardware-aware computing layout selection for efficient Deep Neural Network (DNN) inference. The method comprises: receiving, by an electronic device, a DNN model to be executed, wherein the DNN model is associated with a task; dividing the DNN model into a plurality of sub-graphs, wherein each sub-graph is to be processed individually; identifying a computing unit from a plurality of computing units for execution of each sub-graph based on a complexity score; and determining a computing layout from a plurality of computing layouts for each identified computing unit, wherein the sub-graph is executed on the identified computing unit through the determined computing layout. (A schematic sketch of this selection flow appears after the listing below.)
    Type: Application
    Filed: July 14, 2022
    Publication date: November 17, 2022
    Inventors: Briraj SINGH, Amogha UDUPA SHANKARANARAYANA GOPAL, Aniket DWIVEDI, Bharat MUDRAGADA, Alladi Ashok Kumar SENAPATI, Suhas Parlathaya KUDRAL, Arun ABRAHAM, Praveen Doreswamy NAIDU
  • Publication number: 20200019854
    Abstract: The present invention describes a method of accelerating execution of one or more application tasks in a computing device using a machine learning (ML) based model. According to one embodiment, a neural accelerating engine present in the computing device receives an ML input task from a user for execution on the computing device. The neural accelerating engine then retrieves a trained ML model and a corresponding optimal configuration file based on the received ML input task, and obtains the current performance status of the computing device for executing that task. Finally, the neural accelerating engine dynamically schedules and dispatches parts of the ML input task to one or more processing units in the computing device for execution, based on the retrieved configuration file and the obtained performance status.
    Type: Application
    Filed: February 23, 2018
    Publication date: January 16, 2020
    Inventors: Arun ABRAHAM, Suhas Parlathaya KUDRAL, Balaji Srinivas HOLUR, Sarbojit GANGULY, Venkappa MALA, Suneel Kumar SURIMANI, Sharan Kumar ALLUR
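
Patent 11740941 (and its corresponding publication 20200019854) describes a neural accelerating engine that splits an ML input task across a device's processing units, guided by an optimal configuration file and the device's current performance status. The sketch below is a minimal Python illustration of that scheduling idea only; the class names, the 0.8 "busy" threshold, and the least-loaded fallback policy are assumptions made for illustration and are not taken from the patent.

```python
# Minimal sketch of the scheduling idea in patent 11740941.
# All names and policies here are illustrative assumptions; the patent
# defines the method only in prose.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class SubTask:
    name: str   # e.g. a layer or group of layers of the ML input task
    cost: float # estimated compute cost of this part of the task


@dataclass
class DeviceStatus:
    # Hypothetical snapshot of current device performance.
    load: Dict[str, float]  # processing unit -> current utilisation (0..1)


def schedule(subtasks: List[SubTask],
             optimal_config: Dict[str, str],
             status: DeviceStatus) -> Dict[str, str]:
    """Assign each part of the ML input task to a processing unit.

    The optimal configuration file gives a preferred unit per sub-task;
    the current performance status is used to fall back to the least
    loaded unit when the preferred one is busy (an assumed policy).
    """
    assignment = {}
    for task in subtasks:
        preferred = optimal_config.get(task.name, "CPU")
        if status.load.get(preferred, 0.0) < 0.8:  # assumed busy threshold
            assignment[task.name] = preferred
        else:
            # Fall back to the least loaded available unit.
            assignment[task.name] = min(status.load, key=status.load.get)
    return assignment


if __name__ == "__main__":
    subtasks = [SubTask("conv_block", 5.0), SubTask("fc_head", 1.0)]
    config = {"conv_block": "GPU", "fc_head": "CPU"}
    status = DeviceStatus(load={"CPU": 0.2, "GPU": 0.9, "DSP": 0.1})
    print(schedule(subtasks, config, status))
```

With the example inputs, the convolution block falls back to the DSP because the GPU is heavily loaded, while the fully connected head stays on the CPU as the configuration prefers.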
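
Publication 20220366217 describes dividing a DNN model into sub-graphs, scoring each sub-graph's complexity, picking a computing unit per sub-graph, and then choosing a computing layout for that unit. The sketch below illustrates that selection flow under stated assumptions: the complexity score (weighted operator and parameter counts), the score thresholds, and the per-unit layout table are all made up for illustration and do not come from the publication.

```python
# Minimal sketch of the sub-graph dispatch idea in publication 20220366217.
# The scoring formula, thresholds, and layout table are assumptions.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class SubGraph:
    name: str
    num_ops: int     # number of operators in the sub-graph
    num_params: int  # number of parameters touched by the sub-graph


def complexity_score(sg: SubGraph) -> float:
    # Assumed scoring: weight operator count and parameter count.
    return 0.7 * sg.num_ops + 0.3 * (sg.num_params / 1_000)


def select_unit_and_layout(sg: SubGraph) -> Tuple[str, str]:
    """Pick a computing unit by complexity score, then a layout for that unit."""
    score = complexity_score(sg)
    unit = "NPU" if score > 50 else "GPU" if score > 10 else "CPU"
    # Assumed preferred tensor layout per computing unit.
    layout = {"CPU": "NCHW", "GPU": "NHWC", "NPU": "NC4HW4"}[unit]
    return unit, layout


if __name__ == "__main__":
    model = [SubGraph("stem", 4, 9_000), SubGraph("backbone", 120, 4_000_000)]
    for sg in model:
        unit, layout = select_unit_and_layout(sg)
        print(f"{sg.name}: run on {unit} using {layout} layout")
```

With the example inputs, the small stem sub-graph stays on the CPU in NCHW layout, while the large backbone sub-graph is dispatched to the NPU in its NC4HW4 layout.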