Patents by Inventor Arun Abraham
Arun Abraham has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12020146
Abstract: A method of processing a neural network model by using a plurality of processors includes allocating at least one slice to each layer from among a plurality of layers included in the neural network model, allocating each layer from among the plurality of layers to the plurality of processors based on respective processing times of the plurality of processors for processing each of the at least one slice, and processing the neural network model by using the plurality of processors based on a result of the allocation.
Type: Grant
Filed: August 23, 2019
Date of Patent: June 25, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Manas Sahni, Arun Abraham, Sharan Kumar Allur, Venkappa Mala
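The allocation step the abstract describes — picking, for each layer, the processor whose per-slice processing times are smallest — can be sketched as a simple greedy assignment. This is an illustrative reading, not the patented method itself; the data layout and function names are assumptions.

```python
def allocate_layers(layer_slice_times):
    """Assign each layer to the processor with the smallest total
    processing time over that layer's slices.

    layer_slice_times: one dict per layer, mapping a processor name
    to the list of per-slice times measured on that processor.
    Returns the chosen processor for each layer, in layer order.
    """
    allocation = []
    for slice_times in layer_slice_times:
        # Total time each candidate processor needs for this layer's slices.
        totals = {proc: sum(times) for proc, times in slice_times.items()}
        # Greedily pick the processor that finishes this layer fastest.
        allocation.append(min(totals, key=totals.get))
    return allocation

# Two layers, each split into two slices, timed on a CPU and a GPU.
times = [
    {"cpu": [4.0, 4.0], "gpu": [1.0, 2.0]},  # GPU wins: 3.0 vs 8.0
    {"cpu": [1.0, 1.0], "gpu": [3.0, 3.0]},  # CPU wins: 2.0 vs 6.0
]
print(allocate_layers(times))  # ['gpu', 'cpu']
```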
-
Publication number: 20240013053
Abstract: Provided are systems and methods for optimizing neural networks for on-device deployment in an electronic device. A method for optimizing neural networks for on-device deployment in an electronic device includes receiving a plurality of neural network (NN) models, fusing at least two NN models from the plurality of NN models based on at least one layer of each of the at least two NN models, to generate a fused NN model, identifying at least one redundant layer from the fused NN model, and removing the at least one redundant layer to generate an optimized NN model.
Type: Application
Filed: July 19, 2023
Publication date: January 11, 2024
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Ashutosh Pavagada VISWESWARA, Payal Anand, Arun Abraham, Vikram Nelvoy Rajendiran, Rajath Elias Soans
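The fuse-then-deduplicate idea can be illustrated with layers keyed by an (op, params) signature: fusing concatenates two models, and a layer whose signature was already seen is treated as redundant and kept only once. A minimal sketch under assumed representations, not the patented procedure:

```python
def fuse_models(model_a, model_b):
    """Concatenate two models' layer lists into one fused pipeline."""
    return model_a + model_b

def remove_redundant_layers(fused):
    """Drop layers whose (op, params) signature was already seen,
    keeping the first occurrence -- a stand-in for layer sharing."""
    seen, optimized = set(), []
    for layer in fused:
        signature = (layer["op"], layer["params"])
        if signature not in seen:
            seen.add(signature)
            optimized.append(layer)
    return optimized

# Two models that share an identical preprocessing layer.
face_model = [{"op": "resize", "params": "224x224"},
              {"op": "conv", "params": "face_w1"}]
scene_model = [{"op": "resize", "params": "224x224"},
               {"op": "conv", "params": "scene_w1"}]
fused = remove_redundant_layers(fuse_models(face_model, scene_model))
# The shared resize layer survives once; both conv layers are kept.
```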
-
Patent number: 11775907
Abstract: A method facilitating business continuity of an enterprise computer network includes receiving an initiate network recovery message at a disaster recovery orchestration platform identifying an enterprise computer network to be recovered. Predetermined network configuration information associated with the enterprise computer network is retrieved from a storage device accessible to the disaster recovery orchestration platform. A virtual recovered enterprise network is built in a virtual computing environment based at least in part on the predetermined network configuration information. A system to facilitate business continuity of an enterprise computer network is also provided. The system includes a disaster recovery orchestration platform, a storage device, and at least one communication interface. The disaster recovery orchestration platform includes at least one platform computing device, each of which includes at least one processor and associated memory.
Type: Grant
Filed: August 23, 2021
Date of Patent: October 3, 2023
Assignee: DATTO, INC.
Inventors: Marcus Anthony Recck, Arun Abraham Philip
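The recovery workflow — receive a recovery request, look up stored configuration, build a virtual copy of the network — can be outlined in a few lines. The data shapes and names here are purely illustrative assumptions, not Datto's implementation:

```python
def recover_network(network_id, config_store):
    """Orchestrate recovery: retrieve the predetermined configuration
    for the named network and build a virtual stand-in for each
    configured device."""
    config = config_store[network_id]
    return {dev["name"]: {"role": dev["role"], "virtual": True}
            for dev in config["devices"]}

# Hypothetical stored configuration for one enterprise network.
config_store = {
    "acme-lan": {"devices": [
        {"name": "fw1", "role": "firewall"},
        {"name": "sw1", "role": "switch"},
    ]}
}
vnet = recover_network("acme-lan", config_store)
```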
-
Patent number: 11740941
Abstract: The present invention describes a method of accelerating execution of one or more application tasks in a computing device using a machine learning (ML) based model. According to one embodiment, a neural accelerating engine present in the computing device receives an ML input task for execution on the computing device from a user. The neural accelerating engine further retrieves a trained ML model and a corresponding optimal configuration file based on the received ML input task. Also, the current performance status of the computing device for executing the ML input task is obtained. Then, the neural accelerating engine dynamically schedules and dispatches parts of the ML input task to one or more processing units in the computing device for execution based on the retrieved optimal configuration file and the obtained current performance status of the computing device.
Type: Grant
Filed: February 23, 2018
Date of Patent: August 29, 2023
Inventors: Arun Abraham, Suhas Parlathaya Kudral, Balaji Srinivas Holur, Sarbojit Ganguly, Venkappa Mala, Suneel Kumar Surimani, Sharan Kumar Allur
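The dispatch step — routing each part of an ML task to the processing unit named in a configuration file, adjusted by the device's current load — can be sketched as below. The load threshold, fallback policy, and names are assumptions for illustration only:

```python
def dispatch(task_parts, config, device_status):
    """Send each part of an ML task to the unit the configuration
    file prefers, falling back to the CPU when that unit is loaded
    beyond 90% (an assumed threshold)."""
    schedule = {}
    for part in task_parts:
        preferred = config.get(part, "cpu")
        overloaded = device_status.get(preferred, 0.0) > 0.9
        schedule[part] = "cpu" if overloaded else preferred
    return schedule

# Hypothetical load snapshot and per-part preferences.
status = {"gpu": 0.95, "dsp": 0.2}
config = {"conv_layers": "gpu", "fft": "dsp"}
plan = dispatch(["conv_layers", "fft"], config, status)
# The GPU is overloaded, so conv_layers falls back to the CPU.
```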
-
Publication number: 20230068381
Abstract: Various embodiments of the disclosure provide a method for quantizing a Deep Neural Network (DNN) model in an electronic device. The method includes: estimating, by the electronic device, an activation range of each layer of the DNN model using self-generated data (e.g. retro image, audio, video, etc.) and/or a sensitive index of each layer of the DNN model; quantizing, by the electronic device, the DNN model based on the activation range and/or the sensitive index; and allocating, by the electronic device, a dynamic bit precision for each channel of each layer of the DNN model to quantize the DNN model.
Type: Application
Filed: October 6, 2022
Publication date: March 2, 2023
Inventors: Tejpratap Venkata Subbu Lakshmi GOLLANAPALLI, Arun ABRAHAM, Raja KUMAR, Pradeep NELAHONNE SHIVAMURTHAPPA, Vikram Nelvoy RAJENDIRAN, Prasen Kumar SHARMA
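The three ingredients the abstract names — an estimated activation range, a sensitivity-driven bit width, and uniform quantization at that width — can be sketched generically. The sensitivity threshold and the 4/8-bit choice are illustrative assumptions, not values from the publication:

```python
def activation_range(samples):
    """Estimate a layer's activation range from self-generated data."""
    return min(samples), max(samples)

def bits_for_channel(sensitivity, threshold=0.5):
    """Sensitive channels keep 8 bits; the rest drop to 4 (assumed)."""
    return 8 if sensitivity > threshold else 4

def quantize(value, rng, bits):
    """Uniform quantization of one activation to the given bit width."""
    lo, hi = rng
    step = (hi - lo) / ((1 << bits) - 1)
    return lo + round((value - lo) / step) * step

rng = activation_range([0.0, 0.3, 0.7, 1.0])   # -> (0.0, 1.0)
q = quantize(0.5, rng, bits_for_channel(0.9))  # 8-bit path
```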
-
Publication number: 20220366217
Abstract: Embodiments herein provide a method and system for network and hardware aware computing layout selection for efficient Deep Neural Network (DNN) Inference. The method comprises: receiving, by an electronic device, a DNN model to be executed, wherein the DNN model is associated with a task; dividing the DNN model into a plurality of sub-graphs, wherein each sub-graph is to be processed individually; identifying a computing unit from a plurality of computing units for execution of each sub-graph based on a complexity score; and determining a computing layout from a plurality of computing layouts for each identified computing unit, wherein the sub-graph is executed on the identified computing unit through the determined computing layout.
Type: Application
Filed: July 14, 2022
Publication date: November 17, 2022
Inventors: Briraj SINGH, Amogha UDUPA SHANKARANARAYANA GOPAL, Aniket DWIVEDI, Bharat MUDRAGADA, Alladi Ashok Kumar SENAPATI, Suhas Parlathaya KUDRAL, Arun ABRAHAM, Praveen Doreswamy NAIDU
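The two decisions per sub-graph — pick a computing unit from a complexity score, then pick that unit's computing layout — can be outlined as follows. The threshold and the per-unit layout table are assumed for illustration; the publication does not specify them:

```python
def pick_unit(complexity, threshold=100.0):
    """Heavy sub-graphs go to the NPU, light ones stay on the CPU
    (assumed two-unit setup and threshold)."""
    return "npu" if complexity > threshold else "cpu"

# Assumed preferred tensor layout per computing unit.
LAYOUTS = {"cpu": "NHWC", "npu": "NCHW"}

def plan(subgraphs):
    """subgraphs: list of (name, complexity score) pairs.
    Returns (name, unit, layout) for each sub-graph."""
    result = []
    for name, complexity in subgraphs:
        unit = pick_unit(complexity)
        result.append((name, unit, LAYOUTS[unit]))
    return result

print(plan([("head", 10.0), ("backbone", 500.0)]))
```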
-
Publication number: 20220076102
Abstract: A method of managing deep neural network (DNN) models on a device is provided. The method includes extracting information associated with each of a plurality of DNN models, identifying, from the information, common information which is common across the plurality of DNN models, separating and storing the common information into a designated location in the device, and controlling at least one DNN model among the plurality of DNN models to access the common information.Type: Application
Filed: June 29, 2020
Publication date: March 10, 2022
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Arun ABRAHAM, Akshay PARASHAR, Suhas P K, Vikram Nelvoy RAJENDIRAN
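Separating out what several models share can be sketched as a set intersection over named blobs: entries identical across all models move to a common store, and each model keeps only its unique remainder. A toy sketch with string stand-ins for model data, not the patented storage scheme:

```python
def separate_common(models):
    """models: model name -> dict of named blobs (strings here).
    Returns (common, per_model): blobs identical across every model
    are stored once in `common`; the rest stay with their model."""
    item_sets = [set(m.items()) for m in models.values()]
    common = dict(set.intersection(*item_sets))
    per_model = {name: {k: v for k, v in m.items() if k not in common}
                 for name, m in models.items()}
    return common, per_model

# Two models sharing a vocabulary blob but with distinct weights.
models = {"detector":   {"vocab": "V1", "weights": "Wd"},
          "recognizer": {"vocab": "V1", "weights": "Wr"}}
common, unique = separate_common(models)
```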
-
Publication number: 20210232921
Abstract: A method, an apparatus, and a system for configuring a neural network across heterogeneous processors are provided. The method includes creating a unified neural network profile for the plurality of processors; receiving at least one request to perform at least one task using the neural network; determining a type of the requested at least one task as one of an asynchronous task and a synchronous task; and parallelizing processing of the neural network across the plurality of processors to perform the requested at least one task, based on the type of the requested at least one task and the created unified neural network profile.
Type: Application
Filed: January 27, 2021
Publication date: July 29, 2021
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Akshay PARASHAR, Arun ABRAHAM, Payal ANAND, Deepthy RAVI, Venkappa MALA, Vikram Nelvoy RAJENDIRAN
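One plausible reading of "parallelizing based on the unified profile" is splitting a synchronous task's workload across processors in proportion to their profiled speeds. A minimal sketch under that assumption; the profile format is invented for illustration:

```python
def split_sync_task(workload, profile):
    """Split a synchronous task across processors in proportion to
    their speeds from a unified profile. (An asynchronous task would
    instead run whole on a single free processor.)"""
    total = sum(profile.values())
    return {proc: workload * speed / total
            for proc, speed in profile.items()}

# A GPU profiled as 3x the CPU's speed gets 3/4 of the work.
shares = split_sync_task(100, {"cpu": 1, "gpu": 3})
print(shares)  # {'cpu': 25.0, 'gpu': 75.0}
```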
-
Publication number: 20200065671
Abstract: A method of processing a neural network model by using a plurality of processors includes allocating at least one slice to each layer from among a plurality of layers included in the neural network model, allocating each layer from among the plurality of layers to the plurality of processors based on respective processing times of the plurality of processors for processing each of the at least one slice, and processing the neural network model by using the plurality of processors based on a result of the allocation.
Type: Application
Filed: August 23, 2019
Publication date: February 27, 2020
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Manas SAHNI, Arun Abraham, Sharan Kumar Allur, Venkappa Mala
-
Publication number: 20200019854
Abstract: The present invention describes a method of accelerating execution of one or more application tasks in a computing device using a machine learning (ML) based model. According to one embodiment, a neural accelerating engine present in the computing device receives an ML input task for execution on the computing device from a user. The neural accelerating engine further retrieves a trained ML model and a corresponding optimal configuration file based on the received ML input task. Also, the current performance status of the computing device for executing the ML input task is obtained. Then, the neural accelerating engine dynamically schedules and dispatches parts of the ML input task to one or more processing units in the computing device for execution based on the retrieved optimal configuration file and the obtained current performance status of the computing device.
Type: Application
Filed: February 23, 2018
Publication date: January 16, 2020
Inventors: Arun ABRAHAM, Suhas Parlathaya KUDRAL, Balaji Srinivas HOLUR, Sarbojit GANGULY, Venkappa MALA, Suneel Kumar SURIMANI, Sharan Kumar ALLUR
-
Publication number: 20140115565
Abstract: A computer-implemented method for detecting test similarity between first and second tests for a software system. The computer-implemented method includes receiving data indicative of respective method call sequences executed during each of the first and second tests, generating, with a processor, a similarity score for the first and second tests based on a comparison of the respective method call sequences, and providing, via a user interface, a result of the comparison based on the similarity score.
Type: Application
Filed: October 18, 2012
Publication date: April 24, 2014
Applicant: MICROSOFT CORPORATION
Inventors: Arun Abraham, Patrick Tseng, Vu Tran, Jing Fan
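Scoring similarity between two recorded method-call sequences can be illustrated with Python's standard `difflib.SequenceMatcher`, whose `ratio()` returns a value in [0, 1]. This is one generic way to compare sequences, not necessarily the scoring used in the application:

```python
from difflib import SequenceMatcher

def similarity_score(calls_a, calls_b):
    """Score in [0, 1] comparing two recorded method-call sequences:
    2 * (matched calls) / (total calls in both sequences)."""
    return SequenceMatcher(None, calls_a, calls_b).ratio()

# Two tests that differ in a single call out of four.
calls_a = ["Setup", "Open", "Read", "Close"]
calls_b = ["Setup", "Open", "Write", "Close"]
print(similarity_score(calls_a, calls_b))  # 0.75 (3 matches of 4+4 calls)
```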
-
Patent number: 8578326
Abstract: Local areas of a visualized modeling language diagram are viewable at different levels of detail without losing information such as model elements and their connectivity. Multiple elements are associated with a group element, which has a visual portion derived from the appearance of a group member element. Connectors between group member elements and non-member elements are suppressed in favor of replacement connectors between the group element and the non-member element(s). The integrity of incoming and outgoing connections to the group is maintained relative to the rest of the model. Ungrouping elements restores the elements to their original state. Grouping can be applied locally to one or more parts of the visual model.
Type: Grant
Filed: May 28, 2009
Date of Patent: November 5, 2013
Assignee: Microsoft Corporation
Inventors: Patrick S. Tseng, Durham Goode, John Joseph Jordan, Bernie Tschirren, Arun Abraham, Abhishek Shah, Andrew Jude Byrne, Suhail Dutta
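The connector-replacement behavior — edges touching group members get redirected to the group element, and edges that become internal are suppressed — can be sketched over a plain edge set. Keeping the original edge set aside is what makes ungrouping a lossless restore. Names and representation are illustrative assumptions:

```python
def group_elements(edges, members, group):
    """Redirect connectors touching member elements to the group
    element; connectors that become internal to the group are
    suppressed. The original `edges` set is left untouched, so
    ungrouping is simply a matter of restoring it."""
    grouped = set()
    for a, b in edges:
        a2 = group if a in members else a
        b2 = group if b in members else b
        if a2 != b2:  # drop connectors now inside the group
            grouped.add((a2, b2))
    return grouped

# Collapse two services into one group element in a small diagram.
edges = {("client", "svcA"), ("svcA", "svcB"), ("svcB", "db")}
members = {"svcA", "svcB"}
print(group_elements(edges, members, "Services"))
```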
-
Publication number: 20100251187
Abstract: Local areas of a visualized modeling language diagram are viewable at different levels of detail without losing information such as model elements and their connectivity. Multiple elements are associated with a group element, which has a visual portion derived from the appearance of a group member element. Connectors between group member elements and non-member elements are suppressed in favor of replacement connectors between the group element and the non-member element(s). The integrity of incoming and outgoing connections to the group is maintained relative to the rest of the model. Ungrouping elements restores the elements to their original state. Grouping can be applied locally to one or more parts of the visual model.
Type: Application
Filed: May 28, 2009
Publication date: September 30, 2010
Applicant: Microsoft Corporation
Inventors: Patrick S. Tseng, Durham Goode, John Joseph Jordan, Bernie Tschirren, Arun Abraham, Abhishek Shah, Andrew Jude Byrne, Suhail Dutta