Patents by Inventor Nikhil Bendre
Nikhil Bendre has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11620571
Abstract: A network system may include a plurality of trainer devices and a computing system disposed within a remote network management platform. The computing system may be configured to: receive, from a client device of a managed network, information indicating (i) training data that is to be used as basis for generating a machine learning (ML) model and (ii) a target variable to be predicted using the ML model; transmit an ML training request for reception by one of the plurality of trainer devices; provide the training data to a particular trainer device executing a particular ML trainer process that is serving the ML training request; receive, from the particular trainer device, the ML model that is generated based on the provided training data and according to the particular ML trainer process; predict the target variable using the ML model; and transmit, to the client device, information indicating the target variable.
Type: Grant
Filed: July 9, 2019
Date of Patent: April 4, 2023
Assignee: ServiceNow, Inc.
Inventors: Nikhil Bendre, Fernando Ros, Kannan Govindarajan, Baskar Jayaraman, Aniruddha Thakur, Sriram Palapudi, Firat Karakusoglu
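The abstract describes a request/response training flow: a client supplies training data and a target variable, the platform hands the job to one of several trainer devices, receives back a trained model, and returns predictions to the client. The sketch below is one illustrative shape for that flow; every class and method name (`TrainerDevice`, `ComputingSystem`, `handle_request`) is invented here, and the mean-predicting "model" is a deliberate stand-in so the example stays self-contained.

```python
# Illustrative sketch of the claimed training workflow.
# All names are hypothetical, not taken from the patent.
from dataclasses import dataclass


@dataclass
class TrainerDevice:
    """One of the plurality of trainer devices; runs an ML trainer process."""
    name: str

    def train(self, training_data, target_variable):
        # Stand-in for the real ML trainer process: "learn" the mean of the
        # target column and return a model that always predicts that mean.
        values = [row[target_variable] for row in training_data]
        mean = sum(values) / len(values)
        return lambda row: mean


@dataclass
class ComputingSystem:
    """Computing system disposed within the remote network management platform."""
    trainers: list

    def handle_request(self, training_data, target_variable, new_row):
        # 1. Transmit an ML training request toward one trainer device.
        trainer = self.trainers[0]
        # 2. Provide the training data; receive the generated ML model.
        model = trainer.train(training_data, target_variable)
        # 3. Predict the target variable and return it to the client device.
        return model(new_row)


system = ComputingSystem([TrainerDevice("trainer-1"), TrainerDevice("trainer-2")])
data = [{"cpu": 10, "latency": 1.0}, {"cpu": 30, "latency": 3.0}]
prediction = system.handle_request(data, "latency", {"cpu": 20})
print(prediction)  # 2.0
```

The point of the structure, per the abstract, is that training is delegated to a separate trainer device rather than performed by the computing system itself.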
-
Patent number: 11082288
Abstract: Fault tolerance techniques for a plurality of nodes executing application thread groups include executing at least a portion of a first application thread group based on a delegation by a first node, wherein the first node delegates an execution of the first application thread group amongst the plurality of nodes and has a highest priority indicated by an ordered priority of the plurality of nodes. A failure of the first node can be identified based on the first node failing to respond to a message sent to it. A second node can then be identified as having a next highest priority indicated by the ordered priority such that the second node can delegate an execution of a second application thread group amongst the plurality of nodes.
Type: Grant
Filed: March 18, 2019
Date of Patent: August 3, 2021
Assignee: ServiceNow, Inc.
Inventors: Nikhil Bendre, Jared Laethem
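The failover scheme in this abstract reduces to: walk the priority-ordered node list and let the highest-priority node that still answers messages act as the delegator for thread-group execution. A minimal sketch, assuming invented names (`pick_delegator`, `is_responsive`) and a simulated liveness check in place of real message-passing:

```python
# Hypothetical sketch of the priority-ordered failover in the abstract;
# the names and the liveness check are invented for illustration.
def pick_delegator(nodes_by_priority, is_responsive):
    """Return the highest-priority node that responds to a message.

    nodes_by_priority: node ids ordered from highest to lowest priority.
    is_responsive: callable standing in for "the node answered a message".
    """
    for node in nodes_by_priority:
        if is_responsive(node):
            return node  # this node delegates application thread groups
    raise RuntimeError("no responsive node available")


# Node "a" has the highest priority; simulate its failure to respond.
nodes = ["a", "b", "c"]
alive = {"a": False, "b": True, "c": True}
delegator = pick_delegator(nodes, lambda n: alive[n])
print(delegator)  # "b": the next-highest-priority node takes over delegation
```

As the abstract states, failure is inferred from a missed reply rather than an explicit shutdown signal, which is why the sketch models liveness as a message-response predicate.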
-
Publication number: 20200005187
Abstract: A network system may include a plurality of trainer devices and a computing system disposed within a remote network management platform. The computing system may be configured to: receive, from a client device of a managed network, information indicating (i) training data that is to be used as basis for generating a machine learning (ML) model and (ii) a target variable to be predicted using the ML model; transmit an ML training request for reception by one of the plurality of trainer devices; provide the training data to a particular trainer device executing a particular ML trainer process that is serving the ML training request; receive, from the particular trainer device, the ML model that is generated based on the provided training data and according to the particular ML trainer process; predict the target variable using the ML model; and transmit, to the client device, information indicating the target variable.
Type: Application
Filed: July 9, 2019
Publication date: January 2, 2020
Inventors: Nikhil Bendre, Fernando Ros, Kannan Govindarajan, Baskar Jayaraman, Aniruddha Thakur, Sriram Palapudi, Firat Karakusoglu
-
Patent number: 10445661
Abstract: A network system may include a plurality of trainer devices and a computing system disposed within a remote network management platform. The computing system may be configured to: receive, from a client device of a managed network, information indicating (i) training data that is to be used as basis for generating a machine learning (ML) model and (ii) a target variable to be predicted using the ML model; transmit an ML training request for reception by one of the plurality of trainer devices; provide the training data to a particular trainer device executing a particular ML trainer process that is serving the ML training request; receive, from the particular trainer device, the ML model that is generated based on the provided training data and according to the particular ML trainer process; predict the target variable using the ML model; and transmit, to the client device, information indicating the target variable.
Type: Grant
Filed: September 27, 2017
Date of Patent: October 15, 2019
Assignee: ServiceNow, Inc.
Inventors: Nikhil Bendre, Fernando Ros, Kannan Govindarajan, Baskar Jayaraman, Aniruddha Thakur, Sriram Palapudi, Firat Karakusoglu
-
Publication number: 20190280915
Abstract: Fault tolerance techniques for a plurality of nodes executing application thread groups include executing at least a portion of a first application thread group based on a delegation by a first node, wherein the first node delegates an execution of the first application thread group amongst the plurality of nodes and has a highest priority indicated by an ordered priority of the plurality of nodes. A failure of the first node can be identified based on the first node failing to respond to a message sent to it. A second node can then be identified as having a next highest priority indicated by the ordered priority such that the second node can delegate an execution of a second application thread group amongst the plurality of nodes.
Type: Application
Filed: March 18, 2019
Publication date: September 12, 2019
Inventors: Nikhil Bendre, Jared Laethem
-
Patent number: 10380504
Abstract: A network system may include a plurality of trainer devices and a computing system disposed within a remote network management platform. The computing system may be configured to: receive, from a client device of a managed network, information indicating (i) training data that is to be used as basis for generating a machine learning (ML) model and (ii) a target variable to be predicted using the ML model; transmit an ML training request for reception by one of the plurality of trainer devices; provide the training data to a particular trainer device executing a particular ML trainer process that is serving the ML training request; receive, from the particular trainer device, the ML model that is generated based on the provided training data and according to the particular ML trainer process; predict the target variable using the ML model; and transmit, to the client device, information indicating the target variable.
Type: Grant
Filed: December 20, 2017
Date of Patent: August 13, 2019
Assignee: ServiceNow, Inc.
Inventors: Nikhil Bendre, Fernando Ros, Kannan Govindarajan, Baskar Jayaraman, Aniruddha Thakur, Sriram Palapudi, Firat Karakusoglu
-
Patent number: 10270646
Abstract: Fault tolerance techniques for a plurality of nodes executing application thread groups include executing at least a portion of a first application thread group based on a delegation by a first node, wherein the first node delegates an execution of the first application thread group amongst the plurality of nodes and has a highest priority indicated by an ordered priority of the plurality of nodes. A failure of the first node can be identified based on the first node failing to respond to a message sent to it. A second node can then be identified as having a next highest priority indicated by the ordered priority such that the second node can delegate an execution of a second application thread group amongst the plurality of nodes.
Type: Grant
Filed: October 24, 2016
Date of Patent: April 23, 2019
Assignee: ServiceNow, Inc.
Inventors: Nikhil Bendre, Jared Laethem
-
Publication number: 20180322415
Abstract: A network system may include a plurality of trainer devices and a computing system disposed within a remote network management platform. The computing system may be configured to: receive, from a client device of a managed network, information indicating (i) training data that is to be used as basis for generating a machine learning (ML) model and (ii) a target variable to be predicted using the ML model; transmit an ML training request for reception by one of the plurality of trainer devices; provide the training data to a particular trainer device executing a particular ML trainer process that is serving the ML training request; receive, from the particular trainer device, the ML model that is generated based on the provided training data and according to the particular ML trainer process; predict the target variable using the ML model; and transmit, to the client device, information indicating the target variable.
Type: Application
Filed: September 27, 2017
Publication date: November 8, 2018
Inventors: Nikhil Bendre, Fernando Ros, Kannan Govindarajan, Baskar Jayaraman, Aniruddha Thakur, Sriram Palapudi, Firat Karakusoglu
-
Publication number: 20180322417
Abstract: A network system may include a plurality of trainer devices and a computing system disposed within a remote network management platform. The computing system may be configured to: receive, from a client device of a managed network, information indicating (i) training data that is to be used as basis for generating a machine learning (ML) model and (ii) a target variable to be predicted using the ML model; transmit an ML training request for reception by one of the plurality of trainer devices; provide the training data to a particular trainer device executing a particular ML trainer process that is serving the ML training request; receive, from the particular trainer device, the ML model that is generated based on the provided training data and according to the particular ML trainer process; predict the target variable using the ML model; and transmit, to the client device, information indicating the target variable.
Type: Application
Filed: December 20, 2017
Publication date: November 8, 2018
Inventors: Nikhil Bendre, Fernando Ros, Kannan Govindarajan, Baskar Jayaraman, Aniruddha Thakur, Sriram Palapudi, Firat Karakusoglu
-
Publication number: 20180115456
Abstract: Fault tolerance techniques for a plurality of nodes executing application thread groups include executing at least a portion of a first application thread group based on a delegation by a first node, wherein the first node delegates an execution of the first application thread group amongst the plurality of nodes and has a highest priority indicated by an ordered priority of the plurality of nodes. A failure of the first node can be identified based on the first node failing to respond to a message sent to it. A second node can then be identified as having a next highest priority indicated by the ordered priority such that the second node can delegate an execution of a second application thread group amongst the plurality of nodes.
Type: Application
Filed: October 24, 2016
Publication date: April 26, 2018
Inventors: Nikhil Bendre, Jared Laethem