Patents by Inventor Pravin Shinde
Pravin Shinde has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11959182
Abstract: Disclosed herein are precursor compounds, composite electrodes comprising the same, and methods of making and use thereof.
Type: Grant
Filed: October 26, 2021
Date of Patent: April 16, 2024
Assignees: The Board of Trustees of the University of Alabama; The Administrators of the Tulane Educational Fund
Inventors: Pravin Shinde, James Donahue, Patricia R. Fontenot, Arunava Gupta, Shanlin Pan
-
Publication number: 20240084053
Abstract: Disclosed are methods for upcycling polyvinyl chloride (PVC) that involve dissolving PVC in an organic solvent and contacting the PVC with a base, thereby providing a partially dehydrochlorinated PVC. Polymers made by the disclosed methods, and articles thereof, are also disclosed.
Type: Application
Filed: November 14, 2023
Publication date: March 14, 2024
Inventors: Jason Edward Bara, Pravin Shinde, Ali A. Alshaikh, Kathryn E. O'Harra
-
Patent number: 11657256
Abstract: Embodiments use a hierarchy of machine learning models to predict datacenter behavior at multiple hardware levels of a datacenter without accessing operating-system-generated hardware utilization information. The accuracy of higher-level models in the hierarchy is increased by including, as input to the higher-level models, hardware utilization predictions from lower-level models. The hierarchy includes server utilization models and workload/OS prediction models, which produce predictions at the server-device level of a datacenter, as well as top-of-rack switch models and backbone switch models, which produce predictions at higher levels of the datacenter. These models receive, as input, hardware utilization information from non-OS sources. Based on datacenter-level network utilization predictions from the hierarchy of models, the datacenter automatically configures its hardware to avoid any predicted over-utilization.
Type: Grant
Filed: July 18, 2022
Date of Patent: May 23, 2023
Assignee: Oracle International Corporation
Inventors: Pravin Shinde, Felix Schmidt, Onur Kocberber
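The abstract above describes lower-level model predictions feeding into higher-level models. The sketch below illustrates that flow with toy stand-in predictors; the class names, telemetry fields, and the linear blending are illustrative assumptions, not the patented model types.

```python
# Sketch of the hierarchical prediction idea: a higher-level (rack switch)
# model consumes the lower-level (server) models' predictions as input.
# All model logic here is a hypothetical stand-in.

class ServerModel:
    """Predicts a server's utilization from non-OS telemetry
    (e.g. power draw, fan speed); here a toy averaging stub."""
    def predict(self, telemetry):
        # telemetry: dict of raw sensor readings scaled to [0, 1]
        return sum(telemetry.values()) / len(telemetry)

class RackSwitchModel:
    """Higher-level model: folds the lower-level servers'
    utilization predictions into its own input features."""
    def predict(self, port_stats, server_predictions):
        base = sum(port_stats) / len(port_stats)
        # include lower-level predictions to improve accuracy
        return 0.5 * base + 0.5 * max(server_predictions)

servers = [ServerModel(), ServerModel()]
server_preds = [m.predict({"power": 0.6, "fan": 0.8}) for m in servers]
rack = RackSwitchModel()
rack_pred = rack.predict([0.4, 0.6], server_preds)

# If the datacenter-level prediction exceeds a threshold, reconfigure
if rack_pred > 0.9:
    print("reconfigure hardware to avoid predicted over-utilization")
```

The key point is only the data flow: predictions from the server level become features at the switch level, which is how the abstract says higher-level accuracy is increased.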
-
Publication number: 20220351023
Abstract: Embodiments use a hierarchy of machine learning models to predict datacenter behavior at multiple hardware levels of a datacenter without accessing operating-system-generated hardware utilization information. The accuracy of higher-level models in the hierarchy is increased by including, as input to the higher-level models, hardware utilization predictions from lower-level models. The hierarchy includes server utilization models and workload/OS prediction models, which produce predictions at the server-device level of a datacenter, as well as top-of-rack switch models and backbone switch models, which produce predictions at higher levels of the datacenter. These models receive, as input, hardware utilization information from non-OS sources. Based on datacenter-level network utilization predictions from the hierarchy of models, the datacenter automatically configures its hardware to avoid any predicted over-utilization.
Type: Application
Filed: July 18, 2022
Publication date: November 3, 2022
Inventors: Pravin Shinde, Felix Schmidt, Onur Kocberber
-
Patent number: 11443166
Abstract: Embodiments use a hierarchy of machine learning models to predict datacenter behavior at multiple hardware levels of a datacenter without accessing operating-system-generated hardware utilization information. The accuracy of higher-level models in the hierarchy is increased by including, as input to the higher-level models, hardware utilization predictions from lower-level models. The hierarchy includes server utilization models and workload/OS prediction models, which produce predictions at the server-device level of a datacenter, as well as top-of-rack switch models and backbone switch models, which produce predictions at higher levels of the datacenter. These models receive, as input, hardware utilization information from non-OS sources. Based on datacenter-level network utilization predictions from the hierarchy of models, the datacenter automatically configures its hardware to avoid any predicted over-utilization.
Type: Grant
Filed: October 29, 2018
Date of Patent: September 13, 2022
Assignee: Oracle International Corporation
Inventors: Pravin Shinde, Felix Schmidt, Onur Kocberber
-
Publication number: 20220042187
Abstract: Disclosed herein are precursor compounds, composite electrodes comprising the same, and methods of making and use thereof.
Type: Application
Filed: October 26, 2021
Publication date: February 10, 2022
Inventors: Pravin Shinde, James Donahue, Patricia R. Fontenot, Arunava Gupta, Shanlin Pan
-
Patent number: 11186917
Abstract: Disclosed herein are precursor compounds, composite electrodes comprising the same, and methods of making and use thereof.
Type: Grant
Filed: January 28, 2019
Date of Patent: November 30, 2021
Assignees: The Board of Trustees of the University of Alabama; The Administrators of the Tulane Educational Fund
Inventors: Pravin Shinde, James Donahue, Patricia R. Fontenot, Arunava Gupta, Shanlin Pan
-
Patent number: 10917203
Abstract: Embodiments use Bayesian techniques to efficiently estimate the bit error rates (BERs) of cables in a computer network at a customizable level of confidence. Specifically, a plurality of probability records is maintained for a given cable in a computer system, where each probability record is associated with a hypothetical BER for the cable and reflects the probability that the cable has that hypothetical BER. At configurable time intervals, the probability records are updated using statistics gathered from a switch port connected to the cable. To estimate the BER of the cable at a given confidence level, embodiments determine which probability record is associated with a probability mass that indicates the confidence level; the estimate for the cable's BER is the hypothetical BER associated with the indicated probability mass. Embodiments store the estimate in memory and use it to aid in maintaining the computer system.
Type: Grant
Filed: May 17, 2019
Date of Patent: February 9, 2021
Assignee: Oracle International Corporation
Inventors: Stuart Wray, Felix Schmidt, Craig Schelp, Pravin Shinde, Akhilesh Singhania, Nipun Agarwal
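The Bayesian scheme in the abstract above (probability records over hypothetical BERs, updated from port counters, with the estimate read off at a confidence level) can be sketched as follows. The BER grid, the uniform prior, and the binomial likelihood are illustrative assumptions; the patent does not fix these choices.

```python
# Hypothetical grid of candidate BERs with a uniform prior; each
# (hypothesis, probability) pair plays the role of a "probability record".
hypotheses = [1e-12, 1e-10, 1e-8, 1e-6]
probs = [1.0 / len(hypotheses)] * len(hypotheses)

def update(probs, bits_observed, errors_observed):
    """Bayesian update of each probability record using a binomial
    likelihood built from switch-port bit/error counters."""
    posterior = []
    for ber, p in zip(hypotheses, probs):
        likelihood = (ber ** errors_observed) * \
                     ((1 - ber) ** (bits_observed - errors_observed))
        posterior.append(p * likelihood)
    total = sum(posterior)
    return [p / total for p in posterior]  # renormalize

def estimate(probs, confidence):
    """Smallest hypothetical BER whose cumulative probability mass
    reaches the requested confidence level."""
    mass = 0.0
    for ber, p in zip(hypotheses, probs):
        mass += p
        if mass >= confidence:
            return ber
    return hypotheses[-1]

# e.g. one interval's counters: 10^9 bits observed with 5 bit errors
probs = update(probs, 10**9, 5)
ber_estimate = estimate(probs, 0.95)
```

Repeating `update` each interval accumulates evidence, so the posterior mass concentrates on the hypothesis closest to the cable's true BER.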
-
Patent number: 10892961
Abstract: Herein are computerized techniques for autonomous and artificially intelligent administration of a computer cloud health monitoring system. In an embodiment, an orchestration computer automatically detects the current state of network elements of a computer network by processing a) a network plan that defines the topology of the computer network, and b) performance statistics of the network elements. The network elements include computers that each host one or more virtual execution environments. Each virtual execution environment hosts analysis logic that transforms raw performance data of a network element into a portion of the performance statistics. For each computer, a configuration specification for each of its virtual execution environments is automatically generated based on the network plan and the current state of the computer network. At least one virtual execution environment is automatically tuned and/or re-provisioned based on a generated configuration specification.
Type: Grant
Filed: February 8, 2019
Date of Patent: January 12, 2021
Assignee: Oracle International Corporation
Inventors: Onur Kocberber, Felix Schmidt, Craig Schelp, Pravin Shinde
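The orchestration step above (deriving a per-environment configuration specification from a network plan plus current statistics) can be sketched as below. The plan layout, the CPU threshold, and the throttling policy are all illustrative assumptions, not the patented generation logic.

```python
# Hypothetical network plan: virtual execution environments (VEEs) per host,
# plus current performance statistics gathered from those hosts.
network_plan = {
    "host1": ["collector-a", "collector-b"],
}
perf_stats = {"host1": {"cpu": 0.85}}

def generate_spec(host, vee, plan, stats):
    """Derive one VEE's configuration specification from the plan and the
    current state; throttle sampling when the host is busy (an
    illustrative policy for tuning/re-provisioning decisions)."""
    busy = stats[host]["cpu"] > 0.8
    return {
        "vee": vee,
        "sample_interval_s": 60 if busy else 10,  # back off under load
        "peers": [v for v in plan[host] if v != vee],
    }

specs = [generate_spec("host1", v, network_plan, perf_stats)
         for v in network_plan["host1"]]
```

Each generated spec would then be applied by tuning or re-provisioning the corresponding virtual execution environment.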
-
Publication number: 20200366428
Abstract: Embodiments use Bayesian techniques to efficiently estimate the bit error rates (BERs) of cables in a computer network at a customizable level of confidence. Specifically, a plurality of probability records is maintained for a given cable in a computer system, where each probability record is associated with a hypothetical BER for the cable and reflects the probability that the cable has that hypothetical BER. At configurable time intervals, the probability records are updated using statistics gathered from a switch port connected to the cable. To estimate the BER of the cable at a given confidence level, embodiments determine which probability record is associated with a probability mass that indicates the confidence level; the estimate for the cable's BER is the hypothetical BER associated with the indicated probability mass. Embodiments store the estimate in memory and use it to aid in maintaining the computer system.
Type: Application
Filed: May 17, 2019
Publication date: November 19, 2020
Inventors: Stuart Wray, Felix Schmidt, Craig Schelp, Pravin Shinde, Akhilesh Singhania, Nipun Agarwal
-
Patent number: 10795690
Abstract: Herein are computerized techniques for the generation, costing/scoring, optimal selection, and reporting of intermediate configurations for a datacenter change plan. In an embodiment, a computer receives a current configuration of a datacenter and a target configuration. New configurations are generated based on the current configuration. A cost function calculates the cost of each new configuration by measuring the logical difference between the new configuration and the target configuration. The new configuration with the least cost is selected. When this configuration satisfies the target configuration, the datacenter is reconfigured based on it; otherwise, the process is (e.g., iteratively) repeated with the selected configuration used as the current configuration. In embodiments, new configurations are generated randomly, greedily, and/or manually.
Type: Grant
Filed: October 30, 2018
Date of Patent: October 6, 2020
Assignee: Oracle International Corporation
Inventors: Pravin Shinde, Felix Schmidt, Craig Schelp
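The iterative generate/cost/select loop above can be sketched as a greedy search. Representing configurations as sets of (key, value) settings and costing by symmetric difference are illustrative assumptions; the patent's cost function and configuration encoding are not specified here.

```python
# Toy sketch: a configuration is a set of (key, value) settings;
# cost = size of the symmetric difference to the target (a hypothetical
# stand-in for the patent's "logical difference" cost function).

def cost(config, target):
    return len(config.symmetric_difference(target))

def neighbors(config, target):
    """Generate candidate new configurations, each changing one setting
    (greedy generation; random or manual generation is also possible)."""
    candidates = []
    for item in target - config:
        candidates.append(config | {item})   # add a missing setting
    for item in config - target:
        candidates.append(config - {item})   # drop an extra setting
    return candidates

def plan(current, target):
    """Iteratively select the least-cost intermediate configuration
    until the target configuration is satisfied."""
    steps = []
    while cost(current, target) > 0:
        current = min(neighbors(current, target), key=lambda c: cost(c, target))
        steps.append(current)
    return steps

current = {("rack1", "fw_v1"), ("rack2", "fw_v1")}
target = {("rack1", "fw_v2"), ("rack2", "fw_v1")}
steps = plan(current, target)
```

Each element of `steps` is one intermediate configuration the datacenter could be moved through on the way to the target.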
-
Patent number: 10768982
Abstract: Herein are techniques for the analysis of data streams. In an embodiment, a computer associates each software actor with data streams. Each software actor has its own backlog queue of data to analyze. In response to receiving some stream content, and based on that content, data is distributed to some software actors. In response to determining that the data satisfies the completeness criteria of a particular software actor, an indication of the data is appended onto that actor's backlog queue. The particular software actor is reset to an initial state by loading an execution snapshot of a previous initial execution of an embedded virtual machine. Based on the particular software actor, execution of the snapshot is resumed to dequeue and process the indication of the data from the actor's backlog queue and generate a result.
Type: Grant
Filed: September 19, 2018
Date of Patent: September 8, 2020
Assignee: Oracle International Corporation
Inventors: Andrew Brownsword, Tayler Hetherington, Pavan Chandrashekar, Akhilesh Singhania, Stuart Wray, Pravin Shinde, Felix Schmidt, Craig Schelp, Onur Kocberber, Juan Fernandez Peinador, Rod Reddekopp, Manel Fernandez Gomez, Nipun Agarwal
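The actor pattern above (per-actor backlog queue, completeness criteria, reset from a snapshot before processing) can be sketched as below. The real system resets actors by loading an execution snapshot of an embedded virtual machine; here the "snapshot" is just a copied state dict, an illustrative stand-in.

```python
from collections import deque
from copy import deepcopy

class Actor:
    """Minimal sketch of a software actor with its own backlog queue."""
    def __init__(self, streams, complete):
        self.streams = set(streams)      # streams this actor is associated with
        self.complete = complete         # completeness criteria for buffered data
        self.backlog = deque()           # per-actor backlog queue
        self.snapshot = {"count": 0}     # stand-in for a VM execution snapshot
        self.buffer = []

    def offer(self, stream, data):
        """Distribute incoming stream content to this actor."""
        if stream not in self.streams:
            return
        self.buffer.append(data)
        if self.complete(self.buffer):   # criteria satisfied: enqueue indication
            self.backlog.append(list(self.buffer))
            self.buffer = []

    def run_once(self):
        """Reset to the initial state, then dequeue and process one batch."""
        state = deepcopy(self.snapshot)  # "load the execution snapshot"
        batch = self.backlog.popleft()
        state["count"] = len(batch)      # trivial processing to produce a result
        return state["count"]

actor = Actor(streams={"s1"}, complete=lambda buf: len(buf) >= 2)
for item in ["a", "b", "c", "d"]:
    actor.offer("s1", item)
result = actor.run_once()
```

Resetting from a pristine snapshot before each batch is what lets every unit of work start from an identical initial state, regardless of what earlier batches did.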
-
Publication number: 20200259722
Abstract: Herein are computerized techniques for autonomous and artificially intelligent administration of a computer cloud health monitoring system. In an embodiment, an orchestration computer automatically detects the current state of network elements of a computer network by processing a) a network plan that defines the topology of the computer network, and b) performance statistics of the network elements. The network elements include computers that each host one or more virtual execution environments. Each virtual execution environment hosts analysis logic that transforms raw performance data of a network element into a portion of the performance statistics. For each computer, a configuration specification for each of its virtual execution environments is automatically generated based on the network plan and the current state of the computer network. At least one virtual execution environment is automatically tuned and/or re-provisioned based on a generated configuration specification.
Type: Application
Filed: February 8, 2019
Publication date: August 13, 2020
Inventors: Onur Kocberber, Felix Schmidt, Craig Schelp, Pravin Shinde
-
Publication number: 20200134423
Abstract: Embodiments use a hierarchy of machine learning models to predict datacenter behavior at multiple hardware levels of a datacenter without accessing operating-system-generated hardware utilization information. The accuracy of higher-level models in the hierarchy is increased by including, as input to the higher-level models, hardware utilization predictions from lower-level models. The hierarchy includes server utilization models and workload/OS prediction models, which produce predictions at the server-device level of a datacenter, as well as top-of-rack switch models and backbone switch models, which produce predictions at higher levels of the datacenter. These models receive, as input, hardware utilization information from non-OS sources. Based on datacenter-level network utilization predictions from the hierarchy of models, the datacenter automatically configures its hardware to avoid any predicted over-utilization.
Type: Application
Filed: October 29, 2018
Publication date: April 30, 2020
Inventors: Pravin Shinde, Felix Schmidt, Onur Kocberber
-
Publication number: 20200133688
Abstract: Herein are computerized techniques for the generation, costing/scoring, optimal selection, and reporting of intermediate configurations for a datacenter change plan. In an embodiment, a computer receives a current configuration of a datacenter and a target configuration. New configurations are generated based on the current configuration. A cost function calculates the cost of each new configuration by measuring the logical difference between the new configuration and the target configuration. The new configuration with the least cost is selected. When this configuration satisfies the target configuration, the datacenter is reconfigured based on it; otherwise, the process is (e.g., iteratively) repeated with the selected configuration used as the current configuration. In embodiments, new configurations are generated randomly, greedily, and/or manually.
Type: Application
Filed: October 30, 2018
Publication date: April 30, 2020
Inventors: Pravin Shinde, Felix Schmidt, Craig Schelp
-
Publication number: 20200089529
Abstract: Herein are techniques for the analysis of data streams. In an embodiment, a computer associates each software actor with data streams. Each software actor has its own backlog queue of data to analyze. In response to receiving some stream content, and based on that content, data is distributed to some software actors. In response to determining that the data satisfies the completeness criteria of a particular software actor, an indication of the data is appended onto that actor's backlog queue. The particular software actor is reset to an initial state by loading an execution snapshot of a previous initial execution of an embedded virtual machine. Based on the particular software actor, execution of the snapshot is resumed to dequeue and process the indication of the data from the actor's backlog queue and generate a result.
Type: Application
Filed: September 19, 2018
Publication date: March 19, 2020
Inventors: Andrew Brownsword, Tayler Hetherington, Pavan Chandrashekar, Akhilesh Singhania, Stuart Wray, Pravin Shinde, Felix Schmidt, Craig Schelp, Onur Kocberber, Juan Fernandez Peinador, Rod Reddekopp, Manel Fernandez Gomez, Nipun Agarwal
-
Publication number: 20190233953
Abstract: Disclosed herein are precursor compounds, composite electrodes comprising the same, and methods of making and use thereof.
Type: Application
Filed: January 28, 2019
Publication date: August 1, 2019
Inventors: Pravin Shinde, James Donahue, Patricia Fontenot, Arunava Gupta, Shanlin Pan
-
Patent number: 8380765
Abstract: A mechanism is provided for group communications using a MULTI-PIPE synthetic file system. A master application creates a multi-pipe synthetic file in the MULTI-PIPE synthetic file system, indicating the multi-pipe operation to be performed. The master application then writes a header-control block of the multi-pipe synthetic file specifying at least one of a multi-pipe synthetic file system name, a message type, a message size, a specific destination, or a specification of the multi-pipe operation. Any other application participating in the group communications then opens the same multi-pipe synthetic file. A MULTI-PIPE file system module implements the multi-pipe operation identified by the master application. The master application and the other applications then either read or write operation messages to the multi-pipe synthetic file, and the MULTI-PIPE synthetic file system module performs the appropriate actions.
Type: Grant
Filed: July 31, 2012
Date of Patent: February 19, 2013
Assignee: International Business Machines Corporation
Inventors: Pravin Shinde, Eric Van Hensbergen
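The interaction above (master creates the multi-pipe file and writes a header-control block; other participants open the same file and exchange messages; a file system module performs the operation) can be sketched with an in-memory stand-in. A real implementation would expose these as synthetic files served by a file system module, not Python objects; the method names and header fields below follow the abstract but are otherwise illustrative.

```python
class MultiPipeFS:
    """In-memory stand-in for the MULTI-PIPE synthetic file system."""
    def __init__(self):
        self.pipes = {}

    def create(self, name, header):
        # master application creates the multi-pipe synthetic file and
        # writes its header-control block (operation, message type, size)
        self.pipes[name] = {"header": header, "messages": []}

    def open(self, name):
        # other participating applications open the same multi-pipe file
        return self.pipes[name]

    def write(self, name, message):
        pipe = self.pipes[name]
        # the "file system module" performs the operation named in the header
        if pipe["header"]["operation"] == "broadcast":
            pipe["messages"].append(message)

    def read_all(self, name):
        return list(self.pipes[name]["messages"])

fs = MultiPipeFS()
fs.create("job42", {"operation": "broadcast",
                    "message_type": "text", "message_size": 64})
fs.open("job42")
fs.write("job42", "step complete")
messages = fs.read_all("job42")
```

The design point is that group communication is driven entirely through ordinary file operations (create, open, read, write), with the collective behavior determined by the header-control block.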
-
Patent number: 8375070
Abstract: A mechanism is provided for group communications using a MULTI-PIPE synthetic file system. A master application creates a multi-pipe synthetic file in the MULTI-PIPE synthetic file system, indicating the multi-pipe operation to be performed. The master application then writes a header-control block of the multi-pipe synthetic file specifying at least one of a multi-pipe synthetic file system name, a message type, a message size, a specific destination, or a specification of the multi-pipe operation. Any other application participating in the group communications then opens the same multi-pipe synthetic file. A MULTI-PIPE file system module implements the multi-pipe operation identified by the master application. The master application and the other applications then either read or write operation messages to the multi-pipe synthetic file, and the MULTI-PIPE synthetic file system module performs the appropriate actions.
Type: Grant
Filed: September 2, 2010
Date of Patent: February 12, 2013
Assignee: International Business Machines Corporation
Inventors: Pravin Shinde, Eric Van Hensbergen
-
Publication number: 20120303681
Abstract: A mechanism is provided for group communications using a MULTI-PIPE synthetic file system. A master application creates a multi-pipe synthetic file in the MULTI-PIPE synthetic file system, indicating the multi-pipe operation to be performed. The master application then writes a header-control block of the multi-pipe synthetic file specifying at least one of a multi-pipe synthetic file system name, a message type, a message size, a specific destination, or a specification of the multi-pipe operation. Any other application participating in the group communications then opens the same multi-pipe synthetic file. A MULTI-PIPE file system module implements the multi-pipe operation identified by the master application. The master application and the other applications then either read or write operation messages to the multi-pipe synthetic file, and the MULTI-PIPE synthetic file system module performs the appropriate actions.
Type: Application
Filed: July 31, 2012
Publication date: November 29, 2012
Applicant: International Business Machines Corporation
Inventors: Pravin Shinde, Eric Van Hensbergen