Patents by Inventor Kushagra V. Vaid
Kushagra V. Vaid has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

Patent number: 9436517
Abstract: Reliability-aware scheduling of processing jobs on one or more processing entities is based on reliability scores assigned to processing entities and minimum acceptable reliability scores of processing jobs. The reliability scores of processing entities are based on independently derived statistical reliability models as applied to reliability data already available from modern computing hardware. Reliability scores of processing entities are continually updated based upon real-time reliability data, as well as prior reliability scores, which are weighted in accordance with the statistical reliability models being utilized. Individual processing jobs specify reliability requirements from which the minimum acceptable reliability score is determined. Such jobs are scheduled on processing entities whose reliability score is greater than or equal to the minimum acceptable reliability score for such jobs. Already scheduled jobs can be rescheduled on other processing entities if reliability scores change.
Type: Grant
Filed: December 28, 2012
Date of Patent: September 6, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Woongki Baek, Sriram Govindan, Sriram Sankar, Kushagra V. Vaid, Badriddine Khessib
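
The abstract above describes a score-and-match loop: entities carry continually updated reliability scores, and a job is placed only on an entity whose score meets the job's minimum. Below is a minimal Python sketch of that loop, not the patented implementation; the class names, the exponential-smoothing score update, and the ALPHA constant are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Optional

ALPHA = 0.7  # assumed weight given to the prior score when blending in new data


@dataclass
class ProcessingEntity:
    name: str
    reliability_score: float                 # 0.0 (unreliable) .. 1.0 (highly reliable)
    assigned_jobs: List[str] = field(default_factory=list)

    def update_score(self, observed_reliability: float) -> None:
        # Blend the prior score with freshly observed reliability data.
        # Simple exponential smoothing stands in for whatever statistical
        # reliability model is actually in use.
        self.reliability_score = (ALPHA * self.reliability_score
                                  + (1.0 - ALPHA) * observed_reliability)


@dataclass
class ProcessingJob:
    name: str
    min_acceptable_score: float              # derived from the job's stated requirements


def schedule(job: ProcessingJob,
             entities: List[ProcessingEntity]) -> Optional[ProcessingEntity]:
    # Consider only entities whose score meets or exceeds the job's minimum;
    # prefer the highest-scoring eligible entity.
    eligible = [e for e in entities
                if e.reliability_score >= job.min_acceptable_score]
    if not eligible:
        return None
    best = max(eligible, key=lambda e: e.reliability_score)
    best.assigned_jobs.append(job.name)
    return best


entities = [ProcessingEntity("node-a", 0.95), ProcessingEntity("node-b", 0.80)]
job = ProcessingJob("nightly-batch", min_acceptable_score=0.90)
print(schedule(job, entities).name)          # node-a
```

If an entity's score later drops below a hosted job's minimum, the same eligibility check can drive rescheduling onto a different entity, as the abstract notes.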

Publication number: 20140189706
Abstract: Reliability-aware scheduling of processing jobs on one or more processing entities is based on reliability scores assigned to processing entities and minimum acceptable reliability scores of processing jobs. The reliability scores of processing entities are based on independently derived statistical reliability models as applied to reliability data already available from modern computing hardware. Reliability scores of processing entities are continually updated based upon real-time reliability data, as well as prior reliability scores, which are weighted in accordance with the statistical reliability models being utilized. Individual processing jobs specify reliability requirements from which the minimum acceptable reliability score is determined. Such jobs are scheduled on processing entities whose reliability score is greater than or equal to the minimum acceptable reliability score for such jobs. Already scheduled jobs can be rescheduled on other processing entities if reliability scores change.
Type: Application
Filed: December 28, 2012
Publication date: July 3, 2014
Applicant: Microsoft Corporation
Inventors: Woongki Baek, Sriram Govindan, Sriram Sankar, Kushagra V. Vaid, Badriddine Khessib

Publication number: 20140173157
Abstract: Computing unit enclosures are often configured to connect units (e.g., server racks or trays) with a wired network. Because the network type may vary (e.g., Ethernet, InfiniBand, and Fibre Channel), such enclosures often provide network resources connecting each unit with each supported network type. However, such architectures may present inefficiencies such as unused network resources, and may constrain network support for the units to a small set of supported network types. Presented herein are enclosure architectures enabling flexible and efficient network support by including a backplane comprising a backplane bus that exchanges data between the units and a network adapter using an expansion bus protocol, such as PCI-Express.
Type: Application
Filed: December 14, 2012
Publication date: June 19, 2014
Applicant: Microsoft Corporation
Inventors: Mark Edward Shaw, Kushagra V. Vaid, David A. Maltz, Parantap Lahiri

Publication number: 20130120931
Abstract: Enclosing arrangements of racks of computing devices fully enclose a space, either by the racks themselves alone or in conjunction with structural features such as walls and doors. The enclosed space can be either a hot aisle, whose hot air is vented out by fans positioned in at least one vertical extremity of the enclosed space, such as the floor or ceiling, or a cold aisle, whose cold air is pumped in by those fans. To maintain proper pressurization across a vertical cross-section of the enclosed space, specific ones of the computing devices have their fans adjusted based on their vertical position within the racks, or have passive airflow adjustments such as impedance screens. Computing devices can draw or vent air from their sides, taking advantage of the interstitial space between the racks provided by the enclosing arrangement.
Type: Application
Filed: November 11, 2011
Publication date: May 16, 2013
Applicant: Microsoft Corporation
Inventors: Sriram Sankar, Harry Rogers, Kushagra V. Vaid, Mark Shaw, Bryan David Kelly, Grant Cowan Emerson

Publication number: 20120240116
Abstract: Embodiments of apparatuses and methods for improving performance in a virtualization architecture are disclosed. In one embodiment, an apparatus includes a processor and a processor abstraction layer. The processor abstraction layer includes instructions that, when executed by the processor, support techniques to improve the performance of the apparatus in a virtualization architecture.
Type: Application
Filed: May 30, 2012
Publication date: September 20, 2012
Inventors: Hin L. Leung, Amy L. Santoni, Gary N. Hammond, William R. Greene, Kushagra V. Vaid, Dale Morris, Jonathan Ross

Patent number: 8261266
Abstract: A system and a method are provided. Performance and capacity statistics, with respect to an application executing on one or more VMs, may be accessed and collected. The collected performance and capacity statistics may be analyzed to determine an improved hardware profile for efficiently executing the application on a VM. VMs with a virtual hardware configuration matching the improved hardware profile may be scheduled and deployed to execute the application. Performance and capacity statistics, with respect to the VMs, may be periodically analyzed to determine whether a threshold condition has occurred. When the threshold condition has been determined to have occurred, performance and capacity statistics, with respect to VMs having different configurations corresponding to different hardware profiles, may be automatically analyzed to determine an updated improved hardware profile. VMs for executing the application may be redeployed with virtual hardware configurations matching the updated improved profile.
Type: Grant
Filed: April 30, 2009
Date of Patent: September 4, 2012
Assignee: Microsoft Corporation
Inventors: Robert Pike, Kushagra V. Vaid, Robert Fries
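
The granted claims cover the monitoring-and-redeployment method broadly; the sketch below only illustrates the feedback loop in Python under assumed names and sizing rules (HardwareProfile, choose_profile, a CPU-utilization threshold), none of which come from the patent text.

```python
from dataclasses import dataclass
from statistics import mean
from typing import Callable, Dict, List


@dataclass
class HardwareProfile:
    vcpus: int
    memory_gb: int


def choose_profile(cpu_samples: List[float],
                   mem_samples_gb: List[float],
                   current: HardwareProfile) -> HardwareProfile:
    # Derive an "improved" profile from observed utilization; these sizing
    # rules are placeholders for the analysis the patent describes.
    avg_cpu = mean(cpu_samples)                            # fraction of current vCPU capacity
    peak_mem = max(mem_samples_gb)
    vcpus = max(1, round(current.vcpus * avg_cpu / 0.6))   # target roughly 60% busy
    memory_gb = max(1, int(peak_mem * 1.25))               # 25% headroom over observed peak
    return HardwareProfile(vcpus=vcpus, memory_gb=memory_gb)


def monitor_and_redeploy(collect_stats: Callable[[], Dict[str, List[float]]],
                         redeploy: Callable[[HardwareProfile], None],
                         current: HardwareProfile,
                         cpu_threshold: float = 0.85) -> HardwareProfile:
    # Called periodically: if the threshold condition occurs, re-analyze the
    # statistics and redeploy VMs with the updated hardware profile.
    stats = collect_stats()
    if mean(stats["cpu"]) > cpu_threshold:
        current = choose_profile(stats["cpu"], stats["mem_gb"], current)
        redeploy(current)
    return current


stats = {"cpu": [0.90, 0.95, 0.88], "mem_gb": [3.1, 3.4, 3.0]}
profile = monitor_and_redeploy(lambda: stats,
                               lambda p: print("redeploy with", p),
                               HardwareProfile(vcpus=2, memory_gb=4))
```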

Patent number: 8214830
Abstract: Embodiments of apparatuses and methods for improving performance in a virtualization architecture are disclosed. In one embodiment, an apparatus includes a processor and a processor abstraction layer. The processor abstraction layer includes instructions that, when executed by the processor, support techniques to improve the performance of the apparatus in a virtualization architecture.
Type: Grant
Filed: January 19, 2005
Date of Patent: July 3, 2012
Assignee: Intel Corporation
Inventors: Hin L. Leung, Amy L. Santoni, Gary N. Hammond, William R. Greene, Kushagra V. Vaid, Dale Morris, Jonathan Ross

Patent number: 8135723
Abstract: Computational units of any task may run in different silos. In an embodiment, a search query may be evaluated efficiently on a non-uniform memory architecture (NUMA) machine, by assigning separate chunks of the index to separate memories. In a NUMA machine, each socket has an attached memory. The latency time is low or high, depending on whether a processor accesses data in its attached memory or a different memory. Copies of an index manager program, which compares a query to an index, run separately on different processors in a NUMA machine. Each instance of the index manager compares the query to the index chunk in the memory attached to the processor on which that instance is running. Thus, each instance of the index manager may compare a query to a particular portion of the index using low-latency accesses, thereby increasing the efficiency of the search.
Type: Grant
Filed: November 12, 2008
Date of Patent: March 13, 2012
Assignee: Microsoft Corporation
Inventors: Kushagra V. Vaid, Gaurav Sareen
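
The abstract amounts to a partition-per-node search: each index-manager instance touches only the index chunk held in its node's locally attached memory, and a query fans out to every instance before the hits are merged. The Python sketch below models only the partitioning and fan-out; real NUMA pinning and node-local allocation (e.g., via numactl or libnuma) are outside the sketch and only noted in comments, and the chunk contents and document IDs are made up.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Dict, List, Set

# index_chunks maps a NUMA node id to the slice of the inverted index that
# would live in that node's locally attached memory. In a real system each
# chunk is allocated on its node and each index-manager instance is pinned
# to a core on that node; here plain dicts stand in for both.
index_chunks: Dict[int, Dict[str, Set[int]]] = {
    0: {"numa": {1, 4}, "latency": {1, 2}},
    1: {"numa": {7}, "latency": {5, 7, 9}},
}


def search_chunk(node: int, term: str) -> Set[int]:
    # One index-manager instance: only reads the chunk local to its node,
    # so all of its index accesses would be low-latency.
    return index_chunks[node].get(term, set())


def search(term: str) -> List[int]:
    # Fan the query out to one searcher per node, then merge the hits.
    with ThreadPoolExecutor(max_workers=len(index_chunks)) as pool:
        partial = pool.map(lambda node: search_chunk(node, term), index_chunks)
    hits: Set[int] = set()
    for p in partial:
        hits |= p
    return sorted(hits)


print(search("latency"))   # [1, 2, 5, 7, 9]
```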

Patent number: 7849327
Abstract: A technique to improve the performance of virtualized input/output (I/O) resources of a microprocessor within a virtual machine environment. More specifically, embodiments of the invention enable accesses of virtualized I/O resources to be made by guest software without necessarily invoking host software. Furthermore, embodiments of the invention enable more efficient delivery of interrupts to guest software by alleviating the need for host software to be invoked in the delivery process.
Type: Grant
Filed: January 19, 2005
Date of Patent: December 7, 2010
Inventors: Hin L. Leung, Kushagra V. Vaid, Amy L. Santoni, Dale Morris, Jonathan Ross

Publication number: 20100281482
Abstract: A system and a method are provided. Performance and capacity statistics, with respect to an application executing on one or more VMs, may be accessed and collected. The collected performance and capacity statistics may be analyzed to determine an improved hardware profile for efficiently executing the application on a VM. VMs with a virtual hardware configuration matching the improved hardware profile may be scheduled and deployed to execute the application. Performance and capacity statistics, with respect to the VMs, may be periodically analyzed to determine whether a threshold condition has occurred. When the threshold condition has been determined to have occurred, performance and capacity statistics, with respect to VMs having different configurations corresponding to different hardware profiles, may be automatically analyzed to determine an updated improved hardware profile. VMs for executing the application may be redeployed with virtual hardware configurations matching the updated improved profile.
Type: Application
Filed: April 30, 2009
Publication date: November 4, 2010
Applicant: Microsoft Corporation
Inventors: Robert Pike, Kushagra V. Vaid, Robert Fries

Publication number: 20100121865
Abstract: Computational units of any task may run in different silos. In an embodiment, a search query may be evaluated efficiently on a non-uniform memory architecture (NUMA) machine, by assigning separate chunks of the index to separate memories. In a NUMA machine, each socket has an attached memory. The latency time is low or high, depending on whether a processor accesses data in its attached memory or a different memory. Copies of an index manager program, which compares a query to an index, run separately on different processors in a NUMA machine. Each instance of the index manager compares the query to the index chunk in the memory attached to the processor on which that instance is running. Thus, each instance of the index manager may compare a query to a particular portion of the index using low-latency accesses, thereby increasing the efficiency of the search.
Type: Application
Filed: November 12, 2008
Publication date: May 13, 2010
Applicant: Microsoft Corporation
Inventors: Kushagra V. Vaid, Gaurav Sareen

Publication number: 20100036903
Abstract: Systems and methods that distribute load balancing functionality in a data center. A network of demultiplexers and load balancer servers enables calculated scaling and growth, wherein the capacity of the load balancing operation can be adjusted by changing the number of load balancer servers. Accordingly, the load balancing functionality and design can be disaggregated to increase resilience and flexibility for both the load balancing and switching mechanisms of the data center.
Type: Application
Filed: August 11, 2008
Publication date: February 11, 2010
Applicant: Microsoft Corporation
Inventors: Najam Ahmad, Albert Gordon Greenberg, Parantap Lahiri, Dave Maltz, Parveen K. Patel, Sudipta Sengupta, Kushagra V. Vaid
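
As a rough illustration of the disaggregated design the abstract describes — a demultiplexer tier spreading flows over a resizable pool of load-balancer servers — here is a toy Python sketch. The flow-hashing scheme, the round-robin backend choice, and all names are assumptions for illustration, not details taken from the application.

```python
import hashlib
from typing import List


class LoadBalancerServer:
    """One server in the load-balancer pool; picks an application backend."""

    def __init__(self, name: str, backends: List[str]):
        self.name = name
        self.backends = backends
        self.next_index = 0

    def pick_backend(self) -> str:
        # Simple round-robin over the application servers.
        backend = self.backends[self.next_index % len(self.backends)]
        self.next_index += 1
        return backend


class Demultiplexer:
    """Stateless tier: spreads incoming flows over the LB server pool."""

    def __init__(self, lb_pool: List[LoadBalancerServer]):
        self.lb_pool = lb_pool

    def route(self, flow_id: str) -> str:
        # Hash the flow identifier to choose a load-balancer server,
        # which then chooses the backend.
        digest = hashlib.sha1(flow_id.encode()).hexdigest()
        lb = self.lb_pool[int(digest, 16) % len(self.lb_pool)]
        return lb.pick_backend()


backends = ["app-1", "app-2", "app-3"]
# Capacity is adjusted simply by changing how many LB servers are in the pool.
pool = [LoadBalancerServer(f"lb-{i}", backends) for i in range(4)]
demux = Demultiplexer(pool)
print(demux.route("client-10.0.0.7:443"))
```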

Patent number: 7587639
Abstract: A system and method for injecting hardware errors into a microprocessor system are described. In one embodiment, a software interface between system software and system firmware is established. Software test and debug of software error handlers may thus be supported. The software interface may support both a query mode call and a seed mode call. When a query mode call is issued, it asks whether the system firmware and hardware support the injection of a specified kind of error. A return from this call may be used to build a list of supported errors for injection. When a seed mode call is issued, the corresponding error may be injected into the hardware.
Type: Grant
Filed: November 9, 2004
Date of Patent: September 8, 2009
Assignee: Intel Corporation
Inventors: Suresh K. Marisetty, Rajendra Kuramkote, Koichi Yamada, Scott D. Brenden, Kushagra V. Vaid
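
The abstract describes two call modes on the firmware interface: a query mode that reports whether a given error kind can be injected, and a seed mode that injects it. The following is a toy Python stand-in for that shape of interface; the error kinds, class names, and method names are invented for illustration and do not reflect the actual firmware calls.

```python
from enum import Enum, auto


class ErrorKind(Enum):
    CACHE_ECC_SINGLE_BIT = auto()
    MEMORY_UNCORRECTABLE = auto()
    BUS_PARITY = auto()


class ErrorInjectionInterface:
    """Toy stand-in for a firmware error-injection interface exposing the
    two call modes the abstract mentions: query and seed."""

    SUPPORTED = {ErrorKind.CACHE_ECC_SINGLE_BIT, ErrorKind.MEMORY_UNCORRECTABLE}

    def query(self, kind: ErrorKind) -> bool:
        # Query mode: does the firmware/hardware support injecting this error?
        return kind in self.SUPPORTED

    def seed(self, kind: ErrorKind) -> None:
        # Seed mode: inject the corresponding error so that the OS error
        # handler can be exercised. Here we only simulate the injection.
        if not self.query(kind):
            raise ValueError(f"{kind.name} injection not supported")
        print(f"injected {kind.name}")


fw = ErrorInjectionInterface()
supported = [k for k in ErrorKind if fw.query(k)]   # build the list of supported errors
fw.seed(supported[0])
```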

Patent number: 7254676
Abstract: In one embodiment, a computer boot method allows choosing a predetermined data block alignment for a cache that has multiple cross-processor interactions. A cache RAM column of a cache-as-RAM system is loaded with a tag to prevent unintended cache line evictions, and boot code is executed, with the preloaded cache RAM appearing to the executing boot code stream as a memory store.
Type: Grant
Filed: November 15, 2002
Date of Patent: August 7, 2007
Assignee: Intel Corporation
Inventors: Sham M. Datta, Vincent J. Zimmer, Kushagra V. Vaid, William A. Stevens, Amy Lynn Santoni

Publication number: 20040098575
Abstract: In one embodiment, a computer boot method allows choosing a predetermined data block alignment for a cache that has multiple cross-processor interactions. A cache RAM column of a cache-as-RAM system is loaded with a tag to prevent unintended cache line evictions, and boot code is executed, with the preloaded cache RAM appearing to the executing boot code stream as a memory store.
Type: Application
Filed: November 15, 2002
Publication date: May 20, 2004
Inventors: Sham M. Datta, Vincent J. Zimmer, Kushagra V. Vaid, William A. Stevens, Amy Lynn Santoni

Publication number: 20030233601
Abstract: Internal bus observation techniques for an electronic module or integrated circuit. In one embodiment, a disclosed apparatus includes observation logic to observe an observed bus, and a debug buffer. The observation logic captures a record reflecting signals observed on the observed bus. The debug buffer is coupled to the observation logic to receive the record that reflects the signals observed by the observation logic. The debug buffer generates a transaction to transfer the record to a storage device, which stores the record.
Type: Application
Filed: June 17, 2002
Publication date: December 18, 2003
Inventors: Kushagra V. Vaid, Piyush Desai, Victor W. Lee

Patent number: 6112295
Abstract: A method and apparatus for expediting the processing of a plurality of instructions in a processor is disclosed. In one embodiment, said processor has a plurality of pipeline units to process a plurality of instructions. Each of said pipeline units has a plurality of pipe stages. Further, a decoupling queue is provided to decouple at least one of said pipe stages from another, wherein said decoupling generates non-overlapping read and write signals to support corresponding read and write operations within a single clock cycle of said processor.
Type: Grant
Filed: September 24, 1998
Date of Patent: August 29, 2000
Assignee: Intel Corporation
Inventors: Sriram Bhamidipati, Kushagra V. Vaid
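
The decoupling queue here is a hardware structure, but its behavior — letting one pipe stage produce while another consumes, with the write and read serviced in separate, non-overlapping phases of a cycle — can be modeled in a few lines. The Python sketch below is such a behavioral model under assumed names and an assumed queue depth, not a description of the actual circuit.

```python
from collections import deque


class DecouplingQueue:
    """Minimal behavioral model of a queue decoupling two pipeline stages.

    Each simulated clock cycle is split into a write phase and a read phase,
    loosely modeling non-overlapping write/read within one cycle.
    """

    def __init__(self, depth: int):
        self.depth = depth
        self.entries = deque()

    def can_write(self) -> bool:
        return len(self.entries) < self.depth

    def write(self, item) -> bool:
        # Producer stage (e.g., fetch): stalls this cycle if the queue is full.
        if self.can_write():
            self.entries.append(item)
            return True
        return False

    def read(self):
        # Consumer stage (e.g., decode): returns None if nothing is queued.
        return self.entries.popleft() if self.entries else None


def run_cycle(queue: DecouplingQueue, producer, consumer) -> None:
    # Write phase, then read phase: the two operations do not overlap within
    # the modeled cycle. (A real queue would typically add a cycle of latency.)
    item = producer()
    if item is not None:
        queue.write(item)
    work = queue.read()
    if work is not None:
        consumer(work)


q = DecouplingQueue(depth=4)
instructions = iter(["ld", "add", "st"])
run_cycle(q, lambda: next(instructions, None), lambda op: print("decode", op))
```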