Patents by Inventor Sai Ganesh Ramachandran

Sai Ganesh Ramachandran has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11880702
    Abstract: Hot restart of a hypervisor by replacing a running first hypervisor by a second hypervisor with minimally perceptible downtime to guest partitions. A first hypervisor is executed on a computing system. The first hypervisor is configured to create one or more guest partitions. During the hot restart, a service partition is generated and initialized with a second hypervisor. At least a portion of runtime state of the first hypervisor is migrated and synchronized to the second hypervisor using inverse hypercalls. After the synchronization, the second hypervisor is devirtualized from the service partition to replace the first hypervisor. Devirtualizing includes transferring control of hardware resources from the first hypervisor to the second hypervisor, using the previously migrated and synchronized runtime state.
    Type: Grant
    Filed: August 5, 2022
    Date of Patent: January 23, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Bruce J. Sherwin, Jr., Sai Ganesh Ramachandran
  • Publication number: 20230061596
    Abstract: Hot restart of a hypervisor by replacing a running first hypervisor by a second hypervisor with minimally perceptible downtime to guest partitions. A first hypervisor is executed on a computing system. The first hypervisor is configured to create one or more guest partitions. During the hot restart, a service partition is generated and initialized with a second hypervisor. At least a portion of runtime state of the first hypervisor is migrated and synchronized to the second hypervisor using inverse hypercalls. After the synchronization, the second hypervisor is devirtualized from the service partition to replace the first hypervisor. Devirtualizing includes transferring control of hardware resources from the first hypervisor to the second hypervisor, using the previously migrated and synchronized runtime state.
    Type: Application
    Filed: August 5, 2022
    Publication date: March 2, 2023
    Inventors: Bruce J. SHERWIN, JR., Sai Ganesh RAMACHANDRAN
  • Patent number: 11436031
    Abstract: Hot restart of a hypervisor by replacing a running first hypervisor by a second hypervisor with minimally perceptible downtime to guest partitions. A first hypervisor is executed on a computing system. The first hypervisor is configured to create one or more guest partitions. During the hot restart, a service partition is generated and initialized with a second hypervisor. At least a portion of runtime state of the first hypervisor is migrated and synchronized to the second hypervisor using inverse hypercalls. After the synchronization, the second hypervisor is devirtualized from the service partition to replace the first hypervisor. Devirtualizing includes transferring control of hardware resources from the first hypervisor to the second hypervisor, using the previously migrated and synchronized runtime state.
    Type: Grant
    Filed: April 15, 2020
    Date of Patent: September 6, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Bruce J. Sherwin, Jr., Sai Ganesh Ramachandran
  • Publication number: 20210326159
    Abstract: Hot restart of a hypervisor by replacing a running first hypervisor by a second hypervisor with minimally perceptible downtime to guest partitions. A first hypervisor is executed on a computing system. The first hypervisor is configured to create one or more guest partitions. During the hot restart, a service partition is generated and initialized with a second hypervisor. At least a portion of runtime state of the first hypervisor is migrated and synchronized to the second hypervisor using inverse hypercalls. After the synchronization, the second hypervisor is devirtualized from the service partition to replace the first hypervisor. Devirtualizing includes transferring control of hardware resources from the first hypervisor to the second hypervisor, using the previously migrated and synchronized runtime state.
    Type: Application
    Filed: April 15, 2020
    Publication date: October 21, 2021
    Inventors: Bruce J. Sherwin, Jr., Sai Ganesh Ramachandran
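
The four entries above cover the same hypervisor hot-restart invention: runtime state is pushed from the running hypervisor into a replacement hypervisor staged in a service partition via inverse hypercalls, after which the replacement is devirtualized and takes over the hardware. The C sketch below is only a minimal simulation of that handoff under those assumptions; the structures, the inverse_hypercall_sync callback, and the devirtualize step are hypothetical names for illustration, not Hyper-V internals.

```c
/* Minimal simulation of a hypervisor "hot restart": runtime state is
 * copied from a running hypervisor into a second instance through a
 * callback-style "inverse hypercall", after which control of the
 * (simulated) hardware is handed over. All names are hypothetical. */
#include <stdio.h>
#include <string.h>

struct hv_runtime_state {            /* hypothetical runtime state */
    unsigned long guest_partitions;
    unsigned long pending_interrupts;
    char scheduler_state[32];
};

struct hypervisor {
    const char *name;
    int owns_hardware;               /* 1 while this instance controls CPU/MMU */
    struct hv_runtime_state state;
};

/* "Inverse hypercall": the running hypervisor pushes its runtime state
 * down into the new hypervisor staged in the service partition
 * (a normal hypercall flows the other way). */
static void inverse_hypercall_sync(const struct hypervisor *old_hv,
                                   struct hypervisor *new_hv)
{
    new_hv->state = old_hv->state;   /* migrate and synchronize state */
}

/* Devirtualize: transfer control of hardware from old to new instance. */
static void devirtualize(struct hypervisor *old_hv, struct hypervisor *new_hv)
{
    old_hv->owns_hardware = 0;
    new_hv->owns_hardware = 1;
}

int main(void)
{
    struct hypervisor first  = { "hv-v1", 1,
        { .guest_partitions = 3, .pending_interrupts = 7 } };
    strcpy(first.state.scheduler_state, "round-robin");

    /* Service partition hosts the replacement hypervisor image. */
    struct hypervisor second = { "hv-v2", 0, { 0 } };

    inverse_hypercall_sync(&first, &second);  /* synchronize runtime state */
    devirtualize(&first, &second);            /* swap hardware ownership */

    printf("%s owns hardware: %d, guests preserved: %lu (%s)\n",
           second.name, second.owns_hardware,
           second.state.guest_partitions, second.state.scheduler_state);
    return 0;
}
```
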
  • Patent number: 10649763
    Abstract: The disclosed technology is generally directed to the patching of executing binaries. In one example of the technology, at separate times, a plurality of hot patch requests is received. Each hot patch request of the plurality of hot patch requests includes a corresponding hot patch to hot patch the executing binary. A cardinality of the plurality of hot patch requests is greater than the fixed number of logical patch slots. With the executing binary continuing to execute, each time a request to apply a hot patch to the executing binary is received, the corresponding hot patch is assigned to an inactive logical patch slot of the fixed number of logical patch slots. The corresponding hot patch is executed from the assigned logical patch slot to hot patch the executing binary based on the corresponding hot patch.
    Type: Grant
    Filed: June 15, 2018
    Date of Patent: May 12, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sai Ganesh Ramachandran, Bruce J. Sherwin, Jr., David Alan Hepkin
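
The abstract above (and the related publication 20190384591 below) describes cycling hot patches through a fixed number of logical patch slots so that, over time, more patches can be applied than there are slots while the binary keeps running. The following C sketch is a minimal illustration of that idea, assuming a slot can be modeled as a function pointer and that flipping the active slot index stands in for whatever activation mechanism the patent actually uses; the slot count and all names are hypothetical.

```c
/* Illustrative sketch of hot patching through a fixed number of logical
 * patch slots: each incoming patch is installed into an inactive slot and
 * then activated, so the count of patches applied over time can exceed
 * the slot count. Purely conceptual; not the patented mechanism itself. */
#include <stdio.h>

#define NUM_SLOTS 2                       /* fixed number of logical patch slots */

typedef int (*patched_fn)(int);

static int original(int x)  { return x; }
static int patch_v1(int x)  { return x + 1; }
static int patch_v2(int x)  { return x * 2; }
static int patch_v3(int x)  { return x - 5; }

static patched_fn slots[NUM_SLOTS] = { original, NULL };
static int active_slot = 0;               /* currently executing slot */

/* Dispatch always goes through the active slot, so callers never stop. */
static int call_hot(int x) { return slots[active_slot](x); }

/* Apply a new hot patch: write it into an inactive slot, then flip. */
static void apply_hot_patch(patched_fn patch)
{
    int inactive = (active_slot + 1) % NUM_SLOTS;
    slots[inactive] = patch;              /* install into the inactive slot */
    active_slot = inactive;               /* activate the newly filled slot */
}

int main(void)
{
    printf("before patches: %d\n", call_hot(10));
    apply_hot_patch(patch_v1);            /* 1st patch -> slot 1 */
    printf("after patch 1:  %d\n", call_hot(10));
    apply_hot_patch(patch_v2);            /* 2nd patch -> slot 0 (reused) */
    printf("after patch 2:  %d\n", call_hot(10));
    apply_hot_patch(patch_v3);            /* 3rd patch exceeds the slot count */
    printf("after patch 3:  %d\n", call_hot(10));
    return 0;
}
```

The property the sketch illustrates is the one the abstract emphasizes: the third patch reuses slot 0, so the number of patches applied exceeds the fixed number of slots without interrupting callers.
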
  • Patent number: 10621342
    Abstract: Speculative side channels exist when memory is accessed by speculatively-executed processor instructions. Embodiments use uncacheable memory mappings to close speculative side channels that could allow an unprivileged execution context to access a privileged execution context's memory. Based on allocation of memory location(s) to the unprivileged execution context, embodiments map these memory location(s) as uncacheable within first page table(s) corresponding to the privileged execution context, but map those same memory locations as cacheable within second page table(s) corresponding to the unprivileged execution context. This prevents a processor from carrying out speculative execution of instruction(s) from the privileged execution context that access any of this memory allocated to the unprivileged execution context, due to the unprivileged execution context's memory being mapped as uncacheable for the privileged execution context.
    Type: Grant
    Filed: November 2, 2017
    Date of Patent: April 14, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kenneth D. Johnson, Sai Ganesh Ramachandran, Xin David Zhang, Arun Upadhyaya Kishan, David Alan Hepkin
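
The abstract above (and publication 20190130102 below) describes mapping memory allocated to an unprivileged context as uncacheable in the privileged context's page tables while leaving it cacheable in the unprivileged context's own tables. The C sketch below only models that asymmetry with a simplified, hypothetical PTE structure; it does not reproduce a real hardware page-table format or the actual mitigation code.

```c
/* Conceptual sketch of the asymmetric mapping described above: the same
 * physical page is mapped cacheable in the unprivileged context's page
 * table but uncacheable (cache-disable set) in the privileged context's
 * page table, so speculative loads on the privileged side cannot be
 * serviced from the cache. Simplified, hypothetical PTE layout. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct pte {
    uint64_t pfn;            /* physical frame number backing the page */
    bool     present;
    bool     cache_disable;  /* if set, accesses bypass the data cache */
};

/* Map one page into a context's (toy) page table. */
static void map_page(struct pte *table, unsigned vpn, uint64_t pfn, bool uncacheable)
{
    table[vpn] = (struct pte){ .pfn = pfn, .present = true,
                               .cache_disable = uncacheable };
}

int main(void)
{
    enum { PAGES = 4 };
    struct pte privileged_pt[PAGES]   = {{ 0 }};  /* e.g. kernel/hypervisor view */
    struct pte unprivileged_pt[PAGES] = {{ 0 }};  /* e.g. guest/user view */

    uint64_t shared_pfn = 0x1234;   /* page allocated to the unprivileged context */

    /* Same physical page, different cacheability per context. */
    map_page(unprivileged_pt, 2, shared_pfn, /*uncacheable=*/false);
    map_page(privileged_pt,   2, shared_pfn, /*uncacheable=*/true);

    printf("unprivileged PTE: pfn=%#llx cache_disable=%d\n",
           (unsigned long long)unprivileged_pt[2].pfn,
           unprivileged_pt[2].cache_disable);
    printf("privileged   PTE: pfn=%#llx cache_disable=%d\n",
           (unsigned long long)privileged_pt[2].pfn,
           privileged_pt[2].cache_disable);
    return 0;
}
```
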
  • Publication number: 20190384591
    Abstract: The disclosed technology is generally directed to the patching of executing binaries. In one example of the technology, at separate times, a plurality of hot patch requests is received. Each hot patch request of the plurality of hot patch requests includes a corresponding hot patch to hot patch the executing binary. A cardinality of the plurality of hot patch requests is greater than the fixed number of logical patch slots. With the executing binary continuing to execute, each time a request to apply a hot patch to the executing binary is received, the corresponding hot patch is assigned to an inactive logical patch slot of the fixed number of logical patch slots. The corresponding hot patch is executed from the assigned logical patch slot to hot patch the executing binary based on the corresponding hot patch.
    Type: Application
    Filed: June 15, 2018
    Publication date: December 19, 2019
    Inventors: Sai Ganesh RAMACHANDRAN, Bruce J. SHERWIN, JR., David Alan HEPKIN
  • Publication number: 20190130102
    Abstract: Speculative side channels exist when memory is accessed by speculatively-executed processor instructions. Embodiments use uncacheable memory mappings to close speculative side channels that could allow an unprivileged execution context to access a privileged execution context's memory. Based on allocation of memory location(s) to the unprivileged execution context, embodiments map these memory location(s) as uncacheable within first page table(s) corresponding to the privileged execution context, but map those same memory locations as cacheable within second page table(s) corresponding to the unprivileged execution context. This prevents a processor from carrying out speculative execution of instruction(s) from the privileged execution context that access any of this memory allocated to the unprivileged execution context, due to the unprivileged execution context's memory being mapped as uncacheable for the privileged execution context.
    Type: Application
    Filed: November 2, 2017
    Publication date: May 2, 2019
    Inventors: Kenneth D. JOHNSON, Sai Ganesh RAMACHANDRAN, Xin David ZHANG, Arun Upadhyaya KISHAN, David Alan HEPKIN
  • Patent number: 9621402
    Abstract: In embodiments of load balanced and prioritized data connections, a first connection is established to communicate first data from a first server to a second server over a public network, where the first data is communicated from a private network to a first device or subnet that is connected to the second server. A second connection is established to communicate second data from the first server to the second server over the public network, where the second data is communicated from the private network to a second device or subnet that is connected to the second server. The second server can distinguish the first data from the second data according to an authentication certificate field that identifies one of a first communication interface of the first connection or a second communication interface of the second connection.
    Type: Grant
    Filed: September 12, 2011
    Date of Patent: April 11, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Uma Mahesh Mudigonda, Sai Ganesh Ramachandran, Amit Kumar Nanda
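
The abstract above (and publication 20130067231 below) has the second server distinguishing the two data streams by an authentication certificate field that identifies the originating communication interface. The following C sketch is a toy model of that routing decision, assuming a certificate can be reduced to a struct with an interface_id field; the structures and names are hypothetical and do not represent the patented connection mechanism.

```c
/* Toy model of the routing idea above: two connections from the same
 * first server carry data for different devices/subnets, and the second
 * server tells them apart by a field in each connection's authentication
 * certificate that names the originating communication interface.
 * All structures and field names are hypothetical. */
#include <stdio.h>
#include <string.h>

struct auth_cert {
    char subject[64];
    char interface_id[16];   /* field identifying the communication interface */
};

struct connection {
    struct auth_cert cert;
    const char *payload;
};

/* Second server: route payload based on the certificate's interface field. */
static void route_on_second_server(const struct connection *conn)
{
    if (strcmp(conn->cert.interface_id, "if-1") == 0)
        printf("-> device/subnet A gets: %s\n", conn->payload);
    else if (strcmp(conn->cert.interface_id, "if-2") == 0)
        printf("-> device/subnet B gets: %s\n", conn->payload);
    else
        printf("-> unknown interface, dropping: %s\n", conn->payload);
}

int main(void)
{
    struct connection c1 = { { "first-server", "if-1" }, "first data stream"  };
    struct connection c2 = { { "first-server", "if-2" }, "second data stream" };

    route_on_second_server(&c1);
    route_on_second_server(&c2);
    return 0;
}
```
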
  • Publication number: 20130067231
    Abstract: In embodiments of load balanced and prioritized data connections, a first connection is established to communicate first data from a first server to a second server over a public network, where the first data is communicated from a private network to a first device or subnet that is connected to the second server. A second connection is established to communicate second data from the first server to the second server over the public network, where the second data is communicated from the private network to a second device or subnet that is connected to the second server. The second server can distinguish the first data from the second data according to an authentication certificate field that identifies one of a first communication interface of the first connection or a second communication interface of the second connection.
    Type: Application
    Filed: September 12, 2011
    Publication date: March 14, 2013
    Inventors: Uma Mahesh Mudigonda, Sai Ganesh Ramachandran, Amit Kumar Nanda