Patents by Inventor Arun U. Kishan
Arun U. Kishan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230244601
Abstract: Techniques for computer memory management are disclosed herein. In one embodiment, a method includes, in response to receiving a request for allocation of memory, determining whether the request is for allocation from a first memory region or a second memory region of physical memory. The first memory region has first memory subregions of a first size, and the second memory region has second memory subregions of a second size larger than the first size. The method further includes, in response to determining that the request is for allocation from the first or second memory region, allocating one or more of the first or second memory subregions of that region, respectively.
Type: Application
Filed: February 13, 2023
Publication date: August 3, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventors: Yevgeniy M. BAK, Kevin Michael BROAS, David Alan HEPKIN, Landy WANG, Mehmet IYIGUN, Brandon Alec ALLSOP, Arun U. KISHAN
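The region-based allocation the abstract describes can be pictured with a small sketch. This is an illustrative model only, not the patented implementation; the subregion sizes and the `RegionAllocator` class are invented for the example.

```python
# Hypothetical subregion sizes (stand-ins; the patent does not fix these).
SMALL_SUBREGION = 4 * 1024          # e.g. 4 KiB pages
LARGE_SUBREGION = 2 * 1024 * 1024   # e.g. 2 MiB large pages

class RegionAllocator:
    def __init__(self, small_count, large_count):
        # Each region is modeled as a pool of free subregion indices.
        self.free = {
            "small": list(range(small_count)),
            "large": list(range(large_count)),
        }

    def classify(self, size):
        # Requests at or below the small subregion size are served from
        # the first region; larger requests from the second.
        return "small" if size <= SMALL_SUBREGION else "large"

    def allocate(self, size):
        region = self.classify(size)
        if not self.free[region]:
            raise MemoryError(f"{region} region exhausted")
        # Allocate one subregion of the chosen region.
        return region, self.free[region].pop()

alloc = RegionAllocator(small_count=8, large_count=2)
print(alloc.allocate(4096))      # served from the small-subregion region
print(alloc.allocate(1 << 20))   # served from the large-subregion region
```

Keeping each region's subregions uniform in size means an allocation never has to search for a fit; it simply pops a free subregion of the matching region.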
-
Publication number: 20230066840
Abstract: Using metadata for a contentless file to provide a guest context access to file content. Within a guest context, a file system is mounted from a container image which lacks a first file's content and which includes metadata defining properties of the first file and mapping data identifying a second file, within another file system, from which the first file's content is obtainable. Based on the properties, a file system operation involving the first file is performed without switching to a host context, and a requested access to the first file's content is responded to. Responding includes, based on the mapping data, communicating a request for the host context to supply the first file's content and, after returning from a context switch, responding to the requested access by supplying content of the second file from guest memory page(s) which are mapped to host memory page(s) containing the second file's content.
Type: Application
Filed: January 27, 2021
Publication date: March 2, 2023
Inventors: Ping XIE, Scott BRENDER, Shaheed Gulamabbas CHAGANI, John Andrew STARKS, Arun U. KISHAN
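The split between metadata-only operations (answered in the guest) and content accesses (supplied by the host) can be sketched as follows. All names, fields, and the callback convention here are assumptions for illustration, not the patent's interfaces.

```python
# A contentless file: the container image carries only metadata plus
# mapping data naming where the real content lives on the host side.
class ContentlessFile:
    def __init__(self, name, size, mode, backing_path):
        self.name = name
        self.size = size                  # property: served without host help
        self.mode = mode
        self.backing_path = backing_path  # mapping data for the host context

    def stat(self):
        # Metadata-only operation: answered entirely in the guest context,
        # no context switch to the host required.
        return {"name": self.name, "size": self.size, "mode": self.mode}

    def read(self, host_fetch):
        # Content access: ask the host context to supply the backing
        # file's bytes (modeled here as a callback into "the host").
        return host_fetch(self.backing_path)

host_store = {"/layers/base/bin/tool": b"real file content"}
f = ContentlessFile("tool", size=17, mode=0o755,
                    backing_path="/layers/base/bin/tool")
print(f.stat()["size"])        # no host round-trip needed
print(f.read(host_store.get))  # host supplies the second file's content
```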
-
Patent number: 11593166
Abstract: Pooling computing resources based on inferences about a plurality of hardware devices. The method includes identifying inference information about the plurality of devices. The method further includes, based on the inference information, optimizing resource usage of the plurality of hardware devices.
Type: Grant
Filed: December 23, 2019
Date of Patent: February 28, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Arun U. Kishan, Emily Nicole Wilson, Mohammed Nashaat Soliman, Paresh Maisuria, Shira Weinberg, Gurpreet Virdi, Jared Brown
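The abstract leaves the mechanism open, but the shape of the idea can be sketched under assumed semantics: treat "inference information" as a predicted idle fraction per device, and "optimizing resource usage" as routing work to the device inferred to have the most spare capacity. Both modeling choices are mine, not the patent's.

```python
def pick_device(devices):
    """devices: dict of device name -> predicted idle fraction (0.0-1.0).

    Routes work to the device inferred to be most idle.
    """
    return max(devices, key=devices.get)

pool = {"laptop": 0.2, "desktop": 0.9, "tablet": 0.6}
print(pick_device(pool))  # → desktop
```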
-
Patent number: 11580019
Abstract: Techniques for computer memory management are disclosed herein. In one embodiment, a method includes, in response to receiving a request for allocation of memory, determining whether the request is for allocation from a first memory region or a second memory region of physical memory. The first memory region has first memory subregions of a first size, and the second memory region has second memory subregions of a second size larger than the first size. The method further includes, in response to determining that the request is for allocation from the first or second memory region, allocating one or more of the first or second memory subregions of that region, respectively.
Type: Grant
Filed: April 17, 2020
Date of Patent: February 14, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Yevgeniy M. Bak, Kevin Michael Broas, David Alan Hepkin, Landy Wang, Mehmet Iyigun, Brandon Alec Allsop, Arun U. Kishan
-
Patent number: 11157306
Abstract: To increase the speed with which the hierarchical levels of a Second Layer Address Table (SLAT) are traversed as part of a memory access where the guest physical memory of a virtual machine environment is backed by virtual memory assigned to one or more processes executing on a host computing device, one or more hierarchical levels of tables within the SLAT can be skipped or otherwise not referenced. While the SLAT can be populated with memory correlations at hierarchically higher levels of tables, the page table of the host computing device, supporting the host computing device's provision of virtual memory, can maintain a corresponding contiguous set of memory correlations at the hierarchically lowest table level, thereby enabling the host computing device to page out, or otherwise manipulate, smaller chunks of memory. If such manipulation occurs, the SLAT can be repopulated with memory correlations at the hierarchically lowest table level.
Type: Grant
Filed: August 30, 2020
Date of Patent: October 26, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Yevgeniy Bak, Mehmet Iyigun, Arun U. Kishan
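The level-skipping idea can be shown with a toy two-level translation table: a higher-level entry may map a whole large, contiguous range directly, so the walk resolves in one step instead of descending to the lowest level. This is a simplified model of the concept, not Hyper-V's SLAT format; the sizes and class are invented.

```python
PAGE = 4096
LARGE = 512 * PAGE  # one higher-level entry covers 512 small pages

class Slat:
    def __init__(self):
        # top-level index -> ("large", base) or ("table", {index: base})
        self.top = {}

    def map_large(self, gpa, spa):
        # Populate a higher-level entry covering a contiguous large range.
        self.top[gpa // LARGE] = ("large", spa)

    def map_small(self, gpa, spa):
        # Populate (or repopulate) at the lowest table level.
        kind, inner = self.top.setdefault(gpa // LARGE, ("table", {}))
        inner[(gpa % LARGE) // PAGE] = spa

    def translate(self, gpa):
        kind, val = self.top[gpa // LARGE]
        if kind == "large":
            # Lowest level skipped: resolve in a single step.
            return val + gpa % LARGE
        return val[(gpa % LARGE) // PAGE] + gpa % PAGE

s = Slat()
s.map_large(0, 0x100000)
print(hex(s.translate(PAGE + 8)))  # resolved without a lowest-level table
```

If the host later needs to page out individual small pages from that range, the large mapping would be replaced by lowest-level entries, which is what `map_small` models.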
-
Publication number: 20210326253
Abstract: Techniques for computer memory management are disclosed herein. In one embodiment, a method includes, in response to receiving a request for allocation of memory, determining whether the request is for allocation from a first memory region or a second memory region of physical memory. The first memory region has first memory subregions of a first size, and the second memory region has second memory subregions of a second size larger than the first size. The method further includes, in response to determining that the request is for allocation from the first or second memory region, allocating one or more of the first or second memory subregions of that region, respectively.
Type: Application
Filed: April 17, 2020
Publication date: October 21, 2021
Inventors: Yevgeniy M. Bak, Kevin Michael Broas, David Alan Hepkin, Landy Wang, Mehmet Iyigun, Brandon Alec Allsop, Arun U. Kishan
-
Patent number: 10990423
Abstract: One embodiment illustrated herein includes a method that may be practiced in a computing environment with a guest architecture running on a native architecture system. The method includes acts for handling function calls. The method includes receiving a call to a hybrid binary, wherein the call is in a format for the guest architecture. The hybrid binary includes a native function compiled into native architecture binary code using guest architecture source code, an interoperability thunk to handle an incompatibility between the guest architecture and the native architecture, and native host remapping metadata that is usable by an emulator to redirect native host callable targets to the interoperability thunk. The method further includes invoking the interoperability thunk to allow the native function in the hybrid binary to be executed natively on the native architecture system.
Type: Grant
Filed: May 3, 2019
Date of Patent: April 27, 2021
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Ten H. Tzen, Arun U. Kishan
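The remapping-metadata flow can be sketched as a dispatch layer: when guest code targets a function the hybrid binary provides natively, the emulator redirects the call through a thunk that bridges calling conventions and then runs the native code directly. Everything here, including the tuple-argument "guest convention", is an invented illustration.

```python
def native_add(a, b):
    # Native-architecture code: runs directly, no emulation needed.
    return a + b

def interop_thunk(guest_args):
    # The thunk bridges calling-convention differences; here it just
    # unpacks a guest-format argument tuple (an assumed convention).
    return native_add(*guest_args)

# Remapping metadata: guest-visible target -> thunk to invoke instead.
remap = {"add": interop_thunk}

def emulator_call(target, guest_args):
    if target in remap:
        return remap[target](guest_args)  # execute natively via the thunk
    raise NotImplementedError("would fall back to instruction emulation")

print(emulator_call("add", (2, 3)))  # → 5
```

The payoff is that hot functions compiled from guest-architecture source run at native speed, while calls that have no native counterpart still fall back to emulation.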
-
Publication number: 20200394065
Abstract: To increase the speed with which the hierarchical levels of a Second Layer Address Table (SLAT) are traversed as part of a memory access where the guest physical memory of a virtual machine environment is backed by virtual memory assigned to one or more processes executing on a host computing device, one or more hierarchical levels of tables within the SLAT can be skipped or otherwise not referenced. While the SLAT can be populated with memory correlations at hierarchically higher levels of tables, the page table of the host computing device, supporting the host computing device's provision of virtual memory, can maintain a corresponding contiguous set of memory correlations at the hierarchically lowest table level, thereby enabling the host computing device to page out, or otherwise manipulate, smaller chunks of memory. If such manipulation occurs, the SLAT can be repopulated with memory correlations at the hierarchically lowest table level.
Type: Application
Filed: August 30, 2020
Publication date: December 17, 2020
Inventors: Yevgeniy BAK, Mehmet IYIGUN, Arun U. KISHAN
-
Patent number: 10761876
Abstract: To increase the speed with which the hierarchical levels of a Second Layer Address Table (SLAT) are traversed as part of a memory access where the guest physical memory of a virtual machine environment is backed by virtual memory assigned to one or more processes executing on a host computing device, one or more hierarchical levels of tables within the SLAT can be skipped or otherwise not referenced. While the SLAT can be populated with memory correlations at hierarchically higher levels of tables, the page table of the host computing device, supporting the host computing device's provision of virtual memory, can maintain a corresponding contiguous set of memory correlations at the hierarchically lowest table level, thereby enabling the host computing device to page out, or otherwise manipulate, smaller chunks of memory. If such manipulation occurs, the SLAT can be repopulated with memory correlations at the hierarchically lowest table level.
Type: Grant
Filed: May 27, 2019
Date of Patent: September 1, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Yevgeniy Bak, Mehmet Iyigun, Arun U. Kishan
-
Publication number: 20200183747
Abstract: Pooling computing resources based on inferences about a plurality of hardware devices. The method includes identifying inference information about the plurality of devices. The method further includes, based on the inference information, optimizing resource usage of the plurality of hardware devices.
Type: Application
Filed: December 23, 2019
Publication date: June 11, 2020
Inventors: Arun U. Kishan, Emily Nicole Wilson, Mohammed Nashaat Soliman, Paresh Maisuria, Shira Weinberg, Gurpreet Virdi, Jared Brown
-
Publication number: 20200159558
Abstract: To increase the speed with which the hierarchical levels of a Second Layer Address Table (SLAT) are traversed as part of a memory access where the guest physical memory of a virtual machine environment is backed by virtual memory assigned to one or more processes executing on a host computing device, one or more hierarchical levels of tables within the SLAT can be skipped or otherwise not referenced. While the SLAT can be populated with memory correlations at hierarchically higher levels of tables, the page table of the host computing device, supporting the host computing device's provision of virtual memory, can maintain a corresponding contiguous set of memory correlations at the hierarchically lowest table level, thereby enabling the host computing device to page out, or otherwise manipulate, smaller chunks of memory. If such manipulation occurs, the SLAT can be repopulated with memory correlations at the hierarchically lowest table level.
Type: Application
Filed: May 27, 2019
Publication date: May 21, 2020
Inventors: Yevgeniy BAK, Mehmet IYIGUN, Arun U. KISHAN
-
Patent number: 10628238
Abstract: Systems, methods, and apparatus for separately loading and managing foreground work and background work of an application. In some embodiments, a method is provided for use by an operating system executing on at least one computer. The operating system may identify at least one foreground component and at least one background component of an application, and may load the at least one foreground component for execution separately from the at least one background component. For example, the operating system may execute the at least one foreground component without executing the at least one background component. In some further embodiments, the operating system may use a specification associated with the application to identify at least one piece of computer executable code implementing the at least one background component.
Type: Grant
Filed: May 27, 2016
Date of Patent: April 21, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: James A. Schwartz, Arun U. Kishan, Richard K. Neves, David B. Probert, Hari Pulapaka, Alain F. Gefflaut
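The specification-driven split can be sketched as follows: the app's manifest names which code implements its foreground and background components, and launching the app loads only the foreground set. The manifest shape and component names are invented for the example.

```python
# Hypothetical application specification: which code implements which
# component. The OS consults this instead of loading the whole app.
spec = {
    "app": "MailApp",
    "foreground": ["ui_main"],    # loaded when the user opens the app
    "background": ["sync_task"],  # loaded separately, e.g. on a trigger
}

loaded = []

def load(component):
    # Stand-in for mapping and initializing the component's code.
    loaded.append(component)

def launch_foreground(spec):
    # Foreground components load without pulling in background work.
    for c in spec["foreground"]:
        load(c)

def run_background(spec):
    # Background components load on their own schedule, independently.
    for c in spec["background"]:
        load(c)

launch_foreground(spec)
print(loaded)  # background components were not loaded
```

Loading the two sets independently lets the OS keep background work suspended, throttled, or unloaded while the user-facing part stays responsive.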
-
Patent number: 10552219
Abstract: Pooling computing resources based on inferences about a plurality of hardware devices. The method includes identifying inference information about the plurality of devices. The method further includes, based on the inference information, optimizing resource usage of the plurality of hardware devices.
Type: Grant
Filed: February 19, 2016
Date of Patent: February 4, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Arun U. Kishan, Emily Nicole Wilson, Mohammed Nashaat Soliman, Paresh Maisuria, Shira Weinberg, Gurpreet Virdi, Jared Brown
-
Patent number: 10503238
Abstract: Each processor core in a computing device supports various different frequency ranges, also referred to as p-states, and can operate to run threads at any one of those different frequency ranges. Threads in the computing device are assigned one of multiple importance levels. A processor core is configured to run at a particular frequency range or in accordance with a particular energy performance preference based on the importance level of the thread it is running. A utilization factor of a processor core can also be determined over some time duration, the utilization factor being based on the amount of time during the time duration that the processor core was running a thread(s), and also based on the importance levels of the thread(s) run during the time duration. The utilization factor can then be used to determine whether to park the processor core.
Type: Grant
Filed: May 30, 2017
Date of Patent: December 10, 2019
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Mehmet Iyigun, Kai-Lun Hsu, Rahul Nair, Mark Allan Bellon, Arun U. Kishan, Tristan A. Brown
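One plausible reading of the importance-weighted utilization factor can be worked through numerically. The weights and the parking threshold below are invented for illustration; the patent does not publish specific values.

```python
# Time spent on low-importance threads counts less toward utilization,
# so a core busy only with background work still looks parkable.
WEIGHT = {"high": 1.0, "low": 0.25}   # assumed importance weights
PARK_THRESHOLD = 0.3                  # assumed parking cutoff

def utilization_factor(window_ms, runs):
    """runs: list of (busy_ms, importance) for threads run in the window."""
    weighted = sum(ms * WEIGHT[imp] for ms, imp in runs)
    return weighted / window_ms

def should_park(window_ms, runs):
    return utilization_factor(window_ms, runs) < PARK_THRESHOLD

# Core busy 40 ms of a 100 ms window, but only with low-importance work:
# weighted utilization = 40 * 0.25 / 100 = 0.10, below the threshold.
print(should_park(100, [(40, "low")]))   # → True
print(should_park(100, [(80, "high")]))  # → False
```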
-
Publication number: 20190265993
Abstract: One embodiment illustrated herein includes a method that may be practiced in a computing environment with a guest architecture running on a native architecture system. The method includes acts for handling function calls. The method includes receiving a call to a hybrid binary, wherein the call is in a format for the guest architecture. The hybrid binary includes a native function compiled into native architecture binary code using guest architecture source code, an interoperability thunk to handle an incompatibility between the guest architecture and the native architecture, and native host remapping metadata that is usable by an emulator to redirect native host callable targets to the interoperability thunk. The method further includes invoking the interoperability thunk to allow the native function in the hybrid binary to be executed natively on the native architecture system.
Type: Application
Filed: May 3, 2019
Publication date: August 29, 2019
Inventors: Ten H. Tzen, Arun U. Kishan
-
Patent number: 10303498
Abstract: One embodiment illustrated herein includes a method that may be practiced in a computing environment with a guest architecture running on a native architecture system. The method includes acts for handling function calls. The method includes receiving a call to a target binary, wherein the call is in a format for the guest architecture. The method further includes determining that the call is to a binary that is a hybrid binary. The hybrid binary includes a native function compiled into native architecture binary code using guest architecture source code and a specialized thunk to handle an incompatibility between the guest architecture and the native architecture. The method further includes invoking the specialized thunk to allow the native function in the hybrid binary to be executed natively on the native architecture system.
Type: Grant
Filed: October 1, 2015
Date of Patent: May 28, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ten H. Tzen, Arun U. Kishan
-
Patent number: 10198059
Abstract: Adaptive doze to hibernate scheme techniques are described for power management of a computing device. Rather than relying upon a fixed timer to control device power states, the adaptive doze to hibernate scheme monitors various hibernate parameters and adapts the hibernation experience in dependence upon the parameters. The hibernate parameters may include but are not limited to a standby budget, minimum standby time, reserve screen on time, and indications of user presence. In operation, a power manager monitors battery drain rate and adaptively determines when to change the device power states of the computing device based on the observed drain rate and the hibernate parameters. The power manager may selectively switch between various states (e.g., high performance, active, wake, standby, hibernate, off, etc.) accordingly.
Type: Grant
Filed: August 29, 2016
Date of Patent: February 5, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: M. Nashaat Soliman, Paresh Maisuria, Arun U. Kishan
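A decision of this shape can be sketched with one assumed policy: hibernate when the observed drain rate would exhaust the standby budget over the minimum standby window, unless the user is present. The policy, parameter names, and numbers are all illustrative, not the patented scheme.

```python
def should_hibernate(battery_pct, drain_pct_per_hr, standby_budget_pct,
                     min_standby_hr, user_present):
    # Indications of user presence keep the device in a responsive state.
    if user_present:
        return False
    # Project drain over the minimum standby window; hibernate if it
    # would blow the standby budget or the remaining battery.
    projected_drain = drain_pct_per_hr * min_standby_hr
    return (projected_drain > standby_budget_pct
            or battery_pct <= projected_drain)

# Draining 3%/hr against a 5% budget over a 4-hour minimum standby window:
# projected drain is 12%, over budget, so doze gives way to hibernate.
print(should_hibernate(80, 3.0, 5.0, 4, user_present=False))  # → True
```

The adaptive aspect is that `drain_pct_per_hr` is observed at runtime, so the same budget yields different doze durations on different hardware and workloads, unlike a fixed timer.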
-
Patent number: 10037270
Abstract: A set of memory pages from a working set of a program process, such as at least some of the memory pages that have been modified, are compressed into a compressed store prior to being written to a page file, after which the memory pages can be repurposed by a memory manager. The memory commit charge for the memory pages compressed into the compressed store is borrowed from the program process by a compressed storage manager, reducing the memory commit charge of the compressed storage manager. Subsequent requests from the memory manager for memory pages that have been compressed into a compressed store are satisfied by accessing the compressed store memory pages (including retrieving the compressed store memory pages from the page file if written to the page file), decompressing the requested memory pages, and returning the requested memory pages to the memory manager.
Type: Grant
Filed: April 14, 2015
Date of Patent: July 31, 2018
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Yevgeniy M. Bak, Mehmet Iyigun, Landy Wang, Arun U. Kishan
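The compress-then-repurpose flow can be sketched with `zlib` standing in for whatever compressor the real implementation uses; the class and its methods are invented for the example.

```python
import zlib

class CompressedStore:
    """Modified working-set pages are compressed into this store before
    any trip to the page file; the originals can then be repurposed."""

    def __init__(self):
        self.store = {}

    def compress_page(self, page_id, data):
        # After this, the memory manager may repurpose the original page.
        self.store[page_id] = zlib.compress(data)

    def fetch_page(self, page_id):
        # On a later request: decompress and return the page contents.
        return zlib.decompress(self.store.pop(page_id))

cs = CompressedStore()
page = b"A" * 4096                   # highly compressible working-set page
cs.compress_page(7, page)
print(len(cs.store[7]) < len(page))  # → True: store holds far fewer bytes
print(cs.fetch_page(7) == page)      # → True: round-trip is lossless
```

The win is that many compressed pages fit where few raw pages did, so fewer bytes reach the page file and "page-ins" from the store are memory-speed decompressions rather than disk reads.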
-
Publication number: 20180120920
Abstract: Each processor core in a computing device supports various different frequency ranges, also referred to as p-states, and can operate to run threads at any one of those different frequency ranges. Threads in the computing device are assigned one of multiple importance levels. A processor core is configured to run at a particular frequency range or in accordance with a particular energy performance preference based on the importance level of the thread it is running. A utilization factor of a processor core can also be determined over some time duration, the utilization factor being based on the amount of time during the time duration that the processor core was running a thread(s), and also based on the importance levels of the thread(s) run during the time duration. The utilization factor can then be used to determine whether to park the processor core.
Type: Application
Filed: May 30, 2017
Publication date: May 3, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Mehmet IYIGUN, Kai-Lun HSU, Rahul NAIR, Mark Allan BELLON, Arun U. KISHAN, Tristan A. BROWN
-
Publication number: 20180046536
Abstract: Processing faults in a virtual computing environment. A method includes receiving a request to perform a memory access for a virtual machine. The method further includes identifying that the memory access is unable to be performed without taking a fault. The method further includes identifying that a virtual fault can be taken to service the fault. The virtual fault is taken by servicing the fault asynchronously with respect to the virtual machine. The method further includes identifying that a virtual fault should be taken by evaluating criteria to weigh taking a virtual fault for servicing the fault asynchronously versus servicing the fault synchronously. As a result of identifying that a virtual fault should be taken, the method further includes notifying the virtual machine that a virtual fault should be taken for the memory access. The method further includes servicing the fault asynchronously with respect to the virtual machine.
Type: Application
Filed: November 4, 2016
Publication date: February 15, 2018
Inventors: Mehmet Iyigun, Kevin Michael Broas, Arun U. Kishan, Yevgeniy M. Bak, John Joseph Richardson
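The weighing step can be sketched with invented criteria: take the asynchronous "virtual fault" path when servicing will be slow and the virtual CPU has other runnable work, and fall back to synchronous servicing otherwise. The specific criteria, threshold, and parameter names are assumptions for illustration.

```python
def take_virtual_fault(est_service_ms, vcpu_has_other_work,
                       guest_supports_virtual_faults):
    """Weigh servicing a fault asynchronously (virtual fault) versus
    stalling the virtual CPU to service it synchronously."""
    if not guest_supports_virtual_faults:
        return False  # no choice: must service synchronously
    # Async pays off when service is slow (e.g. a backing-store read)
    # and the guest can schedule other threads meanwhile. The 1 ms
    # cutoff is an invented tuning knob.
    return vcpu_has_other_work and est_service_ms > 1.0

# Slow backing-store read while other guest threads are runnable:
print(take_virtual_fault(8.0, True, True))   # → True: notify the guest
# Fast fault: the context-switch overhead isn't worth it.
print(take_virtual_fault(0.5, True, True))   # → False: service in place
```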