Patents by Inventor Landy Wang

Landy Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230409490
    Abstract: Ensuring data security when tiering volatile and non-volatile byte-addressable memory. A portion of cache data stored in a first memory that is byte-addressable and volatile is identified for copying to a second memory that is byte-addressable and non-volatile. The portion of cache data is associated with cryptographic requirements for storing the portion of cache data on non-volatile storage. Cryptographic capabilities of the second memory are identified. When each of the cryptographic requirements is met by the cryptographic capabilities, the portion of cache data is copied to the second memory while relying on the second memory to encrypt the portion of cache data. When at least one cryptographic requirement is not met by the cryptographic capabilities, the portion of cache data is encrypted to generate an encrypted portion of cache data, and the encrypted portion of cache data is copied to the second memory.
    Type: Application
    Filed: December 2, 2021
    Publication date: December 21, 2023
    Inventors: Yevgeniy BAK, Mehmet IYIGUN, Landy WANG
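
A minimal sketch of the tiering decision described in publication 20230409490 above. The CryptoCapability class, tier_out function, and the XOR stand-in for software encryption are illustrative assumptions, not details from the filing; the point is the branch between relying on the non-volatile memory's own encryption and encrypting in software first.

```python
# Illustrative sketch only; names and the toy "encryption" are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class CryptoCapability:
    algorithm: str          # e.g. "AES-XTS"
    key_bits: int

def software_encrypt(data: bytes) -> bytes:
    # Placeholder for a real software encryption path (e.g. an OS crypto API).
    return bytes(b ^ 0x5A for b in data)

def tier_out(cache_data: bytes,
             requirements: set[CryptoCapability],
             nvm_capabilities: set[CryptoCapability],
             write_to_nvm) -> None:
    """Copy cache data from volatile to non-volatile byte-addressable memory."""
    if requirements <= nvm_capabilities:
        # Every cryptographic requirement is met: copy the data as-is and rely
        # on the non-volatile memory's built-in encryption.
        write_to_nvm(cache_data)
    else:
        # At least one requirement is unmet: encrypt in software first, then
        # copy the encrypted data to the non-volatile memory.
        write_to_nvm(software_encrypt(cache_data))

if __name__ == "__main__":
    reqs = {CryptoCapability("AES-XTS", 256)}
    caps = {CryptoCapability("AES-XTS", 128)}   # too weak -> software path
    tier_out(b"cached bytes", reqs, caps,
             write_to_nvm=lambda d: print(len(d), "bytes written to NVM"))
```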
  • Publication number: 20230244601
    Abstract: Techniques for computer memory management are disclosed herein. In one embodiment, a method includes, in response to receiving a request for allocation of memory, determining whether the request is for allocation from a first memory region or a second memory region of the physical memory. The first memory region has first memory subregions of a first size, and the second memory region has second memory subregions of a second size larger than the first size. The method further includes, in response to determining that the request is for allocation from the first or second memory region, allocating a portion of the memory subregions of the first or second memory region, respectively.
    Type: Application
    Filed: February 13, 2023
    Publication date: August 3, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Yevgeniy M. BAK, Kevin Michael BROAS, David Alan HEPKIN, Landy WANG, Mehmet IYIGUN, Brandon Alec ALLSOP, Arun U. KISHAN
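
A rough sketch of the two-region allocation described in publication 20230244601 above. The subregion sizes, the Region class, and the size-based routing rule are assumptions chosen for illustration; the filing only requires that the second region's subregions be larger than the first's.

```python
# Hypothetical two-region allocator; sizes and policy are illustrative only.
SMALL_SUBREGION = 4 * 1024          # first memory region: 4 KiB subregions (assumed)
LARGE_SUBREGION = 2 * 1024 * 1024   # second memory region: 2 MiB subregions (assumed)

class Region:
    def __init__(self, subregion_size: int, count: int):
        self.subregion_size = subregion_size
        self.free = list(range(count))      # indices of free subregions

    def allocate(self) -> int:
        if not self.free:
            raise MemoryError("region exhausted")
        return self.free.pop()

def allocate(request_bytes: int, small: Region, large: Region) -> tuple[str, int]:
    # Decide which region the request targets, then hand out one of that
    # region's subregions in response to the request.
    if request_bytes <= small.subregion_size:
        return ("small", small.allocate())
    return ("large", large.allocate())

small_region = Region(SMALL_SUBREGION, count=1024)
large_region = Region(LARGE_SUBREGION, count=16)
print(allocate(16 * 1024, small_region, large_region))   # routed to the large region
```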
  • Patent number: 11580019
    Abstract: Techniques for computer memory management are disclosed herein. In one embodiment, a method includes, in response to receiving a request for allocation of memory, determining whether the request is for allocation from a first memory region or a second memory region of the physical memory. The first memory region has first memory subregions of a first size, and the second memory region has second memory subregions of a second size larger than the first size. The method further includes, in response to determining that the request is for allocation from the first or second memory region, allocating a portion of the memory subregions of the first or second memory region, respectively.
    Type: Grant
    Filed: April 17, 2020
    Date of Patent: February 14, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yevgeniy M. Bak, Kevin Michael Broas, David Alan Hepkin, Landy Wang, Mehmet Iyigun, Brandon Alec Allsop, Arun U. Kishan
  • Publication number: 20210326253
    Abstract: Techniques for computer memory management are disclosed herein. In one embodiment, a method includes, in response to receiving a request for allocation of memory, determining whether the request is for allocation from a first memory region or a second memory region of the physical memory. The first memory region has first memory subregions of a first size, and the second memory region has second memory subregions of a second size larger than the first size. The method further includes, in response to determining that the request is for allocation from the first or second memory region, allocating a portion of the memory subregions of the first or second memory region, respectively.
    Type: Application
    Filed: April 17, 2020
    Publication date: October 21, 2021
    Inventors: Yevgeniy M. Bak, Kevin Michael Broas, David Alan Hepkin, Landy Wang, Mehmet Iyigun, Brandon Alec Allsop, Arun U. Kishan
  • Patent number: 10908958
    Abstract: Multiple partitions can be run on a computing device, each partition running multiple processes referred to as a workload. Each of the multiple partitions is isolated from the others, preventing the processes in each partition from interfering with the operation of the processes in the other partitions. Using the techniques discussed herein, some memory pages of a partition (referred to as a sharing partition) can be shared with one or more other partitions. The pages that are shared are file-backed (e.g., image or data files) or pagefile-backed memory pages. The sharing partition can be, for example, a separate partition that is dedicated to sharing memory pages.
    Type: Grant
    Filed: March 21, 2019
    Date of Patent: February 2, 2021
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Yevgeniy M. Bak, Mehmet Iyigun, Landy Wang
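
A small model in the spirit of patent 10908958 above (and the related entries later in this listing). The Partition and SharedPage classes are invented for illustration; they only show that file-backed or pagefile-backed pages owned by a sharing partition can be exposed to other partitions as views of one backing page while private pages stay isolated.

```python
# Toy model of cross-partition page sharing; classes and names are assumptions.
class SharedPage:
    def __init__(self, backing: str, data: bytes):
        assert backing in ("file", "pagefile")   # only these backings are shareable
        self.backing = backing
        self.data = data

class Partition:
    def __init__(self, name: str):
        self.name = name
        self.private_pages: dict[int, bytes] = {}        # never visible elsewhere
        self.shared_views: dict[int, SharedPage] = {}    # views into the sharing partition

def share(sharing_partition_pages: dict[int, SharedPage],
          consumer: Partition, page_numbers: list[int]) -> None:
    # Each consumer gets a view of the same backing page, not a private copy.
    for pn in page_numbers:
        consumer.shared_views[pn] = sharing_partition_pages[pn]

sharing = {0: SharedPage("file", b"image page"),
           1: SharedPage("pagefile", b"shared data page")}
p1, p2 = Partition("workload-1"), Partition("workload-2")
share(sharing, p1, [0, 1])
share(sharing, p2, [0])
print(p1.shared_views[0] is p2.shared_views[0])   # True: one backing page, two views
```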
  • Patent number: 10515019
    Abstract: Updating aging information for memory backing a virtual address-backed virtual machine (VM). A virtual memory address (VA) is allocated, within a page table entry (PTE), to a process backing the VM. Based on memory access(es) by the VM to a non-mapped guest-physical memory address (GPA), the GPA is identified as being associated with the VA; a host-physical memory address (HPA) is allocated for the accessed GPA; the HPA is associated with the VA within the PTE; the GPA is associated with the HPA within a second level address translation (SLAT) structure entry; and an accessed flag is set within the SLAT entry. Aging information is updated, including identifying the SLAT entry; querying a value of the accessed flag in the SLAT entry; clearing the accessed flag in the SLAT entry without invalidating the SLAT entry; and updating aging information for the VA and/or HPA based on the queried value.
    Type: Grant
    Filed: September 4, 2018
    Date of Patent: December 24, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mehmet Iyigun, Yevgeniy Bak, Landy Wang
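
A sketch of the aging pass described in patent 10515019 above, under simplified assumptions: the SlatEntry class and the age table are illustrative, and the hardware-set accessed bit is simulated. Real implementations operate on hardware SLAT formats rather than Python objects.

```python
# Simplified aging pass over simulated SLAT entries; structures are assumptions.
class SlatEntry:
    def __init__(self, gpa: int, hpa: int):
        self.gpa, self.hpa = gpa, hpa
        self.accessed = False           # set by hardware on guest access (simulated)

def guest_access(entry: SlatEntry) -> None:
    entry.accessed = True               # stand-in for the hardware accessed bit

def update_aging(slat: list[SlatEntry], age: dict[int, int]) -> None:
    for entry in slat:
        was_accessed = entry.accessed
        entry.accessed = False                          # clear without invalidating
        if was_accessed:
            age[entry.hpa] = 0                          # recently used: reset age
        else:
            age[entry.hpa] = age.get(entry.hpa, 0) + 1  # idle: grow older

slat = [SlatEntry(gpa=0x1000, hpa=0x9000), SlatEntry(gpa=0x2000, hpa=0xA000)]
ages: dict[int, int] = {}
guest_access(slat[0])
update_aging(slat, ages)
update_aging(slat, ages)
print(ages)   # accessed page (hpa 0x9000) ends at age 1, the idle page at age 2
```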
  • Publication number: 20190220317
    Abstract: Multiple partitions can be run on a computing device, each partition running multiple processes referred to as a workload. Each of the multiple partitions is isolated from the others, preventing the processes in each partition from interfering with the operation of the processes in the other partitions. Using the techniques discussed herein, some memory pages of a partition (referred to as a sharing partition) can be shared with one or more other partitions. The pages that are shared are file-backed (e.g., image or data files) or pagefile-backed memory pages. The sharing partition can be, for example, a separate partition that is dedicated to sharing memory pages.
    Type: Application
    Filed: March 21, 2019
    Publication date: July 18, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Yevgeniy M. Bak, Mehmet Iyigun, Landy Wang
  • Patent number: 10275169
    Abstract: Multiple partitions can be run on a computing device, each partition running multiple processes referred to as a workload. Each of the multiple partitions is isolated from the others, preventing the processes in each partition from interfering with the operation of the processes in the other partitions. Using the techniques discussed herein, some memory pages of a partition (referred to as a sharing partition) can be shared with one or more other partitions. The pages that are shared are file-backed (e.g., image or data files) or pagefile-backed memory pages. The sharing partition can be, for example, a separate partition that is dedicated to sharing memory pages.
    Type: Grant
    Filed: January 18, 2017
    Date of Patent: April 30, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yevgeniy M. Bak, Mehmet Iyigun, Landy Wang
  • Patent number: 10216437
    Abstract: Aspects of the subject matter described herein relate to storage systems and aliased memory. In aspects, a file system driver or other component may send a request to a memory controller to create an alias between two blocks of memory. One of the blocks of memory may be used for main memory while the other of the blocks of memory may be used for a storage system. In response, the memory controller may create an alias between the blocks of memory. Until the alias is severed, when the memory controller receives a request for data from the block in main memory, the memory controller may respond with data from the memory block used for the storage system. The memory controller may also implement other actions as described herein.
    Type: Grant
    Filed: May 30, 2017
    Date of Patent: February 26, 2019
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: William R. Tipton, Surendra Verma, Landy Wang, Malcolm James Smith
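
A toy model of the memory-controller aliasing behavior described in patent 10216437 above (also listed later as publication 20170262207 and patent 9678689). The MemoryController class and its method names are invented for illustration; the patent describes the behavior, not this API.

```python
# Hypothetical controller model; block size, API, and names are assumptions.
class MemoryController:
    def __init__(self, size_blocks: int):
        self.blocks = [bytes(16) for _ in range(size_blocks)]
        self.alias: dict[int, int] = {}     # main-memory block -> storage block

    def create_alias(self, main_block: int, storage_block: int) -> None:
        self.alias[main_block] = storage_block

    def sever_alias(self, main_block: int) -> None:
        self.alias.pop(main_block, None)

    def read(self, block: int) -> bytes:
        # While the alias exists, a read of the main-memory block is satisfied
        # from the block used by the storage system instead.
        return self.blocks[self.alias.get(block, block)]

mc = MemoryController(size_blocks=8)
mc.blocks[5] = b"storage system data!"            # block used by the storage system
mc.create_alias(main_block=2, storage_block=5)
print(mc.read(2))   # served from block 5 while the alias is in place
mc.sever_alias(2)
print(mc.read(2))   # back to the original contents of block 2
```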
  • Patent number: 10157268
    Abstract: Each program thread running on a computing device has an associated data stack and control stack. A stack displacement value is generated, which is the difference between the memory address of the base of the data stack and the memory address of the base of the control stack, and is stored in a register of a processor of the computing device that is restricted to operating system kernel use. For each thread on which return flow guard is enabled, prologue and epilogue code is added to each function of the thread (e.g., by a memory manager of the computing device). The data stack and the control stack each store a return address for the function, and when the function completes the epilogue code allows the function to return only if the return addresses on the data stack and the control stack match.
    Type: Grant
    Filed: September 27, 2016
    Date of Patent: December 18, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jordan Thomas Rabet, Kenneth D. Johnson, Matthew R. Miller, Adam M. Zabrocki, Shawn Daniel Hoffman, Landy Wang, Yevgeniy M. Bak
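
A schematic of the return flow guard check in patent 10157268 above (also listed later as publication 20180088988), modeled with two Python lists standing in for the data stack and the control stack. The real mechanism is compiler-inserted prologue/epilogue code keyed off a kernel-held stack displacement value; none of the names below come from the patent.

```python
# Conceptual model only; the stacks, functions, and the attack simulation are assumptions.
data_stack: list[int] = []
control_stack: list[int] = []

def prologue(return_address: int) -> None:
    # On function entry, the return address is recorded on both stacks.
    data_stack.append(return_address)
    control_stack.append(return_address)

def epilogue() -> int:
    # On return, the function may return only if the two copies still match;
    # a mismatch indicates the on-stack return address was tampered with.
    ret = data_stack.pop()
    expected = control_stack.pop()
    if ret != expected:
        raise RuntimeError("return address mismatch: possible stack corruption")
    return ret

prologue(0x401230)
data_stack[-1] = 0xDEADBEEF      # simulate an attacker overwriting the data stack
try:
    epilogue()
except RuntimeError as e:
    print(e)
```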
  • Patent number: 10102148
    Abstract: A memory is made up of multiple pages, and different pages can have different priority levels. A set of memory pages having at least similar priority levels is identified and compressed into an additional set of memory pages having at least similar priority levels. The additional set of memory pages is classified as being the same type of page as the set of memory pages that was compressed (e.g., as memory pages that can be repurposed). Thus, a particular set of memory pages can be compressed into a different set of memory pages of the same type and corresponding to at least similar priority levels. However, due to the compression, the quantity of memory pages into which the set of memory pages is compressed is reduced, thus increasing the amount of data that can be stored in the memory.
    Type: Grant
    Filed: June 13, 2013
    Date of Patent: October 16, 2018
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Yevgeniy M. Bak, Mehmet Iyigun, Landy Wang
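
An illustrative compression pass in the spirit of patent 10102148 above: same-priority pages are compressed together into fewer pages that keep the same type and priority. zlib stands in for whatever compressor a memory manager would actually use, and the page size and priority value are assumptions.

```python
# Sketch only; compressor choice, page size, and priority values are assumptions.
import zlib

PAGE_SIZE = 4096

def compress_pages(pages: list[bytes], priority: int) -> tuple[list[bytes], int]:
    """Compress same-priority pages into new pages carrying the same priority."""
    blob = zlib.compress(b"".join(pages))
    compressed_pages = [blob[i:i + PAGE_SIZE] for i in range(0, len(blob), PAGE_SIZE)]
    return compressed_pages, priority        # same priority, same page type

source = [bytes([i]) * PAGE_SIZE for i in range(8)]   # 8 highly compressible pages
result, prio = compress_pages(source, priority=3)
print(f"{len(source)} pages -> {len(result)} page(s) at priority {prio}")
```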
  • Patent number: 10037270
    Abstract: A set of memory pages from a working set of a program process, such as at least some of the memory pages that have been modified, are compressed into a compressed store prior to being written to a page file, after which the memory pages can be repurposed by a memory manager. The memory commit charge for the memory pages compressed into the compressed store is borrowed from the program process by a compressed storage manager, reducing the memory commit charge of the compressed storage manager. Subsequent requests from the memory manager for memory pages that have been compressed into a compressed store are satisfied by accessing the compressed store memory pages (including retrieving the compressed store memory pages from the page file if written to the page file), decompressing the requested memory pages, and returning the requested memory pages to the memory manager.
    Type: Grant
    Filed: April 14, 2015
    Date of Patent: July 31, 2018
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Yevgeniy M. Bak, Mehmet Iyigun, Landy Wang, Arun U. Kishan
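
A simplified picture of the commit-charge borrowing in patent 10037270 above. The CompressedStorageManager class and its counters are invented for illustration; the idea shown is that commit for compressed-in pages stays borrowed from the owning process rather than being charged again to the compressed storage manager, and pages are decompressed and returned on a later request.

```python
# Hypothetical compressed store; class, counters, and compressor are assumptions.
import zlib

class CompressedStorageManager:
    def __init__(self):
        self.store: dict[int, bytes] = {}
        self.own_commit_pages = 0        # commit charged to the store itself
        self.borrowed_commit_pages = 0   # commit still charged to the process

    def compress_in(self, page_number: int, page: bytes) -> None:
        self.store[page_number] = zlib.compress(page)
        # The commit already charged to the program process for this page is
        # borrowed, so the storage manager's own commit does not grow.
        self.borrowed_commit_pages += 1

    def retrieve(self, page_number: int) -> bytes:
        # On a later request from the memory manager, decompress and return.
        self.borrowed_commit_pages -= 1
        return zlib.decompress(self.store.pop(page_number))

csm = CompressedStorageManager()
csm.compress_in(7, b"modified page contents" * 100)
print(csm.own_commit_pages, csm.borrowed_commit_pages)   # 0 1
print(len(csm.retrieve(7)))                              # original page size back
```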
  • Publication number: 20180203626
    Abstract: Multiple partitions can be run on a computing device, each partition running multiple processes referred to as a workload. Each of the multiple partitions is isolated from the others, preventing the processes in each partition from interfering with the operation of the processes in the other partitions. Using the techniques discussed herein, some memory pages of a partition (referred to as a sharing partition) can be shared with one or more other partitions. The pages that are shared are file-backed (e.g., image or data files) or pagefile-backed memory pages. The sharing partition can be, for example, a separate partition that is dedicated to sharing memory pages.
    Type: Application
    Filed: January 18, 2017
    Publication date: July 19, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Yevgeniy M. Bak, Mehmet Iyigun, Landy Wang
  • Publication number: 20180088988
    Abstract: Each program thread running on a computing device has an associated data stack and control stack. A stack displacement value is generated, which is the difference between the memory address of the base of the data stack and the memory address of the base of the control stack, and is stored in a register of a processor of the computing device that is restricted to operating system kernel use. For each thread on which return flow guard is enabled, prologue and epilogue code is added to each function of the thread (e.g., by a memory manager of the computing device). The data stack and the control stack each store a return address for the function, and when the function completes the epilogue code allows the function to return only if the return addresses on the data stack and the control stack match.
    Type: Application
    Filed: September 27, 2016
    Publication date: March 29, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Jordan Thomas Rabet, Kenneth D. Johnson, Matthew R. Miller, Adam M. Zabrocki, Shawn Daniel Hoffman, Landy Wang, Yevgeniy M. Bak
  • Patent number: 9916260
    Abstract: A memory manager in a computer system ages memory for high performance. The efficiency of operation of the computer system can be improved by dynamically setting an aging schedule based on a predicted time for trimming pages from a working set. An aging schedule that generates aging information that better discriminates among pages in a working set based on activity level enables selection of pages to trim that are less likely to be accessed following trimming. As a result of being able to identify and trim less active pages, inefficiencies arising from restoring trimmed pages to the working set are avoided.
    Type: Grant
    Filed: October 30, 2015
    Date of Patent: March 13, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Landy Wang
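
A rough sketch of the scheduling idea in patent 9916260 above: derive the aging-pass interval from a predicted time until the working set must be trimmed, so that page ages spread across enough distinct values to discriminate activity levels by trim time. The formula, constants, and clamping below are assumptions for illustration, not the patent's schedule.

```python
# Illustrative only; AGE_LEVELS, the clamp bounds, and the formula are assumptions.
AGE_LEVELS = 8            # distinct age values we want pages spread across
MIN_INTERVAL_S = 0.5
MAX_INTERVAL_S = 60.0

def aging_interval(predicted_seconds_until_trim: float) -> float:
    """Space aging passes so roughly AGE_LEVELS passes run before the trim."""
    interval = predicted_seconds_until_trim / AGE_LEVELS
    return max(MIN_INTERVAL_S, min(MAX_INTERVAL_S, interval))

for prediction in (2.0, 40.0, 1200.0):
    print(f"trim predicted in {prediction:6.1f}s -> age every "
          f"{aging_interval(prediction):5.1f}s")
```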
  • Publication number: 20170262207
    Abstract: Aspects of the subject matter described herein relate to storage systems and aliased memory. In aspects, a file system driver or other component may send a request to a memory controller to create an alias between two blocks of memory. One of the blocks of memory may be used for main memory while the other of the blocks of memory may be used for a storage system. In response, the memory controller may create an alias between the blocks of memory. Until the alias is severed, when the memory controller receives a request for data from the block in main memory, the memory controller may respond with data from the memory block used for the storage system. The memory controller may also implement other actions as described herein.
    Type: Application
    Filed: May 30, 2017
    Publication date: September 14, 2017
    Inventors: William R. TIPTON, Surendra VERMA, Landy WANG, Malcolm James SMITH
  • Patent number: 9734082
    Abstract: In one embodiment, a memory management system temporarily maintains a memory page at an artificially high priority level. The memory management system may assign an initial priority level to a memory page in a page priority list. The memory management system may change the memory page to a target priority level in the page priority list after a protection period has expired.
    Type: Grant
    Filed: July 22, 2015
    Date of Patent: August 15, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Landy Wang, Yevgeniy Bak, Mehmet Iyigun
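
A small sketch of the protected-priority idea in patent 9734082 above: a page is held at an artificially high initial priority and only moved to its target priority once a protection period has expired. The ProtectedPage class, the priority values, and the timer-based check are assumptions for illustration.

```python
# Hypothetical model; class, priority scale, and timing approach are assumptions.
import time

class ProtectedPage:
    def __init__(self, target_priority: int, protect_seconds: float,
                 initial_priority: int = 7):
        self.priority = initial_priority          # artificially high for now
        self.target_priority = target_priority
        self.expires_at = time.monotonic() + protect_seconds

    def refresh(self) -> int:
        # Called whenever the memory manager walks its page priority lists.
        if time.monotonic() >= self.expires_at:
            self.priority = self.target_priority  # protection period over
        return self.priority

page = ProtectedPage(target_priority=2, protect_seconds=0.1)
print(page.refresh())      # 7: still protected
time.sleep(0.15)
print(page.refresh())      # 2: dropped to its target priority
```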
  • Patent number: 9678689
    Abstract: Aspects of the subject matter described herein relate to storage systems and aliased memory. In aspects, a file system driver or other component may send a request to a memory controller to create an alias between two blocks of memory. One of the blocks of memory may be used for main memory while the other of the blocks of memory may be used for a storage system. In response, the memory controller may create an alias between the blocks of memory. Until the alias is severed, when the memory controller receives a request for data from the block in main memory, the memory controller may respond with data from the memory block used for the storage system. The memory controller may also implement other actions as described herein.
    Type: Grant
    Filed: September 25, 2013
    Date of Patent: June 13, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: William R. Tipton, Surendra Verma, Landy Wang, Malcolm James Smith
  • Publication number: 20170123996
    Abstract: Mapping files in host virtual address backed virtual machines. A method includes receiving a request from a guest virtual machine for a file from a host. The method further includes, at the host, determining that the file can be directly mapped to a physical memory location for virtual machines requesting access to the file. The method further includes, at the host, providing guest physical memory backed by the file mapping in host virtual memory.
    Type: Application
    Filed: May 16, 2016
    Publication date: May 4, 2017
    Inventors: Arun U. Kishan, Mehmet Iyigun, Landy Wang, Kevin Michael Broas, Yevgeniy M. Bak
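
A conceptual model of the request path in publication 20170123996 above: when a guest asks the host for a file, the host can back guest-physical memory with a host virtual-memory mapping of that file instead of a private copy. Python's mmap stands in for the host-side file mapping, and the Host/Guest classes and the can_direct_map policy are invented for illustration.

```python
# Conceptual sketch; classes, policy check, and GPA value are assumptions.
import mmap
import tempfile

class Guest:
    def __init__(self):
        self.gpa_space: dict[int, object] = {}   # guest-physical address -> backing

class Host:
    def can_direct_map(self, path: str) -> bool:
        return True                              # placeholder policy decision

    def provide_file(self, guest: Guest, path: str, gpa: int) -> None:
        with open(path, "rb") as f:
            if self.can_direct_map(path):
                # Back the guest-physical range with a host virtual mapping of
                # the file rather than a private copy of its contents.
                guest.gpa_space[gpa] = mmap.mmap(f.fileno(), 0,
                                                 access=mmap.ACCESS_READ)
            else:
                guest.gpa_space[gpa] = f.read()

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"guest-visible file contents")

guest, host = Guest(), Host()
host.provide_file(guest, tmp.name, gpa=0x10000)
print(guest.gpa_space[0x10000][:13])    # b'guest-visible'
```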
  • Patent number: 9632924
    Abstract: A memory manager in a computing device allocates memory to programs running on the computing device, the amount of memory allocated to a program being a memory commit for the program. When a program is in a state where the program can be terminated, the content of the memory pages allocated to the program is compressed, and an amount of the memory commit for the program that can be released is determined. This amount of memory commit is the amount that was committed to the program less any amount still storing (in compressed format) information (e.g., data or instructions) for the program. The determined amount of memory commit is released, allowing that amount of memory to be consumed by other programs as appropriate.
    Type: Grant
    Filed: March 2, 2015
    Date of Patent: April 25, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yevgeniy M. Bak, Mehmet Iyigun, Landy Wang, Arun U. Kishan
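
A back-of-the-envelope illustration of the commit release in patent 9632924 above: once a terminable program's pages are compressed, the commit that can be released is what was committed minus what the compressed copy still occupies. The page size, page counts, and use of zlib are assumptions for the example.

```python
# Illustrative arithmetic only; sizes, counts, and compressor are assumptions.
import zlib

PAGE_SIZE = 4096

def releasable_commit(pages: list[bytes]) -> int:
    """Pages of commit that can be released after compressing a program's memory."""
    committed_pages = len(pages)
    compressed_size = len(zlib.compress(b"".join(pages)))
    still_needed = -(-compressed_size // PAGE_SIZE)     # ceiling division
    return committed_pages - still_needed

working_set = [bytes([i % 7]) * PAGE_SIZE for i in range(64)]   # 64 committed pages
print(releasable_commit(working_set), "of 64 pages of commit can be released")
```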