Patents by Inventor Monish Shantilal SHAH
Monish Shantilal SHAH has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11775442
Abstract: Systems and methods related to a memory system with a predictable read latency from media with a long write latency are described. An example memory system includes an array of tiles configured to store data corresponding to a cache line associated with a host. The memory system further includes control logic configured to, in response to a write command from the host, initiate writing of a first cache line to a first tile in a first row of the tiles, a second cache line to a second tile in a second row of the tiles, a third cache line to a third tile in a third row of the tiles, and a fourth cache line to a fourth tile in a fourth row of the tiles. The control logic is configured to, in response to a read command from the host, initiate reading of data stored in an entire row of tiles.
Type: Grant
Filed: January 25, 2022
Date of Patent: October 3, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Monish Shantilal Shah, John Grant Bennett
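The abstract describes a placement scheme in which consecutive cache-line writes land in different rows of a tile array while a read returns a whole row at once. The C sketch below is only an illustration of that idea; the array dimensions, the round-robin row and column rotation, and all function names are assumptions, not the patented design.

```c
#include <stdio.h>
#include <string.h>

#define ROWS 4
#define COLS 4
#define LINE 64            /* bytes per cache line (assumed) */

static unsigned char tiles[ROWS][COLS][LINE];
static int next_row = 0;   /* row for the next incoming cache line */
static int next_col = 0;   /* column, rotated when the rows wrap   */

/* Spread consecutive cache-line writes across rows so that a single
 * long-latency write occupies only one tile in any given row. */
static void write_cache_line(const unsigned char *line)
{
    memcpy(tiles[next_row][next_col], line, LINE);
    next_row = (next_row + 1) % ROWS;
    if (next_row == 0)
        next_col = (next_col + 1) % COLS;
}

/* A read returns the contents of an entire row of tiles at once. */
static void read_row(int row, unsigned char *out)   /* out: COLS*LINE bytes */
{
    for (int c = 0; c < COLS; c++)
        memcpy(out + c * LINE, tiles[row][c], LINE);
}

int main(void)
{
    unsigned char line[LINE], row_buf[COLS * LINE];

    memset(tiles, '.', sizeof tiles);
    for (int i = 0; i < 8; i++) {            /* eight consecutive writes */
        memset(line, 'A' + i, LINE);
        write_cache_line(line);
    }
    read_row(0, row_buf);                    /* one whole-row read       */
    printf("row 0 tiles: %c %c %c %c\n",
           row_buf[0], row_buf[LINE], row_buf[2 * LINE], row_buf[3 * LINE]);
    return 0;
}
```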
-
Patent number: 11726909
Abstract: A memory controller maintains a mapping of target ranges in system memory space interleaved two ways across locations in a three-rank environment. For each range of the target ranges, the mapping comprises a two-way interleaving of the range across two ranks of the three-rank environment and offsets from base locations in the two ranks. At least one of the ranges has offsets that differ relative to each other. Such offsets allow the three ranks to be fully interleaved, two ways. An instruction to read data at a rank-agnostic location in the diverse-offset range causes the memory controller to map the rank-agnostic location to two interleaved locations offset different amounts from their respective base locations in their ranks. The controller may then effect the transfer of the data at the two interleaved locations.
Type: Grant
Filed: July 13, 2022
Date of Patent: August 15, 2023
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Brett Kenneth Dodds, Monish Shantilal Shah
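To make the diverse-offset idea concrete, the following C sketch translates a rank-agnostic system address in one of three two-way-interleaved ranges into a rank number plus a rank-local address, where each range places its two halves at different offsets within its two backing ranks. The range table, interleave granule, and offset values are invented for illustration and are not taken from the patent.

```c
#include <stdio.h>

#define RANGES  3
#define GRANULE 64   /* interleave granule in bytes (assumed) */

struct range_map {
    unsigned long base;        /* start of the range in system space   */
    unsigned long size;        /* bytes covered by the range           */
    int rank[2];               /* the two ranks backing this range     */
    unsigned long offset[2];   /* per-rank offset from the rank's base */
};

/* Example: 3 GiB of system space over three 1 GiB ranks, fully used with
 * two-way interleaving; each range borrows half of two different ranks. */
static const struct range_map map[RANGES] = {
    { 0x00000000UL, 0x40000000UL, {0, 1}, {0x00000000UL, 0x00000000UL} },
    { 0x40000000UL, 0x40000000UL, {1, 2}, {0x20000000UL, 0x00000000UL} },
    { 0x80000000UL, 0x40000000UL, {2, 0}, {0x20000000UL, 0x20000000UL} },
};

/* Translate a system address into (rank, rank-local address). */
static int translate(unsigned long sys_addr, int *rank, unsigned long *local)
{
    for (int i = 0; i < RANGES; i++) {
        if (sys_addr < map[i].base || sys_addr >= map[i].base + map[i].size)
            continue;
        unsigned long off    = sys_addr - map[i].base;
        int           way    = (off / GRANULE) & 1;          /* which rank */
        unsigned long within = (off / (2 * GRANULE)) * GRANULE + off % GRANULE;
        *rank  = map[i].rank[way];
        *local = map[i].offset[way] + within;
        return 0;
    }
    return -1;  /* address not mapped */
}

int main(void)
{
    int rank; unsigned long local;
    unsigned long probes[] = { 0x00000040UL, 0x40000000UL, 0x800000C0UL };
    for (int i = 0; i < 3; i++) {
        translate(probes[i], &rank, &local);
        printf("sys 0x%09lx -> rank %d, local 0x%09lx\n",
               probes[i], rank, local);
    }
    return 0;
}
```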
-
Publication number: 20230251799
Abstract: The disclosed technologies provide functionality for non-volatile memory device-assisted live migration of virtual machine ("VM") data. A host computing device (the "host") requests that a source non-volatile memory device track changes to a namespace by a VM. In response thereto, the source device tracks changes made by the VM to the namespace and stores one or more data structures that identify the changed portions of the namespace. The host requests the data structures from the source device and requests the contents of the changed portions from the source device. The host then causes the data changed by the VM in the namespace to be written to a namespace on a target non-volatile memory device. The host can also retrieve the device internal state of a child physical function on the source device. The host migrates the retrieved device internal state to a child physical function on the target device.
Type: Application
Filed: February 8, 2022
Publication date: August 10, 2023
Inventors: Scott Chao-Chueh LEE, Lei KOU, Monish Shantilal SHAH, Liang YANG, Yimin DENG, Martijn DE KORT
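The host-side flow in this abstract (start change tracking, fetch the change data structures, copy only the changed portions to the target, then migrate child-PF state) can be sketched roughly as below. The device interactions are replaced with in-memory stubs; every function name, the per-block bitmap layout, and the block sizes are assumptions for illustration rather than a real NVMe interface.

```c
#include <stdio.h>
#include <string.h>

#define BLOCKS 16
#define BLKSZ  8

static char src_ns[BLOCKS][BLKSZ];          /* source namespace        */
static char dst_ns[BLOCKS][BLKSZ];          /* target namespace        */
static unsigned char dirty[BLOCKS];         /* per-block change bitmap */
static char src_pf_state[32] = "child-PF internal state";

/* --- stubs standing in for source-device commands --------------------- */
static void device_start_tracking(void) { memset(dirty, 0, sizeof dirty); }

static void vm_write(int blk, const char *data)   /* guest write; device marks it */
{
    strncpy(src_ns[blk], data, BLKSZ);
    dirty[blk] = 1;
}

static void device_get_changed(unsigned char *out) { memcpy(out, dirty, sizeof dirty); }
static void device_read_block(int blk, char *out)  { memcpy(out, src_ns[blk], BLKSZ); }
static void device_get_pf_state(char *out, size_t n) { strncpy(out, src_pf_state, n); }

/* --- host migration loop ---------------------------------------------- */
int main(void)
{
    unsigned char changed[BLOCKS];
    char buf[BLKSZ], pf_state[32];

    device_start_tracking();                 /* 1. ask device to track writes */
    vm_write(3, "hello");                    /*    VM keeps running...        */
    vm_write(9, "world");

    device_get_changed(changed);             /* 2. fetch change bitmap        */
    for (int b = 0; b < BLOCKS; b++) {       /* 3. copy only dirty blocks     */
        if (!changed[b]) continue;
        device_read_block(b, buf);
        memcpy(dst_ns[b], buf, BLKSZ);
        printf("migrated block %d: %.8s\n", b, buf);
    }

    device_get_pf_state(pf_state, sizeof pf_state);  /* 4. child-PF state */
    printf("migrated device state: %s\n", pf_state);
    return 0;
}
```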
-
Publication number: 20230244390
Abstract: The disclosed technologies provide functionality for collecting quality of service ("QoS") statistics for in-use child physical functions of multiple physical function ("PF") non-volatile memory devices ("MFNDs"). A host computing device creates a child PF on an MFND and configures the child PF on the MFND to provide a specified QoS level to an associated VM executing on the host computing device. The MFND then collects child PF QoS statistics for the child PF that describe the utilization of resources provided by the child PF to an assigned VM. The MFND provides the child PF QoS statistics from the MFND to the host computing device. The collected child PF QoS statistics can be utilized to inform decisions regarding reallocation of MFND-provided resources, provisioning of new MFND-provided resources, and for other purposes.
Type: Application
Filed: January 28, 2022
Publication date: August 3, 2023
Inventors: Scott Chao-Chueh LEE, Lei KOU, Monish Shantilal SHAH, Brenda Wai Yan BELL
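A minimal sketch of the accounting loop the abstract describes: the device accumulates per-child-PF QoS statistics and the host polls them to decide whether to re-provision. The structure fields, limits, and threshold check are assumed for illustration only and are not the device's actual interface.

```c
#include <stdio.h>

struct child_pf {
    int  vm_id;              /* VM the child PF is assigned to    */
    long iops_limit;         /* provisioned QoS level             */
    long mbps_limit;
    long iops_used;          /* statistics the device accumulates */
    long mb_transferred;
};

/* Device side: account each completed I/O against its child PF. */
static void account_io(struct child_pf *pf, long bytes)
{
    pf->iops_used++;
    pf->mb_transferred += bytes >> 20;
}

/* Host side: poll statistics and flag candidates for re-provisioning. */
static void report(const struct child_pf *pf)
{
    printf("VM %d: %ld IOPS used (limit %ld), %ld MB moved (limit %ld MB/s)\n",
           pf->vm_id, pf->iops_used, pf->iops_limit,
           pf->mb_transferred, pf->mbps_limit);
    if (pf->iops_used > pf->iops_limit)
        printf("  -> candidate for re-provisioning\n");
}

int main(void)
{
    struct child_pf pf = { .vm_id = 7, .iops_limit = 1000, .mbps_limit = 200 };
    for (int i = 0; i < 1500; i++)
        account_io(&pf, 1L << 20);           /* simulate 1500 x 1 MiB I/Os */
    report(&pf);
    return 0;
}
```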
-
Patent number: 11656981
Abstract: Methods and systems related to memory reduction in a system by oversubscribing physical memory shared among compute entities are provided. A portion of the memory includes a combination of a portion of a first physical memory of a first type and a logical pooled memory associated with the system. A logical pooled memory controller is configured to: (1) track both a status of whether a page of the logical pooled memory allocated to any of the plurality of compute entities is a known-pattern page and a relationship between logical memory addresses and physical memory addresses associated with any allocated logical pooled memory, and (2) allow the write operation to write data to any available space in the second physical memory of the first type only up to an extent of physical memory that corresponds to the portion of the logical pooled memory previously allocated to the compute entity.
Type: Grant
Filed: August 4, 2022
Date of Patent: May 23, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Monish Shantilal Shah, Lisa Ru-Feng Hsu, Daniel Sebastian Berger
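The controller behavior in this abstract (tracking which logical pooled pages are known-pattern pages and backing pages with physical memory only when real data is written, capped at the entity's allocation) can be illustrated roughly as follows. The page counts, the all-zero pattern, and the capping policy are assumptions for illustration, not the claimed implementation.

```c
#include <stdio.h>
#include <stdbool.h>

#define LOGICAL_PAGES   8   /* logical pooled pages for one entity (assumed) */
#define PHYSICAL_FRAMES 3   /* backing frames actually available             */

struct page_entry {
    bool known_pattern;     /* page still reads as the known pattern (zero) */
    int  frame;             /* physical frame, or -1 if none assigned       */
};

static struct page_entry table[LOGICAL_PAGES];
static int frames_used = 0;

static void init_pool(void)
{
    for (int i = 0; i < LOGICAL_PAGES; i++)
        table[i] = (struct page_entry){ .known_pattern = true, .frame = -1 };
}

/* Write to a logical page: a zero write keeps it a known-pattern page and
 * consumes no physical frame; a data write needs a frame, allowed only while
 * the entity stays within its backing allocation. */
static int write_page(int lpage, bool data_is_zero)
{
    if (data_is_zero) {
        table[lpage].known_pattern = true;
        return 0;
    }
    if (table[lpage].frame < 0) {
        if (frames_used == PHYSICAL_FRAMES)
            return -1;                       /* oversubscribed pool exhausted */
        table[lpage].frame = frames_used++;
    }
    table[lpage].known_pattern = false;
    return 0;
}

int main(void)
{
    init_pool();
    for (int p = 0; p < LOGICAL_PAGES; p++) {
        int rc = write_page(p, p % 2 == 0);  /* even pages stay zero */
        printf("page %d: %s, frame %d\n", p,
               rc ? "REJECTED" : (table[p].known_pattern ? "zero" : "data"),
               table[p].frame);
    }
    return 0;
}
```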
-
Publication number: 20230004488
Abstract: Methods and systems related to memory reduction in a system by oversubscribing physical memory shared among compute entities are provided. A portion of the memory includes a combination of a portion of a first physical memory of a first type and a logical pooled memory associated with the system. A logical pooled memory controller is configured to: (1) track both a status of whether a page of the logical pooled memory allocated to any of the plurality of compute entities is a known-pattern page and a relationship between logical memory addresses and physical memory addresses associated with any allocated logical pooled memory, and (2) allow the write operation to write data to any available space in the second physical memory of the first type only up to an extent of physical memory that corresponds to the portion of the logical pooled memory previously allocated to the compute entity.
Type: Application
Filed: August 4, 2022
Publication date: January 5, 2023
Inventors: Monish Shantilal SHAH, Lisa Ru-feng HSU, Daniel Sebastian BERGER
-
Publication number: 20220350737
Abstract: A memory controller maintains a mapping of target ranges in system memory space interleaved two ways across locations in a three-rank environment. For each range of the target ranges, the mapping comprises a two-way interleaving of the range across two ranks of the three-rank environment and offsets from base locations in the two ranks. At least one of the ranges has offsets that differ relative to each other. Such offsets allow the three ranks to be fully interleaved, two ways. An instruction to read data at a rank-agnostic location in the diverse-offset range causes the memory controller to map the rank-agnostic location to two interleaved locations offset different amounts from their respective base locations in their ranks. The controller may then effect the transfer of the data at the two interleaved locations.
Type: Application
Filed: July 13, 2022
Publication date: November 3, 2022
Inventors: Brett Kenneth DODDS, Monish Shantilal SHAH
-
Patent number: 11455239
Abstract: Methods and systems related to memory reduction in a system by oversubscribing physical memory shared among compute entities are provided. A portion of the memory includes a combination of a portion of a first physical memory of a first type and a logical pooled memory associated with the system. A logical pooled memory controller is configured to: (1) track both a status of whether a page of the logical pooled memory allocated to any of the plurality of compute entities is a known-pattern page and a relationship between logical memory addresses and physical memory addresses associated with any allocated logical pooled memory, and (2) allow the write operation to write data to any available space in the second physical memory of the first type only up to an extent of physical memory that corresponds to the portion of the logical pooled memory previously allocated to the compute entity.
Type: Grant
Filed: July 2, 2021
Date of Patent: September 27, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Monish Shantilal Shah, Lisa Ru-feng Hsu, Daniel Sebastian Berger
-
Patent number: 11429523
Abstract: A memory controller maintains a mapping of target ranges in system memory space interleaved two ways across locations in a three-rank environment. For each range of the target ranges, the mapping comprises a two-way interleaving of the range across two ranks of the three-rank environment and offsets from base locations in the two ranks. At least one of the ranges has offsets that differ relative to each other. Such offsets allow the three ranks to be fully interleaved, two ways. An instruction to read data at a rank-agnostic location in the diverse-offset range causes the memory controller to map the rank-agnostic location to two interleaved locations offset different amounts from their respective base locations in their ranks. The controller may then effect the transfer of the data at the two interleaved locations.
Type: Grant
Filed: May 15, 2020
Date of Patent: August 30, 2022
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Brett Kenneth Dodds, Monish Shantilal Shah
-
Publication number: 20220147461
Abstract: Systems and methods related to a memory system with a predictable read latency from media with a long write latency are described. An example memory system includes an array of tiles configured to store data corresponding to a cache line associated with a host. The memory system further includes control logic configured to, in response to a write command from the host, initiate writing of a first cache line to a first tile in a first row of the tiles, a second cache line to a second tile in a second row of the tiles, a third cache line to a third tile in a third row of the tiles, and a fourth cache line to a fourth tile in a fourth row of the tiles. The control logic is configured to, in response to a read command from the host, initiate reading of data stored in an entire row of tiles.
Type: Application
Filed: January 25, 2022
Publication date: May 12, 2022
Inventors: Monish Shantilal SHAH, John Grant BENNETT
-
Patent number: 11269779
Abstract: Systems and methods related to a memory system with a predictable read latency from media with a long write latency are described. An example memory system includes an array of tiles configured to store data corresponding to a cache line associated with a host. The memory system further includes control logic configured to, in response to a write command from the host, initiate writing of a first cache line to a first tile in a first row of the tiles, a second cache line to a second tile in a second row of the tiles, a third cache line to a third tile in a third row of the tiles, and a fourth cache line to a fourth tile in a fourth row of the tiles. The control logic is configured to, in response to a read command from the host, initiate reading of data stored in an entire row of tiles.
Type: Grant
Filed: May 27, 2020
Date of Patent: March 8, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Monish Shantilal Shah, John Grant Bennett
-
Publication number: 20210374066
Abstract: Systems and methods related to a memory system with a predictable read latency from media with a long write latency are described. An example memory system includes an array of tiles configured to store data corresponding to a cache line associated with a host. The memory system further includes control logic configured to, in response to a write command from the host, initiate writing of a first cache line to a first tile in a first row of the tiles, a second cache line to a second tile in a second row of the tiles, a third cache line to a third tile in a third row of the tiles, and a fourth cache line to a fourth tile in a fourth row of the tiles. The control logic is configured to, in response to a read command from the host, initiate reading of data stored in an entire row of tiles.
Type: Application
Filed: May 27, 2020
Publication date: December 2, 2021
Inventors: Monish Shantilal SHAH, John Grant BENNETT
-
Publication number: 20210357321
Abstract: A memory controller maintains a mapping of target ranges in system memory space interleaved two ways across locations in a three-rank environment. For each range of the target ranges, the mapping comprises a two-way interleaving of the range across two ranks of the three-rank environment and offsets from base locations in the two ranks. At least one of the ranges has offsets that differ relative to each other. Such offsets allow the three ranks to be fully interleaved, two ways. An instruction to read data at a rank-agnostic location in the diverse-offset range causes the memory controller to map the rank-agnostic location to two interleaved locations offset different amounts from their respective base locations in their ranks. The controller may then effect the transfer of the data at the two interleaved locations.
Type: Application
Filed: May 15, 2020
Publication date: November 18, 2021
Inventors: Brett Kenneth Dodds, Monish Shantilal Shah
-
Patent number: 11150825
Abstract: In non-limiting examples of the present disclosure, systems, methods and devices for partitioning dies based on spare blocks and workload expectations are provided. Data from non-volatile storage media may be received. The data may comprise information identifying each of a plurality of dies included in the non-volatile storage media and a number of blocks included in each of the plurality of dies. A number of spare blocks included in each of the plurality of dies may be determined. First and second sets of the plurality of dies may be identified, wherein the first set has a higher number of spare blocks than the second set. A first workload may be assigned to the first set of dies, the first workload being classified as write-intensive. A second workload may be assigned to the second set of dies, the second workload being classified as read-intensive.
Type: Grant
Filed: December 5, 2019
Date of Patent: October 19, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Abhilash Ravi Kashyap, Monish Shantilal Shah
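A rough sketch of the partitioning step described above: rank the dies by spare-block count and steer write-intensive work to the dies with more spares, read-intensive work to the rest. The die data and the half-way split point are invented for illustration and are not part of the patent.

```c
#include <stdio.h>
#include <stdlib.h>

struct die { int id; int spare_blocks; };

/* Sort dies so the ones with the most spare blocks come first. */
static int by_spares_desc(const void *a, const void *b)
{
    return ((const struct die *)b)->spare_blocks -
           ((const struct die *)a)->spare_blocks;
}

int main(void)
{
    struct die dies[] = { {0, 12}, {1, 40}, {2, 8}, {3, 35}, {4, 22}, {5, 17} };
    int n = sizeof dies / sizeof dies[0];

    qsort(dies, n, sizeof dies[0], by_spares_desc);

    /* First half (most spares) -> write-intensive workload,
     * second half -> read-intensive workload. */
    for (int i = 0; i < n; i++)
        printf("die %d (%d spare blocks) -> %s workload\n",
               dies[i].id, dies[i].spare_blocks,
               i < n / 2 ? "write-intensive" : "read-intensive");
    return 0;
}
```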
-
Publication number: 20210173558
Abstract: In non-limiting examples of the present disclosure, systems, methods and devices for partitioning dies based on spare blocks and workload expectations are provided. Data from non-volatile storage media may be received. The data may comprise information identifying each of a plurality of dies included in the non-volatile storage media and a number of blocks included in each of the plurality of dies. A number of spare blocks included in each of the plurality of dies may be determined. First and second sets of the plurality of dies may be identified, wherein the first set has a higher number of spare blocks than the second set. A first workload may be assigned to the first set of dies, the first workload being classified as write-intensive. A second workload may be assigned to the second set of dies, the second workload being classified as read-intensive.
Type: Application
Filed: December 5, 2019
Publication date: June 10, 2021
Inventors: Abhilash Ravi Kashyap, Monish Shantilal Shah
-
Publication number: 20210149594
Abstract: Solid-state devices (SSDs) reduce latency by employing instruction time slicing to non-volatile memory (NVM) sets mapped to independently programmable NVM planes. Memory cells in an NVM die are divided into planes that each have enough storage capacity for a storage space (NVM set) of an application executing in an electronic device. To allow separate processes to access NVM sets in the same NVM die with reduced tail latency, an SSD employs an SSD control circuit determining instruction-type time slices in which specific types of instructions are generated, and NVM dies capable of concurrently accessing independent memory locations of respective planes. The SSD control circuit determines a write instruction-type time slice and generates a write instruction. An NVM die, in response to the write instruction, writes to a first page in a first plane indicated in the write instruction, and concurrently writes to a second page in a second plane.
Type: Application
Filed: November 19, 2019
Publication date: May 20, 2021
Inventor: Monish Shantilal SHAH
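The time-slicing idea can be sketched as follows: the controller alternates read and write instruction-type slices and, within a write slice, issues at most one command per plane so that both planes of a die are programmed concurrently. The slice schedule, queue contents, and two-plane limit are assumptions for illustration, not the claimed control circuit.

```c
#include <stdio.h>

enum op { READ_OP, WRITE_OP };

struct cmd { enum op type; int plane; int page; };

#define NCMDS 6
/* Pending commands from two NVM sets mapped to planes 0 and 1 (assumed). */
static struct cmd queue[NCMDS] = {
    { WRITE_OP, 0, 10 }, { WRITE_OP, 1, 11 }, { READ_OP, 0, 3 },
    { READ_OP,  1,  4 }, { WRITE_OP, 0, 12 }, { WRITE_OP, 1, 13 },
};

int main(void)
{
    int done[NCMDS] = {0};

    /* Alternate instruction-type slices: even slices issue writes,
     * odd slices issue reads. */
    for (int slice = 0; slice < 4; slice++) {
        enum op allow = (slice % 2 == 0) ? WRITE_OP : READ_OP;
        int used[2] = {0, 0};      /* one concurrent operation per plane */
        printf("slice %d (%s):", slice, allow == WRITE_OP ? "write" : "read");
        for (int i = 0; i < NCMDS; i++) {
            if (done[i] || queue[i].type != allow || used[queue[i].plane])
                continue;
            /* Both planes of the die are accessed concurrently this slice. */
            printf(" plane%d/page%d", queue[i].plane, queue[i].page);
            done[i] = used[queue[i].plane] = 1;
        }
        printf("\n");
    }
    return 0;
}
```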