Patents by Inventor Ishwar AGARWAL
Ishwar AGARWAL has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230143375
Abstract: Techniques of memory tiering in computing devices are disclosed herein. One example technique includes retrieving, from a first tier in a first memory, data from a data portion and metadata from a metadata portion of the first tier upon receiving a request to read data corresponding to a system memory section. The method can then include analyzing the data location information to determine whether the first tier currently contains data corresponding to the system memory section in the received request. In response to determining that the first tier currently contains data corresponding to the system memory section in the received request, transmitting the retrieved data from the data portion of the first memory to the processor in response to the received request. Otherwise, the method can include identifying a memory location in the first or second memory that contains data corresponding to the system memory section and retrieving the data from the identified memory location.
Type: Application
Filed: January 13, 2023
Publication date: May 11, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventors: Ishwar AGARWAL, George Zacharias CHRYSOS, Oscar ROSELL MARTINEZ
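A rough illustration of the read path described above: data and metadata are fetched from the first tier together, and the metadata decides between a fast-tier hit and a fallback to the second memory. The names and direct-mapped layout below (tier_meta_t, serve_read) are invented for illustration and are not taken from the patent.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define LINE_SIZE 64

/* Hypothetical per-entry metadata kept alongside the fast tier:
 * records which system-memory section currently occupies the entry. */
typedef struct {
    uint64_t owner_section;   /* system memory section cached here */
    bool     valid;
} tier_meta_t;

typedef struct {
    uint8_t     data[LINE_SIZE];
    tier_meta_t meta;
} tier_entry_t;

/* Serve a read for `section`: return fast-tier data on a hit,
 * otherwise fall back to locating the data in the second memory. */
static void serve_read(tier_entry_t *fast_tier, size_t fast_entries,
                       const uint8_t *slow_tier,   /* backing (second) memory */
                       uint64_t section, uint8_t out[LINE_SIZE])
{
    tier_entry_t *e = &fast_tier[section % fast_entries];

    /* Data and metadata are fetched together, then the metadata is
     * checked to see whether the fast tier holds the requested section. */
    if (e->meta.valid && e->meta.owner_section == section) {
        memcpy(out, e->data, LINE_SIZE);            /* fast-tier hit */
    } else {
        /* Miss: identify the location in the second memory and read it. */
        memcpy(out, slow_tier + section * LINE_SIZE, LINE_SIZE);
    }
}

int main(void)
{
    static tier_entry_t fast[4];
    static uint8_t slow[16 * LINE_SIZE];
    uint8_t buf[LINE_SIZE];

    slow[5 * LINE_SIZE] = 0xAB;          /* pretend section 5 lives in the second memory */
    serve_read(fast, 4, slow, 5, buf);
    printf("first byte of section 5: 0x%02X\n", buf[0]);
    return 0;
}
```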
-
Publication number: 20230074943
Abstract: A computing device includes a system-on-a-chip. The computing device comprises a network interface controller (NIC) that hosts a plurality of virtual functions and physical functions. Two or more compute nodes are coupled to the NIC. Each compute node is configured to operate a plurality of Virtual Machines (VMs). Each VM is configured to operate in conjunction with a virtual function via a virtual function driver. A dedicated VM operates in conjunction with a virtual NIC using a physical function hosted by the NIC via a physical function driver hosted by the compute node. The computing device further comprises a fabric manager configured to own a physical function of the NIC, to bind virtual functions hosted by the NIC to individual compute nodes, and to pool I/O devices across the two or more compute nodes.
Type: Application
Filed: October 24, 2022
Publication date: March 9, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventors: Siamak TAVALLAEI, Ishwar AGARWAL
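A rough illustration of the fabric manager's binding role described above: it owns the NIC's physical function and records which compute node each virtual function is bound to, pooling the NIC across nodes. The structures and functions below (fabric_manager_t, fabric_bind_vf) are invented for illustration and do not reflect any real SR-IOV or fabric-manager API.

```c
#include <stdint.h>
#include <stdio.h>

#define MAX_VFS 8

/* Hypothetical record of which compute node each NIC virtual function
 * is bound to; node 0xFF means "unbound". */
typedef struct {
    uint8_t node_of_vf[MAX_VFS];
    uint8_t pf_owner_node;   /* node running the dedicated VM that owns the PF */
} fabric_manager_t;

static void fabric_init(fabric_manager_t *fm, uint8_t pf_owner)
{
    fm->pf_owner_node = pf_owner;
    for (int i = 0; i < MAX_VFS; i++)
        fm->node_of_vf[i] = 0xFF;
}

/* Bind one virtual function to a compute node so a VM on that node
 * can attach to it through its virtual-function driver. */
static int fabric_bind_vf(fabric_manager_t *fm, int vf, uint8_t node)
{
    if (vf < 0 || vf >= MAX_VFS || fm->node_of_vf[vf] != 0xFF)
        return -1;                  /* invalid VF or already bound */
    fm->node_of_vf[vf] = node;
    return 0;
}

int main(void)
{
    fabric_manager_t fm;
    fabric_init(&fm, /*pf_owner=*/0);
    fabric_bind_vf(&fm, 0, 0);      /* VF0 -> compute node 0 */
    fabric_bind_vf(&fm, 1, 1);      /* VF1 -> compute node 1: I/O pooled across nodes */
    printf("VF1 bound to node %u\n", fm.node_of_vf[1]);
    return 0;
}
```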
-
Patent number: 11599415
Abstract: Techniques of memory tiering in computing devices are disclosed herein. One example technique includes retrieving, from a first tier in a first memory, data from a data portion and metadata from a metadata portion of the first tier upon receiving a request to read data corresponding to a system memory section. The method can then include analyzing the data location information to determine whether the first tier currently contains data corresponding to the system memory section in the received request. In response to determining that the first tier currently contains data corresponding to the system memory section in the received request, transmitting the retrieved data from the data portion of the first memory to the processor in response to the received request. Otherwise, the method can include identifying a memory location in the first or second memory that contains data corresponding to the system memory section and retrieving the data from the identified memory location.
Type: Grant
Filed: July 9, 2021
Date of Patent: March 7, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ishwar Agarwal, George Zacharias Chrysos, Oscar Rosell Martinez
-
Publication number: 20230035420
Abstract: Systems and devices can include a controller and a command queue to buffer incoming write requests into the device. The controller can receive, from a client across a link, a non-posted write request (e.g., a deferred memory write (DMWr) request) in a transaction layer packet (TLP) to the command queue; determine that the command queue can accept the DMWr request; identify, from the TLP, a successful completion (SC) message that indicates that the DMWr request was accepted into the command queue; and transmit, to the client across the link, the SC message that indicates that the DMWr request was accepted into the command queue. The controller can receive a second DMWr request in a second TLP; determine that the command queue is full; and transmit a memory request retry status (MRS) message to be transmitted to the client in response to the command queue being full.
Type: Application
Filed: September 28, 2022
Publication date: February 2, 2023
Applicant: Intel Corporation
Inventors: Rajesh M. Sankaran, David J. Harriman, Sean O. Stalley, Rupin H. Vakharwala, Ishwar Agarwal, Pratik M. Marolia, Stephen R. Van Doren
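A rough illustration of the accept/retry behavior described above: an incoming deferred memory write is enqueued and acknowledged with a Successful Completion when the command queue has room, and answered with a Memory Request Retry Status when it is full. The queue depth and names below are invented for illustration; actual TLP parsing is omitted.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Completion statuses modeled on the abstract: SC when the deferred
 * memory write (DMWr) is accepted, MRS when the command queue is full.
 * The enum and queue layout are illustrative, not a real TLP encoding. */
typedef enum { CPL_SC, CPL_MRS } cpl_status_t;

#define QUEUE_DEPTH 4

typedef struct {
    uint64_t entries[QUEUE_DEPTH];
    int      count;
} cmd_queue_t;

/* Handle one incoming DMWr: enqueue it and return SC if there is room,
 * otherwise return MRS so the client retries later. */
static cpl_status_t handle_dmwr(cmd_queue_t *q, uint64_t payload)
{
    if (q->count >= QUEUE_DEPTH)
        return CPL_MRS;                  /* queue full: ask the client to retry */
    q->entries[q->count++] = payload;    /* accepted into the command queue */
    return CPL_SC;
}

int main(void)
{
    cmd_queue_t q = { .count = 0 };
    for (int i = 0; i < 6; i++) {
        cpl_status_t s = handle_dmwr(&q, (uint64_t)i);
        printf("DMWr %d -> %s\n", i, s == CPL_SC ? "SC" : "MRS");
    }
    return 0;
}
```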
-
Publication number: 20230020131
Abstract: Techniques of memory tiering in computing devices are disclosed herein. One example technique includes retrieving, from a first tier in a first memory, data from a data portion and metadata from a metadata portion of the first tier upon receiving a request to read data corresponding to a system memory section. The method can then include analyzing the data location information to determine whether the first tier currently contains data corresponding to the system memory section in the received request. In response to determining that the first tier currently contains data corresponding to the system memory section in the received request, transmitting the retrieved data from the data portion of the first memory to the processor in response to the received request. Otherwise, the method can include identifying a memory location in the first or second memory that contains data corresponding to the system memory section and retrieving the data from the identified memory location.
Type: Application
Filed: July 9, 2021
Publication date: January 19, 2023
Inventors: Ishwar Agarwal, George Zacharias Chrysos, Oscar Rosell Martinez
-
Publication number: 20220414001
Abstract: Techniques of memory inclusivity management are disclosed herein. One example technique includes receiving a request from a core of the CPU to write a block of data corresponding to a first cacheline to a swap buffer at a memory. In response to the request, the method can include retrieving metadata corresponding to the first cacheline that includes a bit encoding a status value indicating whether the memory block at the memory currently contains data of the first cacheline or data corresponding to a second cacheline. The first and second cachelines alternately share the swap buffer at the memory. When the decoded status value indicates that the memory block at the first memory currently contains the data corresponding to the first cacheline, an instruction is transmitted to the memory controller to directly write the block of data to the memory block at the first memory.
Type: Application
Filed: June 25, 2021
Publication date: December 29, 2022
Inventors: Ishwar Agarwal, George Zacharias Chrysos, Oscar Rosell Martinez
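A rough illustration of the swap-buffer check described above: a metadata bit records which of the two cachelines currently occupies the shared memory block, so a write-back can go straight to memory only when it already owns the block. The types and names below (swap_buffer_t, write_back) are invented for illustration.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Two cachelines alternately share one memory block (a "swap buffer").
 * A single metadata field records which of the two currently occupies
 * the block. Field and function names are illustrative only. */
typedef struct {
    uint8_t resident_line;   /* 0 = first cacheline, 1 = second cacheline */
    uint8_t block[64];
} swap_buffer_t;

/* Returns true if the block was written directly; false means the
 * controller must first swap the resident line out before writing. */
static bool write_back(swap_buffer_t *sb, uint8_t line_id, const uint8_t *data)
{
    if (sb->resident_line == line_id) {
        for (int i = 0; i < 64; i++)
            sb->block[i] = data[i];   /* direct write: block already holds this line */
        return true;
    }
    return false;                     /* resident line differs: swap needed first */
}

int main(void)
{
    swap_buffer_t sb = { .resident_line = 0 };
    uint8_t data[64] = { 0x42 };
    printf("line 0 direct write: %d\n", write_back(&sb, 0, data));   /* prints 1 */
    printf("line 1 direct write: %d\n", write_back(&sb, 1, data));   /* prints 0 */
    return 0;
}
```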
-
Publication number: 20220405004
Abstract: The present disclosure relates to systems, methods, and computer-readable media for tracking memory usage data on a memory controller system and providing a mechanism whereby one or multiple accessing agents (e.g., computing nodes, applications, virtual machines) can access memory usage data for a memory resource managed by a memory controller. Indeed, the systems described herein facilitate generation of and access to heatmaps having memory usage data thereon. The systems described herein describe features and functionality related to generating and maintaining the heatmaps as well as providing access to the heatmaps to a variety of accessing agents. This memory tracking and accessing is performed using low processing overhead while providing useful information to accessing agents in connection with memory resources managed by a memory controller.
Type: Application
Filed: August 24, 2022
Publication date: December 22, 2022
Inventors: Lisa Ru-Feng HSU, Aninda MANOCHA, Ishwar AGARWAL, Daniel Sebastian BERGER, Stanko NOVAKOVIC, Janaina Barreiro GAMBARO BUENO, Vishal SONI
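A rough illustration of heatmap tracking as described above: the memory controller bumps a per-segment counter on each serviced request, and accessing agents read the counters back. The 2 MiB segment size and names below are invented for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* A heatmap modeled as a per-segment access counter array maintained on
 * the memory-controller side and readable by accessing agents (nodes,
 * VMs, applications). Granularity and names are assumptions. */
#define SEGMENT_SHIFT 21          /* 2 MiB segments */
#define NUM_SEGMENTS  1024

typedef struct {
    uint32_t access_count[NUM_SEGMENTS];
} heatmap_t;

/* Cheap update on every serviced request: one shift and one increment,
 * keeping the tracking overhead low. */
static void heatmap_record(heatmap_t *hm, uint64_t phys_addr)
{
    hm->access_count[(phys_addr >> SEGMENT_SHIFT) % NUM_SEGMENTS]++;
}

/* An accessing agent reads usage data for one segment. */
static uint32_t heatmap_read(const heatmap_t *hm, uint32_t segment)
{
    return hm->access_count[segment % NUM_SEGMENTS];
}

int main(void)
{
    static heatmap_t hm;
    heatmap_record(&hm, 0x00200000);   /* touches segment 1 */
    heatmap_record(&hm, 0x00200040);   /* same segment */
    printf("segment 1 accesses: %u\n", (unsigned)heatmap_read(&hm, 1));
    return 0;
}
```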
-
Publication number: 20220382672
Abstract: Disclosed herein is a thin-provisioned multi-node computer system with a disaggregated memory pool and a pooled memory controller. The disaggregated memory pool is configured to make a shared memory capacity available to each of a plurality of compute nodes, such memory capacity being thinly provisioned relative to the plurality of compute nodes. The pooled memory controller is configured to assign a plurality of memory segments of the disaggregated memory pool to the plurality of compute nodes; identify a subset of the plurality of segments as cold segments, such identification being based on determining that a usage characteristic for each such cold segment is below a threshold; and page one or more of the cold segments out to an expanded bulk memory device, thereby freeing one or more assigned memory segments of the disaggregated memory pool.
Type: Application
Filed: August 10, 2022
Publication date: December 1, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Siamak TAVALLAEI, Ishwar AGARWAL
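A rough illustration of cold-segment handling as described above: segments whose usage characteristic falls below a threshold are paged out to bulk memory and their pool capacity is freed. The threshold value, struct layout, and names below are invented for illustration.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Pooled-memory-controller sketch: segments of the disaggregated pool
 * are assigned to compute nodes, segments whose usage falls below a
 * threshold are treated as cold, and cold segments are paged out to an
 * expanded bulk memory device to free pool capacity. */
typedef struct {
    int      owner_node;   /* -1 = unassigned / freed back to the pool */
    uint32_t usage;        /* e.g., recent access count */
    bool     paged_out;
} pool_segment_t;

static void page_out_cold_segments(pool_segment_t seg[], int n, uint32_t threshold)
{
    for (int i = 0; i < n; i++) {
        if (seg[i].owner_node >= 0 && !seg[i].paged_out && seg[i].usage < threshold) {
            /* Cold: contents would be copied to bulk memory here, then
             * the pool segment is freed for reassignment. */
            seg[i].paged_out  = true;
            seg[i].owner_node = -1;
        }
    }
}

int main(void)
{
    pool_segment_t seg[2] = {
        { .owner_node = 0, .usage = 500, .paged_out = false },  /* hot, stays resident */
        { .owner_node = 1, .usage = 3,   .paged_out = false },  /* cold */
    };
    page_out_cold_segments(seg, 2, 10);
    printf("segment 1 paged out: %d\n", seg[1].paged_out);       /* prints 1 */
    return 0;
}
```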
-
Patent number: 11513979
Abstract: Systems and devices can include a controller and a command queue to buffer incoming write requests into the device. The controller can receive, from a client across a link, a non-posted write request (e.g., a deferred memory write (DMWr) request) in a transaction layer packet (TLP) to the command queue; determine that the command queue can accept the DMWr request; identify, from the TLP, a successful completion (SC) message that indicates that the DMWr request was accepted into the command queue; and transmit, to the client across the link, the SC message that indicates that the DMWr request was accepted into the command queue. The controller can receive a second DMWr request in a second TLP; determine that the command queue is full; and transmit a memory request retry status (MRS) message to be transmitted to the client in response to the command queue being full.
Type: Grant
Filed: February 26, 2021
Date of Patent: November 29, 2022
Assignee: Intel Corporation
Inventors: Rajesh M. Sankaran, David J. Harriman, Sean O. Stalley, Rupin H. Vakharwala, Ishwar Agarwal, Pratik M. Marolia, Stephen R. Van Doren
-
Publication number: 20220365887
Abstract: Aspects of the embodiments are directed to systems and methods for providing and using hints in data packets to perform memory transaction optimization processes prior to receiving one or more data packets that rely on memory transactions. The systems and methods can include receiving, from a device connected to the root complex across a PCIe-compliant link, a data packet; identifying, from the received data packet, a memory transaction hint bit; determining a memory transaction from the memory transaction hint bit; and performing an optimization process based, at least in part, on the determined memory transaction.
Type: Application
Filed: May 27, 2022
Publication date: November 17, 2022
Applicant: Intel Corporation
Inventors: Ishwar Agarwal, Rupin H. Vakharwala, Rajesh M. Sankaran, Stephen R. Van Doren
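A rough illustration of acting on a packet hint as described above: the receiver decodes a hint field and runs a preparatory optimization (for example, a prefetch) before the dependent memory transaction arrives. The hint encoding and names below are invented for illustration and are not the PCIe TLP format.

```c
#include <stdint.h>
#include <stdio.h>

/* Invented hint values: what kind of memory transaction is expected to
 * follow the current packet. */
#define HINT_NONE       0u
#define HINT_READ_SOON  1u
#define HINT_WRITE_SOON 2u

typedef struct {
    uint64_t addr;
    uint8_t  hint;   /* hypothetical memory-transaction hint field */
} packet_t;

static void prefetch_for_read(uint64_t addr)
{
    printf("prefetch 0x%llx for read\n", (unsigned long long)addr);
}

static void prepare_for_write(uint64_t addr)
{
    printf("allocate write buffer for 0x%llx\n", (unsigned long long)addr);
}

/* Decode the hint and run the matching optimization before the memory
 * transaction itself shows up. */
static void handle_packet(const packet_t *p)
{
    switch (p->hint) {
    case HINT_READ_SOON:  prefetch_for_read(p->addr);  break;
    case HINT_WRITE_SOON: prepare_for_write(p->addr);  break;
    default:              break;   /* no hint: nothing to pre-warm */
    }
}

int main(void)
{
    packet_t p = { .addr = 0x1000, .hint = HINT_READ_SOON };
    handle_packet(&p);
    return 0;
}
```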
-
Patent number: 11481116
Abstract: A computing device comprises two or more compute nodes that each include two or more processor cores. Each compute node comprises an independently coherent domain that is not coherent with other compute nodes. A central IO die is communicatively coupled to each of the two or more compute nodes. A plurality of natively-attached volatile memory units are attached to the central IO die via one or more memory controllers. The central IO die includes one or more home agents for each compute node. The home agents are configured to map memory access requests received from the compute nodes to one or more addresses within the natively attached volatile memory units.
Type: Grant
Filed: September 9, 2020
Date of Patent: October 25, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Siamak Tavallaei, Ishwar Agarwal
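A rough illustration of the home agent's mapping role as described above: a memory access request from a compute node is translated to an address within the natively attached memory units. The 4 KiB interleaving scheme and names below are invented for illustration.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_MEM_UNITS 4

/* Hypothetical per-node home agent on the central IO die, holding the
 * base address of each natively attached memory unit. */
typedef struct {
    int      node_id;
    uint64_t base[NUM_MEM_UNITS];
} home_agent_t;

typedef struct {
    int      unit;         /* which natively attached memory unit */
    uint64_t local_addr;   /* address within that unit */
} mapped_addr_t;

/* Interleave node-physical addresses across memory units at 4 KiB
 * granularity (an assumed policy, not the patented one). */
static mapped_addr_t map_request(const home_agent_t *ha, uint64_t node_addr)
{
    mapped_addr_t m;
    m.unit       = (int)((node_addr >> 12) % NUM_MEM_UNITS);
    m.local_addr = ha->base[m.unit] + (node_addr % 4096)
                 + ((node_addr >> 12) / NUM_MEM_UNITS) * 4096;
    return m;
}

int main(void)
{
    home_agent_t ha = { .node_id = 0,
                        .base = { 0x0, 0x10000000, 0x20000000, 0x30000000 } };
    mapped_addr_t m = map_request(&ha, 0x3004);
    printf("unit %d, local 0x%llx\n", m.unit, (unsigned long long)m.local_addr);
    return 0;
}
```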
-
Patent number: 11442654
Abstract: The present disclosure relates to systems, methods, and computer-readable media for tracking memory usage data on a memory controller system and providing a mechanism whereby one or multiple accessing agents (e.g., computing nodes, applications, virtual machines) can access memory usage data for a memory resource managed by a memory controller. Indeed, the systems described herein facilitate generation of and access to heatmaps having memory usage data thereon. The systems described herein describe features and functionality related to generating and maintaining the heatmaps as well as providing access to the heatmaps to a variety of accessing agents. This memory tracking and accessing is performed using low processing overhead while providing useful information to accessing agents in connection with memory resources managed by a memory controller.
Type: Grant
Filed: October 15, 2020
Date of Patent: September 13, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Lisa Ru-Feng Hsu, Aninda Manocha, Ishwar Agarwal, Daniel Sebastian Berger, Stanko Novakovic, Janaina Barreiro Gambaro Bueno, Vishal Soni
-
Patent number: 11429518
Abstract: Disclosed herein is a thin-provisioned multi-node computer system with a disaggregated memory pool and a pooled memory controller. The disaggregated memory pool is configured to make a shared memory capacity available to each of a plurality of compute nodes, such memory capacity being thinly provisioned relative to the plurality of compute nodes. The pooled memory controller is configured to assign a plurality of memory segments of the disaggregated memory pool to the plurality of compute nodes; identify a subset of the plurality of segments as cold segments, such identification being based on determining that a usage characteristic for each such cold segment is below a threshold; and page one or more of the cold segments out to an expanded bulk memory device, thereby freeing one or more assigned memory segments of the disaggregated memory pool.
Type: Grant
Filed: December 8, 2020
Date of Patent: August 30, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Siamak Tavallaei, Ishwar Agarwal
-
Publication number: 20220197847
Abstract: Embodiments may be generally directed to apparatuses, systems, methods, and techniques to detect a message to communicate via an interconnect coupled with a device capable of communication via a plurality of interconnect protocols, the plurality of interconnect protocols comprising a non-coherent interconnect protocol, a coherent interconnect protocol, and a memory interconnect protocol. Embodiments also include determining an interconnect protocol of the plurality of interconnect protocols to communicate the message via the interconnect based on the message, and providing the message to a multi-protocol multiplexer coupled with the interconnect, the multi-protocol multiplexer to communicate the message utilizing the interconnect protocol via the interconnect with the device.
Type: Application
Filed: February 17, 2022
Publication date: June 23, 2022
Inventors: Stephen R. Van Doren, Rajesh M. Sankaran, David A. Koufaty, Ramacharan Sundararaman, Ishwar Agarwal
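A rough illustration of the protocol-selection step described above: the message is classified, one of the non-coherent, coherent, or memory protocols is chosen, and the message is handed to the multiplexer for the shared link. The message classes and selection rule below are invented for illustration.

```c
#include <stdio.h>

/* Invented message classes and the three protocol families named in the
 * abstract (non-coherent I/O, coherent cache, memory). */
typedef enum { MSG_CONFIG, MSG_DMA, MSG_CACHE_SNOOP, MSG_MEM_READ, MSG_MEM_WRITE } msg_kind_t;
typedef enum { PROTO_NONCOHERENT, PROTO_COHERENT, PROTO_MEMORY } proto_t;

/* Decide which interconnect protocol carries a given message. */
static proto_t select_protocol(msg_kind_t kind)
{
    switch (kind) {
    case MSG_CACHE_SNOOP:   return PROTO_COHERENT;
    case MSG_MEM_READ:
    case MSG_MEM_WRITE:     return PROTO_MEMORY;
    case MSG_CONFIG:
    case MSG_DMA:
    default:                return PROTO_NONCOHERENT;
    }
}

/* Stand-in for the multi-protocol multiplexer: tags the message with the
 * chosen protocol and sends it over the shared physical link. */
static void mux_send(msg_kind_t kind)
{
    static const char *names[] = { "non-coherent", "coherent", "memory" };
    printf("sending message %d via %s protocol\n", (int)kind, names[select_protocol(kind)]);
}

int main(void)
{
    mux_send(MSG_MEM_READ);
    mux_send(MSG_CACHE_SNOOP);
    return 0;
}
```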
-
Publication number: 20220197852
Abstract: A circuit system includes slow running logic circuitry that generates write data and a write command for a write request. The circuit system also includes fast running logic circuitry that receives the write data and the write command from the slow running logic circuitry. The fast running logic circuitry stores the write data and the write command. A host system generates a write response in response to receiving the write command from the fast running logic circuitry. The host system sends the write response to the fast running logic circuitry. The fast running logic circuitry sends the write data to the host system in response to receiving the write response from the host system before providing the write response to the slow running logic circuitry.
Type: Application
Filed: March 10, 2022
Publication date: June 23, 2022
Applicant: Intel Corporation
Inventors: Mohan Nair, Ishwar Agarwal, Ashish Gupta, Peeyush Purohit, Vijay Pothi Raj Govindaraj, Nitish Paliwal, Rahul Boyapati, Minjer Juan
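A rough illustration of the fast-logic behavior described above: it buffers the write data and command from the slow clock domain, and when the host's write response arrives it pushes the buffered data to the host before relaying the response back to the slow domain. The interfaces and names below are invented for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* State held by the fast running logic between the two steps. */
typedef struct {
    uint64_t buffered_data;
    int      data_valid;
} fast_logic_t;

static void host_receive_data(uint64_t data)
{
    printf("host got write data 0x%llx\n", (unsigned long long)data);
}

static void slow_logic_receive_response(void)
{
    printf("slow logic got write response\n");
}

/* Step 1: slow logic hands over data + command; fast logic stores them
 * (the write command would be forwarded to the host here). */
static void fast_accept_write(fast_logic_t *fl, uint64_t data)
{
    fl->buffered_data = data;
    fl->data_valid    = 1;
}

/* Step 2: host's write response arrives at the fast logic. */
static void fast_on_write_response(fast_logic_t *fl)
{
    if (fl->data_valid) {
        host_receive_data(fl->buffered_data);   /* data goes to the host first */
        fl->data_valid = 0;
    }
    slow_logic_receive_response();              /* then the slow domain is told */
}

int main(void)
{
    fast_logic_t fl = { 0 };
    fast_accept_write(&fl, 0xDEADBEEF);
    fast_on_write_response(&fl);
    return 0;
}
```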
-
Patent number: 11366773
Abstract: Systems, methods, and devices can include link layer logic that is to identify, by a link layer device, first data received from the memory in a first protocol format, identify, by the link layer device, second data received from the cache in a second protocol format, multiplex, by the link layer device, a portion of the first data and a portion of the second data to produce multiplexed data; and generate, by the link layer device, a flow control unit (flit) that includes the multiplexed data.
Type: Grant
Filed: April 3, 2020
Date of Patent: June 21, 2022
Assignee: Intel Corporation
Inventors: Ishwar Agarwal, Peeyush Purohit, Nitish Paliwal, Archana Srinivasan
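A rough illustration of the flit-building step described above: a portion of memory-protocol data and a portion of cache-protocol data are packed into one flow control unit. The 16-byte flit layout below is invented for illustration and is not a real link-layer format.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define FLIT_BYTES 16
#define SLOT_BYTES (FLIT_BYTES / 2)

typedef struct {
    uint8_t bytes[FLIT_BYTES];
} flit_t;

/* Multiplex one half-flit of memory-protocol data with one half-flit of
 * cache-protocol data into a single flit. */
static flit_t build_flit(const uint8_t *mem_data, const uint8_t *cache_data)
{
    flit_t f;
    memcpy(f.bytes,              mem_data,   SLOT_BYTES);   /* slot 0: memory protocol */
    memcpy(f.bytes + SLOT_BYTES, cache_data, SLOT_BYTES);   /* slot 1: cache protocol */
    return f;
}

int main(void)
{
    uint8_t mem[SLOT_BYTES]   = { 0x11 };
    uint8_t cache[SLOT_BYTES] = { 0x22 };
    flit_t f = build_flit(mem, cache);
    printf("flit[0]=0x%02X flit[%d]=0x%02X\n", f.bytes[0], SLOT_BYTES, f.bytes[SLOT_BYTES]);
    return 0;
}
```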
-
Publication number: 20220179780
Abstract: Disclosed herein is a thin-provisioned multi-node computer system with a disaggregated memory pool and a pooled memory controller. The disaggregated memory pool is configured to make a shared memory capacity available to each of a plurality of compute nodes, such memory capacity being thinly provisioned relative to the plurality of compute nodes. The pooled memory controller is configured to assign a plurality of memory segments of the disaggregated memory pool to the plurality of compute nodes; identify a subset of the plurality of segments as cold segments, such identification being based on determining that a usage characteristic for each such cold segment is below a threshold; and page one or more of the cold segments out to an expanded bulk memory device, thereby freeing one or more assigned memory segments of the disaggregated memory pool.
Type: Application
Filed: December 8, 2020
Publication date: June 9, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Siamak TAVALLAEI, Ishwar AGARWAL
-
Patent number: 11347662
Abstract: Aspects of the embodiments are directed to systems and methods for providing and using hints in data packets to perform memory transaction optimization processes prior to receiving one or more data packets that rely on memory transactions. The systems and methods can include receiving, from a device connected to the root complex across a PCIe-compliant link, a data packet; identifying, from the received data packet, a memory transaction hint bit; determining a memory transaction from the memory transaction hint bit; and performing an optimization process based, at least in part, on the determined memory transaction.
Type: Grant
Filed: September 30, 2017
Date of Patent: May 31, 2022
Assignee: Intel Corporation
Inventors: Ishwar Agarwal, Rupin H. Vakharwala, Rajesh M. Sankaran, Stephen R. Van Doren
-
Publication number: 20220164118
Abstract: The present disclosure relates to systems, methods, and computer-readable media for managing tracked memory usage data and performing various actions based on memory usage data tracked by a memory controller on a memory device. For example, systems described herein involve collecting and compiling data across one or more memory controllers to evaluate characteristics of the memory usage data to determine hotness metric(s) for segments of a memory resource. The systems described herein may perform a variety of segment actions based on the hotness metric(s). In addition, the systems described herein can compile the memory usage data according to one or more access granularities. This compiled data may further be shared with multiple accessing agents in accordance with access resolutions of the respective accessing agents.
Type: Application
Filed: November 23, 2020
Publication date: May 26, 2022
Inventors: Lisa Ru-Feng HSU, Aninda MANOCHA, Ishwar AGARWAL, Daniel Sebastian BERGER, Stanko NOVAKOVIC, Janaina Barreiro GAMBARO BUENO, Vishal SONI
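A rough illustration of deriving a hotness metric and a segment action as described above: raw per-interval access counts are folded into a decayed metric, and the metric selects an action such as promoting or demoting the segment. The decay rule, thresholds, and action names below are invented for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* Invented segment actions driven by the hotness metric. */
typedef enum { ACTION_NONE, ACTION_PROMOTE, ACTION_DEMOTE } seg_action_t;

typedef struct {
    uint32_t hotness;   /* exponentially decayed access count */
} segment_stats_t;

/* Fold the latest interval's access count into the running metric. */
static void update_hotness(segment_stats_t *s, uint32_t interval_accesses)
{
    s->hotness = s->hotness / 2 + interval_accesses;   /* simple exponential decay */
}

/* Choose an action from the metric: promote hot segments toward faster
 * memory, demote cold ones toward slower memory. */
static seg_action_t choose_action(const segment_stats_t *s)
{
    if (s->hotness > 1000) return ACTION_PROMOTE;
    if (s->hotness < 10)   return ACTION_DEMOTE;
    return ACTION_NONE;
}

int main(void)
{
    segment_stats_t s = { .hotness = 0 };
    update_hotness(&s, 2400);
    printf("action: %d\n", (int)choose_action(&s));   /* prints 1 = promote */
    return 0;
}
```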
-
Patent number: 11321171
Abstract: Techniques of memory operations management are disclosed herein. One example technique includes retrieving, from a first memory, data from a data portion and metadata from a metadata portion of the first memory upon receiving a request to read data corresponding to a system memory section. The method can then include analyzing the data location information to determine whether the first memory currently contains data corresponding to the system memory section in the received request. In response to determining that the first memory currently contains data corresponding to the system memory section in the received request, transmitting the retrieved data from the data portion of the first memory to the processor in response to the received request. Otherwise, the method can include identifying a memory location in the second memory that contains data corresponding to the system memory section and retrieving the data from the identified memory location.
Type: Grant
Filed: May 24, 2021
Date of Patent: May 3, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ishwar Agarwal, George Zacharias Chrysos, Oscar Rosell Martinez