Patents by Inventor Mrittika Ganguli
Mrittika Ganguli has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12292842
Abstract: Examples described herein relate to network layer 7 (L7) offload to an infrastructure processing unit (IPU) for a service mesh. An apparatus described herein includes an IPU comprising an IPU memory to store a routing table for a service mesh, the routing table to map shared memory address spaces of the IPU and a host device executing one or more microservices, wherein the service mesh provides an infrastructure layer for the one or more microservices executing on the host device; and one or more IPU cores communicably coupled to the IPU memory, the one or more IPU cores to: host a network L7 proxy endpoint for the service mesh, and communicate messages between the network L7 proxy endpoint and an L7 interface device of the one or more microservices by copying data between the shared memory address spaces of the IPU and the host device based on the routing table.
Type: Grant
Filed: September 27, 2021
Date of Patent: May 6, 2025
Assignee: Intel Corporation
Inventors: Mrittika Ganguli, Anjali Jain, Reshma Lal, Edwin Verplanke, Priya Autee, Chih-Jen Chang, Abhirupa Layek, Nupur Jain
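As a rough illustration of the routing-table mechanism this abstract describes, here is a minimal Python sketch; the `Route` and `L7ProxyEndpoint` names and structure are hypothetical, and the actual mechanism is IPU hardware and firmware, not a software library. A table maps each microservice to paired IPU-side and host-side memory regions, and messages move by copying between them.

```python
# Hypothetical sketch of the shared-memory routing table from the abstract;
# the real mechanism lives in IPU hardware/firmware.
from dataclasses import dataclass

@dataclass
class Route:
    ipu_region: bytearray   # stand-in for the IPU-side shared memory window
    host_region: bytearray  # stand-in for the host-side shared memory window

class L7ProxyEndpoint:
    def __init__(self):
        self.routing_table: dict[str, Route] = {}

    def add_route(self, service: str, size: int) -> None:
        # Map paired IPU/host address spaces for one microservice.
        self.routing_table[service] = Route(bytearray(size), bytearray(size))

    def send_to_host(self, service: str, message: bytes) -> None:
        # "Communicate" by copying between the mapped regions per the table.
        route = self.routing_table[service]
        route.ipu_region[: len(message)] = message
        route.host_region[: len(message)] = route.ipu_region[: len(message)]

proxy = L7ProxyEndpoint()
proxy.add_route("cart-service", 4096)
proxy.send_to_host("cart-service", b"GET /cart HTTP/1.1")
print(bytes(proxy.routing_table["cart-service"].host_region[:18]))
```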
-
Patent number: 12293231
Abstract: Examples described herein include a device interface; a first set of one or more processing units; and a second set of one or more processing units. In some examples, the first set of one or more processing units are to perform heavy flow detection for packets of a flow and the second set of one or more processing units are to perform processing of packets of a heavy flow. In some examples, the first set of one or more processing units and second set of one or more processing units are different. In some examples, the first set of one or more processing units is to allocate pointers to packets associated with the heavy flow to a first set of one or more queues of a load balancer and the load balancer is to allocate the packets associated with the heavy flow to one or more processing units of the second set of one or more processing units based, at least in part, on a packet receive rate of the packets associated with the heavy flow.
Type: Grant
Filed: September 10, 2021
Date of Patent: May 6, 2025
Assignee: Intel Corporation
Inventors: Chenmin Sun, Yipeng Wang, Rahul R. Shah, Ren Wang, Sameh Gobriel, Hongjun Ni, Mrittika Ganguli, Edwin Verplanke
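A minimal Python sketch of the two-stage idea, assuming a simple per-flow packet counter for heavy flow detection (the abstract does not specify the detection algorithm) and a receive-rate-driven choice of how many second-stage workers to use; all thresholds and names are illustrative.

```python
# Hypothetical sketch: flows whose packet count crosses a threshold are
# treated as "heavy"; pointers to their packets go to a load-balancer queue.
from collections import Counter, deque

HEAVY_THRESHOLD = 100          # packets seen before a flow counts as heavy
flow_counts = Counter()
heavy_queue = deque()          # first-stage queue feeding the load balancer

def detect(flow_id: int, pkt_ptr: int) -> None:
    # First set of processing units: count packets per flow and enqueue
    # pointers (not packets) for flows past the heavy threshold.
    flow_counts[flow_id] += 1
    if flow_counts[flow_id] >= HEAVY_THRESHOLD:
        heavy_queue.append((flow_id, pkt_ptr))

def balance(workers: list[int], rx_rate_pps: int) -> dict[int, list[int]]:
    # Load balancer: spread heavy-flow packets over more second-stage
    # workers as the packet receive rate grows.
    n = min(len(workers), max(1, rx_rate_pps // 1_000_000))
    assignment = {w: [] for w in workers[:n]}
    for i, (_, ptr) in enumerate(heavy_queue):
        assignment[workers[i % n]].append(ptr)
    return assignment

for p in range(150):
    detect(flow_id=7, pkt_ptr=p)
print(len(heavy_queue))                                    # 51 queued pointers
print(list(balance([0, 1, 2, 3], rx_rate_pps=2_500_000)))  # workers [0, 1]
```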
-
Patent number: 12289239
Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for dynamic load balancing for multi-core computing environments. An example apparatus includes a first core and a plurality of second cores of a processor, and circuitry in a die of the processor separate from the first and the second cores, the circuitry to enqueue identifiers in one or more queues in the circuitry associated with respective ones of data packets of a packet flow, allocate one or more of the second cores to dequeue first ones of the identifiers in response to a throughput parameter of the first core not satisfying a throughput threshold to cause the one or more of the second cores to execute one or more operations on first ones of the data packets, and provide the first ones to one or more data consumers to distribute the first data packets.
Type: Grant
Filed: January 13, 2023
Date of Patent: April 29, 2025
Assignee: Intel Corporation
Inventors: Stephen Palermo, Bradley Chaddick, Gage Eads, Mrittika Ganguli, Abhishek Khade, Abhirupa Layek, Sarita Maini, Niall McDonnell, Rahul Shah, Shrikant Shah, William Burroughs, David Sonnier
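A hedged Python sketch of the spill-over policy described here: identifiers are enqueued centrally, and second cores are allocated only when the first core's throughput misses a threshold. Core capacities and threshold values are assumptions, not from the patent.

```python
# Hypothetical sketch of allocating second cores when the first core's
# throughput parameter does not satisfy the threshold.
from queue import SimpleQueue

ids = SimpleQueue()                 # circuitry's queue of packet identifiers
for pkt_id in range(10_000):
    ids.put(pkt_id)

THROUGHPUT_THRESHOLD = 5_000_000    # packets/sec, illustrative
SECOND_CORE_CAPACITY = 2_000_000    # assumed per-core capacity

def allocate_second_cores(first_core_tput: int,
                          second_cores: list[int]) -> list[int]:
    # Allocate second cores to dequeue identifiers only when the first
    # core cannot keep up on its own.
    if first_core_tput >= THROUGHPUT_THRESHOLD:
        return []
    deficit = THROUGHPUT_THRESHOLD - first_core_tput
    needed = -(-deficit // SECOND_CORE_CAPACITY)   # ceiling division
    return second_cores[:needed]

print(allocate_second_cores(3_000_000, [1, 2, 3, 4]))   # -> [1]
```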
-
Patent number: 12197601
Abstract: Examples described herein relate to offload circuitry comprising one or more compute engines that are configurable to perform a workload offloaded from a process executed by a processor based on a descriptor particular to the workload. In some examples, the offload circuitry is configurable to perform the workload, among multiple different workloads. In some examples, the multiple different workloads include one or more of: data transformation (DT) for data format conversion, Locality Sensitive Hashing (LSH) for neural network (NN), similarity search, sparse general matrix-matrix multiplication (SpGEMM) acceleration of hash based sparse matrix multiplication, data encode, data decode, or embedding lookup.
Type: Grant
Filed: December 22, 2021
Date of Patent: January 14, 2025
Assignee: Intel Corporation
Inventors: Ren Wang, Sameh Gobriel, Somnath Paul, Yipeng Wang, Priya Autee, Abhirupa Layek, Shaman Narayana, Edwin Verplanke, Mrittika Ganguli, Jr-Shian Tsai, Anton Sorokin, Suvadeep Banerjee, Abhijit Davare, Desmond Kirkpatrick, Rajesh M. Sankaran, Jaykant B. Timbadiya, Sriram Kabisthalam Muthukumar, Narayan Ranganathan, Nalini Murari, Brinda Ganesh, Nilesh Jain
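As an illustration of descriptor-driven dispatch, a small Python sketch follows; the handler bodies are trivial stand-ins, and the descriptor fields (`kind`, `payload`, `table`, `index`) are hypothetical, though the workload names come from the abstract.

```python
# Hypothetical sketch: a descriptor names the offloaded workload and its
# parameters, and the offload circuitry dispatches on it.
def data_transform(payload):
    # DT: trivial stand-in for data format conversion.
    return payload.encode() if isinstance(payload, str) else payload

def embedding_lookup(table, index):
    # Trivial stand-in for an embedding table read.
    return table[index]

HANDLERS = {
    "DT": lambda d: data_transform(d["payload"]),
    "EMBEDDING_LOOKUP": lambda d: embedding_lookup(d["table"], d["index"]),
}

def submit(descriptor: dict):
    # The descriptor is "particular to the workload": its kind selects the
    # compute engine; the remaining fields carry workload-specific arguments.
    return HANDLERS[descriptor["kind"]](descriptor)

print(submit({"kind": "DT", "payload": "abc"}))                    # b'abc'
print(submit({"kind": "EMBEDDING_LOOKUP",
              "table": [[0.1], [0.9]], "index": 1}))               # [0.9]
```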
-
Patent number: 12155538
Abstract: In accordance with some embodiments, a cloud service provider may operate a data center in a way that dynamically reallocates resources across nodes within the data center based on both utilization and service level agreements. In other words, the allocation of resources may be adjusted dynamically based on current conditions. The current conditions in the data center may be a function of the nature of all the current workloads. Instead of simply managing the workloads in a way to increase overall execution efficiency, the data center instead may manage the workload to achieve quality of service requirements for particular workloads according to service level agreements.
Type: Grant
Filed: June 15, 2023
Date of Patent: November 26, 2024
Assignee: Intel Corporation
Inventors: Mrittika Ganguli, Muthuvel M. I, Ananth S. Narayan, Jaideep Moses, Andrew J. Herdrich, Rahul Khanna
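A minimal Python sketch of SLA-driven reallocation, assuming each workload reports a measured latency against an SLA target; the weighting scheme (share proportional to SLA pressure) is one plausible policy, not the patented one.

```python
# Hypothetical sketch: rebalance resource shares toward workloads that are
# missing their service-level agreement, rather than toward raw efficiency.
def reallocate(workloads: list[dict], total_shares: int = 100) -> dict:
    # Weight each workload by how badly it misses its SLA target
    # (pressure 1.0 means the SLA is being met).
    pressure = {w["name"]: max(1.0, w["latency_ms"] / w["target_ms"])
                for w in workloads}
    total = sum(pressure.values())
    return {name: round(total_shares * p / total)
            for name, p in pressure.items()}

print(reallocate([
    {"name": "web",   "latency_ms": 40,  "target_ms": 20},   # missing its SLA
    {"name": "batch", "latency_ms": 500, "target_ms": 900},  # meeting its SLA
]))   # -> {'web': 67, 'batch': 33}: shares shift toward the SLA violator
```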
-
Patent number: 12073255
Abstract: Technologies for providing latency-aware consensus management in a disaggregated system include a compute device. The compute device includes circuitry to determine latencies associated with subsystems of the disaggregated system. Additionally, the circuitry is to determine, as a function of the determined latencies, a time period in which a configuration change to the disaggregated system is to reach a consistent state in the subsystems.
Type: Grant
Filed: July 2, 2019
Date of Patent: August 27, 2024
Assignee: Intel Corporation
Inventors: Mrittika Ganguli, Murugasamy K. Nachimuthu, Muralidharan Sundararajan, Susanne M. Balle, Mohan J. Kumar
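One plausible reading of "a time period ... as a function of the determined latencies" is a window bounded by the slowest subsystem plus a safety margin; the sketch below assumes exactly that, and the margin value is illustrative.

```python
# Hypothetical sketch: derive the consistency window from per-subsystem
# latencies. The "function of the latencies" assumed here is max + margin.
def consensus_window_ms(subsystem_latencies_ms: dict[str, float],
                        margin: float = 1.5) -> float:
    # The configuration change is consistent once the slowest subsystem
    # has applied it; the margin covers jitter (assumed policy).
    return max(subsystem_latencies_ms.values()) * margin

print(consensus_window_ms({"compute": 2.0, "storage": 8.5, "fabric": 1.2}))
# -> 12.75 ms: wait at least this long before treating the change as committed
```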
-
Publication number: 20240267334
Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for dynamic load balancing for multi-core computing environments. An example apparatus includes a first core and a plurality of second cores of a processor, and circuitry in a die of the processor separate from the first and the second cores, the circuitry to enqueue identifiers in one or more queues in the circuitry associated with respective ones of data packets of a packet flow, allocate one or more of the second cores to dequeue first ones of the identifiers in response to a throughput parameter of the first core not satisfying a throughput threshold to cause the one or more of the second cores to execute one or more operations on first ones of the data packets, and provide the first ones to one or more data consumers to distribute the first data packets.
Type: Application
Filed: March 29, 2024
Publication date: August 8, 2024
Inventors: Stephen Palermo, Bradley Chaddick, Gage Eads, Mrittika Ganguli, Abhishek Khade, Abhirupa Layek, Sarita Maini, Niall McDonnell, Rahul Shah, Shrikant Shah, William Burroughs, David Sonnier
-
Publication number: 20240129353
Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed to improve webservers using dynamic load balancers. An example method includes identifying a first and second data object type associated with media and with first and second data objects of the media. The example method also includes enqueuing first and second event data associated with the first and second data object in a first and second queue in first circuitry in a die of programmable circuitry. The example method further includes dequeuing the first and second event data into a third and fourth queue associated with a first and second core of the programmable circuitry, the first circuitry separate from the first core and the second core. The example method additionally includes causing the first and second core to execute a first and second computing operation based on the first and second event data in the third and fourth queues.
Type: Application
Filed: December 21, 2023
Publication date: April 18, 2024
Inventors: Amruta Misra, Niall McDonnell, Mrittika Ganguli, Edwin Verplanke, Stephen Palermo, Rahul Shah, Pushpendra Kumar, Vrinda Khirwadkar, Valerie Parker
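A small Python sketch of the two-level queueing this method describes: per-type staging queues in the first circuitry, then per-core queues; the type-to-core mapping here is an assumed, illustrative policy.

```python
# Hypothetical sketch: media events are classified by data object type,
# staged in per-type queues, then dequeued into per-core queues.
from collections import deque

stage = {"video": deque(), "audio": deque()}   # per-type queues, first circuitry
core_queues = {0: deque(), 1: deque()}         # per-core queues
TYPE_TO_CORE = {"video": 0, "audio": 1}        # assumed type-to-core mapping

def enqueue(event: dict) -> None:
    # Enqueue event data by data object type in the first circuitry.
    stage[event["type"]].append(event)

def dispatch() -> None:
    # Dequeue into the queue of the core that runs the matching operation.
    for obj_type, q in stage.items():
        while q:
            core_queues[TYPE_TO_CORE[obj_type]].append(q.popleft())

enqueue({"type": "video", "id": 1})
enqueue({"type": "audio", "id": 2})
dispatch()
print({core: list(q) for core, q in core_queues.items()})
```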
-
Patent number: 11943280
Abstract: Various systems and methods for implementing a multi-access edge computing (MEC) based system to realize 5G Network Edge and Core Service Dimensioning using Machine Learning and other Artificial Intelligence Techniques, for improved operations and usage of computing and networking resources, are disclosed herein. In an example, processing circuitry of a compute node on a network is used to analyze execution of an application to obtain operational data. The compute node then may modularize functions of the application based on the operational data to construct modularized functions. A phase transition graph is constructed using a machine-learning based analysis, the phase transition graph representing state transitions from one modularized function to another modularized function, where the phase transition graph is used to dimension the application by distributing the modularized functions across the network.
Type: Grant
Filed: August 8, 2022
Date of Patent: March 26, 2024
Assignee: Intel Corporation
Inventors: Mrittika Ganguli, Stephen T. Palermo, Valerie J. Parker
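A minimal sketch of the phase transition graph idea, with simple transition counting standing in for the machine-learning based analysis the abstract mentions; trace format and function names are hypothetical.

```python
# Hypothetical sketch: observed traces between modularized functions become
# a weighted phase-transition graph; heavily coupled functions are candidates
# to co-locate, lightly coupled ones to distribute across the network.
from collections import Counter

def build_phase_graph(traces: list[list[str]]) -> Counter:
    # Count transitions between consecutive modularized functions; the
    # edge weight approximates coupling between phases.
    graph = Counter()
    for trace in traces:
        for src, dst in zip(trace, trace[1:]):
            graph[(src, dst)] += 1
    return graph

graph = build_phase_graph([
    ["decode", "infer", "encode"],
    ["decode", "infer", "infer", "encode"],
])
print(graph.most_common(2))   # strongest transitions: candidates to co-locate
```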
-
Publication number: 20230327960
Abstract: In accordance with some embodiments, a cloud service provider may operate a data center in a way that dynamically reallocates resources across nodes within the data center based on both utilization and service level agreements. In other words, the allocation of resources may be adjusted dynamically based on current conditions. The current conditions in the data center may be a function of the nature of all the current workloads. Instead of simply managing the workloads in a way to increase overall execution efficiency, the data center instead may manage the workload to achieve quality of service requirements for particular workloads according to service level agreements.
Type: Application
Filed: June 15, 2023
Publication date: October 12, 2023
Applicant: Intel Corporation
Inventors: Mrittika Ganguli, Muthuvel M. I, Ananth S. Narayan, Jaideep Moses, Andrew J. Herdrich, Rahul Khanna
-
Patent number: 11722382
Abstract: In accordance with some embodiments, a cloud service provider may operate a data center in a way that dynamically reallocates resources across nodes within the data center based on both utilization and service level agreements. In other words, the allocation of resources may be adjusted dynamically based on current conditions. The current conditions in the data center may be a function of the nature of all the current workloads. Instead of simply managing the workloads in a way to increase overall execution efficiency, the data center instead may manage the workload to achieve quality of service requirements for particular workloads according to service level agreements.
Type: Grant
Filed: October 5, 2021
Date of Patent: August 8, 2023
Assignee: Intel Corporation
Inventors: Mrittika Ganguli, Muthuvel M. I, Ananth S. Narayan, Jaideep Moses, Andrew J. Herdrich, Rahul Khanna
-
Publication number: 20230231809
Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for dynamic load balancing for multi-core computing environments. An example apparatus includes a first core and a plurality of second cores of a processor, and circuitry in a die of the processor separate from the first and the second cores, the circuitry to enqueue identifiers in one or more queues in the circuitry associated with respective ones of data packets of a packet flow, allocate one or more of the second cores to dequeue first ones of the identifiers in response to a throughput parameter of the first core not satisfying a throughput threshold to cause the one or more of the second cores to execute one or more operations on first ones of the data packets, and provide the first ones to one or more data consumers to distribute the first data packets.
Type: Application
Filed: January 13, 2023
Publication date: July 20, 2023
Inventors: Stephen Palermo, Bradley Chaddick, Gage Eads, Mrittika Ganguli, Abhishek Khade, Abhirupa Layek, Sarita Maini, Niall McDonnell, Rahul Shah, Shrikant Shah, William Burroughs, David Sonnier
-
Patent number: 11695668
Abstract: Technologies for allocating resources of managed nodes to workloads to balance multiple resource allocation objectives include an orchestrator server to receive resource allocation objective data indicative of multiple resource allocation objectives to be satisfied. The orchestrator server is additionally to determine an initial assignment of a set of workloads among the managed nodes and receive telemetry data from the managed nodes. The orchestrator server is further to determine, as a function of the telemetry data and the resource allocation objective data, an adjustment to the assignment of the workloads to increase an achievement of at least one of the resource allocation objectives without decreasing an achievement of another of the resource allocation objectives, and apply the adjustments to the assignments of the workloads among the managed nodes as the workloads are performed. Other embodiments are also described and claimed.
Type: Grant
Filed: October 30, 2020
Date of Patent: July 4, 2023
Assignee: Intel Corporation
Inventors: Susanne M. Balle, Rahul Khanna, Nishi Ahuja, Mrittika Ganguli
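The adjustment rule stated here, improve at least one objective without decreasing another, is a Pareto-improvement check, and a direct Python sketch of that check follows; the objective names and score scale are assumed.

```python
# Hypothetical sketch of the stated adjustment rule: accept a new workload
# placement only if it is a Pareto improvement over the current one,
# judged from telemetry-derived objective scores.
def is_acceptable(current: dict[str, float],
                  proposed: dict[str, float]) -> bool:
    # Accept only if some objective's achievement rises and none falls.
    improved = any(proposed[k] > current[k] for k in current)
    worsened = any(proposed[k] < current[k] for k in current)
    return improved and not worsened

current = {"throughput": 0.70, "power_efficiency": 0.60, "balance": 0.80}
print(is_acceptable(current, {"throughput": 0.75, "power_efficiency": 0.60,
                              "balance": 0.80}))   # True: strict improvement
print(is_acceptable(current, {"throughput": 0.90, "power_efficiency": 0.50,
                              "balance": 0.80}))   # False: one objective drops
```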
-
Patent number: 11689471
Abstract: Technologies for contention-aware cloud compute scheduling include a number of compute nodes in a cloud computing cluster and a cloud controller. Each compute node collects performance data indicative of cache contention on the compute node, for example, cache misses per thousand instructions. Each compute node determines a contention score as a function of the performance data and stores the contention score in a cloud state database. In response to a request for a new virtual machine, the cloud controller receives contention scores for the compute nodes and selects a compute node based on the contention score. The cloud controller schedules the new virtual machine on the selected compute node. The contention score may include a contention metric and a contention score level indicative of the contention metric. The contention score level may be determined by comparing the contention metric to a number of thresholds. Other embodiments are described and claimed.
Type: Grant
Filed: December 27, 2021
Date of Patent: June 27, 2023
Assignee: Intel Corporation
Inventors: Subramony Sesha, Archana Patni, Ananth S. Narayan, Mrittika Ganguli
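A Python sketch of the scoring path described: an MPKI-style contention metric is bucketed into a score level by thresholds, and the controller picks the least-contended node; the threshold values are illustrative, not from the patent.

```python
# Hypothetical sketch: map cache misses per thousand instructions (MPKI)
# to a contention score level, then schedule on the least-contended node.
LEVELS = [(5.0, "low"), (20.0, "medium"), (float("inf"), "high")]  # illustrative

def contention_level(mpki: float) -> str:
    # Determine the score level by comparing the metric to thresholds.
    for threshold, level in LEVELS:
        if mpki <= threshold:
            return level

def pick_node(cloud_state: dict[str, float]) -> str:
    # Schedule the new virtual machine on the least-contended node.
    return min(cloud_state, key=cloud_state.get)

state = {"node-a": 27.3, "node-b": 4.1, "node-c": 12.8}   # MPKI per node
print({node: contention_level(m) for node, m in state.items()})
print(pick_node(state))   # -> node-b
```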
-
Publication number: 20230199078
Abstract: Examples described herein relate to execution of an ingress service to select at least one service to process at least one received packet. In some examples, the ingress service executes on a frequency-tuned processor of one or more processors, accesses a forwarding table from high bandwidth memory (HBM), and utilizes data copy circuitry to copy portions of received packets to memory accessible to the selected services.
Type: Application
Filed: February 16, 2023
Publication date: June 22, 2023
Inventors: Beeresh GOPAL, Monika KASHID, Utkarsha GANGTHADE, Manan GUPTA, Mrittika GANGULI
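A rough Python sketch of the ingress path, with a dict standing in for the HBM-resident forwarding table and a plain copy standing in for the data copy circuitry; the lookup key and service names are assumptions.

```python
# Hypothetical sketch: forwarding-table lookup selects the service, then a
# stand-in copy engine places the payload in that service's memory.
forwarding_table = {(6, 443): "tls-terminator", (17, 53): "dns-service"}

def select_service(packet: dict) -> str:
    # Ingress service: choose the handling service from the forwarding table.
    key = (packet["proto"], packet["dst_port"])
    return forwarding_table.get(key, "default-service")

def copy_to_service(packet: dict, service_mem: dict) -> None:
    # Stand-in for the data copy circuitry: place the payload in memory
    # accessible to the selected service.
    service_mem.setdefault(select_service(packet), []).append(packet["payload"])

mem: dict = {}
copy_to_service({"proto": 6, "dst_port": 443, "payload": b"hello"}, mem)
print(mem)   # {'tls-terminator': [b'hello']}
```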
-
Publication number: 20230105491
Abstract: Examples described herein relate to a system to estimate, based on received performance values, the latency of operations of a process without receiving a latency value directly, and/or to estimate the throughput of packets transmitted for the process without receiving a throughput value directly. In some examples, the system is to request to adjust resource allocation to perform the process based on the determined latency and throughput.
Type: Application
Filed: December 2, 2022
Publication date: April 6, 2023
Inventors: Mrittika GANGULI, Dmytro YERMOLENKO, Adrian C. MOGA, Abhirupa LAYEK, Qiming LIU, Robert ZMUDA TRZEBIATOWSKI, Rafal SZNEJDER, Piotr WYSOCKI, Mohan J. KUMAR, Ranganath SUNKU, Vishakh NAIR
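The abstract does not name the estimator, so the sketch below assumes Little's law (occupancy = arrival rate x latency) to derive latency from two indirectly received performance values, and byte counters for throughput; treat it as one possible realization only.

```python
# Hypothetical sketch: infer latency and throughput from indirect counters,
# with no direct latency or throughput measurement.
def estimate_latency_s(avg_queue_occupancy: float,
                       arrival_rate_rps: float) -> float:
    # Little's law: W = L / lambda, using occupancy and rate counters only.
    return avg_queue_occupancy / arrival_rate_rps

def estimate_throughput_bps(tx_bytes: int, interval_s: float) -> float:
    # Throughput inferred from a byte counter sampled over an interval.
    return 8 * tx_bytes / interval_s

print(f"{estimate_latency_s(12.0, 4000.0) * 1e3:.1f} ms")   # 3.0 ms
print(estimate_throughput_bps(1_250_000, 1.0))              # 10000000.0 bit/s
```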
-
Publication number: 20230096451
Abstract: Techniques for remote disaggregated infrastructure processing units (IPUs) are described. An apparatus described herein includes an interconnect controller to receive a transaction layer packet (TLP) from a host compute node; identify a sender and a destination from the TLP; and provide, to a content addressable memory (CAM), a key determined from the sender and the destination. The apparatus as described herein can further include core circuitry communicably coupled to the interconnect controller, the core circuitry to determine an output of the CAM based on the key, the output comprising a network address of an infrastructure processing unit (IPU) assigned to the host compute node, wherein the IPU is disaggregated from the host compute node over a network; and send the TLP to the IPU using a transport protocol.
Type: Application
Filed: September 24, 2021
Publication date: March 30, 2023
Applicant: Intel Corporation
Inventors: Salma Johnson, Duane Galbi, Bradley Burres, Jose Niell, Jeongnim Kim, Reshma Lal, Anandhi Jayakumar, Mrittika Ganguli, Thomas Willis
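A minimal Python sketch of the lookup path: a dict models the CAM, the key is formed from the TLP's sender and destination, and the value is the remote IPU's network address; the actual tunneling over a transport protocol is elided.

```python
# Hypothetical sketch: CAM lookup from (sender, destination) to the network
# address of the disaggregated IPU assigned to the host.
cam = {("host-7", "nvme-fn0"): "10.0.0.42"}   # illustrative CAM contents

def forward_tlp(tlp: dict) -> tuple[str, bytes]:
    # Key derived from the TLP's sender and destination; the CAM output is
    # the network address of the IPU assigned to this host compute node.
    ipu_addr = cam[(tlp["sender"], tlp["destination"])]
    # A real device would tunnel the TLP to ipu_addr over a transport
    # protocol; this sketch just returns the routing decision.
    return ipu_addr, tlp["payload"]

print(forward_tlp({"sender": "host-7", "destination": "nvme-fn0",
                   "payload": b"\x00\x01"}))   # ('10.0.0.42', b'\x00\x01')
```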
-
Publication number: 20230100935
Abstract: Examples described herein relate to circuitry to perform load balancing; at least one memory; and at least one processor. In some examples, at least one processor is to execute instructions stored in the at least one memory that cause the at least one processor to: execute a communication proxy that is to allocate packet data to the circuitry to perform load balancing to allocate workloads among cores and allocate received and transmitted remote procedure calls to at least one queue in circuitry to queue one or more packets.
Type: Application
Filed: December 5, 2022
Publication date: March 30, 2023
Inventors: Kelley MULLICK, Mrittika GANGULI, Brian P. JOHNSON, Matthew J. ADILETTA
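A loose Python sketch of the proxy's two paths, queueing RPCs and handing packet data to a balancer; round-robin stands in for the load-balancing circuitry, and all structure here is illustrative.

```python
# Hypothetical sketch: a communication proxy queues RPCs and hands packet
# data to a load-balancing stage that spreads work across cores.
from collections import deque
from itertools import cycle

rpc_queue = deque()            # "at least one queue" for RPCs
cores = cycle([0, 1, 2, 3])    # round-robin stand-in for the balancer

def proxy(msg: dict):
    # RPCs (received or transmitted) are queued; packet data goes to the
    # load-balancing stage, which picks a core.
    if msg["kind"] == "rpc":
        rpc_queue.append(msg)
        return None
    return next(cores)

print(proxy({"kind": "packet", "payload": b"x"}))   # -> core 0
proxy({"kind": "rpc", "method": "Get"})
print(len(rpc_queue))                               # -> 1
```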
-
Publication number: 20230082780
Abstract: Examples described herein include a device interface; a first set of one or more processing units; and a second set of one or more processing units. In some examples, the first set of one or more processing units are to perform heavy flow detection for packets of a flow and the second set of one or more processing units are to perform processing of packets of a heavy flow. In some examples, the first set of one or more processing units and second set of one or more processing units are different. In some examples, the first set of one or more processing units is to allocate pointers to packets associated with the heavy flow to a first set of one or more queues of a load balancer and the load balancer is to allocate the packets associated with the heavy flow to one or more processing units of the second set of one or more processing units based, at least in part, on a packet receive rate of the packets associated with the heavy flow.
Type: Application
Filed: September 10, 2021
Publication date: March 16, 2023
Inventors: Chenmin SUN, Yipeng WANG, Rahul R. SHAH, Ren WANG, Sameh GOBRIEL, Hongjun NI, Mrittika GANGULI, Edwin VERPLANKE
-
Publication number: 20230062253
Abstract: Various systems and methods for implementing a multi-access edge computing (MEC) based system to realize 5G Network Edge and Core Service Dimensioning using Machine Learning and other Artificial Intelligence Techniques, for improved operations and usage of computing and networking resources, are disclosed herein. In an example, processing circuitry of a compute node on a network is used to analyze execution of an application to obtain operational data. The compute node then may modularize functions of the application based on the operational data to construct modularized functions. A phase transition graph is constructed using a machine-learning based analysis, the phase transition graph representing state transitions from one modularized function to another modularized function, where the phase transition graph is used to dimension the application by distributing the modularized functions across the network.
Type: Application
Filed: August 8, 2022
Publication date: March 2, 2023
Inventors: Mrittika Ganguli, Stephen T. Palermo, Valerie J. Parker