Patents by Inventor Amruta MISRA

Amruta MISRA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240143505
    Abstract: Methods and apparatus for dynamic selection of super queue size for CPUs with higher numbers of cores. An apparatus includes a plurality of compute modules, each module including a plurality of processor cores with integrated first level (L1) caches and a shared second level (L2) cache, a plurality of Last Level Caches (LLCs) or LLC blocks, and a plurality of memory interface blocks interconnected via a mesh interconnect. A compute module is configured to arbitrate access to the shared L2 cache and enqueue L2 cache misses in a super queue (XQ). The compute module is further configured to dynamically adjust the size of the XQ during runtime operations. The compute module tracks parameters comprising an L2 miss rate or count and LLC hit latency and adjusts the XQ size as a function of these parameters. A lookup table using the L2 miss rate/count and LLC hit latency may be implemented to dynamically select the XQ size.
    Type: Application
    Filed: December 22, 2023
    Publication date: May 2, 2024
    Inventors: Amruta MISRA, Ajay RAMJI, Rajendrakumar CHINNAIYAN, Chris MACNAMARA, Karan PUTTANNAIAH, Pushpendra KUMAR, Vrinda KHIRWADKAR, Sanjeevkumar Shankrappa ROKHADE, John J. BROWNE, Francesc GUIM BERNAT, Karthik KUMAR, Farheena Tazeen SYEDA
  • Publication number: 20240129353
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed to improve webservers using dynamic load balancers. An example method includes identifying first and second data object types associated with media and with first and second data objects of the media. The example method also includes enqueuing first and second event data associated with the first and second data objects in first and second queues in first circuitry in a die of programmable circuitry. The example method further includes dequeuing the first and second event data into third and fourth queues associated with first and second cores of the programmable circuitry, the first circuitry separate from the first core and the second core. The example method additionally includes causing the first and second cores to execute first and second computing operations based on the first and second event data in the third and fourth queues.
    Type: Application
    Filed: December 21, 2023
    Publication date: April 18, 2024
    Inventors: Amruta Misra, Niall McDonnell, Mrittika Ganguli, Edwin Verplanke, Stephen Palermo, Rahul Shah, Pushpendra Kumar, Vrinda Khirwadkar, Valerie Parker
  • Publication number: 20240086291
    Abstract: An apparatus comprising first circuitry to process a request generated by a first device, the request specifying a memory address range of a second device to monitor for errors; and second circuitry to, based on a determination that a read request targets the memory address range of the second device, compare first data read from the second device with second data read from a memory to determine whether an error has occurred.
    Type: Application
    Filed: September 9, 2022
    Publication date: March 14, 2024
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Amruta Misra
  • Publication number: 20230342223
    Abstract: Various aspects of methods, systems, and use cases include edge resource management, such as of a processor of an edge device. The edge device may include a processor to execute an application and a device including an interface to the processor and a network interface. The device may include circuitry to monitor a status of the processor; and based on the status and the application having an associated requirement, initiate a migration of execution of the application from the processor.
    Type: Application
    Filed: June 30, 2023
    Publication date: October 26, 2023
    Inventors: Francesc Guim Bernat, Karthik Kumar, Marcos E. Carranza, Amruta Misra
  • Publication number: 20230325246
    Abstract: A platform includes a plurality of hardware blocks to provide respective functionality for use in execution of an application. A subset of the plurality of hardware blocks are deactivated and unavailable for use in the execution of the application at the start of the execution of the application. A hardware profile modification block of the platform receives telemetry data generated by a set of sensors, identifies physical characteristics from the telemetry data, and dynamically activates at least a particular one of the subset of hardware blocks based on the physical characteristics, where following activation of the particular hardware block, the execution of the application continues and uses the particular hardware block.
    Type: Application
    Filed: May 31, 2023
    Publication date: October 12, 2023
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, John J. Browne, Amruta Misra, Chris M. MacNamara
  • Publication number: 20230273821
    Abstract: A method is described. The method includes dispatching jobs across electronic hardware components. The electronic hardware components are to process the jobs. The electronic hardware components are coupled to respective cooling systems. The respective cooling systems are each capable of cooling according to different cooling mechanisms. The different cooling mechanisms have different performance and cost operating realms. The dispatching of the jobs includes assigning the jobs to specific ones of the electronic hardware components to keep the cooling systems operating in one or more of the realms having lower performance and cost than another one of the realms.
    Type: Application
    Filed: April 18, 2023
    Publication date: August 31, 2023
    Inventors: Amruta MISRA, Francesc GUIM BERNAT, Kshitij A. DOSHI, Marcos E. CARRANZA, John J. BROWNE, Arun HODIGERE
  • Publication number: 20230273597
    Abstract: Telemetry systems for monitoring cooling of compute components and related apparatus and methods are disclosed. An example apparatus includes interface circuitry, machine-readable instructions, and programmable circuitry to at least one of instantiate or execute the machine-readable instructions to generate a heatmap based on outputs of one or more sensors in an environment, the environment including a first compute device, the sensor outputs including a metric associated with a property of a coolant and a location of the sensor in the environment, identify a compute performance metric of the first compute device, determine a cooling parameter for the first compute device based on the heatmap and the compute performance metric, and cause a cooling distribution unit to control flow of the coolant in the environment based on the cooling parameter.
    Type: Application
    Filed: May 8, 2023
    Publication date: August 31, 2023
    Inventors: Francesc Guim Bernat, Amruta Misra, Kshitij Arun Doshi, John J. Browne, Marcos Carranza
  • Publication number: 20230259185
    Abstract: Methods, systems, apparatus, and articles of manufacture to control cooling in an edge environment are disclosed. An example apparatus disclosed herein includes programmable circuitry to determine whether a first cooling parameter for a first edge node is satisfied based on first cooling availability information for the first edge node, when the first cooling parameter is satisfied, cause a first distribution unit to maintain an amount of cooling fluid to the first edge node, and when the first cooling parameter is not satisfied, cause at least one of the first distribution unit or a second distribution unit to adjust the amount of cooling fluid to at least one of the first edge node or a second edge node based on the first cooling availability information and second cooling availability information, the second cooling availability information for the second edge node.
    Type: Application
    Filed: April 19, 2023
    Publication date: August 17, 2023
    Inventors: Francesc Guim Bernat, Amruta Misra, Arun Hodigere, John J. Browne, Kshitij Arun Doshi
  • Publication number: 20230259102
    Abstract: Methods and apparatus for maintaining the cooling systems of distributed compute systems are disclosed. An example apparatus disclosed herein includes memory, machine readable instructions, and programmable circuitry to at least one of instantiate or execute the machine readable instructions to input operational data into a machine-learning model, the operational data including first information relating to a workload of a server and second information relating to an ambient condition of the server, compare a predicted cooling power requirement for a time period with a predicted cooling power availability for the time period, the predicted cooling power requirement based on an output of the machine-learning model, and generate a cooling plan based on the comparison, the cooling plan to define operation of at least one of the server or a cooling system used to cool the server during the time period.
    Type: Application
    Filed: April 27, 2023
    Publication date: August 17, 2023
    Inventors: Amruta Misra, Francesc Guim Bernat, Arun Hodigere, Kshitij Arun Doshi, John J. Browne
  • Publication number: 20230244560
    Abstract: Methods and apparatus for maintaining the cooling systems of distributed compute systems are disclosed. An example apparatus disclosed herein includes memory, machine-readable instructions, and processor circuitry to execute the machine-readable instructions to determine a health of a server, determine a threshold based on a workload service level agreement associated with the server, and in response to determining the health does not satisfy the threshold, throttle a workload on the server.
    Type: Application
    Filed: March 29, 2023
    Publication date: August 3, 2023
    Inventors: Francesc Guim Bernat, Amruta Misra, Arun Hodigere, Kshitij Arun Doshi, John J. Browne
  • Publication number: 20230141508
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed to auto tune data center cooling based on workloads. Disclosed herein is an apparatus including processor circuitry to determine environmental condition setpoint(s) based on at least one of a future workload status or a current workload status of devices within a data center.
    Type: Application
    Filed: December 29, 2022
    Publication date: May 11, 2023
    Inventors: Christopher MacNamara, Kshitij Arun Doshi, Francesc Guim Bernat, John J. Browne, Amruta Misra
  • Publication number: 20230137191
    Abstract: An apparatus of a computing node of a computing network, a method to be performed at the apparatus, one or more computer-readable storage media storing instructions to be implemented at the apparatus, and a system including the apparatus. The apparatus includes a processing circuitry to: receive, from an orchestration block, a first workload (WL) package including a WL and first computing resource (CR) metadata; recompose the first WL package into a second WL package that includes the WL and second CR metadata that is different from the first CR metadata, is based at least in part on CR information regarding a server architecture onto which the WL is to be deployed, and is further to indicate one or more processors of the server architecture onto which the WL is to be deployed; and send the second WL package to one or more processors of the server architecture for deployment of the WL thereon.
    Type: Application
    Filed: December 27, 2022
    Publication date: May 4, 2023
    Inventors: Adrian C. Hoban, Thijs Metsch, John J. Browne, Kshitij A. Doshi, Francesc Guim Bernat, Anand Haridass, Chris M. MacNamara, Amruta Misra, Vikrant Thigle
  • Publication number: 20230134643
    Abstract: Methods and apparatus for distributing coolant between server racks are disclosed herein. An example apparatus described herein includes a compute node including a sensor and a first volume of coolant, a coolant storage, memory, and at least one processor to execute instructions to determine, based on an output of the sensor, if the first volume is effective to maintain a temperature of the compute node at a target temperature, in response to determining the first volume is not effective, reduce a computation load on the compute node, and pump, from the coolant storage, a second volume of coolant to the compute node. In some examples, the coolant storage can be disposed underground.
    Type: Application
    Filed: December 29, 2022
    Publication date: May 4, 2023
    Inventors: Arun Hodigere, Francesc Guim Bernat, Kshitij Arun Doshi, Amruta Misra, Marcos Carranza
  • Publication number: 20230037609
    Abstract: Examples described herein relate to an interface and a network interface device coupled to the interface and comprising circuitry to: control power utilization by a first set of one or more devices based on power available to a system that includes the first set of one or more devices, wherein the system is communicatively coupled to the network interface and control cooling applied to the first set of one or more devices.
    Type: Application
    Filed: September 28, 2022
    Publication date: February 9, 2023
    Inventors: Paniraj GURURAJA, Navneeth JAYARAJ, Mahammad Yaseen Isasaheb MULLA, Nitesh GUPTA, Hemanth MADDHULA, Laxminarayan KAMATH, Jyotsna BIJAPUR, Delraj Gambhira DAMBEKANA, Vikrant THIGLE, Amruta MISRA, Anand HARIDASS, Rajesh POORNACHANDRAN, Krishnakumar VARADARAJAN, Sudipto PATRA, Nikhil RANE, Teik Wah LIM
  • Publication number: 20230027516
    Abstract: A processor-to-processor agent to provide connectivity over a processor-to-processor interconnect between services/network functions on different processors on a same compute node in a server is provided. The processor-to-processor agent can intercept socket interface calls using a network traffic filter in the network stack and redirect the packets based on traffic matching rules.
    Type: Application
    Filed: September 30, 2022
    Publication date: January 26, 2023
    Inventors: Tomasz KANTECKI, Paul HOUGH, David CREMINS, Ciara LOFTUS, Aman Deep SINGH, John J. BROWNE, David HUNT, Maksim LUKOSHKOV, Amruta MISRA, Nirint SHAH, Chris MACNAMARA
  • Publication number: 20220329450
    Abstract: Examples described herein relate to a network interface device that includes circuitry to perform switching and perform a command received in one or more packets while at least one of the at least one compute device is in a reduced power state, wherein the command is associated with operation of the at least one of the at least one compute device that is in a reduced power state. In some examples, the network interface device is able to control power available to at least one compute device.
    Type: Application
    Filed: June 28, 2022
    Publication date: October 13, 2022
    Inventors: Harald SERVAT, Amruta MISRA, Mikko BYCKLING, Francesc GUIM BERNAT, Jaime ARTEAGA MOLINA, Karthik KUMAR
  • Publication number: 20220011843
    Abstract: Telemetry information in the form of platform telemetry, virtualization layer telemetry, and application telemetry can be used to estimate power consumption of a software entity, such as a virtual machine, container, application, or network slice. A controller can take various actions based on software entity power consumption information. If a power limit of an integrated circuit component is exceeded, the controller can reduce the power consumption of a software entity or move the software entity to another integrated circuit component to reduce the power consumption of the integrated circuit component. The controller can determine a total software entity power consumption for software entities associated with a user entity and take actions to keep the total software entity power consumption within a power budget.
    Type: Application
    Filed: September 22, 2021
    Publication date: January 13, 2022
    Applicant: Intel Corporation
    Inventors: Chris M. MacNamara, John J. Browne, Amruta Misra
  • Publication number: 20210224128
    Abstract: Techniques for managing workloads in processor cores are disclosed. High priority or mission critical workloads may be assigned to processor cores of a processor. When a power-limited throttling condition is met, the processor may throttle some of its cores while not throttling the cores with high priority or mission critical workloads assigned to them. Such an approach can ensure that mission critical workloads continue even upon throttling of the processor cores.
    Type: Application
    Filed: December 24, 2020
    Publication date: July 22, 2021
    Applicant: Intel Corporation
    Inventors: Chris M. MacNamara, John J. Browne, Amruta Misra, Niall C. Power, Dave Cremins, Tomasz Kantecki, Paul Hough, Killian Muldoon
  • Publication number: 20210182194
    Abstract: A performance monitor provides cache miss stall and memory bandwidth usage metric samples to a resource exhaustion detector. The detector can detect the presence of last-level cache and memory bandwidth exhaustion conditions based on the metric samples. If cache miss stalls and memory bandwidth usage are both trending up, the detector reports a memory bandwidth exhaustion condition to a resource controller. If cache miss stalls are trending up and memory bandwidth usage is trending down, the detector reports a last-level cache exhaustion condition to the resource controller. The resource controller can allocate additional last-level cache or memory bandwidth to the processor unit to remediate the resource exhaustion condition. If bandwidth-related metric samples indicate that a processor unit may be overloaded due to receiving high bandwidth traffic, the resource controller can take a traffic rebalancing remedial action.
    Type: Application
    Filed: February 25, 2021
    Publication date: June 17, 2021
    Applicant: Intel Corporation
    Inventors: John J. Browne, Adrian Boczkowski, Marcel D. Cornu, David Hunt, Shobhi Jain, Tomasz Kantecki, Liang Ma, Chris M. MacNamara, Amruta Misra, Terence Nally
  • Publication number: 20210157626
    Abstract: Examples described herein relate to circuitry to boot a virtualized execution environment (VEE) by use of system resources, wherein the system resources are allocated based on a priority level of the VEE. In some examples, the circuitry to boot a VEE by use of system resources is to access an identification of system resources to use to boot the VEE and priority level of the VEE from stored data. In some examples, the priority level of the VEE is based on a service level agreement (SLA), service level objective (SLO), or class of service (COS) that identifies boot time of the VEE. In some examples, the circuitry is to boot a VEE by use of system resources, wherein the system resources are allocated based on a priority level of the VEE and also based on a number of VEEs that boot concurrently.
    Type: Application
    Filed: February 2, 2021
    Publication date: May 27, 2021
    Inventors: Amruta MISRA, Chris MACNAMARA, John J. BROWNE, Liang MA, Shobhi JAIN, David HUNT
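As an illustration only, the dynamic super queue (XQ) sizing described in publication 20240143505 can be sketched as a table lookup keyed on the two tracked parameters, the L2 miss rate and the LLC hit latency. The bucket thresholds and queue depths below are hypothetical and are not taken from the patent text.

```python
# Illustrative sketch of lookup-table-based XQ sizing (publication 20240143505).
# All thresholds and depths are invented for illustration.

def select_xq_size(l2_miss_rate: float, llc_hit_latency_ns: float) -> int:
    """Pick an XQ depth from a small lookup table.

    Higher miss rates and longer LLC hit latencies warrant a deeper queue,
    so more outstanding L2 misses can be tracked concurrently.
    """
    # Bucket each tracked parameter into low/medium/high.
    miss_bucket = 0 if l2_miss_rate < 0.05 else (1 if l2_miss_rate < 0.20 else 2)
    lat_bucket = 0 if llc_hit_latency_ns < 30 else (1 if llc_hit_latency_ns < 60 else 2)
    # Rows: miss-rate bucket; columns: latency bucket. Entries are XQ depths.
    table = [
        [16, 24, 32],
        [24, 32, 48],
        [32, 48, 64],
    ]
    return table[miss_bucket][lat_bucket]
```

At runtime, a compute module would re-evaluate this lookup periodically and resize the XQ accordingly.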
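Similarly, the trend heuristic in publication 20210182194 (rising cache-miss stalls with rising memory bandwidth usage indicates bandwidth exhaustion; rising stalls with falling bandwidth usage indicates last-level cache exhaustion) can be sketched as below. The trend test, comparing half-window means, is an invented simplification.

```python
# Illustrative sketch of the resource-exhaustion classifier (publication
# 20210182194). The half-window trend test is a hypothetical simplification.

def trending_up(samples: list[float]) -> bool:
    """True if the mean of the newer half exceeds the mean of the older half."""
    mid = len(samples) // 2
    older, newer = samples[:mid], samples[mid:]
    return sum(newer) / len(newer) > sum(older) / len(older)

def classify_exhaustion(stall_samples: list[float], bw_samples: list[float]) -> str:
    """Classify metric samples into an exhaustion condition to report."""
    stalls_up = trending_up(stall_samples)
    bw_up = trending_up(bw_samples)
    if stalls_up and bw_up:
        return "memory-bandwidth-exhaustion"
    if stalls_up and not bw_up:
        return "llc-exhaustion"
    return "ok"
```

A resource controller receiving these reports could then allocate additional last-level cache or memory bandwidth to the affected processor unit.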