Patents by Inventor Karthik Kumar

Karthik Kumar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210109584
    Abstract: Various aspects of methods, systems, and use cases include coordinating actions at an edge device based on power production in a distributed edge computing environment. A method may include identifying a long-term service level agreement (SLA) for a component of an edge device, and determining a list of resources related to the component using the long-term SLA. The method may include scheduling a task for the component based on the long-term SLA, a current battery level at the edge device, a current energy harvest rate at the edge device, or an amount of power required to complete the task. A resource of the list of resources may be used to initiate the task, such as according to the scheduling.
    Type: Application
    Filed: December 23, 2020
    Publication date: April 15, 2021
    Inventors: Francesc Guim Bernat, Karthik Kumar, Timothy Verrall
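
A minimal sketch (not from the filing) of the power-aware scheduling decision this abstract describes, assuming hypothetical names such as EdgeTask and schedule_task and a simple energy model:

```python
from dataclasses import dataclass

@dataclass
class EdgeTask:
    name: str
    energy_required_wh: float   # energy the task needs to complete
    sla_deadline_s: float       # long-term SLA deadline for the component

def schedule_task(task: EdgeTask, battery_level_wh: float,
                  harvest_rate_w: float, horizon_s: float) -> str:
    """Decide whether to run a task now or defer it, based on the current
    battery level and the energy expected to be harvested in time."""
    # Energy expected to be available before the SLA deadline.
    expected_energy = battery_level_wh + harvest_rate_w * min(horizon_s, task.sla_deadline_s) / 3600.0
    if task.energy_required_wh <= battery_level_wh:
        return "run-now"                    # enough stored energy already
    if task.energy_required_wh <= expected_energy:
        return "defer-until-harvested"      # harvesting will cover the deficit in time
    return "offload-or-degrade"             # cannot meet the SLA locally on available power

# Example: a 0.5 Wh task with 0.2 Wh in the battery and 2 W of harvest.
print(schedule_task(EdgeTask("inference", 0.5, 3600), 0.2, 2.0, 1800))
```
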
  • Patent number: 10977036
    Abstract: An apparatus is described. The apparatus includes main memory control logic circuitry comprising prefetch intelligence logic circuitry. The prefetch intelligence circuitry is to determine, from a read result of a load instruction, an address for a dependent load that is dependent on the read result, and to direct a read request for the dependent load to a main memory to fetch the dependent load's data.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: April 13, 2021
    Assignee: Intel Corporation
    Inventors: Patrick Lu, Karthik Kumar, Thomas Willhalm, Francesc Guim Bernat, Martin P. Dimitrov
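
A rough model (an assumption, not the patented circuit) of the pointer-chase behavior the abstract describes: the read result of one load is itself the address of the next, so the dependent read can be issued without a round trip back to the core:

```python
# Toy memory image: node A stores a pointer to node B, B points to C.
memory = {0x1000: 0x2000,
          0x2000: 0x3000,
          0x3000: 0xDEAD}   # payload at the end of the chain

def read_with_dependent_prefetch(addr: int, depth: int = 2) -> list[int]:
    """Return the chain of values fetched, following the read result as the
    next address `depth` times (the dependent loads)."""
    results = []
    for _ in range(depth + 1):
        value = memory.get(addr, 0)
        results.append(value)
        addr = value            # interpret the read result as the next address
    return results

print([hex(v) for v in read_with_dependent_prefetch(0x1000)])  # ['0x2000', '0x3000', '0xdead']
```
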
  • Publication number: 20210103481
    Abstract: Technologies for providing efficient migration of services include a server device. The server device includes compute engine circuitry to execute a set of services on behalf of a terminal device and migration accelerator circuitry. The migration accelerator circuitry is to determine whether execution of the services is to be migrated from an edge station in which the present server device is located to a second edge station in which a second server device is located, determine a prioritization of the services executed by the server device, and send, in response to a determination that the services are to be migrated and as a function of the determined prioritization, data utilized by each service to the second server device of the second edge station to migrate the services. Other embodiments are also described and claimed.
    Type: Application
    Filed: June 29, 2018
    Publication date: April 8, 2021
    Inventors: Francesc Guim Bernat, Karthik Kumar, Susanne M. Balle, Ignacio Astilleros Diez, Timothy Verrall, Ned M. Smith
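
A minimal sketch, under assumed names (Service, migrate_services), of the prioritized migration the abstract describes: service state is sent to the destination edge station in priority order once migration is decided:

```python
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    priority: int        # lower number = more critical
    state_bytes: int     # amount of state that must be transferred

def migrate_services(services: list[Service], should_migrate: bool) -> list[str]:
    """Return the order in which service state is sent to the second
    edge station, most critical services first."""
    if not should_migrate:
        return []
    plan = []
    for svc in sorted(services, key=lambda s: s.priority):
        # In a real system this would stream svc's state to the second server device.
        plan.append(f"send {svc.state_bytes} bytes of state for {svc.name}")
    return plan

services = [Service("video-analytics", 2, 50_000_000),
            Service("v2x-safety", 1, 2_000_000)]
for step in migrate_services(services, should_migrate=True):
    print(step)
```
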
  • Patent number: 10969975
    Abstract: The present disclosure relates to a dynamically composable computing system comprising a computing fabric with a plurality of different disaggregated computing hardware resources having respective hardware characteristics. A resource manager has access to the respective hardware characteristics of the different disaggregated computing hardware resources and is configured to assemble a composite computing node by selecting one or more disaggregated computing hardware resources with respective hardware characteristics meeting requirements of an application to be executed on the composite computing node. An orchestrator is configured to schedule the application using the assembled composite computing node.
    Type: Grant
    Filed: August 1, 2019
    Date of Patent: April 6, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, John Chun Kwok Leung, Mark Schmisseur, Thomas Willhalm
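
An illustrative sketch (names such as Resource and assemble_composite_node are assumptions) of composing a node from disaggregated resources whose characteristics meet an application's requirements:

```python
from dataclasses import dataclass

@dataclass
class Resource:
    kind: str            # e.g. "cpu", "memory", "fpga"
    capacity: int        # normalized capacity units
    in_use: bool = False

def assemble_composite_node(pool: list[Resource],
                            requirements: dict[str, int]) -> list[Resource]:
    """Select one free resource per required kind whose capacity meets the
    application's requirement; return the assembled composite node."""
    node = []
    for kind, needed in requirements.items():
        match = next((r for r in pool
                      if r.kind == kind and not r.in_use and r.capacity >= needed), None)
        if match is None:
            raise RuntimeError(f"no free {kind} with capacity >= {needed}")
        match.in_use = True
        node.append(match)
    return node

pool = [Resource("cpu", 32), Resource("memory", 256), Resource("fpga", 1)]
node = assemble_composite_node(pool, {"cpu": 16, "memory": 128})
print([r.kind for r in node])   # ['cpu', 'memory'] — ready for the orchestrator to schedule
```
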
  • Patent number: 10956325
    Abstract: Embodiments provide for a processor including a cache, a caching agent, and a processing node to decode an instruction including at least one operand specifying an address range within a distributed shared memory (DSM) and perform a flush to a first of a plurality of memory devices in the DSM at the specified address range.
    Type: Grant
    Filed: December 12, 2016
    Date of Patent: March 23, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Mohan J. Kumar, Thomas Willhalm, Robert G. Blankenship
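
A small sketch (assumed device map and flush_range helper, not the instruction's actual microarchitecture) of routing a range flush to whichever DSM memory devices own the specified addresses:

```python
# Each memory device in the distributed shared memory owns a contiguous range.
DEVICES = [
    {"name": "node0-dram", "base": 0x0000_0000, "size": 0x4000_0000},
    {"name": "node1-pmem", "base": 0x4000_0000, "size": 0x4000_0000},
]

def flush_range(start: int, length: int) -> list[str]:
    """Route a flush for [start, start+length) to every device whose
    owned range overlaps the requested address range."""
    end = start + length
    ops = []
    for dev in DEVICES:
        dev_end = dev["base"] + dev["size"]
        if start < dev_end and end > dev["base"]:          # ranges overlap
            lo, hi = max(start, dev["base"]), min(end, dev_end)
            ops.append(f"flush {hex(lo)}-{hex(hi)} on {dev['name']}")
    return ops

print(flush_range(0x3FFF_F000, 0x2000))   # spans both devices
```
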
  • Patent number: 10949313
    Abstract: A network controller, including: a processor; and a resource permission engine to: provision a composite node including a processor and a first disaggregated compute resource (DCR) remote from the processor, the first DCR to access a target resource; determine that the first DCR has failed; provision a second DCR for the composite node, the second DCR to access the target resource; and instruct the target resource to revoke a permission for the first DCR and grant the permission to the second DCR.
    Type: Grant
    Filed: June 28, 2017
    Date of Patent: March 16, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Susanne M. Balle, Daniel Rivas Barragan, Patrick Lu
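
A minimal sketch of the failover flow the abstract describes, assuming hypothetical names (TargetResource, handle_dcr_failure): on failure, the replacement disaggregated compute resource is provisioned, the old permission is revoked, and the same permission is granted to the replacement:

```python
class TargetResource:
    """Tracks which disaggregated compute resource (DCR) holds access permission."""
    def __init__(self) -> None:
        self.permitted: set[str] = set()

    def revoke(self, dcr_id: str) -> None:
        self.permitted.discard(dcr_id)

    def grant(self, dcr_id: str) -> None:
        self.permitted.add(dcr_id)

def handle_dcr_failure(target: TargetResource, failed_dcr: str,
                       spare_dcrs: list[str]) -> str:
    """Provision a spare DCR for the composite node, revoke the failed DCR's
    permission, and grant that permission to the replacement."""
    if not spare_dcrs:
        raise RuntimeError("no spare DCR available for the composite node")
    replacement = spare_dcrs.pop(0)
    target.revoke(failed_dcr)
    target.grant(replacement)
    return replacement

target = TargetResource()
target.grant("dcr-0")
new_dcr = handle_dcr_failure(target, "dcr-0", ["dcr-7"])
print(new_dcr, target.permitted)   # dcr-7 {'dcr-7'}
```
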
  • Patent number: 10951516
    Abstract: Technologies for quality of service based throttling in a fabric architecture include a network node of a plurality of network nodes interconnected across the fabric architecture via an interconnect fabric. The network node includes a host fabric interface (HFI) configured to facilitate the transmission of data to/from the network node, monitor quality of service levels of resources of the network node used to process and transmit the data, and detect a throttling condition based on a result of the monitored quality of service levels. The HFI is further configured to generate and transmit a throttling message to one or more of the interconnected network nodes in response to having detected a throttling condition. The HFI is additionally configured to receive a throttling message from another of the network nodes and perform a throttling action on one or more of the resources based on the received throttling message. Other embodiments are described herein.
    Type: Grant
    Filed: March 4, 2019
    Date of Patent: March 16, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Raj K. Ramanujan, Brian J. Slechta
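
A sketch of the detect-and-react throttling loop the abstract describes, with assumed thresholds and function names (detect_throttling, apply_throttling); the real mechanism lives in the host fabric interface hardware:

```python
# Thresholds above which a monitored resource is considered saturated.
QOS_THRESHOLDS = {"memory_bandwidth": 0.8, "fabric_credits": 0.9}

def detect_throttling(utilization: dict[str, float]) -> list[str]:
    """Return a throttling message naming each resource whose monitored
    utilization exceeds its quality-of-service threshold."""
    return [f"THROTTLE {name}" for name, level in utilization.items()
            if level > QOS_THRESHOLDS.get(name, 1.0)]

def apply_throttling(message: str, rate_limits: dict[str, float]) -> None:
    """On receipt of a throttling message, reduce the injection rate
    for the named resource (here, by 25%)."""
    resource = message.split(" ", 1)[1]
    rate_limits[resource] = rate_limits.get(resource, 1.0) * 0.75

# Node A detects congestion; node B reacts to the message it receives.
messages = detect_throttling({"memory_bandwidth": 0.95, "fabric_credits": 0.4})
limits = {"memory_bandwidth": 1.0}
for msg in messages:
    apply_throttling(msg, limits)
print(messages, limits)   # ['THROTTLE memory_bandwidth'] {'memory_bandwidth': 0.75}
```
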
  • Publication number: 20210073138
    Abstract: Embodiments of the invention include a machine-readable medium having stored thereon at least one instruction, which if performed by a machine causes the machine to perform a method that includes decoding, with a node, an invalidate instruction; and executing, with the node, the invalidate instruction for invalidating a memory range specified across a fabric interconnect.
    Type: Application
    Filed: November 17, 2020
    Publication date: March 11, 2021
    Applicant: Intel Corporation
    Inventors: Karthik Kumar, Thomas Willhalm, Francesc Guim Bernat, Brian J. Slechta
  • Patent number: 10944689
    Abstract: There is disclosed in one example a communication apparatus, including: a telemetry interface; a management interface; and an edge gateway configured to: identify diverted traffic, wherein the diverted traffic includes traffic to be serviced by an edge microcloud configured to provide a plurality of services; receive telemetry via the telemetry interface; use the telemetry to anticipate a future per-service demand within the edge microcloud; compute a scale for a resource to meet the future per-service demand; and operate the management interface to instruct the edge microcloud to perform the scale before the future per-service demand occurs.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: March 9, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Mark A. Schmisseur, Timothy Verrall
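
A minimal sketch of the anticipate-then-scale idea in the abstract, assuming a naive trend forecast and hypothetical names (forecast_demand, compute_scale); any real deployment would use richer telemetry models:

```python
import math

def forecast_demand(history: list[float]) -> float:
    """Naive forecast: extrapolate the most recent trend one interval ahead."""
    if len(history) < 2:
        return history[-1] if history else 0.0
    return history[-1] + (history[-1] - history[-2])

def compute_scale(history: list[float], per_instance_capacity: float) -> int:
    """Number of service instances needed to meet the anticipated demand."""
    return max(1, math.ceil(forecast_demand(history) / per_instance_capacity))

# Requests/second observed via telemetry for one service in the edge microcloud.
telemetry = [120.0, 180.0, 260.0]
instances = compute_scale(telemetry, per_instance_capacity=100.0)
print(f"scale to {instances} instances before the future demand arrives")   # 4 instances
```
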
  • Publication number: 20210064531
    Abstract: Methods and apparatus for software-defined coherent caching of pooled memory. The pooled memory is implemented in an environment having a disaggregated architecture where compute resources such as compute platforms are connected to disaggregated memory via a network or fabric. Software-defined caching policies are implemented in hardware in a processor SoC or discrete device such as a Network Interface Controller (NIC) by programming logic in an FPGA or accelerator on the SoC or discrete device. The programmed logic is configured to implement software-defined caching policies in hardware for effecting disaggregated memory (DM) caching in an associated DM cache of at least a portion of an address space allocated for the software application in the disaggregated memory.
    Type: Application
    Filed: November 9, 2020
    Publication date: March 4, 2021
    Inventors: Francesc Guim Bernat, Karthik Kumar, Alexander Bachmutsky, Zhongyan Lu, Thomas Willhalm
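
An illustrative software model (CachePolicy, DisaggregatedMemoryCache are assumed names) of a software-defined caching policy deciding which pooled-memory addresses may be cached locally; in the filing this logic is programmed into an FPGA or accelerator rather than run in Python:

```python
from dataclasses import dataclass

@dataclass
class CachePolicy:
    app_id: str
    addr_base: int       # start of the application's disaggregated-memory range
    addr_size: int       # size of the range that may be cached locally
    write_back: bool     # write-back vs. write-through to the remote pool

class DisaggregatedMemoryCache:
    """Caches remote (pooled) memory lines only when a programmed policy
    says the address belongs to a cacheable range for some application."""
    def __init__(self, policies: list[CachePolicy]) -> None:
        self.policies = policies
        self.lines: dict[int, int] = {}

    def should_cache(self, addr: int) -> bool:
        return any(p.addr_base <= addr < p.addr_base + p.addr_size
                   for p in self.policies)

    def read(self, addr: int, remote_read) -> int:
        if addr in self.lines:
            return self.lines[addr]                 # local DM-cache hit
        value = remote_read(addr)                   # fetch over the network/fabric
        if self.should_cache(addr):
            self.lines[addr] = value
        return value

policy = CachePolicy("app-42", addr_base=0x1000, addr_size=0x1000, write_back=True)
cache = DisaggregatedMemoryCache([policy])
print(cache.read(0x1800, remote_read=lambda a: 7), cache.lines)
```
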
  • Patent number: 10936039
    Abstract: In one embodiment, an apparatus of an edge computing system includes memory that includes instructions and processing circuitry coupled to the memory. The processing circuitry implements the instructions to process a request to execute at least a portion of a workflow on pooled computing resources, the workflow being associated with a particular tenant, determine an amount of power to be allocated to particular resources of the pooled computing resources for execution of the portion of the workflow based on a power budget associated with the tenant and a current power cost, and control allocation of the determined amount of power to the particular resources of the pooled computing resources during execution of the portion of the workflow.
    Type: Grant
    Filed: June 19, 2018
    Date of Patent: March 2, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Suraj Prabhakaran, Timothy Verrall, Karthik Kumar, Mark A. Schmisseur
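
A minimal sketch of the per-tenant power allocation described in the abstract, under assumed names (allocate_power) and a simple proportional-scaling policy:

```python
def allocate_power(tenant_budget_w: float, current_cost_per_w: float,
                   resource_requests_w: dict[str, float],
                   spend_limit: float) -> dict[str, float]:
    """Grant each pooled resource its requested power, scaled down if the
    tenant's power budget or spending limit would be exceeded."""
    requested = sum(resource_requests_w.values())
    affordable = min(tenant_budget_w, spend_limit / current_cost_per_w)
    scale = min(1.0, affordable / requested) if requested else 0.0
    return {res: watts * scale for res, watts in resource_requests_w.items()}

# A tenant with a 300 W budget and a $5 spend limit at $0.02 per watt.
grants = allocate_power(tenant_budget_w=300.0, current_cost_per_w=0.02,
                        resource_requests_w={"cpu-pool": 200.0, "fpga-pool": 150.0},
                        spend_limit=5.0)
print(grants)   # both requests scaled to fit the 250 W affordable ceiling
```
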
  • Patent number: 10937302
    Abstract: Methods, devices, and systems for monitoring control panels of a fire control system are described herein. In some examples, one or more embodiments include a computing device comprising a memory, a processor configured to execute instructions stored in the memory to receive, via a gateway device, data from a plurality of fire control panels of a fire control system, and detect an event associated with one of the fire control panels based on the received data, and a user interface configured to display information associated with the detected event.
    Type: Grant
    Filed: July 25, 2019
    Date of Patent: March 2, 2021
    Assignee: Honeywell International Inc.
    Inventors: Deepika Sahai, Simon Foulkes, Karthik Kumar Davanam, Amit Jain, Rajesh Babu Nalukurthy
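
A simple illustrative sketch (panel message format, states, and detect_events are assumptions) of scanning gateway-delivered panel data for events that the user interface would then display:

```python
# Illustrative abnormal states that count as a detected event.
ALARM_STATES = {"ALARM", "SUPERVISORY", "TROUBLE"}

def detect_events(panel_messages: list[dict]) -> list[dict]:
    """Scan data received from fire control panels via the gateway and
    return one event record per panel reporting an abnormal state."""
    events = []
    for msg in panel_messages:
        if msg.get("state") in ALARM_STATES:
            events.append({"panel_id": msg["panel_id"],
                           "event": msg["state"],
                           "zone": msg.get("zone", "unknown")})
    return events

received = [{"panel_id": "P-01", "state": "NORMAL"},
            {"panel_id": "P-02", "state": "TROUBLE", "zone": "basement"}]
for event in detect_events(received):
    print(event)   # the user interface would display this detected event
```
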
  • Publication number: 20210051096
    Abstract: Technologies for quality of service based throttling in a fabric architecture include a network node of a plurality of network nodes interconnected across the fabric architecture via an interconnect fabric. The network node includes a host fabric interface (HFI) configured to facilitate the transmission of data to/from the network node, monitor quality of service levels of resources of the network node used to process and transmit the data, and detect a throttling condition based on a result of the monitored quality of service levels. The HFI is further configured to generate and transmit a throttling message to one or more of the interconnected network nodes in response to having detected a throttling condition. The HFI is additionally configured to receive a throttling message from another of the network nodes and perform a throttling action on one or more of the resources based on the received throttling message. Other embodiments are described herein.
    Type: Application
    Filed: October 30, 2020
    Publication date: February 18, 2021
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Raj Ramanujan, Brian Slechta
  • Patent number: 10922265
    Abstract: Various embodiments are generally directed to an apparatus, method and other techniques to receive a transaction request to perform a transaction with the memory, the transaction request including a synchronization indication to indicate utilization of transaction synchronization to perform the transaction. Embodiments may include sending a request to a caching agent to perform the transaction, receiving a response from the caching agent, the response to indicate whether the transaction conflicts or does not conflict with another transaction, and performing the transaction if the response indicates the transaction does not conflict with the other transaction, or delaying the transaction for a period of time if the response indicates the transaction does conflict with the other transaction.
    Type: Grant
    Filed: June 27, 2017
    Date of Patent: February 16, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Nicolae Popovici, Thomas Willhalm
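
A minimal sketch of the conflict check the abstract describes, with assumed names (CachingAgent, perform_transaction) and address-range overlap standing in for the real coherence-level conflict detection:

```python
import time

class CachingAgent:
    """Tracks address ranges touched by in-flight transactions so a new
    transaction can be checked for conflicts before it proceeds."""
    def __init__(self) -> None:
        self.in_flight: list[tuple[int, int]] = []   # (start, end) ranges

    def check_and_register(self, start: int, length: int) -> bool:
        end = start + length
        for s, e in self.in_flight:
            if start < e and end > s:       # overlapping range => conflict
                return False
        self.in_flight.append((start, end))
        return True

def perform_transaction(agent: CachingAgent, start: int, length: int,
                        backoff_s: float = 0.001) -> str:
    """Perform the transaction if the caching agent reports no conflict,
    otherwise delay it for a period of time."""
    if agent.check_and_register(start, length):
        return "performed"
    time.sleep(backoff_s)                    # back off before a later retry
    return "delayed"

agent = CachingAgent()
print(perform_transaction(agent, 0x100, 64))   # performed
print(perform_transaction(agent, 0x120, 64))   # delayed (overlaps the first)
```
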
  • Patent number: 10915791
    Abstract: Technology for a memory controller is described. The memory controller can receive a request to store training data. The request can include a model identifier (ID) that identifies a model that is associated with the training data. The memory controller can send a write request to store the training data associated with the model ID in a memory region in a pooled memory that is allocated for the model ID. The training data that is stored in the memory region in the pooled memory can be addressable based on the model ID.
    Type: Grant
    Filed: December 27, 2017
    Date of Patent: February 9, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Mark A. Schmisseur, Thomas Willhalm
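
A small sketch of the model-ID-addressable storage idea, using an in-memory dictionary as a stand-in for the pooled memory region (PooledMemory and its methods are assumed names):

```python
class PooledMemory:
    """Pooled memory in which training data is addressed by model ID:
    each model ID maps to its own allocated region."""
    def __init__(self) -> None:
        self.regions: dict[str, list[bytes]] = {}

    def write_training_data(self, model_id: str, data: bytes) -> None:
        # Allocate a region for the model ID on first use, then append to it.
        self.regions.setdefault(model_id, []).append(data)

    def read_training_data(self, model_id: str) -> list[bytes]:
        # Data is retrieved by model ID rather than by physical address.
        return self.regions.get(model_id, [])

pool = PooledMemory()
pool.write_training_data("resnet50-v2", b"\x01\x02\x03")
pool.write_training_data("resnet50-v2", b"\x04\x05")
print(len(pool.read_training_data("resnet50-v2")))   # 2 batches stored for the model
```
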
  • Publication number: 20210034130
    Abstract: Examples described herein relate to management of battery use by one or more computing resources in the event of a power outage. Data used by one or more computing resources can be backed up using battery power. Battery power is allocated to data back-up operations based at least on one or more of: the criticality level of the data, the priority of the application that processes the data, or the priority level of the resource. The computing resource can back up data to a persistent storage media. The computing resource can store a log of data that is backed up or not backed up. The log can be used by the computing resource to access the backed-up data for continuing to process the data and to determine what data is not available for processing.
    Type: Application
    Filed: July 29, 2019
    Publication date: February 4, 2021
    Inventors: Francesc Guim Bernat, Suraj Prabhakaran, Karthik Kumar, Uzair Qureshi, Timothy Verrall
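
A minimal sketch of criticality-ordered back-up under a battery budget, with a log of what was and was not persisted (BackupItem and backup_on_outage are assumed names):

```python
from dataclasses import dataclass

@dataclass
class BackupItem:
    name: str
    criticality: int     # lower number = more critical
    energy_wh: float     # battery energy needed to persist this data

def backup_on_outage(items: list[BackupItem], battery_wh: float) -> dict:
    """Spend remaining battery on the most critical data first and keep a
    log of what was and was not persisted."""
    log = {"backed_up": [], "skipped": []}
    for item in sorted(items, key=lambda i: i.criticality):
        if item.energy_wh <= battery_wh:
            battery_wh -= item.energy_wh
            log["backed_up"].append(item.name)     # flushed to persistent media
        else:
            log["skipped"].append(item.name)       # not recoverable after restart
    return log

items = [BackupItem("billing-db", 1, 2.0),
         BackupItem("metrics-cache", 3, 5.0),
         BackupItem("session-state", 2, 1.5)]
print(backup_on_outage(items, battery_wh=4.0))
```
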
  • Publication number: 20210027613
    Abstract: Methods, devices, and systems for monitoring control panels of a fire control system are described herein. In some examples, one or more embodiments include a computing device comprising a memory, a processor configured to execute instructions stored in the memory to receive, via a gateway device, data from a plurality of fire control panels of a fire control system, and detect an event associated with one of the fire control panels based on the received data, and a user interface configured to display information associated with the detected event.
    Type: Application
    Filed: July 25, 2019
    Publication date: January 28, 2021
    Inventors: Deepika Sahai, Simon Foulkes, Karthik Kumar Davanam, Amit Jain, Rajesh Babu Nalukurthy
  • Publication number: 20210011787
    Abstract: Systems and methods for inter-kernel communication using one or more semiconductor devices. The semiconductor devices include a kernel. The kernel may be in an inactive state unless performing an operation. One kernel of a first device may monitor data for an event. Once an event has occurred, the kernel sends an indication to a first inter-kernel communication circuitry. The inter-kernel communication circuitry determines that an activation function of a plurality of activation functions is to be generated, generates the activation function, and transmits the activation function over a peer-to-peer connection to a second kernel of a second device, waking it to perform a function.
    Type: Application
    Filed: September 25, 2020
    Publication date: January 14, 2021
    Inventors: Francesc Guim Bernat, Karthik Kumar, Susanne M. Balle, Mark D. Tetreault
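
A minimal sketch of the monitor / activate / wake flow in the abstract, with assumed function names (monitor_kernel, build_activation, remote_kernel) standing in for the kernels and inter-kernel communication circuitry:

```python
def monitor_kernel(sample: float, threshold: float) -> bool:
    """First kernel: watch incoming data and report whether an event occurred."""
    return sample > threshold

def build_activation(event_type: str) -> dict:
    """Inter-kernel communication circuitry: choose and generate the
    activation function to send to the peer device."""
    functions = {"overflow": {"op": "wake-and-filter"},
                 "timeout":  {"op": "wake-and-flush"}}
    return functions[event_type]

def remote_kernel(activation: dict) -> str:
    """Second kernel: wakes from its inactive state and performs the
    operation named in the activation received over the peer-to-peer link."""
    return f"executed {activation['op']}"

if monitor_kernel(sample=0.97, threshold=0.9):
    activation = build_activation("overflow")
    print(remote_kernel(activation))   # executed wake-and-filter
```
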
  • Publication number: 20210011864
    Abstract: In one embodiment, an apparatus includes: a table to store a plurality of entries, each entry to identify a memory domain of a system and a coherency status of the memory domain; and a control circuit coupled to the table. The control circuit may be configured to receive a request to change a coherency status of a first memory domain of the system, and dynamically update a first entry of the table for the first memory domain to change the coherency status between a coherent memory domain and a non-coherent memory domain, in response to the request. Other embodiments are described and claimed.
    Type: Application
    Filed: September 25, 2020
    Publication date: January 14, 2021
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Alexander Bachmutsky
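
A small sketch of the per-domain coherency table described in the abstract (CoherencyTable and its methods are assumed names), showing an entry being dynamically flipped between coherent and non-coherent in response to a request:

```python
class CoherencyTable:
    """Per-memory-domain coherency status that can be changed at run time."""
    def __init__(self) -> None:
        self.entries: dict[str, str] = {}       # domain -> "coherent"/"non-coherent"

    def add_domain(self, domain: str, status: str = "coherent") -> None:
        self.entries[domain] = status

    def request_change(self, domain: str, status: str) -> None:
        if domain not in self.entries:
            raise KeyError(f"unknown memory domain {domain}")
        if status not in ("coherent", "non-coherent"):
            raise ValueError("status must be coherent or non-coherent")
        # Dynamically update the entry in response to the request.
        self.entries[domain] = status

table = CoherencyTable()
table.add_domain("pooled-mem-0", "coherent")
table.request_change("pooled-mem-0", "non-coherent")
print(table.entries)   # {'pooled-mem-0': 'non-coherent'}
```
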
  • Patent number: 10885004
    Abstract: A group of cache lines in cache may be identified as cache lines not to be flushed to persistent memory until all cache line writes for the group of cache lines have been completed.
    Type: Grant
    Filed: June 19, 2018
    Date of Patent: January 5, 2021
    Assignee: Intel Corporation
    Inventors: Karthik Kumar, Francesc Guim Bernat, Thomas Willhalm, Mark A. Schmisseur, Benjamin Graniello
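
A minimal sketch of the grouped-flush idea in this last abstract, under assumed names (CacheLineGroup): dirty lines in the group are held back and flushed to persistent memory only once all expected writes for the group have completed:

```python
class CacheLineGroup:
    """Cache lines in the group are not flushed to persistent memory until
    every write the group is waiting on has completed."""
    def __init__(self, expected_writes: int) -> None:
        self.expected_writes = expected_writes
        self.pending: list[int] = []             # dirty line addresses held back

    def write(self, line_addr: int) -> list[int]:
        self.pending.append(line_addr)
        if len(self.pending) == self.expected_writes:
            flushed, self.pending = self.pending, []
            return flushed                       # flush the whole group at once
        return []                                # keep holding the group back

group = CacheLineGroup(expected_writes=3)
print(group.write(0x100))          # []
print(group.write(0x140))          # []
print(group.write(0x180))          # [256, 320, 384] — group flushed together
```
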