Patents by Inventor Karthik Kumar

Karthik Kumar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10402124
    Abstract: The present disclosure relates to a dynamically composable computing system. The dynamically composable computing system comprises at least one compute sled including a set of respective local computing hardware resources; a plurality of disaggregated memory modules; at least one disaggregated memory acceleration logic configured to perform one or more predefined computations on data stored in one or more of the plurality of disaggregated memory modules; and a resource manager module configured to assemble a composite computing node by associating, in accordance with requirements of a user, at least one of the plurality of disaggregated memory modules with the disaggregated memory acceleration logic to provide at least one accelerated disaggregated memory module and connecting the at least one accelerated disaggregated memory module to the compute sled.
    Type: Grant
    Filed: March 30, 2017
    Date of Patent: September 3, 2019
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Mark Schmisseur, Karthik Kumar, Thomas Willhalm, Lidia Warnes
  • Patent number: 10402330
    Abstract: Examples include a processor including a coherency mode indicating one of a directory-based cache coherency protocol and a snoop-based cache coherency protocol, and a caching agent to monitor a bandwidth of reading from and/or writing data to a memory coupled to the processor, to set the coherency mode to the snoop-based cache coherency protocol when the bandwidth exceeds a threshold, and to set the coherency mode to the directory-based cache coherency protocol when the bandwidth does not exceed the threshold.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: September 3, 2019
    Assignee: Intel Corporation
    Inventors: Karthik Kumar, Mustafa Hajeer, Thomas Willhalm, Francesc Guim Bernat, Benjamin Graniello
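The mode switch in this abstract reduces to a threshold comparison on measured memory bandwidth. A minimal Python sketch of that decision (class, attribute, and threshold names are illustrative assumptions, not from the patent):

```python
class CachingAgent:
    """Toy sketch of the bandwidth-driven coherency-mode switch.
    All names and the threshold value are illustrative assumptions."""

    SNOOP = "snoop-based"
    DIRECTORY = "directory-based"

    def __init__(self, bandwidth_threshold):
        self.threshold = bandwidth_threshold
        self.mode = self.DIRECTORY

    def observe_bandwidth(self, measured):
        # Above the threshold, directory lookups cost extra memory
        # accesses per transaction, so fall back to broadcast snooping;
        # below it, the directory protocol saves snoop traffic.
        self.mode = self.SNOOP if measured > self.threshold else self.DIRECTORY
        return self.mode

agent = CachingAgent(bandwidth_threshold=100)
assert agent.observe_bandwidth(150) == CachingAgent.SNOOP
assert agent.observe_bandwidth(80) == CachingAgent.DIRECTORY
```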
  • Patent number: 10389839
    Abstract: An apparatus comprises a processor to generate, in anticipation of receipt of a read request for data of a data set, a prefetch request to retrieve the data set from a memory device, the prefetch request to comprise at least one parameter indicating a size of the data set. The processor is further to cause transmission of the prefetch request to the memory device and in response to a read request for at least a portion of the data set, request the at least a portion of the data set from a cache storing a copy of the data set, wherein the cache is to store the copy of the data set after the copy is received from the memory device in response to the prefetch request.
    Type: Grant
    Filed: June 1, 2016
    Date of Patent: August 20, 2019
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Raj K. Ramanujan, Brian J. Slechta
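The key idea above is that the prefetch request carries the size of the whole data set, so one request warms the cache for all later reads of that set. A small sketch under assumed names (nothing here is taken from the patent itself):

```python
class SizedPrefetcher:
    """Sketch of a prefetch request that carries the data-set size;
    class and parameter names are illustrative assumptions."""

    def __init__(self, memory_device):
        self.memory = memory_device   # stands in for the memory device: addr -> value
        self.cache = {}

    def prefetch(self, base_addr, size):
        # The size parameter lets a single prefetch request pull the
        # entire data set from the memory device into the cache.
        for addr in range(base_addr, base_addr + size):
            self.cache[addr] = self.memory[addr]

    def read(self, addr):
        # A later read for any portion of the set hits the cached copy
        # instead of going back to the memory device.
        if addr in self.cache:
            return self.cache[addr]
        return self.memory[addr]

mem = {a: a * 2 for a in range(16)}
p = SizedPrefetcher(mem)
p.prefetch(base_addr=4, size=8)      # one request covers addresses 4..11
assert p.read(7) == 14 and 7 in p.cache
```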
  • Patent number: 10387259
    Abstract: An apparatus is described. The apparatus includes a memory controller having a programmable component. The programmable component is to implement a data checking function. The programmable component is to receive and process partial results of the data checking function from two or more DIMM cards that are coupled to the memory controller.
    Type: Grant
    Filed: June 26, 2015
    Date of Patent: August 20, 2019
    Assignee: Intel Corporation
    Inventors: Karthik Kumar, Martin Dimitrov, Thomas Willhalm
  • Publication number: 20190250916
    Abstract: An apparatus is described. The apparatus includes main memory control logic circuitry comprising prefetch intelligence logic circuitry. The prefetch intelligence circuitry is to determine, from a read result of a load instruction, an address for a dependent load that is dependent on the read result, and to direct a read request for the dependent load to a main memory to fetch the dependent load's data.
    Type: Application
    Filed: September 30, 2016
    Publication date: August 15, 2019
    Inventors: Patrick LU, Karthik KUMAR, Thomas WILLHALM, Francesc GUIM BERNAT, Martin P. DIMITROV
  • Patent number: 10372362
    Abstract: The present disclosure relates to a dynamically composable computing system comprising a computing fabric with a plurality of different disaggregated computing hardware resources having respective hardware characteristics. A resource manager has access to the respective hardware characteristics of the different disaggregated computing hardware resources and is configured to assemble a composite computing node by selecting one or more disaggregated computing hardware resources with respective hardware characteristics meeting requirements of an application to be executed on the composite computing node. An orchestrator is configured to schedule the application using the assembled composite computing node.
    Type: Grant
    Filed: March 30, 2017
    Date of Patent: August 6, 2019
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, John Chun Kwok Leung, Mark Schmisseur, Thomas Willhalm
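The resource-matching step described in this abstract can be sketched as a filter over per-resource hardware characteristics. All dictionary keys and names below are illustrative assumptions, not terminology from the patent:

```python
def assemble_composite_node(requirements, resources):
    """Select one disaggregated resource per required type whose
    hardware characteristics meet the application's requirements.
    The data layout here is an illustrative assumption."""
    node = {}
    for rtype, needed in requirements.items():
        for candidate in resources.get(rtype, []):
            # A candidate qualifies if every required characteristic
            # is at least the requested value.
            if all(candidate.get(key, 0) >= val for key, val in needed.items()):
                node[rtype] = candidate
                break
        else:
            raise RuntimeError(f"no {rtype} resource meets the requirements")
    return node

resources = {
    "memory": [{"capacity_gb": 32}, {"capacity_gb": 128}],
    "compute": [{"cores": 16}],
}
node = assemble_composite_node(
    {"memory": {"capacity_gb": 64}, "compute": {"cores": 8}}, resources)
assert node["memory"]["capacity_gb"] == 128
```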
  • Publication number: 20190235773
    Abstract: Examples relate to a memory controller or memory controller device for a memory pool of a computer system, to a management apparatus or management device for the computer system, and to an apparatus or device for a compute node of the computer system, and to corresponding methods and computer programs. The memory pool comprises computer memory that is accessible to a plurality of compute nodes of the computer system via the memory controller. The memory controller comprises interface circuitry for communicating with the plurality of compute nodes. The memory controller comprises control circuitry configured to obtain an access control instruction via the interface circuitry. The access control instruction indicates that access to a portion of the computer memory of the memory pool is to be granted to one or more processes being executed by the plurality of compute nodes of the computer system.
    Type: Application
    Filed: April 9, 2019
    Publication date: August 1, 2019
    Inventors: Mark Schmisseur, Thomas Willhalm, Francesc Guim Bernat, Karthik Kumar, Suraj Prabhakaran
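The access-control flow above can be sketched as a controller that records per-process grants and checks them on every access. All identifiers are illustrative assumptions:

```python
class PoolMemoryController:
    """Sketch of pool-side access control; names are assumptions."""

    def __init__(self, pool_size):
        self.pool = bytearray(pool_size)
        self.grants = {}                      # process id -> [(lo, hi), ...]

    def access_control(self, process_id, start, length):
        # Access-control instruction: grant the process a window
        # of the pooled memory.
        self.grants.setdefault(process_id, []).append((start, start + length))

    def read(self, process_id, addr, length):
        # Serve the read only if it falls inside a granted window.
        for lo, hi in self.grants.get(process_id, []):
            if lo <= addr and addr + length <= hi:
                return bytes(self.pool[addr:addr + length])
        raise PermissionError(f"process {process_id} has no grant covering {addr}")

ctrl = PoolMemoryController(pool_size=1024)
ctrl.access_control(process_id=7, start=0, length=256)
assert ctrl.read(7, 0, 16) == bytes(16)
```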
  • Publication number: 20190229897
    Abstract: Technologies for accelerated key caching in an edge hierarchy include multiple edge appliance devices organized in tiers. An edge appliance device receives a request for a key, such as a private key. The edge appliance device determines whether the key is included in a local key cache and, if not, requests the key from an edge appliance device included in an inner tier of the edge hierarchy. The edge appliance device may request the key from an edge appliance device included in a peer tier of the edge hierarchy. The edge appliance device may activate per-tenant accelerated logic to identify one or more keys in the key cache for eviction. The edge appliance device may activate per-tenant accelerated logic to identify one or more keys for pre-fetching. Those functions of the edge appliance device may be performed by an accelerator such as an FPGA. Other embodiments are described and claimed.
    Type: Application
    Filed: March 29, 2019
    Publication date: July 25, 2019
    Inventors: Timothy Verrall, Thomas Willhalm, Francesc Guim Bernat, Karthik Kumar, Ned M. Smith, Rajesh Poornachandran, Kapil Sood, Tarun Viswanathan, John J. Browne, Patrick Kutch
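The tiered lookup described in this abstract (local cache first, then an inner tier on a miss) can be sketched in a few lines. Class and field names are illustrative assumptions, not from the patent:

```python
class EdgeAppliance:
    """Sketch of tiered key caching in an edge hierarchy;
    all names are illustrative assumptions."""

    def __init__(self, name, inner_tier=None):
        self.name = name
        self.inner_tier = inner_tier   # appliance one tier closer to the core
        self.key_cache = {}

    def get_key(self, key_id):
        # Serve from the local key cache when possible.
        if key_id in self.key_cache:
            return self.key_cache[key_id]
        if self.inner_tier is None:
            raise KeyError(key_id)
        # Miss: request the key from the inner tier, then cache it
        # locally for later requests.
        key = self.inner_tier.get_key(key_id)
        self.key_cache[key_id] = key
        return key

core = EdgeAppliance("core")
core.key_cache["tenant-a"] = b"private-key"
edge = EdgeAppliance("edge", inner_tier=core)
assert edge.get_key("tenant-a") == b"private-key"
assert "tenant-a" in edge.key_cache        # now cached at the edge tier
```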
  • Publication number: 20190227737
    Abstract: Examples relate to a method for a memory module, a method for a memory controller, a method for a processor, to a memory module controller device or apparatus, to a memory controller device or apparatus, to a processor device or apparatus, a memory module, a memory controller, a processor, a computer system and a computer program. The method for the memory module comprises obtaining one or more memory write instructions of a group memory write instruction. The group memory write instruction comprises a plurality of memory write instructions to be executed atomically. The one or more memory write instructions relate to one or more memory addresses associated with memory of the memory module. The method comprises executing the one or more memory write instructions using previously unallocated memory of the memory module. The method comprises obtaining a commit instruction for the group memory write instruction.
    Type: Application
    Filed: December 17, 2018
    Publication date: July 25, 2019
    Inventors: Ginger GILSDORF, Karthik KUMAR, Thomas WILLHALM, Mark SCHMISSEUR, Francesc GUIM BERNAT
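The staged-then-committed write pattern in this abstract can be sketched as follows; real hardware atomicity is far subtler, and every name below is an illustrative assumption:

```python
class MemoryModule:
    """Sketch of atomic group writes staged in spare memory;
    names are illustrative assumptions."""

    def __init__(self, size):
        self.data = [0] * size
        self.staged = {}          # writes held in previously unallocated memory

    def group_write(self, addr, value):
        # Each write of the group lands in scratch space first, so a
        # partially executed group is never visible in self.data.
        self.staged[addr] = value

    def commit(self):
        # The commit instruction publishes the whole group at once.
        for addr, value in self.staged.items():
            self.data[addr] = value
        self.staged.clear()

mod = MemoryModule(size=8)
mod.group_write(0, 11)
mod.group_write(3, 22)
assert mod.data[0] == 0            # nothing visible before commit
mod.commit()
assert mod.data[0] == 11 and mod.data[3] == 22
```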
  • Publication number: 20190230191
    Abstract: Technologies for fulfilling service requests in an edge architecture include an edge gateway device to receive a request from an edge device or an intermediate tier device of an edge network to perform a function of a service by an entity hosting the service. The edge gateway device is to identify one or more input data to fulfill the request by the service and request the one or more input data from an edge resource identified to provide the input data. The edge gateway device is to provide the input data to the entity associated with the request.
    Type: Application
    Filed: March 29, 2019
    Publication date: July 25, 2019
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Petar Torre, Ned Smith
  • Publication number: 20190227978
    Abstract: An apparatus is described. The apparatus includes logic circuitry embedded in at least one of a memory controller, network interface and peripheral control hub to process a function as a service (FaaS) function call embedded in a request. The request is formatted according to a protocol. The protocol allows a remote computing system to access a memory that is coupled to the memory controller without invoking processing cores of a local computing system that the memory controller is a component of.
    Type: Application
    Filed: April 2, 2019
    Publication date: July 25, 2019
    Inventors: Francesc GUIM BERNAT, Karthik KUMAR, Mustafa HAJEER
  • Publication number: 20190228326
    Abstract: The disclosure is generally directed to systems in which numerous devices arranged to provide data are deployed. The system includes a source processing device arranged to receive data from the data provider devices. The source processing device is arranged to process and/or store all or a part of the data based on whether the part of the data can be used to infer the rest of the data. The received data can be identified as either prediction data or response data. A data processing model can be used to generate inferred response data from the prediction data. Where the inferred response data is within an error threshold of the response data, then the prediction data can be stored. As such, the response data can be reproduced using the data processing model.
    Type: Application
    Filed: March 28, 2019
    Publication date: July 25, 2019
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar
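The storage decision above is a single error-threshold test. A sketch under assumed names (the function and data shapes are illustrative, not from the patent):

```python
def ingest(prediction, response, model, error_threshold):
    """Decide whether response data must be stored or can be inferred.
    `model` maps prediction data to an inferred response; all names
    are illustrative assumptions."""
    inferred = model(prediction)
    if abs(inferred - response) <= error_threshold:
        # The response can be reproduced from the prediction data
        # alone, so only the prediction data is kept.
        return ("prediction_only", prediction)
    # Inference too far off: keep the raw response as well.
    return ("prediction_and_response", prediction, response)

doubler = lambda x: 2 * x
assert ingest(3, 6.1, doubler, error_threshold=0.5)[0] == "prediction_only"
assert ingest(3, 9.0, doubler, error_threshold=0.5)[0] == "prediction_and_response"
```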
  • Publication number: 20190220210
    Abstract: Technologies for providing deduplication of data in an edge network includes a compute device having circuitry to obtain a request to write a data set. The circuitry is also to apply, to the data set, an approximation function to produce an approximated data set. Additionally, the circuitry is to determine whether the approximated data set is already present in a shared memory and write, to a translation table and in response to a determination that the approximated data set is already present in the shared memory, an association between a local memory address and a location, in the shared memory, where the approximated data set is already present. Additionally, the circuitry is to increase a reference count associated with the location in the shared memory.
    Type: Application
    Filed: March 28, 2019
    Publication date: July 18, 2019
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Timothy Verrall, Ned Smith
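The write path above (approximate, look up in shared memory, record a translation and bump a reference count on a hit) can be sketched as follows; every identifier is an illustrative assumption:

```python
class ApproximateDedupStore:
    """Sketch of approximation-based deduplication; names are assumptions."""

    def __init__(self, approximate):
        self.approximate = approximate    # the approximation function
        self.shared = {}                  # shared-memory location -> data
        self.refcount = {}
        self.translation = {}             # local address -> shared location

    def write(self, local_addr, data):
        approx = self.approximate(data)
        for loc, stored in self.shared.items():
            if stored == approx:
                # Already present: record the association and bump
                # the reference count instead of storing a duplicate.
                self.translation[local_addr] = loc
                self.refcount[loc] += 1
                return loc
        loc = len(self.shared)
        self.shared[loc] = approx
        self.refcount[loc] = 1
        self.translation[local_addr] = loc
        return loc

store = ApproximateDedupStore(approximate=lambda x: round(x, 1))
a = store.write(0x100, 3.141)
b = store.write(0x200, 3.138)      # approximates to the same value
assert a == b and store.refcount[a] == 2
```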
  • Publication number: 20190222518
    Abstract: Technologies for load balancing on a network device in an edge network are disclosed. According to one embodiment, a network device receives, in the edge network, a request to access a function. The request includes one or more performance requirements. The network device identifies, as a function of an evaluation of the performance requirements and of monitored properties of each device associated with the network device, one or more of the devices to service the request. The network device selects one of the identified devices according to a load balancing policy and sends the request to the selected device.
    Type: Application
    Filed: March 29, 2019
    Publication date: July 18, 2019
    Inventors: Francesc Guim Bernat, Karthik Kumar, Ned M. Smith, Monica Kenguva, Rashmin Patel
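The two steps above (filter by performance requirements, then apply a load-balancing policy over the survivors) can be sketched directly; the property names and least-loaded policy are illustrative assumptions:

```python
def route_request(requirements, devices):
    """Sketch of requirement-filtered load balancing.
    `devices` maps device name -> monitored properties; all keys
    are illustrative assumptions."""
    candidates = [
        name for name, props in devices.items()
        if props["latency_ms"] <= requirements["max_latency_ms"]
    ]
    if not candidates:
        raise RuntimeError("no device satisfies the performance requirements")
    # Load-balancing policy: least-loaded among the qualifying devices.
    return min(candidates, key=lambda name: devices[name]["load"])

devices = {
    "gpu-0": {"latency_ms": 5, "load": 0.9},
    "gpu-1": {"latency_ms": 8, "load": 0.2},
    "cpu-0": {"latency_ms": 40, "load": 0.1},
}
assert route_request({"max_latency_ms": 10}, devices) == "gpu-1"
```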
  • Publication number: 20190220424
    Abstract: Techniques and mechanisms for providing a shared memory which spans an interconnect fabric coupled between compute nodes. In an embodiment, a field-programmable gate array (FPGA) of a first compute node requests access to a memory resource of another compute node, where the memory resource is registered as part of the shared memory. In a response to the request, the first FPGA receives data from a fabric interface which couples the first compute node to an interconnect fabric. Circuitry of the first FPGA performs an operation, based on the data, independent of any requirement that the data first be stored to a shared memory location which is at the first compute node. In another embodiment, the fabric interface includes a cache agent to provide cache data and to provide cache coherency with one or more other compute nodes.
    Type: Application
    Filed: January 12, 2018
    Publication date: July 18, 2019
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Thomas Willhalm, Karthik Kumar, Daniel Rivas Barragan, Patrick Lu
  • Patent number: 10346091
    Abstract: Methods and apparatus related to fabric resiliency support for atomic writes of many store operations to remote nodes are described. In one embodiment, non-volatile memory stores data corresponding to a plurality of write operations. A first node includes logic to perform one or more operations (in response to the plurality of write operations) to cause storage of the data at a second node atomically. The plurality of write operations are atomically bound to a transaction and the data is written to the non-volatile memory in response to release of the transaction. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: July 9, 2019
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Thomas Willhalm, Karthik Kumar, Martin P. Dimitrov, Raj K. Ramanujan
  • Publication number: 20190199620
    Abstract: Technologies for quality of service based throttling in a fabric architecture include a network node of a plurality of network nodes interconnected across the fabric architecture via an interconnect fabric. The network node includes a host fabric interface (HFI) configured to facilitate the transmission of data to/from the network node, monitor quality of service levels of resources of the network node used to process and transmit the data, and detect a throttling condition based on a result of the monitored quality of service levels. The HFI is further configured to generate and transmit a throttling message to one or more of the interconnected network nodes in response to having detected a throttling condition. The HFI is additionally configured to receive a throttling message from another of the network nodes and perform a throttling action on one or more of the resources based on the received throttling message. Other embodiments are described herein.
    Type: Application
    Filed: March 4, 2019
    Publication date: June 27, 2019
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Raj K. Ramanujan, Brian J. Slechta
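The monitor / detect / broadcast / act loop in this abstract can be sketched with two cooperating interfaces. The class name matches the abstract's "host fabric interface", but the fields, message format, and halving factor are illustrative assumptions:

```python
class HostFabricInterface:
    """Sketch of QoS-based fabric throttling; fields and the
    throttling factor are illustrative assumptions."""

    def __init__(self, name, qos_threshold):
        self.name = name
        self.qos_threshold = qos_threshold
        self.peers = []
        self.tx_rate_factor = 1.0          # 1.0 means unthrottled

    def monitor(self, queue_occupancy):
        # Throttling condition: a monitored resource level exceeds
        # its quality-of-service threshold.
        if queue_occupancy > self.qos_threshold:
            for peer in self.peers:
                peer.receive_throttle({"from": self.name, "factor": 0.5})

    def receive_throttle(self, message):
        # Throttling action: reduce this node's transmit rate.
        self.tx_rate_factor = min(self.tx_rate_factor, message["factor"])

a = HostFabricInterface("node-a", qos_threshold=0.8)
b = HostFabricInterface("node-b", qos_threshold=0.8)
a.peers = [b]
a.monitor(queue_occupancy=0.95)    # node-a detects congestion
assert b.tx_rate_factor == 0.5     # node-b backs off
```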
  • Publication number: 20190178585
    Abstract: A support form defining a longitudinal axis is provided. The support form includes a first section, a second substantially solid section, and at least one flow feature form. The first section includes a plurality of unit cells of a first material joined together to form a lattice. The second section includes a second material and surrounds the first section. The at least one flow feature form is defined in the second section and is configured to generate a flow feature on a heat exchanger tube formed by plating the support form.
    Type: Application
    Filed: December 7, 2017
    Publication date: June 13, 2019
    Inventors: Hendrik Pieter Jacobus de Bock, Karthik Kumar Bodla, William Dwight Gerstler, James Albert Tallman, Konrad Roman Weeber
  • Patent number: 10318417
    Abstract: Persistent caching of memory-side cache content for devices, systems, and methods are disclosed and discussed. In a system including both a volatile memory (VM) and a nonvolatile memory (NVM), both mapped to the system address space, software applications directly access the NVM, and a portion of the VM is used as a memory-side cache (MSC) for the NVM. When power is lost, at least a portion of the MSC cache contents is copied to a storage region in the NVM, which is restored to the MSC upon system reboot.
    Type: Grant
    Filed: March 31, 2017
    Date of Patent: June 11, 2019
    Assignee: Intel Corporation
    Inventors: Patrick Lu, Karthik Kumar, Francesc Guim Bernat, Thomas Willhalm
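The save-on-power-loss / restore-on-reboot cycle above can be sketched with a dictionary standing in for the NVM address space. The reserved region name and all identifiers are illustrative assumptions:

```python
class MemorySideCache:
    """Sketch of persisting memory-side-cache (MSC) contents to NVM
    on power loss; names are illustrative assumptions."""

    SAVE_REGION = "__msc_save__"       # reserved storage region in the NVM

    def __init__(self, nvm):
        self.nvm = nvm                 # dict standing in for the NVM address space
        self.msc = {}                  # VM region used as the memory-side cache

    def read(self, addr):
        if addr not in self.msc:       # miss: fill from NVM
            self.msc[addr] = self.nvm[addr]
        return self.msc[addr]

    def on_power_loss(self):
        # Copy (at least part of) the cache contents into the NVM.
        self.nvm[self.SAVE_REGION] = dict(self.msc)
        self.msc = {}

    def on_reboot(self):
        # Restore the saved contents so the cache is warm immediately.
        self.msc = dict(self.nvm.pop(self.SAVE_REGION, {}))

cache = MemorySideCache(nvm={0: "a", 1: "b"})
cache.read(0)
cache.on_power_loss()
cache.on_reboot()
assert 0 in cache.msc              # cache is warm again after reboot
```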
  • Publication number: 20190171387
    Abstract: Techniques and mechanisms for wear leveling across dual inline memory modules (DIMMs) by migrating data using direct memory accesses. In an embodiment, a direct memory access (DMA) controller detects that a metric of accesses to a first page of a first DIMM is outside of some range. Based on the detecting, the DMA controller disables an access to the first page by a processor core. While the access is disabled, the DMA controller performs DMA operations to migrate data from the first page to a second page of a second DIMM. The first page and the second page correspond, respectively, to a first physical address and a second physical address. In another embodiment, an update to address mapping information replaces a first correspondence of a virtual address to the first physical address with a second correspondence of the virtual address to the second physical address.
    Type: Application
    Filed: January 31, 2019
    Publication date: June 6, 2019
    Inventors: Thomas WILLHALM, Francesc GUIM BERNAT, Karthik KUMAR, Benjamin GRANIELLO, Mustafa HAJEER
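The migration flow in this abstract (detect an out-of-range access metric, copy the page to another DIMM, remap the address) can be sketched as follows. The access-count metric and every identifier are illustrative assumptions:

```python
class DmaWearLeveler:
    """Sketch of DMA-based page migration between DIMMs; the metric
    and all names are illustrative assumptions."""

    def __init__(self, dimms, access_limit):
        self.dimms = dimms            # DIMM index -> {page: data}
        self.access_limit = access_limit
        self.counts = {}              # (dimm, page) -> access count
        self.mapping = {}             # virtual address -> (dimm, page)

    def access(self, vaddr):
        loc = self.mapping[vaddr]
        self.counts[loc] = self.counts.get(loc, 0) + 1
        if self.counts[loc] > self.access_limit:
            # Metric out of range: migrate the hot page. (A real DMA
            # controller would first block CPU access to the page.)
            self._migrate(vaddr)
        dimm, page = self.mapping[vaddr]
        return self.dimms[dimm][page]

    def _migrate(self, vaddr):
        src_dimm, page = self.mapping[vaddr]
        dst_dimm = (src_dimm + 1) % len(self.dimms)
        # "DMA copy" the page to the other DIMM, then update the
        # mapping so the same virtual address resolves to the new
        # physical page.
        self.dimms[dst_dimm][page] = self.dimms[src_dimm].pop(page)
        self.mapping[vaddr] = (dst_dimm, page)
        self.counts[(dst_dimm, page)] = 0

lvl = DmaWearLeveler(dimms=[{0: "hot"}, {}], access_limit=2)
lvl.mapping[100] = (0, 0)
for _ in range(3):
    lvl.access(100)                # third access triggers migration
assert lvl.mapping[100] == (1, 0)  # page now lives on the second DIMM
```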