Patents by Inventor Karthik Kumar

Karthik Kumar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220014588
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed that reduce latency and bandwidth consumption when sharing memory across a distributed coherent Edge computing system. The distributed coherent Edge computing system disclosed herein configures a compute express link (CXL) endpoint to share data between memories across an Edge platform. The CXL endpoint configures coherent memory domain(s) of memory addresses, which are initialized from an Edge device connected to the Edge platform. The CXL endpoint also configures coherency rule(s) for the coherent memory domain(s). The CXL endpoint is implemented to snoop the Edge platform in response to read and write requests from the Edge device. The CXL endpoint selectively snoops memory addresses within the coherent memory domain(s) that are defined as coherent based on the coherency rule(s).
    Type: Application
    Filed: September 24, 2021
    Publication date: January 13, 2022
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm
  • Publication number: 20220012088
    Abstract: Techniques for expanded trusted domains are disclosed. In the illustrative embodiment, a trusted domain can be established that includes hardware components from a processor as well as an off-load device. The off-load device may provide compute resources for the trusted domain. The trusted domain can be expanded and contracted on-demand, allowing for a flexible approach to creating and using trusted domains.
    Type: Application
    Filed: September 24, 2021
    Publication date: January 13, 2022
    Inventors: Francesc Guim Bernat, Karthik Kumar, Ravi L. Sahita, Marcos E. Carranza
  • Publication number: 20220014579
    Abstract: A network appliance includes quality of service circuitry to monitor operational context of the network appliance; and injection circuitry to: identify a content insertion slot in the stream; obtain a rule from a rule database; interface with analytics circuitry to determine inputs for the rule based on a context of the stream; interface with rule execution circuitry to execute the rule with the inputs for the rule, wherein the execution of the rule results in a determination to insert content into the stream; and in response to the determination to insert content into the stream: interface with the quality of service circuitry to determine insertable content to insert into the stream; and transmit, over the egress port of the network appliance, the insertable content in the content insertion slot to a device consuming the stream, while buffering the stream in a memory device of the network appliance.
    Type: Application
    Filed: September 24, 2021
    Publication date: January 13, 2022
    Inventors: Alexander Bachmutsky, Francesc Guim Bernat, Karthik Kumar
  • Publication number: 20220004468
    Abstract: An embodiment of an electronic apparatus may comprise one or more substrates, and a controller coupled to the one or more substrates, the controller to allocate a first secure portion of a pooled memory to a first instantiation of an application on a first node, and circuitry coupled to the one or more substrates and the controller, the circuitry to provide a failover interface for a second instantiation of the application on a second node to access the first secure portion of the pooled memory in the event of a failure of the first node. Other embodiments are disclosed and claimed.
    Type: Application
    Filed: September 20, 2021
    Publication date: January 6, 2022
    Applicant: Intel Corporation
    Inventors: Thomas Willhalm, Francesc Guim Bernat, Karthik Kumar, Rita Gupta, Mark Schmisseur, Dimitrios Ziakas
  • Publication number: 20220004330
    Abstract: Examples described herein relate to a network interface device, when operational, configured to: select data of a region of addressable memory addresses to migrate from a first memory pool to a second memory pool to lower a transit time of the data of the region of addressable memory addresses to a computing platform. In some examples, selecting data of a region of addressable memory addresses to migrate from a first memory pool to a second memory pool is based at least in part on one or more of: (a) memory bandwidth used to access the data; (b) latency to access the data from the first memory pool by the computing platform; (c) number of accesses to the data over a window of time by the computing platform; (d) number of accesses to the data over a window of time by other computing platforms; (e) historic congestion to and/or from one or more memory pools accessible to the computing platform; and/or (f) number of different computing platforms that access the data.
    Type: Application
    Filed: September 21, 2021
    Publication date: January 6, 2022
    Inventors: Francesc Guim Bernat, Karthik Kumar
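The selection criteria (a)–(f) in the abstract above could be folded into a single migration score per memory region. The sketch below is purely illustrative: the field names, weights, and threshold are editorial assumptions, not taken from the filing.

```python
from dataclasses import dataclass

@dataclass
class RegionStats:
    """Illustrative per-region telemetry (names are assumptions, not from the patent)."""
    bandwidth_used: float      # (a) memory bandwidth used to access the data
    access_latency_ms: float   # (b) latency from the first memory pool
    local_accesses: int        # (c) accesses by this computing platform
    remote_accesses: int       # (d) accesses by other computing platforms
    congestion_history: float  # (e) historic congestion toward the pool, 0..1
    sharer_count: int          # (f) number of platforms that access the data

def migration_score(s: RegionStats) -> float:
    """Combine the criteria into one score; higher means a stronger
    candidate for migration to a closer (lower-transit-time) pool."""
    return (s.bandwidth_used * s.access_latency_ms
            + s.local_accesses
            + s.congestion_history * 100
            - s.remote_accesses       # data hot on other platforms may be better left pooled
            - s.sharer_count * 10)

def select_regions(stats: dict[str, RegionStats], threshold: float) -> list[str]:
    """Return region ids whose score exceeds the migration threshold."""
    return [rid for rid, s in stats.items() if migration_score(s) > threshold]
```

A hot, mostly-local region scores high and is selected; a cold region shared by many platforms scores low and stays in the remote pool.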
  • Patent number: 11212085
    Abstract: Technologies for accelerated key caching in an edge hierarchy include multiple edge appliance devices organized in tiers. An edge appliance device receives a request for a key, such as a private key. The edge appliance device determines whether the key is included in a local key cache and, if not, requests the key from an edge appliance device included in an inner tier of the edge hierarchy. The edge appliance device may request the key from an edge appliance device included in a peer tier of the edge hierarchy. The edge appliance device may activate per-tenant accelerated logic to identify one or more keys in the key cache for eviction. The edge appliance device may activate per-tenant accelerated logic to identify one or more keys for pre-fetching. Those functions of the edge appliance device may be performed by an accelerator such as an FPGA. Other embodiments are described and claimed.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: December 28, 2021
    Assignee: Intel Corporation
    Inventors: Timothy Verrall, Thomas Willhalm, Francesc Guim Bernat, Karthik Kumar, Ned M. Smith, Rajesh Poornachandran, Kapil Sood, Tarun Viswanathan, John J. Browne, Patrick Kutch
  • Publication number: 20210397999
    Abstract: Methods, apparatus, systems and articles of manufacture to offload execution of a portion of a machine learning model are disclosed. An example apparatus includes processor circuitry to instantiate offload controller circuitry to select a first portion of layers of the machine learning model for execution at a first node and a second portion of the layers for remote execution at a second node, model executor circuitry to execute the first portion of the layers, serialization circuitry to serialize the output of the execution of the first portion of the layers, and a network interface to transmit a request for execution of the machine learning model to the second node, the request including the serialized output of the execution of the first portion of the layers of the machine learning model and a layer identifier identifying the second portion of the layers of the machine learning model.
    Type: Application
    Filed: June 25, 2021
    Publication date: December 23, 2021
    Inventors: Francesc Guim Bernat, Ned M. Smith, Karthik Kumar, Sunil Cheruvu
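The split-execution flow in this abstract (run the first layers locally, serialize the intermediate output, ship it with a layer identifier) can be sketched as follows. Everything here is illustrative: the layers are stand-in callables, and pickle stands in for whatever serialization the serialization circuitry performs.

```python
import pickle

def run_layers(layers, x):
    """Apply a list of single-argument callables (stand-ins for model layers)."""
    for layer in layers:
        x = layer(x)
    return x

def build_offload_request(layers, split: int, model_input):
    """First-node side: execute layers[:split] locally, then package a request
    asking a second node to resume at the layer identifier `split`."""
    intermediate = run_layers(layers[:split], model_input)
    return {
        "layer_id": split,                      # where the remote node resumes
        "payload": pickle.dumps(intermediate),  # serialized intermediate output
    }

def serve_offload_request(layers, request):
    """Second-node side: deserialize and finish the remaining layers."""
    x = pickle.loads(request["payload"])
    return run_layers(layers[request["layer_id"]:], x)
```

Splitting this way gives the same result as running all layers on one node, which is the invariant the offload must preserve.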
  • Publication number: 20210389880
    Abstract: Systems, apparatuses, and methods provide for memory management where an infrastructure processing unit bypasses a central processing unit. Such an infrastructure processing unit determines if incoming packets of memory traffic trigger memory rules stored by the infrastructure processing unit. The incoming packets are routed to the central processing unit in a default mode when the incoming packets do not trigger the memory rules. Conversely, the incoming packets are routed to the infrastructure processing unit and bypass the central processing unit in an inline mode when the incoming packets trigger the memory rules. A memory architecture communicatively coupled to the central processing unit receives a set of atomic transactions from the infrastructure processing unit that bypasses the central processing unit and performs the set of atomic transactions from the infrastructure processing unit.
    Type: Application
    Filed: August 26, 2021
    Publication date: December 16, 2021
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Mark A. Schmisseur
  • Patent number: 11196837
    Abstract: Technologies for fulfilling service requests in an edge architecture include an edge gateway device to receive a request from an edge device or an intermediate tier device of an edge network to perform a function of a service by an entity hosting the service. The edge gateway device is to identify one or more input data to fulfill the request by the service and request the one or more input data from an edge resource identified to provide the input data. The edge gateway device is to provide the input data to the entity associated with the request.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: December 7, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Petar Torre, Ned Smith, Brinda Ganesh, Evan Custodio, Suraj Prabhakaran
  • Publication number: 20210373954
    Abstract: Data management for edge architected computing systems extends current storage and memory schemes of edge resources to expose interfaces to allow a device, such as an endpoint or client device, or another edge resource, to specify criteria for managing data originating from the device and stored in an edge resource, and extends the storage and memory controllers to manage data in accordance with the criteria, including removing stored data that no longer satisfies the criteria. The criteria include a temporal hint to specify a time after which the data can be removed, a physical hint to specify a list of edge resources outside of which the data can be removed, an event-based hint to specify an event after which the data can be removed, and a quality of service condition to modify the time specified in the temporal hint based on a condition, such as memory and storage capacity of the edge resource in which the data is managed.
    Type: Application
    Filed: August 13, 2021
    Publication date: December 2, 2021
    Inventors: Francesc Guim Bernat, Ramanathan Sethuraman, Karthik Kumar, Mark A. Schmisseur, Brinda Ganesh
  • Publication number: 20210377150
    Abstract: A system comprising a traffic handler comprising circuitry to determine that data of a memory request is stored remotely in a memory pool; generate a packet based on the memory request; and direct the packet to a path providing a guaranteed latency for completion of the memory request.
    Type: Application
    Filed: August 17, 2021
    Publication date: December 2, 2021
    Applicant: Intel Corporation
    Inventors: Francois Dugast, Francesc Guim Bernat, Durgesh Srivastava, Karthik Kumar
  • Patent number: 11176091
    Abstract: Techniques and apparatus for providing access to data in a plurality of storage formats are described. In one embodiment, for example, an apparatus may include logic, at least a portion of which is comprised in hardware coupled to at least one memory, to determine a first storage format of a database operation on a database having a second storage format, and perform a format conversion process responsive to the first storage format being different than the second storage format, the format conversion process to translate a virtual address of the database operation to a physical address, and determine a converted physical address comprising a memory address according to the first storage format. Other embodiments are described and claimed.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: November 16, 2021
    Assignee: Intel Corporation
    Inventors: Mark A. Schmisseur, Thomas Willhalm, Francesc Guim Bernat, Karthik Kumar
  • Publication number: 20210349512
    Abstract: In one embodiment, an apparatus includes an interface to couple a plurality of devices of a system, the interface to enable communication according to a Compute Express Link (CXL) protocol, and a power management circuit coupled to the interface. The power management circuit may: receive, from a first device of the plurality of devices, a request according to the CXL protocol for updated power credits; identify at least one other device of the plurality of devices to provide at least some of the updated power credits; and communicate with the first device and the at least one other device to enable the first device to increase power consumption according to the at least some of the updated power credits. Other embodiments are described and claimed.
    Type: Application
    Filed: July 26, 2021
    Publication date: November 11, 2021
    Inventors: Francesc Guim Bernat, Karthik Kumar, Alexander Bachmutsky, Dimitrios Ziakas, Rita D. Gupta
  • Publication number: 20210349840
    Abstract: In one embodiment, an apparatus includes: an interface to couple a plurality of devices of a system and enable communication according to a Compute Express Link (CXL) protocol. The interface may receive a consistent memory request having a type indicator to indicate a type of consistency to be applied to the consistent memory request. A request scheduler coupled to the interface may receive the consistent memory request and schedule it for execution according to the type of consistency, based at least in part on a priority of the consistent memory request and one or more pending consistent memory requests. Other embodiments are described and claimed.
    Type: Application
    Filed: July 26, 2021
    Publication date: November 11, 2021
    Inventors: Karthik Kumar, Francesc Guim Bernat
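The scheduling described in this abstract (order consistent memory requests by priority, with the type indicator breaking ties) can be sketched with a priority queue. This is an illustrative model only: the two consistency types and the tie-break rule are assumptions, not the claimed scheduler.

```python
import heapq
import itertools

class ConsistentRequestScheduler:
    """Illustrative scheduler for consistency-typed memory requests: each
    request carries a type indicator (strict vs relaxed, assumed here) and a
    priority; at equal priority, strict requests run first, then FIFO order."""

    STRICT, RELAXED = 0, 1

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # FIFO tie-break among equal entries

    def submit(self, request, consistency_type: int, priority: int) -> None:
        # Lower tuples pop first: priority, then type, then arrival order.
        heapq.heappush(self._heap, (priority, consistency_type, next(self._seq), request))

    def next_request(self):
        if not self._heap:
            return None
        return heapq.heappop(self._heap)[3]
```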
  • Patent number: 11169853
    Abstract: Technologies for providing dynamic selection of edge and local accelerator resources include a device having circuitry to identify a function of an application to be accelerated, determine one or more properties of an accelerator resource available at the edge of a network where the device is located, and determine one or more properties of an accelerator resource available in the device. Additionally, the circuitry is to determine a set of acceleration selection factors associated with the function, wherein the acceleration selection factors are indicative of one or more objectives to be satisfied in the acceleration of the function. Further, the circuitry is to select, as a function of the one or more properties of the accelerator resource available at the edge, the one or more properties of the accelerator resource available in the device, and the acceleration selection factors, one or more of the accelerator resources to accelerate the function.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: November 9, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Ned Smith, Thomas Willhalm, Timothy Verrall
  • Patent number: 11163682
    Abstract: Systems, methods and apparatuses for distributed consistency memory. In some embodiments, the apparatus comprises at least one monitoring circuit to monitor for memory accesses to an address space; at least one monitoring table to store an identifier of the address space; and at least one hardware core to execute an instruction to enable the monitoring circuit.
    Type: Grant
    Filed: December 29, 2015
    Date of Patent: November 2, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Narayan Ranganathan, Karthik Kumar, Raj K. Ramanujan, Robert G. Blankenship
  • Patent number: 11159454
    Abstract: Technologies for accelerating edge device workloads at a device edge network include a network computing device which includes a processor platform that includes at least one processor which supports a plurality of non-accelerated function-as-a-service (FaaS) operations and an accelerated platform that includes at least one accelerator which supports a plurality of accelerated FaaS (AFaaS) operations. The network computing device is configured to receive a request to perform a FaaS operation, determine whether the received request indicates that an AFaaS operation is to be performed on the received request, and identify compute requirements for the AFaaS operation to be performed. The network computing device is further configured to select an accelerator platform to perform the identified AFaaS operation and forward the received request to the selected accelerator platform to perform the identified AFaaS operation. Other embodiments are described and claimed.
    Type: Grant
    Filed: January 21, 2020
    Date of Patent: October 26, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Anil Rao, Suraj Prabhakaran, Mohan Kumar, Karthik Kumar
  • Patent number: 11157311
    Abstract: Methods, apparatus, systems and machine-readable storage media of an edge computing device which is enabled to access and select the use of local or remote acceleration resources for edge computing processing are disclosed. In an example, an edge computing device obtains first telemetry information that indicates availability of local acceleration circuitry to execute a function, and obtains second telemetry information that indicates availability of a remote acceleration function to execute the function. An estimated time (and cost or other identifiable or estimable considerations) to execute the function at the respective location is identified. The use of the local acceleration circuitry or the remote acceleration resource is selected based on the estimated time and other appropriate factors in relation to a service level agreement.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: October 26, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Ned M. Smith, Thomas Willhalm, Timothy Verrall
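The selection step in the abstract above (compare estimated time and cost for local versus remote acceleration against a service level agreement) can be sketched as a small decision function. The parameters and the cheapest-within-deadline policy are illustrative assumptions, not the claimed method.

```python
def select_accelerator(local_est_ms: float, local_cost: float,
                       remote_est_ms: float, remote_cost: float,
                       sla_deadline_ms: float) -> str:
    """Pick 'local' or 'remote' acceleration from telemetry-derived estimates.
    Options that would miss the SLA deadline are excluded; among the rest the
    cheapest wins, with estimated time as the tie-break."""
    options = []
    if local_est_ms <= sla_deadline_ms:
        options.append((local_cost, local_est_ms, "local"))
    if remote_est_ms <= sla_deadline_ms:
        options.append((remote_cost, remote_est_ms, "remote"))
    if not options:
        raise RuntimeError("no accelerator can satisfy the service level agreement")
    return min(options)[2]
```

For example, when both options meet the deadline the cheaper remote resource is chosen, but once the remote estimate exceeds the deadline the local circuitry wins despite its higher cost.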
  • Publication number: 20210326763
    Abstract: Devices, methods, apparatus, systems, and articles of manufacture to propagate a model in edge architecture are disclosed. An example device includes an interface to access a model received via the edge architecture; at least one memory; instructions in the device; and one or more processors to execute the instructions to: determine a number of attestation responses based on a blockchain associated with the model; determine if the number satisfies a threshold number; initiate an execution of the model in response to verifying that the number satisfies the threshold number; and transmit the model to a plurality of edge appliances in response to the number not satisfying the threshold number.
    Type: Application
    Filed: June 25, 2021
    Publication date: October 21, 2021
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Timothy Verrall
  • Publication number: 20210325954
    Abstract: System and techniques for power-based adaptive hardware reliability on a device are described herein. A hardware platform is divided into multiple partitions. Here, each partition includes a hardware component with an adjustable reliability feature. The several partitions are placed into one of multiple reliability categories. A workload with a reliability requirement is obtained and executed on a partition in a reliability category that satisfies the reliability requirements. A change in operating parameters for the device is detected and the adjustable reliability feature for the partition is modified based on the change in the operating parameters of the device.
    Type: Application
    Filed: June 25, 2021
    Publication date: October 21, 2021
    Inventors: Francesc Guim Bernat, Karthik Kumar, Marcos E. Carranza, Cesar Martinez-Spessot, Mustafa Hajeer