Patents by Inventor Karthik Kumar

Karthik Kumar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220206857
    Abstract: Technologies for providing dynamic selection of edge and local accelerator resources include a device having circuitry to identify a function of an application to be accelerated, determine one or more properties of an accelerator resource available at the edge of a network where the device is located, and determine one or more properties of an accelerator resource available in the device. Additionally, the circuitry is to determine a set of acceleration selection factors associated with the function, wherein the acceleration selection factors are indicative of one or more objectives to be satisfied in the acceleration of the function. Further, the circuitry is to select, as a function of the one or more properties of the accelerator resource available at the edge, the one or more properties of the accelerator resource available in the device, and the acceleration selection factors, one or more of the accelerator resources to accelerate the function.
    Type: Application
    Filed: November 8, 2021
    Publication date: June 30, 2022
    Inventors: Francesc Guim Bernat, Karthik Kumar, Ned Smith, Thomas Willhalm, Timothy Verrall
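The selection described in this abstract amounts to scoring each candidate accelerator's properties against the function's selection factors. A minimal sketch, assuming a simple weighted score over illustrative fields (`latency_ms`, `throughput_gops`) that are not taken from the patent:

```python
# Hypothetical sketch: score each accelerator resource against the
# acceleration selection factors and pick the best match.

def select_accelerator(resources, factors):
    """resources: list of dicts of measured properties; factors: dict of
    objective weights. Higher throughput raises the score, higher latency
    lowers it."""
    def score(res):
        return (factors.get("throughput", 0.0) * res["throughput_gops"]
                - factors.get("latency", 0.0) * res["latency_ms"])
    return max(resources, key=score)

local = {"name": "device-gpu", "latency_ms": 2.0, "throughput_gops": 50.0}
edge = {"name": "edge-fpga", "latency_ms": 8.0, "throughput_gops": 400.0}
# A throughput-heavy function favors the edge accelerator.
best = select_accelerator([local, edge], {"throughput": 1.0, "latency": 10.0})
```

With latency weighted heavily instead, the same function would return the local device.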
  • Publication number: 20220210073
    Abstract: Technologies for load balancing on a network device in an edge network are disclosed. An example network device includes circuitry to receive, in an edge network, a request to access a function, the request including one or more performance requirements, identify, as a function of an evaluation of the performance requirements and of monitored properties of each of a plurality of devices associated with the network device, one or more of the plurality of devices to service the request, select one of the identified devices according to a load balancing policy, and send the request to the selected device.
    Type: Application
    Filed: January 4, 2022
    Publication date: June 30, 2022
    Inventors: Francesc Guim Bernat, Karthik Kumar, Ned M. Smith, Monica Kenguva, Rashmin Patel
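The two-stage flow in this abstract (filter by performance requirements, then apply a load balancing policy) can be sketched as follows; the field names and the least-loaded policy are illustrative assumptions:

```python
def route_request(request, devices, policy="least_loaded"):
    """Filter devices by the request's performance requirements, then pick
    one according to the load balancing policy."""
    candidates = [d for d in devices
                  if d["latency_ms"] <= request["max_latency_ms"]]
    if not candidates:
        raise RuntimeError("no device can satisfy the request")
    if policy == "least_loaded":
        return min(candidates, key=lambda d: d["load"])
    return candidates[0]  # fall back to first eligible device

devices = [
    {"name": "a", "latency_ms": 5, "load": 0.9},
    {"name": "b", "latency_ms": 4, "load": 0.2},
    {"name": "c", "latency_ms": 20, "load": 0.1},
]
# Device "c" is the least loaded but fails the latency requirement.
chosen = route_request({"max_latency_ms": 10}, devices)
```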
  • Publication number: 20220197819
    Abstract: Examples described herein relate to a memory controller to allocate an address range for a process among multiple memory pools based on service level parameters associated with the address range and performance capabilities of the multiple memory pools. In some examples, the service level parameters include one or more of latency, network bandwidth, amount of memory allocation, memory bandwidth, data encryption use, type of encryption to apply to stored data, use of data encryption to transport data to a requester, memory technology, and/or durability of a memory device.
    Type: Application
    Filed: March 10, 2022
    Publication date: June 23, 2022
    Inventors: Karthik Kumar, Francesc Guim Bernat, Thomas Willhalm, Marcos E. Carranza, Cesar Ignacio Martinez Spessot
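Matching an address range's service level parameters against pool capabilities can be sketched as a simple first-fit check. The pool names and the SLA subset (latency, bandwidth, encryption) are illustrative, not drawn from the filing:

```python
def place_range(sla, pools):
    """Return the first memory pool whose capabilities satisfy every
    service level parameter in the SLA (illustrative subset)."""
    for pool in pools:
        if (pool["latency_ns"] <= sla["max_latency_ns"]
                and pool["bandwidth_gbps"] >= sla["min_bandwidth_gbps"]
                and (not sla["needs_encryption"] or pool["encrypted"])):
            return pool
    raise RuntimeError("no pool satisfies the SLA")

pools = [
    {"name": "cxl-far", "latency_ns": 600, "bandwidth_gbps": 30, "encrypted": False},
    {"name": "ddr-near", "latency_ns": 90, "bandwidth_gbps": 80, "encrypted": True},
]
pool = place_range({"max_latency_ns": 150, "min_bandwidth_gbps": 50,
                    "needs_encryption": True}, pools)
```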
  • Publication number: 20220197729
    Abstract: An apparatus comprising a network interface controller comprising a queue for messages for a thread executing on a host computing system, wherein the queue is dedicated to the thread; and circuitry to send a notification to the host computing system to resume execution of the thread when a monitoring rule for the queue has been triggered.
    Type: Application
    Filed: December 23, 2020
    Publication date: June 23, 2022
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Patrick G. Kutch, Alexander Bachmutsky, Nicolae Octavian Popovici
  • Publication number: 20220200788
    Abstract: Technologies for accelerated key caching in an edge hierarchy include multiple edge appliance devices organized in tiers. An edge appliance device receives a request for a key, such as a private key. The edge appliance device determines whether the key is included in a local key cache and, if not, requests the key from an edge appliance device included in an inner tier of the edge hierarchy. The edge appliance device may request the key from an edge appliance device included in a peer tier of the edge hierarchy. The edge appliance device may activate per-tenant accelerated logic to identify one or more keys in the key cache for eviction. The edge appliance device may activate per-tenant accelerated logic to identify one or more keys for pre-fetching. Those functions of the edge appliance device may be performed by an accelerator such as an FPGA. Other embodiments are described and claimed.
    Type: Application
    Filed: December 23, 2021
    Publication date: June 23, 2022
    Inventors: Timothy Verrall, Thomas Willhalm, Francesc Guim Bernat, Karthik Kumar, Ned M. Smith, Rajesh Poornachandran, Kapil Sood, Tarun Viswanathan, John J. Browne, Patrick Kutch
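The local-miss-then-inner-tier lookup with eviction can be sketched with an ordered dict standing in for the key cache. The oldest-first eviction here is a stand-in for the per-tenant accelerated eviction logic the abstract mentions:

```python
from collections import OrderedDict

class TieredKeyCache:
    """Look up a key locally; on a miss, fetch it from the inner tier and
    cache it, evicting the oldest entry when the cache is full."""
    def __init__(self, inner, capacity=2):
        self.inner = inner          # dict standing in for the inner-tier store
        self.capacity = capacity
        self.cache = OrderedDict()  # insertion order approximates age

    def get(self, key_id):
        if key_id in self.cache:
            self.cache.move_to_end(key_id)   # refresh recency on a hit
            return self.cache[key_id]
        value = self.inner[key_id]           # request from the inner tier
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)   # evict the oldest entry
        self.cache[key_id] = value
        return value

inner = {"tenant1/k1": "priv1", "tenant1/k2": "priv2", "tenant1/k3": "priv3"}
cache = TieredKeyCache(inner)
cache.get("tenant1/k1"); cache.get("tenant1/k2"); cache.get("tenant1/k3")
```

After the third lookup the capacity-two cache has evicted `tenant1/k1`.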
  • Patent number: 11366782
    Abstract: An apparatus is described. The apparatus includes logic circuitry embedded in at least one of a memory controller, network interface and peripheral control hub to process a function as a service (FaaS) function call embedded in a request. The request is formatted according to a protocol. The protocol allows a remote computing system to access a memory that is coupled to the memory controller without invoking processing cores of a local computing system that the memory controller is a component of.
    Type: Grant
    Filed: June 30, 2021
    Date of Patent: June 21, 2022
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Mustafa Hajeer
  • Patent number: 11354329
    Abstract: A system for mining real-time data from non-production environments (e.g., test and development environments). The data that is mined/extracted is "live" data that reflects instantaneous changes and modifications to the data. In addition, since embodiments of the present invention provide users/testers with a "live" real-time view of the mined data, the data is stored in temporary storage/non-cache memory as opposed to permanent storage (i.e., cache memory). As a result, once the user/tester consumes the data (i.e., modifies, changes or otherwise conditions the data), the data is deleted from the temporary/non-cache storage location. Thus, embodiments of the invention eliminate the need to provide for and maintain a large database for permanent storage of mined test data.
    Type: Grant
    Filed: October 28, 2019
    Date of Patent: June 7, 2022
    Assignee: BANK OF AMERICA CORPORATION
    Inventors: Sujata Devon Raju, Vinod Kumar Alladi, Bhimeswar Rao Kharade Maratha, Jayadev Mynampati, Parthiban Tiruvayur Shanmugam, Durga Prasad Turaga, Karthik Kumar Venkatasubramanian
  • Publication number: 20220166846
    Abstract: Technologies for managing telemetry and sensor data on an edge networking platform are disclosed. According to one embodiment disclosed herein, a device monitors telemetry data associated with multiple services provided in the edge networking platform. The device identifies, for each of the services and as a function of the associated telemetry data, one or more service telemetry patterns. The device generates a profile including the identified service telemetry patterns.
    Type: Application
    Filed: July 30, 2021
    Publication date: May 26, 2022
    Inventors: Ramanathan Sethuraman, Timothy Verrall, Ned M. Smith, Thomas Willhalm, Brinda Ganesh, Francesc Guim Bernat, Karthik Kumar, Evan Custodio, Suraj Prabhakaran, Ignacio Astilleros Diez, Nilesh K. Jain, Ravi Iyer, Andrew J. Herdrich, Alexander Vul, Patrick G. Kutch, Kevin Bohan, Trevor Cooper
  • Publication number: 20220166847
    Abstract: Technologies for fulfilling service requests in an edge architecture include an edge gateway device to receive a request from an edge device or an intermediate tier device of an edge network to perform a function of a service by an entity hosting the service. The edge gateway device is to identify one or more input data to fulfill the request by the service and request the one or more input data from an edge resource identified to provide the input data. The edge gateway device is to provide the input data to the entity associated with the request.
    Type: Application
    Filed: December 3, 2021
    Publication date: May 26, 2022
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Petar Torre, Ned Smith, Brinda Ganesh, Evan Custodio, Suraj Prabhakaran
  • Patent number: 11343177
    Abstract: Technologies for quality of service based throttling in a fabric architecture include a network node of a plurality of network nodes interconnected across the fabric architecture via an interconnect fabric. The network node includes a host fabric interface (HFI) configured to facilitate the transmission of data to/from the network node, monitor quality of service levels of resources of the network node used to process and transmit the data, and detect a throttling condition based on a result of the monitored quality of service levels. The HFI is further configured to generate and transmit a throttling message to one or more of the interconnected network nodes in response to having detected a throttling condition. The HFI is additionally configured to receive a throttling message from another of the network nodes and perform a throttling action on one or more of the resources based on the received throttling message. Other embodiments are described herein.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: May 24, 2022
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Raj Ramanujan, Brian Slechta
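The detect-and-notify step in this abstract (compare monitored QoS levels against limits, then emit a throttling message) can be sketched as below; the metric names and message shape are hypothetical:

```python
def check_throttle(metrics, thresholds):
    """Compare monitored QoS levels against per-resource thresholds and
    build a throttling message for any resource above its limit."""
    over = {r: v for r, v in metrics.items()
            if v > thresholds.get(r, float("inf"))}
    if over:
        return {"type": "THROTTLE", "resources": over}
    return None  # no throttling condition detected

# Memory bandwidth is over its threshold; the link is not.
msg = check_throttle({"mem_bw_util": 0.95, "link_util": 0.40},
                     {"mem_bw_util": 0.85, "link_util": 0.80})
```

A receiving node would then apply a throttling action to each resource named in the message.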
  • Publication number: 20220158943
    Abstract: A traffic flow based map cache refresh may be provided. A computing device may receive a dropped packet message when a packet associated with a flow having a destination and a source was dropped before it reached the destination. Next, in response to receiving the dropped packet message, a map request message may be sent to a Map Server (MS). In response to sending the map request message, a map response message may be received indicating an updated destination for the flow. A map cache may then be refreshed for the source of the flow based on the updated destination from the received map response message.
    Type: Application
    Filed: November 17, 2020
    Publication date: May 19, 2022
    Applicant: Cisco Technology, Inc.
    Inventors: Prakash C. Jain, Sanjay Kumar Hooda, Karthik Kumar Thatikonda, Denis Neogi, Rajeev Kumar
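The refresh sequence (dropped-packet message, map request to the Map Server, cache update for the flow's source) can be sketched with dicts standing in for the map cache and the Map Server; all identifiers are illustrative:

```python
def refresh_map_cache(map_cache, map_server, dropped):
    """On a dropped-packet message, ask the map server (a dict here) for
    the flow's current destination and refresh the stale cache entry."""
    flow = (dropped["source"], dropped["dest_eid"])
    updated = map_server[dropped["dest_eid"]]   # map request / map reply
    map_cache[flow] = updated                   # refresh the source's entry
    return updated

cache = {("h1", "10.0.0.5"): "border-1"}        # stale mapping
server = {"10.0.0.5": "border-2"}               # destination has moved
new_loc = refresh_map_cache(cache, server,
                            {"source": "h1", "dest_eid": "10.0.0.5"})
```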
  • Patent number: 11336547
    Abstract: Technologies for dynamically managing resources in disaggregated accelerators include an accelerator. The accelerator includes acceleration circuitry with multiple logic portions, each capable of executing a different workload. Additionally, the accelerator includes communication circuitry to receive a workload to be executed by a logic portion of the accelerator and a dynamic resource allocation logic unit to identify a resource utilization threshold associated with one or more shared resources of the accelerator to be used by a logic portion in the execution of the workload, limit, as a function of the resource utilization threshold, the utilization of the one or more shared resources by the logic portion as the logic portion executes the workload, and subsequently adjust the resource utilization threshold as the workload is executed. Other embodiments are also described and claimed.
    Type: Grant
    Filed: April 20, 2021
    Date of Patent: May 17, 2022
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Susanne M. Balle, Rahul Khanna, Sujoy Sen, Karthik Kumar
  • Publication number: 20220150125
    Abstract: Methods, apparatus, systems, and articles of manufacture to manage an edge infrastructure including a plurality of artificial intelligence models are disclosed. An example edge infrastructure apparatus includes a model data structure to identify a plurality of models and associated meta-data from a plurality of circuitry connectable via the edge infrastructure apparatus. The example apparatus includes model inventory circuitry to manage the model data structure to at least one of query for one or more models, add a model, update a model, or remove a model from the model data structure. The example apparatus includes model discovery circuitry to select at least one selected model of the plurality of models identified in the model data structure in response to a query. The example apparatus includes execution logic circuitry to perform inference with the selected model.
    Type: Application
    Filed: December 22, 2021
    Publication date: May 12, 2022
    Inventors: Karthik Kumar, Francesc Guim Bernat, Marcos Carranza, Rita Wouhaybi, Srikathyayani Srikanteswara
  • Publication number: 20220138003
    Abstract: Methods, apparatus, systems and machine-readable storage media of an edge computing device which is enabled to access and select the use of local or remote acceleration resources for edge computing processing are disclosed. In an example, an edge computing device obtains first telemetry information that indicates availability of local acceleration circuitry to execute a function, and obtains second telemetry that indicates availability of a remote acceleration function to execute the function. An estimated time (and cost or other identifiable or estimable considerations) to execute the function at the respective location is identified. The use of the local acceleration circuitry or the remote acceleration resource is selected based on the estimated time and other appropriate factors in relation to a service level agreement.
    Type: Application
    Filed: October 18, 2021
    Publication date: May 5, 2022
    Inventors: Francesc Guim Bernat, Karthik Kumar, Ned M. Smith, Thomas Willhalm, Timothy Verrall
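The local-versus-remote decision can be sketched as comparing estimated completion times against a service level target. The telemetry fields and tie-breaking rules below are assumptions for illustration only:

```python
def choose_execution(local, remote, sla_ms):
    """Pick local or remote acceleration by estimated completion time,
    preferring whichever option meets the SLA."""
    t_local = local["queue_ms"] + local["exec_ms"]
    t_remote = remote["rtt_ms"] + remote["queue_ms"] + remote["exec_ms"]
    if t_local <= sla_ms and t_local <= t_remote:
        return "local"
    if t_remote <= sla_ms:
        return "remote"
    # Neither meets the SLA: fall back to the faster option.
    return "local" if t_local <= t_remote else "remote"

# A busy local accelerator pushes the function to the remote resource.
site = choose_execution({"queue_ms": 30, "exec_ms": 20},
                        {"rtt_ms": 5, "queue_ms": 2, "exec_ms": 8},
                        sla_ms=25)
```

A real implementation would fold cost and other estimable considerations into the same comparison.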
  • Publication number: 20220121556
    Abstract: Systems, methods, articles of manufacture, and apparatus for end-to-end hardware tracing in an Edge network are disclosed. An example compute device includes at least one memory, instructions in the compute device, and processing circuitry to execute the instructions to, in response to detecting an object having a global group identifier, generate monitoring data corresponding to a respective process executing on the compute device, the monitoring data including a process identifier, index the monitoring data having the process identifier to the corresponding global group identifier, synchronize a time stamp of the monitoring data to a network time protocol corresponding to the global group identifier, and transmit the indexed and synchronized monitoring data as tracing data to a tracing datastore.
    Type: Application
    Filed: December 23, 2021
    Publication date: April 21, 2022
    Inventors: Francesc Guim Bernat, Sunil Cheruvu, Tushar Gohad, Karthik Kumar, Ned M. Smith
  • Publication number: 20220121481
    Abstract: Examples described herein relate to offloading, to a switch, service mesh management and selection of the memory pool accessed by services associated with the service mesh. Based on telemetry data of one or more nodes and network traffic, one or more processes can be allocated to execute on the one or more nodes and a memory pool can be selected to store data generated by the one or more processes.
    Type: Application
    Filed: December 24, 2021
    Publication date: April 21, 2022
    Inventors: Francesc Guim Bernat, Karthik Kumar, Alexander Bachmutsky, Marcos E. Carranza, Cesar Ignacio Martinez Spessot
  • Publication number: 20220121566
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for network service management. An example apparatus includes microservice translation circuitry to query, at a first time, a memory address range corresponding to a plurality of services, and generate state information corresponding to the plurality of services at the first time. The example apparatus also includes microservice request circuitry to query, at a second time, the memory address range to identify a memory address state change, the memory address state change indicative of an instantiation request for at least one of the plurality of services, and microservice instantiation circuitry to cause a first compute device to instantiate the at least one of the plurality of services.
    Type: Application
    Filed: December 23, 2021
    Publication date: April 21, 2022
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Alexander Bachmutsky, Marcos Carranza
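The memory-address-range polling this abstract describes can be sketched by diffing two snapshots of a shared region: a newly written address is treated as an instantiation request. Modeling the region as a dict of address to service id is an assumption for illustration:

```python
def detect_instantiation_requests(prev_state, region):
    """Compare two snapshots of a shared memory region (modeled as a dict
    of address -> service id); a newly written address is treated as an
    instantiation request for that service."""
    return [svc for addr, svc in region.items()
            if prev_state.get(addr) != svc and svc is not None]

snapshot = {0x1000: None, 0x1008: None}               # first query: idle
snapshot_later = {0x1000: "svc-video", 0x1008: None}  # state change at 0x1000
requests = detect_instantiation_requests(snapshot, snapshot_later)
```

Each detected request would then be handed to the instantiation circuitry, which causes a compute device to instantiate the service.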
  • Publication number: 20220114032
    Abstract: System and techniques for infrastructure managed workload distribution are described herein. An infrastructure processing unit (IPU) receives a workload that includes a workload definition. The workload definition includes stages of the workload and a performance expectation. The IPU provides the workload, for execution, to a processing unit of a compute node to which the IPU belongs. The IPU monitors execution of the workload to determine that a stage of the workload is performing outside of the performance expectation from the workload definition. In response, the IPU modifies the execution of the workload.
    Type: Application
    Filed: December 22, 2021
    Publication date: April 14, 2022
    Inventors: Francesc Guim Bernat, Karthik Kumar, Alexander Bachmutsky, Marcos E. Carranza, Rita H. Wouhaybi
  • Publication number: 20220114010
    Abstract: Various aspects of methods, systems, and use cases include dynamic edge scheduling at an edge device of a system of edge devices. An edge device may include processing circuitry to execute instructions including operations to determine a set of capabilities and constraints of each of a plurality of remote edge devices. The operations may include determining candidate remote edge devices from the plurality of remote edge devices based on function requirements for a function and the set of capabilities and constraints. The operations may include selecting, from the candidate remote edge devices, a remote edge device to execute the function based on a power efficiency for the system determined using the set of capabilities and constraints.
    Type: Application
    Filed: December 22, 2021
    Publication date: April 14, 2022
    Inventors: Francesc Guim Bernat, Srikathyayani Srikanteswara, Karthik Kumar, Alexander Bachmutsky
  • Publication number: 20220109742
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed to partition neural network models for executing at distributed Edge nodes. An example apparatus includes processor circuitry to perform at least one of first, second, or third operations to instantiate power consumption estimation circuitry to estimate a computation energy consumption for executing the neural network model on a first edge node, network bandwidth determination circuitry to determine a first transmission time for sending an intermediate result from the first edge node to a second or third edge node, power consumption estimation circuitry to estimate a transmission energy consumption for sending the intermediate result to the second or the third edge node, and neural network partitioning circuitry to partition the neural network model into a first portion to be executed at the first edge node and a second portion to be executed at the second or third edge node.
    Type: Application
    Filed: December 17, 2021
    Publication date: April 7, 2022
    Inventors: Karthik Kumar, Francesc Guim Bernat
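The partitioning decision in this abstract combines compute energy at each node with the energy of transmitting the intermediate result. A minimal sketch that tries every split point of a layer sequence; the cost model (watts times seconds plus joules per megabyte) and all numbers are illustrative assumptions:

```python
def best_split(layer_secs, input_mb, local_w, remote_w, link_j_per_mb):
    """layer_secs[i]: run time of layer i; input_mb[i]: size of the tensor
    entering layer i. Split k runs layers [0, k) locally and [k, n)
    remotely; the tensor entering layer k is sent over the link (nothing
    is sent when the whole model runs locally, k == n). Returns the split
    index minimizing estimated energy in joules."""
    n = len(layer_secs)
    def energy(k):
        local_j = sum(layer_secs[:k]) * local_w
        remote_j = sum(layer_secs[k:]) * remote_w
        tx_j = input_mb[k] * link_j_per_mb if k < n else 0.0
        return local_j + remote_j + tx_j
    return min(range(n + 1), key=energy)

# A small intermediate tensor after layer 0 makes splitting there cheap.
split = best_split([1.0, 2.0, 3.0], [10.0, 0.5, 4.0],
                   local_w=1.0, remote_w=0.2, link_j_per_mb=1.0)
```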