Patents by Inventor Karthik Kumar

Karthik Kumar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250244960
    Abstract: Aspects of the disclosed technology include computer-implemented systems and methods for integrating machine-learned generative models with code editing tools. A code editor is configured to execute computer-executable code within code cells of a code editor interface that includes a first interface portion and a second interface portion. The first interface portion is configured to receive user input for defining and editing a set of code cells, each of which is independently executable by the code editor application. The second interface portion is configured to receive user input for defining and submitting user queries to a machine-learned generative model. The code editor is configured to modify at least one code cell of the set of code cells based at least in part on an output of the machine-learned generative model in response to a user query.
    Type: Application
    Filed: January 30, 2024
    Publication date: July 31, 2025
    Inventors: Piyush Arora, Zi Yun, Karthik Kumar Ramachandran, Salem Elie Haykal
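The application above describes a two-pane editor in which a generative model's output rewrites individual code cells. Below is a minimal sketch of that flow; all class and function names are hypothetical, and a stub callable stands in for the machine-learned model.

```python
# Minimal sketch (hypothetical names): a notebook-style editor where a
# generative-model response can rewrite an individual code cell.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class CodeCell:
    cell_id: int
    source: str

    def execute(self) -> dict:
        # Each cell is independently executable; exec() stands in for the
        # editor's real execution backend.
        namespace: dict = {}
        exec(self.source, namespace)
        return namespace

@dataclass
class CodeEditor:
    # First interface portion: the editable set of code cells.
    cells: List[CodeCell] = field(default_factory=list)
    # Second interface portion: a query channel to a machine-learned model,
    # stubbed here as any callable from prompt text to generated code.
    model: Callable[[str], str] = lambda prompt: "print('generated')"

    def submit_query(self, query: str, target_cell_id: int) -> None:
        # Modify at least one cell based on the model's output for the query.
        generated = self.model(f"{query}\n\nCurrent cell:\n"
                               f"{self.cells[target_cell_id].source}")
        self.cells[target_cell_id].source = generated

editor = CodeEditor(cells=[CodeCell(0, "x = 1"), CodeCell(1, "print(x)")])
editor.submit_query("rewrite this cell to print a greeting", target_cell_id=1)
editor.cells[1].execute()  # runs the model-modified cell in isolation
```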
  • Patent number: 12375390
    Abstract: A system comprising a traffic handler comprising circuitry to determine that data of a memory request is stored remotely in a memory pool; generate a packet based on the memory request; and direct the packet to a path providing a guaranteed latency for completion of the memory request.
    Type: Grant
    Filed: August 17, 2021
    Date of Patent: July 29, 2025
    Assignee: Intel Corporation
    Inventors: Francois Dugast, Francesc Guim Bernat, Durgesh Srivastava, Karthik Kumar
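A minimal sketch of the traffic-handler decision this patent describes: requests whose data lives in a remote memory pool are packetized and steered to a latency-guaranteed path. The pool boundary, packet format, and path names below are assumptions for illustration.

```python
# Hypothetical sketch of the traffic-handler decision: remote memory
# requests are packetized and steered to a path with a latency guarantee.
from dataclasses import dataclass

REMOTE_POOL_START = 0x1_0000_0000  # assumed boundary of pooled (remote) memory

@dataclass
class MemoryRequest:
    address: int
    is_write: bool

def is_remote(req: MemoryRequest) -> bool:
    return req.address >= REMOTE_POOL_START

def build_packet(req: MemoryRequest) -> dict:
    return {"addr": req.address, "op": "WR" if req.is_write else "RD"}

def route(req: MemoryRequest) -> str:
    # Direct packets for remotely stored data to the guaranteed-latency path;
    # local requests never leave the node.
    if is_remote(req):
        packet = build_packet(req)
        return f"guaranteed-latency path <- {packet}"
    return "local memory controller"

print(route(MemoryRequest(address=0x2_0000_0000, is_write=False)))
print(route(MemoryRequest(address=0x1000, is_write=True)))
```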
  • Patent number: 12348424
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed for edge data prioritization. An example apparatus includes at least one memory, instructions, and processor circuitry to at least one of execute or instantiate the instructions to identify an association of a data packet with a data stream based on one or more data stream parameters included in the data packet corresponding to the data stream, the data packet associated with a first priority, execute a model based on the one or more data stream parameters to generate a model output, determine a second priority of at least one of the data packet or the data stream based on the model output, the model output indicative of an adjustment of the first priority to the second priority, and cause transmission of at least one of the data packet or the data stream based on the second priority.
    Type: Grant
    Filed: June 25, 2021
    Date of Patent: July 1, 2025
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Marcos Carranza, Rita Wouhaybi, Cesar Martinez-Spessot
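The following sketch illustrates the prioritization step from the abstract: a packet is tied to a stream by parameters it carries, a model scores the stream, and the packet's priority is adjusted before transmission. The scoring function and thresholds are placeholders, not the patented model.

```python
# Hypothetical sketch of the priority-adjustment step: a model output
# raises or lowers a packet's priority before transmission.
from dataclasses import dataclass

@dataclass
class DataPacket:
    stream_id: str          # data stream parameter carried in the packet
    priority: int           # first priority (lower number = higher priority)

def model(stream_id: str) -> float:
    # Stand-in for the trained model; returns an urgency score in [0, 1].
    return 0.9 if stream_id == "camera-front" else 0.2

def reprioritize(packet: DataPacket) -> DataPacket:
    score = model(packet.stream_id)           # model output
    # Adjustment of the first priority to the second priority based on the score.
    packet.priority = 0 if score > 0.5 else packet.priority + 1
    return packet

queue = [DataPacket("telemetry", 3), DataPacket("camera-front", 3)]
queue = sorted((reprioritize(p) for p in queue), key=lambda p: p.priority)
for p in queue:
    print(f"transmit {p.stream_id} at priority {p.priority}")
```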
  • Patent number: 12332740
    Abstract: Methods and apparatus for application aware memory patrol scrubbing techniques. The method may be performed on a computing system including one or more memory devices and running multiple applications with associated processes. The computing system may be implemented in a multi-tenant environment, where virtual instances of physical resources provided by the system are allocated to separate tenants, such as through virtualization schemes employing virtual machines or containers. Quality of Service (QoS) scrubbing logic and novel interfaces are provided to enable memory scrubbing QoS policies to be applied at the tenant, application, and/or process level. These QoS policies may include memory ranges for which specific policies are applied, as well as bandwidth allocations for performing scrubbing operations. A pattern generator is also provided for generating scrubbing patterns based on observed or predicted memory access patterns and/or predefined patterns.
    Type: Grant
    Filed: June 23, 2021
    Date of Patent: June 17, 2025
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Mark A. Schmisseur, Thomas Willhalm, Marcos E. Carranza
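A rough sketch of the QoS scrubbing idea: each tenant's policy names the memory ranges to patrol and a bandwidth allocation for the scrubber. The structure and field names are hypothetical.

```python
# Hypothetical sketch of per-tenant scrubbing QoS: each policy names the
# memory ranges to patrol and the bandwidth budget the scrubber may use.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ScrubPolicy:
    tenant: str
    ranges: List[Tuple[int, int]]   # (start, end) physical address ranges
    bandwidth_mb_s: int             # scrub bandwidth allocation

def schedule_scrub(policies: List[ScrubPolicy]) -> List[str]:
    plan = []
    for policy in policies:
        for start, end in policy.ranges:
            plan.append(f"{policy.tenant}: scrub 0x{start:x}-0x{end:x} "
                        f"at {policy.bandwidth_mb_s} MB/s")
    return plan

policies = [
    ScrubPolicy("tenant-a", [(0x0000, 0xFFFF)], bandwidth_mb_s=50),
    ScrubPolicy("tenant-b", [(0x10000, 0x1FFFF), (0x40000, 0x4FFFF)], 20),
]
for line in schedule_scrub(policies):
    print(line)
```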
  • Publication number: 20250191086
    Abstract: Various embodiments disclosed herein provide techniques for forecasting commodity consumption at the individual meter level. In various embodiments, a method includes receiving, by a metering device, data associated with consumption of a commodity by a plurality of consumption devices at a location. The method also includes determining, by the metering device, a number of users at the location. Also, the method includes generating, by the metering device using a machine learning model, a forecast of future consumption of the commodity based on the data associated with the consumption of the commodity and the number of users at the location, wherein the machine learning model is trained based on previously recorded data associated with the consumption of the commodity monitored by the metering device.
    Type: Application
    Filed: December 8, 2023
    Publication date: June 12, 2025
    Inventors: Narayana Rao KALURI, Sudhakar YADAGANI, Karthik Kumar VENKATESH, Surbhi GOLHANI
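To make the forecasting flow concrete, here is a toy sketch in which average consumption per user stands in for the trained machine learning model; the actual method trains on the meter's own recorded data.

```python
# Hypothetical sketch of the meter-level forecast: previously recorded
# consumption and the number of users at the location drive a prediction.
from typing import List

def train(history_kwh: List[float], users_history: List[int]) -> float:
    # "Model" here is just average consumption per user, standing in for
    # the machine learning model trained on the meter's own recordings.
    return sum(history_kwh) / max(sum(users_history), 1)

def forecast(per_user_rate: float, expected_users: int) -> float:
    return per_user_rate * expected_users

rate = train(history_kwh=[12.0, 14.5, 13.2, 15.1], users_history=[3, 4, 3, 4])
print(f"forecast for 5 users: {forecast(rate, expected_users=5):.1f} kWh")
```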
  • Publication number: 20250175518
    Abstract: A multi-tenant dynamic secure data region in which encryption keys can be shared by services running in nodes reduces the need to decrypt data as encrypted data is transferred between nodes in the data center. Instead of using a per-process or per-service key created by a memory controller when the service is instantiated (for example, with MKTME), a software stack can specify that a set of processes or compute entities (for example, bit-streams) share a private key that is created and provided by the data center.
    Type: Application
    Filed: January 28, 2025
    Publication date: May 29, 2025
    Applicant: Intel Corporation
    Inventors: Francesc GUIM BERNAT, Karthik KUMAR, Alexander BACHMUTSKY
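A simplified sketch of the shared-key idea from this application (and the related grant listed further down): the data center issues one private key per secure data region and the software stack registers which services may use it, rather than minting a key per process. The service names and API below are illustrative only.

```python
# Hypothetical sketch: the data center issues one key per secure data
# region and the software stack registers which services may share it.
import secrets
from typing import Dict, Set

class RegionKeyService:
    def __init__(self) -> None:
        self._keys: Dict[str, bytes] = {}
        self._members: Dict[str, Set[str]] = {}

    def create_region(self, region: str, services: Set[str]) -> None:
        # One private key for the whole set of processes / compute entities,
        # rather than a per-service key minted by the memory controller.
        self._keys[region] = secrets.token_bytes(32)
        self._members[region] = set(services)

    def key_for(self, region: str, service: str) -> bytes:
        if service not in self._members[region]:
            raise PermissionError(f"{service} is not in region {region}")
        return self._keys[region]

svc = RegionKeyService()
svc.create_region("tenant-a-region", {"ingest", "transform", "serve"})
k1 = svc.key_for("tenant-a-region", "ingest")
k2 = svc.key_for("tenant-a-region", "serve")
assert k1 == k2  # one shared key: no re-encryption as data moves between nodes
```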
  • Patent number: 12314188
    Abstract: The platform data aging for adaptive memory scaling described herein provides technical solutions for technical problems facing power management for electronic device processors. Technical solutions described herein include improved processor power management based on a memory region life-cycle (e.g., short-lived, long-lived, static). In an example, a short-term memory request is allocated to a short-term memory region, and that short-term memory region is powered down upon expiration of the lifetime of all short-term memory requests on the short-term memory region. Multiple memory regions may be scaled down (e.g., shut down) or scaled up based on demands for memory capacity and bandwidth.
    Type: Grant
    Filed: June 24, 2021
    Date of Patent: May 27, 2025
    Assignee: Intel Corporation
    Inventors: Karthik Kumar, Francesc Guim Bernat, Mark A. Schmisseur
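The sketch below illustrates the life-cycle idea: short-lived allocations are tracked on a short-term region, and the region is scaled down once every allocation's lifetime has expired. The region model and timing are simplified assumptions.

```python
# Hypothetical sketch of lifetime-aware region scaling: short-lived
# allocations land in a short-term region that powers down once every
# allocation on it has expired.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ShortTermRegion:
    name: str
    allocations: List[float] = field(default_factory=list)  # expiry times
    powered: bool = True

    def allocate(self, now: float, lifetime: float) -> None:
        self.allocations.append(now + lifetime)

    def tick(self, now: float) -> None:
        # Drop expired allocations; scale the region down when none remain.
        self.allocations = [t for t in self.allocations if t > now]
        if not self.allocations:
            self.powered = False

region = ShortTermRegion("short-term-0")
region.allocate(now=0.0, lifetime=5.0)
region.allocate(now=1.0, lifetime=2.0)
region.tick(now=6.0)
print(region.powered)  # False: all short-term requests expired, region scaled down
```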
  • Patent number: 12298430
    Abstract: Technologies for temperature gain control on radar units are described. A method includes determining a first value for an internal operating temperature corresponding to a steady-state of a radar unit. The method further includes causing the radar unit to operate in a first mode that heats the radar unit. The method further includes obtaining a second value for the internal operating temperature at a first time in the first mode and determining a third value indicating a measurement bias associated with the radar unit. The method further includes determining a fourth value using the second value and the third value. The fourth value indicates an updated internal operating temperature of the radar unit. The method further includes determining that the fourth value satisfies a threshold temperature condition corresponding to the first value. The method further includes causing the radar unit to stop operating in the first mode.
    Type: Grant
    Filed: March 2, 2022
    Date of Patent: May 13, 2025
    Assignee: Amazon Technologies, Inc.
    Inventors: Karthik Kumar, Morris Yuanhsiang Hsu, Tianchen Li
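A compact sketch of the warm-up decision described in the abstract, using the patent's first through fourth values: a bias-corrected reading is compared against a threshold derived from the steady-state temperature to decide when to leave the heating mode. The numeric values are illustrative.

```python
# Hypothetical sketch of the warm-up loop: a bias-corrected temperature
# reading is compared against the steady-state target to decide when the
# radar unit leaves the self-heating mode.
def corrected_temp(measured_c: float, bias_c: float) -> float:
    # "Fourth value" = measured value ("second value") adjusted by the
    # unit's measurement bias ("third value").
    return measured_c - bias_c

def should_stop_heating(corrected_c: float, steady_state_c: float,
                        margin_c: float = 1.0) -> bool:
    # Threshold condition derived from the steady-state ("first") value.
    return corrected_c >= steady_state_c - margin_c

steady_state = 45.0          # first value: steady-state operating temperature
reading = 46.2               # second value: reading taken in heating mode
bias = 1.5                   # third value: known measurement bias
temp = corrected_temp(reading, bias)
print(f"corrected {temp:.1f} C, stop heating: "
      f"{should_stop_heating(temp, steady_state)}")
```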
  • Patent number: 12289362
    Abstract: A multi-tenant dynamic secure data region in which encryption keys can be shared by services running in nodes reduces the need to decrypt data as encrypted data is transferred between nodes in the data center. Instead of using a per-process or per-service key created by a memory controller when the service is instantiated (for example, with MKTME), a software stack can specify that a set of processes or compute entities (for example, bit-streams) share a private key that is created and provided by the data center.
    Type: Grant
    Filed: December 26, 2020
    Date of Patent: April 29, 2025
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Alexander Bachmutsky
  • Patent number: 12282366
    Abstract: In one embodiment, an apparatus includes an interface to couple a plurality of devices of a system, the interface to enable communication according to a Compute Express Link (CXL) protocol, and a power management circuit coupled to the interface. The power management circuit may: receive, from a first device of the plurality of devices, a request according to the CXL protocol for updated power credits; identify at least one other device of the plurality of devices to provide at least some of the updated power credits; and communicate with the first device and the at least one other device to enable the first device to increase power consumption according to the at least some of the updated power credits. Other embodiments are described and claimed.
    Type: Grant
    Filed: July 26, 2021
    Date of Patent: April 22, 2025
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Alexander Bachmutsky, Dimitrios Ziakas, Rita D. Gupta
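The credit exchange can be sketched as follows: a device requests additional power credits, the power manager reclaims credits from other devices, and the requester may then raise its consumption. This is a plain-software analogy; the patent describes the exchange over CXL in hardware, and the device names and credit counts are assumptions.

```python
# Hypothetical sketch of the credit exchange: a device asks for extra power
# credits and the power manager reclaims them from other devices first.
from typing import Dict

class PowerManager:
    def __init__(self, credits: Dict[str, int]) -> None:
        self.credits = credits  # current power credits per device

    def request_credits(self, requester: str, amount: int) -> int:
        granted = 0
        # Identify other devices able to donate some of their credits.
        for device, held in self.credits.items():
            if device == requester or granted >= amount:
                continue
            donation = min(held, amount - granted)
            self.credits[device] -= donation
            granted += donation
        # Requester may raise its power consumption by the granted credits.
        self.credits[requester] += granted
        return granted

pm = PowerManager({"accelerator": 2, "nic": 6, "memory-expander": 4})
print(pm.request_credits("accelerator", amount=5), pm.credits)
```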
  • Publication number: 20250117673
    Abstract: Techniques described herein address challenges that arise when using host-executed software to manage vector databases by providing a vector database accelerator and shard management offload logic implemented within hardware and by software executed on device processors and programmable data planes of a programmable network interface device. In one embodiment, a programmable network interface device includes infrastructure management circuitry configured to facilitate data access for a neural network inference engine having a distributed data model via dynamic management of a node associated with the neural network inference engine, the node including a database shard of a vector database.
    Type: Application
    Filed: December 16, 2024
    Publication date: April 10, 2025
    Applicant: Intel Corporation
    Inventors: Anjali Singhai Jain, Tamar Bar-Kanarik, Marcos Carranza, Karthik Kumar, Cristian Florin Dumitrescu, Keren Guy, Patrick Connor
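A toy sketch of shard-aware routing for a distributed vector database: the offload logic maps a query vector to the node holding its shard. The placement function and node names are placeholders, not the accelerator's actual index layout.

```python
# Hypothetical sketch of shard-aware routing for a distributed vector
# database: the offload logic maps a query to the node holding its shard.
from typing import Dict, List

class ShardManager:
    def __init__(self, shard_to_node: Dict[int, str], num_shards: int) -> None:
        self.shard_to_node = shard_to_node
        self.num_shards = num_shards

    def shard_for(self, vector: List[float]) -> int:
        # Toy placement function; a real system would use the index layout.
        return int(sum(vector) * 1000) % self.num_shards

    def route(self, vector: List[float]) -> str:
        return self.shard_to_node[self.shard_for(vector)]

mgr = ShardManager({0: "node-a", 1: "node-b", 2: "node-c"}, num_shards=3)
print(mgr.route([0.12, 0.55, 0.33]))  # node holding the shard for this query
```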
  • Patent number: 12271248
    Abstract: Systems and techniques for power-based adaptive hardware reliability on a device are described herein. A hardware platform is divided into multiple partitions, each of which includes a hardware component with an adjustable reliability feature. Each partition is placed into one of multiple reliability categories. A workload with a reliability requirement is obtained and executed on a partition in a reliability category that satisfies that requirement. When a change in operating parameters for the device is detected, the adjustable reliability feature for the partition is modified based on the change.
    Type: Grant
    Filed: June 25, 2021
    Date of Patent: April 8, 2025
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Marcos E. Carranza, Cesar Martinez-Spessot, Mustafa Hajeer
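A minimal sketch of the placement and adaptation loop: partitions carry a reliability category and an adjustable feature, a workload is placed on a matching partition, and a change in operating parameters retunes the feature. The category names and the ECC-strength field are assumptions.

```python
# Hypothetical sketch of reliability-aware placement: partitions carry a
# reliability category, workloads carry a requirement, and an operating-
# parameter change retunes the partition's adjustable reliability feature.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Partition:
    name: str
    category: str          # e.g. "high", "standard"
    ecc_strength: int      # adjustable reliability feature (assumed)

def place(partitions: List[Partition], required: str) -> Optional[Partition]:
    # Run the workload on a partition whose category satisfies the requirement.
    return next((p for p in partitions if p.category == required), None)

def on_power_change(partition: Partition, on_battery: bool) -> None:
    # Modify the reliability feature when the device's operating parameters change.
    partition.ecc_strength = 1 if on_battery else 2

parts = [Partition("p0", "standard", 1), Partition("p1", "high", 2)]
target = place(parts, required="high")
on_power_change(target, on_battery=True)
print(target)
```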
  • Publication number: 20250103965
    Abstract: An apparatus includes a host interface, a network interface, and programmable circuitry communicably coupled to the host interface and the network interface. The programmable circuitry comprises one or more processors to implement network interface functionality and to: receive a prompt directed to an artificial intelligence (AI) model hosted by a host device communicably coupled to the host interface; apply a prompt tuning model to the prompt to generate an initial augmented prompt; compare the initial augmented prompt for a match with stored data of a prompt augmentation tracking table comprising real-time datacenter trend data and cross-network historical augmentation data from programmable network interface devices in a datacenter hosting the apparatus; generate, in response to identification of a match with the stored data, a final augmented prompt based on the match; and transmit the final augmented prompt to the AI model.
    Type: Application
    Filed: December 6, 2024
    Publication date: March 27, 2025
    Applicant: Intel Corporation
    Inventors: Karthik Kumar, Marcos Carranza, Thomas Willhalm, Patrick Connor
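The augmentation path can be sketched as below: a prompt tuning model produces an initial augmented prompt, the tracking table is consulted for a match, and a final augmented prompt is generated on a hit. The table contents and tuning function are stand-ins, not the patented implementation.

```python
# Hypothetical sketch of the augmentation path: a tuned prompt is checked
# against a tracking table of datacenter trends before reaching the model.
from typing import Dict, Optional

def prompt_tuning_model(prompt: str) -> str:
    # Stand-in for the prompt tuning model producing the initial augmentation.
    return prompt.strip().lower()

def lookup(table: Dict[str, str], augmented: str) -> Optional[str]:
    # The tracking table holds real-time trend data and historical
    # augmentations gathered across network interface devices.
    return table.get(augmented)

def augment(prompt: str, table: Dict[str, str]) -> str:
    initial = prompt_tuning_model(prompt)
    hit = lookup(table, initial)
    # On a match, generate the final augmented prompt from the stored entry.
    return f"{initial}\n[context: {hit}]" if hit else initial

tracking_table = {"summarize gpu utilization": "cluster at 87% load, rising"}
print(augment("Summarize GPU utilization", tracking_table))
```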
  • Publication number: 20250097306
    Abstract: An architecture to perform resource management among multiple network nodes and associated resources is disclosed. Example resource management techniques include those relating to: proactive reservation of edge computing resources; deadline-driven resource allocation; speculative edge QoS pre-allocation; and automatic QoS migration across edge computing nodes.
    Type: Application
    Filed: September 24, 2024
    Publication date: March 20, 2025
    Inventors: Francesc Guim Bernat, Patrick Bohan, Kshitij Arun Doshi, Brinda Ganesh, Andrew J. Herdrich, Monica Kenguva, Karthik Kumar, Patrick G. Kutch, Felipe Pastor Beneyto, Rashmin Patel, Suraj Prabhakaran, Ned M. Smith, Petar Torre, Alexander Vul
  • Patent number: 12254361
    Abstract: Embodiments described herein are generally directed to the use of sidecars to perform dynamic Application Programming Interface (API) contract generation and conversion. In an example, a first sidecar of a source microservice intercepts a first call to a first API exposed by a destination microservice. The first call makes use of a first API technology specified by a first contract and is originated by the source microservice. An API technology is selected from multiple API technologies. The selected API technology is determined to be different than the first API technology. Based on the first contract, a second contract is dynamically generated that specifies an intermediate API that makes use of the selected API technology. A second sidecar of the destination microservice is caused to generate the intermediate API and connect the intermediate API to the first API.
    Type: Grant
    Filed: December 15, 2023
    Date of Patent: March 18, 2025
    Assignee: Intel Corporation
    Inventors: Marcos Carranza, Cesar Martinez-Spessot, Mateo Guzman, Francesc Guim Bernat, Karthik Kumar, Rajesh Poornachandran, Kshitij Arun Doshi
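A small sketch of the sidecar conversion: an intercepted call's API technology is compared against the available technologies, a different one is selected, and an intermediate contract is generated to bridge the two. The contract fields and technology names are simplified placeholders.

```python
# Hypothetical sketch of the sidecar conversion step: an intercepted call
# using one API technology is bridged through a dynamically generated
# intermediate contract in the selected technology.
from dataclasses import dataclass
from typing import List

@dataclass
class Contract:
    technology: str       # e.g. "REST", "gRPC"
    operation: str

def select_technology(available: List[str], current: str) -> str:
    # Pick a technology different from the one the first contract uses.
    return next(t for t in available if t != current)

def generate_intermediate(first: Contract, selected: str) -> Contract:
    # Second contract: same operation, exposed over the selected technology.
    return Contract(technology=selected, operation=first.operation)

first_contract = Contract("REST", "GetInventory")
chosen = select_technology(["REST", "gRPC"], first_contract.technology)
intermediate = generate_intermediate(first_contract, chosen)
print(f"bridge {first_contract.technology} -> {intermediate.technology} "
      f"for {intermediate.operation}")
```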
  • Patent number: 12253948
    Abstract: Methods and apparatus for software-defined coherent caching of pooled memory. The pooled memory is implemented in an environment having a disaggregated architecture where compute resources such as compute platforms are connected to disaggregated memory via a network or fabric. Software-defined caching policies are implemented in hardware in a processor SoC or discrete device such as a Network Interface Controller (NIC) by programming logic in an FPGA or accelerator on the SoC or discrete device. The programmed logic is configured to implement software-defined caching policies in hardware for effecting disaggregated memory (DM) caching in an associated DM cache of at least a portion of an address space allocated for the software application in the disaggregated memory.
    Type: Grant
    Filed: November 9, 2020
    Date of Patent: March 18, 2025
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Alexander Bachmutsky, Zhongyan Lu, Thomas Willhalm
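A software-only sketch of the caching policy: only addresses that fall inside the policy's configured range are cached locally in the disaggregated-memory (DM) cache, and everything else goes to the fabric. The range check and fetch function are illustrative, not the FPGA implementation.

```python
# Hypothetical sketch of a software-defined caching policy for disaggregated
# memory: only addresses inside the policy's range are cached locally.
from typing import Callable, Dict, Tuple

class DMCache:
    def __init__(self, cached_range: Tuple[int, int]) -> None:
        self.cached_range = cached_range     # software-defined policy
        self.lines: Dict[int, bytes] = {}

    def read(self, addr: int, fetch_remote: Callable[[int], bytes]) -> bytes:
        start, end = self.cached_range
        if not (start <= addr < end):
            return fetch_remote(addr)        # bypass: policy says don't cache
        if addr not in self.lines:
            self.lines[addr] = fetch_remote(addr)  # fill on miss
        return self.lines[addr]

def fetch_remote(addr: int) -> bytes:
    return addr.to_bytes(8, "little")        # stand-in for a fabric access

cache = DMCache(cached_range=(0x1000, 0x2000))
cache.read(0x1008, fetch_remote)   # inside the policy range, cached locally
cache.read(0x9000, fetch_remote)   # outside the range, goes to the fabric
print(len(cache.lines))            # 1
```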
  • Patent number: 12254337
    Abstract: Techniques for expanded trusted domains are disclosed. In the illustrative embodiment, a trusted domain can be established that includes hardware components from a processor as well as an off-load device. The off-load device may provide compute resources for the trusted domain. The trusted domain can be expanded and contracted on-demand, allowing for a flexible approach to creating and using trusted domains.
    Type: Grant
    Filed: September 24, 2021
    Date of Patent: March 18, 2025
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Ravi L. Sahita, Marcos E. Carranza
  • Publication number: 20250086424
    Abstract: Deployment of resources utilizing improved mixture of experts (MoE) processing is described. An example of an apparatus includes one or more network ports; one or more direct memory access (DMA) engines; and circuitry for MoE processing in the network, wherein the circuitry includes at least circuitry to track routing of tokens in MoE processing, prediction circuitry to generate predictions regarding MoE processing, including predicting future token loads, and routing management circuitry to manage the routing of the tokens based at least in part on the predictions.
    Type: Application
    Filed: November 20, 2024
    Publication date: March 13, 2025
    Applicant: Intel Corporation
    Inventors: Karthik Kumar, Marcos Carranza, Patrick Connor
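The routing loop can be sketched as follows: per-expert token counts are tracked, a simple recent-average forecast predicts future load, and the next batch is routed to the expert with the lowest predicted load. The forecast is deliberately trivial; the patent's prediction circuitry is not specified here.

```python
# Hypothetical sketch of prediction-assisted MoE routing: per-expert token
# counts feed a simple load forecast that steers new tokens away from
# experts predicted to be busy.
from collections import defaultdict
from typing import Dict, List

class MoERouter:
    def __init__(self, experts: List[str]) -> None:
        self.history: Dict[str, List[int]] = defaultdict(list)
        self.experts = experts

    def record(self, expert: str, tokens: int) -> None:
        self.history[expert].append(tokens)       # tracked routing of tokens

    def predicted_load(self, expert: str) -> float:
        past = self.history[expert][-4:]           # forecast: recent average
        return sum(past) / len(past) if past else 0.0

    def route(self) -> str:
        # Routing management: send the next batch to the expert with the
        # lowest predicted future token load.
        return min(self.experts, key=self.predicted_load)

router = MoERouter(["expert-0", "expert-1"])
router.record("expert-0", 120)
router.record("expert-1", 40)
print(router.route())  # expert-1
```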
  • Publication number: 20250086284
    Abstract: An apparatus includes a host interface, a network interface, and programmable circuitry communicably coupled to the host interface and the network interface. The programmable circuitry can include one or more processors to implement network interface functionality, and a discrete trusted platform module (dTPM) to enable the one or more processors to establish a secure boot mechanism for the apparatus. The one or more processors are to instantiate a virtual TPM (vTPM) manager that is associated with the dTPM, the vTPM manager to host vTPM instances corresponding to one or more virtualized environments hosted on at least one of the programmable circuitry or a host device communicably coupled to the apparatus.
    Type: Application
    Filed: November 22, 2024
    Publication date: March 13, 2025
    Applicant: Intel Corporation
    Inventors: Marcos Carranza, Dario Oliver, Mateo Guzman, Mariano Ortega De Mues, Cesar Martinez-Spessot, Karthik Kumar, Carolyn Wyborny, Yashaswini Raghuram Prathivadi Bhayankaram
  • Publication number: 20250086123
    Abstract: In an embodiment, a network device apparatus is provided that includes packet processing circuitry to determine whether target data associated with a memory access request is stored in a different device than the one identified in the memory access request and, if so, to cause transmission of the memory access request to the different device. The memory access request may comprise an identifier of the requester of the memory access request, and the identifier may comprise a Process Address Space identifier (PASID).
    Type: Application
    Filed: September 24, 2024
    Publication date: March 13, 2025
    Applicant: Intel Corporation
    Inventors: Karthik KUMAR, Francesc GUIM BERNAT
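A minimal sketch of the redirect check: a directory keyed by (PASID, address) tells the packet-processing logic whether the data actually resides on a different device, in which case the request is forwarded there. The directory structure and device names are assumptions for illustration.

```python
# Hypothetical sketch of the redirect check: if the requested data actually
# lives on another device, the request (tagged with its PASID) is forwarded.
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class MemoryAccessRequest:
    pasid: int            # Process Address Space ID of the requester
    address: int
    target_device: str    # device named in the original request

def handle(req: MemoryAccessRequest,
           directory: Dict[Tuple[int, int], str]) -> str:
    actual = directory.get((req.pasid, req.address), req.target_device)
    if actual != req.target_device:
        return f"forward request (PASID {req.pasid}) to {actual}"
    return f"serve locally on {req.target_device}"

directory = {(7, 0x4000): "memory-node-2"}
print(handle(MemoryAccessRequest(7, 0x4000, "memory-node-1"), directory))
print(handle(MemoryAccessRequest(7, 0x8000, "memory-node-1"), directory))
```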