Patents by Inventor Karthik Kumar
Karthik Kumar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250140537
Abstract: Semiconductor processing chambers and systems, as well as methods of cleaning such chambers and systems are provided. Processing chambers and systems include a chamber body that defines a processing region, a liner positioned within the chamber body that defines a liner volume, a faceplate positioned atop the liner, a substrate support disposed within the chamber body, and a cleaning gas source coupled with the liner volume through a cleaning gas plenum and one or more inlet apertures. Systems and chambers include where at least one of the one or more inlet apertures is disposed in the processing region between the faceplate and a bottom wall of the chamber body.
Type: Application
Filed: December 13, 2023
Publication date: May 1, 2025
Applicant: Applied Materials, Inc.
Inventors: Zaoyuan Ge, Manjunath Veerappa Chobari Patil, Pavan Kumar S M, Dinesh Babu, Nuo Wang, Kaili Yu, Xinyi Zhong, Bharati Neelamraju, Liangfa Hu, Neela Ayalasomayajula, Sungwon Ha, Prashant Kumar Kulshreshtha, Amit Bansal, Daemian Raj Benjamin Raj, Badri N. Ramamurthi, Travis Mazzy, Mohammed Salman Mohiuddin, Karthik Suresh Menon, Lihua Wu, Prasath Poomani
-
Patent number: 12289362
Abstract: A multi-tenant dynamic secure data region in which encryption keys can be shared by services running in nodes reduces the need for decrypting data as encrypted data is transferred between nodes in the data center. Instead of using a key per process/service that is created by a memory controller when the service is instantiated (for example, MKTME), a software stack can specify that a set of processes or compute entities (for example, bit-streams) share a private key that is created and provided by the data center.
Type: Grant
Filed: December 26, 2020
Date of Patent: April 29, 2025
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Karthik Kumar, Alexander Bachmutsky
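For illustration only, a minimal Python sketch of the idea in this abstract: a data-center-level key service issues one shared key per named group of compute entities instead of one key per process. All class and entity names are hypothetical, not taken from the patent.

```python
# Sketch (not the patented mechanism): one shared key per registered group of
# processes/bit-streams, issued by a data-center-level service.
import secrets

class RegionKeyService:
    def __init__(self):
        self._keys = {}       # region name -> shared key
        self._members = {}    # region name -> set of entity ids

    def create_region(self, region, entity_ids):
        """The software stack registers a set of compute entities that share one key."""
        self._keys[region] = secrets.token_bytes(32)
        self._members[region] = set(entity_ids)

    def key_for(self, region, entity_id):
        """Only members of the region receive the shared key; others are refused."""
        if entity_id not in self._members.get(region, set()):
            raise PermissionError(f"{entity_id} is not part of region {region}")
        return self._keys[region]

svc = RegionKeyService()
svc.create_region("tenant-a-region", ["svc-1", "svc-2", "fpga-bitstream-7"])
assert svc.key_for("tenant-a-region", "svc-1") == svc.key_for("tenant-a-region", "svc-2")
```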
-
Patent number: 12282366
Abstract: In one embodiment, an apparatus includes an interface to couple a plurality of devices of a system, the interface to enable communication according to a Compute Express Link (CXL) protocol, and a power management circuit coupled to the interface. The power management circuit may: receive, from a first device of the plurality of devices, a request according to the CXL protocol for updated power credits; identify at least one other device of the plurality of devices to provide at least some of the updated power credits; and communicate with the first device and the at least one other device to enable the first device to increase power consumption according to the at least some of the updated power credits. Other embodiments are described and claimed.
Type: Grant
Filed: July 26, 2021
Date of Patent: April 22, 2025
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Karthik Kumar, Alexander Bachmutsky, Dimitrios Ziakas, Rita D. Gupta
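As a rough illustration of the credit-redistribution flow described here (not the CXL protocol itself), the Python sketch below satisfies one device's request for extra power credits by reclaiming spare credits from another device. Device names and budgets are hypothetical.

```python
# Sketch: redistribute power credits from a device with spare allocation to a requester.
class PowerManager:
    def __init__(self, budgets):
        # device id -> (allocated credits, currently used credits)
        self.budgets = dict(budgets)

    def request_credits(self, requester, amount):
        for dev, (alloc, used) in self.budgets.items():
            spare = alloc - used
            if dev != requester and spare >= amount:
                # Donor gives up spare credits; requester may raise its consumption.
                self.budgets[dev] = (alloc - amount, used)
                r_alloc, r_used = self.budgets[requester]
                self.budgets[requester] = (r_alloc + amount, r_used)
                return True
        return False  # no device could donate enough credits

pm = PowerManager({"accelerator-0": (100, 95), "nic-1": (80, 40)})
print(pm.request_credits("accelerator-0", 20))  # True: credits moved from nic-1
```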
-
Publication number: 20250126141
Abstract: Control of network traffic in a network is provided, including classifying a network request from a network source address using request classifiers selected from a plurality of request classifiers based on the network request satisfying classification conditions of the selected request classifiers, associating the network request with each classifier metric corresponding to the selected request classifiers, aggregating the classifier metrics associated with the network request to determine an aggregate request control metric of the network request, and instructing a network traffic controller to operate on the network request based on whether the aggregate request control metric satisfies a request control condition. Each of the plurality of request classifiers is associated in memory with a corresponding classifier metric.
Type: Application
Filed: October 13, 2023
Publication date: April 17, 2025
Inventors: Karthik UTHAMAN, Ashok Kumar NANDOORI
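A minimal Python sketch of the classify-then-aggregate flow in this abstract: every classifier whose condition the request satisfies contributes its metric, and the aggregate drives the traffic-control decision. The classifiers, metrics, and threshold are hypothetical.

```python
# Sketch: aggregate per-classifier metrics into one request control metric.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Classifier:
    condition: Callable[[dict], bool]   # does this request satisfy the classifier?
    metric: float                       # classifier metric kept in memory

classifiers = [
    Classifier(lambda r: r["rate_per_min"] > 1000, 0.6),       # bursty source address
    Classifier(lambda r: r["path"].startswith("/login"), 0.3),
    Classifier(lambda r: r["user_agent"] == "", 0.4),
]

def control(request, block_threshold=0.8):
    aggregate = sum(c.metric for c in classifiers if c.condition(request))
    return "block" if aggregate >= block_threshold else "allow"

print(control({"rate_per_min": 5000, "path": "/login", "user_agent": ""}))  # block
```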
-
Publication number: 20250117673
Abstract: Techniques described herein address challenges that arise when using host-executed software to manage vector databases by providing a vector database accelerator and shard management offload logic that is implemented within hardware and by software executed on device processors and programmable data planes of a programmable network interface device. In one embodiment, a programmable network interface device includes infrastructure management circuitry configured to facilitate data access for a neural network inference engine having a distributed data model via dynamic management of a node associated with the neural network inference engine, the node including a database shard of a vector database.
Type: Application
Filed: December 16, 2024
Publication date: April 10, 2025
Applicant: Intel Corporation
Inventors: Anjali Singhai Jain, Tamar Bar-Kanarik, Marcos Carranza, Karthik Kumar, Cristian Florin Dumitrescu, Keren Guy, Patrick Connor
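The sketch below is a pure-software stand-in for the shard-management step this abstract places on a programmable network interface device: a query vector is routed to the shard whose centroid is nearest, and only that shard is searched. Shard layout and vectors are hypothetical.

```python
# Sketch: route a query to the nearest vector-database shard, then search it.
import math

shards = {
    "shard-0": {"centroid": [0.0, 0.0], "vectors": {"a": [0.1, 0.2], "b": [0.3, -0.1]}},
    "shard-1": {"centroid": [5.0, 5.0], "vectors": {"c": [4.9, 5.2], "d": [5.3, 4.8]}},
}

def route_and_search(query):
    # Pick the shard whose centroid is nearest to the query (shard management step).
    shard_id = min(shards, key=lambda s: math.dist(query, shards[s]["centroid"]))
    # Search only inside that shard (inference-time data access step).
    vectors = shards[shard_id]["vectors"]
    best = min(vectors, key=lambda k: math.dist(query, vectors[k]))
    return shard_id, best

print(route_and_search([5.0, 5.1]))  # ('shard-1', 'c')
```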
-
Patent number: 12271248
Abstract: System and techniques for power-based adaptive hardware reliability on a device are described herein. A hardware platform is divided into multiple partitions. Here, each partition includes a hardware component with an adjustable reliability feature. The several partitions are placed into one of multiple reliability categories. A workload with a reliability requirement is obtained and executed on a partition in a reliability category that satisfies the reliability requirements. A change in operating parameters for the device is detected and the adjustable reliability feature for the partition is modified based on the change in the operating parameters of the device.
Type: Grant
Filed: June 25, 2021
Date of Patent: April 8, 2025
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Karthik Kumar, Marcos E. Carranza, Cesar Martinez-Spessot, Mustafa Hajeer
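A small Python sketch of the placement-and-adjustment loop described here: a workload lands on a partition whose reliability category meets its requirement, and a partition's category is adjusted when the device's operating parameters change. The category names and the power trigger are hypothetical.

```python
# Sketch: reliability-category placement with adjustment on a power change.
RELIABILITY_ORDER = {"best-effort": 0, "standard": 1, "high": 2}

partitions = {"p0": "high", "p1": "standard", "p2": "best-effort"}

def place(workload_requirement):
    for pid, category in partitions.items():
        if RELIABILITY_ORDER[category] >= RELIABILITY_ORDER[workload_requirement]:
            return pid
    raise RuntimeError("no partition satisfies the requirement")

def on_power_change(low_power):
    # On a low-power event, relax the adjustable reliability feature of one partition.
    if low_power:
        partitions["p0"] = "standard"

print(place("high"))      # p0
on_power_change(True)
print(partitions["p0"])   # standard
```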
-
Publication number: 20250104009
Abstract: Various embodiments described herein relate to systems and methods for assessing a condition of a package in a facility. In this regard, first data from one or more first sensors is received once a worker picks the package at a first location in the facility. Further, based on the first data it is determined if the worker is holding the package. If the worker holds the package, second data is received from one or more second sensors such that the one or more second sensors are different from the one or more first sensors. Based on the second data, a package handling pattern for the package is determined. One or more notifications are provided to a mobile device associated with the worker if it is determined that the package handling pattern causes damage to the package.
Type: Application
Filed: September 25, 2023
Publication date: March 27, 2025
Inventors: Karthik Gundlapalli, Sunil Kumar Mishra, Ashwini Kumar Vangala
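A toy Python sketch of the two-stage flow in this abstract: the second sensor read is gated on a "worker is holding the package" check, and a notification is sent when the detected handling pattern could damage the package. Sensor fields and thresholds are hypothetical.

```python
# Sketch: gate second-sensor analysis on a holding check, then notify on a risky pattern.
def is_holding(first_sensor_data):
    return first_sensor_data.get("grip_detected", False)

def handling_pattern(accel_samples, drop_g=3.0):
    return "drop" if max(accel_samples) >= drop_g else "normal"

def assess(first_sensor_data, second_sensor_data, notify):
    if not is_holding(first_sensor_data):
        return
    pattern = handling_pattern(second_sensor_data["accel_g"])
    if pattern == "drop":
        notify("Handle with care: a drop-like motion was detected for this package.")

assess({"grip_detected": True}, {"accel_g": [0.9, 1.1, 4.2]}, print)
```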
-
Publication number: 20250103965
Abstract: An apparatus includes a host interface, a network interface, and programmable circuitry communicably coupled to the host interface and the network interface, the programmable circuitry comprising one or more processors that are to implement network interface functionality and are to receive a prompt directed to an artificial intelligence (AI) model hosted by a host device communicably coupled to the host interface, apply a prompt tuning model to the prompt to generate an initial augmented prompt, compare the initial augmented prompt for a match with stored data of a prompt augmentation tracking table comprising real-time datacenter trend data and cross-network historical augmentation data from programmable network interface devices in a datacenter hosting the apparatus, generate, in response to identification of the match with the stored data, a final augmented prompt based on the match, and transmit the final augmented prompt to the AI model.
Type: Application
Filed: December 6, 2024
Publication date: March 27, 2025
Applicant: Intel Corporation
Inventors: Karthik Kumar, Marcos Carranza, Thomas Willhalm, Patrick Connor
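A minimal Python sketch of the prompt-augmentation flow described here: apply a tuning step, look the initial augmented prompt up in a tracking table, and emit a final augmented prompt when a match exists. The tuning step and table contents are hypothetical placeholders, not the patented models or data.

```python
# Sketch: initial augmentation, tracking-table lookup, final augmented prompt.
def tune(prompt):
    # Placeholder for the prompt tuning model.
    return prompt.strip().lower()

tracking_table = {
    # initial augmented prompt -> augmentation learned from cross-network history
    "summarize the outage report": "summarize the outage report; include affected regions",
}

def augment(prompt):
    initial = tune(prompt)
    match = tracking_table.get(initial)
    return match if match is not None else initial

print(augment("Summarize the outage report "))
```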
-
Patent number: 12259936
Abstract: A method for providing one or more customized experiences to a user profile associated with an application is disclosed. The method comprises querying a key-value pair store for retrieving a key-value pair associated with the user profile. The retrieved key-value pair is loaded into a first memory. Further, one or more segment definitions for the one or more segments are received from a second memory. The one or more segment definitions are based on at least one of the user profile, user behaviour, user transaction on the application, user interaction with the application, and user subscription. Further, the one or more segment definitions and the one or more user events are evaluated in the first memory. The customized experience is provided to the user profile within sub-second latency and the customized experience is updated based on a change in the one or more user events associated with the user profile.
Type: Grant
Filed: August 21, 2024
Date of Patent: March 25, 2025
Assignee: MOENGAGE INC.
Inventors: Arvinder Singh, Nilesh Kumar Soni, Karthik Deivasigamani, Yashwanth Kumar, Ajish Nair
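The Python sketch below illustrates the shape of this flow: the profile's key-value pair is pulled into in-process memory, segment definitions are evaluated against it and recent user events, and an experience is chosen per matched segment. Store contents, segment rules, and experience names are hypothetical.

```python
# Sketch: load a profile key-value pair, evaluate segment rules, pick experiences.
kv_store = {"user:42": {"plan": "premium", "country": "IN"}}   # backing key-value store
segment_definitions = {
    "premium_users": lambda profile, events: profile["plan"] == "premium",
    "cart_abandoners": lambda profile, events: "cart_abandoned" in events,
}
experiences = {"premium_users": "gold-theme", "cart_abandoners": "discount-banner"}

def customize(user_key, events):
    profile = dict(kv_store[user_key])        # loaded into the first (fast) memory
    matched = [s for s, rule in segment_definitions.items() if rule(profile, events)]
    return [experiences[s] for s in matched]

print(customize("user:42", {"cart_abandoned"}))  # ['gold-theme', 'discount-banner']
```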
-
Publication number: 20250094233
Abstract: A disclosed method reduces memory consumption of a trained sequential model. The method includes receiving, from a client application, an initial processing request identifying an input sequence to be processed by the trained sequential model and an initial value for an output size parameter specifying a requested size of output from the trained sequential model. The method further includes sequentially transmitting, to the trained sequential model, multiple partial processing requests based on the initial processing request that each specify a fraction of the initial value as the output size parameter and receiving a sequence of output responses from the trained sequential model generated in response to processing the multiple partial processing requests. The method further provides for returning, to the client application, a final merged response that includes the sequence of output responses.
Type: Application
Filed: September 20, 2023
Publication date: March 20, 2025
Inventors: Wenbin MENG, Hemant KUMAR, Rakesh KELKAR, Karthik RAMAN, Sanjay RAMANUJAN, Kevin Joseph RIEHM, Theodore Dragov TODOROV
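A toy Python sketch of the split-and-merge pattern this abstract describes: one request with a large output-size parameter becomes several sequential partial requests, each asking for a fraction of the output, and the partial responses are merged for the client. The "model" here is a stand-in, not the trained sequential model.

```python
# Sketch: split one large output request into partial requests, then merge responses.
def model(input_sequence, max_output_tokens, offset=0):
    # Toy stand-in for the sequential model: returns numbered tokens.
    return [f"tok{offset + i}" for i in range(max_output_tokens)]

def process(input_sequence, output_size, parts=4):
    per_part = output_size // parts           # fraction of the initial output size value
    merged = []
    for _ in range(parts):
        merged += model(input_sequence, per_part, offset=len(merged))
    return merged                             # final merged response for the client

print(len(process("some long input", output_size=16)))  # 16
```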
-
Publication number: 20250094027
Abstract: A method includes obtaining an indication of a user interface (UI) component of a user interface, and determining an association between the UI component and a dynamic identifier. The method also includes, based on determining the association, determining one or more static properties of one or more parent UI components of the UI component, and generating a component selector for the UI component based on the one or more static properties. The method further includes outputting the component selector for the UI component.
Type: Application
Filed: September 18, 2023
Publication date: March 20, 2025
Inventors: Sathi Babu Peddada, Akash Kumar, Karthik Macherla, Hari Teja Varma Jampana, Ravindra Sunkaranam, Aditya Ramamurthy
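A small Python sketch of the selector-generation idea: when a component's identifier looks dynamic, the selector is built from static properties of its ancestors plus its own tag instead of the unstable id. The component model and the "dynamic id" heuristic are hypothetical.

```python
# Sketch: build a component selector from static parent properties when the id is dynamic.
import re

def is_dynamic(identifier):
    return bool(re.search(r"\d{4,}", identifier or ""))   # e.g. auto-generated suffixes

def selector_for(component):
    if not is_dynamic(component.get("id", "")):
        return f'#{component["id"]}'
    parts = []
    parent = component.get("parent")
    while parent:
        if parent.get("static_class"):                     # a static property of a parent
            parts.append(f'.{parent["static_class"]}')
        parent = parent.get("parent")
    return " > ".join(reversed(parts)) + f' > {component["tag"]}'

form = {"static_class": "checkout-form", "parent": None}
button = {"id": "btn-938271", "tag": "button", "parent": form}
print(selector_for(button))   # .checkout-form > button
```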
-
Publication number: 20250094237
Abstract: A system provides capacity-based load balancing across model endpoints of a cloud-based artificial intelligence (AI) model. The system includes a consumption determination engine executable to determine a net resource consumption for processing tasks in a workload generated by a client application for input to the trained machine learning model. The system also includes a load balancer that determines a distribution of available resource capacity in a shared resource pool comprising compute resources at each of the multiple model endpoints. The load balancer allocates parallelizable tasks of the workload among the compute resources at the multiple model endpoints based on the net resource consumption of the tasks and on the distribution of available resource capacity in the shared resource pool.
Type: Application
Filed: September 20, 2023
Publication date: March 20, 2025
Inventors: Wenbin MENG, Hemant KUMAR, Rakesh KELKAR, Karthik RAMAN, Sanjay RAMANUJAN, Kevin Joseph RIEHM, Theodore Dragov TODOROV
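A minimal Python sketch of capacity-based balancing in this spirit: each task carries an estimated resource cost, and parallelizable tasks are assigned to whichever endpoint currently has the most spare capacity. Task costs and capacity units are hypothetical.

```python
# Sketch: assign tasks to the endpoint with the most remaining capacity.
import heapq

def balance(task_costs, endpoint_capacity):
    # Max-heap of (negative spare capacity, endpoint) so the roomiest endpoint wins.
    heap = [(-cap, ep) for ep, cap in endpoint_capacity.items()]
    heapq.heapify(heap)
    assignment = {}
    for task, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
        spare, ep = heapq.heappop(heap)
        assignment[task] = ep
        heapq.heappush(heap, (spare + cost, ep))   # capacity consumed by this task
    return assignment

print(balance({"t1": 5, "t2": 3, "t3": 8}, {"endpoint-a": 10, "endpoint-b": 12}))
```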
-
Publication number: 20250097306
Abstract: An architecture to perform resource management among multiple network nodes and associated resources is disclosed. Example resource management techniques include those relating to: proactive reservation of edge computing resources; deadline-driven resource allocation; speculative edge QoS pre-allocation; and automatic QoS migration across edge computing nodes.
Type: Application
Filed: September 24, 2024
Publication date: March 20, 2025
Inventors: Francesc Guim Bernat, Patrick Bohan, Kshitij Arun Doshi, Brinda Ganesh, Andrew J. Herdrich, Monica Kenguva, Karthik Kumar, Patrick G. Kutch, Felipe Pastor Beneyto, Rashmin Patel, Suraj Prabhakaran, Ned M. Smith, Petar Torre, Alexander Vul
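As a rough illustration of one listed technique, deadline-driven resource allocation, the Python sketch below picks the edge node that can finish a job before its deadline with the least slack left over. Job and node shapes are hypothetical.

```python
# Sketch: deadline-driven selection of an edge node for a job.
def pick_node(job, nodes, now=0.0):
    feasible = []
    for node in nodes:
        finish = now + node["queue_s"] + job["work_units"] / node["units_per_s"]
        if finish <= job["deadline_s"]:
            feasible.append((job["deadline_s"] - finish, node["name"]))
    return min(feasible)[1] if feasible else None   # tightest-but-feasible node

nodes = [{"name": "edge-1", "queue_s": 2.0, "units_per_s": 10},
         {"name": "edge-2", "queue_s": 0.5, "units_per_s": 4}]
print(pick_node({"work_units": 40, "deadline_s": 8.0}, nodes))  # edge-1
```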
-
Publication number: 20250094240
Abstract: A disclosed method facilitates an increase in utilization with respect to a resource quota allocated to a tenant from a shared resource pool. The method includes transmitting a lease request to a quota service on behalf of the tenant, where the lease request identifies a processing task and specifies a quantity of cloud-based resources requested from the shared resource pool for execution of the processing task. The method further provides for determining, based on a feedback signal received from the quota service, whether grant of the lease request would cause the tenant to exceed a resource quota allocated to the tenant and dynamically decreasing parallelism of active tasks being processed by the cloud-based resources on behalf of the tenant in response to determining that grant of the lease request would cause the tenant to exceed the resource quota limit.
Type: Application
Filed: September 20, 2023
Publication date: March 20, 2025
Inventors: Wenbin MENG, Hemant KUMAR, Rakesh KELKAR, Karthik RAMAN, Sanjay RAMANUJAN, Kevin Joseph RIEHM, Theodore Dragov TODOROV
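A toy Python sketch of the lease-and-back-off loop described here: the tenant asks a quota service for a lease before launching work, and shrinks its active parallelism when the feedback signal indicates the lease would exceed its quota. The quota service and numbers are hypothetical.

```python
# Sketch: quota lease request with parallelism back-off on an "over quota" signal.
class QuotaService:
    def __init__(self, quota):
        self.quota, self.in_use = quota, 0

    def lease(self, amount):
        would_exceed = self.in_use + amount > self.quota   # feedback signal
        if not would_exceed:
            self.in_use += amount
        return would_exceed

def submit(service, task_cost, parallelism):
    if service.lease(task_cost):
        parallelism = max(1, parallelism // 2)   # back off instead of failing the task
    return parallelism

svc = QuotaService(quota=100)
svc.in_use = 90
print(submit(svc, task_cost=40, parallelism=8))  # 4: the grant would exceed the quota
```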
-
Patent number: 12253948
Abstract: Methods and apparatus for software-defined coherent caching of pooled memory. The pooled memory is implemented in an environment having a disaggregated architecture where compute resources such as compute platforms are connected to disaggregated memory via a network or fabric. Software-defined caching policies are implemented in hardware in a processor SoC or discrete device such as a Network Interface Controller (NIC) by programming logic in an FPGA or accelerator on the SoC or discrete device. The programmed logic is configured to implement software-defined caching policies in hardware for effecting disaggregated memory (DM) caching in an associated DM cache of at least a portion of an address space allocated for the software application in the disaggregated memory.
Type: Grant
Filed: November 9, 2020
Date of Patent: March 18, 2025
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Karthik Kumar, Alexander Bachmutsky, Zhongyan Lu, Thomas Willhalm
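The Python sketch below is a software stand-in for the kind of logic this abstract programs into an FPGA or accelerator: a local cache in front of disaggregated memory whose admission policy is supplied by software for a given address range. Addresses, policy, and the remote-read function are hypothetical.

```python
# Sketch: a DM cache with a software-defined admission policy over an address range.
class DMCache:
    def __init__(self, remote_read, policy, cached_range):
        self.remote_read = remote_read            # fetch over the network/fabric
        self.policy = policy                      # software-defined caching policy
        self.lo, self.hi = cached_range           # portion of the app's address space
        self.lines = {}

    def read(self, addr):
        if addr in self.lines:
            return self.lines[addr]               # served from the local DM cache
        value = self.remote_read(addr)
        if self.lo <= addr < self.hi and self.policy(addr):
            self.lines[addr] = value
        return value

cache = DMCache(remote_read=lambda a: a * 2,
                policy=lambda a: a % 64 == 0,     # e.g. cache only line-aligned addresses
                cached_range=(0, 4096))
print(cache.read(128), cache.read(128))           # second read hits the cache
```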
-
Patent number: 12254361
Abstract: Embodiments described herein are generally directed to the use of sidecars to perform dynamic Application Programming Interface (API) contract generation and conversion. In an example, a first sidecar of a source microservice intercepts a first call to a first API exposed by a destination microservice. The first call makes use of a first API technology specified by a first contract and is originated by the source microservice. An API technology is selected from multiple API technologies. The selected API technology is determined to be different than the first API technology. Based on the first contract, a second contract is dynamically generated that specifies an intermediate API that makes use of the selected API technology. A second sidecar of the destination microservice is caused to generate the intermediate API and connect the intermediate API to the first API.
Type: Grant
Filed: December 15, 2023
Date of Patent: March 18, 2025
Assignee: Intel Corporation
Inventors: Marcos Carranza, Cesar Martinez-Spessot, Mateo Guzman, Francesc Guim Bernat, Karthik Kumar, Rajesh Poornachandran, Kshitij Arun Doshi
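A minimal Python sketch of the bridging idea: a call made with one API technology is intercepted, a contract for a different selected technology is derived from it, and a destination-side bridge connects that intermediate form back to the original API. The "REST" and "gRPC" shapes here are hypothetical stand-ins.

```python
# Sketch: derive an intermediate contract from an intercepted call and bridge it back.
def rest_call(payload):                      # first API technology (e.g. REST/JSON)
    return {"path": "/orders", "body": payload}

def to_grpc_contract(rest_request):          # dynamically derived second contract
    return {"service": "Orders", "method": "Create", "message": rest_request["body"]}

def destination_sidecar(grpc_request, destination_api):
    # Connects the intermediate API back to the destination's original API.
    return destination_api(grpc_request["message"])

def destination_api(body):                   # first API exposed by the destination
    return f"order created for {body['item']}"

intercepted = rest_call({"item": "widget"})          # intercepted by the source sidecar
print(destination_sidecar(to_grpc_contract(intercepted), destination_api))
```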
-
Patent number: 12254337
Abstract: Techniques for expanded trusted domains are disclosed. In the illustrative embodiment, a trusted domain can be established that includes hardware components from a processor as well as an off-load device. The off-load device may provide compute resources for the trusted domain. The trusted domain can be expanded and contracted on-demand, allowing for a flexible approach to creating and using trusted domains.
Type: Grant
Filed: September 24, 2021
Date of Patent: March 18, 2025
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Karthik Kumar, Ravi L. Sahita, Marcos E. Carranza
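For illustration only, a toy Python sketch of the expand/contract idea: a trusted domain's membership can grow to include off-load device resources on demand and shrink again, with access gated on current membership. Resource names are hypothetical; this is not the hardware mechanism itself.

```python
# Sketch: on-demand expansion and contraction of a trusted domain's membership.
class TrustedDomain:
    def __init__(self, cpu_resources):
        self.members = set(cpu_resources)

    def expand(self, offload_resources):
        self.members |= set(offload_resources)   # off-load device joins the domain

    def contract(self, offload_resources):
        self.members -= set(offload_resources)

    def allow(self, resource):
        return resource in self.members

td = TrustedDomain({"cpu-core-0", "cpu-core-1"})
td.expand({"offload-dpu-0"})
print(td.allow("offload-dpu-0"))   # True while the domain is expanded
td.contract({"offload-dpu-0"})
print(td.allow("offload-dpu-0"))   # False after contraction
```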
-
Publication number: 20250086123
Abstract: In an embodiment, a network device apparatus is provided that includes packet processing circuitry to determine if target data associated with a memory access request is stored in a different device than that identified in the memory access request, and based on the target data associated with the memory access request being identified as stored in a different device than that identified in the memory access request, may cause transmission of the memory access request to the different device. The memory access request may comprise an identifier of a requester of the memory access request, and the identifier may comprise a Process Address Space Identifier (PASID).
Type: Application
Filed: September 24, 2024
Publication date: March 13, 2025
Applicant: Intel Corporation
Inventors: Karthik KUMAR, Francesc GUIM BERNAT
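A small Python sketch of the redirection check this abstract describes: packet-processing logic consults a directory to see whether the requested address actually lives on a different device than the one named in the request, and forwards the request (keeping its PASID) if so. The directory and device names are hypothetical.

```python
# Sketch: forward a memory access request to the device that actually holds the data.
directory = {range(0x0000, 0x1000): "device-A", range(0x1000, 0x2000): "device-B"}

def owner_of(addr):
    return next(dev for rng, dev in directory.items() if addr in rng)

def handle(request):
    actual = owner_of(request["addr"])
    if actual != request["target_device"]:
        # Redirect to the owning device, keeping the requester's PASID in the request.
        return {**request, "target_device": actual, "forwarded": True}
    return request

print(handle({"addr": 0x1800, "target_device": "device-A", "pasid": 42}))
```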
-
Publication number: 20250086424
Abstract: Deployment of resources utilizing improved mixture of experts processing is described. An example of an apparatus includes one or more network ports; one or more direct memory access (DMA) engines; and circuitry for mixture of experts (MoE) processing in the network, wherein the circuitry includes at least circuitry to track routing of tokens in MoE processing, prediction circuitry to generate predictions regarding MoE processing, including predicting future token loads for MoE processing, and routing management circuitry to manage the routing of the tokens in MoE processing based at least in part on the predictions regarding the MoE processing.
Type: Application
Filed: November 20, 2024
Publication date: March 13, 2025
Applicant: Intel Corporation
Inventors: Karthik Kumar, Marcos Carranza, Patrick Connor
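A toy Python sketch of the track-predict-route loop described here: per-expert token counts are tracked, a simple moving average stands in for the load predictor, and new tokens are steered toward the less-loaded of the top-scoring experts. Gating scores and the predictor are hypothetical simplifications.

```python
# Sketch: MoE token routing that factors in predicted per-expert load.
from collections import defaultdict, deque

history = defaultdict(lambda: deque(maxlen=4))   # expert -> recent per-step token counts

def predicted_load(expert):
    h = history[expert]
    return sum(h) / len(h) if h else 0.0          # stand-in for the load predictor

def route(token_scores):
    # token_scores: expert -> gating score for this token
    top = sorted(token_scores, key=token_scores.get, reverse=True)[:2]
    chosen = min(top, key=predicted_load)         # prefer the expert predicted to be idler
    history[chosen].append(1)                     # track the routing decision
    return chosen

print(route({"expert-0": 0.9, "expert-1": 0.85, "expert-2": 0.1}))
```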
-
Patent number: D1073758
Type: Grant
Filed: October 13, 2022
Date of Patent: May 6, 2025
Assignee: LAM RESEARCH CORPORATION
Inventors: Karthik Adappa Sathish, Cody Barnett, Mitali Mrigendra Basargi, Ravi Kumar