Patents by Inventor Karthik Kumar
Karthik Kumar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250083807
Abstract: An apparatus for aircraft propulsion includes a propeller and a hybrid-cooled electric engine mounted to a support structure and configured to rotate the propeller. The electric engine is located in an enclosure, with a first heat transfer element thermally coupled to the electric engine via a fluid flow path, partially located outside the enclosure. A second heat transfer element, integral to or thermally coupled to the electric engine, provides air cooling. Both heat transfer elements are housed within the aircraft's support structure. At least one air inlet on the upper side of the support structure receives propeller downwash. A first cooling path directs a portion of the downwash from the air inlet to the first heat transfer element, and a first air outlet exhausts the downwash.
Type: Application
Filed: November 22, 2024
Publication date: March 13, 2025
Applicant: Archer Aviation Inc.
Inventors: Karthik Kumar BODLA, Bharat TULSYAN, Christopher M. HEATH, Kerry MANNING, Alan D. TEPE
-
Patent number: 12240599
Abstract: A VTOL aircraft includes a plurality of lift propellers configured to be rotated by lift motors to provide vertical thrust during takeoff, landing, and hovering operations. The lift propellers are configured to generate a cooling airflow to cool the lift motors during use. During a cruise operation when the VTOL aircraft is in forward motion, the lift propellers may be stowed in a stationary position. Therefore, the cooling airflow may be reduced or eliminated when it is not needed.
Type: Grant
Filed: February 2, 2023
Date of Patent: March 4, 2025
Assignee: Archer Aviation Inc.
Inventors: Karthik Kumar Bodla, Bharat Tulsyan, Christopher M. Heath, Kerry Manning, Alan D. Tepe
-
Publication number: 20250068438
Abstract: Described herein are techniques to enable the autonomous generation of configurations for a network environment, including but not limited to an edge network of a datacenter. Additional embodiments include prompt-based generation of network and device configurations and neural-network-based systems for adaptive network management.
Type: Application
Filed: November 13, 2024
Publication date: February 27, 2025
Applicant: Intel Corporation
Inventors: Mateo Guzman, Marcos Carranza, Daniel Biederman, Chihjen Chang, Jeremy Petsinger, Yadong Li, Mitu Aggarwal, Suyog Kulkarni, Mariano Ortega de Mues, Rajesh Poornachandran, Cesar Martinez, Mats Agerstam, Francesc Guim Bernat, Karthik Kumar, Usharani Ayyalasomayajula
-
Publication number: 20250071037
Abstract: Management of data transfer for network operation is described. An example of an apparatus includes one or more network interfaces and circuitry for management of data transfer for a network, wherein the circuitry for management of data transfer includes at least circuitry to analyze a plurality of data elements transferred on the network to identify data elements that are delayed or missing in transmission on the network, circuitry to determine one or more responses to delayed or missing data on the network, and circuitry to implement one or more data modifications for delayed or missing data on the network, including circuitry to provide replacement data for the delayed or missing data on the network.
Type: Application
Filed: November 14, 2024
Publication date: February 27, 2025
Applicant: Intel Corporation
Inventors: Daniel Biederman, Patrick Connor, Karthik Kumar, Marcos Carranza, Anjali Singhai Jain
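The detect-then-substitute idea in this abstract can be illustrated in software. Below is a minimal sketch (not the patented hardware implementation): the function name `fill_missing`, the frame dictionaries, and the replacement callback are all hypothetical names chosen for illustration.

```python
def fill_missing(frames, expected_seq, make_replacement):
    """Return a complete sequence of frames, substituting replacement
    data for any frame whose sequence number is delayed or missing."""
    received = {f["seq"]: f for f in frames}
    out = []
    for seq in expected_seq:
        if seq in received:
            out.append(received[seq])
        else:
            # Missing or delayed: provide replacement data instead.
            out.append(make_replacement(seq))
    return out
```

A caller might pass a `make_replacement` that repeats the last good frame or emits a neutral placeholder, which is one plausible form of the "data modification" the abstract describes.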
-
Publication number: 20250068457
Abstract: An apparatus includes a host interface; a network interface; and a programmable circuitry communicably coupled to the host interface and the network interface, the programmable circuitry comprising one or more processors to implement network interface functionality and to: determine portions of a set of computer vision (CV) processes to be deployed on the programmable circuitry and a host device, wherein the host device is to be communicably coupled to the programmable network interface device; and access instructions to cause the portions of the set of the CV processes to be deployed on the host device and the programmable network interface device; wherein a media processing portion of the set of the CV processes is to be deployed to the programmable circuitry, and wherein the programmable circuitry is to utilize media processing hardware circuitry hosted by the apparatus to perform the media processing portion.
Type: Application
Filed: November 12, 2024
Publication date: February 27, 2025
Applicant: Intel Corporation
Inventors: Marcos Carranza, Karthik Kumar, Mariano Ortega De Mues, Mateo Guzman, Patrick Connor, Cesar Martinez-Spessot
-
Patent number: 12231487
Abstract: Methods and apparatus for scale-out hardware-assisted tracing schemes for distributed and scale-out applications. In connection with execution of one or more applications using a distributed processing environment including multiple compute nodes, telemetry and tracing data are obtained using hardware-based logic on the compute nodes. Processes associated with applications are identified, as well as the compute nodes on which instances of the processes are executed. Process instances are associated with process application space identifiers (PASIDs), while processes used for an application are associated with a global group identifier (GGID) that serves as an application ID. The PASIDs and GGIDs are used to store telemetry and/or tracing data on the compute nodes and/or forward such data to a tracing server in a manner that enables telemetry and/or tracing data to be aggregated on an application basis.
Type: Grant
Filed: February 13, 2020
Date of Patent: February 18, 2025
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Patrick Kutch, Trevor Cooper, Timothy Verrall, Karthik Kumar
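The two-level keying scheme (per-process PASID grouped under an application-wide GGID) can be sketched as a simple aggregation step on the tracing server. This is an illustrative sketch only; the record fields `ggid`, `pasid`, and `sample` are assumed names, not terms from the patent claims.

```python
from collections import defaultdict

def aggregate_by_application(trace_records):
    """Group per-process trace samples (keyed by PASID) under their
    application-wide GGID, so telemetry can be read per application."""
    by_app = defaultdict(lambda: defaultdict(list))
    for rec in trace_records:
        by_app[rec["ggid"]][rec["pasid"]].append(rec["sample"])
    # Convert nested defaultdicts to plain dicts for the caller.
    return {ggid: dict(procs) for ggid, procs in by_app.items()}
```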
-
Publication number: 20250053423
Abstract: Compressed configuration data may be read from a non-volatile memory of a computing device, decompressed, and used to configure circuitry of the computing device. The decompressed configuration data may be in the form of key-value pairs. A lookup table of most frequently occurring values in the original or uncompressed configuration data may be used to determine the values.
Type: Application
Filed: August 10, 2023
Publication date: February 13, 2025
Inventors: Karthik KUMAR RAO, Omesh Kumar HANDA, Suhrid BHATT
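The dictionary-style compression the abstract describes, replacing frequent values with indices into a lookup table, can be sketched in a few lines. This is a minimal illustration under assumed encodings (the `("ref", i)` / `("lit", v)` tagging is an invented convention, not the patented format).

```python
from collections import Counter

def build_table(pairs, size=4):
    """Lookup table of the most frequently occurring values."""
    return [v for v, _ in Counter(v for _, v in pairs).most_common(size)]

def compress(pairs, table):
    # Replace a value with its table index when possible; keep it inline otherwise.
    return [(k, ("ref", table.index(v)) if v in table else ("lit", v))
            for k, v in pairs]

def decompress(entries, table):
    # Restore key-value pairs, resolving table references back to values.
    return [(k, table[x] if tag == "ref" else x) for k, (tag, x) in entries]
```

A round trip (`decompress(compress(pairs, t), t)`) reproduces the original key-value pairs, which is the property the configuration path relies on.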
-
Patent number: 12223371
Abstract: Systems and methods for inter-kernel communication using one or more semiconductor devices. The semiconductor devices include a kernel. The kernel may be in an inactive state unless performing an operation. One kernel of a first device may monitor data for an event. Once an event has occurred, the kernel sends an indication to a first inter-kernel communication circuitry. The inter-kernel communication circuitry determines that an activation function of a plurality of activation functions is to be generated, generates the activation function, and transmits the activation function to a second kernel of a second device to awaken it and perform a function using a peer-to-peer connection.
Type: Grant
Filed: September 25, 2020
Date of Patent: February 11, 2025
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Karthik Kumar, Susanne M. Balle, Mark D. Tetreault
-
Patent number: 12204396
Abstract: Various aspects of methods, systems, and use cases include coordinating actions at an edge device based on power production in a distributed edge computing environment. A method may include identifying a long-term service level agreement (SLA) for a component of an edge device, and determining a list of resources related to the component using the long-term SLA. The method may include scheduling a task for the component based on the long-term SLA, a current battery level at the edge device, a current energy harvest rate at the edge device, or an amount of power required to complete the task. A resource of the list of resources may be used to initiate the task, such as according to the scheduling.
Type: Grant
Filed: December 23, 2020
Date of Patent: January 21, 2025
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Karthik Kumar, Timothy Verrall
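The energy-aware admission decision (battery level plus expected harvest versus the task's power draw) can be written out as a small budget check. This is an illustrative sketch only; the function name and the joule/watt parameterization are assumptions, not the claimed method.

```python
def can_schedule(task_energy_j, task_duration_s, battery_j, harvest_w, reserve_j=0.0):
    """Decide whether an edge device can start a task now: the energy the
    task needs must be covered by the current battery level plus the energy
    the device expects to harvest while the task runs, minus a safety reserve."""
    available = battery_j + harvest_w * task_duration_s - reserve_j
    return task_energy_j <= available
```

A scheduler built on this check would defer tasks that fail it until the battery recovers or the harvest rate rises.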
-
Publication number: 20250021374
Abstract: A hardware device receives a work request from a guest, the work request identifying a virtual address within a guest address space. The hardware device sends an address translation request to an address translation resource to translate the virtual address to a corresponding physical address in a physical address space. A blocking message is received from the address translation resource based on a determination that the virtual address is a faulty address, and the blocking message identifies a source of the faulty address. The hardware device prevents a later address translation request for a later work request from the source based on the blocking message.
Type: Application
Filed: September 27, 2024
Publication date: January 16, 2025
Inventors: Raghunathan Srinivasan, Karthik V. Narayanan, Francesc Guim Bernat, Karthik Kumar, Svyatoslav Pankratov
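The fault-then-block flow can be modeled with a small translation agent that remembers which source submitted a faulty address and refuses its later requests. This is a software sketch of the control flow only, under assumed names (`TranslationAgent`, the page-table dictionary); the real mechanism is a hardware address translation resource.

```python
class TranslationAgent:
    """Translates guest virtual addresses and blocks any source that
    previously submitted a faulty address."""

    def __init__(self, page_table, faulty_addrs):
        self.page_table = page_table      # virtual address -> physical address
        self.faulty = set(faulty_addrs)
        self.blocked = set()

    def translate(self, source, vaddr):
        if source in self.blocked:
            return None                   # later requests from this source are refused
        if vaddr in self.faulty or vaddr not in self.page_table:
            self.blocked.add(source)      # the "blocking message" names the source
            return None
        return self.page_table[vaddr]
```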
-
Patent number: 12189545
Abstract: In one embodiment, an apparatus includes: an interface to couple a plurality of devices of a system and enable communication according to a Compute Express Link (CXL) protocol. The interface may receive a consistent memory request having a type indicator to indicate a type of consistency to be applied to the consistent memory request. A request scheduler coupled to the interface may receive the consistent memory request and schedule it for execution according to the type of consistency, based at least in part on a priority of the consistent memory request and one or more pending consistent memory requests. Other embodiments are described and claimed.
Type: Grant
Filed: July 26, 2021
Date of Patent: January 7, 2025
Assignee: Intel Corporation
Inventors: Karthik Kumar, Francesc Guim Bernat
-
Patent number: 12189512
Abstract: Examples described herein relate to an apparatus that includes a memory and at least one processor, where the at least one processor is to receive a configuration to gather performance data for a function from one or more platforms and, during execution of the function, collect performance data for the function and store the performance data after termination of execution of the function. Some examples include an interface coupled to the at least one processor, and the interface is to receive one or more of: an identifier of a function, resources to be tracked as part of function execution, a list of devices to be tracked as part of function execution, a type of monitoring of function execution, or meta-data to identify when the function is complete. Performance data can be accessed to determine performance of multiple executions of the short-lived function.
Type: Grant
Filed: March 25, 2020
Date of Patent: January 7, 2025
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Steven Briscoe, Karthik Kumar, Alexander Bachmutsky, Timothy Verrall
-
Patent number: 12191987
Abstract: Technologies for dynamically managing resources in disaggregated accelerators include an accelerator. The accelerator includes acceleration circuitry with multiple logic portions, each capable of executing a different workload. Additionally, the accelerator includes communication circuitry to receive a workload to be executed by a logic portion of the accelerator and a dynamic resource allocation logic unit to identify a resource utilization threshold associated with one or more shared resources of the accelerator to be used by a logic portion in the execution of the workload, limit, as a function of the resource utilization threshold, the utilization of the one or more shared resources by the logic portion as the logic portion executes the workload, and subsequently adjust the resource utilization threshold as the workload is executed. Other embodiments are also described and claimed.
Type: Grant
Filed: November 9, 2023
Date of Patent: January 7, 2025
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Susanne M. Balle, Rahul Khanna, Sujoy Sen, Karthik Kumar
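The limit-then-adjust behavior of the dynamic resource allocation logic can be sketched as a cap on shared-resource usage that is tunable while a workload runs. This is an illustrative software model under assumed names (`SharedResourceLimiter` and its methods), not the accelerator's actual logic unit.

```python
class SharedResourceLimiter:
    """Caps a logic portion's use of a shared resource at a threshold
    that can be adjusted while the workload executes."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.in_use = 0

    def acquire(self, amount):
        if self.in_use + amount > self.threshold:
            return False                  # request would exceed the current cap
        self.in_use += amount
        return True

    def release(self, amount):
        self.in_use = max(0, self.in_use - amount)

    def adjust(self, new_threshold):
        self.threshold = new_threshold    # tuned as the workload runs
```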
-
Patent number: 12164977
Abstract: An apparatus comprising a network interface controller comprising a queue for messages for a thread executing on a host computing system, wherein the queue is dedicated to the thread; and circuitry to send a notification to the host computing system to resume execution of the thread when a monitoring rule for the queue has been triggered.
Type: Grant
Filed: December 23, 2020
Date of Patent: December 10, 2024
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Karthik Kumar, Patrick G. Kutch, Alexander Bachmutsky, Nicolae Octavian Popovici
-
Publication number: 20240407142
Abstract: Hybrid and adaptive cooling systems are described. A method comprises selecting a cooling system type from a set of cooling system types of a hybrid cooling system to cool an electronic component of an electronic device, generating a control directive to activate a cooling component of the cooling system type, and performing thermal management of the electronic component of the electronic device using the cooling component of the cooling system type. Other embodiments are described and claimed.
Type: Application
Filed: August 9, 2024
Publication date: December 5, 2024
Applicant: INTEL CORPORATION
Inventors: Francesc Guim Bernat, Karthik Kumar, Uzair Qureshi, Marcos Carranza, Marek Piotrowski
-
Publication number: 20240396852
Abstract: There is disclosed in one example an application-specific integrated circuit (ASIC), including: an artificial intelligence (AI) circuit; and circuitry to: identify a flow, the flow including traffic diverted from a core cloud service of a network to be serviced by an edge node closer to an edge of the network than to the core of the network; receive telemetry related to the flow, the telemetry including fine-grained and flow-level network monitoring data for the flow; operate the AI circuit to predict, from the telemetry, a future service-level demand for the edge node; and cause a service parameter of the edge node to be tuned according to the prediction.
Type: Application
Filed: August 1, 2024
Publication date: November 28, 2024
Applicant: Intel Corporation
Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Mark A. Schmisseur, Timothy Verrall
-
Publication number: 20240385884
Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed to estimate workload complexity. An example apparatus includes processor circuitry to perform at least one of first, second, or third operations to instantiate payload interface circuitry to extract workload objective information and service level agreement (SLA) criteria corresponding to a workload, and acceleration circuitry to select a pre-processing model based on (a) the workload objective information and (b) feedback corresponding to workload performance metrics of at least one prior workload execution iteration, execute the pre-processing model to calculate a complexity metric corresponding to the workload, and select candidate resources based on the complexity metric.
Type: Application
Filed: December 23, 2021
Publication date: November 21, 2024
Inventors: Karthik Kumar, Timothy Verrall, Thomas Willhalm, Francesc Guim Bernat, Zhongyan Lu
-
Patent number: 12132790
Abstract: An architecture to perform resource management among multiple network nodes and associated resources is disclosed. Example resource management techniques include those relating to: proactive reservation of edge computing resources; deadline-driven resource allocation; speculative edge QoS pre-allocation; and automatic QoS migration across edge computing nodes.
Type: Grant
Filed: July 28, 2022
Date of Patent: October 29, 2024
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Patrick Bohan, Kshitij Arun Doshi, Brinda Ganesh, Andrew J. Herdrich, Monica Kenguva, Karthik Kumar, Patrick G Kutch, Felipe Pastor Beneyto, Rashmin Patel, Suraj Prabhakaran, Ned M. Smith, Petar Torre, Alexander Vul
-
Patent number: 12132664
Abstract: Example edge gateway circuitry to schedule service requests in a network computing system includes: gateway-level hardware queue manager circuitry to: parse the service requests based on service parameters in the service requests; and schedule the service requests in a queue based on the service parameters, the service requests received from client devices; and hardware queue manager communication interface circuitry to send ones of the service requests from the queue to rack-level hardware queue manager circuitry in a physical rack, the ones of the service requests corresponding to functions as a service provided by resources in the physical rack.
Type: Grant
Filed: December 19, 2022
Date of Patent: October 29, 2024
Assignee: INTEL CORPORATION
Inventors: Francesc Guim Bernat, Karthik Kumar, Suraj Prabhakaran, Ignacio Astilleros Diez, Timothy Verrall
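The parse-then-schedule behavior of the gateway-level queue manager can be modeled with a priority queue ordered by a service parameter, with ties broken by arrival order. This is a software sketch only; the class name, the `priority` field, and the "lower value is more urgent" convention are assumptions for illustration, not details from the claims.

```python
import heapq

class GatewayQueue:
    """Orders incoming service requests by a priority parsed from their
    service parameters; ties are broken by arrival order."""

    def __init__(self):
        self._heap = []
        self._count = 0   # monotonic arrival counter for stable ordering

    def submit(self, request):
        priority = request.get("priority", 100)   # lower value = more urgent
        heapq.heappush(self._heap, (priority, self._count, request))
        self._count += 1

    def dispatch(self):
        """Pop the next request to forward (e.g. to a rack-level queue manager)."""
        return heapq.heappop(self._heap)[2] if self._heap else None
```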
-
Patent number: 12130754
Abstract: Examples described herein relate to a network device apparatus that includes packet processing circuitry configured to determine whether target data associated with a memory access request is stored in a different device than that identified in the memory access request and, based on the target data being identified as stored in a different device, cause transmission of the memory access request to the different device. In some examples, the memory access request comprises an identifier of a requester of the memory access request, the identifier comprises a Process Address Space identifier (PASID), and the configuration of whether a redirection operation is permitted to be performed for a memory access request is based at least on the identifier.
Type: Grant
Filed: August 17, 2020
Date of Patent: October 29, 2024
Assignee: Intel Corporation
Inventors: Karthik Kumar, Francesc Guim Bernat
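The redirection decision (is the target data actually on a different device, and is redirection permitted for this PASID?) can be sketched as a small routing function. This is an illustrative model under assumed names (`route_memory_access`, the location table, the allow-set); it is not the patented circuitry.

```python
def route_memory_access(request, location_table, redirect_allowed):
    """Return the device a memory access should be sent to: the device
    that actually holds the target data if it differs from the one named
    in the request and redirection is permitted for the requester's PASID,
    otherwise the device the request originally identified."""
    target = request["target_device"]
    actual = location_table.get(request["addr"], target)
    if actual != target and request["pasid"] in redirect_allowed:
        return actual   # transmit the request to the different device
    return target
```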