Patents by Inventor Karthik Kumar

Karthik Kumar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12040690
    Abstract: An electric motor may include a stator assembly comprising a stator housing, and one or more rotors coupled to the stator by a rotor shaft assembly. The stator housing may include a cooling structure that has a plurality of cooling body portions and a plurality of cooling conduits defined by the plurality of cooling body portions. A method of forming a stator housing for an electric machine may include additively manufacturing a stator housing that includes a cooling structure defining a fluid domain, coupling a working fluid source to the stator housing and introducing a working fluid into the fluid domain defined by the cooling structure, and sealing the cooling structure with the working fluid contained within the fluid domain of the cooling structure.
    Type: Grant
    Filed: July 30, 2021
    Date of Patent: July 16, 2024
    Assignee: General Electric Company
    Inventors: Joseph John Zierer, Brian Magann Rush, Karthik K. Bodla, Andrew Thomas Cross, Vandana Prabhakar Rallabandi, Konrad Roman Weeber, Anoop Kumar Jassal
  • Publication number: 20240231924
    Abstract: An apparatus is provided comprising interface circuitry, machine-readable instructions, and processing circuitry to execute the machine-readable instructions. The machine-readable instructions comprise instructions to identify a processing flow pattern of a large language model (LLM), wherein the LLM is executed on processor circuitry comprising a plurality of processor cores and wherein the processing flow pattern comprises a plurality of processing phases. The machine-readable instructions further comprise instructions to identify a processing phase of the LLM from the processing flow pattern. The machine-readable instructions further comprise instructions to allocate processing resources to the processor circuitry based on the identified processing phase of the LLM.
    Type: Application
    Filed: March 27, 2024
    Publication date: July 11, 2024
    Inventors: Sharanyan SRIKANTHAN, Karthik KUMAR, Francesc GUIM BERNAT, Rajesh POORNACHANDRAN, Marcos CARRANZA
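    The phase-dependent allocation described in the entry above can be pictured as a small policy loop. Below is a minimal sketch under assumed phase names (prefill/decode/idle) and an assumed core-count policy; none of these names or numbers come from the filing.

```python
# Hypothetical sketch: allocate processor cores based on the detected LLM processing phase.
# Phase names and core counts are illustrative assumptions, not taken from the filing.

PHASE_CORE_POLICY = {
    "prefill": 16,   # compute-heavy phase: give it more cores
    "decode": 4,     # latency-bound, largely serial phase: fewer cores
    "idle": 1,
}

def identify_phase(tokens_in_flight: int, generating: bool) -> str:
    """Classify the current processing phase from simple runtime signals."""
    if tokens_in_flight == 0:
        return "idle"
    return "decode" if generating else "prefill"

def allocate_cores(phase: str) -> int:
    """Map the identified phase to a processing-resource allocation."""
    return PHASE_CORE_POLICY.get(phase, 1)

if __name__ == "__main__":
    for tokens, generating in [(512, False), (1, True), (0, False)]:
        phase = identify_phase(tokens, generating)
        print(phase, "->", allocate_cores(phase), "cores")
```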
  • Publication number: 20240212379
    Abstract: Methods, systems, and computer program products for generating standardized structured data from unstructured and semi-structured images of document pages are disclosed. The embodiments include a training framework in which boundaries of one or more instances of a first and a second set-of-fields are detected from images of document pages and tagged using unique labels. Individual fields within the set-of-fields are identified and associated with each instance and the unique labels to generate a large number of synthetically labelled documents. A neural network model is trained using the original document image and the generated synthetically labelled documents. An inference framework receives as input scanned images of unstructured and semi-structured document pages. A custom object recognition module identifies the different set-of-fields, and an OCR module recognizes the text from the input images. The outputs of these modules are stitched together to create standardized structured data.
    Type: Application
    Filed: December 23, 2022
    Publication date: June 27, 2024
    Applicant: Quantiphi Inc.
    Inventors: Bhaskar Kalita, Karthik Kumar Veldandi, Alok Kumar Garg, Sagar Kewalramani, Arunima Gautam
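    The inference-side stitching described above (object recognition output combined with OCR output) might look roughly like the sketch below. The field label, box format, and merge rule are illustrative assumptions, not details from the filing.

```python
# Hypothetical sketch: stitch detected set-of-field boundaries with OCR output
# to produce standardized structured records. Field names and box formats are
# illustrative assumptions, not taken from the filing.

def box_contains(outer, inner):
    """Return True if the inner (x1, y1, x2, y2) box lies inside the outer box."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and outer[2] >= inner[2] and outer[3] >= inner[3])

def stitch(detections, ocr_words):
    """Assign OCR words to the detected field regions they fall inside."""
    records = []
    for det in detections:                      # one detection per set-of-fields instance
        record = {"label": det["label"], "text": []}
        for word in ocr_words:
            if box_contains(det["box"], word["box"]):
                record["text"].append(word["text"])
        record["text"] = " ".join(record["text"])
        records.append(record)
    return records

if __name__ == "__main__":
    detections = [{"label": "invoice_total", "box": (100, 400, 300, 440)}]
    ocr_words = [{"text": "$1,250.00", "box": (120, 410, 220, 430)}]
    print(stitch(detections, ocr_words))
```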
  • Patent number: 12019768
    Abstract: An embodiment of an electronic apparatus may include one or more substrates, and logic coupled to the one or more substrates, the logic to process memory operation requests from a memory controller, and provide a front end interface to remote pooled memory hosted at a near edge device. An embodiment of another electronic apparatus may include local memory and logic communicatively coupled to the local memory, the logic to allocate a range of the local memory as remote pooled memory, and provide a back end interface to the remote pooled memory for memory requests from a far edge device. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: June 25, 2024
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Mark Schmisseur, Thomas Willhalm
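    A minimal sketch of the front-end/back-end split described above, assuming an invented in-process model in which the back end reserves a range of local memory as the pool and the front end simply forwards read/write requests; class and method names are hypothetical.

```python
# Hypothetical sketch: a near-edge "back end" exposes a range of its local memory
# as remote pooled memory, and a far-edge "front end" forwards memory requests to it.
# Class and method names are illustrative assumptions, not taken from the patent.

class PooledMemoryBackEnd:
    def __init__(self, local_memory_bytes: int, pooled_bytes: int):
        self.local = bytearray(local_memory_bytes)
        self.pooled_base = local_memory_bytes - pooled_bytes  # reserve the top range as the pool
        self.pooled_size = pooled_bytes

    def read(self, offset: int, length: int) -> bytes:
        assert offset + length <= self.pooled_size
        start = self.pooled_base + offset
        return bytes(self.local[start:start + length])

    def write(self, offset: int, data: bytes) -> None:
        assert offset + len(data) <= self.pooled_size
        start = self.pooled_base + offset
        self.local[start:start + len(data)] = data

class PooledMemoryFrontEnd:
    """Front-end interface a far-edge device uses; forwards requests to the back end."""
    def __init__(self, back_end: PooledMemoryBackEnd):
        self.back_end = back_end
    def read(self, offset, length):
        return self.back_end.read(offset, length)
    def write(self, offset, data):
        self.back_end.write(offset, data)

if __name__ == "__main__":
    fe = PooledMemoryFrontEnd(PooledMemoryBackEnd(1 << 20, 64 << 10))
    fe.write(0, b"edge data")
    print(fe.read(0, 9))
```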
  • Publication number: 20240193284
    Abstract: Techniques and mechanisms to allocate functionality of a chiplet for access by one or more processor cores which are coupled to a remote processor via a network switch. In an embodiment, a composite chip communicates with the switch via a Compute Express Link (CXL) link. The switch receives capability information which identifies both a chiplet of the composite chip, and a functionality which is available from a resource of that chiplet. Based on the capability information, the switch provides an inventory of chiplet resources. In response to an allocation request, the switch accesses the inventory to identify whether a suitable chiplet resource is available. Based on the access, the switch configures a chip to enable an allocation of a chiplet resource. In another embodiment, the chiplet resource is allocated at a sub-processor level of granularity, and access to the chiplet resource by one or more local processor cores is disabled.
    Type: Application
    Filed: December 13, 2022
    Publication date: June 13, 2024
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Marcos Carranza, Kshitij Doshi, Ned Smith, Karthik Kumar
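    The inventory-and-allocation flow at the switch might be modeled as below. The capability record fields and the allocation rule are illustrative assumptions, not taken from the filing.

```python
# Hypothetical sketch: a switch builds an inventory of chiplet resources from
# capability information and services allocation requests against it.
# Data shapes and names are illustrative assumptions, not taken from the filing.

class ChipletInventory:
    def __init__(self):
        self.inventory = []   # list of dicts describing available chiplet resources

    def register_capability(self, chip_id, chiplet_id, functionality):
        """Record that a chiplet on a composite chip offers a given functionality."""
        self.inventory.append({
            "chip": chip_id, "chiplet": chiplet_id,
            "functionality": functionality, "allocated_to": None,
        })

    def allocate(self, functionality, requester):
        """Find a free chiplet resource with the requested functionality and allocate it."""
        for entry in self.inventory:
            if entry["functionality"] == functionality and entry["allocated_to"] is None:
                entry["allocated_to"] = requester
                return entry
        return None   # no suitable resource available

if __name__ == "__main__":
    switch = ChipletInventory()
    switch.register_capability("chip0", "chiplet2", "matrix-accel")
    print(switch.allocate("matrix-accel", requester="remote-core-7"))
```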
  • Publication number: 20240193617
    Abstract: Systems, apparatus, articles of manufacture, and methods are disclosed. An example apparatus includes programmable circuitry to at least: obtain a first response associated with an estimate of emissions to be produced by execution of a workload on first hardware; obtain a second response associated with an estimate of emissions to be produced by execution of the workload on second hardware; and assign one of the first or the second hardware to execute the workload based on the first response and the second response, the assigned one of the first or the second hardware to at least one of utilize more time or more memory to execute the workload than the other of the first or the second hardware.
    Type: Application
    Filed: December 15, 2023
    Publication date: June 13, 2024
    Inventors: Francesc Guim Bernat, Karthik Kumar, Akhilesh S. Thyagaturu, Thijs Metsch, Adrian Hoban
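    A minimal sketch of the placement decision described above, using an assumed power-times-runtime emission estimate and invented hardware profiles; the real estimator in the filing is not specified here.

```python
# Hypothetical sketch: pick whichever hardware target is estimated to emit less
# for a workload, even if it is slower or uses more memory. The estimator and
# hardware profiles are illustrative assumptions, not taken from the filing.

def estimate_emissions(power_watts: float, runtime_s: float, grid_gco2_per_kwh: float) -> float:
    """Rough CO2-equivalent estimate (grams) for running the workload on one target."""
    kwh = power_watts * runtime_s / 3_600_000.0
    return kwh * grid_gco2_per_kwh

def assign_hardware(first, second):
    """Return the hardware description with the lower estimated emissions."""
    e1 = estimate_emissions(**first["profile"])
    e2 = estimate_emissions(**second["profile"])
    return first if e1 <= e2 else second

if __name__ == "__main__":
    fast_node = {"name": "gpu-node", "profile": {"power_watts": 400, "runtime_s": 60, "grid_gco2_per_kwh": 450}}
    slow_node = {"name": "cpu-node", "profile": {"power_watts": 90, "runtime_s": 240, "grid_gco2_per_kwh": 450}}
    print("assign to:", assign_hardware(fast_node, slow_node)["name"])
```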
  • Publication number: 20240179578
    Abstract: Systems, apparatus, articles of manufacture, and methods are disclosed to manage network slices. An example apparatus includes interface circuitry to acquire network information, machine-readable instructions, and at least one processor circuit to be programmed by the machine-readable instructions to reserve first network slices to satisfy service level objectives (SLOs) corresponding to first nodes, reserve second network slices to satisfy SLOs corresponding to second nodes, and reconfigure the first network slices to accept network communications from the second nodes when the network communications from the second nodes exceed a performance metric threshold.
    Type: Application
    Filed: January 30, 2024
    Publication date: May 30, 2024
    Inventors: Akhilesh Shivanna Thyagaturu, Hassnaa Moustafa Ep. Yehia, Jing Zhu, Karthik Kumar, Shu-Ping Yeh, Henning Schroeder, Menglei Zhang, Mohit Kumar Garg, Shiva Radhakrishnan Iyer, Francesc Guim Bernat
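    The reserve-then-reconfigure behavior described above could be sketched as follows, with an assumed per-group slice table and an assumed overflow threshold; names and units are hypothetical.

```python
# Hypothetical sketch: reserve slices per node group, then let the first group's
# slices accept overflow traffic from the second group when it exceeds a threshold.
# The slice model and threshold are illustrative assumptions, not taken from the filing.

class SliceManager:
    def __init__(self, overflow_threshold_mbps: float):
        self.threshold = overflow_threshold_mbps
        self.slices = {"first": [], "second": []}

    def reserve(self, group: str, slice_id: str, slo_mbps: float):
        """Reserve a network slice sized to satisfy the group's SLO."""
        self.slices[group].append({"id": slice_id, "slo_mbps": slo_mbps, "accepts_overflow": False})

    def observe_traffic(self, group: str, measured_mbps: float):
        """If the second group's traffic exceeds the threshold, open first-group slices to it."""
        if group == "second" and measured_mbps > self.threshold:
            for s in self.slices["first"]:
                s["accepts_overflow"] = True

if __name__ == "__main__":
    mgr = SliceManager(overflow_threshold_mbps=800)
    mgr.reserve("first", "slice-A", slo_mbps=500)
    mgr.reserve("second", "slice-B", slo_mbps=300)
    mgr.observe_traffic("second", measured_mbps=950)
    print(mgr.slices["first"])
```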
  • Patent number: 11994932
    Abstract: Methods and apparatus for platform ambient data management schemes for tiered architectures. A platform including one or more CPUs coupled to multiple tiers of memory comprising various types of DIMMs (e.g., DRAM, hybrid, DCPMM) is powered by a battery subsystem receiving input energy harvested from one or more green energy sources. Energy threshold conditions are detected, and associated memory reconfiguration is performed. The memory reconfiguration may include, but is not limited to, copying data between DIMMs (or memory ranks on the DIMMs) in the same tier, copying data from a first type of memory to a second type of memory on a hybrid DIMM, and flushing dirty lines in a DIMM in a first memory tier being used as a cache for a second memory tier. Following data copy and flushing operations, the DIMMs and/or their memory devices are powered down and/or deactivated.
    Type: Grant
    Filed: June 21, 2020
    Date of Patent: May 28, 2024
    Assignee: Intel Corporation
    Inventors: Karthik Kumar, Thomas Willhalm, Francesc Guim Bernat
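    A minimal sketch of the energy-threshold-driven reconfiguration described above, assuming a single battery-energy threshold and a consolidate-onto-one-DIMM policy; the threshold value, device names, and migration rule are illustrative assumptions.

```python
# Hypothetical sketch: when harvested-energy levels cross a threshold, consolidate
# data onto fewer DIMMs and power the rest down. Thresholds, device names, and the
# migration policy are illustrative assumptions, not taken from the patent.

LOW_ENERGY_WH = 50.0   # assumed battery-energy threshold

def reconfigure_memory(battery_wh: float, dimms: list) -> list:
    """Return the list of DIMMs to keep powered after an energy-threshold event."""
    if battery_wh >= LOW_ENERGY_WH:
        return dimms                      # enough energy: keep everything powered
    keep, drop = dimms[:1], dimms[1:]     # consolidate onto the first DIMM
    for dimm in drop:
        print(f"copy + flush {dimm['name']} -> {keep[0]['name']}, then power down {dimm['name']}")
    return keep

if __name__ == "__main__":
    dimms = [{"name": "dram0"}, {"name": "hybrid1"}, {"name": "dcpmm2"}]
    print("still powered:", reconfigure_memory(battery_wh=32.0, dimms=dimms))
```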
  • Patent number: 11994997
    Abstract: Systems, apparatuses and methods may provide for a memory controller to manage quality of service enforcement and migration between local and pooled memory. A memory controller may include logic to communicate with a local memory and with a pooled memory controller to track memory page usage on a per application basis, instruct the pooled memory controller to perform a quality of service enforcement in response to a determination that an application is latency bound or bandwidth bound, wherein the determination that the application is latency bound or bandwidth bound is based on a cycles per instruction determination, and instruct a Direct Memory Access engine to perform a migration from a remote memory to the local memory in response to a determination that the quality of service cannot be enforced.
    Type: Grant
    Filed: December 23, 2020
    Date of Patent: May 28, 2024
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Mark A. Schmisseur
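    The decision logic described above (CPI-based classification, QoS enforcement first, migration as fallback) might be sketched as below; the CPI and bandwidth thresholds are invented for illustration.

```python
# Hypothetical sketch: classify an application as latency- or bandwidth-bound from
# a cycles-per-instruction reading, try QoS enforcement first, and fall back to a
# DMA migration to local memory. Thresholds and names are illustrative assumptions.

CPI_LATENCY_BOUND = 3.0   # assumed: high CPI with low bandwidth use -> latency bound

def classify(cpi: float, bandwidth_util: float) -> str:
    if cpi >= CPI_LATENCY_BOUND and bandwidth_util < 0.5:
        return "latency-bound"
    if bandwidth_util >= 0.8:
        return "bandwidth-bound"
    return "unconstrained"

def manage(app: str, cpi: float, bandwidth_util: float, qos_enforceable: bool) -> str:
    """Decide between QoS enforcement at the pooled-memory controller and DMA migration."""
    kind = classify(cpi, bandwidth_util)
    if kind == "unconstrained":
        return f"{app}: no action"
    if qos_enforceable:
        return f"{app}: instruct pooled-memory controller to enforce QoS ({kind})"
    return f"{app}: instruct DMA engine to migrate pages from remote to local memory"

if __name__ == "__main__":
    print(manage("db-service", cpi=4.2, bandwidth_util=0.3, qos_enforceable=False))
```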
  • Patent number: 11983437
    Abstract: In one embodiment, an apparatus includes: a first queue to store requests that are guaranteed to be delivered to a persistent memory; a second queue to store requests that are not guaranteed to be delivered to the persistent memory; a control circuit to receive the requests and to direct the requests to the first queue or the second queue; and an egress circuit coupled to the first queue to deliver the requests stored in the first queue to the persistent memory even when a power failure occurs. Other embodiments are described and claimed.
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: May 14, 2024
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Donald Faw, Thomas Willhalm
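    A minimal sketch of the two-queue routing described above, assuming a per-request durability flag as the routing criterion; the flag name and drain behavior are illustrative assumptions, not the claimed circuit.

```python
# Hypothetical sketch: route write requests either to a power-fail-safe queue whose
# entries are guaranteed to reach persistent memory, or to a best-effort queue.
# The request format and the flag name are illustrative assumptions, not from the patent.

from collections import deque

class PersistentWriteQueues:
    def __init__(self):
        self.guaranteed = deque()    # drained even on power failure (e.g. backed by stored energy)
        self.best_effort = deque()   # may be dropped on power failure

    def submit(self, request: dict):
        """Direct the request to the appropriate queue based on its durability flag."""
        (self.guaranteed if request.get("durable") else self.best_effort).append(request)

    def drain_on_power_fail(self):
        """On power failure, only the guaranteed queue is delivered to persistent memory."""
        delivered = list(self.guaranteed)
        self.guaranteed.clear()
        self.best_effort.clear()
        return delivered

if __name__ == "__main__":
    q = PersistentWriteQueues()
    q.submit({"addr": 0x1000, "data": b"log", "durable": True})
    q.submit({"addr": 0x2000, "data": b"tmp", "durable": False})
    print(q.drain_on_power_fail())
```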
  • Publication number: 20240146639
    Abstract: Systems, apparatus, articles of manufacture, and methods are disclosed to reduce emissions in guided network environments. An apparatus includes interface circuitry, machine readable instructions, and programmable circuitry to at least one of instantiate or execute the machine readable instructions to collect data from respective network nodes corresponding to a request to access information, predict an emission of accessing the information via the respective network nodes using the data, and select a network path including at least one of the network nodes based on the predicted emission.
    Type: Application
    Filed: December 21, 2023
    Publication date: May 2, 2024
    Inventors: Francesc Guim Bernat, Manish Dave, Karthik Kumar, Akhilesh S. Thyagaturu, Matthew Henry Birkner, Adrian Hoban
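    The path selection described above could be sketched as follows, with an assumed per-node emission model based on power, transfer time, and grid carbon intensity; the actual prediction mechanism is not taken from the filing.

```python
# Hypothetical sketch: score candidate network paths by the summed emissions
# predicted for their nodes and pick the lowest. The per-node emission model is
# an illustrative assumption, not taken from the filing.

def predict_node_emission(node: dict) -> float:
    """Toy per-node estimate (grams CO2e) from power, transfer time, and grid mix."""
    return node["watts"] * node["transfer_s"] / 3_600_000.0 * node["grid_gco2_per_kwh"]

def select_path(paths: dict) -> str:
    """Return the name of the path with the lowest total predicted emission."""
    totals = {name: sum(predict_node_emission(n) for n in nodes) for name, nodes in paths.items()}
    return min(totals, key=totals.get)

if __name__ == "__main__":
    paths = {
        "path-a": [{"watts": 120, "transfer_s": 5, "grid_gco2_per_kwh": 600}],
        "path-b": [{"watts": 150, "transfer_s": 5, "grid_gco2_per_kwh": 200},
                   {"watts": 80, "transfer_s": 5, "grid_gco2_per_kwh": 200}],
    }
    print("route via:", select_path(paths))
```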
  • Publication number: 20240143505
    Abstract: Methods and apparatus for dynamic selection of super queue size for CPUs with higher numbers of cores. An apparatus includes a plurality of compute modules, each module including a plurality of processor cores with integrated first level (L1) caches and a shared second level (L2) cache, a plurality of Last Level Caches (LLCs) or LLC blocks and a plurality of memory interface blocks interconnected via a mesh interconnect. A compute module is configured to arbitrate access to the shared L2 cache and enqueue L2 cache misses in a super queue (XQ). The compute module is further configured to dynamically adjust the size of the XQ during runtime operations. The compute module tracks parameters comprising an L2 miss rate or count and LLC hit latency and adjusts the XQ size as a function of these parameters. A lookup table using the L2 miss rate/count and LLC hit latency may be implemented to dynamically select the XQ size.
    Type: Application
    Filed: December 22, 2023
    Publication date: May 2, 2024
    Inventors: Amruta MISRA, Ajay RAMJI, Rajendrakumar CHINNAIYAN, Chris MACNAMARA, Karan PUTTANNAIAH, Pushpendra KUMAR, Vrinda KHIRWADKAR, Sanjeevkumar Shankrappa ROKHADE, John J. BROWNE, Francesc GUIM BERNAT, Karthik KUMAR, Farheena Tazeen SYEDA
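    The lookup-table approach described above might be sketched as below, using assumed bucket boundaries for L2 miss rate and LLC hit latency and assumed XQ sizes; all values are illustrative.

```python
# Hypothetical sketch: pick a super-queue (XQ) size from a small lookup table keyed
# by L2 miss-rate and LLC hit-latency buckets. Bucket boundaries and sizes are
# illustrative assumptions, not taken from the filing.

def bucket(value: float, thresholds: list) -> int:
    """Return the index of the first threshold the value falls under."""
    for i, t in enumerate(thresholds):
        if value <= t:
            return i
    return len(thresholds)

# rows: L2 miss-rate bucket (low/med/high); cols: LLC hit-latency bucket (low/med/high)
XQ_SIZE_TABLE = [
    [8, 12, 16],
    [16, 24, 32],
    [32, 48, 64],
]

def select_xq_size(l2_miss_rate: float, llc_hit_latency_ns: float) -> int:
    row = bucket(l2_miss_rate, [0.02, 0.10])
    col = bucket(llc_hit_latency_ns, [20.0, 40.0])
    return XQ_SIZE_TABLE[row][col]

if __name__ == "__main__":
    print("XQ entries:", select_xq_size(l2_miss_rate=0.15, llc_hit_latency_ns=35.0))
```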
  • Patent number: 11972291
    Abstract: An apparatus and method for conditional quality of service in a processor. For example, one embodiment of a processor comprises: a plurality of processor resources to be allocated to a plurality of executed processes in accordance with a set of quality of service (QoS) rules; and conditional quality of service (QoS) circuitry/logic to monitor usage of the plurality of processor resources by the plurality of processes and to responsively modify an allocation of a first processor resource for a first process in response to detecting a first threshold value being reached in a second resource allocated to the first process.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: April 30, 2024
    Assignee: Intel Corporation
    Inventors: Francesc Guim, Karthik Kumar, Mustafa Hajeer, Tushar Gohad
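    A minimal sketch of the conditional QoS rule described above, using memory bandwidth as the monitored resource and cache ways as the adjusted resource purely as an assumed example; thresholds are invented.

```python
# Hypothetical sketch of conditional QoS: when a process's usage of one resource
# (memory bandwidth) crosses a threshold, its allocation of another resource
# (cache ways) is adjusted. Resource names and values are illustrative assumptions.

class ConditionalQoS:
    def __init__(self, bandwidth_threshold_gbps: float):
        self.threshold = bandwidth_threshold_gbps
        self.cache_ways = {}     # per-process allocation of the first resource

    def set_allocation(self, pid: int, cache_ways: int):
        self.cache_ways[pid] = cache_ways

    def on_sample(self, pid: int, bandwidth_gbps: float):
        """Monitor the second resource and conditionally shrink the first."""
        if bandwidth_gbps >= self.threshold and self.cache_ways.get(pid, 0) > 2:
            self.cache_ways[pid] -= 2     # give back cache ways to other processes
        return self.cache_ways.get(pid)

if __name__ == "__main__":
    qos = ConditionalQoS(bandwidth_threshold_gbps=20.0)
    qos.set_allocation(pid=1234, cache_ways=10)
    print("cache ways now:", qos.on_sample(pid=1234, bandwidth_gbps=25.0))
```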
  • Publication number: 20240134436
    Abstract: An integrated circuit (IC) device, such as a system on chip, includes a plurality of hardware circuitry elements and a power delivery network to deliver power to the plurality of hardware circuitry elements. The power delivery network has a plurality of integrated switches and a corresponding control plane to couple the plurality of switches to a controller on the IC device. The controller sends signals on the control plane of the power delivery network to granularly select which of the plurality of hardware circuitry elements to power-gate and may do so at the direction of a software system.
    Type: Application
    Filed: December 29, 2023
    Publication date: April 25, 2024
    Inventors: Akhilesh Thyagaturu, Karthik Kumar, Francesc Guim Bernat, Manish Dave, Xiangyang Zhuang
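    The control-plane power gating described above might be modeled as below, with invented block names and a simple boolean switch state per block; this is a sketch, not the claimed circuit.

```python
# Hypothetical sketch: a controller drives integrated power-delivery switches over a
# control plane to gate individual hardware blocks on or off, optionally at the
# direction of software. Block names are illustrative assumptions.

class PowerDeliveryController:
    def __init__(self, blocks: list):
        # one integrated switch per hardware block; True = powered
        self.switch_state = {name: True for name in blocks}

    def power_gate(self, block: str):
        """Send a control-plane signal to open the switch feeding one block."""
        self.switch_state[block] = False

    def power_restore(self, block: str):
        self.switch_state[block] = True

    def apply_software_policy(self, idle_blocks: list):
        """Software names the blocks it wants gated; the controller applies it granularly."""
        for block in idle_blocks:
            self.power_gate(block)

if __name__ == "__main__":
    pdc = PowerDeliveryController(["core0", "core1", "video-accel", "io-phy"])
    pdc.apply_software_policy(["video-accel", "io-phy"])
    print(pdc.switch_state)
```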
  • Publication number: 20240134432
    Abstract: A method is claimed. The method includes receiving information associated with a software application's workflow. The method includes receiving information that describes a platform's current power consumption state and current thermal state. The method includes selecting platform components to support execution of the workflow. The method includes prior to execution of the workflow upon the selected platform components, estimating a thermal impact to the platform's current thermal state as a consequence of the workflow's execution upon the selected platform components. The method includes determining a change to be made to a thermal cooling system of the platform in response to the estimating and causing the change to be made to the thermal cooling system prior to execution of at least a portion of the workflow on the platform.
    Type: Application
    Filed: December 12, 2023
    Publication date: April 25, 2024
    Inventors: Akhilesh S. THYAGATURU, Francesc GUIM BERNAT, Karthik KUMAR, Jonathan KYLE, Marek PIOTROWSKI
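    A minimal sketch of the estimate-then-precool flow described above, assuming a crude linear thermal model (degrees per watt) and an invented temperature limit; the filing's actual estimation method is not specified here.

```python
# Hypothetical sketch: before running a workflow on the selected components, estimate
# the added heat and raise the cooling setting first if the projected temperature
# would exceed a limit. The thermal model and limits are illustrative assumptions.

THERMAL_LIMIT_C = 85.0
DEGREES_PER_WATT = 0.05     # assumed steady-state thermal resistance of the platform

def estimate_thermal_impact(current_temp_c: float, added_watts: float) -> float:
    """Project the platform temperature after adding the workflow's power draw."""
    return current_temp_c + added_watts * DEGREES_PER_WATT

def plan_cooling_change(current_temp_c: float, selected_components: list) -> str:
    added = sum(c["watts"] for c in selected_components)
    projected = estimate_thermal_impact(current_temp_c, added)
    if projected > THERMAL_LIMIT_C:
        return "increase fan speed before starting the workflow"
    return "no cooling change needed"

if __name__ == "__main__":
    components = [{"name": "cpu", "watts": 180}, {"name": "accelerator", "watts": 250}]
    print(plan_cooling_change(current_temp_c=70.0, selected_components=components))
```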
  • Publication number: 20240134726
    Abstract: A method is described. The method includes invoking one or more functions from a set of API functions that expose the current respective cooling states of different, respective cooling devices for different components of a hardware platform. The method includes orchestrating concurrent execution of multiple applications on the hardware platform in view of the current respective cooling states. The method includes, in order to prepare the hardware platform for the concurrent execution of the multiple applications, prior to the concurrent execution of the multiple applications, sending one or more commands to the hardware platform to change a cooling state of at least one of the cooling devices.
    Type: Application
    Filed: December 12, 2023
    Publication date: April 25, 2024
    Inventors: Akhilesh S. THYAGATURU, Francesc GUIM BERNAT, Karthik KUMAR, Adrian HOBAN, Marek PIOTROWSKI
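    The API-driven orchestration described above could look roughly like the sketch below; the function names, device names, and cooling states are illustrative assumptions, not an actual platform API.

```python
# Hypothetical sketch of the orchestration flow: query per-device cooling states
# through API-style functions, then command a cooling-state change before starting
# the applications. Function names, devices, and states are illustrative assumptions.

COOLING_STATES = {"cpu-fan": "low", "memory-fan": "off", "liquid-loop": "idle"}

def get_cooling_state(device: str) -> str:
    """API function exposing the current cooling state of one device."""
    return COOLING_STATES[device]

def set_cooling_state(device: str, state: str) -> None:
    """Command the platform to change one device's cooling state."""
    COOLING_STATES[device] = state

def prepare_platform(apps: list) -> None:
    """Raise cooling ahead of time when several applications will run concurrently."""
    if len(apps) > 1 and get_cooling_state("cpu-fan") == "low":
        set_cooling_state("cpu-fan", "high")
    print("launching concurrently:", apps, "| cooling:", COOLING_STATES)

if __name__ == "__main__":
    prepare_platform(["transcode", "analytics"])
```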
  • Publication number: 20240126579
    Abstract: A server platform in a cloud computing system is determined to be in an unused state, and a request is received from a remote computing system outside the data center system to control hardware of at least one of the server platforms of the cloud computing system. A bare-metal-as-is (BMAI) session is initiated for the remote computing system to use the server platform based on the unused state, wherein exclusive control of at least a portion of hardware of the server platform is temporarily handed over to the remote computing system in the BMAI session. Control of the portion of the hardware of the server platform is reclaimed based on an end of the BMAI session.
    Type: Application
    Filed: December 27, 2023
    Publication date: April 18, 2024
    Inventors: Akhilesh Thyagaturu, Jonathan L. Kyle, Mohit Kumar Garg, Karthik Kumar, Francesc Guim Bernat
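    A minimal sketch of the session lifecycle described above (unused, leased, reclaimed), with an invented class and state names; the BMAI mechanics themselves are not detailed here.

```python
# Hypothetical sketch of the session lifecycle: hand exclusive control of an unused
# server platform to a remote requester, then reclaim it when the session ends.
# The state machine and names are illustrative assumptions, not from the filing.

class ServerPlatform:
    def __init__(self, name: str):
        self.name = name
        self.state = "unused"          # unused -> leased -> unused
        self.controlled_by = None

    def initiate_session(self, remote_system: str) -> bool:
        """Hand over hardware control only if the platform is currently unused."""
        if self.state != "unused":
            return False
        self.state, self.controlled_by = "leased", remote_system
        return True

    def reclaim(self):
        """End of session: the cloud provider takes the hardware back."""
        self.state, self.controlled_by = "unused", None

if __name__ == "__main__":
    platform = ServerPlatform("rack3-node12")
    print("granted:", platform.initiate_session("remote-lab.example"))
    platform.reclaim()
    print("state after reclaim:", platform.state)
```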
  • Publication number: 20240126606
    Abstract: Data that is to be processed by a particular service executed by a first edge computing device in an application, is analyzed to determine characteristics of the data. An opportunity to replicate the particular service on a plurality of edge computing devices is determined based on characteristics of the data. A second edge computing device is determined to be available to execute a replicated instance of the particular service. Replication of the particular service is initiated on a plurality of edge computing devices including the second edge computing device. An output of an instance of the particular service executed on the first edge computing device and an output of the replicated instance of the particular service executed on the second edge computing device are combined to form a single output for the particular service.
    Type: Application
    Filed: December 27, 2023
    Publication date: April 18, 2024
    Inventors: Akhilesh Thyagaturu, Jonathan L. Kyle, Karthik Kumar, Francesc Guim Bernat, Mohit Kumar Garg
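    The replicate-and-combine pattern described above might be sketched as below, using an invented counting service and a simple additive merge as the stand-in for the combined output.

```python
# Hypothetical sketch: when the data's characteristics allow it, replicate a service
# on a second edge device, process partitions in parallel, and merge the two outputs
# into one result. The service and the merge rule are illustrative assumptions.

def service_instance(chunk: list) -> dict:
    """Stand-in for the particular service: count items per category in its chunk."""
    counts = {}
    for item in chunk:
        counts[item] = counts.get(item, 0) + 1
    return counts

def combine(outputs: list) -> dict:
    """Merge per-device outputs into the single output for the service."""
    merged = {}
    for out in outputs:
        for key, value in out.items():
            merged[key] = merged.get(key, 0) + value
    return merged

if __name__ == "__main__":
    data = ["cat", "dog", "cat", "bird", "dog", "cat"]
    first_half, second_half = data[:3], data[3:]          # one partition per edge device
    print(combine([service_instance(first_half), service_instance(second_half)]))
```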
  • Publication number: 20240116627
    Abstract: A VTOL aircraft includes a plurality of lift propellers configured to be rotated by lift motors to provide vertical thrust during takeoff, landing, and hovering operations. The lift propellers are configured to generate a cooling airflow to cool the lift motors during use. During a cruise operation, when the VTOL aircraft is in forward motion, the lift propellers may be stowed in a stationary position. Therefore, the cooling airflow may be reduced or eliminated when it is not needed.
    Type: Application
    Filed: February 2, 2023
    Publication date: April 11, 2024
    Applicant: Archer Aviation, Inc.
    Inventors: Karthik Kumar BODLA, Bharat TULSYAN, Christopher M. HEATH, Kerry MANNING, Alan D. TEPE
  • Publication number: 20240111615
    Abstract: Embodiments described herein are generally directed to the use of sidecars to perform dynamic API contract generation and conversion. In an example, a first sidecar of a source microservice intercepts a first call to a first API exposed by a destination microservice. The first call makes use of a first API technology specified by a first contract and is originated by the source microservice. An API technology is selected from multiple API technologies. The selected API technology is determined to be different than the first API technology. Based on the first contract, a second contract is dynamically generated that specifies an intermediate API that makes use of the selected API technology. A second sidecar of the destination microservice is caused to generate the intermediate API and connect the intermediate API to the first API.
    Type: Application
    Filed: December 15, 2023
    Publication date: April 4, 2024
    Applicant: Intel Corporation
    Inventors: Marcos Carranza, Cesar Martinez-Spessot, Mateo Guzman, Francesc Guim Bernat, Karthik Kumar, Rajesh Poornachandran, Kshitij Arun Doshi
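    A minimal sketch of the sidecar flow described above, with assumed API technology names, an invented selection rule, and a toy contract shape; none of these details come from the filing.

```python
# Hypothetical sketch of the sidecar flow: intercept a call that uses one API
# technology, pick a different technology for the hop, and generate an intermediate
# contract the destination sidecar can materialize. Technology names and the
# contract shape are illustrative assumptions, not taken from the filing.

SUPPORTED_TECHNOLOGIES = ["rest", "grpc", "graphql"]

def select_technology(current: str, link_latency_ms: float) -> str:
    """Toy selection rule: prefer gRPC on low-latency links, otherwise keep REST."""
    preferred = "grpc" if link_latency_ms < 5.0 else "rest"
    return preferred if preferred != current else current

def generate_intermediate_contract(first_contract: dict, technology: str) -> dict:
    """Derive a second contract for the intermediate API from the first contract."""
    return {
        "technology": technology,
        "operations": list(first_contract["operations"]),   # same operations, new transport
        "maps_to": first_contract["name"],
    }

if __name__ == "__main__":
    first = {"name": "orders-api", "technology": "rest", "operations": ["create", "get"]}
    chosen = select_technology(first["technology"], link_latency_ms=2.0)
    if chosen != first["technology"]:
        print(generate_intermediate_contract(first, chosen))
```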