Patents by Inventor Karthik Kumar

Karthik Kumar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200396177
    Abstract: Technologies for accelerating edge device workloads at a device edge network include a network computing device which includes a processor platform that includes at least one processor which supports a plurality of non-accelerated function-as-a-service (FaaS) operations and an accelerated platform that includes at least one accelerator which supports a plurality of accelerated FaaS (AFaaS) operations. The network computing device is configured to receive a request to perform a FaaS operation, determine whether the received request indicates that an AFaaS operation is to be performed on the received request, and identify compute requirements for the AFaaS operation to be performed. The network computing device is further configured to select an accelerator platform to perform the identified AFaaS operation and forward the received request to the selected accelerator platform to perform the identified AFaaS operation. Other embodiments are described and claimed.
    Type: Application
    Filed: January 21, 2020
    Publication date: December 17, 2020
    Inventors: Francesc Guim Bernat, Anil Rao, Suraj Prabhakaran, Mohan Kumar, Karthik Kumar
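The dispatch flow this abstract describes — inspect a request, decide between the non-accelerated FaaS path and an AFaaS path, check compute requirements, and route to a selected accelerator — can be sketched roughly as follows. This is an illustrative sketch only; all class, field, and platform names are hypothetical and not from the patent.

```python
# Hypothetical sketch of the FaaS/AFaaS dispatch flow described above.
# All names (Request, AcceleratorPlatform, "cpu-platform") are illustrative.
from dataclasses import dataclass

@dataclass
class Request:
    function: str
    wants_acceleration: bool  # does the request ask for an AFaaS operation?
    compute_units: int        # identified compute requirement

@dataclass
class AcceleratorPlatform:
    name: str
    free_units: int

    def can_run(self, req: Request) -> bool:
        return self.free_units >= req.compute_units

def dispatch(req: Request, accelerators: list[AcceleratorPlatform]) -> str:
    """Forward the request to a suitable accelerator platform if AFaaS was
    requested; otherwise fall back to the general-purpose processor path."""
    if req.wants_acceleration:
        for acc in accelerators:
            if acc.can_run(req):
                return acc.name  # forward to the selected accelerator
    return "cpu-platform"        # non-accelerated FaaS path

accs = [AcceleratorPlatform("fpga-0", free_units=4),
        AcceleratorPlatform("fpga-1", free_units=16)]
print(dispatch(Request("resize-image", True, 8), accs))  # fpga-1
print(dispatch(Request("log-line", False, 1), accs))     # cpu-platform
```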
  • Publication number: 20200387310
    Abstract: A memory controller method and apparatus includes a modification of at least one of a first timing scheme or a second timing scheme based on information about one or more data requests to be included in at least one of a first queue scheduler or a second queue scheduler, the first timing scheme indicating when one or more requests in the first queue scheduler are to be issued to the first memory set via a first memory set interface and over a channel, the second timing scheme indicating when one or more requests in the second queue scheduler are to be issued to the second memory set via a second memory set interface and over the channel.
    Type: Application
    Filed: June 19, 2020
    Publication date: December 10, 2020
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Mark Schmisseur
  • Publication number: 20200389410
    Abstract: An example system to schedule service requests in a network computing system using hardware queue managers includes: a gateway-level hardware queue manager in an edge gateway to schedule the service requests received from client devices in a queue; a rack-level hardware queue manager in a physical rack in communication with the edge gateway, the rack-level hardware queue manager to send a pull request to the gateway-level hardware queue manager for a first one of the service requests; and a drawer-level hardware queue manager in a drawer of the physical rack, the drawer-level hardware queue manager to send a second pull request to the rack-level hardware queue manager for the first one of the service requests, the drawer including a resource to provide a function as a service specified in the first one of the service requests.
    Type: Application
    Filed: March 30, 2018
    Publication date: December 10, 2020
    Inventors: Francesc Guim Bernat, Karthik Kumar, Suraj Prabhakaran, Ignacio Astilleros Diez, Timothy Verrall
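The multi-level, pull-based scheduling this abstract describes — a drawer-level queue manager pulling from a rack-level one, which in turn pulls from the gateway-level one — can be sketched as below. This is a minimal illustration; the class name and request strings are assumptions, not from the patent.

```python
# Hypothetical sketch of the pull-based, multi-level queue scheme above:
# each lower-level queue manager pulls the next service request from the
# manager one level up (drawer <- rack <- gateway).
from collections import deque

class HardwareQueueManager:
    def __init__(self, upstream=None):
        self.queue = deque()
        self.upstream = upstream  # queue manager one level up, if any

    def enqueue(self, request):
        self.queue.append(request)

    def pull(self):
        """Serve a local request, or pull one from the upstream manager."""
        if not self.queue and self.upstream is not None:
            req = self.upstream.pull()
            if req is not None:
                self.queue.append(req)
        return self.queue.popleft() if self.queue else None

gateway = HardwareQueueManager()              # edge-gateway level
rack = HardwareQueueManager(upstream=gateway) # physical-rack level
drawer = HardwareQueueManager(upstream=rack)  # drawer level

gateway.enqueue("faas: transcode")
print(drawer.pull())  # request flows gateway -> rack -> drawer
```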
  • Patent number: 10860451
    Abstract: Systems and methods for predicting computing system issues include: receiving a set of incident management tickets for a set of computing system issues and a set of computer log files for multiple modules of the computing system; arranging the set of tickets into chronologically ordered groups associated with particular computing system issues; pre-processing the set of computer log files to remove specified information, append to each log entry an indicator of the module of the log file, and merge the log entries; determining for each group a set of patterns for the group's associated computing system issue before the group's associated computing system issue arises; calculating for each pattern in each group a similarity score; selecting a subset of patterns whose similarity scores exceed a specified threshold; and generating a computing model associating the subset of patterns in each group with the group's associated computing system issue.
    Type: Grant
    Filed: January 31, 2020
    Date of Patent: December 8, 2020
    Assignee: FMR LLC
    Inventors: Bhanu Prashanthi Murthy, Sajith Kumar Vadakaraveedu, Prashanth Bottangada Machaiah, Aanchal Gupta, M. Karthik Kumar
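The pipeline in this abstract — tag each log entry with its module, merge the entries, and keep only patterns whose similarity score exceeds a threshold — can be sketched as follows. The function names, the timestamp-stripping rule, and the use of Jaccard similarity as the scoring measure are all assumptions for illustration, not details from the patent.

```python
# Illustrative sketch of the log-preprocessing and pattern-selection steps
# described above. Jaccard similarity stands in for the (unspecified)
# similarity score; names and log formats are hypothetical.
def preprocess(logs_by_module):
    """Strip the timestamp token and prefix each entry with its module."""
    merged = []
    for module, lines in logs_by_module.items():
        for line in lines:
            message = line.split(" ", 1)[1]  # drop leading timestamp
            merged.append(f"[{module}] {message}")
    return merged

def jaccard(a, b):
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b)

def select_patterns(candidates, issue_signature, threshold=0.5):
    """Keep candidate patterns whose similarity exceeds the threshold."""
    return [p for p in candidates if jaccard(p, issue_signature) > threshold]

logs = {"db": ["10:01 connection pool exhausted"],
        "web": ["10:02 request timeout on checkout"]}
merged = preprocess(logs)
patterns = select_patterns(merged, "[db] connection pool exhausted")
print(patterns)  # ['[db] connection pool exhausted']
```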
  • Publication number: 20200379922
    Abstract: Examples described herein relate to a network device apparatus that includes a packet processing circuitry configured to determine if target data associated with a memory access request is stored in a different device than that identified in the memory access request and based on the target data associated with the memory access request identified as stored in a different device than that identified in the memory access request, cause transmission of the memory access request to the different device. In some examples, the memory access request comprises an identifier of a requester of the memory access request and the identifier comprises a Process Address Space identifier (PASID) and wherein the configuration that a redirection operation is permitted to be performed for a memory access request is based at least on the identifier.
    Type: Application
    Filed: August 17, 2020
    Publication date: December 3, 2020
    Inventors: Karthik Kumar, Francesc Guim Bernat
  • Patent number: 10855144
    Abstract: An electrical winding topology having a core and a plurality of windings is provided. The plurality of windings is operatively coupled to the core, where at least one of the plurality of windings includes an evaporator section and a condenser section. Further, at least a portion of one or more of the plurality of windings includes heat pipes.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: December 1, 2020
    Assignee: General Electric Company
    Inventors: Karthik Kumar Bodla, Samir Armando Salamah
  • Patent number: 10846230
    Abstract: Embodiments of the invention include a machine-readable medium having stored thereon at least one instruction, which if performed by a machine causes the machine to perform a method that includes decoding, with a node, an invalidate instruction; and executing, with the node, the invalidate instruction for invalidating a memory range specified across a fabric interconnect.
    Type: Grant
    Filed: December 12, 2016
    Date of Patent: November 24, 2020
    Assignee: Intel Corporation
    Inventors: Karthik Kumar, Thomas Willhalm, Francesc Guim Bernat, Brian J. Slechta
  • Patent number: 10846014
    Abstract: Examples relate to a method for a memory module, a method for a memory controller, a method for a processor, a memory module controller device or apparatus, a memory controller device or apparatus, a processor device or apparatus, a memory module, a memory controller, a processor, a computer system, and a computer program. The method for the memory module comprises obtaining one or more memory write instructions of a group memory write instruction. The group memory write instruction comprises a plurality of memory write instructions to be executed atomically. The one or more memory write instructions relate to one or more memory addresses associated with memory of the memory module. The method comprises executing the one or more memory write instructions using previously unallocated memory of the memory module. The method comprises obtaining a commit instruction for the group memory write instruction.
    Type: Grant
    Filed: December 17, 2018
    Date of Patent: November 24, 2020
    Assignee: Intel Corporation
    Inventors: Ginger Gilsdorf, Karthik Kumar, Thomas Willhalm, Mark Schmisseur, Francesc Guim Bernat
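The scheme in this abstract — execute a group of writes in previously unallocated memory, then make them visible atomically on commit — resembles a copy-on-write transaction. Below is a minimal sketch under that interpretation; the class name, the dict-based "shadow" area, and the addresses are illustrative assumptions, not the patent's mechanism.

```python
# Minimal sketch, assuming a copy-on-write reading of the scheme above:
# group writes land in previously unallocated (shadow) memory and become
# visible only when the commit instruction arrives.
class MemoryModule:
    def __init__(self):
        self.memory = {}  # committed address -> value
        self.shadow = {}  # pending group writes, not yet visible

    def group_write(self, address, value):
        self.shadow[address] = value  # executed in unallocated memory

    def commit(self):
        """Atomically publish every write in the group."""
        self.memory.update(self.shadow)
        self.shadow.clear()

    def read(self, address):
        return self.memory.get(address)

m = MemoryModule()
m.group_write(0x10, "a")
m.group_write(0x20, "b")
print(m.read(0x10))  # None: the group has not been committed yet
m.commit()
print(m.read(0x10))  # 'a': the whole group became visible at once
```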
  • Patent number: 10838647
    Abstract: Devices and systems for distributing data across disaggregated memory resources is disclosed and described. An acceleration controller device can include an adaptive data migration engine (ADME) configured to communicatively couple to a fabric interconnect, and is further configured to monitor application data performance metrics at the plurality of disaggregated memory pools for a plurality of applications executing on the plurality of compute resources, select a current application having a current application data performance metric, determine an alternate memory pool from the plurality of disaggregated memory pools estimated to increase application data performance relative to the current application data performance metric, and migrate the data from the current memory pool to the alternate memory pool.
    Type: Grant
    Filed: March 14, 2018
    Date of Patent: November 17, 2020
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Mark A. Schmisseur
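The selection step of the adaptive data migration engine (ADME) described above — pick an alternate memory pool estimated to improve on the current application data performance metric — can be sketched as below, using latency as the stand-in metric. Pool names, units, and the latency-only metric are assumptions for illustration.

```python
# Hypothetical sketch of the ADME pool-selection step described above:
# given the application's current pool and monitored latency, return the
# disaggregated pool estimated to improve on that latency, if one exists.
def pick_alternate_pool(current_pool, current_latency_us, pool_latencies):
    """Return the pool with the best estimated latency if it beats the
    current one; otherwise keep the data where it is."""
    best = min(pool_latencies, key=pool_latencies.get)
    if best != current_pool and pool_latencies[best] < current_latency_us:
        return best  # migration target
    return current_pool

pools = {"pool-dram": 0.3, "pool-pmem": 1.2, "pool-remote": 4.0}
print(pick_alternate_pool("pool-pmem", 1.2, pools))  # pool-dram
print(pick_alternate_pool("pool-dram", 0.3, pools))  # pool-dram (no move)
```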
  • Patent number: 10824358
    Abstract: Technologies for dynamically managing the reliability of disaggregated resources in a managed node include a resource manager server. The resource manager server includes a communication circuit to receive resource data from a set of disaggregated resources that indicates reliability of each disaggregated resource of the set of disaggregated resources and a node request to compose a managed node. The resource manager server further includes a compute engine to determine node parameters from the node request indicative of a target reliability of one or more disaggregated resources of the set of disaggregated resources to be included in the managed node, compose a managed node from the set of disaggregated resources that satisfies the node parameters by configuring the compute sled to utilize the disaggregated resources of the managed node for the execution of a workload, and monitor the disaggregated resources of the managed node for a failure.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: November 3, 2020
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Susanne M. Balle, Murugasamy K. Nachimuthu, Daniel Rivas Barragan
  • Patent number: 10809328
    Abstract: Embodiments of the present disclosure include an inductor including at least one inductor coil, the at least one inductor coil including a plurality of outer longitudinal portions aligned around an outer periphery of the inductor, and a plurality of inner longitudinal portions aligned around an interior of the inductor. The plurality of outer longitudinal portions and the plurality of inner longitudinal portions collectively form two width-wise sides of the inductor and two length-wise sides of the inductor. The two width-wise sides and the two length-wise sides define a substantially rectangular prism shape. The two width-wise sides and the two length-wise sides define a hollow inductor core.
    Type: Grant
    Filed: October 18, 2018
    Date of Patent: October 20, 2020
    Assignee: General Electric Company
    Inventors: Ruxi Wang, Juan Antonio Sabate, Gary Dwayne Mandrusiak, Kevin Patrick Rooney, Karthik Kumar Bodla
  • Publication number: 20200319696
    Abstract: Methods and apparatus for platform ambient data management schemes for tiered architectures. A platform including one or more CPUs coupled to multiple tiers of memory comprising various types of DIMMs (e.g., DRAM, hybrid, DCPMM) is powered by a battery subsystem receiving input energy harvested from one or more green energy sources. Energy threshold conditions are detected, and associated memory reconfiguration is performed. The memory reconfiguration may include, but is not limited to, copying data between DIMMs (or memory ranks on the DIMMs) in the same tier, copying data from a first type of memory to a second type of memory on a hybrid DIMM, and flushing dirty lines in a DIMM in a first memory tier being used as a cache for a second memory tier. Following data copy and flushing operations, the DIMMs and/or their memory devices are powered down and/or deactivated.
    Type: Application
    Filed: June 21, 2020
    Publication date: October 8, 2020
    Inventors: Karthik Kumar, Thomas Willhalm, Francesc Guim Bernat
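The energy-driven reconfiguration above — detect an energy threshold condition, copy data off a DIMM, then power the DIMM down — can be sketched as follows. The threshold value, DIMM names, and list-of-dicts representation are illustrative assumptions only.

```python
# Illustrative sketch of the energy-threshold reconfiguration above: when
# the harvested-energy level drops below a threshold, migrate data off the
# last DIMM and power it down (here, drop it from the active list).
def reconfigure(battery_level, dimms, threshold=0.2):
    """Copy data off one DIMM and deactivate it when energy is low."""
    if battery_level >= threshold or len(dimms) < 2:
        return dimms  # no threshold condition, or nowhere to migrate
    *kept, victim = dimms
    kept[-1]["data"].extend(victim["data"])  # migrate before power-down
    return kept

dimms = [{"name": "dimm-0", "data": ["a"]},
         {"name": "dimm-1", "data": ["b"]}]
active = reconfigure(battery_level=0.1, dimms=dimms)
print([d["name"] for d in active])  # ['dimm-0']
print(active[0]["data"])            # ['a', 'b']
```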
  • Patent number: 10784746
    Abstract: A method includes fabricating a core, wherein the core comprises a chemically soluble first polymer, forming a body around the core, wherein the body comprises a second polymer, and etching away the core to reveal a cooling channel extending through the body.
    Type: Grant
    Filed: June 15, 2017
    Date of Patent: September 22, 2020
    Assignee: General Electric Company
    Inventors: Karthik Kumar Bodla, Naveenan Thiagarajan, Patel Bhageerath Reddy, Yogen Vishwas Utturkar
  • Publication number: 20200285420
    Abstract: In one embodiment, an apparatus includes: a first queue to store requests that are guaranteed to be delivered to a persistent memory; a second queue to store requests that are not guaranteed to be delivered to the persistent memory; a control circuit to receive the requests and to direct the requests to the first queue or the second queue; and an egress circuit coupled to the first queue to deliver the requests stored in the first queue to the persistent memory even when a power failure occurs. Other embodiments are described and claimed.
    Type: Application
    Filed: May 26, 2020
    Publication date: September 10, 2020
    Inventors: Francesc Guim Bernat, Karthik Kumar, Donald Faw, Thomas Willhalm
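The dual-queue design in this abstract — one queue whose requests are guaranteed to reach persistent memory even across a power failure, one whose requests are not — can be sketched as below. The class name and the behavior on power loss are illustrative assumptions about the described split, not the patent's circuit.

```python
# Illustrative sketch of the dual-queue design above: requests directed to
# the guaranteed queue are drained to persistent memory even on power
# failure; best-effort requests may be lost. Names are hypothetical.
class PersistentWriteBuffer:
    def __init__(self):
        self.guaranteed = []       # assumed backed by stored energy to drain
        self.best_effort = []
        self.persistent_memory = []

    def submit(self, data, guaranteed_delivery):
        """Control logic: direct each request to one of the two queues."""
        target = self.guaranteed if guaranteed_delivery else self.best_effort
        target.append(data)

    def power_failure(self):
        """On power loss, only the guaranteed queue reaches persistence."""
        self.persistent_memory.extend(self.guaranteed)
        self.guaranteed.clear()
        self.best_effort.clear()   # best-effort requests are lost

buf = PersistentWriteBuffer()
buf.submit("journal-entry", guaranteed_delivery=True)
buf.submit("telemetry", guaranteed_delivery=False)
buf.power_failure()
print(buf.persistent_memory)  # ['journal-entry']
```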
  • Publication number: 20200278804
    Abstract: A memory request manager in a memory system registers a tenant for access to a plurality of memory devices, registers one or more service level agreement (SLA) requirements for the tenant for access to the plurality of memory devices, monitors usage of the plurality of memory devices by tenants, receives a memory request from the tenant to access a selected one of the plurality of memory devices, and allows the access when usage of the plurality of memory devices meets the one or more SLA requirements for the tenant.
    Type: Application
    Filed: April 13, 2020
    Publication date: September 3, 2020
    Inventors: Francesc Guim Bernat, Karthik Kumar, Tushar Sudhakar Gohad, Mark A. Schmisseur, Thomas Willhalm
  • Patent number: 10747691
    Abstract: Examples provide a memory device, a dual inline memory module, a storage device, an apparatus for storing, a method for storing, a computer program, a machine readable storage, and a machine readable medium. A memory device is configured to store data and comprises one or more interfaces configured to receive and to provide data. The memory device further comprises a memory module configured to store the data, and a memory logic component configured to control the one or more interfaces and the memory module. The memory logic component is further configured to receive information on a specific memory region with one or more model identifications, to receive information on an instruction to perform an acceleration function for one or more certain model identifications, and to perform the acceleration function on data in a specific memory region with the one or more certain model identifications.
    Type: Grant
    Filed: April 10, 2018
    Date of Patent: August 18, 2020
    Assignee: Intel Corporation
    Inventors: Mark Schmisseur, Thomas Willhalm, Francesc Guim Bernat, Karthik Kumar
  • Patent number: 10746084
    Abstract: At least one thermal module is provided in fluidic communication with one or more electronic components. The thermal module includes a hydraulic motor operable to rotate a motor output shaft. The module further includes a fan coupled to the motor output shaft, at least one heat exchanger in fluidic communication with the fan to provide passage therethrough of an air stream in response to rotational movement of the fan, and a conduit carrying a pressurized liquid stream through the hydraulic motor and each of the at least one heat exchanger. The pressurized liquid stream causes the motor output shaft to rotate, and heat in one of the air stream or the pressurized liquid stream is passed through each of the at least one heat exchanger and rejected into the other of the air stream or the pressurized liquid stream. A thermal management system including the at least one thermal module is disclosed.
    Type: Grant
    Filed: December 13, 2018
    Date of Patent: August 18, 2020
    Assignee: General Electric Company
    Inventors: Juan Antonio Sabate, Ruxi Wang, Karthik Kumar Bodla, Krishna Mainali, Yash Veer Singh, Gary Dwayne Mandrusiak, William John Bonneau, Douglas Carl Hofer
  • Publication number: 20200241999
    Abstract: Examples described herein relate to an apparatus that includes a memory and at least one processor where the at least one processor is to receive configuration to gather performance data for a function from one or more platforms and during execution of the function, collect performance data for the function and store the performance data after termination of execution of the function. Some examples include an interface coupled to the at least one processor and the interface is to receive one or more of: an identifier of a function, resources to be tracked as part of function execution, list of devices to be tracked as part of function execution, type of monitoring of function execution, or meta-data to identify when the function is complete. Performance data can be accessed to determine performance of multiple executions of the short-lived function.
    Type: Application
    Filed: March 25, 2020
    Publication date: July 30, 2020
    Inventors: Francesc Guim Bernat, Steven Briscoe, Karthik Kumar, Alexander Bachmutsky, Timothy Verrall
  • Patent number: 10728311
    Abstract: A computing device, method and system to implement an adaptive compression scheme in a network fabric. The computing device may include a memory device and a fabric controller coupled to the memory device. The fabric controller may include processing circuitry having logic to communicate with a plurality of peer computing devices in the network fabric. The logic may be configured to implement the adaptive compression scheme to select, based on static information and on dynamic information relating to a peer computing device of the plurality of peer computing devices, a compression algorithm to compress a data payload destined for the peer computing device, and to compress the data payload based on the compression algorithm. The static information may include information on data payload decompression supported methods of the peer computing device, and the dynamic information may include information on link load at the peer computing device.
    Type: Grant
    Filed: February 16, 2017
    Date of Patent: July 28, 2020
    Assignee: Intel Corporation
    Inventors: Karthik Kumar, Francesc Guim Bernat, Thomas Willhalm, Nicolas A. Salhuana, Daniel Rivas Barragan
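The selection rule in this abstract — pick a compression algorithm from the peer's supported decompression methods (static information) based on the peer's link load (dynamic information) — can be sketched as below. The codec names, the load threshold, and the "strong codec when the link is loaded" heuristic are all assumptions for illustration.

```python
# Minimal sketch of the adaptive selection described above, assuming a
# heavily loaded peer link favors a stronger (slower) compressor while a
# lightly loaded link favors a faster one. Codec names are illustrative.
def select_compression(peer_supported, link_load):
    """Pick a codec the peer can decompress, ranked by its link load."""
    # Ordered strongest/slowest first (assumed ranking, loaded link).
    preference = ["strong-codec", "fast-codec", "none"]
    if link_load < 0.3:  # lightly loaded: spend bytes, not peer cycles
        preference = ["fast-codec", "none", "strong-codec"]
    for codec in preference:
        if codec in peer_supported:
            return codec
    return "none"

supported = {"strong-codec", "fast-codec"}  # static info from the peer
print(select_compression(supported, link_load=0.8))  # strong-codec
print(select_compression(supported, link_load=0.1))  # fast-codec
```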
  • Publication number: 20200228626
    Abstract: Technologies for providing advanced resource management in a disaggregated environment include a compute device. The compute device includes circuitry to obtain a workload to be executed by a set of resources in a disaggregated system, query a sled in the disaggregated system to identify an estimated time to complete execution of a portion of the workload to be accelerated using a kernel, and assign, in response to a determination that the estimated time to complete execution of the portion of the workload satisfies a target quality of service associated with the workload, the portion of the workload to the sled for acceleration.
    Type: Application
    Filed: March 25, 2020
    Publication date: July 16, 2020
    Inventors: Francesc Guim Bernat, Slawomir Putyrski, Susanne M. Balle, Thomas Willhalm, Karthik Kumar
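The assignment check in this last abstract — query each sled for an estimated completion time and assign the accelerated portion of the workload only if the estimate satisfies the target quality of service — can be sketched as follows. Sled names, units, and the first-fit policy are illustrative assumptions.

```python
# Hypothetical sketch of the QoS-gated kernel assignment described above:
# assign the accelerated portion to the first sled whose estimated
# completion time meets the workload's target QoS.
def assign_kernel(sled_estimates_ms, target_qos_ms):
    """Return a sled meeting the QoS target, or None if none qualifies."""
    for sled, estimate in sled_estimates_ms.items():
        if estimate <= target_qos_ms:
            return sled
    return None  # no sled can accelerate the portion within the target

estimates = {"sled-1": 42.0, "sled-2": 8.5}  # queried per-sled estimates
print(assign_kernel(estimates, target_qos_ms=10.0))  # sled-2
print(assign_kernel(estimates, target_qos_ms=5.0))   # None
```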