Patents by Inventor Karthik Kumar

Karthik Kumar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230136615
    Abstract: Various approaches for deploying and using virtual pools of compute resources with the use of infrastructure processing units (IPUs) and similar networked processing units are disclosed. A host computing system may be configured to operate a virtual pool of resources, with operations including: identifying, at the host computing system, availability of a resource at the host computing system; transmitting, to a network infrastructure device, a notification that the resource at the host computing system is available for use in a virtual resource pool in the edge computing network; receiving a request for the resource in the virtual resource pool that is provided on behalf of a client computing system, based on the request being coordinated via the network infrastructure device and including at least one quality of service (QoS) requirement; and servicing the request for the resource, based on the at least one QoS requirement.
    Type: Application
    Filed: December 29, 2022
    Publication date: May 4, 2023
    Inventors: Francesc Guim Bernat, Karthik Kumar, Marcos E. Carranza, Cesar Martinez-Spessot, Kshitij Arun Doshi
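The flow in the abstract of publication 20230136615 (advertise a local resource to the infrastructure device, then service QoS-qualified requests coordinated through it) can be illustrated with a minimal Python sketch; the class, method, and resource names are hypothetical and not taken from the application.

```python
from dataclasses import dataclass, field

@dataclass
class InfraDevice:
    """Stand-in for the network infrastructure device that coordinates the pool."""
    pool: dict = field(default_factory=dict)

    def register(self, host, resource):
        self.pool.setdefault(host, []).append(resource)

@dataclass
class ResourceRequest:
    client_id: str
    resource: str
    qos: dict  # e.g. {"max_latency_ms": 5}

class HostAgent:
    def __init__(self, infra: InfraDevice):
        self.infra = infra
        # Local resources and the QoS each can guarantee (illustrative values).
        self.local_resources = {"gpu0": {"max_latency_ms": 2}}

    def advertise(self):
        # Notify the infrastructure device that local resources may join the pool.
        for name in self.local_resources:
            self.infra.register(host="host-a", resource=name)

    def handle(self, req: ResourceRequest):
        # Service the request only if the resource can meet the QoS requirement.
        caps = self.local_resources.get(req.resource)
        if caps and caps["max_latency_ms"] <= req.qos.get("max_latency_ms", float("inf")):
            return f"served {req.resource} for {req.client_id}"
        return "rejected: QoS requirement not satisfiable"

infra = InfraDevice()
agent = HostAgent(infra)
agent.advertise()
print(agent.handle(ResourceRequest("client-1", "gpu0", {"max_latency_ms": 5})))
```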
  • Publication number: 20230132992
    Abstract: Various approaches for monitoring and responding to orchestration or service failures with the use of infrastructure processing units (IPUs) and similar networked processing units are disclosed. A method performed by a computing device for deploying remedial actions in failure scenarios of an orchestrated edge computing environment may include: identifying an orchestration configuration of a controller entity (responsible for orchestration) and a worker entity (subject to the orchestration to provide at least one service); determining a failure scenario of the orchestration of the worker entity, such as at a networked processing unit implemented at a network interface located between the controller entity and the worker entity; and causing a remedial action to resolve the failure scenario and modify the orchestration configuration, such as replacing functionality of the controller entity or the worker entity with functionality at a replacement entity.
    Type: Application
    Filed: December 29, 2022
    Publication date: May 4, 2023
    Inventors: Francesc Guim Bernat, Christian Maciocco, Kshitij Arun Doshi, Karthik Kumar
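Publication 20230132992 describes detecting an orchestration failure at a networked processing unit sitting between the controller and the worker, then triggering a remedial action. A minimal sketch of that idea, assuming a heartbeat-style liveness check (the timeout policy and names are illustrative, not the patent's mechanism):

```python
import time

class OrchestrationWatchdog:
    """Runs on the networked processing unit between controller and worker."""

    def __init__(self, timeout_s=5.0):
        self.timeout_s = timeout_s
        now = time.monotonic()
        self.last_seen = {"controller": now, "worker": now}

    def observe(self, entity):
        # Called whenever orchestration traffic from this entity crosses the NPU.
        self.last_seen[entity] = time.monotonic()

    def check(self, replacement="standby-node"):
        # Detect a failure scenario and propose a remedial action.
        now = time.monotonic()
        for entity, seen in self.last_seen.items():
            if now - seen > self.timeout_s:
                return f"failover: replace {entity} with {replacement}"
        return "healthy"

wd = OrchestrationWatchdog(timeout_s=0.1)
wd.observe("worker")
time.sleep(0.2)
print(wd.check())  # reports a stale entity and the proposed replacement
```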
  • Publication number: 20230135645
    Abstract: Various approaches for deploying and controlling distributed compute operations with the use of infrastructure processing units (IPUs) and similar networked processing units are disclosed.
    Type: Application
    Filed: December 29, 2022
    Publication date: May 4, 2023
    Inventors: Francesc Guim Bernat, Karthik Kumar, Kshitij Arun Doshi, Marcos E. Carranza
  • Publication number: 20230135938
    Abstract: Various approaches for service mesh switching, including the use of infrastructure processing units (IPUs) and similar networked processing units, are disclosed. For example, a packet that includes a service request for a service may be received at a networking infrastructure device. The service may include an application that spans multiple nodes in a network. An outbound interface of the networking infrastructure device may be selected through which to route the packet. The selection of the outbound interface may be based on a service component of the service request in the packet and network metrics that correspond to the service. The packet may then be transmitted using the outbound interface.
    Type: Application
    Filed: December 29, 2022
    Publication date: May 4, 2023
    Inventors: Marcos E. Carranza, Francesc Guim Bernat, Kshitij Arun Doshi, Karthik Kumar, Srikathyayani Srikanteswara, Mateo Guzman
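The selection step in publication 20230135938 (choose an outbound interface from service-aware network metrics) can be sketched as a simple scoring function; the latency-based rule below is an assumption, not the patent's selection logic.

```python
def select_interface(service, interfaces, metrics):
    """Pick the outbound interface with the lowest observed latency for this service.
    'metrics' maps (interface, service) to a latency measurement in milliseconds."""
    return min(interfaces, key=lambda i: metrics.get((i, service), float("inf")))

# Usage example with hypothetical measurements
metrics = {("eth0", "checkout"): 12.0, ("eth1", "checkout"): 4.5}
print(select_interface("checkout", ["eth0", "eth1"], metrics))  # -> eth1
```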
  • Publication number: 20230134683
    Abstract: Various approaches for configuring interleaving in a memory pool used in an edge computing arrangement, including with the use of infrastructure processing units (IPUs) and similar networked processing units, are disclosed. An example system may discover and map disaggregated memory resources at respective compute locations connected to one another via at least one interconnect. The system may identify workload requirements for use of the compute locations by respective workloads provided by client devices to the compute locations. The system may determine an interleaving arrangement for a memory pool that fulfills the workload requirements, to use the interleaving arrangement to distribute data for the respective workloads among the disaggregated memory resources. The system may configure the memory pool for use by the client devices of the network, with the memory pool causing the disaggregated memory resources to host data based on the interleaving arrangement.
    Type: Application
    Filed: December 29, 2022
    Publication date: May 4, 2023
    Inventors: Marcos E Carranza, Francesc Guim Bernat, Karthik Kumar, Kshitij Arun Doshi
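The core of publication 20230134683 is distributing a workload's data across disaggregated memory resources according to an interleaving arrangement. A minimal sketch of one such arrangement (fixed-size blocks placed round-robin; the block size and policy are illustrative assumptions):

```python
def interleave_target(block_index, resources):
    # Round-robin placement of fixed-size blocks across pooled memory resources.
    return resources[block_index % len(resources)]

# Hypothetical pool of disaggregated memory resources discovered by the system
resources = ["node0/cxl0", "node1/cxl0", "node2/cxl1"]
print([interleave_target(i, resources) for i in range(6)])
# ['node0/cxl0', 'node1/cxl0', 'node2/cxl1', 'node0/cxl0', 'node1/cxl0', 'node2/cxl1']
```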
  • Patent number: 11613350
    Abstract: A VTOL aircraft includes a plurality of lift propellers configured to be rotated by lift motors to provide vertical thrust during takeoff, landing, and hovering operations. The lift propellers are configured to generate a cooling airflow to cool the lift motors during use. During a cruise operation when the VTOL aircraft is in forward motion, the lift propellers may be stowed in a stationary position. Therefore, the cooling airflow may be reduced or eliminated when it is not needed.
    Type: Grant
    Filed: November 14, 2022
    Date of Patent: March 28, 2023
    Assignee: Archer Aviation, Inc.
    Inventors: Karthik Kumar Bodla, Bharat Tulsyan, Christopher M. Heath, Kerry Manning, Alan D. Tepe
  • Patent number: 11611491
    Abstract: An architecture to enable verification, ranking, and identification of respective edge service properties and associated service level agreement (SLA) properties, such as in an edge cloud or other edge computing environment, is disclosed. In an example, management and use of service information for an edge service includes: providing SLA information for an edge service to an operational device, for accessing an edge service hosted in an edge computing environment, with the SLA information providing reputation information for computing functions of the edge service according to an identified SLA; receiving a service request for use of the computing functions of the edge service, under the identified SLA; requesting, from the edge service, performance of the computing functions of the edge service according to the service request; and tracking the performance of the computing functions of the edge service according to the service request and compliance with the identified SLA.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: March 21, 2023
    Assignee: Intel Corporation
    Inventors: Ned M. Smith, Ben McCahill, Francesc Guim Bernat, Felipe Pastor Beneyto, Karthik Kumar, Timothy Verrall
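Patent 11611491 tracks how well an edge service performs against an identified SLA and derives reputation information from that tracking. A minimal sketch of the tracking step, assuming a latency-budget SLA and a simple compliance-ratio reputation score (both are illustrative assumptions):

```python
class SlaTracker:
    """Track SLA compliance for an edge service and derive a reputation score."""

    def __init__(self, latency_budget_ms):
        self.latency_budget_ms = latency_budget_ms
        self.total = 0
        self.compliant = 0

    def record(self, observed_latency_ms):
        # Called once per serviced request with the measured latency.
        self.total += 1
        if observed_latency_ms <= self.latency_budget_ms:
            self.compliant += 1

    @property
    def reputation(self):
        # Fraction of requests that met the SLA; None until data exists.
        return self.compliant / self.total if self.total else None

tracker = SlaTracker(latency_budget_ms=10)
for latency in (4, 8, 15):
    tracker.record(latency)
print(tracker.reputation)  # 0.666...
```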
  • Patent number: 11609859
    Abstract: Embodiments of the invention include a machine-readable medium having stored thereon at least one instruction, which if performed by a machine causes the machine to perform a method that includes decoding, with a node, an invalidate instruction; and executing, with the node, the invalidate instruction for invalidating a memory range specified across a fabric interconnect.
    Type: Grant
    Filed: November 17, 2020
    Date of Patent: March 21, 2023
    Assignee: Intel Corporation
    Inventors: Karthik Kumar, Thomas Willhalm, Francesc Guim Bernat, Brian J. Slechta
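Patent 11609859 concerns an instruction that invalidates a memory range specified across a fabric interconnect. The sketch below is only a software model of range-invalidation semantics over a set of cached lines, not the ISA- or fabric-level implementation; names and the 64-byte line size are assumptions.

```python
def invalidate_range(cached_lines, base, length, line_size=64):
    """Drop every cached line whose address falls inside [base, base + length)."""
    start = base - (base % line_size)          # align down to a line boundary
    end = base + length
    for addr in [a for a in cached_lines if start <= a < end]:
        del cached_lines[addr]

cache = {0x1000: b"A" * 64, 0x1040: b"B" * 64, 0x2000: b"C" * 64}
invalidate_range(cache, base=0x1000, length=128)
print(sorted(hex(a) for a in cache))  # ['0x2000']
```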
  • Publication number: 20230078777
    Abstract: In one aspect, a method of hyperspectral image correction includes the step of generating one or more lookup tables with a radiative transfer model for converting an at-sensor digital number (DN) image from a hyperspectral satellite to a bottom-of-atmosphere (BOA) radiance and reflectance value image. The method includes converting at-sensor image DN values to top-of-atmosphere (TOA) radiance and then to TOA reflectance. The method then includes creating a pre-classification layer using the TOA reflectance image to mask the TOA radiance image. Further, the method includes performing aerosol correction on the masked at-sensor radiance image by applying a pixel-wise albedo estimation using the one or more lookup tables to generate an aerosol-corrected radiance image. The method includes performing a water vapor correction on the aerosol-corrected radiance image to generate a BOA radiance image. Finally, the method includes converting the BOA radiance image to a BOA reflectance image.
    Type: Application
    Filed: August 16, 2022
    Publication date: March 16, 2023
    Inventors: Rahul Raj, Karthik Kumar Billa
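The first stages of the pipeline in publication 20230078777 use standard radiometric conversions. A short sketch of those steps (the calibration coefficients, cloud threshold, and function names are illustrative; the aerosol and water-vapor corrections, which depend on the patent's lookup tables, are only indicated):

```python
import numpy as np

def dn_to_toa_radiance(dn, gain, offset):
    # Sensor calibration: at-sensor digital numbers to top-of-atmosphere radiance.
    return gain * dn + offset

def toa_radiance_to_reflectance(radiance, esun, earth_sun_dist, solar_zenith_deg):
    # Standard TOA reflectance: pi * L * d^2 / (ESUN * cos(solar zenith)).
    cos_sza = np.cos(np.deg2rad(solar_zenith_deg))
    return np.pi * radiance * earth_sun_dist**2 / (esun * cos_sza)

def preclassification_mask(toa_reflectance, cloud_threshold=0.4):
    # Illustrative pre-classification: mask very bright (e.g. cloudy) pixels.
    return toa_reflectance < cloud_threshold

dn = np.array([[120.0, 3400.0]])
radiance = dn_to_toa_radiance(dn, gain=0.05, offset=1.0)
reflectance = toa_radiance_to_reflectance(radiance, esun=1500.0,
                                          earth_sun_dist=1.0, solar_zenith_deg=30.0)
mask = preclassification_mask(reflectance)
# The aerosol and water-vapor steps would interpolate the pre-computed
# radiative-transfer lookup tables on the masked radiance image.
```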
  • Patent number: 11601523
    Abstract: Generally discussed herein are systems, devices, and methods for a prefetcher in a multi-tiered, distributed shared memory (DSM) system. A node can include a network interface controller (NIC) comprising system address decoder (SAD) circuitry configured to determine a node identification of a node to which a memory request from a processor is homed, and prefetcher circuitry communicatively coupled to the SAD circuitry, the prefetcher circuitry to determine, based on an address in the memory request, one or more addresses from which to prefetch data, the one or more addresses corresponding to respective entries in a memory of a node on a different network than the NIC.
    Type: Grant
    Filed: December 16, 2016
    Date of Patent: March 7, 2023
    Assignee: Intel Corporation
    Inventors: Karthik Kumar, Francesc Cesc Guim Bernat, Thomas Willhalm, Martin P. Dimitrov, Raj K. Ramanujan
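Patent 11601523 combines two decisions: which node an address is homed to (the system address decoder) and which additional addresses to prefetch. A minimal sketch of both, assuming an address-interleaved home-node rule and a next-line prefetcher (both are illustrative policies, not the patented circuitry):

```python
def home_node(address, node_count, interleave=4096):
    # System address decoder: map an address to its home node (illustrative rule).
    return (address // interleave) % node_count

def prefetch_candidates(address, depth=2, line=64):
    # Next-line prefetcher: addresses likely to be requested after this one.
    return [address + line * i for i in range(1, depth + 1)]

addr = 0x0001_2340
print(home_node(addr, node_count=4))          # node the request is homed to
print([hex(a) for a in prefetch_candidates(addr)])  # lines to prefetch remotely
```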
  • Publication number: 20230036751
    Abstract: A network processing device identifies a first request to access a line of memory in a remote memory resource and determines, based on the address of the line of memory, that the line of memory is associated with a sparse region in a memory pool. The address is provided as an input to a probabilistic data structure, where the probabilistic data structure is to generate a result to identify whether the line of memory includes a common data pattern. The network processing device returns the common data pattern as a response to the first request if the result of the probabilistic data structure indicates that the line of memory includes the common data pattern.
    Type: Application
    Filed: September 30, 2022
    Publication date: February 2, 2023
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas J. Willhalm
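Publication 20230036751 answers some reads locally by consulting a probabilistic data structure. One plausible realization is a Bloom-filter-style structure; in the sketch below the filter tracks lines written with data other than the common pattern (illustratively, all zeros), so a miss means the line can be answered locally and a false positive only costs an unnecessary remote read. This split of roles is an assumption about how the structure could be used, not the patent's specific design.

```python
import hashlib

class WrittenLineFilter:
    """Bloom-filter-style membership set over line addresses."""

    def __init__(self, bits=1 << 16, hashes=3):
        self.bits, self.hashes = bits, hashes
        self.bitmap = bytearray(bits // 8)

    def _positions(self, line_addr):
        for i in range(self.hashes):
            d = hashlib.blake2b(f"{line_addr}:{i}".encode(), digest_size=8).digest()
            yield int.from_bytes(d, "little") % self.bits

    def mark_written(self, line_addr):
        for p in self._positions(line_addr):
            self.bitmap[p // 8] |= 1 << (p % 8)

    def maybe_written(self, line_addr):
        return all(self.bitmap[p // 8] & (1 << (p % 8)) for p in self._positions(line_addr))

COMMON_PATTERN = bytes(64)  # a 64-byte line of zeros

def read_line(filt, line_addr, fetch_remote):
    # Answer locally when the filter says the line was never written.
    return fetch_remote(line_addr) if filt.maybe_written(line_addr) else COMMON_PATTERN
```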
  • Patent number: 11567683
    Abstract: Technologies for providing deduplication of data in an edge network include a compute device having circuitry to obtain a request to write a data set. The circuitry is also to apply, to the data set, an approximation function to produce an approximated data set. Additionally, the circuitry is to determine whether the approximated data set is already present in a shared memory and write, to a translation table and in response to a determination that the approximated data set is already present in the shared memory, an association between a local memory address and a location, in the shared memory, where the approximated data set is already present. Additionally, the circuitry is to increase a reference count associated with the location in the shared memory.
    Type: Grant
    Filed: March 28, 2019
    Date of Patent: January 31, 2023
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Timothy Verrall, Ned Smith
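The write path in patent 11567683 (approximate the data, look it up in shared memory, then either record a translation and bump a reference count or store it fresh) can be modeled in a few lines. The rounding-based approximation function and data layout below are illustrative assumptions.

```python
class ApproxDedupStore:
    """Software model of approximate deduplication on the write path."""

    def __init__(self, precision=2):
        self.precision = precision
        self.shared = {}       # approximated payload -> shared-memory location
        self.refcount = {}     # shared-memory location -> number of references
        self.translation = {}  # local address -> shared-memory location
        self.next_loc = 0

    def write(self, local_addr, values):
        approx = tuple(round(v, self.precision) for v in values)  # approximation function
        loc = self.shared.get(approx)
        if loc is None:                       # not yet present: store the approximated set
            loc = self.next_loc
            self.next_loc += 1
            self.shared[approx] = loc
            self.refcount[loc] = 0
        self.translation[local_addr] = loc    # record local address -> shared location
        self.refcount[loc] += 1               # one more reference to this location
        return loc

store = ApproxDedupStore()
a = store.write(0x1000, [0.501, 1.499])
b = store.write(0x2000, [0.499, 1.501])  # approximates to the same payload
print(a == b, store.refcount[a])         # True 2
```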
  • Patent number: 11570264
    Abstract: An apparatus to facilitate provenance audit trails for microservices architectures is disclosed. The apparatus includes one or more processors to: obtain, by a microservice of a service hosted in a datacenter, provisioned credentials for the microservice based on an attestation protocol; generate, for a task performed by the microservice, provenance metadata for the task, the provenance metadata including identification of the microservice, operating state of at least one of a hardware resource or a software resource used to execute the microservice and the task, and operating state of a sidecar of the microservice during the task; encrypt the provenance metadata with the provisioned credentials for the microservice; and record the encrypted provenance metadata in a local blockchain of provenance metadata maintained for the hardware resource executing the task and the microservice.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: January 31, 2023
    Assignee: Intel Corporation
    Inventors: Rajesh Poornachandran, Vincent Zimmer, Subrata Banik, Marcos Carranza, Kshitij Arun Doshi, Francesc Guim Bernat, Karthik Kumar
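Patent 11570264 records per-task provenance metadata, protected with provisioned credentials, in a local chain maintained per hardware resource. A minimal hash-chained sketch of that record-keeping; the HMAC key merely stands in for attested, provisioned credentials, and full deployments would encrypt the metadata rather than only authenticate it.

```python
import hashlib
import hmac
import json

class ProvenanceLog:
    """Append-only, hash-chained log of per-task provenance records."""

    def __init__(self, key: bytes):
        self.key = key
        self.chain = []

    def record(self, microservice, task, state):
        entry = {
            "microservice": microservice,
            "task": task,
            "state": state,                      # hardware/software/sidecar state
            "prev": self.chain[-1]["digest"] if self.chain else None,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["digest"] = hmac.new(self.key, payload, hashlib.sha256).hexdigest()
        self.chain.append(entry)
        return entry["digest"]

log = ProvenanceLog(b"provisioned-credential")
print(log.record("imgproc", "resize-42", {"cpu": "ok", "sidecar": "ok"}))
```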
  • Publication number: 20230023229
    Abstract: In a server system, a host computing platform can have a processing unit separate from the host processor to detect and respond to failure of the host processor. The host computing platform includes a memory to store data for the host processor. The processing unit has an interface to the host processor and the memory, an interface to a network external to the host processor, and access to the memory. In response to detection of failure of the host processor, the processing unit migrates data from the memory to another memory or storage.
    Type: Application
    Filed: September 26, 2022
    Publication date: January 26, 2023
    Inventors: Karthik Kumar, Francesc Guim Bernat, Alexander Bachmutsky, Susanne M. Balle, Andrzej Kuriata, Nagabhushan Chitlur
  • Publication number: 20230029026
    Abstract: A network processing device connects to one or more devices in a computing node and connects to one or more other network processing devices of other computing nodes. The network processing device identifies a policy for allowing devices in other computing nodes to access a particular resource of one of the devices in its computing node. The network processing device receives an access request to access the particular resource from another network processing device and sends a request to the device hosting the particular resource based on the access request and the policy.
    Type: Application
    Filed: September 30, 2022
    Publication date: January 26, 2023
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Susanne M. Balle, Andrzej Kuriata, Duane Galbi
  • Publication number: 20230022544
    Abstract: In one embodiment, an apparatus couples to a host processor over a Compute Express Link (CXL)-based link. The apparatus includes a transaction queue to queue memory transactions to be completed in an addressable memory coupled to the apparatus, a transaction cache, conflict detection circuitry to determine whether a conflict exists between memory transactions, and transaction execution circuitry. The transaction execution circuitry may access a transaction from the transaction queue, the transaction to implement one or more memory operations in the memory; store, in the transaction cache, data from the memory to be accessed by the transaction's operations; execute operations of the transaction, including modifying the data stored in the transaction cache; and, based on completion of the transaction, cause the modified data from the transaction cache to be stored in the memory.
    Type: Application
    Filed: September 30, 2022
    Publication date: January 26, 2023
    Applicant: Intel Corporation
    Inventors: Thomas J. Willhalm, Francesc Guim Bernat, Karthik Kumar, Marcos E. Carranza
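The sequence in publication 20230022544 (queue a transaction, detect conflicts, stage data in a transaction cache, commit on completion) is sketched below as a software model; the conflict rule, operation format, and names are illustrative, not the CXL device's actual circuitry.

```python
class TransactionEngine:
    """Software model of queued memory transactions with conflict detection."""

    def __init__(self, memory):
        self.memory = memory   # address -> value
        self.inflight = []     # sets of addresses touched by in-flight transactions

    def conflicts(self, addresses):
        return any(addresses & touched for touched in self.inflight)

    def execute(self, ops):
        touched = {addr for _, addr, *_ in ops}
        if self.conflicts(touched):
            return "retry"                                   # conflict detected, defer
        self.inflight.append(touched)
        cache = {addr: self.memory.get(addr, 0) for addr in touched}  # transaction cache
        for op in ops:                                       # run against the cache
            if op[0] == "add":
                _, addr, value = op
                cache[addr] += value
        self.memory.update(cache)                            # commit on completion
        self.inflight.remove(touched)
        return "committed"

mem = {0x100: 1}
eng = TransactionEngine(mem)
print(eng.execute([("add", 0x100, 5)]), mem[0x100])  # committed 6
```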
  • Publication number: 20230022620
    Abstract: An architecture to perform resource management among multiple network nodes and associated resources is disclosed. Example resource management techniques include those relating to: proactive reservation of edge computing resources; deadline-driven resource allocation; speculative edge QoS pre-allocation; and automatic QoS migration across edge computing nodes.
    Type: Application
    Filed: July 28, 2022
    Publication date: January 26, 2023
    Inventors: Francesc Guim Bernat, Patrick Bohan, Kshitij Arun Doshi, Brinda Ganesh, Andrew J. Herdrich, Monica Kenguva, Karthik Kumar, Patrick G. Kutch, Felipe Pastor Beneyto, Rashmin Patel, Suraj Prabhakaran, Ned M. Smith, Petar Torre, Alexander Vul
  • Patent number: 11561868
    Abstract: Embodiments described herein are generally directed to intelligent management of microservices failover. In an example, responsive to an uncorrectable hardware error associated with a processing resource of a platform on which a task of a service is being performed by a primary microservice, a failover trigger is received by a failover service. A secondary microservice is identified by the failover service that is operating in lockstep mode with the primary microservice. The secondary microservice is caused by the failover service to takeover performance of the task in non-lockstep mode based on failover metadata persisted by the primary microservice. The primary microservice is caused by the failover service to be taken offline.
    Type: Grant
    Filed: December 23, 2021
    Date of Patent: January 24, 2023
    Assignee: Intel Corporation
    Inventors: Rajesh Poornachandran, Marcos Carranza, Kshitij Arun Doshi, Francesc Guim Bernat, Karthik Kumar
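The failover flow in patent 11561868 (on an uncorrectable hardware error, promote the lockstep secondary using the primary's persisted failover metadata and take the primary offline) can be sketched as follows; all class and method names are hypothetical.

```python
class Microservice:
    """Minimal stand-in exposing only the hooks the failover service needs."""
    def __init__(self, name):
        self.name = name

    def resume(self, task_id, metadata, lockstep):
        print(f"{self.name} resumes task {task_id} (lockstep={lockstep}) from {metadata}")

    def shutdown(self):
        print(f"{self.name} taken offline")

class FailoverService:
    def __init__(self, primary, secondary, metadata_store):
        self.primary, self.secondary = primary, secondary
        self.metadata_store = metadata_store  # task_id -> persisted failover metadata

    def on_uncorrectable_error(self, task_id):
        # Promote the lockstep secondary and take the failed primary offline.
        metadata = self.metadata_store.get(task_id, {})
        self.secondary.resume(task_id, metadata, lockstep=False)
        self.primary.shutdown()

svc = FailoverService(Microservice("primary"), Microservice("secondary"),
                      {"task-7": {"checkpoint": 42}})
svc.on_uncorrectable_error("task-7")
```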
  • Patent number: 11560266
    Abstract: A food delivery enclosure for delivering groceries includes a low insulation zone and a high insulation zone. The low insulation zone includes energy packs, such as chill packs or hot packs, and food items suitable for contact with the energy packs. The high insulation zone is above the low insulation zone and includes food items that are not suitable for contact with the energy packs.
    Type: Grant
    Filed: August 30, 2017
    Date of Patent: January 24, 2023
    Assignee: Amazon Technologies, Inc.
    Inventor: Karthik Kumar
  • Publication number: 20230004417
    Abstract: Scalable I/O Virtualization (Scalable IOV) allows efficient and scalable sharing of Input/Output (I/O) devices across a large number of containers or virtual machines. Scalable IOV defines the granularity of sharing of a device as an Assignable Device Interface (ADI). In response to a request for a virtual device composition, an ADI is selected based on affinity to the same NUMA node as the running virtual machine, utilization metrics for the Input-Output Memory Management Unit (IOMMU), and utilization metrics of a device of the same device class. Selecting the ADI based on locality and utilization metrics reduces latency and increases throughput for a virtual machine running critical or real-time workloads.
    Type: Application
    Filed: September 6, 2022
    Publication date: January 5, 2023
    Inventors: Karthik V. Narayanan, Raghunathan Srinivasan, Karthik Kumar
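The ADI selection described in publication 20230004417 (prefer locality to the VM's NUMA node, then lower IOMMU and device utilization) can be sketched as a scoring function; the lexicographic weighting below is an illustrative assumption, not the patent's scoring rule.

```python
def select_adi(adis, vm_numa_node, iommu_util, device_util):
    """Choose an Assignable Device Interface by NUMA affinity, then utilization."""
    def score(adi):
        numa_penalty = 0 if adi["numa_node"] == vm_numa_node else 1
        return (numa_penalty, iommu_util[adi["iommu"]], device_util[adi["device"]])
    return min(adis, key=score)

# Hypothetical ADIs and utilization metrics
adis = [
    {"id": "adi0", "numa_node": 0, "iommu": "iommu0", "device": "nic0"},
    {"id": "adi1", "numa_node": 1, "iommu": "iommu1", "device": "nic1"},
]
print(select_adi(adis, vm_numa_node=1,
                 iommu_util={"iommu0": 0.7, "iommu1": 0.2},
                 device_util={"nic0": 0.5, "nic1": 0.1})["id"])  # -> adi1
```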