Patents by Inventor Utkarsh Y. Kakaiya

Utkarsh Y. Kakaiya has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240338319
    Abstract: Embodiments of apparatuses, methods, and systems for unified address translation for virtualization of input/output devices are described. In an embodiment, an apparatus includes first circuitry to use at least an identifier of a device to locate a context entry and second circuitry to use at least a process address space identifier (PASID) to locate a PASID-entry. The context entry is to include at least one of a page-table pointer to a page-table translation structure and a PASID. The PASID-entry is to include at least one of a first-level page-table pointer to a first-level translation structure and a second-level page-table pointer to a second-level translation structure. The PASID is to be supplied by the device. At least one of the apparatus, the context entry, and the PASID entry is to include one or more control fields to indicate whether the first-level page-table pointer or the second-level page-table pointer is to be used.
    Type: Application
    Filed: June 17, 2024
    Publication date: October 10, 2024
    Applicant: Intel Corporation
    Inventors: Utkarsh Y. Kakaiya, Sanjay Kumar, Rajesh M. Sankaran, Philip R. Lantz, Ashok Raj, Kun Tian
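
The two-stage walk described in the address-translation abstract above can be modeled in software. The C sketch below is a minimal illustration under assumed struct layouts and field names (the real context-entry and PASID-entry formats are hardware-defined and are not reproduced here): a device identifier selects a context entry, the device-supplied PASID selects a PASID entry, and a control field chooses between the first-level and second-level page-table pointers.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Control field: which translation structure the walk should use. */
enum translation_mode { USE_FIRST_LEVEL, USE_SECOND_LEVEL };

/* Illustrative PASID-table entry: both page-table pointers plus the
 * control field that selects between them. */
struct pasid_entry {
    uint64_t first_level_ptr;    /* root of the first-level translation structure  */
    uint64_t second_level_ptr;   /* root of the second-level translation structure */
    enum translation_mode mode;  /* control field described in the abstract        */
};

/* Illustrative context entry located by the device identifier. */
struct context_entry {
    struct pasid_entry *pasid_table;  /* indexed by the PASID supplied by the device */
    size_t pasid_table_entries;
};

/* Two-stage lookup: device ID -> context entry, PASID -> PASID entry,
 * then the control field picks which page-table pointer is returned. */
static uint64_t translation_root(const struct context_entry *ctx_table,
                                 uint16_t device_id, uint32_t pasid)
{
    const struct context_entry *ctx = &ctx_table[device_id];
    const struct pasid_entry *pe = &ctx->pasid_table[pasid];
    return (pe->mode == USE_FIRST_LEVEL) ? pe->first_level_ptr
                                         : pe->second_level_ptr;
}

int main(void)
{
    struct pasid_entry pasids[2] = {
        { 0x1000, 0x2000, USE_FIRST_LEVEL  },
        { 0x3000, 0x4000, USE_SECOND_LEVEL },
    };
    struct context_entry contexts[1] = { { pasids, 2 } };

    /* In a real IOMMU the returned root would seed a page-table walk;
     * here it is simply printed. */
    printf("root for device 0, PASID 1: 0x%llx\n",
           (unsigned long long)translation_root(contexts, 0, 1));
    return 0;
}
```
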
  • Patent number: 12112204
    Abstract: A system comprising an accelerator circuit comprising an accelerator function unit to implement a first function, and one or more device feature header (DFH) circuits to provide attributes associated with the accelerator function unit, and a processor to retrieve the attributes of the accelerator function unit by traversing a device feature list (DFL) referencing the one or more DFH circuits, execute, based on the attributes, an application encoding the first function to cause the accelerator function unit to perform the first function.
    Type: Grant
    Filed: August 9, 2022
    Date of Patent: October 8, 2024
    Assignee: Intel Corporation
    Inventors: Pratik M. Marolia, Aaron J. Grier, Henry M. Mitchel, Joseph Grecco, Michael C. Adler, Utkarsh Y. Kakaiya, Joshua D. Fender, Sundar Nadathur, Nagabhushan Chitlur
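
A rough software analogue of the device feature list (DFL) traversal mentioned in the entry above, assuming a simplified DFH layout (a feature ID plus a next-header offset, with 0 terminating the chain); the real register format carries additional fields.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative device feature header (DFH): a feature identifier plus
 * the byte offset of the next header; 0 terminates the list. */
struct dfh {
    uint16_t feature_id;
    uint16_t next_offset;
};

/* Walk a device feature list laid out in a flat register/MMIO image,
 * printing each feature the processor would discover. */
static void walk_dfl(const uint8_t *regs)
{
    uint32_t off = 0;
    for (;;) {
        struct dfh hdr;
        memcpy(&hdr, regs + off, sizeof hdr);   /* avoid alignment assumptions */
        printf("feature 0x%04x at offset %u\n", hdr.feature_id, (unsigned)off);
        if (hdr.next_offset == 0)
            break;
        off += hdr.next_offset;
    }
}

int main(void)
{
    /* Two chained DFHs: feature 0x0001 at offset 0, feature 0x0002 at offset 8. */
    uint8_t regs[16] = { 0 };
    struct dfh d0 = { 0x0001, 8 }, d1 = { 0x0002, 0 };
    memcpy(regs, &d0, sizeof d0);
    memcpy(regs + 8, &d1, sizeof d1);
    walk_dfl(regs);
    return 0;
}
```
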
  • Patent number: 12086082
    Abstract: Methods and apparatus for PASID-based routing extension for Scalable IOV systems. The system may include a Central Processing Unit (CPU) operatively coupled to a scalable Input/Output Virtualization (IOV) device via an in-line device such as a smart controller or accelerator. A Control Process Address Space Identifier (C-PASID) associated with a first memory space is implemented in an Assignable Device Interface (ADI) for the IOV device. The ADI also implements a Data PASID (D-PASID) associated with a second memory space in which data are stored. The C-PASID is used to fetch a descriptor in the first memory space and the D-PASID is employed to fetch data in the second memory space. A hub embedded on the in-line device or implemented as a discrete device is used to steer memory access requests and/or fetches to the CPU or to the in-line device using the C-PASID and D-PASID.
    Type: Grant
    Filed: September 21, 2020
    Date of Patent: September 10, 2024
    Assignee: Intel Corporation
    Inventors: Pratik Marolia, Sanjay Kumar, Rajesh Sankaran, Utkarsh Y. Kakaiya
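
The steering decision described in the entry above can be sketched as follows. The routing policy (C-PASID-tagged descriptor fetches toward host memory via the CPU, D-PASID-tagged data fetches toward memory behind the in-line device) and the field names are illustrative assumptions, not the claimed implementation.

```c
#include <stdint.h>
#include <stdio.h>

/* Where a memory access is steered; the names are illustrative. */
enum steer_target { STEER_TO_CPU, STEER_TO_INLINE_DEVICE };

/* Illustrative Assignable Device Interface state: the control PASID
 * tags descriptor fetches, the data PASID tags data fetches. */
struct adi {
    uint32_t c_pasid;   /* identifies the descriptor (control) address space */
    uint32_t d_pasid;   /* identifies the data address space                 */
};

/* Hub steering decision keyed on the PASID carried by the request. */
static enum steer_target steer(const struct adi *adi, uint32_t request_pasid)
{
    if (request_pasid == adi->c_pasid)
        return STEER_TO_CPU;              /* descriptor fetch: host memory path */
    if (request_pasid == adi->d_pasid)
        return STEER_TO_INLINE_DEVICE;    /* data fetch: in-line device memory  */
    return STEER_TO_CPU;                  /* unknown PASID: default to host     */
}

int main(void)
{
    struct adi adi = { .c_pasid = 10, .d_pasid = 20 };
    printf("PASID 10 -> %s\n",
           steer(&adi, 10) == STEER_TO_CPU ? "CPU" : "in-line device");
    printf("PASID 20 -> %s\n",
           steer(&adi, 20) == STEER_TO_CPU ? "CPU" : "in-line device");
    return 0;
}
```
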
  • Publication number: 20240248792
    Abstract: Systems, methods, and devices for isolating a misbehaving accelerator circuit, such as an accelerator function unit or an accelerated function context, are provided. An integrated circuit may include a region that includes an accelerator circuit. When the accelerator circuit issues a request, another region of the integrated circuit or a processor connected to the integrated circuit may determine whether there is a misbehavior associated with the request and, in response to determining that there is a misbehavior associated with the request, may perform a misbehavior response to mitigate a negative impact of the misbehavior of the accelerator circuit.
    Type: Application
    Filed: March 30, 2024
    Publication date: July 25, 2024
    Inventors: Sundar Nadathur, Pratik M. Marolia, Henry M. Mitchel, Joseph J. Grecco, Utkarsh Y. Kakaiya, David A. Munday
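
One way to picture the isolation flow in the abstract above is an address-window check followed by a containment action. The sketch below is a hypothetical software model: the only misbehavior it detects is an out-of-window access, and the response is reduced to rejecting the request and reporting that the accelerator region would be quiesced.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative request issued by an accelerator function unit. */
struct afu_request {
    uint64_t address;   /* target address of the access */
    uint32_t length;    /* access length in bytes       */
};

/* One possible misbehavior check: the request must stay inside the
 * window assigned to the accelerator. Real checks would also cover
 * protocol violations, timeouts, and unexpected responses. */
static bool is_misbehaving(const struct afu_request *req,
                           uint64_t window_base, uint64_t window_size)
{
    return req->address < window_base ||
           req->address + req->length > window_base + window_size;
}

/* Misbehavior response: drop the request and contain the offending
 * region so the rest of the integrated circuit keeps running. */
static void handle_request(const struct afu_request *req,
                           uint64_t window_base, uint64_t window_size)
{
    if (is_misbehaving(req, window_base, window_size)) {
        printf("request to 0x%llx rejected; quiescing accelerator region\n",
               (unsigned long long)req->address);
        return;
    }
    printf("request to 0x%llx forwarded\n", (unsigned long long)req->address);
}

int main(void)
{
    struct afu_request ok  = { 0x1000, 64 };
    struct afu_request bad = { 0x9000, 64 };
    handle_request(&ok,  0x1000, 0x1000);
    handle_request(&bad, 0x1000, 0x1000);
    return 0;
}
```
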
  • Patent number: 12045640
    Abstract: In one embodiment, a data mover accelerator is to receive, from a first agent having a first address space and a first process address space identifier (PASID) to identify the first address space, a first job descriptor comprising a second PASID selector to specify a second PASID to identify a second address space. In response to the first job descriptor, the data mover accelerator is to securely access the first address space and the second address space. Other embodiments are described and claimed.
    Type: Grant
    Filed: June 23, 2020
    Date of Patent: July 23, 2024
    Assignee: Intel Corporation
    Inventors: Sanjay K. Kumar, Philip Lantz, Rajesh Sankaran, Narayan Ranganathan, Saurabh Gayen, David A. Koufaty, Utkarsh Y. Kakaiya
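
A hedged software model of the descriptor handling described in the entry above: the job descriptor carries the submitter's PASID and a selector naming a second PASID, and the data mover checks a permission policy before operating across the two address spaces. All field names and the `allowed` callback are assumptions made for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative job descriptor: the submitter's own PASID identifies the
 * first address space, and the selector names a second PASID/address
 * space the data mover may also touch, subject to permission checks. */
struct job_descriptor {
    uint32_t submitter_pasid;   /* first address space (the submitting agent) */
    uint32_t second_pasid_sel;  /* selects the second PASID/address space     */
    uint64_t src_addr;          /* source address in the first address space  */
    uint64_t dst_addr;          /* destination in the second address space    */
    uint32_t length;
};

/* Sketch of the accelerator's handling: verify the submitter may name
 * the second PASID before moving data across the two address spaces. */
static int process_descriptor(const struct job_descriptor *d,
                              int (*allowed)(uint32_t submitter, uint32_t target))
{
    if (!allowed(d->submitter_pasid, d->second_pasid_sel))
        return -1;   /* reject: no permission for the second address space */
    printf("copy %u bytes: PASID %u:0x%llx -> PASID %u:0x%llx\n",
           d->length, d->submitter_pasid, (unsigned long long)d->src_addr,
           d->second_pasid_sel, (unsigned long long)d->dst_addr);
    return 0;
}

static int allow_all(uint32_t submitter, uint32_t target)
{
    (void)submitter; (void)target;
    return 1;   /* placeholder policy for the example */
}

int main(void)
{
    struct job_descriptor d = { 5, 9, 0x1000, 0x2000, 4096 };
    return process_descriptor(&d, allow_all) == 0 ? 0 : 1;
}
```
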
  • Publication number: 20240231801
    Abstract: The technology disclosed herein includes getting a system update configuration for managing updating of at least one of a software component and a firmware component of a computing system powered by a battery; determining an estimated system update time of usage of the battery to update the at least one of the software component and the firmware component based at least in part on the system update configuration; updating the at least one of the software component and the firmware component when resource requirements of the system update configuration are met and the estimated system update time is less than or equal to a minimum remaining time of usage of the battery; and deferring the updating when the resource requirements of the system update configuration are not met or the estimated system update time is greater than the minimum remaining time of usage of the battery.
    Type: Application
    Filed: May 26, 2022
    Publication date: July 11, 2024
    Applicant: Intel Corporation
    Inventors: Subrata BANIK, Rajesh POORNACHANDRAN, Vincent ZIMMER, Utkarsh Y. KAKAIYA
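
The update/defer rule in the abstract above reduces to a small predicate. The sketch below assumes minutes as the unit and illustrative field names; the patent's resource requirements are summarized here by a single boolean.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative system update configuration. */
struct update_config {
    bool resource_requirements_met;          /* e.g. disk space, network, idle state    */
    unsigned estimated_update_minutes;       /* estimated battery time the update needs */
    unsigned min_remaining_battery_minutes;  /* minimum remaining battery runtime       */
};

/* Decision rule from the abstract: apply the update only when the
 * resource requirements are met and the estimated update time fits
 * within the minimum remaining battery time; otherwise defer. */
static bool should_apply_update(const struct update_config *cfg)
{
    return cfg->resource_requirements_met &&
           cfg->estimated_update_minutes <= cfg->min_remaining_battery_minutes;
}

int main(void)
{
    struct update_config cfg = { true, 12, 45 };
    printf("%s\n", should_apply_update(&cfg) ? "apply update" : "defer update");
    return 0;
}
```
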
  • Publication number: 20240220622
    Abstract: Circuitry and methods for implementing address translation extensions for confidential computing hosts are described.
    Type: Application
    Filed: December 30, 2022
    Publication date: July 4, 2024
    Inventors: Utkarsh Y. Kakaiya, Eric Geisler, Rupin H. Vakharwala, Michael Prinke, David Koufaty
  • Patent number: 12013790
    Abstract: Embodiments of apparatuses, methods, and systems for unified address translation for virtualization of input/output devices are described. In an embodiment, an apparatus includes first circuitry to use at least an identifier of a device to locate a context entry and second circuitry to use at least a process address space identifier (PASID) to locate a PASID-entry. The context entry is to include at least one of a page-table pointer to a page-table translation structure and a PASID. The PASID-entry is to include at least one of a first-level page-table pointer to a first-level translation structure and a second-level page-table pointer to a second-level translation structure. The PASID is to be supplied by the device. At least one of the apparatus, the context entry, and the PASID entry is to include one or more control fields to indicate whether the first-level page-table pointer or the second-level page-table pointer is to be used.
    Type: Grant
    Filed: May 22, 2023
    Date of Patent: June 18, 2024
    Assignee: Intel Corporation
    Inventors: Utkarsh Y. Kakaiya, Sanjay Kumar, Rajesh M. Sankaran, Philip R. Lantz, Ashok Raj, Kun Tian
  • Patent number: 11995462
    Abstract: Techniques for transferring virtual machines and resource management in a virtualized computing environment are described. In one embodiment, for example, an apparatus may include at least one memory, at least one processor, and logic for transferring a virtual machine (VM), at least a portion of the logic comprised in hardware coupled to the at least one memory and the at least one processor, the logic to generate a plurality of virtualized capability registers for a virtual device (VDEV) by virtualizing a plurality of device-specific capability registers of a physical device to be virtualized by the VM, the plurality of virtualized capability registers comprising a plurality of device-specific capabilities of the physical device, determine a version of the physical device to support via a virtual machine monitor (VMM), and expose a subset of the virtualized capability registers associated with the version to the VM. Other embodiments are described and claimed.
    Type: Grant
    Filed: January 11, 2023
    Date of Patent: May 28, 2024
    Assignee: Intel Corporation
    Inventors: Sanjay Kumar, Philip R. Lantz, Kun Tian, Utkarsh Y. Kakaiya, Rajesh M. Sankaran
  • Patent number: 11966281
    Abstract: Systems, methods, and devices for isolating a misbehaving accelerator circuit, such as an accelerator function unit or an accelerated function context, are provided. An integrated circuit may include a region that includes an accelerator circuit. When the accelerator circuit issues a request, another region of the integrated circuit or a processor connected to the integrated circuit may determine whether there is a misbehavior associated with the request and, in response to determining that there is a misbehavior associated with the request, may perform a misbehavior response to mitigate a negative impact of the misbehavior of the accelerator circuit.
    Type: Grant
    Filed: April 18, 2022
    Date of Patent: April 23, 2024
    Assignee: Intel Corporation
    Inventors: Sundar Nadathur, Pratik M. Marolia, Henry M. Mitchel, Joseph J. Grecco, Utkarsh Y. Kakaiya, David A. Munday
  • Publication number: 20240126555
    Abstract: A method of an aspect includes receiving a request for a chained accelerator operation, and configuring a chain of accelerators to perform the chained accelerator operation. This may include configuring a first accelerator to access an input data from a source memory location in system memory, process the input data, and generate first intermediate data. This may also include configuring a second accelerator to receive the first intermediate data, without the first intermediate data having been sent to the system memory, process the first intermediate data, and generate additional data. Other apparatus, methods, systems, and machine-readable medium are disclosed.
    Type: Application
    Filed: October 17, 2022
    Publication date: April 18, 2024
    Inventors: Saurabh GAYEN, Christopher J. HUGHES, Utkarsh Y. KAKAIYA, Alexander F. HEINECKE
  • Publication number: 20240127392
    Abstract: A chip or other apparatus of an aspect includes a first accelerator and a second accelerator. The first accelerator has support for a chained accelerator operation. The first accelerator is to be controlled as part of the chained accelerator operation to access an input data from a source memory location in system memory, process the input data, generate first intermediate data, and store the first intermediate data to a storage. The second accelerator also has support for the chained accelerator operation. The second accelerator is to be controlled as part of the chained accelerator operation to receive the first intermediate data from the storage, without the first intermediate data having been sent to the system memory, process the first intermediate data, and generate additional data. Other apparatus, methods, systems, and machine-readable medium are disclosed.
    Type: Application
    Filed: October 17, 2022
    Publication date: April 18, 2024
    Inventors: Christopher J. HUGHES, Saurabh GAYEN, Utkarsh Y. KAKAIYA, Alexander F. HEINECKE
  • Publication number: 20240126613
    Abstract: A chip or other apparatus of an aspect includes a first accelerator and a second accelerator. The first accelerator has support for a chained accelerator operation. The first accelerator is to be controlled as part of the chained accelerator operation to access an input data from a source memory location in system memory, process the input data, and generate first intermediate data. The second accelerator also has support for the chained accelerator operation. The second accelerator is to be controlled as part of the chained accelerator operation to receive the first intermediate data, without the first intermediate data having been sent to the system memory, process the first intermediate data, and generate additional data. Other apparatus, methods, systems, and machine-readable medium are disclosed.
    Type: Application
    Filed: October 17, 2022
    Publication date: April 18, 2024
    Inventors: Saurabh GAYEN, Christopher J. HUGHES, Utkarsh Y. KAKAIYA, Alexander F. HEINECKE
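
The three chained-accelerator publications above share one idea: intermediate data flows from the first accelerator to the second through on-chip storage without a round trip to system memory. The C sketch below models that data path with two placeholder transformation stages and a staging buffer standing in for the on-chip storage; it illustrates the dataflow only, not any real accelerator interface.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define STAGE_BUF_SIZE 64

/* First accelerator: reads input from "system memory" and writes
 * intermediate data to the staging buffer. */
static void stage1_transform(const uint8_t *src, uint8_t *staging, size_t n)
{
    for (size_t i = 0; i < n; i++)
        staging[i] = src[i] ^ 0x5a;          /* placeholder transformation */
}

/* Second accelerator: consumes the staging buffer directly, so the
 * intermediate data never returns to system memory. */
static void stage2_transform(const uint8_t *staging, uint8_t *dst, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = (uint8_t)(staging[i] + 1);  /* placeholder transformation */
}

int main(void)
{
    uint8_t system_memory_in[STAGE_BUF_SIZE]  = "input data in system memory";
    uint8_t staging[STAGE_BUF_SIZE];          /* stand-in for on-chip storage */
    uint8_t system_memory_out[STAGE_BUF_SIZE];

    /* Chain: source memory -> stage 1 -> staging -> stage 2 -> destination. */
    stage1_transform(system_memory_in, staging, STAGE_BUF_SIZE);
    stage2_transform(staging, system_memory_out, STAGE_BUF_SIZE);

    printf("chained operation complete, %d bytes produced\n", STAGE_BUF_SIZE);
    return 0;
}
```
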
  • Publication number: 20240089239
    Abstract: Examples include techniques for a trusted execution environment (TEE) at a compute server to request a service to be performed by an accelerator that is located at or with a service server. Examples are described of the TEE at the compute server authenticating the remote accelerator to enable establishment of one or more secure communication sessions for the accelerator to decrypt encrypted data, perform a transformation on the decrypted data and then re-encrypt the transformed data. Examples are also described of the TEE at the compute server authenticating a service TEE at the service server as well as the accelerator to enable the service TEE and the accelerator to collaboratively decrypt encrypted data, perform a transformation on the decrypted data and then re-encrypt the transformed data.
    Type: Application
    Filed: September 12, 2022
    Publication date: March 14, 2024
    Inventor: Utkarsh Y. KAKAIYA
  • Patent number: 11907744
    Abstract: In one embodiment, a processor comprises: a first configuration register to store quality of service (QoS) information for a process address space identifier (PASID) value associated with a first process; and an execution circuit coupled to the first configuration register, where the execution circuit, in response to a first instruction, is to obtain command data from a first location identified in a source operand of the first instruction, insert the QoS information and the PASID value into the command data, and send a request comprising the command data to a device coupled to the processor, to enable the device to use the QoS information of a plurality of requests to manage sharing between a plurality of processes. Other embodiments are described and claimed.
    Type: Grant
    Filed: June 25, 2020
    Date of Patent: February 20, 2024
    Assignee: Intel Corporation
    Inventors: Utkarsh Y. Kakaiya, Sanjay K. Kumar, Philip Lantz, Gilbert Neiger, Rajesh Sankaran, Vedvyas Shanbhogue
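
A software model of the instruction behavior described in the entry above, under an assumed 64-byte command layout: the command bytes are read from the source location, the submitting process's PASID and QoS value are inserted, and the result is handed to the device. The structure offsets and the `send_to_device` callback are illustrative, not an architectural format.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative 64-byte command payload such as an enqueue-style
 * instruction might submit to a shared device. */
struct command_data {
    uint32_t pasid;        /* process address space identifier of the submitter */
    uint8_t  qos_class;    /* quality-of-service value from the config register */
    uint8_t  reserved[3];
    uint8_t  payload[56];  /* device-specific command bytes */
};

/* Model of the instruction: read the command bytes from the source
 * location, insert the PASID and QoS associated with the submitting
 * process, and pass the result to the device. */
static void submit_command(const void *src, uint32_t pasid, uint8_t qos,
                           void (*send_to_device)(const struct command_data *))
{
    struct command_data cmd;
    memset(&cmd, 0, sizeof cmd);
    memcpy(cmd.payload, src, sizeof cmd.payload);
    cmd.pasid = pasid;
    cmd.qos_class = qos;
    send_to_device(&cmd);
}

static void print_device(const struct command_data *cmd)
{
    printf("device received command: PASID=%u QoS=%u\n",
           cmd->pasid, cmd->qos_class);
}

int main(void)
{
    uint8_t raw[56] = { 0 };
    submit_command(raw, 42, 3, print_device);
    return 0;
}
```
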
  • Publication number: 20240054011
    Abstract: Methods and apparatus relating to data streaming accelerators are described. In an embodiment, a hardware accelerator such as a Data Streaming Accelerator (DSA) logic circuitry performs data movement and/or data transformation for data to be transferred between a processor (having one or more processor cores) and a storage device. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: August 12, 2023
    Publication date: February 15, 2024
    Applicant: Intel Corporation
    Inventors: Rajesh M. Sankaran, Philip R. Lantz, Narayan Ranganathan, Saurabh Gayen, Sanjay Kumar, Nikhil Rao, Dhananjay A. Joshi, Hai Ming Khor, Utkarsh Y. Kakaiya
  • Publication number: 20240004990
    Abstract: Methods and apparatus relating to techniques to enable co-existence and inter-operation of legacy devices and Trusted Execution Environment (TEE) Input/Output (IO) capable devices from confidential virtual machines are described. In an embodiment, a processor executes at least one Trusted Environment (TE) with a TE address space and a non-TE address space. Logic circuitry selects between the TE address space and the non-TE address space based at least in part on a value of a TE tag for a transaction. The TE address space maps one or more TE Input/Output (IO) devices and the non-TE address space maps one or more legacy IO devices. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: June 30, 2022
    Publication date: January 4, 2024
    Applicant: Intel Corporation
    Inventor: Utkarsh Y. Kakaiya
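
The selection logic in the abstract above can be shown as a one-line predicate over a per-transaction TE tag. The sketch below uses an enum in place of real address-translation roots; the tag encoding and naming are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-ins for the two address spaces: the TE space mapping TEE-IO
 * capable devices and the non-TE space mapping legacy IO devices. */
enum address_space { NON_TE_SPACE, TE_SPACE };

struct transaction {
    bool     te_tag;   /* set when the access belongs to the trusted side */
    uint64_t address;
};

/* Selection: the TE tag alone picks which address space maps the access. */
static enum address_space select_space(const struct transaction *t)
{
    return t->te_tag ? TE_SPACE : NON_TE_SPACE;
}

int main(void)
{
    struct transaction legacy = { false, 0x1000 };
    struct transaction tee_io = { true,  0x2000 };
    printf("legacy access -> %s space\n",
           select_space(&legacy) == TE_SPACE ? "TE" : "non-TE");
    printf("TEE-IO access -> %s space\n",
           select_space(&tee_io) == TE_SPACE ? "TE" : "non-TE");
    return 0;
}
```
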
  • Publication number: 20230418762
    Abstract: Embodiments of apparatuses, methods, and systems for unified address translation for virtualization of input/output devices are described. In an embodiment, an apparatus includes first circuitry to use at least an identifier of a device to locate a context entry and second circuitry to use at least a process address space identifier (PASID) to locate a PASID-entry. The context entry is to include at least one of a page-table pointer to a page-table translation structure and a PASID. The PASID-entry is to include at least one of a first-level page-table pointer to a first-level translation structure and a second-level page-table pointer to a second-level translation structure. The PASID is to be supplied by the device. At least one of the apparatus, the context entry, and the PASID entry is to include one or more control fields to indicate whether the first-level page-table pointer or the second-level page-table pointer is to be used.
    Type: Application
    Filed: May 22, 2023
    Publication date: December 28, 2023
    Applicant: Intel Corporation
    Inventors: Utkarsh Y. Kakaiya, Sanjay Kumar, Rajesh M. Sankaran, Philip R. Lantz, Ashok Raj, Kun Tian
  • Patent number: 11816040
    Abstract: Device memory protection for supporting trust domains is described. An example of a computer-readable storage medium includes instructions for allocating device memory for one or more trust domains (TDs) in a system including one or more processors and a graphics processing unit (GPU); allocating a trusted key ID for a TD of the one or more TDs; creating LMTT (Local Memory Translation Table) mapping for address translation tables, the address translation tables being stored in a device memory of the GPU; transitioning the TD to a secure state; and receiving and processing a memory access request associated with the TD, processing the memory access request including accessing a secure version of the address translation tables.
    Type: Grant
    Filed: April 2, 2022
    Date of Patent: November 14, 2023
    Assignee: Intel Corporation
    Inventors: Vidhya Krishnan, Siddhartha Chhabra, David Puffer, Ankur Shah, Daniel Nemiroff, Utkarsh Y. Kakaiya
  • Publication number: 20230289229
    Abstract: Methods and apparatus relating to confidential computing extensions for highly scalable accelerators are described. One or more embodiments provide extensions for scalable accelerator(s) to be able to directly assign accelerator work-queue(s) to Trusted Execution Environment (TEE) Virtual Machines (TVMs). Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: June 30, 2022
    Publication date: September 14, 2023
    Applicant: Intel Corporation
    Inventors: Utkarsh Y. Kakaiya, Saurabh Gayen, Kapil Sood, Naveen Lakkakula