Patents by Inventor Philip R Lantz

Philip R Lantz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240054011
    Abstract: Methods and apparatus relating to data streaming accelerators are described. In an embodiment, a hardware accelerator such as a Data Streaming Accelerator (DSA) logic circuitry performs data movement and/or data transformation for data to be transferred between a processor (having one or more processor cores) and a storage device. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: August 12, 2023
    Publication date: February 15, 2024
    Applicant: Intel Corporation
    Inventors: Rajesh M. Sankaran, Philip R. Lantz, Narayan Ranganathan, Saurabh Gayen, Sanjay Kumar, Nikhil Rao, Dhananjay A. Joshi, Hai Ming Khor, Utkarsh Y. Kakaiya
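    The abstract in this entry describes a Data Streaming Accelerator (DSA) performing data movement and transformation between processor cores and a storage device. The C sketch below is a minimal illustration of how software might describe one offloaded copy operation to such an accelerator; the struct layout, field names, and the submit() stub are hypothetical simplifications, not the real DSA descriptor format or driver API.
    ```c
    /* Hypothetical sketch of a work descriptor for a data-streaming
     * accelerator. Names and layout are illustrative only. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    enum op { OP_MEMMOVE = 1, OP_FILL = 2, OP_CRC = 3 };

    struct work_descriptor {
        uint32_t opcode;      /* which movement/transformation to perform */
        uint64_t src_addr;    /* source buffer (virtual address) */
        uint64_t dst_addr;    /* destination buffer (virtual address) */
        uint32_t length;      /* number of bytes to move */
        uint64_t completion;  /* address where the device writes status */
    };

    /* Placeholder for handing the descriptor to the device; a real driver
     * would write it to a device work queue. Here the copy is emulated. */
    static void submit(const struct work_descriptor *d)
    {
        printf("submit: op=%u len=%u bytes\n",
               (unsigned)d->opcode, (unsigned)d->length);
        memcpy((void *)(uintptr_t)d->dst_addr,
               (const void *)(uintptr_t)d->src_addr, d->length);
    }

    int main(void)
    {
        char src[] = "data to stream", dst[sizeof(src)] = {0};
        struct work_descriptor d = {
            .opcode = OP_MEMMOVE,
            .src_addr = (uintptr_t)src,
            .dst_addr = (uintptr_t)dst,
            .length = sizeof(src),
        };
        submit(&d);
        printf("dst = \"%s\"\n", dst);
        return 0;
    }
    ```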
  • Publication number: 20230418762
    Abstract: Embodiments of apparatuses, methods, and systems for unified address translation for virtualization of input/output devices are described. In an embodiment, an apparatus includes first circuitry to use at least an identifier of a device to locate a context entry and second circuitry to use at least a process address space identifier (PASID) to locate a PASID-entry. The context entry is to include at least one of a page-table pointer to a page-table translation structure and a PASID. The PASID-entry is to include at least one of a first-level page-table pointer to a first-level translation structure and a second-level page-table pointer to a second-level translation structure. The PASID is to be supplied by the device. At least one of the apparatus, the context entry, and the PASID entry is to include one or more control fields to indicate whether the first-level page-table pointer or the second-level page-table pointer is to be used.
    Type: Application
    Filed: May 22, 2023
    Publication date: December 28, 2023
    Applicant: Intel Corporation
    Inventors: Utkarsh Y. Kakaiya, Sanjay Kumar, Rajesh M. Sankaran, Philip R. Lantz, Ashok Raj, Kun Tian
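    The abstract in this entry describes a two-stage lookup: a device identifier selects a context entry, and a PASID supplied by the device selects a PASID entry carrying first-level and second-level page-table pointers plus control fields that choose between them. The C sketch below models only that lookup structure; the type and field names are hypothetical and no actual page-table walk is performed.
    ```c
    /* Hypothetical model: device id -> context entry -> PASID table ->
     * PASID entry, with control bits selecting first- and/or second-level
     * translation. Not a real IOMMU table format. */
    #include <stdint.h>
    #include <stdio.h>

    struct pasid_entry {
        uint64_t first_level_ptr;   /* first-level translation structure */
        uint64_t second_level_ptr;  /* second-level translation structure */
        unsigned use_first : 1;     /* control field: walk first level */
        unsigned use_second : 1;    /* control field: walk second level */
    };

    struct context_entry {
        struct pasid_entry *pasid_table; /* indexed by the device's PASID */
        uint32_t pasid_table_size;
    };

    /* Select the translation root(s) for one request. */
    static int locate_roots(const struct context_entry *ctx, uint32_t pasid,
                            uint64_t *fl, uint64_t *sl)
    {
        if (pasid >= ctx->pasid_table_size)
            return -1;                        /* no PASID entry: fault */
        const struct pasid_entry *pe = &ctx->pasid_table[pasid];
        *fl = pe->use_first  ? pe->first_level_ptr  : 0;
        *sl = pe->use_second ? pe->second_level_ptr : 0;
        return 0;
    }

    int main(void)
    {
        struct pasid_entry table[4] = {
            [2] = { .first_level_ptr = 0x1000, .second_level_ptr = 0x2000,
                    .use_first = 1, .use_second = 1 }, /* nested translation */
        };
        struct context_entry ctx = { .pasid_table = table, .pasid_table_size = 4 };
        uint64_t fl, sl;
        if (locate_roots(&ctx, 2, &fl, &sl) == 0)
            printf("PASID 2: first-level %#llx, second-level %#llx\n",
                   (unsigned long long)fl, (unsigned long long)sl);
        return 0;
    }
    ```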
  • Patent number: 11734209
    Abstract: Implementations of the disclosure provide a processing device comprising: an interrupt managing circuit to receive an interrupt message directed to an application container from an assignable interface (AI) of an input/output (I/O) device. The interrupt message comprises an address space identifier (ASID), an interrupt handle, and a flag to distinguish the interrupt message from a direct memory access (DMA) message. Responsive to receiving the interrupt message, a data structure associated with the interrupt managing circuit is identified. An interrupt entry from the data structure is selected based on the interrupt handle. It is determined that the ASID associated with the interrupt message matches an ASID in the interrupt entry. Thereupon, an interrupt in the interrupt entry is forwarded to the application container.
    Type: Grant
    Filed: December 14, 2021
    Date of Patent: August 22, 2023
    Assignee: Intel Corporation
    Inventors: Sanjay Kumar, Rajesh M. Sankaran, Philip R. Lantz, Utkarsh Y. Kakaiya, Kun Tian
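    The abstract in this entry describes an interrupt managing circuit that uses the interrupt handle to index an interrupt table and checks that the message's ASID matches the selected entry before forwarding the interrupt. The C sketch below mirrors that check in software; the structures and the forward_to_container() stub are hypothetical.
    ```c
    /* Hypothetical model: select an interrupt entry by handle, verify the
     * requester's ASID, then forward the interrupt. */
    #include <stdint.h>
    #include <stdio.h>

    struct interrupt_message {
        uint32_t asid;             /* address space identifier of the sender */
        uint32_t handle;           /* index into the interrupt table */
        unsigned is_interrupt : 1; /* flag distinguishing it from a DMA write */
    };

    struct interrupt_entry {
        uint32_t asid;    /* ASID the entry was programmed for */
        uint32_t vector;  /* interrupt to deliver to the application container */
    };

    static void forward_to_container(uint32_t vector)
    {
        printf("deliver vector %u to container\n", (unsigned)vector);
    }

    static int handle_message(const struct interrupt_message *m,
                              const struct interrupt_entry *table, size_t n)
    {
        if (!m->is_interrupt || m->handle >= n)
            return -1;                      /* not an interrupt, or bad handle */
        const struct interrupt_entry *e = &table[m->handle];
        if (e->asid != m->asid)
            return -1;                      /* ASID mismatch: reject */
        forward_to_container(e->vector);
        return 0;
    }

    int main(void)
    {
        struct interrupt_entry table[2] = { { .asid = 7, .vector = 33 },
                                            { .asid = 9, .vector = 34 } };
        struct interrupt_message msg = { .asid = 7, .handle = 0, .is_interrupt = 1 };
        return handle_message(&msg, table, 2) ? 1 : 0;
    }
    ```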
  • Publication number: 20230251986
    Abstract: Embodiments of apparatuses, methods, and systems for highly scalable accelerators are described. In an embodiment, an apparatus includes an interface to receive a plurality of work requests from a plurality of clients and a plurality of engines to perform the plurality of work requests. The work requests are to be dispatched to the plurality of engines from a plurality of work queues. The work queues are to store a work descriptor per work request. Each work descriptor is to include all information needed to perform a corresponding work request.
    Type: Application
    Filed: April 6, 2023
    Publication date: August 10, 2023
    Applicant: Intel Corporation
    Inventors: Philip R. Lantz, Sanjay Kumar, Rajesh M. Sankaran, Saurabh Gayen
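    The abstract in this entry describes an accelerator that accepts work requests from many clients into work queues, each holding one self-contained work descriptor per request, and dispatches them to a pool of engines. The C sketch below models that dispatch loop; the queue depth, field names, and round-robin policy are assumptions for illustration.
    ```c
    /* Hypothetical model: descriptors queued by clients are dispatched to
     * the engines. Each descriptor carries everything needed, so the
     * dispatcher keeps no per-request state. */
    #include <stdint.h>
    #include <stdio.h>

    struct work_descriptor {
        uint32_t client_id;   /* which client submitted the request */
        uint32_t opcode;      /* operation to perform */
        uint64_t src, dst;    /* operands: all information is in the descriptor */
        uint32_t length;
    };

    #define QUEUE_DEPTH 8
    struct work_queue {
        struct work_descriptor slots[QUEUE_DEPTH];
        unsigned head, tail;
    };

    static int wq_submit(struct work_queue *q, struct work_descriptor d)
    {
        if (q->tail - q->head == QUEUE_DEPTH)
            return -1;                          /* queue full: client retries */
        q->slots[q->tail++ % QUEUE_DEPTH] = d;
        return 0;
    }

    static void engine_run(unsigned engine, const struct work_descriptor *d)
    {
        printf("engine %u: client %u opcode %u len %u\n", engine,
               (unsigned)d->client_id, (unsigned)d->opcode, (unsigned)d->length);
    }

    int main(void)
    {
        struct work_queue q = {0};
        for (uint32_t c = 0; c < 3; c++)
            wq_submit(&q, (struct work_descriptor){ .client_id = c, .opcode = 1,
                                                    .length = 64 * (c + 1) });
        unsigned engine = 0, num_engines = 2;
        while (q.head != q.tail) {              /* dispatch round-robin */
            engine_run(engine, &q.slots[q.head++ % QUEUE_DEPTH]);
            engine = (engine + 1) % num_engines;
        }
        return 0;
    }
    ```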
  • Patent number: 11714853
    Abstract: In one embodiment, an apparatus comprises a storage device and a processor. The storage device stores a feature vector index, wherein the feature vector index comprises a sparse-array data structure representing a feature space for a set of labeled feature vectors, wherein the set of labeled feature vectors are assigned to a plurality of classes. The processor is to: receive a query corresponding to a target feature vector; access, via the storage device, a first portion of the feature vector index, wherein the first portion of the feature vector index comprises a subset of labeled feature vectors that correspond to a same portion of the feature space as the target feature vector; determine the corresponding class of the target feature vector based on the subset of labeled feature vectors; and provide a response to the query based on the corresponding class.
    Type: Grant
    Filed: June 29, 2021
    Date of Patent: August 1, 2023
    Assignee: Intel Corporation
    Inventors: Luis Carlos Maria Remis, Vishakha Gupta, Christina R. Strong, Philip R. Lantz
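    The abstract in this entry describes classifying a target feature vector by reading only the portion of a sparse feature-vector index that covers the same region of feature space, then deciding the class from the labeled vectors found there. The C sketch below shows that idea in one dimension with a trivially bucketed index; the bucketing scheme and the majority vote are assumptions made to keep the example small, not the patented method.
    ```c
    /* Hypothetical 1-D sketch: labeled feature values are bucketed into a
     * coarse index; a query reads only the bucket containing the target
     * and classifies by majority vote within it. */
    #include <stdio.h>

    #define NUM_BUCKETS 4
    #define NUM_CLASSES 3
    #define BUCKET_CAP  8

    struct labeled { float value; int label; };

    struct fv_index {
        struct labeled bucket[NUM_BUCKETS][BUCKET_CAP];
        int count[NUM_BUCKETS];
    };

    static int bucket_of(float v)   /* feature space [0,1) split into quarters */
    {
        int b = (int)(v * NUM_BUCKETS);
        return b < 0 ? 0 : b >= NUM_BUCKETS ? NUM_BUCKETS - 1 : b;
    }

    static void add(struct fv_index *ix, float v, int label)
    {
        int b = bucket_of(v);
        if (ix->count[b] < BUCKET_CAP)
            ix->bucket[b][ix->count[b]++] = (struct labeled){ v, label };
    }

    static int classify(const struct fv_index *ix, float target)
    {
        int votes[NUM_CLASSES] = {0}, b = bucket_of(target), best = 0;
        for (int i = 0; i < ix->count[b]; i++)   /* only this bucket is read */
            votes[ix->bucket[b][i].label]++;
        for (int c = 1; c < NUM_CLASSES; c++)
            if (votes[c] > votes[best]) best = c;
        return best;
    }

    int main(void)
    {
        struct fv_index ix = {0};
        add(&ix, 0.10f, 0); add(&ix, 0.15f, 0); add(&ix, 0.80f, 2);
        printf("class of 0.12 -> %d\n", classify(&ix, 0.12f));
        return 0;
    }
    ```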
  • Patent number: 11698866
    Abstract: Embodiments of apparatuses, methods, and systems for unified address translation for virtualization of input/output devices are described. In an embodiment, an apparatus includes first circuitry to use at least an identifier of a device to locate a context entry and second circuitry to use at least a process address space identifier (PASID) to locate a PASID-entry. The context entry is to include at least one of a page-table pointer to a page-table translation structure and a PASID. The PASID-entry is to include at least one of a first-level page-table pointer to a first-level translation structure and a second-level page-table pointer to a second-level translation structure. The PASID is to be supplied by the device. At least one of the apparatus, the context entry, and the PASID entry is to include one or more control fields to indicate whether the first-level page-table pointer or the second-level page-table pointer is to be used.
    Type: Grant
    Filed: December 29, 2017
    Date of Patent: July 11, 2023
    Assignee: Intel Corporation
    Inventors: Utkarsh Y. Kakaiya, Sanjay Kumar, Rajesh M. Sankaran, Philip R. Lantz, Ashok Raj, Kun Tian
  • Patent number: 11656899
    Abstract: Implementations of the disclosure provide a processing device comprising an address translation circuit to intercept a work request from an I/O device. The work request comprises a first ASID to map to a work queue. A second ASID of a host is allocated for the first ASID based on the work queue. The second ASID is allocated to at least one of: an ASID register for a dedicated work queue (DWQ) or an ASID translation table for a shared work queue (SWQ). Responsive to receiving a work submission from a shared virtual memory (SVM) client to the I/O device, the first ASID of the application container is translated to the second ASID of the host machine for submission to the I/O device using at least one of: the ASID register for the DWQ or the ASID translation table for the SWQ based on the work queue associated with the I/O device.
    Type: Grant
    Filed: August 17, 2021
    Date of Patent: May 23, 2023
    Assignee: Intel Corporation
    Inventors: Sanjay Kumar, Rajesh M. Sankaran, Gilbert Neiger, Philip R. Lantz, Jason W. Brandt, Vedvyas Shanbhogue, Utkarsh Y. Kakaiya, Kun Tian
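    The abstract in this entry describes translating a guest (container) ASID into a host ASID before a work submission reaches the I/O device, using a per-queue ASID register for a dedicated work queue or a lookup table for a shared work queue. The C sketch below mirrors that choice; the structures and the "0 means unmapped" convention are hypothetical.
    ```c
    /* Hypothetical model: guest ASID -> host ASID translation, chosen by
     * the kind of work queue the submission targets. */
    #include <stdint.h>
    #include <stdio.h>

    enum wq_kind { WQ_DEDICATED, WQ_SHARED };

    #define MAX_GUEST_ASID 16

    struct work_queue_cfg {
        enum wq_kind kind;
        uint32_t asid_register;               /* DWQ: single preallocated host ASID */
        uint32_t asid_table[MAX_GUEST_ASID];  /* SWQ: per-guest-ASID host ASID (0 = unmapped) */
    };

    /* Return the host ASID to stamp on the work submission, or -1 on error. */
    static int64_t translate_asid(const struct work_queue_cfg *wq, uint32_t guest_asid)
    {
        if (wq->kind == WQ_DEDICATED)
            return wq->asid_register;          /* queue owned by one address space */
        if (guest_asid < MAX_GUEST_ASID && wq->asid_table[guest_asid] != 0)
            return wq->asid_table[guest_asid]; /* shared queue: table lookup */
        return -1;
    }

    int main(void)
    {
        struct work_queue_cfg swq = { .kind = WQ_SHARED };
        swq.asid_table[3] = 42;                /* guest ASID 3 mapped to host ASID 42 */
        printf("guest 3 -> host %lld\n", (long long)translate_asid(&swq, 3));
        return 0;
    }
    ```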
  • Patent number: 11650947
    Abstract: Embodiments of apparatuses, methods, and systems for highly scalable accelerators are described. In an embodiment, an apparatus includes an interface to receive a plurality of work requests from a plurality of clients and a plurality of engines to perform the plurality of work requests. The work requests are to be dispatched to the plurality of engines from a plurality of work queues. The work queues are to store a work descriptor per work request. Each work descriptor is to include all information needed to perform a corresponding work request.
    Type: Grant
    Filed: August 24, 2021
    Date of Patent: May 16, 2023
    Assignee: Intel Corporation
    Inventors: Philip R. Lantz, Sanjay Kumar, Rajesh M. Sankaran, Saurabh Gayen
  • Publication number: 20230134657
    Abstract: A system comprises a physical processor to execute a virtual machine manager to run, on a logical core, a virtual machine including a guest user application and a virtual CPU. Circuitry coupled to an external device is to receive an interrupt request from the external device for the guest user application, locate a first interrupt data structure associated with the guest user application, generate a first interrupt with the first interrupt data structure based on a first interrupt vector for the interrupt request, locate a second interrupt data structure associated with the virtual CPU, and generate a first notification interrupt for the virtual CPU with the second interrupt data structure based on a first notification vector in the first interrupt data structure. The circuitry may generate a second notification interrupt for the logical core using a second notification vector and a logical core identifier from the second interrupt data structure.
    Type: Application
    Filed: November 4, 2021
    Publication date: May 4, 2023
    Applicant: Intel Corporation
    Inventors: Sanjay Kumar, Philip R. Lantz, Rajesh M. Sankaran, Gilbert Neiger, Rupin H. Vakharwala
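    The abstract in this entry describes delivering a device interrupt to a guest user application in stages: the request posts a vector into an interrupt data structure for the application, a notification vector recorded there is posted into a second structure for the virtual CPU, and that structure in turn names the notification vector and logical core for the final kick. The C sketch below models those chained lookups with plain bitmaps; all structure and field names are hypothetical.
    ```c
    /* Hypothetical model of the chained posting scheme described above. */
    #include <stdint.h>
    #include <stdio.h>

    struct posted_struct {
        uint64_t pending[4];        /* bitmap of posted vectors (256 bits) */
        uint32_t notify_vector;     /* vector used to notify the next level */
        uint32_t logical_core;      /* only meaningful at the vCPU level */
    };

    static void post(struct posted_struct *p, uint32_t vec)
    {
        p->pending[vec / 64] |= 1ull << (vec % 64);
    }

    static void deliver(struct posted_struct *app, struct posted_struct *vcpu,
                        uint32_t request_vector)
    {
        post(app, request_vector);          /* 1: post into the app's structure */
        post(vcpu, app->notify_vector);     /* 2: notify the virtual CPU */
        printf("notify logical core %u with vector %u\n",   /* 3: kick the core */
               (unsigned)vcpu->logical_core, (unsigned)vcpu->notify_vector);
    }

    int main(void)
    {
        struct posted_struct app  = { .notify_vector = 0x20 };
        struct posted_struct vcpu = { .notify_vector = 0xF0, .logical_core = 5 };
        deliver(&app, &vcpu, 0x41);         /* device interrupt vector 0x41 */
        return 0;
    }
    ```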
  • Publication number: 20230103000
    Abstract: Embodiments of apparatuses, methods, and systems for hardware-managed address translation services are described. In an embodiment, an apparatus includes a first interconnect, a second interconnect, address translation hardware, a device, and a translation lookaside buffer. The address translation hardware is coupled to the first interconnect and is to provide a translation of a first address to a second address. The device is coupled to the first interconnect and the second interconnect and is to provide the first address to the address translation hardware through the first interconnect. The translation lookaside buffer includes an entry to store the translation, which is to be provided to the translation lookaside buffer through the first interconnect by the address translation hardware. The device is to access a system memory through the second interconnect using the second address from the entry in the translation lookaside buffer.
    Type: Application
    Filed: September 25, 2021
    Publication date: March 30, 2023
    Applicant: Intel Corporation
    Inventors: Rupin Vakharwala, Prashant Sethi, Rajesh M. Sankaran, Philip R. Lantz, David J. Harriman, Utkarsh Y. Kakaiya, Vinay Raghav, Ashok Raj, Siva Bhanu Krishna Boga
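    The abstract in this entry describes a device that requests a translation from address translation hardware over a first interconnect, caches the result in a translation lookaside buffer, and then uses the translated address to reach system memory over a second interconnect. The C sketch below models that split with two function stubs standing in for the interconnects; everything named here is a simplification.
    ```c
    /* Hypothetical model: translate over "interconnect 1", cache the result,
     * access memory over "interconnect 2" using the translated address. */
    #include <stdint.h>
    #include <stdio.h>

    #define TLB_SIZE 4

    struct tlb_entry { uint64_t va, pa; int valid; };
    static struct tlb_entry tlb[TLB_SIZE];

    /* Stand-in for the translation hardware reached over the first interconnect. */
    static uint64_t translate_over_ic1(uint64_t va)
    {
        return va + 0x100000;        /* fake identity-plus-offset mapping */
    }

    /* Stand-in for a system-memory access over the second interconnect. */
    static void access_over_ic2(uint64_t pa)
    {
        printf("memory access at physical %#llx\n", (unsigned long long)pa);
    }

    static void device_access(uint64_t va)
    {
        int slot = (int)(va / 4096) % TLB_SIZE;
        if (!tlb[slot].valid || tlb[slot].va != va)         /* TLB miss */
            tlb[slot] = (struct tlb_entry){ va, translate_over_ic1(va), 1 };
        access_over_ic2(tlb[slot].pa);                      /* use cached translation */
    }

    int main(void)
    {
        device_access(0x2000);       /* miss: fetch translation, then access */
        device_access(0x2000);       /* hit: reuse the cached translation */
        return 0;
    }
    ```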
  • Publication number: 20230032236
    Abstract: Methods and apparatus relating to data streaming accelerators are described. In an embodiment, a hardware accelerator such as a Data Streaming Accelerator (DSA) logic circuitry provides high-performance data movement and/or data transformation for data to be transferred between a processor (having one or more processor cores) and a storage device. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: July 27, 2022
    Publication date: February 2, 2023
    Applicant: Intel Corporation
    Inventors: Rajesh M. Sankaran, Philip R. Lantz, Narayan Ranganathan, Saurabh Gayen, Sanjay Kumar, Nikhil Rao, Dhananjay A. Joshi, Hai Ming Khor, Utkarsh Y. Kakaiya
  • Publication number: 20230032586
    Abstract: Methods and apparatus relating to scalable access control checking for cross-address-space data movement are described. In an embodiment, a memory stores an InterDomain Permissions Table (IDPT) having a plurality of entries. At least one entry of the IDPT provides a relationship between a target address space identifier and a plurality of requester address space identifiers. A hardware accelerator device allows access to a target address space, corresponding to the target address space identifier, by one or more requesters, corresponding to the plurality of requester address space identifiers, respectively, based at least in part on the relationship provided by the at least one entry of the IDPT. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: April 1, 2022
    Publication date: February 2, 2023
    Applicant: Intel Corporation
    Inventors: Narayan Ranganathan, Philip R. Lantz, Rajesh M. Sankaran, Sanjay Kumar, Saurabh Gayen, Nikhil Rao, Utkarsh Y. Kakaiya, Dhananjay A. Joshi, David Jiang, Ashok Raj
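    The abstract in this entry describes an Inter-Domain Permissions Table whose entries relate one target address space identifier to a set of requester address space identifiers, with cross-address-space data movement permitted only when the requester appears in the entry for the target. The C sketch below performs that check over an in-memory table; the layout and limits are assumptions.
    ```c
    /* Hypothetical model of the cross-address-space permission check. */
    #include <stdint.h>
    #include <stdio.h>

    #define MAX_REQUESTERS 4
    #define IDPT_ENTRIES   8

    struct idpt_entry {
        uint32_t target_asid;                   /* address space being accessed */
        uint32_t requesters[MAX_REQUESTERS];    /* ASIDs allowed to access it */
        uint32_t num_requesters;
    };

    static struct idpt_entry idpt[IDPT_ENTRIES];

    /* Return 1 if requester_asid may access target_asid, 0 otherwise. */
    static int idpt_check(uint32_t requester_asid, uint32_t target_asid)
    {
        for (unsigned i = 0; i < IDPT_ENTRIES; i++) {
            if (idpt[i].target_asid != target_asid)
                continue;
            for (unsigned j = 0; j < idpt[i].num_requesters; j++)
                if (idpt[i].requesters[j] == requester_asid)
                    return 1;
        }
        return 0;
    }

    int main(void)
    {
        idpt[0] = (struct idpt_entry){ .target_asid = 10,
                                       .requesters = { 3, 7 },
                                       .num_requesters = 2 };
        printf("ASID 7 -> 10: %s\n", idpt_check(7, 10) ? "allowed" : "denied");
        printf("ASID 5 -> 10: %s\n", idpt_check(5, 10) ? "allowed" : "denied");
        return 0;
    }
    ```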
  • Publication number: 20230013023
    Abstract: In one embodiment, an apparatus includes a processor comprising an address translation cache (ATC); a shared work queue (SWQ) associated with the ATC, and a port to couple to a host processor over a Peripheral Component Interconnect Express (PCIe)-based link. The apparatus also includes circuitry to receive address translation information from a memory management unit of the host processor that includes virtual memory address to physical memory address translations, store the address translation information in the ATC, receive an invalidation command from the host processor indicating an invalidation of address translation information stored in the ATC, modify the address translation information in the ATC based on the invalidation command, and store completion data in a memory location indicated by the invalidation command.
    Type: Application
    Filed: September 22, 2022
    Publication date: January 19, 2023
    Applicant: Intel Corporation
    Inventors: Rupin H. Vakharwala, Philip R. Lantz
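    The abstract in this entry describes a device-side address translation cache (ATC) populated with translations from the host's memory management unit and pruned by invalidation commands, with the device writing completion data to a memory location named in the command. The C sketch below models the invalidation and completion-write step; the command format is hypothetical.
    ```c
    /* Hypothetical model: invalidate matching ATC entries, then write a
     * completion record to the address carried in the command. */
    #include <stdint.h>
    #include <stdio.h>

    #define ATC_SIZE 8

    struct atc_entry { uint64_t va, pa; int valid; };
    static struct atc_entry atc[ATC_SIZE];

    struct invalidation_cmd {
        uint64_t va;                 /* start of the virtual range to drop */
        uint64_t length;             /* bytes covered by the invalidation */
        uint64_t *completion_addr;   /* where to store completion data */
    };

    static void handle_invalidation(const struct invalidation_cmd *cmd)
    {
        uint64_t dropped = 0;
        for (int i = 0; i < ATC_SIZE; i++) {
            if (atc[i].valid && atc[i].va >= cmd->va &&
                atc[i].va < cmd->va + cmd->length) {
                atc[i].valid = 0;            /* modify the ATC per the command */
                dropped++;
            }
        }
        *cmd->completion_addr = dropped;     /* signal completion to the host */
    }

    int main(void)
    {
        atc[0] = (struct atc_entry){ 0x1000, 0x91000, 1 };
        atc[1] = (struct atc_entry){ 0x5000, 0x95000, 1 };
        uint64_t completion = 0;
        struct invalidation_cmd cmd = { .va = 0x0, .length = 0x4000,
                                        .completion_addr = &completion };
        handle_invalidation(&cmd);
        printf("entries invalidated: %llu\n", (unsigned long long)completion);
        return 0;
    }
    ```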
  • Patent number: 11556363
    Abstract: Techniques for transferring virtual machines and resource management in a virtualized computing environment are described. In one embodiment, for example, an apparatus may include at least one memory, at least one processor, and logic for transferring a virtual machine (VM), at least a portion of the logic comprised in hardware coupled to the at least one memory and the at least one processor, the logic to generate a plurality of virtualized capability registers for a virtual device (VDEV) by virtualizing a plurality of device-specific capability registers of a physical device to be virtualized by the VM, the plurality of virtualized capability registers comprising a plurality of device-specific capabilities of the physical device, determine a version of the physical device to support via a virtual machine monitor (VMM), and expose a subset of the virtualized capability registers associated with the version to the VM. Other embodiments are described and claimed.
    Type: Grant
    Filed: March 31, 2017
    Date of Patent: January 17, 2023
    Assignee: Intel Corporation
    Inventors: Sanjay Kumar, Philip R. Lantz, Kun Tian, Utkarsh Y. Kakaiya, Rajesh M. Sankaran
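    The abstract in this entry describes a virtual machine monitor that virtualizes a physical device's capability registers for a virtual device, decides which device version to support, and exposes to the VM only the subset of virtualized registers belonging to that version, which keeps the guest's view stable across hosts during VM transfer. The C sketch below illustrates only the filtering step; the register names, version tags, and policy are hypothetical.
    ```c
    /* Hypothetical model: expose only the capability registers that belong
     * to the device version the VMM chose to present to the guest. */
    #include <stdint.h>
    #include <stdio.h>

    struct cap_reg {
        const char *name;
        uint32_t value;
        uint32_t min_version;   /* first device version that has this capability */
    };

    /* "Virtualized" copies of device-specific capability registers. */
    static const struct cap_reg vcaps[] = {
        { "BASE_CAP",   0x1, 1 },
        { "BATCH_CAP",  0x8, 2 },
        { "FANCY_CAP", 0x20, 3 },
    };

    /* Expose to the VM only the registers present in the supported version. */
    static void expose_to_vm(uint32_t supported_version)
    {
        for (unsigned i = 0; i < sizeof(vcaps) / sizeof(vcaps[0]); i++)
            if (vcaps[i].min_version <= supported_version)
                printf("expose %s = %#x\n", vcaps[i].name, (unsigned)vcaps[i].value);
    }

    int main(void)
    {
        /* Presenting version 2 gives the guest the same register view on any
         * host that implements version 2 or later, easing VM transfer. */
        expose_to_vm(2);
        return 0;
    }
    ```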
  • Patent number: 11461100
    Abstract: Process address space identifier virtualization uses a hardware paging hint.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: October 4, 2022
    Assignee: Intel Corporation
    Inventors: Kun Tian, Sanjay Kumar, Ashok Raj, Yi Liu, Rajesh M. Sankaran, Philip R. Lantz
  • Publication number: 20220164384
    Abstract: In one embodiment, an apparatus comprises a storage device and a processor. The storage device stores a feature vector index, wherein the feature vector index comprises a sparse-array data structure representing a feature space for a set of labeled feature vectors, wherein the set of labeled feature vectors are assigned to a plurality of classes. The processor is to: receive a query corresponding to a target feature vector; access, via the storage device, a first portion of the feature vector index, wherein the first portion of the feature vector index comprises a subset of labeled feature vectors that correspond to a same portion of the feature space as the target feature vector; determine the corresponding class of the target feature vector based on the subset of labeled feature vectors; and provide a response to the query based on the corresponding class.
    Type: Application
    Filed: June 29, 2021
    Publication date: May 26, 2022
    Applicant: Intel Corporation
    Inventors: Luis Carlos Maria Remis, Vishakha Gupta, Christina R. Strong, Philip R. Lantz
  • Publication number: 20220107910
    Abstract: Implementations of the disclosure provide a processing device comprising: an interrupt managing circuit to receive an interrupt message directed to an application container from an assignable interface (AI) of an input/output (I/O) device. The interrupt message comprises an address space identifier (ASID), an interrupt handle, and a flag to distinguish the interrupt message from a direct memory access (DMA) message. Responsive to receiving the interrupt message, a data structure associated with the interrupt managing circuit is identified. An interrupt entry from the data structure is selected based on the interrupt handle. It is determined that the ASID associated with the interrupt message matches an ASID in the interrupt entry. Thereupon, an interrupt in the interrupt entry is forwarded to the application container.
    Type: Application
    Filed: December 14, 2021
    Publication date: April 7, 2022
    Inventors: Sanjay Kumar, Rajesh M. Sankaran, Philip R. Lantz, Utkarsh Y. Kakaiya, Kun Tian
  • Patent number: 11200183
    Abstract: Implementations of the disclosure provide a processing device comprising: an interrupt managing circuit to receive an interrupt message directed to an application container from an assignable interface (AI) of an input/output (I/O) device. The interrupt message comprises an address space identifier (ASID), an interrupt handle, and a flag to distinguish the interrupt message from a direct memory access (DMA) message. Responsive to receiving the interrupt message, a data structure associated with the interrupt managing circuit is identified. An interrupt entry from the data structure is selected based on the interrupt handle. It is determined that the ASID associated with the interrupt message matches an ASID in the interrupt entry. Thereupon, an interrupt in the interrupt entry is forwarded to the application container.
    Type: Grant
    Filed: March 31, 2017
    Date of Patent: December 14, 2021
    Assignee: Intel Corporation
    Inventors: Sanjay Kumar, Rajesh M. Sankaran, Philip R. Lantz, Utkarsh Y. Kakaiya, Kun Tian
  • Publication number: 20210382836
    Abstract: Embodiments of apparatuses, methods, and systems for highly scalable accelerators are described. In an embodiment, an apparatus includes an interface to receive a plurality of work requests from a plurality of clients and a plurality of engines to perform the plurality of work requests. The work requests are to be dispatched to the plurality of engines from a plurality of work queues. The work queues are to store a work descriptor per work request. Each work descriptor is to include all information needed to perform a corresponding work request.
    Type: Application
    Filed: August 24, 2021
    Publication date: December 9, 2021
    Applicant: Intel Corporation
    Inventors: Philip R. Lantz, Sanjay Kumar, Rajesh M. Sankaran, Saurabh Gayen
  • Publication number: 20210373934
    Abstract: Implementations of the disclosure provide a processing device comprising an address translation circuit to intercept a work request from an I/O device. The work request comprises a first ASID to map to a work queue. A second ASID of a host is allocated for the first ASID based on the work queue. The second ASID is allocated to at least one of: an ASID register for a dedicated work queue (DWQ) or an ASID translation table for a shared work queue (SWQ). Responsive to receiving a work submission from a shared virtual memory (SVM) client to the I/O device, the first ASID of the application container is translated to the second ASID of the host machine for submission to the I/O device using at least one of: the ASID register for the DWQ or the ASID translation table for the SWQ based on the work queue associated with the I/O device.
    Type: Application
    Filed: August 17, 2021
    Publication date: December 2, 2021
    Applicant: Intel Corporation
    Inventors: Sanjay Kumar, Rajesh M. Sankaran, Gilbert Neiger, Philip R. Lantz, Jason W. Brandt, Vedvyas Shanbhogue, Utkarsh Y. Kakaiya, Kun Tian