Patents by Inventor SOHAM JAYESH DESAI
SOHAM JAYESH DESAI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12346734
Abstract: An apparatus to facilitate metrics and security-based accelerator service rescheduling and auto-scaling using a programmable network device is disclosed. The apparatus includes processors to collect metrics corresponding to communication links between microservices of a service managed by a service mesh; determine, based on analysis of the metrics, that a workload of the service can be accelerated by offload to a hardware accelerator device; generate a scaling request to cause the hardware accelerator device to be allocated to a cluster of hardware devices configured for the service; cause the scaling request to be transmitted to a programmable network device managing the hardware accelerator device, the programmable network device to allocate the hardware accelerator device to the cluster and to register the hardware accelerator device with the service mesh; and schedule the workload of the service to the hardware accelerator device.
Type: Grant
Filed: September 22, 2021
Date of Patent: July 1, 2025
Assignee: INTEL CORPORATION
Inventors: Mikko Ylinen, Ismo Puustinen, Reshma Lal, Soham Jayesh Desai
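The abstract describes a control loop: collect link metrics from the service mesh, decide whether a workload is worth offloading, then ask the programmable network device to allocate and register an accelerator. The sketch below is a hypothetical illustration of that decision-and-request flow; the metric names, thresholds, and `ProgrammableNic` interface are assumptions, not the patented implementation.

```python
# Hypothetical sketch of the metrics-driven auto-scaling flow described in the
# abstract. Thresholds, class names, and the NIC interface are illustrative only.
from dataclasses import dataclass

@dataclass
class LinkMetrics:
    service: str
    p99_latency_ms: float
    requests_per_sec: float

class ProgrammableNic:
    """Stand-in for the programmable network device managing accelerators."""
    def allocate_accelerator(self, cluster: str) -> str:
        accel_id = f"accel-for-{cluster}"
        print(f"NIC: allocated {accel_id} to cluster {cluster}")
        return accel_id

    def register_with_mesh(self, accel_id: str, service: str) -> None:
        print(f"NIC: registered {accel_id} with service mesh for {service}")

def should_offload(m: LinkMetrics, latency_slo_ms=50.0, min_rps=1000.0) -> bool:
    # Offload when the service is both busy and missing its latency target.
    return m.p99_latency_ms > latency_slo_ms and m.requests_per_sec > min_rps

def rescale(metrics: LinkMetrics, nic: ProgrammableNic, cluster: str) -> None:
    if should_offload(metrics):
        accel_id = nic.allocate_accelerator(cluster)      # the scaling request
        nic.register_with_mesh(accel_id, metrics.service)
        print(f"Scheduler: routing {metrics.service} workload to {accel_id}")

rescale(LinkMetrics("checkout", p99_latency_ms=80.0, requests_per_sec=2500.0),
        ProgrammableNic(), cluster="svc-cluster-0")
```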
-
Publication number: 20250138958
Abstract: An integrated circuit includes a host interface coupled to a host device executing a tenant operating system (OS) on bare metal and hardware accelerator(s) coupled to the host interface and a network interface. The hardware accelerator(s) receive, over the host interface, a snapshot request relating to a snapshot of the tenant OS. The snapshot request includes a location, in a physical memory of the host device, of a swap file having contents of random access memory of the host device. The hardware accelerator(s) encrypt the swap file and initiate transfer of the encrypted swap file to a network storage device coupled to a cloud-based server. The hardware accelerator(s) send, over the network interface, to a snapshot manager hosted by the cloud-based server, metadata associated with storing the encrypted swap file in the cloud-based server, to allow the snapshot manager to manage the snapshot of the tenant OS.
Type: Application
Filed: October 30, 2023
Publication date: May 1, 2025
Inventors: Soham Jayesh Desai, Rami Ailabouni, Newton Paine Liu, Binu Ramakrishnan
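As a rough illustration of the snapshot path, the sketch below encrypts swap-file contents and produces the metadata a snapshot manager would need to locate the encrypted copy. It is a minimal sketch, assuming AES-GCM from the third-party `cryptography` package and an invented metadata layout; the actual accelerator-side mechanism is not specified here.

```python
# Hypothetical sketch of the snapshot flow: encrypt the swap-file contents and
# hand the snapshot manager the metadata needed to locate the encrypted copy.
# AES-GCM, the per-snapshot key, and the metadata fields are illustrative only.
import hashlib, json, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def snapshot_swap_file(swap_bytes: bytes, tenant_id: str) -> dict:
    key = AESGCM.generate_key(bit_length=256)     # per-snapshot key (illustrative)
    nonce = os.urandom(12)
    encrypted = AESGCM(key).encrypt(nonce, swap_bytes, None)
    object_name = f"snapshots/{tenant_id}/{hashlib.sha256(encrypted).hexdigest()}"
    # In the described system the accelerator would stream 'encrypted' to network
    # storage; this sketch only returns the metadata sent to the snapshot manager.
    return {
        "tenant": tenant_id,
        "object": object_name,
        "nonce": nonce.hex(),
        "size": len(encrypted),
    }

print(json.dumps(snapshot_swap_file(b"\x00" * 4096, "tenant-42"), indent=2))
```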
-
Publication number: 20250117501
Abstract: Technologies for trusted I/O include a computing device having a hardware cryptographic agent, a cryptographic engine, and an I/O controller. The hardware cryptographic agent intercepts a message from the I/O controller and identifies boundaries of the message. The message may include multiple DMA transactions, and the start of message is the start of the first DMA transaction. The cryptographic engine encrypts the message and stores the encrypted data in a memory buffer. The cryptographic engine may skip and not encrypt header data starting at the start of message or may read a value from the header to determine the skip length. In some embodiments, the cryptographic agent and the cryptographic engine may be an inline cryptographic engine. In some embodiments, the cryptographic agent may be a channel identifier filter, and the cryptographic engine may be processor-based. Other embodiments are described and claimed.
Type: Application
Filed: October 1, 2024
Publication date: April 10, 2025
Applicant: Intel Corporation
Inventors: Soham Jayesh Desai, Siddhartha Chhabra, Bin Xing, Pradeep M. Pappachan, Reshma Lal
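The skip-length behavior described above can be pictured with a short sketch: read a length field from the start of message, leave that many header bytes in the clear, and encrypt the remainder. The header layout and the placeholder cipher below are assumptions for illustration, not the claimed engine.

```python
# Hypothetical sketch of skip-length handling: read a length field from the
# message header, leave that many bytes in the clear, encrypt the remainder.
import struct

def protect_message(message: bytes, key: int = 0x5A) -> bytes:
    # Assume the first 2 bytes at start-of-message encode the header (skip) length.
    (skip_len,) = struct.unpack_from(">H", message, 0)
    header, payload = message[:skip_len], message[skip_len:]
    encrypted = bytes(b ^ key for b in payload)   # placeholder for the real cipher
    return header + encrypted                      # stored in the memory buffer

msg = struct.pack(">H", 4) + b"\xAB\xCD" + b"secret payload"
print(protect_message(msg).hex())
```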
-
Patent number: 12260263
Abstract: An apparatus to facilitate disaggregated computing for a distributed confidential computing environment is disclosed. The apparatus includes a graphics processing unit (GPU) to: provide a virtual GPU monitor (VGM) to interface over a network with a middleware layer of a client platform, the VGM to interface with the middleware layer using a message passing interface; configure and expose, by the VGM, virtual functions (VFs) of the GPU to the middleware layer of the client platform; intercept, by the VGM, request messages directed to the GPU from the middleware layer, the request messages corresponding to VFs of the GPU to be utilized by the client platform; and generate, by the VGM, a response to the request messages for the middleware client.
Type: Grant
Filed: November 17, 2021
Date of Patent: March 25, 2025
Assignee: INTEL CORPORATION
Inventors: Reshma Lal, Pradeep Pappachan, Luis Kida, Soham Jayesh Desai, Sujoy Sen, Selvakumar Panneer, Robert Sharp
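A rough sketch of the VGM role follows: expose virtual functions to a remote middleware layer, intercept its request messages, and answer them. The message shapes and VF naming are assumptions made for illustration only.

```python
# Hypothetical sketch of a virtual GPU monitor (VGM): expose virtual functions
# to a remote middleware layer, intercept its requests, and generate responses.
class VirtualGpuMonitor:
    def __init__(self, num_vfs: int):
        # Configure and expose virtual functions of the physical GPU.
        self.vfs = {f"vf{i}": {"busy": False} for i in range(num_vfs)}

    def exposed_vfs(self) -> list[str]:
        return sorted(self.vfs)

    def handle_request(self, request: dict) -> dict:
        # Intercept a request message from the client's middleware layer.
        vf = request["vf"]
        if vf not in self.vfs:
            return {"status": "error", "reason": f"unknown VF {vf}"}
        self.vfs[vf]["busy"] = True
        return {"status": "ok", "vf": vf, "op": request["op"]}

vgm = VirtualGpuMonitor(num_vfs=2)
print(vgm.exposed_vfs())                                  # advertised to the client
print(vgm.handle_request({"vf": "vf0", "op": "submit"}))  # response to middleware
```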
-
Patent number: 12229605
Abstract: An apparatus to facilitate disaggregated computing for a distributed confidential computing environment is disclosed. The apparatus includes one or more processors to facilitate receiving a manifest corresponding to graph nodes representing regions of memory of a remote client machine, the graph nodes corresponding to a command buffer and to associated data structures and kernels of the command buffer used to initialize a hardware accelerator and execute the kernels, and the manifest indicating a destination memory location of each of the graph nodes and dependencies of each of the graph nodes; identifying, based on the manifest, the command buffer and the associated data structures to copy to the host memory; identifying, based on the manifest, the kernels to copy to local memory of the hardware accelerator; and patching addresses in the command buffer copied to the host memory with updated addresses of corresponding locations in the host memory.
Type: Grant
Filed: December 13, 2023
Date of Patent: February 18, 2025
Assignee: INTEL CORPORATION
Inventors: Reshma Lal, Pradeep Pappachan, Luis Kida, Soham Jayesh Desai, Sujoy Sen, Selvakumar Panneer, Robert Sharp
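The same manifest-driven flow appears in several related entries below. As a minimal sketch, assuming an invented manifest layout, the code places command-buffer and data nodes in host memory, kernel nodes in accelerator memory, and then patches the command buffer's dependencies with the new locations.

```python
# Hypothetical sketch of manifest-driven placement: copy command-buffer nodes to
# host memory, kernel nodes to accelerator memory, then patch command-buffer
# references with the new addresses. The manifest layout is an assumption.
def place_and_patch(manifest: list[dict]) -> dict:
    host_mem, accel_mem, new_addr = {}, {}, {}
    next_host, next_accel = 0x1000, 0x8000
    for node in manifest:
        if node["kind"] == "kernel":
            accel_mem[next_accel] = node["bytes"]    # kernels go to accelerator memory
            new_addr[node["id"]] = next_accel
            next_accel += len(node["bytes"])
        else:                                        # command buffer / data structure
            host_mem[next_host] = node["bytes"]
            new_addr[node["id"]] = next_host
            next_host += len(node["bytes"])
    # Patch command-buffer dependencies to point at the copied locations.
    patches = {node["id"]: [new_addr[d] for d in node.get("deps", [])]
               for node in manifest if node["kind"] == "command_buffer"}
    return {"host": host_mem, "accel": accel_mem, "patches": patches}

manifest = [
    {"id": "cmdbuf", "kind": "command_buffer", "bytes": b"\x00" * 64,
     "deps": ["kern0", "data0"]},
    {"id": "kern0", "kind": "kernel", "bytes": b"\x90" * 32},
    {"id": "data0", "kind": "data", "bytes": b"\x01" * 16},
]
print(place_and_patch(manifest)["patches"])
```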
-
Patent number: 12164973
Abstract: An apparatus to facilitate disaggregated computing for a distributed confidential computing environment is disclosed. The apparatus includes one or more processors to facilitate receiving a manifest corresponding to graph nodes representing regions of memory of a remote client machine, the graph nodes corresponding to a command buffer and to associated data structures and kernels of the command buffer used to initialize a hardware accelerator and execute the kernels, and the manifest indicating a destination memory location of each of the graph nodes and dependencies of each of the graph nodes; identifying, based on the manifest, the command buffer and the associated data structures to copy to the host memory; identifying, based on the manifest, the kernels to copy to local memory of the hardware accelerator; and patching addresses in the command buffer copied to the host memory with updated addresses of corresponding locations in the host memory.
Type: Grant
Filed: November 16, 2023
Date of Patent: December 10, 2024
Assignee: INTEL CORPORATION
Inventors: Reshma Lal, Pradeep Pappachan, Luis Kida, Soham Jayesh Desai, Sujoy Sen, Selvakumar Panneer, Robert Sharp
-
Patent number: 12135801
Abstract: Technologies for trusted I/O include a computing device having a hardware cryptographic agent, a cryptographic engine, and an I/O controller. The hardware cryptographic agent intercepts a message from the I/O controller and identifies boundaries of the message. The message may include multiple DMA transactions, and the start of message is the start of the first DMA transaction. The cryptographic engine encrypts the message and stores the encrypted data in a memory buffer. The cryptographic engine may skip and not encrypt header data starting at the start of message or may read a value from the header to determine the skip length. In some embodiments, the cryptographic agent and the cryptographic engine may be an inline cryptographic engine. In some embodiments, the cryptographic agent may be a channel identifier filter, and the cryptographic engine may be processor-based. Other embodiments are described and claimed.
Type: Grant
Filed: August 18, 2022
Date of Patent: November 5, 2024
Assignee: Intel Corporation
Inventors: Soham Jayesh Desai, Siddhartha Chhabra, Bin Xing, Pradeep M. Pappachan, Reshma Lal
-
Patent number: 12093748
Abstract: An apparatus to facilitate disaggregated computing for a distributed confidential computing environment is disclosed. The apparatus includes one or more processors to facilitate receiving a manifest corresponding to graph nodes representing regions of memory of a remote client machine, the graph nodes corresponding to a command buffer and to associated data structures and kernels of the command buffer used to initialize a hardware accelerator and execute the kernels, and the manifest indicating a destination memory location of each of the graph nodes and dependencies of each of the graph nodes; identifying, based on the manifest, the command buffer and the associated data structures to copy to the host memory; identifying, based on the manifest, the kernels to copy to local memory of the hardware accelerator; and patching addresses in the command buffer copied to the host memory with updated addresses of corresponding locations in the host memory.
Type: Grant
Filed: December 23, 2020
Date of Patent: September 17, 2024
Assignee: INTEL CORPORATION
Inventors: Reshma Lal, Pradeep Pappachan, Luis Kida, Soham Jayesh Desai, Sujoy Sen, Selvakumar Panneer, Robert Sharp
-
Publication number: 20240281302
Abstract: An apparatus to facilitate disaggregated computing for a distributed confidential computing environment is disclosed. The apparatus includes one or more processors to: provide a remote GPU middleware layer to act as a proxy for an application stack on a client platform that is separate from the remote server platform, wherein the remote GPU middleware layer is to expose an abstraction of the remote GPU to userspace components of a remote GPU stack, the userspace components running on the client machine; communicate with a kernel mode driver of the one or more processors to cause the host memory to be allocated for data structures used to communicate commands between the client and the remote GPU; and invoke the kernel mode driver to submit a workload generated by the application stack, the workload submitted for processing by the remote GPU using the data structures allocated in the host memory.
Type: Application
Filed: April 16, 2024
Publication date: August 22, 2024
Applicant: Intel Corporation
Inventors: Reshma Lal, Pradeep Pappachan, Luis Kida, Soham Jayesh Desai, Sujoy Sen, Selvakumar Panneer, Robert Sharp
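The middleware-proxy flow (also claimed in patent 11989595 below) can be sketched as: allocate host memory for the shared data structures, then submit the client's workload through a kernel-mode driver. The driver interface shown here is an illustrative assumption, not the actual driver API.

```python
# Hypothetical sketch of the middleware proxy flow: allocate host memory for the
# shared data structures, then submit the client's workload via a driver stand-in.
class KernelModeDriver:
    def __init__(self):
        self.allocations = {}

    def alloc_host_memory(self, name: str, size: int) -> int:
        addr = 0x10000 + 0x1000 * len(self.allocations)
        self.allocations[name] = (addr, size)
        return addr

    def submit(self, workload: dict) -> str:
        return f"submitted {workload['name']} using {sorted(workload['buffers'])}"

class RemoteGpuMiddleware:
    """Proxy between the client's application stack and the remote GPU."""
    def __init__(self, driver: KernelModeDriver):
        self.driver = driver

    def prepare(self) -> dict:
        # Host memory allocated for command buffers and associated data structures.
        return {buf: self.driver.alloc_host_memory(buf, 4096)
                for buf in ("command_buffer", "data_structures")}

    def run(self, workload_name: str) -> str:
        buffers = self.prepare()
        return self.driver.submit({"name": workload_name, "buffers": buffers})

print(RemoteGpuMiddleware(KernelModeDriver()).run("matrix-multiply"))
```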
-
Patent number: 12033005
Abstract: An apparatus to facilitate disaggregated computing for a distributed confidential computing environment is disclosed. The apparatus includes a programmable integrated circuit (IC) comprising secure device manager (SDM) hardware circuitry to: receive a tenant bitstream of a tenant and a tenant use policy for utilization of the programmable IC via the tenant bitstream, wherein the tenant use policy is cryptographically bound to the tenant bitstream by a cloud service provider (CSP) authorizing entity and signed with a signature of the CSP authorizing entity; in response to successfully verifying the signature, extract the tenant use policy to provide to a policy manager of the programmable IC for verification; in response to the policy manager verifying the tenant bitstream based on the tenant use policy, configure a partial reconfiguration (PR) region of the programmable IC using the tenant bitstream; and associate a slot ID of the PR region with the tenant use policy.
Type: Grant
Filed: November 22, 2021
Date of Patent: July 9, 2024
Assignee: Intel Corporation
Inventors: Reshma Lal, Pradeep Pappachan, Luis Kida, Soham Jayesh Desai, Sujoy Sen, Selvakumar Panneer, Robert Sharp
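The admission sequence in the abstract can be pictured as: verify the CSP signature over the bitstream-plus-policy bundle, check the extracted policy, then bind the PR slot to that policy. This is a minimal sketch in which HMAC stands in for the real signature scheme and all names and policy fields are assumptions.

```python
# Hypothetical sketch of the admission check: verify the CSP signature over the
# bitstream-plus-policy bundle, then bind the PR slot to the tenant use policy.
import hashlib, hmac

CSP_KEY = b"csp-authorizing-entity-key"   # placeholder for the CSP's signing key

def sign_bundle(bitstream: bytes, policy: bytes) -> bytes:
    return hmac.new(CSP_KEY, bitstream + policy, hashlib.sha256).digest()

def admit_tenant_bitstream(bitstream: bytes, policy: bytes, signature: bytes,
                           slot_policies: dict, slot_id: int) -> bool:
    # 1. The secure device manager verifies the CSP signature.
    if not hmac.compare_digest(sign_bundle(bitstream, policy), signature):
        return False
    # 2. The policy manager checks the extracted policy against the bitstream
    #    (placeholder check: the policy must carry an expected field).
    if b"max_region=1" not in policy:
        return False
    # 3. Configure the PR region and associate the slot ID with the policy.
    slot_policies[slot_id] = policy
    return True

slots = {}
bs, pol = b"tenant-bitstream", b"max_region=1;runtime=shared"
print(admit_tenant_bitstream(bs, pol, sign_bundle(bs, pol), slots, slot_id=3), slots)
```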
-
Publication number: 20240184639
Abstract: An apparatus to facilitate disaggregated computing for a distributed confidential computing environment is disclosed. The apparatus includes one or more processors to facilitate receiving a manifest corresponding to graph nodes representing regions of memory of a remote client machine, the graph nodes corresponding to a command buffer and to associated data structures and kernels of the command buffer used to initialize a hardware accelerator and execute the kernels, and the manifest indicating a destination memory location of each of the graph nodes and dependencies of each of the graph nodes; identifying, based on the manifest, the command buffer and the associated data structures to copy to the host memory; identifying, based on the manifest, the kernels to copy to local memory of the hardware accelerator; and patching addresses in the command buffer copied to the host memory with updated addresses of corresponding locations in the host memory.
Type: Application
Filed: December 13, 2023
Publication date: June 6, 2024
Applicant: Intel Corporation
Inventors: Reshma Lal, Pradeep Pappachan, Luis Kida, Soham Jayesh Desai, Sujoy Sen, Selvakumar Panneer, Robert Sharp
-
Patent number: 11989595
Abstract: An apparatus to facilitate disaggregated computing for a distributed confidential computing environment is disclosed. The apparatus includes one or more processors to: provide a remote GPU middleware layer to act as a proxy for an application stack on a client platform separate from the apparatus; communicate, by the remote GPU middleware layer, with a kernel mode driver of the one or more processors to cause the host memory to be allocated for command buffers and data structures received from the client platform for consumption by a command streamer of a remote GPU of the apparatus; and invoke, by the remote GPU middleware layer, the kernel mode driver to submit a workload generated by the application stack, the workload submitted for processing by the remote GPU using the command buffers and the data structures allocated in the host memory as directed by the command streamer.
Type: Grant
Filed: November 15, 2021
Date of Patent: May 21, 2024
Assignee: INTEL CORPORATION
Inventors: Reshma Lal, Pradeep Pappachan, Luis Kida, Soham Jayesh Desai, Sujoy Sen, Selvakumar Panneer, Robert Sharp
-
Publication number: 20240160488
Abstract: A computing platform comprising a plurality of disaggregated data center resources and an infrastructure processing unit (IPU), communicatively coupled to the plurality of resources, to compose a platform of the plurality of disaggregated data center resources for allocation of a microservices cluster.
Type: Application
Filed: December 14, 2023
Publication date: May 16, 2024
Applicant: Intel Corporation
Inventors: Soham Jayesh Desai, Reshma Lal
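Platform composition here means selecting resources from disaggregated pools to satisfy a cluster request. The short sketch below is a hypothetical illustration of that selection step; the pool contents, request format, and first-fit policy are assumptions, not the IPU's actual logic.

```python
# Hypothetical sketch of platform composition: an IPU stand-in picks resources
# from disaggregated pools to satisfy a microservices-cluster request.
POOLS = {
    "cpu":     [{"id": "cpu0", "cores": 32}, {"id": "cpu1", "cores": 64}],
    "memory":  [{"id": "mem0", "gib": 256}, {"id": "mem1", "gib": 512}],
    "storage": [{"id": "ssd0", "tib": 4}],
}

def compose_platform(request: dict) -> dict:
    """Pick the first resource in each pool that satisfies the request."""
    platform = {}
    for kind, (attr, needed) in request.items():
        match = next((r for r in POOLS[kind] if r[attr] >= needed), None)
        if match is None:
            raise RuntimeError(f"no {kind} resource with {attr} >= {needed}")
        platform[kind] = match["id"]
    return platform

# Request: a cluster needing 48 cores, 384 GiB of memory, and 1 TiB of storage.
print(compose_platform({"cpu": ("cores", 48),
                        "memory": ("gib", 384),
                        "storage": ("tib", 1)}))
```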
-
Publication number: 20240126691
Abstract: Technologies for cryptographic separation of MMIO operations with an accelerator device include a computing device having a processor and an accelerator. The processor establishes a trusted execution environment. The accelerator determines, based on a target memory address, a first memory address range associated with the memory-mapped I/O transaction, generates a second authentication tag using a first cryptographic key from a set of cryptographic keys, wherein the first key is uniquely associated with the first memory address range. An accelerator validator determines whether the first authentication tag matches the second authentication tag, and a memory mapper commits the memory-mapped I/O transaction in response to a determination that the first authentication tag matches the second authentication tag. Other embodiments are described and claimed.
Type: Application
Filed: September 7, 2023
Publication date: April 18, 2024
Applicant: Intel Corporation
Inventors: Luis S. Kida, Reshma Lal, Soham Jayesh Desai
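The tag check works per address range: the accelerator looks up the key bound to the range covering the target address, recomputes the tag, and commits the transaction only on a match. The sketch below illustrates that flow under stated assumptions; HMAC and the range table are stand-ins for the actual tag algorithm and key management.

```python
# Hypothetical sketch of per-range tag checking for an MMIO write: look up the
# key bound to the covering address range, recompute the tag, commit on a match.
import hashlib, hmac

# Each protected MMIO address range is uniquely associated with one key.
RANGE_KEYS = [((0x0000, 0x0FFF), b"key-range-0"),
              ((0x1000, 0x1FFF), b"key-range-1")]

def key_for(address: int) -> bytes:
    for (lo, hi), key in RANGE_KEYS:
        if lo <= address <= hi:
            return key
    raise KeyError(f"address {address:#x} not in any protected range")

def tag(key: bytes, address: int, payload: bytes) -> bytes:
    return hmac.new(key, address.to_bytes(8, "little") + payload,
                    hashlib.sha256).digest()

def commit_mmio(address: int, payload: bytes, first_tag: bytes, memory: dict) -> bool:
    second_tag = tag(key_for(address), address, payload)
    if not hmac.compare_digest(first_tag, second_tag):
        return False                     # the validator rejects the transaction
    memory[address] = payload            # the memory mapper commits the write
    return True

mem = {}
addr, data = 0x1004, b"\xde\xad\xbe\xef"
print(commit_mmio(addr, data, tag(key_for(addr), addr, data), mem), mem)
```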
-
Patent number: 11941457
Abstract: An apparatus to facilitate disaggregated computing for a distributed confidential computing environment is disclosed. The apparatus includes a source remote direct memory access (RDMA) network interface controller (RNIC); a queue to store a data entry corresponding to an RDMA request between the source RNIC and a sink RNIC; a data buffer to store data for an RDMA transfer corresponding to the RDMA request, the RDMA transfer between the source RNIC and the sink RNIC; and a trusted execution environment (TEE) comprising an authentication tag controller to: initialize a first authentication tag calculated using a first key known between a source consumer generating the RDMA request and the source RNIC; associate the first authentication tag with the data entry as integrity verification; initialize a second authentication tag calculated using a second key; and associate the second authentication tag with the data buffer as integrity verification for the data buffer.
Type: Grant
Filed: November 12, 2021
Date of Patent: March 26, 2024
Assignee: INTEL CORPORATION
Inventors: Reshma Lal, Pradeep Pappachan, Luis Kida, Soham Jayesh Desai, Sujoy Sen, Selvakumar Panneer, Robert Sharp
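The two-tag arrangement can be illustrated with a short sketch: one tag protects the queued RDMA work entry under a key shared between the consumer and the source RNIC, and a second tag protects the data buffer under its own key. HMAC and the entry/buffer encodings below are placeholders for illustration only.

```python
# Hypothetical sketch of the two-tag scheme: one tag for the queued RDMA work
# entry, a second for the data buffer, each computed under its own key.
import hashlib, hmac

def make_tag(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

key_entry = b"consumer-to-source-rnic-key"    # key 1: consumer <-> source RNIC
key_buffer = b"buffer-integrity-key"          # key 2: protects the data buffer

work_entry = b"RDMA_WRITE len=4096 rkey=0x77"
data_buffer = b"A" * 4096

queue = [{"entry": work_entry, "tag": make_tag(key_entry, work_entry)}]
buffer_record = {"data": data_buffer, "tag": make_tag(key_buffer, data_buffer)}

# Before the transfer proceeds, both tags are re-verified for integrity.
entry_ok = hmac.compare_digest(queue[0]["tag"],
                               make_tag(key_entry, queue[0]["entry"]))
buffer_ok = hmac.compare_digest(buffer_record["tag"],
                                make_tag(key_buffer, buffer_record["data"]))
print(entry_ok and buffer_ok)
```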
-
Publication number: 20240086258
Abstract: An apparatus to facilitate disaggregated computing for a distributed confidential computing environment is disclosed. The apparatus includes one or more processors to facilitate receiving a manifest corresponding to graph nodes representing regions of memory of a remote client machine, the graph nodes corresponding to a command buffer and to associated data structures and kernels of the command buffer used to initialize a hardware accelerator and execute the kernels, and the manifest indicating a destination memory location of each of the graph nodes and dependencies of each of the graph nodes; identifying, based on the manifest, the command buffer and the associated data structures to copy to the host memory; identifying, based on the manifest, the kernels to copy to local memory of the hardware accelerator; and patching addresses in the command buffer copied to the host memory with updated addresses of corresponding locations in the host memory.
Type: Application
Filed: November 16, 2023
Publication date: March 14, 2024
Applicant: Intel Corporation
Inventors: Reshma Lal, Pradeep Pappachan, Luis Kida, Soham Jayesh Desai, Sujoy Sen, Selvakumar Panneer, Robert Sharp
-
Patent number: 11893425
Abstract: An apparatus to facilitate disaggregated computing for a distributed confidential computing environment is disclosed. The apparatus includes a processor executing a trusted execution environment (TEE) comprising a field-programmable gate array (FPGA) driver to interface with an FPGA device that is remote to the apparatus; and a remote memory-mapped input/output (MMIO) driver to expose the FPGA device as a legacy device to the FPGA driver, wherein the processor is to utilize the remote MMIO driver to: enumerate the FPGA device using FPGA enumeration data provided by a remote management controller of the FPGA device, the FPGA enumeration data comprising a configuration space and device details; load function drivers for the FPGA device in the TEE; create corresponding device files in the TEE based on the FPGA enumeration data; and handle remote MMIO reads and writes to the FPGA device via a network transport protocol.
Type: Grant
Filed: November 19, 2021
Date of Patent: February 6, 2024
Assignee: INTEL CORPORATION
Inventors: Reshma Lal, Pradeep Pappachan, Luis Kida, Soham Jayesh Desai, Sujoy Sen, Selvakumar Panneer, Robert Sharp
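As a rough sketch of remote enumeration, the code below consumes the kind of enumeration data a remote management controller might provide, derives device-file names, and routes an MMIO read through a network-transport stand-in. The field names and transport interface are assumptions for illustration.

```python
# Hypothetical sketch of remote FPGA enumeration and MMIO forwarding: build
# device-file names from enumeration data, send reads over a transport stand-in.
class RemoteMmioDriver:
    def __init__(self, enumeration_data: dict, transport):
        self.config_space = enumeration_data["config_space"]
        self.details = enumeration_data["device_details"]
        self.transport = transport
        # Create corresponding device files (names only, for illustration).
        self.device_files = [f"/dev/{self.details['name']}{i}"
                             for i in range(self.details["functions"])]

    def mmio_read(self, offset: int, size: int) -> bytes:
        # Remote reads travel over the network transport instead of a local bus.
        return self.transport({"op": "read", "offset": offset, "size": size})

def fake_transport(request: dict) -> bytes:
    return bytes(request["size"])        # the remote side would return real data

driver = RemoteMmioDriver(
    {"config_space": {"vendor": 0x8086, "device": 0x0B30},
     "device_details": {"name": "rfpga", "functions": 2}},
    fake_transport)
print(driver.device_files, driver.mmio_read(0x40, 4).hex())
```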
-
Patent number: 11861406
Abstract: A computing platform comprising a plurality of disaggregated data center resources and an infrastructure processing unit (IPU), communicatively coupled to the plurality of resources, to compose a platform of the plurality of disaggregated data center resources for allocation of a microservices cluster.
Type: Grant
Filed: June 24, 2021
Date of Patent: January 2, 2024
Assignee: Intel Corporation
Inventors: Soham Jayesh Desai, Reshma Lal
-
Patent number: 11782829
Abstract: Technologies for cryptographic separation of MMIO operations with an accelerator device include a computing device having a processor and an accelerator. The processor establishes a trusted execution environment. The accelerator determines, based on a target memory address, a first memory address range associated with the memory-mapped I/O transaction, generates a second authentication tag using a first cryptographic key from a set of cryptographic keys, wherein the first key is uniquely associated with the first memory address range. An accelerator validator determines whether the first authentication tag matches the second authentication tag, and a memory mapper commits the memory-mapped I/O transaction in response to a determination that the first authentication tag matches the second authentication tag. Other embodiments are described and claimed.
Type: Grant
Filed: March 4, 2022
Date of Patent: October 10, 2023
Assignee: INTEL CORPORATION
Inventors: Luis S. Kida, Reshma Lal, Soham Jayesh Desai
-
Publication number: 20230071723
Abstract: Technologies for secure I/O data transfer include a compute device, which includes a processor to execute a trusted application, an input/output (I/O) device, and an I/O subsystem. The I/O subsystem is configured to establish a secured channel between the I/O subsystem and a trusted application running on the compute device, and receive, in response to an establishment of the secured channel, I/O data from the I/O device via an unsecured channel. The I/O subsystem is further configured to encrypt, in response to a receipt of the I/O data, the I/O data using a security key associated with the trusted application that is to process the I/O data and transmit the encrypted I/O data to the trusted application via the secured channel, wherein the secured channel has a data transfer rate that is higher than a data transfer rate of the unsecured channel between the I/O device and the I/O subsystem.
Type: Application
Filed: November 2, 2022
Publication date: March 9, 2023
Applicant: Intel Corporation
Inventors: Reshma Lal, Luis S. Kida, Soham Jayesh Desai
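The re-encryption hop described above can be sketched briefly: the I/O subsystem takes cleartext arriving on the slower unsecured device channel, encrypts it under a key associated with the trusted application, and forwards it on the faster secured channel. This is a minimal sketch assuming AES-GCM from the third-party `cryptography` package and a simplified key-per-application table.

```python
# Hypothetical sketch of the secure I/O hop: encrypt data from the unsecured
# device channel under the trusted application's key before forwarding it.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class IoSubsystem:
    def __init__(self):
        self.app_keys = {}                          # one key per trusted application

    def establish_secured_channel(self, app_id: str) -> None:
        self.app_keys[app_id] = AESGCM.generate_key(bit_length=256)

    def forward(self, app_id: str, io_data: bytes) -> tuple[bytes, bytes]:
        # The data arrived over the unsecured channel; protect it before delivery.
        nonce = os.urandom(12)
        return nonce, AESGCM(self.app_keys[app_id]).encrypt(nonce, io_data, None)

subsystem = IoSubsystem()
subsystem.establish_secured_channel("trusted-app-1")
nonce, ciphertext = subsystem.forward("trusted-app-1", b"sensor reading: 42")
# The trusted application holds the same key and decrypts on its side.
print(AESGCM(subsystem.app_keys["trusted-app-1"]).decrypt(nonce, ciphertext, None))
```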