Patents by Inventor Kshitij A. Doshi
Kshitij A. Doshi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12333025
Abstract: Examples herein relate to an interface selectively providing access to a memory region for a work request from an entity by providing selective access to a physical address of the memory region and selective access to a cryptographic key for use by a memory controller to access the memory region. In some examples, providing selective access to a physical address conversion is based on one or more of: validation of a certificate received with the work request and an identifier of the entity being associated with a process with access to the memory region. Access to the memory region can be specified to be one or more of: create, read, update, delete, write, or notify. A memory region can be a page or sub-page sized region. Different access rights can be associated with different sub-portions of the memory region, wherein the access rights comprise one or more of: create, read, update, delete, write, or notify.
Type: Grant
Filed: September 19, 2023
Date of Patent: June 17, 2025
Assignee: Intel Corporation
Inventors: Ned Smith, Kshitij A. Doshi, Francesc Guim Bernat, Kapil Sood, Tarun Viswanathan
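As a rough illustration of the access-gating flow described in this abstract, the Python sketch below models an interface that releases a physical address and a cryptographic key only after a certificate check, an entity check, and a per-sub-portion rights check. All names (RegionInterface, WorkRequest, validate_certificate) and the rights encoding are illustrative assumptions, not the patented design.

```python
from dataclasses import dataclass, field

# Access rights named in the abstract: create, read, update, delete, write, notify.
RIGHTS = {"create", "read", "update", "delete", "write", "notify"}

@dataclass
class MemoryRegion:
    physical_address: int
    key: bytes                                # cryptographic key used by the memory controller
    # per-sub-portion rights, e.g. {(offset, length): {"read", "write"}}
    sub_rights: dict = field(default_factory=dict)

@dataclass
class WorkRequest:
    entity_id: str
    certificate: str
    offset: int
    length: int
    wanted: set                               # rights requested, subset of RIGHTS

def validate_certificate(cert: str) -> bool:
    # Placeholder check; a real interface would verify a signature chain.
    return cert.startswith("CERT:")

class RegionInterface:
    def __init__(self, region: MemoryRegion, allowed_entities: set):
        self.region = region
        self.allowed_entities = allowed_entities

    def request_access(self, req: WorkRequest):
        """Return (physical address, key) only if the request passes all checks."""
        if not validate_certificate(req.certificate):
            raise PermissionError("certificate validation failed")
        if req.entity_id not in self.allowed_entities:
            raise PermissionError("entity not associated with a permitted process")
        granted = self.region.sub_rights.get((req.offset, req.length), set())
        if not req.wanted <= granted:
            raise PermissionError(f"rights {req.wanted - granted} not granted for this sub-portion")
        return self.region.physical_address + req.offset, self.region.key

if __name__ == "__main__":
    region = MemoryRegion(0x4000_0000, b"\x11" * 16,
                          sub_rights={(0, 4096): {"read", "write"}, (4096, 4096): {"read"}})
    iface = RegionInterface(region, allowed_entities={"svc-A"})
    addr, key = iface.request_access(WorkRequest("svc-A", "CERT:ok", 0, 4096, {"read"}))
    print(hex(addr))
```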
-
Patent number: 12314178
Abstract: Examples described herein relate to a network interface device. In some examples, the network interface device includes a device interface; input/output circuitry to receive Ethernet compliant packets and output Ethernet compliant packets; circuitry to monitor a particular page for a rate of data copying among nodes within a group of two or more nodes; and circuitry to perform one or more actions based, at least in part, on the rate of data copying among the nodes within the group of two or more nodes to attempt to reduce a number of copy operations of the data among the nodes within the group of two or more nodes, wherein the group of two or more nodes are part of a distributed shared memory (DSM).
Type: Grant
Filed: June 10, 2021
Date of Patent: May 27, 2025
Assignee: Intel Corporation
Inventors: Kshitij A. Doshi, Francesc Guim Bernat, Suraj Prabhakaran, Tushar Sudhakar Gohad
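A minimal sketch of the monitoring idea in this abstract, assuming a sliding-window copy counter per page and a placeholder "migrate the page's home" action once the copy rate crosses a threshold; the class and method names are hypothetical and the chosen action is only one of the possibilities the abstract leaves open.

```python
import time
from collections import defaultdict, deque

class CopyRateMonitor:
    """Hypothetical model of the monitoring circuitry: tracks copy operations on a
    page among a group of DSM nodes and triggers an action when the rate is high."""

    def __init__(self, threshold_per_sec: float, window_sec: float = 1.0):
        self.threshold = threshold_per_sec
        self.window = window_sec
        self.events = defaultdict(deque)          # page -> timestamps of copy operations

    def record_copy(self, page: int, now: float = None) -> None:
        now = time.monotonic() if now is None else now
        q = self.events[page]
        q.append(now)
        while q and now - q[0] > self.window:      # keep only the recent window
            q.popleft()

    def rate(self, page: int, now: float = None) -> float:
        now = time.monotonic() if now is None else now
        q = self.events[page]
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) / self.window

    def maybe_act(self, page: int, now: float = None):
        """One possible action: co-locate the page with its heaviest user to cut copies."""
        if self.rate(page, now) > self.threshold:
            return ("migrate_page_home", page)     # placeholder action
        return None

monitor = CopyRateMonitor(threshold_per_sec=100)
for _ in range(150):
    monitor.record_copy(page=42)
print(monitor.maybe_act(page=42))                  # ('migrate_page_home', 42)
```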
-
Patent number: 12314782
Abstract: A computing apparatus, including: a hardware computing platform; and logic to operate on the hardware computing platform, configured to: receive a microservice instance registration for a microservice accelerator, wherein the registration includes a microservice that the microservice accelerator is configured to provide, and a microservice connection capability indicating an ability of the microservice instance to communicate directly with other instances of the same or a different microservice; and log the registration in a microservice registration database.
Type: Grant
Filed: March 27, 2023
Date of Patent: May 27, 2025
Assignee: Intel Corporation
Inventors: Vadim Sukhomlinov, Kshitij A. Doshi
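The registration flow in this abstract can be illustrated with a small registry model; the record fields and class names below are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class MicroserviceRegistration:
    instance_id: str
    microservice: str            # the microservice the accelerator provides
    direct_connect: bool         # can this instance talk directly to peer instances?

class MicroserviceRegistry:
    """Hypothetical microservice registration database."""
    def __init__(self):
        self._db = {}

    def register(self, reg: MicroserviceRegistration) -> None:
        self._db[reg.instance_id] = reg          # log the registration

    def peers(self, microservice: str):
        """Instances of a service whose connection capability allows direct wiring."""
        return [r for r in self._db.values()
                if r.microservice == microservice and r.direct_connect]

registry = MicroserviceRegistry()
registry.register(MicroserviceRegistration("acc-0", "image-decode", True))
registry.register(MicroserviceRegistration("acc-1", "image-decode", False))
print([r.instance_id for r in registry.peers("image-decode")])   # ['acc-0']
```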
-
Publication number: 20250153712
Abstract: An apparatus comprising a memory to store an observed trajectory of a pedestrian, the observed trajectory comprising a plurality of observed locations of the pedestrian over a first plurality of timesteps; and a processor to generate a predicted trajectory of the pedestrian, the predicted trajectory comprising a plurality of predicted locations of the pedestrian over the first plurality of timesteps and over a second plurality of timesteps occurring after the first plurality of timesteps; determine a likelihood of the predicted trajectory based on a comparison of the plurality of predicted locations of the pedestrian over the first plurality of timesteps and the plurality of observed locations of the pedestrian over the first plurality of timesteps; and responsive to the determined likelihood of the predicted trajectory, provide information associated with the predicted trajectory to a vehicle to warn the vehicle of a potential collision with the pedestrian.
Type: Application
Filed: November 22, 2024
Publication date: May 15, 2025
Applicant: Intel Corporation
Inventors: David Gomez Gutierrez, Javier Felip Leon, Kshitij A. Doshi, Leobardo E. Campos Macias, Nilesh Amar Ahuja, Omesh Tickoo
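One way to read the likelihood-and-warning logic is sketched below, assuming independent Gaussian position errors over the overlapping timesteps and a simple proximity test against the vehicle's path; the sigma, threshold, and radius parameters are invented for the example.

```python
import math

def trajectory_likelihood(predicted, observed, sigma=0.5):
    """Score a predicted trajectory by how closely its first len(observed) points
    match the observed locations (independent Gaussian errors per timestep)."""
    log_l = 0.0
    for (px, py), (ox, oy) in zip(predicted, observed):
        d2 = (px - ox) ** 2 + (py - oy) ** 2
        log_l += -d2 / (2 * sigma ** 2)
    return math.exp(log_l)

def maybe_warn_vehicle(predicted, observed, vehicle_path, threshold=0.1, radius=1.0):
    """If the prediction is sufficiently likely and its future points come near the
    vehicle's path, return a warning carrying the predicted trajectory."""
    if trajectory_likelihood(predicted, observed) < threshold:
        return None
    future = predicted[len(observed):]          # the second plurality of timesteps
    for (px, py) in future:
        for (vx, vy) in vehicle_path:
            if math.hypot(px - vx, py - vy) < radius:
                return {"warning": "potential collision", "trajectory": predicted}
    return None

observed = [(0.0, 0.0), (0.5, 0.0)]
predicted = [(0.0, 0.1), (0.5, 0.1), (1.0, 0.2), (1.5, 0.3)]
vehicle_path = [(1.5, 0.0), (2.0, 0.0)]
print(maybe_warn_vehicle(predicted, observed, vehicle_path))
```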
-
Publication number: 20250147822
Abstract: A computing apparatus, including: a hardware computing platform; and logic to operate on the hardware computing platform, configured to: receive a microservice instance registration for a microservice accelerator, wherein the registration includes a microservice that the microservice accelerator is configured to provide, and a microservice connection capability indicating an ability of the microservice instance to communicate directly with other instances of the same or a different microservice; and log the registration in a microservice registration database.
Type: Application
Filed: January 6, 2025
Publication date: May 8, 2025
Applicant: Intel Corporation
Inventors: Vadim Sukhomlinov, Kshitij A. Doshi
-
Publication number: 20250097120
Abstract: Examples include techniques for artificial intelligence (AI) capabilities at a network switch. These examples include receiving a request to register a neural network for loading to an inference resource located at the network switch and loading the neural network based on information included in the request to support an AI service to be provided by users requesting the AI service.
Type: Application
Filed: December 2, 2024
Publication date: March 20, 2025
Applicant: Intel Corporation
Inventors: Francesc Guim Bernat, Suraj Prabhakaran, Kshitij A. Doshi, Brinda Ganesh, Timothy Verrall
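A toy control-plane model of the register-then-serve flow described above; the request fields, resource names, and the allowed-users check are illustrative assumptions rather than the published design.

```python
from dataclasses import dataclass

@dataclass
class RegisterNNRequest:
    model_id: str
    model_bytes: bytes          # serialized neural network to load
    target_resource: str        # inference resource located at the switch
    allowed_users: set          # users permitted to request this AI service

class SwitchInferenceManager:
    """Hypothetical control-plane handler running at the network switch."""

    def __init__(self):
        self.registrations = {}      # (resource, model_id) -> RegisterNNRequest

    def handle_register(self, req: RegisterNNRequest) -> None:
        # Load the neural network onto the named inference resource based on
        # the information carried in the registration request.
        self.registrations[(req.target_resource, req.model_id)] = req

    def serve(self, user: str, resource: str, model_id: str, payload: bytes) -> str:
        reg = self.registrations.get((resource, model_id))
        if reg is None:
            raise LookupError("model not registered on this resource")
        if user not in reg.allowed_users:
            raise PermissionError("user is not registered for this AI service")
        # Placeholder for dispatching the payload to the switch-attached resource.
        return f"ran {model_id} on {resource} for {user} ({len(payload)} bytes in)"

mgr = SwitchInferenceManager()
mgr.handle_register(RegisterNNRequest("resnet-lite", b"\x00" * 8, "switch-npu-0", {"tenant-a"}))
print(mgr.serve("tenant-a", "switch-npu-0", "resnet-lite", b"frame"))
```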
-
Patent number: 12204471
Abstract: In an example, there is disclosed a host-fabric interface (HFI), including: an interconnect interface to communicatively couple the HFI to an interconnect; a network interface to communicatively couple the HFI to a network; network interface logic to provide communication between the interconnect and the network; a coprocessor configured to provide an offloaded function for the network; a memory; and a caching agent configured to: designate a region of the memory as a shared memory between the HFI and a core communicatively coupled to the HFI via the interconnect; receive a memory operation directed to the shared memory; and issue a memory instruction to the memory according to the memory operation.
Type: Grant
Filed: September 7, 2023
Date of Patent: January 21, 2025
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Daniel Rivas Barragan, Kshitij A. Doshi, Mark A. Schmisseur
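The caching-agent behavior described here can be modeled loosely as follows, with the shared region treated as a byte range and loads/stores standing in for the issued memory instructions; this is a software analogy under stated assumptions, not the HFI hardware.

```python
class CachingAgent:
    """Hypothetical model of the HFI caching agent: it designates a shared region
    between the HFI and a core and turns memory operations on that region into
    memory instructions against the HFI-local memory."""

    def __init__(self, memory_size: int):
        self.memory = bytearray(memory_size)
        self.shared = None                      # (base, size) of the shared region

    def designate_shared(self, base: int, size: int) -> None:
        self.shared = (base, size)

    def _check(self, addr: int, length: int) -> None:
        if self.shared is None:
            raise RuntimeError("no shared region designated")
        base, size = self.shared
        if not (base <= addr and addr + length <= base + size):
            raise ValueError("operation falls outside the shared region")

    def write(self, addr: int, data: bytes) -> None:
        """Memory operation directed to the shared memory -> store instruction."""
        self._check(addr, len(data))
        self.memory[addr:addr + len(data)] = data

    def read(self, addr: int, length: int) -> bytes:
        """Memory operation directed to the shared memory -> load instruction."""
        self._check(addr, length)
        return bytes(self.memory[addr:addr + length])

agent = CachingAgent(memory_size=4096)
agent.designate_shared(base=0, size=1024)
agent.write(0x10, b"offloaded-result")
print(agent.read(0x10, 16))
```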
-
Publication number: 20250021621
Abstract: Detailed herein are embodiments related to bit matrix multiplication in a processor. For example, in some embodiments a processor comprises: decode circuitry to decode an instruction having fields for an opcode, an identifier of a first source bit matrix, an identifier of a second source bit matrix, an identifier of a destination bit matrix, and an immediate; and execution circuitry to execute the decoded instruction to perform a multiplication of a matrix of S-bit elements of the identified first source bit matrix with S-bit elements of the identified second source bit matrix, wherein the multiplication and accumulation operations are selected by the operation selector, and to store a result of the matrix multiplication into the identified destination bit matrix, wherein S indicates a plural bit size.
Type: Application
Filed: June 26, 2024
Publication date: January 16, 2025
Inventors: Dmitry Y. Babokin, Kshitij A. Doshi, Vadim Sukhomlinov
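A software model of the instruction's semantics is sketched below for single-bit elements, with the immediate selecting the multiply/accumulate pair (for example AND/XOR over GF(2) versus AND/OR); the selector encoding and the choice of operation pairs are assumptions for illustration.

```python
OPS = {
    0: (lambda a, b: a & b, lambda a, b: a ^ b),   # multiply=AND, accumulate=XOR (GF(2))
    1: (lambda a, b: a & b, lambda a, b: a | b),   # multiply=AND, accumulate=OR
}

def bit_matrix_multiply(src1, src2, imm):
    """Software model: src1 and src2 are lists of rows, each row a list of 0/1 bits.
    The immediate selects which multiply/accumulate pair is applied."""
    mul, acc = OPS[imm]
    n, k, m = len(src1), len(src2), len(src2[0])
    dest = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            out = 0
            for t in range(k):
                out = acc(out, mul(src1[i][t], src2[t][j]))
            dest[i][j] = out
    return dest

# The identity matrix leaves A unchanged under (AND, XOR).
A = [[1, 0], [1, 1]]
I = [[1, 0], [0, 1]]
assert bit_matrix_multiply(I, A, 0) == A
print(bit_matrix_multiply(A, A, 1))
```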
-
Publication number: 20240320179
Abstract: There is disclosed an example of an artificial intelligence (AI) system, including: a first hardware platform; a fabric interface configured to communicatively couple the first hardware platform to a second hardware platform; a processor hosted on the first hardware platform and programmed to operate on an AI problem; and a first training accelerator, including: an accelerator hardware; a platform inter-chip link (ICL) configured to communicatively couple the first training accelerator to a second training accelerator on the first hardware platform without aid of the processor; a fabric ICL to communicatively couple the first training accelerator to a third training accelerator on a second hardware platform without aid of the processor; and a system decoder configured to operate the fabric ICL and platform ICL to share data of the accelerator hardware between the first training accelerator and second and third training accelerators without aid of the processor.
Type: Application
Filed: May 31, 2024
Publication date: September 26, 2024
Applicant: Intel Corporation
Inventors: Francesc Guim Bernat, Da-Ming Chiang, Kshitij A. Doshi, Suraj Prabhakaran, Mark A. Schmisseur
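A minimal sketch of the system-decoder routing decision, assuming the only choice is between a same-platform ICL and a fabric ICL and that the host processor is never involved in the transfer; the link callables and platform identifiers are placeholders.

```python
class SystemDecoder:
    """Hypothetical model of the system decoder: it picks the platform ICL for a
    peer accelerator on the same hardware platform and the fabric ICL for one on
    a remote platform, so accelerator data moves without aid of the processor."""

    def __init__(self, local_platform: str, platform_icl, fabric_icl):
        self.local_platform = local_platform
        self.platform_icl = platform_icl       # callable(dest_accelerator, payload)
        self.fabric_icl = fabric_icl           # callable(dest_accelerator, payload)

    def share(self, dest_accelerator: str, dest_platform: str, payload: bytes):
        link = self.platform_icl if dest_platform == self.local_platform else self.fabric_icl
        return link(dest_accelerator, payload)

# Toy link implementations that just report which path was taken.
platform_link = lambda dest, data: f"platform ICL -> {dest} ({len(data)} bytes)"
fabric_link = lambda dest, data: f"fabric ICL -> {dest} ({len(data)} bytes)"

decoder = SystemDecoder("platform-0", platform_link, fabric_link)
print(decoder.share("accel-2", "platform-0", b"grads"))   # stays on the platform ICL
print(decoder.share("accel-3", "platform-1", b"grads"))   # crosses the fabric ICL
```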
-
Patent number: 12045308
Abstract: Detailed herein are embodiments related to bit matrix multiplication in a processor. For example, in some embodiments a processor comprises: decode circuitry to decode an instruction having fields for an opcode, an identifier of a first source bit matrix, an identifier of a second source bit matrix, an identifier of a destination bit matrix, and an immediate; and execution circuitry to execute the decoded instruction to perform a multiplication of a matrix of S-bit elements of the identified first source bit matrix with S-bit elements of the identified second source bit matrix, wherein the multiplication and accumulation operations are selected by the operation selector, and to store a result of the matrix multiplication into the identified destination bit matrix, wherein S indicates a plural bit size.
Type: Grant
Filed: December 16, 2022
Date of Patent: July 23, 2024
Assignee: Intel Corporation
Inventors: Dmitry Y. Babokin, Kshitij A. Doshi, Vadim Sukhomlinov
-
Patent number: 12038861
Abstract: There is disclosed an example of an artificial intelligence (AI) system, including: a first hardware platform; a fabric interface configured to communicatively couple the first hardware platform to a second hardware platform; a processor hosted on the first hardware platform and programmed to operate on an AI problem; and a first training accelerator, including: an accelerator hardware; a platform inter-chip link (ICL) configured to communicatively couple the first training accelerator to a second training accelerator on the first hardware platform without aid of the processor; a fabric ICL to communicatively couple the first training accelerator to a third training accelerator on a second hardware platform without aid of the processor; and a system decoder configured to operate the fabric ICL and platform ICL to share data of the accelerator hardware between the first training accelerator and second and third training accelerators without aid of the processor.
Type: Grant
Filed: October 25, 2022
Date of Patent: July 16, 2024
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Da-Ming Chiang, Kshitij A. Doshi, Suraj Prabhakaran, Mark A. Schmisseur
-
Patent number: 12032977
Abstract: In one embodiment, a computing device comprises memory circuitry and processing circuitry. The memory circuitry is to store a plurality of container images, comprising: a first container image comprising a first set of applications; and a second container image comprising a virtual machine, a guest operating system, and a second set of applications. The processing circuitry is to: instantiate a plurality of containers on a host operating system, wherein the plurality of containers comprises a first container and a second container; execute the first set of applications in the first container, wherein the first set of applications is to be executed on the host operating system; and execute the virtual machine in the second container, wherein the guest operating system is to be executed on the virtual machine and the second set of applications is to be executed on the guest operating system.
Type: Grant
Filed: May 11, 2020
Date of Patent: July 9, 2024
Assignee: Intel Corporation
Inventors: Bryan J. Rodriguez, Kshitij A. Doshi, Ned M. Smith, Michael G. Millsap
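The two container flavors can be illustrated with a toy launcher that picks a host-native runtime for the first image and a VM-backed runtime (guest OS inside the container) for the second; the image fields and runtime labels are hypothetical, not the patented mechanism.

```python
from dataclasses import dataclass

@dataclass
class ContainerImage:
    name: str
    apps: list
    has_vm: bool = False         # image bundles a virtual machine plus a guest OS
    guest_os: str = None

def instantiate(image: ContainerImage):
    """Hypothetical launcher: both images become containers on the host OS, but the
    second kind runs its applications on a guest OS inside a VM in the container."""
    if image.has_vm:
        return {
            "container": image.name,
            "runtime": "vm",                   # e.g. a lightweight-VM container runtime
            "guest_os": image.guest_os,
            "apps_run_on": "guest OS",
            "apps": image.apps,
        }
    return {
        "container": image.name,
        "runtime": "host",
        "apps_run_on": "host OS",
        "apps": image.apps,
    }

first = ContainerImage("native-image", ["webapp"])
second = ContainerImage("vm-image", ["legacy-app"], has_vm=True, guest_os="guest-linux")
for spec in (instantiate(first), instantiate(second)):
    print(spec)
```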
-
Publication number: 20240211310
Abstract: Technologies for dynamically sharing remote resources include a computing node that sends a resource request for remote resources to a remote computing node in response to a determination that additional resources are required by the computing node. The computing node configures a mapping of a local address space of the computing node to the remote resources of the remote computing node in response to sending the resource request. In response to generating an access to the local address, the computing node identifies the remote computing node based on the local address with the mapping of the local address space to the remote resources of the remote computing node and performs a resource access operation with the remote computing node over a network fabric. The remote computing node may be identified with system address decoders of a caching agent and a host fabric interface. Other embodiments are described and claimed.
Type: Application
Filed: March 7, 2024
Publication date: June 27, 2024
Applicant: Intel Corporation
Inventors: Francesc Guim Bernat, Kshitij A. Doshi, Daniel Rivas Barragan, Alejandro Duran Gonzalez, Harald Servat
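A simplified model of the address-decoder mapping described here: local addresses that fall in a mapped range are redirected to the granting node over a stand-in fabric callable, and everything else stays local. The request/grant exchange is collapsed into a single call for brevity, and all names are assumptions.

```python
class AddressDecoder:
    """Hypothetical system address decoder: maps ranges of the local address space
    either to local memory or to remote resources reached over the fabric."""

    def __init__(self):
        self.ranges = []                         # (base, size, target_node)

    def map_remote(self, base: int, size: int, node: str) -> None:
        self.ranges.append((base, size, node))

    def lookup(self, addr: int):
        for base, size, node in self.ranges:
            if base <= addr < base + size:
                return node, addr - base
        return None, addr                        # not mapped remotely -> local access

class ComputeNode:
    def __init__(self, name: str, fabric):
        self.name = name
        self.fabric = fabric                     # callable(node, op, offset, data)
        self.decoder = AddressDecoder()

    def request_remote(self, remote: str, base: int, size: int) -> None:
        # In the abstract this follows a resource request to the remote node; here
        # we simply install the mapping as if the request had been granted.
        self.decoder.map_remote(base, size, remote)

    def access(self, op: str, addr: int, data: bytes = b""):
        node, offset = self.decoder.lookup(addr)
        if node is None:
            return f"local {op} at {hex(addr)}"
        return self.fabric(node, op, offset, data)

fabric = lambda node, op, off, data: f"{op} on {node} at offset {hex(off)} over fabric"
n = ComputeNode("node-A", fabric)
n.request_remote("node-B", 0x1_0000_0000, 1 << 30)     # borrow 1 GiB from node-B
print(n.access("read", 0x1_0000_1000))
print(n.access("write", 0x2000, b"x"))
```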
-
Publication number: 20240179078
Abstract: Embodiments may be generally directed to techniques to cause communication of a registration request between a first end-point and a second end-point of an end-to-end path, the registration request to establish resource load monitoring for one or more resources of the end-to-end path, receive one or more acknowledgements indicating resource loads for each of the one or more resources of the end-to-end path, at least one of the acknowledgements to indicate a resource of the one or more resources is not meeting a threshold requirement for the end-to-end path, and perform an action for communication traffic utilizing the one or more resources based on the acknowledgement.
Type: Application
Filed: December 1, 2023
Publication date: May 30, 2024
Applicant: Intel Corporation
Inventors: Francesc Guim Bernat, Kshitij A. Doshi, Daniel Rivas Barragan, Mark A. Schmisseur, Steen Larsen
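The monitor-acknowledge-act loop can be sketched as below, assuming load is reported as a utilization fraction and that the possible actions are rerouting or throttling traffic that uses the failing resource; the transport, thresholds, and action choices are placeholders.

```python
from dataclasses import dataclass

@dataclass
class Ack:
    resource: str
    load: float             # observed load, e.g. utilization in [0, 1]
    meets_threshold: bool   # whether this resource meets the path's requirement

def register_monitoring(path_resources, send):
    """Send a registration request along the end-to-end path so each resource
    starts reporting its load (send() is a stand-in for the real transport)."""
    for res in path_resources:
        send(res, {"type": "register", "monitor": "load"})

def handle_acks(acks, reroute, throttle):
    """If any acknowledgement flags a resource below the threshold, act on the
    traffic that uses it; otherwise leave the path alone."""
    failing = [a for a in acks if not a.meets_threshold]
    if not failing:
        return "path healthy"
    worst = max(failing, key=lambda a: a.load)
    # One possible action from the abstract: steer or throttle traffic using that resource.
    return reroute(worst.resource) if worst.load > 0.9 else throttle(worst.resource)

register_monitoring(["switch-1", "link-2"], send=lambda res, msg: None)
acks = [Ack("switch-1", 0.4, True), Ack("link-2", 0.95, False)]
print(handle_acks(acks, reroute=lambda r: f"reroute around {r}",
                  throttle=lambda r: f"throttle traffic on {r}"))
```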
-
Patent number: 11954528
Abstract: Technologies for dynamically sharing remote resources include a computing node that sends a resource request for remote resources to a remote computing node in response to a determination that additional resources are required by the computing node. The computing node configures a mapping of a local address space of the computing node to the remote resources of the remote computing node in response to sending the resource request. In response to generating an access to the local address, the computing node identifies the remote computing node based on the local address with the mapping of the local address space to the remote resources of the remote computing node and performs a resource access operation with the remote computing node over a network fabric. The remote computing node may be identified with system address decoders of a caching agent and a host fabric interface. Other embodiments are described and claimed.
Type: Grant
Filed: November 1, 2022
Date of Patent: April 9, 2024
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Kshitij A. Doshi, Daniel Rivas Barragan, Alejandro Duran Gonzalez, Harald Servat
-
Publication number: 20240111879
Abstract: Examples herein relate to an interface selectively providing access to a memory region for a work request from an entity by providing selective access to a physical address of the memory region and selective access to a cryptographic key for use by a memory controller to access the memory region. In some examples, providing selective access to a physical address conversion is based on one or more of: validation of a certificate received with the work request and an identifier of the entity being associated with a process with access to the memory region. Access to the memory region can be specified to be one or more of: create, read, update, delete, write, or notify. A memory region can be a page or sub-page sized region. Different access rights can be associated with different sub-portions of the memory region, wherein the access rights comprise one or more of: create, read, update, delete, write, or notify.
Type: Application
Filed: September 19, 2023
Publication date: April 4, 2024
Inventors: Ned Smith, Kshitij A. Doshi, Francesc Guim Bernat, Kapil Sood, Tarun Viswanathan
-
Publication number: 20240080246
Abstract: Examples include techniques for artificial intelligence (AI) capabilities at a network switch. These examples include receiving a request to register a neural network for loading to an inference resource located at the network switch and loading the neural network based on information included in the request to support an AI service to be provided by users requesting the AI service.
Type: Application
Filed: October 2, 2023
Publication date: March 7, 2024
Inventors: Francesc Guim Bernat, Suraj Prabhakaran, Kshitij A. Doshi, Brinda Ganesh, Timothy Verrall
-
Publication number: 20240045814
Abstract: In an example, there is disclosed a host-fabric interface (HFI), including: an interconnect interface to communicatively couple the HFI to an interconnect; a network interface to communicatively couple the HFI to a network; network interface logic to provide communication between the interconnect and the network; a coprocessor configured to provide an offloaded function for the network; a memory; and a caching agent configured to: designate a region of the memory as a shared memory between the HFI and a core communicatively coupled to the HFI via the interconnect; receive a memory operation directed to the shared memory; and issue a memory instruction to the memory according to the memory operation.
Type: Application
Filed: September 7, 2023
Publication date: February 8, 2024
Applicant: Intel Corporation
Inventors: Francesc Guim Bernat, Daniel Rivas Barragan, Kshitij A. Doshi, Mark A. Schmisseur
-
Patent number: 11870662
Abstract: Embodiments may be generally directed to techniques to cause communication of a registration request between a first end-point and a second end-point of an end-to-end path, the registration request to establish resource load monitoring for one or more resources of the end-to-end path, receive one or more acknowledgements indicating resource loads for each of the one or more resources of the end-to-end path, at least one of the acknowledgements to indicate a resource of the one or more resources is not meeting a threshold requirement for the end-to-end path, and perform an action for communication traffic utilizing the one or more resources based on the acknowledgement.
Type: Grant
Filed: March 9, 2022
Date of Patent: January 9, 2024
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Kshitij A. Doshi, Daniel Rivas Barragan, Mark A. Schmisseur, Steen Larsen
-
Publication number: 20240004709
Abstract: Examples relate to a concept for software application container hardware resource allocation, and in particular to sidecar apparatuses, sidecar devices, methods for software application container sidecars, a resource management controller apparatus, a resource management controller device, and corresponding computer programs and computer systems. A sidecar apparatus comprises interface circuitry, machine-readable instructions, and processing circuitry to execute the machine-readable instructions to obtain information on hardware resources desired by a software application container from the software application container, and to provide a request for changing the hardware resources allocated to the software application container to another entity capable of influencing an allocation of hardware resources to the software application container.
Type: Application
Filed: June 30, 2022
Publication date: January 4, 2024
Inventors: Rajesh Poornachandran, Kshitij A. Doshi, Rita H. Wouhaybi, Francesc Guim Bernat, Marcos Carranza
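A compact model of the sidecar-to-controller interaction described in this abstract, with CPU and memory as the example resources and a budget-based controller standing in for whatever entity actually influences the allocation; all names and the grant policy are assumptions.

```python
from dataclasses import dataclass

@dataclass
class ResourceRequest:
    container: str
    cpu_millicores: int
    memory_mib: int

class Sidecar:
    """Hypothetical sidecar: it learns what the application container wants and
    asks a controller that can actually change the allocation."""

    def __init__(self, container: str, controller):
        self.container = container
        self.controller = controller            # entity able to influence allocation

    def on_desired_resources(self, cpu_millicores: int, memory_mib: int):
        req = ResourceRequest(self.container, cpu_millicores, memory_mib)
        return self.controller.request_change(req)

class ResourceManagementController:
    def __init__(self, cpu_budget: int, mem_budget: int):
        self.cpu_budget, self.mem_budget = cpu_budget, mem_budget

    def request_change(self, req: ResourceRequest):
        # Grant the change only if it fits in the remaining budget.
        if req.cpu_millicores <= self.cpu_budget and req.memory_mib <= self.mem_budget:
            self.cpu_budget -= req.cpu_millicores
            self.mem_budget -= req.memory_mib
            return f"granted {req.cpu_millicores}m CPU / {req.memory_mib} MiB to {req.container}"
        return f"denied request from {req.container}"

controller = ResourceManagementController(cpu_budget=4000, mem_budget=8192)
sidecar = Sidecar("app-container-1", controller)
print(sidecar.on_desired_resources(500, 1024))
```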