Patents by Inventor Kshitij A. Doshi

Kshitij A. Doshi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210021431
    Abstract: Methods, systems and apparatus disclosed herein create an overlay of nodes to permit the nodes to engage in a peer-to-peer resource bidding process. An example apparatus at an edge of a network includes a first configurer to configure a network interface of a first node of the network in a first configuration, the first configuration to permit the first node to participate in a peer-to-peer resource bidding process with a plurality of other nodes of the network. The apparatus further includes a second configurer to configure the network interface of the first node of the network in a second configuration, the second configuration to prevent the first node from participation in the peer-to-peer resource bidding process.
    Type: Application
    Filed: September 25, 2020
    Publication date: January 21, 2021
    Inventors: Francesc Guim Bernat, Ned Smith, Kshitij Doshi, Rajesh Gadiyar
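    Illustrative sketch: a minimal Python model of the two interface configurations described in the abstract above; the NodeInterface class, peer list, and bid format are assumptions made for illustration and are not taken from the application.
      # Hypothetical model: a node's network interface is placed either in a
      # "bidding" configuration (overlay member) or in a "quiet" configuration.
      from dataclasses import dataclass, field

      @dataclass
      class NodeInterface:
          node_id: str
          bidding_enabled: bool = False          # first vs. second configuration
          peers: set = field(default_factory=set)

          def configure_for_bidding(self, overlay_peers):
              """First configuration: join the overlay and accept bid traffic."""
              self.peers = set(overlay_peers)
              self.bidding_enabled = True

          def configure_quiet(self):
              """Second configuration: leave the overlay and drop bid traffic."""
              self.peers.clear()
              self.bidding_enabled = False

          def place_bid(self, resource, price):
              # Bids are only emitted while the first configuration is active.
              if not self.bidding_enabled:
                  return None
              return {"from": self.node_id, "resource": resource, "price": price}

      iface = NodeInterface("edge-node-1")
      iface.configure_for_bidding(["edge-node-2", "edge-node-3"])
      print(iface.place_bid("gpu-slice", price=0.05))
      iface.configure_quiet()
      print(iface.place_bid("gpu-slice", price=0.05))  # None: participation prevented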
  • Publication number: 20210021484
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to schedule workloads based on secure edge-to-device telemetry by calculating a difference between first telemetric data received from a first hardware device and an operating parameter, and computing an adjustment for a second hardware device based on that difference.
    Type: Application
    Filed: September 25, 2020
    Publication date: January 21, 2021
    Inventors: Kapil Sood, Timothy Verrall, Ned M. Smith, Tarun Viswanathan, Kshitij Doshi, Francesc Guim Bernat, John J. Browne, Katalin Bartfai-Walcott, Maryam Tahhan, Eoin Walsh, Damien Power
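    Illustrative sketch: a toy Python version of the control idea in the abstract above, assuming a simple proportional adjustment; the variable names, gain factor, and temperature example are illustrative assumptions, not details from the application.
      # Compare telemetry from a first device against a target operating parameter
      # and derive an adjustment for a second device from the difference.
      def compute_adjustment(telemetry_a, operating_parameter, gain=0.5):
          """Proportional correction: positive when device A runs above target."""
          difference = telemetry_a - operating_parameter
          return gain * difference

      # Example: device A reports 78 C against a 70 C target, so device B's
      # share of the workload is scaled down by the computed adjustment.
      adjustment = compute_adjustment(telemetry_a=78.0, operating_parameter=70.0)
      device_b_share = max(0.0, 1.0 - adjustment / 10.0)
      print(adjustment, device_b_share)   # 4.0 0.6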
  • Publication number: 20210012282
    Abstract: Apparatus, systems, articles of manufacture, and methods are disclosed for generating a data supply chain object. An example non-transitory computer readable storage medium disclosed herein includes data which may be configured into executable instructions and, when configured and executed, cause one or more processors to at least: derive a provenance of a first data supply chain object; identify a first stakeholder from the provenance; determine if the first stakeholder is verified; utilize data associated with the data supply chain when the first stakeholder is verified; build a tag-value structure based on the utilization of the data; build a second data supply chain object based on the tag-value structure and an identity of a second stakeholder; and add the second data supply chain object to the data supply chain.
    Type: Application
    Filed: September 25, 2020
    Publication date: January 14, 2021
    Inventors: Ned M. Smith, Kshitij Doshi, Francesc Guim Bernat
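    Illustrative sketch: a minimal Python chain of tag-value objects following the steps in the abstract above (verify the previous stakeholder, build a tag-value structure, append a new object); the trusted-stakeholder list, hashing scheme, and field names are assumptions for illustration only.
      import hashlib, json

      TRUSTED = {"sensor-vendor", "edge-aggregator"}   # stand-in verification list

      def verify(stakeholder):
          return stakeholder in TRUSTED

      def extend_chain(chain, new_stakeholder, payload):
          """Verify the previous stakeholder, then append a new tag-value object."""
          prev = chain[-1]
          if not verify(prev["stakeholder"]):           # provenance check
              raise ValueError("unverified stakeholder; data not utilized")
          tag_value = {"tags": sorted(payload), "values": payload}
          obj = {
              "stakeholder": new_stakeholder,
              "tag_value": tag_value,
              "parent": hashlib.sha256(json.dumps(prev, sort_keys=True).encode()).hexdigest(),
          }
          chain.append(obj)
          return obj

      chain = [{"stakeholder": "sensor-vendor", "tag_value": {}, "parent": None}]
      extend_chain(chain, "edge-aggregator", {"temp_c": 21.4, "site": "plant-3"})
      print(len(chain), chain[-1]["parent"][:12])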
  • Publication number: 20210014113
    Abstract: Methods, apparatus, systems, and articles of manufacture to orchestrate execution of services in a mesh of orchestrators of an edge cloud computing environment are disclosed. An example orchestrator apparatus includes an interface to receive a service to be executed and to monitor execution of the service. The example apparatus includes an orchestration delegate to select resources and orchestrate deployment of the service to execute using the selected resources. The example apparatus includes a delegated orchestration manager to manage the orchestration delegate based on information from the interface and the orchestration delegate regarding execution of the service.
    Type: Application
    Filed: September 25, 2020
    Publication date: January 14, 2021
    Inventors: Francesc Guim Bernat, Kshitij Doshi, Katalin Bartfai-Walcott
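    Illustrative sketch: a minimal Python rendering of the three roles named in the abstract above (interface, orchestration delegate, delegated orchestration manager); the class interfaces and the CPU-based placement rule are assumptions for illustration, not details from the application.
      class Interface:
          def receive(self, service):        # accept a service to execute
              return service
          def monitor(self, service):        # report a (stubbed) execution status
              return {"service": service, "status": "running"}

      class OrchestrationDelegate:
          def __init__(self, resources):
              self.resources = resources     # node -> free CPUs
          def deploy(self, service, cpus_needed):
              # Select the first node with enough free CPUs and place the service.
              for node, free in self.resources.items():
                  if free >= cpus_needed:
                      self.resources[node] -= cpus_needed
                      return node
              return None

      class DelegatedOrchestrationManager:
          def __init__(self, interface, delegate):
              self.interface, self.delegate = interface, delegate
          def run(self, service, cpus_needed):
              svc = self.interface.receive(service)
              node = self.delegate.deploy(svc, cpus_needed)
              return node, self.interface.monitor(svc)

      mgr = DelegatedOrchestrationManager(Interface(),
                                          OrchestrationDelegate({"n1": 2, "n2": 8}))
      print(mgr.run("video-analytics", cpus_needed=4))   # placed on n2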
  • Publication number: 20210014133
    Abstract: Methods and apparatus to coordinate edge platforms are disclosed. A disclosed example apparatus to control processing of data associated with edges includes an orchestrator analyzer to determine a first performance requirement of a first microservice of an application and a second performance requirement of a second microservice of the application. The apparatus also includes an orchestrator controller to assign the first microservice and the second microservice across first and second edge nodes between a source network and a destination network by: assigning the first microservice to the first edge node based on a first capability of the first edge node satisfying the first performance requirement of the first microservice, and assigning the second microservice to the second edge node based on a second capability of the second edge node satisfying the second performance requirement of the second microservice.
    Type: Application
    Filed: September 25, 2020
    Publication date: January 14, 2021
    Inventors: Christian Maciocco, Kshitij Doshi, Francesc Guim Bernat, Ned M. Smith, Marcin Spoczynski, Timothy Verrall, Rajesh Gadiyar, Trevor Cooper, Valerie Parker
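    Illustrative sketch: a toy Python placement loop matching each microservice's performance requirement against edge-node capabilities, as the abstract above describes; the millicore units and greedy first-fit rule are assumptions for illustration.
      def assign(microservices, edge_nodes):
          placement = {}
          for svc, requirement in microservices.items():
              for node, capability in edge_nodes.items():
                  if capability >= requirement:        # capability satisfies requirement
                      placement[svc] = node
                      edge_nodes[node] -= requirement  # consume capacity
                      break
          return placement

      services = {"decode": 500, "inference": 1500}    # performance requirements
      nodes = {"edge-1": 1000, "edge-2": 2000}         # node capabilities
      print(assign(services, nodes))  # {'decode': 'edge-1', 'inference': 'edge-2'}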
  • Publication number: 20210014301
    Abstract: Methods, apparatus, systems and articles of manufacture to select a location of execution of a computation are disclosed. An example apparatus includes a cache digest interface to identify a node capable of performing a computation. A compute plan solver is to obtain a cost estimate of performing the computation from the node. Privacy weighting circuitry is to apply a privacy weighting value to the cost estimate to determine a weighted cost estimate. The compute plan solver is to select the node for performance of the computation based on the weighted cost estimate. A plan executor is to transmit a request for the selected node to perform the computation.
    Type: Application
    Filed: September 25, 2020
    Publication date: January 14, 2021
    Inventors: Kshitij Doshi, Francesc Guim Bernat, Ned Smith, Timothy Verrall, Uzair Qureshi
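    Illustrative sketch: a toy Python selection step that applies a privacy weighting value to each node's cost estimate and picks the lowest weighted cost, mirroring the abstract above; the concrete costs and weights are invented for illustration.
      def select_node(cost_estimates, privacy_weights):
          weighted = {node: cost * privacy_weights.get(node, 1.0)
                      for node, cost in cost_estimates.items()}
          return min(weighted, key=weighted.get)       # lowest weighted cost wins

      costs   = {"local-edge": 10.0, "remote-cloud": 6.0}
      weights = {"local-edge": 1.0, "remote-cloud": 2.5}  # penalize data leaving the edge
      print(select_node(costs, weights))   # local-edge (10.0 vs. 15.0 weighted)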
  • Publication number: 20210004685
    Abstract: Examples include techniques to manage training or trained models for deep learning applications. Examples include routing commands to configure a training model to be implemented by a training module or to configure a trained model to be implemented by an inference module. The commands are routed via an out-of-band (OOB) link, while training data for the training models or input data for the trained models are routed via in-band links.
    Type: Application
    Filed: September 18, 2020
    Publication date: January 7, 2021
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Suraj Prabhakaran, Kshitij A. Doshi, Da-Ming Chiang
  • Patent number: 10860390
    Abstract: A computing apparatus, including: a hardware computing platform; and logic to operate on the hardware computing platform, configured to: receive a microservice instance registration for a microservice accelerator, wherein the registration includes a microservice that the microservice accelerator is configured to provide, and a microservice connection capability indicating an ability of the microservice instance to communicate directly with other instances of the same or a different microservice; and log the registration in a microservice registration database.
    Type: Grant
    Filed: June 28, 2017
    Date of Patent: December 8, 2020
    Assignee: Intel Corporation
    Inventors: Vadim Sukhomlinov, Kshitij A. Doshi
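    Illustrative sketch: a minimal Python registration database for microservice accelerator instances, recording the provided microservice and the direct-connection capability mentioned in the abstract above; the SQLite schema and field names are assumptions for illustration.
      import sqlite3

      db = sqlite3.connect(":memory:")
      db.execute("""CREATE TABLE registrations (
          instance_id TEXT PRIMARY KEY,
          microservice TEXT NOT NULL,
          direct_connect INTEGER NOT NULL)""")   # connection capability flag

      def register(instance_id, microservice, direct_connect):
          """Log a microservice instance registration in the database."""
          db.execute("INSERT INTO registrations VALUES (?, ?, ?)",
                     (instance_id, microservice, int(direct_connect)))
          db.commit()

      register("acc-0", "tls-termination", direct_connect=True)
      register("acc-1", "compression", direct_connect=False)
      print(db.execute("SELECT * FROM registrations").fetchall())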
  • Publication number: 20200356406
    Abstract: Systems, apparatuses and methods may provide for technology that creates one or more capabilities of a software container prior to issuance of a request to create the container, wherein the one or more capabilities are associated with a computational overhead that exceeds a first threshold and a memory overhead that does not exceed a second threshold, intercepts the request to create the software container after creation of the one or more capabilities, and associates the one or more capabilities with the software container.
    Type: Application
    Filed: July 24, 2020
    Publication date: November 12, 2020
    Inventors: Anup Mohan, Harshad Sane, Saikrishna Edupuganti, Nimisha Raut, Kshitij Doshi, Karan Kamatgi
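    Illustrative sketch: the pre-created capabilities in the abstract above are modeled here as a generic pre-warmed pool built before any create request and attached when a request is intercepted; what a "capability" actually is, and the pool mechanics, are assumptions for illustration.
      import queue, time

      def build_capability():
          time.sleep(0.01)                  # stands in for a CPU-heavy setup step
          return {"built_at": time.time()}

      pool = queue.Queue()
      for _ in range(4):                    # created before any request is issued
          pool.put(build_capability())

      def intercept_create(request):
          """Intercept the container-create request and attach a pre-built capability."""
          request["capability"] = pool.get_nowait() if not pool.empty() else build_capability()
          return request

      print(intercept_create({"image": "nginx:latest"})["capability"] is not None)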
  • Patent number: 10831491
    Abstract: The present disclosure is directed to systems and methods for mitigating or eliminating the effectiveness of a side channel attack, such as a Spectre type attack, by limiting the ability of a user-level branch prediction inquiry to access system-level branch prediction data. The branch prediction data stored in the BTB may be apportioned into a plurality of BTB data portions. BTB control circuitry identifies the initiator of a received branch prediction inquiry. Based on the identity of the branch prediction inquiry initiator, the BTB control circuitry causes BTB look-up circuitry to selectively search one or more of the plurality of BTB data portions.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: November 10, 2020
    Assignee: Intel Corporation
    Inventors: Vadim Sukhomlinov, Kshitij Doshi
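    Illustrative sketch: a behavioral Python model of a partitioned BTB in which a user-level inquiry may only search the user partition, per the abstract above; the two-partition layout and the privilege rule are simplifying assumptions, not the patent's exact scheme.
      class PartitionedBTB:
          def __init__(self):
              self.partitions = {"user": {}, "kernel": {}}

          def install(self, privilege, branch_pc, target):
              self.partitions[privilege][branch_pc] = target

          def predict(self, initiator_privilege, branch_pc):
              """Search only the BTB portions the inquiry initiator may see."""
              allowed = ["user"] if initiator_privilege == "user" else ["user", "kernel"]
              for name in allowed:
                  if branch_pc in self.partitions[name]:
                      return self.partitions[name][branch_pc]
              return None   # no prediction rather than leaking the other portion

      btb = PartitionedBTB()
      btb.install("kernel", branch_pc=0xffff80, target=0xffff90)
      print(btb.predict("user", 0xffff80))     # None: user inquiry cannot see kernel data
      print(btb.predict("kernel", 0xffff80))   # 16777104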
  • Publication number: 20200348936
    Abstract: A computing system includes a memory controller having a plurality of bypass parameters set by a software program, a thresholds matrix to store threshold values selectable by the plurality of bypass parameters, and a bypass function to determine whether a first cache line is to be displaced with a second cache line in a first memory or the first cache line remains in the first memory and the second cache line is to be accessed by at least one of a processor core and the cache from a second memory.
    Type: Application
    Filed: July 13, 2020
    Publication date: November 5, 2020
    Inventors: Harshad S. Sane, Anup Mohan, Kshitij A. Doshi, Mark A. Schmisseur
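    Illustrative sketch: a toy Python bypass decision in which software-set parameters index a thresholds matrix and a line whose reuse score falls below the selected threshold bypasses the first memory, as in the abstract above; the matrix values and the reuse-score metric are assumptions for illustration.
      thresholds = [[2, 4],     # rows/columns selected by two bypass parameters
                    [6, 8]]

      def should_bypass(reuse_score, param_row, param_col):
          """True: keep the resident line in the first memory and serve the new
          line from the second memory instead of displacing it."""
          return reuse_score < thresholds[param_row][param_col]

      print(should_bypass(reuse_score=3, param_row=0, param_col=0))  # False: displace
      print(should_bypass(reuse_score=3, param_row=1, param_col=1))  # True: bypass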
  • Publication number: 20200334157
    Abstract: Embodiments of the present disclosure relate to a controller that includes a monitor to determine an access pattern for a range of memory of a first computer memory device, and a pre-loader to pre-load a second computer memory device with a copy of a subset of the range of memory based at least in part on the access pattern, wherein the subset includes a plurality of cache lines. In some embodiments, the controller includes a specifier and the monitor determines the access pattern based at least in part on one or more configuration elements in the specifier. Other embodiments may be described and/or claimed.
    Type: Application
    Filed: July 2, 2020
    Publication date: October 22, 2020
    Inventors: Francesc Guim Bernat, Kshitij Doshi
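    Illustrative sketch: a toy Python monitor/pre-loader pair that builds a per-cache-line access histogram for a monitored range and copies the hottest lines into a faster tier, following the abstract above; the 64-byte line size and "top N" policy are assumptions for illustration.
      from collections import Counter

      LINE = 64
      access_counts = Counter()

      def record_access(addr):
          access_counts[addr // LINE] += 1          # monitor: per-line histogram

      def preload(slow_mem, fast_cache, top_n=2):
          """Pre-load the hottest lines (the 'subset of the range') into fast_cache."""
          for line, _ in access_counts.most_common(top_n):
              start = line * LINE
              fast_cache[line] = slow_mem[start:start + LINE]

      slow = bytes(range(256))                      # 4 lines of monitored memory
      for addr in (0, 5, 70, 70, 70, 200):
          record_access(addr)
      fast = {}
      preload(slow, fast)
      print(sorted(fast))                           # [0, 1]: lines 0 and 1 were hottest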
  • Publication number: 20200320003
    Abstract: The present disclosure is directed to systems and methods that include cache operation storage circuitry that selectively enables/disables the Cache Line Flush (CLFLUSH) operation. The cache operation storage circuitry may also selectively replace the CLFLUSH operation with one or more replacement operations that provide similar functionality but beneficially and advantageously prevent an attacker from placing processor cache circuitry in a known state during a timing-based, side channel attack such as Spectre or Meltdown. The cache operation storage circuitry includes model specific registers (MSRs) that contain information used to determine whether to enable/disable CLFLUSH functionality. The cache operation storage circuitry may include model specific registers (MSRs) that contain information used to select appropriate replacement operations such as Cache Line Demote (CLDEMOTE) and/or Cache Line Write Back (CLWB) to selectively replace CLFLUSH operations.
    Type: Application
    Filed: June 22, 2020
    Publication date: October 8, 2020
    Applicant: Intel Corporation
    Inventors: Vadim Sukhomlinov, Kshitij Doshi
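    Illustrative sketch: a behavioral Python model of the policy described in the abstract above, in which a control value decides whether a requested CLFLUSH is executed, suppressed, or rewritten as CLDEMOTE or CLWB; the encoding of the policy values is an assumption for illustration.
      POLICY_ALLOW, POLICY_DISABLE, POLICY_CLDEMOTE, POLICY_CLWB = range(4)

      def rewrite_cache_op(requested_op, policy):
          if requested_op != "CLFLUSH":
              return requested_op                   # other operations pass through
          return {POLICY_ALLOW:    "CLFLUSH",
                  POLICY_DISABLE:  None,            # flush silently suppressed
                  POLICY_CLDEMOTE: "CLDEMOTE",      # demote the line instead of evicting it
                  POLICY_CLWB:     "CLWB"}[policy]  # write back but keep the line cached

      for p in (POLICY_ALLOW, POLICY_DISABLE, POLICY_CLDEMOTE, POLICY_CLWB):
          print(p, rewrite_cache_op("CLFLUSH", p))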
  • Patent number: 10795585
    Abstract: An embodiment of a semiconductor apparatus may include technology to determine if a memory operation on a memory is avoidable, and suppress the memory operation if the memory operation is determined to be avoidable. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: June 22, 2018
    Date of Patent: October 6, 2020
    Assignee: Intel Corporation
    Inventors: Kshitij Doshi, Bhanu Shankar
  • Patent number: 10798157
    Abstract: Technologies for function as a service (FaaS) arbitration include an edge gateway, multiple endpoint devices, and multiple service providers. The edge gateway receives a registration request from a service provider that is indicative of an FaaS function identifier and a transform function. The edge gateway verifies an attestation received from the service provider and registers the service provider. The edge gateway receives a function execution request from an endpoint device that is indicative of the FaaS function identifier. The edge gateway selects the service provider based on the FaaS function identifier, programs an accelerator with the transform function, executes the transform function with the accelerator to transform the function execution request to a provider request, and submits the provider request to the service provider. The service provider may be selected based on an expected service level included in the function execution request. Other embodiments are described and claimed.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: October 6, 2020
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Ned Smith, Kshitij Doshi, Alexander Bachmutsky, Suraj Prabhakaran
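    Illustrative sketch: a toy Python gateway that registers a provider per FaaS function identifier after a (stubbed) attestation check, then transforms an incoming function execution request and forwards it, as in the abstract above; the attestation string, transform lambda, and dictionary registry are assumptions for illustration.
      providers = {}   # function_id -> (provider_name, transform)

      def register(provider, function_id, transform, attestation):
          if attestation != "trusted-quote":        # stand-in for real verification
              raise PermissionError("attestation failed")
          providers[function_id] = (provider, transform)

      def invoke(function_id, request):
          provider, transform = providers[function_id]   # select by FaaS function id
          provider_request = transform(request)          # would run on an accelerator
          return {"submitted_to": provider, "payload": provider_request}

      register("provider-a", "resize-image",
               transform=lambda req: {"op": "resize", "args": req},
               attestation="trusted-quote")
      print(invoke("resize-image", {"width": 128, "height": 128}))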
  • Publication number: 20200310957
    Abstract: A processor including a processing core to execute an instruction prior to executing a memory allocation call; one or more last branch record (LBR) registers to store one or more recently retired branch instructions; a performance monitoring unit (PMU) comprising a logic circuit to: retrieve the one or more recently retired branch instructions from the one or more LBR registers; identify, based on the retired branch instructions, a signature of the memory allocation call; provide the signature to software to determine a memory tier to allocate memory for the memory allocation call.
    Type: Application
    Filed: March 29, 2019
    Publication date: October 1, 2020
    Applicant: Intel Corporation
    Inventors: Harshad Sane, Kshitij Doshi
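    Illustrative sketch: a toy Python version of the signature step in the abstract above, hashing recently retired (from, to) branch records into a call-site signature that a software table maps to a memory tier; the hash choice, record format, and tier names are assumptions for illustration.
      import hashlib

      def signature(lbr_entries):
          """Collapse (from_addr, to_addr) branch records into a stable signature."""
          h = hashlib.sha1()
          for frm, to in lbr_entries:
              h.update(frm.to_bytes(8, "little") + to.to_bytes(8, "little"))
          return h.hexdigest()[:16]

      tier_table = {}                                    # signature -> "dram" / "pmem"
      lbr = [(0x401a10, 0x4020f0), (0x4020f8, 0x401b30)] # stand-in LBR contents
      sig = signature(lbr)
      tier_table[sig] = "pmem"                           # policy set by profiling software
      print(sig, tier_table.get(sig, "dram"))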
  • Publication number: 20200302301
    Abstract: Logic may determine a specific performance of a neural network based on an event and may present the specific performance to provide a user with an explanation of the inference by a machine learning model such as a neural network. Logic may determine a first activation profile associated with the event, the first activation profile based on activation of nodes in one or more layers of the neural network during inference to generate an output. Logic may correlate the first activation profile against a second activation profile associated with a first training sample of training data. Logic may determine that the first training sample is associated with the event based on the correlation. Logic may output an indicator to identify the first training sample as being associated with the event.
    Type: Application
    Filed: June 5, 2020
    Publication date: September 24, 2020
    Applicant: Intel Corporation
    Inventors: Glen J. Anderson, Rajesh Poornachandran, Ignacio Alvarez, Giuseppe Raffa, Jill Boyce, Ankur Agrawal, Kshitij Doshi
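    Illustrative sketch: a toy Python correlation step that compares the activation profile observed at inference time against stored per-training-sample profiles and reports the best match, following the abstract above; cosine similarity and the tiny three-node profiles are assumptions for illustration.
      import numpy as np

      def best_matching_sample(event_profile, training_profiles):
          """Return the training sample whose activation profile correlates most
          strongly (cosine similarity) with the event's activation profile."""
          e = np.asarray(event_profile, dtype=float)
          scores = {}
          for sample_id, prof in training_profiles.items():
              p = np.asarray(prof, dtype=float)
              scores[sample_id] = float(e @ p / (np.linalg.norm(e) * np.linalg.norm(p)))
          return max(scores, key=scores.get), scores

      training = {"img_0042": [0.9, 0.1, 0.0], "img_0107": [0.1, 0.8, 0.3]}
      print(best_matching_sample([0.2, 0.7, 0.4], training)[0])   # img_0107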
  • Publication number: 20200304425
    Abstract: Technologies for performing switch-based collective operations in a fabric architecture include a network switch communicatively coupled to a plurality of computing nodes. The network switch is configured to identify sub-operations of a collective operation of a collective operation request received from one of the computing nodes and identify a plurality of operands for each of the sub-operations. The network switch is additionally configured to request a value for each of the operands from a corresponding target computing node at which the respective value is stored, determine a result of the collective operation as a function of the requested operand values, and transmit the result to the requesting computing node. Other embodiments are described herein.
    Type: Application
    Filed: June 8, 2020
    Publication date: September 24, 2020
    Inventors: Francesc Guim Bernat, Kshitij A. Doshi, Daniel Rivas Barragan, Alejandro Duran Gonzalez
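    Illustrative sketch: a toy Python reduction in which the "switch" splits a collective sum into per-node operand fetches, gathers each operand from the node that stores it, and combines the results, as in the abstract above; the in-memory node_store and the sum operation are assumptions for illustration.
      node_store = {"n1": {"x": 3}, "n2": {"x": 5}, "n3": {"x": 7}}   # stand-in fabric nodes

      def fetch_operand(node, name):
          return node_store[node][name]        # models a request over the fabric

      def collective_sum(operand_locations):
          """operand_locations: list of (node, operand_name) sub-operations."""
          partials = [fetch_operand(node, name) for node, name in operand_locations]
          return sum(partials)                 # the switch combines and returns the result

      print(collective_sum([("n1", "x"), ("n2", "x"), ("n3", "x")]))   # 15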
  • Patent number: 10785295
    Abstract: Fabric encapsulated resilient storage is hardware-assisted resilient storage in which the reliability capabilities of a storage server are abstracted and managed transparently by a host fabric interface (HFI) to a switch. The switch abstracts the reliability capabilities of a storage server into a level of resilience in a hierarchy of levels of resilience. The resilience levels are accessible by clients as a quantifiable characteristic of the storage server. The resilience levels are used by the switch fabric to filter which storage servers store objects responsive to client requests to store objects at a specified level of resilience.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: September 22, 2020
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Daniel Rivas Barragan, Kshitij A. Doshi, Mark A. Schmisseur, Steen Larsen
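    Illustrative sketch: a toy Python filter over storage servers that advertise a resilience level, keeping only those at or above the level a client requested, per the abstract above; the integer levels and server names are assumptions for illustration.
      servers = {"store-a": 1, "store-b": 3, "store-c": 2}   # server -> resilience level

      def eligible_servers(requested_level):
          """Servers able to hold an object at the requested level of resilience."""
          return [s for s, level in servers.items() if level >= requested_level]

      print(eligible_servers(requested_level=2))   # ['store-b', 'store-c']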
  • Patent number: 10782969
    Abstract: A processor of an aspect includes a plurality of packed data registers, and a decode unit to decode a vector cache line write back instruction. The vector cache line write back instruction is to indicate a source packed memory indices operand that is to include a plurality of memory indices. The processor also includes a cache coherency system coupled with the packed data registers and the decode unit. The cache coherency system, in response to the vector cache line write back instruction, is to cause any dirty cache lines, in any caches in a coherency domain, which are to have stored therein data for any of a plurality of memory addresses indicated by any of the memory indices of the source packed memory indices operand, to be written back toward one or more memories. Other processors, methods, and systems are also disclosed.
    Type: Grant
    Filed: May 14, 2018
    Date of Patent: September 22, 2020
    Assignee: Intel Corporation
    Inventors: Kshitij A. Doshi, Thomas Willhalm
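    Illustrative sketch: a software-level Python model of what the instruction in the abstract above asks hardware to do, writing back any dirty cached copy of each line addressed by the source memory indices; the dictionary-based cache/memory model and 64-byte lines are assumptions for illustration.
      CACHE_LINE = 64

      def vector_clwb(base, indices, dirty_lines, memory):
          """dirty_lines maps line address -> data; memory is the backing store."""
          for idx in indices:
              line_addr = (base + idx) & ~(CACHE_LINE - 1)   # align to the cache line
              if line_addr in dirty_lines:
                  memory[line_addr] = dirty_lines[line_addr]  # write the dirty line back
                  # (lines may remain cached in a clean state; not modeled here)

      mem, dirty = {}, {0x1000: b"A" * 64, 0x1040: b"B" * 64}
      vector_clwb(base=0x1000, indices=[0, 64, 128], dirty_lines=dirty, memory=mem)
      print(sorted(hex(a) for a in mem))   # ['0x1000', '0x1040']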