Patents by Inventor SURAJ PRABHAKARAN

SURAJ PRABHAKARAN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11082525
    Abstract: Technologies for managing telemetry and sensor data on an edge networking platform are disclosed. According to one embodiment disclosed herein, a device monitors telemetry data associated with multiple services provided in the edge networking platform. The device identifies, for each of the services and as a function of the associated telemetry data, one or more service telemetry patterns. The device generates a profile including the identified service telemetry patterns.
    Type: Grant
    Filed: May 17, 2019
    Date of Patent: August 3, 2021
    Assignee: Intel Corporation
    Inventors: Ramanathan Sethuraman, Timothy Verrall, Ned M. Smith, Thomas Willhalm, Brinda Ganesh, Francesc Guim Bernat, Karthik Kumar, Evan Custodio, Suraj Prabhakaran, Ignacio Astilleros Diez, Nilesh K. Jain, Ravi Iyer, Andrew J. Herdrich, Alexander Vul, Patrick G. Kutch, Kevin Bohan, Trevor Cooper
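    A minimal Python sketch of the pattern-profiling idea in the abstract above: group telemetry by service, summarize each group, and collect the summaries into a profile. The TelemetryProfile container, the mean/min/max summary, and the sample data are illustrative assumptions, not the patented method.
    ```python
    from collections import defaultdict
    from statistics import mean

    class TelemetryProfile:
        """Holds one derived pattern per service (illustrative)."""
        def __init__(self):
            self.patterns = {}

        def add_pattern(self, service, pattern):
            self.patterns[service] = pattern

    def derive_patterns(samples):
        """samples: iterable of (service, metric_value) telemetry readings.

        Groups readings by service and summarizes each group into a simple
        pattern (mean, min, max) -- a stand-in for real pattern mining.
        """
        by_service = defaultdict(list)
        for service, value in samples:
            by_service[service].append(value)

        profile = TelemetryProfile()
        for service, values in by_service.items():
            profile.add_pattern(service, {
                "mean": mean(values),
                "min": min(values),
                "max": max(values),
            })
        return profile

    profile = derive_patterns([("video-analytics", 0.71), ("video-analytics", 0.64),
                               ("cdn-cache", 0.22)])
    print(profile.patterns)
    ```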
  • Publication number: 20210209469
    Abstract: Examples include techniques to manage training or trained models for deep learning applications. Examples include routing commands to configure a training model to be implemented by a training module or configure a trained model to be implemented by an inference module. The commands are routed via an out-of-band (OOB) link, while training data for the training models or input data for the trained models are routed via in-band links.
    Type: Application
    Filed: March 22, 2021
    Publication date: July 8, 2021
    Applicant: Intel Corporation
    Inventors: Francesc GUIM BERNAT, Suraj PRABHAKARAN, Kshitij A. DOSHI, Da-Ming CHIANG
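    A toy illustration of the split described above, with configuration commands sent over one (out-of-band) link object and model data over another (in-band) one. The ModelManager and FakeLink classes and the dictionary-shaped commands are assumptions for the sketch, not part of the filing.
    ```python
    class ModelManager:
        """Routes configuration commands and data over separate channels (illustrative)."""
        def __init__(self, oob_link, inband_link):
            self.oob_link = oob_link        # management channel for configuration commands
            self.inband_link = inband_link  # data-plane channel for training/inference data

        def configure(self, command):
            # e.g. {"target": "training", "model": "resnet50", "hyperparams": {...}}
            self.oob_link.send(command)

        def feed(self, tensor_batch):
            self.inband_link.send(tensor_batch)

    class FakeLink:
        def __init__(self, name): self.name = name
        def send(self, msg): print(f"[{self.name}] {msg}")

    mgr = ModelManager(FakeLink("OOB"), FakeLink("in-band"))
    mgr.configure({"target": "inference", "model": "resnet50-v2"})
    mgr.feed([0.1, 0.2, 0.3])
    ```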
  • Patent number: 11044099
    Abstract: Technologies for providing certified telemetry data indicative of resource utilizations include a device with circuitry configured to obtain telemetry data indicative of a utilization of one or more device resources over a time period. The circuitry is additionally configured to sign the obtained telemetry data with a private key associated with the present device. Further, the circuitry is configured to send the signed telemetry data to a telemetry service for analysis.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: June 22, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Johan Van De Groenendaal, Kshitij A. Doshi, Susanne M. Balle, Suraj Prabhakaran
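    A rough sketch of the signed-telemetry flow: the device signs a utilization payload with its private key and the telemetry service verifies it before analysis. Ed25519 from the third-party `cryptography` package is an assumed stand-in for whatever key scheme the patent contemplates, and the key would be provisioned rather than generated per run.
    ```python
    import json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Device-side: the key would normally be provisioned, not generated per run.
    device_key = Ed25519PrivateKey.generate()

    def sign_telemetry(utilization_samples):
        payload = json.dumps(utilization_samples, sort_keys=True).encode()
        signature = device_key.sign(payload)
        return payload, signature

    # Telemetry-service side: verify before analysis.
    payload, signature = sign_telemetry({"cpu": 0.83, "mem": 0.41, "window_s": 60})
    device_key.public_key().verify(signature, payload)  # raises InvalidSignature if tampered
    print("telemetry accepted for analysis")
    ```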
  • Patent number: 11025411
    Abstract: Technologies for providing streamlined provisioning of accelerated functions in a disaggregated architecture include a compute sled. The compute sled includes a network interface controller and circuitry to determine whether to accelerate a function of a workload executed by the compute sled, and send, to a memory sled and in response to a determination to accelerate the function, a data set on which the function is to operate. The circuitry is also to receive, from the memory sled, a service identifier indicative of a memory location independent handle for data associated with the function, send, to a compute device, a request to schedule acceleration of the function on the data set, receive a notification of completion of the acceleration of the function, and obtain, in response to receipt of the notification and using the service identifier, a resultant data set from the memory sled. The resultant data set was produced by an accelerator device during acceleration of the function on the data set.
    Type: Grant
    Filed: March 6, 2018
    Date of Patent: June 1, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Suraj Prabhakaran, Kshitij Doshi, Timothy Verrall
  • Publication number: 20210109785
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to batch functions. An example apparatus includes a function evaluator to, in response to receiving a function request associated with a function and an input, flag the function for batching, a timing handler to determine a waiting threshold associated with the function, a queue handler to store the function, the input, and the waiting threshold in a queue, and a client interface to, in response to a time duration for which the function is stored in the queue satisfying the waiting threshold, send the function and the input to a client device to increase throughput to the client device.
    Type: Application
    Filed: December 23, 2020
    Publication date: April 15, 2021
    Inventors: Suraj Prabhakaran, Francesc Guim Bernat
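    A compact sketch of the batching queue described above: flagged requests accumulate per function and are dispatched once the oldest entry has waited past that function's threshold. The threshold table, the polling model, and the dispatch callback are assumptions for illustration.
    ```python
    import time
    from collections import defaultdict

    WAIT_THRESHOLDS = {"resize_image": 0.5, "score_fraud": 0.1}  # seconds, illustrative

    class BatchingQueue:
        def __init__(self, dispatch):
            self.pending = defaultdict(list)   # function -> [(payload, enqueue_time), ...]
            self.dispatch = dispatch           # sends (function, [payloads]) to the client device

        def submit(self, function, payload):
            self.pending[function].append((payload, time.monotonic()))

        def poll(self):
            """Flush a function's batch once its oldest entry has waited past the threshold."""
            now = time.monotonic()
            for function in list(self.pending):
                entries = self.pending[function]
                threshold = WAIT_THRESHOLDS.get(function, 0.0)
                if entries and now - entries[0][1] >= threshold:
                    self.dispatch(function, [payload for payload, _ in entries])
                    del self.pending[function]

    q = BatchingQueue(lambda fn, batch: print("dispatch", fn, batch))
    q.submit("score_fraud", {"txn": 42})
    q.submit("score_fraud", {"txn": 43})
    time.sleep(0.15)
    q.poll()
    ```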
  • Publication number: 20210103544
    Abstract: There is disclosed an example of an artificial intelligence (AI) system, including: a first hardware platform; a fabric interface configured to communicatively couple the first hardware platform to a second hardware platform; a processor hosted on the first hardware platform and programmed to operate on an AI problem; and a first training accelerator, including: an accelerator hardware; a platform inter-chip link (ICL) configured to communicatively couple the first training accelerator to a second training accelerator on the first hardware platform without aid of the processor; a fabric ICL to communicatively couple the first training accelerator to a third training accelerator on a second hardware platform without aid of the processor; and a system decoder configured to operate the fabric ICL and platform ICL to share data of the accelerator hardware between the first training accelerator and second and third training accelerators without aid of the processor.
    Type: Application
    Filed: December 17, 2020
    Publication date: April 8, 2021
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Da-Ming Chiang, Kshitij A. Doshi, Suraj Prabhakaran, Mark A. Schmisseur
  • Patent number: 10970216
    Abstract: An embodiment of a semiconductor package apparatus may include technology to create a tracking structure for a memory controller to track a range of memory addresses of a persistent memory, identify a write request at the memory controller for a memory location within the range of tracked memory addresses, and set a flag in the tracking structure to indicate that the memory location had the identified write request. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: December 27, 2017
    Date of Patent: April 6, 2021
    Assignee: Intel Corporation
    Inventors: Kshitij A. Doshi, Francesc Guim Bernat, Daniel Rivas Barragan, Suraj Prabhakaran
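    A software model of the tracking structure: writes that fall inside the watched persistent-memory range set a dirty flag at an assumed page granularity. The set-backed flag table and the 4 KiB granularity are illustrative choices, not the hardware design.
    ```python
    PAGE_SIZE = 4096  # assumed tracking granularity

    class WriteTracker:
        """Software model of a per-range dirty-flag structure."""
        def __init__(self, start, length):
            self.start, self.end = start, start + length
            self.dirty = set()  # page indices with at least one tracked write

        def on_write(self, address):
            """Called by the (modeled) memory controller on each write request."""
            if self.start <= address < self.end:
                self.dirty.add((address - self.start) // PAGE_SIZE)
                return True
            return False

        def dirty_pages(self):
            return sorted(self.dirty)

    tracker = WriteTracker(start=0x1000_0000, length=1 << 20)
    tracker.on_write(0x1000_0040)
    tracker.on_write(0x1000_2000)
    print(tracker.dirty_pages())  # [0, 2]
    ```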
  • Publication number: 20210099516
    Abstract: Technologies for function as a service (FaaS) arbitration include an edge gateway, multiple endpoint devices, and multiple service providers. The edge gateway receives a registration request from a service provider that is indicative of an FaaS function identifier and a transform function. The edge gateway verifies an attestation received from the service provider and registers the service provider. The edge gateway receives a function execution request from an endpoint device that is indicative of the FaaS function identifier. The edge gateway selects the service provider based on the FaaS function identifier, programs an accelerator with the transform function, executes the transform function with the accelerator to transform the function execution request to a provider request, and submits the provider request to the service provider. The service provider may be selected based on an expected service level included in the function execution request. Other embodiments are described and claimed.
    Type: Application
    Filed: August 10, 2020
    Publication date: April 1, 2021
    Inventors: Francesc Guim Bernat, Ned Smith, Kshitij Doshi, Alexander Bachmutsky, Suraj Prabhakaran
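    A sketch of the registration / selection / transform flow above. The provider registry, the stubbed attestation check, and running the transform as a plain Python callable instead of on a programmed accelerator are all simplifications.
    ```python
    class EdgeGateway:
        def __init__(self):
            self.providers = {}  # faas_function_id -> (provider, transform)

        def register(self, provider, faas_function_id, transform, attestation):
            if not self._verify_attestation(attestation):   # stub check
                raise PermissionError("attestation rejected")
            self.providers[faas_function_id] = (provider, transform)

        def execute(self, request):
            provider, transform = self.providers[request["faas_function_id"]]
            provider_request = transform(request)   # would run on a programmed accelerator
            return provider.submit(provider_request)

        @staticmethod
        def _verify_attestation(attestation):
            return attestation == "trusted"          # placeholder policy

    class Provider:
        def submit(self, provider_request):
            return {"status": "ok", "handled": provider_request}

    gw = EdgeGateway()
    gw.register(Provider(), "img.classify",
                transform=lambda req: {"op": "classify", "blob": req["payload"]},
                attestation="trusted")
    print(gw.execute({"faas_function_id": "img.classify", "payload": b"\x89PNG..."}))
    ```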
  • Publication number: 20210099362
    Abstract: Various systems and methods for implementing a service-level agreement (SLA) apparatus receive a request from a requester via a network interface of the gateway, the request comprising an inference model identifier that identifies a handler of the request, and a response time indicator. The response time indicator relates to a time within which the request is to be handled, or indicates an undefined time within which the request is to be handled. The apparatus determines a network location of a handler that is a platform or an inference model to handle the request consistent with the response time indicator, and routes the request to the handler at the network location.
    Type: Application
    Filed: October 8, 2020
    Publication date: April 1, 2021
    Inventors: Francesc Guim Bernat, Kshitij Arun Doshi, Suraj Prabhakaran, Raghu Kondapalli, Alexander Bachmutsky
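    A minimal sketch of routing by inference-model identifier and response-time indicator. The handler table, the latency figures, and the policy of sending deadline-free requests to the slowest handler are assumptions for illustration.
    ```python
    # handler table: model id -> list of (network_location, typical_latency_ms)
    HANDLERS = {
        "resnet50": [("edge-node-a", 12), ("regional-dc", 40), ("cloud", 120)],
    }

    def route(request):
        model_id = request["inference_model_id"]
        deadline_ms = request.get("response_time_ms")     # None means undefined
        candidates = HANDLERS[model_id]
        if deadline_ms is None:
            # no deadline: assumed policy is to pick the slowest (cheapest) handler
            return max(candidates, key=lambda c: c[1])[0]
        feasible = [c for c in candidates if c[1] <= deadline_ms]
        if not feasible:
            raise RuntimeError("no handler can meet the requested response time")
        return min(feasible, key=lambda c: c[1])[0]       # fastest feasible handler

    print(route({"inference_model_id": "resnet50", "response_time_ms": 50}))  # edge-node-a
    print(route({"inference_model_id": "resnet50"}))                          # cloud
    ```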
  • Publication number: 20210100070
    Abstract: Various systems and methods for enhancing a distributed computing environment with multiple edge hosts and user devices, including in multi-access edge computing (MEC) network platforms and settings, are described herein. A device of a lifecycle management (LCM) proxy apparatus obtains a request, from a device application, for an application multiple context of an application. The application multiple context for the application is determined. The request from the device application for the application multiple context for the application is authorized. A device application identifier based on the request is added to the application multiple context. A created response for the device application based on the authorization of the request is transmitted to the device application. The response includes an identifier of the application multiple context.
    Type: Application
    Filed: August 17, 2020
    Publication date: April 1, 2021
    Inventors: Dario Sabella, Ned M. Smith, Neal Conrad Oliver, Kshitij Arun Doshi, Suraj Prabhakaran, Francesc Guim Bernat, Miltiadis Filippou
  • Patent number: 10936039
    Abstract: In one embodiment, an apparatus of an edge computing system includes memory that includes instructions and processing circuitry coupled to the memory. The processing circuitry implements the instructions to process a request to execute at least a portion of a workflow on pooled computing resources, the workflow being associated with a particular tenant, determine an amount of power to be allocated to particular resources of the pooled computing resources for execution of the portion of the workflow based on a power budget associated with the tenant and a current power cost, and control allocation of the determined amount of power to the particular resources of the pooled computing resources during execution of the portion of the workflow.
    Type: Grant
    Filed: June 19, 2018
    Date of Patent: March 2, 2021
    Assignee: INTEL CORPORATION
    Inventors: Francesc Guim Bernat, Suraj Prabhakaran, Timothy Verrall, Karthik Kumar, Mark A. Schmisseur
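    A small sketch of the power-allocation decision: the tenant's budget is converted to watts through the current power cost and spread proportionally across the resources executing the workflow portion. The proportional split and the numbers are illustrative, not the claimed policy.
    ```python
    def allocate_power(tenant_budget, current_power_cost, resource_demands):
        """tenant_budget: currency units available for this workflow portion.
        current_power_cost: currency units per watt.
        resource_demands: dict of resource -> requested watts.
        Returns watts granted per resource, scaled down if the budget is short."""
        affordable_watts = tenant_budget / current_power_cost
        requested = sum(resource_demands.values())
        scale = min(1.0, affordable_watts / requested)
        return {res: watts * scale for res, watts in resource_demands.items()}

    grants = allocate_power(tenant_budget=30.0, current_power_cost=0.12,
                            resource_demands={"cpu-pool": 180, "fpga-pool": 120})
    print(grants)  # 250 W affordable: {'cpu-pool': 150.0, 'fpga-pool': 100.0}
    ```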
  • Patent number: 10917461
    Abstract: Technologies for matching security requirements for a function-as-a-service (FaaS) function request to an edge resource having security features matching the security requirements are disclosed. According to one embodiment of the present disclosure, an edge gateway device receives, from an edge device, a request to execute an accelerated function. The edge gateway device selects, as a function of one or more security requirements requested by the edge device, an edge resource to fulfill the request. The edge gateway device transmits the request to the edge resource to fulfill the request of the edge device, according to the one or more security requirements.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: February 9, 2021
    Assignee: INTEL CORPORATION
    Inventors: Kshitij Doshi, Francesc Guim Bernat, Suraj Prabhakaran, Ned M. Smith
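    A sketch of matching a request's security requirements to an edge resource that advertises those features. The feature names and the set-containment matching rule are assumptions.
    ```python
    EDGE_RESOURCES = {
        "fpga-enclave-1": {"secure-boot", "memory-encryption", "attestation"},
        "gpu-node-2": {"secure-boot"},
    }

    def select_resource(security_requirements):
        """Pick an edge resource whose advertised features cover every requirement."""
        for name, features in EDGE_RESOURCES.items():
            if security_requirements <= features:
                return name
        raise LookupError("no edge resource satisfies the requested security requirements")

    request = {"function": "fft-1024", "security": {"secure-boot", "attestation"}}
    target = select_resource(request["security"])
    print("forwarding request to", target)   # fpga-enclave-1
    ```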
  • Publication number: 20210034130
    Abstract: Examples described herein relate to management of battery-use by one or more computing resources in the event of a power outage. Data used by one or more computing resources can be backed-up using battery power. Battery power is allocated to data back-up operations based at least on one or more of: criticality level of data, priority of an application that processes the data, or priority level of resource. The computing resource can back-up data to a persistent storage media. The computing resource can store a log of data that is backed-up or not backed-up. The log can be used by the computing resource to access the backed-up data for continuing to process the data and to determine what data is not available for processing.
    Type: Application
    Filed: July 29, 2019
    Publication date: February 4, 2021
    Inventors: Francesc GUIM BERNAT, Suraj PRABHAKARAN, Karthik KUMAR, Uzair QURESHI, Timothy VERRALL
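    A sketch of rationing backup work against a battery budget: datasets are flushed in order of criticality until the remaining energy runs out, and a log records what was and was not backed up. The energy-per-gigabyte figure and the criticality scale are invented for illustration.
    ```python
    ENERGY_PER_GB_WH = 0.5  # assumed cost of flushing 1 GB to persistent storage

    def backup_on_outage(datasets, battery_wh):
        """datasets: list of dicts with 'name', 'size_gb', 'criticality' (higher = more critical).
        Returns a log of backed-up and skipped datasets."""
        log = {"backed_up": [], "skipped": []}
        for ds in sorted(datasets, key=lambda d: d["criticality"], reverse=True):
            cost = ds["size_gb"] * ENERGY_PER_GB_WH
            if cost <= battery_wh:
                battery_wh -= cost
                log["backed_up"].append(ds["name"])
            else:
                log["skipped"].append(ds["name"])
        return log

    print(backup_on_outage(
        [{"name": "txn-journal", "size_gb": 4, "criticality": 3},
         {"name": "ml-cache", "size_gb": 40, "criticality": 1},
         {"name": "session-state", "size_gb": 10, "criticality": 2}],
        battery_wh=8.0))
    ```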
  • Publication number: 20210021594
    Abstract: Various aspects of methods, systems, and use cases for biometric security for edge platform management. An edge cloud system to implement biometric security for edge platform management comprises a biometric sensor; and an edge node in an edge network, the edge node to: receive a request to access a feature of the edge node, the request originating from an entity, wherein the request comprises an entity identifier and a feature identifier; receive from the biometric sensor, biometric data of the entity; authenticate the entity using the biometric data; and in response to authenticating the entity using the biometric data, grant access to the feature based on a crosscheck to an access control list that includes entity identifiers correlated to feature identifiers, using the received entity identifier and the received feature identifier.
    Type: Application
    Filed: September 25, 2020
    Publication date: January 21, 2021
    Inventors: Francesc Guim Bernat, Ned M. Smith, Kshitij Arun Doshi, Suraj Prabhakaran, Brinda Ganesh
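    A sketch of the crosscheck step: after the biometric match succeeds, the (entity identifier, feature identifier) pair is looked up in an access control list. The ACL layout and the stubbed biometric comparison are assumptions.
    ```python
    ACCESS_CONTROL_LIST = {
        "operator-17": {"reset-node", "read-telemetry"},
        "tenant-4":    {"read-telemetry"},
    }

    def authenticate(entity_id, biometric_sample):
        # placeholder for a real biometric match against enrolled templates
        return biometric_sample == f"fingerprint:{entity_id}"

    def grant_access(entity_id, feature_id, biometric_sample):
        if not authenticate(entity_id, biometric_sample):
            return False
        return feature_id in ACCESS_CONTROL_LIST.get(entity_id, set())

    print(grant_access("operator-17", "reset-node", "fingerprint:operator-17"))  # True
    print(grant_access("tenant-4", "reset-node", "fingerprint:tenant-4"))        # False
    ```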
  • Publication number: 20210021533
    Abstract: Systems and techniques for intelligent data forwarding in edge networks are described herein. A request may be received from an edge user device for a service via a first endpoint. A time value may be calculated using a timestamp of the request. Motion characteristics may be determined for the edge user device using the time value. A response to the request may be transmitted to a second endpoint based on the motion characteristics.
    Type: Application
    Filed: September 25, 2020
    Publication date: January 21, 2021
    Inventors: Francesc Guim Bernat, Ned M. Smith, Kshitij Arun Doshi, Suraj Prabhakaran, Timothy Verrall, Kapil Sood, Tarun Viswanathan
  • Publication number: 20210004685
    Abstract: Examples include techniques to manage training or trained models for deep learning applications. Examples include routing commands to configure a training model to be implemented by a training module or configure a trained model to be implemented by an inference module. The commands are routed via an out-of-band (OOB) link, while training data for the training models or input data for the trained models are routed via in-band links.
    Type: Application
    Filed: September 18, 2020
    Publication date: January 7, 2021
    Applicant: Intel Corporation
    Inventors: Francesc GUIM BERNAT, Suraj PRABHAKARAN, Kshitij A. DOSHI, Da-Ming CHIANG
  • Patent number: 10873521
    Abstract: Techniques for fast startup for composite nodes in software-defined infrastructures (SDI) are described. An SDI system may include an SDI manager component, including one or more processor circuits to access one or more remote resources; the SDI manager component may include a node manager to determine, based upon one or more reservation tables stored in a non-transitory computer-readable storage medium, an initial set of resources for creating the composite node from among the one or more remote resources. The partition manager may create the composite node using the initial set of resources, the initial set of resources being a subset of the resources required by the composite node. Other embodiments are described and claimed.
    Type: Grant
    Filed: July 20, 2017
    Date of Patent: December 22, 2020
    Assignee: INTEL CORPORATION
    Inventors: Francesc Guim Bernat, Susanne M. Balle, Daniel Rivas Barragan, John Chun Kwok Leung, Suraj Prabhakaran, Murugasamy K. Nachimuthu, Slawomir Putyrski
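    A toy model of the fast-startup idea: the reservation table marks which reserved resources are needed at boot, and the composite node comes up with only that subset while the remaining resources attach later. The table layout and the needed-at-boot flag are illustrative assumptions.
    ```python
    RESERVATION_TABLE = {
        # resource id -> (type, needed_at_boot)
        "cpu-0":  ("compute", True),
        "mem-0":  ("memory",  True),
        "fpga-3": ("accelerator", False),
        "mem-7":  ("memory",  False),
    }

    def initial_resource_set(reservations):
        """Pick the boot-critical subset so the composite node can start immediately;
        the rest of the reserved resources can be attached later."""
        return [rid for rid, (_, needed_at_boot) in reservations.items() if needed_at_boot]

    def create_composite_node(reservations):
        initial = initial_resource_set(reservations)
        deferred = [rid for rid in reservations if rid not in initial]
        return {"node": "composite-node-1", "attached": initial, "pending": deferred}

    print(create_composite_node(RESERVATION_TABLE))
    ```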
  • Publication number: 20200396177
    Abstract: Technologies for accelerating edge device workloads at a device edge network include a network computing device which includes a processor platform that includes at least one processor which supports a plurality of non-accelerated function-as-a-service (FaaS) operations and an accelerated platform that includes at least one accelerator which supports a plurality of accelerated FaaS (AFaaS) operations. The network computing device is configured to receive a request to perform a FaaS operation, determine whether the received request indicates that an AFaaS operation is to be performed on the received request, and identify compute requirements for the AFaaS operation to be performed. The network computing device is further configured to select an accelerator platform to perform the identified AFaaS operation and forward the received request to the selected accelerator platform to perform the identified AFaaS operation. Other embodiments are described and claimed.
    Type: Application
    Filed: January 21, 2020
    Publication date: December 17, 2020
    Inventors: Francesc Guim Bernat, Anil Rao, Suraj Prabhakaran, Mohan Kumar, Karthik Kumar
  • Publication number: 20200389410
    Abstract: An example system to schedule service requests in a network computing system using hardware queue managers includes: a gateway-level hardware queue manager in an edge gateway to schedule the service requests received from client devices in a queue; a rack-level hardware queue manager in a physical rack in communication with the edge gateway, the rack-level hardware queue manager to send a pull request to the gateway-level hardware queue manager for a first one of the service requests; and a drawer-level hardware queue manager in a drawer of the physical rack, the drawer-level hardware queue manager to send a second pull request to the rack-level hardware queue manager for the first one of the service requests, the drawer including a resource to provide a function as a service specified in the first one of the service requests.
    Type: Application
    Filed: March 30, 2018
    Publication date: December 10, 2020
    Inventors: Francesc Guim Bernat, Karthik Kumar, Suraj Prabhakaran, Ignacio Astilleros Diez, Timothy Verrall
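    A software model of the pull chain described above: the drawer-level manager pulls from the rack level, which pulls from the gateway level where client service requests are queued. Plain Python queues stand in for the hardware queue managers.
    ```python
    from collections import deque

    class HardwareQueueManagerModel:
        def __init__(self, name, upstream=None):
            self.name, self.upstream, self.queue = name, upstream, deque()

        def enqueue(self, service_request):
            self.queue.append(service_request)

        def pull(self):
            """Return a local request, or pull one from the upstream manager."""
            if self.queue:
                return self.queue.popleft()
            if self.upstream is not None:
                return self.upstream.pull()
            return None

    gateway = HardwareQueueManagerModel("gateway")
    rack = HardwareQueueManagerModel("rack", upstream=gateway)
    drawer = HardwareQueueManagerModel("drawer", upstream=rack)

    gateway.enqueue({"service": "transcode", "client": "cam-12"})
    print(drawer.pull())   # the drawer's resource serves the request pulled down the chain
    ```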
  • Patent number: 10833969
    Abstract: Techniques for increasing malleability in software-defined infrastructures are described. A compute node, including one or more processor circuits, may be configured to access one or more remote resources via a fabric, and the compute node may be configured to monitor utilization of the one or more remote resources. The compute node may be further configured to identify, based on one or more criteria, that one or more remote resources may be released and to initiate release of the identified one or more remote resources. The compute node may be configured to generate a notification to a software stack indicating that the identified one or more remote resources have been released. Other embodiments are described and claimed.
    Type: Grant
    Filed: July 20, 2017
    Date of Patent: November 10, 2020
    Assignee: INTEL CORPORATION
    Inventors: Francesc Guim Bernat, Susanne M. Balle, Daniel Rivas Barragan, John Chun Kwok Leung, Suraj Prabhakaran, Murugasamy K. Nachimuthu, Slawomir Putyrski
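    A sketch of the release decision in the last entry: remote resources whose monitored utilization stays under a threshold are released and the software stack is notified. The utilization window, the 10% threshold, and the notification callback are assumptions.
    ```python
    UTILIZATION_THRESHOLD = 0.10  # assumed "release if under 10% busy" criterion

    class ComputeNode:
        def __init__(self, notify_stack):
            self.remote_resources = {}     # resource id -> recent utilization samples
            self.notify_stack = notify_stack

        def record_utilization(self, resource_id, sample):
            self.remote_resources.setdefault(resource_id, []).append(sample)

        def release_idle_resources(self):
            released = []
            for resource_id, samples in list(self.remote_resources.items()):
                if samples and max(samples) < UTILIZATION_THRESHOLD:
                    del self.remote_resources[resource_id]   # initiate release over the fabric
                    released.append(resource_id)
            if released:
                self.notify_stack(released)                  # tell the software stack
            return released

    node = ComputeNode(notify_stack=lambda ids: print("released:", ids))
    node.record_utilization("remote-mem-7", 0.04)
    node.record_utilization("remote-fpga-2", 0.62)
    node.release_idle_resources()
    ```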