Patents by Inventor Timothy Verrall

Timothy Verrall has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190281132
    Abstract: Technologies for managing telemetry and sensor data on an edge networking platform are disclosed. According to one embodiment disclosed herein, a device monitors telemetry data associated with multiple services provided in the edge networking platform. The device identifies, for each of the services and as a function of the associated telemetry data, one or more service telemetry patterns. The device generates a profile including the identified service telemetry patterns.
    Type: Application
    Filed: May 17, 2019
    Publication date: September 12, 2019
    Inventors: Ramanathan Sethuraman, Timothy Verrall, Ned M. Smith, Thomas Willhalm, Brinda Ganesh, Francesc Guim Bernat, Karthik Kumar, Evan Custodio, Suraj Prabhakaran, Ignacio Astilleros Diez, Nilesh K. Jain, Ravi Iyer, Andrew J. Herdrich, Alexander Vul, Patrick G. Kutch, Kevin Bohan, Trevor Cooper
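The flow claimed in 20190281132 above (monitor per-service telemetry, identify a pattern for each service, generate a profile) can be sketched in a few lines of Python. This is an illustrative reduction only; the service names, metrics, and pattern definition below are invented for the example, not taken from the patent.

```python
from statistics import mean

def identify_patterns(samples):
    """Reduce one service's raw telemetry samples to a simple pattern."""
    return {"avg": mean(samples), "peak": max(samples)}

def build_profile(telemetry):
    """Map each monitored service to its identified telemetry pattern."""
    return {service: identify_patterns(samples)
            for service, samples in telemetry.items()}

# Hypothetical CPU-utilisation samples for two edge services.
telemetry = {
    "video-transcode": [70, 80, 90],
    "cdn-cache":       [10, 20, 30],
}
profile = build_profile(telemetry)
```

A real platform would identify far richer patterns (periodicity, correlations across metrics); the profile dictionary stands in for whatever structure the platform emits.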
  • Publication number: 20190243685
    Abstract: Some examples provide for uninterruptible power supply (UPS) resources and non-UPS resources to be offered in a composite node for customers to use. For a workload run on the composite node, monitoring of non-UPS resource power availability, resource temperature, and/or cooling facilities can take place. In the event a non-UPS resource experiences a power outage or reduction in available power, a temperature at or above a threshold level, and/or a cooling facility outage, monitoring of the performance of a workload executing on the non-UPS resource can take place. If the performance is acceptable and the power available to the non-UPS resource exceeds a threshold level, the supplied power can be reduced. If the performance experiences excessive levels of errors or slows unacceptably, the workload can be migrated to another non-UPS or UPS-compliant resource.
    Type: Application
    Filed: April 15, 2019
    Publication date: August 8, 2019
    Inventors: Francesc Guim Bernat, Felipe Pastor Beneyto, Kshitij A. Doshi, Timothy Verrall, Suraj Prabhakaran
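The monitoring logic of 20190243685 above reduces to a small policy: shed supplied power while performance holds, migrate when it degrades. A hypothetical Python sketch, with invented thresholds and action names:

```python
def manage_workload(perf_ok, error_rate, power_available,
                    power_threshold=100, error_threshold=0.05):
    """Decide what to do with a workload on a non-UPS resource that has
    experienced a power, temperature, or cooling event."""
    if perf_ok and power_available > power_threshold:
        return "reduce_power"   # performance acceptable: supplied power can drop
    if error_rate > error_threshold or not perf_ok:
        return "migrate"        # move workload to another UPS/non-UPS resource
    return "keep"               # performance holds but no power headroom
```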
  • Publication number: 20190230002
    Abstract: Technologies for accelerated orchestration and attestation include multiple edge devices. An edge appliance device performs an attestation process with each of its components to generate component certificates. The edge appliance device generates an appliance certificate that is indicative of the component certificates and a current utilization of the edge appliance device and provides the appliance certificate to a relying party. The relying party may be an edge orchestrator device. The edge orchestrator device receives a workload scheduling request with a service level agreement requirement. The edge orchestrator device verifies the appliance certificate and determines whether the service level agreement requirement is satisfied based on the appliance certificate. If satisfied, the workload is scheduled to the edge appliance device. Attestation and generation of the appliance certificate by the edge appliance device may be performed by an accelerator of the edge appliance device.
    Type: Application
    Filed: March 29, 2019
    Publication date: July 25, 2019
    Inventors: Francesc Guim Bernat, Kapil Sood, Tarun Viswanathan, Kshitij Doshi, Timothy Verrall, Ned M. Smith, Manish Dave, Alex Vul
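The attest-then-schedule flow of 20190230002 can be mocked up as follows. The SHA-256 digest stands in for a real attestation quote over each component, and a utilization ceiling stands in for the service level agreement check; every name and threshold here is illustrative.

```python
import hashlib

def component_certificate(component):
    # Stand-in for an attestation quote over the component's identity.
    return hashlib.sha256(component.encode()).hexdigest()

def appliance_certificate(components, utilization):
    """Edge appliance side: certificate over components plus current load."""
    return {"component_certs": [component_certificate(c) for c in components],
            "utilization": utilization}

def schedule(cert, sla_max_utilization, expected_components):
    """Orchestrator side: verify the certificate, then check the SLA."""
    expected = [component_certificate(c) for c in expected_components]
    if cert["component_certs"] != expected:
        return "reject"                      # attestation failed
    if cert["utilization"] > sla_max_utilization:
        return "reject"                      # appliance too busy for this SLA
    return "schedule"
```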
  • Publication number: 20190229897
    Abstract: Technologies for accelerated key caching in an edge hierarchy include multiple edge appliance devices organized in tiers. An edge appliance device receives a request for a key, such as a private key. The edge appliance device determines whether the key is included in a local key cache and, if not, requests the key from an edge appliance device included in an inner tier of the edge hierarchy. The edge appliance device may request the key from an edge appliance device included in a peer tier of the edge hierarchy. The edge appliance device may activate per-tenant accelerated logic to identify one or more keys in the key cache for eviction. The edge appliance device may activate per-tenant accelerated logic to identify one or more keys for pre-fetching. Those functions of the edge appliance device may be performed by an accelerator such as an FPGA. Other embodiments are described and claimed.
    Type: Application
    Filed: March 29, 2019
    Publication date: July 25, 2019
    Inventors: Timothy Verrall, Thomas Willhalm, Francesc Guim Bernat, Karthik Kumar, Ned M. Smith, Rajesh Poornachandran, Kapil Sood, Tarun Viswanathan, John J. Browne, Patrick Kutch
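The tiered lookup in 20190229897 (check the local key cache, fall back to an inner tier on a miss, evict under pressure) maps naturally onto a chain of LRU caches. A minimal sketch, assuming plain LRU eviction in place of the per-tenant accelerated logic the patent describes:

```python
from collections import OrderedDict

class TierKeyCache:
    """One tier of the key-cache hierarchy with LRU eviction.
    `inner` is the next tier toward the core (None at the innermost tier)."""
    def __init__(self, capacity, inner=None):
        self.capacity, self.inner = capacity, inner
        self.cache = OrderedDict()

    def get(self, key_id):
        if key_id in self.cache:
            self.cache.move_to_end(key_id)   # refresh LRU order on a hit
            return self.cache[key_id]
        if self.inner is None:
            raise KeyError(key_id)
        value = self.inner.get(key_id)       # miss: request from the inner tier
        self.put(key_id, value)              # cache it locally on the way back
        return value

    def put(self, key_id, value):
        self.cache[key_id] = value
        self.cache.move_to_end(key_id)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict the least recently used key
```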
  • Publication number: 20190220210
    Abstract: Technologies for providing deduplication of data in an edge network include a compute device having circuitry to obtain a request to write a data set. The circuitry is also to apply, to the data set, an approximation function to produce an approximated data set. Additionally, the circuitry is to determine whether the approximated data set is already present in a shared memory and write, to a translation table and in response to a determination that the approximated data set is already present in the shared memory, an association between a local memory address and a location, in the shared memory, where the approximated data set is already present. Additionally, the circuitry is to increase a reference count associated with the location in the shared memory.
    Type: Application
    Filed: March 28, 2019
    Publication date: July 18, 2019
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Timothy Verrall, Ned Smith
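The write path of 20190220210 (approximate the data, look it up in shared memory, record a translation and bump a reference count) can be sketched with rounding as a toy approximation function. All structures below are illustrative stand-ins for the hardware the patent describes:

```python
def approximate(data, precision=1):
    """Lossy approximation: round samples so near-identical data sets
    collapse to the same representation and deduplicate."""
    return tuple(round(x, precision) for x in data)

shared_memory = {}       # approximated data set -> shared-memory location
ref_counts = {}          # shared-memory location -> reference count
translation_table = {}   # local memory address -> shared-memory location

def write(local_addr, data):
    approx = approximate(data)
    if approx not in shared_memory:
        shared_memory[approx] = len(shared_memory)   # allocate a new slot
        ref_counts[shared_memory[approx]] = 0
    loc = shared_memory[approx]
    translation_table[local_addr] = loc   # associate local addr with the slot
    ref_counts[loc] += 1                  # one more reference to this location
    return loc
```

Two writes of slightly different sensor readings land in the same slot, which is the point of approximating before deduplicating.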
  • Publication number: 20190140919
    Abstract: An architecture to enable verification, ranking, and identification of respective edge service properties and associated service level agreement (SLA) properties, such as in an edge cloud or other edge computing environment, is disclosed. In an example, management and use of service information for an edge service includes: providing SLA information for an edge service to an operational device, for accessing an edge service hosted in an edge computing environment, with the SLA information providing reputation information for computing functions of the edge service according to an identified SLA; receiving a service request for use of the computing functions of the edge service, under the identified SLA; requesting, from the edge service, performance of the computing functions of the edge service according to the service request; and tracking the performance of the computing functions of the edge service according to the service request and compliance with the identified SLA.
    Type: Application
    Filed: December 28, 2018
    Publication date: May 9, 2019
    Inventors: Ned M. Smith, Ben McCahill, Francesc Guim Bernat, Felipe Pastor Beneyto, Karthik Kumar, Timothy Verrall
  • Publication number: 20190138534
    Abstract: Technologies for providing dynamic persistence of data in edge computing include a device including circuitry configured to determine multiple different logical domains of data storage resources for use in storing data from a client compute device at an edge of a network. Each logical domain has a different set of characteristics. The circuitry is also configured to receive, from the client compute device, a request to persist data. The request includes a target persistence objective indicative of an objective to be satisfied in the storage of the data. Additionally, the circuitry is configured to select, as a function of the characteristics of the logical domains and the target persistence objective, a logical domain into which to persist the data and provide the data to the selected logical domain.
    Type: Application
    Filed: December 28, 2018
    Publication date: May 9, 2019
    Inventors: Francesc Guim Bernat, Karthik Kumar, Suraj Prabhakaran, Ramanathan Sethuraman, Timothy Verrall
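The selection step in 20190138534 (pick a logical domain as a function of its characteristics and the target persistence objective) is essentially constrained optimization. A toy Python version, with made-up domain names and characteristics:

```python
# Each logical domain advertises a different set of characteristics.
DOMAINS = {
    "local-nvm": {"durability": 0.90,     "latency_ms": 1},
    "edge-pool": {"durability": 0.99,     "latency_ms": 5},
    "core-dc":   {"durability": 0.999999, "latency_ms": 40},
}

def select_domain(min_durability, max_latency_ms):
    """Pick the lowest-latency domain satisfying the persistence objective."""
    candidates = [(d["latency_ms"], name) for name, d in DOMAINS.items()
                  if d["durability"] >= min_durability
                  and d["latency_ms"] <= max_latency_ms]
    if not candidates:
        raise LookupError("no logical domain satisfies the objective")
    return min(candidates)[1]
```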
  • Publication number: 20190138361
    Abstract: Technologies for providing dynamic selection of edge and local accelerator resources include a device having circuitry to identify a function of an application to be accelerated, determine one or more properties of an accelerator resource available at the edge of a network where the device is located, and determine one or more properties of an accelerator resource available in the device. Additionally, the circuitry is to determine a set of acceleration selection factors associated with the function, wherein the acceleration selection factors are indicative of one or more objectives to be satisfied in the acceleration of the function. Further, the circuitry is to select, as a function of the one or more properties of the accelerator resource available at the edge, the one or more properties of the accelerator resource available in the device, and the acceleration selection factors, one or more of the accelerator resources to accelerate the function.
    Type: Application
    Filed: December 28, 2018
    Publication date: May 9, 2019
    Inventors: Francesc Guim Bernat, Karthik Kumar, Ned Smith, Thomas Willhalm, Timothy Verrall
  • Publication number: 20190141120
    Abstract: Technologies for providing selective offload of execution of an application to the edge include a device that includes circuitry to determine whether a section of an application to be executed by the device is available to be offloaded. Additionally, the circuitry is to determine one or more characteristics of an edge resource available to execute the section. Further, the circuitry is to determine, as a function of the one or more characteristics and a target performance objective associated with the section, whether to offload the section to the edge resource and offload, in response to a determination to offload the section, the section to the edge resource.
    Type: Application
    Filed: December 28, 2018
    Publication date: May 9, 2019
    Inventors: Francesc Guim Bernat, Ned Smith, Thomas Willhalm, Karthik Kumar, Timothy Verrall
  • Publication number: 20190140913
    Abstract: Examples include techniques for artificial intelligence (AI) capabilities at a network switch. These examples include receiving a request to register a neural network for loading to an inference resource located at the network switch and loading the neural network based on information included in the request to support an AI service to be provided to users requesting the AI service.
    Type: Application
    Filed: December 28, 2018
    Publication date: May 9, 2019
    Inventors: Francesc Guim Bernat, Suraj Prabhakaran, Kshitij A. Doshi, Brinda Ganesh, Timothy Verrall
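The register-then-load flow of 20190140913 can be mocked up as a tiny in-switch inference resource. The "neural network" here is a trivial linear model so the example stays self-contained; the real request would carry a serialized model, not a weight list.

```python
class SwitchInferenceResource:
    """Toy model of an inference resource located at a network switch."""
    def __init__(self):
        self.models = {}

    def register(self, request):
        """Load a neural network based on information in the request."""
        name = request["model_name"]
        self.models[name] = request["weights"]   # stand-in for a real load
        return name

    def infer(self, name, x):
        # Trivial linear "network": weights are per-feature multipliers.
        w = self.models[name]
        return sum(wi * xi for wi, xi in zip(w, x))
```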
  • Publication number: 20190097889
    Abstract: A computing apparatus, including: a hardware platform; and an interworking broker function (IBF) hosted on the hardware platform, the IBF including a translation driver (TD) associated with a legacy network appliance lacking native interoperability with an orchestrator, the IBF configured to: receive from the orchestrator a network function provisioning or configuration command for the legacy network appliance; operate the TD to translate the command to a format consumable by the legacy network appliance; and forward the command to the legacy network appliance.
    Type: Application
    Filed: September 27, 2017
    Publication date: March 28, 2019
    Applicant: Intel Corporation
    Inventors: John J. Browne, Timothy Verrall, Maryam Tahhan, Michael J. McGrath, Sean Harte, Kevin Devey, Jonathan Kenny, Christopher MacNamara
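The interworking broker function of 20190097889 is a translation shim: receive an orchestrator command, run it through a translation driver, forward the result to the legacy appliance. A minimal sketch, in which the "legacy format" is an invented line-oriented CLI:

```python
class LegacyAppliance:
    """Legacy appliance that only understands a line-oriented syntax."""
    def __init__(self):
        self.config = {}

    def apply(self, command_line):
        _, key, value = command_line.split(" ")
        self.config[key] = value

class InterworkingBroker:
    """IBF: translates orchestrator commands for the legacy appliance."""
    def __init__(self, appliance):
        self.appliance = appliance

    def handle(self, command):
        # Translation driver: flatten the structured orchestrator command
        # into the format the legacy appliance consumes, then forward it.
        line = f"set {command['parameter']} {command['value']}"
        self.appliance.apply(line)
```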
  • Publication number: 20190042234
    Abstract: Technologies for providing streamlined provisioning of accelerated functions in a disaggregated architecture include a compute sled. The compute sled includes a network interface controller and circuitry to determine whether to accelerate a function of a workload executed by the compute sled, and send, to a memory sled and in response to a determination to accelerate the function, a data set on which the function is to operate. The circuitry is also to receive, from the memory sled, a service identifier indicative of a memory location independent handle for data associated with the function, send, to a compute device, a request to schedule acceleration of the function on the data set, receive a notification of completion of the acceleration of the function, and obtain, in response to receipt of the notification and using the service identifier, a resultant data set from the memory sled. The resultant data set was produced by an accelerator device during acceleration of the function on the data set.
    Type: Application
    Filed: March 6, 2018
    Publication date: February 7, 2019
    Inventors: Francesc Guim Bernat, Suraj Prabhakaran, Kshitij Doshi, Timothy Verrall
  • Publication number: 20190042617
    Abstract: Examples provide a network component, a network switch, a central office, a base station, a data storage element, a method, an apparatus, a computer program, a machine readable storage, and a machine readable medium. A network component (10) is configured to manage data consistency among two or more data storage elements (20, 30) in a network (40). The network component (10) comprises one or more interfaces (12) configured to register information on the two or more data storage elements (20, 30) comprising the data, information on a temporal range for the data consistency, and information on one or more address spaces at the two or more data storage elements (20, 30) to address the data.
    Type: Application
    Filed: April 10, 2018
    Publication date: February 7, 2019
    Inventors: Francesc Guim Bernat, Karthik Kumar, Mark Schmisseur, Timothy Verrall, Thomas Willhalm
  • Publication number: 20190042294
    Abstract: A method and system for implementing virtualized network functions (VNFs) in a network. Physical resources of the network are abstracted into virtual resource pools and shared by virtual network entities. A virtual channel is set up for communicating data between a first VNF and a second VNF. A memory pool is allocated for the virtual channel from a set of memory pools. New interfaces are provided for communication between VNFs. The new interfaces may allow payloads or data units to be pushed and pulled from one VNF to another. The data may be stored in a queue in the pooled memory allocated for the VNFs/services. Certain processing may be performed before the data is stored in the memory pool.
    Type: Application
    Filed: April 13, 2018
    Publication date: February 7, 2019
    Inventors: Francesc Guim Bernat, Karthik Kumar, Timothy Verrall, Suraj Prabhakaran, Mark Schmisseur
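The virtual channel of 20190042294 (a queue in pooled memory with push/pull interfaces and optional processing before storage) can be sketched with a bounded deque standing in for the allocated memory pool:

```python
from collections import deque

class VirtualChannel:
    """Virtual channel between two VNFs, backed by a bounded queue that
    stands in for the memory pool allocated to the channel."""
    def __init__(self, pool_size):
        self.pool_size = pool_size
        self.queue = deque()

    def push(self, payload, preprocess=None):
        if len(self.queue) >= self.pool_size:
            raise MemoryError("memory pool exhausted")
        if preprocess is not None:       # optional processing before storing
            payload = preprocess(payload)
        self.queue.append(payload)

    def pull(self):
        return self.queue.popleft()
</```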
  • Publication number: 20190042739
    Abstract: Technologies for cache side channel attack detection and mitigation include an analytics server and one or more monitored computing devices. The analytics server polls each computing device for analytics counter data. The computing device generates the analytics counter data using a resource manager of a processor of the computing device. The analytics counter data may include last-level cache data or memory bandwidth data. The analytics server identifies suspicious core activity based on the analytics counter data and, if identified, deploys a detection process to the computing device. The computing device executes the detection process to identify suspicious application activity. If identified, the computing device may perform one or more corrective actions. Corrective actions include limiting resource usage by a suspicious process using the resource manager of the processor. The resource manager may limit cache occupancy or memory bandwidth used by the suspicious process.
    Type: Application
    Filed: June 29, 2018
    Publication date: February 7, 2019
    Inventors: John J. Browne, Marcel Cornu, Timothy Verrall, Tomasz Kantecki, Niall Power, Weigang Li, Eoin Walsh, Maryam Tahhan
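The detection pipeline of 20190042739 (poll analytics counters, flag suspicious cores, throttle the offending process via the resource manager) can be outlined as below. The counters, thresholds, and throttle parameters are invented for illustration; real deployments would read hardware cache/bandwidth counters.

```python
def suspicious_cores(counter_samples, llc_threshold=1000):
    """Analytics-server side: flag cores whose last-level-cache counter
    activity exceeds a threshold in any polled sample."""
    return [core for core, samples in counter_samples.items()
            if max(samples) > llc_threshold]

def corrective_action(process, max_cache_ways, max_mem_bw_pct):
    """Stand-in for resource-manager throttling of a suspicious process."""
    return {"process": process,
            "cache_ways": max_cache_ways,    # limit cache occupancy
            "mem_bw_pct": max_mem_bw_pct}    # limit memory bandwidth
```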
  • Publication number: 20190041960
    Abstract: In one embodiment, an apparatus of an edge computing system includes memory that includes instructions and processing circuitry coupled to the memory. The processing circuitry implements the instructions to process a request to execute at least a portion of a workflow on pooled computing resources, the workflow being associated with a particular tenant, determine an amount of power to be allocated to particular resources of the pooled computing resources for execution of the portion of the workflow based on a power budget associated with the tenant and a current power cost, and control allocation of the determined amount of power to the particular resources of the pooled computing resources during execution of the portion of the workflow.
    Type: Application
    Filed: June 19, 2018
    Publication date: February 7, 2019
    Inventors: Francesc Guim Bernat, Suraj Prabhakaran, Timothy Verrall, Karthik Kumar, Mark A. Schmisseur
  • Publication number: 20190045000
    Abstract: Technologies for load-aware traffic steering include a compute device that includes a multi-homed network interface controller (NIC) with a plurality of NICs. The compute device determines a target virtual network function (VNF) of a plurality of VNFs to perform a processing operation on a network packet. The compute device further identifies a first steering point of a first NIC to steer the received network packet to virtual machines (VMs) associated with the target VNF and retrieves a resource utilization metric that corresponds to a usage level of a component of the compute device used by the VMs to process the network packet. Additionally, the compute device determines whether the resource utilization metric indicates a potential overload condition and, if so, provides a steering instruction to a second steering point of a second NIC to redirect the network traffic to other VMs via that second steering point.
    Type: Application
    Filed: June 29, 2018
    Publication date: February 7, 2019
    Inventors: Chetan Hiremath, Timothy Verrall, Andrey Chilikin, Thomas Long, Maryam Tahhan, Eoin Walsh, Andrew Duignan, Rory Browne
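The steering decision in 20190045000 boils down to: use the first NIC's steering point unless its resource utilization indicates a potential overload, in which case redirect through the second NIC. A hypothetical sketch with invented NIC names and threshold:

```python
def steer(utilization, overload_threshold=0.8):
    """Pick a steering point for a packet bound for the target VNF's VMs.
    `utilization` maps each NIC's steering point to the usage level of the
    component its VMs rely on."""
    primary, secondary = "nic0", "nic1"
    if utilization[primary] >= overload_threshold:
        return secondary     # potential overload: redirect via the second NIC
    return primary
```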
  • Publication number: 20190045022
    Abstract: An apparatus is described. The apparatus includes switch circuitry to route packets that are destined for one or more cloud storage services instead to local caching resources. The packets are sent from different tenants of the one or more cloud storage services and have respective payloads that contain read/write commands for one or more cloud storage services. The apparatus includes storage controller circuitry to be coupled to non-volatile memory. The non-volatile memory is to implement the local caching resources. The storage controller is to implement customized caching treatment for the different tenants. The apparatus includes network interface circuitry coupled between the switch circuitry and the storage controller circuitry to implement customized network end point processing for the different tenants.
    Type: Application
    Filed: March 29, 2018
    Publication date: February 7, 2019
    Inventors: Francesc Guim Bernat, Eoin Walsh, Paul Mannion, Timothy Verrall, Mark A. Schmisseur
  • Publication number: 20190042314
    Abstract: Particular embodiments described herein provide for an electronic device that can be configured to partition a resource into a plurality of partitions and allocate a reserved portion and a corresponding burst portion in each of the plurality of partitions. Each of the allocated reserved portions and corresponding burst portions are reserved for a specific component or application, where any part of the allocated burst portion not being used by the specific component or application can be used by other components and/or applications.
    Type: Application
    Filed: January 12, 2018
    Publication date: February 7, 2019
    Applicant: Intel Corporation
    Inventors: Timothy Verrall, John J. Browne, Tomasz Kantecki, Maryam Tahhan, Eoin Walsh, Andrew Duignan, Alan Carey, Wojciech Andralojc, Damien Power, Tarun Viswanathan
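The partition model of 20190042314 (a reserved portion exclusive to its owner plus a burst portion that others may borrow while the owner is idle) can be expressed directly. Units and names are illustrative; the resource could be cache ways, bandwidth, or cores.

```python
class Partition:
    """Resource partition with a reserved portion (exclusive to its owner)
    and a burst portion (lendable whenever the owner is not using it)."""
    def __init__(self, owner, reserved, burst):
        self.owner, self.reserved, self.burst = owner, reserved, burst
        self.burst_in_use_by_owner = 0

    def available_to_others(self):
        # Only the part of the burst portion the owner is NOT using
        # can be borrowed by other components or applications.
        return self.burst - self.burst_in_use_by_owner

p = Partition(owner="dpdk-app", reserved=4, burst=2)
```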
  • Publication number: 20190042454
    Abstract: Examples include techniques to manage cache resource allocations associated with one or more cache class of service (CLOS) assignments for a processor cache. Examples include flushing portions of an allocated cache resource responsive to reassignments of CLOS.
    Type: Application
    Filed: June 29, 2018
    Publication date: February 7, 2019
    Inventors: Tomasz Kantecki, John Browne, Chris MacNamara, Timothy Verrall, Marcel Cornu, Eoin Walsh, Andrew J. Herdrich
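The core idea of 20190042454 (flush the portions of an allocated cache resource that a class of service releases when its assignment changes) can be sketched as bookkeeping over cache ways. The `flushed` list stands in for actual cache-flush operations:

```python
class ClosManager:
    """Tracks which cache ways each class of service (CLOS) is assigned
    and flushes ways that a CLOS releases on reassignment."""
    def __init__(self, assignments):
        self.assignments = assignments   # CLOS id -> set of cache ways
        self.flushed = []

    def reassign(self, clos, new_ways):
        released = self.assignments[clos] - new_ways
        self.flushed.extend(sorted(released))  # flush ways leaving the CLOS
        self.assignments[clos] = new_ways
```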