Patents by Inventor Timothy Verrall

Timothy Verrall has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240080246
    Abstract: Examples include techniques for artificial intelligence (AI) capabilities at a network switch. These examples include receiving a request to register a neural network for loading to an inference resource located at the network switch and loading the neural network based on information included in the request to support an AI service to be provided to users requesting the AI service.
    Type: Application
    Filed: October 2, 2023
    Publication date: March 7, 2024
    Inventors: Francesc GUIM BERNAT, Suraj PRABHAKARAN, Kshitij A. DOSHI, Brinda GANESH, Timothy VERRALL
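    A minimal Python sketch of the register-and-load flow the abstract describes, under stated assumptions: the RegistrationRequest and SwitchInferenceResource names, their fields, and the in-memory model table are hypothetical and not taken from the filing.

      # Hypothetical sketch of the register-and-load flow; not from the patent.
      from dataclasses import dataclass, field

      @dataclass
      class RegistrationRequest:
          model_id: str          # identifies the neural network to register
          model_blob: bytes      # serialized network weights/graph
          service_name: str      # AI service the loaded network will back

      @dataclass
      class SwitchInferenceResource:
          loaded_models: dict = field(default_factory=dict)

          def register(self, req: RegistrationRequest) -> None:
              # "Receiving a request to register a neural network for loading
              # to an inference resource located at the network switch ..."
              if req.model_id in self.loaded_models:
                  raise ValueError(f"model {req.model_id} already registered")
              # "... and loading the neural network based on information
              # included in the request."
              self.loaded_models[req.model_id] = {
                  "weights": req.model_blob,
                  "service": req.service_name,
              }

          def infer(self, model_id: str, payload: bytes) -> bytes:
              # Placeholder for running the loaded network; real hardware would
              # dispatch to the switch's inference engine.
              _ = self.loaded_models[model_id]
              return b"result-for-" + payload   # stand-in for a real inference

      switch = SwitchInferenceResource()
      switch.register(RegistrationRequest("resnet-tiny", b"\x00" * 16, "image-tagging"))
      print(switch.infer("resnet-tiny", b"frame-001"))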
  • Patent number: 11922227
    Abstract: Technologies for providing efficient migration of services include a server device. The server device includes compute engine circuitry to execute a set of services on behalf of a terminal device and migration accelerator circuitry. The migration accelerator circuitry is to determine whether execution of the services is to be migrated from an edge station in which the present server device is located to a second edge station in which a second server device is located, determine a prioritization of the services executed by the server device, and send, in response to a determination that the services are to be migrated and as a function of the determined prioritization, data utilized by each service to the second server device of the second edge station to migrate the services. Other embodiments are also described and claimed.
    Type: Grant
    Filed: December 21, 2022
    Date of Patent: March 5, 2024
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Susanne M. Balle, Ignacio Astilleros Diez, Timothy Verrall, Ned M. Smith
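    A minimal Python sketch of priority-ordered migration between edge stations as the abstract outlines; the Service fields, the load threshold, and the should_migrate/migrate names are hypothetical.

      # Hypothetical sketch of priority-ordered service migration; not from the patent.
      from dataclasses import dataclass

      @dataclass
      class Service:
          name: str
          priority: int      # higher value = migrate first
          state: bytes       # data the service needs at the destination

      def should_migrate(local_load: float, threshold: float = 0.9) -> bool:
          # Stand-in for the "determine whether execution of the services is
          # to be migrated" decision (e.g., the local edge station is overloaded
          # or the terminal device is moving toward another station).
          return local_load > threshold

      def migrate(services: list[Service], send) -> None:
          # "Determine a prioritization of the services" and send each service's
          # data "as a function of the determined prioritization".
          for svc in sorted(services, key=lambda s: s.priority, reverse=True):
              send(svc.name, svc.state)

      if should_migrate(local_load=0.95):
          migrate(
              [Service("video-analytics", 2, b"model+buffers"),
               Service("telemetry", 1, b"counters")],
              send=lambda name, data: print(f"sending {name}: {len(data)} bytes"),
          )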
  • Publication number: 20240031219
    Abstract: Methods, apparatus, and systems are disclosed for mapping active assurance intents to resource orchestration and life cycle management. An example apparatus disclosed herein is to reserve a probe on a compute device in a cluster of compute devices based on a request to satisfy a resource availability criterion associated with a resource of the cluster, apply a risk mitigation operation based on the resource availability criterion before deployment of a workload to the cluster, and monitor whether the criterion is satisfied based on data from the probe after deployment of the workload to the cluster.
    Type: Application
    Filed: September 29, 2023
    Publication date: January 25, 2024
    Inventors: John J. Browne, Kshitij Arun Doshi, Francesc Guim Bernat, Adrian Hoban, Mats Agerstam, Shekar Ramachandran, Thijs Metsch, Timothy Verrall, Ciara Loftus, Emma Collins, Krzysztof Kepka, Pawel Zak, Aibhne Breathnach, Ivens Zambrano, Shanshu Yang
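    A minimal Python sketch of the reserve-probe, mitigate-before-deployment, monitor-after-deployment loop from the abstract; the Cluster and Probe classes and the latency criterion are hypothetical.

      # Hypothetical sketch of the probe-reserve / mitigate / monitor loop; not from the filing.
      from dataclasses import dataclass

      @dataclass
      class Probe:
          node: str
          def measure(self) -> dict:
              return {"available_vcpus": 4, "latency_ms": 8.0}   # stand-in sample

      class Cluster:
          def __init__(self, nodes):
              self.nodes = nodes
          def reserve_probe(self, node: str) -> Probe:
              # "Reserve a probe on a compute device in a cluster of compute
              # devices based on a request ..."
              assert node in self.nodes
              return Probe(node)

      def criterion_satisfied(sample: dict, max_latency_ms: float = 10.0) -> bool:
          return sample["latency_ms"] <= max_latency_ms

      cluster = Cluster(["node-a", "node-b"])
      probe = cluster.reserve_probe("node-a")

      # "Apply a risk mitigation operation ... before deployment of a workload":
      if not criterion_satisfied(probe.measure()):
          print("pre-deployment mitigation: pick another node or free resources")

      # "Monitor whether the criterion is satisfied ... after deployment":
      print("assurance holds:", criterion_satisfied(probe.measure()))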
  • Patent number: 11880714
    Abstract: Technologies for providing dynamic selection of edge and local accelerator resources include a device having circuitry to identify a function of an application to be accelerated, determine one or more properties of an accelerator resource available at the edge of a network where the device is located, and determine one or more properties of an accelerator resource available in the device. Additionally, the circuitry is to determine a set of acceleration selection factors associated with the function, wherein the acceleration factors are indicative of one or more objectives to be satisfied in the acceleration of the function. Further, the circuitry is to select, as a function of the one or more properties of the accelerator resource available at the edge, the one or more properties of the accelerator resource available in the device, and the acceleration selection factors, one or more of the accelerator resources to accelerate the function.
    Type: Grant
    Filed: November 8, 2021
    Date of Patent: January 23, 2024
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Ned Smith, Thomas Willhalm, Timothy Verrall
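    A minimal Python sketch of weighing a local accelerator against an edge accelerator using acceleration selection factors, as the abstract describes; the resource properties and scoring weights are hypothetical.

      # Hypothetical sketch of scoring local vs. edge accelerators; not from the patent.
      from dataclasses import dataclass

      @dataclass
      class Accelerator:
          location: str        # "local" or "edge"
          tops: float          # throughput property
          latency_ms: float    # access latency property

      def select(resources: list[Accelerator], factors: dict) -> Accelerator:
          # factors encode "one or more objectives to be satisfied in the
          # acceleration of the function", e.g. how to weight latency vs. throughput.
          def score(r: Accelerator) -> float:
              return (factors["throughput_weight"] * r.tops
                      - factors["latency_weight"] * r.latency_ms)
          return max(resources, key=score)

      chosen = select(
          [Accelerator("local", tops=5.0, latency_ms=0.1),
           Accelerator("edge", tops=50.0, latency_ms=4.0)],
          factors={"throughput_weight": 1.0, "latency_weight": 5.0},
      )
      print("accelerate on:", chosen.location)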
  • Publication number: 20240015067
    Abstract: A computing apparatus, including: a hardware platform; and an interworking broker function (IBF) hosted on the hardware platform, the IBF including a translation driver (TD) associated with a legacy network appliance lacking native interoperability with an orchestrator, the IBF configured to: receive from the orchestrator a network function provisioning or configuration command for the legacy network appliance; operate the TD to translate the command to a format consumable by the legacy network appliance; and forward the command to the legacy network appliance.
    Type: Application
    Filed: September 19, 2023
    Publication date: January 11, 2024
    Applicant: Intel Corporation
    Inventors: John J. Browne, Timothy Verrall, Maryam Tahhan, Michael J. McGrath, Sean Harte, Kevin Devey, Jonathan Kenny, Christopher MacNamara
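    A minimal Python sketch of the interworking broker function (IBF) pattern in the abstract, with a per-appliance translation driver (TD); the LegacyApplianceTD and InterworkingBrokerFunction names and the command format are hypothetical.

      # Hypothetical sketch of the IBF translate-and-forward pattern; not from the filing.
      class LegacyApplianceTD:
          """Translation driver for one legacy appliance's native format."""
          def translate(self, command: dict) -> str:
              # Render an orchestrator-style dict into a legacy CLI-like string.
              return f"SET {command['function']} {command['parameter']}={command['value']}"

      class InterworkingBrokerFunction:
          def __init__(self, driver: LegacyApplianceTD, forward):
              self.driver = driver
              self.forward = forward        # callable that reaches the appliance

          def handle(self, command: dict) -> None:
              # "Receive from the orchestrator a network function provisioning
              # or configuration command", "operate the TD to translate the
              # command", then "forward the command to the legacy network appliance".
              native = self.driver.translate(command)
              self.forward(native)

      ibf = InterworkingBrokerFunction(LegacyApplianceTD(),
                                       forward=lambda msg: print("to appliance:", msg))
      ibf.handle({"function": "firewall", "parameter": "max_sessions", "value": 4096})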
  • Patent number: 11831507
    Abstract: Various approaches for deployment and use of configurable edge computing platforms are described. In an edge computing system, an edge computing device includes hardware resources that can be composed from a configuration of chiplets, as the chiplets are disaggregated for selective use and deployment (for compute, acceleration, memory, storage, or other resources). In an example, configuration operations are performed to: identify a condition for use of the hardware resource, based on an edge computing workload received at the edge computing device; obtain, determine, or identify properties of a configuration for the hardware resource that are available to be implemented with the chiplets, with the configuration enabling the hardware resource to satisfy the condition for use of the hardware resource; and compose the chiplets into the configuration, according to the properties of the configuration, to enable the use of the hardware resource for the edge computing workload.
    Type: Grant
    Filed: May 5, 2022
    Date of Patent: November 28, 2023
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Kshitij Arun Doshi, Ned M. Smith, Timothy Verrall, Uzair Qureshi
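    A minimal Python sketch of composing disaggregated chiplets into a configuration that satisfies a workload's resource condition, per the abstract; the Chiplet kinds, the capacity model, and compose() are hypothetical.

      # Hypothetical sketch of chiplet composition; not from the patent.
      from dataclasses import dataclass

      @dataclass
      class Chiplet:
          kind: str        # "compute", "memory", "accel", ...
          capacity: int

      def compose(pool: list[Chiplet], needs: dict) -> list[Chiplet]:
          # "Identify ... properties of a configuration for the hardware resource
          # that are available to be implemented with the chiplets" and pick
          # enough of each kind to "satisfy the condition for use".
          chosen = []
          for kind, required in needs.items():
              have = 0
              for c in (c for c in pool if c.kind == kind):
                  if have >= required:
                      break
                  chosen.append(c)
                  have += c.capacity
              if have < required:
                  raise RuntimeError(f"cannot satisfy {kind} requirement")
          return chosen

      pool = [Chiplet("compute", 8), Chiplet("compute", 8), Chiplet("memory", 32)]
      config = compose(pool, needs={"compute": 12, "memory": 16})
      print([f"{c.kind}:{c.capacity}" for c in config])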
  • Patent number: 11824732
    Abstract: Examples include techniques for artificial intelligence (AI) capabilities at a network switch. These examples include receiving a request to register a neural network for loading to an inference resource located at the network switch and loading the neural network based on information included in the request to support an AI service to be provided to users requesting the AI service.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: November 21, 2023
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Suraj Prabhakaran, Kshitij A. Doshi, Brinda Ganesh, Timothy Verrall
  • Patent number: 11818008
    Abstract: A computing apparatus, including: a hardware platform; and an interworking broker function (IBF) hosted on the hardware platform, the IBF including a translation driver (TD) associated with a legacy network appliance lacking native interoperability with an orchestrator, the IBF configured to: receive from the orchestrator a network function provisioning or configuration command for the legacy network appliance; operate the TD to translate the command to a format consumable by the legacy network appliance; and forward the command to the legacy network appliance.
    Type: Grant
    Filed: September 15, 2022
    Date of Patent: November 14, 2023
    Assignee: Intel Corporation
    Inventors: John J. Browne, Timothy Verrall, Maryam Tahhan, Michael J. McGrath, Sean Harte, Kevin Devey, Jonathan Kenny, Christopher MacNamara
  • Patent number: 11809252
    Abstract: Examples described herein relate to management of battery-use by one or more computing resources in the event of a power outage. Data used by one or more computing resources can be backed-up using battery power. Battery power is allocated to data back-up operations based at least on one or more of: criticality level of data, priority of an application that processes the data, or priority level of resource. The computing resource can back-up data to a persistent storage media. The computing resource can store a log of data that is backed-up or not backed-up. The log can be used by the computing resource to access the backed-up data for continuing to process the data and to determine what data is not available for processing.
    Type: Grant
    Filed: July 29, 2019
    Date of Patent: November 7, 2023
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Suraj Prabhakaran, Karthik Kumar, Uzair Qureshi, Timothy Verrall
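    A minimal Python sketch of criticality-ordered back-up under a battery budget, with a log of what was and was not persisted, following the abstract; the BackupItem fields and the megabyte budget model are hypothetical.

      # Hypothetical sketch of battery-budgeted back-up with a log; not from the patent.
      from dataclasses import dataclass

      @dataclass
      class BackupItem:
          name: str
          criticality: int      # higher = back up first
          size_mb: int

      def backup_on_battery(items, battery_budget_mb):
          # "Battery power is allocated to data back-up operations based at least
          # on ... criticality level of data"; the budget is modeled here as the
          # number of MB the remaining charge can flush to persistent media.
          log = {"backed_up": [], "skipped": []}
          remaining = battery_budget_mb
          for item in sorted(items, key=lambda i: i.criticality, reverse=True):
              if item.size_mb <= remaining:
                  remaining -= item.size_mb
                  log["backed_up"].append(item.name)   # flushed to persistent media
              else:
                  log["skipped"].append(item.name)     # not available after restart
          return log

      print(backup_on_battery(
          [BackupItem("txn-journal", 3, 200), BackupItem("metrics", 1, 500),
           BackupItem("session-cache", 2, 300)],
          battery_budget_mb=450))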
  • Patent number: 11768705
    Abstract: Methods, apparatus, systems and machine-readable storage media of an edge computing device which is enabled to access and select the use of local or remote acceleration resources for edge computing processing are disclosed. In an example, an edge computing device obtains first telemetry information that indicates availability of local acceleration circuitry to execute a function, and obtains second telemetry information that indicates availability of a remote acceleration function to execute the function. An estimated time (and cost or other identifiable or estimable considerations) to execute the function at the respective location is identified. The use of the local acceleration circuitry or the remote acceleration resource is selected based on the estimated time and other appropriate factors in relation to a service level agreement.
    Type: Grant
    Filed: October 18, 2021
    Date of Patent: September 26, 2023
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Ned M. Smith, Thomas Willhalm, Timothy Verrall
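    A minimal Python sketch of choosing local versus remote acceleration from telemetry-derived time estimates checked against an SLA, as the abstract describes; the telemetry fields and the estimate model are hypothetical.

      # Hypothetical sketch of SLA-aware local/remote acceleration selection; not from the patent.
      def estimate_seconds(work_units: float, telemetry: dict) -> float:
          # Queueing delay plus execution time at the currently available rate.
          return telemetry["queue_delay_s"] + work_units / telemetry["units_per_s"]

      def choose_acceleration(work_units, local_telemetry, remote_telemetry, sla_s):
          local_t = estimate_seconds(work_units, local_telemetry)
          remote_t = (estimate_seconds(work_units, remote_telemetry)
                      + remote_telemetry["network_rtt_s"])
          candidates = [(t, name) for t, name in ((local_t, "local"), (remote_t, "remote"))
                        if t <= sla_s]
          if not candidates:
              return "reject"            # neither option can meet the SLA
          return min(candidates)[1]      # fastest option that satisfies the SLA

      print(choose_acceleration(
          work_units=1000,
          local_telemetry={"queue_delay_s": 2.0, "units_per_s": 100},
          remote_telemetry={"queue_delay_s": 0.1, "units_per_s": 1000, "network_rtt_s": 0.5},
          sla_s=5.0))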
  • Publication number: 20230267004
    Abstract: Various approaches for implementing multi-tenant data protection are described. In an edge computing system deployment, a system includes memory and processing circuitry coupled to the memory. The processing circuitry is configured to obtain a workflow execution plan that includes workload metadata defining a plurality of workloads associated with a plurality of edge service instances executing respectively on one or more edge computing devices. The workload metadata is translated to obtain workload configuration information for the plurality of workloads. The workload configuration information identifies a plurality of memory access configurations and service authorizations identifying at least one edge service instance authorized to access one or more of the memory access configurations. The memory is partitioned into a plurality of shared memory regions using the memory access configurations.
    Type: Application
    Filed: May 1, 2023
    Publication date: August 24, 2023
    Inventors: Kshitij Arun Doshi, Ned M. Smith, Francesc Guim Bernat, Timothy Verrall
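    A minimal Python sketch of partitioning memory into shared regions and checking service authorizations on access, mirroring the abstract; the SharedMemoryRegions class, region sizes, and authorization table are hypothetical.

      # Hypothetical sketch of authorized shared-memory regions; not from the filing.
      class SharedMemoryRegions:
          def __init__(self, access_configs, authorizations):
              # access_configs: region name -> size in bytes
              # authorizations: region name -> set of edge service instances allowed in
              self.regions = {name: bytearray(size) for name, size in access_configs.items()}
              self.authorizations = authorizations

          def write(self, service_id: str, region: str, offset: int, data: bytes):
              # "A memory access request for accessing one of the shared memory
              # regions is processed based on the service authorizations."
              if service_id not in self.authorizations.get(region, set()):
                  raise PermissionError(f"{service_id} may not access {region}")
              self.regions[region][offset:offset + len(data)] = data

      mem = SharedMemoryRegions(
          access_configs={"vision-shared": 4096, "telemetry-shared": 1024},
          authorizations={"vision-shared": {"svc-camera", "svc-detector"},
                          "telemetry-shared": {"svc-metrics"}})
      mem.write("svc-camera", "vision-shared", 0, b"frame-0")
      try:
          mem.write("svc-metrics", "vision-shared", 0, b"oops")
      except PermissionError as err:
          print("rejected:", err)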
  • Publication number: 20230205718
    Abstract: Examples described herein relate to a configurable switch with dynamically configurable device connections to a processor socket, where the device connections are configured to meet service level agreement (SLA) parameters of a first service executing on the processor socket. For a second service that is to execute on the processor socket and the second service is higher priority than the first service, device connections of the switch to the processor socket are dynamically reconfigured to meet SLA parameters of the second service.
    Type: Application
    Filed: December 24, 2021
    Publication date: June 29, 2023
    Inventors: Francesc GUIM BERNAT, Deepak S, Kannan Babu R. RAMIA, Palaniappan RAMANATHAN, Timothy VERRALL
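    A minimal Python sketch of reassigning switch-to-socket device connections when a higher-priority service arrives, as outlined in the abstract; the lane-count SLA model and the ConfigurableSwitch name are hypothetical.

      # Hypothetical sketch of priority-driven switch reconfiguration; not from the filing.
      class ConfigurableSwitch:
          def __init__(self, total_lanes: int):
              self.total_lanes = total_lanes
              self.services = {}           # service name -> {"priority": p, "lanes": n}

          def free_lanes(self) -> int:
              return self.total_lanes - sum(s["lanes"] for s in self.services.values())

          def admit(self, name: str, priority: int, lanes_needed: int) -> None:
              # "Device connections of the switch to the processor socket are
              # dynamically reconfigured to meet SLA parameters of the second
              # service": reclaim lanes from lower-priority services as needed.
              while self.free_lanes() < lanes_needed:
                  lower = [n for n, s in self.services.items() if s["priority"] < priority]
                  if not lower:
                      raise RuntimeError("cannot meet SLA without preempting equal/higher priority")
                  victim = min(lower, key=lambda n: self.services[n]["priority"])
                  self.services[victim]["lanes"] -= 1     # shrink the victim's allocation
                  if self.services[victim]["lanes"] == 0:
                      del self.services[victim]
              self.services[name] = {"priority": priority, "lanes": lanes_needed}

      switch = ConfigurableSwitch(total_lanes=8)
      switch.admit("best-effort-analytics", priority=1, lanes_needed=6)
      switch.admit("latency-critical-ran", priority=5, lanes_needed=4)
      print(switch.services)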
  • Publication number: 20230205604
    Abstract: Technologies for providing efficient migration of services include a server device. The server device includes compute engine circuitry to execute a set of services on behalf of a terminal device and migration accelerator circuitry. The migration accelerator circuitry is to determine whether execution of the services is to be migrated from an edge station in which the present server device is located to a second edge station in which a second server device is located, determine a prioritization of the services executed by the server device, and send, in response to a determination that the services are to be migrated and as a function of the determined prioritization, data utilized by each service to the second server device of the second edge station to migrate the services. Other embodiments are also described and claimed.
    Type: Application
    Filed: December 21, 2022
    Publication date: June 29, 2023
    Inventors: Francesc Guim Bernat, Karthik Kumar, Susanne M. Balle, Ignacio Astilleros Diez, Timothy Verrall, Ned M. Smith
  • Patent number: 11669368
    Abstract: In an edge computing system deployment, a system includes memory and processing circuitry coupled to the memory. The processing circuitry is configured to obtain a workflow execution plan that includes workload metadata defining a plurality of workloads associated with a plurality of edge service instances executing respectively on one or more edge computing devices. The workload metadata is translated to obtain workload configuration information for the plurality of workloads. The workload configuration information identifies a plurality of memory access configurations and service authorizations identifying at least one edge service instance authorized to access one or more of the memory access configurations. The memory is partitioned into a plurality of shared memory regions using the memory access configurations. A memory access request for accessing one of the shared memory regions is processed based on the service authorizations.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: June 6, 2023
    Assignee: Intel Corporation
    Inventors: Kshitij Arun Doshi, Ned M. Smith, Francesc Guim Bernat, Timothy Verrall
  • Publication number: 20230156826
    Abstract: Various approaches for the integration and use of edge computing operations in satellite communication environments are discussed herein. For example, connectivity and computing approaches are discussed with reference to: identifying satellite coverage and compute operations available in low earth orbit (LEO) satellites, establishing connection streams via LEO satellite networks, identifying and implementing geofences for LEO satellites, coordinating and planning data transfers across ephemeral satellite connected devices, service orchestration via LEO satellites based on data cost, handover of compute and data operations in LEO satellite networks, and managing packet processing, among other aspects.
    Type: Application
    Filed: December 24, 2020
    Publication date: May 18, 2023
    Inventors: Stephen T. Palermo, Francesc Guim Bernat, Marcos E. Carranza, Kshitij Arun Doshi, Cesar Martinez-Spessot, Thijs Metsch, Ned M. Smith, Srikathyayani Srikanteswara, Timothy Verrall, Rita H. Wouhaybi, Yi Zhang, Weiqiang MA, Atul Kwatra
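    A minimal Python sketch of one aspect listed in the abstract, orchestration over LEO satellite contacts based on coverage and data cost; the SatelliteContact fields, link rate, and selection rule are hypothetical.

      # Hypothetical sketch of cost-aware LEO contact selection; not from the filing.
      from dataclasses import dataclass

      @dataclass
      class SatelliteContact:
          sat_id: str
          visible_s: float        # remaining seconds of coverage at this location
          cost_per_mb: float      # data cost over this link

      def pick_contact(contacts, job_mb, link_mbps=10.0):
          # Keep only contacts whose visibility window is long enough to move the
          # data, then choose the cheapest one ("service orchestration via LEO
          # satellites based on data cost").
          transfer_s = job_mb * 8 / link_mbps
          feasible = [c for c in contacts if c.visible_s >= transfer_s]
          return min(feasible, key=lambda c: c.cost_per_mb) if feasible else None

      print(pick_contact(
          [SatelliteContact("leo-17", visible_s=120, cost_per_mb=0.02),
           SatelliteContact("leo-42", visible_s=30, cost_per_mb=0.01)],
          job_mb=100))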
  • Publication number: 20230142539
    Abstract: Example edge gateway circuitry to schedule service requests in a network computing system includes: gateway-level hardware queue manager circuitry to: parse the service requests based on service parameters in the service requests; and schedule the service requests in a queue based on the service parameters, the service requests received from client devices; and hardware queue manager communication interface circuitry to send ones of the service requests from the queue to rack-level hardware queue manager circuitry in a physical rack, the ones of the service requests corresponding to functions as a service provided by resources in the physical rack.
    Type: Application
    Filed: December 19, 2022
    Publication date: May 11, 2023
    Inventors: Francesc Guim Bernat, Karthik Kumar, Suraj Prabhakaran, Ignacio Astilleros Diez, Timothy Verrall
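    A minimal Python sketch of a gateway-level queue that orders service requests by their service parameters and forwards them to a rack-level manager, per the abstract; the GatewayQueueManager name and the (priority, deadline) key are hypothetical.

      # Hypothetical sketch of gateway-level request scheduling; not from the filing.
      import heapq

      class GatewayQueueManager:
          def __init__(self, send_to_rack):
              self._queue = []              # min-heap keyed by (priority, deadline)
              self._seq = 0
              self._send_to_rack = send_to_rack

          def enqueue(self, request: dict) -> None:
              # "Parse the service requests based on service parameters" and
              # "schedule the service requests in a queue based on the service
              # parameters".
              key = (request["priority"], request["deadline_ms"])
              heapq.heappush(self._queue, (key, self._seq, request))
              self._seq += 1

          def dispatch(self) -> None:
              # Send queued requests onward to rack-level hardware queue
              # manager circuitry serving functions as a service.
              while self._queue:
                  _, _, request = heapq.heappop(self._queue)
                  self._send_to_rack(request)

      gw = GatewayQueueManager(send_to_rack=lambda r: print("to rack:", r["function"]))
      gw.enqueue({"function": "transcode", "priority": 2, "deadline_ms": 500})
      gw.enqueue({"function": "detect-objects", "priority": 1, "deadline_ms": 50})
      gw.dispatch()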
  • Patent number: 11625277
    Abstract: Systems and methods may be used to determine where to run a service based on workload-based conditions or system-level conditions. An example method may include determining whether power available to a resource of a compute device satisfies a target power, for example to satisfy a target performance for a workload. When the power available is insufficient, an additional resource may be provided, for example on a remote device from the compute device. The additional resource may be used as a replacement for the resource of the compute device or to augment the resource of the compute device.
    Type: Grant
    Filed: May 20, 2020
    Date of Patent: April 11, 2023
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Kshitij Arun Doshi, Bassam N. Coury, Suraj Prabhakaran, Timothy Verrall
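    A minimal Python sketch of the power check in the abstract, falling back to a remote resource when the local power budget cannot meet the target; the wattage figures and the place_workload name are hypothetical.

      # Hypothetical sketch of power-aware workload placement; not from the patent.
      def place_workload(available_w: float, target_w: float, remote_available: bool) -> str:
          # "Determining whether power available to a resource of a compute
          # device satisfies a target power ... to satisfy a target performance."
          if available_w >= target_w:
              return "run locally"
          if remote_available:
              # "An additional resource may be provided, for example on a remote
              # device ... as a replacement ... or to augment."
              return "augment with remote resource" if available_w > 0 else "run remotely"
          return "degrade or defer workload"

      print(place_workload(available_w=35.0, target_w=60.0, remote_available=True))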
  • Publication number: 20230108421
    Abstract: Methods and systems are disclosed that allow a user to see the people or groups who have access to files maintained by a plurality of cloud content sharing services. In particular, the user may see which specific party has access to each particular file or directory, regardless of which of the multiple cloud content sharing services is involved. Moreover, a user interface and an exposed application program interface allow the user to manipulate the permissions, e.g., granting access to a file or directory to another person or group. The user interface may also allow the user to terminate access to the file or directory for a person or group. The user's action to change a permission may be effected independently of the particular cloud content sharing service.
    Type: Application
    Filed: September 29, 2022
    Publication date: April 6, 2023
    Inventors: Steven J. Birkel, Rita H. Wouhaybi, Timothy Verrall, Mrigank Shekhar
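    A minimal Python sketch of aggregating per-file permissions across several cloud content sharing services behind one interface, as the abstract describes; the ProviderAdapter and UnifiedSharingView names and their methods are hypothetical and do not correspond to any real provider's API.

      # Hypothetical sketch of a unified cross-service permissions view; not from the filing.
      class ProviderAdapter:
          def __init__(self, name, grants):
              self.name = name
              self.grants = grants            # path -> set of people/groups with access

          def who_has_access(self, path):
              return self.grants.get(path, set())

          def revoke(self, path, party):
              self.grants.get(path, set()).discard(party)

      class UnifiedSharingView:
          def __init__(self, adapters):
              self.adapters = adapters

          def who_has_access(self, path):
              # "See what specific party has access to each particular file ...
              # regardless of [the] multiple cloud content sharing services involved."
              return {a.name: a.who_has_access(path) for a in self.adapters}

          def revoke_everywhere(self, path, party):
              # Terminate access independently of the particular service.
              for a in self.adapters:
                  a.revoke(path, party)

      view = UnifiedSharingView([
          ProviderAdapter("drive-like", {"/plans.docx": {"alice", "team-eng"}}),
          ProviderAdapter("box-like", {"/plans.docx": {"alice", "bob"}})])
      view.revoke_everywhere("/plans.docx", "alice")
      print(view.who_has_access("/plans.docx"))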
  • Patent number: 11611491
    Abstract: An architecture to enable verification, ranking, and identification of respective edge service properties and associated service level agreement (SLA) properties, such as in an edge cloud or other edge computing environment, is disclosed. In an example, management and use of service information for an edge service includes: providing SLA information for an edge service to an operational device, for accessing an edge service hosted in an edge computing environment, with the SLA information providing reputation information for computing functions of the edge service according to an identified SLA; receiving a service request for use of the computing functions of the edge service, under the identified SLA; requesting, from the edge service, performance of the computing functions of the edge service according to the service request; and tracking the performance of the computing functions of the edge service according to the service request and compliance with the identified SLA.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: March 21, 2023
    Assignee: Intel Corporation
    Inventors: Ned M. Smith, Ben McCahill, Francesc Guim Bernat, Felipe Pastor Beneyto, Karthik Kumar, Timothy Verrall
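    A minimal Python sketch of tracking an edge service's compliance with an identified SLA and deriving the reputation information the abstract mentions; the SlaTracker name and the latency-based compliance metric are hypothetical.

      # Hypothetical sketch of SLA compliance tracking and reputation; not from the patent.
      class SlaTracker:
          def __init__(self, service_id: str, max_latency_ms: float):
              self.service_id = service_id
              self.max_latency_ms = max_latency_ms
              self.total = 0
              self.compliant = 0

          def record(self, observed_latency_ms: float) -> None:
              # "Tracking the performance of the computing functions of the edge
              # service according to the service request and compliance with the
              # identified SLA."
              self.total += 1
              if observed_latency_ms <= self.max_latency_ms:
                  self.compliant += 1

          def reputation(self) -> dict:
              # Reputation information that could back verification and ranking
              # of edge services for an operational device.
              rate = self.compliant / self.total if self.total else None
              return {"service": self.service_id, "sla_compliance_rate": rate}

      tracker = SlaTracker("object-detection@edge-cell-7", max_latency_ms=20.0)
      for latency in (12.0, 18.5, 31.0, 9.9):
          tracker.record(latency)
      print(tracker.reputation())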
  • Publication number: 20230045505
    Abstract: Technologies for accelerated orchestration and attestation include multiple edge devices. An edge appliance device performs an attestation process with each of its components to generate component certificates. The edge appliance device generates an appliance certificate that is indicative of the component certificates and a current utilization of the edge appliance device and provides the appliance certificate to a relying party. The relying party may be an edge orchestrator device. The edge orchestrator device receives a workload scheduling request with a service level agreement requirement. The edge orchestrator device verifies the appliance certificate and determines whether the service level agreement requirement is satisfied based on the appliance certificate. If satisfied, the workload is scheduled to the edge appliance device. Attestation and generation of the appliance certificate by the edge appliance device may be performed by an accelerator of the edge appliance device.
    Type: Application
    Filed: August 19, 2022
    Publication date: February 9, 2023
    Inventors: Francesc Guim Bernat, Kapil Sood, Tarun Viswanathan, Kshitij Doshi, Timothy Verrall, Ned M. Smith, Manish Dave, Alex Vul
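    A minimal Python sketch of the attestation flow in the abstract, rolling component certificates into an appliance certificate that carries current utilization and checking it against an SLA requirement before scheduling; SHA-256 hashing stands in for real signing, and all structures are hypothetical.

      # Hypothetical sketch of appliance attestation and SLA-gated scheduling; not from the filing.
      import hashlib, json

      def component_certificate(component_id: str, measurement: bytes) -> dict:
          # One per-component attestation result.
          return {"component": component_id,
                  "digest": hashlib.sha256(measurement).hexdigest()}

      def appliance_certificate(component_certs: list, utilization: float) -> dict:
          # "Generates an appliance certificate that is indicative of the component
          # certificates and a current utilization of the edge appliance device."
          blob = json.dumps(component_certs, sort_keys=True).encode()
          return {"components_digest": hashlib.sha256(blob).hexdigest(),
                  "utilization": utilization}

      def orchestrator_schedule(cert: dict, sla_max_utilization: float) -> bool:
          # "Verifies the appliance certificate and determines whether the service
          # level agreement requirement is satisfied based on the appliance certificate."
          verified = len(cert.get("components_digest", "")) == 64   # stand-in for signature check
          return verified and cert["utilization"] <= sla_max_utilization

      certs = [component_certificate("nic-accelerator", b"fw-v1.2"),
               component_certificate("cpu-enclave", b"ucode-42")]
      appliance = appliance_certificate(certs, utilization=0.35)
      print("schedule workload:", orchestrator_schedule(appliance, sla_max_utilization=0.6))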