Patents by Inventor Timothy Verrall

Timothy Verrall has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250097120
    Abstract: Examples include techniques for artificial intelligence (AI) capabilities at a network switch. These examples include receiving a request to register a neural network for loading to an inference resource located at the network switch and loading the neural network based on information included in the request to support an AI service to be provided to users requesting the AI service.
    Type: Application
    Filed: December 2, 2024
    Publication date: March 20, 2025
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Suraj Prabhakaran, Kshitij A. Doshi, Brinda Ganesh, Timothy Verrall
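
A minimal Python sketch of the registration-and-load flow described in the entry above; the class and field names (RegistrationRequest, SwitchInferenceResource) are invented for illustration and are not the patent's terminology.

```python
from dataclasses import dataclass, field

@dataclass
class RegistrationRequest:
    model_id: str
    model_bytes: bytes           # serialized neural network
    service_name: str            # AI service the model backs

@dataclass
class SwitchInferenceResource:
    loaded: dict = field(default_factory=dict)

    def load(self, req: RegistrationRequest) -> None:
        # Load the neural network described in the request onto the
        # switch-resident inference resource.
        self.loaded[req.service_name] = (req.model_id, req.model_bytes)

    def infer(self, service_name: str, payload: bytes) -> bytes:
        model_id, _model = self.loaded[service_name]
        # Placeholder inference: a real switch would run the model here.
        return b"result-from-" + model_id.encode()

resource = SwitchInferenceResource()
resource.load(RegistrationRequest("resnet-v1", b"\x00\x01", "image-classify"))
print(resource.infer("image-classify", b"pixels"))
```
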
  • Patent number: 12244507
    Abstract: Systems and techniques for intelligent data forwarding in edge networks are described herein. A request may be received from an edge user device for a service via a first endpoint. A time value may be calculated using a timestamp of the request. Motion characteristics may be determined for the edge user device using the time value. A response to the request may be transmitted to a second endpoint based on the motion characteristics.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: March 4, 2025
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Ned M. Smith, Kshitij Arun Doshi, Suraj Prabhakaran, Timothy Verrall, Kapil Sood, Tarun Viswanathan
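
A rough sketch of the forwarding decision the abstract above describes, simplified to one-dimensional motion; the speed threshold and endpoint names are illustrative assumptions, not values from the patent.

```python
import time

def motion_characteristics(prev_pos: float, prev_ts: float,
                           cur_pos: float, cur_ts: float) -> float:
    """Estimate speed (units/second) from two timestamped positions."""
    dt = max(cur_ts - prev_ts, 1e-6)
    return abs(cur_pos - prev_pos) / dt

def choose_endpoint(speed: float, first: str, second: str,
                    handoff_speed: float = 5.0) -> str:
    # Illustrative threshold: fast-moving devices are likely to have left the
    # first endpoint's coverage by the time the response is ready, so the
    # response is transmitted to the second endpoint instead.
    return second if speed > handoff_speed else first

now = time.time()
speed = motion_characteristics(0.0, now - 2.0, 30.0, now)
print(choose_endpoint(speed, "edge-a", "edge-b"))  # -> edge-b
```
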
  • Publication number: 20250071023
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to manage telemetry data in an edge environment. An example apparatus includes a publisher included in a first edge platform to publish a wish list obtained from a consumer, the wish list including tasks to execute; a commitment determiner to determine whether a commitment is viable to execute at least one of the tasks in the wish list, the commitment to be processed to identify the telemetry data; and a communication interface to establish a communication channel to facilitate transmission of the telemetry data from the first edge platform to a second edge platform.
    Type: Application
    Filed: August 22, 2023
    Publication date: February 27, 2025
    Inventors: Kshitij Doshi, Francesc Guim Bernat, Ned Smith, Timothy Verrall, Rajesh Gadiyar
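
A toy sketch of the wish-list/commitment flow in the entry above; the task fields, capacity figures, and the list standing in for the communication channel are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    cpu_needed: float   # cores

def viable_commitments(wish_list: list[Task], free_cores: float) -> list[Task]:
    """Return the tasks this edge platform can commit to executing."""
    committed, remaining = [], free_cores
    for task in wish_list:
        if task.cpu_needed <= remaining:
            committed.append(task)
            remaining -= task.cpu_needed
    return committed

def send_telemetry(channel: list, committed: list[Task]) -> None:
    # Stand-in for the communication channel between the first and second
    # edge platforms.
    for task in committed:
        channel.append({"task": task.name, "cpu": task.cpu_needed})

channel: list = []
wish = [Task("transcode", 2.0), Task("analytics", 6.0), Task("cache-warm", 1.0)]
send_telemetry(channel, viable_commitments(wish, free_cores=4.0))
print(channel)  # telemetry for the tasks the platform committed to
```
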
  • Patent number: 12231487
    Abstract: Methods and apparatus for scale out hardware-assisted tracing schemes for distributed and scale-out applications. In connection with execution of one or more applications using a distributed processing environment including multiple compute nodes, telemetry and tracing data are obtained using hardware-based logic on the compute nodes. Processes associated with applications are identified, as well as the compute nodes on which instances of the processes are executed. Process instances are associated with process application space identifiers (PASIDs), while processes used for an application are associated with a global group identifier (GGID) that serves as an application ID. The PASIDs and GGIDs are used to store telemetry and/or tracing data on the compute nodes and/or forward such data to a tracing server in a manner that enables telemetry and/or tracing data to be aggregated on an application basis.
    Type: Grant
    Filed: February 13, 2020
    Date of Patent: February 18, 2025
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Patrick Kutch, Trevor Cooper, Timothy Verrall, Karthik Kumar
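
A simplified sketch of the per-application roll-up step described above; the record layout is a stand-in for the hardware-produced tracing data, with made-up GGID, PASID, and metric values.

```python
from collections import defaultdict

# Each record: (ggid, pasid, node, metric_value) -- a stand-in for the
# hardware-produced telemetry/tracing data described in the abstract.
records = [
    ("app-42", "pasid-1", "node-a", 3.0),
    ("app-42", "pasid-2", "node-b", 4.5),
    ("app-77", "pasid-9", "node-a", 0.75),
]

def aggregate_by_application(recs):
    """Aggregate telemetry on an application (GGID) basis, regardless of
    which compute node or process instance (PASID) produced it."""
    totals = defaultdict(float)
    for ggid, _pasid, _node, value in recs:
        totals[ggid] += value
    return dict(totals)

print(aggregate_by_application(records))  # {'app-42': 7.5, 'app-77': 0.75}
```
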
  • Patent number: 12204396
    Abstract: Various aspects of methods, systems, and use cases include coordinating actions at an edge device based on power production in a distributed edge computing environment. A method may include identifying a long-term service level agreement (SLA) for a component of an edge device, and determining a list of resources related to the component using the long-term SLA. The method may include scheduling a task for the component based on the long-term SLA, a current battery level at the edge device, a current energy harvest rate at the edge device, or an amount of power required to complete the task. A resource of the list of resources may be used to initiate the task, such as according to the scheduling.
    Type: Grant
    Filed: December 23, 2020
    Date of Patent: January 21, 2025
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Timothy Verrall
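
A minimal sketch of the scheduling check outlined in the entry above: a task runs now only if the current battery level plus the expected harvest over the task's duration covers the power it needs. The units, reserve, and example figures are illustrative assumptions.

```python
def can_schedule(battery_wh: float, harvest_w: float,
                 task_power_w: float, task_hours: float,
                 reserve_wh: float = 5.0) -> bool:
    """Decide whether an edge device can run a task without draining below
    a reserve kept to keep meeting its long-term SLA (illustrative model)."""
    energy_needed = task_power_w * task_hours
    energy_available = battery_wh + harvest_w * task_hours - reserve_wh
    return energy_available >= energy_needed

# Defer the task: 10 Wh battery + 2 W harvest for 3 h cannot cover 4 W * 3 h
# once a 5 Wh reserve is held back.
print(can_schedule(battery_wh=10, harvest_w=2, task_power_w=4, task_hours=3))
```
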
  • Patent number: 12189512
    Abstract: Examples described herein relate to an apparatus that includes a memory and at least one processor where the at least one processor is to receive configuration to gather performance data for a function from one or more platforms and, during execution of the function, collect performance data for the function and store the performance data after termination of execution of the function. Some examples include an interface coupled to the at least one processor and the interface is to receive one or more of: an identifier of a function, resources to be tracked as part of function execution, list of devices to be tracked as part of function execution, type of monitoring of function execution, or meta-data to identify when the function is complete. Performance data can be accessed to determine performance of multiple executions of the short-lived function.
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: January 7, 2025
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Steven Briscoe, Karthik Kumar, Alexander Bachmutsky, Timothy Verrall
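
A sketch of the configuration the interface above receives and of keeping data after each run so multiple executions can be compared; the structure, field names, and sample metrics are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class MonitorConfig:
    function_id: str
    resources: list[str]          # e.g. ["cpu", "memory-bandwidth"] (illustrative)
    devices: list[str]            # devices tracked during execution
    monitoring_type: str          # e.g. "sampled" or "exhaustive"
    completion_metadata: dict     # how to tell the function finished

# Performance data persists after each short-lived execution terminates,
# keyed by function, so multiple runs can be compared later.
performance_store: dict[str, list[dict]] = {}

def record_execution(cfg: MonitorConfig, samples: dict) -> None:
    performance_store.setdefault(cfg.function_id, []).append(samples)

cfg = MonitorConfig("resize-image", ["cpu"], ["nic0"], "sampled",
                    {"signal": "exit"})
record_execution(cfg, {"cpu": 0.82, "duration_ms": 14})
record_execution(cfg, {"cpu": 0.79, "duration_ms": 16})
print(len(performance_store["resize-image"]))  # data from two executions
```
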
  • Patent number: 12158966
    Abstract: Methods and systems that allow a user to see the people or groups who have access to files that are maintained by a plurality of cloud content sharing services. In particular, the user may see what specific party has access to each particular file or directory, regardless of multiple cloud content sharing services involved. Moreover, a user interface and exposed application program interface allows the user to manipulate the permissions, e.g., granting access, to another person or group, to a file or directory. The user interface may also allow the user to terminate access to the file or directory for a person or group. The user's action to change a permission may be effected independently of the particular cloud content sharing service.
    Type: Grant
    Filed: September 29, 2022
    Date of Patent: December 3, 2024
    Assignee: Intel Corporation
    Inventors: Steven J. Birkel, Rita H. Wouhaybi, Timothy Verrall, Mrigank Shekhar
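
A toy sketch of the exposed interface described above: one call lists who can reach a file across every configured sharing service, and grant/revoke operations are applied without touching each service's own UI. The SharingService class and service names stand in for real provider APIs.

```python
class SharingService:
    """Stand-in for a single cloud content sharing service's API."""
    def __init__(self, name: str):
        self.name = name
        self.acl: dict[str, set[str]] = {}   # path -> parties with access

    def grant(self, path: str, party: str) -> None:
        self.acl.setdefault(path, set()).add(party)

    def revoke(self, path: str, party: str) -> None:
        self.acl.get(path, set()).discard(party)

def who_has_access(services: list[SharingService], path: str) -> dict[str, set[str]]:
    """Aggregate, per service, the people or groups that can access a file."""
    return {s.name: set(s.acl.get(path, set())) for s in services}

drive, box = SharingService("drive"), SharingService("box")
drive.grant("/report.docx", "alice")
box.grant("/report.docx", "team-eng")
print(who_has_access([drive, box], "/report.docx"))
box.revoke("/report.docx", "team-eng")   # terminate access in one place
```
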
  • Publication number: 20240396852
    Abstract: There is disclosed in one example an application-specific integrated circuit (ASIC), including: an artificial intelligence (AI) circuit; and circuitry to: identify a flow, the flow including traffic diverted from a core cloud service of a network to be serviced by an edge node closer to an edge of the network than to the core of the network; receive telemetry related to the flow, the telemetry including fine-grained and flow-level network monitoring data for the flow; operate the AI circuit to predict, from the telemetry, a future service-level demand for the edge node; and cause a service parameter of the edge node to be tuned according to the prediction.
    Type: Application
    Filed: August 1, 2024
    Publication date: November 28, 2024
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Mark A. Schmisseur, Timothy Verrall
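
A rough software analogue of the control loop in the entry above, with a trivial trend extrapolation standing in for the AI circuit; the telemetry values, per-worker rate, and "worker count" service parameter are illustrative assumptions.

```python
def predict_demand(samples: list[float]) -> float:
    """Toy stand-in for the AI circuit: extrapolate the recent trend in
    per-interval requests observed in flow-level telemetry."""
    if len(samples) < 2:
        return samples[-1] if samples else 0.0
    trend = samples[-1] - samples[-2]
    return max(samples[-1] + trend, 0.0)

def tune_service_parameter(predicted_rps: float, per_worker_rps: float = 100.0) -> int:
    """Translate predicted demand into a service parameter, here a worker count."""
    return max(1, round(predicted_rps / per_worker_rps))

telemetry = [220.0, 260.0, 310.0]      # requests/s seen on the diverted flow
demand = predict_demand(telemetry)      # -> 360.0
print(tune_service_parameter(demand))   # -> 4 workers
```
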
  • Publication number: 20240385884
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed to estimate workload complexity. An example apparatus includes processor circuitry to perform at least one of first, second, or third operations to instantiate payload interface circuitry to extract workload objective information and service level agreement (SLA) criteria corresponding to a workload, and acceleration circuitry to select a pre-processing model based on (a) the workload objective information and (b) feedback corresponding to workload performance metrics of at least one prior workload execution iteration, execute the pre-processing model to calculate a complexity metric corresponding to the workload, and select candidate resources based on the complexity metric.
    Type: Application
    Filed: December 23, 2021
    Publication date: November 21, 2024
    Inventors: Karthik Kumar, Timothy Verrall, Thomas Willhalm, Francesc Guim Bernat, Zhongyan Lu
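
A minimal sketch of the loop in the abstract above: the workload objective and feedback pick a pre-processing model, the model yields a complexity metric, and candidate resources are chosen from it. The model names, thresholds, and scoring are invented for illustration.

```python
def select_preprocessing_model(objective: str, feedback_penalty: float) -> str:
    # Feedback from prior iterations that missed their metrics pushes the
    # selection toward the more thorough (and slower) model. Thresholds are
    # illustrative only.
    if objective == "latency" and feedback_penalty < 0.1:
        return "fast-estimator"
    return "detailed-estimator"

def complexity_metric(model: str, input_mb: float) -> float:
    factor = 0.5 if model == "fast-estimator" else 1.0
    return factor * input_mb

def candidate_resources(metric: float) -> list[str]:
    if metric < 100:
        return ["cpu-small"]
    return ["cpu-large", "gpu"]

model = select_preprocessing_model("latency", feedback_penalty=0.25)
metric = complexity_metric(model, input_mb=150)      # -> 150.0
print(candidate_resources(metric))                   # -> ['cpu-large', 'gpu']
```
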
  • Patent number: 12132664
    Abstract: Example edge gateway circuitry to schedule service requests in a network computing system includes: gateway-level hardware queue manager circuitry to: parse the service requests based on service parameters in the service requests; and schedule the service requests in a queue based on the service parameters, the service requests received from client devices; and hardware queue manager communication interface circuitry to send ones of the service requests from the queue to rack-level hardware queue manager circuitry in a physical rack, the ones of the service requests corresponding to functions as a service provided by resources in the physical rack.
    Type: Grant
    Filed: December 19, 2022
    Date of Patent: October 29, 2024
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Suraj Prabhakaran, Ignacio Astilleros Diez, Timothy Verrall
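
A small sketch of the two-level queueing described above: the gateway-level manager parses service parameters, orders requests by priority, and hands them down to a rack-level manager fronting the Function-as-a-Service resources. The priority field and the lists standing in for hardware queues are illustrative.

```python
import heapq

gateway_queue: list[tuple[int, int, dict]] = []   # (priority, seq, request)
rack_queue: list[dict] = []                       # stand-in for the rack-level queue
_seq = 0

def enqueue_at_gateway(request: dict) -> None:
    """Parse service parameters and schedule the request by priority."""
    global _seq
    priority = request["service_params"].get("priority", 10)
    heapq.heappush(gateway_queue, (priority, _seq, request))
    _seq += 1

def dispatch_to_rack(n: int) -> None:
    """Send the n most urgent requests to the rack-level queue manager."""
    for _ in range(min(n, len(gateway_queue))):
        _, _, request = heapq.heappop(gateway_queue)
        rack_queue.append(request)

enqueue_at_gateway({"fn": "thumbnail", "service_params": {"priority": 5}})
enqueue_at_gateway({"fn": "transcode", "service_params": {"priority": 1}})
dispatch_to_rack(1)
print(rack_queue[0]["fn"])   # -> transcode (higher urgency dispatched first)
```
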
  • Patent number: 12132825
    Abstract: Technologies for accelerated key caching in an edge hierarchy include multiple edge appliance devices organized in tiers. An edge appliance device receives a request for a key, such as a private key. The edge appliance device determines whether the key is included in a local key cache and, if not, requests the key from an edge appliance device included in an inner tier of the edge hierarchy. The edge appliance device may request the key from an edge appliance device included in a peer tier of the edge hierarchy. The edge appliance device may activate per-tenant accelerated logic to identify one or more keys in the key cache for eviction. The edge appliance device may activate per-tenant accelerated logic to identify one or more keys for pre-fetching. Those functions of the edge appliance device may be performed by an accelerator such as an FPGA. Other embodiments are described and claimed.
    Type: Grant
    Filed: December 23, 2021
    Date of Patent: October 29, 2024
    Assignee: Intel Corporation
    Inventors: Timothy Verrall, Thomas Willhalm, Francesc Guim Bernat, Karthik Kumar, Ned M. Smith, Rajesh Poornachandran, Kapil Sood, Tarun Viswanathan, John J. Browne, Patrick Kutch
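
A simplified sketch of the tiered lookup path in the entry above, with a plain LRU policy standing in for the per-tenant accelerated eviction logic and an object reference standing in for the inner-tier appliance; capacities and key names are made up.

```python
from collections import OrderedDict

class TierKeyCache:
    def __init__(self, capacity: int, inner_tier: "TierKeyCache | None" = None):
        self.capacity = capacity
        self.inner_tier = inner_tier
        self.cache: OrderedDict[str, bytes] = OrderedDict()

    def get(self, key_id: str) -> bytes | None:
        if key_id in self.cache:
            self.cache.move_to_end(key_id)        # refresh recency
            return self.cache[key_id]
        if self.inner_tier is None:
            return None
        key = self.inner_tier.get(key_id)          # request from an inner tier
        if key is not None:
            self.put(key_id, key)
        return key

    def put(self, key_id: str, key: bytes) -> None:
        self.cache[key_id] = key
        self.cache.move_to_end(key_id)
        if len(self.cache) > self.capacity:
            # Plain LRU stands in for the per-tenant accelerated eviction logic.
            self.cache.popitem(last=False)

inner = TierKeyCache(capacity=100)
inner.put("tenant-a/key-1", b"private-key-bytes")
outer = TierKeyCache(capacity=2, inner_tier=inner)
print(outer.get("tenant-a/key-1") is not None)     # local miss, inner-tier hit
```
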
  • Patent number: 12126592
    Abstract: Systems and methods may be used to provide neutral host edge services in an edge network. An example method may include generating a virtual machine for a communication service provider at a compute device. The method may include receiving a user packet originated at a user device associated with the communication service provider and identifying dynamic route information related to the user packet using the virtual machine corresponding to the communication service provider. Data may be output corresponding to the user packet based on the dynamic route information.
    Type: Grant
    Filed: December 26, 2020
    Date of Patent: October 22, 2024
    Assignee: Intel Corporation
    Inventors: Kannan Babu Ramia, Deepak S, Palaniappan Ramanathan, Timothy Verrall, Francesc Guim Bernat
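
A toy sketch of the neutral-host flow above: each communication service provider gets its own virtual machine, a user packet is matched to its provider's VM, and the dynamic route information held there decides where the packet is output. The ProviderVM class, carriers, and prefixes are invented for illustration.

```python
class ProviderVM:
    """Stand-in for the per-provider virtual machine holding route state."""
    def __init__(self, provider: str, routes: dict[str, str]):
        self.provider = provider
        self.routes = routes            # destination prefix -> next hop

    def lookup(self, destination: str) -> str:
        for prefix, next_hop in self.routes.items():
            if destination.startswith(prefix):
                return next_hop
        return "default-gw"

vms = {
    "carrier-a": ProviderVM("carrier-a", {"10.0.": "carrier-a-core"}),
    "carrier-b": ProviderVM("carrier-b", {"10.0.": "carrier-b-core"}),
}

def forward(packet: dict) -> str:
    vm = vms[packet["provider"]]        # VM for the user's service provider
    return vm.lookup(packet["destination"])

print(forward({"provider": "carrier-b", "destination": "10.0.3.7"}))
```
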
  • Patent number: 12120175
    Abstract: Technologies for providing selective offload of execution of an application to the edge include a device that includes circuitry to determine whether a section of an application to be executed by the device is available to be offloaded. Additionally, the circuitry is to determine one or more characteristics of an edge resource available to execute the section. Further, the circuitry is to determine, as a function of the one or more characteristics and a target performance objective associated with the section, whether to offload the section to the edge resource and offload, in response to a determination to offload the section, the section to the edge resource.
    Type: Grant
    Filed: March 7, 2022
    Date of Patent: October 15, 2024
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Ned Smith, Thomas Willhalm, Karthik Kumar, Timothy Verrall
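
A minimal sketch of the offload decision described above: a section is shipped to the edge only if it is marked offloadable and the edge resource's characteristics meet the section's target performance objective. The characteristics, objective, and thresholds are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class EdgeResource:
    est_latency_ms: float
    free_cores: int

@dataclass
class Section:
    offloadable: bool
    target_latency_ms: float
    cores_needed: int

def should_offload(section: Section, edge: EdgeResource) -> bool:
    """Offload only when the section allows it and the edge resource can
    meet the section's target performance objective (illustrative check)."""
    return (section.offloadable
            and edge.est_latency_ms <= section.target_latency_ms
            and edge.free_cores >= section.cores_needed)

section = Section(offloadable=True, target_latency_ms=20.0, cores_needed=2)
edge = EdgeResource(est_latency_ms=12.0, free_cores=4)
print(should_offload(section, edge))   # -> True, ship the section to the edge
```
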
  • Patent number: 12112201
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed to aggregate telemetry data in an edge environment. An example apparatus includes at least one processor, and memory including instructions that, when executed, cause the at least one processor to at least generate a composition for an edge service in the edge environment, the composition representative of a first interface to obtain the telemetry data, the telemetry data associated with resources of the edge service and including a performance metric, generate a resource object based on the performance metric, generate a telemetry object based on the performance metric, and generate a telemetry executable based on the composition, the composition including at least one of the resource object or the telemetry object, the telemetry executable to generate the telemetry data in response to the edge service executing a computing task distributed to the edge service based on the telemetry data.
    Type: Grant
    Filed: January 4, 2022
    Date of Patent: October 8, 2024
    Assignee: Intel Corporation
    Inventors: Kshitij Doshi, Francesc Guim Bernat, Timothy Verrall, Ned Smith, Rajesh Gadiyar
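
A loose sketch of the composition described above, with invented object shapes: a performance metric drives creation of a resource object and a telemetry object, and the resulting "executable" emits telemetry whenever the edge service runs a task.

```python
def make_composition(service: str, metric: str):
    # Hypothetical object shapes; the patent does not specify these fields.
    resource_obj = {"service": service, "tracks": metric}
    telemetry_obj = {"metric": metric, "samples": []}

    def telemetry_executable(task: str, value: float) -> dict:
        # Generates telemetry data each time the edge service executes a task.
        telemetry_obj["samples"].append({"task": task, metric: value})
        return {"resource": resource_obj, "telemetry": telemetry_obj}

    return telemetry_executable

emit = make_composition("object-detect", "latency_ms")
emit("frame-001", 9.4)
print(emit("frame-002", 8.7)["telemetry"]["samples"])
```
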
  • Patent number: 12095844
    Abstract: Methods, apparatus, systems and articles of manufacture for re-use of a container in an edge computing environment are disclosed. An example method includes detecting that a container executed at an edge node of a cloud computing environment is to be cleaned; deleting user data from the container, the deletion of the user data performed without deleting the container from the memory of the edge node; restoring settings of the container to a default state; and storing information identifying the container, the information including a flavor of the container, the storing of the information to enable the container to be re-used by a subsequent requestor.
    Type: Grant
    Filed: October 13, 2020
    Date of Patent: September 17, 2024
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Brinda Ganesh, Timothy Verrall, Ned Smith, Kshitij Doshi
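
A small sketch of the re-use path above: cleaning deletes user data and restores default settings without removing the container from memory, and the container's flavor is recorded so a later requestor with a matching flavor can reuse it. The Container record, flavor strings, and pool are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Container:
    container_id: str
    flavor: str                              # e.g. "python3.11-small" (illustrative)
    user_data: dict = field(default_factory=dict)
    settings: dict = field(default_factory=lambda: {"env": "default"})

reusable_pool: dict[str, list[Container]] = {}   # flavor -> cleaned containers

def clean_and_pool(c: Container) -> None:
    c.user_data.clear()                      # delete user data, keep the container
    c.settings = {"env": "default"}          # restore default settings
    reusable_pool.setdefault(c.flavor, []).append(c)

def acquire(flavor: str) -> Container | None:
    """Re-use a cleaned container of the requested flavor if one exists."""
    pool = reusable_pool.get(flavor, [])
    return pool.pop() if pool else None

clean_and_pool(Container("c-17", "python3.11-small", {"tmp": "user-bytes"}))
print(acquire("python3.11-small").container_id)   # -> c-17, reused not rebuilt
```
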
  • Patent number: 12088507
    Abstract: There is disclosed in one example an application-specific integrated circuit (ASIC), including: an artificial intelligence (AI) circuit; and circuitry to: identify a flow, the flow including traffic diverted from a core cloud service of a network to be serviced by an edge node closer to an edge of the network than to the core of the network; receive telemetry related to the flow, the telemetry including fine-grained and flow-level network monitoring data for the flow; operate the AI circuit to predict, from the telemetry, a future service-level demand for the edge node; and cause a service parameter of the edge node to be tuned according to the prediction.
    Type: Grant
    Filed: July 11, 2022
    Date of Patent: September 10, 2024
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Mark A. Schmisseur, Timothy Verrall
  • Patent number: 12068928
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to schedule workloads based on secure edge-to-device telemetry by calculating a difference between first telemetry data received from a first hardware device and an operating parameter, and computing an adjustment for a second hardware device based on the difference between the first telemetry data and the operating parameter.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: August 20, 2024
    Assignee: Intel Corporation
    Inventors: Kapil Sood, Timothy Verrall, Ned M. Smith, Tarun Viswanathan, Kshitij Doshi, Francesc Guim Bernat, John J. Browne, Katalin Bartfai-Walcott, Maryam Tahhan, Eoin Walsh, Damien Power
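
A tiny sketch of the calculation in the abstract above: the gap between the first device's telemetry reading and its operating parameter produces an adjustment applied to the second device's workload. The gain, units, and sign convention are illustrative assumptions.

```python
def compute_adjustment(first_telemetry: float, operating_param: float,
                       gain: float = 0.5) -> float:
    """Adjustment for a second device's workload, derived from how far the
    first device's telemetry sits from its operating parameter.
    Gain and units are illustrative, not from the patent."""
    difference = first_telemetry - operating_param
    return gain * difference     # first device over target -> second takes more

# First device reports 78 C against a 70 C operating parameter, so shift
# 4 units of work onto the second device.
print(compute_adjustment(first_telemetry=78.0, operating_param=70.0))  # -> 4.0
```
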
  • Publication number: 20240235959
    Abstract: Various systems and methods for autonomously monitoring intent-driven end-to-end (E2E) orchestration are described herein. An orchestration system is configured to: receive, at the orchestration system, an intent-based service level objective (SLO) for execution of a plurality of tasks; generate a common context that relates the SLO to the execution of the plurality of tasks; select a plurality of monitors to monitor the execution of the plurality of tasks, the plurality of monitors to log a plurality of key performance indicators; generate a domain context for the plurality of tasks; configure an analytics system with the plurality of monitors and the plurality of key performance indicators correlated by the domain contexts; deploy the plurality of monitors to collect telemetry; monitor the execution of the plurality of tasks using the telemetry from the plurality of monitors; and perform a responsive action based on the telemetry.
    Type: Application
    Filed: December 24, 2021
    Publication date: July 11, 2024
    Inventors: John Joseph Browne, Francesc Guim Bernat, Kshitij Arun Doshi, Adrian Hoban, David Cremins, Thijs Metsch, Susanne M. Balle, Christopher MacNamara, Przemyslaw Perycz, Emma Cecilia Collins, Timothy Verrall
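
A condensed sketch of the monitoring loop in the entry above: an intent-based SLO is turned into monitors and key performance indicators, telemetry from those monitors is checked against the SLO, and a responsive action fires when it drifts. The SLO shape, KPI names, and actions are invented for illustration.

```python
def select_monitors(slo: dict) -> list[str]:
    # One monitor per KPI the intent-based SLO cares about (hypothetical naming).
    return [f"monitor-{kpi}" for kpi in slo["kpis"]]

def check_and_respond(slo: dict, telemetry: dict) -> list[str]:
    """Compare collected telemetry against the SLO and return responsive actions."""
    actions = []
    for kpi, limit in slo["kpis"].items():
        if telemetry.get(kpi, 0.0) > limit:
            actions.append(f"scale-out tasks behind {kpi}")
    return actions

slo = {"intent": "checkout under 200 ms", "kpis": {"p99_latency_ms": 200}}
print(select_monitors(slo))                             # monitors to deploy
print(check_and_respond(slo, {"p99_latency_ms": 240}))  # responsive action
```
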
  • Publication number: 20240080246
    Abstract: Examples include techniques for artificial intelligence (AI) capabilities at a network switch. These examples include receiving a request to register a neural network for loading to an inference resource located at the network switch and loading the neural network based on information included in the request to support an AI service to be provided to users requesting the AI service.
    Type: Application
    Filed: October 2, 2023
    Publication date: March 7, 2024
    Inventors: Francesc Guim Bernat, Suraj Prabhakaran, Kshitij A. Doshi, Brinda Ganesh, Timothy Verrall
  • Patent number: 11922227
    Abstract: Technologies for providing efficient migration of services include a server device. The server device includes compute engine circuitry to execute a set of services on behalf of a terminal device and migration accelerator circuitry. The migration accelerator circuitry is to determine whether execution of the services is to be migrated from an edge station in which the present server device is located to a second edge station in which a second server device is located, determine a prioritization of the services executed by the server device, and send, in response to a determination that the services are to be migrated and as a function of the determined prioritization, data utilized by each service to the second server device of the second edge station to migrate the services. Other embodiments are also described and claimed.
    Type: Grant
    Filed: December 21, 2022
    Date of Patent: March 5, 2024
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Susanne M. Balle, Ignacio Astilleros Diez, Timothy Verrall, Ned M. Smith
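
A brief sketch of the migration step the abstract above describes: once migration to the second edge station is decided, each service's data is sent in priority order so the most important services come up first at the destination. The priority numbers, service names, and state payloads are invented for illustration.

```python
def migrate_services(services: dict[str, dict], destination: list) -> None:
    """Send each service's data to the second edge station, highest
    priority first (lower number = more important; illustrative convention)."""
    for name, info in sorted(services.items(), key=lambda s: s[1]["priority"]):
        destination.append({"service": name, "state": info["state"]})

second_station: list = []
migrate_services(
    {"video-analytics": {"priority": 2, "state": b"model+buffers"},
     "safety-alerts": {"priority": 1, "state": b"rules"}},
    second_station,
)
print([entry["service"] for entry in second_station])  # safety-alerts first
```
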