Patents by Inventor Timothy Verrall
Timothy Verrall has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11537447
Abstract: Technologies for providing efficient migration of services include a server device. The server device includes compute engine circuitry to execute a set of services on behalf of a terminal device and migration accelerator circuitry. The migration accelerator circuitry is to determine whether execution of the services is to be migrated from an edge station in which the present server device is located to a second edge station in which a second server device is located, determine a prioritization of the services executed by the server device, and send, in response to a determination that the services are to be migrated and as a function of the determined prioritization, data utilized by each service to the second server device of the second edge station to migrate the services. Other embodiments are also described and claimed.
Type: Grant
Filed: June 29, 2018
Date of Patent: December 27, 2022
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Karthik Kumar, Susanne M. Balle, Ignacio Astilleros Diez, Timothy Verrall, Ned M. Smith
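As a rough illustration of the migration flow described above (not the patented implementation), the sketch below decides whether to migrate, orders services by the determined prioritization, and transfers each service's data in that order. The Service class, the load threshold in should_migrate, and the send_state transport are all assumptions of the example.

```python
# Illustrative sketch only; names and thresholds are invented, not from the patent.
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    priority: int          # higher value = migrate first
    state: bytes           # data the service needs at the destination

def should_migrate(edge_load: float, threshold: float = 0.8) -> bool:
    """Stand-in for the migration-accelerator decision (e.g. station overload)."""
    return edge_load > threshold

def send_state(service: Service, destination: str) -> None:
    # Placeholder for the actual transfer to the second edge station's server.
    print(f"sending {len(service.state)} bytes of '{service.name}' to {destination}")

def migrate_services(services: list[Service], edge_load: float, destination: str) -> None:
    if not should_migrate(edge_load):
        return
    # Transfer per-service data in order of the determined prioritization.
    for svc in sorted(services, key=lambda s: s.priority, reverse=True):
        send_state(svc, destination)

migrate_services(
    [Service("video-analytics", 2, b"\x00" * 1024), Service("logging", 1, b"\x00" * 64)],
    edge_load=0.9,
    destination="edge-station-2",
)
```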
-
Patent number: 11539596
Abstract: Systems and techniques for end-to-end quality of service in edge computing environments are described herein. A set of telemetry measurements may be obtained for an ongoing dataflow between a device and a node of an edge computing system. A current key performance indicator (KPI) may be calculated for the ongoing dataflow. The current KPI may be compared to a target KPI to determine an urgency value. A set of resource quality metrics may be collected for resources of the network. The set of resource quality metrics may be evaluated with a resource adjustment model to determine available resource adjustments. A resource adjustment may be selected from the available resource adjustments based on an expected minimization of the urgency value. Delivery of the ongoing dataflow may be modified using the selected resource adjustment.
Type: Grant
Filed: October 5, 2021
Date of Patent: December 27, 2022
Assignee: Intel Corporation
Inventors: Kshitij Arun Doshi, Ned M. Smith, Francesc Guim Bernat, Timothy Verrall, Rajesh Gadiyar
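The control loop in this abstract can be pictured with a small sketch: compare the measured KPI against the target to get an urgency value, then choose the candidate adjustment expected to reduce that urgency the most. The urgency formula and the adjustments mapping below are illustrative stand-ins for the resource adjustment model, not the claimed method.

```python
# Toy stand-in for the KPI/urgency loop; numbers and names are assumptions.
def urgency(current_kpi: float, target_kpi: float) -> float:
    # e.g. KPI = latency in ms; positive urgency means the target is being missed.
    return max(0.0, current_kpi - target_kpi)

def select_adjustment(current_kpi, target_kpi, adjustments):
    """adjustments: name -> expected KPI after applying it (a stand-in for the
    resource adjustment model in the abstract)."""
    best_name, best_u = None, urgency(current_kpi, target_kpi)
    for name, expected_kpi in adjustments.items():
        u = urgency(expected_kpi, target_kpi)
        if u < best_u:
            best_name, best_u = name, u
    return best_name   # None means no adjustment improves the KPI

choice = select_adjustment(
    current_kpi=45.0,            # measured latency, ms
    target_kpi=30.0,
    adjustments={"add_bandwidth": 28.0, "move_to_closer_node": 33.0},
)
print(choice)  # -> add_bandwidth
```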
-
Patent number: 11533268
Abstract: An example system to schedule service requests in a network computing system using hardware queue managers includes: a gateway-level hardware queue manager in an edge gateway to schedule the service requests received from client devices in a queue; a rack-level hardware queue manager in a physical rack in communication with the edge gateway, the rack-level hardware queue manager to send a pull request to the gateway-level hardware queue manager for a first one of the service requests; and a drawer-level hardware queue manager in a drawer of the physical rack, the drawer-level hardware queue manager to send a second pull request to the rack-level hardware queue manager for the first one of the service requests, the drawer including a resource to provide a function as a service specified in the first one of the service requests.
Type: Grant
Filed: March 30, 2018
Date of Patent: December 20, 2022
Assignee: INTEL CORPORATION
Inventors: Francesc Guim Bernat, Karthik Kumar, Suraj Prabhakaran, Ignacio Astilleros Diez, Timothy Verrall
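A minimal software model of the pull-based hierarchy may help: a drawer-level queue manager pulls from the rack level, which in turn pulls from the gateway level. The QueueManager class and its enqueue/pull methods are invented for the sketch and do not reflect the hardware queue managers themselves.

```python
# Software analogue of the gateway -> rack -> drawer pull chain; illustrative only.
from collections import deque

class QueueManager:
    def __init__(self, name, upstream=None):
        self.name = name
        self.upstream = upstream      # the next manager toward the gateway
        self.queue = deque()

    def enqueue(self, request):       # used at the gateway level
        self.queue.append(request)

    def pull(self):
        """Serve a local request, or send a pull request upstream if empty."""
        if self.queue:
            return self.queue.popleft()
        if self.upstream is not None:
            request = self.upstream.pull()
            if request is not None:
                print(f"{self.name} pulled '{request}' from {self.upstream.name}")
            return request
        return None

gateway = QueueManager("gateway-hqm")
rack = QueueManager("rack-hqm", upstream=gateway)
drawer = QueueManager("drawer-hqm", upstream=rack)

gateway.enqueue("FaaS: resize-image")
drawer.pull()   # drawer pulls via the rack, which pulls from the gateway
```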
-
Patent number: 11522682
Abstract: Technologies for providing streamlined provisioning of accelerated functions in a disaggregated architecture include a compute sled. The compute sled includes a network interface controller and circuitry to determine whether to accelerate a function of a workload executed by the compute sled, and send, to a memory sled and in response to a determination to accelerate the function, a data set on which the function is to operate. The circuitry is also to receive, from the memory sled, a service identifier indicative of a memory location independent handle for data associated with the function, send, to a compute device, a request to schedule acceleration of the function on the data set, receive a notification of completion of the acceleration of the function, and obtain, in response to receipt of the notification and using the service identifier, a resultant data set from the memory sled. The resultant data set was produced by an accelerator device during acceleration of the function on the data set.
Type: Grant
Filed: May 27, 2021
Date of Patent: December 6, 2022
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Suraj Prabhakaran, Kshitij A. Doshi, Timothy Verrall
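The handshake in this abstract can be sketched as follows, assuming toy MemorySled and accelerate stand-ins: the compute sled stores the data set on a memory sled, receives a location-independent service identifier, requests acceleration on that data, and later fetches the result by the same identifier.

```python
# Hypothetical walk-through; classes and calls are assumptions of the sketch.
import uuid

class MemorySled:
    def __init__(self):
        self._store = {}

    def put(self, data: bytes) -> str:
        service_id = str(uuid.uuid4())       # handle independent of memory location
        self._store[service_id] = data
        return service_id

    def get(self, service_id: str) -> bytes:
        return self._store[service_id]

    def replace(self, service_id: str, data: bytes) -> None:
        self._store[service_id] = data

def accelerate(memory_sled: MemorySled, service_id: str) -> None:
    # Stand-in for the accelerator device operating on the data set in place.
    result = memory_sled.get(service_id).upper()
    memory_sled.replace(service_id, result)

memory_sled = MemorySled()
sid = memory_sled.put(b"raw sensor frame")      # compute sled offloads the data set
accelerate(memory_sled, sid)                    # acceleration is scheduled elsewhere
print(memory_sled.get(sid))                     # result obtained by service identifier
```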
-
Publication number: 20220345420
Abstract: There is disclosed in one example an application-specific integrated circuit (ASIC), including: an artificial intelligence (AI) circuit; and circuitry to: identify a flow, the flow including traffic diverted from a core cloud service of a network to be serviced by an edge node closer to an edge of the network than to the core of the network; receive telemetry related to the flow, the telemetry including fine-grained and flow-level network monitoring data for the flow; operate the AI circuit to predict, from the telemetry, a future service-level demand for the edge node; and cause a service parameter of the edge node to be tuned according to the prediction.
Type: Application
Filed: July 11, 2022
Publication date: October 27, 2022
Applicant: Intel Corporation
Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Mark A. Schmisseur, Timothy Verrall
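A loose sketch of the predict-and-tune loop, with a trivial moving average standing in for the AI circuit: telemetry samples feed a predictor, and the prediction is turned into a service parameter (here, a worker count). All names and numbers are assumptions of the example.

```python
# Moving-average "predictor" as a placeholder for the AI circuit; illustrative only.
from collections import deque

class DemandPredictor:
    def __init__(self, window: int = 5):
        self.samples = deque(maxlen=window)

    def observe(self, requests_per_sec: float) -> None:
        self.samples.append(requests_per_sec)

    def predict(self) -> float:
        return sum(self.samples) / len(self.samples) if self.samples else 0.0

def tune_edge_node(predicted_demand: float, capacity_per_worker: float = 100.0) -> int:
    # The "service parameter" here is simply the number of workers to provision.
    return max(1, round(predicted_demand / capacity_per_worker))

predictor = DemandPredictor()
for sample in [220, 260, 310, 340, 390]:      # flow-level telemetry samples
    predictor.observe(sample)
print(tune_edge_node(predictor.predict()))    # -> 3 workers for the predicted load
```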
-
Publication number: 20220337481
Abstract: Various approaches for deployment and use of configurable edge computing platforms are described. In an edge computing system, an edge computing device includes hardware resources that can be composed from a configuration of chiplets, as the chiplets are disaggregated for selective use and deployment (for compute, acceleration, memory, storage, or other resources). In an example, configuration operations are performed to: identify a condition for use of the hardware resource, based on an edge computing workload received at the edge computing device; obtain, determine, or identify properties of a configuration for the hardware resource that are available to be implemented with the chiplets, with the configuration enabling the hardware resource to satisfy the condition for use of the hardware resource; and compose the chiplets into the configuration, according to the properties of the configuration, to enable the use of the hardware resource for the edge computing workload.
Type: Application
Filed: May 5, 2022
Publication date: October 20, 2022
Inventors: Francesc Guim Bernat, Kshitij Arun Doshi, Ned M. Smith, Timothy Verrall, Uzair Qureshi
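The composition step might look roughly like the sketch below: given a workload's resource condition, pick chiplets whose combined properties satisfy it. The Chiplet fields, the catalogue, and the greedy selection are illustrative only.

```python
# Invented chiplet catalogue and greedy composition; not the patented mechanism.
from dataclasses import dataclass

@dataclass
class Chiplet:
    kind: str        # "compute", "memory", "accel", ...
    units: int       # capacity contributed by this chiplet

def compose(requirements: dict[str, int], available: list[Chiplet]) -> list[Chiplet]:
    """Pick chiplets until each required resource kind is covered, or fail."""
    chosen, provided = [], {kind: 0 for kind in requirements}
    for chiplet in available:
        if chiplet.kind in provided and provided[chiplet.kind] < requirements[chiplet.kind]:
            chosen.append(chiplet)
            provided[chiplet.kind] += chiplet.units
    if any(provided[k] < requirements[k] for k in requirements):
        raise RuntimeError("no chiplet configuration satisfies the workload condition")
    return chosen

config = compose(
    {"compute": 8, "memory": 16},
    [Chiplet("compute", 4), Chiplet("compute", 4), Chiplet("memory", 16), Chiplet("accel", 1)],
)
print([(c.kind, c.units) for c in config])
```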
-
Patent number: 11469953
Abstract: A computing apparatus, including: a hardware platform; and an interworking broker function (IBF) hosted on the hardware platform, the IBF including a translation driver (TD) associated with a legacy network appliance lacking native interoperability with an orchestrator, the IBF configured to: receive from the orchestrator a network function provisioning or configuration command for the legacy network appliance; operate the TD to translate the command to a format consumable by the legacy network appliance; and forward the command to the legacy network appliance.
Type: Grant
Filed: September 27, 2017
Date of Patent: October 11, 2022
Assignee: Intel Corporation
Inventors: John J. Browne, Timothy Verrall, Maryam Tahhan, Michael J. McGrath, Sean Harte, Kevin Devey, Jonathan Kenny, Christopher MacNamara
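A toy broker in the spirit of this abstract: an orchestrator command is passed through a per-appliance translation driver that rewrites it into the legacy appliance's own format before forwarding. The firewall-style CLI syntax shown is purely invented.

```python
# Illustrative interworking broker; the command formats are assumptions.
class LegacyFirewallDriver:
    """Translation driver (TD) for one legacy appliance type."""
    def translate(self, command: dict) -> str:
        if command["action"] == "add_rule":
            return f"fw rule add src={command['src']} dst={command['dst']} permit"
        raise ValueError(f"unsupported action: {command['action']}")

class InterworkingBroker:
    def __init__(self, driver, send):
        self.driver = driver
        self.send = send                      # transport to the legacy appliance

    def handle(self, orchestrator_command: dict) -> None:
        legacy_command = self.driver.translate(orchestrator_command)
        self.send(legacy_command)

broker = InterworkingBroker(LegacyFirewallDriver(), send=print)
broker.handle({"action": "add_rule", "src": "10.0.0.0/24", "dst": "10.0.1.5"})
```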
-
Patent number: 11461491
Abstract: Methods and systems that allow a user to see the people or groups who have access to files that are maintained by a plurality of cloud content sharing services. In particular, the user may see what specific party has access to each particular file or directory, regardless of the multiple cloud content sharing services involved. Moreover, a user interface and exposed application program interface allow the user to manipulate the permissions, e.g., granting another person or group access to a file or directory. The user interface may also allow the user to terminate access to the file or directory for a person or group. The user's action to change a permission may be effected independently of the particular cloud content sharing service.
Type: Grant
Filed: October 2, 2020
Date of Patent: October 4, 2022
Assignee: Intel Corporation
Inventors: Steven J. Birkel, Rita H. Wouhaybi, Timothy Verrall, Mrigank Shekhar
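One way to picture the exposed interface is a unified permissions facade over several providers, as in the hedged sketch below; the Provider and UnifiedPermissions classes are stand-ins, not the patented interface.

```python
# Stand-in providers and a unified facade; all names are assumptions of the sketch.
class Provider:
    def __init__(self, name):
        self.name = name
        self._acl = {}                               # path -> set of principals

    def list_access(self, path):
        return self._acl.get(path, set())

    def grant(self, path, principal):
        self._acl.setdefault(path, set()).add(principal)

    def revoke(self, path, principal):
        self._acl.get(path, set()).discard(principal)

class UnifiedPermissions:
    """Single view over several cloud content sharing services."""
    def __init__(self, providers):
        self.providers = providers

    def who_has_access(self, path):
        return {p.name: p.list_access(path) for p in self.providers}

    def grant(self, path, principal):
        for p in self.providers:                     # applied per service, independently
            p.grant(path, principal)

drive, box = Provider("drive"), Provider("box")
perms = UnifiedPermissions([drive, box])
perms.grant("/reports/q3.xlsx", "alice@example.com")
print(perms.who_has_access("/reports/q3.xlsx"))
```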
-
Patent number: 11456966
Abstract: There is disclosed in one example an application-specific integrated circuit (ASIC), including: an artificial intelligence (AI) circuit; and circuitry to: identify a flow, the flow including traffic diverted from a core cloud service of a network to be serviced by an edge node closer to an edge of the network than to the core of the network; receive telemetry related to the flow, the telemetry including fine-grained and flow-level network monitoring data for the flow; operate the AI circuit to predict, from the telemetry, a future service-level demand for the edge node; and cause a service parameter of the edge node to be tuned according to the prediction.
Type: Grant
Filed: October 13, 2021
Date of Patent: September 27, 2022
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Mark A. Schmisseur, Timothy Verrall
-
Patent number: 11451435
Abstract: Technologies for providing multi-tenant support in edge resources using edge channels include a device that includes circuitry to obtain a message associated with a service provided at the edge of a network. Additionally, the circuitry is to identify an edge channel based on metadata associated with the message. The edge channel has a predefined amount of resource capacity allocated to the edge channel to process the message. Further, the circuitry is to determine the predefined amount of resource capacity allocated to the edge channel and process the message using the allocated resource capacity for the identified edge channel.
Type: Grant
Filed: March 28, 2019
Date of Patent: September 20, 2022
Assignee: INTEL CORPORATION
Inventors: Francesc Guim Bernat, Karthik Kumar, Benjamin Graniello, Timothy Verrall, Andrew J. Herdrich, Rashmin Patel, Monica Kenguva, Brinda Ganesh, Alexander Vul, Ned M. Smith, Suraj Prabhakaran
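A minimal model of the edge-channel idea: metadata on an incoming message selects a channel, and the channel's pre-allocated capacity is what processes the message. The channel table, field names, and capacities below are invented for the sketch.

```python
# Invented channel table keyed by tenant/service metadata; illustrative only.
from dataclasses import dataclass

@dataclass
class EdgeChannel:
    name: str
    capacity_mbps: int        # resource capacity pre-allocated to this channel

CHANNELS = {
    "tenant-a/video": EdgeChannel("tenant-a/video", capacity_mbps=400),
    "tenant-b/telemetry": EdgeChannel("tenant-b/telemetry", capacity_mbps=50),
}

def process(message: dict) -> None:
    # The channel is identified from message metadata (tenant + service here).
    key = f"{message['tenant']}/{message['service']}"
    channel = CHANNELS[key]
    print(f"processing {message['size_kb']} KB on '{channel.name}' "
          f"within {channel.capacity_mbps} Mbps of reserved capacity")

process({"tenant": "tenant-a", "service": "video", "size_kb": 512})
```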
-
Patent number: 11444846
Abstract: Technologies for accelerated orchestration and attestation include multiple edge devices. An edge appliance device performs an attestation process with each of its components to generate component certificates. The edge appliance device generates an appliance certificate that is indicative of the component certificates and a current utilization of the edge appliance device and provides the appliance certificate to a relying party. The relying party may be an edge orchestrator device. The edge orchestrator device receives a workload scheduling request with a service level agreement requirement. The edge orchestrator device verifies the appliance certificate and determines whether the service level agreement requirement is satisfied based on the appliance certificate. If satisfied, the workload is scheduled to the edge appliance device. Attestation and generation of the appliance certificate by the edge appliance device may be performed by an accelerator of the edge appliance device.
Type: Grant
Filed: March 29, 2019
Date of Patent: September 13, 2022
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Kapil Sood, Tarun Viswanathan, Kshitij Doshi, Timothy Verrall, Ned M. Smith, Manish Dave, Alex Vul
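The sequence can be summarized in a simplified sketch in which hashing stands in for real attestation signatures: components attest to the appliance, the appliance folds the component certificates and its current utilization into an appliance certificate, and the orchestrator schedules the workload only if the certificate verifies and the service level agreement requirement is met. Every structure below is an assumption of the example.

```python
# Hashes stand in for real attestation evidence and signatures; illustrative only.
import hashlib, json

def component_certificate(component: str) -> str:
    return hashlib.sha256(f"attested:{component}".encode()).hexdigest()

def appliance_certificate(components: list[str], utilization: float) -> dict:
    return {
        "components": [component_certificate(c) for c in components],
        "utilization": utilization,
        "digest": hashlib.sha256(json.dumps(sorted(components)).encode()).hexdigest(),
    }

def schedule(cert: dict, sla_max_utilization: float) -> bool:
    """Orchestrator side: verify the certificate, then check the SLA requirement."""
    verified = len(cert["components"]) > 0 and cert["digest"]
    return bool(verified) and cert["utilization"] <= sla_max_utilization

cert = appliance_certificate(["nic", "fpga", "ssd"], utilization=0.55)
print(schedule(cert, sla_max_utilization=0.70))   # -> True, workload is scheduled
```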
-
Publication number: 20220263891
Abstract: Technologies for providing selective offload of execution of an application to the edge include a device that includes circuitry to determine whether a section of an application to be executed by the device is available to be offloaded. Additionally, the circuitry is to determine one or more characteristics of an edge resource available to execute the section. Further, the circuitry is to determine, as a function of the one or more characteristics and a target performance objective associated with the section, whether to offload the section to the edge resource and offload, in response to a determination to offload the section, the section to the edge resource.
Type: Application
Filed: March 7, 2022
Publication date: August 18, 2022
Inventors: Francesc Guim Bernat, Ned Smith, Thomas Willhalm, Karthik Kumar, Timothy Verrall
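The offload decision might be sketched as below, under invented thresholds: a section marked as offloadable goes to the edge resource only if that resource's characteristics meet the section's target performance objective.

```python
# Invented section/resource descriptors and thresholds; not the claimed logic.
def should_offload(section, edge_resource) -> bool:
    if not section["offloadable"]:
        return False
    meets_latency = edge_resource["latency_ms"] <= section["target_latency_ms"]
    has_accelerator = not section["needs_accelerator"] or edge_resource["has_accelerator"]
    return meets_latency and has_accelerator

section = {"name": "object-detection", "offloadable": True,
           "target_latency_ms": 20, "needs_accelerator": True}
edge_resource = {"latency_ms": 12, "has_accelerator": True}

if should_offload(section, edge_resource):
    print(f"offloading '{section['name']}' to the edge resource")
else:
    print(f"executing '{section['name']}' locally")
```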
-
Patent number: 11416295
Abstract: Technologies for providing efficient data access in an edge infrastructure include a compute device comprising circuitry configured to identify pools of resources that are usable to access data at an edge location. The circuitry is also configured to receive a request to execute a function at an edge location. The request identifies a data access performance target for the function. The circuitry is also configured to map, based on a data access performance of each pool and the data access performance target of the function, the function to a set of the pools to satisfy the data access performance target.
Type: Grant
Filed: September 6, 2019
Date of Patent: August 16, 2022
Assignee: INTEL CORPORATION
Inventors: Francesc Guim Bernat, Karthik Kumar, Suraj Prabhakaran, Timothy Verrall, Thomas Willhalm, Mark Schmisseur
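As an illustration (not the claimed mapping), the sketch below maps a function to a set of pools whose combined data access performance meets its target, taking the fastest pools first. The pool names and throughput figures are assumptions.

```python
# Greedy pool selection against a data access performance target; illustrative only.
def map_function_to_pools(target_gbps: float, pools: dict[str, float]) -> list[str]:
    """pools: pool name -> achievable data access throughput (GB/s)."""
    chosen, achieved = [], 0.0
    # Take the fastest pools first until the target is satisfied.
    for name, gbps in sorted(pools.items(), key=lambda kv: kv[1], reverse=True):
        if achieved >= target_gbps:
            break
        chosen.append(name)
        achieved += gbps
    if achieved < target_gbps:
        raise RuntimeError("no set of pools satisfies the data access target")
    return chosen

print(map_function_to_pools(12.0, {"pmem-pool": 6.0, "nvme-pool": 4.0, "dram-pool": 10.0}))
```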
-
Publication number: 20220222274
Abstract: Technologies for providing dynamic persistence of data in edge computing include a device including circuitry configured to determine multiple different logical domains of data storage resources for use in storing data from a client compute device at an edge of a network. Each logical domain has a different set of characteristics. The circuitry is also configured to receive, from the client compute device, a request to persist data. The request includes a target persistence objective indicative of an objective to be satisfied in the storage of the data. Additionally, the circuitry is configured to select, as a function of the characteristics of the logical domains and the target persistence objective, a logical domain into which to persist the data and provide the data to the selected logical domain.
Type: Application
Filed: January 20, 2022
Publication date: July 14, 2022
Inventors: Francesc Guim Bernat, Karthik Kumar, Suraj Prabhakaran, Ramanathan Sethuraman, Timothy Verrall, Ned Smith
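A hedged sketch of the selection logic: each logical domain advertises a set of characteristics, and the request's target persistence objective filters and ranks them. The domain table and objective fields below are assumptions of the example.

```python
# Invented logical-domain table; durability and latency figures are illustrative.
DOMAINS = {
    "replicated-pmem": {"durability": 0.99999, "write_latency_us": 5},
    "local-ssd":       {"durability": 0.999,   "write_latency_us": 50},
    "remote-object":   {"durability": 0.999999999, "write_latency_us": 5000},
}

def select_domain(objective: dict) -> str:
    candidates = [
        name for name, c in DOMAINS.items()
        if c["durability"] >= objective["min_durability"]
        and c["write_latency_us"] <= objective["max_write_latency_us"]
    ]
    if not candidates:
        raise RuntimeError("no logical domain satisfies the persistence objective")
    # Prefer the most durable of the domains that satisfy the objective.
    return max(candidates, key=lambda name: DOMAINS[name]["durability"])

print(select_domain({"min_durability": 0.9999, "max_write_latency_us": 100}))  # replicated-pmem
```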
-
Patent number: 11388054
Abstract: Various approaches for deployment and use of configurable edge computing platforms are described. In an edge computing system, an edge computing device includes hardware resources that can be composed from a configuration of chiplets, as the chiplets are disaggregated for selective use and deployment (for compute, acceleration, memory, storage, or other resources). In an example, configuration operations are performed to: identify a condition for use of the hardware resource, based on an edge computing workload received at the edge computing device; obtain, determine, or identify properties of a configuration for the hardware resource that are available to be implemented with the chiplets, with the configuration enabling the hardware resource to satisfy the condition for use of the hardware resource; and compose the chiplets into the configuration, according to the properties of the configuration, to enable the use of the hardware resource for the edge computing workload.
Type: Grant
Filed: December 20, 2019
Date of Patent: July 12, 2022
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Kshitij Arun Doshi, Ned M. Smith, Timothy Verrall, Uzair Qureshi
-
Patent number: 11379264
Abstract: Some examples provide for uninterruptible power supply (UPS) resources and non-UPS resources to be offered in a composite node for customers to use. For a workload run on the composite node, monitoring of non-UPS resource power availability, resource temperature, and/or cooling facilities can take place. In the event a non-UPS resource experiences a power outage or reduction in available power, temperature that is at or above a threshold level, and/or cooling facility outage, monitoring of performance of a workload executing on the non-UPS resource can take place. If the performance is acceptable and the power available to the non-UPS resource exceeds a threshold level, the supplied power can be reduced. If the performance experiences excessive levels of errors or slows unacceptably, the workload can be migrated to another non-UPS or UPS compliant resource.
Type: Grant
Filed: April 15, 2019
Date of Patent: July 5, 2022
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Felipe Pastor Beneyto, Kshitij A. Doshi, Timothy Verrall, Suraj Prabhakaran
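The policy reads roughly as in the sketch below, with invented thresholds: on a power or cooling event, keep the workload on the non-UPS resource at reduced power while performance remains acceptable, otherwise migrate it to a UPS-backed resource.

```python
# Thresholds and field names are assumptions; this is not the patented policy.
def handle_power_event(resource: dict, workload: dict) -> str:
    degraded = resource["available_power_w"] < resource["rated_power_w"]
    if not degraded:
        return "no action"
    acceptable = workload["error_rate"] < 0.01 and workload["slowdown"] < 1.2
    if acceptable and resource["available_power_w"] > resource["min_power_w"]:
        return "reduce supplied power and continue on non-UPS resource"
    return "migrate workload to a UPS-backed resource"

resource = {"rated_power_w": 300, "available_power_w": 180, "min_power_w": 120}
workload = {"error_rate": 0.001, "slowdown": 1.05}
print(handle_power_event(resource, workload))
```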
-
Publication number: 20220209971
Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed to aggregate telemetry data in an edge environment. An example apparatus includes at least one processor, and memory including instructions that, when executed, cause the at least one processor to at least generate a composition for an edge service in the edge environment, the composition representative of a first interface to obtain the telemetry data, the telemetry data associated with resources of the edge service and including a performance metric, generate a resource object based on the performance metric, generate a telemetry object based on the performance metric, and generate a telemetry executable based on the composition, the composition including at least one of the resource object or the telemetry object, the telemetry executable to generate the telemetry data in response to the edge service executing a computing task distributed to the edge service based on the telemetry data.
Type: Application
Filed: January 4, 2022
Publication date: June 30, 2022
Inventors: Kshitij Doshi, Francesc Guim Bernat, Timothy Verrall, Ned Smith, Rajesh Gadiyar
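A loose Python analogue of the composition idea, with every class and field name an assumption: a per-service composition bundles a resource object and a telemetry object built from a performance metric, and the "telemetry executable" it produces records that metric whenever a task runs.

```python
# Illustrative composition wrapper; none of these names come from the publication.
import time

class Composition:
    def __init__(self, service: str, metric: str):
        self.service = service
        self.resource_object = {"service": service, "metric": metric}
        self.telemetry_object = {"metric": metric, "samples": []}

    def telemetry_executable(self, task):
        """Wrap a computing task so executing it records the performance metric."""
        def wrapped(*args, **kwargs):
            start = time.perf_counter()
            result = task(*args, **kwargs)
            self.telemetry_object["samples"].append(time.perf_counter() - start)
            return result
        return wrapped

composition = Composition("edge-transcode", metric="task_latency_s")
run = composition.telemetry_executable(lambda frames: [f.upper() for f in frames])
run(["frame-a", "frame-b"])
print(composition.telemetry_object)
```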
-
Publication number: 20220206857
Abstract: Technologies for providing dynamic selection of edge and local accelerator resources include a device having circuitry to identify a function of an application to be accelerated, determine one or more properties of an accelerator resource available at the edge of a network where the device is located, and determine one or more properties of an accelerator resource available in the device. Additionally, the circuitry is to determine a set of acceleration selection factors associated with the function, wherein the acceleration factors are indicative of one or more objectives to be satisfied in the acceleration of the function. Further, the circuitry is to select, as a function of the one or more properties of the accelerator resource available at the edge, the one or more properties of the accelerator resource available in the device, and the acceleration selection factors, one or more of the accelerator resources to accelerate the function.
Type: Application
Filed: November 8, 2021
Publication date: June 30, 2022
Inventors: Francesc Guim Bernat, Karthik Kumar, Ned Smith, Thomas Willhalm, Timothy Verrall
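The selection step can be pictured as scoring the edge accelerator and the local (in-device) accelerator against the function's acceleration selection factors and picking the better one; the weights and figures below are invented for the sketch.

```python
# Invented scoring weights standing in for the acceleration selection factors.
def score(accelerator: dict, factors: dict) -> float:
    # Higher throughput is better, lower latency is better; weights express the
    # objectives carried by the selection factors.
    return (factors["throughput_weight"] * accelerator["throughput_gops"]
            - factors["latency_weight"] * accelerator["latency_ms"])

def select_accelerator(edge: dict, local: dict, factors: dict) -> str:
    return "edge" if score(edge, factors) > score(local, factors) else "local"

edge_accel = {"throughput_gops": 500, "latency_ms": 15}    # bigger, but farther away
local_accel = {"throughput_gops": 80, "latency_ms": 1}     # smaller, but on-device
latency_sensitive = {"throughput_weight": 0.1, "latency_weight": 10.0}
print(select_accelerator(edge_accel, local_accel, latency_sensitive))   # -> local
```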
-
Patent number: 11374776
Abstract: Systems and techniques for adaptive dataflow transformation in edge computing environments are described herein. A transformation compatibility indication may be received from a device. A set of transformations available for use by the device connected to the network may be determined based on the transformation compatibility indicator. The set of transformations may be transmitted to the device. A value may be determined for an operating metric for an edge computing node of the network. The edge computing node may provide a service to the device via the network. A transformation request may be transmitted to the device based on the value. The transformation request may cause the device to execute a transformation of the set of transformations to transform a dataflow of the service. The adaptive dataflow transformations may be continuous with changing predicted values of operating metrics.
Type: Grant
Filed: December 20, 2019
Date of Patent: June 28, 2022
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Kshitij Arun Doshi, Ned M. Smith, Timothy Verrall
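An end-to-end flow consistent with this abstract, using invented transformation names: the device advertises which transformations it supports, the node intersects that with its own set, and when a predicted operating metric degrades it requests the mildest available transformation.

```python
# Transformation names, ordering, and the bandwidth threshold are assumptions.
SUPPORTED_BY_NODE = {"reduce_resolution", "increase_compression", "drop_frames"}

def negotiate(device_compatibility: set[str]) -> set[str]:
    """Transformations both the edge node and the device can use."""
    return SUPPORTED_BY_NODE & device_compatibility

def transformation_request(available: set[str], predicted_bandwidth_mbps: float):
    if predicted_bandwidth_mbps >= 50:
        return None                                   # service is healthy, no change
    # Prefer the mildest transformation that is actually available.
    for candidate in ("increase_compression", "reduce_resolution", "drop_frames"):
        if candidate in available:
            return candidate
    return None

available = negotiate({"increase_compression", "reduce_resolution"})
print(transformation_request(available, predicted_bandwidth_mbps=32.0))  # increase_compression
```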
-
Publication number: 20220200788
Abstract: Technologies for accelerated key caching in an edge hierarchy include multiple edge appliance devices organized in tiers. An edge appliance device receives a request for a key, such as a private key. The edge appliance device determines whether the key is included in a local key cache and, if not, requests the key from an edge appliance device included in an inner tier of the edge hierarchy. The edge appliance device may request the key from an edge appliance device included in a peer tier of the edge hierarchy. The edge appliance device may activate per-tenant accelerated logic to identify one or more keys in the key cache for eviction. The edge appliance device may activate per-tenant accelerated logic to identify one or more keys for pre-fetching. Those functions of the edge appliance device may be performed by an accelerator such as an FPGA. Other embodiments are described and claimed.
Type: Application
Filed: December 23, 2021
Publication date: June 23, 2022
Inventors: Timothy Verrall, Thomas Willhalm, Francesc Guim Bernat, Karthik Kumar, Ned M. Smith, Rajesh Poornachandran, Kapil Sood, Tarun Viswanathan, John J. Browne, Patrick Kutch
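A simplified model of the lookup path, with an LRU cache standing in for the per-tenant accelerated eviction logic: on a local miss the appliance asks a peer-tier appliance, then an inner-tier appliance, and caches whatever it receives. The class and field names are assumptions of the sketch.

```python
# LRU caching here is only a stand-in for the per-tenant accelerated eviction logic.
from collections import OrderedDict

class EdgeAppliance:
    def __init__(self, name, capacity=2, inner=None, peer=None):
        self.name, self.capacity = name, capacity
        self.inner, self.peer = inner, peer
        self.key_cache = OrderedDict()          # key id -> key material

    def store(self, key_id, key):
        self.key_cache[key_id] = key
        self.key_cache.move_to_end(key_id)
        if len(self.key_cache) > self.capacity:  # evict the least recently used key
            self.key_cache.popitem(last=False)

    def get_key(self, key_id):
        if key_id in self.key_cache:             # local hit
            self.key_cache.move_to_end(key_id)
            return self.key_cache[key_id]
        for neighbor in (self.peer, self.inner): # try the peer tier, then the inner tier
            if neighbor is not None:
                key = neighbor.get_key(key_id)
                if key is not None:
                    self.store(key_id, key)
                    return key
        return None

core = EdgeAppliance("inner-tier")
core.store("tenant-a/key1", b"secret")
edge = EdgeAppliance("outer-tier", inner=core)
print(edge.get_key("tenant-a/key1"))            # local miss, fetched from the inner tier
```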