Patents by Inventor THIJS METSCH

THIJS METSCH has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180129970
    Abstract: A machine-learning decision system includes an online decision system and an offline decision system. The online decision system produces a first time slice-specific decision output corresponding to a first time slice based on one or more situational inputs received in the first time slice. The offline decision system produces a second time slice-specific decision output corresponding to the first time slice based on one or more situational inputs received in the first time slice and in a plurality of subsequent time slices occurring after the first time slice. The system further includes an online training system that conducts negative-reinforcement training of the online decision system in response to a nonconvergence between the first and the second time slice-specific decision outputs.
    Type: Application
    Filed: November 10, 2016
    Publication date: May 10, 2018
    Inventors: Justin E. Gottschlich, Thijs Metsch, Leonard Truong, Tatiana Shpeisman, Sara S. Baghsorkhi
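    The sketch below illustrates the comparison described in this abstract: an online decision made from the first time slice alone, an offline decision made in hindsight using later slices as well, and a negative-reinforcement step applied when the two do not converge. The class and function names, thresholds, and penalty rule are illustrative assumptions, not the patented method.

    ```python
    from dataclasses import dataclass


    @dataclass
    class OnlineDecisionSystem:
        """Toy stand-in for an online decision system scored per time slice."""
        weight: float = 1.0

        def decide(self, situational_input: float) -> int:
            # Decide immediately from the inputs of the current time slice.
            return 1 if situational_input * self.weight > 0.5 else 0

        def negative_reinforce(self, penalty: float) -> None:
            # Illustrative negative-reinforcement step: dampen the weight.
            self.weight -= penalty


    def offline_decision(inputs_over_slices: list[float]) -> int:
        """Hindsight decision for the first slice, using subsequent slices too."""
        return 1 if sum(inputs_over_slices) / len(inputs_over_slices) > 0.5 else 0


    def train_on_slice(online: OnlineDecisionSystem,
                       first_slice_input: float,
                       subsequent_inputs: list[float],
                       penalty: float = 0.1) -> None:
        online_out = online.decide(first_slice_input)
        offline_out = offline_decision([first_slice_input] + subsequent_inputs)
        if online_out != offline_out:           # nonconvergence between the outputs
            online.negative_reinforce(penalty)  # penalize the online system


    if __name__ == "__main__":
        system = OnlineDecisionSystem()
        train_on_slice(system, first_slice_input=0.6, subsequent_inputs=[0.1, 0.2])
        print(system.weight)  # 0.9 after one negative-reinforcement step
    ```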
  • Patent number: 9918146
    Abstract: A system comprises a scoring engine comprising at least one processor and memory. The scoring engine is to generate, based on telemetry information obtained from a plurality of nodes of a computing infrastructure, a first availability score for a first node of the plurality of computing infrastructure nodes and a second availability score for a second node of the plurality of computing infrastructure nodes. The scoring engine is further to generate, based on the first availability score of the first computing infrastructure node and the second availability score of the second computing infrastructure node, an edge tension score for a link between the first node and the second node.
    Type: Grant
    Filed: February 8, 2016
    Date of Patent: March 13, 2018
    Assignee: Intel Corporation
    Inventors: Annie Ibrahim Rana, Joseph Butler, Thijs Metsch, Alexander Leckey, Vincenzo Mario Riccobene, Giovani Estrada
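    A minimal sketch of the scoring flow described above: an availability score per node derived from telemetry, and an edge tension score for the link between two nodes derived from their availability scores. The abstract does not give the formulas, so the ones below are assumptions for illustration only.

    ```python
    # The scoring formulas here are illustrative assumptions; the abstract only
    # states that the edge score is derived from the two node availability scores.

    def availability_score(telemetry: dict) -> float:
        """Hypothetical availability score in [0, 1] from node telemetry."""
        # Assume telemetry reports utilization and an error rate, both in [0, 1].
        return max(0.0, 1.0 - 0.5 * telemetry["cpu_util"] - 0.5 * telemetry["error_rate"])


    def edge_tension(score_a: float, score_b: float) -> float:
        """Hypothetical edge tension: higher when either endpoint is less available."""
        return 1.0 - min(score_a, score_b)


    node_a = {"cpu_util": 0.8, "error_rate": 0.1}
    node_b = {"cpu_util": 0.3, "error_rate": 0.0}
    tension = edge_tension(availability_score(node_a), availability_score(node_b))
    print(f"edge tension: {tension:.2f}")
    ```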
  • Publication number: 20180027060
    Abstract: Technologies for determining and storing workload characteristics include an orchestrator server to identify a workload to be executed by a managed node, obtain a profile associated with the workload, wherein the profile includes a model that relates an input parameter set indicative of one or more characteristics of the workload with an output parameter set indicative of one or more aspects of resources to be allocated for execution of the workload, determine, as a function of the input parameter set and the model, resources to allocate to the managed node to execute the workload, and allocate the determined resources to the managed node to execute the workload. Other embodiments are also described and claimed.
    Type: Application
    Filed: January 17, 2017
    Publication date: January 25, 2018
    Inventors: Thijs Metsch, Nishi Ahuja, Susanne M. Balle, Mrittika Ganguli, Rahul Khanna
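    The following sketch shows one way a workload profile's model could relate an input parameter set (workload characteristics) to an output parameter set (resources to allocate), as the abstract describes. The linear model, its coefficients, and the parameter names are hypothetical.

    ```python
    # Sketch of an orchestrator-style lookup: a profile "model" mapping workload
    # characteristics to resources to allocate. Model form is an assumption.

    from dataclasses import dataclass


    @dataclass
    class WorkloadProfile:
        """Profile relating input characteristics to resource requirements."""
        cores_per_request: float   # assumed model coefficient
        mem_gb_per_request: float  # assumed model coefficient

        def required_resources(self, requests_per_sec: float) -> dict:
            # Output parameter set: resources to allocate for this workload.
            return {
                "cpu_cores": max(1, round(self.cores_per_request * requests_per_sec)),
                "memory_gb": max(1, round(self.mem_gb_per_request * requests_per_sec)),
            }


    profile = WorkloadProfile(cores_per_request=0.05, mem_gb_per_request=0.2)
    allocation = profile.required_resources(requests_per_sec=100)
    print(allocation)  # {'cpu_cores': 5, 'memory_gb': 20}
    ```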
  • Publication number: 20170255494
    Abstract: Examples are described for computing resource discovery and management for a system of configurable computing resources that may include disaggregate physical elements such as central processing units, storage devices, memory devices, network input/output devices or network switches. In some examples, these disaggregate physical elements may be located within one or more racks of a data center.
    Type: Application
    Filed: February 23, 2015
    Publication date: September 7, 2017
    Applicant: Intel Corporation
    Inventors: Katalin K. Bartfai-Walcott, John Kennedy, Thijs Metsch, Chris Woods, Giovani Estrada, Alexander Leckey, Joseph Butler, Slawomir Putyrski
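    As a rough illustration of the resource-discovery idea above, the sketch below models disaggregate physical elements (CPUs, storage, memory, network I/O, switches) grouped by rack. The data structures are assumptions, not the patented design.

    ```python
    # Minimal data-model sketch for discovering disaggregate physical elements by
    # rack. The element kinds mirror those listed in the abstract; the structure
    # itself is an illustrative assumption.

    from dataclasses import dataclass
    from enum import Enum
    from collections import defaultdict


    class ElementKind(Enum):
        CPU = "cpu"
        STORAGE = "storage"
        MEMORY = "memory"
        NETWORK_IO = "network_io"
        SWITCH = "switch"


    @dataclass
    class DisaggregateElement:
        element_id: str
        kind: ElementKind
        rack: str


    def inventory_by_rack(elements: list[DisaggregateElement]) -> dict:
        """Group discovered elements by the rack they are located in."""
        racks: dict = defaultdict(list)
        for element in elements:
            racks[element.rack].append(element.element_id)
        return dict(racks)


    print(inventory_by_rack([
        DisaggregateElement("cpu-0", ElementKind.CPU, "rack-1"),
        DisaggregateElement("ssd-7", ElementKind.STORAGE, "rack-1"),
    ]))
    ```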
  • Publication number: 20170230733
    Abstract: In one embodiment, a system comprises a scoring engine comprising at least one processor and memory. The scoring engine is to generate, based on telemetry information obtained from a plurality of nodes of a computing infrastructure, a first availability score for a first node of the plurality of computing infrastructure nodes and a second availability score for a second node of the plurality of computing infrastructure nodes. The scoring engine is further to generate, based on the first availability score of the first computing infrastructure node and the second availability score of the second computing infrastructure node, an edge tension score for a link between the first node and the second node.
    Type: Application
    Filed: February 8, 2016
    Publication date: August 10, 2017
    Applicant: Intel Corporation
    Inventors: Annie Ibrahim Rana, Joseph Butler, Thijs Metsch, Alexander Leckey, Vincenzo Mario Riccobene, Giovani Estrada
  • Publication number: 20170187790
    Abstract: One embodiment provides an apparatus. The apparatus includes ranker logic. The ranker logic is to rank each of a plurality of compute nodes in a data center based, at least in part, on a respective node score. Each node score is determined based, at least in part, on a utilization (U), a saturation parameter (S) and a capacity factor (Ci). The capacity factor is determined based, at least in part, on a sold capacity (Cs) related to the compute node. The ranker logic is further to select one compute node with a highest node score for placement of a received workload.
    Type: Application
    Filed: December 23, 2015
    Publication date: June 29, 2017
    Inventors: Alexander Leckey, Joseph M. Butler, Thijs Metsch, Giovani Estrada, Vincenzo M. Riccobene, John M. Kennedy
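    A small sketch of the ranking step described above: each node gets a score from its utilization (U), saturation parameter (S), and a capacity factor (Ci) derived from sold capacity (Cs), and the workload is placed on the highest-scoring node. The abstract does not specify the scoring formula, so the one used here is an assumption.

    ```python
    # The exact scoring formula is not given in the abstract; the one below is an
    # illustrative assumption that favors idle, unsaturated nodes with spare
    # unsold capacity.

    def capacity_factor(sold_capacity: float, total_capacity: float) -> float:
        """Hypothetical Ci: fraction of capacity not yet sold."""
        return max(0.0, 1.0 - sold_capacity / total_capacity)


    def node_score(utilization: float, saturation: float, ci: float) -> float:
        """Hypothetical node score combining U, S and Ci."""
        return (1.0 - utilization) * (1.0 - saturation) * ci


    nodes = {
        "node-a": node_score(utilization=0.7, saturation=0.2, ci=capacity_factor(80, 100)),
        "node-b": node_score(utilization=0.3, saturation=0.1, ci=capacity_factor(40, 100)),
    }
    best = max(nodes, key=nodes.get)
    print(f"place workload on {best}")  # node-b
    ```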
  • Publication number: 20170180220
    Abstract: Examples include techniques to generate workload performance fingerprints for cloud infrastructure elements. In some examples, performance metrics are obtained from resource elements or nodes included in an identified sub-graph that represents at least a portion of configurable computing resources of a cloud infrastructure. For these examples, averages for the performance metrics are determined and then stored at a top-level context information node for the identified sub-graph to represent a workload performance fingerprint for the identified sub-graph.
    Type: Application
    Filed: December 18, 2015
    Publication date: June 22, 2017
    Applicant: Intel Corporation
    Inventors: ALEXANDER LECKEY, THIJS METSCH, JOSEPH BUTLER, MICHAEL J. MCGRATH, VICTOR BAYON-MOLINO
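    The fingerprinting step described above reduces to averaging each performance metric over the nodes of an identified sub-graph and storing the result at a top-level context node; the sketch below illustrates that with assumed metric names and a plain-dict representation of the sub-graph.

    ```python
    # Metric names and the plain-dict sub-graph representation are assumptions.

    from statistics import mean


    def workload_fingerprint(subgraph_metrics: list[dict]) -> dict:
        """Average each metric over all resource nodes in the identified sub-graph."""
        metric_names = subgraph_metrics[0].keys()
        return {name: mean(node[name] for node in subgraph_metrics)
                for name in metric_names}


    subgraph = [
        {"cpu_util": 0.60, "net_mbps": 120.0},
        {"cpu_util": 0.40, "net_mbps": 80.0},
    ]
    # Store the averages at the top-level context node for the sub-graph.
    context_node = {"fingerprint": workload_fingerprint(subgraph)}
    print(context_node)  # {'fingerprint': {'cpu_util': 0.5, 'net_mbps': 100.0}}
    ```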
  • Publication number: 20160366026
    Abstract: Technologies for generating an analytical model for a workload of a data center include an analytics server to receive raw data from components of a data center. The analytics server retrieves a workbook that includes analytical algorithms from a workbook marketplace server, and uses the analytical algorithms to analyze the raw data to generate the analytical model for the workload based on the raw data. The analytics server further generates an optimization trigger, which may be based on the analytical model and one or more previously generated analytical models, to be transmitted to a controller component of the data center. The workbook marketplace server may include a plurality of workbooks, each of which may include one or more analytical algorithms from which to generate a different analytical model for the workload of the data center.
    Type: Application
    Filed: February 24, 2015
    Publication date: December 15, 2016
    Inventors: Katalin K. BARTFAI-WALCOTT, Alexander LECKEY, Thijs METSCH, Joseph BUTLER, Slawomir PUTYRSKI, Connor UPTON, Giovani ESTRADA, John KENNEDY
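    The sketch below illustrates the workbook idea in this abstract: a workbook bundling analytical algorithms, an analysis step that applies them to raw data to form a model, and an optimization trigger derived from that model. The algorithms, threshold, and trigger rule are illustrative assumptions.

    ```python
    # All names, the workbook contents, and the trigger rule are assumptions.

    from statistics import mean
    from typing import Callable

    # A "workbook" as a named collection of analytical algorithms.
    workbook: dict[str, Callable[[list[float]], float]] = {
        "mean_latency_ms": mean,
        "peak_latency_ms": max,
    }


    def build_model(raw_data: list[float]) -> dict[str, float]:
        """Apply each workbook algorithm to the raw telemetry."""
        return {name: algo(raw_data) for name, algo in workbook.items()}


    def optimization_trigger(model: dict[str, float], threshold_ms: float = 100.0) -> bool:
        """Raise a trigger for the data-center controller if latency looks too high."""
        return model["peak_latency_ms"] > threshold_ms


    model = build_model([42.0, 55.0, 130.0])
    print(model, optimization_trigger(model))
    ```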
  • Publication number: 20160359683
    Abstract: Technologies for datacenter management include one or more computing racks each including a rack controller. The rack controller may receive system, performance, or health metrics for the components of the computing rack. The rack controller generates regression models to predict component lifespan and may predict logical machine lifespans based on the lifespan of the included hardware components. The rack controller may generate notifications or schedule maintenance sessions based on remaining component or logical machine lifespans. The rack controller may compose logical machines using components having similar remaining lifespans. In some embodiments the rack controller may validate a service level agreement prior to executing an application based on the probability of component failure. A management interface may generate an interactive visualization of the system state and optimize the datacenter schedule based on optimization rules derived from human input in response to the visualization.
    Type: Application
    Filed: February 24, 2015
    Publication date: December 8, 2016
    Applicant: INTEL CORPORATION
    Inventors: Katalin K. BARTFAI-WALCOTT, Chris WOODS, Giovani ESTRADA, John KENNEDY, Joseph BUTLER, Slawomir PUTYRSKI, Alexander LECKEY, Victor M. BAYON-MOLINO, Connor UPTON, Thijs METSCH
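    Two of the ideas above lend themselves to a short sketch: a regression of remaining component lifespan against a health metric, and a composed logical machine whose predicted lifespan is that of its shortest-lived component. The data and the single-feature linear fit are assumptions (the sketch uses statistics.linear_regression, available in Python 3.10+).

    ```python
    # Hypothetical history and model; the patent does not specify either.

    from statistics import linear_regression  # Python 3.10+

    # Hypothetical history: (wear_metric, observed_remaining_hours) pairs.
    wear = [0.1, 0.3, 0.5, 0.7, 0.9]
    remaining_hours = [9000, 7200, 5100, 3000, 1100]
    slope, intercept = linear_regression(wear, remaining_hours)


    def predicted_lifespan(wear_metric: float) -> float:
        """Regression-based remaining lifespan for a single component."""
        return slope * wear_metric + intercept


    def logical_machine_lifespan(component_wear: dict[str, float]) -> float:
        """A composed machine lives only as long as its shortest-lived component."""
        return min(predicted_lifespan(w) for w in component_wear.values())


    print(round(logical_machine_lifespan({"cpu": 0.2, "ssd": 0.6, "nic": 0.4})))
    ```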
  • Publication number: 20160182320
    Abstract: Examples may include techniques to generate a graph model for cloud infrastructure elements. Information regarding the cloud infrastructure elements may be obtained. Logical layers may be assigned to each of the cloud infrastructure elements. The logical layers may include a physical layer for physical devices, an allocation layer for logical services composed of placed physical devices, a virtual layer for virtualized elements or a service layer for services or workloads implemented by the virtualized elements. In some examples, each cloud infrastructure element may be added to a graph model as nodes having metadata and attributes based on the obtained information.
    Type: Application
    Filed: December 23, 2014
    Publication date: June 23, 2016
    Applicant: Intel Corporation
    Inventors: KATALIN K. BARTFAI-WALCOTT, ALEXANDER LECKEY, THIJS METSCH, JOSEPH BUTLER, JONATHAN DONALDSON, MICHAEL J. MCGRATH
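    Finally, the layered graph model described above can be sketched as nodes tagged with one of the four logical layers plus metadata and attributes, connected by edges to the elements they are placed on. The node structure and attribute names below are assumptions made for illustration.

    ```python
    # Minimal stdlib sketch; the structure and attribute names are illustrative.

    from dataclasses import dataclass, field

    LAYERS = ("physical", "allocation", "virtual", "service")


    @dataclass
    class GraphNode:
        """One cloud infrastructure element in the graph model."""
        name: str
        layer: str                                 # one of LAYERS
        attributes: dict = field(default_factory=dict)
        edges: list = field(default_factory=list)  # names of connected nodes


    def add_element(graph: dict, name: str, layer: str, **attributes) -> None:
        assert layer in LAYERS
        graph[name] = GraphNode(name, layer, attributes)


    model: dict = {}
    add_element(model, "server-1", "physical", rack="rack-3", cpus=32)
    add_element(model, "vm-1", "virtual", flavor="m1.large")
    add_element(model, "web-svc", "service", sla="gold")
    model["web-svc"].edges.append("vm-1")   # the service runs on the virtual machine
    model["vm-1"].edges.append("server-1")  # the VM is placed on the physical server

    print([n.name for n in model.values() if n.layer == "physical"])  # ['server-1']
    ```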