Patents by Inventor Paul F. Olsen
Paul F. Olsen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10469565
Abstract: Techniques are provided for providing computing resources from a pool of a plurality of networked computing systems to a consumer. The method includes determining that the consumer's resource usage exceeds a predetermined threshold. After a predetermined period of time, and upon determining that the consumer's resource usage continues to exceed the predetermined threshold, the method identifies one or more computing systems from the pool having capacity to host at least part of the amount of excess resource usage. The method further includes allocating resources on one or more computing systems selected from the identified computing systems to satisfy the amount of excess resource usage, and transferring at least the amount of excess resource usage to the selected one or more computing systems.
Type: Grant
Filed: January 31, 2014
Date of Patent: November 5, 2019
Assignee: International Business Machines Corporation
Inventors: Daniel C. Birkestrand, Stephanie L. Jensen, Paul F. Olsen
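The overflow flow this abstract describes (threshold check, grace period, host selection, allocation, transfer) can be sketched roughly as follows. All names, data shapes, and the fixed threshold are invented for illustration; the patent does not specify them.

```python
import time

THRESHOLD = 100       # resource units a consumer may use before overflowing (assumed)
GRACE_PERIOD_S = 0.0  # predetermined period before confirming sustained excess

def find_hosts(pool):
    """Identify pool systems with capacity to host at least part of the excess."""
    return [s for s in pool if s["free"] > 0]

def transfer_excess(consumer, pool):
    """Move a consumer's excess usage onto selected systems from the pool."""
    requested = consumer["usage"] - THRESHOLD
    if requested <= 0:
        return []                      # usage within threshold; nothing to do
    time.sleep(GRACE_PERIOD_S)         # wait, then confirm the excess persists
    if consumer["usage"] - THRESHOLD <= 0:
        return []
    excess = requested
    placements = []
    for system in find_hosts(pool):
        take = min(excess, system["free"])
        system["free"] -= take         # allocate resources on the selected system
        placements.append((system["name"], take))
        excess -= take
        if excess <= 0:
            break
    consumer["usage"] -= requested - excess  # excess transferred off the consumer
    return placements
```

A consumer at 130 units against a threshold of 100 would have 30 units of excess spread across the first hosts with free capacity.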
-
Patent number: 10469564
Abstract: Techniques are provided for providing computing resources from a pool of a plurality of networked computing systems to a consumer. The method includes determining that the consumer's resource usage exceeds a predetermined threshold. After a predetermined period of time, and upon determining that the consumer's resource usage continues to exceed the predetermined threshold, the method identifies one or more computing systems from the pool having capacity to host at least part of the amount of excess resource usage. The method further includes allocating resources on one or more computing systems selected from the identified computing systems to satisfy the amount of excess resource usage, and transferring at least the amount of excess resource usage to the selected one or more computing systems.
Type: Grant
Filed: January 21, 2014
Date of Patent: November 5, 2019
Assignee: International Business Machines Corporation
Inventors: Daniel C. Birkestrand, Stephanie L. Jensen, Paul F. Olsen
-
Patent number: 10148743
Abstract: Workload, preferably as one or more partitions, is migrated from a source server to one or more target servers by computing a respective projected performance optimization for each candidate partition and target, the optimization being dependent on a projected processor-memory affinity resulting from migrating candidate workload to the candidate target, and selecting a target to receive a workload accordingly. A target may be pre-configured to receive a workload being migrated to it by altering the configuration parameters of at least one workload currently executing on the target according to the projected performance optimization.
Type: Grant
Filed: November 30, 2017
Date of Patent: December 4, 2018
Assignee: International Business Machines Corporation
Inventors: Daniel C. Birkestrand, Peter J. Heyrman, Paul F. Olsen
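The selection step in this abstract, scoring each candidate (partition, target) pair by projected processor-memory affinity and picking the best, can be sketched as below. The scoring function is a stand-in invented for the example (fraction of the partition's memory that would be local to one node); the patent does not disclose a specific formula, and all names are hypothetical.

```python
def affinity_score(partition, target):
    """Projected processor-memory affinity if `partition` ran on `target`.

    Assumed proxy: the fraction of the partition's memory that would fit
    on the target's single best node (fully local memory scores 1.0).
    """
    node = max(target["nodes"], key=lambda n: n["free_mem"])
    local = min(partition["mem"], node["free_mem"])
    return local / partition["mem"]

def choose_target(partition, targets):
    """Select the target with the best projected performance optimization."""
    scored = [(affinity_score(partition, t), t) for t in targets]
    best_score, best = max(scored, key=lambda s: s[0])
    return best, best_score
```

A 10 GB partition would prefer a target with a node holding 10 GB free (score 1.0) over one whose largest node holds only 4 GB free (score 0.4).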
-
Patent number: 10129333
Abstract: Workload, preferably as one or more partitions, is migrated from a source server to one or more target servers by computing a respective projected performance optimization for each candidate partition and target, the optimization being dependent on a projected processor-memory affinity resulting from migrating candidate workload to the candidate target, and selecting a target to receive a workload accordingly. A target may be pre-configured to receive a workload being migrated to it by altering the configuration parameters of at least one workload currently executing on the target according to the projected performance optimization.
Type: Grant
Filed: November 30, 2017
Date of Patent: November 13, 2018
Assignee: International Business Machines Corporation
Inventors: Daniel C. Birkestrand, Peter J. Heyrman, Paul F. Olsen
-
Patent number: 9930108
Abstract: Workload, preferably as one or more partitions, is migrated from a source server to one or more target servers by computing a respective projected performance optimization for each candidate partition and target, the optimization being dependent on a projected processor-memory affinity resulting from migrating candidate workload to the candidate target, and selecting a target to receive a workload accordingly. A target may be pre-configured to receive a workload being migrated to it by altering the configuration parameters of at least one workload currently executing on the target according to the projected performance optimization.
Type: Grant
Filed: May 23, 2015
Date of Patent: March 27, 2018
Assignee: International Business Machines Corporation
Inventors: Daniel C. Birkestrand, Peter J. Heyrman, Paul F. Olsen
-
Publication number: 20180084037
Abstract: Workload, preferably as one or more partitions, is migrated from a source server to one or more target servers by computing a respective projected performance optimization for each candidate partition and target, the optimization being dependent on a projected processor-memory affinity resulting from migrating candidate workload to the candidate target, and selecting a target to receive a workload accordingly. A target may be pre-configured to receive a workload being migrated to it by altering the configuration parameters of at least one workload currently executing on the target according to the projected performance optimization.
Type: Application
Filed: November 30, 2017
Publication date: March 22, 2018
Inventors: Daniel C. Birkestrand, Peter J. Heyrman, Paul F. Olsen
-
Publication number: 20180084036
Abstract: Workload, preferably as one or more partitions, is migrated from a source server to one or more target servers by computing a respective projected performance optimization for each candidate partition and target, the optimization being dependent on a projected processor-memory affinity resulting from migrating candidate workload to the candidate target, and selecting a target to receive a workload accordingly. A target may be pre-configured to receive a workload being migrated to it by altering the configuration parameters of at least one workload currently executing on the target according to the projected performance optimization.
Type: Application
Filed: November 30, 2017
Publication date: March 22, 2018
Inventors: Daniel C. Birkestrand, Peter J. Heyrman, Paul F. Olsen
-
Patent number: 9912741
Abstract: Workload, preferably as one or more partitions, is migrated from a source server to one or more target servers by computing a respective projected performance optimization for each candidate partition and target, the optimization being dependent on a projected processor-memory affinity resulting from migrating candidate workload to the candidate target, and selecting a target to receive a workload accordingly. A target may be pre-configured to receive a workload being migrated to it by altering the configuration parameters of at least one workload currently executing on the target according to the projected performance optimization.
Type: Grant
Filed: January 20, 2015
Date of Patent: March 6, 2018
Assignee: International Business Machines Corporation
Inventors: Daniel C. Birkestrand, Peter J. Heyrman, Paul F. Olsen
-
Patent number: 9479575
Abstract: A cloud capacity on demand manager manages capacity on demand for servers in a server cloud. The cloud capacity on demand manager may borrow capacity from one or more servers and lend the capacity borrowed from one server to a different server in the server cloud. When the server cloud is no longer intact, capacity borrowed from servers no longer in the server cloud is disabled, and servers no longer in the server cloud reclaim capacity that was lent to the server cloud.
Type: Grant
Filed: March 27, 2012
Date of Patent: October 25, 2016
Assignee: International Business Machines Corporation
Inventors: Paul F. Olsen, Terry L. Schardt
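The borrow/lend/reclaim behavior this abstract describes can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the class, field names, and data shapes are all invented for the example.

```python
class CloudCapacityManager:
    """Sketch of a capacity-on-demand manager for servers in a cloud."""

    def __init__(self):
        self.loans = []  # outstanding (lender, borrower, amount) records

    def lend(self, lender, borrower, amount):
        """Borrow capacity from one server and lend it to another in the cloud."""
        lender["capacity"] -= amount
        borrower["capacity"] += amount
        self.loans.append((lender, borrower, amount))

    def server_left_cloud(self, server):
        """Unwind loans touching a server that is no longer in the cloud:
        capacity borrowed from it is disabled at the borrower, and capacity
        it lent to the cloud is reclaimed by the lender."""
        for lender, borrower, amount in list(self.loans):
            if lender is server or borrower is server:
                borrower["capacity"] -= amount
                lender["capacity"] += amount
                self.loans.remove((lender, borrower, amount))
```

When the cloud breaks apart, unwinding each loan restores every server to its original licensed capacity.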
-
Publication number: 20160212061
Abstract: Workload, preferably as one or more partitions, is migrated from a source server to one or more target servers by computing a respective projected performance optimization for each candidate partition and target, the optimization being dependent on a projected processor-memory affinity resulting from migrating candidate workload to the candidate target, and selecting a target to receive a workload accordingly. A target may be pre-configured to receive a workload being migrated to it by altering the configuration parameters of at least one workload currently executing on the target according to the projected performance optimization.
Type: Application
Filed: May 23, 2015
Publication date: July 21, 2016
Inventors: Daniel C. Birkestrand, Peter J. Heyrman, Paul F. Olsen
-
Publication number: 20160212202
Abstract: Workload, preferably as one or more partitions, is migrated from a source server to one or more target servers by computing a respective projected performance optimization for each candidate partition and target, the optimization being dependent on a projected processor-memory affinity resulting from migrating candidate workload to the candidate target, and selecting a target to receive a workload accordingly. A target may be pre-configured to receive a workload being migrated to it by altering the configuration parameters of at least one workload currently executing on the target according to the projected performance optimization.
Type: Application
Filed: January 20, 2015
Publication date: July 21, 2016
Inventors: Daniel C. Birkestrand, Peter J. Heyrman, Paul F. Olsen
-
Patent number: 9103254
Abstract: An exhaust system for a power system contained in an engine compartment. The exhaust system includes a mount for two or more exhaust treatment devices and an enclosure surrounding the two or more exhaust treatment devices. The enclosure defines a space with a higher temperature than a space defined by the engine compartment during steady state operation of the power system. At least one electronic or fluid device is coupled to the enclosure or mount and located on an exterior of the enclosure.
Type: Grant
Filed: March 15, 2013
Date of Patent: August 11, 2015
Assignee: Caterpillar Inc.
Inventors: Jack Albert Merchant, Paul F. Olsen, Eric J. Charles
-
Patent number: 9094415
Abstract: A cloud capacity on demand manager manages capacity on demand for servers in a server cloud. The cloud capacity on demand manager may borrow capacity from one or more servers and lend the capacity borrowed from one server to a different server in the server cloud. When the server cloud is no longer intact, capacity borrowed from servers no longer in the server cloud is disabled, and servers no longer in the server cloud reclaim capacity that was lent to the server cloud.
Type: Grant
Filed: November 26, 2012
Date of Patent: July 28, 2015
Assignee: International Business Machines Corporation
Inventors: Paul F. Olsen, Terry L. Schardt
-
Publication number: 20150207750
Abstract: Techniques are provided for providing computing resources from a pool of a plurality of networked computing systems to a consumer. The method includes determining that the consumer's resource usage exceeds a predetermined threshold. After a predetermined period of time, and upon determining that the consumer's resource usage continues to exceed the predetermined threshold, the method identifies one or more computing systems from the pool having capacity to host at least part of the amount of excess resource usage. The method further includes allocating resources on one or more computing systems selected from the identified computing systems to satisfy the amount of excess resource usage, and transferring at least the amount of excess resource usage to the selected one or more computing systems.
Type: Application
Filed: January 21, 2014
Publication date: July 23, 2015
Applicant: International Business Machines Corporation
Inventors: Daniel C. Birkestrand, Stephanie L. Jensen, Paul F. Olsen
-
Publication number: 20150207752
Abstract: Techniques are provided for providing computing resources from a pool of a plurality of networked computing systems to a consumer. The method includes determining that the consumer's resource usage exceeds a predetermined threshold. After a predetermined period of time, and upon determining that the consumer's resource usage continues to exceed the predetermined threshold, the method identifies one or more computing systems from the pool having capacity to host at least part of the amount of excess resource usage. The method further includes allocating resources on one or more computing systems selected from the identified computing systems to satisfy the amount of excess resource usage, and transferring at least the amount of excess resource usage to the selected one or more computing systems.
Type: Application
Filed: January 31, 2014
Publication date: July 23, 2015
Applicant: International Business Machines Corporation
Inventors: Daniel C. Birkestrand, Stephanie L. Jensen, Paul F. Olsen
-
Patent number: 8856336
Abstract: In an embodiment, a request is received that requests to move a first partition from a source computer to a destination computer. In response to the request, charging is halted for a resource used by the first partition at the source computer while the first partition is executing at the source computer. In response to the request, a resource is allocated to a second partition at the destination computer. In response to the request, use of the resource is charged at the destination computer. In response to the request, execution of the second partition is started at the destination computer.
Type: Grant
Filed: August 16, 2011
Date of Patent: October 7, 2014
Assignee: International Business Machines Corporation
Inventor: Paul F. Olsen
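The sequence of steps in this abstract (halt charging at the source, allocate at the destination, start charging there, start the second partition) can be sketched as below. Every identifier is hypothetical, introduced only to make the ordering of the accounting steps concrete.

```python
def move_partition(first, source, destination, ledger):
    """Move a partition per the described steps, with per-computer metering.

    `ledger` maps computer name -> {"charging": bool}; this shape is an
    assumption made for the example.
    """
    # Halt charging for the resource at the source while the first
    # partition is still executing there.
    ledger[source["name"]]["charging"] = False
    # Allocate the resource to a second partition at the destination.
    resource = first["needs"]
    destination["free"] -= resource
    second = {"name": first["name"] + "-moved", "needs": resource, "running": False}
    # Charge use of the resource at the destination.
    ledger[destination["name"]]["charging"] = True
    # Start execution of the second partition at the destination.
    second["running"] = True
    return second
```

The notable design point in the abstract is the ordering: the consumer is never billed on both computers at once, because source-side charging stops before destination-side charging begins.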
-
Publication number: 20130262677
Abstract: A cloud capacity on demand manager manages capacity on demand for servers in a server cloud. The cloud capacity on demand manager may borrow capacity from one or more servers and lend the capacity borrowed from one server to a different server in the server cloud. When the server cloud is no longer intact, capacity borrowed from servers no longer in the server cloud is disabled, and servers no longer in the server cloud reclaim capacity that was lent to the server cloud.
Type: Application
Filed: March 27, 2012
Publication date: October 3, 2013
Applicant: International Business Machines Corporation
Inventors: Paul F. Olsen, Terry L. Schardt
-
Publication number: 20130262682
Abstract: A cloud capacity on demand manager manages capacity on demand for servers in a server cloud. The cloud capacity on demand manager may borrow capacity from one or more servers and lend the capacity borrowed from one server to a different server in the server cloud. When the server cloud is no longer intact, capacity borrowed from servers no longer in the server cloud is disabled, and servers no longer in the server cloud reclaim capacity that was lent to the server cloud.
Type: Application
Filed: November 26, 2012
Publication date: October 3, 2013
Applicant: International Business Machines Corporation
Inventors: Paul F. Olsen, Terry L. Schardt
-
Publication number: 20130046891
Abstract: In an embodiment, a request is received that requests to move a first partition from a source computer to a destination computer. In response to the request, charging is halted for a resource used by the first partition at the source computer while the first partition is executing at the source computer. In response to the request, a resource is allocated to a second partition at the destination computer. In response to the request, use of the resource is charged at the destination computer. In response to the request, execution of the second partition is started at the destination computer.
Type: Application
Filed: August 16, 2011
Publication date: February 21, 2013
Applicant: International Business Machines Corporation
Inventor: Paul F. Olsen
-
Patent number: 8197230
Abstract: A damper assembly including an input member, an output member, and a transfer assembly is disclosed. The input member is configured to receive a torsional input. The output member is configured to provide a torsional output. The transfer assembly is coupled between the input member and the output member and includes a ring, a first guide, a second guide, a first spring, and a second spring. The ring defines a first linear slide path that has a first end and a second end, and a second linear slide path that has a third end and a fourth end. The first guide is slideable within the first linear slide path and coupled to the input member. The second guide is slideable within the second linear slide path and coupled to the input member. The first spring is positioned between the first guide and the second end of the first linear slide path. The second spring is positioned between the second guide and the fourth end of the second linear slide path.
Type: Grant
Filed: December 3, 2008
Date of Patent: June 12, 2012
Assignee: Caterpillar Inc.
Inventors: William L. Schell, Thomas A. Brosowske, Paul F. Olsen, Thomas L. Atwell