Patents by Inventor Jose E. Moreira

Jose E. Moreira has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20140281710
    Abstract: A method of backstepping through a program execution includes dividing the program execution into a plurality of epochs, wherein the program execution is performed by an active core, determining, during a subsequent epoch of the plurality of epochs, that a rollback is to be performed, performing the rollback including re-executing a previous epoch of the plurality of epochs, wherein the previous epoch includes one or more instructions of the program execution stored by a checkpointing core, and adjusting a granularity of the plurality of epochs according to a frequency of the rollback.
    Type: Application
    Filed: June 27, 2013
    Publication date: September 18, 2014
    Inventors: Harold W. Cain, III, David M. Daly, Kattamuri Ekanadham, Jose E. Moreira, Mauricio J. Serrano
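    The abstract above describes an adaptive checkpointing policy. As a rough software illustration only (not the patented hardware mechanism), the sketch below shrinks the epoch length when rollbacks are frequent and grows it when they are rare; the class name, thresholds, and scaling factors are invented for the example.

    ```python
    # Minimal sketch (not the patented implementation): adjusting epoch
    # granularity from the observed rollback frequency.

    class EpochController:
        def __init__(self, epoch_len=1000, min_len=10, max_len=100000):
            self.epoch_len = epoch_len          # instructions per epoch
            self.min_len = min_len
            self.max_len = max_len
            self.epochs_run = 0
            self.rollbacks = 0

        def end_epoch(self, rolled_back: bool):
            """Record the outcome of one epoch and retune the granularity."""
            self.epochs_run += 1
            self.rollbacks += rolled_back
            freq = self.rollbacks / self.epochs_run
            if freq > 0.10:                     # frequent rollbacks: shorter epochs
                self.epoch_len = max(self.min_len, self.epoch_len // 2)
            elif freq < 0.01:                   # rare rollbacks: longer epochs
                self.epoch_len = min(self.max_len, self.epoch_len * 2)


    ctrl = EpochController()
    for outcome in [False, False, True, False, True, True]:
        ctrl.end_epoch(outcome)
    print(ctrl.epoch_len)
    ```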
  • Publication number: 20140095716
    Abstract: Aspects of the present invention provide a solution for maximizing server site resources in a server network. In an embodiment, an application signature is collected for an application. This application signature includes a representation of operating characteristics of the application. The application signature is compared with application signatures collected from other applications in the server network. Based on the comparison, the application is assigned for execution to a server site that hosts a group of applications that have similar application signatures to that of the application.
    Type: Application
    Filed: September 28, 2012
    Publication date: April 3, 2014
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: David M. Daly, Jose E. Moreira, Patricia M. Sagmeister, Jessica H. Tseng
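    As a loose software analogy of the assignment step described above (not IBM's implementation), the sketch below compares a new application's signature with the signatures already hosted at each site using cosine similarity and picks the most similar site; the signature keys and site names are made up.

    ```python
    # Minimal sketch: route an application to the server site whose hosted
    # applications have the most similar operating-characteristic signatures.

    import math

    def similarity(sig_a, sig_b):
        """Cosine similarity over the union of signature keys."""
        keys = set(sig_a) | set(sig_b)
        dot = sum(sig_a.get(k, 0.0) * sig_b.get(k, 0.0) for k in keys)
        na = math.sqrt(sum(v * v for v in sig_a.values()))
        nb = math.sqrt(sum(v * v for v in sig_b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def assign_site(app_sig, sites):
        """sites maps site name -> list of signatures already hosted there."""
        def site_score(sigs):
            return sum(similarity(app_sig, s) for s in sigs) / len(sigs)
        return max(sites, key=lambda name: site_score(sites[name]))

    sites = {
        "site-a": [{"cpu": 0.9, "net": 0.1}, {"cpu": 0.8, "net": 0.2}],
        "site-b": [{"cpu": 0.1, "net": 0.9}],
    }
    print(assign_site({"cpu": 0.85, "net": 0.15}, sites))  # -> site-a
    ```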
  • Publication number: 20140095718
    Abstract: Aspects of the present invention provide a solution for maximizing server site resources in a server network. In an embodiment, an application signature is collected for an application. This application signature includes a representation of operating characteristics of the application. The application signature is compared with application signatures collected from other applications in the server network. Based on the comparison, the application is assigned for execution to a server site that hosts a group of applications that have similar application signatures to that of the application.
    Type: Application
    Filed: October 4, 2012
    Publication date: April 3, 2014
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: David M. Daly, Jose E. Moreira, Patricia M. Sagmeister, Jessica H. Tseng
  • Patent number: 8683175
    Abstract: A method, system and computer program product are disclosed for interfacing between a multi-threaded processing core and an accelerator. In one embodiment, the method comprises copying from the processing core to the hardware accelerator memory address translations for each of multiple threads operating on the processing core, and simultaneously storing on the hardware accelerator one or more of the memory address translations for each of the threads. Whenever any one of the multiple threads operating on the processing core instructs the hardware accelerator to perform a specified operation, the hardware accelerator has stored thereon one or more of the memory address translations for the any one of the threads. This facilitates starting that specified operation without memory translation faults. In an embodiment, the copying includes, each time one of the memory address translations is updated on the processing core, copying the updated one of the memory address translations to the hardware accelerator.
    Type: Grant
    Filed: March 15, 2011
    Date of Patent: March 25, 2014
    Assignee: International Business Machines Corporation
    Inventors: Kattamuri Ekanadham, Hung Q. Le, Jose E. Moreira, Pratap C. Pattnaik
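    A minimal software model of the idea in the abstract above, assuming dictionary-based page tables stand in for hardware translation entries: every translation update on the core is copied to the accelerator, so the accelerator finds a valid translation when a thread hands it work. Class and method names are illustrative.

    ```python
    # Minimal sketch: the core mirrors per-thread address translations onto
    # the accelerator so accelerator work can start without translation faults.

    class Accelerator:
        def __init__(self):
            self.translations = {}          # (thread_id, virt_page) -> phys_page

        def update_translation(self, thread_id, virt_page, phys_page):
            self.translations[(thread_id, virt_page)] = phys_page

        def run(self, thread_id, virt_page):
            phys = self.translations.get((thread_id, virt_page))
            if phys is None:
                raise RuntimeError("translation fault")  # avoided by eager copying
            return f"operating on physical page {phys:#x}"

    class Core:
        def __init__(self, accel):
            self.accel = accel
            self.page_table = {}

        def map_page(self, thread_id, virt_page, phys_page):
            self.page_table[(thread_id, virt_page)] = phys_page
            # Each time a translation is updated, copy it to the accelerator.
            self.accel.update_translation(thread_id, virt_page, phys_page)

    accel = Accelerator()
    core = Core(accel)
    core.map_page(thread_id=0, virt_page=0x1000, phys_page=0xA000)
    print(accel.run(0, 0x1000))
    ```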
  • Publication number: 20140075121
    Abstract: Techniques for conflict detection in hardware transactional memory (HTM) are provided. In one aspect, a method for detecting conflicts in HTM includes the following steps. Conflict detection is performed eagerly by setting read and write bits in a cache as transactions having read and write requests are made. A given one of the transactions is stalled when a conflict is detected whereby more than one of the transactions are accessing data in the cache in a conflicting way. An address of the conflicting data is placed in a predictor. The predictor is queried whenever the write requests are made to determine whether they correspond to entries in the predictor. A copy of the data corresponding to entries in the predictor is placed in a store buffer. The write bits in the cache are set and the copy of the data in the store buffer is merged in at transaction commit.
    Type: Application
    Filed: October 5, 2012
    Publication date: March 13, 2014
    Applicant: International Business Machines Corporation
    Inventors: Colin B. Blundell, Harold W. Cain, III, Jose E. Moreira
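    The following sketch illustrates only the predictor idea from the abstract above in software terms, not the cache hardware: addresses that previously conflicted are remembered, later writes to those addresses go to a store buffer instead of being performed eagerly, and the buffer is merged at commit. All names and the conflict-recording API are invented for the example.

    ```python
    # Minimal sketch of predictor-guided write buffering in a transaction.

    class Transaction:
        def __init__(self, predictor):
            self.predictor = predictor          # set of conflict-prone addresses
            self.store_buffer = {}              # addr -> value, deferred writes
            self.write_set = {}                 # addr -> value, eager writes

        def write(self, addr, value):
            if addr in self.predictor:
                self.store_buffer[addr] = value # defer: likely to conflict
            else:
                self.write_set[addr] = value    # eager: set write bit now

        def record_conflict(self, addr):
            self.predictor.add(addr)            # learn the conflicting address

        def commit(self, memory):
            memory.update(self.write_set)
            memory.update(self.store_buffer)    # merge deferred writes at commit

    memory, predictor = {}, set()
    t1 = Transaction(predictor)
    t1.write(0x40, 1)
    t1.record_conflict(0x40)                    # a conflict is detected on 0x40
    t2 = Transaction(predictor)
    t2.write(0x40, 2)                           # now buffered instead of eager
    t2.commit(memory)
    print(memory)                               # {64: 2}
    ```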
  • Publication number: 20130019083
    Abstract: A mechanism is provided for redundant execution of a set of instructions. A redundant execution begin (rbegin) instruction to be executed by a first hardware thread on the first processor is identified in the set of instructions. The set of instructions immediately after the rbegin instruction are executed on the first hardware thread and on a second hardware thread. Responsive to both the first processor and the second processor ending execution of the set of instructions, responsive to a first set of cache lines in a first speculative store matching a second set of cache lines in a second speculative store, and responsive to a first set of register states in a first status register matching a second set of register states in a second status register, dirty lines in the first speculative store are committed thereby committing a redundant transaction state to an architectural state.
    Type: Application
    Filed: July 11, 2011
    Publication date: January 17, 2013
    Applicant: International Business Machines Corporation
    Inventors: Harold W. Cain, III, David M. Daly, Kattamuri Ekanadham, Michael C. Huang, Jose E. Moreira, Mauricio J. Serrano
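    A minimal sketch of the commit check described above, with hypothetical data structures: dirty lines are committed only if both redundant copies produced identical speculative stores and register states; otherwise the state is discarded.

    ```python
    # Minimal sketch: commit the redundant transaction state only on a match.

    def try_commit(store_a, store_b, regs_a, regs_b, memory):
        """store_*: dict cache_line -> data; regs_*: dict register -> value."""
        if store_a == store_b and regs_a == regs_b:
            memory.update(store_a)              # commit dirty lines to memory
            return True
        return False                            # mismatch: discard and re-execute

    memory = {}
    ok = try_commit({0x80: b"x"}, {0x80: b"x"}, {"r1": 7}, {"r1": 7}, memory)
    print(ok, memory)
    ```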
  • Publication number: 20130019085
    Abstract: A mechanism is provided for reducing a penalty for executing a correct branch of a branch instruction. An execution unit in a processor of a data processing system executes a first branch of the branch instruction from a main thread of the processor and executes a second branch of the branch instruction from an assist thread of the processor. The execution unit determines whether the main thread is the correct branch of the branch instruction or the assist thread is the correct branch of the branch instruction. Responsive to the assist thread being the correct branch of the branch instruction, the execution unit pauses execution of the branch instruction on both the main thread and the assist thread. The assist thread then inherits the context of the main thread so that execution of the second branch may continue.
    Type: Application
    Filed: July 12, 2011
    Publication date: January 17, 2013
    Applicant: International Business Machines Corporation
    Inventors: Harold W. Cain, III, David M. Daly, Michael C. Huang, Jose E. Moreira, IL Park
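    As a software analogy only (hardware threads are modeled here as plain function calls), the sketch below runs both sides of a branch and, once the condition resolves, keeps the context produced by the correct path and discards the other.

    ```python
    # Minimal software analogy, not the hardware mechanism: execute both branch
    # paths speculatively, then keep only the correct path's resulting context.

    def run_branch(condition, taken_path, not_taken_path, state):
        main_result = taken_path(dict(state))        # main thread's path
        assist_result = not_taken_path(dict(state))  # assist thread's path
        # When the branch resolves, keep only the correct context.
        return main_result if condition else assist_result

    state = {"x": 10}
    result = run_branch(
        condition=False,
        taken_path=lambda s: {**s, "x": s["x"] + 1},
        not_taken_path=lambda s: {**s, "x": s["x"] * 2},
        state=state,
    )
    print(result)                                    # {'x': 20}: assist path kept
    ```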
  • Publication number: 20120239904
    Abstract: A method, system and computer program product are disclosed for interfacing between a multi-threaded processing core and an accelerator. In one embodiment, the method comprises copying from the processing core to the hardware accelerator memory address translations for each of multiple threads operating on the processing core, and simultaneously storing on the hardware accelerator one or more of the memory address translations for each of the threads. Whenever any one of the multiple threads operating on the processing core instructs the hardware accelerator to perform a specified operation, the hardware accelerator has stored thereon one or more of the memory address translations for the any one of the threads. This facilitates starting that specified operation without memory translation faults. In an embodiment, the copying includes, each time one of the memory address translations is updated on the processing core, copying the updated one of the memory address translations to the hardware accelerator.
    Type: Application
    Filed: March 15, 2011
    Publication date: September 20, 2012
    Applicant: International Business Machines Corporation
    Inventors: Kattamuri Ekanadham, Hung Q. Le, Jose E. Moreira, Pratap C. Pattnaik
  • Patent number: 8214560
    Abstract: A system, method and computer program product are provided for supporting Transactional Memory communications. In one embodiment, the system comprises a transactional memory host with a host transactional memory buffer, an endpoint device, a transactional memory buffer associated with the endpoint device, and a communication path connecting the endpoint device and host. Input/Output transactions associated with the endpoint device executed in transactional memory on the host are stored in both the host transactional memory buffer and the transactional memory buffer associated with the endpoint device. In an embodiment, the Transactional Memory system further comprises an intermediate device located on the communication path between the host and the endpoint device, and an intermediate transactional memory buffer associated with said intermediate device.
    Type: Grant
    Filed: April 20, 2010
    Date of Patent: July 3, 2012
    Assignee: International Business Machines Corporation
    Inventors: Jose E. Moreira, Patricia M. Sagmeister
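    A purely illustrative sketch of the buffering arrangement described above, assuming simple Python lists stand in for the hardware buffers: each I/O transaction is logged in both the host's transactional memory buffer and the endpoint device's buffer, so either side can replay or discard it.

    ```python
    # Minimal sketch: mirror each transactional I/O into host and endpoint buffers.

    class TMBuffer:
        def __init__(self, name):
            self.name = name
            self.entries = []

        def log(self, txn_id, payload):
            self.entries.append((txn_id, payload))

        def drop(self, txn_id):
            """Discard a transaction's entries, e.g. on abort."""
            self.entries = [e for e in self.entries if e[0] != txn_id]

    def transactional_io(txn_id, payload, host_buf, endpoint_buf):
        # Store the I/O transaction in both buffers along the communication path.
        host_buf.log(txn_id, payload)
        endpoint_buf.log(txn_id, payload)

    host = TMBuffer("host")
    endpoint = TMBuffer("endpoint")
    transactional_io(1, b"write block 42", host, endpoint)
    print(host.entries, endpoint.entries)
    ```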
  • Publication number: 20110320765
    Abstract: A computer processor, method, and computer program product for executing vector processing instructions on a variable width vector register file. An example embodiment is a computer processor that includes an instruction execution unit coupled to a variable width vector register file which contains a number of vector registers whose width is changeable during operation of the computer processor.
    Type: Application
    Filed: June 28, 2010
    Publication date: December 29, 2011
    Applicant: International Business Machines Corporation
    Inventors: Tejas Karkhanis, Jose E. Moreira, Valentina Salapura
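    A minimal sketch of a register file whose vector width can be changed while it is in use, as the abstract above describes; this is a software model with invented sizes, not the processor design. Narrowing truncates each register and widening zero-pads it.

    ```python
    # Minimal sketch: a vector register file whose width changes at run time.

    class VectorRegisterFile:
        def __init__(self, num_regs=32, width=4):
            self.width = width
            self.regs = [[0] * width for _ in range(num_regs)]

        def set_width(self, new_width):
            """Resize every register to the new vector width."""
            for i, reg in enumerate(self.regs):
                self.regs[i] = (reg + [0] * new_width)[:new_width]
            self.width = new_width

        def write(self, idx, values):
            self.regs[idx] = (list(values) + [0] * self.width)[:self.width]

    vrf = VectorRegisterFile(num_regs=4, width=4)
    vrf.write(0, [1, 2, 3, 4])
    vrf.set_width(8)                            # widen during operation
    print(vrf.regs[0])                          # [1, 2, 3, 4, 0, 0, 0, 0]
    ```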
  • Publication number: 20110314259
    Abstract: A pointer is for pointing to a next-to-read location within a stack of information. For pushing information onto the stack: a value is saved of the pointer, which points to a first location within the stack as being the next-to-read location; the pointer is updated so that it points to a second location within the stack as being the next-to-read location; and the information is written for storage at the second location. For popping the information from the stack: in response to the pointer, the information is read from the second location as the next-to-read location; and the pointer is restored to equal the saved value so that it points to the first location as being the next-to-read location.
    Type: Application
    Filed: June 17, 2010
    Publication date: December 22, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kattamuri Ekanadham, Brian R. Konigsburg, David S. Levitan, Jose E. Moreira, David Mui, IL Park
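    The push/pop discipline in the abstract above can be illustrated in a few lines of software (an analogy, not the hardware structure): push saves the current next-to-read pointer before advancing it, and pop restores the saved value rather than recomputing it.

    ```python
    # Minimal sketch of the save-pointer-on-push, restore-on-pop discipline.

    class Stack:
        def __init__(self, size=16):
            self.slots = [None] * size
            self.next_to_read = -1              # points at the top entry
            self.saved = []                     # saved pointer values, one per push

        def push(self, value):
            self.saved.append(self.next_to_read)   # save current next-to-read
            self.next_to_read += 1                 # update to the new location
            self.slots[self.next_to_read] = value  # write the information there

        def pop(self):
            value = self.slots[self.next_to_read]  # read from next-to-read
            self.next_to_read = self.saved.pop()   # restore the saved pointer
            return value

    s = Stack()
    s.push("a"); s.push("b")
    print(s.pop(), s.pop())                     # b a
    ```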
  • Publication number: 20110296148
    Abstract: Mechanisms are provided, in a data processing system having a processor and a transactional memory, for executing a transaction in the data processing system. These mechanisms execute a transaction comprising one or more instructions that modify at least a portion of the transactional memory. The transaction is suspended in response to a transaction suspend instruction being executed by the processor. A suspended block of code is executed in a non-transactional manner while the transaction is suspended. A determination is made as to whether an interrupt occurs while the transaction is suspended. In response to an interrupt occurring while the transaction is suspended, a transaction abort operation is delayed until after the transaction suspension is discontinued.
    Type: Application
    Filed: May 27, 2010
    Publication date: December 1, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Harold W. Cain, III, Bradly G. Frey, Benjamin Herrenschmidt, Hung Q. Le, Cathy May, Maged M. Michael, Jose E. Moreira, Priya A. Nagpurkar, Naresh Nayar, Randal C. Swanberg
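    A minimal sketch of the described control flow, with hypothetical names: an interrupt that arrives while the transaction is suspended sets a pending-abort flag instead of aborting immediately, and the abort is applied only after the suspension ends.

    ```python
    # Minimal sketch: delay a transaction abort while the transaction is suspended.

    class SuspendableTransaction:
        def __init__(self):
            self.suspended = False
            self.pending_abort = False
            self.aborted = False

        def suspend(self):
            self.suspended = True

        def interrupt(self):
            if self.suspended:
                self.pending_abort = True       # delay the abort while suspended
            else:
                self.aborted = True

        def resume(self):
            self.suspended = False
            if self.pending_abort:              # apply the deferred abort now
                self.aborted = True
                self.pending_abort = False

    txn = SuspendableTransaction()
    txn.suspend()
    txn.interrupt()                             # occurs while suspended
    print(txn.aborted)                          # False: abort is delayed
    txn.resume()
    print(txn.aborted)                          # True: applied after resume
    ```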
  • Publication number: 20110258347
    Abstract: A system, method and computer program product are provided for supporting Transactional Memory communications. In one embodiment, the system comprises a transactional memory host with a host transactional memory buffer, an endpoint device, a transactional memory buffer associated with the endpoint device, and a communication path connecting the endpoint device and host. Input/Output transactions associated with the endpoint device executed in transactional memory on the host are stored in both the host transactional memory buffer and the transactional memory buffer associated with the endpoint device. In an embodiment, the Transactional Memory system further comprises an intermediate device located on the communication path between the host and the endpoint device, and an intermediate transactional memory buffer associated with said intermediate device.
    Type: Application
    Filed: April 20, 2010
    Publication date: October 20, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jose E. Moreira, Patricia M. Sagmeister
  • Patent number: 7895323
    Abstract: A system for predicting an occurrence of a critical event in a computer cluster includes: a control system that includes an event log, a system parameter log, a memory for storing information related to occurrences of critical events, and a processor. The processor implements a hybrid prediction system; loads the information from the event log and the system performance log into a Bayesian network model; uses the Bayesian network model to predict a future critical event; makes future scheduling and current data migration selections; and adapts the Bayesian network model by feeding the scheduling and data migration selections.
    Type: Grant
    Filed: November 10, 2008
    Date of Patent: February 22, 2011
    Assignee: International Business Machines Corporation
    Inventors: Manish Gupta, Jose E. Moreira, Adam J. Oliner, Ramendra K. Sahoo
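    The sketch below is not the patented hybrid Bayesian predictor; it only illustrates the general idea of learning, from logged events, how likely a critical event is to follow a given symptom and steering new work toward nodes whose current symptoms look safe. The log format, threshold, and function names are assumptions.

    ```python
    # Minimal sketch: conditional frequencies from an event log guide scheduling.

    from collections import Counter

    def fit(log):
        """log: list of (symptom, critical_event_followed) observations."""
        counts, hits = Counter(), Counter()
        for symptom, critical in log:
            counts[symptom] += 1
            hits[symptom] += critical
        return {s: hits[s] / counts[s] for s in counts}

    def pick_node(nodes, model, threshold=0.2):
        """Prefer nodes whose current symptom predicts no critical event."""
        safe = [n for n, sym in nodes.items() if model.get(sym, 0.0) < threshold]
        return safe[0] if safe else min(nodes, key=lambda n: model.get(nodes[n], 0.0))

    model = fit([("ecc_error", True), ("ecc_error", True), ("fan_warn", False)])
    print(pick_node({"node1": "ecc_error", "node2": "fan_warn"}, model))  # node2
    ```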
  • Patent number: 7721009
    Abstract: A method for implementing large scale parallel file I/O processing includes steps of: separating processing nodes into compute nodes specializing in computation and I/O nodes (computer processors restricted to running I/O daemons); organizing the compute nodes and the I/O nodes into processing sets, the processing sets including: one dedicated I/O node corresponding to a plurality of compute nodes. I/O related system calls are received in the compute nodes then sent to the corresponding I/O nodes. The I/O related system calls are processed through a system I/O daemon residing in the I/O node. The plurality of compute nodes are evenly distributed across participating processing sets. Additionally, for collective I/O operations, compute nodes from each processing set are assigned as I/O aggregators to issue I/O requests to their corresponding I/O node, wherein the I/O aggregators are evenly distributed across the processing set.
    Type: Grant
    Filed: November 22, 2006
    Date of Patent: May 18, 2010
    Assignee: International Business Machines Corporation
    Inventors: Jose E. Moreira, Ramendra K. Sahoo, Hao Yu
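    As a rough illustration of the processing-set layout described above (numbers and function names are invented), the sketch below splits a node range into sets of one dedicated I/O node plus several compute nodes and designates one compute node per set as the I/O aggregator.

    ```python
    # Minimal sketch: build processing sets of one I/O node plus compute nodes,
    # with one compute node per set acting as the collective-I/O aggregator.

    def build_psets(num_nodes, compute_per_io):
        psets = []
        node = 0
        while node < num_nodes:
            io_node = node
            compute = list(range(node + 1, min(node + 1 + compute_per_io, num_nodes)))
            if compute:
                psets.append({
                    "io_node": io_node,
                    "compute_nodes": compute,
                    "aggregator": compute[0],   # issues I/O requests to the I/O node
                })
            node += 1 + compute_per_io
        return psets

    for pset in build_psets(num_nodes=12, compute_per_io=3):
        print(pset)
    ```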
  • Publication number: 20090282151
    Abstract: A method for managing clustered computer resources, and particularly very large scale clusters of computer resources, by a semi-hierarchical n-level, n+1-tier approach. Controller resources and controlled resources exist at different hardware levels. The top level consists of controller nodes and a first tier is defined for at least part of the top level. At a last level, at which controlled nodes are found, a last tier is defined. Additional levels of controlled and controller resources may exist between the top and last levels. At least one logical intermediate tier is introduced between adjacent levels and comprises at least one proxy or set of proxy processes.
    Type: Application
    Filed: July 24, 2009
    Publication date: November 12, 2009
    Applicant: International Business Machines Corporation
    Inventors: Myung Mun Bae, Jose E. Moreira, Ramendra Kumar Sahoo
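    A minimal sketch of the tiered arrangement described above, using invented class names: a top-tier controller issues commands to intermediate-tier proxies, and each proxy fans the command out to the controlled nodes it represents.

    ```python
    # Minimal sketch: controller -> proxies -> controlled nodes.

    class ControlledNode:
        def __init__(self, name):
            self.name = name

        def apply(self, command):
            return f"{self.name}: {command} done"

    class Proxy:
        """Represents a set of controlled nodes to the tier above."""
        def __init__(self, nodes):
            self.nodes = nodes

        def apply(self, command):
            return [node.apply(command) for node in self.nodes]

    class Controller:
        def __init__(self, proxies):
            self.proxies = proxies

        def broadcast(self, command):
            return [r for proxy in self.proxies for r in proxy.apply(command)]

    controller = Controller([
        Proxy([ControlledNode("n1"), ControlledNode("n2")]),
        Proxy([ControlledNode("n3")]),
    ])
    print(controller.broadcast("restart daemon"))
    ```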
  • Patent number: 7577730
    Abstract: A system and method for managing clustered computer resources, and more particularly very large scale clusters of computer resources, by a semi-hierarchical n-level, n+1-tier approach. The top level consists of the controller nodes. A first tier is defined at the top level. At a last level, at which the cluster of controlled nodes is found, a last tier is defined. Additional levels of controller or controlled nodes may exist between the top and bottom levels. At least one intermediate tier is introduced between two of the levels and comprises at least one proxy or a plurality of proxies. A proxy is a process or set of processes representing processes of the clustered computer resources. Proxies can run either on controller nodes or on the controlled nodes or controlled node clusters to facilitate administration of the controlled nodes.
    Type: Grant
    Filed: November 27, 2002
    Date of Patent: August 18, 2009
    Assignee: International Business Machines Corporation
    Inventors: Myung Mun Bae, Jose E. Moreira, Ramendra Kumar Sahoo
  • Publication number: 20090070628
    Abstract: A system for predicting an occurrence of a critical event in a computer cluster includes: a control system that includes an event log, a system parameter log, a memory for storing information related to occurrences of critical events, and a processor. The processor implements a hybrid prediction system; loads the information from the event log and the system performance log into a Bayesian network model; uses the Bayesian network model to predict a future critical event; makes future scheduling and current data migration selections; and adapts the Bayesian network model by feeding the scheduling and data migration selections.
    Type: Application
    Filed: November 10, 2008
    Publication date: March 12, 2009
    Applicant: International Business Machines Corporation
    Inventors: Manish Gupta, Jose E. Moreira, Adam J. Oliner, Ramendra K. Sahoo
  • Patent number: 7451210
    Abstract: A hybrid method of predicting the occurrence of future critical events in a computer cluster having a series of nodes records system performance parameters and the occurrence of past critical events. A data filter filters the logged data to eliminate redundancies and decrease the data storage requirements of the system. Time-series models and rule based classification schemes are used to associate various system parameters with the past occurrence of critical events and predict the occurrence of future critical events. Ongoing processing jobs are migrated to nodes for which no critical events are predicted and future jobs are routed to more robust nodes.
    Type: Grant
    Filed: November 24, 2003
    Date of Patent: November 11, 2008
    Assignee: International Business Machines Corporation
    Inventors: Manish Gupta, Jose E. Moreira, Adam J. Oliner, Ramendra K. Sahoo
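    One way to picture the data-filtering step mentioned in the abstract above (an assumed filter, not the patented one): repeated occurrences of the same event on the same node within a short window are dropped, reducing storage while keeping distinct faults.

    ```python
    # Minimal sketch: drop redundant repeats of the same logged event.

    def filter_log(events, window=300):
        """events: list of (timestamp_sec, node, event_id), sorted by timestamp."""
        last_seen = {}                          # (node, event_id) -> timestamp
        kept = []
        for ts, node, event_id in events:
            key = (node, event_id)
            if key not in last_seen or ts - last_seen[key] > window:
                kept.append((ts, node, event_id))
            last_seen[key] = ts
        return kept

    log = [(0, "n1", "ECC"), (60, "n1", "ECC"), (400, "n1", "ECC"), (10, "n2", "NIC")]
    print(filter_log(sorted(log)))              # the 60 s repeat is dropped
    ```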
  • Publication number: 20080120435
    Abstract: A method for implementing large scale parallel file I/O processing includes steps of: separating processing nodes into compute nodes specializing in computation and I/O nodes (computer processors restricted to running I/O daemons); organizing the compute nodes and the I/O nodes into processing sets, the processing sets including: one dedicated I/O node corresponding to a plurality of compute nodes. I/O related system calls are received in the compute nodes then sent to the corresponding I/O nodes. The I/O related system calls are processed through a system I/O daemon residing in the I/O node. The plurality of compute nodes are evenly distributed across participating processing sets. Additionally, for collective I/O operations, compute nodes from each processing set are assigned as I/O aggregators to issue I/O requests to their corresponding I/O node, wherein the I/O aggregators are evenly distributed across the processing set.
    Type: Application
    Filed: November 22, 2006
    Publication date: May 22, 2008
    Applicant: International Business Machines Corporation
    Inventors: Jose E. Moreira, Ramendra K. Sahoo, Hao Yu