Patents by Inventor Jos Accapadi

Jos Accapadi has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20100082855
    Abstract: Input/output (I/O) requests generated by processes are typically stored in I/O queues. Because the queued I/O requests may not be associated with the processes that generated them, changing a process' priority may not affect the priority of the I/O requests generated by the process. Therefore, after the process' priority has been increased, it may be forced to wait for an I/O handler to service its I/O request, which may be stuck behind an I/O request generated by a lower priority process. Functionality can be implemented to associate the processes' priorities with the I/O requests generated by the processes. Also, reordering the queued I/O requests to reflect changes in the processes' priorities can ensure that the I/O requests from high priority processes are serviced before the I/O requests from low priority processes. This can ensure efficient processing and lower wait times for high priority processes.
    Type: Application
    Filed: September 29, 2008
    Publication date: April 1, 2010
    Applicant: International Business Machines Corporation
    Inventors: Jos Accapadi, Andrew Dunshea, Vandana Mallempati, Agustin Mena, III
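The abstract above describes tagging queued I/O requests with the priority of the process that issued them and reordering the queue when a process's priority changes. A minimal illustrative sketch of that idea follows; the class and variable names are invented here and this is not the patented implementation.

```python
import heapq
import itertools

class PriorityIOQueue:
    """Toy I/O queue that keeps requests ordered by the owning process's priority."""

    def __init__(self):
        self._heap = []                      # (priority, seq, pid, request)
        self._seq = itertools.count()        # tie-breaker preserves FIFO order

    def submit(self, pid, priority, request):
        # Lower number = higher priority, as in many schedulers.
        heapq.heappush(self._heap, (priority, next(self._seq), pid, request))

    def reprioritize(self, pid, new_priority):
        # Reflect a process priority change in the I/O requests it already queued.
        self._heap = [
            (new_priority if p == pid else prio, seq, p, req)
            for prio, seq, p, req in self._heap
        ]
        heapq.heapify(self._heap)

    def next_request(self):
        # The I/O handler services the highest-priority request first.
        _, _, pid, request = heapq.heappop(self._heap)
        return pid, request

queue = PriorityIOQueue()
queue.submit(pid=10, priority=20, request="read block 7")
queue.submit(pid=11, priority=5, request="write block 3")
queue.reprioritize(10, 1)                    # process 10 was boosted
print(queue.next_request())                  # -> (10, 'read block 7')
```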
  • Patent number: 7475194
    Abstract: A computer implemented method, apparatus, and computer usable code for managing cache data. A partition identifier is associated with a cache entry in a cache, wherein the partition identifier identifies a last partition accessing the cache entry. The partition identifier associated with the cache entry is compared with a previous partition identifier located in a processor register in response to the cache entry being moved into a lower level cache relative to the cache. The cache entry is marked if the partition identifier associated with the cache entry matches the previous partition identifier located in the processor register to form a marked cache entry, wherein the marked cache entry is aged at a slower rate relative to an unmarked cache entry.
    Type: Grant
    Filed: January 2, 2008
    Date of Patent: January 6, 2009
    Assignee: International Business Machines Corporation
    Inventors: Jos Accapadi, Andrew Dunshea, Greg R. Mewhinney, Mysore Sathyanarayana Srinivas
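This cache-management abstract (the same text appears below under publications 20080133837 and 20070038809 and patent 7337276) tags each cast-out cache entry with the last partition to access it and ages marked entries more slowly. A rough sketch under simplified assumptions follows: a single victim list with an explicit age counter stands in for the lower-level cache, and all names are illustrative rather than taken from the patent.

```python
class CacheEntry:
    def __init__(self, tag, partition_id):
        self.tag = tag
        self.partition_id = partition_id   # last partition that touched this entry
        self.age = 0
        self.marked = False

class LowerLevelCache:
    """Toy lower-level cache in which 'marked' entries age at half speed."""

    def __init__(self, previous_partition_register):
        # Models the processor register holding the previous partition ID.
        self.previous_partition = previous_partition_register
        self.entries = []

    def cast_out(self, entry):
        # Called when an entry is moved down from the higher-level cache.
        entry.marked = (entry.partition_id == self.previous_partition)
        self.entries.append(entry)

    def tick(self):
        # Marked entries accumulate age more slowly, so they survive longer.
        for e in self.entries:
            e.age += 1 if e.marked else 2

    def evict(self):
        victim = max(self.entries, key=lambda e: e.age)
        self.entries.remove(victim)
        return victim

cache = LowerLevelCache(previous_partition_register=3)
cache.cast_out(CacheEntry("A", partition_id=3))   # matches -> marked
cache.cast_out(CacheEntry("B", partition_id=7))   # no match -> unmarked
for _ in range(4):
    cache.tick()
print(cache.evict().tag)                          # -> 'B' (aged faster)
```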
  • Publication number: 20080133837
    Abstract: A computer implemented method, apparatus, and computer usable code for managing cache data. A partition identifier is associated with a cache entry in a cache, wherein the partition identifier identifies a last partition accessing the cache entry. The partition identifier associated with the cache entry is compared with a previous partition identifier located in a processor register in response to the cache entry being moved into a lower level cache relative to the cache. The cache entry is marked if the partition identifier associated with the cache entry matches the previous partition identifier located in the processor register to form a marked cache entry, wherein the marked cache entry is aged at a slower rate relative to an unmarked cache entry.
    Type: Application
    Filed: January 2, 2008
    Publication date: June 5, 2008
    Inventors: Jos Accapadi, Andrew Dunshea, Greg R. Mewhinney, Mysore Sathyanarayana Srinivas
  • Publication number: 20080098397
    Abstract: A system and method for scheduling threads in a Simultaneous Multithreading (SMT) processor environment utilizing multiple SMT processors is provided. Poor-performing threads that are being run on each of the SMT processors are identified. After being identified, the poor-performing threads are moved to a different SMT processor. Data is captured regarding the performance of threads; in one embodiment, this data includes each thread's CPI value. When a thread is moved, data regarding the thread and its performance at the time it was moved is recorded along with a timestamp. The data regarding previous moves is used to determine whether a thread's performance improved following the move.
    Type: Application
    Filed: December 13, 2007
    Publication date: April 24, 2008
    Inventors: Jos Accapadi, Andrew Dunshea, Dirk Michel, Mysore Srinivas
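The scheduling idea above can be sketched roughly as follows: identify poor performers by their CPI, move them to another SMT processor, and keep a timestamped record so a later comparison can tell whether the move helped. This is only an illustrative data-structure sketch with an invented CPI threshold, not the patented scheduler.

```python
import time

class ThreadStats:
    def __init__(self, tid, cpi, processor):
        self.tid = tid
        self.cpi = cpi                 # cycles per instruction; higher = worse
        self.processor = processor
        self.move_history = []         # (timestamp, old_processor, cpi_at_move)

def rebalance(threads, processors, cpi_threshold=3.0):
    """Move poor performers (high CPI) to a different SMT processor."""
    for t in threads:
        if t.cpi > cpi_threshold:
            target = next(p for p in processors if p != t.processor)
            t.move_history.append((time.time(), t.processor, t.cpi))
            t.processor = target

def improved_since_last_move(thread):
    # Compare current CPI with the CPI recorded when the thread was moved.
    if not thread.move_history:
        return None
    _, _, cpi_at_move = thread.move_history[-1]
    return thread.cpi < cpi_at_move

threads = [ThreadStats(1, cpi=4.2, processor="smt0"),
           ThreadStats(2, cpi=1.1, processor="smt0")]
rebalance(threads, processors=["smt0", "smt1"])
threads[0].cpi = 2.0                   # performance measured after the move
print(threads[0].processor, improved_since_last_move(threads[0]))  # smt1 True
```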
  • Publication number: 20080072228
    Abstract: A system and method are provided for delaying a priority boost of an execution thread. When a thread prepares to enter a critical section of code, such as when the thread utilizes a shared system resource, a user-mode-accessible data area is updated to indicate that the thread is in a critical section and, if the kernel receives a preemption event, the priority boost that the thread should receive. If the kernel receives a preemption event before the thread finishes the critical section, the kernel applies the priority boost on behalf of the thread. Often, the thread finishes the critical section without its priority ever actually being boosted. If the thread does receive an actual priority boost, then after the critical section is finished, the kernel resets the thread's priority to its normal level.
    Type: Application
    Filed: November 21, 2007
    Publication date: March 20, 2008
    Inventors: Jos Accapadi, Andrew Dunshea, Dirk Michel, James Van Fleet
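The deferred-boost mechanism above has the thread record, in a user-mode-visible area, that it is inside a critical section and what boost it would want; the kernel applies the boost only if a preemption event actually arrives before the section ends. A single-process sketch of that state machine, with invented names (a real implementation lives in the kernel):

```python
class UserModeArea:
    """User-mode-accessible area the kernel can read on a preemption event."""
    def __init__(self):
        self.in_critical_section = False
        self.requested_boost = 0

class Thread:
    def __init__(self, base_priority):
        self.priority = base_priority
        self.base_priority = base_priority
        self.shared = UserModeArea()

    def enter_critical_section(self, boost):
        # Cheap user-mode bookkeeping; no system call is made here.
        self.shared.in_critical_section = True
        self.shared.requested_boost = boost

    def exit_critical_section(self):
        self.shared.in_critical_section = False
        if self.priority != self.base_priority:
            self.priority = self.base_priority   # any real boost is reset afterwards

def kernel_preemption_event(thread):
    # Only now is the boost the thread asked for actually applied.
    if thread.shared.in_critical_section:
        thread.priority += thread.shared.requested_boost

t = Thread(base_priority=10)
t.enter_critical_section(boost=5)
kernel_preemption_event(t)      # preemption arrives mid-section -> boost applied
print(t.priority)               # 15
t.exit_critical_section()
print(t.priority)               # 10 (reset after the critical section)
```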
  • Patent number: 7337276
    Abstract: A computer implemented method, apparatus, and computer usable code for managing cache data. A partition identifier is associated with a cache entry in a cache, wherein the partition identifier identifies a last partition accessing the cache entry. The partition identifier associated with the cache entry is compared with a previous partition identifier located in a processor register in response to the cache entry being moved into a lower level cache relative to the cache. The cache entry is marked if the partition identifier associated with the cache entry matches the previous partition identifier located in the processor register to form a marked cache entry, wherein the marked cache entry is aged at a slower rate relative to an unmarked cache entry.
    Type: Grant
    Filed: August 11, 2005
    Date of Patent: February 26, 2008
    Assignee: International Business Machines Corporation
    Inventors: Jos Accapadi, Andrew Dunshea, Greg R. Mewhinney, Mysore Sathyanaranyana Srinivas
  • Publication number: 20070294521
    Abstract: Methods, systems, and media are disclosed for improved granularity of a request-response communication on a networked computer system. One example embodiment includes receiving the request-response communication by the networked computer system, and associating the request-response communication with a port, having a nodelay setting, from a set of ports on the networked computer system. Further, the example embodiment includes enabling, based upon the associating, the nodelay setting upon connection of the request-response communication with the port. Further still, the example embodiment includes sending, in accordance with the enabling, the request-response communication to a destination in communication with the networked computer system.
    Type: Application
    Filed: June 25, 2007
    Publication date: December 20, 2007
    Inventors: Jos Accapadi, Kavitha Baratakke, Andrew Dunshea, Venkat Venkatsubra
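The idea above (the same abstract appears under publication 20060047848 below) associates a request-response connection with a port that carries a nodelay setting and enables that setting when the connection is made, so small messages are not held back by coalescing delays. A minimal sketch using the standard TCP_NODELAY socket option; the per-port policy table is invented for illustration.

```python
import socket

# Hypothetical per-port policy table: ports whose traffic is request-response
# and should therefore be sent without coalescing delays.
NODELAY_PORTS = {8080, 9443}

def connect_with_port_policy(host, port):
    sock = socket.create_connection((host, port))
    if port in NODELAY_PORTS:
        # Enable the nodelay setting for this connection.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return sock

# Usage (requires a reachable server):
# s = connect_with_port_policy("example.com", 8080)
# s.sendall(b"small request")   # sent immediately rather than being coalesced
```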
  • Publication number: 20070168434
    Abstract: A computer implemented method, apparatus, and computer usable program code for saving information from an email message. The information is selected from the email message to form selected information. The selected information and header information are saved in the email message. The header information is designated through a user preference.
    Type: Application
    Filed: January 18, 2006
    Publication date: July 19, 2007
    Inventors: Jos Accapadi, Andrew Dunshea
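The feature above, saving a selected portion of an e-mail together with whichever header fields a user preference designates, reduces to a simple filter; a toy sketch with invented field and function names:

```python
def save_selection(message, selected_text, header_preference):
    """Return a record containing the selected text plus the preferred headers.

    `message` is a dict of header fields; `header_preference` is the
    user-designated list of headers to keep (both invented for illustration).
    """
    kept_headers = {h: message[h] for h in header_preference if h in message}
    return {"selection": selected_text, "headers": kept_headers}

msg = {"From": "alice@example.com", "Date": "2006-01-18", "Subject": "Status"}
record = save_selection(msg, "Shipping slips to Friday.", ["From", "Date"])
print(record)
```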
  • Publication number: 20070136725
    Abstract: A system and method are provided that reserve a software lock for a waiting thread. When a software lock is released by a first thread, a second thread that is waiting for the same resource controlled by the software lock is woken up. In addition, a reservation to the software lock is established for the second thread. After the reservation is established, if the lock is available and requested by a thread other than the second thread, the requesting thread is denied, added to the wait queue, and put to sleep. In addition, the reservation is cleared. After the reservation has been cleared, the lock will be granted to the next thread to request the lock.
    Type: Application
    Filed: December 12, 2005
    Publication date: June 14, 2007
    Inventors: Jos Accapadi, Matthew Accapadi, Andrew Dunshea, Dirk Michel
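The reservation protocol above (wake one waiter on release and reserve the lock for it; deny, queue, and put to sleep any other requester while the reservation stands, clearing the reservation as it does so) can be sketched as a small single-threaded state machine. Names and structure are invented for illustration.

```python
from collections import deque

class ReservedLock:
    """Toy state machine for a lock that is reserved for the thread it wakes."""

    def __init__(self):
        self.owner = None
        self.reserved_for = None
        self.wait_queue = deque()

    def acquire(self, tid):
        if self.owner is None and self.reserved_for in (None, tid):
            self.owner = tid
            if self.reserved_for == tid:
                self.reserved_for = None
            return True
        # Denied: the lock is either held or reserved for someone else.
        self.wait_queue.append(tid)          # the thread sleeps in the wait queue
        if self.reserved_for is not None and tid != self.reserved_for:
            self.reserved_for = None         # the reservation is cleared
        return False

    def release(self):
        self.owner = None
        if self.wait_queue:
            woken = self.wait_queue.popleft()
            self.reserved_for = woken        # reserve the lock for the woken thread
            return woken
        return None

lock = ReservedLock()
lock.acquire("T1")
lock.acquire("T2")           # denied, queued
woken = lock.release()       # wakes T2 and reserves the lock for it
print(lock.acquire("T3"))    # False: reserved for T2; T3 is queued and the reservation clears
print(lock.acquire("T2"))    # True: after the clear, the next requester gets the lock
```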
  • Publication number: 20070101052
    Abstract: A system and method for implementing a fast file synchronization in a data processing system. A memory management unit divides a file stored in system memory into a collection of data block groups. In response to a master (e.g., processing unit, peripheral, etc.) modifying a first data block group among the collection of data block groups, the memory management unit writes a first block group number associated with the first data block group to system memory. In response to a master modifying a second data block group, the memory management unit writes the first data block group to a hard disk drive and writes a second data block group number associated with the second data block group to system memory. In response to a request to update modified data block groups of the file stored in the system memory to the hard disk drive, the memory management unit writes the second data block group to the hard disk drive.
    Type: Application
    Filed: October 27, 2005
    Publication date: May 3, 2007
    Inventors: Jos Accapadi, Mathew Accapadi, Andrew Dunshea, Dirk Michel
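The fast-synchronization scheme above remembers the number of the most recently modified block group, flushes the previously remembered group when a different group is touched, and flushes the remembered group on an explicit sync. A sketch with in-memory stand-ins for the MMU, system memory, and disk; all names are invented.

```python
class FileSyncTracker:
    """Tracks which data block group of a file was modified most recently."""

    def __init__(self, disk):
        self.disk = disk                 # stand-in for the hard disk drive
        self.pending_group = None        # (group number, data) held in system memory

    def modify(self, group_number, data):
        if self.pending_group is not None and group_number != self.pending_group[0]:
            # A different group is being modified: write the previous one to disk.
            self.disk.write(*self.pending_group)
        self.pending_group = (group_number, data)

    def sync(self):
        # Explicit request to update modified block groups on disk.
        if self.pending_group is not None:
            self.disk.write(*self.pending_group)
            self.pending_group = None

class FakeDisk:
    def __init__(self):
        self.blocks = {}
    def write(self, group_number, data):
        self.blocks[group_number] = data

disk = FakeDisk()
tracker = FileSyncTracker(disk)
tracker.modify(1, "first group contents")
tracker.modify(2, "second group contents")   # group 1 is flushed to disk here
tracker.sync()                               # group 2 is flushed on sync
print(sorted(disk.blocks))                   # [1, 2]
```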
  • Publication number: 20070061423
    Abstract: A method, system, and program for facilitating presentation and monitoring of electronic mail messages with reply-by constraints are provided. Within a network environment, a server receives electronic mail messages with separate selected reply-by dates, wherein each electronic mail message is addressed for delivery by the server to at least one particular recipient. The server enables, for display within a user interface accessible to the particular recipient, a separate record for each electronic mail message within an inbox. The inbox includes at least one selectable sublevel, wherein upon selection of a particular selectable sublevel of the inbox, only the records for electronic mail messages with the same reply-by date as the selectable sublevel are displayed within the user interface.
    Type: Application
    Filed: September 15, 2005
    Publication date: March 15, 2007
    Inventors: Jos Accapadi, Andrew Dunshea
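Grouping inbox records into sublevels keyed by their reply-by dates, as described above, is essentially a group-by; a toy sketch with invented message records:

```python
from collections import defaultdict

messages = [
    {"subject": "Budget review", "reply_by": "2007-03-20"},
    {"subject": "Draft slides",  "reply_by": "2007-03-22"},
    {"subject": "Travel form",   "reply_by": "2007-03-20"},
]

# Build one selectable sublevel per reply-by date.
sublevels = defaultdict(list)
for record in messages:
    sublevels[record["reply_by"]].append(record)

# Selecting a sublevel displays only the records sharing that reply-by date.
selected = "2007-03-20"
for record in sublevels[selected]:
    print(record["subject"])
```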
  • Publication number: 20070038809
    Abstract: A computer implemented method, apparatus, and computer usable code for managing cache data. A partition identifier is associated with a cache entry in a cache, wherein the partition identifier identifies a last partition accessing the cache entry. The partition identifier associated with the cache entry is compared with a previous partition identifier located in a processor register in response to the cache entry being moved into a lower level cache relative to the cache. The cache entry is marked if the partition identifier associated with the cache entry matches the previous partition identifier located in the processor register to form a marked cache entry, wherein the marked cache entry is aged at a slower rate relative to an unmarked cache entry.
    Type: Application
    Filed: August 11, 2005
    Publication date: February 15, 2007
    Inventors: Jos Accapadi, Andrew Dunshea, Greg Mewhinney, Mysore Srinivas
  • Publication number: 20060288186
    Abstract: A system and method are provided for dynamically altering Virtual Memory Manager (VMM) Sequential-Access Read Ahead settings based upon current system memory conditions. Normal VMM operations are performed using the Sequential-Access Read Ahead values set by the user. When low memory is detected, the system either turns off Sequential-Access Read Ahead operations or decreases the maximum page-ahead (maxpgahead) value, based upon whether the amount of free space is simply low or has reached a critically low level. The altered VMM Sequential-Access Read Ahead state remains in effect until enough free space is available that normal VMM Sequential-Access Read Ahead operations can be performed (at which point the altered Sequential-Access Read Ahead values are reset to their original levels).
    Type: Application
    Filed: August 8, 2006
    Publication date: December 21, 2006
    Applicant: International Business Machines Corporation
    Inventors: Jos Accapadi, Andrew Dunshea, Li Li, Grover Neuman, Mysore Srinivas, David Hepkin
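The read-ahead policy above runs with the user's settings normally, shrinks or disables read-ahead when free memory is low or critically low, and restores the original values once memory recovers. The decision can be captured in a small function; the thresholds and parameter names here are illustrative, not the actual AIX tunables.

```python
def adjust_read_ahead(free_pages, low_mark, critical_mark, user_maxpgahead,
                      reduced_maxpgahead=2):
    """Return the sequential read-ahead setting to use for the current memory state.

    0 disables sequential read-ahead; all threshold values are illustrative.
    """
    if free_pages <= critical_mark:
        return 0                        # critically low: turn read-ahead off
    if free_pages <= low_mark:
        return reduced_maxpgahead       # low: shrink the maximum page-ahead
    return user_maxpgahead              # normal: use the user's setting

for free in (10_000, 800, 50):
    print(free, adjust_read_ahead(free, low_mark=1_000, critical_mark=100,
                                  user_maxpgahead=8))
# 10000 8   /   800 2   /   50 0
```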
  • Publication number: 20060277551
    Abstract: Administration of locks for critical sections of computer programs in a computer that supports a multiplicity of logical partitions. A thread executing on a virtual processor, which is itself executing in a time slice on a physical processor, determines whether the expected lock time for a critical section of the thread exceeds the virtual processor's remaining entitlement in the time slice, and defers acquisition of the lock if the expected lock time exceeds the remaining entitlement.
    Type: Application
    Filed: June 6, 2005
    Publication date: December 7, 2006
    Inventors: Jos Accapadi, Andrew Dunshea, Sujatha Kashyap
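The deferral test described above reduces to one comparison: acquire the lock only if the critical section is expected to fit inside the virtual processor's remaining entitlement in its current time slice. A sketch with invented units (microseconds):

```python
def should_defer_lock(expected_lock_time_us, remaining_entitlement_us):
    """Defer acquisition if the critical section would outlive the time slice.

    Acquiring a lock and then losing the physical processor would force every
    other contender to wait until the lock holder is redispatched.
    """
    return expected_lock_time_us > remaining_entitlement_us

print(should_defer_lock(expected_lock_time_us=300, remaining_entitlement_us=120))  # True: defer
print(should_defer_lock(expected_lock_time_us=300, remaining_entitlement_us=900))  # False: acquire now
```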
  • Publication number: 20060112208
    Abstract: A method, system and computer program product for processing interrupts in a multi-processor system is provided. The method, system and computer program product process interrupts utilizing an unequal scheduling policy in order to achieve SLA target goals for interrupt processing. In a method of the present invention, an interrupt is received. A determination is made as to whether the interrupt is assigned to a specific processor. If the interrupt is not assigned to a specific processor, then a processor is selected from the group of processors based on their respective interrupt priority levels; specifically, one processor is selected from among the processors that have the highest interrupt priority level. After the interrupt has been processed by the selected processor, a determination is made as to whether the selected processor has exceeded its threshold processing level. If the threshold processing level has been exceeded, the selected processor's interrupt priority level is lowered.
    Type: Application
    Filed: November 22, 2004
    Publication date: May 25, 2006
    Applicant: International Business Machines Corporation
    Inventors: Jos Accapadi, Andrew Dunshea
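The unequal scheduling policy above routes an unassigned interrupt to a processor chosen from those with the highest interrupt priority level and lowers that processor's level once it exceeds its processing threshold. A sketch of that loop; the counts, thresholds, and names are invented.

```python
class Processor:
    def __init__(self, name, interrupt_priority, threshold):
        self.name = name
        self.interrupt_priority = interrupt_priority  # higher = preferred target
        self.threshold = threshold                    # interrupts allowed at this level
        self.handled = 0

def dispatch_interrupt(interrupt, processors, assignments):
    # Honor an explicit assignment if the interrupt has one.
    target = assignments.get(interrupt)
    if target is None:
        best = max(p.interrupt_priority for p in processors)
        target = next(p for p in processors if p.interrupt_priority == best)
    target.handled += 1
    if target.handled > target.threshold:
        target.interrupt_priority -= 1    # demote so other CPUs share the load
        target.handled = 0
    return target.name

cpus = [Processor("cpu0", interrupt_priority=2, threshold=2),
        Processor("cpu1", interrupt_priority=2, threshold=5)]
for irq in ["eth0", "eth0", "eth0", "eth0"]:
    print(dispatch_interrupt(irq, cpus, assignments={}))
# cpu0, cpu0, cpu0 (demoted after exceeding its threshold), then cpu1
```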
  • Publication number: 20060047848
    Abstract: Methods, systems, and media are disclosed for improved granularity of a request-response communication on a networked computer system. One example embodiment includes receiving the request-response communication by the networked computer system, and associating the request-response communication with a port, having a nodelay setting, from a set of ports on the networked computer system. Further, the example embodiment includes enabling, based upon the associating, the nodelay setting upon connection of the request-response communication with the port. Further still, the example embodiment includes sending, in accordance with the enabling, the request-response communication to a destination in communication with the networked computer system.
    Type: Application
    Filed: June 3, 2004
    Publication date: March 2, 2006
    Applicant: International Business Machines Corporation
    Inventors: Jos Accapadi, Kavitha Baratakke, Andrew Dunshea, Venkat Venkatsubra
  • Publication number: 20060036810
    Abstract: A system, apparatus and method of reducing cache thrashing in a multi-processor with a shared cache executing a disruptive process (i.e., a thread that has a poor cache affinity or a large cache footprint) are provided. When a thread is dispatched for execution, a table is consulted to determine whether the dispatched thread is a disruptive thread. If so, a system idle process is dispatched to the processor sharing a cache with the processor executing the disruptive thread. Since the system idle process may not use data intensively, cache thrashing may be avoided.
    Type: Application
    Filed: August 12, 2004
    Publication date: February 16, 2006
    Applicant: International Business Machines Corporation
    Inventors: Jos Accapadi, Larry Brenner, Andrew Dunshea, Dirk Michel
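The thrashing-avoidance tactic above consults a table when a thread is dispatched and, if the thread is known to be disruptive, puts the system idle process on the processor that shares its cache. A sketch with a small lookup table; the disruptive-thread table and cache topology are invented.

```python
# Threads flagged as disruptive (poor cache affinity or a large cache footprint).
DISRUPTIVE_THREADS = {"db_scan", "memcopy_bench"}

# Which processor shares a cache with which (say, a two-way shared L2).
CACHE_SIBLING = {"cpu0": "cpu1", "cpu1": "cpu0"}

def dispatch(thread_name, target_cpu, run_next):
    """Record what each CPU should run next; idle the cache sibling if needed."""
    run_next[target_cpu] = thread_name
    if thread_name in DISRUPTIVE_THREADS:
        sibling = CACHE_SIBLING[target_cpu]
        run_next[sibling] = "system_idle"   # avoid thrashing the shared cache
    return run_next

print(dispatch("db_scan", "cpu0", run_next={}))
# {'cpu0': 'db_scan', 'cpu1': 'system_idle'}
```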
  • Publication number: 20060037020
    Abstract: Methods, systems, and computer program products are provided for scheduling threads in a multiprocessor computer. Embodiments include selecting a thread in a ready queue to be dispatched to a processor and determining whether an interrupt mask flag is set in a thread control block associated with the thread. If the interrupt mask flag is set in the thread control block associated with the thread, embodiments typically include selecting a processor, setting a current processor priority register of the selected processor to least favored, and dispatching the thread from the ready queue to the selected processor. In some embodiments, setting the current processor priority register of the selected processor to least favored is carried out by storing a value associated with the highest interrupt priority in the current processor priority register.
    Type: Application
    Filed: August 12, 2004
    Publication date: February 16, 2006
    Applicant: International Business Machines Corporation
    Inventors: Jos Accapadi, Mathew Accapadi, Andrew Dunshea, Mark Hack, Agustin Mena, Mysore Srinivas
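The dispatch path above checks the thread control block for an interrupt mask flag and, if it is set, selects a processor, writes the value associated with the highest interrupt priority into its current processor priority register so it becomes least favored for external interrupts, and then dispatches the thread. An outline sketch; the register value and structures are illustrative only.

```python
LEAST_FAVORED = 0xFF      # illustrative value standing in for the highest interrupt priority

class ThreadControlBlock:
    def __init__(self, tid, interrupt_mask_flag=False):
        self.tid = tid
        self.interrupt_mask_flag = interrupt_mask_flag

class Cpu:
    def __init__(self, name):
        self.name = name
        self.current_priority_register = 0   # most favored for interrupts
        self.running = None

def dispatch_from_ready_queue(ready_queue, cpus):
    tcb = ready_queue.pop(0)
    cpu = cpus[0]                            # the selection policy is out of scope here
    if tcb.interrupt_mask_flag:
        # Make this CPU least favored so external interrupts go elsewhere.
        cpu.current_priority_register = LEAST_FAVORED
    cpu.running = tcb.tid
    return cpu

cpus = [Cpu("cpu0"), Cpu("cpu1")]
ready = [ThreadControlBlock("rt_task", interrupt_mask_flag=True)]
chosen = dispatch_from_ready_queue(ready, cpus)
print(chosen.name, hex(chosen.current_priority_register))   # cpu0 0xff
```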
  • Publication number: 20060037017
    Abstract: A system, apparatus and method of reducing adverse performance impact due to migration of processes from one processor to another in a multi-processor system are provided. When a process is executing, the number of cycles it takes to fetch each instruction (CPI) of the process is stored. After execution of the process, an average CPI is computed and stored in a storage device that is associated with the process. When a run queue of the multi-processor system is empty, a process may be chosen from the run queue that has the most processes awaiting execution and migrated to the empty run queue. The chosen process is the process that has the highest average CPI.
    Type: Application
    Filed: August 12, 2004
    Publication date: February 16, 2006
    Applicant: International Business Machines Corporation
    Inventors: Jos Accapadi, Larry Brenner, Andrew Dunshea, Dirk Michel
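The migration heuristic above takes a process from the busiest run queue when another run queue goes empty, choosing the process with the highest stored average CPI so that the work least likely to benefit from its warm cache is the work that moves. A sketch with invented run-queue structures:

```python
class Process:
    def __init__(self, name, avg_cpi):
        self.name = name
        self.avg_cpi = avg_cpi     # average cycles per instruction from past runs

def balance_empty_queue(run_queues):
    """If any run queue is empty, steal the highest-CPI process from the fullest."""
    empty = [cpu for cpu, q in run_queues.items() if not q]
    if not empty:
        return None
    busiest_cpu = max(run_queues, key=lambda cpu: len(run_queues[cpu]))
    if not run_queues[busiest_cpu]:
        return None
    victim = max(run_queues[busiest_cpu], key=lambda p: p.avg_cpi)
    run_queues[busiest_cpu].remove(victim)
    run_queues[empty[0]].append(victim)
    return victim.name

queues = {
    "cpu0": [Process("compute", 0.9), Process("pointer_chase", 4.5)],
    "cpu1": [],
}
print(balance_empty_queue(queues))        # pointer_chase moves to cpu1
print([p.name for p in queues["cpu1"]])   # ['pointer_chase']
```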
  • Publication number: 20050265235
    Abstract: A method, computer program product, and a data processing system for processing transactions of a client-server application is provided. A first data set is transmitted from a client to a server. A second data set to be transmitted to the server is received by the client. An evaluation is made to determine whether transmission of the second data set is blocked until receipt of an acknowledgment of the first data set. The number of allowable outstanding acknowledgments is increased in response to determining that the second data set is blocked from transmission.
    Type: Application
    Filed: May 27, 2004
    Publication date: December 1, 2005
    Applicant: International Business Machines Corporation
    Inventors: Jos Accapadi, Kavitha Vittal Baratakke, Andrew Dunshea, Venkat Venkatsubra
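The final abstract describes detecting that a send is blocked waiting for an acknowledgment of earlier data and, in response, raising the number of acknowledgments that may remain outstanding. A toy model of that adjustment; the window counters are invented, and a real TCP stack does this inside the kernel.

```python
class AckWindow:
    """Toy sender-side model of how many un-acknowledged sends are allowed."""

    def __init__(self, allowed_outstanding=1):
        self.allowed_outstanding = allowed_outstanding
        self.outstanding = 0

    def try_send(self, payload):
        if self.outstanding >= self.allowed_outstanding:
            # Blocked waiting for an ACK of earlier data: widen the window.
            self.allowed_outstanding += 1
            return False
        self.outstanding += 1
        return True

    def ack_received(self):
        self.outstanding = max(0, self.outstanding - 1)

w = AckWindow(allowed_outstanding=1)
print(w.try_send("first data set"))    # True: sent immediately
print(w.try_send("second data set"))   # False: blocked, so the limit grows to 2
print(w.try_send("second data set"))   # True: the retried send now goes out
```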