Patents by Inventor Steven W. White

Steven W. White has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9298630
    Abstract: A computer processor collects information for a dominant data access loop and reference code patterns based on data reference pattern analysis, and for pointer aliasing and data shape based on pointer escape analysis. The computer processor selects a candidate array for data splitting wherein the candidate array is referenced by a dominant data access loop. The computer processor determines a data splitting mode by which to split the data of the candidate array, based on the reference code patterns, the pointer aliasing, and the data shape information, and splits the data into two or more split arrays. The computer processor creates a software cache that includes a portion of the data of the two or more split arrays in a transposed format, and maintains the portion of the transposed data within the software cache and consults the software cache during an access of the split arrays. (An informal code sketch of this data-splitting and software-cache idea appears after this listing.)
    Type: Grant
    Filed: June 13, 2014
    Date of Patent: March 29, 2016
    Assignee: GLOBALFOUNDRIES INC.
    Inventors: Christopher M. Barton, Shimin Cui, Satish K. Sadasivam, Raul E. Silvera, Madhavi G. Valluri, Steven W. White
  • Patent number: 9104577
    Abstract: A computer processor collects information for a dominant data access loop and reference code patterns based on data reference pattern analysis, and for pointer aliasing and data shape based on pointer escape analysis. The computer processor selects a candidate array for data splitting wherein the candidate array is referenced by a dominant data access loop. The computer processor determines a data splitting mode by which to split the data of the candidate array, based on the reference code patterns, the pointer aliasing, and the data shape information, and splits the data into two or more split arrays. The computer processor creates a software cache that includes a portion of the data of the two or more split arrays in a transposed format, and maintains the portion of the transposed data within the software cache and consults the software cache during an access of the split arrays.
    Type: Grant
    Filed: August 27, 2013
    Date of Patent: August 11, 2015
    Assignee: International Business Machines Corporation
    Inventors: Christopher M. Barton, Shimin Cui, Satish K. Sadasivam, Raul E. Silvera, Madhavi G. Valluri, Steven W. White
  • Publication number: 20150067260
    Abstract: A computer processor collects information for a dominant data access loop and reference code patterns based on data reference pattern analysis, and for pointer aliasing and data shape based on pointer escape analysis. The computer processor selects a candidate array for data splitting wherein the candidate array is referenced by a dominant data access loop. The computer processor determines a data splitting mode by which to split the data of the candidate array, based on the reference code patterns, the pointer aliasing, and the data shape information, and splits the data into two or more split arrays. The computer processor creates a software cache that includes a portion of the data of the two or more split arrays in a transposed format, and maintains the portion of the transposed data within the software cache and consults the software cache during an access of the split arrays.
    Type: Application
    Filed: August 27, 2013
    Publication date: March 5, 2015
    Applicant: International Business Machines Corporation
    Inventors: Christopher M. Barton, Shimin Cui, Satish K. Sadasivam, Raul E. Silvera, Madhavi G. Valluri, Steven W. White
  • Publication number: 20150067268
    Abstract: A computer processor collects information for a dominant data access loop and reference code patterns based on data reference pattern analysis, and for pointer aliasing and data shape based on pointer escape analysis. The computer processor selects a candidate array for data splitting wherein the candidate array is referenced by a dominant data access loop. The computer processor determines a data splitting mode by which to split the data of the candidate array, based on the reference code patterns, the pointer aliasing, and the data shape information, and splits the data into two or more split arrays. The computer processor creates a software cache that includes a portion of the data of the two or more split arrays in a transposed format, and maintains the portion of the transposed data within the software cache and consults the software cache during an access of the split arrays.
    Type: Application
    Filed: June 13, 2014
    Publication date: March 5, 2015
    Inventors: Christopher M. Barton, Shimin Cui, Satish K. Sadasivam, Raul E. Silvera, Madhavi G. Valluri, Steven W. White
  • Patent number: 8832383
    Abstract: A cache entry replacement unit can delay replacement of more valuable entries by replacing less valuable entries. When a miss occurs, the cache entry replacement unit can determine a cache entry for replacement (“a replacement entry”) based on a generic replacement technique. If the replacement entry is an entry that should be protected from replacement (e.g., a large page entry), the cache entry replacement unit can determine a second replacement entry. The cache entry replacement unit can “skip” the first replacement entry by replacing the second replacement entry with a new entry, if the second replacement entry is an entry that should not be protected (e.g., a small page entry). The first replacement entry can be skipped a predefined number of times before the first replacement entry is replaced with a new entry. (An informal code sketch of this skip-based replacement policy appears after this listing.)
    Type: Grant
    Filed: May 20, 2013
    Date of Patent: September 9, 2014
    Assignee: International Business Machines Corporation
    Inventors: Bret R. Olszewski, Basu Vaidyanathan, Steven W. White
  • Patent number: 8745607
    Abstract: According to one aspect of the present disclosure, a method and technique for reducing branch misprediction impact for nested loop code is disclosed. The method includes: responsive to identifying code having an outer loop and an inner loop, determining a quantity of iterations of the inner loop for an initial number of iterations of the outer loop; determining a number of processor cycles for executing the quantity of iterations of the inner loop for the initial number of iterations of the outer loop; determining whether the number of processor cycles is less than a threshold; and responsive to determining that the number of processor cycles is less than the threshold, fully unrolling the inner loop for the initial number of iterations of the outer loop. (An informal code sketch of this unrolling heuristic appears after this listing.)
    Type: Grant
    Filed: November 11, 2011
    Date of Patent: June 3, 2014
    Assignee: International Business Machines Corporation
    Inventors: Madhavi G. Valluri, Steven W. White
  • Publication number: 20130254490
    Abstract: A cache entry replacement unit can delay replacement of more valuable entries by replacing less valuable entries. When a miss occurs, the cache entry replacement unit can determine a cache entry for replacement (“a replacement entry”) based on a generic replacement technique. If the replacement entry is an entry that should be protected from replacement (e.g., a large page entry), the cache entry replacement unit can determine a second replacement entry. The cache entry replacement unit can “skip” the first replacement entry by replacing the second replacement entry with a new entry, if the second replacement entry is an entry that should not be protected (e.g., a small page entry). The first replacement entry can be skipped a predefined number of times before the first replacement entry is replaced with a new entry.
    Type: Application
    Filed: May 20, 2013
    Publication date: September 26, 2013
    Applicant: International Business Machines Corporation
    Inventors: Bret R. Olszewski, Basu Vaidyanathan, Steven W. White
  • Patent number: 8473684
    Abstract: A cache entry replacement unit can delay replacement of more valuable entries by replacing less valuable entries. When a miss occurs, the cache entry replacement unit can determine a cache entry for replacement (“a replacement entry”) based on a generic replacement technique. If the replacement entry is an entry that should be protected from replacement (e.g., a large page entry), the cache entry replacement unit can determine a second replacement entry. The cache entry replacement unit can “skip” the first replacement entry by replacing the second replacement entry with a new entry, if the second replacement entry is an entry that should not be protected (e.g., a small page entry). The first replacement entry can be skipped a predefined number of times before the first replacement entry is replaced with a new entry.
    Type: Grant
    Filed: December 22, 2009
    Date of Patent: June 25, 2013
    Assignee: International Business Machines Corporation
    Inventors: Bret R. Olszewski, Basu Vaidyanathan, Steven W. White
  • Publication number: 20130125104
    Abstract: According to one aspect of the present disclosure, a method and technique for reducing branch misprediction impact for nested loop code is disclosed. The method includes: responsive to identifying code having an outer loop and an inner loop, determining a quantity of iterations of the inner loop for an initial number of iterations of the outer loop; determining a number of processor cycles for executing the quantity of iterations of the inner loop for the initial number of iterations of the outer loop; determining whether the number of processor cycles is less than a threshold; and responsive to determining that the number of processor cycles is less than the threshold, fully unrolling the inner loop for the initial number of iterations of the outer loop.
    Type: Application
    Filed: November 11, 2011
    Publication date: May 16, 2013
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Madhavi G. Valluri, Steven W. White
  • Publication number: 20130117839
    Abstract: The disclosure generally describes computer-implemented methods, software, and systems for controlling access to applications on a device while the device is in motion. One example computer-implemented method includes identifying a request to access an application on a device, determining if the requested application is a motion-restricted application, upon determination that the requested application is a motion-restricted application, identifying a speed of movement associated with the device, and controlling access to the requested application based at least in part on the identified speed of movement of the device. (An informal code sketch of this access check appears after this listing.)
    Type: Application
    Filed: October 26, 2012
    Publication date: May 9, 2013
    Inventors: Steven W. White, Ashok Ramadass
  • Publication number: 20130013705
    Abstract: A method for filtering content with a communication device includes, with the communication device, applying a filter function to a message associated with the communication device, the filter function finding at least one content element. The method further includes comparing the content element with a set of restricted content elements, and withholding the message from communication in response to determining that the content element matches one of the restricted content elements. (An informal code sketch of this filtering flow appears after this listing.)
    Type: Application
    Filed: July 9, 2012
    Publication date: January 10, 2013
    Applicant: IMAGE VISION LABS, INC.
    Inventors: Steven W. White, Ashok Ramadass
  • Patent number: 7973680
    Abstract: A system and computer readable storage medium for creating an in-memory physical dictionary for data compression are provided. A new heuristic is defined for converting each of a plurality of logical nodes into a corresponding physical node forming a plurality of physical nodes. Each of the physical nodes are placed into the physical dictionary while traversing the dictionary tree in descending visit count order. Each physical node is placed in its nearest ascendant's cache-line with sufficient space. If there is no space in any of the ascendant's cache-line, then the physical node is placed into a new cache-line, unless a pre-defined packing threshold has been reached, in which case the physical node is placed in the first available cache-line. (An informal code sketch of this packing heuristic appears after this listing.)
    Type: Grant
    Filed: July 14, 2008
    Date of Patent: July 5, 2011
    Assignee: International Business Machines Corporation
    Inventors: Balakrishna Raghavendra Iyer, Piotr M. Plachta, Wolfram Sauer, Steven W. White
  • Publication number: 20110153949
    Abstract: A cache entry replacement unit can delay replacement of more valuable entries by replacing less valuable entries. When a miss occurs, the cache entry replacement unit can determine a cache entry for replacement (“a replacement entry”) based on a generic replacement technique. If the replacement entry is an entry that should be protected from replacement (e.g., a large page entry), the cache entry replacement unit can determine a second replacement entry. The cache entry replacement unit can “skip” the first replacement entry by replacing the second replacement entry with a new entry, if the second replacement entry is an entry that should not be protected (e.g., a small page entry). The first replacement entry can be skipped a predefined number of times before the first replacement entry is replaced with a new entry.
    Type: Application
    Filed: December 22, 2009
    Publication date: June 23, 2011
    Applicant: International Business Machines Corporation
    Inventors: Bret R. Olszewski, Basu Vaidyanathan, Steven W. White
  • Patent number: 7460033
    Abstract: Some aspects of the invention provide methods for creating an in-memory physical dictionary for data compression. To that end, in accordance with aspects of the present invention, a new heuristic is defined for converting each of the plurality of logical nodes into a corresponding physical node forming a plurality of physical nodes; then place each of the physical nodes into the physical dictionary while traversing the dictionary tree in descending visit count order. Each physical node is placed in its nearest ascendant's cache-line with sufficient space. If there is no space in any of the ascendant's cache-line, then the physical node is placed into a new cache-line, unless a pre-defined packing threshold has been reached, in which case the physical node is placed in the first available cache-line.
    Type: Grant
    Filed: December 28, 2006
    Date of Patent: December 2, 2008
    Assignee: International Business Machines Corporation
    Inventors: Balakrishna Raghavendra Iyer, Piotr M. Plachta, Wolfram Sauer, Steven W. White
  • Publication number: 20080275897
    Abstract: Some aspects of the invention provide methods, systems, and computer program products for creating an in-memory physical dictionary for data compression. To that end, in accordance with aspects of the present invention, a new heuristic is defined for converting each of the plurality of logical nodes into a corresponding physical node forming a plurality of physical nodes; then place each of the physical nodes into the physical dictionary while traversing the dictionary tree in descending visit count order. Each physical node is placed in its nearest ascendant's cache-line with sufficient space. If there is no space in any of the ascendant's cache-line, then the physical node is placed into a new cache-line, unless a pre-defined packing threshold has been reached, in which case the physical node is placed in the first available cache-line.
    Type: Application
    Filed: July 14, 2008
    Publication date: November 6, 2008
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Balakrishna Raghavendra Iyer, Piotr M. Plachta, Wolfram Sauer, Steven W. White
  • Publication number: 20080162517
    Abstract: Some aspects of the invention provide methods, systems, and computer program products for creating an in-memory physical dictionary for data compression. To that end, in accordance with aspects of the present invention, a new heuristic is defined for converting each of the plurality of logical nodes into a corresponding physical node forming a plurality of physical nodes; then place each of the physical nodes into the physical dictionary while traversing the dictionary tree in descending visit count order. Each physical node is placed in its nearest ascendant's cache-line with sufficient space. If there is no space in any of the ascendant's cache-line, then the physical node is placed into a new cache-line, unless a pre-defined packing threshold has been reached, in which case the physical node is placed in the first available cache-line.
    Type: Application
    Filed: December 28, 2006
    Publication date: July 3, 2008
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Balakrishna Raghavendra Iyer, Piotr M. Plachta, Wolfram Sauer, Steven W. White
  • Patent number: 6907509
    Abstract: A method, a computer or computer program product for automatically restructuring a program having arrays in inner loops to reduce an average penalty incurred for bursty cache miss patterns by spreading out the cache misses. The method may be used separately or in conjunction with methods for reducing the number of cache misses. The method determines a padding required for each array according to a proportion of the cache line size, to offset the starting points of the arrays relative to the start of a cache line memory access address for each array. Preferably, the starting points of the arrays that induce bursty cache misses are padded so that they are uniformly spaced from one another. (An informal code sketch of this padding scheme appears after this listing.)
    Type: Grant
    Filed: November 18, 2002
    Date of Patent: June 14, 2005
    Assignee: International Business Machines Corporation
    Inventors: Brian C. Hall, Robert J. Blainey, Steven W. White
  • Publication number: 20030097538
    Abstract: A method, a computer or computer program product for automatically restructuring a program having arrays in inner loops to reduce an average penalty incurred for bursty cache miss patterns by spreading out the cache misses. The method may be used separately or in conjunction with methods for reducing the number of cache misses. The method determines a padding required for each array according to a proportion of the cache line size, to offset the starting points of the arrays relative to the start of a cache line memory access address for each array. Preferably, the starting points of the arrays that induce bursty cache misses are padded so that they are uniformly spaced from one another.
    Type: Application
    Filed: November 18, 2002
    Publication date: May 22, 2003
    Applicant: International Business Machines Corporation
    Inventors: Brian C. Hall, Robert J. Blainey, Steven W. White
  • Patent number: 5796998
    Abstract: An apparatus and method for fetching instructions in an information handling system operating at a predetermined number of cycles per second includes an instruction cache for storing instructions to be fetched. Branch target calculators are operably coupled to instruction queues and to a fetch address selector for determining, in parallel, if instructions in the instruction queues are branch instructions and for providing, in parallel, a target address for each of the instruction queues to the fetch address selector such that the fetch address selector can provide the instruction cache with one of the plurality of target addresses as the next fetch address. Decoding of instructions, calculating the target addresses of branch instructions, and resolving branch instructions are performed in parallel instead of sequentially and, in this manner, back-to-back taken branches can be executed at a rate of one per cycle. (An informal code sketch modeling this fetch path appears after this listing.)
    Type: Grant
    Filed: November 21, 1996
    Date of Patent: August 18, 1998
    Assignee: International Business Machines Corporation
    Inventors: David Stephen Levitan, John S. Muhich, Adam R. Talcott, Steven W. White
  • Patent number: 5721858
    Abstract: A method and system for memory management and address translation mapping of pools of logical partitions for BAT and TLB entries in a data processing system is provided. An entry in an address translation buffer is created that is associated with a particular block of virtual memory comprised of a plurality of logical partitions that are grouped in one or more pools of logical partitions, wherein the size of each pool of logical partitions is equal to a preselected page size for real memory, and wherein the entry maps each pool of logical partitions to a page of real memory within a sector of real memory, wherein the size of the sector is a function of the size of the associated block of virtual memory. (An informal code sketch of this pooled mapping appears after this listing.)
    Type: Grant
    Filed: December 12, 1995
    Date of Patent: February 24, 1998
    Assignee: International Business Machines Corporation
    Inventors: Steven W. White, G. Jeannette McWilliams, Jack Wayne Kemp
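
The following sketches are informal illustrations of the techniques summarized in the abstracts above; they are not code from the patents. First, a minimal Python sketch of the data-splitting and software-cache idea described in patents 9298630 and 9104577: an array of records is split into a hot array and a cold array, and a small software cache keeps a few records regrouped in transposed form and is consulted on each access. The record layout, the hot/cold split mode, and the cache capacity are assumptions made for illustration.

```python
# Informal sketch of the data-splitting / software-cache idea described in the
# abstracts of patents 9298630 and 9104577. All names (records, split_hot,
# software_cache, CACHE_CAPACITY) are hypothetical, not taken from the patents.

CACHE_CAPACITY = 4  # number of records kept in the software cache (assumption)

# The original "candidate array": an array of records, each with several fields.
records = [{"key": i, "hot": i * 2, "cold": "payload-%d" % i} for i in range(16)]

# Step 1: data splitting -- the chosen split mode here separates the frequently
# accessed ("hot") fields from the rarely accessed ("cold") ones, producing two
# split arrays.
split_hot = [(r["key"], r["hot"]) for r in records]
split_cold = [r["cold"] for r in records]

# Step 2: a software cache holding a small portion of the split data in a
# transposed (record-at-a-time) format, so one cache entry regroups the fields
# of a single logical record.
software_cache = {}  # index -> transposed record tuple

def access(index):
    """Return the full record at `index`, consulting the software cache first."""
    if index in software_cache:
        return software_cache[index]           # hit: transposed copy already cached
    key, hot = split_hot[index]                # miss: gather fields from the split arrays
    record = (key, hot, split_cold[index])     # ...and transpose them back into one tuple
    if len(software_cache) >= CACHE_CAPACITY:  # simple eviction policy (assumption)
        software_cache.pop(next(iter(software_cache)))
    software_cache[index] = record
    return record

if __name__ == "__main__":
    # A dominant access loop touching a few records repeatedly.
    for index in [3, 3, 7, 3, 7, 12]:
        print(access(index))
```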
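
A minimal sketch of the replacement policy described in patents 8832383 and 8473684: the baseline victim is chosen by a generic policy (LRU is assumed here), a protected large-page entry is skipped in favor of an unprotected entry, and a skipped entry is evicted anyway once it has been passed over a predefined number of times. The class names and the MAX_SKIPS value are illustrative.

```python
# Informal sketch of the "skip protected entries" replacement policy described
# in patents 8832383 and 8473684. The entry layout, the use of LRU as the
# generic baseline policy, and MAX_SKIPS are illustrative assumptions.

MAX_SKIPS = 2   # times a protected entry may be passed over before being evicted

class Entry:
    def __init__(self, tag, large_page):
        self.tag = tag
        self.large_page = large_page  # protected entries (e.g. large-page translations)
        self.skips = 0                # how many times this entry has been skipped

class SkippingCache:
    def __init__(self, ways):
        self.ways = ways
        self.entries = []             # most recently used at the end (LRU baseline)

    def lookup(self, tag):
        for e in self.entries:
            if e.tag == tag:
                self.entries.remove(e)
                self.entries.append(e)   # refresh LRU position
                return True
        return False

    def insert(self, tag, large_page=False):
        if len(self.entries) < self.ways:
            self.entries.append(Entry(tag, large_page))
            return
        victim = self.entries[0]          # generic policy: least recently used
        if victim.large_page and victim.skips < MAX_SKIPS:
            victim.skips += 1
            # Look for a second, unprotected candidate to replace instead.
            for alt in self.entries[1:]:
                if not alt.large_page:
                    victim = alt
                    break
        self.entries.remove(victim)
        self.entries.append(Entry(tag, large_page))

if __name__ == "__main__":
    cache = SkippingCache(ways=3)
    cache.insert("big", large_page=True)
    cache.insert("a")
    cache.insert("b")
    cache.insert("c")      # would evict "big" under plain LRU; "a" is evicted instead
    print([e.tag for e in cache.entries])
```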
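
A minimal sketch of the unrolling heuristic described in patent 8745607: sum the inner-loop iterations executed during an initial number of outer-loop iterations, estimate the processor cycles they cost, and fully unroll the inner loop only when that estimate falls below a threshold. The per-iteration cycle cost and the threshold are assumed values.

```python
# Informal sketch of the unrolling decision described in patent 8745607.
# The cycle model (a fixed cost per inner iteration) and the threshold value
# are illustrative assumptions, not figures from the patent.

CYCLES_PER_INNER_ITERATION = 4    # assumed cost of one inner-loop body
CYCLE_THRESHOLD = 64              # assumed budget below which unrolling pays off

def should_fully_unroll(inner_trip_counts, initial_outer_iterations):
    """Decide whether to fully unroll the inner loop for the first few outer iterations.

    inner_trip_counts[i] is the number of inner-loop iterations executed during
    outer iteration i.
    """
    considered = inner_trip_counts[:initial_outer_iterations]
    total_inner_iterations = sum(considered)
    estimated_cycles = total_inner_iterations * CYCLES_PER_INNER_ITERATION
    return estimated_cycles < CYCLE_THRESHOLD

if __name__ == "__main__":
    # Outer loop whose inner loop runs i+1 times on outer iteration i
    # (a short, triangular nest where mispredicted inner-loop exits are costly).
    trip_counts = [i + 1 for i in range(8)]
    print(should_fully_unroll(trip_counts, initial_outer_iterations=4))  # 10 * 4 = 40 < 64  -> True
    print(should_fully_unroll(trip_counts, initial_outer_iterations=8))  # 36 * 4 = 144      -> False
```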
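
A minimal sketch of the motion-restricted access control described in publication 20130117839: on an application launch request, check whether the application is motion-restricted and, if so, compare the device's current speed against a limit. The speed source, the threshold, and the application names are hypothetical.

```python
# Informal sketch of the motion-restricted access check described in
# publication 20130117839. The speed threshold, the application names, and the
# source of the speed reading are all hypothetical.

SPEED_LIMIT_MPH = 10.0                            # assumed cutoff for "device in motion"
MOTION_RESTRICTED_APPS = {"messaging", "video"}   # assumed restricted set

def current_speed_mph():
    """Stand-in for a GPS/accelerometer speed reading."""
    return 35.0

def handle_access_request(app_name):
    """Allow or block an application launch based on device motion."""
    if app_name not in MOTION_RESTRICTED_APPS:
        return "allowed"                     # not motion-restricted: no speed check needed
    if current_speed_mph() < SPEED_LIMIT_MPH:
        return "allowed"                     # restricted app, but the device is (nearly) still
    return "blocked"                         # restricted app and the device is moving

if __name__ == "__main__":
    print(handle_access_request("calculator"))  # allowed
    print(handle_access_request("messaging"))   # blocked at 35 mph
```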
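
A minimal sketch of the content filter described in publication 20130013705: a filter function extracts content elements from an outbound message, the elements are compared against a restricted set, and the message is withheld when any element matches. Tokenizing into lowercase words and the restricted-word set are simplifying assumptions.

```python
# Informal sketch of the outbound message filter described in publication
# 20130013705. The tokenizing filter function and the restricted-word list are
# illustrative placeholders for whatever content elements a deployment defines.

RESTRICTED_ELEMENTS = {"password", "ssn"}    # assumed restricted content elements

def filter_function(message):
    """Extract candidate content elements from a message (here: lowercase words)."""
    return [word.strip(".,!?").lower() for word in message.split()]

def should_withhold(message):
    """Withhold the message if any extracted element matches a restricted one."""
    return any(element in RESTRICTED_ELEMENTS for element in filter_function(message))

if __name__ == "__main__":
    print(should_withhold("Meet at noon"))            # False -> message may be sent
    print(should_withhold("My password is hunter2"))  # True  -> message is withheld
```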
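
A minimal sketch of the dictionary-packing heuristic described in patents 7973680 and 7460033: physical nodes are placed in descending visit-count order, each preferring its nearest ascendant's cache line with free space, falling back to a new cache line until a packing threshold is reached, and after that to the first cache line with space. Node sizes, the line capacity, and the threshold are assumptions.

```python
# Informal sketch of the cache-line packing heuristic described in patents
# 7973680 and 7460033. Node sizes, the cache-line capacity, and the packing
# threshold are illustrative assumptions.

CACHE_LINE_CAPACITY = 4     # physical nodes per cache line (assumption)
PACKING_THRESHOLD = 3       # max number of new cache lines to open (assumption)

class LogicalNode:
    def __init__(self, name, parent, visits):
        self.name, self.parent, self.visits = name, parent, visits

def pack_dictionary(nodes):
    """Return a list of cache lines (lists of node names) packed per the heuristic."""
    cache_lines = []                   # each line is a list of placed node names
    placement = {}                     # node name -> index of the cache line holding it

    # Traverse in descending visit-count order so hot nodes are placed first.
    for node in sorted(nodes, key=lambda n: n.visits, reverse=True):
        line_index = None

        # Prefer the nearest ascendant's cache line that still has space.
        ancestor = node.parent
        while ancestor is not None:
            candidate = placement.get(ancestor.name)
            if candidate is not None and len(cache_lines[candidate]) < CACHE_LINE_CAPACITY:
                line_index = candidate
                break
            ancestor = ancestor.parent

        if line_index is None:
            if len(cache_lines) < PACKING_THRESHOLD:
                cache_lines.append([])             # open a new cache line
                line_index = len(cache_lines) - 1
            else:
                # Packing threshold reached: fall back to the first line with space.
                line_index = next(i for i, line in enumerate(cache_lines)
                                  if len(line) < CACHE_LINE_CAPACITY)

        cache_lines[line_index].append(node.name)
        placement[node.name] = line_index

    return cache_lines

if __name__ == "__main__":
    root = LogicalNode("root", None, 100)
    a = LogicalNode("a", root, 60)
    b = LogicalNode("b", root, 40)
    c = LogicalNode("c", a, 30)
    print(pack_dictionary([root, a, b, c]))   # hot nodes share the root's cache line
```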
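
A minimal sketch of the array-padding scheme described in patent 6907509: each array receives a pad computed as a proportion of the cache-line size so that the arrays' starting offsets within a cache line are uniformly spaced, spreading out misses that would otherwise arrive in a burst. The 128-byte line size is an assumption.

```python
# Informal sketch of the inter-array padding idea described in patent 6907509.
# The 128-byte cache line and the way pads are applied are illustrative; the
# point is only that the arrays' starting offsets within a line end up
# uniformly staggered instead of all aligned to offset 0.

CACHE_LINE_BYTES = 128      # assumed cache line size

def paddings(num_arrays, line_size=CACHE_LINE_BYTES):
    """Pad bytes to place before each array so start offsets are uniformly spaced."""
    return [(i * line_size) // num_arrays for i in range(num_arrays)]

def padded_start_offsets(base_addresses, pads, line_size=CACHE_LINE_BYTES):
    """Offset of each padded array start within a cache line."""
    return [(addr + pad) % line_size for addr, pad in zip(base_addresses, pads)]

if __name__ == "__main__":
    # Four arrays that all start on a cache-line boundary, so their misses
    # arrive in bursts when an inner loop walks them in lockstep.
    bases = [0, 4096, 8192, 12288]
    pads = paddings(len(bases))
    print(pads)                               # [0, 32, 64, 96]
    print(padded_start_offsets(bases, pads))  # [0, 32, 64, 96] -> misses now staggered
```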
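
A minimal software model of the fetch mechanism described in patent 5796998: per-queue branch target calculators examine queued instructions and produce targets in the same cycle, and a fetch address selector picks one target (or the fall-through address) as the next fetch address, so back-to-back taken branches advance at one per modeled cycle. The instruction encoding, the single-queue demo, and the selection order are invented for illustration.

```python
# Informal, software-only model of the parallel branch-target-calculation idea
# described in patent 5796998. Real hardware does this with per-queue target
# adders working within the same cycle; here each iteration of the loop models
# that parallel work.

from collections import namedtuple

Instr = namedtuple("Instr", "address opcode target")   # target used only by branches

def branch_target_calculator(instr):
    """Per-queue calculator: report (is_taken_branch, target) for one instruction."""
    return (instr.opcode == "branch", instr.target)

def fetch_address_selector(queue_heads, fall_through):
    """Pick the next fetch address from the targets produced in parallel."""
    # Model of the parallel calculators: all queue heads are examined in the same cycle.
    results = [branch_target_calculator(i) for i in queue_heads if i is not None]
    for is_branch, target in results:
        if is_branch:
            return target          # first taken branch wins (selection policy is assumed)
    return fall_through

if __name__ == "__main__":
    program = {
        0x100: Instr(0x100, "branch", 0x200),   # back-to-back taken branches...
        0x200: Instr(0x200, "branch", 0x300),   # ...fetched at one per modeled cycle
        0x300: Instr(0x300, "add", None),
    }
    fetch_address = 0x100
    for cycle in range(3):
        instr = program[fetch_address]
        next_address = fetch_address_selector([instr], fall_through=fetch_address + 4)
        print("cycle %d: fetch 0x%x -> next 0x%x" % (cycle, fetch_address, next_address))
        fetch_address = next_address
```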
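
A minimal sketch of the pooled translation entry described in patent 5721858: one buffer entry covers a block of virtual memory divided into pools the size of a real-memory page, and maps each pool to a real page inside a sector of real memory. The sizes and field names are illustrative.

```python
# Informal sketch of the pooled translation entry described in patent 5721858.
# The sizes below (4 KiB pools/pages, a 16-pool virtual block) and the field
# names are illustrative; the point is that one buffer entry maps every pool of
# logical partitions in a virtual block to its own real-memory page.

PAGE_SIZE = 4096                     # preselected real-memory page size = pool size
VIRTUAL_BLOCK_SIZE = 16 * PAGE_SIZE  # block of virtual memory covered by one entry

class TranslationEntry:
    def __init__(self, virtual_base, pool_to_real_page):
        self.virtual_base = virtual_base
        # pool_to_real_page[i] is the real page assigned to pool i of the block.
        self.pool_to_real_page = pool_to_real_page

    def translate(self, virtual_address):
        offset_in_block = virtual_address - self.virtual_base
        if not 0 <= offset_in_block < VIRTUAL_BLOCK_SIZE:
            raise KeyError("address not covered by this entry")
        pool = offset_in_block // PAGE_SIZE          # which pool of logical partitions
        offset_in_pool = offset_in_block % PAGE_SIZE
        return self.pool_to_real_page[pool] + offset_in_pool

if __name__ == "__main__":
    # One entry: 16 pools, each mapped to a real page inside a real-memory sector.
    sector_base = 0x80000
    entry = TranslationEntry(
        virtual_base=0x10000,
        pool_to_real_page=[sector_base + i * PAGE_SIZE for i in range(16)],
    )
    print(hex(entry.translate(0x10000)))         # -> 0x80000
    print(hex(entry.translate(0x10000 + 5000)))  # pool 1, offset 904 -> 0x81388
```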