Patents by Inventor Robert J. Sonnelitter, III

Robert J. Sonnelitter, III has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20110320657
    Abstract: A mechanism for controlling data stream interruptions on a shared bus is provided. A first request to transfer data is received, and high priority and low priority data components are determined for the request. The high priority data components are transferred without interruption; any requests received while the high priority data components are being transferred are rejected. (A code sketch follows this entry.)
    Type: Application
    Filed: June 24, 2010
    Publication date: December 29, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Garrett M. Drapala, Kenneth D. Klapproth, Robert J. Sonnelitter, III, Craig R. Walters
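    As a rough illustration of the arbitration policy in 20110320657, the sketch below models a transfer whose high-priority beats are protected from preemption. The patent describes hardware; the types, field names, and beat-counting model here are invented for illustration.

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative model of an in-flight transfer; field names are
 * assumptions, not the patent's. */
typedef struct {
    size_t hi_beats_left;   /* high-priority data beats still to send */
    size_t lo_beats_left;   /* low-priority data beats still to send  */
} bus_transfer;

typedef enum { GRANT, REJECT } bus_reply;

/* A newly received request is rejected while the current transfer is
 * still moving its high-priority components; once only low-priority
 * beats remain, the stream may be interrupted. */
bus_reply arbitrate(const bus_transfer *current)
{
    if (current != NULL && current->hi_beats_left > 0)
        return REJECT;      /* protect the high-priority burst */
    return GRANT;           /* low-priority data may be preempted */
}
```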
  • Publication number: 20110320735
    Abstract: A mechanism for dynamically altering a request received at a hardware component is provided. The request received at the hardware component includes a mode option. It is determined whether an action of the request requires an unavailable resource, and whether the mode option applies to that action. If so, the action is automatically removed from the request, and the request is passed on for pipeline arbitration without the action that required the unavailable resource. (A code sketch follows this entry.)
    Type: Application
    Filed: June 23, 2010
    Publication date: December 29, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Deanna Postles Dunn Berger, Michael Fee, Kenneth D. Klapproth, Robert J. Sonnelitter, III
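    A minimal sketch of the request-altering step in 20110320735, assuming a request can be reduced to a flag pair; the field names and the single droppable action are illustrative simplifications.

```c
#include <stdbool.h>

/* Illustrative request format: an optional action plus a mode bit
 * that marks the action as droppable. Names are assumptions. */
typedef struct {
    bool has_action;        /* request includes the optional action */
    bool mode_droppable;    /* mode option: drop action if blocked  */
} request;

/* Before pipeline arbitration, remove the action when its resource is
 * unavailable and the mode option permits it; otherwise the request
 * passes through unchanged. */
void filter_request(request *r, bool resource_available)
{
    if (r->has_action && !resource_available && r->mode_droppable)
        r->has_action = false;   /* pass the rest of the request on */
}
```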
  • Publication number: 20110320695
    Abstract: Various embodiments of the present invention mitigate busy time in a hierarchical store-through memory cache structure. In one embodiment, a cache directory associated with a memory cache is divided into a plurality of portions, each associated with a portion of the memory cache. Simultaneous cache lookup operations and cache write operations between the plurality of portions of the cache directory are supported. Two or more store commands are simultaneously processed in a shared cache pipeline communicatively coupled to the plurality of portions of the cache directory. (A code sketch follows this entry.)
    Type: Application
    Filed: June 23, 2010
    Publication date: December 29, 2011
    Applicant: International Business Machines Corporation
    Inventors: Deanna P. Berger, Michael F. Fee, Christine C. Jones, Arthur J. O'Neill, Diana L. Orf, Robert J. Sonnelitter, III
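    The sketch below illustrates the sliced-directory idea from 20110320695 under assumed sizes and an assumed address-to-slice hash; it models only the property that operations landing in different slices can proceed in the same cycle.

```c
#include <stdint.h>
#include <stdbool.h>

#define SLICES 4                 /* illustrative slice count */
#define ENTRIES_PER_SLICE 256    /* illustrative slice size  */

typedef struct { uint64_t tag; bool valid; } dir_entry;
typedef struct { dir_entry e[ENTRIES_PER_SLICE]; } dir_slice;

static dir_slice directory[SLICES];   /* independently ported slices */

static unsigned slice_of(uint64_t addr) { return (addr >> 8) & (SLICES - 1); }
static unsigned index_of(uint64_t addr) { return addr & (ENTRIES_PER_SLICE - 1); }

/* Two stores can be serviced together in the shared pipeline when
 * their directory accesses fall in different slices. */
bool can_issue_together(uint64_t addr_a, uint64_t addr_b)
{
    return slice_of(addr_a) != slice_of(addr_b);
}

/* A lookup touches exactly one slice, leaving the others free for a
 * simultaneous write. */
bool dir_lookup(uint64_t addr)
{
    dir_entry *d = &directory[slice_of(addr)].e[index_of(addr)];
    return d->valid && d->tag == (addr >> 16);
}
```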
  • Publication number: 20110320855
    Abstract: A pipelined processing device includes: a processor configured to receive a request to perform an operation; a plurality of processing controllers configured to receive at least one instruction associated with the operation, each of the plurality of processing controllers including a memory to store at least one instruction therein; a pipeline processor configured to receive and process the at least one instruction, the pipeline processor including shared error detection logic configured to detect a parity error in the at least one instruction as the at least one instruction is processed in a pipeline and generate an error signal; and a pipeline bus connected to each of the plurality of processing controllers and configured to communicate the error signal from the error detection logic. (A code sketch follows this entry.)
    Type: Application
    Filed: June 23, 2010
    Publication date: December 29, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ekaterina M. Ambroladze, Deanna Postles Dunn Berger, Michael Fee, Arthur J. O'Neill, Jr., Diana Lynn Orf, Robert J. Sonnelitter, III
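    One way to picture the shared checker of 20110320855: even parity is assumed, __builtin_popcount is a GCC/Clang builtin, and the broadcast bus is reduced to a returned struct. None of these choices come from the patent.

```c
#include <stdint.h>
#include <stdbool.h>

/* Even-parity check over an instruction word plus its stored parity
 * bit. */
static bool parity_ok(uint32_t word, unsigned parity_bit)
{
    unsigned ones = (unsigned)__builtin_popcount(word) + (parity_bit & 1u);
    return (ones & 1u) == 0;
}

/* The shared detection logic tags its error signal with the id of the
 * controller whose instruction was in the pipe; every controller sees
 * the broadcast and matches the id against its own. */
typedef struct { unsigned controller_id; bool error; } error_signal;

error_signal check_in_pipe(unsigned controller_id, uint32_t insn, unsigned pbit)
{
    error_signal s = { controller_id, !parity_ok(insn, pbit) };
    return s;
}
```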
  • Publication number: 20110320722
    Abstract: An apparatus for controlling access to a pipeline includes a plurality of command queues, with a first subset of the command queues assigned to process commands of a first command type, a second subset assigned to process commands of a second command type, and a third subset assigned to neither. The apparatus also includes an input controller configured to receive requests having the first command type and the second command type, and to assign requests having the first command type to command queues in the first subset until all command queues in the first subset are filled, and only then to command queues in the third subset. (A code sketch follows this entry.)
    Type: Application
    Filed: June 23, 2010
    Publication date: December 29, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Deanna Postles Dunn Berger, Garrett M. Drapala, Michael F. Fee, Robert J. Sonnelitter, III
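    A toy version of the queue-assignment policy in 20110320722. The subset sizes and the linear scan are assumptions; only the fill-the-dedicated-subset-then-spill order reflects the abstract.

```c
#include <stdbool.h>

enum cmd_type { TYPE_A, TYPE_B };  /* the "first" and "second" types */

/* Toy pool: queues 0..3 reserved for TYPE_A, 4..7 for TYPE_B, and
 * 8..11 unassigned. The split is illustrative. */
#define NQUEUES 12
static bool busy[NQUEUES];

static int grab(int lo, int hi)    /* first free queue in [lo, hi) */
{
    for (int q = lo; q < hi; q++)
        if (!busy[q]) { busy[q] = true; return q; }
    return -1;
}

/* Prefer the dedicated subset; spill into the unassigned subset only
 * once every dedicated queue is filled. */
int assign_queue(enum cmd_type t)
{
    int q = (t == TYPE_A) ? grab(0, 4) : grab(4, 8);
    if (q < 0)
        q = grab(8, NQUEUES);      /* shared overflow pool */
    return q;                      /* -1: nothing free, retry later */
}
```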
  • Publication number: 20110320863
    Abstract: Dynamic re-allocation of cache buffer slots includes moving data out of a reserved buffer slot upon detecting an error in the reserved buffer slot, creating a new buffer slot, and storing the data moved out of the reserved buffer slot in the new buffer slot. (A code sketch follows this entry.)
    Type: Application
    Filed: June 24, 2010
    Publication date: December 29, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ekaterina M. Ambroladze, Deanna Postles Dunn Berger, Michael Fee, Arthur J. O'Neill, Jr., Diana Lynn Orf, Robert J. Sonnelitter, III
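    The re-allocation flow of 20110320863, sketched under the assumption that buffer slots can be modeled as a flat array with fault and in-use flags; all names are illustrative.

```c
#include <string.h>
#include <stdbool.h>

#define SLOTS 16
#define LINE  256                 /* illustrative slot size in bytes */

static unsigned char buf[SLOTS][LINE];
static bool faulty[SLOTS], in_use[SLOTS];

/* On an error in a reserved slot: pick a spare slot, move the data
 * into it, and retire the bad slot from the pool. Returns the new
 * slot's index, or -1 if no spare exists. */
int reallocate_slot(int bad)
{
    for (int s = 0; s < SLOTS; s++) {
        if (s == bad || faulty[s] || in_use[s])
            continue;
        memcpy(buf[s], buf[bad], LINE);  /* rescue the data          */
        in_use[s] = true;                /* the newly created slot   */
        faulty[bad] = true;              /* never reuse the bad slot */
        in_use[bad] = false;
        return s;
    }
    return -1;
}
```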
  • Publication number: 20110320727
    Abstract: An apparatus for controlling operation of a cache includes a first command queue, a second command queue, and an input controller configured to receive requests having a first command type and a second command type, and to assign a first request having the first command type to the first command queue and a second request having the first command type to the second command queue in the event that the first command queue has not received an indication that a first dedicated buffer is available. (A code sketch follows this entry.)
    Type: Application
    Filed: June 23, 2010
    Publication date: December 29, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Diana L. Orf, Robert J. Sonnelitter, III
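    A compact reading of 20110320727's input controller, assuming the availability indication can be reduced to one flag; the alternation rule is an illustrative guess at how same-type requests spread across the two queues.

```c
#include <stdbool.h>

/* While queue 0 has not yet been told its dedicated buffer is
 * available, back-to-back requests of the first command type go to
 * different queues instead of serializing behind queue 0. */
int choose_queue(int nth_request, bool q0_buffer_available)
{
    if (q0_buffer_available)
        return 0;               /* keep using the first queue   */
    return nth_request % 2;     /* alternate 0, 1 while waiting */
}
```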
  • Publication number: 20110314212
    Abstract: Various embodiments of the present invention manage a hierarchical store-through memory cache structure. A store request queue is associated with one of multiple processing cores. At least one blocking condition is determined to have occurred at the store request queue. In response to the blocking condition having occurred, non-store requests and store requests associated with the remaining processing cores are dynamically blocked from accessing a memory cache. (A code sketch follows this entry.)
    Type: Application
    Filed: June 22, 2010
    Publication date: December 22, 2011
    Applicant: International Business Machines Corporation
    Inventors: Deanna P. Berger, Michael F. Fee, Christine C. Jones, Diana L. Orf, Robert J. Sonnelitter, III
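    The blocking behavior of 20110314212 can be pictured as an admission check, as below; the single blocking flag and integer core ids are modeling assumptions.

```c
#include <stdbool.h>

enum req_kind { STORE, NON_STORE };

/* When the store request queue of `blocked_core` raises a blocking
 * condition, stores from every other core and all non-store traffic
 * are held off the cache, letting the blocked queue drain. */
bool may_access_cache(enum req_kind kind, int core,
                      bool blocking_active, int blocked_core)
{
    if (!blocking_active)
        return true;
    if (kind == NON_STORE)
        return false;              /* non-store requests blocked     */
    return core == blocked_core;   /* only the blocked core's stores */
}
```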
  • Publication number: 20110314183
    Abstract: A method of managing a temporary memory includes: receiving a request to transfer data from a source location to a destination location, the data transfer request associated with an operation to be performed, the operation selected from an input into the intermediate temporary memory and an output from the temporary memory; checking a two-state indicator associated with the temporary memory, the two-state indicator having a first state indicating that an immediately preceding operation on the temporary memory was an input to the temporary memory and a second state indicating that the immediately preceding operation was an output from the temporary memory; and performing the operation responsive to one of: the operation being an input operation and the two-state indicator being in the second state, indicating that the immediately preceding operation was an output; and the operation being an output operation and the two-state indicator being in the first state, indicating that the immediately preceding operation was an input. (A code sketch follows this entry.)
    Type: Application
    Filed: June 22, 2010
    Publication date: December 22, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ekaterina M. Ambroladze, Deanna Postles Dunn Berger, Michael Fee, Arthur J. O'Neill, Jr., Diana Lynn Orf, Robert J. Sonnelitter, III
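    The two-state indicator of 20110314183 behaves like a ping-pong bit, as sketched below; the enum names and struct are invented for illustration.

```c
#include <stdbool.h>

typedef enum { LAST_WAS_INPUT, LAST_WAS_OUTPUT } tm_state;
typedef struct { tm_state last; } temp_mem;

/* An input is legal only right after an output, and vice versa, so
 * the temporary memory is never overwritten before being drained nor
 * drained twice. Returns true and flips the indicator when the
 * requested operation is allowed. */
bool try_op(temp_mem *m, bool is_input)
{
    if (is_input && m->last == LAST_WAS_OUTPUT) {
        m->last = LAST_WAS_INPUT;    /* accept data into the buffer  */
        return true;
    }
    if (!is_input && m->last == LAST_WAS_INPUT) {
        m->last = LAST_WAS_OUTPUT;   /* drain data out of the buffer */
        return true;
    }
    return false;                    /* would repeat the previous op */
}
```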
  • Publication number: 20110314211
    Abstract: Various embodiments of the present invention merge data in a cache memory. In one embodiment a set of store data is received from a processing core. A store merge command and a merge mask are also received from the processing core. A portion of the store data on which to perform a merging operation is identified based on the store merge command. A sub-portion of that portion to be merged with a corresponding set of data from a cache memory is identified based on the merge mask. The sub-portion is merged with the corresponding set of data from the cache memory. (A code sketch follows this entry.)
    Type: Application
    Filed: June 22, 2010
    Publication date: December 22, 2011
    Applicant: International Business Machines Corporation
    Inventors: Deanna P. Berger, Michael F. Fee, Christine C. Jones, Diana L. Orf, Robert J. Sonnelitter, III
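    A byte-granular reading of the merge in 20110314211; the doubleword width and byte-lane mask granularity are assumptions, since the abstract does not fix them.

```c
#include <stdint.h>

/* Each set bit of `mask` selects one byte lane that is taken from the
 * store data; cleared bits keep the cache's copy of that lane. */
uint64_t merge_store(uint64_t cache_data, uint64_t store_data, uint8_t mask)
{
    uint64_t out = 0;
    for (int b = 0; b < 8; b++) {
        uint64_t lane = 0xFFull << (8 * b);
        out |= (mask & (1u << b)) ? (store_data & lane)
                                  : (cache_data & lane);
    }
    return out;
}
```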
  • Publication number: 20110258394
    Abstract: A method for merging data including receiving a request from an input/output device to merge data, wherein a merge of the data includes a manipulation of the data, determining whether the data exists in a local cache memory that is in local communication with the input/output device, fetching the data to the local cache memory from a remote cache memory or a main memory if the data does not exist in the local cache memory, merging the data according to the request to obtain merged data, and storing the merged data in the local cache, wherein the merging of the data is performed without using a memory controller within a control flow or a data flow of the merging of the data. (A code sketch follows this entry.)
    Type: Application
    Filed: June 30, 2011
    Publication date: October 20, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Deanna P. Dunn, Robert J. Sonnelitter, III, Gary E. Strait
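    The sketch below traces the control flow shared by this publication, granted patent 8006039, and publication 20090216915 below. Every helper is a hypothetical stand-in and the merge itself is reduced to a mask combine; the one property being modeled is that no memory controller appears in either the control flow or the data flow.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical cache primitives; declared, not defined, because only
 * the ordering of the calls matters for this sketch. */
extern bool     local_cache_has(uint64_t addr);
extern uint64_t local_cache_read(uint64_t addr);
extern void     local_cache_write(uint64_t addr, uint64_t line);
extern uint64_t fetch_remote_or_main(uint64_t addr);  /* fill on miss */

void io_merge(uint64_t addr, uint64_t io_data, uint64_t io_mask)
{
    if (!local_cache_has(addr))                       /* miss: fetch  */
        local_cache_write(addr, fetch_remote_or_main(addr));
    uint64_t merged = (local_cache_read(addr) & ~io_mask)
                    | (io_data & io_mask);            /* manipulation */
    local_cache_write(addr, merged);                  /* stays local  */
}
```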
  • Patent number: 8006039
    Abstract: A method for merging data including receiving a request from an input/output device to merge data, wherein a merge of the data includes a manipulation of the data, determining if the data exists in a local cache memory that is in local communication with the input/output device, fetching the data to the local cache memory from a remote cache memory or a main memory if the data does not exist in the local cache memory, merging the data according to the request to obtain merged data, and storing the merged data in the local cache, wherein the merging of the data is performed without using a memory controller within a control flow or a data flow of the merging of the data. A corresponding system and computer program product.
    Type: Grant
    Filed: February 25, 2008
    Date of Patent: August 23, 2011
    Assignee: International Business Machines Corporation
    Inventors: Deanna P. Dunn, Robert J. Sonnelitter, III, Gary E. Strait
  • Patent number: 7882338
    Abstract: A method, system and computer program product for performing an implicit predicted return from a predicted subroutine are provided. The system includes a branch history table/branch target buffer (BHT/BTB) to hold branch information, including a target address of a predicted subroutine and a branch type. The system also includes instruction buffers, and instruction fetch controls to perform a method including fetching a branch instruction at a branch address and a return-point instruction. The method also includes receiving the target address and the branch type, and fetching a fixed number of instructions in response to the branch type. The method further includes referencing the return-point instruction within the instruction buffers such that the return-point instruction is available upon completing the fetching of the fixed number of instructions absent a re-fetch of the return-point instruction. (A code sketch follows this entry.)
    Type: Grant
    Filed: February 20, 2008
    Date of Patent: February 1, 2011
    Assignee: International Business Machines Corporation
    Inventors: Khary J. Alexander, James J. Bonanno, Brian R. Prasky, Anthony Saporito, Robert J. Sonnelitter, III, Charles F. Webb
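    One way to model the implicit predicted return of 7882338; the fixed fetch count, buffer-slot handle, and branch-type enum are illustrative stand-ins for the BHT/BTB machinery.

```c
#include <stdint.h>

#define FIXED_FETCH 8   /* illustrative instruction-count budget */

typedef enum { BR_NORMAL, BR_PREDICTED_SUBROUTINE } br_type;

typedef struct {
    uint64_t fetch_pc;         /* where to fetch next               */
    int      insns_left;       /* countdown of the fixed fetch      */
    int      return_buf_slot;  /* buffered return-point instruction */
} fetch_state;

/* On a BTB hit whose type marks a predicted subroutine, fetch a fixed
 * number of instructions at the target while remembering where the
 * return point already sits in the instruction buffers, so it needs
 * no re-fetch when the countdown expires. */
void on_btb_hit(fetch_state *f, br_type t, uint64_t target, int buf_slot)
{
    f->fetch_pc = target;
    if (t == BR_PREDICTED_SUBROUTINE) {
        f->insns_left      = FIXED_FETCH;
        f->return_buf_slot = buf_slot;
    }
}
```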
  • Patent number: 7822954
    Abstract: A branch prediction algorithm is used to generate a prediction of whether or not a branch will be taken. One or more instructions are fetched such that, for each of the fetched instructions, the prediction initiates a fetch of an instruction at a predicted target of the branch. A test is performed to ascertain whether or not the prediction was generated late relative to the fetched instructions, so that if the branch is later detected as mispredicted, that detection can be correlated to the late prediction. When the prediction is generated late relative to the fetched instructions, a latent prediction is selected by utilizing a fetching initiated by the latent prediction such that a new fetch is not started. (A code sketch follows this entry.)
    Type: Grant
    Filed: February 20, 2008
    Date of Patent: October 26, 2010
    Assignee: International Business Machines Corporation
    Inventors: John W. Ward, III, Khary J. Alexander, James J. Bonanno, Brian R. Prasky, Anthony Saporito, Robert J. Sonnelitter, III
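    The late-prediction test of 7822954, reduced to a cycle comparison; real hardware would track pipeline stages rather than cycle numbers, so treat this purely as a sketch.

```c
#include <stdbool.h>

typedef struct {
    int  predict_cycle;   /* cycle the prediction was produced    */
    int  fetch_cycle;     /* cycle the branch's group was fetched */
    bool latent;          /* prediction recorded as late          */
} pred_track;

/* A prediction arriving after its branch was already fetched is late
 * and is recorded as latent. */
void classify_prediction(pred_track *p)
{
    p->latent = (p->predict_cycle > p->fetch_cycle);
}

/* If a misprediction is later detected against a latent prediction,
 * the fetch that prediction initiated is reused: no new fetch. */
bool need_new_fetch(const pred_track *p, bool mispredicted)
{
    return mispredicted && !p->latent;
}
```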
  • Publication number: 20090240891
    Abstract: Systems, methods and computer program products for data buffers partitioned from a cache array. An exemplary embodiment includes a method, in a processor, for providing data buffers partitioned from a cache array, the method including clearing cache directories associated with the processor to an initial state, obtaining a selected directory state from a control register preloaded by the service processor, and, in response to the control register including the desired cache state, sending load commands with an address and data, loading cache lines and cache line directory entries into the cache, and storing the specified data in the corresponding cache line. (A code sketch follows this entry.)
    Type: Application
    Filed: March 19, 2008
    Publication date: September 24, 2009
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Gary E. Strait, Deanna P. Dunn, Michael F. Fee, Pak-kin Mak, Robert J. Sonnelitter, III
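    An illustrative init flow for 20090240891, assuming a one-bit control-register encoding and a pin-one-way partitioning; the actual directory states and load-command format are not taken from the patent.

```c
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

#define SETS 64
#define WAYS 4

typedef struct { uint64_t tag; bool valid, pinned; } dir_ent;
static dir_ent dir[SETS][WAYS];

/* Clear the directories, then, if the control register (preloaded by
 * the service processor) requests it, load one way of every set with
 * valid, never-evicted lines so that slice of the cache array serves
 * as a data buffer. */
void init_cache(uint32_t control_reg)
{
    memset(dir, 0, sizeof dir);            /* directories to initial state */
    if (!(control_reg & 1u))               /* bit 0: buffer mode (assumed) */
        return;
    for (int set = 0; set < SETS; set++) { /* one load command per line    */
        dir[set][0].tag    = 0;            /* fixed buffer address region  */
        dir[set][0].valid  = true;
        dir[set][0].pinned = true;         /* partitioned off as a buffer  */
    }
}
```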
  • Publication number: 20090217015
    Abstract: A system and method for controlling restarting of instruction fetching using speculative address computations in a processor are provided. The system includes a predicted target queue to hold target address values generated by branch prediction logic (BPL). The system also includes target selection logic including a recycle queue. The target selection logic selects a saved branch target value between a previously speculatively calculated branch target value from the recycle queue and an address value from the predicted target queue. The system further includes a compare block to identify a wrong target in response to a mismatch between the saved branch target value and a current calculated branch target, where instruction fetching is restarted in response to the wrong target. (A code sketch follows this entry.)
    Type: Application
    Filed: February 22, 2008
    Publication date: August 27, 2009
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Khary J. Alexander, Brian R. Prasky, Anthony Saporito, Robert J. Sonnelitter, III
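    The compare block of 20090217015, sketched with the choice between recycled and predicted targets collapsed into one flag; all names are assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint64_t recycled;     /* previously speculatively calculated target */
    uint64_t predicted;    /* value from the predicted target queue      */
    bool use_recycled;     /* which one the selection logic saved        */
} saved_targets;

/* A mismatch between the saved target and the freshly calculated one
 * flags a wrong target, which restarts instruction fetching. */
bool wrong_target(const saved_targets *s, uint64_t calculated)
{
    uint64_t saved = s->use_recycled ? s->recycled : s->predicted;
    return saved != calculated;
}
```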
  • Publication number: 20090216952
    Abstract: A method for managing cache memory including receiving an instruction fetch for an instruction stream in a cache memory, wherein the instruction fetch includes an instruction fetch reference tag for the instruction stream and the instruction stream is at least partially included within a cache line, comparing the instruction fetch reference tag to a previous instruction fetch reference tag, maintaining a cache replacement status of the cache line if the instruction fetch reference tag is the same as the previous instruction fetch reference tag, and upgrading the cache replacement status of the cache line if the instruction fetch reference tag is different from the previous instruction fetch reference tag, whereby the cache replacement status of the cache line is upgraded if the instruction stream is independently fetched more than once. A corresponding system and computer program product. (A code sketch follows this entry.)
    Type: Application
    Filed: February 26, 2008
    Publication date: August 27, 2009
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Robert J. Sonnelitter, III, Gregory W. Alexander, Brian R. Prasky
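    A sketch of the replacement-status rule in 20090216952, with the reference tag stored per cache line and "upgrade" modeled as promotion to a caller-supplied top LRU rank; both are illustrative choices.

```c
#include <stdint.h>

typedef struct {
    uint32_t last_fetch_tag;  /* reference tag of the last fetch */
    unsigned lru_rank;        /* replacement status              */
} line_meta;

/* A repeat access from the same instruction stream leaves the status
 * alone; a different tag means the line was independently fetched
 * again, so it is promoted. */
void on_instruction_fetch(line_meta *line, uint32_t fetch_tag, unsigned top_rank)
{
    if (line->last_fetch_tag == fetch_tag)
        return;                        /* same stream: keep status   */
    line->last_fetch_tag = fetch_tag;  /* independent re-fetch       */
    line->lru_rank = top_rank;         /* upgrade replacement status */
}
```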
  • Publication number: 20090217003
    Abstract: A method for reducing cache memory pollution including fetching an instruction stream from a cache line, preventing a fetching for the instruction stream from a sequential cache line, searching for a next predicted taken branch instruction, determining whether a length of the instruction stream extends beyond a length of the cache line based on the next predicted taken branch instruction, continuing preventing the fetching for the instruction stream from the sequential cache line if the length of the instruction stream does not extend beyond the length of the cache line, and allowing the fetching for the instruction stream from the sequential cache line if the length of the instruction stream extends beyond the length of the cache line, whereby fetching from the sequential cache line, and the resulting pollution of the cache memory that stores the instruction stream, are minimized. A corresponding system and computer program product. (A code sketch follows this entry.)
    Type: Application
    Filed: February 25, 2008
    Publication date: August 27, 2009
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Robert J. Sonnelitter, III, James J. Bonanno, David S. Hutton, Brian R. Prasky, Anthony Saporito
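    The gating test of 20090217003 comes down to a line-boundary comparison, sketched below under an assumed 256-byte line size.

```c
#include <stdint.h>
#include <stdbool.h>

#define LINE_BYTES 256   /* illustrative cache line size (power of two) */

/* With the next predicted taken branch known, only a stream that
 * runs past the end of its cache line is allowed to pull in the
 * sequential line; otherwise that fetch stays suppressed. */
bool allow_sequential_fetch(uint64_t stream_start, uint64_t next_taken_branch)
{
    uint64_t line_end = (stream_start | (LINE_BYTES - 1)) + 1;
    return next_taken_branch >= line_end;
}
```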
  • Publication number: 20090217017
    Abstract: A method, system, and computer program product for minimizing branch prediction latency in a pipelined computer processing environment are provided. The method includes detecting a branch loop utilizing branch instruction addresses and corresponding target addresses stored in a branch target buffer (BTB). The method also includes fetching the branch loop into a pre-decode instruction buffer and qualifying the branch loop for loop lockdown. The method further includes locking the instruction stream that forms the branch loop in the pre-decode instruction buffer, processing qualified branch loop instructions from the buffer, and powering down instruction fetching and branch prediction logic (BPL) associated with the BTB. (A code sketch follows this entry.)
    Type: Application
    Filed: February 26, 2008
    Publication date: August 27, 2009
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Khary J. Alexander, David S. Hutton, Brian R. Prasky, Anthony Saporito, Robert J. Sonnelitter, III, John W. Ward, III
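    A toy qualification test for the loop lockdown of 20090217017, assuming a fixed 4-byte instruction size and a 16-slot pre-decode buffer; the power-down sequencing itself is omitted.

```c
#include <stdbool.h>
#include <stdint.h>

#define PREDECODE_SLOTS 16   /* illustrative buffer capacity   */
#define INSN_BYTES 4         /* fixed instruction size assumed */

typedef struct { uint64_t branch_addr, target_addr; } btb_entry;

/* A loop qualifies when the BTB shows a backward branch whose body
 * fits entirely in the pre-decode buffer; once locked, instructions
 * replay from the buffer and fetch/BPL logic can be powered down. */
bool qualifies_for_lockdown(const btb_entry *e)
{
    if (e->target_addr >= e->branch_addr)
        return false;                          /* not a backward loop */
    uint64_t body = (e->branch_addr - e->target_addr) / INSN_BYTES + 1;
    return body <= PREDECODE_SLOTS;            /* fits in the buffer  */
}
```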
  • Publication number: 20090216915
    Abstract: A method for merging data including receiving a request from an input/output device to merge data, wherein a merge of the data includes a manipulation of the data, determining if the data exists in a local cache memory that is in local communication with the input/output device, fetching the data to the local cache memory from a remote cache memory or a main memory if the data does not exist in the local cache memory, merging the data according to the request to obtain merged data, and storing the merged data in the local cache, wherein the merging of the data is performed without using a memory controller within a control flow or a data flow of the merging of the data. A corresponding system and computer program product.
    Type: Application
    Filed: February 25, 2008
    Publication date: August 27, 2009
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Deanna P. Dunn, Robert J. Sonnelitter, III, Gary E. Strait