Patents by Inventor Kenneth D. Klapproth
Kenneth D. Klapproth has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10884890
Abstract: Methods, systems and computer program products for measuring hardware performance are provided. Aspects include receiving an indication of a start to a hardware operation. A number of clock cycles are counted from the start of a hardware operation to the completion of the hardware operation. A first region comprising a first set of bit locations is defined. Second and third regions are defined, each including a set of bit locations. Based on the first set of bit locations being equal to zero, a granularity flag is set to zero in the sample buffer and the second and third sets of bit locations are written to the sample buffer. Based on the first set of bit locations being greater than zero, the granularity flag in the sample buffer is set to one and the first and second sets of bit locations are written to the sample buffer.
Type: Grant
Filed: August 6, 2019
Date of Patent: January 5, 2021
Assignee: International Business Machines Corporation
Inventors: Ram Sai Manoj Bamdhamravuri, Deanna Postles Dunn Berger, Mark R. Hodges, Kenneth D. Klapproth, Guy G. Tracy, Craig R. Walters
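To make the variable-granularity cycle-count encoding described in this family of filings easier to picture, here is a minimal C sketch. The 8-bit region widths, the sample layout, and all names are assumptions made for illustration; they are not taken from the patent itself.

```c
/*
 * Minimal sketch of a variable-granularity cycle-count encoding.
 * Region widths (8 bits each) and the sample layout are assumed.
 */
#include <stdint.h>
#include <stdio.h>

#define REGION_BITS 8u   /* assumed width of each bit-location region */

/* One encoded sample: a granularity flag plus two packed 8-bit regions. */
typedef struct {
    uint8_t  granularity;  /* 0 = fine (low bits kept), 1 = coarse (low bits dropped) */
    uint16_t payload;      /* two regions packed together */
} sample_t;

/* Encode a raw cycle count into the sample-buffer format. */
static sample_t encode_cycles(uint32_t cycles)
{
    uint32_t first  = (cycles >> (2 * REGION_BITS)) & 0xFFu;  /* high region   */
    uint32_t second = (cycles >> REGION_BITS) & 0xFFu;        /* middle region */
    uint32_t third  = cycles & 0xFFu;                         /* low region    */
    sample_t s;

    if (first == 0) {
        /* Small count: keep full precision from the two low regions. */
        s.granularity = 0;
        s.payload = (uint16_t)((second << REGION_BITS) | third);
    } else {
        /* Large count: keep the two high regions and drop the low bits. */
        s.granularity = 1;
        s.payload = (uint16_t)((first << REGION_BITS) | second);
    }
    return s;
}

int main(void)
{
    uint32_t counts[] = { 42u, 513u, 1u << 20 };
    for (int i = 0; i < 3; i++) {
        sample_t s = encode_cycles(counts[i]);
        printf("cycles=%u -> flag=%u payload=0x%04x\n",
               (unsigned)counts[i], (unsigned)s.granularity, (unsigned)s.payload);
    }
    return 0;
}
```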
-
Patent number: 10540251
Abstract: Methods, systems and computer program products for measuring hardware performance are provided. Aspects include receiving an indication of a start to a hardware operation. A number of clock cycles are counted from the start of a hardware operation to the completion of the hardware operation. A first region comprising a first set of bit locations is defined. Second and third regions are defined, each including a set of bit locations. Based on the first set of bit locations being equal to zero, a granularity flag is set to zero in the sample buffer and the second and third sets of bit locations are written to the sample buffer. Based on the first set of bit locations being greater than zero, the granularity flag in the sample buffer is set to one and the first and second sets of bit locations are written to the sample buffer.
Type: Grant
Filed: May 22, 2017
Date of Patent: January 21, 2020
Assignee: International Business Machines Corporation
Inventors: Ram Sai Manoj Bamdhamravuri, Deanna Postles Dunn Berger, Mark R. Hodges, Kenneth D. Klapproth, Guy G. Tracy, Craig R. Walters
-
Publication number: 20190361783
Abstract: Methods, systems and computer program products for measuring hardware performance are provided. Aspects include receiving an indication of a start to a hardware operation. A number of clock cycles are counted from the start of a hardware operation to the completion of the hardware operation. A first region comprising a first set of bit locations is defined. Second and third regions are defined, each including a set of bit locations. Based on the first set of bit locations being equal to zero, a granularity flag is set to zero in the sample buffer and the second and third sets of bit locations are written to the sample buffer. Based on the first set of bit locations being greater than zero, the granularity flag in the sample buffer is set to one and the first and second sets of bit locations are written to the sample buffer.
Type: Application
Filed: August 6, 2019
Publication date: November 28, 2019
Inventors: Ram Sai Manoj Bamdhamravuri, Deanna Postles Dunn Berger, Mark R. Hodges, Kenneth D. Klapproth, Guy G. Tracy, Craig R. Walters
-
Patent number: 10489292
Abstract: Embodiments of the present invention are directed to a computer-implemented method for ownership tracking updates across multiple simultaneous operations. A non-limiting example of the computer-implemented method includes receiving, by a cache directory control circuit, a message to update a cache directory entry. The method further includes, in response, updating, by the cache directory control circuit, the cache directory entry, and generating a reverse compare signal including an updated ownership vector of a memory line corresponding to the cache directory entry. The method further includes sending the reverse compare signal to a cache controller associated with the cache directory entry.
Type: Grant
Filed: November 20, 2017
Date of Patent: November 26, 2019
Assignee: International Business Machines Corporation
Inventors: Michael A. Blake, Timothy C. Bronson, Ashraf ElSharif, Kenneth D. Klapproth, Vesselina K. Papazova, Guy G. Tracy
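As a rough illustration of the reverse-compare flow these filings describe, the following C sketch models a directory update that forwards the line's updated ownership vector to a cache controller. The ownership-vector width, the callback style, and all names are assumptions for the sketch, not details from the patent.

```c
/*
 * Illustrative model of a reverse-compare flow: a directory update produces
 * a signal carrying the line's updated ownership vector, which is forwarded
 * to the associated cache controller. All structure layouts are assumed.
 */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t line_address;
    uint8_t  ownership;      /* one bit per core that holds the line */
} dir_entry_t;

/* "Reverse compare" signal: the updated ownership vector for one line. */
typedef struct {
    uint64_t line_address;
    uint8_t  updated_ownership;
} reverse_compare_t;

/* Stand-in for the cache controller that receives the signal. */
static void cache_controller_receive(const reverse_compare_t *sig)
{
    printf("line 0x%llx: new ownership vector 0x%02x\n",
           (unsigned long long)sig->line_address, sig->updated_ownership);
}

/* Directory control: apply the update, then send the new ownership vector. */
static void directory_update(dir_entry_t *entry, int core, int acquire)
{
    if (acquire)
        entry->ownership |= (uint8_t)(1u << core);
    else
        entry->ownership &= (uint8_t)~(1u << core);

    reverse_compare_t sig = { entry->line_address, entry->ownership };
    cache_controller_receive(&sig);
}

int main(void)
{
    dir_entry_t entry = { 0x1000, 0x01 };   /* core 0 owns the line */
    directory_update(&entry, 3, 1);         /* core 3 acquires a copy */
    directory_update(&entry, 0, 0);         /* core 0 gives it up */
    return 0;
}
```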
-
Patent number: 10482015
Abstract: Embodiments of the present invention are directed to a computer-implemented method for ownership tracking updates across multiple simultaneous operations. A non-limiting example of the computer-implemented method includes receiving, by a cache directory control circuit, a message to update a cache directory entry. The method further includes, in response, updating, by the cache directory control circuit, the cache directory entry, and generating a reverse compare signal including an updated ownership vector of a memory line corresponding to the cache directory entry. The method further includes sending the reverse compare signal to a cache controller associated with the cache directory entry.
Type: Grant
Filed: May 18, 2017
Date of Patent: November 19, 2019
Assignee: International Business Machines Corporation
Inventors: Michael A. Blake, Timothy C. Bronson, Ashraf ElSharif, Kenneth D. Klapproth, Vesselina K. Papazova, Guy G. Tracy
-
Patent number: 10379776
Abstract: An aspect includes interlocking operations in an address-sliced cache system. A computer-implemented method includes determining whether a dynamic memory relocation operation is in process in the address-sliced cache system. Based on determining that the dynamic memory relocation operation is in process, a key operation is serialized to maintain a sequenced order of completion of the key operation across a plurality of slices and pipes in the address-sliced cache system. Based on determining that the dynamic memory relocation operation is not in process, a plurality of key operation requests is allowed to launch across two or more of the slices and pipes in parallel in the address-sliced cache system while ensuring that only one instance of the key operations is in process across all of the slices and pipes at a same time.
Type: Grant
Filed: May 24, 2017
Date of Patent: August 13, 2019
Assignee: International Business Machines Corporation
Inventors: Deanna P. Berger, Michael A. Blake, Ashraf Elsharif, Kenneth D. Klapproth, Pak-kin Mak, Robert J. Sonnelitter, III, Guy G. Tracy
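The interlock decision in this patent (and its companion publication below) can be pictured with a small C sketch: key operations are queued for ordered completion while a dynamic memory relocation is active, and otherwise launch in parallel as long as only one instance is in flight at a time. The flag and counter representation is an assumption made for illustration.

```c
/*
 * Sketch of an interlock for key operations in an address-sliced cache:
 * serialize during dynamic memory relocation (DMR), otherwise allow parallel
 * launch with a single-instance guard. State representation is assumed.
 */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool relocation_in_progress;   /* dynamic memory relocation active?      */
    bool key_op_active;            /* one key-op instance already in flight? */
    int  serialized_queue_depth;   /* key ops waiting for ordered completion */
} cache_interlock_t;

/* Decide whether a new key-operation request may launch now. */
static bool try_launch_key_op(cache_interlock_t *st, int slice, int pipe)
{
    if (st->relocation_in_progress) {
        /* DMR active: enqueue so key ops complete in a sequenced order. */
        st->serialized_queue_depth++;
        printf("slice %d pipe %d: serialized (queue depth %d)\n",
               slice, pipe, st->serialized_queue_depth);
        return false;
    }
    if (st->key_op_active) {
        /* Only one instance of the key operation may be in flight at once. */
        printf("slice %d pipe %d: rejected, key op already active\n", slice, pipe);
        return false;
    }
    st->key_op_active = true;
    printf("slice %d pipe %d: launched\n", slice, pipe);
    return true;
}

int main(void)
{
    cache_interlock_t st = { false, false, 0 };
    try_launch_key_op(&st, 0, 0);      /* launches                 */
    try_launch_key_op(&st, 1, 1);      /* rejected: one in flight  */
    st.key_op_active = false;
    st.relocation_in_progress = true;
    try_launch_key_op(&st, 2, 0);      /* serialized during DMR    */
    return 0;
}
```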
-
Publication number: 20180365070
Abstract: Methods, systems, and computer program products for managing broadcasts in a distributed symmetric multiprocessing computer are provided. Aspects include defining a default broadcast rate for a plurality of processors in the distributed symmetric multiprocessing computer, wherein the default broadcast rate is a rate at which a processor broadcasts a request for a resource. One or more broadcasted requests by a first processor are monitored, and related responses are utilized to determine a state of the one or more broadcasted requests. The default broadcast rate is adjusted based at least in part on the state of the one or more broadcasted requests.
Type: Application
Filed: June 16, 2017
Publication date: December 20, 2018
Inventors: Michael A. Blake, Mike Chow, Kenneth D. Klapproth, Pak-kin Mak, Arthur J. O'Neill, JR., Robert J. Sonnelitter, III
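A rough C sketch of the rate-adjustment idea follows; the response codes, the step size, and the per-interval rate model are assumptions made for illustration only.

```c
/*
 * Sketch of adaptive broadcast-rate control: start at a default rate and
 * raise or lower it based on how broadcasted requests are being answered.
 * Response codes and adjustment step are assumed.
 */
#include <stdio.h>

typedef enum { RESP_GRANTED, RESP_RETRY, RESP_REJECTED } response_t;

typedef struct {
    int rate;            /* broadcasts permitted per interval      */
    int default_rate;    /* rate to drift back toward when healthy */
} broadcaster_t;

/* Adjust the broadcast rate from the outcome of the last broadcast. */
static void adjust_rate(broadcaster_t *b, response_t resp)
{
    switch (resp) {
    case RESP_GRANTED:
        /* Requests are being satisfied: move back toward the default. */
        if (b->rate < b->default_rate)
            b->rate++;
        break;
    case RESP_RETRY:
    case RESP_REJECTED:
        /* Contention detected: back off to reduce fabric traffic. */
        if (b->rate > 1)
            b->rate--;
        break;
    }
}

int main(void)
{
    broadcaster_t b = { 8, 8 };
    response_t trace[] = { RESP_REJECTED, RESP_RETRY, RESP_GRANTED };
    for (int i = 0; i < 3; i++) {
        adjust_rate(&b, trace[i]);
        printf("after response %d: rate=%d\n", i, b.rate);
    }
    return 0;
}
```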
-
Publication number: 20180341422
Abstract: An aspect includes interlocking operations in an address-sliced cache system. A computer-implemented method includes determining whether a dynamic memory relocation operation is in process in the address-sliced cache system. Based on determining that the dynamic memory relocation operation is in process, a key operation is serialized to maintain a sequenced order of completion of the key operation across a plurality of slices and pipes in the address-sliced cache system. Based on determining that the dynamic memory relocation operation is not in process, a plurality of key operation requests is allowed to launch across two or more of the slices and pipes in parallel in the address-sliced cache system while ensuring that only one instance of the key operations is in process across all of the slices and pipes at a same time.
Type: Application
Filed: May 24, 2017
Publication date: November 29, 2018
Inventors: Deanna P. Berger, Michael A. Blake, Ashraf Elsharif, Kenneth D. Klapproth, Pak-kin Mak, Robert J. Sonnelitter, III, Guy G. Tracy
-
Publication number: 20180336134
Abstract: Embodiments of the present invention are directed to a computer-implemented method for ownership tracking updates across multiple simultaneous operations. A non-limiting example of the computer-implemented method includes receiving, by a cache directory control circuit, a message to update a cache directory entry. The method further includes, in response, updating, by the cache directory control circuit, the cache directory entry, and generating a reverse compare signal including an updated ownership vector of a memory line corresponding to the cache directory entry. The method further includes sending the reverse compare signal to a cache controller associated with the cache directory entry.
Type: Application
Filed: May 18, 2017
Publication date: November 22, 2018
Inventors: Michael A. Blake, Timothy C. Bronson, Ashraf ElSharif, Kenneth D. Klapproth, Vesselina K. Papazova, Guy G. Tracy
-
Publication number: 20180336135
Abstract: Embodiments of the present invention are directed to a computer-implemented method for ownership tracking updates across multiple simultaneous operations. A non-limiting example of the computer-implemented method includes receiving, by a cache directory control circuit, a message to update a cache directory entry. The method further includes, in response, updating, by the cache directory control circuit, the cache directory entry, and generating a reverse compare signal including an updated ownership vector of a memory line corresponding to the cache directory entry. The method further includes sending the reverse compare signal to a cache controller associated with the cache directory entry.
Type: Application
Filed: November 20, 2017
Publication date: November 22, 2018
Inventors: Michael A. Blake, Timothy C. Bronson, Ashraf ElSharif, Kenneth D. Klapproth, Vesselina K. Papazova, Guy G. Tracy
-
Publication number: 20180336116
Abstract: Methods, systems and computer program products for measuring hardware performance are provided. Aspects include receiving an indication of a start to a hardware operation. A number of clock cycles are counted from the start of a hardware operation to the completion of the hardware operation. A first region comprising a first set of bit locations is defined. Second and third regions are defined, each including a set of bit locations. Based on the first set of bit locations being equal to zero, a granularity flag is set to zero in the sample buffer and the second and third sets of bit locations are written to the sample buffer. Based on the first set of bit locations being greater than zero, the granularity flag in the sample buffer is set to one and the first and second sets of bit locations are written to the sample buffer.
Type: Application
Filed: May 22, 2017
Publication date: November 22, 2018
Inventors: Ram Sai Manoj Bamdhamravuri, Deanna Postles Dunn Berger, Mark R. Hodges, Kenneth D. Klapproth, Guy G. Tracy, Craig R. Walters
-
Patent number: 9734110
Abstract: In one embodiment, a computer-implemented method includes instructing two or more processors that are operating in a normal state of a symmetric multiprocessing (SMP) network to transition from the normal state to a slow state. The two or more processors reduce their frequencies to respective target frequencies in a transitional state when transitioning from the normal state to the slow state. It is determined that the two or more processors have achieved their respective target frequencies for the slow state. The slow state is entered, responsive to this determination. Responsive to entering the slow state, a first processor of the two or more processors is instructed to send empty packets across an interconnect to compensate for a first greatest potential rate differential between the first processor and a remainder of the two or more processors during the slow state.
Type: Grant
Filed: February 13, 2015
Date of Patent: August 15, 2017
Assignee: International Business Machines Corporation
Inventors: Garrett M. Drapala, Michael F. Fee, Kenneth D. Klapproth, Robert J. Sonnelitter, III
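The empty-packet compensation in this patent (and the publication below) can be illustrated with a short C sketch that pads the fastest processor's link by the largest frequency gap. The frequencies and the packets-per-MHz conversion are assumptions made purely for illustration, not values from the patent.

```c
/*
 * Sketch of empty-packet padding in the slow state: pad the fastest
 * processor's packet stream to cover the greatest potential rate
 * differential relative to the slowest processor. Numbers are assumed.
 */
#include <stdio.h>

#define NPROCS 4

/* Compute how many empty packets per interval the fastest processor must
 * send so its effective packet rate matches the slowest processor's rate. */
static int empty_packets_needed(const int freq_mhz[], int nprocs,
                                int packets_per_mhz)
{
    int fastest = freq_mhz[0], slowest = freq_mhz[0];
    for (int i = 1; i < nprocs; i++) {
        if (freq_mhz[i] > fastest) fastest = freq_mhz[i];
        if (freq_mhz[i] < slowest) slowest = freq_mhz[i];
    }
    /* Greatest potential rate differential, expressed in packets. */
    return (fastest - slowest) * packets_per_mhz;
}

int main(void)
{
    int slow_state_freqs[NPROCS] = { 2000, 1995, 2000, 1990 };  /* MHz */
    int pad = empty_packets_needed(slow_state_freqs, NPROCS, 2);
    printf("empty packets per interval: %d\n", pad);
    return 0;
}
```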
-
Publication number: 20160239450
Abstract: In one embodiment, a computer-implemented method includes instructing two or more processors that are operating in a normal state of a symmetric multiprocessing (SMP) network to transition from the normal state to a slow state. The two or more processors reduce their frequencies to respective target frequencies in a transitional state when transitioning from the normal state to the slow state. It is determined that the two or more processors have achieved their respective target frequencies for the slow state. The slow state is entered, responsive to this determination. Responsive to entering the slow state, a first processor of the two or more processors is instructed to send empty packets across an interconnect to compensate for a first greatest potential rate differential between the first processor and a remainder of the two or more processors during the slow state.
Type: Application
Filed: February 13, 2015
Publication date: August 18, 2016
Inventors: Garrett M. Drapala, Michael F. Fee, Kenneth D. Klapproth, Robert J. Sonnelitter, III
-
Patent number: 9003127
Abstract: Embodiments relate to storing data to a system memory. An aspect includes accessing successive entries of a cache directory having a plurality of directory entries by a stepper engine, where access to the cache directory is given a lower priority than other cache operations. It is determined that a specific directory entry in the cache directory has a change line state that indicates it is modified. A store operation is performed to send a copy of the cache entry corresponding to that specific directory entry to the system memory as part of a cache management function. The specific directory entry is updated to indicate that the change line state is unmodified.
Type: Grant
Filed: November 21, 2013
Date of Patent: April 7, 2015
Assignee: International Business Machines Corporation
Inventors: Michael A. Blake, Timothy C. Bronson, Hieu T. Huynh, Kenneth D. Klapproth, Pak-Kin Mak, Vesselina K. Papazova
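A minimal C sketch of the background stepper these filings describe is shown below: it walks the directory one entry per low-priority slot, writes back any line whose change-line state says it is modified, and clears that state. The directory size, the write-back stand-in, and the tick-based scheduling are assumptions for illustration.

```c
/*
 * Sketch of a low-priority stepper engine that scans a cache directory and
 * writes back modified lines. Directory size and scheduling are assumed.
 */
#include <stdbool.h>
#include <stdio.h>

#define DIR_ENTRIES 16

typedef struct {
    bool valid;
    bool modified;          /* change line state */
    unsigned long address;
} dir_entry_t;

static dir_entry_t directory[DIR_ENTRIES];
static unsigned step_index;     /* where the stepper resumes next time */

/* Stand-in for sending a copy of the cache line to system memory. */
static void store_to_system_memory(unsigned long address)
{
    printf("writing back line 0x%lx\n", address);
}

/* One stepper pass: called only when the cache has spare (low-priority) cycles. */
static void stepper_tick(void)
{
    dir_entry_t *e = &directory[step_index];
    if (e->valid && e->modified) {
        store_to_system_memory(e->address);
        e->modified = false;             /* change line state -> unmodified */
    }
    step_index = (step_index + 1) % DIR_ENTRIES;
}

int main(void)
{
    directory[3] = (dir_entry_t){ true, true, 0x3000 };
    directory[7] = (dir_entry_t){ true, true, 0x7000 };
    for (int i = 0; i < DIR_ENTRIES; i++)
        stepper_tick();                  /* one full sweep of the directory */
    return 0;
}
```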
-
Patent number: 8990507
Abstract: Embodiments relate to storing data to a system memory. An aspect includes accessing successive entries of a cache directory having a plurality of directory entries by a stepper engine, where access to the cache directory is given a lower priority than other cache operations. It is determined that a specific directory entry in the cache directory has a change line state that indicates it is modified. A store operation is performed to send a copy of the cache entry corresponding to that specific directory entry to the system memory as part of a cache management function. The specific directory entry is updated to indicate that the change line state is unmodified.
Type: Grant
Filed: June 13, 2012
Date of Patent: March 24, 2015
Assignee: International Business Machines Corporation
Inventors: Michael A. Blake, Pak-Kin Mak, Timothy C. Bronson, Hieu T. Huynh, Kenneth D. Klapproth, Vesselina K. Papazova
-
Patent number: 8930616
Abstract: System refresh in a cache memory includes generating a refresh time period (RTIM) pulse at a centralized refresh controller of the cache memory and activating a refresh request at the centralized refresh controller based on generating the RTIM pulse. The refresh request is associated with a single cache memory bank of the cache memory. A refresh grant is received and transmitted to a bank controller. The bank controller is associated with and localized at the single cache memory bank of the cache memory.
Type: Grant
Filed: October 18, 2012
Date of Patent: January 6, 2015
Assignee: International Business Machines Corporation
Inventors: Michael A. Blake, Timothy C. Bronson, Hieu T. Huynh, Kenneth D. Klapproth
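The centralized refresh handshake can be pictured with the following C sketch: an RTIM pulse at the central controller raises a refresh request for one cache bank, and the resulting grant is forwarded to that bank's local controller. The bank count, the round-robin bank selection, and the immediate-grant stand-in are illustrative assumptions, not details from the patent.

```c
/*
 * Sketch of centralized refresh control: each RTIM pulse targets a single
 * bank, and the grant is forwarded to that bank's local controller.
 */
#include <stdio.h>

#define NUM_BANKS 8

/* Local bank controller acting on a forwarded refresh grant. */
static void bank_controller_refresh(int bank)
{
    printf("bank %d: performing refresh\n", bank);
}

/* Stand-in for whatever arbitration decides when a request is granted. */
static int refresh_grant_available(void)
{
    return 1;   /* always grant immediately in this toy model */
}

/* Centralized controller: one RTIM pulse raises a request for one bank. */
static void on_rtim_pulse(void)
{
    static int next_bank;                    /* bank addressed by this pulse */
    int bank = next_bank;
    next_bank = (next_bank + 1) % NUM_BANKS;

    /* Activate the refresh request for just this bank, wait for the grant,
     * then transmit the grant to the bank-local controller. */
    if (refresh_grant_available())
        bank_controller_refresh(bank);
}

int main(void)
{
    for (int pulse = 0; pulse < 3; pulse++)
        on_rtim_pulse();
    return 0;
}
```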
-
Patent number: 8706972
Abstract: A method of providing requests to a cache pipeline includes receiving a plurality of requests from one or more state machines at an arbiter, selecting one of the plurality of requests as a selected request, the selected request having been provided by a first state machine, determining that the selected request includes a mode that requires a first step and a second step, the first step including an access to a location in a cache, determining that the location in the cache is unavailable, and replacing the mode with a modified mode that only includes the second step.
Type: Grant
Filed: November 20, 2012
Date of Patent: April 22, 2014
Assignee: International Business Machines Corporation
Inventors: Deanna Postles Dunn Berger, Michael F. Fee, Kenneth D. Klapproth, Robert J. Sonnelitter, III
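A small C sketch of the mode-substitution idea in this patent (and the related filings below) follows: when an arbitrated request would need a cache location that is currently unavailable, its mode is swapped for one that performs only the second step. The mode encoding and the availability check are illustrative assumptions only.

```c
/*
 * Sketch of request-mode substitution in a cache pipeline arbiter: downgrade
 * a two-step request to step 2 only when its cache location is unavailable.
 */
#include <stdbool.h>
#include <stdio.h>

typedef enum {
    MODE_STEP1_AND_STEP2,   /* step 1: cache access, step 2: follow-on work */
    MODE_STEP2_ONLY         /* modified mode: skip the cache access         */
} request_mode_t;

typedef struct {
    int            state_machine_id;
    unsigned long  cache_location;
    request_mode_t mode;
} request_t;

/* Stand-in for checking whether the target cache location is usable now. */
static bool cache_location_available(unsigned long location)
{
    return (location & 1ul) == 0;     /* toy rule: odd addresses are busy */
}

/* Arbiter step: downgrade the mode rather than stalling the request. */
static void arbitrate(request_t *req)
{
    if (req->mode == MODE_STEP1_AND_STEP2 &&
        !cache_location_available(req->cache_location)) {
        req->mode = MODE_STEP2_ONLY;  /* replace mode: only step 2 remains */
    }
    printf("request from SM %d dispatched with mode %d\n",
           req->state_machine_id, (int)req->mode);
}

int main(void)
{
    request_t a = { 1, 0x100, MODE_STEP1_AND_STEP2 };  /* location available */
    request_t b = { 2, 0x101, MODE_STEP1_AND_STEP2 };  /* location busy      */
    arbitrate(&a);
    arbitrate(&b);
    return 0;
}
```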
-
Publication number: 20140082289
Abstract: Embodiments relate to storing data to a system memory. An aspect includes accessing successive entries of a cache directory having a plurality of directory entries by a stepper engine, where access to the cache directory is given a lower priority than other cache operations. It is determined that a specific directory entry in the cache directory has a change line state that indicates it is modified. A store operation is performed to send a copy of the cache entry corresponding to that specific directory entry to the system memory as part of a cache management function. The specific directory entry is updated to indicate that the change line state is unmodified.
Type: Application
Filed: November 21, 2013
Publication date: March 20, 2014
Applicant: International Business Machines Corporation
Inventors: Michael A. Blake, Timothy C. Bronson, Hieu T. Huynh, Kenneth D. Klapproth, Pak-Kin Mak, Vesselina K. Papazova
-
Patent number: 8639887
Abstract: A mechanism for dynamically altering a request received at a hardware component is provided. The request is received at the hardware component, and the request includes a mode option. It is determined whether an action of the request requires an unavailable resource, and it is determined whether the mode option is for the action requiring the unavailable resource. In response to the mode option being for the action requiring the unavailable resource, the action is automatically removed from the request. The request is passed for pipeline arbitration without the action requiring the unavailable resource.
Type: Grant
Filed: June 23, 2010
Date of Patent: January 28, 2014
Assignee: International Business Machines Corporation
Inventors: Deanna Postles Dunn Berger, Michael Fee, Kenneth D. Klapproth, Robert J. Sonnelitter, III
-
Patent number: 8635409
Abstract: A method of providing requests to a cache pipeline includes receiving a plurality of requests from one or more state machines at an arbiter; selecting one of the plurality of requests as a selected request, the selected request having been provided by a first state machine; determining that the selected request includes a mode that requires a first step and a second step, the first step including an access to a location in a cache; determining that the location in the cache is unavailable; and replacing the mode with a modified mode that only includes the second step.
Type: Grant
Filed: June 23, 2010
Date of Patent: January 21, 2014
Assignee: International Business Machines Corporation
Inventors: Deanna Postles Dunn Berger, Michael F. Fee, Kenneth D. Klapproth, Robert J. Sonnelitter, III