Patents by Inventor Dam Sunwoo

Dam Sunwoo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11899583
    Abstract: Various implementations described herein are directed to a device with a multi-layered logic structure with multiple layers including a first layer and a second layer arranged vertically in a stacked configuration. The device may have a first cache memory with first interconnect logic disposed in the first layer. The device may have a second cache memory with second interconnect logic disposed in the second layer, wherein the second interconnect logic in the second layer is linked to the first interconnect logic in the first layer.
    Type: Grant
    Filed: July 29, 2021
    Date of Patent: February 13, 2024
    Assignee: Arm Limited
    Inventors: Joshua Randall, Alejandro Rico Carro, Dam Sunwoo, Saurabh Pijuskumar Sinha, Jamshed Jalal
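The entry above (patent 11899583) describes cache memories on two vertically stacked layers whose interconnect logic is linked across layers. Below is a minimal behavioral sketch of how a lookup might route between such layers; the probe-local-slice-first policy and the fixed inter-layer link latency are illustrative assumptions, not details from the patent.

```python
# Minimal behavioral sketch (assumptions flagged): two cache slices sit on
# vertically stacked layers, and the second layer's interconnect logic is
# linked to the first layer's, so a request entering one layer can reach data
# held in the other by crossing the inter-layer link. Modelling that link as a
# fixed extra latency and probing the local layer first are assumptions.

class CacheSlice:
    def __init__(self, name: str):
        self.name = name
        self.lines = {}          # addr -> data

class StackedCache:
    LINK_LATENCY = 2             # assumed extra cycles to cross between layers

    def __init__(self):
        self.layers = [CacheSlice("layer0"), CacheSlice("layer1")]

    def lookup(self, addr: int, entry_layer: int):
        """Return (data, latency_in_cycles); data is None on a miss."""
        local = self.layers[entry_layer]
        if addr in local.lines:
            return local.lines[addr], 1
        remote = self.layers[1 - entry_layer]      # cross the inter-layer link
        if addr in remote.lines:
            return remote.lines[addr], 1 + self.LINK_LATENCY
        return None, 1 + self.LINK_LATENCY

if __name__ == "__main__":
    c = StackedCache()
    c.layers[1].lines[0x80] = "data"
    print(c.lookup(0x80, entry_layer=0))    # found via the layer-0 -> layer-1 link
```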
  • Patent number: 11640381
    Abstract: Briefly, example methods, apparatuses, devices, and/or articles of manufacture are disclosed that may be implemented, in whole or in part, using one or more processing devices to facilitate and/or support one or more operations and/or techniques to access entries in a hash table. In a particular implementation, a hash operation may be selected from between or among multiple hash operations to map key values to entries in a hash table.
    Type: Grant
    Filed: July 23, 2019
    Date of Patent: May 2, 2023
    Assignee: Arm Limited
    Inventors: Gwangsun Kim, Dam Sunwoo
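Patent 11640381 above concerns selecting a hash operation from among multiple hash operations when mapping key values to hash-table entries. The sketch below shows one way such a selection could behave in software; the two specific hash functions and the pick-the-emptier-bucket policy are illustrative assumptions rather than details taken from the patent.

```python
# Minimal sketch (assumptions flagged): a table that chooses between two hash
# operations per key, inserting into whichever candidate entry is less loaded.
# The selection policy and the concrete hash functions are illustrative.

def hash_a(key: int, size: int) -> int:
    return key % size

def hash_b(key: int, size: int) -> int:
    # A second, independent mapping: multiply by a large odd constant and use
    # higher-order bits so it differs from the plain modulo hash.
    return ((key * 2654435761) >> 16) % size

class DualHashTable:
    def __init__(self, size: int = 16):
        self.size = size
        self.buckets = [[] for _ in range(size)]   # each entry holds (key, value) pairs

    def insert(self, key: int, value) -> None:
        a, b = hash_a(key, self.size), hash_b(key, self.size)
        # Select the hash operation whose target entry currently holds fewer items.
        idx = a if len(self.buckets[a]) <= len(self.buckets[b]) else b
        self.buckets[idx].append((key, value))

    def lookup(self, key: int):
        # A key may live under either mapping, so probe both candidate entries.
        for idx in (hash_a(key, self.size), hash_b(key, self.size)):
            for k, v in self.buckets[idx]:
                if k == key:
                    return v
        return None

if __name__ == "__main__":
    t = DualHashTable()
    for k in range(40):
        t.insert(k, k * k)
    assert t.lookup(7) == 49
```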
  • Patent number: 11599361
    Abstract: A data processing apparatus is provided. It includes control flow detection prediction circuitry that performs a presence prediction of whether a block of instructions contains a control flow instruction. A fetch queue stores, in association with prediction information, a queue of indications of the instructions, and the prediction information comprises the presence prediction. An instruction cache stores fetched instructions that have been fetched according to the fetch queue. Post-fetch correction circuitry receives the fetched instructions prior to the fetched instructions being received by decode circuitry. The post-fetch correction circuitry includes analysis circuitry that causes the fetch queue to be at least partly flushed in dependence on a type of a given fetched instruction and the prediction information associated with the given fetched instruction.
    Type: Grant
    Filed: May 10, 2021
    Date of Patent: March 7, 2023
    Assignee: Arm Limited
    Inventors: Jaekyu Lee, Yasuo Ishii, Krishnendra Nathella, Dam Sunwoo
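Patent 11599361 above describes a fetch queue whose entries carry a presence prediction, plus post-fetch correction circuitry that can partly flush that queue. The following sketch models that flow in software; the concrete flush condition (a branch fetched from a block predicted to contain none) and the queue representation are illustrative assumptions, not taken from the claims.

```python
# Minimal behavioral sketch (assumptions flagged): fetch-queue entries carry a
# "block contains a control-flow instruction" prediction; a post-fetch check
# compares that prediction against the actually fetched instruction and flushes
# the younger part of the queue on a mismatch.
from collections import deque

class FetchQueue:
    def __init__(self):
        self.entries = deque()   # each entry: (block_addr, predicted_has_branch)

    def push(self, block_addr: int, predicted_has_branch: bool) -> None:
        self.entries.append((block_addr, predicted_has_branch))

    def post_fetch_correct(self, block_addr: int, fetched_is_branch: bool) -> bool:
        """Return True if the queue had to be partly flushed."""
        for i, (addr, predicted) in enumerate(self.entries):
            if addr != block_addr:
                continue
            if fetched_is_branch and not predicted:
                # Prediction said "no control flow" but a branch was fetched:
                # everything fetched after this block may be on a wrong path.
                while len(self.entries) > i + 1:
                    self.entries.pop()
                return True
            return False
        return False

if __name__ == "__main__":
    fq = FetchQueue()
    fq.push(0x100, False)
    fq.push(0x110, False)
    fq.push(0x120, False)
    flushed = fq.post_fetch_correct(0x110, fetched_is_branch=True)
    print("flushed:", flushed, "remaining:", list(fq.entries))
```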
  • Publication number: 20230029860
    Abstract: Various implementations described herein are directed to a device with a multi-layered logic structure with multiple layers including a first layer and a second layer arranged vertically in a stacked configuration. The device may have a first cache memory with first interconnect logic disposed in the first layer. The device may have a second cache memory with second interconnect logic disposed in the second layer, wherein the second interconnect logic in the second layer is linked to the first interconnect logic in the first layer.
    Type: Application
    Filed: July 29, 2021
    Publication date: February 2, 2023
    Inventors: Joshua Randall, Alejandro Rico Carro, Dam Sunwoo, Saurabh Pijuskumar Sinha, Jamshed Jalal
  • Patent number: 11526356
    Abstract: An apparatus and method is provided, the apparatus comprising a processor pipeline to execute instructions, a cache structure to store information for reference by the processor pipeline when executing said instructions; and prefetch circuitry to issue prefetch requests to the cache structure to cause the cache structure to prefetch information into the cache structure in anticipation of a demand request for that information being issued to the cache structure by the processor pipeline. The processor pipeline is arranged to issue a trigger to the prefetch circuitry on detection of a given event that will result in a reduced level of demand requests being issued by the processor pipeline, and the prefetch circuitry is configured to control issuing of prefetch requests in dependence on reception of the trigger.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: December 13, 2022
    Assignee: Arm Limited
    Inventors: Lingzhe Cai, Krishnendra Nathella, Jaekyu Lee, Dam Sunwoo
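Patent 11526356 above describes a prefetcher that adjusts prefetch issue when the pipeline signals an event implying reduced demand. A minimal sketch follows; the next-line prefetch pattern and the suppress-entirely-while-triggered policy are assumptions made for illustration, since the abstract only states that prefetch issue is controlled in dependence on the trigger.

```python
# Minimal sketch (assumptions flagged): a prefetcher that normally issues
# next-line prefetches, but suppresses them after the pipeline signals an event
# (e.g., a stall or barrier) that will reduce demand traffic.

class Prefetcher:
    LINE = 64

    def __init__(self):
        self.throttled = False
        self.issued = []

    def on_pipeline_trigger(self, reduced_demand: bool) -> None:
        # The processor pipeline tells us a low-demand phase is starting/ending.
        self.throttled = reduced_demand

    def on_demand_access(self, addr: int) -> None:
        if self.throttled:
            return                               # hold back prefetches while demand is low
        self.issued.append(addr + self.LINE)     # simple next-line prefetch

if __name__ == "__main__":
    pf = Prefetcher()
    pf.on_demand_access(0x1000)          # prefetch issued for 0x1040
    pf.on_pipeline_trigger(reduced_demand=True)
    pf.on_demand_access(0x2000)          # suppressed
    pf.on_pipeline_trigger(reduced_demand=False)
    pf.on_demand_access(0x3000)          # prefetch issued for 0x3040
    print([hex(a) for a in pf.issued])
```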
  • Publication number: 20220357953
    Abstract: A data processing apparatus is provided. It includes control flow detection prediction circuitry that performs a presence prediction of whether a block of instructions contains a control flow instruction. A fetch queue stores, in association with prediction information, a queue of indications of the instructions, and the prediction information comprises the presence prediction. An instruction cache stores fetched instructions that have been fetched according to the fetch queue. Post-fetch correction circuitry receives the fetched instructions prior to the fetched instructions being received by decode circuitry. The post-fetch correction circuitry includes analysis circuitry that causes the fetch queue to be at least partly flushed in dependence on a type of a given fetched instruction and the prediction information associated with the given fetched instruction.
    Type: Application
    Filed: May 10, 2021
    Publication date: November 10, 2022
    Inventors: Jaekyu LEE, Yasuo ISHII, Krishnendra NATHELLA, Dam SUNWOO
  • Patent number: 11294828
    Abstract: An apparatus and method are provided for controlling allocation of information into a cache storage. The apparatus has processing circuitry for executing instructions, and for allowing speculative execution of one or more of those instructions. A cache storage is also provided having a plurality of entries to store information for reference by the processing circuitry, and cache control circuitry is used to control the cache storage, the cache control circuitry comprising a speculative allocation tracker having a plurality of tracking entries. The cache control circuitry is responsive to a speculative request associated with the speculative execution, requiring identified information to be allocated into a given entry of the cache storage, to allocate a tracking entry in the speculative allocation tracker for the speculative request before allowing the identified information to be allocated into the given entry of the cache storage.
    Type: Grant
    Filed: May 15, 2019
    Date of Patent: April 5, 2022
    Assignee: Arm Limited
    Inventors: Jaekyu Lee, Dam Sunwoo
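Patent 11294828 above describes allocating a tracking entry in a speculative allocation tracker before speculatively requested information is allowed into the cache. The sketch below models one plausible use of such a tracker; the resolve/discard flow and the fixed number of tracking entries are illustrative assumptions.

```python
# Minimal sketch (assumptions flagged): speculative fills first claim an entry
# in a speculative-allocation tracker; only when the speculation resolves as
# correct is the data allowed into the cache proper, and mis-speculated fills
# are discarded.

class SpeculativeCache:
    def __init__(self, tracker_entries: int = 4):
        self.cache = {}          # committed cache contents: addr -> data
        self.tracker = {}        # speculative fills awaiting resolution
        self.tracker_entries = tracker_entries

    def speculative_fill(self, addr: int, data) -> bool:
        if len(self.tracker) >= self.tracker_entries:
            return False         # no free tracking entry: refuse the allocation
        self.tracker[addr] = data
        return True

    def resolve(self, addr: int, correct: bool) -> None:
        data = self.tracker.pop(addr, None)
        if correct and data is not None:
            self.cache[addr] = data    # speculation confirmed: allocate for real

if __name__ == "__main__":
    c = SpeculativeCache()
    c.speculative_fill(0x40, "A")
    c.speculative_fill(0x80, "B")
    c.resolve(0x40, correct=True)      # becomes architectural cache state
    c.resolve(0x80, correct=False)     # dropped, never pollutes the cache
    print(c.cache)
```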
  • Publication number: 20220035679
    Abstract: A method for controlling hardware resource configuration for a processing system comprises obtaining performance monitoring data indicative of processing performance associated with workloads to be executed on the processing system, providing a trained machine learning model with input data depending on the performance monitoring data; and based on an inference made from the input data by the trained machine learning model, setting control information for configuring the processing system to control an amount of hardware resource allocated for use by at least one processor core. A corresponding method of training the model is provided. This is particularly useful for controlling inter-core borrowing of resource in a multi-core processing system, where resource is borrowed between respective cores, e.g. cores on different layers of a 3D integrated circuit.
    Type: Application
    Filed: July 30, 2020
    Publication date: February 3, 2022
    Inventors: Dam SUNWOO, Supreet JELOKA, Saurabh Pijuskumar SINHA, Jaekyu LEE, Jose Alberto JOAO, Krishnendra NATHELLA
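Publication 20220035679 above describes feeding performance-monitoring data to a trained machine learning model whose inference sets control information for hardware resource allocation. The sketch below is a toy stand-in under stated assumptions: the tiny linear model, the counters chosen as features, and the "borrowed L2 ways" control knob are all illustrative, not taken from the application.

```python
# Minimal sketch (assumptions flagged): performance-monitoring counters feed a
# trained model whose inference sets how much of a neighbouring core's resource
# (here, extra cache ways) a core may borrow. The linear model below is a
# stand-in for whatever trained model a real system would deploy.

def extract_features(pmu: dict) -> list:
    # Normalise a few counters into model features.
    return [
        pmu["l2_misses"] / max(pmu["instructions"], 1),   # miss-per-instruction ratio
        pmu["stall_cycles"] / max(pmu["cycles"], 1),      # fraction of stalled cycles
    ]

def infer_borrowed_ways(features: list, weights=(32.0, 4.0), bias=-1.0) -> int:
    # Stand-in "trained model": a linear score mapped to 0..4 borrowed cache ways.
    score = bias + sum(w * f for w, f in zip(weights, features))
    return max(0, min(4, round(score)))

def configure_core(pmu: dict) -> dict:
    ways = infer_borrowed_ways(extract_features(pmu))
    return {"borrowed_l2_ways": ways}    # control information written to the core

if __name__ == "__main__":
    pmu_sample = {"instructions": 1_000_000, "cycles": 1_400_000,
                  "l2_misses": 30_000, "stall_cycles": 500_000}
    print(configure_core(pmu_sample))
```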
  • Publication number: 20210373889
    Abstract: An apparatus and method is provided, the apparatus comprising a processor pipeline to execute instructions, a cache structure to store information for reference by the processor pipeline when executing said instructions; and prefetch circuitry to issue prefetch requests to the cache structure to cause the cache structure to prefetch information into the cache structure in anticipation of a demand request for that information being issued to the cache structure by the processor pipeline. The processor pipeline is arranged to issue a trigger to the prefetch circuitry on detection of a given event that will result in a reduced level of demand requests being issued by the processor pipeline, and the prefetch circuitry is configured to control issuing of prefetch requests in dependence on reception of the trigger.
    Type: Application
    Filed: May 29, 2020
    Publication date: December 2, 2021
    Inventors: Lingzhe CAI, Krishnendra NATHELLA, Jaekyu LEE, Dam SUNWOO
  • Publication number: 20210067335
    Abstract: A data processing apparatus is provided that includes storage circuitry. Communication circuitry responds to an access request comprising a requested index with an access response comprising requested data. Coding circuitry performs a coding operation using a current key to: translate the requested index to an encoded index of the storage circuitry at which the requested data is stored or to translate encoded data stored at the requested index of the storage circuitry to the requested data. The current key is based on an execution environment. Update circuitry performs an update, in response to the current key being changed, of: the encoded index of the storage circuitry at which the requested data is stored or the encoded data.
    Type: Application
    Filed: August 26, 2019
    Publication date: March 4, 2021
    Inventors: Jaekyu LEE, Yasuo ISHII, Dam SUNWOO
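Publication 20210067335 above describes coding a requested index (or stored data) with a key tied to the current execution environment, and re-encoding the storage when the key changes. A minimal sketch of the index-coding side follows; XOR-based coding and the in-place re-homing on a key change are illustrative assumptions.

```python
# Minimal sketch (assumptions flagged): a table whose index is encoded with a
# key derived from the current execution environment, so two environments map
# the same requested index to different physical entries; when the key changes,
# stored entries are re-encoded so they remain reachable under the new key.
# Keys are assumed to fit in the index width so the XOR coding is invertible.

class KeyedTable:
    def __init__(self, size: int = 16, key: int = 0):
        self.size = size
        self.key = key
        self.entries = [None] * size

    def _encode(self, requested_index: int, key: int) -> int:
        return (requested_index ^ key) % self.size

    def write(self, requested_index: int, data) -> None:
        self.entries[self._encode(requested_index, self.key)] = data

    def read(self, requested_index: int):
        return self.entries[self._encode(requested_index, self.key)]

    def update_key(self, new_key: int) -> None:
        # Re-home every stored entry so the same requested index still finds it.
        remapped = [None] * self.size
        for encoded, data in enumerate(self.entries):
            if data is None:
                continue
            requested = (encoded ^ self.key) % self.size   # invert the old coding
            remapped[self._encode(requested, new_key)] = data
        self.entries, self.key = remapped, new_key

if __name__ == "__main__":
    t = KeyedTable(key=0b1010)
    t.write(3, "payload")
    t.update_key(0b0110)
    assert t.read(3) == "payload"
```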
  • Publication number: 20210026826
    Abstract: Briefly, example methods, apparatuses, devices, and/or articles of manufacture are disclosed that may be implemented, in whole or in part, using one or more processing devices to facilitate and/or support one or more operations and/or techniques to access entries in a hash table. In a particular implementation, a hash operation may be selected from between or among multiple hash operations to map key values to entries in a hash table.
    Type: Application
    Filed: July 23, 2019
    Publication date: January 28, 2021
    Inventors: Gwangsun Kim, Dam Sunwoo
  • Publication number: 20200364154
    Abstract: An apparatus and method are provided for controlling allocation of information into a cache storage. The apparatus has processing circuitry for executing instructions, and for allowing speculative execution of one or more of those instructions. A cache storage is also provided having a plurality of entries to store information for reference by the processing circuitry, and cache control circuitry is used to control the cache storage, the cache control circuitry comprising a speculative allocation tracker having a plurality of tracking entries. The cache control circuitry is responsive to a speculative request associated with the speculative execution, requiring identified information to be allocated into a given entry of the cache storage, to allocate a tracking entry in the speculative allocation tracker for the speculative request before allowing the identified information to be allocated into the given entry of the cache storage.
    Type: Application
    Filed: May 15, 2019
    Publication date: November 19, 2020
    Inventors: Jaekyu LEE, Dam SUNWOO
  • Patent number: 10817426
    Abstract: A variety of data processing apparatuses are provided in which stride determination circuitry determines a stride value as a difference between a current address and a previously received address. Stride storage circuitry stores an association between stride values determined by the stride determination circuitry and a frequency during a training period. Prefetch circuitry causes a further data value to be proactively retrieved from a further address. The further address is the current address modified by a stride value in the stride storage circuitry having a highest frequency during the training period. The variety of data processing apparatuses are directed towards improving efficiency by variously disregarding certain candidate stride values, considering additional further addresses for prefetching by using multiple stride values, using feedback to adjust the training process and compensating for page table boundaries.
    Type: Grant
    Filed: September 24, 2018
    Date of Patent: October 27, 2020
    Assignee: Arm Limited
    Inventors: Krishnendra Nathella, Chris Abernathy, Huzefa Moiz Sanjeliwala, Dam Sunwoo, Balaji Vijayan
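Patent 10817426 above describes training a stride prefetcher by associating observed stride values with their frequency during a training period, then prefetching using the most frequent stride. The sketch below mirrors that idea; the training length and the single-stride prefetch degree are illustrative assumptions.

```python
# Minimal sketch (assumptions flagged): during a training period, the stride
# between each address and the previous one is tallied; afterwards, each access
# triggers a prefetch of (current address + the stride seen most frequently
# during training).
from collections import Counter

class FrequencyStridePrefetcher:
    def __init__(self, training_accesses: int = 8):
        self.training_accesses = training_accesses
        self.seen = 0
        self.prev_addr = None
        self.stride_counts = Counter()
        self.best_stride = None

    def access(self, addr: int):
        """Record a demand address; return a prefetch address or None."""
        if self.prev_addr is not None and self.seen <= self.training_accesses:
            self.stride_counts[addr - self.prev_addr] += 1
        self.prev_addr = addr
        self.seen += 1
        if self.seen == self.training_accesses:
            # Training done: lock in the most frequent stride.
            self.best_stride = self.stride_counts.most_common(1)[0][0]
        if self.best_stride is not None:
            return addr + self.best_stride
        return None

if __name__ == "__main__":
    pf = FrequencyStridePrefetcher()
    stream = [0, 64, 128, 200, 264, 328, 392, 456, 520]
    for a in stream:
        pf.access(a)
    print("trained stride:", pf.best_stride)   # 64 dominates the training window
```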
  • Patent number: 10769070
    Abstract: Apparatuses and methods for prefetch generation are disclosed. Prefetching circuitry receives addresses specified by load instructions and can cause retrieval of a data value from an address before that address is received. Stride determination circuitry determines stride values as a difference between a current address and a previously received address. Plural stride values corresponding to a sequence of received addresses are determined. Multiple stride storage circuitry stores the plurality of stride values determined by the stride determination circuitry. New address comparison circuitry determines whether a current address corresponds to a matching stride value based on the plurality of stride values stored in the multiple stride storage circuitry. Prefetch initiation circuitry can cause a data value to be retrieved from a further address, wherein the further address is the current address modified by the matching stride value of the plurality of stride values.
    Type: Grant
    Filed: September 25, 2018
    Date of Patent: September 8, 2020
    Assignee: Arm Limited
    Inventors: Joseph Michael Pusdesris, Miles Robert Dooley, Alexander Cole Shulyak, Krishnendra Nathella, Dam Sunwoo
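Patent 10769070 above describes storing a plurality of recent stride values and prefetching when a new address's stride matches one of them. The sketch below shows that matching loop; the storage depth of four strides and the exact-equality matching are illustrative assumptions.

```python
# Minimal sketch (assumptions flagged): the last few observed strides are kept;
# when the stride of a newly received address matches one of them, a prefetch
# is issued at (current address + matching stride).
from collections import deque

class MultiStridePrefetcher:
    def __init__(self, depth: int = 4):
        self.prev_addr = None
        self.strides = deque(maxlen=depth)   # multiple stride storage

    def access(self, addr: int):
        prefetch = None
        if self.prev_addr is not None:
            stride = addr - self.prev_addr
            if stride in self.strides:
                prefetch = addr + stride      # matching stride -> prefetch further address
            self.strides.append(stride)
        self.prev_addr = addr
        return prefetch

if __name__ == "__main__":
    pf = MultiStridePrefetcher()
    # A stream whose stride alternates between 64 and 128; both strides end up
    # in the stride storage and subsequent accesses match them.
    for a in [0, 64, 192, 256, 384, 448, 576]:
        hit = pf.access(a)
        if hit is not None:
            print(f"prefetch {hit:#x}")
```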
  • Patent number: 10725992
    Abstract: An apparatus has processing circuitry for processing instructions from multiple threads. A storage structure is shared between the threads and has a number of entries. Indexing circuitry generates a target index value identifying an entry of the storage structure to be accessed in response to a request from the processing circuitry specifying a requested index value corresponding to information to be accessed from the storage structure. The indexing circuitry generates the target index value as a function of the requested index value and a key value selected depending on which of the threads triggers the request. The key value for at least one of the threads is updated from time to time.
    Type: Grant
    Filed: December 3, 2018
    Date of Patent: July 28, 2020
    Assignee: ARM Limited
    Inventors: Mitchell Bryan Hayenga, Curtis Glenn Dunham, Dam Sunwoo
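Patent 10725992 above (and the related entries further down this listing) describes indexing a storage structure shared between hardware threads through a per-thread key that is updated from time to time. The sketch below illustrates that keyed indexing; XOR as the combining function and the explicit demo keys are assumptions made for illustration.

```python
# Minimal sketch (assumptions flagged): a storage structure shared by several
# hardware threads is indexed through a per-thread key, so the same requested
# index from different threads lands in different entries; the key can be
# refreshed from time to time.

class SharedIndexedStorage:
    def __init__(self, num_entries: int = 64, keys=(0b101010, 0b010101)):
        self.num_entries = num_entries
        self.entries = [None] * num_entries
        self.keys = list(keys)           # one key per hardware thread

    def _target_index(self, thread: int, requested_index: int) -> int:
        # Target index is a function of the requested index and the thread's key.
        return (requested_index ^ self.keys[thread]) % self.num_entries

    def write(self, thread: int, requested_index: int, value) -> None:
        self.entries[self._target_index(thread, requested_index)] = value

    def read(self, thread: int, requested_index: int):
        return self.entries[self._target_index(thread, requested_index)]

    def rekey(self, thread: int, new_key: int) -> None:
        # "From time to time" a thread's key is updated; entries written under
        # the old key simply stop being reachable and age out.
        self.keys[thread] = new_key

if __name__ == "__main__":
    s = SharedIndexedStorage()
    s.write(thread=0, requested_index=5, value="t0")
    s.write(thread=1, requested_index=5, value="t1")
    # Same requested index, different physical entries per thread.
    print(s.read(0, 5), s.read(1, 5))
```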
  • Publication number: 20200097409
    Abstract: A variety of data processing apparatuses are provided in which stride determination circuitry determines a stride value as a difference between a current address and a previously received address. Stride storage circuitry stores an association between stride values determined by the stride determination circuitry and a frequency during a training period. Prefetch circuitry causes a further data value to be proactively retrieved from a further address. The further address is the current address modified by a stride value in the stride storage circuitry having a highest frequency during the training period. The variety of data processing apparatuses are directed towards improving efficiency by variously disregarding certain candidate stride values, considering additional further addresses for prefetching by using multiple stride values, using feedback to adjust the training process and compensating for page table boundaries.
    Type: Application
    Filed: September 24, 2018
    Publication date: March 26, 2020
    Inventors: Krishnendra Nathella, Chris Abernathy, Huzefa Moiz Sanjeliwala, Dam Sunwoo, Balaji Vijayan
  • Publication number: 20200097411
    Abstract: Apparatuses and methods for prefetch generation are disclosed. Prefetching circuitry receives addresses specified by load instructions and can cause retrieval of a data value from an address before that address is received. Stride determination circuitry determines stride values as a difference between a current address and a previously received address. Plural stride values corresponding to a sequence of received addresses are determined. Multiple stride storage circuitry stores the plurality of stride values determined by the stride determination circuitry. New address comparison circuitry determines whether a current address corresponds to a matching stride value based on the plurality of stride values stored in the multiple stride storage circuitry. Prefetch initiation circuitry can cause a data value to be retrieved from a further address, wherein the further address is the current address modified by the matching stride value of the plurality of stride values.
    Type: Application
    Filed: September 25, 2018
    Publication date: March 26, 2020
    Inventors: Joseph Michael PUSDESRIS, Miles Robert DOOLEY, Alexander Cole SHULYAK, Krishnendra NATHELLA, Dam SUNWOO
  • Publication number: 20190102388
    Abstract: An apparatus has processing circuitry for processing instructions from multiple threads. A storage structure is shared between the threads and has a number of entries. Indexing circuitry generates a target index value identifying an entry of the storage structure to be accessed in response to a request from the processing circuitry specifying a requested index value corresponding to information to be accessed from the storage structure. The indexing circuitry generates the target index value as a function of the requested index value and a key value selected depending on which of the threads triggers the request. The key value for at least one of the threads is updated from time to time.
    Type: Application
    Filed: December 3, 2018
    Publication date: April 4, 2019
    Inventors: Mitchell Bryan HAYENGA, Curtis Glenn DUNHAM, Dam SUNWOO
  • Patent number: 10185731
    Abstract: An apparatus has processing circuitry for processing instructions from multiple threads. A storage structure is shared between the threads and has a number of entries. Indexing circuitry generates a target index value identifying an entry of the storage structure to be accessed in response to a request from the processing circuitry specifying a requested index value corresponding to information to be accessed from the storage structure. The indexing circuitry generates the target index value as a function of the requested index value and a key value selected depending on which of the threads triggers the request. The key value for at least one of the threads is updated from time to time.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: January 22, 2019
    Assignee: ARM Limited
    Inventors: Mitchell Bryan Hayenga, Curtis Glenn Dunham, Dam Sunwoo
  • Publication number: 20170286421
    Abstract: An apparatus has processing circuitry for processing instructions from multiple threads. A storage structure is shared between the threads and has a number of entries. Indexing circuitry generates a target index value identifying an entry of the storage structure to be accessed in response to a request from the processing circuitry specifying a requested index value corresponding to information to be accessed from the storage structure. The indexing circuitry generates the target index value as a function of the requested index value and a key value selected depending on which of the threads triggers the request. The key value for at least one of the threads is updated from time to time.
    Type: Application
    Filed: March 31, 2016
    Publication date: October 5, 2017
    Inventors: Mitchell Bryan HAYENGA, Curtis Glenn DUNHAM, Dam SUNWOO