Patents by Inventor Hien Minh Le

Hien Minh Le has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11817320
    Abstract: Implementations described herein generally relate to a method for forming a metal layer and to a method for forming an oxide layer on the metal layer. In one implementation, the metal layer is formed on a seed layer, and the seed layer helps the metal in the metal layer nucleate with small grain size without affecting the conductivity of the metal layer. The metal layer may be formed using plasma enhanced chemical vapor deposition (PECVD), and nitrogen gas may be flowed into the processing chamber along with the precursor gases. In another implementation, a barrier layer is formed on the metal layer in order to prevent the metal layer from being oxidized during a subsequent oxide layer deposition process. In another implementation, the metal layer is treated prior to the deposition of the oxide layer in order to prevent the metal layer from being oxidized.
    Type: Grant
    Filed: August 29, 2019
    Date of Patent: November 14, 2023
    Assignee: Applied Materials, Inc.
    Inventors: Susmit Singha Roy, Kelvin Chan, Hien Minh Le, Sanjay Kamath, Abhijit Basu Mallick, Srinivas Gandikota, Karthik Janakiraman
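The entry above describes a deposition flow rather than software, but its three implementation variants can be laid out as an ordered sequence of steps. The following is only an illustrative sketch: the variant names and step strings are my own summary of the abstract, not terms or parameters taken from the patent.

```python
# Hypothetical, illustrative step sequences for the three implementations
# described in the abstract above; names and structure are assumptions,
# not taken from the patent claims.
PROCESS_VARIANTS = {
    "seed_layer": [
        "deposit seed layer",                            # promotes small-grain nucleation
        "PECVD metal layer (precursor gases + N2 co-flow)",
        "deposit oxide layer",
    ],
    "barrier_layer": [
        "PECVD metal layer (precursor gases + N2 co-flow)",
        "deposit barrier layer",                         # shields metal during oxide deposition
        "deposit oxide layer",
    ],
    "surface_treatment": [
        "PECVD metal layer (precursor gases + N2 co-flow)",
        "treat metal surface",                           # prevents oxidation before the oxide step
        "deposit oxide layer",
    ],
}

if __name__ == "__main__":
    for name, steps in PROCESS_VARIANTS.items():
        print(name, "->", " | ".join(steps))
```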
  • Publication number: 20190393042
    Abstract: Implementations described herein generally relate to a method for forming a metal layer and to a method for forming an oxide layer on the metal layer. In one implementation, the metal layer is formed on a seed layer, and the seed layer helps the metal in the metal layer nucleate with small grain size without affecting the conductivity of the metal layer. The metal layer may be formed using plasma enhanced chemical vapor deposition (PECVD), and nitrogen gas may be flowed into the processing chamber along with the precursor gases. In another implementation, a barrier layer is formed on the metal layer in order to prevent the metal layer from being oxidized during a subsequent oxide layer deposition process. In another implementation, the metal layer is treated prior to the deposition of the oxide layer in order to prevent the metal layer from being oxidized.
    Type: Application
    Filed: August 29, 2019
    Publication date: December 26, 2019
    Inventors: Susmit Singha Roy, Kelvin Chan, Hien Minh Le, Sanjay Kamath, Abhijit Basu Mallick, Srinivas Gandikota, Karthik Janakiraman
  • Patent number: 10410869
    Abstract: Implementations described herein generally relate to a method for forming a metal layer and to a method for forming an oxide layer on the metal layer. In one implementation, the metal layer is formed on a seed layer, and the seed layer helps the metal in the metal layer nucleate with small grain size without affecting the conductivity of the metal layer. The metal layer may be formed using plasma enhanced chemical vapor deposition (PECVD), and nitrogen gas may be flowed into the processing chamber along with the precursor gases. In another implementation, a barrier layer is formed on the metal layer in order to prevent the metal layer from being oxidized during a subsequent oxide layer deposition process. In another implementation, the metal layer is treated prior to the deposition of the oxide layer in order to prevent the metal layer from being oxidized.
    Type: Grant
    Filed: June 26, 2017
    Date of Patent: September 10, 2019
    Assignee: Applied Materials, Inc.
    Inventors: Susmit Singha Roy, Kelvin Chan, Hien Minh Le, Sanjay Kamath, Abhijit Basu Mallick, Srinivas Gandikota, Karthik Janakiraman
  • Patent number: 9990291
    Abstract: Aspects disclosed herein include avoiding deadlocks in processor-based systems employing retry and in-order-response non-retry bus coherency protocols. In this regard, an interface bridge circuit is communicatively coupled to a first core device that implements a retry bus coherency protocol, and a second core device that implements an in-order-response non-retry bus coherency protocol. The interface bridge circuit receives a snoop command from the first core device, and forwards the snoop command to the second core device. While the snoop command is pending, the interface bridge circuit detects a potential deadlock condition between the first core device and the second core device. In response to detecting the potential deadlock condition, the interface bridge circuit is configured to send a retry response to the first core device. This enables the first core device to continue processing, thereby eliminating the potential deadlock condition.
    Type: Grant
    Filed: September 24, 2015
    Date of Patent: June 5, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Hien Minh Le, Thuong Quang Truong, Kun Xu, Jaya Prakash Subramaniam Ganasan, Cesar Aaron Ramirez
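The entry above (patent 9990291) describes a bridge that forwards snoops from a retry-protocol core to an in-order, non-retry core and answers with a retry when it detects a potential deadlock. The sketch below is a minimal, hypothetical software model of that behavior; the class name, the deadlock heuristic, and the response strings are assumptions made for illustration and are not taken from the patent.

```python
# Minimal software sketch of the bridge behavior described above.
# All names (InterfaceBridge, _potential_deadlock, etc.) are illustrative
# assumptions, not taken from the patent.

class InterfaceBridge:
    def __init__(self):
        self.pending_snoops = []             # snoops forwarded to the non-retry core
        self.outstanding_from_core2 = set()  # requests the non-retry core must
                                             # complete strictly in order

    def handle_snoop(self, snoop_addr):
        """Forward a snoop from the retry-protocol core (core 1) to the in-order,
        non-retry core (core 2); answer RETRY if letting the snoop wait could
        deadlock both cores."""
        self.pending_snoops.append(snoop_addr)
        if self._potential_deadlock(snoop_addr):
            # Core 1 tolerates retries, so back off and let it make progress.
            self.pending_snoops.remove(snoop_addr)
            return "RETRY"
        return "FORWARDED"

    def _potential_deadlock(self, snoop_addr):
        # Simplified heuristic: core 2 already has an outstanding in-order
        # request for the same address, so neither side can complete first.
        return snoop_addr in self.outstanding_from_core2


bridge = InterfaceBridge()
bridge.outstanding_from_core2.add(0x1000)
print(bridge.handle_snoop(0x2000))  # FORWARDED: no conflict
print(bridge.handle_snoop(0x1000))  # RETRY: potential deadlock detected
```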
  • Patent number: 9921962
    Abstract: Maintaining cache coherency using conditional intervention among multiple master devices is disclosed. In one aspect, a conditional intervention circuit is configured to receive intervention responses from multiple snooping master devices. To select a snooping master device to provide intervention data, the conditional intervention circuit determines how many snooping master devices have a cache line granule size the same as or larger than that of a requesting master device. If one snooping master device has the same or a larger cache line granule size, that snooping master device is selected. If more than one snooping master device has the same or a larger cache line granule size, a snooping master device is selected based on an alternate criterion. The intervention responses provided by the unselected snooping master devices are canceled by the conditional intervention circuit, and intervention data from the selected snooping master device is provided to the requesting master device.
    Type: Grant
    Filed: September 24, 2015
    Date of Patent: March 20, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Kun Xu, Thuong Quang Truong, Jaya Prakash Subramaniam Ganasan, Hien Minh Le, Cesar Aaron Ramirez
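The selection rule in the entry above (patent 9921962) is straightforward to express in code. In the hedged sketch below, granule sizes are plain integers and the alternate tie-breaking criterion is modeled as lowest master id; both choices, and the function name, are assumptions for illustration, not details from the patent.

```python
# Illustrative sketch of the conditional-intervention selection rule
# described above; the data model is assumed, not taken from the patent.

def select_intervener(requester_granule, snoop_responses):
    """snoop_responses: dict of master_id -> cache line granule size.
    Returns (selected_master, cancelled_masters)."""
    eligible = [m for m, g in snoop_responses.items() if g >= requester_granule]
    if not eligible:
        return None, list(snoop_responses)   # no suitable intervener
    if len(eligible) == 1:
        selected = eligible[0]
    else:
        # More than one candidate: fall back to an alternate criterion.
        # Lowest master id is used here purely as a stand-in tie-breaker.
        selected = min(eligible)
    cancelled = [m for m in snoop_responses if m != selected]
    return selected, cancelled


sel, cancel = select_intervener(64, {1: 32, 2: 64, 3: 128})
print(sel, cancel)   # -> 2 [1, 3] with this stand-in tie-breaker
```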
  • Publication number: 20170372953
    Abstract: Implementations described herein generally relate to a method for forming a metal layer and to a method for forming an oxide layer on the metal layer. In one implementation, the metal layer is formed on a seed layer, and the seed layer helps the metal in the metal layer nucleate with small grain size without affecting the conductivity of the metal layer. The metal layer may be formed using plasma enhanced chemical vapor deposition (PECVD), and nitrogen gas may be flowed into the processing chamber along with the precursor gases. In another implementation, a barrier layer is formed on the metal layer in order to prevent the metal layer from being oxidized during a subsequent oxide layer deposition process. In another implementation, the metal layer is treated prior to the deposition of the oxide layer in order to prevent the metal layer from being oxidized.
    Type: Application
    Filed: June 26, 2017
    Publication date: December 28, 2017
    Inventors: Susmit Singha Roy, Kelvin Chan, Hien Minh Le, Sanjay Kamath, Abhijit Basu Mallick, Srinivas Gandikota, Karthik Janakiraman
  • Publication number: 20170371783
    Abstract: Self-aware, peer-to-peer cache transfers between local, shared cache memories in a multi-processor system are disclosed. A shared cache memory system is provided comprising local shared cache memories accessible by an associated central processing unit (CPU) and by other CPUs in a peer-to-peer manner. When a CPU desires to request a cache transfer (e.g., in response to a cache eviction), the CPU acting as a master CPU issues a cache transfer request. In response, target CPUs issue snoop responses indicating their willingness to accept the cache transfer. The target CPUs also use the snoop responses to be self-aware of the willingness of the other target CPUs to accept the cache transfer. The target CPUs willing to accept the cache transfer use a predefined target CPU selection scheme to determine which of them accepts the cache transfer. This can avoid a CPU making multiple requests to find a target CPU for a cache transfer.
    Type: Application
    Filed: June 24, 2016
    Publication date: December 28, 2017
    Inventors: Hien Minh Le, Thuong Quang Truong, Eric Francis Robinson, Brad Herold, Robert Bell, JR.
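The acceptance scheme in the entry above (publication 20170371783) relies on every target CPU seeing the same set of snoop responses, so each target can compute the same accepting CPU locally. The sketch below models the "predefined target CPU selection scheme" as lowest willing CPU id; that choice and the function name are assumptions for illustration only.

```python
# Minimal sketch of the self-aware acceptance scheme described above.
# The selection scheme (lowest willing CPU id) is an assumed stand-in.

def accepting_cpu(master_id, willingness):
    """willingness: dict of target CPU id -> True/False snoop response.
    Every target observes all snoop responses, so each can decide locally,
    and consistently, which single CPU accepts the transfer."""
    willing = sorted(cpu for cpu, ok in willingness.items() if ok and cpu != master_id)
    return willing[0] if willing else None


responses = {0: False, 1: True, 2: True, 3: False}   # CPU 4 is the master
print(accepting_cpu(4, responses))  # every CPU computes the same answer: 1
```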
  • Patent number: 9792208
    Abstract: A technique for operating a data processing system includes determining whether a cache line that is to be victimized from a cache includes high availability (HA) data that has not been logged. In response to determining that the cache line that is to be victimized from the cache includes HA data that has not been logged, an address for the HA data is written to an HA dirty address data structure, e.g., a dirty address table (DAT), in a first memory via a first non-blocking channel. The cache line that is victimized from the cache is written to a second memory via a second non-blocking channel.
    Type: Grant
    Filed: January 31, 2014
    Date of Patent: October 17, 2017
    Assignee: International Business Machines Corporation
    Inventors: Sanjeev Ghai, Guy Lynn Guthrie, Hien Minh Le, Hugh Shen, Philip G. Williams
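The victimization path in the entry above (patent 9792208) can be summarized as: log the address of un-logged HA data over one non-blocking channel, then write the victimized line to memory over another. The sketch below models the two channels as simple queues; the field names and channel objects are assumptions, not structures defined by the patent.

```python
# Illustrative sketch of the victimization path described above.
from dataclasses import dataclass
from queue import Queue

@dataclass
class CacheLine:
    address: int
    data: bytes
    ha: bool = False          # line holds high-availability data
    ha_logged: bool = False   # its address has already been logged

dat_channel = Queue()     # first non-blocking channel -> dirty address table
memory_channel = Queue()  # second non-blocking channel -> backing memory

def victimize(line: CacheLine):
    if line.ha and not line.ha_logged:
        dat_channel.put(line.address)              # log address in the HA DAT
        line.ha_logged = True
    memory_channel.put((line.address, line.data))  # write the victimized line back

victimize(CacheLine(0x80, b"\x00" * 64, ha=True))
print(list(dat_channel.queue), [addr for addr, _ in memory_channel.queue])
```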
  • Publication number: 20170212840
    Abstract: Scalable dynamic random access memory (DRAM) cache management using tag directory caches is provided. In one aspect, a DRAM cache management circuit is provided to manage access to a DRAM cache in a high-bandwidth memory. The DRAM cache management circuit comprises a tag directory cache and a tag directory cache directory. The tag directory cache stores tags of frequently accessed cache lines in the DRAM cache, while the tag directory cache directory stores tags for the tag directory cache. The DRAM cache management circuit uses the tag directory cache and the tag directory cache directory to determine whether data associated with a memory address is cached in the DRAM cache of the high-bandwidth memory. Based on the tag directory cache and the tag directory cache directory, the DRAM cache management circuit may determine whether a memory operation can be performed using the DRAM cache and/or a system memory DRAM.
    Type: Application
    Filed: June 24, 2016
    Publication date: July 27, 2017
    Inventors: Hien Minh Le, Thuong Quang Truong, Natarajan Vaidhyanathan, Mattheus Cornelis Antonius Adrianus Heddes, Colin Beaton Verrilli
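The entry above (publication 20170212840) describes a two-level tag structure: a tag directory cache plus a directory of that cache's own tags. The sketch below is a rough, assumed model of the lookup decision; the dictionary layout, tag granularity, and return values are illustrative only and do not reproduce the patented design.

```python
# Rough, assumed model of the lookup decision described above.

class DramCacheManager:
    def __init__(self):
        self.tag_dir_cache = {}     # tag -> DRAM-cache location, for hot lines
        self.tdc_directory = set()  # tags currently held by the tag directory cache

    def lookup(self, address):
        tag = address >> 12         # assumed 4 KiB line granularity, illustration only
        if tag in self.tdc_directory:
            # The tag directory cache is known to hold this tag, so the hit/miss
            # decision can be made without touching the DRAM-cache tag store.
            return ("DRAM_CACHE", self.tag_dir_cache[tag])
        return ("SYSTEM_DRAM", None)   # fall back to system memory DRAM


mgr = DramCacheManager()
mgr.tdc_directory.add(0x5)
mgr.tag_dir_cache[0x5] = "way 3"
print(mgr.lookup(0x5000))   # ('DRAM_CACHE', 'way 3')
print(mgr.lookup(0x9000))   # ('SYSTEM_DRAM', None)
```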
  • Publication number: 20170091095
    Abstract: Maintaining cache coherency using conditional intervention among multiple master devices is disclosed. In one aspect, a conditional intervention circuit is configured to receive intervention responses from multiple snooping master devices. To select a snooping master device to provide intervention data, the conditional intervention circuit determines how many snooping master devices have a cache line granule size the same as or larger than that of a requesting master device. If one snooping master device has the same or a larger cache line granule size, that snooping master device is selected. If more than one snooping master device has the same or a larger cache line granule size, a snooping master device is selected based on an alternate criterion. The intervention responses provided by the unselected snooping master devices are canceled by the conditional intervention circuit, and intervention data from the selected snooping master device is provided to the requesting master device.
    Type: Application
    Filed: September 24, 2015
    Publication date: March 30, 2017
    Inventors: Kun Xu, Thuong Quang Truong, Jaya Prakash Subramaniam Ganasan, Hien Minh Le, Cesar Aaron Ramirez
  • Publication number: 20170091098
    Abstract: Aspects disclosed herein include avoiding deadlocks in processor-based systems employing retry and in-order-response non-retry bus coherency protocols. In this regard, an interface bridge circuit is communicatively coupled to a first core device that implements a retry bus coherency protocol, and a second core device that implements an in-order-response non-retry bus coherency protocol. The interface bridge circuit receives a snoop command from the first core device, and forwards the snoop command to the second core device. While the snoop command is pending, the interface bridge circuit detects a potential deadlock condition between the first core device and the second core device. In response to detecting the potential deadlock condition, the interface bridge circuit is configured to send a retry response to the first core device. This enables the first core device to continue processing, thereby eliminating the potential deadlock condition.
    Type: Application
    Filed: September 24, 2015
    Publication date: March 30, 2017
    Inventors: Hien Minh Le, Thuong Quang Truong, Kun Xu, Jaya Prakash Subramaniam Ganasan, Cesar Aaron Ramirez
  • Patent number: 9529717
    Abstract: A technique for operating a memory system for a node includes interrogating, by a cache, an associated cache directory to determine whether a shared cache line to be installed in the cache is associated with an invalid global state in the cache. The invalid global state specifies that a version of the shared cache line has been intervened off-node. In response to the shared cache line being in the invalid global state, the cache spawns a castout invalid global command for the shared cache line. The shared cache line is installed in the cache. A coherence state for the shared cache line is updated in the associated cache directory to indicate the shared cache line is shared.
    Type: Grant
    Filed: June 10, 2015
    Date of Patent: December 27, 2016
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hien Minh Le, Jeffrey A. Stuecheli, Phillip G. Williams
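The install-time behavior in the entry above (patent 9529717) amounts to: if the line being installed is in the invalid global state, spawn a castout invalid global command, then install the line and mark it shared. The sketch below uses an assumed state encoding ("Ig", "S") and directory layout purely for illustration.

```python
# Sketch of the install-time check described above; the coherence-state
# encoding and directory layout are illustrative assumptions.

def install_shared_line(cache_directory, address, commands):
    """cache_directory: dict of address -> coherence state.
    commands: list collecting coherence commands spawned by the cache."""
    if cache_directory.get(address) == "Ig":                   # invalid global: the line
        commands.append(("castout_invalid_global", address))   # was intervened off-node
    cache_directory[address] = "S"                             # install line, mark shared
    return cache_directory


directory = {0x40: "Ig"}
cmds = []
install_shared_line(directory, 0x40, cmds)
print(directory, cmds)   # {64: 'S'} [('castout_invalid_global', 64)]
```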
  • Patent number: 9483403
    Abstract: A technique for operating a memory system for a node includes interrogating, by a cache, an associated cache directory to determine whether a shared cache line to be installed in the cache is associated with an invalid global state in the cache. The invalid global state specifies that a version of the shared cache line has been intervened off-node. In response to the shared cache line being in the invalid global state, the cache spawns a castout invalid global command for the shared cache line. The shared cache line is installed in the cache. A coherence state for the shared cache line is updated in the associated cache directory to indicate the shared cache line is shared.
    Type: Grant
    Filed: June 17, 2014
    Date of Patent: November 1, 2016
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hien Minh Le, Jeffrey A. Stuecheli, Phillip G. Williams
  • Patent number: 9471491
    Abstract: A technique for operating a high-availability (HA) data processing system includes, in response to receiving an HA logout indication at a cache, initiating a walk of the cache to locate cache lines in the cache that include HA data. In response to determining that a cache line includes HA data, an address of the cache line is logged in a first portion of a buffer in the cache. In response to the first portion of the buffer reaching a determined fill level, contents of the first portion of the buffer are logged to another memory. In response to all cache lines in the cache being walked, the cache walk is terminated.
    Type: Grant
    Filed: November 6, 2013
    Date of Patent: October 18, 2016
    Assignee: International Business Machines Corporation
    Inventors: Sanjeev Ghai, Guy Lynn Guthrie, Hien Minh Le, Hugh Shen, Philip G. Williams
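The cache walk in the entry above (patent 9471491) buffers the addresses of HA cache lines and flushes the buffer contents to another memory whenever a fill level is reached, then terminates once every line has been walked. The sketch below is a minimal model of that loop; the fill threshold, data layout, and function name are assumptions, not values from the patent.

```python
# Minimal model of the logout-triggered cache walk described above.

def ha_logout_walk(cache_lines, fill_level=4):
    """cache_lines: iterable of (address, is_ha) tuples.
    Returns the batches of HA addresses flushed to the other memory."""
    buffer_first_portion = []
    flushed_to_memory = []
    for address, is_ha in cache_lines:
        if is_ha:
            buffer_first_portion.append(address)
        if len(buffer_first_portion) >= fill_level:
            flushed_to_memory.append(list(buffer_first_portion))  # log buffer contents
            buffer_first_portion.clear()
    if buffer_first_portion:                   # walk finished: drain the remainder
        flushed_to_memory.append(list(buffer_first_portion))
    return flushed_to_memory


lines = [(addr, addr % 3 == 0) for addr in range(0, 640, 64)]
print(ha_logout_walk(lines, fill_level=2))   # [[0, 192], [384, 576]]
```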
  • Patent number: 9430382
    Abstract: A technique for operating a high-availability (HA) data processing system includes, in response to receiving an HA logout indication at a cache, initiating a walk of the cache to locate cache lines in the cache that include HA data. In response to determining that a cache line includes HA data, an address of the cache line is logged in a first portion of a buffer in the cache. In response to the first portion of the buffer reaching a determined fill level, contents of the first portion of the buffer are logged to another memory. In response to all cache lines in the cache being walked, the cache walk is terminated.
    Type: Grant
    Filed: January 31, 2014
    Date of Patent: August 30, 2016
    Assignee: International Business Machines Corporation
    Inventors: Sanjeev Ghai, Guy Lynn Guthrie, Hien Minh Le, Hugh Shen, Philip G. Williams
  • Patent number: 9336142
    Abstract: A technique for operating a data processing system includes determining whether a cache line that is to be victimized from a cache includes high availability (HA) data that has not been logged. In response to determining that the cache line that is to be victimized from the cache includes HA data that has not been logged, an address for the HA data is written to an HA dirty address data structure, e.g., a dirty address table (DAT), in a first memory via a first non-blocking channel. The cache line that is victimized from the cache is written to a second memory via a second non-blocking channel.
    Type: Grant
    Filed: November 6, 2013
    Date of Patent: May 10, 2016
    Assignee: International Business Machines Corporation
    Inventors: Sanjeev Ghai, Guy Lynn Guthrie, Hien Minh Le, Hugh Shen, Philip G. Williams
  • Patent number: 9280465
    Abstract: A technique of operating a data processing system, includes logging addresses for cache lines modified by a producer core in a data array of a producer cache to create a high-availability (HA) log for the producer core. The technique also includes moving the HA log directly from the producer cache to a consumer cache of a consumer core and moving HA data associated with the addresses of the HA log directly from the producer cache to the consumer cache. The HA log corresponds to a cache line that includes multiple of the addresses. Finally, the technique includes processing, by the consumer core, the HA log and the HA data for the data processing system.
    Type: Grant
    Filed: October 8, 2013
    Date of Patent: March 8, 2016
    Assignee: GLOBALFOUNDRIES INC.
    Inventors: Guy Lynn Guthrie, Steven R. Kunkel, Hien Minh Le, Geraint North, William J. Starke
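The producer/consumer flow in the entry above (patent 9280465) logs the addresses modified by the producer core and then moves both the HA log and the associated HA data directly from the producer cache to the consumer cache. The sketch below is a hypothetical software analogue of that flow; the class and method names are assumptions for illustration.

```python
# Hypothetical software analogue of the producer/consumer HA flow above.

class ProducerCache:
    def __init__(self):
        self.data = {}       # address -> modified (HA) data
        self.ha_log = []     # addresses modified by the producer core

    def store(self, address, value):
        self.data[address] = value
        self.ha_log.append(address)   # log the address into the HA log


class ConsumerCache:
    def __init__(self):
        self.ha_log = []
        self.ha_data = {}

    def receive(self, producer):
        # Cache-to-cache moves: the HA log and the HA data come directly
        # from the producer cache, without a round trip through memory.
        self.ha_log = list(producer.ha_log)
        self.ha_data = {addr: producer.data[addr] for addr in producer.ha_log}


producer, consumer = ProducerCache(), ConsumerCache()
producer.store(0x100, "v1")
producer.store(0x140, "v2")
consumer.receive(producer)
print(consumer.ha_log, consumer.ha_data)
```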
  • Patent number: 9274952
    Abstract: A technique of operating a data processing system includes logging addresses for cache lines modified by a producer core in a data array of a producer cache to create a high-availability (HA) log for the producer core. The technique also includes moving the HA log directly from the producer cache to a consumer cache of a consumer core and moving HA data associated with the addresses of the HA log directly from the producer cache to the consumer cache. The HA log corresponds to a cache line that includes multiple of the addresses. Finally, the technique includes processing, by the consumer core, the HA log and the HA data for the data processing system.
    Type: Grant
    Filed: January 31, 2014
    Date of Patent: March 1, 2016
    Assignee: GLOBALFOUNDRIES INC.
    Inventors: Guy Lynn Guthrie, Steven R. Kunkel, Hien Minh Le, Geraint North, William J. Starke
  • Publication number: 20150363316
    Abstract: A technique for operating a memory system for a node includes interrogating, by a cache, an associated cache directory to determine whether a shared cache line to be installed in the cache is associated with an invalid global state in the cache. The invalid global state specifies that a version of the shared cache line has been intervened off-node. In response to the shared cache line being in the invalid global state, the cache spawns a castout invalid global command for the shared cache line. The shared cache line is installed in the cache. A coherence state for the shared cache line is updated in the associated cache directory to indicate the shared cache line is shared.
    Type: Application
    Filed: June 10, 2015
    Publication date: December 17, 2015
    Inventors: Guy L. Guthrie, Hien Minh Le, Jeffrey A. Stuecheli, Phillip G. Williams
  • Publication number: 20150363317
    Abstract: A technique for operating a memory system for a node includes interrogating, by a cache, an associated cache directory to determine whether a shared cache line to be installed in the cache is associated with an invalid global state in the cache. The invalid global state specifies that a version of the shared cache line has been intervened off-node. In response to the shared cache line being in the invalid global state, the cache spawns a castout invalid global command for the shared cache line. The shared cache line is installed in the cache. A coherence state for the shared cache line is updated in the associated cache directory to indicate the shared cache line is shared.
    Type: Application
    Filed: June 17, 2014
    Publication date: December 17, 2015
    Applicant: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hien Minh Le, Jeffrey A. Stuecheli, Phillip G. Williams