Patents by Inventor Thang M. Tran

Thang M. Tran has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8639884
    Abstract: Systems and methods are disclosed for multi-threading computer systems. In a computer system executing multiple program threads in a processing unit, a first load/store execution unit is configured to handle instructions from a first program thread and a second load/store execution unit is configured to handle instructions from a second program thread. When the computer system is executing a single program thread, the first and second load/store execution units are reconfigured to handle instructions from the single program thread, and a Level 1 (L1) data cache is reconfigured with a first port to communicate with the first load/store execution unit and a second port to communicate with the second load/store execution unit.
    Type: Grant
    Filed: February 28, 2011
    Date of Patent: January 28, 2014
    Assignee: Freescale Semiconductor, Inc.
    Inventor: Thang M. Tran
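    The reconfiguration described in this abstract can be sketched roughly in C as below. The type lsu_route_t, the function configure_lsus, and the fixed two-thread layout are illustrative assumptions, not taken from the patent.

      #include <stdio.h>

      /* Hypothetical model of the two load/store execution units and the
         two L1 data-cache ports described in the abstract. */
      typedef struct {
          int lsu0_thread;   /* thread id handled by load/store unit 0 */
          int lsu1_thread;   /* thread id handled by load/store unit 1 */
          int l1_port0_lsu;  /* which LSU drives L1 port 0 */
          int l1_port1_lsu;  /* which LSU drives L1 port 1 */
      } lsu_route_t;

      /* In multi-thread mode each LSU serves its own thread; in single-thread
         mode both LSUs (and both L1 ports) are given to the lone thread 0. */
      static lsu_route_t configure_lsus(int multithread)
      {
          lsu_route_t r;
          r.lsu0_thread = 0;
          r.lsu1_thread = multithread ? 1 : 0;
          r.l1_port0_lsu = 0;
          r.l1_port1_lsu = 1;
          return r;
      }

      int main(void)
      {
          lsu_route_t mt = configure_lsus(1), st = configure_lsus(0);
          printf("MT: LSU1 -> thread %d, ST: LSU1 -> thread %d\n",
                 mt.lsu1_thread, st.lsu1_thread);
          return 0;
      }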
  • Publication number: 20140025967
    Abstract: A technique for performing power management for configurable processor resources of a processor includes determining whether to increase, decrease, or maintain resource units for each of the configurable processor resources based on the utilization of each of the configurable processor resources. A total weighted power number for the processor is substantially maintained while resource units for each of the configurable processor resources whose utilization is above a first level are increased and resource units for each of the configurable processor resources whose utilization is below a second level are decreased. The total weighted power number corresponds to a sum of weighted power numbers for the configurable processor resources.
    Type: Application
    Filed: July 17, 2012
    Publication date: January 23, 2014
    Applicant: FREESCALE SEMICONDUCTOR, INC.
    Inventor: Thang M. Tran
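    A minimal behavioral sketch of the balancing idea in this abstract, assuming illustrative thresholds, weights, and resource counts (none of these values come from the patent): under-utilized resources give units back first, and the freed weighted power is then spent on over-utilized resources, keeping the total roughly constant.

      #include <stdio.h>

      #define NRES 4
      /* Hypothetical resource table: units currently enabled, per-unit power
         weight, and measured utilization (0.0 - 1.0). */
      static int    units[NRES]  = { 4, 2, 8, 2 };
      static double weight[NRES] = { 1.0, 2.0, 0.5, 1.5 };
      static double util[NRES]   = { 0.95, 0.30, 0.80, 0.10 };

      static double weighted_power(void)
      {
          double p = 0.0;
          for (int i = 0; i < NRES; i++)
              p += weight[i] * units[i];
          return p;
      }

      int main(void)
      {
          double budget = weighted_power();   /* total to be maintained */
          double freed  = 0.0;

          /* First shrink under-utilized resources, banking the freed power. */
          for (int i = 0; i < NRES; i++)
              if (util[i] < 0.25 && units[i] > 1) {
                  units[i]--;
                  freed += weight[i];
              }

          /* Then grow over-utilized resources while the freed power lasts,
             so the total weighted power stays close to the original budget. */
          for (int i = 0; i < NRES; i++)
              if (util[i] > 0.75 && freed >= weight[i]) {
                  units[i]++;
                  freed -= weight[i];
              }

          printf("budget %.1f, new total %.1f\n", budget, weighted_power());
          return 0;
      }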
  • Publication number: 20130297912
    Abstract: A processor reduces the likelihood of stalls at an instruction pipeline by dynamically extending the size of a full execution queue. To extend the full execution queue, the processor temporarily repurposes another execution queue to store instructions on behalf of the full execution queue. The execution queue to be repurposed can be selected based on a number of factors, including the type of instructions it is generally designated to store, whether it is empty of other instruction types, and the rate of cache hits at the processor. By selecting the repurposed queue based on dynamic factors such as the cache hit rate, the likelihood of stalls at the dispatch stage is reduced for different types of program flows, improving overall efficiency of the processor.
    Type: Application
    Filed: May 3, 2012
    Publication date: November 7, 2013
    Applicant: Freescale Semiconductor, Inc.
    Inventors: Thang M. Tran, Sourav Roy
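    A rough sketch of the queue-repurposing decision, assuming a hypothetical heuristic (pick_overflow_queue) that only borrows an empty queue and avoids the load/store queue unless the cache hit rate is high; the specific thresholds are illustrative, not from the patent.

      #include <stdio.h>

      enum qtype { Q_INT, Q_LOADSTORE, Q_BRANCH };

      typedef struct {
          enum qtype type;
          int count;       /* entries currently occupied */
          int capacity;
      } equeue_t;

      /* When the target queue is full, borrow another queue only if it is
         currently empty, and prefer the load/store queue only when the cache
         hit rate is high (it is then unlikely to be tied up by misses). */
      static int pick_overflow_queue(const equeue_t *q, int nq, int full_q,
                                     double cache_hit_rate)
      {
          for (int i = 0; i < nq; i++) {
              if (i == full_q || q[i].count != 0)
                  continue;
              if (q[i].type == Q_LOADSTORE && cache_hit_rate < 0.90)
                  continue;
              return i;    /* repurpose this queue as an extension */
          }
          return -1;       /* nothing suitable: dispatch must stall */
      }

      int main(void)
      {
          equeue_t q[3] = { { Q_INT, 8, 8 }, { Q_LOADSTORE, 0, 8 },
                            { Q_BRANCH, 2, 4 } };
          printf("extend queue 0 with queue %d\n",
                 pick_overflow_queue(q, 3, 0, 0.97));
          return 0;
      }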
  • Publication number: 20130290639
    Abstract: A processor uses a dedicated buffer to reduce the amount of time needed to execute memory copy operations. For each load instruction associated with the memory copy operation, the processor copies the load data from memory to the dedicated buffer. For each store operation associated with the memory copy operation, the processor retrieves the store data from the dedicated buffer and transfers it to memory. The dedicated buffer is separate from a register file and caches of the processor, so that each load operation associated with a memory copy operation does not have to wait for data to be loaded from memory to the register file. Similarly, each store operation associated with a memory copy operation does not have to wait for data to be transferred from the register file to memory.
    Type: Application
    Filed: April 25, 2012
    Publication date: October 31, 2013
    Applicant: FREESCALE SEMICONDUCTOR, INC.
    Inventors: Thang M. Tran, James Yang
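    The staging scheme can be modeled behaviorally as below; memcopy_via_buffer and the 16-entry buffer size are assumptions for illustration, and real hardware would overlap the load and store phases rather than alternate them.

      #include <stddef.h>
      #include <stdio.h>

      #define COPY_BUF_ENTRIES 16

      /* Behavioral stand-in for the dedicated copy buffer: loads deposit data
         here instead of the register file, and the matching stores drain it. */
      static unsigned long copy_buf[COPY_BUF_ENTRIES];

      static void memcopy_via_buffer(unsigned long *dst,
                                     const unsigned long *src, size_t words)
      {
          size_t done = 0;
          while (done < words) {
              size_t chunk = words - done;
              if (chunk > COPY_BUF_ENTRIES)
                  chunk = COPY_BUF_ENTRIES;
              /* "load" phase: memory -> dedicated buffer, bypassing the
                 register file as described in the abstract */
              for (size_t i = 0; i < chunk; i++)
                  copy_buf[i] = src[done + i];
              /* "store" phase: dedicated buffer -> memory */
              for (size_t i = 0; i < chunk; i++)
                  dst[done + i] = copy_buf[i];
              done += chunk;
          }
      }

      int main(void)
      {
          unsigned long a[40], b[40];
          for (int i = 0; i < 40; i++) a[i] = i;
          memcopy_via_buffer(b, a, 40);
          printf("b[39] = %lu\n", b[39]);
          return 0;
      }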
  • Publication number: 20130212358
    Abstract: A data processing system comprises a processor unit that includes an instruction decode/issue unit including a re-order buffer having entries that include an execution queue tag that indicates an execution queue location of an instruction to which a re-order buffer entry is assigned, a result valid indicator to indicate that a corresponding instruction has executed with a valid status-bit result, and a forward indicator to indicate that the status bit can be forwarded to the execution queue of a pointed-to instruction that is waiting to receive the status bit.
    Type: Application
    Filed: February 15, 2012
    Publication date: August 15, 2013
    Inventors: Thang M. Tran, Trinh Huy Nguyen
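    As a rough picture of the entry layout named in this abstract, the C struct below packs the three fields into bit-fields; the field widths and the helper can_forward_status are assumptions, not details from the patent.

      #include <stdio.h>

      /* One re-order buffer entry with the three fields named in the
         abstract; widths are illustrative assumptions. */
      typedef struct {
          unsigned exec_queue_tag : 4;  /* which execution queue (and slot)
                                           holds the assigned instruction */
          unsigned result_valid   : 1;  /* instruction executed with a valid
                                           status-bit result */
          unsigned forward        : 1;  /* status bit may be forwarded to the
                                           queue of the waiting instruction */
      } rob_entry_t;

      /* Forward the status bit only when the producer has a valid result and
         forwarding is permitted for this entry. */
      static int can_forward_status(const rob_entry_t *e)
      {
          return e->result_valid && e->forward;
      }

      int main(void)
      {
          rob_entry_t e = { 3, 1, 1 };   /* queue tag 3, executed, forwardable */
          printf("forward status bit: %s\n",
                 can_forward_status(&e) ? "yes" : "no");
          return 0;
      }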
  • Publication number: 20130212585
    Abstract: In some embodiments, a data processing system includes a processing unit, a first load/store unit (LSU), and a second LSU configured to operate independently of the first LSU in single and multi-thread modes. A first store buffer is coupled to the first and second LSUs, and a second store buffer is coupled to the first and second LSUs. The first store buffer is used to execute a first thread in multi-thread mode. The second store buffer is used to execute a second thread in multi-thread mode. The first and second store buffers are used when executing a single thread in single thread mode.
    Type: Application
    Filed: February 10, 2012
    Publication date: August 15, 2013
    Inventor: Thang M. Tran
  • Publication number: 20130198490
    Abstract: In a processing system capable of single and multi-thread execution, a branch prediction unit can be configured to detect hard-to-predict branches and loop instructions. In a dual-threading (simultaneous multi-threading) configuration, one instruction queue (IQ) is used for each thread and instructions are alternately sent from each IQ to decode units. In single thread mode, the second IQ can be used to store the “not predicted path” of the hard-to-predict branch or the “fall-through” path of the loop. On mis-prediction, the mis-prediction penalty is reduced by getting the instructions from the IQ instead of the instruction cache.
    Type: Application
    Filed: January 31, 2012
    Publication date: August 1, 2013
    Inventors: Thang M. Tran, Michael B. Schinzler
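    The single-thread use of the second instruction queue might look like the sketch below; resolve_branch and the two-queue view are an illustrative behavioral model, not the patent's implementation.

      #include <stdio.h>

      /* In single-thread mode, IQ0 holds the predicted path and IQ1 is
         loaded with the not-predicted (or fall-through) path of a
         hard-to-predict branch. */
      typedef struct { int head_pc; int valid; } iqueue_t;

      static int resolve_branch(iqueue_t *predicted, iqueue_t *alternate,
                                int branch_taken, int predicted_taken)
      {
          if (branch_taken == predicted_taken)
              return predicted->head_pc;      /* keep draining IQ0 */
          if (alternate->valid)
              return alternate->head_pc;      /* cheap recovery: switch to the
                                                 pre-fetched path in IQ1 */
          return -1;                          /* full penalty: refetch from
                                                 the instruction cache */
      }

      int main(void)
      {
          iqueue_t iq0 = { 0x1000, 1 }, iq1 = { 0x2000, 1 };
          printf("resume at 0x%x\n", resolve_branch(&iq0, &iq1, 1, 0));
          return 0;
      }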
  • Patent number: 8458447
    Abstract: A data processor includes a branch target buffer (BTB) having a plurality of BTB entries grouped in ways. The BTB entries in one of the ways include a short tag address and the BTB entries in another one of the ways include a full tag address.
    Type: Grant
    Filed: June 17, 2011
    Date of Patent: June 4, 2013
    Assignee: Freescale Semiconductor, Inc.
    Inventors: Thang M. Tran, Edmund J. Gieske, Michael B. Schinzler
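    A minimal sketch of the mixed-tag arrangement, assuming a direct-mapped set within each way; the set count, tag widths, and index function are illustrative only.

      #include <stdint.h>
      #include <stdio.h>

      #define BTB_SETS 128

      /* One way stores only a short tag (cheaper, may alias), the other
         stores the full tag, as described in the abstract. */
      typedef struct { uint16_t short_tag; uint32_t target; uint8_t valid; } btb_short_t;
      typedef struct { uint32_t full_tag;  uint32_t target; uint8_t valid; } btb_full_t;

      static btb_short_t way0[BTB_SETS];
      static btb_full_t  way1[BTB_SETS];

      /* Returns the predicted target, or 0 on a BTB miss. */
      static uint32_t btb_lookup(uint32_t pc)
      {
          uint32_t set = (pc >> 2) % BTB_SETS;
          uint32_t full_tag  = pc >> 2;
          uint16_t short_tag = (uint16_t)(full_tag & 0xFFFF);

          if (way0[set].valid && way0[set].short_tag == short_tag)
              return way0[set].target;
          if (way1[set].valid && way1[set].full_tag == full_tag)
              return way1[set].target;
          return 0;
      }

      int main(void)
      {
          way1[(0x4000 >> 2) % BTB_SETS] = (btb_full_t){ 0x4000 >> 2, 0x5000, 1 };
          printf("target 0x%x\n", (unsigned)btb_lookup(0x4000));
          return 0;
      }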
  • Publication number: 20130046936
    Abstract: Systems and methods are disclosed for a computer system that includes a first load/store execution unit, a first Level 1 (L1) data cache unit coupled to the first load/store execution unit, a second load/store execution unit, and a second L1 data cache unit coupled to the second load/store execution unit. Some instructions are directed to the first load/store execution unit and other instructions are directed to the second load/store execution unit when executing a single thread of instructions.
    Type: Application
    Filed: August 19, 2011
    Publication date: February 21, 2013
    Inventor: Thang M. Tran
  • Publication number: 20130046957
    Abstract: Processing systems and methods are disclosed that can include an instruction unit which provides instructions for execution by the processor; a decode/issue unit which decodes instructions received from the instruction unit and issues the instructions; and a plurality of execution queues coupled to the decode/issue unit, wherein each issued instruction from the decode/issue unit can be stored into an entry of at least one queue of the plurality of execution queues. The plurality of queues can comprise an independent execution queue and a dependent execution queue. A plurality of execution units can be coupled to receive instructions for execution from the plurality of execution queues, and can comprise a first execution unit coupled to receive instructions that have been selected for execution from the dependent execution queue and the independent execution queue.
    Type: Application
    Filed: August 18, 2011
    Publication date: February 21, 2013
    Applicant: FREESCALE SEMICONDUCTOR, INC.
    Inventors: THANG M. TRAN, TRINH HUY H. NGUYEN
  • Publication number: 20130046956
    Abstract: Systems and methods are disclosed that can include a processor having an instruction unit, a decode/issue unit, a first execution queue configured to provide instructions of a first instruction type to a first execution unit, and a second execution queue configured to provide instructions of a second instruction type to a second execution unit. A first instruction (IMUL) of the second instruction type is received. The first instruction is decoded by the decode/issue unit to determine operands of the first instruction. The operands of the first instruction are determined to include a dependency on a second instruction (ld) of the first instruction type stored in a first entry of the first execution queue. The first instruction is stored in a first entry of the second execution queue.
    Type: Application
    Filed: August 16, 2011
    Publication date: February 21, 2013
    Inventors: THANG M. TRAN, Trinh Huy H. Nguyen
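    A rough decode-time model of the cross-queue dependency check, assuming hypothetical structures (ldq_entry_t, intq_entry_t) and a simple linear scan; the patent does not specify these details.

      #include <stdio.h>

      #define QSIZE 8

      /* Entry of the load/store (first-type) execution queue: which
         destination register it will write, and whether it has completed. */
      typedef struct { int dest_reg; int done; int valid; } ldq_entry_t;

      /* Entry of the integer (second-type) queue: the dependent instruction
         remembers which load-queue slot it is waiting on (-1 if none). */
      typedef struct { int src_reg; int waits_on; } intq_entry_t;

      /* If IMUL's source is produced by a load still sitting in the first
         queue, record that slot so issue can be held until the load is done. */
      static intq_entry_t place_imul(const ldq_entry_t *ldq, int src_reg)
      {
          intq_entry_t e = { src_reg, -1 };
          for (int i = 0; i < QSIZE; i++)
              if (ldq[i].valid && !ldq[i].done && ldq[i].dest_reg == src_reg)
                  e.waits_on = i;
          return e;
      }

      int main(void)
      {
          ldq_entry_t ldq[QSIZE] = { { 5, 0, 1 } };   /* ld r5 pending in slot 0 */
          intq_entry_t imul = place_imul(ldq, 5);      /* imul ..., r5, ... */
          printf("imul waits on load-queue slot %d\n", imul.waits_on);
          return 0;
      }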
  • Publication number: 20120324209
    Abstract: A data processor includes a branch target buffer (BTB) having a plurality of BTB entries grouped in ways. The BTB entries in one of the ways include a short tag address and the BTB entries in another one of the ways include a full tag address.
    Type: Application
    Filed: June 17, 2011
    Publication date: December 20, 2012
    Inventors: Thang M. Tran, Edmund J. Gieske, Michael B. Schinzler
  • Publication number: 20120303935
    Abstract: A processor includes an instruction unit which provides instructions for execution by the processor, a decode/issue unit which decodes instructions received from the instruction unit and issues the instructions, and a plurality of execution queues coupled to the decode/issue unit. Each issued instruction from the decode/issue unit is stored into an entry of at least one queue of the plurality of execution queues, wherein each entry of the plurality of execution queues is configured to store an issued instruction and a duplicate indicator corresponding to the issued instruction which indicates whether or not a duplicate instruction of the issued instruction is also stored in an entry of another queue of the plurality of execution queues.
    Type: Application
    Filed: May 26, 2011
    Publication date: November 29, 2012
    Inventors: THANG M. TRAN, LEICK D. ROBINSON
  • Publication number: 20120303936
    Abstract: In a processor having an instruction unit, a decode/issue unit, and execution queues configured to provide instructions to correspondingly different types of execution units, a method comprises maintaining a duplicate free list for the execution queues. The duplicate free list includes a plurality of duplicate dependent instruction indicators that indicate when a duplicate instruction for a dependent instruction is stored in at least one of the execution queues. One of the duplicate dependent instruction indicators is assigned to an execution queue for a dependent instruction. The dependent instruction is executed only when the one of the duplicate dependent instruction indicators is reset.
    Type: Application
    Filed: March 14, 2012
    Publication date: November 29, 2012
    Applicant: FREESCALE SEMICONDUCTOR, INC.
    Inventors: Thang M. Tran, Trinh Huy Nguyen
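    The gating behavior can be sketched as below, assuming a small bitmap (dup_pending) standing in for the duplicate free list; the helper names and sizes are illustrative.

      #include <stdio.h>

      #define NDUP 8

      /* Each bit tells whether the duplicate copy of a dependent instruction
         is still sitting in some other execution queue. The dependent
         instruction may execute only once its indicator has been reset. */
      static unsigned char dup_pending[NDUP];

      static int alloc_dup(void)
      {
          for (int i = 0; i < NDUP; i++)
              if (!dup_pending[i]) {
                  dup_pending[i] = 1;
                  return i;
              }
          return -1;
      }

      static void reset_dup(int id) { dup_pending[id] = 0; }

      static int may_execute(int id) { return id < 0 || !dup_pending[id]; }

      int main(void)
      {
          int id = alloc_dup();          /* dependent instruction gets an indicator */
          printf("before reset: %d\n", may_execute(id));   /* 0: must wait */
          reset_dup(id);                 /* duplicate copy has been resolved */
          printf("after reset:  %d\n", may_execute(id));   /* 1: may execute */
          return 0;
      }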
  • Publication number: 20120278592
    Abstract: In a processor, a decode unit identifies instructions needing a checkpoint and enables selected checkpoints, and a register file unit includes a plurality of architectural registers; a first set of checkpoint registers corresponding to a first checkpoint, wherein each checkpoint register of the first set corresponds to a corresponding architectural register of the plurality of architectural registers; a first set of indicators corresponding to the first set of checkpoint registers which, for each checkpoint register in the first set of checkpoint registers, indicates whether the corresponding architectural register has been modified or is intended to be modified prior to enabling of the first checkpoint; and a second set of indicators corresponding to the first set of checkpoint registers which, for each checkpoint register in the first set of checkpoint registers, indicates whether the corresponding architectural register has been modified or is intended to be modified after enabling of the first checkpoint.
    Type: Application
    Filed: April 28, 2011
    Publication date: November 1, 2012
    Inventor: THANG M. TRAN
  • Publication number: 20120278596
    Abstract: A data processing device maintains register map information that maps accesses to architectural registers, as identified by instructions being executed, to physical registers of the data processing device. In response to determining that an instruction, such as a speculatively-executing conditional branch, indicates a checkpoint, the data processing device stores the register map information for subsequent retrieval depending on the resolution of the instruction. In addition, in response to the checkpoint indication the data processing device generates new register map information such that accesses to the architectural registers are mapped to different physical registers. The data processing device maintains a list, referred to as a free register list, of physical registers available to be mapped to an architectural register.
    Type: Application
    Filed: April 26, 2011
    Publication date: November 1, 2012
    Applicant: Freescale Semiconductor, Inc.
    Inventor: Thang M. Tran
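    A compact model of checkpointing the register map and free list, assuming small illustrative sizes; take_checkpoint and rename_dest are hypothetical names, and a real design would keep multiple checkpoints rather than a single copy.

      #include <stdio.h>
      #include <string.h>

      #define NARCH 8
      #define NPHYS 16

      /* Rename state from the abstract: a map from architectural to physical
         registers and a free list of physical registers. */
      typedef struct {
          int map[NARCH];            /* architectural -> physical */
          unsigned char free[NPHYS]; /* 1 if the physical register is free */
      } rename_state_t;

      /* On a checkpoint (e.g. a speculative conditional branch), save the
         current map; new writes then take fresh physical registers from the
         free list, so the saved map can be restored on a mis-prediction. */
      static rename_state_t take_checkpoint(const rename_state_t *s)
      {
          rename_state_t saved;
          memcpy(&saved, s, sizeof(saved));
          return saved;
      }

      static int rename_dest(rename_state_t *s, int arch_reg)
      {
          for (int p = 0; p < NPHYS; p++)
              if (s->free[p]) {
                  s->free[p] = 0;
                  s->map[arch_reg] = p;
                  return p;
              }
          return -1;                 /* no free physical register: stall */
      }

      int main(void)
      {
          rename_state_t s = { { 0, 1, 2, 3, 4, 5, 6, 7 },
                               { 0, 0, 0, 0, 0, 0, 0, 0,
                                 1, 1, 1, 1, 1, 1, 1, 1 } };
          rename_state_t cp = take_checkpoint(&s);   /* branch enters execution */
          rename_dest(&s, 3);                        /* speculative write to r3 */
          s = cp;                                    /* mis-prediction: restore */
          printf("r3 -> p%d after restore\n", s.map[3]);
          return 0;
      }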
  • Publication number: 20120221793
    Abstract: A microprocessor system is disclosed that includes a first data cache that is shared by a first group of one or more program threads in a multi-thread mode and used by one program thread in a single-thread mode. A second data cache is shared by a second group of one or more program threads in the multi-thread mode and is used as a victim cache for the first data cache in the single-thread mode.
    Type: Application
    Filed: February 28, 2011
    Publication date: August 30, 2012
    Inventor: THANG M. TRAN
  • Publication number: 20120221796
    Abstract: Systems and methods are disclosed for multi-threading computer systems. In a computer system executing multiple program threads in a processing unit, a first load/store execution unit is configured to handle instructions from a first program thread and a second load/store execution unit is configured to handle instructions from a second program thread. When the computer system is executing a single program thread, the first and second load/store execution units are reconfigured to handle instructions from the single program thread, and a Level 1 (L1) data cache is reconfigured with a first port to communicate with the first load/store execution unit and a second port to communicate with the second load/store execution unit.
    Type: Application
    Filed: February 28, 2011
    Publication date: August 30, 2012
    Inventor: THANG M. TRAN
  • Publication number: 20120221835
    Abstract: An instruction unit provides instructions for execution by a processor. A decode unit decodes instructions received from the instruction unit. Queues are coupled to receive instructions from the decode unit. Each instruction in a same queue is executed in order by a corresponding execution unit. An arbiter is coupled to each queue and to the execution unit that executes instructions of a first instruction type. The arbiter selects a next instruction of the first instruction type from a bottom entry of the queue for execution by the first execution unit.
    Type: Application
    Filed: February 28, 2011
    Publication date: August 30, 2012
    Inventor: THANG M. TRAN
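    The per-queue in-order constraint can be sketched as below: only the bottom (oldest) entry of each queue is a candidate, as the abstract states, while the round-robin rotation among queues is an assumption added for illustration.

      #include <stdio.h>

      #define NQUEUES 4

      /* Each queue issues strictly in order, so only its bottom entry is
         visible to the arbiter. */
      typedef struct { int bottom_type; int empty; } queue_view_t;

      static int arbitrate(const queue_view_t *q, int want_type, int last_pick)
      {
          for (int n = 1; n <= NQUEUES; n++) {
              int i = (last_pick + n) % NQUEUES;
              if (!q[i].empty && q[i].bottom_type == want_type)
                  return i;        /* issue the bottom entry of queue i */
          }
          return -1;               /* no ready instruction of this type */
      }

      int main(void)
      {
          queue_view_t q[NQUEUES] = { { 1, 0 }, { 0, 0 }, { 1, 0 }, { 0, 1 } };
          printf("pick queue %d\n", arbitrate(q, 1, 0));   /* -> queue 2 */
          return 0;
      }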
  • Publication number: 20100169578
    Abstract: A system comprises tag memories and data memories. Sources use the tag memories with the data memories as a cache. Arbitration of a cache request is replayed, based on an arbitration miss and way hit, without accessing the tag memories. A method comprises receiving a cache request sent by a source out of a plurality of sources. The sources use tag memories with data memories as a cache. The method further comprises arbitrating the cache request, and replaying arbitration, based on an arbitration miss and way hit, without accessing the tag memories.
    Type: Application
    Filed: December 31, 2008
    Publication date: July 1, 2010
    Applicant: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Robert NYCHKA, William M. JOHNSON, Thang M. TRAN
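    A behavioral sketch of the replay idea, assuming hypothetical helpers (tag_lookup, data_port_free): the way that hit is recorded on the first attempt, and a replay after losing arbitration reuses it without touching the tag memories again.

      #include <stdint.h>
      #include <stdio.h>

      typedef struct {
          uint32_t addr;
          int      way;        /* way that hit on the first attempt, -1 if unknown */
          int      replayed;
      } cache_req_t;

      /* Stand-ins for the tag lookup and the data-memory arbitration result;
         both are illustrative, not part of the patent. */
      static int tag_lookup(uint32_t addr)   { return (int)(addr >> 6) & 3; }
      static int data_port_free(int attempt) { return attempt > 0; }

      static uint32_t service(cache_req_t *req)
      {
          for (int attempt = 0; ; attempt++) {
              if (req->way < 0)
                  req->way = tag_lookup(req->addr);   /* tags read only once */
              if (data_port_free(attempt))
                  return (uint32_t)req->way;          /* access data memory */
              req->replayed = 1;                      /* lost arbitration:
                                                         replay with saved way */
          }
      }

      int main(void)
      {
          cache_req_t r = { 0x12C0, -1, 0 };
          unsigned way = (unsigned)service(&r);
          printf("way %u, replayed=%d\n", way, r.replayed);
          return 0;
      }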