Patents by Inventor Tai-song Jin

Tai-song Jin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20130238877
    Abstract: Provided is a technique for improving the transfer latency of vector register file data when an interrupt is generated. According to an aspect, when an interrupt occurs, a core determines whether to store the vector register file data currently in use in a first memory or in a second memory, based on whether or not the first memory can store the vector register file data. If the vector register file data cannot be stored in the first memory, a data transfer unit, implemented as hardware, is provided to store the vector register file data in the second memory.
    Type: Application
    Filed: November 9, 2012
    Publication date: September 12, 2013
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jin-Seok Lee, Dong-Hoon Yoo, Won-Sub Kim, Tai-Song Jin, Hae-Woo Park, Min-Wook Ahn, Hee-Jin Ahn
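
    The decision described in the abstract above can be pictured with a small sketch. The model below assumes the "first memory" is a small on-chip buffer and the "second memory" is a larger backing store reached through a hardware data transfer unit; the names (Memory, DataTransferUnit, spill_vector_register_file) and the byte-level capacity model are illustrative assumptions, not taken from the application itself.

    # Hedged sketch: choosing where to store vector register file data on an interrupt.
    from dataclasses import dataclass, field

    @dataclass
    class Memory:
        capacity: int                      # bytes still available (assumed model)
        contents: list = field(default_factory=list)

        def can_store(self, size: int) -> bool:
            return self.capacity >= size

        def store(self, label: str, size: int) -> None:
            self.contents.append(label)
            self.capacity -= size

    class DataTransferUnit:
        """Stands in for the hardware unit that moves data to the second memory."""
        def transfer(self, label: str, size: int, dest: Memory) -> None:
            dest.store(label, size)

    def spill_vector_register_file(vrf_size, first, second, dtu):
        """Keep the VRF data in the first memory if it fits; otherwise hand it to
        the data transfer unit so it lands in the second memory."""
        if first.can_store(vrf_size):
            first.store("vrf", vrf_size)
            return "first"
        dtu.transfer("vrf", vrf_size, second)
        return "second"

    # A 4 KiB VRF does not fit in a 2 KiB first memory, so it goes to the second memory.
    print(spill_vector_register_file(4096, Memory(2048), Memory(1 << 20), DataTransferUnit()))
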
  • Publication number: 20130124825
    Abstract: A technique for minimizing the overhead caused by copying or moving a value from one cluster to another cluster is provided. A number of operations may be executed concurrently, for example, a mov operation that moves or copies a value from one cluster to another cluster together with a normal operation. Accordingly, accesses to a register file outside of the cluster may be reduced and the performance of the code may be improved.
    Type: Application
    Filed: July 11, 2012
    Publication date: May 16, 2013
    Inventors: Min-Wook Ahn, Tai-Song Jin, Hee-Jin Ahn
  • Publication number: 20130067444
    Abstract: An apparatus and method are provided to minimize the overhead caused by mode conversion when processing parts of code that cannot be subject to software pipelining. A processor is configured to execute code including a first part that can be subject to software pipelining and a second part that cannot, the second part including a data part and a control part. The processor is further configured to execute the first part and the data part of the second part in a first execution mode, and to execute the control part of the second part in a second execution mode. When the first part and the data part, the data part and the first part, or different data parts are successively executed, the processor processes the code in the first execution mode without entering the second execution mode.
    Type: Application
    Filed: September 7, 2012
    Publication date: March 14, 2013
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Tai-Song Jin
  • Publication number: 20130067207
    Abstract: Provided is a technique that is capable of efficiently compressing instructions by inserting instruction compression bits into valid instruction bundles and deleting no operation (NOP) instruction bundles. Accordingly, the number of instructions that can be parallel-processed in a processor may be increased.
    Type: Application
    Filed: August 27, 2012
    Publication date: March 14, 2013
    Inventor: Tai-Song Jin
  • Publication number: 20120246444
    Abstract: Provided are an apparatus and method capable of processing code to which software pipelining is not applicable, in a CGA mode. The apparatus may include a processing unit that has a very long instruction word (VLIW) mode and a coarse-grained array (CGA) mode, and an adjusting unit configured to detect a target region to which software pipelining is not applicable in code to be executed by the processing unit. The adjusting unit may selectively map the detected target region to one of the VLIW mode and the CGA mode according to a schedule length of the detected target region.
    Type: Application
    Filed: January 31, 2012
    Publication date: September 27, 2012
    Inventors: Tai-Song Jin, Dong-Hoon Yoo, Min-Wook Ahn, Jin-Seok Lee
  • Publication number: 20120124351
    Abstract: An apparatus and method for dynamically determining the execution mode of a reconfigurable array are provided. Performance information of a loop may be obtained before and/or during the execution of the loop. The performance information may be used to determine whether to operate the apparatus in a very long instruction word (VLIW) mode or in a coarse grained array (CGA) mode.
    Type: Application
    Filed: August 25, 2011
    Publication date: May 17, 2012
    Inventors: Bernhard Egger, Dong-Hoon Yoo, Tai-Song Jin, Won-Sub Kim, Min-Wook Ahn, Jin-Seok Lee, Hee-Jin Ahn
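
    As a rough illustration of the mode decision described in this entry, the sketch below compares an estimated cycle count for running a loop in VLIW mode against CGA mode with a one-time configuration overhead. The cost model and all parameter names are assumptions made for the example, not the criterion used in the application.

    # Hedged sketch: pick VLIW or CGA mode from loop performance information.
    def choose_execution_mode(trip_count, vliw_cycles_per_iter,
                              cga_cycles_per_iter, cga_setup_cycles):
        """Return "CGA" when its total cycles (including configuration overhead)
        beat VLIW mode for this loop, otherwise "VLIW"."""
        vliw_total = trip_count * vliw_cycles_per_iter
        cga_total = cga_setup_cycles + trip_count * cga_cycles_per_iter
        return "CGA" if cga_total < vliw_total else "VLIW"

    # A short loop cannot amortize the CGA setup cost; a long-running loop can.
    print(choose_execution_mode(4, 10, 3, 100))    # -> VLIW
    print(choose_execution_mode(500, 10, 3, 100))  # -> CGA
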
  • Publication number: 20120102496
    Abstract: A reconfigurable processor is provided that merges an inner loop and an outer loop of a nested loop and allocates the merged loop to processing elements in parallel, thereby reducing the time needed to process the nested loop. The reconfigurable processor may extract loop execution frequency information from the inner loop and the outer loop of the nested loop, and may merge the inner loop and the outer loop based on the extracted loop execution frequency information.
    Type: Application
    Filed: April 14, 2011
    Publication date: April 26, 2012
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Min-Wook Ahn, Dong-Hoon Yoo, Jin-Seok Lee, Bernhard Egger, Tai-Song Jin, Won-Sub Kim, Hee-Jin Ahn
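
    The loop-merging idea in this entry amounts to coalescing the inner and outer loops into one flat iteration space that can be divided among processing elements. The sketch below is a minimal software analogue; the trip counts stand in for the "loop execution frequency information", and the flat-to-nested index mapping is an assumption made for the example.

    # Hedged sketch: coalesce a nested loop into a single merged loop.
    def merged_iterations(outer_trip, inner_trip):
        """Yield (i, j) pairs from one flat loop over outer_trip * inner_trip steps."""
        for k in range(outer_trip * inner_trip):
            yield divmod(k, inner_trip)   # i = k // inner_trip, j = k % inner_trip

    # The merged loop visits the same (i, j) pairs as the nested form, but as a
    # single pool of iterations that is easier to spread across processing elements.
    nested = [(i, j) for i in range(3) for j in range(4)]
    assert list(merged_iterations(3, 4)) == nested
    print(len(nested), "iterations in one merged loop")
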
  • Publication number: 20120096247
    Abstract: Provided are a reconfigurable processor, which is capable of reducing the probability of an incorrect computation by analyzing the dependence between memory access instructions and allocating the memory access instructions between a plurality of processing elements (PEs) based on the results of the analysis, and a method of controlling the reconfigurable processor. The reconfigurable processor extracts an execution trace from simulation results, and analyzes the memory dependence between instructions included in different iterations based on parts of the execution trace of memory access instructions.
    Type: Application
    Filed: October 13, 2011
    Publication date: April 19, 2012
    Inventors: Hee-Jin Ahn, Dong-Hoon Yoo, Bernhard Egger, Min-Wook Ahn, Jin-Seok Lee, Tai-Song Jin, Won-Sub Kim
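
    The dependence analysis described in this entry can be pictured as a pass over an execution trace of memory accesses, flagging pairs of instructions from different loop iterations that touch the same address with at least one store. The trace format and field names below are assumptions made for the illustration.

    # Hedged sketch: find loop-carried memory dependences in an execution trace.
    from collections import namedtuple

    Access = namedtuple("Access", "iteration instr kind addr")  # kind: "load" or "store"

    def cross_iteration_dependences(trace):
        """Return (earlier_instr, later_instr) pairs with a cross-iteration dependence."""
        deps = set()
        for i, a in enumerate(trace):
            for b in trace[i + 1:]:
                if (a.addr == b.addr and a.iteration != b.iteration
                        and "store" in (a.kind, b.kind)):
                    deps.add((a.instr, b.instr))
        return deps

    trace = [Access(0, "st1", "store", 0x100),
             Access(0, "ld1", "load",  0x200),
             Access(1, "ld2", "load",  0x100)]   # reads what iteration 0 wrote
    print(cross_iteration_dependences(trace))    # -> {('st1', 'ld2')}
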
  • Publication number: 20120089823
    Abstract: A technology for reducing a pipeline control hazard is provided. A conditional branch is processed through conditional branch prediction, and a conditional branch prediction that is determined to be incorrect may be corrected through a subsequent test of the prediction, thereby reducing the pipeline control hazard quickly without additional hardware.
    Type: Application
    Filed: April 22, 2011
    Publication date: April 12, 2012
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Tai-Song Jin, Dong-Hoon Yoo, Bernhard Egger, Won-Sub Kim
  • Publication number: 20120089821
    Abstract: A debugging apparatus and method are provided. The debugging apparatus may include a breakpoint setting unit configured to store a first instruction corresponding to a breakpoint in a table, stop a program currently being executed, and insert a breakpoint instruction including current location information of the first instruction into the breakpoint; and an instruction execution unit configured to selectively execute one of the breakpoint instruction and the first instruction according to a value of a status bit.
    Type: Application
    Filed: April 4, 2011
    Publication date: April 12, 2012
    Inventors: Jin-Seok Lee, Bernhard Egger, Dong-Hoon Yoo, Tai-Song Jin
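
    A toy model of the mechanism in this entry is sketched below: the original instruction is saved in a table, a breakpoint instruction carrying its location replaces it, and a status bit chooses whether the breakpoint instruction or the saved original is executed. The encodings and names are illustrative assumptions, not the apparatus's actual format.

    # Hedged sketch: breakpoint instruction vs. original instruction, selected by a status bit.
    breakpoint_table = {}                     # address -> saved original instruction
    program = {0x40: "add r1, r2, r3"}

    def set_breakpoint(addr):
        breakpoint_table[addr] = program[addr]
        program[addr] = ("BRK", addr)         # breakpoint instruction carries its location

    def execute(addr, status_bit):
        instr = program[addr]
        if isinstance(instr, tuple) and instr[0] == "BRK":
            if status_bit == 0:
                return f"trap to debugger at {hex(instr[1])}"
            return f"resume: executing saved '{breakpoint_table[instr[1]]}'"
        return f"executing '{instr}'"

    set_breakpoint(0x40)
    print(execute(0x40, status_bit=0))  # first hit: stop the running program
    print(execute(0x40, status_bit=1))  # after debugging: run the saved original
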
  • Publication number: 20120089813
    Abstract: Provided are a computing apparatus based on a reconfigurable architecture and a memory dependence correction method thereof. In one general aspect, a computing apparatus has a reconfigurable architecture. The computing apparatus may include: a reconfiguration unit having processing elements configured to reconfigure data paths between one or more of the processing elements; a compiler configured to analyze instructions to generate reconfiguration information for reconfiguring one or more of the reconfigurable data paths; a configuration memory configured to store the reconfiguration information; and a processor configured to execute the instructions through the reconfiguration unit, and to correct at least one memory dependency among the processing elements.
    Type: Application
    Filed: July 7, 2011
    Publication date: April 12, 2012
    Inventors: Tai-Song Jin, Dong-Hoon Yoo, Bernhard Egger
  • Patent number: 8051274
    Abstract: The description relates to an instruction fetch technology of a processor that processes a plurality of instructions in parallel. The processor exploits the use of a compression code fetched during a previous clock cycle when fetching compressed instructions from a program memory and creating an instruction bundle consisting of a sequence of instructions to be processed in parallel. A compression buffer is interposed between the program memory and an instruction decompression unit, such that a compression code read in a previous clock cycle is ready at the beginning of a decompression cycle of the subsequent instruction bundle thereby avoiding a delay due to memory read latency.
    Type: Grant
    Filed: May 18, 2009
    Date of Patent: November 1, 2011
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sang-suk Lee, Tai-song Jin
  • Publication number: 20110238963
    Abstract: A reconfigurable array is provided. The reconfigurable array includes a Very Long Instruction Word (VLIW) mode and a Coarse-Grained Array (CGA) mode. When the VLIW mode is converted to the CGA mode, instead of sharing a central register file between the VLIW mode and the CGA mode, live data to be used in the CGA mode is copied from the central register file to local register files.
    Type: Application
    Filed: December 8, 2010
    Publication date: September 29, 2011
    Inventors: Won-Sub Kim, Tai-Song Jin, Dong-Hoon Yoo, Bernhard Egger, Jin-Seok Lee
  • Publication number: 20110231627
    Abstract: A memory managing apparatus and method are provided. The memory managing apparatus may determine, based on a pointer indicator bit, the target memory area on which garbage collection is to be performed, and may perform the garbage collection on the target memory area. The memory managing apparatus may generate the pointer indicator bit and store the generated pointer indicator bit in a pointer field.
    Type: Application
    Filed: November 1, 2010
    Publication date: September 22, 2011
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Bernhard Egger, Tai-Song Jin, Dong-Hoon Yoo, Won-Sub Kim, Sun-Hwa Kim, Hee-Jin Ahn
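
    The pointer indicator bit described in this entry can be pictured as a per-field flag the collector consults so it only follows fields that are actually pointers when marking the target memory area. The object layout and marking loop below are a deliberately simplified assumption, not the application's memory format.

    # Hedged sketch: mark phase that follows only fields whose pointer indicator bit is set.
    heap = {
        "A": [(1, "B"), (0, 42)],   # each field is (pointer_indicator_bit, value)
        "B": [(0, 7)],
        "C": [(0, 99)],             # unreachable from the root
    }

    def mark(root):
        """Mark every object reachable from root, following only pointer fields."""
        live, stack = set(), [root]
        while stack:
            obj = stack.pop()
            if obj in live:
                continue
            live.add(obj)
            for pointer_bit, value in heap[obj]:
                if pointer_bit:          # the indicator bit says "this field is a pointer"
                    stack.append(value)
        return live

    live = mark("A")
    print(sorted(live), sorted(set(heap) - live))   # -> ['A', 'B'] ['C']
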
  • Publication number: 20110225399
    Abstract: A processor for supporting a MIMO operation and a method of processing a MIMO instruction are provided. The MIMO operation supporting processor may include a scheduler and at least one functional unit. The scheduler may map multiple inputs of the MIMO instruction to a plurality of sequential input cycles, respectively, and may map multiple outputs of the MIMO instruction to a plurality of sequential output cycles, respectively. The output cycles may follow the input cycles and a predetermined number of cycles for the MIMO operation. A functional unit may read a register during the sequential input cycles, may perform the MIMO operation during a predetermined number of execution cycles, and may write the result of the MIMO operation into a register during the sequential output cycles.
    Type: Application
    Filed: December 9, 2010
    Publication date: September 15, 2011
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Tai-Song Jin, Dong-Hoon Yoo, Bernhard Egger, Won-Sub Kim
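
    The cycle mapping in this entry can be sketched as follows: the multiple inputs are read over consecutive input cycles, a fixed number of execution cycles follow, and the multiple outputs are written over consecutive output cycles. The cycle numbering and latencies below are assumptions made purely for the example.

    # Hedged sketch: map a MIMO instruction's inputs and outputs onto sequential cycles.
    def schedule_mimo(issue_cycle, num_inputs, exec_cycles, num_outputs):
        """Return (input_cycles, execution_cycles, output_cycles) for one MIMO instruction."""
        input_cycles = list(range(issue_cycle, issue_cycle + num_inputs))
        execution = list(range(input_cycles[-1] + 1, input_cycles[-1] + 1 + exec_cycles))
        output_cycles = list(range(execution[-1] + 1, execution[-1] + 1 + num_outputs))
        return input_cycles, execution, output_cycles

    # A 2-input, 3-output MIMO operation issued at cycle 10 with a 4-cycle operation:
    print(schedule_mimo(10, 2, 4, 3))   # -> ([10, 11], [12, 13, 14, 15], [16, 17, 18])
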
  • Publication number: 20110202749
    Abstract: An instruction compressing apparatus and method for a parallel processing computer such as a very long instruction word (VLIW) computer, are provided. The instruction compressing apparatus includes a bundle code generating unit, an instruction compressing unit, and an instruction converting unit. The bundle code generating unit may generate a bundle code in response to an input of instructions to be compressed. The bundle code may indicate whether a current instruction group is terminated, and also whether an instruction group following the current instruction group is a no-operation (NOP) instruction group. The instruction compressing unit may remove a NOP instruction and/or a NOP instruction group from the input instructions according to the generated bundle code. The instruction converting unit may include the generated bundle code in the remaining instructions which have not been removed by the instruction compressing unit.
    Type: Application
    Filed: October 26, 2010
    Publication date: August 18, 2011
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Tai-Song Jin, Dong-Hoon Yoo, Bernhard Egger, Won-Sub Kim, Jin-Seok Lee, Sun-Hwa Kim, Hee-Jin Ahn
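
    The bundle-code scheme in this entry (and the related compression-bit idea in publication 20130067207 above) can be sketched as follows: whole-NOP instruction groups are deleted, NOP slots are dropped, and each surviving instruction carries a small code recording whether it ends its group and whether the group that followed was all NOPs. The two-bit encoding below is an assumption made for the example, not the published format.

    # Hedged sketch: remove NOPs and attach a bundle code to the surviving instructions.
    NOP = "nop"

    def compress(groups):
        """groups: list of instruction groups (lists of slots).
        Returns (instruction, bundle_code) pairs with NOPs and NOP groups removed."""
        out = []
        for idx, group in enumerate(groups):
            if all(slot == NOP for slot in group):
                continue                            # whole-NOP group: deleted outright
            kept = [slot for slot in group if slot != NOP]
            next_is_nop_group = (idx + 1 < len(groups)
                                 and all(s == NOP for s in groups[idx + 1]))
            for pos, instr in enumerate(kept):
                end_of_group = pos == len(kept) - 1
                code = (int(end_of_group) << 1) | int(end_of_group and next_is_nop_group)
                out.append((instr, code))
        return out

    groups = [["add", NOP, "mul"], [NOP, NOP, NOP], ["ld", "st", NOP]]
    for instr, code in compress(groups):
        print(f"{instr:3s} bundle_code={code:02b}")
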
  • Publication number: 20100205405
    Abstract: A static branch prediction method and code execution method for a pipeline processor, and a code compiling method for static branch prediction, are provided herein. The static branch prediction method includes predicting a conditional branch code as taken or not-taken, adding the prediction information, converting the conditional branch code into a jump target address setting (JTS) code including target address information, branch time information, and a test code, and scheduling codes in a block. The test code may be scheduled into the last slot of the block, and the JTS code may be scheduled into an empty slot after all the other codes in the block are scheduled. When the conditional branch code is predicted as taken in the prediction operation, a target address indicated by the target address information may be fetched at a cycle time indicated by the branch time information.
    Type: Application
    Filed: January 25, 2010
    Publication date: August 12, 2010
    Inventors: Tai-song Jin, Dong-kwan Suh, Suk-jin Kim
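
    A rough sketch of the conversion described in this entry: when a conditional branch is predicted taken, it is split into a JTS-style code that carries the target address and the cycle at which to branch, plus a test code left in the last slot of the block to verify the prediction. The slot layout and field names below are assumptions made for the illustration.

    # Hedged sketch: convert a predicted-taken conditional branch into a JTS code plus a test code.
    def convert_block(block, predicted_taken, target_addr, branch_cycle):
        """block: list of codes for one basic block, ending in a conditional branch."""
        body, cond_branch = block[:-1], block[-1]
        if not predicted_taken:
            return body + [cond_branch]      # predicted not-taken: leave the branch in place
        jts = {"op": "jts", "target": target_addr, "branch_time": branch_cycle}
        test = {"op": "test", "verifies": cond_branch}   # goes into the last slot
        # The JTS code would go into an empty slot once everything else is placed;
        # prepending it keeps this sketch short.
        return [jts] + body + [test]

    scheduled = convert_block(["i0", "i1", "i2", "beq r1, r2, L1"],
                              predicted_taken=True, target_addr="L1", branch_cycle=3)
    for slot, code in enumerate(scheduled):
        print(slot, code)
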
  • Publication number: 20100088536
    Abstract: The description relates to an instruction fetch technology of a processor that processes a plurality of instructions in parallel. The processor exploits the use of a compression code fetched during a previous clock cycle when fetching compressed instructions from a program memory and creating an instruction bundle consisting of a sequence of instructions to be processed in parallel. A compression buffer is interposed between the program memory and an instruction decompression unit, such that a compression code read in a previous clock cycle is ready at the beginning of a decompression cycle of the subsequent instruction bundle thereby avoiding a delay due to memory read latency.
    Type: Application
    Filed: May 18, 2009
    Publication date: April 8, 2010
    Inventors: Sang-suk Lee, Tai-song Jin