Patents by Inventor Seow Lim

Seow Lim has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20070074012
    Abstract: Systems and methods for recording instruction sequences in a microprocessor having a dynamically decoupleable extended instruction pipeline. A record instruction including a record start address is sent to the extended pipeline, which then records the subsequent instruction sequence at the specified address until an end record instruction is encountered; the end record instruction is recorded as the last instruction in the sequence. The main pipeline may then invoke the sequence by sending the extended pipeline a run instruction containing the desired sequence's start address, causing the extended pipeline to execute the recorded sequence autonomously. When the end record instruction is reached, the extended pipeline ceases autonomous execution and returns to executing instructions supplied by the main pipeline.
    Type: Application
    Filed: September 28, 2006
    Publication date: March 29, 2007
    Inventors: Carl Graham, Simon Jones, Seow Lim, Yazid Nemouchi, Kar-Lik Wong, Aris Aristodemou
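The record/run mechanism described in the abstract can be illustrated with a small software model. This is only a sketch, not the patented hardware implementation; all names (`ExtendedPipeline`, `RECORD`, `END_RECORD`, `RUN`) are hypothetical.

```python
# Software model of an extended pipeline that records an instruction sequence
# at a given address and later replays it autonomously on a run instruction.
class ExtendedPipeline:
    def __init__(self):
        self.memory = {}          # recorded sequences, keyed by start address
        self.recording_at = None  # address being written during a recording
        self.log = []             # instructions actually executed

    def issue(self, instr, operand=None):
        if self.recording_at is not None:
            # While recording, store instructions instead of executing them.
            self.memory[self.recording_at].append(instr)
            if instr == "END_RECORD":
                self.recording_at = None  # END_RECORD is stored as the last entry
            return
        if instr == "RECORD":
            self.recording_at = operand   # operand = record start address
            self.memory[operand] = []
        elif instr == "RUN":
            # Autonomously execute the recorded sequence until END_RECORD.
            for stored in self.memory[operand]:
                if stored == "END_RECORD":
                    break                 # return control to the main pipeline
                self.log.append(stored)
        else:
            self.log.append(instr)

pipe = ExtendedPipeline()
pipe.issue("RECORD", 0x100)
pipe.issue("ADD")
pipe.issue("MUL")
pipe.issue("END_RECORD")
pipe.issue("RUN", 0x100)
print(pipe.log)  # -> ['ADD', 'MUL']
```

The key property the model captures is that the end record instruction is stored with the sequence and acts as the replay terminator.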
  • Publication number: 20070074004
    Abstract: Systems and methods for selectively decoupling a parallel extended processor pipeline. A main processor pipeline and a parallel extended pipeline are coupled via an instruction queue. The main pipeline can instruct the parallel pipeline to execute instructions directly or to begin fetching and executing its own instructions autonomously. During autonomous operation of the parallel pipeline, instructions from the main pipeline accumulate in the instruction queue. The parallel pipeline can return to main-pipeline-controlled execution through a single instruction. A lightweight mechanism, in the form of a condition code visible to the main processor, allows an intelligent run-time decision, based on the queue status, about whether further instructions should be issued to the parallel extended pipeline, maximizing overall performance.
    Type: Application
    Filed: September 28, 2006
    Publication date: March 29, 2007
    Inventors: Kar-Lik Wong, Carl Graham, Seow Lim, Simon Jones, Yazid Nemouchi, Aris Aristodemou
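A minimal sketch of the queue-decoupling scheme above, assuming a fixed-depth queue and a boolean "queue full" flag standing in for the condition code; the names are hypothetical, not the patented design.

```python
# Model of a parallel extended pipeline decoupled from the main pipeline by an
# instruction queue, with a queue-full condition code the main pipeline tests
# before issuing further work.
from collections import deque

class ParallelPipeline:
    def __init__(self, queue_depth=2):
        self.queue = deque()
        self.depth = queue_depth
        self.autonomous = False
        self.executed = []

    def queue_full(self):
        # Plays the role of the condition code seen by the main processor.
        return len(self.queue) >= self.depth

    def issue(self, instr):
        # The main pipeline checks the condition code to decide at run time
        # whether issuing another instruction is worthwhile.
        if self.queue_full():
            return False
        self.queue.append(instr)
        if not self.autonomous:
            self.drain()               # directly-controlled execution
        return True

    def drain(self):
        while self.queue:
            self.executed.append(self.queue.popleft())

    def start_autonomous(self):
        # Pipeline fetches and executes on its own; issued instructions
        # accumulate in the queue meanwhile.
        self.autonomous = True

    def end_autonomous(self):
        # A single instruction returns the pipeline to main-pipeline control.
        self.autonomous = False
        self.drain()

pipe = ParallelPipeline(queue_depth=2)
pipe.start_autonomous()
pipe.issue("ADD")
pipe.issue("MUL")
accepted = pipe.issue("SUB")           # queue full: condition code says back off
pipe.end_autonomous()
```

After `end_autonomous()`, the two queued instructions execute and the rejected third issue is visible to the main pipeline via the returned flag.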
  • Publication number: 20070073925
    Abstract: Systems and methods for synchronizing multiple processing engines of a microprocessor. In a microprocessor employing processor extension logic, DMA engines permit the processor extension logic to move data into and out of local memory independently of the main instruction pipeline. Synchronization between the extended instruction pipeline and the DMA engines maximizes simultaneous operation of these elements. The DMA engines include a data-in engine and a data-out engine, each adapted to buffer at least one instruction in a queue. If a DMA engine's queue is full when a new instruction tries to enter the buffer, the engine causes the extended pipeline to pause execution until the current DMA operation is complete. This prevents data overwrites while maximizing simultaneous operation.
    Type: Application
    Filed: September 28, 2006
    Publication date: March 29, 2007
    Inventors: Seow Lim, Carl Graham, Kar-Lik Wong, Simon Jones, Aris Aristodemou
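The stall-on-full behavior in this abstract can be sketched as follows. This is an illustrative model only, with hypothetical names; the patented hardware is not shown.

```python
# Each DMA engine buffers at most one instruction; submitting to a full queue
# models the extended pipeline pausing until the in-flight DMA operation
# completes, preventing the buffered transfer from being overwritten.
class DMAEngine:
    def __init__(self):
        self.queue = []        # one-deep instruction buffer
        self.stalls = 0        # how often the extended pipeline had to pause
        self.completed = []

    def submit(self, instr):
        if self.queue:
            # Buffer occupied: stall until the current DMA operation finishes.
            self.stalls += 1
            self.complete_current()
        self.queue.append(instr)

    def complete_current(self):
        if self.queue:
            self.completed.append(self.queue.pop(0))

data_in = DMAEngine()          # a data-out engine would work the same way
data_in.submit("move blockA -> local")
data_in.submit("move blockB -> local")   # queue full: forces one stall
data_in.complete_current()
```

With a one-deep queue, the extended pipeline only pauses when it gets a full DMA instruction ahead of the engine, which is what lets computation and data movement overlap the rest of the time.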
  • Publication number: 20050278505
    Abstract: A microprocessor architecture including a predictive prefetch XY memory pipeline, parallel to the processor's pipeline, for processing compound instructions with enhanced processor performance through predictive prefetch techniques. Instruction operands are predictively prefetched from X and Y memory based on the historical use of operands in instructions that target XY memory. After the compound instruction is decoded in the pipeline, the prefetched operand pointer, address, and data are reconciled with the operands contained in the actual instruction. If the actual data has been prefetched, it is passed to the appropriate execute unit in the execute stage of the processor pipeline. As a result, when the prediction is correct, the correct data can be selected and fed to the execution stage without any additional processor overhead. This prefetch mechanism avoids the need to slow the processor clock or to insert stalls for each compound instruction when using XY memory.
    Type: Application
    Filed: May 19, 2005
    Publication date: December 15, 2005
    Inventors: Seow Lim, Kar-Lik Wong
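The predict-then-reconcile flow can be modeled in a few lines. This is a sketch under simplifying assumptions (history is just the last address each pointer register used; only X memory is shown), and every name here is hypothetical rather than the patented implementation.

```python
# Operands are prefetched from XY memory at an address predicted from each
# pointer register's last use; after decode, the prediction is reconciled
# with the actual operand address.
class XYPrefetcher:
    def __init__(self, x_mem):
        self.x_mem = x_mem
        self.history = {}     # pointer register -> last address it accessed
        self.predicted = {}   # pointer register -> (address, prefetched data)

    def prefetch(self, reg):
        # Predict from history; a real predictor would also account for the
        # register's address-update behavior.
        addr = self.history.get(reg, 0)
        self.predicted[reg] = (addr, self.x_mem[addr])

    def resolve(self, reg, actual_addr):
        # After decode: reconcile the prefetched address with the actual one.
        self.history[reg] = actual_addr
        pred = self.predicted.pop(reg, None)
        if pred is not None and pred[0] == actual_addr:
            return pred[1], True               # hit: data goes straight to execute
        return self.x_mem[actual_addr], False  # miss: fetch without the shortcut

x_mem = [10, 20, 30]
pf = XYPrefetcher(x_mem)
pf.prefetch("ax")                 # predicts address 0 (no history yet)
data, hit = pf.resolve("ax", 0)   # correct prediction: no added overhead
```

On a hit the prefetched data is handed directly to the execute stage; on a miss the model falls back to an in-line fetch, which is where a real pipeline would pay the stall the abstract says the prediction avoids.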