Patents by Inventor Lik Wong

Lik Wong has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20070192384
    Abstract: Computer-implemented methods and computer systems for automatically managing stored checkpoint data are described. The method includes accessing a first user-defined time period. The first user-defined time period is related to a plurality of stored checkpoint data, and each checkpoint data of the plurality of stored checkpoint data has an associated storage time. Further, the method includes identifying a first set of checkpoint data having storage times that are within the first user-defined time period. Moreover, the method includes identifying a second set of checkpoint data having storage times that are older than the first user-defined time period. In addition, the method includes pruning the second set of checkpoint data according to a user-specified process in proportion to the storage time of each checkpoint data of the second set. Older stored checkpoint data is pruned more heavily than recent stored checkpoint data. (A minimal sketch of such age-proportional pruning appears after this listing.)
    Type: Application
    Filed: February 2, 2006
    Publication date: August 16, 2007
    Applicant: ORACLE INTERNATIONAL CORPORATION
    Inventors: Neeraj Shodhan, Qinqin Wang, Lik Wong, Joydip Kundu
  • Publication number: 20070083569
    Abstract: Data consistency in the context of information sharing requires maintenance of dependencies among the information being shared. Transactional dependency ordering is implemented in a database system message queue by associating a unique system commit time with each transactional message group. Read consistency is implemented in such a queue by allowing only messages with a fully determined order to be visible. A fully determined order is implemented through the use of a high watermark, which guarantees that future transactions, for which messages are entering the queue, have commit times that are greater than the current high watermark. Therefore, only messages below the current high watermark are visible and can be dequeued, with no chance of other new messages enqueuing below the current high watermark. (A minimal sketch of this high-watermark visibility rule appears after this listing.)
    Type: Application
    Filed: October 7, 2005
    Publication date: April 12, 2007
    Inventors: Lik Wong, Hung Tran, James Stamos
  • Publication number: 20070083530
    Abstract: Systems and methods for providing a one-step API that executes a series of atomic transactions in a database system. In one implementation, each atomic transaction is associated with a forward block of code that effects changes, an undo block of code that reverses the changes made by the forward block, and a state block of code that mimics successful execution of the forward block by setting internal states. In the event of a failure, the forward blocks, undo blocks, and state blocks can be used to roll forward or roll back changes as a whole. In one implementation, a one-step API for replicating data in a database is provided. (A minimal sketch of this forward/undo/state pattern appears after this listing.)
    Type: Application
    Filed: October 10, 2005
    Publication date: April 12, 2007
    Applicant: ORACLE INTERNATIONAL CORPORATION
    Inventors: Anand Lakshminath, Lik Wong, James Stamos, Alan Downing
  • Publication number: 20070083563
    Abstract: To export source tablespaces, an auxiliary database system is created and started with a minimum configuration. Copies of versions of the source tablespaces are restored from database backups to the auxiliary database system. A copy of a version of a tablespace is referred to herein as a tablespace instance. The tablespace instances restored from database backups are recovered to a particular point-in-time. A script is then generated. The script can be executed by a database server of the destination database to import the tablespace instances.
    Type: Application
    Filed: October 7, 2005
    Publication date: April 12, 2007
    Inventors: Benny Souder, James Stamos, Hung Tran, Francisco Sanchez, Lik Wong
  • Publication number: 20070074004
    Abstract: Systems and methods for selectively decoupling a parallel extended processor pipeline. A main processor pipeline and a parallel extended pipeline are coupled via an instruction queue. The main pipeline can instruct the parallel pipeline to execute instructions directly or to begin fetching and executing its own instructions autonomously. During autonomous operation of the parallel pipeline, instructions from the main pipeline accumulate in the instruction queue. The parallel pipeline can return to main-pipeline-controlled execution through a single instruction. A lightweight mechanism, in the form of a condition code visible to the main processor, allows an intelligent run-time decision, based on the queue status, about whether further instructions should be issued to the parallel extended pipeline, maximizing overall performance.
    Type: Application
    Filed: September 28, 2006
    Publication date: March 29, 2007
    Inventors: Kar-Lik Wong, Carl Graham, Seow Lim, Simon Jones, Yazid Nemouchi, Aris Aristodemou
  • Publication number: 20070074007
    Abstract: A parameterizable clip instruction for a SIMD microprocessor architecture, and a method of performing a clip operation using the same. A single instruction is provided with three input operands: a destination address, a source address, and a controlling parameter. The controlling parameter includes a range type and a range specifier. The range type is a multi-bit integer in the operand that is used to index a table of range types. The range specifier plugs into the range type to define a range. The data input at the source address is clipped according to the controlling parameters. The instruction is particularly suited to video encoding/decoding applications in which the result of an interpolation or other calculation lies outside the maximum value and must be clipped to a saturation value, for example, the maximum pixel value. Signed and unsigned clipping ranges may be used, and the ranges are not limited to powers of two. (A minimal sketch of such saturating clipping appears after this listing.)
    Type: Application
    Filed: September 28, 2006
    Publication date: March 29, 2007
    Inventors: Nigel Topham, Yazid Nemouchi, Simon Jones, Carl Graham, Kar-Lik Wong, Aris Aristodemou
  • Publication number: 20070073925
    Abstract: Systems and methods for synchronizing multiple processing engines of a microprocessor. In a microprocessor employing processor extension logic, DMA engines are used to permit the processor extension logic to move data into and out of local memory independently of the main instruction pipeline. Synchronization between the extended instruction pipeline and the DMA engines is performed to maximize simultaneous operation of these elements. The DMA engines include a data-in engine and a data-out engine, each adapted to buffer at least one instruction in a queue. If the queue for a DMA engine is full and a new instruction is trying to enter the buffer, that DMA engine will cause the extended pipeline to pause execution until the current DMA operation is complete. This prevents data overwrites while maximizing simultaneous operation.
    Type: Application
    Filed: September 28, 2006
    Publication date: March 29, 2007
    Inventors: Seow Lim, Carl Graham, Kar-Lik Wong, Simon Jones, Aris Aristodemou
  • Publication number: 20070071106
    Abstract: Two pairs of deblock instructions for performing deblock filtering on a horizontal row of pixels according to the H.264 (MPEG 4 part 10) and VC1 video codec algorithms. The first instruction of each pair has three 128-bit operands comprising the 16-bit components of a horizontal line of 8 pixels crossing a vertical block edge between pixels 4 and 5 in a YUV image, a series of filter threshold parameters, and a 128-bit destination operand for storing the output of the first instruction. The second instruction of each pair accepts the same 16-bit components as its first input, the output of the first instruction as its second input and a destination operand for storing an output of the second instruction as its third input. The instruction pairs are intended for use with the H.264 or VC1 video codecs respectively.
    Type: Application
    Filed: September 28, 2006
    Publication date: March 29, 2007
    Inventors: Carl Graham, Kar-Lik Wong, Simon Jones, Aris Aristodemou, Yazid Nemouchi
  • Publication number: 20070070080
    Abstract: A data path for a SIMD-based microprocessor is used to perform different simultaneous filter sub-operations in parallel data lanes of the SIMD-based microprocessor. Filter operations for sub-pixel interpolation are performed simultaneously on separate lanes of the SIMD processor's data path. Using a dedicated internal data path, precision higher than the native precision of the SIMD unit may be achieved. Through the data path according to this invention, a single instruction may be used to generate the value of two adjacent sub-pixels located diagonally with respect to integer pixel positions.
    Type: Application
    Filed: September 28, 2006
    Publication date: March 29, 2007
    Inventors: Carl Graham, Kar-Lik Wong, Simon Jones, Aris Aristodemou
  • Publication number: 20070074012
    Abstract: Systems and methods for recording instruction sequences in a microprocessor having a dynamically decoupleable extended instruction pipeline. A record instruction including a record start address is sent to the extended pipeline. The extended pipeline thus begins recording the subsequent instruction sequence at the specified address until an end record instruction is encountered. The end record instruction is recorded as the last instruction in the sequence. The main pipeline may then call the instruction sequence by sending a run instruction including the start address for the desired sequence to the extended pipeline. This run instruction causes the extended pipeline to begin autonomously executing the recorded sequence until the end record instruction is encountered. This instruction causes the extended pipeline to cease autonomous execution and to return to executing instructions supplied by the main pipeline.
    Type: Application
    Filed: September 28, 2006
    Publication date: March 29, 2007
    Inventors: Carl Graham, Simon Jones, Seow Lim, Yazid Nemouchi, Kar-Lik Wong, Aris Aristodemou
  • Patent number: 7162689
    Abstract: Schema evolution involves defining flavored object groups. Specifically, related collections of replicated tables and other database objects, which are defined as belonging to an object group, are given different “flavors.” A flavor describes different subsets of the objects and even different subsets of the columns in the master tables. In one embodiment, when one site in a distributed database system propagates changes to a replicated database object, the current flavor for the site is also transmitted, so that the destination site can make the necessary adjustments in the uploaded changes by dropping the values for obsolete columns and using default values for new columns. (A minimal sketch of this flavor-aware adjustment appears after this listing.)
    Type: Grant
    Filed: May 28, 1999
    Date of Patent: January 9, 2007
    Assignee: Oracle International Corporation
    Inventors: Alan J. Demers, Curtis Elsbernd, James William Stamos, Lik Wong
  • Publication number: 20060224626
    Abstract: Techniques are provided for managing electronic items by storing, within a file group repository, metadata that identifies (a) a plurality of file groups, (b) for each file group, a set of one or more file group versions for the file group, and (c) for each file group version of each file group, a set of one or more items that belong to the version of the file group. Once the metadata has been established, queries may be executed against the metadata to request identification of items that belong to a particular version of a particular file group. This file group framework may be used in a variety of contexts, including the management of a centralized tablespace repository, and periodic purging of versions of file collections, where the files within the collections may be spread across multiple repositories.
    Type: Application
    Filed: April 4, 2005
    Publication date: October 5, 2006
    Inventors: Anand Lakshminath, Benny Souder, James Stamos, Lik Wong, Hung Tran
  • Publication number: 20060155789
    Abstract: Techniques for making a replica of a particular group of database objects of a database on a particular node that does not initially have the particular group of database objects include determining whether conditions for copying a full database from a first node are satisfied. If conditions for copying the full database from the first node are not satisfied, then a database-object-copy routine is employed for each database object in the particular group of database objects. If conditions for copying the full database from the first node are satisfied, then a full-database-copy routine for performing a copy of an entire database is employed.
    Type: Application
    Filed: March 1, 2006
    Publication date: July 13, 2006
    Inventors: Lik Wong, Alan Demers, James Stamos
  • Publication number: 20060149799
    Abstract: Techniques for making a replica of a particular group of database objects on a particular node of a network include receiving, during a transfer period, a first copy of the particular group of objects at the particular node from a first node on the network. The particular node receives, from a second node on the network, data indicating changes to the particular group of database objects on the second node, where the changes indicated in the data are changes that were made at the second node during the transfer period. The first copy of the particular group of database objects is modified based on the data indicating changes.
    Type: Application
    Filed: March 1, 2006
    Publication date: July 6, 2006
    Inventors: Lik Wong, Alan Demers, James Stamos
  • Patent number: 7039669
    Abstract: Techniques for making a replica of a particular group of database objects of a database on a particular node that does not initially have the particular group of database objects include transferring description data from a first node to the particular node during a first time period. The description data describes the particular group of database objects at a first time. The first time period begins at the first time and ends at a second time. During the first time period, a request from a user of the database to perform an operation involving particular data in the particular group of database objects is processed.
    Type: Grant
    Filed: September 28, 2001
    Date of Patent: May 2, 2006
    Assignee: Oracle Corporation
    Inventors: Lik Wong, Alan J. Demers, James W. Stamos
  • Publication number: 20050289323
    Abstract: A 2N-bit right-only barrel shifter for a microprocessor comprising upper and lower N-bit shifter portions. An N-bit input is placed in the upper portion. An X-bit right shift of the N-bit number yields the result of the right shift in the N-bit upper portion and the result of an (N-X)-bit left shift in the lower portion. The N-bit shifter is comprised of a log2(N)-stage multiplexer, wherein each successive stage of the multiplexer adds 2^x additional bits, where x increments from 0 to log2(N)-1. (A minimal sketch of this shift identity appears after this listing.)
    Type: Application
    Filed: May 19, 2005
    Publication date: December 29, 2005
    Inventors: Kar-Lik Wong, Nigel Topham
  • Publication number: 20050278513
    Abstract: A hybrid branch prediction scheme for a multi-stage pipelined microprocessor that combines features of static and dynamic branch prediction to reduce complexity and enhance performance over conventional branch prediction techniques. Prior to microprocessor deployment, a branch prediction table is populated using static branch prediction techniques by executing instructions analogous to those to be executed during microprocessor deployment. The branch prediction table is stored, and then loaded into the BPU during deployment, for example, at the time of microprocessor power-on. Dynamic branch prediction is then performed using the pre-loaded data, thereby enabling dynamic branch prediction without the usual “warm-up” period. After each branch is resolved in the selection stage of the microprocessor instruction pipeline, the BPU is updated with the address of the next instruction that resulted from that branch, to enhance performance. (A minimal sketch of this pre-loaded-then-dynamic scheme appears after this listing.)
    Type: Application
    Filed: May 19, 2005
    Publication date: December 15, 2005
    Inventors: Aris Aristodemou, Rich Fuhler, Kar-Lik Wong
  • Publication number: 20050278517
    Abstract: A method of performing branch prediction in a microprocessor using variable length instructions is provided. An instruction is fetched from memory based on a specified fetch address and a branch prediction is made based on the address. The prediction is selectively discarded if the look-up was based on a non-sequential fetch to an unaligned instruction address and a branch target alignment cache (BTAC) bit of the instruction is equal to zero. In order to remove the inherent latency of branch prediction, an instruction prior to a branch instruction may be fetched concurrently with a branch prediction unit look-up table entry containing prediction information for a next instruction word. Then, the branch instruction is fetched and a prediction is made on this branch instruction based on information fetched in the previous cycle. The predicted target instruction is fetched on the next clock cycle.
    Type: Application
    Filed: May 19, 2005
    Publication date: December 15, 2005
    Inventors: Kar-Lik Wong, James Hakewill, Nigel Topham, Rich Fuhler
  • Publication number: 20050278505
    Abstract: A microprocessor architecture including a predictive pre-fetch XY memory pipeline that runs in parallel with the processor's pipeline, for processing compound instructions with enhanced processor performance through predictive prefetch techniques. Instruction operands are predictively prefetched from X and Y memory based on the historical use of operands in instructions that target X and Y memory. After the compound instruction is decoded in the pipeline, the pre-fetched operand pointer, address, and data are reconciled with the operands contained in the actual instruction. If the actual data has been pre-fetched, it is passed to the appropriate execute unit in the execute stage of the processor pipeline. As a result, if the prediction is correct, the correct data can be selected and fed to the execution stage without any additional processor overhead. This pre-fetch mechanism avoids the need to slow down the clock speed of the processor or to insert stalls for each compound instruction when using XY memory.
    Type: Application
    Filed: May 19, 2005
    Publication date: December 15, 2005
    Inventors: Seow Lim, Kar-Lik Wong
  • Publication number: 20050273559
    Abstract: A microprocessor architecture including a unified cache debug unit. A debug unit on the processor chip receives data/command signals from a unit of the execute stage of the multi-stage instruction pipeline of the processor and returns information to the execute stage unit. The cache debug unit is operatively connected to both instruction and data cache units of the microprocessor. The memory subsystem of the processor may be accessed by the cache debug unit through either of the instruction or data cache units. By unifying the cache debug in a separate structure, the need for redundant debug structure in both cache units is obviated. Also, the unified cache debug unit can be powered down when not accessed by the instruction pipeline, thereby saving power.
    Type: Application
    Filed: May 19, 2005
    Publication date: December 8, 2005
    Inventors: Aris Aristodemou, Daniel Hansson, Morgyn Taylor, Kar-Lik Wong
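
The sketches below are informal illustrations of a few of the techniques summarized in the listing above. They are not the patented implementations; every function name, parameter, and policy choice in them is invented for illustration. This first Python sketch shows one plausible reading of publication 20070192384: checkpoints inside a user-defined retention window are all kept, while older checkpoints are thinned more aggressively the older they are (the hour/day bucket sizes are assumptions, not from the patent).

    import time

    def prune_checkpoints(checkpoints, retention_window_s, now=None):
        """Return the checkpoints to keep.

        checkpoints        -- list of (checkpoint_id, storage_time) tuples
        retention_window_s -- user-defined period; everything this recent is kept
        Older checkpoints are kept at a density that falls off with age:
        one per hour for the first day beyond the window, one per day after that.
        """
        now = time.time() if now is None else now
        recent, kept_old, seen_buckets = [], [], set()
        for cp_id, stored_at in sorted(checkpoints, key=lambda c: -c[1]):
            age = now - stored_at
            if age <= retention_window_s:
                recent.append((cp_id, stored_at))        # inside the window: keep all
                continue
            beyond = age - retention_window_s
            if beyond < 86400:
                bucket = ("hour", int(beyond // 3600))   # first day past the window
            else:
                bucket = ("day", int(beyond // 86400))   # older than that
            if bucket not in seen_buckets:               # keep one representative per bucket
                seen_buckets.add(bucket)
                kept_old.append((cp_id, stored_at))
        return recent + kept_old

    if __name__ == "__main__":
        now = time.time()
        cps = [(i, now - i * 1800) for i in range(100)]  # one checkpoint every 30 minutes
        kept = prune_checkpoints(cps, retention_window_s=6 * 3600, now=now)
        print(len(cps), "stored,", len(kept), "kept after pruning")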
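
This sketch illustrates the visibility rule described in publication 20070083569: queued messages become dequeueable only once their commit time falls below a high watermark, so no later message can ever appear behind one that has already been made visible. The queue structure and the way the watermark advances are simplified assumptions.

    import heapq

    class CommitOrderedQueue:
        """Toy message queue that exposes messages in commit-time order.

        Only messages whose commit time is strictly below the current high
        watermark are visible; the watermark is only ever raised, and the
        surrounding system promises not to enqueue new messages below it.
        """
        def __init__(self):
            self._heap = []          # (commit_time, payload)
            self._high_watermark = 0

        def enqueue(self, commit_time, payload):
            # Invariant assumed from the surrounding system: new commit times
            # are always >= the current high watermark.
            assert commit_time >= self._high_watermark, "would break read consistency"
            heapq.heappush(self._heap, (commit_time, payload))

        def advance_watermark(self, new_watermark):
            self._high_watermark = max(self._high_watermark, new_watermark)

        def dequeue_visible(self):
            """Pop and return all messages whose order is fully determined."""
            out = []
            while self._heap and self._heap[0][0] < self._high_watermark:
                out.append(heapq.heappop(self._heap))
            return out

    if __name__ == "__main__":
        q = CommitOrderedQueue()
        q.enqueue(105, "txn A")
        q.enqueue(110, "txn B")
        print(q.dequeue_visible())   # [] -- nothing is below the watermark yet
        q.advance_watermark(108)     # guarantees future commits are >= 108
        print(q.dequeue_visible())   # [(105, 'txn A')]; txn B is not yet fully determined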
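
This sketch shows the general shape of the forward/undo/state pattern described in publication 20070083530: each step of a multi-step operation carries a block that applies a change, a block that reverses it, and a block that only records the step as done (used when re-running after a partial failure). The step structure and recovery policy shown here are assumptions for illustration.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Step:
        name: str
        forward: Callable[[], None]    # applies the change
        undo: Callable[[], None]       # reverses the change
        set_state: Callable[[], None]  # marks the step done without re-applying it

    def run_one_step(steps: List[Step], already_done: set) -> None:
        """Execute the series as one logical operation.

        Steps recorded as already done only have their state blocks run; on any
        failure, the undo blocks of every completed step are run in reverse order.
        """
        completed = []
        try:
            for step in steps:
                if step.name in already_done:
                    step.set_state()          # mimic success, set internal state only
                else:
                    step.forward()
                completed.append(step)
        except Exception:
            for step in reversed(completed):  # roll the whole series back
                step.undo()
            raise

    if __name__ == "__main__":
        log = []
        def fail():
            raise RuntimeError("boom")
        steps = [
            Step("create_queue", lambda: log.append("+queue"),
                 lambda: log.append("-queue"), lambda: log.append("queue ok")),
            Step("add_rules", fail,
                 lambda: log.append("-rules"), lambda: log.append("rules ok")),
        ]
        try:
            run_one_step(steps, already_done=set())
        except RuntimeError:
            pass
        print(log)   # ['+queue', '-queue'] -- the completed step was rolled back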
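
This sketch mimics, in scalar Python, the kind of range-table-driven saturating clip described in publication 20070074007. The actual instruction operates on SIMD lanes in hardware; the range-type table and its encoding below are invented for illustration.

    # Hypothetical range-type table: each entry turns a range specifier n into
    # (low, high) clipping bounds. Real encodings are hardware-specific.
    RANGE_TYPES = {
        0: lambda n: (0, (1 << n) - 1),                       # unsigned, n-bit (n=8 gives 0..255 pixels)
        1: lambda n: (-(1 << (n - 1)), (1 << (n - 1)) - 1),   # signed, n-bit
        2: lambda n: (0, n),                                  # arbitrary upper bound, not a power of two
    }

    def clip(values, range_type, range_spec):
        """Clip every element of `values` into the range selected by the control parameter."""
        low, high = RANGE_TYPES[range_type](range_spec)
        return [min(max(v, low), high) for v in values]

    if __name__ == "__main__":
        interpolated = [-7, 12, 300, 255, 1024]                  # e.g. sub-pixel interpolation results
        print(clip(interpolated, range_type=0, range_spec=8))    # [0, 12, 255, 255, 255]
        print(clip(interpolated, range_type=2, range_spec=100))  # [0, 12, 100, 100, 100]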
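
This sketch illustrates the column-adjustment step described in patent 7162689: when a change arrives tagged with the sender's flavor, values for columns the destination no longer carries are dropped, and columns the sender does not yet know about are filled with defaults. Representing a flavor as a plain set of column names plus defaults is an assumption made for illustration.

    def adjust_row_for_flavor(row, sender_columns, dest_columns, dest_defaults):
        """Adapt an uploaded row change from the sender's flavor to the destination's.

        row            -- dict of column -> value as sent (keys are sender_columns)
        sender_columns -- columns present in the sender's flavor
        dest_columns   -- columns present in the destination's flavor
        dest_defaults  -- default values for destination columns
        """
        adjusted = {}
        for col in dest_columns:
            if col in sender_columns:
                adjusted[col] = row[col]            # shared column: take the sent value
            else:
                adjusted[col] = dest_defaults[col]  # new at the destination: use default
        # Columns only the sender has are obsolete here and are simply dropped.
        return adjusted

    if __name__ == "__main__":
        sent = {"id": 7, "name": "widget", "legacy_code": "X"}
        print(adjust_row_for_flavor(
            sent,
            sender_columns={"id", "name", "legacy_code"},
            dest_columns=["id", "name", "category"],
            dest_defaults={"id": None, "name": "", "category": "uncategorized"},
        ))
        # {'id': 7, 'name': 'widget', 'category': 'uncategorized'}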
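
This sketch demonstrates the arithmetic identity exploited by the right-only barrel shifter of publication 20050289323: placing an N-bit value in the upper half of a 2N-bit word and right-shifting by X leaves the right-shift result in the upper half and the (N-X)-bit left-shift result in the lower half. The hardware is a log2(N)-stage multiplexer; the Python below only checks the identity.

    N = 32
    MASK = (1 << N) - 1

    def right_only_shift(value, x):
        """Shift an N-bit value by x using a single 2N-bit right shift.

        Returns (upper, lower): the upper half holds value >> x and the lower
        half holds (value << (N - x)) truncated to N bits.
        """
        wide = (value & MASK) << N          # place the input in the upper N bits
        shifted = wide >> x                 # one right shift of the 2N-bit word
        upper = (shifted >> N) & MASK
        lower = shifted & MASK
        return upper, lower

    if __name__ == "__main__":
        v, x = 0x12345678, 12
        upper, lower = right_only_shift(v, x)
        assert upper == (v >> x) & MASK
        assert lower == (v << (N - x)) & MASK
        print(hex(upper), hex(lower))       # 0x12345 0x67800000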
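
Finally, this sketch models, at a very high level, the hybrid scheme of publication 20050278513: a branch prediction table is first filled offline by profiling a representative run (the static part), then loaded at power-on and updated after each resolved branch (the dynamic part). The table format and update rule are simplified assumptions.

    class BranchPredictionUnit:
        """Toy BPU: maps a branch address to the predicted next-instruction address."""

        def __init__(self, preloaded_table=None):
            # Loading a profile-derived table at power-on avoids the warm-up
            # period in which a purely dynamic predictor has no history.
            self.table = dict(preloaded_table or {})

        def predict(self, branch_addr, fallthrough_addr):
            return self.table.get(branch_addr, fallthrough_addr)

        def resolve(self, branch_addr, actual_next_addr):
            # Dynamic update once the branch outcome is known in the pipeline.
            self.table[branch_addr] = actual_next_addr

    def profile_run(trace):
        """Static phase: build a table from an offline trace of (branch, target) pairs."""
        table = {}
        for branch_addr, taken_target in trace:
            table[branch_addr] = taken_target
        return table

    if __name__ == "__main__":
        table = profile_run([(0x100, 0x180), (0x200, 0x240)])    # offline, pre-deployment
        bpu = BranchPredictionUnit(preloaded_table=table)         # loaded at power-on
        print(hex(bpu.predict(0x100, fallthrough_addr=0x104)))    # 0x180, no warm-up needed
        bpu.resolve(0x100, 0x104)                                 # branch actually fell through
        print(hex(bpu.predict(0x100, fallthrough_addr=0x104)))    # 0x104 after dynamic update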