Tin-Fook Ngai has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
Abstract: Methods and apparatus are disclosed to compile programs to use speculative parallel threads. An example method disclosed herein identifies a set of speculative parallel thread candidates; determines misspeculation cost values for at least some of the speculative parallel thread candidates; selects a set of speculative parallel threads from the set of speculative parallel thread candidates based on the cost values; and generates program code based on the set of speculative parallel threads.
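The selection step described above can be sketched in a few lines. This is a minimal illustration only, assuming each candidate carries an estimated benefit and a misspeculation cost; the function and field names are hypothetical and not taken from the patent.

```python
# Hypothetical sketch of cost-based speculative-thread selection.
# Each candidate is a (name, benefit, misspeculation_cost) tuple.

def select_speculative_threads(candidates, cost_threshold):
    """Keep candidates whose misspeculation cost is acceptable.

    A candidate is selected when its cost stays below the threshold
    and its expected gain (benefit - cost) is positive.
    """
    selected = []
    for name, benefit, cost in candidates:
        if cost < cost_threshold and benefit - cost > 0:
            selected.append(name)
    return selected

candidates = [
    ("loop_A", 100, 20),   # high benefit, low misspeculation cost
    ("loop_B", 30, 80),    # misspeculates often: cost exceeds benefit
    ("call_C", 50, 45),    # marginal gain, still under threshold
]
print(select_speculative_threads(candidates, cost_threshold=60))
# -> ['loop_A', 'call_C']
```

Code for the surviving candidates would then be generated as speculative parallel threads, per the abstract's final step.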
Abstract: Methods and apparatus to predict software values are disclosed. In one example, a method identifies a variable associated with one or more machine readable instructions, determines a predicted value of the variable based on a pattern, generates a value prediction instruction to predict a run-time value using the predicted value of the variable based on the pattern, and combines the value prediction instruction with the one or more machine readable instructions.
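The pattern-based prediction described above can be illustrated with a simple stride detector, one common value-prediction pattern. This sketch assumes a stride pattern over the variable's observed history; the names and the last-value fallback are illustrative, not taken from the patent.

```python
# Illustrative sketch of pattern-based value prediction using a
# constant-stride pattern over observed values.

def predict_next_value(history):
    """Predict the next value of a variable from its observed history.

    If the history follows a constant stride (e.g. 3, 5, 7, ...),
    extrapolate by the stride; otherwise fall back to the last
    observed value.
    """
    if not history:
        return None
    if len(history) < 2:
        return history[-1]
    stride = history[1] - history[0]
    if all(b - a == stride for a, b in zip(history, history[1:])):
        return history[-1] + stride
    return history[-1]  # no recognizable pattern: last-value prediction

print(predict_next_value([3, 5, 7, 9]))  # stride 2 -> predicts 11
print(predict_next_value([4, 4, 4]))     # stride 0 -> predicts 4
```

In the abstract's terms, the predicted value would then be baked into a value prediction instruction and combined with the original instruction stream.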
Abstract: A method and apparatus for enabling the speculative forking of a speculative thread is disclosed. In one embodiment, a speculative fork instruction is conditioned by the results of a fork predictor. The fork predictor may issue predictions as to whether or not a speculative thread would execute desirably. The fork predictor may be implemented as a modified branch predictor circuit, and may have execution history updates entered by a determination of whether or not the execution of a speculative thread was or would have been desirable.
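Since the abstract likens the fork predictor to a modified branch predictor, its behavior can be sketched with the classic two-bit saturating counter used in branch prediction. This is a hedged illustration: the per-site counter table, the default state, and all names are assumptions, not details from the patent.

```python
# Sketch of a fork predictor modeled on a two-bit saturating
# branch-predictor counter, updated by whether executing the
# speculative thread was (or would have been) desirable.

class ForkPredictor:
    """Per-fork-site two-bit counter: predict 'fork' when counter >= 2."""

    def __init__(self):
        self.counters = {}  # fork-site id -> counter in [0, 3]

    def predict(self, site):
        # Unseen sites default to weakly-taken (counter = 2).
        return self.counters.get(site, 2) >= 2

    def update(self, site, was_desirable):
        c = self.counters.get(site, 2)
        c = min(c + 1, 3) if was_desirable else max(c - 1, 0)
        self.counters[site] = c

p = ForkPredictor()
p.update("loop_X", was_desirable=False)
p.update("loop_X", was_desirable=False)
print(p.predict("loop_X"))  # two bad outcomes -> counter 0 -> False
print(p.predict("loop_Y"))  # unseen site -> weakly-taken -> True
```

A speculative fork instruction conditioned on `predict()` would then only spawn a thread at sites with a favorable execution history.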
Abstract: A data processing apparatus, a computer, an article including a machine-accessible medium, and a method of processing data are disclosed. The data processing apparatus may include a pair of pipelines sharing an instruction cache, data cache, and a branch predictor with the second pipeline running ahead of the first pipeline using a data value prediction module. The pipelines may be included in one or more processors and coupled to a memory to form a computer. The method includes executing a plurality of instructions using the pipeline pair, such that when a cache miss is encountered by the second pipeline during execution of a LOAD instruction, the data value prediction module supplies a predicted load value in lieu of a cached value, enabling continued execution of the plurality of instructions by the second pipeline without waiting for the return of the cached value.
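The run-ahead behavior of the second pipeline can be sketched as follows. This is a simplified model, assuming the data cache and value predictor can be represented as lookup tables; all names are hypothetical and not taken from the patent.

```python
# Illustrative sketch of the LOAD path in the run-ahead pipeline: on a
# data-cache miss, the value prediction module supplies a predicted
# load value so execution continues instead of stalling.

def execute_load(address, data_cache, value_predictor):
    """Return (value, from_prediction) for a LOAD instruction."""
    if address in data_cache:
        return data_cache[address], False  # cache hit: use cached value
    # Cache miss: supply a predicted value in lieu of the cached one,
    # so the second pipeline keeps running without waiting on memory.
    predicted = value_predictor.get(address, 0)
    return predicted, True

data_cache = {0x100: 42}
value_predictor = {0x200: 7}

print(execute_load(0x100, data_cache, value_predictor))  # (42, False)
print(execute_load(0x200, data_cache, value_predictor))  # (7, True)
```

In the apparatus described, the first pipeline would later validate predicted values once the real cached values return, discarding run-ahead work on a misprediction.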