Patents by Inventor Gansha Wu

Gansha Wu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 7251671
    Abstract: A method, apparatus, and system are provided for integrating mark bits and allocation bits. According to one embodiment, a single space is allocated for accommodating a mark bit and an allocation bit. The mark bit and the allocation bit are integrated into a mark/allocation bit using the single space allocated. The mark/allocation bit is then used to correspond to an object in the heap. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: March 26, 2004
    Date of Patent: July 31, 2007
    Assignee: Intel Corporation
    Inventors: Gansha Wu, Xin Zhou, Guei-Yuan Lueh
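
The integrated mark/allocation bit can be pictured as a single side bitmap that both the allocator and the collector consult. The C sketch below is a minimal illustration of that idea, not the patented implementation; the granule size, the bitmap layout, and the clear-before-mark policy described in the comments are assumptions.

```c
#include <stdint.h>
#include <stddef.h>

#define GRANULE      8u                      /* bytes covered by one bit (assumed) */
#define HEAP_SIZE    (1u << 20)
#define BITMAP_WORDS (HEAP_SIZE / GRANULE / 32u)

static uint8_t  heap[HEAP_SIZE];
static uint32_t mark_alloc_bits[BITMAP_WORDS];   /* one bit per granule, shared by
                                                    allocator and collector */

static size_t bit_index(const void *obj) {
    return (size_t)((const uint8_t *)obj - heap) / GRANULE;
}

/* Allocator: setting the bit records that the slot is in use. */
void set_allocated(const void *obj) {
    size_t i = bit_index(obj);
    mark_alloc_bits[i / 32] |= 1u << (i % 32);
}

/* Collector: the same bit doubles as the mark bit.  One common arrangement
 * (assumed here) clears all bits before marking; marking re-sets the bit for
 * every reachable object, so after the sweep the bit once again simply
 * means "allocated". */
int test_and_mark(const void *obj) {
    size_t i = bit_index(obj);
    uint32_t m = 1u << (i % 32);
    int already = (mark_alloc_bits[i / 32] & m) != 0;
    mark_alloc_bits[i / 32] |= m;
    return already;
}
```
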
  • Publication number: 20070157184
    Abstract: A method for statement shifting to increase the parallelism of loops includes constructing a data dependence graph (DDG) to represent dependences between statements in a loop, constructing a basic equations group from the DDG, constructing a dependence equations group derived in part from the basic equations group, and determining a shifting vector for the loop from the dependence equations group, wherein the shifting vector represents an offset to apply to each statement in the loop for statement shifting. Other embodiments are also disclosed. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: December 29, 2005
    Publication date: July 5, 2007
    Inventors: Li Liu, Zhaohui Du, Bu Cheng, Shiwei Liao, Gansha Wu, Tin-fook Ngai
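
For the statement-shifting entry above, a small worked example helps: the sketch below assumes the dependence equations yield a shifting vector that offsets the second statement by one iteration, turning a loop-carried dependence into an intra-iteration one. The loop shape and the OpenMP pragma are illustrative, not taken from the patent.

```c
/* Original loop (S2 reads a[i-1] written by S1 one iteration earlier, a
 * loop-carried dependence that blocks parallel execution):
 *
 *     for (i = 1; i < N; i++) {
 *         a[i] = c[i];          // S1
 *         d[i] = a[i - 1];      // S2
 *     }
 *
 * With an illustrative shifting vector that offsets S2 by one iteration, the
 * dependence becomes intra-iteration only, so the shifted loop's iterations
 * are independent.  Assumes N >= 2. */
void shifted(int *a, const int *c, int *d, int N) {
    d[1] = a[0];                           /* peeled S2 for iteration 1     */
    #pragma omp parallel for               /* iterations now independent    */
    for (int i = 1; i < N - 1; i++) {
        a[i]     = c[i];                   /* S1(i)                         */
        d[i + 1] = a[i];                   /* S2(i + 1), shifted by one     */
    }
    a[N - 1] = c[N - 1];                   /* peeled S1 for iteration N - 1 */
}
```
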
  • Patent number: 7240176
    Abstract: Methods and apparatus are disclosed to intelligently place a managed heap in a memory. An example method disclosed herein identifies a current boundary of a static data region in memory, and sets a lower boundary of the managed heap at an address located a safeguard distance above the identified current boundary of the static data region in memory. Other embodiments may be described and claimed. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: May 1, 2004
    Date of Patent: July 3, 2007
    Assignee: Intel Corporation
    Inventors: Gansha Wu, Guei-Yuan Lueh
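
A minimal POSIX-style sketch of the heap-placement idea above; the safeguard distance, the use of sbrk(0) to find the current static-data boundary, and the mmap call are assumptions made for illustration.

```c
#include <stdint.h>
#include <unistd.h>
#include <sys/mman.h>

#define SAFEGUARD (16u * 1024u * 1024u)      /* illustrative safeguard distance */
#define PAGE_MASK ((uintptr_t)4095u)

/* Place the managed heap's lower boundary a safeguard distance above the
 * current top of the static/program data region, so later growth of that
 * region does not run into the heap. */
void *place_managed_heap(size_t heap_size) {
    uintptr_t static_top = (uintptr_t)sbrk(0);               /* current boundary */
    uintptr_t lower = (static_top + SAFEGUARD + PAGE_MASK) & ~PAGE_MASK;
    /* The address is a hint; the kernel may still relocate the mapping. */
    return mmap((void *)lower, heap_size, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
}
```
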
  • Publication number: 20070150868
    Abstract: A location to insert stack clearing code into a method to be executed in an execution environment of a computer system is determined. The stack clearing code is inserted into the location of the method. The stack clearing code is executed during execution of the method to clear a stack. Other embodiments are also described and claimed. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: December 28, 2005
    Publication date: June 28, 2007
    Inventors: Gansha Wu, Xin Zhou, Peng Guo, Jinzhan Peng, Zhiwei Ying, Guei-Yuan Lueh
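
One loose way to picture the inserted stack clearing code is shown below: a helper that overwrites a region of dead stack so stale references do not linger. The region size and the insertion point are assumptions; the patent describes determining the location and inserting the code automatically.

```c
#include <stddef.h>

/* Overwrite a region of the stack just below the current frame so that stale
 * pointers left behind by deep callees are not later mistaken for live
 * references.  The 1 KiB size is an assumption for the sketch. */
static void clear_stack_region(void) {
    volatile char scrub[1024];              /* occupies the dead stack area */
    for (size_t i = 0; i < sizeof scrub; i++)
        scrub[i] = 0;                       /* volatile keeps the stores    */
}

int method_with_clearing(int x) {
    int r = x * x;                          /* ...callees have returned...    */
    clear_stack_region();                   /* inserted at the chosen location */
    return r;
}
```
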
  • Publication number: 20070124732
    Abstract: Method, apparatus and system embodiments to schedule user-level OS-independent “shreds” without intervention of an operating system. For at least one embodiment, the shred is scheduled for execution by a scheduler routine rather than the operating system. The scheduler routine may receive compiler-generated hints from a compiler. The compiler hints may be generated by the compiler without user-provided pragmas, and may be passed to the scheduler routine via an API-like interface. The interface may include a scheduling hint data structure that is maintained by the compiler. Other embodiments are also described and claimed. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: November 29, 2005
    Publication date: May 31, 2007
    Inventors: Shih-wei Liao, Ryan Rakvic, Richard Hankins, Hong Wang, Gansha Wu, Guei-Yuan Lueh, Xinmin Tian, Paul Petersen, Sanjiv Shah, Trung Diep, John Shen, Gautham Chinya
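
The scheduling-hint interface described above might look roughly like the hypothetical C sketch below: a compiler-maintained hint record per shred and a user-level scheduler routine that consults it without any OS call. Every field and function name here is invented for illustration.

```c
#include <stddef.h>

/* Hypothetical compiler-maintained hint record for one shred (a user-level,
 * OS-invisible thread of execution). */
typedef struct shred_hint {
    int expected_runtime_us;   /* compiler's estimate of shred length   */
    int preferred_sequencer;   /* hardware thread the compiler suggests */
    int priority;              /* relative urgency                      */
} shred_hint_t;

typedef struct shred {
    void       (*entry)(void *);
    void        *arg;
    shred_hint_t hint;         /* filled in by the compiler, no pragmas */
} shred_t;

/* User-level scheduler routine: picks the next shred without any OS call,
 * here simply preferring higher priority, then shorter expected runtime. */
shred_t *pick_next_shred(shred_t *ready, size_t n) {
    shred_t *best = NULL;
    for (size_t i = 0; i < n; i++) {
        if (!best ||
            ready[i].hint.priority > best->hint.priority ||
            (ready[i].hint.priority == best->hint.priority &&
             ready[i].hint.expected_runtime_us < best->hint.expected_runtime_us))
            best = &ready[i];
    }
    return best;
}
```
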
  • Publication number: 20070079300
    Abstract: Linear transformations of statements in code are performed to generate linear expressions associated with the statements. Parallel code is generated using the linear expressions. Generating the parallel code includes splitting the computation-space of the statements into intervals and generating parallel code for the intervals. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: September 30, 2005
    Publication date: April 5, 2007
    Inventors: Zhao Du, Shih-wei Liao, Gansha Wu, Guei-Yuan Lueh
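
A minimal sketch of the interval-splitting step described above: the computation space [0, n) is divided into contiguous intervals, one per processor, and the generated parallel code for a processor runs only its interval. The statement and the splitting policy are illustrative.

```c
/* After the linear transformation assigns work to processors, the computation
 * space [0, n) is split into contiguous intervals, one per processor p of P. */
void interval_for_processor(int n, int P, int p, int *lo, int *hi) {
    int chunk = (n + P - 1) / P;              /* ceiling division            */
    *lo = p * chunk;
    *hi = (*lo + chunk < n) ? *lo + chunk : n;
}

/* The parallel code generated for processor p executes only its interval. */
void run_interval(double *a, const double *b, int n, int P, int p) {
    int lo, hi;
    interval_for_processor(n, P, p, &lo, &hi);
    for (int i = lo; i < hi; i++)
        a[i] = 2.0 * b[i];                    /* the (independent) statement */
}
```
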
  • Publication number: 20070079303
    Abstract: Systems and methods perform affine partitioning on a code stream to produce code segments that may be parallelized. The code segments include copies of the original code stream with conditionals inserted that aid in parallelizing the code. Each conditional is formed by determining the constraints that the affine partitioning places on a processor variable and applying those constraints to the original code stream. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: September 30, 2005
    Publication date: April 5, 2007
    Inventors: Zhao Du, Shih-Wei Liao, Gansha Wu, Guei-Yuan Lueh
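
The conditional-insertion idea above can be illustrated directly: assume the affine partition assigns iteration i to processor i mod P, so each copy of the code stream carries a guard derived from that constraint.

```c
/* Illustrative affine partition: iteration i of the statement is assigned to
 * processor p = i mod P.  Each processor gets a copy of the original code
 * stream with a conditional, derived from that constraint, that keeps only
 * its own iterations. */
void partitioned_copy(double *a, const double *b, int n, int P, int p) {
    for (int i = 0; i < n; i++) {
        if (i % P == p)                  /* conditional from the constraints */
            a[i] = b[i] + 1.0;           /* original statement               */
    }
}
```
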
  • Publication number: 20070079281
    Abstract: Code is affine partitioned to generate affine partitioning mappings. Parallel code is generated based on the affine partitioning mappings. Generating the parallel code includes coalescing loops in the parallel code generated from the affine partitioning mappings to generate coalesced parallel code and optimizing the coalesced parallel code. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: September 30, 2005
    Publication date: April 5, 2007
    Inventors: Shih-wei Liao, Zhao Du, Bu Cheng, Gansha Wu, Guei-Yuan Lueh
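
Loop coalescing as mentioned above, in miniature: the nested loops produced from the affine mappings are collapsed into a single loop whose index is decomposed back into the original indices.

```c
/* Before coalescing (shape produced from the affine mappings):
 *
 *     for (p = 0; p < P; p++)
 *         for (i = 0; i < M; i++)
 *             body(p, i);
 *
 * After coalescing, one loop over P * M iterations recovers (p, i) by
 * division and modulus, which simplifies later optimization and scheduling. */
void coalesced(int P, int M, void (*body)(int p, int i)) {
    for (int t = 0; t < P * M; t++)
        body(t / M, t % M);
}
```
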
  • Publication number: 20070074195
    Abstract: Methods for optimizing stream operator processing by creating a system of inequalities to describe a multi-dimensional polyhedron, solving the system by projecting the polyhedron into a space with one fewer dimension, and mapping the solution into the stream program. Other program optimization methods based on affine partitioning are also described and claimed. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: September 23, 2005
    Publication date: March 29, 2007
    Inventors: Shih-wei Liao, Zhaohui Du, Gansha Wu, Guei-yuan Lueh, Zhiwei Ying, Jinzhan Peng
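
The projection step above is essentially Fourier-Motzkin elimination; the worked example below (constraints chosen for illustration) shows a two-dimensional polyhedron projected onto one variable and the resulting loop nest.

```c
/* Worked example.  Constraints on (i, j), chosen for illustration:
 *
 *     0 <= i <= N,      i <= j <= i + 2
 *
 * Eliminating i (pair every lower bound of i with every upper bound) projects
 * the polyhedron onto j:
 *
 *     0 <= j <= N + 2
 *
 * which becomes the outer loop of the generated code; the inner loop recovers
 * the surviving bounds on i for each j. */
void generated(int N, void (*body)(int i, int j)) {
    for (int j = 0; j <= N + 2; j++) {
        int lo = (j - 2 > 0) ? j - 2 : 0;      /* max of i's lower bounds */
        int hi = (j < N)     ? j     : N;      /* min of i's upper bounds */
        for (int i = lo; i <= hi; i++)
            body(i, j);
    }
}
```
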
  • Patent number: 7168071
    Abstract: A system for permitting stack allocation in a program with open-world features is described. The system includes an escape analysis module to (1) determine which objects of the program can be stack-allocated under a closed-world assumption and (2) analyze, after stack allocation, which stack allocations are invalidated by the occurrence of an open-world feature. A stack allocation module stack-allocates these objects based on the determination of the escape analysis module. A stack allocation recovery module recovers the invalidated stack allocations back to their original allocation in the heap, based on the analysis of the escape analysis module. A method for permitting stack allocation in a program with open-world features is also described. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: September 30, 2003
    Date of Patent: January 23, 2007
    Assignee: Intel Corporation
    Inventors: Gansha Wu, Guei-Yuan Lueh, Xiaohua Shi, Jinzhan Peng
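
A rough sketch of the recovery path described above, assuming a per-site record the runtime can consult when an open-world event (such as dynamic class loading) invalidates a stack allocation. Reference fix-up is deliberately omitted; all names are hypothetical.

```c
#include <stdlib.h>
#include <string.h>

/* Record kept per stack-allocation site so the runtime can undo the
 * optimization.  Field names are hypothetical. */
typedef struct alloc_site {
    int    on_stack;        /* 1 while the object lives in a stack frame */
    void  *stack_obj;       /* the stack copy                            */
    void  *heap_obj;        /* heap copy created by recovery, if any     */
    size_t size;
} alloc_site_t;

/* Recovery: an open-world event has invalidated the closed-world escape
 * analysis for this site, so move the object back to its original home in
 * the heap.  A full runtime would also redirect every reference to the
 * stack copy; that bookkeeping is omitted here. */
void *recover_to_heap(alloc_site_t *site) {
    if (site->on_stack) {
        site->heap_obj = malloc(site->size);
        memcpy(site->heap_obj, site->stack_obj, site->size);
        site->on_stack = 0;
    }
    return site->heap_obj;
}
```
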
  • Publication number: 20070003161
    Abstract: A method including providing a stream of content to a processor, transforming kernels within the stream of content through affine modeling, transforming the affine-modeled kernels, stream contracting the kernel processes, and stream blocking the kernel processes. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: June 30, 2005
    Publication date: January 4, 2007
    Inventors: Shih-wei Liao, Zhaohui Du, Gansha Wu, Ken Lueh, Zhiwei Ying, Jinzhan Peng
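
Reading "stream blocking" above as tiling the kernel's traversal of the stream, a minimal sketch looks like this; the block size and the kernel body are assumptions.

```c
/* The kernel walks the stream in fixed-size blocks so each block's working
 * set stays in local storage.  The block size is an assumption. */
#define BLK 256

void blocked_kernel(float *out, const float *in, int n) {
    for (int b = 0; b < n; b += BLK) {
        int end = (b + BLK < n) ? b + BLK : n;
        for (int i = b; i < end; i++)        /* process one block of the stream */
            out[i] = 0.5f * in[i];
    }
}
```
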
  • Publication number: 20070006140
    Abstract: A technique includes generating frames on a stack for a chain of callers. Each frame corresponds to one of the callers, and at least some of the callers use an object that survives at least one but not all of the callers. The technique includes retaining at least one of the frames on the stack after the corresponding caller ceases to exist.
    Type: Application
    Filed: June 29, 2005
    Publication date: January 4, 2007
    Inventors: Guei-Yuan Lueh, Gansha Wu, Xiaohua Shi
  • Publication number: 20060288338
    Abstract: Translating a virtual machine instruction into a value which when logically combined with a base value yields an address of interpretation code to perform the virtual machine instruction. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: June 15, 2005
    Publication date: December 21, 2006
    Inventors: Jinzhan Peng, Ken Lueh, Gansha Wu
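
The dispatch scheme above maps naturally onto an aligned slot table: the translated value is the opcode shifted into a slot offset, and combining it with the base needs only a logical OR. The sketch below keeps the "interpretation code" as data slots holding function pointers so it stays portable; slot size and alignment are assumptions.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Slot k sits at  base | (k << STRIDE_LOG2):  translating an opcode is one
 * shift, and dispatching is one logical OR with the base.  Because the table
 * is aligned to its own size, the OR never disturbs the base bits. */
#define STRIDE_LOG2 4
#define NUM_OPS     256

typedef void (*handler_fn)(void);

static _Alignas(4096) unsigned char slots[NUM_OPS << STRIDE_LOG2];

static void install(uint8_t opcode, handler_fn fn) {
    memcpy(&slots[(size_t)opcode << STRIDE_LOG2], &fn, sizeof fn);
}

/* Translation: virtual machine instruction -> value to combine with the base. */
static uintptr_t translate(uint8_t opcode) {
    return (uintptr_t)opcode << STRIDE_LOG2;
}

static void dispatch(uint8_t opcode) {
    uintptr_t base = (uintptr_t)slots;       /* low bits are zero by alignment */
    handler_fn fn;
    memcpy(&fn, (const void *)(base | translate(opcode)), sizeof fn);
    fn();
}

static void op_nop(void)  { puts("nop");  }
static void op_iadd(void) { puts("iadd"); }

int main(void) {
    install(0x00, op_nop);
    install(0x60, op_iadd);
    dispatch(0x60);                          /* prints "iadd" */
    return 0;
}
```
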
  • Patent number: 7089273
    Abstract: An arrangement is provided for using a stack trace cache when performing root set enumeration in a stack of a thread during garbage collection. During the first root set enumeration in the stack, full stack unwinding may be performed and a stack trace cache may be created to cache stack trace information relating to stack frames. Subsequent root set enumerations in the stack may access and copy part or all of the cached stack trace information instead of performing full stack unwinding. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: August 1, 2003
    Date of Patent: August 8, 2006
    Assignee: Intel Corporation
    Inventors: Gansha Wu, Guei-Yuan Lueh
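
A compact sketch of the cached-trace idea above: frame information recorded during the first full unwind is replayed on later root-set enumerations for the unchanged part of the stack. The data layout and the notion of an "unchanged depth" are assumptions.

```c
/* Per-frame information recorded during the first, full stack unwind.
 * Field names and fixed capacities are illustrative. */
typedef struct frame_trace {
    void  *ip;                 /* instruction pointer of the frame       */
    void **root_slots[8];      /* stack slots holding object references  */
    int    nroots;
} frame_trace_t;

typedef struct stack_trace_cache {
    frame_trace_t frames[64];
    int           depth;       /* frames cached, counted from stack bottom */
} stack_trace_cache_t;

/* A later root-set enumeration replays the cached frames for the part of the
 * stack unchanged since the previous collection, and only unwinds the
 * (usually small) newer part above it. */
void enumerate_roots(const stack_trace_cache_t *cache,
                     int unchanged_depth,
                     void (*visit)(void **slot)) {
    for (int f = 0; f < unchanged_depth && f < cache->depth; f++)
        for (int r = 0; r < cache->frames[f].nroots; r++)
            visit(cache->frames[f].root_slots[r]);
    /* Frames above unchanged_depth would be unwound normally here and
     * appended to the cache for next time. */
}
```
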
  • Publication number: 20060173939
    Abstract: Provided are a method, system, and article of manufacture, wherein a plurality of objects are allocated in dynamic memory. Reversed references are determined for the plurality of objects, wherein a reversed reference corresponding to an object is an address of a location that has a valid reference to the object. Unreferenced objects are deleted to fragment the dynamic memory. The fragmented dynamic memory is compacted via adjustments to the reversed references. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: January 31, 2005
    Publication date: August 3, 2006
    Inventors: Baolin Yin, Guei-Yuan Lueh, Gansha Wu, Xin Zhou
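
The reversed-reference idea above can be sketched as follows: each object carries the addresses of the slots that refer to it, so the compactor patches exactly those slots when it slides the object. The object layout is illustrative.

```c
#include <stddef.h>
#include <string.h>

/* Each object records "reversed references": the addresses of the locations
 * that hold a valid reference to it.  Layout and capacity are illustrative. */
typedef struct object {
    size_t size;               /* total size of the object in bytes           */
    void **rev_refs[4];        /* addresses of slots referring to this object */
    int    nrev;
    /* ...payload follows... */
} object_t;

/* Slide one live object to its new address during compaction and patch every
 * referrer directly through the reversed references, avoiding a full heap
 * scan to fix pointers. */
object_t *compact_move(object_t *obj, void *new_addr) {
    memmove(new_addr, obj, obj->size);
    object_t *moved = (object_t *)new_addr;
    for (int i = 0; i < moved->nrev; i++)
        *moved->rev_refs[i] = moved;           /* adjust the valid reference */
    return moved;
}
```
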
  • Publication number: 20060031810
    Abstract: Methods and apparatuses provide for referencing thread local variables (TLVs) with techniques such as stack address mapping. A method may involve a head pointer that points to a set of thread local variables (TLVs) of a thread. A method according to one embodiment may include an operation for storing the head pointer in a global data structure in a user space of a processing system. The head pointer may subsequently be retrieved from the global data structure and used to access one or more TLVs associated with the thread. In one embodiment, the head pointer is retrieved without executing any kernel system calls. In an example embodiment, the head pointer is stored in a global array, and a stack address for the thread is used to derive an index into the array. Other embodiments are described and claimed. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: August 9, 2004
    Publication date: February 9, 2006
    Inventors: Jinzhan Peng, Xiaohua Shi, Guei-Yuan Lueh, Gansha Wu
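
A minimal sketch of the stack-address-mapping fast path described above: the address of any local variable identifies the current stack, an index is derived from it, and the head pointer comes out of a global user-space array with no kernel call. The region granularity and table size are assumptions, and a real system would size stacks so indices never collide.

```c
#include <stdint.h>

/* Global user-space table mapping a thread's stack region to the head pointer
 * of its thread-local variables (TLVs). */
#define STACK_REGION_LOG2 20                  /* assume 1 MiB stack regions */
#define MAX_THREADS       1024

static void *tlv_heads[MAX_THREADS];

static unsigned stack_index(const void *stack_addr) {
    return (unsigned)(((uintptr_t)stack_addr >> STACK_REGION_LOG2) % MAX_THREADS);
}

/* Called once per thread: record the head pointer under the slot derived from
 * any address on that thread's stack. */
void tlv_register(const void *any_stack_addr, void *tlv_head) {
    tlv_heads[stack_index(any_stack_addr)] = tlv_head;
}

/* Fast path: the address of a local variable identifies the current stack, so
 * the head pointer is fetched with plain loads and no kernel system call. */
void *tlv_get_head(void) {
    int local;
    return tlv_heads[stack_index(&local)];
}
```
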
  • Publication number: 20060010303
    Abstract: In an embodiment of the invention, a technique includes assigning a first pointer to an address of an array and using the first pointer to identify a first location of first data of the array. The first pointer is used to locate at least one additional pointer to identify at least one additional location of additional data of the array. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: July 12, 2004
    Publication date: January 12, 2006
    Inventors: Gansha Wu, Guei-Yuan Lueh, Jesse Fang
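
One plausible reading of the entry above is a chunked array: the first pointer reaches the first chunk of data and, through it, the additional pointers to the remaining chunks. Chunk size and element type are assumptions.

```c
#include <stddef.h>

/* The array is stored as a first chunk plus chained chunks: the first pointer
 * identifies the first data and also leads to the additional pointer(s) for
 * the additional data. */
#define CHUNK_ELEMS 256

typedef struct chunk {
    int           data[CHUNK_ELEMS];
    struct chunk *next;                 /* additional pointer to more data */
} chunk_t;

/* Index element i through the first pointer, hopping chunks as needed. */
int *array_elem(chunk_t *first, size_t i) {
    chunk_t *c = first;
    while (i >= CHUNK_ELEMS) {
        c  = c->next;
        i -= CHUNK_ELEMS;
    }
    return &c->data[i];
}
```
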
  • Publication number: 20050246402
    Abstract: Methods and apparatus are disclosed to intelligently place a managed heap in a memory. An example method disclosed herein identifies a current boundary of a static data region in memory, and sets a lower boundary of the managed heap at an address located a safeguard distance above the identified current boundary of the static data region in memory. Other embodiments may be described and claimed.
    Type: Application
    Filed: May 1, 2004
    Publication date: November 3, 2005
    Inventors: Gansha Wu, Guei-Yuan Lueh
  • Publication number: 20050223370
    Abstract: Executing an instruction on an operand stack, including performing a stack-state aware translation of the instruction to threaded code to determine an operand stack state for the instruction, dispatching the instruction according to the operand stack state for the instruction, and executing the instruction. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: March 31, 2004
    Publication date: October 6, 2005
    Applicant: Intel Corporation
    Inventors: Gansha Wu, Guei-Yuan Lueh, Jinzhan Peng
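
A small sketch of stack-state-aware dispatch as described above: each opcode gets one handler per operand-stack state (here, whether the top of stack is cached), and the state selects the variant. In the actual technique the state is resolved during translation so the threaded code points straight at the right variant; the runtime table below just keeps the sketch short, and all names are illustrative.

```c
#include <stdio.h>

/* Two illustrative operand-stack states for top-of-stack caching: the top
 * value either lives in memory (STATE_MEM) or is cached in `tos` (STATE_TOS). */
enum stack_state { STATE_MEM = 0, STATE_TOS = 1, NUM_STATES = 2 };
enum opcode      { OP_ICONST_1 = 0, OP_IADD = 1, NUM_OPS = 2 };

typedef struct vm {
    int stack[64];
    int sp;                 /* number of operands in memory            */
    int tos;                /* cached top of stack, valid in STATE_TOS */
} vm_t;

typedef enum stack_state (*handler_fn)(vm_t *);

static enum stack_state iconst1_mem(vm_t *vm) { vm->tos = 1; return STATE_TOS; }
static enum stack_state iconst1_tos(vm_t *vm) {
    vm->stack[vm->sp++] = vm->tos;      /* spill the cached top first */
    vm->tos = 1;
    return STATE_TOS;
}
static enum stack_state iadd_mem(vm_t *vm) {
    int b = vm->stack[--vm->sp];
    int a = vm->stack[--vm->sp];
    vm->tos = a + b;
    return STATE_TOS;
}
static enum stack_state iadd_tos(vm_t *vm) {
    vm->tos += vm->stack[--vm->sp];     /* one operand already cached */
    return STATE_TOS;
}

/* Handler table indexed by (opcode, operand stack state). */
static handler_fn handlers[NUM_OPS][NUM_STATES] = {
    [OP_ICONST_1] = { iconst1_mem, iconst1_tos },
    [OP_IADD]     = { iadd_mem,    iadd_tos    },
};

int main(void) {
    vm_t vm = { .sp = 0 };
    enum stack_state st = STATE_MEM;
    enum opcode prog[] = { OP_ICONST_1, OP_ICONST_1, OP_IADD };
    for (int i = 0; i < 3; i++)
        st = handlers[prog[i]][st](&vm);  /* dispatch by opcode and state */
    printf("%d\n", vm.tos);               /* prints 2 */
    return 0;
}
```
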
  • Publication number: 20050222979
    Abstract: A technique includes querying method information in execution environments for programs written for virtual machines. The technique limits the search scope to a relatively small region around the queried instruction pointer (IP) or code address and reduces the management overhead of a method lookup table, with facilitation from the garbage collector. In one embodiment, a system receives a code address and queries method metadata for the code address by limiting the search scope to a local memory sub-region of the code address. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: March 31, 2004
    Publication date: October 6, 2005
    Inventors: Gansha Wu, Guei-Yuan Lueh
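
A sketch of the region-limited lookup described above, assuming JIT code lives in fixed-size blocks whose headers point at the metadata for the methods they contain; rounding the instruction pointer down to its block bounds the search. All sizes and structures are assumptions.

```c
#include <stdint.h>
#include <stddef.h>

/* JIT-compiled code is allocated in fixed-size sub-regions (blocks), and each
 * block begins with a pointer to the metadata of the method(s) it holds, so a
 * lookup for an instruction pointer touches only the block containing it. */
#define BLOCK_LOG2 12                      /* 4 KiB code blocks (assumption) */

typedef struct method_meta {
    void  *code_start;
    size_t code_size;
    const char *name;
    struct method_meta *next;              /* methods sharing the block      */
} method_meta_t;

typedef struct code_block_header {
    method_meta_t *methods;                /* metadata for code in this block */
} code_block_header_t;

method_meta_t *lookup_method(void *ip) {
    /* Round the IP down to its block: the search scope is this block only. */
    uintptr_t block = (uintptr_t)ip & ~(((uintptr_t)1 << BLOCK_LOG2) - 1);
    code_block_header_t *hdr = (code_block_header_t *)block;
    for (method_meta_t *m = hdr->methods; m != NULL; m = m->next)
        if ((char *)ip >= (char *)m->code_start &&
            (char *)ip <  (char *)m->code_start + m->code_size)
            return m;
    return NULL;                           /* not in JIT-compiled code       */
}
```
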