Patents by Inventor Lixin Zhang
Lixin Zhang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20110296428
Abstract: A method, system, and computer usable program product for improved register allocation in a simultaneous multithreaded processor. A determination is made that a thread of an application in the data processing environment needs more physical registers than are available to allocate to the thread. The thread is configured to utilize a logical register that is mapped to a memory register. The thread is executed utilizing the physical registers and the memory registers.
Type: Application
Filed: May 27, 2010
Publication date: December 1, 2011
Applicant: International Business Machines Corporation
Inventors: Freeman Leigh Rawson, III, William Evan Speight, Lixin Zhang
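A minimal sketch of the mapping idea described in the abstract, not the patented mechanism itself: logical registers beyond the physical file are backed by memory-register slots. The names reg_map_t, NUM_PHYS, and NUM_LOGICAL are illustrative assumptions.

```c
/* Sketch: back logical registers with memory slots once the physical
 * register file assigned to the thread is exhausted. */
#include <stdio.h>

#define NUM_LOGICAL 64
#define NUM_PHYS    32   /* physical registers available to this thread */

typedef struct {
    int is_memory;   /* 0 = physical register, 1 = memory-backed register */
    int index;       /* physical register number or memory-slot number    */
} reg_map_t;

static void map_registers(reg_map_t map[NUM_LOGICAL]) {
    for (int l = 0; l < NUM_LOGICAL; l++) {
        if (l < NUM_PHYS) {
            map[l].is_memory = 0;   /* enough physical registers remain */
            map[l].index = l;
        } else {
            map[l].is_memory = 1;   /* overflow into memory registers */
            map[l].index = l - NUM_PHYS;
        }
    }
}

int main(void) {
    reg_map_t map[NUM_LOGICAL];
    map_registers(map);
    printf("logical r40 -> %s %d\n",
           map[40].is_memory ? "memory slot" : "physical reg", map[40].index);
    return 0;
}
```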
-
Publication number: 20110296434
Abstract: A technique for sharing a fabric to facilitate off-chip communication for on-chip units includes dynamically assigning a first unit that implements a first communication protocol to a first portion of the fabric when private fabrics are indicated for the on-chip units. The technique also includes dynamically assigning a second unit that implements a second communication protocol to a second portion of the fabric when the private fabrics are indicated for the on-chip units. In this case, the first and second units are integrated in a same chip and the first and second protocols are different. The technique further includes dynamically assigning, based on off-chip traffic requirements of the first and second units, the first unit or the second unit to the first and second portions of the fabric when the private fabrics are not indicated for the on-chip units.
Type: Application
Filed: May 25, 2010
Publication date: December 1, 2011
Applicant: International Business Machines Corporation
Inventors: Jian Li, William E. Speight, Lixin Zhang
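An illustrative sketch of the assignment choice the abstract describes, under assumptions: fabric_assign_t and the "busier unit takes both portions" rule are invented for illustration, not taken from the claims.

```c
/* Sketch: choose between private and shared fabric assignments for two
 * on-chip units based on a private-fabrics flag and traffic demand. */
#include <stdio.h>

typedef struct {
    int portion0_owner;   /* which unit currently drives fabric portion 0 */
    int portion1_owner;   /* which unit currently drives fabric portion 1 */
} fabric_assign_t;

static fabric_assign_t assign_fabric(int private_fabrics,
                                     int traffic_unit0, int traffic_unit1) {
    fabric_assign_t a;
    if (private_fabrics) {
        /* Private mode: each unit keeps its own dedicated portion. */
        a.portion0_owner = 0;
        a.portion1_owner = 1;
    } else {
        /* Shared mode: the unit with the higher off-chip traffic demand
         * is granted both portions of the fabric. */
        int busy = (traffic_unit0 >= traffic_unit1) ? 0 : 1;
        a.portion0_owner = busy;
        a.portion1_owner = busy;
    }
    return a;
}

int main(void) {
    fabric_assign_t a = assign_fabric(0, 10, 75);
    printf("portion0 -> unit %d, portion1 -> unit %d\n",
           a.portion0_owner, a.portion1_owner);
    return 0;
}
```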
-
Publication number: 20110283067
Abstract: Target memory hierarchy specification in a multi-core computer processing system is provided including a system for implementing prefetch instructions. The system includes a first core processor, a dedicated cache corresponding to the first core processor, and a second core processor. The second core processor includes instructions for executing a prefetch instruction that specifies a memory location and the dedicated local cache corresponding to the first core processor. Executing the prefetch instruction includes retrieving data from the memory location and storing the retrieved data on the dedicated local cache corresponding to the first core processor.
Type: Application
Filed: May 11, 2010
Publication date: November 17, 2011
Applicant: International Business Machines Corporation
Inventors: Tong Chen, Yaoqing Gao, Kevin K. O'Brien, Zehra Sura, Lixin Zhang
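A software-level sketch of the idea only, assuming a toy cache model: one core fills another core's dedicated cache with a line it expects that core to need. prefetch_to_cache() and cache_t are invented names, not an ISA or library definition.

```c
/* Sketch: model a prefetch request that names both a memory address and
 * a target core's dedicated cache. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define LINE_BYTES      64
#define LINES_PER_CACHE 4

typedef struct {
    uintptr_t tags[LINES_PER_CACHE];
    uint8_t   data[LINES_PER_CACHE][LINE_BYTES];
    int       next_victim;
} cache_t;

/* Executed on behalf of one core, but filling another core's cache. */
static void prefetch_to_cache(cache_t *target, const void *addr) {
    uintptr_t line = (uintptr_t)addr & ~(uintptr_t)(LINE_BYTES - 1);
    int slot = target->next_victim;
    target->tags[slot] = line;
    memcpy(target->data[slot], (const void *)line, LINE_BYTES);
    target->next_victim = (slot + 1) % LINES_PER_CACHE;
}

int main(void) {
    static double shared[1024];
    cache_t core0_cache = {0};
    /* Core 1 pushes data it produced into core 0's dedicated cache. */
    prefetch_to_cache(&core0_cache, &shared[128]);
    printf("line tag installed: %#lx\n", (unsigned long)core0_cache.tags[0]);
    return 0;
}
```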
-
Publication number: 20110242991
Abstract: A method, device, and system for processing Border Gateway Protocol (BGP) routes are provided, which relate to the field of network communication technologies. The method includes: receiving a BGP route sent by a BGP neighbor; obtaining a route prefix of the BGP route according to the BGP route; determining, according to the route prefix, a BGP route storing and processing module corresponding to the route prefix; and sending the BGP route to the determined BGP route storing and processing module, so that the BGP route storing and processing module processes the received BGP route. A device and a system for processing BGP routes are also provided. Therefore, the processing efficiency of BGP routes is improved, and high extensibility is realized.
Type: Application
Filed: June 17, 2011
Publication date: October 6, 2011
Inventors: Lixin Zhang, Shuanglong Chen, Yuan Rao, Lei Fan, Boyan Tu, Jianwen Liu, Jianbin Xu, Qing Zeng
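A minimal sketch of the dispatch step, assuming a fixed pool of route-processing modules and a simple hash of the masked prefix; the module count and hash are illustrative choices, not the patented determination.

```c
/* Sketch: map a BGP route's prefix to the module that stores and
 * processes the route. */
#include <stdio.h>
#include <stdint.h>

#define NUM_MODULES 4

typedef struct {
    uint32_t prefix;      /* IPv4 prefix, host byte order */
    uint8_t  prefix_len;  /* e.g. 24 for a /24 */
} bgp_route_t;

static int select_module(const bgp_route_t *r) {
    uint32_t masked = (r->prefix_len == 0)
                      ? 0
                      : r->prefix & (0xFFFFFFFFu << (32 - r->prefix_len));
    return (int)((masked ^ r->prefix_len) % NUM_MODULES);
}

int main(void) {
    bgp_route_t r = { (10u << 24) | (1u << 16), 16 };  /* 10.1.0.0/16 */
    printf("route dispatched to module %d\n", select_module(&r));
    return 0;
}
```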
-
Publication number: 20110238946
Abstract: A virtual address scheme for improving performance and efficiency of memory accesses of sparsely-stored data items in a cached memory system is disclosed. In a preferred embodiment of the present invention, a special address translation unit is used to translate sets of non-contiguous addresses in real memory into contiguous blocks of addresses in an “intermediate address space.” This intermediate address space is a fictitious or “virtual” address space, but is distinguishable from the virtual address space visible to application programs; in user-level memory operations, effective addresses seen/manipulated by application programs are translated into intermediate addresses by an additional address translation unit for memory caching purposes. This scheme allows non-contiguous data items in memory to be assembled into contiguous cache lines for more efficient caching/access (due to the perceived spatial proximity of the data from the perspective of the processor).
Type: Application
Filed: March 24, 2010
Publication date: September 29, 2011
Applicant: International Business Machines Corporation
Inventors: Ramakrishnan Rajamony, William E. Speight, Lixin Zhang
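A sketch of the gather idea only, with an invented mapping function: strided "key" fields scattered through real memory occupy one contiguous intermediate block that a cache could treat as a single line. intermediate_to_real() and the field layout are assumptions.

```c
/* Sketch: scattered real addresses are gathered into a contiguous
 * "intermediate" block of cache-line size. */
#include <stdio.h>
#include <stdint.h>

#define ELEM_BYTES     8
#define ELEMS_PER_LINE 8

/* The i-th element of the intermediate block maps to a scattered real
 * address (here, one field out of an array of structs). */
static uintptr_t intermediate_to_real(uintptr_t base_real, size_t stride,
                                      uintptr_t intermediate_off) {
    size_t elem = intermediate_off / ELEM_BYTES;
    return base_real + elem * stride;   /* non-contiguous in real memory */
}

int main(void) {
    struct record { double key; char payload[120]; };
    static struct record table[ELEMS_PER_LINE];

    /* The 8 "key" fields are 128 bytes apart in real memory, yet form one
     * contiguous 64-byte intermediate cache line (offsets 0..63). */
    for (uintptr_t off = 0; off < ELEMS_PER_LINE * ELEM_BYTES; off += ELEM_BYTES) {
        uintptr_t real = intermediate_to_real((uintptr_t)&table[0].key,
                                              sizeof(struct record), off);
        printf("intermediate +%2lu -> real %p\n",
               (unsigned long)off, (void *)real);
    }
    return 0;
}
```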
-
Publication number: 20110145509
Abstract: A technique for performing stream detection and prefetching within a cache memory simplifies stream detection and prefetching. A bit in a cache directory or cache entry indicates that a cache line has not been accessed since being prefetched and another bit indicates the direction of a stream associated with the cache line. A next cache line is prefetched when a previously prefetched cache line is accessed, so that the cache always attempts to prefetch one cache line ahead of accesses, in the direction of a detected stream. Stream detection is performed in response to load misses tracked in the load miss queue (LMQ). The LMQ stores an offset indicating a first miss at the offset within a cache line. A next miss to the line sets a direction bit based on the difference between the first and second offsets and causes prefetch of the next line for the stream.
Type: Application
Filed: February 9, 2011
Publication date: June 16, 2011
Applicant: International Business Machines Corporation
Inventors: William E. Speight, Lixin Zhang
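A minimal sketch of the direction-bit idea: the first miss records its offset within the line, and a second miss to the same line derives the stream direction from the sign of the offset difference. The LMQ entry layout here is an assumption, not the hardware structure.

```c
/* Sketch: derive a stream direction from two miss offsets within a line. */
#include <stdio.h>
#include <stdint.h>

#define LINE_BYTES 128

typedef struct {
    uintptr_t line_addr;   /* cache line of the first miss  */
    unsigned  first_off;   /* byte offset of the first miss */
    int       valid;
} lmq_entry_t;

/* Returns +1 (ascending), -1 (descending), or 0 (no stream detected yet). */
static int lmq_observe_miss(lmq_entry_t *e, uintptr_t addr) {
    uintptr_t line = addr & ~(uintptr_t)(LINE_BYTES - 1);
    unsigned  off  = (unsigned)(addr & (LINE_BYTES - 1));
    if (!e->valid || e->line_addr != line) {
        e->line_addr = line;       /* first miss to this line: remember it */
        e->first_off = off;
        e->valid = 1;
        return 0;
    }
    return (off > e->first_off) ? +1 : -1;   /* second miss sets direction */
}

int main(void) {
    lmq_entry_t e = {0};
    lmq_observe_miss(&e, 0x10008);             /* first miss, offset 8   */
    int dir = lmq_observe_miss(&e, 0x10040);   /* second miss, offset 64 */
    printf("detected stream direction: %+d -> prefetch %s line\n",
           dir, dir > 0 ? "next" : "previous");
    return 0;
}
```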
-
Patent number: 7962722
Abstract: In at least one embodiment, a processor includes at least one execution unit that executes instructions and instruction sequencing logic, coupled to the at least one execution unit, that fetches instructions from a memory system for execution by the at least one execution unit. The instruction sequencing logic includes a branch target address cache (BTAC) with a plurality of entries for storing branch target address predictions. The BTAC includes index logic that selects an entry to access utilizing a BTAC index based upon at least a set of higher-order bits of an instruction address and a set of lower-order bits of the instruction address.
Type: Grant
Filed: February 1, 2008
Date of Patent: June 14, 2011
Assignee: International Business Machines Corporation
Inventors: Sheldon B. Levenstein, David S. Levitan, Lixin Zhang
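Illustrative only: a BTAC index formed from both higher-order and lower-order address bits. The field widths and the XOR combination are assumptions, not the patented index function.

```c
/* Sketch: mix low and high instruction-address bits into a BTAC index. */
#include <stdio.h>
#include <stdint.h>

#define BTAC_ENTRIES 512           /* 9-bit index into the BTAC */

static unsigned btac_index(uint64_t fetch_addr) {
    uint64_t word = fetch_addr >> 2;               /* drop byte-in-word bits */
    unsigned low  = (unsigned)(word & (BTAC_ENTRIES - 1));
    unsigned high = (unsigned)((word >> 20) & (BTAC_ENTRIES - 1));
    return (low ^ high) & (BTAC_ENTRIES - 1);      /* combine both ranges */
}

int main(void) {
    printf("index for 0x10001234: %u\n", btac_index(0x10001234ULL));
    printf("index for 0x20001234: %u\n", btac_index(0x20001234ULL));
    return 0;
}
```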
-
Patent number: 7958317
Abstract: A technique for performing stream detection and prefetching within a cache memory simplifies stream detection and prefetching. A bit in a cache directory or cache entry indicates that a cache line has not been accessed since being prefetched and another bit indicates the direction of a stream associated with the cache line. A next cache line is prefetched when a previously prefetched cache line is accessed, so that the cache always attempts to prefetch one cache line ahead of accesses, in the direction of a detected stream. Stream detection is performed in response to load misses tracked in the load miss queue (LMQ). The LMQ stores an offset indicating a first miss at the offset within a cache line. A next miss to the line sets a direction bit based on the difference between the first and second offsets and causes prefetch of the next line for the stream.
Type: Grant
Filed: August 4, 2008
Date of Patent: June 7, 2011
Assignee: International Business Machines Corporation
Inventors: William E. Speight, Lixin Zhang
-
Patent number: 7956141
Abstract: The present invention provides 1) a complex comprising a mono-anionic tridentate ligand, represented by the following general formula (I); 2) a polymerization catalyst composition comprising the complex; and 3) a cis-1,4-isoprene polymer, a cis-1,4-butadiene polymer, a cis-1,4-isoprene-styrene copolymer, a cis-1,4-butadiene-styrene copolymer, a cis-1,4-butadiene-cis-1,4-isoprene copolymer, and a cis-1,4-butadiene-cis-1,4-isoprene-styrene copolymer, each of which has high cis-1,4 content in its microstructure and a sharp molecular-weight distribution.
Type: Grant
Filed: January 23, 2006
Date of Patent: June 7, 2011
Assignee: Riken
Inventors: Toshiaki Suzuki, Lixin Zhang, Zhaomin Hou
-
Patent number: 7958316
Abstract: A method, processor, and data processing system for dynamically adjusting a prefetch stream priority based on the consumption rate of the data by the processor. The method includes a prefetch engine issuing a prefetch request of a first prefetch stream to fetch one or more data from the memory subsystem. The first prefetch stream has a first assigned priority that determines a relative order for scheduling prefetch requests of the first prefetch stream relative to other prefetch requests of other prefetch streams. Based on receipt of a processor demand for the data before the data returns to the cache, or return of the data a long time before receipt of the processor demand, logic of the prefetch engine dynamically changes the first assigned priority to a second, higher or lower priority, which is subsequently utilized to schedule and issue a next prefetch request of the first prefetch stream.
Type: Grant
Filed: February 1, 2008
Date of Patent: June 7, 2011
Assignee: International Business Machines Corporation
Inventors: William E. Speight, Lixin Zhang
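A sketch under assumptions (the priority scale and the idle-cycle threshold are invented): raise a stream's priority when demand loads arrive before the prefetched data returns, and lower it when prefetched data sits unused for a long time.

```c
/* Sketch: adjust a prefetch stream's priority from consumption feedback. */
#include <stdio.h>

typedef struct {
    int priority;   /* higher value = scheduled ahead of other streams */
} prefetch_stream_t;

enum { PRIO_MIN = 0, PRIO_MAX = 7 };

static void adjust_priority(prefetch_stream_t *s,
                            int demand_before_return,   /* processor waited */
                            int idle_cycles_before_use) /* data sat unused  */
{
    if (demand_before_return && s->priority < PRIO_MAX)
        s->priority++;                       /* running behind the consumer */
    else if (idle_cycles_before_use > 1000 && s->priority > PRIO_MIN)
        s->priority--;                       /* running far ahead of demand */
}

int main(void) {
    prefetch_stream_t s = { .priority = 3 };
    adjust_priority(&s, 1, 0);
    printf("stream priority after late prefetch: %d\n", s.priority);
    return 0;
}
```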
-
Publication number: 20110119322
Abstract: Mechanisms for providing an interconnect layer of a three-dimensional integrated circuit device having multiple independent and cooperative on-chip networks are provided. With regard to an apparatus implementing the interconnect layer, such an apparatus comprises a first integrated circuit layer comprising one or more first functional units and an interconnect layer coupled to the first integrated circuit layer. The first integrated circuit layer and interconnect layer are integrated with one another into a single three-dimensional integrated circuit. The interconnect layer comprises a plurality of independent on-chip communication networks that are independently operable and independently able to be powered on and off, each on-chip communication network comprising a plurality of point-to-point communication links coupled together by a plurality of connection points. The one or more first functional units are coupled to a first independent on-chip communication network of the interconnect layer.
Type: Application
Filed: November 13, 2009
Publication date: May 19, 2011
Applicant: International Business Machines Corporation
Inventors: Jian Li, Steven P. VanderWiel, Lixin Zhang
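Sketch only, modeling the independently powered networks the abstract describes; the network count and the "first powered network" attachment rule are illustrative assumptions.

```c
/* Sketch: an interconnect layer whose on-chip networks can be powered
 * on and off individually. */
#include <stdio.h>

#define NUM_NETWORKS 3

typedef struct {
    int powered[NUM_NETWORKS];   /* 1 = network is powered on */
} interconnect_layer_t;

/* Attach a functional unit to the first powered network;
 * returns -1 if every network is currently powered down. */
static int attach_unit(const interconnect_layer_t *ic) {
    for (int n = 0; n < NUM_NETWORKS; n++)
        if (ic->powered[n])
            return n;
    return -1;
}

int main(void) {
    interconnect_layer_t ic = { .powered = { 0, 1, 1 } };  /* net 0 is off */
    printf("unit attached to on-chip network %d\n", attach_unit(&ic));
    return 0;
}
```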
-
Patent number: 7944926
Abstract: A method for peer migration in a distributed Border Gateway Protocol (BGP) system includes: disconnecting a peer relationship between a source BGP process and a network device, wherein first routing information received from the network device is recorded in a forwarding instruction process; establishing a peer relationship between a target BGP process and the network device, and receiving second routing information from the network device; and updating the first routing information recorded in the forwarding instruction process according to the second routing information.
Type: Grant
Filed: February 26, 2009
Date of Patent: May 17, 2011
Assignee: Huawei Technologies Co., Ltd.
Inventors: Lixin Zhang, Boyan Tu
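A minimal sketch of the migration sequence, with the process and route types invented for illustration: routes learned by the source BGP process stay recorded in the forwarding process while the peer is re-established on the target process, then get replaced by the freshly received routes.

```c
/* Sketch: keep forwarding on recorded routes during peer migration,
 * then update them with the target process's routes. */
#include <stdio.h>

#define MAX_ROUTES 16

typedef struct {
    char routes[MAX_ROUTES][32];   /* textual prefixes, e.g. "10.1.0.0/16" */
    int  count;
} route_table_t;

typedef struct { route_table_t recorded; } forwarding_process_t;

static void migrate_peer(forwarding_process_t *fwd,
                         const route_table_t *from_target_process) {
    /* Step 1: the peer session on the source process is torn down; routes it
     * learned remain in fwd->recorded so forwarding continues uninterrupted. */
    /* Step 2: the target process establishes the peer and learns routes.     */
    /* Step 3: replace the recorded routes with the target process's copy.    */
    fwd->recorded = *from_target_process;
}

int main(void) {
    forwarding_process_t fwd = { .recorded = { { "10.1.0.0/16" }, 1 } };
    route_table_t fresh = { { "10.1.0.0/16", "192.168.0.0/24" }, 2 };
    migrate_peer(&fwd, &fresh);
    printf("routes after migration: %d\n", fwd.recorded.count);
    return 0;
}
```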
-
Publication number: 20110072214
Abstract: A mechanism is provided in a cache for providing a read and write aware cache. The mechanism partitions a large cache into a read-often region and a write-often region. The mechanism considers read/write frequency in a non-uniform cache architecture replacement policy. A frequently written cache line is placed in one of the farther banks. A frequently read cache line is placed in one of the closer banks. The size ratio between read-often and write-often regions may be static or dynamic. The boundary between the read-often region and the write-often region may be distinct or fuzzy.
Type: Application
Filed: September 18, 2009
Publication date: March 24, 2011
Applicant: International Business Machines Corporation
Inventors: Jian Li, Ramakrishnan Rajamony, William E. Speight, Lixin Zhang
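A sketch of the placement policy, with invented bank numbering and a simple majority rule: write-heavy lines go to far banks, read-heavy lines to near banks.

```c
/* Sketch: pick a NUCA bank for a cache line from its read/write counts. */
#include <stdio.h>

#define NEAR_BANK 0   /* lowest access latency to the core */
#define FAR_BANK  7   /* highest access latency            */

static int choose_bank(unsigned reads, unsigned writes) {
    if (writes > reads)
        return FAR_BANK;    /* write-often region: latency matters less  */
    return NEAR_BANK;       /* read-often region: keep close to the core */
}

int main(void) {
    printf("hot read line  -> bank %d\n", choose_bank(120, 3));
    printf("hot write line -> bank %d\n", choose_bank(5, 90));
    return 0;
}
```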
-
Publication number: 20110055827
Abstract: A mechanism is provided in a virtual machine monitor for providing cache partitioning in virtualized environments. The mechanism assigns a virtual identification (ID) to each virtual machine in the virtualized environment. The processing core stores the virtual ID of the virtual machine in a special register. The mechanism also creates an entry for the virtual machine in a partition table. The mechanism may partition a shared cache using a vertical (way) partition and/or a horizontal partition. The entry in the partition table includes a vertical partition control and a horizontal partition control. For each cache access, the virtual machine passes the virtual ID along with the address to the shared cache. If the cache access results in a miss, the shared cache uses the partition table to select a victim cache line for replacement.
Type: Application
Filed: August 25, 2009
Publication date: March 3, 2011
Applicant: International Business Machines Corporation
Inventors: Jiang Lin, Lixin Zhang
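An illustrative sketch of the vertical (way) partition lookup; the table layout, way masks, and first-allowed-way victim choice are assumptions, not the claimed controls.

```c
/* Sketch: a virtual ID indexes a partition table whose way mask restricts
 * which ways a miss from that VM may evict. */
#include <stdio.h>
#include <stdint.h>

#define NUM_WAYS 8
#define MAX_VMS  4

typedef struct {
    uint8_t way_mask;   /* vertical (way) partition: allowed ways, one bit each */
} partition_entry_t;

static partition_entry_t partition_table[MAX_VMS] = {
    { 0x0F },   /* VM 0 may evict only ways 0-3 */
    { 0xF0 },   /* VM 1 may evict only ways 4-7 */
    { 0xFF },   /* VM 2 unrestricted            */
    { 0xFF },
};

/* On a miss, pick the first way the requesting VM is allowed to replace. */
static int select_victim_way(int virtual_id) {
    uint8_t mask = partition_table[virtual_id].way_mask;
    for (int w = 0; w < NUM_WAYS; w++)
        if (mask & (1u << w))
            return w;
    return -1;   /* misconfigured: VM owns no ways */
}

int main(void) {
    printf("VM 1 victim way: %d\n", select_victim_way(1));
    return 0;
}
```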
-
Publication number: 20110048729
Abstract: The present disclosure provides an improved design for a pull tube sleeved stress joint and associated pull tube for managing stresses on a catenary riser for a floating offshore structure. The pull tube sleeved stress joint includes at least one sleeve surrounding a length of the pull tube with an annular gap between the sleeve and pull tube and a link ring therebetween. For embodiments having a plurality of sleeves, a first sleeve can be spaced by an annular first gap from the pull tube and coupled thereto with a first ring between the pull tube and the first sleeve, and a second sleeve can be spaced by an annular second gap from the first sleeve and coupled thereto with a second ring between the first sleeve and the second sleeve. Both the pull tube and the sleeves can be made of regular pipe segments welded together with regular girth welds.
Type: Application
Filed: August 25, 2009
Publication date: March 3, 2011
Applicant: Technip France
Inventors: Michael Y.H. Luo, Bob Lixin Zhang, Shih-Hsiao Mark Chang
-
Patent number: 7886132
Abstract: A predication technique for out-of-order instruction processing provides efficient out-of-order execution with low hardware overhead. A special op-code demarks unified regions of program code that contain predicated instructions that depend on the resolution of a condition. Field(s) or operand(s) associated with the special op-code indicate the number of instructions that follow the op-code and also contain an indication of the association of each instruction with its corresponding conditional path. Each conditional register write in a region has a corresponding register write for each conditional path, with additional register writes inserted by the compiler if symmetry is not already present, forming a coupled set of register writes. Therefore, a unified instruction stream can be decoded and dispatched with the register writes all associated with the same re-name resource, and the conditional register write is resolved by executing the particular instruction specified by the resolved condition.
Type: Grant
Filed: May 19, 2008
Date of Patent: February 8, 2011
Assignee: International Business Machines Corporation
Inventors: Ram Rangan, Mark W. Stephenson, Lixin Zhang
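Sketch only, with the region encoding invented for illustration: a predicated region lists instructions tagged with the conditional path they belong to, and once the condition resolves, only the writes on the taken path commit, so each coupled pair resolves to exactly one architected write.

```c
/* Sketch: resolve a predicated region by committing only the register
 * writes on the taken conditional path. */
#include <stdio.h>

typedef struct {
    int path;       /* 0 = "condition false" path, 1 = "condition true" path */
    int dest_reg;   /* register written by this instruction */
    int value;      /* value it would write */
} pred_insn_t;

static void resolve_region(int condition, const pred_insn_t *region,
                           int count, int regs[]) {
    for (int i = 0; i < count; i++)
        if (region[i].path == condition)
            regs[region[i].dest_reg] = region[i].value;
}

int main(void) {
    int regs[8] = {0};
    pred_insn_t region[] = {
        { 1, 3, 42 },   /* r3 = 42 if condition is true  */
        { 0, 3, 7  },   /* r3 = 7  if condition is false */
    };
    resolve_region(1, region, 2, regs);
    printf("r3 after region: %d\n", regs[3]);
    return 0;
}
```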
-
Publication number: 20110022773
Abstract: A mechanism is provided in a virtual machine monitor for fine grained cache allocation in a shared cache. The mechanism partitions a cache tag into a most significant bit (MSB) portion and a least significant bit (LSB) portion. The MSB portion of the tags is shared among the cache lines in a set. The LSB portion of the tags is private, one per cache line. The mechanism allows software to set the MSB portion of tags in a cache to allocate sets of cache lines. The cache controller determines whether a cache line is locked based on the MSB portion of the tag.
Type: Application
Filed: July 27, 2009
Publication date: January 27, 2011
Applicant: International Business Machines Corporation
Inventors: Ramakrishnan Rajamony, William E. Speight, Lixin Zhang
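A minimal sketch, assuming widths for the shared and private tag fields: the MSB portion is stored once per set and settable by software, and a line counts as locked when its set's MSB field carries the software-reserved value.

```c
/* Sketch: shared per-set MSB tag field doubles as a software lock. */
#include <stdio.h>
#include <stdint.h>

#define LINES_PER_SET 8
#define LOCKED_MSB    0x3F   /* software-chosen MSB pattern meaning "locked" */

typedef struct {
    uint8_t  shared_msb;                  /* one MSB tag field per set  */
    uint16_t private_lsb[LINES_PER_SET];  /* one LSB tag field per line */
} cache_set_t;

/* Software allocates a set of lines by writing the shared MSB field. */
static void lock_set(cache_set_t *set) { set->shared_msb = LOCKED_MSB; }

/* Cache controller: replacement must skip lines in locked sets. */
static int set_is_locked(const cache_set_t *set) {
    return set->shared_msb == LOCKED_MSB;
}

int main(void) {
    cache_set_t set = { .shared_msb = 0x12 };
    printf("locked before: %d\n", set_is_locked(&set));
    lock_set(&set);
    printf("locked after:  %d\n", set_is_locked(&set));
    return 0;
}
```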
-
Publication number: 20110021346
Abstract: A 3,4-isoprene-based polymer having high isotacticity can be produced by polymerizing an isoprene compound using a complex represented by the general formula (A) and a catalyst activator, wherein R1 and R2 independently represent an alkyl group, a cyclohexyl group, an aryl group or an aralkyl group; R3 represents an alkyl group, an alkenyl group, an alkynyl group, an aryl group, an aralkyl group, an aliphatic, aromatic or cyclic amino group, a phosphino group, a boryl group, an alkylthio or arylthio group, or an alkoxy or aryloxy group; M represents a rare earth element selected from Sc, Y, and La to Lu with promethium (Pm) excluded; Q1 and Q2 independently represent a monoanionic ligand; and L represents a neutral Lewis base.
Type: Application
Filed: October 4, 2010
Publication date: January 27, 2011
Applicant: Riken
Inventors: Zhaomin Hou, Lixin Zhang
-
Patent number: 7877586
Abstract: In at least one embodiment, a processor includes at least one execution unit that executes instructions and instruction sequencing logic, coupled to the at least one execution unit, that fetches instructions from a memory system for execution by the at least one execution unit. The instruction sequencing logic includes branch target address prediction circuitry that stores a branch target address prediction associating a first instruction fetch address with a branch target address to be used as a second instruction fetch address. The branch target address prediction circuitry includes delay logic that, in response to at least a tag portion of a third instruction fetch address matching the first instruction fetch address, delays access to the memory system utilizing the second instruction fetch address if no branch target address prediction was made in an immediately previous cycle of operation.
Type: Grant
Filed: February 1, 2008
Date of Patent: January 25, 2011
Assignee: International Business Machines Corporation
Inventors: David S. Levitan, Lixin Zhang
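A sketch with invented structure names: the predicted-target redirect is allowed immediately only if a prediction was also made in the immediately previous cycle; otherwise it is held for one cycle.

```c
/* Sketch: delay the predicted-target fetch when the previous cycle made
 * no branch target prediction. */
#include <stdio.h>
#include <stdint.h>

typedef struct {
    uint64_t tag;      /* tag portion of the first fetch address */
    uint64_t target;   /* predicted branch target address        */
    int      predicted_last_cycle;
} btac_entry_t;

/* Returns 1 if fetch may redirect to *next_fetch now, 0 if it must wait. */
static int btac_lookup(btac_entry_t *e, uint64_t fetch_tag, uint64_t *next_fetch) {
    if (e->tag != fetch_tag) {
        e->predicted_last_cycle = 0;
        return 0;                            /* no prediction this cycle */
    }
    int allow_now = e->predicted_last_cycle; /* delay if last cycle had none */
    e->predicted_last_cycle = 1;
    *next_fetch = e->target;
    return allow_now;
}

int main(void) {
    btac_entry_t e = { .tag = 0x4000, .target = 0x5280, .predicted_last_cycle = 0 };
    uint64_t next = 0;
    printf("cycle 1 redirect allowed: %d\n", btac_lookup(&e, 0x4000, &next));
    printf("cycle 2 redirect allowed: %d (target %#llx)\n",
           btac_lookup(&e, 0x4000, &next), (unsigned long long)next);
    return 0;
}
```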
-
Publication number: 20110004875
Abstract: A method, a system, an apparatus, and a computer program product for allocating resources of one or more shared devices to one or more partitions of a virtualization environment within a data processing system. At least one user-defined resource assignment is received for one or more devices associated with the data processing system. One or more registers associated with the one or more partitions are dynamically set to execute the at least one resource assignment, whereby the at least one resource assignment enables a user-defined quantitative measure (number and/or percentage) of devices to operate when the one or more transactions are executed via the partition. The system enables the one or more devices to execute one or more transactions at a bandwidth/capacity that is less than or equal to the user-defined resource assignment and minimizes performance interference among partitions.
Type: Application
Filed: July 1, 2009
Publication date: January 6, 2011
Applicant: International Business Machines Corporation
Inventors: Elmootazbellah N. Elnozahy, Ramakrishnan Rajamony, William E. Speight, Lixin Zhang
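Sketch only, with the register layout and percentages invented: a per-partition register holds the user-defined share of a shared device's bandwidth, and requests beyond that share are throttled.

```c
/* Sketch: throttle a partition's device requests to its assigned share. */
#include <stdio.h>

#define MAX_PARTITIONS 4

/* Percentage of the device's bandwidth each partition may consume. */
static unsigned resource_assignment[MAX_PARTITIONS] = { 50, 25, 15, 10 };

/* Grant a request only while the partition is within its assigned share. */
static int grant_request(int partition, unsigned used_pct) {
    return used_pct < resource_assignment[partition];
}

int main(void) {
    printf("partition 1 at 20%% usage -> %s\n",
           grant_request(1, 20) ? "granted" : "throttled");
    printf("partition 1 at 30%% usage -> %s\n",
           grant_request(1, 30) ? "granted" : "throttled");
    return 0;
}
```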