Patents by Inventor Lixin Zhang
Lixin Zhang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20120195924
Abstract: Provided are adenoviral vectors for generating an immune response to antigen. The vectors comprise a transcription unit encoding a secretable polypeptide, the polypeptide comprising a secretory signal sequence upstream of a tumor antigen upstream of CD40 ligand, which is missing all or substantially all of the transmembrane domain, rendering CD40L secretable. Also provided are methods of generating an immune response against cells expressing a tumor antigen by administering an effective amount of the invention vector. Further provided are methods of generating an immune response against cancer expressing a tumor antigen in an individual by administering an effective amount of the invention vector. Still further provided are methods of generating immunity to infection by human papilloma virus (HPV) by administering an effective amount of the invention vector which encodes the E6 or E7 protein of HPV. The immunity generated is long term.
Type: Application
Filed: February 2, 2012
Publication date: August 2, 2012
Applicant: VAXum, LLC
Inventors: Albert B. Deisseroth, Lixin Zhang
-
Publication number: 20120191946
Abstract: A method for fast remote communication and computation between processors is provided in the illustrative embodiments. A direct core to core communication unit (DCC) is configured to operate with a first processor, the first processor being a remote processor. A memory associated with the DCC receives a set of bytes, the set of bytes being sent from a second processor. An operation specified in the set of bytes is executed at the remote processor such that the operation is invoked without causing a software thread to execute.
Type: Application
Filed: March 7, 2012
Publication date: July 26, 2012
Applicant: International Business Machines Corporation
Inventors: John Bruce Carter, Elmootazbellah Nabil Elnozahy, Ahmed Gheith, Eric Van Hensbergen, Karthick Rajamani, William Evan Speight, Lixin Zhang
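The DCC mechanism above can be pictured with a small simulation. The class name, opcode table, and byte layout here are illustrative assumptions, not the patented design; the point is that the receiving side decodes and runs the operation directly, with no software thread dispatched.

```python
# Sketch of a direct core-to-core communication (DCC) unit: a second
# processor deposits a set of bytes naming an operation; the remote
# side decodes and executes it without creating a software thread.
import struct

class DCC:
    """Mailbox attached to a remote processor (hypothetical layout)."""
    OPS = {1: lambda a, b: a + b,   # opcode 1: add
           2: lambda a, b: a * b}   # opcode 2: multiply

    def __init__(self):
        self.mailbox = None         # memory associated with the DCC

    def receive(self, payload: bytes):
        self.mailbox = payload      # bytes arriving from the sender

    def execute(self):
        # Decode one opcode byte and two 32-bit operands, then run the
        # operation in place -- no thread is spawned to service it.
        op, a, b = struct.unpack("<BII", self.mailbox)
        return self.OPS[op](a, b)

dcc = DCC()
dcc.receive(struct.pack("<BII", 1, 40, 2))  # sender requests "add 40 2"
result = dcc.execute()
```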
-
Patent number: 8209488
Abstract: A technique for data prefetching using indirect addressing includes monitoring data pointer values, associated with an array, in an access stream to a memory. The technique determines whether a pattern exists in the data pointer values. A prefetch table is then populated with respective entries that correspond to respective array address/data pointer pairs based on a predicted pattern in the data pointer values. Respective data blocks (e.g., respective cache lines) are then prefetched (e.g., from the memory or another memory) based on the respective entries in the prefetch table.
Type: Grant
Filed: February 1, 2008
Date of Patent: June 26, 2012
Assignee: International Business Machines Corporation
Inventors: Ravi K. Arimilli, Balaram Sinharoy, William E. Speight, Lixin Zhang
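The table-population step can be sketched in a few lines. This is a minimal illustration assuming the simplest detectable pattern, a constant stride in the observed pointer values; the function name and lookahead depth are invented for the example.

```python
# Sketch of indirect-addressing prefetch: watch the data pointer values
# an array access stream produces, detect a stride pattern, and populate
# a prefetch table of predicted array-address/data-pointer pairs.
def build_prefetch_table(array_addrs, pointer_values, lookahead=2):
    """Return {array_addr: predicted_pointer} for upcoming accesses
    (assumes a constant-stride pattern; illustrative only)."""
    strides = {b - a for a, b in zip(pointer_values, pointer_values[1:])}
    if len(strides) != 1:
        return {}                          # no detectable pattern
    stride = strides.pop()
    elem = array_addrs[1] - array_addrs[0] # array element spacing
    last_addr, last_ptr = array_addrs[-1], pointer_values[-1]
    return {last_addr + i * elem: last_ptr + i * stride
            for i in range(1, lookahead + 1)}

# Pointer values seen at array slots 0x100, 0x108, 0x110 step by 0x40:
table = build_prefetch_table([0x100, 0x108, 0x110],
                             [0x2000, 0x2040, 0x2080])
```

Each predicted pointer would then drive a prefetch of the cache line it names.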
-
Patent number: 8179674
Abstract: A scalable space-optimized and energy-efficient computing system is provided. The computing system comprises a plurality of modular compartments in at least one level of a frame configured in a hexahedron configuration. The computing system also comprises an air inlet, an air mixing plenum, and at least one fan. In the computing system the plurality of modular compartments are affixed above the air inlet, the air mixing plenum is affixed above the plurality of modular compartments, and the at least one fan is affixed above the air mixing plenum. When at least one module is inserted into one of the plurality of modular compartments, the module couples to a backplane within the frame.
Type: Grant
Filed: May 28, 2010
Date of Patent: May 15, 2012
Assignee: International Business Machines Corporation
Inventors: John B. Carter, Wael R. El-Essawy, Elmootazbellah N. Elnozahy, Madhusudan K. Iyengar, Thomas W. Keller, Jr., Jian Li, Karthick Rajamani, Juan C. Rubio, William E. Speight, Lixin Zhang
-
Patent number: 8166277
Abstract: A technique for performing indirect data prefetching includes determining a first memory address of a pointer associated with a data prefetch instruction. Content of a memory at the first memory address is then fetched. A second memory address is determined from the content of the memory at the first memory address. Finally, a data block (e.g., a cache line) including data at the second memory address is fetched (e.g., from the memory or another memory).
Type: Grant
Filed: February 1, 2008
Date of Patent: April 24, 2012
Assignee: International Business Machines Corporation
Inventors: Ravi K. Arimilli, Balaram Sinharoy, William E. Speight, Lixin Zhang
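The single indirection step reads like this in miniature. The flat-dictionary memory and 64-byte line size are assumptions for illustration:

```python
# Sketch of one indirect prefetch step: read the pointer stored at the
# first address, then fetch the cache line holding the pointed-to data.
LINE = 64  # assumed cache-line size in bytes

def indirect_prefetch(memory, pointer_addr):
    """memory: dict mapping address -> stored value (toy flat memory)."""
    target = memory[pointer_addr]      # content at the first address
    return target & ~(LINE - 1)        # base of the line at the second address

mem = {0x1000: 0x52A8}   # pointer at 0x1000 points to data at 0x52A8
line = indirect_prefetch(mem, 0x1000)
```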
-
Patent number: 8161265
Abstract: A technique for performing data prefetching using multi-level indirect data prefetching includes determining a first memory address of a pointer associated with a data prefetch instruction. Content that is included in a first data block (e.g., a first cache line of a memory) at the first memory address is then fetched. A second memory address is then determined based on the content at the first memory address. Content that is included in a second data block (e.g., a second cache line) at the second memory address is then fetched (e.g., from the memory or another memory). A third memory address is then determined based on the content at the second memory address. Finally, a third data block (e.g., a third cache line) that includes another pointer or data at the third memory address is fetched (e.g., from the memory or the another memory).
Type: Grant
Filed: February 1, 2008
Date of Patent: April 17, 2012
Assignee: International Business Machines Corporation
Inventors: Ravi K. Arimilli, Balaram Sinharoy, William E. Speight, Lixin Zhang
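The multi-level variant is a pointer chase: each fetched block supplies the address of the next. A minimal sketch, with the level count and toy memory invented for the example:

```python
# Sketch of multi-level indirect prefetching: follow the pointer chain
# address -> content -> content, fetching a data block at each level.
def multilevel_prefetch(memory, addr, levels=3):
    fetched = []
    for _ in range(levels):
        fetched.append(addr)      # data block fetched at this level
        addr = memory[addr]       # next address comes from the content
    return fetched

mem = {0x100: 0x200, 0x200: 0x300, 0x300: 0xDEAD}
blocks = multilevel_prefetch(mem, 0x100)
```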
-
Patent number: 8161263
Abstract: A processor includes a first address translation engine, a second address translation engine, and a prefetch engine. The first address translation engine is configured to determine a first memory address of a pointer associated with a data prefetch instruction. The prefetch engine is coupled to the first translation engine and is configured to fetch content, included in a first data block (e.g., a first cache line) of a memory, at the first memory address. The second address translation engine is coupled to the prefetch engine and is configured to determine a second memory address based on the content of the memory at the first memory address. The prefetch engine is also configured to fetch (e.g., from the memory or another memory) a second data block (e.g., a second cache line) that includes data at the second memory address.
Type: Grant
Filed: February 1, 2008
Date of Patent: April 17, 2012
Assignee: International Business Machines Corporation
Inventors: Ravi K. Arimilli, Balaram Sinharoy, William E. Speight, Lixin Zhang
-
Patent number: 8161264
Abstract: A technique for performing data prefetching using indirect addressing includes determining a first memory address of a pointer associated with a data prefetch instruction. Content, that is included in a first data block (e.g., a first cache line) of a memory, at the first memory address is then fetched. An offset is then added to the content of the memory at the first memory address to provide a first offset memory address. A second memory address is then determined based on the first offset memory address. A second data block (e.g., a second cache line) that includes data at the second memory address is then fetched (e.g., from the memory or another memory). A data prefetch instruction may be indicated by a unique operational code (opcode), a unique extended opcode, or a field (including one or more bits) in an instruction.
Type: Grant
Filed: February 1, 2008
Date of Patent: April 17, 2012
Assignee: International Business Machines Corporation
Inventors: Ravi K. Arimilli, Balaram Sinharoy, William E. Speight, Lixin Zhang
-
Patent number: 8140768
Abstract: A method, processor, and data processing system for enabling utilization of a single prefetch stream to access data across a memory page boundary. A prefetch engine includes an active streams table in which information for one or more scheduled prefetch streams is stored. The prefetch engine also includes a victim table for storing a previously active stream whose next prefetch crosses a memory page boundary. The scheduling logic issues a prefetch request with a real address to fetch data from the lower level memory. Then, responsive to detecting that the real address of the stream's next sequential prefetch crosses the memory page boundary, the prefetch engine determines when the first prefetch stream can continue across the page boundary of the first memory page (via an effective address comparison). The PE automatically reinserts the first prefetch stream into the active stream table to jump-start prefetching across the page boundary.
Type: Grant
Filed: February 1, 2008
Date of Patent: March 20, 2012
Assignee: International Business Machines Corporation
Inventors: William E. Speight, Lixin Zhang
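The active-table/victim-table interplay can be sketched as follows. The 4 KiB page size, class shape, and stream IDs are assumptions; the effective-address comparison is reduced to an explicit `resume` call for brevity:

```python
# Sketch of page-boundary stream handling: an active-streams table
# drives prefetching; a stream whose next prefetch crosses a 4 KiB page
# is parked in a victim table, then reinserted on the next page.
PAGE = 4096

class PrefetchEngine:
    def __init__(self):
        self.active, self.victims = {}, {}

    def advance(self, sid, next_addr):
        cur = self.active[sid]
        if next_addr // PAGE != cur // PAGE:   # crosses page boundary
            self.victims[sid] = next_addr      # park the stream
            del self.active[sid]
        else:
            self.active[sid] = next_addr

    def resume(self, sid):
        # Effective-address check passed: jump-start across the boundary.
        self.active[sid] = self.victims.pop(sid)

pe = PrefetchEngine()
pe.active["s1"] = 4032
pe.advance("s1", 4160)           # 4160 lies on the next page
parked = "s1" in pe.victims
pe.resume("s1")
resumed = pe.active["s1"]
```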
-
Patent number: 8119117
Abstract: Provided are adenoviral vectors for generating an immune response to antigen. The vectors comprise a transcription unit encoding a secretable polypeptide, the polypeptide comprising a secretory signal sequence upstream of a tumor antigen upstream of CD40 ligand, which is missing all or substantially all of the transmembrane domain, rendering CD40L secretable. Also provided are methods of generating an immune response against cells expressing a tumor antigen by administering an effective amount of the invention vector. Further provided are methods of generating an immune response against cancer expressing a tumor antigen in an individual by administering an effective amount of the invention vector. Still further provided are methods of generating immunity to infection by human papilloma virus (HPV) by administering an effective amount of the invention vector which encodes the E6 or E7 protein of HPV. The immunity generated is long term.
Type: Grant
Filed: November 12, 2003
Date of Patent: February 21, 2012
Assignee: VAXum, LLC
Inventors: Albert B. Deisseroth, Lixin Zhang
-
Patent number: 8086831
Abstract: In at least one embodiment, an indexed table circuit includes a plurality of banks for storing data to be accessed and a split index array. The indexed table circuit is organized in a plurality of entries each corresponding to a respective one of a plurality of different entry indices, where each entry includes a storage location in the plurality of banks and the split index array. The indexed table circuit further includes selection logic that, responsive to read access of an entry among the plurality of entries utilizing an entry index of a bit string, utilizes a split index read from the split index array to select a set of one or more bits of a tag of the bit string, utilizes the selected set of one or more bits to select data read from one of the plurality of banks, and outputs the selected data.
Type: Grant
Filed: February 1, 2008
Date of Patent: December 27, 2011
Assignee: International Business Machines Corporation
Inventors: Lei Chen, Lixin Zhang
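The selection path can be illustrated with a two-bank toy, selecting a single tag bit per entry. The bank layout and the one-bit split are simplifying assumptions, not the circuit's actual geometry:

```python
# Sketch of a split-index lookup: the entry index picks a row; that
# row's split index says which tag bit chooses among the row's banks.
def read_indexed_table(banks, split_index_array, entry_index, tag):
    split = split_index_array[entry_index]  # per-entry split index
    bit = (tag >> split) & 1                # selected tag bit (set of one)
    return banks[bit][entry_index]          # data read from chosen bank

banks = [["a0", "a1"], ["b0", "b1"]]        # two banks, two entries each
splits = [0, 3]                             # split index stored per entry
out = read_indexed_table(banks, splits, 1, tag=0b1000)
```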
-
Publication number: 20110310903
Abstract: A method, an apparatus and a system for migrating VPN Routing and Forwarding (VRF) instances are disclosed. The system includes: a first BGP process, configured to: send a VRF instance and configuration information of BGP peers in the VRF instance to a second Border Gateway Protocol (BGP) process; instruct a Transport Control Protocol (TCP) to back up a TCP link to the second BGP process in a hot backup mode, where the TCP link is related to peer sessions of the BGP peers in the VRF instance; and the second BGP process, configured to: receive routes in an Adjacent Ingress Routing Information Base (Adj-RIB-In) sent by the first BGP process. In the embodiments of the present invention, BGP peer services of the VRF instance are not interrupted, and the sessions are not disconnected; the router's migration process is completely imperceptible to its peer devices; and routing information changed during the migration is updated after the migration.
Type: Application
Filed: September 2, 2011
Publication date: December 22, 2011
Applicant: Huawei Technologies Co., Ltd.
Inventors: Boyan Tu, Lixin Zhang, Shuanglong Chen, Jianwen Liu, Jianbin Xu, Xiaohui Liu
-
Publication number: 20110292594
Abstract: A scalable space-optimized and energy-efficient computing system is provided. The computing system comprises a plurality of modular compartments in at least one level of a frame configured in a hexahedron configuration. The computing system also comprises an air inlet, an air mixing plenum, and at least one fan. In the computing system the plurality of modular compartments are affixed above the air inlet, the air mixing plenum is affixed above the plurality of modular compartments, and the at least one fan is affixed above the air mixing plenum. When at least one module is inserted into one of the plurality of modular compartments, the module couples to a backplane within the frame.
Type: Application
Filed: May 28, 2010
Publication date: December 1, 2011
Applicant: International Business Machines Corporation
Inventors: John B. Carter, Wael R. El-Essawy, Elmootazbellah N. Elnozahy, Madhusudan K. Iyengar, Thomas W. Keller, Jr., Jian Li, Karthick Rajamani, Juan C. Rubio, William E. Speight, Lixin Zhang
-
Publication number: 20110296138
Abstract: A method, system, and computer usable program product for fast remote communication and computation between processors are provided in the illustrative embodiments. A direct core to core communication unit (DCC) is configured to operate with a first processor, the first processor being a remote processor. A memory associated with the DCC receives a set of bytes, the set of bytes being sent from a second processor. An operation specified in the set of bytes is executed at the remote processor such that the operation is invoked without causing a software thread to execute.
Type: Application
Filed: May 27, 2010
Publication date: December 1, 2011
Applicant: International Business Machines Corporation
Inventors: John Bruce Carter, Elmootazbellah Nabil Elnozahy, Ahmed Gheith, Eric Van Hensbergen, Karthick Rajamani, William Evan Speight, Lixin Zhang
-
Publication number: 20110296149
Abstract: Mechanisms are provided for processing an instruction in a processor of a data processing system. The mechanisms operate to receive, in a processor of the data processing system, an instruction, the instruction including power/performance tradeoff information associated with the instruction. The mechanisms further operate to determine power/performance tradeoff priorities or criteria, specifying whether power conservation or performance is prioritized with regard to execution of the instruction, based on the power/performance tradeoff information. Moreover, the mechanisms process the instruction in accordance with the power/performance tradeoff priorities or criteria identified based on the power/performance tradeoff information of the instruction.
Type: Application
Filed: May 27, 2010
Publication date: December 1, 2011
Applicant: International Business Machines Corporation
Inventors: John B. Carter, Jian Li, Karthick Rajamani, William E. Speight, Lixin Zhang
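The decode side of a per-instruction tradeoff hint is simple to picture. The bit position and policy names below are invented for the sketch; the publication does not fix a particular encoding:

```python
# Sketch of per-instruction power/performance hints: a field in the
# instruction word (bit 31 here, an assumed encoding) says whether
# power conservation or performance is prioritized for execution.
POWER_BIT = 1 << 31

def execution_policy(instruction_word):
    if instruction_word & POWER_BIT:
        return "low-power"      # e.g. issue on a slower, efficient pipe
    return "performance"        # e.g. full-speed execution

policy = execution_policy(0x80001234)   # hint bit set
```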
-
Publication number: 20110296107
Abstract: A mechanism is provided within a 3D stacked memory organization to spread or stripe cache lines across multiple layers. In an example organization, a 128B cache line takes eight cycles on a 16B-wide bus. Each layer may provide 32B. The first layer uses the first two of the eight transfer cycles to send the first 32B. The next layer sends the next 32B using the next two cycles of the eight transfer cycles, and so forth. The mechanism provides a uniform memory access.
Type: Application
Filed: May 26, 2010
Publication date: December 1, 2011
Applicant: International Business Machines Corporation
Inventors: Jian Li, William E. Speight, Lixin Zhang
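The cycle arithmetic from the example organization works out as follows; the function name is invented, but the 128B/16B/32B numbers come straight from the abstract:

```python
# Sketch of striping one 128 B cache line across four stacked layers on
# a 16 B bus: each layer supplies 32 B, i.e. two of the eight cycles.
def cycle_schedule(line_bytes=128, bus_bytes=16, layer_bytes=32):
    cycles = line_bytes // bus_bytes        # 8 transfer cycles total
    per_layer = layer_bytes // bus_bytes    # 2 cycles per layer
    # Which layer drives the bus on each cycle:
    return [c // per_layer for c in range(cycles)]

schedule = cycle_schedule()
```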
-
Publication number: 20110292597
Abstract: A modular processing module is provided. The modular processing module comprises a set of processing module sides. Each processing module side comprises a circuit board, a plurality of connectors coupled to the circuit board, and a plurality of processing nodes coupled to the circuit board. Each processing module side in the set of processing module sides couples to another processing module side using at least one connector in the plurality of connectors such that, when all of the set of processing module sides are coupled together, the modular processing module is formed. The modular processing module comprises an exterior connection to a power source and a communication system.
Type: Application
Filed: May 28, 2010
Publication date: December 1, 2011
Applicant: International Business Machines Corporation
Inventors: John B. Carter, Wael R. El-Essawy, Elmootazbellah N. Elnozahy, Wesley M. Felter, Madhusudan K. Iyengar, Thomas W. Keller, Jr., Karthick Rajamani, Juan C. Rubio, William E. Speight, Lixin Zhang
-
Publication number: 20110296097
Abstract: Mechanisms are provided for inhibiting precharging of memory cells of a dynamic random access memory (DRAM) structure. The mechanisms receive a command for accessing memory cells of the DRAM structure. The mechanisms further determine, based on the command, if precharging the memory cells following accessing the memory cells is to be inhibited. Moreover, the mechanisms send, in response to the determination indicating that precharging the memory cells is to be inhibited, a command to blocking logic of the DRAM structure to block precharging of the memory cells following accessing the memory cells.
Type: Application
Filed: May 27, 2010
Publication date: December 1, 2011
Applicant: International Business Machines Corporation
Inventors: Elmootazbellah N. Elnozahy, Karthick Rajamani, William E. Speight, Lixin Zhang
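The control decision reduces to a flag on the access command. This sketch collapses the DRAM's blocking logic to a boolean row state; the command encoding is an assumption for illustration:

```python
# Sketch of precharge inhibition: the access command carries a flag
# telling the DRAM's blocking logic to skip the precharge that would
# normally follow the access, leaving the row open for reuse.
def handle_command(cmd, row_open):
    access, inhibit_precharge = cmd
    if access:
        row_open = True             # row activated and accessed
        if not inhibit_precharge:
            row_open = False        # normal precharge closes the row
    return row_open

kept_open = handle_command(("read", True), row_open=False)
closed = handle_command(("read", False), row_open=False)
```

Keeping the row open saves the activate/precharge energy of a follow-on access to the same row.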
-
Publication number: 20110296115
Abstract: A mechanism is provided for assigning memory to on-chip cache coherence domains. The mechanism assigns caches within a processing unit to coherence domains. The mechanism then assigns chunks of memory to the coherence domains. The mechanism monitors applications running on cores within the processing unit to identify needs of the applications. The mechanism may then reassign memory chunks to the cache coherence domains based on the needs of the applications running in the coherence domains. When a memory controller receives a cache miss, the memory controller may look up the address in a lookup table that maps memory chunks to cache coherence domains. Snoop requests are sent to caches within the coherence domain. If a cache line is found in a cache within the coherence domain, the cache line is returned to the originating cache by the cache containing the cache line, either directly or through the memory controller.
Type: Application
Filed: May 26, 2010
Publication date: December 1, 2011
Applicant: International Business Machines Corporation
Inventors: William E. Speight, Lixin Zhang
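The lookup-table step can be sketched directly. The 1 MiB chunk size, domain names, and cache names are assumptions made up for the example:

```python
# Sketch of memory-to-coherence-domain mapping: on a cache miss the
# memory controller looks the address up in a chunk table and snoops
# only the caches in that chunk's coherence domain.
CHUNK = 1 << 20  # assumed 1 MiB memory chunks

def snoop_targets(addr, chunk_to_domain, domain_caches):
    domain = chunk_to_domain[addr // CHUNK]   # table lookup by chunk
    return domain_caches[domain]              # caches to snoop

chunk_map = {0: "dom0", 1: "dom1"}
caches = {"dom0": ["L2_0", "L2_1"], "dom1": ["L2_2", "L2_3"]}
targets = snoop_targets(0x00100000, chunk_map, caches)
```

Reassigning a chunk to another domain is then just an update to `chunk_map`, which is what makes the scheme adaptable to application needs.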
-
Publication number: 20110296434
Abstract: A technique for sharing a fabric to facilitate off-chip communication for on-chip units includes dynamically assigning a first unit that implements a first communication protocol to a first portion of the fabric when private fabrics are indicated for the on-chip units. The technique also includes dynamically assigning a second unit that implements a second communication protocol to a second portion of the fabric when the private fabrics are indicated for the on-chip units. In this case, the first and second units are integrated in a same chip and the first and second protocols are different. The technique further includes dynamically assigning, based on off-chip traffic requirements of the first and second units, the first unit or the second unit to the first and second portions of the fabric when the private fabrics are not indicated for the on-chip units.
Type: Application
Filed: May 25, 2010
Publication date: December 1, 2011
Applicant: International Business Machines Corporation
Inventors: Jian Li, William E. Speight, Lixin Zhang
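The two assignment modes can be contrasted in a few lines. The unit names and the "higher traffic wins both portions" policy are illustrative assumptions; the publication only requires that shared-mode assignment follow off-chip traffic requirements:

```python
# Sketch of shared off-chip fabric assignment: with private fabrics,
# each on-chip unit keeps its own portion; otherwise both portions go
# to the unit with the higher off-chip traffic demand.
def assign_fabric(private, traffic_a, traffic_b):
    if private:
        return {"portion1": "unitA", "portion2": "unitB"}
    winner = "unitA" if traffic_a >= traffic_b else "unitB"
    return {"portion1": winner, "portion2": winner}

shared = assign_fabric(private=False, traffic_a=10, traffic_b=30)
isolated = assign_fabric(private=True, traffic_a=10, traffic_b=30)
```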