Patents by Inventor Lixin Zhang

Lixin Zhang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8310936
    Abstract: In a communication network, links in a transmission path between source and destination terminals are sequentially switched to an operational state in response to a command or a group of commands for transmitting data prior to completion of assembling the data. Each node in the transmission path independently monitors transmission of data. After transmitting the data, the links are selectively switched to pre-determined power saving states.
    Type: Grant
    Filed: July 23, 2008
    Date of Patent: November 13, 2012
    Assignee: International Business Machines Corporation
    Inventors: Jian Li, Lixin Zhang
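    The following is a minimal C sketch of the mechanism summarized above: links along the path are switched to an operational state in sequence when a transmit command arrives, and dropped back to their pre-determined power-saving states once the data has been sent. The state names, structures, and functions are illustrative assumptions, not taken from the patent.

        /* Hypothetical link power-state model; all names are illustrative. */
        #include <stdio.h>

        enum link_state { LINK_DEEP_SLEEP, LINK_NAP, LINK_OPERATIONAL };

        struct link {
            int id;
            enum link_state state;
            enum link_state saving_state;   /* pre-determined state to return to */
        };

        /* Switch every link on the path to operational, hop by hop, so the path
         * is ready before data assembly at the source has finished. */
        static void wake_path(struct link *path, int nlinks)
        {
            for (int i = 0; i < nlinks; i++) {
                path[i].state = LINK_OPERATIONAL;
                printf("link %d operational\n", path[i].id);
            }
        }

        /* After the data has been transmitted, each link independently falls
         * back to its own pre-determined power-saving state. */
        static void sleep_path(struct link *path, int nlinks)
        {
            for (int i = 0; i < nlinks; i++)
                path[i].state = path[i].saving_state;
        }

        int main(void)
        {
            struct link path[3] = {
                { 0, LINK_NAP,        LINK_NAP        },
                { 1, LINK_DEEP_SLEEP, LINK_NAP        },
                { 2, LINK_DEEP_SLEEP, LINK_DEEP_SLEEP },
            };
            wake_path(path, 3);     /* triggered by the transmit command */
            /* ... data transmitted, each node monitoring its own link ... */
            sleep_path(path, 3);
            return 0;
        }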
  • Publication number: 20120276148
    Abstract: Provided are adenoviral vectors for generating an immune response to antigen. The vectors comprise a transcription unit encoding a secretable polypeptide, the polypeptide comprising a secretory signal sequence upstream of a tumor antigen upstream of CD40 ligand, which is missing all or substantially all of the transmembrane domain, rendering CD40L secretable. Also provided are methods of generating an immune response against cells expressing a tumor antigen by administering an effective amount of the invention vector. Further provided are methods of generating an immune response against cancer expressing a tumor antigen in an individual by administering an effective amount of the invention vector. Still further provided are methods of generating immunity to infection by human papilloma virus (HPV) by administering an effective amount of the invention vector which encodes the E6 or E7 protein of HPV. The immunity generated is long term.
    Type: Application
    Filed: July 16, 2012
    Publication date: November 1, 2012
    Applicant: VAXUM, LLC
    Inventors: Albert B. Deisseroth, Lixin Zhang
  • Publication number: 20120272888
    Abstract: A spar hull for a floating vessel can include a hard tank having a belly portion, a fixed strake coupled to the outer surface of the tank and a folding strake coupled to the belly portion of the tank, the folding strake having one or more strake panels and one or more support frames. A method for installing folding belly strakes on a spar hull may include providing a floating spar hull having a hard tank with a belly side, rotating the spar so that the belly side is in a first workable position, coupling at least one folding strake to the belly side of the spar, and coupling the strake in a folded position for transport. The method may include positioning the spar hull offshore in a transport position, upending the spar hull, unfolding the strake, fixing the strake in the unfolded position and installing the spar hull.
    Type: Application
    Filed: January 28, 2010
    Publication date: November 1, 2012
    Applicant: TECHNIP FRANCE
    Inventors: Michael Y.H. Luo, Harvey O. Mohr, Vera Mohr, Lixin Zhang, Kostas Filoktitis Lambrakos
  • Publication number: 20120265944
    Abstract: A mechanism for assigning memory to on-chip cache coherence domains assigns caches within a processing unit to coherence domains. The mechanism assigns chunks of memory to the coherence domains. The mechanism monitors applications running on cores within the processing unit to identify needs of the applications. The mechanism may then reassign memory chunks to the cache coherence domains based on the needs of the applications running in the coherence domains. When a memory controller receives a cache miss, the memory controller may look up the address in a lookup table that maps memory chunks to cache coherence domains. Snoop requests are sent to caches within the coherence domain. If a cache line is found in a cache within the coherence domain, the cache line is returned to the originating cache by the cache containing it, either directly or through the memory controller.
    Type: Application
    Filed: April 24, 2012
    Publication date: October 18, 2012
    Applicant: International Business Machines Corporation
    Inventors: William E. Speight, Lixin Zhang
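    A small C sketch of the chunk-to-domain idea in the abstract above, under assumed parameters (16 MiB chunks, a flat lookup table): the memory controller maps the missing address to a coherence domain and snoops only that domain's caches. All names and sizes are illustrative.

        #include <stdint.h>
        #include <stdbool.h>

        #define CHUNK_SHIFT 24                 /* assumed 16 MiB chunks */
        #define NUM_CHUNKS  1024

        static uint8_t chunk_to_domain[NUM_CHUNKS];   /* chunk index -> domain id */

        /* Reassign a chunk to another coherence domain as application needs change. */
        void assign_chunk(uint64_t chunk, uint8_t domain)
        {
            chunk_to_domain[chunk % NUM_CHUNKS] = domain;
        }

        /* On a cache miss that reaches the memory controller, look up the owning
         * domain and snoop only that domain's caches. */
        bool handle_miss(uint64_t addr,
                         bool (*snoop_domain)(uint8_t domain, uint64_t addr))
        {
            uint8_t domain = chunk_to_domain[(addr >> CHUNK_SHIFT) % NUM_CHUNKS];
            return snoop_domain(domain, addr);   /* true if a cache supplied the line */
        }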
  • Patent number: 8285973
    Abstract: A method, processor and processing system provide management of per-thread pipeline resource allocation in a simultaneous multi-threaded (SMT) processor by counting indications of instruction completion for each of the threads. The indication may be the commit phase of the pipeline, which indicates results of the pipeline instruction execution are ready for write-back. The completion counts are used in a relative or absolute form to control the pipeline resource allocation. The decode or fetch rates of instructions for the threads can be controlled from the relative or absolute completion counts, providing control of scheduling instructions among the threads for execution by execution pipeline(s). Alternatively, or in combination, the thread priority registers in any thread priority management scheme can be controlled by comparison and/or scaling of the completion counts.
    Type: Grant
    Filed: August 4, 2008
    Date of Patent: October 9, 2012
    Assignee: International Business Machines Corporation
    Inventors: Wael R. El-Essawy, Lixin Zhang
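    A hedged C sketch of completion-count based balancing as described above: per-thread commit counts are accumulated and periodically used to redistribute decode slots. The inverse-proportional policy, interval reset, and clamping are assumptions made for illustration.

        #include <stdint.h>

        #define NUM_THREADS 2

        static uint64_t completion_count[NUM_THREADS]; /* instructions committed */
        static unsigned decode_slots[NUM_THREADS];     /* decode bandwidth per cycle */

        /* Called when a thread's instructions reach the commit (write-back) stage. */
        void on_commit(int tid, unsigned committed)
        {
            completion_count[tid] += committed;
        }

        /* Periodically rebalance decode bandwidth from the relative completion
         * counts; here the thread completing less work gets more decode slots.
         * Assumes total_slots >= 2. */
        void rebalance(unsigned total_slots)
        {
            uint64_t sum = completion_count[0] + completion_count[1];
            if (sum == 0) {
                decode_slots[0] = decode_slots[1] = total_slots / 2;
                return;
            }
            decode_slots[0] = (unsigned)(total_slots * completion_count[1] / sum);
            if (decode_slots[0] == 0)
                decode_slots[0] = 1;                       /* never starve a thread */
            if (decode_slots[0] >= total_slots)
                decode_slots[0] = total_slots - 1;
            decode_slots[1] = total_slots - decode_slots[0];
            completion_count[0] = completion_count[1] = 0; /* start a new interval */
        }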
  • Publication number: 20120243545
    Abstract: A method and corresponding device for determining a forwarding rule for a data packet in a Virtual Private LAN Service with Provider Backbone Bridge (PBB-VPLS) network are disclosed. In the method, the value in the backbone service instance identifier (I-SID) field of a received data packet is first examined; a virtual split horizon group corresponding to the data packet is then determined based on the I-SID value, wherein the virtual split horizon group defines a forwarding rule for the data packet between different pseudowire ports of the PBB-VPLS network. With the dynamic split horizon group, the method adapts to different forwarding rules for multiple I-VPLS instances with different tree topologies and can support multiple I-VPLS instances with different root sites and tree topologies in one B-VPLS instance, thereby ensuring the stability of the backbone network and reducing network operation cost.
    Type: Application
    Filed: December 17, 2009
    Publication date: September 27, 2012
    Inventors: Lixin Zhang, Duan Chen, Lijun Chen
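    The forwarding decision described above can be pictured with the following C sketch: the I-SID selects a virtual split horizon group, and a frame is not forwarded between two pseudowire ports that both belong to that group. Table sizes, field names, and the membership model are assumptions, not the patented data structures.

        #include <stdint.h>
        #include <stdbool.h>

        #define MAX_PORTS  64
        #define MAX_GROUPS 16

        /* Hypothetical tables: which split horizon group each I-SID maps to, and
         * which pseudowire ports belong to each group. */
        static uint8_t isid_to_group[1 << 8];          /* toy-sized I-SID table */
        static bool    port_in_group[MAX_GROUPS][MAX_PORTS];

        /* Split-horizon rule applied per I-SID: a frame is not forwarded between
         * two pseudowire ports that both belong to the frame's group. */
        bool may_forward(uint32_t isid, int ingress_port, int egress_port)
        {
            uint8_t group = isid_to_group[isid & 0xff] % MAX_GROUPS;
            return !(port_in_group[group][ingress_port] &&
                     port_in_group[group][egress_port]);
        }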
  • Publication number: 20120238541
    Abstract: The invention relates to a series of compounds with particular activity as inhibitors of the serine-threonine kinase AKT. Also provided are pharmaceutical compositions comprising same as well as methods for treating cancer.
    Type: Application
    Filed: September 17, 2010
    Publication date: September 20, 2012
    Applicant: Almac Discovery Limited
    Inventors: Mark Peter Bell, Timothy Harrison, Sumita Bhattacharyya, James Samuel Shane Rountree, Frank Burkamp, Stephen Price, Calum MacLeod, Richard Leonard Elliott, Phillip Smith, Toby Jonathan Blench, Colin Roderick O'Dowd, Lixin Zhang, Graham Peter Trevitt, Hazel Joan Dyke
  • Patent number: 8271729
    Abstract: A mechanism is provided in a cache for providing a read and write aware cache. The mechanism partitions a large cache into a read-often region and a write-often region. The mechanism considers read/write frequency in a non-uniform cache architecture replacement policy. A frequently written cache line is placed in one of the farther banks. A frequently read cache line is placed in one of the closer banks. The size ratio between read-often and write-often regions may be static or dynamic. The boundary between the read-often region and the write-often region may be distinct or fuzzy.
    Type: Grant
    Filed: September 18, 2009
    Date of Patent: September 18, 2012
    Assignee: International Business Machines Corporation
    Inventors: Jian Li, Ramakrishnan Rajamony, William E. Speight, Lixin Zhang
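    A brief C sketch of the placement policy described above, assuming per-line read/write counters: write-heavy lines go to the farther (write-often) banks and read-heavy lines to the closer (read-often) banks. The counters and the simple comparison are illustrative; the patent also allows the region boundary to be static, dynamic, or fuzzy.

        #include <stdint.h>

        enum region { NEAR_READ_REGION, FAR_WRITE_REGION };

        struct line_stats {
            uint32_t reads;
            uint32_t writes;
        };

        /* Pick the target region for a cache line when it is (re)placed:
         * frequently written -> farther (write-often) banks,
         * frequently read    -> closer (read-often) banks. */
        enum region choose_region(const struct line_stats *s)
        {
            return (s->writes > s->reads) ? FAR_WRITE_REGION : NEAR_READ_REGION;
        }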
  • Patent number: 8266381
    Abstract: In at least one embodiment, a processor detects during execution of program code whether a load instruction within the program code is associated with a hint. In response to detecting that the load instruction is not associated with a hint, the processor retrieves a full cache line of data from the memory hierarchy into the processor in response to the load instruction. In response to detecting that the load instruction is associated with a hint, a processor retrieves a partial cache line of data into the processor from the memory hierarchy in response to the load instruction.
    Type: Grant
    Filed: February 1, 2008
    Date of Patent: September 11, 2012
    Assignee: International Business Machines Corporation
    Inventors: Ravi K. Arimilli, Gheorghe C. Cascaval, Balaram Sinharoy, William E. Speight, Lixin Zhang
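    A minimal C sketch of the hinted-load behavior described above, with software standing in for the memory hierarchy: without a hint the full cache line is filled, with a hint only the sector containing the requested address. Line and sector sizes, and the hint encoding, are assumptions.

        #include <stdint.h>
        #include <stddef.h>
        #include <string.h>

        #define LINE_BYTES   128
        #define SECTOR_BYTES 32

        /* Fill 'dst' from a modeled flat memory: the whole line when the load
         * carries no hint, otherwise only the partial line (sector) containing
         * the requested offset 'addr'. */
        size_t fill_on_load(uint8_t *dst, const uint8_t *memory,
                            uintptr_t addr, int has_partial_hint)
        {
            if (!has_partial_hint) {
                uintptr_t line_base = addr & ~(uintptr_t)(LINE_BYTES - 1);
                memcpy(dst, memory + line_base, LINE_BYTES);
                return LINE_BYTES;
            }
            uintptr_t sector_base = addr & ~(uintptr_t)(SECTOR_BYTES - 1);
            memcpy(dst, memory + sector_base, SECTOR_BYTES);
            return SECTOR_BYTES;
        }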
  • Patent number: 8255631
    Abstract: A method, processor, and data processing system for implementing a framework for priority-based scheduling and throttling of prefetching operations. A prefetch engine (PE) assigns a priority to a first prefetch stream, indicating a relative priority for scheduling prefetch operations of the first prefetch stream. The PE monitors activity within the data processing system and dynamically updates the priority of the first prefetch stream based on the activity (or lack thereof). Low priority streams may be discarded. The PE also schedules prefetching in a priority-based scheduling sequence that corresponds to the priority currently assigned to the scheduled active streams. When there are no prefetches within a prefetch queue, the PE triggers the active streams to provide prefetches for issuing. The PE determines when to throttle prefetching, based on the current usage level of resources relevant to completing the prefetch.
    Type: Grant
    Filed: February 1, 2008
    Date of Patent: August 28, 2012
    Assignee: International Business Machines Corporation
    Inventors: Lei Chen, Lixin Zhang
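    An illustrative C sketch of priority-based prefetch scheduling and throttling as outlined above: stream priorities decay when a stream is idle, low-priority streams are discarded, the highest-priority active stream is serviced next, and prefetching pauses when resource usage is high. The thresholds and structures are invented for the example.

        #include <stdbool.h>

        #define MAX_STREAMS 8

        struct prefetch_stream {
            bool active;
            int  priority;      /* higher = scheduled first */
            int  idle_cycles;   /* cycles since the stream last showed activity */
        };

        static struct prefetch_stream streams[MAX_STREAMS];

        /* Demote streams that show no activity; discard them at priority 0. */
        void update_priorities(void)
        {
            for (int i = 0; i < MAX_STREAMS; i++) {
                if (!streams[i].active)
                    continue;
                if (++streams[i].idle_cycles > 64 && streams[i].priority > 0)
                    streams[i].priority--;
                if (streams[i].priority == 0)
                    streams[i].active = false;   /* low-priority stream dropped */
            }
        }

        /* Pick the next stream to issue a prefetch for, unless resource usage
         * (e.g. miss-queue occupancy) says prefetching should be throttled. */
        int next_stream_to_prefetch(int resource_usage_pct)
        {
            if (resource_usage_pct > 80)         /* throttle threshold (assumed) */
                return -1;
            int best = -1;
            for (int i = 0; i < MAX_STREAMS; i++)
                if (streams[i].active &&
                    (best < 0 || streams[i].priority > streams[best].priority))
                    best = i;
            return best;
        }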
  • Patent number: 8250307
    Abstract: According to a method of data processing, a memory controller receives a prefetch load request from a processor core of a data processing system. The prefetch load request specifies a requested line of data. In response to receipt of the prefetch load request, the memory controller determines by reference to a stream of demand requests how much data is to be supplied to the processor core in response to the prefetch load request. In response to the memory controller determining to provide less than all of the requested line of data, the memory controller provides less than all of the requested line of data to the processor core.
    Type: Grant
    Filed: February 1, 2008
    Date of Patent: August 21, 2012
    Assignee: International Business Machines Corporation
    Inventors: Ravi K. Arimilli, Gheorghe C. Cascaval, Balaram Sinharoy, William E. Speight, Lixin Zhang
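    A short C sketch of the decision described above, under an assumed heuristic: the memory controller counts recent demand requests near the prefetched line and supplies the full line only when demand accesses cluster there, otherwise a smaller portion. The granularities and the page-grain comparison are assumptions.

        #include <stdint.h>
        #include <stddef.h>

        #define LINE_BYTES   128
        #define SECTOR_BYTES 32

        /* Count how many recent demand requests touched the same region as the
         * requested line; if demand accesses are sparse, supply only a sector. */
        size_t bytes_to_supply(uintptr_t line_addr,
                               const uintptr_t *recent_demands, int n)
        {
            int nearby = 0;
            for (int i = 0; i < n; i++)
                if ((recent_demands[i] >> 12) == (line_addr >> 12)) /* same 4 KiB page */
                    nearby++;
            return (nearby >= 2) ? LINE_BYTES : SECTOR_BYTES;
        }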
  • Patent number: 8250298
    Abstract: Mechanisms are provided for inhibiting precharging of memory cells of a dynamic random access memory (DRAM) structure. The mechanisms receive a command for accessing memory cells of the DRAM structure. The mechanisms further determine, based on the command, if precharging the memory cells following accessing the memory cells is to be inhibited. Moreover, the mechanisms send, in response to the determination indicating that precharging the memory cells is to be inhibited, a command to blocking logic of the DRAM structure to block precharging of the memory cells following accessing the memory cells.
    Type: Grant
    Filed: May 27, 2010
    Date of Patent: August 21, 2012
    Assignee: International Business Machines Corporation
    Inventors: Elmootazbellah N. Elnozahy, Karthick Rajamani, William E. Speight, Lixin Zhang
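    A minimal C sketch of the precharge-inhibit idea above: the controller tags an access command so that the DRAM's blocking logic keeps the row open when the next queued access targets the same row. The command fields and the decision rule are assumptions.

        #include <stdbool.h>
        #include <stdint.h>

        struct dram_cmd {
            uint32_t row;
            bool     inhibit_precharge;  /* tells blocking logic to keep the row open */
        };

        /* Decide from the command stream whether precharging after this access
         * should be blocked (here: the next queued access hits the same row). */
        struct dram_cmd make_access(uint32_t row, uint32_t next_queued_row)
        {
            struct dram_cmd c = { .row = row,
                                  .inhibit_precharge = (row == next_queued_row) };
            return c;
        }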
  • Patent number: 8236295
    Abstract: Provided are adenoviral vectors for generating an immune response to antigen. The vectors comprise a transcription unit encoding a secretable polypeptide, the polypeptide comprising a secretory signal sequence upstream of a tumor antigen upstream of CD40 ligand, which is missing all or substantially all of the transmembrane domain, rendering CD40L secretable. Also provided are methods of generating an immune response against cells expressing a tumor antigen by administering an effective amount of the invention vector. Further provided are methods of generating an immune response against cancer expressing a tumor antigen in an individual by administering an effective amount of the invention vector. Still further provided are methods of generating immunity to infection by human papilloma virus (HPV) by administering an effective amount of the invention vector which encodes the E6 or E7 protein of HPV. The immunity generated is long term.
    Type: Grant
    Filed: February 2, 2012
    Date of Patent: August 7, 2012
    Assignee: VAXum, LLC
    Inventors: Albert B. Deisseroth, Lixin Zhang
  • Publication number: 20120198172
    Abstract: A mechanism is provided in a virtual machine monitor for providing cache partitioning in virtualized environments. The mechanism assigns a virtual identification (ID) to each virtual machine in the virtualized environment. The processing core stores the virtual ID of the virtual machine in a special register. The mechanism also creates an entry for the virtual machine in a partition table. The mechanism may partition a shared cache using a vertical (way) partition and/or a horizontal partition. The entry in the partition table includes a vertical partition control and a horizontal partition control. For each cache access, the virtual machine passes the virtual ID along with the address to the shared cache. If the cache access results in a miss, the shared cache uses the partition table to select a victim cache line for replacement.
    Type: Application
    Filed: April 11, 2012
    Publication date: August 2, 2012
    Applicant: International Business Machines Corporation
    Inventors: Jiang Lin, Lixin Zhang
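    An illustrative C sketch of the partitioned replacement described above: the requesting virtual machine's virtual ID indexes a partition table, and a victim is chosen only among the cache ways assigned to that VM (the vertical, or way, partition); a horizontal control over sets could be added analogously. The table format and way-mask encoding are assumptions.

        #include <stdint.h>

        #define CACHE_WAYS 16
        #define MAX_VMS    8

        struct partition_entry {
            uint16_t way_mask;   /* vertical (way) partition the VM may replace in */
            /* a horizontal (set-range) control could be stored here as well */
        };

        static struct partition_entry partition_table[MAX_VMS];

        /* On a miss tagged with the requesting VM's virtual ID, choose a victim
         * way only from the ways assigned to that VM; larger lru_order = older. */
        int select_victim_way(uint8_t virtual_id, const uint16_t lru_order[CACHE_WAYS])
        {
            uint16_t mask = partition_table[virtual_id % MAX_VMS].way_mask;
            int victim = -1;
            for (int w = 0; w < CACHE_WAYS; w++) {
                if (!(mask & (1u << w)))
                    continue;                    /* way not owned by this VM */
                if (victim < 0 || lru_order[w] > lru_order[victim])
                    victim = w;                  /* oldest way within the mask */
            }
            return victim;
        }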
  • Publication number: 20120195924
    Abstract: Provided are adenoviral vectors for generating an immune response to antigen. The vectors comprise a transcription unit encoding a secretable polypeptide, the polypeptide comprising a secretory signal sequence upstream of a tumor antigen upstream of CD40 ligand, which is missing all or substantially all of the transmembrane domain, rendering CD40L secretable. Also provided are methods of generating an immune response against cells expressing a tumor antigen by administering an effective amount of the invention vector. Further provided are methods of generating an immune response against cancer expressing a tumor antigen in an individual by administering an effective amount of the invention vector. Still further provided are methods of generating immunity to infection by human papilloma virus (HPV) by administering an effective amount of the invention vector which encodes the E6 or E7 protein of HPV. The immunity generated is long term.
    Type: Application
    Filed: February 2, 2012
    Publication date: August 2, 2012
    Applicant: VAXum, LLC
    Inventors: Albert B. Deisseroth, Lixin Zhang
  • Publication number: 20120191946
    Abstract: A method for fast remote communication and computation between processors is provided in the illustrative embodiments. A direct core to core communication unit (DCC) is configured to operate with a first processor, the first processor being a remote processor. A memory associated with the DCC receives a set of bytes, the set of bytes being sent from a second processor. An operation specified in the set of bytes is executed at the remote processor such that the operation is invoked without causing a software thread to execute.
    Type: Application
    Filed: March 7, 2012
    Publication date: July 26, 2012
    Applicant: International Business Machines Corporation
    Inventors: John Bruce Carter, Elmootazbellah Nabil Elnozahy, Ahmed Gheith, Eric Van Hensbergen, Karthick Rajamani, William Evan Speight, Lixin Zhang
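    A hedged C sketch of the direct core-to-core idea above, with software modeling the hardware unit: a small message written by the second processor is executed directly by the remote core's communication unit, with no software thread being scheduled. The opcode set and mailbox layout are invented for illustration.

        #include <stdint.h>

        enum dcc_op { DCC_NOP, DCC_ADD64, DCC_COPY64 };

        struct dcc_mailbox {            /* bytes sent from the second processor */
            enum dcc_op op;
            uint64_t   *dst;
            uint64_t    value;
        };

        /* Executed by the remote core's DCC hardware (modeled here as a plain
         * function): the operation runs directly on the received message,
         * without waking a software thread. */
        void dcc_execute(const struct dcc_mailbox *m)
        {
            switch (m->op) {
            case DCC_ADD64:  *m->dst += m->value; break;
            case DCC_COPY64: *m->dst  = m->value; break;
            default:         break;
            }
        }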
  • Patent number: 8209488
    Abstract: A technique for data prefetching using indirect addressing includes monitoring data pointer values, associated with an array, in an access stream to a memory. The technique determines whether a pattern exists in the data pointer values. A prefetch table is then populated with respective entries that correspond to respective array address/data pointer pairs based on a predicted pattern in the data pointer values. Respective data blocks (e.g., respective cache lines) are then prefetched (e.g., from the memory or another memory) based on the respective entries in the prefetch table.
    Type: Grant
    Filed: February 1, 2008
    Date of Patent: June 26, 2012
    Assignee: International Business Machines Corporation
    Inventors: Ravi K. Arimilli, Balaram Sinharoy, William E. Speight, Lixin Zhang
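    A C sketch of the indirect-addressing prefetch table described above, under an assumed constant-stride pointer pattern: each entry pairs an array address with the data pointer predicted to be stored there, so the pointed-to cache lines can be fetched ahead of use. The table size and the predictor are assumptions.

        #include <stddef.h>
        #include <stdint.h>

        struct prefetch_entry {
            uintptr_t array_addr;    /* array slot the pointer will be loaded from */
            uintptr_t data_pointer;  /* predicted pointer value stored in that slot */
        };

        #define PT_SIZE 16
        static struct prefetch_entry prefetch_table[PT_SIZE];

        /* Given two pointer values p0 and p1 observed at consecutive array slots
         * (p1 loaded from last_slot_addr), predict the next 'count' slots and the
         * pointers they will hold, and record the pairs in the prefetch table. */
        int populate(uintptr_t last_slot_addr, uintptr_t p0, uintptr_t p1,
                     size_t elem_size, int count)
        {
            uintptr_t stride = p1 - p0;       /* predicted pattern in pointer values */
            if (count > PT_SIZE)
                count = PT_SIZE;
            for (int i = 0; i < count; i++) {
                prefetch_table[i].array_addr   = last_slot_addr + (uintptr_t)(i + 1) * elem_size;
                prefetch_table[i].data_pointer = p1 + (uintptr_t)(i + 1) * stride;
            }
            /* a real prefetcher would now fetch the cache line holding each
               predicted data_pointer from memory (or another memory) */
            return count;
        }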
  • Patent number: 8179674
    Abstract: A scalable, space-optimized, and energy-efficient computing system is provided. The computing system comprises a plurality of modular compartments in at least one level of a frame configured in a hexahedron configuration. The computing system also comprises an air inlet, an air mixing plenum, and at least one fan. In the computing system, the plurality of modular compartments are affixed above the air inlet, the air mixing plenum is affixed above the plurality of modular compartments, and the at least one fan is affixed above the air mixing plenum. When at least one module is inserted into one of the plurality of modular compartments, the module couples to a backplane within the frame.
    Type: Grant
    Filed: May 28, 2010
    Date of Patent: May 15, 2012
    Assignee: International Business Machines Corporation
    Inventors: John B. Carter, Wael R. El-Essawy, Elmootazbellah N. Elnozahy, Madhusudan K. Iyengar, Thomas W. Keller, Jr., Jian Li, Karthick Rajamani, Juan C. Rubio, William E. Speight, Lixin Zhang
  • Patent number: 8166277
    Abstract: A technique for performing indirect data prefetching includes determining a first memory address of a pointer associated with a data prefetch instruction. Content of a memory at the first memory address is then fetched. A second memory address is determined from the content of the memory at the first memory address. Finally, a data block (e.g., a cache line) including data at the second memory address is fetched (e.g., from the memory or another memory).
    Type: Grant
    Filed: February 1, 2008
    Date of Patent: April 24, 2012
    Assignee: International Business Machines Corporation
    Inventors: Ravi K. Arimilli, Balaram Sinharoy, William E. Speight, Lixin Zhang
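    The three steps in the abstract above can be mirrored in a few lines of C, with software standing in for the hardware engines; the final touch uses __builtin_prefetch, a GCC/Clang intrinsic chosen here only for illustration.

        #include <stdint.h>

        void indirect_prefetch(const void *pointer_slot_addr)
        {
            /* steps 1-2: fetch the content at the first memory address; that
               content is itself the second memory address */
            uintptr_t second_addr = *(const uintptr_t *)pointer_slot_addr;

            /* step 3: prefetch the data block (cache line) holding the data at
               the second memory address */
            __builtin_prefetch((const void *)second_addr, /*rw=*/0, /*locality=*/3);
        }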
  • Patent number: 8161263
    Abstract: A processor includes a first address translation engine, a second address translation engine, and a prefetch engine. The first address translation engine is configured to determine a first memory address of a pointer associated with a data prefetch instruction. The prefetch engine is coupled to the first translation engine and is configured to fetch content, included in a first data block (e.g., a first cache line) of a memory, at the first memory address. The second address translation engine is coupled to the prefetch engine and is configured to determine a second memory address based on the content of the memory at the first memory address. The prefetch engine is also configured to fetch (e.g., from the memory or another memory) a second data block (e.g., a second cache line) that includes data at the second memory address.
    Type: Grant
    Filed: February 1, 2008
    Date of Patent: April 17, 2012
    Assignee: International Business Machines Corporation
    Inventors: Ravi K. Arimilli, Balaram Sinharoy, William E. Speight, Lixin Zhang