Patents by Inventor Naxin Zhang

Naxin Zhang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11781384
    Abstract: Drilling installation comprising a cantilever with a drilling floor for performing drilling operations; further comprising an independent operations handling system arranged for handling equipment underneath the drilling floor independent of the drilling operations on the drilling floor, wherein the independent operations handling system comprises a handling element for cooperation with the equipment to be handled, wherein the handling element is extendible underneath the cantilever.
    Type: Grant
    Filed: July 15, 2022
    Date of Patent: October 10, 2023
    Assignee: GustoMSC B.V.
    Inventors: Josephus Leonardus Aloysius Maria Van Der Hoorn, Naxin Zhang, Mark Cornelius Marinus Franciscus Rommens, Govert Hendrik Teunis Zijderveld
  • Publication number: 20230040288
    Abstract: Drilling installation comprising a cantilever with a drilling floor for performing drilling operations; further comprising an independent operations handling system arranged for handling equipment underneath the drilling floor independent of the drilling operations on the drilling floor, wherein the independent operations handling system comprises a handling element for cooperation with the equipment to be handled, wherein the handling element is extendible underneath the cantilever.
    Type: Application
    Filed: July 15, 2022
    Publication date: February 9, 2023
    Inventors: Josephus Leonardus Aloysius Maria Van Der Hoorn, Naxin Zhang, Mark Cornelius Marinus Franciscus Rommens, Govert Hendrik Teunis Zijderveld
  • Patent number: 11414938
    Abstract: Drilling installation comprising a cantilever with a drilling floor for performing drilling operations; further comprising an independent operations handling system arranged for handling equipment underneath the drilling floor independent of the drilling operations on the drilling floor, wherein the independent operations handling system comprises a handling element for cooperation with the equipment to be handled, wherein the handling element is extendible underneath the cantilever.
    Type: Grant
    Filed: August 17, 2020
    Date of Patent: August 16, 2022
    Assignee: GustoMSC B.V.
    Inventors: Josephus Leonardus Aloysius Maria Van Der Hoorn, Naxin Zhang, Mark Cornelius Marinus Franciscus Rommens, Govert Hendrik Teunis Zijderveld
  • Publication number: 20210032945
    Abstract: Drilling installation comprising a cantilever with a drilling floor for performing drilling operations; further comprising an independent operations handling system arranged for handling equipment underneath the drilling floor independent of the drilling operations on the drilling floor, wherein the independent operations handling system comprises a handling element for cooperation with the equipment to be handled, wherein the handling element is extendible underneath the cantilever.
    Type: Application
    Filed: August 17, 2020
    Publication date: February 4, 2021
    Inventors: Josephus Leonardus Aloysius Maria Van Der Hoorn, Naxin Zhang, Mark Cornelius Marinus Franciscus Rommens, Govert Hendrik Teunis Zijderveld
  • Patent number: 10745983
    Abstract: Drilling installation comprising a cantilever with a drilling floor for performing drilling operations; further comprising an independent operations handling system arranged for handling equipment underneath the drilling floor independent of the drilling operations on the drilling floor, wherein the independent operations handling system comprises a handling element for cooperation with the equipment to be handled, wherein the handling element is extendible underneath the cantilever.
    Type: Grant
    Filed: May 4, 2016
    Date of Patent: August 18, 2020
    Assignee: GustoMSC B.V.
    Inventors: Josephus Leonardus Aloysius Maria Van Der Hoorn, Naxin Zhang, Mark Cornelius Marinus Franciscus Rommens, Govert Hendrik Teunis Zijderveld
  • Patent number: 10554584
    Abstract: This invention is related to an Express Traversal (EXTRA) Network on Chip (NoC) comprising a number of EXTRA routers. The EXTRA NoC comprises a Buffer Write and Route Computation (BW/RC) pipeline, a Switch Allocation-Local (SA-L) pipeline, a Setup Request (SR) pipeline, a Switch Allocation-Global (SA-G) pipeline, and a Switch Traversal and Link Traversal (ST/LT) pipeline. The BW/RC pipeline is configured to write an incoming flit to an input buffer(s) of a start EXTRA router and compute the route for the incoming head flit by selecting an output port to depart from the start EXTRA router. The SA-L pipeline is configured to arbitrate the start EXTRA router to choose an input port and an output port for a winning flit. The SR pipeline is configured to handle the transmission of a number of SR signals from the start EXTRA router to downstream EXTRA routers.
    Type: Grant
    Filed: July 11, 2018
    Date of Patent: February 4, 2020
    Assignee: Huawei International Pte. Ltd.
    Inventors: Zhiguo Ge, Naxin Zhang
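To make the five-stage decomposition in the abstract above easier to follow, here is a minimal Python sketch that models the BW/RC, SA-L, and SR stages of a single router as plain methods. All names (ExtraRouter, Flit, the XY routing rule) are illustrative assumptions, not the patented implementation.

```python
# Simplified, hypothetical model of the EXTRA router pipeline stages described
# in the abstract. Names and the routing rule are illustrative only.

from dataclasses import dataclass, field

@dataclass
class Flit:
    dest: tuple          # (x, y) destination coordinates
    is_head: bool = True
    out_port: str = ""   # chosen by route computation

@dataclass
class ExtraRouter:
    coord: tuple
    input_buffers: dict = field(
        default_factory=lambda: {p: [] for p in ("N", "S", "E", "W", "L")}
    )

    def bw_rc(self, flit: Flit, in_port: str):
        """Buffer Write / Route Computation: store the flit and pick an output port (XY routing here)."""
        self.input_buffers[in_port].append(flit)
        if flit.is_head:
            dx, dy = flit.dest[0] - self.coord[0], flit.dest[1] - self.coord[1]
            flit.out_port = "E" if dx > 0 else "W" if dx < 0 else "N" if dy > 0 else "S" if dy < 0 else "L"

    def sa_local(self):
        """Switch Allocation-Local: arbitrate among occupied input ports; first occupied port wins here."""
        contenders = [p for p, buf in self.input_buffers.items() if buf]
        return contenders[0] if contenders else None

    def setup_request(self, winner_port: str, downstream: list):
        """Setup Request: notify downstream routers so they can pre-allocate their switches."""
        return [(router.coord, winner_port) for router in downstream]

# The SA-G and ST/LT stages would follow the same pattern across routers.
```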
  • Patent number: 10503642
    Abstract: A data processing method includes allocating a tag entry in a tag array for a data block; allocating a data entry in a data array for the data block when the data block is actively shared; and de-allocating the data entry when the data block is temporarily private or gets evicted in the data array.
    Type: Grant
    Filed: August 25, 2017
    Date of Patent: December 10, 2019
    Assignees: Huawei Technologies Co., Ltd., National University of Singapore
    Inventors: Yuan Yao, Tulika Mitra, Zhiguo Ge, Naxin Zhang
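The allocation policy in the abstract above can be pictured with a short sketch: a tag entry is kept for every tracked block, while a data entry exists only while the block is actively shared. The class name and the "more than one sharer" heuristic are assumptions for illustration only.

```python
# Hypothetical sketch: directory with decoupled tag and data arrays.
# A tag entry always exists for a tracked block; a data entry exists only
# while the block is actively shared.

class Directory:
    def __init__(self):
        self.tag_array = {}    # block address -> set of sharer ids
        self.data_array = {}   # block address -> cached data (only if actively shared)

    def access(self, addr, requester, data=None):
        sharers = self.tag_array.setdefault(addr, set())
        sharers.add(requester)
        if len(sharers) > 1:               # actively shared: allocate a data entry
            self.data_array[addr] = data
        else:                              # temporarily private: no data entry needed
            self.data_array.pop(addr, None)

    def evict(self, addr):
        # De-allocate both entries when the block is evicted.
        self.tag_array.pop(addr, None)
        self.data_array.pop(addr, None)
```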
  • Publication number: 20180324110
    Abstract: This invention is related to an Express Traversal (EXTRA) Network on Chip (NoC) comprising a number of EXTRA routers. The EXTRA NoC comprises a Buffer Write and Route Computation (BW/RC) pipeline, a Switch Allocation-Local (SA-L) pipeline, a Setup Request (SR) pipeline, a Switch Allocation-Global (SA-G) pipeline, and a Switch Traversal and Link Traversal (ST/LT) pipeline. The BW/RC pipeline is configured to write an incoming flit to an input buffer(s) of a start EXTRA router and compute the route for the incoming head flit by selecting an output port to depart from the start EXTRA router. The SA-L pipeline is configured to arbitrate the start EXTRA router to choose an input port and an output port for a winning flit. The SR pipeline is configured to handle the transmission of a number of SR signals from the start EXTRA router to downstream EXTRA routers.
    Type: Application
    Filed: July 11, 2018
    Publication date: November 8, 2018
    Inventors: Zhiguo Ge, Naxin Zhang
  • Publication number: 20180322092
    Abstract: Embodiments of the application provide device, method and system for routing global assistant signals in a NoC. The device comprises: a signal distributing element having an associated intermediate router provided in a system for routing global assistant signals in a NoC which includes at least one intermediate router electrically interposed between a source router and a destination router, wherein the signal distributing element is configured to: based on a predetermined criterion, select either local global assistant signals generated by the associated intermediate router or upstream global assistant signals received from an upstream router of the associated intermediate router as current global assistant signals to be sent to a downstream router of the associated intermediate router.
    Type: Application
    Filed: July 13, 2018
    Publication date: November 8, 2018
    Inventors: Zhiguo Ge, Xianmin Chen, Niraj K. Jha, Naxin Zhang
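As a rough illustration of the selection rule in the abstract above, the sketch below forwards either the locally generated assistant signals or the upstream ones. The "predetermined criterion" is modeled here, purely as an assumption, by a priority comparison.

```python
# Hypothetical sketch of a signal-distributing element at an intermediate router.
# The predetermined criterion is modeled as a priority comparison; the actual
# criterion is not specified here and this is only an illustration.

def select_assistant_signals(local_signals, upstream_signals, key=lambda s: s.get("priority", 0)):
    """Return the signals to forward downstream: local or upstream, whichever ranks higher."""
    if not upstream_signals:
        return local_signals
    if not local_signals:
        return upstream_signals
    local_best = max(key(s) for s in local_signals)
    upstream_best = max(key(s) for s in upstream_signals)
    return local_signals if local_best >= upstream_best else upstream_signals

# Example: the upstream signals win because they carry the higher priority.
downstream = select_assistant_signals(
    [{"id": "local-1", "priority": 2}],
    [{"id": "up-7", "priority": 5}],
)
print(downstream)
```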
  • Publication number: 20180155995
    Abstract: Drilling installation comprising a cantilever with a drilling floor for performing drilling operations; further comprising an independent operations handling system arranged for handling equipment underneath the drilling floor independent of the drilling operations on the drilling floor, wherein the independent operations handling system comprises a handling element for co-operation with the equipment to be handled, wherein the handling element is extendible underneath the cantilever.
    Type: Application
    Filed: May 4, 2016
    Publication date: June 7, 2018
    Inventors: Josephus Leonardus Aloysius Maria Van Der Hoorn, Naxin Zhang, Mark Cornelius Marinus Franciscus Rommens, Govert Hendrik Teunis Zijderveld
  • Patent number: 9977741
    Abstract: A reconfigurable cache architecture is provided. In processor design, as the density of on-chip components increases, the quantity and complexity of processing cores will increase as well. In order to take advantage of increased processing capabilities, many applications will take advantage of instruction level parallelism. The reconfigurable cache architecture provides a cache memory that is capable of being configured in a private mode and a fused mode for an associated multi-core processor. In the fused mode, individual cores of the multi-core processor can write and read data from certain cache banks of the cache memory with greater control over address routing. The cache architecture further includes control and configurability of the memory size and associativity of the cache memory itself.
    Type: Grant
    Filed: July 12, 2016
    Date of Patent: May 22, 2018
    Assignees: Huawei Technologies Co., Ltd., National University of Singapore
    Inventors: Mihai Pricopi, Zhiguo Ge, Yuan Yao, Tulika Mitra, Naxin Zhang
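A minimal sketch of the private/fused distinction described above, assuming a simple interleaved bank-index scheme; the class name, bank counts, and hash are hypothetical.

```python
# Hypothetical sketch of private vs. fused cache-bank addressing for a
# multi-core processor. Bank counts and the index function are illustrative.

class ReconfigurableCache:
    def __init__(self, num_cores=4, banks_per_core=2):
        self.num_cores = num_cores
        self.banks_per_core = banks_per_core
        self.mode = "private"            # or "fused"

    def bank_for(self, core_id, addr):
        if self.mode == "private":
            # Each core only addresses its own banks.
            return core_id * self.banks_per_core + (addr % self.banks_per_core)
        # Fused mode: any core can address any bank; addresses interleave
        # across the whole pool of banks.
        total_banks = self.num_cores * self.banks_per_core
        return addr % total_banks

cache = ReconfigurableCache()
cache.mode = "fused"
print(cache.bank_for(core_id=3, addr=0x1A40))
```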
  • Publication number: 20170351612
    Abstract: The embodiment of the disclosure discloses a data processing method and device in a cache coherence directory architecture. The method includes allocating a tag entry in a tag array for a data block; allocating a data entry in a data array for the data block when the data block is actively shared; and de-allocating the data entry when the data block is temporarily private or gets evicted in the data array. The embodiments of the disclosure therefore allocate a data entry only when a data block is actively shared and do not allocate a data entry for a data block that is not actively shared, so a smaller directory size can be achieved.
    Type: Application
    Filed: August 25, 2017
    Publication date: December 7, 2017
    Inventors: Yuan Yao, Tulika Mitra, Zhiguo Ge, Naxin Zhang
  • Patent number: 9537799
    Abstract: A network node comprises a receiver configured to receive a first packet, a processor coupled to the receiver and configured to process the first packet, and prioritize the first packet according to a scheme, wherein the scheme assigns priority to packets based on phase, and a transmitter coupled to the processor and configured to transmit the first packet. An apparatus comprises a processor coupled to the memory and configured to generate instructions for a packet prioritization scheme, wherein the scheme assigns priority to packet transactions based on closeness to completion, and a memory coupled to the processor and configured to store the instructions. A method comprises receiving a first packet, processing the first packet, prioritizing the first packet according to a scheme, wherein the scheme assigns priority to packets based on phase, and transmitting the first packet.
    Type: Grant
    Filed: July 30, 2013
    Date of Patent: January 3, 2017
    Assignee: Futurewei Technologies, Inc.
    Inventors: Iulin Lih, Chenghong He, Hongbo Shi, Naxin Zhang
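The phase-based scheme in the abstract above can be sketched as a priority queue in which transactions closer to completion are served first. The phase names and their ranking are assumptions, not taken from the patent.

```python
# Hypothetical sketch: prioritize packets by transaction phase, with
# transactions closer to completion served first. Phase names are assumptions.

import heapq

PHASE_RANK = {"response": 0, "snoop": 1, "request": 2}  # lower rank = closer to completion

class PhasePriorityQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0                     # tie-breaker keeps FIFO order within a phase

    def push(self, packet):
        heapq.heappush(self._heap, (PHASE_RANK[packet["phase"]], self._seq, packet))
        self._seq += 1

    def pop(self):
        return heapq.heappop(self._heap)[2]

q = PhasePriorityQueue()
q.push({"id": 1, "phase": "request"})
q.push({"id": 2, "phase": "response"})
assert q.pop()["id"] == 2                 # the response (nearly complete) goes first
```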
  • Publication number: 20160321177
    Abstract: A reconfigurable cache architecture is provided. In processor design, as the density of on-chip components increases, the quantity and complexity of processing cores will increase as well. In order to take advantage of increased processing capabilities, many applications will take advantage of instruction level parallelism. The reconfigurable cache architecture provides a cache memory that is capable of being configured in a private mode and a fused mode for an associated multi-core processor. In the fused mode, individual cores of the multi-core processor can write and read data from certain cache banks of the cache memory with greater control over address routing. The cache architecture further includes control and configurability of the memory size and associativity of the cache memory itself.
    Type: Application
    Filed: July 12, 2016
    Publication date: November 3, 2016
    Inventors: Mihai Pricopi, Zhiguo Ge, Yuan Yao, Tulika Mitra, Naxin Zhang
  • Patent number: 9460012
    Abstract: A reconfigurable cache architecture is provided. In processor design, as the density of on-chip components increases, the quantity and complexity of processing cores will increase as well. In order to take advantage of increased processing capabilities, many applications will take advantage of instruction level parallelism. The reconfigurable cache architecture provides a cache memory that is capable of being configured in a private mode and a fused mode for an associated multi-core processor. In the fused mode, individual cores of the multi-core processor can write and read data from certain cache banks of the cache memory with greater control over address routing. The cache architecture further includes control and configurability of the memory size and associativity of the cache memory itself.
    Type: Grant
    Filed: February 18, 2014
    Date of Patent: October 4, 2016
    Assignees: National University of Singapore, Huawei Technologies Co., Ltd.
    Inventors: Mihai Pricopi, Zhiguo Ge, Yuan Yao, Tulika Mitra, Naxin Zhang
  • Patent number: 9411633
    Abstract: A computing system for handling barrier commands includes a memory, an interface, and a processor. The memory is configured to store a pre-barrier spreading range that identifies a target computing system associated with a barrier command. The interface is coupled to the memory and is configured to send a pre-barrier computing probe to the target computing system identified in the pre-barrier spreading range and receive barrier completion notification messages from the target computing system. The pre-barrier computing probe is configured to instruct the target computing system to monitor a status of a transaction that needs to be executed for the barrier command to be completed. The processor is coupled to the interface and is configured to determine a status of the barrier command based on the received barrier completion notification messages.
    Type: Grant
    Filed: July 26, 2013
    Date of Patent: August 9, 2016
    Assignee: Futurewei Technologies, Inc.
    Inventors: Iulin Lih, Chenghong He, Hongbo Shi, Naxin Zhang
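A minimal sketch of the barrier bookkeeping described above: probes go to every target in the pre-barrier spreading range, and the barrier is complete once each target has returned a completion notification. Class and message names are hypothetical.

```python
# Hypothetical sketch of barrier tracking: pre-barrier probes are sent to the
# targets named in the spreading range, and the barrier completes once every
# target has returned a completion notification.

class BarrierTracker:
    def __init__(self, spreading_range):
        self.pending = set(spreading_range)   # targets that still owe a notification

    def send_probes(self, send):
        # 'send' is any callable that delivers a probe to one target system.
        for target in self.pending:
            send(target, {"type": "pre_barrier_probe"})

    def on_completion(self, target):
        self.pending.discard(target)
        return self.is_complete()

    def is_complete(self):
        return not self.pending

tracker = BarrierTracker(spreading_range={"sysA", "sysB"})
tracker.send_probes(lambda target, msg: None)   # stand-in transport
tracker.on_completion("sysA")
print(tracker.on_completion("sysB"))            # True: barrier command has completed
```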
  • Patent number: 9304924
    Abstract: Disclosed herein is a processing network element (NE) comprising at least one receiver configured to receive a plurality of memory request messages from a plurality of memory nodes, wherein each memory request designates a source node, a destination node, and a memory location, and a plurality of response messages to the memory requests from the plurality of memory nodes, wherein each memory response designates a source node, a destination node, and a memory location, at least one transmitter configured to transmit the memory requests and memory responses to the plurality of memory nodes, and a controller coupled to the receiver and the transmitter and configured to enforce ordering such that memory requests and memory responses designating the same memory location and the same source node/destination node pair are transmitted by the transmitter in the same order received by the receiver.
    Type: Grant
    Filed: August 2, 2013
    Date of Patent: April 5, 2016
    Assignee: Futurewei Technologies, Inc.
    Inventors: Iulin Lih, Chenghong He, Hongbo Shi, Naxin Zhang
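The ordering rule in the abstract above amounts to per-flow FIFO behavior: messages that share a memory location and a source/destination pair leave in arrival order, while unrelated traffic is unconstrained. The sketch below assumes a simple keyed queue; names are illustrative.

```python
# Hypothetical sketch of the ordering rule: messages that share the same
# memory location and the same source/destination pair leave in arrival order,
# while unrelated messages may be handled independently.

from collections import defaultdict, deque

class OrderedForwarder:
    def __init__(self):
        self.queues = defaultdict(deque)   # (src, dst, location) -> FIFO of messages

    def receive(self, msg):
        key = (msg["src"], msg["dst"], msg["location"])
        self.queues[key].append(msg)

    def transmit_next(self, src, dst, location):
        # Only the oldest message for this (src, dst, location) key may be sent,
        # which preserves per-key ordering; other keys are unaffected.
        queue = self.queues[(src, dst, location)]
        return queue.popleft() if queue else None
```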
  • Patent number: 9274955
    Abstract: A processing network comprising a cache configured to store copies of memory data as a plurality of cache lines, a cache controller configured to receive data requests from a plurality of cache agents, and designate at least one of the cache agents as an owner of a first of the cache lines, and a directory configured to store cache ownership designations of the first cache line, and wherein the directory is encoded to support substantially simultaneous ownership of the first cache line by a plurality but less than all of the cache agents. Also disclosed is a method comprising receiving coherent transactions from a plurality of cache agents, and storing ownership designations of a plurality of cache lines by the cache agents in a directory, wherein the directory is configured to support storage of substantially simultaneous ownership designations for a plurality but less than all of the cache agents.
    Type: Grant
    Filed: July 29, 2013
    Date of Patent: March 1, 2016
    Assignee: Futurewei Technologies, Inc.
    Inventors: Iulin Lih, Naxin Zhang, Chenghong He, Hongbo Shi
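A short sketch of a directory entry that, as described above, records simultaneous ownership by several cache agents but fewer than all of them. The fixed owner limit and error handling are assumptions for illustration.

```python
# Hypothetical sketch of a directory entry that allows several, but not all,
# cache agents to be recorded as simultaneous owners of a cache line.

class CacheLineDirectoryEntry:
    def __init__(self, total_agents, max_owners):
        assert max_owners < total_agents       # "a plurality but less than all"
        self.total_agents = total_agents
        self.max_owners = max_owners
        self.owners = set()

    def add_owner(self, agent_id):
        if len(self.owners) >= self.max_owners:
            raise RuntimeError("ownership limit reached; a coherent hand-off is required")
        self.owners.add(agent_id)

entry = CacheLineDirectoryEntry(total_agents=8, max_owners=3)
entry.add_owner(0)
entry.add_owner(5)
print(entry.owners)    # two simultaneous owners of the same cache line
```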
  • Patent number: 9270582
    Abstract: A method comprising detecting at least one Quality of Service (QoS) requirement is met that indicates a very important packet (VIP) is outstanding from a source node in a multi-hop network comprising multiple nodes, sending an initiation message to an adjacent node in response to the detection that may activate a protocol in which a reserved channel is activated, and receiving the VIP via the reserved channel. Also, a method comprising receiving an initiation message from an adjacent node in a multi-hop network that comprises information identifying the VIP comprising a source node, a destination node, a packet type, wherein the initiation message activates a protocol in which a reserved channel is activated, searching for the VIP identified by the initiation message, and forwarding the VIP promptly if present via the reserved channel or forwarding an initiation message to adjacent nodes closer to the source node.
    Type: Grant
    Filed: July 31, 2013
    Date of Patent: February 23, 2016
    Assignee: Futurewei Technologies, Inc.
    Inventors: Iulin Lih, Hongbo Shi, Chenghong He, Naxin Zhang
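The per-node behavior described above can be sketched as follows: if the very important packet (VIP) is held locally it is forwarded promptly over the reserved channel, otherwise the initiation message propagates toward the source node. Function and field names are hypothetical.

```python
# Hypothetical sketch of the VIP hand-off at one intermediate node.

def handle_initiation(node_buffer, initiation, forward_vip, forward_initiation):
    """node_buffer: packets held at this node; initiation identifies the VIP."""
    vip_id = (initiation["source"], initiation["destination"], initiation["packet_type"])
    for packet in node_buffer:
        if (packet["source"], packet["destination"], packet["packet_type"]) == vip_id:
            forward_vip(packet)             # send promptly via the reserved channel
            return True
    forward_initiation(initiation)          # keep searching closer to the source node
    return False
```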
  • Patent number: 9235519
    Abstract: A home node for selecting a source node using a cache coherency protocol, comprising a logic unit cluster coupled to a directory, wherein the logic unit cluster is configured to receive a request for data from a requesting cache node, determine a plurality of nodes that hold a copy of the requested data using the directory, select one of the nodes using one or more selection parameters as the source node, and transmit a message to the source node to determine whether the source node stores a copy of the requested data, wherein the source node forwards the requested data to the requesting cache node when the requested data is found within the source node, and wherein some of the nodes are marked as a Shared state corresponding to the cache coherency protocol.
    Type: Grant
    Filed: June 17, 2013
    Date of Patent: January 12, 2016
    Assignee: Futurewei Technologies, Inc.
    Inventors: Iulin Lih, Chenghong He, Hongbo Shi, Naxin Zhang
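A minimal sketch of the home node's source selection described above, assuming hop distance as the selection parameter; the directory layout and distance metric are illustrative assumptions, not the patented criteria.

```python
# Hypothetical sketch of home-node source selection: from the directory's list
# of nodes holding the requested line, pick one source using a simple selection
# parameter (hop distance here, purely as an assumption).

def select_source_node(directory, address, requester, distance):
    """directory: addr -> {node_id: state}; distance: (a, b) -> hop count."""
    holders = [n for n, state in directory.get(address, {}).items() if state in ("Shared", "Owned")]
    if not holders:
        return None                               # fall back to memory
    return min(holders, key=lambda n: distance(requester, n))

directory = {0x80: {"node2": "Shared", "node7": "Shared"}}
hop = lambda a, b: abs(int(a[-1]) - int(b[-1]))   # toy distance metric
print(select_source_node(directory, 0x80, "node1", hop))   # -> "node2"
```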