Patents by Inventor Xinyu Hu

Xinyu Hu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10558443
    Abstract: A hardware acceleration method, a compiler, and a device, to improve code execution efficiency and implement hardware acceleration. The method includes: obtaining, by a compiler, compilation policy information and source code, where the compilation policy information indicates that a first code type matches a first processor and a second code type matches a second processor; analyzing, by the compiler, a code segment in the source code according to the compilation policy information, and determining a first code segment belonging to the first code type or a second code segment belonging to the second code type; and compiling, by the compiler, the first code segment into first executable code, and sending the first executable code to the first processor; and compiling the second code segment into second executable code, and sending the second executable code to the second processor.
    Type: Grant
    Filed: December 28, 2017
    Date of Patent: February 11, 2020
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Jian Chen, Hong Zhou, Xinyu Hu, Hongguang Guan, Xiaojun Zhang
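The compilation flow in this entry (also listed below as application 20180121180) can be pictured with a small sketch. It is only an illustration under assumed names: the `Segment` class, the policy dictionary, and the `compile_segment`/`compile_and_dispatch` helpers are invented for the example. It shows the general idea of classifying source segments by code type and routing each compiled result to its matching processor, not the patented compiler itself.

```python
from dataclasses import dataclass

# Hypothetical compilation policy: maps a code type to the processor that should run it.
POLICY = {"control_flow": "cpu", "data_parallel": "fpga"}

@dataclass
class Segment:
    text: str
    code_type: str  # e.g. "control_flow" or "data_parallel"

def classify(segment: Segment) -> str:
    """Return the target processor for a segment according to the policy."""
    return POLICY[segment.code_type]

def compile_segment(segment: Segment, target: str) -> bytes:
    """Stand-in for real code generation: just tag the segment with its target."""
    return f"[{target}] {segment.text}".encode()

def compile_and_dispatch(source: list[Segment]) -> dict[str, list[bytes]]:
    """Compile each segment and queue the executable for its matching processor."""
    queues: dict[str, list[bytes]] = {"cpu": [], "fpga": []}
    for seg in source:
        target = classify(seg)
        queues[target].append(compile_segment(seg, target))
    return queues

program = [
    Segment("parse_config()", "control_flow"),
    Segment("matrix_multiply(a, b)", "data_parallel"),
]
for processor, code in compile_and_dispatch(program).items():
    print(processor, code)
```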
  • Publication number: 20190265985
    Abstract: An accelerator loading apparatus obtains an acceleration requirement, where the acceleration requirement includes an acceleration function of a to-be-created virtual machine and acceleration performance of the to-be-created virtual machine. The accelerator loading apparatus determines a target accelerator that meets the acceleration function of the to-be-created virtual machine and the acceleration performance of the to-be-created virtual machine. The accelerator loading apparatus determines an image corresponding to the target accelerator, and sends an image loading command to a target host in which the target accelerator is located, where the image loading command is used to enable the target host to load the image for the target accelerator based on the image loading command.
    Type: Application
    Filed: May 9, 2019
    Publication date: August 29, 2019
    Inventors: Qian Cao, Yuping Zhao, Xinyu Hu
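A rough sketch of the selection step in this abstract. The accelerator records, the requirement fields, and the `send_image_load_command` stub are all invented for the example; the point is only that the target accelerator must satisfy both the requested acceleration function and the requested performance before its image is loaded on the target host.

```python
# Hypothetical accelerator inventory: host, supported function, performance rating, image name.
ACCELERATORS = [
    {"host": "host-1", "function": "crypto", "performance": 40, "image": "crypto_v2.bit"},
    {"host": "host-2", "function": "compress", "performance": 80, "image": "gzip_v1.bit"},
    {"host": "host-2", "function": "crypto", "performance": 90, "image": "crypto_v3.bit"},
]

def select_accelerator(requirement: dict) -> dict | None:
    """Return an accelerator meeting the requested function and performance, if any."""
    for acc in ACCELERATORS:
        if (acc["function"] == requirement["function"]
                and acc["performance"] >= requirement["performance"]):
            return acc
    return None

def send_image_load_command(host: str, image: str) -> None:
    """Stand-in for the image loading command sent to the target host."""
    print(f"to {host}: load image {image}")

requirement = {"function": "crypto", "performance": 50}
target = select_accelerator(requirement)
if target is not None:
    send_image_load_command(target["host"], target["image"])
```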
  • Publication number: 20190266006
    Abstract: An accelerator loading apparatus obtains an acceleration requirement, where the acceleration requirement includes an acceleration function and acceleration performance of a to-be-created virtual machine, determines an image that meets the acceleration function and the acceleration performance, and determines a target host in which an available accelerator that can load the image is located, and then sends an image loading command to the target host. The image loading command includes a descriptor of the image, and is used to enable the target host to load the image for the available accelerator. In the method, a target host that can create the virtual machine may be determined based on the acceleration function and the acceleration performance of the to-be-created virtual machine, and an image used for acceleration is loaded to an available accelerator of the target host, to implement dynamic accelerator loading and deployment.
    Type: Application
    Filed: May 9, 2019
    Publication date: August 29, 2019
    Inventors: Qian Cao, Yuping Zhao, Xinyu Hu
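This entry emphasizes selecting the image first and then the target host. The sketch below, again with invented data structures, picks an image meeting the acceleration function and performance and then a host that still has an available accelerator to load it onto; it is only the general shape of the flow, not the patented method.

```python
# Hypothetical catalog of accelerator images and of hosts with free accelerators.
IMAGES = [
    {"descriptor": "img-aes-128", "function": "crypto", "performance": 60},
    {"descriptor": "img-aes-256", "function": "crypto", "performance": 95},
]
HOSTS = {"host-a": 0, "host-b": 2}   # host name -> available accelerators

def select_image(function: str, performance: int) -> dict | None:
    """Pick the smallest image that provides the function at the required performance."""
    candidates = [img for img in IMAGES
                  if img["function"] == function and img["performance"] >= performance]
    return min(candidates, key=lambda img: img["performance"]) if candidates else None

def select_host() -> str | None:
    """Pick a host that still has an available accelerator to load the image onto."""
    return next((name for name, free in HOSTS.items() if free > 0), None)

image, host = select_image("crypto", 90), select_host()
if image and host:
    # The image loading command only needs to carry the image descriptor.
    print(f"to {host}: load {image['descriptor']}")
```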
  • Publication number: 20180336480
    Abstract: An aspect includes querying, by a processor, a plurality of model data from a distributed data source based at least in part on one or more user characteristics. A plurality of sensor data is gathered associated with a condition of a user. A policy is generated including an end goal and one or more sub-goals based at least in part on the model data and the sensor data. The policy is iteratively adapted based at least in part on one or more detected changes in the sensor data collected over a period of time to adjust at least one of the one or more sub-goals. The policy and the one or more sub-goals are provided to the user.
    Type: Application
    Filed: May 22, 2017
    Publication date: November 22, 2018
    Inventors: Hung-Yang Chang, Ching-Hua Chen, James V. Codella, Pei-Yun Hsueh, Xinyu Hu
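To make the iterative adaptation concrete, here is a toy version of a policy with an end goal and one sub-goal, where the sub-goal is adjusted as sensor readings arrive over time. The step counts, thresholds, and 10% adjustment factors are invented for the example and are not taken from the application.

```python
from statistics import mean

# Hypothetical coaching policy: an end goal plus one intermediate sub-goal (daily steps),
# adapted as new sensor readings are collected over time.
end_goal = 10_000
sub_goal = 4_000
readings: list[int] = []

def update_policy(new_reading: int) -> int:
    """Adjust the sub-goal toward the end goal based on the recent sensor trend."""
    global sub_goal
    readings.append(new_reading)
    recent = mean(readings[-7:])                          # trailing weekly average
    if recent >= sub_goal:                                # sub-goal is being met
        sub_goal = min(end_goal, int(sub_goal * 1.1))     # raise it by 10%
    elif recent < 0.8 * sub_goal:                         # falling well short
        sub_goal = max(1_000, int(sub_goal * 0.9))        # ease it back
    return sub_goal

for steps in (4200, 4500, 5000, 3000, 5200):
    print(update_policy(steps))
```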
  • Publication number: 20180308447
    Abstract: The present application provides a turn-on voltage supplying circuit and method, a defect analysis method and a display device. The turn-on voltage supplying circuit includes a voltage supplying unit and a switching unit. The voltage supplying unit is configured to provide turn-on voltages, values of which are within a predetermined range, to the M stages of gate driving circuits respectively in the case that the M stages of gate driving circuits are in a normal operation state, or to provide corresponding turn-on voltages to the gate driving circuits in the case that the gate driving circuits are subject to a defect analysis process. M is an integer greater than 1. When the gate driving circuits are subject to the defect analysis process, the voltage supplying unit comprises variable resistors connected between a reference turn-on voltage outputting terminal and the turn-on voltage inputting terminals of the gate driving circuits.
    Type: Application
    Filed: July 6, 2016
    Publication date: October 25, 2018
    Applicants: BOE TECHNOLOGY GROUP CO., LTD., BEIJING BOE DISPLAY TECHNOLOGY CO., LTD.
    Inventors: Ming HUA, Xinyu HU, Liwei ZHU, Luqiang GUO, Zhiming MENG, Yunfei WANG
  • Publication number: 20180307498
    Abstract: A driver loading method and a server, where when receiving a service request, the server determines a first global index and a first global virtual function (VF) identifier corresponding to a first function description of a designated function included in the service request, determines a virtual machine (VM) corresponding to the service request, associates the first global VF identifier with the VM, allocates a first local index on the VM to the designated function, creates a correspondence between the first local index and the first function description, and sends the correspondence to the VM. The VM loads, according to the correspondence, a driver of the designated function for a first VF corresponding to the first global VF identifier. According to the foregoing method, different drivers can be loaded for VFs that have different functions and that are virtualized by a Peripheral Component Interconnect Express (PCIe) device.
    Type: Application
    Filed: June 18, 2018
    Publication date: October 25, 2018
    Inventors: Dongtian Yang, Xinyu Hu, Yuming Xie, Yuping Zhao
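The bookkeeping described in this abstract can be sketched as follows. The function descriptions, VF identifiers, and class layout are hypothetical; the sketch only shows a server resolving a function description to a global VF identifier, allocating a per-VM local index, and handing the correspondence to the VM so it can load the matching driver.

```python
# Hypothetical server-side bookkeeping for the driver-loading flow.
GLOBAL_VFS = {
    # function description -> global index and global VF identifier
    "crypto-offload": {"global_index": 0, "global_vf_id": "vf-00"},
    "packet-capture": {"global_index": 1, "global_vf_id": "vf-01"},
}

class VirtualMachine:
    def __init__(self, name: str):
        self.name = name
        self.correspondence: dict[int, str] = {}  # local index -> function description

    def load_driver(self, local_index: int) -> None:
        """Load the driver that matches the function description for this local index."""
        function = self.correspondence[local_index]
        print(f"{self.name}: loading driver for '{function}' (local index {local_index})")

def handle_service_request(vm: VirtualMachine, function_description: str) -> None:
    vf = GLOBAL_VFS[function_description]        # first global index / global VF identifier
    local_index = len(vm.correspondence)         # allocate the next local index on the VM
    vm.correspondence[local_index] = function_description
    print(f"associated {vf['global_vf_id']} with {vm.name}")
    vm.load_driver(local_index)

vm = VirtualMachine("vm-1")
handle_service_request(vm, "crypto-offload")
handle_service_request(vm, "packet-capture")
```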
  • Patent number: 10050875
    Abstract: A network service processing method and apparatus are provided. The method includes: partitioning a field-programmable gate array (FPGA) or a network processor (NP) into multiple mutually isolated service processing resources; receiving a first configuration instruction, where the first configuration instruction includes a first service configuration execution file; performing service configuration on a service processing resource according to the first service configuration execution file, so that the service processing resource performs service processing on a to-be-processed service packet; receiving a forwarding rule used when service forwarding is to be performed on the to-be-processed service packet; and receiving the to-be-processed service packet sent by a user terminal, and distributing the to-be-processed service packet to the service processing resource by using the forwarding rule, so that the service processing resource performs service processing on the to-be-processed service packet.
    Type: Grant
    Filed: May 26, 2016
    Date of Patent: August 14, 2018
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Xinyu Hu, Yuping Zhao
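As a loose illustration of the partition-configure-distribute flow above (also listed below as application 20160277292), the sketch models each isolated service processing resource as an object that is configured with an execution file and receives packets steered by a forwarding rule. All names and the port-based rule are invented for the example.

```python
# Hypothetical model of a partitioned FPGA/NP: each isolated partition is configured
# with its own execution file, and a forwarding rule steers packets to a partition.
class ServiceResource:
    def __init__(self, name: str):
        self.name = name
        self.config: str | None = None

    def configure(self, execution_file: str) -> None:
        self.config = execution_file

    def process(self, packet: str) -> str:
        return f"{self.name} ({self.config}) processed {packet}"

partitions = {i: ServiceResource(f"partition-{i}") for i in range(4)}
partitions[0].configure("firewall.cfg")
partitions[1].configure("nat.cfg")

forwarding_rule = {80: 0, 443: 1}   # flow key (destination port) -> partition id

def distribute(packet: str, dst_port: int) -> str:
    """Apply the forwarding rule and hand the packet to the chosen partition."""
    return partitions[forwarding_rule[dst_port]].process(packet)

print(distribute("pkt-A", 80))
print(distribute("pkt-B", 443))
```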
  • Publication number: 20180121180
    Abstract: A hardware acceleration method, a compiler, and a device, to improve code execution efficiency and implement hardware acceleration. The method includes: obtaining, by a compiler, compilation policy information and source code, where the compilation policy information indicates that a first code type matches a first processor and a second code type matches a second processor; analyzing, by the compiler, a code segment in the source code according to the compilation policy information, and determining a first code segment belonging to the first code type or a second code segment belonging to the second code type; and compiling, by the compiler, the first code segment into first executable code, and sending the first executable code to the first processor; and compiling the second code segment into second executable code, and sending the second executable code to the second processor.
    Type: Application
    Filed: December 28, 2017
    Publication date: May 3, 2018
    Applicant: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Jian Chen, Hong Zhou, Xinyu Hu, Hongguang Guan, Xiaojun Zhang
  • Publication number: 20170359214
    Abstract: An Internet Protocol Security (IPSec) acceleration method, an apparatus, and a system, where the method includes generating, by an Internet Key Exchange (IKE) device, an IKE link establishment session packet according to an IPSec configuration parameter and a security policy in a security policy database (SPD), sending, by the IKE device, the IKE link establishment session packet to a peer device, establishing a security association (SA) with the peer device, and sending, by the IKE device, the SA to a data forwarding device. The IKE device and the data forwarding device are discrete devices. In this way, the IKE device and the data forwarding device can be deployed in different devices in order to increase the IPSec speed.
    Type: Application
    Filed: August 7, 2017
    Publication date: December 14, 2017
    Inventors: Yuming Xie, Xinyu Hu, Yuping Zhao, Fan Yang
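A minimal sketch of the split the abstract describes, assuming invented class and field names: one component negotiates the security association while a separate forwarding component installs it and handles the traffic. Real IKE/IPSec processing is of course far more involved; the SA here is simply fabricated.

```python
import os
from dataclasses import dataclass

@dataclass
class SecurityAssociation:
    spi: int      # security parameter index
    key: bytes    # negotiated session key (fabricated here)
    peer: str

class IkeDevice:
    def negotiate(self, peer: str) -> SecurityAssociation:
        # A real implementation would run the IKE exchange with the peer device;
        # this stub only fabricates the resulting SA.
        return SecurityAssociation(spi=0x1001, key=os.urandom(16), peer=peer)

class ForwardingDevice:
    def __init__(self):
        self.sa_table: dict[str, SecurityAssociation] = {}

    def install_sa(self, sa: SecurityAssociation) -> None:
        self.sa_table[sa.peer] = sa   # SA pushed from the separate IKE device

    def forward(self, peer: str, payload: bytes) -> str:
        sa = self.sa_table[peer]
        return f"encrypting {len(payload)} bytes to {peer} under SPI {sa.spi:#x}"

ike, forwarder = IkeDevice(), ForwardingDevice()
forwarder.install_sa(ike.negotiate("203.0.113.7"))
print(forwarder.forward("203.0.113.7", b"hello"))
```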
  • Publication number: 20170076701
    Abstract: The present disclosure discloses a display method of a display panel, a display device and a display apparatus. A received frame of high resolution (first resolution) image data is decomposed into multiple frames (n frames) of low resolution (second resolution) image data, and for the multiple frames (n frames) of low resolution image data corresponding to each frame of high resolution image data, the frames of image data in the multiple frames of second resolution image data are refreshed frame by frame at a predetermined frame refresh frequency. In this way, the multiple frames of low resolution images presented in turn are observed by the human eye and are combined into the original high resolution image by the brain, so as to achieve the effect of viewing an image with a high resolution. The present disclosure achieves this effect by increasing the refresh frequency of the low resolution image data.
    Type: Application
    Filed: May 9, 2016
    Publication date: March 16, 2017
    Inventors: Xinyu Hu, Wulin Li, Ming Hua
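The decomposition idea can be illustrated with a toy example: one high-resolution frame is split into n low-resolution sub-frames by interleaved row sampling, and recombining them stands in for the temporal integration by the eye. The sampling scheme is an assumption made for the example, not necessarily the one used in the application.

```python
import numpy as np

def decompose(frame: np.ndarray, n: int) -> list[np.ndarray]:
    """Split a (H, W) frame into n low-resolution sub-frames, each holding every n-th row."""
    return [frame[i::n, :] for i in range(n)]

def recombine(subframes: list[np.ndarray]) -> np.ndarray:
    """Inverse of decompose, standing in for the temporal integration by the eye."""
    n = len(subframes)
    height = sum(sf.shape[0] for sf in subframes)
    out = np.empty((height, subframes[0].shape[1]), dtype=subframes[0].dtype)
    for i, sf in enumerate(subframes):
        out[i::n, :] = sf
    return out

frame = np.arange(16).reshape(4, 4)      # toy "high resolution" frame
subframes = decompose(frame, 2)          # two low-resolution sub-frames, refreshed in turn
assert np.array_equal(recombine(subframes), frame)
print(len(subframes), subframes[0].shape)
```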
  • Publication number: 20170039089
    Abstract: The present invention discloses a method and an apparatus for implementing acceleration processing on a VNF. In the present invention, an acceleration request of performing acceleration processing on a virtualized network function VNF is received; a hardware acceleration device capable of performing acceleration processing on the VNF is determined according to the acceleration request; and an acceleration resource of the hardware acceleration device is allocated to the VNF, so as to perform acceleration processing on the VNF. According to the present invention, a corresponding hardware acceleration device can be dynamically selected for and allocated to a VNF, implementing virtualized management on the hardware acceleration device, and improving resource utilization.
    Type: Application
    Filed: September 27, 2016
    Publication date: February 9, 2017
    Inventors: Jinwei Xia, Xinyu Hu, Liang Zhang
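A small sketch of dynamic accelerator allocation under assumed data structures: each hardware acceleration device advertises a capability and its remaining capacity, and an acceleration request from a VNF reserves a slice of it or fails if nothing suitable is free.

```python
# Hypothetical accelerator pool: each device advertises a capability and free capacity.
DEVICES = [
    {"id": "fpga-0", "capability": "ipsec", "free_units": 4},
    {"id": "fpga-1", "capability": "dpi", "free_units": 2},
]

def allocate(vnf: str, capability: str, units: int) -> str | None:
    """Reserve `units` of capacity on a device that can accelerate `capability`."""
    for dev in DEVICES:
        if dev["capability"] == capability and dev["free_units"] >= units:
            dev["free_units"] -= units
            return f"{vnf} -> {dev['id']} ({units} units)"
    return None   # no suitable hardware acceleration device is available

print(allocate("vFW-1", "ipsec", 2))
print(allocate("vIDS-1", "dpi", 3))   # fails: not enough free capacity on any device
```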
  • Publication number: 20160277292
    Abstract: A network service processing method and apparatus are provided. The method includes: partitioning a field-programmable gate array (FPGA) or a network processor (NP) into multiple mutually isolated service processing resources; receiving a first configuration instruction, where the first configuration instruction includes a first service configuration execution file; performing service configuration on a service processing resource according to the first service configuration execution file, so that the service processing resource performs service processing on a to-be-processed service packet; receiving a forwarding rule used when service forwarding is to be performed on the to-be-processed service packet; and receiving the to-be-processed service packet sent by a user terminal, and distributing the to-be-processed service packet to the service processing resource by using the forwarding rule, so that the service processing resource performs service processing on the to-be-processed service packet.
    Type: Application
    Filed: May 26, 2016
    Publication date: September 22, 2016
    Inventors: Xinyu HU, Yuping ZHAO
  • Publication number: 20160193746
    Abstract: The invention relates to the technical field of display devices, and specifically discloses a slicing device. The slicing device includes a base, a mold and a cutting device; an avoiding groove for cooperating with the cutting device is provided on the base, and multiple first supporting rods are fixed on the base; the mold is fixed detachably at a set position on the base, and the mold has an opening cooperating with the cutting device; the cutting device is assembled slidably on the multiple first supporting rods; and each of the first supporting rods is sheathed with a first buffer spring. In the above technical solution, by means of the detachable connection between the mold and the base, when the mold is damaged it can be detached directly and replaced with a new mold without replacing the base, thereby reducing the cost of maintaining the slicing device.
    Type: Application
    Filed: July 17, 2015
    Publication date: July 7, 2016
    Inventors: Xinyu Hu, Xiaoliang Chu, Yongchao Zhang, Xinming Duan, Qian Liu
  • Publication number: 20150326405
    Abstract: A fiber-coax unit (FCU) is coupled to an optical line terminal (OLT) and a plurality of coax network units (CNUs). The FCU receives a multicast frame from the OLT. The multicast frame includes a first multicast logical link identifier (LLID) dedicated for multicast traffic directed to CNUs. The FCU replaces the first multicast LLID in the multicast frame with a second multicast LLID corresponding to one or more multicast groups that include at least one CNU of the plurality of CNUs. The FCU transmits the multicast frame to the plurality of CNUs.
    Type: Application
    Filed: December 17, 2012
    Publication date: November 12, 2015
    Inventors: Tienchuan Ko, Honger Nie, Xinyu Hu, Andrea J. Garavaglia, Stephen J. Shellhammer, Xuguang Li, Patrick Stupar
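The LLID rewrite can be pictured with a toy relay function. The LLID values, the frame layout, and the group table below are invented, and a real FCU operates on EPON/EPoC frames rather than Python dictionaries.

```python
OLT_MULTICAST_LLID = 0x7FFE              # first LLID, dedicated to CNU-bound multicast (example value)
GROUP_LLIDS = {"group-video": 0x2001}    # second LLID per multicast group

def relay(frame: dict, group: str) -> dict:
    """Replace the OLT-side multicast LLID with the group's LLID before forwarding to CNUs."""
    if frame["llid"] != OLT_MULTICAST_LLID:
        return frame                      # non-multicast traffic passes through unchanged
    forwarded = dict(frame)
    forwarded["llid"] = GROUP_LLIDS[group]
    return forwarded

frame = {"llid": OLT_MULTICAST_LLID, "payload": b"video chunk"}
print(hex(relay(frame, "group-video")["llid"]))
```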
  • Publication number: 20150127849
    Abstract: Embodiments of the present invention provide a transmission control protocol (TCP) data transmission method, a TCP offload engine, and a system, which relate to the field of communications and can reduce data migration between the TCP offload engine and a CPU while also reducing the parsing work the CPU performs on the data, so as to reduce the CPU resources consumed in processing TCP/IP data and to reduce transmission delay. The method includes: receiving, by a TCP offload engine, TCP data sent by a remote device; performing TCP offloading on the TCP data; identifying the TCP offloaded data; and sending, according to an identification result, the TCP offloaded data to a CPU or to a storage device corresponding to storage position information issued by the CPU. The embodiments of the present invention are applicable to TCP data transmission.
    Type: Application
    Filed: January 9, 2015
    Publication date: May 7, 2015
    Inventors: Hai LUO, Xinyu HU, Qikun WEI, Liang ZHANG
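A toy dispatch loop in the spirit of the abstract, with an invented identification rule and storage table: payloads recognised as bulk data are written directly to the storage position previously issued by the CPU, while everything else is handed to the CPU.

```python
# Hypothetical storage positions previously issued by the CPU, keyed by location.
storage_positions = {"vol1/blk42": bytearray()}

def classify(payload: bytes) -> str | None:
    """Invented identification rule: bulk data carries a 'DATA ' prefix."""
    return "vol1/blk42" if payload.startswith(b"DATA ") else None

def on_tcp_segment(payload: bytes) -> str:
    """Route an offloaded payload either straight to storage or up to the CPU."""
    position = classify(payload)
    if position is not None:
        storage_positions[position].extend(payload[5:])   # bypasses the CPU entirely
        return f"wrote {len(payload) - 5} bytes to {position}"
    return "handed to CPU"

print(on_tcp_segment(b"DATA abcdef"))
print(on_tcp_segment(b"CTRL open"))
```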
  • Patent number: 8756170
    Abstract: The present invention discloses a regex matching method and system, and relates to the field of computer technologies. The method includes: sorting multiple regexes into several regex groups, where all regexes in one regex group include a common string, which is known as a generic string; compiling each regex group into a DFA, and setting up a correlation between the generic string of each regex group and the DFA; matching to-be-matched data streams with the generic string respectively, and using the matched generic string as a matched string; obtaining a DFA corresponding to the matched string; and performing regex matching for the to-be-matched data streams according to the DFA, and outputting a matching result. The embodiments of the present invention shorten the data loading process, decrease the time consumed by data loading, and improve the matching performance.
    Type: Grant
    Filed: May 25, 2011
    Date of Patent: June 17, 2014
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Jian Chen, Xinyu Hu
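The grouping idea (also listed below as application 20110295779) can be approximated in a few lines. Note the simplification: the patent compiles each regex group into a single DFA, whereas this sketch just keeps a list of compiled Python regexes per generic string; the two-stage structure is the same, with a cheap literal scan first and full matching confined to the group whose generic string was hit.

```python
import re

REGEX_GROUPS = {
    # generic string -> regexes that all contain it
    "login": [r"login\s+failed", r"login from \d+\.\d+\.\d+\.\d+"],
    "error": [r"error code \d+", r"fatal error"],
}
COMPILED = {s: [re.compile(p) for p in ps] for s, ps in REGEX_GROUPS.items()}

def match(data: str) -> list[str]:
    """Scan for generic strings first, then run only the matching group's regexes."""
    hits = []
    for generic, patterns in COMPILED.items():
        if generic not in data:        # cheap literal pre-filter
            continue
        for pattern in patterns:       # full matching confined to the hit group
            if pattern.search(data):
                hits.append(pattern.pattern)
    return hits

print(match("user login from 10.0.0.8"))
print(match("all good"))
```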
  • Patent number: 8343395
    Abstract: A powder particle shaping device includes a closed cavity capable of changing between multiple shapes as an external pressure changes, and the closed cavity compresses and moves powder particles with which the closed cavity is filled full while the shape changes. A powder particle shaping method is further provided, which includes a. filling a cavity full with powder particles to be shaped; and b. applying a varying external pressure to make the cavity change repeatedly between multiple shapes, thereby making the powder particles under compression move and be subject to friction, where the cavity is kept in a closed state during an effective processing process. The shaping device and method have highly controllable shaping processing intensity of powder particles and stable processing strength, and thus are applicable to shaping and pulverization of various powder particles, and also applicable to pulverization and further shaping processing of dispersed agglomerates.
    Type: Grant
    Filed: October 28, 2011
    Date of Patent: January 1, 2013
    Inventors: Xinyu Hu, Xijun Hu
  • Patent number: 8260799
    Abstract: The present invention discloses a method and an apparatus for creating a pattern matching state machine and identifying a pattern, and relates to pattern matching technologies. The method includes: obtaining a sub-keyword field after division; generating a state transition (goto) function according to the sub-keyword field; generating a failure function of each state node according to the goto function; and generating a next-hop goto function of each state node according to the goto function and the failure function. In the process of converting the failure chain, the entry with a failure transition to the initial state is not generated. Therefore, the storage content does not increase massively, the storage structure of the AC algorithm is optimized, and the processing speed of the AC algorithm is improved.
    Type: Grant
    Filed: September 28, 2010
    Date of Patent: September 4, 2012
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Jian Chen, Hong Zhou, Xinyu Hu
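For context, here is a compact Aho-Corasick construction showing the goto and failure functions the abstract refers to. It implements only the baseline automaton, not the patent's next-hop conversion or its storage optimisation, and the keyword set is an arbitrary textbook example.

```python
from collections import deque

def build_automaton(keywords):
    """Build the goto, failure, and output functions of an Aho-Corasick automaton."""
    goto, fail, output = [{}], [0], [set()]
    for word in keywords:                      # goto function: a trie over the keywords
        state = 0
        for ch in word:
            if ch not in goto[state]:
                goto.append({})
                fail.append(0)
                output.append(set())
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        output[state].add(word)
    queue = deque(goto[0].values())            # depth-1 states fail to the root (state 0)
    while queue:                               # failure function, built breadth-first
        state = queue.popleft()
        for ch, nxt in goto[state].items():
            queue.append(nxt)
            f = fail[state]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[nxt] = goto[f].get(ch, 0)
            output[nxt] |= output[fail[nxt]]
    return goto, fail, output

def search(text, goto, fail, output):
    """Follow goto transitions, falling back along the failure chain on a miss."""
    state, hits = 0, []
    for i, ch in enumerate(text):
        while state and ch not in goto[state]:
            state = fail[state]
        state = goto[state].get(ch, 0)
        for word in output[state]:
            hits.append((i - len(word) + 1, word))
    return hits

automaton = build_automaton(["he", "she", "his", "hers"])
print(sorted(search("ushers", *automaton)))    # [(1, 'she'), (2, 'he'), (2, 'hers')]
```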
  • Publication number: 20120043685
    Abstract: A powder particle shaping device is provided, which includes a closed cavity capable of changing between multiple shapes as an external pressure changes, and the closed cavity compresses and moves powder particles with which the closed cavity is filled full while the shape changes. A powder particle shaping method is further provided, which includes a. filling a closed cavity full with powder particles to be shaped; and b. applying a varying external pressure to the closed cavity, such that the closed cavity changes repeatedly between multiple shapes, thereby making the powder particles under compression move and be subject to friction. The shaping device and method according to the present invention have highly controllable shaping processing intensity of powder particles and stable processing strength, and thus are applicable to shaping and pulverization of various powder particles, and also applicable to pulverization and further shaping processing of dispersed agglomerates.
    Type: Application
    Filed: October 28, 2011
    Publication date: February 23, 2012
    Inventors: Xinyu Hu, Xijun Hu
  • Publication number: 20110295779
    Abstract: The present invention discloses a regex matching method and system, and relates to the field of computer technologies. The method includes: sorting multiple regexes into several regex groups, where all regexes in one regex group include a common string, which is known as a generic string; compiling each regex group into a DFA, and setting up a correlation between the generic string of each regex group and the DFA; matching to-be-matched data streams with the generic string respectively, and using the matched generic string as a matched string; obtaining a DFA corresponding to the matched string; and performing regex matching for the to-be-matched data streams according to the DFA, and outputting a matching result. The embodiments of the present invention shorten the data loading process, decrease the time consumed by data loading, and improve the matching performance.
    Type: Application
    Filed: May 25, 2011
    Publication date: December 1, 2011
    Inventors: Jian Chen, Xinyu Hu