Patents by Inventor Jinhu LIU

Jinhu LIU has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250021648
    Abstract: Embodiments of this application provide a method for detecting ransomware, a related system, and a storage medium. The method includes: obtaining a partial feature of a target file based on preset data in the target file, where the partial feature includes a partial incremental entropy and/or partial histogram statistical data; determining, based on the partial feature of the target file, whether the target file is an encrypted file; and determining, if the target file is the encrypted file, that the target file is attacked by the ransomware. The method can increase file detection efficiency and accuracy.
    Type: Application
    Filed: September 27, 2024
    Publication date: January 16, 2025
    Inventors: Feng TAN, Sen WANG, Gong ZHANG, Jinhu LIU
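The detection flow described in the entry above can be illustrated with a short sketch. This is a minimal, hypothetical example assuming a Shannon-entropy check over a small sampled portion of a file with a fixed threshold; the sample size, the threshold, and the omission of the histogram-statistics branch are assumptions for illustration, not the patented method.

```python
import math
import os

SAMPLE_SIZE = 4096          # illustrative: inspect only a small portion of the file
ENTROPY_THRESHOLD = 7.5     # illustrative threshold (bits/byte); near 8 suggests encryption

def partial_entropy(path, sample_size=SAMPLE_SIZE):
    """Shannon entropy (bits per byte) of a sampled prefix of the file."""
    with open(path, "rb") as f:
        data = f.read(sample_size)
    if not data:
        return 0.0
    counts = [0] * 256
    for byte in data:
        counts[byte] += 1
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

def looks_encrypted(path):
    """Flag a file as possibly encrypted, and hence possibly hit by ransomware."""
    return partial_entropy(path) >= ENTROPY_THRESHOLD

if __name__ == "__main__":
    for name in os.listdir("."):
        if os.path.isfile(name):
            print(name, "suspicious" if looks_encrypted(name) else "ok")
```

Encrypted or compressed data tends toward roughly 8 bits of entropy per byte, which is why a near-8 threshold over a small sample is a common heuristic for spotting files rewritten by ransomware without reading them in full.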
  • Patent number: 12159036
    Abstract: This disclosure provides an access request processing method, a storage device, and a storage system. In embodiments of this disclosure, a first storage device receives an artificial intelligence (AI) model from a second storage device, wherein the AI model is obtained by the second storage device through training based on historical input output (IO) requests in historical running processes. The first storage device obtains a prediction result of the AI model based on an access request received by the first storage device.
    Type: Grant
    Filed: February 1, 2022
    Date of Patent: December 3, 2024
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Peng Lu, Jinhu Liu, Xiaodong Du
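As a rough illustration of the flow in the entry above, the sketch below has a "second storage device" train a toy next-block model on historical IO requests, ship the serialized model, and a "first storage device" apply it to an incoming access request. The NextBlockModel class, the use of pickle as a stand-in for network transfer, and the block-ID workload are illustrative assumptions, not the disclosed implementation.

```python
import pickle
from collections import Counter, defaultdict

class NextBlockModel:
    """Toy first-order model: for each accessed block, remember which block follows it most often."""
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def train(self, historical_io):                 # historical_io: sequence of block IDs
        for cur, nxt in zip(historical_io, historical_io[1:]):
            self.transitions[cur][nxt] += 1

    def predict(self, block_id):
        followers = self.transitions.get(block_id)
        return followers.most_common(1)[0][0] if followers else None

# --- second storage device: train on historical IO requests and export the model ---
history = [1, 2, 3, 1, 2, 3, 1, 2, 4, 1, 2, 3]
model = NextBlockModel()
model.train(history)
exported = pickle.dumps(model)                      # stands in for sending the model to the peer

# --- first storage device: load the received model and use it on an incoming access request ---
received_model = pickle.loads(exported)
incoming_request = 2
prediction = received_model.predict(incoming_request)
print(f"access to block {incoming_request}: predicted next block {prediction}")  # e.g. prefetch it
```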
  • Patent number: 12135648
    Abstract: A data prefetching method and apparatus, and a related storage device, are provided. Data samples are collected. An AI chip trains on the data samples to obtain a prefetching model. The AI chip then sends the prefetching model to a processor. The processor reads to-be-read data into a cache based on the prefetching model, reducing the computing burden on the processor.
    Type: Grant
    Filed: September 23, 2022
    Date of Patent: November 5, 2024
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Peng Lu, Jinhu Liu
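The division of labor in the entry above (the AI chip trains on collected samples; the processor only applies the received model when filling its cache) can be sketched as follows. The stride-based "prefetching model" and the tiny LRU cache are illustrative stand-ins chosen for brevity, not the actual design.

```python
from collections import OrderedDict

class StridePrefetchModel:
    """Toy 'prefetching model': learns the dominant stride between consecutive reads."""
    def __init__(self):
        self.stride = 1

    def train(self, samples):                       # samples: list of accessed block addresses
        strides = [b - a for a, b in zip(samples, samples[1:]) if b > a]
        if strides:
            self.stride = max(set(strides), key=strides.count)

    def next_blocks(self, addr, depth=2):
        return [addr + self.stride * i for i in range(1, depth + 1)]

class Cache:
    """Minimal LRU cache standing in for the processor-side cache."""
    def __init__(self, capacity=8):
        self.capacity, self.store = capacity, OrderedDict()

    def load(self, block):
        self.store[block] = f"data@{block}"
        self.store.move_to_end(block)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)

# "AI chip" side: collect data samples and train the prefetching model
model = StridePrefetchModel()
model.train([100, 104, 108, 112, 116])

# "processor" side: on each read, also pull the model-predicted blocks into the cache
cache = Cache()
for block in model.next_blocks(120):
    cache.load(block)
print(sorted(cache.store))                          # blocks 124 and 128 prefetched
```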
  • Publication number: 20240012756
    Abstract: Embodiments of this disclosure provide a storage device, a cache management method, and a related apparatus. The storage device includes at least two levels of storages. The at least two levels of storages include a first-level storage and a second-level storage, and the first-level storage processes data faster than the second-level storage. The first-level storage includes at least a first storage area and a second storage area. Hotness values of data stored in the first storage area are within a first hotness value interval, hotness values of data stored in the second storage area are within a second hotness value interval, and the data in the first-level storage is swapped out at a granularity of one storage area based on the hotness values of the data.
    Type: Application
    Filed: September 19, 2023
    Publication date: January 11, 2024
    Applicant: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Weijie XIANG, Jinmou LIU, Junwei CHEN, Jinhu LIU, Zhuo CHENG
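A minimal sketch of the idea in the entry above: data is placed into first-level storage areas according to hotness-value intervals, and eviction happens one whole area at a time rather than per item. The capacities, the two fixed intervals, and the dictionary used as "second-level storage" are assumptions made only for illustration.

```python
FIRST_LEVEL_CAPACITY = 6

# Two areas in the first-level storage, each bound to a hotness-value interval.
areas = {
    "hot_area":  {"interval": (50, float("inf")), "entries": {}},   # hotness >= 50
    "warm_area": {"interval": (0, 50),            "entries": {}},   # 0 <= hotness < 50
}
second_level = {}   # slower tier that receives swapped-out areas

def area_for(hotness):
    """Pick the area whose hotness-value interval contains this hotness."""
    for name, area in areas.items():
        lo, hi = area["interval"]
        if lo <= hotness < hi:
            return name
    return "warm_area"

def first_level_size():
    return sum(len(a["entries"]) for a in areas.values())

def swap_out_coldest_area():
    """Swap out at the granularity of one whole storage area (the coldest interval)."""
    coldest = min(areas, key=lambda n: areas[n]["interval"][0])
    second_level.update(areas[coldest]["entries"])
    areas[coldest]["entries"].clear()

def put(key, value, hotness):
    areas[area_for(hotness)]["entries"][key] = value
    if first_level_size() > FIRST_LEVEL_CAPACITY:
        swap_out_coldest_area()

for i in range(8):
    put(f"k{i}", f"v{i}", hotness=10 * i)
print("first level:", {n: list(a["entries"]) for n, a in areas.items()})
print("second level:", list(second_level))          # the whole warm area was moved down
```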
  • Publication number: 20230331699
    Abstract: A preparation method for a CDK4/6 inhibitor, a compound of formula (I): 5-fluoro-4-(3-isopropyl-2-methyl-2H-indazol-5-yl)-N-(5-(piperazin-1-yl)pyrazol-2-yl)pyrimidine-2-amine. The method uses cheap and readily available starting materials and reagents, greatly simplifies the overall reaction steps, shortens the reaction time, and improves the total yield and the purity of the key intermediate and the final product, making it suitable for industrial production.
    Type: Application
    Filed: June 21, 2021
    Publication date: October 19, 2023
    Applicant: CHIA TAI TIANQING PHARMACEUTICAL GROUP CO., LTD.
    Inventors: Guofeng LIU, Yong WANG, Meng GUO, Xiquan ZHANG, Kaizhen SONG, Jinhu LIU, Junshan LUO
  • Publication number: 20230009375
    Abstract: A data prefetching method and apparatus, and a related storage device, are provided. Data samples are collected. An AI chip trains on the data samples to obtain a prefetching model. The AI chip then sends the prefetching model to a processor. The processor reads to-be-read data into a cache based on the prefetching model, reducing the computing burden on the processor.
    Type: Application
    Filed: September 23, 2022
    Publication date: January 12, 2023
    Applicant: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Peng LU, Jinhu LIU
  • Publication number: 20220155970
    Abstract: This disclosure provides an access request processing method, a storage device, and a storage system. In embodiments of this disclosure, a first storage device receives an artificial intelligence (AI) model from a second storage device, wherein the AI model is obtained by the second storage device through training based on historical input output (IO) requests in historical running processes. The first storage device obtains a prediction result of the AI model based on an access request received by the first storage device.
    Type: Application
    Filed: February 1, 2022
    Publication date: May 19, 2022
    Applicant: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Peng LU, Jinhu LIU, Xiaodong DU
  • Publication number: 20190044676
    Abstract: A method, a base station, and user equipment for transmission are disclosed. The method comprises performing listen-before-talk on at least one channel within a first window comprising two or more channels; selecting at least one available channel based on a result of the listen-before-talk; and transmitting a discovery reference signal on at least one resource block of the at least one available channel.
    Type: Application
    Filed: August 2, 2017
    Publication date: February 7, 2019
    Inventors: Gen Li, Jinhu Liu, Rui Fan
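The listen-before-talk flow in the entry above can be sketched briefly as below. The energy threshold, the random energy measurements, and the channel numbers in the "first window" are placeholder assumptions; real LBT follows regulatory clear-channel-assessment rules that are not modeled here.

```python
import random

ENERGY_THRESHOLD = -72.0   # dBm, assumed clear-channel-assessment threshold

def sense_energy(channel):
    """Stand-in for an energy measurement on one candidate channel."""
    return random.uniform(-90.0, -60.0)

def listen_before_talk(window):
    """Perform LBT on each channel in the window; return the channels sensed as idle."""
    return [ch for ch in window if sense_energy(ch) < ENERGY_THRESHOLD]

def transmit_drs(channel, resource_block=0):
    print(f"transmitting discovery reference signal on channel {channel}, RB {resource_block}")

first_window = [36, 40, 44, 48]          # two or more candidate channels
available = listen_before_talk(first_window)
if available:
    transmit_drs(available[0])           # select at least one available channel
else:
    print("no channel available; defer transmission")
```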
  • Publication number: 20180332611
    Abstract: Embodiments of the present disclosure relate to methods and devices for subframe scheduling. In example embodiments, in a method implemented in a network device, a report indicating a size of uplink data to be transmitted by a terminal device is received from the terminal device. Scheduling grant information is transmitted to the terminal device indicating a first number of subframes scheduled to the terminal device for transmission of the uplink data. The first number of subframes is determined based on the report, and the first number is greater than a second number of subframes to be consumed by the transmission of the uplink data.
    Type: Application
    Filed: November 17, 2016
    Publication date: November 15, 2018
    Inventors: Gen LI, Jinhu LIU, Cong SHI, Amitav MUKHERJEE
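The scheduling idea in the entry above, granting a first number of subframes that exceeds the second number actually needed for the reported uplink data, can be sketched as follows. The per-subframe capacity and the fixed two-subframe margin are illustrative assumptions, not values from the disclosure.

```python
import math

BYTES_PER_SUBFRAME = 1500     # assumed uplink capacity of one subframe for this terminal
EXTRA_SUBFRAMES = 2           # assumed scheduling margin

def subframes_needed(report_size_bytes):
    """Second number: subframes actually consumed by the reported uplink data."""
    return math.ceil(report_size_bytes / BYTES_PER_SUBFRAME)

def build_scheduling_grant(report_size_bytes):
    """First number: subframes granted, chosen to exceed the number strictly needed."""
    second_number = subframes_needed(report_size_bytes)
    first_number = second_number + EXTRA_SUBFRAMES
    return {"granted_subframes": first_number, "needed_subframes": second_number}

grant = build_scheduling_grant(report_size_bytes=6200)
print(grant)   # {'granted_subframes': 7, 'needed_subframes': 5}
```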