Patents by Inventor Mingren HU

Mingren HU has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11954011
    Abstract: An apparatus and a method for executing a customized production line using an artificial intelligence development platform, a computing device and a computer readable storage medium are provided. The apparatus includes: a production line executor configured to generate a native form of the artificial intelligence development platform based on a file set, the native form to be sent to a client accessing the artificial intelligence development platform so as to present a native interactive page of the artificial intelligence development platform; and a standardized platform interface configured to provide an interaction channel between the production line executor and the artificial intelligence development platform. The production line executor is further configured to generate an intermediate result by executing processing logic defined in the file set and to process the intermediate result by interacting with the artificial intelligence development platform via the standardized platform interface.
    Type: Grant
    Filed: October 28, 2020
    Date of Patent: April 9, 2024
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Yongkang Xie, Ruyue Ma, Zhou Xin, Hao Cao, Kuan Shi, Yu Zhou, Yashuai Li, En Shi, Zhiquan Wu, Zihao Pan, Shupeng Li, Mingren Hu, Tian Wu
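The split the abstract describes, between a production line executor and a standardized platform interface, can be illustrated with a minimal sketch. All class and field names here (`ProductionLineExecutor`, `StandardizedPlatformInterface`, the file-set layout) are assumptions for illustration; the patent does not disclose a concrete API.

```python
# Hypothetical sketch: an executor renders a platform-native form from a file
# set and routes intermediate results through a standardized interface.

class Platform:
    """Stand-in for the AI development platform itself."""
    def handle(self, result):
        return {"processed": result}

class StandardizedPlatformInterface:
    """Interaction channel between the executor and the platform."""
    def __init__(self, platform):
        self.platform = platform

    def process(self, intermediate_result):
        return self.platform.handle(intermediate_result)

class ProductionLineExecutor:
    def __init__(self, file_set, interface):
        self.file_set = file_set      # files defining pages and processing logic
        self.interface = interface

    def generate_native_form(self):
        # Render a platform-native page description from the file set.
        return {"pages": [f["page"] for f in self.file_set if "page" in f]}

    def run(self):
        # Execute the processing logic defined in the file set, then hand the
        # intermediate result to the platform via the standardized interface.
        intermediate = [f["logic"]() for f in self.file_set if "logic" in f]
        return self.interface.process(intermediate)

file_set = [{"page": "labeling"}, {"logic": lambda: 41 + 1}]
executor = ProductionLineExecutor(file_set, StandardizedPlatformInterface(Platform()))
native = executor.generate_native_form()
result = executor.run()
```

The point of the interface object is that the executor never talks to the platform directly, so custom production lines stay decoupled from platform internals.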
  • Publication number: 20240005182
    Abstract: Provided are a streaming media processing method based on an inference service, an electronic device, and a storage medium, relating to the field of artificial intelligence and, in particular, to inference services for artificial intelligence models. The method includes: detecting, in the process of processing a k-th channel of streaming media through an i-th inference service pod, the i-th inference service pod to obtain a detection result of the i-th inference service pod, i and k being positive integers; determining a replacement object for the i-th inference service pod in the case where the detection result indicates that the i-th inference service pod is in an abnormal state; and processing the k-th channel of streaming media through the replacement object of the i-th inference service pod.
    Type: Application
    Filed: November 7, 2022
    Publication date: January 4, 2024
    Applicant: Beijing Baidu Netcom Science Technology Co., Ltd.
    Inventors: Jinqi Li, En Shi, Mingren Hu, Zhengyu Qian, Zhengxiong Yuan, Zhenfang Chu, Yue Huang, Yang Luo, Guobin Wang
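The detect-then-replace flow in this abstract can be sketched briefly. The `detect` predicate, spare-pod pool, and chunked stream representation are illustrative assumptions, not details from the patent.

```python
# Hypothetical sketch: while a pod processes one channel of streaming media,
# it is checked before each chunk; an abnormal pod is swapped for a
# replacement that takes over the same channel.

def process_chunk(pod, chunk):
    return f"{pod}:{chunk}"                # stand-in for real model inference

def run_stream(pod, chunks, detect, spare_pods):
    """Process one channel, replacing the pod if detection reports it abnormal."""
    outputs = []
    for chunk in chunks:
        if detect(pod) == "abnormal":      # detection result of the current pod
            pod = spare_pods.pop(0)        # replacement object takes over
        outputs.append(process_chunk(pod, chunk))
    return pod, outputs

# Example: pod-1 is found abnormal, so pod-2 serves the whole channel.
health = {"pod-1": "abnormal", "pod-2": "normal"}
final_pod, out = run_stream("pod-1", ["c1", "c2"], lambda p: health[p], ["pod-2"])
```

The key property is that replacement happens mid-stream: the channel keeps flowing on the replacement pod rather than failing outright.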
  • Publication number: 20230376726
    Abstract: Provided are an inference service deployment method, a device and a storage medium, relating to the field of artificial intelligence technology, and in particular to the field of machine learning and inference service technology. The inference service deployment method includes: obtaining performance information of a runtime environment of a deployment end; selecting a target version of an inference service from a plurality of candidate versions of the inference service of a model according to the performance information of the runtime environment of the deployment end; and deploying the target version of the inference service to the deployment end.
    Type: Application
    Filed: November 3, 2022
    Publication date: November 23, 2023
    Inventors: Zhengxiong YUAN, Zhenfang CHU, Jinqi LI, Mingren HU, Guobin WANG, Yang LUO, Yue HUANG, Zhengyu QIAN, En SHI
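The selection step this abstract describes, choosing one target version from several candidates based on the deployment end's runtime performance information, might look like the following. The concrete fields (memory, GPU requirement, throughput) are assumptions; the patent does not specify what the performance information contains.

```python
# Illustrative sketch: filter candidate versions of a model's inference
# service to those the deployment end can run, then pick the fastest.

def select_target_version(runtime_env, candidates):
    """Return the highest-throughput candidate the environment can satisfy."""
    feasible = [
        c for c in candidates
        if c["min_memory_gb"] <= runtime_env["memory_gb"]
        and (not c["needs_gpu"] or runtime_env["has_gpu"])
    ]
    if not feasible:
        raise RuntimeError("no candidate version fits this deployment end")
    return max(feasible, key=lambda c: c["throughput_qps"])

candidates = [
    {"name": "fp32-gpu", "min_memory_gb": 8, "needs_gpu": True,  "throughput_qps": 900},
    {"name": "int8-cpu", "min_memory_gb": 2, "needs_gpu": False, "throughput_qps": 300},
]
edge_device = {"memory_gb": 4, "has_gpu": False}
target = select_target_version(edge_device, candidates)
```

On this GPU-less edge device the quantized CPU build is chosen even though the GPU build has higher throughput, which is the kind of environment-aware trade-off the method targets.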
  • Publication number: 20220374742
    Abstract: A method for running an inference service platform includes: determining inference tasks to be allocated for the inference service platform, in which the inference service platform includes two or more inference service groups, the versions of the inference service groups differ, and the inference service groups are configured to perform the same type of inference services; determining a flow weight of each of the inference service groups, in which the flow weight indicates the proportion, in the total number of inference tasks, of the number of inference tasks to be allocated to the corresponding inference service group; allocating the corresponding number of the inference tasks to be allocated to each of the inference service groups based on its flow weight; and performing the inference tasks by each inference service group.
    Type: Application
    Filed: August 3, 2022
    Publication date: November 24, 2022
    Applicant: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Zhengxiong Yuan, Zhengyu Qian, En Shi, Mingren Hu, Jinqi Li, Zhenfang Chu, Runqing Li, Yue Huang
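The flow-weight allocation in this abstract reduces to splitting pending tasks among versioned groups in proportion to their weights. A minimal sketch, with the assumption (not from the patent) that rounding leftovers go to the highest-weight group:

```python
# Minimal sketch: each inference service group (one per version) receives a
# share of the pending tasks proportional to its flow weight.

def allocate_by_flow_weight(num_tasks, flow_weights):
    """Return a task count per group, proportional to integer flow weights."""
    total = sum(flow_weights.values())
    counts = {g: (num_tasks * w) // total for g, w in flow_weights.items()}
    # Floor division can leave a remainder; hand it to the heaviest group.
    leftover = num_tasks - sum(counts.values())
    counts[max(flow_weights, key=flow_weights.get)] += leftover
    return counts

# Canary-style split: 10 tasks across version v1 (weight 9) and v2 (weight 1).
counts = allocate_by_flow_weight(10, {"v1": 9, "v2": 1})
```

Adjusting the weights over time is what lets a new version take on traffic gradually while the old version still serves the bulk of requests.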
  • Patent number: 11455173
    Abstract: A method for management of an artificial intelligence development platform is provided. The artificial intelligence development platform is deployed with instances of a plurality of model services, and each of the model services is provided with one or more instances. The method includes: acquiring calling information of at least one model service; determining the activity of the at least one model service according to the calling information; and at least deleting all instances of the at least one model service in response to the determined activity meeting a first condition.
    Type: Grant
    Filed: March 19, 2021
    Date of Patent: September 27, 2022
    Assignee: Beijing Baidu Netcom Science and Technology Co., Ltd.
    Inventors: Zhengxiong Yuan, En Shi, Yongkang Xie, Mingren Hu, Zhengyu Qian, Zhenfang Chu
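The management loop in this abstract amounts to activity-based garbage collection of model service instances. In the sketch below, the activity measure (time since last call) and the "first condition" (idle longer than a window) are assumed concrete choices for illustration; the patent leaves both abstract.

```python
# Hedged sketch: derive an activity measure from calling information and
# delete all instances of any model service whose activity meets the
# deletion condition.

def reap_inactive_services(last_call_ts, instances, now, idle_window_s):
    """Delete every instance of each model service idle beyond the window."""
    for service, last_ts in last_call_ts.items():
        activity = now - last_ts          # staleness as the activity proxy
        if activity > idle_window_s:      # stand-in for the "first condition"
            instances[service] = []       # delete all instances of the service
    return instances

instances = {"ocr": ["ocr-0", "ocr-1"], "asr": ["asr-0"]}
last_call_ts = {"ocr": 100.0, "asr": 990.0}
reap_inactive_services(last_call_ts, instances, now=1000.0, idle_window_s=600)
# "ocr" has been idle for 900 s and loses its instances; "asr" keeps its.
```

Freeing instances of dormant services is what lets a multi-tenant platform reclaim compute without operator intervention.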
  • Publication number: 20220253372
    Abstract: An apparatus and a method for executing a customized production line using an artificial intelligence development platform, a computing device and a computer readable storage medium are provided. The apparatus includes: a production line executor configured to generate a native form of the artificial intelligence development platform based on a file set, the native form to be sent to a client accessing the artificial intelligence development platform so as to present a native interactive page of the artificial intelligence development platform; and a standardized platform interface configured to provide an interaction channel between the production line executor and the artificial intelligence development platform. The production line executor is further configured to generate an intermediate result by executing processing logic defined in the file set and to process the intermediate result by interacting with the artificial intelligence development platform via the standardized platform interface.
    Type: Application
    Filed: October 28, 2020
    Publication date: August 11, 2022
    Inventors: Yongkang XIE, Ruyue MA, Zhou XIN, Hao CAO, Kuan SHI, Yu ZHOU, Yashuai LI, En SHI, Zhiquan WU, Zihao PAN, Shupeng LI, Mingren HU, Tian WU
  • Publication number: 20210211361
    Abstract: A method for management of an artificial intelligence development platform is provided. The artificial intelligence development platform is deployed with instances of a plurality of model services, and each of the model services is provided with one or more instances. The method includes: acquiring calling information of at least one model service; determining the activity of the at least one model service according to the calling information; and at least deleting all instances of the at least one model service in response to the determined activity meeting a first condition.
    Type: Application
    Filed: March 19, 2021
    Publication date: July 8, 2021
    Inventors: Zhengxiong Yuan, En Shi, Yongkang Xie, Mingren Hu, Zhengyu Qian, Zhenfang Chu
  • Patent number: 10506063
    Abstract: A method for caching User Generated Content (UGC) messages performed at a server is provided, in which first and second attribute information of a UGC message is acquired, a function value corresponding to the first attribute information is obtained based on the first attribute information and a preset first function, a function value corresponding to the second attribute information is obtained based on the second attribute information and a preset second function, and the UGC message is added to the cache memory of the server when it is determined from the two function values that the first and second attribute information of the UGC message meet a preset condition. Additionally, an apparatus and a server for caching UGC messages are also provided.
    Type: Grant
    Filed: August 14, 2015
    Date of Patent: December 10, 2019
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventor: Mingren Hu
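The two-function cache-admission test this abstract describes can be sketched concretely. The specific attributes (like count, message age), scoring functions, and thresholds below are illustrative assumptions; the patent only requires two preset functions over two attributes and a preset condition on their values.

```python
import math

# Illustrative sketch: two preset functions map two attributes of a UGC
# message to function values; the message is cached only when both values
# meet the preset condition.

def first_function(like_count):          # preset first function (popularity)
    return math.log1p(like_count)

def second_function(age_seconds):        # preset second function (freshness)
    return 1.0 / (1.0 + age_seconds / 3600.0)

def maybe_cache(cache, msg, popularity_min=3.0, freshness_min=0.5):
    v1 = first_function(msg["likes"])    # value for the first attribute
    v2 = second_function(msg["age_s"])   # value for the second attribute
    if v1 >= popularity_min and v2 >= freshness_min:   # preset condition
        cache[msg["id"]] = msg
        return True
    return False

cache = {}
hot = {"id": "m1", "likes": 50, "age_s": 600}
stale = {"id": "m2", "likes": 50, "age_s": 86400}
maybe_cache(cache, hot)      # cached: popular and fresh
maybe_cache(cache, stale)    # skipped: popular but a day old
```

Gating admission on multiple attributes keeps one-dimensional spikes (e.g. old but heavily liked posts) from crowding the cache.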
  • Patent number: 9612771
    Abstract: Embodiments of the present invention provide a method and system for processing hot topic messages. The method includes: receiving, by an interface machine, a read request for a message, and determining whether the interface machine has buffered the message and whether buffering duration of the message does not exceed preset valid duration; feeding back, if the message has been buffered and the buffering duration of the message does not exceed the preset valid duration, the message that the interface machine has buffered; and determining, if the message has been buffered but the buffering duration of the message exceeds the preset valid duration or the message is not buffered, whether the message is a hot topic message, and retrieving, if the message is a hot topic message, the hot topic message from a storage machine, buffering the hot topic message, recording a buffering moment, and feeding back the hot topic message.
    Type: Grant
    Filed: May 7, 2015
    Date of Patent: April 4, 2017
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventor: Mingren Hu
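The interface-machine logic in this abstract is essentially a TTL cache that only admits hot topics. A minimal sketch, where the hot-topic predicate and the data shapes are illustrative assumptions:

```python
# Minimal sketch: serve a fresh buffered copy if one exists; otherwise fetch
# from the storage machine and buffer the message (recording the buffering
# moment) only when it is a hot topic message.

class InterfaceMachine:
    def __init__(self, storage, is_hot_topic, valid_duration_s):
        self.storage = storage                    # stand-in for the storage machine
        self.is_hot_topic = is_hot_topic
        self.valid_duration_s = valid_duration_s  # preset valid duration
        self.buffer = {}                          # msg_id -> (message, buffered_at)

    def read(self, msg_id, now):
        entry = self.buffer.get(msg_id)
        if entry is not None and now - entry[1] <= self.valid_duration_s:
            return entry[0]                       # buffered and still valid
        message = self.storage[msg_id]            # retrieve from storage machine
        if self.is_hot_topic(msg_id):
            self.buffer[msg_id] = (message, now)  # buffer + record the moment
        return message

im = InterfaceMachine({"t1": "hot msg", "t2": "cold msg"},
                      is_hot_topic=lambda m: m == "t1", valid_duration_s=60)
im.read("t1", now=0)                 # miss: fetched from storage and buffered
served_hot = im.read("t1", now=30)   # hit: within the valid duration
served_cold = im.read("t2", now=30)  # never buffered: not a hot topic
```

Buffering only hot topics keeps the interface machine's memory bounded while still shielding the storage machine from read storms on trending messages.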
  • Publication number: 20150358419
    Abstract: A method for caching User Generated Content (UGC) messages performed at a server is provided, in which first and second attribute information of a UGC message is acquired, a function value corresponding to the first attribute information is obtained based on the first attribute information and a preset first function, a function value corresponding to the second attribute information is obtained based on the second attribute information and a preset second function, and the UGC message is added to the cache memory of the server when it is determined from the two function values that the first and second attribute information of the UGC message meet a preset condition. Additionally, an apparatus and a server for caching UGC messages are also provided.
    Type: Application
    Filed: August 14, 2015
    Publication date: December 10, 2015
    Inventor: Mingren Hu
  • Publication number: 20150254021
    Abstract: Embodiments of the present invention provide a method and system for processing hot topic messages. The method includes: receiving, by an interface machine, a read request for a message, and determining whether the interface machine has buffered the message and whether buffering duration of the message does not exceed preset valid duration; feeding back, if the message has been buffered and the buffering duration of the message does not exceed the preset valid duration, the message that the interface machine has buffered; and determining, if the message has been buffered but the buffering duration of the message exceeds the preset valid duration or the message is not buffered, whether the message is a hot topic message, and retrieving, if the message is a hot topic message, the hot topic message from a storage machine, buffering the hot topic message, recording a buffering moment, and feeding back the hot topic message.
    Type: Application
    Filed: May 7, 2015
    Publication date: September 10, 2015
    Inventor: Mingren HU