Patents by Inventor Zhifan FENG

Zhifan FENG has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220036085
    Abstract: Technical solutions for video event recognition relate to the fields of knowledge graphs, deep learning and computer vision. A video event graph is constructed, and each event in the video event graph includes: M argument roles of the event and respective arguments of the argument roles, with M being a positive integer greater than one. For a to-be-recognized video, respective arguments of the M argument roles of a to-be-recognized event corresponding to the video are acquired. According to the arguments acquired, an event is selected from the video event graph as a recognized event corresponding to the video.
    Type: Application
    Filed: June 17, 2021
    Publication date: February 3, 2022
    Applicant: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Qi WANG, Zhifan FENG, Hu YANG, Feng HE, Chunguang CHAI, Yong ZHU
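The selection step described in this abstract can be pictured with a minimal sketch (not the patented implementation): assume each event in a hypothetical event graph is stored as a mapping from argument roles to arguments, and the event whose arguments best overlap those acquired from the video is returned.

```python
# Illustrative sketch only: pick the event whose role arguments best match
# the arguments acquired from a video. The data layout is hypothetical.

def recognize_event(video_arguments: dict, event_graph: dict):
    """video_arguments: argument role -> argument acquired from the video;
    event_graph: event name -> {argument role -> argument}."""
    best_event, best_score = None, -1
    for event, role_arguments in event_graph.items():
        # Count argument roles whose arguments agree with the video's.
        score = sum(1 for role, arg in role_arguments.items()
                    if video_arguments.get(role) == arg)
        if score > best_score:
            best_event, best_score = event, score
    return best_event


if __name__ == "__main__":
    graph = {
        "traffic_accident": {"agent": "car", "location": "road"},
        "sports_goal": {"agent": "player", "location": "field"},
    }
    print(recognize_event({"agent": "car", "location": "road"}, graph))
```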
  • Publication number: 20220027634
    Abstract: A video processing method, an electronic device and a storage medium are provided, relating to the field of artificial intelligence, and particularly to the fields of deep learning, model training, knowledge mapping, video processing and the like. The method includes: acquiring a plurality of first video frames, and performing fine-grained splitting on the plurality of first video frames to obtain a plurality of second video frames; performing feature encoding on the plurality of second video frames according to multi-mode information related to the plurality of second video frames, to obtain feature fusion information for characterizing fusion of the multi-mode information; and performing similarity matching on the plurality of second video frames according to the feature fusion information, and obtaining a target video according to a result of the similarity matching.
    Type: Application
    Filed: October 6, 2021
    Publication date: January 27, 2022
    Inventors: Qi WANG, Zhifan FENG, Hu YANG, Chunguang CHAI
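As a rough illustration of the fusion and matching steps above (a sketch under assumed data shapes, not the filed method), per-frame modality features can be concatenated and consecutive frames kept when their fused vectors are similar enough:

```python
# Illustrative sketch only: fuse per-frame modality features by concatenation,
# then keep consecutive frames whose fused features resemble the last kept one.
import numpy as np

def fuse(visual: np.ndarray, audio: np.ndarray, text: np.ndarray) -> np.ndarray:
    """Concatenate modality features into one fused vector per frame."""
    return np.concatenate([visual, audio, text], axis=-1)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def match_frames(fused_frames: list, threshold: float = 0.8) -> list:
    """Return indices of frames judged similar to the previously kept frame."""
    if not fused_frames:
        return []
    kept = [0]
    for i in range(1, len(fused_frames)):
        if cosine(fused_frames[i], fused_frames[kept[-1]]) >= threshold:
            kept.append(i)
    return kept
```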
  • Patent number: 11210524
    Abstract: A method and an apparatus for outputting information are provided according to embodiments of the disclosure. The method includes: recognizing a target video, to recognize at least one entity and obtain a confidence degree of each entity, the entity including a main entity and related entities; matching the at least one entity with a pre-stored knowledge base to determine at least one candidate entity; obtaining at least one main entity by expanding the related entities of the at least one candidate entity based on the knowledge base, and obtaining a confidence degree of the obtained main entity; and calculating a confidence level of the obtained main entity based on the confidence degree of each of the related entities of the at least one candidate entity and the confidence degree of the obtained main entity, and outputting the confidence level of the obtained main entity.
    Type: Grant
    Filed: March 2, 2020
    Date of Patent: December 28, 2021
    Assignee: Beijing Baidu Netcom Science and Technology Co., Ltd.
    Inventors: Kexin Ren, Xiaohan Zhang, Zhifan Feng, Yang Zhang
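The final confidence computation in this abstract can be sketched roughly as follows (the 50/50 weighting is a placeholder, not the formula in the patent): the main entity's own confidence degree is combined with the confidence degrees of the related entities that support it.

```python
# Illustrative sketch only: combine a main entity's own confidence degree with
# those of its supporting related entities. The weights are placeholders.

def main_entity_confidence(main_conf: float, related_confs: list) -> float:
    if not related_confs:
        return main_conf
    support = sum(related_confs) / len(related_confs)
    return 0.5 * main_conf + 0.5 * support


if __name__ == "__main__":
    # A main entity recognized with confidence 0.6, supported by two related
    # entities recognized with confidences 0.9 and 0.7.
    print(main_entity_confidence(0.6, [0.9, 0.7]))  # ≈ 0.7
```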
  • Publication number: 20210383069
    Abstract: A method, apparatus, device, and storage medium for linking an entity, relating to the technical fields of knowledge graph and deep learning, are provided. The method may include: acquiring a target text; determining at least one entity mention included in the target text and a candidate entity corresponding to each entity mention; determining an embedding vector of each candidate entity based on each candidate entity and a preset entity embedding vector determination model; determining context semantic information of the target text based on the target text and each embedding vector; determining type information of the at least one entity mention; and determining an entity linking result of the at least one entity mention, based on each embedding vector, the context semantic information, and each piece of type information.
    Type: Application
    Filed: December 10, 2020
    Publication date: December 9, 2021
    Inventors: Zhijie LIU, Qi WANG, Zhifan FENG, Chunguang CHAI, Yong ZHU
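A toy version of the final linking decision (an assumed scoring rule, not the model described in the filing) combines an embedding/context similarity with a type-agreement bonus and keeps the best candidate per mention:

```python
# Illustrative sketch only: rank candidate entities for one mention by
# combining an embedding/context similarity with a type-agreement bonus.
import numpy as np

def link_mention(context_vec, candidates, mention_type, type_bonus=0.1):
    """candidates: list of (entity_name, embedding, entity_type) tuples."""
    best, best_score = None, float("-inf")
    for name, emb, etype in candidates:
        sim = float(context_vec @ emb /
                    (np.linalg.norm(context_vec) * np.linalg.norm(emb) + 1e-8))
        score = sim + (type_bonus if etype == mention_type else 0.0)
        if score > best_score:
            best, best_score = name, score
    return best, best_score


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ctx = rng.standard_normal(16)
    cands = [("Apple_Inc", rng.standard_normal(16), "ORG"),
             ("apple_fruit", rng.standard_normal(16), "FOOD")]
    print(link_mention(ctx, cands, mention_type="ORG"))
```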
  • Publication number: 20210326535
    Abstract: The present disclosure provides a method, a device, equipment and a storage medium for mining a topic concept. The method includes: acquiring a plurality of candidate topic concepts based on a query; performing word segmentation on the plurality of candidate topic concepts and performing part-of-speech tagging on words obtained after the word segmentation, to obtain a part-of-speech sequence of each of the plurality of candidate topic concepts; and filtering the plurality of candidate topic concepts based on the part-of-speech sequence, to filter out a topic concept corresponding to a target part-of-speech sequence among the plurality of candidate topic concepts, in which a proportion of accurate topic concepts in the target part-of-speech sequence is lower than or equal to a first preset threshold, or a proportion of inaccurate topic concepts in the target part-of-speech sequence is higher than or equal to a second preset threshold.
    Type: Application
    Filed: September 29, 2020
    Publication date: October 21, 2021
    Inventors: Zhijie Liu, Qi Wang, Zhifan Feng, Zhou Fang, Chunguang Chai, Yong Zhu
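The filtering rule above lends itself to a short sketch (thresholds and the labeled statistics are hypothetical): part-of-speech sequences whose labeled concepts are mostly inaccurate are identified from data, and any candidate carrying such a sequence is filtered out.

```python
# Illustrative sketch only: learn which part-of-speech sequences tend to yield
# inaccurate topic concepts, then filter candidates carrying those sequences.
from collections import defaultdict

def bad_pos_sequences(labeled, accuracy_threshold=0.5):
    """labeled: iterable of (pos_sequence_tuple, is_accurate_bool).
    Returns POS sequences whose proportion of accurate concepts is too low."""
    counts = defaultdict(lambda: [0, 0])  # pos_seq -> [accurate, total]
    for pos_seq, is_accurate in labeled:
        counts[pos_seq][0] += int(is_accurate)
        counts[pos_seq][1] += 1
    return {seq for seq, (acc, total) in counts.items()
            if acc / total <= accuracy_threshold}

def filter_candidates(candidates, bad_seqs):
    """candidates: list of (concept, pos_sequence_tuple); drop bad sequences."""
    return [concept for concept, seq in candidates if seq not in bad_seqs]
```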
  • Publication number: 20210256051
    Abstract: A theme classification method based on multimodality relates to the field of knowledge maps. The method includes obtaining text information and non-text information of an object to be classified. The non-text information includes at least one of visual information and audio information. The method also includes determining an entity set of the text information based on a pre-established knowledge base, and then extracting a text feature of the object based on the text information and the entity set. The method also includes determining a theme classification of the object based on the text feature and a non-text feature of the object.
    Type: Application
    Filed: October 13, 2020
    Publication date: August 19, 2021
    Inventors: Qi WANG, Zhifan FENG, Zhijie LIU, Chunguang CHAI, Yong ZHU
  • Publication number: 20210250666
    Abstract: The disclosure provides a method for processing a video, an electronic device, and a computer storage medium. The method includes: determining a plurality of first identifiers related to a first object based on a plurality of frames including the first object in a target video; determining a plurality of attribute values associated with the plurality of first identifiers based on a knowledge base related to the first object; determining a set of frames from the plurality of frames, in which one or more attribute values associated with one or more first identifiers determined from each one of the set of frames are predetermined values; and splitting the target video into a plurality of video clips based on positions of the set of frames in the plurality of frames.
    Type: Application
    Filed: April 28, 2021
    Publication date: August 12, 2021
    Inventors: Hu YANG, Shu WANG, Xiaohan ZHANG, Qi WANG, Zhifan FENG, Chunguang CHAI
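A minimal sketch of the splitting logic described above (the attribute name and data layout are assumptions, not the patented design): frames whose recognized identifiers map to a predetermined attribute value in the knowledge base become cut points.

```python
# Illustrative sketch only: split a video at frames whose recognized
# identifiers carry a predetermined attribute value in the knowledge base.
# The attribute key "is_boundary" is hypothetical.

def split_points(frame_identifiers, knowledge_base, key="is_boundary"):
    """frame_identifiers: per-frame list of identifiers for the first object;
    knowledge_base: identifier -> {attribute name -> value}."""
    return [i for i, ids in enumerate(frame_identifiers)
            if any(knowledge_base.get(x, {}).get(key) for x in ids)]

def split_video(frames, boundaries):
    """Cut the frame list into clips at the boundary indices."""
    clips, start = [], 0
    for b in boundaries:
        clips.append(frames[start:b])
        start = b
    clips.append(frames[start:])
    return [clip for clip in clips if clip]
```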
  • Publication number: 20210216715
    Abstract: A method for mining an entity focus in a text may include: performing word and phrase feature extraction on an input text; inputting an extracted word and phrase feature into a text coding network for coding, to obtain a coding sequence of the input text; processing the coding sequence of the input text using a core entity labeling network to predict a position of a core entity in the input text; extracting a subsequence corresponding to the core entity in the input text from the coding sequence of the input text, based on the position of the core entity in the input text; and predicting a position of a focus corresponding to the core entity in the input text using a focus labeling network, based on the coding sequence of the input text and the subsequence corresponding to the core entity in the input text.
    Type: Application
    Filed: September 17, 2020
    Publication date: July 15, 2021
    Inventors: Shu WANG, Kexin REN, Xiaohan ZHANG, Zhifan FENG, Yang ZHANG, Yong ZHU
  • Publication number: 20210216716
    Abstract: A method, apparatus, device, and storage medium for entity linking are disclosed. The method includes: acquiring a target text; determining at least one entity mention included in the target text; determining a candidate entity corresponding to each entity mention based on a preset knowledge base; determining a reference text of each candidate entity and determining additional feature information of each candidate entity; and determining an entity linking result based on the target text, each reference text, and each piece of the additional feature information.
    Type: Application
    Filed: March 26, 2021
    Publication date: July 15, 2021
    Inventors: Qi Wang, Zhifan Feng, Zhijie Liu, Siqi Wang, Chunguang Chai, Yong Zhu
  • Publication number: 20210216717
    Abstract: A method, electronic device and storage medium for generating information are disclosed. The method includes: acquiring a plurality of tag entity words from a target video, the tag entity words including a person entity word, a work entity word, a video category entity word, and a video core entity word, the video core entity word including an entity word for characterizing a content related to the target video; linking, for a tag entity word among the plurality of tag entity words, the tag entity word to a node of a preset knowledge graph; determining semantic information of the target video based on a linking result of each of the tag entity words; and structuring the semantic information of the target video based on a relationship between the node and an edge of the knowledge graph, to obtain structured semantic information of the target video.
    Type: Application
    Filed: March 26, 2021
    Publication date: July 15, 2021
    Inventors: Shu Wang, Kexin Ren, Xiaohan Zhang, Zhifan Feng, Chunguang Chai, Yong Zhu
  • Publication number: 20210216712
    Abstract: A method and an apparatus for labelling a core entity, and a related electronic device are proposed. A character vector sequence, a first word vector sequence and an entity vector sequence corresponding to a target text are obtained by performing character vector mapping, word vector mapping and entity vector mapping on the target text, so as to obtain a target vector sequence corresponding to the target text. A first probability that each character of the target text is a starting character of a core entity and a second probability that each character of the target text is an ending character of a core entity are determined by encoding and decoding the target vector sequence. One or more core entities of the target text are determined based on the first probability and the second probability.
    Type: Application
    Filed: January 14, 2021
    Publication date: July 15, 2021
    Inventors: Shu WANG, Kexin REN, Xiaohan ZHANG, Zhifan FENG, Yang ZHANG, Yong ZHU
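The start/end decoding step above can be illustrated with a small sketch (thresholds and the start-to-end pairing rule are placeholders, not the networks described in the filing):

```python
# Illustrative sketch only: decode core-entity spans from per-character
# start and end probabilities, pairing each start with the nearest end.

def decode_spans(text, start_probs, end_probs, threshold=0.5, max_len=10):
    spans = []
    for i, p_start in enumerate(start_probs):
        if p_start < threshold:
            continue
        for j in range(i, min(i + max_len, len(text))):
            if end_probs[j] >= threshold:
                spans.append(text[i:j + 1])
                break
    return spans


if __name__ == "__main__":
    text = "百度是一家科技公司"
    start = [0.9, 0.1, 0.0, 0.0, 0.0, 0.1, 0.0, 0.0, 0.0]
    end = [0.2, 0.8, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1]
    print(decode_spans(text, start, end))  # ['百度']
```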
  • Publication number: 20210216580
    Abstract: A method and an apparatus for generating a text topic and an electronic device are disclosed. The method includes: obtaining entities included in a text to be processed by mining the entities; determining each candidate entity in a knowledge graph corresponding to each entity included in the text to be processed through entity links; determining a set of core entities corresponding to the text to be processed by clustering candidate entities; determining each candidate topic included in the text to be processed based on a matching degree between each keyword in the text to be processed and each reference topic in a preset topic graph; and obtaining the text topic from the set of core entities and the candidate topics based on association between each core entity and the text to be processed as well as association between each candidate topic and the text to be processed.
    Type: Application
    Filed: January 12, 2021
    Publication date: July 15, 2021
    Inventors: Zhijie LIU, Qi WANG, Zhifan FENG, Yang ZHANG, Yong ZHU
  • Publication number: 20210192142
    Abstract: The present disclosure discloses a multimodal content processing method, apparatus, device and storage medium, which relate to the technical field of artificial intelligence. The specific implementation is: receiving a content processing request of a user which is configured to request semantic understanding of multimodal content to be processed, analyzing the multimodal content to obtain the multimodal knowledge nodes corresponding to the multimodal content, determining a semantic understanding result of the multimodal content according to the multimodal knowledge nodes, a pre-constructed multimodal knowledge graph and the multimodal content, the multimodal knowledge graph including: the multimodal knowledge nodes and an association relationship between multimodal knowledge nodes. The technical solution can obtain an accurate semantic understanding result, realize an accurate application of multimodal content, and solve the problem in the prior art that multimodal content understanding is inaccurate.
    Type: Application
    Filed: September 18, 2020
    Publication date: June 24, 2021
    Inventors: Zhifan FENG, Haifeng WANG, Kexin REN, Yong ZHU, Yajuan LYU
  • Publication number: 20210049365
    Abstract: A method and an apparatus for outputting information are provided according to embodiments of the disclosure. The method includes: recognizing a target video, to recognize at least one entity and obtain a confidence degree of each entity, the entity including a main entity and related entities; matching the at least one entity with a pre-stored knowledge base to determine at least one candidate entity; obtaining at least one main entity by expanding the related entities of the at least one candidate entity based on the knowledge base, and obtaining a confidence degree of the obtained main entity; and calculating a confidence level of the obtained main entity based on the confidence degree of each of the related entities of the at least one candidate entity and the confidence degree of the obtained main entity, and outputting the confidence level of the obtained main entity.
    Type: Application
    Filed: March 2, 2020
    Publication date: February 18, 2021
    Inventors: Kexin Ren, Xiaohan Zhang, Zhifan Feng, Yang Zhang
  • Publication number: 20200293905
    Abstract: Embodiments of the present disclosure relate to a method and apparatus for generating a neural network. The method includes: acquiring a target neural network, the target neural network corresponding to a preset association relationship, and being configured to use two entity vectors corresponding to two entities in a target knowledge graph as an input, to determine whether an association relationship between the two entities corresponding to the inputted two entity vectors is the preset association relationship, the target neural network comprising a relational tensor predetermined for the preset association relationship; converting the relational tensor in the target neural network into a product of a target number of relationship matrices, and generating a candidate neural network comprising the target number of converted relationship matrices; and generating a resulting neural network using the candidate neural network.
    Type: Application
    Filed: October 28, 2019
    Publication date: September 17, 2020
    Inventors: Jianhui HUANG, Min QIAO, Zhifan FENG, Pingping HUANG, Yong ZHU, Yajuan LYU, Ying LI
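Converting a relational tensor slice into a product of smaller relationship matrices is, in spirit, a low-rank factorization; a truncated-SVD sketch (not the patented procedure, and with an assumed rank) looks like this:

```python
# Illustrative sketch only: approximate one slice of a relational tensor by a
# product of two low-rank matrices via truncated SVD, reducing parameters.
import numpy as np

def factorize_slice(w: np.ndarray, rank: int):
    """Return matrices (a, b) with w ≈ a @ b, each of rank `rank`."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    a = u[:, :rank] * s[:rank]          # shape (d, rank)
    b = vt[:rank, :]                    # shape (rank, d)
    return a, b


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((8, 8))
    a, b = factorize_slice(w, rank=2)
    print(a.shape, b.shape, np.linalg.norm(w - a @ b))
```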
  • Publication number: 20200294267
    Abstract: Embodiments of the present disclosure provide a method and a device for processing an image, a server and a storage medium. The method includes: determining, based on an object type of an object in an image to be processed, a feature expression of the object in the image to be processed; and determining an entity associated with the object in the image to be processed based on the feature expression of the object in the image to be processed and a feature expression of an entity in a knowledge graph.
    Type: Application
    Filed: January 23, 2020
    Publication date: September 17, 2020
    Inventors: Xiaohan ZHANG, Ye XU, Kexin REN, Zhifan FENG, Yang ZHANG, Yong ZHU
  • Publication number: 20200242140
    Abstract: Embodiments of the present disclosure provide a method, apparatus, device and medium for determining text relevance. The method for determining text relevance may include: identifying, from a predefined knowledge base, a first set of knowledge elements associated with a first text and a second set of knowledge elements associated with a second text. The knowledge base includes a knowledge representation consisting of knowledge elements. The method may further include: determining knowledge element relevance between the first set of knowledge elements and the second set of knowledge elements, and determining text relevance between the second text and the first text based at least on the knowledge element relevance.
    Type: Application
    Filed: November 20, 2019
    Publication date: July 30, 2020
    Inventors: Ye Xu, Zhifan Feng, Zhou Fang, Yang Zhang, Yong Zhu
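A toy stand-in for the knowledge element relevance step above (a plain Jaccard overlap, not the learned relevance described in the filing):

```python
# Illustrative sketch only: score relevance between two texts from the
# overlap of the knowledge elements identified for each of them.

def knowledge_element_relevance(elements_a: set, elements_b: set) -> float:
    if not elements_a or not elements_b:
        return 0.0
    return len(elements_a & elements_b) / len(elements_a | elements_b)


if __name__ == "__main__":
    a = {"Beijing", "Baidu", "search_engine"}
    b = {"Baidu", "search_engine", "knowledge_graph"}
    print(knowledge_element_relevance(a, b))  # 0.5
```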
  • Publication number: 20190228320
    Abstract: Systems, methods, terminals, and computer readable storage medium for normalizing entities in a knowledge base. A method for normalizing entities in a knowledge base includes acquiring a set of entities in the knowledge base, pre-segmenting the set of entities in a plurality of segmenting modes, performing a sample construction based on the result of pre-segmentation to extract a key sample, performing a feature construction based on the result of pre-segmentation to extract a similar feature, performing a normalizing determination on each pair of entities with at least one normalization model using the key sample and the similar feature to determine whether entities in each pair are the same, and grouping results of the normalizing determination.
    Type: Application
    Filed: January 23, 2019
    Publication date: July 25, 2019
    Inventors: Zhifan FENG, Chao LU, Ye XU, Zhou FANG, Yong ZHU, Ying LI
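The grouping of pairwise normalization decisions can be sketched with union-find (the pairwise decision function here is a stand-in for the patent's normalization model and similar features):

```python
# Illustrative sketch only: group entities judged identical by a pairwise
# normalization decision, merging the results with union-find.

def group_entities(entities, same):
    """entities: list of entity ids; same(a, b) -> bool pairwise decision."""
    parent = {e: e for e in entities}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i, a in enumerate(entities):
        for b in entities[i + 1:]:
            if same(a, b):
                parent[find(a)] = find(b)

    groups = {}
    for e in entities:
        groups.setdefault(find(e), []).append(e)
    return list(groups.values())


if __name__ == "__main__":
    ents = ["Beijing_Baidu", "Baidu_Inc", "Tencent"]
    same = lambda a, b: {a, b} == {"Beijing_Baidu", "Baidu_Inc"}
    print(group_entities(ents, same))  # [['Beijing_Baidu', 'Baidu_Inc'], ['Tencent']]
```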
  • Publication number: 20190220749
    Abstract: The present disclosure provides a text processing method and device based on ambiguous entity words. The method includes: obtaining a context of a text to be disambiguated and at least two candidate entities represented by the text to be disambiguated; generating a semantic vector of the context based on a trained word vector model; generating a first entity vector of each of the at least two candidate entities based on a trained unsupervised neural network model; determining a similarity between the context and each candidate entity; and determining a target entity represented by the text to be disambiguated in the context.
    Type: Application
    Filed: December 30, 2018
    Publication date: July 18, 2019
    Inventors: Zhifan FENG, Chao LU, Yong ZHU, Ying LI
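The similarity-and-selection step above can be pictured as follows (cosine similarity over assumed vectors, not the trained word vector and neural network models described in the filing):

```python
# Illustrative sketch only: choose the candidate entity whose vector is most
# similar to the semantic vector of the context.
import numpy as np

def disambiguate(context_vec, candidate_vecs):
    """candidate_vecs: dict mapping candidate entity name -> entity vector."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    return max(candidate_vecs,
               key=lambda name: cos(context_vec, candidate_vecs[name]))
```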
  • Publication number: 20190220752
    Abstract: Embodiments of the disclosure disclose a method, apparatus, server, and storage medium for incorporating a structured entity, wherein the method for incorporating a structured entity can comprise: selecting a candidate entity associated with a to-be-incorporated structured entity from a knowledge graph, determining that the to-be-incorporated structured entity is an associated entity based on prior attribute information of a category of the candidate entity and a preset model, merging the associated entity and the candidate entity, and incorporating the associated entity into the knowledge graph. The embodiments can select a candidate entity, and then integrate a preset model using prior knowledge, which can effectively improve the efficiency and accuracy in associating entities, and reduce the amount of calculation, to enable the structured entity to be simply and efficiently incorporated into the knowledge graph.
    Type: Application
    Filed: December 7, 2018
    Publication date: July 18, 2019
    Inventors: Ye XU, Zhifan FENG, Chao LU, Yang ZHANG, Zhou FANG, Shu WANG, Yong ZHU, Ying LI