Patents Assigned to Beijing Moviebook Science and Technology Co., Ltd.
  • Publication number: 20220398845
    Abstract: A method for selecting a keyframe based on a motion state includes: sequentially storing several groups of adjacent images in a key frame sequence; extracting feature points from the images, and sequentially matching the feature points of an i-th image with the feature points of the subsequent images until the number of matched feature points reaches a preset threshold value, to form a new key frame sequence; calculating a fundamental matrix between adjacent frames in the new key frame sequence, decomposing the fundamental matrix into a rotation matrix and a translation vector, and decomposing the non-singular rotation matrix according to coordinate axis directions to obtain a deflection angle about each coordinate axis; and comparing each deflection angle with a predetermined threshold value, selecting a current frame whose deflection angle is greater than the threshold value as a key frame, and adding it to a final key frame sequence.
    Type: Application
    Filed: November 19, 2020
    Publication date: December 15, 2022
    Applicant: Beijing Moviebook Science and Technology Co., Ltd.
    Inventor: Chunbin LI
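The last two steps of the abstract above (per-axis decomposition of the rotation matrix and thresholding the deflection angles) can be sketched in numpy. This is a minimal illustration assuming a ZYX Euler-angle convention; the fundamental-matrix estimation and its decomposition into R and t are omitted, and the function names are hypothetical.

```python
import numpy as np

def rotation_to_euler_zyx(R):
    """Decompose a 3x3 rotation matrix into deflection angles about the
    x, y and z coordinate axes (ZYX convention), in radians."""
    sy = np.hypot(R[0, 0], R[1, 0])
    if sy > 1e-6:
        x = np.arctan2(R[2, 1], R[2, 2])
        y = np.arctan2(-R[2, 0], sy)
        z = np.arctan2(R[1, 0], R[0, 0])
    else:  # near gimbal lock: z is unrecoverable, fix it to 0
        x = np.arctan2(-R[1, 2], R[1, 1])
        y = np.arctan2(-R[2, 0], sy)
        z = 0.0
    return np.array([x, y, z])

def is_keyframe(R, angle_threshold_rad):
    """Select the current frame as a key frame when any per-axis
    deflection angle exceeds the threshold."""
    return bool(np.any(np.abs(rotation_to_euler_zyx(R)) > angle_threshold_rad))
```

A frame rotated 0.2 rad about the camera's optical axis would, for example, pass a 0.1 rad threshold but fail a 0.3 rad one.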
  • Publication number: 20220398746
    Abstract: A learning method and a learning device for visual odometry based on an ORB feature of an image sequence are provided. The learning method includes: recording images, and forming an original data set from the plurality of obtained images; performing ORB feature extraction on the images in the original data set to extract first key features; performing feature extraction and matching on continuous images in the original data set by means of a convolutional neural network, and extracting rich second key features from the sequential images; and inputting the first key features and the second key features extracted from the original data set into a multi-layer long short-term memory (LSTM) network for training and learning, and generating and outputting an estimate of the visual odometry. Rich first key features are extracted from the image sequence, and a tracking algorithm is then used to track the features across continuous frames.
    Type: Application
    Filed: November 19, 2020
    Publication date: December 15, 2022
    Applicant: Beijing Moviebook Science and Technology Co., Ltd.
    Inventor: Ying FU
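The recurrent stage of the method above can be sketched as a single LSTM cell step over fused features. This is a numpy sketch under assumptions: the patent does not specify how the ORB and CNN feature sets are combined, so simple concatenation is used here, and all names and shapes are illustrative.

```python
import numpy as np

def lstm_step(x, h, c, W, b):
    """One step of a long short-term memory cell.
    x: fused input features (D,); h, c: previous hidden/cell state (H,);
    W: stacked gate weights of shape (4*H, D+H); b: bias of shape (4*H,)."""
    H = h.shape[0]
    z = W @ np.concatenate([x, h]) + b
    i = 1 / (1 + np.exp(-z[:H]))          # input gate
    f = 1 / (1 + np.exp(-z[H:2 * H]))     # forget gate
    o = 1 / (1 + np.exp(-z[2 * H:3 * H])) # output gate
    g = np.tanh(z[3 * H:])                # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def fuse(orb_features, cnn_features):
    """Fuse the first (ORB) and second (CNN) key features by
    concatenation -- an assumed fusion scheme, not stated in the patent."""
    return np.concatenate([orb_features, cnn_features])
```

In a full model, the hidden state after the last frame would feed a regression head that outputs the odometry estimate.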
  • Publication number: 20220262119
    Abstract: A method, apparatus and device for automatically generating shooting highlights of a soccer match, and a computer-readable storage medium, are provided. The method includes: acquiring video data of historical soccer matches, and performing training on the video data of the historical soccer matches to obtain a soccer match video processing model; processing a target soccer match video according to the soccer match video processing model to obtain video data and commentator audio data of the target soccer match video; extracting, from the video data, continuous image frames in which a goal appears, to form video clips to be selected; and performing recognition on the commentator audio data to obtain the times at which a keyword of a preset expression related to shooting occurs in the target soccer match video.
    Type: Application
    Filed: November 19, 2020
    Publication date: August 18, 2022
    Applicant: Beijing Moviebook Science and Technology Co., Ltd.
    Inventor: Minci HE
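The final selection step implied above, combining the goal-frame clips with the keyword timestamps from the commentator audio, can be sketched as an interval check. The function name, the tuple representation of clips, and the padding window are all assumptions for illustration; the patent does not specify the combination rule.

```python
def select_highlight_clips(goal_intervals, keyword_times, pad=5.0):
    """Keep candidate clips (start, end in seconds) that contain, or lie
    within `pad` seconds of, a time at which a shooting-related keyword
    was recognized in the commentator audio."""
    clips = []
    for start, end in goal_intervals:
        if any(start - pad <= t <= end + pad for t in keyword_times):
            clips.append((start, end))
    return clips
```

A clip spanning 10–20 s would thus be kept if a shooting keyword was heard at 18 s, while a clip with no nearby keyword would be discarded.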
  • Publication number: 20220253700
    Abstract: An audio signal time sequence processing method and apparatus based on a neural network are provided. The audio signal time sequence processing method includes: creating a combined network model, wherein the combined network model comprises a first network and a second network; acquiring a time-frequency graph of an audio signal; optimizing the time-frequency graph to obtain network input data; using the network input data to train the first network, and performing a feature extraction to obtain a multi-dimensional feature pattern; using the multi-dimensional feature pattern to construct a new feature vector; and inputting the new feature vector into the second network for training. The method addresses the problem that an existing time-sequence-based mapping transformation model cannot meet multi-modal information application requirements.
    Type: Application
    Filed: November 19, 2020
    Publication date: August 11, 2022
    Applicant: Beijing Moviebook Science and Technology Co., Ltd.
    Inventor: Teng SUN
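The "time-frequency graph" acquired in the second step above is, in common practice, a short-time Fourier transform magnitude spectrogram. A minimal numpy sketch, assuming a Hann window and 50% overlap (the patent does not fix these parameters):

```python
import numpy as np

def time_frequency_graph(signal, frame_len=256, hop=128):
    """Magnitude spectrogram of a 1-D audio signal: Hann-windowed
    frames followed by a real FFT per frame.
    Returns an array of shape (freq_bins, n_frames)."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1)).T
```

The resulting 2-D array is what would then be normalized ("optimized") and fed to the first network as image-like input.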
  • Publication number: 20220215560
    Abstract: A method and a device for tracking multiple target objects in a motion state are provided. The method includes: determining a feature detection area of a target object from a video frame captured by a video capture device, and extracting color features of the target object from the detection area for comparison to obtain a first comparison result; comparing the position information of marked parts of target objects in adjacent video frames in a target coordinate system to obtain a second comparison result; and determining, according to the first comparison result and the second comparison result, whether the target objects in the adjacent video frames are the same target object, so as to implement accurate positioning and tracking. With the method, multiple target objects can be quickly identified and tracked at the same time, and the accuracy of identifying and tracking target objects in video data is improved.
    Type: Application
    Filed: September 27, 2019
    Publication date: July 7, 2022
    Applicant: Beijing Moviebook Science and Technology Co., Ltd.
    Inventor: Changjiang JI
  • Patent number: 10970854
    Abstract: A visual target tracking method and apparatus based on deep adversarial training. The method includes: dividing each video frame of video data into several search regions; for each of the search regions, inputting a target template and the search region into a response graph regression network, and outputting a response graph corresponding to a target; for each of the search regions, inputting the target template, the search region, and the response graph into a discrimination network, and outputting a score of the search region; and using positioning information corresponding to the search region with the highest score as positioning information of the target in the video frame. The method can track a target by constructing a plurality of search regions, and can effectively track a target whose length-width ratio changes. End-to-end processing can be achieved by combining the response graph regression network with the discrimination network.
    Type: Grant
    Filed: July 5, 2019
    Date of Patent: April 6, 2021
    Assignee: BEIJING MOVIEBOOK SCIENCE AND TECHNOLOGY CO., LTD.
    Inventor: Xiaochen Ji
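The region-scoring loop above can be sketched in numpy. As a stand-in for the two learned networks, this illustration uses plain cross-correlation as the "response graph" and the peak response as the "score"; in the patented method both are produced by trained networks, so everything here is an assumed simplification.

```python
import numpy as np

def response_map(template, region):
    """Dense cross-correlation of a template over a search region
    (a hand-crafted stand-in for the response graph regression network)."""
    th, tw = template.shape
    rh, rw = region.shape
    out = np.empty((rh - th + 1, rw - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(template * region[y:y + th, x:x + tw])
    return out

def best_region(template, regions):
    """Score every search region and return the index of the highest-
    scoring one; the peak response stands in for the score that the
    patent's discrimination network would output."""
    scores = [response_map(template, r).max() for r in regions]
    return int(np.argmax(scores))
```

The index returned by `best_region` selects whose positioning information is taken as the target's position in the frame.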
  • Patent number: 10916021
    Abstract: A visual target tracking method and apparatus based on a deeply and densely connected neural network. The method includes: a data input step: inputting a target image of a first video frame and a second video frame in video data into a deeply and densely connected neural network; a target tracking step: performing, based on the target image, target detection on the second video frame by using the trained deeply and densely connected neural network; and a tracking result output step: outputting bounding box coordinates and a similarity graph of a target in the second video frame, determining the length and width of the target based on the bounding box coordinates, and determining a center position of the target based on the position of a maximum value in the similarity graph.
    Type: Grant
    Filed: July 5, 2019
    Date of Patent: February 9, 2021
    Assignee: BEIJING MOVIEBOOK SCIENCE AND TECHNOLOGY CO., LTD.
    Inventor: Xiaochen Ji
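The tracking result output step above is a small decoding operation: size from the bounding box corners, center from the similarity-map peak. A minimal numpy sketch, assuming the box is given as corner coordinates (x1, y1, x2, y2), which the abstract does not specify:

```python
import numpy as np

def decode_tracking_output(bbox, similarity_map):
    """bbox = (x1, y1, x2, y2): the target's width and height come from
    the corner coordinates; its center comes from the position of the
    maximum value in the similarity map."""
    x1, y1, x2, y2 = bbox
    width, height = x2 - x1, y2 - y1
    peak = np.unravel_index(np.argmax(similarity_map), similarity_map.shape)
    center = (peak[1], peak[0])  # (row, col) -> (x, y)
    return (width, height), center
```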
  • Publication number: 20200327680
    Abstract: A visual target tracking method and apparatus based on deep adversarial training. The method includes: dividing each video frame of video data into several search regions; for each of the search regions, inputting a target template and the search region into a response graph regression network, and outputting a response graph corresponding to a target; for each of the search regions, inputting the target template, the search region, and the response graph into a discrimination network, and outputting a score of the search region; and using positioning information corresponding to the search region with the highest score as positioning information of the target in the video frame. The method can track a target by constructing a plurality of search regions, and can effectively track a target whose length-width ratio changes. End-to-end processing can be achieved by combining the response graph regression network with the discrimination network.
    Type: Application
    Filed: July 5, 2019
    Publication date: October 15, 2020
    Applicant: Beijing Moviebook Science and Technology Co., Ltd.
    Inventor: Xiaochen JI
  • Publication number: 20200327679
    Abstract: A visual target tracking method and apparatus based on a deeply and densely connected neural network. The method includes: a data input step: inputting a target image of a first video frame and a second video frame in video data into a deeply and densely connected neural network; a target tracking step: performing, based on the target image, target detection on the second video frame by using the trained deeply and densely connected neural network; and a tracking result output step: outputting bounding box coordinates and a similarity graph of a target in the second video frame, determining the length and width of the target based on the bounding box coordinates, and determining a center position of the target based on the position of a maximum value in the similarity graph.
    Type: Application
    Filed: July 5, 2019
    Publication date: October 15, 2020
    Applicant: Beijing Moviebook Science and Technology Co., Ltd.
    Inventor: Xiaochen JI