Patents Assigned to Beijing Xiaomi Pinecone Electronics Co., Ltd.
  • Patent number: 11245886
    Abstract: A method for synthesizing an omni-directional parallax view includes: obtaining parallaxes between an original image data pair, wherein the parallaxes include a horizontal parallax and a vertical parallax; determining a target viewpoint based on a base line between the original image data pair; obtaining a target pixel of the target viewpoint in original image data based on the horizontal parallax and the vertical parallax; and synthesizing a target view of the target viewpoint based on the target pixel.
    Type: Grant
    Filed: September 21, 2020
    Date of Patent: February 8, 2022
    Assignee: Beijing Xiaomi Pinecone Electronics Co., Ltd.
    Inventors: Chunxia Xiao, Fei Luo, Wenjie Li, Liheng Zhou
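    The abstract above describes warping pixels from the original image data to a target viewpoint using both horizontal and vertical parallax. Below is a minimal sketch of that idea (not the patented implementation): a forward warp by a fraction `alpha` of the disparity, where `alpha` marks the target viewpoint's position along the baseline; the disparity maps and the hole handling are assumptions.

```python
import numpy as np

def synthesize_view(ref_img, disp_x, disp_y, alpha=0.5):
    """Forward-warp each pixel of `ref_img` by a fraction `alpha` of its
    horizontal (`disp_x`) and vertical (`disp_y`) parallax."""
    h, w = ref_img.shape[:2]
    target = np.zeros_like(ref_img)
    filled = np.zeros((h, w), dtype=bool)
    ys, xs = np.mgrid[0:h, 0:w]
    tx = np.clip(np.round(xs + alpha * disp_x).astype(int), 0, w - 1)
    ty = np.clip(np.round(ys + alpha * disp_y).astype(int), 0, h - 1)
    target[ty, tx] = ref_img[ys, xs]   # copy each source pixel to its target location
    filled[ty, tx] = True              # unfilled positions are holes to inpaint separately
    return target, filled

# Toy usage: shift a 4x4 gray image by a constant one-pixel horizontal parallax.
img = np.arange(16, dtype=np.uint8).reshape(4, 4)
view, mask = synthesize_view(img, np.ones((4, 4)), np.zeros((4, 4)), alpha=1.0)
```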
  • Publication number: 20220038641
    Abstract: The present disclosure relates to a video processing method and apparatus, and a storage medium. The method is applied to a terminal and includes: a background frame for a time static special effect is determined from video frames in a video to be processed; for each of the video frames in the video, an image area where a target object is located is acquired from the respective video frame, and the image area is fused into the background frame to generate a special effect frame with the time static special effect.
    Type: Application
    Filed: April 29, 2021
    Publication date: February 3, 2022
    Applicant: BEIJING XIAOMI PINECONE ELECTRONICS CO., LTD.
    Inventors: Binglin CHANG, Kailun MA, Zhenghui SHI, Qingmin WANG
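    As a rough illustration of the fusion step above, the sketch below pastes the subject area of a later frame onto a frozen background frame using a binary subject mask; how the mask is obtained (e.g., by segmentation) is an assumption, not taken from the abstract.

```python
import numpy as np

def time_static_frame(background, frame, subject_mask):
    """Fuse the subject area of `frame` into the frozen `background` frame."""
    mask = subject_mask.astype(bool)[..., None]   # HxW -> HxWx1 for broadcasting
    return np.where(mask, frame, background)

# Toy usage: keep the background of frame 0 and paste the subject of frame t.
bg = np.zeros((4, 4, 3), dtype=np.uint8)
fr = np.full((4, 4, 3), 255, dtype=np.uint8)
m = np.zeros((4, 4), dtype=np.uint8)
m[1:3, 1:3] = 1
effect = time_static_frame(bg, fr, m)
```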
  • Publication number: 20220038642
    Abstract: A method, apparatus, and a non-transitory computer-readable storage medium for processing a video are provided. A terminal determines a subject region of a video frame in a video and a background region. A target object is located in the subject region. The background region is the region of the video frame other than the subject region. The terminal overlays the subject region of at least one first video frame containing the target object onto at least one second video frame containing the target object, and generates a special effect frame including at least two subject regions, in each of which the target object is located.
    Type: Application
    Filed: April 29, 2021
    Publication date: February 3, 2022
    Applicant: BEIJING XIAOMI PINECONE ELECTRONICS CO., LTD.
    Inventors: Binglin CHANG, Kailun MA, Zhenghui SHI, Qingmin WANG
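    A similarly hedged sketch of the "clone" effect above: subject regions cut from several frames that contain the target object are overlaid onto one base frame. The per-frame masks and the draw order are illustrative assumptions.

```python
import numpy as np

def clone_effect(base_frame, frames_and_masks):
    """Overlay the masked subject region of every (frame, mask) pair onto `base_frame`."""
    out = base_frame.copy()
    for frame, mask in frames_and_masks:
        m = mask.astype(bool)[..., None]
        out = np.where(m, frame, out)   # later subjects are drawn on top of earlier ones
    return out
```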
  • Publication number: 20210406462
    Abstract: A method for semantic recognition includes: in response to performing semantic analysis on information acquired by a terminal, a sentence to be processed is acquired. Word recognition is performed on the sentence to be processed to obtain a plurality of words and part-of-speech information thereof. A target set update operation is determined with a pre-trained word processing model, according to a word to be processed in the set of words to be processed and part-of-speech information of the word to be processed. If a dependency relationship corresponding to the target set update operation is a first dependency relationship, then for each of the plurality of preset set update operations, a respective dependency relationship of the word to be processed and a respective confidence level corresponding to that dependency relationship are determined, and a respective update of the set of words to be processed is performed.
    Type: Application
    Filed: December 23, 2020
    Publication date: December 30, 2021
    Applicant: Beijing Xiaomi Pinecone Electronics Co., Ltd.
    Inventors: Yuankai GUO, Bin WANG, Liang SHI, Erli MENG, Yulan HU, Shuo WANG, Yingzhe WANG
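    One way to read the set-update operations above is as steps of transition-based dependency parsing, where a pre-trained scorer assigns a confidence level to each preset operation and the highest-confidence operation updates the set of words to be processed. The operation names and the `score_fn` interface in this sketch are assumptions, not the claimed model.

```python
PRESET_OPERATIONS = ("SHIFT", "LEFT_ARC", "RIGHT_ARC")   # preset set-update operations

def update_word_set(buffer, stack, arcs, score_fn):
    """Apply one set-update operation; `score_fn(stack, buffer, op)` returns a confidence level."""
    scores = {op: score_fn(stack, buffer, op) for op in PRESET_OPERATIONS}
    op = max(scores, key=scores.get)
    if op == "SHIFT" and buffer:
        stack.append(buffer.pop(0))        # move the next word onto the stack
    elif op == "LEFT_ARC" and len(stack) >= 2:
        dep = stack.pop(-2)
        arcs.append((stack[-1], dep))      # (head, dependent)
    elif op == "RIGHT_ARC" and len(stack) >= 2:
        dep = stack.pop()
        arcs.append((stack[-1], dep))
    return op, scores[op]
```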
  • Publication number: 20210406524
    Abstract: Aspects of the disclosure can provide a method for identifying a face in which multiple images to be identified are received. Each of the multiple images includes a face image part. Each of the face images in the multiple images to be identified is extracted. An initial figure identification result, identifying the figure in each face image, is determined by matching the face in each face image to a face in a target image in an image identification library. The face images are grouped. A target figure identification result for each face image in each group is determined according to the initial figure identification results for the face images in that group.
    Type: Application
    Filed: January 14, 2021
    Publication date: December 30, 2021
    Applicant: Beijing Xiaomi Pinecone Electronics Co., Ltd.
    Inventors: Yunping PANG, Hai YAO, Wenming WANG
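    The grouping stage above can be pictured as a vote over the initial identification results within each group. The sketch below takes the grouping as given and picks a target result per group by majority vote; the voting rule is an assumption, not stated in the abstract.

```python
from collections import Counter

def target_results_per_group(groups):
    """`groups` maps a group id to the initial identification results of the
    face images in that group; returns one target result per group."""
    return {gid: Counter(results).most_common(1)[0][0]
            for gid, results in groups.items()}

# Toy usage: three faces grouped together, two initially matched to "alice".
print(target_results_per_group({"g1": ["alice", "alice", "bob"]}))   # {'g1': 'alice'}
```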
  • Publication number: 20210407521
    Abstract: A method for speech assistant control includes: after a speech assistant is woken up, displaying a target interface corresponding to a control instruction obtained from received speech data; when the target interface is different from an interface of the speech assistant, displaying a speech reception identifier in the target interface and continuing to receive speech data; determining, based on second speech data received while the target interface is displayed, whether a target control instruction to be executed is included in the second speech data; and displaying an interface corresponding to the target control instruction when the target control instruction is included in the second speech data.
    Type: Application
    Filed: February 3, 2021
    Publication date: December 30, 2021
    Applicant: Beijing Xiaomi Pinecone Electronics Co., Ltd.
    Inventors: Can ZHOU, Meng WEN, Xiaochuang LU
  • Publication number: 20210407495
    Abstract: A method for semantic recognition includes: in response to performing semantic analysis on information acquired by a terminal, a sentence to be processed is acquired; word recognition is performed on the sentence to obtain a plurality of words and part-of-speech information corresponding to each of the words; and a target set update operation is determined with a word processing model, according to one or more words to be input, part-of-speech information of the words to be input, and a dependency relationship of a first word. The word processing model is configured to calculate first and second feature vectors according to a word feature vector of the words to be input, a part-of-speech feature vector of the part-of-speech information, and a relationship feature vector of the dependency relationship of the first word, and to calculate confidence levels of the preset set update operations according to the first feature vector and the second feature vector.
    Type: Application
    Filed: December 23, 2020
    Publication date: December 30, 2021
    Applicant: Beijing Xiaomi Pinecone Electronics Co., Ltd.
    Inventors: Yuankai GUO, Bin WANG, Liang SHI, Yulan HU, Erli MENG, Shuo WANG, Yingzhe WANG
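    A minimal numpy sketch of the scoring path described above: word, part-of-speech and dependency-relationship feature vectors are combined into a first and a second feature vector, which are mapped to a confidence level per preset set-update operation. The two-layer structure, the softmax, and all shapes are illustrative assumptions.

```python
import numpy as np

def operation_confidences(word_vec, pos_vec, rel_vec, W1, W2):
    """Map word / part-of-speech / relationship feature vectors to one
    confidence level per preset set-update operation."""
    x = np.concatenate([word_vec, pos_vec, rel_vec])
    first = np.tanh(W1 @ x)               # first feature vector
    second = W2 @ first                   # second feature vector (one score per operation)
    exp = np.exp(second - second.max())
    return exp / exp.sum()                # confidence levels

rng = np.random.default_rng(0)
conf = operation_confidences(rng.normal(size=8), rng.normal(size=4), rng.normal(size=4),
                             rng.normal(size=(16, 16)), rng.normal(size=(3, 16)))
```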
  • Publication number: 20210407505
    Abstract: Provided are a device control method and apparatus. The method is applied to a server, and includes: receiving user voice instruction information, wherein the user voice instruction information is sent by a voice acquisition device after acquiring a user voice instruction, and the user voice instruction information indicates to-be-processed information and an information processing mode; determining, from a device set according to device information of devices in the device set, a target device capable of processing the to-be-processed information in the information processing mode, wherein the device set comprises the voice acquisition device, and all devices in the device set are bound to a same login account for logging into the server; and controlling the target device to process the to-be-processed information in the information processing mode.
    Type: Application
    Filed: March 26, 2021
    Publication date: December 30, 2021
    Applicant: BEIJING XIAOMI PINECONE ELECTRONICS CO., LTD.
    Inventor: Guohui QIAO
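    As a rough illustration of the server-side selection step above, the sketch below picks, from the devices bound to the same login account, one whose reported capabilities cover the requested information processing mode. The device-info fields (`online`, `capabilities`) are hypothetical.

```python
def pick_target_device(device_set, processing_mode):
    """Return the first online device whose capabilities cover `processing_mode`."""
    for device in device_set:
        if device.get("online") and processing_mode in device.get("capabilities", ()):
            return device
    return None

devices = [
    {"name": "speaker", "online": True, "capabilities": ("play_audio",)},
    {"name": "tv", "online": True, "capabilities": ("play_audio", "play_video")},
]
print(pick_target_device(devices, "play_video"))   # selects the TV
```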
  • Publication number: 20210406532
    Abstract: A method for detecting a finger occlusion image includes: N first original occlusion images and M first non-occlusion images are acquired, and a first training data set is generated based on the first original occlusion images and the first non-occlusion images; first training is performed, based on the first training data set, on a neural network model for detection of finger occlusion images; L second original occlusion images and K second non-occlusion images are acquired, and a second training data set is generated based on the second original occlusion images and the second non-occlusion images; a linear classifier in the neural network model that has completed the first training is replaced with an iterative training module to form a finger occlusion image detection model; second training is performed on the finger occlusion image detection model based on the second training data set; and an image to be detected is input into the trained finger occlusion image detection model to determine whether the image to be detected is a finger occlusion image.
    Type: Application
    Filed: March 30, 2021
    Publication date: December 30, 2021
    Applicant: BEIJING XIAOMI PINECONE ELECTRONICS CO., LTD.
    Inventors: Zhi QU, Yasen ZHANG, Yan SONG, Zhipeng GE, Ruoyu LIU
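    A minimal PyTorch sketch of the two-stage setup described above: a model is first trained with a linear classifier, the linear head is then swapped out, and a second round of training runs on the second data set. The backbone, the replacement head (a small MLP standing in for the iterative training module), and all hyper-parameters are assumptions.

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
model = nn.Sequential(backbone, nn.Linear(8, 2))   # stage 1: linear classifier head
loss_fn = nn.CrossEntropyLoss()

def train(model, loader, epochs=1, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()

# Stage 1: train(model, first_loader) on the first training data set.
# Stage 2: replace the linear classifier, then train on the second data set.
model[1] = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
# train(model, second_loader)
```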
  • Publication number: 20210398548
    Abstract: An original noisy signal of each of at least two microphones is acquired by acquiring, using the at least two microphones, an audio signal emitted by each sound source. For each frame in the time domain, an estimated frequency-domain signal of each sound source is acquired according to the original noisy signal of each of the at least two microphones. A frequency collection containing a plurality of predetermined static frequencies and dynamic frequencies is determined in a predetermined frequency band range. A weighting coefficient of each frequency contained in the frequency collection is determined according to the estimated frequency-domain signal at each frequency in the frequency collection. A separation matrix of each frequency is determined according to the weighting coefficient. The audio signal emitted by each of the at least two sound sources is acquired based on the separation matrix and the original noisy signal.
    Type: Application
    Filed: March 30, 2021
    Publication date: December 23, 2021
    Applicant: BEIJING XIAOMI PINECONE ELECTRONICS CO., LTD.
    Inventor: Haining HOU
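    The weighting step above can be illustrated as follows: for the frequencies in the collection, per-frame weights are derived from the estimated frequency-domain source signals and used to form weighted covariance matrices that would feed the per-frequency separation-matrix update. The inverse-norm weighting below is a common choice in auxiliary-function-based separation, given here as an assumption rather than the claimed rule.

```python
import numpy as np

def weighted_covariances(X, Y, eps=1e-8):
    """X, Y: (freqs, frames, channels) observed and estimated frequency-domain
    signals for the selected frequency collection; returns per-source,
    per-frequency weighted covariance matrices."""
    n_freq, n_frames, n_src = Y.shape
    # One weight per frame and source: inverse norm over the frequency collection.
    w = 1.0 / (np.linalg.norm(Y, axis=0) + eps)            # (frames, sources)
    n_mics = X.shape[2]
    covs = np.empty((n_src, n_freq, n_mics, n_mics), dtype=complex)
    for s in range(n_src):
        for f in range(n_freq):
            xf = X[f]                                      # (frames, mics)
            covs[s, f] = np.einsum('t,tm,tn->mn', w[:, s], xf, xf.conj()) / n_frames
    return covs

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 100, 2)) + 1j * rng.normal(size=(5, 100, 2))
Y = rng.normal(size=(5, 100, 2)) + 1j * rng.normal(size=(5, 100, 2))
V = weighted_covariances(X, Y)    # would feed the per-frequency separation-matrix update
```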
  • Publication number: 20210390341
    Abstract: A training method for an image denoising model can include collecting multiple sample image groups through a shooting device, each sample image group including multiple frames of sample images with a same photographic sensitivity, and sample images in different sample image groups having different photographic sensitivities.
    Type: Application
    Filed: January 13, 2021
    Publication date: December 16, 2021
    Applicant: Beijing Xiaomi Pinecone Electronics Co., Ltd.
    Inventor: Liang ZHANG
  • Publication number: 20210390340
    Abstract: Aspects of the disclosure provide a training method and device for an image enhancement model and a storage medium. The method can include inputting each training input image group into the image enhancement model to obtain a predicted image output by the image enhancement model, and training the image enhancement model until convergence through the loss function corresponding to each training pair. Each loss function can include a plurality of gray scale loss components in one-to-one correspondence with a plurality of frequency intervals; each gray scale loss component is determined based on a difference between a gray scale frequency division image of the predicted image and a gray scale frequency division image of the corresponding target image in each frequency interval, and different gray scale loss components correspond to different frequency intervals.
    Type: Application
    Filed: January 13, 2021
    Publication date: December 16, 2021
    Applicant: Beijing Xiaomi Pinecone Electronics Co., Ltd.
    Inventor: Liang ZHANG
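    A minimal numpy sketch of the loss structure described above: predicted and target images are converted to gray scale, split into frequency intervals with FFT ring masks, and a gray scale loss component is computed per interval. The band edges, the L1 distance, and the gray-scale weights are illustrative assumptions.

```python
import numpy as np

def gray(img):                                    # img: HxWx3, float values
    return img @ np.array([0.299, 0.587, 0.114])  # standard luma weights (an assumption here)

def band_images(gray_img, edges):
    """Split a gray image into frequency-interval images using FFT ring masks."""
    h, w = gray_img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.sqrt(fy ** 2 + fx ** 2)            # spatial frequency of each FFT bin
    spec = np.fft.fft2(gray_img)
    return [np.real(np.fft.ifft2(spec * ((radius >= lo) & (radius < hi))))
            for lo, hi in zip(edges[:-1], edges[1:])]

def frequency_division_loss(pred, target, edges=(0.0, 0.1, 0.25, 0.71)):
    components = [np.abs(p - t).mean()             # one gray scale loss component per interval
                  for p, t in zip(band_images(gray(pred), edges),
                                  band_images(gray(target), edges))]
    return components, sum(components)
```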
  • Publication number: 20210375281
    Abstract: A voice control method can be applied to a first terminal, and include: receiving a user's voice operation instruction after the first terminal is activated, the voice operation instruction being used for controlling the first terminal to perform a target operation; sending an instruction execution request to a server after the voice operation instruction is received, the instruction execution request being used for requesting the server to determine whether the first terminal is to respond to the voice operation instruction according to device information of the terminal in a device network, wherein the first terminal is located in the device network; and performing the target operation in a case where a response message is received from the server, the response message indicating that the first terminal is to respond to the voice operation instruction.
    Type: Application
    Filed: October 12, 2020
    Publication date: December 2, 2021
    Applicant: Beijing Xiaomi Pinecone Electronics Co., Ltd.
    Inventor: Chizhen GAO
  • Publication number: 20210342782
    Abstract: A method and apparatus for scheduling an item, and a computer-readable storage medium are provided. The method can include acquiring a first time sequence corresponding to a target item in a target warehouse, the first time sequence including shipment volume information of the target item corresponding to each unit time within a first historical period. The method can further include determining, according to the first time sequence and a target period to be predicted, total shipment volume information of the target item within a target period through a shipment prediction model, the shipment prediction model including a plurality of parallel first time sequence sub-models and a weighted sub-model which is connected to an output of each of the first time sequence sub-models, and scheduling the target item in the target warehouse according to the total shipment volume information and current inventory information of the target item in the target warehouse.
    Type: Application
    Filed: September 15, 2020
    Publication date: November 4, 2021
    Applicant: Beijing Xiaomi Pinecone Electronics Co., Ltd.
    Inventors: Qingchun MENG, Linhao GAO
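    The prediction-and-scheduling flow above can be illustrated with a toy ensemble: two stand-in time-series sub-models each predict the shipment volume for the target period, a weighted combination gives the total, and a replenishment quantity is scheduled when the prediction exceeds current inventory. The sub-models and weights replace the trained components and are assumptions.

```python
def predict_total_shipment(history, weights):
    sub_predictions = [
        sum(history) / len(history),   # stand-in sub-model 1: mean of the history window
        history[-1],                   # stand-in sub-model 2: last observed shipment volume
    ]
    return sum(w * p for w, p in zip(weights, sub_predictions))

def schedule_replenishment(history, inventory, weights=(0.5, 0.5)):
    predicted = predict_total_shipment(history, weights)
    return max(0.0, predicted - inventory)   # quantity to move into the target warehouse

print(schedule_replenishment([120, 90, 150, 110], inventory=100))   # 13.75
```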
  • Publication number: 20210334661
    Abstract: The present disclosure relates to an image processing method and apparatus based on a super network, and a computer storage medium. The method can include that a pretrained backbone network is merged with a rear end of a target detection network to obtain a merged super network; the merged super network is trained; Neural Architecture Search (NAS) is performed based on the trained super network to obtain a target detection neural architecture; and an image to be processed is processed by using the target detection neural architecture to obtain an image processing result.
    Type: Application
    Filed: September 22, 2020
    Publication date: October 28, 2021
    Applicant: Beijing Xiaomi Pinecone Electronics Co., Ltd.
    Inventors: Xiangxiang CHU, Ruijun XU, Bo ZHANG, Bin WANG
  • Publication number: 20210337331
    Abstract: A method for detecting an audio input includes: acquiring audio input signals received by at least two input signal channels of an audio input module; for each of the audio input signals, filtering the audio input signal according to a preset audio output signal of an electronic device where the audio input module is located to obtain a target signal; for each of the audio input signals, determining a comparison parameter value according to the target signal and the audio input signal; and determining a performance state of the audio input module according to the comparison parameter values.
    Type: Application
    Filed: September 20, 2020
    Publication date: October 28, 2021
    Applicant: Beijing Xiaomi Pinecone Electronics Co., Ltd.
    Inventor: Jingang LIU
  • Publication number: 20210327411
    Abstract: A method for processing information includes: a current audio is acquired, and a current text corresponding to the current audio is acquired; feature extraction is performed on the current audio through a speech feature extraction portion in a semantic analysis model, to obtain a speech feature of the current audio; feature extraction is performed on the current text through a text feature extraction portion in the semantic analysis model, to obtain a text feature of the current text; semantic classification is performed on the speech feature and the text feature through a classification portion in the semantic analysis model, to obtain a classification result; and recognition of the current audio is rejected in response to the classification result indicating that the current audio is to be rejected for recognition.
    Type: Application
    Filed: September 26, 2020
    Publication date: October 21, 2021
    Applicant: Beijing Xiaomi Pinecone Electronics Co., Ltd.
    Inventors: Zelun WU, Shiqi CUI, Qiaojing XIE, Chen WEI, Bin QIN, Gang WANG
  • Publication number: 20210319069
    Abstract: The present disclosure relates to a corpus processing method, a corpus processing apparatus and a storage medium. The corpus processing method can include obtaining a message input by a user, retrieving a reply message matching the message input by the user from a plurality of candidate corpora, in which the plurality of the candidate corpora includes candidate corpora obtained after removing a negative emotion corpus, and sending the reply message.
    Type: Application
    Filed: September 22, 2020
    Publication date: October 14, 2021
    Applicant: Beijing Xiaomi Pinecone Electronics Co., Ltd.
    Inventors: Zhi CUI, Kecong XIAO, Qun ZHAO
  • Publication number: 20210303997
    Abstract: Provided are a method and apparatuses for training a classification neural network, a text classification method and apparatus and an electronic device. The method includes: acquiring a regression result of sample text data, which is determined based on a pre-constructed first target neural network and represents a classification trend of the sample text data; inputting the sample text data and the regression result to a second target neural network; obtaining a predicted classification result of each piece of sample text data based on the second target neural network; adjusting a parameter of the second target neural network according to a difference between the predicted classification result and a true value of a corresponding category; and obtaining a trained second target neural network after a change of network loss related to the second target neural network meets a convergence condition.
    Type: Application
    Filed: August 25, 2020
    Publication date: September 30, 2021
    Applicant: BEIJING XIAOMI PINECONE ELECTRONICS CO., LTD.
    Inventors: Zeyu XU, Erli MENG, Lei SUN
  • Publication number: 20210304069
    Abstract: A method for training a classification model is provided. The method includes: an annotated data set is processed based on a pre-trained first model to obtain N first class probabilities, each being a probability that the annotated sample data is classified into a respective one of N classes; the K largest first class probabilities are selected from the N first class probabilities, and K first prediction labels, each corresponding to a respective one of the K first class probabilities, are determined; and a second model is trained based on the annotated data set, a real label of each piece of the annotated sample data, and the K first prediction labels of each piece of the annotated sample data. A classification method and a device for training the classification model are also provided.
    Type: Application
    Filed: August 17, 2020
    Publication date: September 30, 2021
    Applicant: BEIJING XIAOMI PINECONE ELECTRONICS CO., LTD.
    Inventors: Kexin TANG, Baoyuan QI, Jiacheng HAN, Erli MENG
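    A minimal numpy sketch of the label-preparation step described above: the pre-trained first model yields N class probabilities per annotated sample, the K largest are kept together with their class indices as the K first prediction labels, and the second model would then be trained on the real labels plus these prediction labels (the training loop itself is omitted). K and the toy probabilities are assumptions.

```python
import numpy as np

def top_k_prediction_labels(class_probs, k):
    """class_probs: (num_samples, N) probabilities from the first model.
    Returns the indices and values of the K largest probabilities per sample."""
    idx = np.argsort(class_probs, axis=1)[:, ::-1][:, :k]
    probs = np.take_along_axis(class_probs, idx, axis=1)
    return idx, probs

probs = np.array([[0.05, 0.7, 0.2, 0.05]])
labels, confidences = top_k_prediction_labels(probs, k=2)
print(labels, confidences)   # [[1 2]] [[0.7 0.2]]
```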