Patents by Inventor Qi Guo

Qi Guo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11526521
    Abstract: A device includes a memory that stores a prefetching model. A control module receives a content page including one or more links each associated with selectable content and collects data associated with the content page. The collected data includes at least one of first data indicative of respective relationships between each of the links and a viewport of the device and second data indicative of characteristics of the viewport. The control module further assigns, using the prefetching model, respective scores to each of the links based on the collected data, and selectively generates, based on the assigned scores, a request to prefetch the selectable content associated with at least one of the links.
    Type: Grant
    Filed: July 15, 2016
    Date of Patent: December 13, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Fernando Diaz, Ryen William White, Qi Guo
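A minimal Python sketch of the score-then-prefetch flow this abstract describes: each link is scored from viewport-related features by a stand-in for the prefetching model, and content is requested only when the score clears a threshold. All names (LinkFeatures, score_link, PREFETCH_THRESHOLD) and the scoring formula are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class LinkFeatures:
    distance_to_viewport: float  # pixels from the link to the visible viewport
    viewport_dwell_time: float   # seconds the viewport has lingered near the link

def score_link(features: LinkFeatures) -> float:
    """Stand-in for the prefetching model: closer, longer-dwelled links score higher."""
    return 1.0 / (1.0 + features.distance_to_viewport) + 0.1 * features.viewport_dwell_time

PREFETCH_THRESHOLD = 0.5

def links_to_prefetch(links: dict[str, LinkFeatures]) -> list[str]:
    """Return the URLs whose score clears the threshold and should be prefetched."""
    return [url for url, f in links.items() if score_link(f) >= PREFETCH_THRESHOLD]

print(links_to_prefetch({
    "/article/1": LinkFeatures(distance_to_viewport=0.0, viewport_dwell_time=3.0),
    "/article/2": LinkFeatures(distance_to_viewport=2000.0, viewport_dwell_time=0.0),
}))
```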
  • Patent number: 11513972
    Abstract: Aspects of managing Translation Lookaside Buffer (TLB) units are described herein. The aspects may include a memory management unit (MMU) that includes one or more TLB units and a control unit. The control unit may be configured to identify one of the one or more TLB units based on a stream identification (ID) included in a received virtual address and, further, to identify a frame number in the identified TLB unit. A physical address may be generated by the control unit based on the frame number and an offset included in the virtual address.
    Type: Grant
    Filed: August 12, 2019
    Date of Patent: November 29, 2022
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Tianshi Chen, Qi Guo, Yunji Chen
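A rough Python model of the translation path the abstract outlines: the stream ID carried in the virtual address selects a TLB unit, that unit maps the virtual page number to a frame number, and the physical address is formed from the frame number and the page offset. The bit layout, page size, and the assumption of a TLB hit are illustrative choices made for the sketch, not details from the patent.

```python
PAGE_SIZE = 4096          # assumed page size
STREAM_ID_SHIFT = 48      # assumed bit position of the stream ID in the virtual address

# One dict per TLB unit, mapping virtual page number -> frame number.
tlb_units = [
    {0x1: 0x80, 0x2: 0x81},   # TLB unit selected by stream 0
    {0x1: 0x90},              # TLB unit selected by stream 1
]

def translate(virtual_address: int) -> int:
    stream_id = (virtual_address >> STREAM_ID_SHIFT) & 0xFF
    page_number = (virtual_address >> 12) & ((1 << (STREAM_ID_SHIFT - 12)) - 1)
    offset = virtual_address & (PAGE_SIZE - 1)
    frame_number = tlb_units[stream_id][page_number]  # TLB hit assumed; a miss would fall back to a page-table walk
    return frame_number * PAGE_SIZE + offset

print(hex(translate((1 << 48) | (0x1 << 12) | 0x2A)))  # stream 1, page 0x1, offset 0x2A -> 0x9002a
```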
  • Patent number: 11507350
    Abstract: The present disclosure relates to a fused vector multiplier for computing an inner product between vectors, where the vectors to be computed are a multiplier number vector A = (A_N, …, A_2, A_1, A_0) and a multiplicand number vector B = (B_N, …, B_2, B_1, B_0), and A and B have the same dimension, N+1.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: November 22, 2022
    Assignee: CAMBRICON (XI'AN) SEMICONDUCTOR CO., LTD.
    Inventors: Tianshi Chen, Shengyuan Zhou, Zidong Du, Qi Guo
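The hardware in this patent fuses the partial products of the multiplications; the arithmetic it realizes reduces to the ordinary inner product of the two (N+1)-dimensional vectors, sketched below as a reference computation rather than as the hardware structure.

```python
def inner_product(a: list[int], b: list[int]) -> int:
    """Reference inner product: sum of element-wise products of two same-length vectors."""
    assert len(a) == len(b), "A and B must have the same dimension N+1"
    return sum(ai * bi for ai, bi in zip(a, b))

print(inner_product([1, 2, 3], [4, 5, 6]))  # 1*4 + 2*5 + 3*6 = 32
```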
  • Publication number: 20220308831
    Abstract: Aspects for neural network operations with fixed-point number of short bit length are described herein. The aspects may include a fixed-point number converter configured to convert one or more first floating-point numbers to one or more first fixed-point numbers in accordance with at least one format. Further, the aspects may include a neural network processor configured to process the first fixed-point numbers to generate one or more process results.
    Type: Application
    Filed: March 1, 2022
    Publication date: September 29, 2022
    Inventors: Yunji CHEN, Shaoli LIU, Qi GUO, Tianshi CHEN
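A small sketch of the float-to-fixed-point conversion the abstract refers to: floating-point values are quantized to short-bit-length signed fixed-point numbers under a chosen format (total bits and fraction bits) before being processed. The format parameters and function names are illustrative assumptions, not taken from the publication.

```python
def float_to_fixed(values, total_bits=8, frac_bits=4):
    """Quantize floats to signed fixed-point integers with `frac_bits` fractional bits."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    return [max(lo, min(hi, round(v * scale))) for v in values]

def fixed_to_float(values, frac_bits=4):
    """Recover approximate floats from the fixed-point representation."""
    return [v / (1 << frac_bits) for v in values]

fixed = float_to_fixed([0.3, -1.25, 7.9])
print(fixed, fixed_to_float(fixed))   # [5, -20, 126] -> [0.3125, -1.25, 7.875]
```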
  • Patent number: 11436522
    Abstract: An indication of a plurality of different entities in a social networking service is received, including at least two entities having a different entity type. A plurality of user profiles in the social networking service is accessed. A first machine-learned model is used to learn embeddings for the plurality of different entities in a d-dimensional space. A second machine-learned model is used to learn an embedding for each of one or more query terms that are not contained in the indication of the plurality of different entities in the social networking service, using the embeddings for the plurality of different entities learned using the first machine-learned model, the second machine-learned model being a deep structured semantic model (DSSM). A similarity score between a query term and an entity is calculated by computing the distance between the embedding for the query term and the embedding for the entity in the d-dimensional space.
    Type: Grant
    Filed: February 19, 2018
    Date of Patent: September 6, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Qi Guo, Xianren Wu, Bo Hu, Shan Zhou, Lei Ni, Erik Eugene Buchanan
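A sketch of the final scoring step the abstract describes: once query terms and entities have embeddings in the same d-dimensional space, the similarity score is a distance-based comparison of the two vectors (cosine similarity is used here). The embeddings below are random placeholders; learning them with the two machine-learned models is the substance of the patent and is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
entity_embeddings = {"software engineer": rng.normal(size=d), "data scientist": rng.normal(size=d)}
query_embedding = rng.normal(size=d)  # would come from the second (DSSM-style) model

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Similarity between two vectors in the shared d-dimensional space."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

for entity, emb in entity_embeddings.items():
    print(entity, cosine_similarity(query_embedding, emb))
```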
  • Patent number: 11409524
    Abstract: Aspects for vector operations in neural network are described herein. The aspects may include a vector caching unit configured to store a vector, wherein the vector includes one or more elements. The aspects may further include a computation module that includes one or more comparers configured to compare the one or more elements to generate an output result that satisfies a predetermined condition included in an instruction.
    Type: Grant
    Filed: October 25, 2018
    Date of Patent: August 9, 2022
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Tian Zhi, Shaoli Liu, Qi Guo, Tianshi Chen, Yunji Chen
  • Patent number: 11397742
    Abstract: In an example embodiment, a platform is provided that utilizes information available to a computer system to feed a neural network. The neural network is trained to determine both the probability that a searcher would select a given potential search result if it was presented to him or her and the probability that a subject of the potential search result would respond to a communication from the searcher. These probabilities are combined to produce a single score that can be used to determine whether to present the searcher with the potential search result and, if so, how high to rank the potential search result among other search results. During the training process, a rescaling transformation for each input feature is learned and applied to the values for the input features.
    Type: Grant
    Filed: June 21, 2019
    Date of Patent: July 26, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Daniel Sairom Krishnan Hewlett, Dan Liu, Qi Guo
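The core scoring idea in the abstract, combining the probability that the searcher selects a result with the probability that the candidate responds into one ranking score, can be sketched as below. Using a simple product as the combination, and all of the names, are assumptions; the learned rescaling of input features mentioned in the abstract is not shown.

```python
def combined_score(p_searcher_selects: float, p_candidate_responds: float) -> float:
    """Combine the two predicted probabilities into a single ranking score (product assumed)."""
    return p_searcher_selects * p_candidate_responds

# (p_searcher_selects, p_candidate_responds) per candidate
candidates = {"candidate_a": (0.9, 0.2), "candidate_b": (0.6, 0.7)}
ranked = sorted(candidates, key=lambda c: combined_score(*candidates[c]), reverse=True)
print(ranked)  # candidate_b ranks first: 0.42 > 0.18
```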
  • Publication number: 20220214875
    Abstract: A model conversion method is disclosed. The model conversion method includes obtaining model attribute information of an initial offline model and hardware attribute information of computer equipment; determining, according to the initial offline model and the hardware attribute information of the computer equipment, whether the model attribute information of the initial offline model matches the hardware attribute information of the computer equipment; and, when the model attribute information of the initial offline model does not match the hardware attribute information of the computer equipment, converting the initial offline model to a target offline model that matches the hardware attribute information of the computer equipment, according to the hardware attribute information of the computer equipment and a preset model conversion rule.
    Type: Application
    Filed: March 24, 2022
    Publication date: July 7, 2022
    Inventors: Shaoli LIU, Jun LIANG, Qi GUO
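The decision flow in the abstract (check whether the offline model's attributes match the hardware and convert only on a mismatch) can be sketched in a few lines; the attribute fields and the conversion rule here are purely illustrative.

```python
def ensure_compatible(model: dict, hardware: dict, conversion_rule) -> dict:
    """Return the model unchanged if its attributes match the hardware, else convert it."""
    if model["instruction_set"] == hardware["instruction_set"]:
        return model
    return conversion_rule(model, hardware)

def example_rule(model, hardware):
    converted = dict(model)
    converted["instruction_set"] = hardware["instruction_set"]  # re-target the offline model
    return converted

print(ensure_compatible({"instruction_set": "v1"}, {"instruction_set": "v2"}, example_rule))
```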
  • Patent number: 11378983
    Abstract: Provided is a stable flight control method for a multi-rotor unmanned aerial vehicle based on finite-time neurodynamics, comprising the following implementation process: 1) acquiring real-time flight orientation and attitude data through airborne sensors, and analyzing and processing kinematic problems of the aerial vehicle through an airborne processor to establish a dynamics model of the aerial vehicle; 2) designing a finite-time varying-parameter convergence differential neural network solver according to a finite-time varying-parameter convergence differential neurodynamics design method; 3) solving output control parameters of motors of the aerial vehicle through the finite-time varying-parameter convergence differential neural network solver using the acquired real-time orientation and attitude data; and 4) transmitting results to speed regulators of the motors of the aerial vehicle to control the motion of the unmanned aerial vehicle.
    Type: Grant
    Filed: November 6, 2017
    Date of Patent: July 5, 2022
    Assignee: SOUTH CHINA UNIVERSITY OF TECHNOLOGY
    Inventors: Zhijun Zhang, Lu'nan Zheng, Qi Guo
  • Patent number: 11373353
    Abstract: Methods, apparatus, and computer readable storage medium for simulating and rendering a material with a modified material point method are described. The method includes, for each of a plurality of time-steps of simulating a material: transferring states of particles representing the material at an N-th time-step to a grid, determining a plurality of grid-node velocities at the N-th time-step using a particle-to-grid computation based on the states of the particles at the N-th time-step, updating the plurality of grid-node velocities at an (N+1)-th time-step based on grid forces, and updating the states of the particles at the (N+1)-th time-step using a grid-to-particle computation based on the states of the particles at the N-th time-step and the plurality of grid-node velocities at the N-th and (N+1)-th time-steps. The method further includes rendering one or more images depicting the material based on the states of the particles at the plurality of time-steps.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: June 28, 2022
    Assignee: TENCENT AMERICA LLC
    Inventors: Yun Fei, Ming Gao, Qi Guo, Rundong Wu
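A drastically simplified one-dimensional particle/grid loop in the spirit of the abstract: particle state is transferred to the grid, grid-node velocities are updated by a grid force (gravity only), and the velocities are transferred back to the particles, which are then advected. Nearest-node transfers and the time-step values are assumptions; this is not the patented material point method.

```python
import numpy as np

dt, n_nodes = 0.01, 16
px = np.array([2.3, 7.6, 11.1])           # particle positions
pv = np.zeros_like(px)                     # particle velocities
pm = np.ones_like(px)                      # particle masses

for step in range(10):
    grid_m = np.zeros(n_nodes)
    grid_mv = np.zeros(n_nodes)
    nodes = np.clip(np.round(px).astype(int), 0, n_nodes - 1)
    np.add.at(grid_m, nodes, pm)                       # particle-to-grid: mass
    np.add.at(grid_mv, nodes, pm * pv)                 # particle-to-grid: momentum
    grid_v = np.divide(grid_mv, grid_m, out=np.zeros(n_nodes), where=grid_m > 0)
    grid_v[grid_m > 0] += dt * (-9.8)                  # grid update: apply grid forces (gravity)
    pv = grid_v[nodes]                                 # grid-to-particle: velocities back to particles
    px = px + dt * pv                                  # advect particles

print(px)
```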
  • Patent number: 11373084
    Abstract: Aspects for forward propagation in fully connected layers of a convolutional artificial neural network are described herein. The aspects may include multiple slave computation modules configured to calculate, in parallel, multiple groups of slave output values based on an input vector received via an interconnection unit. Further, the aspects may include a master computation module connected to the multiple slave computation modules via the interconnection unit, wherein the master computation module is configured to generate an output vector based on an intermediate result vector.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: June 28, 2022
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli Liu, Huiying Lan, Qi Guo, Yunji Chen, Tianshi Chen
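A toy NumPy sketch of the parallel structure the abstract outlines: each slave module computes its group of output values for a slice of the layer's weights, and the master module assembles the intermediate results into the output vector. The row-wise partitioning and the module names are assumptions.

```python
import numpy as np

def slave_compute(weight_slice: np.ndarray, bias_slice: np.ndarray, x: np.ndarray) -> np.ndarray:
    """One slave module: partial output values for its slice of the fully connected layer."""
    return weight_slice @ x + bias_slice

def master_compute(partials: list[np.ndarray]) -> np.ndarray:
    """Master module: concatenate the slaves' intermediate results into the output vector."""
    return np.concatenate(partials)

rng = np.random.default_rng(0)
W, b, x = rng.normal(size=(6, 4)), rng.normal(size=6), rng.normal(size=4)
partials = [slave_compute(W[i:i + 2], b[i:i + 2], x) for i in range(0, 6, 2)]  # three "slaves"
print(np.allclose(master_compute(partials), W @ x + b))  # True: matches a single-module pass
```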
  • Patent number: 11360811
    Abstract: Computer systems, data processing methods, and computer-readable media are provided to run original networks. An exemplary computer system includes first and second processors, a memory storing offline models and corresponding input data of a plurality of original networks, and a runtime system configured to run on the first processor. The runtime system, when run on the first processor, causes the first processor to implement a plurality of virtual devices comprising a data processing device configured to obtain an offline model and corresponding input data of an original network from the memory, an equipment management device configured to control turning the second processor on or off, and a task execution device configured to control the second processor to run the offline model of the original network.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: June 14, 2022
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD
    Inventors: Linyang Wu, Qi Guo, Xunyu Chen, Kangyu Wang
  • Patent number: 11321392
    Abstract: The present disclosure relates to searching for and committing low-frequency data to a database. An example method generally includes receiving, from a requesting application, a query for data from the data repository. A database system retrieves a set of indices associated with the data specified in the query from an index table in the data repository. Upon determining that the set of indices comprises a non-null set, the database system retrieves records associated with each index in the set of indices from a data table associated with the index table and returns the retrieved records to the requesting application.
    Type: Grant
    Filed: February 19, 2019
    Date of Patent: May 3, 2022
    Assignee: International Business Machines Corporation
    Inventors: Jia Tian Zhong, Bin Yang, Shuang H. Wang, Xing Xing Shen, Qi Guo
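A minimal sketch of the query path the abstract describes, using SQLite as a stand-in database: the set of indices for the requested key is read from an index table, and only when that set is non-empty are the matching records fetched from the associated data table. Table and column names are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE index_table (key TEXT, record_id INTEGER);
    CREATE TABLE data_table (record_id INTEGER PRIMARY KEY, payload TEXT);
    INSERT INTO index_table VALUES ('rare_key', 1), ('rare_key', 2);
    INSERT INTO data_table VALUES (1, 'first record'), (2, 'second record');
""")

def query(key: str) -> list[str]:
    """Fetch records only if the index set for the key is non-empty."""
    ids = [r[0] for r in conn.execute("SELECT record_id FROM index_table WHERE key = ?", (key,))]
    if not ids:                        # empty index set: nothing to fetch from the data table
        return []
    placeholders = ",".join("?" * len(ids))
    rows = conn.execute(f"SELECT payload FROM data_table WHERE record_id IN ({placeholders})", ids)
    return [r[0] for r in rows]

print(query("rare_key"))   # ['first record', 'second record']
print(query("missing"))    # []
```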
  • Patent number: 11314507
    Abstract: A model conversion method is disclosed. The model conversion method includes obtaining model attribute information of an initial offline model and hardware attribute information of computer equipment; determining, according to the initial offline model and the hardware attribute information of the computer equipment, whether the model attribute information of the initial offline model matches the hardware attribute information of the computer equipment; and, when the model attribute information of the initial offline model does not match the hardware attribute information of the computer equipment, converting the initial offline model to a target offline model that matches the hardware attribute information of the computer equipment, according to the hardware attribute information of the computer equipment and a preset model conversion rule.
    Type: Grant
    Filed: October 29, 2019
    Date of Patent: April 26, 2022
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli Liu, Jun Liang, Qi Guo
  • Patent number: 11295196
    Abstract: Aspects for neural network operations with fixed-point number of short bit length are described herein. The aspects may include a fixed-point number converter configured to convert one or more first floating-point numbers to one or more first fixed-point numbers in accordance with at least one format. Further, the aspects may include a neural network processor configured to process the first fixed-point numbers to generate one or more process results.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: April 5, 2022
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Yunji Chen, Shaoli Liu, Qi Guo, Tianshi Chen
  • Patent number: 11263530
    Abstract: Aspects for maxout layer operations in neural network are described herein. The aspects may include a load/store unit configured to retrieve input data from a storage module. The input data may be formatted as a three-dimensional vector that includes one or more feature values stored in a feature dimension of the three-dimensional vector. The aspects may further include a pruning unit configured to divide the one or more feature values into one or more feature groups based on one or more data ranges and select a maximum feature value from each of the one or more feature groups. Further still, the pruning unit may be configured to delete, in each of the one or more feature groups, feature values other than the maximum feature value and update the input data with the one or more maximum feature values.
    Type: Grant
    Filed: October 18, 2018
    Date of Patent: March 1, 2022
    Assignee: Cambricon Technologies Corporation Limited
    Inventors: Dong Han, Qi Guo, Tianshi Chen, Yunji Chen
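A NumPy sketch of the maxout pruning step the abstract describes: the feature dimension of a three-dimensional input is divided into groups, the maximum of each group is kept, and the remaining values are discarded. The group size and input shape are assumptions.

```python
import numpy as np

def maxout(features: np.ndarray, group_size: int) -> np.ndarray:
    """Keep only the maximum feature value from each group along the last (feature) axis."""
    h, w, c = features.shape                      # three-dimensional input, features on the last axis
    grouped = features.reshape(h, w, c // group_size, group_size)
    return grouped.max(axis=-1)

x = np.arange(2 * 2 * 8, dtype=float).reshape(2, 2, 8)
print(maxout(x, group_size=4).shape)              # (2, 2, 2): 8 feature values reduced to 2 maxima
```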
  • Patent number: 11204968
    Abstract: In an example embodiment, a platform is provided that utilizes information available to a computer system to feed a neural network. The neural network is trained to determine both the probability that a searcher would select a given potential search result if it was presented to him or her and the probability that a subject of the potential search result would respond to a communication from the searcher. These probabilities are essentially combined to produce a single score that can be used to determine whether to present the searcher with the potential search result and, if so, how high to rank the potential search result among other search results. In a further example embodiment, embeddings used for the input features are modified during training to maximize an objective.
    Type: Grant
    Filed: June 21, 2019
    Date of Patent: December 21, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dan Liu, Daniel Sairom Krishnan Hewlett, Qi Guo, Wei Lu, Xuhong Zhang, Wensheng Sun, Mingzhou Zhou, Anthony Hsu, Keqiu Hu, Yi Wu, Chenya Zhang, Baolei Li
  • Patent number: 11204973
    Abstract: In an example embodiment, position bias and other types of bias may be compensated for by using two-phase training of a machine-learned model. In a first phase, the machine-learned model is trained using non-randomized training data. Since certain types of machine-learned models, such as those involving deep learning (e.g., neural networks), require a lot of training data, this allows the bulk of the training to be devoted to training on non-randomized data. However, since this non-randomized training data may be biased, a second training phase is then used to revise the machine-learned model based on randomized training data to remove the bias from the machine-learned model. Since this randomized training data may be less plentiful, this approach allows the deep learning machine-learned model to be trained to operate in an unbiased manner without the need to generate additional randomized training data.
    Type: Grant
    Filed: June 21, 2019
    Date of Patent: December 21, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Daniel Sairom Krishnan Hewlett, Dan Liu, Qi Guo, Wenxiang Chen, Xiaoyi Zhang, Lester Gilbert Cottle, III, Xuebin Yan, Yu Gong, Haitong Tian, Siyao Sun, Pei-Lun Liao
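A compact sketch of the two-phase procedure the abstract describes, with a scikit-learn SGD classifier standing in for the deep model: a first pass over the plentiful non-randomized (biased) data, then a short corrective pass over the smaller randomized data. The data, the sizes, and the choice of model are assumptions made for the sketch.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Phase 1: large, non-randomized (position-biased) impression log -- the bulk of training.
X_biased, y_biased = rng.normal(size=(5000, 10)), rng.integers(0, 2, size=5000)
# Phase 2: small, randomized slate -- used only to revise the model and remove the bias.
X_random, y_random = rng.normal(size=(200, 10)), rng.integers(0, 2, size=200)

model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_biased, y_biased, classes=np.array([0, 1]))   # phase 1
model.partial_fit(X_random, y_random)                             # phase 2: correct on unbiased data

print(model.predict_proba(X_random[:3]))
```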
  • Patent number: 11163845
    Abstract: In an example embodiment, position bias is addressed by introducing an inverse propensity weight into a loss function used to train a machine-learned model. This inverse propensity weight essentially increases the weight of candidates in the training data that were presented lower in a list of candidates. This counteracts the position bias and increases the effectiveness of the machine-learned model in generating scores for future candidates. In a further example embodiment, a function is generated for the inverse propensity weight based on responses to contact requests from recruiters. In other words, while the machine-learned model may factor in both the likelihood that a recruiter will want to contact a candidate and the likelihood that a candidate will respond to such a contact, the function generated for the inverse propensity weight will be based only on training data where the candidate actually responded to a contact.
    Type: Grant
    Filed: June 21, 2019
    Date of Patent: November 2, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dan Liu, Daniel Sairom Krishnan Hewlett, Qi Guo
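A short sketch of an inverse-propensity-weighted loss of the kind the abstract describes: each example's loss is divided by the estimated examination propensity of its display position, so candidates shown lower in the list carry more weight. The propensity curve and the underlying loss are assumptions, not the patented function.

```python
import numpy as np

def position_propensity(position: np.ndarray) -> np.ndarray:
    """Assumed examination propensity: items shown lower are less likely to be seen."""
    return 1.0 / (1.0 + position)

def ipw_log_loss(y_true, p_pred, position):
    """Binary cross-entropy where each example is weighted by 1 / propensity of its position."""
    w = 1.0 / position_propensity(position)
    losses = -(y_true * np.log(p_pred) + (1 - y_true) * np.log(1 - p_pred))
    return float(np.mean(w * losses))

y = np.array([1.0, 0.0, 1.0])
p = np.array([0.8, 0.3, 0.6])
pos = np.array([0, 4, 9])          # ranks at which the candidates were displayed
print(ipw_log_loss(y, p, pos))
```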
  • Patent number: D936628
    Type: Grant
    Filed: October 24, 2019
    Date of Patent: November 23, 2021
    Assignee: Beijing Xiaomi Mobile Software Co., Ltd.
    Inventors: Zhenhua Liu, Qi Guo, Yangyang Cai