Patents by Inventor Jian Jiao

Jian Jiao has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240151143
    Abstract: The present invention discloses a right-angle turning method for small-diameter TBM exploration adit excavation and belongs to the field of geological exploration of water conservancy and hydropower projects.
    Type: Application
    Filed: January 17, 2023
    Publication date: May 9, 2024
    Inventors: Youlin Wang, Yongshun Liu, Junheng Cao, Jian Jiao, Shuwu Li, Xiaoliang He, Jian Bao, Yue Zhao, Zhongqiang Zhao, Xiaoxia Xu, Lei Feng, Nan Chen, Wei Liu, Zhixiang Zhao
  • Publication number: 20240135413
    Abstract: A query-processing technique includes an operation of matching an input query against a plurality of candidate target items, to produce a set of candidate query-item pairings. The matching is applicable to different classes of matching, but is performed by a computer processing architecture that uses a class-agnostic instance of query-processing logic and a class-agnostic target item index. After the matching operation, the technique assigns a matching class to each candidate query-item pairing in the set of candidate query-item pairings, to produce a set of classified pairings. The technique ultimately serves a particular output item to an end user, where the particular output item is chosen based on the results of the matching and assigning. Some implementations of the technique include a filtering operation whereby the candidate query-item pairings are filtered to conform to a specified selection strategy or strategies. This filtering operation supplements or replaces the assigning operation.
    Type: Application
    Filed: October 15, 2022
    Publication date: April 25, 2024
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Jian JIAO, Eren MANAVOGLU
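
As a rough illustration of the pipeline in publication 20240135413 above, the Python sketch below reduces the class-agnostic matching to cosine similarity over a single shared item index, then assigns a matching class and filters the pairings. Every name here (embed, match, assign_class, filter_pairings) and the toy class rules are assumptions for illustration, not details from the application.

```python
import zlib
import numpy as np

def embed(text: str, dim: int = 16) -> np.ndarray:
    """Stand-in encoder: deterministically hash text into a pseudo-embedding."""
    state = np.random.default_rng(zlib.crc32(text.encode()))
    vec = state.normal(size=dim)
    return vec / np.linalg.norm(vec)

# Class-agnostic target-item index: one table for all target items, with no
# notion of matching class at this stage.
target_items = ["running shoes", "trail running shoes", "wireless earbuds", "yoga mat"]
index = np.stack([embed(t) for t in target_items])

def match(query: str, top_k: int = 3):
    """Produce candidate query-item pairings purely by similarity."""
    scores = index @ embed(query)
    best = np.argsort(-scores)[:top_k]
    return [(query, target_items[i], float(scores[i])) for i in best]

def assign_class(pairing):
    """Post-matching step: attach a matching class to each candidate pairing."""
    query, item, score = pairing
    query_words, item_words = set(query.split()), set(item.split())
    if query_words <= item_words:
        cls = "exact"
    elif query_words & item_words:
        cls = "phrase"
    else:
        cls = "broad"
    return (*pairing, cls)

def filter_pairings(pairings, allowed=("exact", "phrase")):
    """Optional filtering that enforces a specified selection strategy."""
    return [p for p in pairings if p[3] in allowed]

candidates = [assign_class(p) for p in match("running shoes")]
print(filter_pairings(candidates))
```
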
  • Patent number: 11966428
    Abstract: A training system produces a resource-efficient machine-trained model via a training architecture that employs plural processing paths. Some of the processing paths incorporate the use of auxiliary information that imparts external knowledge about source items being processed. The training architecture also employs contrastive learning that operates at different respective levels within the training architecture. For instance, the training architecture uses encoder-level contrastive learning to compare output information generated by different encoders within the training architecture. The training architecture uses decoder-level contrastive learning to compare output information produced by different decoders within the training architecture. An inference-stage system performs an application task using the model produced by the training system.
    Type: Grant
    Filed: July 1, 2021
    Date of Patent: April 23, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jian Jiao, Yeyun Gong, Nan Duan, Ruofei Zhang
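
The grant above (11966428) applies contrastive learning at both the encoder level and the decoder level. Below is a hedged sketch, assuming an InfoNCE-style objective and a toy two-path setup; the patent does not commit to this particular loss or wiring.

```python
import torch
import torch.nn.functional as F

def info_nce(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07):
    """Contrast row i of `a` against all rows of `b`; matching rows are positives."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.T / temperature          # (batch, batch) similarity matrix
    targets = torch.arange(a.size(0))       # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

batch, dim = 8, 32
# Outputs of two different encoders for the same batch of source items
# (e.g., one path sees the raw item, another sees the item plus auxiliary knowledge).
enc_plain = torch.randn(batch, dim, requires_grad=True)
enc_aux = torch.randn(batch, dim, requires_grad=True)
# Outputs of two different decoders for the same batch.
dec_plain = torch.randn(batch, dim, requires_grad=True)
dec_aux = torch.randn(batch, dim, requires_grad=True)

# Encoder-level plus decoder-level contrastive terms.
loss = info_nce(enc_plain, enc_aux) + info_nce(dec_plain, dec_aux)
loss.backward()
print(float(loss))
```
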
  • Publication number: 20240054326
    Abstract: Systems and methods are provided for learning classifiers for annotating a document with predicted labels under extreme classification, where there are over a million labels. The learning includes receiving a joint graph including documents and labels as nodes. Multi-dimensional vector representations of a document (i.e., document representations) are generated based on graph convolution of the joint graph. Each document representation varies the extent of reliance on neighboring nodes to accommodate context. The document representations are feature-transformed using a residual layer. Per-label document representations are generated from the transformed document representations based on neighboring-label attention. A classifier is trained for each of over a million labels based on joint learning using training data and the per-label document representations. The trained classifier performs highly efficiently compared to classifiers trained using disjoint graphs of documents and labels.
    Type: Application
    Filed: April 12, 2021
    Publication date: February 15, 2024
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Kushal DAVE, Deepak SAINI, Arnav Kumar JAIN, Jian JIAO, Amit Kumar Rambachan SINGH, Ruofei ZHANG, Manik VARMA
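
A toy sketch of the flow in publication 20240054326: graph convolution over a joint document-label graph with varying reliance on neighbors, a residual feature transform, label attention that yields per-label document representations, and one linear classifier per label. Shapes, weights, and the attention form are illustrative assumptions, and the real system operates at million-label scale rather than the toy sizes used here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_docs, n_labels, dim = 5, 4, 8

doc_x = rng.normal(size=(n_docs, dim))          # initial document features
label_x = rng.normal(size=(n_labels, dim))      # initial label features
adj = rng.random((n_docs, n_labels)) < 0.4      # joint document-label graph edges

def graph_conv(doc_x, label_x, adj, alpha):
    """Mix each document with its label neighbors; `alpha` sets reliance on neighbors."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)
    neighbor_mean = (adj @ label_x) / deg
    return (1 - alpha) * doc_x + alpha * neighbor_mean

# Several convolutions with different reliance on neighbors, concatenated,
# then a residual feature transform.
reps = np.concatenate([graph_conv(doc_x, label_x, adj, a) for a in (0.25, 0.5, 0.75)], axis=1)
W = rng.normal(size=(reps.shape[1], reps.shape[1])) * 0.01
reps = reps + np.tanh(reps @ W)                 # residual layer

# Per-label document representations via attention over label embeddings.
label_emb = rng.normal(size=(n_labels, reps.shape[1]))
attn = np.exp(reps @ label_emb.T)
attn /= attn.sum(axis=1, keepdims=True)
per_label_reps = attn[:, :, None] * reps[:, None, :]    # (docs, labels, feat)

# One linear classifier per label, applied to that label's document representation.
classifiers = rng.normal(size=(n_labels, reps.shape[1]))
scores = np.einsum("dlk,lk->dl", per_label_reps, classifiers)
print(scores.shape)   # (n_docs, n_labels)
```
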
  • Publication number: 20240046037
    Abstract: Systems and methods are provided for training a data model based on training data. The training includes pre-training and fine-tuning the data model based on a combination of an autoregressive (AR) model and a non-autoregressive (NAR) model. Training data may be received and encoded into streams of tokens. A pre-trainer during decoding generates a continuum of data structures of the AR and NAR combined model including a main stream and a series of predicting streams. Masked tokens in predicting streams reference or attend to one or more preceding tokens in the main stream or the preceding predicting streams. A fine-tuner selects streams to generate a trained model according to a target data model. The target data model is determined based on balancing an accuracy constraint and an efficiency constraint for predicting tokens. The decoder acts as a bridge between the AR and NAR models in generating a trained data model.
    Type: Application
    Filed: December 25, 2020
    Publication date: February 8, 2024
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Jian JIAO, Yeyun GONG, Nan DUAN, Weizhu CHEN, Kewen TANG, Qiang LOU, Ruofei ZHANG, Yu YAN, Jiusheng CHEN
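
To make the stream layout in publication 20240046037 concrete, the sketch below builds a toy attention mask in which the main stream is causal and each predicting stream's masked token at position i may attend to main-stream tokens before i and to earlier predicting streams at the same position. This is an assumed reading; the exact masking scheme in the application may differ.

```python
import numpy as np

seq_len, n_pred_streams = 5, 2
total = seq_len * (1 + n_pred_streams)          # main stream + predicting streams
allow = np.zeros((total, total), dtype=bool)

def idx(stream: int, pos: int) -> int:
    """Flat index of position `pos` in stream `stream` (stream 0 is the main stream)."""
    return stream * seq_len + pos

for pos in range(seq_len):
    # Main stream: standard causal self-attention.
    for prev in range(pos + 1):
        allow[idx(0, pos), idx(0, prev)] = True
    # Predicting streams: the masked token at `pos` sees main-stream tokens
    # before `pos` and the same position in earlier predicting streams.
    for s in range(1, n_pred_streams + 1):
        for prev in range(pos):
            allow[idx(s, pos), idx(0, prev)] = True
        for s_prev in range(1, s):
            allow[idx(s, pos), idx(s_prev, pos)] = True

print(allow.astype(int))   # 1 = attention permitted
```
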
  • Patent number: 11893128
    Abstract: A query string for an encrypted database storing a plurality of encrypted data records is received from a requestor. The query string is segmented to obtain at least one word. The at least one word is encrypted with an irreversible encryption algorithm to obtain at least one encrypted word. At least one first encrypted data item with a co-occurrence weight higher than a preset threshold is acquired based on the at least one encrypted word and a co-occurrence statistics model. The co-occurrence statistics model is built to provide co-occurrence weights, each indicating a probability that the at least one encrypted word appears in a first encrypted data item of the plurality of encrypted data records. At least one second encrypted data item corresponding to the at least one first encrypted data item is acquired from the plurality of encrypted data records.
    Type: Grant
    Filed: June 14, 2021
    Date of Patent: February 6, 2024
    Assignee: International Business Machines Corporation
    Inventors: Yi Liu, Shao Mei Ji, Peng Hui Jiang, Jin Shan Li, Jian Jiao Wen, Yuan Yuan Jia, Li Wei Wang
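
A minimal sketch of the query flow in patent 11893128, with HMAC-SHA256 standing in for the irreversible encryption of words and the co-occurrence statistics model reduced to a dictionary of precomputed weights; all identifiers and data below are illustrative.

```python
import hmac, hashlib

KEY = b"shared-secret"

def enc_word(word: str) -> str:
    """Irreversibly encrypt a word (deterministic, so equal words map to equal tokens)."""
    return hmac.new(KEY, word.lower().encode(), hashlib.sha256).hexdigest()[:16]

# Co-occurrence model: weight that an encrypted word appears in a given first
# encrypted data item (built offline over the encrypted records).
co_occurrence = {
    (enc_word("invoice"), "item-001"): 0.92,
    (enc_word("invoice"), "item-007"): 0.31,
    (enc_word("overdue"), "item-001"): 0.88,
}

# Mapping from first encrypted data items to the second encrypted data items
# that are ultimately returned from the encrypted records.
record_index = {"item-001": "ciphertext-A", "item-007": "ciphertext-B"}

def query(query_string: str, threshold: float = 0.5):
    words = [enc_word(w) for w in query_string.split()]           # segment + encrypt
    hits = {item for (w, item), weight in co_occurrence.items()
            if w in words and weight > threshold}                  # first encrypted items
    return [record_index[item] for item in sorted(hits)]           # second encrypted items

print(query("overdue invoice"))   # -> ['ciphertext-A']
```
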
  • Publication number: 20230422639
    Abstract: A semiconductor structure, system and method. The semiconductor structure comprises: a substrate including circuitry therein; and a semiconductor stack on the substrate, the semiconductor stack including: a first electrically conductive layer including a metal and electrically coupled to the circuitry of the substrate; and a second electrically conductive layer between the substrate and the first electrically conductive layer, the second electrically conductive layer including one of a refractory metal, or a combination including silicon, carbon and nitride. The second electrically conductive layer may serve as a barrier layer between the first electrically conductive layer and the material of the underlying substrate, in this manner avoiding the formation of an intermixing region between the metal of the first electrically conductive layer and the material of the substrate during deposition of the metal.
    Type: Application
    Filed: June 27, 2022
    Publication date: December 28, 2023
    Applicant: Intel Corporation
    Inventors: Shafaat Ahmed, Gowtham Sriram Jawaharram, Cyrus M. Fox, Jose L. Cruz-Campa, Kriti Agarwal, Jian Jiao, Hong Li, Bharat V. Krishnan, Ervin T. Hill, III
  • Publication number: 20230394333
    Abstract: A knowledge injection model for generative commonsense reasoning. In examples, an encoder-decoder model is used to generate a model output (204) that is a plausible description for a set of concepts. A prototype (218) is generated from an in-domain or out-of-domain knowledge corpus, which is further used as input (202) for the encoder-decoder model. Concept input tokens and prototype input tokens are scaled to limit potential skew that may be introduced by the prototype (218). Additionally, position indicators are generated for each input token, which indicate the relative position of each respective input token as compared to other input tokens. As such, when decoding the scaled encoded input tokens, the decoder (214) may be more attuned to the scenario bias that is introduced by the prototype (218) when generating a model output (204). Thus, the encoder-decoder model need not rely solely on the set of concepts when generating the model output (204).
    Type: Application
    Filed: November 12, 2020
    Publication date: December 7, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Jian JIAO, Yeyun GONG, Nan DUAN, Yameng HUANG, Ruofei ZHANG, Ming ZHOU
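
A small sketch of the prototype-injection idea in publication 20230394333: concept tokens and prototype tokens are concatenated, the prototype tokens are down-scaled to limit the skew they introduce, and each token receives a position indicator. The 0.5 scaling factor, the random embeddings, and the prototype flag are assumptions, not values from the application.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

concepts = ["dog", "frisbee", "catch", "throw"]
prototype = ["a", "man", "throws", "a", "frisbee", "and", "his", "dog", "catches", "it"]

def embed(tokens):
    """Stand-in embedding lookup for the sketch."""
    return rng.normal(size=(len(tokens), dim))

concept_emb = embed(concepts)
proto_emb = 0.5 * embed(prototype)      # scale prototype tokens to limit skew

# Position indicators: relative position of every token within the full input,
# plus a flag distinguishing concept tokens from prototype tokens.
tokens = concepts + prototype
positions = np.arange(len(tokens))
is_prototype = np.array([0] * len(concepts) + [1] * len(prototype))

encoder_input = np.concatenate([concept_emb, proto_emb], axis=0)
print(encoder_input.shape, positions, is_prototype)
```
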
  • Patent number: 11834755
    Abstract: The present application provides a lithium niobate having a p-type nanowire region or an n-type nanowire region and a method for preparing the same. The method includes heating and then cooling a multi-domain lithium niobate crystal to confine hydrogen ions of the multi-domain lithium niobate crystal in domain wall regions; and poling the multi-domain lithium niobate crystal that has been heated by applying a voltage, to reverse a direction of polarization of one or more domains of the multi-domain lithium niobate crystal. The lithium niobate includes a lithium niobate crystal and a p-type nanowire region or an n-type nanowire region located in the lithium niobate crystal and adjacent to a surface of the lithium niobate crystal. The present application also provides a method for converting the charge carrier type of the lithium niobate nanowire region.
    Type: Grant
    Filed: April 25, 2021
    Date of Patent: December 5, 2023
    Assignee: NANKAI UNIVERSITY
    Inventors: Guo-Quan Zhang, Xiao-Jie Wang, Yue-Jian Jiao, Fang Bo, Jing-Jun Xu
  • Publication number: 20230385315
    Abstract: Systems and methods are provided for generating a keyword sequence from an input query. A first text sequence corresponding to an input query may be received and encoded into a source sequence representation using an encoder of a machine learning model. A keyword sequence may then be generated from the source sequence representation using a decoder of the machine learning model. The decoder may generate a modified generation score for a plurality of prediction tokens, wherein the modified generation score is based on the respective prediction token generation score and a maximum generation score for a suffix of each prediction token. The decoder may then select the prediction token of the plurality of prediction tokens based on the modified generation score, and add the selected prediction token to the previously decoded partial hypothesis provided by the decoder.
    Type: Application
    Filed: October 14, 2020
    Publication date: November 30, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Jian JIAO, Yeyun GONG, Nan DUAN, Ruofei ZHANG, Ming ZHOU
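
The decoding rule in publication 20230385315 combines each prediction token's own generation score with the maximum score achievable by a suffix of that token. The sketch below implements that idea over a toy keyword trie; the trie, the scores, and the greedy (rather than beam) search are assumptions made for brevity.

```python
from dataclasses import dataclass, field

@dataclass
class TrieNode:
    score: float = 0.0                              # generation score of this token
    children: dict = field(default_factory=dict)    # next token -> TrieNode

def suffix_max(node: TrieNode) -> float:
    """Maximum generation score reachable in any suffix below `node`."""
    if not node.children:
        return 0.0
    return max(child.score + suffix_max(child) for child in node.children.values())

def step(partial_hypothesis, node: TrieNode):
    """One decoding step: extend with the child token that has the best modified score."""
    best_token, best_child = max(
        node.children.items(),
        key=lambda kv: kv[1].score + suffix_max(kv[1]),   # modified generation score
    )
    return partial_hypothesis + [best_token], best_child

# Toy keyword trie with per-token generation scores.
root = TrieNode(children={
    "cheap": TrieNode(0.6, {"flights": TrieNode(0.9)}),
    "best":  TrieNode(0.8, {"hotel": TrieNode(0.3)}),
})

hypothesis, node = [], root
while node.children:
    hypothesis, node = step(hypothesis, node)
print(hypothesis)   # -> ['cheap', 'flights']: the strong suffix lifts the modified score
```
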
  • Publication number: 20230334350
    Abstract: A computing device including a processor configured to receive data indicating, for a query category within a sampled time period, a matching density defined as a number of matches per query. The processor may generate a structural causal model (SCM) of the data within the sampled time period. The SCM may include a plurality of structural equations. Based at least in part on the plurality of structural equations, the processor may estimate a structural equation error value for the matching density. The processor may update a value of a target SCM output variable to a counterfactual updated value. Based at least in part on the SCM, the counterfactual updated value, and the structural equation error value, the processor may compute a predicted matching density when the target SCM output variable has the counterfactual updated value. The processor may output the predicted matching density.
    Type: Application
    Filed: April 14, 2022
    Publication date: October 19, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Hua LI, Amit SHARMA, Jian JIAO, Ruofei ZHANG
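
Publication 20230334350 follows the standard abduction-action-prediction recipe for counterfactuals: recover the structural-equation error from the observed data, set the target variable to its counterfactual updated value, and recompute the outcome. Below is a toy sketch with a single, assumed linear structural equation (the real SCM contains several structural equations).

```python
# Observed data for one query category within the sampled time period.
budget_observed = 100.0            # target SCM output variable (hypothetical choice)
density_observed = 2.4             # matching density: matches per query

# Assumed structural equation: density = a * budget + b + error
a, b = 0.02, 0.1

# 1. Abduction: recover the structural-equation error from the observation.
error = density_observed - (a * budget_observed + b)

# 2. Action: update the target variable to its counterfactual value.
budget_counterfactual = 150.0

# 3. Prediction: recompute the matching density under the intervention, reusing the error.
density_counterfactual = a * budget_counterfactual + b + error
print(round(density_counterfactual, 3))   # -> 3.4
```
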
  • Publication number: 20230267308
    Abstract: Knowledge graphs can greatly improve the quality of content recommendation systems. There is a broad variety of knowledge graphs in this domain, including clicked user-ad graphs, clicked query-ad graphs, keyword-display URL graphs, etc. A hierarchical Transformer model learns entity embeddings in knowledge graphs. The model consists of two different Transformer blocks, where the bottom block generates relation-dependent embeddings for the source entity and its neighbors, and the top block aggregates the outputs from the bottom block to produce the target entity embedding. To balance the information from contextual entities and the source entity itself, a masked entity model (MEM) task is combined with a link prediction task in model training.
    Type: Application
    Filed: May 4, 2023
    Publication date: August 24, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Jian JIAO, Xiaodong LIU, Ruofei ZHANG, Jianfeng GAO
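
A hedged sketch of the two-level architecture in publication 20230267308: a bottom Transformer block produces relation-dependent embeddings for the source entity and its neighbors, and a top block aggregates them into the target entity embedding, which is scored against candidates for link prediction. The dimensions, the additive relation conditioning, and the omission of the masked entity model (MEM) objective are simplifications.

```python
import torch
import torch.nn as nn

dim, n_neighbors = 32, 4
bottom = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
top = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)

source = torch.randn(1, 1, dim)                    # source entity embedding
neighbors = torch.randn(1, n_neighbors, dim)       # contextual entity embeddings
relations = torch.randn(1, n_neighbors + 1, dim)   # relation embedding per entity pair

# Bottom block: relation-dependent embeddings (here: add the relation, then encode).
bottom_in = torch.cat([source, neighbors], dim=1) + relations
bottom_out = bottom(bottom_in)

# Top block: aggregate the bottom outputs into the predicted target entity embedding.
target_pred = top(bottom_out).mean(dim=1)          # (1, dim)

# Link prediction: score the predicted embedding against candidate target entities.
candidates = torch.randn(5, dim)
link_scores = target_pred @ candidates.T
print(link_scores.shape)                           # (1, 5)
```
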
  • Publication number: 20230260788
    Abstract: An embodiment of an apparatus may include a substrate and a semiconductor structure disposed on the substrate, where the semiconductor structure comprises a plurality of layers of material and where at least one layer of the plurality of layers of material comprises carbon-nitride-carbon (CNC). Other embodiments are disclosed and claimed.
    Type: Application
    Filed: February 14, 2022
    Publication date: August 17, 2023
    Applicant: Intel Corporation
    Inventors: Huy Cao, Hong Li, Jian Jiao, Xiandong Yang, Honore Djieutedjeu, Jean Claude Chokomakoua, Ram Raju, Bharat Krishnan
  • Patent number: 11676001
    Abstract: Knowledge graphs can greatly improve the quality of content recommendation systems. There is a broad variety of knowledge graphs in this domain, including clicked user-ad graphs, clicked query-ad graphs, keyword-display URL graphs, etc. A hierarchical Transformer model learns entity embeddings in knowledge graphs. The model consists of two different Transformer blocks, where the bottom block generates relation-dependent embeddings for the source entity and its neighbors, and the top block aggregates the outputs from the bottom block to produce the target entity embedding. To balance the information from contextual entities and the source entity itself, a masked entity model (MEM) task is combined with a link prediction task in model training.
    Type: Grant
    Filed: November 9, 2020
    Date of Patent: June 13, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jian Jiao, Xiaodong Liu, Ruofei Zhang, Jianfeng Gao
  • Publication number: 20230114440
    Abstract: Methods, systems, and devices supporting configurable resistivities for lines in a memory device, such as access lines in a memory array, are described. For example, metal lines at different levels of a memory device may be oxidized to different extents in order for the lines at different levels of the memory device to have different resistivities. This may allow the resistivity of lines to be tuned on a level-by-level basis without altering the fabrication techniques and related parameters used to initially form the lines at the different levels, which may have benefits related to at least reduced cost and complexity. Lines may be oxidized to a controlled extent using either a dry or wet process.
    Type: Application
    Filed: October 21, 2022
    Publication date: April 13, 2023
    Inventors: Koushik Banerjee, Isaiah O. Gyan, Robert Cassel, Jian Jiao, William L. Cooper, Jason R. Johnson, Michael P. O'Toole
  • Publication number: 20230114458
    Abstract: A method for reminder object operation includes: displaying a dialogue page, the dialogue page including an information editing area and an object display interface, the object display interface including identification information of a plurality of reminder objects and operation controls corresponding to the plurality of reminder objects; in response to a first operation instruction triggered based on a first operation control, displaying a first reminder object corresponding to the first operation control in the information editing area, the object display interface being located in the dialogue page; and in response to a second operation instruction triggered based on a second operation control, displaying a second reminder object corresponding to the second operation control in the information editing area, the object display interface being located in the dialogue page.
    Type: Application
    Filed: April 4, 2022
    Publication date: April 13, 2023
    Applicant: BEIJING DAJIA INTERNET INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Xuechun Dong, Xiyu Li, Jiaxin Chen, Bolin Zhang, Jian Jiao
  • Patent number: 11619931
    Abstract: An interface integration method for an AGV job automatic scheduling system and an MES system includes an AGV job automatic scheduling system unit, an MES system unit and a data transmission and processing unit. The data transmission and processing unit performs interface integration through a data dictionary that includes multiple data sets. Based on this standardized data dictionary integration method, the relevant data from the factory's manufacturing process are classified and stored in the respective data sets. This greatly reduces the non-standard, customized data handling otherwise needed when the interface of the MES system unit is integrated with the interface of the AGV job automatic scheduling system unit, thereby facilitating the seamless and standardized integration of the MES system and the AGV system in digital workshops or smart factories and enabling interconnection and interoperability.
    Type: Grant
    Filed: June 10, 2022
    Date of Patent: April 4, 2023
    Assignee: MACHINERY TECHNOLOGY DEVELOPMENT CO., LTD
    Inventors: Sheng Zhang, Bin Xu, Xiangzhen Kong, Jian Jiao
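
To illustrate the data-dictionary idea in patent 11619931, the sketch below stores manufacturing data in a few shared, standardized data sets that both the MES unit and the AGV scheduling unit read and write; the set names and fields are invented for the example and are not taken from the patent.

```python
# Standardized data dictionary: several data sets shared by both systems.
data_dictionary = {
    "production_orders": [
        {"order_id": "PO-1001", "product": "gearbox", "quantity": 20},
    ],
    "transport_tasks": [
        {"task_id": "T-001", "order_id": "PO-1001",
         "from_station": "warehouse", "to_station": "assembly-3"},
    ],
    "agv_status": [
        {"agv_id": "AGV-07", "state": "idle", "location": "charging-bay"},
    ],
}

def mes_publish_order(order):
    """MES unit writes into the shared data set instead of a custom point-to-point interface."""
    data_dictionary["production_orders"].append(order)

def agv_fetch_tasks():
    """AGV scheduling unit reads the same standardized data set."""
    return list(data_dictionary["transport_tasks"])

mes_publish_order({"order_id": "PO-1002", "product": "gearbox", "quantity": 5})
print(len(data_dictionary["production_orders"]), agv_fetch_tasks()[0]["task_id"])
```
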
  • Publication number: 20230081624
    Abstract: A training technique trains a neural network having sparsely-activated sub-networks. It does so by processing plural batches of training data in two respective passes of the neural network, yielding first prediction information and second prediction information. For each batch, the technique randomly assigns different sub-networks in the first and second passes of the neural network to process the batch. Over the course of training, the technique attempts to minimize loss information, which describes the difference between the first prediction information and ground-truth information, and the difference between the second prediction information and the ground-truth information. Simultaneously, the technique attempts to minimize divergence information, which describes the divergence of the first prediction information from the second prediction information (and vice versa).
    Type: Application
    Filed: October 11, 2021
    Publication date: March 16, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Jian JIAO, Xiaodong LIU, Jianfeng GAO, Ruofei ZHANG
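
A sketch of the two-pass objective in publication 20230081624, assuming each pass routes the batch to a randomly chosen expert sub-network and that the divergence term is a symmetric KL between the two passes' predictions; the layer sizes and single-expert routing are illustrative.

```python
import torch
import torch.nn.functional as F

n_experts, dim, n_classes, batch = 4, 16, 3, 8
experts = torch.nn.ModuleList(torch.nn.Linear(dim, n_classes) for _ in range(n_experts))

def forward_pass(x):
    """Sparsely-activated forward: randomly pick one expert sub-network for the batch."""
    expert = experts[torch.randint(n_experts, (1,)).item()]
    return expert(x)

x = torch.randn(batch, dim)
labels = torch.randint(n_classes, (batch,))

logits1, logits2 = forward_pass(x), forward_pass(x)     # two passes, different sub-networks

# Loss information: both passes should match the ground truth.
loss = F.cross_entropy(logits1, labels) + F.cross_entropy(logits2, labels)

# Divergence information: the two passes should also agree with each other.
p1, p2 = F.log_softmax(logits1, dim=-1), F.log_softmax(logits2, dim=-1)
divergence = (F.kl_div(p1, p2, log_target=True, reduction="batchmean")
              + F.kl_div(p2, p1, log_target=True, reduction="batchmean"))

(loss + divergence).backward()
print(float(loss), float(divergence))
```
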
  • Publication number: 20230004588
    Abstract: A training system produces a resource-efficient machine-trained model via a training architecture that employs plural processing paths. Some of the processing paths incorporate the use of auxiliary information that imparts external knowledge about source items being processed. The training architecture also employs contrastive learning that operates at different respective levels within the training architecture. For instance, the training architecture uses encoder-level contrastive learning to compare output information generated by different encoders within the training architecture. The training architecture uses decoder-level contrastive learning to compare output information produced by different decoders within the training architecture. An inference-stage system performs an application task using the model produced by the training system.
    Type: Application
    Filed: July 1, 2021
    Publication date: January 5, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Jian JIAO, Yeyun GONG, Nan DUAN, Ruofei ZHANG
  • Publication number: 20220405416
    Abstract: A query string for an encrypted database storing a plurality of encrypted data records is received from a requestor. The query string is segmented to obtain at least one word. The at least one word is encrypted with an irreversible encryption algorithm to obtain at least one encrypted word. At least one first encrypted data item with a co-occurrence weight higher than a preset threshold is acquired based on the at least one encrypted word and a co-occurrence statistics model. The co-occurrence statistics model is built to provide co-occurrence weights, each indicating a probability that the at least one encrypted word appears in a first encrypted data item of the plurality of encrypted data records. At least one second encrypted data item corresponding to the at least one first encrypted data item is acquired from the plurality of encrypted data records.
    Type: Application
    Filed: June 14, 2021
    Publication date: December 22, 2022
    Inventors: Yi Liu, Shao Mei Ji, Peng Hui Jiang, Jin Shan Li, Jian Jiao Wen, Yuan Yuan Jia, Li Wei Wang