Patents by Inventor Junyi Liu
Junyi Liu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250119349
Abstract: A wireless communication method for use in a first network function (NF) is disclosed. The method comprises transmitting, to a network repository function (NRF), an NF update message comprising planned removal information which indicates a planned removal of the first NF.
Type: Application
Filed: December 18, 2024
Publication date: April 10, 2025
Applicant: ZTE CORPORATION
Inventors: Zhijun LI, Jinguo ZHU, Junyi LIU
-
Patent number: 12222922
Abstract: A system comprising: memory storing a data structure comprising a plurality of items, each item comprising a key-value pair; a writer arranged to perform a plurality of write operations each to write a respective item, either a new item being added to the data structure or an existing item being modified in the data structure; and a reader configured to perform a group-read operation to read from the data structure any items having keys in a specified range. The writer is configured to maintain a global write version and the reader is configured to maintain a global read version.
Type: Grant
Filed: June 10, 2022
Date of Patent: February 11, 2025
Assignee: Microsoft Technology Licensing, LLC
Inventors: Aleksandar Dragojevic, Junyi Liu, Antonios Katsarakis
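The global write-version/read-version scheme described in this abstract can be sketched roughly as follows. This is a hypothetical, simplified illustration (names such as `VersionedStore` and the `publish` step are not from the patent): each write stamps the item with the current global write version, and a group read returns only items whose version is at or below the global read version, so in-flight writes stay invisible to readers.

```python
class VersionedStore:
    """Simplified sketch of a key-value store with a global write
    version (maintained by the writer) and a global read version
    (maintained by the reader)."""

    def __init__(self):
        self._items = {}          # key -> (version, value)
        self.write_version = 0    # advanced by the writer
        self.read_version = 0     # advanced when writes are published

    def write(self, key, value):
        # Stamp each new or modified item with the current write version.
        self.write_version += 1
        self._items[key] = (self.write_version, value)

    def publish(self):
        # Make completed writes visible to group reads.
        self.read_version = self.write_version

    def group_read(self, lo, hi):
        # Group-read: items with keys in [lo, hi] whose version is
        # visible at the current read version.
        return {k: v for k, (ver, v) in self._items.items()
                if lo <= k <= hi and ver <= self.read_version}
```

A reader that scans a key range before `publish` sees nothing of the in-flight writes; after `publish`, the whole batch becomes visible at once.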
-
Patent number: 12204854
Abstract: Techniques are described for training and/or utilizing sub-agent machine learning models to generate candidate dialog responses. In various implementations, a user-facing dialog agent (202, 302), or another component on its behalf, selects the candidate response which is closest to user-defined global priority objectives (318). Global priority objectives can include values (306) for a variety of dialog features such as emotion, confusion, objective-relatedness, personality, verbosity, etc. In various implementations, each machine learning model includes an encoder portion and a decoder portion. Each encoder portion and decoder portion can be a recurrent neural network (RNN) model, such as an RNN model that includes at least one memory layer, such as a long short-term memory (LSTM) layer.
Type: Grant
Filed: January 4, 2024
Date of Patent: January 21, 2025
Assignee: KONINKLIJKE PHILIPS N.V.
Inventors: Vivek Varma Datla, Sheikh Sadid Al Hasan, Aaditya Prakash, Oladimeji Feyisetan Farri, Tilak Raj Arora, Junyi Liu, Ashequl Qadir
-
Patent number: 12180671
Abstract: A skid steer loader includes a chassis frame, a cab, a main beam, and a bucket. The chassis frame is assembled with a walking system to form a movable chassis of the skid steer loader, and a power system of the skid steer loader is assembled to a tail portion of the chassis frame. The cab is arranged on a front portion of the chassis frame in an overturning manner; the cab is horizontally placed on the chassis frame when a telescopic support rod is in a retracted state and overturns forward when the telescopic support rod is in an extended state. A tail end of the main beam is movably hinged to the tail portion of the chassis frame by a four-bar linkage mechanism, and the bucket is assembled to a front end of the main beam.
Type: Grant
Filed: December 19, 2019
Date of Patent: December 31, 2024
Assignee: SUNWARD INTELLIGENT EQUIPMENT CO., LTD.
Inventors: Zeping Tan, Zhen Liu, Junyi Liu, Huiwen Hu, Xianghua Xie, Zhengyu Gan
-
Publication number: 20240292317
Abstract: The present disclosure provides a location service entity selection method and apparatus, an electronic device, and a readable storage medium, and relates to the technical field of communications. The method includes: acquiring service capability information of a candidate location service entity from a preset storage space according to location requirement information of a user equipment (UE), and determining a target location service entity according to the service capability information of the candidate location service entity.
Type: Application
Filed: June 24, 2022
Publication date: August 29, 2024
Inventors: Xiaoyong TU, Junyi LIU, Minya YE, Xiliang LIU, Fangting ZHENG
-
Publication number: 20240273080
Abstract: A system comprising: memory storing a data structure comprising a plurality of items, each item comprising a key-value pair; a writer arranged to perform a plurality of write operations each to write a respective item, either a new item being added to the data structure or an existing item being modified in the data structure; and a reader configured to perform a group-read operation to read from the data structure any items having keys in a specified range. The writer is configured to maintain a global write version and the reader is configured to maintain a global read version.
Type: Application
Filed: June 10, 2022
Publication date: August 15, 2024
Inventors: Aleksandar DRAGOJEVIC, Junyi LIU, Antonios KATSARAKIS
-
Publication number: 20240272941
Abstract: A system comprising: a producer of work items, a circular buffer for queueing the items, and a consumer of the items. Each slot in the buffer comprises a descriptor field and a sequence number field. The buffer also comprises a head field specifying the sequence number of the head slot, and a tail field specifying the sequence number of the tail slot. To enqueue a new item, the producer increments the tail field, and writes the new item to the slot that was the tail slot prior to the increment. The consumer tracks a next expected sequence number based on how many items it has consumed so far. To consume a next item from the work queue, the consumer polls the sequence number of the head slot to check whether it equals the expected sequence number, and on condition thereof consumes the next work item.
Type: Application
Filed: June 17, 2022
Publication date: August 15, 2024
Inventors: Aleksandar DRAGOJEVIC, Junyi LIU
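The enqueue/consume protocol described in this abstract can be sketched roughly as follows. This is a hypothetical, simplified single-threaded illustration (the name `SeqRing` and the details, such as omitting a buffer-full check, are not from the patent): each slot carries a descriptor and a sequence number, the producer claims a slot by bumping the tail, and the consumer polls the slot's sequence number until it matches the next sequence number it expects.

```python
class SeqRing:
    """Sketch of a circular buffer whose slots hold a descriptor and a
    sequence number; head/tail are tracked as sequence numbers."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = [{"descriptor": None, "seq": -1} for _ in range(capacity)]
        self.head = 0   # sequence number of the head slot
        self.tail = 0   # sequence number of the tail slot

    def enqueue(self, descriptor):
        # Producer claims the tail slot by incrementing the tail field,
        # then writes into the slot that was the tail before the increment.
        seq = self.tail
        self.tail += 1
        slot = self.slots[seq % self.capacity]
        slot["descriptor"] = descriptor
        slot["seq"] = seq   # writing the sequence number publishes the item

    def consume(self, expected_seq):
        # Consumer polls the head slot's sequence number; the item is
        # ready only when it equals the next expected sequence number.
        slot = self.slots[expected_seq % self.capacity]
        if slot["seq"] != expected_seq:
            return None     # not yet produced
        self.head = expected_seq + 1
        return slot["descriptor"]
```

In a real producer/consumer setting the sequence-number write would need a memory barrier so the descriptor is visible before the slot is published; this sketch omits all concurrency control.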
-
Publication number: 20240241873
Abstract: A writer writes items to leaf nodes of a tree, and a reader reads items from the leaf nodes. Each node comprises a respective first block and second block, the first block comprising a plurality of the items of the respective leaf sorted in order of key. When writing new items to a leaf, the writer writes the new items to the second block of the identified leaf node in the order in which they are written, rather than sorted in order of key. When reading one or more target items from a leaf, the reader searches the leaf for the one or more target items based on a) the order of the items as already sorted in the first block and b) the reader sorting the items of the second block by key relative to the items of the first block.
Type: Application
Filed: May 18, 2022
Publication date: July 18, 2024
Inventors: Aleksandar DRAGOJEVIC, Junyi LIU
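The two-block leaf layout described in this abstract can be sketched roughly as follows. This is a hypothetical, simplified illustration (the name `TwoBlockLeaf` and the dict-based merge are not from the patent): the writer appends new items to the second block in arrival order, keeping the write path cheap, and the reader sorts the second block by key at search time and merges it with the pre-sorted first block.

```python
class TwoBlockLeaf:
    """Sketch of a leaf node with a key-sorted first block and an
    append-ordered second block."""

    def __init__(self, items):
        # First block: (key, value) pairs stored sorted by key.
        self.first = sorted(items)
        # Second block: new items kept in the order they were written.
        self.second = []

    def write(self, key, value):
        # Writer appends in arrival order -- no re-sorting on the write path.
        self.second.append((key, value))

    def read_range(self, lo, hi):
        # Reader merges the two blocks: the first block is already
        # sorted; the second block is sorted by key here, at read time.
        # Items from the second block override first-block items with
        # the same key (sorted() is stable, so among duplicates in the
        # second block the latest write wins).
        merged = dict(self.first)
        merged.update(sorted(self.second))
        return {k: v for k, v in sorted(merged.items()) if lo <= k <= hi}
```

A production version would periodically fold the second block into the first; this sketch only shows the read/write asymmetry the abstract describes.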
-
Patent number: 12017190
Abstract: Composite hollow fiber gas separation membranes with improved permeance and separation layer adhesion are manufactured by dipping a hollow fiber membrane substrate in a pre-coat layer coating composition followed by drying, to thereby provide a pre-coated substrate, and dipping the pre-coated substrate in a separation layer coating composition followed by drying, to thereby provide the composite hollow fiber gas separation membranes. The pre-coating composition includes a first polymer dissolved in a first solvent and the separation layer composition includes a second polymer dissolved in a second solvent. The first and second polymers are the same or different, each of the first and second polymers is at least 1 wt % soluble in a same third solvent, the first and second solvents are the same or different, the first and third solvents are the same or different, and the second and third solvents are the same or different.
Type: Grant
Filed: October 10, 2022
Date of Patent: June 25, 2024
Assignee: L'Air Liquide, Société Anonyme pour l'Etude et l'Exploitation des Procédés Georges Claude
Inventors: Junyi Liu, Sudhir Kulkarni, Raja Swaidan, Megha Sharma
-
Publication number: 20240143921
Abstract: Techniques are described for training and/or utilizing sub-agent machine learning models to generate candidate dialog responses. In various implementations, a user-facing dialog agent (202, 302), or another component on its behalf, selects the candidate response which is closest to user-defined global priority objectives (318). Global priority objectives can include values (306) for a variety of dialog features such as emotion, confusion, objective-relatedness, personality, verbosity, etc. In various implementations, each machine learning model includes an encoder portion and a decoder portion. Each encoder portion and decoder portion can be a recurrent neural network (RNN) model, such as an RNN model that includes at least one memory layer, such as a long short-term memory (LSTM) layer.
Type: Application
Filed: January 4, 2024
Publication date: May 2, 2024
Inventors: Vivek Varma DATLA, Sheikh Sadid AL HASAN, Aaditya PRAKASH, Oladimeji Feyisetan FARRI, Tilak Raj ARORA, Junyi LIU, Ashequl QADIR
-
Publication number: 20240116009
Abstract: Composite hollow fiber gas separation membranes with improved permeance and separation layer adhesion are manufactured by dipping a hollow fiber membrane substrate in a pre-coat layer coating composition followed by drying, to thereby provide a pre-coated substrate, and dipping the pre-coated substrate in a separation layer coating composition followed by drying, to thereby provide the composite hollow fiber gas separation membranes. The pre-coating composition includes a first polymer dissolved in a first solvent and the separation layer composition includes a second polymer dissolved in a second solvent. The first and second polymers are the same or different, each of the first and second polymers is at least 1 wt % soluble in a same third solvent, the first and second solvents are the same or different, the first and third solvents are the same or different, and the second and third solvents are the same or different.
Type: Application
Filed: October 10, 2022
Publication date: April 11, 2024
Applicant: L'Air Liquide, Societe Anonyme pour l'Etude et l'Exploitation des Procedes Georges Claude
Inventors: Junyi LIU, Sudhir KULKARNI, Raja SWAIDAN, Megha SHARMA
-
Patent number: 11868720
Abstract: Techniques are described for training and/or utilizing sub-agent machine learning models to generate candidate dialog responses. In various implementations, a user-facing dialog agent (202, 302), or another component on its behalf, selects the candidate response which is closest to user-defined global priority objectives (318). Global priority objectives can include values (306) for a variety of dialog features such as emotion, confusion, objective-relatedness, personality, verbosity, etc. In various implementations, each machine learning model includes an encoder portion and a decoder portion. Each encoder portion and decoder portion can be a recurrent neural network (RNN) model, such as an RNN model that includes at least one memory layer, such as a long short-term memory (LSTM) layer.
Type: Grant
Filed: January 16, 2020
Date of Patent: January 9, 2024
Assignee: KONINKLIJKE PHILIPS N.V.
Inventors: Vivek Varma Datla, Sheikh Sadid Al Hasan, Aaditya Prakash, Oladimeji Feyisetan Farri, Tilak Raj Arora, Junyi Liu, Ashequl Qadir
-
Patent number: 11822605
Abstract: A system (1000) for automated question answering, including: a semantic space (210) generated from a corpus of questions and answers; a user interface (1030) configured to receive a question; and a processor (1100) comprising: (i) a question decomposition engine (1050) configured to decompose the question into a domain, a keyword, and a focus word; (ii) a question similarity generator (1060) configured to identify one or more questions in the semantic space using the decomposed question; (iii) an answer extraction and ranking engine (1080) configured to: extract, from the semantic space, answers associated with the one or more identified questions; and identify one or more of the extracted answers as a best answer; and (iv) an answer tuning engine (1090) configured to fine-tune the identified best answer using one or more of the domain, keyword, and focus word; wherein the fine-tuned answer is provided to the user via the user interface.
Type: Grant
Filed: October 17, 2017
Date of Patent: November 21, 2023
Assignee: KONINKLIJKE PHILIPS N.V.
Inventors: Vivek Varma Datla, Sheikh Sadid Al Hasan, Oladimeji Feyisetan Farri, Junyi Liu, Kathy Mi Young Lee, Ashequl Qadir, Adi Prakash
-
Publication number: 20230237330
Abstract: Techniques are described herein for training and applying memory neural networks, such as "condensed" memory neural networks ("C-MemNN") and/or "average" memory neural networks ("A-MemNN"). In various embodiments, the memory neural networks may be iteratively trained using training data in the form of free form clinical notes and clinical reference documents. In various embodiments, during each iteration of the training, a so-called "condensed" memory state may be generated and used as part of the next iteration. Once trained, a free form clinical note associated with a patient may be applied as input across the memory neural network to predict one or more diagnoses or outcomes of the patient.
Type: Application
Filed: April 4, 2023
Publication date: July 27, 2023
Inventors: Aaditya PRAKASH, Sheikh Sadid AL HASAN, Oladimeji Feyisetan FARRI, Kathy Mi Young LEE, Vivek Varma DATLA, Ashequl QADIR, Junyi LIU
-
Patent number: 11670420
Abstract: Techniques are described herein for drawing conclusions using free form texts and external resources. In various embodiments, free form input data (202) may be segmented (504) into a plurality of input data segments. A first input data segment may be compared (510) with an external resource (304) to identify a first candidate conclusion. A reinforcement learning trained agent (310) may be applied (512) to make a first determination of whether to accept or reject the first candidate conclusion. Similar actions may be performed with a second input data segment to make a second determination of whether to accept or reject a second candidate conclusion. A final conclusion may be presented (522) based on the first and second determinations of the reinforcement learning trained agent with respect to at least the first candidate conclusion and the second candidate conclusion.
Type: Grant
Filed: April 3, 2018
Date of Patent: June 6, 2023
Assignee: Koninklijke Philips N.V.
Inventors: Yuan Ling, Sheikh Sadid Al Hasan, Oladimeji Feyisetan Farri, Vivek Varma Datla, Junyi Liu
-
Patent number: 11620506
Abstract: Techniques are described herein for training and applying memory neural networks, such as "condensed" memory neural networks ("C-MemNN") and/or "average" memory neural networks ("A-MemNN"). In various embodiments, the memory neural networks may be iteratively trained using training data in the form of free form clinical notes and clinical reference documents. In various embodiments, during each iteration of the training, a so-called "condensed" memory state may be generated and used as part of the next iteration. Once trained, a free form clinical note associated with a patient may be applied as input across the memory neural network to predict one or more diagnoses or outcomes of the patient.
Type: Grant
Filed: September 18, 2017
Date of Patent: April 4, 2023
Assignee: KONINKLIJKE PHILIPS N.V.
Inventors: Aaditya Prakash, Sheikh Sadid AL Hasan, Oladimeji Feyisetan Farri, Kathy Mi Young Lee, Vivek Varma Datla, Ashequl Qadir, Junyi Liu
-
Patent number: 11621075
Abstract: The described embodiments relate to systems, methods, and apparatus for providing a multimodal deep memory network (200) capable of generating patient diagnoses (222). The multimodal deep memory network can employ different neural networks, such as a recurrent neural network and a convolutional neural network, for creating embeddings (204, 214, 216) from medical images (212) and electronic health records (206). Connections between the input embeddings (204) and diagnosis embeddings (222) can be based on an amount of attention that was given to the images and electronic health records when creating a particular diagnosis. For instance, the amount of attention can be characterized by data (110) that is generated based on sensors that monitor eye movements of clinicians observing the medical images and electronic health records.
Type: Grant
Filed: September 5, 2017
Date of Patent: April 4, 2023
Assignee: KONINKLIJKE PHILIPS N.V.
Inventors: Sheikh Sadid Al Hasan, Siyuan Zhao, Oladimeji Feyisetan Farri, Kathy Mi Young Lee, Vivek Datla, Ashequl Qadir, Junyi Liu, Aaditya Prakash
-
Publication number: 20230018044
Abstract: A skid steer loader includes a chassis frame, a cab, a main beam, and a bucket. The chassis frame is assembled with a walking system to form a movable chassis of the skid steer loader, and a power system of the skid steer loader is assembled to a tail portion of the chassis frame. The cab is arranged on a front portion of the chassis frame in an overturning manner; the cab is horizontally placed on the chassis frame when a telescopic support rod is in a retracted state and overturns forward when the telescopic support rod is in an extended state. A tail end of the main beam is movably hinged to the tail portion of the chassis frame by a four-bar linkage mechanism, and the bucket is assembled to a front end of the main beam.
Type: Application
Filed: December 19, 2019
Publication date: January 19, 2023
Applicant: SUNWARD INTELLIGENT EQUIPMENT CO., LTD.
Inventors: Zeping TAN, Zhen LIU, Junyi LIU, Huiwen HU, Xianghua XIE, Zhengyu GAN
-
Patent number: 11544529
Abstract: Techniques described herein relate to semi-supervised training and application of stacked autoencoders and other classifiers for predictive and other purposes. In various embodiments, a semi-supervised model (108) may be trained for sentence classification, and may combine what is referred to herein as a "residual stacked de-noising autoencoder" ("RSDA") (220), which may be unsupervised, with a supervised classifier (218) such as a classification neural network (e.g., a multilayer perceptron, or "MLP"). In various embodiments, the RSDA may be a stacked denoising autoencoder that may or may not include one or more residual connections. If present, the residual connections may help the RSDA "remember" forgotten information across multiple layers. In various embodiments, the semi-supervised model may be trained with unlabeled data (for the RSDA) and labeled data (for the classifier) simultaneously.
Type: Grant
Filed: September 4, 2017
Date of Patent: January 3, 2023
Assignee: Koninklijke Philips N.V.
Inventors: Reza Ghaeini, Sheikh Sadid Al Hasan, Oladimeji Feyisetan Farri, Kathy Lee, Vivek Datla, Ashequl Qadir, Junyi Liu, Aaditya Prakash
-
Patent number: 11544587Abstract: A medical information retrieval system comprises a natural language processing system that processes a vocal user query to identify key words and phrases. These key words and phrases are provided to an inferencing engine that provides a set of knowledge-based inferences from medical knowledge sources, based on these key words and phrases. Thereafter, these knowledge-based inferences are provided to an information retrieval engine that retrieves a corresponding plurality of medical articles based on these knowledge-based inferences, and ranks each with respect to the knowledge-based inferences. A summary engine receives the ranked articles and creates a model based on the topical keywords and candidate sentences found in the highly ranked articles. A paraphrase engine processes the candidate sentences to provide a summary response based on a knowledge-based paraphrase model. An audio output device renders the summary report as the response to the user's original vocal query.Type: GrantFiled: September 25, 2017Date of Patent: January 3, 2023Assignee: Koninklijke Philips N.V.Inventors: Oladimeji Feyisetan Farri, Sheikh Al Hasan, Junyi Liu, Kathy Mi Young Lee, Vivek Varma Datla