Patents by Inventor Yilin Shen

Yilin Shen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240117451
    Abstract: Positive reference spiked in collected sample for use in qualitatively and quantitatively detecting viral RNA.
    Type: Application
    Filed: March 10, 2021
    Publication date: April 11, 2024
    Inventors: Shuwei YANG, Liancheng HUANG, Feifei FENG, Longwen SU, Kun LIN, Can TANG, Chen LIANG, Yuanmei WANG, Yanqing CAI, Yilin PANG, Chuan SHEN, Zhixue YU
  • Publication number: 20240119077
    Abstract: A method of performing a multimodal task by using a multimodal model that includes a text encoder and a vision encoder may include obtaining a text feature from a query via the text encoder; obtaining an image feature from one or more input images via the vision encoder; and outputting a response to the query based on similarity between the text feature and the image feature, wherein weight vectors of the text encoder and the vision encoder are pruned and shared according to a sharing vector and a pruning vector that are generated by a hypernetwork, and wherein the hypernetwork and the multimodal model are jointly trained to minimize at least one of a difference between the weight vectors in the text encoder and the vision encoder, a difference between the weight vectors in different layers of the text encoder, and a number of parameters in the multimodal model.
    Type: Application
    Filed: September 14, 2023
    Publication date: April 11, 2024
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Shangqian GAO, Burak UZKENT, Yilin SHEN, Hongxia JIN
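The pruning-and-sharing scheme in this abstract can be sketched as follows. Everything below is illustrative, not from the patent: the "hypernetwork" is an untrained random linear map, and the sizes are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weight vectors for one matching layer of the two encoders.
text_w = rng.normal(size=8)
vision_w = rng.normal(size=8)

# Stand-in "hypernetwork": a random linear map from a layer embedding to
# logits; in the patent it is trained jointly with the multimodal model.
layer_emb = rng.normal(size=4)
hyper = rng.normal(size=(4, 16))
logits = layer_emb @ hyper
prune_logits, share_logits = logits[:8], logits[8:]

# Pruning vector: keep a weight only where its logit is positive.
prune_vec = (prune_logits > 0).astype(float)
# Sharing vector: where it is 1, the vision layer reuses the text weight.
share_vec = (share_logits > 0).astype(float)

pruned_text = text_w * prune_vec
shared_vision = np.where(share_vec == 1, pruned_text, vision_w * prune_vec)
```

Joint training would backpropagate through both vectors (via a relaxation such as Gumbel-softmax) so the hypernetwork learns which weights to drop and which to share.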
  • Publication number: 20240104309
    Abstract: A method includes receiving an input for a large language model (LLM) from a user. The method also includes generating one or more token embeddings based on the input. The method further includes generating one or more prompt embeddings based on the input using a contextual prompt generator (CPG), the one or more prompt embeddings representing new or updated information that is not contained in existing knowledge of the LLM. The method also includes providing the one or more token embeddings and the one or more prompt embeddings to the LLM. In addition, the method includes outputting a prediction based on the one or more token embeddings and the one or more prompt embeddings using the LLM, wherein the prediction reflects the new or updated information represented by the one or more prompt embeddings.
    Type: Application
    Filed: September 12, 2023
    Publication date: March 28, 2024
    Inventors: Yen-Chang Hsu, Harshavardhan Kamarthi, Yilin Shen, Hongxia Jin
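The data flow in this abstract (token embeddings plus generated prompt embeddings, both fed to the LLM) can be sketched in a few lines. The embedding table and the "CPG" below are hypothetical stand-ins, not the patent's trained components.

```python
import numpy as np

EMB_DIM = 16
rng = np.random.default_rng(1)

# Hypothetical embedding table; a real LLM would supply this.
table = {t: rng.normal(size=EMB_DIM) for t in ["who", "is", "the", "ceo"]}

def token_embeddings(tokens):
    return np.stack([table[t] for t in tokens])

def contextual_prompt_generator(tokens, n_prompts=2):
    # Stand-in CPG: derives prompt embeddings from the mean token
    # embedding; the trained CPG instead encodes new or updated
    # information absent from the LLM's existing knowledge.
    mean = token_embeddings(tokens).mean(axis=0)
    return np.stack([mean * (i + 1) for i in range(n_prompts)])

tokens = ["who", "is", "the", "ceo"]
tok_emb = token_embeddings(tokens)
prompt_emb = contextual_prompt_generator(tokens)

# Both embedding sets are provided to the LLM; here prompts are prepended.
llm_input = np.concatenate([prompt_emb, tok_emb], axis=0)
```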
  • Publication number: 20240080423
    Abstract: A method includes obtaining raw image data, where the raw image data includes data values each having most significant bits and least significant bits. The method also includes providing the raw image data to a trained machine learning model and generating processed image data using the trained machine learning model. The method further includes presenting an image based on the processed image data. The trained machine learning model is trained to modulate a feature map associated with the most significant bits of the data values of the raw image data based on the least significant bits of the data values of the raw image data in order to generate a fusion of the most significant bits and the least significant bits of the data values of the raw image data.
    Type: Application
    Filed: November 18, 2022
    Publication date: March 7, 2024
    Inventors: Wenbo Li, Zhipeng Mo, Yi Wei, Burak Uzkent, Qian Lou, Yilin Shen, Hongxia Jin
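The MSB/LSB split described in this abstract is plain bit arithmetic; the learned modulation can be imitated with a FiLM-style scale-and-shift. The modulation function below is an illustrative stand-in, not the patent's trained model.

```python
import numpy as np

def split_msb_lsb(raw, bits=16, msb_bits=8):
    # Split each raw value into its most and least significant bits.
    msb = raw >> (bits - msb_bits)
    lsb = raw & ((1 << (bits - msb_bits)) - 1)
    return msb, lsb

def modulate(msb_feature, lsb, scale=1e-3):
    # FiLM-style stand-in for the learned modulation: scale and shift
    # the MSB feature map using statistics of the LSBs.
    gamma = 1.0 + scale * lsb.mean()
    beta = scale * lsb.std()
    return gamma * msb_feature + beta

raw = np.array([[0x1234, 0xFF00], [0x00FF, 0x8001]], dtype=np.int64)
msb, lsb = split_msb_lsb(raw)
fused = modulate(msb.astype(float), lsb.astype(float))
```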
  • Publication number: 20240046946
    Abstract: A method includes obtaining, using at least one processing device, noisy speech signals and extracting, using the at least one processing device, acoustic features from the noisy speech signals. The method also includes receiving, using the at least one processing device, a predicted speech mask from a speech mask prediction model based on a first acoustic feature subset and receiving, using the at least one processing device, a predicted noise mask from a noise mask prediction model based on a second acoustic feature subset. The method further includes providing, using the at least one processing device, predicted speech features determined using the predicted speech mask and predicted noise features determined using the predicted noise mask to a filtering mask prediction model. In addition, the method includes generating, using the at least one processing device, a clean speech signal using a predicted filtering mask output by the filtering mask prediction model.
    Type: Application
    Filed: November 22, 2022
    Publication date: February 8, 2024
    Inventors: Chou-Chang Yang, Ching-Hua Lee, Rakshith Sharma Srinivasa, Yashas Malur Saidutta, Yilin Shen, Hongxia Jin
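The three-model pipeline in this abstract (speech mask, noise mask, then a filtering mask over the two predicted feature streams) can be sketched with toy stand-ins for the trained models; the masks below are arbitrary sigmoid heuristics, not the patent's networks.

```python
import numpy as np

rng = np.random.default_rng(2)
noisy = rng.normal(size=32)           # noisy speech signal (toy, 1-D)
acoustic = rng.normal(size=(32, 8))   # extracted acoustic features

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Stand-ins for the speech and noise mask prediction models, each fed
# a different subset of the acoustic features.
speech_mask = sigmoid(acoustic[:, :4].mean(axis=1))   # first feature subset
noise_mask = sigmoid(-acoustic[:, 4:].mean(axis=1))   # second feature subset

speech_feat = speech_mask * noisy     # predicted speech features
noise_feat = noise_mask * noisy       # predicted noise features

# Stand-in filtering mask model: keep frames where speech dominates.
filtering_mask = np.abs(speech_feat) / (np.abs(speech_feat) + np.abs(noise_feat) + 1e-8)
clean = filtering_mask * noisy
```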
  • Patent number: 11875231
    Abstract: An electronic device for complex task machine learning includes at least one memory and at least one processor coupled to the at least one memory. The at least one processor is configured to receive an unknown command for performing a task and generate a prompt regarding the unknown command. The at least one processor is also configured to receive one or more instructions in response to the prompt, where each of the one or more instructions provides information on performing at least a portion of the task. The at least one processor is further configured to determine at least one action for each one of the one or more instructions. In addition, the at least one processor is configured to create a complex action for performing the task based on the at least one action for each one of the one or more instructions.
    Type: Grant
    Filed: October 23, 2019
    Date of Patent: January 16, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Avik Ray, Yilin Shen, Hongxia Jin
  • Patent number: 11854528
    Abstract: An apparatus for detecting unsupported utterances in natural language understanding, includes a memory storing instructions, and at least one processor configured to execute the instructions to classify a feature that is extracted from an input utterance of a user as one of in-domain and out-of-domain (OOD) for a response to the input utterance, obtain an OOD score of the extracted feature, and identify whether the feature is classified as OOD. The at least one processor is further configured to execute the instructions to, based on the feature being identified to be classified as in-domain, identify whether the obtained OOD score is greater than a predefined threshold, and based on the OOD score being identified to be greater than the predefined threshold, re-classify the feature as OOD.
    Type: Grant
    Filed: August 13, 2021
    Date of Patent: December 26, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Yen-Chang Hsu, Yilin Shen, Avik Ray, Hongxia Jin
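The two-stage decision rule in this abstract reduces to a short function; the threshold value here is an illustrative placeholder.

```python
def resolve_domain(classified_in_domain, ood_score, threshold=0.5):
    """Resolve the final in-domain/OOD label for an utterance feature."""
    # Stage 1: the classifier's own in-domain / OOD label.
    if not classified_in_domain:
        return "OOD"
    # Stage 2: re-classify as OOD when the OOD score exceeds the threshold.
    if ood_score > threshold:
        return "OOD"
    return "in-domain"
```

This lets a confidently in-domain classification still be overturned when the separate OOD score disagrees, which is the safety net the abstract describes.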
  • Patent number: 11775815
    Abstract: An electronic device including a deep memory model includes at least one memory and at least one processor coupled to the at least one memory. The at least one processor is configured to receive input data to the deep memory model. The at least one processor is also configured to extract a history state of an external memory coupled to the deep memory model based on the input data. The at least one processor is further configured to update the history state of the external memory based on the input data. In addition, the at least one processor is configured to output a prediction based on the extracted history state of the external memory.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: October 3, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yilin Shen, Yue Deng, Avik Ray, Hongxia Jin
  • Publication number: 20230289590
    Abstract: A method of training a model includes configuring a first transformer for visual learning with a first set of weights, configuring a second transformer for textual learning with a second set of weights, adjusting at least the second set of weights based on minimizing a weight difference between the first set of weights and the second set of weights, replacing the first set of weights for the first transformer with the adjusted second set of weights, and updating the first transformer based on the adjusted second set of weights.
    Type: Application
    Filed: September 8, 2022
    Publication date: September 14, 2023
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Burak UZKENT, Vasili Ramanishka, Yilin Shen, Hongxia Jin
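The weight-sharing procedure in this abstract (adjust the textual weights to minimize their distance from the visual weights, then copy them back) can be sketched with plain gradient descent on the squared weight difference; sizes and learning rate are illustrative.

```python
import numpy as np

def share_weights(w_vision, w_text, lr=0.1, steps=50):
    # Nudge the text weights toward the vision weights by gradient
    # descent on ||w_text - w_vision||^2, then copy the adjusted text
    # weights back so both transformers share one set.
    w_text = w_text.copy()
    for _ in range(steps):
        w_text -= lr * 2 * (w_text - w_vision)
    return w_text, w_text.copy()   # adjusted text weights, new vision weights

rng = np.random.default_rng(3)
w_vision = rng.normal(size=6)
w_text = rng.normal(size=6)
w_text_new, w_vision_new = share_weights(w_vision, w_text)
```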
  • Patent number: 11741307
    Abstract: A method includes applying, by at least one processor, a natural language understanding (NLU) model to an input utterance in order to obtain initial slot probability distributions. The method also includes performing, by the at least one processor, a confidence calibration by applying a calibration probability distribution to the initial slot probability distributions in order to generate calibrated slot probability distributions. The calibration probability distribution has a higher number of dimensions than the initial slot probability distributions. The method further includes identifying, by the at least one processor, uncertainties associated with words in the input utterance based on the calibrated slot probability distributions. In addition, the method includes identifying, by the at least one processor, a new concept contained in the input utterance that is not recognized by the NLU model based on the identified uncertainties.
    Type: Grant
    Filed: October 20, 2020
    Date of Patent: August 29, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yilin Shen, Hongxia Jin
  • Patent number: 11721090
    Abstract: A recommendation method includes retrieving content consumption data including content consumed and content not consumed. Based on the content consumption data, a first piece of content not consumed is identified. A first feature of the first piece of content related to negative consumption of the first piece of content is determined. A first system is used to revise the first feature to a second feature. A second piece of content including the second feature is provided to an electronic device. The second piece of content is a revised instance of the first piece of content.
    Type: Grant
    Filed: July 20, 2018
    Date of Patent: August 8, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Yue Deng, Yilin Shen, Hongxia Jin
  • Patent number: 11720814
    Abstract: A recognition method includes retrieving an input including data of a first window size. The method further includes classifying the input based on comparison of warping distance of the input with a pruning threshold.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: August 8, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Yilin Shen, Yue Deng, Hongxia Jin
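Classification "based on comparison of warping distance with a pruning threshold" suggests dynamic time warping with early abandoning: once every entry in a DP row exceeds the threshold, the candidate can be rejected without finishing the computation. A minimal sketch, assuming squared-difference cost and nearest-template classification (both illustrative choices):

```python
import numpy as np

def dtw_distance(a, b, threshold=float("inf")):
    # DTW distance with early abandoning: if every entry in a row
    # already exceeds the pruning threshold, the final distance must
    # too, so we stop and reject the candidate.
    n, m = len(a), len(b)
    inf = float("inf")
    prev = np.full(m + 1, inf)
    prev[0] = 0.0
    for i in range(1, n + 1):
        cur = np.full(m + 1, inf)
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            cur[j] = cost + min(prev[j], cur[j - 1], prev[j - 1])
        if cur[1:].min() > threshold:
            return inf   # pruned: cannot beat the threshold
        prev = cur
    return prev[m]

def classify(x, templates, threshold):
    # Assign the label of the nearest template within the threshold.
    best, label = threshold, None
    for name, t in templates.items():
        d = dtw_distance(x, t, best)
        if d < best:
            best, label = d, name
    return label
```

Passing the best distance so far as the next call's threshold tightens the pruning as better matches are found.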
  • Publication number: 20230245435
    Abstract: A method includes obtaining a batch of training data including multiple paired image-text pairs and multiple unpaired image-text pairs, where each paired image-text pair and each unpaired image-text pair includes an image and a text. The method also includes training a machine learning model using the training data based on an optimization of a combination of losses. The losses include, for each paired image-text pair, (i) a first multi-modal representation loss based on the paired image-text pair and (ii) a second multi-modal representation loss based on two or more unpaired image-text pairs, selected from among the multiple unpaired image-text pairs, wherein each of the two or more unpaired image-text pairs includes either the image or the text of the paired image-text pair.
    Type: Application
    Filed: January 31, 2022
    Publication date: August 3, 2023
    Inventors: Changsheng Zhao, Burak Uzkent, Yilin Shen, Hongxia Jin
  • Patent number: 11681923
    Abstract: Intent determination based on one or more multi-model structures can include generating an output from each of a plurality of domain-specific models in response to a received input. The domain-specific models can comprise simultaneously trained machine learning models that are trained using a corresponding local loss metric for each domain-specific model and a global loss metric for the plurality of domain-specific models. The presence or absence of an intent corresponding to one or more domain-specific models can be determined by classifying the output of each domain-specific model.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: June 20, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Yu Wang, Yilin Shen, Yue Deng, Hongxia Jin
  • Publication number: 20230177332
    Abstract: A method includes accessing, using at least one processor of an electronic device, a machine learning model. The machine learning model is trained by directing a gradient direction of gradients to one or more flat local minima and using a dynamic learning rate for one or more additional tasks. The method also includes receiving, using the at least one processor, an input from an input source. The method further includes providing, using the at least one processor, the input to the machine learning model. The method also includes receiving, using the at least one processor, an output from the machine learning model. In addition, the method includes instructing, using the at least one processor, at least one action based on the output from the machine learning model.
    Type: Application
    Filed: December 2, 2022
    Publication date: June 8, 2023
    Inventors: Sima Behpour, Yilin Shen, Hongxia Jin
  • Publication number: 20230177338
    Abstract: A method includes obtaining, using a first electronic device, a weight matrix associated with a trained transformer model. The method also includes factorizing the weight matrix into a dictionary weight matrix and an intermediate matrix. The method further includes pruning the intermediate matrix to generate a sparse intermediate matrix. The method also includes fine-tuning the sparse intermediate matrix based on a training dataset to generate a fine-tuned sparse intermediate matrix. The method further includes determining an index matrix and a coefficient matrix based on the fine-tuned sparse intermediate matrix. In addition, the method includes deploying the dictionary weight matrix, the index matrix, and the coefficient matrix to a second electronic device without deploying the weight matrix to the second electronic device. A number of parameters in the dictionary weight matrix, the index matrix, and the coefficient matrix is smaller than a number of parameters in the weight matrix.
    Type: Application
    Filed: December 1, 2022
    Publication date: June 8, 2023
    Inventors: Qian Lou, Yen-Chang Hsu, Burak Uzkent, Ting Hua, Yilin Shen, Hongxia Jin
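The compression pipeline in this abstract can be sketched end to end: factorize the weight matrix, prune the intermediate matrix, encode it as index and coefficient matrices, and count deployed parameters. Truncated SVD is one illustrative factorization choice, and the fine-tuning step is skipped here.

```python
import numpy as np

rng = np.random.default_rng(4)
W = rng.normal(size=(16, 16))        # weight matrix of a trained model

# Factorize W ~= D @ S (truncated SVD as one simple factorization).
U, s, Vt = np.linalg.svd(W)
r = 4
D = U[:, :r] * s[:r]                 # dictionary weight matrix, 16 x 4
S = Vt[:r]                           # intermediate matrix, 4 x 16

# Prune the intermediate matrix: keep the top-k entries per column,
# then encode the survivors as an index matrix and a coefficient matrix.
k = 2
index = np.zeros((k, S.shape[1]), dtype=int)   # index matrix
coeff = np.zeros((k, S.shape[1]))              # coefficient matrix
for j in range(S.shape[1]):
    top = np.sort(np.argsort(np.abs(S[:, j]))[-k:])
    index[:, j] = top
    coeff[:, j] = S[top, j]

# Only D, index, and coeff are deployed -- fewer parameters than W.
deployed = D.size + index.size + coeff.size

# The second device reconstructs the sparse intermediate matrix.
S_sparse = np.zeros_like(S)
for j in range(S.shape[1]):
    S_sparse[index[:, j], j] = coeff[:, j]
W_approx = D @ S_sparse
```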
  • Patent number: 11669746
    Abstract: An electronic device for active learning includes at least one memory and at least one processor coupled to the at least one memory. The at least one processor is configured to select one or more entries from a data set including unlabeled data based on a similarity between the one or more entries and labeled data. The at least one processor is further configured to cause the one or more entries to be labeled.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: June 6, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yue Deng, Yilin Shen, Hongxia Jin
  • Publication number: 20230104491
    Abstract: A method includes receiving one or more training corpora for training a machine learning model having a plurality of encoder blocks, where each encoder block includes an attention layer and a feedforward network. The method also includes using the one or more training corpora to train an attention dictionary shared across the plurality of encoder blocks. Training the attention dictionary may include training attention parameters of the attention layer in each of the plurality of encoder blocks, and the attention parameters for a given encoder block among the plurality of encoder blocks may be a weighted combination of columns from the attention dictionary shared across the plurality of encoder blocks.
    Type: Application
    Filed: September 22, 2022
    Publication date: April 6, 2023
    Inventors: Qian Lou, Yilin Shen, Hongxia Jin, Ting Hua, Yen-Chang Hsu
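The parameter-sharing idea in this abstract (each block's attention parameters are a weighted combination of columns from one shared dictionary) can be illustrated with toy sizes; all dimensions below are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(5)
N_BLOCKS, DIM, N_ATOMS = 12, 16, 4

# One attention dictionary shared by all encoder blocks.
dictionary = rng.normal(size=(DIM * DIM, N_ATOMS))

# Each block stores only a small coefficient vector; its attention
# parameters are a weighted combination of the dictionary's columns.
coeffs = rng.normal(size=(N_BLOCKS, N_ATOMS))

def attention_params(block):
    return (dictionary @ coeffs[block]).reshape(DIM, DIM)

shared_params = dictionary.size + coeffs.size   # 16*16*4 + 12*4 = 1072
dense_params = N_BLOCKS * DIM * DIM             # 12*16*16 = 3072 if independent
```

The saving grows with the number of blocks, since each extra block adds only `N_ATOMS` coefficients rather than a full parameter matrix.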
  • Publication number: 20230107006
    Abstract: A method includes providing, using at least one processing device of an electronic device, input data to a machine learning model. The method also includes extracting, using the at least one processing device, features of the input data. The method further includes performing, using the at least one processing device, a geometric transformation of the features, where the geometric transformation is based on first and second parametric instance-dependent scalar functions. In addition, the method includes producing, using the at least one processing device, a predictive probability distribution based on the transformed features.
    Type: Application
    Filed: September 12, 2022
    Publication date: April 6, 2023
    Inventors: Junjiao Tian, Yen-Chang Hsu, Yilin Shen, Hongxia Jin
  • Publication number: 20230106716
    Abstract: In one embodiment, a method includes accessing an image and a natural-language question regarding the image and extracting, from the image, a first set of image features at a first level of granularity and a second set of image features at a second level of granularity. The method further includes extracting, from the question, a first set of text features at the first level of granularity and a second set of text features at the second level of granularity; generating a first output representing an alignment between the first set of image features and the first set of text features; generating a second output representing an alignment between the second set of image features and the second set of text features; and determining an answer to the question based on the first output and the second output.
    Type: Application
    Filed: September 16, 2022
    Publication date: April 6, 2023
    Inventors: Peixi Xiong, Yilin Shen, Hongxia Jin