Patents by Inventor Chen Xing
Chen Xing has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250030650
Abstract: A packet processing method includes: allocating a portion of storage space in a memory circuit as a storage pool including first storage blocks; storing a packet in one of the first storage blocks when a data size of the packet is less than or equal to a predetermined value, and releasing the one of the first storage blocks to the storage pool after the packet is processed; requesting an increase in a number of the first storage blocks from a kernel when a number of remaining storage blocks in the first storage blocks that do not store data is less than a threshold value; and requesting a second storage block from the kernel to increase a data capacity of the storage pool to store the packet when the data size is greater than the predetermined value, and releasing the second storage block to the kernel after the packet is processed.
Type: Application
Filed: July 12, 2024
Publication date: January 23, 2025
Inventors: HAO-CHEN XING, TAO CUI, FENG-LIN WANG, MING-XU WANG
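The allocation policy in this abstract maps onto a familiar block-pool pattern: small packets reuse pooled blocks, and only oversized packets trigger a dedicated kernel allocation. The sketch below is an illustrative Python reading of that policy, assuming a 2 KiB block size, a growth threshold of 8 free blocks, and a placeholder process() hook; none of these names or values come from the patent.

```python
# Illustrative sketch of the pooled-allocation policy described above.
# Block size, pool growth, and the process() hook are assumptions.
BLOCK_SIZE = 2048        # "predetermined value": largest packet served from the pool
GROW_THRESHOLD = 8       # ask the kernel for more blocks below this many free blocks


def process(buf: bytearray, length: int) -> None:
    """Placeholder for whatever processing is applied to the stored packet."""


class PacketPool:
    def __init__(self, initial_blocks: int = 64):
        # "First storage blocks" reserved up front; stands in for kernel memory.
        self.free_blocks = [bytearray(BLOCK_SIZE) for _ in range(initial_blocks)]

    def _request_from_kernel(self, count: int, size: int = BLOCK_SIZE):
        # Stand-in for a kernel allocation request.
        return [bytearray(size) for _ in range(count)]

    def handle_packet(self, packet: bytes) -> None:
        if len(packet) <= BLOCK_SIZE:
            block = self.free_blocks.pop()              # take a first storage block
            block[: len(packet)] = packet
            process(block, len(packet))
            self.free_blocks.append(block)              # release back to the pool
            if len(self.free_blocks) < GROW_THRESHOLD:
                self.free_blocks += self._request_from_kernel(16)
        else:
            # Oversized packet: use a dedicated "second storage block" and let it
            # go out of scope (returned to the kernel) once processing finishes.
            block = self._request_from_kernel(1, size=len(packet))[0]
            block[: len(packet)] = packet
            process(block, len(packet))
```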
-
Patent number: 12198453
Abstract: Embodiments described herein provide methods and systems for open vocabulary object detection of images. Given a pre-trained vision-language model and an image-caption pair, an activation map may be computed in the image that corresponds to an object of interest mentioned in the caption. The activation map is then converted into a pseudo bounding-box label for the corresponding object category. The open vocabulary detector is then directly supervised by these pseudo box-labels, which enables training object detectors with no human-provided bounding-box annotations.
Type: Grant
Filed: January 28, 2022
Date of Patent: January 14, 2025
Assignee: Salesforce, Inc.
Inventors: Mingfei Gao, Chen Xing
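One plausible way to turn an activation map into a pseudo bounding-box label is to threshold the map and take the tight box around the surviving pixels. The abstract does not specify the conversion rule, so the threshold and the function below are assumptions for illustration only.

```python
import numpy as np

def pseudo_box_from_activation(act_map: np.ndarray, thresh: float = 0.5):
    """Convert a per-pixel activation map (H x W) into a pseudo bounding-box
    label: the tightest box around all pixels above a fraction of the peak."""
    ys, xs = np.where(act_map >= thresh * act_map.max())
    if ys.size == 0:
        return None                      # object not grounded in this image
    x1, y1, x2, y2 = xs.min(), ys.min(), xs.max(), ys.max()
    return (int(x1), int(y1), int(x2), int(y2))

# The pseudo box is then paired with the caption's object word and used as
# ordinary box supervision when training the open-vocabulary detector.
```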
-
Patent number: 12200892
Abstract: A structure for mounting a storage device to a server, providing fast installation and removal of the storage device, includes a carrier and a chassis. The carrier includes a frame and a rotatable handle with a cam on the axis of rotation. The cam is connected to the frame. The chassis includes an immovable limiting component. With the carrier mounted in the chassis, rotating the handle clockwise or counterclockwise pushes the carrier to move, locking or unlocking the carrier as the limiting component rests against the cam. The structure greatly improves the convenience of installing and removing the storage device. A computing device is also disclosed.
Type: Grant
Filed: August 25, 2022
Date of Patent: January 14, 2025
Assignee: Fulian Precision Electronics (Tianjin) Co., LTD.
Inventors: Han-Yu Li, Wen-Hu Lu, Jun Li, Chen Xing
-
Publication number: 20240330409
Abstract: Embodiments are directed to pre-training a transformer model using more parameters for sophisticated patterns (PSP++). The transformer model is divided into a held-out model and a main model. A forward pass and a backward pass are performed on the held-out model, where the forward pass determines the self-attention hidden states of the held-out model and the backward pass determines the loss of the held-out model. A forward pass on the main model is performed to determine the self-attention hidden states of the main model. The self-attention hidden states of the main model are concatenated with the self-attention hidden states of the held-out model. A backward pass is performed on the main model to determine a loss of the main model. The parameters of the held-out model are updated to reflect the loss of the held-out model, and parameters of the main model are updated to reflect the loss of the main model.
Type: Application
Filed: June 10, 2024
Publication date: October 3, 2024
Inventors: Chen Xing, Wenhao Liu, Chu Hong Hoi, Nitish Shirish Keskar, Caiming Xiong
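Read as a training loop, the abstract amounts to one optimizer step per sub-model, with the held-out model's hidden states folded into the main model's loss. The PyTorch-style sketch below assumes hypothetical model interfaces (a held-out model returning hidden states and a loss, and encode/loss_from_hidden methods on the main model); it is one reading of the abstract, not the patented implementation.

```python
import torch

def psp_training_step(held_out, main, batch, opt_h, opt_m):
    # Forward and backward pass on the held-out model first.
    h_hidden, h_loss = held_out(batch)      # self-attention hidden states, loss
    opt_h.zero_grad()
    h_loss.backward()
    opt_h.step()

    # Forward pass on the main model, then concatenate its self-attention
    # hidden states with the (detached) held-out hidden states.
    m_hidden = main.encode(batch)
    combined = torch.cat([m_hidden, h_hidden.detach()], dim=-1)

    # Backward pass on the main model using its own loss.
    m_loss = main.loss_from_hidden(combined, batch)
    opt_m.zero_grad()
    m_loss.backward()
    opt_m.step()
    return h_loss.item(), m_loss.item()
```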
-
Patent number: 12072955
Abstract: Embodiments are directed to pre-training a transformer model using more parameters for sophisticated patterns (PSP++). The transformer model is divided into a held-out model and a main model. A forward pass and a backward pass are performed on the held-out model, where the forward pass determines the self-attention hidden states of the held-out model and the backward pass determines the loss of the held-out model. A forward pass on the main model is performed to determine the self-attention hidden states of the main model. The self-attention hidden states of the main model are concatenated with the self-attention hidden states of the held-out model. A backward pass is performed on the main model to determine a loss of the main model. The parameters of the held-out model are updated to reflect the loss of the held-out model, and parameters of the main model are updated to reflect the loss of the main model.
Type: Grant
Filed: November 22, 2021
Date of Patent: August 27, 2024
Assignee: Salesforce, Inc.
Inventors: Chen Xing, Wenhao Liu, Chu Hong Hoi, Nitish Shirish Keskar, Caiming Xiong
-
Patent number: 12073178
Abstract: Embodiments are directed to a training framework for reducing gender bias in a pre-trained language model. To reduce gender bias, a gender-neutral dataset is generated. Next, the parameters of the pre-trained language model are frozen and do not change during a subsequent training phase. Because all the pre-trained parameters are frozen, forgetting of information from the original training data is minimized. New parameters are added to the language model; the new parameters may be associated with gender-related terms, such as profession names. In the subsequent training phase, the new parameters of the language model are trained using the gender-neutral dataset.
Type: Grant
Filed: January 27, 2022
Date of Patent: August 27, 2024
Assignee: Salesforce, Inc.
Inventors: Zahra Fatemi, Caiming Xiong, Wenhao Liu, Chen Xing
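The freeze-then-extend setup can be sketched with standard PyTorch parameter freezing plus a new trainable module. Where the new parameters attach and how the model consumes them is not stated in the abstract, so the embedding module, its name, and the Hugging Face-style config.hidden_size access below are illustrative assumptions.

```python
import torch.nn as nn

def prepare_debiasing_model(pretrained_lm: nn.Module, num_new_tokens: int):
    # Freeze every pre-trained parameter so original knowledge is preserved
    # and forgetting of the original training data is minimized.
    for p in pretrained_lm.parameters():
        p.requires_grad = False

    # Add new, trainable parameters (e.g., embeddings for profession-related
    # terms); only these are updated on the gender-neutral dataset.
    new_embeddings = nn.Embedding(num_new_tokens, pretrained_lm.config.hidden_size)
    pretrained_lm.add_module("debias_embeddings", new_embeddings)

    trainable = [p for p in pretrained_lm.parameters() if p.requires_grad]
    return pretrained_lm, trainable
```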
-
Patent number: 12039443
Abstract: A method includes receiving a training data set including a plurality of training data subsets. From two or more training data subsets in the training data set, the method includes selecting a support set of training examples and a query set of training examples. The method includes determining, using the classification model, a centroid value for each respective class. For each training example in the query set of training examples, the method includes generating, using the classification model, a query encoding, determining a class distance measure, determining a ground-truth distance, and updating parameters of the classification model. For each training example in the query set of training examples identified as being misclassified, the method further includes generating a standard deviation value, sampling a new query encoding, and updating parameters of the confidence model based on the new query encoding.
Type: Grant
Filed: October 11, 2022
Date of Patent: July 16, 2024
Assignee: GOOGLE LLC
Inventors: Sercan Omer Arik, Chen Xing, Zizhao Zhang, Tomas Jon Pfister
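The support/query structure, class centroids, and distance measures follow the familiar episodic (prototypical-network-style) recipe. The sketch below covers only that part; the confidence model, the standard deviation value, and the resampling for misclassified queries are omitted, and the encoder interface is an assumption.

```python
import torch

def episode_step(encoder, support_x, support_y, query_x, num_classes):
    # Class centroids ("centroid values") computed from support encodings.
    s_enc = encoder(support_x)                                          # [Ns, D]
    centroids = torch.stack(
        [s_enc[support_y == c].mean(dim=0) for c in range(num_classes)])  # [C, D]

    # Distance of each query encoding to every class centroid; the smallest
    # distance gives the prediction, and the distance to the true class is
    # the "ground-truth distance" used when updating the classification model.
    q_enc = encoder(query_x)                                            # [Nq, D]
    dists = torch.cdist(q_enc, centroids)                               # [Nq, C]
    preds = dists.argmin(dim=1)
    return q_enc, dists, preds
```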
-
Publication number: 20240185035
Abstract: Embodiments described herein provide a mechanism for replacing existing text encoders in text-to-image generation models with more powerful pre-trained language models. Specifically, a translation network is trained to map features from the pre-trained language model output into the space of the target text encoder. The training preserves the rich structure of the pre-trained language model while allowing it to operate within the text-to-image generation model. The resulting modularized text-to-image model receives a prompt and generates an image representing the features contained in the prompt.
Type: Application
Filed: January 31, 2023
Publication date: June 6, 2024
Inventors: Ning Yu, Can Qin, Chen Xing, Shu Zhang, Stefano Ermon, Caiming Xiong, Ran Xu
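A translation network of this kind is, at minimum, a learned map between two feature spaces. The small MLP below is an assumed architecture, and aligning its outputs with the original text encoder's features for the same prompts is only one plausible training objective; neither detail is taken from the abstract.

```python
import torch.nn as nn

class TranslationNetwork(nn.Module):
    """Maps pre-trained language model features into the feature space expected
    by the text-to-image model's original text encoder."""
    def __init__(self, lm_dim: int, enc_dim: int, hidden: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(lm_dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, enc_dim),
        )

    def forward(self, lm_features):
        return self.net(lm_features)

# Usage sketch: train the translator so that translated LM features match the
# original encoder's features for the same prompts, then plug the (LM +
# translator) pair into the generation model in place of the text encoder.
```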
-
Publication number: 20240169704
Abstract: Systems and methods for training a neural network based three-dimensional (3D) encoder for 3D classification are provided. A training dataset including a plurality of samples is received, wherein a first sample includes an image, a text, and a point cloud. An image encoder of a pretrained vision and language model is used to generate image representations for the image of the first sample. A text encoder of the pretrained vision and language model is used to generate text representations for the text of the first sample. The neural network based 3D encoder is used to generate 3D representations for the point cloud of the first sample. A loss objective is computed based on the image representations, text representations, and 3D representations. Parameters of the neural network based 3D encoder are updated based on the computed loss objective via backpropagation.
Type: Application
Filed: March 13, 2023
Publication date: May 23, 2024
Inventors: Le XUE, Chen XING, Juan Carlos NIEBLES DUQUE, Caiming XIONG, Ran XU, Silvio SAVARESE
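The abstract only says a loss objective is computed from the image, text, and 3D representations; a symmetric contrastive loss that pulls each point-cloud representation toward the image and text representations of the same sample is one plausible instantiation, sketched below with an assumed temperature value.

```python
import torch
import torch.nn.functional as F

def alignment_loss(img_feats, txt_feats, pc_feats, temperature: float = 0.07):
    # Normalize the per-sample image, text, and point-cloud representations.
    img = F.normalize(img_feats, dim=-1)
    txt = F.normalize(txt_feats, dim=-1)
    pc = F.normalize(pc_feats, dim=-1)
    labels = torch.arange(pc.size(0), device=pc.device)

    # Contrastive terms aligning each 3D representation with the image and
    # text representations of the same sample; only the 3D encoder is trained.
    loss_pc_img = F.cross_entropy(pc @ img.t() / temperature, labels)
    loss_pc_txt = F.cross_entropy(pc @ txt.t() / temperature, labels)
    return (loss_pc_img + loss_pc_txt) / 2
```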
-
Publication number: 20240160917
Abstract: A method of training a neural network based three-dimensional (3D) encoder is provided. A training dataset is generated using a plurality of 3D models of a 3D model dataset. To generate a first sample of the training dataset, an image generator with multi-view rendering is used to generate a plurality of image candidates of a first 3D model. A word is selected from metadata associated with the first 3D model. A language model is used to generate one or more text descriptions using the selected word and a plurality of prompts. A point cloud is generated by randomly sampling points in the 3D model. The first sample is generated to include a first image randomly selected from the plurality of image candidates, the one or more text descriptions, and the point cloud. The 3D encoder is trained using the training dataset including the first sample.
Type: Application
Filed: March 13, 2023
Publication date: May 16, 2024
Inventors: Le XUE, Chen XING, Juan Carlos NIEBLES DUQUE, Caiming XIONG, Ran XU, Silvio SAVARESE
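The per-sample generation step can be summarized in a few lines. Every interface below (renderer.render_views, metadata_words, sample_points, language_model.complete, the prompt format strings, and the point count) is a hypothetical stand-in used only to illustrate the flow described in the abstract.

```python
import random

def make_sample(model_3d, renderer, language_model, prompts, num_points=8192):
    # Multi-view image candidates rendered from the 3D model; pick one at random.
    image = random.choice(renderer.render_views(model_3d))

    # A word selected from the model's metadata seeds prompt-based descriptions.
    word = random.choice(model_3d.metadata_words())
    texts = [language_model.complete(p.format(word)) for p in prompts]

    # Point cloud produced by randomly sampling points in the 3D model.
    point_cloud = model_3d.sample_points(num_points)

    return {"image": image, "texts": texts, "point_cloud": point_cloud}
```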
-
Patent number: 11928597
Abstract: There is described a computer-implemented method and system for classifying images, the computer-implemented method comprising: receiving an image to be classified; generating a vector representation of the image to be classified using an image embedding method; comparing the vector representation of the image to predefined vector representations of predefined image categories; identifying a relevant category amongst the predefined image categories based on the comparison, the relevant category being associated with the image to be classified; and outputting the relevant category.
Type: Grant
Filed: March 21, 2023
Date of Patent: March 12, 2024
Assignee: ServiceNow Canada
Inventors: Pedro Oliveira Pinheiro, Chen Xing, Negar Rostamzadeh
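The comparison step reduces to a nearest-category search in embedding space. The abstract does not fix the comparison metric, so the cosine-similarity choice below is an assumption for illustration.

```python
import numpy as np

def classify(image_vec: np.ndarray, category_vecs: np.ndarray, category_names):
    # Cosine similarity between the image embedding and each predefined
    # category embedding; the most similar category is the relevant one.
    img = image_vec / np.linalg.norm(image_vec)
    cats = category_vecs / np.linalg.norm(category_vecs, axis=1, keepdims=True)
    scores = cats @ img
    return category_names[int(scores.argmax())]
```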
-
Publication number: 20240070394
Abstract: Embodiments described herein provide a mechanism that ensembles trainable soft prompts to transfer knowledge from source tasks under few-shot learning settings. Specifically, given a source task input from a source task training dataset, a set of soft prompts may be trained using a frozen pre-trained language model (PLM) on the large-scale source task training dataset. Each soft prompt in the set is then prepended to a target task input, and the frozen pre-trained language model generates a corresponding set of logits for predicting the classification of the target task input. An attention module is used to generate input-logit attention scores, which are used to compute a weighted linear combination of the logits given the attention scores. This weighted linear combination gives the final logits used to predict the final classification of the target task input.
Type: Application
Filed: January 27, 2023
Publication date: February 29, 2024
Inventors: Xiangyu Peng, Chen Xing, Prafulla Kumar Choubey, Chieng-Sheng Wu
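The ensembling step is a weighted sum of per-prompt logits. How the attention module computes its scores is not specified in the abstract, so the dot-product attention and the prompt-key representation below are assumptions.

```python
import torch
import torch.nn.functional as F

def ensemble_logits(input_repr, prompt_keys, per_prompt_logits):
    """input_repr: [D] target-input representation;
    prompt_keys: [P, D] one key per source-task soft prompt;
    per_prompt_logits: [P, C] logits produced with each prompt prepended."""
    # Input-logit attention scores between the target input and each prompt...
    attn = F.softmax(prompt_keys @ input_repr, dim=0)                 # [P]
    # ...used as weights in a linear combination of the per-prompt logits.
    return (attn.unsqueeze(-1) * per_prompt_logits).sum(dim=0)        # [C]
```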
-
Publication number: 20240070868
Abstract: Embodiments described herein provide an open-vocabulary instance segmentation framework that adopts a pre-trained vision-language model to develop a pipeline for detecting novel categories of instances.
Type: Application
Filed: January 25, 2023
Publication date: February 29, 2024
Inventors: Ning Yu, Vibashan Vishnukumar Sharmini, Chen Xing, Juan Carlos Niebles Duque, Ran Xu
-
Publication number: 20230237334
Abstract: There is described a computer-implemented method and system for classifying images, the computer-implemented method comprising: receiving an image to be classified; generating a vector representation of the image to be classified using an image embedding method; comparing the vector representation of the image to predefined vector representations of predefined image categories; identifying a relevant category amongst the predefined image categories based on the comparison, the relevant category being associated with the image to be classified; and outputting the relevant category.
Type: Application
Filed: March 21, 2023
Publication date: July 27, 2023
Applicant: ServiceNow Canada Inc.
Inventors: Pedro Oliveira PINHEIRO, Chen XING, Negar ROSTAMZADEH
-
Publication number: 20230154213
Abstract: Embodiments described herein provide methods and systems for open vocabulary object detection of images. Given a pre-trained vision-language model and an image-caption pair, an activation map may be computed in the image that corresponds to an object of interest mentioned in the caption. The activation map is then converted into a pseudo bounding-box label for the corresponding object category. The open vocabulary detector is then directly supervised by these pseudo box-labels, which enables training object detectors with no human-provided bounding-box annotations.
Type: Application
Filed: January 28, 2022
Publication date: May 18, 2023
Inventors: Mingfei Gao, Chen Xing
-
Patent number: 11645505
Abstract: There is described a computer-implemented method for generating a vector representation of an image, the computer-implemented method comprising: receiving a given image and semantic information about the given image; generating a first vector representation of the given image using an image embedding method; generating a second vector representation of the semantic information using a word embedding method; combining the first vector representation of the image to be embedded and the second vector representation of the semantic information together, thereby obtaining a modified vector representation for the image to be embedded; and outputting the modified vector representation.
Type: Grant
Filed: January 17, 2020
Date of Patent: May 9, 2023
Assignee: ServiceNow Canada Inc.
Inventors: Pedro Oliveira Pinheiro, Chen Xing, Negar Rostamzadeh
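The combination step fuses an image embedding with a word embedding of the semantic information. The abstract does not fix the combination rule, so the averaging of word vectors and the convex combination below are illustrative assumptions.

```python
import numpy as np

def modified_representation(image_vec: np.ndarray, word_vecs: np.ndarray,
                            alpha: float = 0.5) -> np.ndarray:
    # First vector: the image embedding. Second vector: here, the mean of the
    # word embeddings for the semantic information (e.g., tags for the image).
    semantic_vec = word_vecs.mean(axis=0)

    # One simple combination is a convex mix of the normalized vectors.
    img = image_vec / np.linalg.norm(image_vec)
    sem = semantic_vec / np.linalg.norm(semantic_vec)
    return alpha * img + (1 - alpha) * sem
```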
-
Patent number: 11645289
Abstract: A graph query is executed against a graph index that connects actors with objects through edges. A graph ranking model is obtained and results of the graph query are ranked, using the graph ranking model, based upon edge data available from edges in the graph that match the query.
Type: Grant
Filed: June 5, 2014
Date of Patent: May 9, 2023
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Torbjorn Helvik, Chen Xing, Oivind Wang, Bard Kvalheim, Nicolai Bodd
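The ranking step scores each query result from the data carried by its matching edge. The edge fields, the ranking model's score method, and the sort order below are hypothetical placeholders, since the abstract does not describe them.

```python
def rank_results(matching_edges, ranking_model):
    # Each matching edge carries edge data (e.g., edge type, weight, timestamp);
    # the graph ranking model turns that data into a score for the target object.
    scored = [(ranking_model.score(edge.data), edge.target) for edge in matching_edges]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [obj for _, obj in scored]
```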
-
Publication number: 20230120894
Abstract: A method includes receiving a training data set including a plurality of training data subsets. From two or more training data subsets in the training data set, the method includes selecting a support set of training examples and a query set of training examples. The method includes determining, using the classification model, a centroid value for each respective class. For each training example in the query set of training examples, the method includes generating, using the classification model, a query encoding, determining a class distance measure, determining a ground-truth distance, and updating parameters of the classification model. For each training example in the query set of training examples identified as being misclassified, the method further includes generating a standard deviation value, sampling a new query encoding, and updating parameters of the confidence model based on the new query encoding.
Type: Application
Filed: October 11, 2022
Publication date: April 20, 2023
Applicant: Google LLC
Inventors: Sercan Omer Arik, Chen Xing, Zizhao Zhang, Tomas Jon Pfister
-
Publication number: 20230104662
Abstract: Embodiments are directed to a training framework for reducing gender bias in a pre-trained language model. To reduce gender bias, a gender-neutral dataset is generated. Next, the parameters of the pre-trained language model are frozen and do not change during a subsequent training phase. Because all the pre-trained parameters are frozen, forgetting of information from the original training data is minimized. New parameters are added to the language model; the new parameters may be associated with gender-related terms, such as profession names. In the subsequent training phase, the new parameters of the language model are trained using the gender-neutral dataset.
Type: Application
Filed: January 27, 2022
Publication date: April 6, 2023
Inventors: Zahra Fatemi, Caiming Xiong, Wenhao Liu, Chen Xing
-
Patent number: 11615098
Abstract: A graph query is executed against a graph index that connects actors with objects through edges. A graph ranking model is obtained and results of the graph query are ranked, using the graph ranking model, based upon edge data available from edges in the graph that match the query.
Type: Grant
Filed: June 5, 2014
Date of Patent: March 28, 2023
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Torbjorn Helvik, Chen Xing, Oivind Wang, Bard Kvalheim, Nicolai Bodd