Patents by Inventor Wan-Yi Lin
Wan-Yi Lin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250111243
Abstract: Methods and systems for training neural networks with federated learning. A portion of a server-maintained machine-learning model is transferred from a server to clients, yielding a plurality of local machine-learning models. At each client, the local models are trained with locally-stored data, including determining a respective cross entropy loss for each local model. Weights are updated for each local model and evaluated on a common dataset to obtain activation outputs for each layer. These outputs are transferred to the server without transferring the locally-stored data of the clients, whereupon they are permuted according to the respective updated weights to match a dimension of a selected client, yielding a matrix that is sent to each client for permuting the local models. The permuted weights are sent to the server, whereupon they are aggregated and transferred back to the clients for updating of the local models.
Type: Application
Filed: September 22, 2023
Publication date: April 3, 2025
Inventors: Augustine D. Saravanos, Filipe J. Cabrita Condessa, Wan-Yi Lin, Zhenzhen Li, Madan Ravi Ganesh
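The permutation-matching step can be sketched compactly. The snippet below is an illustrative numpy toy, not the patented method: hidden-unit activations from two clients on a common dataset are compared, and a greedy assignment recovers the permutation that aligns one client's units with the other's (all function and variable names are invented for illustration).

```python
import numpy as np

def match_permutation(acts_ref, acts_client):
    """Greedily find perm so that acts_client[:, perm] best matches acts_ref.
    acts_*: (num_common_samples, num_units) activations on a common dataset."""
    n = acts_ref.shape[1]
    sim = acts_ref.T @ acts_client   # sim[i, j]: ref unit i vs client unit j
    perm = np.full(n, -1)
    used = set()
    # assign reference units in order of their most confident match
    for i in np.argsort(-np.abs(sim).max(axis=1)):
        for j in np.argsort(-sim[i]):
            if j not in used:
                perm[i] = j
                used.add(j)
                break
    return perm

rng = np.random.default_rng(0)
acts_ref = rng.standard_normal((64, 5))   # reference client activations
shuffle = np.array([2, 0, 4, 1, 3])       # hidden-unit order differs per client
acts_client = acts_ref[:, shuffle]        # same function, permuted units
perm = match_permutation(acts_ref, acts_client)
aligned = acts_client[:, perm]            # now unit-for-unit comparable
```

Once units are aligned this way, weight matrices from different clients can be averaged meaningfully, which is the point of the permutation step in the abstract.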
-
Publication number: 20250104394
Abstract: A method of generating text-driven prompts and class prediction probabilities using a vision-language model (VLM) includes receiving candidate class names associated with a plurality of candidate classes for images, generating class text tokens based on a text description of the candidate class names, and generating a plurality of context prompt vectors using a prompt generator. The context prompt vectors define context information associated with an image classification task to be performed by the VLM. The method further includes generating prompts for each of the plurality of candidate classes by appending respective class text tokens to the context prompt vectors, and, using the VLM, generating and outputting a class prediction probability for a sample image based on the plurality of context prompt vectors.
Type: Application
Filed: September 22, 2023
Publication date: March 27, 2025
Inventors: Chen Qiu, Xingyu Li, Chaithanya Kumar Mummadi, Madan Ravi Ganesh, Zhenzhen Li, Wan-Yi Lin, Sabrina Schmedding
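The prompt-construction flow lends itself to a short sketch. Below is a minimal numpy stand-in, assuming toy embeddings in place of the real VLM encoders; the prompt generator, mean-pool "encoder", and temperature are all illustrative assumptions, not details from the filing.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                      # embedding width (illustrative)
class_names = ["cat", "dog", "car"]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def prompt_generator(m=4):
    """Hypothetical prompt generator: m shared context prompt vectors."""
    return 0.1 * rng.standard_normal((m, d))

# stand-in text tokens for each candidate class name
class_tokens = {c: rng.standard_normal(d) for c in class_names}

context = prompt_generator()
# prompt per class: shared context vectors with the class token appended
prompts = {c: np.vstack([context, class_tokens[c]]) for c in class_names}

# stand-in VLM text encoder: mean-pool the prompt token sequence
text_embed = {c: prompts[c].mean(axis=0) for c in class_names}

# a sample image embedding that happens to lie near the "dog" prompt
image_embed = text_embed["dog"] + 0.01 * rng.standard_normal(d)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

logits = np.array([cosine(image_embed, text_embed[c]) for c in class_names])
probs = softmax(10.0 * logits)             # temperature-scaled similarities
predicted = class_names[int(np.argmax(probs))]
```

The key design point mirrored here is that the context vectors are shared across classes while only the class token differs, so the context can be optimized once for the whole classification task.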
-
Publication number: 20250103900
Abstract: Methods and systems of training neural networks with federated learning. Machine learning models are sent from a server to clients, yielding local machine learning models. At each client, the models are trained with locally-stored data, including determining a respective cross entropy loss for each of the plurality of local machine learning models. Weights for each local model are updated and transferred to the server without transferring locally-stored data. The transferred weights are aggregated at the server to obtain an aggregated server-maintained machine learning model. At the server, a distillation loss based on a foundation model is generated. The aggregated server-maintained machine learning model is updated to obtain aggregated respective weights, which are transferred to the clients for updating of the local models.
Type: Application
Filed: September 22, 2023
Publication date: March 27, 2025
Inventors: Xidong Wu, Filipe J. Cabrita Condessa, Wan-Yi Lin, Devin T. Willmott, Zhenzhen Li, Madan Ravi Ganesh
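The server-side distillation step can be sketched with linear models. This is an illustrative toy, assuming a random "foundation model" teacher and softmax-linear students; it shows the shape of the idea (aggregate client weights, then refine them against the teacher's predictions), not the patented training procedure.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
n, d, k = 32, 6, 3
X = rng.standard_normal((n, d))              # server-side common data

client_Ws = [rng.standard_normal((d, k)) for _ in range(3)]  # client updates
W_teacher = rng.standard_normal((d, k))      # stand-in foundation model
p_teacher = softmax(X @ W_teacher)

def distill_loss(W):
    """Cross entropy of the student's predictions against the teacher's."""
    p = softmax(X @ W)
    return float(-np.mean(np.sum(p_teacher * np.log(p + 1e-12), axis=1)))

W_avg = np.mean(client_Ws, axis=0)           # plain federated averaging
loss_before = distill_loss(W_avg)

# refine the aggregated model with gradient steps on the distillation loss
W = W_avg.copy()
for _ in range(200):
    grad = X.T @ (softmax(X @ W) - p_teacher) / n
    W -= 0.5 * grad
loss_after = distill_loss(W)
```

The refined weights `W` are what the server would broadcast back to the clients.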
-
Publication number: 20250103899
Abstract: Methods and systems of training neural networks with federated learning. Server-maintained machine learning models are sent from a server to clients, yielding local machine learning models. At each client, the models are trained with local data to determine a respective cross entropy loss and a distillation loss based on foundation models. Respective weights are updated at each client for each of the local machine learning models based on the losses. The updated weights are transferred to the server without transferring the locally-stored data, whereupon they are aggregated and transferred back to the clients. At each client, the local machine learning model is updated with the aggregated updated weights.
Type: Application
Filed: September 22, 2023
Publication date: March 27, 2025
Inventors: Xidong Wu, Filipe J. Cabrita Condessa, Wan-Yi Lin, Devin T. Willmott, Zhenzhen Li, Madan Ravi Ganesh
-
Publication number: 20250100133
Abstract: Methods and systems of training neural networks with federated learning. A portion of a server-maintained machine learning (ML) model is sent from a server to clients, whereupon local ML models are trained with locally-stored data, including determining a cross entropy loss for each local ML model. The updated weights are evaluated on a common data set to obtain activation outputs for each layer of the local ML model, which are transferred to the server, whereupon they are permuted to match a dimension of a selected client to obtain a matrix, which is sent to the clients. At each client, the local ML model is permuted based on the matrix to obtain permuted weights, which are transferred to the server and aggregated. The aggregated permuted weights are transferred to the clients so that the local ML models are updated with the aggregated permuted weights.
Type: Application
Filed: September 22, 2023
Publication date: March 27, 2025
Inventors: Augustine D. Saravanos, Filipe J. Cabrita Condessa, Wan-Yi Lin, Zhenzhen Li, Madan Ravi Ganesh
-
Publication number: 20250103901
Abstract: Methods and systems for training neural networks with federated learning. Respective weights are transferred from each client to a respective neighboring client without transferring the locally-stored data of the clients. All models are permuted according to the respective weights to match the respective updated weights, yielding permuted weights. The permuted weights are aggregated at the clients. At each client, local machine learning models are trained with locally-stored data, wherein the training includes determining a respective cross entropy loss for each of the plurality of local machine learning models and a loss computed based on a distance of the local model to the aggregated permuted weights. Respective weights of each local machine learning model are updated based on the determined cross entropy loss and the distance-based loss.
Type: Application
Filed: September 22, 2023
Publication date: March 27, 2025
Inventors: Augustine D. Saravanos, Filipe J. Cabrita Condessa, Wan-Yi Lin, Zhenzhen Li, Madan Ravi Ganesh
-
Patent number: 12242657
Abstract: A method of identifying an attack includes receiving an input of one or more images, wherein the one or more images include a patch size and an image size; dividing the image into a first sub-image and a second sub-image; classifying the first sub-image and the second sub-image, wherein classifying is accomplished by introducing a variable in a pixel location associated with the first and second sub-images; and, in response to classifying the first and second sub-images and identifying an adversarial patch, outputting a notification indicating that the input is not certified.
Type: Grant
Filed: July 26, 2022
Date of Patent: March 4, 2025
Assignee: Robert Bosch GmbH
Inventors: Leslie Rice, Huan Zhang, Wan-Yi Lin, Jeremy Kolter
-
Patent number: 12236695
Abstract: A computer-implemented system and method relate to certified defense against adversarial patch attacks. A set of one-mask images is generated using a first mask at a set of predetermined regions of a source image obtained from a sensor. A set of one-mask predictions is generated, via a machine learning system, based on the set of one-mask images. A first one-mask image is extracted from the set of one-mask images; it is associated with a first one-mask prediction identified as a minority amongst the set of one-mask predictions. A set of two-mask images is generated by masking the first one-mask image using a set of second masks. The set of second masks includes at least a first submask and a second submask, in which a dimension of the first submask is less than a dimension of the first mask. A set of two-mask predictions is generated based on the set of two-mask images.
Type: Grant
Filed: September 21, 2022
Date of Patent: February 25, 2025
Assignee: Robert Bosch GmbH
Inventors: Shuhua Yu, Aniruddha Saha, Chaithanya Kumar Mummadi, Wan-Yi Lin
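The two-round masking inference described above can be illustrated with a toy. The sketch below uses a stub classifier over a 1-D "image" and two half-covering masks; the mask layout, classifier, and decision rule are illustrative assumptions about the general double-masking pattern, not the claimed method.

```python
import numpy as np

def double_mask_predict(img, masks, classify):
    """Round one: classify the image under each single mask. If all
    one-mask predictions agree, return that label. Otherwise, re-test
    each disagreeing one-mask image under a second round of masks."""
    preds = [classify(np.where(m, 0, img)) for m in masks]
    majority = max(set(preds), key=preds.count)
    if all(p == majority for p in preds):
        return majority                      # unanimous under every mask
    for m1, p1 in zip(masks, preds):
        if p1 != majority:
            one_masked = np.where(m1, 0, img)
            second = [classify(np.where(m2, 0, one_masked)) for m2 in masks]
            if all(p == p1 for p in second):
                return p1                    # minority label survives round two
    return majority

# stub classifier and masks over a 1-D "image"
classify = lambda x: int(x.sum() > 0)
masks = [np.arange(8) < 4, np.arange(8) >= 4]   # left half / right half

clean = np.ones(8)                   # every masked view still sums positive
label_clean = double_mask_predict(clean, masks, classify)
```

The intuition: if a patch hides under one mask, the masked view that removes it disagrees with the rest, and the second masking round exposes whether that minority prediction is robust.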
-
Publication number: 20250061327
Abstract: Disclosed embodiments use diffusion-based generative models for radar point cloud super-resolution, applying the mathematics of diffusion modeling to generate higher-resolution radar point cloud data from lower-resolution radar point cloud data.
Type: Application
Filed: August 18, 2023
Publication date: February 20, 2025
Inventors: Marcus A. Pereira, Filipe Condessa, Wan-Yi Lin, Ravi Ganesh Madan, Matthias Hagedorn
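Diffusion-based super-resolution runs a learned reverse-noising process conditioned on the low-resolution input. The sketch below is a heavily simplified 1-D stand-in: the learned score network is replaced by an oracle pointing toward an interpolation of the low-resolution points, so only the shape of the annealed reverse loop is illustrated. Nothing here is taken from the filing.

```python
import numpy as np

rng = np.random.default_rng(0)

hi = np.linspace(0.0, 1.0, 64)       # dense "high-resolution" 1-D point set
lo = hi[::9]                         # sparse low-resolution conditioning input

# conditioning signal: upsample the low-res points; this stands in for what
# a trained score network would infer about the dense structure
cond = np.interp(np.linspace(0.0, 1.0, 64),
                 np.linspace(0.0, 1.0, len(lo)), lo)

T, beta = 50, 0.02
x = rng.standard_normal(64)          # reverse chain starts from pure noise
for t in range(T):
    score = cond - x                 # oracle score (a learned net in practice)
    noise_scale = np.sqrt(beta) * (1 - t / T)   # annealed injected noise
    x = x + 0.1 * score + noise_scale * rng.standard_normal(64)
```

After the loop, `x` has been pulled from noise toward a dense reconstruction consistent with the sparse conditioning points.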
-
Patent number: 12205349
Abstract: A system includes a machine-learning network. The network includes an input interface configured to receive input data from a sensor. The processor is programmed to receive the input data; generate a perturbed input data set utilizing the input data, wherein the perturbed input data set includes perturbations of the input data; denoise the perturbed input data set utilizing a denoiser configured to generate a denoised data set; send the denoised data set to both a pre-trained classifier and a rejector, wherein the pre-trained classifier is configured to classify the denoised data set and the rejector is configured to reject a classification of the denoised data set; train, utilizing the denoised data set, the rejector to obtain a trained rejector; and, in response to obtaining the trained rejector, output an abstain classification associated with the input data, wherein the abstain classification is ignored for classification.
Type: Grant
Filed: March 18, 2022
Date of Patent: January 21, 2025
Assignees: Robert Bosch GmbH, Carnegie Mellon University
Inventors: Fatemeh Sheikholeslami, Wan-Yi Lin, Jan Hendrik Metzen, Huan Zhang, Jeremy Kolter
-
Publication number: 20250005376
Abstract: Methods and systems for training neural networks with federated learning. Server-maintained machine learning models are sent from a server to a plurality of clients, yielding local machine learning models. At each client, the local machine learning models are trained with data stored locally at that respective client. Respective losses are determined and weights updated for each of the local machine learning models. Updated weights are transferred to the server for updating and training of the server-maintained machine learning models. If one of the clients is disconnected or otherwise unable to receive the server-maintained models, that disconnected client can connect to neighboring clients, receive the models from those neighboring clients, and train those models with the disconnected client's own locally-stored data.
Type: Application
Filed: June 30, 2023
Publication date: January 2, 2025
Inventors: Zhenzhen Li, Filipe J. Cabrita Condessa, Madan Ravi Ganesh, Wan-Yi Lin, Chen Qiu
-
Publication number: 20250005375
Abstract: Methods and systems for federated learning in a machine learning environment are disclosed. At least portions of a plurality of server-maintained machine learning models are sent from a server to a plurality of clients, yielding a plurality of local machine learning models. At each client, the plurality of local machine learning models are trained with data stored locally at that respective client. A respective loss for each of the plurality of local machine learning models is determined, and respective weights for each of the plurality of local machine learning models are updated. The respective updated weights from each client are transferred to the server without transferring the locally-stored data of the clients. At the server, the plurality of server-maintained machine learning models are trained with the updated weights sent from each of the clients.
Type: Application
Filed: June 30, 2023
Publication date: January 2, 2025
Inventors: Zhenzhen Li, Filipe J. Cabrita Condessa, Madan Ravi Ganesh, Wan-Yi Lin, Chen Qiu
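The send-train-aggregate loop in this entry follows the familiar federated averaging pattern. A minimal numpy sketch with illustrative linear-regression clients (the data, learning rates, and round counts are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
w_true = np.array([1.0, -2.0, 0.5, 3.0])

# three clients, each holding private local data that never leaves the client
clients = []
for _ in range(3):
    X = rng.standard_normal((50, d))
    y = X @ w_true + 0.01 * rng.standard_normal(50)
    clients.append((X, y))

w_server = np.zeros(d)                       # server-maintained model
for _ in range(30):                          # communication rounds
    updates = []
    for X, y in clients:
        w = w_server.copy()                  # model sent to the client
        for _ in range(5):                   # local gradient steps on local data
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= 0.05 * grad
        updates.append(w)                    # only weights leave the client
    w_server = np.mean(updates, axis=0)      # server-side aggregation
```

Note that only the weight vectors cross the network in each round; the raw `(X, y)` pairs stay on their clients, which is the privacy property the abstract emphasizes.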
-
Patent number: 12175336
Abstract: A computer-implemented method for training a machine learning network. The method may include receiving input data, selecting one or more batch samples from the input data, applying a perturbation object onto the one or more batch samples to create a perturbed sample, running the perturbed sample through the machine learning network, updating the perturbation object in response to a loss function evaluated on the perturbed sample, and outputting the perturbation object in response to exceeding a convergence threshold.
Type: Grant
Filed: September 20, 2020
Date of Patent: December 24, 2024
Assignee: Robert Bosch GmbH
Inventors: Filipe J. Cabrita Condessa, Wan-Yi Lin, Karren Yang, Manash Pratim
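The batch-perturb-update loop resembles how universal perturbations are commonly trained. A toy numpy sketch under illustrative assumptions (a fixed linear "network", squared-error loss, fixed step count in place of a convergence check):

```python
import numpy as np

rng = np.random.default_rng(0)

# stand-in "network": a fixed linear map; training loss is squared error
W = rng.standard_normal((5, 3))
data = rng.standard_normal((100, 5))
targets = data @ W                   # the network is exact on clean inputs

def loss(x, t):
    return float(np.mean((x @ W - t) ** 2))

delta = 0.01 * rng.standard_normal(5)    # shared perturbation object
eps, lr = 0.5, 0.1
for _ in range(300):
    idx = rng.choice(len(data), size=16, replace=False)  # batch samples
    err = (data[idx] + delta) @ W - targets[idx]         # perturbed forward pass
    grad = 2 * np.mean(err @ W.T, axis=0)                # d(loss)/d(delta)
    delta = np.clip(delta + lr * grad, -eps, eps)        # ascent, kept bounded
```

The ascent direction maximizes the network's loss while the clip keeps the perturbation within a fixed budget, so the final `delta` degrades predictions on every sample it is applied to.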
-
Patent number: 12172670
Abstract: Methods and systems of estimating the accuracy of a neural network on out-of-distribution data. In-distribution accuracies of a plurality of machine learning models trained with in-distribution data are determined. The plurality of machine learning models includes a first model and a remainder of models. In-distribution agreement is determined between (i) an output of the first machine learning model executed with an in-distribution dataset and (ii) outputs of the remainder of the plurality of machine learning models executed with the in-distribution dataset. The machine learning models are also executed with an unlabeled out-of-distribution dataset, and an out-of-distribution agreement is determined. The in-distribution agreement is compared with the out-of-distribution agreement.
Type: Grant
Filed: June 15, 2022
Date of Patent: December 24, 2024
Assignee: Robert Bosch GmbH
Inventors: Yiding Jiang, Christina Baek, Jeremy Kolter, Aditi Raghunathan, João D. Semedo, Filipe J. Cabrita Condessa, Wan-Yi Lin
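The useful property here is that agreement between models can be measured without labels, so it serves as a proxy for accuracy on unlabeled OOD data. A simulation sketch (synthetic predictions with assumed accuracy levels, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
labels = rng.integers(0, 2, n)

def predictions(accuracy):
    """Simulate a model's labels: correct with the given probability."""
    flip = rng.random(n) > accuracy
    return np.where(flip, 1 - labels, labels)

def agreement(a, b):
    """Fraction of inputs on which two models output the same label."""
    return float(np.mean(a == b))

# four models that are ~90% accurate on in-distribution data
id_preds = [predictions(0.9) for _ in range(4)]
id_agree = np.mean([agreement(id_preds[0], p) for p in id_preds[1:]])

# the same models on unlabeled OOD data where accuracy has silently
# dropped to ~70% -- agreement (computable without labels) drops with it
ood_preds = [predictions(0.7) for _ in range(4)]
ood_agree = np.mean([agreement(ood_preds[0], p) for p in ood_preds[1:]])
```

Comparing `ood_agree` against `id_agree` flags the accuracy drop even though no OOD labels were used.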
-
Publication number: 20240411892
Abstract: A computer-implemented system and method relate to certified robust defenses against adversarial patches. A set of one-mask images is generated using a source image and a first mask at a set of predetermined image regions. The set of predetermined image regions collectively covers at least every pixel of the source image. A particular one-mask image with the highest prediction loss is selected from among the set of one-mask images. A set of two-mask images is generated using the selected one-mask image and a second mask at the set of predetermined image regions. A particular two-mask image with the highest prediction loss is selected from among the set of two-mask images. The machine learning system is trained using a training dataset that includes the selected two-mask image.
Type: Application
Filed: June 9, 2023
Publication date: December 12, 2024
Inventors: Chaithanya Kumar Mummadi, Wan-Yi Lin, Filipe Condessa, Aniruddha Saha, Shuhua Yu
-
Publication number: 20240412004
Abstract: A computer-implemented method includes converting tabular data to a text representation, generating metadata associated with the text representation of the tabular data, outputting one or more natural language data descriptions indicative of the tabular data utilizing a large language model (LLM) and zero-shot prompting of the metadata and text representation, outputting one or more summaries utilizing the LLM by appending a prompt to the one or more natural language data descriptions, selecting a single summary of the one or more summaries in response to the single summary having the smallest validation rate, receiving a query associated with the tabular data, outputting one or more predictions associated with the query, and, in response to meeting a convergence threshold with the one or more predictions generated from the one or more iterations, outputting a final prediction associated with the query.
Type: Application
Filed: June 9, 2023
Publication date: December 12, 2024
Inventors: Hariharan Manikandan, Yiding Jiang, Jeremy Kolter, Chen Qiu, Wan-Yi Lin, Filipe J. Cabrita Condessa
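The first step, serializing tabular rows into text an LLM can consume, is easy to sketch. The sentence template, column names, and question below are invented for illustration; the filing does not specify a serialization format.

```python
def row_to_text(columns, row):
    """Serialize one tabular record as natural-language sentences."""
    return " ".join(f"The {c} is {v}." for c, v in zip(columns, row))

# hypothetical table
columns = ["age", "occupation", "income"]
rows = [[34, "engineer", 72000], [58, "teacher", 51000]]

# assemble an LLM prompt: serialized records followed by a query
prompt = "\n".join(
    ["Below are records from a table."]
    + [row_to_text(columns, r) for r in rows]
    + ["Question: which record has the higher income?"]
)
```

A prompt built this way would then be sent to the LLM together with generated metadata, per the abstract.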
-
Publication number: 20240411931
Abstract: A computer-implemented system and method relate to certified robust defenses against adversarial patches. A two-mask image is generated using a first mask and a second mask with respect to a source image. The two-mask image is associated with the highest prediction loss. A set of two-submask images is generated using a first submask and a second submask with respect to the source image. The first submask refers to a portion of the first mask, and the second submask refers to a portion of the second mask. A machine learning system generates a set of predictions upon receiving the set of two-submask images. A particular two-submask image with the highest prediction loss is selected from among the set of two-submask images. The machine learning system is trained via a training dataset that includes the source image, the two-mask image, and the selected two-submask image.
Type: Application
Filed: June 9, 2023
Publication date: December 12, 2024
Inventors: Chaithanya Kumar Mummadi, Wan-Yi Lin, Filipe Condessa, Aniruddha Saha, Shuhua Yu
-
Patent number: 12118788
Abstract: Performing semantic segmentation in the absence of labels for one or more semantic classes is provided. One or more weak predictors are utilized to obtain label proposals of novel classes for an original dataset in which at least a subset of semantic classes are unlabeled. The label proposals are merged with the ground truth of the original dataset to generate a merged dataset, the ground truth defining labeled classes of portions of the original dataset. A machine learning model is trained using the merged dataset and utilized for performing semantic segmentation on image data.
Type: Grant
Filed: February 3, 2022
Date of Patent: October 15, 2024
Assignee: Robert Bosch GmbH
Inventors: S Alireza Golestaneh, João D. Semedo, Filipe J. Cabrita Condessa, Wan-Yi Lin, Stefan Gehrer
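The merge step can be shown in a few lines. This sketch assumes the common convention of an ignore index for unlabeled pixels (the value 255 and the tiny maps are illustrative, not from the patent): ground-truth labels win wherever they exist, and weak-predictor proposals for novel classes fill the unlabeled pixels.

```python
import numpy as np

IGNORE = 255                     # assumed convention for unlabeled pixels

# ground-truth segmentation map: classes 0-1 labeled, the rest unlabeled
gt = np.array([[0, 0, IGNORE],
               [1, IGNORE, IGNORE]])

# label proposals for novel classes 3-4 from a weak predictor
proposal = np.array([[3, 3, 4],
                     [3, 4, 4]])

# merge: ground truth wins wherever it exists; proposals fill the gaps
merged = np.where(gt == IGNORE, proposal, gt)
```

The merged map is then usable as a fully labeled training target for the segmentation model.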
-
Patent number: 12079995
Abstract: A method of image segmentation includes receiving one or more images and determining a loss component. For each pixel of one image of the one or more images, a majority class is identified along with a cross-entropy loss between a network output and a target. Pixels associated with the one image are randomly selected, and a second set of pixels is selected to compute a super pixel loss for each pair of pixels; the corresponding loss associated with each pair of pixels is summed. For each corresponding frame of the plurality of frames of the image, a flow loss, a negative flow loss, a contrastive optical flow loss, and an equivariant optical flow loss are computed. A final loss is computed as a weighted average of the flow loss, the cross entropy loss, the super pixel loss, and the foreground loss; a network parameter is updated; and a trained neural network is output.
Type: Grant
Filed: September 28, 2021
Date of Patent: September 3, 2024
Assignee: Robert Bosch GmbH
Inventors: Chirag Pabbaraju, João D. Semedo, Wan-Yi Lin
-
Publication number: 20240289644
Abstract: A computer-implemented system and method includes establishing a station sequence that a given part traverses. Each station includes a machine that performs at least one operation with respect to the given part. Measurement data, which relates to attributes of a plurality of parts that traversed the plurality of machines, is received. The measurement data is obtained by sensors and corresponds to a current process period. A first machine learning model is pretrained to generate (i) latent representations based on the measurement data and (ii) machine states based on the latent representations. Machine observation data, which relates to the current process period, is received. Aggregated data is generated based on the measurement data and the machine observation data. A second machine learning model generates a maintenance prediction based on the aggregated data. The maintenance prediction corresponds to a next process period.
Type: Application
Filed: February 28, 2023
Publication date: August 29, 2024
Inventors: Filipe Condessa, Devin Willmott, Ivan Batalov, Joao Semedo, Wan-Yi Lin, Bahare Aazari
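The two-stage pipeline (encode measurements into latent machine states, then combine states with operator observations into a maintenance prediction) can be sketched with stand-ins. The station names, thresholds, and rules below are illustrative assumptions, not details from the filing; real systems would use the two trained models described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# stage 1 (stand-in for the pretrained first model): encode per-station
# sensor measurements into a latent vector, then derive a machine state
def encode(measurements):            # (num_parts, num_features) -> latent
    return measurements.mean(axis=0)

def machine_state(latent, healthy_ref, tol=0.5):
    return "ok" if np.linalg.norm(latent - healthy_ref) < tol else "degraded"

healthy_ref = np.zeros(3)
measurements = {                     # station -> sensor data, current period
    "press":  rng.normal(0.0, 0.1, (20, 3)),
    "welder": rng.normal(1.0, 0.1, (20, 3)),   # machine drifting from healthy
}
states = {s: machine_state(encode(m), healthy_ref)
          for s, m in measurements.items()}

# stage 2 (stand-in for the second model): aggregate machine states with
# observation flags into a next-period maintenance prediction
observations = {"press": 0, "welder": 1}       # 1 = anomaly reported
def needs_maintenance(state, obs):
    return state == "degraded" or obs == 1

plan = {s: needs_maintenance(states[s], observations[s]) for s in states}
```

The drifting station is flagged for next-period maintenance by either signal path, while the healthy one is not.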