Patents by Inventor Wan-Yi Lin

Wan-Yi Lin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250259274
Abstract: A method for attacking a neural network that includes receiving input data that includes an image and a ground truth label, adding a pre-determined amount of noise to the image, denoising the noisy image utilizing a diffusion model that includes a deep equilibrium root solver, determining a first gradient of the denoised image with respect to the input data including at least the image, wherein the first gradient is associated with the diffusion model, utilizing the denoised image at a downstream model, outputting a predicted label associated with the denoised image, determining a loss utilizing the predicted label and the ground truth label, determining a second gradient associated with the downstream model utilizing at least the loss, and outputting an aggregate gradient that represents an error of the neural network output utilizing the predicted label, wherein the aggregate gradient is calculated utilizing the first gradient and the second gradient.
    Type: Application
    Filed: February 14, 2024
    Publication date: August 14, 2025
Inventors: Ivan Batalov, Wan-Yi Lin, Chaithanya Kumar Mummadi
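    A minimal PyTorch sketch of the gradient aggregation described in this entry, using toy stand-ins (ToyDenoiser, ToyClassifier) for the diffusion model and downstream model; the filing's deep equilibrium root solver is not reproduced here.
      import torch
      import torch.nn.functional as F

      class ToyDenoiser(torch.nn.Module):          # stand-in for the diffusion-model denoiser
          def __init__(self):
              super().__init__()
              self.conv = torch.nn.Conv2d(3, 3, 3, padding=1)
          def forward(self, x):
              return x - self.conv(x)

      class ToyClassifier(torch.nn.Module):        # stand-in for the downstream model
          def __init__(self):
              super().__init__()
              self.fc = torch.nn.Linear(3 * 32 * 32, 10)
          def forward(self, x):
              return self.fc(x.flatten(1))

      def aggregate_gradient(image, label, denoiser, classifier, noise_scale=0.1):
          image = image.clone().requires_grad_(True)
          noisy = image + noise_scale * torch.randn_like(image)    # add a pre-determined amount of noise
          denoised = denoiser(noisy)                                # denoise with the diffusion model
          logits = classifier(denoised)                             # use the denoised image at a downstream model
          loss = F.cross_entropy(logits, label)                     # loss from predicted label vs. ground truth
          # second gradient: loss w.r.t. the denoised image (downstream model)
          grad_denoised, = torch.autograd.grad(loss, denoised, retain_graph=True)
          # first gradient: back-propagate that signal through the denoiser to the input,
          # combining the two gradients into the aggregate gradient via the chain rule
          aggregate, = torch.autograd.grad(denoised, image, grad_outputs=grad_denoised)
          return aggregate

      g = aggregate_gradient(torch.rand(1, 3, 32, 32), torch.tensor([3]), ToyDenoiser(), ToyClassifier())
      print(g.shape)    # same shape as the input image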
  • Patent number: 12387057
Abstract: A computer-implemented method includes converting tabular data to a text representation, generating metadata associated with the text representation of the tabular data, outputting one or more natural language data descriptions indicative of the tabular data in response to utilizing a large language model (LLM) and zero-shot prompting of the metadata and text representation of the tabular data, outputting one or more summaries utilizing the LLM and appending a prompt to the one or more natural language data descriptions, selecting a single summary of the one or more summaries in response to the single summary having a smallest validation rate, receiving a query associated with the tabular data, outputting one or more predictions associated with the query, and, in response to meeting a convergence threshold with the one or more predictions generated from one or more iterations, outputting a final prediction associated with the query.
    Type: Grant
    Filed: June 9, 2023
    Date of Patent: August 12, 2025
    Assignees: Robert Bosch GmbH, Carnegie Mellon University
    Inventors: Hariharan Manikandan, Yiding Jiang, Jeremy Kolter, Chen Qiu, Wan-Yi Lin, Filipe J. Cabrita Condessa
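    A rough Python sketch of the tabular-to-text pipeline in this entry; call_llm is a hypothetical placeholder for any chat-completion client, and the serialization format is an assumption.
      import json

      def call_llm(prompt: str) -> str:
          """Hypothetical LLM call; wire this to any chat-completion client."""
          return "placeholder response"

      def table_to_text(rows):
          # serialize each row as "column is value" clauses, one line per record
          return "\n".join(", ".join(f"{col} is {val}" for col, val in row.items()) for row in rows)

      def build_metadata(rows):
          return {"columns": list(rows[0].keys()), "num_rows": len(rows)}

      rows = [
          {"age": 61, "cholesterol": 240, "label": "disease"},
          {"age": 35, "cholesterol": 180, "label": "healthy"},
      ]
      text, meta = table_to_text(rows), build_metadata(rows)

      # zero-shot prompt over the metadata and text representation -> natural language data description
      description = call_llm("Metadata: " + json.dumps(meta) + "\nData:\n" + text +
                             "\nDescribe this dataset in natural language.")
      # append a summarization prompt on the description, then answer queries against the chosen summary
      summary = call_llm(description + "\nSummarize the key patterns in one paragraph.")
      prediction = call_llm(summary + "\nQuery: will a 58-year-old with cholesterol 250 have the disease?")
      print(prediction)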
  • Publication number: 20250232570
    Abstract: A method for fine tuning a pre-trained machine learning model includes receiving, from a pre-trained machine learning model, at least one image embedding corresponding to first training data used to train the pre-trained machine learning model. The method also includes receiving, from the pre-trained machine learning model, at least one text embedding corresponding to the at least one image embedding. The method also includes generating at least one perturbation vector that includes the at least one image embedding, the at least one text embedding, a perturbation magnitude value, and a perturbation direction value. The method also includes generating second training data based on the at least one perturbation vector, and fine tuning the pre-trained machine learning model using the second training data.
    Type: Application
    Filed: January 16, 2024
    Publication date: July 17, 2025
Inventors: Ivan Batalov, Chaithanya Kumar Mummadi, Wan-Yi Lin
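    A short sketch of the embedding-space perturbation idea in this entry; exactly how the magnitude and direction enter the perturbation vector is an assumption, shown here as a step along the image-to-text direction.
      import torch

      def make_perturbation_vector(img_emb, txt_emb, magnitude, direction=1.0):
          # move the image embedding toward (direction=+1) or away from (direction=-1) its text embedding
          delta = txt_emb - img_emb
          delta = delta / delta.norm(dim=-1, keepdim=True).clamp_min(1e-8)
          return direction * magnitude * delta

      def generate_second_training_data(img_embs, txt_embs, magnitudes):
          perturbed = [img_embs + make_perturbation_vector(img_embs, txt_embs, m) for m in magnitudes]
          return torch.cat(perturbed, dim=0)          # synthetic embeddings used for fine-tuning

      img_embs = torch.randn(8, 512)                  # image embeddings from the pre-trained model
      txt_embs = torch.randn(8, 512)                  # corresponding text embeddings
      second_data = generate_second_training_data(img_embs, txt_embs, magnitudes=[0.05, 0.1, 0.2])
      print(second_data.shape)                        # (24, 512) second training data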
  • Publication number: 20250231974
Abstract: A method that includes obtaining, from one or more stations, embedding vectors that embed features of the stations; obtaining measurement vectors and associated measurement names; generating a text array of the measurement names utilizing a language model; concatenating the text array and the measurement vector at one or more cross-attention modules configured to encode one or more measurement arrays to one or more latent embedding vectors; generating one or more latent embedding vectors associated with the measurement vector and corresponding measurement names via the cross-attention module and a fixed-size station embedding vector; outputting the latent embeddings; generating a query vector; generating key vectors and value vectors utilizing a latent embedding vector; decoding the latent embedding vectors utilizing the key vectors, value vectors, cross-attention modules, and query vectors; and outputting a prediction.
    Type: Application
    Filed: January 17, 2024
    Publication date: July 17, 2025
Inventors: Chen Qiu, Wan-Yi Lin, Carlos Cunha, Jared Evans
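    One possible PyTorch reading of the cross-attention encode/decode pipeline in this entry (a Perceiver-style layout); the dimensions, latent count, and prediction head are assumptions.
      import torch
      import torch.nn as nn

      class MeasurementEncoderDecoder(nn.Module):
          def __init__(self, dim=64, n_latents=16, heads=4):
              super().__init__()
              self.latents = nn.Parameter(torch.randn(n_latents, dim))        # fixed-size latent array
              self.encode_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
              self.decode_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
              self.query = nn.Parameter(torch.randn(1, dim))                   # decoder query vector
              self.head = nn.Linear(dim, 1)

          def forward(self, name_text_emb, measurement_emb, station_emb):
              # combine the text array of measurement names with the measurement vectors,
              # plus the fixed-size station embedding, as the cross-attention context
              context = torch.cat([name_text_emb + measurement_emb, station_emb.unsqueeze(1)], dim=1)
              latents = self.latents.unsqueeze(0).expand(context.size(0), -1, -1)
              latents, _ = self.encode_attn(latents, context, context)         # encode to latent embeddings
              q = self.query.unsqueeze(0).expand(context.size(0), -1, -1)
              decoded, _ = self.decode_attn(q, latents, latents)               # keys/values from the latents
              return self.head(decoded.squeeze(1))                             # output a prediction

      names = torch.randn(2, 10, 64)       # language-model embeddings of measurement names
      values = torch.randn(2, 10, 64)      # projected measurement vectors
      station = torch.randn(2, 64)         # station embedding vectors
      print(MeasurementEncoderDecoder()(names, values, station).shape)   # (2, 1)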
  • Publication number: 20250225779
Abstract: A method and system for training a target neural network using a foundation model having a source neural network that has been pre-trained to operate on a source modality. Inputting source data to the foundation model. The source neural network of the foundation model has at least one source encoder having source weights which have been pre-trained to compute source features which are computable within the source data of the source modality. Inputting target data to a target neural network operating on a target modality. The target neural network includes at least one target encoder having target weights for computing target features within the target data of the target modality. Training the target weights by pairing the target data with the source data and freezing the source weights of the source neural network for a pre-determined epoch.
    Type: Application
    Filed: January 5, 2024
    Publication date: July 10, 2025
Inventors: Kilian Rambach, Joao Semedo, Bingqing Chen, Marcus Pereira, Wan-Yi Lin, Csaba Domokos, Yuri Feldman, Mariia Pushkareva
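    A compact sketch of training a target-modality encoder against a frozen pre-trained source encoder, as described in this entry; the linear encoders, the MSE pairing loss, and the epoch count are assumptions.
      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      source_encoder = nn.Linear(128, 64)    # stand-in for the pre-trained source encoder (source modality)
      target_encoder = nn.Linear(32, 64)     # target-modality encoder to be trained

      for p in source_encoder.parameters():  # freeze the source weights
          p.requires_grad_(False)

      opt = torch.optim.Adam(target_encoder.parameters(), lr=1e-3)
      FREEZE_EPOCHS = 5                      # pre-determined epoch count (assumed value)

      for epoch in range(FREEZE_EPOCHS):
          source_batch = torch.randn(16, 128)    # paired source-modality data
          target_batch = torch.randn(16, 32)     # paired target-modality data
          with torch.no_grad():
              source_feat = source_encoder(source_batch)
          target_feat = target_encoder(target_batch)
          loss = F.mse_loss(target_feat, source_feat)   # pull paired target features toward frozen source features
          opt.zero_grad()
          loss.backward()
          opt.step()
      print(float(loss))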
  • Publication number: 20250220042
Abstract: A system includes a controller configured to generate an original patch utilizing Bayesian optimization, output the original patch at a display at a scene and determine whether the original patch meets a success criteria of the machine-learning model, in response to the original patch not meeting the success criteria, upscale the patch, decompose the upscaled patch into components, for each of the components, utilize Bayesian optimization to update one of the components of the upscaled patch while freezing the other components to generate an updated patch, in response to the updated patch meeting the success criteria, output the updated upscaled patch, and in response to the updated upscaled patch not meeting the success criteria, iteratively update the unfrozen components and determine if the success criteria is met and, if not met, unfreeze the frozen components and iteratively update the unfrozen components until the success criteria is met.
    Type: Application
    Filed: December 29, 2023
    Publication date: July 3, 2025
    Inventors: Jianghong Shi, Devin T. Willmott, Wan-Yi Lin, Filipe J. Cabrita Condessa, João D. Semedo
  • Publication number: 20250217494
Abstract: A computer-implemented method for attacking a machine-learning model, comprising establishing a connection with a processor that is utilizing the machine-learning model, wherein the processor is in communication with a sensor located in a physical scene, outputting, on a display device in the physical scene, an adversarial pattern, wherein the display device including the adversarial pattern is located in a sensor range of the sensor, obtaining, from the machine-learning model, a classification associated with the physical scene that includes the adversarial pattern, determining if a target classification has been met with a classification output from the machine-learning model, and, in response to the target classification not being met, outputting additional adversarial patterns at the display device and repeating the steps until the target classification has been met.
    Type: Application
    Filed: December 29, 2023
    Publication date: July 3, 2025
Inventors: Monikasrivyshnavi Nagalla, João D. Semedo, Wan-Yi Lin, Filipe J. Cabrita Condessa
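    A schematic query loop for the display-based attack in this entry; show_on_display and classify_scene are hypothetical hooks for the display device and the sensor/model pipeline, and the random pattern proposal is only a placeholder search strategy.
      import numpy as np

      def show_on_display(pattern):
          """Hypothetical hook that renders the pattern on the display device in the scene."""

      def classify_scene():
          """Hypothetical hook: the sensor captures the scene and the model returns a class id."""
          return np.random.randint(0, 10)

      def attack_until_target(target_class, pattern_shape=(64, 64, 3), max_rounds=1000):
          rng = np.random.default_rng(0)
          for round_id in range(1, max_rounds + 1):
              pattern = rng.uniform(0.0, 1.0, size=pattern_shape)   # next adversarial pattern
              show_on_display(pattern)                               # place it in the sensor's range
              if classify_scene() == target_class:                   # target classification met?
                  return pattern, round_id
          return None, max_rounds                                    # give up after max_rounds

      pattern, rounds = attack_until_target(target_class=7)
      print("success:", pattern is not None, "after", rounds, "patterns")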
  • Publication number: 20250217493
Abstract: A system includes a machine learning network input interface configured to receive input data from a sensor, one or more processors collectively programmed to receive an input data from the sensor, wherein the input data is indicative of an image of a scene that includes a perturbation from a black-box attack with a physical perturbation at the scene, display an adversarial pattern at the scene, determine an objective function utilizing at least the adversarial pattern and a target classification of the machine-learning network, randomly select a plurality of data points associated with the adversarial pattern and the objective function, wherein the data points are associated with a number of queries of the objective function, obtain a machine-learning model output utilizing the data points displayed in the scene, and in response to meeting a criteria associated with the adversarial pattern and model output, identify a successful attack pattern.
    Type: Application
    Filed: December 29, 2023
    Publication date: July 3, 2025
Inventors: Jianghong Shi, Devin T. Willmott, Wan-Yi Lin, Filipe J. Cabrita Condessa, Bingqing Chen, João D. Semedo
  • Patent number: 12327331
    Abstract: A computer-implemented system and method include performing neural style transfer augmentations using at least a content image, a first style image, and a second style image. A first augmented image is generated based at least on content of the content image and a first style of the first style image. A second augmented image is generated based at least on the content of the content image and a second style of the second style image. The machine learning system is trained with training data that includes at least the content image, the first augmented image, and the second augmented image. A loss output is computed for the machine learning system. The loss output includes at least a consistency loss that accounts for a predicted label provided by the machine learning system with respect to each of the content image, the first augmented image, and the second augmented image.
    Type: Grant
    Filed: December 2, 2021
    Date of Patent: June 10, 2025
    Assignee: Robert Bosch GmbH
    Inventors: Akash Umakantha, S. Alireza Golestaneh, Joao Semedo, Wan-Yi Lin
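    A small PyTorch sketch of the consistency-loss training in this entry; the stylize function is a stand-in blend rather than real neural style transfer, and the KL-based consistency term and its 0.5 weight are assumptions.
      import torch
      import torch.nn.functional as F

      def stylize(content, style, alpha=0.5):
          return (1 - alpha) * content + alpha * style     # stand-in for neural style transfer

      model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
      opt = torch.optim.SGD(model.parameters(), lr=0.01)

      content = torch.rand(4, 3, 32, 32)
      style_a, style_b = torch.rand(4, 3, 32, 32), torch.rand(4, 3, 32, 32)
      labels = torch.randint(0, 10, (4,))

      aug_a = stylize(content, style_a)    # first augmented image
      aug_b = stylize(content, style_b)    # second augmented image

      logits = [model(x) for x in (content, aug_a, aug_b)]
      ce = sum(F.cross_entropy(l, labels) for l in logits) / 3
      # consistency loss: predictions on the augmented images should match those on the content image
      p_content = F.log_softmax(logits[0], dim=1)
      consistency = sum(F.kl_div(F.log_softmax(l, dim=1), p_content, reduction="batchmean", log_target=True)
                        for l in logits[1:])
      loss = ce + 0.5 * consistency
      opt.zero_grad()
      loss.backward()
      opt.step()
      print(float(loss))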
  • Publication number: 20250111243
Abstract: Methods and systems for training neural networks with federated learning. A portion of a server-maintained machine-learning model is transferred from a server to clients, yielding a plurality of local machine-learning models. At each client, the local models are trained with locally-stored data, including determining a respective cross entropy loss for each local model. Weights are updated for each local model and evaluated on a common dataset to obtain activation outputs for each layer. These are transferred to the server without transferring the locally-stored data of the clients, whereupon they are permuted according to the respective updated weights to match a dimension of a selected client to obtain a matrix, which is sent to each client for permuting the local models based on the matrix. The permuted weights are sent to the server, whereupon they are aggregated and transferred back to the clients for updating of the local models.
    Type: Application
    Filed: September 22, 2023
    Publication date: April 3, 2025
Inventors: Augustine D. Saravanos, Filipe J. Cabrita Condessa, Wan-Yi Lin, Zhenzhen Li, Madan Ravi Ganesh
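    A single-layer NumPy/SciPy sketch of the activation-matching step in this entry, read as a matched-averaging style scheme: activations on a common dataset align each client's neurons to a selected client before averaging. The layer sizes, correlation cost, and plain averaging are assumptions.
      import numpy as np
      from scipy.optimize import linear_sum_assignment

      rng = np.random.default_rng(0)
      common_data = rng.normal(size=(32, 8))                         # common evaluation dataset
      client_weights = [rng.normal(size=(8, 4)) for _ in range(3)]   # one hidden layer per client (faked)

      def activations(W):
          return np.maximum(common_data @ W, 0.0)             # activation outputs on the common data

      reference = activations(client_weights[0])              # activations of the selected client
      permuted = []
      for W in client_weights:
          acts = activations(W)
          # cost: negative correlation between this client's units and the reference client's units
          cost = -np.abs(np.corrcoef(acts.T, reference.T)[:4, 4:])
          row, col = linear_sum_assignment(cost)              # best unit-to-unit assignment
          P = np.zeros((4, 4))
          P[row, col] = 1.0                                   # permutation matrix sent back to the client
          permuted.append(W @ P)                              # client permutes its local weights

      aggregated = np.mean(permuted, axis=0)                  # server aggregates the permuted weights
      print(aggregated.shape)                                 # transferred back to the clients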
  • Publication number: 20250103901
Abstract: Methods and systems for training neural networks with federated learning. Respective weights are transferred from each client to a respective neighboring client without transferring the locally-stored data of the clients. All models are permuted according to the respective weights to match the respectively updated weights to obtain permuted weights. The permuted weights are aggregated at the clients. At each client, local machine learning models are trained with locally-stored data, wherein the training includes determining a respective cross entropy loss for each of the plurality of local machine learning models and a loss computed based on a distance of the local machine learning model to the aggregated permuted weights. Respective weights of each local machine learning model are updated based on the determined cross entropy loss and the distance-based loss.
    Type: Application
    Filed: September 22, 2023
    Publication date: March 27, 2025
Inventors: Augustine D. Saravanos, Filipe J. Cabrita Condessa, Wan-Yi Lin, Zhenzhen Li, Madan Ravi Ganesh
  • Publication number: 20250103900
Abstract: Methods and systems of training neural networks with federated learning. Machine learning models are sent from a server to clients, yielding local machine learning models. At each client, the models are trained with locally-stored data, including determining a respective cross entropy loss for each of the plurality of local machine learning models. Weights for each local model are updated, and transferred to the server without transferring locally-stored data. The transferred weights are aggregated at the server to obtain an aggregated server-maintained machine learning model. At the server, a distillation loss based on a foundation model is generated. The aggregated server-maintained machine learning model is updated to obtain aggregated respective weights, which are transferred to the clients for updating of the local models.
    Type: Application
    Filed: September 22, 2023
    Publication date: March 27, 2025
Inventors: Xidong Wu, Filipe J. Cabrita Condessa, Wan-Yi Lin, Devin T. Willmott, Zhenzhen Li, Madan Ravi Ganesh
  • Publication number: 20250104394
    Abstract: A method of generating text-driven prompts and class prediction probabilities using a vision-language model (VLM) includes receiving candidate class names associated with a plurality of candidate classes for images, generating class text tokens based on a text description of the candidate class names, and generating a plurality of context prompt vectors using a prompt generator. The context prompt vectors define context information associated with an image classification task to be performed by the VLM. The method further includes generating prompts for each of the plurality of candidate classes by appending respective class text tokens to the context prompt vectors for each of the plurality of candidate classes, and, using the VLM, generating and outputting a class prediction probability for a sample image based on the plurality of context prompt vectors.
    Type: Application
    Filed: September 22, 2023
    Publication date: March 27, 2025
Inventors: Chen Qiu, Xingyu Li, Chaithanya Kumar Mummadi, Madan Ravi Ganesh, Zhenzhen Li, Wan-Yi Lin, Sabrina Schmedding
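    A CoOp-flavoured PyTorch sketch of the prompt generation in this entry: learnable context prompt vectors are prepended to class text tokens and scored against an image feature. The toy text encoder, dimensions, and temperature are assumptions.
      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class PromptGenerator(nn.Module):
          def __init__(self, n_ctx=4, dim=64):
              super().__init__()
              self.ctx = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)   # context prompt vectors

          def forward(self, class_tokens):
              # class_tokens: (n_classes, n_tok, dim); prepend the shared context to every class
              ctx = self.ctx.unsqueeze(0).expand(class_tokens.size(0), -1, -1)
              return torch.cat([ctx, class_tokens], dim=1)              # one prompt per candidate class

      text_encoder = nn.Sequential(nn.Flatten(1), nn.Linear(6 * 64, 64))   # toy stand-in for the VLM text tower
      image_feat = F.normalize(torch.randn(1, 64), dim=-1)                 # encoded sample image

      class_tokens = torch.randn(5, 2, 64)                    # class text tokens for 5 candidate class names
      prompts = PromptGenerator()(class_tokens)               # (5, 6, 64)
      text_feat = F.normalize(text_encoder(prompts), dim=-1)  # one embedding per class prompt
      probs = F.softmax(100.0 * image_feat @ text_feat.t(), dim=-1)   # class prediction probabilities
      print(probs)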
  • Publication number: 20250103899
Abstract: Methods and systems of training neural networks with federated learning. Server-maintained machine learning models are sent from a server to clients, yielding local machine learning models. At each client, the models are trained with local data to determine a respective cross entropy loss and a distillation loss based on foundation models. Respective weights are updated at each client for each of the local machine learning models based on the losses. The updated weights are transferred to the server without transferring the locally-stored data, whereupon they are aggregated and transferred back to the clients. At each client, the local machine learning model is updated with the aggregated updated weights.
    Type: Application
    Filed: September 22, 2023
    Publication date: March 27, 2025
Inventors: Xidong Wu, Filipe J. Cabrita Condessa, Wan-Yi Lin, Devin T. Willmott, Zhenzhen Li, Madan Ravi Ganesh
  • Publication number: 20250100133
Abstract: Methods and systems of training neural networks with federated learning. A portion of a server-maintained machine learning (ML) model is sent from a server to clients, whereupon local ML models are trained with locally-stored data, including determining cross entropy loss for each local ML model. The updated weights are evaluated on a common data set to obtain activation outputs for each layer of the local ML model, which are transferred to the server, whereupon they are permuted to match a dimension of a selected client to obtain a matrix, which is sent to the clients. At each client, the local ML model is permuted based on the matrix to obtain permuted weights which are transferred to the server and aggregated. The aggregated permuted weights are transferred to the clients so that the local ML models are updated with the aggregated permuted weights.
    Type: Application
    Filed: September 22, 2023
    Publication date: March 27, 2025
Inventors: Augustine D. Saravanos, Filipe J. Cabrita Condessa, Wan-Yi Lin, Zhenzhen Li, Madan Ravi Ganesh
  • Patent number: 12242657
Abstract: A method of identifying an attack comprising receiving an input of one or more images, wherein the one or more images includes a patch size and size, dividing the image into a first sub-image and a second sub-image, classifying the first sub-image and the second sub-image, wherein classifying is accomplished via introducing a variable in a pixel location associated with the first and second sub-images, and, in response to classifying the first and second sub-images and identifying an adversarial patch, outputting a notification indicating that the input is not certified.
    Type: Grant
    Filed: July 26, 2022
    Date of Patent: March 4, 2025
Assignee: Robert Bosch GmbH
    Inventors: Leslie Rice, Huan Zhang, Wan-Yi Lin, Jeremy Kolter
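    One way to read the split-and-compare certification in this entry, sketched in Python; the classify hook and the simple horizontal split are assumptions standing in for the patent's sub-image construction.
      import numpy as np

      def classify(sub_image):
          """Hypothetical classifier hook; returns a class id for the given sub-image."""
          return int(sub_image.mean() * 10) % 10     # toy stand-in

      def certify(image, patch_size):
          # patch_size would govern how the image is divided so a patch cannot span both
          # sub-images; this toy version just splits the image in half.
          h = image.shape[0] // 2
          first, second = image[:h], image[h:]
          pred_a, pred_b = classify(first), classify(second)
          if pred_a != pred_b:
              return None, "input is not certified"   # notification of a possible adversarial patch
          return pred_a, "certified"

      print(certify(np.random.rand(32, 32, 3), patch_size=5))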
  • Patent number: 12236695
Abstract: A computer-implemented system and method relate to certified defense against adversarial patch attacks. A set of one-mask images is generated using a first mask at a set of predetermined regions of a source image. The source image is obtained from a sensor. A set of one-mask predictions is generated, via a machine learning system, based on the set of one-mask images. A first one-mask image is extracted from the set of one-mask images. The first one-mask image is associated with a first one-mask prediction that is identified as a minority amongst the set of one-mask predictions. A set of two-mask images is generated by masking the first one-mask image using a set of second masks. The set of second masks includes at least a first submask and a second submask in which a dimension of the first submask is less than a dimension of the first mask. A set of two-mask predictions is generated based on the set of two-mask images.
    Type: Grant
    Filed: September 21, 2022
    Date of Patent: February 25, 2025
    Assignee: Robert Bosch GmbH
    Inventors: Shuhua Yu, Aniruddha Saha, Chaithanya Kumar Mummadi, Wan-Yi Lin
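    A compact double-masking sketch along the lines of this entry: one-mask predictions are collected over predetermined regions, and any minority prediction is re-checked with smaller second masks. The classify hook, mask sizes, and the final decision rule are assumptions.
      import numpy as np
      from collections import Counter

      def classify(img):
          """Hypothetical classifier hook."""
          return int(img.sum()) % 3                   # toy stand-in

      def apply_mask(img, y, x, size):
          out = img.copy()
          out[y:y + size, x:x + size] = 0.0
          return out

      def double_mask_predict(img, mask_size=8, submask_size=4, stride=8):
          regions = [(y, x) for y in range(0, img.shape[0] - mask_size + 1, stride)
                            for x in range(0, img.shape[1] - mask_size + 1, stride)]
          one_mask_preds = [classify(apply_mask(img, y, x, mask_size)) for y, x in regions]
          majority, _ = Counter(one_mask_preds).most_common(1)[0]
          for (y, x), pred in zip(regions, one_mask_preds):
              if pred == majority:
                  continue
              # minority one-mask image: mask it again with smaller second masks
              first = apply_mask(img, y, x, mask_size)
              two_mask_preds = [classify(apply_mask(first, yy, xx, submask_size)) for yy, xx in regions]
              if all(p == pred for p in two_mask_preds):
                  return pred                          # consistent minority prediction wins
          return majority

      print(double_mask_predict(np.random.rand(32, 32)))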
  • Publication number: 20250061327
    Abstract: Disclosed embodiments use diffusion-based generative models for radar point cloud super-resolution. Disclosed embodiments use the mathematics of diffusion modeling to generate higher-resolution radar point cloud data from lower-resolution radar point cloud data.
    Type: Application
    Filed: August 18, 2023
    Publication date: February 20, 2025
    Inventors: Marcus A. Pereira, Filipe Condessa, Wan-Yi Lin, Ravi Ganesh Madan, Matthias Hagedorn
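    A schematic conditional-diffusion sampler for point cloud super-resolution in the spirit of this entry; the network, the conditioning, and the simplified update rule are assumptions and do not reproduce exact DDPM coefficients.
      import torch
      import torch.nn as nn

      class PointDenoiser(nn.Module):
          """Toy conditional denoiser: predicts noise on high-res points given low-res context."""
          def __init__(self, dim=3, hidden=64):
              super().__init__()
              self.net = nn.Sequential(nn.Linear(2 * dim + 1, hidden), nn.ReLU(), nn.Linear(hidden, dim))

          def forward(self, x_t, cond, t):
              t_feat = torch.full_like(x_t[..., :1], float(t))
              return self.net(torch.cat([x_t, cond, t_feat], dim=-1))

      @torch.no_grad()
      def super_resolve(low_res, model, steps=50, upsample=4):
          cond = low_res.repeat_interleave(upsample, dim=0)    # naive per-output-point conditioning
          x = torch.randn_like(cond)                           # start from Gaussian noise
          for t in reversed(range(1, steps + 1)):
              eps = model(x, cond, t / steps)                  # predicted noise at this step
              x = x - eps / steps                              # simplified denoising update
              if t > 1:
                  x = x + torch.randn_like(x) * (0.5 / steps) ** 0.5
          return x

      low_res = torch.randn(128, 3)                            # low-resolution radar point cloud (x, y, z)
      print(super_resolve(low_res, PointDenoiser()).shape)     # (512, 3) higher-resolution point cloud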
  • Patent number: 12205349
Abstract: A system includes a machine-learning network. The network includes an input interface configured to receive input data from a sensor. The processor is programmed to receive the input data, generate a perturbed input data set utilizing the input data, wherein the perturbed input data set includes perturbations of the input data, denoise the perturbed input data set utilizing a denoiser, wherein the denoiser is configured to generate a denoised data set, send the denoised data set to both a pre-trained classifier and a rejector, wherein the pre-trained classifier is configured to classify the denoised data set and the rejector is configured to reject a classification of the denoised data set, train, utilizing the denoised input data set, the rejector to achieve a trained rejector, and in response to obtaining the trained rejector, output an abstain classification associated with the input data, wherein the abstain classification is ignored for classification.
    Type: Grant
    Filed: March 18, 2022
    Date of Patent: January 21, 2025
    Assignees: Robert Bosch GmbH, Carnegie Mellon University
    Inventors: Fatemeh Sheikholeslami, Wan-Yi Lin, Jan Hendrik Metzen, Huan Zhang, Jeremy Kolter
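    A bare-bones inference sketch of the denoise-classify-reject pipeline in this entry; the denoiser, classifier, and rejector are untrained stand-ins, and the majority vote over kept samples is an assumption.
      import torch
      import torch.nn as nn

      denoiser = nn.Identity()                      # stand-in denoiser
      classifier = nn.Linear(3 * 32 * 32, 10)       # stand-in pre-trained classifier
      rejector = nn.Linear(3 * 32 * 32, 2)          # stand-in trained rejector (0 = keep, 1 = abstain)

      def smoothed_predict(x, n_samples=16, sigma=0.25, abstain_id=-1):
          noisy = x.unsqueeze(0) + sigma * torch.randn(n_samples, *x.shape)   # perturbed input data set
          denoised = denoiser(noisy).flatten(1)                                # denoise before classifying
          votes = classifier(denoised).argmax(dim=1)
          reject = rejector(denoised).argmax(dim=1)                            # rejector flags samples to drop
          kept = votes[reject == 0]
          if kept.numel() == 0:
              return abstain_id                       # abstain classification, ignored downstream
          return int(kept.mode().values)              # majority class over the kept samples

      print(smoothed_predict(torch.rand(3, 32, 32)))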
  • Publication number: 20250005376
Abstract: Methods and systems for training neural networks with federated learning. Server-maintained machine learning models are sent from a server to a plurality of clients, yielding local machine learning models. At each client, the local machine learning models are trained with locally-stored data, stored locally at that respective client. Respective losses are determined and weights updated for each of the local machine learning models. Updated weights are transferred to the server for updating of the server-maintained machine learning models for training of those models. If one of the clients is disconnected or otherwise unable to receive the server-maintained models, that disconnected client can connect to neighboring clients, receive the models from those neighboring clients, and train those models with the disconnected client's own locally-stored data.
    Type: Application
    Filed: June 30, 2023
    Publication date: January 2, 2025
Inventors: Zhenzhen Li, Filipe J. Cabrita Condessa, Madan Ravi Ganesh, Wan-Yi Lin, Chen Qiu
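    A toy NumPy round of the federated scheme in this entry with one disconnected client that pulls the model from a neighbor; the neighbor topology, the least-squares local step standing in for cross-entropy training, and plain averaging are assumptions.
      import numpy as np

      rng = np.random.default_rng(0)
      server_weights = rng.normal(size=(8,))
      clients = {f"client_{i}": {"weights": None, "data": rng.normal(size=(20, 8)), "connected": i != 2}
                 for i in range(4)}
      neighbors = {"client_2": ["client_1", "client_3"]}     # assumed neighbor topology

      def local_train(weights, data, lr=0.1):
          # one toy gradient step on a least-squares objective standing in for local training
          grad = data.T @ (data @ weights) / len(data)
          return weights - lr * grad

      # distribution: server -> connected clients; the disconnected client pulls from a neighbor
      for c in clients.values():
          if c["connected"]:
              c["weights"] = server_weights.copy()
      for name, c in clients.items():
          if c["weights"] is None:
              donor = next(n for n in neighbors[name] if clients[n]["weights"] is not None)
              c["weights"] = clients[donor]["weights"].copy()   # receive the model from a neighbor

      # local training on each client's own locally-stored data (the data never leaves the client)
      for c in clients.values():
          c["weights"] = local_train(c["weights"], c["data"])

      # clients upload updated weights only; the server aggregates them to update its model
      server_weights = np.mean([c["weights"] for c in clients.values()], axis=0)
      print(server_weights)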