Patents by Inventor Bryant Chen
Bryant Chen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240129356
Abstract: Novel tools and techniques are provided for implementing encoding or decoding of adaptive bitrate streams. In various embodiments, one or more first computing systems may divide a live media content stream into one or more segments, each of which might include a starting segment boundary and an ending segment boundary. The one or more first computing systems might encode the one or more segments into one or more primary adaptive bitrate streams. The one or more first computing systems might also divide the one or more segments of the live media content stream into one or more subsegments. Each subsegment might be less than a length of a corresponding segment of the one or more segments. The one or more first computing systems might then encode, and/or a second computing system might decode, the one or more subsegments into or from one or more secondary adaptive bitrate streams.
Type: Application
Filed: December 21, 2023
Publication date: April 18, 2024
Inventors: Rajesh Mamidwar, Wade Wan, Bryant Tan, Xuemin Chen
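The segment/subsegment division the abstract describes can be illustrated with a minimal sketch. This is not the patented implementation; the function name, duration units (seconds), and boundary representation are assumptions for illustration only.

```python
# Illustrative sketch: divide a stream into fixed-length segments, then
# divide each segment into shorter subsegments, returning (start, end)
# boundary pairs for both. All durations are in seconds.

def divide_stream(total_duration: float, segment_len: float, subsegment_len: float):
    """Return (segments, subsegments) as lists of (start, end) boundaries."""
    assert subsegment_len < segment_len, "each subsegment must be shorter than its segment"
    segments = []
    t = 0.0
    while t < total_duration:
        end = min(t + segment_len, total_duration)
        segments.append((t, end))
        t = end
    subsegments = []
    for start, end in segments:
        s = start
        while s < end:
            e = min(s + subsegment_len, end)
            subsegments.append((s, e))
            s = e
    return segments, subsegments

segments, subsegments = divide_stream(10.0, 4.0, 1.0)
```

With a 10-second stream, 4-second segments, and 1-second subsegments, this yields three segments (the last one shorter) and ten subsegments, each subsegment shorter than its parent segment as the abstract requires.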
-
Patent number: 11856021
Abstract: Computer-implemented methods, program products, and systems for provenance-based defense against poison attacks are disclosed. In one approach, a method includes: receiving observations and corresponding provenance data from data sources; determining whether the observations are poisoned based on the corresponding provenance data; and removing the poisoned observation(s) from a final training dataset used to train a final prediction model. Another implementation involves provenance-based defense against poison attacks in a fully untrusted data environment. Untrusted data points are grouped according to provenance signature, and the groups are used to train learning algorithms and generate complete and filtered prediction models. The results of applying the prediction models to an evaluation dataset are compared, and poisoned data points are identified where the performance of the filtered prediction model exceeds the performance of the complete prediction model.
Type: Grant
Filed: March 22, 2023
Date of Patent: December 26, 2023
Assignee: International Business Machines Corporation
Inventors: Nathalie Baracaldo-Angel, Bryant Chen, Evelyn Duesterwald, Heiko H. Ludwig
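The fully-untrusted-data variant in this abstract can be sketched in a few lines. This is a simplified illustration, not the patented method: the toy majority-class learner, the function names, and the group-at-a-time hold-out loop are all assumptions.

```python
# Sketch: group untrusted points by provenance signature; for each group,
# compare a "complete" model (trained on everything) with a "filtered"
# model (trained with that group held out). Flag the group as poisoned
# when the filtered model outperforms the complete one on evaluation data.
from collections import defaultdict

def provenance_filter(points, train, evaluate):
    """points: list of (provenance_signature, x, y) untrusted observations.
    train(data) -> model; evaluate(model) -> score in [0, 1]."""
    groups = defaultdict(list)
    for sig, x, y in points:
        groups[sig].append((x, y))
    complete_score = evaluate(train([(x, y) for _, x, y in points]))
    suspicious = []
    for sig in groups:
        filtered = [(x, y) for s, x, y in points if s != sig]
        if evaluate(train(filtered)) > complete_score:
            suspicious.append(sig)
    return suspicious

# Toy learner: a majority-class classifier, evaluated on trusted points
# whose true label is always 1.
def train(data):
    ys = [y for _, y in data]
    majority = max(set(ys), key=ys.count)
    return lambda x: majority

def evaluate(model):
    trusted = [(0, 1), (1, 1), (2, 1)]
    return sum(model(x) == y for x, y in trusted) / len(trusted)

points = [("clean", i, 1) for i in range(3)] + [("poison", i, 0) for i in range(4)]
flagged = provenance_filter(points, train, evaluate)
```

Here the four mislabeled "poison" points drag the complete model's majority vote to the wrong class; holding them out restores accuracy, so that provenance group is flagged.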
-
Publication number: 20230231875
Abstract: Computer-implemented methods, program products, and systems for provenance-based defense against poison attacks are disclosed. In one approach, a method includes: receiving observations and corresponding provenance data from data sources; determining whether the observations are poisoned based on the corresponding provenance data; and removing the poisoned observation(s) from a final training dataset used to train a final prediction model. Another implementation involves provenance-based defense against poison attacks in a fully untrusted data environment. Untrusted data points are grouped according to provenance signature, and the groups are used to train learning algorithms and generate complete and filtered prediction models. The results of applying the prediction models to an evaluation dataset are compared, and poisoned data points are identified where the performance of the filtered prediction model exceeds the performance of the complete prediction model.
Type: Application
Filed: March 22, 2023
Publication date: July 20, 2023
Inventors: Nathalie Baracaldo-Angel, Bryant Chen, Evelyn Duesterwald, Heiko H. Ludwig
-
Patent number: 11689566
Abstract: Computer-implemented methods, program products, and systems for provenance-based defense against poison attacks are disclosed. In one approach, a method includes: receiving observations and corresponding provenance data from data sources; determining whether the observations are poisoned based on the corresponding provenance data; and removing the poisoned observation(s) from a final training dataset used to train a final prediction model. Another implementation involves provenance-based defense against poison attacks in a fully untrusted data environment. Untrusted data points are grouped according to provenance signature, and the groups are used to train learning algorithms and generate complete and filtered prediction models. The results of applying the prediction models to an evaluation dataset are compared, and poisoned data points are identified where the performance of the filtered prediction model exceeds the performance of the complete prediction model.
Type: Grant
Filed: July 10, 2018
Date of Patent: June 27, 2023
Assignee: International Business Machines Corporation
Inventors: Nathalie Baracaldo-Angel, Bryant Chen, Evelyn Duesterwald, Heiko H. Ludwig
-
Patent number: 11663662
Abstract: There are provided systems and methods for automatic adjustment of limits based on machine learning forecasting. An entity, such as a company or other organization, may purchase items utilizing a payment instrument or card provided to the company by a credit provider system or entity. In order to provide proper underwriting for credit extensions, such as balances and limits of extendable credit, the credit provider system may utilize a forecasting machine learning (ML) model trained to predict a future global balance of funds or a likelihood of repayment of the extended credit limit. This may be based on balance information retrievable from a banking system and the staleness of that data. When the data is stale and has not been updated, the forecasted balance may have a wider range, and thus risk factors may designate lower, less risky limits.
Type: Grant
Filed: December 22, 2021
Date of Patent: May 30, 2023
Assignee: Brex Inc.
Inventors: Bryant Chen, Lillian Xu, Jeanette Jin
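The staleness idea in this abstract, that older balance data widens the forecast range and so lowers the limit, can be sketched as follows. The interval-widening rate and the rule of setting the limit from the conservative end of the range are invented assumptions, not the patented underwriting logic.

```python
# Hedged sketch: the staler the retrieved balance data, the wider the
# forecast interval, and the lower the credit limit derived from it.

def forecast_limit(balance, days_stale, base_ratio=0.5, widen_per_day=0.02):
    """Return (interval, limit). The forecast range widens with staleness,
    and the limit is set from the lower (conservative) end of the range."""
    spread = balance * widen_per_day * days_stale
    interval = (balance - spread, balance + spread)
    limit = max(0.0, base_ratio * interval[0])
    return interval, limit

fresh = forecast_limit(10_000.0, days_stale=0)
stale = forecast_limit(10_000.0, days_stale=10)
```

With fresh data the forecast collapses to a point and the full base ratio applies; ten days of staleness widens the interval and pushes the limit down.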
-
Patent number: 11645515
Abstract: Embodiments relate to a system, program product, and method for automatically determining which activation data points in a neural model have been poisoned to erroneously indicate association with a particular label or labels. A neural network is trained using potentially poisoned training data. Each of the training data points is classified using the network to retain the activations of the last hidden layer, and segment those activations by the label of corresponding training data. Clustering is applied to the retained activations of each segment, and a cluster assessment is conducted for each cluster associated with each label to distinguish clusters with potentially poisoned activations from clusters populated with legitimate activations. The assessment includes executing a set of analyses and integrating the results of the analyses into a determination as to whether a training data set is poisonous based on determining if resultant activation clusters are poisoned.
Type: Grant
Filed: September 16, 2019
Date of Patent: May 9, 2023
Assignee: International Business Machines Corporation
Inventors: Nathalie Baracaldo Angel, Bryant Chen, Biplav Srivastava, Heiko H. Ludwig
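The activation-clustering step described in this family of abstracts can be illustrated with a toy sketch. The one-dimensional two-means clustering, the size-ratio cluster assessment, and all names here are simplifying assumptions, not the patented algorithm, which operates on full activation vectors and integrates several analyses.

```python
# Toy sketch: segment (scalar) last-hidden-layer activations by label,
# split each label's activations into two clusters with a tiny 1-D
# two-means loop, and flag a markedly smaller cluster as suspect.

def two_means_1d(values, iters=20):
    c = [min(values), max(values)]  # initial centers at the extremes
    clusters = ([], [])
    for _ in range(iters):
        clusters = ([], [])
        for v in values:
            clusters[abs(v - c[1]) < abs(v - c[0])].append(v)
        c = [sum(cl) / len(cl) if cl else c[i] for i, cl in enumerate(clusters)]
    return clusters

def flag_poisoned(activations_by_label, ratio=0.35):
    """activations_by_label: {label: [scalar activation, ...]}.
    Returns {label: activations in the suspiciously small cluster}."""
    flagged = {}
    for label, acts in activations_by_label.items():
        a, b = two_means_1d(acts)
        small = a if len(a) < len(b) else b
        if len(small) / len(acts) < ratio:
            flagged[label] = sorted(small)
    return flagged

acts = {"cat": [0.1, 0.12, 0.11, 0.09, 0.9, 0.92],  # two outlying points
        "dog": [0.5, 0.52, 0.49, 0.51]}
result = flag_poisoned(acts)
```

The "cat" label splits into a large legitimate cluster and a small outlying one, which is flagged; the "dog" activations split evenly, so nothing is flagged for that label.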
-
Patent number: 11601468
Abstract: Systems, computer-implemented methods, and computer program products that can facilitate detection of an adversarial backdoor attack on a trained model at inference time are provided. According to an embodiment, a system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise a log component that records predictions and corresponding activation values generated by a trained model based on inference requests. The computer executable components can further comprise an analysis component that employs a model at an inference time to detect a backdoor trigger request based on the predictions and the corresponding activation values. In some embodiments, the log component records the predictions and the corresponding activation values from one or more layers of the trained model.
Type: Grant
Filed: June 25, 2019
Date of Patent: March 7, 2023
Assignee: International Business Machines Corporation
Inventors: Nathalie Baracaldo Angel, Yi Zhou, Bryant Chen, Ali Anwar, Heiko H. Ludwig
-
Publication number: 20230005055
Abstract: There are provided systems and methods for automatic adjustment of limits based on machine learning forecasting. An entity, such as a company or other organization, may purchase items utilizing a payment instrument or card provided to the company by a credit provider system or entity. In order to provide proper underwriting for credit extensions, such as balances and limits of extendable credit, the credit provider system may utilize a forecasting machine learning (ML) model trained to predict a future global balance of funds or a likelihood of repayment of the extended credit limit. This may be based on balance information retrievable from a banking system and the staleness of that data. When the data is stale and has not been updated, the forecasted balance may have a wider range, and thus risk factors may designate lower, less risky limits.
Type: Application
Filed: December 22, 2021
Publication date: January 5, 2023
Inventors: Bryant Chen, Lillian Xu, Jeanette Jin
-
Patent number: 11538236
Abstract: Embodiments relate to a system, program product, and method for processing an untrusted data set to automatically determine which data points therein are poisonous. A neural network is trained using potentially poisoned training data. Each of the training data points is classified using the network to retain the activations of at least one hidden layer, and segment those activations by the label of corresponding training data. Clustering is applied to the retained activations of each segment, and a clustering assessment is conducted to remove an identified cluster from the data set, form a new training set, and train a second neural model with the new training set. The removed cluster and corresponding data are applied to the trained second neural model to analyze and classify data in the removed cluster as either legitimate or poisonous.
Type: Grant
Filed: September 16, 2019
Date of Patent: December 27, 2022
Assignee: International Business Machines Corporation
Inventors: Nathalie Baracaldo Angel, Bryant Chen, Heiko H. Ludwig
-
Patent number: 11487963
Abstract: Embodiments relate to a system, program product, and method for automatically determining which activation data points in a neural model have been poisoned to erroneously indicate association with a particular label or labels. A neural network is trained using potentially poisoned training data. Each of the training data points is classified using the network to retain the activations of the last hidden layer, and segment those activations by the label of corresponding training data. Clustering is applied to the retained activations of each segment, and a cluster assessment is conducted for each cluster associated with each label to distinguish clusters with potentially poisoned activations from clusters populated with legitimate activations. The assessment includes analyzing, for each cluster, a distance of a median of the activations therein to medians of the activations in the labels.
Type: Grant
Filed: September 16, 2019
Date of Patent: November 1, 2022
Assignee: International Business Machines Corporation
Inventors: Nathalie Baracaldo Angel, Bryant Chen, Biplav Srivastava, Heiko H. Ludwig
-
Publication number: 20220114259
Abstract: One or more computer processors determine a tolerance value, and a norm value associated with an untrusted model and an adversarial training method. The one or more computer processors generate a plurality of interpolated adversarial images ranging between a pair of images utilizing the adversarial training method, wherein each image in the pair of images is from a different class. The one or more computer processors detect a backdoor associated with the untrusted model utilizing the generated plurality of interpolated adversarial images. The one or more computer processors harden the untrusted model by training the untrusted model with the generated plurality of interpolated adversarial images.
Type: Application
Filed: October 13, 2020
Publication date: April 14, 2022
Inventors: Heiko H. Ludwig, Ebube Chuba, Bryant Chen, Benjamin James Edwards, Taesung Lee, Ian Michael Molloy
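The image-interpolation step this abstract builds on can be sketched minimally. Only the plain linear blend between a pair of images from different classes is shown; the adversarial perturbation, the tolerance/norm parameters, and the detection step are omitted, and the representation (images as flat pixel lists) is an assumption.

```python
# Sketch: generate images linearly interpolated between two images from
# different classes, excluding the endpoints. Images are flat pixel lists.

def interpolate_pair(x0, x1, steps=5):
    """Return `steps` images blended between x0 and x1."""
    out = []
    for k in range(1, steps + 1):
        t = k / (steps + 1)
        out.append([(1 - t) * a + t * b for a, b in zip(x0, x1)])
    return out

imgs = interpolate_pair([0.0, 0.0], [1.0, 1.0], steps=3)
```

In the patented setting, such interpolations (made adversarial under the given norm and tolerance) are fed to the untrusted model both to probe for a backdoor and to harden the model by retraining.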
-
Patent number: 11188789
Abstract: One embodiment provides a method comprising receiving a training set comprising a plurality of data points, where a neural network is trained as a classifier based on the training set. The method further comprises, for each data point of the training set, classifying the data point with one of a plurality of classification labels using the trained neural network, and recording neuronal activations of a portion of the trained neural network in response to the data point. The method further comprises, for each classification label that a portion of the training set has been classified with, clustering a portion of all recorded neuronal activations that are in response to the portion of the training set, and detecting one or more poisonous data points in the portion of the training set based on the clustering.
Type: Grant
Filed: August 7, 2018
Date of Patent: November 30, 2021
Assignee: International Business Machines Corporation
Inventors: Bryant Chen, Wilka Carvalho, Heiko H. Ludwig, Ian Michael Molloy, Taesung Lee, Jialong Zhang, Benjamin J. Edwards
-
Patent number: 11132444
Abstract: Mechanisms are provided for evaluating a trained machine learning model to determine whether the machine learning model has a backdoor trigger. The mechanisms process a test dataset to generate output classifications for the test dataset, and generate, for the test dataset, gradient data indicating a degree of change of elements within the test dataset based on the output generated by processing the test dataset. The mechanisms analyze the gradient data to identify a pattern of elements within the test dataset indicative of a backdoor trigger. The mechanisms generate, in response to the analysis identifying the pattern of elements indicative of a backdoor trigger, an output indicating the existence of the backdoor trigger in the trained machine learning model.
Type: Grant
Filed: April 16, 2018
Date of Patent: September 28, 2021
Assignee: International Business Machines Corporation
Inventors: Wilka Carvalho, Bryant Chen, Benjamin J. Edwards, Taesung Lee, Ian M. Molloy, Jialong Zhang
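The gradient-analysis idea in this abstract can be sketched with an illustrative toy. The finite-difference gradient estimate, the averaging over the test set, and the fixed threshold are all assumptions standing in for the patented mechanism; the point is only that input elements with consistently large gradients form a candidate trigger pattern.

```python
# Sketch: estimate per-element input gradients of a model's output by
# finite differences, average their magnitudes over a test dataset, and
# flag elements with consistently large gradient as a candidate trigger.

def finite_diff_grad(f, x, eps=1e-4):
    grad = []
    for i in range(len(x)):
        xp = list(x); xp[i] += eps
        xm = list(x); xm[i] -= eps
        grad.append((f(xp) - f(xm)) / (2 * eps))
    return grad

def candidate_trigger(f, dataset, threshold):
    n = len(dataset[0])
    mean_abs = [0.0] * n
    for x in dataset:
        g = finite_diff_grad(f, x)
        for i in range(n):
            mean_abs[i] += abs(g[i]) / len(dataset)
    return [i for i in range(n) if mean_abs[i] > threshold]

# Toy "backdoored" score function: dominated by input element 2.
f = lambda x: 0.1 * x[0] + 0.1 * x[1] + 5.0 * x[2]
pattern = candidate_trigger(f, [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]], threshold=1.0)
```

Here element 2 dominates the output across the whole test set, so it alone is flagged as the trigger pattern.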
-
Publication number: 20210081718
Abstract: Embodiments relate to a system, program product, and method for processing an untrusted data set to automatically determine which data points therein are poisonous. A neural network is trained using potentially poisoned training data. Each of the training data points is classified using the network to retain the activations of at least one hidden layer, and segment those activations by the label of corresponding training data. Clustering is applied to the retained activations of each segment, and a clustering assessment is conducted to remove an identified cluster from the data set, form a new training set, and train a second neural model with the new training set. The removed cluster and corresponding data are applied to the trained second neural model to analyze and classify data in the removed cluster as either legitimate or poisonous.
Type: Application
Filed: September 16, 2019
Publication date: March 18, 2021
Applicant: International Business Machines Corporation
Inventors: Nathalie Baracaldo Angel, Bryant Chen, Heiko H. Ludwig
-
Publication number: 20210081831
Abstract: Embodiments relate to a system, program product, and method for automatically determining which activation data points in a neural model have been poisoned to erroneously indicate association with a particular label or labels. A neural network is trained using potentially poisoned training data. Each of the training data points is classified using the network to retain the activations of the last hidden layer, and segment those activations by the label of corresponding training data. Clustering is applied to the retained activations of each segment, and a cluster assessment is conducted for each cluster associated with each label to distinguish clusters with potentially poisoned activations from clusters populated with legitimate activations. The assessment includes executing a set of analyses and integrating the results of the analyses into a determination as to whether a training data set is poisonous based on determining if resultant activation clusters are poisoned.
Type: Application
Filed: September 16, 2019
Publication date: March 18, 2021
Applicant: International Business Machines Corporation
Inventors: Nathalie Baracaldo Angel, Bryant Chen, Biplav Srivastava, Heiko H. Ludwig
-
Publication number: 20210081708
Abstract: Embodiments relate to a system, program product, and method for automatically determining which activation data points in a neural model have been poisoned to erroneously indicate association with a particular label or labels. A neural network is trained using potentially poisoned training data. Each of the training data points is classified using the network to retain the activations of the last hidden layer, and segment those activations by the label of corresponding training data. Clustering is applied to the retained activations of each segment, and a cluster assessment is conducted for each cluster associated with each label to distinguish clusters with potentially poisoned activations from clusters populated with legitimate activations. The assessment includes analyzing, for each cluster, a distance of a median of the activations therein to medians of the activations in the labels.
Type: Application
Filed: September 16, 2019
Publication date: March 18, 2021
Applicant: International Business Machines Corporation
Inventors: Nathalie Baracaldo Angel, Bryant Chen, Biplav Srivastava, Heiko H. Ludwig
-
Publication number: 20200412743
Abstract: Systems, computer-implemented methods, and computer program products that can facilitate detection of an adversarial backdoor attack on a trained model at inference time are provided. According to an embodiment, a system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise a log component that records predictions and corresponding activation values generated by a trained model based on inference requests. The computer executable components can further comprise an analysis component that employs a model at an inference time to detect a backdoor trigger request based on the predictions and the corresponding activation values. In some embodiments, the log component records the predictions and the corresponding activation values from one or more layers of the trained model.
Type: Application
Filed: June 25, 2019
Publication date: December 31, 2020
Inventors: Nathalie Baracaldo Angel, Yi Zhou, Bryant Chen, Ali Anwar, Heiko H. Ludwig
-
Patent number: 10688029
Abstract: Compositions, kits and methods are provided for restoring moisture and retarding the aging process in mature skin. In general, ions, combined amino acids, fatty acids and polyols are included in a physiologically acceptable medium. The compositions, kits and methods can be used as cosmetics, cosmeceuticals or pharmaceuticals for improving mature skin condition, and preventing or treating the aging process and/or lack of moisture.
Type: Grant
Filed: July 11, 2018
Date of Patent: June 23, 2020
Assignee: DERMSOLACE BIOTECHNOLOGY LLC
Inventors: Wen Ching Chen, Shu Chen Wang, Bryant Chen, Hanafi Tanojo
-
Publication number: 20200050945
Abstract: One embodiment provides a method comprising receiving a training set comprising a plurality of data points, where a neural network is trained as a classifier based on the training set. The method further comprises, for each data point of the training set, classifying the data point with one of a plurality of classification labels using the trained neural network, and recording neuronal activations of a portion of the trained neural network in response to the data point. The method further comprises, for each classification label that a portion of the training set has been classified with, clustering a portion of all recorded neuronal activations that are in response to the portion of the training set, and detecting one or more poisonous data points in the portion of the training set based on the clustering.
Type: Application
Filed: August 7, 2018
Publication date: February 13, 2020
Inventors: Bryant Chen, Wilka Carvalho, Heiko H. Ludwig, Ian Michael Molloy, Taesung Lee, Jialong Zhang, Benjamin J. Edwards
-
Publication number: 20200019821
Abstract: Computer-implemented methods, program products, and systems for provenance-based defense against poison attacks are disclosed. In one approach, a method includes: receiving observations and corresponding provenance data from data sources; determining whether the observations are poisoned based on the corresponding provenance data; and removing the poisoned observation(s) from a final training dataset used to train a final prediction model. Another implementation involves provenance-based defense against poison attacks in a fully untrusted data environment. Untrusted data points are grouped according to provenance signature, and the groups are used to train learning algorithms and generate complete and filtered prediction models. The results of applying the prediction models to an evaluation dataset are compared, and poisoned data points are identified where the performance of the filtered prediction model exceeds the performance of the complete prediction model.
Type: Application
Filed: July 10, 2018
Publication date: January 16, 2020
Inventors: Nathalie Baracaldo-Angel, Bryant Chen, Evelyn Duesterwald, Heiko H. Ludwig