Patents by Inventor Marek Oszajec

Marek Oszajec has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO). Illustrative code sketches of the main inventions follow the listing.

  • Patent number: 11003910
    Abstract: A first and second scoring endpoint with payload logging are deployed. At the second scoring endpoint, native data and a user-generated score for the native data are received, the native data is pre-processed into readable data for the deep-learning model, and the user-generated score and the readable data are output to the first scoring endpoint, which is associated directly with the deep-learning model. A raw payload that includes the native data is output to a payload store. At the first scoring endpoint, the readable data and the user-generated score are processed by the deep-learning model, which outputs a transformed payload and a prediction, respectively, to the payload store. The raw payload is matched with the transformed payload and the prediction to produce a comprehensive data set, which is evaluated to describe a set of transformation parameters. The deep-learning model is retrained to account for the set of transformation parameters.
    Type: Grant
    Filed: July 17, 2019
    Date of Patent: May 11, 2021
    Assignee: International Business Machines Corporation
    Inventors: Rafal Bigaj, Lukasz G. Cmielowski, Marek Oszajec, Maksymilian Erazmus
  • Patent number: 10885332
    Abstract: A first and second scoring endpoint with payload logging are deployed. At the second scoring endpoint, native data and a user-generated score for the native data are received, the native data is pre-processed into readable data for the deep-learning model, and the user-generated score and the readable data are output to the first scoring endpoint, which is associated directly with the deep-learning model. A raw payload that includes the native data is output to a payload store. At the first scoring endpoint, the readable data and the user-generated score are processed by the deep-learning model, which outputs a transformed payload and a prediction, respectively, to the payload store. The raw payload is matched with the transformed payload and the prediction to produce a comprehensive data set, which is evaluated to describe a set of transformation parameters. The deep-learning model is retrained to account for the set of transformation parameters.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: January 5, 2021
    Assignee: International Business Machines Corporation
    Inventors: Rafal Bigaj, Lukasz G. Cmielowski, Marek Oszajec, Maksymilian Erazmus
  • Publication number: 20200293775
    Abstract: A first and second scoring endpoint with payload logging are deployed. At the second scoring endpoint, native data and a user-generated score for the native data are received, the native data is pre-processed into readable data for the deep-learning model, and the user-generated score and the readable data are output to the first scoring endpoint, which is associated directly with the deep-learning model. A raw payload that includes the native data is output to a payload store. At the first scoring endpoint, the readable data and the user-generated score are processed by the deep-learning model, which outputs a transformed payload and a prediction, respectively, to the payload store. The raw payload is matched with the transformed payload and the prediction to produce a comprehensive data set, which is evaluated to describe a set of transformation parameters. The deep-learning model is retrained to account for the set of transformation parameters.
    Type: Application
    Filed: July 17, 2019
    Publication date: September 17, 2020
    Inventors: Rafal Bigaj, Lukasz G. Cmielowski, Marek Oszajec, Maksymilian Erazmus
  • Publication number: 20200293774
    Abstract: A first and second scoring endpoint with payload logging are deployed. At the second scoring endpoint, native data and a user-generated score for the native data are received, the native data is pre-processed into readable data for the deep-learning model, and the user-generated score and the readable data are output to the first scoring endpoint, which is associated directly with the deep-learning model. A raw payload that includes the native data is output to a payload store. At the first scoring endpoint, the readable data and the user-generated score are processed by the deep-learning model, which outputs a transformed payload and a prediction, respectively, to the payload store. The raw payload is matched with the transformed payload and the prediction to produce a comprehensive data set, which is evaluated to describe a set of transformation parameters. The deep-learning model is retrained to account for the set of transformation parameters.
    Type: Application
    Filed: March 15, 2019
    Publication date: September 17, 2020
    Inventors: Rafal Bigaj, Lukasz G. Cmielowski, Marek Oszajec, Maksymilian Erazmus
  • Patent number: 10761958
    Abstract: A processor may acquire a trained predictive computational model from a database. The processor may apply a trained reduced complexity model to the trained predictive computational model. The trained reduced complexity model may be associated with the trained predictive computational model. The processor may select at least one metric. The processor may determine a quality indicator related to the at least one metric by identifying the type of the at least one metric, evaluating the output of the trained predictive computational model in relation to the type of the at least one metric, and generating, based on the evaluation of the trained predictive computational model, a threshold associated with the at least one metric. The processor may determine the accuracy of the trained predictive computational model based on the quality indicator.
    Type: Grant
    Filed: March 19, 2018
    Date of Patent: September 1, 2020
    Assignee: International Business Machines Corporation
    Inventors: Wojciech Sobala, Umit M. Cakmak, Marek Oszajec, Lukasz G. Cmielowski
  • Publication number: 20200065630
    Abstract: Embodiments of the present invention provide a method, system and computer program product for automated early anomaly detection in a continuous learning model. In an embodiment of the invention, a method includes training a continuous learning model with a training data set of different records and a known target class for each of the different records, deploying the model, and monitoring performance of the model. The method further includes prior to receiving a complete feedback data set for the model, computing a metric in the model based upon unseen records in the model that had not been present in the training data set, determining poor quality of the model for a metric computed to exceed a threshold value and displaying a recommendation in the host server to retrain the model responsive to the determination of poor quality of the model.
    Type: Application
    Filed: August 21, 2018
    Publication date: February 27, 2020
    Inventors: Lukasz G. Cmielowski, Wojciech Sobala, Umit M. Cakmak, Marek Oszajec
  • Patent number: 10535001
    Abstract: A method for training a deep learning algorithm using N-dimensional data sets may be provided. Each data set comprises a plurality of N-1-dimensional data sets. The method comprises selecting a batch size and assembling an equally sized training batch. The samples are selected to be evenly distributed within said respective N-dimensional data sets. The method comprises also starting from a predetermined offset number, wherein the number of samples is equal to the selected batch size number, and feeding said training batches of N-1-dimensional samples into a deep learning algorithm for the training. Upon the training resulting in a learning rate that is below a predetermined level, selecting a different offset number for at least one of said N-dimensional data sets, and going back to the step of assembling. Upon the training resulting in a learning rate that is equal or higher than said predetermined level, the method stops.
    Type: Grant
    Filed: November 6, 2017
    Date of Patent: January 14, 2020
    Assignee: International Business Machines Corporation
    Inventors: Umit Cakmak, Lukasz G. Cmielowski, Marek Oszajec, Wojciech Sobala
  • Publication number: 20190286541
    Abstract: A processor may acquire a trained predictive computational model from a database. The processor may apply a trained reduced complexity model to the trained predictive computational model. The trained reduced complexity model may be associated with the trained predictive computational model. The processor may select at least one metric. The processor may determine a quality indicator related to the at least one metric by identifying the type of the at least one metric, evaluating the output of the trained predictive computational model in relation to the type of the at least one metric, and generating, based on the evaluation of the trained predictive computational model, a threshold associated with the at least one metric. The processor may determine the accuracy of the trained predictive computational model based on the quality indicator.
    Type: Application
    Filed: March 19, 2018
    Publication date: September 19, 2019
    Inventors: Wojciech Sobala, Umit M. Cakmak, Marek Oszajec, Lukasz G. Cmielowski
  • Publication number: 20190138906
    Abstract: A method for training a deep learning algorithm using N-dimensional data sets may be provided. Each data set comprises a plurality of N-1-dimensional data sets. The method comprises selecting a batch size and assembling an equally sized training batch. The samples are selected to be evenly distributed within said respective N-dimensional data sets. The method comprises also starting from a predetermined offset number, wherein the number of samples is equal to the selected batch size number, and feeding said training batches of N-1-dimensional samples into a deep learning algorithm for the training. Upon the training resulting in a learning rate that is below a predetermined level, selecting a different offset number for at least one of said N-dimensional data sets, and going back to the step of assembling. Upon the training resulting in a learning rate that is equal or higher than said predetermined level, the method stops.
    Type: Application
    Filed: November 6, 2017
    Publication date: May 9, 2019
    Inventors: Umit Cakmak, Lukasz G. Cmielowski, Marek Oszajec, Wojciech Sobala
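
The payload-logging scheme described in patents 11003910 and 10885332 (and in publications 20200293775 and 20200293774) can be pictured with a minimal, self-contained sketch. Everything below is illustrative: the class and function names are hypothetical, the "deep-learning model" is stubbed with a trivial scorer, and the payload store is an in-memory list rather than a real service.

```python
# Minimal sketch of the two-endpoint payload-logging flow from the abstract.
# All names are hypothetical; the model and preprocessing are trivial stubs.
import uuid

payload_store = []  # shared payload store: raw and transformed payloads land here


class ModelEndpoint:
    """First scoring endpoint, associated directly with the model."""

    def __init__(self, model):
        self.model = model

    def score(self, transaction_id, readable_data, user_score):
        prediction = self.model(readable_data)
        # Log the transformed payload and the prediction for later matching.
        payload_store.append({
            "transaction_id": transaction_id,
            "kind": "transformed",
            "readable_data": readable_data,
            "user_score": user_score,
            "prediction": prediction,
        })
        return prediction


class PreprocessingEndpoint:
    """Second scoring endpoint: pre-processes native data and logs the raw payload."""

    def __init__(self, model_endpoint, preprocess):
        self.model_endpoint = model_endpoint
        self.preprocess = preprocess

    def score(self, native_data, user_score):
        transaction_id = str(uuid.uuid4())
        # Log the raw payload (native data) to the payload store.
        payload_store.append({
            "transaction_id": transaction_id,
            "kind": "raw",
            "native_data": native_data,
            "user_score": user_score,
        })
        readable_data = self.preprocess(native_data)
        return self.model_endpoint.score(transaction_id, readable_data, user_score)


def build_comprehensive_dataset():
    """Match raw payloads with transformed payloads and predictions by transaction id."""
    raw = {p["transaction_id"]: p for p in payload_store if p["kind"] == "raw"}
    transformed = {p["transaction_id"]: p for p in payload_store if p["kind"] == "transformed"}
    return [{**raw[tid], **transformed[tid]} for tid in raw.keys() & transformed.keys()]


if __name__ == "__main__":
    # Stub preprocessing and model: parse a comma-separated string, "predict" the mean.
    model = lambda xs: sum(xs) / len(xs)
    preprocess = lambda s: [float(v) for v in s.split(",")]

    endpoint = PreprocessingEndpoint(ModelEndpoint(model), preprocess)
    endpoint.score("1.0,2.0,3.0", user_score=2.5)

    # The comprehensive data set pairs native data, readable data, the user score and
    # the prediction; transformation parameters would be derived from it before retraining.
    print(build_comprehensive_dataset())
```

The key point the sketch tries to capture is that the raw and transformed payloads are logged separately and only joined afterwards (here by a shared transaction id) to form the comprehensive data set used for retraining.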
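For the quality-indicator approach of patent 10761958 and publication 20190286541, a rough sketch using scikit-learn follows. The choice of models, the use of a shallow decision tree as the reduced-complexity (surrogate) model, and the thresholding rule are assumptions made for illustration, not the patented method.

```python
# Sketch: approximate a trained predictive model with a reduced-complexity (surrogate)
# model, select a metric, derive a threshold from the full model's evaluation, and use
# the comparison as a quality indicator.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Trained predictive computational model" (here trained in place rather than acquired
# from a database).
full_model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# "Trained reduced complexity model" associated with it: a shallow tree fit to the
# full model's predictions (a surrogate).
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, full_model.predict(X_train))

# Selected metric: accuracy. Evaluate the full model and generate a threshold from
# that evaluation (a fixed margin below the full model's score is an assumed rule).
metric = accuracy_score
full_score = metric(y_test, full_model.predict(X_test))
threshold = full_score - 0.05

# Quality indicator: does the surrogate stay above the threshold on the metric?
surrogate_score = metric(y_test, surrogate.predict(X_test))
quality_ok = surrogate_score >= threshold

print(f"full={full_score:.3f} surrogate={surrogate_score:.3f} "
      f"threshold={threshold:.3f} quality_ok={quality_ok}")
```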
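The early anomaly detection described in publication 20200065630 can be sketched as follows. The specific metric (the share of low-confidence predictions over unseen records) and the threshold value are stand-ins chosen for illustration, since the abstract does not name a concrete metric.

```python
# Sketch of early anomaly detection before feedback (labels) arrives: train and
# "deploy" a model, compute a metric over unseen records only, and recommend
# retraining when the metric exceeds a threshold.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data set with known target classes for each record.
X_train = rng.normal(0.0, 1.0, size=(500, 4))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X_train, y_train)

# Unseen records arriving after deployment; no feedback (labels) yet.
# Drift is simulated by shifting the feature distribution.
X_unseen = rng.normal(1.5, 1.0, size=(200, 4))

# Metric computed on unseen records only: fraction of predictions whose top-class
# probability falls below a confidence cut-off (an assumed stand-in metric).
top_proba = model.predict_proba(X_unseen).max(axis=1)
low_confidence_rate = float((top_proba < 0.7).mean())

THRESHOLD = 0.3  # assumed threshold for "poor quality"
if low_confidence_rate > THRESHOLD:
    print(f"low-confidence rate {low_confidence_rate:.2f} > {THRESHOLD}: recommend retraining")
else:
    print(f"low-confidence rate {low_confidence_rate:.2f} within threshold")
```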
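Finally, the batch-assembly idea of patent 10535001 and publication 20190138906 is sketched below. The training step is stubbed, and "learning rate" is read loosely as a measured rate of improvement, which is an interpretation of the abstract's wording rather than its exact meaning.

```python
# Sketch: from an N-dimensional data set, assemble a batch of evenly spaced
# (N-1)-dimensional samples starting at an offset, feed it to training, and move
# to a different offset when progress stays below a predetermined level.
import numpy as np

def assemble_batch(data_set, batch_size, offset):
    """Select batch_size samples evenly distributed over the data set, from an offset."""
    n = len(data_set)
    stride = max(n // batch_size, 1)
    indices = [(offset + i * stride) % n for i in range(batch_size)]
    return data_set[indices]

def train_step(batch):
    """Stub: returns a pseudo 'improvement' value instead of a real training update."""
    return float(np.std(batch))

rng = np.random.default_rng(0)
data_set = rng.normal(size=(1000, 28, 28))  # N-dimensional set of (N-1)-dimensional samples

batch_size, offset, level = 32, 0, 0.9
for _ in range(10):
    batch = assemble_batch(data_set, batch_size, offset)
    improvement = train_step(batch)
    if improvement < level:
        offset += 1   # pick a different offset and go back to assembling
    else:
        break         # improvement meets the predetermined level: stop
print(f"stopped at offset={offset}, improvement={improvement:.3f}")
```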