Patents by Inventor Nway Nway AUNG

Nway Nway AUNG has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250086389
    Abstract: According to an embodiment, a method for generating textual features corresponding to text documents from a raw dataset is disclosed. The method includes preprocessing the text documents and determining topic probability scores (TPS) and confidence scores (CS) using unsupervised and supervised machine learning models, respectively. The combination of TPS and CS is used to generate a compound distribution score (CDS), which forms a comprehensive representation of the output of the machine learning models. The determined TPS, CS, and CDS are then used to generate a set of textual features, which serve as independent variables for a forecasting model.
    Type: Application
    Filed: September 12, 2023
    Publication date: March 13, 2025
    Inventors: Gayathri SARANATHAN, Nway Nway AUNG, Ariel BECK, Chandra Suwandi WIJAYA, Jianyu CHEN, Debdeep PAUL, Sahim YAMAURA, Koji MIURA
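A minimal sketch of the scoring pipeline this abstract describes. The filing only states that TPS and CS are combined into a CDS; the element-wise product used below, and all function names, are assumptions for illustration.

```python
# Hypothetical TPS/CS/CDS feature construction (combination rule assumed).

def compound_distribution_score(tps, cs):
    """Combine topic probability scores (unsupervised model) with
    confidence scores (supervised model) into a compound score."""
    assert len(tps) == len(cs)
    return [t * c for t, c in zip(tps, cs)]

def textual_features(tps, cs):
    """Concatenate TPS, CS, and CDS into one feature vector: the
    independent variables for a downstream forecasting model."""
    return list(tps) + list(cs) + compound_distribution_score(tps, cs)

# Example: three topics for one preprocessed document
tps = [0.7, 0.2, 0.1]   # from an unsupervised topic model
cs = [0.9, 0.5, 0.3]    # from a supervised classifier
features = textual_features(tps, cs)
```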
  • Publication number: 20240233344
    Abstract: According to an embodiment, a method for estimating robustness of a trained machine learning model is disclosed. The method comprises receiving a labelled dataset, a model of an object for which defect detection is required, and the trained machine learning model. Further, the method comprises determining one or more parameters associated with image capturing conditions in the environment. Furthermore, the method comprises performing an auto extraction of one or more defects using the model of the object and the labelled dataset based on image processing. Furthermore, the method comprises generating one or more images based on the one or more parameters and the one or more defects. Additionally, the method comprises testing the trained machine learning model using the generated images. Moreover, the method comprises estimating a robustness report for the machine learning model based on the testing of the machine learning model.
    Type: Application
    Filed: October 25, 2022
    Publication date: July 11, 2024
    Inventors: Yuya SUGASAWA, Hisaji MURATA, Nway Nway AUNG, Ariel BECK, Zong Sheng TANG
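A hedged sketch of the robustness-testing loop described above: vary a capture-condition parameter (brightness gain, an assumed example), run the trained model on each generated variant, and summarise the results as a simple robustness report. The toy model and threshold are illustrative only.

```python
import numpy as np

def apply_brightness(img, gain):
    # Simulate a different image-capturing condition via a brightness gain.
    return np.clip(img * gain, 0, 255).astype(np.uint8)

def robustness_report(model, base_img, defect_mask, gains):
    # Test the model under each generated condition and report results.
    results = {}
    for g in gains:
        variant = apply_brightness(base_img, g)
        results[g] = model(variant, defect_mask)  # True if defect detected
    detected = sum(results.values())
    return {"per_condition": results,
            "detection_rate": detected / len(gains)}

# Toy model: "detects" the defect only when the region is not saturated.
toy_model = lambda img, mask: img[mask].mean() < 250
base = np.full((8, 8), 128, dtype=np.uint8)
mask = np.zeros((8, 8), dtype=bool)
mask[2:4, 2:4] = True   # location of an extracted defect patch
report = robustness_report(toy_model, base, mask, gains=[0.5, 1.0, 2.0])
```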
  • Publication number: 20240185576
    Abstract: An image determination device according to the present disclosure includes: a trainer that obtains one or more first models by training machine learning models of one or more types with use of a first training data set including first images and first labels, and obtains one or more second models by training machine learning models of one or more types with use of one or more second training data sets each including second images different from the first images, second labels, and at least part of the first training data set; an image obtainer that obtains a target image; and a determiner that outputs a determination result of a label of the target image obtained by the image obtainer, which is obtained by using, for the target image, at least two models including one of the one or more first models and one of the one or more second models.
    Type: Application
    Filed: March 14, 2022
    Publication date: June 6, 2024
    Inventors: Yuya SUGASAWA, Yoshinori SATOU, Hisaji MURATA, Jeffery FERNANDO, Yao ZHOU, Nway Nway AUNG
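An illustrative determiner combining one "first" model (trained on the first training data set) with one "second" model (trained on second images plus part of the first set). Averaging class scores is an assumed combination rule; the filing only requires that at least two such models are used for the target image.

```python
def determine_label(first_model, second_model, image):
    # Average the class scores of the two models and return the top label.
    scores_a = first_model(image)          # e.g. {"ok": 0.8, "defect": 0.2}
    scores_b = second_model(image)
    combined = {k: (scores_a[k] + scores_b[k]) / 2 for k in scores_a}
    return max(combined, key=combined.get)

# Toy stand-ins for the trained first and second models
m1 = lambda img: {"ok": 0.8, "defect": 0.2}
m2 = lambda img: {"ok": 0.3, "defect": 0.7}
label = determine_label(m1, m2, image=None)
```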
  • Publication number: 20240160196
    Abstract: First, a plurality of models that predict categories of input data are pooled. At least one of the plurality of models is a model trained by machine learning. Next, each of a plurality of hybrid model candidates that judge the categories is created by selecting and combining two or more models from among the plurality of pooled models. Then, by comparing the plurality of hybrid model candidates, one of the plurality of hybrid model candidates is selected as a hybrid model.

    Type: Application
    Filed: March 25, 2022
    Publication date: May 16, 2024
    Inventors: Yao ZHOU, Athul M. MATHEW, Ariel BECK, Chandra Suwandi WIJAYA, Nway Nway AUNG, Khai Jun KEK, Yuya SUGASAWA, Jeffry FERNANDO, Yoshinori SATOU, Hisaji MURATA
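The pooling and selection loop above can be sketched as follows. Averaging each model's defect probability is an assumed combination rule, and validation accuracy is an assumed comparison criterion; the filing specifies neither.

```python
from itertools import combinations

def hybrid_predict(models, x):
    # Combine the pooled models by averaging their defect probabilities.
    p = sum(m(x) for m in models) / len(models)
    return int(p >= 0.5)

def select_hybrid(pool, val_x, val_y, size=2):
    # Build every hybrid candidate of the given size and keep the one
    # with the best validation accuracy.
    best, best_acc = None, -1.0
    for cand in combinations(pool, size):
        acc = sum(hybrid_predict(cand, x) == y
                  for x, y in zip(val_x, val_y)) / len(val_y)
        if acc > best_acc:
            best, best_acc = cand, acc
    return best, best_acc

pool = [
    lambda x: 0.9 if x > 0 else 0.1,   # accurate model
    lambda x: 0.8,                     # biased toward "defect"
    lambda x: 0.2,                     # biased toward "ok"
]
val_x, val_y = [-2, -1, 1, 2], [0, 0, 1, 1]
best, best_acc = select_hybrid(pool, val_x, val_y)
```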
  • Publication number: 20240135689
    Abstract: According to an embodiment, a method for estimating robustness of a trained machine learning model is disclosed. The method comprises receiving a labelled dataset, a model of an object for which defect detection is required, and the trained machine learning model. Further, the method comprises determining one or more parameters associated with image capturing conditions in the environment. Furthermore, the method comprises performing an auto extraction of one or more defects using the model of the object and the labelled dataset based on image processing. Furthermore, the method comprises generating one or more images based on the one or more parameters and the one or more defects. Additionally, the method comprises testing the trained machine learning model using the generated images. Moreover, the method comprises estimating a robustness report for the machine learning model based on the testing of the machine learning model.
    Type: Application
    Filed: October 24, 2022
    Publication date: April 25, 2024
    Inventors: Yuya SUGASAWA, Hisaji MURATA, Nway Nway AUNG, Ariel BECK, Zong Sheng TANG
  • Patent number: 11538355
    Abstract: A method for predicting a condition of a living being in an environment, the method including: capturing image data associated with the at least one person and, based thereupon, determining a current condition of the person; receiving content from a plurality of content sources with respect to said at least one person being imaged, said content defined by at least one of text and statistics; defining one or more weighted parameters based on allocating a plurality of weights to at least one of the captured image data and the received content based on the current condition; and predicting, by a predictive-analysis module, a condition of the at least one person based on analysis of the one or more weighted parameters.
    Type: Grant
    Filed: March 24, 2020
    Date of Patent: December 27, 2022
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Faye Juliano, Nway Nway Aung, Aimin Zhao, Eng Chye Lim, Khai Jun Kek
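A minimal sketch of the weighted-parameter step: scores derived from the captured image and from the received content are weighted according to the person's current condition, then combined into a predicted condition. The weight table, condition names, and 0.5 decision threshold are all assumptions, not from the filing.

```python
# Hypothetical weights per current condition: (image_weight, content_weight)
WEIGHTS = {
    "active": (0.7, 0.3),
    "resting": (0.4, 0.6),
}

def predict_condition(image_score, content_score, current_condition):
    # Allocate weights based on the current condition, then combine.
    w_img, w_txt = WEIGHTS[current_condition]
    score = w_img * image_score + w_txt * content_score
    return ("at_risk" if score >= 0.5 else "normal"), score

label, score = predict_condition(0.9, 0.2, "active")
```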
  • Patent number: 11521313
    Abstract: A method and system for checking data gathering conditions or image capturing conditions associated with images during an AI-based visual-inspection process. The method comprises generating a first representative image (FR1) for a first group of images and a second representative image (FR2) for a second group of images. Difference image data is generated between the FR1 image and the FR2 image based on calculating the difference between luminance values of pixels with the same coordinate values. Thereafter, one or more of a plurality of white pixels or intensity values are determined within the difference image based on acquiring difference image data formed of luminance difference values of pixels. An index representing the difference in data-capturing conditions across the FR1 image and the FR2 image is determined, said index having been determined at least based on the plurality of white pixels or intensity values, for example, based on application of a plurality of AI or ML techniques.
    Type: Grant
    Filed: February 11, 2021
    Date of Patent: December 6, 2022
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Ariel Beck, Chandra Suwandi Wijaya, Athul M. Mathew, Nway Nway Aung, Ramdas Krishnakumar, Zong Sheng Tang, Yao Zhou, Pradeep Rajagopalan, Yuya Sugasawa
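The difference-image check above can be sketched directly: average each group of images into a representative image, take the pixel-wise absolute luminance difference, and count pixels exceeding a threshold (the "white" pixels of a binarised difference image). The mean as the representative and the threshold value of 30 are assumptions.

```python
import numpy as np

def representative(images):
    # Assumed: the representative image is the pixel-wise mean of the group.
    return np.mean(np.stack(images), axis=0)

def condition_difference_index(group1, group2, threshold=30):
    # Binarise the absolute luminance difference between the two
    # representative images and report the fraction of "white" pixels
    # as an index of differing data-capturing conditions.
    fr1, fr2 = representative(group1), representative(group2)
    white = np.abs(fr1 - fr2) > threshold
    return white.sum() / white.size

g1 = [np.full((4, 4), 100.0), np.full((4, 4), 110.0)]   # FR1 mean: 105
g2 = [np.full((4, 4), 150.0)]                           # FR2 mean: 150
index = condition_difference_index(g1, g2)
```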
  • Publication number: 20220338773
    Abstract: The present subject matter refers to a method to evaluate the concentration of a living being based on artificial intelligence techniques. The method comprises detecting a continuous increase in concentration of a living being based on an artificial neural network (ANN). The method comprises receiving a parameter of the continuous increase in concentration; determining a first value of the concentration based on a first condition, said first condition defined by an increase of the received parameter by more than a first predetermined threshold; and determining a second value of the concentration based on a second condition, said second condition defined by a decline in the concentration from the first value by more than a second predetermined threshold.
    Type: Application
    Filed: March 16, 2022
    Publication date: October 27, 2022
    Inventors: Ai Min ZHAO, Faye Juliano, Eng Chye Lim, Nway Nway Aung, Souksakhone Bounyong
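A hedged sketch of the two-condition rule: the first value is recorded when the monitored parameter rises above its baseline by more than the first threshold, and the second value when concentration then declines from the first value by more than the second threshold. The threshold magnitudes and sample values are arbitrary.

```python
def track_concentration(samples, rise_thresh=0.2, drop_thresh=0.15):
    # First condition: rise above baseline by more than rise_thresh.
    # Second condition: drop from the first value by more than drop_thresh.
    first = second = None
    baseline = samples[0]
    for v in samples[1:]:
        if first is None and v - baseline > rise_thresh:
            first = v
        elif first is not None and first - v > drop_thresh:
            second = v
            break
    return first, second

first, second = track_concentration([0.3, 0.4, 0.6, 0.55, 0.4])
```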
  • Publication number: 20220253995
    Abstract: A method and system for checking data gathering conditions or image capturing conditions associated with images during an AI-based visual-inspection process. The method comprises generating a first representative image (FR1) for a first group of images and a second representative image (FR2) for a second group of images. Difference image data is generated between the FR1 image and the FR2 image based on calculating the difference between luminance values of pixels with the same coordinate values. Thereafter, one or more of a plurality of white pixels or intensity values are determined within the difference image based on acquiring difference image data formed of luminance difference values of pixels. An index representing the difference in data-capturing conditions across the FR1 image and the FR2 image is determined, said index having been determined at least based on the plurality of white pixels or intensity values, for example, based on application of a plurality of AI or ML techniques.
    Type: Application
    Filed: February 11, 2021
    Publication date: August 11, 2022
    Inventors: Ariel BECK, Chandra Suwandi WIJAYA, Athul M. MATHEW, Nway Nway AUNG, Ramdas KRISHNAKUMAR, Zong Sheng TANG, Yao ZHOU, Pradeep RAJAGOPALAN, Yuya SUGASAWA
  • Publication number: 20210304634
    Abstract: A method for predicting a condition of a living being in an environment, the method including: capturing image data associated with the at least one person and, based thereupon, determining a current condition of the person; receiving content from a plurality of content sources with respect to said at least one person being imaged, said content defined by at least one of text and statistics; defining one or more weighted parameters based on allocating a plurality of weights to at least one of the captured image data and the received content based on the current condition; and predicting, by a predictive-analysis module, a condition of the at least one person based on analysis of the one or more weighted parameters.
    Type: Application
    Filed: March 24, 2020
    Publication date: September 30, 2021
    Inventors: Faye JULIANO, Nway Nway AUNG, Aimin ZHAO, Eng Chye LIM, Khai Jun KEK