Patents by Inventor Nway Nway AUNG

Nway Nway AUNG has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240160196
    Abstract: First, a plurality of models that predict categories of input data are pooled. At least one of the plurality of models is a model trained by machine learning. Next, each of a plurality of hybrid model candidates that judge the categories is created by selecting and combining two or more models from among the plurality of pooled models. Then, by comparing the plurality of hybrid model candidates, one of the plurality of hybrid model candidates is selected as a hybrid model.
    Type: Application
    Filed: March 25, 2022
    Publication date: May 16, 2024
    Inventors: Yao ZHOU, Athul M. MATHEW, Ariel BECK, Chandra Suwandi WIJAYA, Nway Nway AUNG, Khai Jun KEK, Yuya SUGASAWA, Jeffry FERNANDO, Yoshinori SATOU, Hisaji MURATA
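    The selection procedure in the abstract above can be sketched in a few lines of Python. This is a minimal illustration, not the patented method: the toy threshold classifiers, the majority-vote combination rule, and the accuracy-based comparison are all illustrative assumptions.

    ```python
    from itertools import combinations
    from collections import Counter

    # Toy stand-ins for the pooled models: each maps an input to a category.
    def model_a(x): return "defect" if x > 0.5 else "ok"
    def model_b(x): return "defect" if x > 0.7 else "ok"
    def model_c(x): return "defect" if x > 0.3 else "ok"

    pool = [model_a, model_b, model_c]

    def hybrid_predict(models, x):
        """Combine member predictions by majority vote (one possible combination rule)."""
        votes = Counter(m(x) for m in models)
        return votes.most_common(1)[0][0]

    def select_hybrid(pool, data, labels):
        """Create every candidate of two or more models and keep the most accurate one."""
        best, best_acc = None, -1.0
        for r in range(2, len(pool) + 1):
            for combo in combinations(pool, r):
                acc = sum(hybrid_predict(combo, x) == y
                          for x, y in zip(data, labels)) / len(data)
                if acc > best_acc:
                    best, best_acc = combo, acc
        return best, best_acc

    data = [0.2, 0.4, 0.6, 0.8]
    labels = ["ok", "ok", "defect", "defect"]
    hybrid, acc = select_hybrid(pool, data, labels)
    ```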
  • Publication number: 20240135689
    Abstract: According to an embodiment, a method for estimating robustness of a trained machine learning model is disclosed. The method comprises receiving a labelled dataset, a model of an object for which defect detection is required, and the trained machine learning model. Further, the method comprises determining one or more parameters associated with image-capturing conditions in the environment. Furthermore, the method comprises performing an automatic extraction of one or more defects using the model of the object and the labelled dataset based on image processing. Furthermore, the method comprises generating one or more images based on the one or more parameters and the one or more defects. Additionally, the method comprises testing the trained machine learning model using the generated images. Moreover, the method comprises estimating a robustness report for the machine learning model based on the testing of the machine learning model.
    Type: Application
    Filed: October 24, 2022
    Publication date: April 25, 2024
    Inventors: Yuya SUGASAWA, Hisaji MURATA, Nway Nway AUNG, Ariel BECK, Zong Sheng TANG
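    The generate-then-test loop described above can be sketched as follows. This is a hypothetical illustration only: the toy brightness-based classifier, the single brightness-shift parameter, and the pass/fail report format are assumptions standing in for a real trained model and real capture conditions.

    ```python
    def classify(pixel_values):
        """Toy 'trained model': flags a defect when mean brightness is high."""
        mean = sum(pixel_values) / len(pixel_values)
        return "defect" if mean > 150 else "ok"

    def generate_image(base, brightness_shift):
        """Simulate an image-capturing condition by shifting brightness, clamped to [0, 255]."""
        return [min(255, max(0, p + brightness_shift)) for p in base]

    def robustness_report(model, base_image, true_label, shifts):
        """Test the model on generated images and record correctness per condition."""
        report = {}
        for s in shifts:
            pred = model(generate_image(base_image, s))
            report[s] = (pred == true_label)
        return report

    base = [100, 110, 120, 130]   # mean 115, so the true label is "ok"
    report = robustness_report(classify, base, "ok", [0, 20, 40, 60])
    ```

    Conditions where the entry is `False` mark brightness shifts under which the toy model breaks, which is the kind of per-condition information a robustness report would aggregate.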
  • Patent number: 11538355
    Abstract: A method for predicting a condition of a living being in an environment, the method including capturing image data associated with at least one person and, based thereupon, determining a current condition of the person; receiving content from a plurality of content sources with respect to said at least one person being imaged, said content defined by at least one of text and statistics; defining one or more weighted parameters based on allocating a plurality of weights to at least one of the captured image data and the received content based on the current condition; and predicting, by a predictive-analysis module, a condition of the at least one person based on analysis of the one or more weighted parameters.
    Type: Grant
    Filed: March 24, 2020
    Date of Patent: December 27, 2022
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Faye Juliano, Nway Nway Aung, Aimin Zhao, Eng Chye Lim, Khai Jun Kek
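    The condition-dependent weighting in the abstract above can be sketched as below. All specifics are hypothetical: the score inputs, the weight values keyed on the current condition, and the 0.5 decision threshold are illustrative, not taken from the patent.

    ```python
    def predict_condition(image_score, content_score, current_condition):
        """Fuse an image-derived score and a content-derived score with
        weights chosen according to the detected current condition."""
        if current_condition == "distressed":
            w_image, w_content = 0.7, 0.3   # weight the camera evidence more
        else:
            w_image, w_content = 0.4, 0.6   # weight text/statistics more
        score = w_image * image_score + w_content * content_score
        return "at-risk" if score > 0.5 else "stable"
    ```

    The same pair of scores can thus yield different predictions depending on the current condition, which is the point of allocating the weights dynamically.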
  • Patent number: 11521313
    Abstract: A method and system for checking data-gathering conditions or image-capturing conditions associated with images during an AI-based visual-inspection process. The method comprises generating a first representative image (FR1) for a first group of images and a second representative image (FR2) for a second group of images. Difference image data is generated between the FR1 image and the FR2 image based on calculating the difference between luminance values of pixels with the same coordinate values. Thereafter, one or more of a plurality of white pixels or intensity values are determined within the difference image based on acquiring difference image data formed of luminance difference values of pixels. An index representing the difference of data-capturing conditions across the FR1 image and the FR2 image is determined, said index having been determined at least based on the plurality of white pixels or intensity values, for example, based on application of a plurality of AI or ML techniques.
    Type: Grant
    Filed: February 11, 2021
    Date of Patent: December 6, 2022
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Ariel Beck, Chandra Suwandi Wijaya, Athul M. Mathew, Nway Nway Aung, Ramdas Krishnakumar, Zong Sheng Tang, Yao Zhou, Pradeep Rajagopalan, Yuya Sugasawa
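    The representative-image and difference-index computation can be sketched with tiny grayscale images as nested lists. This is a simplified illustration: the pixel-wise mean as the representative image, the white-pixel threshold of 50, and the white-pixel fraction as the index are all assumed choices, not the patented formulation.

    ```python
    def representative(images):
        """Pixel-wise mean of a group of same-sized grayscale images."""
        n = len(images)
        rows, cols = len(images[0]), len(images[0][0])
        return [[sum(img[r][c] for img in images) / n for c in range(cols)]
                for r in range(rows)]

    def difference_index(group1, group2, white_threshold=50):
        """Luminance difference between the two representative images,
        summarized as the fraction of 'white' (large-difference) pixels."""
        fr1, fr2 = representative(group1), representative(group2)
        diff = [[abs(a - b) for a, b in zip(row1, row2)]
                for row1, row2 in zip(fr1, fr2)]
        white = sum(v > white_threshold for row in diff for v in row)
        total = len(diff) * len(diff[0])
        return white / total

    # Two groups of 2x2 images; the second group has one much brighter pixel.
    g1 = [[[100, 100], [100, 100]], [[110, 110], [110, 110]]]
    g2 = [[[100, 100], [200, 100]], [[110, 110], [200, 110]]]
    idx = difference_index(g1, g2)
    ```

    A larger index signals a larger change in capture conditions between the two groups.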
  • Publication number: 20220338773
    Abstract: The present subject matter refers to a method to evaluate the concentration of a living being based on artificial-intelligence techniques. The method comprises detecting a continuous increase in the concentration of a living being based on an artificial neural network (ANN). The method comprises receiving a parameter of the continuous increase in concentration; determining a first value of the concentration based on a first condition, said first condition defined by an increase of the received parameter by more than a first predetermined threshold; and determining a second value of the concentration based on a second condition, said second condition defined by a decline in the concentration from the first value by more than a second predetermined threshold.
    Type: Application
    Filed: March 16, 2022
    Publication date: October 27, 2022
    Inventors: Ai Min ZHAO, Faye Juliano, Eng Chye Lim, Nway Nway Aung, Souksakhone Bounyong
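    The two-threshold logic above can be sketched as a single pass over a sequence of readings. The readings, threshold values, and return format are illustrative assumptions; the ANN that produces the concentration parameter is out of scope here.

    ```python
    def evaluate_concentration(readings, t1=10, t2=5):
        """First value: reading rises above the baseline by more than t1.
        Second value: concentration then declines from the first value by more than t2."""
        first = second = None
        baseline = readings[0]
        for r in readings[1:]:
            if first is None and r - baseline > t1:
                first = r            # first condition met
            elif first is not None and first - r > t2:
                second = r           # second condition met
                break
        return first, second

    # Rise past the baseline by > 10, then decline from the first value by > 5.
    readings = [50, 55, 62, 70, 55]
    ```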
  • Publication number: 20220253995
    Abstract: A method and system for checking data-gathering conditions or image-capturing conditions associated with images during an AI-based visual-inspection process. The method comprises generating a first representative image (FR1) for a first group of images and a second representative image (FR2) for a second group of images. Difference image data is generated between the FR1 image and the FR2 image based on calculating the difference between luminance values of pixels with the same coordinate values. Thereafter, one or more of a plurality of white pixels or intensity values are determined within the difference image based on acquiring difference image data formed of luminance difference values of pixels. An index representing the difference of data-capturing conditions across the FR1 image and the FR2 image is determined, said index having been determined at least based on the plurality of white pixels or intensity values, for example, based on application of a plurality of AI or ML techniques.
    Type: Application
    Filed: February 11, 2021
    Publication date: August 11, 2022
    Inventors: Ariel BECK, Chandra Suwandi WIJAYA, Athul M. MATHEW, Nway Nway AUNG, Ramdas KRISHNAKUMAR, Zong Sheng TANG, Yao ZHOU, Pradeep RAJAGOPALAN, Yuya SUGASAWA
  • Publication number: 20210304634
    Abstract: A method for predicting a condition of a living being in an environment, the method including capturing image data associated with at least one person and, based thereupon, determining a current condition of the person; receiving content from a plurality of content sources with respect to said at least one person being imaged, said content defined by at least one of text and statistics; defining one or more weighted parameters based on allocating a plurality of weights to at least one of the captured image data and the received content based on the current condition; and predicting, by a predictive-analysis module, a condition of the at least one person based on analysis of the one or more weighted parameters.
    Type: Application
    Filed: March 24, 2020
    Publication date: September 30, 2021
    Inventors: Faye JULIANO, Nway Nway AUNG, Aimin ZHAO, Eng Chye LIM, Khai Jun KEK