Patents by Inventor Yen-Yun Yu

Yen-Yun Yu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
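Brief illustrative code sketches for several of the methods summarized in these abstracts appear after the listing.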

  • Publication number: 20210390704
    Abstract: Systems and methods for identifying and segmenting objects from images include a preprocessing module configured to adjust a size of a source image; a region-proposal module configured to propose one or more regions of interest in the size-adjusted source image; and a prediction module configured to predict a classification, bounding box coordinates, and mask. Such systems and methods may utilize end-to-end training of the modules using adversarial loss, facilitating the use of a small training set, and can be configured to process historical documents, such as large images comprising text. The preprocessing module within said systems and methods can utilize a conventional image scaler in tandem with a custom image scaler to provide a resized image suitable for GPU processing, and the region-proposal module can utilize a region-proposal network from a single-stage detection model in tandem with a two-stage detection model paradigm to capture substantially all particles in an image.
    Type: Application
    Filed: June 9, 2021
    Publication date: December 16, 2021
    Applicant: Ancestry.com Operations Inc.
    Inventors: Masaki Stanley Fujimoto, Yen-Yun Yu
  • Patent number: 11170257
    Abstract: Techniques for training a machine-learning (ML) model for captioning images are disclosed. A plurality of feature vectors and a plurality of visual attention maps are generated by a visual model of the ML model based on an input image. Each of the plurality of feature vectors corresponds to a different region of the input image. A plurality of caption attention maps are generated by an attention model of the ML model based on the plurality of feature vectors. An attention penalty is calculated based on a comparison between the caption attention maps and the visual attention maps. A loss function is calculated based on the attention penalty. One or both of the visual model and the attention model are trained using the loss function.
    Type: Grant
    Filed: October 8, 2019
    Date of Patent: November 9, 2021
    Assignee: ANCESTRY.COM OPERATIONS INC.
    Inventors: Jiayun Li, Mohammad K. Ebrahimpour, Azadeh Moghtaderi, Yen-Yun Yu
  • Publication number: 20210174083
    Abstract: Embodiments described herein relate generally to a methodology for efficient object classification within a visual medium. The methodology utilizes a first neural network to perform an attention-based object localization within a visual medium to generate a visual mask. The visual mask is applied to the visual medium to generate a masked visual medium. The masked visual medium may then be fed into a second neural network to detect and classify objects within the visual medium.
    Type: Application
    Filed: February 18, 2021
    Publication date: June 10, 2021
    Applicant: Ancestry.com Operations Inc.
    Inventors: Mohammad K. Ebrahimpour, Yen-Yun Yu, Jiayun Li, Jack Reese, Azadeh Moghtaderi
  • Publication number: 20210110205
    Abstract: Described herein are systems, methods, and other techniques for training a generative adversarial network (GAN) to perform an image-to-image transformation for recognizing text. A pair of training images is provided to the GAN. The pair of training images includes a training image containing a set of characters in handwritten form and a reference training image containing the set of characters in machine-recognizable form. The GAN includes a generator and a discriminator. A generated image is produced using the generator based on the training image. Update data is generated using the discriminator based on the generated image and the reference training image. The GAN is trained by modifying one or both of the generator and the discriminator using the update data.
    Type: Application
    Filed: October 8, 2020
    Publication date: April 15, 2021
    Applicant: Ancestry.com Operations Inc.
    Inventors: Mostafa Karimi, Gopalkrishna Veni, Yen-Yun Yu
  • Patent number: 10949666
    Abstract: Embodiments described herein relate generally to a methodology for efficient object classification within a visual medium. The methodology utilizes a first neural network to perform an attention-based object localization within a visual medium to generate a visual mask. The visual mask is applied to the visual medium to generate a masked visual medium. The masked visual medium may then be fed into a second neural network to detect and classify objects within the visual medium.
    Type: Grant
    Filed: September 11, 2020
    Date of Patent: March 16, 2021
    Assignee: ANCESTRY.COM OPERATIONS INC.
    Inventors: Mohammad K. Ebrahimpour, Yen-Yun Yu, Jiayun Li, Jack Reese, Azadeh Moghtaderi
  • Publication number: 20200410235
    Abstract: Embodiments described herein relate generally to a methodology for efficient object classification within a visual medium. The methodology utilizes a first neural network to perform an attention-based object localization within a visual medium to generate a visual mask. The visual mask is applied to the visual medium to generate a masked visual medium. The masked visual medium may then be fed into a second neural network to detect and classify objects within the visual medium.
    Type: Application
    Filed: September 11, 2020
    Publication date: December 31, 2020
    Applicant: Ancestry.com Operations Inc.
    Inventors: Mohammad K. Ebrahimpour, Yen-Yun Yu, Jiayun Li, Jack Reese, Azadeh Moghtaderi
  • Patent number: 10796152
    Abstract: Embodiments described herein relate generally to a methodology for efficient object classification within a visual medium. The methodology utilizes a first neural network to perform an attention-based object localization within a visual medium to generate a visual mask. The visual mask is applied to the visual medium to generate a masked visual medium. The masked visual medium may then be fed into a second neural network to detect and classify objects within the visual medium.
    Type: Grant
    Filed: September 17, 2019
    Date of Patent: October 6, 2020
    Assignee: ANCESTRY.COM OPERATIONS INC.
    Inventors: Mohammad K. Ebrahimpour, Yen-Yun Yu, Jiayun Li, Jack Reese, Azadeh Moghtaderi
  • Publication number: 20200117951
    Abstract: Techniques for training a machine-learning (ML) model for captioning images are disclosed. A plurality of feature vectors and a plurality of visual attention maps are generated by a visual model of the ML model based on an input image. Each of the plurality of feature vectors corresponds to a different region of the input image. A plurality of caption attention maps are generated by an attention model of the ML model based on the plurality of feature vectors. An attention penalty is calculated based on a comparison between the caption attention maps and the visual attention maps. A loss function is calculated based on the attention penalty. One or both of the visual model and the attention model are trained using the loss function.
    Type: Application
    Filed: October 8, 2019
    Publication date: April 16, 2020
    Applicant: Ancestry.com Operations Inc.
    Inventors: Jiayun Li, Mohammad K. Ebrahimpour, Azadeh Moghtaderi, Yen-Yun Yu
  • Publication number: 20200097723
    Abstract: Embodiments described herein relate generally to a methodology for efficient object classification within a visual medium. The methodology utilizes a first neural network to perform an attention-based object localization within a visual medium to generate a visual mask. The visual mask is applied to the visual medium to generate a masked visual medium. The masked visual medium may then be fed into a second neural network to detect and classify objects within the visual medium.
    Type: Application
    Filed: September 17, 2019
    Publication date: March 26, 2020
    Applicant: Ancestry.com Operations Inc.
    Inventors: Mohammad K. Ebrahimpour, Yen-Yun Yu, Jiayun Li, Jack Reese, Azadeh Moghtaderi
  • Publication number: 20190347511
    Abstract: Systems and methods for training a machine learning (ML) ranking model to rank genealogy hints are described herein. One method includes retrieving a plurality of genealogy hints for a target person, where each of the plurality of genealogy hints corresponds to a genealogy item and has a hint type of a plurality of hint types. The method includes generating, for each of the plurality of genealogy hints, a feature vector having a plurality of feature values, the feature vector being included in a plurality of feature vectors. The method includes extending each of the plurality of feature vectors by at least one additional feature value based on the number of features of one or more other hint types of the plurality of hint types. The method includes training the ML ranking model using the extended plurality of feature vectors and user-provided labels.
    Type: Application
    Filed: May 8, 2019
    Publication date: November 14, 2019
    Applicant: Ancestry.com Operations Inc.
    Inventors: Peng Jiang, Tyler Folkman, Tsung-Nan Liu, Yen-Yun Yu, Ruhan Wang, Jack Reese, Azadeh Moghtaderi
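
Illustrative code sketches

The pipeline described in publication 20210390704 couples a preprocessing module that resizes the source image, a region-proposal module, and a prediction module that outputs a classification, bounding box coordinates, and a mask. The sketch below is a minimal structural illustration in plain PyTorch, assuming toy layer sizes and a hypothetical 1024-pixel size budget; it is not the patented implementation and omits the adversarial training and the paired conventional/custom scalers described in the abstract.

    # Minimal structural sketch (assumptions: layer sizes, 1024-pixel budget).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Preprocessor(nn.Module):
        """Resize a source image so its longest side fits a GPU-friendly budget."""
        def __init__(self, max_side=1024):
            super().__init__()
            self.max_side = max_side

        def forward(self, image):                      # image: (C, H, W) float tensor
            scale = self.max_side / max(image.shape[-2:])
            if scale < 1.0:                            # only shrink oversized images
                image = F.interpolate(image.unsqueeze(0), scale_factor=scale,
                                      mode="bilinear", align_corners=False).squeeze(0)
            return image

    class RegionProposer(nn.Module):
        """Toy stand-in for a region-proposal network: objectness scores per location."""
        def __init__(self, channels=3, num_anchors=9):
            super().__init__()
            self.conv = nn.Conv2d(channels, 64, 3, padding=1)
            self.objectness = nn.Conv2d(64, num_anchors, 1)

        def forward(self, image):                      # image: (C, H, W)
            feat = F.relu(self.conv(image.unsqueeze(0)))
            return self.objectness(feat)               # (1, num_anchors, H, W)

    class PredictionHead(nn.Module):
        """Predict a class, box coordinates, and a coarse mask for each pooled region."""
        def __init__(self, in_dim=64 * 7 * 7, num_classes=2):
            super().__init__()
            self.cls = nn.Linear(in_dim, num_classes)
            self.box = nn.Linear(in_dim, 4)
            self.mask = nn.Linear(in_dim, 28 * 28)

        def forward(self, roi_feat):                   # roi_feat: (N, in_dim)
            return self.cls(roi_feat), self.box(roi_feat), torch.sigmoid(self.mask(roi_feat))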
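
Patent 11170257 and publication 20200117951 describe computing an attention penalty from a comparison between caption attention maps and visual attention maps and folding it into the training loss. The abstract does not specify the form of the comparison, so the sketch below assumes a mean-squared-error comparison and an arbitrary penalty weight; the function names are hypothetical.

    # Hedged sketch: the MSE comparison and the 0.1 weight are assumptions.
    import torch.nn.functional as F

    def attention_penalty(caption_attn, visual_attn):
        # caption_attn: (batch, T, H*W); visual_attn: (batch, H*W).
        cap = F.softmax(caption_attn, dim=-1)              # normalize over locations
        vis = F.softmax(visual_attn, dim=-1).unsqueeze(1)  # (batch, 1, H*W)
        return F.mse_loss(cap, vis.expand_as(cap))

    def captioning_loss(logits, targets, caption_attn, visual_attn, penalty_weight=0.1):
        # Cross-entropy over predicted caption tokens plus the attention penalty.
        ce = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
        return ce + penalty_weight * attention_penalty(caption_attn, visual_attn)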
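
The object-classification abstract shared by patents 10949666 and 10796152 and publications 20210174083, 20200410235, and 20200097723 describes a first network that performs attention-based localization to produce a visual mask, which is applied to the image before a second network classifies the result. The sketch below assumes toy convolutional bodies and an elementwise soft mask; both choices are illustrative, not the patented architecture.

    # Two-network sketch: localizer produces a soft mask, classifier sees the masked image.
    import torch
    import torch.nn as nn

    class AttentionLocalizer(nn.Module):
        """First network: produce a soft spatial mask over the input image."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 1))                       # single-channel attention map

        def forward(self, image):                          # image: (B, 3, H, W)
            return torch.sigmoid(self.features(image))     # mask values in [0, 1]

    class MaskedClassifier(nn.Module):
        """Second network: classify the masked visual medium."""
        def __init__(self, num_classes=10):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
            self.head = nn.Linear(32, num_classes)

        def forward(self, masked_image):
            return self.head(self.backbone(masked_image).flatten(1))

    def classify_with_attention(image, localizer, classifier):
        mask = localizer(image)            # attention-based localization
        return classifier(image * mask)    # classify the masked visual medium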
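
Publication 20210110205 describes training a GAN on paired handwritten and machine-recognizable images, with the discriminator producing update data that modifies the generator, the discriminator, or both. The abstract does not name the losses, so the one-step sketch below assumes binary cross-entropy adversarial terms plus an L1 reconstruction term; the network bodies and optimizers are supplied by the caller and are hypothetical.

    # One GAN training step; the BCE + L1 losses are assumptions, not the patented choice.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def gan_step(generator, discriminator, g_opt, d_opt, handwritten, reference):
        bce = nn.BCEWithLogitsLoss()

        # Discriminator update: real = machine-recognizable reference image,
        # fake = generator output for the paired handwritten image.
        fake = generator(handwritten).detach()
        real_logits = discriminator(reference)
        fake_logits = discriminator(fake)
        d_loss = (bce(real_logits, torch.ones_like(real_logits))
                  + bce(fake_logits, torch.zeros_like(fake_logits)))
        d_opt.zero_grad()
        d_loss.backward()
        d_opt.step()

        # Generator update: fool the discriminator and stay close to the reference.
        fake = generator(handwritten)
        adv_logits = discriminator(fake)
        g_loss = bce(adv_logits, torch.ones_like(adv_logits)) + F.l1_loss(fake, reference)
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()
        return d_loss.item(), g_loss.item()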
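
Publication 20190347511 describes extending each hint's feature vector with additional values based on the feature counts of the other hint types, so that hints of different types can be ranked by a single model. The sketch below assumes a zero-filled shared layout in which each hint type owns a fixed slice; the hint types and feature counts are made-up examples.

    # Hypothetical hint types and feature counts; the zero-fill layout is an assumption.
    import numpy as np

    HINT_TYPE_FEATURES = {"record": 6, "photo": 4, "tree": 5}   # features per hint type

    def extend_feature_vector(hint_type, features):
        # Assign each hint type a contiguous slice of one shared vector.
        offsets, total = {}, 0
        for t, n in HINT_TYPE_FEATURES.items():
            offsets[t] = total
            total += n
        extended = np.zeros(total)                        # width spans all hint types
        start = offsets[hint_type]
        extended[start:start + len(features)] = features  # place this type's features
        return extended

    # Example: a "photo" hint with four raw feature values; the extended vector
    # (together with user-provided labels) would feed the ranking model.
    vec = extend_feature_vector("photo", [0.2, 0.7, 1.0, 0.0])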