Patents by Inventor Eylon AMI

Eylon AMI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11954893
Abstract: The technology described herein is directed to systems, methods, and software for indexing video. In an implementation, a method comprises identifying one or more regions of interest around target content in a frame of the video. Further, the method includes identifying, in a portion of the frame outside a region of interest, potentially empty regions adjacent to the region of interest. The method continues with identifying at least one empty region of the potentially empty regions that satisfies one or more criteria and classifying the at least one empty region as a negative sample of the target content. In some implementations, the negative sample of the target content is included in a set of negative samples of the target content with which to train a machine learning model employed to identify instances of the target content. (An illustrative sketch of this negative-sample mining step appears after this listing.)
    Type: Grant
    Filed: June 17, 2022
    Date of Patent: April 9, 2024
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Oron Nir, Maria Zontak, Tucker Cunningham Burns, Apar Singhal, Lei Zhang, Irit Ofer, Avner Levi, Haim Sabo, Ika Bar-Menachem, Eylon Ami, Ella Ben Tov, Anika Zaman
  • Patent number: 11823453
Abstract: The technology described herein is directed to a media indexer framework including a character recognition engine that automatically detects and groups instances (or occurrences) of characters in a multi-frame animated media file. More specifically, the character recognition engine automatically detects and groups the instances (or occurrences) of the characters in the multi-frame animated media file such that each group contains images associated with a single character. The character groups are then labeled and used to train an image classification model. Once trained, the image classification model can be applied to subsequent multi-frame animated media files to automatically classify the animated characters included therein. (An illustrative sketch of this grouping-and-classification pipeline appears after this listing.)
    Type: Grant
    Filed: February 1, 2022
    Date of Patent: November 21, 2023
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Oron Nir, Maria Zontak, Tucker Cunningham Burns, Apar Singhal, Lei Zhang, Irit Ofer, Avner Levi, Haim Sabo, Ika Bar-Menachem, Eylon Ami, Ella Ben Tov
  • Publication number: 20220318574
Abstract: The technology described herein is directed to systems, methods, and software for indexing video. In an implementation, a method comprises identifying one or more regions of interest around target content in a frame of the video. Further, the method includes identifying, in a portion of the frame outside a region of interest, potentially empty regions adjacent to the region of interest. The method continues with identifying at least one empty region of the potentially empty regions that satisfies one or more criteria and classifying the at least one empty region as a negative sample of the target content. In some implementations, the negative sample of the target content is included in a set of negative samples of the target content with which to train a machine learning model employed to identify instances of the target content.
    Type: Application
    Filed: June 17, 2022
    Publication date: October 6, 2022
    Inventors: Oron NIR, Maria ZONTAK, Tucker Cunningham BURNS, Apar SINGHAL, Lei ZHANG, Irit OFER, Avner LEVI, Haim SABO, Ika BAR-MENACHEM, Eylon AMI, Ella BEN TOV, Anika ZAMAN
  • Patent number: 11366989
Abstract: The technology described herein is directed to systems, methods, and software for indexing video. In an implementation, a method comprises identifying one or more regions of interest around target content in a frame of the video. Further, the method includes identifying, in a portion of the frame outside a region of interest, potentially empty regions adjacent to the region of interest. The method continues with identifying at least one empty region of the potentially empty regions that satisfies one or more criteria and classifying the at least one empty region as a negative sample of the target content. In some implementations, the negative sample of the target content is included in a set of negative samples of the target content with which to train a machine learning model employed to identify instances of the target content.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: June 21, 2022
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Oron Nir, Maria Zontak, Tucker Cunningham Burns, Apar Singhal, Lei Zhang, Irit Ofer, Avner Levi, Haim Sabo, Ika Bar-Menachem, Eylon Ami, Ella Ben Tov, Anika Zaman
  • Publication number: 20220157057
Abstract: The technology described herein is directed to a media indexer framework including a character recognition engine that automatically detects and groups instances (or occurrences) of characters in a multi-frame animated media file. More specifically, the character recognition engine automatically detects and groups the instances (or occurrences) of the characters in the multi-frame animated media file such that each group contains images associated with a single character. The character groups are then labeled and used to train an image classification model. Once trained, the image classification model can be applied to subsequent multi-frame animated media files to automatically classify the animated characters included therein.
    Type: Application
    Filed: February 1, 2022
    Publication date: May 19, 2022
    Inventors: Oron NIR, Maria ZONTAK, Tucker Cunningham BURNS, Apar SINGHAL, Lei ZHANG, Irit OFER, Avner LEVI, Haim SABO, Ika BAR-MENACHEM, Eylon AMI, Ella BEN TOV
  • Patent number: 11270121
Abstract: The technology described herein is directed to a media indexer framework including a character recognition engine that automatically detects and groups instances (or occurrences) of characters in a multi-frame animated media file. More specifically, the character recognition engine automatically detects and groups the instances (or occurrences) of the characters in the multi-frame animated media file such that each group contains images associated with a single character. The character groups are then labeled and used to train an image classification model. Once trained, the image classification model can be applied to subsequent multi-frame animated media files to automatically classify the animated characters included therein.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: March 8, 2022
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Oron Nir, Maria Zontak, Tucker Cunningham Burns, Apar Singhal, Lei Zhang, Irit Ofer, Avner Levi, Haim Sabo, Ika Bar-Menachem, Eylon Ami, Ella Ben Tov
  • Patent number: 10936630
Abstract: Systems and methods are disclosed for inferring topics from a file containing both audio and video, for example a multimodal or multimedia file, in order to facilitate video indexing. A set of entities is extracted from the file and linked to produce a graph, and reference information is also obtained for the set of entities. Entities may be drawn, for example, from Wikipedia categories, or other large ontological data sources. Analysis of the graph, using unsupervised learning, permits determining clusters in the graph. Extracting features from the clusters, possibly using supervised learning, provides for selection of topic identifiers. The topic identifiers are then used for indexing the file. (An illustrative sketch of this topic-inference flow appears after this listing.)
    Type: Grant
    Filed: September 13, 2018
    Date of Patent: March 2, 2021
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Royi Ronen, Oron Nir, Chin-Yew Lin, Ohad Jassin, Daniel Nurieli, Eylon Ami, Avner Levi
  • Publication number: 20210056313
Abstract: The technology described herein is directed to a media indexer framework including a character recognition engine that automatically detects and groups instances (or occurrences) of characters in a multi-frame animated media file. More specifically, the character recognition engine automatically detects and groups the instances (or occurrences) of the characters in the multi-frame animated media file such that each group contains images associated with a single character. The character groups are then labeled and used to train an image classification model. Once trained, the image classification model can be applied to subsequent multi-frame animated media files to automatically classify the animated characters included therein.
    Type: Application
    Filed: March 26, 2020
    Publication date: February 25, 2021
    Inventors: Oron Nir, Maria Zontak, Tucker Cunningham Burns, Apar Singhal, Lei Zhang, Irit Ofer, Avner Levi, Haim Sabo, Ika Bar-Menachem, Eylon Ami, Ella Ben Tov
  • Publication number: 20210056362
Abstract: The technology described herein is directed to systems, methods, and software for indexing video. In an implementation, a method comprises identifying one or more regions of interest around target content in a frame of the video. Further, the method includes identifying, in a portion of the frame outside a region of interest, potentially empty regions adjacent to the region of interest. The method continues with identifying at least one empty region of the potentially empty regions that satisfies one or more criteria and classifying the at least one empty region as a negative sample of the target content. In some implementations, the negative sample of the target content is included in a set of negative samples of the target content with which to train a machine learning model employed to identify instances of the target content.
    Type: Application
    Filed: March 26, 2020
    Publication date: February 25, 2021
    Inventors: Oron Nir, Maria Zontak, Tucker Cunningham Burns, Apar Singhal, Lei Zhang, Irit Ofer, Avner Levi, Haim Sabo, Ika Bar-Menachem, Eylon Ami, Ella Ben Tov, Anika Zaman
  • Publication number: 20200089802
    Abstract: Systems and methods are disclosed for inferring topics from a file containing both audio and video, for example a multimodal or multimedia file, in order to facilitate video indexing. A set of entities is extracted from the file and linked to produce a graph, and reference information is also obtained for the set of entities. Entities may be drawn, for example, from Wikipedia categories, or other large ontological data sources. Analysis of the graph, using unsupervised learning, permits determining clusters in the graph. Extracting features from the clusters, possibly using supervised learning, provides for selection of topic identifiers. The topic identifiers are then used for indexing the file.
    Type: Application
    Filed: September 13, 2018
    Publication date: March 19, 2020
Inventors: Royi RONEN, Oron NIR, Chin-Yew LIN, Ohad JASSIN, Daniel NURIELI, Eylon AMI, Avner LEVI
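
The following sketch illustrates the negative-sample mining step described in the abstracts for patent numbers 11954893 and 11366989 (and the corresponding publications): carve out regions adjacent to each region of interest (ROI), keep only those that satisfy the criteria, and collect them as negative training samples. This is a minimal Python sketch under assumed criteria, namely that a candidate region must lie fully inside the frame and overlap no ROI; the candidate layout and all names are illustrative, not the claimed implementation.

    # Minimal sketch: mine negative samples from empty regions adjacent to ROIs.
    # Assumptions (not from the patents): candidates are same-sized regions to the
    # left, right, above, and below each ROI; the "criteria" are that a candidate
    # lies fully inside the frame and overlaps no ROI.
    from typing import List, Tuple

    Box = Tuple[int, int, int, int]  # (x, y, width, height)

    def _inside(box: Box, frame_w: int, frame_h: int) -> bool:
        x, y, w, h = box
        return x >= 0 and y >= 0 and x + w <= frame_w and y + h <= frame_h

    def _overlaps(a: Box, b: Box) -> bool:
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    def mine_negative_regions(rois: List[Box], frame_w: int, frame_h: int) -> List[Box]:
        """Return regions adjacent to each ROI that satisfy the assumed criteria."""
        negatives: List[Box] = []
        for x, y, w, h in rois:
            # Potentially empty regions: left, right, above, and below the ROI.
            candidates = [(x - w, y, w, h), (x + w, y, w, h),
                          (x, y - h, w, h), (x, y + h, w, h)]
            for cand in candidates:
                if _inside(cand, frame_w, frame_h) and not any(_overlaps(cand, r) for r in rois):
                    negatives.append(cand)  # classified as a negative sample
        return negatives

    if __name__ == "__main__":
        # One detected ROI in a 1920x1080 frame yields up to four negative samples.
        print(mine_negative_regions([(800, 400, 200, 200)], 1920, 1080))

The returned boxes would feed the set of negative samples used, together with positive crops of the target content, to train the detection model the abstract mentions.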
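
The next sketch follows the grouping-then-classification pipeline described in the abstracts for patent numbers 11823453 and 11270121: detected character instances are clustered so that each group holds a single character, the groups are labeled, and a classifier is trained on the labeled groups. It is a hedged sketch, not the patented implementation: per-instance embeddings stand in for detected character images, and the clustering (DBSCAN) and classifier (logistic regression) choices are assumptions.

    # Minimal sketch: group character instances, label the groups, train a classifier.
    # Embeddings stand in for detected character images; DBSCAN and logistic
    # regression are illustrative stand-ins, not the patented components.
    from typing import Dict

    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.linear_model import LogisticRegression

    def group_character_instances(embeddings: np.ndarray, eps: float = 0.5) -> np.ndarray:
        """Cluster per-instance embeddings; each cluster id stands for one character."""
        return DBSCAN(eps=eps, min_samples=3).fit_predict(embeddings)

    def train_character_classifier(embeddings: np.ndarray, group_labels: Dict[int, str]):
        """Train a classification model on the labeled character groups."""
        groups = group_character_instances(embeddings)
        keep = np.array([g in group_labels for g in groups])  # drop unlabeled/noise groups
        X = embeddings[keep]
        y = np.array([group_labels[g] for g in groups[keep]])
        return LogisticRegression(max_iter=1000).fit(X, y)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Stand-in embeddings for detected instances of two animated characters.
        emb = np.vstack([rng.normal(0.0, 0.1, (20, 8)), rng.normal(3.0, 0.1, (20, 8))])
        clf = train_character_classifier(emb, {0: "hero", 1: "sidekick"})
        print(clf.predict(emb[:2]), clf.predict(emb[-2:]))

Once trained this way, the classifier can be applied to instances detected in subsequent animated media files, mirroring the reuse step the abstract describes.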
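
The last sketch traces the topic-inference flow described in the abstract for patent number 10936630: extracted entities are linked into a graph, an unsupervised step finds clusters, and features of each cluster drive selection of a topic identifier. It is a simplified sketch: entity extraction is assumed to have happened upstream, the reference information is a hand-written entity-to-category map standing in for Wikipedia categories, and greedy modularity communities plus a most-common-category vote replace the patented clustering and selection steps.

    # Minimal sketch: entity graph -> clusters -> topic identifiers for indexing.
    # The entity pairs and category map are placeholders for the extraction and
    # Wikipedia-category lookup steps mentioned in the abstract.
    from collections import Counter

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    def infer_topics(entity_pairs, entity_categories):
        """entity_pairs: co-occurring entity pairs; entity_categories: entity -> categories."""
        graph = nx.Graph()
        graph.add_edges_from(entity_pairs)

        topics = []
        # Unsupervised clustering of the entity graph.
        for cluster in greedy_modularity_communities(graph):
            # Feature extraction: count reference categories seen in the cluster,
            # then select the most common one as the cluster's topic identifier.
            counts = Counter(c for entity in cluster for c in entity_categories.get(entity, []))
            if counts:
                topics.append(counts.most_common(1)[0][0])
        return topics

    if __name__ == "__main__":
        pairs = [("Apollo 11", "Neil Armstrong"), ("Neil Armstrong", "Moon landing"),
                 ("Guitar", "Jimi Hendrix")]
        cats = {"Apollo 11": ["Space program"], "Moon landing": ["Space program"],
                "Jimi Hendrix": ["Rock musicians"], "Guitar": ["Musical instruments"]}
        print(infer_topics(pairs, cats))  # e.g. ['Space program', 'Rock musicians']

The resulting topic identifiers would then be attached to the file as index terms, the final indexing step the abstract describes.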