Patents by Inventor Yaacob Aizer

Yaacob Aizer is a named inventor on the following patent filings. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11216954
    Abstract: A goal of the disclosure is to provide real-time adjustment of a deep learning-based tracking system to track a moving individual without using a labeled set of training data. Disclosed are systems and methods for tracking a moving individual with an autonomous drone. Initialization video data of the specific individual is obtained. Based on the initialization video data, real-time training of an input neural network is performed to generate a detection neural network that uniquely corresponds to the specific individual. Real-time video monitoring data of the specific individual and the surrounding environment is captured. Using the detection neural network, target detection is performed on the real-time video monitoring data and a detection output corresponding to a location of the specific individual within a given frame of the real-time video monitoring data is generated.
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: January 4, 2022
    Assignee: TG-17, Inc.
    Inventors: Olga Peled, Yaacob Aizer, Zcharia Baratz, Ran Banker, Joseph Keshet, Ron Asher
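The patented detection network and its training procedure are not public, but the pipeline the abstract describes — capture initialization video of the target, derive a detector unique to that individual, then locate the individual in each monitoring frame — can be sketched in miniature. The sketch below is a minimal stand-in, not the patented method: plain NumPy template matching plays the role of the per-individual detection neural network, and all function names and data are hypothetical.

```python
import numpy as np

def build_detector(init_frames, box):
    """Stand-in for the patent's real-time network training: derive a
    per-individual detector from initialization video by averaging the
    target's appearance into a template."""
    y, x, h, w = box
    patches = [f[y:y + h, x:x + w] for f in init_frames]
    return np.mean(patches, axis=0)

def detect(template, frame):
    """Stand-in for per-frame target detection: slide the template over
    the monitoring frame and return the top-left corner with the
    smallest mean squared difference."""
    th, tw = template.shape
    fh, fw = frame.shape
    best, best_loc = np.inf, (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            err = np.mean((frame[y:y + th, x:x + tw] - template) ** 2)
            if err < best:
                best, best_loc = err, (y, x)
    return best_loc

# Synthetic example: a bright 3x3 "individual" on a noisy background.
rng = np.random.default_rng(0)
target = np.full((3, 3), 10.0)
init_frames = []
for _ in range(5):
    f = rng.normal(0.0, 0.1, (10, 10))
    f[2:5, 2:5] = target          # individual at (2, 2) during initialization
    init_frames.append(f)

detector = build_detector(init_frames, box=(2, 2, 3, 3))

monitor = rng.normal(0.0, 0.1, (10, 10))
monitor[6:9, 4:7] = target        # individual has moved to (6, 4)
print(detect(detector, monitor))  # -> (6, 4)
```

A real system would replace the template with a network fine-tuned on the initialization video, but the data flow — initialization footage in, per-individual detector out, per-frame location out — matches the abstract's description.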
  • Patent number: 10637674
    Abstract: First and second screenshot images are obtained from a monitoring application provided on a first computing device. Each screenshot image comprises a plurality of content portions displayed by a communication application on the first computing device, and content boxing is performed to calculate a plurality of content boxes for the plurality of content portions. Each content box is classified as containing textual communication content or image communication content. Textual communications are extracted via Optical Character Recognition (OCR) and object identifiers are extracted from the image communications via image recognition. At least one shared content box present in both the first and second screenshot images is identified and used to temporally align the extracted textual communications. The temporally aligned textual communications are condensed into a textual communication sequence.
    Type: Grant
    Filed: June 7, 2018
    Date of Patent: April 28, 2020
    Assignee: TG-17, Inc.
    Inventors: Ron Asher, Yaacob Aizer
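The alignment step in this abstract — using a content box shared by two screenshots to temporally align and condense their extracted text — can be illustrated with a small sketch. This is an assumption-laden simplification, not the patented implementation: the lists of strings below stand in for OCR output from classified content boxes, and the function name is hypothetical.

```python
def align_and_condense(shot1, shot2):
    """Merge two overlapping screenshot transcripts into one sequence.
    shot1/shot2: extracted message strings in on-screen order (stand-ins
    for OCR output from content boxes classified as textual). A message
    present in both screenshots anchors the temporal alignment."""
    shared = next((m for m in shot1 if m in shot2), None)
    if shared is None:
        return shot1 + shot2  # no shared box: just concatenate
    i, j = shot1.index(shared), shot2.index(shared)
    # Keep everything before the anchor from the first screenshot, and
    # the anchor plus everything after it from the second.
    return shot1[:i] + shot2[j:]

msgs_a = ["hi", "how are you?", "fine, thanks"]
msgs_b = ["fine, thanks", "see you at 5", "ok"]
print(align_and_condense(msgs_a, msgs_b))
# -> ['hi', 'how are you?', 'fine, thanks', 'see you at 5', 'ok']
```

In the patented system the shared element is a content box detected in both screenshot images; here a shared string plays that role, which is enough to show how the overlap removes duplicated content when the two captures are condensed into one sequence.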
  • Publication number: 20190325584
    Abstract: A goal of the disclosure is to provide real-time adjustment of a deep learning-based tracking system to track a moving individual without using a labeled set of training data. Disclosed are systems and methods for tracking a moving individual with an autonomous drone. Initialization video data of the specific individual is obtained. Based on the initialization video data, real-time training of an input neural network is performed to generate a detection neural network that uniquely corresponds to the specific individual. Real-time video monitoring data of the specific individual and the surrounding environment is captured. Using the detection neural network, target detection is performed on the real-time video monitoring data and a detection output corresponding to a location of the specific individual within a given frame of the real-time video monitoring data is generated.
    Type: Application
    Filed: May 20, 2019
    Publication date: October 24, 2019
    Inventors: Olga Peled, Yaacob Aizer, Zcharia Baratz, Ran Banker, Joseph Keshet, Ron Asher
  • Publication number: 20180359107
    Abstract: First and second screenshot images are obtained from a monitoring application provided on a first computing device. Each screenshot image comprises a plurality of content portions displayed by a communication application on the first computing device, and content boxing is performed to calculate a plurality of content boxes for the plurality of content portions. Each content box is classified as containing textual communication content or image communication content. Textual communications are extracted via Optical Character Recognition (OCR) and object identifiers are extracted from the image communications via image recognition. At least one shared content box present in both the first and second screenshot images is identified and used to temporally align the extracted textual communications. The temporally aligned textual communications are condensed into a textual communication sequence.
    Type: Application
    Filed: June 7, 2018
    Publication date: December 13, 2018
    Inventors: Ron Asher, Yaacob Aizer