Patents by Inventor Omer BAR TAL

Omer BAR TAL has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250238905
    Abstract: Provided is a video generation model for performing text-to-video (T2V) or other video generation techniques. The proposed model reduces the computational costs associated with video generation. In particular, unlike traditional T2V methods, the disclosed technology can generate the full temporal duration of a video clip at once, bypassing the need for extensive computation. As one example, a machine-learned denoising diffusion model can simultaneously process a plurality of noisy inputs that correspond to various timestamps spanning the temporal dimension of a video to simultaneously generate synthetic frames for the video that match the timestamps.
    Type: Application
    Filed: January 22, 2025
    Publication date: July 24, 2025
    Inventors: Inbar Mosseri, Omer Bar Tal, Hila Chefer-Livshen, Omer Tov, Charles Irwin Herrmann, Rony Paiss, Shiran Elyahu Zada, Ariel Ephrat, Junhwa Hur, Guanghui Liu, Amit Raj, Yuanzhen Li, Michael Rubinstein, Tomer Michaeli, Oliver Wang, Deqing Sun, Tali Dekel
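The key idea in the abstract above, generating every timestamp of a clip in one joint denoising pass rather than autoregressively, can be sketched in a few lines. This is a minimal, illustrative toy: the `toy_denoise_all_frames` function, its stand-in "denoiser" update, and the per-timestamp target are my own placeholders, not the patented model.

```python
import math
import random

def toy_denoise_all_frames(num_frames, dim, steps=10, seed=0):
    """Toy sketch: jointly refine one noisy latent per video timestamp.

    Unlike sequential frame-by-frame generation, every frame's latent is
    updated in the same denoising pass, so the full temporal duration of
    the clip exists from the first step onward.
    """
    rng = random.Random(seed)
    # One noisy input per timestamp, spanning the whole temporal dimension.
    latents = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(num_frames)]
    for _ in range(steps):
        # Stand-in denoiser: pull each latent toward a timestamp-dependent
        # target, mixing in the mean over *all* frames as crude temporal
        # context shared across the clip.
        ctx = [sum(l[d] for l in latents) / num_frames for d in range(dim)]
        for t, latent in enumerate(latents):
            target = math.sin(t)  # illustrative per-timestamp signal
            for d in range(dim):
                latent[d] += 0.3 * (target + 0.1 * ctx[d] - latent[d])
    return latents

# All 8 "frames" are produced together, not one after another.
frames = toy_denoise_all_frames(num_frames=8, dim=4)
```

The point of the sketch is only the control flow: one denoising loop that touches all timestamps simultaneously, which is what lets the claimed method avoid the cost of generating a clip piecewise.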
  • Patent number: 12282696
Abstract: Using a pre-trained and fixed Vision Transformer (ViT) model as an external semantic prior, a generator is trained given only a single structure/appearance image pair as input. Given two input images, a source structure image and a target appearance image, a new image is generated by the generator in which the structure of the source image is preserved, while the visual appearance of the target image is transferred in a semantically aware manner, so that objects in the structure image are “painted” with the visual appearance of semantically related objects in the appearance image. A self-supervised, pre-trained ViT model, such as a DINO-ViT model, is leveraged as an external semantic prior, allowing for training of the generator only on a single input image pair, without any additional information (e.g., segmentation/correspondences), and without adversarial training. The method may generate high-quality results in high resolution (e.g., HD).
    Type: Grant
    Filed: December 18, 2022
    Date of Patent: April 22, 2025
    Assignee: Yeda Research and Development Co. Ltd.
    Inventors: Tali Dekel, Shai Bagon, Omer Bar Tal, Narek Tumanyan
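The objective described in the abstract above, preserve the source image's structure while transferring the target image's appearance via ViT features, can be sketched as two losses. This is an illustrative sketch only: `splice_style_losses`, the random-looking stand-in features, and the 0.1 weighting are my own placeholders, not values from the patent; real DINO-ViT activations would replace the toy feature lists.

```python
def self_similarity(feats):
    """Cosine-similarity matrix of per-patch features.

    The self-similarity pattern of ViT patch features is commonly used
    as an appearance-invariant "structure" descriptor.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    def norm(a):
        return dot(a, a) ** 0.5 or 1.0
    return [[dot(f, g) / (norm(f) * norm(g)) for g in feats] for f in feats]

def splice_style_losses(out_feats, out_global, struct_feats, app_global):
    """Two losses in the spirit of the abstract:
    - structure: match the self-similarity of patch features between the
      generated image and the source structure image;
    - appearance: match a global ([CLS]-like) feature of the generated
      image to that of the target appearance image.
    No adversarial term is needed; the fixed ViT supplies the semantics.
    """
    sim_out = self_similarity(out_feats)
    sim_src = self_similarity(struct_feats)
    l_struct = sum((a - b) ** 2
                   for ra, rb in zip(sim_out, sim_src)
                   for a, b in zip(ra, rb))
    l_app = sum((a - b) ** 2 for a, b in zip(out_global, app_global))
    return l_struct + 0.1 * l_app  # illustrative weighting
```

Training the generator then amounts to minimizing this combined loss on the single input pair; because both terms come from a frozen, self-supervised ViT, no segmentation masks, correspondences, or discriminator are required.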
  • Publication number: 20240419382
Abstract: Using a pre-trained and fixed Vision Transformer (ViT) model as an external semantic prior, a generator is trained given only a single structure/appearance image pair as input. Given two input images, a source structure image and a target appearance image, a new image is generated by the generator in which the structure of the source image is preserved, while the visual appearance of the target image is transferred in a semantically aware manner, so that objects in the structure image are “painted” with the visual appearance of semantically related objects in the appearance image. A self-supervised, pre-trained ViT model, such as a DINO-ViT model, is leveraged as an external semantic prior, allowing for training of the generator only on a single input image pair, without any additional information (e.g., segmentation/correspondences), and without adversarial training. The method may generate high-quality results in high resolution (e.g., HD).
    Type: Application
    Filed: December 18, 2022
    Publication date: December 19, 2024
    Inventors: Tali DEKEL, Shai BAGON, Omer BAR TAL, Narek TUMANYAN