Patents by Inventor Dvir Yerushalmi

Dvir Yerushalmi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250037428
    Abstract: Systems, methods and non-transitory computer readable media for attributing generated visual content to training examples are provided. A first visual content generated using a generative model may be received. The generative model may be associated with a plurality of training examples. Each training example may be associated with a visual content. Properties of the first visual content may be determined. Each visual content associated with a training example may be analyzed to determine properties of the visual content. The properties of the first visual content and the properties of the visual contents associated with the plurality of training examples may be used to attribute the first visual content to a subgroup of the plurality of training examples. The visual contents associated with the training examples of the subgroup may be associated with a source. A data-record associated with the source may be updated based on the attribution.
    Type: Application
    Filed: October 9, 2024
    Publication date: January 30, 2025
    Inventors: Yair ADATO, Ran ACHITUV, Eyal GUTFLAISH, Dvir YERUSHALMI
  • Patent number: 12182910
    Abstract: Systems, methods and non-transitory computer readable media for propagating changes from one visual content to other visual contents are provided. A plurality of visual contents may be accessed. A first visual content and a modified version of the first visual content may be accessed. The first visual content and the modified version of the first visual content may be analyzed to determine a manipulation for the plurality of visual contents. The determined manipulation may be used to generate a manipulated visual content for each visual content of the plurality of visual contents. The generated manipulated visual contents may be provided.
    Type: Grant
    Filed: November 4, 2021
    Date of Patent: December 31, 2024
    Assignee: BRIA ARTIFICIAL INTELLIGENCE LTD.
    Inventors: Yair Adato, Gal Jacobi, Efrat Taig, Bar Fingerman, Dvir Yerushalmi, Eyal Gutflaish
  • Patent number: 12142029
    Abstract: Systems, methods and non-transitory computer readable media for attributing generated visual content to training examples are provided. A first visual content generated using a generative model may be received. The generative model may be associated with a plurality of training examples. Each training example may be associated with a visual content. Properties of the first visual content may be determined. Each visual content associated with a training example may be analyzed to determine properties of the visual content. The properties of the first visual content and the properties of the visual contents associated with the plurality of training examples may be used to attribute the first visual content to a subgroup of the plurality of training examples. The visual contents associated with the training examples of the subgroup may be associated with a source. A data-record associated with the source may be updated based on the attribution.
    Type: Grant
    Filed: November 14, 2022
    Date of Patent: November 12, 2024
    Assignee: BRIA ARTIFICIAL INTELLIGENCE LTD.
    Inventors: Yair Adato, Ran Achituv, Eyal Gutflaish, Dvir Yerushalmi
  • Publication number: 20240273307
    Abstract: Systems, methods and non-transitory computer readable media for inference based on different portions of a training set using a single inference model are provided. Textual inputs may be received, each of which may include a source-identifying-keyword. An inference model may be a result of training a machine learning model using a plurality of training examples. Each training example may include a respective textual content and a respective media content. The training examples may be grouped based on source-identifying-keywords included in the textual contents. Different parameters of the inference model may be based on different groups, and thereby be associated with different source-identifying-keywords. When generating new media content using the inference model and a textual input, parameters associated with the source-identifying-keyword included in the textual input may be used.
    Type: Application
    Filed: November 7, 2023
    Publication date: August 15, 2024
    Inventors: Yair ADATO, Michael FEINSTEIN, Efrat TAIG, Dvir YERUSHALMI, Ori LIBERMAN
  • Publication number: 20240273782
    Abstract: Systems, methods and non-transitory computer readable media for providing diverse visual contents based on prompts are provided. A textual input in a natural language indicative of a desire of an individual to receive at least one visual content of an inanimate object of a particular category may be received. Further, a demographic requirement may be obtained. For example, the textual input may be analyzed to determine a demographic requirement. Further, a visual content may be obtained based on the demographic requirement and the textual input. The visual content may include a depiction of at least one inanimate object of the particular category and a depiction of one or more persons matching the demographic requirement. Further, a presentation of the visual content to the individual may be caused.
    Type: Application
    Filed: November 7, 2023
    Publication date: August 15, 2024
    Inventors: Yair ADATO, Michael FEINSTEIN, Efrat TAIG, Dvir YERUSHALMI, Ori LIBERMAN
  • Publication number: 20240273300
    Abstract: Systems, methods and non-transitory computer readable media for identifying prompts used for training of inference models are provided. In some examples, a specific textual prompt in a natural language may be received. Further, data based on at least one parameter of an inference model may be accessed. The inference model may be a result of training a machine learning model using a plurality of training examples. Each training example of the plurality of training examples may include a respective textual content and a respective media content. The data and the specific textual prompt may be analyzed to determine a likelihood that the specific textual prompt is included in at least one training example of the plurality of training examples. A digital signal indicative of the likelihood that the specific textual prompt is included in at least one training example of the plurality of training examples may be generated.
    Type: Application
    Filed: February 16, 2024
    Publication date: August 15, 2024
    Inventors: Yair ADATO, Michael FEINSTEIN, Efrat TAIG, Dvir YERUSHALMI, Ori LIBERMAN
  • Patent number: 12033372
    Abstract: Systems, methods and non-transitory computer readable media for attributing generated visual content to training examples are provided. A first visual content generated using a generative model may be received. The generative model may be associated with a plurality of training examples. Each training example may be associated with a visual content. Properties of the first visual content may be determined. Each visual content associated with a training example may be analyzed to determine properties of the visual content. The properties of the first visual content and the properties of the visual contents associated with the plurality of training examples may be used to attribute the first visual content to a subgroup of the plurality of training examples. The visual contents associated with the training examples of the subgroup may be associated with a source. A data-record associated with the source may be updated based on the attribution.
    Type: Grant
    Filed: December 6, 2023
    Date of Patent: July 9, 2024
    Assignee: BRIA ARTIFICIAL INTELLIGENCE LTD.
    Inventors: Yair Adato, Ran Achituv, Eyal Gutflaish, Dvir Yerushalmi
  • Publication number: 20240153039
    Abstract: Systems, methods and non-transitory computer readable media for attributing generated visual content to training examples are provided. A first visual content generated using a generative model may be received. The generative model may be associated with a plurality of training examples. Each training example may be associated with a visual content. Properties of the first visual content may be determined. Each visual content associated with a training example may be analyzed to determine properties of the visual content. The properties of the first visual content and the properties of the visual contents associated with the plurality of training examples may be used to attribute the first visual content to a subgroup of the plurality of training examples. The visual contents associated with the training examples of the subgroup may be associated with a source. A data-record associated with the source may be updated based on the attribution.
    Type: Application
    Filed: November 14, 2022
    Publication date: May 9, 2024
    Inventors: Yair ADATO, Ran ACHITUV, Eyal GUTFLAISH, Dvir YERUSHALMI
  • Patent number: 11947922
    Abstract: Systems, methods and non-transitory computer readable media for prompt-based attribution of generated media contents to training examples are provided. In some examples, a first media content generated using a generative model in response to a first textual input may be received. The generative model may be a result of training a machine learning model using a plurality of training examples. Each training example of the plurality of training examples may include a respective textual content and a respective media content. Properties of the first textual input and properties of the textual contents included in the plurality of training examples may be used to attribute the first media content to a first subgroup of the plurality of training examples. The training examples of the first subgroup may be associated with a source. Further, a data-record associated with the source may be updated based on the attribution.
    Type: Grant
    Filed: November 7, 2023
    Date of Patent: April 2, 2024
    Assignee: BRIA ARTIFICIAL INTELLIGENCE LTD.
    Inventors: Yair Adato, Michael Feinstein, Efrat Taig, Dvir Yerushalmi, Ori Liberman
  • Publication number: 20240104697
    Abstract: Systems, methods and non-transitory computer readable media for attributing generated visual content to training examples are provided. A first visual content generated using a generative model may be received. The generative model may be associated with a plurality of training examples. Each training example may be associated with a visual content. Properties of the first visual content may be determined. Each visual content associated with a training example may be analyzed to determine properties of the visual content. The properties of the first visual content and the properties of the visual contents associated with the plurality of training examples may be used to attribute the first visual content to a subgroup of the plurality of training examples. The visual contents associated with the training examples of the subgroup may be associated with a source. A data-record associated with the source may be updated based on the attribution.
    Type: Application
    Filed: December 6, 2023
    Publication date: March 28, 2024
    Inventors: Yair ADATO, Ran ACHITUV, Eyal GUTFLAISH, Dvir YERUSHALMI
  • Patent number: 11934792
    Abstract: Systems, methods and non-transitory computer readable media for identifying prompts used for training of inference models are provided. In some examples, a specific textual prompt in a natural language may be received. Further, data based on at least one parameter of an inference model may be accessed. The inference model may be a result of training a machine learning model using a plurality of training examples. Each training example of the plurality of training examples may include a respective textual content and a respective media content. The data and the specific textual prompt may be analyzed to determine a likelihood that the specific textual prompt is included in at least one training example of the plurality of training examples. A digital signal indicative of the likelihood that the specific textual prompt is included in at least one training example of the plurality of training examples may be generated.
    Type: Grant
    Filed: November 7, 2023
    Date of Patent: March 19, 2024
    Assignee: BRIA ARTIFICIAL INTELLIGENCE LTD.
    Inventors: Yair Adato, Michael Feinstein, Efrat Taig, Dvir Yerushalmi, Ori Liberman
  • Patent number: 11769283
    Abstract: Systems, methods and non-transitory computer readable media for generating looped video clips are provided. A still image may be received. The still image may be analyzed to generate a series of images. The series of images may include at least first, middle and last images. The first image may be substantially visually similar to the last image, and the middle image may be visually different from the first and last images. The series of images may be provided. Playing the series of images in a video clip that starts with the first image and finishes with the last image, and repeating the video clip from the first image immediately after completing the playing of the video clip with the last image, may create a visually smooth loop in which the transition from the last image to the first image is visually indistinguishable from the transitions between frames within the video clip.
    Type: Grant
    Filed: November 4, 2021
    Date of Patent: September 26, 2023
    Assignee: BRIA ARTIFICIAL INTELLIGENCE LTD.
    Inventors: Yair Adato, Gal Jacobi, Dvir Yerushalmi, Efrat Taig
  • Publication number: 20230154153
    Abstract: Systems, methods and non-transitory computer readable media for identifying visual contents used for training of inference models are provided. A specific visual content may be received. Data based on at least one parameter of an inference model may be received. The inference model may be a result of training a machine learning algorithm using a plurality of training examples. Each training example of the plurality of training examples may include a visual content. The data and the specific visual content may be analyzed to determine a likelihood that the specific visual content is included in at least one training example of the plurality of training examples. A digital signal indicative of the likelihood that the specific visual content is included in at least one training example of the plurality of training examples may be generated.
    Type: Application
    Filed: November 14, 2022
    Publication date: May 18, 2023
    Inventors: Yair ADATO, Ran ACHITUV, Eyal GUTFLAISH, Dvir YERUSHALMI
  • Publication number: 20230154064
    Abstract: Systems, methods and non-transitory computer readable media for transforming non-realistic virtual environments to realistic virtual environments are provided. First digital signals representing virtual content in an extended reality environment may be received. The first digital signals may be used to identify a non-realistic portion of the virtual content. A generative model may be used to analyze the first digital signals to generate a realistic version of the identified non-realistic portion of the virtual content. Second digital signals configured to cause a wearable extended reality appliance to present the generated realistic version instead of the identified non-realistic portion of the virtual content in the extended reality environment may be generated.
    Type: Application
    Filed: November 14, 2022
    Publication date: May 18, 2023
    Inventors: Yair ADATO, Dvir YERUSHALMI, Efrat TAIG
  • Publication number: 20220156317
    Abstract: Systems, methods and non-transitory computer readable media for generating looped video clips are provided. A still image may be received. The still image may be analyzed to generate a series of images. The series of images may include at least first, middle and last images. The first image may be substantially visually similar to the last image, and the middle image may be visually different from the first and last images. The series of images may be provided. Playing the series of images in a video clip that starts with the first image and finishes with the last image, and repeating the video clip from the first image immediately after completing the playing of the video clip with the last image, may create a visually smooth loop in which the transition from the last image to the first image is visually indistinguishable from the transitions between frames within the video clip.
    Type: Application
    Filed: November 4, 2021
    Publication date: May 19, 2022
    Inventors: Yair ADATO, Gal JACOBI, Dvir YERUSHALMI, Efrat TAIG
  • Publication number: 20220156318
    Abstract: Systems, methods and non-transitory computer readable media for propagating changes from one visual content to other visual contents are provided. A plurality of visual contents may be accessed. A first visual content and a modified version of the first visual content may be accessed. The first visual content and the modified version of the first visual content may be analyzed to determine a manipulation for the plurality of visual contents. The determined manipulation may be used to generate a manipulated visual content for each visual content of the plurality of visual contents. The generated manipulated visual contents may be provided.
    Type: Application
    Filed: November 4, 2021
    Publication date: May 19, 2022
    Inventors: Yair ADATO, Gal JACOBI, Efrat TAIG, Bar FINGERMAN, Dvir YERUSHALMI, Eyal GUTFLAISH
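
Several of the abstracts above (e.g. patent 12142029) describe attributing a generated visual content to a subgroup of training examples by comparing properties, then updating a data-record for the associated source. As a rough illustration only — not the patented implementation — one could represent "properties" as feature vectors, attribute by similarity, and keep a per-source counter. The names `attribute`, `update_records`, and the cosine-similarity choice are illustrative assumptions.

```python
# Illustrative sketch: attribute generated content to the most similar
# training examples by property vector, then update per-source records.
import numpy as np

def cosine(a, b):
    # Cosine similarity between two property vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def attribute(generated_props, training_examples, k=2):
    """training_examples: list of (source_id, property_vector).
    Returns the k most similar examples as the attributed subgroup."""
    scored = sorted(
        training_examples,
        key=lambda ex: cosine(generated_props, ex[1]),
        reverse=True,
    )
    return scored[:k]

def update_records(records, subgroup):
    # Update the data-record of each source in the subgroup,
    # here modeled as a simple attribution counter.
    for source_id, _ in subgroup:
        records[source_id] = records.get(source_id, 0) + 1
    return records

examples = [
    ("source_a", np.array([1.0, 0.0])),
    ("source_a", np.array([0.9, 0.1])),
    ("source_b", np.array([0.0, 1.0])),
]
records = update_records({}, attribute(np.array([1.0, 0.05]), examples))
```

Here both nearest examples come from `source_a`, so its record is incremented twice; a real system would use learned embeddings and far richer records (e.g. for compensating content sources).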
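
Patent 12182910 and publication 20220156318 describe determining a manipulation from one visual content and its modified version, then applying it to a plurality of visual contents. A minimal sketch of that idea, assuming the "manipulation" can be modeled as a per-pixel delta (the actual claims are far more general):

```python
# Hedged sketch: infer a manipulation as the pixel-wise difference
# between an original and its edited version, then propagate it.
import numpy as np

def infer_manipulation(original, modified):
    # Simplest possible manipulation model: an additive delta.
    return modified - original

def propagate(manipulation, images):
    # Apply the delta to each image, keeping values in valid range.
    return [np.clip(img + manipulation, 0, 255) for img in images]

a = np.full((2, 2), 100.0)
a_edited = a + 50.0  # e.g. a uniform brightness increase
delta = infer_manipulation(a, a_edited)
out = propagate(delta, [np.full((2, 2), 10.0), np.full((2, 2), 240.0)])
```

The dark image is brightened by the same amount; the bright one saturates at 255. A production system would infer a semantic edit (color grading, object removal, style change) rather than a raw delta.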
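
The looped-video abstracts (patent 11769283, publication 20220156317) hinge on one constraint: the first and last frames must be visually near-identical so repetition is seamless, while middle frames differ. A toy sketch of that constraint, assuming a simple periodic brightness modulation of the still image (the patents cover analysis-driven generation, not this specific formula):

```python
# Sketch of the loop constraint: build a frame series whose last frame
# matches the first, so repeating the clip produces a seamless loop.
import numpy as np

def looped_series(still, n_frames=9, amplitude=20.0):
    frames = []
    for i in range(n_frames):
        phase = 2 * np.pi * i / (n_frames - 1)  # 0 at first, 2*pi at last
        offset = amplitude * (1 - np.cos(phase)) / 2  # 0 at both ends
        frames.append(np.clip(still + offset, 0, 255))
    return frames

still = np.full((2, 2), 100.0)
frames = looped_series(still, n_frames=9)
```

The first and last frames equal the still image, so the jump back to frame 0 is invisible, while the middle frame (peak offset) is visibly different, matching the first/middle/last structure the abstracts describe.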