Patents by Inventor Dvir Yerushalmi
Dvir Yerushalmi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250037428
Abstract: Systems, methods and non-transitory computer readable media for attributing generated visual content to training examples are provided. A first visual content generated using a generative model may be received. The generative model may be associated with a plurality of training examples. Each training example may be associated with a visual content. Properties of the first visual content may be determined. Each visual content associated with a training example may be analyzed to determine properties of the visual content. The properties of the first visual content and the properties of the visual contents associated with the plurality of training examples may be used to attribute the first visual content to a subgroup of the plurality of training examples. The visual contents associated with the training examples of the subgroup may be associated with a source. A data-record associated with the source may be updated based on the attribution.
Type: Application
Filed: October 9, 2024
Publication date: January 30, 2025
Inventors: Yair ADATO, Ran ACHITUV, Eyal GUTFLAISH, Dvir YERUSHALMI
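To make the attribution scheme in this entry concrete, here is a minimal sketch: it compares toy visual properties (color histograms) of a generated image against those of the training images, credits the closest subgroup, and tallies a per-source record. The property choice, similarity measure and top-k cutoff are illustrative assumptions, not the patented method.

```python
# Minimal sketch, assuming color histograms as the "properties" and a top-k
# nearest-neighbor vote as the attribution rule.
import numpy as np

def properties(image: np.ndarray, bins: int = 8) -> np.ndarray:
    """Toy visual properties: a normalized per-channel color histogram."""
    hists = [np.histogram(image[..., c], bins=bins, range=(0, 255))[0]
             for c in range(image.shape[-1])]
    vec = np.concatenate(hists).astype(float)
    return vec / (vec.sum() + 1e-9)

def attribute(generated: np.ndarray, training: dict, top_k: int = 3) -> dict:
    """training maps a source name to its list of training images.
    Returns, per source, how many of the top-k most similar images it contributed."""
    g = properties(generated)
    scored = []
    for source, images in training.items():
        for img in images:
            scored.append((float(np.dot(g, properties(img))), source))
    scored.sort(reverse=True)
    record: dict = {}
    for _, source in scored[:top_k]:
        record[source] = record.get(source, 0) + 1   # update the source's data-record
    return record

rng = np.random.default_rng(0)
training_set = {"source_a": [rng.integers(0, 256, (32, 32, 3)) for _ in range(4)],
                "source_b": [rng.integers(0, 256, (32, 32, 3)) for _ in range(4)]}
generated_image = rng.integers(0, 256, (32, 32, 3))
print(attribute(generated_image, training_set))
```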
-
Patent number: 12182910
Abstract: Systems, methods and non-transitory computer readable media for propagating changes from one visual content to other visual contents are provided. A plurality of visual contents may be accessed. A first visual content and a modified version of the first visual content may be accessed. The first visual content and the modified version of the first visual content may be analyzed to determine a manipulation for the plurality of visual contents. The determined manipulation may be used to generate a manipulated visual content for each visual content of the plurality of visual contents. The generated manipulated visual contents may be provided.
Type: Grant
Filed: November 4, 2021
Date of Patent: December 31, 2024
Assignee: BRIA ARTIFICIAL INTELLIGENCE LTD.
Inventors: Yair Adato, Gal Jacobi, Efrat Taig, Bar Fingerman, Dvir Yerushalmi, Eyal Gutflaish
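As an illustration of change propagation, the sketch below fits a very simple manipulation (a per-channel gain and offset) from the first visual content and its modified version, then replays it on other images. The least-squares fit and the global gain/offset model are assumptions for the example; the claimed analysis is not limited to such edits.

```python
# Minimal sketch, assuming the edit can be approximated per channel as
# modified ~= gain * original + offset; real manipulations can be far richer.
import numpy as np

def learn_manipulation(original: np.ndarray, modified: np.ndarray):
    """Fit a per-channel linear edit with least squares."""
    gains, offsets = [], []
    for c in range(original.shape[-1]):
        x = original[..., c].astype(float).ravel()
        y = modified[..., c].astype(float).ravel()
        gain, offset = np.polyfit(x, y, deg=1)
        gains.append(gain)
        offsets.append(offset)
    return np.array(gains), np.array(offsets)

def apply_manipulation(image: np.ndarray, gains: np.ndarray, offsets: np.ndarray) -> np.ndarray:
    """Replay the learned edit on another image."""
    out = image.astype(float) * gains + offsets
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(1)
first = rng.integers(0, 256, (16, 16, 3)).astype(np.uint8)
edited = np.clip(first.astype(float) * 1.2 + 10, 0, 255).astype(np.uint8)  # a brightening edit
gains, offsets = learn_manipulation(first, edited)
other = rng.integers(0, 256, (16, 16, 3)).astype(np.uint8)
propagated = apply_manipulation(other, gains, offsets)  # the same edit, applied to a new image
```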
-
Patent number: 12142029
Abstract: Systems, methods and non-transitory computer readable media for attributing generated visual content to training examples are provided. A first visual content generated using a generative model may be received. The generative model may be associated with a plurality of training examples. Each training example may be associated with a visual content. Properties of the first visual content may be determined. Each visual content associated with a training example may be analyzed to determine properties of the visual content. The properties of the first visual content and the properties of the visual contents associated with the plurality of training examples may be used to attribute the first visual content to a subgroup of the plurality of training examples. The visual contents associated with the training examples of the subgroup may be associated with a source. A data-record associated with the source may be updated based on the attribution.
Type: Grant
Filed: November 14, 2022
Date of Patent: November 12, 2024
Assignee: BRIA ARTIFICIAL INTELLIGENCE LTD
Inventors: Yair Adato, Ran Achituv, Eyal Gutflaish, Dvir Yerushalmi
-
Publication number: 20240273307
Abstract: Systems, methods and non-transitory computer readable media for inference based on different portions of a training set using a single inference model are provided. Textual inputs may be received, each of which may include a source-identifying-keyword. An inference model may be a result of training a machine learning model using a plurality of training examples. Each training example may include a respective textual content and a respective media content. The training examples may be grouped based on source-identifying-keywords included in the textual contents. Different parameters of the inference model may be based on different groups, and thereby be associated with different source-identifying-keywords. When generating new media content using the inference model and a textual input, parameters associated with the source-identifying-keyword included in the textual input may be used.
Type: Application
Filed: November 7, 2023
Publication date: August 15, 2024
Inventors: Yair ADATO, Michael FEINSTEIN, Efrat TAIG, Dvir YERUSHALMI, Ori LIBERMAN
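A rough sketch of the keyword-routing idea: one model object keeps a shared parameter set plus per-source parameter groups, and a generate call picks the group whose source-identifying keyword appears in the input text. The registry structure, the keyword matching and the generate() stub are assumptions made for the example.

```python
# Minimal sketch, assuming exact keyword matching and a dictionary of
# per-source parameter groups layered over shared parameters.
from dataclasses import dataclass, field

@dataclass
class KeywordRoutedModel:
    shared_params: dict
    per_source_params: dict = field(default_factory=dict)

    def register_source(self, keyword: str, params: dict) -> None:
        """Associate a parameter group with a source-identifying keyword."""
        self.per_source_params[keyword] = params

    def generate(self, text: str) -> str:
        # Use the parameter group whose keyword appears in the textual input.
        chosen = dict(self.shared_params)
        for keyword, params in self.per_source_params.items():
            if keyword.lower() in text.lower():
                chosen.update(params)
                break
        return f"<media generated with parameters: {sorted(chosen)}>"

model = KeywordRoutedModel(shared_params={"base_weights": "..."})
model.register_source("stock_library_x", {"adapter": "weights_from_library_x"})
print(model.generate("a beach photo in the style of stock_library_x"))
```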
-
Publication number: 20240273782
Abstract: Systems, methods and non-transitory computer readable media for providing diverse visual contents based on prompts are provided. A textual input in a natural language indicative of a desire of an individual to receive at least one visual content of an inanimate object of a particular category may be received. Further, a demographic requirement may be obtained. For example, the textual input may be analyzed to determine a demographic requirement. Further, a visual content may be obtained based on the demographic requirement and the textual input. The visual content may include a depiction of at least one inanimate object of the particular category and a depiction of one or more persons matching the demographic requirement. Further, a presentation of the visual content to the individual may be caused.
Type: Application
Filed: November 7, 2023
Publication date: August 15, 2024
Inventors: Yair ADATO, Michael FEINSTEIN, Efrat TAIG, Dvir YERUSHALMI, Ori LIBERMAN
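The sketch below shows one way the two inputs could be combined: a crude keyword scan stands in for analyzing the textual input for a demographic requirement, and that requirement (or a fallback obtained separately) is folded into the generation request. The term list and the request format are illustrative assumptions only.

```python
# Minimal sketch, assuming a keyword scan in place of real prompt analysis
# and a text-to-image service that accepts a plain-text request.
DEMOGRAPHIC_TERMS = {"elderly", "young", "women", "men", "children"}

def extract_demographic_requirement(text: str) -> list:
    """Very rough stand-in for analyzing the textual input."""
    return [w for w in text.lower().split() if w in DEMOGRAPHIC_TERMS]

def build_request(text: str, fallback_requirement=None) -> str:
    """Combine the object request with the demographic requirement."""
    requirement = extract_demographic_requirement(text) or (fallback_requirement or [])
    people = " and ".join(requirement) if requirement else "people of diverse demographics"
    return f"{text}, shown with {people}"

print(build_request("a red bicycle in a park", fallback_requirement=["elderly", "women"]))
# -> 'a red bicycle in a park, shown with elderly and women'
```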
-
Publication number: 20240273300
Abstract: Systems, methods and non-transitory computer readable media for identifying prompts used for training of inference models are provided. In some examples, a specific textual prompt in a natural language may be received. Further, data based on at least one parameter of an inference model may be accessed. The inference model may be a result of training a machine learning model using a plurality of training examples. Each training example of the plurality of training examples may include a respective textual content and a respective media content. The data and the specific textual prompt may be analyzed to determine a likelihood that the specific textual prompt is included in at least one training example of the plurality of training examples. A digital signal indicative of the likelihood that the specific textual prompt is included in at least one training example of the plurality of training examples may be generated.
Type: Application
Filed: February 16, 2024
Publication date: August 15, 2024
Inventors: Yair ADATO, Michael FEINSTEIN, Efrat TAIG, Dvir YERUSHALMI, Ori LIBERMAN
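One family of techniques for this kind of question is membership inference: compare the model's loss on the candidate prompt with its losses on prompts known not to be in the training set, and map an unusually low loss to a high likelihood. The calibration below is a generic sketch with made-up loss values; the publication does not disclose this particular test.

```python
# Minimal sketch, assuming a loss-based membership test with a z-score
# calibration against reference prompts known to be unseen.
import math
import statistics

def membership_likelihood(candidate_loss: float, reference_losses: list) -> float:
    """Lower loss than the references -> higher likelihood the prompt was trained on."""
    mean = statistics.mean(reference_losses)
    stdev = statistics.pstdev(reference_losses) or 1.0
    z = (mean - candidate_loss) / stdev        # positive when the loss is unusually low
    return 1.0 / (1.0 + math.exp(-z))          # squash to a (0, 1) likelihood

reference_losses = [3.2, 3.5, 3.1, 3.4, 3.3]   # made-up losses on unseen prompts
print(membership_likelihood(2.1, reference_losses))  # low loss -> likelihood near 1
print(membership_likelihood(3.6, reference_losses))  # typical loss -> likelihood below 0.5
```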
-
Patent number: 12033372
Abstract: Systems, methods and non-transitory computer readable media for attributing generated visual content to training examples are provided. A first visual content generated using a generative model may be received. The generative model may be associated with a plurality of training examples. Each training example may be associated with a visual content. Properties of the first visual content may be determined. Each visual content associated with a training example may be analyzed to determine properties of the visual content. The properties of the first visual content and the properties of the visual contents associated with the plurality of training examples may be used to attribute the first visual content to a subgroup of the plurality of training examples. The visual contents associated with the training examples of the subgroup may be associated with a source. A data-record associated with the source may be updated based on the attribution.
Type: Grant
Filed: December 6, 2023
Date of Patent: July 9, 2024
Assignee: BRIA ARTIFICIAL INTELLIGENCE LTD
Inventors: Yair Adato, Ran Achituv, Eyal Gutflaish, Dvir Yerushalmi
-
Publication number: 20240153039
Abstract: Systems, methods and non-transitory computer readable media for attributing generated visual content to training examples are provided. A first visual content generated using a generative model may be received. The generative model may be associated with a plurality of training examples. Each training example may be associated with a visual content. Properties of the first visual content may be determined. Each visual content associated with a training example may be analyzed to determine properties of the visual content. The properties of the first visual content and the properties of the visual contents associated with the plurality of training examples may be used to attribute the first visual content to a subgroup of the plurality of training examples. The visual contents associated with the training examples of the subgroup may be associated with a source. A data-record associated with the source may be updated based on the attribution.
Type: Application
Filed: November 14, 2022
Publication date: May 9, 2024
Inventors: Yair ADATO, Ran ACHITUV, Eyal GUTFLAISH, Dvir YERUSHALMI
-
Patent number: 11947922
Abstract: Systems, methods and non-transitory computer readable media for prompt-based attribution of generated media contents to training examples are provided. In some examples, a first media content generated using a generative model in response to a first textual input may be received. The generative model may be a result of training a machine learning model using a plurality of training examples. Each training example of the plurality of training examples may include a respective textual content and a respective media content. Properties of the first textual input and properties of the textual contents included in the plurality of training examples may be used to attribute the first media content to a first subgroup of the plurality of training examples. The training examples of the first subgroup may be associated with a source. Further, a data-record associated with the source may be updated based on the attribution.
Type: Grant
Filed: November 7, 2023
Date of Patent: April 2, 2024
Assignee: BRIA ARTIFICIAL INTELLIGENCE LTD.
Inventors: Yair Adato, Michael Feinstein, Efrat Taig, Dvir Yerushalmi, Ori Liberman
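A minimal sketch of prompt-based attribution follows, assuming token-overlap (Jaccard) similarity between the prompt and the captions of training examples, a top-k cutoff, and a simple per-source credit counter; none of these choices come from the patent itself.

```python
# Minimal sketch, assuming Jaccard token overlap as the textual-property
# comparison and a top-k vote as the attribution rule.
def jaccard(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / (len(ta | tb) or 1)

def attribute_by_prompt(prompt: str, training_examples: list, top_k: int = 2) -> dict:
    """training_examples: [{'caption': str, 'source': str}, ...]"""
    ranked = sorted(training_examples,
                    key=lambda ex: jaccard(prompt, ex["caption"]),
                    reverse=True)
    credits: dict = {}
    for ex in ranked[:top_k]:
        credits[ex["source"]] = credits.get(ex["source"], 0) + 1  # update the source's data-record
    return credits

examples = [{"caption": "a red sports car on a mountain road", "source": "agency_a"},
            {"caption": "a bowl of fresh fruit on a table", "source": "agency_b"},
            {"caption": "a red vintage car parked downtown", "source": "agency_a"}]
print(attribute_by_prompt("a red car on a road", examples))  # -> {'agency_a': 2}
```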
-
Publication number: 20240104697
Abstract: Systems, methods and non-transitory computer readable media for attributing generated visual content to training examples are provided. A first visual content generated using a generative model may be received. The generative model may be associated with a plurality of training examples. Each training example may be associated with a visual content. Properties of the first visual content may be determined. Each visual content associated with a training example may be analyzed to determine properties of the visual content. The properties of the first visual content and the properties of the visual contents associated with the plurality of training examples may be used to attribute the first visual content to a subgroup of the plurality of training examples. The visual contents associated with the training examples of the subgroup may be associated with a source. A data-record associated with the source may be updated based on the attribution.
Type: Application
Filed: December 6, 2023
Publication date: March 28, 2024
Inventors: Yair ADATO, Ran ACHITUV, Eyal GUTFLAISH, Dvir YERUSHALMI
-
Patent number: 11934792
Abstract: Systems, methods and non-transitory computer readable media for identifying prompts used for training of inference models are provided. In some examples, a specific textual prompt in a natural language may be received. Further, data based on at least one parameter of an inference model may be accessed. The inference model may be a result of training a machine learning model using a plurality of training examples. Each training example of the plurality of training examples may include a respective textual content and a respective media content. The data and the specific textual prompt may be analyzed to determine a likelihood that the specific textual prompt is included in at least one training example of the plurality of training examples. A digital signal indicative of the likelihood that the specific textual prompt is included in at least one training example of the plurality of training examples may be generated.
Type: Grant
Filed: November 7, 2023
Date of Patent: March 19, 2024
Assignee: BRIA ARTIFICIAL INTELLIGENCE LTD.
Inventors: Yair Adato, Michael Feinstein, Efrat Taig, Dvir Yerushalmi, Ori Liberman
-
Patent number: 11769283
Abstract: Systems, methods and non-transitory computer readable media for generating looped video clips are provided. A still image may be received. The still image may be analyzed to generate a series of images. The series of images may include at least first, middle and last images. The first image may be substantially visually similar to the last image, and the middle image may be visually different from the first and last images. The series of images may be provided. Playing the series of images in a video clip that starts with the first image and finishes with the last image, and repeating the video clip from the first image immediately after completing the playing of the video clip with the last image, may create a visually smooth loop in which the transition from the last image to the first image is visually indistinguishable from the transitions between frames within the video clip.
Type: Grant
Filed: November 4, 2021
Date of Patent: September 26, 2023
Assignee: BRIA ARTIFICIAL INTELLIGENCE LTD.
Inventors: Yair Adato, Gal Jacobi, Dvir Yerushalmi, Efrat Taig
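The loop-closure idea can be illustrated with a toy frame generator: a zoom whose magnitude follows a raised-cosine cycle returns to zero at the end, so the last frame nearly coincides with the first and repeated playback looks seamless. The zoom effect and the dependency-free nearest-neighbor resize are assumptions for the sketch; the invention itself derives the motion from analyzing the still image.

```python
# Minimal sketch, assuming a cyclic zoom as the stand-in "motion" so that the
# first and last frames of the series nearly coincide.
import numpy as np

def looped_zoom_frames(image: np.ndarray, n_frames: int = 24, max_zoom: float = 0.1) -> list:
    h, w = image.shape[:2]
    frames = []
    for i in range(n_frames):
        # Raised-cosine profile: 0 at the start, max in the middle, back to 0 at the loop point.
        zoom = 0.5 * max_zoom * (1 - np.cos(2 * np.pi * i / n_frames))
        crop = int(min(h, w) * zoom / 2)
        cropped = image[crop:h - crop, crop:w - crop]
        # Nearest-neighbor resize back to the original size, to keep the sketch dependency-free.
        ys = np.linspace(0, cropped.shape[0] - 1, h).astype(int)
        xs = np.linspace(0, cropped.shape[1] - 1, w).astype(int)
        frames.append(cropped[ys][:, xs])
    return frames  # playing these on repeat yields a smooth, seamless loop

frames = looped_zoom_frames(np.random.default_rng(2).integers(0, 256, (64, 64, 3)))
```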
-
Publication number: 20230154153
Abstract: Systems, methods and non-transitory computer readable media for identifying visual contents used for training of inference models are provided. A specific visual content may be received. Data based on at least one parameter of an inference model may be received. The inference model may be a result of training a machine learning algorithm using a plurality of training examples. Each training example of the plurality of training examples may include a visual content. The data and the specific visual content may be analyzed to determine a likelihood that the specific visual content is included in at least one training example of the plurality of training examples. A digital signal indicative of the likelihood that the specific visual content is included in at least one training example of the plurality of training examples may be generated.
Type: Application
Filed: November 14, 2022
Publication date: May 18, 2023
Inventors: Yair ADATO, Ran ACHITUV, Eyal GUTFLAISH, Dvir YERUSHALMI
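A sketch of one way to probe for training-set membership of an image: hash the candidate with a tiny perceptual hash and look for a near-duplicate among images sampled from the model, which can hint at memorized training data. The hash, the sampling step and the Hamming threshold are all assumptions for the illustration.

```python
# Minimal sketch, assuming an average-hash comparison between the candidate
# image and images sampled from the model; a near-duplicate suggests the
# candidate may have been memorized from the training set.
import numpy as np

def average_hash(image: np.ndarray, size: int = 8) -> np.ndarray:
    """Tiny perceptual hash: grayscale, downsample to size x size, threshold at the mean."""
    gray = image.mean(axis=-1)
    ys = np.linspace(0, gray.shape[0] - 1, size).astype(int)
    xs = np.linspace(0, gray.shape[1] - 1, size).astype(int)
    small = gray[ys][:, xs]
    return (small > small.mean()).astype(np.uint8).ravel()

def likely_in_training(candidate: np.ndarray, model_samples: list, max_hamming: int = 6) -> bool:
    """True if any sampled output is within max_hamming bits of the candidate's hash."""
    h = average_hash(candidate)
    distances = [int(np.sum(h != average_hash(s))) for s in model_samples]
    return min(distances) <= max_hamming

rng = np.random.default_rng(3)
candidate = rng.integers(0, 256, (64, 64, 3))
samples = [rng.integers(0, 256, (64, 64, 3)) for _ in range(8)] + [candidate.copy()]
print(likely_in_training(candidate, samples))  # True: an exact duplicate was "sampled"
```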
-
Publication number: 20230154064
Abstract: Systems, methods and non-transitory computer readable media for transforming non-realistic virtual environments to realistic virtual environments are provided. First digital signals representing virtual content in an extended reality environment may be received. The first digital signals may be used to identify a non-realistic portion of the virtual content. A generative model may be used to analyze the first digital signals to generate a realistic version of the identified non-realistic portion of the virtual content. Second digital signals configured to cause a wearable extended reality appliance to present the generated realistic version instead of the identified non-realistic portion of the virtual content in the extended reality environment may be generated.
Type: Application
Filed: November 14, 2022
Publication date: May 18, 2023
Inventors: Yair ADATO, Dvir YERUSHALMI, Efrat TAIG
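The abstract describes a three-stage pipeline; the sketch below mirrors its shape with stubbed stages for identifying the non-realistic portion, generating a realistic version, and emitting the replacement signals for the wearable appliance. Every function body here is a placeholder assumption rather than the disclosed implementation.

```python
# Minimal sketch of the pipeline shape only; each stage is a stub.
def identify_non_realistic_portion(first_signals: dict) -> dict:
    """Stand-in for, e.g., a classifier that flags cartoon-shaded regions."""
    return {"region_id": "tree_17", "source_signals": first_signals}

def generate_realistic_version(portion: dict) -> dict:
    """Stand-in for an image-to-image generative model call."""
    return {"region_id": portion["region_id"], "pixels": "photorealistic_texture"}

def build_replacement_signals(realistic: dict) -> dict:
    """Second digital signals: tell the wearable appliance to present the new version."""
    return {"command": "replace", **realistic}

first_signals = {"frame": 0, "content": "cartoon_tree"}
portion = identify_non_realistic_portion(first_signals)
second_signals = build_replacement_signals(generate_realistic_version(portion))
print(second_signals)
```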
-
Publication number: 20220156317
Abstract: Systems, methods and non-transitory computer readable media for generating looped video clips are provided. A still image may be received. The still image may be analyzed to generate a series of images. The series of images may include at least first, middle and last images. The first image may be substantially visually similar to the last image, and the middle image may be visually different from the first and last images. The series of images may be provided. Playing the series of images in a video clip that starts with the first image and finishes with the last image, and repeating the video clip from the first image immediately after completing the playing of the video clip with the last image, may create a visually smooth loop in which the transition from the last image to the first image is visually indistinguishable from the transitions between frames within the video clip.
Type: Application
Filed: November 4, 2021
Publication date: May 19, 2022
Inventors: Yair ADATO, Gal JACOBI, Dvir Yerushalmi, Efrat TAIG
-
Publication number: 20220156318
Abstract: Systems, methods and non-transitory computer readable media for propagating changes from one visual content to other visual contents are provided. A plurality of visual contents may be accessed. A first visual content and a modified version of the first visual content may be accessed. The first visual content and the modified version of the first visual content may be analyzed to determine a manipulation for the plurality of visual contents. The determined manipulation may be used to generate a manipulated visual content for each visual content of the plurality of visual contents. The generated manipulated visual contents may be provided.
Type: Application
Filed: November 4, 2021
Publication date: May 19, 2022
Inventors: Yair ADATO, Gal JACOBI, Efrat TAIG, Bar FINGERMAN, Dvir YERUSHALMI, Eyal GUTFLAISH