Patents by Inventor Stav Yagev

Stav Yagev has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240104699
    Abstract: Techniques for generating a gallery view of tiles for in-area participants who are participating in an online meeting are disclosed. A video stream is accessed, where this stream includes an area view of an area in which an in-area participant is located. This area view comprises pixels representative of the area and pixels representative of the in-area participant. The pixels representative of the in-area participant are identified. A field of view of the in-area participant is generated. A tile of the in-area participant is generated based on the field of view. This tile is then displayed while the area view is not displayed.
    Type: Application
    Filed: September 22, 2022
    Publication date: March 28, 2024
    Inventors: Karen MASTER BEN-DOR, Eshchar ZYCHLINSKI, Stav YAGEV, Yoni SMOLIN, Raz HALALY, Adi DIAMANT, Ido LEICHTER, Tamir SHLOMI
  • Publication number: 20230092783
    Abstract: Systems and methods are provided for configuring and utilizing botcasts, which comprise audio content with transitions corresponding to the audio content, to facilitate accessibility and presentation of the media content within the botcasts according to contextual relevance for different individual users. The systems identify, access, filter, augment, customize, personalize, create and/or otherwise configure the media content, as well as the content transitions in the botcasts, according to the individual preferences and profiles of each user, as well as the contextual circumstances for each user.
    Type: Application
    Filed: November 18, 2021
    Publication date: March 23, 2023
    Inventors: Karen MASTER BEN-DOR, Adi DIAMANT, Stav YAGEV, Eshchar ZYCHLINSKI, Yoni SMOLIN
  • Publication number: 20220329960
    Abstract: The disclosed technology is generally directed to audio capture. In one example of the technology, recorded sounds are received such that the sounds recorded were emitted from multiple locations in an environment and such that the sounds recorded are sounds that can be converted to room impulse responses. The room impulse responses are generated from the recorded sounds. Location information that is associated with the multiple locations is received. At least the room impulse responses and the location information are used to generate at least one environment-specific model. Audio captured in the environment is received. An output is generated by processing the captured audio with the at least one environment-specific model such that the output includes at least one adjustment of the captured audio based on at least one acoustical property of the environment.
    Type: Application
    Filed: April 13, 2021
    Publication date: October 13, 2022
    Inventors: Stav YAGEV, Sharon KOUBI, Aviv HURVITZ, Igor ABRAMOVSKI, Eyal KRUPKA
  • Patent number: 10592780
    Abstract: In order for feature extractors to operate with sufficient accuracy, a high degree of training is required. In this situation, a neural network implementing the feature extractor may be trained by providing it with images having known correspondence. A 3D model of a city may be utilized in order to train a neural network for location detection. 3D models are sophisticated and allow manipulation of viewer perspective and ambient features such as day/night sky variations, weather variations, and occlusion placement. Various manipulations may be executed in order to generate vast numbers of image pairs having known correspondence despite having variations. These image pairs with known correspondence may be utilized to train the neural network to generate feature maps from query images and identify correspondence between query image feature maps and reference feature maps. This training can be accomplished without requiring the capture of real images with known correspondence.
    Type: Grant
    Filed: March 30, 2018
    Date of Patent: March 17, 2020
    Assignee: WHITE RAVEN LTD.
    Inventors: Roni Gurvich, Idan Ilan, Ofer Avni, Stav Yagev
  • Publication number: 20190303725
    Abstract: In order for feature extractors to operate with sufficient accuracy, a high degree of training is required. In this situation, a neural network implementing the feature extractor may be trained by providing it with images having known correspondence. A 3D model of a city may be utilized in order to train a neural network for location detection. 3D models are sophisticated and allow manipulation of viewer perspective and ambient features such as day/night sky variations, weather variations, and occlusion placement. Various manipulations may be executed in order to generate vast numbers of image pairs having known correspondence despite having variations. These image pairs with known correspondence may be utilized to train the neural network to generate feature maps from query images and identify correspondence between query image feature maps and reference feature maps. This training can be accomplished without requiring the capture of real images with known correspondence.
    Type: Application
    Filed: March 30, 2018
    Publication date: October 3, 2019
    Applicant: FRINGEFY LTD.
    Inventors: Roni Gurvich, Idan Ilan, Ofer Avni, Stav Yagev
  • Patent number: 10043097
    Abstract: An image abstraction engine is provided to characterize scenes like those typically found in an urban setting. Specifically, buildings and manmade structures have certain characteristic properties that may be abstracted and compressed in a manner that takes advantage of those properties. This allows for more compact and computationally efficient abstraction and recognition.
    Type: Grant
    Filed: March 10, 2016
    Date of Patent: August 7, 2018
    Assignee: Fringefy LTD.
    Inventors: Stav Yagev, Omer Meir, Eitan Sharon, Achi Brandt, Assif Ziv
  • Publication number: 20160267326
    Abstract: An image abstraction engine is provided to characterize scenes like those typically found in an urban setting. Specifically, buildings and manmade structures have certain characteristic properties that may be abstracted and compressed in a manner that takes advantage of those properties. This allows for more compact and computationally efficient abstraction and recognition.
    Type: Application
    Filed: March 10, 2016
    Publication date: September 15, 2016
    Applicant: FRINGEFY LTD.
    Inventors: Stav Yagev, Omer Meir, Eitan Sharon, Achi Brandt, Assif Ziv
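
The first entry above (publication 20240104699) turns an area view of a room into per-participant gallery tiles by identifying the pixels belonging to each in-area participant, deriving a field of view for that participant, and rendering a tile from it. The following is a minimal sketch of the cropping step only, assuming a person-segmentation mask for the participant is already available; the mask source, function names, and tile geometry are illustrative assumptions, not details taken from the filing.

    import numpy as np
    import cv2

    def participant_tile(area_frame: np.ndarray,
                         participant_mask: np.ndarray,
                         tile_size=(320, 180),
                         margin: float = 0.15) -> np.ndarray:
        """Crop a gallery tile around one in-area participant.

        area_frame       -- H x W x 3 BGR frame showing the whole room (the area view)
        participant_mask -- H x W boolean mask of the participant's pixels, obtained
                            elsewhere (e.g. a person-segmentation model; the filing
                            does not specify the method)
        """
        ys, xs = np.nonzero(participant_mask)
        if ys.size == 0:
            raise ValueError("mask contains no participant pixels")

        # Pad the participant's bounding box so the crop approximates a personal
        # field of view rather than a tight silhouette.
        h, w = participant_mask.shape
        pad_y = int((ys.max() - ys.min()) * margin)
        pad_x = int((xs.max() - xs.min()) * margin)
        top, bottom = max(ys.min() - pad_y, 0), min(ys.max() + pad_y, h - 1)
        left, right = max(xs.min() - pad_x, 0), min(xs.max() + pad_x, w - 1)

        crop = area_frame[top:bottom + 1, left:right + 1]
        # The resized crop is the tile that gets displayed while the raw area
        # view itself is not shown.
        return cv2.resize(crop, tile_size, interpolation=cv2.INTER_AREA)

In a full gallery view this would run once per participant per frame, with the resulting tiles composited into a grid in place of the area view.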
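
Publication 20220329960 describes converting sounds recorded at multiple known locations into room impulse responses (RIRs), pairing them with location information to build an environment-specific model, and using that model to adjust audio captured in the same environment. One plausible concrete form of such an adjustment is regularized inverse filtering with the RIR measured nearest the talker, sketched below; this is an illustration of the general idea, not the method claimed in the filing, and the RIR measurement itself is assumed to have been done already.

    import numpy as np
    from scipy.signal import fftconvolve

    def inverse_filter(rir: np.ndarray, n_fft: int = 4096, reg: float = 1e-3) -> np.ndarray:
        """Build a rough, regularized inverse filter for one room impulse response."""
        H = np.fft.rfft(rir, n_fft)
        # Regularization keeps the inverse bounded at frequencies the RIR
        # barely excites (where |H| is close to zero).
        H_inv = np.conj(H) / (np.abs(H) ** 2 + reg)
        return np.fft.irfft(H_inv, n_fft)

    def adjust_captured_audio(captured: np.ndarray,
                              rirs: dict[tuple[float, float], np.ndarray],
                              talker_xy: tuple[float, float]) -> np.ndarray:
        """Adjust captured audio using the RIR measured closest to the talker.

        captured  -- mono audio captured in the environment (1-D float array)
        rirs      -- mapping from (x, y) measurement location to its measured RIR
        talker_xy -- estimated talker position in the same coordinates
        """
        # Pick the measurement location nearest the talker; this is where the
        # location information enters the adjustment.
        nearest = min(rirs, key=lambda xy: np.hypot(xy[0] - talker_xy[0],
                                                    xy[1] - talker_xy[1]))
        inv = inverse_filter(rirs[nearest])
        out = fftconvolve(captured, inv, mode="full")[: len(captured)]
        peak = np.max(np.abs(out))
        return out / peak if peak > 0 else out  # normalize to avoid clipping

The nearest-location lookup stands in for the filing's use of location information; a real system would more likely interpolate between measurement points or train a learned model on the full RIR set.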
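
Patent 10,592,780 and its related publication 20190303725 describe rendering a 3D city model under varied viewpoint, lighting, weather, and occlusion to produce large numbers of image pairs with known correspondence, and using those pairs to train a neural-network feature extractor without capturing real image pairs. Below is a heavily simplified PyTorch sketch of such a training loop; render_correspondence_pair, the tiny network, and the contrastive loss are all illustrative placeholders rather than the patented design.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FeatureExtractor(nn.Module):
        """Tiny fully convolutional network producing a dense, unit-norm feature map."""
        def __init__(self, dim: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, dim, 1),
            )

        def forward(self, x):
            return F.normalize(self.net(x), dim=1)  # unit-norm feature per pixel

    def contrastive_loss(f_a, f_b, corr, margin: float = 0.5):
        """Pull features together at corresponding pixels, push apart shuffled non-matches.

        f_a, f_b -- B x C x H x W feature maps of the two rendered views
        corr     -- B x N x 4 integer tensor of (ya, xa, yb, xb) correspondences;
                    ground truth comes "for free" from the 3D model renderer
        """
        b = torch.arange(f_a.size(0)).unsqueeze(1)        # B x 1 batch index
        fa = f_a[b, :, corr[..., 0], corr[..., 1]]        # B x N x C
        fb = f_b[b, :, corr[..., 2], corr[..., 3]]        # B x N x C
        pos = 1.0 - (fa * fb).sum(-1)                     # cosine distance of true matches
        neg = 1.0 - (fa * fb.roll(1, dims=1)).sum(-1)     # distance of shuffled mismatches
        return pos.mean() + F.relu(margin - neg).mean()

    # Hypothetical renderer interface (stand-in for the 3D city model):
    # def render_correspondence_pair(batch_size) -> (img_a, img_b, corr)
    # where viewpoint, day/night sky, weather, and occlusions are randomized per pair.

    def train_step(model, optimizer, img_a, img_b, corr):
        optimizer.zero_grad()
        loss = contrastive_loss(model(img_a), model(img_b), corr)
        loss.backward()
        optimizer.step()
        return loss.item()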