Patents by Inventor Yonit Hoffman
Yonit Hoffman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11948599
Abstract: A computing system for a plurality of classes of audio events is provided, including one or more processors configured to divide a run-time audio signal into a plurality of segments and process each segment of the run-time audio signal in a time domain to generate a normalized time domain representation of each segment. The processor is further configured to feed the normalized time domain representation of each segment to an input layer of a trained neural network. The processor is further configured to generate, by the neural network, a plurality of predicted classification scores and associated probabilities for each class of audio event contained in each segment of the run-time input audio signal. In post-processing, the processor is further configured to generate smoothed predicted classification scores, associated smoothed probabilities, and class window confidence values for each class for each of a plurality of candidate window sizes.
Type: Grant
Filed: January 6, 2022
Date of Patent: April 2, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Lihi Ahuva Shiloh Perl, Ben Fishman, Gilad Pundak, Yonit Hoffman
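The post-processing stage described in the abstract can be sketched roughly as below. This is a minimal illustration, not the patented implementation: the moving-average smoother, the candidate window sizes, and the use of the per-class maximum as the window confidence are all assumptions for demonstration; the time-domain normalization and the neural network itself are omitted.

```python
import numpy as np

def smooth_scores(scores, window_sizes=(3, 5, 7)):
    """Moving-average smoothing of per-segment class scores for several
    candidate window sizes, plus a per-class window confidence value.

    scores: (num_segments, num_classes) array of raw classification scores.
    Returns {window_size: (smoothed_scores, confidences)}, where
    confidences[c] is the maximum smoothed score for class c.
    """
    scores = np.asarray(scores, dtype=float)
    results = {}
    for w in window_sizes:
        kernel = np.ones(w) / w
        # Smooth each class column independently, keeping the same length.
        smoothed = np.column_stack([
            np.convolve(scores[:, c], kernel, mode="same")
            for c in range(scores.shape[1])
        ])
        confidences = smoothed.max(axis=0)  # per-class window confidence
        results[w] = (smoothed, confidences)
    return results
```

Averaging over several window sizes lets a downstream step pick, per class, the window whose confidence profile best separates true events from transient noise.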
-
Publication number: 20240020338
Abstract: A video-processing technique uses machine-trained logic to detect and track people that appear in video information. The technique then ranks the prominence of these people in the video information, to produce ranking information. The prominence of a person reflects a level of importance of the person in the video information, corresponding to the capacity of the person to draw the attention of a viewer. For instance, the prominence of the person reflects, at least in part, an extent to which the person appears in the video information. The technique performs its ranking based on person-specific feature information. The technique produces each instance of person-specific feature information by accumulating features pertaining to a particular person. One or more application systems make use of the ranking information to control the presentation of the video information.
Type: Application
Filed: July 14, 2022
Publication date: January 18, 2024
Applicant: Microsoft Technology Licensing, LLC
Inventors: Yonit Hoffman, Tom Hirshberg, Maayan Yedidia, Zvi Figov
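The accumulate-then-rank idea can be sketched as follows. The specific prominence score here (frames appeared in, weighted by average bounding-box area) is an illustrative assumption standing in for the person-specific feature information the abstract leaves abstract; the detection and tracking machinery is omitted.

```python
from collections import defaultdict

def rank_prominence(detections):
    """Rank people by prominence, approximated here as the number of
    frames a person appears in, weighted by their average on-screen size.

    detections: iterable of (person_id, frame_index, box_area_fraction).
    Returns person ids sorted from most to least prominent.
    """
    # Accumulate per-person features across all detections.
    features = defaultdict(lambda: {"frames": set(), "area": 0.0})
    for person_id, frame, area in detections:
        features[person_id]["frames"].add(frame)
        features[person_id]["area"] += area

    def score(pid):
        f = features[pid]
        frames = len(f["frames"])
        return frames * (f["area"] / max(frames, 1))

    return sorted(features, key=score, reverse=True)
```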
-
Publication number: 20230419663
Abstract: Examples of the present disclosure describe systems and methods for video genre classification. In one example implementation, video content is received. A plurality of sliding windows of the video content is sampled. The plurality of sliding windows comprises audio data and video data. The audio data is analyzed to identify a set of audio features. The video data is analyzed to identify a set of video features. The set of audio features and the set of video features are provided to a classifier. The classifier is configured to detect a genre for the video content using the set of audio features and the set of video features. The video content is indexed based on the genre.
Type: Application
Filed: June 27, 2022
Publication date: December 28, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventors: Oron Nir, Mattan Serry, Yonit Hoffman, Michael Ben-Haym, Zvi Figov, Eliyahu Strugo, Avi Neeman
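The window-sampling and feature-fusion flow might look like the sketch below. The majority vote across windows and the toy classifier interface are assumptions; the abstract does not specify how per-window predictions are combined, nor how the audio and video features are extracted.

```python
from collections import Counter

def sliding_windows(num_frames, window, hop):
    """Yield (start, end) indices of overlapping windows over the video."""
    start = 0
    while start + window <= num_frames:
        yield (start, start + window)
        start += hop

def classify_genre(audio_features, video_features, classifier):
    """Fuse each window's audio and video feature vectors and take a
    majority vote over the classifier's per-window genre predictions."""
    votes = Counter(
        classifier(a + v) for a, v in zip(audio_features, video_features)
    )
    return votes.most_common(1)[0][0]
```

For example, with window=4 and hop=2 over a 10-frame clip, four overlapping windows are sampled, each contributing one vote to the final genre.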
-
Publication number: 20230316753
Abstract: Systems, methods, and a computer-readable medium are provided for matching textless elements to texted elements in video content. A video processing system including a textless matching system may divide a video into shots, identify shots having similar durations, identify sequences of shots having similar durations, and compare image content in representative frames of the sequences to determine whether the sequences match. When the sequences are determined to match, the sequences may be paired, wherein the first sequence may include shots with overlaid text and the second sequence may include textless versions of the corresponding texted shots in the first sequence. In some examples, the video processing system may further replace the determined corresponding texted shots.
Type: Application
Filed: May 26, 2022
Publication date: October 5, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventors: Mattan Serry, Zvi Figov, Yonit Hoffman, Maayan Yedidia
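The duration-based candidate search can be sketched as below. This simplification matches runs of shots by duration only; the tolerance value, the minimum run length, and the omission of the frame-level image comparison (which would confirm each candidate pair in the described system) are all assumptions.

```python
def find_matching_sequences(texted, textless, tol=0.1, min_len=2):
    """Find aligned runs of shots whose durations agree within `tol`
    seconds. texted and textless are lists of shot durations.
    Returns (texted_start, textless_start, run_length) candidates;
    image comparison of representative frames would confirm each one.
    """
    matches = []
    for i in range(len(texted)):
        for j in range(len(textless)):
            k = 0
            # Extend the aligned run while durations keep agreeing.
            while (i + k < len(texted) and j + k < len(textless)
                   and abs(texted[i + k] - textless[j + k]) <= tol):
                k += 1
            if k >= min_len:
                matches.append((i, j, k))
    return matches
```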
-
Publication number: 20230215460
Abstract: A computing system for a plurality of classes of audio events is provided, including one or more processors configured to divide a run-time audio signal into a plurality of segments and process each segment of the run-time audio signal in a time domain to generate a normalized time domain representation of each segment. The processor is further configured to feed the normalized time domain representation of each segment to an input layer of a trained neural network. The processor is further configured to generate, by the neural network, a plurality of predicted classification scores and associated probabilities for each class of audio event contained in each segment of the run-time input audio signal. In post-processing, the processor is further configured to generate smoothed predicted classification scores, associated smoothed probabilities, and class window confidence values for each class for each of a plurality of candidate window sizes.
Type: Application
Filed: January 6, 2022
Publication date: July 6, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventors: Lihi Ahuva Shiloh Perl, Ben Fishman, Gilad Pundak, Yonit Hoffman
-
Publication number: 20230112904
Abstract: A system for indexing animated content receives detections extracted from a media file, where each one of the detections includes an image extracted from a corresponding frame of the media file that corresponds to a detected instance of an animated character. The system determines, for each of the received detections, an embedding defining a set of characteristics for the detected instance. The embedding associated with each detection is provided to a grouping engine that is configured to dynamically configure at least one grouping parameter based on a total number of the detections received. The grouping engine is also configured to sort the detections into groups using the grouping parameter and the embedding for each detection. A character ID is assigned to each one of the groups of detections, and the system indexes the groups of detections in a database in association with the character ID assigned to each group.
Type: Application
Filed: August 26, 2022
Publication date: April 13, 2023
Inventors: Yonit Hoffman, Irit Ofer, Avner Levi, Haim Sabo, Reut Amior
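One way the grouping step could work is sketched below. The greedy centroid assignment and the log-scaled distance threshold are stand-ins of my own choosing: the abstract says only that a grouping parameter is configured dynamically from the detection count, not which parameter or which grouping algorithm the engine uses.

```python
import math

def dynamic_threshold(num_detections, base=0.5):
    """Grouping parameter configured from the detection count: with more
    detections, require tighter embedding similarity before merging."""
    return base / (1 + math.log10(max(num_detections, 1)))

def group_detections(embeddings):
    """Greedy grouping: each embedding joins the nearest existing group
    whose centroid lies within the dynamic threshold, else starts a new
    group. Returns one character-ID label per detection."""
    thr = dynamic_threshold(len(embeddings))
    centroids, counts, labels = [], [], []
    for e in embeddings:
        best, best_d = None, thr
        for gid, c in enumerate(centroids):
            d = math.dist(e, c)
            if d <= best_d:
                best, best_d = gid, d
        if best is None:
            centroids.append(list(e))
            counts.append(1)
            labels.append(len(centroids) - 1)
        else:
            # Update the group centroid with a running mean.
            n = counts[best]
            centroids[best] = [(c * n + x) / (n + 1)
                               for c, x in zip(centroids[best], e)]
            counts[best] += 1
            labels.append(best)
    return labels
```

Each resulting label would then be persisted to the index as the character ID for its group of detections.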
-
Patent number: 11450107
Abstract: A system for indexing animated content receives detections extracted from a media file, where each one of the detections includes an image extracted from a corresponding frame of the media file that corresponds to a detected instance of an animated character. The system determines, for each of the received detections, an embedding defining a set of characteristics for the detected instance. The embedding associated with each detection is provided to a grouping engine that is configured to dynamically configure at least one grouping parameter based on a total number of the detections received. The grouping engine is also configured to sort the detections into groups using the grouping parameter and the embedding for each detection. A character ID is assigned to each one of the groups of detections, and the system indexes the groups of detections in a database in association with the character ID assigned to each group.
Type: Grant
Filed: March 10, 2021
Date of Patent: September 20, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Yonit Hoffman, Irit Ofer, Avner Levi, Haim Sabo, Reut Amior
-
Publication number: 20220292284
Abstract: A system for indexing animated content receives detections extracted from a media file, where each one of the detections includes an image extracted from a corresponding frame of the media file that corresponds to a detected instance of an animated character. The system determines, for each of the received detections, an embedding defining a set of characteristics for the detected instance. The embedding associated with each detection is provided to a grouping engine that is configured to dynamically configure at least one grouping parameter based on a total number of the detections received. The grouping engine is also configured to sort the detections into groups using the grouping parameter and the embedding for each detection. A character ID is assigned to each one of the groups of detections, and the system indexes the groups of detections in a database in association with the character ID assigned to each group.
Type: Application
Filed: March 10, 2021
Publication date: September 15, 2022
Inventors: Yonit Hoffman, Irit Ofer, Avner Levi, Haim Sabo, Reut Amior
-
Patent number: 10922981
Abstract: An apparatus, method and computer readable medium, the method comprising: obtaining raw maritime data from a plurality of sources, the raw maritime data being indicative of a geolocation of vessels at different times and comprising duplicative data obtained from separate sources; analyzing the raw maritime data to produce for each vessel a vessel story comprising a set of activities and corresponding timestamps, wherein the set of activities associated with each vessel is smaller by at least one order of magnitude than the raw maritime data associated with that vessel; identifying a pattern in the vessel story associated with a vessel, wherein the pattern conforms with a risk event; and validating the risk event using the raw maritime data or vessel stories, whereby the risk event is identified using fewer resources than would be required to identify it in the raw maritime data, and without increasing false positive metrics.
Type: Grant
Filed: December 5, 2018
Date of Patent: February 16, 2021
Assignee: Windward Ltd.
Inventors: Yair Mazor, Tomer Benyamini, Omid Rokni, Ido Sovran, Yonit Hoffman, Yaniv Meoded, Rotem Abeles, Ilan Atias, Maksim Bocharenko
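The reduce-then-scan structure can be sketched as follows. The choice of "dark period" (a long gap between position reports) as the risk pattern, and the gap and threshold values, are illustrative assumptions; the patent covers vessel stories and risk patterns generally, and the validation pass against the raw data is only noted here.

```python
def build_vessel_story(reports, gap_hours=6):
    """Reduce raw position reports (time_h, lat, lon) into a compact
    story of activities; here, 'dark' gaps between transmissions plus a
    final 'last_seen' marker. The story is orders of magnitude smaller
    than the raw report stream."""
    story = []
    reports = sorted(reports)
    for prev, cur in zip(reports, reports[1:]):
        if cur[0] - prev[0] >= gap_hours:
            story.append(("dark", prev[0], cur[0]))
    story.append(("last_seen", reports[-1][0]))
    return story

def find_risk_events(story, max_dark_hours=24):
    """Flag dark periods longer than the allowed threshold as risk
    events; validation against the raw data or other vessel stories
    would follow."""
    return [ev for ev in story
            if ev[0] == "dark" and ev[2] - ev[1] > max_dark_hours]
```

Scanning the compact story instead of the raw reports is what yields the order-of-magnitude resource saving the abstract claims.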
-
Publication number: 20200184828
Abstract: Method, system and product for risk event identification in maritime data and usage thereof. Raw maritime data is analyzed to produce for each vessel a vessel story. The vessel story comprises a set of activities and corresponding timestamps that are deduced from the raw maritime data. A pattern that conforms with a risk event is identified in the vessel story associated with a vessel. The risk event is validated using the raw maritime data or using one or more vessel stories. Thus, the risk event is identified using fewer resources than would be required to identify it in the raw maritime data, and without increasing false positive metrics. Additionally or alternatively, a pattern that conforms with a risk event is identified in the vessel story associated with a first vessel, and the risk event is validated using the vessel story associated with a second vessel.
Type: Application
Filed: December 5, 2018
Publication date: June 11, 2020
Applicant: Windward Ltd.
Inventors: Yair Mazor, Tomer Benyamini, Omid Rokni, Ido Sovran, Yonit Hoffman, Yaniv Meoded, Rotem Abeles, Ilan Atias, Maksim Bocharenko