Patents by Inventor Yaron Galant

Yaron Galant has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180132006
    Abstract: Methods and apparatuses for highlight-based movie navigation, editing and sharing are described. In one embodiment, the method for processing media comprises: playing back a movie on a display of a media device; performing gesture recognition to recognize one or more gestures made with respect to the display; and navigating through the media on a per-highlight basis in response to recognizing the one or more gestures.
    Type: Application
    Filed: November 1, 2016
    Publication date: May 10, 2018
    Inventors: Yaron Galant, Federico de Samaniego Steta, Martin Boliek
  • Publication number: 20180033462
    Abstract: A system may comprise a trigger creation module configured to provide a set of one or more trigger conditions, satisfaction of each trigger condition being based on sensor data to be received. A sensor interface module may be configured to receive actual sensor data from one or more sensors, the actual sensor data being generated contemporaneously with recording of an activity. A trigger satisfaction module may be configured to determine whether at least one trigger condition of the set of trigger conditions has been satisfied based on the actual sensor data. An event identification module may be configured to identify a potentially interesting event within the recording of the activity based on the satisfied at least one trigger condition.
    Type: Application
    Filed: October 9, 2017
    Publication date: February 1, 2018
    Inventors: Yaron Galant, Ehud Chatow
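The trigger-based event pipeline in the abstract above can be sketched as follows. This is an illustrative reading of the claim language, not code from the patent; the names (`SensorSample`, `TriggerCondition`, `identify_events`) and the accelerometer example are assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SensorSample:
    timestamp: float          # seconds into the recording
    accel_g: float            # acceleration magnitude, in g

@dataclass
class TriggerCondition:       # the "trigger creation module" output
    name: str
    predicate: Callable[[SensorSample], bool]

def identify_events(samples: List[SensorSample],
                    triggers: List[TriggerCondition]) -> List[dict]:
    """Mark a potentially interesting event wherever any trigger condition
    is satisfied by the contemporaneous sensor data."""
    events = []
    for s in samples:
        for t in triggers:
            if t.predicate(s):
                events.append({"time": s.timestamp, "trigger": t.name})
    return events

# Example: flag moments of high acceleration (e.g. a jump or a crash).
triggers = [TriggerCondition("high_g", lambda s: s.accel_g > 3.0)]
samples = [SensorSample(1.0, 0.9), SensorSample(2.5, 4.2), SensorSample(3.0, 1.1)]
print(identify_events(samples, triggers))  # → [{'time': 2.5, 'trigger': 'high_g'}]
```

In this reading, the abstract's four modules map onto the trigger definitions, the sample stream, the predicate check, and the returned event list, respectively.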
  • Patent number: 9792951
    Abstract: A system may comprise a trigger creation module configured to provide a set of one or more trigger conditions, satisfaction of each trigger condition being based on sensor data to be received. A sensor interface module may be configured to receive actual sensor data from one or more sensors, the actual sensor data being generated contemporaneously with recording of an activity. A trigger satisfaction module may be configured to determine whether at least one trigger condition of the set of trigger conditions has been satisfied based on the actual sensor data. An event identification module may be configured to identify a potentially interesting event within the recording of the activity based on the satisfied at least one trigger condition.
    Type: Grant
    Filed: February 25, 2014
    Date of Patent: October 17, 2017
    Assignee: VIEU LABS, INC.
    Inventors: Yaron Galant, Ehud Chatow
  • Publication number: 20160365117
    Abstract: Methods for processing captured video data based on capture device orientation are described. In one embodiment, the method comprises capturing video data with a video capture device, detecting orientation of the video capture device, mapping pixels of the video data captured to a landscape orientation if the video capture device is in a portrait orientation, and displaying the video data on a screen of the video capture device in landscape orientation regardless of the orientation of the video capture device.
    Type: Application
    Filed: October 9, 2015
    Publication date: December 15, 2016
    Inventors: Martin Paul Boliek, Yaron Galant
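The pixel-mapping step described above can be illustrated with a toy frame. This is a minimal sketch under the assumption that a frame is a row-major grid and a 90° rotation realizes the portrait-to-landscape mapping; the function name and orientation strings are illustrative, not from the patent.

```python
def to_landscape(frame, orientation):
    """Return the frame in landscape orientation regardless of how the
    device is held.

    frame: list of rows, each a list of pixel values (H x W).
    orientation: "portrait" or "landscape", as reported by the device sensor.
    """
    h, w = len(frame), len(frame[0])
    if orientation == "portrait" and h > w:
        # Rotate 90° clockwise: new row c is the old column c, read bottom-up.
        return [[frame[h - 1 - r][c] for r in range(h)] for c in range(w)]
    return frame  # already landscape; display as-is

portrait_frame = [[1, 2],
                  [3, 4],
                  [5, 6]]          # 3x2: taller than wide, i.e. portrait
print(to_landscape(portrait_frame, "portrait"))  # → [[5, 3, 1], [6, 4, 2]]
```

The display step then always receives a wider-than-tall frame, which is the behavior the abstract claims.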
  • Publication number: 20160365122
    Abstract: A video editing system and method with multi-stage control to generate clips are described.
    Type: Application
    Filed: October 9, 2015
    Publication date: December 15, 2016
    Inventors: Eran Steinberg, Yaron Galant, Martin Paul Boliek
  • Publication number: 20160365120
    Abstract: A video editing apparatus and method for generating multiple final cut clips is described. In one embodiment, a video editing system comprises editing processing logic controllable to perform, on one or more raw input feeds, a plurality of different edits to render one or more final cut clips for viewing, where each of the one or more edits transforms data from one or more of the raw input feeds into the one or more of the plurality of final cut clips by generating tags that identify highlights from signals, and generating one or more variations of the final cut clips as a result of independent control and application of the editing processing logic to data from the one or more raw input feeds.
    Type: Application
    Filed: October 9, 2015
    Publication date: December 15, 2016
    Inventors: Eran Steinberg, Yaron Galant, Martin Paul Boliek
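The multi-edit idea in the abstract above — the same raw feed run through independently controlled edits to yield different final-cut variations — can be sketched in miniature. All names, and the use of a per-frame score as the "signal", are illustrative assumptions.

```python
def tag_highlights(signal, threshold):
    """Generate tags (frame indices) where the signal exceeds a threshold."""
    return [i for i, v in enumerate(signal) if v > threshold]

def render_clip(raw_frames, tags):
    """A final cut here is simply the tagged highlight frames, in order."""
    return [raw_frames[i] for i in tags]

raw = ["f0", "f1", "f2", "f3", "f4"]
signal = [0.1, 0.9, 0.3, 0.8, 0.2]   # e.g. an excitement score per frame

# Two independently controlled edits over the same feed → two variations.
strict = render_clip(raw, tag_highlights(signal, 0.85))
loose = render_clip(raw, tag_highlights(signal, 0.5))
print(strict, loose)  # → ['f1'] ['f1', 'f3']
```

Each edit's settings (here just a threshold) are controlled independently, which is what produces the variations of the final cut.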
  • Publication number: 20160365119
    Abstract: A video editing system and method using multi-stakeholder, multi-stage control are described. In one embodiment, the video editing system comprises editing processing logic controllable to perform, on one or more raw input feeds, a plurality of different edits to render one or more final cut clips for viewing, where the editing processing logic is part of each of a plurality of stages of an editing process that is responsive to a plurality of stakeholders interacting with the tagging and the highlights to generate the plurality of final cut clips, and each of the one or more edits transforms data from one or more of the raw input feeds into the one or more of the plurality of final cut clips by generating tags that identify highlights from signals, and generating one or more variations of the final cut clips as a result of independent control and application of the editing processing logic to data from the one or more raw input feeds.
    Type: Application
    Filed: October 9, 2015
    Publication date: December 15, 2016
    Inventors: Eran Steinberg, Yaron Galant, Martin Paul Boliek
  • Publication number: 20160365116
    Abstract: A Video Editing Apparatus with Participant Sharing is described. In one embodiment, the video editing system comprises a memory to store instructions and video data that captures an activity of a participant and one or more processing units coupled to the memory, the one or more processing units executing the instructions to determine existence of one or both of signal data and media of a co-participant in an activity and to implement editing processing logic to create a clip from the video data by processing signals and editing the video data, where the processing of the signals and the editing of the video data are based on one or more of signal data and media associated with the co-participant in the activity.
    Type: Application
    Filed: October 9, 2015
    Publication date: December 15, 2016
    Inventors: Yaron Galant, Martin Paul Boliek
  • Publication number: 20160365118
    Abstract: A video editing system and method using multi-stakeholder control are described. In one embodiment, the video editing system comprises editing processing logic controllable to perform, on one or more raw input feeds, a plurality of different edits to render one or more final cut clips for viewing, where each of the one or more edits transforms data from one or more of the raw input feeds into the one or more of the plurality of final cut clips by generating tags that identify highlights from signals, and generating one or more variations of the final cut clips as a result of independent control and application of the editing processing logic to data from the one or more raw input feeds, wherein the highlights are generated based on a master highlight list generated based on processing of tags from the tagging.
    Type: Application
    Filed: October 9, 2015
    Publication date: December 15, 2016
    Inventors: Eran Steinberg, Yaron Galant, Martin Paul Boliek
  • Publication number: 20160365114
    Abstract: A video editing system and method using machine learning are described. In one embodiment, the video editing system comprises editing processing logic controllable to perform at least one edit on one or more raw input feeds to render one or more final cut clips for viewing, each edit to transform data from one or more of the raw input feeds into the one or more of the plurality of final cut clips by generating tags that identify highlights from signals and a machine learning module operable to access data from memory and operable to generate settings to control the editing processing logic based on the data using one or more machine learning algorithms to control the editing processing logic.
    Type: Application
    Filed: October 9, 2015
    Publication date: December 15, 2016
    Inventors: Yaron Galant, Martin Paul Boliek, Michael William Mahoney
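One way to picture the machine-learning control loop described above: learn a setting for the editing logic (here, a single highlight-score threshold) from examples of clips the user kept or discarded. This toy perceptron-style update is an assumption for illustration; the patent does not specify a particular algorithm.

```python
def learn_threshold(scores, kept, lr=0.1, epochs=200):
    """Fit a cut-off so that clips scored above it tend to be kept.

    scores: highlight scores in [0, 1]; kept: 1 if the user kept the clip.
    A single threshold parameter is nudged toward misclassified examples.
    """
    theta = 0.0
    for _ in range(epochs):
        for s, k in zip(scores, kept):
            pred = 1 if s > theta else 0
            theta -= lr * (k - pred) * 0.1   # move threshold on errors only
    return theta

scores = [0.2, 0.4, 0.6, 0.9]
kept = [0, 0, 1, 1]     # the user kept the two highest-scoring clips
theta = learn_threshold(scores, kept)

# The learned setting now separates kept from discarded clips.
print(all((s > theta) == bool(k) for s, k in zip(scores, kept)))  # → True
```

The learned `theta` would then be fed back as a setting that controls the editing processing logic, closing the loop the abstract describes.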
  • Publication number: 20160364103
    Abstract: A method and apparatus for using gestures during video playback are described. In one embodiment, a method of tagging a stream comprises playing back the stream on a media device and tagging a portion of the stream in response to recognizing one or more gestures to cause a tag to be associated with the portion of the stream.
    Type: Application
    Filed: October 9, 2015
    Publication date: December 15, 2016
    Inventors: Yaron Galant, Martin Paul Boliek
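The gesture-driven tagging in the abstract above can be sketched as a mapping from recognized gestures to tags at the current playback position. Gesture recognition itself is out of scope here; we assume it yields gesture names with a timestamp, and both the gesture names and the gesture-to-tag mapping are illustrative, not from the patent.

```python
GESTURE_TO_TAG = {          # hypothetical mapping for illustration
    "double_tap": "highlight",
    "swipe_up": "share",
    "swipe_down": "delete",
}

def tag_stream(gesture_events):
    """Turn (timestamp, gesture) pairs into tags on portions of the stream."""
    tags = []
    for t, gesture in gesture_events:
        action = GESTURE_TO_TAG.get(gesture)
        if action:  # unrecognized gestures produce no tag
            tags.append({"time": t, "tag": action})
    return tags

events = [(12.4, "double_tap"), (30.1, "pinch"), (45.0, "swipe_up")]
print(tag_stream(events))
# → [{'time': 12.4, 'tag': 'highlight'}, {'time': 45.0, 'tag': 'share'}]
```

The same shape fits the companion capture-time application (20160364102): only the source of the timestamps changes from playback position to recording time.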
  • Publication number: 20160365115
    Abstract: A video editing apparatus and method using time-based highlight identification are described. In one embodiment, a video editing apparatus comprises a memory to store first video data, a time mapper module operable to determine first time information associated with the first video data, the first time information specifying a time frame during which the video data was captured, a communication interface to receive highlight list data corresponding to the time frame, an extractor operable to extract media clip data from the first video data based on the highlight list data, and a composer operable to compose the movie with the media clip data.
    Type: Application
    Filed: October 9, 2015
    Publication date: December 15, 2016
    Inventors: Martin Paul Boliek, Yaron Galant
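The time-mapper/extractor flow above can be sketched as interval intersection: the video's capture window is known in absolute time, highlights arrive as absolute (start, end) intervals, and clips are cut where the two overlap. Function and variable names are illustrative assumptions.

```python
def extract_clips(video_start, video_end, highlights):
    """Return clip offsets (seconds into the video) for each highlight that
    overlaps the video's capture time frame."""
    clips = []
    for h_start, h_end in highlights:
        start = max(h_start, video_start)
        end = min(h_end, video_end)
        if start < end:  # the highlight overlaps this video's time frame
            clips.append((start - video_start, end - video_start))
    return clips

# Video captured from t=100 s to t=400 s (absolute time); the highlight
# list covers t=90-110, t=250-260, and t=500-510.
print(extract_clips(100, 400, [(90, 110), (250, 260), (500, 510)]))
# → [(0, 10), (150, 160)]
```

A composer would then concatenate the extracted clips into the final movie; the last highlight falls outside the capture window and is dropped.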
  • Publication number: 20160366330
    Abstract: Apparatus for processing captured video data based on capture device orientation is described. In one embodiment, the apparatus comprises a camera to capture video data, a first memory to store captured video data, one or more processors coupled to the memory to process the captured video data, a display screen coupled to the one or more processors to display portions of the captured video data, one or more sensors to capture signal information, a second memory coupled to the one or more processors, wherein the memory includes instructions which when executed by the one or more processors implement logic to: detect orientation of the video capture device, map pixels of the video data captured to a landscape orientation if the video capture device is in a portrait orientation, and cause the display of video data on the display screen in landscape orientation regardless of the orientation of the video capture device.
    Type: Application
    Filed: October 9, 2015
    Publication date: December 15, 2016
    Inventors: Martin Paul Boliek, Yaron Galant
  • Publication number: 20160364102
    Abstract: A method and apparatus for using gestures during video capture are described. In one embodiment, a method of tagging a stream comprises recording the stream with a media device in real-time and tagging a portion of the stream in response to recognizing one or more gestures to cause a tag to be associated with the portion of the stream, the tag for use in specifying an action associated with the stream.
    Type: Application
    Filed: October 9, 2015
    Publication date: December 15, 2016
    Inventors: Yaron Galant, Martin Paul Boliek
  • Publication number: 20160365124
    Abstract: A video editing method using participant sharing is described. In one embodiment, the method comprises determining existence of one or both of signal data and media of a co-participant in an activity, obtaining video data that captures an activity of a participant, and creating a clip from the video data by processing signals and editing the video data, where the processing of the signals and the editing of the video data are based on one or more of signal data and media associated with the co-participant in the activity.
    Type: Application
    Filed: October 9, 2015
    Publication date: December 15, 2016
    Inventors: Yaron Galant, Martin Paul Boliek
  • Publication number: 20160189752
    Abstract: A method and apparatus for performing real-time capture and editing of video are disclosed. In one embodiment, the method comprises editing, on a capture device, raw captured media data by extracting media data for a set of highlights in real-time using tags that identify each highlight in the set of highlights from signals generated from triggers; creating, on the capture device, a video clip by combining the set of highlights; and processing, during one or both of editing the raw captured media data and creating the video clip, a portion of the raw captured media data that is stored in a memory on the capture device but not included in the video clip.
    Type: Application
    Filed: December 29, 2015
    Publication date: June 30, 2016
    Inventors: Yaron Galant, Martin Paul Boliek
  • Patent number: 9183203
    Abstract: The GENERALIZED DATA MINING AND ANALYTICS APPARATUSES, METHODS AND SYSTEMS (“GDMA”), in various embodiments, may identify statistical relationships among query terms by analyzing a corpus of electronic documents. Inputs may be generated automatically and/or provided by the user. In one embodiment, a method includes: accessing a term tensor associated with at least one term in a corpus of documents, wherein the term tensor comprises a plurality of data type vectors corresponding respectively to a plurality of term-correlated data types correlated with the at least one term in the corpus and each data type vector comprising a plurality of binned data type values with corresponding weighted occurrence values derived from the corpus; providing at least one of the plurality of term-correlated data types for selectable display; receiving at least one term-correlated data type selection; and providing data type values associated with the at least one term-correlated data type selection for display.
    Type: Grant
    Filed: October 4, 2011
    Date of Patent: November 10, 2015
    Assignee: Quantifind, Inc.
    Inventors: Ari Tuchman, Yaron Galant, Erich Nachbar, John Stockton, Karthik Thiyagarajan
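The "term tensor" lookup in the GDMA abstract can be pictured with a nested dictionary: one vector per term-correlated data type, each holding binned values with weighted occurrence counts. The structure, names, and sample data below are assumptions for illustration only.

```python
term_tensor = {
    "espresso": {                                   # the term
        "price": {"$2-4": 0.7, "$4-6": 0.3},        # binned values with
        "city": {"Seattle": 0.5, "Portland": 0.5},  # weighted occurrences
    }
}

def data_types_for(term):
    """List the term-correlated data types available for selectable display."""
    return sorted(term_tensor.get(term, {}))

def values_for(term, data_type):
    """Return the binned values for a selected data type, for display."""
    return term_tensor.get(term, {}).get(data_type, {})

print(data_types_for("espresso"))        # → ['city', 'price']
print(values_for("espresso", "price"))   # → {'$2-4': 0.7, '$4-6': 0.3}
```

The two functions correspond to the "providing ... for selectable display" and "receiving ... selection" steps of the claimed method.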
  • Publication number: 20140334796
    Abstract: A system may comprise a trigger creation module configured to provide a set of one or more trigger conditions, satisfaction of each trigger condition being based on sensor data to be received. A sensor interface module may be configured to receive actual sensor data from one or more sensors, the actual sensor data being generated contemporaneously with recording of an activity. A trigger satisfaction module may be configured to determine whether at least one trigger condition of the set of trigger conditions has been satisfied based on the actual sensor data. An event identification module may be configured to identify a potentially interesting event within the recording of the activity based on the satisfied at least one trigger condition.
    Type: Application
    Filed: February 25, 2014
    Publication date: November 13, 2014
    Applicant: VIEU LABS, INC.
    Inventors: Yaron Galant, Ehud Chatow
  • Patent number: 7774835
    Abstract: A method and computer program for automatically and continually extracting application protocols (i.e., defining a set of allowable or authorized actions) for any application. The method involves receiving a message from a server before it is sent or in parallel with sending to a client. The message may be in response to a specific request for it from the client. The program then extracts the application protocol data from the server message. Working with a copy of the message, the program strips off the communications protocol(s) from the message and parses the remaining message to identify user-selectable options contained in the message such as commands, fields, etc. These items represent the set of allowable or authorized user actions for the particular “stage” of the current version of the application as set forth in the message. The set of allowable user actions is then stored by the extraction program in a protocol database accessible to a gateway or filter module.
    Type: Grant
    Filed: August 2, 2004
    Date of Patent: August 10, 2010
    Assignee: F5 Networks, Inc.
    Inventors: Gil Raanan, Tal Moran, Yaron Galant, Yuval El-Hanani, Eran Reshef
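The extraction step in the abstract above — parsing an outgoing server page to record its user-selectable options as the set of allowable next actions — can be sketched with Python's standard-library HTML parser standing in for the patent's parser. The class name is illustrative, and the "protocol database" is reduced to a dict.

```python
from html.parser import HTMLParser

class AllowedActionExtractor(HTMLParser):
    """Collect the links and form fields a server page offers the user."""

    def __init__(self):
        super().__init__()
        self.actions = {"links": [], "fields": []}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.actions["links"].append(attrs["href"])
        elif tag == "input" and "name" in attrs:
            self.actions["fields"].append(attrs["name"])

# A copy of the server message, with transport protocol already stripped.
page = '<a href="/logout">Log out</a><form><input name="amount"></form>'
ex = AllowedActionExtractor()
ex.feed(page)
print(ex.actions)  # → {'links': ['/logout'], 'fields': ['amount']}
```

A gateway or filter module would then reject any incoming client request that is not among the stored allowable actions for the current stage of the application.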
  • Publication number: 20050044420
    Abstract: A method and computer program for automatically and continually extracting application protocols (i.e., defining a set of allowable or authorized actions) for any application. The method involves receiving a message from a server before it is sent or in parallel with sending to a client. The message may be in response to a specific request for it from the client. The program then extracts the application protocol data from the server message. Working with a copy of the message, the program strips off the communications protocol(s) from the message and parses the remaining message to identify user-selectable options contained in the message such as commands, fields, etc. These items represent the set of allowable or authorized user actions for the particular “stage” of the current version of the application as set forth in the message. The set of allowable user actions is then stored by the extraction program in a protocol database accessible to a gateway or filter module.
    Type: Application
    Filed: August 2, 2004
    Publication date: February 24, 2005
    Inventors: Gil Raanan, Tal Moran, Yaron Galant, Yuval El-Hanani, Eran Reshef