Patents by Inventor Shamir Allibhai

Shamir Allibhai has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11929099
    Abstract: The disclosed technology is a system and computer-implemented method for assembling and editing a video program from spoken words or soundbites. The disclosed technology imports source audio/video clips in any of multiple formats. Spoken audio is transcribed into searchable text. The text transcript is synchronized to the video track by timecode markers. Each spoken word corresponds to a timecode marker, which in turn corresponds to a video frame or frames. Using word processing operations and text editing functions, a user selects video segments by selecting corresponding transcribed text segments. By selecting text and arranging that text, a corresponding video program is assembled. The selected video segments are assembled on a timeline display in any chosen order by the user. The sequence of video segments may be reordered and edited, as desired, to produce a finished video program for export.
    Type: Grant
    Filed: October 25, 2021
    Date of Patent: March 12, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Shamir Allibhai, Roderick Hodgson
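
    The abstract above describes a word-to-timecode mapping that lets text selections drive video assembly. The following is a minimal, hypothetical sketch of that idea; the names (Word, Segment, assemble_timeline) and the data layout are assumptions for illustration only, not the patented implementation.

    ```python
    # Sketch: map each transcribed word to timecodes in its source clip, then
    # turn user-selected (and reordered) transcript spans into an edit timeline.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Word:
        text: str        # transcribed word
        start: float     # timecode of the word's first video frame (seconds)
        end: float       # timecode of the word's last video frame (seconds)
        clip_id: str     # source audio/video clip the word came from

    @dataclass
    class Segment:
        clip_id: str
        start: float
        end: float

    def assemble_timeline(transcript: List[Word], selections: List[range]) -> List[Segment]:
        """Turn selected transcript spans, in the order the user arranged them,
        into video segments bounded by the timecodes of their first and last words."""
        timeline = []
        for span in selections:
            words = [transcript[i] for i in span]
            timeline.append(Segment(clip_id=words[0].clip_id,
                                    start=words[0].start,
                                    end=words[-1].end))
        return timeline

    # Example: pick two soundbites and place the second one first on the timeline.
    transcript = [
        Word("welcome", 0.0, 0.4, "clipA"), Word("everyone", 0.4, 0.9, "clipA"),
        Word("thanks", 5.0, 5.3, "clipB"), Word("for", 5.3, 5.5, "clipB"),
        Word("watching", 5.5, 6.1, "clipB"),
    ]
    print(assemble_timeline(transcript, [range(2, 5), range(0, 2)]))
    ```

    Selecting text in a different order simply reorders the resulting segments, which is how the abstract's "select and arrange text" step becomes a timeline edit.
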
  • Patent number: 11626139
    Abstract: The disclosed technology is a system and computer-implemented method for assembling and editing a video program from spoken words or soundbites. The disclosed technology imports source audio/video clips in any of multiple formats. Spoken audio is transcribed into searchable text. The text transcript is synchronized to the video track by timecode markers. Each spoken word corresponds to a timecode marker, which in turn corresponds to a video frame or frames. Using word processing operations and text editing functions, a user selects video segments by selecting corresponding transcribed text segments. By selecting text and arranging that text, a corresponding video program is assembled. The selected video segments are assembled on a timeline display in any chosen order by the user. The sequence of video segments may be reordered and edited, as desired, to produce a finished video program for export.
    Type: Grant
    Filed: July 18, 2021
    Date of Patent: April 11, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Shamir Allibhai, Roderick Neil Hodgson
  • Patent number: 11508411
    Abstract: The disclosed technology is a system and computer-implemented method for assembling and editing a video program from spoken words or soundbites. The disclosed technology imports source audio/video clips in any of multiple formats. Spoken audio is transcribed into searchable text. The text transcript is synchronized to the video track by timecode markers. Each spoken word corresponds to a timecode marker, which in turn corresponds to a video frame or frames. Using word processing operations and text editing functions, a user selects video segments by selecting corresponding transcribed text segments. By selecting text and arranging that text, a corresponding video program is assembled. The selected video segments are assembled on a timeline display in any chosen order by the user. The sequence of video segments may be reordered and edited, as desired, to produce a finished video program for export.
    Type: Grant
    Filed: July 18, 2021
    Date of Patent: November 22, 2022
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Shamir Allibhai, Roderick Neil Hodgson
  • Publication number: 20220130421
    Abstract: The disclosed technology is a system and computer-implemented method for assembling and editing a video program from spoken words or soundbites. The disclosed technology imports source audio/video clips in any of multiple formats. Spoken audio is transcribed into searchable text. The text transcript is synchronized to the video track by timecode markers. Each spoken word corresponds to a timecode marker, which in turn corresponds to a video frame or frames. Using word processing operations and text editing functions, a user selects video segments by selecting corresponding transcribed text segments. By selecting text and arranging that text, a corresponding video program is assembled. The selected video segments are assembled on a timeline display in any chosen order by the user. The sequence of video segments may be reordered and edited, as desired, to produce a finished video program for export.
    Type: Application
    Filed: July 18, 2021
    Publication date: April 28, 2022
    Applicant: Simon Says, Inc.
    Inventors: Shamir ALLIBHAI, Roderick Neil HODGSON
  • Publication number: 20220130427
    Abstract: The disclosed technology is a system and computer-implemented method for assembling and editing a video program from spoken words or soundbites. The disclosed technology imports source audio/video clips in any of multiple formats. Spoken audio is transcribed into searchable text. The text transcript is synchronized to the video track by timecode markers. Each spoken word corresponds to a timecode marker, which in turn corresponds to a video frame or frames. Using word processing operations and text editing functions, a user selects video segments by selecting corresponding transcribed text segments. By selecting text and arranging that text, a corresponding video program is assembled. The selected video segments are assembled on a timeline display in any chosen order by the user. The sequence of video segments may be reordered and edited, as desired, to produce a finished video program for export.
    Type: Application
    Filed: October 25, 2021
    Publication date: April 28, 2022
    Inventors: Shamir Allibhai, Roderick Hodgson
  • Publication number: 20220130423
    Abstract: The disclosed technology is a system and computer-implemented method for assembling and editing a video program from spoken words or soundbites. The disclosed technology imports source audio/video clips in any of multiple formats. Spoken audio is transcribed into searchable text. The text transcript is synchronized to the video track by timecode markers. Each spoken word corresponds to a timecode marker, which in turn corresponds to a video frame or frames. Using word processing operations and text editing functions, a user selects video segments by selecting corresponding transcribed text segments. By selecting text and arranging that text, a corresponding video program is assembled. The selected video segments are assembled on a timeline display in any chosen order by the user. The sequence of video segments may be reordered and edited, as desired, to produce a finished video program for export.
    Type: Application
    Filed: July 18, 2021
    Publication date: April 28, 2022
    Applicant: Simon Says, Inc.
    Inventors: Shamir ALLIBHAI, Roderick Neil HODGSON
  • Publication number: 20220130424
    Abstract: The disclosed technology is a system and computer-implemented method for assembling and editing a video program from spoken words or soundbites. The disclosed technology imports source audio/video clips in any of multiple formats. Spoken audio is transcribed into searchable text. The text transcript is synchronized to the video track by timecode markers. Each spoken word corresponds to a timecode marker, which in turn corresponds to a video frame or frames. Using word processing operations and text editing functions, a user selects video segments by selecting corresponding transcribed text segments. By selecting text and arranging that text, a corresponding video program is assembled. The selected video segments are assembled on a timeline display in any chosen order by the user. The sequence of video segments may be reordered and edited, as desired, to produce a finished video program for export.
    Type: Application
    Filed: October 25, 2021
    Publication date: April 28, 2022
    Inventors: Shamir Allibhai, Roderick Hodgson
  • Publication number: 20220130422
    Abstract: The disclosed technology is a system and computer-implemented method for assembling and editing a video program from spoken words or soundbites. The disclosed technology imports source audio/video clips in any of multiple formats. Spoken audio is transcribed into searchable text. The text transcript is synchronized to the video track by timecode markers. Each spoken word corresponds to a timecode marker, which in turn corresponds to a video frame or frames. Using word processing operations and text editing functions, a user selects video segments by selecting corresponding transcribed text segments. By selecting text and arranging that text, a corresponding video program is assembled. The selected video segments are assembled on a timeline display in any chosen order by the user. The sequence of video segments may be reordered and edited, as desired, to produce a finished video program for export.
    Type: Application
    Filed: October 25, 2021
    Publication date: April 28, 2022
    Inventors: Shamir Allibhai, Roderick Hodgson
  • Patent number: 11315570
    Abstract: The technology disclosed relates to a machine learning based speech-to-text transcription intermediary which, from among multiple speech recognition engines, selects a speech recognition engine for accurately transcribing an audio channel based on sound and speech characteristics of the audio channel.
    Type: Grant
    Filed: April 2, 2019
    Date of Patent: April 26, 2022
    Assignee: Facebook Technologies, LLC
    Inventor: Shamir Allibhai
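
    The abstract above describes routing each audio channel to whichever speech recognition engine is expected to transcribe it most accurately. The sketch below is a hypothetical illustration of that routing step; the feature set and the hand-written rule standing in for a trained model are assumptions, not the patented machine learning method.

    ```python
    # Sketch: pick one of several speech recognition engines based on simple
    # sound/speech characteristics of the audio channel.
    from dataclasses import dataclass

    @dataclass
    class AudioFeatures:
        snr_db: float          # estimated signal-to-noise ratio of the channel
        speaker_count: int     # estimated number of speakers
        accent_strength: float # 0..1, deviation from a "standard" accent (assumed feature)

    def select_engine(features: AudioFeatures) -> str:
        """Return the engine expected to transcribe this channel most accurately.

        A real intermediary would learn this mapping from labeled accuracy data;
        the thresholds below are placeholders for that learned model.
        """
        if features.snr_db < 10:
            return "engine_c"   # assumed most robust to noisy audio
        if features.speaker_count > 1 or features.accent_strength > 0.5:
            return "engine_b"   # assumed better on overlapping speech / accents
        return "engine_a"       # assumed default for clean single-speaker audio

    # Example: a noisy channel gets routed to the noise-robust engine.
    print(select_engine(AudioFeatures(snr_db=8.0, speaker_count=1, accent_strength=0.2)))
    ```
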
  • Publication number: 20200097733
    Abstract: The technology disclosed relates to data captured in streams from sensors. Streams often are edited, especially video and audio data streams. In particular, the technology disclosed facilitates identification of segments of an originally captured stream that find their way into a finally edited stream and identification of changed segments in the finally edited stream. Summary analysis on self-aligned meta-blocks of stream data is described, along with pushing at least some self-aligned meta-hashes into a blockchain network, applying an alignment and hashing procedure described in a smart contract.
    Type: Application
    Filed: November 27, 2019
    Publication date: March 26, 2020
    Applicant: Unveiled Labs, Inc.
    Inventors: Roderick Neil HODGSON, Shamir ALLIBHAI
  • Publication number: 20190341052
    Abstract: The technology disclosed relates to a machine learning based speech-to-text transcription intermediary which, from among multiple speech recognition engines, selects a speech recognition engine for accurately transcribing an audio channel based on sound and speech characteristics of the audio channel.
    Type: Application
    Filed: April 2, 2019
    Publication date: November 7, 2019
    Applicant: Simon Says, Inc.
    Inventor: Shamir ALLIBHAI
  • Publication number: 20180349706
    Abstract: The technology disclosed relates to data captured in streams from sensors. Streams often are edited, especially video and audio data streams. In particular, the technology disclosed facilitates identification of segments of an originally captured stream that find their way into a finally edited stream and identification of changed segments in the finally edited stream. Summary analysis on self-aligned meta-blocks of stream data is described, along with pushing at least some self-aligned meta-hashes into a blockchain network, applying an alignment and hashing procedure described in a smart contract.
    Type: Application
    Filed: December 5, 2017
    Publication date: December 6, 2018
    Applicant: Unveiled Labs, Inc.
    Inventors: Roderick Neil Hodgson, Shamir Allibhai
  • Patent number: 9870508
    Abstract: The technology disclosed relates to data captured in streams from sensors. Streams often are edited, especially video and audio data streams. In particular, the technology disclosed facilitates identification of segments of an originally captured stream that find their way into a finally edited stream and identification of changed segments in the finally edited stream. Summary analysis on self-aligned meta-blocks of stream data is described, along with pushing at least some self-aligned meta-hashes into a blockchain network, applying an alignment and hashing procedure described in a smart contract.
    Type: Grant
    Filed: June 1, 2017
    Date of Patent: January 16, 2018
    Assignee: Unveiled Labs, Inc.
    Inventors: Roderick Neil Hodgson, Shamir Allibhai
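
The blockchain-related abstracts above (patent 9870508 and publications 20180349706 and 20200097733) describe hashing blocks of a captured stream, anchoring those hashes externally, and later identifying which segments of an edited stream are unchanged. The sketch below is a minimal, hypothetical illustration of that comparison; the fixed block size and the plain hash set standing in for a blockchain network are assumptions, and the patent's "self-aligned meta-blocks" and smart-contract procedure are not reproduced.

```python
# Sketch: hash fixed-size blocks of the originally captured stream, then check
# which blocks of an edited stream match those anchored hashes.
import hashlib

BLOCK_SIZE = 4096  # bytes per block (illustrative assumption)

def block_hashes(stream: bytes) -> list:
    """SHA-256 hash of each fixed-size block of the stream."""
    return [hashlib.sha256(stream[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(stream), BLOCK_SIZE)]

def surviving_blocks(original: bytes, edited: bytes) -> list:
    """Indices of edited-stream blocks whose hashes appear among the original
    stream's hashes, i.e. segments that passed through editing unchanged."""
    anchored = set(block_hashes(original))   # stand-in for hashes pushed on-chain
    return [i for i, h in enumerate(block_hashes(edited)) if h in anchored]

# Example: keep two original blocks, replace one with new data.
original = bytes(range(256)) * 64
edited = original[:8192] + b"\x00" * 4096
print(surviving_blocks(original, edited))    # -> [0, 1]
```
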