Patents by Inventor David C. Gibbon

David C. Gibbon has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20170126793
    Abstract: A method and apparatus for providing an opportunistic crowd-based service platform is disclosed. A mobile sensor device is identified based on a current location and/or other qualities, such as intrinsic properties, previous sensor data, or demographic data of an associated user of the mobile sensor device. Data is collected from the mobile sensor device. The data collected from the mobile sensor device is aggregated with data collected from other sensor devices, and content generated based on the aggregated data is delivered to a user device.
    Type: Application
    Filed: January 11, 2017
    Publication date: May 4, 2017
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Eric Zavesky, Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger, Behzad Shahraray
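A minimal Python sketch may help make the select/collect/aggregate/deliver flow of this abstract concrete. Everything here (the `SensorDevice` type, planar coordinates, a noise-level reading, the averaging step) is a hypothetical stand-in, not anything taken from the patent itself:

```python
from dataclasses import dataclass
from math import hypot
from statistics import mean

@dataclass
class SensorDevice:
    device_id: str
    location: tuple   # simplified planar (x, y) coordinates
    reading: float    # e.g., an ambient noise level in dB

def select_devices(devices, center, radius):
    """Identify candidate sensors by current location, one of the
    selection criteria the abstract mentions."""
    return [d for d in devices
            if hypot(d.location[0] - center[0],
                     d.location[1] - center[1]) <= radius]

def aggregate(devices):
    """Aggregate the data collected from the selected mobile sensors."""
    return mean(d.reading for d in devices)

devices = [SensorDevice("a", (0.0, 0.0), 62.0),
           SensorDevice("b", (0.5, 0.5), 71.0),
           SensorDevice("c", (9.0, 9.0), 55.0)]  # too far away; excluded
nearby = select_devices(devices, center=(0.0, 0.0), radius=2.0)
print(f"Content for user device: average noise {aggregate(nearby):.1f} dB")
```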
  • Publication number: 20170116236
    Abstract: Disclosed herein are systems, computer-implemented methods, and tangible computer-readable media for representing media assets. The method includes receiving an original media asset and derivative versions of the original media asset and associated descriptors, determining a lineage to each derivative version that traces to the original media asset, generating a version history tree of the original media asset representing the lineage to each derivative version and associated descriptors from the original media asset, and presenting at least part of the version history tree to a user. In one aspect, the method further includes receiving a modification to one associated descriptor and updating associated descriptors for related derivative versions with the received modification. The original media asset and the derivative versions of the original media asset can share a common identifying mark.
    Type: Application
    Filed: January 10, 2017
    Publication date: April 27, 2017
    Inventors: Andrea Basso, Paul Gausman, David C. Gibbon
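The version history tree lends itself to a short illustration. The sketch below, with hypothetical names throughout (`Asset`, `derive`, `lineage`), shows a parent-linked tree that traces each derivative back to the original asset and propagates a descriptor modification to related derivative versions, as the abstract describes:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    descriptors: dict
    parent: "Asset | None" = None
    children: list = field(default_factory=list)

    def derive(self, name, **descriptors):
        """Register a derivative version, inheriting descriptors."""
        child = Asset(name, {**self.descriptors, **descriptors}, parent=self)
        self.children.append(child)
        return child

    def lineage(self):
        """Trace this version back to the original media asset."""
        node, path = self, []
        while node:
            path.append(node.name)
            node = node.parent
        return " <- ".join(path)

    def update_descriptor(self, key, value):
        """Propagate a descriptor modification to derivative versions."""
        self.descriptors[key] = value
        for child in self.children:
            child.update_descriptor(key, value)

original = Asset("master.mov", {"title": "Launch"})
clip = original.derive("clip.mp4", codec="h264")
thumb = clip.derive("thumb.jpg", width=320)
print(thumb.lineage())             # thumb.jpg <- clip.mp4 <- master.mov
original.update_descriptor("title", "Product Launch")
print(thumb.descriptors["title"])  # Product Launch
```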
  • Patent number: 9613636
    Abstract: Speaker content generated in an audio conference is selectively visually represented. A profile for each audience member who listens to an audio conference is obtained. Speaker content from audio conference participants who speak in the audio conference is monitored. The speaker content from each of the audio conference participants is analyzed. Based on the analyzing and on the profiles for each of the plurality of audience members, visual representations of the speaker content to present to the audience members are identified. Visual representations of the speaker content are generated based on the analyzing. Different visual representations of the speaker content are presented to different audience members based on the analyzing and identifying.
    Type: Grant
    Filed: May 4, 2015
    Date of Patent: April 4, 2017
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: David C. Gibbon, Andrea Basso, Lee Begeja, Sumit Kumar, Zhu Liu, Bernard S. Renger, Behzad Shahraray, Eric Zavesky
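As a loose illustration of the idea, the sketch below analyzes one speaker's content and chooses a different visual representation per audience-member profile. The analysis (simple keyword counting) and every name in it are illustrative stand-ins, far simpler than anything the patent would cover:

```python
from collections import Counter

def analyze(speaker_content: str) -> Counter:
    """Crude stand-in for analysis: keyword frequencies in the speech."""
    words = [w.strip(".,").lower() for w in speaker_content.split()]
    return Counter(w for w in words if len(w) > 4)

def visual_for(profile: dict, keywords: Counter) -> str:
    """Pick a representation based on the audience member's profile."""
    top = [w for w, _ in keywords.most_common(3)]
    if profile.get("expertise") == "novice":
        return f"Glossary panel: {', '.join(top)}"
    return f"Keyword cloud: {keywords.most_common(3)}"

speech = "The codec negotiation failed because the bitrate ladder was misconfigured."
kw = analyze(speech)
for member in [{"name": "Ann", "expertise": "novice"},
               {"name": "Bo", "expertise": "expert"}]:
    print(member["name"], "->", visual_for(member, kw))
```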
  • Publication number: 20170061986
    Abstract: Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for captioning a media presentation. The method includes receiving automatic speech recognition (ASR) output from a media presentation and a transcription of the media presentation. The method includes selecting via a processor a pair of anchor words in the media presentation based on the ASR output and transcription and generating captions by aligning the transcription with the ASR output between the selected pair of anchor words. The transcription can be human-generated. Selecting pairs of anchor words can be based on a similarity threshold between the ASR output and the transcription. In one variation, commonly used words on a stop list are ineligible as anchor words. The method includes outputting the media presentation with the generated captions. The presentation can be a recording of a live event.
    Type: Application
    Filed: November 14, 2016
    Publication date: March 2, 2017
    Inventors: Yeon-Jun Kim, David C. Gibbon, Horst J. Schroeter
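The anchor-word alignment in this abstract (which recurs below for the related grants) can be sketched briefly. Assumptions made here: the ASR output carries per-word timestamps, the transcription does not, and a word qualifies as an anchor only if it occurs exactly once in both sources and is not on the stop list. Function names are illustrative, and a real implementation would also align the words between each anchor pair rather than leaving them untimed:

```python
from collections import Counter

# Commonly used words are ineligible as anchors (the stop list).
STOP_LIST = frozenset({"the", "and", "a", "of"})

def unique_words(words):
    counts = Counter(words)
    return {w for w, n in counts.items() if n == 1}

def find_anchors(asr_words, transcript_words):
    """Anchors: words occurring exactly once in both the ASR output
    and the transcription, excluding stop-list words."""
    return (unique_words(w for w, _ in asr_words)
            & unique_words(transcript_words)) - STOP_LIST

def caption_times(asr_words, transcript_words):
    anchors = find_anchors(asr_words, transcript_words)
    anchor_time = {w: t for w, t in asr_words if w in anchors}
    # Pin anchor words to their ASR timestamps; words between anchor
    # pairs are left untimed in this simplified sketch.
    return [(w, anchor_time.get(w)) for w in transcript_words]

asr = [("hello", 0.0), ("wrld", 0.4), ("this", 0.9),
       ("is", 1.1), ("captioning", 1.3)]        # "wrld" is an ASR error
transcript = ["hello", "world", "this", "is", "captioning"]
print(caption_times(asr, transcript))
```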
  • Patent number: 9578095
    Abstract: A method and apparatus for providing an opportunistic crowd-based service platform is disclosed. A mobile sensor device is identified based on a current location and/or other qualities, such as intrinsic properties, previous sensor data, or demographic data of an associated user of the mobile sensor device. Data is collected from the mobile sensor device. The data collected from the mobile sensor device is aggregated with data collected from other sensor devices, and content generated based on the aggregated data is delivered to a user device.
    Type: Grant
    Filed: May 12, 2015
    Date of Patent: February 21, 2017
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Eric Zavesky, Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger, Behzad Shahraray
  • Patent number: 9547684
    Abstract: Disclosed herein are systems, computer-implemented methods, and tangible computer-readable media for representing media assets. The method includes receiving an original media asset and derivative versions of the original media asset and associated descriptors, determining a lineage to each derivative version that traces to the original media asset, generating a version history tree of the original media asset representing the lineage to each derivative version and associated descriptors from the original media asset, and presenting at least part of the version history tree to a user. In one aspect, the method further includes receiving a modification to one associated descriptor and updating associated descriptors for related derivative versions with the received modification. The original media asset and the derivative versions of the original media asset can share a common identifying mark.
    Type: Grant
    Filed: November 29, 2012
    Date of Patent: January 17, 2017
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Andrea Basso, Paul Gausman, David C. Gibbon
  • Patent number: 9495964
    Abstract: Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for captioning a media presentation. The method includes receiving automatic speech recognition (ASR) output from a media presentation and a transcription of the media presentation. The method includes selecting via a processor a pair of anchor words in the media presentation based on the ASR output and transcription and generating captions by aligning the transcription with the ASR output between the selected pair of anchor words. The transcription can be human-generated. Selecting pairs of anchor words can be based on a similarity threshold between the ASR output and the transcription. In one variation, commonly used words on a stop list are ineligible as anchor words. The method includes outputting the media presentation with the generated captions. The presentation can be a recording of a live event.
    Type: Grant
    Filed: March 16, 2016
    Date of Patent: November 15, 2016
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Yeon-Jun Kim, David C. Gibbon, Horst J. Schroeter
  • Publication number: 20160321699
    Abstract: Disclosed herein are systems, methods, and computer-readable media for adaptive media playback based on destination. The method for adaptive media playback comprises determining one or more destinations, collecting media content that is relevant to or describes the one or more destinations, assembling the media content into a program, and outputting the program. In various embodiments, media content may be advertising, consumer-generated, based on real-time events, based on a schedule, or assembled to fit within an estimated available time. Media content may be assembled using an adaptation engine that selects a plurality of media segments that fit in the estimated available time, orders the plurality of media segments, alters at least one of the plurality of media segments to fit the estimated available time, if necessary, and creates a playlist of selected media content containing the plurality of media segments.
    Type: Application
    Filed: July 11, 2016
    Publication date: November 3, 2016
    Inventors: Behzad Shahraray, Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger
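Under stated assumptions (a precomputed relevance score per segment and a single greedy pass), the adaptation engine's select/order/alter/playlist steps might look like the following hypothetical sketch; none of these names come from the patent:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    title: str
    duration: float   # seconds
    relevance: float  # relevance to the destination

def build_playlist(segments, available_time):
    """Select segments that fit the estimated available time, most
    relevant first, trimming one segment to fit if necessary."""
    playlist, remaining = [], available_time
    for seg in sorted(segments, key=lambda s: -s.relevance):
        if seg.duration <= remaining:
            playlist.append(seg)
            remaining -= seg.duration
        elif remaining > 30:  # alter a segment to fill the leftover time
            playlist.append(Segment(seg.title + " (trimmed)",
                                    remaining, seg.relevance))
            remaining = 0
    return playlist

segments = [Segment("City history", 300, 0.9),
            Segment("Restaurant guide", 240, 0.7),
            Segment("Local news", 180, 0.5)]
for seg in build_playlist(segments, available_time=600):
    print(f"{seg.title}: {seg.duration:.0f}s")
```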
  • Publication number: 20160323657
    Abstract: Disclosed herein are systems, methods, and computer-readable media for temporally adaptive media playback. The method for adaptive media playback includes estimating or determining an amount of time between a first event and a second event, selecting media content to fill the estimated amount of time between the first event and the second event, and playing the selected media content, possibly at a reasonably different speed, to fit the time interval. One embodiment includes events that are destination-based or temporal-based. Another embodiment includes adding, removing, speeding up, or slowing down selected media content in order to fit the estimated amount of time between the first event and the second event or to modify the selected media content to adjust to an updated estimated amount of time. Another embodiment bases selected media content on a user or group profile.
    Type: Application
    Filed: July 11, 2016
    Publication date: November 3, 2016
    Inventors: Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger, Behzad Shahraray
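The speed-adjustment step can be illustrated in a few lines. This sketch, with hypothetical names and a made-up bound on how far the playback rate may drift, computes a rate that makes chosen content fill the estimated gap between two events:

```python
def fit_to_interval(content_duration, gap_seconds, max_rate_change=0.15):
    """Return a playback rate that makes the content fill the gap,
    bounded so the speed change stays reasonable for the listener."""
    rate = content_duration / gap_seconds
    return max(1 - max_rate_change, min(1 + max_rate_change, rate))

# A 10-minute segment selected for an estimated 9-minute gap between events:
rate = fit_to_interval(content_duration=600, gap_seconds=540)
print(f"Play at {rate:.2f}x to fit the interval")  # 1.11x
```

If the content cannot fit within the rate bound, the abstract's other embodiments (adding or removing content) would take over; that branch is omitted here.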
  • Publication number: 20160249100
    Abstract: Disclosed herein are systems, methods, and computer-readable media for adaptive content rendition, the method comprising receiving media content for playback to a user, adapting the media content for playback on a first device in the user's first location, receiving a notification when the user changes to a second location, adapting the media content for playback on a second device in the second location, and transitioning media content playback from the first device to the second device. One aspect conserves energy by optionally turning off the first device after transitioning to the second device. Another aspect includes playback devices that are “dumb devices” which receive media content already prepared for playback, “smart devices” which receive media content in a less than ready form and prepare the media content for playback, or hybrid smart and dumb devices. A single device may be substituted by a plurality of devices.
    Type: Application
    Filed: May 2, 2016
    Publication date: August 25, 2016
    Inventors: Andrea Basso, David C. Gibbon, Zhu Liu, Bernard S. Renger
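A rough sketch of the hand-off described above, with hypothetical names: the content is adapted for the device at the new location (prepared server-side for a "dumb" device, left to the device itself for a "smart" one), playback resumes at the same position, and the first device is optionally powered down to conserve energy:

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    smart: bool        # smart devices prepare content themselves
    powered: bool = True

def adapt(content, device):
    # Dumb devices receive content already prepared for playback;
    # smart devices receive it in a less-ready form and prepare it.
    return content if device.smart else f"{content} (server-rendered)"

def transition(content, position, old, new, power_save=True):
    """Move playback from the old device to the new one."""
    stream = adapt(content, new)
    if power_save:
        old.powered = False  # conserve energy after the hand-off
    return f"{new.name} resumes '{stream}' at {position}s"

tv = Device("living-room TV", smart=False)
tablet = Device("tablet", smart=True)
print(transition("movie.mp4", position=1325, old=tv, new=tablet))
print("TV powered:", tv.powered)
```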
  • Patent number: 9392345
    Abstract: Disclosed herein are systems, methods, and computer-readable media for temporally adaptive media playback. The method for adaptive media playback includes estimating or determining an amount of time between a first event and a second event, selecting media content to fill the estimated amount of time between the first event and the second event, and playing the selected media content, possibly at a reasonably different speed, to fit the time interval. One embodiment includes events that are destination-based or temporal-based. Another embodiment includes adding, removing, speeding up, or slowing down selected media content in order to fit the estimated amount of time between the first event and the second event or to modify the selected media content to adjust to an updated estimated amount of time. Another embodiment bases selected media content on a user or group profile.
    Type: Grant
    Filed: March 24, 2015
    Date of Patent: July 12, 2016
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger, Behzad Shahraray
  • Patent number: 9390757
    Abstract: Disclosed herein are systems, methods, and computer-readable media for adaptive media playback based on destination. The method for adaptive media playback comprises determining one or more destinations, collecting media content that is relevant to or describes the one or more destinations, assembling the media content into a program, and outputting the program. In various embodiments, media content may be advertising, consumer-generated, based on real-time events, based on a schedule, or assembled to fit within an estimated available time. Media content may be assembled using an adaptation engine that selects a plurality of media segments that fit in the estimated available time, orders the plurality of media segments, alters at least one of the plurality of media segments to fit the estimated available time, if necessary, and creates a playlist of selected media content containing the plurality of media segments.
    Type: Grant
    Filed: May 4, 2015
    Date of Patent: July 12, 2016
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Behzad Shahraray, Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger
  • Publication number: 20160198234
    Abstract: Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for captioning a media presentation. The method includes receiving automatic speech recognition (ASR) output from a media presentation and a transcription of the media presentation. The method includes selecting via a processor a pair of anchor words in the media presentation based on the ASR output and transcription and generating captions by aligning the transcription with the ASR output between the selected pair of anchor words. The transcription can be human-generated. Selecting pairs of anchor words can be based on a similarity threshold between the ASR output and the transcription. In one variation, commonly used words on a stop list are ineligible as anchor words. The method includes outputting the media presentation with the generated captions. The presentation can be a recording of a live event.
    Type: Application
    Filed: March 16, 2016
    Publication date: July 7, 2016
    Inventors: Yeon-Jun Kim, David C. Gibbon, Horst J. Schroeter
  • Patent number: 9356983
    Abstract: Disclosed herein are systems, methods, and computer-readable media for adaptive content rendition, the method comprising receiving media content for playback to a user, adapting the media content for playback on a first device in the user's first location, receiving a notification when the user changes to a second location, adapting the media content for playback on a second device in the second location, and transitioning media content playback from the first device to the second device. One aspect conserves energy by optionally turning off the first device after transitioning to the second device. Another aspect includes playback devices that are “dumb devices” which receive media content already prepared for playback, “smart devices” which receive media content in a less than ready form and prepare the media content for playback, or hybrid smart and dumb devices. A single device may be substituted by a plurality of devices.
    Type: Grant
    Filed: July 14, 2014
    Date of Patent: May 31, 2016
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Andrea Basso, David C. Gibbon, Zhu Liu, Bernard S. Renger
  • Patent number: 9342596
    Abstract: Disclosed herein are systems, methods, and computer-readable media for transmedia video bookmarks, the method comprising receiving a first place marker and a second place marker for a segment of video media, extracting metadata from the video media between the first and second place markers, normalizing the extracted metadata, storing the normalized metadata, first place marker, and second place marker as a video bookmark, and retrieving the media represented by the video bookmark upon request from a user. Systems can aggregate video bookmarks from multiple sources and refine the first place marker and second place marker based on the aggregated video bookmarks. Metadata can be extracted by analyzing text or audio annotations. Metadata can be normalized by generating a video thumbnail representing the video media between the first place marker and the second place marker. Multiple video bookmarks may be searchable by metadata or by the video thumbnail visually.
    Type: Grant
    Filed: June 12, 2015
    Date of Patent: May 17, 2016
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Behzad Shahraray, Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger
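The bookmark structure and the aggregation-and-refinement step invite a short sketch. All names here are hypothetical, and taking the median is just one plausible way to refine place markers pooled from multiple sources:

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class VideoBookmark:
    video_id: str
    start: float    # first place marker (seconds)
    end: float      # second place marker (seconds)
    metadata: dict  # normalized metadata, e.g. a thumbnail reference

def refine(bookmarks):
    """Aggregate bookmarks for the same segment from multiple sources
    and refine the markers; normalize metadata as a thumbnail at the
    refined start point."""
    start = median(b.start for b in bookmarks)
    end = median(b.end for b in bookmarks)
    vid = bookmarks[0].video_id
    return VideoBookmark(vid, start, end,
                         {"thumbnail": f"{vid}@{start:.1f}s"})

marks = [VideoBookmark("v42", 11.8, 65.0, {}),
         VideoBookmark("v42", 12.1, 64.2, {}),
         VideoBookmark("v42", 12.4, 66.1, {})]
print(refine(marks))  # refined markers: 12.1s to 65.0s
```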
  • Patent number: 9305552
    Abstract: Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for captioning a media presentation. The method includes receiving automatic speech recognition (ASR) output from a media presentation and a transcription of the media presentation. The method includes selecting via a processor a pair of anchor words in the media presentation based on the ASR output and transcription and generating captions by aligning the transcription with the ASR output between the selected pair of anchor words. The transcription can be human-generated. Selecting pairs of anchor words can be based on a similarity threshold between the ASR output and the transcription. In one variation, commonly used words on a stop list are ineligible as anchor words. The method includes outputting the media presentation with the generated captions. The presentation can be a recording of a live event.
    Type: Grant
    Filed: September 22, 2014
    Date of Patent: April 5, 2016
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Yeon-Jun Kim, David C. Gibbon, Horst J. Schroeter
  • Publication number: 20150378544
    Abstract: A content summary is generated by determining a relevance of each of a plurality of scenes, removing at least one of the plurality of scenes based on the determined relevance, and creating a scene summary based on the plurality of scenes. The scene summary is output to a graphical user interface, which may be a three-dimensional interface. The plurality of scenes is automatically detected in a source video and a scene summary is created with user input to modify the scene summary. A synthetic frame representation is formed by determining a sentiment of at least one frame object in a plurality of frame objects and creating a synthetic representation of the at least one frame object based at least in part on the determined sentiment. The relevance of the frame object may be determined and the synthetic representation is then created based on the determined relevance and the determined sentiment.
    Type: Application
    Filed: September 3, 2015
    Publication date: December 31, 2015
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Behzad Shahraray, Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger
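The relevance-filtering step at the heart of the scene summary can be sketched as follows; the `Scene` type, the scores, and the threshold are all illustrative assumptions rather than anything specified in the patent:

```python
from dataclasses import dataclass

@dataclass
class Scene:
    index: int
    duration: float
    relevance: float  # produced by some upstream scoring model

def summarize(scenes, min_relevance=0.5):
    """Remove scenes below the relevance threshold; what remains,
    in original order, forms the scene summary."""
    return [s for s in scenes if s.relevance >= min_relevance]

scenes = [Scene(0, 12.0, 0.9), Scene(1, 30.0, 0.2), Scene(2, 8.0, 0.7)]
summary = summarize(scenes)
print([s.index for s in summary])  # [0, 2] -- scene 1 was removed
```

The sentiment-driven synthetic frame representation mentioned in the abstract would sit downstream of this filter and is not sketched here.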
  • Publication number: 20150324094
    Abstract: An interactive conference is supplemented based on terminology content. Terminology content from a plurality of devices connected to the interactive conference is monitored. A set of words from the terminology content is selected. Supplemental media content at an external source is identified based on the selected set of words, and selectively made available to a device connected to the interactive conference.
    Type: Application
    Filed: July 20, 2015
    Publication date: November 12, 2015
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: David C. Gibbon, Lee Begeja, Zhu Liu, Bernard S. Renger, Behzad Shahraray, Eric Zavesky
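A toy version of the monitoring-and-selection loop might look like the sketch below. The word-selection heuristic (frequency with a common-word filter) and the external-source lookup are hypothetical simplifications, with every name invented for illustration:

```python
from collections import Counter

COMMON = frozenset({"the", "and", "that", "with", "this"})

def select_terms(utterances, top_n=3):
    """Select a set of words from the terminology content monitored
    across the devices connected to the conference."""
    words = [w.strip(".,").lower()
             for text in utterances for w in text.split()]
    counts = Counter(w for w in words if w not in COMMON and len(w) > 3)
    return [w for w, _ in counts.most_common(top_n)]

def supplemental_links(terms):
    # Stand-in for identifying supplemental media at an external source.
    return {t: f"https://example.com/search?q={t}" for t in terms}

feed = ["The codec handles packet loss gracefully.",
        "Packet loss spikes when the codec renegotiates."]
print(supplemental_links(select_terms(feed)))
```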
  • Patent number: 9167189
    Abstract: A content summary is generated by determining a relevance of each of a plurality of scenes, removing at least one of the plurality of scenes based on the determined relevance, and creating a scene summary based on the plurality of scenes. The scene summary is output to a graphical user interface, which may be a three-dimensional interface. The plurality of scenes is automatically detected in a source video and a scene summary is created with user input to modify the scene summary. A synthetic frame representation is formed by determining a sentiment of at least one frame object in a plurality of frame objects and creating a synthetic representation of the at least one frame object based at least in part on the determined sentiment. The relevance of the frame object may be determined and the synthetic representation is then created based on the determined relevance and the determined sentiment.
    Type: Grant
    Filed: October 15, 2009
    Date of Patent: October 20, 2015
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Behzad Shahraray, Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger
  • Publication number: 20150278232
    Abstract: Disclosed herein are systems, methods, and computer-readable media for transmedia video bookmarks, the method comprising receiving a first place marker and a second place marker for a segment of video media, extracting metadata from the video media between the first and second place markers, normalizing the extracted metadata, storing the normalized metadata, first place marker, and second place marker as a video bookmark, and retrieving the media represented by the video bookmark upon request from a user. Systems can aggregate video bookmarks from multiple sources and refine the first place marker and second place marker based on the aggregated video bookmarks. Metadata can be extracted by analyzing text or audio annotations. Metadata can be normalized by generating a video thumbnail representing the video media between the first place marker and the second place marker. Multiple video bookmarks may be searchable by metadata or by the video thumbnail visually.
    Type: Application
    Filed: June 12, 2015
    Publication date: October 1, 2015
    Inventors: Behzad Shahraray, Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger