Patents by Inventor David C. Gibbon

David C. Gibbon has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9146996
    Abstract: A methodology is disclosed for improving searches of a distributed Internet network. The network is searched for a particular information type by looking for a field identified with a predetermined identifier indicating that the field contains information of that type. When such a field is found, its contents are associated with the search results, and the process is repeated using the same predetermined identifier. Information of a particular information type may then be served in a field whose predetermined identifier marks the field as containing information of that type.
    Type: Grant
    Filed: May 10, 2014
    Date of Patent: September 29, 2015
    Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: Zhu Liu, Andrea Basso, Lee Begeja, David C. Gibbon, Bernard S. Renger, Behzad Shahraray
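    Illustrative sketch (not from the patent): the abstract above describes tagging fields with a predetermined identifier so that a search can recognize and collect them. The minimal Python sketch below assumes the identifier is an HTML attribute named data-info-type; that attribute name, the sample pages, and all function names are invented for illustration only.

        # Hypothetical sketch: collect fields marked with a predetermined
        # identifier across documents and associate their contents with results.
        import re

        PREDETERMINED_IDENTIFIER = "data-info-type"   # assumed marker, not from the patent

        def find_marked_fields(html_text, info_type):
            """Return the contents of every element whose marker names info_type."""
            pattern = (r'<[^>]*' + PREDETERMINED_IDENTIFIER
                       + r'="' + re.escape(info_type) + r'"[^>]*>(.*?)</')
            return re.findall(pattern, html_text, flags=re.DOTALL)

        def search(documents, info_type):
            """Repeat the same lookup over each document and build the result set."""
            results = []
            for url, html_text in documents.items():
                for content in find_marked_fields(html_text, info_type):
                    results.append({"source": url, "type": info_type, "value": content.strip()})
            return results

        docs = {
            "http://example.com/a": '<span data-info-type="price">$10</span>',
            "http://example.com/b": '<div data-info-type="price">$12</div>',
        }
        print(search(docs, "price"))
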
  • Patent number: 9129151
    Abstract: A method, apparatus, and computer readable medium for identifying a person in an image include an image analyzer. The image analyzer determines the content of an image, such as a person, location, or object shown in the image. A person in the image may be identified based on the content and on event data stored in a database. Event data includes information concerning events and related people, locations, and objects determined from other images and information. Identification metadata, comprising information determined during image analysis, is generated and linked to each analyzed image. Tags for images are generated based on the identification metadata. The event database can be queried to identify particular people, locations, objects, and events depending on a user's request.
    Type: Grant
    Filed: June 18, 2014
    Date of Patent: September 8, 2015
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Zhu Liu, Andrea Basso, Lee Begeja, David C. Gibbon, Bernard S. Renger, Behzad Shahraray, Eric Zavesky
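    Illustrative sketch (not from the patent): as a rough picture of the event-data lookup described above, the hypothetical Python snippet below matches content detected in an image (location, objects, date) against stored event records to suggest who appears in it. The event records, field names, and scoring are invented for illustration.

        # Hypothetical sketch: suggest a person for an image by matching detected
        # content against previously stored event data.
        EVENTS = [
            {"person": "Alice", "location": "beach", "objects": {"surfboard"}, "date": "2014-07-04"},
            {"person": "Bob", "location": "office", "objects": {"laptop"}, "date": "2014-07-04"},
        ]

        def identify_person(detected):
            """Score each event record by its overlap with the detected image content."""
            best, best_score = None, 0
            for event in EVENTS:
                score = 0
                if event["location"] == detected.get("location"):
                    score += 2
                score += len(event["objects"] & detected.get("objects", set()))
                if event["date"] == detected.get("date"):
                    score += 1
                if score > best_score:
                    best, best_score = event["person"], score
            return best

        print(identify_person({"location": "beach", "objects": {"surfboard"}, "date": "2014-07-04"}))  # Alice
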
  • Publication number: 20150249863
    Abstract: A method for monitoring a display monitors data to be output from the monitored display. The monitored data is analyzed to generate one or more content identifiers. The content identifiers are compared to a set of rules to determine whether the monitored data should be blocked from being output or whether an alert should be transmitted to a supervisor device. One or more supervisor devices may be used to respond to alerts and may also be used to control the output of the monitored display.
    Type: Application
    Filed: May 18, 2015
    Publication date: September 3, 2015
    Applicant: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: Behzad Shahraray, Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger
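    Illustrative sketch (not from the patent): to make the rule-comparison step concrete, the hypothetical Python snippet below checks content identifiers extracted from monitored data against a rule set that either blocks the output or raises an alert to a supervisor device. The rule format and identifiers are assumptions for illustration.

        # Hypothetical sketch: compare content identifiers against rules and
        # decide whether to block the output or alert a supervisor device.
        RULES = {
            "violence": "block",
            "gambling": "alert",
        }

        def evaluate(content_identifiers):
            """Return ('block' | 'alert' | 'allow', matching identifiers)."""
            blocked = [c for c in content_identifiers if RULES.get(c) == "block"]
            alerts = [c for c in content_identifiers if RULES.get(c) == "alert"]
            if blocked:
                return "block", blocked
            if alerts:
                return "alert", alerts
            return "allow", []

        def handle_frame(content_identifiers, notify_supervisor, render):
            action, matched = evaluate(content_identifiers)
            if action == "block":
                return                      # suppress the output entirely
            if action == "alert":
                notify_supervisor(matched)  # the supervisor may later block or allow
            render()

        handle_frame(["news", "gambling"],
                     notify_supervisor=lambda m: print("alert:", m),
                     render=lambda: print("frame shown"))
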
  • Patent number: 9124660
    Abstract: An interactive conference is supplemented based on terminology content. Terminology content from a plurality of interactive conference participants is monitored. A set of words from the terminology content is selected. Supplemental media content at an external source is identified based on the selected set of words, and is selectively made available and presented to an audience member of the interactive conference.
    Type: Grant
    Filed: March 20, 2014
    Date of Patent: September 1, 2015
    Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: David C. Gibbon, Lee Begeja, Zhu Liu, Bernard S. Renger, Behzad Shahraray, Eric Zavesky
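    Illustrative sketch (not from the patent): a hypothetical version of the word-selection step, in which terminology from the participants is reduced to its most frequent non-stopwords and those words drive a lookup of supplemental media at an external source. The stop list, the count threshold, and the lookup stub are invented for illustration.

        # Hypothetical sketch: select salient words from monitored conference
        # terminology and use them to fetch supplemental media.
        from collections import Counter

        STOP_WORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "we", "that"}

        def select_words(utterances, top_n=3):
            """Pick the most frequent non-stopwords spoken so far."""
            counts = Counter(
                w for u in utterances for w in u.lower().split() if w not in STOP_WORDS
            )
            return [word for word, _ in counts.most_common(top_n)]

        def find_supplemental_media(words):
            """Stand-in for an external content source keyed by terminology."""
            catalog = {"routing": "routing_primer.mp4", "latency": "latency_chart.png"}
            return [catalog[w] for w in words if w in catalog]

        utterances = ["we should discuss routing and latency",
                      "routing changes affect latency budgets"]
        words = select_words(utterances)
        print(words, "->", find_supplemental_media(words))
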
  • Publication number: 20150244790
    Abstract: A method and apparatus for providing an opportunistic crowd based service platform is disclosed. A mobile sensor device is identified based on a current location and/or other qualities, such as intrinsic properties, previous sensor data, or demographic data of an associated user of the mobile sensor device. Data is collected from the mobile sensor device. The data collected from the mobile sensor device is aggregated with data collected from other sensor devices, and content generated based on the aggregated data is delivered to a user device.
    Type: Application
    Filed: May 12, 2015
    Publication date: August 27, 2015
    Applicant: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: Eric Zavesky, Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger, Behzad Shahraray
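    Illustrative sketch (not from the patent): a purely illustrative rendering of the selection-and-aggregation flow in the abstract: sensor devices are filtered by proximity to a location of interest, their readings are aggregated, and a simple summary is produced for delivery to a user device. The distance threshold and data fields are assumptions.

        # Hypothetical sketch: pick nearby mobile sensor devices, aggregate
        # their readings, and build content for delivery to a user device.
        import math

        def within(device, point, km=1.0):
            """Crude planar distance check; adequate for a toy example."""
            dx = (device["lat"] - point[0]) * 111.0
            dy = (device["lon"] - point[1]) * 111.0
            return math.hypot(dx, dy) <= km

        def aggregate(devices, point):
            selected = [d for d in devices if within(d, point)]
            if not selected:
                return None
            readings = [d["noise_db"] for d in selected]
            return {"devices": len(selected), "avg_noise_db": sum(readings) / len(readings)}

        devices = [
            {"id": 1, "lat": 40.7128, "lon": -74.0060, "noise_db": 72},
            {"id": 2, "lat": 40.7130, "lon": -74.0055, "noise_db": 68},
            {"id": 3, "lat": 40.8000, "lon": -74.2000, "noise_db": 55},
        ]
        print(aggregate(devices, (40.7129, -74.0058)))   # summary sent to the user device
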
  • Patent number: 9116989
    Abstract: Provided are a system, a method, and a computer readable medium storing instructions related to controlling a presentation in a multimodal system. A method is discussed for retrieving information on the basis of its content for real-time incorporation into an electronic presentation. One method includes controlling a media presentation using a multimodal interface. The method involves receiving from a presenter a content-based request associated with a plurality of segments within a media presentation preprocessed for context-based searching; displaying the media presentation and displaying to the presenter results in response to the content-based request; receiving a selection from the presenter of at least one result; and displaying the selected result to an audience.
    Type: Grant
    Filed: December 29, 2005
    Date of Patent: August 25, 2015
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Patrick Ehlen, David C. Gibbon, Mazin Gilbert, Michael Johnston, Zhu Liu, Behzad Shahraray
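    Illustrative sketch (not from the patent): to picture the content-based request over preprocessed segments, the hypothetical snippet below gives each presentation segment a precomputed keyword index; a presenter query returns matching segments, one of which is then shown to the audience. The segment structure and matching rule are invented for this example.

        # Hypothetical sketch: search preprocessed presentation segments by
        # content, show candidates to the presenter, display the chosen one.
        SEGMENTS = [
            {"id": "s1", "title": "Q3 revenue", "keywords": {"revenue", "q3", "growth"}},
            {"id": "s2", "title": "Churn analysis", "keywords": {"churn", "retention"}},
            {"id": "s3", "title": "Revenue forecast", "keywords": {"revenue", "forecast"}},
        ]

        def content_search(query):
            """Return segments whose keyword index overlaps the query terms."""
            terms = set(query.lower().split())
            return [s for s in SEGMENTS if terms & s["keywords"]]

        def present(query, choose_index=0):
            results = content_search(query)
            print("presenter sees:", [s["title"] for s in results])
            if results:
                print("audience sees:", results[choose_index]["title"])

        present("show revenue slides")
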
  • Publication number: 20150235654
    Abstract: Speaker content generated in an audio conference is selectively visually represented. A profile is obtained for each audience member who listens to the audio conference. Speaker content from the audio conference participants who speak in the audio conference is monitored, and the speaker content from each participant is analyzed. Based on that analysis and on the audience members' profiles, visual representations of the speaker content to present to the audience members are identified and generated. Different visual representations of the speaker content are then presented to different audience members based on the analysis and identification.
    Type: Application
    Filed: May 4, 2015
    Publication date: August 20, 2015
    Applicant: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: David C. GIBBON, Andrea BASSO, Lee BEGEJA, Sumit KUMAR, Zhu LIU, Bernard S. RENGER, Behzad SHAHRARAY, Eric ZAVESKY
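    Illustrative sketch (not from the patent): one hypothetical reading of the profile-driven step above: speaker content is analyzed into simple statistics, and a different visual representation is chosen for each audience member according to a preference in that member's profile. The profile field and the two representation types are invented for illustration.

        # Hypothetical sketch: analyze speaker content, then pick a different
        # visual representation per audience member based on that member's profile.
        from collections import Counter

        SPEECH = {"Ana": ["the budget looks fine"],
                  "Raj": ["budget cuts hit the lab", "we need the lab"]}

        def analyze(utterances_by_speaker):
            """Word counts per speaker stand in for richer content analysis."""
            return {spk: sum(len(u.split()) for u in uts)
                    for spk, uts in utterances_by_speaker.items()}

        def visualize_for(member_profile, analysis):
            if member_profile.get("prefers") == "keywords":
                words = Counter(w for uts in SPEECH.values() for u in uts for w in u.split())
                return "keyword cloud: " + ", ".join(w for w, _ in words.most_common(3))
            return "talk time: " + ", ".join(f"{s}={n} words" for s, n in analysis.items())

        analysis = analyze(SPEECH)
        for member in [{"name": "Lee", "prefers": "keywords"}, {"name": "Kim", "prefers": "bars"}]:
            print(member["name"], "->", visualize_for(member, analysis))
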
  • Publication number: 20150235671
    Abstract: Disclosed herein are systems, methods, and computer-readable media for adaptive media playback based on destination. The method for adaptive media playback comprises determining one or more destinations, collecting media content that is relevant to or describes the one or more destinations, assembling the media content into a program, and outputting the program. In various embodiments, media content may be advertising, consumer-generated, based on real-time events, based on a schedule, or assembled to fit within an estimated available time. Media content may be assembled using an adaptation engine that selects a plurality of media segments that fit in the estimated available time, orders the plurality of media segments, alters at least one of the plurality of media segments to fit the estimated available time, if necessary, and creates a playlist of selected media content containing the plurality of media segments.
    Type: Application
    Filed: May 4, 2015
    Publication date: August 20, 2015
    Inventors: Behzad SHAHRARAY, Andrea BASSO, Lee BEGEJA, David C. GIBBON, Zhu LIU, Bernard S. RENGER
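    Illustrative sketch (not from the patent): a hypothetical rendering of the adaptation-engine idea: given an estimated available time, segments are selected in priority order, the last one is trimmed if necessary, and a playlist is produced. Segment durations, the trimming rule, and all names are assumptions.

        # Hypothetical sketch: assemble a playlist of media segments that fits
        # an estimated available time, trimming the final segment if necessary.
        def build_playlist(segments, available_s):
            """segments: list of (title, duration_s) tuples in priority order."""
            playlist, remaining = [], available_s
            for title, duration in segments:
                if remaining <= 0:
                    break
                if duration <= remaining:
                    playlist.append((title, duration))
                    remaining -= duration
                else:
                    playlist.append((title, remaining))   # alter the segment to fit
                    remaining = 0
            return playlist

        segments = [("Local history", 300), ("Restaurant guide", 240), ("Traffic note", 60)]
        print(build_playlist(segments, available_s=480))
        # [('Local history', 300), ('Restaurant guide', 180)]
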
  • Publication number: 20150195627
    Abstract: Disclosed herein are systems, methods, and computer-readable media for temporally adaptive media playback. The method for adaptive media playback includes estimating or determining an amount of time between a first event and a second event, selecting media content to fill the estimated amount of time between the first event and the second event, and playing the selected media content, possibly at an adjusted speed, to fit the time interval. One embodiment includes events that are destination-based or temporal-based. Another embodiment includes adding, removing, speeding up, or slowing down selected media content in order to fit the estimated amount of time between the first event and the second event, or to modify the selected media content to adjust to an updated estimated amount of time. Another embodiment bases the selected media content on a user or group profile.
    Type: Application
    Filed: March 24, 2015
    Publication date: July 9, 2015
    Inventors: Andrea BASSO, Lee BEGEJA, David C. GIBBON, Zhu LIU, Bernard S. RENGER, Behzad SHAHRARAY
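    Illustrative sketch (not from the patent): as a small companion to the abstract above, the snippet below computes a playback-speed factor so that selected content fits the estimated time between two events, clamped to a plausible range; the clamp limits are an assumption, not from the patent.

        # Hypothetical sketch: choose a playback speed so selected media fits
        # the estimated time between two events.
        def playback_speed(content_duration_s, estimated_gap_s, min_speed=0.9, max_speed=1.25):
            """Return a speed factor, clamped so playback still sounds reasonable."""
            if estimated_gap_s <= 0:
                return max_speed
            raw = content_duration_s / estimated_gap_s
            return max(min_speed, min(max_speed, raw))

        print(playback_speed(660, 600))    # 11 minutes of content, 10-minute gap -> 1.1
        print(playback_speed(1200, 600))   # too much content -> clamped at 1.25 (excess removed instead)
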
  • Patent number: 9066143
    Abstract: A method for monitoring a display monitors data to be output from the monitored display. The monitored data is analyzed to generate one or more content identifiers. The content identifiers are compared to a set of rules to determine whether the monitored data should be blocked from being output or whether an alert should be transmitted to a supervisor device. One or more supervisor devices may be used to respond to alerts and may also be used to control the output of the monitored display.
    Type: Grant
    Filed: November 25, 2009
    Date of Patent: June 23, 2015
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Behzad Shahraray, Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger
  • Patent number: 9058565
    Abstract: A method and apparatus for providing an opportunistic crowd based service platform is disclosed. A mobile sensor device is identified based on a current location and/or other qualities, such as intrinsic properties, previous sensor data, or demographic data of an associated user of the mobile sensor device. Data is collected from the mobile sensor device. The data collected from the mobile sensor device is aggregated with data collected from other sensor devices, and content generated based on the aggregated data is delivered to a user device.
    Type: Grant
    Filed: August 17, 2011
    Date of Patent: June 16, 2015
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Eric Zavesky, Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger, Behzad Shahraray
  • Patent number: 9058386
    Abstract: Disclosed herein are systems, methods, and computer-readable media for transmedia video bookmarks, the method comprising receiving a first place marker and a second place marker for a segment of video media, extracting metadata from the video media between the first and second place markers, normalizing the extracted metadata, storing the normalized metadata, first place marker, and second place marker as a video bookmark, and retrieving the media represented by the video bookmark upon request from a user. Systems can aggregate video bookmarks from multiple sources and refine the first place marker and second place marker based on the aggregated video bookmarks. Metadata can be extracted by analyzing text or audio annotations. Metadata can be normalized by generating a video thumbnail representing the video media between the first place marker and the second place marker. Multiple video bookmarks may be searchable by metadata or visually by video thumbnail.
    Type: Grant
    Filed: February 17, 2014
    Date of Patent: June 16, 2015
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Behzad Shahraray, Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger
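    Illustrative sketch (not from the patent): the aggregation step above can be pictured with the hypothetical snippet below, which refines a bookmark's first and second place markers by taking the median of markers collected from multiple sources; the median rule is an illustrative assumption.

        # Hypothetical sketch: refine a video bookmark's place markers by
        # aggregating markers submitted for the same segment from several sources.
        from statistics import median

        def refine_bookmark(bookmarks):
            """bookmarks: list of dicts with 'start_s' and 'end_s' for one segment."""
            return {
                "start_s": median(b["start_s"] for b in bookmarks),
                "end_s": median(b["end_s"] for b in bookmarks),
            }

        submitted = [
            {"start_s": 120.0, "end_s": 184.0},
            {"start_s": 118.5, "end_s": 181.0},
            {"start_s": 123.0, "end_s": 186.5},
        ]
        print(refine_bookmark(submitted))   # {'start_s': 120.0, 'end_s': 184.0}
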
  • Patent number: 9053750
    Abstract: Speaker content generated in an audio conference is visually represented in accordance with a method. Speaker content from a plurality of audio conference participants is monitored using a computer with a tangible, non-transitory processor and memory. The speaker content from each of the plurality of audio conference participants is analyzed. A visual representation of the speaker content for each of the plurality of audio conference participants is generated based on the analysis of the speaker content from each participant. The visual representation of the speaker content is displayed.
    Type: Grant
    Filed: June 17, 2011
    Date of Patent: June 9, 2015
    Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: David C. Gibbon, Andrea Basso, Lee Begeja, Sumit Kumar, Zhu Liu, Bernard S. Renger, Behzad Shahraray, Eric Zavesky
  • Patent number: 9026555
    Abstract: Disclosed herein are systems, methods, and computer-readable media for adaptive media playback based on destination. The method for adaptive media playback comprises determining one or more destinations, collecting media content that is relevant to or describes the one or more destinations, assembling the media content into a program, and outputting the program. In various embodiments, media content may be advertising, consumer-generated, based on real-time events, based on a schedule, or assembled to fit within an estimated available time. Media content may be assembled using an adaptation engine that selects a plurality of media segments that fit in the estimated available time, orders the plurality of media segments, alters at least one of the plurality of media segments to fit the estimated available time, if necessary, and creates a playlist of selected media content containing the plurality of media segments.
    Type: Grant
    Filed: August 6, 2012
    Date of Patent: May 5, 2015
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Behzad Shahraray, Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger
  • Patent number: 8988513
    Abstract: A method is provided for sharing a display. The method includes displaying periodically a first image sequence on the display in synchronicity with a first signal, and displaying periodically a second image sequence on the display in synchronicity with a second signal. The method also includes selecting by a user the first image sequence for viewing, and shuttering periodically a set of goggles for the user in synchronicity with the first signal. Another method is provided for sharing a display: displaying periodically a private image sequence on the display in synchronicity with a first signal, and displaying periodically a non-private image sequence on the display. In this method, the private image sequence and the non-private image sequence combine to form a public image sequence on the display. A system for sharing a display is also provided.
    Type: Grant
    Filed: April 23, 2013
    Date of Patent: March 24, 2015
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Behzad Shahraray, David C. Gibbon, Zhu Liu, Bernard S. Renger
  • Patent number: 8990848
    Abstract: Disclosed herein are systems, methods, and computer-readable media for temporally adaptive media playback. The method for adaptive media playback includes estimating or determining an amount of time between a first event and a second event, selecting media content to fill the estimated amount of time between the first event and the second event, and playing the selected media content, possibly at an adjusted speed, to fit the time interval. One embodiment includes events that are destination-based or temporal-based. Another embodiment includes adding, removing, speeding up, or slowing down selected media content in order to fit the estimated amount of time between the first event and the second event, or to modify the selected media content to adjust to an updated estimated amount of time. Another embodiment bases the selected media content on a user or group profile.
    Type: Grant
    Filed: July 22, 2008
    Date of Patent: March 24, 2015
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger, Behzad Shahraray
  • Publication number: 20150082248
    Abstract: A method and apparatus for a dynamic glyph based search includes an image server. The image server analyzes images to determine the content of an image. The image and data related to the determined content of the image are stored in an image database. A user can access the image server and search images using search glyphs. In response to selection of a generic-search glyph, the image server finds related images in the image database and the images are displayed to the user. In addition, refine-search glyphs are displayed to a user based on the selected generic-search glyph. One or more refine-search glyphs can be selected by a user to further narrow a search to specific people, locations, objects, and other image content.
    Type: Application
    Filed: November 25, 2014
    Publication date: March 19, 2015
    Applicant: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: Lee Begeja, Robert J. Andres, David C. Gibbon, Steven Neil Tischer
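    Illustrative sketch (not from the patent): a hypothetical version of the glyph-driven narrowing described above: a generic-search glyph maps to a broad category, the server returns matching images plus refine-search glyphs derived from their stored content, and a selected refine glyph narrows the results. Glyph names and image records are invented for illustration.

        # Hypothetical sketch: generic-search glyph -> broad results plus
        # refine glyphs; a chosen refine glyph narrows the search.
        IMAGES = [
            {"file": "1.jpg", "content": {"people", "beach", "alice"}},
            {"file": "2.jpg", "content": {"people", "office", "bob"}},
            {"file": "3.jpg", "content": {"people", "beach", "bob"}},
        ]

        def generic_search(glyph):
            results = [img for img in IMAGES if glyph in img["content"]]
            options = set()
            for img in results:
                options |= img["content"]
            return results, sorted(options - {glyph})

        def refine(results, refine_glyph):
            return [img for img in results if refine_glyph in img["content"]]

        results, refine_glyphs = generic_search("people")
        print("refine options:", refine_glyphs)                 # ['alice', 'beach', 'bob', 'office']
        print([i["file"] for i in refine(results, "beach")])    # ['1.jpg', '3.jpg']
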
  • Publication number: 20150046160
    Abstract: Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for captioning a media presentation. The method includes receiving automatic speech recognition (ASR) output from a media presentation and a transcription of the media presentation. The method includes selecting via a processor a pair of anchor words in the media presentation based on the ASR output and transcription and generating captions by aligning the transcription with the ASR output between the selected pair of anchor words. The transcription can be human-generated. Selecting pairs of anchor words can be based on a similarity threshold between the ASR output and the transcription. In one variation, commonly used words on a stop list are ineligible as anchor words. The method includes outputting the media presentation with the generated captions. The presentation can be a recording of a live event.
    Type: Application
    Filed: September 22, 2014
    Publication date: February 12, 2015
    Inventors: Yeon-Jun KIM, David C. GIBBON, Horst J. SCHROETER
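    Illustrative sketch (not from the patent): the anchor-word idea lends itself to a short hypothetical example: words that appear exactly once in both the ASR output and the human transcript (and are not on a stop list) serve as anchors, and the transcript text between consecutive anchors becomes a caption unit. This exact-match criterion is a simplification of the similarity threshold the abstract mentions.

        # Hypothetical sketch: pick anchor words shared by the ASR output and
        # the transcript, then caption the transcript text between anchors.
        from collections import Counter

        STOP_LIST = {"the", "a", "and", "of", "to", "in"}

        def anchors(asr_words, ref_words):
            asr_c, ref_c = Counter(asr_words), Counter(ref_words)
            return [w for w in ref_words
                    if w not in STOP_LIST and asr_c[w] == 1 and ref_c[w] == 1]

        def captions_between_anchors(ref_words, anchor_words):
            idx = [ref_words.index(a) for a in anchor_words]
            return [" ".join(ref_words[i:j]) for i, j in zip(idx, idx[1:] + [len(ref_words)])]

        asr = "welcome everyone to todays talk on captioning media".split()
        ref = "welcome everyone to today's talk on captioning of media".split()
        a = anchors(asr, ref)
        print(captions_between_anchors(ref, a))
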
  • Patent number: 8924890
    Abstract: A method and apparatus for a dynamic glyph based search includes an image server. The image server analyzes images to determine the content of an image. The image and data related to the determined content of the image are stored in an image database. A user can access the image server and search images using search glyphs. In response to selection of a generic-search glyph, the image server finds related images in the image database and the images are displayed to the user. In addition, refine-search glyphs are displayed to a user based on the selected generic-search glyph. One or more refine-search glyphs can be selected by a user to further narrow a search to specific people, locations, objects, and other image content.
    Type: Grant
    Filed: January 10, 2012
    Date of Patent: December 30, 2014
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Lee Begeja, Robert J. Andres, David C. Gibbon, Steven Neil Tischer
  • Patent number: RE45316
    Abstract: A compressed rendition of a video program is provided in a format suitable for electronic searching and retrieval. An electronic pictorial transcript representation of the video program is initially received. The video program has a video component and a second information-bearing media component associated therewith. The pictorial transcript representation includes a representative frame from each segment of the video component of the video program and a portion of the second media component associated with the segment. The electronic pictorial transcript is transformed into a hypertext format to form a hypertext pictorial transcript. The hypertext pictorial transcript is subsequently recorded in an electronic medium.
    Type: Grant
    Filed: June 26, 2001
    Date of Patent: December 30, 2014
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: David C. Gibbon, Behzad Shahraray
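    Illustrative sketch (not from the patent): as an entirely hypothetical picture of the transformation the reissued patent describes, the snippet below takes segments that pair a representative frame with its associated text and writes a simple HTML "pictorial transcript"; the segment fields and markup are assumptions.

        # Hypothetical sketch: turn (representative frame, associated text)
        # segments into a simple hypertext pictorial transcript.
        import html

        def to_hypertext(segments, title="Pictorial transcript"):
            parts = [f"<html><head><title>{html.escape(title)}</title></head><body>"]
            for i, seg in enumerate(segments):
                parts.append(f'<div id="seg{i}">')
                parts.append(f'  <img src="{html.escape(seg["frame"])}" alt="segment {i}">')
                parts.append(f"  <p>{html.escape(seg['text'])}</p>")
                parts.append("</div>")
            parts.append("</body></html>")
            return "\n".join(parts)

        segments = [
            {"frame": "frame_000.jpg", "text": "Good evening, here are tonight's headlines."},
            {"frame": "frame_117.jpg", "text": "In local news, the bridge reopened today."},
        ]
        print(to_hypertext(segments))
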