Patents by Inventor David C. Gibbon

David C. Gibbon has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10375320
    Abstract: Disclosed herein are systems, computer-implemented methods, and tangible computer-readable media for synthesizing a virtual window. The method includes receiving an environment feed, selecting video elements of the environment feed, displaying the selected video elements on a virtual window in a window casing, selecting non-video elements of the environment feed, and outputting the selected non-video elements coordinated with the displayed video elements. Environment feeds can include synthetic and natural elements. The method can further toggle the virtual window between displaying the selected elements and being transparent. The method can track user motion and adapt the displayed selected elements on the virtual window based on the tracked user motion. The method can further detect a user in close proximity to the virtual window, receive an interaction from the detected user, and adapt the displayed selected elements on the virtual window based on the received interaction.
    Type: Grant
    Filed: August 31, 2012
    Date of Patent: August 6, 2019
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger, Behzad Shahraray
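A minimal sketch of the motion-adaptive display idea from patent 10375320 above, assuming the environment feed is simply a larger image than the window: the crop region slides against the tracked viewer offset to mimic the parallax of a real window. All names and values are illustrative, not from the patent.

```python
# Illustrative sketch (not the patented implementation): adjust the
# visible region of an environment feed on a virtual window based on
# tracked viewer position, so the view shifts like a real window.

def visible_region(feed_w, feed_h, win_w, win_h, viewer_x, viewer_y):
    """Map a normalized viewer offset (-1..1 on each axis) to a crop
    rectangle inside the full environment feed (parallax effect)."""
    max_dx = (feed_w - win_w) / 2
    max_dy = (feed_h - win_h) / 2
    # Moving right relative to the window reveals more of the left side
    # of the scene, hence the negative sign on the offset.
    cx = feed_w / 2 - viewer_x * max_dx
    cy = feed_h / 2 - viewer_y * max_dy
    left = int(cx - win_w / 2)
    top = int(cy - win_h / 2)
    return left, top, left + win_w, top + win_h

# Viewer standing half a unit to the right of center:
print(visible_region(1920, 1080, 1280, 720, 0.5, 0.0))
```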
  • Patent number: 10311893
    Abstract: Speaker content generated in an audio conference is selectively and visually represented. A profile for each audience member who participates in the audio conference is obtained. Speaker content spoken during the audio conference is monitored. Different weights are applied to words included in the speaker content according to a parameter of the profile for each of the audience members. A relation between the speaker content and the profile for each of the audience members is determined. Visual representations of the speaker content are presented to selected members of the audience based on the determined relation.
    Type: Grant
    Filed: July 24, 2017
    Date of Patent: June 4, 2019
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: David C. Gibbon, Andrea Basso, Lee Begeja, Sumit Kumar, Zhu Liu, Bernard S. Renger, Behzad Shahraray, Eric Zavesky
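A hedged sketch of the weighting step described in patent 10311893 above: per-member profile weights are applied to the words of an utterance, and a visual representation is surfaced only to members whose weighted relevance clears a threshold. The profile fields and the threshold value are invented for illustration.

```python
from collections import Counter

def relevance(speaker_words, profile_weights):
    """Sum per-profile term weights over the monitored utterance."""
    counts = Counter(w.lower() for w in speaker_words)
    return sum(counts[term] * w for term, w in profile_weights.items())

# Assumed per-member profile parameters (term -> weight):
profiles = {
    "alice": {"latency": 2.0, "codec": 1.5},
    "bob": {"budget": 2.0},
}
utterance = "the new codec halves latency on mobile links".split()
for member, weights in profiles.items():
    score = relevance(utterance, weights)
    if score >= 1.5:  # assumed presentation threshold
        print(f"show keyword panel to {member} (score={score})")
```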
  • Publication number: 20190079932
    Abstract: Disclosed herein are systems, methods, and computer-readable media for rich media annotation, the method comprising receiving a first recorded media content, receiving at least one audio annotation about the first recorded media content, extracting metadata from the at least one audio annotation, and associating all or part of the metadata with the first recorded media content. Additional data elements may also be associated with the first recorded media content. Where the audio annotation is a telephone conversation, the recorded media content may be captured via the telephone. The recorded media content, audio annotations, and/or metadata may be stored in a central repository, which may be modifiable. Speech characteristics such as prosody may be analyzed to extract additional metadata. In one aspect, a specially trained grammar identifies and recognizes metadata.
    Type: Application
    Filed: November 12, 2018
    Publication date: March 14, 2019
    Inventors: Paul Gausman, David C. Gibbon
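A toy illustration of the metadata-extraction step in publication 20190079932 above, with simple regular expressions standing in for the "specially trained grammar" applied to a speech transcript. The transcript and patterns are assumptions, not content from the application.

```python
import re

# Assumed output of a speech recognizer run over the audio annotation:
TRANSCRIPT = "this clip shows the demo recorded on March third tagged product launch"

# Toy "grammar": patterns that identify and recognize metadata fields.
PATTERNS = {
    "date": re.compile(r"recorded on ([\w ]+?) tagged"),
    "tags": re.compile(r"tagged ([\w ]+)$"),
}

metadata = {field: m.group(1)
            for field, pattern in PATTERNS.items()
            if (m := pattern.search(TRANSCRIPT))}
# All or part of this would be associated with the recorded media content.
print(metadata)
```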
  • Publication number: 20190052704
    Abstract: A method and apparatus for providing an opportunistic crowd based service platform is disclosed. A mobile sensor device is identified based on a current location and/or other qualities, such as intrinsic properties, previous sensor data, or demographic data of an associated user of the mobile sensor device. Data is collected from the mobile sensor device. The data collected from the mobile sensor device is aggregated with data collected from other sensor devices, and content generated based on the aggregated data is delivered to a user device.
    Type: Application
    Filed: October 12, 2018
    Publication date: February 14, 2019
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Eric Zavesky, Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger, Behzad Shahraray
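A minimal sketch of the selection-and-aggregation flow in publication 20190052704 above, assuming current location is the selecting quality: devices within a radius of a point of interest are chosen and their readings averaged before derived content is delivered to the user device. The device records and the radius are illustrative.

```python
from math import dist

# Assumed registry of mobile sensor devices (id, location, last reading):
devices = [
    {"id": "d1", "loc": (0.0, 0.0), "reading": 21.5},
    {"id": "d2", "loc": (0.3, 0.4), "reading": 22.1},
    {"id": "d3", "loc": (5.0, 5.0), "reading": 19.0},
]

def collect(center, radius_km):
    """Identify devices near the point of interest and aggregate data."""
    selected = [d for d in devices if dist(d["loc"], center) <= radius_km]
    readings = [d["reading"] for d in selected]
    return sum(readings) / len(readings) if readings else None

print(collect((0.0, 0.0), 1.0))  # aggregated value for the user device
```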
  • Patent number: 10198748
    Abstract: Disclosed herein are systems, methods, and computer-readable media for adaptive media playback based on destination. The method for adaptive media playback comprises determining one or more destinations, collecting media content that is relevant to or describes the one or more destinations, assembling the media content into a program, and outputting the program. In various embodiments, media content may be advertising, consumer-generated, based on real-time events, based on a schedule, or assembled to fit within an estimated available time. Media content may be assembled using an adaptation engine that selects a plurality of media segments that fit in the estimated available time, orders the plurality of media segments, alters at least one of the plurality of media segments to fit the estimated available time, if necessary, and creates a playlist of selected media content containing the plurality of media segments.
    Type: Grant
    Filed: July 11, 2016
    Date of Patent: February 5, 2019
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Behzad Shahraray, Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger
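A hedged sketch of the time-fitting behavior in patent 10198748 above: destination-relevant segments are selected greedily and the last one is trimmed so the playlist fits the estimated available time. Segment titles and durations are made up for illustration.

```python
# Assumed destination-relevant media segments as (title, seconds) pairs:
segments = [("history of the area", 300), ("local news", 240),
            ("restaurant guide", 180)]

def build_playlist(available_s):
    """Select and order segments, altering the last one if necessary
    so the whole program fits the estimated available time."""
    playlist, used = [], 0
    for title, length in segments:
        if used >= available_s:
            break
        take = min(length, available_s - used)  # trim to fit
        playlist.append((title, take))
        used += take
    return playlist

print(build_playlist(600))  # e.g., a 10-minute drive to the destination
```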
  • Publication number: 20190034455
    Abstract: A method and apparatus for a dynamic glyph-based search includes an image server. The image server analyzes each image to determine its content. The image and data related to its determined content are stored in an image database. A user can access the image server and search images using search glyphs. In response to selection of a generic-search glyph, the image server finds related images in the image database and the images are displayed to the user. In addition, refine-search glyphs are displayed to a user based on the selected generic-search glyph. One or more refine-search glyphs can be selected by a user to further narrow a search to specific people, locations, objects, and other image content.
    Type: Application
    Filed: September 28, 2018
    Publication date: January 31, 2019
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Lee Begeja, Robert J. Andres, David C. Gibbon, Steven Neil Tischer
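A minimal sketch of the glyph-search flow in publication 20190034455 above, with an invented in-memory image database: a generic-search glyph maps to a content tag, matching images are returned, and refine-search glyphs are derived from the tags of those matches. All data here is assumed for illustration.

```python
# Assumed image database: files plus content tags from image analysis.
IMAGES = [
    {"file": "img1.jpg", "tags": {"beach", "sunset"}},
    {"file": "img2.jpg", "tags": {"beach", "family"}},
    {"file": "img3.jpg", "tags": {"city", "night"}},
]

def search(glyph_tag, refine=None):
    """Return images whose tags cover the selected glyph(s)."""
    wanted = {glyph_tag} | (refine or set())
    return [i["file"] for i in IMAGES if wanted <= i["tags"]]

hits = search("beach")  # generic-search glyph
# Offer refine-search glyphs drawn from the matching images' content:
refine_glyphs = set().union(*(i["tags"] for i in IMAGES
                              if "beach" in i["tags"])) - {"beach"}
print(hits, refine_glyphs)
print(search("beach", {"family"}))  # narrowed with a refine glyph
```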
  • Patent number: 10133752
    Abstract: A method and apparatus for a dynamic glyph-based search includes an image server. The image server analyzes each image to determine its content. The image and data related to its determined content are stored in an image database. A user can access the image server and search images using search glyphs. In response to selection of a generic-search glyph, the image server finds related images in the image database and the images are displayed to the user. In addition, refine-search glyphs are displayed to a user based on the selected generic-search glyph. One or more refine-search glyphs can be selected by a user to further narrow a search to specific people, locations, objects, and other image content.
    Type: Grant
    Filed: November 25, 2014
    Date of Patent: November 20, 2018
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Lee Begeja, Robert J. Andres, David C. Gibbon, Steven Neil Tischer
  • Patent number: 10135920
    Abstract: A method and apparatus for providing an opportunistic crowd based service platform is disclosed. A mobile sensor device is identified based on a current location and/or other qualities, such as intrinsic properties, previous sensor data, or demographic data of an associated user of the mobile sensor device. Data is collected from the mobile sensor device. The data collected from the mobile sensor device is aggregated with data collected from other sensor devices, and content generated based on the aggregated data is delivered to a user device.
    Type: Grant
    Filed: January 2, 2018
    Date of Patent: November 20, 2018
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Eric Zavesky, Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger, Behzad Shahraray
  • Patent number: 10127231
    Abstract: Disclosed herein are systems, methods, and computer-readable media for rich media annotation, the method comprising receiving a first recorded media content, receiving at least one audio annotation about the first recorded media content, extracting metadata from the at least one audio annotation, and associating all or part of the metadata with the first recorded media content. Additional data elements may also be associated with the first recorded media content. Where the audio annotation is a telephone conversation, the recorded media content may be captured via the telephone. The recorded media content, audio annotations, and/or metadata may be stored in a central repository, which may be modifiable. Speech characteristics such as prosody may be analyzed to extract additional metadata. In one aspect, a specially trained grammar identifies and recognizes metadata.
    Type: Grant
    Filed: July 22, 2008
    Date of Patent: November 13, 2018
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Paul Gausman, David C. Gibbon
  • Patent number: 10031651
    Abstract: An interactive conference is supplemented based on terminology content. Terminology content from a plurality of devices connected to the interactive conference is monitored. A set of words from the terminology content is selected. Supplemental media content at an external source is identified based on the selected set of words, and selectively made available to a device connected to the interactive conference.
    Type: Grant
    Filed: July 20, 2015
    Date of Patent: July 24, 2018
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: David C. Gibbon, Lee Begeja, Zhu Liu, Bernard S. Renger, Behzad Shahraray, Eric Zavesky
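A rough sketch of the monitoring step in patent 10031651 above, under assumptions: salient terms are selected from monitored utterances by frequency after stop-word removal, then matched against a catalog of supplemental content. The stop list, catalog, and URL are placeholders, not anything from the patent.

```python
from collections import Counter

STOPWORDS = {"the", "a", "of", "and", "to", "we", "is", "should"}

def select_terms(utterances, top_n=3):
    """Select a set of words from the monitored terminology content."""
    words = [w.lower() for u in utterances for w in u.split()
             if w.lower() not in STOPWORDS]
    return [w for w, _ in Counter(words).most_common(top_n)]

# Assumed mapping from terms to supplemental media at an external source:
SUPPLEMENTS = {"codec": "https://example.com/codec-primer"}

terms = select_terms(["we should upgrade the codec",
                      "the codec change affects latency"])
available = {t: SUPPLEMENTS[t] for t in terms if t in SUPPLEMENTS}
print(available)  # made selectively available to connected devices
```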
  • Patent number: 10031649
    Abstract: A content summary is generated by determining a relevance of each of a plurality of scenes, removing at least one of the plurality of scenes based on the determined relevance, and creating a scene summary based on the remaining scenes. The scene summary is output to a graphical user interface, which may be a three-dimensional interface. The plurality of scenes is automatically detected in a source video, and the scene summary is created with user input to modify it. A synthetic frame representation is formed by determining a sentiment of at least one frame object in a plurality of frame objects and creating a synthetic representation of the at least one frame object based at least in part on the determined sentiment. The relevance of the frame object may be determined, and the synthetic representation is then created based on the determined relevance and the determined sentiment.
    Type: Grant
    Filed: September 3, 2015
    Date of Patent: July 24, 2018
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Behzad Shahraray, Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger
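A sketch of just the relevance-filtering step of patent 10031649 above, with made-up relevance scores: scenes below a cutoff are removed and the survivors form the scene summary in their original order.

```python
# Assumed (label, relevance) pairs from automatic scene detection:
scenes = [("intro", 0.9), ("b-roll", 0.2), ("interview", 0.8),
          ("credits", 0.1)]

def scene_summary(scenes, cutoff=0.5):
    """Remove low-relevance scenes; keep the rest in original order."""
    return [label for label, rel in scenes if rel >= cutoff]

print(scene_summary(scenes))  # ['intro', 'interview']
```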
  • Patent number: 10002612
    Abstract: Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for captioning a media presentation. The method includes receiving automatic speech recognition (ASR) output from a media presentation and a transcription of the media presentation. The method includes selecting via a processor a pair of anchor words in the media presentation based on the ASR output and transcription and generating captions by aligning the transcription with the ASR output between the selected pair of anchor words. The transcription can be human-generated. Selecting pairs of anchor words can be based on a similarity threshold between the ASR output and the transcription. In one variation, commonly used words on a stop list are ineligible as anchor words. The method includes outputting the media presentation with the generated captions. The presentation can be a recording of a live event.
    Type: Grant
    Filed: November 14, 2016
    Date of Patent: June 19, 2018
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Yeon-Jun Kim, David C. Gibbon, Horst J. Schroeter
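A minimal sketch of anchor-based alignment as described in patent 10002612 above, under simplifying assumptions: ASR output is (word, time) pairs, anchor words are exact matches between the ASR output and the transcription that are not on a stop list, and transcript words between consecutive anchors receive linearly interpolated times. A real system would also enforce anchor monotonicity and a similarity threshold.

```python
STOP = {"the", "a", "and", "of"}  # commonly used words: ineligible anchors

# Assumed ASR output with timestamps; note the recognition errors.
asr = [("welcome", 0.0), ("too", 0.4), ("the", 0.6), ("annual", 0.8),
       ("meating", 1.2), ("today", 1.8)]
transcript = "welcome to the annual meeting today".split()

# Select anchor pairs: words identical in both streams, not stop words.
anchors = [(i, j) for i, (w, _) in enumerate(asr)
           for j, t in enumerate(transcript)
           if w == t and w not in STOP]

# Align the transcript to ASR time between consecutive anchor pairs.
timed = {}
for (i0, j0), (i1, j1) in zip(anchors, anchors[1:]):
    t0, t1 = asr[i0][1], asr[i1][1]
    span = max(j1 - j0, 1)
    for k, j in enumerate(range(j0, j1)):
        timed[transcript[j]] = round(t0 + (t1 - t0) * k / span, 2)
timed[transcript[anchors[-1][1]]] = asr[anchors[-1][0]][1]
print(timed)  # caption words with interpolated display times
```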
  • Publication number: 20180144275
    Abstract: A method and apparatus for a service platform capable of providing device-based task completion is disclosed. A request for a task is received at a service platform from a customer. A worker device to complete the task is selected from a group of worker devices registered with the service platform based on a current attribute of the worker device. Data resulting from completion of the task is received from the selected worker device, validated, and presented to the customer. A reward or incentive can be provided to the worker device in response to the data being received from the worker device and validated.
    Type: Application
    Filed: January 22, 2018
    Publication date: May 24, 2018
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Behzad Shahraray, Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger, Eric Zavesky
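A hedged sketch of the selection, validation, and reward loop in publication 20180144275 above; using battery level as the "current attribute" and the specific validation check are illustrative assumptions.

```python
# Assumed registry of worker devices registered with the platform:
workers = [{"id": "w1", "battery": 0.9, "credits": 0},
           {"id": "w2", "battery": 0.3, "credits": 0}]

def assign_task():
    """Select a worker device based on a current attribute."""
    return max(workers, key=lambda w: w["battery"])

def validate(data):
    # Placeholder validation of task-completion data.
    return isinstance(data, dict) and "photo_hash" in data

def complete(worker, data):
    """Validate returned data; credit a reward/incentive on success."""
    if validate(data):
        worker["credits"] += 1
        return data  # presented to the customer
    return None

w = assign_task()
print(complete(w, {"photo_hash": "abc123"}), w)
```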
  • Publication number: 20180124166
    Abstract: A method and apparatus for providing an opportunistic crowd based service platform is disclosed. A mobile sensor device is identified based on a current location and/or other qualities, such as intrinsic properties, previous sensor data, or demographic data of an associated user of the mobile sensor device. Data is collected from the mobile sensor device. The data collected from the mobile sensor device is aggregated with data collected from other sensor devices, and content generated based on the aggregated data is delivered to a user device.
    Type: Application
    Filed: January 2, 2018
    Publication date: May 3, 2018
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Eric Zavesky, Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger, Behzad Shahraray
  • Patent number: 9882978
    Abstract: A method and apparatus for providing an opportunistic crowd based service platform is disclosed. A mobile sensor device is identified based on a current location and/or other qualities, such as intrinsic properties, previous sensor data, or demographic data of an associated user of the mobile sensor device. Data is collected from the mobile sensor device. The data collected from the mobile sensor device is aggregated with data collected from other sensor devices, and content generated based on the aggregated data is delivered to a user device.
    Type: Grant
    Filed: January 11, 2017
    Date of Patent: January 30, 2018
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Eric Zavesky, Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger, Behzad Shahraray
  • Patent number: 9875448
    Abstract: A method and apparatus for a service platform capable of providing device-based task completion is disclosed. A request for a task is received at a service platform from a customer. A worker device to complete the task is selected from a group of worker devices registered with the service platform based on a current attribute of the worker device. Data resulting from completion of the task is received from the selected worker device, validated, and presented to the customer. A reward or incentive can be provided to the worker device in response to the data being received from the worker device and validated.
    Type: Grant
    Filed: November 30, 2011
    Date of Patent: January 23, 2018
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Behzad Shahraray, Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger, Eric Zavesky
  • Publication number: 20170323655
    Abstract: Speaker content generated in an audio conference is selectively and visually represented. A profile for each audience member who participates in the audio conference is obtained. Speaker content spoken during the audio conference is monitored. Different weights are applied to words included in the speaker content according to a parameter of the profile for each of the audience members. A relation between the speaker content and the profile for each of the audience members is determined. Visual representations of the speaker content are presented to selected members of the audience based on the determined relation.
    Type: Application
    Filed: July 24, 2017
    Publication date: November 9, 2017
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: David C. Gibbon, Andrea Basso, Lee Begeja, Sumit Kumar, Zhu Liu, Bernard S. Renger, Behzad Shahraray, Eric Zavesky
  • Patent number: 9769525
    Abstract: A method for monitoring a display monitors data to be output from the monitored display. The monitored data is analyzed to generate one or more content identifiers. The content identifiers are compared to a set of rules to determine whether the monitored data should be blocked from being output or whether an alert should be transmitted to a supervisor device. One or more supervisor devices may be used to respond to alerts and may also be used to control the output of the monitored display.
    Type: Grant
    Filed: May 18, 2015
    Date of Patent: September 19, 2017
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Behzad Shahraray, Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger
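A toy version of the rule comparison in patent 9769525 above: content identifiers from the monitored display are checked against a rule set that decides between suppressing the output and alerting a supervisor device. The rule names and actions are invented for illustration.

```python
# Assumed rule set: content identifier -> action.
RULES = {"violence": "block", "gambling": "alert"}

def apply_rules(content_ids):
    """Compare content identifiers to rules; block output or alert."""
    actions = []
    for cid in content_ids:
        action = RULES.get(cid)
        if action == "block":
            actions.append((cid, "output suppressed"))
        elif action == "alert":
            actions.append((cid, "alert sent to supervisor device"))
    return actions

print(apply_rules(["news", "gambling"]))
```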
  • Patent number: 9747925
    Abstract: Speaker content generated in an audio conference is selectively visually represented. A profile for each audience member who participates in the audio conference is obtained. Speaker content spoken during the audio conference is monitored. Words of the speaker content are classified to have different weights according to a parameter of the profile for each of the audience members. A relation between the speaker content and the profile for each of the audience members is determined. Different visual representations of the speaker content are presented to different ones of the audience members based on the determined relation.
    Type: Grant
    Filed: February 22, 2017
    Date of Patent: August 29, 2017
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: David C. Gibbon, Andrea Basso, Lee Begeja, Sumit Kumar, Zhu Liu, Bernard S. Renger, Behzad Shahraray, Eric Zavesky
  • Publication number: 20170162214
    Abstract: Speaker content generated in an audio conference is selectively visually represented. A profile for each audience member who participates in the audio conference is obtained. Speaker content spoken during the audio conference is monitored. Words of the speaker content are classified to have different weights according to a parameter of the profile for each of the audience members. A relation between the speaker content and the profile for each of the audience members is determined. Different visual representations of the speaker content are presented to different ones of the audience members based on the determined relation.
    Type: Application
    Filed: February 22, 2017
    Publication date: June 8, 2017
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: David C. Gibbon, Andrea Basso, Lee Begeja, Sumit Kumar, Zhu Liu, Bernard S. Renger, Behzad Shahraray, Eric Zavesky