Patents by Inventor Andrea Basso
Andrea Basso has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 8051081
Abstract: Disclosed herein are systems, methods, and computer-readable media for transmedia video bookmarks, the method comprising receiving a first place marker and a second place marker for a segment of video media, extracting metadata from the video media between the first and second place markers, normalizing the extracted metadata, storing the normalized metadata, first place marker, and second place marker as a video bookmark, and retrieving the media represented by the video bookmark upon request from a user. One aspect further aggregates video bookmarks from multiple sources and refines the first place marker and second place marker based on the aggregated video bookmarks. Metadata can be extracted by analyzing text or audio annotations. Another aspect of normalizing the extracted metadata includes generating a video thumbnail representing the video media between the first place marker and the second place marker. Multiple video bookmarks may be searchable by metadata or by the video thumbnail visually.
Type: Grant
Filed: August 15, 2008
Date of Patent: November 1, 2011
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Behzad Shahraray, Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger
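A minimal Python sketch of the workflow this abstract describes: two place markers bound a segment, metadata extracted from that span is normalized and stored as a bookmark, and bookmarks from multiple sources can be aggregated to refine the markers. All names and the median-based refinement are illustrative assumptions, not the patented method.

```python
from dataclasses import dataclass, field
from statistics import median

@dataclass
class VideoBookmark:
    """Normalized metadata plus the two place markers that bound a segment."""
    media_id: str
    start: float                 # first place marker, in seconds (assumed unit)
    end: float                   # second place marker, in seconds
    metadata: dict = field(default_factory=dict)

def normalize_metadata(raw: dict) -> dict:
    # Illustrative normalization: lower-case keys, strip whitespace from text values.
    return {k.lower(): v.strip() if isinstance(v, str) else v for k, v in raw.items()}

def refine_markers(bookmarks: list[VideoBookmark]) -> tuple[float, float]:
    """Aggregate bookmarks for the same segment and refine the two markers,
    here simply by taking the median of the submitted start and end points."""
    return (median(b.start for b in bookmarks), median(b.end for b in bookmarks))
```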
-
Patent number: 8046338
Abstract: File format systems and methods are disclosed that provide a framework integrating concepts such as object-based audio-visual representation, metadata, and object-oriented programming to achieve a flexible and generic representation of the audiovisual information and the associated methods to operate on the audiovisual information. A system and method are disclosed for storing data processed from presentation data. The data is stored according to a method comprising coding input presentation data by identifying objects from within the presentation data, coding each object individually, and organizing the coded data into access layer data units. The access layer data units are stored throughout a plurality of segments, each segment comprising a segment table in a header portion thereof and those access layer data units that are members of the respective segment, there being one entry in the segment table for each access layer data unit therein.
Type: Grant
Filed: May 29, 2008
Date of Patent: October 25, 2011
Assignee: AT&T Intellectual Property II, L.P.
Inventors: Andrea Basso, Alexandros Eleftheriadis, Hari Kalva, Atul Puri, Robert Lewis Schmidt
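A small Python sketch of the storage layout the abstract names: each segment carries a segment table in its header with one entry per access layer data unit, followed by the units themselves. The binary layout (big-endian offset/size pairs) is an assumption for illustration only.

```python
import struct

def pack_segment(access_units: list[bytes]) -> bytes:
    """Pack access layer data units into one segment: a header holding a
    segment table (one entry per unit: offset, size) followed by the units."""
    table_size = 4 + 8 * len(access_units)          # unit count + (offset, size) per entry
    entries, payload, offset = [], b"", table_size
    for au in access_units:
        entries.append(struct.pack(">II", offset, len(au)))
        payload += au
        offset += len(au)
    header = struct.pack(">I", len(access_units)) + b"".join(entries)
    return header + payload

def unpack_segment(segment: bytes) -> list[bytes]:
    """Read the segment table and slice out each access layer data unit."""
    (count,) = struct.unpack_from(">I", segment, 0)
    units = []
    for i in range(count):
        off, size = struct.unpack_from(">II", segment, 4 + 8 * i)
        units.append(segment[off:off + size])
    return units
```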
-
Publication number: 20110252156
Abstract: A plurality of multimedia data streams that are being provided via an internet protocol (IP) network is received, wherein each multimedia data stream carries multimedia content. Real-time metadata relating to the plurality of multimedia data streams is generated based on the multimedia content. The metadata is provided in real-time in a metadata stream to a plurality of user devices, via the IP network. The plurality of multimedia data streams may be multicast within the IP network. The metadata may be multicast in real-time in a metadata stream to a plurality of user devices, via the IP network.
Type: Application
Filed: April 8, 2010
Publication date: October 13, 2011
Applicant: AT&T Intellectual Property I, L.P.
Inventors: Andrea Basso, David C. Gibbon, Behzad Shahraray, Herani S. Brotman, Han Q. Nguyen, Douglas Nortz, Steven J. Solomon, David A. Parisi, Sunil Maloo, Mark W. Altom
-
Patent number: 7996422
Abstract: Disclosed herein are systems, methods, and computer-readable media for adaptive media playback based on destination. The method for adaptive media playback comprises determining one or more destinations, collecting media content that is relevant to or describes the one or more destinations, assembling the media content into a program, and outputting the program. In various embodiments, media content may be advertising, consumer-generated, based on real-time events, based on a schedule, or assembled to fit within an estimated available time. Media content may be assembled using an adaptation engine that selects a plurality of media segments that fit in the estimated available time, orders the plurality of media segments, alters at least one of the plurality of media segments to fit the estimated available time, if necessary, and creates a playlist of selected media content containing the plurality of media segments.
Type: Grant
Filed: July 22, 2008
Date of Patent: August 9, 2011
Assignee: AT&T Intellectual Property L.L.P.
Inventors: Behzad Shahraray, Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger
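One plausible reading of the adaptation engine step, sketched in Python under stated assumptions: segments are picked greedily by a hypothetical relevance score until the estimated available time is filled, and the last segment is trimmed to fit. This is an illustration, not the claimed algorithm.

```python
from dataclasses import dataclass

@dataclass
class MediaSegment:
    title: str
    duration: float   # seconds
    relevance: float  # assumed score: higher means more relevant to the destination

def build_playlist(segments: list[MediaSegment], available: float) -> list[MediaSegment]:
    """Greedy sketch: select the most relevant segments that fit the estimated
    available time, then alter (trim) one more segment to fill the remainder."""
    playlist, used = [], 0.0
    for seg in sorted(segments, key=lambda s: s.relevance, reverse=True):
        if used + seg.duration <= available:
            playlist.append(seg)
            used += seg.duration
    remaining = available - used
    leftovers = [s for s in segments if s not in playlist]
    if leftovers and remaining > 0:
        extra = leftovers[0]
        playlist.append(MediaSegment(extra.title, remaining, extra.relevance))  # altered to fit
    return playlist
```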
-
Patent number: 7996878
Abstract: The invention provides a system and method that transforms a set of still/motion media (i.e., a series of related or unrelated still frames, web pages rendered as images, or video clips) or other multimedia into a video stream that is suitable for delivery over a display medium, such as TV, cable TV, computer displays, or wireless display devices. The video data stream may be presented and displayed in real time or stored and later presented through a set-top box, for example. Because these media are transformed into coded video streams (e.g., MPEG-2, MPEG-4, etc.), a user can watch them on a display screen without the need to connect to the Internet through a service provider. The user may request and interact with the desired media through a simple telephone interface, for example. Moreover, several wireless and cable-based services can be developed on top of this system.
Type: Grant
Filed: August 29, 2000
Date of Patent: August 9, 2011
Assignee: AT&T Intellectual Property II, L.P.
Inventors: Andrea Basso, Eric Cosatto, Steven Lloyd Greenspan, David M. Weimer
-
Publication number: 20110138430
Abstract: A method and computer-readable medium for encoding data onto a channel broadcasting a program are disclosed. For example, the method selects a channel that is being used to broadcast a program, generates data having characteristics in accordance with an error burst signature, and transmits the data on the channel that is being used to broadcast the program.
Type: Application
Filed: December 8, 2009
Publication date: June 9, 2011
Inventors: Andrea Basso, Paul Shala Henry, Byoung-Jo Kim
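A speculative Python sketch of one way to read "data having characteristics in accordance with an error burst signature": a bit pattern whose flipped bits arrive in bursts of a given length separated by a given gap. The signature parameters and the XOR injection are assumptions for illustration.

```python
def burst_pattern(total_bits: int, burst_len: int, gap_len: int) -> list[int]:
    """Generate a bit pattern whose errors arrive in bursts matching a simple
    signature: bursts of `burst_len` flipped bits separated by `gap_len` clean bits."""
    bits, i = [0] * total_bits, 0
    while i < total_bits:
        for j in range(i, min(i + burst_len, total_bits)):
            bits[j] = 1
        i += burst_len + gap_len
    return bits

def apply_bursts(payload: bytes, pattern: list[int]) -> bytes:
    """XOR the burst pattern into the payload bit stream (illustration only)."""
    out = bytearray(payload)
    for bit_index, flip in enumerate(pattern[: len(out) * 8]):
        if flip:
            out[bit_index // 8] ^= 1 << (bit_index % 8)
    return bytes(out)
```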
-
Publication number: 20110126025
Abstract: Active intelligent content is aware of its own timeline, lifecycle, capabilities, limitations, and related information. The active intelligent content is aware of its surroundings and can convert automatically into a format or file type more conducive to the device or environment it is stored in. If the active intelligent content does not have the required tools to make such a transformation, it is self-aware enough to seek out the tools and/or information to make that transformation. Such active intelligent content can be used for enhanced file portability, targeted advertising, personalization of media, and selective encryption, enhancement, and restriction. The content can also be used to collaborate with other content and provide users with enhanced information based on user preferences, ratings, costs, genres, file types, and the like.
Type: Application
Filed: November 25, 2009
Publication date: May 26, 2011
Applicant: AT&T Intellectual Property I, L.P.
Inventors: Andrea Basso, Vishwa M. Prasad
-
Publication number: 20110126223
Abstract: A method monitors data to be output from a monitored display. The monitored data is analyzed to generate one or more content identifiers. The content identifiers are compared to a set of rules to determine whether the monitored data should be blocked from being output or whether an alert should be transmitted to a supervisor device. One or more supervisor devices may be used to respond to alerts and may also be used to control the output of the monitored display.
Type: Application
Filed: November 25, 2009
Publication date: May 26, 2011
Applicant: AT&T Intellectual Property I, L.P.
Inventors: Behzad Shahraray, Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger
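A minimal Python sketch of the rule check the abstract describes: content identifiers are compared against rules that either block the output or raise an alert for a supervisor device. The Rule shape and example identifiers are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    matches: Callable[[str], bool]   # predicate applied to a content identifier
    action: str                      # "block" or "alert" (assumed actions)

def evaluate(content_ids: list[str], rules: list[Rule]) -> tuple[bool, list[str]]:
    """Compare content identifiers against the rule set; return whether output
    should be blocked and which identifiers should trigger a supervisor alert."""
    block, alerts = False, []
    for cid in content_ids:
        for rule in rules:
            if rule.matches(cid):
                if rule.action == "block":
                    block = True
                elif rule.action == "alert":
                    alerts.append(cid)
    return block, alerts

# Example: block anything identified as "violence", alert the supervisor on "chat".
rules = [Rule(lambda c: c == "violence", "block"), Rule(lambda c: c == "chat", "alert")]
```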
-
Publication number: 20110125758
Abstract: A collaborative automated structured tagging method, apparatus, and computer-readable medium generate tags for an asset based on context items and the content of the asset. The tags are then ranked and stored until requested by a user or system. Users viewing the asset and the ranked tags can select tags, indicating that the tags correctly define the asset. Users can also enter new tags for assets. The user input is then used to re-rank the tags associated with the particular asset.
Type: Application
Filed: November 23, 2009
Publication date: May 26, 2011
Applicant: AT&T Intellectual Property I, L.P.
Inventors: Andrea Basso, Bernard S. Renger
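A compact Python sketch of the rank-and-re-rank loop, assuming a simple additive score: automatically generated tags start with an initial score, and user selections boost the tags they confirm. The class and scoring scheme are illustrative assumptions.

```python
from collections import defaultdict

class TagStore:
    """Sketch of collaborative structured tagging: tags start with an automatic
    score and are re-ranked as users confirm them or add new ones."""
    def __init__(self):
        self.scores = defaultdict(dict)   # asset_id -> {tag: score}

    def auto_tag(self, asset_id: str, tags_with_scores: dict[str, float]) -> None:
        # Tags generated from context items and asset content, with initial scores.
        self.scores[asset_id].update(tags_with_scores)

    def confirm(self, asset_id: str, tag: str, boost: float = 1.0) -> None:
        # A user selecting a tag indicates it correctly defines the asset.
        self.scores[asset_id][tag] = self.scores[asset_id].get(tag, 0.0) + boost

    def ranked(self, asset_id: str) -> list[str]:
        return sorted(self.scores[asset_id], key=self.scores[asset_id].get, reverse=True)
```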
-
Publication number: 20110112821
Abstract: In one embodiment, the present disclosure is a method and apparatus for multimodal content translation. In one embodiment, a method for translating content includes receiving the content via a first modality, extracting at least one verbal component and at least one non-verbal component from the content, and translating the at least one verbal component and the at least one non-verbal component into translated content, where the translated content is in a form for output in a second modality.
Type: Application
Filed: November 11, 2009
Publication date: May 12, 2011
Inventors: Andrea Basso, David Gibbon, Zhu Liu, Bernard S. Renger, Behzad Shahraray
-
Publication number: 20110103484
Abstract: A system, method, and computer-readable media are introduced that relate to data coding and decoding. A computing device encodes received data, such as video data, into a base layer of compressed video and an enhancement layer of compressed video. The computing device controls drift introduced into the base layer of the compressed video. The computing device, such as a scalable video coder, allows drift by predicting the base layer from the enhancement layer information. The amount of drift is managed to improve overall compression efficiency.
Type: Application
Filed: November 2, 2010
Publication date: May 5, 2011
Applicant: AT&T Intellectual Property II, L.P. via transfer from AT&T Corp.
Inventors: Amy Ruth Reibman, Leon Bottou, Andrea Basso
-
Publication number: 20110093798
Abstract: A content summary is generated by determining a relevance of each of a plurality of scenes, removing at least one of the plurality of scenes based on the determined relevance, and creating a scene summary based on the plurality of scenes. The scene summary is output to a graphical user interface, which may be a three-dimensional interface. The plurality of scenes is automatically detected in a source video and a scene summary is created with user input to modify the scene summary. A synthetic frame representation is formed by determining a sentiment of at least one frame object in a plurality of frame objects and creating a synthetic representation of the at least one frame object based at least in part on the determined sentiment. The relevance of the frame object may be determined and the synthetic representation is then created based on the determined relevance and the determined sentiment.
Type: Application
Filed: October 15, 2009
Publication date: April 21, 2011
Applicant: AT&T Intellectual Property I, L.P.
Inventors: Behzad Shahraray, Andrea Basso, Lee Begeja, David C. Gibbon, Zhu Liu, Bernard S. Renger
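A tiny Python sketch of the summarization step as described: scenes are scored for relevance and low-relevance scenes are removed to form the scene summary. The threshold-based filter is an assumed stand-in for whatever relevance model the application actually uses.

```python
from dataclasses import dataclass

@dataclass
class Scene:
    scene_id: int
    relevance: float   # assumed relevance score, higher is more relevant

def scene_summary(scenes: list[Scene], threshold: float) -> list[Scene]:
    """Keep only scenes whose relevance meets the threshold, preserving order;
    the result is the scene summary handed to the (possibly 3-D) user interface."""
    return [s for s in scenes if s.relevance >= threshold]
```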
-
Publication number: 20110093473
Abstract: A system that incorporates teachings of the present disclosure may include, for example, a network device having a controller to receive multiple streams of content for portions of a multimedia work (MMW), perform a high-level analysis for features in each of the streams for the MMW, perform a specialized analysis on the portion having a detected general feature to generate a content analysis output, correlate the content analysis output with other content analysis of the MMW, and output a weighted content description based on the correlation function. Other embodiments are disclosed.
Type: Application
Filed: October 21, 2009
Publication date: April 21, 2011
Applicant: AT&T Intellectual Property I, L.P.
Inventors: Andrea Basso, Gustavo De Los Reyes
-
Publication number: 20110072466
Abstract: A method includes steps of indexing a media collection, searching an indexed library, and browsing a set of candidate program segments. The step of indexing a media collection creates the indexed library based on the content of the media collection. The step of searching the indexed library identifies the set of candidate program segments based on search criteria. The step of browsing the set of candidate program segments selects a segment for viewing.
Type: Application
Filed: November 30, 2010
Publication date: March 24, 2011
Applicant: AT&T Intellectual Property II, L.P. via transfer from AT&T Corp.
Inventors: Andrea Basso, Mehmet Reha Civanlar, David Crawford Gibbon, Qian Huang, Esther Levin, Roberto Pieraccini, Behzad Shahraray
-
Patent number: 7877774
Abstract: A method includes steps of indexing a media collection, searching an indexed library, and browsing a set of candidate program segments. The step of indexing a media collection creates the indexed library based on the content of the media collection. The step of searching the indexed library identifies the set of candidate program segments based on search criteria. The step of browsing the set of candidate program segments selects a segment for viewing.
Type: Grant
Filed: April 19, 2000
Date of Patent: January 25, 2011
Assignee: AT&T Intellectual Property II, L.P.
Inventors: Andrea Basso, Mehmet Reha Civanlar, David Crawford Gibbon, Qian Huang, Esther Levin, Roberto Pieraccini, Behzad Shahraray
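A minimal Python sketch of the index-then-search pipeline named here, assuming a plain inverted index over terms found in each segment's content; browsing is then a matter of presenting the candidate segments for selection. The term-based index is an illustrative assumption.

```python
from collections import defaultdict

class MediaIndex:
    """Sketch of the index/search steps: index segments by the terms found in
    their content, then search the index to get candidate program segments."""
    def __init__(self):
        self.index = defaultdict(set)     # term -> {segment_id}

    def index_segment(self, segment_id: str, content_terms: list[str]) -> None:
        for term in content_terms:
            self.index[term.lower()].add(segment_id)

    def search(self, criteria: list[str]) -> set[str]:
        results = [self.index.get(term.lower(), set()) for term in criteria]
        return set.intersection(*results) if results else set()

# Browsing would then present the candidate segment IDs so a user can pick one for viewing.
```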
-
Publication number: 20110002508
Abstract: A method of improving the lighting conditions of a real scene or video sequence. Digitally generated light is added to a scene for video conferencing over telecommunication networks. A virtual illumination equation takes into account light attenuation and Lambertian and specular reflection. An image of an object is captured, and a virtual light source illuminates the object within the image. In addition, the object can be the head of the user. The position of the head of the user is dynamically tracked so that a three-dimensional model is generated which is representative of the head of the user. Synthetic light is applied to a position on the model to form an illuminated model.
Type: Application
Filed: September 8, 2010
Publication date: January 6, 2011
Applicant: AT&T Intellectual Property II, L.P.
Inventors: Andrea Basso, Eric Cosatto, David Crawford Gibbon, Hans Peter Graf, Shan Liu
-
Publication number: 20100315549
Abstract: Disclosed herein are systems, methods, and computer-readable media for adaptive content rendition, the method comprising receiving media content for playback to a user, adapting the media content for playback on a first device in the user's first location, receiving a notification when the user changes to a second location, adapting the media content for playback on a second device in the second location, and transitioning media content playback from the first device to the second device. One aspect conserves energy by optionally turning off the first device after transitioning to the second device. Another aspect includes playback devices that are “dumb devices,” which receive media content already prepared for playback; “smart devices,” which receive media content in a less-than-ready form and prepare the media content for playback; or hybrid smart and dumb devices. A single device may be substituted by a plurality of devices.
Type: Application
Filed: August 26, 2010
Publication date: December 16, 2010
Applicant: AT&T Labs, Inc.
Inventors: Andrea Basso, David C. Gibbon, Zhu Liu, Bernard S. Renger
-
Patent number: 7844463
Abstract: According to MPEG-4's TTS architecture, facial animation can be driven by two streams simultaneously: text and Facial Animation Parameters. A Text-To-Speech converter drives the mouth shapes of the face. An encoder sends Facial Animation Parameters to the face. The text input can include codes, or bookmarks, transmitted to the Text-to-Speech converter, which are placed between and inside words. The bookmarks carry an encoder time stamp. Due to the nature of text-to-speech conversion, the encoder time stamp does not relate to real-world time and should be interpreted as a counter. The Facial Animation Parameter stream carries the same encoder time stamp found in the bookmark of the text. The system reads the bookmark and provides the encoder time stamp and a real-time time stamp. The facial animation system associates the correct facial animation parameter with the real-time time stamp, using the encoder time stamp of the bookmark as a reference.
Type: Grant
Filed: August 18, 2008
Date of Patent: November 30, 2010
Assignee: AT&T Intellectual Property II, L.P.
Inventors: Andrea Basso, Mark Charles Beutnagel, Joern Ostermann
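A small Python sketch of the synchronization idea described: since the encoder time stamp behaves as a counter rather than a clock, the system records the real-time stamp observed when each bookmark is read, then uses that mapping to schedule the Facial Animation Parameter frames carrying the same counter. Class and method names are illustrative.

```python
class TimestampMapper:
    """Associate encoder time stamps (counters carried in TTS bookmarks) with
    the real-time stamps observed when the bookmarks are read, so Facial
    Animation Parameter frames with the same counter can be scheduled."""
    def __init__(self):
        self.mapping = {}   # encoder time stamp -> real-time stamp (seconds)

    def on_bookmark(self, encoder_ts: int, real_time: float) -> None:
        # Called when the system reads a bookmark from the text stream.
        self.mapping[encoder_ts] = real_time

    def schedule_fap(self, encoder_ts: int) -> float | None:
        # Returns the real-time stamp at which the FAP frame should be applied,
        # or None if the corresponding bookmark has not been seen yet.
        return self.mapping.get(encoder_ts)
```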
-
Patent number: 7805017
Abstract: A method of improving the lighting conditions of a real scene or video sequence. Digitally generated light is added to a scene for video conferencing over telecommunication networks. A virtual illumination equation takes into account light attenuation and Lambertian and specular reflection. An image of an object is captured, and a virtual light source illuminates the object within the image. In addition, the object can be the head of the user. The position of the head of the user is dynamically tracked so that a three-dimensional model is generated which is representative of the head of the user. Synthetic light is applied to a position on the model to form an illuminated model.
Type: Grant
Filed: May 8, 2007
Date of Patent: September 28, 2010
Assignee: AT&T Intellectual Property II, L.P.
Inventors: Andrea Basso, Eric Cosatto, David Crawford Gibbon, Hans Peter Graf, Shan Liu
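The abstract names attenuation plus Lambertian and specular reflection; a Phong-style combination is one plausible reading. The Python sketch below evaluates that form of a virtual illumination equation at a surface point; the coefficients and attenuation constants are assumptions, not values from the patent.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def virtual_illumination(light_pos, surface_point, normal, view_dir,
                         intensity=1.0, k_d=0.7, k_s=0.3, shininess=16):
    """Phong-style reading of a virtual illumination equation: distance
    attenuation times (Lambertian + specular) terms. All names are illustrative."""
    to_light = tuple(l - p for l, p in zip(light_pos, surface_point))
    distance = math.sqrt(dot(to_light, to_light))
    L, N, V = normalize(to_light), normalize(normal), normalize(view_dir)
    attenuation = 1.0 / (1.0 + 0.1 * distance + 0.01 * distance ** 2)
    lambertian = max(dot(N, L), 0.0)
    R = tuple(2 * dot(N, L) * n - l for n, l in zip(N, L))   # reflect L about N
    specular = max(dot(R, V), 0.0) ** shininess
    return intensity * attenuation * (k_d * lambertian + k_s * specular)
```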
-
Patent number: 7796190
Abstract: Disclosed herein are systems, methods, and computer-readable media for adaptive content rendition, the method comprising receiving media content for playback to a user, adapting the media content for playback on a first device in the user's first location, receiving a notification when the user changes to a second location, adapting the media content for playback on a second device in the second location, and transitioning media content playback from the first device to the second device. One aspect conserves energy by optionally turning off the first device after transitioning to the second device. Another aspect includes playback devices that are “dumb devices,” which receive media content already prepared for playback; “smart devices,” which receive media content in a less-than-ready form and prepare the media content for playback; or hybrid smart and dumb devices. A single device may be substituted by a plurality of devices.
Type: Grant
Filed: August 15, 2008
Date of Patent: September 14, 2010
Assignee: AT&T Labs, Inc.
Inventors: Andrea Basso, David C. Gibbon, Zhu Liu, Bernard S. Renger
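A minimal Python sketch of the hand-off described above, under stated assumptions: when the user changes location, the content is adapted to the new device (fully prepared for a "dumb" device, left to the device for a "smart" one), playback resumes at the same position, and the first device may be powered down to conserve energy. The dictionary shapes are hypothetical.

```python
def transition_playback(session: dict, new_device: dict, conserve_energy: bool = True) -> dict:
    """Sketch of transitioning playback when the user changes location."""
    adapted = {
        "content_id": session["content_id"],
        "position": session["position"],            # resume where playback left off
        # "Dumb" devices get fully prepared content; "smart" devices prepare it themselves.
        "prepared": new_device["kind"] == "dumb",
    }
    if conserve_energy and session.get("device"):
        session["device"]["powered"] = False        # optionally turn off the first device
    return {"device": new_device, **adapted}

# Example: move a session from a living-room TV to a kitchen tablet.
session = {"content_id": "movie-42", "position": 1800.0,
           "device": {"kind": "dumb", "powered": True}}
new_session = transition_playback(session, {"kind": "smart", "powered": True})
```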