Patents by Inventor Andrea Basso
Andrea Basso has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20090024596
Abstract: Disclosed herein are systems, methods, and computer-readable media to represent, store, and manipulate metadata. The method for representing metadata includes defining a map to metadata stored in a global database for each of a plurality of metadata containers, receiving a query for metadata associated with a file, determining which of the plurality of metadata containers the query requires, and responding to the query based on metadata associated with the file from the global database, retrieved using the corresponding map for the determined metadata container.
Type: Application
Filed: October 30, 2007
Publication date: January 22, 2009
Applicant: AT&T Labs, Inc.
Inventors: Andrea Basso, David Crawford Gibbon
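The container/map scheme in this abstract can be sketched in a few lines of Python. All names here (the store layout, container names, field names) are hypothetical illustrations, not taken from the patent:

```python
# Sketch of per-container metadata maps over a single global database
# (all identifiers are illustrative, not from the patent text).

# Global database: one flat store of metadata keyed by (file, field).
GLOBAL_DB = {
    ("clip.mp4", "title"): "Demo clip",
    ("clip.mp4", "duration"): 12.5,
    ("clip.mp4", "codec"): "h264",
}

# Each metadata container defines a map: which global fields it exposes.
CONTAINER_MAPS = {
    "descriptive": ["title", "duration"],
    "technical": ["codec", "duration"],
}

def query(file, container):
    """Answer a query by pulling only the mapped fields from the global DB."""
    fields = CONTAINER_MAPS[container]
    return {f: GLOBAL_DB[(file, f)] for f in fields if (file, f) in GLOBAL_DB}

print(query("clip.mp4", "descriptive"))  # {'title': 'Demo clip', 'duration': 12.5}
```

The point of the indirection is that metadata lives once in the global database; each container is only a view, so answering a query means picking the right map and dereferencing it.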
-
Publication number: 20080312930
Abstract: According to MPEG-4's TTS architecture, facial animation can be driven by two streams simultaneously—text, and Facial Animation Parameters. In this architecture, text input is sent to a Text-To-Speech converter at a decoder that drives the mouth shapes of the face. Facial Animation Parameters are sent from an encoder to the face over the communication channel. The present invention includes codes (known as bookmarks) in the text string transmitted to the Text-to-Speech converter, which bookmarks are placed between words as well as inside them. According to the present invention, the bookmarks carry an encoder time stamp. Due to the nature of text-to-speech conversion, the encoder time stamp does not relate to real-world time, and should be interpreted as a counter. In addition, the Facial Animation Parameter stream carries the same encoder time stamp found in the bookmark of the text.
Type: Application
Filed: August 18, 2008
Publication date: December 18, 2008
Applicant: AT&T Corp.
Inventors: Andrea Basso, Mark Charles Beutnagel, Joern Ostermann
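The bookmark mechanism can be illustrated with a small sketch: bookmarks embedded in the text carry an encoder time stamp (a counter), the FAP stream carries the same counter, and the two are matched up at the receiver. The `<bm ets=N>` syntax, the 50 ms-per-character timing, and all function names are invented for illustration; they are not MPEG-4 syntax:

```python
# Sketch: aligning a FAP stream with TTS output via bookmark encoder time
# stamps (ETS), treated as counters as the abstract describes.
import re

def parse_bookmarks(text):
    """Extract (ets, character_position) pairs from text containing inline
    bookmarks of the hypothetical form <bm ets=N>; return the clean text."""
    bookmarks, clean = [], []
    for token in re.split(r"(<bm ets=\d+>)", text):
        m = re.match(r"<bm ets=(\d+)>", token)
        if m:
            bookmarks.append((int(m.group(1)), len("".join(clean))))
        else:
            clean.append(token)
    return "".join(clean), bookmarks

def align(bookmarks, fap_frames, real_time_of):
    """Match each FAP frame (carrying an ETS) to the real-time stamp the TTS
    engine reports for the corresponding bookmark position."""
    ets_to_rt = {ets: real_time_of(pos) for ets, pos in bookmarks}
    return [(frame, ets_to_rt[ets]) for ets, frame in fap_frames]

text, bms = parse_bookmarks("Hello <bm ets=1>world, smile <bm ets=2>now")
schedule = align(bms, [(1, "FAP-smile-start"), (2, "FAP-smile-peak")],
                 real_time_of=lambda pos: pos * 50)  # toy: 50 ms per character
print(text)      # Hello world, smile now
print(schedule)  # [('FAP-smile-start', 300), ('FAP-smile-peak', 950)]
```

Because TTS output duration is not known to the encoder, the ETS counter only becomes a real-world time at the decoder, once the TTS engine reports when it actually speaks each bookmark position.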
-
Patent number: 7428547
Abstract: File format systems and methods are disclosed that provide a framework integrating concepts such as object-based audio-visual representation, metadata, and object-oriented programming to achieve a flexible and generic representation of the audiovisual information and the associated methods to operate on it. A system and method are disclosed for storing data processed from presentation data. The data is stored according to a method comprising coding input presentation data by identifying objects from within the presentation data, coding each object individually, and organizing the coded data into access layer data units. The access layer data units are stored throughout a plurality of segments, each segment comprising a segment table in a header portion thereof and those access layer data units that are members of the respective segment, there being one entry in the segment table for each access layer data unit therein.
Type: Grant
Filed: February 24, 2004
Date of Patent: September 23, 2008
Assignees: AT&T Corp., The Trustees of Columbia University
Inventors: Andrea Basso, Alexandros Eleftheriadis, Hari Kalva, Atul Puri, Robert Lewis Schmidt
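The segment layout described here (a header-resident segment table with one entry per access layer data unit) can be sketched as a toy serializer. The byte encoding, field widths, and function names below are illustrative choices, not the patent's format:

```python
# Sketch: access layer (AL) data units grouped into a segment whose header
# holds a table with one (offset, size) entry per unit.
import struct

def pack_segment(units):
    """Pack AL units into one segment: count + (offset, size) table in the
    header, followed by the concatenated unit payloads."""
    table, offset = [], 0
    for u in units:
        table.append((offset, len(u)))
        offset += len(u)
    header = struct.pack("<I", len(units))
    for off, size in table:
        header += struct.pack("<II", off, size)
    return header + b"".join(units)

def unpack_segment(blob):
    """Recover the AL units by walking the segment table."""
    (count,) = struct.unpack_from("<I", blob, 0)
    table = [struct.unpack_from("<II", blob, 4 + 8 * i) for i in range(count)]
    body = blob[4 + 8 * count:]
    return [body[off:off + size] for off, size in table]

units = [b"video-AU-0", b"audio-AU-0", b"video-AU-1"]
assert unpack_segment(pack_segment(units)) == units
```

Keeping the table in the segment header is what makes random access cheap: a reader can locate any unit in a segment without scanning the payload.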
-
Publication number: 20080228825
Abstract: File format systems and methods are disclosed that provide a framework integrating concepts such as object-based audio-visual representation, metadata, and object-oriented programming to achieve a flexible and generic representation of the audiovisual information and the associated methods to operate on it. A system and method are disclosed for storing data processed from presentation data. The data is stored according to a method comprising coding input presentation data by identifying objects from within the presentation data, coding each object individually, and organizing the coded data into access layer data units. The access layer data units are stored throughout a plurality of segments, each segment comprising a segment table in a header portion thereof and those access layer data units that are members of the respective segment, there being one entry in the segment table for each access layer data unit therein.
Type: Application
Filed: May 29, 2008
Publication date: September 18, 2008
Applicants: AT&T Corp., The Trustees of Columbia University
Inventors: Andrea Basso, Alexandros Eleftheriadis, Hari Kalva, Atul Puri, Robert Lewis Schmidt
-
Publication number: 20080114723
Abstract: A method and apparatus for displaying received data, analyzing the quality of the displayed data, formulating a media-parameter suggestion for the encoder to alter the characteristics of data to be sent to the receiver, and sending the formulated suggestion from the receiver.
Type: Application
Filed: October 31, 2007
Publication date: May 15, 2008
Applicant: AT&T Corp.
Inventors: Andrea Basso, Erich Haratsch, Barin Geoffry Haskell, Joern Ostermann
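The receiver-side feedback loop in this abstract can be sketched minimally: measure displayed quality, formulate a parameter suggestion, send it back. The quality metric, thresholds, and parameter names below are invented for illustration:

```python
# Sketch of a receiver formulating a media-parameter suggestion for the
# encoder based on the quality of what it actually displayed.
def suggest_parameters(measured_quality, target_quality, current_bitrate):
    """Return a hypothetical suggestion dict for the encoder."""
    if measured_quality < target_quality:
        # Displayed quality too low: ask for more bits.
        return {"bitrate": int(current_bitrate * 1.25)}
    # Quality adequate: free up channel capacity.
    return {"bitrate": int(current_bitrate * 0.9)}

print(suggest_parameters(28.0, 32.0, 1_000_000))  # {'bitrate': 1250000}
print(suggest_parameters(35.0, 32.0, 1_000_000))  # {'bitrate': 900000}
```

The design point is that the receiver, not the encoder, judges quality, since only the receiver sees the effects of the channel on the displayed data.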
-
Patent number: 7366670
Abstract: Facial animation in MPEG-4 can be driven by a text stream and a Facial Animation Parameters (FAP) stream. Text input is sent to a TTS converter that drives the mouth shapes of the face. FAPs are sent from an encoder to the face over the communication channel. Disclosed are codes, known as bookmarks, in the text string transmitted to the TTS converter. Bookmarks are placed between and inside words and carry an encoder time stamp. The encoder time stamp does not relate to real-world time. The FAP stream carries the same encoder time stamp found in the bookmark of the text. The system reads the bookmark and provides the encoder time stamp as well as a real-time time stamp to the facial animation system. The facial animation system associates the correct facial animation parameter with the real-time time stamp, using the encoder time stamp of the bookmark as a reference.
Type: Grant
Filed: August 11, 2006
Date of Patent: April 29, 2008
Assignee: AT&T Corp.
Inventors: Andrea Basso, Mark Charles Beutnagel, Joern Ostermann
-
Publication number: 20080090242
Abstract: The present invention provides, inter alia, methods for selecting a patient with cancer for treatment with a farnesyl protein transferase inhibitor, as well as methods for treating said patient.
Type: Application
Filed: September 28, 2007
Publication date: April 17, 2008
Inventors: Diane Levitan, Andrea Basso, Marvin Bayne, Walter Bishop, Paul Kirschmeier
-
Publication number: 20080059194
Abstract: According to MPEG-4's TTS architecture, facial animation can be driven by two streams simultaneously—text, and Facial Animation Parameters. In this architecture, text input is sent to a Text-To-Speech converter at a decoder that drives the mouth shapes of the face. Facial Animation Parameters are sent from an encoder to the face over the communication channel. The present invention includes codes (known as bookmarks) in the text string transmitted to the Text-to-Speech converter, which bookmarks are placed between words as well as inside them. According to the present invention, the bookmarks carry an encoder time stamp. Due to the nature of text-to-speech conversion, the encoder time stamp does not relate to real-world time, and should be interpreted as a counter. In addition, the Facial Animation Parameter stream carries the same encoder time stamp found in the bookmark of the text.
Type: Application
Filed: October 31, 2007
Publication date: March 6, 2008
Applicant: AT&T Corp.
Inventors: Andrea Basso, Mark Beutnagel, Joern Ostermann
-
Patent number: 7310811
Abstract: A method and apparatus for displaying received data, analyzing the quality of the displayed data, formulating a media-parameter suggestion for the encoder to alter the characteristics of data to be sent to the receiver, and sending the formulated suggestion from the receiver.
Type: Grant
Filed: July 10, 1998
Date of Patent: December 18, 2007
Assignee: AT&T Corp.
Inventors: Andrea Basso, Erich Haratsch, Barin Geoffry Haskell, Joern Ostermann
-
Patent number: 7231099
Abstract: A method of improving the lighting conditions of a real scene or video sequence. Digitally generated light is added to a scene for video conferencing over telecommunication networks. A virtual illumination equation takes into account light attenuation and Lambertian and specular reflection. An image of an object is captured, and a virtual light source illuminates the object within the image. The object can be the head of the user. The position of the head of the user is dynamically tracked so that a three-dimensional model is generated which is representative of the head of the user. Synthetic light is applied to a position on the model to form an illuminated model.
Type: Grant
Filed: August 31, 2005
Date of Patent: June 12, 2007
Assignee: AT&T
Inventors: Andrea Basso, Eric Cosatto, David Crawford Gibbon, Hans Peter Graf, Shan Liu
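A virtual illumination equation of the kind the abstract names (attenuation plus Lambertian plus specular terms) can be sketched with a generic Phong-style formulation. The coefficients and attenuation polynomial below are conventional graphics defaults, not the patent's actual equation:

```python
# Sketch of a synthetic-light contribution at one surface point, combining
# distance attenuation, Lambertian (diffuse) and specular reflection.
def illuminate(dist, n_dot_l, r_dot_v, kd=0.7, ks=0.3, shininess=16, intensity=1.0):
    """dist: distance to the virtual light (drives attenuation)
    n_dot_l: cosine between surface normal and light direction
    r_dot_v: cosine between reflection direction and view direction
    """
    attenuation = 1.0 / (1.0 + 0.1 * dist + 0.01 * dist * dist)
    lambertian = kd * max(n_dot_l, 0.0)            # diffuse term
    specular = ks * (max(r_dot_v, 0.0) ** shininess)  # highlight term
    return intensity * attenuation * (lambertian + specular)

# Head-on light at distance 2: attenuation * (kd + ks) = 1/1.24
print(round(illuminate(2.0, 1.0, 1.0), 3))  # 0.806
```

Evaluating this per vertex of the tracked 3-D head model and adding the result to the captured pixel values is the basic way such synthetic light gets applied to a real image.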
-
Patent number: 7110950
Abstract: According to MPEG-4's TTS architecture, facial animation can be driven by two streams simultaneously—text, and Facial Animation Parameters. In this architecture, text input is sent to a Text-To-Speech converter at a decoder that drives the mouth shapes of the face. Facial Animation Parameters are sent from an encoder to the face over the communication channel. The present invention includes codes (known as bookmarks) in the text string transmitted to the Text-to-Speech converter, which bookmarks are placed between words as well as inside them. According to the present invention, the bookmarks carry an encoder time stamp. Due to the nature of text-to-speech conversion, the encoder time stamp does not relate to real-world time, and should be interpreted as a counter. In addition, the Facial Animation Parameter stream carries the same encoder time stamp found in the bookmark of the text.
Type: Grant
Filed: January 7, 2005
Date of Patent: September 19, 2006
Assignee: AT&T Corp.
Inventors: Andrea Basso, Mark Charles Beutnagel, Joern Ostermann
-
Patent number: 6980697
Abstract: A method of improving the lighting conditions of a real scene or video sequence. Digitally generated light is added to a scene for video conferencing over telecommunication networks. A virtual illumination equation takes into account light attenuation and Lambertian and specular reflection. An image of an object is captured, and a virtual light source illuminates the object within the image. The object can be the head of the user. The position of the head of the user is dynamically tracked so that a three-dimensional model is generated which is representative of the head of the user. Synthetic light is applied to a position on the model to form an illuminated model.
Type: Grant
Filed: January 25, 2002
Date of Patent: December 27, 2005
Assignee: AT&T Corp.
Inventors: Andrea Basso, Eric Cosatto, David Crawford Gibbon, Hans Peter Graf, Shan Liu
-
Patent number: 6961383
Abstract: Scalable video coders have traditionally avoided using enhancement layer information to predict the base layer, so as to avoid so-called "drift". As a result, they are less efficient than a one-layer coder. The present invention is directed to a scalable video coder that allows drift by predicting the base layer from the enhancement layer information. Through careful management of the amount of drift introduced, the overall compression efficiency can be improved while only slightly degrading resilience at lower bit-rates.
Type: Grant
Filed: November 21, 2001
Date of Patent: November 1, 2005
Assignee: AT&T Corp.
Inventors: Amy Ruth Reibman, Leon Bottou, Andrea Basso
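One standard device for managing drift of this kind is leaky prediction: mix the enhancement-layer reference into the base-layer predictor with a leak factor, so any encoder/decoder mismatch decays rather than accumulating unchecked. This sketch illustrates that general idea; the patent's actual drift-management mechanism may differ:

```python
# Sketch of "managed drift" via leaky prediction: the base layer is
# predicted partly from enhancement-layer information, with alpha bounding
# the drift incurred if the enhancement layer is lost.
def predict_base(prev_base, prev_enh, alpha=0.5):
    """Base-layer predictor mixing base and enhancement references.
    alpha = 0 -> classic drift-free base layer; alpha = 1 -> maximal drift risk."""
    return [(1 - alpha) * b + alpha * e for b, e in zip(prev_base, prev_enh)]

# Encoder uses the enhancement reference; a decoder that lost the
# enhancement layer falls back to the base reference. Their mismatch
# (the drift) is scaled by alpha.
encoder_pred = predict_base([10, 20], [12, 26], alpha=0.5)
decoder_pred = predict_base([10, 20], [10, 20], alpha=0.5)  # enh lost
drift = [abs(a - b) for a, b in zip(encoder_pred, decoder_pred)]
print(encoder_pred, drift)  # [11.0, 23.0] [1.0, 3.0]
```

Raising alpha buys compression efficiency (a better predictor) at the cost of larger drift when low-bit-rate receivers cannot decode the enhancement layer, which is exactly the trade-off the abstract describes.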
-
Publication number: 20050119877
Abstract: According to MPEG-4's TTS architecture, facial animation can be driven by two streams simultaneously—text, and Facial Animation Parameters. In this architecture, text input is sent to a Text-To-Speech converter at a decoder that drives the mouth shapes of the face. Facial Animation Parameters are sent from an encoder to the face over the communication channel. The present invention includes codes (known as bookmarks) in the text string transmitted to the Text-to-Speech converter, which bookmarks are placed between words as well as inside them. According to the present invention, the bookmarks carry an encoder time stamp. Due to the nature of text-to-speech conversion, the encoder time stamp does not relate to real-world time, and should be interpreted as a counter. In addition, the Facial Animation Parameter stream carries the same encoder time stamp found in the bookmark of the text.
Type: Application
Filed: January 7, 2005
Publication date: June 2, 2005
Applicant: AT&T Corp.
Inventors: Andrea Basso, Mark Beutnagel, Joern Ostermann
-
Patent number: 6862569
Abstract: According to MPEG-4's TTS architecture, facial animation can be driven by two streams simultaneously—text, and Facial Animation Parameters. In this architecture, text input is sent to a Text-To-Speech converter at a decoder that drives the mouth shapes of the face. Facial Animation Parameters are sent from an encoder to the face over the communication channel. The present invention includes codes (known as bookmarks) in the text string transmitted to the Text-to-Speech converter, which bookmarks are placed between words as well as inside them. According to the present invention, the bookmarks carry an encoder time stamp. Due to the nature of text-to-speech conversion, the encoder time stamp does not relate to real-world time, and should be interpreted as a counter. In addition, the Facial Animation Parameter stream carries the same encoder time stamp found in the bookmark of the text.
Type: Grant
Filed: January 23, 2003
Date of Patent: March 1, 2005
Assignee: AT&T Corp.
Inventors: Andrea Basso, Mark Charles Beutnagel, Joern Ostermann
-
Publication number: 20040167916
Abstract: File format systems and methods are disclosed that provide a framework integrating concepts such as object-based audio-visual representation, metadata, and object-oriented programming to achieve a flexible and generic representation of the audiovisual information and the associated methods to operate on it. A system and method are disclosed for storing data processed from presentation data. The data is stored according to a method comprising coding input presentation data by identifying objects from within the presentation data, coding each object individually, and organizing the coded data into access layer data units. The access layer data units are stored throughout a plurality of segments, each segment comprising a segment table in a header portion thereof and those access layer data units that are members of the respective segment, there being one entry in the segment table for each access layer data unit therein.
Type: Application
Filed: February 24, 2004
Publication date: August 26, 2004
Applicant: AT&T Corp.
Inventors: Andrea Basso, Alexandros Eleftheriadis, Hari Kalva, Atul Puri, Robert Lewis Schmidt
-
Patent number: 6751623
Abstract: A fundamental limitation in the exchange of audiovisual information today is that its representation is extremely low level. It is composed of coded video or audio samples (often as blocks) arranged in a commercial format. In contrast, the new generation of multimedia requires flexible formats to allow quick adaptation to requirements in terms of access, bandwidth scalability, streaming, and general data reorganization. The Flexible-Integrated Intermedia Format (Flexible-IIF or F-IIF) is an advanced extension to the Integrated Intermedia Format (IIF). The Flexible-IIF data structures, file format systems, and methods provide a framework that integrates advanced concepts, such as object-based audio-visual representation, metadata, and object-oriented programming, to achieve a flexible and generic representation of the audiovisual information and the associated methods to operate on it.
Type: Grant
Filed: January 26, 1999
Date of Patent: June 15, 2004
Assignees: AT&T Corp., The Trustees of Columbia University
Inventors: Andrea Basso, Alexandros Eleftheriadis, Hari Kalva, Atul Puri, Robert Lewis Schmidt
-
Patent number: 6602299
Abstract: A flexible framework for synchronization of multimedia streams synchronizes the incoming streams through the collaboration of a transmitter-driven synchronization module and a local inter-media synchronization module. Whenever the first is not enough to ensure reliable synchronization, or cannot assure synchronization because the encoder does not know the exact timing of the decoder, the second comes into play. Normally, the transmitter-driven module uses the stream time stamps if their drift is acceptable. If the drift is too high, the system activates an internal inter-media synchronization mode, while the transmitter-driven module extracts the coarsest inter-media synchronization and/or the structural information present in the streams. The internal clock of the receiver is used as the absolute time reference. Whenever the drift value stabilizes to acceptable values, the system switches back smoothly to the external synchronization mode.
Type: Grant
Filed: November 13, 2000
Date of Patent: August 5, 2003
Assignee: AT&T Corp.
Inventors: Andrea Basso, Joern Ostermann
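The mode-switching policy described here (external, transmitter-driven sync while drift is acceptable; internal, receiver-clock sync when it is not; a smooth switch back once drift stabilizes) amounts to a small state machine with hysteresis. The thresholds and class name below are illustrative, not from the patent:

```python
# Sketch of the two-mode synchronization policy: hysteresis between the
# entry and exit thresholds avoids rapid flapping between modes.
class SyncController:
    def __init__(self, enter_internal=50.0, exit_internal=10.0):
        self.mode = "external"       # transmitter-driven sync by default
        self.enter = enter_internal  # ms of drift that forces internal mode
        self.exit = exit_internal    # ms of drift to return to external mode

    def update(self, drift_ms):
        if self.mode == "external" and abs(drift_ms) > self.enter:
            self.mode = "internal"   # receiver clock becomes the time reference
        elif self.mode == "internal" and abs(drift_ms) < self.exit:
            self.mode = "external"   # stream time stamps trusted again
        return self.mode

ctrl = SyncController()
print([ctrl.update(d) for d in (5, 80, 30, 8)])
# ['external', 'internal', 'internal', 'external']
```

The gap between the two thresholds is the "smooth" part: a drift reading of 30 ms keeps the controller in internal mode rather than bouncing back the moment drift dips under the entry threshold.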
-
Patent number: 6567779
Abstract: According to MPEG-4's TTS architecture, facial animation can be driven by two streams simultaneously—text, and Facial Animation Parameters. In this architecture, text input is sent to a Text-To-Speech converter at a decoder that drives the mouth shapes of the face. Facial Animation Parameters are sent from an encoder to the face over the communication channel. The present invention includes codes (known as bookmarks) in the text string transmitted to the Text-to-Speech converter, which bookmarks are placed between words as well as inside them. According to the present invention, the bookmarks carry an encoder time stamp. Due to the nature of text-to-speech conversion, the encoder time stamp does not relate to real-world time, and should be interpreted as a counter. In addition, the Facial Animation Parameter stream carries the same encoder time stamp found in the bookmark of the text.
Type: Grant
Filed: August 5, 1997
Date of Patent: May 20, 2003
Assignee: AT&T Corp.
Inventors: Andrea Basso, Mark Charles Beutnagel, Joern Ostermann
-
Publication number: 20030012679
Abstract: A gold alloy comprising, by weight, gold (Au) ≥ 33%, iridium (Ir) ≤ 0.4%, germanium (Ge) ≤ 2%, 0.015% ≤ silicon (Si) ≤ 0.3%, phosphorus (P) ≤ 0.02%, and copper (Cu) ≤ 66%. The alloy can also comprise, in percentage by weight, silver (Ag) ≤ 34%, nickel (Ni) ≤ 20%, and zinc (Zn) ≤ 25%. In some variations the gold alloy can further comprise no more than 4% of at least one element of the group constituted by cobalt, manganese, tin, and indium, and no more than 0.15% of at least one deoxidizing element of the group constituted by magnesium, silicon, boron, and lithium. At least one refining element of the group constituted by ruthenium, rhenium, and platinum can also be added to the alloy in quantities not exceeding 0.4% by weight. The invention further relates to a master alloy for obtaining said gold alloy.
Type: Application
Filed: May 29, 2002
Publication date: January 16, 2003
Applicant: LEG.OR S.R.L.
Inventors: Massimo Poliero, Andrea Basso
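The main composition windows in this abstract are plain numeric constraints, so they can be checked mechanically. The sketch below transcribes only the first set of claimed ranges (weight %); it is an illustration of the arithmetic, not metallurgical guidance:

```python
# Toy checker for the principal composition windows listed in the abstract
# (weight %). Elements with a positive lower bound are treated as required.
BOUNDS = {
    "Au": (33.0, 100.0),   # at least 33%
    "Ir": (0.0, 0.4),
    "Ge": (0.0, 2.0),
    "Si": (0.015, 0.3),    # required: 0.015% <= Si <= 0.3%
    "P":  (0.0, 0.02),
    "Cu": (0.0, 66.0),
}

def in_claimed_range(composition):
    """True if every listed element falls inside its claimed window."""
    return all(lo <= composition.get(el, 0.0) <= hi
               for el, (lo, hi) in BOUNDS.items()
               if el in composition or lo > 0.0)

print(in_claimed_range({"Au": 58.5, "Cu": 38.0, "Si": 0.1}))  # True
print(in_claimed_range({"Au": 30.0, "Cu": 66.0, "Si": 0.1}))  # False (Au < 33%)
```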