Patents by Inventor Andrea Basso

Andrea Basso has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20090024596
    Abstract: Disclosed herein are systems, methods, and computer-readable media to represent, store, and manipulate metadata. The method for representing metadata includes defining a map to metadata stored in a global database for each of a plurality of metadata containers, receiving a query for metadata associated with a file, determining which of the plurality of metadata containers the query requires, and responding to the query based on metadata associated with the file from the global database retrieved using the corresponding map for the determined metadata container.
    Type: Application
    Filed: October 30, 2007
    Publication date: January 22, 2009
    Applicant: AT&T Labs, Inc.
    Inventors: Andrea BASSO, David Crawford GIBBON
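
For illustration, a minimal sketch of the container-to-global-database mapping this abstract describes; the store layout, container names, and field maps below are hypothetical, not taken from the patent:

```python
# Hypothetical illustration of per-container maps into a global metadata store.
# All names and structures here are assumptions, not the patented design.

# Global database: file -> {field: value}
GLOBAL_DB = {
    "clip.mp4": {"title": "Demo", "duration_s": 120, "codec": "h264", "author": "jdoe"},
}

# Each metadata container exposes a mapped subset of the global fields.
CONTAINER_MAPS = {
    "technical": {"duration": "duration_s", "video_codec": "codec"},
    "descriptive": {"title": "title", "creator": "author"},
}

def determine_container(query_fields):
    """Determine which container's map covers every queried field."""
    for name, field_map in CONTAINER_MAPS.items():
        if all(f in field_map for f in query_fields):
            return name
    raise KeyError(f"no container covers {query_fields}")

def answer_query(filename, query_fields):
    """Resolve a query through the container map into the global database."""
    field_map = CONTAINER_MAPS[determine_container(query_fields)]
    record = GLOBAL_DB[filename]
    return {f: record[field_map[f]] for f in query_fields}

print(answer_query("clip.mp4", ["title", "creator"]))
# {'title': 'Demo', 'creator': 'jdoe'}
```
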
  • Publication number: 20080312930
    Abstract: According to MPEG-4's TTS architecture, facial animation can be driven by two streams simultaneously: text and Facial Animation Parameters. In this architecture, text input is sent to a Text-to-Speech converter at a decoder that drives the mouth shapes of the face. Facial Animation Parameters are sent from an encoder to the face over the communication channel. The present invention includes codes (known as bookmarks) in the text string transmitted to the Text-to-Speech converter, which bookmarks are placed between words as well as inside them. According to the present invention, the bookmarks carry an encoder time stamp. Due to the nature of text-to-speech conversion, the encoder time stamp does not relate to real-world time, and should be interpreted as a counter. In addition, the Facial Animation Parameter stream carries the same encoder time stamp found in the bookmark of the text.
    Type: Application
    Filed: August 18, 2008
    Publication date: December 18, 2008
    Applicant: AT&T Corp.
    Inventors: Andrea Basso, Mark Charles Beutnagel, Joern Ostermann
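
For illustration, a small parser showing how bookmarks carrying encoder time stamps might ride inside the text sent to the Text-to-Speech converter; the `<bm:N>` syntax is an invented stand-in, not the MPEG-4 bitstream syntax:

```python
import re

# Invented bookmark syntax <bm:N>; the MPEG-4 bitstream syntax differs. This
# only shows encoder time stamps riding inside the text given to the TTS.
text = "Hel<bm:3>lo <bm:4>world, this is a <bm:7>test."

def split_bookmarks(s):
    """Return (plain_text, [(char_offset, encoder_time_stamp), ...])."""
    plain, marks, pos = [], [], 0
    for chunk in re.split(r"(<bm:\d+>)", s):
        m = re.fullmatch(r"<bm:(\d+)>", chunk)
        if m:
            marks.append((pos, int(m.group(1))))   # ETS is a counter, not time
        else:
            plain.append(chunk)
            pos += len(chunk)
    return "".join(plain), marks

tts_input, bookmarks = split_bookmarks(text)
print(tts_input)    # Hello world, this is a test.
print(bookmarks)    # [(3, 3), (6, 4), (23, 7)]
```

Note that bookmarks sit both between words and inside them, as the abstract describes.
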
  • Patent number: 7428547
    Abstract: File format systems and methods are disclosed that provide a framework that integrates concepts, such as object-based audio-visual representation, metadata, and object-oriented programming, to achieve a flexible and generic representation of the audiovisual information and the associated methods to operate on the audiovisual information. A system and method are disclosed for storing data processed from presentation data. The data is stored according to a method comprising coding input presentation data by identifying objects from within the presentation data, coding each object individually, and organizing the coded data into access layer data units. The access layer data units are stored throughout a plurality of segments, each segment comprising a segment table in a header portion thereof and those access layer data units that are members of the respective segment, there being one entry in the segment table for each access layer data unit therein.
    Type: Grant
    Filed: February 24, 2004
    Date of Patent: September 23, 2008
    Assignees: AT&T Corp., The Trustees of Columbia University
    Inventors: Andrea Basso, Alexandros Eleftheriadis, Hari Kalva, Atul Puri, Robert Lewis Schmidt
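
For illustration, a toy reader/writer for the segment layout this abstract describes: each segment opens with a header segment table holding one (offset, length) entry per access layer data unit, followed by the units themselves. The byte layout is an assumption, not the patented format:

```python
import struct

def write_segment(access_units):
    """Pack access layer data units into one segment: header segment table
    (unit count, then one (offset, length) entry per unit), then the units."""
    table_size = 4 + 8 * len(access_units)
    entries, payload, offset = [], b"", table_size
    for au in access_units:
        entries.append(struct.pack("<II", offset, len(au)))
        payload += au
        offset += len(au)
    return struct.pack("<I", len(access_units)) + b"".join(entries) + payload

def read_segment(blob):
    """Recover the access layer data units via the segment table."""
    (count,) = struct.unpack_from("<I", blob, 0)
    units = []
    for i in range(count):
        off, length = struct.unpack_from("<II", blob, 4 + 8 * i)
        units.append(blob[off:off + length])
    return units

seg = write_segment([b"video-au-0", b"audio-au-0", b"video-au-1"])
assert read_segment(seg) == [b"video-au-0", b"audio-au-0", b"video-au-1"]
```
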
  • Publication number: 20080228825
    Abstract: File format systems and methods are disclosed that provide a framework that integrates concepts, such as object-based audio-visual representation, metadata, and object-oriented programming, to achieve a flexible and generic representation of the audiovisual information and the associated methods to operate on the audiovisual information. A system and method are disclosed for storing data processed from presentation data. The data is stored according to a method comprising coding input presentation data by identifying objects from within the presentation data, coding each object individually, and organizing the coded data into access layer data units. The access layer data units are stored throughout a plurality of segments, each segment comprising a segment table in a header portion thereof and those access layer data units that are members of the respective segment, there being one entry in the segment table for each access layer data unit therein.
    Type: Application
    Filed: May 29, 2008
    Publication date: September 18, 2008
    Applicants: AT&T Corp., The Trustees of Columbia University
    Inventors: Andrea BASSO, Alexandros Eleftheriadis, Hari Kalva, Atul Puri, Robert Lewis Schmidt
  • Publication number: 20080114723
    Abstract: A method and apparatus for displaying received data, analyzing the quality of the displayed data, formulating a media-parameter suggestion for the encoder to alter the characteristics of the data to be sent to the receiver, and sending the formulated suggestion from the receiver.
    Type: Application
    Filed: October 31, 2007
    Publication date: May 15, 2008
    Applicant: AT&T Corp.
    Inventors: Andrea Basso, Erich Haratsch, Barin Geoffry Haskell, Joern Ostermann
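
For illustration, a sketch of the receiver-side feedback loop this abstract describes: analyze the quality of what was displayed, formulate a media-parameter suggestion, and send it back to the encoder. The quality metrics and suggestion vocabulary are invented for the example:

```python
# Invented quality metrics and suggestion vocabulary; the patent does not
# specify these. The receiver would transmit the returned dict upstream.

def analyze_quality(loss_rate, avg_decode_ms):
    """Formulate a media-parameter suggestion from receiver-side measurements."""
    if loss_rate > 0.05:
        return {"suggest": "lower_bitrate", "factor": 0.8}
    if avg_decode_ms > 40.0:
        return {"suggest": "lower_resolution", "scale": 0.5}
    return {"suggest": "keep_current"}

# Simulated measurements standing in for a live decoder and display.
for loss, ms in [(0.10, 25.0), (0.01, 55.0), (0.01, 20.0)]:
    print(analyze_quality(loss, ms))
```
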
  • Patent number: 7366670
    Abstract: Facial animation in MPEG-4 can be driven by a text stream and a Facial Animation Parameters (FAP) stream. Text input is sent to a TTS converter that drives the mouth shapes of the face. FAPs are sent from an encoder to the face over the communication channel. Disclosed are codes, known as bookmarks, in the text string transmitted to the TTS converter. Bookmarks are placed between and inside words and carry an encoder time stamp. The encoder time stamp does not relate to real-world time. The FAP stream carries the same encoder time stamp found in the bookmark of the text. The system reads the bookmark and provides the encoder time stamp, as well as a real-time time stamp, to the facial animation system. The facial animation system associates the correct facial animation parameter with the real-time time stamp, using the encoder time stamp of the bookmark as a reference.
    Type: Grant
    Filed: August 11, 2006
    Date of Patent: April 29, 2008
    Assignee: AT&T Corp.
    Inventors: Andrea Basso, Mark Charles Beutnagel, Joern Ostermann
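
For illustration, a sketch of the time-stamp association this abstract describes: bookmark events pair each encoder time stamp (a counter) with the real time at which the TTS speaks that point, and FAP frames carrying the same counter are scheduled at that real time. The data values are invented:

```python
# Invented data: bookmark events pair each encoder time stamp (a counter)
# with the real time at which the TTS actually speaks that point in the text.
bookmark_events = [(3, 0.42), (4, 0.61), (7, 1.35)]      # (ETS, real time s)
fap_stream = [(7, "frown"), (3, "smile"), (4, "blink")]  # (ETS, FAP payload)

ets_to_rts = dict(bookmark_events)
schedule = sorted((ets_to_rts[ets], fap)
                  for ets, fap in fap_stream if ets in ets_to_rts)
for rts, fap in schedule:
    print(f"t={rts:.2f}s apply FAP: {fap}")
```
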
  • Publication number: 20080090242
    Abstract: The present invention provides, inter alia, methods for selecting a patient with cancer for treatment with a farnesyl protein transferase inhibitor as well as methods for treating said patient.
    Type: Application
    Filed: September 28, 2007
    Publication date: April 17, 2008
    Inventors: Diane Levitan, Andrea Basso, Marvin Bayne, Walter Bishop, Paul Kirschmeier
  • Publication number: 20080059194
    Abstract: According to MPEG-4's TTS architecture, facial animation can be driven by two streams simultaneously: text and Facial Animation Parameters. In this architecture, text input is sent to a Text-to-Speech converter at a decoder that drives the mouth shapes of the face. Facial Animation Parameters are sent from an encoder to the face over the communication channel. The present invention includes codes (known as bookmarks) in the text string transmitted to the Text-to-Speech converter, which bookmarks are placed between words as well as inside them. According to the present invention, the bookmarks carry an encoder time stamp. Due to the nature of text-to-speech conversion, the encoder time stamp does not relate to real-world time, and should be interpreted as a counter. In addition, the Facial Animation Parameter stream carries the same encoder time stamp found in the bookmark of the text.
    Type: Application
    Filed: October 31, 2007
    Publication date: March 6, 2008
    Applicant: AT&T Corp.
    Inventors: Andrea Basso, Mark Beutnagel, Joern Ostermann
  • Patent number: 7310811
    Abstract: A method and apparatus for displaying received data, analyzing the quality of the displayed data, formulating a media-parameter suggestion for the encoder to alter the characteristics of the data to be sent to the receiver, and sending the formulated suggestion from the receiver.
    Type: Grant
    Filed: July 10, 1998
    Date of Patent: December 18, 2007
    Assignee: AT&T Corp.
    Inventors: Andrea Basso, Erich Haratsch, Barin Geoffry Haskell, Joern Ostermann
  • Patent number: 7231099
    Abstract: A method of improving the lighting conditions of a real scene or video sequence. Digitally generated light is added to a scene for video conferencing over telecommunication networks. A virtual illumination equation takes into account light attenuation, Lambertian reflection, and specular reflection. An image of an object is captured, and a virtual light source illuminates the object within the image. In addition, the object can be the head of the user. The position of the head of the user is dynamically tracked so that a three-dimensional model is generated that is representative of the head of the user. Synthetic light is applied to a position on the model to form an illuminated model.
    Type: Grant
    Filed: August 31, 2005
    Date of Patent: June 12, 2007
    Assignee: AT&T
    Inventors: Andrea Basso, Eric Cosatto, David Crawford Gibbon, Hans Peter Graf, Shan Liu
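
For illustration, a standard point-light shading model of the kind the abstract names, combining distance attenuation with Lambertian and specular terms; the patent's exact virtual illumination equation is not reproduced here, so treat the form and coefficients below as assumptions:

```python
import math

# Assumed form of a point-light illumination equation with the three effects
# the abstract names: distance attenuation, Lambertian (diffuse) reflection,
# and specular (Phong-style) reflection. Not the patent's exact equation.
#   I = I_amb + att(d) * I_light * (k_d * max(N.L, 0) + k_s * max(R.V, 0)**n)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    m = math.sqrt(dot(v, v))
    return tuple(x / m for x in v)

def shade(normal, to_light, to_viewer, dist,
          kd=0.7, ks=0.3, shininess=16,
          i_ambient=0.05, i_light=1.0, kc=1.0, kl=0.1, kq=0.01):
    """Intensity at a surface point lit by one virtual point light."""
    n, l, v = normalize(normal), normalize(to_light), normalize(to_viewer)
    attenuation = 1.0 / (kc + kl * dist + kq * dist * dist)
    diffuse = kd * max(dot(n, l), 0.0)                  # Lambertian term
    r = tuple(2 * dot(n, l) * nc - lc for nc, lc in zip(n, l))  # reflect l about n
    specular = ks * max(dot(r, v), 0.0) ** shininess    # specular term
    return i_ambient + attenuation * i_light * (diffuse + specular)

# Surface facing the viewer, light slightly off-axis, two units away.
print(round(shade((0, 0, 1), (0.3, 0.3, 1.0), (0, 0, 1), dist=2.0), 4))
```
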
  • Patent number: 7110950
    Abstract: According to MPEG-4's TTS architecture, facial animation can be driven by two streams simultaneously: text and Facial Animation Parameters. In this architecture, text input is sent to a Text-to-Speech converter at a decoder that drives the mouth shapes of the face. Facial Animation Parameters are sent from an encoder to the face over the communication channel. The present invention includes codes (known as bookmarks) in the text string transmitted to the Text-to-Speech converter, which bookmarks are placed between words as well as inside them. According to the present invention, the bookmarks carry an encoder time stamp. Due to the nature of text-to-speech conversion, the encoder time stamp does not relate to real-world time, and should be interpreted as a counter. In addition, the Facial Animation Parameter stream carries the same encoder time stamp found in the bookmark of the text.
    Type: Grant
    Filed: January 7, 2005
    Date of Patent: September 19, 2006
    Assignee: AT&T Corp.
    Inventors: Andrea Basso, Mark Charles Beutnagel, Joern Ostermann
  • Patent number: 6980697
    Abstract: A method of improving the lighting conditions of a real scene or video sequence. Digitally generated light is added to a scene for video conferencing over telecommunication networks. A virtual illumination equation takes into account light attenuation, Lambertian reflection, and specular reflection. An image of an object is captured, and a virtual light source illuminates the object within the image. In addition, the object can be the head of the user. The position of the head of the user is dynamically tracked so that a three-dimensional model is generated that is representative of the head of the user. Synthetic light is applied to a position on the model to form an illuminated model.
    Type: Grant
    Filed: January 25, 2002
    Date of Patent: December 27, 2005
    Assignee: AT&T Corp.
    Inventors: Andrea Basso, Eric Cosatto, David Crawford Gibbon, Hans Peter Graf, Shan Liu
  • Patent number: 6961383
    Abstract: Scalable video coders have traditionally avoided using enhancement layer information to predict the base layer, so as to avoid so-called “drift”. As a result, they are less efficient than a one-layer coder. The present invention is directed to a scalable video coder that allows drift by predicting the base layer from the enhancement layer information. Through careful management of the amount of drift introduced, the overall compression efficiency can be improved while only slightly degrading resilience at lower bit rates.
    Type: Grant
    Filed: November 21, 2001
    Date of Patent: November 1, 2005
    Assignee: AT&T Corp.
    Inventors: Amy Ruth Reibman, Leon Bottou, Andrea Basso
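
For illustration, a toy one-dimensional coder showing the trade-off the abstract describes: predicting the base layer partly from the (here, lossless) enhancement reconstruction shrinks the residuals, but a decoder that loses the enhancement layer accumulates drift. The blend factor alpha and the quantizer are invented for the example, not the patented scheme:

```python
def quantize(v, q=4.0):
    """Coarse base-layer quantizer (invented for the example)."""
    return round(v / q) * q

def encode(signal, alpha):
    """Code base-layer residuals, predicting each sample from a blend of the
    coarse base reconstruction and the full-quality enhancement reference."""
    base_ref, enh_ref = 0.0, 0.0
    residuals, base_recon = [], []
    for x in signal:
        pred = (1 - alpha) * base_ref + alpha * enh_ref
        res = quantize(x - pred)
        base_ref = pred + res
        enh_ref = x                    # toy assumption: enhancement is lossless
        residuals.append(res)
        base_recon.append(base_ref)
    return residuals, base_recon

def decode_without_enhancement(residuals, alpha):
    """A decoder that lost the enhancement layer must substitute its own base
    reference for the missing enhancement reference, so mismatch accumulates."""
    ref, out = 0.0, []
    for res in residuals:
        pred = (1 - alpha) * ref + alpha * ref   # enhancement unavailable
        ref = pred + res
        out.append(ref)
    return out

signal = [10, 14, 17, 21, 24, 28, 31, 35]
for alpha in (0.0, 0.5, 1.0):
    residuals, base_recon = encode(signal, alpha)
    drifted = decode_without_enhancement(residuals, alpha)
    drift = max(abs(a - b) for a, b in zip(base_recon, drifted))
    energy = sum(r * r for r in residuals)
    print(f"alpha={alpha:.1f}: residual energy={energy:.0f}, max drift={drift:.1f}")
```

Higher alpha lowers the residual energy (better compression) at the cost of larger drift when the enhancement layer is lost, which is the trade-off the patent manages.
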
  • Publication number: 20050119877
    Abstract: According to MPEG-4's TTS architecture, facial animation can be driven by two streams simultaneously: text and Facial Animation Parameters. In this architecture, text input is sent to a Text-to-Speech converter at a decoder that drives the mouth shapes of the face. Facial Animation Parameters are sent from an encoder to the face over the communication channel. The present invention includes codes (known as bookmarks) in the text string transmitted to the Text-to-Speech converter, which bookmarks are placed between words as well as inside them. According to the present invention, the bookmarks carry an encoder time stamp. Due to the nature of text-to-speech conversion, the encoder time stamp does not relate to real-world time, and should be interpreted as a counter. In addition, the Facial Animation Parameter stream carries the same encoder time stamp found in the bookmark of the text.
    Type: Application
    Filed: January 7, 2005
    Publication date: June 2, 2005
    Applicant: AT&T Corp.
    Inventors: Andrea Basso, Mark Beutnagel, Joern Ostermann
  • Patent number: 6862569
    Abstract: According to MPEG-4's TTS architecture, facial animation can be driven by two streams simultaneously: text and Facial Animation Parameters. In this architecture, text input is sent to a Text-to-Speech converter at a decoder that drives the mouth shapes of the face. Facial Animation Parameters are sent from an encoder to the face over the communication channel. The present invention includes codes (known as bookmarks) in the text string transmitted to the Text-to-Speech converter, which bookmarks are placed between words as well as inside them. According to the present invention, the bookmarks carry an encoder time stamp. Due to the nature of text-to-speech conversion, the encoder time stamp does not relate to real-world time, and should be interpreted as a counter. In addition, the Facial Animation Parameter stream carries the same encoder time stamp found in the bookmark of the text.
    Type: Grant
    Filed: January 23, 2003
    Date of Patent: March 1, 2005
    Assignee: AT&T Corp.
    Inventors: Andrea Basso, Mark Charles Beutnagel, Joern Ostermann
  • Publication number: 20040167916
    Abstract: File format systems and methods are disclosed that provide a framework that integrates concepts, such as object-based audio-visual representation, metadata, and object-oriented programming, to achieve a flexible and generic representation of the audiovisual information and the associated methods to operate on the audiovisual information. A system and method are disclosed for storing data processed from presentation data. The data is stored according to a method comprising coding input presentation data by identifying objects from within the presentation data, coding each object individually, and organizing the coded data into access layer data units. The access layer data units are stored throughout a plurality of segments, each segment comprising a segment table in a header portion thereof and those access layer data units that are members of the respective segment, there being one entry in the segment table for each access layer data unit therein.
    Type: Application
    Filed: February 24, 2004
    Publication date: August 26, 2004
    Applicant: AT&T Corp.
    Inventors: Andrea Basso, Alexandros Eleftheriadis, Hari Kalva, Atul Puri, Robert Lewis Schmidt
  • Patent number: 6751623
    Abstract: A fundamental limitation in the exchange of audiovisual information today is that its representation is extremely low level. It is composed of coded video or audio samples (often as blocks) arranged in a commercial format. In contrast, new-generation multimedia requires flexible formats to allow quick adaptation to requirements in terms of access, bandwidth scalability, streaming, and general data reorganization. The Flexible-Integrated Intermedia Format (Flexible-IIF or F-IIF) is an advanced extension to the Integrated Intermedia Format (IIF). The Flexible-IIF data structures, file format systems, and methods provide a framework that integrates advanced concepts, such as object-based audio-visual representation, metadata, and object-oriented programming, to achieve a flexible and generic representation of the audiovisual information and the associated methods to operate on the audiovisual information.
    Type: Grant
    Filed: January 26, 1999
    Date of Patent: June 15, 2004
    Assignees: AT&T Corp., The Trustees of Columbia University
    Inventors: Andrea Basso, Alexandros Eleftheriadis, Hari Kalva, Atul Puri, Robert Lewis Schmidt
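
For illustration, a small object-oriented sketch of the idea behind Flexible-IIF as summarized above: audiovisual content as objects that carry their own metadata and operations rather than flat blocks of coded samples. The class and field names are assumptions, not the F-IIF specification:

```python
from dataclasses import dataclass, field

@dataclass
class AVObject:
    """One audiovisual object with its own coded data, metadata, and methods."""
    object_id: int
    media_type: str                                    # e.g. "video", "audio"
    coded_units: list = field(default_factory=list)    # access-layer payloads
    metadata: dict = field(default_factory=dict)

    def adapt_to_bandwidth(self, kbps):
        """Placeholder for per-object operations (scaling, reordering, ...)."""
        self.metadata["target_kbps"] = kbps
        return self

scene = [
    AVObject(1, "video", [b"au0", b"au1"], {"lang": "en"}),
    AVObject(2, "audio", [b"au0"], {"lang": "en"}),
]
for obj in scene:
    obj.adapt_to_bandwidth(512)      # operate on objects, not raw sample blocks
print([(o.object_id, o.metadata) for o in scene])
```
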
  • Patent number: 6602299
    Abstract: A flexible framework for synchronization of multimedia streams synchronizes the incoming streams through the collaboration of a transmitter-driven and a local inter-media synchronization module. Whenever the first is not enough to ensure reliable synchronization, or cannot assure synchronization because the encoder does not know the exact timing of the decoder, the second comes into play. Normally, the transmitter-driven module uses the stream time stamps if their drift is acceptable. If the drift is too high, the system activates an internal inter-media synchronization mode while the transmitter-driven module extracts the coarsest inter-media synchronization and/or the structural information present in the streams. The internal clock of the receiver is used as the absolute time reference. Whenever the drift value stabilizes to acceptable values, the system switches back smoothly to the external synchronization mode.
    Type: Grant
    Filed: November 13, 2000
    Date of Patent: August 5, 2003
    Assignee: AT&T Corp.
    Inventors: Andrea Basso, Joern Ostermann
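
For illustration, a sketch of the two-mode behavior this abstract describes: follow the transmitter's stream time stamps while their drift against the receiver clock stays acceptable, fall back to local inter-media synchronization when it does not, and switch back smoothly once the drift settles. The thresholds and drift smoothing are assumptions:

```python
# Hypothetical two-mode synchronization controller; thresholds, the smoothing
# factor, and the sample data are invented for the example.

DRIFT_LIMIT = 0.040   # seconds; assumed tolerance

def choose_mode(stream_ts, receiver_clock, state):
    """Update the drift estimate and pick the sync mode for this sample."""
    drift = abs(stream_ts - receiver_clock)
    # Smooth the drift so a single outlier does not flip modes.
    state["drift"] = 0.5 * state["drift"] + 0.5 * drift
    if state["mode"] == "transmitter" and state["drift"] > DRIFT_LIMIT:
        state["mode"] = "local"        # internal inter-media sync takes over
    elif state["mode"] == "local" and state["drift"] < DRIFT_LIMIT / 2:
        state["mode"] = "transmitter"  # smooth switch back to external sync
    return state["mode"]

state = {"mode": "transmitter", "drift": 0.0}
samples = [(0.00, 0.00), (0.10, 0.11), (0.20, 0.27), (0.30, 0.38),
           (0.40, 0.41), (0.50, 0.505), (0.60, 0.60)]
for ts, clk in samples:
    print(f"ts={ts:.2f} clock={clk:.2f} -> {choose_mode(ts, clk, state)}")
```
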
  • Patent number: 6567779
    Abstract: According to MPEG-4's TTS architecture, facial animation can be driven by two streams simultaneously: text and Facial Animation Parameters. In this architecture, text input is sent to a Text-to-Speech converter at a decoder that drives the mouth shapes of the face. Facial Animation Parameters are sent from an encoder to the face over the communication channel. The present invention includes codes (known as bookmarks) in the text string transmitted to the Text-to-Speech converter, which bookmarks are placed between words as well as inside them. According to the present invention, the bookmarks carry an encoder time stamp. Due to the nature of text-to-speech conversion, the encoder time stamp does not relate to real-world time, and should be interpreted as a counter. In addition, the Facial Animation Parameter stream carries the same encoder time stamp found in the bookmark of the text.
    Type: Grant
    Filed: August 5, 1997
    Date of Patent: May 20, 2003
    Assignee: AT&T Corp.
    Inventors: Andrea Basso, Mark Charles Beutnagel, Joern Ostermann
  • Publication number: 20030012679
    Abstract: Gold alloy comprising, by weight, at least gold Au ≥ 33%, iridium Ir ≤ 0.4%, germanium Ge ≤ 2%, silicon Si between 0.015% and 0.3%, phosphorus P ≤ 0.02%, and copper Cu ≤ 66%. The alloy can also comprise, in percentage by weight, silver Ag ≤ 34%, nickel Ni ≤ 20%, and zinc Zn ≤ 25%. In some variations the gold alloy can further comprise no more than 4% of at least one element from the group consisting of cobalt, manganese, tin, and indium, and no more than 0.15% of at least one deoxidizing element from the group consisting of magnesium, silicon, boron, and lithium. At least one refining element from the group consisting of ruthenium, rhenium, and platinum can also be added in quantities not exceeding 0.4% by weight. The invention further relates to a master alloy for obtaining said gold alloy.
    Type: Application
    Filed: May 29, 2002
    Publication date: January 16, 2003
    Applicant: LEG.OR S.R.L.
    Inventors: Massimo Poliero, Andrea Basso