Patents by Inventor Eric Cosatto

Eric Cosatto has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8135652
    Abstract: Disclosed is an improved technique for training a support vector machine using a distributed architecture. A training data set is divided into subsets, and the subsets are optimized in a first level of optimizations, with each optimization generating a support vector set. The support vector sets output from the first-level optimizations are then combined and used as input to a second level of optimizations. This hierarchical processing continues for multiple levels, with the output of each prior level being fed into the next level of optimizations. To guarantee a globally optimal solution, the final set of support vectors from the last level of optimization processing may be fed back into the first level of the optimization cascade so that the results are processed along with each of the training data subsets (an illustrative sketch follows this entry).
    Type: Grant
    Filed: April 28, 2008
    Date of Patent: March 13, 2012
    Assignee: NEC Laboratories America, Inc.
    Inventors: Hans Peter Graf, Eric Cosatto, Leon Bottou, Vladimir N. Vapnik
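
By way of illustration only, here is a minimal sketch of the cascade idea, assuming scikit-learn's SVC as the per-subset optimizer and assuming every subset contains both classes; it is not the patented implementation, and the feedback pass mentioned in the abstract is noted but omitted.

```python
# Sketch of a cascade SVM: subsets are optimized independently, the
# resulting support-vector sets are merged pairwise and re-optimized,
# and the surviving support vectors flow to the next level.
import numpy as np
from sklearn.svm import SVC

def support_vectors(X, y):
    """Train a binary SVM and return its support vectors with labels."""
    clf = SVC(kernel="rbf", C=1.0).fit(X, y)
    return X[clf.support_], y[clf.support_]

def cascade_svm(X, y, n_subsets=4):
    # First level: optimize each training-data subset independently.
    sets = [support_vectors(Xs, ys)
            for Xs, ys in zip(np.array_split(X, n_subsets),
                              np.array_split(y, n_subsets))]
    # Later levels: merge pairs of support-vector sets and re-optimize.
    while len(sets) > 1:
        merged = []
        for i in range(0, len(sets), 2):
            pair = sets[i:i + 2]
            Xm = np.vstack([p[0] for p in pair])
            ym = np.concatenate([p[1] for p in pair])
            merged.append(support_vectors(Xm, ym))
        sets = merged
    # The abstract's convergence check (feeding the final support
    # vectors back into the first level) is omitted here.
    return sets[0]
```
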
  • Patent number: 8131551
    Abstract: A system and method of controlling the movement of a virtual agent while the agent is speaking to a human user during a conversation are disclosed. The method comprises receiving speech data to be spoken by the virtual agent, performing a prosodic analysis of the speech data, selecting matching prosody patterns from a speaking database, and controlling the virtual agent's movement according to the selected prosody patterns (an illustrative sketch follows this entry).
    Type: Grant
    Filed: July 18, 2006
    Date of Patent: March 6, 2012
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Eric Cosatto, Hans Peter Graf, Volker Franz Strom
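
Purely as a toy illustration of this flow (the patent's prosodic features and pattern database are not described in the abstract), a crude sketch:

```python
# Toy sketch: a crude prosodic cue of the outgoing text selects a
# head-movement pattern from a small stand-in "speaking database".
# Both the feature and the patterns are invented placeholders.
def prosodic_feature(text):
    if text.rstrip().endswith("?"):
        return "rising"       # questions tend to end with rising pitch
    if text.rstrip().endswith("!"):
        return "emphatic"
    return "flat"

SPEAKING_DB = {                # prosody class -> movement pattern name
    "rising": "tilt-head-raise-eyebrows",
    "emphatic": "strong-nod",
    "flat": "gentle-sway",
}

def agent_movement(text):
    return SPEAKING_DB[prosodic_feature(text)]

print(agent_movement("Can I help you with anything else?"))
```
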
  • Publication number: 20120033743
    Abstract: The invention provides a system and method that transforms a set of still/motion media (i.e., a series of related or unrelated still frames, web pages rendered as images, or video clips) or other multimedia into a video stream that is suitable for delivery over a display medium, such as TV, cable TV, computer displays, wireless display devices, etc. The video data stream may be presented and displayed in real time or stored and later presented through a set-top box, for example. Because these media are transformed into coded video streams (e.g., MPEG-2, MPEG-4, etc.), a user can watch them on a display screen without the need to connect to the Internet through a service provider. The user may request and interact with the desired media through a simple telephone interface, for example. Moreover, several wireless and cable-based services can be developed on top of this system.
    Type: Application
    Filed: August 8, 2011
    Publication date: February 9, 2012
    Applicant: AT&T Intellectual Property II, L.P.
    Inventors: Andrea Basso, Eric Cosatto, Steven Lloyd Greenspan, David M. Weimer
  • Patent number: 8086464
    Abstract: Methods and apparatus for rendering a talking head on a client device are disclosed. The client device has a client cache capable of storing audio/visual data associated with rendering the talking head. The method comprises storing, in the client cache of a client device, sentences that relate to bridging delays in a dialog, storing sentence templates to be used in dialogs, generating a talking head response to a user inquiry from the client device, and determining whether the stored sentences or templates in the client cache relate to the talking head response. If they do, the method comprises instructing the client device to use the appropriate stored sentence or template from the client cache to render at least a part of the talking head response, and transmitting any portion of the talking head response not stored in the client cache to the client device to render a complete talking head response (an illustrative sketch follows this entry).
    Type: Grant
    Filed: November 30, 2009
    Date of Patent: December 27, 2011
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Eric Cosatto, Hans Peter Graf, Joern Ostermann
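
As a rough sketch of the cache decision described above (the cache keys and data layout are assumptions, not the patent's):

```python
# Split a talking-head response into parts the client can render from
# its cache (e.g. bridging phrases) and parts the server must stream.
def split_response(response_sentences, client_cache):
    cached, to_transmit = [], []
    for sentence in response_sentences:
        if sentence in client_cache:
            cached.append(sentence)       # render locally from the cache
        else:
            to_transmit.append(sentence)  # server streams the A/V data
    return cached, to_transmit

# Bridging phrases stored client-side hide round-trip delays.
cache = {"One moment, please.", "Let me check that for you."}
local, streamed = split_response(
    ["One moment, please.", "Your balance is 42 dollars."], cache)
```
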
  • Patent number: 8086066
    Abstract: A method of improving the lighting conditions of a real scene or video sequence. Digitally generated light is added to a scene for video conferencing over telecommunication networks. A virtual illumination equation takes into account light attenuation and Lambertian and specular reflection. An image of an object is captured, and a virtual light source illuminates the object within the image. The object can be the head of the user. The position of the user's head is dynamically tracked so that a three-dimensional model representative of the head is generated. Synthetic light is applied to a position on the model to form an illuminated model (an illustrative sketch follows this entry).
    Type: Grant
    Filed: September 8, 2010
    Date of Patent: December 27, 2011
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Andrea Basso, Eric Cosatto, David Crawford Gibbon, Hans Peter Graf, Shan Liu
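
The abstract names an illumination equation with attenuation, Lambertian, and specular terms but does not state it; a standard Phong-style equation is a plausible stand-in and is sketched below under that assumption.

```python
# Phong-style stand-in for the virtual illumination equation: the added
# synthetic light is intensity * attenuation * (diffuse + specular).
import numpy as np

def virtual_light(point, normal, view_dir, light_pos, intensity,
                  kd=0.7, ks=0.3, shininess=32.0):
    """point/normal: surface sample; view_dir: unit vector to camera."""
    to_light = light_pos - point
    dist = np.linalg.norm(to_light)
    l = to_light / dist
    n = normal / np.linalg.norm(normal)
    attenuation = 1.0 / (1.0 + dist * dist)     # inverse-square falloff
    lambertian = max(float(np.dot(n, l)), 0.0)  # diffuse (Lambertian)
    r = 2.0 * np.dot(n, l) * n - l              # l mirrored about n
    specular = max(float(np.dot(r, view_dir)), 0.0) ** shininess
    return intensity * attenuation * (kd * lambertian + ks * specular)
```
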
  • Patent number: 8078466
    Abstract: A method for generating animated sequences of talking heads in text-to-speech applications wherein a processor samples a plurality of frames comprising image samples. The processor reads first data comprising one or more parameters associated with noise-producing orifice images of sequences of at least three concatenated phonemes which correspond to an input stimulus. The processor reads, based on the first data, second data comprising images of a noise-producing entity. The processor generates an animated sequence of the noise-producing entity.
    Type: Grant
    Filed: November 30, 2009
    Date of Patent: December 13, 2011
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Eric Cosatto, Hans Peter Graf, Juergen Schroeter
  • Publication number: 20110295520
    Abstract: A method for measuring structural entropy of cell nuclei in a histological micrograph of a biopsy tissue sample involves the steps of: obtaining a dye color map from a color image of the biopsy tissue sample; locating cell nuclei in the dye color map; and measuring structural features within small groups (cliques or paths) of cell nuclei to determine their degree of organization (or structural entropy). Also, an apparatus for measuring structural entropy of cell nuclei in a histological micrograph of a biopsy tissue sample includes a processor executing instructions for: obtaining a dye color map of the biopsy tissue sample; locating cell nuclei in the dye color map; and measuring structural features within small groups (cliques or paths) of cell nuclei to determine their degree of organization (or structural entropy). An illustrative sketch follows this entry.
    Type: Application
    Filed: May 25, 2011
    Publication date: December 1, 2011
    Applicant: NEC LABORATORIES AMERICA, INC.
    Inventors: Eric Cosatto, Christopher D. Malon, Hans Peter Graf
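
The abstract does not define the entropy measure, so the sketch below freely substitutes a simple one: the Shannon entropy of nearest-neighbour distances among detected nucleus centers.

```python
# Freely interpreted stand-in for "structural entropy": histogram the
# k-nearest-neighbour distances between nucleus centers and take the
# Shannon entropy; regular (organized) spacing scores lower.
import numpy as np
from scipy.spatial import cKDTree

def structural_entropy(nuclei_xy, k=3, bins=16):
    """nuclei_xy: (n, 2) array of detected nucleus centers."""
    tree = cKDTree(nuclei_xy)
    dists, _ = tree.query(nuclei_xy, k=k + 1)  # column 0 is the point itself
    hist, _ = np.histogram(dists[:, 1:], bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())       # entropy in nats
```
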
  • Publication number: 20110293165
    Abstract: A method for training a classifier to be operative as an epithelial texture classifier, includes obtaining a plurality of training micrograph areas of biopsy tissue and for each of the training micrograph areas, identifying probable locations of nuclei that form epithelia, generating a skeleton graph from the probable locations of the nuclei that form the epithelia, manually drawing walls on the skeleton graph outside of the epithelia to divide the epithelia from one another, and manually selecting points that lie entirely inside the epithelia to generate open and/or closed geodesic paths in the skeleton graph between pairs of the selected points. Data is obtained from points selected from the walls and the paths and applied to a classifier to train the classifier as the epithelial texture classifier.
    Type: Application
    Filed: May 11, 2011
    Publication date: December 1, 2011
    Applicants: NEC CORPORATION, NEC LABORATORIES AMERICA, INC.
    Inventors: Christopher D. Malon, Atsushi Marugame, Eric Cosatto
  • Patent number: 7996878
    Abstract: The invention provides a system and method that transforms a set of still/motion media (i.e., a series of related or unrelated still frames, web pages rendered as images, or video clips) or other multimedia into a video stream that is suitable for delivery over a display medium, such as TV, cable TV, computer displays, wireless display devices, etc. The video data stream may be presented and displayed in real time or stored and later presented through a set-top box, for example. Because these media are transformed into coded video streams (e.g., MPEG-2, MPEG-4, etc.), a user can watch them on a display screen without the need to connect to the Internet through a service provider. The user may request and interact with the desired media through a simple telephone interface, for example. Moreover, several wireless and cable-based services can be developed on top of this system.
    Type: Grant
    Filed: August 29, 2000
    Date of Patent: August 9, 2011
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Andrea Basso, Eric Cosatto, Steven Lloyd Greenspan, David M. Weimer
  • Patent number: 7990384
    Abstract: A system and method for generating photo-realistic talking-head animation from a text input utilizes an audio-visual unit selection process. The lip-synchronization is obtained by optimally selecting and concatenating variable-length video units of the mouth area. The unit selection process utilizes the acoustic data to determine the target costs for the candidate images and utilizes the visual data to determine the concatenation costs. The image database is prepared in a hierarchical fashion, including high-level features (such as a full 3D modeling of the head, geometric size and position of elements) and pixel-based, low-level features (such as a PCA-based metric for labeling the various feature bitmaps).
    Type: Grant
    Filed: September 15, 2003
    Date of Patent: August 2, 2011
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Eric Cosatto, Hans Peter Graf, Gerasimos Potamianos, Juergen Schroeter
  • Patent number: 7933772
    Abstract: A system and method for generating a video sequence having mouth movements synchronized with speech sounds are disclosed. The system utilizes a database of n-phones as the smallest selectable unit, wherein n is larger than 1 and preferably 3. The system calculates a target cost for each candidate n-phone for a target frame using a phonetic distance, a coarticulation parameter, and the speech rate. For each n-phone in a target sequence, the system searches for candidate n-phones that are visually similar according to the target cost. The system samples each candidate n-phone to obtain the same number of frames as in the target sequence and builds a video frame lattice of candidate video frames. The system assigns a joint cost to each pair of adjacent frames and searches the video frame lattice to construct the video sequence by finding the optimal path through the lattice according to the minimum of the sum of the target cost and the joint cost over the sequence (an illustrative sketch follows this entry).
    Type: Grant
    Filed: March 19, 2008
    Date of Patent: April 26, 2011
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Eric Cosatto, Hans Peter Graf, Fu Jie Huang
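
A minimal dynamic-programming sketch of the lattice search described above; the cost functions are placeholders to be supplied by the caller, not the patent's definitions.

```python
# Dynamic-programming search over the video frame lattice: pick one
# candidate per target frame so that the summed target and joint costs
# are minimal (a Viterbi-style optimal path).
def best_frame_path(lattice, target_cost, joint_cost):
    """lattice: one list of candidate frames per target frame."""
    cost = [[target_cost(0, c) for c in lattice[0]]]   # cost[i][k]
    back = [[None] * len(lattice[0])]                  # best predecessor
    for i in range(1, len(lattice)):
        row, brow = [], []
        for cand in lattice[i]:
            prev = range(len(lattice[i - 1]))
            best_j = min(prev, key=lambda j: cost[i - 1][j]
                         + joint_cost(lattice[i - 1][j], cand))
            row.append(cost[i - 1][best_j]
                       + joint_cost(lattice[i - 1][best_j], cand)
                       + target_cost(i, cand))
            brow.append(best_j)
        cost.append(row)
        back.append(brow)
    # Trace back from the cheapest final candidate.
    k = min(range(len(lattice[-1])), key=lambda j: cost[-1][j])
    path = [k]
    for i in range(len(lattice) - 1, 0, -1):
        k = back[i][k]
        path.append(k)
    return list(reversed(path))   # chosen candidate index per frame
```

The function returns, for each target frame, the index of the selected candidate frame.
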
  • Patent number: 7921013
    Abstract: A system and method of providing sender customization of multimedia messages through the use of emoticons is disclosed. The sender inserts the emoticons into a text message. As an animated face audibly delivers the text, each emoticon associated with the message starts a predetermined period of time or number of words before the emoticon's position in the message text and completes a predetermined period of time or number of words after it. The sender may insert emoticons through emoticon buttons, which are icons available for selection. Upon sender selection of an emoticon, an icon representing the emoticon is inserted into the text at the position of the cursor. Once an emoticon is chosen, the sender may also choose its amplitude, and the increased or decreased amplitude will be displayed in the icon inserted into the message text (an illustrative sketch follows this entry).
    Type: Grant
    Filed: August 30, 2005
    Date of Patent: April 5, 2011
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Joern Ostermann, Mehmet Reha Civanlar, Eric Cosatto, Hans Peter Graf, Yann Andre LeCun
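
The lead/lag timing rule lends itself to a one-function sketch; the offsets are assumed values, and only the word-count variant is shown.

```python
# Map each emoticon to the span of words over which its animation runs:
# it starts `lead` words before the emoticon's position and ends `lag`
# words after it, clamped to the message boundaries.
def emoticon_spans(words, emoticon_positions, lead=2, lag=2):
    return [(max(0, pos - lead), min(len(words) - 1, pos + lag))
            for pos in emoticon_positions]

words = "I am so happy to see you".split()
print(emoticon_spans(words, [3]))   # smiley at word 3 -> animates (1, 5)
```
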
  • Publication number: 20110002508
    Abstract: A method of improving the lighting conditions of a real scene or video sequence. Digitally generated light is added to a scene for video conferencing over telecommunication networks. A virtual illumination equation takes into account light attenuation and Lambertian and specular reflection. An image of an object is captured, and a virtual light source illuminates the object within the image. The object can be the head of the user. The position of the user's head is dynamically tracked so that a three-dimensional model representative of the head is generated. Synthetic light is applied to a position on the model to form an illuminated model.
    Type: Application
    Filed: September 8, 2010
    Publication date: January 6, 2011
    Applicant: AT&T Intellectual Property II, L.P.
    Inventors: Andrea Basso, Eric Cosatto, David Crawford Gibbon, Hans Peter Graf, Shan Liu
  • Patent number: 7844467
    Abstract: A system and method of controlling the movement of a virtual agent while the agent is listening to a human user during a conversation are disclosed. The method comprises receiving speech data from the user, performing a prosodic analysis of the speech data, and controlling the virtual agent's movement according to the prosodic analysis.
    Type: Grant
    Filed: January 25, 2008
    Date of Patent: November 30, 2010
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Eric Cosatto, Hans Peter Graf, Thomas M. Isaacson, Volker Franz Strom
  • Patent number: 7805017
    Abstract: A method of improving the lighting conditions of a real scene or video sequence. Digitally generated light is added to a scene for video conferencing over telecommunication networks. A virtual illumination equation takes into account light attenuation and Lambertian and specular reflection. An image of an object is captured, and a virtual light source illuminates the object within the image. The object can be the head of the user. The position of the user's head is dynamically tracked so that a three-dimensional model representative of the head is generated. Synthetic light is applied to a position on the model to form an illuminated model.
    Type: Grant
    Filed: May 8, 2007
    Date of Patent: September 28, 2010
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Andrea Basso, Eric Cosatto, David Crawford Gibbon, Hans Peter Graf, Shan Liu
  • Publication number: 20100172568
    Abstract: A detector and method for automatically detecting signet ring cells in an image of a biopsy tissue sample include: finding, in the image, points about which cell membranes appear in radial symmetry; selecting as candidate points at least those of the points that have an adjacent nucleus with a predetermined shape feature; and applying a convolutional neural network to the candidate points to determine which of the candidate points are signet ring cells.
    Type: Application
    Filed: July 2, 2009
    Publication date: July 8, 2010
    Applicant: NEC LABORATORIES AMERICA, INC.
    Inventors: Christopher Malon, Matthew L. Miller, Eric Cosatto
  • Publication number: 20100076750
    Abstract: Methods and apparatus for rendering a talking head on a client device are disclosed. The client device has a client cache capable of storing audio/visual data associated with rendering the talking head. The method comprises storing, in the client cache of a client device, sentences that relate to bridging delays in a dialog, storing sentence templates to be used in dialogs, generating a talking head response to a user inquiry from the client device, and determining whether the stored sentences or templates in the client cache relate to the talking head response. If they do, the method comprises instructing the client device to use the appropriate stored sentence or template from the client cache to render at least a part of the talking head response, and transmitting any portion of the talking head response not stored in the client cache to the client device to render a complete talking head response.
    Type: Application
    Filed: November 30, 2009
    Publication date: March 25, 2010
    Applicant: AT&T Corp.
    Inventors: Eric Cosatto, Hans Peter Graf, Joern Ostermann
  • Publication number: 20100076762
    Abstract: A method for generating animated sequences of talking heads in text-to-speech applications wherein a processor samples a plurality of frames comprising image samples. The processor reads first data comprising one or more parameters associated with noise-producing orifice images of sequences of at least three concatenated phonemes which correspond to an input stimulus. The processor reads, based on the first data, second data comprising images of a noise-producing entity. The processor generates an animated sequence of the noise-producing entity.
    Type: Application
    Filed: November 30, 2009
    Publication date: March 25, 2010
    Applicant: AT&T Corp.
    Inventors: Eric Cosatto, Hans Peter Graf, Juergen Schroeter
  • Publication number: 20100002920
    Abstract: A method and system for detecting and counting mitotic figures in an image of a biopsy sample stained with at least one dye include: color-filtering the image in a computer process to identify pixels in the image that have a color indicative of a mitotic figure; extracting the mitotic pixels in the image that are connected to one another in a computer process, thereby producing blobs of mitotic pixels; shape-filtering and clustering the blobs of mitotic pixels in a computer process to produce mitotic figure candidates; extracting sub-images of mitotic figures by cropping the biopsy sample image at the locations of the blobs; extracting two sets of features from the mitotic figure candidates in two separate computer processes; determining which of the mitotic figure candidates are mitotic figures in a computer classification process based on the extracted sets of features; and counting the number of mitotic figures per square unit of biopsy sample tissue (an illustrative sketch follows this entry).
    Type: Application
    Filed: July 2, 2009
    Publication date: January 7, 2010
    Applicant: NEC Laboratories America, Inc.
    Inventors: Eric Cosatto, Harold Christopher Burger, Matthew L. Miller
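
A loose end-to-end sketch of this pipeline follows; the color threshold, size limits, and classifier are stand-ins, not the patent's values.

```python
# Counting pipeline: color filter -> connected blobs -> shape filter ->
# crop -> classify -> density per square unit of tissue.
import numpy as np
from scipy import ndimage

def count_mitoses(rgb, area_mm2, classify, min_px=20, max_px=400):
    """rgb: HxWx3 uint8 image; classify: trained candidate classifier."""
    # 1. Color filter: dark, basophilic pixels as candidate mitotic
    #    pixels (the threshold is a stand-in, not the patent's filter).
    mask = rgb.sum(axis=2) < 150
    # 2. Group connected mitotic pixels into blobs.
    labels, n_blobs = ndimage.label(mask)
    count = 0
    for blob_id in range(1, n_blobs + 1):
        blob = labels == blob_id
        # 3. Shape filter (here: area only) before the costly classifier.
        if not (min_px <= blob.sum() <= max_px):
            continue
        # 4. Crop a sub-image around the blob and classify it.
        ys, xs = np.nonzero(blob)
        crop = rgb[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
        if classify(crop):
            count += 1
    # 5. Density per square unit of tissue.
    return count / area_mm2
```
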
  • Publication number: 20090304268
    Abstract: A method and system for training an apparatus to recognize a pattern include: providing the apparatus with a host processor executing steps of a machine learning process; providing the apparatus with an accelerator including at least two processors; inputting training pattern data into the host processor; determining coefficient changes in the machine learning process with the host processor using the training pattern data; transferring the training pattern data to the accelerator; determining kernel dot-products with the at least two processors of the accelerator using the training data; and transferring the dot-products back to the host processor (an illustrative sketch follows this entry).
    Type: Application
    Filed: June 4, 2009
    Publication date: December 10, 2009
    Applicant: NEC LABORATORIES AMERICA, INC.
    Inventors: Srihari Cadambi, Igor Durdanovic, Venkata Jakkula, Eric Cosatto, Murugan Sankaradass, Hans Peter Graf, Srimat T. Chakradhar
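
Schematically, the division of labor looks like the sketch below, with NumPy standing in for the accelerator and a kernel-perceptron-style update standing in for the machine learning process (both are assumptions; labels are taken to be +/-1).

```python
# Schematic split: the host runs the learning loop; the accelerator
# computes the kernel dot-products in bulk.
import numpy as np

def accelerator_dot_products(X, batch):
    """Accelerator stand-in: all pairwise dot-products at once."""
    return X @ batch.T                             # (n_samples, n_batch)

def host_training_step(X, y, alphas, batch_idx, lr=0.1):
    """Host side: update coefficients from the returned dot-products."""
    K = accelerator_dot_products(X, X[batch_idx])  # "transfer" + compute
    scores = (alphas * y) @ K                      # kernel decision values
    mistakes = np.sign(scores) != y[batch_idx]
    alphas = alphas.copy()
    alphas[np.asarray(batch_idx)[mistakes]] += lr  # perceptron-style update
    return alphas
```
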