Patents by Inventor Juergen Schroeter
Juergen Schroeter has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20140101689
Abstract: A system that incorporates teachings of the present disclosure may include, for example, a processor that causes an STB to present an avatar. The processor can receive from the STB a response of the user, detect from the response a change in an emotional state of the user, adapt a search for media content according to the change in the emotional state of the user, and adapt a portion of the characteristics of the avatar relating to emotional feedback according to the change in the emotional state of the user. The processor can cause the STB to present the adapted avatar presenting content from a media content source identified from the adapted search for media content. Other embodiments are disclosed.
Type: Application
Filed: December 11, 2013
Publication date: April 10, 2014
Applicant: AT&T Intellectual Property I, LP
Inventors: Linda Roberts, Horst Juergen Schroeter, E-Lee Chang, Darnell Clayton, Madhur Khandelwal
-
Publication number: 20140098240
Abstract: A system that incorporates teachings of the subject disclosure may include, for example, a method for controlling a steering of a plurality of cameras to identify a plurality of potential sources, identifying the plurality of potential sources according to image data provided by the plurality of cameras, assigning a beam of a plurality of beams of a plurality of microphones to each of the plurality of potential sources, detecting a first command comprising one of a first audible cue based on signals from a portion of the plurality of microphones, a first visual cue based on image data from one of the plurality of cameras, or both for controlling a media center, and configuring the media center according to the first command. Other embodiments are disclosed.
Type: Application
Filed: October 9, 2012
Publication date: April 10, 2014
Applicant: AT&T Intellectual Property I, LP
Inventors: Dimitrios Dimitriadis, Horst Juergen Schroeter
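The abstract above pairs camera-detected source directions with steerable microphone beams. The patent does not specify an assignment algorithm; the following is a hypothetical greedy sketch (all names, and the use of one-dimensional steering angles, are illustrative assumptions):

```python
def assign_beams(source_angles, beam_angles):
    """Greedily pair each detected source direction with the closest
    still-unassigned microphone beam (angles in degrees)."""
    assignments = {}
    free_beams = list(beam_angles)
    for src in source_angles:
        # pick the free beam whose steering angle is nearest this source
        best = min(free_beams, key=lambda b: abs(b - src))
        assignments[src] = best
        free_beams.remove(best)
    return assignments
```

A real system would steer beams in two or three dimensions and re-run the assignment as sources move; the greedy nearest-angle rule here is only a stand-in for that tracking loop.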
-
Patent number: 8683354
Abstract: A system that incorporates teachings of the present disclosure may include, for example, a first computing device having a controller to present an avatar having characteristics that correlate to a user profile and that conform to operating characteristics of the first computing device, and transmit to a second computing device operational information associated with the avatar for reproducing at least in part the avatar at said second computing device. Other embodiments are disclosed.
Type: Grant
Filed: October 16, 2008
Date of Patent: March 25, 2014
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Madhur Khandelwal, E-Lee Chang, Horst Juergen Schroeter, Linda Roberts, Darnell Clayton
-
Patent number: 8666746
Abstract: A system and method are disclosed for generating customized text-to-speech voices for a particular application. The method comprises generating a custom text-to-speech voice by selecting a voice for generating a custom text-to-speech voice associated with a domain, collecting text data associated with the domain from a pre-existing text data source and, using the collected text data, generating an in-domain inventory of synthesis speech units by selecting speech units appropriate to the domain via a search of a pre-existing inventory of synthesis speech units, or by recording the minimal inventory for a selected level of synthesis quality. The text-to-speech custom voice for the domain is generated utilizing the in-domain inventory of synthesis speech units. Active learning techniques may also be employed to identify problem phrases, wherein only a few minutes of recorded data are necessary to deliver a high-quality custom TTS voice.
Type: Grant
Filed: May 13, 2004
Date of Patent: March 4, 2014
Assignee: AT&T Intellectual Property II, L.P.
Inventors: Srinivas Bangalore, Junlan Feng, Mazin G. Rahim, Juergen Schroeter, David Eugene Schulz, Ann K. Syrdal
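The core step in the abstract above is filtering a pre-existing inventory of synthesis units down to those needed for a domain's text. A minimal sketch, assuming a caller-supplied `phonemize` function and a list-of-dicts inventory (both hypothetical; the patent does not prescribe a data layout):

```python
def build_domain_inventory(domain_texts, full_inventory, phonemize):
    """Collect the subset of a pre-existing synthesis-unit inventory
    that covers the phoneme labels occurring in domain text."""
    needed = set()
    for text in domain_texts:
        needed.update(phonemize(text))
    # keep only units whose phoneme label the domain actually uses
    return [u for u in full_inventory if u["phoneme"] in needed]
```

In practice the selection would weigh unit context and coverage statistics rather than bare label membership, and the "recording the minimal inventory" path in the abstract would fill whatever this filter leaves uncovered.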
-
Patent number: 8645122
Abstract: A voice-enabled help desk service is disclosed. The service comprises an automatic speech recognition module for recognizing speech from a user, a spoken language understanding module for understanding the output from the automatic speech recognition module, a dialog management module for generating a response to speech from the user, a natural voices text-to-speech synthesis module for synthesizing speech to generate the response to the user, and a frequently asked questions module. The frequently asked questions module handles frequently asked questions from the user by changing voices and providing predetermined prompts to answer frequently asked questions.
Type: Grant
Filed: December 19, 2002
Date of Patent: February 4, 2014
Assignee: AT&T Intellectual Property II, L.P.
Inventors: Giuseppe Di Fabbrizio, Dawn L. Dutton, Narendra K. Gupta, Barbara B. Hollister, Mazin G. Rahim, Giuseppe Riccardi, Robert Elias Schapire, Juergen Schroeter
-
Patent number: 8600757
Abstract: A system and method for providing a scalable spoken dialog system are disclosed. The method comprises receiving information which may be internal to the system or external to the system and dynamically modifying at least one module within a spoken dialog system according to the received information. The modules may be one or more of an automatic speech recognition, natural language understanding, dialog management and text-to-speech module or engine. Dynamically modifying the module may improve hardware performance or improve a specific caller's speech processing accuracy, for example. The modification of the modules or hardware may also be based on an application or a task, or based on a current portion of a dialog.
Type: Grant
Filed: November 30, 2012
Date of Patent: December 3, 2013
Assignee: AT&T Intellectual Property II, L.P.
Inventors: Mazin G. Rahim, Juergen Schroeter
-
Patent number: 8332226
Abstract: A system and method for providing a scalable spoken dialog system are disclosed. The method comprises receiving information which may be internal to the system or external to the system and dynamically modifying at least one module within a spoken dialog system according to the received information. The modules may be one or more of an automatic speech recognition, natural language understanding, dialog management and text-to-speech module or engine. Dynamically modifying the module may improve hardware performance or improve a specific caller's speech processing accuracy, for example. The modification of the modules or hardware may also be based on an application or a task, or based on a current portion of a dialog.
Type: Grant
Filed: January 7, 2005
Date of Patent: December 11, 2012
Assignee: AT&T Intellectual Property II, L.P.
Inventors: Mazin G. Rahim, Juergen Schroeter
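The abstract above describes swapping dialog-system modules at runtime in response to internal or external information. A toy sketch of that idea, with entirely hypothetical module names and trigger conditions (the patent names no specific thresholds or models):

```python
class SpokenDialogSystem:
    """Toy scalable dialog system: modules (ASR, NLU, DM, TTS) can be
    replaced at runtime according to received information."""

    def __init__(self, modules):
        self.modules = dict(modules)  # module name -> implementation id

    def modify(self, info):
        # internal information: fall back to a lighter ASR under load
        if info.get("cpu_load", 0.0) > 0.8:
            self.modules["asr"] = "asr-lite"
        # external information: adapt ASR for a known caller
        if "caller_profile" in info:
            self.modules["asr"] = "asr-adapted-" + info["caller_profile"]
        return self.modules
```

The same hook could just as well switch the TTS engine per task or per dialog state, which is the "based on an application or a task" clause of the abstract.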
-
Publication number: 20120136664
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for generating speech. One variation of the method is from a server side, and another variation of the method is from a client side. The server side method, as implemented by a network-based automatic speech processing system, includes first receiving, from a network client independent of knowledge of internal operations of the system, a request to generate a text-to-speech voice. The request can include speech samples, transcriptions of the speech samples, and metadata describing the speech samples. The system extracts sound units from the speech samples based on the transcriptions and generates an interactive demonstration of the text-to-speech voice based on the sound units, the transcriptions, and the metadata, wherein the interactive demonstration hides a back end processing implementation from the network client. The system provides access to the interactive demonstration to the network client.
Type: Application
Filed: November 30, 2010
Publication date: May 31, 2012
Applicant: AT&T Intellectual Property I, L.P.
Inventors: Mark Charles Beutnagel, Alistair D. Conkie, Yeon-Jun Kim, Horst Juergen Schroeter
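The client-side half of the abstract above amounts to packaging samples, transcriptions, and metadata into a request while knowing nothing about the back end. A minimal sketch of such a request payload; the field names and JSON wire format are assumptions, not taken from the patent:

```python
import json

def make_voice_request(samples, transcriptions, metadata):
    """Assemble a voice-building request carrying the three inputs the
    abstract names: speech samples, their transcriptions, and metadata
    describing the samples. No back-end details leak into the payload."""
    if len(samples) != len(transcriptions):
        raise ValueError("each speech sample needs a transcription")
    return json.dumps({
        "samples": samples,              # e.g. base64-encoded audio
        "transcriptions": transcriptions,
        "metadata": metadata,            # speaker, language, style, ...
    })
```

The server would answer with a URL to the interactive demonstration, keeping unit extraction and synthesis hidden behind that interface.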
-
Patent number: 8135591
Abstract: A method and system are disclosed that train a text-to-speech synthesis system for use in speech synthesis. The method includes generating a speech database of audio files comprising domain-specific voices having various prosodies, and training a text-to-speech synthesis system using the speech database by selecting audio segments having a prosody based on at least one dialog state. The system includes a processor, a speech database of audio files, and modules for implementing the method.
Type: Grant
Filed: August 13, 2009
Date of Patent: March 13, 2012
Assignee: AT&T Intellectual Property II, L.P.
Inventor: Horst Juergen Schroeter
-
Patent number: 8078466
Abstract: A method for generating animated sequences of talking heads in text-to-speech applications wherein a processor samples a plurality of frames comprising image samples. The processor reads first data comprising one or more parameters associated with noise-producing orifice images of sequences of at least three concatenated phonemes which correspond to an input stimulus. The processor reads, based on the first data, second data comprising images of a noise-producing entity. The processor generates an animated sequence of the noise-producing entity.
Type: Grant
Filed: November 30, 2009
Date of Patent: December 13, 2011
Assignee: AT&T Intellectual Property II, L.P.
Inventors: Eric Cosatto, Hans Peter Graf, Juergen Schroeter
-
Patent number: 7990384
Abstract: A system and method for generating photo-realistic talking-head animation from a text input utilizes an audio-visual unit selection process. The lip-synchronization is obtained by optimally selecting and concatenating variable-length video units of the mouth area. The unit selection process utilizes the acoustic data to determine the target costs for the candidate images and utilizes the visual data to determine the concatenation costs. The image database is prepared in a hierarchical fashion, including high-level features (such as a full 3D modeling of the head, geometric size and position of elements) and pixel-based, low-level features (such as a PCA-based metric for labeling the various feature bitmaps).
Type: Grant
Filed: September 15, 2003
Date of Patent: August 2, 2011
Assignee: AT&T Intellectual Property II, L.P.
Inventors: Eric Cosatto, Hans Peter Graf, Gerasimos Potamianos, Juergen Schroeter
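Unit selection as described in the abstract above is a shortest-path search: each candidate pays a target cost (fit to the desired unit) and a concatenation cost (smoothness against its predecessor). A generic dynamic-programming sketch of that search, with caller-supplied cost functions standing in for the acoustic and visual costs the patent uses (the function and variable names are illustrative):

```python
def select_units(targets, candidates, target_cost, concat_cost):
    """Viterbi-style search: pick one candidate per target position so
    that the summed target costs plus concatenation costs between
    consecutive picks is minimized."""
    # paths[i][c] = (best total cost ending in candidate c, backpointer)
    paths = [{c: (target_cost(targets[0], c), None) for c in candidates[0]}]
    for i in range(1, len(targets)):
        cur = {}
        for c in candidates[i]:
            tc = target_cost(targets[i], c)
            best_p, best_cost = None, float("inf")
            for p, (cost_p, _) in paths[-1].items():
                total = cost_p + concat_cost(p, c) + tc
                if total < best_cost:
                    best_cost, best_p = total, p
            cur[c] = (best_cost, best_p)
        paths.append(cur)
    # backtrack from the cheapest final candidate
    last = min(paths[-1], key=lambda c: paths[-1][c][0])
    seq = [last]
    for step in reversed(paths[1:]):
        seq.insert(0, step[seq[0]][1])
    return seq
```

In the patent's setting, `target_cost` would score a mouth image against the acoustics and `concat_cost` would score visual continuity between consecutive video units; the same skeleton underlies concatenative audio synthesis.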
-
Patent number: 7966186
Abstract: A system and method for generating a synthetic text-to-speech TTS voice are disclosed. A user is presented with at least one TTS voice and at least one voice characteristic. A new synthetic TTS voice is generated by blending a plurality of existing TTS voices according to the selected voice characteristics. The blending of voices involves interpolating segmented parameters of each TTS voice. Segmented parameters may be, for example, prosodic characteristics of the speech such as pitch, volume, phone durations, accents, stress, mis-pronunciations and emotion.
Type: Grant
Filed: November 4, 2008
Date of Patent: June 21, 2011
Assignee: AT&T Intellectual Property II, L.P.
Inventors: David A. Kapilow, Kenneth H. Rosen, Juergen Schroeter
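The blending step in the abstract above interpolates the parameters of several voices. A minimal numeric sketch, assuming each voice is reduced to a flat dict of scalar prosodic parameters (a simplification; real voices would carry these per segment):

```python
def blend_voices(voices, weights):
    """Blend TTS voices by linearly interpolating their prosodic
    parameters (e.g. pitch, volume, phone duration) with given weights."""
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return {
        p: sum(w * v[p] for v, w in zip(voices, weights))
        for p in voices[0]
    }
```

With weights (0.5, 0.5) this yields a voice halfway between two sources; categorical characteristics such as accent or emotion would need a selection rule rather than arithmetic interpolation.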
-
Patent number: 7869998
Abstract: A voice-enabled help desk service is disclosed. The service comprises an automatic speech recognition module for recognizing speech from a user, a spoken language understanding module for understanding the output from the automatic speech recognition module, a dialog management module for generating a response to speech from the user, a natural voices text-to-speech synthesis module for synthesizing speech to generate the response to the user, and a frequently asked questions module. The frequently asked questions module handles frequently asked questions from the user by changing voices and providing predetermined prompts to answer the frequently asked question.
Type: Grant
Filed: December 19, 2002
Date of Patent: January 11, 2011
Assignee: AT&T Intellectual Property II, L.P.
Inventors: Giuseppe Di Fabbrizio, Dawn L. Dutton, Narendra K. Gupta, Barbara B. Hollister, Mazin G. Rahim, Giuseppe Riccardi, Robert Elias Schapire, Juergen Schroeter
-
Publication number: 20100076762
Abstract: A method for generating animated sequences of talking heads in text-to-speech applications wherein a processor samples a plurality of frames comprising image samples. The processor reads first data comprising one or more parameters associated with noise-producing orifice images of sequences of at least three concatenated phonemes which correspond to an input stimulus. The processor reads, based on the first data, second data comprising images of a noise-producing entity. The processor generates an animated sequence of the noise-producing entity.
Type: Application
Filed: November 30, 2009
Publication date: March 25, 2010
Applicant: AT&T Corp.
Inventors: Eric Cosatto, Hans Peter Graf, Juergen Schroeter
-
Patent number: 7630897
Abstract: A method for generating animated sequences of talking heads in text-to-speech applications wherein a processor samples a plurality of frames comprising image samples. The processor reads first data comprising one or more parameters associated with noise-producing orifice images of sequences of at least three concatenated phonemes which correspond to an input stimulus. The processor reads, based on the first data, second data comprising images of a noise-producing entity. The processor generates an animated sequence of the noise-producing entity.
Type: Grant
Filed: May 19, 2008
Date of Patent: December 8, 2009
Assignee: AT&T Intellectual Property II, L.P.
Inventors: Eric Cosatto, Hans Peter Graf, Juergen Schroeter
-
Publication number: 20090300041
Abstract: A method and system are disclosed that train a text-to-speech synthesis system for use in speech synthesis. The method includes generating a speech database of audio files comprising domain-specific voices having various prosodies, and training a text-to-speech synthesis system using the speech database by selecting audio segments having a prosody based on at least one dialog state. The system includes a processor, a speech database of audio files, and modules for implementing the method.
Type: Application
Filed: August 13, 2009
Publication date: December 3, 2009
Applicant: AT&T Corp.
Inventor: Horst Juergen Schroeter
-
Patent number: 7584104
Abstract: A system, method and computer readable medium that trains a text-to-speech synthesis system for use in speech synthesis is disclosed. The method may include recording audio files of one or more live voices speaking language used in a specific domain, the audio files being recorded using various prosodies, storing the recorded audio files in a speech database, and training a text-to-speech synthesis system using the speech database, wherein the text-to-speech synthesis system selects audio segments having a prosody based on at least one dialog state and one speech act.
Type: Grant
Filed: September 8, 2006
Date of Patent: September 1, 2009
Assignee: AT&T Intellectual Property II, L.P.
Inventor: Horst Juergen Schroeter
-
Publication number: 20090063153
Abstract: A system and method for generating a synthetic text-to-speech TTS voice are disclosed. A user is presented with at least one TTS voice and at least one voice characteristic. A new synthetic TTS voice is generated by blending a plurality of existing TTS voices according to the selected voice characteristics. The blending of voices involves interpolating segmented parameters of each TTS voice. Segmented parameters may be, for example, prosodic characteristics of the speech such as pitch, volume, phone durations, accents, stress, mis-pronunciations and emotion.
Type: Application
Filed: November 4, 2008
Publication date: March 5, 2009
Applicant: AT&T Corp.
Inventors: David A. Kapilow, Kenneth H. Rosen, Juergen Schroeter
-
Patent number: 7454348
Abstract: A system and method for generating a synthetic text-to-speech TTS voice are disclosed. A user is presented with at least one TTS voice and at least one voice characteristic. A new synthetic TTS voice is generated by blending a plurality of existing TTS voices according to the selected voice characteristics. The blending of voices involves interpolating segmented parameters of each TTS voice. Segmented parameters may be, for example, prosodic characteristics of the speech such as pitch, volume, phone durations, accents, stress, mis-pronunciations and emotion.
Type: Grant
Filed: January 8, 2004
Date of Patent: November 18, 2008
Assignee: AT&T Intellectual Property II, L.P.
Inventors: David A. Kapilow, Kenneth H. Rosen, Juergen Schroeter
-
Publication number: 20080221904
Abstract: A method for generating animated sequences of talking heads in text-to-speech applications wherein a processor samples a plurality of frames comprising image samples. The processor reads first data comprising one or more parameters associated with noise-producing orifice images of sequences of at least three concatenated phonemes which correspond to an input stimulus. The processor reads, based on the first data, second data comprising images of a noise-producing entity. The processor generates an animated sequence of the noise-producing entity.
Type: Application
Filed: May 19, 2008
Publication date: September 11, 2008
Applicant: AT&T Corp.
Inventors: Eric Cosatto, Hans Peter Graf, Juergen Schroeter