Patents by Inventor Mahapathy Kadirkamanathan
Mahapathy Kadirkamanathan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9646603
Abstract: A method, apparatus, and system are described for a continuous speech recognition engine that includes a fine speech recognizer model, a coarse sound representation generator, and a coarse match generator. The fine speech recognizer model receives a time-coded sequence of sound feature frames, applies a speech recognition process to the sound feature frames, and determines at least a best guess at each recognizable word that corresponds to the sound feature frames. The coarse sound representation generator generates a coarse sound representation of the recognized word. The coarse match generator determines a likelihood that the coarse sound representation is actually the recognized word by comparing the coarse sound representation of the recognized word to a database containing the known sound of that recognized word, and assigns the likelihood as a robust confidence level parameter to that recognized word.
Type: Grant
Filed: February 27, 2009
Date of Patent: May 9, 2017
Assignee: Longsand Limited
Inventor: Mahapathy Kadirkamanathan
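The coarse-match confidence step in this abstract can be sketched in a few lines. This is only an illustrative approximation, not the patented method: the broad sound classes, the `coarse_representation` mapping, and the use of `difflib.SequenceMatcher` as the comparison are all assumptions introduced here.

```python
from difflib import SequenceMatcher

# Hypothetical coarse sound classes: collapse letters into broad
# categories (Vowel, Stop, Fricative, Nasal), discarding fine detail.
COARSE_CLASSES = {
    "a": "V", "e": "V", "i": "V", "o": "V", "u": "V",
    "p": "S", "b": "S", "t": "S", "d": "S", "k": "S", "g": "S",
    "f": "F", "v": "F", "s": "F", "z": "F",
    "m": "N", "n": "N",
}

def coarse_representation(word: str) -> str:
    """Map each letter to a broad sound class; drop anything unknown."""
    return "".join(COARSE_CLASSES.get(ch, "") for ch in word.lower())

def coarse_confidence(recognized_word: str, known_coarse_db: dict) -> float:
    """Compare the word's generated coarse form against the stored coarse
    form for that word, and return the similarity as a confidence in [0, 1]."""
    generated = coarse_representation(recognized_word)
    known = known_coarse_db.get(recognized_word, "")
    if not known:
        return 0.0
    return SequenceMatcher(None, generated, known).ratio()

db = {"pattern": coarse_representation("pattern")}
print(coarse_confidence("pattern", db))  # 1.0 — exact coarse match
```

A real engine would derive the coarse form from phoneme lattices rather than spelling; the point is only that a cheap second representation can score how plausible the fine recognizer's word choice is.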
-
Patent number: 9002490
Abstract: Methods for implementing shared experiences using mobile computing devices comprise capturing audio waves associated with a media using a built-in microphone of a mobile computing device, the mobile computing device including a processor, a memory, a display screen, a built-in battery to power the mobile computing device, and a built-in communication module to enable wireless communication. A signature is generated from the audio waves captured by the microphone. Based on the signature being recognized as a known signature, the signature and positioning information are transmitted to an audio server using the wireless communication. The positioning information identifies a specific moment in the media that a user of the mobile computing device is listening to, the audio server and the mobile computing device being connected to a network. Activity information is received from the audio server. The activity information is related to the media and associated with a third-party server connected to the network.
Type: Grant
Filed: April 13, 2011
Date of Patent: April 7, 2015
Assignee: Longsand Limited
Inventors: Mahapathy Kadirkamanathan, Simon Hayhurst
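The signature-and-lookup flow above can be sketched as follows. The energy-bit fingerprint and SHA-256 hashing are stand-ins for a real audio fingerprint (production systems typically hash spectral peaks), and `audio_signature` and `lookup_position` are hypothetical names, not the patent's API.

```python
import hashlib

def audio_signature(samples, frame_size=256):
    """Reduce captured audio samples to a compact signature: quantize each
    frame's mean absolute energy to one bit (above/below the overall
    average) and hash the bit string. A sketch only."""
    frames = [samples[i:i + frame_size] for i in range(0, len(samples), frame_size)]
    energies = [sum(abs(s) for s in f) / max(len(f), 1) for f in frames]
    avg = sum(energies) / max(len(energies), 1)
    bits = "".join("1" if e > avg else "0" for e in energies)
    return hashlib.sha256(bits.encode()).hexdigest()

def lookup_position(signature, known_signatures):
    """If the signature is known, return its positioning information,
    e.g. (moment_in_media_seconds, media_id); otherwise None."""
    return known_signatures.get(signature)
```

Only when `lookup_position` finds a match would the device transmit the signature and position to the audio server, which keeps unrecognized ambient audio off the network.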
-
Patent number: 8781812
Abstract: A language identification system that includes a universal phoneme decoder (UPD) is described. The UPD contains a universal phoneme set that 1) represents all phonemes occurring in a set of two or more spoken languages and 2) captures phoneme correspondences across those languages, such that a set of unique phoneme patterns and probabilities is calculated in order to identify the most likely phoneme occurring at each point in the audio files, among the two or more potential languages on which the UPD was trained. Each statistical language model (SLM) uses the set of unique phoneme patterns created for each language in the set to distinguish between the spoken human languages in the set. The run-time language identifier module identifies the particular human language being spoken by utilizing the linguistic probabilities supplied by the SLMs, which are based on the set of unique phoneme patterns created for each language.
Type: Grant
Filed: March 18, 2013
Date of Patent: July 15, 2014
Assignee: Longsand Limited
Inventors: Mahapathy Kadirkamanathan, Christopher John Waple
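A toy version of the per-language SLM scoring step might look like this. The UPD itself is assumed to have already produced a universal phoneme sequence; `train_slm`, `score`, and `identify_language` are hypothetical names, and the bigram model is a stand-in for whatever statistical model the patent actually uses.

```python
import math
from collections import Counter

def train_slm(phoneme_sequences, n=2):
    """Build a simple phoneme-bigram SLM from decoded training audio."""
    counts = Counter()
    for seq in phoneme_sequences:
        for i in range(len(seq) - n + 1):
            counts[tuple(seq[i:i + n])] += 1
    total = sum(counts.values())
    return {gram: c / total for gram, c in counts.items()}

def score(slm, phonemes, n=2, floor=1e-6):
    """Log-probability of the universal phoneme sequence under one SLM;
    unseen patterns get a small floor probability."""
    return sum(math.log(slm.get(tuple(phonemes[i:i + n]), floor))
               for i in range(len(phonemes) - n + 1))

def identify_language(phonemes, slms):
    """Pick the language whose SLM best explains the UPD output."""
    return max(slms, key=lambda lang: score(slms[lang], phonemes))
```

Because every language's SLM scores the same universal phoneme stream, the decoder runs once and only the cheap pattern scoring is repeated per language.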
-
Patent number: 8447329
Abstract: A system to determine positions of mobile computing devices and provide direction information includes a first mobile computing device configured to broadcast a first chirp signal, a second mobile computing device configured to broadcast a second chirp signal indicating receipt of the first chirp signal and first time information about when the first chirp signal was received, and a third mobile computing device configured to broadcast a third chirp signal indicating (a) receipt of the first and second chirp signals and (b) second time information about when the first and second chirp signals were received. The first mobile computing device is configured to use the first and second time information to determine a position of the second mobile computing device. The first mobile computing device is also configured to transmit text messages to the second mobile computing device to direct a user of the second mobile computing device to a position of a user of the first mobile computing device.
Type: Grant
Filed: February 8, 2011
Date of Patent: May 21, 2013
Assignee: Longsand Limited
Inventors: Mahapathy Kadirkamanathan, Sean Mark Blanchflower
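The chirp timestamps exchanged above support acoustic ranging. The sketch below shows the simplest two-device case, where device A ranges device B from a round trip measured on A's own clock (so no clock synchronization is needed); `round_trip_distance` is a hypothetical helper illustrating the timing arithmetic, not the patent's multi-device solution.

```python
SPEED_OF_SOUND = 343.0  # metres per second in air, approximate

def round_trip_distance(t_sent, t_b_received, t_b_replied, t_reply_received):
    """Estimate the distance from device A to device B from chirp timestamps.

    A records when it chirped (t_sent) and when it heard B's reply
    (t_reply_received); B reports when it heard A's chirp (t_b_received)
    and when it replied (t_b_replied). Subtracting B's turnaround delay
    from the round trip and halving gives the one-way acoustic flight time.
    """
    turnaround = t_b_replied - t_b_received
    flight_time = ((t_reply_received - t_sent) - turnaround) / 2.0
    return flight_time * SPEED_OF_SOUND
```

With a third device reporting its own reception times, the same arithmetic yields multiple range estimates, which is what lets the system resolve a position rather than just a distance.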
-
Patent number: 8401840
Abstract: A language identification system that includes a universal phoneme decoder (UPD) is described. The UPD contains a universal phoneme set that 1) represents all phonemes occurring in a set of two or more spoken languages and 2) captures phoneme correspondences across those languages, such that a set of unique phoneme patterns and probabilities is calculated in order to identify the most likely phoneme occurring at each point in the audio files, among the two or more potential languages on which the UPD was trained. Each statistical language model (SLM) uses the set of unique phoneme patterns created for each language in the set to distinguish between the spoken human languages in the set. The run-time language identifier module identifies the particular human language being spoken by utilizing the linguistic probabilities supplied by the SLMs, which are based on the set of unique phoneme patterns created for each language.
Type: Grant
Filed: May 24, 2012
Date of Patent: March 19, 2013
Assignee: Autonomy Corporation Ltd
Inventors: Mahapathy Kadirkamanathan, Christopher John Waple
-
Publication number: 20120265328
Abstract: Methods for implementing shared experiences using mobile computing devices comprise capturing audio waves associated with a media using a built-in microphone of a mobile computing device, the mobile computing device including a processor, a memory, a display screen, a built-in battery to power the mobile computing device, and a built-in communication module to enable wireless communication. A signature is generated from the audio waves captured by the microphone. Based on the signature being recognized as a known signature, the signature and positioning information are transmitted to an audio server using the wireless communication. The positioning information identifies a specific moment in the media that a user of the mobile computing device is listening to, the audio server and the mobile computing device being connected to a network. Activity information is received from the audio server. The activity information is related to the media and associated with a third-party server connected to the network.
Type: Application
Filed: April 13, 2011
Publication date: October 18, 2012
Applicant: Autonomy Corporation Ltd
Inventors: Mahapathy Kadirkamanathan, Simon Hayhurst
-
Publication number: 20120232901
Abstract: A language identification system that includes a universal phoneme decoder (UPD) is described. The UPD contains a universal phoneme set that 1) represents all phonemes occurring in a set of two or more spoken languages and 2) captures phoneme correspondences across those languages, such that a set of unique phoneme patterns and probabilities is calculated in order to identify the most likely phoneme occurring at each point in the audio files, among the two or more potential languages on which the UPD was trained. Each statistical language model (SLM) uses the set of unique phoneme patterns created for each language in the set to distinguish between the spoken human languages in the set. The run-time language identifier module identifies the particular human language being spoken by utilizing the linguistic probabilities supplied by the SLMs, which are based on the set of unique phoneme patterns created for each language.
Type: Application
Filed: May 24, 2012
Publication date: September 13, 2012
Applicant: Autonomy Corporation Ltd.
Inventors: Mahapathy Kadirkamanathan, Christopher John Waple
-
Publication number: 20120202514
Abstract: A system to determine positions of mobile computing devices and provide direction information includes a first mobile computing device configured to broadcast a first chirp signal, a second mobile computing device configured to broadcast a second chirp signal indicating receipt of the first chirp signal and first time information about when the first chirp signal was received, and a third mobile computing device configured to broadcast a third chirp signal indicating (a) receipt of the first and second chirp signals and (b) second time information about when the first and second chirp signals were received. The first mobile computing device is configured to use the first and second time information to determine a position of the second mobile computing device. The first mobile computing device is also configured to transmit text messages to the second mobile computing device to direct a user of the second mobile computing device to a position of a user of the first mobile computing device.
Type: Application
Filed: February 8, 2011
Publication date: August 9, 2012
Applicant: Autonomy Corporation Ltd
Inventors: Mahapathy Kadirkamanathan, Sean Mark Blanchflower
-
Patent number: 8229743
Abstract: Various methods and apparatus are described for a speech recognition system. In an embodiment, the statistical language model (SLM) provides probability estimates of how linguistically likely a sequence of linguistic items is to occur in that sequence, based on the number of times the sequence of linguistic items occurs in text and phrases in general use. The speech recognition decoder module requests from a correction module one or more corrected probability estimates P′(z|xy) of how likely a linguistic item z is to follow a given sequence of linguistic items x followed by y, where x, y, and z are three variable linguistic items supplied from the decoder module. The correction module is trained on the linguistics of a specific domain, and is located between the decoder module and the SLM in order to adapt the probability estimates supplied by the SLM to the specific domain when those probability estimates from the SLM significantly disagree with the linguistic probabilities in that domain.
Type: Grant
Filed: June 23, 2009
Date of Patent: July 24, 2012
Assignee: Autonomy Corporation Ltd.
Inventors: David Carter, Mahapathy Kadirkamanathan
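One plausible reading of the "significantly disagree" condition is a thresholded override in log space: keep the general SLM's estimate unless the in-domain estimate differs by too much. The function `corrected_probability` and its `threshold` parameter are assumptions for illustration, not the patent's actual formula.

```python
import math

def corrected_probability(slm_p, domain_p, threshold=2.0):
    """Return a corrected estimate P'(z|xy).

    slm_p    -- P(z|xy) from the general-purpose SLM
    domain_p -- P(z|xy) estimated from domain-specific text
    Keep the SLM estimate unless it disagrees with the domain estimate
    by more than `threshold` nats in log space, in which case defer to
    the domain-trained correction.
    """
    if slm_p <= 0 or domain_p <= 0:
        return max(slm_p, domain_p)
    if abs(math.log(domain_p / slm_p)) > threshold:
        return domain_p
    return slm_p
```

Sitting between the decoder and the SLM, such a module leaves well-modeled trigrams untouched and only intervenes where the general model is badly miscalibrated for the domain.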
-
Patent number: 8190420
Abstract: A language identification system that includes a universal phoneme decoder (UPD) is described. The UPD contains a universal phoneme set that 1) represents all phonemes occurring in a set of two or more spoken languages and 2) captures phoneme correspondences across those languages, such that a set of unique phoneme patterns and probabilities is calculated in order to identify the most likely phoneme occurring at each point in the audio files, among the two or more potential languages on which the UPD was trained. Each statistical language model (SLM) uses the set of unique phoneme patterns created for each language in the set to distinguish between the spoken human languages in the set. The run-time language identifier module identifies the particular human language being spoken by utilizing the linguistic probabilities supplied by the one or more SLMs, which are based on the set of unique phoneme patterns created for each language.
Type: Grant
Filed: August 4, 2009
Date of Patent: May 29, 2012
Assignee: Autonomy Corporation Ltd.
Inventors: Mahapathy Kadirkamanathan, Christopher John Waple
-
Publication number: 20110035219
Abstract: A language identification system that includes a universal phoneme decoder (UPD) is described. The UPD contains a universal phoneme set that 1) represents all phonemes occurring in a set of two or more spoken languages and 2) captures phoneme correspondences across those languages, such that a set of unique phoneme patterns and probabilities is calculated in order to identify the most likely phoneme occurring at each point in the audio files, among the two or more potential languages on which the UPD was trained. Each statistical language model (SLM) uses the set of unique phoneme patterns created for each language in the set to distinguish between the spoken human languages in the set. The run-time language identifier module identifies the particular human language being spoken by utilizing the linguistic probabilities supplied by the one or more SLMs, which are based on the set of unique phoneme patterns created for each language.
Type: Application
Filed: August 4, 2009
Publication date: February 10, 2011
Applicant: Autonomy Corporation Ltd.
Inventors: Mahapathy Kadirkamanathan, Christopher John Waple
-
Publication number: 20100324901
Abstract: Various methods and apparatus are described for a speech recognition system. In an embodiment, the statistical language model (SLM) provides probability estimates of how linguistically likely a sequence of linguistic items is to occur in that sequence, based on the number of times the sequence of linguistic items occurs in text and phrases in general use. The speech recognition decoder module requests from a correction module one or more corrected probability estimates P′(z|xy) of how likely a linguistic item z is to follow a given sequence of linguistic items x followed by y, where x, y, and z are three variable linguistic items supplied from the decoder module. The correction module is trained on the linguistics of a specific domain, and is located between the decoder module and the SLM in order to adapt the probability estimates supplied by the SLM to the specific domain when those probability estimates from the SLM significantly disagree with the linguistic probabilities in that domain.
Type: Application
Filed: June 23, 2009
Publication date: December 23, 2010
Applicant: Autonomy Corporation Ltd.
Inventors: David Carter, Mahapathy Kadirkamanathan
-
Publication number: 20100223056
Abstract: A method, apparatus, and system are described for a continuous speech recognition engine that includes a fine speech recognizer model, a coarse sound representation generator, and a coarse match generator. The fine speech recognizer model receives a time-coded sequence of sound feature frames, applies a speech recognition process to the sound feature frames, and determines at least a best guess at each recognizable word that corresponds to the sound feature frames. The coarse sound representation generator generates a coarse sound representation of the recognized word. The coarse match generator determines a likelihood that the coarse sound representation is actually the recognized word by comparing the coarse sound representation of the recognized word to a database containing the known sound of that recognized word, and assigns the likelihood as a robust confidence level parameter to that recognized word.
Type: Application
Filed: February 27, 2009
Publication date: September 2, 2010
Applicant: Autonomy Corporation Ltd.
Inventor: Mahapathy Kadirkamanathan