Patents by Inventor Kean Kheong Chin
Kean Kheong Chin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240170006
Abstract: A system for processing and presenting a conversation includes a sensor, a processor, and a presenter. The sensor is configured to capture an audio-form conversation. The processor is configured to automatically transform the audio-form conversation into a transformed conversation. The transformed conversation includes a synchronized text, wherein the synchronized text is synchronized with the audio-form conversation. The presenter is configured to present the transformed conversation including the synchronized text and the audio-form conversation. The presenter is further configured to present the transformed conversation to be navigable, searchable, assignable, editable, and shareable.
Type: Application
Filed: January 25, 2024
Publication date: May 23, 2024
Inventors: Yun Fu, Simon Lau, Kaisuke Nakajima, Julius Cheng, Gelei Chen, Sam Song Liang, James Mason Altreuter, Kean Kheong Chin, Zhenhao Ge, Hitesh Anand Gupta, Xiaoke Huang, James Francis McAteer, Brian Francis Williams, Tao Xing
-
Patent number: 11978472
Abstract: A system for processing and presenting a conversation includes a sensor, a processor, and a presenter. The sensor is configured to capture an audio-form conversation. The processor is configured to automatically transform the audio-form conversation into a transformed conversation. The transformed conversation includes a synchronized text, wherein the synchronized text is synchronized with the audio-form conversation. The presenter is configured to present the transformed conversation including the synchronized text and the audio-form conversation. The presenter is further configured to present the transformed conversation to be navigable, searchable, assignable, editable, and shareable.
Type: Grant
Filed: March 23, 2021
Date of Patent: May 7, 2024
Assignee: Otter.ai, Inc.
Inventors: Yun Fu, Simon Lau, Kaisuke Nakajima, Julius Cheng, Gelei Chen, Sam Song Liang, James Mason Altreuter, Kean Kheong Chin, Zhenhao Ge, Hitesh Anand Gupta, Xiaoke Huang, James Francis McAteer, Brian Francis Williams, Tao Xing
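The "synchronized text" these abstracts describe pairs each transcribed word with its position in the source audio, which is what makes the presented conversation navigable and searchable. A minimal sketch of such a structure follows; the `Token` and `SyncedTranscript` names and the timings are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Token:
    text: str     # transcribed word
    start: float  # seconds into the audio where the word begins
    end: float    # seconds into the audio where the word ends

class SyncedTranscript:
    """Transcript whose words are time-aligned with the source audio."""

    def __init__(self, tokens):
        self.tokens = list(tokens)

    def text(self):
        return " ".join(t.text for t in self.tokens)

    def seek(self, seconds):
        """Return the token playing at a given audio position (navigation)."""
        for t in self.tokens:
            if t.start <= seconds < t.end:
                return t
        return None

    def search(self, word):
        """Return the start time of every occurrence of a word (search)."""
        w = word.lower()
        return [t.start for t in self.tokens if t.text.lower() == w]

transcript = SyncedTranscript([
    Token("hello", 0.0, 0.4),
    Token("world", 0.5, 1.0),
    Token("hello", 1.2, 1.6),
])
```

With word-level timestamps in hand, clicking a word can seek the audio player, and a search hit can jump playback to the matching moment.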
-
Publication number: 20240087574
Abstract: Computer-implemented method and system for receiving and processing one or more moment-associating elements. For example, the computer-implemented method includes receiving the one or more moment-associating elements, transforming the one or more moment-associating elements into one or more pieces of moment-associating information, and transmitting at least one piece of the one or more pieces of moment-associating information.
Type: Application
Filed: November 20, 2023
Publication date: March 14, 2024
Inventors: Yun Fu, Simon Lau, Kaisuke Nakajima, Julius Cheng, Sam Song Liang, James Mason Altreuter, Kean Kheong Chin, Zhenhao Ge, Hitesh Anand Gupta, Xiaoke Huang, James Francis McAteer, Brian Francis Williams, Tao Xing
-
Patent number: 11869508
Abstract: Computer-implemented method and system for receiving and processing one or more moment-associating elements. For example, the computer-implemented method includes receiving the one or more moment-associating elements, transforming the one or more moment-associating elements into one or more pieces of moment-associating information, and transmitting at least one piece of the one or more pieces of moment-associating information.
Type: Grant
Filed: April 28, 2021
Date of Patent: January 9, 2024
Assignee: Otter.ai, Inc.
Inventors: Yun Fu, Simon Lau, Kaisuke Nakajima, Julius Cheng, Sam Song Liang, James Mason Altreuter, Kean Kheong Chin, Zhenhao Ge, Hitesh Anand Gupta, Xiaoke Huang, James Francis McAteer, Brian Francis Williams, Tao Xing
-
Publication number: 20220353102
Abstract: Methods and systems for team cooperation with real-time recording of one or more moment-associating elements. For example, a method includes: delivering, in response to an instruction, an invitation to each member of one or more members associated with a workspace; granting, in response to acceptance of the invitation by one or more subscribers of the one or more members, subscription permission to the one or more subscribers; receiving the one or more moment-associating elements; transforming the one or more moment-associating elements into one or more pieces of moment-associating information; and transmitting at least one piece of the one or more pieces of moment-associating information to the one or more subscribers.
Type: Application
Filed: July 13, 2022
Publication date: November 3, 2022
Inventors: Simon Lau, Yun Fu, James Mason Altreuter, Brian Francis Williams, Xiaoke Huang, Tao Xing, Wen Sun, Tao Lu, Kaisuke Nakajima, Kean Kheong Chin, Hitesh Anand Gupta, Julius Cheng, Jing Pan, Sam Song Liang
-
Publication number: 20220343918
Abstract: Computer-implemented method and system for processing and broadcasting one or more moment-associating elements. For example, the computer-implemented method includes granting subscription permission to one or more subscribers; receiving the one or more moment-associating elements; transforming the one or more moment-associating elements into one or more pieces of moment-associating information; and transmitting at least one piece of the one or more pieces of moment-associating information to the one or more subscribers.
Type: Application
Filed: July 13, 2022
Publication date: October 27, 2022
Inventors: Yun Fu, Tao Xing, Kaisuke Nakajima, Brian Francis Williams, James Mason Altreuter, Xiaoke Huang, Simon Lau, Sam Song Liang, Kean Kheong Chin, Wen Sun, Julius Cheng, Hitesh Anand Gupta
-
Patent number: 11431517
Abstract: Methods and systems for team cooperation with real-time recording of one or more moment-associating elements. For example, a method includes: delivering, in response to an instruction, an invitation to each member of one or more members associated with a workspace; granting, in response to acceptance of the invitation by one or more subscribers of the one or more members, subscription permission to the one or more subscribers; receiving the one or more moment-associating elements; transforming the one or more moment-associating elements into one or more pieces of moment-associating information; and transmitting at least one piece of the one or more pieces of moment-associating information to the one or more subscribers.
Type: Grant
Filed: February 3, 2020
Date of Patent: August 30, 2022
Assignee: Otter.ai, Inc.
Inventors: Simon Lau, Yun Fu, James Mason Altreuter, Brian Francis Williams, Xiaoke Huang, Tao Xing, Wen Sun, Tao Lu, Kaisuke Nakajima, Kean Kheong Chin, Hitesh Anand Gupta, Julius Cheng, Jing Pan, Sam Song Liang
-
Patent number: 11423911
Abstract: Computer-implemented method and system for processing and broadcasting one or more moment-associating elements. For example, the computer-implemented method includes granting subscription permission to one or more subscribers; receiving the one or more moment-associating elements; transforming the one or more moment-associating elements into one or more pieces of moment-associating information; and transmitting at least one piece of the one or more pieces of moment-associating information to the one or more subscribers.
Type: Grant
Filed: October 10, 2019
Date of Patent: August 23, 2022
Assignee: Otter.ai, Inc.
Inventors: Yun Fu, Tao Xing, Kaisuke Nakajima, Brian Francis Williams, James Mason Altreuter, Xiaoke Huang, Simon Lau, Sam Song Liang, Kean Kheong Chin, Wen Sun, Julius Cheng, Hitesh Anand Gupta
-
Publication number: 20210327454
Abstract: A system for processing and presenting a conversation includes a sensor, a processor, and a presenter. The sensor is configured to capture an audio-form conversation. The processor is configured to automatically transform the audio-form conversation into a transformed conversation. The transformed conversation includes a synchronized text, wherein the synchronized text is synchronized with the audio-form conversation. The presenter is configured to present the transformed conversation including the synchronized text and the audio-form conversation. The presenter is further configured to present the transformed conversation to be navigable, searchable, assignable, editable, and shareable.
Type: Application
Filed: March 23, 2021
Publication date: October 21, 2021
Inventors: Yun Fu, Simon Lau, Kaisuke Nakajima, Julius Cheng, Gelei Chen, Sam Song Liang, James Mason Altreuter, Kean Kheong Chin, Zhenhao Ge, Hitesh Anand Gupta, Xiaoke Huang, James Francis McAteer, Brian Francis Williams, Tao Xing
-
Publication number: 20210319797
Abstract: Computer-implemented method and system for receiving and processing one or more moment-associating elements. For example, the computer-implemented method includes receiving the one or more moment-associating elements, transforming the one or more moment-associating elements into one or more pieces of moment-associating information, and transmitting at least one piece of the one or more pieces of moment-associating information.
Type: Application
Filed: April 28, 2021
Publication date: October 14, 2021
Inventors: Yun Fu, Simon Lau, Kaisuke Nakajima, Julius Cheng, Sam Song Liang, James Mason Altreuter, Kean Kheong Chin, Zhenhao Ge, Hitesh Anand Gupta, Xiaoke Huang, James Francis McAteer, Brian Francis Williams, Tao Xing
-
Patent number: 11100943
Abstract: A system for processing and presenting a conversation includes a sensor, a processor, and a presenter. The sensor is configured to capture an audio-form conversation. The processor is configured to automatically transform the audio-form conversation into a transformed conversation. The transformed conversation includes a synchronized text, wherein the synchronized text is synchronized with the audio-form conversation. The presenter is configured to present the transformed conversation including the synchronized text and the audio-form conversation. The presenter is further configured to present the transformed conversation to be navigable, searchable, assignable, editable, and shareable.
Type: Grant
Filed: February 14, 2019
Date of Patent: August 24, 2021
Assignee: Otter.ai, Inc.
Inventors: Yun Fu, Simon Lau, Kaisuke Nakajima, Julius Cheng, Gelei Chen, Sam Song Liang, James Mason Altreuter, Kean Kheong Chin, Zhenhao Ge, Hitesh Anand Gupta, Xiaoke Huang, James Francis McAteer, Brian Francis Williams, Tao Xing
-
Patent number: 11024316
Abstract: Computer-implemented method and system for receiving and processing one or more moment-associating elements. For example, the computer-implemented method includes receiving the one or more moment-associating elements, transforming the one or more moment-associating elements into one or more pieces of moment-associating information, and transmitting at least one piece of the one or more pieces of moment-associating information.
Type: Grant
Filed: May 3, 2019
Date of Patent: June 1, 2021
Assignee: Otter.ai, Inc.
Inventors: Yun Fu, Simon Lau, Kaisuke Nakajima, Julius Cheng, Sam Song Liang, James Mason Altreuter, Kean Kheong Chin, Zhenhao Ge, Hitesh Anand Gupta, Xiaoke Huang, James Francis McAteer, Brian Francis Williams, Tao Xing
-
Patent number: 9454963
Abstract: A text-to-speech method for simulating a plurality of different voice characteristics includes dividing inputted text into a sequence of acoustic units; selecting voice characteristics for the inputted text; converting the sequence of acoustic units to a sequence of speech vectors using an acoustic model having a plurality of model parameters provided in clusters each having at least one sub-cluster and describing probability distributions which relate an acoustic unit to a speech vector; and outputting the sequence of speech vectors as audio with the selected voice characteristics. A parameter of a predetermined type of each probability distribution is expressed as a weighted sum of parameters of the same type using voice characteristic dependent weighting. In converting the sequence of acoustic units to a sequence of speech vectors, the voice characteristic dependent weights for the selected voice characteristics are retrieved for each cluster such that there is one weight per sub-cluster.
Type: Grant
Filed: March 13, 2013
Date of Patent: September 27, 2016
Assignee: Kabushiki Kaisha Toshiba
Inventors: Javier Latorre-Martinez, Vincent Ping Leung Wan, Kean Kheong Chin, Mark John Francis Gales, Katherine Mary Knill, Masami Akamine, Byung Ha Chung
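The core idea in this abstract is that a distribution parameter (for instance a Gaussian mean) is not stored once per voice, but rebuilt as a weighted sum of sub-cluster parameters, with the weight vector chosen by the selected voice characteristic. A minimal numerical sketch of that weighted-sum construction follows; the cluster means, voice names, and weights are invented for illustration and are not values from the patent.

```python
import numpy as np

# Hypothetical sub-cluster means that all describe the same Gaussian;
# shapes and values are illustrative only.
cluster_means = np.array([
    [1.0, 0.0],   # sub-cluster 0
    [0.0, 2.0],   # sub-cluster 1
    [3.0, 1.0],   # sub-cluster 2
])

# One weight per sub-cluster, looked up for the selected voice characteristic.
voice_weights = {
    "neutral":    np.array([1.0, 0.0, 0.0]),
    "expressive": np.array([0.5, 0.3, 0.2]),
}

def adapted_mean(voice):
    """Express the Gaussian mean as a voice-dependent weighted sum
    of the sub-cluster means (one weight per sub-cluster)."""
    w = voice_weights[voice]
    return w @ cluster_means
```

Storing weights instead of full parameter sets is what lets one model render many voice characteristics: each new characteristic only needs a small weight vector, not a new acoustic model.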
-
Patent number: 9269347
Abstract: A text-to-speech method configured to output speech having a selected speaker voice and a selected speaker attribute, including: inputting text; dividing the inputted text into a sequence of acoustic units; selecting a speaker for the inputted text; selecting a speaker attribute for the inputted text; converting the sequence of acoustic units to a sequence of speech vectors using an acoustic model; and outputting the sequence of speech vectors as audio with the selected speaker voice and a selected speaker attribute. The acoustic model includes a first set of parameters relating to speaker voice and a second set of parameters relating to speaker attributes, which parameters do not overlap. The selecting a speaker voice includes selecting parameters from the first set of parameters and the selecting the speaker attribute includes selecting the parameters from the second set of parameters.
Type: Grant
Filed: March 15, 2013
Date of Patent: February 23, 2016
Assignee: Kabushiki Kaisha Toshiba
Inventors: Javier Latorre-Martinez, Vincent Ping Leung Wan, Kean Kheong Chin, Mark John Francis Gales, Katherine Mary Knill, Masami Akamine
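The point of keeping the two parameter sets disjoint is that a speaker voice and a speaker attribute can be selected independently and then combined, including in combinations never observed together in training. A toy sketch of that factorization, assuming the two parameter sets contribute additively to the model (the names, vectors, and additive combination are illustrative assumptions, not the patent's construction):

```python
import numpy as np

# First parameter set: one entry per speaker voice.
speaker_params = {
    "alice": np.array([0.2, 1.0]),
    "bob":   np.array([-0.5, 0.7]),
}

# Second, non-overlapping parameter set: one entry per speaker attribute.
attribute_params = {
    "happy": np.array([0.3, 0.0]),
    "calm":  np.array([0.0, -0.1]),
}

def model_offset(speaker, attribute):
    """Combine independently selected speaker and attribute parameters.
    Because the sets do not overlap, any speaker can be paired with any
    attribute."""
    return speaker_params[speaker] + attribute_params[attribute]
```

Here "alice speaking calmly" is synthesizable even if only "alice happy" and "bob calm" data existed, because the two factors are selected and stored separately.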
-
Patent number: 8620655
Abstract: A speech processing method, comprising: receiving a speech input which comprises a sequence of feature vectors; determining the likelihood of a sequence of words arising from the sequence of feature vectors using an acoustic model and a language model, comprising: providing an acoustic model for performing speech recognition on an input signal which comprises a sequence of feature vectors, said model having a plurality of model parameters relating to the probability distribution of a word or part thereof being related to a feature vector, wherein said speech input is a mismatched speech input which is received from a speaker in an environment which is not matched to the speaker or environment under which the acoustic model was trained; and adapting the acoustic model to the mismatched speech input, the speech processing method further comprising determining the likelihood of a sequence of features occurring in a given language using a language model; and combining the likelihoods determined by the acoustic …
Type: Grant
Filed: August 10, 2011
Date of Patent: December 31, 2013
Assignee: Kabushiki Kaisha Toshiba
Inventors: Haitian Xu, Kean Kheong Chin, Mark John Francis Gales
-
Patent number: 8612224
Abstract: A method for identifying a plurality of speakers in audio data and for decoding the speech spoken by said speakers; the method comprising: receiving speech; dividing the speech into segments as it is received; processing the received speech segment by segment in the order received to identify the speaker and to decode the speech, processing comprising: performing primary decoding of the segment using an acoustic model and a language model; obtaining segment parameters indicating the differences between the speaker of the segment and a base speaker during the primary decoding; comparing the segment parameters with a plurality of stored speaker profiles to determine the identity of the speaker, and selecting a speaker profile for said speaker; updating the selected speaker profile; performing a further decoding of the segment using a speaker independent acoustic model, adapted using the updated speaker profile; outputting the decoded speech for the identified speaker, wherein the speaker profiles are updated …
Type: Grant
Filed: August 23, 2011
Date of Patent: December 17, 2013
Assignee: Kabushiki Kaisha Toshiba
Inventors: Catherine Breslin, Mark John Francis Gales, Kean Kheong Chin, Katherine Mary Knill
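The profile-matching step this abstract describes (compare a segment's adaptation parameters against stored speaker profiles, pick the closest or enrol a new speaker, then update the chosen profile) can be sketched as follows. Everything here is an illustrative assumption: the Euclidean distance, the fixed threshold, the running-mean update, and the generated speaker names stand in for whatever comparison and update the patent actually claims.

```python
import numpy as np

class SpeakerProfiles:
    """Match per-segment adaptation parameters to stored speaker profiles."""

    def __init__(self, threshold=1.0):
        self.profiles = {}        # name -> (mean parameter vector, segment count)
        self.threshold = threshold

    def identify(self, segment_params):
        """Return the closest profile, enrolling a new speaker if none is near."""
        segment_params = np.asarray(segment_params, dtype=float)
        best, best_dist = None, float("inf")
        for name, (mean, _) in self.profiles.items():
            d = np.linalg.norm(segment_params - mean)
            if d < best_dist:
                best, best_dist = name, d
        if best is None or best_dist > self.threshold:
            best = f"speaker{len(self.profiles) + 1}"
            self.profiles[best] = (segment_params, 0)
        self._update(best, segment_params)
        return best

    def _update(self, name, segment_params):
        # Running mean keeps the profile current as more segments arrive.
        mean, n = self.profiles[name]
        self.profiles[name] = ((mean * n + segment_params) / (n + 1), n + 1)

profiles = SpeakerProfiles(threshold=1.0)
a = profiles.identify([0.0, 0.0])   # no profiles yet: enrols speaker1
b = profiles.identify([5.0, 5.0])   # far from speaker1: enrols speaker2
c = profiles.identify([0.2, 0.0])   # close to speaker1: reuses and updates it
```

The updated profile then drives the second, speaker-adapted decoding pass described in the abstract.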
-
Publication number: 20130262119
Abstract: A text-to-speech method configured to output speech having a selected speaker voice and a selected speaker attribute, including: inputting text; dividing the inputted text into a sequence of acoustic units; selecting a speaker for the inputted text; selecting a speaker attribute for the inputted text; converting the sequence of acoustic units to a sequence of speech vectors using an acoustic model; and outputting the sequence of speech vectors as audio with the selected speaker voice and a selected speaker attribute. The acoustic model includes a first set of parameters relating to speaker voice and a second set of parameters relating to speaker attributes, which parameters do not overlap. The selecting a speaker voice includes selecting parameters from the first set of parameters and the selecting the speaker attribute includes selecting the parameters from the second set of parameters.
Type: Application
Filed: March 15, 2013
Publication date: October 3, 2013
Applicant: Kabushiki Kaisha Toshiba
Inventors: Javier Latorre-Martinez, Vincent Ping Leung Wan, Kean Kheong Chin, Mark John Francis Gales, Katherine Mary Knill, Masami Akamine
-
Publication number: 20130262109
Abstract: A text-to-speech method for simulating a plurality of different voice characteristics includes dividing inputted text into a sequence of acoustic units; selecting voice characteristics for the inputted text; converting the sequence of acoustic units to a sequence of speech vectors using an acoustic model having a plurality of model parameters provided in clusters each having at least one sub-cluster and describing probability distributions which relate an acoustic unit to a speech vector; and outputting the sequence of speech vectors as audio with the selected voice characteristics. A parameter of a predetermined type of each probability distribution is expressed as a weighted sum of parameters of the same type using voice characteristic dependent weighting. In converting the sequence of acoustic units to a sequence of speech vectors, the voice characteristic dependent weights for the selected voice characteristics are retrieved for each cluster such that there is one weight per sub-cluster.
Type: Application
Filed: March 13, 2013
Publication date: October 3, 2013
Inventors: Javier Latorre-Martinez, Vincent Ping Leung Wan, Kean Kheong Chin, Mark John Francis Gales, Katherine Mary Knill, Masami Akamine, Byung Ha Chung
-
Patent number: 8417522
Abstract: A speech recognition method includes receiving a speech input signal in a first noise environment which includes a sequence of observations, determining the likelihood of a sequence of words arising from the sequence of observations using an acoustic model, adapting the model trained in a second noise environment to that of the first environment, wherein adapting the model trained in the second environment to that of the first environment includes using second order or higher order Taylor expansion coefficients derived for a group of probability distributions and the same expansion coefficient is used for the whole group.
Type: Grant
Filed: April 20, 2010
Date of Patent: April 9, 2013
Assignee: Kabushiki Kaisha Toshiba
Inventors: Haitian Xu, Kean Kheong Chin
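The adaptation this abstract describes expands a noise "mismatch function" as a Taylor series and, crucially, shares the expansion coefficients across a whole group of distributions instead of recomputing them per distribution. A toy sketch of that idea follows, assuming a one-dimensional log-energy feature and the common vector-Taylor-series mismatch form y = x + log(1 + exp(n − x)); the mismatch function, the numerical derivatives, and the shared expansion point are illustrative assumptions, not the patent's exact formulation.

```python
import numpy as np

def mismatch(x, n):
    """Noisy log-energy produced from clean log-energy x and noise n.
    A common VTS-style choice, used here only as a stand-in."""
    return x + np.log1p(np.exp(n - x))

def adapt_group(clean_means, noise, expansion_point):
    """Adapt a group of clean-trained distribution means to a new noise level.
    Zeroth-, first-, and second-order Taylor coefficients are computed once
    at a shared expansion point and reused for every mean in the group."""
    x0, h = expansion_point, 1e-4
    f0 = mismatch(x0, noise)
    # Numerical first and second derivatives at the shared expansion point.
    f1 = (mismatch(x0 + h, noise) - mismatch(x0 - h, noise)) / (2 * h)
    f2 = (mismatch(x0 + h, noise) - 2 * f0 + mismatch(x0 - h, noise)) / h**2
    d = np.asarray(clean_means, dtype=float) - x0
    # Same shared coefficients applied to each member of the group.
    return f0 + f1 * d + 0.5 * f2 * d**2

quiet = adapt_group([0.0], noise=-10.0, expansion_point=0.0)  # near-silent noise
loud = adapt_group([0.0], noise=10.0, expansion_point=0.0)    # dominant noise
```

Sharing one set of coefficients per group is what makes second-order adaptation cheap enough to apply to every distribution in a large acoustic model.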
-
Publication number: 20120253811
Abstract: A method for identifying a plurality of speakers in audio data and for decoding the speech spoken by said speakers; the method comprising: receiving speech; dividing the speech into segments as it is received; processing the received speech segment by segment in the order received to identify the speaker and to decode the speech, processing comprising: performing primary decoding of the segment using an acoustic model and a language model; obtaining segment parameters indicating the differences between the speaker of the segment and a base speaker during the primary decoding; comparing the segment parameters with a plurality of stored speaker profiles to determine the identity of the speaker, and selecting a speaker profile for said speaker; updating the selected speaker profile; performing a further decoding of the segment using a speaker independent acoustic model, adapted using the updated speaker profile; outputting the decoded speech for the identified speaker, wherein the speaker profiles are updated …
Type: Application
Filed: August 23, 2011
Publication date: October 4, 2012
Applicant: Kabushiki Kaisha Toshiba
Inventors: Catherine Breslin, Mark John Francis Gales, Kean Kheong Chin, Katherine Mary Knill