Patents by Inventor Kean Kheong Chin

Kean Kheong Chin has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240170006
    Abstract: A system for processing and presenting a conversation includes a sensor, a processor, and a presenter. The sensor is configured to capture an audio-form conversation. The processor is configured to automatically transform the audio-form conversation into a transformed conversation. The transformed conversation includes a synchronized text, wherein the synchronized text is synchronized with the audio-form conversation. The presenter is configured to present the transformed conversation including the synchronized text and the audio-form conversation. The presenter is further configured to present the transformed conversation to be navigable, searchable, assignable, editable, and shareable.
    Type: Application
    Filed: January 25, 2024
    Publication date: May 23, 2024
    Inventors: Yun Fu, Simon Lau, Kaisuke Nakajima, Julius Cheng, Gelei Chen, Sam Song Liang, James Mason Altreuter, Kean Kheong Chin, Zhenhao Ge, Hitesh Anand Gupta, Xiaoke Huang, James Francis McAteer, Brian Francis Williams, Tao Xing
  • Patent number: 11978472
    Abstract: A system for processing and presenting a conversation includes a sensor, a processor, and a presenter. The sensor is configured to capture an audio-form conversation. The processor is configured to automatically transform the audio-form conversation into a transformed conversation. The transformed conversation includes a synchronized text, wherein the synchronized text is synchronized with the audio-form conversation. The presenter is configured to present the transformed conversation including the synchronized text and the audio-form conversation. The presenter is further configured to present the transformed conversation to be navigable, searchable, assignable, editable, and shareable.
    Type: Grant
    Filed: March 23, 2021
    Date of Patent: May 7, 2024
    Assignee: Otter.ai, Inc.
    Inventors: Yun Fu, Simon Lau, Kaisuke Nakajima, Julius Cheng, Gelei Chen, Sam Song Liang, James Mason Altreuter, Kean Kheong Chin, Zhenhao Ge, Hitesh Anand Gupta, Xiaoke Huang, James Francis McAteer, Brian Francis Williams, Tao Xing
  • Publication number: 20240087574
    Abstract: Computer-implemented method and system for receiving and processing one or more moment-associating elements. For example, the computer-implemented method includes receiving the one or more moment-associating elements, transforming the one or more moment-associating elements into one or more pieces of moment-associating information, and transmitting at least one piece of the one or more pieces of moment-associating information.
    Type: Application
    Filed: November 20, 2023
    Publication date: March 14, 2024
    Inventors: Yun Fu, Simon Lau, Kaisuke Nakajima, Julius Cheng, Sam Song Liang, James Mason Altreuter, Kean Kheong Chin, Zhenhao Ge, Hitesh Anand Gupta, Xiaoke Huang, James Francis McAteer, Brian Francis Williams, Tao Xing
  • Patent number: 11869508
    Abstract: Computer-implemented method and system for receiving and processing one or more moment-associating elements. For example, the computer-implemented method includes receiving the one or more moment-associating elements, transforming the one or more moment-associating elements into one or more pieces of moment-associating information, and transmitting at least one piece of the one or more pieces of moment-associating information.
    Type: Grant
    Filed: April 28, 2021
    Date of Patent: January 9, 2024
    Assignee: Otter.ai, Inc.
    Inventors: Yun Fu, Simon Lau, Kaisuke Nakajima, Julius Cheng, Sam Song Liang, James Mason Altreuter, Kean Kheong Chin, Zhenhao Ge, Hitesh Anand Gupta, Xiaoke Huang, James Francis McAteer, Brian Francis Williams, Tao Xing
  • Publication number: 20220353102
    Abstract: Methods and systems for team cooperation with real-time recording of one or more moment-associating elements. For example, a method includes: delivering, in response to an instruction, an invitation to each member of one or more members associated with a workspace; granting, in response to acceptance of the invitation by one or more subscribers of the one or more members, subscription permission to the one or more subscribers; receiving the one or more moment-associating elements; transforming the one or more moment-associating elements into one or more pieces of moment-associating information; and transmitting at least one piece of the one or more pieces of moment-associating information to the one or more subscribers.
    Type: Application
    Filed: July 13, 2022
    Publication date: November 3, 2022
    Inventors: Simon Lau, Yun Fu, James Mason Altreuter, Brian Francis Williams, Xiaoke Huang, Tao Xing, Wen Sun, Tao Lu, Kaisuke Nakajima, Kean Kheong Chin, Hitesh Anand Gupta, Julius Cheng, Jing Pan, Sam Song Liang
  • Publication number: 20220343918
    Abstract: Computer-implemented method and system for processing and broadcasting one or more moment-associating elements. For example, the computer-implemented method includes granting subscription permission to one or more subscribers; receiving the one or more moment-associating elements; transforming the one or more moment-associating elements into one or more pieces of moment-associating information; and transmitting at least one piece of the one or more pieces of moment-associating information to the one or more subscribers.
    Type: Application
    Filed: July 13, 2022
    Publication date: October 27, 2022
    Inventors: Yun Fu, Tao Xing, Kaisuke Nakajima, Brian Francis Williams, James Mason Altreuter, Xiaoke Huang, Simon Lau, Sam Song Liang, Kean Kheong Chin, Wen Sun, Julius Cheng, Hitesh Anand Gupta
  • Patent number: 11431517
    Abstract: Methods and systems for team cooperation with real-time recording of one or more moment-associating elements. For example, a method includes: delivering, in response to an instruction, an invitation to each member of one or more members associated with a workspace; granting, in response to acceptance of the invitation by one or more subscribers of the one or more members, subscription permission to the one or more subscribers; receiving the one or more moment-associating elements; transforming the one or more moment-associating elements into one or more pieces of moment-associating information; and transmitting at least one piece of the one or more pieces of moment-associating information to the one or more subscribers.
    Type: Grant
    Filed: February 3, 2020
    Date of Patent: August 30, 2022
    Assignee: Otter.ai, Inc.
    Inventors: Simon Lau, Yun Fu, James Mason Altreuter, Brian Francis Williams, Xiaoke Huang, Tao Xing, Wen Sun, Tao Lu, Kaisuke Nakajima, Kean Kheong Chin, Hitesh Anand Gupta, Julius Cheng, Jing Pan, Sam Song Liang
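The invite/accept/broadcast flow in the abstract above can be sketched as a toy publish-subscribe class. The `Workspace` API below is an illustration of the described steps, not the patented implementation:

```python
class Workspace:
    def __init__(self) -> None:
        self.invited: set[str] = set()
        self.subscribers: set[str] = set()

    def invite(self, members: list[str]) -> None:
        # deliver an invitation to each member associated with the workspace
        self.invited.update(members)

    def accept(self, member: str) -> None:
        # grant subscription permission in response to acceptance
        if member not in self.invited:
            raise PermissionError(f"{member} was not invited")
        self.subscribers.add(member)

    def broadcast(self, info: str) -> dict[str, str]:
        # transmit the moment-associating information to every subscriber
        return {m: info for m in self.subscribers}

ws = Workspace()
ws.invite(["alice", "bob"])
ws.accept("alice")
assert ws.broadcast("meeting transcript") == {"alice": "meeting transcript"}
```

Only members who accept the invitation become subscribers, so the broadcast in the last line reaches "alice" but not "bob".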
  • Patent number: 11423911
    Abstract: Computer-implemented method and system for processing and broadcasting one or more moment-associating elements. For example, the computer-implemented method includes granting subscription permission to one or more subscribers; receiving the one or more moment-associating elements; transforming the one or more moment-associating elements into one or more pieces of moment-associating information; and transmitting at least one piece of the one or more pieces of moment-associating information to the one or more subscribers.
    Type: Grant
    Filed: October 10, 2019
    Date of Patent: August 23, 2022
    Assignee: Otter.ai, Inc.
    Inventors: Yun Fu, Tao Xing, Kaisuke Nakajima, Brian Francis Williams, James Mason Altreuter, Xiaoke Huang, Simon Lau, Sam Song Liang, Kean Kheong Chin, Wen Sun, Julius Cheng, Hitesh Anand Gupta
  • Publication number: 20210327454
    Abstract: A system for processing and presenting a conversation includes a sensor, a processor, and a presenter. The sensor is configured to capture an audio-form conversation. The processor is configured to automatically transform the audio-form conversation into a transformed conversation. The transformed conversation includes a synchronized text, wherein the synchronized text is synchronized with the audio-form conversation. The presenter is configured to present the transformed conversation including the synchronized text and the audio-form conversation. The presenter is further configured to present the transformed conversation to be navigable, searchable, assignable, editable, and shareable.
    Type: Application
    Filed: March 23, 2021
    Publication date: October 21, 2021
    Inventors: Yun Fu, Simon Lau, Kaisuke Nakajima, Julius Cheng, Gelei Chen, Sam Song Liang, James Mason Altreuter, Kean Kheong Chin, Zhenhao Ge, Hitesh Anand Gupta, Xiaoke Huang, James Francis McAteer, Brian Francis Williams, Tao Xing
  • Publication number: 20210319797
    Abstract: Computer-implemented method and system for receiving and processing one or more moment-associating elements. For example, the computer-implemented method includes receiving the one or more moment-associating elements, transforming the one or more moment-associating elements into one or more pieces of moment-associating information, and transmitting at least one piece of the one or more pieces of moment-associating information.
    Type: Application
    Filed: April 28, 2021
    Publication date: October 14, 2021
    Inventors: Yun Fu, Simon Lau, Kaisuke Nakajima, Julius Cheng, Sam Song Liang, James Mason Altreuter, Kean Kheong Chin, Zhenhao Ge, Hitesh Anand Gupta, Xiaoke Huang, James Francis McAteer, Brian Francis Williams, Tao Xing
  • Patent number: 11100943
    Abstract: A system for processing and presenting a conversation includes a sensor, a processor, and a presenter. The sensor is configured to capture an audio-form conversation. The processor is configured to automatically transform the audio-form conversation into a transformed conversation. The transformed conversation includes a synchronized text, wherein the synchronized text is synchronized with the audio-form conversation. The presenter is configured to present the transformed conversation including the synchronized text and the audio-form conversation. The presenter is further configured to present the transformed conversation to be navigable, searchable, assignable, editable, and shareable.
    Type: Grant
    Filed: February 14, 2019
    Date of Patent: August 24, 2021
    Assignee: Otter.ai, Inc.
    Inventors: Yun Fu, Simon Lau, Kaisuke Nakajima, Julius Cheng, Gelei Chen, Sam Song Liang, James Mason Altreuter, Kean Kheong Chin, Zhenhao Ge, Hitesh Anand Gupta, Xiaoke Huang, James Francis McAteer, Brian Francis Williams, Tao Xing
  • Patent number: 11024316
    Abstract: Computer-implemented method and system for receiving and processing one or more moment-associating elements. For example, the computer-implemented method includes receiving the one or more moment-associating elements, transforming the one or more moment-associating elements into one or more pieces of moment-associating information, and transmitting at least one piece of the one or more pieces of moment-associating information.
    Type: Grant
    Filed: May 3, 2019
    Date of Patent: June 1, 2021
    Assignee: Otter.ai, Inc.
    Inventors: Yun Fu, Simon Lau, Kaisuke Nakajima, Julius Cheng, Sam Song Liang, James Mason Altreuter, Kean Kheong Chin, Zhenhao Ge, Hitesh Anand Gupta, Xiaoke Huang, James Francis McAteer, Brian Francis Williams, Tao Xing
  • Patent number: 9454963
    Abstract: A text-to-speech method for simulating a plurality of different voice characteristics includes dividing inputted text into a sequence of acoustic units; selecting voice characteristics for the inputted text; converting the sequence of acoustic units to a sequence of speech vectors using an acoustic model having a plurality of model parameters provided in clusters each having at least one sub-cluster and describing probability distributions which relate an acoustic unit to a speech vector; and outputting the sequence of speech vectors as audio with the selected voice characteristics. A parameter of a predetermined type of each probability distribution is expressed as a weighted sum of parameters of the same type using voice characteristic dependent weighting. In converting the sequence of acoustic units to a sequence of speech vectors, the voice characteristic dependent weights for the selected voice characteristics are retrieved for each cluster such that there is one weight per sub-cluster.
    Type: Grant
    Filed: March 13, 2013
    Date of Patent: September 27, 2016
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Javier Latorre-Martinez, Vincent Ping Leung Wan, Kean Kheong Chin, Mark John Francis Gales, Katherine Mary Knill, Masami Akamine, Byung Ha Chung
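The weighted-sum construction in the abstract above can be sketched numerically: a Gaussian parameter (here a mean vector) is built as a voice-characteristic-dependent weighted combination of per-cluster parameters. All values below are made up for illustration:

```python
import numpy as np

# Hypothetical cluster means for one Gaussian component: each row is the
# mean vector contributed by one cluster (sizes and values are illustrative).
cluster_means = np.array([
    [1.0, 2.0, 3.0],   # cluster 1 (e.g. a bias cluster)
    [0.5, -1.0, 0.0],  # cluster 2
    [0.0, 0.3, -0.2],  # cluster 3
])

# Voice-characteristic-dependent weights, one weight per (sub-)cluster;
# changing these weights changes the simulated voice characteristics.
voice_weights = np.array([1.0, 0.4, -0.6])

# The mean used for synthesis is the weighted sum of the cluster means.
adapted_mean = voice_weights @ cluster_means
```

With these numbers, `adapted_mean` works out to roughly `[1.2, 1.42, 3.12]`; a different voice characteristic would simply swap in a different weight vector while reusing the same cluster parameters.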
  • Patent number: 9269347
    Abstract: A text-to-speech method configured to output speech having a selected speaker voice and a selected speaker attribute, including: inputting text; dividing the inputted text into a sequence of acoustic units; selecting a speaker for the inputted text; selecting a speaker attribute for the inputted text; converting the sequence of acoustic units to a sequence of speech vectors using an acoustic model; and outputting the sequence of speech vectors as audio with the selected speaker voice and speaker attribute. The acoustic model includes a first set of parameters relating to speaker voice and a second set of parameters relating to speaker attributes; the two sets do not overlap. Selecting the speaker voice includes selecting parameters from the first set, and selecting the speaker attribute includes selecting parameters from the second set.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: February 23, 2016
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Javier Latorre-Martinez, Vincent Ping Leung Wan, Kean Kheong Chin, Mark John Francis Gales, Katherine Mary Knill, Masami Akamine
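The non-overlapping parameter sets in the abstract above can be sketched as two independent lookup tables, one for speaker voice and one for speaker attribute, whose selections are then combined. This is a deliberately simplified illustration; the names, values, and the concatenation step are assumptions, not the patented model:

```python
# Hypothetical, non-overlapping parameter sets (all values illustrative).
speaker_params = {"spk_a": [0.2, 1.1], "spk_b": [-0.4, 0.7]}
attribute_params = {"neutral": [0.0, 0.0], "angry": [0.3, -0.2]}

def select(speaker: str, attribute: str) -> list[float]:
    # Speaker voice and speaker attribute are chosen independently from
    # disjoint parameter sets, then combined for the acoustic model
    # (combination here is simple concatenation, purely for illustration).
    return speaker_params[speaker] + attribute_params[attribute]

assert select("spk_a", "angry") == [0.2, 1.1, 0.3, -0.2]
```

Because the two sets do not overlap, any speaker can be paired with any attribute: the same "angry" parameters apply unchanged to "spk_a" or "spk_b".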
  • Patent number: 8620655
    Abstract: A speech processing method comprising: receiving a speech input which comprises a sequence of feature vectors; and determining the likelihood of a sequence of words arising from the sequence of feature vectors using an acoustic model and a language model. The acoustic model has a plurality of model parameters relating to the probability distribution of a word, or part thereof, being related to a feature vector. The speech input is a mismatched speech input, received from a speaker in an environment that is not matched to the speaker or environment under which the acoustic model was trained, and the acoustic model is adapted to the mismatched speech input. The method further comprises determining the likelihood of a sequence of features occurring in a given language using the language model, and combining the likelihoods determined by the acoustic model and the language model.
    Type: Grant
    Filed: August 10, 2011
    Date of Patent: December 31, 2013
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Haitian Xu, Kean Kheong Chin, Mark John Francis Gales
  • Patent number: 8612224
    Abstract: A method for identifying a plurality of speakers in audio data and for decoding the speech spoken by said speakers, the method comprising: receiving speech; dividing the speech into segments as it is received; and processing the received speech segment by segment, in the order received, to identify the speaker and to decode the speech. The processing comprises: performing primary decoding of the segment using an acoustic model and a language model; obtaining segment parameters indicating the differences between the speaker of the segment and a base speaker during the primary decoding; comparing the segment parameters with a plurality of stored speaker profiles to determine the identity of the speaker, and selecting a speaker profile for said speaker; updating the selected speaker profile; performing a further decoding of the segment using a speaker-independent acoustic model adapted using the updated speaker profile; and outputting the decoded speech for the identified speaker, wherein the speaker profiles are updated.
    Type: Grant
    Filed: August 23, 2011
    Date of Patent: December 17, 2013
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Catherine Breslin, Mark John Francis Gales, Kean Kheong Chin, Katherine Mary Knill
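The profile-matching step in the abstract above can be sketched as a simple similarity search: compare the segment's adaptation parameters against each stored speaker profile and keep the closest match. Cosine similarity and the profile vectors below are illustrative choices, not the patented method:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two parameter vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify(segment_params: list[float], profiles: dict[str, list[float]]) -> str:
    # pick the stored speaker profile closest to the segment's parameters
    return max(profiles, key=lambda name: cosine(segment_params, profiles[name]))

profiles = {"spk1": [1.0, 0.0, 0.2], "spk2": [0.1, 1.0, -0.3]}
assert identify([0.9, 0.1, 0.25], profiles) == "spk1"
```

In the patent's scheme the winning profile would then be updated with the new segment's parameters before the segment is re-decoded with the adapted model.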
  • Publication number: 20130262119
    Abstract: A text-to-speech method configured to output speech having a selected speaker voice and a selected speaker attribute, including: inputting text; dividing the inputted text into a sequence of acoustic units; selecting a speaker for the inputted text; selecting a speaker attribute for the inputted text; converting the sequence of acoustic units to a sequence of speech vectors using an acoustic model; and outputting the sequence of speech vectors as audio with the selected speaker voice and speaker attribute. The acoustic model includes a first set of parameters relating to speaker voice and a second set of parameters relating to speaker attributes; the two sets do not overlap. Selecting the speaker voice includes selecting parameters from the first set, and selecting the speaker attribute includes selecting parameters from the second set.
    Type: Application
    Filed: March 15, 2013
    Publication date: October 3, 2013
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Javier Latorre-Martinez, Vincent Ping Leung Wan, Kean Kheong Chin, Mark John Francis Gales, Katherine Mary Knill, Masami Akamine
  • Publication number: 20130262109
    Abstract: A text-to-speech method for simulating a plurality of different voice characteristics includes dividing inputted text into a sequence of acoustic units; selecting voice characteristics for the inputted text; converting the sequence of acoustic units to a sequence of speech vectors using an acoustic model having a plurality of model parameters provided in clusters each having at least one sub-cluster and describing probability distributions which relate an acoustic unit to a speech vector; and outputting the sequence of speech vectors as audio with the selected voice characteristics. A parameter of a predetermined type of each probability distribution is expressed as a weighted sum of parameters of the same type using voice characteristic dependent weighting. In converting the sequence of acoustic units to a sequence of speech vectors, the voice characteristic dependent weights for the selected voice characteristics are retrieved for each cluster such that there is one weight per sub-cluster.
    Type: Application
    Filed: March 13, 2013
    Publication date: October 3, 2013
    Inventors: Javier Latorre-Martinez, Vincent Ping Leung Wan, Kean Kheong Chin, Mark John Francis Gales, Katherine Mary Knill, Masami Akamine, Byung Ha Chung
  • Patent number: 8417522
    Abstract: A speech recognition method includes receiving a speech input signal in a first noise environment which includes a sequence of observations, determining the likelihood of a sequence of words arising from the sequence of observations using an acoustic model, adapting the model trained in a second noise environment to that of the first environment, wherein adapting the model trained in the second environment to that of the first environment includes using second order or higher order Taylor expansion coefficients derived for a group of probability distributions and the same expansion coefficient is used for the whole group.
    Type: Grant
    Filed: April 20, 2010
    Date of Patent: April 9, 2013
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Haitian Xu, Kean Kheong Chin
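The Taylor-expansion idea in the abstract above can be sketched with the standard log-domain mismatch function used in model-based noise compensation: expand the mismatch function around a shared point `x0`, so that one set of expansion coefficients can be reused for a whole group of Gaussian means. The function and values are illustrative, not taken from the patent:

```python
import math

def mismatch(x: float, n: float) -> float:
    # log-domain mismatch between clean speech x and additive noise n
    return x + math.log1p(math.exp(n - x))

def taylor_coeffs(x0: float, n: float) -> tuple[float, float, float]:
    # value, first-, and second-order derivatives of the mismatch function
    # at x0; in the patent's scheme one set of coefficients like this is
    # shared by a whole group of probability distributions
    g = 1.0 / (1.0 + math.exp(n - x0))   # first derivative
    h = g * (1.0 - g)                    # second derivative
    return mismatch(x0, n), g, h

def adapt_mean(mu: float, x0: float, n: float) -> float:
    # second-order Taylor approximation of the noisy-speech mean
    f0, g, h = taylor_coeffs(x0, n)
    d = mu - x0
    return f0 + g * d + 0.5 * h * d * d
```

Evaluating the expansion once at `x0` and applying it to every mean in the group avoids recomputing the nonlinearity per Gaussian; for means close to `x0` the second-order approximation tracks the exact mismatch closely.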
  • Publication number: 20120253811
    Abstract: A method for identifying a plurality of speakers in audio data and for decoding the speech spoken by said speakers, the method comprising: receiving speech; dividing the speech into segments as it is received; and processing the received speech segment by segment, in the order received, to identify the speaker and to decode the speech. The processing comprises: performing primary decoding of the segment using an acoustic model and a language model; obtaining segment parameters indicating the differences between the speaker of the segment and a base speaker during the primary decoding; comparing the segment parameters with a plurality of stored speaker profiles to determine the identity of the speaker, and selecting a speaker profile for said speaker; updating the selected speaker profile; performing a further decoding of the segment using a speaker-independent acoustic model adapted using the updated speaker profile; and outputting the decoded speech for the identified speaker, wherein the speaker profiles are updated.
    Type: Application
    Filed: August 23, 2011
    Publication date: October 4, 2012
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Catherine Breslin, Mark John Francis Gales, Kean Kheong Chin, Katherine Mary Knill