Patents by Inventor Setsuo Yamada
Setsuo Yamada has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250166584
Abstract: A display data generation device includes an input unit that receives input of target data including a text sequence and annotation information corresponding to texts included in the text sequence, and a display preparation unit that determines, on the basis of the annotation information, annotation expression information indicating a background color of a display screen of a display device and a position and a range in which a corresponding background color is displayed for expressing correspondence relationship between the texts and the annotation information in a case where the display device displays the texts, and generates display data for causing the text sequence and the annotation information to be displayed according to a sequence in the text sequence, the display data being for causing the background color indicated by the annotation expression information to be displayed at the position and the range indicated by the annotation expression information.
Type: Application
Filed: January 17, 2025
Publication date: May 22, 2025
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Setsuo YAMADA, Takaaki HASEGAWA, Kazuyuki ISO, Masayuki SUGIZAKI
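The abstract above describes mapping annotation information to a background color and to the position and range where that color is shown. The patent text here does not include source code; the following is a minimal sketch of that idea, assuming an HTML-style renderer, character-offset annotations, and an invented label-to-color table (render_annotated_text, LABEL_COLORS, and the label names are all hypothetical).
```python
# Hypothetical sketch: turn (text, annotations) into display data in which each
# annotated span gets a background color, loosely following the abstract above.
# Not the patented implementation.
from html import escape

LABEL_COLORS = {"PERSON": "#ffd9d9", "DATE": "#d9e8ff"}  # assumed label set

def render_annotated_text(text: str, annotations: list[dict]) -> str:
    """annotations: [{"start": int, "end": int, "label": str}, ...], non-overlapping."""
    parts, cursor = [], 0
    for ann in sorted(annotations, key=lambda a: a["start"]):
        color = LABEL_COLORS.get(ann["label"], "#eeeeee")
        parts.append(escape(text[cursor:ann["start"]]))
        parts.append(
            f'<span style="background-color:{color}" title="{escape(ann["label"])}">'
            f'{escape(text[ann["start"]:ann["end"]])}</span>'
        )
        cursor = ann["end"]
    parts.append(escape(text[cursor:]))
    return "".join(parts)

if __name__ == "__main__":
    text = "Setsuo Yamada filed the application on March 30, 2021."
    anns = [{"start": 0, "end": 13, "label": "PERSON"},
            {"start": 39, "end": 53, "label": "DATE"}]
    print(render_annotated_text(text, anns))
```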
-
Patent number: 12230231
Abstract: A display data generation device includes an input unit that receives input of target data including a text sequence and annotation information corresponding to texts included in the text sequence, and a display preparation unit that determines, on the basis of the annotation information, annotation expression information indicating a background color of a display screen of a display device and a position and a range in which a corresponding background color is displayed for expressing correspondence relationship between the texts and the annotation information in a case where the display device displays the texts, and generates display data for causing the text sequence and the annotation information to be displayed according to a sequence in the text sequence, the display data being for causing the background color indicated by the annotation expression information to be displayed at the position and the range indicated by the annotation expression information.
Type: Grant
Filed: March 30, 2021
Date of Patent: February 18, 2025
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Setsuo Yamada, Takaaki Hasegawa, Kazuyuki Iso, Masayuki Sugizaki
-
Publication number: 20250036688
Abstract: Disclosed is a search result display device (1) comprising: a regard prediction unit (12) configured to predict, from a dialogue between a customer and a service person, a regard of the customer; a keyword extraction unit (17) configured to extract a keyword from the regard; and a display controller (13) configured to cause a display (14) to display the dialogue and a search result obtained from the database (21) with the keyword as a search query, wherein when a string has been designated by the service person, the display controller (13) causes the display (14) to display a search result obtained from the database (21) using a search query that incorporates the string, until a search result automatic update instruction is given by the service person.
Type: Application
Filed: October 15, 2024
Publication date: January 30, 2025
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Yoshiaki NODA, Setsuo YAMADA, Takaaki HASEGAWA
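As a rough illustration of the update behaviour described in this abstract, the sketch below keeps automatic keyword-driven searches running until the service person designates a string, then searches with that string and suppresses automatic updates until a resume instruction arrives. The class, method names, and toy FAQ backend are assumptions, not the patented implementation.
```python
# Hypothetical sketch of the update logic described above: automatic searches use
# keywords from the predicted regard, but once the service person designates a
# string, that string drives the query and automatic updates are suspended until
# an explicit resume instruction.
class SearchResultController:
    def __init__(self, search_fn):
        self.search_fn = search_fn          # e.g. a database/FAQ search function
        self.designated_string = None       # string picked by the service person

    def on_regard_keywords(self, keywords):
        """Called whenever new keywords are extracted from the predicted regard."""
        if self.designated_string is not None:
            return None                     # automatic update suppressed
        return self.search_fn(" ".join(keywords))

    def on_string_designated(self, string):
        """Service person highlighted a string; search with it immediately."""
        self.designated_string = string
        return self.search_fn(string)

    def on_resume_automatic_updates(self, keywords):
        """Service person requested automatic updates again."""
        self.designated_string = None
        return self.search_fn(" ".join(keywords))

# Usage with a toy search backend:
faq = {"invoice": ["How to reissue an invoice"], "password": ["Password reset steps"]}
controller = SearchResultController(lambda q: [a for k, v in faq.items() if k in q for a in v])
print(controller.on_regard_keywords(["invoice"]))          # automatic search
print(controller.on_string_designated("password"))          # manual override
print(controller.on_regard_keywords(["invoice"]))            # None: suppressed
print(controller.on_resume_automatic_updates(["invoice"]))   # automatic again
```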
-
Publication number: 20250029613
Abstract: A speech section extraction device includes: a speech section identification unit that identifies a speech section including at least one speech from speech text data including speeches of two or more people; a speech section type determination unit that determines a speech section type for each speech section that has been identified; a speech type extraction unit that extracts, from the speech text data, a speech type of each speech included in the speech text data; and a speech section extraction unit that extracts an important speech section from among the speech sections that have been identified, based on a combination and transition of the speech section types that have been determined and a combination and transition of the speech types that have been extracted.
Type: Application
Filed: December 3, 2021
Publication date: January 23, 2025
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Satoshi MIEDA, Setsuo YAMADA, Takafumi HIKICHI
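The extraction step described above combines the section type with the types and transitions of the speeches inside it. The sketch below illustrates one way such combination/transition rules could look; the section types, speech types, and rules are invented for illustration only.
```python
# Hypothetical sketch: mark a speech section as "important" from the combination
# of its section type and the types of the speeches it contains, in the spirit
# of the abstract above. The labels and rules are invented.
from dataclasses import dataclass

@dataclass
class SpeechSection:
    section_type: str            # e.g. "inquiry", "confirmation", "chat"
    speech_types: list[str]      # per-speech labels, e.g. ["question", "answer"]

def extract_important_sections(sections: list[SpeechSection]) -> list[SpeechSection]:
    important = []
    for i, sec in enumerate(sections):
        prev_type = sections[i - 1].section_type if i > 0 else None
        # Example rule: an inquiry section containing a question followed by an
        # answer, or any confirmation section right after an inquiry section.
        has_qa = any(a == "question" and b == "answer"
                     for a, b in zip(sec.speech_types, sec.speech_types[1:]))
        if (sec.section_type == "inquiry" and has_qa) or \
           (sec.section_type == "confirmation" and prev_type == "inquiry"):
            important.append(sec)
    return important

sections = [SpeechSection("chat", ["greeting"]),
            SpeechSection("inquiry", ["question", "answer"]),
            SpeechSection("confirmation", ["statement"])]
print(extract_important_sections(sections))
```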
-
Publication number: 20250029617
Abstract: A speech section classification device includes: a speech section estimation unit that estimates a speech section from speech text data including speeches of two or more people; a speech type estimation unit that estimates a speech type of each speech included in the speech section estimated by the speech section estimation unit; and a speech section classification unit that classifies the speech section estimated by the speech section estimation unit, using the speech type of each speech estimated by the speech type estimation unit and a speech section classification rule determined in advance as a rule for classifying speech sections on the basis of the speech type.
Type: Application
Filed: December 3, 2021
Publication date: January 23, 2025
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Takafumi HIKICHI, Setsuo YAMADA, Satoshi MIEDA
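A minimal sketch of the rule-based classification step described above, assuming each speech section is represented by the list of speech types estimated for its speeches; the rule table and labels are hypothetical.
```python
# Hypothetical sketch of the rule-based step: classify an estimated speech
# section from the estimated types of its speeches, using a predefined
# speech-section classification rule. Labels and rules are assumptions.
CLASSIFICATION_RULES = [
    # (required speech types, resulting section class)
    ({"question", "answer"}, "inquiry-response"),
    ({"apology"}, "complaint-handling"),
    ({"greeting"}, "opening"),
]

def classify_section(speech_types: list[str]) -> str:
    present = set(speech_types)
    for required, section_class in CLASSIFICATION_RULES:
        if required <= present:          # all required types observed
            return section_class
    return "other"

print(classify_section(["greeting"]))                          # -> opening
print(classify_section(["question", "statement", "answer"]))   # -> inquiry-response
```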
-
Publication number: 20250022469
Abstract: A classification device includes a first identification unit that receives, as input, utterance data including an utterance of a first speaker and an utterance of a second speaker in a dialogue and, using a first identification model/rule, identifies respective utterance types of the utterances included in the utterance data, a second identification unit that receives, as input, the utterance data and the utterance type of each of the utterances, using a second identification model/rule preset according to the utterance types, identifies a first identification utterance indicating an inquiry and a second identification utterance in response to the first identification utterance in the utterance data, and outputs pair data of utterances indicating the first identification utterance and the second identification utterance, and a result classification unit that receives, as input, the output pair data of utterances, and, using a result classification model/rule, classifies a response result of the dialogue…
Type: Application
Filed: December 1, 2021
Publication date: January 16, 2025
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Setsuo YAMADA, Takafumi HIKICHI, Satoshi MIEDA
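Loosely following the pipeline in this abstract, the sketch below identifies utterance types, pairs each inquiry with its response, and classifies the response result of the dialogue from the resulting pair data. The toy rules stand in for the identification and classification models/rules, which the abstract does not specify.
```python
# Hypothetical sketch of the pipeline above: identify utterance types, pair each
# inquiry with its response, then classify the response result of the dialogue
# from those pairs. All models/rules are replaced by toy rules for illustration.
def identify_types(utterances: list[str]) -> list[str]:
    return ["inquiry" if u.rstrip().endswith("?") else "response" for u in utterances]

def extract_pairs(utterances: list[str], types: list[str]) -> list[tuple[str, str]]:
    pairs, pending = [], None
    for utt, typ in zip(utterances, types):
        if typ == "inquiry":
            pending = utt
        elif typ == "response" and pending is not None:
            pairs.append((pending, utt))
            pending = None
    return pairs

def classify_result(pairs: list[tuple[str, str]]) -> str:
    # Toy stand-in for the result classification model/rule.
    return "resolved" if any("yes" in r.lower() for _, r in pairs) else "unresolved"

dialogue = ["Can you waive the fee?", "Yes, we can apply a credit this month."]
types = identify_types(dialogue)
print(classify_result(extract_pairs(dialogue, types)))   # -> resolved
```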
-
Patent number: 12147477
Abstract: Disclosed is a search result display device (1) comprising: a regard prediction unit (12) configured to predict, from a dialogue between a customer and a service person, a regard of the customer; a keyword extraction unit (17) configured to extract a keyword from the regard; and a display controller (13) configured to cause a display (14) to display the dialogue and a search result obtained from the database (21) with the keyword as a search query, wherein when a string has been designated by the service person, the display controller (13) causes the display (14) to display a search result obtained from the database (21) using a search query that incorporates the string, until a search result automatic update instruction is given by the service person.
Type: Grant
Filed: August 14, 2019
Date of Patent: November 19, 2024
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Yoshiaki Noda, Setsuo Yamada, Takaaki Hasegawa
-
Patent number: 12141207
Abstract: The present invention allows appropriate acquisition of focus points in a dialogue.
Type: Grant
Filed: August 14, 2019
Date of Patent: November 12, 2024
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Setsuo Yamada, Yoshiaki Noda, Takaaki Hasegawa
-
Publication number: 20240194165
Abstract: A display data generation device includes an input unit that receives input of target data including a text sequence and annotation information corresponding to texts included in the text sequence, and a display preparation unit that determines, on the basis of the annotation information, annotation expression information indicating a background color of a display screen of a display device and a position and a range in which a corresponding background color is displayed for expressing correspondence relationship between the texts and the annotation information in a case where the display device displays the texts, and generates display data for causing the text sequence and the annotation information to be displayed according to a sequence in the text sequence, the display data being for causing the background color indicated by the annotation expression information to be displayed at the position and the range indicated by the annotation expression information.
Type: Application
Filed: March 30, 2021
Publication date: June 13, 2024
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Setsuo YAMADA, Takaaki HASEGAWA, Kazuyuki ISO, Masayuki SUGIZAKI
-
Patent number: 11996119
Abstract: The end-of-talk prediction device (10) of the present invention comprises: a divide unit (11) for dividing, using delimiter symbols indicating delimitations within segments, a string in which the utterance in the dialog has been text-converted by speech recognition, the delimiter symbols being included in the result of the speech recognition; and an end-of-talk prediction unit (12) for predicting, using an end-of-talk prediction model (14), whether the utterance corresponding to the divided string divided by the divide unit (11) is an end-of-talk utterance of the speaker.
Type: Grant
Filed: August 14, 2019
Date of Patent: May 28, 2024
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Setsuo Yamada, Yoshiaki Noda, Takaaki Hasegawa
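A minimal sketch of the two steps named in this abstract, assuming the recognizer emits punctuation-like delimiter symbols: split the recognized string at those delimiters, then ask a predictor whether each fragment is an end-of-talk utterance. The delimiter set and the keyword-based predictor are placeholders for the end-of-talk prediction model.
```python
# Hypothetical sketch: split an ASR transcript at delimiter symbols included in
# the recognition result, then ask a predictor whether each fragment ends the
# speaker's turn. The regex and heuristic predictor are illustrative only.
import re

DELIMITERS = r"[、。,.?？!！]"   # assumed delimiter symbols from the ASR output

def divide(transcript: str) -> list[str]:
    parts = re.split(DELIMITERS, transcript)
    return [p.strip() for p in parts if p.strip()]

def is_end_of_talk(fragment: str) -> bool:
    # Stand-in for the end-of-talk prediction model: a trivial keyword rule.
    return any(cue in fragment for cue in ("that is all", "thank you", "以上です"))

transcript = "I would like to change my address, the new one is 1-2-3 Example Street, that is all thank you."
for fragment in divide(transcript):
    print(is_end_of_talk(fragment), fragment)
```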
-
Patent number: 11955111
Abstract: To improve prediction accuracy of utterance types in a dialog. A learning data generation device (10) according to the present invention comprises: a sort unit (11) configured to perform, based on information that is appended to utterances in a dialog amongst more than one speaker and that is indicative of a dialogue scene (a scene in which the utterances in the dialog were made), sorting regarding whether the utterances are to be targets for generation of the learning data, wherein the sort unit (11) is configured to exclude, from the targets for generation of learning data, utterances of a dialogue scene that includes utterances similar to utterances of the particular type.
Type: Grant
Filed: August 14, 2019
Date of Patent: April 9, 2024
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Setsuo Yamada, Yoshiaki Noda, Takaaki Hasegawa
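The sorting described above can be pictured as filtering utterances by their appended dialogue-scene label, dropping scenes known to contain utterances that are easily confused with the target type. The scene names and excluded set below are invented examples.
```python
# Hypothetical sketch of the sorting step: keep only utterances whose dialogue
# scene is not in an excluded set, i.e. scenes known to contain utterances that
# are easily confused with the target type. Scene names are invented examples.
EXCLUDED_SCENES = {"contact-confirmation"}   # e.g. scenes resembling "regard" utterances

def select_for_learning(utterances: list[dict]) -> list[dict]:
    """utterances: [{"text": str, "scene": str}, ...] with scene labels appended."""
    return [u for u in utterances if u["scene"] not in EXCLUDED_SCENES]

data = [{"text": "I'd like to cancel my plan.", "scene": "regard"},
        {"text": "May I confirm your phone number?", "scene": "contact-confirmation"}]
print(select_for_learning(data))   # only the "regard" utterance remains
```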
-
Patent number: 11922927
Abstract: The learning data generation device (10) of the present invention comprises: an end-of-talk predict unit (11) for performing: a first prediction in which it is predicted, based on utterance information on an utterance in the dialog, using the end-of-talk prediction model (16), whether the utterance is an end-of-talk utterance of the speaker; and a second prediction in which it is predicted, based on one or more prescribed rules, whether the utterance is an end-of-talk utterance; and a training data generate unit (13) for generating, when, in the first prediction, it is predicted that the utterance is not an end-of-talk utterance and, in the second prediction, it is predicted that the utterance is an end-of-talk utterance, for the utterance information on the utterance, learning data to which training data indicating that the utterance is an end-of-talk utterance is appended.
Type: Grant
Filed: August 14, 2019
Date of Patent: March 5, 2024
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Yoshiaki Noda, Setsuo Yamada, Takaaki Hasegawa
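A small sketch of the disagreement condition in this abstract: when the model-based (first) prediction says an utterance is not end-of-talk but the rule-based (second) prediction says it is, a new learning example labeled end-of-talk is generated. The placeholder model and rules below are illustrative only.
```python
# Hypothetical sketch: when the model-based prediction and the rule-based
# prediction disagree in the specific way described above (model: not end of
# talk, rules: end of talk), emit a training example labelled end-of-talk.
def model_predicts_end(utterance: str) -> bool:
    return False                                  # placeholder model output

def rules_predict_end(utterance: str) -> bool:
    return utterance.rstrip().endswith(("thank you.", "goodbye."))

def generate_learning_data(utterances: list[str]) -> list[dict]:
    examples = []
    for utt in utterances:
        if not model_predicts_end(utt) and rules_predict_end(utt):
            examples.append({"text": utt, "label": "end-of-talk"})
    return examples

print(generate_learning_data(["Let me check that for you.",
                              "That completes the procedure, thank you."]))
```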
-
Patent number: 11749258
Abstract: A display device for displaying an utterance and information extracted from the utterance includes an input/output interface configured to: display display blocks for a series of dialogue scenes in chronological order of acquisition of utterances, each of the series of dialogue scenes being indicated by dialogue scene data stored in correspondence with utterance data indicating a corresponding one of the utterances; display dialogue scene information within each of the display blocks for the series of dialogue scenes, the dialogue scene information including the corresponding one of the utterances, an utterance type indicating a type of the corresponding one of the utterances, or utterance focus point information of the corresponding one of the utterances; and switch, based on an operation input, between displaying the dialogue scene information and not displaying the dialogue scene information within each of the display blocks for the series of dialogue scenes.
Type: Grant
Filed: September 21, 2022
Date of Patent: September 5, 2023
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Yoshiaki Noda, Setsuo Yamada, Takaaki Hasegawa
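The display structure described here (chronological dialogue-scene blocks whose scene information can be shown or hidden by an operation input) might be modeled roughly as below; the block fields and rendering are assumptions, not the patented interface.
```python
# Hypothetical sketch of the display-block structure: utterances grouped into
# chronological dialogue-scene blocks, each of which can be expanded or
# collapsed by an operation input. Field names are assumed for illustration.
from dataclasses import dataclass, field

@dataclass
class DialogueSceneBlock:
    scene: str                                  # e.g. "opening", "inquiry", "closing"
    utterances: list[str] = field(default_factory=list)
    expanded: bool = False                      # whether scene info is displayed

    def toggle(self) -> None:
        self.expanded = not self.expanded

    def render(self) -> str:
        header = f"[{self.scene}]"
        if not self.expanded:
            return header
        return header + "\n  " + "\n  ".join(self.utterances)

blocks = [DialogueSceneBlock("opening", ["Hello, how can I help you?"]),
          DialogueSceneBlock("inquiry", ["I'd like to change my plan."])]
blocks[1].toggle()                              # expand the inquiry block
print("\n".join(b.render() for b in blocks))
```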
-
Publication number: 20230014267
Abstract: A display device for displaying an utterance and information extracted from the utterance includes an input/output interface configured to: display display blocks for a series of dialogue scenes in chronological order of acquisition of utterances, each of the series of dialogue scenes being indicated by dialogue scene data stored in correspondence with utterance data indicating a corresponding one of the utterances; display dialogue scene information within each of the display blocks for the series of dialogue scenes, the dialogue scene information including the corresponding one of the utterances, an utterance type indicating a type of the corresponding one of the utterances, or utterance focus point information of the corresponding one of the utterances; and switch, based on an operation input, between displaying the dialogue scene information and not displaying the dialogue scene information within each of the display blocks for the series of dialogue scenes.
Type: Application
Filed: September 21, 2022
Publication date: January 19, 2023
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Yoshiaki NODA, Setsuo YAMADA, Takaaki HASEGAWA
-
Patent number: 11482209
Abstract: The present invention makes it possible to efficiently create an appropriate dialogue history. This device for supporting creation of dialogue history (1) is provided with: a dialogue utterance focus point information store (19) which, according to utterance data indicating utterances, stores dialogue scene data indicating dialogue scenes of the utterances, utterance types indicating the types of the utterances, and utterance focus point information of the utterances; and an input/output interface (20) which, with respect to each of the dialogue scenes indicated by the dialogue scene data stored in the dialogue utterance focus point information store (19), causes a display device to display any one or more of the utterances, utterance types, and utterance focus point information.
Type: Grant
Filed: August 14, 2019
Date of Patent: October 25, 2022
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Yoshiaki Noda, Setsuo Yamada, Takaaki Hasegawa
-
Publication number: 20210312944
Abstract: The end-of-talk prediction device (10) of the present invention comprises: a divide unit (11) for dividing, using delimiter symbols indicating delimitations within segments, a string in which the utterance in the dialog has been text-converted by speech recognition, the delimiter symbols being included in the result of the speech recognition; and an end-of-talk prediction unit (12) for predicting, using an end-of-talk prediction model (14), whether the utterance corresponding to the divided string divided by the divide unit (11) is an end-of-talk utterance of the speaker.
Type: Application
Filed: August 14, 2019
Publication date: October 7, 2021
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Setsuo YAMADA, Yoshiaki NODA, Takaaki HASEGAWA
-
Publication number: 20210312908
Abstract: The learning data generation device (10) of the present invention comprises: an end-of-talk predict unit (11) for performing: a first prediction in which it is predicted, based on utterance information on an utterance in the dialog, using the end-of-talk prediction model (16), whether the utterance is an end-of-talk utterance of the speaker; and a second prediction in which it is predicted, based on one or more prescribed rules, whether the utterance is an end-of-talk utterance; and a training data generate unit (13) for generating, when, in the first prediction, it is predicted that the utterance is not an end-of-talk utterance and, in the second prediction, it is predicted that the utterance is an end-of-talk utterance, for the utterance information on the utterance, learning data to which training data indicating that the utterance is an end-of-talk utterance is appended.
Type: Application
Filed: August 14, 2019
Publication date: October 7, 2021
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Yoshiaki NODA, Setsuo YAMADA, Takaaki HASEGAWA
-
Patent number: 11087745
Abstract: To provide a speech recognition results re-ranking technology for re-ranking speech recognition results so as to render speech recognition results suitable for intended use of speech recognition while reducing preparation costs required prior to execution of re-ranking processing of speech recognition results. A speech recognition results re-ranking device includes: a speech recognition unit 210 that generates a speech recognition result set with recognition score from speech data; and a re-ranking unit 220 that generates a speech recognition result set with integrated score from the speech recognition result set with recognition score by using a word vector expression database, a cluster center vector expression database, and a normalized knowledge information word DF value database.
Type: Grant
Filed: December 19, 2017
Date of Patent: August 10, 2021
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Takashi Nakamura, Nobuaki Hiroshima, Setsuo Yamada
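A highly simplified sketch of the re-ranking idea: each recognition hypothesis receives an integrated score that mixes the recognizer's score with a domain-fit score, and hypotheses are re-ordered by it. Here the domain-fit score is a cosine similarity against a cluster-center vector; the weighting, vectors, and databases named in the abstract are replaced by placeholders.
```python
# Very simplified, hypothetical sketch of re-ranking: each speech-recognition
# hypothesis gets an integrated score that mixes the recognizer's own score with
# a domain-fit score (cosine similarity to a cluster-centre vector). The vectors,
# weight, and databases are placeholders, not the patented components.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rerank(hypotheses: list[dict], center: np.ndarray, alpha: float = 0.7) -> list[dict]:
    """hypotheses: [{"text": str, "rec_score": float, "vec": np.ndarray}, ...]"""
    for h in hypotheses:
        h["integrated_score"] = alpha * h["rec_score"] + (1 - alpha) * cosine(h["vec"], center)
    return sorted(hypotheses, key=lambda h: h["integrated_score"], reverse=True)

center = np.array([1.0, 0.0])
hyps = [{"text": "reset my password", "rec_score": 0.60, "vec": np.array([0.9, 0.1])},
        {"text": "recent my past word", "rec_score": 0.62, "vec": np.array([0.1, 0.9])}]
print([h["text"] for h in rerank(hyps, center)])   # domain-fit wins over raw score
```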
-
Publication number: 20210241042
Abstract: The disclosure allows quick and accurate confirmation of the degree to which a presently used classifier (model) conforms to data for which no ground truth exists. The classifier evaluation device (1) comprises: a data count obtainment unit (18) for obtaining a data count of input data to be made a classification target; a correction frequency counter (17) for counting a correction frequency of the classifiers from correction information of classification results for the classifiers; and a correction rate calculation unit (19) for calculating, based on the correction frequency and the data count of the input data, a correction rate for each of the classifiers.
Type: Application
Filed: August 14, 2019
Publication date: August 5, 2021
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Takaaki HASEGAWA, Yoshiaki NODA, Setsuo YAMADA
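The correction rate described above is simply the correction frequency divided by the input data count, computed per classifier; a minimal sketch follows (the function name and example counts are assumed).
```python
# Hypothetical sketch of the evaluation described above: for each classifier,
# the correction rate is the number of manual corrections to its outputs divided
# by the number of classified inputs, so drift can be spotted without ground truth.
def correction_rate(correction_count: int, data_count: int) -> float:
    if data_count == 0:
        raise ValueError("no input data to evaluate against")
    return correction_count / data_count

# e.g. 12 corrections over 400 classified utterances
print(f"{correction_rate(12, 400):.1%}")   # -> 3.0%
```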
-
Publication number: 20210183369
Abstract: To improve prediction accuracy of utterance types in a dialog. A learning data generation device (10) according to the present invention comprises: a sort unit (11) configured to perform, based on information that is appended to utterances in a dialog amongst more than one speaker and that is indicative of a dialogue scene (a scene in which the utterances in the dialog were made), sorting regarding whether the utterances are to be targets for generation of the learning data, wherein the sort unit (11) is configured to exclude, from the targets for generation of learning data, utterances of a dialogue scene that includes utterances similar to utterances of the particular type.
Type: Application
Filed: August 14, 2019
Publication date: June 17, 2021
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Setsuo YAMADA, Yoshiaki NODA, Takaaki HASEGAWA