Patents Assigned to MINDS LAB., INC.
-
Publication number: 20230023102
Abstract: Provided is a lip-sync video providing apparatus for providing a video in which a voice and lip shapes are synchronized. The lip-sync video providing apparatus is configured to obtain a template video including at least one frame and depicting a target object, obtain a target voice to be used as a voice of the target object, generate a lip image corresponding to the voice for each frame of the template video by using a trained first artificial neural network, and generate lip-sync data including frame identification information of a frame in the template video, the lip image, and position information regarding the lip image in a frame in the template video.
Type: Application
Filed: December 23, 2021
Publication date: January 26, 2023
Applicant: MINDS LAB INC.
Inventors: Hyoung Kyu SONG, Dong Ho CHOI, Hong Seop CHOI
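The per-frame lip-sync record described in the abstract (frame identification information, generated lip image, and lip position within the frame) can be sketched as a simple data structure. All class, field, and function names below are illustrative assumptions, not the patent's terminology, and the byte strings stand in for real image data produced by the trained network:

```python
from dataclasses import dataclass

@dataclass
class LipSyncFrame:
    """One lip-sync record: which template-video frame it belongs to,
    the generated lip image, and where the lips sit inside that frame."""
    frame_id: int     # frame identification information
    lip_image: bytes  # generated lip image (stand-in for pixel data)
    position: tuple   # (x, y, width, height) of the lip region

def build_lip_sync_data(frame_ids, lip_images, positions):
    """Zip the per-frame outputs of the (hypothetical) first neural
    network into the lip-sync records described in the abstract."""
    return [LipSyncFrame(f, img, pos)
            for f, img, pos in zip(frame_ids, lip_images, positions)]

records = build_lip_sync_data([0, 1], [b"img0", b"img1"],
                              [(10, 20, 64, 32), (11, 20, 64, 32)])
```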
-
METHOD OF TRAINING ARTIFICIAL NEURAL NETWORK AND METHOD OF EVALUATING PRONUNCIATION USING THE METHOD
Publication number: 20220172025
Abstract: The disclosure relates to a method of training an artificial neural network so that a first artificial neural network is trained based on a plurality of training data including a first feature and a second feature that has a correlation with the first feature and depends on the first feature.
Type: Application
Filed: October 13, 2021
Publication date: June 2, 2022
Applicant: MINDS LAB INC.
Inventors: Seung Won PARK, Jong Mi LEE, Kang Wook KIM
-
Publication number: 20220157329
Abstract: A method and apparatus for converting a voice of a first speaker into a voice of a second speaker by using a plurality of trained artificial neural networks are provided. The method of converting a voice feature of a voice comprises (i) generating a first audio vector corresponding to a first voice by using a first artificial neural network, (ii) generating a first text feature value corresponding to a first text by using a second artificial neural network, (iii) generating a second audio vector by removing the voice feature value of the first voice from the first audio vector by using the first text feature value and a third artificial neural network, and (iv) generating, by using the second audio vector and a voice feature value of a target voice, a second voice in which a feature of the target voice is reflected.
Type: Application
Filed: October 13, 2021
Publication date: May 19, 2022
Applicant: MINDS LAB INC.
Inventors: Hong Seop CHOI, Seung Won PARK
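The four-step conversion pipeline (i)-(iv) above can be sketched with toy arithmetic standing in for the three trained neural networks; every function name and operation here is an illustrative assumption, not the patent's stated method:

```python
def convert_voice(first_voice, first_text, target_feature):
    """Sketch of the abstract's four steps with toy stand-ins for the
    three trained networks."""
    # (i) first network: first voice -> first audio vector
    audio_vec = list(first_voice)
    # (ii) second network: first text -> text feature value
    text_feat = float(len(first_text))
    # (iii) third network: remove the source voice feature from the
    # first audio vector, guided by the text feature -> second audio vector
    neutral_vec = [a - text_feat for a in audio_vec]
    # (iv) add the target voice's feature value so the generated
    # second voice reflects the target voice
    return [n + target_feature for n in neutral_vec]

converted = convert_voice([5.0, 6.0], "hi", 1.0)  # -> [4.0, 5.0]
```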
-
Publication number: 20220129250
Abstract: A method of generating an application by using an artificial neural network model includes a data processing step of pre-processing training data, a model training step of training the artificial neural network model based on the pre-processed training data, and an application making step of receiving an input for editing one or more components included in the application and an input for setting a connection relationship between the one or more components. The one or more components include the artificial neural network model.
Type: Application
Filed: August 18, 2021
Publication date: April 28, 2022
Applicant: MINDS LAB INC.
Inventors: Tae Joon YOO, Myun Chul JOE, Hong Seop CHOI
-
Patent number: 11221728
Abstract: A device for providing an interface for managing contact scenarios for one or more respondents may provide a first interface for inputting contact information of one or more respondents to be contacted, provide a second interface for inputting a scenario of the contact, and provide a third interface for displaying on a screen a contact simulation according to the scenario.
Type: Grant
Filed: September 30, 2020
Date of Patent: January 11, 2022
Assignee: MINDS LAB INC.
Inventors: Tae Joon Yoo, Ha Young Lee, Han Gyul Yu, Hong Seop Choi
-
Publication number: 20210390958
Abstract: A method of generating a speaker-labeled text from voice data including voices of at least two speakers includes converting the voice data into text to generate a first text, determining a speaker of each of one or more second texts obtained by dividing the first text in a predetermined unit, and providing an editing interface for displaying the one or more second texts and a speaker of each of the one or more second texts.
Type: Application
Filed: August 18, 2021
Publication date: December 16, 2021
Applicant: MINDS LAB INC.
Inventors: Jung Sang WON, Hee Yeon KIM, Hee Kwan LIM, Moo Ni CHOI, Seung Min NAM, Tae Joon YOO, Hong Seop CHOI
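The split-then-label flow above (divide the first text into second texts in a predetermined unit, then determine a speaker for each) can be sketched as follows; the sentence-level split and the injected classifier function are illustrative assumptions:

```python
def label_speakers(first_text, speaker_of_unit):
    """Toy version of the abstract's flow: split the transcript (the
    "first text") into sentence units (the "second texts"), then attach
    a speaker label to each unit. speaker_of_unit is a hypothetical
    classifier, injected here as a plain function."""
    units = [u.strip() for u in first_text.split(".") if u.strip()]
    return [(speaker_of_unit(i, u), u) for i, u in enumerate(units)]

labeled = label_speakers("Hello there. Hi. How are you",
                         lambda i, u: "A" if i % 2 == 0 else "B")
# -> [("A", "Hello there"), ("B", "Hi"), ("A", "How are you")]
```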
-
Publication number: 20210333947
Abstract: A device for providing an interface for managing contact scenarios for one or more respondents may provide a first interface for inputting contact information of one or more respondents to be contacted, provide a second interface for inputting a scenario of the contact, and provide a third interface for displaying on a screen a contact simulation according to the scenario.
Type: Application
Filed: September 30, 2020
Publication date: October 28, 2021
Applicant: MINDS LAB INC.
Inventors: Tae Joon YOO, Ha Young LEE, Han Gyul YU, Hong Seop CHOI
-
Publication number: 20210075750
Abstract: Provided is a method of controlling display of a consultation session by integrating and displaying at least one consultation session managed by an individual respondent. The method includes receiving, from a server, data about the at least one consultation session to be displayed; displaying a consultation session window for each of the at least one consultation session; and displaying in a preset manner a session window of a consultation session in which details of consultation satisfy a certain condition for a consultant's intervention. The consultation session window includes a history of conversation in each of the at least one consultation session.
Type: Application
Filed: September 8, 2020
Publication date: March 11, 2021
Applicant: MINDS LAB INC.
Inventors: Tae Joon YOO, Dong Su KIM, Hong Seop CHOI
-
Publication number: 20210012764
Abstract: A method of generating a voice for each speaker from audio content including a section in which two or more speakers simultaneously speak is provided. The method includes dividing the audio content into one or more single-speaker sections and one or more multi-speaker sections, determining a speaker feature value corresponding to each of the one or more single-speaker sections, generating grouping information by grouping the one or more single-speaker sections based on a similarity of the determined speaker feature values, determining a speaker feature value for each speaker by referring to the grouping information, and generating a voice of each of the multiple speakers in each section from each of the one or more multi-speaker sections by using a trained artificial neural network and the speaker feature value for each individual speaker.
Type: Application
Filed: September 30, 2020
Publication date: January 14, 2021
Applicant: MINDS LAB INC.
Inventors: Tae Joon YOO, Myun Chul JOE, Hong Seop CHOI
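The grouping step above (cluster single-speaker sections by similarity of their speaker feature values) can be sketched with a greedy pass; the use of cosine similarity and the threshold value are assumptions for illustration, not the patent's stated method:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def group_sections(feature_vectors, threshold=0.95):
    """Greedy grouping of single-speaker sections: each section joins
    the first existing group whose representative vector is similar
    enough, otherwise it starts a new group."""
    groups = []      # each group: list of section indices
    reps = []        # one representative vector per group
    for idx, vec in enumerate(feature_vectors):
        for g, rep in zip(groups, reps):
            if cosine(vec, rep) >= threshold:
                g.append(idx)
                break
        else:
            groups.append([idx])
            reps.append(vec)
    return groups

grouping_info = group_sections([[1.0, 0.0], [0.99, 0.01], [0.0, 1.0]])
# -> [[0, 1], [2]]  (sections 0 and 1 share a speaker; section 2 is new)
```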
-
Publication number: 20190378024
Abstract: Aspects of the technology described herein relate generally to a platform that enables the deployment of autonomous bots that identify and deliver relevant content in real time based on received information. These bots may be designed to proactively provide relevant content without any explicit trigger from a user. For example, a bot may analyze speech and/or text in a primary communication channel (e.g., a telephone, email, webchat, or videophone) and proactively provide content relevant to the speech and/or text in one or more secondary communication channels (e.g., displayed on a computer screen, a mobile device screen, and/or a pair of smart glasses).
Type: Application
Filed: December 15, 2017
Publication date: December 12, 2019
Applicant: Second Mind Labs, Inc.
Inventors: Kul Singh, Andras Kornai, Yurii Pohrebniak
-
Publication number: 20190155907
Abstract: The present disclosure relates to a system and method of generating a sentence similar to a basis sentence for machine learning. The similar sentence generating method includes: generating a first similar sentence by using a word similar to a word included in the basis sentence; generating a second similar sentence of the basis sentence or the first similar sentence based on a speaker feature; and determining whether the first similar sentence and the second similar sentence are valid.
Type: Application
Filed: November 20, 2018
Publication date: May 23, 2019
Applicant: MINDS LAB., INC.
Inventors: Sung Jun PARK, Yi Gyu HWANG, Tae Joon YOO, Ki Hyun YUN
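The first and last steps above (substitute similar words, then validate the candidate) can be sketched as follows; the synonym table and the validity criterion are hypothetical stand-ins, since the abstract does not specify either:

```python
def first_similar_sentence(basis, synonyms):
    """Step 1 of the abstract: build a similar sentence by swapping
    words of the basis sentence for similar words. Real systems would
    use embeddings or a thesaurus instead of a hand-made table."""
    return " ".join(synonyms.get(w, w) for w in basis.split())

def is_valid(candidate, basis):
    """Step 3 (toy validity check): accept a candidate only if it
    differs from the basis yet keeps the same word count."""
    return candidate != basis and len(candidate.split()) == len(basis.split())

similar = first_similar_sentence("the cat sat", {"cat": "kitten"})
# -> "the kitten sat"
valid = is_valid(similar, "the cat sat")
```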
-
Publication number: 20190019078
Abstract: The present disclosure relates to an apparatus and method of allocating a question according to its question type or question features. A question allocating apparatus may include a question analysis unit that generates at least one of question type information and question feature information for a current question, and a question allocating unit that determines, from among a plurality of answer generating units, an answer generating unit suitable for the current question based on at least one of the question type information and the question feature information, and allocates the current question to at least one answer generating unit, including the determined answer generating unit.
Type: Application
Filed: July 12, 2018
Publication date: January 17, 2019
Applicant: MINDS LAB., INC.
Inventors: Yi Gyu HWANG, Kang Woo PARK, Dong Hyun YOO, Su Lyn HONG, Tae Joon YOO
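The analyze-then-dispatch flow above can be sketched as a small router; the classifier and the registered answer generators are hypothetical stand-ins for the question analysis unit and the answer generating units:

```python
def allocate_question(question, classify, generators):
    """Toy allocator following the abstract: analyze the question to
    get its type, then hand it to the answer-generating unit registered
    for that type, falling back to a default unit."""
    q_type = classify(question)
    handler = generators.get(q_type, generators["default"])
    return handler(question)

generators = {
    "factoid": lambda q: f"factoid answer to: {q}",
    "default": lambda q: f"general answer to: {q}",
}
answer = allocate_question("Who wrote Hamlet?",
                           lambda q: "factoid" if q.startswith("Who") else "other",
                           generators)
# -> "factoid answer to: Who wrote Hamlet?"
```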
-
Publication number: 20190013012
Abstract: The present disclosure relates to a system and method of sentence learning based on an unsupervised learning method. The sentence learning method may include: enhancing a basis sentence corpus by using a word similar to a word included in a basis sentence; performing learning for the basis sentences included in the basis sentence corpus based on an unsupervised learning method; and removing an abnormal sentence from among the at least one similar sentence obtained by performing the sentence learning.
Type: Application
Filed: July 4, 2018
Publication date: January 10, 2019
Applicant: MINDS LAB., INC.
Inventors: Yi Gyu HWANG, Su Lyn HONG, Tae Joon YOO
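The enhance-then-filter steps above can be sketched as follows; the synonym table and the abnormality criterion are purely illustrative assumptions, since the abstract defines neither:

```python
def enhance_corpus(corpus, synonyms):
    """First step: enhance the basis sentence corpus by adding variants
    with similar words swapped in."""
    enhanced = list(corpus)
    for sentence in corpus:
        variant = " ".join(synonyms.get(w, w) for w in sentence.split())
        if variant != sentence:
            enhanced.append(variant)
    return enhanced

def remove_abnormal(sentences, min_words=2):
    """Last step (toy filter): drop "abnormal" candidates; here, any
    sentence shorter than min_words."""
    return [s for s in sentences if len(s.split()) >= min_words]

corpus = enhance_corpus(["the cat sat"], {"cat": "kitten"})
# -> ["the cat sat", "the kitten sat"]
clean = remove_abnormal(corpus + ["oops"])
# -> ["the cat sat", "the kitten sat"]
```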