Patents Assigned to MINDS LAB., INC.
  • Publication number: 20230023102
    Abstract: Provided is a lip-sync video providing apparatus for providing a video in which a voice and lip shapes are synchronized. The lip-sync video providing apparatus is configured to obtain a template video that includes at least one frame and depicts a target object, obtain a target voice to be used as the voice of the target object, generate a lip image corresponding to the voice for each frame of the template video by using a trained first artificial neural network, and generate lip-sync data including frame identification information of a frame in the template video, the lip image, and position information of the lip image within that frame. A minimal code sketch of this pipeline follows the entry.
    Type: Application
    Filed: December 23, 2021
    Publication date: January 26, 2023
    Applicant: MINDS LAB INC.
    Inventors: Hyoung Kyu SONG, Dong Ho CHOI, Hong Seop CHOI
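    The apparatus above is described as a per-frame data flow rather than a specific implementation. The following is a minimal Python sketch of that flow only, with the trained first neural network replaced by a stub; the names LipSyncDatum, build_lip_sync_data, and fake_lip_generator are illustrative and not taken from the patent.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence, Tuple

@dataclass
class LipSyncDatum:
    """One lip-sync data record, as enumerated in the abstract."""
    frame_id: int                         # frame identification information
    lip_image: bytes                      # lip image generated for this frame
    position: Tuple[int, int, int, int]   # (x, y, w, h) of the lip region in the frame

def build_lip_sync_data(
    template_frames: Sequence[bytes],
    target_voice: bytes,
    lip_generator: Callable[[bytes, bytes], Tuple[bytes, Tuple[int, int, int, int]]],
) -> List[LipSyncDatum]:
    """For each frame of the template video, ask the trained first network
    for a lip image matching the target voice and record where it belongs."""
    data = []
    for frame_id, frame in enumerate(template_frames):
        lip_image, position = lip_generator(frame, target_voice)
        data.append(LipSyncDatum(frame_id, lip_image, position))
    return data

# Stub standing in for the trained first artificial neural network.
def fake_lip_generator(frame: bytes, voice: bytes) -> Tuple[bytes, Tuple[int, int, int, int]]:
    return b"<lip-image>", (100, 180, 64, 32)

if __name__ == "__main__":
    records = build_lip_sync_data([b"f0", b"f1", b"f2"], b"<target-voice>", fake_lip_generator)
    print(records[0])
```

    Carrying the frame identifier and the lip position alongside each generated lip image is presumably what lets a downstream step composite the lips back into the template video to produce the synchronized result.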
  • Publication number: 20220172025
    Abstract: The disclosure relates to a method of training an artificial neural network in which a first artificial neural network is trained on a plurality of training data items, each including a first feature and a second feature that is correlated with, and depends on, the first feature. A minimal code sketch of this training setup follows the entry.
    Type: Application
    Filed: October 13, 2021
    Publication date: June 2, 2022
    Applicant: MINDS LAB INC.
    Inventors: Seung Won PARK, Jong Mi LEE, Kang Wook KIM
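    The abstract above stays at the level of the training data's structure (a second feature that is correlated with, and depends on, a first feature). The toy sketch below only illustrates that structure: it generates (x1, x2) pairs with x2 dependent on x1 and fits a single linear unit by stochastic gradient descent; the dependency, the model, and all names are assumptions for illustration, not the patent's method.

```python
import random

# Toy data in the spirit of the abstract: each training item carries a first
# feature x1 and a second feature x2 that is correlated with and depends on x1.
# The dependency (0.7 * x1 plus noise) is an arbitrary illustration.
def make_dataset(n: int = 1000, seed: int = 0):
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x1 = rng.uniform(-1.0, 1.0)
        x2 = 0.7 * x1 + 0.05 * rng.gauss(0.0, 1.0)
        data.append((x1, x2))
    return data

# Stand-in for "training the first artificial neural network": a single linear
# unit fitted by stochastic gradient descent to predict x2 from x1.
def train(data, lr: float = 0.1, epochs: int = 20):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x1, x2 in data:
            err = (w * x1 + b) - x2
            w -= lr * err * x1
            b -= lr * err
    return w, b

if __name__ == "__main__":
    w, b = train(make_dataset())
    print(f"learned w={w:.2f}, b={b:.2f}")   # w should land near 0.7
```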
  • Publication number: 20220157329
    Abstract: A method and apparatus for converting a voice of a first speaker into a voice of a second speaker by using a plurality of trained artificial neural networks are provided. The method of converting the voice feature of a voice comprises (i) generating a first audio vector corresponding to a first voice by using a first artificial neural network, (ii) generating a first text feature value corresponding to a first text of the first voice by using a second artificial neural network, (iii) generating a second audio vector by removing the voice feature value of the first voice from the first audio vector by using the first text feature value and a third artificial neural network, and (iv) generating, by using the second audio vector and a voice feature value of a target voice, a second voice in which a feature of the target voice is reflected. A minimal code sketch of this conversion flow follows the entry.
    Type: Application
    Filed: October 13, 2021
    Publication date: May 19, 2022
    Applicant: MINDS LAB INC.
    Inventors: Hong Seop CHOI, Seung Won PARK
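    The four numbered steps above describe a data flow between trained networks. The sketch below mirrors that flow with stub functions standing in for the networks and lists of floats standing in for the audio vectors and feature values; the final synthesis step is shown as a separate function because the abstract does not say which network performs it, and all function names are illustrative.

```python
from typing import List

Vector = List[float]   # stands in for audio vectors / feature values

def first_network_audio_encoder(first_voice: bytes) -> Vector:
    """(i) First audio vector corresponding to the first voice."""
    return [0.1, 0.2, 0.3]

def second_network_text_encoder(first_text: str) -> Vector:
    """(ii) First text feature value corresponding to the first text."""
    return [0.05, 0.1, 0.15]

def third_network_remove_speaker(audio_vec: Vector, text_feat: Vector) -> Vector:
    """(iii) Second audio vector: the first audio vector with the first voice's
    feature removed, guided by the text feature value."""
    return [a - t for a, t in zip(audio_vec, text_feat)]

def synthesize_second_voice(content_vec: Vector, target_voice_feat: Vector) -> bytes:
    """(iv) Second voice in which the target voice's feature is reflected."""
    mixed = [c + s for c, s in zip(content_vec, target_voice_feat)]
    return b"<voice> " + repr(mixed).encode()

def convert(first_voice: bytes, first_text: str, target_voice_feat: Vector) -> bytes:
    audio_vec = first_network_audio_encoder(first_voice)
    text_feat = second_network_text_encoder(first_text)
    content_vec = third_network_remove_speaker(audio_vec, text_feat)
    return synthesize_second_voice(content_vec, target_voice_feat)

if __name__ == "__main__":
    print(convert(b"<speaker-A audio>", "hello there", [0.0, 0.5, 1.0]))
```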
  • Publication number: 20220129250
    Abstract: A method of generating an application by using an artificial neural network model includes a data processing step of pre-processing training data, a model training step of training the artificial neural network model based on the preprocessed training data, and an application making step of receiving an input for editing one or more components included in the application and an input for setting a connection relationship between the one or more components. The one or more components include the artificial neural network model.
    Type: Application
    Filed: August 18, 2021
    Publication date: April 28, 2022
    Applicant: MINDS LAB INC.
    Inventors: Tae Joon YOO, Myun Chul JOE, Hong Seop CHOI
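    The abstract above describes an application as a set of components, one of which is the trained model, joined by a user-defined connection relationship. The sketch below shows one way such a component graph could be represented and executed; the registry, the edge list, and the toy components are assumptions for illustration only.

```python
from typing import Any, Callable, Dict, List, Tuple

# Component registry: one component wraps the (already trained) artificial
# neural network model, the others are ordinary processing steps.
def preprocess(text: str) -> str:
    return text.strip().lower()

def trained_model(text: str) -> str:          # stand-in for the ANN component
    return "positive" if "good" in text else "negative"

def format_output(label: str) -> str:
    return f"prediction: {label}"

components: Dict[str, Callable[[Any], Any]] = {
    "preprocess": preprocess,
    "model": trained_model,
    "format": format_output,
}

# Connection relationship between components, as set by the editing input
# described in the abstract (here simply a list of directed edges).
connections: List[Tuple[str, str]] = [("preprocess", "model"), ("model", "format")]

def run_application(entry: str, data: Any) -> Any:
    """Run the assembled application by following the connections from `entry`."""
    next_component = dict(connections)
    name, value = entry, data
    while name is not None:
        value = components[name](value)
        name = next_component.get(name)
    return value

if __name__ == "__main__":
    print(run_application("preprocess", "  This movie is GOOD  "))
```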
  • Patent number: 11221728
    Abstract: A device for providing an interface for managing contact scenarios for one or more respondents may provide a first interface for inputting contact information of one or more respondents to be contacted, provide a second interface for inputting a scenario of the contact, and provide a third interface for displaying on a screen a contact simulation according to the scenario.
    Type: Grant
    Filed: September 30, 2020
    Date of Patent: January 11, 2022
    Assignee: MINDS LAB INC.
    Inventors: Tae Joon Yoo, Ha Young Lee, Han Gyul Yu, Hong Seop Choi
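    The three interfaces above amount to a small data model (respondents and scenario steps) plus a dry-run renderer. The sketch below illustrates that reading; the Respondent and ScenarioStep types and the transcript format are illustrative assumptions, not the patented interfaces.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Respondent:
    """Contact information entered through the first interface."""
    name: str
    phone: str

@dataclass
class ScenarioStep:
    """One step of the contact scenario entered through the second interface."""
    prompt: str            # what the system says
    expected_reply: str    # reply assumed for the simulation

def simulate_contact(respondent: Respondent, scenario: List[ScenarioStep]) -> List[str]:
    """Produce the transcript that the third interface could display on screen."""
    transcript = [f"Dialing {respondent.name} ({respondent.phone})"]
    for step in scenario:
        transcript.append(f"SYSTEM: {step.prompt}")
        transcript.append(f"{respondent.name.upper()}: {step.expected_reply}")
    return transcript

if __name__ == "__main__":
    steps = [ScenarioStep("Hello, is this a good time to talk?", "Yes."),
             ScenarioStep("Your delivery is scheduled for Friday.", "Thank you.")]
    for line in simulate_contact(Respondent("Jane Doe", "+82-10-0000-0000"), steps):
        print(line)
```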
  • Publication number: 20210390958
    Abstract: A method of generating a speaker-labeled text from voice data including voices of at least two speakers includes converting the voice data into text to generate a first text, determining a speaker of each of one or more second texts obtained by dividing the first text in a predetermined unit, and providing an editing interface for displaying the one or more second texts and a speaker of each of the one or more second texts.
    Type: Application
    Filed: August 18, 2021
    Publication date: December 16, 2021
    Applicant: MINDS LAB INC.
    Inventors: Jung Sang WON, Hee Yeon KIM, Hee Kwan LIM, Moo Ni CHOI, Seung Min NAM, Tae Joon YOO, Hong Seop CHOI
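    The method above chains speech-to-text, segmentation into a predetermined unit, and per-segment speaker assignment, with an editing interface on top. The sketch below shows the chain with stub components; sentence-level splitting and the keyword-based speaker rule are placeholders for the trained models the patent relies on.

```python
import re
from typing import List, Tuple

def speech_to_text(voice_data: bytes) -> str:
    """Stub for the voice-to-text step that yields the first text."""
    return "Hello, how can I help you? I'd like to check my order."

def split_into_units(first_text: str) -> List[str]:
    """Split the first text into second texts; the 'predetermined unit' is
    taken to be a sentence here, but could be any segment."""
    return [s for s in re.split(r"(?<=[.?!])\s+", first_text) if s]

def assign_speaker(second_text: str) -> str:
    """Toy rule standing in for the trained speaker-determination step."""
    return "Agent" if "help" in second_text.lower() else "Customer"

def speaker_labeled_text(voice_data: bytes) -> List[Tuple[str, str]]:
    first_text = speech_to_text(voice_data)
    return [(assign_speaker(t), t) for t in split_into_units(first_text)]

if __name__ == "__main__":
    # The editing interface in the abstract would display these pairs for correction.
    for speaker, text in speaker_labeled_text(b"<two-speaker audio>"):
        print(f"[{speaker}] {text}")
```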
  • Publication number: 20210333947
    Abstract: A device for providing an interface for managing contact scenarios for one or more respondents may provide a first interface for inputting contact information of one or more respondents to be contacted, provide a second interface for inputting a scenario of the contact, and provide a third interface for displaying on a screen a contact simulation according to the scenario.
    Type: Application
    Filed: September 30, 2020
    Publication date: October 28, 2021
    Applicant: MINDS LAB INC.
    Inventors: Tae Joon YOO, Ha Young LEE, Han Gyul YU, Hong Seop CHOI
  • Publication number: 20210075750
    Abstract: Provided is a method of controlling display of consultation sessions by integrating and displaying at least one consultation session managed by an individual respondent. The method includes receiving, from a server, data about the at least one consultation session to be displayed; displaying a consultation session window for each of the at least one consultation session; and displaying, in a preset manner, the session window of any consultation session whose conversation details satisfy a certain condition calling for a consultant's intervention. Each consultation session window includes the history of conversation in that consultation session. A minimal code sketch of this display logic follows the entry.
    Type: Application
    Filed: September 8, 2020
    Publication date: March 11, 2021
    Applicant: MINDS LAB INC.
    Inventors: Tae Joon YOO, Dong Su KIM, Hong Seop CHOI
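    The display-control method above reduces to rendering one window per session and flagging the sessions whose conversation satisfies an intervention condition. The sketch below illustrates that logic; the ConsultationSession type and the keyword-based condition are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ConsultationSession:
    session_id: str
    history: List[str] = field(default_factory=list)   # conversation history shown in the window

def needs_consultant(session: ConsultationSession) -> bool:
    """Toy stand-in for the 'certain condition' that calls for intervention."""
    return any("complaint" in msg.lower() or "agent" in msg.lower() for msg in session.history)

def render_dashboard(sessions: List[ConsultationSession]) -> None:
    """One window per session; sessions meeting the condition are flagged
    (the 'preset manner' of the abstract)."""
    for s in sessions:
        flag = "  [NEEDS CONSULTANT]" if needs_consultant(s) else ""
        print(f"--- Session {s.session_id}{flag} ---")
        for msg in s.history:
            print("   ", msg)

if __name__ == "__main__":
    render_dashboard([
        ConsultationSession("A-01", ["Bot: Hello!", "User: Where is my parcel?"]),
        ConsultationSession("A-02", ["Bot: Hello!", "User: I want to file a complaint."]),
    ])
```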
  • Publication number: 20210012764
    Abstract: A method of generating a voice for each speaker from audio content that includes a section in which at least two speakers speak simultaneously is provided. The method includes dividing the audio content into one or more single-speaker sections and one or more multi-speaker sections, determining a speaker feature value corresponding to each of the one or more single-speaker sections, generating grouping information by grouping the one or more single-speaker sections based on the similarity of the determined speaker feature values, determining a speaker feature value for each speaker by referring to the grouping information, and generating, from each of the one or more multi-speaker sections, a voice for each of the speakers in that section by using a trained artificial neural network and the per-speaker feature value. A minimal code sketch of this separation flow follows the entry.
    Type: Application
    Filed: September 30, 2020
    Publication date: January 14, 2021
    Applicant: MINDS LAB INC.
    Inventors: Tae Joon YOO, Myun Chul JOE, Hong Seop CHOI
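    The method above is a pipeline: segment the audio into single- and multi-speaker sections, embed the single-speaker sections, group them by similarity to identify speakers, then condition a trained separation network on each speaker's feature. The sketch below traces that pipeline with stubs for the segmentation, embedding, and separation models; only the cosine-similarity grouping is implemented concretely, and all names are illustrative.

```python
from typing import Dict, List, Tuple

Section = Tuple[float, float, str]   # (start, end, "single" | "multi"); illustrative only
Feature = List[float]

def segment(audio: bytes) -> List[Section]:
    """Stub: divide the audio into single-speaker and multi-speaker sections."""
    return [(0.0, 4.0, "single"), (4.0, 7.0, "multi"), (7.0, 10.0, "single")]

def speaker_feature(audio: bytes, section: Section) -> Feature:
    """Stub: speaker feature value for a single-speaker section."""
    return [0.9, 0.1] if section[0] < 5.0 else [0.1, 0.9]

def cosine(a: Feature, b: Feature) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / ((sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5))

def group_by_similarity(feats: List[Feature], threshold: float = 0.8) -> List[List[int]]:
    """Greedy grouping of single-speaker sections by feature similarity."""
    groups: List[List[int]] = []
    for i, f in enumerate(feats):
        for g in groups:
            if cosine(f, feats[g[0]]) >= threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

def separate(audio: bytes, section: Section, per_speaker: Dict[int, Feature]) -> Dict[int, bytes]:
    """Stub for the trained network that extracts each speaker's voice from a
    multi-speaker section, conditioned on that speaker's feature value."""
    return {spk: b"<voice of speaker %d>" % spk for spk in per_speaker}

if __name__ == "__main__":
    audio = b"<audio content>"
    sections = segment(audio)
    singles = [s for s in sections if s[2] == "single"]
    feats = [speaker_feature(audio, s) for s in singles]
    groups = group_by_similarity(feats)                             # grouping information
    per_speaker = {i: feats[g[0]] for i, g in enumerate(groups)}    # one feature per speaker
    for sec in (s for s in sections if s[2] == "multi"):
        print(separate(audio, sec, per_speaker))
```

    Grouping the single-speaker sections first is what provides a per-speaker feature value to condition on when separating the overlapped sections.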
  • Publication number: 20190378024
    Abstract: Aspects of the technology described herein relate generally to a platform that enables the deployment of autonomous bots that identify and deliver relevant content in real-time based on received information. These bots may be designed to proactively provide relevant content without any explicit trigger from a user. For example, a bot may analyze speech and/or text in a primary communication channel (e.g., a telephone, email, webchat, or videophone) and proactively provide content relevant to the speech and/or text in one or more secondary communication channels (e.g., displayed on a computer screen, a mobile device screen, and/or a pair of smart glasses).
    Type: Application
    Filed: December 15, 2017
    Publication date: December 12, 2019
    Applicant: Second Mind Labs, Inc.
    Inventors: Kul Singh, Andras Kornai, Yurii Pohrebniak
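    The platform above watches a primary communication channel and proactively pushes relevant content to a secondary channel, with no explicit user trigger. The sketch below shows that loop with a keyword lookup standing in for the bot's content-matching; the KNOWLEDGE table and function names are illustrative assumptions.

```python
from typing import Callable, Dict, List

# In-memory stand-in for the content the bot can deliver.
KNOWLEDGE: Dict[str, str] = {
    "invoice": "Invoices can be downloaded from Billing > History.",
    "refund": "Refunds are processed within 5 business days.",
}

def find_relevant_content(utterance: str) -> List[str]:
    """Match keywords in the primary-channel text against available content."""
    return [text for key, text in KNOWLEDGE.items() if key in utterance.lower()]

def proactive_bot(primary_stream: List[str], push_to_secondary: Callable[[str], None]) -> None:
    """Watch the conversation on the primary channel and push relevant content
    to a secondary channel without waiting for an explicit user trigger."""
    for utterance in primary_stream:
        for content in find_relevant_content(utterance):
            push_to_secondary(content)

if __name__ == "__main__":
    transcript = ["Hi, I never received my invoice.", "Also, when will my refund arrive?"]
    proactive_bot(transcript, lambda content: print("[secondary screen]", content))
```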
  • Publication number: 20190155907
    Abstract: The present disclosure relates to a system and method of generating a sentence similar to a basis sentence for machine learning. To this end, the similar-sentence generating method includes: generating a first similar sentence by using a word similar to a word included in a basis sentence; generating a second similar sentence from the basis sentence or the first similar sentence based on a speaker feature; and determining whether or not the first similar sentence and the second similar sentence are valid. A minimal code sketch of this generation flow follows the entry.
    Type: Application
    Filed: November 20, 2018
    Publication date: May 23, 2019
    Applicant: MINDS LAB., INC.
    Inventors: Sung Jun PARK, Yi Gyu HWANG, Tae Joon YOO, Ki Hyun YUN
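    The three steps above (word-level substitution, speaker-feature rewrite, validity check) can be traced with a toy example. In the sketch below the similar-word table, the speaker styles, and the length-based validity check are all illustrative stand-ins for the learned resources the patent assumes.

```python
from typing import Dict, List, Tuple

# Illustrative stand-ins for the learned resources the method assumes.
SIMILAR_WORDS: Dict[str, str] = {"purchase": "buy", "vehicle": "car"}
SPEAKER_STYLE: Dict[str, Tuple[str, str]] = {
    "casual": ("I want to", "I wanna"),
    "formal": ("I want to", "I would like to"),
}

def first_similar(basis: str) -> str:
    """Step 1: replace words with similar words."""
    return " ".join(SIMILAR_WORDS.get(w.lower(), w) for w in basis.split())

def second_similar(sentence: str, speaker: str) -> str:
    """Step 2: rewrite according to a speaker feature (style)."""
    old, new = SPEAKER_STYLE[speaker]
    return sentence.replace(old, new)

def is_valid(candidate: str, basis: str) -> bool:
    """Step 3: crude validity check; the candidate must differ from the basis
    yet keep roughly the same length."""
    return candidate != basis and abs(len(candidate.split()) - len(basis.split())) <= 2

if __name__ == "__main__":
    basis = "I want to purchase a vehicle"
    s1 = first_similar(basis)
    s2 = second_similar(s1, "casual")
    for candidate in (s1, s2):
        print(candidate, "-> valid" if is_valid(candidate, basis) else "-> rejected")
```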
  • Publication number: 20190019078
    Abstract: The present disclosure relates to an apparatus and method of allocating a question according to a question type or question feature. A question allocating apparatus for the same may include a question analysis unit that generates at least one of question type information and question feature information for a current question, and a question allocating unit that determines, among a plurality of answer generating units, an answer generating unit suitable for the current question based on at least one of the question type information and the question feature information, and allocates the current question to at least one answer generating unit, including the determined answer generating unit. A minimal code sketch of this allocation flow follows the entry.
    Type: Application
    Filed: July 12, 2018
    Publication date: January 17, 2019
    Applicant: MINDS LAB., INC.
    Inventors: Yi Gyu HWANG, Kang Woo PARK, Dong Hyun YOO, Su Lyn HONG, Tae Joon YOO
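    The apparatus above splits into an analysis step (question type and feature information) and a routing step (picking a suitable answer generating unit). The sketch below shows that routing with rule-based analysis and a lookup table of toy answer units; real answer generating units would be retrieval or generation models, and all names here are illustrative.

```python
from typing import Callable, Dict, Tuple

def analyze(question: str) -> Tuple[str, str]:
    """Toy question-analysis unit: derive type and feature information."""
    qtype = "factoid" if question.lower().startswith(("who", "when", "where")) else "descriptive"
    feature = "finance" if "account" in question.lower() else "general"
    return qtype, feature

# A few answer generating units keyed by (type, feature); real units would be
# retrieval or generation models.
ANSWER_UNITS: Dict[Tuple[str, str], Callable[[str], str]] = {
    ("factoid", "general"): lambda q: "Short factual answer.",
    ("factoid", "finance"): lambda q: "Short answer from the finance knowledge base.",
    ("descriptive", "general"): lambda q: "Longer explanatory answer.",
    ("descriptive", "finance"): lambda q: "Longer answer citing account policy.",
}

def allocate(question: str) -> str:
    """Question allocating unit: pick the suitable answer generating unit."""
    qtype, feature = analyze(question)
    unit = ANSWER_UNITS[(qtype, feature)]
    return unit(question)

if __name__ == "__main__":
    print(allocate("When was my account opened?"))
    print(allocate("How do I reset my password?"))
```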
  • Publication number: 20190013012
    Abstract: The present disclosure relates to a system and method of sentence learning based on an unsupervised learning method. To this end, a sentence learning method may include: enhancing a basis sentence corpus by using a word similar to a word included in a basis sentence; performing learning for the basis sentence included in the basis sentence corpus based on an unsupervised learning method; and removing an abnormal sentence from among the at least one similar sentence obtained by performing the sentence learning. A minimal code sketch of these steps follows the entry.
    Type: Application
    Filed: July 4, 2018
    Publication date: January 10, 2019
    Applicant: MINDS LAB., INC.
    Inventors: Yi Gyu HWANG, Su Lyn HONG, Tae Joon YOO
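    Of the three steps above, corpus enhancement and abnormal-sentence removal are easy to illustrate; the unsupervised learning step in the middle is left as a comment because the abstract does not specify the model. The similar-word table and the similarity-ratio filter below are illustrative assumptions.

```python
from difflib import SequenceMatcher
from typing import Dict, List

SIMILAR_WORDS: Dict[str, str] = {"order": "purchase", "cancel": "call off"}

def enhance_corpus(corpus: List[str]) -> List[str]:
    """Enhance the basis sentence corpus with variants built from similar words."""
    enhanced = list(corpus)
    for sentence in corpus:
        variant = " ".join(SIMILAR_WORDS.get(w.lower(), w) for w in sentence.split())
        if variant != sentence:
            enhanced.append(variant)
    return enhanced

def remove_abnormal(basis: str, candidates: List[str], min_ratio: float = 0.5) -> List[str]:
    """Drop candidate sentences that drift too far from the basis sentence."""
    return [c for c in candidates if SequenceMatcher(None, basis, c).ratio() >= min_ratio]

if __name__ == "__main__":
    corpus = ["I want to cancel my order"]
    enhanced = enhance_corpus(corpus)
    # An unsupervised model trained on `enhanced` would propose similar sentences;
    # here the enhanced variants themselves stand in for that output.
    print(remove_abnormal(corpus[0], enhanced[1:]))
```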