Patents by Inventor Kyu Woong Hwang
Kyu Woong Hwang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12148228
Abstract: Certain aspects of the present disclosure are generally directed to apparatus and techniques for event state detection. One example method generally includes receiving a plurality of sensor signals at a computing device, determining, at the computing device, probabilities of sub-event states based on the plurality of sensor signals using an artificial neural network for each of a plurality of time intervals, and detecting, at the computing device, the event state based on the probabilities of the sub-event states via a state sequence model.
Type: Grant
Filed: October 8, 2019
Date of Patent: November 19, 2024
Assignee: QUALCOMM Incorporated
Inventors: Mingu Lee, Wonil Chang, Yeonseok Kim, Kyu Woong Hwang, Yin Huang, Ruowei Wang, Haijun Zhao, Janghoon Cho
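As an illustration of how per-interval sub-event probabilities from a neural network could be combined by a state sequence model, here is a minimal sketch (not taken from the patent) using a Viterbi-style decode; the number of sub-event states, the transition matrix, and the probability values are illustrative assumptions.

```python
# Hedged sketch: decode an event's sub-event sequence from per-interval
# probabilities with a simple state-sequence (Viterbi-style) model.
import numpy as np

def viterbi_decode(prob, transition):
    """prob: (T, S) per-interval sub-event probabilities from a neural network.
    transition: (S, S) sub-event transition probabilities."""
    T, S = prob.shape
    log_p = np.log(prob + 1e-12)
    log_t = np.log(transition + 1e-12)
    score = log_p[0].copy()
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_t          # (prev state, current state)
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_p[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Example: 4 time intervals, 3 hypothetical sub-event states.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.2, 0.6, 0.2],
                  [0.1, 0.7, 0.2],
                  [0.1, 0.2, 0.7]])
trans = np.full((3, 3), 1 / 3)
print(viterbi_decode(probs, trans))  # most likely sub-event state sequence
```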
-
Publication number: 20240290332
Abstract: An example device includes memory configured to store a speech signal representative of speech and a streaming model. The streaming model includes an on-device, real-time streaming model. The device includes one or more processors implemented in circuitry coupled to the memory. The one or more processors are configured to determine one or more words in the speech signal based on one or more transfers of learned knowledge from a non-streaming model to the streaming model. The one or more processors are also configured to take an action based on the determined one or more words.
Type: Application
Filed: July 19, 2023
Publication date: August 29, 2024
Inventors: Kyuhong Shim, Jinkyu Lee, Simyung Chang, Kyu Woong Hwang
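One way a transfer of learned knowledge from a non-streaming to a streaming model could look is a distillation-style training step; the sketch below assumes that framing (my own choice, not necessarily the publication's) and uses stand-in torch.nn.Linear models in place of real recognizers.

```python
# Minimal distillation-style sketch: the streaming (student) model is nudged
# toward the non-streaming (teacher) model's output distribution.
import torch
import torch.nn.functional as F

def transfer_step(streaming_model, non_streaming_model, features, optimizer, T=2.0):
    with torch.no_grad():
        teacher_logits = non_streaming_model(features)   # full-context model
    student_logits = streaming_model(features)           # causal / streaming model
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with linear stand-ins for the two models.
student = torch.nn.Linear(40, 10)
teacher = torch.nn.Linear(40, 10)
opt = torch.optim.SGD(student.parameters(), lr=0.1)
print(transfer_step(student, teacher, torch.randn(8, 40), opt))
```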
-
Patent number: 12039968
Abstract: System and method for operating an always-on ASR (automatic speech recognition) system by selecting target keywords and continuously detecting the selected target keywords in voice commands in a mobile device are provided. In the mobile device, a processor is configured to collect keyword candidates, collect usage frequency data for keywords in the keyword candidates, collect situational usage frequency data for the keywords in the keyword candidates, select target keywords from the keyword candidates based on the usage frequency data and the situational usage frequency data, and detect one or more of the target keywords in a voice command using continuous detection of the target keywords.
Type: Grant
Filed: September 30, 2020
Date of Patent: July 16, 2024
Assignee: QUALCOMM Incorporated
Inventors: Wonil Chang, Jinseok Lee, Mingu Lee, Jinkyu Lee, Byeonggeun Kim, Dooyong Sung, Jae-Won Choi, Kyu Woong Hwang
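A rough sketch of how target keywords might be selected by combining overall and situational usage frequencies; the weighting scheme, the alpha parameter, and the example counts are assumptions, not details from the patent.

```python
# Hedged sketch of target-keyword selection from candidates.
def select_target_keywords(usage_freq, situational_freq, k=3, alpha=0.5):
    """usage_freq / situational_freq: dicts mapping keyword -> frequency count."""
    scores = {
        kw: alpha * usage_freq.get(kw, 0) + (1 - alpha) * situational_freq.get(kw, 0)
        for kw in set(usage_freq) | set(situational_freq)
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

usage = {"play music": 40, "navigate home": 25, "call mom": 10}
situational = {"navigate home": 30, "call mom": 5, "play music": 2}  # e.g. while driving
print(select_target_keywords(usage, situational, k=2))
```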
-
Publication number: 20240211793
Abstract: Certain aspects of the present disclosure provide techniques for processing streaming data using machine learning models. An example method generally includes generating a first feature map for a first set of streaming data using a machine learning model. To generate the first feature map, results of one or more operations performed on each respective item in the first set of streaming data are combined into the first feature map. A second feature map is generated for a second set of streaming data using the machine learning model. A result of processing the total set of data through the machine learning model is generated based at least on a combination of the first feature map and the second feature map.
Type: Application
Filed: December 21, 2022
Publication date: June 27, 2024
Inventors: Duseok KANG, Yunseong LEE, Yeonseok KIM, Lee JOOSEONG, Kyu Woong HWANG
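To illustrate the idea of combining per-chunk feature maps so that streaming processing matches processing the full data set at once, here is a toy sketch; the sum-pooling "model" and the chunk sizes are assumptions chosen so the equivalence is exact.

```python
# Sketch: per-item results are accumulated into chunk-level feature maps,
# and combining the chunk feature maps reproduces the full-batch result.
import numpy as np

def feature_map(chunk, weights):
    # one operation per item, accumulated into a chunk-level feature map
    return sum(np.tanh(weights @ item) for item in chunk)

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
stream = [rng.standard_normal(8) for _ in range(6)]

chunk1, chunk2 = stream[:3], stream[3:]
combined = feature_map(chunk1, W) + feature_map(chunk2, W)   # streaming path
full = feature_map(stream, W)                                # full-batch path
print(np.allclose(combined, full))  # True: same result either way
```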
-
Publication number: 20240104311
Abstract: A processor-implemented method for recognizing a natural language on a mobile device includes receiving an audio input. The method further includes using a neural network to generate local text corresponding to the audio input. The method still further includes generating a local confidence value for accuracy of the local text. The method includes transmitting, to a remote device, data corresponding to the audio input. The method further includes receiving remote text corresponding to the data, along with a remote confidence score for accuracy of the remote text. The method still further includes outputting the local text in response to the local confidence value being higher than the remote confidence score, and outputting the remote text in response to the remote confidence score being higher than the local confidence value.
Type: Application
Filed: September 23, 2022
Publication date: March 28, 2024
Inventors: Kee-Hyun PARK, Sungrack YUN, Kyu Woong HWANG
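The selection between local and remote recognition results reduces to a confidence comparison; a minimal sketch follows, with made-up transcripts and confidence values.

```python
# Illustrative sketch of the local-vs-remote selection logic.
def choose_transcript(local_text, local_conf, remote_text, remote_conf):
    """Return the transcript whose recognizer reports higher confidence."""
    return local_text if local_conf > remote_conf else remote_text

print(choose_transcript("turn on the lights", 0.82,
                        "turn on the light", 0.91))   # remote wins here
```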
-
Publication number: 20240104420
Abstract: Certain aspects of the present disclosure provide techniques and apparatus for training and using machine learning models in multi-device network environments. An example computer-implemented method for network communications performed by a host device includes extracting a feature set from a data set associated with a client device using a client-device-specific feature extractor, wherein the feature set comprises a subset of features in a common feature space, training a task-specific model based on the extracted feature set and one or more other feature sets associated with other client devices, wherein the feature sets associated with the other client devices comprise one or more subsets of features in the common feature space, and deploying, to each respective client device of a plurality of client devices, a respective version of the task-specific model.
Type: Application
Filed: September 23, 2022
Publication date: March 28, 2024
Inventors: Kyu Woong HWANG, Seunghan YANG, Hyunsin PARK, Leonid SHEYNBLAT, Vinesh SUKUMAR, Ziad ASGHAR, Justin MCGLOIN, Joel LINSKY, Tong TANG
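A hedged sketch of the host-side flow: client-device-specific extractors map each client's data into a common feature space, and a shared task-specific model is trained on the pooled features. The random-projection extractors and the least-squares task head are illustrative assumptions, and for simplicity each extractor fills the full common space rather than a subset of it.

```python
# Sketch: per-client feature extraction into a common space, then one shared task model.
import numpy as np

rng = np.random.default_rng(1)
common_dim = 16

def make_extractor(in_dim):
    """Client-device-specific extractor mapping raw data into the common feature space."""
    P = rng.standard_normal((common_dim, in_dim))
    return lambda x: x @ P.T

clients = {
    "phone": (rng.standard_normal((50, 32)), make_extractor(32)),
    "watch": (rng.standard_normal((50, 8)), make_extractor(8)),
}

features, labels = [], []
for data, extract in clients.values():
    f = extract(data)                            # features in the common space
    features.append(f)
    labels.append((f[:, 0] > 0).astype(float))   # toy task labels
X, y = np.vstack(features), np.concatenate(labels)

# Least-squares stand-in for training the shared task-specific model.
task_model, *_ = np.linalg.lstsq(X, y, rcond=None)
print("task model weights:", task_model.shape)   # a version would be deployed per client
```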
-
Patent number: 11908457
Abstract: A method for operating a neural network includes receiving an input sequence at an encoder. The input sequence is encoded to produce a set of hidden representations. Attention-heads of the neural network calculate attention weights based on the hidden representations. A context vector is calculated for each attention-head based on the attention weights and the hidden representations. Each of the context vectors corresponds to a portion of the input sequence. An inference is output based on the context vectors.
Type: Grant
Filed: July 3, 2020
Date of Patent: February 20, 2024
Assignee: QUALCOMM Incorporated
Inventors: Mingu Lee, Jinkyu Lee, Hye Jin Jang, Kyu Woong Hwang
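A minimal sketch of per-head attention weights and context vectors computed from encoder hidden representations; the tensor shapes and the use of one learned query per head are assumptions for illustration.

```python
# Sketch: each attention head weights the hidden representations and
# produces one context vector covering a portion of the input sequence.
import torch
import torch.nn.functional as F

def attention_contexts(hidden, head_queries):
    """hidden: (T, d) hidden representations; head_queries: (num_heads, d)."""
    d = hidden.shape[-1]
    weights = F.softmax(head_queries @ hidden.T / d ** 0.5, dim=-1)  # attention weights per head
    contexts = weights @ hidden                                      # one context vector per head
    return weights, contexts

hidden = torch.randn(20, 64)     # encoded input sequence
queries = torch.randn(4, 64)     # one learned query per attention head
weights, contexts = attention_contexts(hidden, queries)
print(weights.shape, contexts.shape)   # (4, 20), (4, 64)
```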
-
Publication number: 20240045782
Abstract: Embodiments include methods performed by a processor of a computing device for suggesting more efficient action sequences to a user. The methods may include recognizing a user action sequence including one or more user actions performed by the user to achieve a result, determining a first difficulty rating of the user action sequence, determining whether a cluster of multiple system action sequences exists within a cluster database in which each system action sequence of the one or more system action sequences produces the result. Methods may further include comparing the first difficulty rating to one or more difficulty ratings of the one or more system action sequences in response to determining that the cluster of multiple system action sequences exists within the cluster database, and displaying, via a display interface of the computing device, one or more system action sequences with a lower difficulty rating than the first difficulty rating.
Type: Application
Filed: August 8, 2022
Publication date: February 8, 2024
Inventors: Sungrack YUN, Hyoungwoo PARK, Seunghan YANG, Hyesu LIM, Taekyung KIM, Jaewon CHOI, Kyu Woong HWANG
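A sketch of the suggestion logic: look up system action sequences that reach the same result and surface those with a lower difficulty rating. Using the number of actions as the difficulty rating and a plain dict as the cluster database are illustrative assumptions.

```python
# Hedged sketch of comparing a user's action sequence against a cluster of
# system action sequences that produce the same result.
def suggest_easier_sequences(user_actions, result, cluster_db):
    user_difficulty = len(user_actions)                 # toy difficulty rating
    cluster = cluster_db.get(result, [])
    return [seq for seq in cluster if len(seq) < user_difficulty]

cluster_db = {
    "wifi_off": [["open quick settings", "tap wifi"],
                 ["open settings", "network", "wifi", "toggle off"]],
}
user_seq = ["open settings", "search wifi", "open wifi menu", "toggle off", "confirm"]
print(suggest_easier_sequences(user_seq, "wifi_off", cluster_db))
```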
-
Patent number: 11798204
Abstract: Imaging systems and techniques are described. An imaging system receives image data representing at least a portion (e.g., a face) of a first user as captured by a first image sensor. The imaging system identifies that a gaze of the first user as represented in the image data is directed toward a displayed representation of at least a portion (e.g., a face) of a second user. The imaging system identifies an arrangement of representations of users for output. The imaging system generates modified image data based on the gaze and the arrangement at least in part by modifying the image data to modify at least the portion of the first user in the image data to be visually directed toward a direction corresponding to the second user based on the gaze and the arrangement. The imaging system outputs the modified image data arranged according to the arrangement.
Type: Grant
Filed: March 2, 2022
Date of Patent: October 24, 2023
Assignee: QUALCOMM Incorporated
Inventors: Hyunsin Park, Juntae Lee, Simyung Chang, Byeonggeun Kim, Jaewon Choi, Kyu Woong Hwang
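The gaze-adjustment flow could be sketched roughly as below; detect_gaze_target and redirect_gaze are hypothetical placeholders standing in for the gaze-detection and re-rendering stages, not APIs described in the patent.

```python
# Rough sketch of the described flow: detect whose representation the first
# user is looking at, then modify the image so the gaze points toward that
# user's position in the output arrangement.
def adjust_frame(frame, arrangement, detect_gaze_target, redirect_gaze):
    target = detect_gaze_target(frame)        # which displayed user the gaze points at
    direction = arrangement[target]           # where that user sits in the arrangement
    return redirect_gaze(frame, direction)    # modified image data for output

# Toy usage with placeholder callables.
frame = {"pixels": "..."}
arrangement = {"user_b": "left", "user_c": "right"}
print(adjust_frame(frame, arrangement,
                   detect_gaze_target=lambda f: "user_b",
                   redirect_gaze=lambda f, d: {**f, "gaze_direction": d}))
```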
-
Publication number: 20230281885
Abstract: Imaging systems and techniques are described. An imaging system receives image data representing at least a portion (e.g., a face) of a first user as captured by a first image sensor. The imaging system identifies that a gaze of the first user as represented in the image data is directed toward a displayed representation of at least a portion (e.g., a face) of a second user. The imaging system identifies an arrangement of representations of users for output. The imaging system generates modified image data based on the gaze and the arrangement at least in part by modifying the image data to modify at least the portion of the first user in the image data to be visually directed toward a direction corresponding to the second user based on the gaze and the arrangement. The imaging system outputs the modified image data arranged according to the arrangement.
Type: Application
Filed: March 2, 2022
Publication date: September 7, 2023
Inventors: Hyunsin PARK, Juntae LEE, Simyung CHANG, Byeonggeun KIM, Jaewon CHOI, Kyu Woong HWANG
-
Patent number: 11664012
Abstract: In one embodiment, an electronic device includes an input device configured to provide an input stream, a first processing device, and a second processing device. The first processing device is configured to use a keyword-detection model to determine if the input stream comprises a keyword, wake up the second processing device in response to determining that a segment of the input stream comprises the keyword, and modify the keyword-detection model in response to a training input received from the second processing device. The second processing device is configured to use a first neural network to determine whether the segment of the input stream comprises the keyword and provide the training input to the first processing device in response to determining that the segment of the input stream does not comprise the keyword.
Type: Grant
Filed: March 25, 2020
Date of Patent: May 30, 2023
Assignee: Qualcomm Incorporated
Inventors: Young Mo Kang, Sungrack Yun, Kyu Woong Hwang, Hye Jin Jang, Byeonggeun Kim
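A sketch of the two-stage interaction: a small always-on detector wakes the second processing device, and rejected detections are fed back as training input that adapts the first-stage model. The threshold-based detector and its toy update rule are assumptions, not the patent's model.

```python
# Hedged sketch of the two-processor keyword-detection loop.
class FirstStageDetector:
    def __init__(self, threshold=0.5):
        self.threshold = threshold
    def detect(self, score):                    # score from a small keyword model
        return score > self.threshold
    def apply_training_input(self, _segment):
        self.threshold += 0.05                  # toy update: become more conservative

def process_segment(first_stage, second_stage_verify, segment, score):
    if not first_stage.detect(score):
        return False                            # second stage never wakes up
    if second_stage_verify(segment):            # larger neural network confirms
        return True
    first_stage.apply_training_input(segment)   # false accept: adapt the first stage
    return False

det = FirstStageDetector()
print(process_segment(det, lambda s: False, segment="audio segment", score=0.7))
print(det.threshold)   # raised after the rejected detection
```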
-
Patent number: 11652960
Abstract: Embodiment systems and methods for presenting a facial expression in a virtual meeting may include detecting a user facial expression of a user based on information received from a sensor of the computing device, determining whether the detected user facial expression is approved for presentation on an avatar in a virtual meeting, generating an avatar exhibiting a facial expression consistent with the detected user facial expression in response to determining that the detected user facial expression is approved for presentation on an avatar in the virtual meeting, generating an avatar exhibiting a facial expression that is approved for presentation in response to determining that the detected user facial expression is not approved for presentation on an avatar in the virtual meeting, and presenting the generated avatar in the virtual meeting.
Type: Grant
Filed: May 14, 2021
Date of Patent: May 16, 2023
Assignee: QUALCOMM Incorporated
Inventors: Jae-Won Choi, Sungrack Yun, Janghoon Cho, Hanul Kim, Hyoungwoo Park, Seunghan Yang, Kyu Woong Hwang
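The approval check reduces to a simple gate on the detected expression; a minimal sketch follows, with an assumed set of approved expressions.

```python
# Sketch of the approval gate for avatar expressions.
APPROVED = {"neutral", "smile", "nod"}   # illustrative approved set

def avatar_expression(detected_expression, fallback="neutral"):
    """Mirror the user's expression only if it is approved for the meeting."""
    return detected_expression if detected_expression in APPROVED else fallback

print(avatar_expression("smile"))   # mirrored on the avatar
print(avatar_expression("frown"))   # replaced with an approved expression
```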
-
Publication number: 20230081012
Abstract: Embodiments include methods, executed by a processor of a mobile device, of assisting a user in locating the mobile device. Various embodiments may include a processor of a mobile device obtaining information useful for locating the mobile device from a sensor of the mobile device configured to obtain information regarding surroundings of the mobile device, anonymizing the obtained information to remove private information, and uploading the anonymized information to a remote server in response to determining that the mobile device may be misplaced. Anonymizing the obtained information may include removing speech from an audio input and compiling samples of ambient noise for inclusion in the anonymized information. Anonymizing the obtained information to remove private information may also include editing an image captured by the mobile device to make images of detected individuals unrecognizable.
Type: Application
Filed: September 14, 2021
Publication date: March 16, 2023
Inventors: Kyu Woong HWANG, Sungrack YUN, Jaewon CHOI, Seunghan YANG, Janghoon CHO, Hyoungwoo PARK, Hanul KIM
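A sketch of the anonymization step before upload: speech segments are dropped in favor of ambient-noise samples, and detected faces are made unrecognizable. The is_speech, detect_faces, and blur callables are hypothetical placeholders for on-device models.

```python
# Hedged sketch of anonymizing sensor data before it leaves the device.
def anonymize(audio_segments, image, is_speech, detect_faces, blur):
    ambient = [seg for seg in audio_segments if not is_speech(seg)]  # drop speech
    for box in detect_faces(image):                                  # make people unrecognizable
        image = blur(image, box)
    return {"ambient_noise_samples": ambient, "image": image}

result = anonymize(
    audio_segments=["traffic", "voice:hello", "birds"],
    image="frame.jpg",
    is_speech=lambda seg: seg.startswith("voice:"),
    detect_faces=lambda img: [(10, 10, 40, 40)],
    blur=lambda img, box: f"{img}+blurred{box}",
)
print(result)   # only anonymized information would be uploaded
```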
-
Publication number: 20220405547
Abstract: Certain aspects of the present disclosure provide techniques for residual normalization. A first tensor comprising a frequency dimension and a temporal dimension is accessed. A second tensor is generated by applying a frequency-based instance normalization operation to the first tensor, comprising, for each respective frequency bin in the frequency dimension, computing a respective frequency-specific mean of the first tensor. A third tensor is generated by: scaling the first tensor by a scale value, and aggregating the scaled first tensor and the second tensor. The third tensor is provided as input to a layer of a neural network.
Type: Application
Filed: June 17, 2022
Publication date: December 22, 2022
Inventors: Byeonggeun KIM, Simyung Chang, Jangho Kim, Seunghan Yang, Kyu Woong Hwang
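A hedged sketch of residual normalization on a (frequency, time) tensor: per-frequency-bin instance statistics produce the normalized second tensor, which is aggregated with a scaled copy of the input. The scale value, the use of a standard-deviation term, and the feature shape are assumptions.

```python
# Sketch: frequency-based instance normalization plus a scaled skip path.
import numpy as np

def residual_norm(x, scale=0.1, eps=1e-5):
    """x: (freq_bins, time_frames) feature tensor."""
    mean = x.mean(axis=1, keepdims=True)         # frequency-specific means
    std = x.std(axis=1, keepdims=True)
    normalized = (x - mean) / (std + eps)        # "second tensor"
    return scale * x + normalized                # "third tensor": scaled input + normalized

features = np.random.randn(40, 100)              # e.g. 40 mel bins, 100 frames
out = residual_norm(features)
print(out.shape)                                 # fed to the next network layer
```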
-
Publication number: 20220383197
Abstract: Certain aspects of the present disclosure provide techniques for training a machine learning model. The method generally includes receiving, at a local device from a server, information defining a global version of a machine learning model. A local version of the machine learning model and a local center associated with the local version of the machine learning model are generated based on embeddings generated from local data at a client device and the global version of the machine learning model. A secure center different from the local center is generated based, at least in part, on information about secure centers shared by a plurality of other devices participating in a federated learning scheme. Information about the local version of the machine learning model and information about the secure center is transmitted by the local device to the server.
Type: Application
Filed: May 31, 2022
Publication date: December 1, 2022
Inventors: Hyunsin PARK, Hossein HOSSEINI, Sungrack YUN, Kyu Woong HWANG
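A rough, speculative sketch of the client-side step: a local center is computed from embeddings of local data, and a different "secure" center, derived partly from other participants' shared secure centers, is what gets transmitted. The mixing rule and the noise stand-in for local training are my own assumptions, not the publication's construction.

```python
# Speculative sketch of one client update in the described federated scheme.
import numpy as np

def client_update(global_model, local_data, other_secure_centers, rng):
    embeddings = local_data @ global_model               # embeddings from the global model
    local_center = embeddings.mean(axis=0)               # local center (kept on-device)
    mix = np.mean(other_secure_centers, axis=0)          # info shared by other participants
    secure_center = 0.5 * local_center + 0.5 * mix       # different center, safe to share
    # Stand-in for local training of the model on local data.
    local_model = global_model + 0.01 * rng.standard_normal(global_model.shape)
    return local_model, secure_center                    # transmitted to the server

rng = np.random.default_rng(2)
global_model = rng.standard_normal((8, 4))
local_data = rng.standard_normal((32, 8))
others = [rng.standard_normal(4) for _ in range(3)]
model, center = client_update(global_model, local_data, others, rng)
print(model.shape, center.shape)
```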
-
Publication number: 20220368856
Abstract: Embodiment systems and methods for presenting a facial expression in a virtual meeting may include detecting a user facial expression of a user based on information received from a sensor of the computing device, determining whether the detected user facial expression is approved for presentation on an avatar in a virtual meeting, generating an avatar exhibiting a facial expression consistent with the detected user facial expression in response to determining that the detected user facial expression is approved for presentation on an avatar in the virtual meeting, generating an avatar exhibiting a facial expression that is approved for presentation in response to determining that the detected user facial expression is not approved for presentation on an avatar in the virtual meeting, and presenting the generated avatar in the virtual meeting.
Type: Application
Filed: May 14, 2021
Publication date: November 17, 2022
Inventors: Jae-Won CHOI, Sungrack YUN, Janghoon CHO, Hanul KIM, Hyoungwoo PARK, Seunghan YANG, Kyu Woong HWANG
-
Publication number: 20220318633
Abstract: A processor-implemented method for compressing a deep neural network model includes receiving an initial neural network model. The initial neural network is pruned based on a first threshold to generate a pruned network and a set of pruned weights. A quantization process is applied to the pruned network to produce a pruned and quantized network. A teacher model is generated by incorporating the pruned set of weights with the pruned network. In addition, an initial student model is generated from the quantized and pruned network. The initial student model is trained using the teacher model to output a trained student model.
Type: Application
Filed: March 25, 2022
Publication date: October 6, 2022
Inventors: Jangho KIM, Simyung CHANG, Hyunsin PARK, Juntae LEE, Jaewon CHOI, Kyu Woong HWANG
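A sketch of the compression order the abstract describes: magnitude pruning, quantization of the pruned network, a teacher that incorporates the pruned weight set back into the pruned network, and a pruned-and-quantized initial student to be trained against the teacher. The threshold, bit width, and the elided distillation loop are assumptions.

```python
# Hedged sketch of prune -> quantize -> teacher/student setup on one weight matrix.
import numpy as np

def prune(weights, threshold):
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

def quantize(weights, bits=8):
    scale = np.abs(weights).max() / (2 ** (bits - 1) - 1)
    return np.round(weights / scale) * scale

rng = np.random.default_rng(3)
W = rng.standard_normal((64, 64))          # stand-in for one layer of the initial model

pruned, mask = prune(W, threshold=0.5)     # pruned network; ~mask marks the pruned weights
teacher = pruned + W * ~mask               # teacher: pruned network plus the pruned weight set
student_init = quantize(pruned)            # initial student: pruned and quantized network
# A full pipeline would now train student_init to match the teacher's outputs.
print(f"kept {mask.mean():.0%} of weights; "
      f"max quantization error {np.abs(pruned - student_init).max():.4f}")
```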
-
Patent number: 11437031
Abstract: A device to process an audio signal representing input sound includes a hand detector configured to generate a first indication responsive to detection of at least a portion of a hand over at least a portion of the device. The device also includes an automatic speech recognition system configured to be activated, responsive to the first indication, to process the audio signal.
Type: Grant
Filed: July 30, 2019
Date of Patent: September 6, 2022
Assignee: QUALCOMM Incorporated
Inventors: Sungrack Yun, Young Mo Kang, Hye Jin Jang, Byeonggeun Kim, Kyu Woong Hwang
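The gating behavior can be sketched in a few lines; the hand_detected flag and the run_asr callable are stand-ins for the hand detector's indication and the speech recognition system.

```python
# Sketch: speech recognition is only activated once the hand detector fires.
def handle_audio(audio_signal, hand_detected, run_asr):
    if hand_detected:                  # first indication from the hand detector
        return run_asr(audio_signal)   # ASR activated to process the audio signal
    return None                        # ASR stays inactive

print(handle_audio("frame-0", hand_detected=True, run_asr=lambda a: "recognized speech"))
print(handle_audio("frame-1", hand_detected=False, run_asr=lambda a: "never called"))
```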
-
Publication number: 20220122594
Abstract: A computer-implemented method of operating an artificial neural network for processing data having a frequency dimension includes receiving an audio input. The audio input may be separated into one or more subgroups along the frequency dimension. A normalization may be performed on each subgroup. The normalization for a first subgroup is performed independently of the normalization for a second subgroup. An output, such as a keyword detection indication, is generated based on the normalized subgroups.
Type: Application
Filed: October 20, 2021
Publication date: April 21, 2022
Inventors: Simyung CHANG, Hyunsin PARK, Hyoungwoo PARK, Janghoon CHO, Sungrack YUN, Kyu Woong HWANG
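A sketch of splitting features into frequency subgroups and normalizing each subgroup independently of the others; the group count and the spectrogram shape are assumptions.

```python
# Sketch: independent normalization per frequency subgroup.
import numpy as np

def subgroup_normalize(features, num_groups=4, eps=1e-5):
    """features: (freq_bins, time_frames); each frequency subgroup is
    normalized using only its own statistics."""
    groups = np.array_split(features, num_groups, axis=0)
    normed = [(g - g.mean()) / (g.std() + eps) for g in groups]
    return np.concatenate(normed, axis=0)

spectrogram = np.random.randn(40, 98)     # e.g. 40 mel bins for a short clip
normalized = subgroup_normalize(spectrogram)
print(normalized.shape)                   # fed to the keyword-detection network
```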
-
Publication number: 20220121949
Abstract: A method for generating a personalized model includes receiving one or more personal data samples from a user. A prototype of a personal identity is generated based on the personal data samples. The prototype of the personal identity is trained to reflect personal characteristics of the user. A network graph is generated based on the prototype of the personal identity. One or more channels of a global network are pruned based on the network graph to produce the personalized model.
Type: Application
Filed: October 20, 2021
Publication date: April 21, 2022
Inventors: Simyung CHANG, Jangho KIM, Hyunsin PARK, Juntae LEE, Jaewon CHOI, Kyu Woong HWANG
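A speculative sketch of pruning a global network down to a personalized model using a prototype built from personal samples; the prototype-alignment relevance score and the keep ratio are my own illustrative assumptions, not the publication's method.

```python
# Rough sketch: a personal prototype scores each channel of a global layer,
# and low-relevance channels are pruned to form the personalized model.
import numpy as np

def personalize(global_weights, personal_samples, keep_ratio=0.5):
    prototype = personal_samples.mean(axis=0)            # prototype of the personal identity
    relevance = np.abs(global_weights @ prototype)       # one score per output channel
    k = int(len(relevance) * keep_ratio)
    keep = np.argsort(relevance)[-k:]                    # channels to keep
    return global_weights[keep], keep

rng = np.random.default_rng(4)
W = rng.standard_normal((32, 16))                        # 32 channels in the global layer
samples = rng.standard_normal((10, 16))                  # user's personal data embeddings
personal_W, kept = personalize(W, samples)
print(personal_W.shape, len(kept))                       # the pruned, personalized layer
```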