Patents by Inventor Joo Chan Sohn
Joo Chan Sohn has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11341742
Abstract: A method of estimating a self-location of a vehicle includes obtaining an omnidirectional image by using an omnidirectional camera photographing the ground around a driving vehicle at a current location of the driving vehicle, estimating the current location of the driving vehicle on the basis of a global positioning system (GPS) satellite signal received from a satellite, searching a satellite image database storing satellite images obtained by photographing the ground with the satellite to determine candidate satellite images corresponding to the estimated current location of the driving vehicle, comparing the omnidirectional image with each of the determined candidate satellite images to determine the candidate satellite image having the highest similarity to the omnidirectional image from among the determined candidate satellite images, and finally estimating the current location of the driving vehicle by using a location measurement value mapped to the determined candidate satellite image.
Type: Grant
Filed: November 26, 2019
Date of Patent: May 24, 2022
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventor: Joo Chan Sohn
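The abstract above describes a retrieval-and-matching pipeline: seed candidate satellite tiles from the GPS estimate, score each tile against the ground-view image, and return the location mapped to the best-scoring tile. A minimal sketch of that final matching step, assuming candidate tiles arrive pre-warped to the omnidirectional image's frame and using normalized cross-correlation as an illustrative similarity metric (the patent does not commit to a specific matcher):

```python
import numpy as np

def estimate_location(omni_image, candidates):
    """Pick the candidate satellite tile most similar to the ground-view
    omnidirectional image, and return the location measurement mapped to it.

    `candidates` is a list of (tile_image, (lat, lon)) pairs -- an assumed
    representation, not the patent's claimed data format.
    """
    def ncc(a, b):
        # Normalized cross-correlation: zero-mean, unit-variance overlap score.
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        return float((a * b).mean())

    best_score, best_loc = -np.inf, None
    for tile, location in candidates:
        score = ncc(omni_image, tile)
        if score > best_score:
            best_score, best_loc = score, location
    return best_loc, best_score
```

In a real system the candidate set would come from querying the satellite image database around the GPS-estimated position, and the similarity would be computed on a common (e.g. top-down) projection of both images.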
-
Publication number: 20200202104
Abstract: A method of estimating a self-location of a vehicle includes obtaining an omnidirectional image by using an omnidirectional camera photographing the ground around a driving vehicle at a current location of the driving vehicle, estimating the current location of the driving vehicle on the basis of a global positioning system (GPS) satellite signal received from a satellite, searching a satellite image database storing satellite images obtained by photographing the ground with the satellite to determine candidate satellite images corresponding to the estimated current location of the driving vehicle, comparing the omnidirectional image with each of the determined candidate satellite images to determine the candidate satellite image having the highest similarity to the omnidirectional image from among the determined candidate satellite images, and finally estimating the current location of the driving vehicle by using a location measurement value mapped to the determined candidate satellite image.
Type: Application
Filed: November 26, 2019
Publication date: June 25, 2020
Inventor: Joo Chan SOHN
-
Patent number: 10371534
Abstract: Provided are an apparatus and method for sharing and learning driving environment data to improve the decision intelligence of an autonomous vehicle. The apparatus for sharing and learning driving environment data to improve the decision intelligence of an autonomous vehicle includes a sensing section which senses surrounding vehicles traveling within a preset distance from the autonomous vehicle, a communicator which transmits and receives data between the autonomous vehicle and another vehicle or a cloud server, a storage which stores precise lane-level map data, and a learning section which generates mapping data centered on the autonomous vehicle by mapping driving environment data of a sensing result of the sensing section to the precise map data, transmits the mapping data to the other vehicle or the cloud server through the communicator, and performs learning for autonomous driving using the mapping data and data received from the other vehicle or the cloud server.
Type: Grant
Filed: May 23, 2017
Date of Patent: August 6, 2019
Assignee: Electronics and Telecommunications Research Institute
Inventors: Kyoung Wook Min, Jeong Dan Choi, Jun Gyu Kang, Sang Heon Park, Kyung Bok Sung, Joo Chan Sohn, Dong Jin Lee, Yong Woo Jo, Seung Jun Han
-
Publication number: 20190143992
Abstract: The present invention relates to a self-driving learning apparatus and method using driving experience information. The self-driving learning apparatus includes: an environment information collecting sensor configured to collect driving environment information of a traveling vehicle; a control information collecting sensor configured to collect behavior control information of the traveling vehicle; and a self-driving information generator configured to generate driving experience information by matching driving environment information of a driving environment changing around an ego vehicle to the collected behavior control information.
Type: Application
Filed: January 12, 2018
Publication date: May 16, 2019
Applicant: Electronics and Telecommunications Research Institute
Inventors: Joo Chan SOHN, Kyoung Wook MIN, Jeong Dan CHOI
-
Publication number: 20180129205
Abstract: Provided are an automatic driving system and method using a driving experience database for safe driving in various traffic situations. The automatic driving method includes receiving driving information about surrounding vehicles located near a first vehicle, receiving information about an event and driving information about the first vehicle when an event which is set for the first vehicle occurs, storing the driving information about the surrounding vehicles and the driving information about the first vehicle in association with the information about the event to build a database, and performing learning on a driving behavior of the first vehicle based on the occurrence of the event.
Type: Application
Filed: October 26, 2017
Publication date: May 10, 2018
Applicant: Electronics and Telecommunications Research Institute
Inventors: Jeong Dan CHOI, Joo Chan SOHN, Kyoung Wook MIN, Seung Jun HAN, Hyun Jeong YUN
-
Publication number: 20180101172
Abstract: Provided are an apparatus and method for sharing and learning driving environment data to improve the decision intelligence of an autonomous vehicle. The apparatus for sharing and learning driving environment data to improve the decision intelligence of an autonomous vehicle includes a sensing section which senses surrounding vehicles traveling within a preset distance from the autonomous vehicle, a communicator which transmits and receives data between the autonomous vehicle and another vehicle or a cloud server, a storage which stores precise lane-level map data, and a learning section which generates mapping data centered on the autonomous vehicle by mapping driving environment data of a sensing result of the sensing section to the precise map data, transmits the mapping data to the other vehicle or the cloud server through the communicator, and performs learning for autonomous driving using the mapping data and data received from the other vehicle or the cloud server.
Type: Application
Filed: May 23, 2017
Publication date: April 12, 2018
Applicant: Electronics and Telecommunications Research Institute
Inventors: Kyoung Wook MIN, Jeong Dan CHOI, Jun Gyu KANG, Sang Heon PARK, Kyung Bok SUNG, Joo Chan SOHN, Dong Jin LEE, Yong Woo JO, Seung Jun HAN
-
Publication number: 20170192436
Abstract: Disclosed are an autonomous driving service system for an autonomous driving vehicle, a cloud server for the same, and a method for operating the cloud server. The autonomous driving service system for the autonomous driving vehicle according to an embodiment of the present invention includes a user terminal that requests autonomous driving map data used for an autonomous driving vehicle to perform autonomous driving from a departure point set in advance to a destination, and a cloud server that establishes and manages precise map data based on raw data collected from a plurality of collection vehicles which are driving in mutually different locations, acquires the autonomous driving map data by searching the precise map data in response to the user terminal's request for autonomous driving map data, and transmits the acquired autonomous driving map data to the autonomous driving vehicle.
Type: Application
Filed: June 30, 2016
Publication date: July 6, 2017
Applicant: Electronics and Telecommunications Research Institute
Inventors: Kyoung Wook MIN, Kyung Bok SUNG, Jeong Dan CHOI, Seung Jun HAN, Joo Chan SOHN
-
Patent number: 9605968
Abstract: A navigation route cooperation navigation system and a method of controlling the same are provided. The navigation route cooperation navigation system can secure a visible distance or a communicable range and allow a member vehicle which cannot perform cluster driving to drive according to driving information of a leader vehicle, allow the member vehicle to drive according to a recommended route better than the navigation route of the leader vehicle, and set a member vehicle which passes the leader vehicle as a new leader vehicle.
Type: Grant
Filed: June 11, 2015
Date of Patent: March 28, 2017
Assignee: Electronics and Telecommunications Research Institute
Inventors: Yoo Seung Song, Hyun Jeong Yun, Oh Cheon Kwon, Joo Chan Sohn
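The leader-handover rule in this abstract (a member vehicle that passes the leader becomes the new leader) can be sketched as a one-step update over each vehicle's progress along the shared route; representing progress as a single scalar per vehicle is an assumption for illustration:

```python
def update_leader(vehicles, leader_id):
    """Hand leadership to a member that has passed the current leader.

    `vehicles` maps vehicle id -> progress along the shared route
    (larger = farther ahead). If no member is strictly ahead of the
    leader, leadership is unchanged.
    """
    leader_pos = vehicles[leader_id]
    # Farthest-ahead vehicle; the current leader wins exact ties.
    front_id = max(vehicles, key=lambda v: (vehicles[v], v == leader_id))
    return front_id if vehicles[front_id] > leader_pos else leader_id
```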
-
Publication number: 20160261619
Abstract: Provided are a ship gateway apparatus and a status information displaying method thereof. The ship gateway apparatus includes a network interface configured to provide an interface with an external network system, a data input unit configured to receive, through the network interface, data transmitted from the external network system, a network connection establisher configured to establish a connection with the external network system, a data processor configured to process data based on establishment information about the connection with the external network system established by the network connection establisher, a data output unit configured to output data, obtained through processing by the data processor, to the network interface, and a switching unit configured to transfer data, input through the data input unit, to the data processor and output the data, obtained through processing by the data processor, to the data output unit.
Type: Application
Filed: February 29, 2016
Publication date: September 8, 2016
Applicant: Electronics and Telecommunications Research Institute
Inventors: Kwang Il LEE, Moon Sub SONG, Joo Chan SOHN, Byung Tae JANG
-
Patent number: 9361591
Abstract: An apparatus and method for building a map of probability distribution are provided. The apparatus for building the map of probability distribution includes: a sensor information collector configured to collect sensor information from a plurality of sensors; an object recognizer configured to recognize an object by integrating and inferring the sensor information, and to acquire object information; and a probability distribution creator configured to determine whether to apply an object property model including at least one of kinematic properties, shape properties, and probabilistic properties in correspondence to the object information, to acquire object properties corresponding to the object information, and to create a probability distribution based on the object properties. Accordingly, it is possible to build a map with high reliability.
Type: Grant
Filed: October 29, 2013
Date of Patent: June 7, 2016
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Ki In Na, Jae Min Byun, Myung Chan Roh, Joo Chan Sohn, Sung Hoon Kim
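A minimal sketch of the probability-distribution creation step, assuming a 2-D occupancy grid and using a Gaussian stamp whose spread stands in for the object property model (in the patent's terms, kinematic, shape, or probabilistic properties would shape this distribution; the Gaussian here is purely illustrative):

```python
import numpy as np

def add_object_probability(grid, center, sigma):
    """Stamp a 2-D Gaussian, centred on a recognised object, onto an
    occupancy-probability grid and return the updated grid.

    `sigma` plays the role of the object-property model: an uncertain
    or moving object would get a wider distribution.
    """
    h, w = grid.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = center
    g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    # Fuse with existing evidence as independent sources:
    # P(occupied) = 1 - (1 - p_old)(1 - p_new)
    return 1.0 - (1.0 - grid) * (1.0 - g)
```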
-
Publication number: 20160146619
Abstract: A navigation route cooperation navigation system and a method of controlling the same are provided. The navigation route cooperation navigation system can secure a visible distance or a communicable range and allow a member vehicle which cannot perform cluster driving to drive according to driving information of a leader vehicle, allow the member vehicle to drive according to a recommended route better than the navigation route of the leader vehicle, and set a member vehicle which passes the leader vehicle as a new leader vehicle.
Type: Application
Filed: June 11, 2015
Publication date: May 26, 2016
Inventors: Yoo Seung SONG, Hyun Jeong YUN, Oh Cheon KWON, Joo Chan SOHN
-
Patent number: 9280703
Abstract: Disclosed is an apparatus for tracking a location of a hand, including: a skin color image detector for detecting a skin color region from an image input from an image device using a predetermined skin color of a user; a face tracker for tracking a face using the detected skin color image; a motion detector for setting a ROI using location information of the tracked face, and detecting a motion image from the set ROI; a candidate region extractor for extracting a candidate region with respect to a hand of the user using the skin color image detected by the skin color image detector and the motion image detected by the motion detector; and a hand tracker for tracking a location of the hand in the extracted candidate region to find a final location of the hand.
Type: Grant
Filed: August 28, 2012
Date of Patent: March 8, 2016
Assignee: Electronics and Telecommunications Research Institute
Inventors: Woo Han Yun, Jae Yeon Lee, Do Hyung Kim, Jae Hong Kim, Joo Chan Sohn
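The candidate-region extraction described above (intersecting the skin-color detection result with the motion detection result inside the face-anchored ROI) might look like the following sketch, with boolean per-pixel masks as an assumed representation and a simple centroid standing in for the hand tracker:

```python
import numpy as np

def hand_candidate_mask(skin_mask, motion_mask, roi):
    """Intersect the skin-colour mask with the motion mask inside the
    ROI set around the tracked face. Masks are boolean arrays of the
    same shape; `roi` is (top, bottom, left, right) in pixels."""
    t, b, l, r = roi
    candidate = np.zeros_like(skin_mask, dtype=bool)
    candidate[t:b, l:r] = skin_mask[t:b, l:r] & motion_mask[t:b, l:r]
    return candidate

def hand_location(candidate):
    """Centroid of the candidate pixels, or None if there are none."""
    ys, xs = np.nonzero(candidate)
    if ys.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())
```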
-
Patent number: 9088871
Abstract: Disclosed are a wearable behavioral pattern collecting apparatus for generating collected information by analyzing a behavioral pattern of a wearer, and generating and thereby transmitting emergency call information when an emergency situation occurs, a network including a repeater to transmit information received from the behavioral pattern collecting apparatus to a remote monitoring server, and a behavioral pattern analyzing system and method for transmitting information on an emergency accident and a position of the wearer to a corresponding institution or a corresponding person in charge when an emergency situation such as a falling accident or the emergency accident occurs, by observing a change in the behavioral pattern of the wearer.
Type: Grant
Filed: August 15, 2012
Date of Patent: July 21, 2015
Assignee: Electronics and Telecommunications Research Institute
Inventors: Chan Kyu Park, Jae Hong Kim, Cheon Shu Park, Sang Seung Kang, Min Su Jang, Joo Chan Sohn
-
Patent number: 9008440
Abstract: Disclosed are a component recognizing apparatus and a component recognizing method. The component recognizing apparatus includes: an image preprocessing unit configured to extract component edges from an input component image by using a plurality of edge detecting techniques, and detect a component region by using the extracted component edges; a feature extracting unit configured to extract a component feature from the detected component region, and create a feature vector by using the component feature; and a component recognizing unit configured to input the created feature vector to an artificial neural network which has learned in advance to recognize a component category through a plurality of component image samples, and recognize the component category according to a result.
Type: Grant
Filed: July 10, 2012
Date of Patent: April 14, 2015
Assignee: Electronics and Telecommunications Research Institute
Inventors: Kye Kyung Kim, Woo Han Yun, Hye Jin Kim, Su Young Chi, Jae Yeon Lee, Mun Sung Han, Jae Hong Kim, Joo Chan Sohn
-
Patent number: 8989771
Abstract: A space recognition system obtains available RSSI (Received Signal Strength Indicator) information for a plurality of fixing devices in the vicinity of a user device in the wireless sensor network environment, collects environment information in a space where the user device is located, and collects environment information in a plurality of spaces in which the fixing devices are respectively located. The system combines the RSSI information and the environment information and performs a recognition function on the combined environment information to recognize the space in which a user having the user device is located.
Type: Grant
Filed: August 29, 2012
Date of Patent: March 24, 2015
Assignee: Electronics and Telecommunications Research Institute
Inventors: Sang Seung Kang, Min Su Jang, Jae Hong Kim, Joo Chan Sohn
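A sketch of the final recognition step, assuming the combined RSSI and environment measurements are flattened into one feature vector per space; nearest-neighbour matching with Euclidean distance is an illustrative choice of recognizer, not the one the patent specifies:

```python
import math

def recognize_space(user_features, space_profiles):
    """Classify which space the user is in by nearest-neighbour
    matching of a combined feature vector (e.g. RSSI readings plus
    ambient measurements such as light or sound levels).

    `space_profiles` maps space name -> reference feature vector of
    the same length as `user_features`.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(space_profiles, key=lambda s: dist(user_features, space_profiles[s]))
```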
-
Patent number: 8934716
Abstract: Disclosed is a method of sequencing character information in order to increase precision of character recognition. The method includes: a pre-processing that extracts character information from an image to binarize the extracted character information through a predetermined threshold and extracts and thins a center line of the binarized character information; normalizing the pre-processed character information to character information according to a predetermined criteria; and sequencing the normalized character information using structural features including an end point or a divergence point of the character information. The present invention suggests an angle normalization method of input character information, a structural feature position determining method, and a structural feature numeral string generating method to strongly recognize characters configured by various fonts obtained from a natural scene regardless of an angle or a size of the characters.
Type: Grant
Filed: August 30, 2012
Date of Patent: January 13, 2015
Assignee: Electronics and Telecommunications Research Institute
Inventors: Ho Sub Yoon, Ji Eun Kim, Kyu Dae Ban, Dong Jin Lee, Jae Hong Kim, Joo Chan Sohn
-
Patent number: 8913798
Abstract: Disclosed are a system and a method for recognizing a disguised face using a Gabor feature and a support vector machine (SVM) classifier according to the present invention. The system for recognizing a disguised face includes: a graph generation means to generate a single standard face graph from a plurality of facial image samples; a support vector machine (SVM) learning means to determine an optimal classification plane for discriminating a disguised face from the plurality of facial image samples and disguised facial image samples; and a facial recognition means to determine whether an input facial image is disguised using the standard face graph and the optimal classification plane when the facial image to be recognized is input.
Type: Grant
Filed: August 2, 2012
Date of Patent: December 16, 2014
Assignee: Electronics and Telecommunications Research Institute
Inventors: Kye Kyung Kim, Jae Yeon Lee, Ho Sub Yoon, Jae Hong Kim, Joo Chan Sohn
-
Publication number: 20140347484
Abstract: The present invention relates to an apparatus and method for providing the surrounding environment information of a vehicle. The apparatus includes a first information extraction unit for collecting sensing information about the surrounding environment of a vehicle and extracting lane information and object information based on the sensing information. A second information extraction unit acquires an image of the surrounding environment of the vehicle, and extracts lane information and object information based on the image. An information integration unit matches and compares the lane information and the object information extracted by the first information extraction unit with the lane information and the object information extracted by the second information extraction unit, determines ultimate lane information and ultimate object information based on the results of the comparison, and provides the ultimate lane information and the ultimate object information to a control unit of the vehicle.
Type: Application
Filed: April 16, 2014
Publication date: November 27, 2014
Applicant: Electronics and Telecommunications Research Institute
Inventors: Jae-Min BYUN, Ki-In NA, Myung-Chan ROH, Joo-Chan SOHN, Sung-Hoon KIM
-
Publication number: 20140306811
Abstract: A system for integrating gestures and sounds including: a gesture recognition unit that extracts gesture feature information corresponding to user commands from image information and acquires gesture recognition information from the gesture feature information; a background recognition unit acquiring background sound information using a predetermined background sound model from sound information; a sound recognition unit that extracts sound feature information corresponding to user commands from the sound information based on the background sound information and acquires sound recognition information from the sound feature information; and an integration unit that generates integration information by integrating the gesture recognition information and the sound recognition information.
Type: Application
Filed: June 24, 2014
Publication date: October 16, 2014
Inventors: Mun Sung HAN, Young Giu Jung, Hyun Kim, Jae Hong Kim, Joo Chan Sohn
-
Patent number: 8793134
Abstract: Disclosed is a system for integrating gestures and sounds including: a gesture recognition unit that extracts gesture feature information corresponding to user commands from image information and acquires gesture recognition information from the gesture feature information; a background recognition unit acquiring background sound information using a predetermined background sound model from sound information; a sound recognition unit that extracts sound feature information corresponding to user commands from the sound information based on the background sound information and acquires sound recognition information from the sound feature information; and an integration unit that generates integration information by integrating the gesture recognition information and the sound recognition information.
Type: Grant
Filed: December 21, 2011
Date of Patent: July 29, 2014
Assignee: Electronics and Telecommunications Research Institute
Inventors: Mun Sung Han, Young Giu Jung, Hyun Kim, Jae Hong Kim, Joo Chan Sohn
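The integration unit's merge of the two recognition results could be sketched as a simple late fusion over per-command confidence scores; the fixed linear weight is an assumption for illustration, since the abstract does not specify how the gesture and sound results are combined:

```python
def integrate(gesture_scores, sound_scores, w_gesture=0.5):
    """Late fusion of the two recognizers: combine per-command
    confidence scores from the gesture and sound modules into a
    single ranking, and return the winning command plus all
    combined scores. Missing commands score 0.0 in that modality."""
    commands = set(gesture_scores) | set(sound_scores)
    combined = {
        c: w_gesture * gesture_scores.get(c, 0.0)
           + (1 - w_gesture) * sound_scores.get(c, 0.0)
        for c in commands
    }
    return max(combined, key=combined.get), combined
```

With equal weights, a command that only one modality recognized strongly can still lose to a command both modalities agree on, which is the usual motivation for fusing the two channels.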