Patents by Inventor Tae Eun Choe

Tae Eun Choe has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200410703
    Abstract: In some implementations, a method is provided. The method includes obtaining an image depicting an environment where an autonomous driving vehicle (ADV) is located. The method also includes determining, using a first neural network, a plurality of line indicators based on the image. The plurality of line indicators represent one or more lanes in the environment. The method further includes determining, using a second neural network, a vanishing point within the image based on the plurality of line indicators. The second neural network is communicatively coupled to the first neural network. The plurality of line indicators is determined simultaneously with the vanishing point. The method further includes calibrating one or more sensors of the autonomous driving vehicle based on the vanishing point.
    Type: Application
    Filed: June 28, 2019
    Publication date: December 31, 2020
    Inventors: Yuliang Guo, Tae Eun Choe, Ka Wai Tsoi, Guang Chen, Weide Zhang
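As a rough illustration of the calibration step this abstract describes, a detected vanishing point can be converted into camera pitch and yaw under a pinhole model. The patent does not give this math; the intrinsic parameters and the near-zero-roll assumption below are illustrative assumptions, not claims from the filing.

```python
import numpy as np

def pitch_yaw_from_vanishing_point(u, v, fx, fy, cx, cy):
    """Estimate camera pitch and yaw (radians) from the vanishing point of
    straight, parallel lane lines under an assumed pinhole camera model.

    (u, v): vanishing point in pixels; (fx, fy, cx, cy): intrinsics.
    Assumes the lanes run along the road direction and roll is ~0.
    """
    yaw = np.arctan2(u - cx, fx)    # horizontal offset from center -> yaw
    pitch = np.arctan2(cy - v, fy)  # vertical offset from center -> pitch
    return pitch, yaw

# Example: VP detected 12 px above and 8 px right of the image center.
pitch, yaw = pitch_yaw_from_vanishing_point(648, 348, 1000.0, 1000.0, 640, 360)
```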
  • Publication number: 20200410252
    Abstract: In one embodiment, a set of bounding box candidates are plotted onto a 2D space based on their respective dimensions (e.g., width and height). The bounding box candidates are clustered on the 2D space based on the distribution density of the bounding box candidates. For each of the clusters of the bounding box candidates, an anchor box is determined to represent the corresponding cluster. A neural network model is trained based on the anchor boxes representing the clusters. The neural network model is utilized to detect or recognize objects based on images and/or point clouds captured by a sensor (e.g., camera, LIDAR, and/or RADAR) of an autonomous driving vehicle.
    Type: Application
    Filed: June 28, 2019
    Publication date: December 31, 2020
    Inventors: Ka Wai TSOI, Tae Eun CHOE, Yuliang GUO, Guang CHEN, Weide ZHANG
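The clustering step above is reminiscent of how anchor boxes are derived for detectors in the YOLO family. A minimal sketch, using k-means in (width, height) space as a stand-in for the density-based clustering the abstract describes; the function name and anchor count are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

def generate_anchor_boxes(boxes_wh: np.ndarray, num_anchors: int = 9) -> np.ndarray:
    """Cluster candidate boxes in (width, height) space and return one
    representative anchor per cluster (the cluster center)."""
    km = KMeans(n_clusters=num_anchors, n_init=10, random_state=0).fit(boxes_wh)
    anchors = km.cluster_centers_
    # Sort anchors by area so the ordering is stable across runs.
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]

# Example with random candidate widths/heights in pixels.
anchors = generate_anchor_boxes(np.random.rand(5000, 2) * 300 + 10)
```

The resulting anchors would then seed the detector's box regression heads during training.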
  • Publication number: 20200410704
    Abstract: In response to a first image captured by a camera of an ADV, a horizon line is determined based on the camera's hardware settings, representing a vanishing point based on an initial or default pitch angle of the camera. One or more lane lines are determined based on the first image via a perception process performed on the first image. In response to a first input signal received from an input device, a position of the horizon line is updated based on the first input signal and a position of at least one of the lane lines is updated based on the updated horizon line. The input signal may represent an incremental adjustment for adjusting the position of the horizon line. A first calibration factor or first correction value is determined for calibrating a pitch angle of the camera based on a difference between the initial horizon line and the updated horizon line.
    Type: Application
    Filed: June 28, 2019
    Publication date: December 31, 2020
    Inventors: Tae Eun CHOE, Yuliang GUO, Guang CHEN, Ka Wai TSOI, Weide ZHANG
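The "first calibration factor" above can be pictured as the pitch change implied by moving the horizon row. A minimal sketch under an assumed pinhole model; the sign convention and focal length are illustrative, not taken from the patent:

```python
import math

def pitch_correction(v_initial: float, v_updated: float, fy: float) -> float:
    """Pitch correction (radians) implied by moving the horizon line from
    image row v_initial to row v_updated, given the camera's vertical
    focal length fy in pixels. Under this convention, a horizon that must
    be moved down means the camera pitches up relative to the default.
    """
    return math.atan2(v_initial - v_updated, fy)

# Example: the operator nudged the horizon 15 rows down (fy = 1000 px).
delta_pitch = pitch_correction(v_initial=360.0, v_updated=375.0, fy=1000.0)
```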
  • Publication number: 20200410255
    Abstract: In some implementations, a method is provided. The method includes obtaining an image depicting an environment where an autonomous driving vehicle (ADV) may be located. The image comprises a plurality of line indicators. The plurality of line indicators represent one or more lanes in the environment. The image is part of training data for a neural network. The method also includes determining a plurality of line segments based on the plurality of line indicators. The method further includes determining a vanishing point within the image based on the plurality of line segments. The method further includes updating one or more of the image or metadata associated with the image to indicate a location of the vanishing point within the image.
    Type: Application
    Filed: June 28, 2019
    Publication date: December 31, 2020
    Inventors: Yuliang Guo, Tae Eun Choe, Ka Wai Tsoi, Guang Chen, Weide Zhang
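One conventional way to compute a vanishing point from line segments, as this abstract describes, is a least-squares intersection in homogeneous coordinates. A sketch with assumed input and metadata formats:

```python
import numpy as np

def vanishing_point(segments: np.ndarray) -> tuple[float, float]:
    """Least-squares intersection of line segments, given as an (N, 4)
    array of endpoints (x1, y1, x2, y2)."""
    p1 = np.hstack([segments[:, 0:2], np.ones((len(segments), 1))])
    p2 = np.hstack([segments[:, 2:4], np.ones((len(segments), 1))])
    lines = np.cross(p1, p2)                     # one homogeneous line per segment
    lines /= np.linalg.norm(lines[:, :2], axis=1, keepdims=True)
    _, _, vt = np.linalg.svd(lines)              # minimize ||L v|| over ||v|| = 1
    v = vt[-1]
    return float(v[0] / v[2]), float(v[1] / v[2])

# Update the training sample's metadata with the vanishing point location.
meta = {"image": "frame_000123.png"}
meta["vanishing_point"] = vanishing_point(
    np.array([[100, 700, 560, 420], [1180, 700, 720, 420]], dtype=float))
```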
  • Publication number: 20200406893
    Abstract: During autonomous driving, the movement trails or moving history of obstacles, as well as of the autonomous driving vehicle (ADV) itself, may be maintained in a corresponding buffer. For each of the obstacles or objects and the ADV, the vehicle states at different points in time are maintained and stored in one or more buffers. The vehicle states representing the moving trails or moving history of the obstacles and the ADV may be utilized to reconstruct a history trajectory of the obstacles and the ADV, which may be used for a variety of purposes. For example, the moving trails or history of obstacles may be utilized to determine the lane configuration of one or more lanes of a road, particularly in a rural area where the lane markings are unclear. The moving history of the obstacles may also be utilized to predict the future movement of the obstacles, tailgate an obstacle, and infer a lane line.
    Type: Application
    Filed: June 28, 2019
    Publication date: December 31, 2020
    Inventors: Tae Eun CHOE, Guang CHEN, Weide ZHANG, Yuliang GUO, Ka Wai TSOI
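A per-agent ring buffer is a natural fit for the "one or more buffers" mentioned above. A minimal sketch; the state fields and buffer length are assumptions for illustration:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class AgentState:
    t: float        # timestamp (s)
    x: float        # position (m), world frame
    y: float
    heading: float  # radians
    speed: float    # m/s

class MovingHistory:
    """Fixed-length per-agent buffer of past states: one buffer per
    obstacle, plus one for the ADV itself."""
    def __init__(self, max_len: int = 200):
        self._buffers: dict[int, deque] = {}
        self._max_len = max_len

    def update(self, agent_id: int, state: AgentState) -> None:
        self._buffers.setdefault(agent_id, deque(maxlen=self._max_len)).append(state)

    def trajectory(self, agent_id: int) -> list[AgentState]:
        """Reconstruct the agent's history trajectory, oldest first."""
        return list(self._buffers.get(agent_id, []))
```

Downstream consumers (lane inference, motion prediction, tailgating) would read trajectories from this store rather than re-querying perception.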
  • Publication number: 20200410260
    Abstract: In one embodiment, in addition to detecting or recognizing an actual lane, a virtual lane is determined based on the current state or motion prediction of an ADV. A virtual lane may or may not be identical or similar to the actual lane. A virtual lane may represent the likely movement of the ADV in a next time period given the current speed and heading direction of the vehicle. If an object is detected that may cross a lane line of the virtual lane and is the closest object to the ADV, the object is considered a closest in-path object (CIPO), and an emergency operation may be activated. That is, even though an object may not be in the path of an actual lane, if the object is in the path of a virtual lane of an ADV, the object may be considered a CIPO and subject to a special operation.
    Type: Application
    Filed: June 28, 2019
    Publication date: December 31, 2020
    Inventors: Tae Eun CHOE, Yuliang GUO, Guang CHEN, Weide ZHANG, Ka Wai TSOI
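The virtual-lane CIPO check above can be sketched as a corridor test in the ego frame. The corridor length (current speed times a look-ahead horizon) and half-width below are assumed parameters, not values from the patent:

```python
import math

def find_cipo(ego_heading, ego_speed, obstacles, horizon_s=3.0, half_width=1.8):
    """Return the id of the closest object inside the ADV's *virtual* lane:
    a straight corridor along the current heading, as long as the distance
    the ADV would cover in horizon_s seconds. Obstacles are given as
    (obstacle_id, x, y) in a frame centered on the ADV's position."""
    reach = ego_speed * horizon_s
    cipo, best = None, float("inf")
    for oid, x, y in obstacles:
        # Rotate into the ego frame: lon along the heading, lat across it.
        lon = x * math.cos(ego_heading) + y * math.sin(ego_heading)
        lat = -x * math.sin(ego_heading) + y * math.cos(ego_heading)
        if 0.0 <= lon <= reach and abs(lat) <= half_width and lon < best:
            cipo, best = oid, lon
    return cipo

# Two tracked objects; the nearer in-corridor one (id 7) becomes the CIPO.
cipo_id = find_cipo(0.1, 12.0, [(7, 20.0, 1.0), (9, 35.0, -0.5)])
```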
  • Patent number: 10860868
    Abstract: In one embodiment, a lane processing method and system identifies traffic lanes based on images. The images can be converted to lane markers, where the lane markers are based on inner edges of the identified lanes. The lane markers can be used for steering, navigating, controlling, and driving an autonomous driving vehicle (ADV). The markers can be associated with each other, in graphical space, to construct lane lines. Additional information, such as spatial and semantic information, can be associated with each lane to further improve ADV planning and control.
    Type: Grant
    Filed: April 18, 2018
    Date of Patent: December 8, 2020
    Assignee: BAIDU USA LLC
    Inventors: Jun Zhu, Tae Eun Choe, Guang Chen, Weide Zhang
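The abstract leaves the marker-to-line association unspecified beyond "in graphical space"; a common approximation is a polynomial fit per lane. A sketch with assumed inputs (marker points already grouped by lane):

```python
import numpy as np

def markers_to_lane_line(markers: np.ndarray, degree: int = 2) -> np.poly1d:
    """Associate lane-marker points (N, 2 array of (x, y) pixels) belonging
    to one lane into a single lane line by fitting x = f(y); in road images
    the row y varies monotonically with distance, so f can be evaluated
    per row to draw or follow the constructed line."""
    ordered = markers[np.argsort(markers[:, 1])]          # near to far
    coeffs = np.polyfit(ordered[:, 1], ordered[:, 0], degree)
    return np.poly1d(coeffs)

lane = markers_to_lane_line(
    np.array([[300, 700], [340, 600], [372, 500], [398, 400]], dtype=float))
x_at_row_450 = lane(450.0)  # evaluate the constructed lane line at row 450
```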
  • Patent number: 10818035
    Abstract: In some implementations, a method is provided. The method includes obtaining an image depicting an environment where an autonomous driving vehicle (ADV) is located. The method also includes determining a vanishing point within the image using a neural network. The vanishing point is represented by the neural network as a relative distance to a center of the image. The method further includes calibrating one or more sensors of the autonomous driving vehicle based on the vanishing point.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: October 27, 2020
    Assignee: BAIDU USA LLC
    Inventors: Yuliang Guo, Tae Eun Choe, Ka Wai Tsoi, Guang Chen, Weide Zhang
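Decoding a vanishing point "represented as a relative distance to a center of the image" might look like the following; the per-axis [-1, 1] normalization is an assumed convention, not stated in the abstract:

```python
def decode_vanishing_point(dx: float, dy: float, width: int, height: int):
    """Decode a network output expressed as a distance relative to the
    image center (assumed normalized to [-1, 1] per axis) back into
    pixel coordinates."""
    cx, cy = width / 2.0, height / 2.0
    return cx + dx * cx, cy + dy * cy

u, v = decode_vanishing_point(0.0125, -0.033, 1280, 720)  # ~ (648.0, 348.1)
```

The decoded pixel location would then feed a calibration step like the pitch/yaw recovery sketched earlier in this listing.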
  • Patent number: 10768769
    Abstract: A system and method for video surveillance and searching are disclosed. Video is analyzed and events are automatically detected. Based on the automatically detected events, textual descriptions are generated. The textual descriptions may be used to supplement video viewing and event viewing, and to provide for textual searching for events.
    Type: Grant
    Filed: February 23, 2017
    Date of Patent: September 8, 2020
    Assignee: AVIGILON FORTRESS CORPORATION
    Inventors: Tae Eun Choe, Mun Wai Lee, Kiran Gunda, Niels Haering
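A template-based generator is one simple way to realize the event-to-text step this family of filings describes (the same abstract recurs below under publications 20200225790 and 20170300150 and patent 9602738). The event schema and templates here are invented for illustration:

```python
TEMPLATES = {
    "enter":  "{time}: {actor} entered {place}.",
    "loiter": "{time}: {actor} loitered near {place} for {duration}s.",
    "leave":  "{time}: {actor} left {place}.",
}

def describe(event: dict) -> str:
    """Generate a searchable textual description for a detected event."""
    return TEMPLATES[event["type"]].format(**event)

log = [describe(e) for e in [
    {"type": "enter", "time": "14:02:11", "actor": "person #12", "place": "gate B"},
    {"type": "loiter", "time": "14:05:40", "actor": "person #12",
     "place": "gate B", "duration": 95},
]]
# The resulting strings can be indexed for plain-text event search.
```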
  • Publication number: 20200225790
    Abstract: A system and method for video surveillance and searching are disclosed. Video is analyzed and events are automatically detected. Based on the automatically detected events, textual descriptions are generated. The textual descriptions may be used to supplement video viewing and event viewing, and to provide for textual searching for events.
    Type: Application
    Filed: February 23, 2017
    Publication date: July 16, 2020
    Inventors: Tae Eun Choe, Mun Wai Lee, Kiran Gunda, Niels Haering
  • Patent number: 10642891
    Abstract: Relational graphs may be used to extract information. Similarities between the relational graphs and the items they represent may be determined. For example, when applied to video searching, relational graphs may be obtained by searching videos to extract objects, events, and/or relations between them. Each relational graph may comprise a plurality of nodes and edges, wherein at least some of the detected objects and events are represented by nodes, and wherein each edge represents a relationship between two nodes. Subgraphs may be extracted from each relational graph, and dimension reduction may be performed on the subgraphs to obtain a reduced variable set, which may then be used to perform searches, such as similarity analyses of videos.
    Type: Grant
    Filed: April 14, 2014
    Date of Patent: May 5, 2020
    Assignee: AVIGILON FORTRESS CORPORATION
    Inventors: Tae Eun Choe, Hongli Deng, Mun Wai Lee, Feng Guo
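A toy sketch of the subgraph-extraction and dimension-reduction pipeline above, using 1-hop ego-graphs, label-count features, and PCA as stand-ins for the patent's unspecified choices:

```python
import networkx as nx
import numpy as np
from sklearn.decomposition import PCA

def subgraph_features(g: nx.Graph, vocab: list[str]) -> np.ndarray:
    """One feature vector per 1-hop subgraph: counts of node labels."""
    rows = []
    for n in g.nodes:
        sub = nx.ego_graph(g, n, radius=1)
        labels = [sub.nodes[m].get("label", "?") for m in sub.nodes]
        rows.append([labels.count(w) for w in vocab])
    return np.asarray(rows, dtype=float)

def reduced_signature(g: nx.Graph, vocab: list[str], dims: int = 2) -> np.ndarray:
    """Dimension-reduced variable set summarizing the graph's subgraphs."""
    feats = subgraph_features(g, vocab)
    dims = min(dims, *feats.shape)
    return PCA(n_components=dims).fit_transform(feats).mean(axis=0)
```

Two videos could then be compared by, say, cosine similarity between their signatures, one plausible reading of the "similarity analyses" the abstract mentions.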
  • Publication number: 20200026282
    Abstract: According to some embodiments, a system pre-processes, via a first thread, an image of the environment surrounding the autonomous driving vehicle (ADV), captured by an image capturing device of the ADV. The system processes, via a second thread, the pre-processed image together with a corresponding depth image captured by a ranging device of the ADV, using a machine learning model to detect vehicle lanes. The system post-processes, via a third thread, the detected vehicle lanes to track the vehicle lanes relative to the ADV. The system then generates a trajectory based on a lane line of the tracked vehicle lanes to control the ADV autonomously according to the trajectory.
    Type: Application
    Filed: July 23, 2018
    Publication date: January 23, 2020
    Inventors: Tae Eun CHOE, Jun ZHU, I-Kuei CHEN, Guang CHEN, Weide ZHANG
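The three-thread structure above maps directly onto a queue-connected pipeline. A minimal sketch with stubbed stages; a real system would run the lane-detection model in the second stage:

```python
import queue
import threading

def stage(fn, q_in, q_out):
    """Run fn on items from q_in until a None sentinel arrives."""
    while (item := q_in.get()) is not None:
        q_out.put(fn(item))
    q_out.put(None)  # propagate shutdown downstream

def preprocess(img):  return ("resized", img)   # thread 1 (stub)
def detect_lanes(x):  return ("lanes", x)       # thread 2 (stub)
def postprocess(x):   return ("tracked", x)     # thread 3 (stub)

q0, q1, q2, q3 = (queue.Queue() for _ in range(4))
threads = [threading.Thread(target=stage, args=a) for a in
           [(preprocess, q0, q1), (detect_lanes, q1, q2), (postprocess, q2, q3)]]
for t in threads: t.start()
for frame in ["frame0", "frame1"]: q0.put(frame)
q0.put(None)                                    # shut the pipeline down
results = list(iter(q3.get, None))
for t in threads: t.join()
```

The queues decouple the stages so capture, inference, and tracking can overlap in time, which is the point of splitting the work across threads.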
  • Publication number: 20190325234
    Abstract: In one embodiment, a lane processing method and system identifies traffic lanes based on images. The images can be converted to lane markers, where the lane markers are based on inner edges of the identified lanes. The lane markers can be used for steering, navigating, controlling, and driving an autonomous driving vehicle (ADV). The markers can be associated with each other, in graphical space, to construct lane lines. Additional information, such as spatial and semantic information, can be associated with each lane to further improve ADV planning and control.
    Type: Application
    Filed: April 18, 2018
    Publication date: October 24, 2019
    Inventors: Jun ZHU, Tae Eun CHOE, Guang CHEN, Weide ZHANG
  • Patent number: 10186123
    Abstract: Systems, methods, and manufactures for a surveillance system are provided. The surveillance system includes sensors having at least one non-overlapping field of view. The surveillance system is operable to track a target in an environment using the sensors. The surveillance system is also operable to extract information from images of the target provided by the sensors. The surveillance system is further operable to determine probabilistic confidences corresponding to the information extracted from images of the target. The confidences include at least one confidence corresponding to at least one primitive event. Additionally, the surveillance system is operable to determine grounded formulae by instantiating predefined rules using the confidences. Further, the surveillance system is operable to infer a complex event corresponding to the target using the grounded formulae. Moreover, the surveillance system is operable to provide an output describing the complex event.
    Type: Grant
    Filed: March 31, 2015
    Date of Patent: January 22, 2019
    Assignee: Avigilon Fortress Corporation
    Inventors: Atul Kanaujia, Tae Eun Choe, Hongli Deng
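The grounded formulae above suggest a probabilistic-logic framework (e.g., Markov logic). The toy below replaces that machinery with weighted noisy-AND scoring over primitive-event confidences; the rules, weights, and threshold are invented for illustration:

```python
RULES = {
    # complex event: (rule weight, required primitive events)
    "theft":   (0.9, ("approach", "pickup", "leave")),
    "handoff": (0.7, ("approach", "pickup", "giveaway")),
}

def infer_complex_events(confidences: dict, threshold: float = 0.5) -> dict:
    """Ground each rule with the per-primitive confidences and score it as
    weight * product of the primitives' confidences (noisy-AND)."""
    scores = {}
    for event, (weight, primitives) in RULES.items():
        score = weight
        for p in primitives:
            score *= confidences.get(p, 0.0)
        if score >= threshold:
            scores[event] = score
    return scores

out = infer_complex_events({"approach": 0.95, "pickup": 0.9, "leave": 0.8})
# -> {'theft': ~0.62}, i.e. 0.9 * 0.95 * 0.9 * 0.8
```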
  • Publication number: 20180246887
    Abstract: Systems, methods, and computer applications and media for gathering, categorizing, sorting, managing, reviewing, and organizing large quantities of multimedia items across space and time using crowdsourcing resources are described. Various implementations may enable either public or private (e.g., internal to an organization) crowdsourcing of multimedia item gathering and analysis, including the gathering, analysis, lead-searching, and classification of digital still images and digital videos. Various implementations may allow a user, such as a law enforcement investigator, to consolidate all of the available multimedia items into one system and quickly gather, sort, organize, and display the multimedia items based on location, time, content, or other parameters. Moreover, an investigator may be able to create crowdsourcing tasks as he works with the multimedia items and utilize crowdsourcing resources when he needs help.
    Type: Application
    Filed: February 28, 2018
    Publication date: August 30, 2018
    Inventors: Minwoo Park, Tae Eun Choe, W. Andrew Scanlon, M. Allison Beach, Gary W. Myers
  • Patent number: 9996976
    Abstract: A method is provided for augmenting a video feed obtained by a camera of an aerial vehicle to a user interface. The method can include obtaining a sequence of video images with or without corresponding sensor metadata from the aerial vehicle; obtaining supplemental data based on the sequence of video images and the sensor metadata; correcting an error in the sensor metadata using a reconstruction error minimization technique; creating a geographically-referenced scene model based on a virtual sensor coordinate system that is registered to the sequence of video images; overlaying the supplemental data onto the geographically-referenced scene model by rendering geo-registered data from a 3D perspective that matches a corrected camera model; creating a video stream of a virtual representation of the scene from the perspective of the camera based on the overlaying; and providing the video stream to a UI to be rendered onto a display.
    Type: Grant
    Filed: May 4, 2015
    Date of Patent: June 12, 2018
    Assignee: AVIGILON FORTRESS CORPORATION
    Inventors: Shirley Zhou, Don Madden, Tae Eun Choe, Andrew W. Scanlon
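The overlay step above ("rendering geo-registered data from a 3D perspective that matches a corrected camera model") reduces, at its core, to projecting world points through the corrected camera. A sketch under a standard pinhole model; K, R, and t are assumed inputs produced by the metadata-correction step:

```python
import numpy as np

def project_geo_points(points_world, K, R, t):
    """Project geo-registered 3D points (N, 3) into the image with the
    corrected camera model x ~ K (R X + t); returns (N, 2) pixel coords."""
    cam = points_world @ R.T + t        # world -> camera frame
    pix = cam @ K.T                     # camera -> homogeneous pixels
    return pix[:, :2] / pix[:, 2:3]     # dehomogenize

K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])
uv = project_geo_points(np.array([[5.0, 0.0, 50.0]]), K, np.eye(3), np.zeros(3))
# -> [[740., 360.]]: a point 5 m right and 50 m ahead lands right of center
```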
  • Publication number: 20170300150
    Abstract: A system and method for video surveillance and searching are disclosed. Video is analyzed and events are automatically detected. Based on the automatically detected events, textual descriptions are generated. The textual descriptions may be used to supplement video viewing and event viewing, and to provide for textual searching for events.
    Type: Application
    Filed: February 23, 2017
    Publication date: October 19, 2017
    Inventors: Tae Eun Choe, Mun Wai Lee, Kiran Gunda, Niels Haering
  • Patent number: 9602738
    Abstract: A system and method for video surveillance and searching are disclosed. Video is analyzed and events are automatically detected. Based on the automatically detected events, textual descriptions are generated. The textual descriptions may be used to supplement video viewing and event viewing, and to provide for textual searching for events.
    Type: Grant
    Filed: November 21, 2012
    Date of Patent: March 21, 2017
    Assignee: AVIGILON FORTRESS CORPORATION
    Inventors: Tae Eun Choe, Mun Wai Lee, Kiran Gunda, Niels Haering
  • Publication number: 20170039765
    Abstract: A method is provided for augmenting a video feed obtained by a camera of an aerial vehicle to a user interface. The method can include obtaining a sequence of video images with or without corresponding sensor metadata from the aerial vehicle; obtaining supplemental data based on the sequence of video images and the sensor metadata; correcting an error in the sensor metadata using a reconstruction error minimization technique; creating a geographically-referenced scene model based on a virtual sensor coordinate system that is registered to the sequence of video images; overlaying the supplemental data onto the geographically-referenced scene model by rendering geo-registered data from a 3D perspective that matches a corrected camera model; creating a video stream of a virtual representation of the scene from the perspective of the camera based on the overlaying; and providing the video stream to a UI to be rendered onto a display.
    Type: Application
    Filed: May 4, 2015
    Publication date: February 9, 2017
    Inventors: Shirley Zhou, Don Madden, Tae Eun Choe, Andrew W. Scanlon
  • Publication number: 20160314345
    Abstract: Methods and systems for facial recognition are provided. The method includes determining a three-dimensional (3D) model of a face of an individual based on different images of the individual. The method also includes extracting two-dimensional (2D) patches from the 3D model. Further, the method includes generating a plurality of signatures of the face using different combinations of the 2D patches, wherein the plurality of signatures correspond to respective views of the 3D model from different angles.
    Type: Application
    Filed: July 8, 2016
    Publication date: October 27, 2016
    Inventors: Atul Kanaujia, Narayanan Ramanathan, Tae Eun Choe
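A toy version of the per-view signature idea above: rotate the 3D model, project, and concatenate descriptors from a chosen combination of patches. The orthographic projection, the landmark count, and the patch descriptor (raw projected coordinates) are all illustrative simplifications, not the patent's method:

```python
import numpy as np

def view_signature(points3d, patches, yaw):
    """Signature of the 3D face model seen from one yaw angle: project the
    model's patch centers (N, 3) after rotation, then concatenate the
    chosen patches' descriptors (here, just their projected coordinates)."""
    c, s = np.cos(yaw), np.sin(yaw)
    Ry = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    proj = (points3d @ Ry.T)[:, :2]        # orthographic projection
    return np.concatenate([proj[i] for i in patches])

# One signature per view, from a combination of 2D patches on the model.
model = np.random.rand(68, 3)              # e.g., 68 facial landmarks
signatures = [view_signature(model, patches=[17, 26, 30, 48, 54], yaw=a)
              for a in np.deg2rad([-30, 0, 30])]
```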