Patents by Inventor Dah-Jye Lee

Dah-Jye Lee has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9361706
    Abstract: The present disclosure relates generally to optical flow algorithms. Section 1 of the present disclosure describes an optical flow algorithm with real-time performance and adequate accuracy for embedded vision applications. This optical flow algorithm is based on a ridge estimator. Sections 2 and 3 describe an obstacle detection algorithm that utilizes the motion field that is output from the optical flow algorithm. Section 2 is focused on unmanned ground vehicles, whereas section 3 is focused on unmanned aerial vehicles.
    Type: Grant
    Filed: January 4, 2010
    Date of Patent: June 7, 2016
    Assignee: Brigham Young University
    Inventors: Dah-Jye Lee, Zhaoyi Wei
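    The ridge-estimator optical flow described in the abstract of patent 9361706 above can be sketched as a regularized local least-squares solve on image derivatives. The sketch below is illustrative only: it assumes precomputed derivatives Ix, Iy, It, and the window size and regularization weight lam are hypothetical parameters, not values from the patent.

```python
import numpy as np

def ridge_optical_flow(Ix, Iy, It, lam=1e-2, win=5):
    """Per-pixel flow (u, v) from a ridge-regularized local least-squares
    solve over a square window. Ix, Iy, It are the spatial and temporal
    image derivatives (all the same shape)."""
    h, w = Ix.shape
    r = win // 2
    flow = np.zeros((h, w, 2))
    for y in range(r, h - r):
        for x in range(r, w - r):
            # Stack window derivatives into the over-determined system A z = b
            A = np.stack([Ix[y-r:y+r+1, x-r:x+r+1].ravel(),
                          Iy[y-r:y+r+1, x-r:x+r+1].ravel()], axis=1)
            b = -It[y-r:y+r+1, x-r:x+r+1].ravel()
            # Ridge estimator: z = (A^T A + lam * I)^(-1) A^T b
            flow[y, x] = np.linalg.solve(A.T @ A + lam * np.eye(2), A.T @ b)
    return flow
```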
  • Patent number: 9317923
    Abstract: A method for stereo vision may include filtering a row or column in a stereo image to obtain intensity profiles, identifying peaks in the intensity profiles, pairing peaks within a maximum disparity distance, determining a shape interval for peak pairs, selecting a peak pair with a maximum shape interval, determining a disparity offset for the peak pairs, extending shape intervals to include all pixels in the intensity profiles, computing depths or distances from disparity offsets, and smoothing the stereo image disparity map along a perpendicular dimension. Another method for stereo vision includes filtering stereo images to intensity profiles, identifying peaks in the intensity profiles, pairing peaks within a maximum disparity distance, determining shape intervals for peak pairs, and selecting peak pairs with the maximum shape interval. Apparatus corresponding to the above methods are also disclosed herein.
    Type: Grant
    Filed: April 5, 2013
    Date of Patent: April 19, 2016
    Assignee: Brigham Young University
    Inventors: Beau Jeffrey Tippetts, Dah-Jye Lee
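    A minimal sketch of the row-wise peak pairing idea in the abstract of patent 9317923 above is shown below. The simple peak detector, the disparity search window, and the pinhole depth relation are standard simplifications assumed here; the shape-interval selection and disparity-map smoothing steps from the abstract are omitted for brevity.

```python
import numpy as np

def find_peaks_1d(profile):
    """Indices of simple local maxima in a 1-D intensity profile."""
    p = np.asarray(profile, dtype=float)
    return np.where((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))[0] + 1

def row_disparities(left_row, right_row, max_disparity=64):
    """Pair intensity-profile peaks between the left and right rows of a
    rectified stereo pair. Each left-row peak is matched to the nearest
    right-row peak within max_disparity pixels, a simplification of the
    shape-interval selection described in the abstract."""
    lp, rp = find_peaks_1d(left_row), find_peaks_1d(right_row)
    matches = []
    for xl in lp:
        # Candidate right-row peaks within the allowed disparity range
        cands = rp[(rp <= xl) & (xl - rp <= max_disparity)]
        if cands.size:
            xr = int(cands.max())          # closest candidate, smallest disparity
            matches.append((int(xl), int(xl) - xr))
    return matches

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Standard pinhole relation: depth = f * B / d."""
    return focal_px * baseline_m / max(disparity_px, 1e-6)
```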
  • Patent number: 9317779
    Abstract: A method for training an image processing neural network without human selection of features may include providing a training set of images labeled with two or more classifications, providing an image processing toolbox with image transforms that can be applied to the training set, generating a random set of feature extraction pipelines, where each feature extraction pipeline includes a sequence of image transforms randomly selected from the image processing toolbox and randomly selected control parameters associated with the sequence of image transforms. The method may also include coupling a first stage classifier to an output of each feature extraction pipeline and executing a genetic algorithm to conduct genetic modification of each feature extraction pipeline and train each first stage classifier on the training set, and coupling a second stage classifier to each of the first stage classifiers in order to increase classification accuracy.
    Type: Grant
    Filed: April 5, 2013
    Date of Patent: April 19, 2016
    Assignee: Brigham Young University
    Inventors: Kirt Dwayne Lillywhite, Dah-Jye Lee
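    The combination of randomly generated feature extraction pipelines and a genetic algorithm described in the abstract of patent 9317779 above can be illustrated roughly as follows. The transform toolbox, the nearest-class-mean stand-in for the first-stage classifiers, and the mutation scheme are illustrative choices, not the patented components; the second-stage classifier is omitted.

```python
import random
import numpy as np

# Illustrative stand-in for the image processing toolbox named in the abstract.
TOOLBOX = {
    "grad_x":    lambda img, _: np.abs(np.gradient(img, axis=1)),
    "grad_y":    lambda img, _: np.abs(np.gradient(img, axis=0)),
    "threshold": lambda img, t: (img > t).astype(float),
    "normalize": lambda img, _: (img - img.min()) / (np.ptp(img) + 1e-9),
}

def random_pipeline(length=3):
    """A feature extraction pipeline: a random sequence of transforms with
    randomly chosen control parameters."""
    pipe = []
    for _ in range(length):
        name = random.choice(list(TOOLBOX))
        param = random.uniform(0.2, 0.8) if name == "threshold" else None
        pipe.append((name, param))
    return pipe

def extract(pipe, img):
    """Apply the pipeline's transforms in order and flatten to a feature vector."""
    for name, param in pipe:
        img = TOOLBOX[name](img, param)
    return img.ravel()

def fitness(pipe, images, labels):
    """First-stage classifier stand-in: nearest-class-mean accuracy."""
    labels = np.asarray(labels)
    feats = np.array([extract(pipe, im) for im in images])
    means = {c: feats[labels == c].mean(axis=0) for c in np.unique(labels)}
    preds = np.array([min(means, key=lambda c: np.linalg.norm(f - means[c])) for f in feats])
    return float(np.mean(preds == labels))

def mutate(pipe):
    """Genetic modification: replace one randomly chosen transform."""
    new = list(pipe)
    new[random.randrange(len(new))] = random_pipeline(1)[0]
    return new

def evolve(images, labels, pop_size=20, generations=10):
    """Keep the best-scoring half of the population each generation and
    refill it with mutated copies of the survivors."""
    pop = [random_pipeline() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, images, labels), reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return pop[0]
```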
  • Publication number: 20130266211
    Abstract: A method for stereo vision may include filtering a row or column in a stereo image to obtain intensity profiles, identifying peaks in the intensity profiles, pairing peaks within a maximum disparity distance, determining a shape interval for peak pairs, selecting a peak pair with a maximum shape interval, determining a disparity offset for the peak pairs, extending shape intervals to include all pixels in the intensity profiles, computing depths or distances from disparity offsets, and smoothing the stereo image disparity map along a perpendicular dimension. Another method for stereo vision includes filtering stereo images to intensity profiles, identifying peaks in the intensity profiles, pairing peaks within a maximum disparity distance, determining shape intervals for peak pairs, and selecting peak pairs with the maximum shape interval. Apparatus corresponding to the above methods are also disclosed herein.
    Type: Application
    Filed: April 5, 2013
    Publication date: October 10, 2013
    Applicant: Brigham Young University
    Inventors: Beau Jeffrey Tippetts, Dah-Jye Lee
  • Publication number: 20130266214
    Abstract: A method for training an image processing neural network without human selection of features may include providing a training set of images labeled with two or more classifications, providing an image processing toolbox with image transforms that can be applied to the training set, generating a random set of feature extraction pipelines, where each feature extraction pipeline includes a sequence of image transforms randomly selected from the image processing toolbox and randomly selected control parameters associated with the sequence of image transforms. The method may also include coupling a first stage classifier to an output of each feature extraction pipeline and executing a genetic algorithm to conduct genetic modification of each feature extraction pipeline and train each first stage classifier on the training set, and coupling a second stage classifier to each of the first stage classifiers in order to increase classification accuracy.
    Type: Application
    Filed: April 5, 2013
    Publication date: October 10, 2013
    Applicant: Brigham Young University
    Inventors: Kirt Dwayne Lillywhite, Dah-Jye Lee
  • Publication number: 20110128379
    Abstract: The present disclosure relates generally to optical flow algorithms. Section 1 of the present disclosure describes an optical flow algorithm with real-time performance and adequate accuracy for embedded vision applications. This optical flow algorithm is based on a ridge estimator. Sections 2 and 3 describe an obstacle detection algorithm that utilizes the motion field that is output from the optical flow algorithm. Section 2 is focused on unmanned ground vehicles, whereas section 3 is focused on unmanned aerial vehicles.
    Type: Application
    Filed: January 4, 2010
    Publication date: June 2, 2011
    Inventors: Dah-Jye Lee, Zhaoyi Wei
  • Publication number: 20090138270
    Abstract: The provision of speech therapy to a learner (76) entails receiving a speech signal (156) from the learner (76) at a computing system (24). The speech signal (156) corresponds to an utterance (116) made by the learner (76). A set of parameters (166) is ascertained from the speech signal (156). The parameters (166) represent a contact pattern (52) between a tongue and palate of the learner (76) during the utterance (116). For each parameter in the set of parameters (166), a deviation measure (188) is calculated relative to a corresponding parameter from a set of normative parameters (138) characterizing an ideal pronunciation of the utterance (116). An accuracy score (56) for the utterance (116), relative to its ideal pronunciation, is generated from the deviation measure (188). The accuracy score (56) is provided to the learner (76) to visualize accuracy of the utterance (116) relative to its ideal pronunciation.
    Type: Application
    Filed: November 26, 2007
    Publication date: May 28, 2009
    Inventors: Samuel G. Fletcher, Dah-Jye Lee, Jared Darrell Turpin
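    The deviation-measure and accuracy-score steps in the abstract of publication 20090138270 above can be sketched numerically as follows. The parameter names, tolerances, and the linear mapping to a 0-100 score are assumptions made for illustration; the publication does not tie the score to this particular formula.

```python
import numpy as np

def utterance_accuracy(learner_params, normative_params, tolerances):
    """Compute a per-parameter deviation of the learner's tongue-palate
    contact pattern from the normative (ideal) pronunciation, then collapse
    the deviations into a single 0-100 accuracy score."""
    deviations = {
        name: abs(learner_params[name] - normative_params[name]) / tolerances[name]
        for name in normative_params
    }
    # Zero deviation contributes 1.0; a deviation of one tolerance or more contributes 0.
    per_param = [max(0.0, 1.0 - d) for d in deviations.values()]
    return 100.0 * float(np.mean(per_param)), deviations

# Hypothetical contact-pattern parameters for one utterance:
ideal  = {"contact_area": 0.42, "groove_width": 6.0, "anterior_contact": 0.80}
spoken = {"contact_area": 0.35, "groove_width": 7.5, "anterior_contact": 0.70}
tol    = {"contact_area": 0.20, "groove_width": 3.0, "anterior_contact": 0.25}
print(utterance_accuracy(spoken, ideal, tol))
```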
  • Patent number: 6369401
    Abstract: A three-dimensional measurement system and method for objects, such as oysters, projects one or more laser lines onto a surface on which the object is currently located. The laser lines are picked up as parallel lines by a camera where no object is located on the surface. When an object is located on the surface, the camera obtains an image that includes lines displaced from the parallel lines as a result of the lines impinging on portions of the object that have a particular height associated therewith. The displacement data allows a processor to determine the height of the object at various positions on the object. Volume can then be obtained by combining the height data with the area calculated from a binary image of the object. The object can then be classified according to its volume.
    Type: Grant
    Filed: September 10, 1999
    Date of Patent: April 9, 2002
    Assignee: Agri-Tech, Inc.
    Inventor: Dah-Jye Lee
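    A rough numeric sketch of the height-from-displacement and volume steps described in the abstract of patent 6369401 above follows. The triangulation geometry, pixel scale, and grading thresholds are illustrative assumptions, not values from the patent.

```python
import numpy as np

def height_from_displacement(displacement_px, px_size_mm, laser_angle_deg):
    """Structured-light triangulation: a laser line projected at an oblique
    angle shifts by d pixels where it crosses an object of height h, so
    h ~ d * pixel_size / tan(angle). The exact geometry depends on the rig."""
    return displacement_px * px_size_mm / np.tan(np.radians(laser_angle_deg))

def object_volume(height_map_mm, binary_mask, px_area_mm2):
    """Integrate height over the object's binary image (its area) to get volume."""
    return float(np.sum(height_map_mm * binary_mask) * px_area_mm2)

def classify_by_volume(volume_mm3, grade_edges_mm3=(20_000, 40_000, 60_000)):
    """Return a grade index (0..len(edges)) by comparing volume to thresholds."""
    return int(np.searchsorted(grade_edges_mm3, volume_mm3))
```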