Patents by Inventor Joo Hwee Lim

Joo Hwee Lim has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190114482
    Abstract: According to various embodiments, a method for providing task related information to a user may be provided. The method may include: determining location information based on a spatial model; determining task information based on a task model; determining sensor information; determining output information based on the location information, task information and sensor information; and providing the output information to the user. In a specific embodiment, the output information may comprise an orientation cue, an error indication or a contextual cue to assist the user in performing the task associated with the location detected by a vision recognition method, and the output information can be provided to the user as augmented reality in a wearable device.
    Type: Application
    Filed: March 30, 2017
    Publication date: April 18, 2019
    Inventors: Liyuan LI, Mark David RICE, Joo Hwee LIM, Suat Ling Jamie NG, Teck Sun Marcus WAN, Shue Ching CHIA, Hong Huei TAY, Shiang Long LEE
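
A minimal sketch of the information flow described in the abstract above (publication 20190114482): combine location, task, and sensor information into an output cue for the user. All names, the dataclasses, and the rule-based logic are illustrative assumptions, not the patented method.

```python
# Hypothetical sketch: derive an orientation cue, error indication, or
# contextual cue from location, task, and sensor information. The thresholds
# and rules are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Location:
    room: str          # region resolved from the spatial model
    confidence: float  # confidence of the vision-based recognition

@dataclass
class TaskStep:
    name: str
    expected_room: str

def output_information(loc: Location, step: TaskStep, sensor_ok: bool) -> str:
    """Combine location, task, and sensor information into output for the user."""
    if loc.confidence < 0.5 or not sensor_ok:
        return "orientation cue: look around so the scene can be re-recognized"
    if loc.room != step.expected_room:
        return f"error indication: '{step.name}' should be done in {step.expected_room}"
    return f"contextual cue: you are in {loc.room}; proceed with '{step.name}'"

print(output_information(Location("workbench", 0.9),
                         TaskStep("attach panel", "workbench"), True))
```
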
  • Patent number: 9445716
    Abstract: A non-stereo fundus image is used to obtain a plurality of glaucoma indicators. Additionally, genome data for the subject is used to obtain genetic marker data relating to one or more genes and/or SNPs associated with glaucoma. The glaucoma indicators and genetic marker data are input into an adaptive model operative to generate an output indicative of a risk of glaucoma in the subject. In combination, the genetic indicators and genome data are more informative about the risk of glaucoma than either of the two in isolation. The adaptive model may be a two-stage model, having a first stage in which individual genetic indicators are combined with respective portions of the genome data by first adaptive model modules to form respective first outputs, and a second stage in which the first outputs are combined by a second adaptive model.
    Type: Grant
    Filed: August 17, 2015
    Date of Patent: September 20, 2016
    Assignees: Agency for Science, Technology and Research, Singapore Health Services Pte Ltd
    Inventors: Jiang Liu, Zhuo Zhang, Wing Kee Damon Wong, Ngan Meng Tan, Fengshou Yin, Beng Hai Lee, Huiqi Li, Joo Hwee Lim, Carol Cheung, Tin Aung, Tien Yin Wong, Ziyang Liang, Jun Cheng, Baskaran Mani
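
A minimal sketch of the two-stage adaptive model described in the abstract of patent 9445716: first-stage modules each fuse one indicator with a portion of the genome data, and a second-stage model combines their outputs into a risk score. Pairing each image-derived indicator with a genome slice is one plausible reading of the abstract; the use of logistic regression and the random data are assumptions for illustration only.

```python
# Hypothetical two-stage model sketch; not the patented implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
indicators = rng.normal(size=(n, 3))       # e.g. image-derived glaucoma indicators
genome = rng.integers(0, 3, size=(n, 6))   # e.g. SNP genotypes, two per module
risk = rng.integers(0, 2, size=n)          # ground-truth labels (synthetic)

# Stage 1: one module per indicator, each paired with its genome-data portion.
first_outputs = []
for i in range(3):
    X = np.column_stack([indicators[:, i], genome[:, 2 * i:2 * i + 2]])
    module = LogisticRegression().fit(X, risk)
    first_outputs.append(module.predict_proba(X)[:, 1])

# Stage 2: combine the first-stage outputs into a single risk estimate.
X2 = np.column_stack(first_outputs)
stage2 = LogisticRegression().fit(X2, risk)
print("estimated risk for subject 0:", stage2.predict_proba(X2[:1])[0, 1])
```
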
  • Publication number: 20160100753
    Abstract: A non-stereo fundus image is used to obtain a plurality of glaucoma indicators. Additionally, genome data for the subject is used to obtain genetic marker data relating to one or more genes and/or SNPs associated with glaucoma. The glaucoma indicators and genetic marker data are input into an adaptive model operative to generate an output indicative of a risk of glaucoma in the subject. In combination, the genetic indicators and genome data are more informative about the risk of glaucoma than either of the two in isolation. The adaptive model may be a two-stage model, having a first stage in which individual genetic indicators are combined with respective portions of the genome data by first adaptive model modules to form respective first outputs, and a second stage in which the first outputs are combined by a second adaptive model.
    Type: Application
    Filed: August 17, 2015
    Publication date: April 14, 2016
    Inventors: Jiang LIU, Zhuo ZHANG, Wing Kee Damon WONG, Ngan Meng TAN, Fengshou YIN, Beng Hai LEE, Huiqi LI, Joo Hwee LIM, Carol CHEUNG, Tin AUNG, Tien Yin WONG, Ziyang LIANG, Jun CHENG, Baskaran MANI
  • Patent number: 9107617
    Abstract: A non-stereo fundus image is used to obtain a plurality of glaucoma indicators. Additionally, genome data for the subject is used to obtain genetic marker data relating to one or more genes and/or SNPs associated with glaucoma. The glaucoma indicators and genetic marker data are input into an adaptive model operative to generate an output indicative of a risk of glaucoma in the subject. In combination, the genetic indicators and genome data are more informative about the risk of glaucoma than either of the two in isolation. The adaptive model may be a two-stage model, having a first stage in which individual genetic indicators are combined with respective portions of the genome data by first adaptive model modules to form respective first outputs, and a second stage in which the first outputs are combined by a second adaptive model.
    Type: Grant
    Filed: November 16, 2010
    Date of Patent: August 18, 2015
    Assignees: Agency for Science, Technology and Research, Singapore Health Services Pte Ltd
    Inventors: Jiang Liu, Zhuo Zhang, Wing Kee Damon Wong, Ngan Meng Tan, Fengshou Yin, Beng Hai Lee, Huiqi Li, Joo Hwee Lim, Carol Cheung, Tin Aung, Tien Yin Wong, Ziyang Liang, Jun Cheng, Baskaran Mani
  • Patent number: 8705826
    Abstract: A two-dimensional retinal fundus image of the retinal fundus of an eye is processed by optic disc segmentation (2) followed by cup segmentation (4). Data derived from the optic disc segmentation (i.e. the output of the disc segmentation (2) and/or data derived from that output, e.g. by a smoothing operation (3)) and data derived from the output of the optic cup segmentation (i.e. the output of the cup segmentation (4) and/or data derived from that output, e.g. by a smoothing operation (5)) are fed (6) to an adaptive model which has been trained to generate from such inputs a value indicative of the cup-to-disc ratio (CDR) of the eye. The CDR is indicative of glaucoma. Thus, the method can be used to screen patients for glaucoma.
    Type: Grant
    Filed: May 14, 2008
    Date of Patent: April 22, 2014
    Assignee: Agency for Science, Technology and Research
    Inventors: Jiang Liu, Joo Hwee Lim, Wing Kee Wong, Huiqi Li, Tien Yin Wong
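
A minimal sketch of the pipeline in the abstract of patent 8705826: segment the disc and cup, derive features from the masks, and obtain a cup-to-disc ratio (CDR). Segmentation itself is stubbed out with placeholder masks, the vertical-diameter feature is an assumption, and a direct ratio stands in for the trained adaptive model.

```python
# Hypothetical CDR sketch; the real method feeds features to a trained model.
import numpy as np

def vertical_diameter(mask: np.ndarray) -> int:
    """Vertical extent of a binary mask, a common basis for CDR."""
    rows = np.where(mask.any(axis=1))[0]
    return int(rows.max() - rows.min() + 1) if rows.size else 0

# Placeholder masks standing in for disc segmentation (2) and cup segmentation (4).
disc = np.zeros((100, 100), bool); disc[20:80, 20:80] = True
cup = np.zeros((100, 100), bool); cup[35:65, 35:65] = True

cdr = vertical_diameter(cup) / vertical_diameter(disc)
print(f"CDR = {cdr:.2f} (larger CDR values are commonly treated as glaucoma-suspect)")
```
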
  • Publication number: 20140003723
    Abstract: A text detection device is provided. The text detection device may include: an image input circuit configured to receive an image; an edge property determination circuit configured to determine a plurality of edge properties for each of a plurality of scales of the image; and a text location determination circuit configured to determine a text location in the image based on the plurality of edge properties for the plurality of scales of the image.
    Type: Application
    Filed: June 24, 2013
    Publication date: January 2, 2014
    Applicant: Agency for Science, Technology and Research
    Inventors: Shijian LU, Joo Hwee LIM
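
A minimal sketch of multi-scale, edge-based text localization as outlined in the abstract of publication 20140003723: compute edge maps at several scales, combine them, and report dense-edge regions as candidate text locations. The OpenCV calls are standard; the scales, thresholds, and the "edges at more than one scale" rule are illustrative assumptions.

```python
# Hypothetical multi-scale edge-property text detector; not the patented method.
import cv2
import numpy as np

def detect_text_regions(image: np.ndarray):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    acc = np.zeros(gray.shape, np.float32)
    for scale in (1.0, 0.75, 0.5):               # plurality of scales
        small = cv2.resize(gray, None, fx=scale, fy=scale)
        edges = cv2.Canny(small, 100, 200)       # edge property per scale
        acc += cv2.resize(edges, (gray.shape[1], gray.shape[0])).astype(np.float32)
    mask = (acc > 255).astype(np.uint8)          # keep edges present at >1 scale
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 50]

img = cv2.imread("sample.jpg")                   # hypothetical input image
if img is not None:
    print(detect_text_regions(img))
```
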
  • Patent number: 8428322
    Abstract: A method for determining the position of an optic cup boundary in a 2D retinal image. The method includes detecting kinks in blood vessels at an estimated boundary of the optic cup and the optic disc, and determining the position of the optic cup boundary in the 2D retinal image based on the detected kinks. The determined optic cup boundary may be used for determining a cup-to-disc ratio (CDR), which may in turn be used for determining a risk of glaucoma.
    Type: Grant
    Filed: December 15, 2008
    Date of Patent: April 23, 2013
    Assignees: Singapore Health Services Pte Ltd, Agency for Science, Technology and Research, National University of Singapore
    Inventors: Wing Kee Damon Wong, Jiang Liu, Joo Hwee Lim, Huiqi Li, Ngan Meng Tan, Tien Yin Wong
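
A minimal sketch of kink detection as described in the abstract of patent 8428322: walk along sampled vessel centerline points near the estimated cup/disc boundary and flag sharp direction changes ("kinks"). The angle threshold and the synthetic vessel points are illustrative assumptions.

```python
# Hypothetical kink detector on a sampled vessel centerline.
import numpy as np

def find_kinks(points: np.ndarray, angle_thresh_deg: float = 30.0):
    """Return indices where the vessel direction bends sharply."""
    kinks = []
    for i in range(1, len(points) - 1):
        v1 = points[i] - points[i - 1]
        v2 = points[i + 1] - points[i]
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
        if angle > angle_thresh_deg:
            kinks.append(i)
    return kinks

# A synthetic vessel that bends where it crosses the estimated cup boundary.
vessel = np.array([[0, 0], [1, 1], [2, 2], [3, 2], [4, 2]], float)
print("kink indices:", find_kinks(vessel))  # -> [2]
```
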
  • Patent number: 8331627
    Abstract: A method and system for generating an entirely well-focused image of a three-dimensional scene. The method comprises the steps of a) learning a prediction model including at least a focal depth probability density function (PDF), h(k), for all depth values k, from historical tiles of the scene; b) predicting the possible focal surfaces in subsequent tiles of the scene by applying the prediction model; c) for each value of k, examining h(k) such that if h(k) is below a first threshold, no image is acquired at the depth k for said one tile, and if h(k) is above or equal to the first threshold, one or more images are acquired in a depth range around said value of k for said one tile; and d) processing the acquired images to generate a pixel focus map for said one tile.
    Type: Grant
    Filed: September 26, 2008
    Date of Patent: December 11, 2012
    Assignee: Agency for Science, Technology and Research
    Inventors: Wei Xiong, Qi Tian, Joo Hwee Lim
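
A minimal sketch of the acquisition strategy in the abstract of patent 8331627: estimate a focal depth PDF h(k) from depths observed in historical tiles, then acquire images only around depths where h(k) clears the threshold. The threshold value and the synthetic history are illustrative assumptions.

```python
# Hypothetical focal-depth PDF and acquisition decision.
import numpy as np

historical_depths = np.array([3, 3, 4, 4, 4, 5, 9])  # focused depths from past tiles
depth_levels = np.arange(0, 12)

# h(k): estimated probability that depth k holds in-focus content.
counts = np.array([(historical_depths == k).sum() for k in depth_levels])
h = counts / counts.sum()

threshold = 0.10
acquire = [int(k) for k in depth_levels if h[k] >= threshold]
print("acquire images around depths:", acquire)  # depths with h(k) below threshold are skipped
```
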
  • Publication number: 20120230564
    Abstract: A non-stereo fundus image is used to obtain a plurality of glaucoma indicators. Additionally, genome data for the subject is used to obtain genetic marker data relating to one or more genes and/or SNPs associated with glaucoma. The glaucoma indicators and genetic marker data are input into an adaptive model operative to generate an output indicative of a risk of glaucoma in the subject. In combination, the genetic indicators and genome data are more informative about the risk of glaucoma than either of the two in isolation. The adaptive model may be a two-stage model, having a first stage in which individual genetic indicators are combined with respective portions of the genome data by first adaptive model modules to form respective first outputs, and a second stage in which the first outputs are combined by a second adaptive model.
    Type: Application
    Filed: November 16, 2010
    Publication date: September 13, 2012
    Inventors: Jiang Liu, Zhuo Zhang, Wing Kee Damon Wong, Ngan Meng Tan, Fengshou Yin, Beng Hai Lee, Huiqi Li, Joo Hwee Lim, Carol Cheung, Tin Aung, Tien Yin Wong, Ziyang Liang, Jun Cheng, Baskaran Mani
  • Publication number: 20120155726
    Abstract: A method for determining a grade of nuclear cataract in a test image. The method includes: (1a) defining a contour of a lens structure in the test image, the defined contour of the lens structure comprising a segment around a boundary of a nucleus of the lens structure; (1b) extracting features from the test image based on the defined contour of the lens structure in the test image; and (1c) determining the grade of nuclear cataract in the test image based on the extracted features and a grading model.
    Type: Application
    Filed: August 24, 2009
    Publication date: June 21, 2012
    Inventors: Huiqi Li, Joo Hwee Lim, Jiang Jimmy Liu, Wing Kee Damon Wong, Ngan Meng Tan, Zhuo Zhang, Shijian Lu, Tien Yin Wong
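
A minimal sketch of steps (1b) and (1c) in the abstract of publication 20120155726: extract features from the lens region delimited by the detected contour and map them to a nuclear cataract grade with a trained grading model. The intensity statistics, the synthetic training data, and the use of support vector regression are illustrative assumptions.

```python
# Hypothetical cataract grading sketch; features and model are stand-ins.
import numpy as np
from sklearn.svm import SVR

def lens_features(lens_pixels: np.ndarray) -> np.ndarray:
    """Simple intensity statistics inside the lens/nucleus contour."""
    return np.array([lens_pixels.mean(), lens_pixels.std(), np.median(lens_pixels)])

rng = np.random.default_rng(1)
train_X = np.vstack([lens_features(rng.normal(g * 40, 10, 500))
                     for g in range(1, 5) for _ in range(10)])
train_y = np.repeat(np.arange(1, 5), 10).astype(float)   # known grades 1-4

model = SVR().fit(train_X, train_y)                      # the "grading model"
test = lens_features(rng.normal(120, 10, 500))
print(f"predicted nuclear cataract grade: {model.predict([test])[0]:.1f}")
```
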
  • Publication number: 20110282897
    Abstract: A method and system for maintaining a database of reference images, the database including a plurality of sets of images, each set associated with one location or object. The method comprises the steps of identifying local features of each set of images; determining distances between each local feature of each set and the local features of all other sets; identifying discriminative features of each set of images by removing local features based on the determined distances; and storing the discriminative features of each set of images.
    Type: Application
    Filed: June 5, 2009
    Publication date: November 17, 2011
    Applicant: Agency for Science, Technology and Research
    Inventors: Yiqun Li, Joo Hwee Lim, Hanlin Goh
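
A minimal sketch of the database-maintenance idea in the abstract of publication 20110282897: keep a local feature of a set only if its distance to every feature of all other sets is large, i.e. it discriminates that location or object. The random descriptors and the distance threshold are illustrative assumptions.

```python
# Hypothetical discriminative-feature filtering across image sets.
import numpy as np

rng = np.random.default_rng(2)
sets = {loc: rng.normal(size=(20, 32)) for loc in ("lobby", "cafe", "gate")}

def discriminative(features: np.ndarray, others: np.ndarray, thresh: float = 6.0):
    """Keep features whose nearest neighbor among the other sets is far away."""
    dists = np.linalg.norm(features[:, None, :] - others[None, :, :], axis=2)
    return features[dists.min(axis=1) > thresh]

database = {}
for loc, feats in sets.items():
    others = np.vstack([f for name, f in sets.items() if name != loc])
    database[loc] = discriminative(feats, others)
    print(loc, "keeps", len(database[loc]), "of", len(feats), "features")
```
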
  • Publication number: 20110091084
    Abstract: A method performed by a computer system for detecting opacity in an image of the lens of an eye. The method includes detecting a region of interest in an image of the lens, and processing the region of interest to produce a modified image using an algorithm which emphasizes opacity associated with a cortical cataract relative to opacity from other sources, such as posterior sub-capsular cataracts (PSC). The modified image may be used for grading the level of cortical opacity, by measuring, in the modified image, the proportion of opacity in at least one area of the region of interest.
    Type: Application
    Filed: May 20, 2008
    Publication date: April 21, 2011
    Inventors: Huiqi Li, Joo Hwee Lim, Jiang Liu, Li Liang Ko, Wing Kee Damon Wong, Tien Yin Wong
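
A minimal sketch of the grading step in the abstract of publication 20110091084: given a modified image that emphasizes cortical (spoke-like) opacity, measure the proportion of opaque pixels inside the region of interest. The radial-emphasis step is stubbed out with synthetic spokes, and the opacity threshold is an illustrative assumption.

```python
# Hypothetical opacity-proportion measurement on a pre-emphasized image.
import numpy as np

def cortical_opacity_ratio(modified_roi: np.ndarray, opacity_thresh: float = 0.5) -> float:
    """Fraction of the region of interest flagged as cortical opacity."""
    return float((modified_roi > opacity_thresh).mean())

# Synthetic "modified image": spoke-like bright wedges on a dark background.
h, w = 200, 200
y, x = np.mgrid[0:h, 0:w]
angle = np.arctan2(y - h / 2, x - w / 2)
roi = (np.cos(6 * angle) > 0.8).astype(float)   # six bright radial spokes

print(f"cortical opacity proportion: {cortical_opacity_ratio(roi):.2%}")
```
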
  • Publication number: 20110091083
    Abstract: A two-dimensional retinal fundus image of the retinal fundus of an eye is processed by optic disc segmentation (2) followed by cup segmentation (4). Data derived from the optic disc segmentation (i.e. the output of the disc segmentation (2) and/or data derived from that output, e.g. by a smoothing operation (3)) and data derived from the output of the optic cup segmentation (i.e. the output of the cup segmentation (4) and/or data derived from that output, e.g. by a smoothing operation (5)) are fed (6) to an adaptive model which has been trained to generate from such inputs a value indicative of the cup-to-disc ratio (CDR) of the eye. The CDR is indicative of glaucoma. Thus, the method can be used to screen patients for glaucoma.
    Type: Application
    Filed: May 14, 2008
    Publication date: April 21, 2011
    Inventors: Jiang Liu, Joo Hwee Lim, Wing Kee Wong, Huiqi Li, Tien Yin Wong
  • Publication number: 20100254596
    Abstract: A method and system for generating an entirely well-focused image of a three-dimensional scene. The method comprises the steps of a) learning a prediction model including at least a focal depth probability density function (PDF), h(k), for all depth values k, from historical tiles of the scene; b) predicting the possible focal surfaces in subsequent tiles of the scene by applying the prediction model; c) for each value of k, examining h(k) such that if h(k) is below a first threshold, no image is acquired at the depth k for said one tile, and if h(k) is above or equal to the first threshold, one or more images are acquired in a depth range around said value of k for said one tile; and d) processing the acquired images to generate a pixel focus map for said one tile.
    Type: Application
    Filed: September 26, 2008
    Publication date: October 7, 2010
    Inventors: Wei Xiong, Qi Tian, Joo Hwee Lim
  • Publication number: 20100005485
    Abstract: A method of annotating footage that includes a structured text broadcast stream, a video stream and an audio stream, the method including the steps of: extracting, directly or indirectly, one or more keywords and/or features from at least said structured text broadcast stream; temporally annotating said footage with said keywords and/or features; and analysing temporally adjacent annotated keywords and/or features to determine information about one or more events within said footage. Also provided are: a data store for storing video footage, a method of generating a personalised video summary, a system for annotating footage, and a system for generating a personalised video summary.
    Type: Application
    Filed: December 19, 2005
    Publication date: January 7, 2010
    Applicant: Agency for Science, Technology and Research
    Inventors: Qi Tian, Lingyu Duan, Changsheng Xu, Kongwah Wan, Joo Hwee Lim, Xin Guo Yu
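
A minimal sketch of the annotation idea in the abstract of publication 20100005485: pull keywords from a structured text broadcast stream (timestamped captions here), attach them to the footage timeline, and infer events from temporally adjacent keywords. The caption format, keyword list, and event rule are illustrative assumptions.

```python
# Hypothetical keyword annotation and adjacent-keyword event analysis.
KEYWORDS = {"goal", "penalty", "corner"}

captions = [  # (time in seconds, structured-text line)
    (12.0, "Corner kick taken from the right"),
    (14.5, "GOAL! A header at the near post"),
]

# Temporally annotate the footage with extracted keywords.
timeline = [(t, {w.lower().strip("!.,") for w in text.split()} & KEYWORDS)
            for t, text in captions]

# Analyse adjacent annotations: a corner shortly followed by a goal is one event.
for (t1, k1), (t2, k2) in zip(timeline, timeline[1:]):
    if "corner" in k1 and "goal" in k2 and t2 - t1 < 10:
        print(f"event: goal from corner between {t1}s and {t2}s")
```
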
  • Publication number: 20080193016
    Abstract: A method for use in indexing video footage, the video footage comprising an image signal and a corresponding audio signal relating to the image signal, the method comprising: extracting audio features from the audio signal of the video footage and visual features from the image signal of the video footage; comparing the extracted audio and visual features with predetermined audio and visual keywords; identifying the audio and visual keywords associated with the video footage based on the comparison of the extracted audio and visual features with the predetermined audio and visual keywords; and determining the presence of events in the video footage based on the audio and visual keywords associated with the video footage.
    Type: Application
    Filed: February 7, 2005
    Publication date: August 14, 2008
    Applicant: Agency for Science, Technology and Research
    Inventors: Joo Hwee Lim, Changsheng Xu, Kong Wah Wan, Qi Tian, Yu-Lin Kang
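
A minimal sketch of the indexing idea in the abstract of publication 20080193016: compare extracted audio/visual feature vectors against predetermined keyword prototypes by cosine similarity, then detect events from the resulting keyword sequence. The prototypes, features, and event pattern are illustrative assumptions.

```python
# Hypothetical audio-visual keyword matching and event detection.
import numpy as np

prototypes = {                      # predetermined audio-visual keywords
    "whistle": np.array([1.0, 0.0, 0.0]),
    "cheer":   np.array([0.0, 1.0, 0.0]),
    "replay":  np.array([0.0, 0.0, 1.0]),
}

def nearest_keyword(feature: np.ndarray) -> str:
    """Assign the keyword whose prototype is most similar to the feature."""
    sims = {k: feature @ v / (np.linalg.norm(feature) * np.linalg.norm(v))
            for k, v in prototypes.items()}
    return max(sims, key=sims.get)

# Per-segment features extracted from the footage (synthetic here).
segments = [np.array([0.9, 0.1, 0.0]), np.array([0.1, 0.95, 0.1])]
keywords = [nearest_keyword(f) for f in segments]
if keywords[:2] == ["whistle", "cheer"]:
    print("event detected: likely a scoring play", keywords)
```
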
  • Patent number: 6574378
    Abstract: A method, an apparatus and a computer program product for indexing and retrieving image data using visual keywords (108) is disclosed. Visual keywords (108) are prototypical visual tokens (104) and are extracted from samples of visual documents (100) in a visual-content domain via supervised and/or unsupervised learning processes. An image or a video-shot key frame is described and indexed by a signature (112) that registers the spatial distribution of the visual keywords (108) present in its visual content. Visual documents (100) are retrieved for a sample query (120) by comparing the similarities between the signature (112) of the query (120) and those of visual documents (100) in the database. The signatures (112) of visual documents (100) are generated based on spatial distributions of the visual keywords (108). Singular-value decomposition (114) is applied to the signatures (112) to obtain a coded description (116).
    Type: Grant
    Filed: July 8, 1999
    Date of Patent: June 3, 2003
    Assignee: Kent Ridge Digital Labs
    Inventor: Joo Hwee Lim
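
A minimal sketch of the visual-keyword scheme in the abstract of patent 6574378: describe each image by a spatial histogram of visual-keyword occurrences, compress the signatures with singular-value decomposition, and retrieve by similarity to a query signature. The random keyword counts stand in for the learned visual tokens, and the number of retained components is an assumption.

```python
# Hypothetical visual-keyword signatures, SVD coding, and retrieval.
import numpy as np

rng = np.random.default_rng(3)
n_images, n_keywords, n_cells = 50, 8, 4   # 2x2 spatial grid of keyword counts

# Signatures: spatial distribution of visual keywords per image.
signatures = rng.poisson(2.0, size=(n_images, n_keywords * n_cells)).astype(float)

# Coded description via SVD, keeping the strongest components.
U, S, Vt = np.linalg.svd(signatures, full_matrices=False)
k = 10
coded = U[:, :k] * S[:k]                   # low-dimensional code per image

query = coded[0]                           # query by example: image 0
sims = coded @ query / (np.linalg.norm(coded, axis=1) * np.linalg.norm(query))
print("top matches:", np.argsort(-sims)[:5])  # image 0 itself ranks first
```
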