Patents by Inventor Joo Hwee Lim
Joo Hwee Lim has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20190114482
Abstract: According to various embodiments, a method for providing task-related information to a user may be provided. The method may include: determining location information based on a spatial model; determining task information based on a task model; determining sensor information; determining output information based on the location information, task information and sensor information; and providing the output information to the user. In a specific embodiment, the output information may comprise an orientation cue, an error indication or a contextual cue to assist the user in performing the task associated with the location detected by a vision recognition method, and the output information can be provided to the user as augmented reality in a wearable device.
Type: Application
Filed: March 30, 2017
Publication date: April 18, 2019
Inventors: Liyuan LI, Mark David RICE, Joo Hwee LIM, Suat Ling Jamie NG, Teck Sun Marcus WAN, Shue Ching CHIA, Hong Huei TAY, Shiang Long LEE
-
Patent number: 9445716
Abstract: A non-stereo fundus image is used to obtain a plurality of glaucoma indicators. Additionally, genome data for the subject is used to obtain genetic marker data relating to one or more genes and/or SNPs associated with glaucoma. The glaucoma indicators and genetic marker data are input into an adaptive model operative to generate an output indicative of a risk of glaucoma in the subject. In combination, the genetic indicators and genome data are more informative about the risk of glaucoma than either of the two in isolation. The adaptive model may be a two-stage model, having a first stage in which individual genetic indicators are combined with respective portions of the genome data by first adaptive model modules to form respective first outputs, and a second stage in which the first outputs are combined by a second adaptive model.
Type: Grant
Filed: August 17, 2015
Date of Patent: September 20, 2016
Assignees: Agency for Science, Technology and Research; Singapore Health Services Pte Ltd
Inventors: Jiang Liu, Zhuo Zhang, Wing Kee Damon Wong, Ngan Meng Tan, Fengshou Yin, Beng Hai Lee, Huiqi Li, Joo Hwee Lim, Carol Cheung, Tin Aung, Tien Yin Wong, Ziyang Liang, Jun Cheng, Baskaran Mani
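The two-stage adaptive model described in this abstract can be sketched as follows. The patent does not specify the model family, so the linear first-stage weighting, the logistic second-stage combiner, and all input values here are illustrative assumptions only:

```python
import math

def first_stage_module(indicator, genome_portion, w_ind=0.6, w_gen=0.4):
    """One first-stage module: combine a single image-derived glaucoma
    indicator with the genetic-marker data for its associated gene.
    The linear weighting is an illustrative assumption."""
    genetic_score = sum(genome_portion) / len(genome_portion)
    return w_ind * indicator + w_gen * genetic_score

def second_stage(first_outputs):
    """Second stage: combine the first-stage outputs into one risk
    score in [0, 1] (a logistic squash, chosen here for illustration)."""
    mean = sum(first_outputs) / len(first_outputs)
    return 1.0 / (1.0 + math.exp(-mean))

# Hypothetical inputs: two image indicators, each paired with SNP
# risk-allele counts for one associated gene.
indicators = [0.8, 0.3]
genome = [[1, 0, 1], [0, 0, 1]]
outputs = [first_stage_module(i, g) for i, g in zip(indicators, genome)]
risk = second_stage(outputs)
```

The point of the two-stage structure is that each gene's markers are fused with their matching image indicator before any cross-indicator combination takes place.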
-
Publication number: 20160100753
Abstract: A non-stereo fundus image is used to obtain a plurality of glaucoma indicators. Additionally, genome data for the subject is used to obtain genetic marker data relating to one or more genes and/or SNPs associated with glaucoma. The glaucoma indicators and genetic marker data are input into an adaptive model operative to generate an output indicative of a risk of glaucoma in the subject. In combination, the genetic indicators and genome data are more informative about the risk of glaucoma than either of the two in isolation. The adaptive model may be a two-stage model, having a first stage in which individual genetic indicators are combined with respective portions of the genome data by first adaptive model modules to form respective first outputs, and a second stage in which the first outputs are combined by a second adaptive model.
Type: Application
Filed: August 17, 2015
Publication date: April 14, 2016
Inventors: Jiang LIU, Zhuo ZHANG, Wing Kee Damon WONG, Ngan Meng TAN, Fengshou YIN, Beng Hai LEE, Huiqi LI, Joo Hwee LIM, Carol CHEUNG, Tin AUNG, Tien Yin WONG, Ziyang LIANG, Jun CHENG, Baskaran MANI
-
Patent number: 9107617
Abstract: A non-stereo fundus image is used to obtain a plurality of glaucoma indicators. Additionally, genome data for the subject is used to obtain genetic marker data relating to one or more genes and/or SNPs associated with glaucoma. The glaucoma indicators and genetic marker data are input into an adaptive model operative to generate an output indicative of a risk of glaucoma in the subject. In combination, the genetic indicators and genome data are more informative about the risk of glaucoma than either of the two in isolation. The adaptive model may be a two-stage model, having a first stage in which individual genetic indicators are combined with respective portions of the genome data by first adaptive model modules to form respective first outputs, and a second stage in which the first outputs are combined by a second adaptive model.
Type: Grant
Filed: November 16, 2010
Date of Patent: August 18, 2015
Assignees: Agency for Science, Technology and Research; Singapore Health Services Pte Ltd.
Inventors: Jiang Liu, Zhuo Zhang, Wing Kee Damon Wong, Ngan Meng Tan, Fengshou Yin, Beng Hai Lee, Huiqi Li, Joo Hwee Lim, Carol Cheung, Tin Aung, Tien Yin Wong, Ziyang Liang, Jun Cheng, Baskaran Mani
-
Patent number: 8705826
Abstract: A two-dimensional retinal fundus image of the retinal fundus of an eye is processed by optic disc segmentation (2) followed by cup segmentation (4). Data derived from the optic disc segmentation (i.e. the output of the disc segmentation (2) and/or data derived from it, e.g. by a smoothing operation (3)) and data derived from the output of the optic cup segmentation (i.e. the output of the cup segmentation (4) and/or data derived from it, e.g. by a smoothing operation (5)) are fed (6) to an adaptive model which has been trained to generate from such inputs a value indicative of the cup-to-disc ratio (CDR) of the eye. The CDR is indicative of glaucoma. Thus, the method can be used to screen patients for glaucoma.
Type: Grant
Filed: May 14, 2008
Date of Patent: April 22, 2014
Assignee: Agency for Science, Technology and Research
Inventors: Jiang Liu, Joo Hwee Lim, Wing Kee Wong, Huiqi Li, Tien Yin Wong
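The cup-to-disc ratio that the trained model estimates is itself a simple quantity. A minimal sketch of the CDR computation and a screening decision, assuming diameters have already been obtained from the disc and cup segmentations (the 0.6 cut-off below is an illustrative threshold, not one taken from the patent):

```python
def vertical_cdr(cup_diameter, disc_diameter):
    """Cup-to-disc ratio: vertical cup diameter divided by vertical
    disc diameter. A larger CDR is associated with glaucoma."""
    if disc_diameter <= 0:
        raise ValueError("disc diameter must be positive")
    return cup_diameter / disc_diameter

def is_glaucoma_suspect(cdr, threshold=0.6):
    """Flag an eye for follow-up when the CDR reaches the screening
    threshold (0.6 is a hypothetical cut-off for illustration)."""
    return cdr >= threshold
```

In the patented method the CDR is not computed this directly; it is the output of an adaptive model fed with the segmentation-derived data, which makes the estimate more robust to segmentation noise.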
-
Publication number: 20140003723
Abstract: A text detection device is provided. The text detection device may include: an image input circuit configured to receive an image; an edge property determination circuit configured to determine a plurality of edge properties for each of a plurality of scales of the image; and a text location determination circuit configured to determine a text location in the image based on the plurality of edge properties for the plurality of scales of the image.
Type: Application
Filed: June 24, 2013
Publication date: January 2, 2014
Applicant: Agency for Science, Technology and Research
Inventors: Shijian LU, Joo Hwee LIM
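The abstract describes combining edge properties across scales without saying how. One hypothetical reading, sketched below, uses per-column edge density as the "edge property" and marks a column as text only when its density is high at every scale; both the property and the consistency rule are assumptions for illustration:

```python
def column_edge_density(edge_map):
    """Fraction of edge pixels in each column of a binary edge map
    (one simple per-scale 'edge property')."""
    h = len(edge_map)
    w = len(edge_map[0])
    return [sum(edge_map[y][x] for y in range(h)) / h for x in range(w)]

def text_columns(edge_maps, thresh=0.5):
    """Mark a column as containing text if its edge density reaches
    `thresh` at every scale (a crude multi-scale consistency check)."""
    densities = [column_edge_density(m) for m in edge_maps]
    w = len(densities[0])
    return [x for x in range(w) if all(d[x] >= thresh for d in densities)]
```

Requiring agreement across scales suppresses textures that look edge-dense at only one resolution, which is the general motivation for multi-scale edge analysis.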
-
Patent number: 8428322
Abstract: A method for determining the position of an optic cup boundary in a 2D retinal image. The method includes detecting kinks in blood vessels at an estimated boundary of the optic cup and the optic disc, and determining the position of the optic cup boundary in the 2D retinal image based on the detected kinks. The determined optic cup boundary may be used for determining a CDR, which may in turn be used for determining a risk of glaucoma.
Type: Grant
Filed: December 15, 2008
Date of Patent: April 23, 2013
Assignees: Singapore Health Services Pte Ltd; Agency for Science, Technology and Research; National University of Singapore
Inventors: Wing Kee Damon Wong, Jiang Liu, Joo Hwee Lim, Huiqi Li, Ngan Meng Tan, Tien Yin Wong
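A "kink" is a sharp direction change where a vessel bends as it crosses the cup rim. A minimal sketch of kink detection on a vessel centreline polyline, assuming the centreline points are already extracted (the 30-degree threshold is a hypothetical value, not taken from the patent):

```python
import math

def turning_angles(points):
    """Direction change, in degrees, at each interior point of a
    vessel centreline given as a list of (x, y) points."""
    angles = []
    for i in range(1, len(points) - 1):
        (x0, y0), (x1, y1), (x2, y2) = points[i - 1], points[i], points[i + 1]
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        d = abs(a2 - a1)
        angles.append(math.degrees(min(d, 2 * math.pi - d)))
    return angles

def kink_points(points, min_angle=30.0):
    """Interior points where the vessel bends by at least `min_angle`
    degrees (illustrative threshold)."""
    return [points[i + 1] for i, a in enumerate(turning_angles(points))
            if a >= min_angle]
```

In the patented method, only kinks found near the estimated disc/cup boundary would be used as evidence for the cup boundary's position.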
-
Patent number: 8331627
Abstract: A method and system for generating an entirely well-focused image of a three-dimensional scene. The method comprises the steps of a) learning a prediction model including at least a focal depth probability density function (PDF), h(k), for all depth values k, from historical tiles of the scene; b) predicting the possible focal surfaces in subsequent tiles of the scene by applying the prediction model; c) for each value of k, examining h(k) such that if h(k) is below a first threshold, no image is acquired at the depth k for said one tile, and if h(k) is above or equal to the first threshold, one or more images are acquired in a depth range around said value of k for said one tile; and d) processing the acquired images to generate a pixel focus map for said one tile.
Type: Grant
Filed: September 26, 2008
Date of Patent: December 11, 2012
Assignee: Agency for Science, Technology and Research
Inventors: Wei Xiong, Qi Tian, Joo Hwee Lim
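Steps a) and c) of the abstract can be sketched directly: estimate h(k) from historical per-pixel depths, then acquire images only in a small range around depths whose probability reaches the threshold. The threshold and margin values below are illustrative assumptions:

```python
from collections import Counter

def learn_depth_pdf(historical_depths):
    """Step a): estimate the focal-depth PDF h(k) from the depths
    observed in previously processed tiles of the scene."""
    counts = Counter(historical_depths)
    n = len(historical_depths)
    return {k: c / n for k, c in counts.items()}

def depths_to_acquire(h, threshold=0.1, margin=1):
    """Step c): skip depths with h(k) below the threshold; for depths
    at or above it, acquire images in a range of +/- `margin` around k."""
    acquire = set()
    for k, p in h.items():
        if p >= threshold:
            acquire.update(range(k - margin, k + margin + 1))
    return sorted(acquire)
```

The payoff is speed: depths that historically never held a focal surface are skipped entirely, so far fewer images are captured per tile than in exhaustive focal stacking.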
-
Publication number: 20120230564
Abstract: A non-stereo fundus image is used to obtain a plurality of glaucoma indicators. Additionally, genome data for the subject is used to obtain genetic marker data relating to one or more genes and/or SNPs associated with glaucoma. The glaucoma indicators and genetic marker data are input into an adaptive model operative to generate an output indicative of a risk of glaucoma in the subject. In combination, the genetic indicators and genome data are more informative about the risk of glaucoma than either of the two in isolation. The adaptive model may be a two-stage model, having a first stage in which individual genetic indicators are combined with respective portions of the genome data by first adaptive model modules to form respective first outputs, and a second stage in which the first outputs are combined by a second adaptive model.
Type: Application
Filed: November 16, 2010
Publication date: September 13, 2012
Inventors: Jiang Liu, Zhuo Zhang, Wing Kee Damon Wong, Ngan Meng Tan, Fengshou Yin, Beng Hai Lee, Huiqi Li, Joo Hwee Lim, Carol Cheung, Tin Aung, Tien Yin Wong, Ziyang Liang, Jun Cheng, Baskaran Mani
-
Publication number: 20120155726
Abstract: A method for determining a grade of nuclear cataract in a test image. The method includes: (1a) defining a contour of a lens structure in the test image, the defined contour of the lens structure comprising a segment around a boundary of a nucleus of the lens structure; (1b) extracting features from the test image based on the defined contour of the lens structure in the test image; and (1c) determining the grade of nuclear cataract in the test image based on the extracted features and a grading model.
Type: Application
Filed: August 24, 2009
Publication date: June 21, 2012
Inventors: Huiqi Li, Joo Hwee Lim, Jiang Jimmy Liu, Wing Kee Damon Wong, Ngan Meng Tan, Zhuo Zhang, Shijian Lu, Tien Yin Wong
-
Publication number: 20110282897
Abstract: A method and system for maintaining a database of reference images, the database including a plurality of sets of images, each set associated with one location or object. The method comprises the steps of identifying local features of each set of images; determining distances between each local feature of each set and the local features of all other sets; identifying discriminative features of each set of images by removing local features based on the determined distances; and storing the discriminative features of each set of images.
Type: Application
Filed: June 5, 2009
Publication date: November 17, 2011
Applicant: Agency for Science, Technology and Research
Inventors: Yiqun Li, Joo Hwee Lim, Hanlin Goh
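The abstract only says local features are removed "based on the determined distances". One hypothetical criterion, sketched below, keeps a feature only if it is far from every feature of every other set, so the surviving features are the ones that discriminate between locations; the distance metric and threshold are assumptions:

```python
def euclid(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def discriminative_features(sets, min_dist=1.0):
    """For each set of local features, keep only those at least
    `min_dist` away from every feature of every *other* set
    (illustrative removal criterion)."""
    kept = []
    for i, current in enumerate(sets):
        others = [f for j, other in enumerate(sets) if j != i for f in other]
        kept.append([f for f in current
                     if all(euclid(f, g) >= min_dist for g in others)])
    return kept
```

Pruning near-duplicate features across locations both shrinks the database and reduces false matches at recognition time.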
-
Publication number: 20110091084
Abstract: A method performed by a computer system for detecting opacity in an image of the lens of an eye. The method includes detecting a region of interest in a picture of the lens, and processing the region of interest to produce a modified image using an algorithm which emphasizes opacity associated with a cortical cataract relative to opacity from other causes, such as posterior sub-capsular cataracts (PSC). The modified image may be used for grading the level of cortical opacity by measuring, in the modified image, the proportion of opacity in at least one area of the region of interest.
Type: Application
Filed: May 20, 2008
Publication date: April 21, 2011
Inventors: Huiqi Li, Joo Hwee Lim, Jiang Liu, Li Liang Ko, Wing Kee Damon Wong, Tien Yin Wong
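The grading step, measuring the proportion of opacity in an area of the region of interest, is straightforward once the enhanced image is available. A minimal sketch, assuming the region of interest is a 2D array of intensity values in which bright pixels indicate opacity (the 128 threshold is a hypothetical value):

```python
def cortical_opacity_proportion(roi, opacity_thresh=128):
    """Fraction of pixels in the region of interest whose intensity
    reaches `opacity_thresh` in the enhanced image; a higher value
    indicates more cortical opacity."""
    total = 0
    opaque = 0
    for row in roi:
        for px in row:
            total += 1
            if px >= opacity_thresh:
                opaque += 1
    return opaque / total if total else 0.0
```

The emphasis step described in the abstract matters precisely because this ratio would otherwise count PSC opacity as cortical opacity.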
-
Publication number: 20110091083
Abstract: A two-dimensional retinal fundus image of the retinal fundus of an eye is processed by optic disc segmentation (2) followed by cup segmentation (4). Data derived from the optic disc segmentation (i.e. the output of the disc segmentation (2) and/or data derived from it, e.g. by a smoothing operation (3)) and data derived from the output of the optic cup segmentation (i.e. the output of the cup segmentation (4) and/or data derived from it, e.g. by a smoothing operation (5)) are fed (6) to an adaptive model which has been trained to generate from such inputs a value indicative of the cup-to-disc ratio (CDR) of the eye. The CDR is indicative of glaucoma. Thus, the method can be used to screen patients for glaucoma.
Type: Application
Filed: May 14, 2008
Publication date: April 21, 2011
Inventors: Jiang Liu, Joo Hwee Lim, Wing Kee Wong, Huiqi Li, Tien Yin Wong
-
Publication number: 20100254596
Abstract: A method and system for generating an entirely well-focused image of a three-dimensional scene. The method comprises the steps of a) learning a prediction model including at least a focal depth probability density function (PDF), h(k), for all depth values k, from historical tiles of the scene; b) predicting the possible focal surfaces in subsequent tiles of the scene by applying the prediction model; c) for each value of k, examining h(k) such that if h(k) is below a first threshold, no image is acquired at the depth k for said one tile, and if h(k) is above or equal to the first threshold, one or more images are acquired in a depth range around said value of k for said one tile; and d) processing the acquired images to generate a pixel focus map for said one tile.
Type: Application
Filed: September 26, 2008
Publication date: October 7, 2010
Inventors: Wei Xiong, Qi Tian, Joo Hwee Lim
-
Publication number: 20100005485
Abstract: A method of annotating footage that includes a structured text broadcast stream, a video stream and an audio stream, the method comprising the steps of: extracting, directly or indirectly, one or more keywords and/or features from at least said structured text broadcast stream; temporally annotating said footage with said keywords and/or features; and analysing temporally adjacent annotated keywords and/or features to determine information about one or more events within said footage. Also provided are: a data store for storing video footage, a method of generation of a personalised video summary, a system for annotating footage, and a system for generation of a personalised video summary.
Type: Application
Filed: December 19, 2005
Publication date: January 7, 2010
Applicant: Agency for Science, Technology and Research
Inventors: Qi Tian, Lingyu Duan, Changsheng Xu, Kongwah Wan, Joo Hwee Lim, Xin Guo Yu
-
Publication number: 20080193016
Abstract: A method for use in indexing video footage, the video footage comprising an image signal and a corresponding audio signal relating to the image signal, the method comprising: extracting audio features from the audio signal of the video footage and visual features from the image signal of the video footage; comparing the extracted audio and visual features with predetermined audio and visual keywords; identifying the audio and visual keywords associated with the video footage based on the comparison of the extracted audio and visual features with the predetermined audio and visual keywords; and determining the presence of events in the video footage based on the audio and visual keywords associated with the video footage.
Type: Application
Filed: February 7, 2005
Publication date: August 14, 2008
Applicant: Agency for Science, Technology and Research
Inventors: Joo Hwee Lim, Changsheng Xu, Kong Wah Wan, Qi Tian, Yu-Lin Kang
-
Patent number: 6574378
Abstract: A method, an apparatus and a computer program product for indexing and retrieving image data using visual keywords (108) is disclosed. Visual keywords (108) are prototypical visual tokens (104) and are extracted from samples of visual documents (100) in a visual-content domain via supervised and/or unsupervised learning processes. An image or a video-shot key frame is described and indexed by a signature (112) that registers the spatial distribution of the visual keywords (108) present in its visual content. Visual documents (100) are retrieved for a sample query (120) by comparing the similarities between the signature (112) of the query (120) and those of visual documents (100) in the database. The signatures (112) of visual documents (100) are generated based on spatial distributions of the visual keywords (108). Singular-value decomposition (114) is applied to the signatures (112) to obtain a coded description (116).
Type: Grant
Filed: July 8, 1999
Date of Patent: June 3, 2003
Assignee: Kent Ridge Digital Labs
Inventor: Joo Hwee Lim
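The signature construction described here can be sketched directly: assign each local token to its nearest visual keyword, build one keyword histogram per spatial region, and compare signatures by similarity. This sketch assumes 2D tokens and pre-learned keyword prototypes, and omits the SVD coding step (114):

```python
def assign_keyword(token, keywords):
    """Index of the nearest visual keyword (prototype) for a token."""
    dists = [sum((t - k) ** 2 for t, k in zip(token, kw)) for kw in keywords]
    return dists.index(min(dists))

def signature(tokens_by_region, keywords):
    """Spatial-distribution signature: one normalised keyword
    histogram per image region, concatenated in region order."""
    sig = []
    for tokens in tokens_by_region:
        hist = [0.0] * len(keywords)
        for t in tokens:
            hist[assign_keyword(t, keywords)] += 1.0
        total = sum(hist) or 1.0
        sig.extend(h / total for h in hist)
    return sig

def cosine(a, b):
    """Cosine similarity, used to rank database signatures against a
    query signature (one common choice of similarity measure)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0
```

Because the histograms are kept per region rather than pooled globally, the signature captures *where* each visual keyword occurs, not just how often.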