Patents by Inventor Ka-Ming Leung
Ka-Ming Leung has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11288544
Abstract: A method of generating a training sample for matching unlabelled objects in a sequence of images. A first representation of the unlabelled objects is generated from images of a first and a second set of images. A second representation of the unlabelled objects is generated using an unsupervised method. An anchor image in the first set is selected. A first set of candidate images in the second set that are close to the anchor image in both the first and second representations is determined. A second set of candidate images in the second set that are distant from the anchor image in either the first or the second representation is determined. A match candidate image is selected from the first set or the second set of candidate images. The training sample is generated from at least the anchor image and the match candidate image.
Type: Grant
Filed: February 7, 2020
Date of Patent: March 29, 2022
Assignee: Canon Kabushiki Kaisha
Inventors: Ka Ming Leung, Getian Ye
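As an illustrative sketch (not the patented implementation), the candidate-selection step can be expressed with Euclidean distances in the two representations; the function name and the two threshold parameters are assumptions introduced for illustration:

```python
import numpy as np

def select_match_candidates(anchor_a, anchor_b, set2_a, set2_b,
                            near_thresh, far_thresh):
    """Sketch of the candidate-selection step.

    anchor_a / anchor_b: the anchor image's feature vectors under the first
    and second representations. set2_a / set2_b: arrays with one row per
    image in the second set, under each representation. The thresholds are
    hypothetical tuning parameters.
    """
    d_a = np.linalg.norm(set2_a - anchor_a, axis=1)  # distances, first representation
    d_b = np.linalg.norm(set2_b - anchor_b, axis=1)  # distances, second representation
    # Close in BOTH representations: likely positive match candidates.
    close = np.where((d_a < near_thresh) & (d_b < near_thresh))[0]
    # Distant in EITHER representation: likely negative candidates.
    distant = np.where((d_a > far_thresh) | (d_b > far_thresh))[0]
    return close, distant
```

A training triple would then pair the anchor with one image drawn from each returned index set.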
-
Publication number: 20200257940
Abstract: A method of generating a training sample for matching unlabelled objects in a sequence of images. A first representation of the unlabelled objects is generated from images of a first and a second set of images. A second representation of the unlabelled objects is generated using an unsupervised method. An anchor image in the first set is selected. A first set of candidate images in the second set that are close to the anchor image in both the first and second representations is determined. A second set of candidate images in the second set that are distant from the anchor image in either the first or the second representation is determined. A match candidate image is selected from the first set or the second set of candidate images. The training sample is generated from at least the anchor image and the match candidate image.
Type: Application
Filed: February 7, 2020
Publication date: August 13, 2020
Inventors: Ka Ming Leung, Getian Ye
-
Patent number: 10579901
Abstract: A method of comparing objects in images. A dictionary, determined from a plurality of feature vectors formed from a test image together with codes formed by applying the dictionary to the feature vectors, is received. The dictionary is based on a modified manifold obtained by determining correspondences for codes using pairwise similarities between codes. Comparison codes are determined for the objects in the images by applying the dictionary to the feature vectors of those objects. The objects in the images are compared based on their comparison codes.
Type: Grant
Filed: December 5, 2017
Date of Patent: March 3, 2020
Assignee: Canon Kabushiki Kaisha
Inventors: Getian Ye, Ka Ming Leung
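A hedged sketch of the encode-then-compare flow: here the codes are computed by least squares against the dictionary atoms and compared with cosine similarity, which stands in for whatever coding and comparison scheme the patent actually specifies:

```python
import numpy as np

def encode(features, dictionary):
    # Least-squares codes: solve dictionary @ code ~= feature for each
    # feature vector (rows of `features`). A stand-in coding scheme.
    codes, *_ = np.linalg.lstsq(dictionary, features.T, rcond=None)
    return codes.T

def compare(code_x, code_y):
    # Cosine similarity between two comparison codes; higher = more alike.
    denom = np.linalg.norm(code_x) * np.linalg.norm(code_y) + 1e-12
    return float(code_x @ code_y / denom)
```

With an identity dictionary the codes equal the features, so identical objects compare to a similarity of 1.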
-
Patent number: 10546208
Abstract: A method of selecting at least one video frame of a video sequence comprising a plurality of video frames. The method determines a time for analysis based on the length of the video sequence and the processing capability of the running device. A first sampling pattern is derived from the determined time for analysis, and a first set of frames is sampled infrequently throughout the video sequence in accordance with that pattern. A candidate frame is determined from the sampled frames based on image quality. A second set of frames, comprising one or more frames in a narrow range of the video sequence near the candidate frame, is determined in accordance with a second sampling pattern. At least one video frame is then selected from the sampled frames based on image quality.
Type: Grant
Filed: September 25, 2017
Date of Patent: January 28, 2020
Assignee: Canon Kabushiki Kaisha
Inventors: Sammy Chan, Ian Robert Boreham, Ka Ming Leung, Mark Ronald Tainsh
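The coarse-then-fine sampling described above might look like the following sketch; the budget arithmetic and the fixed refinement window are assumptions, not details from the abstract:

```python
def select_best_frame(num_frames, quality, time_budget, cost_per_frame, window=5):
    """Two-pass sampling sketch.

    `quality` maps a frame index to a quality score; `time_budget` and
    `cost_per_frame` model the running device's processing capability.
    """
    # How many frames we can afford to score determines the coarse stride.
    budget_frames = max(1, int(time_budget / cost_per_frame))
    step = max(1, num_frames // budget_frames)
    coarse = list(range(0, num_frames, step))          # first (infrequent) pattern
    candidate = max(coarse, key=quality)               # best coarse frame by quality
    # Second pattern: dense sampling in a narrow range around the candidate.
    lo, hi = max(0, candidate - window), min(num_frames, candidate + window + 1)
    return max(range(lo, hi), key=quality)
```

The coarse pass bounds total work by the time budget; the fine pass recovers quality peaks the stride skipped over.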
-
Patent number: 10503981
Abstract: A method of determining the similarity of objects in images. Feature vectors are determined for objects in images captured by cameras operating in a training domain, and for objects in images captured by cameras operating in a target domain, where the target-domain cameras operate under different environmental factors from the training-domain cameras. A mapping is determined for the difference between the feature vectors of the training domain and the target domain. That difference is converted to a matching space by applying the determined mapping to the feature vectors of both domains. A classifier is then determined using data associated with the feature vectors of the training domain in the matching space.
Type: Grant
Filed: June 27, 2017
Date of Patent: December 10, 2019
Assignee: Canon Kabushiki Kaisha
Inventors: Getian Ye, Ka Ming Leung
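As a deliberately simple stand-in for the learned mapping (whose form the abstract does not specify), aligning the domain means illustrates the core idea of moving target-domain features into a space shared with the training domain:

```python
import numpy as np

def align_domains(train_feats, target_feats):
    """Mean-shift alignment sketch: one row per object, one column per
    feature. This is NOT the patented mapping, only an illustration of
    mapping two domains into a common matching space."""
    shift = train_feats.mean(axis=0) - target_feats.mean(axis=0)
    return target_feats + shift  # target features in the common space
```

After alignment, a classifier trained on training-domain features can be applied to the shifted target-domain features.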
-
Patent number: 10372994
Abstract: A method of selecting at least one video frame of a video sequence. A plurality of faces is detected in at least one video frame of the video sequence. The orientation of the detected faces is tracked over a series of subsequent video frames to determine whether a first detected face is turning towards a second detected face. Using the tracked orientations, the method then determines a portion of the video sequence in which the first and second detected faces are oriented towards each other for at least a predetermined number of frames, defining a gaze fixation of the detected faces. At least one video frame capturing the gaze fixation is selected from the determined portion of the video sequence.
Type: Grant
Filed: May 2, 2017
Date of Patent: August 6, 2019
Assignee: Canon Kabushiki Kaisha
Inventors: Sammy Chan, Ka Ming Leung, Mark Ronald Tainsh
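One way to sketch the gaze-fixation test, assuming per-frame yaw angles from a face tracker and a left/right layout convention (the angle threshold and convention are assumptions, not details from the abstract):

```python
def find_gaze_fixation(yaw_a, yaw_b, min_frames=10, facing_thresh=20.0):
    """yaw_a / yaw_b: per-frame yaw angles (degrees) for two faces, with
    face A assumed left of face B. They face each other when A looks right
    (positive yaw) and B looks left (negative yaw). Returns the (start, end)
    frame range of the first fixation of at least `min_frames`, else None."""
    start = None
    for i, (a, b) in enumerate(zip(yaw_a, yaw_b)):
        facing = a > facing_thresh and b < -facing_thresh
        if facing and start is None:
            start = i                       # fixation run begins
        elif not facing and start is not None:
            if i - start >= min_frames:
                return (start, i)           # long enough: gaze fixation found
            start = None                    # too short: reset
    if start is not None and len(yaw_a) - start >= min_frames:
        return (start, len(yaw_a))          # run extends to the end
    return None
```

Any frame inside the returned range captures the gaze fixation and is a candidate for selection.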
-
Publication number: 20190171905
Abstract: A method of comparing objects in images. A dictionary, determined from a plurality of feature vectors formed from a test image together with codes formed by applying the dictionary to the feature vectors, is received. The dictionary is based on a modified manifold obtained by determining correspondences for codes using pairwise similarities between codes. Comparison codes are determined for the objects in the images by applying the dictionary to the feature vectors of those objects. The objects in the images are compared based on their comparison codes.
Type: Application
Filed: December 5, 2017
Publication date: June 6, 2019
Inventors: Getian Ye, Ka Ming Leung
-
Publication number: 20180373962
Abstract: A method of determining the similarity of objects in images. Feature vectors are determined for objects in images captured by cameras operating in a training domain, and for objects in images captured by cameras operating in a target domain, where the target-domain cameras operate under different environmental factors from the training-domain cameras. A mapping is determined for the difference between the feature vectors of the training domain and the target domain. That difference is converted to a matching space by applying the determined mapping to the feature vectors of both domains. A classifier is then determined using data associated with the feature vectors of the training domain in the matching space.
Type: Application
Filed: June 27, 2017
Publication date: December 27, 2018
Inventors: Getian Ye, Ka Ming Leung
-
Patent number: 10110801
Abstract: Methods, systems, and computer readable media are described for controlling a camera to perform a selected task from a set of tasks. The method comprises determining a viewing condition of the camera for each task in the set, and determining a posterior probability of task success for each task based on the determined viewing conditions and a prior probability of task success. The method also includes determining a change in the rate of information gain for task success for each task based on the posterior probability, selecting the task to be performed based on that change, and controlling the camera to perform the selected task.
Type: Grant
Filed: June 23, 2016
Date of Patent: October 23, 2018
Assignee: Canon Kabushiki Kaisha
Inventors: Ka Ming Leung, Geoffrey Richard Taylor
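A plausible reading of the selection rule, sketched with a Bayesian update and binary entropy; the actual probabilistic model in the patent may differ from this:

```python
import math

def entropy(p):
    # Binary entropy in bits; zero at p = 0 or 1 (no uncertainty).
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def select_task(priors, likelihoods, durations):
    """For each task: update the prior success probability with the
    viewing-condition likelihood (Bayes), measure how much the update
    reduced uncertainty, and normalise by the task's duration. The
    dictionaries map task names to probabilities/seconds (assumed API)."""
    best, best_rate = None, -1.0
    for task in priors:
        prior, like = priors[task], likelihoods[task]
        post = like * prior / (like * prior + (1 - like) * (1 - prior))
        rate = (entropy(prior) - entropy(post)) / durations[task]
        if rate > best_rate:
            best, best_rate = task, rate
    return best
```

Under this reading, a task whose viewing conditions are highly informative about success, and which completes quickly, is preferred.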
-
Publication number: 20180089528
Abstract: A method of selecting at least one video frame of a video sequence comprising a plurality of video frames. The method determines a time for analysis based on the length of the video sequence and the processing capability of the running device. A first sampling pattern is derived from the determined time for analysis, and a first set of frames is sampled infrequently throughout the video sequence in accordance with that pattern. A candidate frame is determined from the sampled frames based on image quality. A second set of frames, comprising one or more frames in a narrow range of the video sequence near the candidate frame, is determined in accordance with a second sampling pattern. At least one video frame is then selected from the sampled frames based on image quality.
Type: Application
Filed: September 25, 2017
Publication date: March 29, 2018
Inventors: Sammy Chan, Ian Robert Boreham, Ka Ming Leung, Mark Ronald Tainsh
-
Publication number: 20170330038
Abstract: A method of selecting at least one video frame of a video sequence. A plurality of faces is detected in at least one video frame of the video sequence. The orientation of the detected faces is tracked over a series of subsequent video frames to determine whether a first detected face is turning towards a second detected face. Using the tracked orientations, the method then determines a portion of the video sequence in which the first and second detected faces are oriented towards each other for at least a predetermined number of frames, defining a gaze fixation of the detected faces. At least one video frame capturing the gaze fixation is selected from the determined portion of the video sequence.
Type: Application
Filed: May 2, 2017
Publication date: November 16, 2017
Inventors: Sammy Chan, Ka Ming Leung, Mark Ronald Tainsh
-
Patent number: 9811733
Abstract: A method of selecting a frame from a plurality of video frames captured by a camera (120). The method determines the features to which map points in a three-dimensional space are projected. A histogram of the determined features is created for a plurality of regions in the frame. One of the regions may be determined to be an unmapped region based on the created histogram. The frame is selected based on the size of the unmapped region.
Type: Grant
Filed: October 2, 2014
Date of Patent: November 7, 2017
Assignee: Canon Kabushiki Kaisha
Inventor: Ka Ming Leung
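The histogram-of-regions idea can be sketched as follows; the grid size and the empty-cell criterion for "unmapped" are assumptions introduced for illustration:

```python
import numpy as np

def unmapped_region_score(feature_xy, frame_w, frame_h, grid=4):
    """Count projected map-point features per grid cell of the frame;
    cells with no features approximate the unmapped region. Returns the
    fraction of empty cells (larger value = larger unmapped region).

    feature_xy: list of (x, y) image positions of features to which 3D
    map points project in this frame.
    """
    hist, _, _ = np.histogram2d(
        [x for x, _ in feature_xy],
        [y for _, y in feature_xy],
        bins=grid, range=[[0, frame_w], [0, frame_h]])
    return float((hist == 0).sum()) / (grid * grid)
```

A frame-selection loop would then prefer frames whose score is large, since they view parts of the scene the map does not yet cover.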
-
Publication number: 20170006215
Abstract: Methods, systems, and computer readable media are described for controlling a camera to perform a selected task from a set of tasks. The method comprises determining a viewing condition of the camera for each task in the set, and determining a posterior probability of task success for each task based on the determined viewing conditions and a prior probability of task success. The method also includes determining a change in the rate of information gain for task success for each task based on the posterior probability, selecting the task to be performed based on that change, and controlling the camera to perform the selected task.
Type: Application
Filed: June 23, 2016
Publication date: January 5, 2017
Inventors: Ka Ming Leung, Geoffrey Richard Taylor
-
Patent number: 9407293
Abstract: A system (100) for encoding an input video frame (1005), for transmitting or storing the encoded video, and for decoding the video is disclosed. The system (100) includes an encoder (1000) and a decoder (1200) interconnected through a storage or transmission medium (1100). The encoder (1000) includes a turbo encoder (1015) for forming parity bit data from the input frame (1005) into a first data source (1120), and a sampler (1020) for down-sampling the input frame (1005), followed by intraframe compression (1030), to form a second data source (1110). The decoder (1200) receives data from the second data source (1110) to form an estimate of the frame (1005). The decoder (1200) also receives the parity bit data from the first data source (1120) and corrects errors in the estimate by applying the parity bit data to it. Each bit plane is corrected in turn by a turbo decoder (1260). The decoder also determines how reliably each pixel value was decoded.
Type: Grant
Filed: November 27, 2008
Date of Patent: August 2, 2016
Assignee: Canon Kabushiki Kaisha
Inventors: Axel Lakus-Becker, Ka-Ming Leung
-
Patent number: 9076059
Abstract: A method of selecting a first image from a plurality of images for constructing a coordinate system of an augmented reality system. A first image feature in the first image corresponding to the feature of the marker is determined. A second image feature in a second image is determined based on a second pose of a camera, the second image feature having a visual match to the first image feature. A reconstructed position of the feature of the marker in three-dimensional (3D) space is determined based on the positions of the first and second image features and the first and second camera poses. A reconstruction error is determined based on the reconstructed position of the feature of the marker and a pre-determined position of the marker.
Type: Grant
Filed: December 5, 2012
Date of Patent: July 7, 2015
Assignee: Canon Kabushiki Kaisha
Inventors: Ka-Ming Leung, Simon John Liddington
-
Patent number: 9014278
Abstract: A method (800) of performing distributed video encoding on an input video frame (1005) is disclosed. The method (800) forms a bit-stream from the original pixel values of the input video frame (1005), such that groups of bits in the bit-stream are associated with clusters of spatial pixel positions in the input video frame (1005). The bit-stream is interleaved to reduce the clustering. The interleaved bit-stream is then encoded to generate parity bits according to a bitwise error correction method.
Type: Grant
Filed: October 8, 2008
Date of Patent: April 21, 2015
Assignee: Canon Kabushiki Kaisha
Inventors: Timothy Merrick Long, Axel Lakus-Becker, Ka-Ming Leung
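The interleaving step can be sketched with a seeded pseudorandom permutation; the patent's actual interleaver design is not given in the abstract:

```python
import random

def interleave(bits, seed=0):
    """Permute the bit-stream with a seeded pseudorandom order so each
    cluster of spatially adjacent bits is spread across the stream; burst
    errors then look independent to a bitwise error-correction code.
    Returns the interleaved bits and the permutation (needed to invert)."""
    order = list(range(len(bits)))
    random.Random(seed).shuffle(order)
    return [bits[i] for i in order], order

def deinterleave(bits, order):
    # Invert the permutation applied by interleave().
    out = [0] * len(bits)
    for pos, src in enumerate(order):
        out[src] = bits[pos]
    return out
```

A fixed seed makes the permutation reproducible at the decoder without transmitting the order itself.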
-
Publication number: 20150098645
Abstract: A method of selecting a frame from a plurality of video frames captured by a camera (120). The method determines the features to which map points in a three-dimensional space are projected. A histogram of the determined features is created for a plurality of regions in the frame. One of the regions may be determined to be an unmapped region based on the created histogram. The frame is selected based on the size of the unmapped region.
Type: Application
Filed: October 2, 2014
Publication date: April 9, 2015
Inventor: Ka Ming Leung
-
Patent number: 8917776
Abstract: A method of determining bit rates for use in encoding video data for joint decoding is disclosed. An approximation of the video data is generated for later use as side information during joint decoding. Bit error probabilities are determined for each bit plane and each coefficient band of the approximation. The bit rates for encoding the bit planes are then determined from the bit error probabilities, bit planes, and coefficient bands.
Type: Grant
Filed: October 23, 2009
Date of Patent: December 23, 2014
Assignee: Canon Kabushiki Kaisha
Inventors: Axel Lakus-Becker, Ka-Ming Leung
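For intuition, the information-theoretic floor on such rates is the binary entropy of the bit error probability (the Slepian-Wolf bound); the patented rate rule may add margins on top of this floor:

```python
import math

def bits_per_source_bit(p_err):
    """Minimum parity rate for a bit plane whose side information disagrees
    with the source with probability p_err: the binary entropy H(p_err).
    This shows only the theoretical lower bound, not the patented rule."""
    if p_err in (0.0, 1.0):
        return 0.0  # perfectly predictable (or perfectly inverted) plane
    return -p_err * math.log2(p_err) - (1 - p_err) * math.log2(1 - p_err)
```

A bit plane whose approximation is already accurate (small error probability) therefore needs few parity bits, while one near 50% error needs close to a full bit per source bit.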
-
Patent number: 8810633
Abstract: Disclosed is a method of image coding for joint decoding of images from different viewpoints using distributed coding techniques. The method receives a first set of features (205) and error correction bits (203) corresponding to a first image (201) obtained at a first viewpoint (122), and a second set of features (425) from a second image (254, 415) corresponding to a second viewpoint (124). An approximation (437) of the first image (201) at the first viewpoint (122) is determined (432, 434, 436) based on the first and second sets of features (205, 425) and the second image at the second viewpoint. A reliability measure (445) of the approximation of the first image is then determined (450) by jointly decoding (438) the approximation (437) using the error correction bits (203). The approximation of the first image is then refined iteratively (460, 438) based on the reliability measure (445) and image information (448) derived from the joint decoding.
Type: Grant
Filed: November 29, 2010
Date of Patent: August 19, 2014
Assignee: Canon Kabushiki Kaisha
Inventors: Ka-Ming Leung, Zhonghua Ma
-
Patent number: 8594196
Abstract: A method of encoding video data generates a first source of video data from a first set of video frames by approximating the first set of video frames. A second source of video data is generated from a second set of video frames by transforming first respective binary representations of the pixel values of the second set of video frames into second respective binary representations of those pixel values. The video data sources are encoded independently according to a mapping wherein the Hamming distance between successive pixel values in a predetermined range of values in the second binary representation is greater than the Hamming distance between successive pixel values in that range in the first binary representation.
Type: Grant
Filed: August 29, 2008
Date of Patent: November 26, 2013
Assignee: Canon Kabushiki Kaisha
Inventors: Axel Lakus-Becker, Ka-Ming Leung
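For intuition about successive-value Hamming distances, compare natural binary with a Gray code, where successive values always differ in exactly one bit. This illustrates the property the mapping controls, not the patent's specific mapping:

```python
def hamming(a, b):
    # Number of bit positions in which integers a and b differ.
    return bin(a ^ b).count("1")

def gray(v):
    # Standard binary-reflected Gray code of v.
    return v ^ (v >> 1)

def successive_distances(mapping, lo, hi):
    """Hamming distances between the codewords of each pair of successive
    pixel values in [lo, hi], under the given value-to-codeword mapping."""
    return [hamming(mapping(v), mapping(v + 1)) for v in range(lo, hi)]
```

Natural binary (the identity mapping) gives strictly larger total successive distance over a range than Gray code, so relative to a Gray-coded first representation it would satisfy the "greater Hamming distance" property the abstract describes.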