Patents by Inventor Jiandan Chen
Jiandan Chen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11972622
Abstract: A method for updating a coordinate of an annotated point in a digital image due to camera movement is performed by an image processing device, which obtains a current digital image of a scene. The current digital image has been captured by a camera subsequent to movement of the camera relative to the scene. The current digital image is associated with at least one annotated point. Each annotated point has a respective coordinate in the current digital image. The method comprises identifying an amount of the movement by comparing position indicative information in the current digital image to position indicative information in a previous digital image of the scene. The previous digital image has been captured prior to movement of the camera. The method comprises updating the coordinate of each annotated point in accordance with the identified amount of movement and a camera homography.
Type: Grant
Filed: December 16, 2021
Date of Patent: April 30, 2024
Assignee: Axis AB
Inventors: Jiandan Chen, Haiyan Xie
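The core step of this patent — re-projecting annotated coordinates through a homography that captures the camera movement — can be sketched as follows. This is an illustrative reading, assuming the homography has already been estimated (e.g., from matched features between the previous and current images); the estimation itself is not shown.

```python
import numpy as np

def update_annotated_points(points, homography):
    """Map annotated (x, y) points through a 3x3 homography.

    Once camera movement between the previous and current image is
    expressed as a homography H, each annotated coordinate is
    re-projected with H to stay attached to the same scene point.
    """
    pts = np.asarray(points, dtype=float)          # shape (N, 2)
    ones = np.ones((pts.shape[0], 1))
    homog = np.hstack([pts, ones])                 # homogeneous coords, (N, 3)
    mapped = homog @ np.asarray(homography).T      # apply H
    return mapped[:, :2] / mapped[:, 2:3]          # back to Cartesian

# A pure translation by (10, -5) expressed as a homography.
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0, -5.0],
              [0.0, 0.0,  1.0]])
updated = update_annotated_points([(100.0, 200.0)], H)
```

For a translational H as above the update is a simple shift; a full homography also handles rotation, zoom, and perspective change of a planar scene.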
-
Publication number: 20230343082
Abstract: A method for encoding training data for training of a neural network comprises: obtaining training data including multiple datasets, each dataset comprising images annotated with at least one respective object class; forming, for each dataset, an individual background class associated with the object class; encoding the images of the datasets to be associated with their respective individual background class; encoding image patches belonging to annotated object classes to be associated with their respective object class; encoding each of the datasets to include an ignore attribute ("ignore") for object classes that are annotated only in the other datasets and for background classes formed for the other datasets of the multiple datasets, the ignore attribute indicating that the assigned object classes and background classes do not contribute to adapting the neural network in training using the respective dataset; and providing the encoded training data for training of a neural network.
Type: Application
Filed: April 17, 2023
Publication date: October 26, 2023
Applicant: Axis AB
Inventors: Haochen Liu, Jiandan Chen
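The encoding described above can be illustrated with a small sketch. The data layout (a mapping from dataset name to its annotated classes) and the `background_<name>` naming are assumptions made here for illustration; the abstract only specifies the per-dataset background class and the ignore attribute for the other datasets' classes.

```python
def encode_training_datasets(datasets):
    """Per-dataset label encoding as described in the abstract.

    `datasets` maps a dataset name to the set of object classes annotated
    in it. Each dataset gets its own background class, and every object or
    background class belonging to the *other* datasets is tagged "ignore"
    so it does not contribute to the loss when training on that dataset.
    """
    encoded = {}
    for name, classes in datasets.items():
        ignored = set()
        for other, other_classes in datasets.items():
            if other == name:
                continue
            # Object classes annotated only in the other dataset...
            ignored.update(c for c in other_classes if c not in classes)
            # ...and the background class formed for the other dataset.
            ignored.add(f"background_{other}")
        encoded[name] = {
            "classes": sorted(classes),
            "background": f"background_{name}",
            "ignore": sorted(ignored),
        }
    return encoded

enc = encode_training_datasets({"ds_cars": {"car"}, "ds_people": {"person"}})
```

With this encoding, a network trained on `ds_cars` is never penalized for detecting a person in an image where people were simply not annotated.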
-
Publication number: 20220398687
Abstract: A method for dewarping a region of a fisheye view image captured by a fisheye lens, wherein the fisheye view image comprises a first dewarping pole, DP1. The method comprises defining a region of interest, ROI; determining a center, G, of the ROI; defining a temporary annulus sector region such that it comprises the ROI, has its center at DP1, and has a temporary outer arc-shaped edge and a temporary inner arc-shaped edge; defining a second dewarping pole, DP2, at a distance, D, from DP1 along a radial direction extending from G of the ROI through DP1; and setting a dewarping annulus sector region comprising the ROI. The dewarping annulus sector region has its center at DP2 and has its dewarping inner arc-shaped edge maintained at the same radial distance from DP1 as the temporary inner arc-shaped edge.
Type: Application
Filed: June 8, 2022
Publication date: December 15, 2022
Applicant: Axis AB
Inventors: Ludvig PETTERSSON, Jiandan CHEN, Hanna BJÖRGVINSDÓTTIR
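The geometric construction of the second dewarping pole is simple enough to sketch: DP2 lies at distance D from DP1 along the ray from the ROI center G through DP1. The coordinate convention and function name here are illustrative, not from the publication.

```python
import math

def second_dewarping_pole(dp1, g, d):
    """Place DP2 at distance d from DP1 along the radial direction
    extending from the ROI center g through DP1, per the abstract."""
    vx, vy = dp1[0] - g[0], dp1[1] - g[1]   # direction from G toward DP1
    norm = math.hypot(vx, vy)
    return (dp1[0] + d * vx / norm, dp1[1] + d * vy / norm)

# ROI center 100 px below DP1: DP2 moves 50 px further away from G.
dp2 = second_dewarping_pole(dp1=(0.0, 0.0), g=(0.0, -100.0), d=50.0)
```

Moving the pole away from the ROI in this way flattens the annulus sector used for dewarping, reducing distortion across the region of interest.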
-
Publication number: 20220351485
Abstract: A device and method merge a first candidate area relating to a candidate feature in a first image and a second candidate area relating to a candidate feature in a second image. The first and second images have an overlapping region, and at least a portion of the first and second candidate areas are located in the overlapping region. An image overlap size is determined indicating a size of the overlapping region of the first and second images, and a candidate area overlap ratio is determined indicating a ratio of overlap between the first and second candidate areas. A merging threshold is then determined based on the image overlap size, and, on condition that the candidate area overlap ratio is larger than the merging threshold, the first candidate area and the second candidate area are merged, thereby forming a merged candidate area.
Type: Application
Filed: April 11, 2022
Publication date: November 3, 2022
Applicant: Axis AB
Inventors: Jiandan CHEN, Hanna BJÖRGVINSDÓTTIR, Ludvig PETTERSSON
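A minimal sketch of the merging decision: the abstract specifies an IoU-style overlap ratio and a threshold that depends on the image overlap size, but not the dependence itself. The linear threshold rule and the reference size used below are assumptions for illustration only.

```python
def overlap_ratio(a, b):
    """IoU-style overlap ratio of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def maybe_merge(box_a, box_b, image_overlap_px, ref_overlap_px=200.0):
    """Merge the boxes into their union if their overlap ratio exceeds a
    threshold that scales with the image overlap size (an assumed rule)."""
    threshold = 0.5 * min(1.0, image_overlap_px / ref_overlap_px)
    if overlap_ratio(box_a, box_b) > threshold:
        return (min(box_a[0], box_b[0]), min(box_a[1], box_b[1]),
                max(box_a[2], box_b[2]), max(box_a[3], box_b[3]))
    return None

merged = maybe_merge((0, 0, 10, 10), (5, 0, 15, 10), image_overlap_px=50)
kept_apart = maybe_merge((0, 0, 10, 10), (5, 0, 15, 10), image_overlap_px=400)
```

Making the threshold overlap-dependent lets the same detection pair merge when the stitched region is narrow but stay separate when the cameras share a large field of view.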
-
Publication number: 20220254178
Abstract: A method for updating a coordinate of an annotated point in a digital image due to camera movement is performed by an image processing device, which obtains a current digital image of a scene. The current digital image has been captured by a camera subsequent to movement of the camera relative to the scene. The current digital image is associated with at least one annotated point. Each annotated point has a respective coordinate in the current digital image. The method comprises identifying an amount of the movement by comparing position indicative information in the current digital image to position indicative information in a previous digital image of the scene. The previous digital image has been captured prior to movement of the camera. The method comprises updating the coordinate of each annotated point in accordance with the identified amount of movement and a camera homography.
Type: Application
Filed: December 16, 2021
Publication date: August 11, 2022
Applicant: Axis AB
Inventors: Jiandan CHEN, Haiyan XIE
-
Publication number: 20220180105
Abstract: A tag for indicating a region of interest, ROI, has a predefined format and codes ROI information, which is readable by imaging the tag and which includes a size indication and a relative position indication, the respective indications allowing the ROI's size and position to be determined relative to the tag's size and position. A method for locating a ROI in an image comprises: obtaining an image; searching in the image for a tag with the predefined format; and determining a ROI on the basis of the tag's size and position in the image and the ROI information. A method for generating a tag comprises: obtaining an image including a visible provisional tag of same physical size as the tag to be generated; obtaining operator input identifying a ROI in the image; deriving the size indication and relative position indication; and printing the tag.
Type: Application
Filed: November 15, 2021
Publication date: June 9, 2022
Applicant: Axis AB
Inventors: Jiandan CHEN, Haiyan XIE
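The key arithmetic — recovering the ROI from the detected tag's size and position plus the relative indications the tag encodes — can be sketched as below. The exact encoding is not given in the abstract; here `size_ratio` is assumed to be the ROI size in tag widths/heights, and `rel_offset` the ROI-center offset from the tag center in the same units.

```python
def roi_from_tag(tag_box, size_ratio, rel_offset):
    """Derive the ROI from a detected tag.

    tag_box:    (x, y, w, h) of the tag in the image.
    size_ratio: (w_roi / w_tag, h_roi / h_tag), decoded from the tag.
    rel_offset: ROI-center offset from tag center, in tag widths/heights.
    (This parameterization is an illustrative assumption.)
    """
    tx, ty, tw, th = tag_box
    cx = tx + tw / 2 + rel_offset[0] * tw   # ROI center x
    cy = ty + th / 2 + rel_offset[1] * th   # ROI center y
    rw, rh = size_ratio[0] * tw, size_ratio[1] * th
    return (cx - rw / 2, cy - rh / 2, rw, rh)

# A 10x10 px tag encoding an ROI four tags wide, two tall,
# centered two tag-widths to the right of the tag.
roi = roi_from_tag((100, 100, 10, 10), size_ratio=(4, 2), rel_offset=(2, 0))
```

Because everything is expressed relative to the tag, the same printed tag designates the same physical region regardless of the camera's distance or zoom.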
-
Patent number: 10552737
Abstract: Methods and apparatus, including computer program products, implementing and using techniques for configuring an artificial neural network to a particular surveillance situation. A number of object classes characteristic for the surveillance situation are selected. The object classes form a subset of the total number of object classes for which the artificial neural network is trained. A database is accessed that includes activation frequency values for the neurons within the artificial neural network. The activation frequency values are a function of the object class. Those neurons having activation frequency values lower than a threshold value for the subset of selected object classes are removed from the artificial neural network.
Type: Grant
Filed: December 21, 2017
Date of Patent: February 4, 2020
Assignee: Axis AB
Inventors: Robin Seibold, Jiandan Chen, Hanna Björgvinsdóttir, Martin Ljungqvist
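The pruning rule above can be sketched in a few lines. The dictionary layout for the activation-frequency database is an assumption for illustration; the abstract only says such per-class frequencies are stored and compared against a threshold.

```python
def prune_neurons(activation_freq, selected_classes, threshold):
    """Return the indices of neurons to remove from the network.

    A neuron is removed when its activation frequency is below
    `threshold` for *every* object class in the selected subset,
    i.e., it contributes little to the classes that matter for
    this surveillance situation.

    activation_freq: neuron id -> {object class: activation frequency}
    """
    to_remove = []
    for neuron, freqs in activation_freq.items():
        if all(freqs.get(cls, 0.0) < threshold for cls in selected_classes):
            to_remove.append(neuron)
    return to_remove

freqs = {
    0: {"car": 0.90, "cat": 0.10},   # fires for cars: keep
    1: {"car": 0.05, "cat": 0.80},   # fires only for cats: prune
    2: {"car": 0.02, "cat": 0.03},   # rarely fires at all: prune
}
removed = prune_neurons(freqs, selected_classes=["car"], threshold=0.1)
```

Restricting the network to a subset of classes this way shrinks compute and memory on the camera without retraining from scratch.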
-
Patent number: 10147200
Abstract: Methods and apparatus, including computer program products, implementing and using techniques for classifying an object occurring in a sequence of images. The object is tracked through the sequence of images. A set of temporally distributed image crops including the object is generated from the sequence of images. The set of image crops is fed to an artificial neural network trained for classifying an object. The artificial neural network determines a classification result for each image crop. A quality measure of each classification result is determined based on one or more of: a confidence measure of a classification vector output from the artificial neural network, and a resolution of the image crop. The classification result for each image crop is weighted by its quality measure, and an object class for the object is determined by combining the weighted output from the artificial neural network for the set of images.
Type: Grant
Filed: March 21, 2017
Date of Patent: December 4, 2018
Assignee: Axis AB
Inventors: Hanna Bjorgvinsdottir, Jiandan Chen
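The quality-weighted combination can be sketched as follows. Using the max softmax value as the confidence measure and a linear resolution factor is one plausible reading of the abstract, not the patented formula.

```python
import numpy as np

def classify_track(crop_outputs, crop_resolutions, min_res=32 * 32):
    """Combine per-crop classification vectors into one object class.

    Each crop's class-probability vector is weighted by a quality
    measure: here, the network's own confidence (max probability)
    times a resolution factor that saturates at 1.0 (illustrative
    choices; the abstract leaves the quality measure open).
    """
    combined = np.zeros(len(crop_outputs[0]), dtype=float)
    for probs, res in zip(crop_outputs, crop_resolutions):
        probs = np.asarray(probs, dtype=float)
        confidence = probs.max()                  # network's confidence
        res_factor = min(1.0, res / min_res)      # penalize tiny crops
        combined += confidence * res_factor * probs
    return int(np.argmax(combined))

# Three crops of one tracked object: the low-resolution middle crop
# disagrees but is down-weighted, so the track is still class 0.
outputs = [[0.7, 0.3], [0.2, 0.8], [0.6, 0.4]]
resolutions = [64 * 64, 16 * 16, 64 * 64]
cls = classify_track(outputs, resolutions)
```

Weighting by quality lets one sharp, close-up crop outvote several blurry or distant ones instead of a plain majority vote.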
-
Publication number: 20180276845
Abstract: Methods and apparatus, including computer program products, implementing and using techniques for classifying an object occurring in a sequence of images. The object is tracked through the sequence of images. A set of temporally distributed image crops including the object is generated from the sequence of images. The set of image crops is fed to an artificial neural network trained for classifying an object. The artificial neural network determines a classification result for each image crop. A quality measure of each classification result is determined based on one or more of: a confidence measure of a classification vector output from the artificial neural network, and a resolution of the image crop. The classification result for each image crop is weighted by its quality measure, and an object class for the object is determined by combining the weighted output from the artificial neural network for the set of images.
Type: Application
Filed: March 21, 2017
Publication date: September 27, 2018
Inventors: Hanna Bjorgvinsdottir, Jiandan Chen
-
Publication number: 20180181867
Abstract: Methods and apparatus, including computer program products, implementing and using techniques for configuring an artificial neural network to a particular surveillance situation. A number of object classes characteristic for the surveillance situation are selected. The object classes form a subset of the total number of object classes for which the artificial neural network is trained. A database is accessed that includes activation frequency values for the neurons within the artificial neural network. The activation frequency values are a function of the object class. Those neurons having activation frequency values lower than a threshold value for the subset of selected object classes are removed from the artificial neural network.
Type: Application
Filed: December 21, 2017
Publication date: June 28, 2018
Applicant: Axis AB
Inventors: Robin Seibold, Jiandan Chen, Hanna Björgvinsdóttir, Martin Ljungqvist
-
Patent number: 9936217
Abstract: A method and encoder for video encoding a sequence of frames is provided. The method comprises: receiving a sequence of frames depicting a moving object; predicting a movement of the moving object in the sequence of frames between a first time point and a second time point; defining, on the basis of the predicted movement of the moving object, a region of interest (ROI) in the frames which covers the moving object during its entire predicted movement between the first time point and the second time point; and encoding a first frame, corresponding to the first time point, in the ROI and one or more intermediate frames, corresponding to time points intermediate to the first and the second time point, in at least a subset of the ROI using a common encoding quality pattern defining which encoding quality to use in which portion of the ROI.
Type: Grant
Filed: November 25, 2015
Date of Patent: April 3, 2018
Assignee: AXIS AB
Inventors: Jiandan Chen, Markus Skans, Willie Betschart, Mikael Pendse, Alexandre Martins
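The ROI definition step — one region covering the object over its entire predicted movement — amounts to taking the union bounding box of the predicted per-frame positions. A minimal sketch, assuming the motion predictor already produced per-frame boxes:

```python
def roi_for_predicted_movement(predicted_boxes):
    """Union bounding box covering an object over its whole predicted
    movement between two time points.

    predicted_boxes: iterable of (x1, y1, x2, y2) boxes, one per
    predicted position of the object between the two time points.
    The resulting ROI is the region the abstract encodes with a
    common quality pattern across the intermediate frames.
    """
    x1 = min(b[0] for b in predicted_boxes)
    y1 = min(b[1] for b in predicted_boxes)
    x2 = max(b[2] for b in predicted_boxes)
    y2 = max(b[3] for b in predicted_boxes)
    return (x1, y1, x2, y2)

# An object predicted to drift rightward across three frames.
roi = roi_for_predicted_movement([(10, 10, 20, 20),
                                  (30, 12, 40, 22),
                                  (50, 14, 60, 24)])
```

Keeping the quality pattern fixed over this one ROI avoids re-signaling quality changes frame by frame as the object moves, which is cheaper for the encoder.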
-
Patent number: 9830528
Abstract: A method may include determining a value indicative of an average intensity of blocks in an image. The blocks include a primary and outer blocks. Each of the outer blocks may have three, five, or more than five pixels. The image may describe an external pixel lying between the primary and at least one of the outer blocks. The external pixel may not contribute to the value indicative of the average intensity of any of the blocks. The image may also describe a common internal pixel lying within two of the blocks. The common pixel may contribute to the value indicative of the average intensity of the two of the blocks. The method may include comparing the value indicative of the average intensity of the primary block to the values of the outer blocks, and quantifying a feature represented by the image by generating a characteristic number.Type: Grant
Filed: December 9, 2015
Date of Patent: November 28, 2017
Assignee: Axis AB
Inventors: Jiandan Chen, Anders Lloyd, Niclas Danielsson
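The final step — packing the primary-vs-outer comparisons into a characteristic number — resembles a local binary pattern descriptor, which is one way to read the abstract. A sketch of that packing, assuming the block averages have already been computed:

```python
def characteristic_number(primary_mean, outer_means):
    """Compare the primary block's mean intensity with each outer
    block's mean and pack the comparisons into one number: bit i is
    set when outer block i is at least as bright as the primary block
    (an LBP-like encoding, assumed here for illustration)."""
    code = 0
    for i, outer in enumerate(outer_means):
        if outer >= primary_mean:
            code |= 1 << i
    return code

# Primary block mean 100, four outer blocks around it.
code = characteristic_number(100, [90, 120, 100, 80])
```

Because only the ordering of intensities matters, the characteristic number is robust to global brightness changes, which makes it a cheap texture feature for embedded cameras.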
-
Publication number: 20170169306
Abstract: A method may include determining a value indicative of an average intensity of blocks in an image. The blocks include a primary and outer blocks. Each of the outer blocks may have three, five, or more than five pixels. The image may describe an external pixel lying between the primary and at least one of the outer blocks. The external pixel may not contribute to the value indicative of the average intensity of any of the blocks. The image may also describe a common internal pixel lying within two of the blocks. The common pixel may contribute to the value indicative of the average intensity of the two of the blocks. The method may include comparing the value indicative of the average intensity of the primary block to the values of the outer blocks, and quantifying a feature represented by the image by generating a characteristic number.
Type: Application
Filed: December 9, 2015
Publication date: June 15, 2017
Inventors: Jiandan Chen, Anders Lloyd, Niclas Danielsson
-
Patent number: 9536154
Abstract: A method of monitoring a scene by a camera (7) comprises marking a part (14) of the scene with light having a predefined spectral content and a spatial verification pattern. An analysis image is captured of the scene by a sensor sensitive to the predefined spectral content. The analysis image is segmented based on the predefined spectral content, to find a candidate image region. A spatial pattern is detected in the candidate image region, and a characteristic of the detected spatial pattern is compared to a corresponding characteristic of the spatial verification pattern. If the characteristics match, the candidate image region is identified as a verified image region corresponding to the marked part (14) of the scene. Image data representing the scene is obtained, and image data corresponding to the verified image region is processed in a first manner, and remaining image data is processed in a second manner.
Type: Grant
Filed: May 8, 2014
Date of Patent: January 3, 2017
Assignee: Axis AB
Inventors: Markus Skans, Anders Johannesson, Jiandan Chen, Joakim Baltsen
-
Publication number: 20160165257
Abstract: A method and encoder for video encoding a sequence of frames is provided. The method comprises: receiving a sequence of frames depicting a moving object; predicting a movement of the moving object in the sequence of frames between a first time point and a second time point; defining, on the basis of the predicted movement of the moving object, a region of interest (ROI) in the frames which covers the moving object during its entire predicted movement between the first time point and the second time point; and encoding a first frame, corresponding to the first time point, in the ROI and one or more intermediate frames, corresponding to time points intermediate to the first and the second time point, in at least a subset of the ROI using a common encoding quality pattern defining which encoding quality to use in which portion of the ROI.
Type: Application
Filed: November 25, 2015
Publication date: June 9, 2016
Applicant: AXIS AB
Inventors: Jiandan Chen, Markus Skans, Willie Betschart, Mikael Pendse, Alexandre Martins
-
Publication number: 20140334676
Abstract: A method of monitoring a scene by a camera (7) comprises marking a part (14) of the scene with light having a predefined spectral content and a spatial verification pattern. An analysis image is captured of the scene by a sensor sensitive to the predefined spectral content. The analysis image is segmented based on the predefined spectral content, to find a candidate image region. A spatial pattern is detected in the candidate image region, and a characteristic of the detected spatial pattern is compared to a corresponding characteristic of the spatial verification pattern. If the characteristics match, the candidate image region is identified as a verified image region corresponding to the marked part (14) of the scene. Image data representing the scene is obtained, and image data corresponding to the verified image region is processed in a first manner, and remaining image data is processed in a second manner.
Type: Application
Filed: May 8, 2014
Publication date: November 13, 2014
Applicant: Axis AB
Inventors: Markus SKANS, Anders Johannesson, Jiandan Chen, Joakim Baltsen