Patents by Inventor Anders Modén
Anders Modén has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9020204
Abstract: A method and apparatus for determining position and orientation, enabling navigation of an object using image data from at least a first and a second 2D image from at least one camera mounted on said object. The method comprises the steps of: correcting images from one or several cameras, including at least the first and second 2D images, for their respective radial distortion and other measurable effects that result in poor image precision; matching 2D image items in and between at least a first and a second 2D image; calculating a fundamental matrix using correlated image points from at least a first and a second 2D image; calculating and extracting estimated first rotation and translation values from the fundamental matrix using singular value decomposition (SVD) based on information from at least a first and a second 2D image; and iterating more accurate final rotation and translation values using the Levenberg-Marquardt algorithm, thereby determining the position and orientation of said object.
Type: Grant
Filed: October 1, 2010
Date of Patent: April 28, 2015
Assignee: SAAB AB
Inventor: Anders Modén
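The processing chain this abstract outlines (distortion correction, point matching, fundamental-matrix estimation, SVD-based pose extraction, Levenberg-Marquardt refinement) maps onto standard multiple-view-geometry tooling. The sketch below shows that general pipeline with OpenCV and SciPy; it is not the patented implementation, and the camera intrinsics K, distortion coefficients dist, matched point arrays pts1/pts2, and the algebraic residual used for refinement are all illustrative assumptions.

```python
# Minimal sketch of a two-view pose pipeline with off-the-shelf tools,
# not the patented method. K, dist, pts1 and pts2 (Nx2 pixel coordinates
# of matched points) are assumed to be supplied by the caller.
import cv2
import numpy as np
from scipy.optimize import least_squares

def two_view_pose(pts1, pts2, K, dist):
    # 1. Correct the matched points for lens distortion.
    p1 = cv2.undistortPoints(pts1.reshape(-1, 1, 2), K, dist, P=K).reshape(-1, 2)
    p2 = cv2.undistortPoints(pts2.reshape(-1, 1, 2), K, dist, P=K).reshape(-1, 2)

    # 2. Estimate the fundamental matrix from the correlated image points.
    F, mask = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, 1.0, 0.999)
    inl1, inl2 = p1[mask.ravel() == 1], p2[mask.ravel() == 1]

    # 3. Form the essential matrix and extract an initial R, t via SVD
    #    (cv2.recoverPose performs the decomposition and cheirality check).
    E = K.T @ F @ K
    _, R, t, _ = cv2.recoverPose(E, inl1, inl2, K)

    # 4. Refine rotation (Rodrigues vector) and translation direction with
    #    Levenberg-Marquardt on the algebraic epipolar residual x2^T F x1.
    Kinv = np.linalg.inv(K)
    h1 = np.hstack([inl1, np.ones((len(inl1), 1))])
    h2 = np.hstack([inl2, np.ones((len(inl2), 1))])

    def residuals(x):
        Rr, _ = cv2.Rodrigues(x[:3].reshape(3, 1))
        tt = x[3:] / np.linalg.norm(x[3:])      # translation is defined up to scale
        tx = np.array([[0, -tt[2], tt[1]],
                       [tt[2], 0, -tt[0]],
                       [-tt[1], tt[0], 0]])
        Fr = Kinv.T @ (tx @ Rr) @ Kinv
        return np.sum(h2 * (h1 @ Fr.T), axis=1)

    rvec0, _ = cv2.Rodrigues(R)
    x0 = np.hstack([rvec0.ravel(), t.ravel()])
    sol = least_squares(residuals, x0, method="lm")
    R_refined, _ = cv2.Rodrigues(sol.x[:3].reshape(3, 1))
    return R_refined, sol.x[3:] / np.linalg.norm(sol.x[3:])
```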
-
Patent number: 8953847
Abstract: A method and apparatus for determining a position and attitude of at least one camera by calculating and extracting estimated rotation and translation values from an estimated fundamental matrix based on information from at least a first and a second 2D image. Variable substitution is utilized to strengthen derivatives and provide more rapid convergence. A solution is provided for solving position and orientation from correlated point features in images using a method that solves for both rotation and translation simultaneously.
Type: Grant
Filed: October 1, 2010
Date of Patent: February 10, 2015
Assignee: SAAB AB
Inventor: Anders Modén
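For context on the extraction step this abstract refers to, the sketch below shows the textbook SVD decomposition of an essential matrix (obtained from the fundamental matrix as E = K^T F K) into its four candidate rotation/translation pairs; the patent's variable-substitution scheme and its simultaneous solve are not reproduced here.

```python
# Textbook SVD decomposition of an essential matrix into its four candidate
# (R, t) pairs (Hartley & Zisserman). E is assumed to be K^T @ F @ K.
import numpy as np

def decompose_essential(E):
    U, _, Vt = np.linalg.svd(E)
    # Enforce proper rotations (determinant +1) for both orthogonal factors.
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0, -1, 0],
                  [1,  0, 0],
                  [0,  0, 1]], dtype=float)
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    t = U[:, 2]                      # translation direction (up to scale)
    # The physically valid pair is chosen afterwards by a cheirality test:
    # triangulated points must lie in front of both cameras.
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```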
-
Publication number: 20130322698
Abstract: A method and apparatus for determining position and orientation, enabling navigation of an object using image data from at least a first and a second 2D image from at least one camera mounted on said object. The method comprises the steps of: correcting images from one or several cameras, including at least the first and second 2D images, for their respective radial distortion and other measurable effects that result in poor image precision; matching 2D image items in and between at least a first and a second 2D image; calculating a fundamental matrix using correlated image points from at least a first and a second 2D image; calculating and extracting estimated first rotation and translation values from the fundamental matrix using singular value decomposition (SVD) based on information from at least a first and a second 2D image; and iterating more accurate final rotation and translation values using the Levenberg-Marquardt algorithm, thereby determining the position and orientation of said object.
Type: Application
Filed: October 1, 2010
Publication date: December 5, 2013
Applicant: SAAB AB
Inventor: Anders Modén
-
Publication number: 20130272581
Abstract: A method and apparatus for determining a position and attitude of at least one camera by calculating and extracting estimated rotation and translation values from an estimated fundamental matrix based on information from at least a first and a second 2D image. Variable substitution is utilized to strengthen derivatives and provide more rapid convergence. A solution is provided for solving position and orientation from correlated point features in images using a method that solves for both rotation and translation simultaneously.
Type: Application
Filed: October 1, 2010
Publication date: October 17, 2013
Applicant: SAAB AB
Inventor: Anders Modén
-
Publication number: 20130208009
Abstract: A method and apparatus for generating and optimizing a fundamental matrix for a first 2D image and a second 2D image to obtain the relative geometrical information between said two 2D images for points in the two 2D images that correspond to a mutual 3D point. According to the method, the geometrical projection errors in the correspondence points are used to select correct and accurate inliers. This method and apparatus provides a more accurate and precise fundamental matrix than conventional methods.
Type: Application
Filed: October 1, 2010
Publication date: August 15, 2013
Applicant: SAAB AB
Inventor: Anders Modén
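The inlier-selection idea described here, keeping only correspondences whose geometric projection error against a candidate fundamental matrix is small, is commonly approximated with the first-order Sampson distance. The sketch below illustrates that criterion; the Sampson measure and the one-pixel threshold are illustrative stand-ins, not the patented selection procedure.

```python
# Sketch of geometric inlier selection for a candidate fundamental matrix F,
# using the first-order Sampson distance as the projection-error measure.
import numpy as np

def sampson_inliers(F, pts1, pts2, thresh=1.0):
    # Homogeneous pixel coordinates, shape (N, 3).
    h1 = np.hstack([pts1, np.ones((len(pts1), 1))])
    h2 = np.hstack([pts2, np.ones((len(pts2), 1))])

    Fx1 = h1 @ F.T          # rows are F @ x1 (epipolar lines in image 2)
    Ftx2 = h2 @ F           # rows are F^T @ x2 (epipolar lines in image 1)
    x2tFx1 = np.sum(h2 * Fx1, axis=1)

    # Sampson distance: (x2^T F x1)^2 / ((F x1)_1^2 + (F x1)_2^2 + (F^T x2)_1^2 + (F^T x2)_2^2)
    denom = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return (x2tFx1**2 / denom) < thresh
```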
-
Patent number: 7813543
Abstract: The present invention relates to automatic modeling of a physical scene. At least two images (I1, I2) of the scene are received, which are taken from different angles and/or positions. A matching module (130) matches image objects in the first image (I1) against image objects in the second image (I2), by first loading pixel values for at least one first portion of the first image (I1) into an artificial neural network (133). Then, the artificial neural network (133) scans the second image (I2) in search of pixels representing a respective second portion corresponding to each of the at least one first portion; determines a position of the respective second portion upon fulfillment of a match criterion; and produces a representative matching result (M12). Based on the matching result (M12), a first calculation module (140) calculates a fundamental matrix (F12), which defines a relationship between the first and second images (I1, I2).
Type: Grant
Filed: June 8, 2005
Date of Patent: October 12, 2010
Assignee: SAAB AB
Inventor: Anders Modén
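As a rough illustration of the matching flow described here, the toy sketch below feeds pixel values of a patch from the first image through a small embedding network, scans the second image for the best-scoring position, and passes the resulting correspondences to a fundamental-matrix estimator. The two-layer network, its random (untrained) weights, the patch size, and the dot-product match criterion are all placeholders; the patented network, its training, and its match criterion are not reproduced.

```python
# Toy sketch of neural-network patch matching between two grayscale images,
# followed by fundamental-matrix estimation. Purely illustrative: the
# embedding network below is untrained and randomly initialised.
import cv2
import numpy as np

rng = np.random.default_rng(0)
PATCH = 16
W1 = rng.standard_normal((PATCH * PATCH, 64)) * 0.01   # placeholder weights
W2 = rng.standard_normal((64, 32)) * 0.01

def embed(patch):
    """Map a PATCH x PATCH grayscale patch to a unit-length descriptor."""
    x = patch.astype(np.float64).ravel() / 255.0
    d = np.tanh(x @ W1) @ W2
    return d / (np.linalg.norm(d) + 1e-9)

def best_match(img2, ref_desc, step=4):
    """Scan image 2 and return the top-left corner of the best-matching patch."""
    best, best_pos = -np.inf, None
    for y in range(0, img2.shape[0] - PATCH, step):
        for x in range(0, img2.shape[1] - PATCH, step):
            score = ref_desc @ embed(img2[y:y + PATCH, x:x + PATCH])
            if score > best:
                best, best_pos = score, (x, y)
    return best_pos

def match_and_estimate_F(img1, img2, corners1):
    """corners1: (x, y) patch corners in image 1; needs at least 8 matches."""
    pts1, pts2 = [], []
    for (x, y) in corners1:
        m = best_match(img2, embed(img1[y:y + PATCH, x:x + PATCH]))
        if m is not None:
            pts1.append((x, y))
            pts2.append(m)
    F, _ = cv2.findFundamentalMat(np.asarray(pts1, dtype=np.float64),
                                  np.asarray(pts2, dtype=np.float64),
                                  cv2.FM_RANSAC)
    return F
```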
-
Publication number: 20080143715
Abstract: The present invention relates to computer production of images. Three-dimensional graphics data (D3D) is automatically rendered by means of a GPU (330), which is adapted to receive two-dimensional image data. This data contains a number of image points, each of which is associated with color information (r, g, b), transparency information (a), and depth buffer data (Z) that, for each image point, specifies a distance between a projection plane and a point of a reproduced object in the scene. A buffer unit (320) storing the image data is directly accessible by the GPU (330). The GPU (330), in turn, includes a texture module (331), a vertex module (332) and a fragment module (333). The texture module (331) receives the color information (r, g, b) and based thereon generates texture data (T) for at least one synthetic object in the synthetic three-dimensional model (V).
Type: Application
Filed: June 8, 2005
Publication date: June 19, 2008
Applicant: SAAB AB
Inventors: Anders Modén, Lisa Johansson
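To make the data flow concrete, the sketch below composites a captured image, carrying per-pixel color (r, g, b), transparency (a) and depth (Z), with a synthetically rendered layer using a depth test followed by alpha blending, on the CPU with NumPy. The array names and the compositing rule are illustrative assumptions; the publication describes a GPU texture/vertex/fragment pipeline, which is not modeled here.

```python
# CPU-side sketch of depth-tested, alpha-blended compositing of captured
# image points (colour, transparency, depth) with a synthetic rendering.
# Illustrates the per-pixel data flow only, not the described GPU hardware.
import numpy as np

def composite(real_rgb, real_a, real_z, synth_rgb, synth_z):
    """
    real_rgb:  (H, W, 3) colour of the captured image points
    real_a:    (H, W)    transparency of the captured image points
    real_z:    (H, W)    distance from the projection plane per point
    synth_rgb: (H, W, 3) colour rendered for the synthetic object
    synth_z:   (H, W)    depth rendered for the synthetic object
    """
    out = synth_rgb.astype(np.float64)

    # Depth test: the captured point wins where it is closer to the camera.
    closer = real_z < synth_z

    # Alpha-blend the captured colour over the synthetic one where it wins.
    a = real_a[..., None]
    blended = a * real_rgb + (1.0 - a) * synth_rgb
    out[closer] = blended[closer]
    return out
```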
-
Publication number: 20070250465
Abstract: The present invention relates to automatic modeling of a physical scene. At least two images (I1, I2) of the scene are received, which are taken from different angles and/or positions. A matching module (130) matches image objects in the first image (I1) against image objects in the second image (I2), by first loading pixel values for at least one first portion of the first image (I1) into an artificial neural network (133). Then, the artificial neural network (133) scans the second image (I2) in search of pixels representing a respective second portion corresponding to each of the at least one first portion; determines a position of the respective second portion upon fulfillment of a match criterion; and produces a representative matching result (M12). Based on the matching result (M12), a first calculation module (140) calculates a fundamental matrix (F12), which defines a relationship between the first and second images (I1, I2).
Type: Application
Filed: June 8, 2005
Publication date: October 25, 2007
Applicant: SAAB AB
Inventor: Anders Modén