Patents by Inventor Gene Grindstaff
Gene Grindstaff has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20130322697
Abstract: A method and system for estimating the average speed of a moving object based on an attribute of the moving object present within the captured image data is disclosed. A plurality of images of the moving object are recorded using an image sensor that senses the ambient light or other electromagnetic radiation reflected or emitted by the moving object. Each image is captured at a different capture time. The image sensor is preferably located at a fixed or substantially fixed location when imaging the moving object. An area of interest of the moving object is located within the image data of the first image. An attribute of the moving object is then calculated for the first image. For at least a second image, the same attribute is calculated for the same area of interest of the moving object. The attribute calculations for the first and at least second images are then used to determine the average speed of the moving object.
Type: Application
Filed: May 31, 2012
Publication date: December 5, 2013
Applicant: HEXAGON TECHNOLOGY CENTER GMBH
Inventors: Gene A. Grindstaff, Sheila G. Whitaker
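The abstract's attribute-over-time idea can be sketched as follows. The attribute here is the object's pixel position, and the meters-per-pixel scale is an assumed calibration value; the patent itself does not specify either choice.

```python
# Sketch: estimate average speed from an object attribute (here, its image
# position) measured in two or more frames captured at known times.
# The ground-sample distance (meters per pixel) is an assumed calibration value.

def average_speed(observations, meters_per_pixel):
    """observations: list of (capture_time_s, x_px, y_px) tuples, ordered
    by capture time. Returns the average speed in meters per second."""
    if len(observations) < 2:
        raise ValueError("need at least two captures")
    t0, x0, y0 = observations[0]
    t1, x1, y1 = observations[-1]
    dist_px = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    return dist_px * meters_per_pixel / (t1 - t0)

# Object moves 100 px in 2 s at 0.1 m/px -> 5.0 m/s
print(average_speed([(0.0, 0, 0), (2.0, 100, 0)], 0.1))  # → 5.0
```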
-
Patent number: 8462218
Abstract: An apparatus and method for stabilizing image frames in a video data stream. A weighted average or centroid of the intensity or hue associated with pixels vs. the horizontal and vertical position of each pixel is calculated for a reference frame in the video data stream. A corresponding centroid is calculated for a subsequent frame in the stream. This image frame is then translated so that the centroid of the subsequent frame and the centroid of the reference frame coincide, reducing artifacts from shaking of the video capture device. Alternatively, the video stream frames may be divided into tiles and centroids calculated for each tile. The centroids of the tiles of a subsequent frame are curve fit to the centroids of tiles in a reference frame. An affine transform is then performed on the subsequent frame to reduce artifacts in the image from movements of the video capture device.
Type: Grant
Filed: November 16, 2010
Date of Patent: June 11, 2013
Assignee: Intergraph Software Technologies Company
Inventors: Gene A. Grindstaff, Sheila G. Whitaker
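The single-centroid variant described above can be sketched in a few lines, assuming grayscale frames given as 2-D lists of intensities:

```python
def intensity_centroid(frame):
    """Intensity-weighted centroid (cx, cy) of a 2-D list of pixel intensities."""
    total = cx = cy = 0.0
    for y, row in enumerate(frame):
        for x, v in enumerate(row):
            total += v
            cx += x * v
            cy += y * v
    return cx / total, cy / total

def stabilizing_shift(reference, frame):
    """Integer (dx, dy) translation that moves frame's centroid onto the
    reference frame's centroid, cancelling camera shake."""
    rx, ry = intensity_centroid(reference)
    fx, fy = intensity_centroid(frame)
    return round(rx - fx), round(ry - fy)

ref   = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]   # bright pixel at (1, 1)
shook = [[0, 0, 0], [0, 0, 9], [0, 0, 0]]   # same pixel shifted to (2, 1)
print(stabilizing_shift(ref, shook))         # → (-1, 0)
```

Applying the returned shift to the subsequent frame makes the two centroids coincide, as the abstract describes; the tiled/affine variant would fit a transform to many such per-tile centroids instead.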
-
Publication number: 20120213436
Abstract: Embodiments of the present invention relate to processing of digital image data that has been generated by imaging a physical object through a medium. For example, the medium may be the atmosphere, and the atmosphere may have some inherent property, such as haze, fog, or smoke. The medium may also be something other than the atmosphere, such as water or blood. One or more media may obstruct the physical object, with the medium residing at least in front of the physical object, between the physical object and an imaging sensor. The physical object may be one or more physical objects that are part of a scene in a field of view (e.g., a view of a mountain range, a forest, or cars in a parking lot). An estimated transmission vector of the medium is determined based upon digital input image data.
Type: Application
Filed: June 6, 2011
Publication date: August 23, 2012
Applicant: HEXAGON TECHNOLOGY CENTER GMBH
Inventors: Gene A. Grindstaff, Sheila G. Whitaker
-
Publication number: 20120212477
Abstract: A computer-implemented method of processing digital input image data containing haze and having a plurality of color channels including at least a blue channel, to generate output image data having reduced haze, includes receiving, in a first computer-implemented process, digital input image data, and generating, in a second computer-implemented process, digital output image data based on the digital input image data using an estimated transmission vector for the digital input image data. The estimated transmission vector is substantially equal to an inverse blue channel of the digital input image data, and the digital output image data contains less haze than the digital input image data. The method also includes outputting the digital output image data via an output device.
Type: Application
Filed: February 18, 2011
Publication date: August 23, 2012
Applicant: INTERGRAPH TECHNOLOGIES COMPANY
Inventors: Gene A. Grindstaff, Sheila G. Whitaker
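The recovery step can be illustrated with the standard haze formation model (observed = scene × t + airlight × (1 − t)), taking the transmission t as the inverse of the normalized blue channel as the abstract describes. The airlight value and the lower bound on t are illustrative assumptions, not details from the patent.

```python
# Per-pixel sketch of haze removal with a transmission estimated as the
# inverse blue channel. airlight and t_min are assumed parameters.

def dehaze_pixel(r, g, b, airlight=255.0, t_min=0.1):
    t = max(1.0 - b / 255.0, t_min)   # estimated transmission: inverse blue channel
    def recover(c):
        # invert observed = scene*t + airlight*(1 - t), clamped to valid range
        return min(max((c - airlight * (1.0 - t)) / t, 0.0), 255.0)
    return recover(r), recover(g), recover(b)

# With no blue component the estimated transmission is 1.0 (no haze),
# so the pixel passes through unchanged.
print(dehaze_pixel(100.0, 100.0, 0.0))  # → (100.0, 100.0, 0.0)
```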
-
Patent number: 8228378
Abstract: An apparatus and method for preparing a composite image from a video data stream and for identifying changed features in two composite images. Frames from the video data stream are transformed to a projected 2D image, aligned with adjacent frames, and overlapping areas are averaged to provide a higher apparent resolution. The composite image can be stored in real-time. As a second composite image is prepared of the same location at a later time, portions of the second image can be compared to corresponding portions of the stored image after the intensities of the images are equalized. Image areas whose absolute difference exceeds a threshold are again intensity equalized. Areas that are again above threshold can be flagged for further scrutiny, either by a human or by a machine that performs object recognition. In this way, composite video images of a scene can be prepared and compared in real-time.
Type: Grant
Filed: April 13, 2011
Date of Patent: July 24, 2012
Assignee: Hexagon Technology Center GmbH
Inventors: Gene A. Grindstaff, Sheila G. Whitaker
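The equalize/difference/threshold comparison might look like the sketch below on flattened grayscale images. Equalizing by mean scaling and the specific threshold are assumptions for illustration, not details taken from the patent.

```python
def equalize(img, mean_ref):
    """Scale image img (flat list of intensities) so its mean matches mean_ref."""
    m = sum(img) / len(img)
    return [v * mean_ref / m for v in img]

def changed(ref, new, threshold):
    """Flag pixels whose absolute difference still exceeds the threshold
    after the two images' mean intensities are equalized."""
    new_eq = equalize(new, sum(ref) / len(ref))
    return [abs(r - n) > threshold for r, n in zip(ref, new_eq)]

ref = [100, 100, 100, 100]
new = [ 90,  90,  90, 180]   # globally darker, plus one genuine change
print(changed(ref, new, 30))  # → [False, False, False, True]
```

The global intensity shift is removed by equalization, so only the genuinely changed area exceeds the threshold and gets flagged for scrutiny.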
-
Patent number: 8189690
Abstract: A system to parse both telemetry data and corresponding encoded video data wherein the telemetry and video data are subsequently synchronized based upon temporal information, such as a time stamp. The telemetry data and the video data are originally unsynchronized and the data for each is acquired by a separate device. The acquiring devices may be located within or attached to an aerial vehicle. The system receives the telemetry data and the encoded video data and outputs a series of synchronized video images with telemetry data. Thus, telemetry information is associated with each video image. The telemetry data may be acquired at a different rate than the video data. As a result, telemetry data may be interpolated or extrapolated to correspond to each video image. The present system operates in real-time, thus data acquired from aerial vehicles can be displayed on a map.
Type: Grant
Filed: October 15, 2010
Date of Patent: May 29, 2012
Assignee: Intergraph Technologies Company
Inventors: Sheila G. Whitaker, Gene A. Grindstaff, Roger K. Shelton, William D. Howell
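The interpolation/extrapolation step can be sketched as linear interpolation of a telemetry channel at each frame's timestamp. The 1 Hz sample rate and the altitude channel are illustrative choices, not taken from the patent.

```python
def interpolate_telemetry(samples, t):
    """Linearly interpolate telemetry (time, value) samples at time t;
    extrapolates from the nearest pair when t falls outside the samples."""
    samples = sorted(samples)
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    # t is outside the sampled range: extrapolate from the first or last pair
    (t0, v0), (t1, v1) = samples[:2] if t < samples[0][0] else samples[-2:]
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# Telemetry at 1 Hz, video frame at t = 0.5 s -> value halfway between samples
print(interpolate_telemetry([(0.0, 100.0), (1.0, 110.0)], 0.5))  # → 105.0
```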
-
Patent number: 8160394
Abstract: A method of transforming an input image from a hemispherical source to an output image in rectilinear coordinates is disclosed. The method includes receiving data representative of an input image originating from a hemispherical camera and having a size defined by an input image height and an input image width. Once the data is received, an output image width and output image height representative of a size of a rectilinear output image is calculated based on the size of the input image. A rectilinear output image is then generated by mapping input image pixels to locations within the width and height of the output image, without reference to the optical characteristics of the hemispherical camera.
Type: Grant
Filed: May 11, 2006
Date of Patent: April 17, 2012
Assignee: Intergraph Software Technologies Company
Inventors: Elaine S. Acree, Sheila G. Whitaker, Gene A. Grindstaff
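One common way to map output pixels back to a circular hemispherical image using only the image sizes is a polar unwrap. The sketch below illustrates that idea; it is an assumed mapping for illustration, not necessarily the patented one.

```python
import math

def unwrap_coords(u, v, out_w, out_h, in_w, in_h):
    """Map an output (rectilinear) pixel (u, v) back to its source pixel in a
    circular hemispherical input image, using only the image sizes."""
    cx, cy = in_w / 2.0, in_h / 2.0
    r = (v / out_h) * min(cx, cy)          # output row -> radius from the center
    theta = (u / out_w) * 2.0 * math.pi    # output column -> angle around the center
    return cx + r * math.cos(theta), cy + r * math.sin(theta)

# The top output row samples the fisheye center; columns sweep the full circle.
print(unwrap_coords(0, 0, 360, 100, 200, 200))  # → (100.0, 100.0)
```

Sampling the input at the returned coordinates for every output pixel produces the unwrapped rectilinear image, with no lens model required, consistent with the abstract's "without reference to the optical characteristics" framing.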
-
Publication number: 20110188762
Abstract: An apparatus and method for preparing a composite image from a video data stream and for identifying changed features in two composite images. Frames from the video data stream are transformed to a projected 2D image, aligned with adjacent frames, and overlapping areas are averaged to provide a higher apparent resolution. The composite image can be stored in real-time. As a second composite image is prepared of the same location at a later time, portions of the second image can be compared to corresponding portions of the stored image after the intensities of the images are equalized. Image areas whose absolute difference exceeds a threshold are again intensity equalized. Areas that are again above threshold can be flagged for further scrutiny, either by a human or by a machine that performs object recognition. In this way, composite video images of a scene can be prepared and compared in real-time.
Type: Application
Filed: April 13, 2011
Publication date: August 4, 2011
Applicant: HEXAGON TECHNOLOGY CENTER GMBH
Inventors: Gene A. Grindstaff, Sheila G. Whitaker
-
Patent number: 7961216
Abstract: An apparatus and method for preparing a composite image from a video data stream and for identifying changed features in two composite images. Frames from the video data stream are transformed to a projected 2D image, aligned with adjacent frames, and overlapping areas are averaged to provide a higher apparent resolution. The composite image can be stored in real-time. As a second composite image is prepared of the same location at a later time, portions of the second image can be compared to corresponding portions of the stored image after the intensities of the images are equalized. Image areas whose absolute difference exceeds a threshold are again intensity equalized. Areas that are again above threshold can be flagged for further scrutiny, either by a human or by a machine that performs object recognition. In this way, composite video images of a scene can be prepared and compared in real-time.
Type: Grant
Filed: August 3, 2005
Date of Patent: June 14, 2011
Assignee: Intergraph Software Technologies Company
Inventors: Gene A. Grindstaff, Sheila G. Whitaker
-
Publication number: 20110090399
Abstract: A system is provided that can parse both telemetry data and corresponding encoded video data wherein the telemetry and video data are subsequently synchronized based upon temporal information, such as a time stamp. The telemetry data and the video data are originally unsynchronized and the data for each is acquired by a separate device. The acquiring devices may be located within or attached to an aerial vehicle. The system receives the telemetry data stream or file and the encoded video data stream or file and outputs a series of synchronized video images with telemetry data. Thus, there is telemetry information associated with each video image. The telemetry data may be acquired at a different rate than the video data. As a result, telemetry data may be interpolated or extrapolated to create telemetry data that corresponds to each video image. The present system operates in real-time, so that data acquired from aerial vehicles can be displayed on a map.
Type: Application
Filed: October 15, 2010
Publication date: April 21, 2011
Applicant: INTERGRAPH TECHNOLOGIES COMPANY
Inventors: Sheila G. Whitaker, Gene A. Grindstaff, Roger K. Shelton, William D. Howell
-
Publication number: 20110058049
Abstract: An apparatus and method for stabilizing image frames in a video data stream. A weighted average or centroid of the intensity or hue associated with pixels vs. the horizontal and vertical position of each pixel is calculated for a reference frame in the video data stream. A corresponding centroid is calculated for a subsequent frame in the stream. This image frame is then translated so that the centroid of the subsequent frame and the centroid of the reference frame coincide, reducing artifacts from shaking of the video capture device. Alternatively, the video stream frames may be divided into tiles and centroids calculated for each tile. The centroids of the tiles of a subsequent frame are curve fit to the centroids of tiles in a reference frame. An affine transform is then performed on the subsequent frame to reduce artifacts in the image from movements of the video capture device.
Type: Application
Filed: November 16, 2010
Publication date: March 10, 2011
Applicant: INTERGRAPH TECHNOLOGIES COMPANY
Inventors: Gene A. Grindstaff, Sheila G. Whitaker
-
Patent number: 7859569
Abstract: An apparatus and method for stabilizing image frames in a video data stream. A weighted average or centroid of the intensity or hue associated with pixels vs. the horizontal and vertical position of each pixel is calculated for a reference frame in the video data stream. A corresponding centroid is calculated for a subsequent frame in the stream. This image frame is then translated so that the centroid of the subsequent frame and the centroid of the reference frame coincide, reducing artifacts from shaking of the video capture device. Alternatively, the video stream frames may be divided into tiles and centroids calculated for each tile. The centroids of the tiles of a subsequent frame are curve fit to the centroids of tiles in a reference frame. An affine transform is then performed on the subsequent frame to reduce artifacts in the image from movements of the video capture device.
Type: Grant
Filed: August 22, 2005
Date of Patent: December 28, 2010
Assignee: Intergraph Technologies Company
Inventors: Gene A. Grindstaff, Sheila G. Whitaker
-
Patent number: 7668402
Abstract: A method for generating a composite image includes receiving a separate image into a computer system, comparing the separate image to the composite image so as to generate a mismatch value, and modifying at least one of the composite image and the separate image to reduce the mismatch value.
Type: Grant
Filed: November 5, 2004
Date of Patent: February 23, 2010
Assignee: Intergraph Technologies Company
Inventors: Gene A. Grindstaff, Sheila G. Whitaker
-
Publication number: 20080095437
Abstract: A method for demultiplexing time-division multiplexed digital video data which originates from multiple sources in which the time-division multiplexed images are not indexed, nor is there identification information provided to differentiate sources. The sources are generally cameras, which may be stationary cameras or moving cameras that rotate, as are commonly used in the surveillance industry. A first set of digital video data representative of a first image is retrieved from a memory source or from a video tape. The first set of digital video data is stored to a memory location associated with a first source. The first set of video data is also identified as representative video data of the first source. A second set of digital video data representative of a current image is then retrieved. A difference ratio is calculated using the representative digital video data and the current set of digital video data.
Type: Application
Filed: December 17, 2007
Publication date: April 24, 2008
Applicant: INTERGRAPH SOFTWARE TECHNOLOGIES COMPANY
Inventors: Gene Grindstaff, Susan Fletcher, Therman McKay
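The difference-ratio test might be sketched as below: compare each incoming frame against every known source's representative image and assign it to the closest match, or treat it as a new source when nothing matches. The pixel tolerance and new-source cutoff are assumed parameters.

```python
def difference_ratio(rep, cur, tol=10):
    """Fraction of pixels differing by more than tol between a source's
    representative image and the current image (flat intensity lists)."""
    diff = sum(1 for a, b in zip(rep, cur) if abs(a - b) > tol)
    return diff / len(rep)

def assign_source(representatives, cur, max_ratio=0.5):
    """Index of the known source whose representative best matches cur,
    or None when every ratio exceeds max_ratio (i.e. a new source)."""
    ratios = [difference_ratio(rep, cur) for rep in representatives]
    best = min(range(len(ratios)), key=ratios.__getitem__)
    return best if ratios[best] <= max_ratio else None

cam_a = [10, 10, 10, 10]
cam_b = [200, 200, 200, 200]
frame = [12, 9, 11, 10]          # clearly from camera A
print(assign_source([cam_a, cam_b], frame))  # → 0
```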
-
Publication number: 20070263093
Abstract: A method of transforming an input image from a hemispherical source to an output image in rectilinear coordinates is disclosed. The method includes receiving data representative of an input image originating from a hemispherical camera and having a size defined by an input image height and an input image width. Once the data is received, an output image width and output image height representative of a size of a rectilinear output image is calculated based on the size of the input image. A rectilinear output image is then generated by mapping input image pixels to locations within the width and height of the output image, without reference to the optical characteristics of the hemispherical camera.
Type: Application
Filed: May 11, 2006
Publication date: November 15, 2007
Inventors: Elaine Acree, Sheila Whitaker, Gene Grindstaff
-
Publication number: 20060215926
Abstract: A method for viewing a first object that is obstructed by a second object. In such a method, the first object has a contrasting color to the second object, and the second object is constructed from a material that allows visible light to pass therethrough. The amount of visible light that passes through the second object is not enough for the first object to be visible to the human eye. The method involves taking a digital image of the first and second objects using a visible light sensor, such as a CCD camera sensor. The digital image data that is received into a computer system contains both first object data and second object data. It should be understood that the first object data and the second object data include color information. The amount of contrast between the first and the second object should be approximately 10% of the total scale, such that on a 256-level scale the difference is approximately 25 levels.
Type: Application
Filed: May 26, 2006
Publication date: September 28, 2006
Inventors: Gene Grindstaff, Susan Fletcher, Therman McKay
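A faint contrast of roughly 25 levels on a 256-level scale can be made visible with a simple linear stretch of that narrow intensity band to the full range. This is an illustrative sketch of the idea, not the patented processing.

```python
def stretch(pixels, lo, hi):
    """Linearly stretch the [lo, hi] intensity band to the full 0-255 range,
    amplifying a faint (~25-level) contrast into a clearly visible one."""
    out = []
    for p in pixels:
        p = min(max(p, lo), hi)   # clamp to the band of interest
        out.append(round((p - lo) * 255.0 / (hi - lo)))
    return out

# Obstructed detail sits only ~25 levels below the obstructing material.
print(stretch([200, 175, 200, 175], 175, 200))  # → [255, 0, 255, 0]
```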
-
Publication number: 20060097139
Abstract: A method for generating a composite image includes receiving a separate image into a computer system, comparing the separate image to the composite image so as to generate a mismatch value, and modifying at least one of the composite image and the separate image to reduce the mismatch value.
Type: Application
Filed: November 5, 2004
Publication date: May 11, 2006
Applicant: Intergraph Hardware Technologies
Inventors: Gene Grindstaff, Sheila Whitaker
-
Publication number: 20060061661
Abstract: An apparatus and method for stabilizing image frames in a video data stream. A weighted average or centroid of the intensity or hue associated with pixels vs. the horizontal and vertical position of each pixel is calculated for a reference frame in the video data stream. A corresponding centroid is calculated for a subsequent frame in the stream. This image frame is then translated so that the centroid of the subsequent frame and the centroid of the reference frame coincide, reducing artifacts from shaking of the video capture device. Alternatively, the video stream frames may be divided into tiles and centroids calculated for each tile. The centroids of the tiles of a subsequent frame are curve fit to the centroids of tiles in a reference frame. An affine transform is then performed on the subsequent frame to reduce artifacts in the image from movements of the video capture device.
Type: Application
Filed: August 22, 2005
Publication date: March 23, 2006
Inventors: Gene Grindstaff, Sheila Whitaker
-
Publication number: 20060028549
Abstract: An apparatus and method for preparing a composite image from a video data stream and for identifying changed features in two composite images. Frames from the video data stream are transformed to a projected 2D image, aligned with adjacent frames, and overlapping areas are averaged to provide a higher apparent resolution. The composite image can be stored in real-time. As a second composite image is prepared of the same location at a later time, portions of the second image can be compared to corresponding portions of the stored image after the intensities of the images are equalized. Image areas whose absolute difference exceeds a threshold are again intensity equalized. Areas that are again above threshold can be flagged for further scrutiny, either by a human or by a machine that performs object recognition. In this way, composite video images of a scene can be prepared and compared in real-time.
Type: Application
Filed: August 3, 2005
Publication date: February 9, 2006
Inventors: Gene Grindstaff, Sheila Whitaker
-
Publication number: 20050285947
Abstract: In a first embodiment of the invention, there is provided a method for structuring digital video images in a computer system. The digital video images are capable of being displayed on a display device and contain addressable digital data that is addressable with respect to a reference point on the display device. The method may be embodied in computer code on a computer-readable medium which is executed by a processor within the computer system. The computer code removes motion from a digital video image stream. By removing motion from the digital image stream, additional information and details can be observed which are spread out over multiple images when the images are displayed in sequence. The method begins by obtaining a first digital video image and a second digital video image. A subsection is defined within the first digital image at an addressable location relative to the reference point.
Type: Application
Filed: June 21, 2004
Publication date: December 29, 2005
Inventors: Gene Grindstaff, Sheila Whitaker, Susan Fletcher
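The abstract does not say how the images are combined once the subsections are aligned; a per-pixel temporal median over the stack of frames is one common way to suppress moving objects and serves as a sketch of the effect described.

```python
import statistics

def remove_motion(frames):
    """Per-pixel temporal median over a stack of aligned frames (flat
    intensity lists): transient moving objects are rejected, while the
    static scene content remains."""
    return [statistics.median(pix) for pix in zip(*frames)]

static = [50, 60, 70]
frames = [static[:], [50, 255, 70], static[:]]  # object crosses pixel 1 in frame 2
print(remove_motion(frames))  # → [50, 60, 70]
```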