Patents by Inventor Michael Bazakos
Michael Bazakos has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10599926
Abstract: Pixel color values representing an image of a portion of a field are received where each pixel color value has a respective position within the image. A processor identifies groups of the received pixel color values as possibly representing a Nitrogen-deficient plant leaf. For each group of pixel color values, the processor converts the pixel color values into feature values that describe a shape and the processor uses the feature values describing the shape to determine whether the group of pixel color values represents a Nitrogen-deficient leaf of a plant. The processor stores in memory an indication that the portion of the field is deficient in Nitrogen based on the groups of pixel color values determined to represent a respective Nitrogen-deficient leaf.
Type: Grant
Filed: December 15, 2016
Date of Patent: March 24, 2020
Assignee: Regents of the University of Minnesota
Inventors: Nikolaos Papanikolopoulos, Vassilios Morellas, Dimitris Zermas, David Mulla, Michael Bazakos, Daniel Kaiser
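The pipeline in this abstract (group pixels by color, describe the group's shape, classify) can be sketched as a toy illustration. This is not the patented method: the color test, shape features, and thresholds below are all assumptions, and all yellowish pixels are treated as a single group for simplicity.

```python
# Hypothetical sketch: group pixels by color, derive shape features,
# and flag leaf-shaped groups. Values are illustrative, not from the patent.

def is_yellowish(rgb):
    """Crude color test for the pale yellow of nitrogen-stressed leaves."""
    r, g, b = rgb
    return r > 150 and g > 150 and b < 100

def shape_features(positions):
    """Describe a pixel group by its area and bounding-box aspect ratio."""
    xs = [p[0] for p in positions]
    ys = [p[1] for p in positions]
    width = max(xs) - min(xs) + 1
    height = max(ys) - min(ys) + 1
    return {"area": len(positions), "aspect": width / height}

def looks_like_leaf(features, min_area=4, max_aspect=3.0):
    """Accept groups large enough and compact enough to be a leaf."""
    return features["area"] >= min_area and features["aspect"] <= max_aspect

def field_is_deficient(pixels):
    """pixels: dict mapping (x, y) -> (r, g, b). True if a leaf-shaped
    yellowish group is found in the image patch."""
    candidates = [pos for pos, rgb in pixels.items() if is_yellowish(rgb)]
    if not candidates:
        return False
    return looks_like_leaf(shape_features(candidates))
```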
-
Publication number: 20170177938
Abstract: Pixel color values representing an image of a portion of a field are received where each pixel color value has a respective position within the image. A processor identifies groups of the received pixel color values as possibly representing a Nitrogen-deficient plant leaf. For each group of pixel color values, the processor converts the pixel color values into feature values that describe a shape and the processor uses the feature values describing the shape to determine whether the group of pixel color values represents a Nitrogen-deficient leaf of a plant. The processor stores in memory an indication that the portion of the field is deficient in Nitrogen based on the groups of pixel color values determined to represent a respective Nitrogen-deficient leaf.
Type: Application
Filed: December 15, 2016
Publication date: June 22, 2017
Inventors: Nikolaos Papanikolopoulos, Vassilios Morellas, Dimitris Zermas, David Mulla, Michael Bazakos, Daniel Kaiser
-
Publication number: 20070177819
Abstract: A multi-spectral imaging surveillance system and method in which a plurality of imaging cameras is associated with a data-processing apparatus. A module can be provided, which resides in a memory of said data-processing apparatus. The module performs fusion of a plurality of images respectively generated by different imaging cameras among said plurality of imaging cameras. Fusion of the images is based on a plurality of parameters indicative of environmental conditions in order to achieve enhanced imaging surveillance. The final fused images are the result of two parts: an image fusion portion and a knowledge representation portion. For the final fusion, many operators can be utilized, which can be applied between the image fusion result and the knowledge representation portion.
Type: Application
Filed: February 1, 2006
Publication date: August 2, 2007
Inventors: Yunqian Ma, Michael Bazakos
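One way to picture the two-part fusion the abstract describes is condition-weighted image fusion followed by an operator applied between the fused image and a knowledge-representation map. The weighting scheme and the choice of a per-pixel max operator below are illustrative assumptions; the abstract only says that many operators can be used at that stage.

```python
# Toy sketch of condition-weighted fusion plus a final operator applied
# between the fusion result and a knowledge map. Not the patented method.

def fuse(visible, thermal, visibility):
    """Per-pixel weighted average of two co-registered images;
    favor the thermal image as visibility (0..1) drops."""
    w = visibility
    return [[w * v + (1 - w) * t for v, t in zip(rv, rt)]
            for rv, rt in zip(visible, thermal)]

def apply_knowledge(fused, knowledge):
    """Combine the fusion result with a knowledge-representation map.
    A per-pixel max is used here purely as one example operator."""
    return [[max(f, k) for f, k in zip(rf, rk)]
            for rf, rk in zip(fused, knowledge)]
```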
-
Publication number: 20070177792
Abstract: In an embodiment, one or more sequences of learning video data are provided. The learning video sequences include an action. One or more features of the action are extracted from the one or more sequences of learning video data. Thereafter, a reception of a sequence of operational video data is enabled, and an extraction of the one or more features of the action from the sequence of operational video data is enabled. A comparison is then enabled between the extracted one or more features of the action from the one or more sequences of learning video data and the one or more features of the action from the sequence of operational video data. In an embodiment, this comparison allows the determination of whether the action is present in the operational video data.
Type: Application
Filed: January 31, 2006
Publication date: August 2, 2007
Inventors: Yunqian Ma, Michael Bazakos
-
Publication number: 20070122040
Abstract: An image is processed by a sensed-feature-based classifier to generate a list of objects assigned to classes. The most prominent objects (those objects whose classification is most likely reliable) are selected for range estimation and interpolation. Based on the range estimation and interpolation, the sensed features are converted to physical features for each object. Next, that subset of objects is then run through a physical-feature-based classifier that re-classifies the objects. Next, the objects and their range estimates are re-run through the processes of range estimation and interpolation, sensed-feature-to-physical-feature conversion, and physical-feature-based classification iteratively to continuously increase the reliability of the classification as well as the range estimation. The iterations are halted when the reliability reaches a predetermined confidence threshold.
Type: Application
Filed: November 30, 2005
Publication date: May 31, 2007
Applicant: Honeywell International Inc.
Inventors: Kwong Au, Michael Bazakos, Yunqian Ma
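The iterative loop described above (classify, estimate range, convert sensed to physical features, re-classify, stop at a confidence threshold) can be written schematically. The classifier, range estimator, and conversion function below are caller-supplied stand-ins, not the patent's components.

```python
# Schematic of the refine-until-confident loop from the abstract.
# All three callables are illustrative stand-ins supplied by the caller.

def iterate_classification(obj, classify, estimate_range, to_physical,
                           confidence_threshold=0.9, max_iters=10):
    """Re-classify an object on progressively refined physical features
    until the classifier's confidence reaches the threshold."""
    label, confidence = classify(obj)
    for _ in range(max_iters):
        if confidence >= confidence_threshold:
            break
        # Convert sensed features to physical features using range data.
        obj = to_physical(obj, estimate_range(obj))
        label, confidence = classify(obj)
    return label, confidence
```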
-
Publication number: 20070121999
Abstract: A system and method detects the intent and/or motivation of two or more persons or other animate objects in a video scene. In one embodiment, the system forms a blob of the two or more persons, draws a bounding box around said blob, calculates an entropy value for said blob, and compares that entropy value to a threshold to determine if the two or more persons are involved in a fight or other altercation.
Type: Application
Filed: November 28, 2005
Publication date: May 31, 2007
Inventors: Yunqian Ma, Michael Bazakos
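The entropy test can be illustrated with a small sketch: compute the Shannon entropy of a histogram of motion directions inside the blob's bounding box and compare it against a threshold, the idea being that a fight produces disordered motion. The motion-direction feature and the threshold value are assumptions for illustration, not taken from the patent.

```python
import math

# Toy version of the entropy-vs-threshold test described in the abstract.

def entropy(histogram):
    """Shannon entropy (bits) of a histogram of counts."""
    total = sum(histogram)
    h = 0.0
    for count in histogram:
        if count:
            p = count / total
            h -= p * math.log2(p)
    return h

def possible_altercation(direction_histogram, threshold=1.5):
    """High entropy over motion directions = disordered motion,
    flagged as a possible fight. Threshold is illustrative."""
    return entropy(direction_histogram) > threshold
```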
-
Publication number: 20070092245
Abstract: Facial detection and tracking systems and methods within a wide field of view are disclosed. A facial detection and tracking system in accordance with an illustrative embodiment of the present invention can include a wide field of view camera for detecting and tracking one or more objects within a wider field of view, and at least one narrower field of view camera for obtaining a higher-resolution image of each object located within a subset space of the wider field of view. The wide field of view camera can be operatively coupled to a computer or other such device that determines the subset space location of the individual within the wider field of view, and then tasks one or more of the narrower field of view cameras covering the subset space location to obtain a high-resolution image of the object.
Type: Application
Filed: October 20, 2005
Publication date: April 26, 2007
Applicant: Honeywell International Inc.
Inventors: Michael Bazakos, Vassilios Morellas
-
Publication number: 20060285723
Abstract: A system for tracking objects across an area having a network of cameras with overlapping and non-overlapping fields of view. The system may use a combination of color, shape, texture and/or multi-resolution histograms for object representation or target modeling for the tracking of an object from one camera to another. The system may include user and output interfacing.
Type: Application
Filed: June 16, 2005
Publication date: December 21, 2006
Inventors: Vassilios Morellas, Michael Bazakos, Yunqian Ma, Andrew Johnson
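Histogram-based target hand-off between cameras can be pictured with a small sketch: model each track as a normalized color histogram and match candidates in the next camera by histogram similarity. The Bhattacharyya coefficient and the match threshold below are common choices used here for illustration; the abstract does not specify the similarity measure.

```python
import math

# Illustrative histogram matching for camera-to-camera target hand-off.

def normalize(hist):
    """Scale a histogram of counts so its bins sum to 1."""
    total = sum(hist)
    return [h / total for h in hist]

def bhattacharyya(h1, h2):
    """Similarity of two histograms: 1.0 for identical normalized
    histograms, 0.0 for histograms with disjoint support."""
    return sum(math.sqrt(a * b)
               for a, b in zip(normalize(h1), normalize(h2)))

def same_target(model, candidate, threshold=0.8):
    """Decide whether a candidate track matches the stored target model.
    The 0.8 threshold is an illustrative assumption."""
    return bhattacharyya(model, candidate) >= threshold
```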
-
Publication number: 20060233436
Abstract: Systems and methods of establishing 3D coordinates from 2D image domain data acquired from an image sensor are disclosed. An illustrative method may include the steps of acquiring at least one image frame from the image sensor, selecting at least one polygon defining a region of interest within the image frame, measuring the distance from an origin of the image sensor to a number of reference points on the polygon, determining the distance between the selected reference points, and then determining 3D reference coordinates for one or more points on the polygon using a data fusion technique in which 2D image data from the image sensor is geometrically converted to 3D coordinates based at least in part on measured values of the reference points. An interpolation technique can be used to determine the 3D coordinates for all of the pixels within the polygon.
Type: Application
Filed: August 26, 2005
Publication date: October 19, 2006
Inventors: Yunqian Ma, Michael Bazakos
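The interpolation step mentioned at the end of the abstract can be sketched for the simplest case: given measured ranges at the four corners of a rectangular region of interest, estimate the range at any interior pixel by bilinear interpolation. This assumes a planar ground patch and is only a toy version of the patent's more general geometric data fusion.

```python
# Bilinear range interpolation over a rectangular region of interest,
# assuming a planar patch. An illustrative sketch, not the patented method.

def interpolate_range(corners, u, v):
    """corners: measured ranges at (top-left, top-right, bottom-left,
    bottom-right); u, v in [0, 1] are normalized pixel coordinates
    inside the region. Returns the interpolated range at (u, v)."""
    tl, tr, bl, br = corners
    top = tl + (tr - tl) * u        # interpolate along the top edge
    bottom = bl + (br - bl) * u     # interpolate along the bottom edge
    return top + (bottom - top) * v  # blend between the two edges
```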
-
Publication number: 20060233461
Abstract: Systems and methods for transforming two-dimensional image data into a 3D dense range map are disclosed. An illustrative method may include the steps of acquiring at least one image frame from an image sensor, selecting at least one region of interest within the image frame, determining the geo-location of three or more reference points within each selected region of interest, and transforming 2D image domain data from each selected region of interest into a 3D dense range map containing physical features of one or more objects within the image frame. The 3D dense range map can be used to calculate physical feature vectors of objects disposed within each defined region of interest. An illustrative video surveillance system may include an image sensor adapted to acquire images from at least one region of interest, a graphical user interface for displaying images acquired from the image sensor within an image frame, and a processor for determining the geo-location of one or more objects within the image frame.
Type: Application
Filed: April 19, 2005
Publication date: October 19, 2006
Applicant: Honeywell International Inc.
Inventors: Yunqian Ma, Michael Bazakos, KwongWing Au
-
Publication number: 20060116555
Abstract: Thermal infrared image data of at least a region of a face of a person in an enclosure is provided. The enclosure, for example, may include a first enclosed volume and a second enclosed volume physically separated from the first enclosed volume. The first enclosed volume may include an entrance door sized to allow a person to enter the first enclosed volume. The enclosure provides a controlled environment for performing measurements (e.g., capturing thermal infrared image data) for use in determining a physiological state of a person (e.g., anxiety, deception, etc.).
Type: Application
Filed: November 18, 2004
Publication date: June 1, 2006
Applicant: Honeywell International Inc.
Inventors: Ioannis Pavlidis, Michael Bazakos, Vassilios Morellas
-
Publication number: 20060104488
Abstract: A face detection and recognition system having several arrays imaging a scene at different bands of the infrared spectrum. The system may use weighted subtracting and thresholding to distinguish human skin in a sensed image. A feature selector may locate a face in the image. The face may be framed or the image cropped with a frame or border to incorporate essentially only the face. The border may be superimposed on an image direct from an imaging array. A sub-image containing the face may be extracted from within the border and compared with a database of face information to attain recognition of the face. A level of recognition of the face may be established. Infrared lighting may be used as needed to illuminate the scene.
Type: Application
Filed: November 12, 2004
Publication date: May 18, 2006
Inventors: Michael Bazakos, Vassilios Morellas, Andrew Johnson, Yunqian Ma
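The weighted-subtract-and-threshold step can be illustrated with a toy sketch: because skin reflects differently in different infrared bands, a weighted per-pixel difference of two co-registered band images, thresholded, yields a candidate skin mask. The weight and threshold values below are made up for illustration and are not taken from the patent.

```python
# Illustrative weighted subtraction and thresholding of two co-registered
# IR band images to produce a binary skin mask. Values are assumptions.

def skin_mask(lower_band, upper_band, weight=1.0, threshold=30):
    """Per-pixel weighted difference of two band images, thresholded.
    Returns a binary mask (1 = candidate skin pixel)."""
    return [[1 if (lo - weight * up) > threshold else 0
             for lo, up in zip(row_lo, row_up)]
            for row_lo, row_up in zip(lower_band, upper_band)]
```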
-
Publication number: 20060102843
Abstract: A face detection and recognition system having several arrays imaging a scene in the infrared and visible spectrums. The system may use weighted subtracting and thresholding to distinguish human skin in a sensed image. A feature selector may locate a face in the image. The image may be cropped with a frame or border incorporating essentially only the face. The border may be superimposed on images from an infrared imaging array and the visible imaging array. Sub-images containing the face may be extracted from within the border on the infrared and visible images, respectively, and compared with a database of face information to attain recognition of the face. Confidence levels of recognition for infrared and visible imaged faces may be established. A resultant confidence level of recognition may be determined from these confidence levels. Infrared lighting may be used as needed to illuminate the scene.
Type: Application
Filed: November 12, 2004
Publication date: May 18, 2006
Inventors: Michael Bazakos, Vassilios Morellas, Yunqian Ma
-
Publication number: 20060082438
Abstract: A system for providing stand-off biometric verification of a driver of a vehicle while the vehicle is moving and/or a person on foot at a control gate, including an RFID vehicle tag reader, an RFID personal smart card reader and a facial detection and recognition (verification) system. The driver carries a RFID personal smart card that stores personal information of the driver and a face template of the driver. The vehicle carries a RFID vehicle tag that stores information regarding the vehicle. When the vehicle approaches the control gate, the RFID vehicle tag reader reads data from the RFID vehicle tag and the RFID personal tag reader reads data from the RFID personal smart card. The facial detection and verification system scans and reads a facial image for the driver. All the data and facial images detected by the readers are sent to a local computer at the control gate for further processing (final face verification).
Type: Application
Filed: December 1, 2005
Publication date: April 20, 2006
Inventors: Michael Bazakos, David Meyers
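The gate-side decision the abstract describes can be sketched as a simple check joining the vehicle tag, the smart card, and a live face match against the template stored on the card. The field names, score range, and threshold below are hypothetical, chosen only to make the control flow concrete.

```python
# Hypothetical sketch of the final verification at the control gate.
# Field names and the match threshold are illustrative assumptions.

def verify_at_gate(vehicle_tag, smart_card, live_face_score, threshold=0.8):
    """Admit only when the vehicle's registered driver matches the card
    holder and the live face matches the card's stored face template.
    live_face_score: similarity of the live image to the stored template,
    assumed to be in [0, 1]."""
    if vehicle_tag["registered_driver"] != smart_card["holder_id"]:
        return "deny"
    return "admit" if live_face_score >= threshold else "deny"
```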
-
Publication number: 20060082439
Abstract: A system for providing stand-off biometric verification of a driver of a vehicle while the vehicle is moving and/or a person on foot at a control gate, including an RFID vehicle tag reader, an RFID personal smart card reader and a facial detection and recognition (verification) system. The driver carries a RFID personal smart card that stores personal information of the driver and a face template of the driver. The vehicle carries a RFID vehicle tag that stores information regarding the vehicle. When the vehicle approaches the control gate, the RFID vehicle tag reader reads data from the RFID vehicle tag and the RFID personal tag reader reads data from the RFID personal smart card. The facial detection and verification system scans and reads a facial image for the driver. All the data and facial images detected by the readers are sent to a local computer at the control gate for further processing (final face verification).
Type: Application
Filed: December 1, 2005
Publication date: April 20, 2006
Inventors: Michael Bazakos, David Meyers, Vassilios Morellas
-
Publication number: 20060053342
Abstract: Methods and systems for the unsupervised learning of events contained within a video sequence, including apparatus and interfaces for implementing such systems and methods, are disclosed. An illustrative method in accordance with an exemplary embodiment of the present invention may include the steps of providing a behavioral analysis engine, initiating a training phase mode within the behavioral analysis engine and obtaining a feature vector including one or more parameters relating to an object located within an image sequence, and then analyzing the feature vector to determine a number of possible event candidates. The behavioral analysis engine can be configured to prompt the user to confirm whether an event candidate is a new event, an existing event, or an outlier. Once trained, a testing/operational phase mode of the behavioral analysis engine can be further implemented to detect the occurrence of one or more learned events, if desired.
Type: Application
Filed: September 9, 2004
Publication date: March 9, 2006
Inventors: Michael Bazakos, Yunqian Ma, Vassilios Morellas
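The training-phase decision described above (is a candidate a new event, an existing event, or an outlier?) can be sketched as a nearest-centroid test whose suggested label a user would then confirm. The Euclidean distance metric and the two radii are illustrative assumptions, not the patent's algorithm.

```python
import math

# Schematic of the new/existing/outlier suggestion for a feature vector
# during the training phase. Distance metric and radii are assumptions.

def classify_candidate(vector, centroids, match_radius=1.0, outlier_radius=5.0):
    """Suggest a label for a feature vector relative to known event
    clusters: 'existing' if close to some centroid, 'outlier' if far
    from all of them, 'new' otherwise (user confirmation would follow)."""
    if centroids:
        nearest = min(math.dist(vector, c) for c in centroids)
        if nearest <= match_radius:
            return "existing"
        if nearest > outlier_radius:
            return "outlier"
    return "new"
```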
-
Publication number: 20050110610
Abstract: A system for providing stand-off biometric verification of a driver of a vehicle at a control gate while the vehicle is moving, including a pre-verification system and a post-verification system. The pre-verification system is installed before an entrance of a facility and comprises an RFID vehicle tag reader, an RFID personal tag reader and a facial detection and recognition (verification) system. The RFID vehicle tag reader scans and reads an ID from an RFID vehicle tag of the vehicle that is trying to pass through the gate. The RFID personal tag reader reads an ID from an RFID personal tag carried by personnel who are driving in the vehicle. The facial detection and verification system scans and reads facial images for the driver. The post-verification system is installed on at least one of an entrance and an exit for post-verification to ensure that the vehicle that enters the entrance or leaves from the exit is the one that has been verified/denied at the control gate.
Type: Application
Filed: November 3, 2004
Publication date: May 26, 2005
Inventors: Michael Bazakos, Ridha Hamza, David Meyers
-
Publication number: 20050055582
Abstract: A system for providing stand-off biometric verification of a driver of a vehicle at a control gate while the vehicle is moving, including an RFID vehicle tag reader, an RFID personal tag reader and a facial detection and recognition (verification) system. The RFID vehicle tag reader scans and reads data from an RFID vehicle tag of the vehicle that is trying to pass through the gate. The RFID personal tag reader reads data from an RFID personal tag carried by personnel who are driving in the vehicle. The facial detection and verification system scans and reads facial images for the driver. All the data and facial images detected by the reader are sent to a computer for further processing (final face verification).
Type: Application
Filed: September 5, 2003
Publication date: March 10, 2005
Inventors: Michael Bazakos, David Meyers, Murray Cooper