Patents by Inventor Farzin Aghdasi

Farzin Aghdasi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20130169822
    Abstract: Disclosed are methods, systems, computer readable media and other implementations, including a method to calibrate a camera that includes capturing by the camera a frame of a scene, identifying features appearing in the captured frame, the features associated with pre-determined values representative of physical attributes of one or more objects, and determining parameters of the camera based on the identified features appearing in the captured frame and the pre-determined values associated with the identified features.
    Type: Application
    Filed: December 28, 2011
    Publication date: July 4, 2013
    Applicant: Pelco, Inc.
    Inventors: Hongwei Zhu, Farzin Aghdasi, Greg Millar
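
    A minimal pinhole-camera sketch of the calibration idea summarized above; the reference object, its dimensions, and all function names are hypothetical assumptions, not the patented implementation:

        import numpy as np

        def focal_length_px(known_height_m, observed_height_px, distance_m):
            # pinhole projection: observed_px = f_px * height_m / distance_m
            return observed_height_px * distance_m / known_height_m

        def distance_to_object(f_px, known_height_m, observed_height_px):
            # invert the same relation to range a new object of assumed known size
            return f_px * known_height_m / observed_height_px

        # hypothetical reference: a 2.0 m doorway seen 400 px tall from 5 m away
        f = focal_length_px(known_height_m=2.0, observed_height_px=400, distance_m=5.0)
        print("focal length ~ %.0f px" % f)                         # ~1000 px
        print("range ~ %.1f m" % distance_to_object(f, 1.8, 300))   # a 1.8 m person at ~6 m
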
  • Publication number: 20130170760
    Abstract: A method of presenting video comprising receiving a plurality of video data from a video source, analyzing the plurality of video data, identifying the presence of foreground-objects that are distinct from background portions in the plurality of video data, classifying the foreground-objects into foreground-object classifications, receiving user input selecting a foreground-object classification, and generating video frames from the plurality of video data containing background portions and only foreground-objects in the selected foreground-object classification.
    Type: Application
    Filed: December 29, 2011
    Publication date: July 4, 2013
    Applicant: PELCO, INC.
    Inventors: Lei Wang, Farzin Aghdasi, Greg Millar
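
    A rough sketch of rendering only a user-selected foreground-object classification over the background, as described in the abstract above; the detection format and class names are assumed for illustration:

        import numpy as np

        def render_selected_class(background, detections, selected_class):
            """background: HxWx3; detections: list of (class_name, mask, pixels) -- assumed format."""
            out = background.copy()
            for class_name, mask, pixels in detections:
                if class_name == selected_class:
                    out[mask] = pixels[mask]      # paste only objects of the chosen class
            return out

        h, w = 120, 160
        bg = np.zeros((h, w, 3), np.uint8)
        person_mask = np.zeros((h, w), bool); person_mask[40:100, 60:80] = True
        frame = np.full((h, w, 3), 200, np.uint8)
        out = render_selected_class(bg, [("person", person_mask, frame)], "person")
        print(out[50, 70], out[10, 10])           # person pixels kept, background elsewhere
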
  • Publication number: 20130170696
    Abstract: An example of a method for identifying objects in video content according to the disclosure includes receiving video content of a scene captured by a video camera, detecting an object in the video content, identifying a track that the object follows over a series of frames of the video content, extracting object features for the object from the video content, and classifying the object based on the object features. Classifying the object further comprises: determining a track-level classification for the object using spatially invariant object features, determining a global-clustering classification for the object using spatially variant features, and determining an object type for the object based on the track-level classification and the global-clustering classification for the object.
    Type: Application
    Filed: December 28, 2011
    Publication date: July 4, 2013
    Applicant: PELCO, INC.
    Inventors: Hongwei Zhu, Farzin Aghdasi, Greg Millar
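
    A toy sketch of fusing a track-level classification with a global-clustering classification into a final object type; the weighted-vote scheme and score values are assumptions, not the disclosed algorithm:

        def fuse_classifications(track_scores, cluster_scores, w_track=0.6):
            # assumed fusion rule: weighted average of the two classifiers' confidences
            labels = set(track_scores) | set(cluster_scores)
            fused = {lab: w_track * track_scores.get(lab, 0.0)
                          + (1.0 - w_track) * cluster_scores.get(lab, 0.0)
                     for lab in labels}
            return max(fused, key=fused.get), fused

        track   = {"person": 0.7, "vehicle": 0.3}    # from spatially invariant features
        cluster = {"person": 0.4, "vehicle": 0.6}    # from spatially variant features
        label, scores = fuse_classifications(track, cluster)
        print(label, scores)                          # 'person', fused 0.58 vs 0.42
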
  • Publication number: 20130162834
    Abstract: Techniques for processing video content in a video camera are provided. A method for processing video content in a video camera according to the disclosure includes capturing thermal video data using a thermal imaging sensor, determining quantization parameters for the thermal video data, quantizing the thermal video data to generate quantized thermal video data content and video quantization information, and transmitting the quantized thermal video data stream and the video quantization information to a video analytics server over a network.
    Type: Application
    Filed: December 22, 2011
    Publication date: June 27, 2013
    Applicant: Pelco, Inc.
    Inventors: Lei Wang, Farzin Aghdasi, Greg Millar
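
    A simplified sketch of quantizing high-dynamic-range thermal frames and carrying the quantization parameters as side information; the offset/scale scheme is an assumption, not the claimed method:

        import numpy as np

        def quantize_thermal(frame16):
            # assumed scheme: linear offset/scale quantization to 8 bits
            lo, hi = int(frame16.min()), int(frame16.max())
            scale = max(hi - lo, 1) / 255.0
            q = ((frame16.astype(np.float32) - lo) / scale).round().astype(np.uint8)
            return q, {"offset": lo, "scale": scale}   # side information for the server

        def dequantize_thermal(q, params):
            return q.astype(np.float32) * params["scale"] + params["offset"]

        raw = np.random.randint(2800, 3600, size=(4, 6), dtype=np.uint16)  # fake 16-bit data
        q, params = quantize_thermal(raw)
        approx = dequantize_thermal(q, params)
        print(params, float(np.abs(approx - raw).max()))   # reconstruction error <= ~scale/2
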
  • Publication number: 20130163382
    Abstract: Systems and methods are described for determining device positions in a video surveillance system. A method described herein includes generating a reference sound; emitting, at a first device, the reference sound; detecting, at the first device, a responsive reference sound from one or more second devices in response to the emitted reference sound; identifying a position of each of the one or more second devices; obtaining information relating to latency of the one or more second devices; computing a round trip time associated with each of the one or more second devices based on at least a timing of detecting the one or more responsive reference sounds and the latency of each of the one or more second devices; and estimating the position of the first device according to the round trip time and the position associated with each of the one or more second devices.
    Type: Application
    Filed: December 22, 2011
    Publication date: June 27, 2013
    Applicant: Pelco, Inc.
    Inventors: James Millar, Farzin Aghdasi, Greg Millar
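
    A sketch of the underlying geometry only: round-trip times minus device latency give distances, and linearized trilateration recovers the position; the coordinates and latency values are invented for illustration:

        import numpy as np

        SPEED_OF_SOUND = 343.0   # m/s at room temperature

        def rtt_to_distance(rtt_s, latency_s):
            return SPEED_OF_SOUND * (rtt_s - latency_s) / 2.0

        def trilaterate(anchors, dists):
            """anchors: (n, 2) known positions; dists: (n,) distances. Linear least squares."""
            a0, d0 = anchors[0], dists[0]
            A = 2.0 * (anchors[1:] - a0)
            b = d0**2 - dists[1:]**2 + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2)
            pos, *_ = np.linalg.lstsq(A, b, rcond=None)
            return pos

        anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # hypothetical devices
        true_pos = np.array([3.0, 4.0])
        rtts = 2 * np.linalg.norm(anchors - true_pos, axis=1) / SPEED_OF_SOUND + 0.001
        print(trilaterate(anchors, rtt_to_distance(rtts, latency_s=0.001)))   # ~[3. 4.]
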
  • Publication number: 20130166711
    Abstract: Systems and methods are described herein that provide a three-tier intelligent video surveillance management system. An example of a system described herein includes a gateway configured to obtain video content and metadata relating to the video content from a plurality of network devices, a metadata processing module communicatively coupled to the gateway and configured to filter the metadata according to one or more criteria to obtain a filtered set of metadata, a video processing module communicatively coupled to the gateway and the metadata processing module and configured to isolate video portions of the video content associated with respective first portions of the filtered set of metadata, and a cloud services interface communicatively coupled to the gateway, the metadata processing module and the video processing module and configured to provide at least some of the filtered set of metadata or the isolated video portions to a cloud computing service.
    Type: Application
    Filed: December 22, 2011
    Publication date: June 27, 2013
    Applicant: Pelco, Inc.
    Inventors: Lei Wang, Hongwei Zhu, Farzin Aghdasi, Greg Millar
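
    A structural sketch of the metadata-filter / video-isolation / cloud hand-off flow; the record fields and stub functions are hypothetical, not the described system's interfaces:

        metadata = [                                   # hypothetical gateway records
            {"camera": "cam1", "t0": 0.0,  "t1": 4.0,  "event": "motion"},
            {"camera": "cam1", "t0": 10.0, "t1": 12.0, "event": "loitering"},
            {"camera": "cam2", "t0": 3.0,  "t1": 5.0,  "event": "motion"},
        ]

        def filter_metadata(records, criterion):
            return [r for r in records if criterion(r)]

        def isolate_video_portions(records):
            # stand-in: a real gateway would clip video segments, not return tuples
            return [(r["camera"], r["t0"], r["t1"]) for r in records]

        def push_to_cloud(records, clips):          # stub for the cloud services interface
            print("uploading %d metadata records and %d clips" % (len(records), len(clips)))

        hits = filter_metadata(metadata, lambda r: r["event"] == "loitering")
        push_to_cloud(hits, isolate_video_portions(hits))   # 1 record, 1 clip
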
  • Publication number: 20130155247
    Abstract: A method of adjusting the color of images captured by a plurality of cameras comprises the steps of receiving a first image captured by a first camera from the plurality of cameras, analyzing the first image to separate the pixels in the first image into background pixels and foreground pixels, selecting pixels from the background pixels that have a color that is a shade of gray, determining the amount to adjust the colors of the selected pixels to move their colors towards true gray, and providing information for use in adjusting the color components of images from the plurality of cameras.
    Type: Application
    Filed: December 20, 2011
    Publication date: June 20, 2013
    Applicant: PELCO, INC.
    Inventors: Lei Wang, Greg Millar, Farzin Aghdasi
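
    A simplified gray-reference white-balance sketch in the spirit of the abstract above; the tolerance, mask, and gain formula are assumptions rather than the claimed steps:

        import numpy as np

        def gray_balance_gains(image, background_mask, tol=20):
            bg = image[background_mask].astype(np.float32)            # (N, 3) RGB pixels
            near_gray = bg[(bg.max(axis=1) - bg.min(axis=1)) < tol]   # low channel spread = shade of gray
            mean_rgb = near_gray.mean(axis=0)
            target = mean_rgb.mean()                                  # "true gray" level
            return target / mean_rgb                                  # per-channel gains

        def apply_gains(image, gains):
            return np.clip(image.astype(np.float32) * gains, 0, 255).astype(np.uint8)

        img = np.full((50, 50, 3), (128, 120, 112), np.uint8)   # scene with a warm cast
        mask = np.ones((50, 50), bool)                           # pretend every pixel is background
        gains = gray_balance_gains(img, mask)
        print(gains, apply_gains(img, gains)[0, 0])              # pulled toward (120, 120, 120)
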
  • Publication number: 20130128050
    Abstract: Disclosed are methods, systems, computer readable media and other implementations, including a method that includes determining, from image data captured by a plurality of cameras, motion data for multiple moving objects, and presenting, on a global image representative of areas monitored by the plurality of cameras, graphical indications of the determined motion data for the multiple objects at positions on the global image corresponding to geographic locations of the multiple moving objects. The method further includes presenting captured image data from one of the plurality of cameras in response to selection, based on the graphical indications presented on the global image, of an area of the global image presenting at least one of the graphical indications for at least one of the multiple moving objects captured by the one of the plurality of cameras.
    Type: Application
    Filed: November 22, 2011
    Publication date: May 23, 2013
    Inventors: Farzin Aghdasi, Wei Su, Lei Wang
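
    An illustrative sketch of placing object markers on a global overview image and resolving a user's selection to a camera; the map bounds and camera areas are hypothetical:

        camera_areas = {"cam1": (0, 0, 400, 300),     # hypothetical pixel rectangles
                        "cam2": (400, 0, 800, 300)}   # on the global overview image

        def geo_to_pixel(lat, lon, bounds, size):
            """bounds = (lat_min, lat_max, lon_min, lon_max); size = (width, height)."""
            lat_min, lat_max, lon_min, lon_max = bounds
            w, h = size
            x = (lon - lon_min) / (lon_max - lon_min) * w
            y = (lat_max - lat) / (lat_max - lat_min) * h    # image y grows downward
            return int(x), int(y)

        def camera_for_click(x, y):
            for cam, (x0, y0, x1, y1) in camera_areas.items():
                if x0 <= x < x1 and y0 <= y < y1:
                    return cam
            return None

        bounds, size = (36.80, 36.84, -119.72, -119.64), (800, 300)
        marker = geo_to_pixel(36.82, -119.66, bounds, size)      # one moving object
        print(marker, camera_for_click(*marker))                 # marker falls in cam2's area
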
  • Publication number: 20130028467
    Abstract: Embodiments of the disclosure provide for systems and methods for creating metadata associated with a video data. The metadata can include data about objects viewed within a video scene and/or events that occur within the video scene. Some embodiments allow users to search for specific objects and/or events by searching the recorded metadata. In some embodiments, metadata is created by receiving a video frame and developing a background model for the video frame. Foreground object(s) can then be identified in the video frame using the background model. Once these objects are identified they can be classified and/or an event associated with the foreground object may be detected. The event and the classification of the foreground object can then be recorded as metadata.
    Type: Application
    Filed: December 30, 2010
    Publication date: January 31, 2013
    Applicant: Pelco Inc.
    Inventors: Greg Millar, Farzin Aghdasi, Lei Wang
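
    A loose sketch of the general metadata-creation pattern (background model, foreground mask, toy classification, event record); the thresholds, classifier, and metadata fields are assumptions, not the disclosed embodiment:

        import numpy as np

        class MetadataExtractor:
            def __init__(self, alpha=0.05, thresh=25):
                self.bg, self.alpha, self.thresh = None, alpha, thresh

            def process(self, frame, frame_no):
                frame = frame.astype(np.float32)
                if self.bg is None:
                    self.bg = frame.copy()
                fg = np.abs(frame - self.bg) > self.thresh            # foreground mask
                self.bg = (1 - self.alpha) * self.bg + self.alpha * frame
                area = int(fg.sum())
                if area == 0:
                    return None
                label = "person" if area < 2000 else "vehicle"        # toy size-based classifier
                return {"frame": frame_no, "object": label, "area": area,
                        "event": "object_present"}                    # recorded as metadata

        ex = MetadataExtractor()
        empty = np.zeros((120, 160), np.float32)
        scene = empty.copy(); scene[40:80, 60:90] = 200               # an object appears
        print(ex.process(empty, 0), ex.process(scene, 1))
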
  • Patent number: 8269844
    Abstract: A method of improving a video image by removing the effects of camera vibration comprising the steps of obtaining a reference frame, receiving an incoming frame, determining the frame translation vector for the incoming frame, translating the incoming frame to generate a realigned frame, performing low pass filtering in the spatial domain on pixels in the realigned frame, performing low pass filtering in the spatial domain on pixels in the reference frame, determining the absolute difference between the filtered pixels in the reference frame and the filtered pixels in the realigned frame, performing low pass filtering in the temporal domain on the pixels in the realigned frame to generate the output frame if the absolute difference is less than a predetermined threshold, and providing the realigned frame as the output frame if the absolute difference is greater than the predetermined threshold.
    Type: Grant
    Filed: March 26, 2008
    Date of Patent: September 18, 2012
    Assignee: Pelco, Inc.
    Inventors: Chien-Min Huang, Farzin Aghdasi
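
    A schematic sketch of the stabilization flow described in the abstract: realign, spatially low-pass both frames, and temporally blend only where the difference stays under a threshold; the blur kernel, blend weight, and threshold are assumed values, not the patented implementation:

        import numpy as np

        def box_blur(img, k=3):                        # simple spatial low-pass filter
            pad = k // 2
            p = np.pad(img, pad, mode="edge")
            out = np.zeros_like(img, dtype=np.float32)
            for dy in range(k):
                for dx in range(k):
                    out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            return out / (k * k)

        def stabilize(reference, incoming, shift, thresh=10.0, alpha=0.5):
            realigned = np.roll(incoming, shift, axis=(0, 1))          # frame translation
            diff = np.abs(box_blur(reference) - box_blur(realigned))   # spatial low-pass + difference
            blended = alpha * reference + (1 - alpha) * realigned      # temporal low-pass
            return np.where(diff < thresh, blended, realigned)

        ref = np.zeros((60, 80), np.float32); ref[20:40, 30:50] = 100
        jittered = np.roll(ref, (2, -3), axis=(0, 1))                  # simulated camera vibration
        out = stabilize(ref, jittered, shift=(-2, 3))                  # undo the jitter
        print(float(np.abs(out - ref).max()))                          # ~0: frames agree after realignment
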
  • Publication number: 20120170803
    Abstract: Embodiments of the disclosure provide for systems and methods for creating metadata associated with a video data. The metadata can include data about objects viewed within a video scene and/or events that occur within the video scene. Some embodiments allow users to search for specific objects and/or events by searching the recorded metadata. In some embodiments, metadata is created by receiving a video frame and developing a background model for the video frame. Foreground object(s) can then be identified in the video frame using the background model. Once these objects are identified they can be classified and/or an event associated with the foreground object may be detected. The event and the classification of the foreground object can then be recorded as metadata.
    Type: Application
    Filed: December 30, 2010
    Publication date: July 5, 2012
    Applicant: Pelco Inc.
    Inventors: Greg Millar, Farzin Aghdasi, Lei Wang
  • Publication number: 20120169923
    Abstract: Techniques are discussed for providing mechanisms for coding and transmitting high definition video, e.g., over low bandwidth connections. In particular, foreground-objects are identified as distinct from the background of a scene represented in a plurality of video frames received from a video source, such as a camera. In identifying foreground-objects, semantically significant and semantically insignificant movement (e.g., repetitive versus non-repetitive movement) is differentiated. Processing of the foreground-objects and background proceed at different update rates or frequencies.
    Type: Application
    Filed: December 30, 2010
    Publication date: July 5, 2012
    Applicant: Pelco Inc.
    Inventors: Greg Millar, Farzin Aghdasi, Lei Wang, Chien-Min Huang
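
    A rate-splitting sketch of sending foreground regions every frame while refreshing the background far less often; the refresh period and packet format are assumptions, not the disclosed codec:

        import numpy as np

        BACKGROUND_PERIOD = 30    # assumed: refresh the background once every 30 frames

        def encode_frame(frame_no, frame, fg_mask):
            packets = [("foreground", frame_no, frame[fg_mask])]            # every frame
            if frame_no % BACKGROUND_PERIOD == 0:
                packets.append(("background", frame_no, frame[~fg_mask]))   # low update rate
            return packets

        frame = np.zeros((120, 160, 3), np.uint8)
        mask = np.zeros((120, 160), bool); mask[50:70, 40:60] = True        # moving region
        for n in range(3):
            sent = encode_frame(n, frame, mask)
            print(n, [(kind, pixels.shape[0]) for kind, _, pixels in sent])
        # frame 0 carries background + foreground; frames 1-2 carry foreground only
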
  • Publication number: 20120170838
    Abstract: Systems and methods of sorting electronic color images of objects are provided. One method includes receiving an input representation of an object, the representation including pixels defined in a first color space, converting the input image into a second color space, determining a query feature vector including multiple parameters associated with color of the input representation, the query feature vector parameters including at least a first parameter of the first color space and at least a first parameter of the second color space and comparing the query feature vector to multiple candidate feature vectors. Each candidate feature vector includes multiple parameters associated with color of multiple stored candidate images, the candidate feature vector parameters including at least the first parameter from the first color space and at least the first parameter from the second color space.
    Type: Application
    Filed: December 30, 2010
    Publication date: July 5, 2012
    Applicant: Pelco Inc.
    Inventors: Lei Wang, Shu Yang, Greg Millar, Farzin Aghdasi
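
    A toy illustration of a query feature vector that mixes parameters from two color spaces and ranks stored candidates by distance; the specific features and distance metric are assumptions:

        import colorsys
        import numpy as np

        def feature_vector(rgb_image):
            # assumed features: mean R, G, B (first color space) plus mean hue (second space)
            mean_rgb = rgb_image.reshape(-1, 3).mean(axis=0) / 255.0
            hue, _, _ = colorsys.rgb_to_hsv(*mean_rgb)
            return np.array([mean_rgb[0], mean_rgb[1], mean_rgb[2], hue])

        def rank_candidates(query_img, candidates):
            q = feature_vector(query_img)
            scored = [(name, float(np.linalg.norm(feature_vector(img) - q)))
                      for name, img in candidates]
            return sorted(scored, key=lambda item: item[1])

        red_car  = np.full((8, 8, 3), (200, 30, 30), np.uint8)
        blue_car = np.full((8, 8, 3), (30, 30, 200), np.uint8)
        query    = np.full((8, 8, 3), (190, 40, 35), np.uint8)
        print(rank_candidates(query, [("red car", red_car), ("blue car", blue_car)]))
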
  • Publication number: 20120169882
    Abstract: Techniques are described for tracking moving objects using a plurality of security cameras. Multiple cameras may capture frames that contain images of a moving object. These images may be processed by the cameras to create metadata associated with the images of the objects. Frames of each camera's video feed and metadata may be transmitted to a host computer system. The host computer system may use the metadata received from each camera to determine whether the moving objects imaged by the cameras represent the same moving object. Based upon properties of the images of the objects described in the metadata received from each camera, the host computer system may select a preferable video feed containing images of the moving object for display to a user.
    Type: Application
    Filed: December 30, 2010
    Publication date: July 5, 2012
    Applicant: Pelco Inc.
    Inventors: Greg Millar, Farzin Aghdasi, Lei Wang
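
    A host-side sketch of deciding that per-camera detections describe one object and choosing the preferable feed; the metadata fields and the largest-bounding-box rule are assumptions for illustration:

        def same_object(meta_a, meta_b, max_dist=2.0):
            # assumed association rule: world positions within a couple of metres
            dx = meta_a["world_x"] - meta_b["world_x"]
            dy = meta_a["world_y"] - meta_b["world_y"]
            return (dx * dx + dy * dy) ** 0.5 <= max_dist

        def preferred_feed(candidates):
            # assumed preference: the camera with the largest image of the object
            return max(candidates, key=lambda m: m["bbox_w"] * m["bbox_h"])["camera"]

        cam1 = {"camera": "cam1", "world_x": 10.1, "world_y": 5.0, "bbox_w": 40, "bbox_h": 90}
        cam2 = {"camera": "cam2", "world_x": 10.4, "world_y": 5.2, "bbox_w": 120, "bbox_h": 260}
        if same_object(cam1, cam2):
            print("display", preferred_feed([cam1, cam2]))   # cam2: larger view of the object
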
  • Publication number: 20120169871
    Abstract: An image display method includes: receiving, from a single camera, first and second image information for first and second captured images captured from different perspectives, the first image information having a first data density; selecting a portion of the first captured image for display with a higher level of detail than other portions of the first captured image, the selected portion corresponding to a first area of the first captured image; displaying the selected portion in a first displayed image, using a second data density relative to the selected portion of the first captured image; and displaying another portion of the first captured image, in a second displayed image, using a third data density; where the another portion of the first captured image is other than the selected portion of the first captured image; and where the third data density is lower than the second data density.
    Type: Application
    Filed: December 30, 2010
    Publication date: July 5, 2012
    Applicant: Pelco Inc.
    Inventors: Sezai Sablak, Greg Millar, Farzin Aghdasi
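
    A rough sketch of keeping a selected region at its original data density while decimating the rest of the frame for display; the ROI format and decimation factor are assumed:

        import numpy as np

        def dual_density_views(frame, roi, decimate=4):
            """roi = (y0, y1, x0, x1); returns (full-density detail, low-density overview)."""
            y0, y1, x0, x1 = roi
            detail = frame[y0:y1, x0:x1]                 # selected portion, higher data density
            overview = frame[::decimate, ::decimate]     # remainder shown at lower data density
            return detail, overview

        frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
        detail, overview = dual_density_views(frame, roi=(100, 300, 200, 400))
        print(detail.shape, overview.shape)              # (200, 200, 3) vs (120, 160, 3)
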
  • Publication number: 20120170902
    Abstract: Embodiments of the disclosure provide for systems and methods for searching video data for events and/or behaviors. An inference engine can be used to aide in the searching. In some embodiments, a user can specify various search criteria, for example, a video source(s), an event(s) or behavior(s) to search, and an action(s) to perform in the event of a successful search. The search can be performed by analyzing an object(s) found within scenes of the video data. An object can be identified by a number of attributes specified by the user. Once the search criteria has been received from the user, the video data can be received (or extracted from storage), the data analyzed for the specified events (or behaviors), and the specified action performed in the event a successful search occurs.
    Type: Application
    Filed: December 30, 2010
    Publication date: July 5, 2012
    Applicant: Pelco Inc.
    Inventors: Hongwei Zhu, Greg Millar, Farzin Aghdasi, Lei Wang, Shu Yang
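
    A simplified rule-engine sketch of searching metadata for user-specified events and firing an action on matches; the record fields, predicate, and action are hypothetical:

        records = [    # hypothetical metadata extracted from stored video
            {"source": "cam3", "object": "vehicle", "speed_kph": 12, "zone": "loading_dock"},
            {"source": "cam3", "object": "person",  "speed_kph": 4,  "zone": "loading_dock"},
            {"source": "cam7", "object": "vehicle", "speed_kph": 55, "zone": "parking_lot"},
        ]

        def run_search(records, source, predicate, action):
            hits = [r for r in records if r["source"] == source and predicate(r)]
            for hit in hits:
                action(hit)                 # user-specified action on each successful match
            return hits

        speeding = lambda r: r["object"] == "vehicle" and r["speed_kph"] > 40
        alert = lambda r: print("ALERT:", r["source"], r["zone"], r["speed_kph"], "kph")
        run_search(records, source="cam7", predicate=speeding, action=alert)
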
  • Publication number: 20120170802
    Abstract: Trajectory information of objects appearing in a scene can be used to cluster trajectories into groups of trajectories according to each trajectory's relative distance between each other for scene activity analysis. By doing so, a database of trajectory data can be maintained that includes the trajectories to be clustered into trajectory groups. This database can be used to train a clustering system, and with extracted statistical features of resultant trajectory groups a new trajectory can be analyzed to determine whether the new trajectory is normal or abnormal. Embodiments described herein, can be used to determine whether a video scene is normal or abnormal. In the event that the new trajectory is identified as normal the new trajectory can be annotated with the extracted semantic data. In the event that the new trajectory is determined to be abnormal a user can be notified that an abnormal behavior has occurred.
    Type: Application
    Filed: December 30, 2010
    Publication date: July 5, 2012
    Applicant: Pelco Inc. (Clovis, CA)
    Inventors: Greg Millar, Farzin Aghdasi, Hongwei Zhu
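
    A bare-bones sketch of the normal/abnormal test on trajectories resampled to a fixed length; the distance measure, cluster representation, and threshold are assumptions, not the disclosed clustering system:

        import numpy as np

        def traj_distance(a, b):
            # assumed: trajectories of equal length, mean pointwise Euclidean distance
            return float(np.linalg.norm(a - b, axis=1).mean())

        def is_abnormal(new_traj, cluster_means, thresh=3.0):
            dists = [traj_distance(new_traj, m) for m in cluster_means]
            return min(dists) > thresh, min(dists)

        t = np.linspace(0, 1, 20)
        walkway = np.stack([10 * t, 2 + 0 * t], axis=1)        # typical path from the database
        cluster_means = [walkway]
        normal   = np.stack([10 * t, 2.3 + 0 * t], axis=1)     # close to the usual route
        abnormal = np.stack([10 * t, 9.0 + 0 * t], axis=1)     # far from any learned cluster
        print(is_abnormal(normal, cluster_means))              # (False, ~0.3)
        print(is_abnormal(abnormal, cluster_means))            # (True, ~7.0)
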
  • Publication number: 20120173577
    Abstract: Embodiments of the disclosure provide for systems and methods for creating metadata associated with a video data. The metadata can include data about objects viewed within a video scene and/or events that occur within the video scene. Some embodiments allow users to search for specific objects and/or events by searching the recorded metadata. In some embodiments, metadata is created by receiving a video frame and developing a background model for the video frame. Foreground object(s) can then be identified in the video frame using the background model. Once these objects are identified they can be classified and/or an event associated with the foreground object may be detected. The event and the classification of the foreground object can then be recorded as metadata.
    Type: Application
    Filed: December 30, 2010
    Publication date: July 5, 2012
    Applicant: Pelco Inc.
    Inventors: Greg Millar, Farzin Aghdasi, Lei Wang
  • Publication number: 20120162416
    Abstract: A video surveillance system includes: an input configured to receive indications of images each comprising a plurality of pixels; a memory; and a processing unit communicatively coupled to the input and the memory and configured to: analyze the indications of the images; compare the present image with a short-term background image stored in the memory; compare the present image with a long-term background image stored in the memory; provide an indication in response to an object in the present image being disposed in a first location in the present image, in a second location in, or absent from, the short-term background image, and in a third location in, or absent from, the long-term background image, where the first location is different from both the second location and the third location.
    Type: Application
    Filed: December 22, 2010
    Publication date: June 28, 2012
    Applicant: Pelco, Inc.
    Inventors: Wei Su, Lei Wang, Farzin Aghdasi, Shu Yang
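
    A condensed sketch of the short-term/long-term dual-background idea: regions that have settled into the fast-adapting model but still differ from the slow model are flagged; the learning rates and threshold are assumed values, not the claimed logic:

        import numpy as np

        class DualBackground:
            def __init__(self, shape, fast=0.2, slow=0.005, thresh=25):
                self.short = np.zeros(shape, np.float32)   # short-term background
                self.long = np.zeros(shape, np.float32)    # long-term background
                self.fast, self.slow, self.thresh = fast, slow, thresh

            def update(self, frame):
                frame = frame.astype(np.float32)
                self.short += self.fast * (frame - self.short)
                self.long += self.slow * (frame - self.long)
                settled = np.abs(frame - self.short) < self.thresh     # present in short-term model
                new_vs_long = np.abs(frame - self.long) > self.thresh  # absent from long-term model
                return settled & new_vs_long                           # candidate "left object" pixels

        bg = DualBackground((60, 80))
        scene = np.zeros((60, 80), np.float32)
        bag = scene.copy(); bag[30:40, 40:50] = 180      # an object is left behind
        for _ in range(30):
            bg.update(scene)                             # learn the empty scene
        for _ in range(20):
            alert = bg.update(bag)                       # the object persists
        print(int(alert.sum()))                          # ~100 pixels flagged where it sits
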
  • Patent number: 7933333
    Abstract: A method of detecting motion in a video image comprising the steps of connecting an MPEG compliant encoder to a video source that provides video images, compressing the video data in the video images and generating a compressed video image bit stream having a motion compensation component, receiving the generated compressed video image bit stream, comparing the motion compensation component to a threshold value, and indicating that motion has occurred if the motion compensation component is greater than the threshold value.
    Type: Grant
    Filed: June 30, 2005
    Date of Patent: April 26, 2011
    Assignee: Pelco, Inc.
    Inventors: Chien-Min Huang, Farzin Aghdasi
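
    A sketch of the thresholding step only, assuming the per-macroblock motion vectors have already been extracted from the compressed bitstream; the threshold and vector units are illustrative assumptions:

        import numpy as np

        def motion_detected(motion_vectors, threshold=50.0):
            """motion_vectors: (n, 2) per-macroblock (dx, dy); units and threshold are illustrative."""
            magnitude = float(np.linalg.norm(motion_vectors, axis=1).sum())
            return magnitude > threshold, magnitude

        still_frame  = np.zeros((396, 2))                # e.g. one vector per CIF macroblock
        moving_frame = still_frame.copy()
        moving_frame[100:130] = (8, 2)                   # 30 blocks report displacement
        print(motion_detected(still_frame))              # (False, 0.0)
        print(motion_detected(moving_frame))             # (True, ~247)
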