Patents by Inventor Hankyu Moon

Hankyu Moon has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8706544
    Abstract: The present invention is a method and system for forecasting the demographic characterization of customers to help customize programming contents on each means for playing output of each site of a plurality of sites in a media network through automatically measuring, characterizing, and estimating the demographic information of customers that appear in the vicinity of each means for playing output. The analysis of demographic information of customers is performed automatically based on the visual information of the customers, using a plurality of means for capturing images and a plurality of computer vision technologies on the visual information. The measurement of the demographic information is performed in each measured node, where the node is defined as means for playing output. Extrapolation of the measurement characterizes the demographic information for each node of a plurality of nodes in a site of a plurality of sites of a media network.
    Type: Grant
    Filed: May 23, 2007
    Date of Patent: April 22, 2014
    Assignee: VideoMining Corporation
    Inventors: Rajeev Sharma, Satish Mummareddy, Jeff Hershey, Hankyu Moon
  • Patent number: 8577663
    Abstract: A system and method for identifying a monitoring point in an electrical and electronic system (EES) in a vehicle. The method includes defining a network model of the EES where potential monitoring point locations in the model are identified as targets, such as nodes. The method then computes a betweenness centrality metric for each target in the model as a summation of a ratio of a number of shortest paths between each pair of targets in the model that pass through the target whose betweenness centrality metric is being determined to a total number of shortest paths between each pair of targets. The method identifies which of the betweenness centrality metrics are greater than a threshold that defines a minimum acceptable metric and determines which of those targets meets a predetermined model coverage. The monitoring point is selected as the target that best satisfies the minimum metric and the desired coverage.
    Type: Grant
    Filed: May 23, 2011
    Date of Patent: November 5, 2013
    Assignee: GM Global Technology Operations LLC
    Inventors: Tsai-Ching Lu, Yilu Zhang, Alejandro Nijamkin, David L. Allen, Hankyu Moon, Mutasim A. Salman
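The betweenness centrality metric described in the abstract above can be illustrated with a small brute-force sketch. The graph, function names, and threshold below are illustrative assumptions, not taken from the patent.

```python
from collections import deque
from itertools import permutations

def shortest_paths(adj, s, t):
    """Enumerate all shortest simple paths from s to t in an
    unweighted graph given as an adjacency dict."""
    best, found = None, []
    queue = deque([(s,)])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            continue  # longer than a shortest path already found
        node = path[-1]
        if node == t:
            best = len(path)
            found.append(path)
            continue
        for nxt in adj[node]:
            if nxt not in path:
                queue.append(path + (nxt,))
    return found

def betweenness(adj, v):
    """Sum, over every ordered pair of other targets, of the fraction
    of shortest paths between them that pass through v."""
    score = 0.0
    for s, t in permutations(adj, 2):
        if v in (s, t):
            continue
        paths = shortest_paths(adj, s, t)
        if paths:
            score += sum(v in p for p in paths) / len(paths)
    return score
```

For realistically sized network models, Brandes' algorithm would replace the brute-force path enumeration, but the ratio being summed is the same.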
  • Patent number: 8520906
    Abstract: The present invention is a system and method for estimating the age of people based on their facial images. It addresses the difficulty of annotating the age of a person from a facial image by utilizing the relative age (such as older than or younger than) and face-based class similarity (gender, ethnicity, or appearance-based cluster) of sampled pair-wise facial images. It involves a unique method for pair-wise face training and a learning machine (or multiple learning machines) that outputs the relative age, along with the face-based class similarity, of the pairwise facial images. At the testing stage, the given input face image is paired with some number of reference images to be fed to the trained machines. The age of the input face is determined by comparing the estimated relative ages of the pairwise facial images to the ages of the reference face images.
    Type: Grant
    Filed: September 12, 2008
    Date of Patent: August 27, 2013
    Assignee: VideoMining Corporation
    Inventors: Hankyu Moon, Rajeev Sharma, Namsoon Jung
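A minimal sketch of the testing-stage logic described above, with a stand-in comparator in place of the trained pairwise machine; the function names and the bracketing rule are illustrative assumptions.

```python
def estimate_age(compare, query, references):
    """Bracket the query's age using pairwise relative-age judgments.

    `compare(a, b)` stands in for the trained pairwise machine: it
    returns +1 if face a looks older than face b, -1 otherwise.
    `references` is a list of (face, known_age) pairs."""
    younger = [age for face, age in references if compare(query, face) > 0]
    older = [age for face, age in references if compare(query, face) < 0]
    # The query's age lies between the oldest "younger" reference and
    # the youngest "older" reference; take the midpoint as the estimate.
    lo = max(younger) if younger else min(age for _, age in references)
    hi = min(older) if older else max(age for _, age in references)
    return (lo + hi) / 2.0
```

In the patent the comparator also reports face-based class similarity so that only comparable reference pairs contribute; that refinement is omitted here.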
  • Patent number: 8462996
    Abstract: The present invention is a method and system for measuring human emotional response to visual stimulus, based on the person's facial expressions. Given a detected and tracked human face, it is accurately localized so that the facial features are correctly identified and localized. Face and facial features are localized using the geometrically specialized learning machines. Then the emotion-sensitive features, such as the shapes of the facial features or facial wrinkles, are extracted. The facial muscle actions are estimated using a learning machine trained on the emotion-sensitive features. The instantaneous facial muscle actions are projected to a point in affect space, using the relation between the facial muscle actions and the affective state (arousal, valence, and stance). The series of estimated emotional changes renders a trajectory in affect space, which is further analyzed in relation to the temporal changes in visual stimulus, to determine the response.
    Type: Grant
    Filed: May 19, 2008
    Date of Patent: June 11, 2013
    Assignee: VideoMining Corporation
    Inventors: Hankyu Moon, Rajeev Sharma, Namsoon Jung
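The projection from facial muscle actions to affect space can be sketched as a linear map. Only the arousal and valence axes are shown (the patent's affect space also includes stance), and the action-unit names and weights below are invented for illustration; the real relation would be learned.

```python
# Hypothetical linear map from facial muscle actions (AU intensities)
# to (arousal, valence); coefficients are illustrative, not learned.
AFFECT_WEIGHTS = {
    "brow_raise":      (0.6, 0.1),    # arousal-heavy
    "lip_corner_pull": (0.3, 0.8),    # smiling -> positive valence
    "brow_lower":      (0.4, -0.7),   # frowning -> negative valence
}

def to_affect(action_units):
    """Project one frame's muscle-action estimates to a point in affect space."""
    arousal = sum(AFFECT_WEIGHTS[au][0] * v for au, v in action_units.items())
    valence = sum(AFFECT_WEIGHTS[au][1] * v for au, v in action_units.items())
    return (arousal, valence)

def emotion_trajectory(frames):
    """The sequence of per-frame projections forms the emotion trajectory,
    which can then be compared against changes in the visual stimulus."""
    return [to_affect(f) for f in frames]
```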
  • Patent number: 8401248
    Abstract: The present invention is a method and system to provide an automatic measurement of people's responses to dynamic digital media, based on changes in their facial expressions and attention to specific content. First, the method detects and tracks faces from the audience. It then localizes each of the faces and facial features to extract emotion-sensitive features of the face by applying emotion-sensitive feature filters, to determine the facial muscle actions of the face based on the extracted emotion-sensitive features. The changes in facial muscle actions are then converted to the changes in affective state, called an emotion trajectory. On the other hand, the method also estimates eye gaze based on extracted eye images and three-dimensional facial pose of the face based on localized facial images. The gaze direction of the person is estimated based on the estimated eye gaze and the three-dimensional facial pose of the person.
    Type: Grant
    Filed: December 30, 2008
    Date of Patent: March 19, 2013
    Assignee: VideoMining Corporation
    Inventors: Hankyu Moon, Rajeev Sharma, Namsoon Jung
  • Patent number: 8379937
    Abstract: The present invention is a method and system to provide a face-based automatic ethnicity recognition system that utilizes ethnicity-sensitive image features and probabilistic graphical models to represent ethnic classes. The ethnicity-sensitive image features are derived from groups of image features so that each grouping of the image features contributes to more accurate recognition of the ethnic class. The ethnicity-sensitive image features can be derived from image filters that are matched to different colors, sizes, and shapes of facial features—such as eyes, mouth, or complexion. The ethnicity-sensitive image features serve as observable quantities in the ethnic class-dependent probabilistic graphical models, where each probabilistic graphical model represents one ethnic class. A given input facial image is corrected for pose and lighting, and ethnicity-sensitive image features are extracted.
    Type: Grant
    Filed: September 29, 2008
    Date of Patent: February 19, 2013
    Assignee: VideoMining Corporation
    Inventors: Hankyu Moon, Rajeev Sharma, Namsoon Jung
  • Patent number: 8351647
    Abstract: The present invention is a system and framework for automatically measuring and correlating visual characteristics of people and accumulating the data for the purpose of demographic and behavior analysis. The demographic and behavior characteristics of people are extracted from a sequence of images using techniques from computer vision. The demographic and behavior characteristics are combined with a timestamp and a location marker to provide a feature vector of a person at a particular time at a particular location. These feature vectors are then accumulated and aggregated automatically in order to generate a data set that can be statistically analyzed, data mined and/or queried.
    Type: Grant
    Filed: December 17, 2007
    Date of Patent: January 8, 2013
    Assignee: VideoMining Corporation
    Inventors: Rajeev Sharma, Hankyu Moon, Namsoon Jung
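A toy sketch of the accumulate-and-aggregate step described above; the record fields and the roll-up by location are illustrative assumptions.

```python
from collections import Counter

def make_record(demographics, behavior, timestamp, location):
    """One observation: extracted visual characteristics combined with
    a timestamp and a location marker, as in the abstract."""
    return {"demo": demographics, "behavior": behavior,
            "ts": timestamp, "loc": location}

def aggregate_by_location(records):
    """Roll the accumulated records up into per-location counts that
    can be statistically analyzed or queried."""
    out = {}
    for r in records:
        out.setdefault(r["loc"], Counter())[r["demo"]["gender"]] += 1
    return out
```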
  • Patent number: 8325982
    Abstract: The present invention is a method and system for detecting and tracking shopping carts from video images in a retail environment. First, motion blobs are detected and tracked from the video frames. Then these motion blobs are examined to determine whether or not some of them contain carts, based on the presence or absence of linear edge motion. Linear edges are detected within consecutive video frames, and their estimated motions vote for the presence of a cart. The motion blobs receiving enough votes are classified as cart candidate blobs. A more elaborate model of passive motions within blobs containing a cart is constructed. The detected cart candidate blob is then analyzed based on the constructed passive object motion model to verify whether or not the blob indeed shows the characteristic passive motion of a person pushing a cart. Then the finally-detected carts are corresponded across the video frames to generate cart tracks.
    Type: Grant
    Filed: July 23, 2009
    Date of Patent: December 4, 2012
    Assignee: VideoMining Corporation
    Inventors: Hankyu Moon, Rajeev Sharma, Namsoon Jung
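The voting step might be sketched as follows, assuming each motion blob carries its overall motion estimate and the linear edges detected inside it; the data structures and vote threshold are hypothetical.

```python
def classify_cart_blobs(blobs, vote_threshold=3):
    """Edges whose estimated motion agrees with the blob's overall
    motion vote for the presence of a cart; blobs gathering enough
    votes become cart candidate blobs."""
    candidates = []
    for blob in blobs:
        votes = sum(1 for edge in blob["edges"]
                    if edge["motion"] == blob["motion"])
        if votes >= vote_threshold:
            candidates.append(blob["id"])
    return candidates
```

The candidates would then be checked against the passive-motion model and corresponded across frames, as the abstract describes.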
  • Publication number: 20120303348
    Abstract: A system and method for identifying a monitoring point in an electrical and electronic system (EES) in a vehicle. The method includes defining a network model of the EES where potential monitoring point locations in the model are identified as targets, such as nodes. The method then computes a betweenness centrality metric for each target in the model as a summation of a ratio of a number of shortest paths between each pair of targets in the model that pass through the target whose betweenness centrality metric is being determined to a total number of shortest paths between each pair of targets. The method identifies which of the betweenness centrality metrics are greater than a threshold that defines a minimum acceptable metric and determines which of those targets meets a predetermined model coverage. The monitoring point is selected as the target that best satisfies the minimum metric and the desired coverage.
    Type: Application
    Filed: May 23, 2011
    Publication date: November 29, 2012
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Tsai-Ching Lu, Yilu Zhang, Alejandro Nijamkin, David L. Allen, Hankyu Moon, Mutasim A. Salman
  • Patent number: 8254633
    Abstract: The present invention is a method and system to provide correspondences between a face camera track and a behavior camera track, for the purpose of making correspondence between the data obtained from each track. First, multiple learning machines are trained so that each of the machines processes pairwise person images from a specific pose region, and estimates the likelihood of two person images belonging to the same person based on image appearances. Then, the system acquires a person image associated with a behavior camera track, determines the pose of the person image based on its floor position, and corrects the pose of the person image. The system also acquires person images from face camera images associated with a face camera track, and combines the images with corrected person images from the previous step to form pairwise person images.
    Type: Grant
    Filed: April 21, 2009
    Date of Patent: August 28, 2012
    Assignee: VideoMining Corporation
    Inventors: Hankyu Moon, Rajeev Sharma, Namsoon Jung
  • Patent number: 8219438
    Abstract: The present invention is a method and system for measuring human response to retail elements, based on the shopper's facial expressions and behaviors. From a facial image sequence, the facial geometry—facial pose and facial feature positions—is estimated to facilitate the recognition of facial expressions, gaze, and demographic categories. The recognized facial expression is translated into an affective state of the shopper and the gaze is translated into the target and the level of interest of the shopper. The body image sequence is processed to identify the shopper's interaction with a given retail element—such as a product, a brand, or a category.
    Type: Grant
    Filed: June 30, 2008
    Date of Patent: July 10, 2012
    Assignee: VideoMining Corporation
    Inventors: Hankyu Moon, Rajeev Sharma, Namsoon Jung
  • Patent number: 8165386
    Abstract: The present invention is an embedded audience measurement platform, which is called HAM. The HAM includes hardware, apparatus, and method for measuring audience data from an image stream using dynamically-configurable hardware architecture. The HAM provides an end-to-end solution for audience measurement, wherein reconfigurable computational modules are used as engines per node to power the complete solution implemented in a flexible hardware architecture. The HAM is also a complete system for broad audience measurement, which has various components built into the system. Examples of the components comprise demographics classification, gaze estimation, emotion recognition, behavior analysis, and impression measurement.
    Type: Grant
    Filed: August 18, 2009
    Date of Patent: April 24, 2012
    Inventors: Hankyu Moon, Kevin Maurice Irick, Vijaykrishnan Narayanan, Rajeev Sharma, Namsoon Jung
  • Patent number: 8081816
    Abstract: The present invention is an apparatus and method for object recognition from at least an image stream from at least an image frame utilizing at least an artificial neural network. The present invention further comprises means for generating multiple components of an image pyramid simultaneously from a single image stream, means for providing the active pixel and interlayer neuron data to at least a subwindow processor, means for multiplying and accumulating the product of a pixel data or interlayer data and a synapse weight, and means for performing the activation of an accumulation. The present invention allows the artificial neural networks to be reconfigurable, thus embracing a broad range of object recognition applications in a flexible way. The subwindow processor in the present invention also further comprises means for performing neuron computations for at least a neuron.
    Type: Grant
    Filed: June 6, 2008
    Date of Patent: December 20, 2011
    Inventors: Kevin Maurice Irick, Vijaykrishnan Narayanan, Hankyu Moon, Rajeev Sharma, Namsoon Jung
  • Patent number: 8027521
    Abstract: The present invention is a method and system to provide a face-based automatic gender recognition system that utilizes localized facial features and hairstyles of humans. Given a human face detected from a face detector, it is accurately localized to facilitate the facial/hair feature detection and localization. Facial features are more finely localized using the geometrically distributed learning machines. Then the position, size, and appearance information of the facial features are extracted. The facial feature localization essentially decouples geometric and appearance information about facial features, so that a more explicit comparison can be made at the recognition stage. The hairstyle features that possess useful gender information are also extracted based on the hair region segmented, using the color discriminant analysis and the estimated geometry of the face.
    Type: Grant
    Filed: March 25, 2008
    Date of Patent: September 27, 2011
    Assignee: VideoMining Corporation
    Inventors: Hankyu Moon, Rajeev Sharma, Namsoon Jung
  • Patent number: 8010402
    Abstract: The present invention is a system and framework for augmenting any retail transaction system with information about the involved customers. This invention provides a method to combine the transaction data records of a customer or a group of customers with the automatically extracted demographic features (e.g., gender, age, and ethnicity), shopping group information, and behavioral information using computer vision algorithms. First, the system detects faces from the face view, tracks them individually, and estimates poses of each of the tracked faces to normalize them. These facial images are processed by the demographics classification module to determine and record the demographics feature vector. The system detects and tracks customers to analyze the dynamic behavior of the tracked customers so that their shopping group membership and checkout behavior can be recognized. Then the instances of faces and the instances of bodies can be matched and combined.
    Type: Grant
    Filed: April 21, 2009
    Date of Patent: August 30, 2011
    Assignee: VideoMining Corporation
    Inventors: Rajeev Sharma, Hankyu Moon, Varij Saurabh, Namsoon Jung
  • Patent number: 7987111
    Abstract: The present invention is a method and system for characterizing physical space based on automatic demographics measurement, using a plurality of means for capturing images and a plurality of computer vision technologies. The present invention is called demographic-based retail space characterization (DBR). Although the disclosed method is described in the context of retail space, the present invention can be applied to any physical space that has a restricted boundary. In the present invention, the physical space characterization can comprise various types of characterization depending on the objective of the physical space, and it is one of the objectives of the present invention to provide the automatic demographic composition measurement to facilitate the physical space characterization.
    Type: Grant
    Filed: October 26, 2007
    Date of Patent: July 26, 2011
    Assignee: VideoMining Corporation
    Inventors: Rajeev Sharma, Satish Mummareddy, Jeff Hershey, Hankyu Moon
  • Patent number: 7921036
    Abstract: The present invention is a method and system for selectively executing content on a display based on the automatic recognition of predefined characteristics, including visually perceptible attributes, such as the demographic profile of people identified automatically using a sequence of image frames from a video stream. The present invention detects the images of the individual or the people from captured images. The present invention automatically extracts visually perceptible attributes, including demographic information, local behavior analysis, and emotional status, of the individual or the people from the images in real time. The visually perceptible attributes further comprise height, skin color, hair color, the number of people in the scene, time spent by the people, and whether a person looked at the display. A targeted media is selected from a set of media pools, according to the automatically-extracted, visually perceptible attributes and the feedback from the people.
    Type: Grant
    Filed: June 29, 2009
    Date of Patent: April 5, 2011
    Assignee: VideoMining Corporation
    Inventors: Rajeev Sharma, Namsoon Jung, Hankyu Moon, Varij Saurabh
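The attribute-driven selection can be sketched as a first-matching-rule lookup over the media pools; the rule layout and attribute names are illustrative assumptions, not the patented mechanism.

```python
def select_media(attributes, media_pools, default="generic"):
    """Pick content whose targeting rule matches the automatically
    extracted, visually perceptible attributes.

    `media_pools` is a list of (rule, media) pairs, where a rule is a
    dict of attribute values that must all match."""
    for rule, media in media_pools:
        if all(attributes.get(k) == v for k, v in rule.items()):
            return media
    return default
```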
  • Patent number: 7912246
    Abstract: The present invention is a system and method for performing age classification or age estimation based on the facial images of people, using a multi-category decomposition architecture of classifiers. In the multi-category decomposition architecture, which is a hybrid multi-classifier architecture specialized to age classification, the task of learning the concept of age against significant within-class variations is handled by decomposing the set of facial images into auxiliary demographics classes, and the age classification is performed by an array of classifiers where each classifier, called an auxiliary class machine, is specialized to the given auxiliary class. The facial image data is annotated to assign the gender and ethnicity labels as well as the age labels. Each auxiliary class machine is trained to output both the given auxiliary class membership likelihood and the age group likelihoods. Faces are detected from the input image and individually tracked.
    Type: Grant
    Filed: January 29, 2008
    Date of Patent: March 22, 2011
    Assignee: VideoMining Corporation
    Inventors: Hankyu Moon, Rajeev Sharma, Namsoon Jung
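A sketch of how the auxiliary class machines' two outputs might be fused at classification time, with stub machines standing in for the trained classifiers; the weighting scheme is an illustrative assumption.

```python
def classify_age(face, machines):
    """Fuse the array of auxiliary class machines: weight each machine's
    age-group likelihoods by its own class-membership likelihood, then
    pick the age group with the largest combined score."""
    combined = {}
    for machine in machines:
        membership, age_likelihoods = machine(face)
        for group, p in age_likelihoods.items():
            combined[group] = combined.get(group, 0.0) + membership * p
    return max(combined, key=combined.get)
```

This way a machine specialized to, say, one gender/ethnicity class contributes strongly only when the face appears to belong to that class.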
  • Patent number: 7848548
    Abstract: The invention provides a face-based automatic demographics classification system that is robust to pose changes of the target faces and to accidental scene variables, by using a pose-independent facial image representation which comprises multiple pose-dependent facial appearance models. Given a sequence of people's faces in a scene, the two-dimensional variations are estimated and corrected using a novel machine learning based method. We estimate the three-dimensional pose of the people, using a machine learning based approach. The face tracking module keeps the identity of the person using geometric and appearance cues, where multiple appearance models are built based on the poses of the faces. Each separately built pose-dependent facial appearance model is fed to the demographics classifier, which is trained using only the faces having the corresponding pose.
    Type: Grant
    Filed: June 11, 2007
    Date of Patent: December 7, 2010
    Assignee: VideoMining Corporation
    Inventors: Hankyu Moon, Satish Mummareddy, Rajeev Sharma
  • Patent number: 7742623
    Abstract: The present invention is a method and system to estimate the visual target that people are looking at, based on automatic image measurements. The system utilizes image measurements from both face-view cameras and top-down view cameras. The cameras are calibrated with respect to the site and the visual target, so that the gaze target is determined from the estimated position and gaze direction of a person. Face detection and two-dimensional pose estimation locate and normalize the face of the person so that the eyes can be accurately localized and the three-dimensional facial pose can be estimated. The eye gaze is estimated based on either the positions of localized eyes and irises or on the eye image itself, depending on the quality of the image. The gaze direction is estimated from the eye gaze measurement in the context of the three-dimensional facial pose.
    Type: Grant
    Filed: August 4, 2008
    Date of Patent: June 22, 2010
    Assignee: VideoMining Corporation
    Inventors: Hankyu Moon, Rajeev Sharma, Namsoon Jung
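With calibrated cameras, determining the gaze target from a person's position and gaze direction reduces to a ray-plane intersection. A minimal sketch, assuming the visual target lies on the plane z = 0 in site coordinates (the plane placement and coordinate convention are assumptions):

```python
def gaze_target(position, direction, plane_z=0.0):
    """Intersect the gaze ray with a calibrated target plane (here
    z == plane_z, e.g. a display surface) to find the gaze point."""
    px, py, pz = position
    dx, dy, dz = direction
    if dz == 0:
        return None          # gaze is parallel to the target plane
    t = (plane_z - pz) / dz
    if t < 0:
        return None          # target plane is behind the viewer
    return (px + t * dx, py + t * dy)
```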