Patents by Inventor Namsoon Jung

Namsoon Jung has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 7974869
    Abstract: The present invention is a method and system for forecasting the behavioral characterization of customers to help customize programming contents on each means for playing output at each site of a plurality of sites in a media network, through automatically measuring, characterizing, and forecasting the behavioral information of customers who appear in the vicinity of each means for playing output. The analysis of behavioral information of customers is performed automatically based on the visual information of the customers, using a plurality of means for capturing images and a plurality of computer vision technologies on the visual information. The measurement of the behavioral information is performed at each measured node, where a node is defined as a means for playing output. Extrapolation of the measurement characterizes the behavioral information for each node of a plurality of nodes in a site of a plurality of sites of a media network.
    Type: Grant
    Filed: September 18, 2007
    Date of Patent: July 5, 2011
    Assignee: VideoMining Corporation
    Inventors: Rajeev Sharma, Satish Mummareddy, Jeff Hershey, Namsoon Jung
  • Patent number: 7957565
    Abstract: The present invention is a method and system for recognizing employees among the people in a physical space based on automatic behavior analysis of the people in a preferred embodiment. The present invention captures a plurality of input images of the people in the physical space by a plurality of means for capturing images. The present invention processes the plurality of input images in order to understand the behavioral characteristics of the people for the purpose of employee recognition. The behavior analysis can comprise a path analysis as one of the characterization methods. The path analysis collects a plurality of trip information for each tracked person during a predefined window of time. The trip information can comprise spatial and temporal attributes, such as coordinates of the person's position, trip time, trip length, and average velocity for each of the plurality of trips.
    Type: Grant
    Filed: March 28, 2008
    Date of Patent: June 7, 2011
    Assignee: VideoMining Corporation
    Inventors: Rajeev Sharma, Satish Mummareddy, Jeff Hershey, Namsoon Jung
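The path analysis in the abstract above derives trip time, trip length, and average velocity from tracked coordinates. A minimal sketch of that computation follows; the function and field names are illustrative and not taken from the patent:

```python
import math

def trip_attributes(track):
    """Compute trip statistics from one tracked person's path.

    track: list of (t, x, y) samples -- a timestamp in seconds and
    floor-plane coordinates, as a person tracker might produce.
    Returns trip time, trip length, and average velocity.
    """
    trip_time = track[-1][0] - track[0][0]
    # Sum the straight-line distance between consecutive samples.
    trip_length = sum(
        math.hypot(x2 - x1, y2 - y1)
        for (_, x1, y1), (_, x2, y2) in zip(track, track[1:])
    )
    avg_velocity = trip_length / trip_time if trip_time > 0 else 0.0
    return {"trip_time": trip_time,
            "trip_length": trip_length,
            "avg_velocity": avg_velocity}
```

In practice these attributes would be accumulated over the predefined window of time mentioned in the abstract before classifying a track as employee-like.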
  • Patent number: 7921036
    Abstract: The present invention is a method and system for selectively executing content on a display based on the automatic recognition of predefined characteristics, including visually perceptible attributes, such as the demographic profile of people identified automatically using a sequence of image frames from a video stream. The present invention detects the images of the individual or the people from captured images. The present invention automatically extracts visually perceptible attributes, including demographic information, local behavior analysis, and emotional status, of the individual or the people from the images in real time. The visually perceptible attributes further comprise height, skin color, hair color, the number of people in the scene, time spent by the people, and whether a person looked at the display. Targeted media is selected from a set of media pools, according to the automatically-extracted, visually perceptible attributes and the feedback from the people.
    Type: Grant
    Filed: June 29, 2009
    Date of Patent: April 5, 2011
    Assignee: VideoMining Corporation
    Inventors: Rajeev Sharma, Namsoon Jung, Hankyu Moon, Varij Saurabh
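The selection step above picks targeted media from a set of media pools according to extracted attributes. One plausible sketch, with a match-count score as an illustrative simplification (the pool names and scoring rule are assumptions, not from the patent):

```python
def select_media(media_pools, attributes):
    """Pick the media pool whose target profile best matches the viewer.

    media_pools: {pool_name: {attribute: value}} target profiles.
    attributes: automatically extracted viewer attributes.
    """
    def score(profile):
        # Count how many target attributes the viewer matches.
        return sum(1 for k, v in profile.items()
                   if attributes.get(k) == v)
    return max(media_pools, key=lambda name: score(media_pools[name]))
```

A real system would also weight attributes (e.g. demographic profile versus dwell time) and fold in the viewer feedback the abstract mentions.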
  • Patent number: 7912246
    Abstract: The present invention is a system and method for performing age classification or age estimation based on the facial images of people, using a multi-category decomposition architecture of classifiers. In the multi-category decomposition architecture, which is a hybrid multi-classifier architecture specialized to age classification, the task of learning the concept of age against significant within-class variations is handled by decomposing the set of facial images into auxiliary demographics classes, and the age classification is performed by an array of classifiers where each classifier, called an auxiliary class machine, is specialized to the given auxiliary class. The facial image data is annotated to assign the gender and ethnicity labels as well as the age labels. Each auxiliary class machine is trained to output both the given auxiliary class membership likelihood and the age group likelihoods. Faces are detected from the input image and individually tracked.
    Type: Grant
    Filed: January 29, 2008
    Date of Patent: March 22, 2011
    Assignee: VideoMining Corporation
    Inventors: Hankyu Moon, Rajeev Sharma, Namsoon Jung
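Each auxiliary class machine above outputs a class membership likelihood plus age-group likelihoods. One natural way to fuse them, sketched below, weights every machine's age likelihoods by its membership likelihood; the abstract does not fix the fusion rule, so this weighting is an assumption:

```python
def combine_auxiliary_machines(outputs):
    """Fuse per-machine outputs into overall age-group scores.

    outputs: list of (membership_likelihood, age_group_likelihoods)
    pairs, one per auxiliary class machine (e.g. gender/ethnicity
    cells). Age likelihoods are weighted by how likely the face
    belongs to that machine's auxiliary class, then normalized.
    """
    n_groups = len(outputs[0][1])
    fused = [0.0] * n_groups
    total = sum(m for m, _ in outputs) or 1.0  # avoid divide-by-zero
    for membership, age_likelihoods in outputs:
        for i, p in enumerate(age_likelihoods):
            fused[i] += membership * p
    return [f / total for f in fused]
```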
  • Patent number: 7904477
    Abstract: The present invention is a method and system for handling a plurality of information units in an information processing system, such as a multimodal human computer interaction (HCI) system, through a verification process for the plurality of information units. The present invention converts each information unit in the plurality of information units into a verified object by augmenting the first meaning in the information unit with a second meaning and expresses the verified objects by an object representation for each verified object. The present invention utilizes a processing structure, called a polymorphic operator, which is capable of applying a plurality of relationships among the verified objects based on a set of predefined rules in a particular application domain for governing the operation among the verified objects. The present invention is named Object Verification Enabled Network (OVEN).
    Type: Grant
    Filed: December 6, 2007
    Date of Patent: March 8, 2011
    Assignee: VideoMining Corporation
    Inventors: Namsoon Jung, Rajeev Sharma
  • Patent number: 7826644
    Abstract: The present invention is a system and method for immersing facial images of people captured automatically from an image or a sequence of images into a live video playback sequence. This method allows viewers to perceive participation in the viewed “movie” segment. A format is defined for storing the video such that this live playback of the video sequence is possible. A plurality of Computer Vision algorithms in the invention processes a plurality of input image sequences from the means for capturing images, which is pointed at the users in the vicinity of the system, and performs the head detection and tracking. The interaction in the invention can be performed either in real-time or off-line in an uncontrolled background, depending on the embodiment of the invention.
    Type: Grant
    Filed: April 12, 2010
    Date of Patent: November 2, 2010
    Inventors: Rajeev Sharma, Namsoon Jung
  • Publication number: 20100195913
    Abstract: The present invention is a system and method for immersing facial images of people captured automatically from an image or a sequence of images into a live video playback sequence. This method allows viewers to perceive participation in the viewed “movie” segment. A format is defined for storing the video such that this live playback of the video sequence is possible. A plurality of Computer Vision algorithms in the invention processes a plurality of input image sequences from the means for capturing images, which is pointed at the users in the vicinity of the system, and performs the head detection and tracking. The interaction in the invention can be performed either in real-time or off-line in an uncontrolled background, depending on the embodiment of the invention.
    Type: Application
    Filed: April 12, 2010
    Publication date: August 5, 2010
    Inventors: Rajeev Sharma, Namsoon Jung
  • Patent number: 7742623
    Abstract: The present invention is a method and system to estimate the visual target that people are looking at, based on automatic image measurements. The system utilizes image measurements from both face-view cameras and top-down view cameras. The cameras are calibrated with respect to the site and the visual target, so that the gaze target is determined from the estimated position and gaze direction of a person. Face detection and two-dimensional pose estimation locate and normalize the face of the person so that the eyes can be accurately localized and the three-dimensional facial pose can be estimated. The eye gaze is estimated based on either the positions of localized eyes and irises or on the eye image itself, depending on the quality of the image. The gaze direction is estimated from the eye gaze measurement in the context of the three-dimensional facial pose.
    Type: Grant
    Filed: August 4, 2008
    Date of Patent: June 22, 2010
    Assignee: VideoMining Corporation
    Inventors: Hankyu Moon, Rajeev Sharma, Namsoon Jung
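Once a person's position and gaze direction are estimated as described above, the gaze target on a calibrated planar display can be found by intersecting the gaze ray with the target plane. A minimal sketch, assuming the target plane sits at a fixed z in site coordinates (the coordinate convention is an assumption for illustration):

```python
def gaze_target_on_plane(position, direction, plane_z=0.0):
    """Intersect a gaze ray with a target plane at z = plane_z.

    position: (x, y, z) of the eyes in site coordinates.
    direction: (dx, dy, dz) gaze direction vector.
    Returns the (x, y) point hit on the plane, or None if the ray
    is parallel to the plane or points away from it.
    """
    px, py, pz = position
    dx, dy, dz = direction
    if dz == 0:
        return None          # ray parallel to the target plane
    t = (plane_z - pz) / dz  # parameter along the ray
    if t <= 0:
        return None          # plane is behind the person
    return (px + t * dx, py + t * dy)
```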
  • Patent number: 7734070
    Abstract: The present invention is a system and method for immersing facial images of people captured automatically from an image or a sequence of images into a live video playback sequence. This method allows viewers to perceive participation in the viewed “movie” segment. A format is defined for storing the video such that this live playback of the video sequence is possible. A plurality of Computer Vision algorithms in the invention processes a plurality of input image sequences from the means for capturing images, which is pointed at the users in the vicinity of the system, and performs the head detection and tracking. The interaction in the invention can be performed either in real-time or off-line in an uncontrolled background, depending on the embodiment of the invention.
    Type: Grant
    Filed: December 16, 2003
    Date of Patent: June 8, 2010
    Inventors: Rajeev Sharma, Namsoon Jung
  • Publication number: 20090285456
    Abstract: The present invention is a method and system for measuring human emotional response to visual stimulus, based on the person's facial expressions. Given a detected and tracked human face, it is accurately localized so that the facial features are correctly identified and localized. Face and facial features are localized using the geometrically specialized learning machines. Then the emotion-sensitive features, such as the shapes of the facial features or facial wrinkles, are extracted. The facial muscle actions are estimated using a learning machine trained on the emotion-sensitive features. The instantaneous facial muscle actions are projected to a point in affect space, using the relation between the facial muscle actions and the affective state (arousal, valence, and stance). The series of estimated emotional changes renders a trajectory in affect space, which is further analyzed in relation to the temporal changes in visual stimulus, to determine the response.
    Type: Application
    Filed: May 19, 2008
    Publication date: November 19, 2009
    Inventors: Hankyu Moon, Rajeev Sharma, Namsoon Jung
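The abstract above projects estimated facial muscle actions to a point in affect space (arousal, valence, stance) and treats the series of points as a trajectory. A rough sketch using a linear projection — the weight matrix and linearity are illustrative simplifications, not the patent's trained mapping:

```python
def affect_point(action_units, weights):
    """Project facial muscle-action intensities to affect space.

    action_units: estimated intensity per facial muscle action.
    weights: one (arousal, valence, stance) triple per action unit;
    a linear mapping stands in for the learned relation.
    """
    point = [0.0, 0.0, 0.0]
    for intensity, w in zip(action_units, weights):
        for axis in range(3):
            point[axis] += intensity * w[axis]
    return tuple(point)

def affect_trajectory(frames, weights):
    """One affect-space point per video frame of muscle-action estimates."""
    return [affect_point(f, weights) for f in frames]
```

The resulting trajectory would then be analyzed against the timing of the visual stimulus, as the abstract describes.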
  • Publication number: 20090158309
    Abstract: The present invention provides a comprehensive method to design an automatic media viewership measurement system, from the problem of sensor placement for an effective sampling of the viewership to the method of extrapolating spatially sampled viewership data. The system elements that affect the viewership—site, display, crowd, and audience—are identified first. The site-viewership analysis derives some of the crucial elements in determining an effective data sampling plan: visibility, occupancy, and viewership relevancy. The viewership sampling map is computed based on the visibility map, the occupancy map, and the viewership relevancy map; the viewership measurement sensors are placed so that the sensor coverage maximizes the viewership sampling map.
    Type: Application
    Filed: December 12, 2007
    Publication date: June 18, 2009
    Inventors: Hankyu Moon, Rajeev Sharma, Namsoon Jung
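The sampling plan above combines visibility, occupancy, and viewership-relevancy maps into a viewership sampling map, then places sensors to maximize coverage of it. A minimal sketch; the cell-wise product and the single-sensor greedy pick are assumptions, since the abstract does not fix the exact formulas:

```python
def viewership_sampling_map(visibility, occupancy, relevancy):
    """Combine the three site maps cell by cell.

    Each argument is a 2-D grid of values in [0, 1] over the site
    floor plan; multiplying them is one simple combination rule.
    """
    return [
        [v * o * r for v, o, r in zip(vrow, orow, rrow)]
        for vrow, orow, rrow in zip(visibility, occupancy, relevancy)
    ]

def best_sensor_cell(sampling_map):
    """Grid cell where a single sensor would cover the most viewership."""
    return max(
        ((i, j) for i, row in enumerate(sampling_map)
                for j in range(len(row))),
        key=lambda ij: sampling_map[ij[0]][ij[1]],
    )
```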
  • Publication number: 20080159634
    Abstract: The present invention is a method and system for automatically analyzing a category in a plurality of the categories in a physical space based on the visual characterization, such as behavior analysis or segmentation, of the persons with regard to the category. The present invention captures a plurality of input images of the persons in the category by a plurality of means for capturing images. The present invention processes the plurality of input images in order to understand the shopping behavior of the persons with the sub-categories of the category and analyzes the level of engagement and decision process at the sub-category level. The processes are based on a novel usage of a plurality of computer vision technologies to analyze the visual characterization of the persons from the plurality of input images. The physical space may be a retail space, and the persons may be customers in the retail space.
    Type: Application
    Filed: December 6, 2007
    Publication date: July 3, 2008
    Inventors: Rajeev Sharma, Satish Mummareddy, Priya Baboo, Jeff Hershey, Namsoon Jung
  • Publication number: 20080147725
    Abstract: The present invention is a method and system for handling a plurality of information units in an information processing system, such as a multimodal human computer interaction (HCI) system, through verification process for the plurality of information units. The present invention converts each information unit in the plurality of information units into verified object by augmenting the first meaning in the information unit with a second meaning and expresses the verified objects by object representation for each verified object. The present invention utilizes a processing structure, called polymorphic operator, which is capable of applying a plurality of relationships among the verified objects based on a set of predefined rules in a particular application domain for governing the operation among the verified objects. The present invention is named Object Verification Enabled Network (OVEN).
    Type: Application
    Filed: December 6, 2007
    Publication date: June 19, 2008
    Inventors: Namsoon Jung, Rajeev Sharma
  • Publication number: 20080109397
    Abstract: The present invention is a system and framework for automatically measuring and correlating visual characteristics of people and accumulating the data for the purpose of demographic and behavior analysis. The demographic and behavior characteristics of people are extracted from a sequence of images using techniques from computer vision. The demographic and behavior characteristics are combined with a timestamp and a location marker to provide a feature vector of a person at a particular time at a particular location. These feature vectors are then accumulated and aggregated automatically in order to generate a data set that can be statistically analyzed, data mined and/or queried.
    Type: Application
    Filed: December 17, 2007
    Publication date: May 8, 2008
    Inventors: Rajeev Sharma, Hankyu Moon, Namsoon Jung
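The framework above combines demographic and behavior characteristics with a timestamp and a location marker into a feature vector, then accumulates the vectors for analysis. A minimal sketch of that accumulation; the field names and the per-location grouping are illustrative assumptions:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class PersonObservation:
    """Visual characteristics of one person at one time and place."""
    timestamp: float
    location: str
    gender: str
    age_group: str

def aggregate_by_location(observations):
    """Accumulate observations into per-location counts by age group,
    a simple example of the statistical aggregation described above."""
    counts = defaultdict(lambda: defaultdict(int))
    for obs in observations:
        counts[obs.location][obs.age_group] += 1
    return {loc: dict(groups) for loc, groups in counts.items()}
```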
  • Patent number: 7283650
    Abstract: The present invention is a system and method for printing facial images of people, captured automatically from a sequence of images, onto coupons or any promotional printed material, such as postcards, stamps, promotional brochures, or tickets for movies or shows. The coupon can also be used as a means to encourage people to visit specific sites as a way of promoting goods or services sold at the visited site. The invention is named UCOUPON. A plurality of Computer Vision algorithms in the UCOUPON processes a plurality of input image sequences from one or a plurality of means for capturing images that is pointed at the customers in the vicinity of the system in an uncontrolled background. The coupon content is matched to the customer's demographic information, and primarily, the UCOUPON system does not require any customer input or participation to gather the demographic data, operating fully independently and automatically.
    Type: Grant
    Filed: November 26, 2003
    Date of Patent: October 16, 2007
    Assignee: Video Mining Corporation
    Inventors: Rajeev Sharma, Namsoon Jung
  • Patent number: 7227976
    Abstract: The present invention is a system and method for detecting facial features of humans in a continuous video and superimposing virtual objects onto the features automatically and dynamically in real-time. The suggested system is named Facial Enhancement Technology (FET). The FET system consists of three major modules: an initialization module, a facial feature detection module, and a superimposition module. Each module requires demanding processing time and resources by nature, but the FET system integrates these modules in such a way that real-time processing is possible. The users can interact with the system and select the objects on the screen. The superimposed image moves along with the user's random motion dynamically. The FET system enables the user to experience something that was not possible before by augmenting the person's facial images. The hardware of the FET system comprises the continuous image-capturing device, the image processing and controlling system, and the output display system.
    Type: Grant
    Filed: June 27, 2003
    Date of Patent: June 5, 2007
    Assignee: VideoMining Corporation
    Inventors: Namsoon Jung, Rajeev Sharma
  • Patent number: 7225414
    Abstract: The present invention is a method and apparatus for attracting the attention of people in public places and engaging them in a touch-free interaction with a multimedia display using an image-capturing system and a set of Computer Vision algorithms as a means of informing the public as well as collecting data about/from the users. The invention is named the Virtual Touch Entertainment (VTE) Platform. The VTE Platform comprises a series of interaction states, such as the Wait State, the Attraction State, the User Engagement State, the User Interaction State, and the Interaction Termination State. The modules in these interaction states handle complicated tasks assigned to them, such as attracting the users, training the users, providing the multimedia digital content to the users, and collecting the user data and statistics, in an efficient and intelligent manner.
    Type: Grant
    Filed: August 5, 2003
    Date of Patent: May 29, 2007
    Assignee: VideoMining Corporation
    Inventors: Rajeev Sharma, Emilio Schapira, Namsoon Jung
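The interaction states named in the abstract above form a natural state machine. A minimal sketch using the states from the abstract; the transition events are illustrative assumptions, since the abstract does not enumerate triggers:

```python
# States come from the abstract; the events are hypothetical triggers.
VTE_TRANSITIONS = {
    ("wait", "person_detected"): "attraction",
    ("attraction", "user_engaged"): "user_engagement",
    ("user_engagement", "user_trained"): "user_interaction",
    ("user_interaction", "user_left"): "interaction_termination",
    ("interaction_termination", "reset"): "wait",
}

def next_state(state, event):
    """Advance the interaction state machine; unknown events keep state."""
    return VTE_TRANSITIONS.get((state, event), state)
```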
  • Patent number: 7053915
    Abstract: The present invention is a system and method for increasing the value of audio-visual entertainment systems, such as karaoke, by simulating a virtual stage environment and enhancing the user's facial image in a continuous video input, automatically, dynamically, and in real-time. The present invention is named Enhanced Virtual Karaoke (EVIKA). The EVIKA system consists of two major modules: the facial image enhancement module and the virtual stage simulation module. The facial image enhancement module augments the user's image using the embedded Facial Enhancement Technology (F.E.T.) in real-time. The virtual stage simulation module constructs a virtual stage in the display by augmenting the environmental image. EVIKA puts the user's enhanced body image into the dynamic background, which changes according to the user's arbitrary motion. During the entire process, the user can interact with the system and select and interact with the virtual objects on the screen.
    Type: Grant
    Filed: July 16, 2003
    Date of Patent: May 30, 2006
    Assignee: Advanced Interfaces, Inc.
    Inventors: Namsoon Jung, Rajeev Sharma
  • Patent number: RE41449
    Abstract: The present invention is a method and apparatus for providing an enhanced automatic drive-thru experience to the customers in a vehicle by allowing use of natural hand gestures to interact with digital content. The invention is named the Virtual Touch Ordering System (VTOS). In the VTOS, the virtual touch interaction is defined to be a contact-free interaction, in which a user is able to select graphical objects within the digital contents on a display system and is able to control the processes connected to the graphical objects, by natural hand gestures without touching any physical devices, such as a keyboard or a touch screen. Using the virtual touch interaction of the VTOS, the user is able to complete transactions or ordering, without leaving the car and without any physical contact with the display.
    Type: Grant
    Filed: February 7, 2008
    Date of Patent: July 20, 2010
    Inventors: Nils Krahnstoever, Emilio Schapira, Rajeev Sharma, Namsoon Jung
  • Patent number: RE42205
    Abstract: The present invention is a system and method for detecting facial features of humans in a continuous video and superimposing virtual objects onto the features automatically and dynamically in real-time. The suggested system is named Facial Enhancement Technology (FET). The FET system consists of three major modules: an initialization module, a facial feature detection module, and a superimposition module. Each module requires demanding processing time and resources by nature, but the FET system integrates these modules in such a way that real-time processing is possible. The users can interact with the system and select the objects on the screen. The superimposed image moves along with the user's random motion dynamically. The FET system enables the user to experience something that was not possible before by augmenting the person's facial images. The hardware of the FET system comprises the continuous image-capturing device, the image processing and controlling system, and the output display system.
    Type: Grant
    Filed: June 4, 2009
    Date of Patent: March 8, 2011
    Assignee: Vmine Image Tech Co., Ltd LLC
    Inventors: Namsoon Jung, Rajeev Sharma