Patents by Inventor Rajeev Sharma

Rajeev Sharma has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20100195913
    Abstract: The present invention is a system and method for immersing facial images of people captured automatically from an image or a sequence of images into a live video playback sequence. This method allows viewers to perceive participation in the viewed “movie” segment. A format is defined for storing the video such that this live playback of the video sequence is possible. A plurality of Computer Vision algorithms in the invention processes a plurality of input image sequences from the means for capturing images, which is pointed at the users in the vicinity of the system, and performs the head detection and tracking. The interaction in the invention can be performed either in real-time or off-line, depending on the embodiment of the invention, in an uncontrolled background.
    Type: Application
    Filed: April 12, 2010
    Publication date: August 5, 2010
    Inventors: Rajeev Sharma, Namsoon Jung
  • Patent number: 7742623
    Abstract: The present invention is a method and system to estimate the visual target that people are looking at, based on automatic image measurements. The system utilizes image measurements from both face-view cameras and top-down view cameras. The cameras are calibrated with respect to the site and the visual target, so that the gaze target is determined from the estimated position and gaze direction of a person. Face detection and two-dimensional pose estimation locate and normalize the face of the person so that the eyes can be accurately localized and the three-dimensional facial pose can be estimated. The eye gaze is estimated based on either the positions of localized eyes and irises or on the eye image itself, depending on the quality of the image. The gaze direction is estimated from the eye gaze measurement in the context of the three-dimensional facial pose.
    Type: Grant
    Filed: August 4, 2008
    Date of Patent: June 22, 2010
    Assignee: VideoMining Corporation
    Inventors: Hankyu Moon, Rajeev Sharma, Namsoon Jung
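The final step described in the abstract above, determining the gaze target from an estimated head position and gaze direction with cameras calibrated to the site, reduces to a ray-plane intersection when the visual target (e.g. a display) is planar. The sketch below illustrates only that geometric step; the function name, coordinates, and the planar-target assumption are illustrative, not taken from the patent.

```python
import numpy as np

def gaze_target_on_plane(head_pos, gaze_dir, plane_point, plane_normal):
    """Intersect a gaze ray (head position + gaze direction) with a
    planar visual target.  Returns the 3-D hit point, or None if the
    gaze does not reach the plane."""
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    denom = np.dot(plane_normal, gaze_dir)
    if abs(denom) < 1e-9:
        return None  # gaze is parallel to the target plane
    t = np.dot(plane_normal, plane_point - head_pos) / denom
    if t < 0:
        return None  # target lies behind the viewer
    return head_pos + t * gaze_dir

# A viewer standing 2 m in front of a wall display (the plane z = 0),
# looking slightly down and to the right.
hit = gaze_target_on_plane(
    head_pos=np.array([0.0, 1.7, 2.0]),
    gaze_dir=np.array([0.1, -0.1, -1.0]),
    plane_point=np.array([0.0, 0.0, 0.0]),
    plane_normal=np.array([0.0, 0.0, 1.0]),
)
```

Here `hit` lands at roughly (0.2, 1.5, 0) on the display plane, i.e. 0.2 m right of and 1.5 m above the floor-level origin.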
  • Patent number: 7734070
    Abstract: The present invention is a system and method for immersing facial images of people captured automatically from an image or a sequence of images into a live video playback sequence. This method allows viewers to perceive participation in the viewed “movie” segment. A format is defined for storing the video such that this live playback of the video sequence is possible. A plurality of Computer Vision algorithms in the invention processes a plurality of input image sequences from the means for capturing images, which is pointed at the users in the vicinity of the system, and performs the head detection and tracking. The interaction in the invention can be performed either in real-time or off-line, depending on the embodiment of the invention, in an uncontrolled background.
    Type: Grant
    Filed: December 16, 2003
    Date of Patent: June 8, 2010
    Inventors: Rajeev Sharma, Namsoon Jung
  • Patent number: 7711155
    Abstract: The present invention is a system and method for modeling faces from images captured from a single or a plurality of image capturing systems at different times. The method first determines the demographics of the person being imaged. This demographic classification is then used to select an approximate three dimensional face model from a set of models. Using this initial model and properties of camera projection, the model is adjusted leading to a more accurate face model.
    Type: Grant
    Filed: April 12, 2004
    Date of Patent: May 4, 2010
    Assignee: VideoMining Corporation
    Inventors: Rajeev Sharma, Kuntal Sengupta
  • Publication number: 20100089561
    Abstract: A connection block employs a support block with two parallel through holes that pass through parallel first and second flat block surfaces. First and second insert pipes have elongate portions and flanges. The elongate portions press-fit into the connection block and the flanges, not at pipe ends, abut against the first flat surface of the connection block when the pipes are installed. Upon installation, the ends of the elongate portions of the pipes are formed into a flange by flattening the end against the second connection block surface. The junctures of the elongate portions and the first flanges form a flange radius that contacts a radius of the support block when the pipes are installed into the block. The elongate portions residing within the first and second through holes make a full contact fit against the inside diameters of the through holes. The flanges are perpendicular to the elongate portions.
    Type: Application
    Filed: October 10, 2008
    Publication date: April 15, 2010
    Applicant: DENSO International America, Inc.
    Inventor: Rajeev Sharma
  • Publication number: 20100066077
    Abstract: A connection joint brazed to a heat exchanger may employ a first block and a second block. The first block may have two fluid passages that align with two fluid passages of the second block. A first male insert may reside within a first fluid passage of each of the first block and the second block and a second male insert may reside within a second fluid passage of each block. Each of the male inserts may employ a first seal and a second seal with a raised boss region midway between the seals. The raised boss portion lies at the mated flats of the juncture of the first and second blocks, which are chamfered to permit part of the boss to locate in each of the chamfers. The seals may be o-rings, and they may be molded onto an insert base using an over-molding or double-shot manufacturing process.
    Type: Application
    Filed: September 15, 2008
    Publication date: March 18, 2010
    Applicant: DENSO International America, Inc.
    Inventor: Rajeev Sharma
  • Publication number: 20090285456
    Abstract: The present invention is a method and system for measuring human emotional response to visual stimulus, based on the person's facial expressions. Given a detected and tracked human face, it is accurately localized so that the facial features are correctly identified and localized. Face and facial features are localized using the geometrically specialized learning machines. Then the emotion-sensitive features, such as the shapes of the facial features or facial wrinkles, are extracted. The facial muscle actions are estimated using a learning machine trained on the emotion-sensitive features. The instantaneous facial muscle actions are projected to a point in affect space, using the relation between the facial muscle actions and the affective state (arousal, valence, and stance). The series of estimated emotional changes renders a trajectory in affect space, which is further analyzed in relation to the temporal changes in visual stimulus, to determine the response.
    Type: Application
    Filed: May 19, 2008
    Publication date: November 19, 2009
    Inventors: Hankyu Moon, Rajeev Sharma, Namsoon Jung
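The core idea in the abstract above is projecting instantaneous facial-muscle-action estimates to a point in affect space and accumulating those points into a trajectory. The toy sketch below illustrates that projection with a fixed linear mapping; the mapping matrix, the action labels, and the two-dimensional (arousal, valence) space are illustrative assumptions, whereas the patent learns the relation and also includes stance.

```python
import numpy as np

# Hypothetical linear mapping from three facial-muscle-action
# intensities (rows) to (arousal, valence).  Real mappings are
# learned, as the abstract describes; these numbers are made up.
ACTION_TO_AFFECT = np.array([
    [0.8,  0.6],   # smile-related action  -> positive valence
    [0.7, -0.7],   # brow-lowering action  -> negative valence
    [0.9,  0.0],   # eye-widening action   -> arousal only
])

def affect_point(action_intensities):
    """Project instantaneous facial-action intensities (values in
    [0, 1]) to a point in a 2-D affect space (arousal, valence)."""
    return np.asarray(action_intensities, dtype=float) @ ACTION_TO_AFFECT

def affect_trajectory(frames):
    """A sequence of per-frame action intensities yields a trajectory
    in affect space, one point per video frame."""
    return np.array([affect_point(f) for f in frames])

traj = affect_trajectory([
    [0.0, 0.0, 0.0],   # neutral face
    [0.9, 0.0, 0.2],   # smiling
    [0.0, 0.8, 0.5],   # frowning
])
```

The resulting trajectory moves from the origin into positive valence and then into negative valence; analyzing such a trajectory against the timing of the visual stimulus is what the abstract calls determining the response.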
  • Patent number: 7590261
    Abstract: The invention is a method for detecting events in an imaged scene by analyzing the occlusion of linear features in the background image. Linear features, curved or straight, in specific scene locations are either manually specified or automatically learned from an image or image sequence of the background scene. For each linear feature, an occlusion model determines whether the line or part of it is occluded. The locations of the lines of interest in the scene, together with their occlusion characterizations, collectively form a description of the scene for a particular image. An event, defined as a series of descriptions of the scene over an image sequence, can then be initially defined and subsequently detected automatically by the system. An example application of this is counting cars or people passing in front of a video camera.
    Type: Grant
    Filed: July 30, 2004
    Date of Patent: September 15, 2009
    Assignee: VideoMining Corporation
    Inventors: Vladimir Y. Mariano, Rajeev Sharma
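The occlusion model in the abstract above can be illustrated with a minimal sketch: sample the pixels along a background line, call the line occluded in a frame when enough of those pixels differ from the background image, and count runs of occluded frames as pass events (e.g. cars or people crossing the line). The thresholds and the simple per-pixel difference test are assumptions for illustration, not the patent's model.

```python
def occlusion_states(frames, background, line_pixels,
                     diff_thresh=30, frac_thresh=0.5):
    """For each grayscale frame, decide whether a background line is
    occluded: occluded means more than `frac_thresh` of its pixels
    differ from the background image by more than `diff_thresh`."""
    states = []
    for frame in frames:
        changed = sum(
            1 for (x, y) in line_pixels
            if abs(frame[y][x] - background[y][x]) > diff_thresh
        )
        states.append(changed / len(line_pixels) > frac_thresh)
    return states

def count_pass_events(states):
    """An object passing in front of the camera shows up as a run of
    occluded frames; count the False -> True transitions."""
    return sum(1 for prev, cur in zip([False] + states, states)
               if cur and not prev)
```

For example, a frame sequence whose occlusion states are unoccluded, occluded, occluded, unoccluded, occluded yields two pass events.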
  • Publication number: 20090158309
    Abstract: The present invention provides a comprehensive method to design an automatic media viewership measurement system, from the problem of sensor placement for an effective sampling of the viewership to the method of extrapolating spatially sampled viewership data. The system elements that affect the viewership—site, display, crowd, and audience—are identified first. The site-viewership analysis derives some of the crucial elements in determining an effective data sampling plan: visibility, occupancy, and viewership relevancy. The viewership sampling map is computed from the visibility map, the occupancy map, and the viewership relevancy map; the viewership measurement sensors are placed so that the sensor coverage maximizes the viewership sampling map.
    Type: Application
    Filed: December 12, 2007
    Publication date: June 18, 2009
    Inventors: Hankyu Moon, Rajeev Sharma, Namsoon Jung
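The placement scheme in the abstract above combines three site maps into a sampling map and then positions sensors to maximize coverage of it. The sketch below uses an elementwise product as the combination rule and a greedy single-sensor search with a circular coverage disc; both choices are illustrative assumptions, since the abstract does not commit to a specific formula or search strategy.

```python
import numpy as np

def sampling_map(visibility, occupancy, relevancy):
    """Combine the three per-cell site maps into a viewership
    sampling map (elementwise product, as a simple stand-in)."""
    return visibility * occupancy * relevancy

def place_sensor(sample_map, coverage_radius):
    """Greedily place one sensor at the grid cell whose disc of
    radius `coverage_radius` covers the largest sampling weight."""
    h, w = sample_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    best, best_pos = -1.0, None
    for y in range(h):
        for x in range(w):
            mask = (ys - y) ** 2 + (xs - x) ** 2 <= coverage_radius ** 2
            score = sample_map[mask].sum()
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best
```

On a 5x5 site where only the central 3x3 region is occupied, the greedy search places the sensor at the center cell, the position whose coverage disc captures the most sampling weight.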
  • Patent number: 7505621
    Abstract: The present invention includes a system and method for automatically extracting the demographic information from images. The system detects the face in an image, locates different components, extracts component features, and then classifies the components to identify the age, gender, or ethnicity of the person(s) in the image. Using components for demographic classification gives better results as compared to currently known techniques. Moreover, the described system and technique can be used to extract demographic information in a more robust manner than currently known methods, in environments where a high degree of variability in size, shape, color, texture, pose, and occlusion exists. This invention also performs classifier fusion using Data Level fusion and Multi-level classification for fusing results of various component demographic classifiers.
    Type: Grant
    Filed: October 22, 2004
    Date of Patent: March 17, 2009
    Assignee: VideoMining Corporation
    Inventors: Pyush Agrawal, Rajeev Sharma
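The component-based classification described above ends with fusing the outputs of several per-component classifiers. The sketch below shows one simple fusion rule, weighted averaging of per-component class-probability vectors, as a stand-in for the patent's Data Level fusion and Multi-level classification; the component names, probabilities, and weights are all hypothetical.

```python
import numpy as np

def fuse_component_classifiers(component_probs, weights=None):
    """Fuse per-component class-probability vectors (e.g. from eye,
    nose, and mouth classifiers) into one decision by weighted
    averaging.  Returns the fused distribution and the winning
    class index."""
    probs = np.asarray(component_probs, dtype=float)
    if weights is None:
        weights = np.ones(len(probs))
    w = np.asarray(weights, dtype=float)
    fused = (w[:, None] * probs).sum(axis=0) / w.sum()
    return fused, int(np.argmax(fused))

# Three hypothetical component classifiers voting over two classes.
fused, label = fuse_component_classifiers(
    [[0.7, 0.3],    # eye-region classifier
     [0.6, 0.4],    # nose-region classifier
     [0.4, 0.6]],   # mouth-region classifier
    weights=[1.0, 1.0, 2.0],  # illustrative per-component trust
)
```

Even though the mouth classifier alone favors class 1, the fused distribution still selects class 0 because the other two components outvote it under these weights.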
  • Publication number: 20080159634
    Abstract: The present invention is a method and system for automatically analyzing a category in a plurality of the categories in a physical space based on the visual characterization, such as behavior analysis or segmentation, of the persons with regard to the category. The present invention captures a plurality of input images of the persons in the category by a plurality of means for capturing images. The present invention processes the plurality of input images in order to understand the shopping behavior of the persons with respect to the sub-categories of the category and analyzes the level of engagement and decision process at the sub-category level. The processes are based on a novel usage of a plurality of computer vision technologies to analyze the visual characterization of the persons from the plurality of input images. The physical space may be a retail space, and the persons may be customers in the retail space.
    Type: Application
    Filed: December 6, 2007
    Publication date: July 3, 2008
    Inventors: Rajeev Sharma, Satish Mummareddy, Priya Baboo, Jeff Hershey, Namsoon Jung
  • Publication number: 20080147725
    Abstract: The present invention is a method and system for handling a plurality of information units in an information processing system, such as a multimodal human computer interaction (HCI) system, through verification process for the plurality of information units. The present invention converts each information unit in the plurality of information units into verified object by augmenting the first meaning in the information unit with a second meaning and expresses the verified objects by object representation for each verified object. The present invention utilizes a processing structure, called polymorphic operator, which is capable of applying a plurality of relationships among the verified objects based on a set of predefined rules in a particular application domain for governing the operation among the verified objects. The present invention is named Object Verification Enabled Network (OVEN).
    Type: Application
    Filed: December 6, 2007
    Publication date: June 19, 2008
    Inventors: Namsoon Jung, Rajeev Sharma
  • Publication number: 20080109397
    Abstract: The present invention is a system and framework for automatically measuring and correlating visual characteristics of people and accumulating the data for the purpose of demographic and behavior analysis. The demographic and behavior characteristics of people are extracted from a sequence of images using techniques from computer vision. The demographic and behavior characteristics are combined with a timestamp and a location marker to provide a feature vector of a person at a particular time at a particular location. These feature vectors are then accumulated and aggregated automatically in order to generate a data set that can be statistically analyzed, data mined and/or queried.
    Type: Application
    Filed: December 17, 2007
    Publication date: May 8, 2008
    Inventors: Rajeev Sharma, Hankyu Moon, Namsoon Jung
  • Patent number: 7321854
    Abstract: The present method incorporates audio and visual cues from human gesticulation for automatic recognition. The methodology articulates a framework for co-analyzing gestures and prosodic elements of a person's speech. The methodology can be applied to a wide range of algorithms involving analysis of gesticulating individuals. The examples of interactive technology applications can range from information kiosks to personal computers. The video analysis of human activity provides a basis for the development of automated surveillance technologies in public places such as airports, shopping malls, and sporting events.
    Type: Grant
    Filed: September 19, 2003
    Date of Patent: January 22, 2008
    Assignee: The Penn State Research Foundation
    Inventors: Rajeev Sharma, Mohammed Yeasin, Sanshzar Kettebekov
  • Patent number: 7319779
    Abstract: The present invention includes a method and system for automatically extracting the multi-class age category information of a person from digital images. The system detects the face of the person(s) in an image, extracts features from the face(s), and then classifies each face into one of the multiple age categories. Using appearance information from the entire face gives better results as compared to currently known techniques. Moreover, the described technique can be used to extract age category information in a more robust manner than currently known methods, in environments with a high degree of variability in illumination, pose and presence of occlusion. Besides use as an automated data collection system, wherein, given the necessary facial information as the data, the age category of the person is determined automatically, the method could also be used for targeting certain age-groups in advertisements, surveillance, human computer interaction, security enhancements and immersive computer games.
    Type: Grant
    Filed: December 3, 2004
    Date of Patent: January 15, 2008
    Assignee: VideoMining Corporation
    Inventors: Satish Mummareddy, Rajeev Sharma
  • Patent number: 7317812
    Abstract: The present invention pertains generally to the field of computer graphics user interfaces. More specifically, the present invention discloses a video image based tracking system that allows a computer to robustly locate and track an object in three dimensions within the viewing area of two or more cameras. The preferred embodiment of the disclosed invention tracks a person's appendages in 3D, allowing touch-free control of interactive devices, but the method and apparatus can be used to perform a wide variety of video tracking tasks. The method uses at least two cameras that view the volume of space within which the object is being located and tracked. It operates by maintaining a large number of hypotheses about the actual 3D object location.
    Type: Grant
    Filed: July 12, 2003
    Date of Patent: January 8, 2008
    Assignee: VideoMining Corporation
    Inventors: Nils Krahnstoever, Emilio Schapira, Rajeev Sharma
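The abstract above says the tracker maintains a large number of hypotheses about the 3D object location, observed by two or more cameras. A particle-filter-style update is one standard way to realize that idea; the sketch below shows a single such step under strong simplifying assumptions (identical pinhole cameras looking down the -z axis, a Gaussian reprojection-error weighting with an assumed pixel noise), none of which are taken from the patent itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(point, cam_pos, focal=500.0):
    """Minimal pinhole projection: a camera at cam_pos looking
    down the -z axis maps a 3-D point to 2-D pixel coordinates."""
    rel = point - cam_pos
    return focal * rel[:2] / -rel[2]

def track_step(hypotheses, observations, cam_positions, sigma=5.0):
    """One multi-hypothesis tracking update: weight each 3-D
    hypothesis by how well it reprojects onto the 2-D observations
    from every camera, estimate the location as the weighted mean,
    then resample and jitter the hypothesis set."""
    weights = np.ones(len(hypotheses))
    for obs, cam in zip(observations, cam_positions):
        for i, h in enumerate(hypotheses):
            err = np.linalg.norm(project(h, cam) - obs)
            weights[i] *= np.exp(-err ** 2 / (2 * sigma ** 2))
    weights /= weights.sum()
    estimate = (weights[:, None] * hypotheses).sum(axis=0)
    idx = rng.choice(len(hypotheses), size=len(hypotheses), p=weights)
    return hypotheses[idx] + rng.normal(0, 0.01, hypotheses.shape), estimate
```

With two cameras separated by a baseline, hypotheses that reproject well into both views dominate the weighted mean, which is how the hypothesis set converges on the true 3D position over successive frames.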
  • Patent number: 7283650
    Abstract: The present invention is a system and method for printing facial images of people, captured automatically from a sequence of images, onto coupons or any promotional printed material, such as postcards, stamps, promotional brochures, or tickets for movies or shows. The coupon can also be used as a means to encourage people to visit specific sites as a way of promoting goods or services sold at the visited site. The invention is named UCOUPON. A plurality of Computer Vision algorithms in the UCOUPON processes a plurality of input image sequences from one or a plurality of means for capturing images that is pointed at the customers in the vicinity of the system in an uncontrolled background. The coupon content is matched to the customer's demographic information, and primarily, the UCOUPON system does not require any customer input or participation to gather the demographic data, operating fully independently and automatically.
    Type: Grant
    Filed: November 26, 2003
    Date of Patent: October 16, 2007
    Assignee: Video Mining Corporation
    Inventors: Rajeev Sharma, Namsoon Jung
  • Patent number: 7274803
    Abstract: The present invention is a system and method for detecting and analyzing motion patterns of individuals present at a multimedia computer terminal from a stream of video frames generated by a video camera and the method of providing visual feedback of the extracted information to aid the interaction process between a user and the system. The method allows multiple people to be present in front of the computer terminal and yet allow one active user to make selections on the computer display. Thus the invention can be used as a method for contact-free human-computer interaction in a public place, where the computer terminal can be positioned in a variety of configurations including behind a transparent glass window or at a height or location where the user cannot touch the terminal physically.
    Type: Grant
    Filed: March 31, 2003
    Date of Patent: September 25, 2007
    Assignee: VideoMining Corporation
    Inventors: Rajeev Sharma, Nils Krahnstoever, Emilio Schapira
  • Publication number: 20070205402
    Abstract: A flame retardant and glow resistant zinc free cellulose product containing silica, modified with polyaluminium ions. In the preparation of the product, the cellulose solution (viscose) and sodium silicate are blended and regenerated to obtain a polymeric silica in the cellulose structure, which is further modified with polyaluminium ions to attach aluminium sites to silica molecules, making the product glow resistant and imparting wash fastness as well.
    Type: Application
    Filed: April 25, 2006
    Publication date: September 6, 2007
    Applicant: BIRLA RESEARCH INSTITUTE FOR APPLIED SCIENCES
    Inventors: Aditya Shrivastava, Brij Koutu, Rajeev Sharma, Daya Chaurasia
  • Patent number: RE41449
    Abstract: The present invention is a method and apparatus for providing an enhanced automatic drive-thru experience to the customers in a vehicle by allowing use of natural hand gestures to interact with digital content. The invention is named Virtual Touch Ordering System (VTOS). In the VTOS, the virtual touch interaction is defined to be a contact free interaction, in which a user is able to select graphical objects within the digital contents on a display system and is able to control the processes connected to the graphical objects, by natural hand gestures without touching any physical devices, such as a keyboard or a touch screen. Using the virtual touch interaction of the VTOS, the user is able to complete transactions or ordering, without leaving the car and without any physical contact with the display.
    Type: Grant
    Filed: February 7, 2008
    Date of Patent: July 20, 2010
    Inventors: Nils Krahnstoever, Emilio Schapira, Rajeev Sharma, Namsoon Jung