Using Position Of The Lips, Movement Of The Lips, Or Face Analysis (epo) Patents (Class 704/E15.042)
-
Patent number: 12020217
Abstract: Systems and methods for estimating the repair cost of one or more instances of vehicle damage pictured in a digital image are disclosed herein. These systems and methods may first use a damage detection neural network (NN) model to determine location(s), type(s), intensit(ies), and corresponding repair part(s) for pictured damage. Then, a repair cost estimation NN model may be given a damage type, a damage intensity, and the repair part(s) needed to determine a repair cost estimation. The training of each of the damage detection NN model and the repair cost estimation NN model is described. The manner of outputting results data corresponding to the systems and methods disclosed herein is also described.
Type: Grant
Filed: November 11, 2020
Date of Patent: June 25, 2024
Assignee: CDK GLOBAL, LLC
Inventors: Salil Gandhi, Jitendra Choudhary, Saurabh Kshirsagar, Papiya Debnath
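The two-stage pipeline this abstract describes (detect damage, then price each instance) can be sketched as follows. This is a minimal illustration, not the patented implementation: the two neural networks are replaced by stub functions, and all field names, damage types, and cost figures are made-up assumptions.

```python
# Sketch of a two-stage damage-detection / cost-estimation pipeline.
# Stub functions stand in for the trained NN models; values are illustrative.

def detect_damage(image):
    """Stand-in for the damage detection NN: returns, per damaged region,
    its location, damage type, intensity, and corresponding repair parts."""
    # A real model would run inference on the image tensor here.
    return [
        {"location": (120, 80), "type": "dent", "intensity": 0.7,
         "parts": ["front_bumper"]},
    ]

def estimate_repair_cost(damage_type, intensity, parts):
    """Stand-in for the repair cost estimation NN: maps damage features to
    a cost estimate (assumed base part cost scaled by damage intensity)."""
    base_costs = {"front_bumper": 400.0, "door_panel": 550.0}  # assumed
    return sum(base_costs.get(p, 300.0) * (1.0 + intensity) for p in parts)

def estimate_image(image):
    """End to end: detect damage instances, then price each one."""
    return [estimate_repair_cost(d["type"], d["intensity"], d["parts"])
            for d in detect_damage(image)]

costs = estimate_image(image=None)  # no real image in this sketch
```

Feeding each detection's type, intensity, and parts into the second stage mirrors the abstract's description of what the cost-estimation model is given.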
-
Patent number: 11653156
Abstract: A hearing device, an accessory device, and a method of operating a hearing system comprising a hearing device and an accessory device are disclosed, the method comprising: obtaining, in the accessory device, an audio input signal representative of audio from one or more audio sources; obtaining image data with a camera of the accessory device; identifying one or more audio sources including a first audio source based on the image data; determining a first model comprising first model coefficients, wherein the first model is based on image data of the first audio source and the audio input signal; and transmitting a hearing device signal to the hearing device, wherein the hearing device signal is based on the first model.
Type: Grant
Filed: May 28, 2021
Date of Patent: May 16, 2023
Assignee: GN HEARING A/S
Inventor: Andreas Tiefenau
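One way to picture "first model coefficients" derived from a visually identified source is a steering model for the accessory device's microphones. The sketch below is a loose interpretation under stated assumptions (a two-microphone delay-and-sum model, an assumed mic spacing), not the patent's actual model.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s
MIC_SPACING = 0.15       # m; assumed accessory-device microphone spacing

def steering_delay(angle_deg):
    """Inter-microphone delay (seconds) for a source at angle_deg from
    broadside, under a two-mic delay-and-sum assumption."""
    return MIC_SPACING * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND

def first_model(angle_deg):
    """Sketch of a 'first model': here its only coefficient is the
    steering delay toward the visually identified first audio source."""
    return {"delay_s": steering_delay(angle_deg)}

model = first_model(30.0)  # camera localized the talker at 30 degrees
```

A signal filtered with such coefficients could then serve as the basis for the hearing device signal the abstract describes transmitting.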
-
Patent number: 8660841
Abstract: Apparatus for isolating a media stream of a first modality from a complex media source having at least two media modalities, multiple objects, and events comprises: recording devices for the different modalities; an associator for associating events recorded in the first modality with events recorded in the second modality and providing an association output; and an isolator that uses the association output to isolate those events in the first modality that correlate with events in the second modality associated with a predetermined object, thereby isolating a media stream associated with that object. It is thus possible to identify events such as hand or mouth movements, associate them with sounds, and then produce a filtered track of only those sounds associated with the events. In this way a particular speaker or musical instrument can be isolated from a complex scene.
Type: Grant
Filed: April 6, 2008
Date of Patent: February 25, 2014
Assignee: Technion Research & Development Foundation Limited
Inventors: Zohar Barzelay, Yoav Yosef Schechner
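The association step can be illustrated with event onset times: keep only the audio events that co-occur with the tracked object's visual events. This is a minimal sketch of the idea, assuming events have already been reduced to onset timestamps and using a made-up tolerance and data.

```python
def associate_events(visual_onsets, audio_onsets, tol=0.1):
    """Keep audio onsets that fall within `tol` seconds of some visual
    onset of the tracked object (the 'association output')."""
    return [a for a in audio_onsets
            if any(abs(a - v) <= tol for v in visual_onsets)]

# Onsets (seconds) of the tracked speaker's mouth movements (assumed data):
speaker_onsets = [0.50, 1.20, 2.05]
# Onsets of all audio events detected in the complex scene (assumed data):
audio_onsets = [0.52, 0.90, 1.18, 2.00, 3.40]

isolated = associate_events(speaker_onsets, audio_onsets)
```

The events at 0.90 s and 3.40 s have no matching mouth movement, so they are filtered out, leaving a track of only the sounds attributed to that speaker.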
-
Patent number: 8588482
Abstract: The subject matter of this specification can be embodied in, among other things, a computer-implemented method that includes receiving a plurality of images having human faces. The method further includes generating a data structure having representations of the faces and associations that link the representations based on similarities in appearance between the faces. The method further includes outputting a first gender value for a first representation of a first face that indicates a gender of the first face based on one or more other gender values of one or more other representations of one or more other faces that are linked to the first representation.
Type: Grant
Filed: October 17, 2011
Date of Patent: November 19, 2013
Assignee: Google Inc.
Inventors: Shumeet Baluja, Yushi Jing
-
Patent number: 8041082
Abstract: The subject matter of this specification can be embodied in, among other things, a computer-implemented method that includes receiving a plurality of images having human faces. The method further includes generating a data structure having representations of the faces and associations that link the representations based on similarities in appearance between the faces. The method further includes outputting a first gender value for a first representation of a first face that indicates a gender of the first face based on one or more other gender values of one or more other representations of one or more other faces that are linked to the first representation.
Type: Grant
Filed: November 2, 2007
Date of Patent: October 18, 2011
Assignee: Google Inc.
Inventors: Shumeet Baluja, Yushi Jing
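The data structure these two related abstracts describe — face representations linked by appearance similarity, with a gender value for one face derived from the values of linked faces — resembles label propagation on a weighted graph. The sketch below shows one plausible reading; the weights, labels, and +1/−1 encoding are illustrative assumptions, not the patented method.

```python
# Sketch: faces are graph nodes linked by appearance-similarity weights;
# an unlabeled face's gender score is the similarity-weighted average of
# its labeled neighbors. All data below is made up for illustration.

def propagate_gender(edges, labels, node):
    """Return a gender score for `node` (+1 = one gender, -1 = the other)
    as the similarity-weighted average over its labeled neighbors."""
    num = den = 0.0
    for (a, b), weight in edges.items():
        if node in (a, b):
            other = b if a == node else a
            if other in labels:
                num += weight * labels[other]
                den += weight
    return num / den if den else 0.0

# Similarity-weighted links between face representations (assumed):
edges = {("f1", "f2"): 0.9, ("f1", "f3"): 0.4, ("f2", "f3"): 0.2}
labels = {"f2": 1.0, "f3": -1.0}  # known gender values for two faces

score = propagate_gender(edges, labels, "f1")
```

Because "f1" is far more similar to "f2" than to "f3", the score lands on "f2"'s side of zero, which is the behavior the abstract's "based on one or more other gender values" language suggests.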
-
Publication number: 20100332229
Abstract: An information processing apparatus that includes an image acquisition unit to acquire a temporal sequence of frames of image data, a detecting unit to detect a lip area and a lip image from each of the frames of the image data, a recognition unit to recognize a word based on the detected lip images of the lip areas, and a controller to control an operation at the information processing apparatus based on the word recognized by the recognition unit.
Type: Application
Filed: June 15, 2010
Publication date: December 30, 2010
Applicant: Sony Corporation
Inventors: Kazumi AOYAMA, Kohtaro SABE, Masato ITO
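The per-frame pipeline in this abstract (detect a lip region in each frame, recognize a word from the lip-image sequence, then dispatch a device operation) can be sketched as below. Detection and recognition are stubs, and the command table is an illustrative assumption, not Sony's implementation.

```python
# Sketch: frames -> lip regions -> recognized word -> device operation.
# Stub functions stand in for the detecting and recognition units.

def detect_lip_area(frame):
    """Stand-in detecting unit: returns an (x, y, w, h) lip bounding box."""
    return (40, 60, 32, 16)

def recognize_word(lip_images):
    """Stand-in recognition unit mapping a lip-image sequence to a word."""
    return "play" if lip_images else None

# Assumed mapping from recognized words to apparatus operations:
COMMANDS = {"play": "START_PLAYBACK", "stop": "STOP_PLAYBACK"}

def control(frames):
    """Controller: run detection on every frame, recognize the word,
    and look up the operation to perform."""
    lip_images = [detect_lip_area(f) for f in frames]
    word = recognize_word(lip_images)
    return COMMANDS.get(word)

action = control(frames=[object(), object(), object()])  # dummy frames
```

With real models plugged into the two stubs, the same control flow would let a spoken (or mouthed) word trigger an operation on the apparatus.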
-
Publication number: 20100299144
Abstract: Apparatus for isolating a media stream of a first modality from a complex media source having at least two media modalities, multiple objects, and events comprises: recording devices for the different modalities; an associator for associating events recorded in the first modality with events recorded in the second modality and providing an association output; and an isolator that uses the association output to isolate those events in the first modality that correlate with events in the second modality associated with a predetermined object, thereby isolating a media stream associated with that object. It is thus possible to identify events such as hand or mouth movements, associate them with sounds, and then produce a filtered track of only those sounds associated with the events. In this way a particular speaker or musical instrument can be isolated from a complex scene.
Type: Application
Filed: April 6, 2008
Publication date: November 25, 2010
Applicant: Technion Research & Development Foundation Ltd.
Inventors: Zohar Barzelay, Yoav Yosef