Patents by Inventor Hartmut Neven

Hartmut Neven has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO). Brief, illustrative code sketches for a few of the listed methods appear after the listing.

  • Publication number: 20130311506
    Abstract: A method and apparatus for enabling user query disambiguation based on a user context of a mobile computing device. According to embodiments of the invention, a first user search query, along with sensor data, is received from a mobile computing device. A recognition process is performed on the sensor data to identify at least one item. In response to determining the at least one item is a result for the first search query, data identifying the at least one item is transmitted to the mobile computing device as a response to the first search query. In response to determining the at least one item is not the result for the first search query, search results of a second search query are transmitted to the mobile computing device as the response to the first search query, the second search query comprising a query of the at least one item.
    Type: Application
    Filed: January 9, 2012
    Publication date: November 21, 2013
    Applicant: GOOGLE INC.
    Inventors: Gabriel Taubman, David Petrou, Hartwig Adam, Hartmut Neven
  • Patent number: 8532400
    Abstract: Aspects of the invention pertain to identifying whether or not an image from a user's device is of a place. Before undertaking time- and resource-consuming analysis of an image using specialized image analysis modules, pre-filtering classification is conducted based on image data and metadata associated with the image. The metadata may include geolocation information. One classification procedure analyzes the metadata to perform a high-level determination as to whether the image is of a place. If the results indicate that it is of a place, then a further classification procedure may be performed, where the image information is analyzed, with or without the metadata. This process may be done concurrently with a place match filtering procedure. The results of the further classification will either find a match with a given place or not. The output is a place match either with or without geolocation information.
    Type: Grant
    Filed: July 24, 2012
    Date of Patent: September 10, 2013
    Assignee: Google Inc.
    Inventors: Boris Babenko, Hartwig Adam, John Flynn, Hartmut Neven
  • Patent number: 8520949
    Abstract: According to an embodiment, a method for filtering feature point matches for visual object recognition is provided. The method includes identifying local descriptors in an image and determining a self-similarity score for each local descriptor based upon matching each local descriptor to its nearest neighbor descriptors from a descriptor dataset. The method also includes filtering feature point matches having a number of local descriptors with self-similarity scores that exceed a threshold. According to another embodiment, the filtering step may further include removing feature point matches. According to a further embodiment, a system for filtering feature point matches for visual object recognition is provided. The system includes a descriptor identifier, a self-similar descriptor analyzer and a self-similar descriptor filter.
    Type: Grant
    Filed: June 22, 2009
    Date of Patent: August 27, 2013
    Assignee: Google Inc.
    Inventors: Alessandro Bissacco, Ulrich Buddemeier, Hartmut Neven
  • Patent number: 8510166
    Abstract: A gaze tracking technique is implemented with a head mounted gaze tracking device that communicates with a server. The server receives scene images from the head mounted gaze tracking device which captures external scenes viewed by a user wearing the head mounted device. The server also receives gaze direction information from the head mounted gaze tracking device. The gaze direction information indicates where in the external scenes the user was gazing when viewing the external scenes. An image recognition algorithm is executed on the scene images to identify items within the external scenes viewed by the user. A gazing log tracking the identified items viewed by the user is generated.
    Type: Grant
    Filed: May 11, 2011
    Date of Patent: August 13, 2013
    Assignee: Google Inc.
    Inventor: Hartmut Neven
  • Publication number: 20130188886
    Abstract: A system and method of identifying objects is provided. In one aspect, the system and method includes a hand-held device with a display, camera and processor. As the camera captures images and displays them on the display, the processor compares the information retrieved in connection with one image with information retrieved in connection with subsequent images. The processor uses the result of such comparison to determine the object that is likely to be of greatest interest to the user. The display simultaneously displays the images as they are captured, the location of the object in an image, and information retrieved for the object.
    Type: Application
    Filed: December 4, 2012
    Publication date: July 25, 2013
    Inventors: David Petrou, Matthew Bridges, Shailesh Nalawadi, Hartwig Adam, Matthew R. Casey, Hartmut Neven, Andrew Harp
  • Publication number: 20130179303
    Abstract: A method and apparatus for enabling dynamic product and vendor identification and the display of relevant purchase information are described herein. According to embodiments of the invention, a recognition process is executed on sensor data captured via a mobile computing device to identify one or more items, and to identify at least one product associated with the one or more items. Product and vendor information for the at least one product is retrieved and displayed via the mobile computing device. In the event a user gesture is detected in response to displaying the product and vendor information data, processing logic may submit a purchase order for the product (e.g., for an online vendor) or contact the vendor (e.g., for an in-store vendor).
    Type: Application
    Filed: January 9, 2012
    Publication date: July 11, 2013
    Applicant: GOOGLE INC.
    Inventors: David Petrou, Hartwig Adam, Laura Garcia-Barrio, Hartmut Neven
  • Publication number: 20130138685
    Abstract: In one embodiment the present invention is a method for populating and updating a database of images of landmarks including geo-clustering geo-tagged images according to geographic proximity to generate one or more geo-clusters, and visual-clustering the one or more geo-clusters according to image similarity to generate one or more visual clusters. In another embodiment, the present invention is a system for identifying landmarks from digital images, including the following components: a database of geo-tagged images; a landmark database; a geo-clustering module; and a visual clustering module. In other embodiments the present invention may be a method of enhancing user queries to retrieve images of landmarks, or a method of automatically tagging a new digital image with text labels.
    Type: Application
    Filed: September 14, 2012
    Publication date: May 30, 2013
    Applicant: Google Inc.
    Inventors: Fernando A. Brucher, Ulrich Buddemeier, Hartwig Adam, Hartmut Neven
  • Patent number: 8421872
    Abstract: An increasing number of mobile telephones and computers are being equipped with a camera. Thus, instead of simple text strings, it is also possible to send images as queries to search engines or databases. Moreover, advances in image recognition allow a greater degree of automated recognition of objects, strings of letters, or symbols in digital images. This makes it possible to convert the graphical information into a symbolic format, for example, plain text, in order to then access information about the object shown.
    Type: Grant
    Filed: February 20, 2004
    Date of Patent: April 16, 2013
    Assignee: Google Inc.
    Inventor: Hartmut Neven, Sr.
  • Publication number: 20130066878
    Abstract: A method and apparatus for enabling virtual tags is described. The method may include receiving a first digital image data and virtual tag data to be associated with a real-world object in the first digital image data, wherein the first digital image data is captured by a first mobile device, and the virtual tag data includes metadata received from a user of the first mobile device. The method may also include generating a first digital signature from the first digital image data that describes the real-world object, and in response to the generation, inserting in substantially real-time the first digital signature into a searchable index of digital images. The method may also include storing, in a tag database, the virtual tag data and an association between the virtual tag data and the first digital signature inserted into the index of digital images.
    Type: Application
    Filed: November 12, 2012
    Publication date: March 14, 2013
    Inventors: John Flynn, Dragomir Anguelov, Hartmut Neven, Mark Cummins, James Philbin, Rafael Spring, Hartwig Adam, Anand Pillai
  • Publication number: 20130036134
    Abstract: A method and apparatus for enabling a searchable history of real-world user experiences is described. The method may include capturing media data by a mobile computing device. The method may also include transmitting the captured media data to a server computer system, the server computer system to perform one or more recognition processes on the captured media data and add the captured media data to a history of real-world experiences of a user of the mobile computing device when the one or more recognition processes find a match. The method may also include transmitting a query of the user to the server computer system to initiate a search of the history of real-world experiences, and receiving results relevant to the query that include data indicative of the media data in the history of real-world experiences.
    Type: Application
    Filed: June 11, 2012
    Publication date: February 7, 2013
    Inventors: Hartmut Neven, David Petrou, Jacob Smullyan, Hartwig Adam
  • Patent number: 8358811
    Abstract: A method and apparatus for creating and updating a facial image database from a collection of digital images is disclosed. A set of detected faces from a digital image collection is stored in a facial image database, along with data pertaining to them. At least one facial recognition template for each face in the set is computed, and the images in the set are grouped according to the facial recognition template into similarity groups. Another embodiment is a naming tool for assigning names to a plurality of faces detected in a digital image collection. A facial image database stores data pertaining to facial images detected in images of a digital image collection.
    Type: Grant
    Filed: April 1, 2009
    Date of Patent: January 22, 2013
    Assignee: Google Inc.
    Inventors: Hartwig Adam, Johannes Steffens, Keith Kiyohara, Hartmut Neven, Brian Westphal, Tobias Magnusson, Gavin Doughtie, Henry Benjamin, Michael Horowitz, Hong-Kien Kenneth Ong
  • Patent number: 8345921
    Abstract: Embodiments of this invention relate to detecting and blurring images. In an embodiment, a system detects objects in a photographic image. The system includes an object detector module configured to detect regions of the photographic image that include objects of a particular type at least based on the content of the photographic image. The system further includes a false positive detector module configured to determine whether each region detected by the object detector module includes an object of the particular type at least based on information about the context in which the photographic image was taken.
    Type: Grant
    Filed: May 11, 2009
    Date of Patent: January 1, 2013
    Assignee: Google Inc.
    Inventors: Andrea Frome, German Cheung, Ahmad Abdulkader, Marco Zennaro, Bo Wu, Alessandro Bissacco, Hartmut Neven, Luc Vincent, Hartwig Adam
  • Patent number: 8332424
    Abstract: A method and apparatus for enabling virtual tags is described. The method may include receiving a first digital image data and virtual tag data to be associated with a real-world object in the first digital image data, wherein the first digital image data is captured by a first mobile device, and the virtual tag data includes metadata received from a user of the first mobile device. The method may also include generating a first digital signature from the first digital image data that describes the real-world object, and in response to the generation, inserting in substantially real-time the first digital signature into a searchable index of digital images. The method may also include storing, in a tag database, the virtual tag data and an association between the virtual tag data and the first digital signature inserted into the index of digital images.
    Type: Grant
    Filed: May 13, 2011
    Date of Patent: December 11, 2012
    Assignee: Google Inc.
    Inventors: John Flynn, Dragomir Anguelov, Hartmut Neven, Mark Cummins, James Philbin, Rafael Spring, Hartwig Adam, Anand Pillai
  • Publication number: 20120290401
    Abstract: A gaze tracking technique is implemented with a head mounted gaze tracking device that communicates with a server. The server receives scene images from the head mounted gaze tracking device which captures external scenes viewed by a user wearing the head mounted device. The server also receives gaze direction information from the head mounted gaze tracking device. The gaze direction information indicates where in the external scenes the user was gazing when viewing the external scenes. An image recognition algorithm is executed on the scene images to identify items within the external scenes viewed by the user. A gazing log tracking the identified items viewed by the user is generated.
    Type: Application
    Filed: May 11, 2011
    Publication date: November 15, 2012
    Applicant: GOOGLE INC.
    Inventor: Hartmut Neven
  • Publication number: 20120290591
    Abstract: A method and apparatus for enabling virtual tags is described. The method may include receiving a first digital image data and virtual tag data to be associated with a real-world object in the first digital image data, wherein the first digital image data is captured by a first mobile device, and the virtual tag data includes metadata received from a user of the first mobile device. The method may also include generating a first digital signature from the first digital image data that describes the real-world object, and in response to the generation, inserting in substantially real-time the first digital signature into a searchable index of digital images. The method may also include storing, in a tag database, the virtual tag data and an association between the virtual tag data and the first digital signature inserted into the index of digital images.
    Type: Application
    Filed: May 13, 2011
    Publication date: November 15, 2012
    Inventors: John Flynn, Dragomir Anguelov, Hartmut Neven, Mark Cummins, James Philbin, Rafael Spring, Hartwig Adam, Anand Pillai
  • Patent number: 8238671
    Abstract: Aspects of the invention pertain to identifying whether or not an image from a user's device is of a place. Before undertaking time- and resource-consuming analysis of an image using specialized image analysis modules, pre-filtering classification is conducted based on image data and metadata associated with the image. The metadata may include geolocation information. One classification procedure analyzes the metadata to perform a high-level determination as to whether the image is of a place. If the results indicate that it is of a place, then a further classification procedure may be performed, where the image information is analyzed, with or without the metadata. This process may be done concurrently with a place match filtering procedure. The results of the further classification will either find a match with a given place or not. The output is a place match either with or without geolocation information.
    Type: Grant
    Filed: December 7, 2009
    Date of Patent: August 7, 2012
    Assignee: Google Inc.
    Inventors: Boris Babenko, Hartwig Adam, John Flynn, Hartmut Neven
  • Patent number: 8189964
    Abstract: Aspects of the invention pertain to matching a selected image/photograph against a database of reference images having location information. The image of interest may include some location information itself, such as latitude/longitude coordinates and orientation. However, the location information provided by a user's device may be inaccurate or incomplete. The image of interest is provided to a front end server, which selects one or more cells to match the image against. Each cell may have multiple images and an index. One or more cell match servers compare the image against specific cells based on information provided by the front end server. An index storage server maintains index data for the cells and provides them to the cell match servers. If a match is found, the front end server identifies the correct location and orientation of the received image, and may correct errors in an estimated location of the user device.
    Type: Grant
    Filed: December 7, 2009
    Date of Patent: May 29, 2012
    Assignee: Google Inc.
    Inventors: John Flynn, Ulrich Buddemeier, Henrik Stewenius, Hartmut Neven, Fernando Brucher, Hartwig Adam
  • Publication number: 20120114239
    Abstract: Aspects of the invention pertain to matching a selected image/photograph against a database of reference images having location information. The image of interest may include some location information itself, such as latitude/longitude coordinates and orientation. However, the location information provided by a user's device may be inaccurate or incomplete. The image of interest is provided to a front end server, which selects one or more cells to match the image against. Each cell may have multiple images and an index. One or more cell match servers compare the image against specific cells based on information provided by the front end server. An index storage server maintains index data for the cells and provides them to the cell match servers. If a match is found, the front end server identifies the correct location and orientation of the received image, and may correct errors in an estimated location of the user device.
    Type: Application
    Filed: January 18, 2012
    Publication date: May 10, 2012
    Applicant: GOOGLE INC.
    Inventors: John Flynn, Ulrich Buddemeier, Henrik Stewenius, Hartmut Neven, Fernando Brucher, Hartwig Adam
  • Patent number: 8098938
    Abstract: Systems and methods for descriptor vector computation are described herein. An embodiment includes (a) identifying a plurality of regions in the digital image; (b) normalizing the regions using at least a similarity or affine transform such that the normalized regions have the same orientation and size as a pre-determined reference region; (c) generating one or more wavelets using dimensions of the reference region; (d) generating one or more dot products between each of the one or more wavelets, respectively, and the normalized regions; (e) concatenating amplitudes of the one or more dot products to generate a descriptor vector; and (f) outputting a signal corresponding to the descriptor vector.
    Type: Grant
    Filed: March 17, 2008
    Date of Patent: January 17, 2012
    Assignee: Google Inc.
    Inventors: Ulrich Buddemeier, Hartmut Neven
  • Patent number: 8086616
    Abstract: Systems and methods for selecting interest point descriptors for object recognition. In an embodiment, the present invention estimates performance of local descriptors by (1) receiving a local descriptor relating to an object in a first image; (2) identifying one or more nearest neighbor descriptors relating to one or more images different from the first image, the nearest neighbor descriptors comprising nearest neighbors of the local descriptor; (3) calculating a quality score for the local descriptor based on the number of nearest neighbor descriptors that relate to images showing the object; and (4) determining, on the basis of the quality score, if the local descriptor is effective in identifying the object.
    Type: Grant
    Filed: March 16, 2009
    Date of Patent: December 27, 2011
    Assignee: Google Inc.
    Inventors: Alessandro Bissacco, Ulrich Buddemeier, Hartmut Neven
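
Illustrative code sketches

A few of the abstracts above describe their methods concretely enough that a small code sketch may help orient a technical reader. Patent 8,520,949 scores each local descriptor by comparing it against its nearest neighbors in a reference descriptor dataset and filters out feature-point matches whose descriptors are too self-similar. The sketch below is a minimal interpretation of that idea; the Euclidean metric, the neighborhood radius, and the score threshold are assumptions made for illustration, not details taken from the patent.

```python
# Rough sketch (not the patented implementation) of self-similarity filtering:
# score each query descriptor by how many near-duplicate neighbors it has in a
# reference descriptor dataset, then drop feature-point matches whose
# descriptors are too self-similar. Radius and threshold are assumed values.
import numpy as np

def self_similarity_scores(query_descs, dataset_descs, radius=0.2):
    """For each query descriptor, count dataset descriptors within `radius`."""
    dists = np.linalg.norm(query_descs[:, None, :] - dataset_descs[None, :, :], axis=2)
    return (dists < radius).sum(axis=1)

def filter_matches(matches, query_descs, dataset_descs, max_score=5):
    """Keep only matches whose descriptor has few near-identical neighbors."""
    scores = self_similarity_scores(query_descs, dataset_descs)
    return [m for m, s in zip(matches, scores) if s <= max_score]

# Example: three query descriptors matched against a 100-descriptor dataset.
rng = np.random.default_rng(0)
query, dataset = rng.random((3, 8)), rng.random((100, 8))
print(filter_matches(["match_0", "match_1", "match_2"], query, dataset))
```

The intuition, as the abstract suggests, is that descriptors arising from repetitive structures match many things almost equally well, so discarding them reduces unreliable matches.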
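Publication 20130138685 builds a landmark image database by first geo-clustering geo-tagged images according to geographic proximity and then visual-clustering each geo-cluster according to image similarity. The sketch below mimics that two-stage grouping with a greedy single-pass clusterer, a rough kilometers-per-degree conversion, and toy visual feature vectors; all of these specifics are assumptions made for the example rather than details of the application.

```python
# Rough two-stage clustering sketch: geographic grouping first, then visual
# grouping inside each geo-cluster. Thresholds and features are illustrative.
import numpy as np

def greedy_cluster(items, dist, threshold):
    """Assign each item to the first cluster whose seed is within `threshold`."""
    clusters = []
    for item in items:
        for cluster in clusters:
            if dist(item, cluster[0]) < threshold:
                cluster.append(item)
                break
        else:
            clusters.append([item])
    return clusters

def geo_then_visual(photos, geo_km=1.0, vis_thresh=0.5):
    """photos: dicts with 'latlon' (degrees) and 'feat' (visual feature vector)."""
    # ~111 km per degree of latitude; crude, ignores longitude compression.
    geo_dist = lambda a, b: 111.0 * np.linalg.norm(np.subtract(a['latlon'], b['latlon']))
    vis_dist = lambda a, b: np.linalg.norm(np.subtract(a['feat'], b['feat']))
    visual_clusters = []
    for geo_cluster in greedy_cluster(photos, geo_dist, geo_km):
        visual_clusters.extend(greedy_cluster(geo_cluster, vis_dist, vis_thresh))
    return visual_clusters

# Example: two photos taken at the same spot but visually different, plus one far away.
photos = [
    {'latlon': (48.8584, 2.2945), 'feat': (1.0, 0.0)},
    {'latlon': (48.8585, 2.2946), 'feat': (0.0, 1.0)},
    {'latlon': (41.8902, 12.4922), 'feat': (1.0, 0.0)},
]
print([len(c) for c in geo_then_visual(photos)])  # -> [1, 1, 1]
```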
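Patent 8,189,964 matches a query image against reference images organized into geographic cells: a front end server selects candidate cells around the device's (possibly inaccurate) location estimate, cell match servers compare the image against those cells, and a confirmed match can correct the estimated location and orientation. The sketch below collapses that distributed front-end/cell-server/index-server architecture into a single process; the cell size, neighborhood, and dot-product scoring are assumptions for illustration.

```python
# Rough single-process sketch of cell-based matching: pick cells near the
# reported location, scan their reference descriptors, return the best match.
# Cell size, neighborhood, and scoring are illustrative assumptions.
import numpy as np

CELL_DEG = 0.1  # assumed cell size in degrees of latitude/longitude

def cell_of(lat, lon):
    return (round(lat / CELL_DEG), round(lon / CELL_DEG))

def candidate_cells(lat, lon):
    """The query's cell plus its eight neighbors, to tolerate location error."""
    ci, cj = cell_of(lat, lon)
    return [(ci + di, cj + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)]

def match_in_cells(query_desc, cell_index, cells):
    """Best-scoring reference entry (image_id, descriptor, lat, lon) over the cells."""
    best = None
    for cell in cells:
        for ref in cell_index.get(cell, []):
            score = float(np.dot(query_desc, ref[1]))
            if best is None or score > best[0]:
                best = (score, ref)
    return best  # the matched reference's lat/lon can correct the location estimate

# Example with a toy index holding one reference image keyed by its cell.
index = {cell_of(37.42, -122.08): [("img_A", np.array([1.0, 0.0]), 37.42, -122.08)]}
query = np.array([0.9, 0.1])
print(match_in_cells(query, index, candidate_cells(37.43, -122.09)))
```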
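Finally, patent 8,098,938 computes a descriptor vector by normalizing image regions to a pre-determined reference region, taking dot products between each normalized region and wavelets sized to that reference region, and concatenating the amplitudes. A minimal sketch follows, assuming axis-aligned regions, nearest-neighbor resampling, and two simple Haar-like wavelets; the patented method is not limited to these choices.

```python
# Rough sketch: regions -> normalized patches -> wavelet dot products ->
# concatenated amplitudes as the descriptor vector. The wavelet choice and
# resampling are assumptions made for illustration.
import numpy as np

REF_SIZE = 16  # assumed reference-region size (pixels per side)

def normalize_region(image, box):
    """Crop a region and resample it to the reference size (nearest neighbor)."""
    x0, y0, x1, y1 = box
    patch = image[y0:y1, x0:x1].astype(float)
    ys = np.linspace(0, patch.shape[0] - 1, REF_SIZE).astype(int)
    xs = np.linspace(0, patch.shape[1] - 1, REF_SIZE).astype(int)
    return patch[np.ix_(ys, xs)]

def haar_wavelets(size):
    """Two simple Haar-like wavelets matched to the reference-region size."""
    horiz = np.ones((size, size)); horiz[:, size // 2:] = -1.0
    vert = np.ones((size, size)); vert[size // 2:, :] = -1.0
    return [horiz, vert]

def descriptor_vector(image, boxes):
    """Concatenate wavelet-response amplitudes over all normalized regions."""
    wavelets = haar_wavelets(REF_SIZE)
    amplitudes = []
    for box in boxes:
        region = normalize_region(image, box)
        for w in wavelets:
            amplitudes.append(abs(np.sum(region * w)))  # dot product, then amplitude
    return np.array(amplitudes)

# Example: a toy 32x32 image with two candidate regions.
img = np.random.rand(32, 32)
print(descriptor_vector(img, [(0, 0, 16, 16), (8, 8, 24, 24)]))
```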