Patents by Inventor Hartwig Adam

Hartwig Adam has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20130251217
    Abstract: A method and apparatus for creating and updating a facial image database from a collection of digital images is disclosed. A set of detected faces from a digital image collection is stored in a facial image database, along with data pertaining to them. At least one facial recognition template for each face in the first set is computed, and the images in the set are grouped according to the facial recognition template into similarity groups. Another embodiment is a naming tool for assigning names to a plurality of faces detected in a digital image collection. A facial image database stores data pertaining to facial images detected in images of a digital image collection.
    Type: Application
    Filed: December 17, 2012
    Publication date: September 26, 2013
    Inventors: Hartwig ADAM, Johannes Steffens, Keith Kiyohara, Hartmut Neven, Brian Westphal, Tobias Magnusson, Gavin Doughtie, Henry Benjamin, Michael Horowitz, Hong-Kien Kenneth Ong
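
The grouping step in publication 20130251217 lends itself to a small illustration. Below is a minimal sketch, not the patented implementation: it assumes a hypothetical compute_template() that turns a detected face into a fixed-length feature vector, and greedily merges faces whose templates fall within a distance threshold.

```python
# Minimal sketch (not the patented implementation): grouping detected faces
# into similarity groups by comparing facial-recognition templates.
# compute_template() is a hypothetical stand-in for any face-embedding model.
from math import dist  # Euclidean distance between two equal-length vectors

def compute_template(face_image):
    # Hypothetical: return a fixed-length feature vector for the face crop.
    raise NotImplementedError

def group_faces(face_images, threshold=0.6):
    """Greedy similarity grouping: each face joins the first group whose
    representative template is within `threshold`, else starts a new group."""
    groups = []  # each entry: {"representative": template, "members": [indices]}
    for idx, face in enumerate(face_images):
        template = compute_template(face)
        for group in groups:
            if dist(template, group["representative"]) < threshold:
                group["members"].append(idx)
                break
        else:
            groups.append({"representative": template, "members": [idx]})
    return groups
```
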
  • Patent number: 8532400
    Abstract: Aspects of the invention pertain to identifying whether or not an image from a user's device is of a place. Before undertaking time and resource consuming analysis of an image using specialized image analysis modules, pre-filtering classification is conducted based on image data and metadata associated with the image. The metadata may include geolocation information. One classification procedure analyzes the metadata to perform a high level determination as to whether the image is of a place. If the results indicate that it is of a place, then a further classification procedure may be performed, where the image information is analyzed, with or without the metadata. This process may be done concurrently with a place match filtering procedure. The results of the further classification will either find a match with a given place or not. The output is a place match either with or without geolocation information.
    Type: Grant
    Filed: July 24, 2012
    Date of Patent: September 10, 2013
    Assignee: Google Inc.
    Inventors: Boris Babenko, Hartwig Adam, John Flynn, Hartmut Neven
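
Patent 8532400 describes a cheap metadata pre-filter followed by a heavier image classifier. The sketch below captures only that staging; the heuristics and the classify_image_as_place() and match_place() helpers are assumptions for illustration, not the patented method.

```python
# Minimal sketch (assumptions throughout): a two-stage "is this photo of a
# place?" filter. The cheap metadata check runs first; the expensive image
# classifier runs only when the metadata looks promising.

def metadata_suggests_place(metadata):
    # Hypothetical heuristic: a GPS fix plus a wide-angle focal length.
    return metadata.get("has_gps", False) and metadata.get("focal_length_mm", 0) < 50

def classify_image_as_place(image_bytes):
    # Hypothetical stand-in for a full image classifier (slow, resource heavy).
    raise NotImplementedError

def match_place(image_bytes, metadata):
    if not metadata_suggests_place(metadata):
        return None                      # filtered out cheaply
    if not classify_image_as_place(image_bytes):
        return None                      # image content does not look like a place
    # A real system would now query a place-matching index; geolocation,
    # if present, narrows the candidate set.
    return {"place": "unknown", "geolocation": metadata.get("gps")}
```
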
  • Publication number: 20130202198
    Abstract: Methods and systems for automatic detection of landmarks in digital images and annotation of those images are disclosed. A method for detecting and annotating landmarks in digital images includes the steps of automatically assigning a tag descriptive of a landmark to one or more images in a plurality of text-associated digital images to generate a set of landmark-tagged images, learning an appearance model for the landmark from the set of landmark-tagged images, and detecting the landmark in a new digital image using the appearance model. The method can also include a step of annotating the new image with the tag descriptive of the landmark.
    Type: Application
    Filed: February 5, 2013
    Publication date: August 8, 2013
    Applicant: Google Inc.
    Inventors: Hartwig ADAM, Li Zhang
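
Publication 20130202198 describes tagging images from associated text, learning a per-landmark appearance model, and detecting the landmark in new images. A minimal sketch follows; extract_features() and the similarity function are hypothetical stand-ins for whatever descriptor and matcher a real system would use.

```python
# Minimal sketch under broad assumptions: build a per-landmark appearance
# model from text-tagged images, then apply it to a new image.

def extract_features(image):
    raise NotImplementedError  # e.g. local descriptors or an embedding

def learn_appearance_model(tagged_images):
    """tagged_images: list of (image, tag). Returns {tag: list of feature sets}."""
    model = {}
    for image, tag in tagged_images:
        model.setdefault(tag, []).append(extract_features(image))
    return model

def detect_landmark(image, model, similarity, threshold=0.8):
    """Return the best-matching landmark tag, or None below the threshold."""
    feats = extract_features(image)
    best_tag, best_score = None, 0.0
    for tag, exemplars in model.items():
        score = max(similarity(feats, ex) for ex in exemplars)
        if score > best_score:
            best_tag, best_score = tag, score
    return best_tag if best_score >= threshold else None
```
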
  • Publication number: 20130188886
    Abstract: A system and method of identifying objects is provided. In one aspect, the system and method includes a hand-held device with a display, camera and processor. As the camera captures images and displays them on the display, the processor compares the information retrieved in connection with one image with information retrieved in connection with subsequent images. The processor uses the result of such comparison to determine the object that is likely to be of greatest interest to the user. The display simultaneously displays the images as they are captured, the location of the object in an image, and information retrieved for the object.
    Type: Application
    Filed: December 4, 2012
    Publication date: July 25, 2013
    Inventors: David Petrou, Matthew Bridges, Shailesh Nalawadi, Hartwig Adam, Matthew R. Casey, Hartmut Neven, Andrew Harp
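
Publication 20130188886 compares information retrieved for successive camera frames to decide which object is likely of greatest interest. One plausible reading, sketched below under that assumption, is to rank candidate objects by how consistently they are recognized across a sliding window of recent frames.

```python
# Minimal sketch, not the device pipeline from the application: rank candidate
# objects by how consistently they are recognized across successive camera
# frames; the most persistent object is treated as the likely object of interest.
from collections import Counter

class InterestTracker:
    def __init__(self, window=10):
        self.window = window
        self.recent_frames = []          # list of sets of object ids, one per frame

    def update(self, recognized_objects):
        """recognized_objects: iterable of object ids found in the latest frame.
        Returns the id of the most persistently recognized object, or None."""
        self.recent_frames.append(set(recognized_objects))
        self.recent_frames = self.recent_frames[-self.window:]
        counts = Counter(obj for frame in self.recent_frames for obj in frame)
        return counts.most_common(1)[0][0] if counts else None
```
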
  • Publication number: 20130179303
    Abstract: A method and apparatus for enabling dynamic product and vendor identification and the display of relevant purchase information are described herein. According to embodiments of the invention, a recognition process is executed on sensor data captured via a mobile computing device to identify one or more items, and to identify at least one product associated with the one or more items. Product and vendor information for the at least one product is retrieved and displayed via the mobile computing device. In the event a user gesture is detected in response to displaying the product and vendor information data, processing logic may submit a purchase order for the product (e.g., for an online vendor) or contact the vendor (e.g., for an in-store vendor).
    Type: Application
    Filed: January 9, 2012
    Publication date: July 11, 2013
    Applicant: GOOGLE INC.
    Inventors: David Petrou, Hartwig Adam, Laura Garcia-Barrio, Hartmut Neven
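
The flow in publication 20130179303 (recognize a product, show product and vendor information, then act on a user gesture) can be outlined as a single handler. The sketch below is illustrative only; every callable it takes (recognize, lookup_offers, display, wait_for_gesture, submit_order, contact_vendor) is a hypothetical stand-in.

```python
# Minimal sketch of the flow described in the abstract, with hypothetical
# callables supplied by the caller in place of real device and vendor services.

def handle_capture(sensor_data, recognize, lookup_offers, display,
                   wait_for_gesture, submit_order, contact_vendor):
    product = recognize(sensor_data)            # e.g. image or audio recognition
    if product is None:
        return
    offers = lookup_offers(product)             # vendor and price information
    display(product, offers)
    gesture = wait_for_gesture()                # e.g. a tap on a displayed offer
    if gesture is None:
        return
    offer = offers[gesture.offer_index]
    if offer["vendor_type"] == "online":
        submit_order(product, offer)            # place a purchase order
    else:
        contact_vendor(offer)                   # e.g. call or show directions
```
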
  • Publication number: 20130138685
    Abstract: In one embodiment the present invention is a method for populating and updating a database of images of landmarks including geo-clustering geo-tagged images according to geographic proximity to generate one or more geo-clusters, and visual-clustering the one or more geo-clusters according to image similarity to generate one or more visual clusters. In another embodiment, the present invention is a system for identifying landmarks from digital images, including the following components: a database of geo-tagged images; a landmark database; a geo-clustering module; and a visual clustering module. In other embodiments the present invention may be a method of enhancing user queries to retrieve images of landmarks, or a method of automatically tagging a new digital image with text labels.
    Type: Application
    Filed: September 14, 2012
    Publication date: May 30, 2013
    Applicant: Google Inc.
    Inventors: Fernando A. Brucher, Ulrich Buddemeier, Hartwig Adam, Hartmut Neven
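
Publication 20130138685 chains two clustering passes: geographic proximity first, visual similarity second. The sketch below uses greedy single-pass clustering with an assumed degree-based radius and a caller-supplied similar() test; it mirrors the two-stage structure, not the patented algorithm.

```python
# Minimal sketch (assumed thresholds): cluster geo-tagged photos by proximity,
# then split each geo-cluster by visual similarity.
from math import dist

def geo_cluster(photos, radius_deg=0.01):
    """photos: list of dicts with a 'latlng' (lat, lng) tuple. Greedy clustering."""
    clusters = []
    for photo in photos:
        for cluster in clusters:
            if dist(photo["latlng"], cluster[0]["latlng"]) < radius_deg:
                cluster.append(photo)
                break
        else:
            clusters.append([photo])
    return clusters

def visual_cluster(photos, similar):
    """similar(a, b) -> bool is a hypothetical visual-similarity test."""
    clusters = []
    for photo in photos:
        for cluster in clusters:
            if similar(photo, cluster[0]):
                cluster.append(photo)
                break
        else:
            clusters.append([photo])
    return clusters

def build_landmark_clusters(photos, similar):
    # Visual clusters nested inside geo-clusters approximate landmark candidates.
    return [vc for gc in geo_cluster(photos) for vc in visual_cluster(gc, similar)]
```
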
  • Patent number: 8438163
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for automatically extracting logos from images. Methods include generating a query list including a plurality of logo search queries, for each logo search query of the plurality of logo search queries: generating a plurality of image search results, each image search result including image data, and clustering the plurality of image search results into a plurality of clusters, each cluster including a plurality of images of the plurality of image search results, extracting, for each cluster of the plurality of clusters, a representative image to provide a plurality of representative images, and a name corresponding to the representative image to provide a plurality of names, and providing the plurality of representative images and the plurality of names to a logo index, the logo index being accessible to identify one or more logo images in a query image.
    Type: Grant
    Filed: December 7, 2011
    Date of Patent: May 7, 2013
    Assignee: Google Inc.
    Inventors: Yuan Li, Hartwig Adam
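
Patent 8438163 describes a pipeline: run logo search queries, cluster the image results, pick a representative image and name per cluster, and feed them to a logo index. A compact sketch of that pipeline follows; the search, clustering, representative-selection, and indexing callables are all assumptions supplied by the caller.

```python
# Minimal sketch of the pipeline in the abstract; the search, clustering and
# indexing back ends are hypothetical callables.

def build_logo_index(logo_queries, image_search, cluster_images,
                     pick_representative, index_add):
    for query in logo_queries:                  # e.g. "<brand> logo"
        results = image_search(query)           # list of image search results
        for cluster in cluster_images(results): # group near-duplicate logo images
            image, name = pick_representative(cluster, query)
            index_add(image, name)              # logo index used later to spot
                                                # logos inside query images
```
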
  • Publication number: 20130066878
    Abstract: A method and apparatus for enabling virtual tags is described. The method may include receiving a first digital image data and virtual tag data to be associated with a real-world object in the first digital image data, wherein the first digital image data is captured by a first mobile device, and the virtual tag data includes metadata received from a user of the first mobile device. The method may also include generating a first digital signature from the first digital image data that describes the real-world object, and in response to the generation, inserting in substantially real-time the first digital signature into a searchable index of digital images. The method may also include storing, in a tag database, the virtual tag data and an association between the virtual tag data and the first digital signature inserted into the index of digital images.
    Type: Application
    Filed: November 12, 2012
    Publication date: March 14, 2013
    Inventors: John Flynn, Dragomir Anguelov, Hartmut Neven, Mark Cummins, James Philbin, Rafael Spring, Hartwig Adam, Anand Pillai
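
Publication 20130066878 (and the related grants below) pairs an image signature with user-supplied tag metadata. The sketch below collapses the described server infrastructure into an in-memory class; compute_signature() is a hypothetical image-to-signature function, and an exact dictionary lookup stands in for the approximate visual search a real index would perform.

```python
# Minimal sketch, assuming simple in-memory stores rather than the described
# server infrastructure.

class VirtualTagService:
    def __init__(self, compute_signature):
        self.compute_signature = compute_signature  # hypothetical image -> signature
        self.image_index = {}                       # signature -> image data
        self.tag_db = {}                            # signature -> list of tag metadata

    def add_tag(self, image_data, tag_metadata):
        signature = self.compute_signature(image_data)
        self.image_index[signature] = image_data        # searchable image index
        self.tag_db.setdefault(signature, []).append(tag_metadata)
        return signature

    def tags_for(self, image_data):
        # Exact lookup here; a real system would match signatures approximately.
        signature = self.compute_signature(image_data)
        return self.tag_db.get(signature, [])
```
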
  • Patent number: 8396287
    Abstract: Methods and systems for automatic detection of landmarks in digital images and annotation of those images are disclosed. A method for detecting and annotating landmarks in digital images includes the steps of automatically assigning a tag descriptive of a landmark to one or more images in a plurality of text-associated digital images to generate a set of landmark-tagged images, learning an appearance model for the landmark from the set of landmark-tagged images, and detecting the landmark in a new digital image using the appearance model. The method can also include a step of annotating the new image with the tag descriptive of the landmark.
    Type: Grant
    Filed: May 15, 2009
    Date of Patent: March 12, 2013
    Assignee: Google Inc.
    Inventors: Hartwig Adam, Li Zhang
  • Publication number: 20130036134
    Abstract: A method and apparatus for enabling a searchable history of real-world user experiences is described. The method may include capturing media data by a mobile computing device. The method may also include transmitting the captured media data to a server computer system, the server computer system to perform one or more recognition processes on the captured media data and add the captured media data to a history of real-world experiences of a user of the mobile computing device when the one or more recognition processes find a match. The method may also include transmitting a query of the user to the server computer system to initiate a search of the history of real-world experiences, and receiving results relevant to the query that include data indicative of the media data in the history of real-world experiences.
    Type: Application
    Filed: June 11, 2012
    Publication date: February 7, 2013
    Inventors: Hartmut Neven, David Petrou, Jacob Smullyan, Hartwig Adam
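
Publication 20130036134 describes a server that runs recognition on captured media, stores recognized media in a per-user history, and answers later queries against it. A minimal in-process sketch follows, with a hypothetical recognize() callable returning entity strings and plain method calls standing in for the client/server transport.

```python
# Minimal sketch of the described client/server exchange, collapsed into one
# in-memory object. recognize() is a hypothetical media -> [entity string] call.

class ExperienceHistoryServer:
    def __init__(self, recognize):
        self.recognize = recognize
        self.history = []               # list of {"user", "media", "entities"} dicts

    def ingest(self, user_id, media):
        entities = self.recognize(media)
        if entities:                    # keep only media the recognizers understood
            self.history.append({"user": user_id, "media": media,
                                 "entities": entities})

    def search(self, user_id, query):
        """Return history entries for this user whose entities mention the query."""
        return [item for item in self.history
                if item["user"] == user_id
                and any(query.lower() in e.lower() for e in item["entities"])]
```
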
  • Patent number: 8358811
    Abstract: A method and apparatus for creating and updating a facial image database from a collection of digital images is disclosed. A set of detected faces from a digital image collection is stored in a facial image database, along with data pertaining to them. At least one facial recognition template for each face in the first set is computed, and the images in the set are grouped according to the facial recognition template into similarity groups. Another embodiment is a naming tool for assigning names to a plurality of faces detected in a digital image collection. A facial image database stores data pertaining to facial images detected in images of a digital image collection.
    Type: Grant
    Filed: April 1, 2009
    Date of Patent: January 22, 2013
    Assignee: Google Inc.
    Inventors: Hartwig Adam, Johannes Steffens, Keith Kiyohara, Hartmut Neven, Brian Westphal, Tobias Magnusson, Gavin Doughtie, Henry Benjamin, Michael Horowitz, Hong-Kien Kenneth Ong
  • Publication number: 20130016899
    Abstract: Systems and methods for modeling the occurrence of common image components (e.g., sub-regions) in order to improve visual object recognition are disclosed. In one example, a query image may be matched to a training image of an object. A matched region within the training image to which the query image matches may be determined and a determination may be made whether the matched region is located within an annotated image component of the training image. When the matched region matches only to the image component, an annotation associated with the component may be identified. In another example, sub-regions within a plurality of training image corpora may be annotated as common image components including associated information (e.g., metadata). Matching sub-regions appearing in many training images of objects may be down-weighted in the matching process to reduce possible false matches to query images including common image components.
    Type: Application
    Filed: July 13, 2011
    Publication date: January 17, 2013
    Applicant: GOOGLE INC.
    Inventors: Yuan Li, Hartwig Adam
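
Publication 20130016899 down-weights sub-regions that match many training images so common components produce fewer false matches. The sketch below expresses that idea with an IDF-style weight; the specific formula is an assumption for illustration, not taken from the application.

```python
# Minimal sketch (IDF-style heuristic, assumed): regions that match many
# different training images get lower weight, so common image components
# contribute less to a match score.
from math import log

def region_weight(region_id, match_counts, num_training_images):
    """Fewer matches across the corpus -> higher weight, like inverse document frequency."""
    matches = match_counts.get(region_id, 0)
    return log((1 + num_training_images) / (1 + matches))

def score_match(matched_regions, match_counts, num_training_images):
    """Sum of per-region weights for the regions a query image matched."""
    return sum(region_weight(r, match_counts, num_training_images)
               for r in matched_regions)
```
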
  • Patent number: 8345921
    Abstract: Embodiments of this invention relate to detecting and blurring images. In an embodiment, a system detects objects in a photographic image. The system includes an object detector module configured to detect regions of the photographic image that include objects of a particular type at least based on the content of the photographic image. The system further includes a false positive detector module configured to determine whether each region detected by the object detector module includes an object of the particular type at least based on information about the context in which the photographic image was taken.
    Type: Grant
    Filed: May 11, 2009
    Date of Patent: January 1, 2013
    Assignee: Google Inc.
    Inventors: Andrea Frome, German Cheung, Ahmad Abdulkader, Marco Zennaro, Bo Wu, Alessandro Bissacco, Hartmut Neven, Luc Vincent, Hartwig Adam
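
Patent 8345921 separates an object detector from a context-aware false-positive filter. The sketch below shows that two-stage structure applied to face blurring; detect_faces(), is_plausible(), and blur_region() are hypothetical stand-ins, and the context argument could carry capture metadata such as camera height or heading.

```python
# Minimal sketch of the two-stage structure in the abstract: a detector
# proposes regions, a context-aware filter rejects implausible ones, and the
# survivors are blurred. All three callables are hypothetical.

def detect_and_blur(image, context, detect_faces, is_plausible, blur_region):
    """context: information about how the photo was taken (e.g. camera pose)."""
    regions = detect_faces(image)                            # candidate face regions
    kept = [r for r in regions if is_plausible(r, context)]  # drop likely false positives
    for region in kept:
        image = blur_region(image, region)
    return image
```
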
  • Patent number: 8332424
    Abstract: A method and apparatus for enabling virtual tags is described. The method may include receiving a first digital image data and virtual tag data to be associated with a real-world object in the first digital image data, wherein the first digital image data is captured by a first mobile device, and the virtual tag data includes metadata received from a user of the first mobile device. The method may also include generating a first digital signature from the first digital image data that describes the real-world object, and in response to the generation, inserting in substantially real-time the first digital signature into a searchable index of digital images. The method may also include storing, in a tag database, the virtual tag data and an association between the virtual tag data and the first digital signature inserted into the index of digital images.
    Type: Grant
    Filed: May 13, 2011
    Date of Patent: December 11, 2012
    Assignee: Google Inc.
    Inventors: John Flynn, Dragomir Anguelov, Hartmut Neven, Mark Cummins, James Philbin, Rafael Spring, Hartwig Adam, Anand Pillai
  • Publication number: 20120290591
    Abstract: A method and apparatus for enabling virtual tags is described. The method may include receiving a first digital image data and virtual tag data to be associated with a real-world object in the first digital image data, wherein the first digital image data is captured by a first mobile device, and the virtual tag data includes metadata received from a user of the first mobile device. The method may also include generating a first digital signature from the first digital image data that describes the real-world object, and in response to the generation, inserting in substantially real-time the first digital signature into a searchable index of digital images. The method may also include storing, in a tag database, the virtual tag data and an association between the virtual tag data and the first digital signature inserted into the index of digital images.
    Type: Application
    Filed: May 13, 2011
    Publication date: November 15, 2012
    Inventors: John Flynn, Dragomir Anguelov, Hartmut Neven, Mark Cummins, James Philbin, Rafael Spring, Hartwig Adam, Anand Pillai
  • Patent number: 8238671
    Abstract: Aspects of the invention pertain to identifying whether or not an image from a user's device is of a place. Before undertaking time and resource consuming analysis of an image using specialized image analysis modules, pre-filtering classification is conducted based on image data and metadata associated with the image. The metadata may include geolocation information. One classification procedure analyzes the metadata to perform a high level determination as to whether the image is of a place. If the results indicate that it is of a place, then a further classification procedure may be performed, where the image information is analyzed, with or without the metadata. This process may be done concurrently with a place match filtering procedure. The results of the further classification will either find a match with a given place or not. The output is a place match either with or without geolocation information.
    Type: Grant
    Filed: December 7, 2009
    Date of Patent: August 7, 2012
    Assignee: Google Inc.
    Inventors: Boris Babenko, Hartwig Adam, John Flynn, Hartmut Neven
  • Patent number: 8189964
    Abstract: Aspects of the invention pertain to matching a selected image/photograph against a database of reference images having location information. The image of interest may include some location information itself, such as latitude/longitude coordinates and orientation. However, the location information provided by a user's device may be inaccurate or incomplete. The image of interest is provided to a front end server, which selects one or more cells to match the image against. Each cell may have multiple images and an index. One or more cell match servers compare the image against specific cells based on information provided by the front end server. An index storage server maintains index data for the cells and provides them to the cell match servers. If a match is found, the front end server identifies the correct location and orientation of the received image, and may correct errors in an estimated location of the user device.
    Type: Grant
    Filed: December 7, 2009
    Date of Patent: May 29, 2012
    Assignee: Google Inc.
    Inventors: John Flynn, Ulrich Buddemeier, Henrik Stewenius, Hartmut Neven, Fernando Brucher, Hartwig Adam
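
Patent 8189964 splits image-based localization across a front end server, cell match servers, and an index storage server. The sketch below keeps that division of labor but collapses the servers into in-process objects; cells_near() and match_in_index() are hypothetical callables.

```python
# Minimal sketch of the cell-based matching flow, with in-process objects in
# place of the separate front-end, cell-match and index-storage servers.

class IndexStorage:
    def __init__(self, cell_indexes):
        self.cell_indexes = cell_indexes       # cell_id -> index of reference images

class CellMatcher:
    def __init__(self, index_storage, match_in_index):
        self.index_storage = index_storage
        self.match_in_index = match_in_index   # hypothetical (image, index) -> match or None

    def match(self, image, cell_id):
        return self.match_in_index(image, self.index_storage.cell_indexes[cell_id])

class FrontEnd:
    def __init__(self, matcher, cells_near):
        self.matcher = matcher
        self.cells_near = cells_near           # hypothetical (lat, lng) -> candidate cell ids

    def locate(self, image, approx_latlng):
        """Try each candidate cell; a match yields corrected location/orientation."""
        for cell_id in self.cells_near(approx_latlng):
            match = self.matcher.match(image, cell_id)
            if match:
                return match
        return None
```
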
  • Publication number: 20120114239
    Abstract: Aspects of the invention pertain to matching a selected image/photograph against a database of reference images having location information. The image of interest may include some location information itself, such as latitude/longitude coordinates and orientation. However, the location information provided by a user's device may be inaccurate or incomplete. The image of interest is provided to a front end server, which selects one or more cells to match the image against. Each cell may have multiple images and an index. One or more cell match servers compare the image against specific cells based on information provided by the front end server. An index storage server maintains index data for the cells and provides them to the cell match servers. If a match is found, the front end server identifies the correct location and orientation of the received image, and may correct errors in an estimated location of the user device.
    Type: Application
    Filed: January 18, 2012
    Publication date: May 10, 2012
    Applicant: GOOGLE INC.
    Inventors: John Flynn, Ulrich Buddemeier, Henrik Stewenius, Hartmut Neven, Fernando Brucher, Hartwig Adam
  • Publication number: 20110137895
    Abstract: A visual query is received from a client system, along with location information for the client system, and processed by a server system. The server system sends the visual query and the location information to a visual query search system, and receives from the visual query search system enhanced location information based on the visual query and the location information. The server system then sends a search query, including the enhanced location information, to a location-based search system. The server system receives one or more search results from the location-based search system and provides them to the client system.
    Type: Application
    Filed: August 12, 2010
    Publication date: June 9, 2011
    Inventors: David Petrou, John Flynn, Hartwig Adam, Hartmut Neven
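
Publication 20110137895 describes a simple relay: the server refines the client's location with a visual query search system, then runs a location-based search with the enhanced location. The sketch below mirrors those steps; both back ends are hypothetical callables and transport details are omitted.

```python
# Minimal sketch of the request flow in the abstract; visual_search() and
# local_search() are hypothetical back-end callables.

def handle_visual_query(visual_query, client_location, visual_search, local_search):
    # 1. Refine the client's location using the visual query.
    enhanced_location = visual_search(visual_query, client_location)
    # 2. Run a location-based search with the enhanced location.
    results = local_search(query=visual_query, location=enhanced_location)
    # 3. Return the results for delivery to the client system.
    return results
```
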
  • Publication number: 20110135207
    Abstract: Aspects of the invention pertain to matching a selected image/photograph against a database of reference images having location information. The image of interest may include some location information itself, such as latitude/longitude coordinates and orientation. However, the location information provided by a user's device may be inaccurate or incomplete. The image of interest is provided to a front end server, which selects one or more cells to match the image against. Each cell may have multiple images and an index. One or more cell match servers compare the image against specific cells based on information provided by the front end server. An index storage server maintains index data for the cells and provides them to the cell match servers. If a match is found, the front end server identifies the correct location and orientation of the received image, and may correct errors in an estimated location of the user device.
    Type: Application
    Filed: December 7, 2009
    Publication date: June 9, 2011
    Applicant: GOOGLE INC.
    Inventors: John Flynn, Ulrich Buddemeier, Henrik Stewenius, Hartmut Neven, Fernando Brucher, Hartwig Adam