Patents by Inventor Hartwig Adam

Hartwig Adam has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9721188
    Abstract: Methods and systems for automatic detection of landmarks in digital images and annotation of those images are disclosed. A method for detecting and annotating landmarks in digital images includes the steps of automatically assigning a tag descriptive of a landmark to one or more images in a plurality of text-associated digital images to generate a set of landmark-tagged images, learning an appearance model for the landmark from the set of landmark-tagged images, and detecting the landmark in a new digital image using the appearance model. The method can also include a step of annotating the new image with the tag descriptive of the landmark.
    Type: Grant
    Filed: April 10, 2015
    Date of Patent: August 1, 2017
    Assignee: Google Inc.
    Inventors: Hartwig Adam, Li Zhang
  • Publication number: 20170206439
    Abstract: Techniques for providing image search templates are provided. An image search template may be associated with an image search query to aid the user in capturing an image that will be appropriate for processing the search query. The template may be displayed as an overlay during an image capturing process to indicate an appropriate image capturing pose, range, angle, or other view characteristics that may provide more accurate search results. The template may also be used in the image search query to segment the image and identify features relevant to the search query. Images in an image database may be clustered using characteristics of the images or metadata associated with the images in order to establish groups of images from which templates may be derived. The generated templates may be provided to users to assist in capturing images to be used as search engine queries.
    Type: Application
    Filed: March 30, 2017
    Publication date: July 20, 2017
    Inventors: Troy Chinen, Ameesh Makadia, Corinna Cortes, Hartwig Adam, Nemanja Petrovic, Teresa Ko, Sebastian Pueblas
  • Publication number: 20170155850
    Abstract: Implementations of the present disclosure include actions of receiving image data of an image capturing a scene, receiving data describing one or more entities determined from the scene, determining one or more actions based on the one or more entities, each action being provided at least partly based on search results from searching the one or more entities, and providing instructions to display an action interface comprising one or more action elements, each action element being selectable to induce execution of a respective action, the action interface being displayed in a viewfinder.
    Type: Application
    Filed: February 9, 2017
    Publication date: June 1, 2017
    Inventors: Teresa Ko, Hartwig Adam, Mikkel Crone Koser, Alexei Masterov, Andrews-Junior Kimbembe, Matthew J. Bridges, Paul Chang, David Petrou, Adam Berenzweig
  • Patent number: 9652462
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for associating still images and videos. One method includes receiving a plurality of images and a plurality of videos and determining whether the images are related to the videos. The determining includes, for an image and a video, extracting features from the image and extracting features from frames of the video, and comparing the features to determine whether the image is related to the video. The method further includes maintaining a data store storing data associating each image with each video determined to be related to the image.
    Type: Grant
    Filed: April 29, 2011
    Date of Patent: May 16, 2017
    Assignee: Google Inc.
    Inventors: Ming Zhao, Yang Song, Hartwig Adam, Ullas Gargi, Yushi Jing, Henry A. Rowley
  • Patent number: 9639782
    Abstract: Techniques for providing image search templates are provided. An image search template may be associated with an image search query to aid the user in capturing an image that will be appropriate for processing the search query. The template may be displayed as an overlay during an image capturing process to indicate an appropriate image capturing pose, range, angle, or other view characteristics that may provide more accurate search results. The template may also be used in the image search query to segment the image and identify features relevant to the search query. Images in an image database may be clustered using characteristics of the images or metadata associated with the images in order to establish groups of images from which templates may be derived. The generated templates may be provided to users to assist in capturing images to be used as search engine queries.
    Type: Grant
    Filed: July 14, 2015
    Date of Patent: May 2, 2017
    Assignee: Google Inc.
    Inventors: Troy Chinen, Ameesh Makadia, Corinna Cortes, Hartwig Adam, Nemanja Petrovic, Teresa Ko, Sebastian Pueblas
  • Publication number: 20170091531
    Abstract: Implementations generally relate to face template balancing. In some implementations, a method includes generating face templates corresponding to respective images. The method also includes matching the images to a user based on the face templates. The method also includes receiving a determination that one or more matched images are mismatched images. The method also includes flagging one or more face templates corresponding to the one or more mismatched images as negative face templates.
    Type: Application
    Filed: October 7, 2016
    Publication date: March 30, 2017
    Applicant: Google Inc.
    Inventors: Jonathan McPhie, Hartwig Adam, Dan Fredinburg, Alexei Masterov
  • Patent number: 9600496
    Abstract: A system and computer-implemented method for associating images with semantic entities and providing search results using the semantic entities. An image database contains one or more source images associated with one or more images labels. A computer may generate one or more documents containing the labels associated with each image. Analysis may be performed on the one or more documents to associate the source images with semantic entities. The semantic entities may be used to provide search results. In response to receiving a target image as a search query, the target image may be compared with the source images to identify similar images. The semantic entities associated with the similar images may be used to determine a semantic entity for the target image. The semantic entity for the target image may be used to provide search results in response to the search initiated by the target image.
    Type: Grant
    Filed: September 10, 2015
    Date of Patent: March 21, 2017
    Assignee: Google Inc.
    Inventors: Maks Ovsjanikov, Yuan Li, Hartwig Adam, Charles Joseph Rosenberg
  • Patent number: 9600724
    Abstract: Implementations of the present disclosure include actions of receiving image data, the image data being provided from a camera and corresponding to a scene viewed by the camera, receiving one or more annotations, the one or more annotations being provided based on one or more entities determined from the scene, each annotation being associated with at least one entity, determining one or more actions based on the one or more annotations, and providing instructions to display an action interface including one or more action elements, each action element being selectable to induce execution of a respective action, the action interface being displayed in a viewfinder.
    Type: Grant
    Filed: February 10, 2015
    Date of Patent: March 21, 2017
    Assignee: Google Inc.
    Inventors: Teresa Ko, Hartwig Adam, Mikkel Crone Koser, Alexei Masterov, Andrews-Junior Kimbembe, Matthew J. Bridges, Paul Chang, David Petrou, Adam Berenzweig
  • Publication number: 20170024415
    Abstract: In one embodiment the present invention is a method for populating and updating a database of images of landmarks including geo-clustering geo-tagged images according to geographic proximity to generate one or more geo-clusters, and visual-clustering the one or more geo-clusters according to image similarity to generate one or more visual clusters. In another embodiment, the present invention is a system for identifying landmarks from digital images, including the following components: a database of geo-tagged images; a landmark database; a geo-clustering module; and a visual clustering module. In other embodiments the present invention may be a method of enhancing user queries to retrieve images of landmarks, or a method of automatically tagging a new digital image with text labels.
    Type: Application
    Filed: October 3, 2016
    Publication date: January 26, 2017
    Applicant: Google Inc.
    Inventors: Fernando A. Brucher, Ulrich Buddemeier, Hartwig Adam, Hartmut Neven
  • Publication number: 20160364414
    Abstract: A system and method of identifying objects is provided. In one aspect, the system and method includes a hand-held device with a display, camera and processor. As the camera captures images and displays them on the display, the processor compares the information retrieved in connection with one image with information retrieved in connection with subsequent images. The processor uses the result of such comparison to determine the object that is likely to be of greatest interest to the user. The display simultaneously displays the images as they are captured, the location of the object in an image, and information retrieved for the object.
    Type: Application
    Filed: August 25, 2016
    Publication date: December 15, 2016
    Inventors: David Petrou, Matthew J. Bridges, Shailesh Nalawadi, Hartwig Adam, Matthew R. Casey, Hartmut Neven, Andrew Harp
  • Patent number: 9483500
    Abstract: In one embodiment the present invention is a method for populating and updating a database of images of landmarks including geo-clustering geo-tagged images according to geographic proximity to generate one or more geo-clusters, and visual-clustering the one or more geo-clusters according to image similarity to generate one or more visual clusters. In another embodiment, the present invention is a system for identifying landmarks from digital images, including the following components: a database of geo-tagged images; a landmark database; a geo-clustering module; and a visual clustering module. In other embodiments the present invention may be a method of enhancing user queries to retrieve images of landmarks, or a method of automatically tagging a new digital image with text labels.
    Type: Grant
    Filed: April 6, 2015
    Date of Patent: November 1, 2016
    Assignee: Google Inc.
    Inventors: Fernando A. Brucher, Ulrich Buddemeier, Hartwig Adam, Hartmut Neven
  • Patent number: 9465977
    Abstract: Implementations generally relate to face template balancing. In some implementations, a method includes generating face templates corresponding to respective images. The method also includes matching the images to a user based on the face templates. The method also includes receiving a determination that one or more matched images are mismatched images. The method also includes flagging one or more face templates corresponding to the one or more mismatched images as negative face templates.
    Type: Grant
    Filed: July 17, 2014
    Date of Patent: October 11, 2016
    Assignee: Google Inc.
    Inventors: Jonathan McPhie, Hartwig Adam, Dan Fredinburg, Alexei Masterov
  • Patent number: 9460348
    Abstract: Methods and apparatus are disclosed for identifying a photo that lacks location metadata indicating where the photo was captured and determining a photo location to associate with the photo. In some implementations, a photo associated with a user is identified that includes metadata indicating a date and/or time it was captured, but lacks location data indicating where the photo was captured. In some versions of those implementations, a relationship of the metadata of the photo to at least one of a location date and a location time associated with a visit location of the user is determined. A photo location may be determined based on the visit location and associated with the photo. In some implementations, the visit location of the user may be determined independent of any location sensor.
    Type: Grant
    Filed: May 26, 2015
    Date of Patent: October 4, 2016
    Assignee: Google Inc.
    Inventors: Hartwig Adam, Jignashu Parikh
  • Publication number: 20160283779
    Abstract: Implementations generally relate to face template balancing. In some implementations, a method includes generating face templates corresponding to respective images. The method also includes matching the images to a user based on the face templates. The method also includes receiving a determination that one or more matched images are mismatched images. The method also includes flagging one or more face templates corresponding to the one or more mismatched images as negative face templates.
    Type: Application
    Filed: July 17, 2014
    Publication date: September 29, 2016
    Applicant: Google Inc.
    Inventors: Jonathan McPhie, Hartwig Adam, Dan Fredinburg, Alexei Masterov
  • Patent number: 9442957
    Abstract: A system and method of identifying objects is provided. In one aspect, the system and method includes a hand-held device with a display, camera and processor. As the camera captures images and displays them on the display, the processor compares the information retrieved in connection with one image with information retrieved in connection with subsequent images. The processor uses the result of such comparison to determine the object that is likely to be of greatest interest to the user. The display simultaneously displays the images as they are captured, the location of the object in an image, and information retrieved for the object.
    Type: Grant
    Filed: November 14, 2014
    Date of Patent: September 13, 2016
    Assignee: Google Inc.
    Inventors: David Petrou, Matthew J. Bridges, Shailesh Nalawadi, Hartwig Adam, Matthew R. Casey, Hartmut Neven, Andrew Harp
  • Patent number: 9442950
    Abstract: Systems and methods for a dynamic visual search engine are provided. In one example method, criteria used to partition a set of compressed image descriptors into multiple database shards may be determined. Additionally, a size of a dynamic index may be determined. The dynamic index may represent a dynamic number of images and may be configured to accept insertion of reference images into the dynamic index that can be searched against immediately. According to the method, an instruction to merge the uncompressed image descriptors of the dynamic index into the database shards of the compressed image descriptors may be received, and the uncompressed image descriptors of the dynamic index may be responsively merged into the database shards of the compressed image descriptors based on the criteria.
    Type: Grant
    Filed: October 29, 2014
    Date of Patent: September 13, 2016
    Assignee: Google Inc.
    Inventors: James William Philbin, Anand Pillai, John Flynn, Hartwig Adam
  • Patent number: 9424279
    Abstract: A system and computer-implemented method is provided for organizing multiple user submitted results responsive to an image query. A plurality of content submissions may be received from a variety of submitting users, each content submission including an image and an associated label. An image query may provide an image of an object as a request to identify the object. In response to receiving the image query, one or more results of the plurality of content submissions may be identified. A similarity between the labels for each of the one or more results may be determined and used to group the one or more results. Grouped results may be ranked and sorted for accurate and concise presentation to a querying user.
    Type: Grant
    Filed: March 8, 2013
    Date of Patent: August 23, 2016
    Assignee: Google Inc.
    Inventors: Yuan Li, Taehee Lee, Hartwig Adam
  • Patent number: 9406090
    Abstract: A method and apparatus for sharing captured media data via a social networking service is described herein. The method includes receiving a list of one or more media data files captured via a mobile (i.e., portable) computing device, the list to include, for each of the media data files, data identifying one or more real-world experiences of a user of the social networking service associated with the respective media data file. The method also includes transmitting information for providing an interface to the mobile computing device to enable the user to share one or more of the media data files in the social network with one or more other users by identifying each media data file with the data identifying a corresponding real-world experience. Furthermore, the method allows for the sharing of data related to the context of each corresponding real-world experience.
    Type: Grant
    Filed: January 9, 2012
    Date of Patent: August 2, 2016
    Assignee: Google Inc.
    Inventors: Hartwig Adam, Hartmut Neven, Laura Garcia-Barrio, David Petrou
  • Patent number: 9323792
    Abstract: This invention relates to building a landmark database from web data. In one embodiment, a computer-implemented method builds a landmark database. Web data including a web page is received from one or more websites via one or more networks. The web data is interpreted using at least one processor to determine landmark data describing a landmark. At least a portion of the landmark data identifies a landmark. Finally, a visual model is generated using the landmark data. A computing device is able to recognize the landmark in an image based on the visual model.
    Type: Grant
    Filed: June 25, 2014
    Date of Patent: April 26, 2016
    Assignee: Google Inc.
    Inventors: Ming Zhao, Yantao Zheng, Yang Song, Hartwig Adam
  • Publication number: 20160104324
    Abstract: Systems and methods for generating an augmented reality interface for generic activities are disclosed. The systems and methods may be directed to creating an augmented reality display for an activity performed on a surface. Given an image of the activity, an activity solver library and associated configuration information for the activity may be selected. The surface of the activity from the image may be rectified, forming a rectified image, from which activity state information may be extracted using the configuration information. The activity state information may be provided to the activity solver library to generate solution information, and elements indicating the solution information may be rendered in a perspective of the original image. By providing the configuration information associated with an activity solver library, an augmented reality interface can be generated for an activity by capturing an image of the activity.
    Type: Application
    Filed: October 16, 2015
    Publication date: April 14, 2016
    Inventors: Leon G. Palm, Hartwig Adam
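
Several of the entries above (for example, patent number 9483500 and publication number 20170024415) describe building a landmark database by first geo-clustering geo-tagged images by geographic proximity and then visual-clustering each geo-cluster by image similarity. The sketch below is only an illustrative rendering of that two-stage idea, not the patented implementation: the greedy clustering strategy, the distance thresholds, and the descriptor format are all assumptions introduced for the example.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt
from typing import List


@dataclass
class GeoTaggedImage:
    image_id: str
    lat: float
    lon: float
    features: List[float]  # visual descriptor from any feature extractor (assumed format)


def haversine_km(a: GeoTaggedImage, b: GeoTaggedImage) -> float:
    """Great-circle distance between two geo-tags, in kilometers."""
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))


def visual_distance(a: GeoTaggedImage, b: GeoTaggedImage) -> float:
    """Euclidean distance between visual descriptors (placeholder similarity measure)."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a.features, b.features)))


def greedy_cluster(items, dist, threshold):
    """Assign each item to the first cluster whose seed item is within `threshold`."""
    clusters = []
    for item in items:
        for cluster in clusters:
            if dist(cluster[0], item) <= threshold:
                cluster.append(item)
                break
        else:
            clusters.append([item])
    return clusters


def build_landmark_clusters(images, geo_threshold_km=1.0, visual_threshold=0.5):
    """Geo-cluster by proximity, then visual-cluster each geo-cluster by similarity."""
    landmark_clusters = []
    for geo_cluster in greedy_cluster(images, haversine_km, geo_threshold_km):
        landmark_clusters.extend(greedy_cluster(geo_cluster, visual_distance, visual_threshold))
    return landmark_clusters


if __name__ == "__main__":
    photos = [
        GeoTaggedImage("a", 48.8583, 2.2945, [0.10, 0.90]),   # Eiffel Tower
        GeoTaggedImage("b", 48.8584, 2.2950, [0.12, 0.88]),   # Eiffel Tower, similar view
        GeoTaggedImage("c", 40.6892, -74.0445, [0.80, 0.20]), # Statue of Liberty
    ]
    for cluster in build_landmark_clusters(photos):
        print([img.image_id for img in cluster])
```

Each resulting visual cluster stands in for one candidate landmark; a production system would use learned descriptors and far more robust clustering than this greedy, threshold-based example.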