Patents by Inventor Mikkel Crone Koser

Mikkel Crone Koser has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9990694
    Abstract: Certain embodiments of this disclosure include methods and devices for outputting a zoom sequence. According to one embodiment, a method is provided. The method may include: (i) determining first location information from first metadata associated with one or more images, wherein the first location information identifies a first location; and (ii) outputting, for display, a first zoom sequence based on the first location information, wherein the first zoom sequence may include a first plurality of mapped images of the first location from a first plurality of zoom levels and the plurality of mapped images are sequentially ordered by a magnitude of the zoom level. (A short code sketch of this flow follows this entry.)
    Type: Grant
    Filed: November 28, 2016
    Date of Patent: June 5, 2018
    Assignee: Google LLC
    Inventors: Thomas Weedon Hume, Mikkel Crone Köser, Tony Ferreira, Jeremy Lyon, Waldemar Ariel Baraldi, Bryan Mawhinney, Christopher James Smith, Lenka Trochtova, Andrei Popescu, David Ingram, Flavio Lerda, Michael Ananin, Vytautas Vaitukaitis, Marc Paulina
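
The zoom-sequence method summarized in the abstract above boils down to two steps: read a location from image metadata, then present mapped images of that location ordered by zoom level. Below is a minimal Kotlin sketch of that flow; ImageMetadata, ZoomFrame, and the mappedImageUri helper are hypothetical stand-ins, not types from the patent or from any real mapping API.

```kotlin
// Hypothetical stand-ins for per-image metadata and a mapped-image source.
data class ImageMetadata(val latitude: Double?, val longitude: Double?)
data class Location(val latitude: Double, val longitude: Double)
data class ZoomFrame(val location: Location, val zoomLevel: Int, val mappedImageUri: String)

// Placeholder for whatever service renders a mapped image of a location at a given zoom level.
fun mappedImageUri(location: Location, zoomLevel: Int): String =
    "maps://tile?lat=${location.latitude}&lng=${location.longitude}&z=$zoomLevel"

// (i) Determine first location information from the first metadata entry that carries coordinates.
fun locationFrom(metadata: List<ImageMetadata>): Location? =
    metadata.firstNotNullOfOrNull { m ->
        val lat = m.latitude
        val lng = m.longitude
        if (lat != null && lng != null) Location(lat, lng) else null
    }

// (ii) Build the zoom sequence: mapped images of that location, sequentially ordered by zoom magnitude.
fun buildZoomSequence(metadata: List<ImageMetadata>, zoomLevels: List<Int>): List<ZoomFrame> {
    val location = locationFrom(metadata) ?: return emptyList()
    return zoomLevels.sorted().map { z -> ZoomFrame(location, z, mappedImageUri(location, z)) }
}

fun main() {
    val metadata = listOf(ImageMetadata(55.6761, 12.5683))
    buildZoomSequence(metadata, listOf(12, 4, 8, 16)).forEach(::println)
}
```
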
  • Patent number: 9973705
    Abstract: Implementations of the present disclosure include actions of receiving image data of an image capturing a scene, receiving data describing one or more entities determined from the scene, determining one or more actions based on the one or more entities, each action being provided at least partly based on search results from searching the one or more entities, and providing instructions to display an action interface comprising one or more action elements, each action element being selectable to induce execution of a respective action, the action interface being displayed in a viewfinder. (A short code sketch of this flow follows this entry.)
    Type: Grant
    Filed: February 9, 2017
    Date of Patent: May 15, 2018
    Assignee: Google LLC
    Inventors: Teresa Ko, Hartwig Adam, Mikkel Crone Koser, Alexei Masterov, Andrews-Junior Kimbembe, Matthew J. Bridges, Paul Chang, David Petrou, Adam Berenzweig
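
As a rough illustration of the action-interface flow in the entry above (detect entities in the camera scene, search each entity, and surface selectable actions over the viewfinder), here is a minimal Kotlin sketch. The Entity, SearchResult, and ActionElement types and the search stub are hypothetical; a real implementation would call an asynchronous vision and search backend rather than these placeholders.

```kotlin
// Hypothetical types for entities recognized in the camera scene and the actions offered for them.
data class Entity(val name: String)
data class SearchResult(val title: String, val url: String)
class ActionElement(val label: String, val onSelect: () -> Unit)

// Placeholder for the search backend; a real implementation would be asynchronous.
fun search(entity: Entity): List<SearchResult> =
    listOf(SearchResult("About ${entity.name}", "https://example.com/search?q=${entity.name}"))

// Derive one action element per search result, ready to be rendered over the viewfinder;
// selecting an element induces execution of its action.
fun buildActionInterface(entities: List<Entity>): List<ActionElement> =
    entities.flatMap { entity ->
        search(entity).map { result ->
            ActionElement("Open: ${result.title}") { println("Navigating to ${result.url}") }
        }
    }

fun main() {
    val elements = buildActionInterface(listOf(Entity("Eiffel Tower"), Entity("croissant")))
    elements.forEach { it.onSelect() }
}
```
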
  • Patent number: 9756260
    Abstract: A method and system are disclosed for simulating different types of camera lenses on a device by guiding a user through a set of images to be captured in connection with one or more desired lens effects. In one aspect, a wide-angle lens may be simulated by capturing a plurality of images at a particular location over a set of camera orientations that are determined based on the selection of the wide-angle lens. The mobile device may provide prompts to the user indicating the camera orientations for which images should be captured in order to generate the simulated camera lens effect. (A short code sketch of this flow follows this entry.)
    Type: Grant
    Filed: September 9, 2016
    Date of Patent: September 5, 2017
    Assignee: Google Inc.
    Inventors: Scott Ettinger, David Lee, Evan Rapoport, Jacob Mintz, Bryan Feldman, Mikkel Crone Köser, Daniel Joseph Filip
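
The guided-capture idea in the entry above can be illustrated with a small Kotlin sketch: pick a lens effect, derive the camera orientations that cover its field of view, and prompt the user for each one. The LensEffect values, the default field-of-view and overlap numbers, and the centring math are assumptions made for illustration, not values taken from the patent.

```kotlin
import kotlin.math.ceil

// Illustrative lens effects, each defined by the field of view it should cover.
enum class LensEffect(val targetFovDegrees: Double) { WIDE_ANGLE(120.0), FISHEYE(180.0) }

// Compute the yaw orientations (degrees, centred on the first framing) at which the user should
// be prompted to capture images, assuming a fixed per-shot field of view and stitching overlap.
fun captureOrientations(
    effect: LensEffect,
    cameraFovDegrees: Double = 60.0,
    overlapDegrees: Double = 15.0,
): List<Double> {
    val step = cameraFovDegrees - overlapDegrees
    val shots = ceil((effect.targetFovDegrees - cameraFovDegrees) / step).toInt() + 1
    return List(shots) { i -> i * step - (shots - 1) * step / 2.0 }
}

fun main() {
    // The device would prompt the user to aim the camera at each of these yaw angles.
    captureOrientations(LensEffect.WIDE_ANGLE).forEach { yaw ->
        println("Capture an image at yaw %.1f degrees".format(yaw))
    }
}
```
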
  • Patent number: 9699488
    Abstract: Systems and techniques are provided for smart snap to interesting points in media content. A position control input may be received from a user to a control interface for a content player being used with a content item. A smart snap point and an associated smart snap area may be determined for the content item based on the received position control input. The smart snap point and the associated smart snap area may be stored. A second position control input to the control interface for the content player being used with the content item may be received. The second position control input may be determined to move a position indicator into the associated smart snap area for the smart snap point. Use of the content item may be resumed with the content player from the smart snap point. (A short code sketch of this flow follows this entry.)
    Type: Grant
    Filed: June 2, 2014
    Date of Patent: July 4, 2017
    Assignee: Google Inc.
    Inventor: Mikkel Crone Köser
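
A minimal Kotlin sketch of the smart-snap behaviour described above: record a snap point with a surrounding snap area from one scrub input, then snap a later seek to that point whenever the requested position falls inside the area. The fixed-width snap area, the millisecond units, and the choice to treat the scrub stop position as the snap point are illustrative assumptions.

```kotlin
// A smart snap point within a content item, with its surrounding snap area, in milliseconds.
data class SmartSnapPoint(val positionMs: Long, val areaStartMs: Long, val areaEndMs: Long)

// Determine and store a snap point from a position-control input; here the point is simply the
// position where the user stopped scrubbing, with a fixed-width snap area around it.
fun recordSnapPoint(stoppedAtMs: Long, areaHalfWidthMs: Long = 5_000): SmartSnapPoint =
    SmartSnapPoint(stoppedAtMs, stoppedAtMs - areaHalfWidthMs, stoppedAtMs + areaHalfWidthMs)

// For a later position-control input: if the position indicator moves into a stored snap area,
// resume from the snap point itself; otherwise resume from the requested position.
fun resolveSeek(requestedMs: Long, snapPoints: List<SmartSnapPoint>): Long =
    snapPoints.firstOrNull { requestedMs in it.areaStartMs..it.areaEndMs }?.positionMs ?: requestedMs

fun main() {
    val stored = listOf(recordSnapPoint(stoppedAtMs = 90_000))
    println(resolveSeek(87_500, stored)) // inside the snap area -> snaps to 90000
    println(resolveSeek(30_000, stored)) // outside any snap area -> stays at 30000
}
```
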
  • Publication number: 20170155850
    Abstract: Implementations of the present disclosure include actions of receiving image data of an image capturing a scene, receiving data describing one or more entities determined from the scene, determining one or more actions based on the one or more entities, each action being provided at least partly based on search results from searching the one or more entities, and providing instructions to display an action interface comprising one or more action elements, each action element being selectable to induce execution of a respective action, the action interface being displayed in a viewfinder.
    Type: Application
    Filed: February 9, 2017
    Publication date: June 1, 2017
    Inventors: Teresa Ko, Hartwig Adam, Mikkel Crone Koser, Alexei Masterov, Andrews-Junior Kimbembe, Matthew J. Bridges, Paul Chang, David Petrou, Adam Berenzweig
  • Publication number: 20170099437
    Abstract: The disclosed technology includes switching between a normal or standard-lens UI and a panoramic or wide-angle photography UI responsive to a zoom gesture. In one implementation, a user gesture corresponding to a “zoom-out” command, when received at a mobile computing device associated with a minimum zoom state, may trigger a switch from a standard lens photo capture UI to a wide-angle photography UI. In another implementation, a user gesture corresponding to a “zoom-in” command, when received at a mobile computing device associated with a nominal wide-angle state, may trigger a switch from a wide-angle photography UI to a standard lens photo capture UI. (A short code sketch of this flow follows this entry.)
    Type: Application
    Filed: December 15, 2016
    Publication date: April 6, 2017
    Inventors: Nirav Bipinchandra Mehta, Mikkel Crone Köser, David Singleton, Robert William Hamilton, Henry John Holland, Tony Ferreira, Thomas Weedon Hume
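
The UI-switching rule described in the abstract above is essentially a small state machine, sketched below in Kotlin. The enum names and the zoom-state model are hypothetical; they only illustrate the two transitions the abstract describes (zoom-out at minimum zoom opens the wide-angle UI, zoom-in at the nominal wide-angle state returns to the standard UI).

```kotlin
// Hypothetical capture UIs, zoom states, and gestures for the switching behaviour described above.
enum class CaptureUi { STANDARD_LENS, WIDE_ANGLE }
enum class ZoomState { MINIMUM_ZOOM, NOMINAL_WIDE_ANGLE, OTHER }
enum class ZoomGesture { ZOOM_IN, ZOOM_OUT }

// Zooming out while the standard UI is already at minimum zoom opens the wide-angle UI; zooming
// in from the nominal wide-angle state returns to the standard UI; anything else leaves the UI alone.
fun nextUi(current: CaptureUi, zoomState: ZoomState, gesture: ZoomGesture): CaptureUi = when {
    current == CaptureUi.STANDARD_LENS && zoomState == ZoomState.MINIMUM_ZOOM &&
        gesture == ZoomGesture.ZOOM_OUT -> CaptureUi.WIDE_ANGLE
    current == CaptureUi.WIDE_ANGLE && zoomState == ZoomState.NOMINAL_WIDE_ANGLE &&
        gesture == ZoomGesture.ZOOM_IN -> CaptureUi.STANDARD_LENS
    else -> current
}

fun main() {
    println(nextUi(CaptureUi.STANDARD_LENS, ZoomState.MINIMUM_ZOOM, ZoomGesture.ZOOM_OUT)) // WIDE_ANGLE
    println(nextUi(CaptureUi.WIDE_ANGLE, ZoomState.OTHER, ZoomGesture.ZOOM_IN))            // WIDE_ANGLE (no switch)
}
```
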
  • Patent number: 9600724
    Abstract: Implementations of the present disclosure include actions of receiving image data, the image data being provided from a camera and corresponding to a scene viewed by the camera, receiving one or more annotations, the one or more annotations being provided based on one or more entities determined from the scene, each annotation being associated with at least one entity, determining one or more actions based on the one or more annotations, and providing instructions to display an action interface including one or more action elements, each action element being selectable to induce execution of a respective action, the action interface being displayed in a viewfinder.
    Type: Grant
    Filed: February 10, 2015
    Date of Patent: March 21, 2017
    Assignee: Google Inc.
    Inventors: Teresa Ko, Hartwig Adam, Mikkel Crone Koser, Alexei Masterov, Andrews-Junior Kimbembe, Matthew J. Bridges, Paul Chang, David Petrou, Adam Berenzweig
  • Publication number: 20170076427
    Abstract: Certain embodiments of this disclosure include methods and devices for outputting a zoom sequence. According to one embodiment, a method is provided. The method may include: (i) determining first location information from first metadata associated with one or more images, wherein the first location information identifies a first location; and (ii) outputting, for display, a first zoom sequence based on the first location information, wherein the first zoom sequence may include a first plurality of mapped images of the first location from a first plurality of zoom levels and the plurality of mapped images are sequentially ordered by a magnitude of the zoom level.
    Type: Application
    Filed: November 28, 2016
    Publication date: March 16, 2017
    Inventors: Thomas Weedon Hume, Mikkel Crone Köser, Tony Ferreira, Jeremy Lyon, Waldemar Ariel Baraldi, Bryan Mawhinney, Christopher James Smith, Lenka Trochtova, Andrei Popescu, David Ingram, Flavio Lerda, Michael Ananin, Vytautas Vaitukaitis, Marc Paulina
  • Patent number: 9589321
    Abstract: Techniques for animating a view of a composite image based on metadata related to the capture of the underlying source images. According to certain implementations, the metadata may include timing or sensor data collected or generated during capture of the component source images. For example, the timing data may indicate an order or sequence in which the source images were captured. Accordingly, the corresponding regions of the composite panoramic image may be panned to in sequence, for example, using the Ken Burns Effect. In another example, sensor data from gyroscopes or accelerometers may be used to simulate the movement of the image capture device used to generate the source images. In another implementation, the source images may be associated with varying focal lengths or zoom levels. Accordingly, certain implementations may vary the zoom level, based on the metadata, while panning between source photos. (A short code sketch of this flow follows this entry.)
    Type: Grant
    Filed: April 24, 2014
    Date of Patent: March 7, 2017
    Assignee: Google Inc.
    Inventors: Thomas Weedon Hume, Mikkel Crone Köser
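
As a rough sketch of the animation idea above, the Kotlin snippet below orders the source images' regions of the composite by their capture timestamps and carries each frame's zoom level, which is the minimum needed for a Ken Burns-style pan-and-zoom pass. The Region, SourceFrame, and PanStep types are illustrative stand-ins; sensor-driven motion simulation is not modelled here.

```kotlin
// Illustrative types: the composite region each source image covers, when it was captured,
// and the zoom level (or focal-length proxy) it was captured at.
data class Region(val x: Int, val y: Int, val width: Int, val height: Int)
data class SourceFrame(val region: Region, val capturedAtMs: Long, val zoomLevel: Float)
data class PanStep(val region: Region, val zoomLevel: Float)

// Visit the composite regions in capture order (per the timing metadata), carrying each frame's
// zoom level so the virtual camera can vary zoom while it pans.
fun panSequence(frames: List<SourceFrame>): List<PanStep> =
    frames.sortedBy { it.capturedAtMs }.map { PanStep(it.region, it.zoomLevel) }

fun main() {
    val frames = listOf(
        SourceFrame(Region(2000, 0, 1000, 800), capturedAtMs = 30, zoomLevel = 1.0f),
        SourceFrame(Region(0, 0, 1000, 800), capturedAtMs = 10, zoomLevel = 1.5f),
        SourceFrame(Region(1000, 0, 1000, 800), capturedAtMs = 20, zoomLevel = 1.0f),
    )
    panSequence(frames).forEach(::println) // pans across the composite in capture order
}
```
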
  • Patent number: 9538078
    Abstract: The disclosed technology includes switching between a normal or standard-lens UI and a panoramic or wide-angle photography UI responsive to a zoom gesture. In one implementation, a user gesture corresponding to a “zoom-out” command, when received at a mobile computing device associated with a minimum zoom state, may trigger a switch from a standard lens photo capture UI to a wide-angle photography UI. In another implementation, a user gesture corresponding to a “zoom-in” command, when received at a mobile computing device associated with a nominal wide-angle state, may trigger a switch from a wide-angle photography UI to a standard lens photo capture UI.
    Type: Grant
    Filed: March 2, 2014
    Date of Patent: January 3, 2017
    Assignee: Google Inc.
    Inventors: Nirav Bipinchandra Mehta, Mikkel Crone Köser, David Singleton, Robert William Hamilton, Henry John Holland, Tony Ferreira, Thomas Weedon Hume
  • Publication number: 20160373796
    Abstract: A content annotation tool is disclosed. In a configuration, a portion of a movie may be obtained from a database. Entities, such as an actor, background music, text, etc., may be automatically identified in the movie. A user, such as a content producer, may associate and/or provide supplemental content for an identified entity to the database. A selection of one or more automatically identified entities may be received. A database entry may be generated that links the identified entity with the supplemental content. The selected automatically identified one or more entities and/or supplemental content associated therewith may be presented to an end user. (A short code sketch of this flow follows this entry.)
    Type: Application
    Filed: September 2, 2016
    Publication date: December 22, 2016
    Inventors: Henry Will Schneiderman, Michael Andrew Sipe, Marco Paglia, Mikkel Crone Köser
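
The annotation workflow in the entry above (automatic entity identification, producer-supplied supplemental content, and a stored link between the two) can be sketched in a few lines of Kotlin. The types and the in-memory AnnotationStore are hypothetical stand-ins for the database the abstract refers to.

```kotlin
// Illustrative model: entities automatically identified in a movie segment, supplemental content
// supplied by a producer, and a stored link between the two.
data class IdentifiedEntity(val id: String, val kind: String, val label: String)
data class SupplementalContent(val description: String, val url: String)
data class AnnotationEntry(val entityId: String, val content: SupplementalContent)

// In-memory stand-in for the database of entity/content links.
class AnnotationStore {
    private val entries = mutableListOf<AnnotationEntry>()
    fun link(entity: IdentifiedEntity, content: SupplementalContent) {
        entries += AnnotationEntry(entity.id, content)
    }
    fun forEntity(entityId: String): List<AnnotationEntry> = entries.filter { it.entityId == entityId }
}

fun main() {
    // Entities the automatic identification step might report for a portion of a movie.
    val detected = listOf(
        IdentifiedEntity("e1", "actor", "Lead actor"),
        IdentifiedEntity("e2", "music", "Opening theme"),
    )
    val store = AnnotationStore()
    // The producer selects an identified entity and attaches supplemental content to it.
    store.link(detected[1], SupplementalContent("Buy the soundtrack", "https://example.com/soundtrack"))
    println(store.forEntity("e2")) // what an end user viewing that portion could be shown
}
```
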
  • Patent number: 9508172
    Abstract: Certain embodiments of this disclosure include methods and devices for outputting a zoom sequence. According to one embodiment, a method is provided. The method may include: (i) determining first location information from first metadata associated with one or more images, wherein the first location information identifies a first location; and (ii) outputting, for display, a first zoom sequence based on the first location information, wherein the first zoom sequence may include a first plurality of mapped images of the first location from a first plurality of zoom levels and the plurality of mapped images are sequentially ordered by a magnitude of the zoom level.
    Type: Grant
    Filed: December 5, 2013
    Date of Patent: November 29, 2016
    Assignee: Google Inc.
    Inventors: Thomas Weedon Hume, Mikkel Crone Koser, Tony Ferreira, Jeremy Lyon, Waldemar Ariel Baraldi, Bryan Mawhinney, Christopher James Smith, Lenka Trochtova, Andrei Popescu, David Ingram, Flavio Lerda, Michael Ananin, Vytautas Vaitukaitis, Marc Paulina
  • Patent number: 9467620
    Abstract: A method and system are disclosed for simulating different types of camera lenses on a device by guiding a user through a set of images to be captured in connection with one or more desired lens effects. In one aspect, a wide-angle lens may be simulated by capturing a plurality of images at a particular location over a set of camera orientations that are determined based on the selection of the wide-angle lens. The mobile device may provide prompts to the user indicating the camera orientations for which images should be captured in order to generate the simulated camera lens effect.
    Type: Grant
    Filed: January 14, 2015
    Date of Patent: October 11, 2016
    Assignee: Google Inc.
    Inventors: Scott Ettinger, David Lee, Evan Rapoport, Jake Mintz, Bryan Feldman, Mikkel Crone Köser, Daniel Joseph Filip
  • Patent number: 9438947
    Abstract: A content annotation tool is disclosed. In a configuration, a portion of a movie may be obtained from a database. Entities, such as an actor, background music, text, etc. may be automatically identified in the movie. A user, such as a content producer, may associate and/or provide supplemental content for an identified entity to the database. A selection of one or more automatically identified entities may be received. A database entry may be generated that links the identified entity with the supplemental content. The selected automatically identified one or more entities and/or supplemental content associated therewith may be presented to an end user.
    Type: Grant
    Filed: May 1, 2013
    Date of Patent: September 6, 2016
    Assignee: Google Inc.
    Inventors: Henry Will Schneiderman, Michael Andrew Sipe, Marco Paglia, Mikkel Crone Köser
  • Patent number: 9304659
    Abstract: A preferred contact group centric interface for a communication device can be used to facilitate communications by a user. The user interface can be arranged to activate from a user's “home page” on the display, from an idle screen that is accessed after a timeout period expires, or any other appropriate mechanism that activates the preferred contact group centric experience. A user selects the preferred contact group from among an array of the user's contacts. Once the contact group is configured, a minimal number of navigation/selection features is necessary to activate any number of communication modes available to the contacts. The contact group is configured such that simple and quick navigation between the contact members is achieved. The contact group can be presented in 2D and 3D arrangements, in any number of list or geometric configurations. A pricing plan can optionally be tied to each member of the contact group. (A short code sketch of this flow follows this entry.)
    Type: Grant
    Filed: November 17, 2014
    Date of Patent: April 5, 2016
    Assignee: T-Mobile USA, Inc.
    Inventors: Andrew Sherrard, Warren McNeel, Jasdeep Singh Chugh, Stephen John O'Connor, Mikkel Crone Koser, Richard Paul Turnnidge, Michael Thomas Hendrick, Gary Sentman, Karl Warfel, Wen-Hsing Chang, Sally Abolrous, Adrian Buzescu
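
As a loose illustration of the preferred-contact-group idea above, the Kotlin sketch below models a small group of contacts, each exposing its available communication modes, with a single selection step to start communicating. The group-size limit, the CommMode values, and the API shape are assumptions for illustration only; pricing-plan association and the 2D/3D presentation are not modelled.

```kotlin
// Illustrative model of a preferred contact group: a small set of contacts, each exposing the
// communication modes that can be launched with minimal navigation from the group view.
enum class CommMode { CALL, SMS, EMAIL, IM }
data class Contact(val name: String, val modes: Set<CommMode>)

class PreferredGroup(val members: List<Contact>) {
    init { require(members.size in 1..5) { "Assume a small, user-selected group" } }

    // One selection step: pick a member and one of that member's modes, and communication starts.
    fun startCommunication(memberIndex: Int, mode: CommMode): String {
        val member = members[memberIndex]
        require(mode in member.modes) { "$mode is not available for ${member.name}" }
        return "Starting $mode with ${member.name}"
    }
}

fun main() {
    val group = PreferredGroup(
        listOf(
            Contact("Alice", setOf(CommMode.CALL, CommMode.SMS)),
            Contact("Bob", setOf(CommMode.CALL, CommMode.EMAIL, CommMode.IM)),
        )
    )
    println(group.startCommunication(memberIndex = 1, mode = CommMode.EMAIL))
}
```
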
  • Publication number: 20150350735
    Abstract: Systems and techniques are provided for smart snap to interesting points in media content. A position control input may be received from a user to a control interface for a content player being used with a content item. A smart snap point and an associated smart snap area may be determined for the content item based on the received position control input. The smart snap point and the associated smart snap area may be stored. A second position control input to the control interface for the content player being used with the content item may be received. The second position control input may be determined to move a position indicator into the associated smart snap area for the smart snap point. Use of the content item may be resumed with the content player from the smart snap point.
    Type: Application
    Filed: June 2, 2014
    Publication date: December 3, 2015
    Applicant: Google Inc.
    Inventor: Mikkel Crone Köser
  • Patent number: 9195720
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for receiving, from a user device, data indicating a user performed a user input gesture combining a first display object in a plurality of display objects with a second display object in the plurality of display objects; identifying attributes that are associated with both the first display object and the second display object; and performing a search based on the attributes. (A short code sketch of this flow follows this entry.)
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: November 24, 2015
    Assignee: Google Inc.
    Inventors: Henrique Dias Penha, Mark Brophy, Mathew Inwood, Mikkel Crone Koser, Thomas Jenkins, Adam Skory, Bjorn E. Bringert, Hugo B. Barra, Andrew Anderson Stewart, Robert W. Hamilton
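
A minimal Kotlin sketch of the combine-gesture search described above: each display object carries a set of attributes, and combining two objects triggers a search over the attributes they share. The DisplayObject type, the string attributes, and the runSearch stub are hypothetical placeholders for the real gesture handling and search backend.

```kotlin
// Illustrative model: each display object carries attributes; combining two objects with a
// gesture triggers a search over the attributes associated with both.
data class DisplayObject(val label: String, val attributes: Set<String>)

// Placeholder for the real search backend.
fun runSearch(terms: Set<String>): String = "search results for: ${terms.joinToString(" ")}"

// Handle the combine gesture: identify the attributes shared by both objects and search on them.
fun onCombineGesture(first: DisplayObject, second: DisplayObject): String {
    val shared = first.attributes intersect second.attributes
    return runSearch(shared)
}

fun main() {
    val beach = DisplayObject("beach.jpg", setOf("beach", "sunset", "2014"))
    val skyline = DisplayObject("city.jpg", setOf("sunset", "skyline", "2014"))
    println(onCombineGesture(beach, skyline)) // searches on the shared attributes: sunset, 2014
}
```
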
  • Patent number: D767616
    Type: Grant
    Filed: September 25, 2014
    Date of Patent: September 27, 2016
    Assignee: Google Inc.
    Inventors: Richard D. Jones, Mikkel Crone Koser, Andrews-Junior Kimbembe
  • Patent number: D769926
    Type: Grant
    Filed: September 25, 2014
    Date of Patent: October 25, 2016
    Assignee: Google Inc.
    Inventors: Mikkel Crone Koser, Andrews-Junior Kimbembe
  • Patent number: D770512
    Type: Grant
    Filed: September 25, 2014
    Date of Patent: November 1, 2016
    Assignee: Google Inc.
    Inventors: Mikkel Crone Koser, Andrews-Junior Kimbembe