Patents by Inventor Alejandro Kauffmann

Alejandro Kauffmann has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190130943
    Abstract: The present disclosure provides systems and methods that generate a scrubbing video that depicts a scrubbing operation performed on a first video. In particular, a user can directly manipulate video playback of the first video (e.g., by interacting with a touch-sensitive display screen) and the scrubbing video can depict such manipulation. As one example, the user can “scratch” his or her videos like a disc jockey (DJ) scratches records to produce scrubbing videos that are remixes (e.g., looping remixes) of his or her original videos. Thus, the systems and methods of the present disclosure enable a user to capture and edit a new type of video that allows the user to directly manipulate its timeline, producing fun and creative results.
    Type: Application
    Filed: October 24, 2018
    Publication date: May 2, 2019
    Inventors: Alejandro Kauffmann, Andrew Dahley, Mark Ferdinand Bowers, William Lindmeier, Ashley Ma
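The abstract above describes directly manipulating a video's playback position and recording that manipulation as a new "scrubbing" video. As a rough illustration of the general idea only, not the patented implementation, the following Python sketch maps a hypothetical stream of normalized touch positions onto frame indices of a source clip and collects the frames visited:

```python
def scrub_video(frames, touch_positions):
    """Map normalized touch positions (0.0-1.0) onto frame indices of the
    source video and return the sequence of frames visited, i.e. a new
    "scrubbing" video that replays the user's manipulation of the timeline."""
    if not frames:
        return []
    last_index = len(frames) - 1
    scrubbed = []
    for x in touch_positions:
        x = min(max(x, 0.0), 1.0)      # clamp to the valid range
        index = round(x * last_index)  # touch position -> frame index
        scrubbed.append(frames[index])
    return scrubbed

# Example: "scratching" back and forth over a ten-frame clip.
source = [f"frame_{i}" for i in range(10)]
gesture = [0.0, 0.3, 0.6, 0.4, 0.2, 0.5, 0.9, 0.7, 1.0]
print(scrub_video(source, gesture))
```

Looping the resulting frame sequence would give the "looping remix" effect the abstract mentions.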
  • Publication number: 20190130192
    Abstract: The present disclosure provides systems and methods that generate a summary storyboard from a plurality of image frames. An example computer-implemented method can include inputting a plurality of image frames into a machine-learned model and receiving as an output of the machine-learned model, object data that describes the respective locations of a plurality of objects recognized in the plurality of image frames. The method can include generating a plurality of image crops that respectively include the plurality of objects and arranging two or more of the plurality of image crops to generate a storyboard.
    Type: Application
    Filed: October 31, 2017
    Publication date: May 2, 2019
    Inventors: Alejandro Kauffmann, Andrew Dahley, Phuong Le, Mark Bowers, Ignacio Garcia Dorado, Robin Debreuil, William Lindmeier, Brian Allen, Ashley Ma, Pascal Getreuer
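The abstract above outlines a pipeline: a machine-learned model reports object locations in a set of frames, crops are cut around those objects, and the crops are arranged into a storyboard. A minimal sketch of that flow, assuming numpy-style image arrays and a hypothetical detect_objects callable standing in for the model:

```python
import numpy as np

def build_storyboard(frames, detect_objects, columns=3):
    """Crop each detected object out of its frame and arrange the crops
    into a grid of rows, which acts as a simple storyboard layout."""
    crops = []
    for frame in frames:
        for (x0, y0, x1, y1) in detect_objects(frame):  # model output: boxes
            crops.append(frame[y0:y1, x0:x1])           # one crop per object
    # Arrange the crops row by row, `columns` panels per row.
    return [crops[i:i + columns] for i in range(0, len(crops), columns)]

# Stand-in "model" that always reports one centered object per frame.
def fake_detector(frame):
    h, w = frame.shape[:2]
    return [(w // 4, h // 4, 3 * w // 4, 3 * h // 4)]

frames = [np.zeros((240, 320, 3), dtype=np.uint8) for _ in range(5)]
storyboard = build_storyboard(frames, fake_detector, columns=2)
print([len(row) for row in storyboard])  # -> [2, 2, 1]
```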
  • Patent number: 10055642
    Abstract: A computer-implemented method includes detecting, at a wearable computing device, a first direction of a first stare, wherein the wearable computing device includes a head-mountable display unit, identifying a target based on the detected first direction, and based on a determination that a first time duration of the first stare is greater than or equal to a first predetermined time threshold, identifying information relevant to the target and displaying the identified information on the display unit. Subsequent to displaying the identified information, the method includes detecting a second stare that is directed at the target or at the displayed information, and based on a determination that a second time duration of the second stare is greater than or equal to a second predetermined time threshold, identifying additional information relevant to the target, and displaying the additional information on the display unit.
    Type: Grant
    Filed: February 8, 2017
    Date of Patent: August 21, 2018
    Assignee: Google LLC
    Inventors: Luis Ricardo Prada Gomez, Alejandro Kauffmann
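The method above hinges on two dwell-time thresholds: a first stare long enough to trigger basic information about a target, and a second stare long enough to trigger additional information. A small sketch of that decision logic, with threshold values chosen arbitrarily for illustration:

```python
def gaze_info_level(first_stare_s, second_stare_s=None,
                    first_threshold_s=1.0, second_threshold_s=2.0):
    """Return which level of target information would be shown for the
    given stare durations (in seconds): none, basic, or additional."""
    if first_stare_s < first_threshold_s:
        return "none"        # first stare too short: show nothing
    if second_stare_s is None or second_stare_s < second_threshold_s:
        return "basic"       # first threshold met: show relevant information
    return "additional"      # second threshold met: show additional information

print(gaze_info_level(0.4))        # none
print(gaze_info_level(1.2))        # basic
print(gaze_info_level(1.2, 2.5))   # additional
```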
  • Publication number: 20170147880
    Abstract: A computer-implemented method includes detecting, at a wearable computing device, a first direction of a first stare, wherein the wearable computing device includes a head-mountable display unit, identifying a target based on the detected first direction, and based on a determination that a first time duration of the first stare is greater than or equal to a first predetermined time threshold, identifying information relevant to the target and displaying the identified information on the display unit. Subsequent to displaying the identified information, the method includes detecting a second stare that is directed at the target or at the displayed information, and based on a determination that a second time duration of the second stare is greater than or equal to a second predetermined time threshold, identifying additional information relevant to the target, and displaying the additional information on the display unit.
    Type: Application
    Filed: February 8, 2017
    Publication date: May 25, 2017
    Inventors: Luis Ricardo Prada Gomez, Alejandro Kauffmann
  • Patent number: 9600721
    Abstract: A computer-implemented method includes detecting, at a wearable computing device, a first direction of a first stare, wherein the wearable computing device includes a head-mountable display unit, identifying a target based on the detected first direction, and based on a determination that a first time duration of the first stare is greater than or equal to a first predetermined time threshold, identifying information relevant to the target and displaying the identified information on the display unit. Subsequent to displaying the identified information, the method includes detecting a second stare that is directed at the target or at the displayed information, and based on a determination that a second time duration of the second stare is greater than or equal to a second predetermined time threshold, identifying additional information relevant to the target, and displaying the additional information on the display unit.
    Type: Grant
    Filed: July 2, 2015
    Date of Patent: March 21, 2017
    Assignee: Google Inc.
    Inventors: Luis Ricardo Prada Gomez, Alejandro Kauffmann
  • Patent number: 9501151
    Abstract: A method to provide simultaneous interaction with content while not disturbing the content being provided is disclosed. Content may be provided to a group of users. At least one of the users may make a gesture. The gesture may be associated with a user identifier and with a content identifier. An event may be stored based on the gesture from the at least one of the users, the user identifier, and the content identifier. The event may be selected from the group consisting of: a vote, a purchase decision, a modification of content, an adjustment of a device setting, or a bookmark. A notice may be provided to the at least one user to indicate that the action requested by the gesture was performed.
    Type: Grant
    Filed: February 13, 2013
    Date of Patent: November 22, 2016
    Assignee: Google Inc.
    Inventors: Christian Plagemann, Alejandro Kauffmann
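The abstract above treats a recognized gesture as a lightweight event, tied to a user identifier and a content identifier, stored without interrupting the shared content, and acknowledged with a notice. A minimal sketch under those assumptions; the gesture-to-event mapping and identifiers here are made up for the example:

```python
from dataclasses import dataclass

# Hypothetical mapping from recognized gestures to stored event types.
GESTURE_EVENTS = {
    "thumbs_up": "vote",
    "swipe_right": "purchase_decision",
    "pinch": "bookmark",
}

@dataclass
class Event:
    user_id: str
    content_id: str
    event_type: str

event_log = []

def handle_gesture(gesture, user_id, content_id):
    """Store an event for the gesture and return a notice for the user,
    without disturbing the content being shown to the group."""
    event_type = GESTURE_EVENTS.get(gesture)
    if event_type is None:
        return f"Gesture '{gesture}' not recognized."
    event_log.append(Event(user_id, content_id, event_type))
    return f"{event_type} recorded for {user_id} on {content_id}."

print(handle_gesture("thumbs_up", "user-42", "movie-7"))
```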
  • Patent number: 9477302
    Abstract: Aspects of the present disclosure relate to controlling the functions of various devices based on spatial relationships. In one example, a system may include a depth and visual camera and a computer (networked or local) for processing data from the camera. The computer may be connected (wired or wirelessly) to any number of devices that can be controlled by the system. A user may use a mobile device to define a volume of space relative to the camera. The volume of space may then be associated with a controlled device as well as one or more control commands. When the volume of space is subsequently occupied, the one or more control commands may be used to control the controlled device. In this regard, a user may switch a device on or off, increase volume or speed, etc. simply by occupying the volume of space.
    Type: Grant
    Filed: August 10, 2012
    Date of Patent: October 25, 2016
    Assignee: Google Inc.
    Inventors: Alejandro Kauffmann, Aaron Joseph Wheeler, Liang-Yu Chi, Hendrik Dahlkamp, Varun Ganapathi, Yong Zhao, Christian Plagemann
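The abstract above associates a user-defined volume of space with a controlled device and one or more commands, then fires those commands whenever the volume is occupied. A compact sketch of that occupancy check, assuming axis-aligned boxes in camera coordinates and a caller-supplied send_command function:

```python
from dataclasses import dataclass

@dataclass
class VolumeBinding:
    """An axis-aligned box (camera coordinates, meters) bound to one command."""
    min_corner: tuple
    max_corner: tuple
    device: str
    command: str

def occupied(points, box):
    """True if any 3D point reported by the depth camera lies inside the box."""
    (x0, y0, z0), (x1, y1, z1) = box.min_corner, box.max_corner
    return any(x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1
               for (x, y, z) in points)

def dispatch(points, bindings, send_command):
    """Send each binding's command to its device whenever its volume is occupied."""
    for binding in bindings:
        if occupied(points, binding):
            send_command(binding.device, binding.command)

lamp_zone = VolumeBinding((0.0, 0.0, 1.0), (0.3, 0.3, 1.5), "lamp", "toggle_power")
dispatch([(0.1, 0.2, 1.2)], [lamp_zone], lambda dev, cmd: print(f"{dev}: {cmd}"))
```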
  • Patent number: 9317721
    Abstract: A privacy indicator is provided that shows whether sensor data are being processed in a private or non-private mode. When sensor data are used only for controlling a device locally, the data may be processed in a private mode, which may be shown by setting the privacy indicator to a first color. When sensor data are being sent to a remote site, the data may be processed in a non-private mode, which may be shown by setting the privacy indicator to a second color. The privacy mode may be determined by processing a command in accordance with a privacy policy that determines whether the command is on a privacy whitelist, blacklist, or greylist, or is not present in a privacy command library. A non-private command may be blocked.
    Type: Grant
    Filed: October 31, 2012
    Date of Patent: April 19, 2016
    Assignee: Google Inc.
    Inventors: Christian Plagemann, Abraham Murray, Hendrik Dahlkamp, Alejandro Kauffmann, Varun Ganapathi
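The abstract above determines a privacy mode by checking a command against whitelist, blacklist, and greylist entries in a privacy policy, and drives an indicator color from the result. The sketch below illustrates that classification; the list contents, the green/red colors, and the allow_remote flag are all assumptions for the example (the abstract only speaks of a first and a second color):

```python
# Hypothetical privacy-policy lists for recognized commands.
WHITELIST = {"volume_up", "pause"}   # handled entirely on-device -> private
BLACKLIST = {"upload_clip"}          # always sends sensor data to a remote site
GREYLIST = {"voice_search"}          # private or not, depending on settings

def classify_command(command, allow_remote=False):
    """Return (privacy_mode, indicator_color, blocked) for a command."""
    if command in WHITELIST:
        return ("private", "green", False)
    if command in BLACKLIST:
        # Data would leave the device; block it unless remote use is allowed.
        return ("non-private", "red", not allow_remote)
    if command in GREYLIST:
        if allow_remote:
            return ("non-private", "red", False)
        return ("private", "green", False)
    return ("unknown", "red", True)  # not in the command library: block it

print(classify_command("pause"))        # ('private', 'green', False)
print(classify_command("upload_clip"))  # ('non-private', 'red', True)
```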
  • Publication number: 20160011724
    Abstract: Methods and devices for providing a user-interface are disclosed. In one embodiment, the method comprises receiving data corresponding to a first position of a wearable computing device and responsively causing the wearable computing device to provide a user-interface. The user-interface comprises a view region and a menu, where the view region substantially fills a field of view of the wearable computing device and the menu is not fully visible in the view region. The method further comprises receiving data indicating a selection of an item present in the view region and causing an indicator to be displayed in the view region, wherein the indicator changes incrementally over a length of time. When the length of time has passed, the method comprises responsively causing the wearable computing device to select the item.
    Type: Application
    Filed: March 2, 2012
    Publication date: January 14, 2016
    Applicant: Google Inc.
    Inventors: Aaron Joseph Wheeler, Sergey Brin, Thad Eugene Starner, Alejandro Kauffmann, Cliff L. Biffle, Liang-Yu (Tom) Chi, Steve Lee, Sebastian Thrun, Luis Ricardo Prada Gomez
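The interface described above selects an item only after an on-screen indicator has changed incrementally for a set length of time, which amounts to a dwell timer. A toy sketch of that behavior, with the dwell time, step count, and text-based indicator invented for the example:

```python
import time

def dwell_select(item, dwell_seconds=2.0, steps=10, draw=print):
    """Fill an indicator incrementally while an item stays in focus; once the
    full dwell time has elapsed, the item is selected and returned."""
    for step in range(1, steps + 1):
        time.sleep(dwell_seconds / steps)  # real code would keep polling focus
        draw(f"[{'#' * step}{'.' * (steps - step)}] {item}")
    return item                            # dwell complete: select the item

selected = dwell_select("Take photo", dwell_seconds=0.5)
print("Selected:", selected)
```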
  • Publication number: 20150379349
    Abstract: A computer-implemented method includes detecting, at a wearable computing device, a first direction of a first stare, wherein the wearable computing device includes a head-mountable display unit, identifying a target based on the detected first direction, and based on a determination that a first time duration of the first stare is greater than or equal to a first predetermined time threshold, identifying information relevant to the target and displaying the identified information on the display unit. Subsequent to displaying the identified information, the method includes detecting a second stare that is directed at the target or at the displayed information, and based on a determination that a second time duration of the second stare is greater than or equal to a second predetermined time threshold, identifying additional information relevant to the target, and displaying the additional information on the display unit.
    Type: Application
    Filed: July 2, 2015
    Publication date: December 31, 2015
    Inventors: Luis Ricardo Prada Gomez, Alejandro Kauffmann
  • Patent number: 9159116
    Abstract: Voice commands and gesture recognition are two mechanisms by which an individual may interact with content such as that on a display. In an implementation, interactivity of a user with content on a device or display may be modified based on the distance between a user and the display. An attribute such as a user profile may be used to tailor the modification of the display to an individual user. In some configurations, the commands available to the user may also be modified based on the determined distance between the user and a device or display.
    Type: Grant
    Filed: February 13, 2013
    Date of Patent: October 13, 2015
    Assignee: Google Inc.
    Inventors: Christian Plagemann, Alejandro Kauffmann
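The abstract above modifies both the presentation of content and the set of available commands according to the user's distance from the display, optionally tailored by a user profile. A small sketch of one way to express that, where the distance bands, command sets, and profile field are invented for the example:

```python
# Hypothetical distance bands (meters) and the interaction modes they enable.
DISTANCE_MODES = [
    (1.0, {"mode": "touch", "commands": {"tap", "swipe", "voice", "gesture"}}),
    (3.0, {"mode": "gesture", "commands": {"voice", "gesture"}}),
    (float("inf"), {"mode": "voice", "commands": {"voice"}}),
]

def interaction_for(distance_m, profile=None):
    """Pick the interaction mode for a user's distance and scale the UI,
    optionally using a per-user profile (e.g. preferred text growth rate)."""
    for limit, settings in DISTANCE_MODES:
        if distance_m <= limit:
            rate = (profile or {}).get("text_scale_per_m", 0.25)
            return {**settings, "ui_scale": round(1.0 + distance_m * rate, 2)}

print(interaction_for(0.6))
print(interaction_for(2.5, {"text_scale_per_m": 0.4}))
```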
  • Patent number: 9096920
    Abstract: A computer-implemented method includes detecting, at a wearable computing device, a first direction of a first stare, wherein the wearable computing device includes a head-mountable display unit, identifying a target based on the detected first direction, and based on a determination that a first time duration of the first stare is greater than or equal to a first predetermined time threshold, identifying information relevant to the target and displaying the identified information on the display unit. Subsequent to displaying the identified information, the method includes detecting a second stare that is directed at the target or at the displayed information, and based on a determination that a second time duration of the second stare is greater than or equal to a second predetermined time threshold, identifying additional information relevant to the target, and displaying the additional information on the display unit.
    Type: Grant
    Filed: March 22, 2012
    Date of Patent: August 4, 2015
    Assignee: Google Inc.
    Inventors: Luis Ricardo Prada Gomez, Alejandro Kauffmann
  • Publication number: 20150193098
    Abstract: Methods and systems disclosed herein relate to an action that could proceed or be dismissed in response to an affirmative or negative input, respectively. An example method could include displaying, using a head-mountable device, a graphical interface that presents a graphical representation of an action. The action could relate to at least one of a contact, a contact's avatar, a media file, a digital file, a notification, and an incoming communication. The example method could further include receiving a binary selection from among an affirmative input and a negative input. The example method may additionally include proceeding with the action in response to the binary selection being the affirmative input and dismissing the action in response to the binary selection being the negative input.
    Type: Application
    Filed: March 23, 2012
    Publication date: July 9, 2015
    Applicant: GOOGLE INC.
    Inventors: Alejandro Kauffmann, Hayes Solos Raffle, Aaron Joseph Wheeler, Luis Ricardo Prada Gomez, Steven John Lee
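The interaction above reduces to a single binary choice: an affirmative input lets the presented action proceed, a negative input dismisses it. A very small sketch of that dispatch, with the callbacks supplied by the caller:

```python
def handle_action(action, is_affirmative, proceed, dismiss):
    """Proceed with or dismiss an action based on one binary input
    (for example, a head nod versus a head shake on a head-mountable device)."""
    return proceed(action) if is_affirmative else dismiss(action)

print(handle_action("Accept call", True,
                    proceed=lambda a: f"{a}: proceeding",
                    dismiss=lambda a: f"{a}: dismissed"))
```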
  • Publication number: 20150169066
    Abstract: A method to provide simultaneous interaction with content while not disturbing the content being provided is disclosed. Content may be provided to a group of users. At least one of the users may make a gesture. The gesture may be associated with a user identifier and with a content identifier. An event may be stored based on the gesture from the at least one of the users, the user identifier, and the content identifier. The event may be selected from the group consisting of: a vote, a purchase decision, a modification of content, an adjustment of a device setting, or a bookmark. A notice may be provided to the at least one user to indicate that the action requested by the gesture was performed.
    Type: Application
    Filed: February 13, 2013
    Publication date: June 18, 2015
    Inventors: Christian Plagemann, Alejandro Kauffmann
  • Publication number: 20150153822
    Abstract: Aspects of the present disclosure relate to controlling the functions of various devices based on spatial relationships. In one example, a system may include a depth and visual camera and a computer (networked or local) for processing data from the camera. The computer may be connected (wired or wirelessly) to any number of devices that can be controlled by the system. A user may use a mobile device to define a volume of space relative to the camera. The volume of space may then be associated with a controlled device as well as one or more control commands. When the volume of space is subsequently occupied, the one or more control commands may be used to control the controlled device. In this regard, a user may switch a device on or off, increase volume or speed, etc. simply by occupying the volume of space.
    Type: Application
    Filed: August 10, 2012
    Publication date: June 4, 2015
    Applicant: GOOGLE INC.
    Inventors: Alejandro Kauffmann, Aaron Joseph Wheeler, Liang-Yu Chi, Hendrik Dahlkamp, Varun Ganapathi, Yong Zhao, Christian Plagemann
  • Publication number: 20150006669
    Abstract: The disclosed technology may include systems, methods, and apparatus for directing information flow. According to an example implementation, a method is provided that includes receiving, at a first server, identification information for one or more computing devices capable of communication with the first server; receiving one or more images and an indication of a gesture performed by a first person; associating a first computing device with the first person; identifying a second computing device; determining, based on the indication of the gesture and on the received identification information, that the gesture is associated with an intent to transfer information between the first computing device and the second computing device, and which from among the first and second computing devices is an intended recipient device; and sending, to the intended recipient device, content information associated with a user credential of the first person.
    Type: Application
    Filed: July 1, 2013
    Publication date: January 1, 2015
    Inventors: Alejandro Kauffmann, Christian Plagemann
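The abstract above has a server infer, from a pointing-style gesture plus registered device information, which device should receive a transfer. The sketch below picks the recipient whose registered bearing is closest to the gesture direction; the bearing-based matching and the device records are assumptions made for the example (and the comparison ignores wrap-around at 360 degrees):

```python
def route_transfer(gesture, devices, sender_device, content):
    """Pick the intended recipient of a gesture-directed transfer and return
    (recipient_id, payload). `gesture` carries the direction pointed."""
    recipient = min(
        (d for d in devices if d["id"] != sender_device),  # never the sender
        key=lambda d: abs(d["bearing_deg"] - gesture["bearing_deg"]),
    )
    return recipient["id"], {"from": sender_device, "content": content}

devices = [{"id": "phone-A", "bearing_deg": 10},
           {"id": "tv-B", "bearing_deg": 95},
           {"id": "tablet-C", "bearing_deg": 200}]
print(route_transfer({"bearing_deg": 100}, devices, "phone-A", "photo.jpg"))
```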
  • Patent number: 8922481
    Abstract: Methods and systems for annotating objects and/or actions are provided. An example method includes receiving a selection of a content object via an interface of a wearable computing device. The wearable computing device may include a head-mounted display (HMD). The method may also include, but is not limited to, displaying the selected content object on the HMD. Additionally, the method may include obtaining facial-muscle information while the content object is being displayed on the HMD. A facial expression may also be determined based on the facial-muscle information. According to the method, the content object may be associated with an annotation comprising an indication of the facial expression.
    Type: Grant
    Filed: March 16, 2012
    Date of Patent: December 30, 2014
    Assignee: Google Inc.
    Inventors: Alejandro Kauffmann, Clifford L. Biffle, Liang-Yu (Tom) Chi, Luis Ricardo Prada Gomez, Thad Eugene Starner
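The method above annotates a displayed content object with the facial expression the viewer made while looking at it. A toy sketch of that association step, where the muscle channels and the expression mapping are invented stand-ins for a real facial-expression classifier:

```python
def classify_expression(muscle_activations):
    """Crude stand-in for an expression classifier: pick the strongest of a
    few assumed facial-muscle channels and name the expression it implies."""
    strongest = max(muscle_activations, key=muscle_activations.get)
    return {"zygomaticus": "smile", "corrugator": "frown"}.get(strongest, "neutral")

annotations = {}

def annotate(content_id, muscle_activations):
    """Attach the viewer's facial expression to the displayed content object."""
    annotations[content_id] = {"expression": classify_expression(muscle_activations)}
    return annotations[content_id]

print(annotate("photo-123", {"zygomaticus": 0.8, "corrugator": 0.1}))  # smile
```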
  • Publication number: 20140340498
    Abstract: A function of a device, such as volume, may be controlled using a combination of gesture recognition and an interpolation scheme. The distance between two objects, such as a user's hands, may be determined at a first time point and at a second time point. The difference between the distances calculated at the two time points may then be mapped onto a plot of determined difference versus function value, and the function of the device set to the mapped value.
    Type: Application
    Filed: December 20, 2012
    Publication date: November 20, 2014
    Applicant: Google Inc.
    Inventors: Christian Plagemann, Alejandro Kauffmann, Joshua Kaplan
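The control scheme above is essentially a linear interpolation: the change in separation between the user's hands between two time points is mapped through a predefined curve onto a function value such as volume. A short sketch, with the input range, output range, and clamping chosen arbitrarily for the example:

```python
def map_difference(difference, diff_range=(-0.5, 0.5), value_range=(0, 100)):
    """Linearly interpolate a change in hand separation (meters) onto a
    device function value, e.g. volume on a 0-100 scale."""
    lo_d, hi_d = diff_range
    lo_v, hi_v = value_range
    t = (difference - lo_d) / (hi_d - lo_d)  # normalize the measured difference
    t = min(max(t, 0.0), 1.0)                # clamp to the ends of the plot
    return lo_v + t * (hi_v - lo_v)

d1 = 0.20                       # hand separation at the first time point (m)
d2 = 0.45                       # hand separation at the second time point (m)
print(map_difference(d2 - d1))  # hands moved apart -> 75.0
```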
  • Patent number: 8866852
    Abstract: Methods and devices for applying at least one manipulative action to a selected content object are disclosed. In one aspect, a head-mounted-device (HMD) system includes at least one processor and data storage with user-interface logic executable by the at least one processor to apply at least one manipulative action to a displayed content object based on received data that indicates a first direction in which the HMD is tilted and an extent to which the HMD is tilted in the first direction. The at least one manipulative action is applied to a degree corresponding to the indicated extent to which the HMD is tilted in the first direction.
    Type: Grant
    Filed: November 28, 2011
    Date of Patent: October 21, 2014
    Assignee: Google Inc.
    Inventors: Aaron Joseph Wheeler, Alejandro Kauffmann, Liang-Yu (Tom) Chi, Max Braun
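The abstract above applies a manipulative action to displayed content in proportion to how far the head-mounted display is tilted in a given direction. A compact sketch of that proportional mapping; the specific actions (scroll and zoom), the gains, and the 30-degree ceiling are assumptions for the example:

```python
def manipulate(content, pitch_deg, roll_deg, max_tilt_deg=30.0):
    """Scroll and zoom a displayed content object in proportion to how far
    the head-mounted display is tilted in each direction."""
    def degree(tilt_deg):  # extent of tilt -> -1.0..1.0
        return max(-1.0, min(1.0, tilt_deg / max_tilt_deg))
    content["scroll"] += degree(pitch_deg) * 10       # tilt forward/back: scroll
    content["zoom"] *= 1.0 + degree(roll_deg) * 0.1   # tilt left/right: zoom
    return content

print(manipulate({"scroll": 0, "zoom": 1.0}, pitch_deg=15, roll_deg=-30))
```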
  • Publication number: 20140225931
    Abstract: Voice commands and gesture recognition are two mechanisms by which an individual may interact with content such as that on a display. In an implementation, interactivity of a user with content on a device or display may be modified based on the distance between a user and the display. An attribute such as a user profile may be used to tailor the modification of the display to an individual user. In some configurations, the commands available to the user may also be modified based on the determined distance between the user and a device or display.
    Type: Application
    Filed: February 13, 2013
    Publication date: August 14, 2014
    Inventors: Christian Plagemann, Alejandro Kauffmann