Patents by Inventor Rodrigo Lima Carceroni

Rodrigo Lima Carceroni has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11317018
    Abstract: In general, techniques of this disclosure may enable a computing device to capture one or more images based on a natural language user input. The computing device, while operating in an image capture mode, receives an indication of a natural language user input associated with an image capture command. The computing device determines, based on the image capture command, a visual token to be included in one or more images to be captured by a camera of the computing device. The computing device locates the visual token within an image preview output by the computing device while operating in the image capture mode. The computing device captures one or more images of the visual token.
    Type: Grant
    Filed: October 18, 2019
    Date of Patent: April 26, 2022
    Assignee: Google LLC
    Inventor: Rodrigo Lima Carceroni
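The abstract above describes a three-step flow: extract a visual token from the natural language command, locate that token in the camera's preview stream, then trigger capture. A minimal Python sketch of that flow follows; the toy regex grammar, the function names, and the `detect` callback are all hypothetical stand-ins, since the patent does not specify a particular parser or object detector:

```python
import re

def extract_visual_token(command):
    """Pull the object to photograph out of a capture command (toy grammar)."""
    match = re.search(r"(?:picture|photo|shot) of (?:the |a )?(\w+)", command.lower())
    return match.group(1) if match else None

def capture_when_present(command, preview_frames, detect):
    """Return the first preview frame whose detections include the token.

    `detect(frame)` is a hypothetical callback returning the set of labels
    found in a frame; returning the frame stands in for firing the shutter.
    """
    token = extract_visual_token(command)
    if token is None:
        return None
    for frame in preview_frames:
        if token in detect(frame):
            return frame
    return None
```

For example, given the command "Take a picture of the dog", the sketch waits until a preview frame's detections include "dog" before capturing.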
  • Publication number: 20200053279
    Abstract: In general, techniques of this disclosure may enable a computing device to capture one or more images based on a natural language user input. The computing device, while operating in an image capture mode, receives an indication of a natural language user input associated with an image capture command. The computing device determines, based on the image capture command, a visual token to be included in one or more images to be captured by a camera of the computing device. The computing device locates the visual token within an image preview output by the computing device while operating in the image capture mode. The computing device captures one or more images of the visual token.
    Type: Application
    Filed: October 18, 2019
    Publication date: February 13, 2020
    Inventor: Rodrigo Lima Carceroni
  • Patent number: 10469740
    Abstract: In general, techniques of this disclosure may enable a computing device to capture one or more images based on a natural language user input. The computing device, while operating in an image capture mode, receives an indication of a natural language user input associated with an image capture command. The computing device determines, based on the image capture command, a visual token to be included in one or more images to be captured by a camera of the computing device. The computing device locates the visual token within an image preview output by the computing device while operating in the image capture mode. The computing device captures one or more images of the visual token.
    Type: Grant
    Filed: January 8, 2019
    Date of Patent: November 5, 2019
    Assignee: Google LLC
    Inventor: Rodrigo Lima Carceroni
  • Publication number: 20190166305
    Abstract: In general, techniques of this disclosure may enable a computing device to capture one or more images based on a natural language user input. The computing device, while operating in an image capture mode, receives an indication of a natural language user input associated with an image capture command. The computing device determines, based on the image capture command, a visual token to be included in one or more images to be captured by a camera of the computing device. The computing device locates the visual token within an image preview output by the computing device while operating in the image capture mode. The computing device captures one or more images of the visual token.
    Type: Application
    Filed: January 8, 2019
    Publication date: May 30, 2019
    Inventor: Rodrigo Lima Carceroni
  • Publication number: 20180101240
    Abstract: An example method includes displaying, by a display (104) of a wearable device (100), a content card (114B); receiving, by the wearable device, motion data generated by a motion sensor (102) of the wearable device that represents motion of a forearm of a user of the wearable device; responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is less than an acceleration of the supination, displaying, by the display, a next content card (114C); and responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is greater than an acceleration of the supination, displaying, by the display, a previous content card (114A).
    Type: Application
    Filed: October 13, 2017
    Publication date: April 12, 2018
    Inventors: Rodrigo Lima Carceroni, Pannag R. Sanketi, Suril Shah, Derya Ozkan, Soroosh Mariooryad, Seyed Mojtaba Seyedhosseini Tarzjani, Brett Lider, Peter Wilhelm Ludwig
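The claimed gesture rule reduces to comparing the accelerations of the two rotation phases: a supination followed by a gentler pronation advances to the next card, while a sharper pronation goes back. A minimal sketch of that decision logic, where the function name and the threshold-free comparison are illustrative assumptions rather than the claimed implementation:

```python
def classify_flick(supination_accel, pronation_accel):
    """Classify a wrist flick per the abstract's rule.

    The forearm supinates, then pronates back; the relative acceleration of
    the pronation phase selects the navigation direction.
    """
    if pronation_accel < supination_accel:
        return "next"      # gentle return rotation -> show next content card
    elif pronation_accel > supination_accel:
        return "previous"  # sharp return rotation -> show previous content card
    return "none"          # equal accelerations: no navigation in this sketch
```

In practice a real implementation would also segment the supination/pronation phases out of raw motion-sensor data, which this sketch takes as given.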
  • Patent number: 9854160
    Abstract: Implementations of the disclosed technology include techniques for autonomously collecting image data, and generating photo summaries based thereon. In some implementations, a plurality of images may be autonomously sampled from an available stream of image data. For example, a camera application of a smartphone or other mobile computing device may present a live preview based on a stream of data from an image capture device. The live stream of image capture data may be sampled and the most interesting photos preserved for further filtering and presentation. The preserved photos may be further winnowed as a photo session continues and an image object generated summarizing the remaining photos. Accordingly, image capture data may be autonomously collected, filtered, and formatted to enable a photographer to see what moments they missed manually capturing during a photo session.
    Type: Grant
    Filed: October 21, 2016
    Date of Patent: December 26, 2017
    Assignee: Google LLC
    Inventors: Rodrigo Lima Carceroni, Marius Renn, Alan Newberger, Sascha Häberling, Jacob Mintz, Andrew Huibers
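The sampling-and-winnowing described above can be approximated as a streaming top-k selection over preview frames scored by some "interestingness" measure. The sketch below assumes a hypothetical `score` function and a frame iterable; the patent does not specify the scoring model:

```python
import heapq

def summarize_session(frame_stream, score, keep=5):
    """Retain the `keep` highest-scoring frames seen so far (the 'winnowing').

    Frames are consumed one at a time, as from a live preview stream, and the
    least interesting retained frame is dropped whenever the budget is exceeded.
    """
    best = []  # min-heap of (score, capture_index, frame)
    for i, frame in enumerate(frame_stream):
        heapq.heappush(best, (score(frame), i, frame))
        if len(best) > keep:
            heapq.heappop(best)  # evict the lowest-scoring retained frame
    # Present the surviving frames in capture order as the session summary.
    return [frame for _, _, frame in sorted(best, key=lambda t: t[1])]
```

Because the heap never holds more than `keep + 1` entries, memory stays bounded no matter how long the photo session runs.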
  • Patent number: 9804679
    Abstract: An example method includes displaying, by a display (104) of a wearable device (100), a content card (114B); receiving, by the wearable device, motion data generated by a motion sensor (102) of the wearable device that represents motion of a forearm of a user of the wearable device; responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is less than an acceleration of the supination, displaying, by the display, a next content card (114C); and responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is greater than an acceleration of the supination, displaying, by the display, a previous content card (114A).
    Type: Grant
    Filed: July 3, 2015
    Date of Patent: October 31, 2017
    Assignee: Google Inc.
    Inventors: Rodrigo Lima Carceroni, Pannag R. Sanketi, Suril Shah, Derya Ozkan, Soroosh Mariooryad, Seyed Mojtaba Seyedhosseini Tarzjani, Brett Lider, Peter Wilhelm Ludwig
  • Publication number: 20170041532
    Abstract: Implementations of the disclosed technology include techniques for autonomously collecting image data, and generating photo summaries based thereon. In some implementations, a plurality of images may be autonomously sampled from an available stream of image data. For example, a camera application of a smartphone or other mobile computing device may present a live preview based on a stream of data from an image capture device. The live stream of image capture data may be sampled and the most interesting photos preserved for further filtering and presentation. The preserved photos may be further winnowed as a photo session continues and an image object generated summarizing the remaining photos. Accordingly, image capture data may be autonomously collected, filtered, and formatted to enable a photographer to see what moments they missed manually capturing during a photo session.
    Type: Application
    Filed: October 21, 2016
    Publication date: February 9, 2017
    Inventors: Rodrigo Lima Carceroni, Marius Renn, Alan Newberger, Sascha Häberling, Jacob Mintz, Andrew Huibers
  • Publication number: 20170003747
    Abstract: An example method includes displaying, by a display (104) of a wearable device (100), a content card (114B); receiving, by the wearable device, motion data generated by a motion sensor (102) of the wearable device that represents motion of a forearm of a user of the wearable device; responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is less than an acceleration of the supination, displaying, by the display, a next content card (114C); and responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is greater than an acceleration of the supination, displaying, by the display, a previous content card (114A).
    Type: Application
    Filed: July 3, 2015
    Publication date: January 5, 2017
    Inventors: Rodrigo Lima Carceroni, Pannag R. Sanketi, Suril Shah, Derya Ozkan, Soroosh Mariooryad, Seyed Mojtaba Seyedhosseini Tarzjani, Brett Lider, Peter Wilhelm Ludwig
  • Patent number: 9479694
    Abstract: Implementations of the disclosed technology include techniques for autonomously collecting image data, and generating photo summaries based thereon. In some implementations, a plurality of images may be autonomously sampled from an available stream of image data. For example, a camera application of a smartphone or other mobile computing device may present a live preview based on a stream of data from an image capture device. The live stream of image capture data may be sampled and the most interesting photos preserved for further filtering and presentation. The preserved photos may be further winnowed as a photo session continues and an image object generated summarizing the remaining photos. Accordingly, image capture data may be autonomously collected, filtered, and formatted to enable a photographer to see what moments they missed manually capturing during a photo session.
    Type: Grant
    Filed: October 28, 2014
    Date of Patent: October 25, 2016
    Assignee: Google Inc.
    Inventors: Rodrigo Lima Carceroni, Marius Renn, Alan Newberger, Sascha Häberling, Jacob Mintz, Andrew Huibers
  • Publication number: 20160119536
    Abstract: Implementations of the disclosed technology include techniques for autonomously collecting image data, and generating photo summaries based thereon. In some implementations, a plurality of images may be autonomously sampled from an available stream of image data. For example, a camera application of a smartphone or other mobile computing device may present a live preview based on a stream of data from an image capture device. The live stream of image capture data may be sampled and the most interesting photos preserved for further filtering and presentation. The preserved photos may be further winnowed as a photo session continues and an image object generated summarizing the remaining photos. Accordingly, image capture data may be autonomously collected, filtered, and formatted to enable a photographer to see what moments they missed manually capturing during a photo session.
    Type: Application
    Filed: October 28, 2014
    Publication date: April 28, 2016
    Inventors: Rodrigo Lima Carceroni, Marius Renn, Alan Newberger, Sascha Häberling, Jacob Mintz, Andrew Huibers
  • Patent number: 9208573
    Abstract: Techniques for determining motion saliency in video content using center-surround receptive fields. In some implementations, images or frames from a video may be apportioned into non-overlapping regions, for example, by applying a rectilinear grid. For each grid region, or cell, motion consistency may be measured between the center and surround area of that cell across frames of the video. Consistent motion across the center-surround area may indicate that the corresponding region has low variation. The larger the difference between center-surround motions in a cell, the more likely the region has high motion saliency.
    Type: Grant
    Filed: March 28, 2014
    Date of Patent: December 8, 2015
    Assignee: Google Inc.
    Inventors: Rodrigo Lima Carceroni, Pannag Raghunath Sanketi, Marius Renn, Ruei-Sung Lin, Wei Hua
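The center-surround measurement described above can be sketched as follows: partition a per-pixel motion field into grid cells and score each cell by the difference between the mean motion of its center and that of its surrounding ring. The grid size, center fraction, and use of mean optical flow here are illustrative assumptions, not the patented method itself:

```python
import numpy as np

def cell_saliency(flow, grid=(4, 4), center_frac=0.5):
    """Score each grid cell by |mean center motion - mean surround motion|.

    `flow` is an H x W x 2 per-pixel motion field (e.g. optical flow between
    consecutive frames); a larger score suggests higher motion saliency.
    """
    h, w, _ = flow.shape
    gh, gw = grid
    scores = np.zeros(grid)
    for r in range(gh):
        for c in range(gw):
            cell = flow[r*h//gh:(r+1)*h//gh, c*w//gw:(c+1)*w//gw]
            ch, cw, _ = cell.shape
            # Margins that carve a centered sub-window out of the cell.
            mr = int(ch * (1 - center_frac) / 2)
            mc = int(cw * (1 - center_frac) / 2)
            center = cell[mr:ch-mr, mc:cw-mc].reshape(-1, 2)
            total = cell.reshape(-1, 2).sum(axis=0)
            surround_n = ch * cw - len(center)
            surround_mean = (total - center.sum(axis=0)) / max(surround_n, 1)
            scores[r, c] = np.linalg.norm(center.mean(axis=0) - surround_mean)
    return scores
```

A cell whose center moves with its surround (consistent motion) scores near zero, while a cell whose center moves differently from its ring scores high, matching the abstract's low-variation versus high-saliency distinction.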
  • Publication number: 20150117707
    Abstract: Techniques for determining motion saliency in video content using center-surround receptive fields. In some implementations, images or frames from a video may be apportioned into non-overlapping regions, for example, by applying a rectilinear grid. For each grid region, or cell, motion consistency may be measured between the center and surround area of that cell across frames of the video. Consistent motion across the center-surround area may indicate that the corresponding region has low variation. The larger the difference between center-surround motions in a cell, the more likely the region has high motion saliency.
    Type: Application
    Filed: March 28, 2014
    Publication date: April 30, 2015
    Applicant: Google Inc.
    Inventors: Rodrigo Lima Carceroni, Pannag Raghunath Sanketi, Marius Renn, Ruei-Sung Lin, Wei Hua