Patents by Inventor Rodrigo Lima Carceroni
Rodrigo Lima Carceroni has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11317018
Abstract: In general, techniques of this disclosure may enable a computing device to capture one or more images based on a natural language user input. The computing device, while operating in an image capture mode, receives an indication of a natural language user input associated with an image capture command. The computing device determines, based on the image capture command, a visual token to be included in one or more images to be captured by a camera of the computing device. The computing device locates the visual token within an image preview output by the computing device while operating in the image capture mode. The computing device captures one or more images of the visual token.
Type: Grant
Filed: October 18, 2019
Date of Patent: April 26, 2022
Assignee: Google LLC
Inventor: Rodrigo Lima Carceroni
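As a rough illustration of the flow this abstract describes (parse the command, locate the named visual token in the preview, then capture), here is a minimal Python sketch. Every name below is hypothetical, and the command parsing is deliberately naive; the patent does not specify any of these details.

```python
# Hypothetical sketch; no names or parsing rules here come from the patent.
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class Detection:
    label: str                       # e.g. "dog"
    box: Tuple[int, int, int, int]   # x, y, width, height in preview pixels


def extract_visual_token(command: str) -> Optional[str]:
    """Pull the requested object out of a command such as
    'take a picture of the dog' (toy parser for illustration only)."""
    for marker in (" of the ", " of a ", " of "):
        if marker in command:
            return command.split(marker, 1)[1].strip(" .!?")
    return None


def should_capture(command: str, preview_detections: List[Detection]) -> bool:
    """True once the visual token named in the command is located in the
    current preview frame's object detections."""
    token = extract_visual_token(command)
    return token is not None and any(d.label == token for d in preview_detections)


# Usage: run per preview frame; trigger the camera when this returns True.
frame = [Detection("dog", (120, 80, 200, 160))]
print(should_capture("take a picture of the dog", frame))  # True
```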
-
Publication number: 20200053279
Abstract: In general, techniques of this disclosure may enable a computing device to capture one or more images based on a natural language user input. The computing device, while operating in an image capture mode, receives an indication of a natural language user input associated with an image capture command. The computing device determines, based on the image capture command, a visual token to be included in one or more images to be captured by a camera of the computing device. The computing device locates the visual token within an image preview output by the computing device while operating in the image capture mode. The computing device captures one or more images of the visual token.
Type: Application
Filed: October 18, 2019
Publication date: February 13, 2020
Inventor: Rodrigo Lima Carceroni
-
Patent number: 10469740
Abstract: In general, techniques of this disclosure may enable a computing device to capture one or more images based on a natural language user input. The computing device, while operating in an image capture mode, receives an indication of a natural language user input associated with an image capture command. The computing device determines, based on the image capture command, a visual token to be included in one or more images to be captured by a camera of the computing device. The computing device locates the visual token within an image preview output by the computing device while operating in the image capture mode. The computing device captures one or more images of the visual token.
Type: Grant
Filed: January 8, 2019
Date of Patent: November 5, 2019
Assignee: Google LLC
Inventor: Rodrigo Lima Carceroni
-
Publication number: 20190166305
Abstract: In general, techniques of this disclosure may enable a computing device to capture one or more images based on a natural language user input. The computing device, while operating in an image capture mode, receives an indication of a natural language user input associated with an image capture command. The computing device determines, based on the image capture command, a visual token to be included in one or more images to be captured by a camera of the computing device. The computing device locates the visual token within an image preview output by the computing device while operating in the image capture mode. The computing device captures one or more images of the visual token.
Type: Application
Filed: January 8, 2019
Publication date: May 30, 2019
Inventor: Rodrigo Lima Carceroni
-
Publication number: 20180101240
Abstract: An example method includes displaying, by a display (104) of a wearable device (100), a content card (114B); receiving, by the wearable device, motion data generated by a motion sensor (102) of the wearable device that represents motion of a forearm of a user of the wearable device; responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is less than an acceleration of the supination, displaying, by the display, a next content card (114C); and responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is greater than an acceleration of the supination, displaying, by the display, a previous content card (114A).
Type: Application
Filed: October 13, 2017
Publication date: April 12, 2018
Inventors: Rodrigo Lima Carceroni, Pannag R. Sanketi, Suril Shah, Derya Ozkan, Soroosh Mariooryad, Seyed Mojtaba Seyedhosseini Tarzjani, Brett Lider, Peter Wilhelm Ludwig
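The gesture rule in this abstract reduces to a comparison of peak accelerations between the two phases of a wrist roll. The following minimal Python sketch captures that rule; the motion-data representation is an assumption, not the patent's.

```python
# Hypothetical sketch: a supination followed by a pronation pages forward
# when the pronation is slower than the supination, and backward when it
# is faster. Input format is an illustrative assumption.
from typing import List, Optional


def classify_wrist_gesture(supination_accel: List[float],
                           pronation_accel: List[float]) -> Optional[str]:
    """Compare peak accelerations of the two phases of the movement."""
    if not supination_accel or not pronation_accel:
        return None
    sup_peak, pro_peak = max(supination_accel), max(pronation_accel)
    if pro_peak < sup_peak:
        return "next"      # gentle return flick -> next content card
    if pro_peak > sup_peak:
        return "previous"  # sharp return flick -> previous content card
    return None


# Usage: a pronation faster than the preceding supination pages backward.
print(classify_wrist_gesture([2.0, 3.1], [4.5, 5.2]))  # "previous"
```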
-
Patent number: 9854160
Abstract: Implementations of the disclosed technology include techniques for autonomously collecting image data, and generating photo summaries based thereon. In some implementations, a plurality of images may be autonomously sampled from an available stream of image data. For example, a camera application of a smartphone or other mobile computing device may present a live preview based on a stream of data from an image capture device. The live stream of image capture data may be sampled and the most interesting photos preserved for further filtering and presentation. The preserved photos may be further winnowed as a photo session continues and an image object generated summarizing the remaining photos. Accordingly, image capture data may be autonomously collected, filtered, and formatted to enable a photographer to see what moments they missed manually capturing during a photo session.
Type: Grant
Filed: October 21, 2016
Date of Patent: December 26, 2017
Assignee: Google LLC
Inventors: Rodrigo Lima Carceroni, Marius Renn, Alan Newberger, Sascha Häberling, Jacob Mintz, Andrew Huibers
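The core of this abstract is a sample-and-winnow loop: score frames drawn from the live preview and keep only the most interesting ones as the session continues. A minimal Python sketch of such a loop follows; the scoring function is a stand-in for whatever quality model an implementation would use, and no names come from the patent.

```python
# Hypothetical sketch of sample-and-winnow over a photo session.
import heapq
from typing import Iterable, List, Tuple


def interestingness(frame: List[float]) -> float:
    """Placeholder score; a real system might weigh sharpness, faces,
    exposure, or a learned aesthetic model."""
    return sum(frame) / max(len(frame), 1)


def winnow(frames: Iterable[List[float]], keep: int = 5) -> List[List[float]]:
    """Retain the `keep` highest-scoring frames seen during the session."""
    best: List[Tuple[float, int, List[float]]] = []  # min-heap keyed on score
    for seq, frame in enumerate(frames):
        item = (interestingness(frame), seq, frame)
        if len(best) < keep:
            heapq.heappush(best, item)
        elif item > best[0]:
            heapq.heapreplace(best, item)  # evict the least interesting frame
    return [f for _, _, f in sorted(best, reverse=True)]


# Usage on toy "frames": keeps the two highest-scoring ones.
print(winnow([[0.1], [0.9], [0.4], [0.7]], keep=2))  # [[0.9], [0.7]]
```

The heap keeps memory bounded by `keep` regardless of session length, which matters when sampling continuously from a preview stream on a mobile device.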
-
Patent number: 9804679
Abstract: An example method includes displaying, by a display (104) of a wearable device (100), a content card (114B); receiving, by the wearable device, motion data generated by a motion sensor (102) of the wearable device that represents motion of a forearm of a user of the wearable device; responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is less than an acceleration of the supination, displaying, by the display, a next content card (114C); and responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is greater than an acceleration of the supination, displaying, by the display, a previous content card (114A).
Type: Grant
Filed: July 3, 2015
Date of Patent: October 31, 2017
Assignee: Google Inc.
Inventors: Rodrigo Lima Carceroni, Pannag R. Sanketi, Suril Shah, Derya Ozkan, Soroosh Mariooryad, Seyed Mojtaba Seyedhosseini Tarzjani, Brett Lider, Peter Wilhelm Ludwig
-
Publication number: 20170041532
Abstract: Implementations of the disclosed technology include techniques for autonomously collecting image data, and generating photo summaries based thereon. In some implementations, a plurality of images may be autonomously sampled from an available stream of image data. For example, a camera application of a smartphone or other mobile computing device may present a live preview based on a stream of data from an image capture device. The live stream of image capture data may be sampled and the most interesting photos preserved for further filtering and presentation. The preserved photos may be further winnowed as a photo session continues and an image object generated summarizing the remaining photos. Accordingly, image capture data may be autonomously collected, filtered, and formatted to enable a photographer to see what moments they missed manually capturing during a photo session.
Type: Application
Filed: October 21, 2016
Publication date: February 9, 2017
Inventors: Rodrigo Lima Carceroni, Marius Renn, Alan Newberger, Sascha Häberling, Jacob Mintz, Andrew Huibers
-
Publication number: 20170003747
Abstract: An example method includes displaying, by a display (104) of a wearable device (100), a content card (114B); receiving, by the wearable device, motion data generated by a motion sensor (102) of the wearable device that represents motion of a forearm of a user of the wearable device; responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is less than an acceleration of the supination, displaying, by the display, a next content card (114C); and responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is greater than an acceleration of the supination, displaying, by the display, a previous content card (114A).
Type: Application
Filed: July 3, 2015
Publication date: January 5, 2017
Inventors: Rodrigo Lima Carceroni, Pannag R. Sanketi, Suril Shah, Derya Ozkan, Soroosh Mariooryad, Seyed Mojtaba Seyedhosseini Tarzjani, Brett Lider, Peter Wilhelm Ludwig
-
Patent number: 9479694
Abstract: Implementations of the disclosed technology include techniques for autonomously collecting image data, and generating photo summaries based thereon. In some implementations, a plurality of images may be autonomously sampled from an available stream of image data. For example, a camera application of a smartphone or other mobile computing device may present a live preview based on a stream of data from an image capture device. The live stream of image capture data may be sampled and the most interesting photos preserved for further filtering and presentation. The preserved photos may be further winnowed as a photo session continues and an image object generated summarizing the remaining photos. Accordingly, image capture data may be autonomously collected, filtered, and formatted to enable a photographer to see what moments they missed manually capturing during a photo session.
Type: Grant
Filed: October 28, 2014
Date of Patent: October 25, 2016
Assignee: Google Inc.
Inventors: Rodrigo Lima Carceroni, Marius Renn, Alan Newberger, Sascha Häberling, Jacob Mintz, Andrew Huibers
-
Publication number: 20160119536
Abstract: Implementations of the disclosed technology include techniques for autonomously collecting image data, and generating photo summaries based thereon. In some implementations, a plurality of images may be autonomously sampled from an available stream of image data. For example, a camera application of a smartphone or other mobile computing device may present a live preview based on a stream of data from an image capture device. The live stream of image capture data may be sampled and the most interesting photos preserved for further filtering and presentation. The preserved photos may be further winnowed as a photo session continues and an image object generated summarizing the remaining photos. Accordingly, image capture data may be autonomously collected, filtered, and formatted to enable a photographer to see what moments they missed manually capturing during a photo session.
Type: Application
Filed: October 28, 2014
Publication date: April 28, 2016
Inventors: Rodrigo Lima Carceroni, Marius Renn, Alan Newberger, Sascha Häberling, Jacob Mintz, Andrew Huibers
-
Patent number: 9208573
Abstract: Techniques for determining motion saliency in video content using center-surround receptive fields. In some implementations, images or frames from a video may be apportioned into non-overlapped regions, for example, by applying a rectilinear grid. For each grid region, or cell, motion consistency may be measured between the center and surround area of that cell across frames of the video. Consistent motion across the center-surround area may indicate that the corresponding region has low variation. The larger the difference between center-surround motions in a cell, the more likely the region has high motion saliency.
Type: Grant
Filed: March 28, 2014
Date of Patent: December 8, 2015
Assignee: Google Inc.
Inventors: Rodrigo Lima Carceroni, Pannag Raghunath Sanketi, Marius Renn, Ruei-Sung Lin, Wei Hua
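To make the center-surround idea concrete, here is a short Python sketch: grid a per-pixel motion field, then score each cell by how much the mean motion of its center differs from the mean motion of the surrounding ring. The grid size and the center/surround split are illustrative choices, not values from the patent.

```python
# Hypothetical sketch of per-cell center-surround motion saliency.
import numpy as np


def cell_saliency(flow: np.ndarray, grid: int = 8) -> np.ndarray:
    """flow: (H, W, 2) per-pixel motion vectors between consecutive frames.
    Returns a (grid, grid) map; larger values suggest salient motion."""
    h, w, _ = flow.shape
    ch, cw = h // grid, w // grid  # assumes cells of at least a few pixels
    saliency = np.zeros((grid, grid))
    for i in range(grid):
        for j in range(grid):
            cell = flow[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
            qh, qw = max(ch // 4, 1), max(cw // 4, 1)
            center_mask = np.zeros((ch, cw), dtype=bool)
            center_mask[qh:ch - qh, qw:cw - qw] = True
            center = cell[center_mask].mean(axis=0)     # mean center motion
            surround = cell[~center_mask].mean(axis=0)  # mean surround motion
            # Consistent center/surround motion -> low score; a large
            # difference marks the cell as likely motion-salient.
            saliency[i, j] = np.linalg.norm(center - surround)
    return saliency


# Usage on a synthetic flow field:
flow = np.random.randn(240, 320, 2).astype(np.float32)
print(cell_saliency(flow).shape)  # (8, 8)
```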
-
Publication number: 20150117707
Abstract: Techniques for determining motion saliency in video content using center-surround receptive fields. In some implementations, images or frames from a video may be apportioned into non-overlapped regions, for example, by applying a rectilinear grid. For each grid region, or cell, motion consistency may be measured between the center and surround area of that cell across frames of the video. Consistent motion across the center-surround area may indicate that the corresponding region has low variation. The larger the difference between center-surround motions in a cell, the more likely the region has high motion saliency.
Type: Application
Filed: March 28, 2014
Publication date: April 30, 2015
Applicant: Google Inc.
Inventors: Rodrigo Lima Carceroni, Pannag Raghunath Sanketi, Marius Renn, Ruei-Sung Lin, Wei Hua