Patents by Inventor Flavio Lerda
Flavio Lerda has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11941416
Abstract: One or more processors of a mobile computing device may receive, from a view provider, graphical user interface (GUI) view data that specifies, for each respective GUI view of a set of GUI views, a respective platform-neutral layout description and a respective one or more conditions for the respective GUI view to be a relevant GUI view. The one or more processors may determine a GUI view as the relevant GUI view out of the set of GUI views based at least in part on one or more conditions for the GUI view specified by the GUI view data. The one or more processors may, in response to determining the GUI view as the relevant GUI view, output, based at least in part on a platform-neutral layout description for the GUI view specified by the GUI view data, the GUI view for display at a display device.
Type: Grant
Filed: July 26, 2021
Date of Patent: March 26, 2024
Assignee: Google LLC
Inventors: Ant Oztaskent, Flavio Lerda, John S. Evans
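The selection logic the abstract describes can be sketched roughly as follows. This is an illustrative sketch only; every class, function, and field name here is an assumption for illustration and is not taken from the patent:

```python
# Sketch of the claimed flow: each GUI view carries a platform-neutral layout
# description plus one or more conditions; the device outputs the first view
# whose conditions all hold against the current context.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

Context = Dict[str, object]

@dataclass
class GuiView:
    layout: str  # platform-neutral layout description (e.g., a markup snippet)
    conditions: List[Callable[[Context], bool]] = field(default_factory=list)

def select_relevant_view(views: List[GuiView], ctx: Context) -> Optional[GuiView]:
    """Return the first view whose conditions are all satisfied by the context."""
    for view in views:
        if all(cond(ctx) for cond in view.conditions):
            return view
    return None

views = [
    GuiView(layout="<workout-summary/>",
            conditions=[lambda c: c.get("activity") == "running"]),
    GuiView(layout="<idle-card/>"),  # no conditions: always-relevant fallback
]
chosen = select_relevant_view(views, {"activity": "running"})
```

A view with an empty condition list acts as an unconditional fallback, since `all()` over no conditions is true.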
-
Publication number: 20230315494
Abstract: One or more processors of a mobile computing device may receive, from a view provider, graphical user interface (GUI) view data that specifies, for each respective GUI view of a set of GUI views, a respective platform-neutral layout description and a respective one or more conditions for the respective GUI view to be a relevant GUI view. The one or more processors may determine a GUI view as the relevant GUI view out of the set of GUI views based at least in part on one or more conditions for the GUI view specified by the GUI view data. The one or more processors may, in response to determining the GUI view as the relevant GUI view, output, based at least in part on a platform-neutral layout description for the GUI view specified by the GUI view data, the GUI view for display at a display device.
Type: Application
Filed: July 26, 2021
Publication date: October 5, 2023
Inventors: Ant Oztaskent, Flavio Lerda, John S. Evans
-
IDENTIFYING PHYSICAL ACTIVITIES PERFORMED BY A USER OF A COMPUTING DEVICE BASED ON MEDIA CONSUMPTION
Publication number: 20230237350
Abstract: A method includes identifying, based on sensor data received by a motion sensor, a physical activity performed by a user of the computing system during a time period and determining whether the user consumed media during the time period that the user performed the physical activity. The method also includes responsive to determining that the user consumed the media during the time period that the user performed the physical activity, determining, based on data indicative of the media consumed by the user, an updated physical activity performed by the user during the time period; and outputting data indicating the updated physical activity.
Type: Application
Filed: March 30, 2023
Publication date: July 27, 2023
Inventors: Ant Oztaskent, Flavio Lerda
-
Identifying physical activities performed by a user of a computing device based on media consumption
Patent number: 11620543
Abstract: A method includes identifying, based on sensor data received by a motion sensor, a physical activity performed by a user of the computing system during a time period and determining whether the user consumed media during the time period that the user performed the physical activity. The method also includes responsive to determining that the user consumed the media during the time period that the user performed the physical activity, determining, based on data indicative of the media consumed by the user, an updated physical activity performed by the user during the time period; and outputting data indicating the updated physical activity.
Type: Grant
Filed: December 23, 2019
Date of Patent: April 4, 2023
Assignee: Google LLC
Inventors: Ant Oztaskent, Flavio Lerda
-
IDENTIFYING PHYSICAL ACTIVITIES PERFORMED BY A USER OF A COMPUTING DEVICE BASED ON MEDIA CONSUMPTION
Publication number: 20210192367
Abstract: A method includes identifying, based on sensor data received by a motion sensor, a physical activity performed by a user of the computing system during a time period and determining whether the user consumed media during the time period that the user performed the physical activity. The method also includes responsive to determining that the user consumed the media during the time period that the user performed the physical activity, determining, based on data indicative of the media consumed by the user, an updated physical activity performed by the user during the time period; and outputting data indicating the updated physical activity.
Type: Application
Filed: December 23, 2019
Publication date: June 24, 2021
Inventors: Ant Oztaskent, Flavio Lerda
-
Patent number: 10685680
Abstract: A method includes grouping media items associated with a user into segments based on a timestamp associated with each media item and a total number of media items. The method also includes selecting target media from the media items for each of the segments based on media attributes associated with the media item. The method also includes generating a video that includes the target media for each of the segments by generating a first animation that illustrates a first transition from a first item from the target media to a second item from the target media with movement of the first item from an onscreen location to an offscreen location, wherein the first item is adjacent to the second item in the first animation and determining whether the target media includes one or more additional items. The method also includes adding a song to the video.
Type: Grant
Filed: March 20, 2019
Date of Patent: June 16, 2020
Assignee: Google LLC
Inventors: Shengyang Dai, Timothy Sepkoski St. Clair, Koji Ashida, Jingyu Cui, Jay Steele, Qi Gu, Erik Murphy-Chutorian, Ivan Neulander, Flavio Lerda, Eric Charles Henry, Shinko Yuanhsien Cheng, Aravind Krishnaswamy, David Cohen, Pardis Beikzadeh
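The grouping-and-selection steps at the start of this abstract can be sketched as below. The helper names and the (timestamp, score) representation are assumptions for illustration; the patent does not specify them, and the animation and song steps are omitted:

```python
# Sketch of the first two claimed steps: media items are bucketed into a fixed
# number of segments in timestamp order, then one "target" item per segment is
# chosen by a score standing in for the patent's "media attributes".
from typing import List, Tuple

MediaItem = Tuple[int, float]  # (timestamp, quality_score)

def group_into_segments(items: List[MediaItem], n_segments: int) -> List[List[MediaItem]]:
    """Split items, ordered by timestamp, into roughly equal segments."""
    ordered = sorted(items)  # sorts by timestamp first
    size = max(1, len(ordered) // n_segments)
    return [ordered[i:i + size] for i in range(0, len(ordered), size)][:n_segments]

def pick_targets(segments: List[List[MediaItem]]) -> List[MediaItem]:
    """Pick the highest-scoring item from each segment as its target media."""
    return [max(seg, key=lambda item: item[1]) for seg in segments]
```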
-
Patent number: 10467287
Abstract: The disclosed technology includes automatically suggesting audio, video, or other media accompaniments to media content based on identified objects in the media content. Media content may include images, audio, video, or a combination. In one implementation, one or more images representative of the media content may be extracted. A visual search may be run across the images to identify objects or characteristics present in or associated with the media content. Keywords may be generated based on the identified objects and characteristics. The keywords may be used to determine suitable audio tracks to accompany the media content, for example by performing a search based on the keywords. The determined tracks may be presented to a user, or automatically arranged to match the media content. In another implementation, an aural search may be run across samples of the audio data to similarly identify objects and characteristics of the media content.
Type: Grant
Filed: December 12, 2013
Date of Patent: November 5, 2019
Assignee: Google LLC
Inventors: Thomas Weedon Hume, Flavio Lerda, Mikkel Crone Koser
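The objects-to-keywords-to-tracks pipeline in this abstract can be sketched as follows. The keyword table, track index, and all names are invented stand-ins; a real system would use an actual visual-search backend and music catalog:

```python
# Sketch: identified objects map to keywords, and the keywords drive a lookup
# for candidate accompaniment tracks, preserving first-seen order without
# duplicates.
OBJECT_KEYWORDS = {
    "beach": ["waves", "summer"],
    "birthday cake": ["party", "celebration"],
}
TRACK_INDEX = {
    "waves": ["Ocean Drive"],
    "party": ["Celebration Anthem"],
}

def suggest_tracks(detected_objects: list[str]) -> list[str]:
    """Expand detected objects into keywords, then keywords into tracks."""
    keywords = [kw for obj in detected_objects
                for kw in OBJECT_KEYWORDS.get(obj, [])]
    tracks: list[str] = []
    for kw in keywords:
        for track in TRACK_INDEX.get(kw, []):
            if track not in tracks:
                tracks.append(track)
    return tracks
```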
-
Publication number: 20190252001
Abstract: A method includes grouping media items associated with a user into segments based on a timestamp associated with each media item and a total number of media items. The method also includes selecting target media from the media items for each of the segments based on media attributes associated with the media item. The method also includes generating a video that includes the target media for each of the segments by generating a first animation that illustrates a first transition from a first item from the target media to a second item from the target media with movement of the first item from an onscreen location to an offscreen location, wherein the first item is adjacent to the second item in the first animation and determining whether the target media includes one or more additional items. The method also includes adding a song to the video.
Type: Application
Filed: March 20, 2019
Publication date: August 15, 2019
Applicant: Google LLC
Inventors: Shengyang Dai, Timothy Sepkoski St. Clair, Koji Ashida, Jingyu Cui, Jay Steele, Qi Gu, Erik Murphy-Chutorian, Ivan Neulander, Flavio Lerda, Eric Charles Henry, Shinko Yuanhsien Cheng, Aravind Krishnaswamy, David Cohen, Pardis Beikzadeh
-
Patent number: 10242711
Abstract: A method includes grouping media items associated with a user into segments based on a timestamp associated with each media item and a total number of media items. The method also includes selecting target media from the media items for each of the segments based on media attributes associated with the media item. The method also includes generating a video that includes the target media for each of the segments by generating a first animation that illustrates a first transition from a first item from the target media to a second item from the target media with movement of the first item from an onscreen location to an offscreen location, wherein the first item is adjacent to the second item in the first animation and determining whether the target media includes one or more additional items. The method also includes adding a song to the video.
Type: Grant
Filed: June 26, 2017
Date of Patent: March 26, 2019
Assignee: Google LLC
Inventors: Shengyang Dai, Timothy Sepkoski St. Clair, Koji Ashida, Jingyu Cui, Jay Steele, Qi Gu, Erik Murphy-Chutorian, Ivan Neulander, Flavio Lerda, Eric Charles Henry, Shinko Yuanhsien Cheng, Aravind Krishnaswamy, David Cohen, Pardis Beikzadeh
-
Patent number: 9990694
Abstract: Certain embodiments of this disclosure include methods and devices for outputting a zoom sequence. According to one embodiment, a method is provided. The method may include: (i) determining first location information from first metadata associated with one or more images, wherein the first location information identifies a first location; and (ii) outputting, for display, a first zoom sequence based on the first location information, wherein the first zoom sequence may include a first plurality of mapped images of the first location from a first plurality of zoom levels and the plurality of mapped images are sequentially ordered by a magnitude of the zoom level.
Type: Grant
Filed: November 28, 2016
Date of Patent: June 5, 2018
Assignee: Google LLC
Inventors: Thomas Weedon Hume, Mikkel Crone Köser, Tony Ferreira, Jeremy Lyon, Waldemar Ariel Baraldi, Bryan Mawhinney, Christopher James Smith, Lenka Trochtova, Andrei Popescu, David Ingram, Flavio Lerda, Michael Ananin, Vytautas Vaitukaitis, Marc Paulina
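The ordered zoom sequence this abstract describes can be sketched minimally. The function name, the request-descriptor shape, and the default zoom levels are assumptions for illustration only:

```python
# Sketch: given a location recovered from image metadata, build a sequence of
# mapped-image request descriptors ordered by increasing zoom magnitude, which
# together form the zoom sequence to display.
from typing import Iterable, List, Dict

def build_zoom_sequence(lat: float, lng: float,
                        zoom_levels: Iterable[int] = range(3, 16, 3)) -> List[Dict]:
    """Return map-image descriptors for one location, ordered by zoom level."""
    return [{"lat": lat, "lng": lng, "zoom": z} for z in sorted(zoom_levels)]

sequence = build_zoom_sequence(51.5074, -0.1278)  # e.g., a geotag near London
```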
-
Publication number: 20170309311
Abstract: A method includes grouping media items associated with a user into segments based on a timestamp associated with each media item and a total number of media items. The method also includes selecting target media from the media items for each of the segments based on media attributes associated with the media item. The method also includes generating a video that includes the target media for each of the segments by generating a first animation that illustrates a first transition from a first item from the target media to a second item from the target media with movement of the first item from an onscreen location to an offscreen location, wherein the first item is adjacent to the second item in the first animation and determining whether the target media includes one or more additional items. The method also includes adding a song to the video.
Type: Application
Filed: June 26, 2017
Publication date: October 26, 2017
Applicant: Google Inc.
Inventors: Shengyang Dai, Timothy Sepkoski St. Clair, Koji Ashida, Jingyu Cui, Jay Steele, Qi Gu, Erik Murphy-Chutorian, Ivan Neulander, Flavio Lerda, Eric Charles Henry, Shinko Yuanhsien Cheng, Aravind Krishnaswamy, David Cohen, Pardis Beikzadeh
-
Patent number: 9691431
Abstract: A method includes grouping media items associated with a user into segments based on a timestamp associated with each media item and a total number of media items. The method also includes selecting target media from the media items for each of the segments based on media attributes associated with the media item. The method also includes generating a video that includes the target media for each of the segments by generating a first animation that illustrates a first transition from a first item from the target media to a second item from the target media with movement of the first item from an onscreen location to an offscreen location, wherein the first item is adjacent to the second item in the first animation and determining whether the target media includes one or more additional items. The method also includes adding a song to the video.
Type: Grant
Filed: October 16, 2015
Date of Patent: June 27, 2017
Assignee: Google Inc.
Inventors: Shengyang Dai, Timothy Sepkoski St. Clair, Koji Ashida, Jingyu Cui, Jay Steele, Qi Gu, Erik Murphy-Chutorian, Ivan Neulander, Flavio Lerda, Eric Charles Henry, Shinko Yuanhsien Cheng, Aravind Krishnaswamy, David Cohen, Pardis Beikzadeh
-
Publication number: 20170110154
Abstract: A method includes grouping media items associated with a user into segments based on a timestamp associated with each media item and a total number of media items. The method also includes selecting target media from the media items for each of the segments based on media attributes associated with the media item. The method also includes generating a video that includes the target media for each of the segments by generating a first animation that illustrates a first transition from a first item from the target media to a second item from the target media with movement of the first item from an onscreen location to an offscreen location, wherein the first item is adjacent to the second item in the first animation and determining whether the target media includes one or more additional items. The method also includes adding a song to the video.
Type: Application
Filed: October 16, 2015
Publication date: April 20, 2017
Applicant: Google Inc.
Inventors: Shengyang Dai, Timothy Sepkoski St. Clair, Koji Ashida, Jingyu Cui, Jay Steele, Qi Gu, Erik Murphy-Chutorian, Ivan Neulander, Flavio Lerda, Eric Charles Henry, Shinko Yuanhsien Cheng, Aravind Krishnaswamy, David Cohen, Pardis Beikzadeh
-
Publication number: 20170076427
Abstract: Certain embodiments of this disclosure include methods and devices for outputting a zoom sequence. According to one embodiment, a method is provided. The method may include: (i) determining first location information from first metadata associated with one or more images, wherein the first location information identifies a first location; and (ii) outputting, for display, a first zoom sequence based on the first location information, wherein the first zoom sequence may include a first plurality of mapped images of the first location from a first plurality of zoom levels and the plurality of mapped images are sequentially ordered by a magnitude of the zoom level.
Type: Application
Filed: November 28, 2016
Publication date: March 16, 2017
Inventors: Thomas Weedon Hume, Mikkel Crone Köser, Tony Ferreira, Jeremy Lyon, Waldemar Ariel Baraldi, Bryan Mawhinney, Christopher James Smith, Lenka Trochtova, Andrei Popescu, David Ingram, Flavio Lerda, Michael Ananin, Vytautas Vaitukaitis, Marc Paulina
-
Patent number: 9508172
Abstract: Certain embodiments of this disclosure include methods and devices for outputting a zoom sequence. According to one embodiment, a method is provided. The method may include: (i) determining first location information from first metadata associated with one or more images, wherein the first location information identifies a first location; and (ii) outputting, for display, a first zoom sequence based on the first location information, wherein the first zoom sequence may include a first plurality of mapped images of the first location from a first plurality of zoom levels and the plurality of mapped images are sequentially ordered by a magnitude of the zoom level.
Type: Grant
Filed: December 5, 2013
Date of Patent: November 29, 2016
Assignee: Google Inc.
Inventors: Thomas Weedon Hume, Mikkel Crone Koser, Tony Ferreira, Jeremy Lyon, Waldemar Ariel Baraldi, Bryan Mawhinney, Christopher James Smith, Lenka Trochtova, Andrei Popescu, David Ingram, Flavio Lerda, Michael Ananin, Vytautas Vaitukaitis, Marc Paulina
-
Publication number: 20150169747
Abstract: The disclosed technology includes automatically suggesting audio, video, or other media accompaniments to media content based on identified objects in the media content. Media content may include images, audio, video, or a combination. In one implementation, one or more images representative of the media content may be extracted. A visual search may be run across the images to identify objects or characteristics present in or associated with the media content. Keywords may be generated based on the identified objects and characteristics. The keywords may be used to determine suitable audio tracks to accompany the media content, for example by performing a search based on the keywords. The determined tracks may be presented to a user, or automatically arranged to match the media content. In another implementation, an aural search may be run across samples of the audio data to similarly identify objects and characteristics of the media content.
Type: Application
Filed: December 12, 2013
Publication date: June 18, 2015
Applicant: Google Inc.
Inventors: Thomas Weedon Hume, Flavio Lerda, Mikkel Crone Koser
-
Patent number: 9031209
Abstract: A computing device displays a call history graphical user interface (GUI). The call history GUI includes a new list and an old list. The new list may include new missed call elements and missed call elements associated with new unopened voicemails. The old list may include other call history GUI elements, such as old missed call elements and missed call elements associated with opened voicemails.
Type: Grant
Filed: July 8, 2013
Date of Patent: May 12, 2015
Assignee: Google Inc.
Inventors: Flavio Lerda, Hugo Hudson, Debashish Chatterjee, Simon Tickner, Marcus Alexander Foster
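The new/old partitioning rule in this abstract can be sketched as a small classifier. The `CallEntry` fields are assumptions chosen to mirror the abstract's wording, not structures from the patent:

```python
# Sketch: new missed calls and missed calls carrying an unopened voicemail go
# into the "new" list; all other call-history elements go into the "old" list.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class CallEntry:
    missed: bool
    seen: bool                            # whether the user has viewed this entry
    voicemail_opened: Optional[bool] = None  # None means no voicemail attached

def partition_history(entries: List[CallEntry]) -> Tuple[List[CallEntry], List[CallEntry]]:
    """Split call-history entries into (new_list, old_list)."""
    new, old = [], []
    for e in entries:
        is_new = e.missed and (not e.seen or e.voicemail_opened is False)
        (new if is_new else old).append(e)
    return new, old
```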
-
Patent number: 9001978
Abstract: A computing device displays a call history graphical user interface (GUI). The call history GUI includes a new list and an old list. The new list may include new missed call elements and missed call elements associated with new unopened voicemails. The old list may include other call history GUI elements, such as old missed call elements and missed call elements associated with opened voicemails.
Type: Grant
Filed: July 8, 2013
Date of Patent: April 7, 2015
Assignee: Google Inc.
Inventors: Flavio Lerda, Hugo Hudson, Debashish Chatterjee, Simon Tickner, Marcus Alexander Foster
-
Patent number: 8958775
Abstract: In one implementation, a computer-implemented method includes identifying, by a computer system, a plurality of voicemail messages that are associated with a particular user and that are from a plurality of voicemail sources; and generating, by the computer system, a plurality of graphical display elements that represent the identified plurality of voicemail messages and that include source identifiers that indicate a voicemail source from the plurality of voicemail sources for each of the plurality of voicemail messages. The method can further include providing the plurality of graphical display elements with the source identifiers for the plurality of voicemail messages in a user interface through which the plurality of voicemail messages from the plurality of voicemail sources are caused to be played based on received user input.
Type: Grant
Filed: June 29, 2012
Date of Patent: February 17, 2015
Assignee: Google Inc.
Inventors: Flavio Lerda, Hugo Hudson, Debashish Chatterjee, Bryan Mawhinney, Marcus A. Foster
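The merging step this abstract describes, combining voicemails from several sources into one list of display elements tagged with source identifiers, can be sketched as follows; the dictionary shapes and names are illustrative assumptions:

```python
# Sketch: voicemails from multiple sources (e.g., a carrier mailbox and an
# internet telephony app) are flattened into a single list of display elements,
# each carrying a source identifier so the UI can label where it came from.
from typing import Dict, List

def build_voicemail_elements(sources: Dict[str, List[str]]) -> List[Dict[str, str]]:
    """sources maps a source name to its voicemail ids; returns display elements."""
    elements = []
    for source, messages in sources.items():
        for msg_id in messages:
            elements.append({"message": msg_id, "source": source})
    return elements

elements = build_voicemail_elements({
    "carrier": ["vm-001"],
    "voip-app": ["vm-002", "vm-003"],
})
```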
-
Publication number: 20130295887
Abstract: A computing device displays a call history graphical user interface (GUI). The call history GUI includes a new list and an old list. The new list may include new missed call elements and missed call elements associated with new unopened voicemails. The old list may include other call history GUI elements, such as old missed call elements and missed call elements associated with opened voicemails.
Type: Application
Filed: July 8, 2013
Publication date: November 7, 2013
Inventors: Flavio Lerda, Hugo Hudson, Debashish Chatterjee, Simon Tickner, Marcus Alexander Foster