Patents by Inventor Domenico Carbotta

Domenico Carbotta has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240411513
    Abstract: Implementations set forth herein relate to an automated assistant that can selectively determine whether to incorporate a verbatim interpretation of portions of spoken utterances into an entry field and/or incorporate synonymous content into the entry field. For instance, a user can be accessing an interface that provides an entry field (e.g., an address field) for receiving user input. In order to provide input for the entry field, the user can select the entry field and/or access a GUI keyboard to initialize an automated assistant for assisting with filling the entry field. When providing a spoken utterance, the user can elect to provide one that embodies the intended input (e.g., an actual address) or one that references the intended input (e.g., a name). In response to the spoken utterance, the automated assistant can fill the entry field with the intended input without necessitating further input from the user. A minimal illustrative sketch of this decision flow appears after the listing below.
    Type: Application
    Filed: August 19, 2024
    Publication date: December 12, 2024
    Inventors: Srikanth Pandiri, Luv Kothari, Behshad Behzadi, Zaheed Sabur, Domenico Carbotta, Akshay Kannan, Qi Wang, Gokay Baris Gultekin, Angana Ghosh, Xu Liu, Yang Lu, Steve Cheng
  • Publication number: 20240380970
    Abstract: Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects that a particular environment feature is apparent. In this way, the user can rely on the automated assistant to identify and capture certain moments without necessarily requiring the user to constantly monitor a viewing window of the camera. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data that is associated with the automated assistant. For instance, a relationship between content in a camera viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using a camera. A minimal sketch of such a condition-driven capture loop appears after the listing below.
    Type: Application
    Filed: July 25, 2024
    Publication date: November 14, 2024
    Inventors: Felix Weissenberger, Balint Miklos, Victor Carbune, Matthew Sharifi, Domenico Carbotta, Ray Chen, Kevin Fu, Bogdan Prisacari, Fo Lee, Mucun Lu, Neha Garg, Jacopo Sannazzaro Natta, Barbara Poblocka, Jae Seo, Matthew Miao, Thomas Qian, Luv Kothari
  • Patent number: 12093609
    Abstract: Implementations set forth herein relate to an automated assistant that can selectively determine whether to incorporate a verbatim interpretation of portions of spoken utterances into an entry field and/or incorporate synonymous content into the entry field. For instance, a user can be accessing an interface that provides an entry field (e.g., an address field) for receiving user input. In order to provide input for the entry field, the user can select the entry field and/or access a GUI keyboard to initialize an automated assistant for assisting with filling the entry field. When providing a spoken utterance, the user can elect to provide one that embodies the intended input (e.g., an actual address) or one that references the intended input (e.g., a name). In response to the spoken utterance, the automated assistant can fill the entry field with the intended input without necessitating further input from the user.
    Type: Grant
    Filed: November 9, 2023
    Date of Patent: September 17, 2024
    Assignee: GOOGLE LLC
    Inventors: Srikanth Pandiri, Luv Kothari, Behshad Behzadi, Zaheed Sabur, Domenico Carbotta, Akshay Kannan, Qi Wang, Gokay Baris Gultekin, Angana Ghosh, Xu Liu, Yang Lu, Steve Cheng
  • Patent number: 12052492
    Abstract: Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects that a particular environment feature is apparent. In this way, the user can rely on the automated assistant to identify and capture certain moments without necessarily requiring the user to constantly monitor a viewing window of the camera. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data that is associated with the automated assistant. For instance, a relationship between content in a camera viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using a camera.
    Type: Grant
    Filed: August 8, 2023
    Date of Patent: July 30, 2024
    Assignee: GOOGLE LLC
    Inventors: Felix Weissenberger, Balint Miklos, Victor Carbune, Matthew Sharifi, Domenico Carbotta, Ray Chen, Kevin Fu, Bogdan Prisacari, Fo Lee, Mucun Lu, Neha Garg, Jacopo Sannazzaro Natta, Barbara Poblocka, Jae Seo, Matthew Miao, Thomas Qian, Luv Kothari
  • Publication number: 20240078083
    Abstract: Implementations set forth herein relate to an automated assistant that can selectively determine whether to incorporate a verbatim interpretation of portions of spoken utterances into an entry field and/or incorporate synonymous content into the entry field. For instance, a user can be accessing an interface that provides an entry field (e.g., an address field) for receiving user input. In order to provide input for the entry field, the user can select the entry field and/or access a GUI keyboard to initialize an automated assistant for assisting with filling the entry field. When providing a spoken utterance, the user can elect to provide one that embodies the intended input (e.g., an actual address) or one that references the intended input (e.g., a name). In response to the spoken utterance, the automated assistant can fill the entry field with the intended input without necessitating further input from the user.
    Type: Application
    Filed: November 9, 2023
    Publication date: March 7, 2024
    Inventors: Srikanth Pandiri, Luv Kothari, Behshad Behzadi, Zaheed Sabur, Domenico Carbotta, Akshay Kannan, Qi Wang, Gokay Baris Gultekin, Angana Ghosh, Xu Liu, Yang Lu, Steve Cheng
  • Publication number: 20240022809
    Abstract: Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects that a particular environment feature is apparent. In this way, the user can rely on the automated assistant to identify and capture certain moments without necessarily requiring the user to constantly monitor a viewing window of the camera. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data that is associated with the automated assistant. For instance, a relationship between content in a camera viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using a camera.
    Type: Application
    Filed: August 8, 2023
    Publication date: January 18, 2024
    Inventors: Felix Weissenberger, Balint Miklos, Victor Carbune, Matthew Sharifi, Domenico Carbotta, Ray Chen, Kevin Fu, Bogdan Prisacari, Fo Lee, Mucun Lu, Neha Garg, Jacopo Sannazzaro Natta, Barbara Poblocka, Jae Seo, Matthew Miao, Thomas Qian, Luv Kothari
  • Patent number: 11853649
    Abstract: Implementations set forth herein relate to an automated assistant that can selectively determine whether to incorporate a verbatim interpretation of portions of spoken utterances into an entry field and/or incorporate synonymous content into the entry field. For instance, a user can be accessing an interface that provides an entry field (e.g., an address field) for receiving user input. In order to provide input for the entry field, the user can select the entry field and/or access a GUI keyboard to initialize an automated assistant for assisting with filling the entry field. When providing a spoken utterance, the user can elect to provide one that embodies the intended input (e.g., an actual address) or one that references the intended input (e.g., a name). In response to the spoken utterance, the automated assistant can fill the entry field with the intended input without necessitating further input from the user.
    Type: Grant
    Filed: December 13, 2019
    Date of Patent: December 26, 2023
    Assignee: GOOGLE LLC
    Inventors: Srikanth Pandiri, Luv Kothari, Behshad Behzadi, Zaheed Sabur, Domenico Carbotta, Akshay Kannan, Qi Wang, Gokay Baris Gultekin, Angana Ghosh, Xu Liu, Yang Lu, Steve Cheng
  • Patent number: 11765452
    Abstract: Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects that a particular environment feature is apparent. In this way, the user can rely on the automated assistant to identify and capture certain moments without necessarily requiring the user to constantly monitor a viewing window of the camera. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data that is associated with the automated assistant. For instance, a relationship between content in a camera viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using a camera.
    Type: Grant
    Filed: January 13, 2023
    Date of Patent: September 19, 2023
    Assignee: GOOGLE LLC
    Inventors: Felix Weissenberger, Balint Miklos, Victor Carbune, Matthew Sharifi, Domenico Carbotta, Ray Chen, Kevin Fu, Bogdan Prisacari, Fo Lee, Mucun Lu, Neha Garg, Jacopo Sannazzaro Natta, Barbara Poblocka, Jae Seo, Matthew Miao, Thomas Qian, Luv Kothari
  • Publication number: 20230156322
    Abstract: Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects that a particular environment feature is apparent. In this way, the user can rely on the automated assistant to identify and capture certain moments without necessarily requiring the user to constantly monitor a viewing window of the camera. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data that is associated with the automated assistant. For instance, a relationship between content in a camera viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using a camera.
    Type: Application
    Filed: January 13, 2023
    Publication date: May 18, 2023
    Inventors: Felix Weissenberger, Balint Miklos, Victor Carbune, Matthew Sharifi, Domenico Carbotta, Ray Chen, Kevin Fu, Bogdan Prisacari, Fo Lee, Mucun Lu, Neha Garg, Jacopo Sannazzaro Natta, Barbara Poblocka, Jae Seo, Matthew Miao, Thomas Qian, Luv Kothari
  • Patent number: 11558546
    Abstract: Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects that a particular environment feature is apparent. In this way, the user can rely on the automated assistant to identify and capture certain moments without necessarily requiring the user to constantly monitor a viewing window of the camera. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data that is associated with the automated assistant. For instance, a relationship between content in a camera viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using a camera.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: January 17, 2023
    Assignee: GOOGLE LLC
    Inventors: Felix Weissenberger, Balint Miklos, Victor Carbune, Matthew Sharifi, Domenico Carbotta, Ray Chen, Kevin Fu, Bogdan Prisacari, Fo Lee, Mucun Lu, Neha Garg, Jacopo Sannazzaro Natta, Barbara Poblocka, Jae Seo, Matthew Miao, Thomas Qian, Luv Kothari
  • Publication number: 20220253277
    Abstract: Implementations set forth herein relate to an automated assistant that can selectively determine whether to incorporate a verbatim interpretation of portions of spoken utterances into an entry field and/or incorporate synonymous content into the entry field. For instance, a user can be accessing an interface that provides an entry field (e.g., an address field) for receiving user input. In order to provide input for the entry field, the user can select the entry field and/or access a GUI keyboard to initialize an automated assistant for assisting with filling the entry field. When providing a spoken utterance, the user can elect to provide one that embodies the intended input (e.g., an actual address) or one that references the intended input (e.g., a name). In response to the spoken utterance, the automated assistant can fill the entry field with the intended input without necessitating further input from the user.
    Type: Application
    Filed: December 13, 2019
    Publication date: August 11, 2022
    Inventors: Srikanth Pandiri, Luv Kothari, Behshad Behzadi, Zaheed Sabur, Domenico Carbotta, Akshay Kannan, Qi Wang, Gokay Baris Gultekin, Angana Ghosh, Xu Liu, Yang Lu, Steve Cheng
  • Publication number: 20220166919
    Abstract: Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects that a particular environment feature is apparent. In this way, the user can rely on the automated assistant to identify and capture certain moments without necessarily requiring the user to constantly monitor a viewing window of the camera. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data that is associated with the automated assistant. For instance, a relationship between content in a camera viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using a camera.
    Type: Application
    Filed: November 24, 2020
    Publication date: May 26, 2022
    Inventors: Felix Weissenberger, Balint Miklos, Victor Carbune, Matthew Sharifi, Domenico Carbotta, Ray Chen, Kevin Fu, Bogdan Prisacari, Fo Lee, Mucun Lu, Neha Garg, Jacopo Sannazzaro Natta, Barbara Poblocka, Jae Seo, Matthew Miao, Thomas Qian, Luv Kothari
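The following is a minimal, hypothetical Python sketch of the entry-field decision flow described in publication 20240411513 and its related filings above: the assistant inserts the verbatim transcript unless the utterance can be resolved as a reference to the intended content. The names used here (KNOWN_CONTACTS, resolve_reference, fill_entry_field) are illustrative assumptions, not the patented implementation.

    # Toy sketch of choosing between verbatim and synonymous content for an entry field.
    # All data and helpers below are hypothetical stand-ins.

    from typing import Optional

    # Hypothetical store mapping spoken references (e.g., a contact name) to the
    # content the user actually intends to enter (e.g., that contact's address).
    KNOWN_CONTACTS = {
        "my dentist": "123 Main Street, Springfield",
    }


    def resolve_reference(utterance: str) -> Optional[str]:
        """Return the content a reference points to, or None if the utterance
        is not recognized as a reference."""
        return KNOWN_CONTACTS.get(utterance.strip().lower())


    def fill_entry_field(utterance: str) -> str:
        """Decide between the verbatim transcript and synonymous (resolved) content.

        If the utterance names something that can be resolved (a reference), the
        resolved value is used; otherwise the verbatim interpretation of the
        spoken input is inserted into the entry field as-is.
        """
        resolved = resolve_reference(utterance)
        if resolved is not None:
            return resolved   # synonymous content, e.g., the actual address
        return utterance      # verbatim interpretation of the spoken input


    if __name__ == "__main__":
        # Spoken reference -> resolved address
        print(fill_entry_field("my dentist"))
        # Literal dictation -> verbatim text
        print(fill_entry_field("456 Oak Avenue, Shelbyville"))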
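The following is a minimal, hypothetical Python sketch of the condition-driven camera capture described in publication 20240380970 and its related filings above: the assistant monitors viewfinder frames and captures media only when a user-specified condition is satisfied. The Frame type, the feature labels, and capture_photo are illustrative stand-ins, not the patented implementation.

    # Toy sketch of capturing media when a user-specified condition is met.
    # Feature detection and media storage are represented by placeholders.

    from dataclasses import dataclass
    from typing import Callable, Iterable


    @dataclass
    class Frame:
        """A single camera viewfinder frame plus detected feature labels."""
        image_id: int
        detected_features: set[str]


    def capture_photo(frame: Frame) -> None:
        # Stand-in for actually persisting media data.
        print(f"Captured frame {frame.image_id}")


    def monitor_and_capture(
        frames: Iterable[Frame],
        condition: Callable[[Frame], bool],
    ) -> None:
        """Capture media whenever the condition is satisfied, so the user
        need not constantly watch the camera's viewing window."""
        for frame in frames:
            if condition(frame):
                capture_photo(frame)


    if __name__ == "__main__":
        # Example condition: "take a picture when the dog jumps" expressed as a
        # check against features detected in the frame; contextual data from
        # another application could be folded into the same predicate.
        wanted = {"dog", "jumping"}
        frames = [
            Frame(1, {"dog", "sitting"}),
            Frame(2, {"dog", "jumping"}),  # condition satisfied here
        ]
        monitor_and_capture(frames, lambda f: wanted <= f.detected_features)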