Patents by Inventor Nitin Khandelwal
Nitin Khandelwal has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12327559
Abstract: Systems, methods, and apparatus for using a multimodal response in the dynamic generation of client device output that is tailored to a current modality of a client device are disclosed herein. Multimodal client devices can engage in a variety of interactions across the multimodal spectrum, including voice-only interactions, voice-forward interactions, multimodal interactions, visual-forward interactions, and visual-only interactions. A multimodal response can include a core message to be rendered for all interaction types, as well as one or more modality-dependent components to provide a user with additional information.
Type: Grant
Filed: February 1, 2024
Date of Patent: June 10, 2025
Assignee: GOOGLE LLC
Inventors: April Pufahl, Jared Strawderman, Harry Yu, Adriana Olmos Antillon, Jonathan Livni, Okan Kolak, James Giangola, Nitin Khandelwal, Jason Kearns, Andrew Watson, Joseph Ashear, Valerie Nygaard
-
Publication number: 20250184311
Abstract: Implementations described herein utilize an independent server for facilitating secure exchange of data between multiple disparate parties. The independent server receives client data, via an automated assistant application executing at least in part at a client device, that is to be transmitted to a given third-party application. The independent server processes the client data, using a first encoder-decoder model, to generate opaque client data, and transmits the opaque client data to the given third-party application without transmitting any of the client data. Further, the independent server receives response data, via the given third-party application, that is generated based on the opaque client data and that is to be transmitted back to the client device. The independent server processes the response data, using a second encoder-decoder model, to generate opaque response data, and transmits the opaque response data to the client device without transmitting any of the response data.
Type: Application
Filed: February 5, 2025
Publication date: June 5, 2025
Inventors: Akshay Goel, Jonathan Eccles, Nitin Khandelwal, Sarvjeet Singh, David Sanchez, Ashwin Ram
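The data flow in the abstract can be sketched minimally as follows. Note this is only an illustration of the broker pattern: the patent describes encoder-decoder models for producing the opaque data, whereas this toy uses server-side handles as a stand-in, and every name is hypothetical:

```python
import secrets


class IndependentServer:
    """Toy broker between a client and a third-party app: each side receives
    only an opaque token, never the other side's raw data. (The patent uses
    encoder-decoder models to make data opaque; a random handle plus a
    server-side lookup table stands in for them here.)"""

    def __init__(self) -> None:
        self._store: dict[str, str] = {}

    def _make_opaque(self, data: str) -> str:
        # Generate an opaque handle that reveals nothing about the payload.
        token = secrets.token_hex(8)
        self._store[token] = data
        return token

    def forward_client_data(self, client_data: str) -> str:
        """What the third-party application receives instead of client data."""
        return self._make_opaque(client_data)

    def forward_response(self, response_data: str) -> str:
        """What the client receives instead of the raw third-party response."""
        return self._make_opaque(response_data)

    def resolve(self, token: str) -> str:
        """Only the independent server can map a token back to its payload."""
        return self._store[token]
```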
-
Patent number: 12254886
Abstract: Implementations described herein are directed to enabling collaborative ranking of interpretations of spoken utterances based on data that is available to an automated assistant and third-party agent(s), respectively. The automated assistant can determine first-party interpretation(s) of a spoken utterance provided by a user, and can cause the third-party agent(s) to determine third-party interpretation(s) of the spoken utterance provided by the user. In some implementations, the automated assistant can select a given interpretation, from the first-party interpretation(s) and the third-party interpretation(s), of the spoken utterance, and can cause a given third-party agent to satisfy the spoken utterance based on the given interpretation.
Type: Grant
Filed: February 28, 2024
Date of Patent: March 18, 2025
Assignee: GOOGLE LLC
Inventors: Akshay Goel, Nitin Khandelwal, Richard Park, Brian Chatham, Jonathan Eccles, David Sanchez, Dmytro Lapchuk
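The selection step described in the abstract can be illustrated with a minimal Python sketch. A single confidence score stands in for whatever ranking signals the actual system uses, and all names are hypothetical:

```python
def select_interpretation(first_party: list[dict], third_party: list[dict]) -> dict:
    """Merge the assistant's own (first-party) interpretations of an
    utterance with the third-party agents' interpretations, and pick
    the highest-confidence candidate from the combined pool."""
    candidates = first_party + third_party
    return max(candidates, key=lambda c: c["confidence"])
```

The selected interpretation would then be handed to the corresponding third-party agent to satisfy the utterance.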
-
Patent number: 12244568
Abstract: Implementations described herein utilize an independent server for facilitating secure exchange of data between multiple disparate parties. The independent server receives client data, via an automated assistant application executing at least in part at a client device, that is to be transmitted to a given third-party application. The independent server processes the client data, using a first encoder-decoder model, to generate opaque client data, and transmits the opaque client data to the given third-party application without transmitting any of the client data. Further, the independent server receives response data, via the given third-party application, that is generated based on the opaque client data and that is to be transmitted back to the client device. The independent server processes the response data, using a second encoder-decoder model, to generate opaque response data, and transmits the opaque response data to the client device without transmitting any of the response data.
Type: Grant
Filed: August 23, 2022
Date of Patent: March 4, 2025
Assignee: GOOGLE LLC
Inventors: Akshay Goel, Jonathan Eccles, Nitin Khandelwal, Sarvjeet Singh, David Sanchez, Ashwin Ram
-
Patent number: 12141199
Abstract: A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method identifies an entity from a plurality of entities identifying characteristics of video items. The computer-implemented method selects a set of features correlated with the entity based on a value of a feature of a plurality of features, determines a classifier for the entity using the set of features, and determines an aggregation calibration function for the entity based on the set of features. The computer-implemented method selects a video frame from a video item, where the video frame has associated features, and determines a probability of existence of the entity based on the associated features using the classifier and the aggregation calibration function.
Type: Grant
Filed: December 13, 2021
Date of Patent: November 12, 2024
Assignee: Google LLC
Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Nitin Khandelwal, Sudheendra Vijayanarasimhan, Weilong Yang, Sanketh Shetty
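A stripped-down version of the per-frame scoring described in the abstract might look like the sketch below. The linear classifier, the affine calibration, and all feature names are illustrative assumptions; the patent does not specify these particular forms:

```python
import math


def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))


def frame_entity_probability(
    frame_features: dict[str, float],
    entity_weights: dict[str, float],
    calibration: tuple[float, float] = (1.0, 0.0),
) -> float:
    """Toy per-entity frame classifier: score the frame using only the
    feature subset correlated with the entity, then apply a calibration
    (here an affine adjustment a*score + b, standing in for the patent's
    aggregation calibration function) before squashing to a probability."""
    score = sum(w * frame_features[f]
                for f, w in entity_weights.items()
                if f in frame_features)  # features outside the subset are ignored
    a, b = calibration
    return sigmoid(a * score + b)
```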
-
Publication number: 20240203423
Abstract: Implementations described herein are directed to enabling collaborative ranking of interpretations of spoken utterances based on data that is available to an automated assistant and third-party agent(s), respectively. The automated assistant can determine first-party interpretation(s) of a spoken utterance provided by a user, and can cause the third-party agent(s) to determine third-party interpretation(s) of the spoken utterance provided by the user. In some implementations, the automated assistant can select a given interpretation, from the first-party interpretation(s) and the third-party interpretation(s), of the spoken utterance, and can cause a given third-party agent to satisfy the spoken utterance based on the given interpretation.
Type: Application
Filed: February 28, 2024
Publication date: June 20, 2024
Inventors: Akshay Goel, Nitin Khandelwal, Richard Park, Brian Chatham, Jonathan Eccles, David Sanchez, Dmytro Lapchuk
-
Patent number: 12014542
Abstract: A computer-implemented method for selecting representative frames for videos is provided. The method includes receiving a video and identifying a set of features for each of the frames of the video. The features include frame-based features and semantic features. The semantic features identify likelihoods of semantic concepts being present as content in the frames of the video. A set of video segments for the video is subsequently generated. Each video segment includes a chronological subset of frames from the video, and each frame is associated with at least one of the semantic features. The method generates a score for each frame of the subset of frames for each video segment based at least on the semantic features, and selects a representative frame for each video segment based on the scores of the frames in the video segment. The representative frame represents and summarizes the video segment.
Type: Grant
Filed: December 14, 2020
Date of Patent: June 18, 2024
Assignee: Google LLC
Inventors: Sanketh Shetty, Tomas Izo, Min-Hsuan Tsai, Sudheendra Vijayanarasimhan, Apostol Natsev, Sami Abu-El-Haija, George Dan Toderici, Susanna Ricco, Balakrishnan Varadarajan, Nicola Muscettola, WeiHsin Gu, Weilong Yang, Nitin Khandelwal, Phuong Le
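The final selection step in the abstract reduces to picking the highest-scoring frame within each segment. A minimal sketch, assuming one precomputed semantic score per frame and half-open segment boundaries (both assumptions of this illustration, not details from the patent):

```python
def select_representative_frames(
    frame_scores: list[float],
    segments: list[tuple[int, int]],
) -> list[int]:
    """For each video segment (start, end), half-open over frame indices,
    return the index of its highest-scoring frame. The scores are assumed
    to already combine the frame-based and semantic features."""
    representatives = []
    for start, end in segments:
        best = max(range(start, end), key=lambda i: frame_scores[i])
        representatives.append(best)
    return representatives
```

Each returned index identifies the frame that would be used to represent and summarize its segment, e.g. as a thumbnail.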
-
Publication number: 20240169989
Abstract: Systems, methods, and apparatus for using a multimodal response in the dynamic generation of client device output that is tailored to a current modality of a client device are disclosed herein. Multimodal client devices can engage in a variety of interactions across the multimodal spectrum, including voice-only interactions, voice-forward interactions, multimodal interactions, visual-forward interactions, and visual-only interactions. A multimodal response can include a core message to be rendered for all interaction types, as well as one or more modality-dependent components to provide a user with additional information.
Type: Application
Filed: February 1, 2024
Publication date: May 23, 2024
Inventors: April Pufahl, Jared Strawderman, Harry Yu, Adriana Olmos Antillon, Jonathan Livni, Okan Kolak, James Giangola, Nitin Khandelwal, Jason Kearns, Andrew Watson, Joseph Ashear, Valerie Nygaard
-
Patent number: 11948580
Abstract: Implementations described herein are directed to enabling collaborative ranking of interpretations of spoken utterances based on data that is available to an automated assistant and third-party agent(s), respectively. The automated assistant can determine first-party interpretation(s) of a spoken utterance provided by a user, and can cause the third-party agent(s) to determine third-party interpretation(s) of the spoken utterance provided by the user. In some implementations, the automated assistant can select a given interpretation, from the first-party interpretation(s) and the third-party interpretation(s), of the spoken utterance, and can cause a given third-party agent to satisfy the spoken utterance based on the given interpretation.
Type: Grant
Filed: November 29, 2021
Date of Patent: April 2, 2024
Assignee: GOOGLE LLC
Inventors: Akshay Goel, Nitin Khandelwal, Richard Park, Brian Chatham, Jonathan Eccles, David Sanchez, Dmytro Lapchuk
-
Patent number: 11935530
Abstract: Systems, methods, and apparatus for using a multimodal response in the dynamic generation of client device output that is tailored to a current modality of a client device are disclosed herein. Multimodal client devices can engage in a variety of interactions across the multimodal spectrum, including voice-only interactions, voice-forward interactions, multimodal interactions, visual-forward interactions, and visual-only interactions. A multimodal response can include a core message to be rendered for all interaction types, as well as one or more modality-dependent components to provide a user with additional information.
Type: Grant
Filed: November 1, 2021
Date of Patent: March 19, 2024
Assignee: GOOGLE LLC
Inventors: April Pufahl, Jared Strawderman, Harry Yu, Adriana Olmos Antillon, Jonathan Livni, Okan Kolak, James Giangola, Nitin Khandelwal, Jason Kearns, Andrew Watson, Joseph Ashear, Valerie Nygaard
-
Publication number: 20240031339
Abstract: Implementations described herein utilize an independent server for facilitating secure exchange of data between multiple disparate parties. The independent server receives client data, via an automated assistant application executing at least in part at a client device, that is to be transmitted to a given third-party application. The independent server processes the client data, using a first encoder-decoder model, to generate opaque client data, and transmits the opaque client data to the given third-party application without transmitting any of the client data. Further, the independent server receives response data, via the given third-party application, that is generated based on the opaque client data and that is to be transmitted back to the client device. The independent server processes the response data, using a second encoder-decoder model, to generate opaque response data, and transmits the opaque response data to the client device without transmitting any of the response data.
Type: Application
Filed: August 23, 2022
Publication date: January 25, 2024
Inventors: Akshay Goel, Jonathan Eccles, Nitin Khandelwal, Sarvjeet Singh, David Sanchez, Ashwin Ram
-
Publication number: 20230062201
Abstract: Implementations described herein are directed to enabling collaborative ranking of interpretations of spoken utterances based on data that is available to an automated assistant and third-party agent(s), respectively. The automated assistant can determine first-party interpretation(s) of a spoken utterance provided by a user, and can cause the third-party agent(s) to determine third-party interpretation(s) of the spoken utterance provided by the user. In some implementations, the automated assistant can select a given interpretation, from the first-party interpretation(s) and the third-party interpretation(s), of the spoken utterance, and can cause a given third-party agent to satisfy the spoken utterance based on the given interpretation.
Type: Application
Filed: November 29, 2021
Publication date: March 2, 2023
Inventors: Akshay Goel, Nitin Khandelwal, Richard Park, Brian Chatham, Jonathan Eccles, David Sanchez, Dmytro Lapchuk
-
Patent number: 11568869
Abstract: Implementations include identifying, from a database of entries reflecting past automated assistant commands submitted within a threshold amount of time relative to a current time, particular entries that each reflect corresponding features of a corresponding user submission of a particular command. Further, those implementations include determining that the particular command is a golden command, for a particular automated assistant function, responsive to determining that: at least a threshold percentage of the user submissions of the particular command triggered the particular automated assistant function, and a quantity of the user submissions of the particular command satisfies a threshold quantity.
Type: Grant
Filed: November 23, 2020
Date of Patent: January 31, 2023
Assignee: GOOGLE LLC
Inventors: Aakash Goel, Tayfun Elmas, Keith Brady, Akshay Jaggi, Ester Lopez Berga, Arne Vansteenkiste, Robin Martinjak, Mahesh Palekar, Krish Narang, Nitin Khandelwal, Pravir Gupta
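The two criteria in the abstract, a minimum share of submissions triggering the function and a minimum submission count, can be checked directly. The sketch below is an illustration only; the thresholds, record layout, and function names are all hypothetical:

```python
def is_golden_command(
    entries: list[dict],
    command: str,
    function: str,
    min_share: float = 0.95,   # threshold percentage (hypothetical default)
    min_count: int = 100,      # threshold quantity (hypothetical default)
) -> bool:
    """Decide whether `command` is a 'golden command' for `function`:
    its recent submissions must be numerous enough, and a sufficient
    share of them must have triggered that function."""
    submissions = [e for e in entries if e["command"] == command]
    if len(submissions) < min_count:
        return False
    triggered = sum(1 for e in submissions if e["triggered"] == function)
    return triggered / len(submissions) >= min_share
```

Such golden commands could then serve as high-confidence regression anchors: a change that stops a golden command from triggering its function is likely a bug.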
-
Publication number: 20220207873
Abstract: A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method identifies an entity from a plurality of entities identifying characteristics of video items. The computer-implemented method selects a set of features correlated with the entity based on a value of a feature of a plurality of features, determines a classifier for the entity using the set of features, and determines an aggregation calibration function for the entity based on the set of features. The computer-implemented method selects a video frame from a video item, where the video frame has associated features, and determines a probability of existence of the entity based on the associated features using the classifier and the aggregation calibration function.
Type: Application
Filed: December 13, 2021
Publication date: June 30, 2022
Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Nitin Khandelwal, Sudheendra Vijayanarasimhan, Weilong Yang, Sanketh Shetty
-
Publication number: 20220165259
Abstract: Implementations include identifying, from a database of entries reflecting past automated assistant commands submitted within a threshold amount of time relative to a current time, particular entries that each reflect corresponding features of a corresponding user submission of a particular command. Further, those implementations include determining that the particular command is a golden command, for a particular automated assistant function, responsive to determining that: at least a threshold percentage of the user submissions of the particular command triggered the particular automated assistant function, and a quantity of the user submissions of the particular command satisfies a threshold quantity.
Type: Application
Filed: November 23, 2020
Publication date: May 26, 2022
Inventors: Aakash Goel, Tayfun Elmas, Keith Brady, Akshay Jaggi, Ester Lopez Berga, Arne Vansteenkiste, Robin Martinjak, Mahesh Palekar, Krish Narang, Nitin Khandelwal, Pravir Gupta
-
Publication number: 20220051675
Abstract: Systems, methods, and apparatus for using a multimodal response in the dynamic generation of client device output that is tailored to a current modality of a client device are disclosed herein. Multimodal client devices can engage in a variety of interactions across the multimodal spectrum, including voice-only interactions, voice-forward interactions, multimodal interactions, visual-forward interactions, and visual-only interactions. A multimodal response can include a core message to be rendered for all interaction types, as well as one or more modality-dependent components to provide a user with additional information.
Type: Application
Filed: November 1, 2021
Publication date: February 17, 2022
Inventors: April Pufahl, Jared Strawderman, Harry Yu, Adriana Olmos Antillon, Jonathan Livni, Okan Kolak, James Giangola, Nitin Khandelwal, Jason Kearns, Andrew Watson, Joseph Ashear, Valerie Nygaard
-
Patent number: 11200423
Abstract: A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method identifies an entity from a plurality of entities identifying characteristics of video items. The computer-implemented method selects a set of features correlated with the entity based on a value of a feature of a plurality of features, determines a classifier for the entity using the set of features, and determines an aggregation calibration function for the entity based on the set of features. The computer-implemented method selects a video frame from a video item, where the video frame has associated features, and determines a probability of existence of the entity based on the associated features using the classifier and the aggregation calibration function.
Type: Grant
Filed: November 18, 2019
Date of Patent: December 14, 2021
Assignee: Google LLC
Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Nitin Khandelwal, Sudheendra Vijayanarasimhan, Weilong Yang, Sanketh Shetty
-
Patent number: 11164576
Abstract: Systems, methods, and apparatus for using a multimodal response in the dynamic generation of client device output that is tailored to a current modality of a client device are disclosed herein. Multimodal client devices can engage in a variety of interactions across the multimodal spectrum, including voice-only interactions, voice-forward interactions, multimodal interactions, visual-forward interactions, and visual-only interactions. A multimodal response can include a core message to be rendered for all interaction types, as well as one or more modality-dependent components to provide a user with additional information.
Type: Grant
Filed: January 18, 2019
Date of Patent: November 2, 2021
Assignee: GOOGLE LLC
Inventors: April Pufahl, Jared Strawderman, Harry Yu, Adriana Olmos Antillon, Jonathan Livni, Okan Kolak, James Giangola, Nitin Khandelwal, Jason Kearns, Andrew Watson, Joseph Ashear, Valerie Nygaard
-
Publication number: 20210166035
Abstract: A computer-implemented method for selecting representative frames for videos is provided. The method includes receiving a video and identifying a set of features for each of the frames of the video. The features include frame-based features and semantic features. The semantic features identify likelihoods of semantic concepts being present as content in the frames of the video. A set of video segments for the video is subsequently generated. Each video segment includes a chronological subset of frames from the video, and each frame is associated with at least one of the semantic features. The method generates a score for each frame of the subset of frames for each video segment based at least on the semantic features, and selects a representative frame for each video segment based on the scores of the frames in the video segment. The representative frame represents and summarizes the video segment.
Type: Application
Filed: December 14, 2020
Publication date: June 3, 2021
Inventors: Sanketh Shetty, Tomas Izo, Min-Hsuan Tsai, Sudheendra Vijayanarasimhan, Apostol Natsev, Sami Abu-El-Haija, George Dan Toderici, Susanna Ricco, Balakrishnan Varadarajan, Nicola Muscettola, WeiHsin Gu, Weilong Yang, Nitin Khandelwal, Phuong Le
-
Patent number: 10867183
Abstract: A computer-implemented method for selecting representative frames for videos is provided. The method includes receiving a video and identifying a set of features for each of the frames of the video. The features include frame-based features and semantic features. The semantic features identify likelihoods of semantic concepts being present as content in the frames of the video. A set of video segments for the video is subsequently generated. Each video segment includes a chronological subset of frames from the video, and each frame is associated with at least one of the semantic features. The method generates a score for each frame of the subset of frames for each video segment based at least on the semantic features, and selects a representative frame for each video segment based on the scores of the frames in the video segment. The representative frame represents and summarizes the video segment.
Type: Grant
Filed: April 23, 2018
Date of Patent: December 15, 2020
Assignee: Google LLC
Inventors: Sanketh Shetty, Tomas Izo, Min-Hsuan Tsai, Sudheendra Vijayanarasimhan, Apostol Natsev, Sami Abu-El-Haija, George Dan Toderici, Susanna Ricco, Balakrishnan Varadarajan, Nicola Muscettola, WeiHsin Gu, Weilong Yang, Nitin Khandelwal, Phuong Le