Patents by Inventor Kiran Bindhu HEMARAJ
Kiran Bindhu HEMARAJ has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11862156
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to providing talk back automation for applications installed on a mobile device. To do so, actions (e.g., talk back features) can be created, via the digital assistant, by recording a series of events that are typically provided by a user of the mobile device when manually invoking the desired action. At a desired state, the user may select an object that represents the output of the application. The recording embodies the action and can be associated with a series of verbal commands that the user would typically announce to the digital assistant when an invocation of the action is desired. In response, the object is verbally communicated to the user via the digital assistant, a different digital assistant, or even another device. Alternatively, the object may be communicated to the same application or another application as input.
Type: Grant
Filed: July 2, 2021
Date of Patent: January 2, 2024
Assignee: Peloton Interactive, Inc.
Inventors: Mark Robinson, Matan Levi, Kiran Bindhu Hemaraj, Rajat Mukherjee
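The abstract describes recording a sequence of in-app events once, binding it to verbal commands, and replaying it later to speak the selected output. A minimal sketch of that flow, with the caveat that every class, method, and the toy "weather app" below are illustrative assumptions, not taken from the patent:

```python
# Hypothetical talk-back automation: record a sequence of events, bind it
# to verbal commands, then replay it and return the output object to speak.

class TalkBackAssistant:
    def __init__(self):
        self._actions = {}  # normalized command phrase -> (events, output selector)

    def record_action(self, commands, events, output_selector):
        """Associate verbal commands with a recorded event sequence and a
        selector that extracts the output object from the final app state."""
        for command in commands:
            self._actions[command.lower()] = (events, output_selector)

    def invoke(self, command, app_state):
        """Replay the recorded events against the app state and return the
        output object to be spoken (or passed to another app as input)."""
        events, selector = self._actions[command.lower()]
        for event in events:
            app_state = event(app_state)
        return selector(app_state)


# Usage: a toy "weather app" whose state is modeled as a dict.
assistant = TalkBackAssistant()
assistant.record_action(
    commands=["what's the temperature", "read me the temperature"],
    events=[lambda s: {**s, "screen": "forecast"}],  # recorded navigation step
    output_selector=lambda s: f"It is {s['temp_f']} degrees",
)
spoken = assistant.invoke("What's the temperature", {"temp_f": 72})
```

The returned string stands in for the object that the patent describes being verbally communicated to the user, possibly by a different assistant or device.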
-
Publication number: 20230335115
Abstract: Embodiments facilitate the intuitive creation, maintenance, and distribution of action datasets that include computing events or tasks that can be reproduced when a command is received by a digital assistant. The digital assistant can generate new action datasets, on-board new action datasets, and receive new action datasets or updates to existing action datasets locally or via a digital assistant server, among other things. The digital assistant server can also receive action datasets, maintain action datasets, and distribute action datasets to one or more digital assistants, among other things.
Type: Application
Filed: June 19, 2023
Publication date: October 19, 2023
Inventors: Rajat Mukherjee, Kiran Bindhu Hemaraj
-
Patent number: 11682380
Abstract: Embodiments facilitate the intuitive creation, maintenance, and distribution of action datasets that include computing events or tasks that can be reproduced when a command is received by a digital assistant. The digital assistant can generate new action datasets, on-board new action datasets, and receive new action datasets or updates to existing action datasets locally or via a digital assistant server, among other things. The digital assistant server can also receive action datasets, maintain action datasets, and distribute action datasets to one or more digital assistants, among other things.
Type: Grant
Filed: June 21, 2021
Date of Patent: June 20, 2023
Assignee: PELOTON INTERACTIVE INC.
Inventors: Rajat Mukherjee, Kiran Bindhu Hemaraj
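The abstract's server-side role, receiving, maintaining, and distributing action datasets, can be pictured as a versioned store that sends each client only what it is missing. A toy illustration under assumed names (this is not the patented implementation):

```python
# Illustrative digital assistant server that stores versioned action
# datasets and computes which updates a given client still needs.

class ActionServer:
    def __init__(self):
        self._datasets = {}  # action id -> (version, dataset)

    def on_board(self, action_id, dataset, version=1):
        """Accept a new or updated action dataset from a client."""
        self._datasets[action_id] = (version, dataset)

    def updates_for(self, client_versions):
        """Return datasets the client is missing or holds at an older version."""
        return {
            action_id: dataset
            for action_id, (version, dataset) in self._datasets.items()
            if client_versions.get(action_id, 0) < version
        }


server = ActionServer()
server.on_board("order_coffee", {"events": ["open_app", "tap_order"]}, version=2)
server.on_board("play_music", {"events": ["open_player", "tap_play"]}, version=1)

# A client that already holds version 1 of "order_coffee" and nothing else:
updates = server.updates_for({"order_coffee": 1})
```

The client would merge these datasets into its local index, matching the abstract's "receive new action datasets or updates to existing action datasets" locally or via the server.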
-
Publication number: 20230100423
Abstract: Embodiments described herein are generally directed towards systems and methods relating to a crowd-sourced digital assistant and system. In particular, embodiments facilitate the intuitive creation and distribution of action datasets that include computing events or tasks that can be reproduced when an associated command, stored in an action dataset, is determined to be received by a digital assistant device. The digital assistant device described herein can generate new action datasets, on-board new action datasets, and receive new action datasets or updates to existing action datasets. Each digital assistant device in the described system can participate in the building of action datasets, so as to crowd-source a dialect that can be understood by a digital assistant device.
Type: Application
Filed: December 5, 2022
Publication date: March 30, 2023
Inventors: Rajat MUKHERJEE, Kiran Bindhu Hemaraj, Matan Levi
-
Patent number: 11520610
Abstract: Embodiments described herein are generally directed towards systems and methods relating to a crowd-sourced digital assistant and system. In particular, embodiments facilitate the intuitive creation and distribution of action datasets that include computing events or tasks that can be reproduced when an associated command, stored in an action dataset, is determined to be received by a digital assistant device. The digital assistant device described herein can generate new action datasets, on-board new action datasets, and receive new action datasets or updates to existing action datasets. Each digital assistant device in the described system can participate in the building of action datasets, so as to crowd-source a dialect that can be understood by a digital assistant device.
Type: Grant
Filed: May 18, 2018
Date of Patent: December 6, 2022
Assignee: PELOTON INTERACTIVE INC.
Inventors: Rajat Mukherjee, Kiran Bindhu Hemaraj, Matan Levi
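The "crowd-sourced dialect" idea, where every device contributes command phrasings and every device benefits, can be sketched as a shared phrase-to-action index. All identifiers below are hypothetical illustrations, not the patented design:

```python
# Toy crowd-sourced command "dialect": many devices contribute phrasings
# for the same action, and any device can then resolve any phrasing.

from collections import defaultdict


class CrowdDialect:
    def __init__(self):
        self._phrasings = {}              # normalized phrase -> action id
        self._contributors = defaultdict(set)  # action id -> contributing devices

    @staticmethod
    def _normalize(phrase):
        return " ".join(phrase.lower().split())

    def contribute(self, device_id, phrase, action_id):
        self._phrasings[self._normalize(phrase)] = action_id
        self._contributors[action_id].add(device_id)

    def resolve(self, phrase):
        return self._phrasings.get(self._normalize(phrase))


dialect = CrowdDialect()
dialect.contribute("device-a", "order my usual coffee", "order_coffee")
dialect.contribute("device-b", "get me a latte", "order_coffee")

# A third device never taught the assistant either phrase, yet resolves both:
action = dialect.resolve("Get me a  latte")
```

Normalizing case and whitespace stands in for the more robust command matching the abstracts describe.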
-
Publication number: 20220283831
Abstract: Embodiments of the present invention are directed to action recipes for a crowdsourced digital assistant. Users can define an action recipe by recording a set of inputs across one or more applications, by providing multiple sub-commands in a single on-the-fly command, by providing one or more associated commands, or otherwise. An action recipe dataset is generated, and stored and indexed on a user device and/or on an action cloud server. As such, any user can invoke an action recipe by providing an associated command to a crowdsourced digital assistant application on a user device. The crowdsourced digital assistant searches for a matching command on the user device and/or the action cloud server, and if a match is located, the corresponding action recipe dataset is accessed, and the crowdsourced digital assistant emulates the actions in the action recipe on the user device.
Type: Application
Filed: May 23, 2022
Publication date: September 8, 2022
Inventors: Rajat MUKHERJEE, Mark ROBINSON, Kiran BINDHU HEMARAJ
-
Patent number: 11340925
Abstract: Embodiments of the present invention are directed to action recipes for a crowdsourced digital assistant. Users can define an action recipe by recording a set of inputs across one or more applications, by providing multiple sub-commands in a single on-the-fly command, by providing one or more associated commands, or otherwise. An action recipe dataset is generated, and stored and indexed on a user device and/or on an action cloud server. As such, any user can invoke an action recipe by providing an associated command to a crowdsourced digital assistant application on a user device. The crowdsourced digital assistant searches for a matching command on the user device and/or the action cloud server, and if a match is located, the corresponding action recipe dataset is accessed, and the crowdsourced digital assistant emulates the actions in the action recipe on the user device.
Type: Grant
Filed: March 26, 2018
Date of Patent: May 24, 2022
Assignee: PELOTON INTERACTIVE INC.
Inventors: Rajat Mukherjee, Mark Robinson, Kiran Bindhu Hemaraj
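The lookup order in this abstract, search the user device first, fall back to the action cloud server, then emulate the recipe's recorded inputs, can be condensed into a few lines. The function and the toy recipes are assumptions for illustration only:

```python
# Sketch of action-recipe invocation: match the command against a local
# index, fall back to a cloud index, then emulate each recorded step.

def invoke_recipe(command, local_index, cloud_index, emulate):
    """Return the emulated steps of the matching recipe, or None if no match."""
    recipe = local_index.get(command) or cloud_index.get(command)
    if recipe is None:
        return None
    return [emulate(step) for step in recipe]


local = {"share my run": ["open fitness app", "tap share"]}
cloud = {"book a cab": ["open taxi app", "tap book"]}

# str.upper stands in for actually emulating an input on the device:
performed = invoke_recipe("book a cab", local, cloud, emulate=str.upper)
```

Checking the device-local index before the server keeps frequently used recipes working offline, which is consistent with the abstract's "on the user device and/or on an action cloud server" indexing.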
-
Publication number: 20210335363
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to providing talk back automation for applications installed on a mobile device. To do so, actions (e.g., talk back features) can be created, via the digital assistant, by recording a series of events that are typically provided by a user of the mobile device when manually invoking the desired action. At a desired state, the user may select an object that represents the output of the application. The recording embodies the action and can be associated with a series of verbal commands that the user would typically announce to the digital assistant when an invocation of the action is desired. In response, the object is verbally communicated to the user via the digital assistant, a different digital assistant, or even another device. Alternatively, the object may be communicated to the same application or another application as input.
Type: Application
Filed: July 2, 2021
Publication date: October 28, 2021
Inventors: Mark Robinson, Matan Levi, Kiran Bindhu Hemaraj, Rajat Mukherjee
-
Publication number: 20210312909
Abstract: Embodiments facilitate the intuitive creation, maintenance, and distribution of action datasets that include computing events or tasks that can be reproduced when a command is received by a digital assistant. The digital assistant can generate new action datasets, on-board new action datasets, and receive new action datasets or updates to existing action datasets locally or via a digital assistant server, among other things. The digital assistant server can also receive action datasets, maintain action datasets, and distribute action datasets to one or more digital assistants, among other things.
Type: Application
Filed: June 21, 2021
Publication date: October 7, 2021
Inventors: Rajat Mukherjee, Kiran Bindhu Hemaraj
-
Patent number: 11056105
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to providing talk back automation for applications installed on a mobile device. To do so, actions (e.g., talk back features) can be created, via the digital assistant, by recording a series of events that are typically provided by a user of the mobile device when manually invoking the desired action. At a desired state, the user may select an object that represents the output of the application. The recording embodies the action and can be associated with a series of verbal commands that the user would typically announce to the digital assistant when an invocation of the action is desired. In response, the object is verbally communicated to the user via the digital assistant, a different digital assistant, or even another device. Alternatively, the object may be communicated to the same application or another application as input.
Type: Grant
Filed: March 26, 2018
Date of Patent: July 6, 2021
Assignee: AIQUDO, INC
Inventors: Mark Robinson, Matan Levi, Kiran Bindhu Hemaraj, Rajat Mukherjee
-
Patent number: 11043206
Abstract: Embodiments described herein are generally directed towards systems and methods relating to a crowd-sourced digital assistant system and related methods. In particular, embodiments facilitate the intuitive creation, maintenance, and distribution of action datasets that include computing events or tasks that can be reproduced when an associated command, stored in an action dataset, is determined to be received by a digital assistant device. The digital assistant device described herein can generate new action datasets, on-board new action datasets to a remote server, and receive new action datasets or updates to existing action datasets from the remote server. The digital assistant server described herein can receive action datasets, maintain action datasets, and distribute action datasets to one or more digital assistant devices.
Type: Grant
Filed: July 18, 2018
Date of Patent: June 22, 2021
Assignee: Aiqudo, Inc.
Inventors: Rajat Mukherjee, Kiran Bindhu Hemaraj
-
Patent number: 10768954
Abstract: Embodiments described herein are generally directed towards systems and methods relating to a crowd-sourced digital assistant system and techniques for disambiguating commands based on personalized usage of a digital assistant device, among other things. In various embodiments, the digital assistant device can use personal data, collected device usage data, and other types of collected contextual information, to disambiguate received commands for the proper selection and execution of operations on the digital assistant device. The digital assistant can process and interpret ambiguous commands and even unique user dialects without requiring extensive training to recognize and act on the received commands, even if the particular phraseology of the command has not previously been encountered by the digital assistant.
Type: Grant
Filed: January 30, 2019
Date of Patent: September 8, 2020
Assignee: AIQUDO, INC.
Inventors: Kiran Bindhu Hemaraj, Rajat Mukherjee
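Disambiguation from personalized usage data, as this abstract describes, can be reduced to a scoring problem: when one phrase matches several actions, prefer the action the user invokes most often in the current context. The function, the usage counts, and the context labels below are all illustrative assumptions:

```python
# Hypothetical command disambiguation: among candidate actions for an
# ambiguous phrase, pick the one most used in the current context.

def disambiguate(candidates, usage_counts, context):
    """candidates: action ids; usage_counts: (action, context) -> count."""
    return max(candidates, key=lambda action: usage_counts.get((action, context), 0))


usage = {
    ("play_on_speaker", "home"): 12,
    ("play_on_headphones", "home"): 3,
    ("play_on_headphones", "commute"): 20,
}

# "play my playlist" could mean either action; collected context decides:
choice = disambiguate(["play_on_speaker", "play_on_headphones"], usage, "commute")
```

A production system would fold in more signals (personal data, time of day, recency) than a single count, but the selection principle is the same.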
-
Publication number: 20190347118
Abstract: Embodiments described herein are generally directed towards systems and methods relating to a crowd-sourced digital assistant system and related methods. In particular, embodiments describe techniques for effectively searching, modifying, identifying parameter values, and determining features for selecting action datasets for distribution to digital assistant devices based on commands received therefrom. Action datasets include computing events or tasks that can be reproduced when a command is received by a digital assistant device and communicated to the server device.
Type: Application
Filed: July 24, 2019
Publication date: November 14, 2019
Inventors: Rajat Mukherjee, Kiran Bindhu Hemaraj, Matan Levi
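One piece of this abstract, identifying parameter values in a received command, can be illustrated by matching the command against a stored template whose placeholders become captured values. The template syntax and function below are assumptions for illustration, not the technique claimed in the application:

```python
# Illustrative parameter extraction: placeholders like {size} in a stored
# command template become named capture groups matched against the command.

import re


def extract_parameters(template, command):
    """Return a dict of placeholder -> value, or None if the command
    does not match the template (case-insensitive)."""
    escaped = re.escape(template)
    pattern = re.sub(r"\\\{(\w+)\\\}", r"(?P<\1>.+)", escaped)
    match = re.fullmatch(pattern, command, flags=re.IGNORECASE)
    return match.groupdict() if match else None


params = extract_parameters("order a {size} {drink}", "Order a large latte")
```

Extracted values like these would be substituted into the replayed events, letting one action dataset serve many concrete commands.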
-
Patent number: 10466963
Abstract: Various embodiments, methods and systems for implementing a digital assistant connectivity system are provided. In operation, a request to receive a unique identifier is communicated from a digital assistant device. The unique identifier is utilized to pair the digital assistant device with a smart assistant device. The unique identifier is received from and generated by a digital assistant server to correspond with the digital assistant device and a corresponding digital assistant device application instance. An instruction to perform an action on the digital assistant device is received at the digital assistant device application. The instruction is communicated based on an established command-driven session between the digital assistant device application and the smart assistant device. The command-driven session is associated with the unique identifier that paired the digital assistant device and the smart assistant device.
Type: Grant
Filed: March 26, 2018
Date of Patent: November 5, 2019
Assignee: AIQUDO, INC.
Inventors: Matan Levi, Mark Robinson, Rajat Mukherjee, Kiran Bindhu Hemaraj, Sunil Patil
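The pairing flow in this abstract, where a server generates a unique identifier for the assistant app and that identifier is then used to pair it with a smart assistant device into a command-driven session, can be sketched in a few lines. The class, methods, and device names are hypothetical stand-ins:

```python
# Illustrative pairing flow: the server issues a unique identifier to the
# assistant device; a smart device presenting the same identifier is
# paired with it, opening a command-driven session keyed by that identifier.

import uuid


class PairingServer:
    def __init__(self):
        self._pending = {}   # identifier -> assistant device awaiting pairing
        self._sessions = {}  # identifier -> (assistant device, smart device)

    def request_identifier(self, assistant_device):
        """Generate a unique identifier bound to this assistant device."""
        identifier = str(uuid.uuid4())
        self._pending[identifier] = assistant_device
        return identifier

    def pair(self, identifier, smart_device):
        """Pair the smart device with the waiting assistant device."""
        assistant_device = self._pending.pop(identifier)
        self._sessions[identifier] = (assistant_device, smart_device)
        return self._sessions[identifier]


server = PairingServer()
token = server.request_identifier("phone-assistant-1")
session = server.pair(token, "kitchen-speaker")
```

Once the session exists, instructions from the smart device would be routed to the assistant app over it, as the abstract's "command-driven session" describes.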
-
Publication number: 20190235887
Abstract: Embodiments described herein are generally directed towards systems and methods relating to a crowd-sourced digital assistant system and techniques for disambiguating commands based on personalized usage of a digital assistant device, among other things. In various embodiments, the digital assistant device can use personal data, collected device usage data, and other types of collected contextual information, to disambiguate received commands for the proper selection and execution of operations on the digital assistant device. The digital assistant can process and interpret ambiguous commands and even unique user dialects without requiring extensive training to recognize and act on the received commands, even if the particular phraseology of the command has not previously been encountered by the digital assistant.
Type: Application
Filed: January 30, 2019
Publication date: August 1, 2019
Inventors: Kiran Bindhu Hemaraj, Rajat Mukherjee
-
Publication number: 20180366113
Abstract: Embodiments described herein facilitate the robust replay of reproducible computing events or tasks when an associated command is received by a digital assistant device. The digital assistant device can determine when a received command corresponds to one of a plurality of action datasets, select the corresponding action dataset to interpret instructions included therein, which can thereby initiate a particular feature of an application associated with the corresponding action dataset. During the process of initiating the particular feature, the digital assistant device can determine when unexpected behaviors of the associated application or the digital assistant device's operating system occur. In this way, the digital assistant device can dynamically switch to a different set of instructions included in the corresponding action dataset to address the unexpected behaviors and successfully initiate the particular feature associated with the received command.
Type: Application
Filed: August 23, 2018
Publication date: December 20, 2018
Inventors: Kiran Bindhu Hemaraj, Rajat Mukherjee
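The "dynamically switch to a different set of instructions" idea in this abstract amounts to trying alternate instruction sets from the same action dataset when replay hits unexpected behavior. A minimal sketch, assuming hypothetical names and a simulated UI change:

```python
# Sketch of robust replay with fallback: an action dataset carries several
# instruction sets; the assistant switches to the next one when replay
# encounters unexpected application behavior.

def replay_with_fallback(instruction_sets, execute):
    """Try each instruction set in order; return the first that completes."""
    last_error = None
    for instructions in instruction_sets:
        try:
            return [execute(step) for step in instructions]
        except RuntimeError as error:  # e.g. an expected UI element is missing
            last_error = error
    raise last_error


def execute(step):
    # Simulate an app whose UI changed after an update: the old button
    # recorded in the primary instruction set no longer exists.
    if step == "tap_old_button":
        raise RuntimeError("element not found")
    return f"did:{step}"


result = replay_with_fallback(
    [["open_app", "tap_old_button"],   # primary recording fails mid-replay
     ["open_app", "tap_new_button"]],  # alternate set succeeds
    execute,
)
```

Catching the failure at the step level is what lets the assistant still "successfully initiate the particular feature" even after the application's behavior changes.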
-
Publication number: 20180336885
Abstract: Embodiments described herein are generally directed towards systems and methods relating to a crowd-sourced digital assistant system and related methods. In particular, embodiments facilitate the intuitive creation, maintenance, and distribution of action datasets that include computing events or tasks that can be reproduced when an associated command, stored in an action dataset, is determined to be received by a digital assistant device. The digital assistant device described herein can generate new action datasets, on-board new action datasets to a remote server, and receive new action datasets or updates to existing action datasets from the remote server. The digital assistant server described herein can receive action datasets, maintain action datasets, and distribute action datasets to one or more digital assistant devices.
Type: Application
Filed: July 18, 2018
Publication date: November 22, 2018
Inventors: Rajat Mukherjee, Kiran Bindhu Hemaraj
-
Publication number: 20180336050
Abstract: Embodiments of the present invention are directed to action recipes for a crowdsourced digital assistant. Users can define an action recipe by recording a set of inputs across one or more applications, by providing multiple sub-commands in a single on-the-fly command, by providing one or more associated commands, or otherwise. An action recipe dataset is generated, and stored and indexed on a user device and/or on an action cloud server. As such, any user can invoke an action recipe by providing an associated command to a crowdsourced digital assistant application on a user device. The crowdsourced digital assistant searches for a matching command on the user device and/or the action cloud server, and if a match is located, the corresponding action recipe dataset is accessed, and the crowdsourced digital assistant emulates the actions in the action recipe on the user device.
Type: Application
Filed: March 26, 2018
Publication date: November 22, 2018
Inventors: Rajat Mukherjee, Mark Robinson, Kiran Bindhu Hemaraj
-
Publication number: 20180337799
Abstract: Various embodiments, methods and systems for implementing a digital assistant connectivity system are provided. In operation, a request to receive a unique identifier is communicated from a digital assistant device. The unique identifier is utilized to pair the digital assistant device with a smart assistant device. The unique identifier is received from and generated by a digital assistant server to correspond with the digital assistant device and a corresponding digital assistant device application instance. An instruction to perform an action on the digital assistant device is received at the digital assistant device application. The instruction is communicated based on an established command-driven session between the digital assistant device application and the smart assistant device. The command-driven session is associated with the unique identifier that paired the digital assistant device and the smart assistant device.
Type: Application
Filed: March 26, 2018
Publication date: November 22, 2018
Inventors: Matan Levi, Mark Robinson, Rajat Mukherjee, Kiran Bindhu Hemaraj, Sunil Patil
-
Publication number: 20180336893
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to providing talk back automation for applications installed on a mobile device. To do so, actions (e.g., talk back features) can be created, via the digital assistant, by recording a series of events that are typically provided by a user of the mobile device when manually invoking the desired action. At a desired state, the user may select an object that represents the output of the application. The recording embodies the action and can be associated with a series of verbal commands that the user would typically announce to the digital assistant when an invocation of the action is desired. In response, the object is verbally communicated to the user via the digital assistant, a different digital assistant, or even another device. Alternatively, the object may be communicated to the same application or another application as input.
Type: Application
Filed: March 26, 2018
Publication date: November 22, 2018
Inventors: Mark Robinson, Matan Levi, Kiran Bindhu Hemaraj, Rajat Mukherjee