Patents by Inventor Niranjan Manjunath
Niranjan Manjunath has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250121131
Abstract: A medical device hub power management system, method, and apparatus are disclosed. An example hub connectivity station includes a connectivity stage configured to provide a data connection with an external network. The hub connectivity station also includes at least one docking apparatus configured to connect to at least one medical device. The docking apparatuses are connected to the connectivity station in a stacked arrangement. The docking apparatuses and connectivity stage include power and data connectors to enable data and power to be provided through the hub connectivity station. To prevent electrical shock, a top power connector of each docking apparatus is disconnected from power when that docking apparatus is not connected to another docking apparatus or the connectivity stage. Power is provided to the top power connector after detecting another docking apparatus or the connectivity stage is connected to the top of the docking apparatus.
Type: Application
Filed: October 14, 2024
Publication date: April 17, 2025
Inventors: Peter BOJAN, D. Gnana PRASUNA, Siba Prasad SAHU, David James SCHMIDT, Subbaiah PATIBANDLA, Erik Michael THOMAS, Niranjan MANJUNATH, Yogesh MUNAVALLI, Jiri SLABY
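The power-gating behavior described in this abstract can be illustrated with a minimal sketch (all class and method names below are hypothetical, not from the patent): the top connector of a docking apparatus stays de-energized until another apparatus or the connectivity stage is detected on top of it.

```python
class DockingApparatus:
    """Hypothetical model of one docking apparatus in the stacked hub.

    Per the abstract, the top power connector is disconnected from power
    whenever nothing is stacked on top, so an exposed connector cannot
    deliver a shock; it is energized only after a connection is detected.
    """

    def __init__(self) -> None:
        # Safe default: the exposed top connector carries no power.
        self.top_connector_energized = False

    def on_top_connection_changed(self, device_present: bool) -> None:
        # Energize the top connector only while another docking apparatus
        # or the connectivity stage is actually attached above.
        self.top_connector_energized = device_present


dock = DockingApparatus()
assert not dock.top_connector_energized   # exposed connector is unpowered
dock.on_top_connection_changed(True)      # stage or another dock attached
assert dock.top_connector_energized
dock.on_top_connection_changed(False)     # removed again
assert not dock.top_connector_energized
```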
-
Publication number: 20250095648
Abstract: This relates to an intelligent automated assistant in a video communication session environment. An example method includes, during a video communication session between at least two user devices, and at a first user device: receiving a first user voice input; in accordance with a determination that the first user voice input represents a communal digital assistant request, transmitting a request to provide context information associated with the first user voice input to the first user device; receiving context information associated with the first user voice input; obtaining a first digital assistant response based at least on a portion of the context information received from the second user device and at least a portion of context information associated with the first user voice input that is stored on the first user device; providing the first digital assistant response to the second user device; and outputting the first digital assistant response.
Type: Application
Filed: November 27, 2024
Publication date: March 20, 2025
Inventors: Niranjan MANJUNATH, Willem MATTELAER, Jessica PECK, Lily Shuting ZHANG
-
Patent number: 12230264
Abstract: An example process includes while an electronic device is engaged in a communication session with external device(s): receiving, from a first user of the electronic device, input to invoke a first digital assistant; receiving, from the first user, a natural language input corresponding to a task; in accordance with invoking the first digital assistant, generating, by the first digital assistant, a prompt for further user input about the task; transmitting, to the external device(s), the prompt for further user input about the task; after transmitting the prompt for further user input, receiving, from an external device of the external device(s), a response to the prompt for further user input; initiating, by the first digital assistant, based on the response and information corresponding to the first user stored on the electronic device, the task; and transmitting, to the external device(s), an output indicative of the initiated task.
Type: Grant
Filed: July 18, 2022
Date of Patent: February 18, 2025
Assignee: Apple Inc.
Inventors: Rae L. Lasko, German W. Bauer, Felicia W. Edwards, Niranjan Manjunath, Jonathan H. Russell, Lynn Streja, Keith C. Strickling, Garrett L. Weinberg
-
Publication number: 20240404207
Abstract: In some embodiments, a computer system displays a navigation user interface that includes a first representation of a first location experience and a first portion of a navigation user interface element with a first orientation that includes a first representation of a first point of interest associated with the first location experience. In some embodiments, the computer system changes the display of the navigation user interface element from the first orientation to a second orientation corresponding to the first location experience. In some embodiments, the computer system displays a collection of content associated with a physical location in response to user input corresponding to a request to generate the collection of content associated with the physical location.
Type: Application
Filed: June 4, 2024
Publication date: December 5, 2024
Inventors: Martynas LAURITA, Vincent P. ARROYO, Niranjan MANJUNATH, Stephen O. LEMAY, Peter D. ANTON, Matan STAUBER, Matthew J. SUNDSTROM, Fiona P. O'LEARY, Konstantin SINITSYN, Giovanni S. LUIS, Devin FLOOD, Kevin N. EUGENE, Per J. FAHLBERG, Ryan W. APUY
-
Publication number: 20240403080
Abstract: In some embodiments, a computer system displays a navigation user interface within a three-dimensional environment. The navigation user interface includes one or more first travel user interface elements displayed at a first level of immersion corresponding to a first view of a first physical location, that are selectable to change the display of the navigation user interface from the first view of the first physical location to a second view of a second physical location.
Type: Application
Filed: May 31, 2024
Publication date: December 5, 2024
Inventors: Martynas LAURITA, Vincent P. ARROYO, Niranjan MANJUNATH, Stephen O. LEMAY, Peter D. ANTON, Matan STAUBER, Matthew J. SUNDSTROM, Fiona P. O'LEARY
-
Patent number: 12147733
Abstract: In an exemplary technique, audio information responsive to received input is provided. While providing the audio information, one or more conditions for stopping the provision of audio information are detected, and in response, the provision of the audio information is stopped. After stopping the provision of the audio information, if the one or more conditions for stopping the provision of audio information have ceased, then resumed audio information is provided, where the resumed audio information includes a rephrased version of a previously provided segment of the audio information.
Type: Grant
Filed: November 14, 2023
Date of Patent: November 19, 2024
Assignee: Apple Inc.
Inventors: Rahul Nair, Golnaz Abdollahian, Avi Bar-Zeev, Niranjan Manjunath
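The resumption behavior in this abstract, resuming with a rephrased version of the segment that was cut off, can be sketched as follows (the function and the `rephrase` helper are hypothetical illustrations, not from the patent):

```python
def resume_playback(segments, stopped_index, rephrase):
    """Hypothetical sketch: once the stopping condition has ceased,
    resume audio output with a rephrased version of the segment that
    was interrupted, followed by the remaining segments.

    `rephrase` is an assumed helper that rewords a segment.
    """
    resumed = [rephrase(segments[stopped_index])]
    resumed.extend(segments[stopped_index + 1:])
    return resumed


segments = ["Turn left on Main St.", "Then go two blocks.", "Arrive on the right."]
# Playback was interrupted during segment 1; resume with it reworded.
out = resume_playback(segments, 1, lambda s: "To repeat: " + s)
# out == ["To repeat: Then go two blocks.", "Arrive on the right."]
```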
-
Publication number: 20240347059
Abstract: This relates to an intelligent automated assistant in a video communication session environment. An example method includes, during a video communication session between at least two user devices, and at a first user device: receiving a first user voice input; in accordance with a determination that the first user voice input represents a communal digital assistant request, transmitting a request to provide context information associated with the first user voice input to the first user device; receiving context information associated with the first user voice input; obtaining a first digital assistant response based at least on a portion of the context information received from the second user device and at least a portion of context information associated with the first user voice input that is stored on the first user device; providing the first digital assistant response to the second user device; and outputting the first digital assistant response.
Type: Application
Filed: June 10, 2024
Publication date: October 17, 2024
Inventors: Niranjan MANJUNATH, Willem MATTELAER, Jessica PECK, Lily Shuting ZHANG
-
Publication number: 20240281109
Abstract: Some examples of the disclosure are directed to systems and methods for transitioning display of user interfaces in an extended reality environment based on tilt of an electronic device. In some examples, an electronic device presents an extended reality environment that includes a virtual object in a first visual state within the extended reality environment. In some examples, if the electronic device detects a first input that includes movement of the viewpoint, in accordance with a determination that the movement of the viewpoint exceeds a threshold movement, the electronic device displays the virtual object in a second visual state, different from the first visual state. In some examples, while displaying the virtual object in the second visual state, if the electronic device detects a second input that satisfies one or more first criteria, the electronic device displays the virtual object in the first visual state.
Type: Application
Filed: February 12, 2024
Publication date: August 22, 2024
Inventors: Niranjan MANJUNATH, Martynas LAURITA, Arun K. THAMPI, Haishan YE
-
Patent number: 12033636
Abstract: This relates to an intelligent automated assistant in a video communication environment. An example includes, during a video communication session between at least two devices, receiving a voice input at one device, generating and transmitting to a server a textual representation of the voice input, receiving from the server a shared transcription including both the textual representation of the voice input and one or more additional textual representations generated by another device, and determining and presenting one or more candidate tasks based on the shared transcription.
Type: Grant
Filed: August 9, 2023
Date of Patent: July 9, 2024
Assignee: Apple Inc.
Inventors: Niranjan Manjunath, Willem Mattelaer, Jessica Peck, Lily Shuting Zhang
-
Publication number: 20240086147
Abstract: In an exemplary technique, audio information responsive to received input is provided. While providing the audio information, one or more conditions for stopping the provision of audio information are detected, and in response, the provision of the audio information is stopped. After stopping the provision of the audio information, if the one or more conditions for stopping the provision of audio information have ceased, then resumed audio information is provided, where the resumed audio information includes a rephrased version of a previously provided segment of the audio information.
Type: Application
Filed: November 14, 2023
Publication date: March 14, 2024
Inventors: Rahul NAIR, Golnaz ABDOLLAHIAN, Avi BAR-ZEEV, Niranjan MANJUNATH
-
Patent number: 11861265
Abstract: In an exemplary technique, speech input including one or more instructions is received. After the speech input has stopped, if it is determined that one or more visual characteristics indicate that further speech input is not expected, a response to the one or more instructions is provided. If it is determined that one or more visual characteristics indicate that further speech input is expected, a response to the one or more instructions is not provided.
Type: Grant
Filed: March 20, 2023
Date of Patent: January 2, 2024
Assignee: Apple Inc.
Inventors: Rahul Nair, Golnaz Abdollahian, Avi Bar-Zeev, Niranjan Manjunath
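The gating logic this abstract describes, responding only when visual characteristics suggest the user is done speaking, can be sketched as a simple decision function (the function name and the specific cue names are hypothetical; the patent does not enumerate which visual characteristics are used):

```python
def should_respond(speech_ended: bool, visual_cues: dict) -> bool:
    """Hypothetical sketch: after speech input has stopped, provide a
    response only if the visual characteristics do NOT indicate that
    further speech input is expected.

    `visual_cues` keys (e.g. "mouth_open", "gaze_averted") are assumed
    illustrative signals, not taken from the patent.
    """
    if not speech_ended:
        return False  # never respond mid-utterance
    expecting_more_speech = bool(
        visual_cues.get("mouth_open") or visual_cues.get("gaze_averted")
    )
    return not expecting_more_speech


# User finished speaking and appears attentive: respond.
assert should_respond(True, {"mouth_open": False, "gaze_averted": False})
# User's mouth is still open, suggesting more speech: hold the response.
assert not should_respond(True, {"mouth_open": True})
```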
-
Patent number: 11837232
Abstract: This relates to an intelligent automated assistant in a video communication session environment. An example method includes, during a video communication session between at least two user devices, and at a first user device: receiving a first user voice input; in accordance with a determination that the first user voice input represents a communal digital assistant request, transmitting a request to provide context information associated with the first user voice input to the first user device; receiving context information associated with the first user voice input; obtaining a first digital assistant response based at least on a portion of the context information received from the second user device and at least a portion of context information associated with the first user voice input that is stored on the first user device; providing the first digital assistant response to the second user device; and outputting the first digital assistant response.
Type: Grant
Filed: February 28, 2023
Date of Patent: December 5, 2023
Assignee: Apple Inc.
Inventors: Niranjan Manjunath, Willem Mattelaer, Jessica Peck, Lily Shuting Zhang
-
Publication number: 20230386464
Abstract: Embodiments provide a context-aware digital assistant at multiple user devices participating in a video communication session by using context information from a first user device to determine a digital assistant response at a second user device. In this manner, users participating in the video communication session may interact with the digital assistant during the video communication session as if the digital assistant is another participant in the video communication session. Embodiments further describe automatically determining candidate digital assistant tasks based on a shared transcription of user voice inputs received at user devices participating in a video communication session. In this manner, a digital assistant of a user device participating in a video communication session may proactively determine one or more tasks that a user of the user device may want the digital assistant to perform based on conversations held during the video communication session.
Type: Application
Filed: August 9, 2023
Publication date: November 30, 2023
Inventors: Niranjan MANJUNATH, Willem MATTELAER, Jessica PECK, Lily Shuting ZHANG
-
Publication number: 20230336689
Abstract: A method of invoking public and private interactions during a multiuser communication session includes presenting a multiuser communication session; detecting a user invocation input that corresponds to a trigger to a digital assistant; detecting a user search input that corresponds to a request for information; obtaining the information based on the request; presenting the information; in accordance with a determination that at least one of the user invocation input and the user search input satisfy first input criteria associated with a first request type: transmitting the information to other electronic devices for presentation to other users; and in accordance with a determination that at least one of the user invocation input and the user search input satisfy second input criteria associated with a second request type: forgoing transmitting the information to the other electronic devices for presentation to other users.
Type: Application
Filed: November 18, 2022
Publication date: October 19, 2023
Inventors: Jessica J. Peck, Niranjan Manjunath, Willem Mattelaer
-
Patent number: 11769497
Abstract: Embodiments provide a context-aware digital assistant at multiple user devices participating in a video communication session by using context information from a first user device to determine a digital assistant response at a second user device. In this manner, users participating in the video communication session may interact with the digital assistant during the video communication session as if the digital assistant is another participant in the video communication session. Embodiments further describe automatically determining candidate digital assistant tasks based on a shared transcription of user voice inputs received at user devices participating in a video communication session. In this manner, a digital assistant of a user device participating in a video communication session may proactively determine one or more tasks that a user of the user device may want the digital assistant to perform based on conversations held during the video communication session.
Type: Grant
Filed: January 26, 2021
Date of Patent: September 26, 2023
Assignee: Apple Inc.
Inventors: Niranjan Manjunath, Willem Mattelaer, Jessica Peck, Lily Shuting Zhang
-
Publication number: 20230229387
Abstract: In an exemplary technique, speech input including one or more instructions is received. After the speech input has stopped, if it is determined that one or more visual characteristics indicate that further speech input is not expected, a response to the one or more instructions is provided. If it is determined that one or more visual characteristics indicate that further speech input is expected, a response to the one or more instructions is not provided.
Type: Application
Filed: March 20, 2023
Publication date: July 20, 2023
Inventors: Rahul NAIR, Golnaz ABDOLLAHIAN, Avi BAR-ZEEV, Niranjan MANJUNATH
-
Publication number: 20230215435
Abstract: This relates to an intelligent automated assistant in a video communication session environment. An example method includes, during a video communication session between at least two user devices, and at a first user device: receiving a first user voice input; in accordance with a determination that the first user voice input represents a communal digital assistant request, transmitting a request to provide context information associated with the first user voice input to the first user device; receiving context information associated with the first user voice input; obtaining a first digital assistant response based at least on a portion of the context information received from the second user device and at least a portion of context information associated with the first user voice input that is stored on the first user device; providing the first digital assistant response to the second user device; and outputting the first digital assistant response.
Type: Application
Filed: February 28, 2023
Publication date: July 6, 2023
Inventors: Niranjan MANJUNATH, Willem MATTELAER, Jessica PECK, Lily Shuting ZHANG
-
Patent number: 11609739
Abstract: In an exemplary technique for providing audio information, an input is received, and audio information responsive to the received input is provided using a speaker. While providing the audio information, an external sound is detected. If it is determined that the external sound is a communication of a first type, then the provision of the audio information is stopped. If it is determined that the external sound is a communication of a second type, then the provision of the audio information continues.
Type: Grant
Filed: April 24, 2019
Date of Patent: March 21, 2023
Assignee: Apple Inc.
Inventors: Rahul Nair, Golnaz Abdollahian, Avi Bar-Zeev, Niranjan Manjunath
-
Publication number: 20230058929
Abstract: An example process includes while an electronic device is engaged in a communication session with external device(s): receiving, from a first user of the electronic device, input to invoke a first digital assistant; receiving, from the first user, a natural language input corresponding to a task; in accordance with invoking the first digital assistant, generating, by the first digital assistant, a prompt for further user input about the task; transmitting, to the external device(s), the prompt for further user input about the task; after transmitting the prompt for further user input, receiving, from an external device of the external device(s), a response to the prompt for further user input; initiating, by the first digital assistant, based on the response and information corresponding to the first user stored on the electronic device, the task; and transmitting, to the external device(s), an output indicative of the initiated task.
Type: Application
Filed: July 18, 2022
Publication date: February 23, 2023
Inventors: Rae L. LASKO, German W. BAUER, Felicia W. EDWARDS, Niranjan MANJUNATH, Jonathan H. RUSSELL, Lynn I. STREJA, Keith C. STRICKLING, Garrett L. WEINBERG
-
Publication number: 20230042836
Abstract: The present disclosure relates to resolving natural language ambiguities with respect to a simulated reality setting. In an exemplary embodiment, a simulated reality setting having one or more virtual objects is displayed. A stream of gaze events is generated from the simulated reality setting and a stream of gaze data. A speech input is received within a time period and a domain is determined based on a text representation of the speech input. Based on the time period and a plurality of event times for the stream of gaze events, one or more gaze events are identified from the stream of gaze events. The identified one or more gaze events is used to determine a parameter value for an unresolved parameter of the domain. A set of tasks representing a user intent for the speech input is determined based on the parameter value and the set of tasks is performed.
Type: Application
Filed: October 19, 2022
Publication date: February 9, 2023
Inventors: Niranjan MANJUNATH, Scott M. ANDRUS, Xinyuan HUANG, William W. LUCIW, Jonathan H. RUSSELL
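The core idea in this abstract, selecting gaze events that fall within the speech input's time period and using them to fill an unresolved parameter, can be sketched as follows (the data shapes, names, and most-frequent-object heuristic are hypothetical illustrations, not taken from the patent):

```python
from dataclasses import dataclass


@dataclass
class GazeEvent:
    # Hypothetical gaze event: which virtual object was looked at, and when.
    object_id: str
    time: float


def resolve_parameter(gaze_events, speech_start, speech_end):
    """Hypothetical sketch: identify the gaze events whose times fall
    within the speech input's time period, and pick the most-gazed
    virtual object as the value for an unresolved domain parameter
    (e.g. what "that one" refers to). Returns None if no gaze events
    overlap the speech window.
    """
    counts: dict[str, int] = {}
    for event in gaze_events:
        if speech_start <= event.time <= speech_end:
            counts[event.object_id] = counts.get(event.object_id, 0) + 1
    return max(counts, key=counts.get) if counts else None


events = [GazeEvent("lamp", 0.2), GazeEvent("vase", 1.1),
          GazeEvent("vase", 1.4), GazeEvent("lamp", 3.0)]
# Speech occurred between t=1.0 and t=2.0; "vase" dominates that window.
assert resolve_parameter(events, 1.0, 2.0) == "vase"
```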