Patents by Inventor Nicholas Jitkoff
Nicholas Jitkoff has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12266061
Abstract: Methods and systems described herein are directed to a virtual personal interface (herein "personal interface") for controlling an artificial reality (XR) environment, such as by providing user interfaces for interactions with a current XR application, providing detail views for selected items, navigating between multiple virtual worlds without having to transition in and out of a home lobby for those worlds, executing aspects of a second XR application while within a world controlled by a first XR application, and providing 3D content that is separate from the current world. While in at least one of those worlds, the personal interface can itself present content in a runtime separate from the current virtual world, corresponding to an item, action, or application for that world. XR applications can be defined for use with the personal interface to create both a 3D world portion and 2D interface portions that are displayed via the personal interface.
Type: Grant
Filed: November 17, 2022
Date of Patent: April 1, 2025
Assignee: Meta Platforms Technologies, LLC
Inventors: Matthaeus Krenn, Jeremy Edelblut, John Nicholas Jitkoff
-
Publication number: 20250077597
Abstract: In general, the subject matter described in this specification can be embodied in methods, systems, and program products for receiving user input that defines a search query, and providing the search query to a server system. Information that a search engine system determined was responsive to the search query is received at a computing device. The computing device is identified as in a first state, and a first output mode for audibly outputting at least a portion of the information is selected. The first output mode is selected from a collection of the first output mode and a second output mode. The second output mode is selected in response to the computing device being in a second state and is for visually outputting at least the portion of the information and not audibly outputting the at least portion of the information. At least the portion of information is audibly output.
Type: Application
Filed: November 15, 2024
Publication date: March 6, 2025
Applicant: Google LLC
Inventors: John Nicholas Jitkoff, Michael J. Lebeau, William J. Byrne, David P. Singleton
-
Patent number: 12223104
Abstract: Systems and methods for providing partial passthrough video to a user of a virtual reality device are disclosed herein. Providing the partial passthrough video can include detecting a hand passthrough trigger event and identifying a hand passthrough video feed. Providing partial passthrough video can further include aligning the hand passthrough video feed with a virtual environment presented to a user by the virtual reality device and, based on the aligning of the hand passthrough video feed with the virtual environment, overlaying the hand passthrough video feed on the virtual environment.
Type: Grant
Filed: October 11, 2021
Date of Patent: February 11, 2025
Assignee: Meta Platforms Technologies, LLC
Inventors: Michael James LeBeau, John Nicholas Jitkoff
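A minimal sketch of the overlay step this abstract implies, compositing the camera feed over the rendered scene wherever a hand mask is set. The flat-list pixel model and the function name are illustrative assumptions, not the patented method.

```python
# Illustrative partial-passthrough compositing: where the hand mask is
# set, show the camera (passthrough) pixel; elsewhere keep the rendered
# virtual pixel. Pixels are modeled as flat lists for brevity.
def composite(virtual_pixels, camera_pixels, hand_mask):
    """Overlay the hand passthrough feed on the virtual environment."""
    return [cam if in_hand else virt
            for virt, cam, in_hand in zip(virtual_pixels, camera_pixels, hand_mask)]
```

In practice the mask would come from hand tracking and the two feeds would first be spatially aligned, which the flat-list model here glosses over.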
-
Publication number: 20250028771
Abstract: In general, the subject matter described in this specification can be embodied in methods, systems, and program products for providing search results automatically to a user of a computing device. A spoken input provided by a user to a computing device is received. The spoken input is transmitted to a computer server system that is remote from the computing device. Search result information that is responsive to the spoken input is received by the computing device in response to the transmitted spoken input. An alert is provided to the user that the device will connect the user to a target of the search result information if the user does not intervene to stop the connecting of the user. The user is connected to the target of the search result information based on a determination that the user has not intervened to stop the connecting of the user.
Type: Application
Filed: October 7, 2024
Publication date: January 23, 2025
Applicant: Google LLC
Inventors: Michael J. Lebeau, John Nicholas Jitkoff, William J. Byrne
-
Patent number: 12158917
Abstract: In general, the subject matter described in this specification can be embodied in methods, systems, and program products for receiving user input that defines a search query, and providing the search query to a server system. Information that a search engine system determined was responsive to the search query is received at a computing device. The computing device is identified as in a first state, and a first output mode for audibly outputting at least a portion of the information is selected. The first output mode is selected from a collection of the first output mode and a second output mode. The second output mode is selected in response to the computing device being in a second state and is for visually outputting at least the portion of the information and not audibly outputting the at least portion of the information. At least the portion of information is audibly output.
Type: Grant
Filed: December 29, 2021
Date of Patent: December 3, 2024
Assignee: Google LLC
Inventors: John Nicholas Jitkoff, Michael J. Lebeau, William J. Byrne, David P. Singleton
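The state-dependent mode selection described in this abstract amounts to a small dispatch; the state and mode names below are made-up examples for illustration, not terms from the patent.

```python
def select_output_mode(device_state):
    """Select how to present search result information based on device state."""
    # First state -> first (audible) output mode; any other state ->
    # second (visual-only) output mode. State names are assumptions.
    if device_state == "docked":
        return "audible"
    return "visual"
```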
-
Publication number: 20240393126
Abstract: A computer-implemented method includes receiving at a computer server system, from a computing device that is remote from the server system, a string of text that comprises a search query. The method also includes identifying one or more search results that are responsive to the search query, parsing a document that is a target of one of the one or more results, identifying geographical address information from the parsing, generating a specific geographical indicator corresponding to the one search result, and transmitting for use by the computing device, data for automatically generating a navigational application having a destination at the specific geographical indicator.
Type: Application
Filed: August 2, 2024
Publication date: November 28, 2024
Applicant: Google LLC
Inventors: Michael J. Lebeau, Ole Cavelie, Keith Ito, John Nicholas Jitkoff
-
Patent number: 12148423
Abstract: The subject matter of this specification can be implemented in, among other things, a computer-implemented method for correcting words in transcribed text including receiving speech audio data from a microphone. The method further includes sending the speech audio data to a transcription system. The method further includes receiving a word lattice transcribed from the speech audio data by the transcription system. The method further includes presenting one or more transcribed words from the word lattice. The method further includes receiving a user selection of at least one of the presented transcribed words. The method further includes presenting one or more alternate words from the word lattice for the selected transcribed word. The method further includes receiving a user selection of at least one of the alternate words. The method further includes replacing the selected transcribed word in the presented transcribed words with the selected alternate word.
Type: Grant
Filed: June 7, 2021
Date of Patent: November 19, 2024
Assignee: Google LLC
Inventors: Michael J. Lebeau, William J. Byrne, John Nicholas Jitkoff, Brandon M. Ballinger, Trausti T. Kristjansson
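The correction flow above can be sketched in two steps: offer the lattice's alternates for a selected word, then swap in the user's choice. The lattice-as-dict shape and function names are assumptions, not the patent's data structures.

```python
def alternates(lattice, position, chosen):
    """Alternate words the lattice offers at a given word position."""
    return [w for w in lattice.get(position, []) if w != chosen]

def replace_word(words, position, replacement):
    """Replace the selected transcribed word with the chosen alternate."""
    corrected = list(words)
    corrected[position] = replacement
    return corrected
```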
-
Patent number: 12124523
Abstract: In general, the subject matter described in this specification can be embodied in methods, systems, and program products for providing search results automatically to a user of a computing device. A spoken input provided by a user to a computing device is received. The spoken input is transmitted to a computer server system that is remote from the computing device. Search result information that is responsive to the spoken input is received by the computing device in response to the transmitted spoken input. An alert is provided to the user that the device will connect the user to a target of the search result information if the user does not intervene to stop the connecting of the user. The user is connected to the target of the search result information based on a determination that the user has not intervened to stop the connecting of the user.
Type: Grant
Filed: August 9, 2023
Date of Patent: October 22, 2024
Assignee: Google LLC
Inventors: Michael J. Lebeau, John Nicholas Jitkoff, William J. Byrne
-
Publication number: 20240305963
Abstract: In general, the subject matter described in this specification can be embodied in methods, systems, and program products for receiving a voice query at a mobile computing device and generating data that represents content of the voice query. The data is provided to a server system. A textual query that has been determined by a speech recognizer at the server system to be a textual form of at least part of the data is received at the mobile computing device. The textual query is determined to include a carrier phrase of one or more words that is reserved by a first third-party application program installed on the computing device. The first third-party application is selected, from a group of one or more third-party applications, to receive all or a part of the textual query. All or a part of the textual query is provided to the selected first application program.
Type: Application
Filed: May 20, 2024
Publication date: September 12, 2024
Applicant: Google LLC
Inventors: Michael J. Lebeau, John Nicholas Jitkoff, William J. Byrne
-
Patent number: 12072200
Abstract: A computer-implemented method includes receiving at a computer server system, from a computing device that is remote from the server system, a string of text that comprises a search query. The method also includes identifying one or more search results that are responsive to the search query, parsing a document that is a target of one of the one or more results, identifying geographical address information from the parsing, generating a specific geographical indicator corresponding to the one search result, and transmitting for use by the computing device, data for automatically generating a navigational application having a destination at the specific geographical indicator.
Type: Grant
Filed: May 27, 2020
Date of Patent: August 27, 2024
Assignee: Google LLC
Inventors: Michael J. Lebeau, Ole Cavelie, Keith Ito, John Nicholas Jitkoff
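A minimal sketch of the address-to-destination step this abstract describes: pull a street address out of a result's target document and turn it into a navigation destination. The regex, the domain, and the URL format are illustrative assumptions, not the patent's implementation.

```python
import re
from urllib.parse import quote_plus

# Toy pattern for US-style street addresses; real parsing would be
# far more permissive (unit numbers, directions, international forms).
ADDRESS_RE = re.compile(r"\d+\s+[A-Z][a-z]+\s+(?:St|Ave|Blvd|Rd)\b")

def geo_indicator(document_text):
    """Return a navigation URL for the first address found, or None."""
    match = ADDRESS_RE.search(document_text)
    if match is None:
        return None
    return "https://maps.example.com/nav?dest=" + quote_plus(match.group())
```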
-
Patent number: 12066298
Abstract: A computer-implemented method includes receiving at a computer server system, from a computing device that is remote from the server system, a string of text that comprises a search query. The method also includes identifying one or more search results that are responsive to the search query, parsing a document that is a target of one of the one or more results, identifying geographical address information from the parsing, generating a specific geographical indicator corresponding to the one search result, and transmitting for use by the computing device, data for automatically generating a navigational application having a destination at the specific geographical indicator.
Type: Grant
Filed: January 24, 2020
Date of Patent: August 20, 2024
Assignee: Google LLC
Inventors: Michael J. Lebeau, Ole Cavelie, Keith Ito, John Nicholas Jitkoff
-
Publication number: 20240272764
Abstract: An example method of presenting, via an artificial-reality headset, a user interface that includes a selectable user interface element is described. The method includes that while a representation of a hand of a user is within an indirect-control threshold distance of the user interface, a focus selector is projected within the user interface based on a position of the representation of the hand of the user. The example method also includes that upon determining that the representation of the hand of the user has moved within a direct-touch threshold distance of the user interface, ceasing to display the focus selector within the user interface and allowing the representation of the hand of the user to interact directly with the selectable user interface element.
Type: Application
Filed: February 7, 2024
Publication date: August 15, 2024
Inventors: Anastasia Victor-Faichney, Qi Xiong, Jennifer Morrow, Miguel Angel Maya Hernández, John Nicholas Jitkoff, Ahad Habib Basravi, Irvi Stefo
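The two thresholds in this abstract reduce to a small decision rule over hand distance; the numeric values and mode labels below are assumptions for illustration only.

```python
DIRECT_TOUCH_CM = 3.0       # assumed direct-touch threshold distance
INDIRECT_CONTROL_CM = 60.0  # assumed indirect-control threshold distance

def interaction_mode(hand_distance_cm):
    """Choose how the hand representation drives the UI at this distance."""
    if hand_distance_cm <= DIRECT_TOUCH_CM:
        return "direct-touch"    # hide the focus selector, touch elements directly
    if hand_distance_cm <= INDIRECT_CONTROL_CM:
        return "focus-selector"  # project a focus selector onto the UI
    return "none"                # hand too far from the UI to interact
```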
-
Publication number: 20240264660
Abstract: A computer-implemented method for facilitating user interface interactions in an XR environment is provided. The method includes rendering a system UI and tracking a position of the user's hand. The method further includes signifying an interaction opportunity by generating first feedback that modifies a UI element based on the position of the user's hand being within a first threshold distance of the UI element, or by generating second feedback that accentuates an edge of the system UI based on the position of the user's hand being within a second threshold distance of the edge. Furthermore, the method includes updating the position of the user's hand. The method further includes signifying interaction with the UI element by modifying the location of the representation of the user's hand when the user's hand has interacted with the UI element, or signifying interaction with the edge by generating third feedback that accentuates the portion of the representation that grabs the edge.
Type: Application
Filed: February 8, 2023
Publication date: August 8, 2024
Inventors: Samuel Matthew LEVATICH, Matthew Alan INSLEY, Andrew C. JOHNSON, Qi XIONG, Jeremy EDELBLUT, Matthaeus KRENN, John Nicholas JITKOFF, Jennifer MORROW, Brandon FURTWANGLER
-
Patent number: 12010597
Abstract: In general, the subject matter described in this specification can be embodied in methods, systems, and program products for receiving a voice query at a mobile computing device and generating data that represents content of the voice query. The data is provided to a server system. A textual query that has been determined by a speech recognizer at the server system to be a textual form of at least part of the data is received at the mobile computing device. The textual query is determined to include a carrier phrase of one or more words that is reserved by a first third-party application program installed on the computing device. The first third-party application is selected, from a group of one or more third-party applications, to receive all or a part of the textual query. All or a part of the textual query is provided to the selected first application program.
Type: Grant
Filed: August 18, 2022
Date of Patent: June 11, 2024
Assignee: Google LLC
Inventors: Michael J. Lebeau, John Nicholas Jitkoff, William J. Byrne
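The carrier-phrase routing above can be sketched as a prefix match against a registry of reserved phrases; the registry contents, app names, and function name are invented examples, not the patent's terms.

```python
# Hypothetical registry: carrier phrase -> third-party app that reserved it.
CARRIER_PHRASES = {
    "note to self": "notes_app",
    "play": "music_app",
}

def route_query(textual_query):
    """Return (app, remainder) for the first carrier phrase the query starts with."""
    lowered = textual_query.lower()
    for phrase, app in CARRIER_PHRASES.items():
        if lowered.startswith(phrase):
            return app, textual_query[len(phrase):].strip()
    return None, textual_query  # no reserved phrase matched
```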
-
Publication number: 20240160337
Abstract: Methods and systems described herein are directed to a virtual web browser for providing access to multiple virtual worlds interchangeably. Browser tabs for corresponding website and virtual world pairs can be displayed along with associated controls, the selection of such controls effecting the instantiation of 3D content for the virtual worlds. One or more of the tabs can be automatically generated as a result of interactions with objects in the virtual worlds, such that travel to a world, corresponding to an object to which an interaction was directed, is facilitated.
Type: Application
Filed: January 25, 2024
Publication date: May 16, 2024
Inventors: Jeremy EDELBLUT, Matthaeus KRENN, John Nicholas JITKOFF
-
Patent number: 11941244
Abstract: The subject matter of this specification can be implemented in, among other things, a computer-implemented user interface method including displaying on a touchscreen display a representation of a keyboard defining a top edge and a bottom edge, and a content area adjacent to the keyboard. The method further includes receiving a user dragging input having motion directed to the bottom edge of the keyboard. The method further includes removing the keyboard from the touchscreen display and expanding the content area to an area previously occupied by the keyboard.
Type: Grant
Filed: October 3, 2022
Date of Patent: March 26, 2024
Assignee: GOOGLE LLC
Inventors: Alastair Tse, John Nicholas Jitkoff
-
Patent number: 11928314
Abstract: Methods and systems described herein are directed to a virtual web browser for providing access to multiple virtual worlds interchangeably. Browser tabs for corresponding website and virtual world pairs can be displayed along with associated controls, the selection of such controls effecting the instantiation of 3D content for the virtual worlds. One or more of the tabs can be automatically generated as a result of interactions with objects in the virtual worlds, such that travel to a world, corresponding to an object to which an interaction was directed, is facilitated.
Type: Grant
Filed: November 17, 2022
Date of Patent: March 12, 2024
Assignee: Meta Platforms Technologies, LLC
Inventors: Jeremy Edelblut, Matthaeus Krenn, John Nicholas Jitkoff
-
Publication number: 20240036709
Abstract: According to one general aspect, a computing device may include an application configured to create a tab in a context of a window, and a window manager configured to register the tab with a first UI element registry. The window manager may be configured to receive, over a network, at least a portion of a second UI element registry from a secondary window manager of a secondary computing device. The portion of the second UI element registry may identify a remote tab previously registered with the secondary window manager. The window manager may be configured to cause a display to provide a graphical arrangement of the tab and the remote tab.
Type: Application
Filed: October 12, 2023
Publication date: February 1, 2024
Inventors: John Nicholas Jitkoff, Glen Murphy
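A rough sketch of the registry flow above: register a local tab, fold in the remote portion of another device's registry, then arrange both together. The dict-based structures and field names are assumptions, not the patent's design.

```python
def register_tab(registry, tab_id, title, origin="local"):
    """Register a tab with a UI element registry."""
    registry[tab_id] = {"title": title, "origin": origin}

def merge_remote(registry, remote_portion):
    """Fold a received portion of a remote registry into the local one."""
    for tab_id, entry in remote_portion.items():
        registry[tab_id] = {**entry, "origin": "remote"}

def arrangement(registry):
    """A simple graphical arrangement: local tab titles first, then remote."""
    local = [t["title"] for t in registry.values() if t["origin"] == "local"]
    remote = [t["title"] for t in registry.values() if t["origin"] == "remote"]
    return local + remote

registry = {}
register_tab(registry, "t1", "Docs")
merge_remote(registry, {"t9": {"title": "Mail"}})  # portion from remote device
```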
-
Publication number: 20230419618
Abstract: Methods and systems described herein are directed to a virtual personal interface (herein "personal interface") for controlling an artificial reality (XR) environment, such as by providing user interfaces for interactions with a current XR application, providing detail views for selected items, navigating between multiple virtual worlds without having to transition in and out of a home lobby for those worlds, executing aspects of a second XR application while within a world controlled by a first XR application, and providing 3D content that is separate from the current world. While in at least one of those worlds, the personal interface can itself present content in a runtime separate from the current virtual world, corresponding to an item, action, or application for that world. XR applications can be defined for use with the personal interface to create both a 3D world portion and 2D interface portions that are displayed via the personal interface.
Type: Application
Filed: November 17, 2022
Publication date: December 28, 2023
Inventors: Matthaeus KRENN, Jeremy EDELBLUT, John Nicholas JITKOFF
-
Publication number: 20230419617
Abstract: Methods and systems described herein are directed to a virtual personal interface (herein "personal interface") for controlling an artificial reality (XR) environment, such as by providing user interfaces for interactions with a current XR application, providing detail views for selected items, navigating between multiple virtual worlds without having to transition in and out of a home lobby for those worlds, executing aspects of a second XR application while within a world controlled by a first XR application, and providing 3D content that is separate from the current world. While in at least one of those worlds, the personal interface can itself present content in a runtime separate from the current virtual world, corresponding to an item, action, or application for that world. XR applications can be defined for use with the personal interface to create both a 3D world portion and 2D interface portions that are displayed via the personal interface.
Type: Application
Filed: July 19, 2022
Publication date: December 28, 2023
Inventors: Matthaeus KRENN, Jeremy EDELBLUT, John Nicholas JITKOFF