Patents by Inventor Joseph Lange
Joseph Lange has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250060934
Abstract: Implementations are described herein for analyzing existing graphical user interfaces (“GUIs”) to facilitate automatic interaction with those GUIs, e.g., by automated assistants or via other user interfaces, with minimal effort from the hosts of those GUIs. For example, in various implementations, a user intent to interact with a particular GUI may be determined based at least in part on a free-form natural language input. Based on the user intent, a target visual cue to be located in the GUI may be identified, and object recognition processing may be performed on a screenshot of the GUI to determine a location of a detected instance of the target visual cue in the screenshot. Based on the location of the detected instance of the target visual cue, an interactive element of the GUI may be identified and automatically populated with data determined from the user intent.
Type: Application
Filed: November 1, 2024
Publication date: February 20, 2025
Inventors: Joseph Lange, Asier Aguirre, Olivier Siegenthaler, Michal Pryt
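The abstract above describes a pipeline: parse a natural-language intent, detect a target visual cue in a screenshot, then fill the nearby interactive element. As a minimal illustrative sketch only (not the patented implementation), the stages can be modelled with toy stand-ins; `parse_intent`, `locate_cue`, and `populate_nearest` are hypothetical names, and the "screenshot" is reduced to a list of labelled regions in place of real object recognition.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Region:
    """A labelled region of a screenshot; stands in for a detector output."""
    label: str
    x: int
    y: int
    value: str = ""

def parse_intent(utterance: str) -> dict:
    # Hypothetical intent parser: maps a free-form request to the target
    # visual cue to look for and the data to enter once it is found.
    if utterance.startswith("track "):
        return {"cue": "tracking_icon", "data": utterance.split()[-1]}
    return {}

def locate_cue(regions: List[Region], cue: str) -> Optional[Region]:
    # Stand-in for object recognition over a screenshot.
    return next((r for r in regions if r.label == cue), None)

def populate_nearest(regions: List[Region], cue: Region, data: str) -> Region:
    # Identify the interactive element nearest the detected cue and fill it.
    fields = [r for r in regions if r.label == "text_field"]
    target = min(fields, key=lambda r: abs(r.x - cue.x) + abs(r.y - cue.y))
    target.value = data
    return target
```

The nearest-field heuristic is one plausible way to link a detected cue to an interactive element; the patent family leaves the linkage method open.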
-
Patent number: 12229173
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating subqueries from a query. In one aspect, a method includes obtaining a query, generating a set of two subqueries from the query, where the set includes a first subquery and a second subquery, determining a quality score for the set of two subqueries, determining whether the quality score for the set of two subqueries satisfies a quality threshold, and in response to determining that the quality score for the set of two subqueries satisfies the quality threshold, providing a first response to the first subquery that is responsive to a first operation that receives the first subquery as input and providing a second response to the second subquery that is responsive to a second operation that receives the second subquery as input.
Type: Grant
Filed: December 9, 2020
Date of Patent: February 18, 2025
Assignee: Google LLC
Inventors: Vladimir Vuskovic, Joseph Lange, Behshad Behzadi, Marcin M. Nowak-Przygodzki
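The claimed flow above (split, score, threshold, dispatch) can be illustrated with a toy sketch. Everything here is an assumption made for illustration: `split_query` naively splits on "and", and `quality_score` is a stand-in heuristic, not the scoring the patent actually uses.

```python
def split_query(query: str):
    # Naive compound-query splitter: produce a set of two subqueries by
    # splitting on the first conjunction, if any.
    if " and " in query:
        first, second = query.split(" and ", 1)
        return first.strip(), second.strip()
    return None

def quality_score(subqueries) -> float:
    # Hypothetical scorer: reward balanced splits where both halves are
    # non-trivial (ratio of shorter to longer word count).
    lengths = [len(s.split()) for s in subqueries]
    return min(lengths) / max(lengths)

def answer_compound(query, operation, threshold=0.3):
    # If the split's quality score satisfies the threshold, route each
    # subquery through the operation; otherwise answer the query whole.
    subqueries = split_query(query)
    if subqueries and quality_score(subqueries) >= threshold:
        return [operation(s) for s in subqueries]
    return [operation(query)]
```

In the patent, each subquery may go to a different operation; a single `operation` callable is used here only to keep the sketch short.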
-
Patent number: 12147732
Abstract: Implementations are described herein for analyzing existing graphical user interfaces (“GUIs”) to facilitate automatic interaction with those GUIs, e.g., by automated assistants or via other user interfaces, with minimal effort from the hosts of those GUIs. For example, in various implementations, a user intent to interact with a particular GUI may be determined based at least in part on a free-form natural language input. Based on the user intent, a target visual cue to be located in the GUI may be identified, and object recognition processing may be performed on a screenshot of the GUI to determine a location of a detected instance of the target visual cue in the screenshot. Based on the location of the detected instance of the target visual cue, an interactive element of the GUI may be identified and automatically populated with data determined from the user intent.
Type: Grant
Filed: August 16, 2023
Date of Patent: November 19, 2024
Assignee: GOOGLE LLC
Inventors: Joseph Lange, Asier Aguirre, Olivier Siegenthaler, Michal Pryt
-
Publication number: 20240361982
Abstract: Implementations set forth herein relate to an automated assistant that can provide a selectable action intent suggestion when a user is accessing a third party application that is controllable via the automated assistant. The action intent can be initialized by the user without explicitly invoking the automated assistant using, for example, an invocation phrase (e.g., “Assistant . . . ”). Rather, the user can initialize performance of the corresponding action by identifying one or more action parameters. In some implementations, the selectable suggestion can indicate that a microphone is active for the user to provide a spoken utterance that identifies a parameter(s). When the action intent is initialized in response to the spoken utterance from the user, the automated assistant can control the third party application according to the action intent and any identified parameter(s).
Type: Application
Filed: July 5, 2024
Publication date: October 31, 2024
Inventors: Joseph Lange, Marcin Nowak-Przygodzki
-
Patent number: 12118998
Abstract: Implementations are set forth herein for creating an order of execution for actions that were requested by a user, via a spoken utterance to an automated assistant. The order of execution for the requested actions can be based on how each requested action can, or is predicted to, affect other requested actions. In some implementations, an order of execution for a series of actions can be determined based on an output of a machine learning model, such as a model that has been trained according to supervised learning. A particular order of execution can be selected to mitigate waste of processing, memory, and network resources, at least relative to other possible orders of execution. Using interaction data that characterizes past performances of automated assistants, certain orders of execution can be adapted over time, thereby allowing the automated assistant to learn from past interactions with one or more users.
Type: Grant
Filed: August 7, 2023
Date of Patent: October 15, 2024
Assignee: GOOGLE LLC
Inventors: Mugurel Ionut Andreica, Vladimir Vuskovic, Joseph Lange, Sharon Stovezky, Marcin Nowak-Przygodzki
-
Publication number: 20240281205
Abstract: Implementations set forth herein relate to an automated assistant that can control graphical user interface (GUI) elements via voice input using natural language understanding of GUI content in order to resolve ambiguity and allow for condensed GUI voice input requests. When a user is accessing an application that is rendering various GUI elements at a display interface, the automated assistant can operate to process actionable data corresponding to the GUI elements. The actionable data can be processed in order to determine a correspondence between GUI voice input requests to the automated assistant and at least one of the GUI elements rendered at the display interface. When a particular spoken utterance from the user is determined to correspond to multiple GUI elements, an indication of ambiguity can be rendered at the display interface in order to encourage the user to provide a more specific spoken utterance.
Type: Application
Filed: April 30, 2024
Publication date: August 22, 2024
Inventors: Jacek Szmigiel, Joseph Lange
-
Patent number: 12067040
Abstract: Implementations are directed to determining, based on a submitted query that is a compound query, that a set of multiple sub-queries are collectively an appropriate interpretation of the compound query. Those implementations are further directed to providing, in response to such a determination, a corresponding command for each of the sub-queries of the determined set. Each of the commands is to a corresponding agent (of one or more agents), and causes the agent to generate and provide corresponding responsive content. Those implementations are further directed to causing content to be rendered in response to the submitted query, where the content is based on the corresponding responsive content received in response to the commands.
Type: Grant
Filed: January 30, 2023
Date of Patent: August 20, 2024
Assignee: GOOGLE LLC
Inventors: Joseph Lange, Mugurel Ionut Andreica, Marcin Nowak-Przygodzki
-
Publication number: 20240274132
Abstract: Implementations set forth herein relate to an automated assistant that can interact with applications that may not have been pre-configured for interfacing with the automated assistant. The automated assistant can identify content of an application interface of the application to determine synonymous terms that a user may speak when commanding the automated assistant to perform certain tasks. Speech processing operations employed by the automated assistant can be biased towards these synonymous terms when the user is accessing an application interface of the application. In some implementations, the synonymous terms can be identified in a responsive language of the automated assistant when the content of the application interface is being rendered in a different language. This can allow the automated assistant to operate as an interface between the user and certain applications that may not be rendering content in a native language of the user.
Type: Application
Filed: April 22, 2024
Publication date: August 15, 2024
Inventors: Joseph Lange, Abhanshu Sharma, Adam Coimbra, Gökhan Bakir, Gabriel Taubman, Ilya Firman, Jindong Chen, James Stout, Marcin Nowak-Przygodzki, Reed Enger, Thomas Weedon Hume, Vishwath Mohan, Jacek Szmigiel, Yunfan Jin, Kyle Pedersen, Gilles Baechler
-
Publication number: 20240257817
Abstract: Techniques are described herein for delegation of request fulfillment, by an assistant, to other devices. A method includes: receiving, by a first device, a request from a first user; identifying, based on the request from the first user, (i) an action corresponding to the request and (ii) a first parameter corresponding to the action; determining that fulfillment of the action is to be delegated to a device other than the first device; in response: selecting, as the device other than the first device, a second device on which an application corresponding to the action is installed; identifying, by the first device, based on the first parameter and information associated with an account of the first user, a first disambiguated parameter corresponding to the action; and sending, to the second device, a command that specifies the action and the first disambiguated parameter, to cause the second device to fulfill the action.
Type: Application
Filed: February 1, 2023
Publication date: August 1, 2024
Inventors: Marcin Nowak-Przygodzki, Andrei Giurgiu, Mugurel-Ionut Andreica, Joseph Lange
-
Patent number: 12032874
Abstract: Implementations set forth herein relate to an automated assistant that can provide a selectable action intent suggestion when a user is accessing a third party application that is controllable via the automated assistant. The action intent can be initialized by the user without explicitly invoking the automated assistant using, for example, an invocation phrase (e.g., “Assistant . . . ”). Rather, the user can initialize performance of the corresponding action by identifying one or more action parameters. In some implementations, the selectable suggestion can indicate that a microphone is active for the user to provide a spoken utterance that identifies a parameter(s). When the action intent is initialized in response to the spoken utterance from the user, the automated assistant can control the third party application according to the action intent and any identified parameter(s).
Type: Grant
Filed: August 7, 2023
Date of Patent: July 9, 2024
Assignee: GOOGLE LLC
Inventors: Joseph Lange, Marcin Nowak-Przygodzki
-
Publication number: 20240193372
Abstract: A method for dynamically generating training data for a model includes receiving a transcript corresponding to a conversation between a customer and an agent, the transcript comprising a customer input and an agent input. The method includes receiving a logic model including a plurality of responses, each response of the plurality of responses representing a potential reply to the customer input. The method further includes selecting, based on the agent input, a response from the plurality of responses of the logic model. The method includes determining that a similarity score between the selected response and the agent input satisfies a similarity threshold, and, based on determining that the similarity score between the selected response and the agent input satisfies the similarity threshold, training a machine learning model using the customer input and the selected response.
Type: Application
Filed: December 9, 2022
Publication date: June 13, 2024
Applicant: Google LLC
Inventors: Joseph Lange, Henry Scott Dlhopolsky, Vladimir Vuskovic
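The selection step in the abstract above (match the agent's actual reply against the logic model's candidate responses, keep the pair only if the similarity score satisfies a threshold) can be sketched briefly. The patent does not specify the similarity metric; `difflib.SequenceMatcher` is used here purely as a stand-in, and `select_training_pairs` is a hypothetical name.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Simple lexical similarity in [0, 1]; a stand-in for whatever
    # scorer the actual system uses.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def select_training_pairs(turns, logic_model_responses, threshold=0.6):
    # For each (customer_input, agent_input) turn, pick the logic-model
    # response closest to what the agent actually said; keep the
    # (customer_input, selected_response) pair only if the similarity
    # score satisfies the threshold.
    pairs = []
    for customer_input, agent_input in turns:
        best = max(logic_model_responses, key=lambda r: similarity(r, agent_input))
        if similarity(best, agent_input) >= threshold:
            pairs.append((customer_input, best))
    return pairs
```

The surviving pairs would then be fed to model training, which is outside the scope of this sketch.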
-
Publication number: 20240180758
Abstract: An absorbent article including a side panel having an ultrasonically bonded, gathered laminate is provided. The laminate has an elastomeric layer and a substrate and is joined to the absorbent article chassis at a chassis attachment bond and positioned in one of a first or second waist region. The ultrasonically bonded, gathered laminate also includes an ear structural feature comprising a surface modification to the substrate and comprising at least one of the following: embossing, apertures, perforations, slits, melted material or coatings, compressed material, secondary bonds that are disposed apart from a chassis attachment bond, plastic deformation, and folds.
Type: Application
Filed: February 9, 2024
Publication date: June 6, 2024
Inventors: Sally Lin KILBACAK, Donald Carroll ROE, Jeromy Thomas RAYCHECK, Uwe SCHNEIDER, Michael Devin LONG, Michael Brian QUADE, Jason Edward NAYLOR, Jeffry ROSIAK, Stephen Joseph LANGE, Urmish Popatlal DALAL, Christopher KRASEN, Todd Douglas LENSER
-
Patent number: 11995379
Abstract: Implementations set forth herein relate to an automated assistant that can control graphical user interface (GUI) elements via voice input using natural language understanding of GUI content in order to resolve ambiguity and allow for condensed GUI voice input requests. When a user is accessing an application that is rendering various GUI elements at a display interface, the automated assistant can operate to process actionable data corresponding to the GUI elements. The actionable data can be processed in order to determine a correspondence between GUI voice input requests to the automated assistant and at least one of the GUI elements rendered at the display interface. When a particular spoken utterance from the user is determined to correspond to multiple GUI elements, an indication of ambiguity can be rendered at the display interface in order to encourage the user to provide a more specific spoken utterance.
Type: Grant
Filed: September 19, 2022
Date of Patent: May 28, 2024
Assignee: GOOGLE LLC
Inventors: Jacek Szmigiel, Joseph Lange
-
Patent number: 11967321
Abstract: Implementations set forth herein relate to an automated assistant that can interact with applications that may not have been pre-configured for interfacing with the automated assistant. The automated assistant can identify content of an application interface of the application to determine synonymous terms that a user may speak when commanding the automated assistant to perform certain tasks. Speech processing operations employed by the automated assistant can be biased towards these synonymous terms when the user is accessing an application interface of the application. In some implementations, the synonymous terms can be identified in a responsive language of the automated assistant when the content of the application interface is being rendered in a different language. This can allow the automated assistant to operate as an interface between the user and certain applications that may not be rendering content in a native language of the user.
Type: Grant
Filed: November 30, 2021
Date of Patent: April 23, 2024
Assignee: GOOGLE LLC
Inventors: Joseph Lange, Abhanshu Sharma, Adam Coimbra, Gökhan Bakir, Gabriel Taubman, Ilya Firman, Jindong Chen, James Stout, Marcin Nowak-Przygodzki, Reed Enger, Thomas Weedon Hume, Vishwath Mohan, Jacek Szmigiel, Yunfan Jin, Kyle Pedersen, Gilles Baechler
-
Patent number: 11931233
Abstract: An absorbent article includes a first waist region, a second waist region, and a crotch region disposed between the first and second waist regions; and a chassis having a topsheet, a backsheet, and an absorbent core positioned between the topsheet and the backsheet. The article also includes a side panel having an ultrasonically bonded, gathered laminate. The laminate has an elastomeric layer and a substrate and is joined to the chassis at a chassis attachment bond and positioned in one of the first or second waist regions. The ultrasonically bonded, gathered laminate also includes an ear structural feature comprising a surface modification to the substrate and comprising at least one of the following: embossing, apertures, perforations, slits, melted material or coatings, compressed material, secondary bonds that are disposed apart from a chassis attachment bond, plastic deformation, and folds.
Type: Grant
Filed: May 4, 2021
Date of Patent: March 19, 2024
Assignee: The Procter & Gamble Company
Inventors: Sally Lin Kilbacak, Donald Carroll Roe, Jeromy Thomas Raycheck, Uwe Schneider, Michael Devin Long, Michael Brian Quade, Jason Edward Naylor, Jeffry Rosiak, Stephen Joseph Lange, Urmish Popatlal Dalal, Christopher Krasen, Todd Douglas Lenser
-
Patent number: 11911883
Abstract: A hydraulic tensioner (1), comprising: a base (2); a piston (3) mounted for sliding motion relative to the base (2), the base (2) and the piston (3) defining a pressure space (4) therebetween and being arranged to be urged apart along an axis (8) upon introduction of a fluid into the pressure space (4), the tensioner (1) having an internal bore (6) along the axis (8) having first and second ends along the axis and comprising a threaded component having an internally threaded portion (7) at the first end and coupled to the piston (3); the tensioner (1) further comprising: a threaded stud (10) having an exterior thread which engages the internally threaded portion (7) of the threaded component; and a drive mechanism (12) arranged to transmit rotational motion from the second end of the internal bore (9) to the threaded stud; the tensioner being arranged such that rotational motion applied to the drive mechanism (12) at the second end causes rotation of the stud (10) relative to the threaded component, with the e
Type: Grant
Filed: December 4, 2019
Date of Patent: February 27, 2024
Assignee: Tentec Limited
Inventor: Edmund Joseph Lange
-
Publication number: 20240046929
Abstract: Implementations set forth herein relate to an automated assistant that can operate as an interface between a user and a separate application to search application content of the separate application. The automated assistant can interact with existing search filter features of another application and can also adapt in circumstances when certain filter parameters are not directly controllable at a search interface of the application. For instance, when a user requests that a search operation be performed using certain terms, those terms may refer to content filters that may not be available at a search interface of the application. However, the automated assistant can generate an assistant input based on those content filters in order to ensure that any resulting search results will be filtered accordingly. The assistant input can then be submitted into a search field of the application and a search operation can be executed.
Type: Application
Filed: October 20, 2023
Publication date: February 8, 2024
Inventors: Joseph Lange, Marcin Nowak-Przygodzki
-
Publication number: 20230393810
Abstract: Implementations are described herein for analyzing existing graphical user interfaces (“GUIs”) to facilitate automatic interaction with those GUIs, e.g., by automated assistants or via other user interfaces, with minimal effort from the hosts of those GUIs. For example, in various implementations, a user intent to interact with a particular GUI may be determined based at least in part on a free-form natural language input. Based on the user intent, a target visual cue to be located in the GUI may be identified, and object recognition processing may be performed on a screenshot of the GUI to determine a location of a detected instance of the target visual cue in the screenshot. Based on the location of the detected instance of the target visual cue, an interactive element of the GUI may be identified and automatically populated with data determined from the user intent.
Type: Application
Filed: August 16, 2023
Publication date: December 7, 2023
Inventors: Joseph Lange, Asier Aguirre, Olivier Siegenthaler, Michal Pryt
-
Publication number: 20230385022
Abstract: Implementations set forth herein relate to an automated assistant that can provide a selectable action intent suggestion when a user is accessing a third party application that is controllable via the automated assistant. The action intent can be initialized by the user without explicitly invoking the automated assistant using, for example, an invocation phrase (e.g., “Assistant . . . ”). Rather, the user can initialize performance of the corresponding action by identifying one or more action parameters. In some implementations, the selectable suggestion can indicate that a microphone is active for the user to provide a spoken utterance that identifies a parameter(s). When the action intent is initialized in response to the spoken utterance from the user, the automated assistant can control the third party application according to the action intent and any identified parameter(s).
Type: Application
Filed: August 7, 2023
Publication date: November 30, 2023
Inventors: Joseph Lange, Marcin Nowak-Przygodzki
-
Patent number: 11830487
Abstract: Implementations set forth herein relate to an automated assistant that can operate as an interface between a user and a separate application to search application content of the separate application. The automated assistant can interact with existing search filter features of another application and can also adapt in circumstances when certain filter parameters are not directly controllable at a search interface of the application. For instance, when a user requests that a search operation be performed using certain terms, those terms may refer to content filters that may not be available at a search interface of the application. However, the automated assistant can generate an assistant input based on those content filters in order to ensure that any resulting search results will be filtered accordingly. The assistant input can then be submitted into a search field of the application and a search operation can be executed.
Type: Grant
Filed: April 20, 2021
Date of Patent: November 28, 2023
Assignee: GOOGLE LLC
Inventors: Joseph Lange, Marcin Nowak-Przygodzki
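The adaptation described above, folding filter parameters the application's search UI cannot express directly into the text submitted to its search field, can be sketched in a few lines. This is a minimal illustration under assumed names (`build_assistant_input`, a dict of terms, a set of UI-supported filter keys), not the patented mechanism.

```python
def build_assistant_input(terms, supported_filters):
    # Split the user's requested terms into filters the application's
    # search interface can apply directly and those it cannot; fold the
    # unsupported ones into the query string submitted to the search
    # field so the results still come back filtered.
    ui_filters = {k: v for k, v in terms.items() if k in supported_filters}
    query_parts = [v for k, v in terms.items() if k not in supported_filters]
    return " ".join(query_parts), ui_filters
```

The returned query string would be typed into the application's search field, while `ui_filters` would be applied through the application's own filter controls.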