Patents by Inventor Joseph Lange

Joseph Lange has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240193372
    Abstract: A method for dynamically generating training data for a model includes receiving a transcript corresponding to a conversation between a customer and an agent, the transcript comprising a customer input and an agent input. The method includes receiving a logic model including a plurality of responses, each response of the plurality of responses representing a potential reply to the customer input. The method further includes selecting, based on the agent input, a response from the plurality of responses of the logic model. The method includes determining that a similarity score between the selected response and the agent input satisfies a similarity threshold, and, based on determining that the similarity score between the selected response and the agent input satisfies the similarity threshold, training a machine learning model using the customer input and the selected response.
    Type: Application
    Filed: December 9, 2022
    Publication date: June 13, 2024
    Applicant: Google LLC
    Inventors: Joseph Lange, Henry Scott Dlhopolsky, Vladimir Vuskovic
  • Publication number: 20240180758
    Abstract: An absorbent article including a side panel having an ultrasonically bonded, gathered laminate is provided. The laminate has an elastomeric layer and a substrate and is joined to the absorbent article chassis at a chassis attachment bond and positioned in one of a first or second waist region. The ultrasonically bonded, gathered laminate also includes an ear structural feature comprising a surface modification to the substrate and comprising at least one of the following: embossing, apertures, perforations, slits, melted material or coatings, compressed material, secondary bonds that are disposed apart from a chassis attachment bond, plastic deformation, and folds.
    Type: Application
    Filed: February 9, 2024
    Publication date: June 6, 2024
    Inventors: Sally Lin Kilbacak, Donald Carroll Roe, Jeromy Thomas Raycheck, Uwe Schneider, Michael Devin Long, Michael Brian Quade, Jason Edward Naylor, Jeffry Rosiak, Stephen Joseph Lange, Urmish Popatlal Dalal, Christopher Krasen, Todd Douglas Lenser
  • Patent number: 11995379
    Abstract: Implementations set forth herein relate to an automated assistant that can control graphical user interface (GUI) elements via voice input using natural language understanding of GUI content in order to resolve ambiguity and allow for condensed GUI voice input requests. When a user is accessing an application that is rendering various GUI elements at a display interface, the automated assistant can operate to process actionable data corresponding to the GUI elements. The actionable data can be processed in order to determine a correspondence between GUI voice input requests to the automated assistant and at least one of the GUI elements rendered at the display interface. When a particular spoken utterance from the user is determined to correspond to multiple GUI elements, an indication of ambiguity can be rendered at the display interface in order to encourage the user to provide a more specific spoken utterance.
    Type: Grant
    Filed: September 19, 2022
    Date of Patent: May 28, 2024
    Assignee: GOOGLE LLC
    Inventors: Jacek Szmigiel, Joseph Lange
  • Patent number: 11967321
    Abstract: Implementations set forth herein relate to an automated assistant that can interact with applications that may not have been pre-configured for interfacing with the automated assistant. The automated assistant can identify content of an application interface of the application to determine synonymous terms that a user may speak when commanding the automated assistant to perform certain tasks. Speech processing operations employed by the automated assistant can be biased towards these synonymous terms when the user is accessing an application interface of the application. In some implementations, the synonymous terms can be identified in a responsive language of the automated assistant when the content of the application interface is being rendered in a different language. This can allow the automated assistant to operate as an interface between the user and certain applications that may not be rendering content in a native language of the user.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: April 23, 2024
    Assignee: GOOGLE LLC
    Inventors: Joseph Lange, Abhanshu Sharma, Adam Coimbra, Gökhan Bakir, Gabriel Taubman, Ilya Firman, Jindong Chen, James Stout, Marcin Nowak-Przygodzki, Reed Enger, Thomas Weedon Hume, Vishwath Mohan, Jacek Szmigiel, Yunfan Jin, Kyle Pedersen, Gilles Baechler
  • Patent number: 11931233
    Abstract: An absorbent article includes a first waist region, a second waist region, and a crotch region disposed between the first and second waist regions; and a chassis having a topsheet, a backsheet, and an absorbent core positioned between the topsheet and the backsheet. The article also includes a side panel having an ultrasonically bonded, gathered laminate. The laminate has an elastomeric layer and a substrate and is joined to the chassis at a chassis attachment bond and positioned in one of the first or second waist regions. The ultrasonically bonded, gathered laminate also includes an ear structural feature comprising a surface modification to the substrate and comprising at least one of the following: embossing, apertures, perforations, slits, melted material or coatings, compressed material, secondary bonds that are disposed apart from a chassis attachment bond, plastic deformation, and folds.
    Type: Grant
    Filed: May 4, 2021
    Date of Patent: March 19, 2024
    Assignee: The Procter & Gamble Company
    Inventors: Sally Lin Kilbacak, Donald Carroll Roe, Jeromy Thomas Raycheck, Uwe Schneider, Michael Devin Long, Michael Brian Quade, Jason Edward Naylor, Jeffry Rosiak, Stephen Joseph Lange, Urmish Popatlal Dalal, Christopher Krasen, Todd Douglas Lenser
  • Patent number: 11911883
    Abstract: A hydraulic tensioner (1), comprising: a base (2); a piston (3) mounted for sliding motion relative to the base (2), the base (2) and the piston (3) defining a pressure space (4) therebetween and being arranged to be urged apart along an axis (8) upon introduction of a fluid into the pressure space (4), the tensioner (1) having an internal bore (6) along the axis (8) having first and second ends along the axis and comprising a threaded component having an internally threaded portion (7) at the first end and coupled to the piston (3); the tensioner (1) further comprising: a threaded stud (10) having an exterior thread which engages the internally threaded portion (7) of the threaded component; and a drive mechanism (12) arranged to transmit rotational motion from the second end of the internal bore (9) to the threaded stud; the tensioner being arranged such that rotational motion applied to the drive mechanism (12) at the second end causes rotation of the stud (10) relative to the threaded component, with the e
    Type: Grant
    Filed: December 4, 2019
    Date of Patent: February 27, 2024
    Assignee: Tentec Limited
    Inventor: Edmund Joseph Lange
  • Publication number: 20240046929
    Abstract: Implementations set forth herein relate to an automated assistant that can operate as an interface between a user and a separate application to search application content of the separate application. The automated assistant can interact with existing search filter features of another application and can also adapt in circumstances when certain filter parameters are not directly controllable at a search interface of the application. For instance, when a user requests that a search operation be performed using certain terms, those terms may refer to content filters that may not be available at a search interface of the application. However, the automated assistant can generate an assistant input based on those content filters in order to ensure that any resulting search results will be filtered accordingly. The assistant input can then be submitted into a search field of the application and a search operation can be executed.
    Type: Application
    Filed: October 20, 2023
    Publication date: February 8, 2024
    Inventors: Joseph Lange, Marcin Nowak-Przygodzki
  • Publication number: 20230393810
    Abstract: Implementations are described herein for analyzing existing graphical user interfaces (“GUIs”) to facilitate automatic interaction with those GUIs, e.g., by automated assistants or via other user interfaces, with minimal effort from the hosts of those GUIs. For example, in various implementations, a user intent to interact with a particular GUI may be determined based at least in part on a free-form natural language input. Based on the user intent, a target visual cue to be located in the GUI may be identified, and object recognition processing may be performed on a screenshot of the GUI to determine a location of a detected instance of the target visual cue in the screenshot. Based on the location of the detected instance of the target visual cue, an interactive element of the GUI may be identified and automatically populated with data determined from the user intent.
    Type: Application
    Filed: August 16, 2023
    Publication date: December 7, 2023
    Inventors: Joseph Lange, Asier Aguirre, Olivier Siegenthaler, Michal Pryt
  • Publication number: 20230385022
    Abstract: Implementations set forth herein relate to an automated assistant that can provide a selectable action intent suggestion when a user is accessing a third party application that is controllable via the automated assistant. The action intent can be initialized by the user without explicitly invoking the automated assistant using, for example, an invocation phrase (e.g., “Assistant...”). Rather, the user can initialize performance of the corresponding action by identifying one or more action parameters. In some implementations, the selectable suggestion can indicate that a microphone is active for the user to provide a spoken utterance that identifies a parameter(s). When the action intent is initialized in response to the spoken utterance from the user, the automated assistant can control the third party application according to the action intent and any identified parameter(s).
    Type: Application
    Filed: August 7, 2023
    Publication date: November 30, 2023
    Inventors: Joseph Lange, Marcin Nowak-Przygodzki
  • Patent number: 11830487
    Abstract: Implementations set forth herein relate to an automated assistant that can operate as an interface between a user and a separate application to search application content of the separate application. The automated assistant can interact with existing search filter features of another application and can also adapt in circumstances when certain filter parameters are not directly controllable at a search interface of the application. For instance, when a user requests that a search operation be performed using certain terms, those terms may refer to content filters that may not be available at a search interface of the application. However, the automated assistant can generate an assistant input based on those content filters in order to ensure that any resulting search results will be filtered accordingly. The assistant input can then be submitted into a search field of the application and a search operation can be executed.
    Type: Grant
    Filed: April 20, 2021
    Date of Patent: November 28, 2023
    Assignee: GOOGLE LLC
    Inventors: Joseph Lange, Marcin Nowak-Przygodzki
  • Publication number: 20230377572
    Abstract: Implementations are set forth herein for creating an order of execution for actions that were requested by a user, via a spoken utterance to an automated assistant. The order of execution for the requested actions can be based on how each requested action can, or is predicted to, affect other requested actions. In some implementations, an order of execution for a series of actions can be determined based on an output of a machine learning model, such as a model that has been trained according to supervised learning. A particular order of execution can be selected to mitigate waste of processing, memory, and network resources—at least relative to other possible orders of execution. Using interaction data that characterizes past performances of automated assistants, certain orders of execution can be adapted over time, thereby allowing the automated assistant to learn from past interactions with one or more users.
    Type: Application
    Filed: August 7, 2023
    Publication date: November 23, 2023
    Inventors: Mugurel-Ionut Andreica, Vladimir Vuskovic, Joseph Lange, Sharon Stovezky, Marcin Nowak-Przygodzki
  • Patent number: 11775254
    Abstract: Implementations are described herein for analyzing existing graphical user interfaces (“GUIs”) to facilitate automatic interaction with those GUIs, e.g., by automated assistants or via other user interfaces, with minimal effort from the hosts of those GUIs. For example, in various implementations, a user intent to interact with a particular GUI may be determined based at least in part on a free-form natural language input. Based on the user intent, a target visual cue to be located in the GUI may be identified, and object recognition processing may be performed on a screenshot of the GUI to determine a location of a detected instance of the target visual cue in the screenshot. Based on the location of the detected instance of the target visual cue, an interactive element of the GUI may be identified and automatically populated with data determined from the user intent.
    Type: Grant
    Filed: January 31, 2020
    Date of Patent: October 3, 2023
    Assignee: GOOGLE LLC
    Inventors: Joseph Lange, Asier Aguirre, Olivier Siegenthaler, Michal Pryt
  • Patent number: 11769502
    Abstract: Implementations are set forth herein for creating an order of execution for actions that were requested by a user, via a spoken utterance to an automated assistant. The order of execution for the requested actions can be based on how each requested action can, or is predicted to, affect other requested actions. In some implementations, an order of execution for a series of actions can be determined based on an output of a machine learning model, such as a model that has been trained according to supervised learning. A particular order of execution can be selected to mitigate waste of processing, memory, and network resources—at least relative to other possible orders of execution. Using interaction data that characterizes past performances of automated assistants, certain orders of execution can be adapted over time, thereby allowing the automated assistant to learn from past interactions with one or more users.
    Type: Grant
    Filed: June 4, 2021
    Date of Patent: September 26, 2023
    Assignee: GOOGLE LLC
    Inventors: Mugurel Ionut Andreica, Vladimir Vuskovic, Joseph Lange, Sharon Stovezky, Marcin Nowak-Przygodzki
  • Publication number: 20230259537
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating subqueries from a query. In one aspect, a method includes obtaining a query, generating a set of two subqueries from the query, where the set includes a first subquery and a second subquery, determining a quality score for the set of two subqueries, determining whether the quality score for the set of two subqueries satisfies a quality threshold, and in response to determining that the quality score for the set of two subqueries satisfies the quality threshold, providing a first response to the first subquery that is responsive to a first operation that receives the first subquery as input and providing a second response to the second subquery that is responsive to a second operation that receives the second subquery as input.
    Type: Application
    Filed: April 28, 2023
    Publication date: August 17, 2023
    Inventors: Vladimir Vuskovic, Joseph Lange, Behshad Behzadi, Marcin M. Nowak-Przygodzki
  • Publication number: 20230259714
    Abstract: Aspects of the disclosure provide for a system for navigating a conversation graph using a language model trained to generate Application Programming Interface (API) calls in response to natural language input from a user computing device. A conversational agent implementing a state handler and a language model (LM) communicates with a user computing device through a user frontend. Rather than communicating directly with a user with output in natural language, the agent uses an LM trained as described herein to navigate a conversation graph. The state handler receives API calls generated by the LM and updates the state of a conversation with a user as indicated in the graph. After the update, the state handler can perform one or more predetermined actions associated with a node indicating the current state of the conversation.
    Type: Application
    Filed: February 14, 2022
    Publication date: August 17, 2023
    Inventor: Joseph Lange
  • Patent number: 11720325
    Abstract: Implementations set forth herein relate to an automated assistant that can provide a selectable action intent suggestion when a user is accessing a third party application that is controllable via the automated assistant. The action intent can be initialized by the user without explicitly invoking the automated assistant using, for example, an invocation phrase (e.g., “Assistant...”). Rather, the user can initialize performance of the corresponding action by identifying one or more action parameters. In some implementations, the selectable suggestion can indicate that a microphone is active for the user to provide a spoken utterance that identifies a parameter(s). When the action intent is initialized in response to the spoken utterance from the user, the automated assistant can control the third party application according to the action intent and any identified parameter(s).
    Type: Grant
    Filed: April 16, 2021
    Date of Patent: August 8, 2023
    Assignee: GOOGLE LLC
    Inventors: Joseph Lange, Marcin Nowak-Przygodzki
  • Publication number: 20230169102
    Abstract: Implementations are directed to determining, based on a submitted query that is a compound query, that a set of multiple sub-queries are collectively an appropriate interpretation of the compound query. Those implementations are further directed to providing, in response to such a determination, a corresponding command for each of the sub-queries of the determined set. Each of the commands is to a corresponding agent (of one or more agents), and causes the agent to generate and provide corresponding responsive content. Those implementations are further directed to causing content to be rendered in response to the submitted query, where the content is based on the corresponding responsive content received in response to the commands.
    Type: Application
    Filed: January 30, 2023
    Publication date: June 1, 2023
    Inventors: Joseph Lange, Mugurel Ionut Andreica, Marcin Nowak-Przygodzki
  • Publication number: 20230103677
    Abstract: Implementations set forth herein relate to an automated assistant that can interact with applications that may not have been pre-configured for interfacing with the automated assistant. The automated assistant can identify content of an application interface of the application to determine synonymous terms that a user may speak when commanding the automated assistant to perform certain tasks. Speech processing operations employed by the automated assistant can be biased towards these synonymous terms when the user is accessing an application interface of the application. In some implementations, the synonymous terms can be identified in a responsive language of the automated assistant when the content of the application interface is being rendered in a different language. This can allow the automated assistant to operate as an interface between the user and certain applications that may not be rendering content in a native language of the user.
    Type: Application
    Filed: November 30, 2021
    Publication date: April 6, 2023
    Inventors: Joseph Lange, Abhanshu Sharma, Adam Coimbra, Gökhan Bakir, Gabriel Taubman, Ilya Firman, Jindong Chen, James Stout, Marcin Nowak-Przygodzki, Reed Enger, Thomas Weedon Hume, Vishwath Mohan, Jacek Szmigiel, Yunfan Jin, Kyle Pedersen, Gilles Baechler
  • Patent number: 11615124
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating subqueries from a query. In one aspect, a method includes obtaining a query, generating a set of two subqueries from the query, where the set includes a first subquery and a second subquery, determining a quality score for the set of two subqueries, determining whether the quality score for the set of two subqueries satisfies a quality threshold, and in response to determining that the quality score for the set of two subqueries satisfies the quality threshold, providing a first response to the first subquery that is responsive to a first operation that receives the first subquery as input and providing a second response to the second subquery that is responsive to a second operation that receives the second subquery as input.
    Type: Grant
    Filed: December 9, 2020
    Date of Patent: March 28, 2023
    Assignee: Google LLC
    Inventors: Vladimir Vuskovic, Joseph Lange, Behshad Behzadi, Marcin M. Nowak-Przygodzki
  • Patent number: 11567980
    Abstract: Implementations are directed to determining, based on a submitted query that is a compound query, that a set of multiple sub-queries are collectively an appropriate interpretation of the compound query. Those implementations are further directed to providing, in response to such a determination, a corresponding command for each of the sub-queries of the determined set. Each of the commands is to a corresponding agent (of one or more agents), and causes the agent to generate and provide corresponding responsive content. Those implementations are further directed to causing content to be rendered in response to the submitted query, where the content is based on the corresponding responsive content received in response to the commands.
    Type: Grant
    Filed: May 7, 2018
    Date of Patent: January 31, 2023
    Assignee: GOOGLE LLC
    Inventors: Joseph Lange, Mugurel Ionut Andreica, Marcin Nowak-Przygodzki
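
Illustrative implementation sketches

The abstracts above describe methods in claim-level language rather than code. The sketches below are not taken from the patents: every function name, signature, and scoring rule is an assumption chosen to illustrate the general idea, and toy heuristics stand in wherever a claim relies on a trained model or a natural language understanding component.

For publication 20240193372, similarity-gated selection of training pairs might look roughly like the following, assuming a transcript of customer/agent turns, a list of candidate responses from a logic model, and a pluggable similarity function.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Turn:
    customer_input: str
    agent_input: str


def token_overlap(a: str, b: str) -> float:
    """Toy Jaccard similarity over whitespace tokens; a stand-in scorer."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0


def select_training_pairs(
    transcript: List[Turn],
    responses: List[str],
    similarity: Callable[[str, str], float] = token_overlap,
    threshold: float = 0.8,
) -> List[Tuple[str, str]]:
    """Return (customer_input, selected_response) pairs for model training."""
    pairs: List[Tuple[str, str]] = []
    if not responses:
        return pairs
    for turn in transcript:
        # Select the logic-model response closest to what the agent actually said.
        best = max(responses, key=lambda r: similarity(r, turn.agent_input))
        # Keep the example only when the match clears the similarity threshold.
        if similarity(best, turn.agent_input) >= threshold:
            pairs.append((turn.customer_input, best))
    return pairs
```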
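
For patent 11995379, the ambiguity handling could be sketched as below, assuming each GUI element has been reduced to a labeled dictionary derived from its actionable data; the token-overlap matching is a stand-in for the natural language understanding the abstract describes.

```python
from typing import Dict, List


def resolve_gui_command(utterance: str, elements: List[Dict[str, str]]) -> Dict[str, object]:
    """Match a spoken utterance against on-screen elements' actionable data."""
    spoken = set(utterance.lower().split())
    matches = [e for e in elements if spoken & set(e["label"].lower().split())]
    if len(matches) == 1:
        return {"action": "activate", "target": matches[0]["label"]}
    if len(matches) > 1:
        # Ambiguity: render an indication so the user can give a more specific utterance.
        return {"action": "clarify", "candidates": [e["label"] for e in matches]}
    return {"action": "none"}


# "play" matches both elements, so the assistant asks the user to be more specific.
print(resolve_gui_command("play", [{"label": "Play song"}, {"label": "Play video"}]))
```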
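
For patent 11967321 (and the related publication 20230103677), biasing speech processing toward terms rendered on an application interface might be approximated like this; `synonyms_for` is a hypothetical lookup, not an API from any particular library.

```python
from typing import Callable, Iterable, List


def build_bias_phrases(
    screen_terms: Iterable[str],
    synonyms_for: Callable[[str, str], Iterable[str]],
    response_language: str = "en",
) -> List[str]:
    """Phrases toward which speech recognition should be biased.

    `synonyms_for(term, language)` returns terms a user might say in the
    assistant's response language, even when the application renders its
    interface in a different language.
    """
    phrases = set()
    for term in screen_terms:
        phrases.add(term.lower())
        phrases.update(s.lower() for s in synonyms_for(term, response_language))
    return sorted(phrases)
```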
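
For patent 11830487 (and publication 20240046929), the split between filters the application's search interface exposes and constraints folded into the typed query could be sketched as follows; representing filters as key/value pairs is an assumption.

```python
from typing import Dict, List, Set, Tuple


def build_search_request(
    requested_filters: Dict[str, str],
    ui_supported: Set[str],
) -> Tuple[str, Dict[str, str]]:
    """Split requested filters into those the search interface exposes directly
    and those that must be folded into the typed query text."""
    ui_filters: Dict[str, str] = {}
    inline_terms: List[str] = []
    for key, value in requested_filters.items():
        if key in ui_supported:
            ui_filters[key] = value
        else:
            # No matching control at the search interface, so the constraint is
            # expressed in the query string the assistant submits for the user.
            inline_terms.append(value)
    return " ".join(inline_terms), ui_filters


# A music app exposes a "genre" control but nothing for "decade":
text, filters = build_search_request(
    {"genre": "jazz", "decade": "1960s"}, ui_supported={"genre"}
)
# text == "1960s", filters == {"genre": "jazz"}
```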
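
For patent 11769502 (and publication 20230377572), choosing an order of execution from a learned cost estimate could be sketched as below; `predicted_cost` and `example_cost` are stand-ins, not the model the patent trains, and brute force over permutations is only workable for the few actions a single utterance requests.

```python
import itertools
from typing import Callable, List, Sequence


def choose_execution_order(
    actions: List[str],
    predicted_cost: Callable[[Sequence[str]], float],
) -> List[str]:
    """Pick the ordering of requested actions with the lowest predicted cost."""
    return list(min(itertools.permutations(actions), key=predicted_cost))


def example_cost(order: Sequence[str]) -> float:
    # Toy heuristic: disconnecting the network before the message is sent forces a retry.
    order = list(order)
    return 10.0 if order.index("disable wifi") < order.index("send message") else 1.0


print(choose_execution_order(["disable wifi", "send message"], example_cost))
# -> ['send message', 'disable wifi']
```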
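
For patent 11615124 (and publication 20230259537), gating a query split on a quality score might look like this; `candidate_splits` and `quality_score` are assumed callables standing in for the generation and scoring steps the claims describe.

```python
from typing import Callable, Iterable, Optional, Tuple


def maybe_split_query(
    query: str,
    candidate_splits: Callable[[str], Iterable[Tuple[str, str]]],
    quality_score: Callable[[str, str, str], float],
    threshold: float = 0.7,
) -> Optional[Tuple[str, str]]:
    """Return (first, second) subqueries when the best split clears the quality
    threshold; otherwise return None and handle the query as a whole."""
    best, best_score = None, float("-inf")
    for first, second in candidate_splits(query):
        score = quality_score(query, first, second)
        if score > best_score:
            best, best_score = (first, second), score
    if best is not None and best_score >= threshold:
        # Each subquery is then routed to its own operation and answered separately.
        return best
    return None
```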
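
For publication 20230259714, a state handler that follows API calls emitted by a language model over a conversation graph could be sketched as follows; the graph encoding, the actions table, and the call names are all assumptions.

```python
from typing import Callable, Dict


class StateHandler:
    """Advance a conversation graph based on API calls emitted by a language model.

    `graph` maps a state to the API calls valid in that state and the state each
    call leads to; `actions` maps a state to a routine run on arrival.
    """

    def __init__(
        self,
        graph: Dict[str, Dict[str, str]],
        actions: Dict[str, Callable[[], None]],
        start: str,
    ) -> None:
        self.graph, self.actions, self.state = graph, actions, start

    def handle(self, api_call: str) -> str:
        # Follow only transitions the graph defines for the current state.
        if api_call in self.graph.get(self.state, {}):
            self.state = self.graph[self.state][api_call]
            self.actions.get(self.state, lambda: None)()
        return self.state


handler = StateHandler(
    graph={"greet": {"collect_order()": "ordering"},
           "ordering": {"confirm_order()": "done"}},
    actions={"done": lambda: print("order confirmed")},
    start="greet",
)
handler.handle("collect_order()")   # -> "ordering"
handler.handle("confirm_order()")   # prints "order confirmed", returns "done"
```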