Patents by Inventor Barnaby John James
Barnaby John James has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10770093
Abstract: In some implementations, (i) audio data representing a voice command spoken by a speaker and (ii) a speaker identification result indicating that the voice command was spoken by the speaker are obtained. A voice action is selected based at least on a transcription of the audio data. A service provider corresponding to the selected voice action is selected from among a plurality of different service providers. One or more input data types that the selected service provider uses to perform authentication for the selected voice action are identified. A request to perform the selected voice action and one or more values that correspond to the identified one or more input data types are provided to the service provider.
Type: Grant
Filed: November 29, 2016
Date of Patent: September 8, 2020
Assignee: Google LLC
Inventor: Barnaby John James
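To make the flow above concrete, here is a minimal Python sketch of routing a transcribed command to a service provider and passing along only the authentication inputs that provider needs. The provider registry, action names, and input data types are hypothetical, not taken from the patent.

```python
# Hypothetical sketch of the flow described above: map a transcribed voice
# command to a voice action, pick a service provider for that action, and
# send the provider the values for the input data types it needs for
# authentication. All names and data are illustrative, not from the patent.
from dataclasses import dataclass


@dataclass
class ServiceProvider:
    name: str
    actions: set[str]
    auth_input_types: list[str]  # e.g. "speaker_id_confidence", "device_id"


PROVIDERS = [
    ServiceProvider("rides_inc", {"order_ride"}, ["speaker_id_confidence"]),
    ServiceProvider("pizza_co", {"order_food"}, ["speaker_id_confidence", "device_id"]),
]


def select_voice_action(transcription: str) -> str:
    # Toy action selection based on the transcription.
    return "order_food" if "pizza" in transcription.lower() else "order_ride"


def handle_command(transcription: str, speaker_id_result: dict) -> dict:
    action = select_voice_action(transcription)
    provider = next(p for p in PROVIDERS if action in p.actions)
    # Gather only the values the chosen provider needs for authentication.
    available = {"speaker_id_confidence": speaker_id_result["confidence"],
                 "device_id": speaker_id_result.get("device_id", "unknown")}
    auth_values = {t: available[t] for t in provider.auth_input_types}
    return {"provider": provider.name, "action": action, "auth": auth_values}


print(handle_command("order me a pizza", {"confidence": 0.93, "device_id": "kitchen"}))
```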
-
Patent number: 10741183
Abstract: Methods, systems, and apparatus for receiving, by a voice action system, data specifying trigger terms that trigger an application to perform a voice action and a context that specifies a status of the application when the voice action can be triggered. The voice action system receives data defining a discoverability example for the voice action that comprises one or more of the trigger terms that trigger the application to perform the voice action when a status of the application satisfies the specified context. The voice action system receives a request for discoverability examples for the application from a user device having the application installed, and provides the data defining the discoverability examples to the user device in response to the request. The user device is configured to provide a notification of the one or more of the trigger terms when a status of the application satisfies the specified context.
Type: Grant
Filed: August 13, 2018
Date of Patent: August 11, 2020
Assignee: GOOGLE LLC
Inventors: Bo Wang, Sunil Vemuri, Barnaby John James, Pravir Kumar Gupta, Nitin Mangesh Shetti
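The request/response pattern above can be illustrated with a short sketch: a server returns discoverability examples for an installed app, and the device surfaces trigger-term hints only when the app's status matches an example's context. The app, contexts, and trigger terms below are invented for illustration.

```python
# Illustrative sketch of serving discoverability examples: the server returns
# examples for an installed app, and the device shows trigger-term hints only
# when the app's current status matches an example's context. All data here
# is hypothetical.
DISCOVERABILITY_EXAMPLES = [
    {"app": "media_player", "context": {"screen": "now_playing"},
     "trigger_terms": ["skip this song", "pause playback"]},
    {"app": "media_player", "context": {"screen": "library"},
     "trigger_terms": ["play my favorites"]},
]


def examples_for_app(app_id: str) -> list[dict]:
    """Server side: respond to a device's request for discoverability examples."""
    return [e for e in DISCOVERABILITY_EXAMPLES if e["app"] == app_id]


def notifications_for_status(examples: list[dict], app_status: dict) -> list[str]:
    """Device side: surface trigger terms whose context matches the app status."""
    hints = []
    for example in examples:
        if all(app_status.get(k) == v for k, v in example["context"].items()):
            hints.extend(example["trigger_terms"])
    return hints


examples = examples_for_app("media_player")
print(notifications_for_status(examples, {"screen": "now_playing"}))
# -> ['skip this song', 'pause playback']
```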
-
Patent number: 10699707
Abstract: Example aspects of the present disclosure are directed to processing voice commands or utterances. For instance, data indicative of a voice utterance can be received. A device topology representation can be accessed. The device topology representation can define a plurality of smart devices associated with one or more structures. The device topology representation can further define a location of each of the plurality of devices within the associated structures. A transcription of the voice utterance can be determined based at least in part on the device topology representation. One or more selected devices and one or more actions to be performed by the one or more selected devices can be determined based at least in part on the determined transcription and the device topology representation.
Type: Grant
Filed: September 29, 2017
Date of Patent: June 30, 2020
Assignee: GOOGLE LLC
Inventors: Barnaby John James, David Roy Schairer, Amy Lynn Baldwin, Vincent Yanton Mo, Jun Yang, Mark Spates, IV, Lei Zhong
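As a rough illustration of the device topology idea, the sketch below models structures, rooms, and devices as nested data and resolves an utterance to target devices and an action. The topology, device names, and matching rule are assumptions for the example, not the patent's representation.

```python
# A minimal, hypothetical device-topology representation in the spirit of the
# abstract: structures contain rooms, rooms contain smart devices, and an
# utterance is resolved to target devices and an action using that topology.
TOPOLOGY = {
    "home": {
        "kitchen": [{"id": "light_1", "type": "light"}],
        "bedroom": [{"id": "light_2", "type": "light"},
                    {"id": "speaker_1", "type": "speaker"}],
    }
}


def resolve(utterance: str, structure: str = "home"):
    """Pick target devices and an action from the utterance and the topology."""
    action = "turn_on" if "on" in utterance else "turn_off"
    selected = []
    for room, devices in TOPOLOGY[structure].items():
        if room in utterance:  # e.g. "turn on the bedroom lights"
            selected += [d["id"] for d in devices if d["type"] in utterance]
    return selected, action


print(resolve("turn on the bedroom lights"))  # -> (['light_2'], 'turn_on')
```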
-
Publication number: 20200184206
Abstract: Methods, systems, and apparatuses, including computer programs encoded on a computer-readable storage medium for estimating the shape, size, and mass of fish are described. A pair of stereo cameras may be utilized to obtain right and left images of fish in a defined area. The right and left images may be processed, enhanced, and combined. Object detection may be used to detect and track a fish in images. A pose estimator may be used to determine key points and features of the detected fish. Based on the key points, a three-dimensional (3-D) model of the fish is generated that provides an estimate of the size and shape of the fish. A regression model or neural network model can be applied to the 3-D model to determine a likely weight of the fish.
Type: Application
Filed: January 24, 2020
Publication date: June 11, 2020
Inventors: Barnaby John James, Evan Douglas Rapoport, Matthew Messana, Peter Kimball
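The measurement-to-weight step can be sketched as follows: 3-D key points give a length and girth, and a simple formula stands in for the regression or neural network model mentioned in the abstract. The key points, units, and coefficients are illustrative only.

```python
# Hedged sketch of the estimation pipeline in the abstract: 3-D key points
# from a stereo pair yield simple measurements (length, girth), and a toy
# formula stands in for the regression/neural network weight model.
# Key points and coefficients are made up for illustration.
import math


def measurements_from_keypoints(keypoints_3d: dict) -> tuple[float, float]:
    """Derive length and girth (metres) from 3-D key points (snout, tail, dorsal, ventral)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    length = dist(keypoints_3d["snout"], keypoints_3d["tail"])
    girth = math.pi * dist(keypoints_3d["dorsal"], keypoints_3d["ventral"])
    return length, girth


def estimate_weight_kg(length_m: float, girth_m: float) -> float:
    """Toy stand-in for the learned model (a classic length-girth formula)."""
    length_in, girth_in = length_m * 39.37, girth_m * 39.37
    return (girth_in ** 2 * length_in / 800) * 0.4536  # pounds -> kilograms


kp = {"snout": (0.00, 0.000, 2.1), "tail": (0.62, 0.000, 2.1),
      "dorsal": (0.30, 0.055, 2.1), "ventral": (0.30, -0.055, 2.1)}
length, girth = measurements_from_keypoints(kp)
print(f"length={length:.2f} m, girth={girth:.2f} m, "
      f"estimated weight={estimate_weight_kg(length, girth):.2f} kg")
```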
-
Publication number: 20200107524
Abstract: A sensor positioning system includes an actuation server for communicating with components of the sensor positioning system. The sensor positioning system additionally includes a first actuation system and a second actuation system, wherein each actuation system includes a pulley system for maneuvering an underwater sensor system. The sensor positioning system includes a dual point attachment bracket that connects through a first line to the first actuation system and connects through a second line to the second actuation system. The underwater sensor system is affixed to the first pulley system, the second pulley system, and the dual point attachment bracket through the first line and the second line.
Type: Application
Filed: April 16, 2019
Publication date: April 9, 2020
Inventors: Matthew Messana, Kyle James Cormany, Christopher Thornton, Barnaby John James, Neil Davé, Shane Washburn
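Treating each actuation system as a fixed anchor, positioning the bracket reduces to choosing two line lengths. The toy geometry below shows that calculation; the coordinates, anchor placement, and simplification to a two-cable model are assumptions, not details from the filing.

```python
# Hypothetical geometry sketch for the two-line arrangement described above:
# each actuation system is treated as a fixed anchor point, and the line
# lengths needed to hold the sensor bracket at a target position are simply
# the distances from each anchor to that position. Coordinates are invented.
import math

ANCHOR_A = (0.0, 0.0, 0.0)    # first actuation system / pulley
ANCHOR_B = (20.0, 0.0, 0.0)   # second actuation system / pulley


def line_lengths(target: tuple[float, float, float]) -> tuple[float, float]:
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return dist(ANCHOR_A, target), dist(ANCHOR_B, target)


# Hold the underwater sensor 10 m along the span and 5 m deep.
first_line, second_line = line_lengths((10.0, 0.0, -5.0))
print(f"first line: {first_line:.2f} m, second line: {second_line:.2f} m")
```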
-
Publication number: 20200104602
Abstract: A fish monitoring system deployed in a particular area to obtain fish images is described. Neural networks and machine-learning techniques may be implemented to periodically train fish monitoring systems and generate monitoring modes to capture high quality images of fish based on the conditions in the determined area. The camera systems may be configured according to the settings, e.g., positions, viewing angles, specified by the monitoring modes when conditions matching the monitoring modes are detected. Each monitoring mode may be associated with one or more fish activities, such as sleeping, eating, swimming alone, and one or more parameters, such as time, location, and fish type.
Type: Application
Filed: December 3, 2019
Publication date: April 2, 2020
Inventors: Joel Fraser Atwater, Barnaby John James, Matthew Messana
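A minimal sketch of the mode-selection idea, assuming a small table of hypothetical monitoring modes keyed by fish activity and time of day; the modes, activities, and camera settings are invented for the example, not taken from the filing.

```python
# Illustrative sketch of matching observed conditions to a monitoring mode
# and applying its camera settings. All modes and settings are hypothetical.
MONITORING_MODES = [
    {"name": "feeding", "activity": "eating", "hours": range(6, 10),
     "camera": {"depth_m": 2.0, "angle_deg": 15}},
    {"name": "night_rest", "activity": "sleeping", "hours": range(22, 24),
     "camera": {"depth_m": 6.0, "angle_deg": 0}},
]


def select_mode(observed_activity: str, hour: int) -> dict | None:
    for mode in MONITORING_MODES:
        if mode["activity"] == observed_activity and hour in mode["hours"]:
            return mode
    return None


mode = select_mode("eating", 7)
if mode:
    print(f"applying mode '{mode['name']}' with camera settings {mode['camera']}")
```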
-
Patent number: 10599922
Abstract: Methods, systems, and apparatuses, including computer programs encoded on a computer-readable storage medium for estimating the shape, size, and mass of fish are described. A pair of stereo cameras may be utilized to obtain right and left images of fish in a defined area. The right and left images may be processed, enhanced, and combined. Object detection may be used to detect and track a fish in images. A pose estimator may be used to determine key points and features of the detected fish. Based on the key points, a three-dimensional (3-D) model of the fish is generated that provides an estimate of the size and shape of the fish. A regression model or neural network model can be applied to the 3-D model to determine a likely weight of the fish.
Type: Grant
Filed: January 25, 2018
Date of Patent: March 24, 2020
Assignee: X Development LLC
Inventors: Barnaby John James, Evan Douglas Rapoport, Matthew Messana, Peter Kimball
-
Publication number: 20200082481
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for computerized travel services. One of the methods includes identifying photographs using an index of photographs, the photographs being identified from the index as photographs geographically related to a point of interest or destination and having a creation timestamp corresponding to a time of the year; determining, for each of the photographs, a relevancy score based at least in part on: selection success data of the photograph for image queries referring to the point of interest or destination, and references to the point of interest or destination in documents associated with the photograph; and selecting a selected photograph from the photographs based at least in part on a respective visual quality score and the respective relevancy scores, the visual quality score representing a degree of visual quality of the respective photographs.
Type: Application
Filed: November 14, 2019
Publication date: March 12, 2020
Inventors: Barnaby John James, Bala Venkata Sai Ravi Krishna Kolluri
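The scoring and selection step might be sketched as below; the weighting of selection-success data, reference counts, and visual quality is invented for illustration and is not the patented method.

```python
# Hedged sketch of the selection step in the abstract: each candidate photo
# gets a relevancy score from selection-success data and textual references,
# and the final pick combines relevancy with visual quality. Weights and
# fields are illustrative only.
def relevancy_score(photo: dict) -> float:
    return (0.7 * photo["selection_success_rate"]
            + 0.3 * min(photo["reference_count"] / 10, 1.0))


def pick_photo(candidates: list[dict]) -> dict:
    return max(candidates,
               key=lambda p: 0.6 * relevancy_score(p) + 0.4 * p["visual_quality"])


candidates = [
    {"id": "img_1", "selection_success_rate": 0.82, "reference_count": 14, "visual_quality": 0.55},
    {"id": "img_2", "selection_success_rate": 0.61, "reference_count": 6, "visual_quality": 0.93},
]
print(pick_photo(candidates)["id"])  # -> 'img_1' with these toy numbers
```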
-
Patent number: 10534967
Abstract: A fish monitoring system deployed in a particular area to obtain fish images is described. Neural networks and machine-learning techniques may be implemented to periodically train fish monitoring systems and generate monitoring modes to capture high quality images of fish based on the conditions in the determined area. The camera systems may be configured according to the settings, e.g., positions, viewing angles, specified by the monitoring modes when conditions matching the monitoring modes are detected. Each monitoring mode may be associated with one or more fish activities, such as sleeping, eating, swimming alone, and one or more parameters, such as time, location, and fish type.
Type: Grant
Filed: May 3, 2018
Date of Patent: January 14, 2020
Assignee: X Development LLC
Inventors: Joel Fraser Atwater, Barnaby John James, Matthew Messana
-
Patent number: 10510129
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for computerized travel services. One of the methods includes identifying photographs using an index of photographs, the photographs being identified from the index as photographs geographically related to a point of interest or destination and having a creation timestamp corresponding to a time of the year; determining, for each of the photographs, a relevancy score based at least in part on: selection success data of the photograph for image queries referring to the point of interest or destination, and references to the point of interest or destination in documents associated with the photograph; and selecting a selected photograph from the photographs based at least in part on a respective visual quality score and the respective relevancy scores, the visual quality score representing a degree of visual quality of the respective photographs.
Type: Grant
Filed: October 12, 2017
Date of Patent: December 17, 2019
Assignee: GOOGLE LLC
Inventors: Barnaby John James, Bala Venkata Sai Ravi Krishna Kolluri
-
Publication number: 20190340440
Abstract: A fish monitoring system deployed in a particular area to obtain fish images is described. Neural networks and machine-learning techniques may be implemented to periodically train fish monitoring systems and generate monitoring modes to capture high quality images of fish based on the conditions in the determined area. The camera systems may be configured according to the settings, e.g., positions, viewing angles, specified by the monitoring modes when conditions matching the monitoring modes are detected. Each monitoring mode may be associated with one or more fish activities, such as sleeping, eating, swimming alone, and one or more parameters, such as time, location, and fish type.
Type: Application
Filed: May 3, 2018
Publication date: November 7, 2019
Inventors: Joel Fraser Atwater, Barnaby John James, Matthew Messana
-
Publication number: 20190228218
Abstract: Methods, systems, and apparatuses, including computer programs encoded on a computer-readable storage medium for estimating the shape, size, and mass of fish are described. A pair of stereo cameras may be utilized to obtain right and left images of fish in a defined area. The right and left images may be processed, enhanced, and combined. Object detection may be used to detect and track a fish in images. A pose estimator may be used to determine key points and features of the detected fish. Based on the key points, a three-dimensional (3-D) model of the fish is generated that provides an estimate of the size and shape of the fish. A regression model or neural network model can be applied to the 3-D model to determine a likely weight of the fish.
Type: Application
Filed: January 25, 2018
Publication date: July 25, 2019
Inventors: Barnaby John James, Evan Douglas Rapoport, Matthew Messana, Peter Kimball
-
Publication number: 20190156856
Abstract: In some implementations, (i) audio data representing a voice command spoken by a speaker and (ii) a speaker identification result indicating that the voice command was spoken by the speaker are obtained. A voice action is selected based at least on a transcription of the audio data. A service provider corresponding to the selected voice action is selected from among a plurality of different service providers. One or more input data types that the selected service provider uses to perform authentication for the selected voice action are identified. A request to perform the selected voice action and one or more values that correspond to the identified one or more input data types are provided to the service provider.
Type: Application
Filed: November 29, 2016
Publication date: May 23, 2019
Applicant: GOOGLE LLC
Inventor: Barnaby John James
-
Publication number: 20190103114
Abstract: Methods, systems, and apparatus for receiving, by a voice action system, data specifying trigger terms that trigger an application to perform a voice action and a context that specifies a status of the application when the voice action can be triggered. The voice action system receives data defining a discoverability example for the voice action that comprises one or more of the trigger terms that trigger the application to perform the voice action when a status of the application satisfies the specified context. The voice action system receives a request for discoverability examples for the application from a user device having the application installed, and provides the data defining the discoverability examples to the user device in response to the request. The user device is configured to provide a notification of the one or more of the trigger terms when a status of the application satisfies the specified context.
Type: Application
Filed: August 13, 2018
Publication date: April 4, 2019
Inventors: Bo Wang, Sunil Vemuri, Barnaby John James, Pravir Kumar Gupta, Nitin Mangesh Shetti
-
Patent number: 10127926
Abstract: In some implementations, (i) audio data representing a voice command spoken by a speaker and (ii) a speaker identification result indicating that the voice command was spoken by the speaker are obtained. A voice action is selected based at least on a transcription of the audio data. A service provider corresponding to the selected voice action is selected from among a plurality of different service providers. One or more input data types that the selected service provider uses to perform authentication for the selected voice action are identified. A request to perform the selected voice action and one or more values that correspond to the identified one or more input data types are provided to the service provider.
Type: Grant
Filed: June 10, 2016
Date of Patent: November 13, 2018
Assignee: Google LLC
Inventor: Barnaby John James
-
Patent number: 10089982
Abstract: Methods, systems, and apparatus for determining that a software application installed on a user device is compatible with a new voice action, wherein the new voice action is specified by an application developer of the software application. One or more trigger terms for triggering the software application to perform the new voice action are identified. An automatic speech recognizer is biased to prefer the identified trigger terms of the new voice action over trigger terms of other voice actions. A transcription of an utterance generated by the biased automatic speech recognizer is obtained. The transcription of the utterance generated by the biased automatic speech recognizer is determined to include a particular trigger term included in the identified trigger terms. Based at least on determining that the transcription of the utterance generated by the biased automatic speech recognizer includes the particular trigger term, execution of the new voice action is triggered.
Type: Grant
Filed: June 8, 2017
Date of Patent: October 2, 2018
Assignee: GOOGLE LLC
Inventors: Bo Wang, Sunil Vemuri, Barnaby John James, Pravir Kumar Gupta, Scott B. Huffman
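One way to picture the biasing step is as n-best rescoring that boosts hypotheses containing the new trigger terms; the sketch below uses that simplification, with invented scores and phrases, and is not the recognizer-internal biasing the patent describes.

```python
# A minimal sketch of the biasing idea: rescore the recognizer's n-best
# hypotheses so that those containing the new voice action's trigger terms
# are preferred, then trigger the action if the chosen transcription
# contains one of them. Scores and phrases are invented for illustration.
NEW_ACTION_TRIGGERS = {"start my workout", "begin exercise"}
BIAS_BOOST = 0.15


def pick_transcription(nbest: list[tuple[str, float]]) -> str:
    """nbest is a list of (hypothesis, recognizer score) pairs."""
    def biased(hyp_score):
        hyp, score = hyp_score
        boost = BIAS_BOOST if any(t in hyp for t in NEW_ACTION_TRIGGERS) else 0.0
        return score + boost
    return max(nbest, key=biased)[0]


def maybe_trigger(transcription: str) -> bool:
    return any(t in transcription for t in NEW_ACTION_TRIGGERS)


nbest = [("start my word out", 0.52), ("start my workout", 0.48)]
chosen = pick_transcription(nbest)
print(chosen, "->", "trigger new voice action" if maybe_trigger(chosen) else "no action")
```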
-
Publication number: 20180276005
Abstract: A digital assistant executing at at least one processor is described that is configured to determine a set of candidate third party agents. The digital assistant is further configured to receive, from a computing device that is associated with a user, information indicative of one or more interests of the user and determine, based on the information, a set of relevance scores. The digital assistant is further configured to select one or more candidate third party agents from the set of candidate third party agents that have a respective relevance score that satisfies a threshold. Responsive to receiving an indication of user input that accepts a recommendation to configure the user account with the one or more candidate third party agents, the digital assistant is further configured to configure the user account for operation with the one or more candidate third party agents.
Type: Application
Filed: March 24, 2017
Publication date: September 27, 2018
Inventor: Barnaby John James
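A small sketch of the recommendation flow, with hypothetical agents, interests, a made-up relevance rule, and a threshold chosen only for illustration.

```python
# Sketch of the recommendation flow described above: score candidate third
# party (3P) agents against a user's interests, keep those above a threshold,
# and configure the account only after the user accepts. All agents, interests,
# and the scoring rule are hypothetical.
THRESHOLD = 0.5

CANDIDATE_AGENTS = {
    "recipe_helper": {"cooking", "groceries"},
    "fitness_coach": {"running", "cycling"},
    "trivia_bot": {"games"},
}


def relevance(agent_topics: set[str], interests: set[str]) -> float:
    return len(agent_topics & interests) / len(agent_topics)


def recommend(interests: set[str]) -> list[str]:
    scores = {name: relevance(topics, interests) for name, topics in CANDIDATE_AGENTS.items()}
    return [name for name, score in scores.items() if score >= THRESHOLD]


recommended = recommend({"cooking", "running"})
user_accepts = True  # stand-in for the user-input indication in the abstract
if user_accepts:
    print("configuring account with:", recommended)
```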
-
Patent number: 10049670
Abstract: Methods, systems, and apparatus for receiving, by a voice action system, data specifying trigger terms that trigger an application to perform a voice action and a context that specifies a status of the application when the voice action can be triggered. The voice action system receives data defining a discoverability example for the voice action that comprises one or more of the trigger terms that trigger the application to perform the voice action when a status of the application satisfies the specified context. The voice action system receives a request for discoverability examples for the application from a user device having the application installed, and provides the data defining the discoverability examples to the user device in response to the request. The user device is configured to provide a notification of the one or more of the trigger terms when a status of the application satisfies the specified context.
Type: Grant
Filed: June 6, 2016
Date of Patent: August 14, 2018
Assignee: GOOGLE LLC
Inventors: Bo Wang, Sunil Vemuri, Barnaby John James, Pravir Kumar Gupta, Nitin Mangesh Shetti
-
Publication number: 20180096683
Abstract: Example aspects of the present disclosure are directed to processing voice commands or utterances. For instance, data indicative of a voice utterance can be received. A device topology representation can be accessed. The device topology representation can define a plurality of smart devices associated with one or more structures. The device topology representation can further define a location of each of the plurality of devices within the associated structures. A transcription of the voice utterance can be determined based at least in part on the device topology representation. One or more selected devices and one or more actions to be performed by the one or more selected devices can be determined based at least in part on the determined transcription and the device topology representation.
Type: Application
Filed: September 29, 2017
Publication date: April 5, 2018
Inventors: Barnaby John James, David Roy Schairer, Amy Lynn Baldwin, Vincent Yanton Mo, Jun Yang, Mark Spates, IV, Lei Zhong
-
Publication number: 20180096283
Abstract: An example method includes receiving, by a computational assistant executing at one or more processors, a representation of an utterance spoken at a computing device; identifying, based on the utterance, a task to be performed; determining a capability level of a first party (1P) agent to perform the task; determining capability levels of respective third party (3P) agents of a plurality of 3P agents to perform the task; responsive to determining that the capability level of the 1P agent does not satisfy a threshold capability level, that a capability level of a particular 3P agent of the plurality of 3P agents is a greatest of the determined capability levels, and that the capability level of the particular 3P agent satisfies the threshold capability level, selecting the particular 3P agent to perform the task; and performing one or more actions determined by the selected agent to perform the task.
Type: Application
Filed: November 16, 2017
Publication date: April 5, 2018
Inventors: Bo Wang, Lei Zhong, Barnaby John James, Saisuresh Krishnakumaran, Robert Stets, Bogdan Caprita, Valerie Nygaard
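The selection logic reads naturally as a threshold check over capability levels; the sketch below follows one plausible reading (prefer the 1P agent when it clears the threshold, otherwise the most capable qualifying 3P agent), with invented numbers and names.

```python
# Hedged sketch of the selection logic in the abstract: fall back from the
# first party (1P) agent to the most capable third party (3P) agent that
# clears the capability threshold. The capability values are invented.
THRESHOLD = 0.7


def select_agent(capability_1p: float, capabilities_3p: dict[str, float]) -> str | None:
    if capability_1p >= THRESHOLD:
        return "1p_agent"
    best_3p = max(capabilities_3p, key=capabilities_3p.get, default=None)
    if best_3p is not None and capabilities_3p[best_3p] >= THRESHOLD:
        return best_3p
    return None  # no agent is capable enough to perform the task


print(select_agent(capability_1p=0.4,
                   capabilities_3p={"dining_agent": 0.9, "generic_agent": 0.6}))
# -> 'dining_agent'
```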