Patents by Inventor Haonan Yu

Haonan Yu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240067876
    Abstract: The present invention relates to compounds and their nonlinear optical (NLO) crystals of A3B11P2O23 (A=K, Rb, Cs, NH4), methods for producing them, and uses thereof. The series of compounds has the chemical formula A3B11P2O23 (A=K, Rb, Cs, NH4), namely K3B11P2O23, Rb3B11P2O23, Cs3B11P2O23, and (NH4)3B11P2O23. The NLO crystals having this chemical formula belong to the rhombohedral crystal system, with a space group of R3 and crystal cell parameters of a=b=10.016(5)-12.591(5) Å, c=12.105(6)-14.905(6) Å, Z=3. The A3B11P2O23 (A=K, Rb, Cs, NH4) compounds were prepared by a solid-state reaction method or a hydrothermal method, and the A3B11P2O23 (A=K, Rb, Cs, NH4) NLO crystals were prepared by a high-temperature solid-state reaction method, a hydrothermal method, or a solution method. They meet the requirements for frequency conversion of UV-wavelength lasers and could be used to prepare nonlinear optical devices.
    Type: Application
    Filed: September 30, 2022
    Publication date: February 29, 2024
    Inventors: Hongwei YU, Haonan LIU, Hongping WU, Zhanggui HU
  • Publication number: 20230410504
    Abstract: A system and method for determining the locations and types of objects in a plurality of videos. The method comprises pairing each video with one or more sentences describing the activity or activities in which those objects participate in the associated video, wherein no use is made of a pretrained object detector. The object locations are specified as rectangles, the object types are specified as nouns, and sentences describe the relative positions and motions of the objects in the videos referred to by the nouns in the sentences. The relative positions and motions of the objects in the video are described by a conjunction of predicates constructed to represent the activity described by the sentences associated with the videos.
    Type: Application
    Filed: December 29, 2020
    Publication date: December 21, 2023
    Applicant: Purdue Research Foundation
    Inventors: Jeffrey Mark Siskind, Haonan Yu
  • Publication number: 20230307617
    Abstract: Provided are rechargeable batteries, positive electrodes, and methods for manufacturing rechargeable batteries and positive electrodes. A rechargeable battery as disclosed includes a zinc metal negative electrode, an electrolyte having a pH between ~4 and ~6, and a positive electrode. The positive electrode includes a current collector and a primer coating layer applied to the current collector to form a primer-coated current collector for protecting the current collector from the electrolyte. The primer coating layer includes a binder and a conductive filler. The positive electrode further includes a positive electrode composite layer applied to the primer coating layer. The positive electrode composite layer includes a hydrophilic binder, a conductive additive, and a material that undergoes reversible faradaic reactions with zinc ions.
    Type: Application
    Filed: June 17, 2021
    Publication date: September 28, 2023
    Inventors: Brian D. Adams, Marine B. Cuisinier, Susi Jin, John Philip S. Lee, Jason Quinn, John Tytus, Yukun Wang, Haonan Yu
  • Patent number: 11741342
    Abstract: Neural Architecture Search (NAS) is a laborious process. Prior work on automated NAS targets mainly improving accuracy but lacks consideration of computational resource use. Presented herein are embodiments of a Resource-Efficient Neural Architect (RENA), an efficient resource-constrained NAS using reinforcement learning with network embedding. RENA embodiments use a policy network to process the network embeddings to generate new configurations. Example demonstrations of RENA embodiments on image recognition and keyword spotting (KWS) problems are also presented herein. RENA embodiments can find novel architectures that achieve high performance even with tight resource constraints. For the CIFAR10 dataset, the tested embodiment achieved 2.95% test error when compute intensity was greater than 100 FLOPs/byte, and 3.87% test error when model size was less than 3M parameters.
    Type: Grant
    Filed: March 8, 2019
    Date of Patent: August 29, 2023
    Assignee: Baidu USA LLC
    Inventors: Yanqi Zhou, Siavash Ebrahimi, Sercan Arik, Haonan Yu, Hairong Liu, Gregory Diamos
  • Publication number: 20230238650
    Abstract: Separators for zinc metal batteries, zinc metal batteries, and methods of fabricating a separator for use in a zinc metal battery are provided. The separator includes a hydrophilic membrane having a first side for facing a negative electrode when arranged in the zinc metal battery and a second side for facing a positive electrode when arranged in the zinc metal battery. The hydrophilic membrane includes a plurality of pores traversing the hydrophilic membrane from the first side to the second side enabling flow of zinc cations between the negative electrode and the positive electrode through the separator. Each of the pores may have a pore size ranging from about 0.1 to 1.3 µm.
    Type: Application
    Filed: June 17, 2021
    Publication date: July 27, 2023
    Inventors: Brian D. Adams, Marine B. Cuisinier, Susi Jin, John Philip S. Lee, Kendal Wilson, Haonan Yu
  • Patent number: 11417235
    Abstract: Described herein are systems and methods for grounded natural language learning in an interactive setting. In embodiments, during a learning process, an agent learns natural language by interacting with a teacher and learning from feedback, thus learning and improving language skills while taking part in the conversation. In embodiments, a model is used to incorporate both imitation and reinforcement by leveraging jointly sentence and reward feedback from the teacher. Various experiments are conducted to validate the effectiveness of a model embodiment.
    Type: Grant
    Filed: November 22, 2017
    Date of Patent: August 16, 2022
    Assignee: Baidu USA LLC
    Inventors: Haichao Zhang, Haonan Yu, Wei Xu
  • Publication number: 20220207272
    Abstract: A system and method for determining the locations and types of objects in a plurality of videos. The method comprises pairing each video with one or more sentences describing the activity or activities in which those objects participate in the associated video, wherein no use is made of a pretrained object detector. The object locations are specified as rectangles, the object types are specified as nouns, and sentences describe the relative positions and motions of the objects in the videos referred to by the nouns in the sentences. The relative positions and motions of the objects in the video are described by a conjunction of predicates constructed to represent the activity described by the sentences associated with the videos.
    Type: Application
    Filed: December 29, 2020
    Publication date: June 30, 2022
    Applicant: Purdue Research Foundation
    Inventors: Jeffrey Mark Siskind, Haonan Yu
  • Patent number: 11074829
    Abstract: Described herein are systems and methods for interactive language acquisition with one-shot concept learning through a conversational environment. Supervised language learning is limited because it captures mainly the statistics of the training data, and it is hardly adaptive to new scenarios or flexible enough to acquire new knowledge without inefficient retraining or catastrophic forgetting. In one or more embodiments, conversational interaction serves as a natural interface both for language learning and for novel knowledge acquisition. Embodiments of a joint imitation and reinforcement approach are disclosed for grounded language learning through interactive conversation. An agent trained with this approach is able to actively acquire information by asking questions about novel objects and to use the just-learned knowledge in subsequent conversations in a one-shot fashion. Results compared with other methods verified the effectiveness of the embodiments disclosed herein.
    Type: Grant
    Filed: April 12, 2018
    Date of Patent: July 27, 2021
    Assignee: Baidu USA LLC
    Inventors: Haichao Zhang, Haonan Yu, Wei Xu
  • Publication number: 20190354837
    Abstract: Neural Architecture Search (NAS) is a laborious process. Prior work on automated NAS targets mainly improving accuracy but lacks consideration of computational resource use. Presented herein are embodiments of a Resource-Efficient Neural Architect (RENA), an efficient resource-constrained NAS using reinforcement learning with network embedding. RENA embodiments use a policy network to process the network embeddings to generate new configurations. Example demonstrations of RENA embodiments on image recognition and keyword spotting (KWS) problems are also presented herein. RENA embodiments can find novel architectures that achieve high performance even with tight resource constraints. For the CIFAR10 dataset, the tested embodiment achieved 2.95% test error when compute intensity was greater than 100 FLOPs/byte, and 3.87% test error when model size was less than 3M parameters.
    Type: Application
    Filed: March 8, 2019
    Publication date: November 21, 2019
    Applicant: Baidu USA LLC
    Inventors: Yanqi ZHOU, Siavash EBRAHIMI, Sercan ARIK, Haonan YU, Hairong LIU, Gregory DIAMOS
  • Publication number: 20190318648
    Abstract: Described herein are systems and methods for interactive language acquisition with one-shot concept learning through a conversational environment. Supervised language learning is limited because it captures mainly the statistics of the training data, and it is hardly adaptive to new scenarios or flexible enough to acquire new knowledge without inefficient retraining or catastrophic forgetting. In one or more embodiments, conversational interaction serves as a natural interface both for language learning and for novel knowledge acquisition. Embodiments of a joint imitation and reinforcement approach are disclosed for grounded language learning through interactive conversation. An agent trained with this approach is able to actively acquire information by asking questions about novel objects and to use the just-learned knowledge in subsequent conversations in a one-shot fashion. Results compared with other methods verified the effectiveness of the embodiments disclosed herein.
    Type: Application
    Filed: April 12, 2018
    Publication date: October 17, 2019
    Applicant: Baidu USA LLC
    Inventors: Haichao ZHANG, Haonan YU, Wei XU
  • Patent number: 10395118
    Abstract: Described herein are systems and methods that exploit hierarchical Recurrent Neural Networks (RNNs) to tackle the video captioning problem; that is, generating one or multiple sentences to describe a realistic video. Embodiments of the hierarchical framework comprise a sentence generator and a paragraph generator. In embodiments, the sentence generator produces one simple short sentence that describes a specific short video interval. In embodiments, it exploits both temporal- and spatial-attention mechanisms to selectively focus on visual elements during generation. In embodiments, the paragraph generator captures the inter-sentence dependency by taking as input the sentential embedding produced by the sentence generator, combining it with the paragraph history, and outputting the new initial state for the sentence generator.
    Type: Grant
    Filed: June 15, 2016
    Date of Patent: August 27, 2019
    Assignee: Baidu USA LLC
    Inventors: Haonan Yu, Jiang Wang, Zhiheng Huang, Yi Yang, Wei Xu
  • Patent number: 10366166
    Abstract: Described herein are systems and methods for human-like language acquisition in a compositional framework to implement object recognition or navigation tasks. Embodiments include a method for a model to learn the input language in a grounded and compositional manner, such that after training the model is able to correctly execute zero-shot commands, in which either the combination of words in the command has never appeared before, and/or new object concepts were learned from another task but never encountered in navigation settings. In embodiments, a framework is trained end-to-end to simultaneously learn the visual representations of the environment and the syntax and semantics of the language, and to output actions via an action module. In embodiments, the zero-shot learning capability of the framework results from its compositionality and modularity with parameter tying.
    Type: Grant
    Filed: September 7, 2017
    Date of Patent: July 30, 2019
    Assignee: Baidu USA LLC
    Inventors: Haonan Yu, Haichao Zhang, Wei Xu
  • Publication number: 20190220668
    Abstract: A system and method for determining the locations and types of objects in a plurality of videos. The method comprises pairing each video with one or more sentences describing the activity or activities in which those objects participate in the associated video, wherein no use is made of a pretrained object detector. The object locations are specified as rectangles, the object types are specified as nouns, and sentences describe the relative positions and motions of the objects in the videos referred to by the nouns in the sentences. The relative positions and motions of the objects in the video are described by a conjunction of predicates constructed to represent the activity described by the sentences associated with the videos.
    Type: Application
    Filed: June 6, 2017
    Publication date: July 18, 2019
    Applicant: Purdue Research Foundation
    Inventors: Jeffrey Mark Siskind, Haonan Yu
  • Publication number: 20190179316
    Abstract: A system for directing the motion of a vehicle, comprising receiving commands in natural language using a processor, the commands specifying a relative path to be taken by the vehicle with respect to other objects in the environment; and determining an absolute path for the vehicle to follow based on the relative path using the processor, the absolute path comprising a series of coordinates in the environment; and directing the vehicle along the absolute path. Also provided is a system for training a lexicon of a natural language processing system, comprising receiving a data set containing a corpus of absolute paths driven by a vehicle annotated with natural language descriptions of the absolute paths using a processor, and determining parameters of the lexicon based on the data set.
    Type: Application
    Filed: August 25, 2017
    Publication date: June 13, 2019
    Applicant: Purdue Research Foundation
    Inventors: Jeffrey Mark SISKIND, Haonan Yu, Scott Alan BRONIKOWSKI, Daniel Paul BARRETT
  • Publication number: 20190073353
    Abstract: Described herein are systems and methods for human-like language acquisition in a compositional framework to implement object recognition or navigation tasks. Embodiments include a method for a model to learn the input language in a grounded and compositional manner, such that after training the model is able to correctly execute zero-shot commands, in which either the combination of words in the command has never appeared before, and/or new object concepts were learned from another task but never encountered in navigation settings. In embodiments, a framework is trained end-to-end to simultaneously learn the visual representations of the environment and the syntax and semantics of the language, and to output actions via an action module. In embodiments, the zero-shot learning capability of the framework results from its compositionality and modularity with parameter tying.
    Type: Application
    Filed: September 7, 2017
    Publication date: March 7, 2019
    Applicant: Baidu USA LLC
    Inventors: Haonan YU, Haichao ZHANG, Wei XU
  • Publication number: 20180342174
    Abstract: Described herein are systems and methods for grounded natural language learning in an interactive setting. In embodiments, during a learning process, an agent learns natural language by interacting with a teacher and learning from feedback, thus learning and improving language skills while taking part in the conversation. In embodiments, a model is used to incorporate both imitation and reinforcement by leveraging jointly sentence and reward feedback from the teacher. Various experiments are conducted to validate the effectiveness of a model embodiment.
    Type: Application
    Filed: November 22, 2017
    Publication date: November 29, 2018
    Applicant: Baidu USA LLC
    Inventors: Haichao ZHANG, Haonan YU, Wei XU
  • Publication number: 20170127016
    Abstract: Described herein are systems and methods that exploit hierarchical Recurrent Neural Networks (RNNs) to tackle the video captioning problem; that is, generating one or multiple sentences to describe a realistic video. Embodiments of the hierarchical framework comprise a sentence generator and a paragraph generator. In embodiments, the sentence generator produces one simple short sentence that describes a specific short video interval. In embodiments, it exploits both temporal- and spatial-attention mechanisms to selectively focus on visual elements during generation. In embodiments, the paragraph generator captures the inter-sentence dependency by taking as input the sentential embedding produced by the sentence generator, combining it with the paragraph history, and outputting the new initial state for the sentence generator.
    Type: Application
    Filed: June 15, 2016
    Publication date: May 4, 2017
    Applicant: Baidu USA LLC
    Inventors: Haonan Yu, Jiang Wang, Zhiheng Huang, Yi Yang, Wei Xu
  • Patent number: 9183466
    Abstract: A method of testing a video against an aggregate query includes automatically receiving an aggregate query defining participant(s) and condition(s) on the participant(s). Candidate object(s) are detected in the frames of the video. A first lattice is constructed for each participant, the first-lattice nodes corresponding to the candidate object(s). A second lattice is constructed for each condition. An aggregate lattice is constructed using the respective first lattice(s) and the respective second lattice(s). Each aggregate-lattice node includes a scoring factor combining a first-lattice node factor and a second-lattice node factor. Respective aggregate score(s) are determined of one or more path(s) through the aggregate lattice, each path including a respective plurality of the nodes in the aggregate lattice, to determine whether the video corresponds to the aggregate query.
    Type: Grant
    Filed: December 6, 2013
    Date of Patent: November 10, 2015
    Assignee: Purdue Research Foundation
    Inventors: Jeffrey Mark Siskind, Andrei Barbu, Siddharth Narayanaswamy, Haonan Yu
  • Publication number: 20140369596
    Abstract: A method of testing a video against an aggregate query includes automatically receiving an aggregate query defining participant(s) and condition(s) on the participant(s). Candidate object(s) are detected in the frames of the video. A first lattice is constructed for each participant, the first-lattice nodes corresponding to the candidate object(s). A second lattice is constructed for each condition. An aggregate lattice is constructed using the respective first lattice(s) and the respective second lattice(s). Each aggregate-lattice node includes a scoring factor combining a first-lattice node factor and a second-lattice node factor. Respective aggregate score(s) are determined of one or more path(s) through the aggregate lattice, each path including a respective plurality of the nodes in the aggregate lattice, to determine whether the video corresponds to the aggregate query.
    Type: Application
    Filed: December 6, 2013
    Publication date: December 18, 2014
    Applicant: Purdue Research Foundation
    Inventors: Jeffrey Mark Siskind, Andrei Barbu, Siddharth Narayanaswamy, Haonan Yu
  • Patent number: 8750602
    Abstract: Embodiments of the present invention relate to a method and a system for personalized advertisement push based on user interest learning. The method may include: obtaining multiple user interest models through multitask sorting learning; extracting an object of interest in a video according to the user interest models; and extracting multiple visual features of the object of interest, and according to the visual features, retrieving related advertising information in an advertisement database. Through the method and the system provided in embodiments of the present invention, a push advertisement may be closely relevant to the content of the video, thereby meeting personalized requirements of a user to a certain extent and achieving personalized advertisement push.
    Type: Grant
    Filed: December 10, 2012
    Date of Patent: June 10, 2014
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Jia Li, Yunchao Gao, Haonan Yu, Jun Zhang, Yonghong Tian, Jun Yan
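
Several entries above (patent 9183466 and publication 20140369596) score paths through an aggregate lattice whose nodes combine a per-frame detection factor with a condition factor, then take the best aggregate path score to decide whether a video matches a query. As an illustrative sketch only — the function names and scoring interfaces below are hypothetical stand-ins, not taken from the patents — this kind of best-path computation can be done with Viterbi-style dynamic programming:

```python
def best_track_score(detections, node_score, edge_score):
    """Viterbi-style best path through a per-frame candidate lattice.

    detections: list over frames, each a list of candidate detections.
    node_score(t, j): score of candidate j in frame t (detection factor).
    edge_score(t, i, j): score of moving from candidate i in frame t-1
                         to candidate j in frame t (condition factor).
    Returns the best total score over all tracks (lattice paths).
    """
    # Best score of any partial track ending at each candidate of frame 0.
    prev = {j: node_score(0, j) for j in range(len(detections[0]))}
    for t in range(1, len(detections)):
        cur = {}
        for j in range(len(detections[t])):
            # Extend the best-scoring predecessor track by candidate j.
            cur[j] = node_score(t, j) + max(
                prev[i] + edge_score(t, i, j) for i in prev
            )
        prev = cur
    return max(prev.values())


# Toy example: two frames, two candidate boxes each.
scores = [[1.0, 0.2], [0.5, 2.0]]                     # per-frame detection scores
node_score = lambda t, j: scores[t][j]
edge_score = lambda t, i, j: 0.0 if i == j else -1.0  # penalize track switching
best = best_track_score([[0, 1], [0, 1]], node_score, edge_score)
print(best)  # 2.2  (best track stays on candidate 1 in both frames)
```

Each frame contributes one column of lattice nodes, and the recurrence keeps, per candidate, the best score of any track ending there, so the search is linear in the number of frames rather than exponential in the number of candidate paths.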