Patents Examined by Michael Ortiz Sanchez
-
Patent number: 12293156
Abstract: Systems and methods for deep technology innovation management by cross-pollinating an innovations dataset are disclosed. A system extracts a context-based keyword from an innovation dataset by transforming the innovation dataset to a vector. Further, the system searches semantically relevant keywords for the extracted context-based keyword by extracting an entity and a key phrase from the extracted context-based keyword. Furthermore, the system clusters the vector by identifying frequent keywords in the semantically relevant keywords to obtain cluster centroids of the frequent keywords. Thereafter, the system determines weighted keywords in each cluster using the obtained cluster centroids, and classifies the weighted keywords to identify emerging innovation trends relevant to the innovation in the innovation dataset. The system forms cohorts of innovators to explore the reuse of innovations, assets, and code, and to build a focused monetization model.
Type: Grant
Filed: August 10, 2022
Date of Patent: May 6, 2025
Assignee: ACCENTURE GLOBAL SOLUTIONS LIMITED
Inventors: Raghavan Tinniyam Iyer, Amod Deshpande, Puneet Kalra, Bhavna Butani, Kiran Raghunath Sathvik, Bhaskar Ghosh
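The pipeline in this abstract (keyword extraction, frequent-keyword clustering, centroid-weighted trend keywords) can be sketched in miniature. The TF-IDF weighting, the seed-keyword grouping rule, and all names below are illustrative assumptions, not the patented method:

```python
from collections import Counter
import math

def tfidf(docs):
    """Bag-of-words TF-IDF weights, one dict per document."""
    tfs = [Counter(d.lower().split()) for d in docs]
    df = Counter(w for tf in tfs for w in tf)
    n = len(docs)
    return [{w: c * math.log((1 + n) / (1 + df[w])) for w, c in tf.items()}
            for tf in tfs]

def cluster_by_frequent_keywords(docs, min_df=2, top_n=3):
    """Group documents by a shared frequent keyword, average member
    vectors into a centroid, and rank keywords by centroid weight."""
    vecs = tfidf(docs)
    df = Counter(w for d in docs for w in set(d.lower().split()))
    frequent = {w for w, c in df.items() if c >= min_df}
    clusters = {}
    for doc, vec in zip(docs, vecs):
        words = doc.lower().split()
        # seed = the document's most frequent keyword that is corpus-frequent
        seed = max((w for w in words if w in frequent),
                   key=words.count, default=None)
        clusters.setdefault(seed, []).append(vec)
    trends = {}
    for seed, members in clusters.items():
        centroid = Counter()
        for v in members:
            for w, x in v.items():
                centroid[w] += x / len(members)
        trends[seed] = [w for w, _ in centroid.most_common(top_n)]
    return trends
```

The top-weighted keywords per centroid stand in for the "emerging innovation trends" the abstract describes.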
-
Patent number: 12293768
Abstract: A method for decoding an encoded audio bitstream in an audio processing system is disclosed. The method includes extracting from the encoded audio bitstream a first waveform-coded signal comprising spectral coefficients corresponding to frequencies up to a first cross-over frequency for a time frame and performing parametric decoding at a second cross-over frequency for the time frame to generate a reconstructed signal. The second cross-over frequency is above the first cross-over frequency, and the parametric decoding uses reconstruction parameters derived from the encoded audio bitstream to generate the reconstructed signal. The method also includes extracting from the encoded audio bitstream a second waveform-coded signal comprising spectral coefficients corresponding to a subset of frequencies above the first cross-over frequency for the time frame and interleaving the second waveform-coded signal with the reconstructed signal to produce an interleaved signal for the time frame.
Type: Grant
Filed: November 8, 2023
Date of Patent: May 6, 2025
Assignee: Dolby International AB
Inventors: Kristofer Kjörling, Heiko Purnhagen, Harald Mundt, Karl Jonas Roeden, Leif Sehlström
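The per-frame assembly described above (waveform coefficients below the first cross-over, parametric reconstruction above it, with a waveform-coded subset interleaved on top) can be sketched as follows; the bin layout and names are assumptions, not Dolby's implementation:

```python
def interleave_spectrum(low_band, parametric_high, waveform_high, first_crossover):
    """Assemble one frame's spectral coefficients.

    low_band        -- waveform-coded coefficients for bins [0, first_crossover)
    parametric_high -- parametrically reconstructed coefficients above the
                       first cross-over frequency
    waveform_high   -- {bin_index: coefficient} for the waveform-coded subset
                       of frequencies above the first cross-over
    """
    assert len(low_band) == first_crossover
    spectrum = list(low_band) + list(parametric_high)
    for k, coeff in waveform_high.items():
        assert k >= first_crossover, "subset must lie above the first cross-over"
        spectrum[k] = coeff  # waveform coding overrides the parametric estimate
    return spectrum
```

The interleaving step is just a per-bin override: wherever a waveform-coded coefficient exists above the first cross-over, it replaces the parametric estimate.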
-
Patent number: 12288028
Abstract: Performance is improved for a trained neural network that uses positional information indicating the position at which each token in an input sequence appears.
Type: Grant
Filed: July 6, 2020
Date of Patent: April 29, 2025
Assignee: NATIONAL INSTITUTE OF INFORMATION AND COMMUNICATIONS TECHNOLOGY
Inventors: Kehai Chen, Rui Wang, Masao Uchiyama, Eiichiro Sumita
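For context, the best-known form of token positional information is the sinusoidal encoding from the original Transformer. The sketch below is illustrative only; the patent does not state that it uses this particular scheme:

```python
import math

def sinusoidal_positions(seq_len, d_model):
    """Standard Transformer sinusoidal positional encoding:
    even dimensions get sin(pos / 10000^(i/d)), odd ones the matching cos."""
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)
    return pe
```

Each position gets a distinct vector, and relative offsets correspond to linear transforms of these vectors, which is why this encoding is a common baseline for "positional information."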
-
Patent number: 12282743
Abstract: Described herein is an Autonomous Conversational AI system, which does not require any human configuration or annotation, and is used to have multi-turn dialogs with a user. A typical Conversational AI system consists of three main models: Natural Language Understanding (NLU), Dialog Manager (DM), and Natural Language Generation (NLG), each of which requires human-provided data and configuration. The system proposed herein leverages novel Conversational AI methods that automatically generate conversational AI configuration from any historical conversation logs. The automatically generated configuration contains Auto-Topics, Auto-Subtopics, Auto-Intents, Auto-Responses, and Auto-Flows, which are used to automatically train the NLU, DM, and NLG models. Once these models are trained for given conversation logs, the system can be used to have a dialog with any user.
Type: Grant
Filed: January 6, 2022
Date of Patent: April 22, 2025
Assignee: GICRM AI LLC
Inventors: Amol Kelkar, Nikhil Varghese, Chandra Khatri, Utkarsh Mittal, Nachiketa Rajpurohit, Peter Relan, Hung Tran
-
Patent number: 12230244
Abstract: Systems and methods are described herein for an application and graphical user interface ("GUI") for customized storytelling. In an example, a user can create profiles for a listener user and a reader user. The listener user profile can include information about the listener user. The reader user profile can include a voice model of the reader user's voice. The GUI can allow the user to provide a brief description of a story. The application can send the story description and listener user profile to a server that uses an artificial intelligence engine to generate a customized story for the listener user. The application can apply the reader user voice model to the story and play audio of the reader user's voice reading the story.
Type: Grant
Filed: December 28, 2023
Date of Patent: February 18, 2025
Inventor: Todd Searcy
-
Patent number: 12223960
Abstract: Implementations relate to generating a proficiency measure, and utilizing the proficiency measure to adapt one or more automated assistant functionalities. The generated proficiency measure is for a particular class of automated assistant actions, and is specific to an assistant device and/or is specific to a particular user. A generated proficiency measure for a class can reflect a degree of proficiency, of a user and/or of an assistant device, for that class. Various automated assistant functionalities can be adapted, for a particular class, responsive to determining the proficiency measure satisfies a threshold, or fails to satisfy the threshold (or an alternate threshold). The adaptation(s) can make automated assistant processing more efficient and/or improve (e.g., shorten the duration of) user-assistant interaction(s).
Type: Grant
Filed: March 18, 2024
Date of Patent: February 11, 2025
Assignee: GOOGLE LLC
Inventors: Matthew Sharifi, Victor Carbune
-
Patent number: 12217002
Abstract: Apparatuses, systems, and techniques to parse textual data using parallel computing devices. In at least one embodiment, text is parsed by a plurality of parallel processing units using a finite state machine and logical stack to convert the text to a tree data structure. Data is extracted from the tree by the plurality of parallel processing units and stored.
Type: Grant
Filed: May 11, 2022
Date of Patent: February 4, 2025
Assignee: NVIDIA Corporation
Inventors: Elias Stehle, Gregory Michael Kimball
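A single-threaded sketch of the core idea (a finite state machine plus a logical stack turning text into a tree) might look like this; the bracket grammar and node layout are invented for illustration, and the patent's contribution is running this kind of work across parallel processing units:

```python
def parse_to_tree(text):
    """Stack-driven parse of nested bracketed text into a tree."""
    root = {"label": "", "children": []}
    stack = [root]       # the "logical stack" of currently open nodes
    state = "LABEL"      # trivial finite state machine: LABEL | CLOSED
    for ch in text:
        if ch == "(":
            node = {"label": "", "children": []}
            stack[-1]["children"].append(node)
            stack.append(node)
            state = "LABEL"
        elif ch == ")":
            stack.pop()
            state = "CLOSED"
        elif state == "LABEL":
            stack[-1]["label"] += ch  # accumulate the open node's label
    if len(stack) != 1:
        raise ValueError("unbalanced input")
    return root
```

Once the tree exists, extracting data from it is an ordinary traversal, which is the part the abstract says is also done by the parallel processing units.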
-
Patent number: 12205027
Abstract: A method for neural network training is provided. The method inputs a training set of textual claims, lists of evidence including gold evidence chains, and claim labels labelling the evidence with respect to the textual claims. The claim labels include refutes, supports, and not enough information (NEI). The method computes an initial set of document retrievals for each of the textual claims. The method also includes computing an initial set of page element retrievals including sentence retrievals from the initial set of document retrievals for each of the textual claims. The method creates, from the training set of textual claims, a Leave Out Training Set which includes input texts and target texts relating to the labels. The method trains a sequence-to-sequence neural network to generate new target texts from new input texts using the Leave Out Training Set.
Type: Grant
Filed: June 15, 2022
Date of Patent: January 21, 2025
Assignee: NEC Corporation
Inventor: Christopher Malon
-
Patent number: 12190046
Abstract: A text editing apparatus comprises a database memory configured to store a text database, in which the text database is configured to store a plurality of text portions and a set of links between text portions, the set of links defining a document as a linked list of the text portions; and a data processor configured, in response to user input, to perform an editing operation to edit the text database so as to define an edited document by changing at least one of: (i) text within a text portion and (ii) the set of links between text portions.
Type: Grant
Filed: November 26, 2021
Date of Patent: January 7, 2025
Assignee: SONY GROUP CORPORATION
Inventors: Vittorio Loreto, Pietro Gravino
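A minimal sketch of such a text database, with portions stored separately and a link set defining the document order, might look like this (the class and method names are assumptions, not from the patent):

```python
class TextDB:
    """Text portions plus a link set defining the document as a linked list."""

    def __init__(self):
        self.portions = {}   # portion id -> text
        self.links = {}      # portion id -> id of the next portion (None = end)
        self.head = None

    def append(self, pid, text):
        self.portions[pid] = text
        self.links[pid] = None
        if self.head is None:
            self.head = pid
        else:
            tail = self.head
            while self.links[tail] is not None:
                tail = self.links[tail]
            self.links[tail] = pid

    def edit_text(self, pid, text):
        self.portions[pid] = text      # edit (i): change text within a portion

    def insert_after(self, pid, new_pid, text):
        self.portions[new_pid] = text  # edit (ii): change the set of links
        self.links[new_pid] = self.links[pid]
        self.links[pid] = new_pid

    def document(self):
        out, pid = [], self.head
        while pid is not None:
            out.append(self.portions[pid])
            pid = self.links[pid]
        return " ".join(out)
```

The two editing operations in the claim map directly onto `edit_text` (change text within a portion) and `insert_after` (change the link set), and the document is always recoverable by walking the list.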
-
Patent number: 12183363
Abstract: A system, method and computer product for training a neural network system. The method comprises applying an audio signal to the neural network system, the audio signal including a vocal component and a non-vocal component. The method also comprises comparing an output of the neural network system to a target signal, and adjusting at least one parameter of the neural network system to reduce a result of the comparing, for training the neural network system to estimate one of the vocal component and the non-vocal component. In one example embodiment, the system comprises a U-Net architecture. After training, the system can estimate vocal or instrumental components of an audio signal, depending on which type of component the system is trained to estimate.
Type: Grant
Filed: November 20, 2023
Date of Patent: December 31, 2024
Assignee: Spotify AB
Inventors: Andreas Simon Thore Jansson, Angus William Sackfield, Ching Chuan Sung
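At inference time, a separator of this kind is typically applied as a soft mask on the mixture spectrogram. The sketch below shows only that masking step, assuming magnitude spectrograms as nested lists; it is not the patented training procedure:

```python
def apply_soft_mask(mix_mag, mask):
    """Element-wise soft mask on a magnitude spectrogram.

    mix_mag -- mixture magnitudes, rows of frequency bins
    mask    -- values in [0, 1], e.g. the output of a U-Net style separator
    Returns (vocal_estimate, accompaniment_estimate).
    """
    vocal = [[m * x for m, x in zip(mr, xr)] for mr, xr in zip(mask, mix_mag)]
    accomp = [[(1 - m) * x for m, x in zip(mr, xr)] for mr, xr in zip(mask, mix_mag)]
    return vocal, accomp
```

Because the two estimates use `m` and `1 - m`, they sum back to the mixture bin-for-bin, which is the usual sanity check for mask-based separation.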
-
Patent number: 12183330
Abstract: In certain embodiments, speech is converted to text for theme identification by natural language processing. Notification data is generated based on detected themes, and the notification data may include rules for notification presentation on a client device. The notification data may include parameters for processing image data captured by an augmented reality device to detect one or more objects. The objects may be associated with the theme, and their detection within captured image data may, in accordance with other rules, cause the augmented reality device to present a notification with contextual relevance to a current environment of a user utilizing the augmented reality device.
Type: Grant
Filed: January 29, 2021
Date of Patent: December 31, 2024
Assignee: Capital One Services, LLC
Inventors: Joshua Edwards, Michael Mossoba, Abdelkader Benkreira
-
Patent number: 12175957
Abstract: A system, method and computer product for training a neural network system. The method comprises inputting an audio signal to the system to generate plural outputs f(X, Θ). The audio signal includes one or more of vocal content and/or musical instrument content, and each output f(X, Θ) corresponds to a respective one of the different content types. The method also comprises comparing individual outputs f(X, Θ) of the neural network system to corresponding target signals. For each compared output f(X, Θ), at least one parameter of the system is adjusted to reduce a result of the comparing performed for the output f(X, Θ), to train the system to estimate the different content types. In one example embodiment, the system comprises a U-Net architecture. After training, the system can estimate various different types of vocal and/or instrument components of an audio signal, depending on which type of component(s) the system is trained to estimate.
Type: Grant
Filed: December 23, 2022
Date of Patent: December 24, 2024
Assignee: Spotify AB
Inventors: Andreas Simon Thore Jansson, Angus William Sackfield, Ching Chuan Sung, Rachel M. Bittner
-
Patent number: 12169697
Abstract: In accordance with one embodiment, a system includes a processor, a memory module communicatively coupled to the processor, an NLP module communicatively coupled to the processor, and a set of machine-readable instructions stored in the memory module. The machine-readable instructions, when executed by the processor, direct the processor to perform operations including receiving a text data, and receiving a training text data for training one or more models of the NLP module. The operations also include generating, with a novice model of the NLP module, a novice suggestion based on the text data and the training text data to present an idea related to the text data, generating, with an expert model of the NLP module, an expert suggestion based on the text data and the training text data to present an idea elaborating on the text data, and outputting the novice suggestion and/or the expert suggestion.
Type: Grant
Filed: September 14, 2021
Date of Patent: December 17, 2024
Assignee: Toyota Research Institute, Inc.
Inventors: Emily Sumner, Nikos Arechiga, Yue Weng, Shabnam Hakimi, Jonathan A. DeCastro
-
Patent number: 12164859
Abstract: Methods for generating a categorized, ranked, condensed summary of a transcript of a conversation, involving obtaining a diarized version of the transcript of the conversation, storing textual monologues from the transcript, determining classifications as to the textual monologues based on a classifier algorithm, associating the classifications with the textual monologues, creating textually-modified rephrasings of the textual monologues based on text and classification thereof, storing the textually-modified rephrasings, aggregating the textually-modified rephrasings based on associated clustering and scoring, and transmitting summary information pertaining to the aggregated textually-modified rephrasings to a user device.
Type: Grant
Filed: June 1, 2022
Date of Patent: December 10, 2024
Assignee: GONG.IO LTD
Inventors: Shlomi Medalion, Inbal Horev, Raz Nussbaum, Omri Allouche, Raquel Sitman, Ortal Ashkenazi
-
Patent number: 12155884
Abstract: A remote control for generating output signals suitable for controlling one or more electronic devices includes a sound transducer, a speech recognition unit for recognizing voice commands, a memory for storing information relative to available content of the one or more electronic devices, and a control signal generating and receiving unit for generating control signals corresponding to the voice commands for controlling the one or more electronic devices.
Type: Grant
Filed: April 22, 2020
Date of Patent: November 26, 2024
Assignee: Saronikos Trading and Services, Unipessoal LDA
Inventor: Robert James
-
Patent number: 12119013
Abstract: An acoustic crosstalk suppression device includes a speaker estimation unit configured to estimate a main speaker based on voice signals collected by n microphones corresponding to n persons (n: an integer equal to or larger than 3); n filter update units, each of which is configured to update a parameter of a filter configured to generate a suppression signal for a crosstalk component included in a voice signal of the main speaker; and a crosstalk suppression unit configured to suppress the crosstalk component by using a synthesis suppression signal generated by up to (n−1) of the filter update units corresponding to reference signals collected by up to (n−1) of the microphones.
Type: Grant
Filed: November 16, 2020
Date of Patent: October 15, 2024
Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
Inventors: Masanari Miyamoto, Naoya Tanaka, Hiromasa Ohashi
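A toy version of this pipeline (pick the loudest microphone as the main speaker, then subtract filtered copies of the other microphones) can be sketched with single-tap gains standing in for the adaptive filters; the gain values and frame layout are assumptions, not the patented filter-update scheme:

```python
def suppress_crosstalk(frames, filters):
    """Energy-based main-speaker pick plus reference subtraction.

    frames  -- one sample list per microphone for the current block
    filters -- one scalar gain per microphone (stand-in for adaptive filters)
    Returns (main_speaker_index, cleaned_samples).
    """
    energies = [sum(x * x for x in f) for f in frames]
    main = max(range(len(frames)), key=energies.__getitem__)
    # synthesize the suppression signal from the (up to n-1) reference mics
    suppression = [0.0] * len(frames[main])
    for j, f in enumerate(frames):
        if j == main:
            continue
        for t, x in enumerate(f):
            suppression[t] += filters[j] * x
    cleaned = [x - s for x, s in zip(frames[main], suppression)]
    return main, cleaned
```

In the patented device the per-reference gains would instead be full filters whose parameters are updated over time.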
-
Patent number: 12119004
Abstract: The present disclosure may provide a voice audio data processing system. The voice audio data processing system may obtain voice audio data, which includes one or more voices, each being respectively associated with one of one or more subjects. For one of the one or more voices and the subject associated with the voice, the voice audio processing system may generate a text based on the voice audio data. The text may have one or more sizes, each size corresponding to one of one or more volumes of the voice. The text may have one or more colors, each color corresponding to one of one or more emotion types of the voice.
Type: Grant
Filed: September 8, 2021
Date of Patent: October 15, 2024
Assignee: ZHEJIANG TONGHUASHUN INTELLIGENT TECHNOLOGY CO., LTD.
Inventors: Yichen Yu, Yunsan Guo
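The volume-to-size and emotion-to-color mappings could be sketched like this; the specific sizes, palette, and HTML output are invented for illustration and are not fixed by the patent:

```python
def style_word(word, volume_db, emotion):
    """Render one transcribed word with size from volume, color from emotion."""
    size = 12 + max(0, min(12, int(volume_db // 5)))  # clamp to 12-24 px
    palette = {"happy": "#e6a817", "sad": "#3566c0", "angry": "#c02a2a"}
    color = palette.get(emotion, "#000000")           # default: black
    return f'<span style="font-size:{size}px;color:{color}">{word}</span>'
```

A real system would derive `volume_db` from the audio frames aligned to the word and `emotion` from an emotion classifier over the same span.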
-
Patent number: 12112749
Abstract: A command analysis device capable of shortening the time until a command is executed is provided. The command analysis device includes a speech recognition unit that performs, every time a predetermined unit of a speech signal is input, speech recognition on the speech signal and acquires a partial speech recognition result, which is an intermediate result, and a command analysis unit that verifies the intermediate result against a predetermined intermediate-result recognition rule and outputs an analysis result, during input of the speech signal, when a command execution target and a command execution content are successfully analyzed.
Type: Grant
Filed: April 3, 2020
Date of Patent: October 8, 2024
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Kazunori Kobayashi, Shoichiro Saito, Hiroaki Ito
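The incremental rule check (emit a result as soon as a partial hypothesis contains both an execution target and an execution content) can be sketched as follows; the vocabulary is made up for illustration:

```python
def analyze_partial(partial_result,
                    targets=frozenset({"light", "tv"}),
                    actions=frozenset({"on", "off", "mute"})):
    """Return (target, action) once both appear in a partial ASR hypothesis,
    else None, so the command can fire before the utterance finishes."""
    words = partial_result.lower().split()
    target = next((w for w in words if w in targets), None)
    action = next((w for w in words if w in actions), None)
    return (target, action) if target and action else None
```

Calling this on each growing hypothesis ("turn", "turn the light", "turn the light on") yields a command on the third call, before end-of-speech, which is the latency saving the abstract claims.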
-
Patent number: 12106051
Abstract: There is a need for more effective and efficient text categorization. This need can be addressed by, for example, techniques for semantic text categorization. In one example, a method includes determining an input vector-based representation of an input document; processing the input vector-based representation using a trained supervised machine learning model to generate the categorization based at least in part on the input vector-based representation, wherein (i) the trained supervised machine learning model has been trained using automatically generated training data, (ii) the automatically generated training data is generated by determining an inferred semantic label for each of one or more unlabeled training documents, and (iii) the labels are described by one or more short documents/short texts; and performing one or more categorization-based actions based at least in part on the categorization.
Type: Grant
Filed: July 16, 2020
Date of Patent: October 1, 2024
Assignee: Optum Technology, Inc.
Inventors: Suman Roy, Shashi Kumar, Amit Kumar, Vijay Varma Malladi, Rahul Chetlangia, Prakhar Pratap
-
Patent number: 12093635
Abstract: Embodiments of this disclosure disclose a sentence processing method and device. The method may include performing a word segmentation operation on a source sentence to be encoded to obtain m words. The method may further include obtaining an ith word in the m words using an ith encoding processing node in n encoding processing nodes, and obtaining an (i−1)th word vector from an (i−1)th encoding processing node. The method may further include performing a linear operation and a non-linear operation on the ith word and the (i−1)th word vector using the first unit of the ith encoding processing node to obtain an ith operation result, and outputting the ith operation result to the at least one second unit for processing to obtain an ith word vector. The method may further include generating, in response to obtaining m word vectors, a sentence vector according to the m word vectors.
Type: Grant
Filed: February 22, 2021
Date of Patent: September 17, 2024
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Fandong Meng, Jinchao Zhang, Jie Zhou
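A one-dimensional stand-in for the chained encoding nodes, where each step applies a linear operation to the current word and the previous vector followed by a non-linearity, might look like this; the weights, embeddings, and mean pooling are illustrative assumptions, not the patented node structure:

```python
import math

def encode_sentence(words, embed, w_in=0.5, w_rec=0.3):
    """Chain of encoding steps: node i combines word i (linear) with the
    (i-1)th vector, then applies tanh (non-linear); the sentence vector
    here is simply the mean of the per-word vectors."""
    h = 0.0
    word_vecs = []
    for w in words:
        h = math.tanh(w_in * embed[w] + w_rec * h)
        word_vecs.append(h)
    sentence_vec = sum(word_vecs) / len(word_vecs)
    return word_vecs, sentence_vec
```

Real encoders use vector-valued states and learned weight matrices; the scalar version only shows the data flow from word i and vector i-1 into vector i.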