Patents Examined by Kevin Ky
-
Patent number: 11748578
Abstract: A method may include presenting a user interface on a computing device, the user interface including a character input portion and a predictive suggestion portion; converting, using at least one processor, a set of characters entered in the character input portion into a word vector; inputting the word vector into a neural network; based on the inputting, determining a set of output terms from the neural network; querying a data store to retrieve a user-specific data value for an output term of the set of output terms; and presenting a paired output term that includes the output term and the user-specific data value in the predictive suggestion portion of the user interface.
Type: Grant
Filed: November 29, 2021
Date of Patent: September 5, 2023
Assignee: Wells Fargo Bank, N.A.
Inventors: Ganesan Anand, Bipin M. Sahni, Madhu V. Vempati
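The claimed pipeline (characters → word vector → model → output terms → user-specific lookup → paired suggestion) can be sketched as below. The vectorizer, stand-in model, and `USER_DATA` store are illustrative assumptions, not the patent's actual components.

```python
def to_word_vector(chars):
    # Toy character-frequency vector over 26 letters (stand-in vectorizer).
    vec = [0] * 26
    for c in chars.lower():
        if c.isalpha():
            vec[ord(c) - ord('a')] += 1
    return vec

def model_predict(vec):
    # Stand-in for the neural network: returns candidate output terms.
    return ["balance", "transfer"] if sum(vec) else []

USER_DATA = {"balance": "$1,234.56", "transfer": "2 pending"}  # mock data store

def paired_suggestions(chars):
    terms = model_predict(to_word_vector(chars))
    # Pair each output term with the user-specific value from the data store.
    return [(t, USER_DATA[t]) for t in terms if t in USER_DATA]

print(paired_suggestions("bal"))
```

The pairing step is what distinguishes this claim from plain autocomplete: each predicted term is enriched with a per-user value before it reaches the suggestion portion of the UI.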
-
Patent number: 11741979
Abstract: Audio content may be played on multiple audio-enabled devices via a voice command issued by a user. The voice command is received at a first audio-enabled device and processed, via speech recognition, to identify the audio content to be played and the target devices on which the audio content is to be played. In addition, the voice command can also indicate the time periods associated with the audio playback to provide synchronized playback. Device set information can be used to determine if the first audio-enabled device shares audio functionality with the target devices. If shared functionality is confirmed, one or more commands are sent to the respective target devices to instruct the corresponding playback of the audio content.
Type: Grant
Filed: August 21, 2020
Date of Patent: August 29, 2023
Assignee: AMAZON TECHNOLOGIES, INC.
Inventors: Albert M. Scalise, Tony David
-
Patent number: 11743418
Abstract: System that facilitates rapid onboarding of an autonomous (cashier-less) store by capturing images of the store's items from multiple angles, with varying background colors, and that builds a classifier training dataset from these images. Background surfaces may for example be coated with retroreflective tape or film, and variable-color incident light sources may generate the desired background colors. Embodiments may automatically rotate or otherwise reorient the item placed in the onboarding system, so that a relatively small number of cameras can capture views from multiple angles. When an item is placed in the system, a fully automated process may generate a sequence of item orientations and background colors, and may capture and process images from the cameras to create training images. Images of the item from multiple angles, under varying lighting conditions, may be captured without requiring an operator to move or reorient the item.
Type: Grant
Filed: February 17, 2021
Date of Patent: August 29, 2023
Assignee: ACCEL ROBOTICS CORPORATION
Inventors: Marius Buibas, John Quinn
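The automated capture sequence described above amounts to a nested sweep over orientations and background colors, firing every camera at each combination. A minimal sketch, with a placeholder in place of real camera I/O and with the specific orientations, colors, and camera names assumed for illustration:

```python
from itertools import product

ORIENTATIONS = [0, 90, 180, 270]        # degrees of automatic item rotation
BACKGROUNDS = ["red", "green", "blue"]  # variable-color backdrop settings
CAMERAS = ["cam0", "cam1"]

def capture_image(camera, orientation, background):
    # Placeholder: a real system would set the backdrop, rotate the item,
    # and trigger the camera hardware here.
    return f"{camera}_{orientation}_{background}.png"

def build_training_set():
    images = []
    for orientation, background in product(ORIENTATIONS, BACKGROUNDS):
        for cam in CAMERAS:
            images.append(capture_image(cam, orientation, background))
    return images

print(len(build_training_set()))  # 4 orientations x 3 colors x 2 cameras = 24
```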
-
Patent number: 11727685
Abstract: A system for characterizing content relating to a desired outcome is disclosed. The disclosed system can be configured to identify context included in content collected from various content sources, map the identified context into graph nodes and graph edges connecting the graph nodes, identify one or more features of the identified context and adjust at least one of: a graph node and a graph edge based on the identified one or more features, identify a graph incorporating the graph nodes, the graph edges, and at least one of an adjusted graph node and an adjusted graph edge, and provide a recommendation for at least one action for achieving the desired outcome based on the identified graph.
Type: Grant
Filed: July 29, 2020
Date of Patent: August 15, 2023
Assignee: KPMG LLP
Inventors: Swaminathan Chandrasekaran, Shiwangi Singh, Shan-Chuan Li, Anand Sekhar, Qianhao Yu, Sathyanarayan Venkateswaran, Oliver Michael Baltay
-
Patent number: 11721340
Abstract: A personal information assistant computing system may include a user computing device having a processor and a non-transitory memory device storing instructions. The personal information assistant may receive a user accessible input as a natural language communication from the user, which may be analyzed by a personal information assistant to determine a task to be performed by the virtual information assistant. The personal information assistant may be personalized to the user using encrypted user information. The personal information assistant communicates with a remote computing system in performance of a computer-assisted task, wherein the first personal information assistant interacts as a proxy for the user in response to at least one response received from the remote computing system. The personal information assistant may communicate the results of the task to the user via a user information screen and/or an audio device.
Type: Grant
Filed: October 20, 2020
Date of Patent: August 8, 2023
Assignee: Allstate Insurance Company
Inventors: Tao Chen, Philip Peter Ramirez, Manjunath Rao
-
Patent number: 11699044
Abstract: An apparatus and methods for generating automated communication simulation using artificial intelligence are disclosed. The apparatus comprises at least a processor, a memory communicatively connected to the processor, wherein the memory includes instructions configuring the at least a processor to receive a prior communication datum from a first user, parse the prior communication datum to extract at least a contextual datum relating to a second user, generate a correspondence simulation from the first user to the second user as a function of the at least a contextual datum, receive a prior handwriting image datum from the first user, classify at least a portion of the prior handwriting image datum to at least a glyph, identify at least a semantic match to the at least a glyph in the correspondence simulation, and generate and transmit an automated communication simulation using the at least a semantic match and the correspondence simulation.
Type: Grant
Filed: October 31, 2022
Date of Patent: July 11, 2023
Inventor: Todd Allen
-
Patent number: 11694035
Abstract: One embodiment of the present invention provides a method comprising receiving a text corpus, and generating a first list of triples based on the text corpus. Each triple of the first list comprises a first term representing a candidate hyponym, a second term representing a candidate hypernym, and a frequency value indicative of a number of times a hypernymy relation is observed between the candidate hyponym and the candidate hypernym in the text corpus. The method further comprises training a neural network for hypernym induction based on the first list. The trained neural network is a strict partial order network (SPON) model.
Type: Grant
Filed: June 9, 2021
Date of Patent: July 4, 2023
Assignee: International Business Machines Corporation
Inventors: Sarthak Dash, Alfio Massimiliano Gliozzo, Md Faisal Mahbub Chowdhury
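Building the (hyponym, hypernym, frequency) triple list can be sketched as below. The abstract does not say how hypernymy relations are observed; this sketch assumes a single Hearst-style pattern ("<hypernym> such as <hyponym>") purely as an illustration of the counting step that feeds the SPON training.

```python
import re
from collections import Counter

# Illustrative pattern only; the patent does not specify the extraction rule.
PATTERN = re.compile(r"(\w+) such as (\w+)")

def triples_from_corpus(corpus):
    counts = Counter()
    for sentence in corpus:
        for hypernym, hyponym in PATTERN.findall(sentence):
            counts[(hyponym, hypernym)] += 1
    # Each triple: (candidate hyponym, candidate hypernym, observed frequency).
    return [(hypo, hyper, n) for (hypo, hyper), n in counts.items()]

corpus = ["animals such as dogs are loyal",
          "animals such as dogs can be trained",
          "fruit such as apples ripen quickly"]
print(triples_from_corpus(corpus))  # [('dogs', 'animals', 2), ('apples', 'fruit', 1)]
```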
-
Patent number: 11687728
Abstract: A text sentiment analysis method based on multi-level graph pooling includes steps of: preprocessing a target text; taking collocate pointwise mutual information between word nodes as an edge weight between the word nodes, and building a graph for each text independently; constructing a multi-level graph pooling model, of which a gated graph neural network layer transfers low-level information, a first graph self-attention pooling layer performs an initial graph pooling operation and uses a Readout function to extract low-level features, and a second graph self-attention pooling layer performs a graph pooling operation again, performs a pruning update on the graph structure by calculating attention scores of nodes in the graph, and uses a Readout function to extract high-level features; obtaining a multi-level final vector representation through a feature fusion function; and selecting the sentiment category corresponding to the maximum probability value as the final sentiment category output of the text.
Type: Grant
Filed: June 21, 2022
Date of Patent: June 27, 2023
Assignee: JINAN UNIVERSITY
Inventors: Feiran Huang, Guan Liu, Yuanchen Bei
-
Patent number: 11669950
Abstract: The analysis apparatus (2000) includes a co-appearance event extraction unit (2020) and a frequent event detection unit (2040). The co-appearance event extraction unit (2020) extracts co-appearance events of two or more persons from each of a plurality of sub video frame sequences. The sub video frame sequence is included in a video frame sequence. The analysis apparatus (2000) may obtain the plurality of sub video frame sequences from one or more of the video frame sequences. The one or more of the video frame sequences may be generated by one or more of the surveillance cameras. Each of the sub video frame sequences has a predetermined time length. The frequent event detection unit (2040) detects co-appearance events of the same persons occurring at a frequency higher than or equal to a predetermined frequency threshold.
Type: Grant
Filed: November 11, 2020
Date of Patent: June 6, 2023
Assignee: NEC CORPORATION
Inventors: Jianquan Liu, Ka Wai Yung
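The two units described above can be sketched as a pair of functions: one extracts person-pair co-appearance events from fixed-length sub-sequences, the other keeps pairs whose count meets the frequency threshold. Per-frame person IDs are assumed to come from an upstream detector (not shown), and the data shapes here are illustrative.

```python
from collections import Counter
from itertools import combinations

def co_appearance_events(sub_sequences):
    # One event per person pair that appears together within a sub-sequence.
    events = []
    for frames in sub_sequences:          # frames: list of per-frame ID sets
        persons = set().union(*frames) if frames else set()
        events.extend(combinations(sorted(persons), 2))
    return events

def frequent_events(events, threshold):
    counts = Counter(events)
    return {pair for pair, n in counts.items() if n >= threshold}

# Three sub-sequences of detected person IDs per frame.
subs = [[{"A", "B"}, {"A"}], [{"A", "B", "C"}], [{"B", "C"}]]
print(frequent_events(co_appearance_events(subs), threshold=2))
```

With the sample data, ("A", "B") and ("B", "C") each co-appear in two sub-sequences and clear the threshold, while ("A", "C") co-appears only once.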
-
Patent number: 11663466
Abstract: A method for generating a dual-class dataset is disclosed. A single-class dataset and a context dataset are obtained. The context dataset can be labeled. A model can be trained using the combination of the single-class dataset and the labeled context dataset. The model can be run on the context dataset. The data points that are classified the same as the data points included in the single-class dataset can be removed from the labeled context dataset and added to the single-class dataset. These steps can be repeated until no data points are classified by the model.
Type: Grant
Filed: November 18, 2019
Date of Patent: May 30, 2023
Assignee: CAPITAL ONE SERVICES, LLC
Inventors: Fardin Abdi Taghi Abad, Reza Farivar, Vincent Pham, Kenneth Taylor, Mark Watson, Jeremy Goodsitt, Austin Walters, Anh Truong
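The iterative loop in this abstract — train on the combined data, reclassify the context set, migrate matching points, repeat until nothing moves — can be sketched as follows. The trivial threshold "model" is a stand-in for whatever model is actually trained; the numeric data points are assumptions for illustration.

```python
def train(positives, context):
    # Stand-in training: the target class is "values >= mean of positives".
    cutoff = sum(positives) / len(positives)
    return lambda x: x >= cutoff

def grow_single_class(single, context):
    while True:
        model = train(single, context)
        moved = [x for x in context if model(x)]
        if not moved:                 # stop when nothing is reclassified
            return single, context
        single += moved               # promote to the single-class dataset
        context = [x for x in context if not model(x)]

single, context = grow_single_class([10, 12], [1, 2, 9, 11])
print(single, context)  # [10, 12, 11] [1, 2, 9]
```

On the sample data the first pass promotes 11 (above the mean of 10 and 12); the second pass moves nothing, so the loop terminates with the remaining context points as the second class.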
-
Patent number: 11665427
Abstract: Techniques are disclosed for managing image capture and processing in a multi-camera imaging system. In such a system, a pair of cameras each may output a sequence of frames representing captured image data. The cameras' output may be synchronized to each other to cause synchronism in the image capture operations of the cameras. The system may assess image quality of frames output from the cameras and, based on the image quality, designate a pair of the frames to serve as a "reference frame pair." Thus, one frame from the first camera and a paired frame from the second camera will be designated as the reference frame pair. The system may adjust each reference frame in the pair using other frames from their respective cameras. The reference frames also may be processed by other operations within the system, such as image fusion.
Type: Grant
Filed: August 11, 2020
Date of Patent: May 30, 2023
Assignee: APPLE INC.
Inventors: Paul M. Hubel, Marius Tico, Ting Chen
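The reference-pair selection step can be sketched as scoring each synchronized frame pair with a quality metric and keeping the best-scoring pair. The mock frames and the additive sharpness score are assumptions; the patent does not specify the metric.

```python
def select_reference_pair(frames_a, frames_b, quality):
    # Frames are paired by index because the cameras are synchronized.
    best = max(range(len(frames_a)),
               key=lambda i: quality(frames_a[i]) + quality(frames_b[i]))
    return frames_a[best], frames_b[best]

# Mock frames: (name, sharpness); quality just reads the stored sharpness.
cam_a = [("a0", 0.4), ("a1", 0.9), ("a2", 0.7)]
cam_b = [("b0", 0.5), ("b1", 0.8), ("b2", 0.6)]
ref_a, ref_b = select_reference_pair(cam_a, cam_b, quality=lambda f: f[1])
print(ref_a[0], ref_b[0])  # a1 b1 -- the pair with the highest joint quality
```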
-
Patent number: 11657223
Abstract: A system for extracting a key phrase from a document includes a neural key phrase extraction model ("BLING-KPE") having a first layer to extract a word sequence from the document, a second layer to represent each word in the word sequence by ELMo embedding, position embedding, and visual features, and a third layer to concatenate the ELMo embedding, the position embedding, and the visual features to produce hybrid word embeddings. A convolutional transformer models the hybrid word embeddings to n-gram embeddings, and a feedforward layer converts the n-gram embeddings into a probability distribution over a set of n-grams and calculates a key phrase score of each n-gram. The neural key phrase extraction model is trained on annotated data based on a labeled loss function to compute cross entropy loss of the key phrase score of each n-gram as compared with a label from the annotated dataset.
Type: Grant
Filed: December 16, 2021
Date of Patent: May 23, 2023
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Li Xiong, Chuan Hu, Arnold Overwijk, Junaid Ahmed, Daniel Fernando Campos, Chenyan Xiong
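The final feedforward stage — converting raw n-gram scores into a probability distribution and picking the top key phrase — can be sketched with a plain softmax. The raw scores below are mock values standing in for the convolutional transformer's output, not anything from the patent.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a dict of raw n-gram scores.
    m = max(scores.values())
    exp = {k: math.exp(v - m) for k, v in scores.items()}
    total = sum(exp.values())
    return {k: v / total for k, v in exp.items()}

ngram_scores = {"neural network": 2.0, "key phrase": 3.0, "the document": 0.5}
probs = softmax(ngram_scores)
best = max(probs, key=probs.get)
print(best)  # key phrase
```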
-
Patent number: 11648480
Abstract: Systems and methods are provided for enhanced pose generation based on generative modeling. An example method includes accessing an autoencoder trained based on poses of real-world persons, each pose being defined based on location information associated with joints, with the autoencoder being trained to map an input pose to a feature encoding associated with a latent feature space. Information identifying, at least, a first pose and a second pose associated with a character configured for inclusion in an in-game world is obtained via user input, with each of the poses being defined based on location information associated with the joints and with the joints being included on a skeleton associated with the character. Feature encodings associated with the first pose and the second pose are generated based on the autoencoder. Output poses are generated based on transition information associated with the first pose and the second pose.
Type: Grant
Filed: April 6, 2020
Date of Patent: May 16, 2023
Assignee: Electronic Arts Inc.
Inventor: Elaheh Akhoundi
-
Patent number: 11650164
Abstract: An artificial neural network-based method for selecting a surface type of an object includes receiving at least one object image, performing surface type identification on each of the at least one object image by using a first predictive model to categorize the object image to one of a first normal group and a first abnormal group, and performing surface type identification on each output image in the first normal group by using a second predictive model to categorize the output image to one of a second normal group and a second abnormal group.
Type: Grant
Filed: April 14, 2020
Date of Patent: May 16, 2023
Assignee: GETAC TECHNOLOGY CORPORATION
Inventors: Kun-Yu Tsai, Po-Yu Yang
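The two-stage cascade in this abstract — a second model re-screening only the images the first model passed as normal — can be sketched as below. Both "models" and the numeric defect scores are stand-ins for the trained networks.

```python
def cascade(images, model_1, model_2):
    first_normal = [x for x in images if model_1(x)]
    abnormal = [x for x in images if not model_1(x)]
    # The second pass runs only on the first stage's normal group.
    second_normal = [x for x in first_normal if model_2(x)]
    abnormal += [x for x in first_normal if not model_2(x)]
    return second_normal, abnormal

# Mock surface-defect scores; a model flags "normal" below its threshold.
images = [0.1, 0.4, 0.8, 0.2]
normal, abnormal = cascade(images,
                           model_1=lambda s: s < 0.7,
                           model_2=lambda s: s < 0.3)
print(normal, abnormal)  # [0.1, 0.2] [0.8, 0.4]
```

Cascading a stricter second model over the first model's normal group is a common way to trade a cheap coarse screen for a more precise final decision.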
-
Patent number: 11651765
Abstract: Techniques and apparatuses for recognizing accented speech are described. In some embodiments, an accent module recognizes accented speech using an accent library based on device data, uses different speech recognition correction levels based on an application field into which recognized words are set to be provided, or updates an accent library based on corrections made to incorrectly recognized speech.
Type: Grant
Filed: October 14, 2020
Date of Patent: May 16, 2023
Assignee: Google Technology Holdings LLC
Inventor: Kristin A. Gray
-
Patent number: 11645758
Abstract: In an example, a digital image comprising a representation of multiple physical objects is received at a client computer. The digital image is copied into a temporary canvas. The digital image is then analyzed to identify a plurality of potential object areas, each of the potential object areas having pixels with colors similar to the other pixels within the potential object area. A minimum bounding region for each of the identified potential object areas is identified, the minimum bounding region being the smallest region of a particular shape that bounds the corresponding potential object area. The pixels within a selected minimum bounding region are cropped from the digital image. The pixels within the selected minimum bounding region are then sent to an object recognition service on a server to identify an object represented by the pixels within the selected minimum bounding region.
Type: Grant
Filed: October 30, 2020
Date of Patent: May 9, 2023
Assignee: eBay Inc.
Inventors: Yoni Medoff, Siddharth Sakhadeo, Deepu Joseph
-
Patent number: 11646014
Abstract: An ensemble of machine learning models used for real-time prediction of text for an electronic chat with an expert user. A global machine learning model, e.g., a transformer model, trained with domain-specific knowledge makes a domain-specific generalized prediction. Another machine learning model, e.g., an n-gram model, learns the specific style of the expert user as the expert user types to generate more natural, more expert-user-specific text. If specific words cannot be predicted with a desired probability level, another word-level machine learning model, e.g., a word completion model, completes the words as the characters are being typed. The ensemble therefore produces real-time, natural, and accurate text that is provided to the expert user. Continuous feedback of the acceptance/rejection of predictions by the expert is used to fine-tune one or more machine learning models of the ensemble in real time.
Type: Grant
Filed: July 25, 2022
Date of Patent: May 9, 2023
Assignee: INTUIT INC.
Inventors: Shrutendra Harsola, Sourav Prosad, Viswa Datha Polavarapu
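The fallback chain described above — try the whole-word predictors, then drop to character-level word completion when no prediction clears the confidence threshold — can be sketched as follows. All three predictors are mock stand-ins for the trained models, and the ordering (user-style model before global model) is an assumption for illustration.

```python
def predict(prefix, user_model, global_model, completer, threshold=0.5):
    # Try each whole-word model in turn; accept the first confident result.
    for model in (user_model, global_model):
        word, prob = model(prefix)
        if prob >= threshold:
            return word
    return completer(prefix)   # character-level word completion fallback

user_model = lambda p: ("thanks!", 0.3)        # mock: low confidence
global_model = lambda p: ("thank you", 0.7)    # mock: clears the threshold
completer = lambda p: p + "nk"                 # mock word completion
print(predict("tha", user_model, global_model, completer))  # thank you
```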
-
Patent number: 11640527
Abstract: Systems and methods are provided for a near-zero-cost (NZC) query framework or approach for differentially private deep learning. To protect the privacy of training data during learning, the near-zero-cost query framework transfers knowledge from an ensemble of teacher models trained on partitions of the data to a student model. Privacy guarantees may be understood intuitively and expressed rigorously in terms of differential privacy. Other features are also provided.
Type: Grant
Filed: October 21, 2019
Date of Patent: May 2, 2023
Assignee: Salesforce.com, Inc.
Inventors: Lichao Sun, Jia Li, Caiming Xiong, Yingbo Zhou
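The teacher-to-student knowledge transfer in this family of approaches is typically done via a noisy vote over the teachers' predictions; a sketch of that aggregation step is below. The Gaussian noise and the voting details are generic illustrations, not the patent's specific mechanism.

```python
import random
from collections import Counter

def noisy_aggregate(teacher_votes, scale=1.0, rng=None):
    rng = rng or random.Random(0)
    counts = Counter(teacher_votes)
    # Add Gaussian noise to each label's vote count before taking the max,
    # so the student's label leaks less about any one teacher's partition.
    noisy = {label: n + rng.gauss(0, scale) for label, n in counts.items()}
    return max(noisy, key=noisy.get)

votes = ["cat"] * 8 + ["dog"] * 2   # 10 teachers trained on data partitions
print(noisy_aggregate(votes))       # cat (the clear plurality survives noise)
```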
-
Patent number: 11638569
Abstract: Systems and methods for detecting placement of an object in a digital image are provided. The system receives a digital image and processes the digital image to generate one or more candidate regions within the digital image using a first neural network. The system then selects a proposed region from the one or more candidate regions using the first neural network and assigns a score to the proposed region using the first neural network. Lastly, the system processes the proposed region using a second neural network to detect an object in the proposed region.
Type: Grant
Filed: June 10, 2019
Date of Patent: May 2, 2023
Assignee: Rutgers, The State University of New Jersey
Inventors: Cosmas Mwikirize, John L. Nosher, Ilker Hacihaliloglu
-
Patent number: 11640233
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for extracting text from an input document to generate one or more inference boxes. Each inference box may be input into a machine learning network trained on training labels. Each training label provides a human-augmented version of output from a separate machine translation engine. A first translation may be generated by the machine learning network. The first translation may be displayed in a user interface with respect to display of an original version of the input document and a translated version of a portion of the input document.
Type: Grant
Filed: January 3, 2022
Date of Patent: May 2, 2023
Assignee: Vannevar Labs, Inc.
Inventors: Daniel Goodman, Nathanial Hartman, Nathaniel Bush, Brett Granberg