Patents by Inventor Ajinkya Kale

Ajinkya Kale has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11928418
    Abstract: Embodiments provide systems, methods, and computer storage media for text style suggestions and/or text emphasis suggestions. In an example embodiment, an electronic design application provides a text style suggestion tool that generates text style suggestions to stylize a selected text element based on the context of the design. A text emphasis tool allows a user to select a text element and generate text emphasis suggestions for which words should be emphasized with a different text styling. Various interaction elements allow the user to iterate through the suggestions. For example, a set of style suggestions may be mapped to successive rotational increments around a style wheel, and as the user rotates through the positions on the style wheel, a corresponding text style suggestion is previewed and/or applied.
    Type: Grant
    Filed: June 8, 2022
    Date of Patent: March 12, 2024
    Assignee: ADOBE INC.
    Inventors: William Frederick Kraus, Nathaniel Joseph Grabaskas, Ajinkya Kale
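The style-wheel interaction described in the abstract above can be sketched as a simple mapping from rotation angle to suggestion. This is a minimal illustration, not the patented implementation; the function name and the equal-arc scheme are assumptions:

```python
def style_for_angle(style_suggestions, angle_degrees):
    """Map a rotation angle on a style wheel to a text style suggestion.

    Each suggestion owns an equal arc of the wheel; rotating into an
    arc previews that suggestion. (Equal arcs are an illustrative
    assumption, not a detail taken from the patent.)
    """
    arc = 360.0 / len(style_suggestions)
    index = int((angle_degrees % 360.0) // arc)
    return style_suggestions[index]
```

With four suggestions, each arc spans 90 degrees, so rotating past each quarter turn previews the next style.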
  • Patent number: 11914641
    Abstract: The present disclosure describes systems and methods for information retrieval. Embodiments of the disclosure provide a color embedding network trained using machine learning techniques to generate embedded color representations for color terms included in a text search query. For example, techniques described herein are used to represent color text in a same space as color embeddings (e.g., an embedding space created by determining a histogram of LAB based colors in a three-dimensional (3D) space). Further, techniques are described for indexing color palettes for all the searchable images in the search space. Accordingly, color terms in a text query are directly converted into a color palette and an image search system can return one or more search images with corresponding color palettes that are relevant to (e.g., within a threshold distance from) the color palette of the text query.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: February 27, 2024
    Assignee: ADOBE INC.
    Inventors: Pranav Aggarwal, Ajinkya Kale, Baldo Faieta, Saeid Motiian, Venkata Naveen Kumar Yadav Marri
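The LAB-histogram palette idea in the abstract above can be sketched as a coarse 3-D histogram over LAB space, flattened into a palette vector that supports distance comparisons. The bin count and function names are illustrative assumptions, not the patent's actual parameters:

```python
import numpy as np

def lab_histogram_palette(lab_pixels, bins=4):
    """Build a normalized palette vector from LAB pixel values by
    histogramming them over a coarse 3-D grid of LAB space.
    (Bin count and value ranges are illustrative.)"""
    hist, _ = np.histogramdd(
        np.asarray(lab_pixels, dtype=float),
        bins=(bins, bins, bins),
        range=((0, 100), (-128, 128), (-128, 128)),
    )
    vec = hist.flatten()
    return vec / max(vec.sum(), 1.0)  # normalize so palettes are comparable

def palette_distance(p, q):
    """Distance between two palette vectors; search images whose
    palettes fall within a threshold of the query's palette match."""
    return float(np.linalg.norm(p - q))
```

A text query's color terms would be converted to such a palette and compared against the indexed palettes of the searchable images.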
  • Patent number: 11907280
    Abstract: Embodiments of the technology described herein, provide improved visual search results by combining a visual similarity and a textual similarity between images. In an embodiment, the visual similarity is quantified as a visual similarity score and the textual similarity is quantified as a textual similarity score. The textual similarity is determined based on text, such as a title, associated with the image. The overall similarity of two images is quantified as a weighted combination of the textual similarity score and the visual similarity score. In an embodiment, the weighting between the textual similarity score and the visual similarity score is user configurable through a control on the search interface. In one embodiment, the aggregate similarity score is the sum of a weighted visual similarity score and a weighted textual similarity score.
    Type: Grant
    Filed: November 5, 2020
    Date of Patent: February 20, 2024
    Assignee: Adobe Inc.
    Inventors: Mikhail Kotov, Roland Geisler, Saeid Motiian, Dylan Nathaniel Warnock, Michele Saad, Venkata Naveen Kumar Yadav Marri, Ajinkya Kale, Ryan Rozich, Baldo Faieta
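The weighted combination described in the abstract above reduces to a simple formula. This is a minimal sketch with assumed function names; the slider semantics (1.0 = purely visual, 0.0 = purely textual) are an illustrative convention:

```python
def aggregate_similarity(visual_score, textual_score, weight=0.5):
    """Aggregate similarity as a weighted sum of the visual and
    textual similarity scores; `weight` models the user-configurable
    control on the search interface."""
    return weight * visual_score + (1.0 - weight) * textual_score

def rank_images(candidates, weight=0.5):
    """Rank candidate images by aggregate similarity.

    candidates: list of (image_id, visual_score, textual_score).
    """
    scored = [(img, aggregate_similarity(v, t, weight)) for img, v, t in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

Moving the weight toward either extreme lets the user bias results toward visual look-alikes or toward images with related titles.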
  • Patent number: 11816162
    Abstract: Systems and methods are disclosed for search query language identification. One method comprises generating a seed dictionary comprising a plurality of labeled dictionary terms and receiving a plurality of unlabeled sample query terms. The plurality of unlabeled sample query terms are compared to the plurality of labeled dictionary terms at a first time, and a first set of labeled sample query terms are generated by labeling at least a subset of the plurality of unlabeled sample query terms based on the first comparison. Remaining unlabeled sample query terms are then compared with the first set of labeled sample query terms at a second time, and a second set of labeled sample query terms are generated by labeling the remaining unlabeled sample query terms based on the second comparison. The first and second sets of labeled sample query terms are provided to a machine learning model configured for query language prediction.
    Type: Grant
    Filed: July 31, 2020
    Date of Patent: November 14, 2023
    Assignee: Adobe Inc.
    Inventors: Ritiz Tambi, Ajinkya Kale, Tracy Holloway King
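The two-pass bootstrapping in the abstract above can be sketched as follows. The comparison method here (shared-term overlap) is a stand-in assumption; the patent only specifies that unlabeled queries are compared first against the seed dictionary and then against the newly labeled queries:

```python
def bootstrap_labels(seed_dictionary, sample_queries):
    """Two-pass self-labeling sketch for query language identification.

    seed_dictionary: dict mapping a term to a language label.
    Pass 1: label queries containing seed-dictionary terms.
    Pass 2: label the remaining queries by comparing them to the
    queries labeled in pass 1 (here, via shared-term overlap).
    """
    labeled, unlabeled = {}, []
    for query in sample_queries:
        hits = [seed_dictionary[t] for t in query.split() if t in seed_dictionary]
        if hits:
            labeled[query] = max(set(hits), key=hits.count)  # majority label
        else:
            unlabeled.append(query)
    for query in unlabeled:
        terms = set(query.split())
        best, best_overlap = None, 0
        for known, label in labeled.items():
            overlap = len(terms & set(known.split()))
            if overlap > best_overlap:
                best, best_overlap = label, overlap
        if best is not None:
            labeled[query] = best
    return labeled  # training data for a downstream language classifier
```

The resulting labeled queries would then be fed to a machine learning model trained for query language prediction.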
  • Publication number: 20230315988
    Abstract: Disclosed are computer-implemented methods and systems for generating text descriptive of digital images, comprising using a machine learning model to pre-process an image to generate initial text descriptive of the image; adjusting one or more inferences of the machine learning model, the inferences biasing the machine learning model away from associating negative words with the image; using the machine learning model comprising the adjusted inferences to post-process the image to generate updated text descriptive of the image; and processing the generated updated text descriptive of the image outputted by the machine learning model to fine-tune the updated text descriptive of the image.
    Type: Application
    Filed: May 10, 2023
    Publication date: October 5, 2023
    Applicant: Adobe Inc.
    Inventors: Pranav Aggarwal, Di Pu, Daniel ReMine, Ajinkya Kale
  • Patent number: 11756239
    Abstract: Systems and methods for color replacement are described. Embodiments of the disclosure include a color replacement system that adjusts an image based on a user-input source color and target color. For example, the source color may be replaced with the target color throughout the entire image. In some embodiments, a user provides a speech or text input that identifies a source color to be replaced. The user may then provide a speech or text input identifying the target color, replacing the source color. A color replacement system creates an embedding of the source color, segments the image based on the source color embedding, and then replaces the color of the segmented portion of the image with the target color.
    Type: Grant
    Filed: April 26, 2021
    Date of Patent: September 12, 2023
    Assignee: Adobe Inc.
    Inventors: Pranav Aggarwal, Ajinkya Kale
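The segment-then-replace pipeline in the abstract above can be sketched with a crude proxy. Here plain RGB distance stands in for the patent's learned color embedding, and the threshold is an illustrative assumption:

```python
import numpy as np

def replace_color(image, source_rgb, target_rgb, threshold=60.0):
    """Hypothetical sketch: segment pixels near the source color
    (Euclidean distance in RGB; the patent uses a learned color
    embedding instead) and paint the segmented region with the
    target color."""
    img = np.asarray(image, dtype=float)
    dist = np.linalg.norm(img - np.asarray(source_rgb, dtype=float), axis=-1)
    mask = dist < threshold            # segmented source-color region
    out = img.copy()
    out[mask] = target_rgb             # recolor only the segmented pixels
    return out.astype(np.uint8), mask
```

In the described system, the source and target colors would come from the user's speech or text input rather than hard-coded RGB tuples.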
  • Patent number: 11741157
    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for determining multi-term contextual tags for digital content and propagating the multi-term contextual tags to additional digital content. For instance, the disclosed systems can utilize search query supervision to determine and associate multi-term contextual tags (e.g., tags that represent a specific concept based on the order of the terms in the tag) with digital content. Furthermore, the disclosed systems can propagate the multi-term contextual tags determined for the digital content to additional digital content based on similarities between the digital content and additional digital content (e.g., utilizing clustering techniques). Additionally, the disclosed systems can provide digital content as search results based on the associated multi-term contextual tags.
    Type: Grant
    Filed: December 7, 2021
    Date of Patent: August 29, 2023
    Assignee: Adobe Inc.
    Inventors: Ajinkya Kale, Baldo Faieta, Benjamin Leviant, Fengbin Chen, Francois Guerin, Kate Sousa, Trung Bui, Venkat Barakam, Zhe Lin
  • Patent number: 11734339
    Abstract: The present disclosure relates to methods, systems, and non-transitory computer-readable media for retrieving digital images in response to queries. For example, in one or more embodiments, the disclosed systems receive a query comprising text and generate a cross-lingual-multimodal embedding for the text within a multimodal embedding space. The disclosed systems further identify an image embedding for a digital image that corresponds to (e.g., is relevant to) the text from the query based on an embedding distance between the image embedding and the cross-lingual-multimodal embedding for the text within the multimodal embedding space. Accordingly, the disclosed systems retrieve the digital image associated with the image embedding for display on a client device, such as the client device that submitted the query.
    Type: Grant
    Filed: October 20, 2020
    Date of Patent: August 22, 2023
    Assignee: Adobe Inc.
    Inventors: Ajinkya Kale, Zhe Lin, Pranav Aggarwal
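The distance-based retrieval step in the abstract above can be sketched as a nearest-neighbor lookup in the shared embedding space. The function name and brute-force scan are illustrative assumptions (a production system would use an approximate nearest-neighbor index):

```python
import numpy as np

def retrieve(query_embedding, image_embeddings, k=1):
    """Return the ids of the k images whose embeddings lie closest
    (by Euclidean distance) to the query's cross-lingual-multimodal
    embedding in the shared multimodal space."""
    q = np.asarray(query_embedding, dtype=float)
    ids, dists = [], []
    for image_id, emb in image_embeddings.items():
        ids.append(image_id)
        dists.append(np.linalg.norm(q - np.asarray(emb, dtype=float)))
    order = np.argsort(dists)[:k]      # k smallest distances first
    return [ids[i] for i in order]
```

Because the text embedding is cross-lingual, the same lookup works regardless of the query's language.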
  • Publication number: 20230206525
    Abstract: A non-transitory computer-readable medium includes program code that is stored thereon. The program code is executable by one or more processing devices for performing operations including generating, using a model, a learned image representation of a target image. The operations further include generating, using a text embedding model, a text embedding of a text query. The text embedding and the learned image representation of the target image are in a same embedding space. Additionally, the operations include convolving the learned image representation of the target image with the text embedding of the text query. Moreover, the operations include generating an object-segmented image based on the convolving of the learned image representation of the target image with the text embedding.
    Type: Application
    Filed: March 3, 2023
    Publication date: June 29, 2023
    Inventors: Midhun Harikumar, Pranav Aggarwal, Baldo Faieta, Ajinkya Kale, Zhe Lin
  • Patent number: 11687714
    Abstract: Disclosed are computer-implemented methods and systems for generating text descriptive of digital images, comprising using a machine learning model to pre-process an image to generate initial text descriptive of the image; adjusting one or more inferences of the machine learning model, the inferences biasing the machine learning model away from associating negative words with the image; using the machine learning model comprising the adjusted inferences to post-process the image to generate updated text descriptive of the image; and processing the generated updated text descriptive of the image outputted by the machine learning model to fine-tune the updated text descriptive of the image.
    Type: Grant
    Filed: August 20, 2020
    Date of Patent: June 27, 2023
    Assignee: Adobe Inc.
    Inventors: Pranav Aggarwal, Di Pu, Daniel ReMine, Ajinkya Kale
  • Patent number: 11681737
    Abstract: The present disclosure relates to a retrieval method including: generating a graph representing a set of users, items, and queries; generating clusters from the items; generating embeddings for each cluster from embeddings of the items within the corresponding cluster; generating augmented query embeddings for each cluster from the embedding of the corresponding cluster and query embeddings of the queries; inputting the cluster embeddings and the augmented query embeddings to a layer of a graph convolutional network (GCN) to determine user embeddings of the users; inputting the embedding of a given user and a query embedding of a given query to a layer of the GCN to determine a user-specific query embedding; generating a score for each of the items based on the item embeddings and the user-specific query embedding; and presenting the items whose scores exceed a threshold.
    Type: Grant
    Filed: April 8, 2020
    Date of Patent: June 20, 2023
    Assignee: ADOBE INC.
    Inventors: Handong Zhao, Ajinkya Kale, Xiaowei Jia, Zhe Lin
  • Patent number: 11645478
    Abstract: Introduced here is an approach to translating tags assigned to digital images. As an example, a multimodal model may extract embeddings from a tag to be translated and from the digital image with which the tag is associated. These embeddings can be compared to embeddings that the multimodal model extracts from a set of target tags associated with a target language. Such an approach allows similarity to be established along two dimensions, avoiding the obstacles associated with direct translation.
    Type: Grant
    Filed: November 4, 2020
    Date of Patent: May 9, 2023
    Assignee: Adobe Inc.
    Inventors: Ritiz Tambi, Pranav Aggarwal, Ajinkya Kale
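The two-dimensional similarity in the abstract above can be sketched by scoring each target-language tag against both the source tag's embedding and the image's embedding. The `alpha` balance knob and cosine similarity are illustrative assumptions:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def translate_tag(tag_emb, image_emb, target_tags, alpha=0.5):
    """Pick the target-language tag most similar along both
    dimensions: to the source tag's embedding and to the image's
    embedding. `alpha` balances the two (an illustrative knob)."""
    best, best_score = None, -2.0
    for target, emb in target_tags.items():
        score = alpha * cosine(tag_emb, emb) + (1 - alpha) * cosine(image_emb, emb)
        if score > best_score:
            best, best_score = target, score
    return best
```

Tying the choice to the image as well as the tag is what lets the approach sidestep ambiguous word-for-word translations.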
  • Publication number: 20230133583
    Abstract: Techniques and systems are described for performing semantic text searches. A semantic text-searching solution uses a machine learning system (such as a deep learning system) to determine associations between the semantic meanings of words. These associations are not limited by the spelling, syntax, grammar, or even definition of words. Instead, the associations can be based on the context in which characters, words, and/or phrases are used in relation to one another. In response to detecting a request to locate text within an electronic document associated with a keyword, the semantic text-searching solution can return strings within the document that have matching and/or related semantic meanings or contexts, in addition to exact matches (e.g., string matches) within the document. The semantic text-searching solution can then output an indication of the matching strings.
    Type: Application
    Filed: December 29, 2022
    Publication date: May 4, 2023
    Applicant: Adobe Inc.
    Inventors: Trung Bui, Yu Gong, Tushar Dublish, Sasha Spala, Sachin Soni, Nicholas Miller, Joon Kim, Franck Dernoncourt, Carl Dockhorn, Ajinkya Kale
  • Patent number: 11615567
    Abstract: A non-transitory computer-readable medium includes program code that is stored thereon. The program code is executable by one or more processing devices for performing operations including generating, by a model that includes trainable components, a learned image representation of a target image. The operations further include generating, by a text embedding model, a text embedding of a text query. The text embedding and the learned image representation of the target image are in a same embedding space. Additionally, the operations include generating a class activation map of the target image by, at least, convolving the learned image representation of the target image with the text embedding of the text query. Moreover, the operations include generating an object-segmented image using the class activation map of the target image.
    Type: Grant
    Filed: November 18, 2020
    Date of Patent: March 28, 2023
    Assignee: Adobe Inc.
    Inventors: Midhun Harikumar, Pranav Aggarwal, Baldo Faieta, Ajinkya Kale, Zhe Lin
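The convolution step in the abstract above amounts to a 1x1 convolution of the spatial image representation with the text embedding, since both live in the same embedding space. A minimal sketch, with assumed names and a thresholding step for the segmentation:

```python
import numpy as np

def class_activation_map(feature_map, text_embedding):
    """1x1 convolution of an HxWxC image representation with a
    C-dimensional text embedding: each spatial cell's activation is
    its dot product with the text vector, so the map lights up where
    the queried concept appears."""
    fmap = np.asarray(feature_map, dtype=float)     # (H, W, C)
    text = np.asarray(text_embedding, dtype=float)  # (C,)
    cam = fmap @ text                               # (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()                            # normalize to [0, 1]
    return cam

def segment(cam, threshold=0.5):
    """Boolean object mask from the class activation map."""
    return cam >= threshold
```

The object-segmented image would then be produced by applying the mask to the original target image.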
  • Patent number: 11615239
    Abstract: The present disclosure relates to systems for identifying instances of natural language input, determining intent classifications associated with instances of natural language input, and generating responses based on the determined intent classifications. In particular, the disclosed systems intelligently identify and group instances of natural language input based on characteristics of the user input. Additionally, the disclosed systems determine intent classifications for the instances of natural language input based on message queuing in order to delay responses to the user input in ways that increase accuracy of the responses, while retaining a conversational aspect of the ongoing chat. Moreover, in one or more embodiments, the disclosed systems generate responses utilizing natural language.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: March 28, 2023
    Assignee: Adobe Inc.
    Inventors: Oliver Brdiczka, Ajinkya Kale, Piyush Chandra, Tracy King, Abhishek Gupta, Sourabh Goel, Nitin Garg, Deepika Naryani, Feroz Ahmad, Vikas Sagar
  • Patent number: 11567981
    Abstract: Techniques and systems are described for performing semantic text searches. A semantic text-searching solution uses a machine learning system (such as a deep learning system) to determine associations between the semantic meanings of words. These associations are not limited by the spelling, syntax, grammar, or even definition of words. Instead, the associations can be based on the context in which characters, words, and/or phrases are used in relation to one another. In response to detecting a request to locate text within an electronic document associated with a keyword, the semantic text-searching solution can return strings within the document that have matching and/or related semantic meanings or contexts, in addition to exact matches (e.g., string matches) within the document. The semantic text-searching solution can then output an indication of the matching strings.
    Type: Grant
    Filed: April 15, 2020
    Date of Patent: January 31, 2023
    Assignee: Adobe Inc.
    Inventors: Trung Bui, Yu Gong, Tushar Dublish, Sasha Spala, Sachin Soni, Nicholas Miller, Joon Kim, Franck Dernoncourt, Carl Dockhorn, Ajinkya Kale
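The matching step in the abstract above can be sketched as a cosine-similarity threshold over string embeddings, so that strings with related meanings match even without an exact string match. The threshold value and function names are illustrative assumptions:

```python
import numpy as np

def semantic_search(keyword_vec, string_vecs, threshold=0.7):
    """Return document strings whose embeddings' cosine similarity
    to the keyword embedding meets a threshold, capturing related
    semantic meanings in addition to exact matches."""
    k = np.asarray(keyword_vec, dtype=float)
    k = k / np.linalg.norm(k)
    hits = []
    for text, vec in string_vecs.items():
        v = np.asarray(vec, dtype=float)
        v = v / np.linalg.norm(v)
        if float(k @ v) >= threshold:
            hits.append(text)
    return hits
```

In the described solution, the embeddings themselves would come from a deep learning system trained on words in context rather than from spelling or definitions.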
  • Publication number: 20220343561
    Abstract: Systems and methods for color replacement are described. Embodiments of the disclosure include a color replacement system that adjusts an image based on a user-input source color and target color. For example, the source color may be replaced with the target color throughout the entire image. In some embodiments, a user provides a speech or text input that identifies a source color to be replaced. The user may then provide a speech or text input identifying the target color, replacing the source color. A color replacement system creates an embedding of the source color, segments the image based on the source color embedding, and then replaces the color of the segmented portion of the image with the target color.
    Type: Application
    Filed: April 26, 2021
    Publication date: October 27, 2022
    Inventors: Pranav Aggarwal, Ajinkya Kale
  • Publication number: 20220300696
    Abstract: Embodiments provide systems, methods, and computer storage media for text style suggestions and/or text emphasis suggestions. In an example embodiment, an electronic design application provides a text style suggestion tool that generates text style suggestions to stylize a selected text element based on the context of the design. A text emphasis tool allows a user to select a text element and generate text emphasis suggestions for which words should be emphasized with a different text styling. Various interaction elements allow the user to iterate through the suggestions. For example, a set of style suggestions may be mapped to successive rotational increments around a style wheel, and as the user rotates through the positions on the style wheel, a corresponding text style suggestion is previewed and/or applied.
    Type: Application
    Filed: June 8, 2022
    Publication date: September 22, 2022
    Inventors: William Frederick Kraus, Nathaniel Joseph Grabaskas, Ajinkya Kale
  • Publication number: 20220284321
    Abstract: Systems and methods for multi-modal representation learning are described. One or more embodiments provide a visual representation learning system trained using machine learning techniques. For example, some embodiments of the visual representation learning system are trained using cross-modal training tasks including a combination of intra-modal and inter-modal similarity preservation objectives. In some examples, the training tasks are based on contrastive learning techniques.
    Type: Application
    Filed: March 3, 2021
    Publication date: September 8, 2022
    Inventors: Xin Yuan, Zhe Lin, Jason Wen Yong Kuen, Jianming Zhang, Yilin Wang, Ajinkya Kale, Baldo Faieta
  • Publication number: 20220277039
    Abstract: The present disclosure describes systems and methods for information retrieval. Embodiments of the disclosure provide a color embedding network trained using machine learning techniques to generate embedded color representations for color terms included in a text search query. For example, techniques described herein are used to represent color text in a same space as color embeddings (e.g., an embedding space created by determining a histogram of LAB based colors in a three-dimensional (3D) space). Further, techniques are described for indexing color palettes for all the searchable images in the search space. Accordingly, color terms in a text query are directly converted into a color palette and an image search system can return one or more search images with corresponding color palettes that are relevant to (e.g., within a threshold distance from) the color palette of the text query.
    Type: Application
    Filed: February 26, 2021
    Publication date: September 1, 2022
    Inventors: Pranav Aggarwal, Ajinkya Kale, Baldo Faieta, Saeid Motiian, Venkata Naveen Kumar Yadav Marri