Patents by Inventor Sachin Soni

Sachin Soni has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11216998
    Abstract: In implementations of jointly editing related objects in a digital image, an image editing application segments a first object in a digital image, and an additional object corresponding to the first object, such as a shadow cast by the first object, a reflection of the first object, or an object of a same object class as the first object. Respective stick diagrams for the first object and the additional object are generated, and a mapping of the first object to the additional object is generated based on the stick diagrams. Based on a user request to edit the first object, such as to warp the first object, the first object and the additional object are jointly edited based on the mapping. Accordingly, realistic digital images are efficiently generated that maintain correspondence between related objects, without requiring a user to edit each object individually, thus saving time and resources.
    Type: Grant
    Filed: December 17, 2020
    Date of Patent: January 4, 2022
    Assignee: Adobe Inc.
    Inventors: Sanjeev Tagra, Sachin Soni, Prasenjit Mondal, Ajay Jain
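As a rough illustration of the joint-editing idea, the sketch below propagates a user's transform from an object to the mapped points of a related object (such as its shadow). All names are hypothetical and the propagation scheme is a simplification; the patent's stick-diagram-based mapping is considerably more involved.

```python
def jointly_edit(obj_pts, related_pts, mapping, transform):
    """Apply a user edit (transform) to the object's points, then propagate
    it to the related object's points through a precomputed mapping.
    mapping: list of (object_point_index, related_point_index) pairs."""
    new_obj = [transform(p) for p in obj_pts]
    new_related = list(related_pts)
    for oi, ri in mapping:
        # Simplified: apply the same transform to each mapped related point.
        new_related[ri] = transform(related_pts[ri])
    return new_obj, new_related
```

For example, shifting an object one unit right would shift the mapped points of its shadow by the same amount, keeping the two in correspondence.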
  • Patent number: 11201754
    Abstract: Techniques and systems for synchronized accessibility for client devices in an online conference are described. For example, a conferencing system receives presentation content and audio content as part of the online conference from a client device. The conferencing system generates sign language content by converting audio in the audio content to sign language. The conferencing system then synchronizes display of the sign language content with the presentation content in a user interface based on differences in durations of segments of the audio content from durations of corresponding segments of the sign language content. Then, the conferencing system outputs the sign language content as synchronized with the presentation content, such as to a viewer client device that requested the sign language content, or to storage for later access by viewers that request sign language content.
    Type: Grant
    Filed: October 3, 2019
    Date of Patent: December 14, 2021
    Assignee: Adobe Inc.
    Inventors: Sachin Soni, Ajay Jain
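One way to picture the duration-based synchronization is to hold each presentation segment for the longer of its audio and sign-language renditions, padding the shorter stream by the difference. This is a hypothetical sketch (the function name and the 1:1 segment pairing are assumptions, not the patent's method):

```python
def build_sync_schedule(audio_durs, sign_durs):
    """For each content segment, hold the presentation for the longer of the
    audio and sign-language renditions so the two streams stay aligned.
    Returns per-segment start times and padding for each stream (seconds)."""
    schedule = []
    t = 0.0
    for a, s in zip(audio_durs, sign_durs):
        seg_len = max(a, s)  # the shorter stream is padded to this length
        schedule.append({
            "start": t,
            "duration": seg_len,
            "pad_audio": seg_len - a,
            "pad_sign": seg_len - s,
        })
        t += seg_len
    return schedule
```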
  • Patent number: 11175807
    Abstract: Techniques are provided for customizing, based on a user's activity over time, the selection of a video thumbnail for inclusion as a selectable interface element or element of a graphical interface. A server computer identifies events, including a first event and a second event, associated with prior interactions of a user and computes a time-decayed metric based on the time of a predicted future action of the user in comparison to a respective time of each identified event. Based on the time-decayed metric, the server computer selects a video thumbnail that is more relevant to the first event than to the second event for presentation to the user.
    Type: Grant
    Filed: February 3, 2021
    Date of Patent: November 16, 2021
    Assignee: Adobe Inc.
    Inventors: Ajay Jain, Sanjeev Tagra, Sachin Soni, Eric Kienle
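The time-decayed metric can be pictured as an exponential down-weighting of each prior event by its distance from the predicted action time, so that a thumbnail tied to a recent event outscores one tied to an old event. The half-life decay below is an illustrative assumption; the patent does not commit to a particular decay function.

```python
def time_decayed_weight(event_time, predicted_action_time, half_life=7.0):
    """Weight an event by exponential decay of its age relative to the
    predicted future action time (all times in days)."""
    age = predicted_action_time - event_time
    return 0.5 ** (age / half_life)

def pick_thumbnail(candidates, predicted_action_time, half_life=7.0):
    """candidates: list of (thumbnail_id, associated_event_time).
    Return the thumbnail whose event carries the highest decayed weight."""
    best = max(candidates,
               key=lambda c: time_decayed_weight(c[1], predicted_action_time,
                                                 half_life))
    return best[0]
```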
  • Publication number: 20210334458
    Abstract: In implementations of systems for role classification, a computing device implements a role system to receive data describing a corpus of text that is associated with a user ID. Feature values of features are generated by a first machine learning model by processing the corpus of text, the features representing questions with respect to the corpus of text and the feature values representing answers to the questions included in the corpus of text. A classification of a role is generated by a second machine learning model by processing the feature values, the classification of the role indicating a relationship of the user ID with respect to a product or service. The role system outputs an indication of the classification of the role for display in a user interface of a display device.
    Type: Application
    Filed: April 27, 2020
    Publication date: October 28, 2021
    Applicant: Adobe Inc.
    Inventors: Ajay Jain, Sanjeev Tagra, Sachin Soni, Niranjan Shivanand Kumbi, Eric Andrew Kienle, Ajay Awatramani, Abhishek Jain
  • Publication number: 20210326371
    Abstract: Techniques and systems are described for performing semantic text searches. A semantic text-searching solution uses a machine learning system (such as a deep learning system) to determine associations between the semantic meanings of words. These associations are not limited by the spelling, syntax, grammar, or even definition of words. Instead, the associations can be based on the context in which characters, words, and/or phrases are used in relation to one another. In response to detecting a request to locate text within an electronic document associated with a keyword, the semantic text-searching solution can return strings within the document that have matching and/or related semantic meanings or contexts, in addition to exact matches (e.g., string matches) within the document. The semantic text-searching solution can then output an indication of the matching strings.
    Type: Application
    Filed: April 15, 2020
    Publication date: October 21, 2021
    Inventors: Trung Bui, Yu Gong, Tushar Dublish, Sasha Spala, Sachin Soni, Nicholas Miller, Joon Kim, Franck Dernoncourt, Carl Dockhorn, Ajinkya Kale
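A minimal sketch of embedding-based semantic matching, assuming a stand-in `embed` encoder in place of the patent's deep learning system (names and the similarity threshold are illustrative):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def semantic_search(query_vec, strings, embed, threshold=0.7):
    """Return strings whose embeddings are close to the query embedding,
    capturing related meanings in addition to exact string matches."""
    return [s for s in strings if cosine(query_vec, embed(s)) >= threshold]
```

With a toy embedding where "feline" lies near "cat" in vector space, a query for "cat" would also surface "feline" even though the strings do not match.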
  • Publication number: 20210303825
    Abstract: Methods and systems are provided for providing directional assistance to guide a user to position a camera for centering a person's face within the camera's field of view. A neural network system is trained to determine the position of the user's face relative to the center of the field of view as captured by an input image. The neural network system is trained using training input images that are generated by cropping different regions of initial training images. Each initial image is used to create a plurality of different training input images, and directional assistance labels used to train the network may be assigned to each training input image based on how the image is cropped. Once trained, the neural network system determines a position of the user's face, and automatically provides a non-visual prompt indicating how to center the face within the field of view.
    Type: Application
    Filed: June 11, 2021
    Publication date: September 30, 2021
    Inventors: Sachin Soni, Siddharth Kumar, Ram Bhushan Agrawal, Ajay Jain
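The crop-based labeling step can be sketched as follows: each training crop is labeled from where its center falls in the original image. The label names, tolerance, and direction convention here are assumptions for illustration, not the patent's scheme.

```python
def directional_label(crop_cx, crop_cy, img_w, img_h, tol=0.15):
    """Assign a directional-assistance label to a training crop based on
    the offset of its center from the original image's center."""
    dx = crop_cx / img_w - 0.5   # negative: crop lies left of center
    dy = crop_cy / img_h - 0.5   # negative: crop lies above center
    if abs(dx) <= tol and abs(dy) <= tol:
        return "centered"
    if abs(dx) >= abs(dy):
        # Face is off-center horizontally; prompt the user to pan the camera.
        return "move right" if dx < 0 else "move left"
    return "move down" if dy < 0 else "move up"
```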
  • Publication number: 20210287425
    Abstract: Certain embodiments involve visually augmenting images of three-dimensional containers with virtual elements that fill one or more empty regions of the three-dimensional containers. For instance, a computing system receives a first image that depicts a storage container and identifies sub-containers within the storage container. The computing system selects, from a virtual object library, a plurality of virtual objects that are semantically related to each sub-container. The computing system determines an arrangement of the virtual objects within each sub-container based on semantics associated with the sub-container and the plurality of virtual objects. The computing system generates a second image that depicts the arrangement of the plurality of virtual objects within the storage container and sub-containers. The computing system generates, for display, the second image depicting the storage container and the arrangement of the virtual objects.
    Type: Application
    Filed: June 1, 2021
    Publication date: September 16, 2021
    Inventors: Sanjeev Tagra, Sachin Soni, Ajay Jain, Ryan Rozich, Jonathan Roeder, Prasenjit Mondal
  • Patent number: 11113722
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to providing targeted content related to sentiment associated with products. In one embodiment, content of a referral source from which a user navigates to arrive at a product page having an item of interest is analyzed. A sentiment of the item based on the analysis of the content within the referral source is determined. Based on the sentiment of the item, targeted content related to the item is identified and provided to the user in an effort to reconcile the determined sentiment of the item.
    Type: Grant
    Filed: September 29, 2015
    Date of Patent: September 7, 2021
    Assignee: Adobe Inc.
    Inventors: Stéphane Moreau, Sachin Soni, Ashish Duggal, Anmol Dhawan
  • Patent number: 11113734
    Abstract: Techniques for generating leads for consumers using IoT devices at brick-and-mortar stores are provided. A retailer can determine a consumer's level of interest in a product and provide information or other benefits to the consumer. In some embodiments, sensor data is received from at least one of one or more consumer devices or IoT devices, the sensor data being indicative of interaction of a consumer with a product. One or more interactions of the consumer with the product are determined based on the received sensor data. An interaction database is searched for an interaction mapped to specific sensor data requirements matching the received sensor data. A leads score is calculated based on the one or more interactions, the leads score indicating an interest level of the consumer in the product. When the leads score exceeds a threshold, a lead is generated for the consumer.
    Type: Grant
    Filed: January 14, 2016
    Date of Patent: September 7, 2021
    Assignee: Adobe Inc.
    Inventors: Anmol Dhawan, Stephane Moreau, Sachin Soni, Ashish Duggal
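The scoring-and-threshold step can be pictured as summing a weight per detected interaction. The interaction names, weights, and threshold below are hypothetical values for illustration:

```python
# Hypothetical mapping of recognized interactions to interest weights.
INTERACTION_WEIGHTS = {"pickup": 1.0, "inspect_label": 2.0, "try_on": 3.0}

def leads_score(interactions):
    """Sum the weights of the consumer's detected product interactions;
    unrecognized interactions contribute nothing."""
    return sum(INTERACTION_WEIGHTS.get(i, 0.0) for i in interactions)

def maybe_generate_lead(interactions, threshold=4.0):
    """Generate a lead only when the leads score exceeds the threshold."""
    score = leads_score(interactions)
    return {"score": score, "lead": score > threshold}
```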
  • Patent number: 11113716
    Abstract: Systems and methods are disclosed herein for attributing credit to online consumer touchpoints for a consumer performing an action. The systems and methods involve determining whether a consumer is in a particular environment for an online consumer touchpoint by detecting an external viewing condition for the consumer for the online consumer touchpoint. The systems and methods determine that the consumer performed an action, such as a conversion, following the online consumer touchpoint and additional online consumer touchpoints. An effectiveness of the online consumer touchpoint in the particular environment is determined and used to attribute relative credit to the online consumer touchpoint and the additional online consumer touchpoints for the consumer performing the action.
    Type: Grant
    Filed: June 27, 2016
    Date of Patent: September 7, 2021
    Assignee: Adobe Inc.
    Inventors: Ashish Duggal, Anmol Dhawan, Sachin Soni, Russell Stringham
  • Publication number: 20210232621
    Abstract: Digital image selection techniques are described that employ machine learning to select a digital image of an object from a plurality of digital images of the object. The plurality of digital images each capture the object for inclusion as part of generating digital content, e.g., a webpage, a thumbnail to represent a digital video, and so on. As a result, the service provider system may select a digital image of an object that has an increased likelihood of achieving a desired outcome and may address the multitude of different ways in which an object may be presented to a user.
    Type: Application
    Filed: January 28, 2020
    Publication date: July 29, 2021
    Applicant: Adobe Inc.
    Inventors: Ajay Jain, Sanjeev Tagra, Sachin Soni, Ryan Timothy Rozich, Nikaash Puri, Jonathan Stephen Roeder
  • Patent number: 11074430
    Abstract: Methods and systems are provided for providing directional assistance to guide a user to position a camera for centering a person's face within the camera's field of view. A neural network system is trained to determine the position of the user's face relative to the center of the field of view as captured by an input image. The neural network system is trained using training input images that are generated by cropping different regions of initial training images. Each initial image is used to create a plurality of different training input images, and directional assistance labels used to train the network may be assigned to each training input image based on how the image is cropped. Once trained, the neural network system determines a position of the user's face, and automatically provides a non-visual prompt indicating how to center the face within the field of view.
    Type: Grant
    Filed: May 29, 2018
    Date of Patent: July 27, 2021
    Assignee: Adobe Inc.
    Inventors: Sachin Soni, Siddharth Kumar, Ram Bhushan Agrawal, Ajay Jain
  • Patent number: 11069034
    Abstract: The present disclosure relates to a computer-implemented method for generating an enhanced image from an original image, the method including segmenting the original image into a segmented image using an artificial neural network; curve fitting the segmented image to determine boundary artifacts; removing the determined boundary artifacts to generate a smoothed boundary image; and generating the enhanced image from the original image and the smoothed boundary image. The image may be enhanced further by correcting for glare and adding artificial light.
    Type: Grant
    Filed: September 6, 2019
    Date of Patent: July 20, 2021
    Assignee: Adobe Inc.
    Inventors: Sanjeev Tagra, Sachin Soni, Ajay Jain, Ryan Rozich, Prasenjit Mondal, Jonathan Roeder
  • Publication number: 20210216540
    Abstract: Techniques are disclosed for narrowing search requests, based on interaction between a search system and a user. For example, a plurality of search results is generated in response to a search query. To reduce the number of search results, a plurality of attributes or features of the search results are identified. Each feature has a corresponding plurality of clusters, where a cluster of a feature represents a corresponding range or value of the feature. For each feature, the first plurality of search results is categorized into the corresponding plurality of clusters of the corresponding feature. A feature is then selected. The search system interacts with the user, to identify a cluster of the plurality of clusters of the selected feature in which one or more intended search results belong. Based on such identification of the cluster, the search system refines or narrows down the first plurality of search results.
    Type: Application
    Filed: January 10, 2020
    Publication date: July 15, 2021
    Applicant: Adobe Inc.
    Inventors: Minal Bansal, Prasenjit Mondal, Sanjeev Tagra, Sachin Soni, Ajay Jain, Andres Gonzalez
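The cluster-then-narrow loop can be sketched with a generic bucketing function: results are grouped into clusters of one feature (e.g., price ranges), the user identifies the cluster containing the intended results, and the result set is reduced to that cluster. Function names and the bucketing scheme are assumptions for illustration:

```python
def cluster_by_feature(results, feature, bucket):
    """Group search results into clusters of one feature. `bucket` maps a
    feature value to a cluster label (e.g., a price range name)."""
    clusters = {}
    for r in results:
        clusters.setdefault(bucket(r[feature]), []).append(r)
    return clusters

def narrow(results, feature, bucket, chosen_cluster):
    """Keep only the results in the cluster the user identified."""
    return cluster_by_feature(results, feature, bucket).get(chosen_cluster, [])
```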
  • Patent number: 11064000
    Abstract: Techniques and systems are described for accessible audio switching options during the online conference. For example, a conferencing system receives presentation content and audio content as part of the online conference from a client device. The conferencing system generates voice-over content from the presentation content by converting text of the presentation content to audio. The conferencing system then divides the presentation content into presentation segments. The conferencing system also divides the audio content into audio segments that correspond to respective presentation segments, and the voice-over content into voice-over segments that correspond to respective presentation segments. As the online conference is output, the conferencing system enables switching between a corresponding audio segment and voice-over segment during output of a respective presentation segment.
    Type: Grant
    Filed: November 29, 2017
    Date of Patent: July 13, 2021
    Assignee: Adobe Inc.
    Inventors: Ajay Jain, Sachin Soni, Amit Srivastava
  • Patent number: 11055905
    Abstract: Certain embodiments involve visually augmenting images of three-dimensional containers with virtual elements that fill one or more empty regions of the three-dimensional containers. For instance, a computing system receives a first image that depicts a storage container and identifies sub-containers within the storage container. The computing system selects, from a virtual object library, a plurality of virtual objects that are semantically related to each sub-container. The computing system determines an arrangement of the virtual objects within each sub-container based on semantics associated with the sub-container and the plurality of virtual objects. The computing system generates a second image that depicts the arrangement of the plurality of virtual objects within the storage container and sub-containers. The computing system generates, for display, the second image depicting the storage container and the arrangement of the virtual objects.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: July 6, 2021
    Assignee: Adobe Inc.
    Inventors: Sanjeev Tagra, Sachin Soni, Ajay Jain, Ryan Rozich, Prasenjit Mondal, Jonathan Roeder
  • Patent number: 11043015
    Abstract: Techniques are disclosed for propagating a reflection of an object. In an example, a method includes receiving an input image comprising a first reflection of a first object on a reflective surface. The method further includes generating a second reflection for a second object in the input image. The second reflection is a reflection of the second object on the reflective surface. The method includes adding the second reflection to the input image. The method includes outputting a modified image comprising the first object, the first reflection, the second object, and the second reflection.
    Type: Grant
    Filed: August 12, 2019
    Date of Patent: June 22, 2021
    Assignee: Adobe Inc.
    Inventors: Sanjeev Tagra, Sachin Soni, Ajay Jain, Prasenjit Mondal, Jingwan Lu
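In the simplest case, generating a reflection reduces to mirroring the object's geometry across the reflective surface. The planar-mirror model below is a deliberate simplification for illustration; the patent covers more general reflection synthesis.

```python
def reflect_points(points, surface_y):
    """Mirror an object's 2D points across a horizontal reflective surface
    at y = surface_y, producing the geometry of its reflection."""
    return [(x, 2 * surface_y - y) for (x, y) in points]
```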
  • Patent number: 11036348
    Abstract: In implementations of user interaction determination within a webinar system, a computing device implements a webinar system that exposes interactive elements on user devices during a webinar and monitors device interactions reflecting user interactions with webinar content on the user devices. The webinar system determines amounts of user interaction based on the device interactions, and can output the interactive elements based on the device interactions. The webinar system can receive user responses to the interactive elements, and maintain a pipeline that assigns levels to the users based on the user responses and the interactive elements. Users are determined as sales leads based on the levels for the users in the pipeline.
    Type: Grant
    Filed: July 23, 2019
    Date of Patent: June 15, 2021
    Assignee: Adobe Inc.
    Inventors: Ajay Jain, Sanjeev Tagra, Sachin Soni, Eric Andrew Kienle
  • Patent number: 11003830
    Abstract: Methods and systems for location-based digital font recommendations identify digital fonts in images, determine locations of the images, and assign mappings between the identified digital fonts and the locations of the images. Additionally, one or more embodiments detect a location related to content being viewed by a user. In response, one or more embodiments determine a location associated with the content and identify one or more digital fonts associated with the location from a font-location database. Based on the identified digital font(s), one or more embodiments provide a location-based recommendation of digital fonts for use in connection with the content.
    Type: Grant
    Filed: September 26, 2016
    Date of Patent: May 11, 2021
    Assignee: Adobe Inc.
    Inventors: Sachin Soni, Ashish Duggal
  • Patent number: 10999638
    Abstract: Navigating a video recording based on changes in views of the recording's visual content is described. A content-based navigation system receives a recording including visual content and audio content. The content-based navigation system then determines a content scale for navigating the recording relative to an overall number of new or updated views of visual content during playback of the recording. Given the content scale, the content-based navigation system generates a content navigation control that enables navigating the recording at a granularity defined by the overall number of new or updated views of the recording's visual content. Navigation via the content navigation control is thus independent of time between changes to views of the recording's visual content during playback. Input to the content navigation control causes output of a different view of the recording's visual content, and optionally causes output of audio content synchronized with the different view of visual content.
    Type: Grant
    Filed: January 17, 2019
    Date of Patent: May 4, 2021
    Assignee: Adobe Inc.
    Inventors: Sanjeev Tagra, Sachin Soni, Ajay Jain