Patents by Inventor Samartha Vashishtha
Samartha Vashishtha has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240121356
Abstract: Systems for capturing and organizing team-generated content produced during a meeting defined/facilitated by a third-party meeting tool or service. In particular, a server system includes a memory allocation and a processor allocation configured to cooperate to instantiate an instance of a bridge service configured to communicably couple to API endpoints of the third-party meeting tool and to one or more collaboration tools. The bridge service can monitor user or team input (and/or user input events) to the third-party meeting tool before, during, or after a meeting. Captured user input is provided to an input classifier, which classifies the input as one of a set of input types. Based on the input type, parsing or analysis operations can be triggered and/or one or more API endpoints of a collaboration tool are selected such that an input to the collaboration tool, including the user input, can be provided.
Type: Application
Filed: December 15, 2023
Publication date: April 11, 2024
Inventors: Vishesh Gupta, Samartha Vashishtha
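The classify-then-route flow this abstract describes can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the input types, keyword rules, and endpoint paths below are all hypothetical stand-ins.

```python
def classify_input(text: str) -> str:
    """Toy keyword-based classifier standing in for the abstract's input classifier."""
    lowered = text.lower()
    if "todo" in lowered or "action:" in lowered:
        return "action_item"
    if "decided" in lowered or "decision:" in lowered:
        return "decision"
    return "note"

def route_to_endpoint(input_type: str) -> str:
    """Select a collaboration-tool API endpoint based on the classified input type.

    The endpoint paths are illustrative placeholders, not documented APIs.
    """
    endpoints = {
        "action_item": "/api/tasks/create",
        "decision": "/api/pages/append",
        "note": "/api/pages/append",
    }
    return endpoints[input_type]

kind = classify_input("TODO: send the release notes to the team")
print(kind, route_to_endpoint(kind))
```

A real bridge service would subscribe to the meeting tool's API events and post the classified input to the selected collaboration-tool endpoint; the sketch only shows the classification and routing decision.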
-
Patent number: 11849254
Abstract: Systems for capturing and organizing team-generated content produced during a meeting defined/facilitated by a third-party meeting tool or service. In particular, a server system includes a memory allocation and a processor allocation configured to cooperate to instantiate an instance of a bridge service configured to communicably couple to API endpoints of the third-party meeting tool and to one or more collaboration tools. The bridge service can monitor user or team input (and/or user input events) to the third-party meeting tool before, during, or after a meeting. Captured user input is provided to an input classifier, which classifies the input as one of a set of input types. Based on the input type, parsing or analysis operations can be triggered and/or one or more API endpoints of a collaboration tool are selected such that an input to the collaboration tool, including the user input, can be provided.
Type: Grant
Filed: September 24, 2021
Date of Patent: December 19, 2023
Assignees: ATLASSIAN PTY LTD., ATLASSIAN US, INC.
Inventors: Vishesh Gupta, Samartha Vashishtha
-
Publication number: 20220207489
Abstract: Systems for capturing and organizing team-generated content produced during a meeting defined/facilitated by a third-party meeting tool or service. In particular, a server system includes a memory allocation and a processor allocation configured to cooperate to instantiate an instance of a bridge service configured to communicably couple to API endpoints of the third-party meeting tool and to one or more collaboration tools. The bridge service can use meeting data obtained from a third-party meeting tool to obtain task information from a task management system for tasks associated with a current meeting being hosted by the third-party meeting tool. The bridge service can analyze the task information to determine a status of the tasks, and generate graphical summaries of the tasks associated with the event. The graphical summaries can be displayed to participants of the current meeting.
Type: Application
Filed: August 26, 2021
Publication date: June 30, 2022
Inventors: Vishesh Gupta, Samartha Vashishtha
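The status-aggregation step in this abstract can be illustrated with a short sketch. The task records and status names below are assumed for illustration; a text chart stands in for the graphical summaries the abstract describes.

```python
from collections import Counter

# Hypothetical task records as a task-management API might return them
# (the field names and statuses are assumptions, not from the patent).
tasks = [
    {"key": "PROJ-1", "status": "done"},
    {"key": "PROJ-2", "status": "in_progress"},
    {"key": "PROJ-3", "status": "in_progress"},
    {"key": "PROJ-4", "status": "todo"},
]

def summarize_tasks(tasks):
    """Aggregate task statuses into counts suitable for a graphical summary."""
    return Counter(t["status"] for t in tasks)

def text_bar_chart(counts):
    """Render the summary as a tiny text chart (a stand-in for real graphics)."""
    return "\n".join(f"{status:12} {'#' * n}" for status, n in sorted(counts.items()))

print(text_bar_chart(summarize_tasks(tasks)))
```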
-
Publication number: 20220210372
Abstract: Systems for capturing and organizing team-generated content produced during a meeting defined/facilitated by a third-party meeting tool or service. In particular, a server system includes a memory allocation and a processor allocation configured to cooperate to instantiate an instance of a bridge service configured to communicably couple to API endpoints of the third-party meeting tool and to one or more collaboration tools. The bridge service can monitor user or team input (and/or user input events) to the third-party meeting tool before, during, or after a meeting. Captured user input is provided to an input classifier, which classifies the input as one of a set of input types. Based on the input type, parsing or analysis operations can be triggered and/or one or more API endpoints of a collaboration tool are selected such that an input to the collaboration tool, including the user input, can be provided.
Type: Application
Filed: September 24, 2021
Publication date: June 30, 2022
Inventors: Vishesh Gupta, Samartha Vashishtha
-
Patent number: 11153532
Abstract: Systems for capturing and organizing team-generated content produced during a meeting defined/facilitated by a third-party meeting tool or service. In particular, a server system includes a memory allocation and a processor allocation configured to cooperate to instantiate an instance of a bridge service configured to communicably couple to API endpoints of the third-party meeting tool and to one or more collaboration tools. The bridge service can monitor user or team input (and/or user input events) to the third-party meeting tool before, during, or after a meeting. Captured user input is provided to an input classifier, which classifies the input as one of a set of input types. Based on the input type, parsing or analysis operations can be triggered and/or one or more API endpoints of a collaboration tool are selected such that an input to the collaboration tool, including the user input, can be provided.
Type: Grant
Filed: December 29, 2020
Date of Patent: October 19, 2021
Assignees: ATLASSIAN PTY LTD., ATLASSIAN INC.
Inventors: Vishesh Gupta, Samartha Vashishtha
-
Patent number: 10242033
Abstract: Techniques for extrapolative searches are described herein. In one or more implementations, searches are conducted using an extrapolative and additive mechanism that expands a specific query into one or more generalized queries. To do so, keywords contained in an input search query are extracted to use as a basis for a search related to content. The extracted keywords are expanded by categorization of named entities recognized from the keywords into corresponding generalized terms. Query strings to use for the search are built using combinations of keywords that are extracted from the text and corresponding generalized terms obtained by expanding the keywords. A search is conducted using the expanded queries and image results returned by the search are exposed as suggested images to represent the content.
Type: Grant
Filed: July 7, 2015
Date of Patent: March 26, 2019
Assignee: Adobe Inc.
Inventors: Samartha Vashishtha, Frank Jennings
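The expand-and-combine step can be sketched as follows. This is a minimal illustration, assuming a toy entity-to-category map in place of real named-entity recognition; the mappings and function names are hypothetical.

```python
# Toy named-entity -> generalized-term map (assumed for illustration;
# a real system would use a named-entity recognizer and a taxonomy).
GENERALIZATIONS = {
    "eiffel tower": "landmark",
    "paris": "city",
    "einstein": "scientist",
}

def expand_keywords(keywords):
    """Map each keyword to its generalized term, or None if unrecognized."""
    return {kw: GENERALIZATIONS.get(kw.lower()) for kw in keywords}

def build_queries(keywords):
    """Build query strings combining the specific keywords and generalized terms."""
    queries = [" ".join(keywords)]  # the original, specific query
    for kw, general in expand_keywords(keywords).items():
        if general:
            # Substitute the generalized term for the specific entity.
            queries.append(" ".join(general if k == kw else k for k in keywords))
    return queries

print(build_queries(["Eiffel Tower", "at night"]))
```

Running each generated query against an image search and surfacing the results as suggestions mirrors the final step the abstract describes.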
-
Patent number: 10019648
Abstract: Image classification based on a calculated camera-to-object distance is described. The camera-to-object distance is calculated based in part on a ratio of a measured dimension of an identified feature in the digital image compared to a known physical dimension of the identified feature in real life. A human anatomical constant, such as a dimension of the human eye, may be used as the feature to calculate the camera-to-object distance. The camera-to-object distance can be used to classify the digital image, such as by determining whether the image is a selfie. The camera-to-object distance may also be used for image editing operations to be performed on the digital image.Type: Grant
Filed: December 9, 2015
Date of Patent: July 10, 2018
Assignee: ADOBE SYSTEMS INCORPORATED
Inventors: Samartha Vashishtha, Vikrant Rai
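The distance estimate described here follows from the pinhole-camera relation: distance = focal length × real feature size ÷ feature size on the sensor. The sketch below is an illustration under assumed numbers (the corneal-diameter constant, focal length, sensor size, and arm-length threshold are all assumptions, not values from the patent).

```python
# Assumed anatomical constant: the adult corneal diameter is roughly 11.7 mm,
# one example of the "human anatomical constant" the abstract mentions.
EYE_WIDTH_MM = 11.7

def camera_to_object_distance_mm(focal_length_mm, sensor_height_mm,
                                 image_height_px, feature_px,
                                 feature_real_mm=EYE_WIDTH_MM):
    """Pinhole-camera estimate: distance = f * real_size / size_on_sensor."""
    size_on_sensor_mm = feature_px * sensor_height_mm / image_height_px
    return focal_length_mm * feature_real_mm / size_on_sensor_mm

def looks_like_selfie(distance_mm, arm_length_mm=700):
    """Classify as a selfie if the subject is within roughly arm's reach."""
    return distance_mm <= arm_length_mm

# Illustrative phone-camera numbers (assumed): 4.2 mm focal length,
# 4.8 mm sensor height, 3000 px image height, eye spans 60 px.
d = camera_to_object_distance_mm(
    focal_length_mm=4.2, sensor_height_mm=4.8,
    image_height_px=3000, feature_px=60)
print(round(d), looks_like_selfie(d))
```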
-
Patent number: 9712569
Abstract: A computer implemented method and apparatus for timeline-synchronized note taking during a web conference. The method comprises receiving a note from a user in a web conference; generating metadata that identifies a timestamp in the web conference when the note was created and a user identifier of the user who authored the note; and storing the note and the metadata with a recording of the web conference.Type: Grant
Filed: June 23, 2014
Date of Patent: July 18, 2017
Assignee: ADOBE SYSTEMS INCORPORATED
Inventors: Samartha Vashishtha, Vikrant Rai, Gaurav Gupta, Aman Kumar Gupta, Gaurav Satija
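The note-plus-metadata structure this abstract describes can be sketched briefly. The class and field names below are hypothetical; the point is that each note carries an author identifier and an offset into the recording's timeline.

```python
from dataclasses import dataclass

@dataclass
class TimelineNote:
    """A note anchored to a point on the conference recording's timeline."""
    text: str
    author_id: str
    offset_s: float  # seconds since the conference (and recording) started

class ConferenceRecorder:
    """Minimal sketch: stores notes with timestamp and author metadata."""
    def __init__(self, start_time: float):
        self.start_time = start_time
        self.notes = []

    def add_note(self, text: str, author_id: str, now: float) -> TimelineNote:
        # The offset ties the note to the matching moment in the recording.
        note = TimelineNote(text, author_id, offset_s=now - self.start_time)
        self.notes.append(note)
        return note

rec = ConferenceRecorder(start_time=1000.0)
n = rec.add_note("Key decision made here", "user-42", now=1315.0)
print(n.offset_s)  # 315 seconds into the recording
```

Storing the offset rather than a wall-clock time lets a playback UI jump straight to the moment a note was taken.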
-
Publication number: 20170169570
Abstract: Image classification based on a calculated camera-to-object distance is described. The camera-to-object distance is calculated based in part on a ratio of a measured dimension of an identified feature in the digital image compared to a known physical dimension of the identified feature in real life. A human anatomical constant, such as a dimension of the human eye, may be used as the feature to calculate the camera-to-object distance. The camera-to-object distance can be used to classify the digital image, such as by determining whether the image is a selfie. The camera-to-object distance may also be used for image editing operations to be performed on the digital image.Type: Application
Filed: December 9, 2015
Publication date: June 15, 2017
Inventors: Samartha Vashishtha, Vikrant Rai
-
Publication number: 20170011068
Abstract: Techniques for extrapolative searches are described herein. In one or more implementations, searches are conducted using an extrapolative and additive mechanism that expands a specific query into one or more generalized queries. To do so, keywords contained in an input search query are extracted to use as a basis for a search related to content. The extracted keywords are expanded by categorization of named entities recognized from the keywords into corresponding generalized terms. Query strings to use for the search are built using combinations of keywords that are extracted from the text and corresponding generalized terms obtained by expanding the keywords. A search is conducted using the expanded queries and image results returned by the search are exposed as suggested images to represent the content.
Type: Application
Filed: July 7, 2015
Publication date: January 12, 2017
Inventors: Samartha Vashishtha, Frank Jennings
-
Publication number: 20150373063
Abstract: A computer implemented method and apparatus for timeline-synchronized note taking during a web conference. The method comprises receiving a note from a user in a web conference; generating metadata that identifies a timestamp in the web conference when the note was created and a user identifier of the user who authored the note; and storing the note and the metadata with a recording of the web conference.Type: Application
Filed: June 23, 2014
Publication date: December 24, 2015
Inventors: Samartha Vashishtha, Vikrant Rai, Gaurav Gupta, Aman Kumar Gupta, Gaurav Satija
-
Publication number: 20150186363
Abstract: Techniques for a search-powered language usage service are described in which existing collections of documents are employed as sources of correct usage. A service may operate to search documents from the Internet or other document sources to produce a usage database of "correct" usage phrases that spans different languages, styles, and other contexts. Metadata associated with phrases added to the database may be used to understand the context of usage and perform usage checks using filtered, context-specific phrases for particular languages, dialects, geographic regions, styles, custom scenarios, and so forth. In one approach, separate databases for different contexts may be derived from data maintained in a global database. The service may expose the usage database(s) to enable applications to analyze target documents by comparing phrases to correct usage phrases and perform responsive actions to facilitate correct usage in various ways.Type: Application
Filed: December 27, 2013
Publication date: July 2, 2015
Applicant: Adobe Systems Incorporated
Inventor: Samartha Vashishtha
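A context-filtered usage check like the one this abstract describes can be sketched with a toy in-memory database. The database contents, context keys, and suggestion heuristic below are all assumptions made for illustration.

```python
# Toy usage database keyed by (language, style) context; a real service
# would derive these sets from large document collections.
USAGE_DB = {
    ("en-US", "formal"): {"with regard to", "on the other hand"},
    ("en-GB", "formal"): {"with regard to", "whilst"},
}

def check_usage(phrase, language, style):
    """Return True if the phrase appears in the context-filtered usage set."""
    return phrase.lower() in USAGE_DB.get((language, style), set())

def suggest(phrase, language, style, max_suggestions=3):
    """Naive fallback: suggest known phrases sharing a word with the input."""
    words = set(phrase.lower().split())
    candidates = USAGE_DB.get((language, style), set())
    return sorted(c for c in candidates if words & set(c.split()))[:max_suggestions]

print(check_usage("with regards to", "en-US", "formal"))  # a common misusage
print(suggest("with regards to", "en-US", "formal"))
```

An application could run `check_usage` over each phrase in a target document and surface `suggest` results for the phrases that fail, which matches the "compare and perform responsive actions" flow in the abstract.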