Patents by Inventor Abhishek Shah

Abhishek Shah has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10977443
    Abstract: Embodiments provide for class balancing for intent authoring using search via: receiving a positive example of an utterance associated with an intent; building an in-intent pool of utterances from a conversation log using the positive example in a first search query of the conversation log; adding the in-intent pool of utterances as a positive class to a training dataset; applying Boolean operators to negate the positive example to form a complement example; building an out-intent pool of utterances from the conversation log using the complement example in a second search query of the conversation log; and adding the out-intent pool of utterances as a complement class to the training dataset. The training dataset may be balanced to include a predefined ratio of positive and complement examples. The training dataset may be used to train or retrain an intent classifier.
    Type: Grant
    Filed: November 5, 2018
    Date of Patent: April 13, 2021
    Assignee: International Business Machines Corporation
    Inventors: Abhishek Shah, Tin Kam Ho
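    A minimal Python sketch of the class-balancing flow described in the abstract above (the same abstract appears under publication 20200142960 below); the search interface, function names, and balancing ratio are illustrative assumptions, not details taken from the patent.
      # Search the conversation log with the positive example, search again with
      # its Boolean negation, and balance the two pools into a training dataset.
      def search_log(conversation_log, query_terms, negate=False):
          """Return utterances that contain (or, if negate=True, lack) the query terms."""
          def matches(utterance):
              hit = any(term in utterance.lower() for term in query_terms)
              return not hit if negate else hit
          return [u for u in conversation_log if matches(u)]

      def build_training_dataset(conversation_log, positive_example, intent, ratio=1.0):
          terms = positive_example.lower().split()
          in_intent = search_log(conversation_log, terms)                # first search query
          out_intent = search_log(conversation_log, terms, negate=True)  # negated, second query
          out_intent = out_intent[: int(len(in_intent) * ratio)]         # enforce the class ratio
          return ([(u, intent) for u in in_intent] +
                  [(u, "NOT_" + intent) for u in out_intent])

      if __name__ == "__main__":
          log = ["I want to reset my password", "reset password please",
                 "what are your store hours", "track my order"]
          print(build_training_dataset(log, "reset password", "reset_password"))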
  • Publication number: 20210103611
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for organization of digital media in which a digital media gallery is organized based on underlying events or occasions by leveraging content tags associated with media. Content tags and their corresponding confidence scores for a set of media are compared with correlation scores of content tags with certain event types. Upon receipt of a set of media, candidate event types may be determined based on content tags associated with the set of media and relevant tags for different event types. The candidate event types are scored based on the confidence scores and the correlation scores for each candidate event type. The highest scoring candidate event type may be presented to the user as the event type for the set of media.
    Type: Application
    Filed: October 3, 2019
    Publication date: April 8, 2021
    Inventors: Rotish KUMAR, Arshla JINDAL, Abhishek SHAH
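    A minimal Python sketch of the event-type scoring described in the abstract above; the multiplicative combination of tag confidences and event correlations is an illustrative assumption, not the claimed scoring formula.
      from collections import defaultdict

      def score_event_types(media_tags, event_correlations):
          """media_tags: one {tag: confidence} dict per media item;
          event_correlations: {event_type: {tag: correlation}}."""
          scores = defaultdict(float)
          for tags in media_tags:
              for tag, confidence in tags.items():
                  for event_type, correlations in event_correlations.items():
                      if tag in correlations:
                          scores[event_type] += confidence * correlations[tag]
          # Present the highest-scoring candidate event type.
          return max(scores, key=scores.get) if scores else None

      if __name__ == "__main__":
          media = [{"cake": 0.9, "balloons": 0.8}, {"candles": 0.7}]
          events = {"birthday": {"cake": 0.9, "balloons": 0.6, "candles": 0.8},
                    "wedding": {"cake": 0.5, "rings": 0.9}}
          print(score_event_types(media, events))  # -> "birthday"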
  • Publication number: 20210035557
    Abstract: A combination of propagation operations and learning algorithms is applied, using a selected set of labeled conversational logs retrieved from a subset of a plurality of conversational logs, to a remaining corpus of the plurality of conversational logs to train an automated response system according to an intent associated with each of the conversational logs. The combination of propagation operations and learning algorithms may include defining the labels by a user for the selected set of the subset of the plurality of conversational logs; training a probabilistic classifier using the defined labels of features of the selected set, wherein the probabilistic classifier produces labeling decisions for the subset of conversational logs; weighting the features of the selected set in a model optimization process; and/or training an additional classifier using the weighted features of the selected set and applying the additional classifier to the remaining corpus.
    Type: Application
    Filed: October 21, 2020
    Publication date: February 4, 2021
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Tin Kam HO, Robert L. YATES, Blake MCGREGOR, Rajendra G. UGRANI, Neil R. MALLINAR, Abhishek SHAH, Ayush GUPTA
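    A minimal Python sketch of the label-propagation step described in the abstract above, using scikit-learn purely for illustration; the feature representation, classifier choice, and confidence cutoff are assumptions rather than details from the application.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression

      def propagate_labels(labeled_texts, labels, unlabeled_texts, min_confidence=0.8):
          """Train a probabilistic classifier on the user-labeled set and
          propagate labels to the remaining corpus, keeping confident decisions."""
          vectorizer = TfidfVectorizer()
          X_labeled = vectorizer.fit_transform(labeled_texts)
          classifier = LogisticRegression(max_iter=1000).fit(X_labeled, labels)
          X_unlabeled = vectorizer.transform(unlabeled_texts)
          propagated = []
          for text, probs in zip(unlabeled_texts, classifier.predict_proba(X_unlabeled)):
              best = probs.argmax()
              if probs[best] >= min_confidence:   # accept only confident labeling decisions
                  propagated.append((text, classifier.classes_[best]))
          return propagated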
  • Publication number: 20200372112
    Abstract: Evaluating intent authoring processes, by a processor in a computing environment. Results of a simulated intent labeling effort are received for a dataset comprising utterances of interactive dialog sessions between agents and clients for a given product or service. Figures of merit for respective algorithms used to perform the simulated intent labeling effort are computed. Each of the respective algorithms is evaluated according to the computed figures of merit, and one of the respective algorithms is implemented for labeling intents of a remaining corpus of the synthesized dataset according to parameters evaluated in the computed figures of merit.
    Type: Application
    Filed: May 20, 2019
    Publication date: November 26, 2020
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Tin Kam HO, Abhishek SHAH, Neil MALLINAR, Rajendra G. UGRANI, Ayush GUPTA
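    A minimal Python sketch of figures of merit that could summarize a simulated labeling run as described above; the specific metrics (accuracy, confirmation effort, and their ratio) are illustrative assumptions, not those defined in the application.
      def figures_of_merit(true_labels, predicted_labels, confirmations_requested):
          """Summarize one simulated intent-labeling run for a given algorithm."""
          correct = sum(t == p for t, p in zip(true_labels, predicted_labels))
          accuracy = correct / len(true_labels)
          effort = confirmations_requested      # simulated manual confirmations
          return {"accuracy": accuracy,
                  "effort": effort,
                  "accuracy_per_confirmation": accuracy / effort if effort else float("inf")}

      if __name__ == "__main__":
          print(figures_of_merit(["pay", "cancel", "pay"], ["pay", "pay", "pay"], 2))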
  • Publication number: 20200372111
    Abstract: Evaluating intent authoring processes, by a processor in a computing environment. A dataset comprising utterances of interactive dialog sessions between agents and clients for a given product or service is received. A classification of at least a portion of the utterances is performed for a target intent according to at least one of a plurality of recommendation algorithms, where the classification is performed by an automatic driver invoking the recommendation algorithm and simulating a manual confirmation of the algorithm's decision by a user. A classifier trained with the utterances recommended and confirmed by the automatic driver is automatically evaluated according to at least one of a plurality of evaluation criteria. A report tracking the evaluation results is generated.
    Type: Application
    Filed: May 20, 2019
    Publication date: November 26, 2020
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Tin Kam HO, Abhishek SHAH, Neil MALLINAR, Rajendra G. UGRANI, Ayush GUPTA
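    A minimal Python sketch of the automatic-driver loop described above: the driver invokes a recommendation algorithm, simulates the user's confirmation against ground-truth labels, retrains on the confirmed utterances, and records an evaluation. All names and the round structure are illustrative assumptions.
      def simulate_intent_authoring(recommend, train_classifier, evaluate,
                                    utterances, ground_truth, target_intent, rounds=5):
          confirmed = []                        # utterances confirmed for the target intent
          report = []
          for round_number in range(rounds):
              for utterance in recommend(utterances, confirmed, target_intent):
                  # Simulated manual confirmation: accept the recommendation only
                  # when the ground-truth label matches the target intent.
                  if ground_truth[utterance] == target_intent:
                      confirmed.append(utterance)
              classifier = train_classifier(confirmed, target_intent)
              report.append({"round": round_number,
                             "confirmed": len(confirmed),
                             "evaluation": evaluate(classifier)})
          return report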
  • Patent number: 10832659
    Abstract: Embodiments for training an automated response system using weak supervision and co-training in a computing environment are provided. A plurality of conversational logs comprising interactive dialog sessions between agents and clients for a given product or service are received. A subset of the plurality of conversational logs is retrieved according to a defined criterion, and a selected set of the subset of the plurality of retrieved conversational logs is labeled by a user. The labeling is associated with a semantic scope of intent considered by the clients. A combination of propagation operations and learning algorithms using the selected set of labeled conversational logs is applied to a remaining corpus of the plurality of conversational logs to train the automated response system according to the semantic scope of intent.
    Type: Grant
    Filed: August 31, 2018
    Date of Patent: November 10, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Tin Kam Ho, Robert L. Yates, Blake McGregor, Rajendra G. Ugrani, Neil R. Mallinar, Abhishek Shah, Ayush Gupta
  • Patent number: 10819876
    Abstract: Technologies for video-based document scanning are disclosed. The video scanning system may divide a video into segments. A segment has frames with a common feature. For a segment, the video scanning system is configured to rank the frames in the segment, e.g., based on motion characteristics, zoom characteristics, aesthetics characteristics, quality characteristics, etc., of the frames. Accordingly, the system can generate a scan from a selected frame in a segment, e.g., based on the rank of the selected frame in the segment.
    Type: Grant
    Filed: June 25, 2018
    Date of Patent: October 27, 2020
    Assignee: ADOBE INC.
    Inventors: Ankit Pangasa, Abhishek Shah
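    A minimal Python sketch of the frame-ranking step described in the abstract above; the characteristics and their weights are illustrative assumptions, and the scan-generation step itself is omitted.
      def best_frame_per_segment(segments, weights=None):
          """segments: list of segments, each a list of frames; a frame is a dict of
          0..1 scores for motion stability, zoom, aesthetics, and quality."""
          weights = weights or {"motion": 0.4, "zoom": 0.2, "aesthetics": 0.2, "quality": 0.2}
          def rank(frame):
              return sum(weight * frame.get(name, 0.0) for name, weight in weights.items())
          # The top-ranked frame in each segment is the one used to generate a scan.
          return [max(frames, key=rank) for frames in segments if frames]

      if __name__ == "__main__":
          segment = [{"motion": 0.9, "zoom": 0.5, "aesthetics": 0.6, "quality": 0.7},
                     {"motion": 0.4, "zoom": 0.9, "aesthetics": 0.8, "quality": 0.9}]
          print(best_frame_per_segment([segment]))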
  • Publication number: 20200323318
    Abstract: The present invention provides one or more head sections having a setting that are releasably engaged with one or more bases.
    Type: Application
    Filed: April 11, 2019
    Publication date: October 15, 2020
    Applicant: Sandeep Diamond Corporation
    Inventor: Abhishek Shah
  • Publication number: 20200276010
    Abstract: A device and method for aspirating graft tissue and delivering the graft tissue to a target delivery site (for example, when performing Descemet's membrane endothelial keratoplasty). In one embodiment, an injector comprises a cylinder and a plunger at least partially located within the cylinder, the plunger being rotatably advanceable and retractable within the cylinder. In one aspect of the embodiment, rotating the plunger in a first direction within the cylinder controllably aspirates a graft tissue into the injector and rotating the plunger in a second direction opposite the first direction controllably ejects the graft tissue from the injector.
    Type: Application
    Filed: February 28, 2020
    Publication date: September 3, 2020
    Inventors: Alfonso L. SABATER, Abhishek SHAH, William B. BURAS, Alejandro M. SABATER
  • Patent number: 10740925
    Abstract: Object tracking verification techniques are described as implemented by a computing device. In one example, points are selected on and along a boundary of an object to be tracked, e.g., in an initial frame of a digital video; these are referred to as “feature points.” Tracking of the feature points is verified by the computing device between frames. If the feature points have been found to deviate from the object, the feature points are reselected. To verify the feature points, the number of tracked feature points in a subsequent frame is compared against the number of feature points used to initiate tracking, with respect to a threshold. Based on this comparison, if the number of feature points “lost” in the subsequent frame is greater than the threshold, the feature points are reselected for tracking the object in subsequent frames of the video.
    Type: Grant
    Filed: August 29, 2018
    Date of Patent: August 11, 2020
    Assignee: Adobe Inc.
    Inventors: Angad Kumar Gupta, Abhishek Shah
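    A minimal Python sketch of the verification check described in the abstract above; the loss threshold value is an illustrative assumption.
      def tracking_still_valid(initial_point_count, tracked_point_count, max_lost_fraction=0.3):
          """Return True to keep tracking, False when enough points were lost
          that the feature points should be reselected on the object boundary."""
          lost = initial_point_count - tracked_point_count
          return lost / initial_point_count <= max_lost_fraction

      if __name__ == "__main__":
          print(tracking_still_valid(100, 85))  # True: few points lost, keep tracking
          print(tracking_still_valid(100, 60))  # False: too many lost, reselect points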
  • Patent number: 10699381
    Abstract: Certain embodiments involve a model for enhancing text in electronic content. For example, a system obtains electronic content comprising input text and converts the electronic content into a grayscale image. The system also converts the grayscale image into a binary image using a grid-based grayscale-conversion filter, which can include: generating a grid of pixels on the grayscale image; determining a plurality of grid-pixel threshold values at intersection points in the grid of pixels; determining a plurality of estimated pixel threshold values based on the plurality of grid-pixel threshold values; and converting the grayscale image into the binary image using the plurality of grid-pixel threshold values and the plurality of estimated pixel threshold values. The system also generates an interpolated image based on the electronic content and the binary image. The interpolated image includes output text that is darker than the input text. The system can then output the interpolated image.
    Type: Grant
    Filed: May 24, 2018
    Date of Patent: June 30, 2020
    Assignee: Adobe Inc.
    Inventors: Ram Bhushan Agrawal, Ankit Pangasa, Abhishek Shah
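    A minimal NumPy sketch of the grid-based thresholding described in the abstract above; computing each grid threshold as a block mean and expanding it to per-pixel estimates are illustrative simplifications of the claimed filter, and the later interpolation that darkens the output text is omitted.
      import numpy as np

      def binarize_with_grid(gray, grid_step=32):
          """gray: 2-D uint8 or float array; returns a 0/255 binary image."""
          h, w = gray.shape
          ys = np.arange(0, h, grid_step)
          xs = np.arange(0, w, grid_step)
          # Grid-pixel threshold at each intersection: mean of the surrounding block.
          grid_thresholds = np.array([[gray[y:y + grid_step, x:x + grid_step].mean()
                                       for x in xs] for y in ys])
          # Estimated per-pixel thresholds: expand the grid values back to image size
          # (nearest grid cell here; a smoother interpolation could be used instead).
          per_pixel = np.kron(grid_thresholds, np.ones((grid_step, grid_step)))[:h, :w]
          return (gray > per_pixel).astype(np.uint8) * 255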
  • Patent number: 10692259
    Abstract: Techniques for automatic creation of media collages are described. In one or more implementations, unwanted frames are identified and removed from items of media content. A media score is then determined for items of media content based on characteristics of an appearance of the items within a plurality of collage templates. A template score is determined for each collage template of the plurality of collage templates by combining the media scores for each media item of the plurality of media items included in a collage template. At least one of the plurality of collage templates is selected based on determined template scores. Then, at least one media collage is outputted based on the selected collage templates.
    Type: Grant
    Filed: December 15, 2016
    Date of Patent: June 23, 2020
    Assignee: Adobe Inc.
    Inventors: Abhishek Shah, Sameer Bhatt
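    A minimal Python sketch of the template-scoring step described in the abstract above; the per-slot scoring function and the in-order pairing of media with slots are illustrative assumptions.
      def select_templates(templates, media_items, score_media_in_slot, top_k=1):
          """templates: list of templates, each a list of slots;
          score_media_in_slot(item, slot) -> float media score for that placement."""
          scored = []
          for template in templates:
              # Template score: combined media scores of the items placed in it.
              template_score = sum(score_media_in_slot(item, slot)
                                   for item, slot in zip(media_items, template))
              scored.append((template_score, template))
          scored.sort(key=lambda pair: pair[0], reverse=True)
          return [template for _, template in scored[:top_k]]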
  • Patent number: 10692197
    Abstract: Computer-implemented systems and methods herein disclose automatic haze correction in a digital video. In one example, a video dehazing module identifies a scene including a set of video frames. The video dehazing module identifies the dark channel, brightness, and atmospheric light characteristics in the scene. For each video frame in the scene, the video dehazing module determines a unique haze correction amount parameter by taking into account the dark channel, brightness, and atmospheric light characteristics. The video dehazing module applies the unique haze correction amount parameters to each video frame and thereby generates a sequence of dehazed video frames.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: June 23, 2020
    Assignee: Adobe Inc.
    Inventors: Abhishek Shah, Gagan Singhal
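    A minimal NumPy sketch of deriving a per-frame haze-correction amount from the dark channel, brightness, and atmospheric light, as described in the abstract above; the combining formula is an illustrative assumption, not the patented computation.
      import numpy as np

      def haze_correction_amount(frame):
          """frame: HxWx3 float array in [0, 1]; returns a correction amount in [0, 1]."""
          dark_channel = float(frame.min(axis=2).mean())   # per-pixel min over RGB, averaged
          brightness = float(frame.mean())
          atmospheric_light = float(frame.max())
          # Hazier frames tend to have a bright dark channel relative to overall brightness.
          amount = dark_channel * atmospheric_light / max(brightness, 1e-6)
          return float(np.clip(amount, 0.0, 1.0))

      if __name__ == "__main__":
          hazy = np.full((4, 4, 3), 0.8)          # flat, bright frame: high correction
          clear = np.random.rand(4, 4, 3) * 0.5   # darker, varied frame: lower correction
          print(haze_correction_amount(hazy), haze_correction_amount(clear))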
  • Publication number: 20200167604
    Abstract: Embodiments for creating compact example subsets for intent classification in a conversational system are provided. A set of content used for training an intent classifier is received from a conversational corpus. Entries within the set of content are separated into a first subset and a second subset, and a cross-validation operation is performed on the first and second subsets to identify a correctly labeled portion and an incorrectly labeled portion of the set of content. A reduced content used for performing a final training of the intent classifier is formed by combining a first number of the entries from the correctly labeled portion and a second number of the entries from the incorrectly labeled portion of the set of content.
    Type: Application
    Filed: November 28, 2018
    Publication date: May 28, 2020
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Abhishek SHAH, Tin Kam HO
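    A minimal Python sketch of building the reduced training content described in the abstract above, using scikit-learn's cross-validation purely for illustration; the representation, classifier, and the counts drawn from each portion are assumptions rather than details from the application.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_predict

      def build_compact_subset(texts, labels, n_correct=100, n_incorrect=50):
          """Cross-validate the labeled content, split it into correctly and
          incorrectly predicted portions, and combine a slice of each."""
          X = TfidfVectorizer().fit_transform(texts)
          predicted = cross_val_predict(LogisticRegression(max_iter=1000), X, labels, cv=5)
          correct = [(t, l) for t, l, p in zip(texts, labels, predicted) if l == p]
          incorrect = [(t, l) for t, l, p in zip(texts, labels, predicted) if l != p]
          # The combined slice is the reduced content used for the final training pass.
          return correct[:n_correct] + incorrect[:n_incorrect]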
  • Publication number: 20200142960
    Abstract: Embodiments provide for class balancing for intent authoring using search via: receiving a positive example of an utterance associated with an intent; building an in-intent pool of utterances from a conversation log using the positive example in a first search query of the conversation log; adding the in-intent pool of utterances as a positive class to a training dataset; applying Boolean operators to negate the positive example to form a complement example; building an out-intent pool of utterances from the conversation log using the complement example in a second search query of the conversation log; and adding the out-intent pool of utterances as a complement class to the training dataset. The training dataset may be balanced to include a predefined ratio of positive and complement examples. The training dataset may be used to train or retrain an intent classifier.
    Type: Application
    Filed: November 5, 2018
    Publication date: May 7, 2020
    Inventors: Abhishek SHAH, Tin Kam HO
  • Publication number: 20200094192
    Abstract: A filtration apparatus includes a tubular casing having a longitudinal axis and first and second casing ends. A plurality of partition plates are positioned in the casing and sealed thereto to thereby define an intake collection chamber between a first of the partition plates and the first casing end, a discharge collection chamber between a second of the partition plates and the second casing end, and a reject collection chamber opposite the second partition plate from the second casing end. A plurality of elongated filtration membrane stacks are positioned side-by-side in the casing generally parallel to the longitudinal axis. Each filtration membrane stack includes an intake end which is fluidly connected to the intake collection chamber, a discharge end which is fluidly connected to the reject collection chamber, and a permeate channel which extends between the first and second ends and is fluidly connected to the discharge collection chamber.
    Type: Application
    Filed: June 22, 2017
    Publication date: March 26, 2020
    Inventors: Andrei Strikovski, Janardhan Davalath, Abhishek Shah, Loreen Ople Villacorte, Paul Verbeek, Thomas Krebs, Vivek Mehrotra, Rahul Ganguli
  • Publication number: 20200074673
    Abstract: Object tracking verification techniques are described as implemented by a computing device. In one example, points are selected on and along a boundary of an object to be tracked, e.g., in an initial frame of a digital video; these are referred to as “feature points.” Tracking of the feature points is verified by the computing device between frames. If the feature points have been found to deviate from the object, the feature points are reselected. To verify the feature points, the number of tracked feature points in a subsequent frame is compared against the number of feature points used to initiate tracking, with respect to a threshold. Based on this comparison, if the number of feature points “lost” in the subsequent frame is greater than the threshold, the feature points are reselected for tracking the object in subsequent frames of the video.
    Type: Application
    Filed: August 29, 2018
    Publication date: March 5, 2020
    Applicant: Adobe Inc.
    Inventors: Angad Kumar Gupta, Abhishek Shah
  • Publication number: 20200074984
    Abstract: Embodiments for training an automated response system using weak supervision and co-training in a computing environment are provided. A plurality of conversational logs comprising interactive dialog sessions between agents and clients for a given product or service are received. A subset of the plurality of conversational logs is retrieved according to a defined criterion, and a selected set of the subset of the plurality of retrieved conversational logs is labeled by a user. The labeling is associated with a semantic scope of intent considered by the clients. A combination of propagation operations and learning algorithms using the selected set of labeled conversational logs is applied to a remaining corpus of the plurality of conversational logs to train the automated response system according to the semantic scope of intent.
    Type: Application
    Filed: August 31, 2018
    Publication date: March 5, 2020
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Tin Kam HO, Robert L. YATES, Blake MCGREGOR, Rajendra G. UGRANI, Neil R. MALLINAR, Abhishek SHAH, Ayush GUPTA
  • Publication number: 20200051300
    Abstract: Systems and techniques are disclosed for automatically creating a group shot image by intelligently selecting a best frame of a video clip to use as a base frame and then intelligently merging features of other frames into the base frame. In an embodiment, this involves determining emotional alignment scores and eye scores for the individual frames of the video clip. The emotional alignment scores for the frames are determined by assessing the faces in each of the frames with respect to an emotional characteristic (e.g., happy, sad, neutral, etc.). The eye scores for the frames are determined based on assessing the states of the eyes (e.g., fully open, partially open, closed, etc.) of the faces in individual frames. Comprehensive scores for the individual frames are determined based on the emotional alignment scores and the eye scores, and the frame having the best comprehensive score is selected as the base frame.
    Type: Application
    Filed: October 17, 2019
    Publication date: February 13, 2020
    Inventors: Abhishek Shah, Andaleeb Fatima
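    A minimal Python sketch of the base-frame selection described in the abstract above; the weights combining the emotional-alignment and eye scores are illustrative assumptions, and the upstream face analysis that produces the per-frame scores is omitted.
      def select_base_frame(frames, emotion_weight=0.6, eye_weight=0.4):
          """frames: list of dicts with 'emotion_score' and 'eye_score' in [0, 1];
          returns the frame with the best comprehensive score."""
          def comprehensive(frame):
              return emotion_weight * frame["emotion_score"] + eye_weight * frame["eye_score"]
          return max(frames, key=comprehensive)

      if __name__ == "__main__":
          frames = [{"id": 0, "emotion_score": 0.7, "eye_score": 0.9},
                    {"id": 1, "emotion_score": 0.9, "eye_score": 0.4}]
          print(select_base_frame(frames))  # frame 0 wins on the combined score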
  • Patent number: D907523
    Type: Grant
    Filed: December 24, 2018
    Date of Patent: January 12, 2021
    Assignee: Sandeep Diamond Corporation
    Inventor: Abhishek Shah