Patents by Inventor Andrey Konin

Andrey Konin has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11947343
    Abstract: A system and method for optimizing an industrial assembly process in an industrial environment is disclosed. The system operates on an artificial intelligence (AI)-based conversational/GUI platform, where it receives user commands related to industrial assembly process improvement queries. By analyzing the received user commands, the system identifies the type of industrial assembly process mentioned by extracting relevant keywords or other attributes. Using a trained AI-based classification table, the system determines performance attributes associated with the identified type of process. The system leverages various sources such as domain knowledge, organization-specific knowledge bases, data from tools and internet-based services, and statistical measurements from the industrial environment.
    Type: Grant
    Filed: September 5, 2023
    Date of Patent: April 2, 2024
    Assignee: Retrocausal, Inc.
    Inventors: Muhammad Zeeshan Zia, Quoc-Huy Tran, Andrey Konin
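The command-to-attributes flow described in the abstract above can be sketched as a keyword lookup against a classification table. This is a minimal illustration, not the patented implementation; the process names, keywords, and attribute names below are all assumed for the example.

```python
# Hypothetical keyword table mapping process types to trigger keywords.
PROCESS_KEYWORDS = {
    "welding": ["weld", "seam", "torch"],
    "assembly": ["assemble", "fastener", "torque"],
}

# Hypothetical classification table mapping a process type to its
# performance attributes (stand-in for the trained AI-based table).
ATTRIBUTE_TABLE = {
    "welding": ["cycle_time", "defect_rate"],
    "assembly": ["takt_time", "first_pass_yield"],
}

def classify_command(command):
    """Identify the process type mentioned in a user command by keyword."""
    text = command.lower()
    for process, keywords in PROCESS_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return process
    return None

def performance_attributes(command):
    """Return the performance attributes for the identified process type."""
    process = classify_command(command)
    return ATTRIBUTE_TABLE.get(process, [])
```

A real system would replace the keyword match with a trained classifier, but the lookup structure is the same.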
  • Patent number: 11941080
    Abstract: A system and method for learning human activities from video demonstrations using video augmentation is disclosed. The method includes receiving original videos from one or more data sources. The method includes processing the received original videos using one or more video augmentation techniques to generate a set of augmented videos. Further, the method includes generating a set of training videos by combining the received original videos with the generated set of augmented videos. Also, the method includes generating a deep learning model for the received original videos based on the generated set of training videos. Further, the method includes learning the one or more human activities performed in the received original videos by deploying the generated deep learning model. The method includes outputting the learnt one or more human activities performed in the original videos.
    Type: Grant
    Filed: May 20, 2021
    Date of Patent: March 26, 2024
    Assignee: Retrocausal, Inc.
    Inventors: Quoc-Huy Tran, Muhammad Zeeshan Zia, Andrey Konin, Sanjay Haresh, Sateesh Kumar
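The training-set construction step above (originals combined with augmented copies) can be sketched as follows. The augmentation shown is a single illustrative transform (horizontal flip of frames represented as nested lists); the patent's techniques are not specified here and a real pipeline would add crops, color jitter, or temporal re-sampling.

```python
import random

def augment_frame(frame, seed):
    # Placeholder spatial augmentation: randomly flip a frame
    # (a list of pixel rows) horizontally, driven by a fixed seed
    # so results are reproducible.
    rng = random.Random(seed)
    if rng.random() < 0.5:
        return [row[::-1] for row in frame]
    return frame

def build_training_set(original_videos, n_augmented=2):
    """Combine original videos with augmented copies, yielding the
    enlarged training set used to fit the deep activity model."""
    training = list(original_videos)
    for vid_idx, video in enumerate(original_videos):
        for k in range(n_augmented):
            training.append([augment_frame(frame, seed=vid_idx * 100 + k)
                             for frame in video])
    return training
```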
  • Publication number: 20220383638
    Abstract: A system and method for determining sub-activities in videos and segmenting the videos is disclosed. The method includes extracting one or more batches from one or more videos and extracting one or more features from a set of frames associated with the one or more batches. The method further includes generating a set of predicted codes and determining a cross-entropy loss, a temporal coherence loss, and a final loss. Further, the method includes categorizing the set of frames into one or more predefined clusters and generating one or more segmented videos based on the categorized set of frames, the determined final loss, and the set of predicted codes by using an activity determination-based ML model. The method includes outputting the generated one or more segmented videos on a user interface screen of one or more electronic devices associated with one or more users.
    Type: Application
    Filed: May 25, 2022
    Publication date: December 1, 2022
    Inventors: Quoc-Huy Tran, Muhammad Zeeshan Zia, Andrey Konin, Sateesh Kumar, Sanjay Haresh, Awais Ahmed, Hamza Khan, Muhammad Shakeeb Hussain Siddiqui
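The loss terms named in the abstract above can be sketched numerically: a cross-entropy term on the predicted cluster codes plus a temporal-coherence term that penalizes embedding jumps between consecutive frames. The weighted combination shown here is an assumption for illustration; the patent does not disclose its exact weighting in this abstract.

```python
import math

def cross_entropy(predicted_probs, target_codes):
    """Mean negative log-likelihood of the target cluster codes."""
    return -sum(math.log(p[t]) for p, t in zip(predicted_probs, target_codes)) / len(target_codes)

def temporal_coherence(embeddings):
    """Penalize large squared distances between consecutive frame embeddings."""
    return sum(
        sum((a - b) ** 2 for a, b in zip(e1, e2))
        for e1, e2 in zip(embeddings, embeddings[1:])
    ) / max(len(embeddings) - 1, 1)

def final_loss(predicted_probs, target_codes, embeddings, alpha=0.1):
    # Assumed weighted combination of the two terms.
    return cross_entropy(predicted_probs, target_codes) + alpha * temporal_coherence(embeddings)
```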
  • Publication number: 20220374653
    Abstract: A system and method for learning human activities from video demonstrations using video augmentation is disclosed. The method includes receiving original videos from one or more data sources. The method includes processing the received original videos using one or more video augmentation techniques to generate a set of augmented videos. Further, the method includes generating a set of training videos by combining the received original videos with the generated set of augmented videos. Also, the method includes generating a deep learning model for the received original videos based on the generated set of training videos. Further, the method includes learning the one or more human activities performed in the received original videos by deploying the generated deep learning model. The method includes outputting the learnt one or more human activities performed in the original videos.
    Type: Application
    Filed: May 20, 2021
    Publication date: November 24, 2022
    Inventors: Quoc-Huy Tran, Muhammad Zeeshan Zia, Andrey Konin, Sanjay Haresh, Sateesh Kumar
  • Patent number: 11368756
    Abstract: A system and method for correlating video frames in a computing environment. The method includes receiving first video data and second video data from one or more data sources. The method further includes encoding the received first video data and the second video data using a machine learning network. Further, the method includes generating first embedding video data and second embedding video data corresponding to the received first video data and the received second video data. Additionally, the method includes determining a contrastive IDM temporal regularization value for the first video data and the second video data. The method further includes determining temporal alignment loss between the first video data and the second video data. Also, the method includes determining correlated video frames between the first video data and the second video data based on the determined temporal alignment loss and the determined contrastive IDM temporal regularization value.
    Type: Grant
    Filed: March 26, 2021
    Date of Patent: June 21, 2022
    Inventors: Quoc-Huy Tran, Muhammad Zeeshan Zia, Andrey Konin, Sanjay Haresh, Sateesh Kumar, Shahram Najam Syed
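The contrastive IDM regularization term mentioned above can be sketched as a pairwise penalty over frame embeddings: temporally nearby frames are pulled together while temporally distant frames are pushed at least a margin apart. The window size, margin, and uniform weighting here are assumptions for illustration; the patent's exact formulation may weight pairs differently.

```python
def contrastive_idm(embeddings, window=2, margin=4.0):
    """Sketch of a contrastive temporal regularizer over one video's
    frame embeddings: attract pairs within `window` frames of each
    other, repel pairs farther apart up to `margin`."""
    loss = 0.0
    n = len(embeddings)
    for i in range(n):
        for j in range(i + 1, n):
            d = sum((a - b) ** 2 for a, b in zip(embeddings[i], embeddings[j]))
            if j - i <= window:
                loss += d                      # attract temporally close frames
            else:
                loss += max(0.0, margin - d)   # repel temporally distant frames
    return loss
```

In the patented method this value is combined with a temporal alignment loss between the two videos; only the regularizer is sketched here.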
  • Patent number: 11216656
    Abstract: A system and method for management and evaluation of one or more human activities is disclosed. The method includes receiving live videos from data sources. The live videos comprise an activity performed by a human, and the activity comprises actions performed by the human. Further, the method includes detecting the actions performed by the human in the live videos using a neural network model. The method further includes generating a procedural instruction set for the activity performed by the human. Also, the method includes validating the quality of the detected actions performed by the human using the generated procedural instruction set. Furthermore, the method includes detecting anomalies in the actions performed by the human based on the results of validation. Additionally, the method includes generating rectifiable solutions for the detected anomalies. Moreover, the method includes outputting the rectifiable solutions on a user interface of a user device.
    Type: Grant
    Filed: June 21, 2021
    Date of Patent: January 4, 2022
    Inventors: Muhammad Zeeshan Zia, Quoc-Huy Tran, Andrey Konin
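The validation step above, comparing detected actions against a procedural instruction set to flag anomalies, can be sketched as an in-order scan. This is a simplified stand-in: it reports expected steps that are missing or performed out of order, while the patented system operates on neural-network detections from live video.

```python
def detect_anomalies(detected_actions, instruction_set):
    """Scan the detected action sequence for each expected step in order;
    a step that cannot be found after the previous match is flagged as
    missing or out of order."""
    anomalies = []
    cursor = 0
    for step in instruction_set:
        try:
            cursor = detected_actions.index(step, cursor) + 1
        except ValueError:
            anomalies.append(step)  # step missing or performed out of order
    return anomalies
```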
  • Patent number: 11132845
    Abstract: A method for object recognition includes, at a computing device, receiving an image of a real-world object. An identity of the real-world object is recognized using an object recognition model trained on a plurality of computer-generated training images. A digital augmentation model corresponding to the real-world object is retrieved, the digital augmentation model including a set of augmentation-specific instructions. A pose of the digital augmentation model is aligned with a pose of the real-world object. An augmentation is provided, the augmentation associated with the real-world object and specified by the augmentation-specific instructions.
    Type: Grant
    Filed: May 22, 2019
    Date of Patent: September 28, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Harpreet Singh Sawhney, Andrey Konin, Bilha-Catherine W. Githinji, Amol Ashok Ambardekar, William Douglas Guyman, Muhammad Zeeshan Zia, Ning Xu, Sheng Kai Tang, Pedro Urbina Escos
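The recognize-retrieve-align-augment pipeline described above can be sketched as a few function calls. The `recognizer` callable and `model_store` mapping are assumed interfaces invented for this example; the patented system's object recognition model and digital augmentation models are far richer.

```python
def augment_object(image, recognizer, model_store):
    """Sketch of the pipeline: recognize the real-world object and its
    pose, retrieve its digital augmentation model, align the model's
    pose with the observed pose, and return the augmentation-specific
    instructions to be provided."""
    identity, observed_pose = recognizer(image)
    model = model_store[identity]      # retrieve digital augmentation model
    model["pose"] = observed_pose      # align model pose with object pose
    return model["instructions"]
```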
  • Patent number: 11017690
    Abstract: A system for building computational models of a goal-driven task from demonstration is disclosed. A task recording subsystem receives a recorded video file or recorded sensor data representative of an expert demonstration of a task. An instructor authoring tool generates one or more sub-activity proposals and enables an instructor to specify one or more sub-activity labels upon modification of the one or more sub-activity proposals into one or more sub-tasks. A task learning subsystem learns the one or more sub-tasks represented in the demonstration of the task and builds an activity model to predict and locate the task being performed in the recorded video file. A task evaluation subsystem evaluates a live video representative of the task, generates at least one performance description statistic, identifies the type of activity step executed by the one or more actors, and provides activity guidance feedback in real time to the one or more actors.
    Type: Grant
    Filed: December 18, 2020
    Date of Patent: May 25, 2021
    Inventors: Muhammad Zeeshan Zia, Quoc-Huy Tran, Andrey Konin, Sanjay Haresh, Sateesh Kumar
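The sub-activity proposal step of the authoring tool above can be sketched as grouping consecutive frames that share the same automatically predicted label into candidate segments for the instructor to relabel. The per-frame labels are assumed inputs here; the patent derives them from the recorded demonstration.

```python
def propose_sub_activities(frame_labels):
    """Group runs of identical per-frame labels into sub-activity
    proposals, each as (label, start_frame, end_frame) inclusive."""
    proposals = []
    start = 0
    for i in range(1, len(frame_labels) + 1):
        if i == len(frame_labels) or frame_labels[i] != frame_labels[start]:
            proposals.append((frame_labels[start], start, i - 1))
            start = i
    return proposals
```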
  • Patent number: 10902250
    Abstract: Facilitating input to a computing system by displaying an input area on the palm of a human hand and allowing easy input-mode changes using gestures of that hand. Computer vision is used to detect the palm of a human hand. Augmented reality is used to display the input area on the palm of that display human hand. Computer vision may then be used to detect when the user's other hand interfaces with the input area. The input area has multiple input modes that each define how input from the input human hand is interpreted by the computing system. In response to the computer vision detecting an input-mode-changing gesture of the display human hand, the computing system changes the input mode of the input area so as to change how input provided by the input human hand is interpreted by the computing system.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: January 26, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Andrey Konin, Michael Bleyer, Yuri Pekelny
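The mode-switching behavior described above can be sketched as a small state machine: a recognized gesture of the display hand cycles the input mode, which changes how touches from the input hand are interpreted. The gesture name and mode names below are illustrative, not from the patent.

```python
class PalmInputArea:
    """Sketch of a palm-anchored input area with multiple input modes."""

    MODES = ["keyboard", "numeric", "drawing"]  # assumed mode set

    def __init__(self):
        self.mode = "keyboard"

    def on_gesture(self, gesture):
        # A mode-changing gesture of the display hand cycles the mode.
        if gesture == "mode_switch":
            i = self.MODES.index(self.mode)
            self.mode = self.MODES[(i + 1) % len(self.MODES)]

    def interpret_touch(self, position):
        # How a touch from the input hand is interpreted depends on
        # the current input mode.
        return (self.mode, position)
```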
  • Publication number: 20200372715
    Abstract: A method for object recognition includes, at a computing device, receiving an image of a real-world object. An identity of the real-world object is recognized using an object recognition model trained on a plurality of computer-generated training images. A digital augmentation model corresponding to the real-world object is retrieved, the digital augmentation model including a set of augmentation-specific instructions. A pose of the digital augmentation model is aligned with a pose of the real-world object. An augmentation is provided, the augmentation associated with the real-world object and specified by the augmentation-specific instructions.
    Type: Application
    Filed: May 22, 2019
    Publication date: November 26, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Harpreet Singh Sawhney, Andrey Konin, Bilha-Catherine W. Githinji, Amol Ashok Ambardekar, William Douglas Guyman, Muhammad Zeeshan Zia, Ning Xu, Sheng Kai Tang, Pedro Urbina Escos
  • Publication number: 20200202121
    Abstract: Facilitating input to a computing system by displaying an input area on the palm of a human hand and allowing easy input-mode changes using gestures of that hand. Computer vision is used to detect the palm of a human hand. Augmented reality is used to display the input area on the palm of that display human hand. Computer vision may then be used to detect when the user's other hand interfaces with the input area. The input area has multiple input modes that each define how input from the input human hand is interpreted by the computing system. In response to the computer vision detecting an input-mode-changing gesture of the display human hand, the computing system changes the input mode of the input area so as to change how input provided by the input human hand is interpreted by the computing system.
    Type: Application
    Filed: December 21, 2018
    Publication date: June 25, 2020
    Inventors: Andrey Konin, Michael Bleyer, Yuri Pekelny
  • Patent number: 10445942
    Abstract: Techniques are provided to rank a set of candidate hologram placement locations based on a defined set of rules or criteria. An environment's spatial mapping, a 3D representation of the environment, is initially accessed. By analyzing this spatial mapping, an unordered set of candidate hologram placement locations is generated: locations within the environment identified as potentially suitable for hosting a hologram. To be selected, each candidate location must satisfy certain minimum criteria. The candidate locations are then ranked based on a set of pre-defined hologram placement criteria to produce a set of ranked candidate hologram placement locations. This ranked set is then exposed by making it accessible during a hologram placement event.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: October 15, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yuri Pekelny, Andrey Konin, Michael Bleyer
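The filter-then-rank flow described in the last abstract can be sketched as follows. The specific criteria (surface flatness, area) and the linear weighted score are assumptions for illustration; the patent only specifies that candidates must meet minimum criteria and are then ranked by pre-defined placement criteria.

```python
def rank_placements(candidates, criteria):
    """Drop candidate locations that fail the minimum criteria, then rank
    the survivors by a weighted score (higher is better)."""
    viable = [c for c in candidates
              if c["flatness"] >= criteria["min_flatness"]
              and c["area"] >= criteria["min_area"]]
    return sorted(viable,
                  key=lambda c: (criteria["w_flat"] * c["flatness"]
                                 + criteria["w_area"] * c["area"]),
                  reverse=True)
```

The ranked list would then be exposed to the hologram placement event in place of the unordered candidate set.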