Patents by Inventor Eduard Oks

Eduard Oks has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11961601
    Abstract: To assist a user in the correct performance of an activity, video data is acquired. A pose of the user is determined from the video data and an avatar is generated representing the user in the pose. The pose of the user is compared to one or more other poses representing correct performance of the activity to determine one or more differences that may represent errors by the user. Depending on the activity that is being performed, some errors may be presented to the user during performance of the activity, while other errors may be presented after performance of the activity has ceased. To present an indication of an error, a specific body part or other portion of the avatar that corresponds to a difference between the user's pose and a correct pose may be presented along with an instruction regarding correct performance of the activity.
    Type: Grant
    Filed: July 2, 2020
    Date of Patent: April 16, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Imry Kissos, Joel Wilson Brown, Ilia Vitsnudel, Omer Meir, Lior Fritz, Matan Goldman, Eduard Oks
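    The entry above describes comparing a user's detected pose to a reference pose and calling out the body parts whose deviation indicates an error. The Python sketch below is illustrative only and not the patented method: it assumes poses arrive as arrays of normalized 2D joint coordinates, and the joint list, threshold, and function names are all hypothetical.

        import numpy as np

        # Hypothetical joint order for a simple 2D skeleton.
        JOINTS = ["left_shoulder", "left_elbow", "left_wrist",
                  "right_shoulder", "right_elbow", "right_wrist",
                  "left_hip", "left_knee", "right_hip", "right_knee"]

        def pose_errors(user_pose, reference_pose, threshold=0.15):
            """Return the joints whose deviation from the reference pose exceeds a threshold.

            user_pose, reference_pose: (N, 2) arrays of joint coordinates, assumed to be
            normalized (e.g., torso length = 1) so body size and camera distance do not
            dominate the comparison.
            """
            deviations = np.linalg.norm(user_pose - reference_pose, axis=1)
            return [(JOINTS[i], float(d)) for i, d in enumerate(deviations) if d > threshold]

        # Toy example: the user's right knee drifts away from the reference squat pose.
        reference = np.random.default_rng(0).uniform(0, 1, size=(len(JOINTS), 2))
        user = reference.copy()
        user[JOINTS.index("right_knee")] += np.array([0.3, 0.0])

        for joint, deviation in pose_errors(user, reference):
            print(f"Check your {joint.replace('_', ' ')} (deviation {deviation:.2f})")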
  • Patent number: 11955145
    Abstract: Video output is synchronized to the actions of a user by determining positions of the user's body based on acquired video of the user. The positions of the user's body are compared to the positions of a body shown in the video output to determine corresponding positions in the video output. The video output may then be synchronized so that the subsequent output that is shown corresponds to the subsequent position attempted by the user. The rate of movement of the user may be used to determine output characteristics for the video to cause the body shown in the video output to appear to move at a similar rate to that of the user. If the user moves at a rate less than a threshold or performs an activity erroneously, the video output may be slowed or portions of the video output may be repeated.
    Type: Grant
    Filed: December 7, 2020
    Date of Patent: April 9, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Dotan Kaufman, Guy Adam, Eran Borenstein, Ianir Ideses, Eduard Oks, Noam Sorek
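    The entry above synchronizes prerecorded video to the user's own movement: find the output frame whose pose best matches the user's current pose and slow playback when the user falls behind. The sketch below is a rough illustration under those assumptions, not the patented method; the pose arrays, rate units, and minimum-rate floor are all made up.

        import numpy as np

        def next_frame_and_rate(user_pose, video_poses, current_frame,
                                user_rate, nominal_rate, min_rate=0.5):
            """Pick the output frame that best matches the user's current pose and a
            playback rate scaled to the user's measured rate of movement.

            user_pose: (N, 2) array of the user's joint positions.
            video_poses: (F, N, 2) array of joint positions for each output video frame.
            user_rate / nominal_rate: e.g., repetitions per second, measured vs. expected.
            """
            # Frame whose pose is closest to the user's pose.
            distances = np.linalg.norm(video_poses - user_pose, axis=(1, 2))
            best_frame = int(np.argmin(distances))

            # Never jump backwards; repeating the current segment handles slow users.
            best_frame = max(best_frame, current_frame)

            # Slow the output down (to a floor) when the user moves slower than expected.
            rate = max(min_rate, user_rate / nominal_rate)
            return best_frame, rate

        rng = np.random.default_rng(1)
        video_poses = rng.uniform(0, 1, size=(120, 12, 2))              # 120 frames, 12 joints
        user_pose = video_poses[40] + rng.normal(0, 0.02, size=(12, 2))
        print(next_frame_and_rate(user_pose, video_poses, current_frame=35,
                                  user_rate=0.8, nominal_rate=1.0))     # (~40, 0.8)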
  • Patent number: 11880492
    Abstract: First video data representing performance of an activity by a first user is acquired. Poses of the first user are determined from the first video data. Second video data is generated based on the determined poses and based on appearance data that represents a second user, such as a model, paid performer, and so forth, in various poses. The resulting second video data depicts the second user performing the same poses as the first user. The second video data may then be sent to a recipient. For example, a participant in an exercise class may send a video to an instructor that depicts what appears to be the paid performer performing the poses, instead of the participant. As a result, video data showing the participant is not shared, protecting the privacy of the participant while still allowing them to participate and interact.
    Type: Grant
    Filed: September 14, 2020
    Date of Patent: January 23, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Dotan Kaufman, Guy Adam, Eran Borenstein, Ianir Ideses, Eduard Oks, Noam Sorek
  • Patent number: 11861944
    Abstract: Video output is generated based on first video data that depicts the user performing an activity. Poses of the user during performance of the activity are compared with second video data that depicts an instructor performing the activity. Corresponding poses of the user's body and the instructor's body may be determined through comparison of the first and second video data. The video data is used to determine the rate of motion of the user and to generate video output in which a visual representation of the instructor moves at a rate similar to that of the user. For example, video output generated based on an instructional fitness video may be synchronized so that movement of the presented instructor matches the rate of movement of the user performing an exercise, improving user comprehension and performance.
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: January 2, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Ido Yerushalmy, Ianir Ideses, Eli Alshan, Mark Kliger, Liza Potikha, Dotan Kaufman, Sharon Alpert, Eduard Oks, Noam Sorek
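    The entry above hinges on estimating the user's rate of motion from pose data and then playing the instructor video at a matching rate. The sketch below shows one simple way to estimate a repetition period from a single pose-derived signal; it is an illustration only, and the signal choice, zero-crossing method, and instructor period are assumptions.

        import numpy as np

        def movement_period(signal, fps):
            """Estimate the period (seconds) of a roughly repetitive movement signal,
            e.g., the vertical hip position over time while the user exercises."""
            x = signal - signal.mean()
            # One rising zero crossing per repetition of a roughly sinusoidal movement.
            rising = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]
            if len(rising) < 2:
                return None
            return float(np.mean(np.diff(rising))) / fps

        # Toy example: a user squatting once every 2 seconds, captured at 30 fps.
        fps = 30
        t = np.arange(0, 20, 1 / fps)
        hip_height = 0.9 + 0.1 * np.sin(2 * np.pi * t / 2.0)

        user_period = movement_period(hip_height, fps)
        instructor_period = 1.5     # hypothetical repetition period in the source video
        playback_rate = instructor_period / user_period
        print(f"user period {user_period:.2f}s -> play instructor video at {playback_rate:.2f}x")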
  • Patent number: 11810597
    Abstract: Devices, systems and methods are disclosed for improving story assembly and video summarization. For example, video clips may be received and a theme may be determined from the received video clips based on annotation data or other characteristics of the received video data. Individual moments may be extracted from the video clips, based on the selected theme and the annotation data. The moments may be ranked based on a priority metric corresponding to content determined to be desirable for purposes of video summarization. Select moments may be chosen based on the priority metric and a structure may be determined based on the selected theme. Finally, a video summarization may be generated using the selected theme and the structure, the video summarization including the select moments.
    Type: Grant
    Filed: October 4, 2021
    Date of Patent: November 7, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Matthew Alan Townsend, Rohith Mysore Vijaya Kumar, Yadunandana Nagaraja Rao, Ambrish Tyagi, Eduard Oks, Apoorv Chaudhri
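    The entry above ranks candidate moments by a priority metric and assembles the highest-ranked ones into a summary. The sketch below shows only that rank-and-select step; the features, weights, and greedy selection are invented for illustration and are not the patented method.

        from dataclasses import dataclass

        @dataclass
        class Moment:
            clip_id: str
            start: float            # seconds within the source clip
            end: float
            faces: int              # hypothetical annotation-derived features
            motion: float           # 0..1 amount of motion
            matches_theme: bool

        def priority(m: Moment) -> float:
            """Toy priority metric combining annotation-derived features; weights are made up."""
            return 2.0 * m.faces + 1.5 * m.motion + (1.0 if m.matches_theme else 0.0)

        def summarize(moments, target_duration=30.0):
            """Greedily keep the highest-priority moments until the target duration is filled."""
            chosen, total = [], 0.0
            for m in sorted(moments, key=priority, reverse=True):
                length = m.end - m.start
                if total + length <= target_duration:
                    chosen.append(m)
                    total += length
            # Order the selection chronologically so the summary follows the story structure.
            return sorted(chosen, key=lambda m: (m.clip_id, m.start))

        moments = [
            Moment("clip1", 3.0, 9.0, faces=2, motion=0.8, matches_theme=True),
            Moment("clip1", 20.0, 26.0, faces=0, motion=0.2, matches_theme=False),
            Moment("clip2", 1.0, 12.0, faces=1, motion=0.9, matches_theme=True),
            Moment("clip3", 5.0, 11.0, faces=3, motion=0.4, matches_theme=True),
        ]
        for m in summarize(moments):
            print(m.clip_id, m.start, m.end, round(priority(m), 2))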
  • Patent number: 11783542
    Abstract: Devices and techniques are generally described for three dimensional mesh generation. In various examples, first two-dimensional (2D) image data representing a human body may be received from a first image sensor. Second 2D image data representing the human body may be received from a second image sensor. A first pose parameter and a first shape parameter may be determined using a first three-dimensional (3D) mesh prediction model and the first 2D image data. A second pose parameter and a second shape parameter may be determined using a second 3D mesh prediction model and the second 2D image data. In various examples, an updated 3D mesh prediction model may be generated from the first 3D mesh prediction model based at least in part on a first difference between the first pose parameter and the second pose parameter and a second difference between the first shape parameter and the second shape parameter.
    Type: Grant
    Filed: September 29, 2021
    Date of Patent: October 10, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Matan Goldman, Lior Fritz, Omer Meir, Imry Kissos, Yaar Harari, Eduard Oks, Mark Kliger
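    The entry above updates one 3D mesh prediction model by penalizing disagreement between the pose and shape parameters that two models predict from different camera views of the same person. The sketch below shows only that cross-view consistency term; the SMPL-style parameter sizes (72 pose values, 10 shape values) and the assumption that the pose parameters are view-independent joint rotations are mine, not the patent's.

        import numpy as np

        def consistency_loss(pose_a, shape_a, pose_b, shape_b, w_pose=1.0, w_shape=1.0):
            """Mean squared disagreement between the two models' pose and shape estimates;
            minimizing this pushes the updated model toward agreement across views."""
            pose_term = np.mean((pose_a - pose_b) ** 2)
            shape_term = np.mean((shape_a - shape_b) ** 2)
            return w_pose * pose_term + w_shape * shape_term

        rng = np.random.default_rng(0)
        pose_cam1, shape_cam1 = rng.normal(size=72), rng.normal(size=10)   # model A, view 1
        pose_cam2 = pose_cam1 + rng.normal(scale=0.05, size=72)            # model B, view 2
        shape_cam2 = shape_cam1 + rng.normal(scale=0.05, size=10)

        loss = consistency_loss(pose_cam1, shape_cam1, pose_cam2, shape_cam2)
        print(f"consistency loss to minimize when updating the model: {loss:.4f}")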
  • Patent number: 11771863
    Abstract: Systems for assisting a user in performance of a meditation activity or another type of activity are described. The systems receive user input and sensor data indicating physiological values associated with the user. These values are used to determine a recommended type of activity and a length of time for the activity. While the user performs the activity, sensors are used to measure physiological values, and an output that is provided to the user is selected based on the measured physiological values. The output may be selected to assist the user in reaching target physiological values, such as a slower respiration rate. After completion of the activity, additional physiological values are used to determine the effectiveness of the activity and the output that was provided. The effectiveness of the activity and the output may be used to determine future recommendations and future output.
    Type: Grant
    Filed: December 11, 2019
    Date of Patent: October 3, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Eli Alshan, Mark Kliger, Ido Yerushalmy, Liza Potikha, Dotan Kaufman, Ianir Ideses, Eduard Oks, Noam Sorek
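    The entry above selects an activity and adapts the guidance it gives based on measured physiological values. The sketch below is a deliberately simple rule-based stand-in for that selection logic; every threshold, cue, and session length is invented for illustration.

        def select_breathing_cue(respiration_rate_bpm, target_bpm=6.0):
            """Pick a guidance cue from the gap between the measured and target respiration
            rate; thresholds and wording are purely illustrative."""
            if respiration_rate_bpm <= target_bpm + 1:
                return "Great - hold this slow, steady rhythm."
            if respiration_rate_bpm <= target_bpm + 4:
                return "Lengthen each exhale slightly."
            return "Let's slow down: breathe in for 4 counts, out for 6."

        def recommend_session(resting_heart_rate, reported_stress):
            """Toy recommendation: higher stress or heart rate -> a longer breathing session."""
            minutes = 5
            if reported_stress >= 7 or resting_heart_rate > 85:
                minutes = 15
            elif reported_stress >= 4 or resting_heart_rate > 75:
                minutes = 10
            return {"activity": "guided breathing", "minutes": minutes}

        print(recommend_session(resting_heart_rate=88, reported_stress=6))
        print(select_breathing_cue(respiration_rate_bpm=11))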
  • Patent number: 11682237
    Abstract: A first user generates video data for performance of an activity, such as a fitness exercise, by performing the activity in front of a camera. Based on the video data, the amount of movement of different parts of the first user's body is determined. Data representing the position of the first user over time is generated. The data may take the form of a function or a signal that is based on the function. The locations of body parts that move significantly are prioritized over other body parts when determining this data. At a subsequent time, a second user performs the activity. The number of times the second user completes the activity is counted by determining the number of times the second user reaches a position corresponding to a maximum value in the data representing the position of the first user.
    Type: Grant
    Filed: September 4, 2020
    Date of Patent: June 20, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Eran Borenstein, Guy Adam, Dotan Kaufman, Ianir Ideses, Eduard Oks, Noam Sorek, Lior Fritz, Omer Meir, Imry Kissos, Matan Goldman
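    The entry above counts repetitions by collapsing the positions of the most-moving body parts into a single signal and counting returns to the reference position. The sketch below counts peaks in such a 1D signal; the normalization, hysteresis thresholds, and toy signal are assumptions for illustration, not the patented method.

        import numpy as np

        def count_reps(signal, margin=0.1):
            """Count how many times a 1D position signal reaches its "top" position,
            requiring a return to the "bottom" between counts (simple hysteresis)."""
            x = (signal - signal.min()) / (signal.max() - signal.min() + 1e-9)
            reps, armed = 0, True
            for value in x:
                if armed and value > 1.0 - margin:      # reached the top position
                    reps += 1
                    armed = False
                elif value < margin:                    # returned to the bottom position
                    armed = True
            return reps

        # Toy signal: 10 repetitions of a smooth up/down movement at 30 fps.
        t = np.arange(0, 20, 1 / 30)
        signal = 0.5 - 0.5 * np.cos(2 * np.pi * t / 2.0)
        print(count_reps(signal))   # 10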
  • Patent number: 11587271
    Abstract: First image data representing a first human wearing a first article of clothing may be received. The first image data, when rendered on a display, may include a first photometric artifact. A first generator network may be used to generate second image data from the first image data. The first photometric artifact may be removed from the second image data. A second generator network may be used to generate third image data from the second image data, the third image data representing the first human in a different pose relative to the first image data. Fourth image data representing the first article of clothing segmented from the first human may be generated and displayed on a display.
    Type: Grant
    Filed: September 1, 2020
    Date of Patent: February 21, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Assaf Neuberger, Alexander Lorbert, Arik Poznanski, Eduard Oks, Sharon Alpert, Bar Hilleli
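    The entry above chains two generator networks (artifact cleanup, then re-posing) before segmenting the garment. The PyTorch sketch below only mirrors that three-stage data flow with tiny stand-in networks; the architectures, channel counts, and pose-heatmap conditioning are assumptions, not the patented models.

        import torch
        import torch.nn as nn

        class TinyGenerator(nn.Module):
            """Toy stand-in for an image-to-image generator (a U-Net or similar in practice)."""
            def __init__(self, in_ch, out_ch):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, out_ch, 3, padding=1),
                )
            def forward(self, x):
                return self.net(x)

        g_cleanup = TinyGenerator(3, 3)   # stage 1: remove photometric artifacts (RGB -> RGB)
        g_repose = TinyGenerator(4, 3)    # stage 2: re-pose, conditioned on a pose heatmap
        seg_head = TinyGenerator(3, 1)    # stage 3: predict a clothing mask for segmentation

        image = torch.rand(1, 3, 64, 64)        # photo of a person wearing the garment
        target_pose = torch.rand(1, 1, 64, 64)  # target pose rendered as a heatmap

        cleaned = g_cleanup(image)
        reposed = g_repose(torch.cat([cleaned, target_pose], dim=1))
        garment_mask = torch.sigmoid(seg_head(reposed))
        garment_only = reposed * garment_mask   # garment segmented from the person
        print(garment_only.shape)               # torch.Size([1, 3, 64, 64])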
  • Publication number: 20220261574
    Abstract: Characteristics of a user's movement are evaluated based on performance of activities by a user within a field of view of a camera. Video data representing performance of a series of movements by the user is acquired by the camera. Pose data is determined based on the video data, the pose data representing positions of the user's body while performing the movements. The pose data is compared to a set of existing videos that correspond to known errors to identify errors performed by the user. The errors may be used to generate scores for various characteristics of the user's movement. Based on the errors, exercises or other activities to improve the movement of the user may be determined and included in an output presented to the user.
    Type: Application
    Filed: February 16, 2021
    Publication date: August 18, 2022
    Inventors: Eduard Oks, Ridge Carpenter, Lamarr Smith, Claire McGowan, Elizabeth Reisman, Ianir Ideses, Eli Alshan, Mark Kliger, Matan Goldman, Liza Potikha, Ido Yerushalmy, Dotan Kaufman, Guy Adam, Omer Meir, Lior Fritz, Imry Kissos, Georgy Melamed, Eran Borenstein, Sharon Alpert, Noam Sorek
  • Publication number: 20220122639
    Abstract: Devices, systems and methods are disclosed for improving story assembly and video summarization. For example, video clips may be received and a theme may be determined from the received video clips based on annotation data or other characteristics of the received video data. Individual moments may be extracted from the video clips, based on the selected theme and the annotation data. The moments may be ranked based on a priority metric corresponding to content determined to be desirable for purposes of video summarization. Select moments may be chosen based on the priority metric and a structure may be determined based on the selected theme. Finally, a video summarization may be generated using the selected theme and the structure, the video summarization including the select moments.
    Type: Application
    Filed: October 4, 2021
    Publication date: April 21, 2022
    Inventors: Matthew Alan Townsend, Rohith Mysore Vijaya Kumar, Yadunandana Nagaraja Rao, Ambrish Tyagi, Eduard Oks, Apoorv Chaudhri
  • Publication number: 20220067994
    Abstract: Devices and techniques are generally described for catalog normalization and segmentation for fashion images. First image data representing a first human wearing a first article of clothing may be received. The first image data, when rendered on a display, may include a first photometric artifact. A first generator network may be used to generate second image data from the first image data. The first photometric artifact may be removed from the second image data. A second generator network may be used to generate third image data from the second image data, the third image data representing the first human in a different pose relative to the first image data. Fourth image data representing the first article of clothing segmented from the first human may be generated and displayed on a display.
    Type: Application
    Filed: September 1, 2020
    Publication date: March 3, 2022
    Inventors: Assaf Neuberger, Alexander Lorbert, Arik Poznanski, Eduard Oks, Sharon Alpert, Bar Hilleli
  • Patent number: 11158344
    Abstract: Devices, systems and methods are disclosed for improving story assembly and video summarization. For example, video clips may be received and a theme may be determined from the received video clips based on annotation data or other characteristics of the received video data. Individual moments may be extracted from the video clips, based on the selected theme and the annotation data. The moments may be ranked based on a priority metric corresponding to content determined to be desirable for purposes of video summarization. Select moments may be chosen based on the priority metric and a structure may be determined based on the selected theme. Finally, a video summarization may be generated using the selected theme and the structure, the video summarization including the select moments.
    Type: Grant
    Filed: September 30, 2015
    Date of Patent: October 26, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Matthew Alan Townsend, Rohith Mysore Vijaya Kumar, Yadunandana Nagaraja Rao, Ambrish Tyagi, Eduard Oks, Apoorv Chaudhri
  • Patent number: 10614342
    Abstract: Techniques are generally described for performing outfit recommendation using a recurrent neural network. In various examples, a computing device may receive a first state vector representing an outfit comprising at least one article of clothing. First image data depicting a second article of clothing of a first clothing category may be received. A recurrent neural network may generate a first output feature vector based on the first state vector, the first image data and the first clothing category. The first output feature vector may be compared to other feature vectors representing other articles of clothing in the first category to determine distances between the first output feature vector and the other feature vectors. A set of articles of clothing may be recommended based on the distances between the first output feature vector and the other feature vectors.
    Type: Grant
    Filed: December 11, 2017
    Date of Patent: April 7, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Alexander Lorbert, Eduard Oks
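    The entry above scores candidate garments by comparing a feature vector produced by a recurrent network for the current outfit against feature vectors of candidate items, and recommends the closest ones. The sketch below shows only that distance-based ranking; the 128-dimensional vectors and random data stand in for whatever embeddings the real system would use.

        import numpy as np

        def recommend(outfit_vector, candidate_features, top_k=3):
            """Rank candidate items in the requested category by Euclidean distance between
            the outfit's output feature vector and each candidate's feature vector."""
            ids = list(candidate_features)
            features = np.stack([candidate_features[i] for i in ids])
            distances = np.linalg.norm(features - outfit_vector, axis=1)
            order = np.argsort(distances)[:top_k]
            return [(ids[i], float(distances[i])) for i in order]

        rng = np.random.default_rng(0)
        outfit_vector = rng.normal(size=128)                    # pretend RNN output vector
        candidates = {f"shoe_{i}": rng.normal(size=128) for i in range(50)}
        candidates["shoe_7"] = outfit_vector + rng.normal(scale=0.1, size=128)  # a close match

        print(recommend(outfit_vector, candidates))             # shoe_7 ranks first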
  • Patent number: 10580453
    Abstract: A system and method for determining video clips including interesting content from video data. The system may receive annotation data identifying time and positions corresponding to objects represented in the video data and the system may determine priority metrics associated with each of the objects. By associating the priority metrics with the time and positions corresponding to the objects, the system may generate a priority metric map indicating a time and position of interesting moments in the video data. The system may generate moments and/or video clips based on the priority metric map. The system may determine a time (e.g., video frames) and/or space (e.g., pixel coordinates) associated with the moments/video clips and may simulate camera motion such as panning and/or zooming with the moments/video clips. The system may generate a Master Clip Table including the moments, video clips and/or annotation data associated with the moments/video clips.
    Type: Grant
    Filed: April 5, 2017
    Date of Patent: March 3, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Matthew Alan Townsend, Moshe Bouhnik, Konstantin Kraimer, Eduard Oks
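    The entry above builds a priority metric map over time and position, extracts the highest-priority windows as clips, and simulates camera motion within them. The sketch below shows a per-frame version of that idea (best fixed-length window plus a smoothed pan track); the window length, smoothing factor, and synthetic data are assumptions for illustration.

        import numpy as np

        def best_clip(priority, window):
            """Return (start, end) frame indices of the length-`window` span with the
            highest total priority."""
            scores = np.convolve(priority, np.ones(window), mode="valid")
            start = int(np.argmax(scores))
            return start, start + window

        def pan_track(centers, smoothing=0.9):
            """Simulate a camera pan by exponentially smoothing per-frame object centers."""
            track, cam = [], centers[0]
            for c in centers:
                cam = smoothing * cam + (1 - smoothing) * c
                track.append(cam)
            return np.array(track)

        rng = np.random.default_rng(0)
        priority = rng.random(300)
        priority[120:180] += 2.0                                     # an "interesting" stretch
        centers = np.cumsum(rng.normal(scale=2.0, size=300)) + 640   # noisy object x positions

        start, end = best_clip(priority, window=60)
        camera_x = pan_track(centers[start:end])
        print(f"clip frames {start}-{end}, pan from x={camera_x[0]:.0f} to x={camera_x[-1]:.0f}")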
  • Patent number: 10554850
    Abstract: Devices, systems and methods are disclosed for reducing a perceived latency associated with uploading and annotating video data. For example, video data may be divided into video sections that are uploaded individually so that the video sections may be annotated as they are received. This reduces a latency associated with the annotation process, as a portion of the video data is annotated before an entirety of the video data is uploaded. In addition, the annotation data may be used to generate a master clip table and extract individual video clips from the video data.
    Type: Grant
    Filed: March 7, 2019
    Date of Patent: February 4, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Matthew Alan Townsend, Eduard Oks, Rohith Mysore Vijaya Kumar, Apoorv Chaudhri, Yadunandana Nagaraja Rao, Ambrish Tyagi
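    The entry above reduces perceived latency by uploading a video in sections and annotating each section as soon as it arrives, instead of waiting for the whole upload. The sketch below mimics that overlap with a thread pool and sleep-based stand-ins for upload and annotation; the section count, timings, and labels are all hypothetical.

        import concurrent.futures
        import time

        def upload_section(section_id):
            """Stand-in for uploading one section of the video."""
            time.sleep(0.1)
            return section_id

        def annotate_section(section_id):
            """Stand-in for the annotation pass (e.g., detecting faces) on one section."""
            time.sleep(0.2)
            return {"section": section_id, "labels": ["face", "smile"]}   # hypothetical labels

        def upload_and_annotate(num_sections=5):
            annotations = []
            with concurrent.futures.ThreadPoolExecutor() as pool:
                pending = []
                for section in range(num_sections):
                    uploaded = upload_section(section)                       # sections upload in order
                    pending.append(pool.submit(annotate_section, uploaded))  # annotate immediately
                for future in pending:
                    annotations.append(future.result())
            return annotations   # a master clip table would be built from these annotations

        print(upload_and_annotate())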
  • Patent number: 10540757
    Abstract: A computer-implemented method includes receiving first pose data for a first human represented in a first image, receiving second pose data for a second human represented in a second image, receiving first semantic segmentation data for the first image, and receiving second semantic segmentation data for the second image. A pose-aligned second image can be generated by modifying the second image based on the first pose data, the second pose data, the first semantic segmentation data, and the second semantic segmentation data. A mixed image can be determined by combining pixel values from the first image and pixel values of the pose-aligned second image according to mask data. In some embodiments, the mixed image includes a representation of an outfit that includes first clothing represented in the first image and second clothing represented in the second image.
    Type: Grant
    Filed: March 12, 2018
    Date of Patent: January 21, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Moshe Bouhnik, Hilit Unger, Eduard Oks, Noam Sorek
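    The entry above combines pixels from a first image and a pose-aligned second image according to mask data, so the mixed image shows an outfit made of clothing from both photos. The sketch below is just that per-pixel mix; it assumes the second image has already been pose-aligned and that a binary garment mask is available, both of which are the harder steps the patent actually addresses.

        import numpy as np

        def mix_outfit(image_a, image_b_aligned, mask):
            """Take pixels from image_a where mask == 1 (e.g., the shirt) and from the
            pose-aligned image_b elsewhere (e.g., the pants)."""
            mask = mask[..., None].astype(np.float32)    # (H, W) -> (H, W, 1) for broadcasting
            return (mask * image_a + (1.0 - mask) * image_b_aligned).astype(np.uint8)

        h, w = 256, 192
        image_a = np.full((h, w, 3), 200, dtype=np.uint8)           # stand-in photo A
        image_b_aligned = np.full((h, w, 3), 60, dtype=np.uint8)    # photo B, pose-aligned to A
        mask = np.zeros((h, w), dtype=np.uint8)
        mask[:128, :] = 1                                           # upper body from image A

        mixed = mix_outfit(image_a, image_b_aligned, mask)
        print(mixed.shape, mixed[0, 0], mixed[-1, -1])   # top pixels from A, bottom from B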
  • Publication number: 20190273837
    Abstract: Devices, systems and methods are disclosed for reducing a perceived latency associated with uploading and annotating video data. For example, video data may be divided into video sections that are uploaded individually so that the video sections may be annotated as they are received. This reduces a latency associated with the annotation process, as a portion of the video data is annotated before an entirety of the video data is uploaded. In addition, the annotation data may be used to generate a master clip table and extract individual video clips from the video data.
    Type: Application
    Filed: March 7, 2019
    Publication date: September 5, 2019
    Inventors: Matthew Alan Townsend, Eduard Oks, Rohith Mysore Vijaya Kumar, Apoorv Chaudhri, Yadunandana Nagaraja Rao, Ambrish Tyagi
  • Patent number: 10230866
    Abstract: Devices, systems and methods are disclosed for reducing a perceived latency associated with uploading and annotating video data. For example, video data may be divided into video sections that are uploaded individually so that the video sections may be annotated as they are received. This reduces a latency associated with the annotation process, as a portion of the video data is annotated before an entirety of the video data is uploaded. In addition, the annotation data may be used to generate a master clip table and extract individual video clips from the video data.
    Type: Grant
    Filed: September 30, 2015
    Date of Patent: March 12, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Matthew Alan Townsend, Eduard Oks, Rohith Mysore Vijaya Kumar, Apoorv Chaudhri, Yadunandana Nagaraja Rao, Ambrish Tyagi
  • Patent number: 9620168
    Abstract: A system and method for determining video clips including interesting content from video data. The system may receive annotation data identifying time and positions corresponding to objects represented in the video data and the system may determine priority metrics associated with each of the objects. By associating the priority metrics with the time and positions corresponding to the objects, the system may generate a priority metric map indicating a time and position of interesting moments in the video data. The system may generate moments and/or video clips based on the priority metric map. The system may determine a time (e.g., video frames) and/or space (e.g., pixel coordinates) associated with the moments/video clips and may simulate camera motion such as panning and/or zooming with the moments/video clips. The system may generate a Master Clip Table including the moments, video clips and/or annotation data associated with the moments/video clips.
    Type: Grant
    Filed: December 21, 2015
    Date of Patent: April 11, 2017
    Assignee: Amazon Technologies, Inc.
    Inventors: Matthew Alan Townsend, Moshe Bouhnik, Konstantin Kraimer, Eduard Oks