Patents by Inventor Ido Yerushalmy
Ido Yerushalmy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12283291
Abstract: Systems, devices, and methods are provided for determining factually consistent generative narrations. A narrative may be generated by performing steps to determine one or more metadata messages for a first portion of a video stream, determine transcribed commentary for a second portion of the video stream, wherein the second portion includes the first portion, and determine a prompt based at least in part on the one or more metadata messages and the transcribed commentary. The prompt may be provided to a generative model that produces an output text. Techniques for performing a factual consistency evaluation may be used to determine a consistency score for the output text that indicates whether the output text is factually consistent with the one or more metadata messages and the transcribed commentary. A narrated highlight video may be generated using the consistent narrative.
Type: Grant
Filed: August 16, 2023
Date of Patent: April 22, 2025
Assignee: Amazon Technologies, Inc.
Inventors: Noah Lirone Sarfati, Ido Yerushalmy, Michael Chertok, Ianir Ideses
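The abstract above describes a pipeline of prompt construction followed by a factual-consistency check. A minimal illustrative sketch of those two steps, with a toy keyword-overlap score standing in for the learned consistency model (all function names and the prompt format are assumptions, not from the patent):

```python
def build_prompt(metadata_messages, transcribed_commentary):
    """Combine event metadata and transcribed commentary into one
    prompt for a generative model (hypothetical prompt format)."""
    facts = "\n".join(f"- {m}" for m in metadata_messages)
    return (
        "Write a short highlight narration consistent with these facts:\n"
        f"{facts}\n\nCommentary excerpt:\n{transcribed_commentary}\n"
    )

def consistency_score(output_text, metadata_messages):
    """Toy consistency check: fraction of metadata messages whose
    words all appear in the generated text. A real system would use
    a trained factual-consistency evaluator instead."""
    text = output_text.lower()
    hits = sum(all(w in text for w in m.lower().split())
               for m in metadata_messages)
    return hits / len(metadata_messages) if metadata_messages else 0.0
```

A narration scoring below some threshold would be regenerated or discarded before building the highlight video.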
-
Patent number: 12067806
Abstract: Characteristics of a user's movement are evaluated based on performance of activities by a user within a field of view of a camera. Video data representing performance of a series of movements by the user is acquired by the camera. Pose data is determined based on the video data, the pose data representing positions of the user's body while performing the movements. The pose data is compared to a set of existing videos that correspond to known errors to identify errors performed by the user. The errors may be used to generate scores for various characteristics of the user's movement. Based on the errors, exercises or other activities to improve the movement of the user may be determined and included in an output presented to the user.
Type: Grant
Filed: February 16, 2021
Date of Patent: August 20, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Eduard Oks, Ridge Carpenter, Lamarr Smith, Claire McGowan, Elizabeth Reisman, Ianir Ideses, Eli Alshan, Mark Kliger, Matan Goldman, Liza Potikha, Ido Yerushalmy, Dotan Kaufman, Guy Adam, Omer Meir, Lior Fritz, Imry Kissos, Georgy Melamed, Eran Borenstein, Sharon Alpert, Noam Sorek
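The comparison of user pose data against a library of known-error sequences can be sketched as a nearest-sequence lookup. This is a deliberately simplified stand-in (mean joint distance over aligned frames); the library labels and data layout are hypothetical:

```python
import math

def pose_distance(seq_a, seq_b):
    """Mean summed Euclidean distance between two equal-length
    sequences of frames, each frame a list of (x, y) joint positions."""
    total = 0.0
    for frame_a, frame_b in zip(seq_a, seq_b):
        total += sum(math.dist(p, q) for p, q in zip(frame_a, frame_b))
    return total / len(seq_a)

def closest_known_error(user_seq, error_library):
    """Return the label of the known-error pose sequence nearest to
    the user's sequence (toy nearest-neighbor matching)."""
    return min(error_library,
               key=lambda label: pose_distance(user_seq, error_library[label]))
```

The matched error label could then index into a table of corrective exercises for the output shown to the user.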
-
Patent number: 11961331
Abstract: A first computing device acquires video data representing a user performing an activity. The first device uses a first pose extraction algorithm to determine a pose of the user within a frame of video data. If the pose is determined to be potentially inaccurate, the user is prompted for authorization to send the frame of video data to a second computing device. If authorization is granted, the second computing device may use a different algorithm to determine a pose of the user and send data indicative of this pose to the first computing device to enable the first computing device to update a score or other output. The second computing device may also use the frame of video data as training data to retrain or modify the first pose extraction algorithm, and may send the modified algorithm to the first computing device for future use.
Type: Grant
Filed: August 30, 2021
Date of Patent: April 16, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Ido Yerushalmy, Michael Chertok, Sharon Alpert
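The two-tier flow in this abstract (fast on-device model first, heavier remote model only with the user's authorization) reduces to a small branching routine. A sketch with hypothetical model callables and a confidence threshold standing in for the "potentially inaccurate" test:

```python
def estimate_pose_with_fallback(frame, local_model, confidence_threshold,
                                authorized, remote_model):
    """Run the on-device pose model first; if its confidence is below
    the threshold and the user has authorized sending the frame, fall
    back to a remote model. Returns (pose, source) where source records
    which path produced the result."""
    pose, confidence = local_model(frame)
    if confidence >= confidence_threshold:
        return pose, "local"
    if authorized:
        return remote_model(frame), "remote"
    # Low confidence but no authorization: keep the local estimate.
    return pose, "local-low-confidence"
```

In the patented scheme the remote side could also keep the frame as training data to improve the local model over time; that feedback loop is omitted here.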
-
Patent number: 11861944
Abstract: Video output is generated based on first video data that depicts the user performing an activity. Poses of the user during performance of the activity are compared with second video data that depicts an instructor performing the activity. Corresponding poses of the user's body and the instructor's body may be determined through comparison of the first and second video data. The video data is used to determine the rate of motion of the user and to generate video output in which a visual representation of the instructor moves at a rate similar to that of the user. For example, video output generated based on an instructional fitness video may be synchronized so that movement of the presented instructor matches the rate of movement of the user performing an exercise, improving user comprehension and performance.
Type: Grant
Filed: September 25, 2019
Date of Patent: January 2, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Ido Yerushalmy, Ianir Ideses, Eli Alshan, Mark Kliger, Liza Potikha, Dotan Kaufman, Sharon Alpert, Eduard Oks, Noam Sorek
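Matching the instructor's on-screen rate to the user's comes down to scaling playback by the ratio of repetition durations. A minimal sketch; the clamp range is an illustrative safeguard, not a value from the patent:

```python
def playback_rate(user_rep_seconds, instructor_rep_seconds,
                  min_rate=0.5, max_rate=2.0):
    """Scale instructor-video playback so one instructor repetition
    spans one user repetition. Rates are clamped so extreme pose
    estimates cannot make the video unwatchably fast or slow."""
    rate = instructor_rep_seconds / user_rep_seconds
    return max(min_rate, min(max_rate, rate))
```

For example, a user taking 4 s per squat against an instructor's 2 s yields a 0.5x rate, slowing the instructor to match.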
-
Publication number: 20230419730
Abstract: Described are systems and methods directed to the processing of two-dimensional ("2D") images of a body to determine a physical activity performed by the body, repetitions of the physical activity, whether the body is performing the physical activity with proper form, and providing physical activity feedback. In addition, the disclosed implementations are able to determine the physical activity, repetitions, and/or form through the processing of 2D partial body images that include less than all of the body of the user.
Type: Application
Filed: June 27, 2022
Publication date: December 28, 2023
Inventors: Ido Yerushalmy, Amir Dudai, Eli Alshan
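Repetition counting from 2D pose estimates often reduces to cycle detection on a 1-D signal (e.g. a joint angle or keypoint height over time). A toy hysteresis counter illustrating the idea; the threshold values and signal source are assumptions, not taken from the application:

```python
def count_repetitions(signal, high, low):
    """Count repetitions in a 1-D motion signal using hysteresis:
    a rep is one excursion above `high` followed by a return below
    `low`. Two thresholds prevent noise near a single threshold
    from double-counting reps."""
    reps, up = 0, False
    for value in signal:
        if not up and value > high:
            up = True
        elif up and value < low:
            up = False
            reps += 1
    return reps
```

Because the signal can come from any visible keypoint, the same counter works on partial-body views where only some joints are in frame.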
-
Patent number: 11771863
Abstract: Systems for assisting a user in performance of a meditation activity or another type of activity are described. The systems receive user input and sensor data indicating physiological values associated with the user. These values are used to determine a recommended type of activity and a length of time for the activity. While the user performs the activity, sensors are used to measure physiological values, and an output that is provided to the user is selected based on the measured physiological values. The output may be selected to assist the user in reaching target physiological values, such as a slower respiration rate. After completion of the activity, additional physiological values are used to determine the effectiveness of the activity and the output that was provided. The effectiveness of the activity and the output may be used to determine future recommendations and future output.
Type: Grant
Filed: December 11, 2019
Date of Patent: October 3, 2023
Assignee: Amazon Technologies, Inc.
Inventors: Eli Alshan, Mark Kliger, Ido Yerushalmy, Liza Potikha, Dotan Kaufman, Ianir Ideses, Eduard Oks, Noam Sorek
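Selecting output from the gap between measured and target physiological values can be sketched as a simple comparison with a tolerance band. The cue wording, tolerance, and units (breaths per minute) are all illustrative assumptions:

```python
def select_breathing_cue(measured_rate, target_rate, tolerance=1.0):
    """Pick a guidance cue from the difference between the measured
    and target respiration rates (breaths per minute). Within the
    tolerance band, the current pace is reinforced."""
    if measured_rate > target_rate + tolerance:
        return "slow your breathing"
    if measured_rate < target_rate - tolerance:
        return "breathe a little faster"
    return "hold this pace"
```

A fuller system would also log which cues preceded improvement, feeding the effectiveness measure described in the abstract.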
-
Publication number: 20220261574
Abstract: Characteristics of a user's movement are evaluated based on performance of activities by a user within a field of view of a camera. Video data representing performance of a series of movements by the user is acquired by the camera. Pose data is determined based on the video data, the pose data representing positions of the user's body while performing the movements. The pose data is compared to a set of existing videos that correspond to known errors to identify errors performed by the user. The errors may be used to generate scores for various characteristics of the user's movement. Based on the errors, exercises or other activities to improve the movement of the user may be determined and included in an output presented to the user.
Type: Application
Filed: February 16, 2021
Publication date: August 18, 2022
Inventors: Eduard Oks, Ridge Carpenter, Lamarr Smith, Claire McGowan, Elizabeth Reisman, Ianir Ideses, Eli Alshan, Mark Kliger, Matan Goldman, Liza Potikha, Ido Yerushalmy, Dotan Kaufman, Guy Adam, Omer Meir, Lior Fritz, Imry Kissos, Georgy Melamed, Eran Borenstein, Sharon Alpert, Noam Sorek
-
Patent number: 10904512
Abstract: In an imaging system having a first camera with a first field of view (FOV) and a second camera with a second FOV smaller than the first FOV, wherein the first and second FOVs overlap over an overlap region, a method for calculating a calibrated phase detection depth map over the entire first FOV comprises calculating a stereoscopic depth map in the overlap region using image information provided by the first and second cameras, obtaining a first camera phase detection (PD) disparity map in the entire first FOV, and using the stereoscopic depth map in the overlap region to provide a calibrated 2PD depth map in the entire first FOV.
Type: Grant
Filed: September 6, 2017
Date of Patent: January 26, 2021
Assignee: Corephotonics Ltd.
Inventors: Ido Yerushalmy, Noy Cohen, Ephraim Goldenberg
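The core idea of calibrating a full-FOV PD disparity map against stereo depth in the overlap region can be illustrated with a fitted mapping. This sketch uses a deliberately simplified linear least-squares model (real PD-to-depth relations are nonlinear and spatially varying); all names are assumptions:

```python
def fit_linear_map(pd_disparity, stereo_depth):
    """Least-squares fit depth ~ a * disparity + b, using samples
    from the overlap region where stereo depth is available."""
    n = len(pd_disparity)
    mx = sum(pd_disparity) / n
    my = sum(stereo_depth) / n
    var = sum((x - mx) ** 2 for x in pd_disparity)
    cov = sum((x - mx) * (y - my)
              for x, y in zip(pd_disparity, stereo_depth))
    a = cov / var
    return a, my - a * mx

def calibrated_depth(full_fov_disparity, a, b):
    """Apply the overlap-fitted mapping across the entire wide FOV,
    extending metric depth beyond the stereo overlap."""
    return [a * d + b for d in full_fov_disparity]
```

The payoff is that disparity samples outside the overlap, where no stereo reference exists, still receive calibrated depth values.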
-
Publication number: 20200221064
Abstract: In an imaging system having a first camera with a first field of view (FOV) and a second camera with a second FOV smaller than the first FOV, wherein the first and second FOVs overlap over an overlap region, a method for calculating a calibrated phase detection depth map over the entire first FOV comprises calculating a stereoscopic depth map in the overlap region using image information provided by the first and second cameras, obtaining a first camera phase detection (PD) disparity map in the entire first FOV, and using the stereoscopic depth map in the overlap region to provide a calibrated 2PD depth map in the entire first FOV.
Type: Application
Filed: September 6, 2017
Publication date: July 9, 2020
Inventors: Ido Yerushalmy, Noy Cohen, Ephraim Goldenberg
-
Patent number: 9230331
Abstract: A computerized method for model-less segmentation and registration of ultrasound (US) with computed tomography (CT) images of an organ with a fluid-filled chamber. The method is based on correlating the US image(s) with the CT image(s) by processing the US image(s) by iteratively expanding the CT image segment so that the expanded CT image segment is correlated with the visual boundaries of the US image segment; transforming the CT image(s) according to an estimated US transducer position and estimated US beam direction related to the US image(s) so that at least one of shape and volume of the organ in the CT image is adapted to at least one of shape and volume of the organ in the US image, to form a CT image representation which is correlated with the US image(s).
Type: Grant
Filed: October 21, 2013
Date of Patent: January 5, 2016
Assignee: Samsung Electronics Co., Ltd.
Inventors: Amir Shaham, Ido Yerushalmy, Eran Itan, Orna Bregman-Amitai
-
Publication number: 20150371420
Abstract: There is provided a computer-implemented method of calculating an extended field of view (EFOV) from medical images, comprising: receiving multiple registered acquired medical images of a patient, the medical images having multiple imaging artifacts based on the medical imaging modality acquiring the medical images; analyzing the multiple medical images to identify locations of the multiple imaging artifacts within the medical images; calculating multiple multi-planar stitching surfaces such that seams connecting therebetween are outside the boundaries of the multiple imaging artifacts; and providing an extended field of view (EFOV) image having un-edited imaging artifacts from all stitched medical images.
Type: Application
Filed: June 19, 2014
Publication date: December 24, 2015
Inventors: Ido Yerushalmy, Rebecca Nataf, Gavriel Speyer, Amir Shaham, Alon Fleider
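The key constraint above, that stitching seams must avoid imaging artifacts, can be illustrated in one dimension: given a boolean artifact mask, pick a seam column in the overlap whose pixels are all artifact-free. A toy sketch (the patent works with multi-planar surfaces in 3D; this collapses the idea to a single column choice):

```python
def choose_seam_column(artifact_mask, overlap_columns):
    """Pick a stitching-seam column inside the overlap whose pixels
    are all artifact-free. `artifact_mask` is a 2-D list of booleans
    (True = artifact pixel). Returns the first clean column index,
    or None if every candidate crosses an artifact."""
    for col in overlap_columns:
        if not any(row[col] for row in artifact_mask):
            return col
    return None
```

Keeping seams clear of artifacts means each artifact stays wholly inside one source image and appears un-edited in the stitched EFOV result, rather than being cut through and blended.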
-
Publication number: 20150294182
Abstract: There is provided a method for estimating semi-transparent object(s) from an image, comprising: receiving an image having semi-transparent and overlaid object(s) for estimation; calculating a probability map of the object(s), the probability map comprising multiple pixels corresponding to the pixels of the received image, wherein each probability map pixel has a value proportional to the probability that the corresponding pixel of the received image contains the object(s); calculating an approximation image of an object-suppressed image based on the object probability map, wherein the approximation image is substantially equal to corresponding regions of the received image at portions with low probability values, and denotes a smooth approximation of the image with the object(s) suppressed at portions with high probability values of the object probability map; and calculating the object(s) for estimation based on the calculated approximation of the object-suppressed image.
Type: Application
Filed: April 13, 2014
Publication date: October 15, 2015
Applicant: Samsung Electronics Co., Ltd.
Inventors: Tal Kenig, Ido Yerushalmy, Lev Goldentouch
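The abstract's three steps (probability map, object-suppressed approximation, subtraction) can be mirrored in a tiny 1-D sketch: blend the image with a smoothed version weighted by the probability map, then take the residual as the overlay estimate. All names and the pluggable `smooth` callable are illustrative assumptions:

```python
def estimate_overlay(image, prob_map, smooth):
    """Estimate a semi-transparent overlay. The approximation equals
    the image where object probability is low and a smoothed version
    where it is high; subtracting it isolates the overlay. `image`
    and `prob_map` are equal-length 1-D pixel lists; `smooth` maps
    the image to its smoothed counterpart."""
    smoothed = smooth(image)
    # Probability-weighted blend: object-suppressed approximation.
    approx = [p * s + (1 - p) * v
              for v, s, p in zip(image, smoothed, prob_map)]
    # Residual between image and approximation = overlay estimate.
    object_est = [v - a for v, a in zip(image, approx)]
    return approx, object_est
```

With a bright overlay pixel at probability 1.0, the approximation falls back to the smoothed background there, so the residual captures only the overlay's contribution.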
-
Publication number: 20150110373
Abstract: A computerized method for model-less segmentation and registration of ultrasound (US) with computed tomography (CT) images of an organ with a fluid-filled chamber, comprising: correlating the at least one US image with the at least one CT image by processing the at least one US image by iteratively expanding the CT image segment so that the expanded CT image segment is correlated with the visual boundaries of the US image segment; transforming the at least one CT image according to an estimated US transducer position and estimated US beam direction related to the at least one US image so that at least one of shape and volume of the organ in the CT image is adapted to at least one of shape and volume of the organ in the US image, to form a CT image representation which is correlated with the at least one US image.
Type: Application
Filed: October 21, 2013
Publication date: April 23, 2015
Applicant: Samsung Electronics Co., Ltd.
Inventors: Amir Shaham, Ido Yerushalmy, Eran Itan, Orna Bregman-Amitai