Patents by Inventor Oren Steinfeld

Oren Steinfeld has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11900635
    Abstract: A system and method of organically generating a camera-pose map is disclosed. A target image is obtained of a location deemed suitable for augmenting with a virtual augmentation or graphic. An initial camera-pose map is created having a limited number of calibrated camera-pose images with calculated camera-pose locations and homographies to the target image. Then, during the event, the system automatically obtains current images of the event venue and determines homographies to the nearest calibration camera-pose image in the camera-pose map. The separation in camera-pose space between each current image and the camera-pose images is calculated. If this separation is less than a predetermined threshold, that current image is fully calibrated and added to the camera-pose map, thereby growing the map organically. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: May 15, 2023
    Date of Patent: February 13, 2024
    Inventors: Oran Gilad, Samuel Chenillo, Oren Steinfeld
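
A minimal Python sketch of the map-growing step described in patent 11900635. It assumes the homography from the current image to its nearest calibrated neighbour has already been estimated by an external feature-matching step; the pose representation, names, and threshold value are illustrative, not taken from the patent.

```python
import numpy as np

class PoseMapEntry:
    """One calibrated camera-pose image: a pose vector (e.g. pan, tilt, zoom)
    and the homography mapping that image onto the target image."""
    def __init__(self, pose, H_to_target):
        self.pose = np.asarray(pose, dtype=float)
        self.H_to_target = np.asarray(H_to_target, dtype=float)

def nearest_entry(pose_map, pose):
    """Calibrated entry closest to `pose` in camera-pose space."""
    return min(pose_map, key=lambda e: np.linalg.norm(e.pose - pose))

def maybe_grow_map(pose_map, current_pose, H_current_to_nearest, threshold=0.5):
    """Add the current image to the map if its separation from the nearest
    calibrated entry is below the threshold, composing homographies so the
    new entry also maps onto the target image."""
    current_pose = np.asarray(current_pose, dtype=float)
    nearest = nearest_entry(pose_map, current_pose)
    separation = np.linalg.norm(nearest.pose - current_pose)
    if separation < threshold:
        H_to_target = nearest.H_to_target @ H_current_to_nearest  # current -> nearest -> target
        pose_map.append(PoseMapEntry(current_pose, H_to_target))
        return True
    return False
```
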
  • Patent number: 11887289
    Abstract: A system and method of obtaining an occlusion key using a background pixel map is disclosed. A target image containing a target location suitable for displaying a virtual augmentation is obtained. A stream of current images is transformed into a stationary stream having the camera pose of the target image. These are segmented using a trained neural network. The background pixel map is then the color values of background pixels found at each position within the target location. An occlusion key for a new current image is obtained by first transforming it to conform to the target image and then comparing each pixel in the target location with the color values of background pixels in the background pixel map. The occlusion key is then transformed back to conform to the current image and used for virtual augmentation of the current image. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: May 15, 2023
    Date of Patent: January 30, 2024
    Inventors: Oran Gilad, Samuel Chenillo, Oren Steinfeld
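
A minimal sketch of the background-pixel-map comparison in patent 11887289, assuming the current frame has already been warped into the target image's camera pose; the colour-distance test and tolerance value are illustrative choices rather than the patent's exact procedure.

```python
import numpy as np

def occlusion_key(aligned_frame, background_map, tolerance=20.0):
    """Compare each pixel of a current frame (already transformed to the target
    image's camera pose) against the stored background colour at that position.
    Both inputs are H x W x 3 arrays covering the target location. Returns 1
    where the background is visible (safe to draw the augmentation) and 0 where
    a foreground object occludes it; the key would then be transformed back to
    the current image before compositing."""
    diff = np.linalg.norm(aligned_frame.astype(float) - background_map.astype(float), axis=2)
    return (diff <= tolerance).astype(np.uint8)
```
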
  • Patent number: 11823410
    Abstract: A system and method of video match moving using minimal calibration is disclosed. A camera obtains an initial set of calibration images and a location image. Homographies are then determined between all the images using direct linear transformation. Camera pose locations of the images are determined from the homography matrices. Match moving augmentation is accomplished by obtaining a homography between a live image and the location image, either directly, or by combining a homography to a calibration image with that calibration image's augmentation homography. A virtual augmentation is then placed in the live image. The system evaluates the quality of each live image homography. If of sufficiently high quality, that live image is added to the set of calibration images. When a live image is received, the search for a homography typically starts with the calibration image located closest in camera pose space to the preceding live image. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: May 11, 2023
    Date of Patent: November 21, 2023
    Inventors: Oran Gilad, Samuel Chenillo, Oren Steinfeld
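
The homography chaining at the centre of patent 11823410 can be illustrated with a few lines of Python. The identity matrices below stand in for real, externally estimated homographies, and the graphic corner coordinates are arbitrary.

```python
import numpy as np

def compose_homography(H_live_to_calib, H_calib_to_location):
    """Chain a live-to-calibration-image homography with that calibration
    image's augmentation homography to get a live-to-location-image homography."""
    return H_calib_to_location @ H_live_to_calib

def project_points(H, points_xy):
    """Map 2-D points through a 3x3 homography (with the homogeneous divide)."""
    pts = np.hstack([points_xy, np.ones((len(points_xy), 1))])
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Place the corners of a virtual graphic, defined in the location image,
# into a live frame by applying the inverse of the composed homography.
H_live_to_location = compose_homography(np.eye(3), np.eye(3))  # placeholders
graphic_corners = np.array([[0, 0], [100, 0], [100, 50], [0, 50]], dtype=float)
corners_in_live = project_points(np.linalg.inv(H_live_to_location), graphic_corners)
```
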
  • Publication number: 20230368425
    Abstract: A system and method of organically generating a camera-pose map is disclosed. A target image is obtained of a location deemed suitable for augmenting with a virtual augmentation or graphic. An initial camera-pose map is created having a limited number of calibrated camera-pose images with calculated camera-pose locations and homographies to the target image. Then, during the event, the system automatically obtains current images of the event venue and determines homographies to the nearest calibration camera-pose image in the camera-pose map. The separation in camera-pose space between each current image and the camera-pose images is calculated. If this separation is less than a predetermined threshold, that current image is fully calibrated and added to the camera-pose map, thereby growing the map organically.
    Type: Application
    Filed: May 15, 2023
    Publication date: November 16, 2023
    Inventors: Oran Gilad, Samuel Chenillo, Oren Steinfeld
  • Publication number: 20230368350
    Abstract: A system and method of obtaining an occlusion key using a background pixel map is disclosed. A target image containing a target location suitable for displaying a virtual augmentation is obtained. A stream of current images is transformed into a stationary stream having the camera pose of the target image. These are segmented using a trained neural network. The background pixel map is then the color values of background pixels found at each position within the target location. An occlusion key for a new current image is obtained by first transforming it to conform to the target image and then comparing each pixel in the target location with the color values of background pixels in the background pixel map. The occlusion key is then transformed back to conform to the current image and used for virtual augmentation of the current image.
    Type: Application
    Filed: May 15, 2023
    Publication date: November 16, 2023
    Inventors: Oran Gilad, Samuel Chenillo, Oren Steinfeld
  • Publication number: 20230368413
    Abstract: A system and method of video match moving using minimal calibration is disclosed. A camera obtains an initial set of calibration images and a location image. Homographies are then determined between all the images using direct linear transformation. Camera pose locations of the images are determined from the homography matrices. Match moving augmentation is accomplished by obtaining a homography between a live image and the location image, either directly, or by combining a homography to a calibration image with that calibration image's augmentation homography. A virtual augmentation is then placed in the live image. The system evaluates the quality of each live image homography. If of sufficiently high quality, that live image is added to the set of calibration images. When a live image is received, the search for a homography typically starts with the calibration image located closest in camera pose space to the preceding live image.
    Type: Application
    Filed: May 11, 2023
    Publication date: November 16, 2023
    Inventors: Oran Gilad, Samuel Chenillo, Oren Steinfeld
  • Patent number: 11736662
    Abstract: Systems and methods for precision downstream synchronization of digital streaming content on an edge content processor in the absence of access to pixel-level data by a video player app operative on the edge content processor are disclosed. Encrypted video streams are synchronized to unencrypted video streams using acquired knowledge of the edge content processor's latency, i.e., the time elapsed between a command to render a video frame and that frame being displayed by the edge content processor. Once the video player app obtains a predicted time of display of an encrypted video frame, the corresponding RGBA video frame is delayed by an amount of time equal to that predicted time minus the edge content processor's latency before the command to render it is issued, thereby ensuring both frames are displayed simultaneously. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: September 20, 2022
    Date of Patent: August 22, 2023
    Inventors: Oran Gilad, Samuel Chenillo, Oren Steinfeld
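
A minimal sketch of the latency-compensated scheduling in patent 11736662: the render command for the RGBA frame is issued early by exactly the measured render-to-display latency. The function and parameter names are hypothetical, and a real player app would schedule this on its own event loop rather than blocking.

```python
import time

def schedule_render(issue_render_command, predicted_display_time, renderer_latency):
    """Delay the render command so the frame appears at predicted_display_time
    (seconds, on the same clock as time.time()), given the measured latency
    between issuing a render command and the frame actually being displayed."""
    issue_at = predicted_display_time - renderer_latency
    delay = issue_at - time.time()
    if delay > 0:
        time.sleep(delay)   # illustrative; a player would use a timer/event loop
    issue_render_command()
```
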
  • Publication number: 20230262201
    Abstract: Systems and methods for precision downstream synchronization of digital streaming content on an edge content processor in the absence of access to pixel-level data by a video player app operative on the edge content processor are disclosed. Encrypted video streams are synchronized to unencrypted video streams using acquired knowledge of the edge content processor's latency, i.e., the time elapsed between a command to render a video frame and that frame being displayed by the edge content processor. Once the video player app obtains a predicted time of display of an encrypted video frame, the corresponding RGBA video frame is delayed by an amount of time equal to that predicted time minus the edge content processor's latency before the command to render it is issued, thereby ensuring both frames are displayed simultaneously.
    Type: Application
    Filed: September 20, 2022
    Publication date: August 17, 2023
    Inventors: Oran Gilad, Samuel Chenillo, Oren Steinfeld
  • Patent number: 11601713
    Abstract: A system and method for identifying media segments using audio augmented image cross-comparison is disclosed, in which a media segment identifying system analyzes both audio and video content, producing a unique identifier to compare with previously identified media segments in a media segment database. The characteristic landmark-linked-image-comparisons are constructed by first identifying an audio landmark. The audio landmark is an audio peak that exceeds a predetermined threshold. Two digital images are then obtained, one associated directly with the audio landmark, and one obtained a predetermined landmark time removed from the first image. The two images are then used to provide a characteristic landmark-linked-image-comparison. The pair of images are reduced in pixel size and converted to gray scale. Corresponding pixels are compared to form a numeric comparison. One image is mirrored before comparison to reduce the possibility of null comparisons. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: December 9, 2020
    Date of Patent: March 7, 2023
    Inventors: Oran Gilad, Samuel Chenillo, Oren Steinfeld
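
A rough Python sketch of the landmark-linked image comparison in patent 11601713. The peak detection, the 16x16 reduction by sampling, and the mean-absolute-difference score are simplified stand-ins; only the overall shape (an audio landmark, two reduced grayscale images, one mirrored, a pixel-wise numeric comparison) follows the abstract.

```python
import numpy as np

def find_audio_landmarks(samples, threshold):
    """Indices of local audio peaks whose amplitude exceeds the threshold;
    a deliberately simple stand-in for the patent's landmark detection."""
    s = np.abs(np.asarray(samples, dtype=float))
    return [i for i in range(1, len(s) - 1)
            if s[i] > threshold and s[i] >= s[i - 1] and s[i] >= s[i + 1]]

def landmark_image_comparison(frame_a, frame_b, size=(16, 16)):
    """Reduce two colour frames to small grayscale images, mirror the second,
    and compare corresponding pixels to form a single numeric value."""
    def small_gray(frame):
        gray = np.asarray(frame, dtype=float).mean(axis=2)    # crude grayscale
        ys = np.linspace(0, gray.shape[0] - 1, size[0]).astype(int)
        xs = np.linspace(0, gray.shape[1] - 1, size[1]).astype(int)
        return gray[np.ix_(ys, xs)]                           # crude downscale
    a = small_gray(frame_a)
    b = np.fliplr(small_gray(frame_b))                        # mirror one image
    return float(np.mean(np.abs(a - b)))
```
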
  • Publication number: 20220385956
    Abstract: A system and method for remunerating a display of a cognate substitute image sequence is disclosed. Image sequences in received video content are examined for sequences that are cognate to known image sequences. When detected, such sequences are replaced with cognate substitute image sequences to create a modified video image stream that is displayed on a display screen. A reasonable remuneration for displaying the cognate substitute image is automatically calculated. This monetary charge reflects the nature of the image sequence, the length of time for which it was displayed, the time at which it was displayed and the number of electronic devices within viewing range of the display at the time of display. This remuneration is accumulated, and equivalent digital assets are automatically transferred between client and provider accounts at an appropriate time. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: May 26, 2021
    Publication date: December 1, 2022
    Inventors: Oran Gilad, Samuel Chenillo, Oren Steinfeld
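
The charge described in publication 20220385956 (and in the granted patent below) combines the nature of the sequence, the display duration, the time of day, and the number of nearby devices. The sketch below only illustrates that combination; every rate and multiplier in it is an invented placeholder, as the abstract gives no concrete values.

```python
# All rates and multipliers are invented placeholders; the abstract only says
# the charge reflects these four factors, not how they are weighted.
BASE_RATE_PER_SECOND = {"premium": 0.05, "standard": 0.02}
PEAK_HOURS = range(18, 23)  # assumed evening peak

def remuneration(nature, seconds_displayed, hour_of_day, devices_in_range):
    rate = BASE_RATE_PER_SECOND.get(nature, 0.01)
    time_multiplier = 1.5 if hour_of_day in PEAK_HOURS else 1.0
    return rate * seconds_displayed * time_multiplier * max(devices_in_range, 1)
```
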
  • Patent number: 11509943
    Abstract: A system and method for remunerating a display of a cognate substitute image sequence is disclosed. Image sequences in received video content are examined for sequences that are cognate to known image sequences. When detected, such sequences are replaced with cognate substitute image sequences to create a modified video image stream that is displayed on a display screen. A reasonable remuneration for displaying the cognate substitute image is automatically calculated. This monetary charge reflects the nature of the image sequence, the length of time for which it was displayed, the time at which it was displayed and the number of electronic devices within viewing range of the display at the time of display. This remuneration is accumulated, and equivalent digital assets are automatically transferred between client and provider accounts at an appropriate time.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: November 22, 2022
    Inventors: Oran Gilad, Samuel Chenillo, Oren Steinfeld
  • Patent number: 11483496
    Abstract: A system and method of synchronizing auxiliary content to a video stream is disclosed that uses a block of bit-vectors linked to a target frame in a video stream. The block of bit-vectors consists of a multi-frame sequence of image bit-vectors. The video stream and block of bit-vectors are transmitted to an end user device that uses the bit-vector block to uniquely identify the target frame. The target frame is used to synchronize auxiliary content to the video stream. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: February 13, 2022
    Date of Patent: October 25, 2022
    Inventors: Oran Gilad, Samuel Chenillo, Oren Steinfeld
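
A minimal sketch of how an end-user device might use the transmitted block of bit-vectors from patent 11483496 to locate the target frame. The per-frame Hamming-distance test, the distance bound, and the assumption that the target frame corresponds to the last vector in the block are illustrative.

```python
def hamming(a, b):
    """Bit difference between two equal-length integer bit-vectors."""
    return bin(a ^ b).count("1")

def locate_target_frame(stream_vectors, block_vectors, max_distance=8):
    """Slide the transmitted block of per-frame bit-vectors over bit-vectors
    computed locally from the received video; return the stream index of the
    frame matching the last vector in the block (assumed here to be the
    target frame), or None if no window matches."""
    n = len(block_vectors)
    for start in range(len(stream_vectors) - n + 1):
        window = stream_vectors[start:start + n]
        if all(hamming(w, b) <= max_distance for w, b in zip(window, block_vectors)):
            return start + n - 1
    return None
```
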
  • Publication number: 20220166940
    Abstract: A system and method of synchronizing auxiliary content to a video stream is disclosed that uses a block of bit-vectors linked to a target frame in a video stream. The block of bit-vectors consists of a multi-frame sequence of image bit-vectors. The video stream and block of bit-vectors are transmitted to an end user device that uses the bit-vector block to uniquely identify the target frame. The target frame is used to synchronize auxiliary content to the video stream.
    Type: Application
    Filed: February 13, 2022
    Publication date: May 26, 2022
    Inventors: Oran Gilad, Samuel Chenillo, Oren Steinfeld
  • Patent number: 11048946
    Abstract: A system and method of identifying cognate image sequences is disclosed that examines significant frames of a stream of video images using an array of image indexes. The image index array includes image indexes obtained by at least two different image indexing methods. These are compared to a corresponding array of image indexes of significant frames of known image sequences. An image quality indicator is used to determine which set of image index thresholds to use in making the comparison. These thresholds are more stringent for higher quality frames. Two image sequences are considered cognate when a string of sufficiently many sequential frame matches is established. In an alternate embodiment, image blurriness is also, or instead, used to determine the appropriate set of image index thresholds. The sets of image index thresholds are determined using machine learning on a curated set of representative images. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: October 16, 2020
    Date of Patent: June 29, 2021
    Inventors: Samuel Chenillo, Oran Gilad, Oren Steinfeld
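
A simplified sketch of the matching logic in patent 11048946: each significant frame carries an array of image indexes from different indexing methods, the comparison thresholds depend on an image-quality indicator, and a long enough run of consecutive frame matches declares the sequences cognate. The threshold sets and run length here are placeholders; the patent derives its thresholds by machine learning on a curated image set.

```python
def frames_match(indexes_a, indexes_b, thresholds):
    """Compare two arrays of image indexes (one value per indexing method),
    each under its own threshold."""
    return all(abs(a - b) <= t for a, b, t in zip(indexes_a, indexes_b, thresholds))

def sequences_are_cognate(stream_indexes, known_indexes, quality, run_length=5):
    """Declare the sequences cognate once `run_length` consecutive significant
    frames match; higher-quality frames use more stringent thresholds."""
    thresholds = [2, 3] if quality == "high" else [5, 8]   # placeholder values
    run = 0
    for a, b in zip(stream_indexes, known_indexes):
        run = run + 1 if frames_match(a, b, thresholds) else 0
        if run >= run_length:
            return True
    return False
```
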
  • Publication number: 20210092480
    Abstract: A system and method for identifying media segments using audio augmented image cross-comparison is disclosed, in which a media segment identifying system analyzes both audio and video content, producing a unique identifier to compare with previously identified media segments in a media segment database. The characteristic landmark-linked-image-comparisons are constructed by first identifying an audio landmark. The audio landmark is an audio peak that exceeds a predetermined threshold. Two digital images are then obtained, one associated directly with the audio landmark, and one obtained a predetermined landmark time removed from the first image. The two images are then used to provide a characteristic landmark-linked-image-comparison. The pair of images are reduced in pixel size and converted to gray scale. Corresponding pixels are compared to form a numeric comparison. One image is mirrored before comparison to reduce the possibility of null comparisons.
    Type: Application
    Filed: December 9, 2020
    Publication date: March 25, 2021
    Inventors: Oran Gilad, Samuel Chenillo, Oren Steinfeld
  • Patent number: 10867185
    Abstract: A system and method for identifying media segments using audio augmented image cross-comparison is disclosed, in which a media segment identifying system analyzes both audio and video content, producing a unique identifier to compare with previously identified media segments in a media segment database. The characteristic landmark-linked-image-comparisons are constructed by first identifying pairs of audio landmarks separated by a characteristic, or landmark, time. Digital images associated with the audio landmarks are then compared, with the combination providing a characteristic landmark-linked-image-comparison. The audio landmarks are audio peaks that exceed predetermined thresholds. A landmark-time is the time between adjacent pairs of audio peaks. The pair of images associated with the audio peaks are reduced in pixel size and converted to gray scale. Corresponding pixels are compared to form a numeric comparison. One image may be mirrored before comparison to reduce the possibility of null comparisons. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: April 21, 2019
    Date of Patent: December 15, 2020
    Inventors: Samuel Chenillo, Oren Steinfeld
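
Patent 10867185 differs from patent 11601713 mainly in using adjacent pairs of audio peaks and the landmark-time between them to select the two images. Below is a small sketch of that pairing step, assuming the peak times have already been detected above their thresholds; the field names and frame-index rounding are illustrative.

```python
def landmark_pairs(peak_times, frame_rate):
    """For each adjacent pair of audio-peak times (seconds), record the
    landmark-time separating them and the video frames associated with each
    peak; those two frames feed the image comparison sketched earlier."""
    pairs = []
    for t0, t1 in zip(peak_times, peak_times[1:]):
        pairs.append({
            "landmark_time": t1 - t0,
            "frame_a": int(round(t0 * frame_rate)),
            "frame_b": int(round(t1 * frame_rate)),
        })
    return pairs
```
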
  • Publication number: 20190244032
    Abstract: A system and method for identifying media segments using audio augmented image cross-comparison is disclosed, in which a media segment identifying system analyzes both audio and video content, producing a unique identifier to compare with previously identified media segments in a media segment database. The characteristic landmark-linked-image-comparisons are constructed by first identifying pairs of audio landmarks separated by a characteristic, or landmark, time. Digital images associated with the audio landmarks are then compared, with the combination providing a characteristic landmark-linked-image-comparison. The audio landmarks are audio peaks that exceed predetermined thresholds. A landmark-time is the time between adjacent pairs of audio peaks. The pair of images associated with the audio peaks are reduced in pixel size and converted to gray scale. Corresponding pixels are compared to form a numeric comparison. One image may be mirrored before comparison to reduce the possibility of null comparisons.
    Type: Application
    Filed: April 21, 2019
    Publication date: August 8, 2019
    Inventors: Samuel Chenillo, Oren Steinfeld
  • Patent number: 10271095
    Abstract: A system and method for identifying media segments using audio augmented image cross-comparison is disclosed, in which a media segment identifying system analyzes both audio and video content, producing a unique identifier to compare with previously identified media segments in a media segment database. The characteristic landmark-linked-image-comparisons are constructed by first identifying pairs of audio landmarks separated by a characteristic, or landmark, time. Digital images associated with the audio landmarks are then compared, with the combination providing a characteristic landmark-linked-image-comparison. The audio landmarks are audio peaks that exceed predetermined thresholds. A landmark-time is the time between adjacent pairs of audio peaks. The pair of images associated with the audio peaks are reduced in pixel size and converted to gray scale. Corresponding pixels are compared to form a numeric comparison.
    Type: Grant
    Filed: August 23, 2018
    Date of Patent: April 23, 2019
    Inventors: Samuel Chenillo, Oren Steinfeld
  • Patent number: 10096169
    Abstract: A system for the augmented assessment of digital media streams for virtual insertion opportunities is disclosed. A digital media stream is automatically decomposed, using a programmed digital processor, into one or more candidate-clips having a predetermined minimum length and no internal shot transitions. These candidate-clips are then examined, and those deemed suitable for virtual insertion use are classified as viable-clips and stored in a digital database. The artificial intelligence techniques of machine learning and deep learning are then used to further classify the viable-clips according to their virtual-insertion-related attributes that may be attractive to advertisers, such as scene context, emotional tone and contained characters. A value is then assigned to the viable-clips, dependent on their insertion-related attributes and the overlap of those attributes with client-requested requirements. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: October 9, 2018
    Inventors: Samuel Chenillo, Oren Steinfeld
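
A minimal sketch of the decomposition step in patent 10096169: split a frame sequence into candidate-clips with no internal shot transitions and at least a minimum length. The mean-absolute-difference cut detector and both numeric values are illustrative, since the abstract does not prescribe a particular shot-boundary method.

```python
import numpy as np

def candidate_clips(frames, min_length=90, cut_threshold=30.0):
    """Return (start, end) frame ranges that contain no detected shot
    transition and are at least `min_length` frames long."""
    cuts = [0]
    for i in range(1, len(frames)):
        diff = np.mean(np.abs(frames[i].astype(float) - frames[i - 1].astype(float)))
        if diff > cut_threshold:      # naive hard-cut detector
            cuts.append(i)
    cuts.append(len(frames))
    return [(start, end) for start, end in zip(cuts, cuts[1:]) if end - start >= min_length]
```
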
  • Patent number: 9332160
    Abstract: A system and method of synchronizing the processing and distribution of audio-visual assets in a distributed production studio are presented. The method addresses the processing of broadcast television, including match moving computer graphics technology, in such an environment. Audio-visual sources provide the assets while processing nodes interact with them to produce a composite output. Synchronization of the communication between, and the processing by, the sources and nodes is managed by a master controller and delay buffers located on the nodes. Assets are tagged with updatable reference-to-zero values indicative of the current transmission delay between their last source and the controller. These reference-to-zero values are used by processing nodes to determine where in their local buffer to place each asset, so that assets emerge from the buffer to a local digital signal processor in synchrony and assets originally produced at a common time are processed locally at a common time. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: September 9, 2015
    Date of Patent: May 3, 2016
    Inventors: Samuel Chenillo, Oren Steinfeld
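
A rough sketch of how a processing node might place assets in its local delay buffer using their reference-to-zero values, per patent 9332160, so that assets produced at a common time are released to the local processor together. The frame-granularity slots, fixed buffer depth, and all names are assumptions made for illustration.

```python
from collections import deque

class SyncBuffer:
    """Node-local delay buffer: an asset that arrives with a larger accumulated
    delay (reference-to-zero) waits for fewer additional frame periods, so the
    total source-to-processor delay is the same for every asset."""
    def __init__(self, depth_frames):
        self.depth = depth_frames
        self.slots = deque([[] for _ in range(depth_frames)])

    def add(self, asset, reference_to_zero_frames):
        # Larger accumulated delay -> earlier slot -> shorter additional wait.
        slot = max(self.depth - 1 - reference_to_zero_frames, 0)
        self.slots[slot].append(asset)

    def tick(self):
        """Advance one frame period; return the assets due for local processing."""
        due = self.slots.popleft()
        self.slots.append([])
        return due
```
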