Abstract: By performing a simple operation on an information processing terminal, a user can smoothly display a desired view of a subject from any of various directions.
Abstract: An apparatus for receiving a video according to embodiments of the present invention comprises a decoder configured to decode a bitstream based on viewing position and viewport information; an unpacker configured to unpack pictures in the decoded bitstream; a view regenerator configured to perform view regeneration on the unpacked pictures; and a view synthesizer configured to perform view synthesis on the view-regenerated pictures. A method of transmitting a video comprises removing inter-view redundancy from pictures for multiple viewing positions; packing the inter-view redundancy-removed pictures; and encoding the packed pictures and signaling information.
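The receiver-side stages named in this abstract (decode, unpack, view regeneration, view synthesis) can be sketched as a simple pipeline. All function names and the stand-in stage bodies below are illustrative assumptions, not taken from the patent:

```python
def decode(bitstream, viewing_position, viewport):
    # Stand-in decoder: each element of the bitstream becomes one packed picture.
    return [("decoded", unit) for unit in bitstream]

def unpack(pictures):
    # Stand-in unpacker: strip the packing wrapper from each decoded picture.
    return [("unpacked", unit) for _, unit in pictures]

def regenerate_views(pictures):
    # Stand-in view regenerator: reconstruct pictures pruned for inter-view redundancy.
    return [("view", unit) for _, unit in pictures]

def synthesize_view(pictures, viewing_position):
    # Stand-in view synthesizer: blend regenerated views for the viewing position.
    return {"position": viewing_position, "units": [u for _, u in pictures]}

def receive_video(bitstream, viewing_position, viewport=None):
    # Chain the four receiver stages in the order the abstract lists them.
    decoded = decode(bitstream, viewing_position, viewport)
    unpacked = unpack(decoded)
    regenerated = regenerate_views(unpacked)
    return synthesize_view(regenerated, viewing_position)
```

The sketch only fixes the ordering of the stages; the real components would operate on coded pictures rather than opaque tokens.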
Abstract: Provided are a method, apparatus and device for adding a video special effect, and a storage medium. The method includes: acquiring a source video sequence and at least one special effect video sequence; in the case where the frame rates of the two or more special effect video sequences are the same, inserting frames into the source video sequence and superimposing the two or more special effect video sequences on the source video sequence at the same time; and in the case where the frame rates of the two or more special effect video sequences are different, determining a target frame rate from the frame rates of the two or more special effect video sequences, inserting frames into the source video sequence, and then superimposing the two or more special effect video sequences on the source video sequence.
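The frame-rate handling described above can be sketched as follows. The choice of the highest effect frame rate as the target and the integer-ratio frame repetition are assumptions for illustration, since the abstract does not specify how the target rate is determined:

```python
def choose_target_fps(effect_fps_list):
    # If all effect sequences share a frame rate, use it directly;
    # otherwise pick one rate as the target (here, assumed to be the highest).
    rates = set(effect_fps_list)
    return effect_fps_list[0] if len(rates) == 1 else max(rates)

def insert_frames(source_frames, source_fps, target_fps):
    # Naive frame insertion: repeat each source frame so the sequence
    # plays at target_fps without changing its overall duration.
    factor = target_fps // source_fps  # assumes an integer rate ratio
    return [frame for frame in source_frames for _ in range(factor)]
```

After insertion, the source and effect sequences run at the same rate, so the effect frames can be superimposed one-to-one.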
Abstract: An electronic device according to the present invention includes: a processor; and a memory storing a program which, when executed by the processor, causes the electronic device to: perform control to change a display region of an image in accordance with an orientation change of the electronic device or in accordance with accepting a user operation and display the display region of the image on a screen; and determine a clipping region of the image to be clipped from the image based on a position of the display region of the image, wherein the image includes the display region and the clipping region and the clipping region is wider than the display region.
Abstract: A system for instant assembly of video clips through a user's interactive performance, comprising a device operated by a user, wherein the device comprises: user interface means configured for input and output interaction with the user; a processing unit and a memory configured for the creation of a new video assembled by appending a plurality of video clip segments extracted from a plurality of video clips; and an I/O unit configured for access to the plurality of video clips. The user interface means are configured to detect a sequence of manual assembling commands and to display the plurality of video clip segments, the display order of the video segments being defined by the sequence of manual concatenation commands; the processing unit and the memory are configured to record the appending process of the video segments extracted from the plurality of video clips.
Abstract: Disclosed herein is a gauge for slurry coating thickness determination. The gauge includes a body and at least three probes extending from the body. The at least three probes provide a go-no-go indicator including a first demarcation that defines a minimum slurry coating thickness and a second demarcation that defines a maximum slurry coating thickness. A minimum no-go region is defined between the first demarcation and a probe tip, a maximum no-go region is defined between the second demarcation and the body, and a go region is defined between the first demarcation and the second demarcation.
Abstract: A scanner includes a camera; a light source for generating probe light incorporating a spatial pattern; an optical system for transmitting the probe light towards an object and for transmitting at least a part of the light returned from the object to the camera; a focus element within the optical system for varying a position of a focus plane of the spatial pattern on the object; a unit for obtaining at least one image from the camera's array of sensor elements; a unit for evaluating a correlation measure at each focus plane position between at least one image pixel and a weight function; and a processor for determining the in-focus position(s) of each of a plurality of image pixels, or of each of a plurality of groups of image pixels, for a range of focus plane positions, and transforming the in-focus data into 3D real-world coordinates.
Type:
Grant
Filed:
May 11, 2022
Date of Patent:
December 27, 2022
Assignee:
3SHAPE A/S
Inventors:
Rune Fisker, Henrik Öjelund, Rasmus Kjaer, Mike Van Der Poel, Arish A. Qazi, Karl-Josef Hollenbeck
Abstract: A server configured to receive video clips from a mobile device, such as eyewear. The server has an electronic processor enabled to execute computer instructions to process the video clips to identify one or more characteristics in the frames of the video clips. The processor selects the video clips having the identified characteristics in the frames and creates a set of the selected video clips having the identified characteristics in the frames. The processor automatically trims the video clips based on frames that have the identified characteristics to create trimmed video clip segments, and then sends the trimmed video clip segments to the mobile device.
Type:
Grant
Filed:
December 30, 2019
Date of Patent:
December 27, 2022
Assignee:
Snap Inc.
Inventors:
David Ben Haim, Justin Huang, Nathan Litke, Eyal Zak
Abstract: A system, apparatus, and method for facilitating interactive reading can include an electronic device having a program or application thereon. In one embodiment, the application, combined with an external data source, can recognize one or more cues that result from reading a story aloud and/or performing one or more acts.
Abstract: A video sequence layout method, electronic device and storage medium are provided, and relate to the fields of deep learning, virtual reality, cloud computing, video layout processing and the like. The method includes: acquiring a first video sequence, the first video sequence including a main sequence for describing a first posture of a human body and a subordinate sequence for describing a plurality of second postures of the human body; extracting the main sequence and the subordinate sequence from the first video sequence; and, when a sequencing identification frame is detected in the first video sequence, performing random mixed sequencing processing on video frames in the main sequence and the subordinate sequence based on the sequencing identification frame, and taking a sequence combination obtained by the random mixed sequencing processing as a second video sequence.
Abstract: There is provided a system including a non-transitory memory storing an executable code and a hardware processor executing the executable code to receive a media content including a plurality of frames; divide the media content into a plurality of shots based on a first similarity between the plurality of frames, each of the plurality of shots including a plurality of frames of the media content; determine a plurality of sequential shots of the plurality of shots to be part of a first sub-scene of a plurality of sub-scenes of a scene based on a timeline continuity of the plurality of sequential shots; and identify each of the plurality of shots of the media content and each of the plurality of sub-scenes with a corresponding beginning time code and a corresponding ending time code.
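A minimal sketch of the shot splitting and time-code assignment described above. The similarity threshold, the pluggable `similarity` function, and the fixed frame rate are illustrative assumptions, not details from the patent:

```python
def split_into_shots(frames, similarity, threshold=0.5):
    # Start a new shot whenever the similarity between two consecutive
    # frames drops below the threshold.
    shots, current = [], [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        if similarity(prev, cur) < threshold:
            shots.append(current)
            current = []
        current.append(cur)
    shots.append(current)
    return shots

def time_codes(shots, fps=24.0):
    # Assign each shot a (beginning, ending) time code in seconds,
    # derived from the cumulative frame count.
    codes, start = [], 0
    for shot in shots:
        end = start + len(shot)
        codes.append((start / fps, end / fps))
        start = end
    return codes
```

A real implementation would compare visual features (e.g., color histograms) rather than raw frame tokens, but the bookkeeping is the same.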
Type:
Grant
Filed:
June 30, 2016
Date of Patent:
December 6, 2022
Assignee:
Disney Enterprises, Inc.
Inventors:
Nimesh Narayan, Jack Luu, Alan Pao, Matthew Petrillo, Anthony M. Accardo, Alexis Lindquist, Miquel Angel Farre Guiu, Katharine S. Ettinger, Lena Volodarsky Bareket
Abstract: A method includes rendering a portion of a first video on a display associated with a device; and, in response to a first user gesture and/or interaction on and/or with a touch-sensitive interface, selecting a second video, and rendering a portion of the second video on the display, wherein the first user gesture and/or interaction corresponds to a first time in the first video, and wherein the portion of the second video begins at a second time in the second video corresponding substantially to the first time in the first video. The method may include, in response to a second user gesture and/or interaction on and/or with the touch-sensitive interface, selecting a third video, and rendering a portion of the third video on the display, wherein the second user gesture and/or interaction corresponds to a second time in the second video, and wherein the portion of the third video begins at a third time in the third video corresponding substantially to the second time in the second video.
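The time-synchronized switching between videos can be sketched as below. The video representation (a dict with a duration) and the clamping of the resume time to the target video's duration are assumptions for illustration:

```python
def switch_to(videos, target_index, current_time):
    # On a gesture, select another video and resume playback at a time
    # corresponding substantially to the current playback time, clamped
    # to the target video's duration.
    target = videos[target_index]
    return target_index, min(current_time, target["duration"])
```

Repeating the call on further gestures gives the chained first-to-second-to-third video behavior the abstract describes.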
Abstract: A system, apparatus, computer program product, and method perform controlled recording of video captured by a camera. The method includes checking, by recorder circuitry, for receipt of a ping signal sent from a server to the recorder circuitry, the ping signal being expected to be received within a predetermined time interval after an earlier ping signal was received (or after an initial ping signal was expected to be received); in response to a determination by the recorder circuitry that the ping signal was not received, entering an autonomous recording mode, the autonomous recording mode including recording video provided from at least one camera; and continuing operation of the autonomous recording mode until at least one of receiving another ping signal by the recorder circuitry, or receiving a control signal from the server that directs the recorder circuitry to stop recording the video provided from the at least one camera.
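The ping-watchdog behavior described above can be sketched deterministically, with time passed in explicitly rather than read from a clock or a network. The class and method names are hypothetical:

```python
class Recorder:
    # Sketch of the watchdog: if no ping arrives within `interval`
    # seconds of the last one, fall back to autonomous recording until
    # a ping or an explicit server stop command arrives.
    def __init__(self, interval):
        self.interval = interval
        self.last_ping = 0.0
        self.autonomous = False

    def on_ping(self, now):
        # A ping from the server resets the watchdog and ends autonomous mode.
        self.last_ping = now
        self.autonomous = False

    def on_stop_command(self, now):
        # Server-directed stop: leave autonomous mode and restart the wait.
        self.autonomous = False
        self.last_ping = now

    def tick(self, now):
        # Called periodically; returns True while recording autonomously.
        if now - self.last_ping > self.interval:
            self.autonomous = True
        return self.autonomous
```

Driving `tick` from a timer and `on_ping`/`on_stop_command` from the server connection reproduces the mode transitions in the abstract.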
Type:
Grant
Filed:
November 24, 2021
Date of Patent:
November 29, 2022
Assignee:
AXIS AB
Inventors:
Emma Holmberg Ohlsson, Jonathan Karlsson, Fredrik Brozén, Viktor Andersson, Per Johansson
Abstract: A method for processing video data is performed by a data processing unit (DPU). The method includes obtaining, by the DPU of an edge device, video data; processing the video data to obtain video data chunks and indexing attributes; generating retention and staging metadata based on the video data chunks and the video processing engine outcomes, and the retention and staging metadata specifies retention and staging information associated with the video data chunks; associating the retention and staging metadata with the video data chunks; and storing the retention and staging metadata and the video data chunks in appropriate storages based on the retention and staging metadata.
Abstract: Systems and methods for editing a media composition from media assets are provided. An editing device receives a media asset associated with a scene to be rendered in a media composition. The editing device receives a script including script elements that index script sections associated with the scene and metadata. The editing device edits the media composition with segments of the media asset based on a comparison of the segments, the script elements, and the metadata.
Type:
Grant
Filed:
August 27, 2021
Date of Patent:
November 22, 2022
Assignee:
Verizon Patent and Licensing Inc.
Inventors:
Daniel Elortegui, Jay Cee Straley, Praveen Nair
Abstract: A method of processing first video data of a first region of interest from incoming video data collected by a camera having a first field of view. The method includes subscribing, by a first subscriber, to an endpoint at which the first video data is published after the first video data is preprocessed according to preprocessing parameters defined within a runtime configuration file. The preprocessing includes formatting the incoming video data to create the first video data of the first region of interest with a second field of view that is less than the first field of view. The method further includes processing, by a computer processor, the first video data to determine at least one output that is indicative of a first inference dependent upon the first video data. The preprocessing parameters that format the incoming video data to create the first video data are dependent upon the processing to be performed on the first video data.
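The region-of-interest preprocessing step (cropping incoming frames to a smaller field of view) might be sketched as below, treating a frame as a nested list of pixel values; the parameter layout is an assumption, not the patent's runtime configuration format:

```python
def preprocess(frame, params):
    # Crop an incoming frame to the configured region of interest,
    # producing video data with a smaller field of view.
    x, y, w, h = params["roi"]  # assumed (left, top, width, height) layout
    return [row[x:x + w] for row in frame[y:y + h]]
```

Publishing the cropped frames to an endpoint lets downstream subscribers run inference only on the region they need.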
Abstract: An acquisition device for at least semiautomated acquisition of sets of multiple object data of at least one object, including a movement device for generating a defined relative movement between at least one object data acquisition unit and the at least one object.
Type:
Grant
Filed:
October 26, 2018
Date of Patent:
November 15, 2022
Assignee:
Robert Bosch GmbH
Inventors:
Darno Alexander Ketterer, Christin Ketterer, Julian Weiss, Sebastian Schmitt
Abstract: An interactive exercise method includes streaming exercise content to an interactive video system for display via a mirror, the exercise content including a depiction of an instructor performing a repetitive movement. A video stream of a user performing the repetitive movement is received, via a camera of the interactive video system, and the user is detected in the video stream. The user or a body portion thereof is tracked in the video stream as the user performs the repetitive movement. The method also includes generating a measure of a difference between a form of the user performing the repetitive movement and a predetermined form for the repetitive movement. A corrective movement is displayed to the user, via the display and during the display of the instructor performing the repetitive movement, based on the measure, to conform the form of the user to the predetermined form for the repetitive movement.
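One simple way to realize the "measure of a difference" between the user's form and the predetermined form, assuming poses are given as per-joint angles in degrees, is a mean absolute angle difference; the pose representation and the tolerance are illustrative assumptions:

```python
def form_difference(user_pose, reference_pose):
    # Mean absolute joint-angle difference between the user's pose and
    # the predetermined form (one angle per joint, in degrees).
    assert len(user_pose) == len(reference_pose)
    return sum(abs(u - r) for u, r in zip(user_pose, reference_pose)) / len(user_pose)

def needs_correction(user_pose, reference_pose, tolerance=10.0):
    # Display a corrective movement only when the measure exceeds tolerance.
    return form_difference(user_pose, reference_pose) > tolerance
```

A production system would derive the joint angles from pose estimation on the camera stream, but the comparison step reduces to a measure like this.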
Abstract: Aspects of the subject disclosure may include, for example, applying first data associated with a first content item to a model to generate first classification characteristics, analyzing the first classification characteristics to generate a first marker, wherein the first marker delineates a first location of inventory within the first content item, selecting a first creative to populate a portion of the inventory, and populating, based on the selecting, the portion of the inventory with the first creative. Other embodiments are disclosed.
Abstract: Techniques for receiving and processing sensor data captured by a fleet of vehicles are discussed herein. In some examples, a fleet dashcam system can receive sensor data captured by electronic devices on a fleet of vehicles and can use that data to detect collision and near-collision events. The data of the collision or near-collision event can be used to determine a simulation scenario and a response of an autonomous vehicle controller to the simulation scenario, and/or it can be used to create a collision heat map to aid in operation of an autonomous vehicle.
Type:
Grant
Filed:
March 23, 2021
Date of Patent:
November 8, 2022
Assignee:
Zoox, Inc.
Inventors:
Andrew Scott Crego, Josh Alexander Jimenez, James William Vaisey Philbin, Chuang Wang