Abstract: An electronic device and method for content-aware image encoding using a machine learning (ML) model are provided. The electronic device detects at least one foreground region and at least one background region in a first image frame. The electronic device determines a set of first macroblocks associated with the detected at least one foreground region and a set of second macroblocks associated with the detected at least one background region, and determines a bit allocation control parameter associated with the determined set of second macroblocks. The electronic device updates the determined bit allocation control parameter based on an application of a first trained ML model, and encodes the first image frame based on the updated bit allocation control parameter to obtain a second image frame, so that a first image quality index associated with the first image frame matches a second image quality index associated with the second image frame within a threshold range.
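The abstract's core idea, classifying macroblocks as foreground or background and allocating fewer bits to the background, can be sketched as follows. This is a minimal illustration, not the patented method: the 16x16 macroblock size, the majority-vote classification, and the fixed QP offset (which the patent instead derives from a trained ML model) are all assumptions for the sketch.

```python
import numpy as np

MB = 16  # assumed macroblock size in pixels

def classify_macroblocks(fg_mask, h, w):
    """Split an h x w frame into 16x16 macroblocks and label each as
    foreground (True) or background (False) from a binary mask,
    using a simple majority vote over the block's pixels."""
    labels = np.zeros((h // MB, w // MB), dtype=bool)
    for i in range(h // MB):
        for j in range(w // MB):
            block = fg_mask[i * MB:(i + 1) * MB, j * MB:(j + 1) * MB]
            labels[i, j] = block.mean() > 0.5
    return labels

def allocate_qp(labels, base_qp=26, bg_offset=8):
    """Assign a quantization parameter per macroblock: background blocks
    get a coarser QP (fewer bits). In the patent this bit allocation
    control parameter would be refined by the trained ML model so the
    output quality index stays within a threshold of the input's."""
    qp = np.full(labels.shape, base_qp, dtype=int)
    qp[~labels] += bg_offset  # spend fewer bits on background blocks
    return qp
```

A higher QP quantizes more coarsely, so raising it only on background macroblocks concentrates the bit budget on the foreground.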
Abstract: An endoscope apparatus includes a compression processing control unit configured to carry out a compression processing of compressing image data by using a compression parameter to generate compressed data, a monitor serving as a display unit configured to display a display image corresponding to the image data, an information quantity detection unit configured to detect a quantity of information on an object contained in the image data, and a judgement unit configured to carry out a judgement processing of judging whether or not a judgement value corresponding to the quantity of information is smaller than a predetermined threshold. The image pickup of the object and the generation of the image data are performed continuously multiple times, and the judgement processing is carried out each time the image data is generated. The compression parameter and the display image are determined based on a result of the judgement processing.
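The per-frame judgement processing described above can be sketched as a threshold test on an information measure. As assumptions for the sketch only, Shannon entropy of the pixel histogram stands in for the patent's "quantity of information on an object," and the threshold value and the two-level compression parameter are arbitrary:

```python
import numpy as np

def information_quantity(image):
    """Shannon entropy (bits) of the 8-bit pixel histogram, used here as a
    stand-in for the patent's quantity-of-information detection unit."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def choose_compression(image, threshold=4.0):
    """Judgement processing: if the judgement value is below the threshold,
    the frame carries little detail and can be compressed harder; otherwise
    keep a finer compression parameter. Returns (decision, judgement value)."""
    q = information_quantity(image)
    decision = 'high_compression' if q < threshold else 'low_compression'
    return decision, q
```

In the apparatus this test would run each time a new frame of image data is generated, with the result steering both the compression parameter and what is shown on the monitor.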
Abstract: Implementations of the present disclosure are directed to a computer-implemented method, a system, and an article for managing event data in a multi-player online game. The method can include, for example, receiving user input at a plurality of client devices for a multi-player online game that includes a virtual environment; generating user-initiated events for the online game on the client devices based on the user input; distributing any user-initiated events generated on each client device to other client devices from the plurality of client devices; determining at each client device a plurality of derived game events based on the user-initiated events; storing on each client device the user-initiated events and the derived game events in one or more event queues; and determining at each client device a state of the virtual environment over time, according to the stored user-initiated events and the derived game events.
Type: Grant
Filed: November 29, 2017
Date of Patent: July 13, 2021
Assignee: MZ IP HOLDINGS, LLC
Inventors: John O'Connor, Nathan Spencer, Garth Gillespie, Timothy Wong
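The event-queue mechanism in the abstract above, where each client stores user-initiated and derived events and computes the virtual-environment state from them, can be sketched as a deterministic replay queue. This is an illustrative sketch, not the patented design; the `(timestamp, sequence)` ordering key and the `apply_fn` reducer are assumptions introduced here:

```python
import heapq

class EventQueue:
    """Orders user-initiated and derived game events by (timestamp, sequence)
    so that every client, replaying the same events, derives the same state
    of the virtual environment over time."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker for events sharing a timestamp

    def push(self, timestamp, event):
        """Store an event; arrival order does not matter."""
        heapq.heappush(self._heap, (timestamp, self._seq, event))
        self._seq += 1

    def replay(self, state, apply_fn):
        """Fold every queued event into the state in timestamp order."""
        while self._heap:
            _, _, event = heapq.heappop(self._heap)
            state = apply_fn(state, event)
        return state
```

Because events distributed between clients may arrive out of order, sorting by a shared key before applying them is what lets each client compute an identical environment state.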
Abstract: The present invention is directed to a method of integrating information, including real-time information, into a virtual thematic environment using a computer system, the method including accessing the stored information from a database or downloading the real-time information from a source external to the thematic environment; inserting the real-time information into the thematic environment; and displaying the information to a user within the thematic environment. In one embodiment, the computer system is connected to a holographic projection system such that images from the thematic environment can be projected as holographic projections.
Abstract: A content device and method are disclosed, including a processing device to process streaming video content. A fingerprinter receives captured frames of the streaming video content and, for each frame of a plurality of the captured frames, generates a one-dimensional histogram function of pixel values and transforms the histogram function with a Fast Fourier Transform (FFT), to generate a plurality of complex values for the frame. The fingerprinter further, for each of the plurality of complex values, assigns a binary one (“1”) when a real part of the complex value is greater than zero (“0”) and assigns a binary zero (“0”) when the real part is less than or equal to zero, to generate a plurality of bits. The fingerprinter further concatenates a specific number of the bits to generate a fingerprint for the frame.
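The histogram-FFT-sign pipeline described above can be sketched compactly. The 256-bin histogram and the 64-bit fingerprint length are assumptions for the sketch; the abstract only specifies "a specific number" of bits:

```python
import numpy as np

def frame_fingerprint(frame, n_bits=64):
    """Fingerprint one frame: 1-D pixel-value histogram -> FFT ->
    one bit per complex value (1 if the real part > 0, else 0) ->
    concatenate the first n_bits bits into a string."""
    hist, _ = np.histogram(frame, bins=256, range=(0, 256))
    spectrum = np.fft.fft(hist.astype(float))
    bits = (spectrum.real[:n_bits] > 0).astype(int)
    return ''.join(map(str, bits))
```

Taking only the sign of each real part makes the fingerprint compact and stable under small pixel-level perturbations, which is what makes it usable for matching frames of streaming video.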