Patents Examined by Loi H Tran
  • Patent number: 11139000
    Abstract: Aspects of the disclosure provide an apparatus that includes interface circuitry and processing circuitry. The interface circuitry is configured to receive signals carrying metadata that associates a region of interest in a first visual view provided by a first visual track with the first visual track and a second visual track that provides a second visual view that is a part of the first visual view. The processing circuitry is configured to parse the metadata, determine, when the region of interest is selected, the second visual track to provide visual data, and generate images for the region of interest based on the visual data from the second visual track.
    Type: Grant
    Filed: March 6, 2018
    Date of Patent: October 5, 2021
    Assignee: MediaTek Inc.
    Inventors: Xin Wang, Lulin Chen, Wang Lin Lai, Shan Liu
  • Patent number: 11130475
    Abstract: The invention concerns a vehicle, comprising at least one camera and one corresponding washing system (3) for cleaning the camera, the washing system comprising a housing (10) for receiving the camera, a cleaning fluid storage tank (20), a nozzle (12) for spraying the cleaning fluid inside the housing and a pump (22) for pumping the cleaning fluid from the tank to the nozzle. The washing system further includes a passage (18) extending from the housing (10) to the tank (20), for draining, under gravity, used cleaning fluid back into the tank (20).
    Type: Grant
    Filed: May 10, 2017
    Date of Patent: September 28, 2021
    Assignee: Volvo Truck Corporation
    Inventor: Nicolas Berne
  • Patent number: 11133036
    Abstract: A system and method for associating audio feeds to corresponding video feeds, including determining a subject of interest within a video feed based on the video feed and metadata associated with the video feed; analyzing the metadata to determine an optimal audio source for the subject of interest; configuring the optimal audio source to capture an audio feed; and associating the captured audio feed with the video feed.
    Type: Grant
    Filed: March 13, 2018
    Date of Patent: September 28, 2021
    Assignee: Insoundz Ltd.
    Inventors: Tomer Goshen, Emil Winebrand
  • Patent number: 11134317
    Abstract: Systems and devices for live captioning events are disclosed. The system may receive event calendar data and a first plurality of caption files and preselect a first caption file based on the event calendar data. The system may then access an audiovisual recorder of a user device, and receive a first feedback from the recorder. The system may then determine whether the first caption file matches the first feedback. When there is a match, the system may determine a first synchronization between the caption file and the feedback. When there is no match, the system may determine if there is a match with a second caption file of the first plurality of caption files and determine a second synchronization. When the second caption file does not match, the system may receive at least a third caption file over a mobile network and determine a third synchronization for display.
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: September 28, 2021
    Assignee: CAPITAL ONE SERVICES, LLC
    Inventors: Galen Rafferty, Austin Walters, Jeremy Edward Goodsitt, Vincent Pham, Mark Watson, Reza Farivar, Anh Truong
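The fallback logic in the abstract above (try preselected caption files first, then fetch over the network) can be sketched roughly as follows; the fingerprint-matching scheme, field names, and offset computation are illustrative assumptions, not details from the patent:

```python
def synchronize_captions(feedback, local_captions, fetch_remote):
    """Try each preselected caption file against recorder feedback;
    fall back to fetching a caption file over the mobile network.
    Returns the chosen file and a playback offset (a simple stand-in
    for the 'synchronization' the abstract describes)."""
    for caption in local_captions:
        if caption["fingerprint"] == feedback["fingerprint"]:
            # Match found: synchronize this caption file to the feedback.
            return {"caption": caption["name"],
                    "offset": feedback["time"] - caption["start"]}
    # No local caption matched: receive a further caption file remotely.
    remote = fetch_remote()
    return {"caption": remote["name"],
            "offset": feedback["time"] - remote["start"]}
```

A real implementation would match on audio features rather than a literal fingerprint string, but the control flow (preselected candidates, then a network fallback) is the same.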
  • Patent number: 11120271
    Abstract: Data processing systems and methods are disclosed for augmenting video content with one or more augmentations to produce augmented video. Elements within video content may be identified by spatiotemporal indices and may have associated values. An advertiser can pay to have an augmentation added to an element that, for example, advertises the advertiser's goods and/or includes a link that, when activated, takes a user to the advertiser's web site. Elements may have associated contexts that can be used to determine augmentations and element value, such as a position and/or current use of the element.
    Type: Grant
    Filed: December 10, 2020
    Date of Patent: September 14, 2021
    Assignee: Second Spectrum, Inc.
    Inventors: Yu-Han Chang, Tracey Chui Ping Ho, Rajiv Tharmeswaran Maheswaran
  • Patent number: 11107506
    Abstract: A system for combining data includes one or more processors that are, individually or collectively, configured to receive video data recorded by at least one sensor and operation data of a movable object that carries the at least one sensor, associate the operation data with the video data, and store the associated operation data and the video data in a video file. The operation data indicates one or more operation states of the movable object during a time period when the video data is recorded.
    Type: Grant
    Filed: January 8, 2019
    Date of Patent: August 31, 2021
    Assignee: SZ DJI TECHNOLOGY CO., LTD.
    Inventors: Gaoping Bai, Dajun Huo, Ruoying Zhang
  • Patent number: 11095938
    Abstract: A browser-based video editor is configured to allow a user to create a composition schema having multimedia layers, including video, audio, and graphic and text layers. The composition schema may be transmitted to a remote service that employs a rendering engine to play back the composition schema and align the clocks for each respective multimedia layer into a single video representation. A master clock object is employed to sync the clocks while also checking a series of properties with each multimedia layer to comport the multimedia layers with an interval-based master clock. The composition schema is recorded using FFmpeg (Fast Forward Moving Picture Experts Group) to create a video representation for user consumption, such as an MP4 (Motion Pictures Experts Group 4) formatted file.
    Type: Grant
    Filed: November 21, 2019
    Date of Patent: August 17, 2021
    Assignee: POND5 INC.
    Inventors: Mathieu Frederic Welche, Hugo Valentín Elías García, Taylor James McMonigle, Pier Stabilini, Nicola Onassis
  • Patent number: 11089184
    Abstract: A synchronous modulation method based on an embedded player. The method comprises the steps of: Step S1, acquiring a current timestamp adopted by a current synchronous audio signal and a current synchronous video signal; Step S2, acquiring a jump difference value of the current timestamp; Step S3, determining whether the jump difference value is less than a first preset time, if yes, synchronously playing, by the player, the audio signal and the video signal through a first timestamp, and exiting; and Step S4, determining whether the jump difference value is greater than a second preset time, if yes, synchronously playing, by the player, the audio signal and the video signal through a second timestamp, and exiting.
    Type: Grant
    Filed: January 14, 2020
    Date of Patent: August 10, 2021
    Assignee: AMLOGIC (SHANGHAI) CO., LTD.
    Inventors: Yunmin Chen, Ting Yao, Zhi Zhou, Lifeng Cao, Zhiheng Cao
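Steps S1 through S4 above amount to a threshold test on the timestamp jump. A minimal sketch, assuming the timestamps are in seconds and with made-up preset values (the patent does not give concrete thresholds, and it leaves the in-between case open):

```python
def choose_sync_timestamp(current_ts, previous_ts,
                          first_preset=0.1, second_preset=1.0):
    """Steps S1-S4: decide which timestamp the player uses to
    synchronously play the audio and video signals."""
    jump = abs(current_ts - previous_ts)   # S2: jump difference value
    if jump < first_preset:                # S3: small jump -> first timestamp
        return "first_timestamp"
    if jump > second_preset:               # S4: large jump -> second timestamp
        return "second_timestamp"
    return "wait"  # between the presets: behavior not specified in the abstract
```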
  • Patent number: 11074458
    Abstract: A system and method for searching a video stream collected by a camera in a video surveillance system for an object's placement or displacement is disclosed. The searching includes an interactive question/answer approach that allows for a video snippet including the object's placement or displacement to be found quickly without the need for complicated video analytics. During the search, frames from algorithmically selected points in the video stream are presented to a user for review. The user reviews each frame and indicates if he/she sees the object. Based on the user's response, the search algorithmically reduces the portion of the video stream that is searched until a video snippet is found that includes the object's placement or displacement.
    Type: Grant
    Filed: November 25, 2019
    Date of Patent: July 27, 2021
    Assignee: VERINT AMERICAS INC.
    Inventor: Shahar Daliyot
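The interactive question/answer search described above is naturally modeled as a bisection over the frame range; this sketch uses a callable in place of the human reviewer and assumes the object, once placed, remains visible (an assumption, not a claim from the patent):

```python
def find_placement_snippet(frame_has_object, start, end, min_len=1):
    """Bisect the frame range [start, end) until the snippet that
    contains the object's placement is at most min_len frames long.
    frame_has_object(i) plays the role of the user answering
    'do you see the object in this frame?'."""
    lo, hi = start, end
    while hi - lo > min_len:
        mid = (lo + hi) // 2
        if frame_has_object(mid):
            hi = mid   # object already present: placement is earlier
        else:
            lo = mid   # object not yet present: placement is later
    return lo, hi
```

Each iteration halves the searched portion, so even a long recording needs only a logarithmic number of frames shown to the reviewer.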
  • Patent number: 11070765
    Abstract: A method and apparatus for night lapse video capture are disclosed. The apparatus includes an image sensor, an image processor, and a video encoder. The image sensor is configured to capture image data. The image data includes a first image that is temporally precedent to a second image. The image processor is configured to determine a motion estimation. The motion estimation is based on a comparison of a portion of the first image and a portion of the second image. The image processor is configured to subtract a mask from the second image to obtain a denoised image. The mask is based on the motion estimation. The video encoder is configured to receive the denoised image from the image processor. The video encoder is configured to encode the denoised image in a video format. The video encoder is configured to output a video file in the video format.
    Type: Grant
    Filed: April 23, 2019
    Date of Patent: July 20, 2021
    Assignee: GoPro, Inc.
    Inventors: Ojas Gandhi, Anandhakumar Chinnaiyan, Naveen Chinya Krishnamurthy
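The denoising step described above (a motion-estimation-derived mask subtracted from the second image) can be sketched on flat pixel lists; the per-pixel absolute-difference motion estimate, the threshold, and the noise level are all illustrative assumptions:

```python
def motion_estimate(first, second):
    """Per-pixel absolute difference between two temporally adjacent frames."""
    return [abs(a - b) for a, b in zip(first, second)]

def denoise(first, second, motion_threshold=4, noise_level=2):
    """Build a mask from the motion estimation and subtract it from the
    second image: static pixels (low motion) are assumed to carry sensor
    noise and are attenuated; moving pixels are left untouched."""
    motion = motion_estimate(first, second)
    mask = [noise_level if m < motion_threshold else 0 for m in motion]
    return [max(0, p - n) for p, n in zip(second, mask)]
```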
  • Patent number: 11042754
    Abstract: Systems and methods of automatically extracting summaries of video content are described herein. A data processing system can access, from a video database, a first video content element including a first plurality of frames. The data processing system can select an intervallic subset of the first plurality of frames of the first video content element. The data processing system can calculate, for each of a plurality of further subsets comprising a predetermined number of frames from the intervallic subset, a score for the further subset. The data processing system can identify, from the plurality of further subsets, a further subset having a highest score. The data processing system can select a portion of the first video content element comprising the frames of the further subset having the highest score. The data processing system can generate a second video content element comprising the selected portion of the first video content element.
    Type: Grant
    Filed: August 3, 2017
    Date of Patent: June 22, 2021
    Assignee: Google LLC
    Inventors: Yi Shen, Xiangrong Chen, Min-hsuan Tsai, Yun Shi, Tianpeng Jin, Zheng Sun, Weilong Yang, Jingbin Wang
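The pipeline above (sample an intervallic subset, score fixed-size windows of it, keep the best window) can be sketched as follows; the scoring function is left pluggable, since the abstract does not specify one:

```python
def summarize(frames, interval, window, score):
    """Select an intervallic subset of `frames`, score each run of
    `window` consecutive sampled frames, and return the (start, end)
    indices of the best-scoring portion in the original frame numbering."""
    sampled = list(range(0, len(frames), interval))   # intervallic subset
    best = max(range(len(sampled) - window + 1),
               key=lambda i: score([frames[j] for j in sampled[i:i + window]]))
    return sampled[best], sampled[best + window - 1]
```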
  • Patent number: 11023736
    Abstract: Presenting event-specific video content that conforms to a user selection of an event type includes processing at least one video feed through at least one spatiotemporal pattern recognition algorithm that uses machine learning to develop an understanding of at least one event within the at least one video feed to determine at least one event type, wherein the at least one event type includes an entry in a relationship library at least detailing a relationship between two visible features of the at least one video feed, extracting the video content displaying the at least one event and associating the understanding with the video content in a video content data structure. A user interface is configured to permit a user to indicate a preference for at least one event type that is used to retrieve and provide corresponding extracted video content with the data structure in a new video feed.
    Type: Grant
    Filed: March 20, 2020
    Date of Patent: June 1, 2021
    Assignee: Second Spectrum, Inc.
    Inventors: Yu-Han Chang, Rajiv Maheswaran, Jeffrey Wayne Su, Noel Hollingsworth
  • Patent number: 11009945
    Abstract: The invention relates to a method for operating an eye tracking device (10) for multi-user eye tracking, wherein images (24) of a predefined capturing area (14) of the eye tracking device (10) are captured by means of an imaging device (12) of the eye tracking device (10) and the captured images (24) are processed by means of a processing unit (16) of the eye tracking device (10). If a first user (26a) and a second user (26b) are present in the predefined capturing area (14) of the eye tracking device (10), a first information relating to the first user (26a) and a second information relating to the second user (26b) are determined on the basis of the captured images (24) by processing the images (24). Furthermore, the images (24) are captured successively in a predeterminable time sequence.
    Type: Grant
    Filed: July 30, 2019
    Date of Patent: May 18, 2021
    Assignee: APPLE INC.
    Inventors: Fabian Wanner, Matthias Nieser, Kun Liu, Walter Nistico
  • Patent number: 11012685
    Abstract: There are provided a plurality of methods for detecting a scene change in a streamed video, the streamed video comprising a series of pictures. An example method comprises calculating, for a plurality of positions, a difference between the costs of coding macro-blocks at the same position in successive pictures. The method further comprises identifying a new scene when the sum of the differences for a plurality of positions meets a threshold criterion. There is further provided a method of determining the perceptual impact of a packet loss on a streamed video, the method comprising: identifying a packet loss; and determining if the lost packet contained information relating to a picture at the start of a new scene, wherein a new scene is detected using one of the methods disclosed herein.
    Type: Grant
    Filed: July 8, 2019
    Date of Patent: May 18, 2021
    Assignees: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL), DEUTSCHE TELEKOM AG
    Inventors: Martin Pettersson, Savvas Argyropoulos, David Lindero, Peter List
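The example detection method above reduces to a sum of per-position coding-cost differences compared against a threshold. A minimal sketch, assuming costs arrive as flat per-macro-block lists aligned by position:

```python
def is_new_scene(prev_costs, cur_costs, threshold):
    """Sum, over macro-block positions, the difference between the
    coding costs at the same position in successive pictures; flag a
    new scene when the sum meets the threshold criterion."""
    total = sum(abs(c - p) for p, c in zip(prev_costs, cur_costs))
    return total >= threshold
```

The intuition is that a scene cut makes nearly every macro-block expensive to predict from the previous picture, so the summed cost jump is large, whereas ordinary motion changes only a few positions.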
  • Patent number: 11008735
    Abstract: An image pick-up apparatus includes a first stereo camera attached to a revolving unit and a second stereo camera attached to the revolving unit. The first stereo camera picks up an image of a first image pick-up range. The second stereo camera picks up an image of a second image pick-up range above or beyond the first image pick-up range.
    Type: Grant
    Filed: September 30, 2015
    Date of Patent: May 18, 2021
    Assignee: KOMATSU LTD.
    Inventors: Akiyoshi Deguchi, Shun Saito, Hiroyoshi Yamaguchi, Shun Kawamoto, Taiki Sugawara
  • Patent number: 11012603
    Abstract: An apparatus for capturing media using a plurality of cameras associated with an electronic device based on an ambient light condition is provided. The apparatus includes a processor, and a memory unit coupled to the processor, the memory unit including a processing module configured to obtain at least one preview frame from a first camera in response to enabling an image capture application of the electronic device, determine ambient light parameters of the obtained at least one preview frame, and switch a camera operation of the electronic device from the first camera to a second camera to capture the media, if the determined ambient light parameters are below a pre-defined threshold.
    Type: Grant
    Filed: June 10, 2019
    Date of Patent: May 18, 2021
    Inventors: Digadari Suman, Abhijit Dey, Apurbaa Bhattacharjee, Gaurav Khandelwal, Kiran Nataraju
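The switching decision above is a single threshold test on an ambient light estimate taken from the preview frame; this sketch uses mean luma as that estimate, which is an assumption on our part (the abstract only says "ambient light parameters"):

```python
def select_camera(preview_luma_samples, threshold=50):
    """Estimate ambient light from preview-frame luma samples and
    decide whether to switch from the first camera to the second."""
    ambient = sum(preview_luma_samples) / len(preview_luma_samples)
    return "second_camera" if ambient < threshold else "first_camera"
```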
  • Patent number: 10997425
    Abstract: A media system generally includes a memory device that stores an event datastore that stores a plurality of event records, each event record corresponding to a respective event and event metadata describing at least one feature of the event. The media system (a) receives a request to generate an aggregated clip comprised of one or more media segments, where each media segment depicts a respective event; (b) for each event record from at least a subset of the plurality of event records, determines an interest level of the event corresponding to the event record; (c) determines one or more events to depict in the aggregated clip based on the respective interest levels of the one or more events; (d) generates the aggregated clip based on the respective media segments that depict the one or more events; and (e) transmits the aggregated clip to a user device.
    Type: Grant
    Filed: September 17, 2019
    Date of Patent: May 4, 2021
    Assignee: Second Spectrum, Inc.
    Inventors: Edward Shek Chan, Yu-Han Chang, Rajiv Tharmeswaran Maheswaran, Jeffrey Wayne Su
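Steps (b) through (d) above (score events for interest, pick the top ones, assemble their segments) can be sketched as follows; the record fields, the pluggable interest function, and the chronological re-sort are illustrative assumptions:

```python
def build_aggregated_clip(event_records, interest, max_events):
    """Rank event records by interest level, keep the top max_events,
    and return their media segments in chronological order for the
    aggregated clip."""
    ranked = sorted(event_records, key=interest, reverse=True)[:max_events]
    ranked.sort(key=lambda e: e["time"])   # keep playback chronological
    return [e["segment"] for e in ranked]
```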
  • Patent number: 10999597
    Abstract: According to one embodiment, an image encoding method includes selecting a motion reference block from an already-encoded pixel block. The method includes selecting an available block including different motion information from the motion reference block, and selecting a selection block from the available block. The method includes generating a predicted image of the encoding target block using motion information of the selection block. The method includes encoding a prediction error between the predicted image and an original image. The method includes encoding selection information identifying the selection block by referring to a code table decided according to a number of the available block.
    Type: Grant
    Filed: June 2, 2020
    Date of Patent: May 4, 2021
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Taichiro Shiodera, Saori Asaka, Akiyuki Tanizawa, Takeshi Chujoh
  • Patent number: 10992989
    Abstract: Video information may define video content. The video content may be characterized by capture information. A remote device may transmit at least a portion of the capture information to a computing device. The computing device may identify one or more portions of the video content based on the transmitted capture information. The remote device may receive the identification of the identified portion(s) of the video content from the computing device. The remote device may stream one or more portions of the video information defining at least some of the identified portion(s) of the video content to the computing device. The streamed video information may enable the computing devices to provide a presentation of at least some of the identified portion(s) of the video content using a buffer of the streamed video information, which may allow the computing device to present the video content without permanently storing the video information.
    Type: Grant
    Filed: August 7, 2020
    Date of Patent: April 27, 2021
    Assignee: GoPro, Inc.
    Inventors: Jean-Baptiste Noel, Jean Caille, Matthieu Bouron, Francescu Santoni
  • Patent number: 10951879
    Abstract: A method for synthesising a viewpoint, comprising: capturing a scene using a network of cameras, the cameras defining a system volume of the scene, wherein a sensor of one of the cameras has an output frame rate for the system volume below a predetermined frame rate; selecting a portion of the system volume as an operational volume based on the sensor output frame rate, the predetermined frame rate and a region of interest, the operational volume being a portion of the system volume from which image data for the viewpoint can be synthesised at the predetermined frame rate, wherein a frame rate for synthesising a viewpoint outside the operational volume is limited by the output frame rate; reading, from the sensors at the predetermined frame rate, image data corresponding to the operational volume; and synthesising the viewpoint at the predetermined frame rate using the image data.
    Type: Grant
    Filed: December 4, 2017
    Date of Patent: March 16, 2021
    Assignee: Canon Kabushiki Kaisha
    Inventor: James Austin Besley