Patents Examined by David E. Harvey
  • Patent number: 10764656
    Abstract: Presentation of video highlights is disclosed. A data processing system receives, from multiple users, multimedia files with user-generated video(s), the multimedia files being produced and enhanced by the users. The data processing system generates a speckle excitement vector of the multimedia files based on identifying feature(s) of the user-generated video(s). The processing and distribution system determines a cognitive state of each of the users based, in part, on the speckle excitement vector of each of the multimedia files. The processing and distribution system alters characteristic(s) of the user-generated video(s) of the multimedia files based on the cognitive state of each of the users, resulting in altered video(s). The processing and distribution system compiles the altered video(s) into a digital file that includes automatically-produced multimedia content. The processing and distribution system makes the digital file available for viewing.
    Type: Grant
    Filed: January 4, 2019
    Date of Patent: September 1, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Aaron K. Baughman, Mauro Marzorati, Gray Cannon, Craig M. Trim
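The entry above describes a pipeline that scores user videos, infers a per-user cognitive state, alters the videos accordingly, and compiles the results. Below is a minimal sketch of that flow under stated assumptions; the feature names, the two state labels, and the playback-rate adjustment are illustrative stand-ins, not the patented method.

```python
# Hypothetical sketch: score per-video "excitement", map it to a coarse
# cognitive-state label, adjust one playback characteristic, and compile a reel.
from dataclasses import dataclass, field

@dataclass
class UserVideo:
    user: str
    features: dict = field(default_factory=dict)  # e.g. {"motion": 0.7, "audio_energy": 0.9}

def excitement_vector(video: UserVideo) -> list:
    # Illustrative stand-in for the abstract's "speckle excitement vector".
    return [video.features.get(k, 0.0) for k in ("motion", "audio_energy", "faces")]

def cognitive_state(vector: list) -> str:
    # Coarse state label from the mean feature value (assumed model).
    return "excited" if sum(vector) / len(vector) > 0.6 else "calm"

def alter_video(video: UserVideo, state: str) -> dict:
    # Example alteration: speed up calm clips, keep excited clips at normal speed.
    return {"user": video.user, "playback_rate": 1.0 if state == "excited" else 1.5}

def compile_highlights(videos: list) -> list:
    # "Digital file" stand-in: an ordered list of altered clips.
    return [alter_video(v, cognitive_state(excitement_vector(v))) for v in videos]

if __name__ == "__main__":
    clips = [UserVideo("alice", {"motion": 0.9, "audio_energy": 0.8, "faces": 1.0}),
             UserVideo("bob", {"motion": 0.2, "audio_energy": 0.1})]
    print(compile_highlights(clips))
```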
  • Patent number: 10755749
    Abstract: Systems, devices, apparatuses, components, methods, and techniques for repetitive-motion activity enhancement based upon media content selection are provided. An example media-playback device for enhancement of a repetitive-motion activity includes a media-output device that plays media content items, a plurality of media content selection engines, and a repetitive-activity enhancement mode selection engine. The plurality of media content selection engines includes a cadence-based media content selection engine and an enhancement program engine. The cadence-based media content selection engine is configured to select media content items based on a cadence associated with the repetitive-motion activity. The enhancement program engine is configured to select media content items according to an enhancement program for the repetitive-motion activity.
    Type: Grant
    Filed: April 23, 2018
    Date of Patent: August 25, 2020
    Assignee: SPOTIFY AB
    Inventors: Owen Smith, Tristan Jehan, Sten Garmark, Rahul Sen
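As a rough illustration of cadence-based selection, the sketch below picks the library track whose tempo is closest to a detected cadence in steps per minute. The track fields and the nearest-tempo rule are assumptions for illustration, not Spotify's actual selection logic.

```python
# Hypothetical sketch of cadence-based selection: pick the track whose tempo
# (in BPM) is closest to the detected cadence (steps per minute).
def select_by_cadence(cadence_spm: float, tracks: list) -> dict:
    return min(tracks, key=lambda t: abs(t["tempo_bpm"] - cadence_spm))

if __name__ == "__main__":
    library = [{"title": "Track A", "tempo_bpm": 150},
               {"title": "Track B", "tempo_bpm": 168},
               {"title": "Track C", "tempo_bpm": 180}]
    print(select_by_cadence(172.0, library))  # -> Track B (168 BPM is nearest)
```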
  • Patent number: 10757468
    Abstract: An example method for performing playout of multiple media recordings includes receiving a plurality of media recordings, indexing the plurality of media recordings for storage into a database, dividing each of the plurality of media recordings into multiple segments, and for each segment of each media recording, (i) comparing the segment with the indexed plurality of media recordings stored in the database to determine one or more matches to the segment, and (ii) determining a relative time offset of the segment within each matched media recording. Following, the method includes performing playout of a representation of the plurality of media recordings based on the relative time offset of each matched segment.
    Type: Grant
    Filed: December 17, 2018
    Date of Patent: August 25, 2020
    Assignee: Apple Inc.
    Inventors: Avery Wang, Maxwell Leslie Szabo
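The matching step in the abstract (compare each segment with the indexed recordings, then determine its relative time offset) can be sketched roughly as below. Segment fingerprints are reduced to plain strings and the index is an in-memory dictionary; both are simplifying assumptions, not the patented indexing scheme.

```python
# Hypothetical sketch of the matching step: index reference recordings by
# per-second segment "fingerprints" (here just strings), then look up each
# segment of a query recording and report the relative time offset.
from collections import defaultdict

def build_index(recordings: dict) -> dict:
    index = defaultdict(list)
    for rec_id, segments in recordings.items():
        for t, fp in enumerate(segments):
            index[fp].append((rec_id, t))
    return index

def match_segment(index: dict, fp: str, query_time: int) -> list:
    # Relative offset = time of the matched segment in the reference
    # minus the segment's time in the query recording.
    return [(rec_id, ref_time - query_time) for rec_id, ref_time in index.get(fp, [])]

if __name__ == "__main__":
    index = build_index({"ref": ["a", "b", "c", "d", "e"]})
    # Query recording starts two seconds into the reference material.
    for qt, fp in enumerate(["c", "d", "e"]):
        print(fp, match_segment(index, fp, qt))  # consistent offset of +2
```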
  • Patent number: 10755747
    Abstract: Computer-implemented methods and systems for creating non-interactive, linear video from video segments in a video tree. Selectably presentable video segments are stored in a memory, with each segment representing a predefined portion of one or more paths in a traversable video tree. A linear, non-interactive video is automatically created from the selectably presentable video segments by traversing at least a portion of a first path in the video tree and, upon completion, is provided to a viewer for playback.
    Type: Grant
    Filed: April 7, 2017
    Date of Patent: August 25, 2020
    Assignee: JBF Interlude 2009 LTD
    Inventors: Jonathan Bloch, Barak Feldman, Tal Zubalsky, Yuval Hofshy
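A minimal sketch of turning a traversable video tree into a linear sequence follows; the Segment structure and the always-take-the-first-branch policy are illustrative assumptions, since the patent covers traversing any portion of a path.

```python
# Hypothetical sketch: walk one path through a tree of selectably presentable
# segments and emit the ordered segment list that a linear, non-interactive
# video would be stitched from.
from dataclasses import dataclass, field

@dataclass
class Segment:
    name: str
    children: list = field(default_factory=list)

def traverse_first_path(root: Segment) -> list:
    path, node = [], root
    while node is not None:
        path.append(node.name)
        node = node.children[0] if node.children else None  # always take the first branch
    return path

if __name__ == "__main__":
    tree = Segment("intro", [Segment("choice_a", [Segment("ending_1")]),
                             Segment("choice_b", [Segment("ending_2")])])
    print(traverse_first_path(tree))  # ['intro', 'choice_a', 'ending_1']
```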
  • Patent number: 10750129
    Abstract: The invention relates to a hospital video surveillance system comprising several cameras (2, 3, 4) for acquiring video data for surveilling several patient regions (5, 6, 7). The video data are transmitted from the cameras to a display device and are used to determine physiological properties of patients, wherein the physiological properties are vital signs. The bandwidths for the transmission of the video data are allocated depending on the determined physiological properties. Thus, the bandwidth allocation considers the physiological states of the patients, which can ensure that a sufficient bandwidth is provided where it is really required. This is especially useful if the overall bandwidth is limited. Moreover, since the video data are used for fulfilling several functions, i.e. surveilling the several patient regions and determining the physiological properties, the overall system can be very compact and fewer or no additional physiological sensors might be required.
    Type: Grant
    Filed: August 23, 2017
    Date of Patent: August 18, 2020
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Harald Greiner, Ihor Olehovych Kirenko
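One way to picture the bandwidth allocation the abstract describes is a weighted split of a fixed total across patient regions, with weights derived from vital signs. The acuity scoring below is a toy assumption, not the patented allocation rule.

```python
# Hypothetical sketch: split a fixed total bandwidth across patient-region
# cameras in proportion to a per-patient acuity weight derived from vital signs.
def acuity_weight(heart_rate: float, resp_rate: float) -> float:
    # Toy scoring: deviation from nominal vitals raises the weight (assumption).
    return 1.0 + abs(heart_rate - 70) / 70 + abs(resp_rate - 15) / 15

def allocate_bandwidth(total_mbps: float, acuity: dict) -> dict:
    weight_sum = sum(acuity.values())
    return {region: total_mbps * w / weight_sum for region, w in acuity.items()}

if __name__ == "__main__":
    weights = {"bed_1": acuity_weight(72, 14), "bed_2": acuity_weight(115, 28)}
    print(allocate_bandwidth(30.0, weights))  # the deteriorating patient gets more
```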
  • Patent number: 10735707
    Abstract: A method includes capturing, by a camera, two-dimensional imagery. The method also includes determining, by a positioning sensor, a viewing perspective of a viewer of the two-dimensional imagery. The method also includes generating at least a first imagery based at least on the captured two-dimensional imagery and the determined viewing perspective. The method also includes displaying a three-dimensional representation of the two-dimensional imagery to the viewer. The displaying the three-dimensional representation includes displaying dual imagery, and the dual imagery includes the first imagery.
    Type: Grant
    Filed: August 15, 2017
    Date of Patent: August 4, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Adam Bishop, Glenn P. Crawford, Christopher R. Florence, Rocky D. McMahan
  • Patent number: 10714144
    Abstract: Systems and methods for tagging video content are disclosed. A method includes: receiving a video stream from a user computer device, the video stream including audio data and video data; determining a candidate audio tag based on analyzing the audio data; establishing an audio confidence score of the candidate audio tag based on the analyzing of the audio data; determining a candidate video tag based on analyzing the video data; establishing a video confidence score of the candidate video tag based on the analyzing of the video data; determining a correlation factor of the candidate audio tag relative to the candidate video tag; and assigning a tag to a portion in the video stream based on the correlation factor exceeding a correlation threshold value and at least one of the audio confidence score exceeding an audio threshold value and the video confidence score exceeding a video threshold value.
    Type: Grant
    Filed: November 6, 2017
    Date of Patent: July 14, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Mark P. Delaney, Robert H. Grant, Trudy L. Hewitt, Martin A. Oberhofer
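The claimed tag-assignment condition (correlation above a threshold plus at least one modality confidence above its own threshold) can be expressed compactly; in the sketch below the threshold values and the rule for choosing which tag to keep are illustrative assumptions.

```python
# Hypothetical sketch of the tag-assignment rule: accept a tag for a video
# portion when the audio/video tags correlate strongly enough and at least one
# modality's confidence clears its own threshold.
def assign_tag(audio_tag: str, audio_conf: float,
               video_tag: str, video_conf: float,
               correlation: float,
               corr_thresh: float = 0.7,
               audio_thresh: float = 0.8,
               video_thresh: float = 0.8):
    if correlation > corr_thresh and (audio_conf > audio_thresh or video_conf > video_thresh):
        # Prefer the more confident modality (illustrative choice, not from the patent).
        return audio_tag if audio_conf >= video_conf else video_tag
    return None

if __name__ == "__main__":
    print(assign_tag("dog barking", 0.9, "dog", 0.6, correlation=0.85))  # 'dog barking'
    print(assign_tag("applause", 0.5, "crowd", 0.55, correlation=0.9))   # None
```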
  • Patent number: 10708521
    Abstract: A multimedia file and methods of generating, distributing and using the multimedia file are described. Multimedia files in accordance with embodiments of the present invention can contain multiple video tracks, multiple audio tracks, multiple subtitle tracks, data that can be used to generate a menu interface to access the contents of the file and ‘meta data’ concerning the contents of the file. Multimedia files in accordance with several embodiments of the present invention also include references to video tracks, audio tracks, subtitle tracks and ‘meta data’ external to the file. One embodiment of a multimedia file in accordance with the present invention includes a series of encoded video frames and encoded menu information.
    Type: Grant
    Filed: April 2, 2019
    Date of Patent: July 7, 2020
    Assignee: DIVX, LLC
    Inventors: Jason Braness, Jerome Rota, Eric William Grab, Jerald Donaldson, Heather Hitchcock, Damien Chavarria, Michael John Floyd, Brian T. Fudge, Adam H. Li
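A rough sketch of the kind of container structure described, with internal tracks, external track references, menu data, and metadata held in one object, follows. The field names are assumptions for illustration and do not reflect the actual DivX file format.

```python
# Hypothetical sketch of a container holding multiple video/audio/subtitle
# tracks plus menu data and metadata, with optional external track references.
from dataclasses import dataclass, field

@dataclass
class Track:
    kind: str          # "video", "audio", or "subtitle"
    language: str
    location: str      # "internal" or a URL/path for an external reference

@dataclass
class MultimediaFile:
    tracks: list = field(default_factory=list)
    menu: dict = field(default_factory=dict)      # data used to render a menu interface
    metadata: dict = field(default_factory=dict)  # title, chapters, etc.

if __name__ == "__main__":
    movie = MultimediaFile(
        tracks=[Track("video", "und", "internal"),
                Track("audio", "en", "internal"),
                Track("subtitle", "fr", "https://example.com/subs_fr.srt")],
        menu={"root": ["Play", "Audio", "Subtitles"]},
        metadata={"title": "Example Feature"})
    print([t.kind for t in movie.tracks], movie.menu["root"])
```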
  • Patent number: 10698308
    Abstract: Described are a ranging method, and an automatic focusing method and device. The ranging method comprises: acquiring a coefficient of relationship between the number of pixels and the object distance within a range of a distance between a camera and a projection lens on the basis of a preset calibrated object distance; and calculating an actual object distance according to the acquired coefficient of relationship.
    Type: Grant
    Filed: July 21, 2015
    Date of Patent: June 30, 2020
    Assignee: ZTE Corporation
    Inventor: Feng Jiang
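A minimal sketch of the ranging idea, assuming the apparent pattern size in pixels scales inversely with object distance so that a single coefficient calibrated at a known distance suffices; the real relationship used in the patent may differ.

```python
# Hypothetical sketch: calibrate a coefficient relating pixel count and object
# distance at a known distance, then estimate the actual object distance from it.
def calibrate(calib_distance_m: float, calib_pixels: float) -> float:
    # Coefficient acquired at the preset calibrated object distance.
    return calib_distance_m * calib_pixels

def estimate_distance(coefficient: float, measured_pixels: float) -> float:
    return coefficient / measured_pixels

if __name__ == "__main__":
    k = calibrate(calib_distance_m=2.0, calib_pixels=300)  # pattern spans 300 px at 2 m
    print(estimate_distance(k, measured_pixels=150))       # ~4.0 m: half the pixels, twice the distance
```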
  • Patent number: 10694105
    Abstract: The present disclosure relates generally to the field of machine learning and image processing, and to a method and system for handling occluded regions in an image frame to generate a surround view. A surround view generating device detects the presence or absence of occluded blocks in each image frame received from image capturing devices associated with the vehicle, based on the speed and direction of the vehicle. Further, occluded blocks are predicted by mapping the image frames and sensor data of the vehicle with pre-stored image data using a machine learning model. Finally, corrected image frames are generated by stitching the predicted occluded blocks to the image frames, and then a surround view of the vehicle is generated using the corrected image frames and the plurality of image frames. The present disclosure enables prediction of occlusions caused by dust or water deposits on image capturing devices with non-overlapping fields of view.
    Type: Grant
    Filed: February 14, 2019
    Date of Patent: June 23, 2020
    Assignee: Wipro Limited
    Inventors: Tarun Yadav, Vinod Pathangay, Gyanesh Dwivedi
  • Patent number: 10679669
    Abstract: Automatic generation of a narration of what is happening in a signal segment (live or recorded). The signal segment that is to be narrated is accessed from a physical graph. In the physical graph, the signal segment evidences state of physical entities, and thus has a semantic understanding of what is depicted in the signal segment. The system then automatically determines how the physical entities are acting within the signal segment based on that semantic understanding, and builds a narration of the activities based on the determined actions. The system may determine what is interesting for narration based on a wide variety of criteria. The system could use machine learning to determine what will be interesting to narrate.
    Type: Grant
    Filed: February 17, 2017
    Date of Patent: June 9, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventor: Vijay Mital
  • Patent number: 10674077
    Abstract: A mixed reality content providing apparatus is disclosed. The mixed reality content providing apparatus may recognize an object of interest (OOI) included in a 360-degree VR image to generate metadata of the OOI and may provide a user with mixed reality content where the metadata is overlaid on the 360-degree VR image.
    Type: Grant
    Filed: October 26, 2017
    Date of Patent: June 2, 2020
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Nack Woo Kim, Byung Tak Lee, Sei Hyoung Lee, Hyun Yong Lee, Hyung Ok Lee, Young Sun Kim
  • Patent number: 10636449
    Abstract: Metadata about a video is retrieved. The metadata includes a plurality of associated viewer responses from at least one previous audience viewing of the video. The plurality of associated viewer responses from the at least one previous audience viewing are associated with one or more segments of the video. A segment of the video associated with a type of viewer reaction based on emotion and sentiment recognition is identified. Additional media content based on the identified video segment is retrieved. A segment of the additional media content that exceeds a threshold of similarity with the segment of the video is determined. A video clip that includes the segment of the additional media content is created.
    Type: Grant
    Filed: November 6, 2017
    Date of Patent: April 28, 2020
    Assignee: International Business Machines Corporation
    Inventors: Enara C. Vijil, Seema Nagar, Srikanth G. Tamilselvam, Kuntal Dey
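A small sketch of the selection logic follows: pick the most-reacted segment, then keep the additional-content segment only if its similarity exceeds a threshold. The tag-overlap similarity measure and the threshold value are assumptions, not the patented method.

```python
# Hypothetical sketch: find the video segment with the strongest reaction of a
# given type, then pick the additional-content segment most similar to it,
# keeping it only if the similarity clears a threshold.
def most_reacted_segment(segments: list, reaction: str) -> dict:
    return max(segments, key=lambda s: s["reactions"].get(reaction, 0))

def best_matching_clip(target: dict, extra_segments: list, threshold: float = 0.75):
    def similarity(a: dict, b: dict) -> float:
        # Toy similarity over shared descriptive tags (assumption, not the patent's measure).
        ta, tb = set(a["tags"]), set(b["tags"])
        return len(ta & tb) / len(ta | tb) if ta | tb else 0.0
    best = max(extra_segments, key=lambda s: similarity(target, s))
    return best if similarity(target, best) > threshold else None

if __name__ == "__main__":
    segments = [{"id": "s1", "reactions": {"laugh": 3}, "tags": ["car", "chase"]},
                {"id": "s2", "reactions": {"laugh": 12}, "tags": ["dog", "park"]}]
    extras = [{"id": "e1", "tags": ["dog", "park"]}, {"id": "e2", "tags": ["office"]}]
    target = most_reacted_segment(segments, "laugh")
    print(best_matching_clip(target, extras))  # e1 matches the dog-park segment
```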
  • Patent number: 10629240
    Abstract: A recorded data processing method includes: for each of N recorded data pairs each formed by two pieces of recorded data X that are adjacent to each other when N pieces of recorded data X each representing a recording target including at least one of audio and video are arranged cyclically, calculating a plurality of candidate values for a time difference between time signals representing temporal changes of the recording target in the two respective pieces of recorded data X of the recorded data pair; and identifying one of the plurality of candidate values in each of the N recorded data pairs as the time difference between the two pieces of recorded data in the recorded data pair such that a numerical value obtained by summing, over the N recorded data pairs, one of the plurality of candidate values calculated for each of the N recorded data pairs approaches zero.
    Type: Grant
    Filed: September 10, 2018
    Date of Patent: April 21, 2020
    Assignee: YAMAHA CORPORATION
    Inventor: Yu Takahashi
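The candidate-selection constraint in the abstract (the chosen offsets around the cycle of N recordings should sum to roughly zero) can be sketched as a small search; the brute-force enumeration below is an illustrative simplification, workable only for small N.

```python
# Hypothetical sketch of the candidate-selection step: for each adjacent pair of
# the N cyclically arranged recordings there are several candidate time
# differences; choose one per pair so the sum around the cycle is as close to
# zero as possible (offsets must cancel around a closed loop).
from itertools import product

def choose_offsets(candidates_per_pair: list) -> list:
    best, best_err = None, float("inf")
    for combo in product(*candidates_per_pair):  # brute force; fine for small N
        err = abs(sum(combo))
        if err < best_err:
            best, best_err = combo, err
    return list(best)

if __name__ == "__main__":
    # Three recordings A, B, C; candidate offsets for A->B, B->C, C->A.
    candidates = [[1.2, 4.7], [-0.4, 2.1], [-0.8, 3.0]]
    print(choose_offsets(candidates))  # [1.2, -0.4, -0.8] sums to 0.0
```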
  • Patent number: 10627494
    Abstract: Aspects of the embodiments are directed to methods and imaging systems. The imaging systems can be configured to sense, by a light sensor of the imaging system, light received during a time period, process the light received by the light sensor, identify an available measurement period for the imaging system within the time period based on the processed light, and transmit and receive light during a corresponding measurement period in one or more subsequent time periods.
    Type: Grant
    Filed: September 14, 2017
    Date of Patent: April 21, 2020
    Assignee: Analog Devices, Inc.
    Inventors: Sefa Demirtas, Tao Yu, Atulya Yellepeddi, Nicolas Le Dortz
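Read as an interference-avoidance scheme, the method can be sketched by splitting the sensing period into slots and choosing the quietest one for subsequent measurements. The slot model and threshold below are assumptions, not Analog Devices' implementation.

```python
# Hypothetical sketch: split the observation period into slots, measure the
# interfering light per slot, and pick the quietest slot as the measurement
# window to reuse in subsequent periods.
def pick_measurement_slot(light_per_slot: list, threshold: float = 0.1):
    quietest = min(range(len(light_per_slot)), key=lambda i: light_per_slot[i])
    return quietest if light_per_slot[quietest] < threshold else None

if __name__ == "__main__":
    # Interference observed in slots 0-2; slot 3 is free for our own emission.
    print(pick_measurement_slot([0.8, 0.6, 0.9, 0.02]))  # -> 3
```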
  • Patent number: 10622021
    Abstract: Disclosed is a method for video editing. The method comprises selecting at least one video, using a user interface, displaying one of the selected at least one video, on a video preview area on the user interface, providing at least one effect button on the user interface, to be activated by applying a pointing device at the at least one effect button, wherein each of the at least one effect button is associated with one video editing effect, selecting a time point in a timeline of the displayed one video, activating an effect button selected from the at least one effect button provided, and applying a video editing effect corresponding to the activated effect button from the selected time point forward until detecting de-activation of the activated effect button.
    Type: Grant
    Filed: February 17, 2017
    Date of Patent: April 14, 2020
    Assignee: AVCR BILGI TEKNOLOJILERI A.S
    Inventors: Ugur Buyuklu, Kemal Ugur, Oguz Bici
  • Patent number: 10623198
    Abstract: A smart electronic device is provided for a multi-user environment to meet more convenient life requirements of a plurality of users. The smart electronic device has a camera, a microphone, a processing circuit, a network interface and a projector. The processing circuit loads a user's image and determines an identity of the user appearing before the camera based on the user's image. The processing circuit also analyzes a gesture of the user fetched by the camera or a voice message fetched by the microphone for transforming into a corresponding operating command. The projector displays a frame generated in correspondence to the operating command.
    Type: Grant
    Filed: February 16, 2017
    Date of Patent: April 14, 2020
    Assignee: XIAMEN ECO LIGHTING CO., LTD.
    Inventors: Wuwei Lin, Yuchun Ding, Danqing Liu
  • Patent number: 10622018
    Abstract: In one aspect, an example method includes (i) receiving, by a first computing system, content captured by a second computing system via a camera of the second computing system; (ii) receiving, by the first computing system, metadata of the received content, wherein the metadata was generated by the second computing system proximate a time when the second computing system captured the content; (iii) executing, by the first computing system, a digital-video effect (DVE), wherein executing the DVE causes the first computing system to generate video content that includes the received content and content derived from the received metadata; and (iv) transmitting, by the first computing system, to a third computing system, the generated video content for presentation of the generated video content on the third computing system.
    Type: Grant
    Filed: October 17, 2016
    Date of Patent: April 14, 2020
    Assignee: TRIBUNE BROADCASTING COMPANY, LLC
    Inventor: Hank J. Hundemer
  • Patent number: 10616665
    Abstract: Exemplary embodiments of systems and methods are provided for automatically creating time-based video metadata for a video source and a video playback mechanism. An automated logging process can be provided for receiving a digital video stream, analyzing one or more frames of the digital video stream, extracting a time from each of the one or more frames analyzed, and creating a clock index file associating a time with each of the one or more analyzed frames. The process can further provide for parsing one or more received data files, extracting time-based metadata from the one or more parsed data files, and determining a frame of the digital video stream that correlates to the extracted time-based metadata.
    Type: Grant
    Filed: August 31, 2018
    Date of Patent: April 7, 2020
    Inventor: Daniel Stieglitz
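A minimal sketch of the logging flow follows: read a clock value from each analyzed frame (stubbed here as given data), build a clock index, and map a time-based metadata event onto the nearest indexed frame. The time format and the nearest-frame lookup rule are illustrative assumptions.

```python
# Hypothetical sketch: build a clock index from per-frame clock readings and
# correlate a time-based metadata event with the closest indexed frame.
def build_clock_index(frame_times: dict) -> dict:
    # frame number -> "HH:MM:SS" as a system might read it from an on-screen
    # clock; index it the other way around for lookups.
    return {t: frame for frame, t in frame_times.items()}

def to_seconds(hms: str) -> int:
    h, m, s = (int(x) for x in hms.split(":"))
    return h * 3600 + m * 60 + s

def frame_for_event(clock_index: dict, event_time: str) -> int:
    # Pick the indexed frame whose clock reading is nearest the event's time.
    return min(clock_index.items(),
               key=lambda kv: abs(to_seconds(kv[0]) - to_seconds(event_time)))[1]

if __name__ == "__main__":
    index = build_clock_index({0: "19:00:00", 300: "19:00:10", 600: "19:00:20"})
    print(frame_for_event(index, "19:00:12"))  # -> frame 300
```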
  • Patent number: 10609355
    Abstract: A method, a system, and a computer program product for dynamically adjusting sampling of a depth map based on detected motion in a current scene. The method includes capturing real-time scan data at a first resolution by a first camera and a second camera of an image capturing device. The method further includes synchronizing a first plurality of frames of the real-time scan data to create a plurality of synchronized frames at a first frame rate. The method further includes analyzing the synchronized frames to determine whether motion exists. The method further includes, in response to determining motion exists: determining, based on the plurality of synchronized frames, a rate of motion within the current scene; and dynamically calculating a target resolution and a target frame rate for a real-time depth map. The method further includes generating a real-time depth map at the target resolution and target frame rate.
    Type: Grant
    Filed: October 27, 2017
    Date of Patent: March 31, 2020
    Assignee: Motorola Mobility LLC
    Inventors: Yin Hu Chen, Valeriy Marchevsky, Susan Yanqing Xu
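A rough sketch of the adaptive step follows: estimate a motion rate from synchronized frames and map it to a target depth-map resolution and frame rate. The mean-absolute-difference motion measure and the resolution/frame-rate tiers are assumptions, not Motorola's actual thresholds.

```python
# Hypothetical sketch: estimate a motion rate from consecutive synchronized
# frames (stubbed as mean absolute pixel difference) and map it onto a target
# depth-map resolution and frame rate.
def motion_rate(prev_frame: list, curr_frame: list) -> float:
    return sum(abs(a - b) for a, b in zip(prev_frame, curr_frame)) / len(curr_frame)

def depth_map_settings(rate: float) -> tuple:
    if rate < 1.0:
        return ("1080p", 5)    # static scene: high resolution, low frame rate
    if rate < 10.0:
        return ("720p", 15)
    return ("480p", 30)        # fast motion: coarse depth map, high frame rate

if __name__ == "__main__":
    prev, curr = [10, 10, 10, 10], [12, 60, 11, 9]
    print(depth_map_settings(motion_rate(prev, curr)))  # -> ('480p', 30)
```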