The Images Being Video Sequences (epo) Patents (Class 707/E17.028)
  • Patent number: 11894915
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to credit media based on the presentation rate. An example apparatus disclosed herein includes memory and instructions. The example apparatus disclosed herein further includes at least one processor to execute the instructions to cause the at least one processor to at least detect a first peak in a first frequency range of audio data associated with monitored media, detect a second peak in a second frequency range of the audio data, the second frequency range different from the first frequency range, detect that a presentation rate of the monitored media is increased relative to a reference version of the media in response to the second peak being greater than the first peak, generate a signature corresponding to the monitored media, and transmit the signature to a media monitoring entity to credit the monitored media based on the presentation rate.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: February 6, 2024
    Assignee: The Nielsen Company (US), LLC
    Inventor: Morris Lee
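The rate-detection rule in the entry above (US 11894915) compares spectral peaks in two frequency ranges. A minimal sketch of that comparison follows; the band edges, window function, and FFT framing are illustrative assumptions, not the claimed implementation.

```python
# Hypothetical sketch: when audio plays faster than its reference version, spectral
# energy shifts upward, so a peak in a higher band exceeding the peak in a lower
# band hints at an increased presentation rate. Band edges are assumptions.
import numpy as np

def presentation_rate_increased(samples: np.ndarray, sample_rate: int,
                                low_band=(200.0, 2000.0),
                                high_band=(2000.0, 8000.0)) -> bool:
    windowed = samples * np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)

    def band_peak(lo, hi):
        mask = (freqs >= lo) & (freqs < hi)
        return spectrum[mask].max() if mask.any() else 0.0

    first_peak = band_peak(*low_band)    # peak in the first (lower) frequency range
    second_peak = band_peak(*high_band)  # peak in the second (higher) frequency range
    return second_peak > first_peak      # the abstract's condition for a sped-up rate
```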
  • Patent number: 11887613
    Abstract: A computer extracts a vocal portion from a first audio content item and determines a first representative vector that corresponds to a vocal style of the first audio content item by applying a variational autoencoder (VAE) to the extracted vocal portion of the representation of the audio content item. The computer streams, to an electronic device, a second audio content item, selected from a plurality of audio content items, that has a second representative vector that corresponds to a vocal style of the second audio content item, wherein the second representative vector that corresponds to the vocal style of the second audio content item meets similarity criteria with respect to the first representative vector that corresponds to the vocal style of the first audio content item.
    Type: Grant
    Filed: April 12, 2022
    Date of Patent: January 30, 2024
    Assignee: Spotify AB
    Inventor: Aparna Kumar
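The selection step in the Spotify entry above (US 11887613) reduces to a nearest-vector lookup once each track's vocal style has been encoded. The sketch below assumes the representative vectors are already computed by the encoder and uses cosine similarity as the (unspecified) similarity criterion.

```python
# Minimal sketch: pick, from a catalog of vocal-style vectors, a track whose vector
# meets a similarity criterion with respect to the seed track's vector.
# The VAE encoder itself is out of scope; vectors are assumed precomputed.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def pick_similar_track(seed_vector: np.ndarray,
                       catalog: dict[str, np.ndarray],
                       threshold: float = 0.8) -> str | None:
    scored = [(cosine_similarity(seed_vector, vec), track_id)
              for track_id, vec in catalog.items()]
    eligible = [pair for pair in scored if pair[0] >= threshold]   # similarity criterion
    return max(eligible)[1] if eligible else None                  # most similar eligible track
```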
  • Patent number: 11803749
    Abstract: A method for recognizing a key time point in a video includes: obtaining at least one video segment by processing each image frame in the video by an image classification model; determining a target video segment in the at least one video segment based on a shot type; obtaining respective locations of a first object and a second object in an image frame of the target video segment by an image detection model; and based on a distance between the location of the first object and the location of the second object in the image frame satisfying a preset condition, determining a time point of the image frame as the key time point of the video.
    Type: Grant
    Filed: September 8, 2020
    Date of Patent: October 31, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LTD
    Inventors: Tao Wu, Xu Yuan Xu, Guo Ping Gong
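The last step of the Tencent entry above (US 11803749) is a distance test between two detected objects. A toy version, assuming the detector already yields per-frame object locations and that the preset condition is a Euclidean-distance threshold:

```python
# Report the timestamp of the first frame in the target segment where the two
# detected objects come within a preset distance of each other.
import math

def find_key_time_point(frames, distance_threshold: float):
    """frames: iterable of (timestamp_s, (x1, y1), (x2, y2)) detections."""
    for timestamp, loc_a, loc_b in frames:
        if math.dist(loc_a, loc_b) <= distance_threshold:
            return timestamp   # key time point of the video
    return None                # no frame satisfied the preset condition
```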
  • Patent number: 11715488
    Abstract: Methods and apparatus to dynamically generate audio signatures adaptive to circumstances associated with media being monitored are disclosed. An example apparatus includes a media content analyzer to detect a watermark encoded in media monitored by a meter. The apparatus includes a media environment analyzer to estimate an amount of background noise in an environment in which the media is monitored by the meter. The apparatus further includes a signature scheme selector to select a first signature scheme from among a plurality of signature schemes to generate monitored signatures of the media. The first signature scheme is selected based on the amount of background noise. The apparatus also includes a signature generator to generate a first monitored signature of the media based on the first signature scheme.
    Type: Grant
    Filed: January 11, 2021
    Date of Patent: August 1, 2023
    Assignee: THE NIELSEN COMPANY (US), LLC
    Inventors: Arun Ramaswamy, Anand Jain, John Stavropoulos
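The scheme selector in the Nielsen entry above (US 11715488) keys off the estimated background noise. The mapping below is purely illustrative; the scheme names and thresholds are invented for the sketch.

```python
# Pick a signature scheme based on how noisy the monitored environment is.
def select_signature_scheme(noise_level_db: float) -> str:
    if noise_level_db < 30:
        return "fine_grained_spectral"   # quiet room: a detail-rich scheme is viable
    if noise_level_db < 60:
        return "robust_band_energy"      # moderate noise: trade detail for robustness
    return "coarse_peak_hash"            # very noisy: fall back to the coarsest scheme
```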
  • Patent number: 11687589
    Abstract: Methods and systems for auto-populating image metadata are described herein. The system receives or accesses an image. The system then generates a link to a video having a frame that corresponds to the image. To generate the link, the system searches for a video having a frame comprising a portion of the image and generates the link such that the link comprises a timestamp of the frame. The system then modifies the metadata of the image to include the link. Once a user interaction with the image is detected, the system may follow the link to generate for display the video beginning at the timestamp.
    Type: Grant
    Filed: December 20, 2021
    Date of Patent: June 27, 2023
    Assignee: Rovi Guides, Inc.
    Inventors: Durga Prasad Pulicharla, Madhusudhan Srinivasan
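The metadata step in the Rovi entry above (US 11687589) amounts to writing a timestamped deep link into the image's metadata once a matching video frame has been found. A sketch, treating the metadata as a plain dictionary and the URL format as an assumption:

```python
# Attach a link to the source video, including the matched frame's timestamp, so a
# later interaction with the image can open the video at that moment.
def link_image_to_video(image_metadata: dict, video_url: str, timestamp_s: float) -> dict:
    image_metadata["source_video"] = f"{video_url}?t={int(timestamp_s)}"  # e.g. ...?t=95
    return image_metadata
```

For example, `link_image_to_video({}, "https://example.com/watch/abc123", 95.0)` yields a metadata entry pointing at second 95 of the hypothetical source video.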
  • Patent number: 11653069
    Abstract: Systems and methods are provided for presenting subtitles in association with a composite video. The systems and methods include a facility for uploading a subtitle file having the full subtitles information for the entire composite video. The uploaded subtitle file is then split to generate video content item subtitles files that correspond to video content items in the composite video.
    Type: Grant
    Filed: August 23, 2022
    Date of Patent: May 16, 2023
    Assignee: Snap Inc.
    Inventors: David Michael Hornsby, David Paliwoda, Georgiy Kassabli, Kevin Joseph Thornberry
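The Snap entry above (US 11653069) splits one subtitle file into per-item files. Assuming the composite subtitle file has already been parsed into cues and that each content item's time range within the composite video is known, the split is a rebasing pass:

```python
# Split composite-video subtitle cues into per-item cue lists, rebasing each cue
# onto its item's own timeline. Cue and range formats are assumptions of the sketch.
def split_subtitles(cues, item_ranges):
    """cues: [(start_s, end_s, text)]; item_ranges: [(item_id, item_start_s, item_end_s)]."""
    per_item = {item_id: [] for item_id, _, _ in item_ranges}
    for start, end, text in cues:
        for item_id, item_start, item_end in item_ranges:
            if start >= item_start and end <= item_end:
                per_item[item_id].append((start - item_start, end - item_start, text))
                break
    return per_item
```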
  • Patent number: 11509539
    Abstract: A traffic analysis apparatus includes: a first means that estimates a state sequence from time-series data of communication traffic based on a hidden Markov model, and groups, into one group, a plurality of patterns with resembling state transitions in the state sequence to perform extraction of a state sequence, taking the plurality of patterns grouped into one group as one state; and a second means that determines an application state corresponding to the time-series data based on the state sequence extracted by the first means and predetermined application characteristics.
    Type: Grant
    Filed: October 25, 2018
    Date of Patent: November 22, 2022
    Assignee: NEC CORPORATION
    Inventors: Takanori Iwai, Anan Sawabe
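The first means of the NEC entry above (US 11509539) can be approximated with an off-the-shelf hidden Markov model. The sketch below uses hmmlearn, a fixed window length, and exact-match grouping of state-transition patterns, all of which are simplifying assumptions.

```python
# Fit an HMM to a traffic time series, decode its state sequence, and group windows
# with identical state-transition patterns (a stand-in for "resembling" patterns).
from collections import defaultdict

import numpy as np
from hmmlearn.hmm import GaussianHMM

def grouped_state_patterns(traffic: np.ndarray, n_states: int = 4, window: int = 10):
    observations = traffic.reshape(-1, 1)             # e.g. bytes per sampling interval
    model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    model.fit(observations)
    states = model.predict(observations)              # estimated state sequence

    groups = defaultdict(list)                        # pattern -> positions where it occurs
    for i in range(len(states) - window + 1):
        groups[tuple(states[i:i + window])].append(i)
    return states, groups
```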
  • Patent number: 11490134
    Abstract: A method and system for codec of visual feature data is provided. First protocol format visual feature data generated by an intelligent front end and a certificate identity used to uniquely identify a corresponding first protocol format are received. The first protocol format visual feature data is converted to a same type of second protocol format visual feature data according to the certificate identity. The second protocol format visual feature data is received and parsed according to the second protocol format to obtain original visual feature data that are generated by the intelligent front end and contained in the first protocol format visual feature data.
    Type: Grant
    Filed: December 8, 2017
    Date of Patent: November 1, 2022
    Assignee: Peking University
    Inventors: Lingyu Duan, Yihang Lou, Ziqian Chen, Yan Bai, Yicheng Huang, Tiejun Huang, Wen Gao
  • Patent number: 10990620
    Abstract: In one embodiment, a theme may be obtained. A search query may be executed to identify a plurality of search results pertaining to the theme. A plurality of topics pertaining to the theme may be identified from the search results. Search log data pertaining to the plurality of topics may be ascertained from a search log. The plurality of topics may be ranked based, at least in part, upon the search log data. At least a portion of the plurality of topics may be provided according to the ranking.
    Type: Grant
    Filed: July 14, 2014
    Date of Patent: April 27, 2021
    Assignee: Verizon Media Inc.
    Inventors: John Peng, Arun Autuchirayll, Eric Bax
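The ranking step of the entry above (US 10990620) is only said to be based at least in part on search log data; the sketch below assumes the simplest such signal, a count of logged queries that mention each topic.

```python
# Rank theme-related topics by how often they appear in search-log queries.
from collections import Counter

def rank_topics(topics: list[str], search_log_queries: list[str]) -> list[str]:
    counts = Counter()
    for query in search_log_queries:
        lowered = query.lower()
        for topic in topics:
            if topic.lower() in lowered:
                counts[topic] += 1              # credit the topic for this query
    return sorted(topics, key=lambda t: counts[t], reverse=True)
```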
  • Patent number: 10986379
    Abstract: Systems and methods for content and program type detection, including identification of true boundaries between content segments. A broadcast provider sends a broadcast as an encoded stream. During a switch between content types, an automation system sends identifying metadata indicative of an approximate boundary between content types. A mediacast generation system receives the encoded stream of content and metadata, processes the metadata, time corrects the metadata, and slices the content on the exact boundary where the content change occurs. The mediacast generation system decodes an audio stream directly into Waveform Audio File Format (WAVE) while using an envelope follower to measure amplitude. When the system detects a metadata marker, an analyzer may look inside a buffered time window. The WAVE data may be analyzed to look for a period most likely to be the true boundary or split point between content segments. The content may then be split up on the new true boundary.
    Type: Grant
    Filed: June 7, 2016
    Date of Patent: April 20, 2021
    Assignee: WIDEORBIT LLC
    Inventors: Robert D. Green, John W. Morris, James M. Kott, Brian S. Bosworth
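The boundary refinement in the WideOrbit entry above (US 10986379) looks inside a buffered window around the approximate metadata marker for the most likely split point. The sketch below assumes mono PCM decoded from the WAVE stream and treats the quietest moment of a smoothed amplitude envelope as that split point; the window size and smoothing length are illustrative.

```python
# Find a refined split point near an approximate boundary marker by following the
# amplitude envelope and choosing the quietest sample inside the buffered window.
import numpy as np

def true_boundary(samples: np.ndarray, sample_rate: int, marker_s: float,
                  window_s: float = 4.0, smooth_ms: float = 50.0) -> float:
    center = int(marker_s * sample_rate)
    half = int(window_s * sample_rate / 2)
    start = max(0, center - half)
    segment = np.abs(samples[start:center + half])

    kernel = np.ones(max(1, int(smooth_ms * sample_rate / 1000.0)))
    envelope = np.convolve(segment, kernel / len(kernel), mode="same")  # smoothed amplitude

    return (start + int(np.argmin(envelope))) / sample_rate             # seconds into the stream
```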
  • Patent number: 10805111
    Abstract: An image stream of static graphic images and a corresponding audio stream (e.g., a comic book image stream and an audio narration stream) are simultaneously rendered. One or more images from the image stream, which are each associated with time information relative to a timeline of the audio stream, are downloaded to the client device. A page is assembled from the images and is assigned time information relative to the timeline of the audio stream on the basis of the time information for the images. A portion of the audio stream including a time offset corresponding to a position on the page is downloaded to the client device. The page and the portion of the audio stream are simultaneously rendered on the client device by using the time information for the images or for the page, the portion of the audio stream being rendered in dependence upon the time offset.
    Type: Grant
    Filed: February 5, 2019
    Date of Patent: October 13, 2020
    Assignee: Audio Pod Inc.
    Inventors: John McCue, Robert McCue, Gregory Shostakovsky, Glenn McCue
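Rendering the right page for a given narration offset, as in the Audio Pod entry above (US 10805111), reduces to a lookup over the pages' assigned start times. A sketch, assuming those start times are already known and sorted:

```python
# Given sorted page start times on the audio timeline, return the index of the page
# that should be displayed at a particular audio offset.
import bisect

def page_for_offset(page_start_times: list[float], audio_offset_s: float) -> int:
    return max(0, bisect.bisect_right(page_start_times, audio_offset_s) - 1)
```

For example, with start times [0.0, 42.5, 90.0], an offset of 60.0 seconds maps to page index 1.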
  • Patent number: 10796089
    Abstract: Timed text that is provided in a television broadcast or media stream can be enhanced to provide an improved user experience. A scrollable text window can be provided in a media player application, for example, that can allow the user to quickly “catchup” from a missed moment. The timed text may be enhanced to allow links to dictionaries, encyclopedias, online sources, thesauruses, translating services, and/or the like. Further implementations could use automated tools to automatically generate program summaries for watched or unwatched content.
    Type: Grant
    Filed: December 31, 2015
    Date of Patent: October 6, 2020
    Assignee: SLING MEDIA PVT. LTD
    Inventors: Kiran Chittella, Yatish J. Naik Raikar
  • Patent number: 10462519
    Abstract: Systems and methods for dynamically and automatically generating short-form versions of long-form media content are provided. The long-form media content may be tagged with metadata indicating objects or actions in frames, scenes, portions of scenes, or other units of the media content. The systems and methods may receive a user-specified time limit and use the metadata to create a short-form version of the long-form media content (using one or more of the scenes, portions of scenes, or other units of the media content), which preserves one or more story arcs within the media content.
    Type: Grant
    Filed: September 8, 2015
    Date of Patent: October 29, 2019
    Assignee: Disney Enterprises, Inc.
    Inventors: Edward Drake, Andrew J. Wright, Letisha Shaw, Alexander C. Chen
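The Disney entry above (US 10462519) assembles a short-form cut that fits a user-specified time limit while preserving a story arc. A greedy, order-preserving sketch under an assumed scene/tag schema:

```python
# Keep, in order, the scenes tagged with the chosen story arc until adding another
# scene would exceed the user-specified time limit.
def build_short_form(scenes, arc_tag: str, time_limit_s: float):
    """scenes: ordered [(scene_id, duration_s, set_of_tags)]."""
    cut, total = [], 0.0
    for scene_id, duration, tags in scenes:
        if arc_tag in tags and total + duration <= time_limit_s:
            cut.append(scene_id)
            total += duration
    return cut, total               # selected scene ids and the resulting runtime
```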
  • Patent number: 10366169
    Abstract: Systems and methods for identifying and locating related content using natural language processing are generally disclosed herein. One embodiment includes an HTML5/JavaScript user interface configured to execute scripting commands to perform natural language processing and related content searches, and to provide a dynamic interface that enables both user-interactive and automatic methods of obtaining and displaying related content. The natural language processing may extract one or more context-sensitive key terms of text associated with a set of content. Related content may be located and identified using keyword searches that include the context-sensitive key terms. For example, text associated with video of a first content, such as text originating from subtitles or closed captioning, may be used to perform searches and locate related content such as a video of a second content, or text of a third content.
    Type: Grant
    Filed: July 17, 2017
    Date of Patent: July 30, 2019
    Assignee: Intel Corporation
    Inventors: Elliot Smith, Victor Szilagyi
  • Patent number: 10347298
    Abstract: A method of generating control data for displaying a video sequence on a low-resolution display may comprise: providing at least a first video sequence, the first video sequence comprising a plurality of image frames; determining whether a first image frame of the first video sequence comprises a sub-image of a primary object; if the first image frame has been determined to comprise said sub-image of said primary object, selecting a first position of a primary image portion of an image frame such that the first position substantially matches the position of the sub-image of the primary object; and providing control data that indicates said first position.
    Type: Grant
    Filed: June 13, 2014
    Date of Patent: July 9, 2019
    Assignee: NOKIA TECHNOLOGIES OY
    Inventors: Igor Danilo Diego Curcio, Sujeet Shyamsundar Mate
  • Patent number: 10341694
    Abstract: Data processing methods, live broadcasting methods and devices are disclosed. An example data processing method may comprise converting audio and video data into broadcast data in a predetermined format, performing speech recognition on the audio data in the audio and video data, and adding the text information obtained from speech recognition into the broadcast data. In this way, text information obtained by speech recognition of the audio data can be inserted into the broadcast in real time.
    Type: Grant
    Filed: August 4, 2017
    Date of Patent: July 2, 2019
    Assignee: ALIBABA GROUP HOLDING LIMITED
    Inventor: Gang Xu
  • Patent number: 10332559
    Abstract: An information processing apparatus includes a reproduction unit to reproduce video content comprising a plurality of frames; a memory to store a table including object identification information identifying an object image, and frame identification information identifying a frame of the plurality of frames that includes the object image; and a processor to extract the frame including the object image from the video content and generate display data of a reduced image corresponding to the frame for display.
    Type: Grant
    Filed: November 24, 2015
    Date of Patent: June 25, 2019
    Assignee: SONY CORPORATION
    Inventor: Kenji Tokutake
  • Patent number: 10198480
    Abstract: According to an example, at least one hot account is determined for each category according to quality scores and correlation degrees of historical user-generated content items (UGCs); after a UGC newly posted by the hot account is received, if a quality score of the newly posted UGC is higher than a predefined quality score threshold and a correlation degree between the newly posted UGC and the category that the hot account belongs to is higher than a predefined correlation degree threshold, the newly posted UGC is determined as a hot UGC.
    Type: Grant
    Filed: February 20, 2015
    Date of Patent: February 5, 2019
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Yun Yang, Weigang Li
  • Patent number: 10180949
    Abstract: The present invention provides a method and an apparatus for information searching. The method includes: displaying a shooting interface for image search, and displaying guide information in the shooting interface; obtaining an image shot according to the guide information; and obtaining a search result according to the image shot and displaying the search result. With the present method for information searching, an accuracy rate of the image search may be improved, and requirements of a user may be better satisfied.
    Type: Grant
    Filed: December 15, 2015
    Date of Patent: January 15, 2019
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Zhen Wang, Na Zhang, Kai Chen, Wenbo Yang, Xi Wang
  • Patent number: 10181168
    Abstract: Disclosed is a system whereby it is possible to verify the safety of a person even if the person is not aware that the person is being searched for as a missing person. In this system, each verification requesting person who is searching for another person registers, in a database of a portal server (4), a set comprising a feature value of the face of the searched-for person and personal information (e.g., telephone number) about the searched-for person or the verification requesting person. A field server (2) constantly compares feature values of captured face images with the database, and if a close match is found between the feature value of a captured face image and the stored feature value of the face of a person, the field server (2) presents the registered personal information associated with that person to the person from which the captured face image was derived and requests verification from the latter person.
    Type: Grant
    Filed: March 31, 2014
    Date of Patent: January 15, 2019
    Assignee: HITACHI KOKUSAI ELECTRIC, INC.
    Inventor: Wataru Ito
  • Patent number: 10157170
    Abstract: A computer implemented method for the collection and segmentation of media through search and segmentation settings.
    Type: Grant
    Filed: December 30, 2013
    Date of Patent: December 18, 2018
    Assignee: eBay, Inc.
    Inventors: Dane M. Howard, James W. Lanahan, Brian F. Williams
  • Patent number: 9953032
    Abstract: A method and system for characterization of multimedia content inputs using cores of a natural liquid architecture are provided. The method comprises receiving at least one multimedia content signal; generating at least a signature respective of the multimedia content signal; matching the generated at least a signature respective of the multimedia content signal to at least a signature from a Signature Database (SDB); identifying a cluster respective of the generated at least a signature; and identifying in a Concept Database (CDB) a concept respective of the cluster.
    Type: Grant
    Filed: June 12, 2014
    Date of Patent: April 24, 2018
    Assignee: Cortica, Ltd.
    Inventors: Igal Raichelgauz, Karina Odinaev, Yehoshua Y. Zeevi
  • Patent number: 9898666
    Abstract: An apparatus and method for providing primitive visual knowledge are disclosed. The method of providing primitive visual knowledge includes receiving an image in a form of a digital image sequence, dividing the received image into scenes, extracting a representative shot from each of the scenes, extracting objects from frames which compose the representative shot, extracting action verbs based on a mutual relationship between the extracted objects, selecting a frame best expressing the mutual relationship with the objects, which are the basis for the extracting of the action verbs, as a key frame, generating the primitive visual knowledge based on the selected key frame, storing the generated primitive visual knowledge in a database, and visualizing the primitive visual knowledge stored in the database to provide the primitive visual knowledge to a manager.
    Type: Grant
    Filed: January 22, 2016
    Date of Patent: February 20, 2018
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Kyu-Chang Kang, Yong-Jin Kwon, Jin-Young Moon, Kyoung Park, Chang-Seok Bae, Jeun-Woo Lee
  • Patent number: 9894291
    Abstract: A moving image acquiring part acquires a moving image. A still image accumulating part stores still images. An image processing part calculates the gains obtained by storing still images in advance, based on the costs of extracting still images from a moving image, extracts some still images with higher gains from among the still images composing the moving image, and stores them into the still image accumulating part. A request processing part retrieves a still image requested by a request source from the still image accumulating part and transmits it to the request source and, when the still image requested by the request source is not stored in the still image accumulating part, extracts the still image requested by the request source from the moving image acquired by the moving image acquiring part and transmits it to the request source.
    Type: Grant
    Filed: August 13, 2014
    Date of Patent: February 13, 2018
    Assignee: NEC CORPORATION
    Inventor: Yoshihiro Kanna
  • Patent number: 9779152
    Abstract: In a data visualization system, a method of analyzing and representing spatial data sets to optimize the arrangement of spatial elements, the method including the steps of: retrieving data from a data storage module that is in communication with the data visualization system, determining lift values for a plurality of predefined spatial areas from the retrieved data based on a set of fuzzy association rules applied to the predefined spatial areas, determining spatial performance values for the predefined spatial areas, and calculating a weighted spatial relationship between the determined lift values and spatial performance values.
    Type: Grant
    Filed: July 12, 2016
    Date of Patent: October 3, 2017
    Assignee: New BIS Safe Luxco S.à r.l
    Inventor: Andrew John Cardno
  • Patent number: 9720563
    Abstract: A method of representing a 3D video from a 2D video by use of a node-based task pipeline for 3D video representation, the method implementable by a computer and including generating nodes, each having a defined task sequence required for a 3D video representation, in a node connecting task section provided to a Graphic User Interface (GUI), generating a task pipeline defining a connectivity relationship between the generated nodes, providing a user interface that is configured to operate user-defined data that is to be used by a certain node of the task pipeline, and generating user-defined data based on a user input that is input through the user interface, and outputting a 3D video from an input 2D video by use of the task pipeline and the user-defined data.
    Type: Grant
    Filed: January 20, 2015
    Date of Patent: August 1, 2017
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Seung Woo Nam, Kyung Ho Jang, Myung Ha Kim, Yun Ji Ban, Hye Sun Kim, Bon Ki Koo
  • Patent number: 9710598
    Abstract: Disclosed herein is an information processor including: an image storage section storing a piece of first image data having a first resolution and at least one piece of second image data as layer-by-layer image data for a specimen, the piece of second image data being obtained by spatially compressing the piece of first image data at different magnification ratios; an image data acquisition section acquiring image data from the layer-by-layer image data in units of a predetermined second resolution by which the first resolution is equally divisible to display the image data on a display device; an annotation setting section setting annotations at arbitrary spatial positions of the display image data in response to an instruction from the user; and an image optimization section determining whether each piece of the image data stored in the image storage section is necessary, and deleting the image data determined to be unnecessary.
    Type: Grant
    Filed: December 14, 2011
    Date of Patent: July 18, 2017
    Assignee: Sony Corporation
    Inventors: Kenji Yamane, Seiji Miyama, Masato Kajimoto
  • Patent number: 9705884
    Abstract: Embodiments of the present invention disclose a method, system, and computer program product for intelligent access control. A computer detects a new user or modifications made to an existing user in an access control list. The computer determines which other users share an attribute with the newly added or modified employee and then determines which asset(s) are associated with the determined group(s). The computer determines the correlation value between the group(s) and the asset. Based on the determined correlation value, the computer determines whether the newly added or modified employee should have access to the asset.
    Type: Grant
    Filed: September 25, 2015
    Date of Patent: July 11, 2017
    Assignee: International Business Machines Corporation
    Inventors: Tamer E. Abuelsaad, Jonathan M. Barney, Carlos A. Hoyos, Robert R. Wentworth
  • Patent number: 9639739
    Abstract: Facial image bucketing is disclosed, whereby a query for facial image recognition compares the facial image against existing candidate images. Rather than comparing the facial image to each candidate image, the candidate images are organized or clustered into buckets according to their facial similarities, and the facial image is then compared to the image(s) in most-likely one(s) of the buckets. The organizing uses particular selected facial features, computes distance between the facial features, and selects ones of the computed distances to determine which facial images should be organized into the same bucket.
    Type: Grant
    Filed: May 28, 2016
    Date of Patent: May 2, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael Poplavski, Scott Schumacher, Prachi Snehal, Sean J. Welleck, Alan Xia, Yinle Zhou
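The bucketing described in the entry above (US 9639739, and again in US 9405963 below) limits a facial query to the most likely bucket. The sketch assumes candidate faces are already reduced to feature-distance vectors and that each bucket is summarized by a centroid.

```python
# Assign a probe face to the nearest bucket and return only that bucket's candidate
# images for the detailed comparison step.
import numpy as np

def assign_bucket(feature_vector: np.ndarray, bucket_centroids: np.ndarray) -> int:
    distances = np.linalg.norm(bucket_centroids - feature_vector, axis=1)
    return int(np.argmin(distances))            # index of the most likely bucket

def candidates_to_compare(probe: np.ndarray, buckets: dict[int, list[str]],
                          bucket_centroids: np.ndarray) -> list[str]:
    return buckets.get(assign_bucket(probe, bucket_centroids), [])
```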
  • Patent number: 9633015
    Abstract: A method and client device is disclosed for indexing content of a multimedia file. The method comprises using a client device to segment the content of the multimedia file into a plurality of segments and to determine structure-searchable data for each segment. Determining structure searchable data for a segment comprises (1) identifying one or more features of respective multimedia types in the segment; (2) correlating each of the identified features to one or more respective keywords; and (3) calculating one or more respective relevance factors for each of the keywords, where at least one of the relevance factors is based on one or more characteristics of the client device. The method also comprises the client device transmitting the structure-searchable data (including the keywords, relevance factors, and respective media types of the identified features) to an indexing server.
    Type: Grant
    Filed: July 26, 2012
    Date of Patent: April 25, 2017
    Assignee: Telefonaktiebolaget LM Ericsson (PUBL)
    Inventors: Tommy Arngren, David Lindegren, Joakim Söderberg, Marika Stålnacke
  • Patent number: 9613057
    Abstract: A document management apparatus receives image data generated by a first user using an image processing apparatus, stores the image data, receives a document file that enables image data to be edited and was transmitted by a second user from a user terminal, searches for image data corresponding to the received document file among the stored image data, and transmits the received document file to a unique destination assigned to the first user who has generated the found image data.
    Type: Grant
    Filed: August 5, 2013
    Date of Patent: April 4, 2017
    Assignee: Canon Kabushiki Kaisha
    Inventor: Satoshi Kawara
  • Patent number: 9510043
    Abstract: One embodiment of the present invention sets forth a technique for identifying and pre-buffering audio/video stream pairs. The method includes the steps of predictively identifying for pre-buffering at least one audio/video stream pair that may be selected for playback by a user subsequent to a currently playing audio/video stream pair, computing a first rate for pre-buffering an audio portion of the at least one audio/video stream pair and a second rate for pre-buffering a video portion of the at least one audio/video stream pair, downloading the audio portion at the first rate and downloading the video portion at the second rate, and storing the downloaded audio portion and the downloaded video portion in a content buffer.
    Type: Grant
    Filed: April 27, 2015
    Date of Patent: November 29, 2016
    Assignee: NETFLIX, INC.
    Inventors: John Funge, Greg Peters
  • Patent number: 9495439
    Abstract: In one embodiment, a method includes receiving digital media content files. The digital media content has at least one property associated with it. Topically related segments are determined from received content in accordance with one or more properties. Topic clusters are generated based on similarities between segments. Topic clusters from multiple files of the plurality are compared and clustered into cluster groups in accordance with the comparison. Cluster groups are associatively stored in a data storage. A search for topic clusters relevant to a particular need is made, and a series of related segments associated with the search is generated for serial display.
    Type: Grant
    Filed: October 8, 2013
    Date of Patent: November 15, 2016
    Assignee: Cisco Technology, Inc.
    Inventors: Yongxin Xi, Mainak Sen
  • Patent number: 9489421
    Abstract: Disclosed herein is a transmission apparatus including: at least one content recognition section; and a timeline data generation section. The content recognition section has a database block configured to store reference data including at least signature data and a content identifier, and timeline data including at least an application identifier and timeline information. The content recognition section further has a response generation block configured to recognize the content from which signature data included in a query was generated, generate a response including the timeline data including a content identifier and the application identifier, and return the generated response to the reception apparatus. The timeline data generation section is configured to generate the timeline data and collectively supply the generated timeline data to the at least one content recognition section, the generated timeline data being common to the at least one content recognition section.
    Type: Grant
    Filed: July 9, 2013
    Date of Patent: November 8, 2016
    Assignee: SONY CORPORATION
    Inventor: Yasuaki Yamagishi
  • Patent number: 9471585
    Abstract: A local de-duplication table for at least a particular partition of a data stream is instantiated at a particular ingestion node of a multi-tenant stream management service. A submission request indicating a data record of the partition is received at the ingestion node. In response to a determination that (a) the submission request was received within a de-duplication time window corresponding to the particular partition, and (b) the local de-duplication table does not indicate that the data record is a duplicate, a write operation to store the data record at one or more storage locations of the stream management system is initiated.
    Type: Grant
    Filed: December 20, 2013
    Date of Patent: October 18, 2016
    Assignee: Amazon Technologies, Inc.
    Inventor: Marvin Michael Theimer
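The ingestion check in the Amazon entry above (US 9471585) can be pictured as a small per-partition table keyed by record identity. The key choice, the wall-clock window, and the in-memory dict are assumptions of this sketch.

```python
# Node-local de-duplication: a write is initiated only when the record has not been
# seen inside the partition's de-duplication time window.
import time

class LocalDedupTable:
    def __init__(self, window_seconds: float):
        self.window = window_seconds
        self.seen: dict[str, float] = {}        # record key -> time it was first seen

    def should_write(self, record_key: str, now: float | None = None) -> bool:
        now = time.time() if now is None else now
        first_seen = self.seen.get(record_key)
        if first_seen is not None and now - first_seen <= self.window:
            return False                        # duplicate within the window: skip the write
        self.seen[record_key] = now
        return True                             # new record: initiate the storage write
```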
  • Patent number: 9448763
    Abstract: A system operates to manage accessibility of media content items based on a user's performance of a repetitive motion activity. The system can generate rule data based on a rule designed to permit access to certain media content items. The rule data can include information about various conditions to be satisfied to make the media content items accessible for playback. Such conditions can be associated with a user's performance or status of a repetitive motion activity.
    Type: Grant
    Filed: October 14, 2015
    Date of Patent: September 20, 2016
    Assignee: Spotify AB
    Inventors: Dariusz Dziuk, Rahul Sen, Matilda Hannäs, Nikolaos Toumpelis
  • Patent number: 9405963
    Abstract: Facial image bucketing is disclosed, whereby a query for facial image recognition compares the facial image against existing candidate images. Rather than comparing the facial image to each candidate image, the candidate images are organized or clustered into buckets according to their facial similarities, and the facial image is then compared to the image(s) in most-likely one(s) of the buckets. The organizing uses particular selected facial features, computes distance between the facial features, and selects ones of the computed distances to determine which facial images should be organized into the same bucket.
    Type: Grant
    Filed: July 30, 2014
    Date of Patent: August 2, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael Poplavski, Scott Schumacher, Prachi Snehal, Sean J. Welleck, Alan Xia, Yinle Zhou
  • Patent number: 9330171
    Abstract: A method includes receiving, by a processing device of a content sharing platform, a video content, selecting at least one video frame from the video content, subsampling the at least one video frame to generate a first representation of the at least one video frame, selecting a sub-region of the at least one video frame to generate a second representation of the at least one video frame, and applying a convolutional neural network to the first and second representations of the at least one video frame to generate an annotation for the video content.
    Type: Grant
    Filed: January 22, 2014
    Date of Patent: May 3, 2016
    Assignee: GOOGLE INC.
    Inventors: Sanketh Shetty, Andrej Karpathy, George Dan Toderici
  • Patent number: 9305215
    Abstract: An apparatus for analyzing a video capture screen includes: a video frame extracting unit extracting at least one frame from a video having a plurality of frames; an extracted frame digitizing unit digitizing features of each of the at least one frame extracted by the video frame extracting unit; an image digitizing unit digitizing features of at least one collected search target image; an image comparing and searching unit comparing the search target image with the at least one frame extracted from the plurality of frames by digitized values of the collected search target image and the at least one frame; and a search result processing unit mapping related information of the collected search target image to a frame coinciding with the search target image and storing the related information in a database, when the extracted at least one frame coinciding with the search target image is present in a comparison result.
    Type: Grant
    Filed: July 12, 2013
    Date of Patent: April 5, 2016
    Assignee: NHN Corporation
    Inventors: Gunhan Park, Jeanie Jung
  • Patent number: 8897603
    Abstract: An image processing apparatus includes a selection device that selects a plurality of frames from a video image constituted with a group of frames, an extraction device that recognizes a specific subject image in the plurality of frames having been selected and extracts the recognized subject image, and an image creation device that creates a still image containing a plurality of subject images having been extracted by the extraction device.
    Type: Grant
    Filed: August 20, 2010
    Date of Patent: November 25, 2014
    Assignee: Nikon Corporation
    Inventor: Takeshi Nishi
  • Patent number: 8892553
    Abstract: Recording of various events in a video format that facilitates viewing and selective editing are provided. The video can be presented in a wiki-format that allows a multitude of subsequent users to add, modify and/or delete content to the original recorded event or a revision of that event. As edits and annotations are applied, either automatically or manually, such edits can be indexed based on criteria such as identification of an annotator, a time stamp associated with the edit, a revision number, or combinations thereof. The edits or annotations can be provided in various formats including video, audio, text, and so forth.
    Type: Grant
    Filed: June 18, 2008
    Date of Patent: November 18, 2014
    Assignee: Microsoft Corporation
    Inventors: Rebecca Norlander, Anoop Gupta, Bruce A. Johnson, Paul J. Hough, Mary P. Czerwinski, Pavel Curtis, Raymond E. Ozzie
  • Publication number: 20140250055
    Abstract: Certain embodiments described herein provide methods and systems that use metadata placeholders to facilitate the association of metadata with recorded media content. Metadata placeholders, for example, may be created prior to recording content and then used at the time of the recording and editing of the actual content. Metadata placeholders can be used to make useful information, including a director's shot plan and other shot attribute information, available on-location to be used and edited by those present at recording and to facilitate the association of the information with the actual recorded content. One exemplary method involves creating a metadata placeholder for a shot, including information about the shot in the metadata fields of the metadata placeholder, and then storing the placeholder's metadata with the content that is recorded for the shot.
    Type: Application
    Filed: July 7, 2008
    Publication date: September 4, 2014
    Inventors: David Kuspa, Mark Mapes, Benoit Ambry
  • Patent number: 8768924
    Abstract: A media editing system includes one or more machines that are configured to support cloud-based collaborative editing of media by one or more client devices. A machine within the media editing system may be configured to receive a render request for generation of a media frame, determine whether a client device is to generate the media frame, and initiate generation of the media frame. Moreover, a machine within the media editing system may facilitate resolution of conflicts between edits to a particular piece of media. Furthermore, a machine within the media editing system may facilitate provision of convenient access to media from a particular client device to one or more additional client devices.
    Type: Grant
    Filed: November 8, 2011
    Date of Patent: July 1, 2014
    Assignee: Adobe Systems Incorporated
    Inventors: Enzo Mario Guerrera, Boris Alexander Prüessmann, Matthew C. Weagle
  • Publication number: 20140114917
    Abstract: A device may play a content item and detect an event while the content item is playing. The device may also determine a position, within the content item, at which the content item is playing when the event is detected, to obtain position information. In addition, the device may associate the position information with information obtained based on the event to generate a log entry. The device may update an experience log with the log entry.
    Type: Application
    Filed: October 18, 2012
    Publication date: April 24, 2014
    Applicant: Sony Mobile Communications AB
    Inventor: David Karlsson
  • Patent number: 8694667
    Abstract: A filtering method and system. The method includes receiving by a computer processor an audio/video data file and filtering data. The computer processor analyzes the filtering data with respect to the audio/video data file and retrieves specified audio/video data portions comprising data objects within frames of the audio/video data file. The computer processor removes gaps existing in the audio/video data file and receives tags comprising instructions for presenting video data of the audio/video data file, audio data of the audio/video data file, and the specified audio/video data portions. The computer processor stores the video data in a first layer of a multimedia file, the audio data in a second layer of the multimedia file, and the specified audio/video data portions in additional layers of the multimedia file. Each of the first layer, the second layer, and the additional layers comprises a tag layer comprising the tags.
    Type: Grant
    Filed: January 5, 2011
    Date of Patent: April 8, 2014
    Assignee: International Business Machines Corporation
    Inventor: Sarbajit K. Rakshit
  • Publication number: 20140074866
    Abstract: A method is provided in one example embodiment and includes detecting user interaction associated with a video file; extracting interaction information that is based on the user interaction associated with the video file; and enhancing the metadata based on the interaction information. In more particular embodiments, the enhancing can include generating additional metadata associated with the video file. Additionally, the enhancing can include determining relevance values associated with the metadata.
    Type: Application
    Filed: September 10, 2012
    Publication date: March 13, 2014
    Applicant: Cisco Technology, Inc.
    Inventors: Sandipkumar V. Shah, Ananth Sankar
  • Publication number: 20130262462
    Abstract: Methods and systems for providing related video files in a video file storage system are disclosed. One method includes identifying a plurality of video files within the video file storage system, wherein the plurality of video files each have a relationship with a first file, and each video file includes a video and associated information. The method further includes generating, by a system server, a list of inquiries based on the plurality of video files, providing, by the system server, the list of inquiries to at least one creator of the first file, receiving from the at least one creator at least one response to the list of inquiries, selecting a subset of the plurality of video files based on the at least one response, and storing information related to the selected subset of the plurality of video files.
    Type: Application
    Filed: April 3, 2012
    Publication date: October 3, 2013
    Applicant: PYTHON4FUN
    Inventors: Devabhaktuni Srikrishna, Marc A. Coram, Christopher Hogan
  • Patent number: 8515933
    Abstract: A video search method including following steps is provided. Meta-data of a query clip is received, wherein the meta-data includes an index tag and a semantic pattern. One or more candidate clips are retrieved from at least one video database according to the index tag. The semantic pattern is compared with a semantic pattern of each of the candidate clips, and each of the candidate clips is marked as a returnable video clip or a non-returnable video clip according to a comparison result. The candidate clips marked as the returnable video clip are served as a query result matching the query clip. A video search system and a method for establishing a video database are also provided.
    Type: Grant
    Filed: April 1, 2011
    Date of Patent: August 20, 2013
    Assignee: Industrial Technology Research Institute
    Inventors: Jih-Sheng Tu, Jung-Yang Kao
  • Patent number: 8509490
    Abstract: A trajectory processing apparatus comprises a trajectory database configured to store a position coordinate of a movable body detected from a camera image in association with data that specifies the camera image from which the movable body is detected, and a camera image database configured to store the camera image. A control section fetches the position coordinate of the movable body and the specifying data for the camera image from which the movable body is detected from the trajectory database. Further, the position coordinate of the movable body fetched from the trajectory database is displayed in a display section as a trajectory of the movable body. Furthermore, the control section acquires from the camera image database the camera image specified by the specifying data fetched from the trajectory database. Moreover, this camera image is displayed in the display section.
    Type: Grant
    Filed: February 14, 2012
    Date of Patent: August 13, 2013
    Assignee: Toshiba Tec Kabushiki Kaisha
    Inventors: Masami Takahata, Takashi Koiso, Masaki Narahashi, Tomonori Ikumi
  • Publication number: 20130173635
    Abstract: A system and method identifies a video file in response to a video based search query. A video imaging device in a mobile device captures a video file, and sends the video file to a search engine. A database associated with the search engine stores pre-indexed metadata of pre-indexed frames of video. A video analyzer separates the received video file into individual frames, analyzes the individual frames received from the mobile device by converting the individual frames into metadata, and compares the metadata to the pre-indexed metadata of the pre-indexed frames stored in the database. The video analyzer then sends a message containing information about the identified pre-existing video back to the mobile device based on the comparison of metadata. The metadata of the file and/or the metadata in the database may include one or more of pixel information, histogram information, image recognition information and audio information for each individual frame.
    Type: Application
    Filed: December 30, 2011
    Publication date: July 4, 2013
    Applicant: Cellco Partnership d/b/a Verizon Wireless
    Inventor: Kumar SANJEEV
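The matching step in the last entry above (publication 2013/0173635) compares per-frame metadata of a captured clip against a pre-indexed database. The sketch below uses a color histogram as the only per-frame metadata and a histogram-intersection score, both illustrative stand-ins for the listed signals (pixel, histogram, image recognition, and audio information).

```python
# Identify the pre-indexed video whose frame histograms best match the query clip's.
import numpy as np

def frame_histogram(frame_rgb: np.ndarray, bins: int = 16) -> np.ndarray:
    hist, _ = np.histogram(frame_rgb, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)                    # normalize for comparability

def best_matching_video(query_hists, index):
    """index: {video_id: [histogram, ...]} built from pre-indexed frames."""
    def score(candidate_hists):
        return sum(float(np.minimum(q, c).sum())        # histogram intersection
                   for q in query_hists for c in candidate_hists)
    return max(index, key=lambda vid: score(index[vid])) if index else None
```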