With At Least One Audio Signal Patents (Class 386/285)
  • Patent number: 11978484
    Abstract: Various embodiments facilitate the creation and presentation of a virtual experience. In one embodiment, the virtual experience is assembled from user model data corresponding to a three-dimensional representation of a user, user movement data corresponding to at least one movement characteristic of the user, user voice data corresponding to at least one vocal characteristic of the user, environment data corresponding to a three-dimensional representation of a location, and event data corresponding to a captured event at the location. The virtual experience is a virtual recreation of the captured event at the location, with the three-dimensional representation of the user, the vocal characteristic of the user, and the movement characteristic of the user inserted into the captured event.
    Type: Grant
    Filed: October 21, 2021
    Date of Patent: May 7, 2024
    Assignee: DISH Technologies L.L.C.
    Inventors: Nicholas Brandon Newell, Swapnil Anil Tilaye, Carlos Garcia Navarro
  • Patent number: 11964357
    Abstract: A conditioner of a chemical mechanical polishing (CMP) apparatus includes a disk to polish a polishing pad of the CMP apparatus, a driver to rotate the disk, a lifter to lift the driver, an arm to rotate the lifter, and a connector to connect the driver to the lifter, the driver being tiltable with respect to the lifter.
    Type: Grant
    Filed: July 5, 2022
    Date of Patent: April 23, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Seungchul Han, Yonghee Lee, Taemin Earmme, Byoungho Kwon, Kuntack Lee
  • Patent number: 11942116
    Abstract: In one aspect, an example method includes (i) obtaining a set of user attributes for a user of a content-presentation device; (ii) based on the set of user attributes, obtaining structured data and determining a textual description of the structured data; (iii) transforming, using a text-to-speech engine, the textual description of the structured data into synthesized speech; and (iv) generating, using the synthesized speech and for display by the content-presentation device, a synthetic video of a targeted advertisement comprising the synthesized speech.
    Type: Grant
    Filed: May 17, 2023
    Date of Patent: March 26, 2024
    Assignee: Roku, Inc.
    Inventors: Sunil Ramesh, Michael Cutter, Charles Brian Pinkerton, Karina Levitian
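    The four claimed steps can be sketched as a simple pipeline. The attribute catalog, `synthesize_speech`, and `render_ad` below are hypothetical stand-ins for Roku's models, not the patented implementation:

```python
def structured_data_for(attributes):
    """Step (ii): look up structured data keyed on user attributes.

    The catalog here is a toy example mapping interests to offers.
    """
    catalog = {"hiking": {"product": "trail shoes", "discount": 20}}
    for interest in attributes.get("interests", []):
        if interest in catalog:
            return catalog[interest]
    return None

def describe(record):
    """Step (ii) continued: render the structured data as text."""
    return f"Get {record['discount']}% off {record['product']} today."

def synthesize_speech(text):
    """Step (iii): placeholder for a text-to-speech engine."""
    return {"kind": "audio", "transcript": text}

def render_ad(speech):
    """Step (iv): placeholder for synthetic-video generation."""
    return {"kind": "synthetic_video", "audio": speech}

def targeted_ad(attributes):
    record = structured_data_for(attributes)
    return render_ad(synthesize_speech(describe(record)))
```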
  • Patent number: 11936940
    Abstract: A user system for rendering accessibility enhanced content includes processing hardware, a display, and a memory storing software code. The processing hardware executes the software code to receive primary content from a content distributor and determine whether the primary content is accessibility enhanced content including an accessibility track. When the primary content omits the accessibility track, the processing hardware executes the software code to perform a visual analysis, an audio analysis, or both, of the primary content, generate, based on the visual analysis and/or the audio analysis, the accessibility track to include at least one of a sign language performance or one or more video tokens configured to be played back during playback of the primary content, and synchronize the accessibility track to the primary content. The processing hardware also executes the software code to render, using the display, the primary content or the accessibility enhanced content.
    Type: Grant
    Filed: May 3, 2022
    Date of Patent: March 19, 2024
    Assignee: Disney Enterprises, Inc.
    Inventors: Mark Arana, Katherine S. Navarre, Michael A. Radford, Joseph S. Rice, Noel Brandon Vasquez
  • Patent number: 11915123
    Abstract: Embodiments relate to a system, program product, and method for employing deep learning techniques to fuse data across modalities. A multi-modal data set is received, including a first data set having a first modality and a second data set having a second modality, with the second modality being different from the first modality. The first and second data sets are processed, including encoding the first data set into one or more first vectors, and encoding the second data set into one or more second vectors. The processed multi-modal data set is analyzed, and the encoded features from the first and second modalities are iteratively and asynchronously fused. The fused modalities include combined vectors from the first and second data sets representing correlated temporal behavior. The fused vectors are then returned as output data.
    Type: Grant
    Filed: November 14, 2019
    Date of Patent: February 27, 2024
    Assignee: International Business Machines Corporation
    Inventors: Xuan-Hong Dang, Syed Yousaf Shah, Petros Zerfos, Nancy Anne Greco
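    The encode-then-fuse flow can be illustrated minimally. The fixed-size encoder and element-wise fusion below are toy placeholders for the learned, iterative, asynchronous fusion the abstract describes:

```python
def encode(series, dim):
    """Stand-in encoder: map a numeric series of one modality to a
    fixed-size vector by summing strided slices."""
    return [sum(series[i::dim]) for i in range(dim)]

def fuse(first, second):
    """Combine vectors from the two modalities; element-wise addition
    is a simple placeholder for the patent's learned fusion."""
    return [a + b for a, b in zip(first, second)]

v1 = encode([1, 2, 3, 4], 2)   # first modality (e.g., sensor data)
v2 = encode([10, 20], 2)       # second modality (e.g., text features)
fused = fuse(v1, v2)           # combined vector returned as output
```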
  • Patent number: 11900682
    Abstract: A method for video clip extraction includes: obtaining a video, and sampling the video to obtain N video frames, wherein N is a positive integer; inputting the N video frames to a pre-trained frame feature extraction model to obtain a feature vector of each video frame in N video frames; determining scores of the N video frames based on a pre-trained scoring model; and extracting target video clips from the video based on the scores of the N video frames.
    Type: Grant
    Filed: April 26, 2021
    Date of Patent: February 13, 2024
    Assignee: BEIJING XIAOMI PINECONE ELECTRONICS CO., LTD.
    Inventors: Jiagao Hu, Fei Wang, Pengfei Yu, Daiguo Zhou
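    The sample-score-extract loop can be sketched as follows; the frame feature extractor and scoring model are stubbed with trivial functions standing in for the pre-trained models the claim names:

```python
def sample_frames(video, n):
    """Uniformly sample up to n frames from the video."""
    step = max(len(video) // n, 1)
    return video[::step][:n]

def extract_features(frame):
    """Stub for the pre-trained frame feature extraction model."""
    return [sum(frame), len(frame)]

def score(feature):
    """Stub for the pre-trained scoring model."""
    return feature[0] / (feature[1] or 1)

def extract_clips(video, n, threshold):
    """Keep contiguous runs of sampled frames whose score clears the
    threshold, treating each run as a target clip."""
    frames = sample_frames(video, n)
    clips, current = [], []
    for frame in frames:
        if score(extract_features(frame)) >= threshold:
            current.append(frame)
        elif current:
            clips.append(current)
            current = []
    if current:
        clips.append(current)
    return clips
```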
  • Patent number: 11882244
    Abstract: The embodiments of the disclosure disclose a video special effect processing method and a video special effect processing device. The method includes: detecting music played along with a video during a process of playing the video; acquiring a video frame image to be played in the video when the music is detected to be played at a preset rhythm; performing special effect processing on a target object in the video frame image to obtain a special effect processed video frame image; and displaying and playing the special effect processed video frame image.
    Type: Grant
    Filed: June 24, 2022
    Date of Patent: January 23, 2024
    Inventors: Xiaoqi Li, Jingjin Zhou
  • Patent number: 11817131
    Abstract: A graphical user interface for indicating video editing decisions may include an inclusion marker element, an exclusion marker element, and a selection marker element. The inclusion marker element may indicate a segment of a video that has been marked for inclusion in a video edit. The exclusion marker element may indicate a segment of the video has been marked for exclusion from the video edit. The selection marker element may indicate a segment of the video has been selected for inclusion in the video edit.
    Type: Grant
    Filed: January 17, 2023
    Date of Patent: November 14, 2023
    Assignee: GoPro, Inc.
    Inventors: Thomas Achddou, Renaud Cousin, Nicolas Duponchel, Jean-Baptiste Noel
  • Patent number: 11778138
    Abstract: An electronic device can be synchronized with a broadcast of a live sporting event to obtain supplemental sports data over a data network from a server storing data associated with the live sporting event. Supplemental sports data is obtained from the server for display on the electronic device following a triggering activity associated with the broadcast of the live sporting event. Supplemental sports data can be transmitted for rendering on a display associated with the electronic device. Supplemental sports data can include display of an instant replay video of a sports athlete combined with audio of a pre-recorded statement by the sports athlete associated with the instant replay video, an announcement of a score change for a sporting event monitored by the electronic device, and a display of a football widget providing updates on football game status (e.g., possession, ball location, current score) monitored by the electronic device.
    Type: Grant
    Filed: October 27, 2021
    Date of Patent: October 3, 2023
    Assignee: STRIPE, INC.
    Inventors: Anthony F. Verna, Luis M. Ortiz
  • Patent number: 11756528
    Abstract: A system for generating videos uses a domain-specific instructional language and a video rendering engine that produces videos against a digital product which changes and evolves over time. The video rendering engine uses the instructions in an instruction markup document written in the domain-specific instructional language to generate a video while navigating a web-based document, (which is different from the instruction markup document), representing the digital product for which the video is generated. The video rendering engine navigates the web-based document, coupled with the instruction markup document, which explains the operations to be performed on the web-based document. The instruction markup document also identifies the special effects that manipulate the underlying product in real-time, includes the spoken text for generating subtitles, and provides formalized change management by design.
    Type: Grant
    Filed: April 13, 2021
    Date of Patent: September 12, 2023
    Assignee: Videate, Inc.
    Inventor: David Christian Gullo
  • Patent number: 11736654
    Abstract: Methods, apparatus and systems related to production of a movie, a TV show or a multimedia content are described. In one example aspect, a system for producing a movie includes a pre-production subsystem configured to receive information about a storyline, cameras, cast, and other assets for the movie from a user. The pre-production subsystem is configured to generate one or more machine-readable scripts corresponding to one or more scenes. The system includes a production subsystem configured to receive the scripts from the pre-production system to determine constraints among the scenes. The production subsystem is further configured to determine actions for the cameras and the cast for obtaining footages according to the storyline.
    Type: Grant
    Filed: July 17, 2020
    Date of Patent: August 22, 2023
    Assignee: WeMovie Technologies
    Inventors: Xidong Wu, Xiubo Wu
  • Patent number: 11670284
    Abstract: Systems and methods are disclosed herein for detecting dubbed speech in a media asset and receiving metadata corresponding to the media asset. The systems and methods may determine a plurality of scenes in the media asset based on the metadata, retrieve a portion of the dubbed speech corresponding to the first scene, and process the retrieved portion of the dubbed speech corresponding to the first scene to identify a speech characteristic of a character featured in the first scene. Further, the systems and methods may determine whether the speech characteristic of the character featured in the first scene matches the context of the first scene, and if the match fails, perform a function to adjust the portion of the dubbed speech so that the speech characteristic of the character featured in the first scene matches the context of the first scene.
    Type: Grant
    Filed: September 21, 2021
    Date of Patent: June 6, 2023
    Assignee: Rovi Guides, Inc.
    Inventors: Mario Sanchez, Ashleigh Miller, Paul T. Stathacopoulos
  • Patent number: 11665391
    Abstract: A signal processing device includes an input interface, an image processor, an audio processor, and a controller. The input interface receives signals of a video and an audio acquired concurrently in a space where subjects exist. The image processor recognizes subject images in the video, to determine a first type of area where each subject exists. The audio processor recognizes sound sources in the audio, to determine a second type of area where each sound source exists in the space, independently of the first type of area. The controller uses the first and second types of areas to judge coincidence or non-coincidence between a position of the each subject and a position of the each sound source, to determine a combination of a subject and a sound source whose positions coincide with each other. The controller selectively determines the subject image to be output that corresponds to the combination.
    Type: Grant
    Filed: January 8, 2022
    Date of Patent: May 30, 2023
    Assignee: Panasonic Intellectual Property Management Co., Ltd.
    Inventors: Hiroki Kasugai, Shinichi Yamamoto
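    The coincidence judgment between the two types of areas can be sketched as a rectangle-overlap test. Representing areas as axis-aligned `(x1, y1, x2, y2)` boxes is an assumption for illustration:

```python
def overlaps(a, b):
    """True if two axis-aligned rectangles (x1, y1, x2, y2) intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def coincident_pairs(subject_areas, source_areas):
    """Judge coincidence between each subject's area (first type) and
    each sound source's area (second type); return matched index
    pairs, i.e., the subject/sound-source combinations whose
    positions coincide."""
    return [(i, j)
            for i, subj in enumerate(subject_areas)
            for j, src in enumerate(source_areas)
            if overlaps(subj, src)]
```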
  • Patent number: 11581018
    Abstract: There are provided methods and systems for media processing, comprising: providing at least one media asset source selected from a media asset sources library, the at least one media asset source comprising at least one source video, via a network to a client device; receiving via the network or the client device a media recording comprising a client video recorded by a user of the client device; transcoding the at least one source video and the client video which includes parsing the client video and the source video, respectively, to a plurality of client video frames and a plurality of source video frames based on the matching; segmenting one or more frames of the plurality of source video frames to one or more character frames; detecting one or more face images in one or more frames of the plurality of client video frames and provide face markers; resizing the one or more character frames according to the face markers compositing the resized character frames with the background frames using one or more bl
    Type: Grant
    Filed: September 3, 2021
    Date of Patent: February 14, 2023
    Assignee: FUSIT, INC.
    Inventor: Michal Shafir Nir
  • Patent number: 11462248
    Abstract: A playback of a video may be generated to include accompaniment of music. For parts of the video that includes voice, an instrumental version of the music may be used. For parts of the video that does not include voice, a singing version of the music may be used.
    Type: Grant
    Filed: June 29, 2021
    Date of Patent: October 4, 2022
    Assignee: GoPro, Inc.
    Inventors: Guillaume Oulès, Anais Oulès
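    The switching rule is simple enough to state directly; representing the video as per-segment voice-presence flags is an assumed simplification:

```python
def accompaniment(voice_segments):
    """For each video segment, choose the instrumental version of the
    music where voice is present and the singing version elsewhere,
    as the abstract describes."""
    return ["instrumental" if has_voice else "singing"
            for has_voice in voice_segments]
```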
  • Patent number: 11456017
    Abstract: In one general aspect, a method can include receiving a video loop portion included in a video file and receiving an audio loop portion included in an audio file. The method can include analyzing at least a portion of the audio file based on a musical characteristic and identifying a plurality of segment locations within the audio file based on the analyzing where the plurality of segment locations define a plurality of audio segments of the audio file. The method can also include modifying the video loop portion based on one of the plurality of segment locations in the audio file.
    Type: Grant
    Filed: September 22, 2020
    Date of Patent: September 27, 2022
    Assignee: Twitter, Inc.
    Inventors: Richard J. Plom, Jason J. Mante, Ryan Swigart, Mikhail Kaplinskiy
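    One way the claimed modification could work is snapping the video loop's end to a musical segment boundary; the trim-to-latest-boundary policy here is an assumption, not the patent's specified behavior:

```python
def snap_video_loop(video_duration, boundaries):
    """Modify the video loop portion so it ends on the latest audio
    segment location at or before its current end. The boundaries
    would come from analyzing the audio file for a musical
    characteristic (e.g., beats)."""
    candidates = [b for b in boundaries if 0 < b <= video_duration]
    return max(candidates) if candidates else video_duration
```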
  • Patent number: 11395088
    Abstract: A method comprising: causing analysis of a portion of a visual scene; causing modification of a first sound object to modify a spatial extent of the first sound object in dependence upon the analysis of the portion of the visual scene corresponding to the first sound object; and causing rendering of the visual scene and the corresponding sound scene including of the modified first sound object with modified spatial extent.
    Type: Grant
    Filed: September 8, 2020
    Date of Patent: July 19, 2022
    Assignee: NOKIA TECHNOLOGIES OY
    Inventors: Antti Eronen, Jussi Leppänen, Francesco Cricri, Arto Lehtiniemi
  • Patent number: 11355155
    Abstract: System and method to summarize one or more videos are provided. The system includes a data receiving module configured to receive videos; a video analysis module configured to analyse the one or more videos to generate one or more transcription text output; a building block data module configured to create a building block model and to apply the building block model on analysed videos; a video presentation module configured to present contents of the videos using elements and to present the one or more transcription texts; a video prioritization module configured to generate one or more ranking formulas for the videos, to prioritize building block models, upon receiving feedback from users, based on contents and transcription texts; a video summarization module configured to generate a video summary; a video action module configured to choose an action to be performed on the videos based on the feedback received from the corresponding users.
    Type: Grant
    Filed: May 11, 2021
    Date of Patent: June 7, 2022
    Assignee: CLIPr Co.
    Inventors: Humphrey Chen, Cindy Chin, Aaron Sloman
  • Patent number: 11303970
    Abstract: Systems and methods are disclosed for delivering video content over a network, such as the Internet. Videos are identified and pre-processed by a web service and then separated into a plurality of segments. Based on user interest, video segments may be pre-fetched and stored by a client associated with a user. Upon receiving a selection from a user to play a video, the first video segment may begin playing instantaneously from a local cache. While the first video segment plays, subsequent video segments are transmitted from the web service to the client, so that the subsequent video segments will be ready for viewing at the client when playback of the first video segment has finished.
    Type: Grant
    Filed: January 21, 2020
    Date of Patent: April 12, 2022
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Peter F. Kocks, Rami El Mawas, Ping-Hao Wu
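    The cache-then-stream behavior can be modeled in a few lines; this toy player assumes only the first segment is pre-fetched, whereas the patent pre-fetches based on user interest:

```python
class SegmentPlayer:
    """Toy model of the claimed playback: segment 0 is pre-fetched
    into a local cache; later segments are fetched from the 'web
    service' while earlier ones play, so each is ready in time."""

    def __init__(self, segments):
        self.remote = list(segments)
        self.cache = [self.remote[0]]  # pre-fetched on user interest

    def play(self):
        played = []
        for i in range(len(self.remote)):
            if i < len(self.cache):
                played.append(("cache", self.cache[i]))  # instant start
            else:
                self.cache.append(self.remote[i])  # fetched mid-playback
                played.append(("network", self.remote[i]))
        return played
```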
  • Patent number: 11276434
    Abstract: Systems and methods for generating individualized content trailers. Content such as a video is divided into segments each representing a set of common features. With reference to a set of stored user preferences, certain segments are selected as aligning with the user's interests. Each selected segment may then be assigned a label corresponding to the plot portion or element to which it belongs. A coherent trailer may then be assembled from the selected segments, ordered according to their plot elements. This allows a user to see not only segments containing subject matter that aligns with their interests, but also a set of such segments arranged to give the user an idea of the plot, and a sense of drama, increasing the likelihood of engagement with the content.
    Type: Grant
    Filed: November 17, 2020
    Date of Patent: March 15, 2022
    Assignee: ROVI GUIDES, INC.
    Inventors: Jeffry Copps Robert Jose, Mithun Umesh, Sindhuja Chonat Sri
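    The select-then-order step can be sketched directly. The three-label plot taxonomy and set-valued features are assumptions for illustration:

```python
PLOT_ORDER = ["setup", "conflict", "climax"]  # assumed plot-element labels

def build_trailer(segments, preferences):
    """segments: dicts with a 'features' set and a 'plot' label.
    Keep segments sharing at least one feature with the stored user
    preferences, then order them by plot element so the trailer is
    coherent and conveys the plot."""
    chosen = [s for s in segments if s["features"] & preferences]
    return sorted(chosen, key=lambda s: PLOT_ORDER.index(s["plot"]))
```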
  • Patent number: 11222523
    Abstract: A method of establishing a communication path between a first terminal and a second terminal, the method including receiving a first call from the first terminal; answering the first call; generating a call establishment request (CER); establishing a second call to the second terminal; forwarding the CER to the second terminal; receiving an acknowledgement from the second terminal; and connecting the first call and the second call.
    Type: Grant
    Filed: March 29, 2017
    Date of Patent: January 11, 2022
    Assignee: CARRIER CORPORATION
    Inventors: Ron Johan, Gabriel Daher, Daniel Ming On Wu
  • Patent number: 11200278
    Abstract: Provided are a method and apparatus for determining background music of a video, a terminal device, and a storage medium. The method includes: acquiring a scene selection instruction, and performing video capturing based on the scene selection instruction; after the video capturing is completed and a music selection instruction for the captured video is detected, displaying a music replacement interface; and determining background music of the captured video according to music selected on the music replacement interface.
    Type: Grant
    Filed: September 10, 2020
    Date of Patent: December 14, 2021
    Assignee: BEIJING MICROLIVE VISION TECHNOLOGY CO., LTD
    Inventor: Yu Song
  • Patent number: 11144764
    Abstract: Systems, methods, and storage media for selecting video portions for a video synopsis of streaming video content are disclosed. Exemplary implementations may: extract at least a portion of an audio track from a live stream of video content over time to create an audio file; convert the audio file from a time domain to a frequency domain; generate a visual representation of the spectrum of frequencies of the audio signal as it varies with time; apply a classification algorithm to the visual representation to generate interest probability scores for portions of the audio signal; select portions of the audio signal that meet or exceed a threshold probability score; correlate the selected portions of the audio signal to corresponding segments of the video content that has been streamed; and select the corresponding segments of the video content for inclusion in the synopsis.
    Type: Grant
    Filed: September 30, 2020
    Date of Patent: October 12, 2021
    Assignee: CBS Interactive Inc.
    Inventor: Marc Sharma
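    Once the classifier has produced per-window interest scores, the threshold-and-correlate steps reduce to the following; fixed-length audio windows mapped one-to-one onto video time are an assumed simplification:

```python
def select_synopsis_segments(scores, window, threshold):
    """scores[i]: interest probability for audio window i, each
    `window` seconds long. Return (start, end) spans of the video
    whose scores meet or exceed the threshold, merging windows that
    are contiguous in time."""
    spans = []
    for i, p in enumerate(scores):
        start, end = i * window, (i + 1) * window
        if p >= threshold:
            if spans and spans[-1][1] == start:
                spans[-1] = (spans[-1][0], end)  # extend previous span
            else:
                spans.append((start, end))
    return spans
```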
  • Patent number: 11127232
    Abstract: A system and method are presented for combining visual recordings from a camera, audio recordings from a microphone, and behavioral data recordings from behavioral sensors, during a panel discussion. Cameras and other sensors can be assigned to specific individuals or can be used to create recordings from multiple individuals simultaneously. Separate recordings are combined and time synchronized, and portions of the synchronized data are then associated with the specific individuals in the panel discussion. Interactions between participants are determined by examining the individually assigned portions of the time synchronized recordings. Events are identified in the interactions and then recorded as separate event data associated with individuals.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: September 21, 2021
    Assignee: On Time Staffing Inc.
    Inventor: Roman Olshansky
  • Patent number: 10939084
    Abstract: Disclosed is an approach for displaying 3D videos in a VR and/or AR system. The 3D videos may include 3D animated objects that escape from the display screen. The 3D videos may interact with objects within the VR and/or AR environment. The 3D video may be interactive with a user such that based on user input corresponding to decisions elected by the user at certain portions of the 3D video such that a different storyline and possibly a different conclusion may result for the 3D video. The 3D video may be a 3D icon displayed within a portal of a final 3D render world.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: March 2, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Praveen Babu J D, Sean Christopher Riley
  • Patent number: 10904123
    Abstract: A route tracing request packet is generated comprising a time-to-live value, a source address of a source of the route tracing request packet, and an address of a destination of the route tracing request packet. The source and destination are in the virtual network; the route tracing request packet is usable to identify the virtual appliance, and the virtual appliance is configured to examine the route tracing request packet for a time-to-live value indicating that the route tracing request packet has expired and sending a time-to-live exceeded message to the source address. The time-to-live exceeded message comprises an identifier for the virtual appliance. The route tracing request packet is forwarded to the destination. The time-to-live exceeded message is received. Data is extracted to determine network virtual appliances that were traversed by the route tracing request packet prior to expiration of the time-to-live. The network virtual appliances are reported.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: January 26, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Rishabh Tewari, Michael Czeslaw Zygmunt, Madhan Sivakumar, Deepak Bansal, Shefali Garg
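    The TTL-expiry mechanism behaves like classic traceroute; the sketch below simulates it over a named chain of appliances rather than real packets, so the hop model is purely illustrative:

```python
def trace_route(appliances, max_hops=16):
    """Send probes with increasing TTL through a chain of virtual
    appliances. The appliance at which the TTL reaches zero reports
    its identifier (the 'time-to-live exceeded' message); collecting
    those reports reveals every appliance the probe traversed."""
    discovered = []
    for ttl in range(1, max_hops + 1):
        remaining = ttl
        for hop in appliances:
            remaining -= 1
            if remaining == 0:
                discovered.append(hop)  # TTL exceeded at this appliance
                break
        else:
            return discovered  # probe reached the destination
        if discovered[-1] == appliances[-1]:
            return discovered  # whole chain enumerated
    return discovered
```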
  • Patent number: 10860190
    Abstract: Techniques for presenting and interacting with composite images on a computing device are described. In an example, the device presents a first article and a first portion of a first composite image showing a second article. The first composite image shows a first outfit combination that is different from the first outfit. The device receives a first user interaction indicating a request to change the second article and presents the first article and a second portion of a second composite image showing a third article in a second outfit. The second composite image shows a second outfit combination that is different from the first outfit, the second outfit, and the first outfit combination. The device receives a second user interaction indicating a request to use the third article and presents the second composite image showing the second outfit combination.
    Type: Grant
    Filed: April 22, 2019
    Date of Patent: December 8, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Nicholas Robert Ditzler, Lee David Thompson, Devesh Sanghvi, Hilit Unger, Moshe Bouhnik, Siddharth Jacob Thazhathu, Anton Fedorenko
  • Patent number: 10783930
    Abstract: A display control device includes an assigning unit and a display control unit. The assigning unit assigns, with respect to S number of units of display (where S is an integer equal to or greater than two) included in a display area in which units of display having the width equal to L number of pixels (where L is an integer equal to or greater than one) are placed in the width direction, M number of sets of data (where M is an integer greater than S) in a divided manner. The display control unit controls display of the units of display in different display formats according to the number of sets of data of a particular type included in the assigned data.
    Type: Grant
    Filed: January 25, 2017
    Date of Patent: September 22, 2020
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventor: Shinichiro Hamada
  • Patent number: 10643366
    Abstract: Implementations are directed to methods, systems, apparatus, and computer programs for generation of a three-dimensional (3D) animation by receiving a user input defining a two-dimensional (2D) representation of a plurality of elements, processing, by the one or more processors, the 2D representation to classify the plurality of elements in symbolic elements and action elements, generating, by the one or more processors, based on the symbolic elements, the action elements, and a set of rules a 3D animation corresponding to the 2D representation, and transmitting, by the one or more processors, the 3D animation to an extended reality device for display.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: May 5, 2020
    Assignee: Accenture Global Solutions Limited
    Inventors: Matthew Thomas Short, Robert Dooley, Grace T. Cheng, Sunny Webb, Mary Elizabeth Hamilton
  • Patent number: 10573349
    Abstract: Systems, methods, and non-transitory computer readable media can obtain a first image of a first user depicting a face of the first user with a neutral expression or position. A first image of a second user depicting a face of the second user with a neutral expression or position can be identified, wherein the face of the second user is similar to the face of the first user based on satisfaction of a threshold value. A second image of the first user depicting the face of the first user with an expression different from the neutral expression or position can be generated based on a second image of the second user depicting the face of the second user with an expression or position different from the neutral expression or position.
    Type: Grant
    Filed: December 28, 2017
    Date of Patent: February 25, 2020
    Assignee: Facebook, Inc.
    Inventors: Fernando De la Torre, Dong Huang, Francisco Vicente Carrasco
  • Patent number: 10535172
    Abstract: Implementations are directed to methods, systems, apparatus, and computer programs for generation of a three-dimensional (3D) animation by receiving a user input defining a two-dimensional (2D) representation of a plurality of elements, processing, by the one or more processors, the 2D representation to classify the plurality of elements in symbolic elements and action elements, generating, by the one or more processors, based on the symbolic elements, the action elements, and a set of rules a 3D animation corresponding to the 2D representation, and transmitting, by the one or more processors, the 3D animation to an extended reality device for display.
    Type: Grant
    Filed: December 11, 2018
    Date of Patent: January 14, 2020
    Assignee: Accenture Global Solutions Limited
    Inventors: Matthew Thomas Short, Robert Dooley, Grace T. Cheng, Sunny Webb, Mary Elizabeth Hamilton
  • Patent number: 10491947
    Abstract: Generally, systems and methods for generating personalized videos are disclosed. The systems and methods can enable parallel and/or on-demand and/or video portions-based rendering of a plurality of personalized videos for a plurality of recipients. The disclosed systems and methods can significantly reduce amount of processing resources and amount of time required to render the personalized videos thereof, as compared to current available systems and methods.
    Type: Grant
    Filed: December 26, 2018
    Date of Patent: November 26, 2019
    Assignee: XMPIE (ISRAEL) LTD.
    Inventors: Hanan Weisman, Galit Zwickel, Amit Cohen
  • Patent number: 10469968
    Abstract: In general, techniques are described for adapting higher order ambisonic audio data to include three degrees of freedom plus effects. An example device configured to perform the techniques includes a memory, and a processor coupled to the memory. The memory may be configured to store higher order ambisonic audio data representative of a soundfield. The processor may be configured to obtain a translational distance representative of a translational head movement of a user interfacing with the device. The processor may further be configured to adapt, based on the translational distance, higher order ambisonic audio data to provide three degrees of freedom plus effects that adapt the soundfield to account for the translational head movement, and generate speaker feeds based on the adapted higher order ambisonic audio data.
    Type: Grant
    Filed: October 12, 2017
    Date of Patent: November 5, 2019
    Assignee: Qualcomm Incorporated
    Inventors: Nils Günther Peters, S M Akramus Salehin, Shankar Thagadur Shivappa, Moo Young Kim, Dipanjan Sen
  • Patent number: 10453497
    Abstract: An information processing apparatus includes a receiving unit that receives, during or after reproduction of a video, a predetermined operation with respect to the video, an associating unit that associates the received operation with a reproduction location where the received operation has been generated in the video, and a setting unit that sets in response to the received operation an importance degree of the reproduction location associated with the received operation.
    Type: Grant
    Filed: March 12, 2018
    Date of Patent: October 22, 2019
    Assignee: FUJI XEROX CO., LTD.
    Inventor: Mai Suzuki
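    The receive-associate-set flow can be modeled with a small tracker; counting operations as the importance degree is an assumed policy, since the abstract does not specify how the degree is computed:

```python
class ImportanceTracker:
    """Sketch of the claimed apparatus: each operation received during
    or after reproduction is associated with the playback location
    where it occurred, and that location's importance degree is set
    (here, incremented) in response."""

    def __init__(self):
        self.importance = {}  # reproduction location -> importance degree

    def receive(self, operation, location):
        self.importance[location] = self.importance.get(location, 0) + 1
        return (operation, location)  # the stored association
```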
  • Patent number: 10425613
    Abstract: An electronic device can be synchronized with a broadcast of a live sporting event to obtain supplemental sports data over a data network from a server storing data associated with the live sporting event. Supplemental sports data is obtained from the server for display on the electronic device following a triggering activity associated with the broadcast of the live sporting event. Supplemental sports data can be transmitted for rendering on a display associated with the electronic device. Supplemental sports data can include display of an instant replay video of a sports athlete combined with audio of a pre-recorded statement by the sports athlete associated with the instant replay video, an announcement of a score change for a sporting event monitored by the electronic device, and a display of a football widget providing updates on football game status (e.g., possession, ball location, current score) monitored by the electronic device.
    Type: Grant
    Filed: May 1, 2015
    Date of Patent: September 24, 2019
    Assignee: CRIA, INC.
    Inventors: Anthony F. Verna, Luis M. Ortiz
  • Patent number: 10423659
    Abstract: Disclosed subject matter relates to digital media, including a method and system for generating contextual audio related to an image. An audio generating system may determine the scene-theme and viewer theme of a scene in the image. Audio files matching the scene-objects and the contextual data may be retrieved in real time, and relevant audio files may be identified from among them based on the relationship between the scene-theme, scene-objects, viewer theme, contextual data, and metadata of the audio files. A contribution weightage may be assigned to the relevant and substitute audio files based on the contextual data, and the files may be correlated based on contribution weightage, thereby generating the contextual audio related to the image. The present disclosure provides a feature wherein the contextual audio generated for an image may provide a holistic audio effect in accordance with the context of the image, thus recreating the audio that might have been present when the image was captured.
    Type: Grant
    Filed: August 17, 2017
    Date of Patent: September 24, 2019
    Assignee: Wipro Limited
    Inventors: Adrita Barari, Manjunath Ramachandra, Ghulam Mohiuddin Khan, Sethuraman Ulaganathan
  • Patent number: 10325397
    Abstract: Aspects of the present innovations relate to systems and/or methods involving multimedia modules, objects, or animations. According to an illustrative implementation, one method may include accepting at least one input keyword relating to a subject for the animation and performing processing associated with templates. Further, templates may generate different types of output, and each template may include components for display time, screen location, and animation parameters. Other aspects of the innovations may involve providing search results, retrieving data from a plurality of web sites or data collections, assembling information into multimedia modules or animations, and/or providing the module or animation for playback.
    Type: Grant
    Filed: March 20, 2017
    Date of Patent: June 18, 2019
    Assignee: OATH INC.
    Inventors: Doug Imbruce, Owen Bossola, Louis Monier, Rasmus Knutsson, Christian Le Cocq
  • Patent number: 10249341
    Abstract: A method, apparatus, and system for synchronizing audiovisual content with inertial outputs for content reproduced on a mobile content device include, in response to a vibration of the mobile content device, receiving a recorded audio signal and a corresponding recorded inertial signal generated by the vibration. The recorded signals are each processed to determine a timestamp for a corresponding peak in each of the recorded signals. A time distance between the timestamp of the recorded audio signal and the timestamp of the recorded inertial signal is determined, and inertial signals for content reproduced on the mobile content device are shifted by an amount of time equal to the determined time distance.
    Type: Grant
    Filed: February 2, 2016
    Date of Patent: April 2, 2019
    Assignee: INTERDIGITAL CE PATENT HOLDINGS
    Inventors: Julien Fleureau, Fabien Danieau, Khanh-Duy Le
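    The peak-alignment step this abstract describes can be sketched roughly as follows; the function names, the use of the global maximum as the peak, and NumPy are illustrative assumptions, not the patented implementation:

    ```python
    import numpy as np

    def peak_timestamp(signal, sample_rate):
        """Return the time (in seconds) of the largest peak in a signal."""
        return int(np.argmax(np.abs(signal))) / sample_rate

    def align_inertial_to_audio(audio, inertial, sample_rate):
        """Shift the inertial track by the time distance between the
        recorded audio peak and the recorded inertial peak."""
        offset_s = peak_timestamp(audio, sample_rate) - peak_timestamp(inertial, sample_rate)
        shift = int(round(offset_s * sample_rate))
        return np.roll(inertial, shift)
    ```

    In practice both recordings would be filtered before peak picking, but the core idea is this single time-distance shift.
    
    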
  • Patent number: 10198843
    Abstract: Implementations are directed to methods, systems, apparatus, and computer programs for generation of a three-dimensional (3D) animation by receiving a user input defining a two-dimensional (2D) representation of a plurality of elements, processing, by the one or more processors, the 2D representation to classify the plurality of elements in symbolic elements and action elements, generating, by the one or more processors, based on the symbolic elements, the action elements, and a set of rules a 3D animation corresponding to the 2D representation, and transmitting, by the one or more processors, the 3D animation to an extended reality device for display.
    Type: Grant
    Filed: July 17, 2018
    Date of Patent: February 5, 2019
    Assignee: Accenture Global Solutions Limited
    Inventors: Matthew Thomas Short, Robert Dooley, Grace T. Cheng, Sunny Webb, Mary Elizabeth Hamilton
  • Patent number: 10129586
    Abstract: Various implementations process a television content stream to detect program boundaries such as the starting point and ending point of the program. In at least some implementations, program boundaries such as intermediate points between the starting point and ending point of the program are also detected. The intermediate points correspond to where a program pauses for secondary content such as an advertisement or advertisements, and then resumes once the secondary content has run. Once program boundaries are detected, primary content is isolated by removing secondary content that occurs before the starting point and after the ending point. In at least some implementations, secondary content that occurs between detected intermediate points is also removed. The primary content is then recorded without secondary content that originally comprised part of the original television content stream.
    Type: Grant
    Filed: December 19, 2016
    Date of Patent: November 13, 2018
    Assignee: Google LLC
    Inventors: Joon-Hee Jeon, Jason R. Kimball, Benjamin P. Stewart
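    Once the boundaries are detected, isolating the primary content reduces to dropping frames outside the start/end points and inside any intermediate secondary-content span. A minimal sketch, with hypothetical names and frame indices standing in for stream positions:

    ```python
    def isolate_primary_content(frames, start, end, secondary_spans):
        """Keep only frames in [start, end) that do not fall inside any
        detected secondary-content span (each span is a (start, end) pair)."""
        kept = []
        for i in range(start, end):
            if not any(s <= i < e for s, e in secondary_spans):
                kept.append(frames[i])
        return kept
    ```
    
    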
  • Patent number: 10109318
    Abstract: Various embodiments of the invention provide systems and methods for low bandwidth consumption online content editing, where user-created content comprising high definition/quality content is created or modified at an online content editing server according to instructions from an online content editor client, and where a proxy version of the resulting user-created content is provided to online content editor client to facilitate review or further editing of the user-created content from the online content editor client. In some embodiments, the online content editing server utilizes proxy content during creation and modification operations on the user-created content, and replaces such proxy content with corresponding higher definition/quality content, possibly when the user-created content is published for consumption, or when the user has paid for the higher quality content.
    Type: Grant
    Filed: November 3, 2016
    Date of Patent: October 23, 2018
    Assignee: WeVideo, Inc.
    Inventors: Jostein Svendsen, Bjørn Rustberggaard
  • Patent number: 10083535
    Abstract: Disclosed herein is an online software application for providing users or customers with newly created animated equivalents of their originally submitted static (aka, un-animated) scannable codes.
    Type: Grant
    Filed: August 30, 2016
    Date of Patent: September 25, 2018
    Inventors: Peter Miller, Will Bilton
  • Patent number: 9998722
    Abstract: A system and method of guided video creation is described. In an exemplary method, the system guides a user to create a video production based on a set of pre-defined activities. In one embodiment, the system detects a selection of an item in a shotlist. In response to the item selection, the system stores structured metadata about the selection, opens the video camera and displays a dynamic video overlay relevant to the item selection. In addition, the system detects contact with an overlay button in the dynamic video overlay configured to toggle visibility of the dynamic video overlay. The system further receives a command to save a recorded clip of video content, stores additional metadata for the recorded clip of the video content, and updates the respective item in the shotlist.
    Type: Grant
    Filed: March 13, 2013
    Date of Patent: June 12, 2018
    Assignee: Tapshot, Inc.
    Inventors: Lee Eugene Swearingen, Cathy Teresa Clarke
  • Patent number: 9942673
    Abstract: The method for adjusting a hearing system (2) to the preferences of a user (3) of the hearing system comprises a) playing an audio sequence to said user (3); wherein the audio sequence comprises a first sound object representative of a first real-life sound source and a second sound object representative of a second real-life sound source; b) receiving an input (R) in response to step a); c) adjusting at least one audio processing parameter (P) of said hearing system (2) in dependence of said input (R). Preferably, the method further comprises d) providing the user (3) synchronously with step a) with a visualization of a scene to which said audio sequence belongs; and providing a user input (U) which is indicative of a sound source or of a sound object or of an instant in or a portion of the audio sequence; and automatically selecting an audio processing parameter (P) of the hearing system (2) in dependence of the user input (U) and offering the selected audio processing parameter (P) for adjusting.
    Type: Grant
    Filed: November 14, 2007
    Date of Patent: April 10, 2018
    Assignee: SONOVA AG
    Inventor: Michael Boretzki
  • Patent number: 9875245
    Abstract: User created playlists can be analyzed to create a statistical language model indicating the likelihood that a particular sequence of content attributes will be found in a playlist created by a user, as well as the likelihood of any sequence of one or more content attributes following a playlist or partial playlist created by a user. The language model can be used to generate a recommended content attribute sequence based on a partial playlist of one or more content items. A recommended content item sequence that will be pleasant to a user when added to the partial playlist can be selected based on the recommended content attribute sequence.
    Type: Grant
    Filed: April 10, 2015
    Date of Patent: January 23, 2018
    Assignee: APPLE INC.
    Inventors: Daniel Cartoon, Mark H. Levy
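    The statistical language model over content attributes can be illustrated with a simple bigram counter; a production model would use smoothing and longer contexts, and these names are assumptions rather than the patented method:

    ```python
    from collections import Counter, defaultdict

    def train_bigram_model(playlists):
        """Count how often attribute b follows attribute a across user playlists."""
        counts = defaultdict(Counter)
        for playlist in playlists:
            for a, b in zip(playlist, playlist[1:]):
                counts[a][b] += 1
        return counts

    def recommend_next(model, partial_playlist):
        """Return the attribute most likely to follow the partial playlist."""
        last = partial_playlist[-1]
        if not model[last]:
            return None
        return model[last].most_common(1)[0][0]
    ```
    
    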
  • Patent number: 9855497
    Abstract: An immersive play environment platform including techniques describing recognizing non-verbal vocalization gestures from a user is disclosed. A headset device receives audio input from a user. The headset device transmits the audio input to a controller device. The controller device evaluates characteristics of the audio input (e.g., spectral features over a period of time) to determine whether the audio input corresponds to a predefined non-verbal vocalization, such as a humming noise, shouting noise, etc. The controller device may perform an action in response to detecting such non-verbal vocalizations, such as engaging a play object (e.g., an action figure, an action disc) in the play environment.
    Type: Grant
    Filed: January 20, 2015
    Date of Patent: January 2, 2018
    Assignee: Disney Enterprises, Inc.
    Inventors: Michael P. Goslin, Eric C. Haseltine, Joseph L. Olson
  • Patent number: 9832590
    Abstract: Embodiments are described for a method of rendering an audio program by receiving, in a renderer of a playback system, the audio program and a target response representing desired characteristics of the playback environment, deriving a playback environment response based on characteristics of the playback environment, comparing the target response to the playback environment response to generate a set of correction settings, and applying the correction settings to the audio program so that the audio program is rendered according to the characteristics of the target response. The target response may be based on audio characteristics in a creation environment.
    Type: Grant
    Filed: September 8, 2016
    Date of Patent: November 28, 2017
    Assignee: Dolby Laboratories Licensing Corporation
    Inventor: Charles Q. Robinson
  • Patent number: 9767852
    Abstract: Systems and methods determine, identify and/or detect one or more audio mismatches between at least two digital media files by providing a group of digital media files to a computer system as input files. Each digital media file comprises digital audio and digital video signals previously recorded at a same performance by a same artist, and the digital media files are previously synchronized with respect to each other and aligned on a timeline of the same performance and provide a first multi-angle digital video of the same performance. The systems and methods compare audio features based on the audio signals of each digital media file and detect at least one audio mismatch between at least two digital media files of the group based on compared audio features, wherein the at least one audio mismatch is generated by, caused by or based on one or more previously edited digital media files present within the group.
    Type: Grant
    Filed: September 11, 2015
    Date of Patent: September 19, 2017
    Inventor: Frederick Mwangaguhunga
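    Comparing audio features between the pre-aligned tracks can be sketched with short-time energy as the feature; the frame length, the threshold, and the relative-difference measure here are illustrative assumptions, not the claimed feature set:

    ```python
    import numpy as np

    def frame_energy(signal, frame_len):
        """Short-time energy per frame (a simple audio feature)."""
        n = len(signal) // frame_len
        frames = signal[: n * frame_len].reshape(n, frame_len)
        return (frames ** 2).sum(axis=1)

    def find_mismatches(track_a, track_b, frame_len=1024, threshold=0.5):
        """Return frame indices where the two aligned tracks' energy
        profiles diverge by more than `threshold` (relative difference)."""
        ea, eb = frame_energy(track_a, frame_len), frame_energy(track_b, frame_len)
        n = min(len(ea), len(eb))
        denom = np.maximum(ea[:n], eb[:n]) + 1e-12
        rel_diff = np.abs(ea[:n] - eb[:n]) / denom
        return np.nonzero(rel_diff > threshold)[0].tolist()
    ```

    Flagged frames would point at spans where one camera's edited audio diverges from the others on the shared timeline.
    
    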
  • Patent number: 9665895
    Abstract: A computer system receiving audiovisual information, geographic information, and a seller term from a seller computer, said audiovisual information disclosing a seller offer at a seller location captured via said seller computer, said geographic information associated with said location, said audiovisual information associated with said geographic information, said system providing said audiovisual information to a buyer computer for an acceptance of said offer via said buyer computer, said system conditioning said acceptance upon said buyer computer being geographically positioned in compliance with said term based on said geographic information.
    Type: Grant
    Filed: August 12, 2014
    Date of Patent: May 30, 2017
    Assignee: MOV, INC.
    Inventor: Christopher Renwick Alston
  • Patent number: 9495362
    Abstract: According to embodiments of the invention, systems, methods and devices are provided for a plurality of participants speaking different languages to participate in a singing event by using pre-determined song samples of different languages. In one embodiment, a system is provided that includes a storage that identifies songs by using samples from the song. The storage contains a song including both text and melody, wherein the song contains a plurality of versions of different languages. The system also includes devices allowing superiors and subordinates speaking different languages to sing at the same time. The collaboration may then be recorded and stored remotely via a cloud-based server.
    Type: Grant
    Filed: September 2, 2014
    Date of Patent: November 15, 2016
    Inventor: Pui Shan Xanaz Lee