With At Least One Audio Signal Patents (Class 386/285)
-
Patent number: 12260882
Abstract: In one aspect, an example method includes (i) estimating, using a skeletal detection model, a pose of an original actor for each of multiple frames of a video; (ii) obtaining, for each of a plurality of the estimated poses, a respective image of a replacement actor; (iii) obtaining replacement speech in the replacement actor's voice that corresponds to speech of the original actor in the video; (iv) generating, using the estimated poses, the images of the replacement actor, and the replacement speech, synthetic frames corresponding to the multiple frames of the video that depict the replacement actor in place of the original actor, with the synthetic frames including facial expressions for the replacement actor that temporally align with the replacement speech; and (v) combining the synthetic frames and the replacement speech so as to obtain a synthetic video that replaces the original actor with the replacement actor.
Type: Grant
Filed: May 16, 2024
Date of Patent: March 25, 2025
Assignee: Roku, Inc.
Inventors: Sunil Ramesh, Michael Cutter, Karina Levitian
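The claimed pipeline can be sketched in a few lines. This is a hypothetical illustration only: the pose estimator, replacement imagery, speech alignment, and muxing step are all toy stand-ins for models the abstract does not specify.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    index: int
    joints: tuple  # simplified "skeleton" as joint coordinates

def estimate_poses(frames):
    # Stand-in for the skeletal detection model: one pose per frame.
    return [Pose(i, (i, i + 1)) for i, _ in enumerate(frames)]

def synthesize_frames(poses, actor_images, speech_timeline):
    # Pair each pose with a replacement-actor image and a
    # speech-aligned facial expression label.
    out = []
    for p in poses:
        img = actor_images[p.index % len(actor_images)]
        expression = speech_timeline[min(p.index, len(speech_timeline) - 1)]
        out.append((p.index, img, expression))
    return out

def combine(synthetic_frames, speech):
    # Mux the synthetic frames with the replacement speech track.
    return {"frames": synthetic_frames, "audio": speech}

video = ["f0", "f1", "f2", "f3"]
poses = estimate_poses(video)
frames = synthesize_frames(poses, ["imgA", "imgB"],
                           ["neutral", "open", "open", "closed"])
result = combine(frames, "speech.wav")
print(len(result["frames"]))  # one synthetic frame per input frame
```

The point of the sketch is the data flow: poses drive frame synthesis, and expressions are indexed by the same timeline as the replacement speech.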
-
Patent number: 12254903
Abstract: Locations in which a person is depicted within video frames may be determined to identify portions of the video frames to be included in a composite video. A background image not including any depiction of the person may be generated, and the identified portions of the video frames may be inserted into the background image to generate the composite video.
Type: Grant
Filed: May 3, 2023
Date of Patent: March 18, 2025
Assignee: GoPro, Inc.
Inventors: Guillaume Oules, Anais Oules
-
Patent number: 12231717
Abstract: A method for video editing comprises: presenting a template publishing interface based on a target video draft; the template publishing interface comprises an edit track area, the edit track area comprises a plurality of edit tracks, and material track segments formed based on at least part of a material of the target video draft are presented on the plurality of edit tracks; the target video draft comprises a material and edit information, the edit information indicates an edit operation on the material; determining a replaceable material in the target video draft in response to a setting operation on a first target track segment in the material track segments; generating a target video template based on the target video draft and the replaceable material; the target video template is used to achieve a video editing effect of replacing a replaceable material with a filled material based on the target video draft.
Type: Grant
Filed: March 29, 2024
Date of Patent: February 18, 2025
Assignee: Beijing Zitiao Network Technology Co., Ltd.
Inventors: Haowen Zheng, Xiangrui Zeng, Fan Wu, Qizhi Zhang
-
Patent number: 12155962
Abstract: An image pickup apparatus which reduces variations in volume of voice data recorded by a voice memo function without increasing the number of components therein. The image pickup apparatus, having a first and a second display part, determines a display destination of image data according to a detection result of an eye approach detection part, and performs synthesis processing of adding voice data recorded by a sound collecting member to the image data, wherein in a case where user's eye approach is detected by the eye approach detection part and the image data is displayed on the second display part, when voice recording is started by a user operation, a first sound collecting sensitivity adjustment process of adjusting a sound collecting sensitivity of the sound collecting member is performed.
Type: Grant
Filed: March 31, 2022
Date of Patent: November 26, 2024
Assignee: Canon Kabushiki Kaisha
Inventor: Shinsaku Watanabe
-
Patent number: 12155957
Abstract: The embodiments of the disclosure disclose a video effect processing method and a video effect processing device. The method includes: detecting audio of a video; acquiring a video frame in the video when the audio is detected at a preset rhythm; performing effect processing on the target object in the video frame to obtain an effect processed video frame; and displaying and playing the effect processed video frame.
Type: Grant
Filed: December 4, 2023
Date of Patent: November 26, 2024
Assignee: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD.
Inventors: Xiaoqi Li, Jingjin Zhou
-
Patent number: 12125504
Abstract: A system and a method for automatic video redaction are provided herein. The method may include: receiving an input video comprising a sequence of frames captured by a camera, wherein the input video includes live video obtained directly from the camera, wherein recordation of the video directly from the camera is disabled; performing visual analysis of the input video, to detect portions of the frames of the input video in which one of a plurality of predefined objects or a descriptor thereof is detected; generating a redacted input video by replacing the portions of the frames with new portions of another visual content; and recording the redacted input video on a data storage device, wherein the generating of the redacted input video is carried out by a computer processor, after the input video is captured by the camera and before the recording of the redacted input video on the data storage device.
Type: Grant
Filed: May 13, 2024
Date of Patent: October 22, 2024
Assignee: BriefCam Ltd.
Inventors: Shmuel Peleg, Elhanan Hayim Elboher, Ariel Naim
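The core of the redaction step above (detect predefined objects, replace those regions, then record) can be sketched as follows. The "frames" are simple grids of labels, and the detector and replacement content are hypothetical stand-ins, not the patented analysis.

```python
# Assumed set of predefined sensitive object classes.
REDACT = {"face", "license_plate"}

def detect_regions(frame):
    # Stand-in for visual analysis: return indices of sensitive cells.
    return {i for i, label in enumerate(frame) if label in REDACT}

def redact_frame(frame, replacement="blur"):
    # Replace detected portions with other visual content.
    regions = detect_regions(frame)
    return [replacement if i in regions else label
            for i, label in enumerate(frame)]

def redact_video(frames):
    # Redaction happens after capture and before anything is recorded.
    return [redact_frame(f) for f in frames]

stream = [["sky", "face", "tree"], ["license_plate", "road"]]
redacted = redact_video(stream)
print(redacted)  # sensitive cells replaced before storage
```

The ordering matters to the claim: only the redacted frames ever reach the storage step, since direct recording from the camera is disabled.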
-
Patent number: 12063380
Abstract: A panoramic multimedia streaming server communicatively coupled to numerous client devices of different types enables individual client devices to dynamically select view-regions of interest. A source video signal capturing a panoramic field of view is processed to generate a pure video signal faithfully representing the panoramic field of view. The pure video signal is content filtered to produce multiple view-region-specific video signals corresponding to predefined view regions within the panoramic field of view. Frame samples of the pure video signal are transmitted to a client device to enable selection of a view region of interest. Upon receiving, from the client device, an indication of a view region of interest, a controller selects a specific predefined view region according to proximity of each predefined view region to the indicated view region of interest. A view-region-specific video signal corresponding to the specific predefined view region is streamed to the client device.
Type: Grant
Filed: July 4, 2021
Date of Patent: August 13, 2024
Inventor: Jean Mayrand
-
Patent number: 12019985
Abstract: Systems, apparatuses, and methods are described herein for providing language-level content recommendations to users based on an analysis of closed captions of content viewed by the users and other data. Language-level analysis of content viewed by a user may be performed to generate metrics that are associated with the user. The metrics may be used to provide recommendations for content, which may include advertising, that is closely aligned with the user's interests.
Type: Grant
Filed: January 25, 2022
Date of Patent: June 25, 2024
Assignee: Comcast Cable Communications, LLC
Inventor: Richard Walsh
-
Patent number: 11984140
Abstract: A matching method, a terminal and a non-transitory computer-readable storage medium are provided. The matching method includes extracting audio clips from an integrated video clip, the integrated video clip being obtained by integrating a plurality of original video clips. The matching method further includes acquiring recognition data of the audio clips, the recognition data including subtitle data, a start time of the subtitle data and an end time of the subtitle data. The matching method further includes matching the subtitle data to the integrated video clip based on the start time and the end time of the recognition data, to obtain a recommended video.
Type: Grant
Filed: February 22, 2022
Date of Patent: May 14, 2024
Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
Inventor: Henggang Wu
-
Patent number: 11978484
Abstract: Various embodiments facilitate the creation and presentation of a virtual experience. In one embodiment, the virtual experience is assembled from user model data corresponding to a three-dimensional representation of a user, user movement data corresponding to at least one movement characteristic of the user, user voice data corresponding to at least one vocal characteristic of the user, environment data corresponding to a three-dimensional representation of a location, and event data corresponding to a captured event at the location. The virtual experience is a virtual recreation of the captured event at the location, with the three-dimensional representation of the user, the vocal characteristic of the user, and the movement characteristic of the user inserted into the captured event.
Type: Grant
Filed: October 21, 2021
Date of Patent: May 7, 2024
Assignee: DISH Technologies L.L.C.
Inventors: Nicholas Brandon Newell, Swapnil Anil Tilaye, Carlos Garcia Navarro
-
Patent number: 11964357
Abstract: A conditioner of a chemical mechanical polishing (CMP) apparatus includes a disk to polish a polishing pad of the CMP apparatus, a driver to rotate the disk, a lifter to lift the driver, an arm to rotate the lifter, and a connector to connect the driver to the lifter, the driver being tiltable with respect to the lifter.
Type: Grant
Filed: July 5, 2022
Date of Patent: April 23, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Seungchul Han, Yonghee Lee, Taemin Earmme, Byoungho Kwon, Kuntack Lee
-
Patent number: 11942116
Abstract: In one aspect, an example method includes (i) obtaining a set of user attributes for a user of a content-presentation device; (ii) based on the set of user attributes, obtaining structured data and determining a textual description of the structured data; (iii) transforming, using a text-to-speech engine, the textual description of the structured data into synthesized speech; and (iv) generating, using the synthesized speech and for display by the content-presentation device, a synthetic video of a targeted advertisement comprising the synthesized speech.
Type: Grant
Filed: May 17, 2023
Date of Patent: March 26, 2024
Assignee: Roku, Inc.
Inventors: Sunil Ramesh, Michael Cutter, Charles Brian Pinkerton, Karina Levitian
-
Patent number: 11936940
Abstract: A user system for rendering accessibility enhanced content includes processing hardware, a display, and a memory storing software code. The processing hardware executes the software code to receive primary content from a content distributor and determine whether the primary content is accessibility enhanced content including an accessibility track. When the primary content omits the accessibility track, the processing hardware executes the software code to perform a visual analysis, an audio analysis, or both, of the primary content, generate, based on the visual analysis and/or the audio analysis, the accessibility track to include at least one of a sign language performance or one or more video tokens configured to be played back during playback of the primary content, and synchronize the accessibility track to the primary content. The processing hardware also executes the software code to render, using the display, the primary content or the accessibility enhanced content.
Type: Grant
Filed: May 3, 2022
Date of Patent: March 19, 2024
Assignee: Disney Enterprises, Inc.
Inventors: Mark Arana, Katherine S. Navarre, Michael A. Radford, Joseph S. Rice, Noel Brandon Vasquez
-
Patent number: 11915123
Abstract: Embodiments relate to a system, program product, and method for employing deep learning techniques to fuse data across modalities. A multi-modal data set is received, including a first data set having a first modality and a second data set having a second modality, with the second modality being different from the first modality. The first and second data sets are processed, including encoding the first data set into one or more first vectors, and encoding the second data set into one or more second vectors. The processed multi-modal data set is analyzed, and the encoded features from the first and second modalities are iteratively and asynchronously fused. The fused modalities include combined vectors from the first and second data sets representing correlated temporal behavior. The fused vectors are then returned as output data.
Type: Grant
Filed: November 14, 2019
Date of Patent: February 27, 2024
Assignee: International Business Machines Corporation
Inventors: Xuan-Hong Dang, Syed Yousaf Shah, Petros Zerfos, Nancy Anne Greco
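The encode-then-fuse flow above can be illustrated with toy stand-ins; the windowed-average "encoder" and the element-wise fusion below are assumptions for illustration, not the patent's deep-learning models.

```python
def encode(series, dim=3):
    # Toy encoder: sliding-window averages as a fixed-length vector.
    step = max(1, len(series) // dim)
    return [sum(series[i:i + step]) / step
            for i in range(0, step * dim, step)]

def fuse(vec_a, vec_b):
    # Combine per-dimension features from the two modalities into one
    # vector representing their joint (here, additive) behavior.
    return [a + b for a, b in zip(vec_a, vec_b)]

# Two modalities with different lengths, each encoded to the same dim.
audio_series = [1, 2, 3, 4, 5, 6]
text_series = [10, 20, 30]
fused = fuse(encode(audio_series), encode(text_series))
print(fused)
```

Encoding each modality to a common dimensionality is what makes the per-dimension fusion step well defined, mirroring the first-vectors/second-vectors structure in the abstract.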
-
Patent number: 11900682
Abstract: A method for video clip extraction includes: obtaining a video, and sampling the video to obtain N video frames, wherein N is a positive integer; inputting the N video frames to a pre-trained frame feature extraction model to obtain a feature vector of each video frame in N video frames; determining scores of the N video frames based on a pre-trained scoring model; and extracting target video clips from the video based on the scores of the N video frames.
Type: Grant
Filed: April 26, 2021
Date of Patent: February 13, 2024
Assignee: BEIJING XIAOMI PINECONE ELECTRONICS CO., LTD.
Inventors: Jiagao Hu, Fei Wang, Pengfei Yu, Daiguo Zhou
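A minimal sketch of the sample/featurize/score/extract flow, with fake models in place of the pre-trained feature extractor and scoring model the abstract assumes:

```python
def sample_frames(video, n):
    # Uniformly sample up to n frames from the video.
    step = max(1, len(video) // n)
    return video[::step][:n]

def features(frame):
    # Stand-in for the frame feature extraction model.
    return [len(frame)]

def score(vec):
    # Stand-in for the pre-trained scoring model.
    return sum(vec)

def extract_clips(video, n, top_k=2):
    # Rank sampled frames by score; keep the top_k as target clips.
    frames = sample_frames(video, n)
    ranked = sorted(frames, key=lambda f: score(features(f)), reverse=True)
    return ranked[:top_k]

video = ["a", "bb", "cccc", "ddd", "e", "ffffff"]
print(extract_clips(video, n=6))
```

In a real system the "clips" would be contiguous frame ranges around high-scoring frames rather than the frames themselves; the sketch only shows the ranking step.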
-
Patent number: 11882244
Abstract: The embodiments of the disclosure disclose a video special effect processing method and a video special effect processing device. The method includes: detecting music played along with a video during a process of playing the video; acquiring a video frame image to be played in the video when the music is detected to be played at a preset rhythm; performing special effect processing on the target object in the video frame image to obtain a special effect processed video frame image; and displaying and playing the special effect processed video frame image.
Type: Grant
Filed: June 24, 2022
Date of Patent: January 23, 2024
Inventors: Xiaoqi Li, Jingjin Zhou
-
Patent number: 11817131
Abstract: A graphical user interface for indicating video editing decisions may include an inclusion marker element, an exclusion marker element, and a selection marker element. The inclusion marker element may indicate a segment of a video that has been marked for inclusion in a video edit. The exclusion marker element may indicate a segment of the video has been marked for exclusion from the video edit. The selection marker element may indicate a segment of the video has been selected for inclusion in the video edit.
Type: Grant
Filed: January 17, 2023
Date of Patent: November 14, 2023
Assignee: GoPro, Inc.
Inventors: Thomas Achddou, Renaud Cousin, Nicolas Duponchel, Jean-Baptiste Noel
-
Patent number: 11778138
Abstract: An electronic device can be synchronized with a broadcast of a live sporting event to obtain supplemental sports data over a data network from a server storing data associated with the live sporting event. Supplemental sports data is obtained from the server for display on the electronic device following a triggering activity associated with the broadcast of the live sporting event. Supplemental sports data can be transmitted for rendering on a display associated with the electronic device. Supplemental sports data can include display of an instant replay video of a sports athlete combined with audio of a pre-recorded statement by the sports athlete associated with the instant replay video, an announcement of a score change for a sporting event monitored by the electronic device, and a display of a football widget providing updates on football game status (e.g., possession, ball location, current score) monitored by the electronic device.
Type: Grant
Filed: October 27, 2021
Date of Patent: October 3, 2023
Assignee: STRIPE, INC.
Inventors: Anthony F. Verna, Luis M. Ortiz
-
Patent number: 11756528
Abstract: A system for generating videos uses a domain-specific instructional language and a video rendering engine that produces videos against a digital product which changes and evolves over time. The video rendering engine uses the instructions in an instruction markup document written in the domain-specific instructional language to generate a video while navigating a web-based document (which is different from the instruction markup document) representing the digital product for which the video is generated. The video rendering engine navigates the web-based document, coupled with the instruction markup document, which explains the operations to be performed on the web-based document. The instruction markup document also identifies the special effects that manipulate the underlying product in real-time, includes the spoken text for generating subtitles, and provides formalized change management by design.
Type: Grant
Filed: April 13, 2021
Date of Patent: September 12, 2023
Assignee: Videate, Inc.
Inventor: David Christian Gullo
-
Patent number: 11736654
Abstract: Methods, apparatus and systems related to production of a movie, a TV show or a multimedia content are described. In one example aspect, a system for producing a movie includes a pre-production subsystem configured to receive information about a storyline, cameras, cast, and other assets for the movie from a user. The pre-production subsystem is configured to generate one or more machine-readable scripts corresponding to one or more scenes. The system includes a production subsystem configured to receive the scripts from the pre-production system to determine constraints among the scenes. The production subsystem is further configured to determine actions for the cameras and the cast for obtaining footages according to the storyline.
Type: Grant
Filed: July 17, 2020
Date of Patent: August 22, 2023
Assignee: WeMovie Technologies
Inventors: Xidong Wu, Xiubo Wu
-
Patent number: 11670284
Abstract: Systems and methods are disclosed herein for detecting dubbed speech in a media asset and receiving metadata corresponding to the media asset. The systems and methods may determine a plurality of scenes in the media asset based on the metadata, retrieve a portion of the dubbed speech corresponding to the first scene, and process the retrieved portion of the dubbed speech corresponding to the first scene to identify a speech characteristic of a character featured in the first scene. Further, the systems and methods may determine whether the speech characteristic of the character featured in the first scene matches the context of the first scene, and if the match fails, perform a function to adjust the portion of the dubbed speech so that the speech characteristic of the character featured in the first scene matches the context of the first scene.
Type: Grant
Filed: September 21, 2021
Date of Patent: June 6, 2023
Assignee: Rovi Guides, Inc.
Inventors: Mario Sanchez, Ashleigh Miller, Paul T. Stathacopoulos
-
Patent number: 11665391
Abstract: A signal processing device includes an input interface, an image processor, an audio processor, and a controller. The input interface receives signals of a video and an audio acquired concurrently in a space where subjects exist. The image processor recognizes subject images in the video, to determine a first type of area where each subject exists. The audio processor recognizes sound sources in the audio, to determine a second type of area where each sound source exists in the space, independently of the first type of area. The controller uses the first and second types of areas to judge coincidence or non-coincidence between a position of the each subject and a position of the each sound source, to determine a combination of a subject and a sound source whose positions coincide with each other. The controller selectively determines the subject image to be output that corresponds to the combination.
Type: Grant
Filed: January 8, 2022
Date of Patent: May 30, 2023
Assignee: Panasonic Intellectual Property Management Co., Ltd.
Inventors: Hiroki Kasugai, Shinichi Yamamoto
-
Patent number: 11581018
Abstract: There are provided methods and systems for media processing, comprising: providing at least one media asset source selected from a media asset sources library, the at least one media asset source comprising at least one source video, via a network to a client device; receiving via the network or the client device a media recording comprising a client video recorded by a user of the client device; transcoding the at least one source video and the client video which includes parsing the client video and the source video, respectively, to a plurality of client video frames and a plurality of source video frames based on the matching; segmenting one or more frames of the plurality of source video frames to one or more character frames; detecting one or more face images in one or more frames of the plurality of client video frames and provide face markers; resizing the one or more character frames according to the face markers; compositing the resized character frames with the background frames using one or more bl
Type: Grant
Filed: September 3, 2021
Date of Patent: February 14, 2023
Assignee: FUSIT, INC.
Inventor: Michal Shafir Nir
-
Patent number: 11462248
Abstract: A playback of a video may be generated to include accompaniment of music. For parts of the video that includes voice, an instrumental version of the music may be used. For parts of the video that does not include voice, a singing version of the music may be used.
Type: Grant
Filed: June 29, 2021
Date of Patent: October 4, 2022
Assignee: GoPro, Inc.
Inventors: Guillaume Oulès, Anais Oulès
-
Patent number: 11456017
Abstract: In one general aspect, a method can include receiving a video loop portion included in a video file and receiving an audio loop portion included in an audio file. The method can include analyzing at least a portion of the audio file based on a musical characteristic and identifying a plurality of segment locations within the audio file based on the analyzing, where the plurality of segment locations define a plurality of audio segments of the audio file. The method can also include modifying the video loop portion based on one of the plurality of segment locations in the audio file.
Type: Grant
Filed: September 22, 2020
Date of Patent: September 27, 2022
Assignee: Twitter, Inc.
Inventors: Richard J. Plom, Jason J. Mante, Ryan Swigart, Mikhail Kaplinskiy
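One natural reading of "modifying the video loop portion based on one of the plurality of segment locations" is snapping a loop boundary to an audio segment boundary. The sketch below assumes segment locations are given in seconds and already beat-aligned; the musical analysis that produces them is out of scope.

```python
def nearest_segment(locations, t):
    # Find the segment location closest to time t.
    return min(locations, key=lambda loc: abs(loc - t))

def snap_loop(video_loop, segment_locations):
    # Move the loop's end point to the nearest audio segment boundary.
    start, end = video_loop
    return (start, nearest_segment(segment_locations, end))

# Assumed beat-aligned segment locations, in seconds.
beats = [0.0, 0.5, 1.0, 1.5, 2.0]
print(snap_loop((0.0, 1.3), beats))  # loop end moves to a beat
```

Snapping the loop boundary this way makes the video loop restart on a musical boundary instead of mid-beat.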
-
Patent number: 11395088
Abstract: A method comprising: causing analysis of a portion of a visual scene; causing modification of a first sound object to modify a spatial extent of the first sound object in dependence upon the analysis of the portion of the visual scene corresponding to the first sound object; and causing rendering of the visual scene and the corresponding sound scene including of the modified first sound object with modified spatial extent.
Type: Grant
Filed: September 8, 2020
Date of Patent: July 19, 2022
Assignee: NOKIA TECHNOLOGIES OY
Inventors: Antti Eronen, Jussi Leppänen, Francesco Cricri, Arto Lehtiniemi
-
Patent number: 11355155
Abstract: System and method to summarize one or more videos are provided. The system includes a data receiving module configured to receive videos; a video analysis module configured to analyse the one or more videos to generate one or more transcription text output; a building block data module configured to create a building block model and to apply the building block model on analysed videos; a video presentation module configured to present contents of the videos using elements and to present the one or more transcription texts; a video prioritization module configured to generate one or more ranking formulas for the videos, to prioritize building block models, upon receiving feedback from users, based on contents and transcription texts; a video summarization module configured to generate a video summary; a video action module configured to choose an action to be performed on the videos based on the feedback received from the corresponding users.
Type: Grant
Filed: May 11, 2021
Date of Patent: June 7, 2022
Assignee: CLIPr Co.
Inventors: Humphrey Chen, Cindy Chin, Aaron Sloman
-
Patent number: 11303970
Abstract: Systems and methods are disclosed for delivering video content over a network, such as the Internet. Videos are identified and pre-processed by a web service and then separated into a plurality of segments. Based on user interest, video segments may be pre-fetched and stored by a client associated with a user. Upon receiving a selection from a user to play a video, the first video segment may begin playing instantaneously from a local cache. While the first video segment plays, subsequent video segments are transmitted from the web service to the client, so that the subsequent video segments will be ready for viewing at the client when playback of the first video segment has finished.
Type: Grant
Filed: January 21, 2020
Date of Patent: April 12, 2022
Assignee: Verizon Patent and Licensing Inc.
Inventors: Peter F. Kocks, Rami El Mawas, Ping-Hao Wu
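The cache-first playback described above can be sketched with an in-memory stand-in for both the local cache and the network; the `Client` class and its method names are hypothetical.

```python
class Client:
    def __init__(self, cache):
        self.cache = cache  # pre-fetched segments keyed by (video, index)
        self.played = []

    def fetch(self, video, index):
        # Stand-in for a network request to the web service.
        return f"{video}-seg{index}"

    def play(self, video, total_segments):
        # Segment 0 plays instantly from the local cache when pre-fetched;
        # subsequent segments are fetched while playback proceeds.
        first = self.cache.get((video, 0)) or self.fetch(video, 0)
        self.played.append(first)
        for i in range(1, total_segments):
            self.played.append(self.fetch(video, i))

c = Client(cache={("v1", 0): "v1-seg0(cached)"})
c.play("v1", 3)
print(c.played)
```

The design point is latency hiding: only the first segment must be resident locally for playback to start instantly, which keeps the pre-fetch storage cost per video small.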
-
Patent number: 11276434
Abstract: Systems and methods for generating individualized content trailers. Content such as a video is divided into segments each representing a set of common features. With reference to a set of stored user preferences, certain segments are selected as aligning with the user's interests. Each selected segment may then be assigned a label corresponding to the plot portion or element to which it belongs. A coherent trailer may then be assembled from the selected segments, ordered according to their plot elements. This allows a user to see not only segments containing subject matter that aligns with their interests, but also a set of such segments arranged to give the user an idea of the plot, and a sense of drama, increasing the likelihood of engagement with the content.
Type: Grant
Filed: November 17, 2020
Date of Patent: March 15, 2022
Assignee: ROVI GUIDES, INC.
Inventors: Jeffry Copps Robert Jose, Mithun Umesh, Sindhuja Chonat Sri
-
Patent number: 11222523
Abstract: A method of establishing a communication path between a first terminal and a second terminal, the method including receiving a first call from the first terminal; answering the first call; generating a call establishment request (CER); establishing a second call to the second terminal; forwarding the CER to the second terminal; receiving an acknowledgement from the second terminal; and connecting the first call and the second call.
Type: Grant
Filed: March 29, 2017
Date of Patent: January 11, 2022
Assignee: CARRIER CORPORATION
Inventors: Ron Johan, Gabriel Daher, Daniel Ming On Wu
-
Method and apparatus for determining background music of a video, terminal device and storage medium
Patent number: 11200278
Abstract: Provided are a method and apparatus for determining background music of a video, a terminal device, and a storage medium. The method includes: acquiring a scene selection instruction, and performing video capturing based on the scene selection instruction; after the video capturing is completed and a music selection instruction for the captured video is detected, displaying a music replacement interface; and determining background music of the captured video according to music selected on the music replacement interface.
Type: Grant
Filed: September 10, 2020
Date of Patent: December 14, 2021
Assignee: BEIJING MICROLIVE VISION TECHNOLOGY CO., LTD
Inventor: Yu Song
-
Patent number: 11144764
Abstract: Systems, methods, and storage media for selecting video portions for a video synopsis of streaming video content are disclosed. Exemplary implementations may: extract at least a portion of an audio track from a live stream of video content over time to create an audio file; convert the audio file from a time domain to a frequency domain; generate a visual representation of the spectrum of frequencies of the audio signal as it varies with time; apply a classification algorithm to the visual representation to generate interest probability scores for portions of the audio signal; select portions of the audio signal that meet or exceed a threshold probability score; correlate the selected portions of the audio signal to corresponding segments of the video content that has been streamed; and select the corresponding segments of the video content for inclusion in the synopsis.
Type: Grant
Filed: September 30, 2020
Date of Patent: October 12, 2021
Assignee: CBS Interactive Inc.
Inventor: Marc Sharma
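The final two steps (thresholding interest scores and mapping audio portions back to video segments) can be sketched on their own. Here the per-window scores stand in for the spectrogram classifier's output, which is not modeled.

```python
def select_segments(window_scores, threshold, seconds_per_window):
    # Keep audio windows whose interest score meets the threshold and
    # map each back to its (start, end) span in the streamed video.
    selected = []
    for i, score in enumerate(window_scores):
        if score >= threshold:
            start = i * seconds_per_window
            selected.append((start, start + seconds_per_window))
    return selected

# Assumed classifier output, one score per 10-second audio window.
scores = [0.1, 0.8, 0.4, 0.9]
print(select_segments(scores, threshold=0.5, seconds_per_window=10))
```

Because audio windows and video segments share a timeline, the correlation step is just index arithmetic once the window length is fixed.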
-
Patent number: 11127232
Abstract: A system and method are presented for combining visual recordings from a camera, audio recordings from a microphone, and behavioral data recordings from behavioral sensors, during a panel discussion. Cameras and other sensors can be assigned to specific individuals or can be used to create recordings from multiple individuals simultaneously. Separate recordings are combined and time synchronized, and portions of the synchronized data are then associated with the specific individuals in the panel discussion. Interactions between participants are determined by examining the individually assigned portions of the time synchronized recordings. Events are identified in the interactions and then recorded as separate event data associated with individuals.
Type: Grant
Filed: November 26, 2019
Date of Patent: September 21, 2021
Assignee: On Time Staffing Inc.
Inventor: Roman Olshansky
-
Patent number: 10939084
Abstract: Disclosed is an approach for displaying 3D videos in a VR and/or AR system. The 3D videos may include 3D animated objects that escape from the display screen. The 3D videos may interact with objects within the VR and/or AR environment. The 3D video may be interactive with a user such that, based on user input corresponding to decisions elected by the user at certain portions of the 3D video, a different storyline and possibly a different conclusion may result for the 3D video. The 3D video may be a 3D icon displayed within a portal of a final 3D render world.
Type: Grant
Filed: December 19, 2018
Date of Patent: March 2, 2021
Assignee: Magic Leap, Inc.
Inventors: Praveen Babu J D, Sean Christopher Riley
-
Patent number: 10904123
Abstract: A route tracing request packet is generated comprising a time-to-live value, a source address of a source of the route tracing request packet, and an address of a destination of the route tracing request packet. The source and destination are in the virtual network; the route tracing request packet is usable to identify the virtual appliance, and the virtual appliance is configured to examine the route tracing request packet for a time-to-live value indicating that the route tracing request packet has expired and sending a time-to-live exceeded message to the source address. The time-to-live exceeded message comprises an identifier for the virtual appliance. The route tracing request packet is forwarded to the destination. The time-to-live exceeded message is received. Data is extracted to determine network virtual appliances that were traversed by the route tracing request packet prior to expiration of the time-to-live. The network virtual appliances are reported.
Type: Grant
Filed: May 31, 2019
Date of Patent: January 26, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Rishabh Tewari, Michael Czeslaw Zygmunt, Madhan Sivakumar, Deepak Bansal, Shefali Garg
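The TTL-expiry mechanism the abstract relies on is classic traceroute logic: probes with increasing time-to-live values expire at successive hops, and each expiry produces a "time-to-live exceeded" reply naming the appliance. A minimal sketch, modeling the virtual network as an ordered list of hop identifiers:

```python
def trace_route(hops, destination):
    # Probe with TTL = 1, 2, 3, ...; the packet expires at hop ttl-1,
    # whose TTL-exceeded reply carries that appliance's identifier.
    discovered = []
    for ttl in range(1, len(hops) + 1):
        expires_at = hops[ttl - 1]  # hop where TTL reaches zero
        if expires_at == destination:
            break  # probe reached the destination; tracing is complete
        discovered.append(expires_at)
    return discovered

path = trace_route(["fw1", "lb1", "dest"], "dest")
print(path)  # appliances traversed before the destination
```

In the virtualized setting the key difference from plain ICMP traceroute is that each reply's identifier names a virtual appliance, so the source can reconstruct the appliance chain rather than just IP hops.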
-
Patent number: 10860190
Abstract: Techniques for presenting and interacting with composite images on a computing device are described. In an example, the device presents a first article and a first portion of a first composite image showing a second article. The first composite image shows a first outfit combination that is different from the first outfit. The device receives a first user interaction indicating a request to change the second article and presents the first article and a second portion of a second composite image showing a clothing article in a second outfit. The second composite image shows a second outfit combination that is different from the first outfit, the second outfit, and the first outfit combination. The device receives a second user interaction indicating a request to use the third article and presents the second composite image showing the second outfit combination.
Type: Grant
Filed: April 22, 2019
Date of Patent: December 8, 2020
Assignee: Amazon Technologies, Inc.
Inventors: Nicholas Robert Ditzler, Lee David Thompson, Devesh Sanghvi, Hilit Unger, Moshe Bouhnik, Siddharth Jacob Thazhathu, Anton Fedorenko
-
Patent number: 10783930Abstract: A display control device includes an assigning unit and a display control unit. The assigning unit assigns, with respect to S number of units of display (where S is an integer equal to or greater than two) included in a display area in which units of display having the width equal to L number of pixels (where L is an integer equal to or greater than one) are placed in the width direction, M number of sets of data (where M is an integer greater than S) in a divided manner. The display control unit controls display of the units of display in different display formats according to the number of sets of data of a particular type included in the assigned data.Type: GrantFiled: January 25, 2017Date of Patent: September 22, 2020Assignee: KABUSHIKI KAISHA TOSHIBAInventor: Shinichiro Hamada
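The assignment step above (M sets of data divided across S units of display, with M greater than S) can be sketched as a round-robin split; the even division below is one plausible reading for illustration, not the claimed assignment method:

```python
def assign(data_sets, num_units):
    """Divide M data sets across S display units (M > S) round-robin.

    Each display unit receives roughly M / S of the data sets; the
    display format of a unit can then be chosen from the number of
    sets of a particular type it was assigned.
    """
    units = [[] for _ in range(num_units)]
    for i, item in enumerate(data_sets):
        units[i % num_units].append(item)  # i-th set goes to unit i mod S
    return units

units = assign(list(range(7)), 3)  # M=7 sets of data, S=3 units of display
```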
-
Patent number: 10643366Abstract: Implementations are directed to methods, systems, apparatus, and computer programs for generation of a three-dimensional (3D) animation by receiving a user input defining a two-dimensional (2D) representation of a plurality of elements, processing, by the one or more processors, the 2D representation to classify the plurality of elements into symbolic elements and action elements, generating, by the one or more processors, based on the symbolic elements, the action elements, and a set of rules, a 3D animation corresponding to the 2D representation, and transmitting, by the one or more processors, the 3D animation to an extended reality device for display.Type: GrantFiled: December 3, 2019Date of Patent: May 5, 2020Assignee: Accenture Global Solutions LimitedInventors: Matthew Thomas Short, Robert Dooley, Grace T. Cheng, Sunny Webb, Mary Elizabeth Hamilton
-
Patent number: 10573349Abstract: Systems, methods, and non-transitory computer readable media can obtain a first image of a first user depicting a face of the first user with a neutral expression or position. A first image of a second user depicting a face of the second user with a neutral expression or position can be identified, wherein the face of the second user is similar to the face of the first user based on satisfaction of a threshold value. A second image of the first user depicting the face of the first user with an expression different from the neutral expression or position can be generated based on a second image of the second user depicting the face of the second user with an expression or position different from the neutral expression or position.Type: GrantFiled: December 28, 2017Date of Patent: February 25, 2020Assignee: Facebook, Inc.Inventors: Fernando De la Torre, Dong Huang, Francisco Vicente Carrasco
-
Patent number: 10535172Abstract: Implementations are directed to methods, systems, apparatus, and computer programs for generation of a three-dimensional (3D) animation by receiving a user input defining a two-dimensional (2D) representation of a plurality of elements, processing, by the one or more processors, the 2D representation to classify the plurality of elements into symbolic elements and action elements, generating, by the one or more processors, based on the symbolic elements, the action elements, and a set of rules, a 3D animation corresponding to the 2D representation, and transmitting, by the one or more processors, the 3D animation to an extended reality device for display.Type: GrantFiled: December 11, 2018Date of Patent: January 14, 2020Assignee: Accenture Global Solutions LimitedInventors: Matthew Thomas Short, Robert Dooley, Grace T. Cheng, Sunny Webb, Mary Elizabeth Hamilton
-
Patent number: 10491947Abstract: Generally, systems and methods for generating personalized videos are disclosed. The systems and methods can enable parallel and/or on-demand and/or video portions-based rendering of a plurality of personalized videos for a plurality of recipients. The disclosed systems and methods can significantly reduce the amount of processing resources and the amount of time required to render the personalized videos, as compared to currently available systems and methods.Type: GrantFiled: December 26, 2018Date of Patent: November 26, 2019Assignee: XMPIE (ISRAEL) LTD.Inventors: Hanan Weisman, Galit Zwickel, Amit Cohen
-
Patent number: 10469968Abstract: In general, techniques are described for adapting higher order ambisonic audio data to include three degrees of freedom plus effects. An example device configured to perform the techniques includes a memory, and a processor coupled to the memory. The memory may be configured to store higher order ambisonic audio data representative of a soundfield. The processor may be configured to obtain a translational distance representative of a translational head movement of a user interfacing with the device. The processor may further be configured to adapt, based on the translational distance, higher order ambisonic audio data to provide three degrees of freedom plus effects that adapt the soundfield to account for the translational head movement, and generate speaker feeds based on the adapted higher order ambisonic audio data.Type: GrantFiled: October 12, 2017Date of Patent: November 5, 2019Assignee: Qualcomm IncorporatedInventors: Nils Günther Peters, S M Akramus Salehin, Shankar Thagadur Shivappa, Moo Young Kim, Dipanjan Sen
-
Patent number: 10453497Abstract: An information processing apparatus includes a receiving unit that receives, during or after reproduction of a video, a predetermined operation with respect to the video, an associating unit that associates the received operation with a reproduction location where the received operation has been generated in the video, and a setting unit that sets, in response to the received operation, an importance degree of the reproduction location associated with the received operation.Type: GrantFiled: March 12, 2018Date of Patent: October 22, 2019Assignee: FUJI XEROX CO., LTD.Inventor: Mai Suzuki
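A minimal sketch of associating operations with reproduction locations and deriving an importance degree. Bucketing the timeline and counting operations per bucket is an illustrative assumption, not the patented setting rule:

```python
def importance_degrees(operation_times, bucket_seconds=10):
    """Map each reproduction location (bucketed) to an importance degree.

    operation_times: playback timestamps (seconds) at which the viewer
    performed the predetermined operation during or after reproduction.
    Each operation raises the importance degree of its location.
    """
    degrees = {}
    for t in operation_times:
        # Associate the operation with the reproduction location
        # (here: the start of its 10-second bucket).
        bucket = int(t // bucket_seconds) * bucket_seconds
        degrees[bucket] = degrees.get(bucket, 0) + 1
    return degrees

degrees = importance_degrees([3.0, 7.5, 12.0, 14.2, 95.0])
```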
-
Patent number: 10425613Abstract: An electronic device can be synchronized with a broadcast of a live sporting event to obtain supplemental sports data over a data network from a server storing data associated with the live sporting event. Supplemental sports data is obtained from the server for display on the electronic device following a triggering activity associated with the broadcast of the live sporting event. Supplemental sports data can be transmitted for rendering on a display associated with the electronic device. Supplemental sports data can include display of an instant replay video of a sports athlete combined with audio of a pre-recorded statement by the sports athlete associated with the instant replay video, an announcement of a score change for a sporting event monitored by the electronic device, and a display of a football widget providing updates on football game status (e.g., possession, ball location, current score) monitored by the electronic device.Type: GrantFiled: May 1, 2015Date of Patent: September 24, 2019Assignee: CRIA, INC.Inventors: Anthony F. Verna, Luis M. Ortiz
-
Patent number: 10423659Abstract: Disclosed subject matter relates to digital media, including a method and system for generating contextual audio related to an image. An audio generating system may determine the scene-theme and viewer theme of a scene in the image. Further, audio files matching the scene-objects and the contextual data may be retrieved in real time, and relevant audio files may be identified from those audio files based on the relationship between the scene-theme, scene-objects, viewer theme, contextual data, and metadata of the audio files. A contribution weightage may be assigned to the relevant and substitute audio files based on the contextual data, and the files may be correlated based on the contribution weightage, thereby generating the contextual audio related to the image. The present disclosure provides a feature wherein the contextual audio generated for an image may provide a holistic audio effect in accordance with the context of the image, thus recreating the audio that might have been present when the image was captured.Type: GrantFiled: August 17, 2017Date of Patent: September 24, 2019Assignee: Wipro LimitedInventors: Adrita Barari, Manjunath Ramachandra, Ghulam Mohiuddin Khan, Sethuraman Ulaganathan
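The contribution-weightage step can be sketched as a normalized weighted mix of the selected audio files. The data layout and the linear mixing rule are assumptions for illustration, not the claimed correlation method:

```python
def mix_weighted(audio_files):
    """Combine per-file samples using normalized contribution weightages.

    audio_files: list of (weightage, samples) pairs, where every
    samples list has the same length. Weightages are normalized so the
    contributions sum to the full output signal.
    """
    total = sum(weight for weight, _ in audio_files)
    length = len(audio_files[0][1])
    mixed = [0.0] * length
    for weight, samples in audio_files:
        share = weight / total  # normalized contribution weightage
        for i, sample in enumerate(samples):
            mixed[i] += share * sample
    return mixed

# A "relevant" file weighted 3x against a "substitute" file weighted 1x.
mixed = mix_weighted([(3.0, [1.0, 1.0]), (1.0, [0.0, 2.0])])
```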
-
Patent number: 10325397Abstract: Aspects of the present innovations relate to systems and/or methods involving multimedia modules, objects, or animations. According to an illustrative implementation, one method may include accepting at least one input keyword relating to a subject for the animation and performing processing associated with templates. Further, templates may generate different types of output, and each template may include components for display time, screen location, and animation parameters. Other aspects of the innovations may involve providing search results, retrieving data from a plurality of web sites or data collections, assembling information into multimedia modules or animations, and/or providing the module or animation for playback.Type: GrantFiled: March 20, 2017Date of Patent: June 18, 2019Assignee: OATH INC.Inventors: Doug Imbruce, Owen Bossola, Louis Monier, Rasmus Knutsson, Christian Le Cocq
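Each template carries components for display time, screen location, and animation parameters. A sketch of assembling such templates into a playable module for a keyword (all names and the sequential-timeline layout are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Template:
    display_time: float   # seconds this segment stays on screen
    screen_location: tuple  # (x, y) placement of the content
    animation: str        # animation parameters, e.g. "fade-in"

def assemble_module(keyword, templates):
    """Order template outputs into a timeline for the input keyword."""
    timeline, start = [], 0.0
    for tpl in templates:
        timeline.append({
            "keyword": keyword,
            "start": start,                    # when this segment plays
            "location": tpl.screen_location,
            "animation": tpl.animation,
        })
        start += tpl.display_time  # next segment begins when this one ends
    return timeline

module = assemble_module("volcano", [
    Template(2.0, (0, 0), "fade-in"),
    Template(3.5, (10, 20), "pan"),
])
```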
-
Patent number: 10249341Abstract: A method, apparatus and system for synchronizing audiovisual content with inertial outputs for content reproduced on a mobile content device include, in response to a vibration of the mobile content device, receiving a recorded audio signal and a corresponding recorded inertial signal generated by the vibration. The recorded signals are each processed to determine a timestamp for a corresponding peak in each of the recorded signals. A time distance between the timestamp of the recorded audio signal and the timestamp of the recorded inertial signal is determined, and inertial signals for content reproduced on the mobile content device are shifted by an amount of time equal to the determined time distance between the timestamp of the recorded audio signal and the timestamp of the recorded inertial signal.Type: GrantFiled: February 2, 2016Date of Patent: April 2, 2019Assignee: INTERDIGITAL CE PATENT HOLDINGSInventors: Julien Fleureau, Fabien Danieau, Khanh-Duy Le
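The synchronization logic (a peak timestamp in each recorded signal, then a shift by their time distance) can be sketched with plain lists; taking the largest absolute sample as the peak is a simplifying assumption:

```python
def peak_time(samples, sample_rate):
    """Timestamp (seconds) of the largest absolute peak in a signal."""
    peak_index = max(range(len(samples)), key=lambda i: abs(samples[i]))
    return peak_index / sample_rate

def sync_offset(audio, inertial, sample_rate):
    """Seconds to shift the inertial track so its peak matches the audio peak."""
    return peak_time(audio, sample_rate) - peak_time(inertial, sample_rate)

# The same vibration captured by both sensors; the inertial recording
# registers its peak two samples later than the audio recording.
audio    = [0, 0, 0, 0, 9, 0]
inertial = [0, 0, 0, 0, 0, 0, 8, 0]
offset = sync_offset(audio, inertial, sample_rate=100)
```

A negative offset means the inertial signals must be advanced relative to the audio during reproduction.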
-
Patent number: 10198843Abstract: Implementations are directed to methods, systems, apparatus, and computer programs for generation of a three-dimensional (3D) animation by receiving a user input defining a two-dimensional (2D) representation of a plurality of elements, processing, by the one or more processors, the 2D representation to classify the plurality of elements into symbolic elements and action elements, generating, by the one or more processors, based on the symbolic elements, the action elements, and a set of rules, a 3D animation corresponding to the 2D representation, and transmitting, by the one or more processors, the 3D animation to an extended reality device for display.Type: GrantFiled: July 17, 2018Date of Patent: February 5, 2019Assignee: Accenture Global Solutions LimitedInventors: Matthew Thomas Short, Robert Dooley, Grace T. Cheng, Sunny Webb, Mary Elizabeth Hamilton
-
Patent number: 10129586Abstract: Various implementations process a television content stream to detect program boundaries such as the starting point and ending point of the program. In at least some implementations, program boundaries such as intermediate points between the starting point and ending point of the program are also detected. The intermediate points correspond to where a program pauses for secondary content such as an advertisement or advertisements, and then resumes once the secondary content has run. Once program boundaries are detected, primary content is isolated by removing secondary content that occurs before the starting point and after the ending point. In at least some implementations, secondary content that occurs between detected intermediate points is also removed. The primary content is then recorded without secondary content that originally comprised part of the original television content stream.Type: GrantFiled: December 19, 2016Date of Patent: November 13, 2018Assignee: Google LLCInventors: Joon-Hee Jeon, Jason R. Kimball, Benjamin P. Stewart
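Once the starting point, ending point, and intermediate points are detected, isolating the primary content reduces to interval filtering; a sketch with timestamps in seconds (names and the list-of-frames representation are assumptions):

```python
def isolate_primary(stream, start, end, ad_breaks):
    """Keep only frames inside [start, end) and outside any ad break.

    stream: list of (timestamp, frame) pairs from the content stream;
    ad_breaks: (break_start, break_end) intervals between the detected
    intermediate points where secondary content ran.
    """
    def in_secondary(t):
        return any(s <= t < e for s, e in ad_breaks)
    return [(t, frame) for t, frame in stream
            if start <= t < end and not in_secondary(t)]

# A 10-second stream: program runs from t=2 to t=9 with an ad at t=4..6.
stream = [(t, f"frame{t}") for t in range(10)]
primary = isolate_primary(stream, start=2, end=9, ad_breaks=[(4, 6)])
```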
-
Patent number: 10109318Abstract: Various embodiments of the invention provide systems and methods for low bandwidth consumption online content editing, where user-created content comprising high definition/quality content is created or modified at an online content editing server according to instructions from an online content editor client, and where a proxy version of the resulting user-created content is provided to the online content editor client to facilitate review or further editing of the user-created content from the online content editor client. In some embodiments, the online content editing server utilizes proxy content during creation and modification operations on the user-created content, and replaces such proxy content with corresponding higher definition/quality content, possibly when the user-created content is published for consumption, or when the user has paid for the higher quality content.Type: GrantFiled: November 3, 2016Date of Patent: October 23, 2018Assignee: WeVideo, Inc.Inventors: Jostein Svendsen, Bjørn Rustberggaard