Abstract: Embodiments of systems and methods to determine depth of soil coverage for an underground feature along a right-of-way are disclosed. In an embodiment, the method may include receiving a depth of cover measurement for the right-of-way. The method may include capturing baseline images of the right-of-way within a first selected time of the depth of cover measurement. The method may include rendering a three dimensional elevation model of the right-of-way from the baseline images. The method may include georeferencing the three dimensional elevation model to generate a georeferenced three dimensional elevation model. The method may include adding the depth of cover measurement to the georeferenced three dimensional elevation model. The method may include rendering an updated three dimensional elevation model of the right-of-way from subsequently captured images. The method may include determining a delta depth of coverage based on the georeferenced and the updated three dimensional elevation model.
Type:
Grant
Filed:
February 28, 2024
Date of Patent:
September 10, 2024
Assignee:
MARATHON PETROLEUM COMPANY LP
Inventors:
Luke R. Miller, Joshua J. Beard, Brittan Battles
Abstract: The invention provides a haptic feedback data processing method. While a preset application process is running, the haptic feedback data of the actuator is collected in real time. At each reference-data sampling time, all collected haptic feedback data are integrated and processed to generate recorded data, and executable data corresponding to the recorded data are stored. Through the implementation of the present invention, the haptic feedback data generated in real time is recorded and stored while the application runs. Therefore, in an application data playback scenario, the haptic feedback data from the application's run can be output accordingly, providing users with a more immersive recording file playback experience.
Abstract: The present invention provides a method for recording a session of a multimedia file presentation displayed on an external screen device and controlled by one or more control devices, implemented by one or more processors operatively coupled to a non-transitory computer readable storage device, on which are stored modules of instruction code that when executed cause the one or more processors to perform: displaying said multimedia file on the external screen using a designated application; capturing user interaction with the uploaded UI on each mobile device; sending captured interaction commands by at least one of the mobile devices to the external screen device; receiving captured control commands at the external display device over a cloud or P2P connection, associated with a file ID, from each mobile device which established a P2P connection with the external device; executing said instructions based on pre-defined interaction command definitions; recording captured control commands: slides presentation and hovering/pointing act
Abstract: Embodiments herein provide for methods of dividing selected areas of a first video clip having a first composition, e.g., by generating individual video data corresponding to the selected areas, arranging the selected areas to provide a second composition, e.g., by combining the individual video data to generate composite video data corresponding to the second composition, and compiling the composite video data to provide a second video clip having the second composition.
Type:
Grant
Filed:
July 1, 2022
Date of Patent:
June 11, 2024
Assignee:
LOGITECH EUROPE S.A.
Inventors:
Thomas Maneri, Sean Elliot Kaiser, Lev Sokolov, Ashray Sameer Urs
Abstract: A multimedia service providing method according to an embodiment of the present invention may comprise: transmitting video data to a display device; transmitting a request message for requesting connection state information to one or more remote wireless speakers; receiving the connection state information from the one or more remote wireless speakers; separately calculating a delay time, which is a time at which a transmission delay is predicted, on the basis of the received connection state information; transmitting no audio data to a remote wireless speaker having a delay time longer than a preconfigured time from among the one or more remote wireless speakers; and transmitting the audio data to a remote wireless speaker having a delay time shorter than the preconfigured time from among the one or more remote wireless speakers, wherein the preconfigured time is configured to be a maximum allowable synchronization time difference between the video data and the audio data.
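The delay-threshold selection in the abstract above can be sketched as follows. This is an illustrative sketch, not the patented implementation; the function name, the list-of-tuples input, and the millisecond units are all assumptions.

```python
def select_speakers(speakers, max_sync_diff_ms):
    """Split remote wireless speakers into those that receive audio and
    those skipped, based on each speaker's predicted transmission delay.

    `speakers` is a list of (name, predicted_delay_ms) pairs;
    `max_sync_diff_ms` is the preconfigured maximum allowable
    audio/video synchronization difference."""
    receive, skip = [], []
    for name, delay_ms in speakers:
        if delay_ms < max_sync_diff_ms:
            receive.append(name)   # delay within the allowable A/V sync gap
        else:
            skip.append(name)      # would lag the video: send no audio
    return receive, skip

receive, skip = select_speakers(
    [("kitchen", 40), ("patio", 180), ("den", 25)],
    max_sync_diff_ms=100,
)
```

With a 100 ms threshold, only the speakers predicted to stay within the allowable lip-sync gap get the audio stream; the 180 ms speaker is excluded rather than allowed to drift out of sync with the video.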
Abstract: Techniques of providing motion video content along with audio content are disclosed. In some example embodiments, a computer-implemented system is configured to perform operations comprising: receiving primary audio content; determining that at least one reference audio content satisfies a predetermined similarity threshold based on a comparison of the primary audio content with the at least one reference audio content; for each one of the at least one reference audio content, identifying motion video content based on the motion video content being stored in association with the one of the at least one reference audio content and not stored in association with the primary audio content; and causing the identified motion video content to be displayed on a device concurrently with a presentation of the primary audio content on the device.
Abstract: An electronic device according to various embodiments comprises: a first camera; a second camera; a processor that is electrically connected to the first camera and the second camera; a touch screen that is electrically connected to the processor; and a memory that is electrically connected to the processor, wherein the processor may be configured to obtain an initiation command for capturing a first video, to display a plurality of first image frames that are obtained by the first camera onto the touch screen in response to the initiation command for capturing the first video being obtained, to store the first video based on the plurality of first image frames in the memory, to obtain, while capturing the first video, an initiation command for capturing second video with respect to a first object that is included in one or more image frames of the plurality of first image frames through the touch screen, and to store, while capturing the first video, the second video based on a plurality of second image fram
Abstract: An imaging apparatus includes a subject detection unit that detects a subject using an image signal output by an image sensor and a motion vector detection unit that detects a motion vector of the subject from the image signal. A camera control unit performs a process of recognizing a movement pattern using the motion vector of the detected subject and determines a group movement scene if it is determined that a movement of a plurality of subjects is a linear movement and the number of subjects with the same movement pattern is equal to or greater than a predetermined number. The camera control unit sets the subject in the lead in the movement direction among the plurality of subjects performing the linear movement in the determined group movement scene to a tracking target as a main subject.
Abstract: Disclosed herein is a method to facilitate betting in a game. Accordingly, the method may include a step of receiving a game data from a game sensor. Further, the method may include a step of analyzing the game data. Further, the method may include a step of identifying an occurrence of a betting event. Further, the method may include a step of transmitting options. Further, the method may include a step of receiving an option indication. Further, the method may include a step of receiving an actual outcome of the betting event from the game sensor. Further, the method may include a step of comparing the option indication and the actual outcome. Further, the method may include a step of determining a response. Further, the method may include a step of transmitting the response to the participant device. Further, the method may include a step of storing the response.
Abstract: Systems, devices, and methods for detecting and reporting recording status are disclosed. A recording device determines a recording status of the recording device. The recording device receives via wireless communication one or more beacons from one or more other recording devices, the one or more beacons include information regarding a recording status of the one or more other recording devices respectively. The recording device provides information regarding the recording status of the recording device and the one or more other recording devices via a user interface of the recording device.
Abstract: In a robot system, a data amount of operation information (log) to be transferred or recorded is reduced, thereby enabling the operation information (log) to be transferred or recorded with a low load. The system has a control unit for controlling the operation of a robot of a robot device and transfers a log regarding the robot operation to a managing terminal. While making the robot operative, the control unit generates log data regarding the robot operation and stores into a short-term storage log recording unit (temporary storage device) (log data generating step). When log transfer timing comes, a part of the log data stored in the recording unit (temporary storage device) is extracted and transferred as a log to a log storage device in accordance with the robot operation (log transferring step).
Abstract: A video system, a video processing method, a device and a computer readable medium are disclosed. The system includes: a front-end device and a cloud server; the front-end device is configured to collect video stream data, and set a video identifier and a service scenario identifier for the video stream data, upload the video identifier, the video stream data and the service scenario identifier to the cloud server; the cloud server is configured to generate a video file corresponding to the video identifier according to the service scenario identifier, the video identifier and the video stream data; and store the video file.
Type:
Grant
Filed:
December 31, 2019
Date of Patent:
May 3, 2022
Assignee:
BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD
Abstract: Systems and methods for gathering research data using multiple monitoring devices are provided. The systems and methods include a processor to generate message data by supplementing first information collected by a first monitoring device with second information collected by a second monitoring device, the first information and the second information collected from media presented by a monitored device, and process the message data to determine a portion of the media that is encoded and a portion of the media that is unencoded. The example systems and methods also include a memory to store an indication that the message data is valid when the message data represents a code encoded in the media, and store data indicating the portion of the media that is encoded.
Type:
Grant
Filed:
February 4, 2019
Date of Patent:
April 26, 2022
Assignee:
THE NIELSEN COMPANY (US), LLC
Inventors:
Joan G. FitzGerald, Carol J. Frost, Eugene L. Flanagan
Abstract: A method including capturing, by a low latency monitoring device, content visualized in video rendering mode, capturing at least one parameter modified in the video rendering mode, determining at least one correction update message for modifying the captured content based on the at least one captured parameter modified in the video rendering mode, determining a content production stream based on the captured content, sending the content production stream to a receiver device, and sending the at least one correction update message to the receiver device, wherein the at least one correction update message is configured to be used by the receiver device to retroactively fix an audio rendering of the captured content based on aligning the content production stream and the at least one captured parameter modified in the video rendering mode.
Type:
Grant
Filed:
June 6, 2017
Date of Patent:
April 12, 2022
Assignee:
Nokia Technologies Oy
Inventors:
Sujeet Shyamsundar Mate, Arto Juhani Lehtiniemi, Antti Johannes Eronen, Jussi Artturi Leppanen
Abstract: When data generated by devices (30) arrive in the order of the time instants at which the data were generated, the data are appended (written) to an actual data recording unit (12b) in the order of arrival; meanwhile, the time instant at which each piece of data appended to the actual data recording unit (12b) was generated, its data size, and its append position in the actual data file are written, as index information, to an index file (12a) having a file name corresponding to the actual data file in which the data is recorded.
Type:
Grant
Filed:
September 20, 2018
Date of Patent:
March 1, 2022
Assignee:
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
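The append-plus-index scheme in the abstract above can be sketched in a few lines. This is a minimal illustration under assumed conventions (JSON-lines index, one index entry per appended record); the patent does not specify a file format.

```python
import json
import time


def append_record(data_path, index_path, payload: bytes, gen_time=None):
    """Append `payload` to the actual-data file and record its generation
    time, size, and append offset as one line in the companion index file."""
    gen_time = gen_time if gen_time is not None else time.time()
    with open(data_path, "ab") as f:
        offset = f.tell()          # append position of this record
        f.write(payload)
    entry = {"time": gen_time, "size": len(payload), "offset": offset}
    with open(index_path, "a") as f:
        f.write(json.dumps(entry) + "\n")  # index mirrors the data file
    return entry
```

Because each index line stores (time, size, offset), a reader can locate any record in the actual-data file by time without scanning the data file itself.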
Abstract: A multi-camera apparatus includes: a driving base in which a movement path is formed; at least one camera mount, each of which includes a respective camera module mounted therein, and is configured to contact the driving base and move along the movement path; and a shaft provided at a center of the driving base and coupled to the driving base, wherein each of the at least one camera mount is connected to the shaft and configured to move along a circumference of the driving base with the shaft as a rotational center.
Type:
Grant
Filed:
April 7, 2020
Date of Patent:
February 8, 2022
Assignee:
HANWHA TECHWIN CO., LTD.
Inventors:
Dae Kyung Kim, Kil Hwa Hong, Ho Seoung Hwang
Abstract: A magnetic tape reading apparatus including: a reading head which includes a reading unit disposed at a position corresponding to a single track included in a magnetic tape; a controller which controls the reading unit to read data plural times from a specific range of the single track in a running direction of the magnetic tape by a linear scan method; and a synthesis unit which synthesizes a plurality of reproducing signal sequences obtained by reading data plural times from the specific range by the reading unit.
Abstract: Prior art documents give no consideration to how to more faithfully preserve image (here and subsequently also termed “video”) data of larger size during transmission. Provided is an image transmission device for transmission of image data, characterized by having a compression processor for compressing image data, and an output section for outputting the compressed data produced by the compression processor, the output section outputting the compressed data separately during a first interval and a second interval different from the first interval.
Abstract: The present disclosure discloses a method, device and system for synchronously playing a message stream and an audio-video stream, and relates to the field of streaming media live broadcast technology. In the present disclosure, a stream-pulling terminal pulls an audio-video stream from an audio-video server and plays it, and pulls a message stream from a message server and caches the message stream (201). Herein, each audio-video frame in the audio-video stream is supplemented with an audio-video timestamp, each message in the message stream is supplemented with a message timestamp, and the time sources of the audio-video timestamps and the message timestamps are synchronized. The stream-pulling terminal determines, in the cached message stream, the message to be played synchronously with an audio-video frame to be played, in accordance with the audio-video timestamp of the frame and the message timestamp of the message, and plays the message (202).
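The timestamp matching at the heart of the abstract above can be sketched as a lookup into the cached message stream. This is an assumed illustration: the sorted-list cache, the tolerance window, and the function name are not specified by the patent.

```python
import bisect


def messages_for_frame(frame_ts, cached, tolerance):
    """Return cached messages whose message timestamp falls within
    `tolerance` of the audio-video frame timestamp `frame_ts`.

    `cached` is a list of (message_ts, message) pairs sorted by timestamp;
    both timestamps are assumed to come from synchronized time sources."""
    keys = [ts for ts, _ in cached]
    lo = bisect.bisect_left(keys, frame_ts - tolerance)
    hi = bisect.bisect_right(keys, frame_ts + tolerance)
    return [msg for _, msg in cached[lo:hi]]
```

At playback time the terminal would call this once per frame, emitting any message whose timestamp lands inside the frame's window.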
Abstract: Provided is a method for disambiguating an audio component extracted from audiovisual content. Audiovisual content is identified. The audiovisual content includes an audio component and a video component. An ambiguous expression is detected in the audio component. An object referenced by the ambiguous expression is identified in the video component. A verbal description of the object is generated. The verbal description is injected into the audio component to generate a modified audio component.
Type:
Grant
Filed:
April 21, 2020
Date of Patent:
July 27, 2021
Assignee:
International Business Machines Corporation
Inventors:
Jason Malinowski, Swaminathan Balasubramanian, Cheranellore Vasudevan, Thomas G. Lawless, III
Abstract: A system, method and program storage device are provided for automatically associating evidence recorded by a plurality of cameras with a discrete occurrence, including: receiving occurrence data pertaining to the discrete occurrence and storing at least a portion of the occurrence data in an occurrence record; receiving first evidence data comprising at least a video data portion and a metadata portion of the evidence recorded by a first camera of the plurality of cameras and storing it in an evidence record; receiving second evidence data comprising at least a video data portion and a metadata portion of the evidence recorded by a second camera of the plurality of cameras and storing it in the evidence record; automatically associating information stored in the evidence record with information stored in the occurrence record based on a correspondence of at least two criteria including a first criterion of time; identifying, based on the automatic association, a first image data portion of the evidence recor
Abstract: A system and method for integrating voice and data operations into a single mobile device capable of simultaneously performing data and voice actions. The mobile device working in a network capable of exchanging both cell phone calls and data items to the mobile device. By wearing an earphone or an ear-bud device the user is capable of dealing with voice conversations while working with data centric information related to the current caller. By providing a data-centric device with voice capabilities there is a new range of features that allow incoming data events to trigger outgoing voice events.
Type:
Grant
Filed:
September 16, 2019
Date of Patent:
May 25, 2021
Assignee:
BlackBerry Limited
Inventors:
Gary Phillip Mousseau, David Paul Yach, Mihal Lazaridis, Harry Richmond Major, Raymond Paul Vander Veen, Atul Asthana
Abstract: An image recording apparatus can record an appropriate reel number and an appropriate camera identification (ID) in a volume label of a recording medium even in a case where a management file is generated for each moving image format. In a case where a management file of a first moving image format set as a recording format of a moving image file is not recorded, and a management file of another moving image format is recorded in a recording medium, the image recording apparatus determines a reel number of the recording medium based on information about the reel number included in the management file of the other moving image format and records the management file of the first moving image format including the information about the determined reel number in the recording medium, and does not update or record the volume label in the recording medium.
Abstract: An association with the system timing at the time of transmission is secured without changing the display timing in the text information of a subtitle, so that a reception side displays the subtitle at an appropriate timing. A packet whose payload includes a document of the text information of the subtitle having display timing information is generated and transmitted in synchronization with a sample period. The header of the packet includes a time stamp on a first time axis indicating the start time of the corresponding sample period. The payload of the packet further includes reference time information on a second time axis regarding the display timing associated with the start time of the corresponding sample period.
Abstract: A transmission device of the disclosure includes: a clock signal transmitting circuit that outputs a clock signal onto a clock signal line; a data signal transmitting circuit that outputs a data signal onto a data signal line; and a blanking controller that controls the clock signal transmitting circuit to output a predetermined blanking signal, in place of the clock signal, from the clock signal transmitting circuit to the clock signal line in synchronization with a blanking period of the data signal.
Abstract: Embodiments of the present disclosure provide techniques for rendering content from a media item. According to these embodiments, from a file of the media item, track(s) in a group data structure corresponding to the type of content are identified as candidate track(s). From other tracks in the file, a determination may be made whether another track corresponds to the type of content. When another track corresponds to the type of content, feature tags in the file that are associated with the other track may be compared to capabilities of a player device that is to render the type of content. When the feature tags match capabilities of the player device, the other track may be included as a candidate track. Thereafter, a track may be selected from the candidate tracks and rendered by the player device.
Abstract: A mobile audio transportation (MAT) system/method allowing transportation of mobile audio modules (MAMs) is disclosed. The system/method incorporates a perforated acoustic tube (PAT) in the MAM allowing speaker energy to be efficiently emitted from the mobile speaker enclosure (MSE). The PAT is configured with an enclosure alignment pathway (EAP) within the MAM allowing a stack alignment rod (SAR) to penetrate through the PAT/EAP thus capturing and securing the MAM in an aligned MAM stack (AMS). Alignment and insertion of the SAR with a stack index rod (SIR) affixed to a mobile hand truck (MHT) allows the AMS to be coupled with the MHT for transportation of the AMS. The MHT incorporates a hand truck frame (HTF), hand truck wheels (HTW), hand truck platform (HTP) and SIR, hand truck handle (HTH), charger power strip (CPS), battery charger array (BCA), and optional hand truck coupler (HTC) to facilitate AMS transportation.
Type:
Grant
Filed:
May 18, 2020
Date of Patent:
February 16, 2021
Inventors:
Andrew Alexander Maly, Viktor Yevgenievich Vlassov
Abstract: A synchronization setting device and a system include at least one memory storing instructions, and at least one processor that implements the stored instructions to acquire first delay time to be set in a first distribution device for distributing first content data, and output the first delay time to the first distribution device or to a reproduction device that provides the first content data to the first distribution device.
Abstract: There is provided an image processing device and an image processing method for instantaneously displaying an image of a user's field of view. An encoder encodes a celestial sphere image of a cube formed by images of multiple planes generated from omnidirectional images, the encoding being performed plane by plane at a high resolution, to generate a high-resolution encoded stream corresponding to each of the planes. The encoder further encodes, at a low resolution, the celestial sphere image to generate a low-resolution encoded stream. The present disclosure may be applied, for example, to image display systems that generate a celestial sphere image so as to display an image of the user's field of view derived therefrom.
Abstract: The relative health of data storage drives may be determined based, at least in some aspects, on data access information and/or other drive operation information. In some examples, upon receiving the operation information from a computing device, a health level of a drive may be determined. The health level determination may be based at least in part on operating information received from a client entity. Additionally, a storage space allocation instruction or operation may be determined for execution. The allocation instruction or operation determined to be performed may be based at least in part on the determined health level.
Type:
Grant
Filed:
September 1, 2017
Date of Patent:
December 8, 2020
Assignee:
Amazon Technologies, Inc.
Inventors:
Marc J. Brooker, Madhuvanesh Parthasarathy, Danny Wei, Tobias L. Holgers, Yu Li
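The health-level-to-allocation mapping described in the drive-health abstract above can be sketched as a simple scoring function. Everything here is hypothetical: the patent leaves the metrics, thresholds, and actions open, so the `error_rate` and `avg_latency_ms` fields, the scoring formula, and the action names are assumptions for illustration only.

```python
def allocation_action(operation_info, thresholds=(0.9, 0.5)):
    """Map drive operation information to a (health level, allocation
    action) pair. Metrics and thresholds are illustrative placeholders."""
    # Crude health score penalizing error rate and latency (hypothetical).
    score = (1.0
             - operation_info["error_rate"]
             - 0.1 * operation_info["avg_latency_ms"] / 100)
    healthy, degraded = thresholds
    if score >= healthy:
        return "healthy", "allocate_new_volumes"
    if score >= degraded:
        return "degraded", "stop_new_allocations"
    return "unhealthy", "migrate_data_off_drive"
```

The point of the sketch is the shape of the decision: operation information flows in from a client entity, a health level is derived, and the health level selects which storage-space allocation instruction to perform.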
Abstract: Systems and methods are provided for synchronizing audio feeds. A system obtains a plurality of audio feeds and identifies a base feed, a first feed, a base segment, and a first segment. The system also determines a plurality of time-shifted first segments that are each temporally offset from the first segment by a unique multiple of a granularity parameter. A plurality of correlation values between the base segment and each of the plurality of time-shifted first segments are also determined, as well as a first offset value corresponding to a particular time-shifted first segment of the plurality of time-shifted first segments having the highest correlation value. The first feed and/or supplemental content of the first feed is then synchronized with the base feed by at least temporally offsetting the first feed and/or supplemental content of the first feed by a temporal offset comprising/based on the selected first offset value.
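The offset search in the audio-synchronization abstract above — scan time shifts at multiples of a granularity parameter and keep the shift with the highest correlation — can be sketched over raw sample lists. This is a simplified illustration (unnormalized dot-product correlation over the overlap), not the patented method.

```python
def correlate_at(base, first, shift):
    """Sum of products where `first`, shifted by `shift` samples,
    overlaps `base`."""
    total = 0
    for i, x in enumerate(base):
        j = i - shift
        if 0 <= j < len(first):
            total += x * first[j]
    return total


def best_offset(base, first, max_shift, granularity=1):
    """Scan candidate offsets at multiples of `granularity` and return
    the offset whose time-shifted copy of `first` best correlates with
    the base feed."""
    shifts = range(-max_shift, max_shift + 1, granularity)
    return max(shifts, key=lambda s: correlate_at(base, first, s))
```

Once the best offset is found, the first feed (or its supplemental content) would be shifted by that amount to align it with the base feed.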
Abstract: A method for determining the quality of a media stream of a computer network including: receiving a packet from a traffic flow; determining whether the packet relates to a media stream; if the packet is related to the media stream, simulating a content player buffer related to the media stream; reviewing further data chunks associated with the media stream to determine quality events affecting the media stream; analyzing the effect of the quality event on a subscriber viewing the quality event; and determining a Quality of Experience score related to the media stream; otherwise allowing the packet to continue to the subscriber without further analysis. A system for determining the quality of a media stream, the system including modules configured to carry out the method for determining the quality of the media stream.
Type:
Grant
Filed:
November 6, 2019
Date of Patent:
November 17, 2020
Assignee:
Sandvine Corporation
Inventors:
Keir Nikolai Spilka, Darrell Reginald May
Abstract: The present disclosure provides a method for generating a video of a body moving in synchronization with music by applying a first artificial neural network (ANN) to a sequence of samples of an audio waveform of the music to generate a first latent vector describing the waveform and a sequence of coordinates of points of body parts of the body, by applying a first stage of a second ANN to the sequence of coordinates to generate a second latent vector describing movement of the body, by applying a second stage of the second ANN to static images of a person in a plurality of different poses to generate a third latent vector describing an appearance of the person, and by applying a third stage of the second ANN to the first latent vector, the second latent vector, and the third latent vector to generate the video.
Type:
Grant
Filed:
April 23, 2019
Date of Patent:
November 3, 2020
Assignee:
ADOBE INC.
Inventors:
Zhaowen Wang, Yipin Zhou, Trung Bui, Chen Fang
Abstract: Intelligent synchronization of media or other material output from multiple media devices is contemplated. The intelligent synchronization may include instructing the media devices to coordinate playback in concert with a conductor, whereby the conductor acts as a focal point or reference for the non-conducting media devices. The non-conductor may transmit sync messaging having data or other information sufficient to facilitate coordinating operation of the non-conductors in a manner sufficient to synchronize output of the media.
Abstract: An image processing device and an image processing method for generating a celestial sphere image are provided such that the pixels near the poles of the sphere are kept from increasing in density when the image is mapped to the sphere surface. An encoder encodes, with respect to an omnidirectional image generated by equidistant cylindrical projection to include a top image, a middle image, and a bottom image in a vertical direction, the middle image into an encoded stream at a high resolution, and the top image and the bottom image into encoded streams at a resolution lower than the high resolution. The present disclosure may be applied, for example, to image display systems.
Abstract: Disclosed are systems and methods for converting a control track designed for use with a number and/or type of haptic output devices to be used with other numbers and/or types of haptic output devices. For example, a computing device may convert the control track into another control track that can be applied to other types and/or numbers of haptic output devices. The converted control track may be compatible for use with a smartphone or other system that includes a different number and/or type of haptic feedback devices than the system for which the haptic track was originally designed. In this manner, the user of the smartphone or other system may experience haptic feedback using a device that is different from another haptic feedback system for which the control track was originally designed for use. The conversion may occur locally at the smartphone or other system and/or remotely at another device.
Type:
Grant
Filed:
October 12, 2018
Date of Patent:
May 12, 2020
Assignee:
Immersion Corporation
Inventors:
Vincent Levesque, Jamal Saboune, David M. Birnbaum
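The control-track conversion described in the abstract above can be illustrated with a minimal sketch. The remapping strategy below (linear interpolation of per-actuator intensities across channel layouts) and all names are assumptions for illustration, not the patented method.

```python
# Remap a haptic control track authored for one actuator count onto a device
# with a different actuator count. Each frame is a list of per-actuator
# intensities in the range 0.0-1.0.

def convert_track(track, target_channels):
    """Interpolate each frame's intensities onto `target_channels` actuators."""
    converted = []
    for frame in track:
        src = len(frame)
        if src == target_channels:
            converted.append(list(frame))
            continue
        out = []
        for i in range(target_channels):
            # Position of target actuator i in the source channel space.
            pos = i * (src - 1) / (target_channels - 1) if target_channels > 1 else 0.0
            lo = int(pos)
            hi = min(lo + 1, src - 1)
            frac = pos - lo
            out.append(frame[lo] * (1 - frac) + frame[hi] * frac)
        converted.append(out)
    return converted
```

Such a conversion could run locally on the target device or remotely, as the abstract contemplates.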
Abstract: The purpose is to provide a device, method, and program for tactile information conversion, and an element arrangement structure, capable of presenting a plurality of types of stimuli at one point in a concentrated manner from different points. One feature is to determine a first stimulation point at which a first type of tactile stimulus is generated, or to generate the first type of tactile stimulus at the first stimulation point via the output unit, and to output to the output unit tactile information for generating a second type of tactile stimulus at a second stimulation point separated from the first stimulation point, as determined or generated by a first stimulation unit, by less than a temporal and/or spatial predetermined threshold value.
Abstract: The disclosure describes using sharding to generate virtual reality content. A method includes receiving raw virtual reality video data recorded by a camera array, wherein the camera array includes three or more camera modules. The method further includes defining shards of the raw virtual reality video data in a state file. The method further includes assigning each of the shards to a corresponding worker node in a set of worker nodes. The method further includes updating the state file to include metadata that describes a location of each of the shards at the corresponding worker node in the set of worker nodes. The method further includes providing the metadata to the set of worker nodes. The method further includes processing the shards to generate one or more virtual reality video renders for each shard, where each virtual reality video render combines the raw video feeds into a single video file.
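The shard-definition and assignment bookkeeping described above can be sketched as follows. The data layout, round-robin assignment, and all names are illustrative assumptions, not the patented state-file format.

```python
# Split raw per-camera video feeds into shards, assign each shard to a worker
# node, and record each shard's location as metadata in a state file.

def build_state_file(raw_feeds, workers, shard_size):
    """raw_feeds: dict mapping camera id -> list of frames.
    Returns a state-file dict listing every shard and its worker location."""
    state = {"shards": []}
    shard_id = 0
    for camera, frames in raw_feeds.items():
        for start in range(0, len(frames), shard_size):
            worker = workers[shard_id % len(workers)]  # round-robin assignment
            state["shards"].append({
                "id": shard_id,
                "camera": camera,
                "frames": frames[start:start + shard_size],
                "worker": worker,  # location metadata for this shard
            })
            shard_id += 1
    return state
```

The state file could then be provided to the worker nodes, each of which processes only the shards assigned to it.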
Abstract: A method for presenting content items includes receiving, by a user device, a request for a media content item hosted by a content sharing platform, and providing, by the user device, a graphical user interface (GUI) comprising a first GUI portion having a first media player to play the requested media content item, and a second GUI portion having a second media player to play an additional media content item associated with a particular portion of the requested media content item. The method further includes in response to the first media player beginning to play the particular portion of the requested media content item, causing the second media player to play the additional media content item.
Abstract: Non-volatile devices may be configured such that a clear operation on a single bit clears an entire block of bits. The representation of particular data structures may be optimized to reduce the number of clear operations required to store the representation in non-volatile memory. A data schema may indicate that a data structure of an application may be optimized for storage in non-volatile memory. A translation layer may convert an application level representation of a data value associated with the data structure to an optimized storage representation of the data value before storing the optimized storage representation of the data value in non-volatile memory.
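One well-known representation a translation layer of this kind might use is sketched below, as an assumption for illustration: in many non-volatile memories an erase wipes a whole block while individual bits can only be programmed in one direction, so storing a small counter in unary lets each increment program one more bit instead of erasing the block.

```python
# Unary encoding of a small counter for clear-optimized non-volatile storage.
BLOCK_BITS = 32

def to_storage(value):
    """Application-level integer -> unary bitmask (value low bits set)."""
    assert 0 <= value <= BLOCK_BITS
    return (1 << value) - 1

def from_storage(word):
    """Unary bitmask -> application-level integer."""
    return bin(word).count("1")

def increment(word):
    """Program one more bit; no block erase is needed until the block fills."""
    return (word << 1) | 1
```

The translation layer converts between the application-level integer and this storage representation transparently, deferring the expensive block clear.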
Abstract: Systems, methods, and non-transitory computer-readable media can present one or more base segments of a first stream of a content item in a viewport interface, the content item being composed using a set of streams that each capture at least one scene from a particular direction, wherein the viewport interface is provided through a display screen of the computing device. A determination is made that a direction of the viewport interface has changed to a different direction during playback of a first base segment of the first stream. One or more offset segments of a second stream that correspond to the different direction are presented in the viewport interface, the offset segments being offset from the set of base segments of the first stream.
Type:
Grant
Filed:
December 30, 2016
Date of Patent:
January 21, 2020
Assignee:
Facebook, Inc.
Inventors:
Michael Hamilton Coward, Amit Puntambekar, David Young Joon Pio, Evgeny V. Kuzyakov
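The stream-selection step in the abstract above (switching to the stream that captures the viewport's new direction) can be sketched minimally. The angular representation and names are assumptions for illustration, not the patented implementation.

```python
# Pick the stream whose capture direction is closest to the viewport
# direction, measured on a 360-degree circle.

def nearest_stream(streams, viewport_deg):
    """streams: list of dicts with a 'direction' key in degrees.
    Returns the stream best matching the viewport direction."""
    def angular_dist(a, b):
        d = abs(a - b) % 360
        return min(d, 360 - d)  # shortest way around the circle
    return min(streams, key=lambda s: angular_dist(s["direction"], viewport_deg))
```

On a direction change, playback would continue with offset segments of the selected stream so the switch lands at the current playback position.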
Abstract: A method for determining the quality of a media stream of a computer network including: receiving a packet from a traffic flow; determining whether the packet relates to a media stream; if the packet is related to the media stream, simulating a content player buffer related to the media stream; reviewing further data chunks associated with the media stream to determine quality events affecting the media stream; analyzing the effect of the quality event on a subscriber viewing the media stream; and determining a Quality of Experience score related to the media stream; otherwise allowing the packet to continue to the subscriber without further analysis. A system for determining the quality of a media stream, the system including modules configured to carry out the method for determining the quality of the media stream.
Type:
Grant
Filed:
October 17, 2017
Date of Patent:
December 10, 2019
Assignee:
SANDVINE CORPORATION
Inventors:
Keir Nikolai Spilka, Darrell Reginald May
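The buffer simulation in the abstract above can be sketched as follows. The stall-counting model and the mapping from stalls to a score are illustrative assumptions, not the patented QoE computation.

```python
# Simulate a content player's buffer from observed chunk arrival times and
# count the stall (rebuffering) events a subscriber would experience.

def simulate_buffer(chunk_arrivals, chunk_duration):
    """chunk_arrivals: wall-clock arrival time of each media chunk (sorted).
    Returns the number of stall events."""
    if not chunk_arrivals:
        return 0
    stalls = 0
    play_head = chunk_arrivals[0]  # playback starts when the first chunk lands
    for arrival in chunk_arrivals:
        if arrival > play_head:
            stalls += 1           # buffer underrun: playback paused
            play_head = arrival   # playback resumes when the chunk arrives
        play_head += chunk_duration
    return stalls

def qoe_score(stalls):
    """Map stall count to a 1-5 Quality of Experience score (illustrative)."""
    return max(1, 5 - stalls)
```

Packets unrelated to a media stream would bypass this analysis entirely, as the abstract notes.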
Abstract: Spherical video content may define visual content viewable from a point of view as a function of progress through the spherical video content. The spherical video content may include an event of interest that occurs within an event moment and within an event extent of the visual content. The spherical video content may be presented on a display. Display fields of view defining extents of the visual content viewable from the point of view may be determined. The display fields of view may define a display extent of the visual content at the event moment. Whether the event extent is located within the display extent during the presentation of the spherical video content at the event moment may be determined. Responsive to a determination that the event extent is located outside the display extent, visual/audio effect may be applied to the presentation of the spherical video content.
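The visibility check described above (whether the event extent lies within the display extent at the event moment) can be sketched in one dimension. The degree-based representation and wrap-around handling are assumptions for illustration.

```python
# Decide whether an event's angular extent falls entirely inside the display
# field of view, with wrap-around at 360 degrees.

def extent_visible(display_center, display_fov, event_center, event_fov):
    """All angles in degrees; returns True if the event extent is on screen."""
    def wrap(d):
        return (d + 180) % 360 - 180  # signed angular difference in (-180, 180]
    offset = abs(wrap(event_center - display_center))
    return offset + event_fov / 2 <= display_fov / 2
```

When this returns False at the event moment, a visual or audio effect could be applied to direct the viewer toward the event.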
Abstract: Techniques of providing motion video content along with audio content are disclosed. In some example embodiments, a computer-implemented system is configured to perform operations comprising: receiving primary audio content; determining that at least one reference audio content satisfies a predetermined similarity threshold based on a comparison of the primary audio content with the at least one reference audio content; for each one of the at least one reference audio content, identifying motion video content based on the motion video content being stored in association with the one of the at least one reference audio content and not stored in association with the primary audio content; and causing the identified motion video content to be displayed on a device concurrently with a presentation of the primary audio content on the device.
Abstract: A method for responding to a content retrieval request at a server may include receiving the content retrieval request from a computing device; detecting, at a device aware controller, at least one device capability of the computing device; setting, at the device aware controller, a rule boundary for the content retrieval request based on the at least one device capability; forwarding the content retrieval request with the rule boundary to a device agnostic controller, wherein the content retrieval request does not include capability information associated with the computing device; receiving from the device agnostic controller at the device aware controller, data corresponding to the content retrieval request with the rule boundary applied; and providing the data with the rule boundary applied to the computing device for presentation on the computing device.
Type:
Grant
Filed:
September 21, 2018
Date of Patent:
October 1, 2019
Assignee:
Wells Fargo Bank, N.A.
Inventors:
Shailesh Hedaoo, Ashish G. Khapre, Ranganathan Kanchi
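The split between the device-aware and device-agnostic controllers in the abstract above can be sketched as follows. The capability keys, boundary fields, and filtering rules are illustrative assumptions, not the patented design.

```python
# Device-aware step: derive a rule boundary from device capabilities.
def rule_boundary(capabilities):
    """Only this function ever sees device capabilities."""
    return {
        "max_image_px": 480 if capabilities.get("screen") == "small" else 1920,
        "video": capabilities.get("bandwidth", "high") != "low",
    }

# Device-agnostic step: apply the boundary without knowing the device.
def apply_boundary(content_items, boundary):
    """Filter and clamp content using only the rule boundary."""
    out = []
    for item in content_items:
        if item["type"] == "video" and not boundary["video"]:
            continue  # drop video for bandwidth-constrained devices
        if item["type"] == "image":
            item = dict(item, width=min(item["width"], boundary["max_image_px"]))
        out.append(item)
    return out
```

Keeping capability information out of the device-agnostic controller is the point of the boundary: the content retrieval request it sees carries the rule boundary but no device details.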
Abstract: Multiple broadcasters create live streams of digital content relating to live events, and multiple viewers of each broadcaster receive copies of the live streams. Viewer latency is significantly reduced, and event information relating to live events is synchronized amongst all broadcasters and viewers of live streams relating to the same event. Scalable and flexible access to live streams is provided to different types and numbers of viewers with different qualities of service. A social media platform is provided in tandem with live streaming of digital content relating to live events, to allow a given broadcaster and their associated viewers to communicate with one another, comment on the event and/or the broadcaster's live stream, and send digital gifts.
Type:
Grant
Filed:
February 5, 2019
Date of Patent:
September 24, 2019
Assignee:
SportsCastr.LIVE
Inventors:
Kevin April, Peter Azuolas, Philip Nicholas Schupak, Brian Silston