Abstract: A system and methodologies for controlling the content of a heads-up display (HUD) to provide increased visibility of naturally occurring contours and surfaces in the environment are provided. Depth information is combined with predefined lighting conditions to output an image on the HUD with enhanced feature contours.
Abstract: A method of generating at least one image of a real environment comprises providing at least one environment property related to at least part of the real environment, providing at least one virtual object property related to a virtual object, determining at least one imaging parameter according to the at least one provided virtual object property and the at least one provided environment property, and generating at least one image of the real environment representing information about light leaving the real environment according to the determined at least one imaging parameter, wherein the light leaving the real environment is measured by at least one camera.
Abstract: A method for providing event data from cross-referenced data memories of an on-vehicle event detection and reporting system includes cross-referencing event data stored in an event buffer with continuous video data stored in a continuous DVR memory. A request for additional data corresponding to a detected driving or vehicle event may be received by the on-vehicle system, where the request includes an event identifier corresponding to a detected driving or vehicle event. The requested additional data is identified in the DVR memory using the event identifier and based on said cross-referencing of the event data with the continuous video data stored in the DVR memory. The identified additional data is then transmitted from the DVR memory for display on a user computer.
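The identifier-based lookup this abstract describes can be sketched as follows. This is a minimal illustrative model, not the patented implementation: the `event_index` mapping, the pre/post-roll padding, and all names are assumptions.

```python
# Hypothetical in-memory model of the cross-referenced memories.
# event_index maps an event identifier to the time span (seconds of
# DVR time) that the detected event covers.
event_index = {
    "EV-1001": (120.0, 150.0),
}

# dvr_memory holds continuous video frames keyed by timestamp (seconds).
dvr_memory = {t: f"frame@{t}" for t in range(0, 300, 10)}

def fetch_additional_data(event_id, pre_roll=30.0, post_roll=30.0):
    """Resolve an event identifier to the DVR frames around the event."""
    start, end = event_index[event_id]
    lo, hi = start - pre_roll, end + post_roll
    return [frame for t, frame in sorted(dvr_memory.items()) if lo <= t <= hi]

clip = fetch_additional_data("EV-1001")
```

The cross-reference is what makes the request cheap: only the event identifier travels over the air, and the on-vehicle system resolves it to the surrounding continuous footage locally.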
Type: Grant
Filed: December 3, 2018
Date of Patent: December 14, 2021
Assignee: Bendix Commercial Vehicle Systems LLC
Inventors: Mark A. Muncy, Andreas U. Kuehnle, Brendan E. Buzer, Hans M. Molin
Abstract: An information processing apparatus includes a second acquisition unit that acquires a second image captured by a second image capturing unit whose image capturing area is controlled by a driving control unit, and a display control unit that clips an image of an area corresponding to a selection area from the second image acquired by the second acquisition unit, and displays the clipped image.
Abstract: A system and method for performing image scrolling are disclosed. In one embodiment, a system for image scrolling determines the scroll rate for image scrolling. The scroll rate is based on selection of a scroll rate range, through a user input device, and is further based on movement indicated by the user input device. The system writes a sequence of images from the image cache to the frame buffer for image scrolling on the display at the determined scroll rate.
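The two-factor rate determination in this abstract can be sketched as below. The specific ranges, the linear mapping, and the names are assumptions for illustration; the patent only states that the rate depends on a selected rate range and on input-device movement.

```python
# Illustrative scroll-rate ranges, in images per second (min, max).
SCROLL_RANGES = {
    "slow":   (1, 5),
    "medium": (5, 20),
    "fast":   (20, 60),
}

def scroll_rate(range_name, movement, movement_max=100):
    """Map movement (0..movement_max) linearly into the chosen range."""
    lo, hi = SCROLL_RANGES[range_name]
    frac = max(0.0, min(1.0, movement / movement_max))
    return lo + frac * (hi - lo)

rate = scroll_rate("medium", movement=50)
```

With this split, the range selection sets coarse bounds and the movement magnitude fine-tunes the rate within them.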
Abstract: An automobile-mounted imaging apparatus and a computer readable storage medium for detecting a distance to at least one object. The apparatus comprises circuitry configured to select at least two images from images captured by at least three cameras to use for detecting the distance to the at least one object based on at least one condition. Alternatively or additionally, the apparatus comprises circuitry configured to select two cameras of at least three cameras for detecting the distance to the at least one object based on at least one condition. Alternatively or additionally, the apparatus comprises circuitry configured to determine, based on at least one condition, which two of at least three cameras capturing images to use for detecting the distance to the at least one object.
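A condition-driven camera-pair selection like the one above could be sketched as follows. The pairing rule used here (wider stereo baselines for farther targets) and all names are assumptions; the abstract leaves the condition unspecified.

```python
# Illustrative sketch: pick the stereo camera pair based on a condition
# such as estimated target distance. Wider baselines give better depth
# resolution at range; narrower baselines avoid matching problems up close.
def select_pair(cameras, est_distance):
    """cameras: {name: lateral position in meters}. Return a camera pair."""
    names = sorted(cameras, key=cameras.get)
    if est_distance > 20.0:            # far target: use the widest baseline
        return (names[0], names[-1])
    return (names[0], names[1])        # near target: use an adjacent pair

pair = select_pair({"left": 0.0, "center": 0.3, "right": 0.6}, est_distance=50.0)
```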
Abstract: Various implementations disclosed herein are for detecting moving objects that are in a field of view of a head-mountable device (HMD). In various implementations, the HMD includes a display, an event camera, a non-transitory memory, and a processor coupled with the display, the event camera and the non-transitory memory. In some implementations, a method performed at the HMD includes synthesizing a first optical flow characterizing one or more objects in a field of view of the event camera based on depth data associated with the one or more objects. In some implementations, the method includes determining a second optical flow characterizing the one or more objects in the field of view of the event camera based on event image data provided by the event camera. In some implementations, the method includes determining that a first object of the one or more objects is moving based on the first optical flow and the second optical flow.
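The final comparison step can be sketched as below: a depth-derived (predicted) flow is compared per object with the event-derived (measured) flow, and a large discrepancy flags the object as moving rather than as apparent motion from the wearer's own head movement. The 2D flow representation and the threshold are assumptions for illustration.

```python
# Sketch: an object whose measured flow cannot be explained by camera
# motion plus depth (the predicted flow) is itself moving.
def is_moving(predicted_flow, measured_flow, threshold=1.0):
    """Compare one object's two flow vectors; True if they disagree."""
    dx = measured_flow[0] - predicted_flow[0]
    dy = measured_flow[1] - predicted_flow[1]
    return (dx * dx + dy * dy) ** 0.5 > threshold

def moving_objects(first_flow, second_flow, threshold=1.0):
    """Return indices of objects whose two flows disagree."""
    return [i for i, (p, m) in enumerate(zip(first_flow, second_flow))
            if is_moving(p, m, threshold)]
```

For a static object, both flows agree (its image motion is fully explained by head motion and depth); only independently moving objects produce a residual.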
Abstract: Systems and methods are described to identify, in response to receiving a request to skip a portion of a media asset being consumed, jump points indicative of potential time points from which to resume consumption of the media asset. The jump points include a first jump point identified based on a content viewing profile and a second jump point identified based on scene information associated with the media asset. A preview image is displayed at each of the identified jump points. Systems and methods are also described to pause the skipping operation at the identified jump points and provide a preview at the respective jump points. Systems and methods are further described to identify jump points based on analysis of the portion of the media asset being skipped.
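Merging the two candidate sources into one ordered list of jump points inside the skipped span could look like this minimal sketch (the flat timestamp representation and all names are assumptions):

```python
# Sketch: collect candidate resume points from the viewing profile and
# from scene boundaries, keep only those inside the skipped portion,
# and de-duplicate.
def jump_points(skip_start, skip_end, profile_points, scene_points):
    """Return sorted, unique candidate resume times within the skip span."""
    candidates = set(profile_points) | set(scene_points)
    return sorted(t for t in candidates if skip_start <= t <= skip_end)

points = jump_points(60, 300, profile_points=[90, 400], scene_points=[120, 90, 240])
```

A preview frame would then be rendered at each returned time, and the skip operation could pause at each one in order.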
Abstract: An imaging system includes a body, a first processing circuit provided at the body, a gimbal carried by the body, an image collection circuit carried by the gimbal, a second processing circuit provided at the gimbal, a third processing circuit provided at the body, and a signal transmission line. One end of the signal transmission line is connected to the second processing circuit, and another end of the signal transmission line is connected to the third processing circuit. The second processing circuit is configured to encode a multi-channel first signal output by the image collection circuit to output a second signal. The third processing circuit is configured to receive the second signal through the signal transmission line, decode the second signal to obtain a decoded second signal, and transmit the decoded second signal to the first processing circuit.
Abstract: A data transmission method according to one aspect of the present disclosure includes: generating a plurality of MPUs, reference clock time information, and leading clock time information indicating a leading PTS that is a clock time at which a leading access unit in the MPU is presented, transmitting the generated plurality of MPUs, reference clock time information, and leading clock time information, wherein the leading clock time information indicates the leading PTS of the plurality of MPUs whose presentation is started after the leading clock time information is transmitted in the generated plurality of MPUs, and each of the generated plurality of MPUs indicates a time point at which each access unit that is not at the head of the MPU is presented as a relative value to a time point of another access unit in the MPU.
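The timing rule in this abstract resolves to a simple recurrence: only the MPU's leading access unit carries an absolute presentation time (the leading PTS), and every later unit carries a time relative to another unit in the same MPU. A sketch, assuming each offset is relative to the immediately preceding unit (the abstract permits any other unit in the MPU):

```python
# Sketch: reconstruct absolute presentation times for one MPU from its
# leading PTS plus per-unit relative offsets (illustrative structure).
def resolve_pts(leading_pts, relative_offsets):
    """relative_offsets[i] is access unit i+1's offset from unit i."""
    times = [leading_pts]
    for off in relative_offsets:
        times.append(times[-1] + off)
    return times

pts = resolve_pts(leading_pts=1000, relative_offsets=[33, 33, 34])
```

Carrying only one absolute time per MPU keeps the signaling compact while still letting the receiver place every access unit on the reference clock.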
Abstract: A method includes identifying media content items for a playlist, each media content item having an introductory segment, a main segment, and an ending segment, determining a length of an ending segment of a first media content item and a length of an introductory segment of a second media content item, and selecting an interstitial to be added to the playlist between a main segment of the first media content item and the second media content item. A length of the interstitial exceeds a combined length of the ending segment of the first media content item and the introductory segment of the second media content item. The method also includes adding a spacing between the ending segment and the introductory segment, and inserting the interstitial in the playlist between the main segment of the first media content item and the second media content item based on the spacing.
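The length constraint here admits a small worked sketch. The choice of the shortest viable interstitial and the derivation of the spacing as the leftover length are assumptions; the abstract only requires that the interstitial exceed the combined ending-plus-intro length.

```python
# Sketch: pick an interstitial longer than ending + intro and compute
# the spacing the leftover length implies (illustrative policy).
def plan_transition(ending_len, intro_len, interstitials):
    """interstitials: candidate durations in seconds. Returns the plan
    or None if no candidate is long enough."""
    combined = ending_len + intro_len
    viable = [d for d in interstitials if d > combined]
    if not viable:
        return None
    chosen = min(viable)  # shortest candidate that still covers both segments
    return {"interstitial": chosen, "spacing": chosen - combined}

plan = plan_transition(ending_len=8, intro_len=5, interstitials=[10, 15, 30])
```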
Abstract: Aspects of the subject disclosure may include, for example, a device in which a processing system receives from a multimedia content server a plurality of content streams; each of the content streams includes a portion of an original content stream. The processing system transcodes the plurality of content streams based on a viewport prediction to produce a plurality of viewport streams; the viewport prediction is based at least in part on a visibility map associated with a viewer of the content. The processing system delivers each of the viewport streams to a client device associated with the viewer; each of the viewport streams is buffered at the client device in a separate buffer. Other embodiments are disclosed.
Type: Grant
Filed: December 13, 2019
Date of Patent: October 26, 2021
Assignees: AT&T INTELLECTUAL PROPERTY I, L.P., THE HONG KONG UNIVERSITY OF SCIENCE AND TECHNOLOGY
Inventors: Bo Han, Wenxiao Zhang, Pan Hui, Tan Xu, Cheuk Yiu Ip
Abstract: A studio in a box includes displays arranged along the interior of the studio. A camera and microphone are arranged in the studio to capture a multimedia production, using content shown on the displays as background for the production. Other aspects are described.
Abstract: A system includes a video recorder configured to record video data, a sensor configured to sense movement of an object and output sensor data representative of the movement of the object, a transformer configured to transform the sensor data into a haptic output signal, a haptic output device configured to generate a haptic effect to a user based on the haptic output signal, a display configured to display a video, and a processor configured to synchronize the video data and the haptic output signal, and output the video data to the display and the haptic output signal to the haptic output device so that the haptic effect is synchronized with the video displayed on the display.
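The synchronization step above can be sketched with a simple timestamp alignment: each video frame is paired with the most recent haptic sample at or before it. The sample representation and names are assumptions; the patent does not specify the alignment mechanism.

```python
# Sketch: align a haptic signal (derived from transformed sensor data)
# with video frames by timestamp, so each frame plays with the haptic
# amplitude in effect at that moment (illustrative).
def synchronize(video_ts, haptic_events):
    """video_ts: frame timestamps. haptic_events: (timestamp, amplitude)
    pairs. Returns one amplitude per frame (None before the first event)."""
    out = []
    events = sorted(haptic_events)
    for t in video_ts:
        prior = [(ts, amp) for ts, amp in events if ts <= t]
        out.append(prior[-1][1] if prior else None)
    return out

track = synchronize([0.0, 0.1, 0.2], [(0.05, 0.8), (0.15, 0.3)])
```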
Type: Grant
Filed: February 12, 2016
Date of Patent: October 5, 2021
Assignee: IMMERSION CORPORATION
Inventors: Robert Lacroix, Juan Manuel Cruz-Hernandez, Jamal Saboune
Abstract: The present invention relates to the field of security technology, and in particular to a fine granularity real-time supervision system based on edge computing. A fine granularity real-time supervision system based on edge computing is provided, comprising: an intelligent video monitoring device, an edge computing module, an edge computing node, and a cloud computing center. The system reduces redundant information and realizes fine granularity management.
Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and a method for generating a summary based on trip information. The program and method include operations for: determining that one or more criteria associated with a user correspond to a trip taken by the user during a given time interval; retrieving a plurality of visual media items generated by a client device of the user during the given time interval; determining location information for the plurality of visual media items; automatically generating a trip graphic to represent the trip based on the plurality of visual media items generated by the user during the given time interval and the determined location information; and causing the trip graphic to be displayed on the client device.
Type: Grant
Filed: March 25, 2020
Date of Patent: September 28, 2021
Assignee: Snap Inc.
Inventors: Alexander Collins, Benedict Copping, Justin Huang
Abstract: Satellite onboard imaging systems having a look-down view and a toroidal view of the Earth are disclosed. In one embodiment, a satellite onboard imaging system includes an infrared sensing system and a controller. The infrared sensing system includes a first imager configured to have a first field of view that observes a look-down view of the Earth from a satellite and a second imager configured to have a second field of view that observes a toroidal view of the Earth centered at the satellite. The controller is coupled to the first imager and the second imager and operable to process image data from the first imager and the second imager. The controller is further operable to output indications of thermal energy of identical or different objects based on the first thermal image signal, the second thermal image signal, or both.
Abstract: Systems and methods are disclosed for intelligently activating light devices for optimized lighting conditions in a scene, where optimized illumination is provided by a subset of light devices within an array of light devices in communication with the camera. The systems and methods detect a target within a camera's field of view and determine an optimized illumination of the target according to a video analytics model. Lighting is adjusted in accordance with the determined optimized illumination.
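Choosing a subset of light devices to hit a target illumination could be sketched as a brute-force search over subsets, as below. The per-device contribution values, the additive lighting model, and the exhaustive search are all assumptions for illustration; the patent's video analytics model determines the target, not this search.

```python
from itertools import combinations

# Sketch: choose the device subset whose combined contribution is
# closest to the target illumination predicted by the analytics model.
def best_subset(devices, target_lux):
    """devices: {name: contribution in lux at the target}. Returns the
    subset minimizing |sum - target| (exhaustive; fine for small arrays)."""
    best, best_err = (), float("inf")
    names = list(devices)
    for r in range(len(names) + 1):
        for combo in combinations(names, r):
            total = sum(devices[d] for d in combo)
            err = abs(total - target_lux)
            if err < best_err:
                best, best_err = combo, err
    return set(best)

choice = best_subset({"A": 100, "B": 250, "C": 60}, target_lux=320)
```

An exhaustive search is exponential in the number of devices; a real array would likely use a greedy or optimization-based selection instead.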
Type: Grant
Filed: February 19, 2018
Date of Patent: August 24, 2021
Assignee: CISCO TECHNOLOGY, INC.
Inventors: Nicholas Dye Abalos, Ian Matthew Snyder
Abstract: Disclosed are a method and an apparatus for analyzing a driving tendency and a system for controlling a vehicle. The apparatus includes: an image sensor disposed in a vehicle so as to have a field of view exterior of the vehicle, the image sensor configured to capture image data; and a controller comprising at least one processor configured to process the image data captured by the image sensor, wherein the controller is configured to: identify a plurality of objects present in the field of view, responsive at least in part to processing of the image data; determine whether an event is generated, based on at least one of a processing result of the image data and pre-stored driving information of the vehicle; analyze a driving tendency of a driver, based on the driving information and the processing result of the image data, when it is determined that the event is generated; and set a driving level corresponding to the driving tendency of the driver.
Abstract: A method of processing a video includes capturing a first set of video data at a first definition, transmitting the first set of video data at a second definition lower than the first definition wirelessly to a user terminal, receiving a video edit request wirelessly from the user terminal, and finding video corresponding to edited video data described by the video edit request, thereby forming a second set of video data at a third definition. The video edit request is formed from editing the received first set of video data at the second definition at the user terminal.
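The proxy-editing round trip in this abstract can be sketched as below: the user edits the low-definition copy, and the edit request's time ranges are applied to the original high-definition footage to produce the third set of video data. The frame-index mapping and all names are assumptions.

```python
# Sketch: apply an edit request made against the low-definition proxy
# to the high-definition source. Proxy time equals source time, so the
# kept spans translate directly into source frame ranges.
def apply_edit_request(hd_frames, fps, edit_ranges):
    """edit_ranges: (start_s, end_s) spans to keep, in seconds.
    Returns the HD frames covered by the kept spans, in order."""
    kept = []
    for start_s, end_s in edit_ranges:
        lo, hi = int(start_s * fps), int(end_s * fps)
        kept.extend(hd_frames[lo:hi])
    return kept

hd = list(range(100))          # 100 HD frames at 10 fps = 10 s of video
cut = apply_edit_request(hd, fps=10, edit_ranges=[(1.0, 2.0), (5.0, 5.5)])
```

Only the small proxy and the compact edit request cross the wireless link; the full-definition frames never leave the capture device until the final cut is formed.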