Abstract: A landing sequence for an unmanned aerial vehicle ("UAV") wherein, once the landing sequence is initiated, the UAV detects the operator's hand after hovering at an operative height. The UAV then begins a descent at an operative rate toward the operator's hand, reducing the lift generated by the UAV, until the operator's hand exerts enough pressure on the bottom of the UAV to trigger the UAV to cut all power to the motors. This lands the UAV in the operator's hand, giving the operator the ability to "catch" the UAV and avoid landing it in an undesirable location.
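The sequence described above can be read as a small state machine: hover, detect the hand, descend while reducing lift, and cut motor power once pressure on the underside crosses a trigger threshold. A minimal sketch follows; the class name, the per-step lift reduction, and the threshold values are illustrative assumptions, not taken from the publication.

```python
from enum import Enum, auto

class LandingState(Enum):
    HOVERING = auto()
    DESCENDING = auto()
    LANDED = auto()

class HandLandingController:
    """Sketch of the described landing sequence: hover at an operative
    height, detect the operator's hand, descend at an operative rate,
    and cut all motor power once hand pressure on the bottom of the
    UAV exceeds a trigger threshold. All numeric values are assumed."""

    def __init__(self, pressure_trigger=5.0):
        self.pressure_trigger = pressure_trigger  # N, assumed trigger pressure
        self.state = LandingState.HOVERING
        self.motor_power = 1.0                    # normalized lift, 1.0 = full

    def step(self, hand_detected, bottom_pressure):
        """Advance one control tick given the current sensor readings."""
        if self.state is LandingState.HOVERING and hand_detected:
            self.state = LandingState.DESCENDING
        elif self.state is LandingState.DESCENDING:
            if bottom_pressure >= self.pressure_trigger:
                self.motor_power = 0.0            # cut all power to the motors
                self.state = LandingState.LANDED
            else:
                # reduce lift gradually to descend toward the hand
                self.motor_power = max(0.0, self.motor_power - 0.05)
        return self.state
```

A caller would invoke `step()` once per control cycle with the latest hand-detection and pressure readings.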
Abstract: Introduced herein are techniques for improving media content production and consumption by utilizing metadata associated with the relevant media content. More specifically, systems and techniques are introduced herein for automatically producing media content (e.g., a video composition) using several inputs uploaded by a filming device (e.g., an unmanned aerial vehicle (UAV) copter or action camera), an operator device, and/or some other computing device. Some or all of these devices may include non-visual sensors that generate sensor data. Interesting segments of raw video recorded by the filming device can be formed into a video composition based on events detected within the non-visual sensor data that are indicative of interesting real-world events. For example, substantial variations or significant absolute values in elevation, pressure, acceleration, etc., may be used to identify segments of raw video that are likely to be of interest to a viewer.
Type:
Application
Filed:
October 27, 2017
Publication date:
May 3, 2018
Applicant:
LR Acquisition, LLC
Inventors:
James W. Allison, Henry W. Bradlow, Gillian C. Langor, Antoine Balaresque
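The abstract above describes flagging video segments whose non-visual sensor readings (elevation, pressure, acceleration) show significant absolute values. A minimal sketch of that idea, assuming timestamped sensor samples aligned with the raw video; the function name, threshold, and padding window are hypothetical:

```python
def interesting_segments(sensor_samples, threshold, pad=2.0):
    """Return (start, end) time windows around sensor samples whose
    absolute value exceeds `threshold`, merging overlapping windows.

    `sensor_samples` is a list of (timestamp_seconds, value) pairs,
    e.g. vertical-acceleration readings aligned with the raw video.
    """
    windows = []
    for t, value in sensor_samples:
        if abs(value) >= threshold:
            start, end = t - pad, t + pad
            if windows and start <= windows[-1][1]:
                # extend the previous window instead of opening a new one
                windows[-1] = (windows[-1][0], end)
            else:
                windows.append((start, end))
    return windows
```

The returned windows could then be used to cut the corresponding segments of raw video into a composition.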
Abstract: Introduced herein are systems and techniques for automatically producing media content (e.g., a video composition) using several inputs uploaded by an unmanned aerial vehicle (UAV) copter, an operator device, and/or some other computing device. More specifically, production and modification techniques based on sensor-driven events are described herein that allow videos to be created on behalf of a user of the UAV copter. Interesting segments of raw video recorded by the UAV copter can be formed into a video composition based on sensor events that are indicative of interesting real-world events. The sensors responsible for detecting the events may be connected to (or housed within) the UAV copter, the operator device, and/or some other computing device. Sensor measurements can also be used to modify the positioning, movement pattern, focus level/point, etc., of the UAV copter responsible for generating the raw video.
Type:
Application
Filed:
October 11, 2017
Publication date:
April 12, 2018
Applicant:
LR Acquisition, LLC
Inventors:
James W. Allison, Henry W. Bradlow, Gillian C. Langor, Antoine Balaresque
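This abstract additionally mentions using sensor measurements to modify the focus point of the UAV copter. One simple form of that is aiming the camera gimbal at a tracked subject from the two devices' spatial sensor readings. A sketch under that assumption (positions as (x, y, z) in metres; the function name is illustrative):

```python
import math

def adjust_focus_point(drone_pos, subject_pos):
    """Compute gimbal yaw and pitch (degrees) that point the camera
    from `drone_pos` toward `subject_pos`, each an (x, y, z) tuple
    derived from spatial sensor measurements."""
    dx = subject_pos[0] - drone_pos[0]
    dy = subject_pos[1] - drone_pos[1]
    dz = subject_pos[2] - drone_pos[2]
    yaw = math.degrees(math.atan2(dy, dx))                 # heading to subject
    pitch = math.degrees(math.atan2(dz, math.hypot(dx, dy)))  # up/down tilt
    return yaw, pitch
```

A drone 10 m above a subject directly ahead would, for example, tilt the camera 45 degrees downward.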
Abstract: Several embodiments include a remote tracker for a videography drone. The remote tracker can include a spatial information sensor and a microphone configured to capture audio data surrounding the remote tracker. The remote tracker can also include a logic control component configured to decorate the audio data with location-based metadata or temporal metadata. A network interface of the remote tracker can communicate with the videography drone, including streaming the audio data captured by the microphone to the videography drone.
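The remote tracker described above decorates captured audio with location-based and temporal metadata before streaming it to the videography drone. A minimal sketch of that decoration step; the field names and structure are illustrative assumptions, not taken from the publication:

```python
import time

def decorate_audio_chunk(audio_bytes, latitude, longitude):
    """Attach temporal and location-based metadata to a captured audio
    chunk before it is streamed to the videography drone."""
    return {
        "audio": audio_bytes,                              # raw microphone data
        "timestamp": time.time(),                          # temporal metadata
        "location": {"lat": latitude, "lon": longitude},   # location-based metadata
    }
```

Decorating each chunk at capture time lets the drone (or a later production step) align the audio with video frames recorded at the same time and place.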