AERIAL IMAGING OF A REGION USING ABOVE GROUND AERIAL CAMERA PLATFORM
A system comprises an aerial imaging platform configured to rise to a height above ground. An apparatus allows an entity to move the aerial platform in a desired direction. The aerial platform includes a camera positioned to capture images of the ground. The camera includes a position sensor. A user/entity may move the aerial platform over a region to be imaged. The system includes a device that may be carried by the user/entity. The device receives information about a region to be imaged and a field of vision of the camera, determines a first path, and provides information on the first path to the user/entity. As the user/entity moves the aerial platform along the first path, the device receives data from the camera position sensor and determines a second path. The user/entity may then move the aerial platform along the second path to capture unimaged areas of the region.
The present application claims priority to and the benefit of co-pending U.S. Provisional Patent Application No. 62/448,992 filed Jan. 21, 2017, and co-pending U.S. Provisional Patent Application No. 62/449,049 filed Jan. 22, 2017, the entire contents of which are hereby incorporated by reference.
BACKGROUND
High-resolution aerial imagery systems have become widely used over the last several years. This use has increased in both the research community and industry. For example, visual imagery recorded using camera-equipped Unmanned Aerial Vehicles (UAVs) has been used for applications including disaster assessment, agricultural analytics and film-making. Fueled by this increasing array of applications, UAV sales in the US have tripled over the last year. In spite of recent advances in UAV technology, several factors severely limit the capabilities and adoption of UAVs. UAVs consume a large amount of power to stay aloft, resulting in very short battery life (on the order of a few tens of minutes for most commercial UAVs). This makes such UAVs infeasible for applications that require long-term continuous monitoring, like agricultural farm monitoring, surveillance and generating aerial time-lapse imagery. Also, the use of UAVs faces regulatory restrictions and the use of UAVs requires high capital investment. Mid-to-heavy payload carrying UAVs are expensive and typically cost over a thousand dollars. This cost factor is compounded by the fact that the UAV batteries have finite charge cycles and need to be replaced frequently if the UAV is used often.
SUMMARY
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This summary is not intended to exclusively identify key features or essential features of the claimed subject matter, nor is it intended as an aid in determining the scope of the claimed subject matter.
Embodiments of the disclosure include apparatus and methods for use in an imaging system. The imaging system may be configured to include an aerial platform configured to rise to a height above ground. The aerial platform may be an apparatus such as a balloon or a kite. The aerial platform may include a steering apparatus that allows an entity on the ground to move the aerial platform in a desired direction when the aerial platform is suspended above ground. For example, the steering apparatus may be a tether that allows the user/entity on the ground to control the height and position of the aerial platform. A camera device may be attached to the aerial platform and oriented in a direction to take images, including still images or videos, of the ground. The camera device may also include a position sensor such as a global positioning system (GPS) device and a wireless interface for communicating with other devices. The imaging system may also include a mobile device that also has a wireless interface and that communicates with the camera device over the wireless interface. The mobile device may be a device such as a smart phone or tablet device and may have a user interface to receive input from, and provide output to, a user that may be the user/entity on the ground who controls the aerial platform using the steering apparatus.
In operation, the mobile device may receive first data related to a region to be imaged by the camera device. The first data may be entered at the user interface by the user. The first data may include, for example, dimensional parameters of the region and at least one indication of an area of importance within the region. The mobile device may also receive second data that includes information related to a field of vision of the camera device. The second data may include, for example, data on the height of the aerial platform/camera device and image resolution data that may be input to the mobile device by the user or provided by some other method, such as being pre-stored on or downloaded to the mobile device. Based at least on the first data and the second data, the mobile device may determine a first path within the region and provide information on the first path to the user/entity at the user interface.
The user/entity may then use the steering apparatus to move the aerial platform along with the user/entity as the user/entity moves along the first path according to the information on the first path received at the user interface. As the user/entity moves along the first path, the mobile device receives third data from the position sensor of the camera device. The third data may include position data determined at the time of capture of each image or video frame of the region of interest by the camera device. Then, during and/or subsequent to the movement of the user/entity along the first path, and based at least on the third data, the mobile device may determine a second path within the region and provide information on the second path to the user/entity. The second path may be determined to account for at least one unimaged area of the region that remains unimaged in the traverse of the first path. The user/entity may then use the steering apparatus to move the aerial platform along a traverse of the second path according to the information on the second path received at the user interface and capture additional images of the region. The first path may be constructed and/or adjusted in real time as the user moves along the first path with the aerial platform.
In an implementation, the imaging system includes a control device having an apparatus that allows a user to extend camera battery life by duty cycling and/or controlling the camera device remotely so that only basic functionalities are running, except when other functions are needed. The user may use the apparatus to capture imagery at a time scale of interest by adjusting parameters related to image capture by the camera device. The implementation includes an apparatus that is configured to receive first data, the first data comprising information related to a region to be imaged and information related to a field of vision of a camera device positioned above a mobile entity on an aerial platform having a steering apparatus, receive second data, the second data indicating a parameter related to power consumption by the camera device, and provide the second data to the camera device. The second data may include a time for the camera device to capture an image, a time for the camera device to capture a video, or a number of cycles per time period for the camera device to capture one or more images. The apparatus may then determine, based at least on the first data, path planning to provide to the mobile entity for imaging the region. The apparatus may comprise a mobile device including a user interface, and the mobile device receives the second data indicating a parameter related to power consumption from the user interface in response to the mobile entity entering input at the user interface. The apparatus may also determine a battery life of the camera device based at least on the second data, and present an indication of the battery life to the mobile entity at the user interface. A user may then modify the time for the camera device to capture an image, the time for the camera device to capture a video, or the number of cycles per time period for the camera device according to the battery life and the time needed to image a particular region.
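As a rough illustration of how such a battery-life indication might be computed from the power-consumption parameters, consider the following minimal sketch. It assumes a simple linear duty-cycle model; the function name, parameter names, and all power figures are illustrative assumptions rather than values from the disclosure.

```python
# Minimal sketch of a duty-cycle battery-life estimate. All power
# figures and the linear model are illustrative assumptions.

def estimate_battery_hours(
    battery_wh: float,       # camera battery capacity in watt-hours
    idle_w: float,           # draw with only basic functionality running
    active_w: float,         # draw while capturing an image or video
    capture_seconds: float,  # capture time per duty cycle
    cycles_per_hour: float,  # capture cycles per hour
) -> float:
    """Estimate camera battery life in hours under a given duty cycle."""
    active_fraction = (capture_seconds * cycles_per_hour) / 3600.0
    avg_w = active_w * active_fraction + idle_w * (1.0 - active_fraction)
    return battery_wh / avg_w

# Example: 10 Wh battery, 0.2 W idle, 3 W active, one 30 s capture per hour.
print(f"{estimate_battery_hours(10, 0.2, 3.0, 30, 1):.1f} hours")
```

A user interface built on such an estimate could recompute and redisplay the battery-life indication each time the user adjusts the capture time or the number of cycles per time period.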
In a further implementation, the imaging system includes a control device including an apparatus that is configured to receive a plurality of images from a camera device, extract a plurality of features from the plurality of images, reject selected features of the plurality of features that do not match across the plurality of images, determine a homography that maps features in each image of the plurality of images to features in another image of the plurality of images, and apply the homography to a current image captured from the camera device to ensure that the current image is aligned to an image previous to the current image. This image alignment allows the use of an aerial platform that is less expensive but may be susceptible to translations and rotations in the air that cause unwanted motion of the camera device due to the impact of wind on the aerial platform.
In a further implementation, the imaging system may include an aerial platform configured to rise to a height above ground. The aerial platform may include a balloon and a steering apparatus that allows an entity on the ground to move the aerial platform in a desired direction when the aerial platform is above ground. The imaging system may further include a mount including a member having a long axis, a first end, and a second end. The mount further includes a swiveling apparatus that couples the first end of the member to the aerial platform, wherein the member hangs from the aerial platform with its long axis substantially perpendicular to the ground. The mount also includes a case having a first portion and a second portion. The first portion of the case is coupled to the second end of the member. The case is configured to receive a camera device and hold the camera device so that the focal plane of the camera device is perpendicular to the long axis of the member.
The system and method will now be described by use of example embodiments. The example embodiments are presented in this disclosure for illustrative purposes, and not intended to be restrictive or limiting on the scope of the disclosure or the claims presented herein.
The disclosed embodiments include apparatus, devices, and methods for use in a system for large-scale high-resolution aerial photography of a region of interest. The apparatus, devices, and methods provide a practical and low-cost alternative to aerial imaging based on the use of unmanned powered aerial vehicles such as quadcopters and fixed wing drones in situations when the use of these types of aerial vehicles is infeasible due to limited battery life, cost, and/or regulatory restrictions. The embodiments also provide advantages over aerial imaging based on the use of satellites or airplanes, which can be expensive and not flexible enough for many users.
In an example implementation, the system may include a camera device attached to an aerial platform, such as a tethered helium balloon, by a mounting device, and a mobile device. The aerial platform may either remain anchored to a stationary ground point or an on-ground anchor over an extended period of time, or be physically attached to, and moved/guided by, a user/entity, such as a person or a vehicle, along a path within the region of interest. The mobile device may be utilized to receive control input from a user/entity and provide guidance information to the user/entity related to movement along the path within the region of interest. In other implementations, any other type of aerial platform that may be tethered or attached to a moving entity may be used. For example, a kite may be used as the aerial platform.
Use of the embodiments optimizes coverage of a target area for imaging in a region of interest in spite of variations in the position of the camera device and aerial platform with respect to an anchor or tether point. The embodiments allow a user to acquire high quality aerial imagery over extended areas of a region, or over long stretches of time, while keeping the total cost of such acquisition much lower than when using powered aerial vehicles. For example, implementations have application in agricultural farm monitoring, flood analysis and crowd monitoring. These applications conventionally require expensive infrastructure (such as camera towers or human operated blimps) to be cost effective and feasible.
The embodiments provide advantages over other conventional techniques such as use of UAVs, satellites and cameras attached to airplanes. For example, much satellite imagery data is available at no cost, but the imagery data has shortcomings. Commonly available satellite imaging resolution is poor; the best resolution is on the order of 46 cm per pixel. Also, image data collected from satellites is severely affected by cloud cover and, hence, not suitable for many applications. Additionally, satellite images are usually old when obtained, since most satellites have a turnaround period on the order of days. Even though better images that provide good resolution may be obtained through private commercial satellite systems, these commercial images are very expensive, normally have a minimum order requirement, and cost thousands of dollars. The aforementioned factors make satellite imagery unsuitable for many applications, especially small-scale ones. Airplane cameras provide better resolution images than satellites, but suffer from similar disadvantages, as they are also very expensive.
Also, although advancements in UAV research have created a range of drone types for various uses, drones also have many disadvantages. A mid to heavy payload carrying drone may cost thousands of dollars, so the capital investment cost of using drones may be high. UAVs also have limited battery life, which mandates a limited flight time, and a UAV needs to be charged regularly for continuous image capture. If a UAV's batteries undergo several cycles of charging/discharging, battery longevity is reduced and the batteries need to be replaced. Thus drones incur not just an initial capital cost but also a moderately high operational cost. One big disadvantage of using UAVs comes in the form of regulations which restrict how, where and when UAVs may be used.
The embodiments of this disclosure provide low-cost alternatives to the above discussed methods of aerial imaging. The embodiments provide systems having longevity. Once the system is up and running, it should last for a long enough time without the need for human intervention. The embodiments also provide systems that keep the cost of the system as low as possible, making it economical for persons in developing countries to easily use the system and its new technologies. Also, the aerial platform based imaging of the embodiments is flexible and programmable. A user is able to choose the area of interest, the quality of the imagery, the duty cycling, etc., flexibly, as per need. A user is also able to adjust battery life for given applications by remotely changing parameters of the camera device related to power consumption. For example, a user may adjust a length of time for capture of a still image, a length of time for capture of a video, or a number of cycles per time period for taking one or more images in order to adjust power consumption in view of available battery power.
The embodiments utilize an aerial platform such as a tethered balloon to be used as a low-cost drone to carry a camera device. Such an aerial platform is low cost and may be shared across many users. For example, in an agriculture application a group of farmers interested in documenting crop growth could share a single imaging system. The imaging system may be made mobile by tethering the system to a moving vehicle instead of a stationary point on the ground. Usually, an aerial imaging system with an unstable camera is not preferred for creating panoramic views of an area of interest. However, the embodiments use techniques and methods that account for wind motion by flexibly mounting the camera device to the aerial platform to allow the camera device to remain substantially parallel to the ground, and also correct for wind motion by stabilizing the camera and using a pre-planned path to capture aerial imagery. Also, the pre-planned path includes two-step path determination that accounts for wind disturbances mid-way through the imaging. In addition, implementations of the embodiments utilize image matching techniques that extract a plurality of features from the plurality of images, reject selected features of the plurality of features that do not match across the plurality of images, and determine a homography that allows a current image to be aligned to a previous image. This also helps in accounting for wind motion and the susceptibility of the aerial platform to translations and rotations in the air due to the impact of wind.
Stationary mode of operation is suitable for applications where the region of interest for imaging remains constant and changes in the area are to be tracked regularly for a long period of time. In this mode, the level of human intervention is minimal. For example, this mode may be used for imaging crop growth or flooding at a useful time granularity. For example, the balloon 102 may be tethered to a single stationary point on the ground for an extended period of time (days to weeks). The balloon 102 may be a reusable helium filled balloon with a payload consisting of camera device 104, which is programmable and has its own power source. The camera device 104 may be pointed towards the object of interest, which in this case is the ground plane of the area of interest.
The GCD 106 may perform two functions. The first function is to upload the imagery 124 to the gateway node 108, which has connectivity to the internet, using an appropriate wireless technology having channels configured according to a Wi-Fi standard, or configured to operate using channels in the TV white spaces (TVWS). The second function is to enable duty cycling of the camera device 104. The GCD 106 may be programmed to turn on the camera device 104 remotely, trigger the camera device 104 to capture imagery, transfer the imagery over the wireless interface, and power off the camera device 104, essentially acting as a remote controller by sending commands 122 to camera device 104.
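The GCD's remote-controller role may be pictured with the following minimal sketch. The CameraLink class and its methods are hypothetical stand-ins, since the disclosure does not specify a command protocol or wireless API for commands 122.

```python
import time

# Hypothetical camera command interface; method names are illustrative
# stand-ins for whatever protocol carries commands 122.
class CameraLink:
    def power_on(self) -> None:
        print("camera: on")

    def capture(self) -> bytes:
        print("camera: capture")
        return b"<imagery bytes>"

    def power_off(self) -> None:
        print("camera: off")

def run_duty_cycle(link: CameraLink, period_s: float, cycles: int, upload) -> None:
    """Wake the camera, capture, upload, and power down each cycle."""
    for _ in range(cycles):
        link.power_on()
        imagery = link.capture()
        link.power_off()      # camera stays off between cycles to save power
        upload(imagery)       # e.g., forward imagery 124 to gateway node 108
        time.sleep(period_s)

run_duty_cycle(CameraLink(), period_s=1.0, cycles=3, upload=lambda data: None)
```

Keeping the camera powered off between cycles is what makes the long stationary deployments (days to weeks) described above practical on a single camera battery.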
The gateway node 108 may also perform two functions. First, gateway node 108 functions as a node with computational capabilities not provided by camera device 104 or GCD 106. The gateway node 108 may execute initial vision based post-processing on the imaging data 118 that is being sent from the GCD 106. Second, the gateway node 108 may act as a gateway to upload data 112 to the cloud network 110, where further applications could run on the imaging data and provide the user with further processed data 116. In one implementation, the local processing performed by GCD 106 may be used to conserve bandwidth (as videos can be very large) as well as to maintain system robustness during any cloud connectivity outages. In this implementation, long-term analytics may be run on the cloud network 110, and the gateway node 108 may process the immediate data and present the farmer with immediate short term analytics 120. In an alternative implementation, the GCD 106 may directly upload the videos to the cloud network 110 without using the gateway node 108. Then, instead of accessing imaging data locally, a user may access the imaging data directly from the cloud network 110. In other example implementations, multiple cameras may be strategically placed to ensure coverage. For example, multiple cameras may be used when height regulations and/or the camera's field-of-view (FOV) limits the area imaged by a single camera device.
Mobile mode of operation may be used where the region of interest for imaging is larger than what can be achieved using a stationary camera and when the granularity at which updated imagery data is needed is appropriate for use of mobile mode. For example, if a large region is to be mapped once a day, mobile mode may be used. Mobile mode may also provide an advantage if a community or group of users want to share resources (say for example a farming community in a developing country).
In an example implementation, the system 100 utilizes a path planning function configured in GCD 106. The path planning function is configured to first determine and efficiently utilize the area captured by images or video taken from camera device 104 for a current system configuration. The lengths of ground covered by a single image depend on the height h of the camera device 104 and its FOV: the horizontal length l covered is a function of the height, denoted f(h), and the vertical length b covered is given by a corresponding function g(h).
The aerial platform/balloon 102 for mounting the camera device 104 may be designed in such a way that the camera faces the ground with maximum/high probability. However, there is a rotatory motion about the axis normal to the ground (in the plane parallel to the ground). Even if the balloon 102 is currently stationary and wind does not shift the tethered balloon, it is still difficult to exactly estimate what area is getting imaged, because of the local rotation at the pivot where camera device 104 is attached to the balloon.
Because of this rotation, the area guaranteed to be imaged may be bounded by a circle inscribed within the rectangular image footprint, with radius

r = ½ min(b, l) = ½ min(f(h), g(h))
As the radius of the circle is a function of the height of the balloon 102 and the FOV of the camera (which may remain static during a single imaging session of a region of interest), the area imaged by the camera can be lower-bounded by the circle of the appropriate radius.
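As a concrete illustration, the footprint lengths and the lower-bound radius might be computed as in the following sketch. The disclosure leaves the functions f(h) and g(h) abstract, so the pinhole-model tangent form used here (each footprint length equal to 2·h·tan(FOV/2)) is an assumption, as are the function and parameter names.

```python
import math

# Sketch of the lower-bound footprint circle described above, under an
# assumed pinhole model for the footprint lengths l and b.

def footprint_radius(h_m: float, fov_h_deg: float, fov_v_deg: float) -> float:
    """Radius of the circle guaranteed to be imaged at camera height h."""
    l = 2.0 * h_m * math.tan(math.radians(fov_h_deg) / 2.0)  # horizontal length
    b = 2.0 * h_m * math.tan(math.radians(fov_v_deg) / 2.0)  # vertical length
    return 0.5 * min(b, l)  # r = ½·min(b, l), as above

# Example: camera at 50 m with a 60° x 40° field of view.
print(f"r = {footprint_radius(50.0, 60.0, 40.0):.1f} m")
```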
Referring now to the mobile mode of operation, the process begins at 302, where GCD 106 receives input from the user at user interface 107 defining the region of interest to be imaged, including dimensional parameters of the region and indications of important areas within the region.
At 304, GCD 106 then pairs with camera device 104 through their connecting wireless interface. At 306, GCD 106 determines the height of the balloon/camera device 104. The height of the camera device may be received as input from the user. At 306, GCD 106 may also receive information on an image resolution to be used for camera device 104. GCD 106 may then determine the FOV of camera device 104 for the imaging of the region of interest.
At 308, GCD 106 determines path 1 and presents path 1 to the user along with an indication of important areas of the region of interest at user interface 107.
At 310, GCD 106 tracks and/or guides the user through user interface 107 to stay on path 1 as the user traverses the path 1 presented to the user. At 312, GCD 106 tracks areas that have not been imaged as the user traverses path 1. GCD 106 may receive data associated with the position of the camera device as the user traverses the first path. For example, GCD 106 may receive GPS data from camera device 104 to allow GCD 106 to determine the position of camera device 104 and the user. Also, GCD 106 may display an indication of the user's position to help guide the user to traverse the path from beginning 402 to end 404.
As the user traverses path 1, the user may move the balloon 102 and camera device 104 along path 1 above the user so that the camera device takes images along the path 1.
At 314, when the path 1 has been traversed, GCD 106 determines the areas that have not been imaged, areas of high importance that are to be imaged, and a path 2.
Path 2 is then presented to the user with an indication of important unimaged areas of the region of interest at user interface 107. At 316, GCD 106 tracks and/or guides the user through user interface 107, to stay on path 2 as the user traverses the path 2 presented to the user. GCD 106 may guide the user along path 2 in the same manner as it guided the user along path 1. As the user traverses the path 2, the user moves the balloon 102 and camera device 104 along the path 2 above the user and ground so that the camera device takes images along the path 2.
In the implementation, GCD 106 determines the first path to image the region at 308 assuming that there are no random effects associated with wind. Wind has both positive and negative effects: it causes camera motion, which can cause a larger area to be imaged than was intended, but it can also make it difficult to image intended areas, depending on the direction and intensity of the wind flow. GCD 106 nevertheless ignores the effect of wind on the balloon path and outputs a deterministic path which minimizes the time taken to image an area.
The process begins at 502 where GCD 106 determines the convex hull of the region to be imaged. Next, at 504, GCD 106 determines a path in the direction of the shortest ‘width’ of the convex hull taking into account the height of the balloon. For example, if a straight line of length l is traversed by the user, then the area imaged is of size l×w, where w is the width of each image. GCD 106 determines coverage of the convex polygon with ribbons of width w such that the length of the ribbon plus the number of ribbon stripes used to cover the area is minimized. Laying ribbons out in any direction can potentially incur some wastage on the edges. If GCD 106 ignores those areas, the area covered by any layout is the same. Thus, the length of ribbon used (which is equal to area divided by w) is also the same. The different layouts to cover the area then only differ by the number of stripes. GCD 106 minimizes the number of stripes by laying them down along the smallest ‘width’ of the convex polygon. The smallest ‘width’ of the polygon is defined as the smallest edge of all the rectangles which cover the given polygon. Then, at 506, GCD 106 determines path 1 for the region of interest, and, at 508, presents path 1 to the user of GCD 106.
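The stripe-layout step may be pictured with the following sketch, which computes the smallest 'width' of a convex region and the resulting number of ribbon stripes. It assumes the convex hull vertices are already given in order (a real implementation would first compute the hull, at 502) and relies on the standard fact that the minimum width of a convex polygon is attained perpendicular to one of its edges; all names are illustrative.

```python
import math

# Sketch of the path-1 ribbon layout: find the narrowest direction of
# the convex hull and count ribbons of width w needed to cover it.

def smallest_width(hull: list[tuple[float, float]]) -> float:
    """Smallest 'width' of a convex polygon given as ordered vertices."""
    best = float("inf")
    n = len(hull)
    for i in range(n):
        (x0, y0), (x1, y1) = hull[i], hull[(i + 1) % n]
        theta = math.atan2(y1 - y0, x1 - x0)
        nx, ny = -math.sin(theta), math.cos(theta)   # normal to this edge
        d = [px * nx + py * ny for px, py in hull]   # project onto normal
        best = min(best, max(d) - min(d))
    return best

def num_stripes(hull: list[tuple[float, float]], ribbon_w: float) -> int:
    """Number of ribbon stripes of width ribbon_w needed to cover the hull."""
    return math.ceil(smallest_width(hull) / ribbon_w)

# Example: a 100 m x 40 m field covered by 15 m-wide image ribbons.
field = [(0.0, 0.0), (100.0, 0.0), (100.0, 40.0), (0.0, 40.0)]
print(num_stripes(field, 15.0))  # 3 stripes laid along the 40 m width
```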
GCD 106 determines the path 2 using the information obtained from the camera device 104 during the traverse of path 1. Path 2 is determined to provide coverage of areas that remain unimaged after path 1 is traversed by the user with the balloon 102 and camera device 104.
At 516, GCD 106 determines the user's current position and sets the user's current position as a vertex. The distance from the user's position to all the other vertices is then determined. Then, at 518, GCD 106 determines path 2 using, for example, the traveling salesman solution with the user's current position as the starting point. At 520, GCD 106 then presents path 2 to the user.
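The ordering of path 2 may be sketched as follows. The disclosure refers to 'the traveling salesman solution'; the greedy nearest-neighbor heuristic below is only an illustrative stand-in for whatever exact or approximate solver an implementation uses, and the names are assumptions.

```python
import math

# Sketch of path-2 ordering: visit all unimaged/important-area vertices
# starting from the user's current position, nearest first.

def plan_path2(start: tuple[float, float],
               vertices: list[tuple[float, float]]) -> list[tuple[float, float]]:
    """Greedy nearest-neighbor tour over the path-2 vertices."""
    path, remaining, current = [start], list(vertices), start
    while remaining:
        nearest = min(remaining, key=lambda v: math.dist(current, v))
        remaining.remove(nearest)
        path.append(nearest)
        current = nearest
    return path

# Example: three unimaged areas to revisit from the user's end position.
print(plan_path2((0.0, 0.0), [(50.0, 10.0), (5.0, 5.0), (30.0, 40.0)]))
```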
An aerial platform such as a tethered balloon is subject to translations and rotations in the air due to the impact of wind. The motion caused by wind makes the imagery collected by the imaging system difficult to interpret. To make sense of subsequent images, the user is forced to constantly recalibrate his or her mental mapping between the image plane and the physical world, which makes the user interface highly cumbersome and non-intuitive. Furthermore, the motion makes the data difficult to use in machine learning algorithms and other processing. In an implementation, to account for this, GCD 106 may realign images across time.
The process begins at 602, where GCD 106 extracts features from each image captured by camera device 104. At 604, GCD 106 matches features extracted from one image with features extracted from another image. At 606, GCD 106 rejects the features that do not match across images. At 608, GCD 106 determines a homography that maps features in one image to features in another image. At 610, GCD 106 applies the homography to the current image and ensures that the current image is aligned to the previous images. GCD 106 may perform this process on each image received from camera device 104.
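A minimal sketch of these steps using OpenCV follows; the disclosure does not name a feature detector or matcher, so the use of ORB features, a ratio test, and RANSAC here is an assumption about one reasonable realization.

```python
import cv2
import numpy as np

def align(prev: np.ndarray, curr: np.ndarray) -> np.ndarray:
    """Warp curr so it is aligned to prev, per steps 602-610 above."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(prev, None)      # 602: extract features
    kp2, des2 = orb.detectAndCompute(curr, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des2, des1, k=2)       # 604: match features
    good = [m for m, n in matches                     # 606: reject non-matches
            if m.distance < 0.75 * n.distance]
    if len(good) < 4:  # a homography needs at least four correspondences
        raise ValueError("too few matches to estimate a homography")
    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # 608: homography
    h, w = prev.shape[:2]
    return cv2.warpPerspective(curr, H, (w, h))       # 610: align current image
```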
In an implementation, camera device 104 may comprise a gyro that transmits gyro readings as camera device 104 swings and pivots around various axes during traversal of the paths by a user. The gyro readings may be associated with each image or each frame of a video taken. The gyro readings may be used for frame rejection in the process of panorama construction and/or path planning. For example, the gyro readings may be used for long-term aerial monitoring in the process of panorama construction. The gyro readings may also be used for path planning when determining areas that remain unimaged and need to be imaged again. The gyro readings may indicate the images/times when the camera was tilted away from the ground plane. Then, based on the gyro readings, frames/images captured by the camera when it was tilted beyond an acceptable range may be discarded. The range of acceptable gyro readings may be a parameter that is input to the system. For example, a user may input the acceptable range of gyro readings into GCD 106 as part of the data entered at operation 302.
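Frame rejection based on the gyro readings may be as simple as the following sketch; the tilt representation and field names are illustrative assumptions, since the disclosure only specifies that frames tilted beyond an acceptable range are discarded.

```python
# Sketch of gyro-based frame rejection. Each frame carries the gyro tilt
# (degrees from the ground-plane normal) recorded at capture time.

def keep_frames(frames: list[dict], max_tilt_deg: float) -> list[dict]:
    """Keep only frames whose recorded tilt is within the acceptable range."""
    return [f for f in frames if abs(f["tilt_deg"]) <= max_tilt_deg]

frames = [{"id": 1, "tilt_deg": 3.0}, {"id": 2, "tilt_deg": 22.0}]
print(keep_frames(frames, max_tilt_deg=10.0))  # frame 2 is discarded
```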
In an implementation, execution of imaging application user interface programs 716, imaging application control programs 718, path 1 optimizing programs 720, and path 2 optimizing programs 722 causes processor 702 to perform operations that cause device 700 to perform the appropriate operations of the embodiments disclosed herein.
The example embodiments disclosed herein may be described in the general context of processor-executable code or instructions stored on memory that may comprise one or more computer readable storage media (e.g., tangible non-transitory computer-readable storage media such as memory 712). As should be readily understood, the terms “computer-readable storage media” or “non-transitory computer-readable media” include the media for storing of data, code and program instructions, such as memory 712, and do not include portions of the media for storing transitory propagated or modulated data communication signals.
While implementations have been disclosed and described as having functions implemented on particular wireless devices operating in a network, one or more of the described functions for the devices may be implemented on a different one of the devices than shown in the figures, or on different types of equipment operating in different systems.
Embodiments have been disclosed that include an apparatus comprising one or more processors and memory in communication with the one or more processors, the memory comprising code that, when executed, causes the one or more processors to control the apparatus to receive first data, the first data comprising information related to a region to be imaged and information related to a field of vision of a camera device positioned above a mobile entity, determine, based at least on the first data, a first path within the region and provide information on the first path to an entity, receive second data, the second data associated with the position of the camera device as the entity traverses the first path, determine, based at least on the second data, at least one unimaged area of the region that remains unimaged in the traverse of the first path, and, determine, based at least on the at least one unimaged area, a second path within the region and provide information on the second path to the entity. The second data may comprise global positioning data that is received from one or more sensors on the camera device. The second data may comprise camera position data received from one or more sensors on the camera device at the time of capture of each of a plurality of images by the camera device. The information related to the region to be imaged may comprise dimensional parameters of the region and at least one indication of an area of importance within the region. The information related to a field of vision of the camera device may comprise a camera height and a camera resolution.
The code, when executed, may further cause the one or more processors to control the apparatus to determine the first path by controlling the apparatus to determine a height of the camera device from the information related to a field of vision of the camera, determine a convex hull of the region, and, determine at least a portion of the first path by determining a shortest path in the direction of a shortest width of the convex hull taking into account the height of the camera device. The code, when executed, may also further cause the one or more processors to control the apparatus to determine the second path by controlling the apparatus to determine a height of the camera device and at least one area of importance within the region from the first data, generate at least one first vertex associated with the at least one unimaged area and at least one second vertex associated with the at least one area of importance, construct edges between each of the vertices of the at least one first and at least one second vertex, set a position of the mobile entity as a starting vertex and determine the distance from the mobile entity to each of the vertices of the at least one first and at least one second vertex, and, determine the second path by using the position of the mobile entity as the starting point. The code, when executed, may still further cause the one or more processors to control the apparatus to generate the at least one first vertex associated with the at least one unimaged area and the at least one second vertex associated with the at least one area of importance by controlling the apparatus to break each unimaged area of the at least one unimaged area that has an area greater than one image circle size into a first plurality of parts, set any unbroken unimaged area of the at least one unimaged area as a vertex, and set each broken unimaged area of the at least one unimaged area as a set of vertices based on the first plurality of parts to generate the at least one first vertex associated with the at least one unimaged area, break each important area of the at least one area of importance that has an area greater than one image circle size into a second plurality of parts, set any unbroken important area of the at least one area of importance as a vertex, and set each broken area of importance of the at least one area of importance as a set of vertices based on the second plurality of parts to generate the at least one second vertex associated with the at least one area of importance.
The apparatus may further comprise a mobile device including a user interface in communication with the one or more processors, and the code may further cause the one or more processors to control the mobile device to receive the information associated with a region to be imaged from the user interface in response to a user entering the input at the user interface. The apparatus may also further comprise a mobile device including a user interface in communication with the one or more processors, and the code further causes the one or more processors to control the mobile device to provide the information on the first path and the information on the second path to the entity by providing the information on the first path and the information on the second path to a user of the mobile device at the user interface.
The disclosed embodiments also include a system comprising an aerial platform configured to rise to a height above ground and including a steering apparatus that allows an entity on the ground to move the aerial platform in a desired direction when the aerial platform is above ground, a camera device attached to the aerial platform, the camera device including a position sensor, and, a mobile device including one or more processors and memory in communication with the one or more processors, the memory comprising code that, when executed, causes the one or more processors to control the apparatus to receive first data related to a region to be imaged by the camera device, receive second data, the second data including information related to a field of vision of the camera device, determine, based at least on the first data and the second data, a first path within the region and provide information on the first path to the entity, receive third data from the position sensor of the camera device as the entity moves the aerial platform along a traverse of the first path using the steering apparatus, and, determine, based at least on the third data, a second path within the region and provide information on the second path to the entity.
The second data may include a height of the camera and a camera resolution. The steering apparatus may comprise a tether. The aerial platform may comprise a balloon. The camera and mobile device may include a first and second wireless interface, respectively, and the third data may be sent from the camera on the first wireless interface and the mobile device may receive the third data at the second wireless interface. The code, when executed, may further cause the one or more processors to control the apparatus to determine the second path by controlling the apparatus to determine, based at least on the third data, at least one unimaged area of the region that remains unimaged in the traverse of the first path, and, determine, based at least on the at least one unimaged area, a second path within the region and provide information on the second path to the entity. The first data related to the region to be imaged may comprise dimensional parameters of the region and at least one indication of an area of importance within the region, and the mobile device further may comprise a user interface in communication with the one or more processors, and the code, when executed, may further cause the one or more processors to control the apparatus to receive the first data by controlling the apparatus to receive the first data at the user interface and provide the information on the first and the second paths to the entity at the user interface.
The disclosed embodiments also included a method comprising receiving first data at a device, the first data related to a region to be imaged by a camera suspended above ground by attachment to an aerial platform, receiving second data at the device, the second data including information related to a field of view of the camera, determining, at the device based at least on the first data and the second data, a first path within the region, providing, at the device, guidance information on the first path to an entity that moves the aerial platform in a desired direction along with the entity as the entity moves on the ground using a steering apparatus, receiving third data from the position sensor of the camera as the entity moves the aerial platform along a traverse of the first path using the steering apparatus, and, determining, based at least on the third data, a second path within the region and providing guidance information on the second path to the entity at the device. The method may further comprise receiving fourth data from the position sensor of the camera as the entity moves the aerial platform along a traverse of the second path using the steering apparatus, and, providing imaging information at the device showing imaged and unimaged areas of the region subsequent to a traverse of the second path by the entity. The determining the second path may comprise determining, based at least on the third data, at least one unimaged area of the region that remains unimaged in the traverse of the first path, and, determining, based at least on the at least one unimaged area, a second path within the region.
While the functionality disclosed herein has been described by illustrative example using descriptions of the various components and devices of embodiments by referring to functional blocks and processors or processing units, controllers, and memory including instructions and code, the functions and processes of the embodiments may be implemented and performed using any appropriate functional blocks, type of processor, circuitry or combinations of processors and/or circuitry and code. This may include, at least in part, one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), system-on-a-chip systems (SOCs), complex programmable logic devices (CPLDs), etc. Use of the term processor or processing unit in this disclosure is meant to include all such implementations.
Also, although the subject matter has been described in language specific to structural features and/or methodological operations or acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features, operations, or acts described above. Rather, the specific features, operations, and acts described above are disclosed as example embodiments, implementations, and forms of implementing the claims, and these example configurations and arrangements may be changed significantly without departing from the scope of the present disclosure. Moreover, although the example embodiments have been illustrated with reference to particular elements and operations that facilitate the processes, these elements and operations may be combined with, or replaced by, any suitable devices, components, architecture or process that achieves the intended functionality of the embodiment. Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by one skilled in the art, and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims.
Claims
1. An apparatus comprising:
- one or more processors; and
- memory in communication with the one or more processors, the memory comprising code that, when executed, causes the one or more processors to control the apparatus to:
- receive first data, the first data comprising information related to a region to be imaged and information related to a field of vision of a camera device positioned above an entity;
- determine, based at least on the first data, a first path within the region and provide information on the first path to the entity;
- receive second data, the second data associated with the position of the camera device as the entity traverses the first path;
- determine, based at least on the second data, at least one unimaged area of the region that remains unimaged in the traverse of the first path; and,
- determine, based at least on the at least one unimaged area, a second path within the region and provide information on the second path to the entity.
2. The apparatus of claim 1, wherein the second data comprises global positioning data that is received from one or more sensors on the camera device.
3. The apparatus of claim 1, wherein the second data comprises camera position data received from one or more sensors on the camera device at the time of capture of each of a plurality of images by the camera device.
4. The apparatus of claim 1, wherein the information related to the region to be imaged comprises dimensional parameters of the region and at least one indication of an area of importance within the region.
5. The apparatus of claim 1, wherein the information related to a field of vision of the camera device comprises a camera height and a camera resolution.
6. The apparatus of claim 1, wherein the code, when executed, further causes the one or more processors to control the apparatus to determine the first path by controlling the apparatus to:
- determine a height of the camera device from the information related to a field of vision of the camera;
- determine a convex hull of the region; and,
- determine at least a portion of the first path by determining a shortest path in the direction of a shortest width of the convex hull taking into account the height of the camera device.
7. The apparatus of claim 1, wherein the code, when executed, further causes the one or more processors to control the apparatus to determine the second path by controlling the apparatus to:
- determine a height of the camera device and at least one area of importance within the region from the first data;
- generate at least one first vertex associated with the at least one unimaged area and at least one second vertex associated with the at least one area of importance;
- construct edges between each of the vertices of the at least one first and at least one second vertex;
- set a position of the entity as a starting vertex and determine the distance from the entity to each of the vertices of the at least one first and at least one second vertex; and,
- determine the second path by using the position of the entity as the starting point.
8. The apparatus of claim 7, wherein the code, when executed, further causes the one or more processors to control the apparatus to generate the at least one first vertex associated with the at least one unimaged area and the at least one second vertex associated with the at least one area of importance by controlling the apparatus to:
- break each unimaged area of the at least one unimaged area that has an area greater than one image circle size into a first plurality of parts;
- set any unbroken unimaged area of the at least one unimaged area as a vertex, and set each broken unimaged area of the at least one unimaged area as a set of vertices based on the first plurality of parts to generate the at least one first vertex associated with the at least one unimaged area;
- break each important area of the at least one area of importance that has an area greater than one image circle size into a second plurality of parts;
- set any unbroken important area of the at least one area of importance as a vertex, and set each broken area of importance of the at least one area of importance as a set of vertices based on the second plurality of parts to generate the at least one second vertex associated with the at least one area of importance.
9. The apparatus of claim 1, wherein the apparatus further comprises a mobile device including a user interface in communication with the one or more processors, and the code further causes the one or more processors to control the mobile device to receive the information associated with a region to be imaged from the user interface in response to a user entering the input at the user interface.
10. The apparatus of claim 1, wherein the apparatus further comprises a mobile device including a user interface in communication with the one or more processors, and the code further causes the one or more processors to control the mobile device to provide the information on the first path and the information on the second path to the entity by providing the information on the first path and the information on the second path to a user of the mobile device at the user interface.
11. A system comprising:
- an aerial platform configured to rise to a height above ground and including a steering apparatus that allows an entity on the ground to move the aerial platform in a desired direction when the aerial platform is above ground;
- a camera device attached to the aerial platform, the camera device including a position sensor, and,
- a mobile device including one or more processors and memory in communication with the one or more processors, the memory comprising code that, when executed, causes the one or more processors to control the apparatus to:
- receive first data related to a region to be imaged by the camera device;
- receive second data, the second data including information related to a field of vision of the camera device;
- determine, based at least on the first data and the second data, a first path within the region and provide information on the first path to the entity;
- receive third data from the position sensor of the camera device as the entity moves the aerial platform along a traverse of the first path using the steering apparatus; and,
- determine, based at least on the third data, a second path within the region and provide information on the second path to the entity.
12. The system of claim 11, wherein the second data includes a height of the camera and a camera resolution.
13. The system of claim 11, wherein the steering apparatus comprises a tether.
14. The system of claim 11, wherein the aerial platform comprises a balloon.
15. The system of claim 11, wherein the camera and mobile device include a first and second wireless interface, respectively, and the third data is sent from the camera on the first wireless interface and the mobile device receives the third data at the second wireless interface.
16. The system of claim 15, wherein the code, when executed, further causes the one or more processors to control the apparatus to determine the second path by controlling the apparatus to:
- determine, based at least on the third data, at least one unimaged area of the region that remains unimaged in the traverse of the first path; and,
- determine, based at least on the at least one unimaged area, a second path within the region and provide information on the second path to the entity.
17. The system of claim 11, wherein the first data related to the region to be imaged comprises dimensional parameters of the region and at least one indication of an area of importance within the region, and the mobile device further comprises a user interface in communication with the one or more processors, and the code, when executed, further causes the one or more processors to control the apparatus to receive the first data by controlling the apparatus to receive the first data at the user interface and provide the information on the first and the second paths to the entity at the user interface.
18. A method comprising:
- receiving first data at a device, the first data related to a region to be imaged by a camera suspended above ground by attachment to an aerial platform;
- receiving second data at the device, the second data including information related to a field of view of the camera;
- determining, at the device based at least on the first data and the second data, a first path within the region;
- providing, at the device, guidance information on the first path to an entity that moves the aerial platform in a desired direction along with the entity as the entity moves on the ground using a steering apparatus;
- receiving third data from the position sensor of the camera as the entity moves the aerial platform along the first path using the steering apparatus; and,
- determining, based at least on the third data, a second path within the region and providing guidance information on the second path to the entity at the device.
19. The method of claim 18 further comprising:
- receiving fourth data from the position sensor of the camera as the entity moves the aerial platform along a traverse of the second path using the steering apparatus; and,
- providing imaging information at the device showing imaged and unimaged areas of the region subsequent to a traverse of the second path by the entity.
20. The method of claim 18 wherein the determining the second path comprises:
- determining, based at least on the third data, at least one unimaged area of the region that remains unimaged in the traverse of the first path; and,
- determining, based at least on the at least one unimaged area, a second path within the region.
Type: Application
Filed: Jan 25, 2017
Publication Date: Jul 26, 2018
Applicant: Microsoft Technology Licensing, LLC (Redmond, WA)
Inventors: Ranveer Chandra (Kirkland, WA), Manohar Swaminathan (Bangalore), Vasuki Narasimha Swamy (Berkeley, CA), Zerina Kapetanovic (Seattle, WA), Deepak Vasisht (Cambridge, MA), Akshit Kumar (Chennai), Apurv Mehra (Bangalore), Avikalp Gupta (Bangalore), Sudipta Sinha (Redmond, WA), Rohit Patil (Hubli)
Application Number: 15/414,949