Methods and apparatus to reduce depth map size in collision avoidance systems
Methods and apparatus to reduce a depth map size for use in a collision avoidance system are described herein. Examples described herein may be implemented in an unmanned aerial vehicle. An example unmanned aerial vehicle includes a depth sensor to generate a first depth map. The first depth map includes a plurality of pixels having respective distance values. The unmanned aerial vehicle also includes a depth map modifier to divide the plurality of pixels into blocks of pixels and generate a second depth map having fewer pixels than the first depth map based on distance values of the pixels in the blocks of pixels. The unmanned aerial vehicle further includes a collision avoidance system to analyze the second depth map.
This disclosure relates generally to collision avoidance systems, and, more particularly, to methods and apparatus to reduce depth map size in collision avoidance systems.
BACKGROUND

Unmanned aerial vehicles (UAVs), commonly referred to as drones, are becoming more readily available and have developed into a rapidly growing market. UAVs are now being used in a wide variety of industries, such as farming, shipping, forestry management, surveillance, disaster scenarios, gaming, etc. Some UAVs employ collision avoidance systems that help control the UAV if a potential collision is detected. The collision avoidance system analyzes depth maps from one or more depth sensors to determine the location of object(s) near the UAV.
The figures are not to scale. Instead, the thickness of the layers or regions may be enlarged in the drawings. In general, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts.
DETAILED DESCRIPTION

Disclosed herein are example methods, apparatus, systems, and articles of manufacture for reducing (downsizing) depth maps for analysis by a collision avoidance system. Collision avoidance systems (sometimes referred to as object detection systems) are commonly used on unmanned aerial vehicles (UAVs) to monitor the surroundings of a UAV and prevent the UAV from colliding with an object, such as a tree, a building, a vehicle, etc. A collision avoidance system analyzes depth maps generated by a depth sensor. Depth maps, sometimes referred to as depth images, are produced with relatively high resolution, such as 848×480 pixels. However, processing over 400,000 pixels for every depth map, which may occur at a frequency of 30 or 60 Hertz (Hz), can be quite taxing on a microprocessor of a UAV. As such, larger, heavier, more robust processors are required to analyze the depth maps, which may be less desirable or feasible in a UAV.
Disclosed herein are example depth map modifiers that may be used to reduce the size of the depth maps provided to a collision avoidance system while still maintaining relevant data within the depth maps. Example depth map modifiers disclosed herein may utilize one or more example downsizing techniques to downsize the depth maps and reduce the overall data in each depth map. As a result, the downsized depth maps require fewer calculations to process by the collision avoidance system, thereby enabling the smaller, lighter, lower power processors to be utilized for performing the collision avoidance operations, which is desirable on UAVs and other vehicles.
Also disclosed herein are example foveated filter techniques to retain more or fewer pixels in certain areas of a depth map based on an orientation and current direction of travel of a UAV. For example, if a UAV is flying in a forward direction, then the location of objects in front of the UAV may be more important than objects to the sides and/or behind the UAV. In some examples, the depth map modifier applies a first size filter block capable of retaining relatively more detail in areas in the depth map that are of higher importance, and a second size filter block to areas in the depth map that are of lower importance, where detail is not as important. Such an example technique helps further reduce the amount of data included in the depth maps and, thus, reduces the computational load imparted on the collision avoidance system.
While examples disclosed herein are described in connection with a collision avoidance system implemented by a rotorcraft UAV, examples disclosed herein may likewise be implemented in connection with collision avoidance systems in other types of vehicles, such as fixed wing UAVs, manned aircraft, on-road vehicles, off-road vehicles, submarines, etc. Thus, examples disclosed herein are not limited to UAVs. Examples disclosed herein can be broadly used in other applications to similarly reduce computational loads on a collision avoidance system or any other type of system that uses depth maps.
The speeds of the motors 102a-102h are controlled by one or more motor controller(s) 108, sometimes referred to as motor driver(s). In the illustrated example, the motor controller(s) 108 are implemented by a processor 110 of the UAV 100. The motor controller(s) 108 apply power (e.g., via a pulse width modulation (PWM) signal) to control the speeds of the motors 102a-102h and, thus, to generate more or less thrust. In the illustrated example of
In the illustrated example, the UAV 100 includes a sensor system 114 including one or more sensors for detecting the orientation, position, and/or one or more other parameters of the UAV 100. In the illustrated example, the sensor system 114 includes an inertial measurement unit (IMU) 116. The IMU 116 may include one or more sensors that measure linear and/or angular velocities and accelerations to determine orientation, position, velocity, and/or acceleration of the UAV 100. For example, the IMU 116 may include a solid-state accelerometer, a gyro, a magnetometer, a static or dynamic pressure sensor, and/or any other IMU. In the illustrated example, the sensor system 114 includes a Global Positioning System (GPS) sensor 118 to detect GPS signals and determine the position (location), velocity, and/or acceleration of the UAV 100.
In the illustrated example, the UAV 100 includes a flight control system 120 that is implemented by the processor 110. The flight control system 120 is configured to control the flight of the UAV 100. In particular, the flight control system 120 sends commands or instructions to the motor controller(s) 108 to control the speeds of the motors 102a-102h of the UAV 100 in accordance with a desired flight path. The flight control system 120 uses inputs from the sensor system 114 to maintain the UAV 100 in a desired location or fly the UAV 100 along a desired path. In some examples, the flight control system 120 controls the flight of the UAV 100 based on one or more commands from a manual controller, for example, such as a remote control operated by a human pilot. In the illustrated example, the UAV 100 includes a wireless transceiver 122, which operates as a receiver and a transmitter, to communicate wirelessly with a ground controller and/or another component of an aerial system (e.g., an unmanned aerial system (UAS)). The wireless transceiver 122 may be a low frequency radio transceiver, for example. In other examples, other types of transceivers may be implemented. Additionally or alternatively, the flight control system 120 may control the flight of the UAV 100 based on one or more autonomous navigation operations. In the illustrated example of
In the illustrated example, the UAV 100 includes a collision avoidance system 126 that is implemented by the processor 110. The collision avoidance system 126 analyzes depth maps from one or more depth sensor(s) 128 to track objects in the area surrounding the UAV 100. In particular, the collision avoidance system 126 analyzes the depth maps to monitor for a potential collision with an object (e.g., a building, a tree, a vehicle, etc.). If the collision avoidance system 126 detects a potential collision, the collision avoidance system 126 instructs (or overrides) the flight control system 120 to take an appropriate course of action, such as decreasing the velocity of the UAV 100, changing the trajectory of the UAV 100, changing the altitude of the UAV 100, sending an alert to a remote controller, etc. For example, if the collision avoidance system 126 determines an object in the field of view is within a threshold distance (e.g., 1m) from the UAV 100, the collision avoidance system 126 may prohibit the UAV 100 from flying in that direction.
In the illustrated example, one depth sensor 128 is depicted, which is carried on the UAV 100 adjacent the camera 124. The depth sensor 128 is facing forward and measures the area in front of the UAV 100. In other examples, as disclosed in further detail herein, the UAV 100 may include more than one depth sensor and/or the depth sensor(s) may be positioned to face other directions (e.g., rearward, upward, etc.). The depth sensor 128 may include one or more devices, such as one or more cameras (e.g., a stereo vision system including two or more cameras, a Time-of-Flight (TOF) camera, which is a sensor that can measure the depths of scene points by illuminating the scene with a controlled laser or LED source and analyzing the reflected light, etc.), an infrared laser, and/or any other device that obtains measurements using any type of detection system such as visual, sonar, radar, lidar, etc. The depth sensor 128 generates depth maps (sometimes referred to as depth images) based on objects in the field of view. In some examples, the depth sensor 128 generates depth maps at a frequency of 60 Hz (i.e., 60 depth maps are generated every second). However, in other examples, the depth sensor 128 may generate depth maps at a higher or lower frequency. Therefore, in some examples, the depth sensor 128 provides means for obtaining a depth map.
In some examples, a depth map includes a plurality of pixels (digital values). Each pixel may be a data element or vector (e.g., a 16 bit string) defining a location of the respective pixel and a distance or depth value associated with the respective pixel. The location may be defined using a two-dimensional (2D) coordinate system (with coordinates X and Y), for example. The distance values represent the distances between the depth sensor 128 and surface(s) in the field of view corresponding to the respective pixels. In some examples, the distance values are defined by values ranging from 0 to 65,535 (corresponding to the unsigned numbers of a 16 bit integer), where 0 means no depth (or an invalid value, or a preset minimum from the depth sensor 128 (e.g., 10 cm)), 100 means 10 centimeters (cm), 1,000 means 1 m, and 65,535 means 65.535 m. In other examples, distance values may be defined using other numbering schemes (e.g., a 32 bit floating point value may represent distances from 0.0 to numbers higher than 65,535, within the precision limits of floating point accuracy).
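The 16 bit encoding described above can be sketched in a few lines; this is a minimal illustration assuming each raw unit represents 1 mm and 0 denotes no/invalid depth, and the function name is hypothetical:

```python
def raw_to_meters(raw):
    """Convert a raw 16 bit depth value to meters (None for no depth)."""
    if raw == 0:
        return None  # no depth / invalid reading (or sensor minimum)
    return raw / 1000.0  # 1 raw unit == 1 mm under the assumed encoding

assert raw_to_meters(0) is None
assert raw_to_meters(100) == 0.1      # 100 -> 10 cm
assert raw_to_meters(1000) == 1.0     # 1,000 -> 1 m
assert raw_to_meters(65535) == 65.535 # largest unsigned 16 bit value
```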
To address the above drawbacks, the example UAV 100 of
In the illustrated example, the depth map modifier 130 includes an example block definer 304, an example pixel distance analyzer 306, an example resizer 308, an example pixel distance assignor 310, an example depth map complier 312, an example high detail area definer 314, and an example memory 316. In a first example technique, the depth map modifier 130 downsizes or down-samples the depth map 300 to a smaller resolution based on the smallest (closest) distance value within each block of pixels. For example, the block definer 304 applies a filter block to the depth map 300 that divides the pixels into a plurality of blocks. An example filter block size is 5×5 pixels. For example, referring back to
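The first downsizing technique amounts to a minimum pooling over fixed-size tiles: each block collapses to its smallest (closest) distance value. The following NumPy sketch assumes the map dimensions divide evenly by the block size and that zeros mark invalid readings; it is an illustration of the technique, not the patented implementation:

```python
import numpy as np

def downsize_min(depth, block=5):
    """Downsize a depth map by keeping the smallest (closest) distance
    value in each block x block tile. Zeros (invalid readings) are
    ignored unless an entire tile is invalid."""
    h, w = depth.shape
    tiles = depth.reshape(h // block, block, w // block, block)
    # Replace invalid zeros with the dtype maximum so they never win the min.
    sentinel = np.iinfo(depth.dtype).max
    masked = np.where(tiles == 0, sentinel, tiles)
    out = masked.min(axis=(1, 3))
    # Tiles that were entirely invalid go back to 0 (no depth).
    return np.where(out == sentinel, 0, out).astype(depth.dtype)

depth = np.full((480, 840), 5000, dtype=np.uint16)  # everything at 5 m
depth[7, 12] = 800                                  # one close object at 0.8 m
small = downsize_min(depth)
print(small.shape)  # (96, 168): a 25x reduction in pixel count
print(small[1, 2])  # 800: the close object survives the downsizing
```

The key property for collision avoidance is that taking the minimum (rather than, say, the mean) can never make a nearby obstacle appear farther away than it is.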
In some examples, the first downsizing technique may be implemented by retaining the same two-dimensional coordinate system from the depth map 300. For example, after the pixel distance analyzer 306 identifies the smallest distance values in each of the blocks 204-210, the pixel distance assignor 310 assigns the smallest distance value in each of the blocks 204-210 to the center pixel (or coordinate) in each of the blocks 204-210. For example, if pixel X=2, Y=2 has the smallest distance value of the pixels 202 in the first block 204, the pixel distance assignor 310 assigns this smallest distance value to the center pixel, which is pixel X=3, Y=3. The depth map complier 312 generates the updated depth map 302 by saving only the center pixels from each of the blocks 204-210 (e.g., X=3, Y=3; X=8, Y=3; X=3, Y=8; X=8, Y=8; etc.) and their assigned distance values, and discarding the other pixels in the blocks 204-210. As such, similar to the example disclosed above, the number of pixels is reduced from 500,000 to 20,000.
In some examples, it may be desirable to retain the information regarding which of the pixels contributed to the closest distance value in each of the blocks. Therefore, in a second example downsizing technique, the depth map modifier 130 instead saves the pixels in each of the blocks 204-210 that contributed to the smallest (closest) distance values in the respective blocks 204-210. For example,
However, as can be seen in
While in the example downsizing techniques disclosed above one pixel from each block is saved and/or used to generate the updated depth map 302, in other examples, more than one pixel from each block may be saved and/or used to generate the updated depth map 302. For example, when using the second downsizing technique, the two pixels having the two smallest distance values may be saved and/or used to generate the updated depth map 302. In another example, the pixel with the smallest distance value and the pixel with the largest distance value may be saved from each block. In still other examples, more than two pixels from each block may be saved, or different numbers of pixels may be saved from different blocks (e.g., two pixels from a first block are saved, three pixels from a second block are saved, etc.).
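The second downsizing technique differs from the first in that it records *which* pixel produced each block's minimum, preserving that pixel's original coordinates. A sparse-output sketch, with illustrative function names and zero treated as invalid depth:

```python
import numpy as np

def downsize_keep_coords(depth, block=5):
    """For each block, keep an (x, y, distance) tuple for the pixel that
    contributed the smallest nonzero distance, preserving its original
    coordinates in the full-resolution map."""
    h, w = depth.shape
    kept = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = depth[by:by + block, bx:bx + block]
            if not (tile > 0).any():
                continue  # whole block invalid; nothing to keep
            # Mask invalid zeros so argmin finds the closest valid pixel.
            masked = np.where(tile == 0, np.iinfo(depth.dtype).max, tile)
            dy, dx = np.unravel_index(np.argmin(masked), tile.shape)
            kept.append((bx + dx, by + dy, int(tile[dy, dx])))
    return kept

depth = np.full((10, 10), 3000, dtype=np.uint16)
depth[2, 7] = 450  # close object in the upper-right block
points = downsize_keep_coords(depth)
print(points[1])   # (7, 2, 450): the original coordinates are retained
```

Extending this to keep the two closest pixels per block, or the closest and farthest, only changes which indices are appended for each tile.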
In some examples, the UAV 100 may include two or more depth sensors and the depth map modifier 130 may similarly perform one or more of the example downsizing techniques (disclosed above) on the depth maps generated by multiple depth sensors. For example,
In some examples, to help further reduce the computational load on the collision avoidance system 126, the example depth map modifier 130 may apply a foveated depth filter technique based on the orientation and current direction of travel of the UAV 100. For example, as illustrated in
To further downsize or reduce the amount of data in the depth maps, in some examples, the block definer 304 applies a first size filter block to the pixels that fall within the high detail area and applies a second size filter block to the pixels that are outside of the high detail area (i.e., within the low detail areas). For example, for pixels within the high detail area of the depth map from the first depth sensor 128a, the block definer 304 may apply a filter block of 5×5 pixels. However, for pixels within the low detail area(s) of the depth map from the first depth sensor 128a and the entire depth map from the second depth sensor 128b, the block definer 304 applies a larger size filter block, such as 10×10 pixels. Then, one or more of the example downsizing techniques disclosed herein (e.g., the first downsizing technique disclosed in connection with
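The foveated filtering described above can be sketched by choosing the block size per region: small blocks (more retained pixels) inside the high detail area, larger blocks (fewer retained pixels) elsewhere. This is a simplified illustration under the assumption that each block is classed by its top-left pixel and that the mask aligns with block boundaries:

```python
import numpy as np

def foveated_downsize(depth, high_mask, small=5, large=10):
    """Min-filter a depth map with a small block inside the high detail
    area (high_mask == True) and a larger block elsewhere. Returns the
    list of retained minimum distances."""
    h, w = depth.shape
    kept = []
    done = np.zeros((h, w), dtype=bool)  # pixels already consumed by a block
    for by in range(0, h, small):
        for bx in range(0, w, small):
            if done[by, bx]:
                continue  # already covered by an earlier (large) block
            blk = small if high_mask[by, bx] else large
            tile = depth[by:by + blk, bx:bx + blk]
            kept.append(int(tile.min()))
            done[by:by + blk, bx:bx + blk] = True
    return kept

depth = np.full((20, 20), 2000, dtype=np.uint16)
mask = np.zeros((20, 20), dtype=bool)
mask[:, :10] = True  # left half is high detail, e.g., toward travel direction
vals = foveated_downsize(depth, mask)
print(len(vals))  # 10: eight 5x5 blocks on the left, two 10x10 on the right
```

The high detail half contributes four times as many retained values per unit area, which is the intended trade-off: resolution where a collision is likely, aggressive reduction where it is not.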
In other examples, the UAV 100 may include more depth sensors and/or the depth sensors may be oriented in different directions. In some examples, the UAV 100 may include six (6) cameras, one facing in each direction, such as forward, backward, left, right, up, and down. In such an example, the depth sensors could provide a complete 360° view of all sides of the UAV 100, including above and below. In such an example, one or a few of the depth sensors would include the high detail area, whereas the remaining depth sensors would include low detail areas that can be drastically reduced to decrease computational loads on the collision avoidance system. Also, while in the example above the pixels were divided into high and low detail areas, in other examples, the pixels may be divided into more than two types of areas, and different size filter blocks may be used for each area (e.g., using a first size filter block on a high detail area, using a second size filter block (larger than the first size filter block) on an intermediate or medium detail area, and using a third size filter block (larger than the second size filter block) on a low detail area).
In some examples, the depth map modifier 130 may change the filter block size based on a future flight path. For example, referring back to
While in the illustrated example of
While an example manner of implementing the depth map modifier 130 of
Flowcharts representative of example hardware logic or machine readable instructions for implementing the processor 110 and/or the depth map modifier of
As mentioned above, the example processes of
“Including” and “comprising” (and all forms and tenses thereof) are used herein to be open ended terms. Thus, whenever a claim employs any form of “include” or “comprise” (e.g., comprises, includes, comprising, including, having, etc.) as a preamble or within a claim recitation of any kind, it is to be understood that additional elements, terms, etc. may be present without falling outside the scope of the corresponding claim or recitation. As used herein, when the phrase “at least” is used as the transition term in, for example, a preamble of a claim, it is open-ended in the same manner as the term “comprising” and “including” are open ended. The term “and/or” when used, for example, in a form such as A, B, and/or C refers to any combination or subset of A, B, C such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, and (6) B with C.
At block 1206, the depth map modifier 130 provides the updated depth map 302 (which has less data than the originally generated depth map 300) to the collision avoidance system 126. The collision avoidance system 126 analyzes the updated depth map 302 to track the distance(s) of object(s) near the UAV 100. If a potential collision is detected, the collision avoidance system 126 may communicate with the flight control system 120 to at least one of change a trajectory or change a velocity of the UAV 100. In some examples, the UAV 100 may transmit an alert back to a remote controller to alert a user of the potential collision.
At block 1208, the depth map modifier 130 determines whether another depth map is received from the depth sensor 128. If so, the example process begins again. As mentioned above, in some examples, the depth sensor 128 generates depth maps at a relatively high frequency, such as 60 Hz. Each time a depth map is generated, the example depth map modifier 130 may reduce the size of the depth map using the example process of
At block 1302, the block definer 304 applies a filter block to the depth map 300, which divides the pixels into a plurality of blocks (e.g., the blocks 204-210 shown in
At block 1304, the pixel distance analyzer 306 identifies the pixel in each block of pixels (e.g., in each of the blocks 204-210) with the smallest distance value (i.e., representing the closest surface to the UAV 100). At block 1306, the pixel distance assignor 310 assigns the smallest distance value identified in each block (at block 1304) to the center pixel or center coordinate in each block. Therefore, in some examples, the pixel distance assignor 310 provides means for assigning the smallest distance value identified in each block to the center pixel or center coordinate of the respective block.
At block 1308, the depth map complier 312 generates the updated depth map 302 by saving the center pixels and their assigned distance values from each block and discarding (e.g., deleting) the other pixels in each block. The updated depth map 302 is output to the collision avoidance system 126 in block 1206 of
At block 1402, the block definer 304 applies a filter block to the depth map 300, which divides the pixels into a plurality of blocks (e.g., the blocks 204-210 of
At block 1404, the pixel distance analyzer 306 identifies the pixel in each block of pixels (e.g., in each of the blocks 204-210) with the smallest distance value (i.e., representing the closest surface to the UAV 100). Therefore, in some examples, the pixel distance analyzer 306 provides means for identifying the smallest distance values in the blocks of pixels. At block 1406, the depth map complier 312 generates the updated depth map 302 by saving the pixels in each block having the smallest distance values (identified at block 1404) and discarding the other pixels in each block. In other words, certain ones of the pixels (with their original locations and distance values) from the depth map 300 are used in the updated depth map 302, whereas other ones of the pixels are discarded. The updated depth map 302 is output to the collision avoidance system 126 in block 1206 of
At block 1502, the block definer 304 applies a filter block to the depth map 300, which divides the pixels into a plurality of blocks (e.g., the blocks 204-210 of
At block 1504, the pixel distance analyzer 306 identifies the pixel with the smallest distance value (i.e., representing the closest surface to the UAV 100) and the pixel with the largest distance value in each of the blocks (e.g., in each of the blocks 204-210). At block 1506, for a given block of pixels in the original depth map 300, the pixel distance analyzer 306 compares the difference in the smallest distance value and the largest distance value for the block to a threshold distance. At block 1508, the pixel distance analyzer 306 determines whether the difference satisfies a threshold distance. If the difference satisfies the threshold distance (e.g., is below the threshold distance), at block 1510, the pixel distance assignor 310 assigns the smallest distance value identified in the block to the center pixel (or center coordinate) in the block (e.g., similar to the operation disclosed in connection with
At block 1514, the pixel distance analyzer 306 determines whether there is another block to be analyzed. If so, the process in blocks 1506-1512 is repeated. Once all the blocks are analyzed, at block 1516, the depth map complier 312 generates the updated depth map 302 by saving the center pixels and their assigned distance values for the block(s) that satisfied the threshold distance, saving the pixels having the smallest distance values for the block(s) that did not satisfy the threshold distance, and discarding the other pixels in the blocks. The updated depth map 302 is output to the collision avoidance system 126 in block 1206 of
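The per-block decision in blocks 1506-1512 can be sketched as follows: when the spread between a block's smallest and largest distances is within a threshold, the block is effectively flat and the minimum can be assigned to the block's center coordinate; otherwise the contributing pixel's original coordinates are preserved. The function name and threshold value are illustrative, and raw units are assumed to be millimeters (so 1000 is 1 m):

```python
import numpy as np

def downsize_hybrid(depth, block=5, threshold=1000):
    """Per block: assign the min to the center coordinate if the block's
    distance spread satisfies the threshold; otherwise keep the closest
    pixel's original coordinates. Returns sparse (x, y, distance) tuples."""
    h, w = depth.shape
    kept = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = depth[by:by + block, bx:bx + block]
            lo, hi = int(tile.min()), int(tile.max())
            if hi - lo <= threshold:
                # Flat block: minimum distance at the center coordinate.
                kept.append((bx + block // 2, by + block // 2, lo))
            else:
                # Large spread: preserve where the close surface really is.
                dy, dx = np.unravel_index(np.argmin(tile), tile.shape)
                kept.append((bx + dx, by + dy, lo))
    return kept

depth = np.full((10, 10), 3000, dtype=np.uint16)
depth[6, 1] = 500  # close object: spread of 2500 exceeds the threshold
points = downsize_hybrid(depth)
print(points[2])  # (1, 6, 500): original coordinate kept for this block
print(points[0])  # (2, 2, 3000): flat block collapsed to its center
```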
At block 1602, the example depth map modifier 130 receives the depth map(s) from one or more depth sensor(s), such as the depth sensors 128a, 128b. At block 1604, the high detail area definer 314 determines the orientation and current direction of travel of the UAV 100. In some examples, the high detail area definer 314 determines the orientation and direction of travel based on input from the sensor system 114 and/or the flight control system 120. Therefore, in some examples, the high detail area definer 314 provides means for determining a direction of travel of the UAV 100. At block 1606, the high detail area definer 314 defines the high detail area based on the orientation and direction of travel. In particular, the high detail area is oriented in the direction of travel (based on the UAV reference frame). The high detail area may be represented by a virtual cone extending from the UAV 100 in the direction of travel, for example.
At block 1608, the high detail area definer 314 identifies or determines the area(s) in the depth map(s) falling within the high detail area as being high detail area(s) and the other area(s) (falling outside the high detail area) as being low detail area(s). Therefore, in some examples, the high detail area definer 314 provides means for identifying the high detail area(s) and/or the low detail area(s) within pixels of a depth map. At block 1610, the block definer 304 applies a first size filter block, such as a 5×5 filter block, to the high detail area(s) and applies a second size filter block, such as 10×10 filter block, to the low detail area(s). At block 1612, the depth map complier 312 generates one or more updated depth maps using one or more of the example downsizing techniques disclosed herein (e.g., as disclosed in connection with
At block 1614, the depth map modifier 130 provides the updated depth map(s) to the collision avoidance system 126. The collision avoidance system 126 analyzes the updated depth map(s) to track the distance(s) of object(s) near the UAV 100. If a potential collision is detected, the collision avoidance system 126 may communicate with the flight control system 120 to at least one of change a trajectory or change a velocity of the UAV 100. In some examples, the UAV 100 may transmit an alert back to a remote controller to alert a user of the potential collision.
At block 1616, the depth map modifier 130 determines whether one or more other depth map(s) is/are received from the depth sensor(s). If so, the example process begins again. As mentioned above, in some examples, the depth sensors generate depth maps at a relatively high frequency, such as 60 Hz. Each time depth maps are generated, the example depth map modifier may reduce the size of the depth maps using the example technique in
The processor platform 1700 of the illustrated example includes a processor 1712. The processor 1712 of the illustrated example is hardware. For example, the processor 1712 can be implemented by one or more integrated circuits, logic circuits, microprocessors, GPUs, DSPs, or controllers from any desired family or manufacturer. The hardware processor may be a semiconductor based (e.g., silicon based) device. In this example, the processor 1712 may implement the example motor controller(s) 108, the example flight control system 120, the example collision avoidance system 126, the example depth map modifier 130, including any of the example block definer 304, the example pixel distance analyzer 306, the example resizer 308, the example pixel distance assignor 310, the example depth map complier 312, and/or the example high detail area definer 314, and/or, more generally, the example processor 110.
The processor 1712 of the illustrated example includes a local memory 1713 (e.g., a cache). The processor 1712 of the illustrated example is in communication with a main memory including a volatile memory 1714 and a non-volatile memory 1716 via a bus 1718. The volatile memory 1714 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS® Dynamic Random Access Memory (RDRAM®) and/or any other type of random access memory device. The non-volatile memory 1716 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 1714, 1716 is controlled by a memory controller.
The processor platform 1700 of the illustrated example also includes an interface circuit 1720. The interface circuit 1720 may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), a Bluetooth® interface, a near field communication (NFC) interface, and/or a PCI express interface.
In the illustrated example, one or more input devices 1722 are connected to the interface circuit 1720. In this example, the input device(s) 1722 may include the example sensor system 114, including the example IMU 116 and/or the example GPS sensor 118, the example camera 124, and/or the example depth sensor(s) 128. Additionally or alternatively, the input device(s) 1722 may permit a user to enter data and/or commands into the processor 1712.
One or more output devices 1724 are also connected to the interface circuit 1720 of the illustrated example. In this example, the output device(s) 1724 may include the motors 102a-102h. Additionally or alternatively, the output device(s) 1724 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube display (CRT), an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer and/or speaker. Thus, in some examples, the interface circuit 1720 may include a graphics driver card, a graphics driver chip and/or a graphics driver processor.
The interface circuit 1720 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver (e.g., the wireless transceiver 122), a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 1726. The communication can be via, for example, an Ethernet connection, a digital subscriber line (DSL) connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-sight wireless system, a cellular telephone system, etc.
The processor platform 1700 of the illustrated example also includes one or more mass storage devices 1728 for storing software and/or data. Examples of such mass storage devices 1728 include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, redundant array of independent disks (RAID) systems, and digital versatile disk (DVD) drives. The mass storage device may include, for example, the memory 316.
The machine executable instructions 1732 of
From the foregoing, it will be appreciated that example methods, apparatus, systems, and articles of manufacture have been disclosed for reducing or downsizing depth maps to reduce the computational load imparted on collision avoidance systems while still ensuring high accuracy results can be achieved. As such, less computational power is needed to analyze the depth maps, thereby enabling smaller, lighter processors to be used to implement the collision avoidance systems. Smaller, lighter processors are desirable with UAVs to reduce weight and reduce power consumption (which is limited in UAVs). Further, by reducing the computational loads on a processor, examples disclosed herein enable processor power to instead be used for other tasks. Examples disclosed herein can be implemented with collision avoidance systems used in any industry, such as on UAVs (fixed wing or rotorcraft), manned aircraft, autonomous driving vehicles (e.g., cars, trucks, etc.), robotics, etc.
Example methods, apparatus, systems, and articles of manufacture to reduce depth map size are disclosed herein. Further examples and combinations thereof include the following:
Example 1 includes an unmanned aerial vehicle including a depth sensor to generate a first depth map. The first depth map includes a plurality of pixels having respective distance values. The unmanned aerial vehicle of Example 1 also includes a depth map modifier to divide the plurality of pixels into blocks of pixels and generate a second depth map having fewer pixels than the first depth map based on distance values of the pixels in the blocks of pixels. The unmanned aerial vehicle of Example 1 further includes a collision avoidance system to analyze the second depth map.
Example 2 includes the unmanned aerial vehicle of Example 1, wherein the depth map modifier is to generate the second depth map by downsizing respective blocks into single pixels having assigned distance values corresponding to respective smallest distance values within the respective blocks.
Example 3 includes the unmanned aerial vehicle of Example 1, wherein, to generate the second depth map, the depth map modifier is to assign the smallest distance value in each block to a center pixel of the respective block, save the center pixels with the assigned smallest distance values, and discard the other pixels in the blocks.
Example 4 includes the unmanned aerial vehicle of Example 1, wherein the depth map modifier is to generate the second depth map by saving respective pixels from respective ones of the blocks having the smallest distance values in the respective blocks, and discarding the other pixels in the blocks.
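The downsizing described in Examples 2-4 amounts to block-wise min-pooling: each block of pixels collapses to the block's smallest distance value, so the closest (most safety-critical) obstacle in each region survives the reduction. A minimal sketch of that operation, assuming NumPy and a map whose dimensions are exact multiples of the block size (the function name and sample values here are illustrative, not part of the disclosure):

```python
import numpy as np

def min_pool_depth(depth_map, block):
    """Downsize a depth map by keeping each block's smallest distance.

    Hypothetical sketch: block-by-block min-pooling, assuming the map's
    dimensions are exact multiples of the block size.
    """
    h, w = depth_map.shape
    assert h % block == 0 and w % block == 0
    # Reshape into (rows, block, cols, block) and take the min over each block.
    view = depth_map.reshape(h // block, block, w // block, block)
    return view.min(axis=(1, 3))

first = np.array([
    [9.0, 8.0, 7.5, 7.0],
    [8.5, 2.0, 6.5, 6.0],
    [5.0, 5.5, 3.0, 4.0],
    [5.2, 5.8, 4.5, 1.0],
])
second = min_pool_depth(first, 2)  # 2x2 blocks -> 2x2 output
# second == [[2.0, 6.0], [5.0, 1.0]]
```

For instance, a 640x480 map reduced with 8x8 blocks this way yields an 80x60 map, a 64x reduction in the number of pixels the collision avoidance system must analyze, while never reporting an obstacle as farther away than it actually is.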
Example 5 includes the unmanned aerial vehicle of Example 1, wherein the depth map modifier is to generate the second depth map by comparing a difference between the smallest distance value and a largest distance value in each block to a threshold distance.
Example 6 includes the unmanned aerial vehicle of Example 5, wherein, if the difference for a block satisfies the threshold distance, the depth map modifier is to assign the smallest distance value in the block to a center pixel in the block, save the center pixel, and discard the other pixels in the block. If the difference for a block does not satisfy the threshold distance, the depth map modifier is to save the pixel having the smallest distance value in the block and discard the other pixels in the block.
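Examples 5 and 6 describe a hybrid of the two placement strategies: when a block's depth values are roughly uniform (the min-to-max spread satisfies the threshold), the smallest value can simply be assigned to the block's center pixel; when the spread is large, the pixel that actually holds the smallest value is kept at its true location so the obstacle's position is not shifted. One way this selection could be sketched (hypothetical code; the function name, tuple output format, and the direction of the threshold test are illustrative assumptions):

```python
import numpy as np

def reduce_blocks(depth_map, block, threshold):
    """Keep one pixel per block (hypothetical sketch of Examples 5-6).

    If the block's depth range is within `threshold` (roughly uniform),
    the smallest distance is assigned to the block's center pixel;
    otherwise the pixel that actually holds the smallest distance is
    kept at its original location.
    """
    saved = []  # (row, col, distance) for each block's surviving pixel
    h, w = depth_map.shape
    for r0 in range(0, h, block):
        for c0 in range(0, w, block):
            tile = depth_map[r0:r0 + block, c0:c0 + block]
            d_min, d_max = tile.min(), tile.max()
            if d_max - d_min <= threshold:
                # Uniform block: center pixel carries the smallest value.
                saved.append((r0 + block // 2, c0 + block // 2, d_min))
            else:
                # Varied block: keep the closest pixel where it actually is.
                r, c = np.unravel_index(tile.argmin(), tile.shape)
                saved.append((r0 + r, c0 + c, d_min))
    return saved

flat = np.array([[1.0, 1.1], [1.2, 1.05]])
varied = np.array([[1.0, 9.0], [9.0, 9.0]])
a = reduce_blocks(flat, 2, 0.5)    # uniform block -> center pixel (1, 1)
b = reduce_blocks(varied, 2, 0.5)  # varied block -> true location (0, 0)
```

The design tradeoff: the center-pixel path discards sub-block position information that a uniform block does not meaningfully carry, while the varied path preserves the exact location of a nearby obstacle that stands out from its surroundings.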
Example 7 includes the unmanned aerial vehicle of any of Examples 1-6, wherein the depth map modifier is to determine a direction of travel of the unmanned aerial vehicle, determine a high detail area and a low detail area within the plurality of pixels of the first depth map based on the direction of travel, divide the pixels within the high detail area into blocks of pixels with a first size filter block, and divide the pixels within the low detail area into blocks of pixels with a second size filter block, the second size filter block being larger than the first size filter block.
Example 8 includes the unmanned aerial vehicle of Example 7, wherein the depth sensor has a field of view, and the high detail area corresponds to a virtual cone projecting from the unmanned aerial vehicle in the direction of travel, the virtual cone being smaller than the field of view.
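Examples 7 and 8 describe a foveated variant: the region of the depth map toward the direction of travel is reduced with a small filter block (preserving detail where a collision is most likely), while the periphery is reduced with a larger block. As a rough sketch, the cone membership test can be approximated by a circle in image coordinates around the pixel onto which the travel direction projects (hypothetical code; the circle approximation, default block sizes, and output format are illustrative assumptions, not the disclosed implementation):

```python
import numpy as np

def foveated_min_pool(depth_map, center, radius, fine=2, coarse=8):
    """Foveated reduction sketch (hypothetical): coarse tiles whose
    centers fall within `radius` of `center` (the travel-direction
    projection, in pixel coordinates) are min-pooled with the small
    `fine` block; the rest collapse to one pixel per `coarse` tile.
    Assumes `coarse` divides the map dimensions and `fine` divides
    `coarse`. Returns a list of (row, col, distance) tuples.
    """
    h, w = depth_map.shape
    saved = []
    for r0 in range(0, h, coarse):
        for c0 in range(0, w, coarse):
            cy, cx = r0 + coarse / 2, c0 + coarse / 2
            if (cy - center[0]) ** 2 + (cx - center[1]) ** 2 <= radius ** 2:
                # High detail area: subdivide the tile into fine blocks.
                for r in range(r0, r0 + coarse, fine):
                    for c in range(c0, c0 + coarse, fine):
                        saved.append((r, c, depth_map[r:r + fine, c:c + fine].min()))
            else:
                # Low detail area: one pixel for the whole coarse tile.
                saved.append((r0, c0, depth_map[r0:r0 + coarse, c0:c0 + coarse].min()))
    return saved

depth = np.ones((16, 16))
pixels = foveated_min_pool(depth, center=(4, 4), radius=1)
# One coarse tile lands in the high detail area (16 fine pixels);
# the 3 remaining coarse tiles contribute 1 pixel each.
```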
Example 9 includes the unmanned aerial vehicle of any of Examples 1-8, wherein the depth sensor is a first depth sensor, and further including a second depth sensor, the second depth sensor facing a different direction than the first depth sensor.
Example 10 includes the unmanned aerial vehicle of Example 9, wherein the second depth sensor is to generate a third depth map, the depth map modifier is to generate a fourth depth map based on the third depth map, the fourth depth map having fewer pixels than the third depth map, and the collision avoidance system is to analyze the fourth depth map.
Example 11 includes the unmanned aerial vehicle of any of Examples 1-10, wherein the collision avoidance system is to modify at least one of a trajectory or a velocity of the unmanned aerial vehicle if a potential collision is detected.
Example 12 includes a method to reduce a size of a depth map, the method including applying, by executing an instruction with at least one processor, a filter block to a plurality of pixels of a first depth map to divide the plurality of pixels into blocks of pixels, where the pixels of the first depth map have respective distance values, identifying, by executing an instruction with the at least one processor, in the respective blocks, respective smallest distance values associated with the pixels in the respective blocks, and generating, by executing an instruction with the at least one processor, a second depth map based on the smallest distance values in the blocks of pixels, where the second depth map has fewer pixels than the first depth map.
Example 13 includes the method of Example 12, wherein the generating of the second depth map includes downsizing each block of pixels into one pixel having an assigned distance value corresponding to the smallest distance value within the block of pixels.
Example 14 includes the method of Example 12, wherein the generating of the second depth map includes assigning a smallest distance value in each respective block of pixels to a center pixel of the respective block, saving the center pixels with the assigned smallest distance values, and discarding the other pixels in the blocks.
Example 15 includes the method of Example 12, wherein the generating of the second depth map includes saving the pixels having the smallest distance values from the respective blocks of pixels, and discarding the other pixels in the blocks.
Example 16 includes the method of Example 12, wherein the generating of the second depth map includes determining, for each respective block, a difference between the smallest distance value and a largest distance value for the pixels in the respective block to determine differences for the respective blocks of pixels, and comparing the differences to a threshold distance. If the respective difference for a respective block satisfies the threshold distance, the method includes assigning the smallest distance value in the respective block to a center pixel in the respective block, saving the center pixel of the respective block, and discarding the other pixels in the respective block. If the respective difference for a respective block does not satisfy the threshold distance, the method includes saving the pixel having the smallest distance value in the respective block, and discarding the other pixels in the respective block.
Example 17 includes the method of any of Examples 12-16, further including obtaining the first depth map with a depth sensor carried on a vehicle.
Example 18 includes the method of Example 17, further including determining, by executing an instruction with the at least one processor, a direction of travel of the vehicle, and identifying, by executing an instruction with the at least one processor, a high detail area and a low detail area within the plurality of pixels based on the direction of travel, and wherein the applying of the filter block includes applying a first size filter block to the pixels within the high detail area and applying a second size filter block to the pixels within the low detail area, the second size filter block larger than the first size filter block.
Example 19 includes the method of Example 17, wherein the vehicle is an unmanned aerial vehicle.
Example 20 includes the method of Example 19, further including analyzing, via a collision avoidance system, the second depth map to monitor a location of an object near the unmanned aerial vehicle.
Example 21 includes a non-transitory machine readable storage medium including instructions that, when executed, cause one or more processors to at least apply a filter block to a first plurality of pixels of a first depth map to divide the first plurality of pixels into blocks of pixels, the first plurality of pixels having respective distance values, identify, in the respective blocks, respective smallest distance values associated with the pixels in the respective blocks, and generate a second depth map based on the smallest distance values in the blocks of pixels, where the second depth map has a second plurality of pixels, and the second plurality of pixels is less than the first plurality of pixels.
Example 22 includes the non-transitory machine readable storage medium of Example 21, wherein the instructions, when executed, cause the one or more processors to generate the second depth map by downsizing each block of pixels into one pixel having an assigned distance value corresponding to the smallest distance value within the block of pixels.
Example 23 includes the non-transitory machine readable storage medium of Example 21, wherein the instructions, when executed, cause the one or more processors to generate the second depth map by assigning a smallest distance value in each respective block of pixels to a center pixel of the respective block, saving the center pixels with the assigned smallest distance values, and discarding the other pixels in the blocks, the second plurality of pixels corresponding to the saved center pixels with the assigned smallest distance values.
Example 24 includes the non-transitory machine readable storage medium of Example 21, wherein the instructions, when executed, cause the one or more processors to generate the second depth map by saving the pixels having the smallest distance values from the respective blocks of pixels, and discarding the other pixels in the blocks, the second plurality of pixels corresponding to the saved pixels.
Example 25 includes the non-transitory machine readable storage medium of Example 21, wherein the instructions, when executed, cause the one or more processors to generate the second depth map by determining, for each respective block, a difference between the smallest distance value and a largest distance value for the pixels in the respective block to determine differences for the respective blocks of pixels, comparing the differences to a threshold distance, if the respective difference for a respective block satisfies the threshold distance, assigning the smallest distance value in the respective block to a center pixel in the respective block, saving the center pixel of the respective block, and discarding the other pixels in the respective block, and if the respective difference for a respective block does not satisfy the threshold distance, saving the pixel having the smallest distance value in the respective block, and discarding the other pixels in the respective block.
Example 26 includes the non-transitory machine readable storage medium of any of Examples 21-25, wherein the first depth map is obtained by a depth sensor carried on a vehicle.
Example 27 includes the non-transitory machine readable storage medium of Example 26, wherein the instructions, when executed, cause the one or more processors to determine a direction of travel of the vehicle, identify a high detail area and a low detail area within the first plurality of pixels based on the direction of travel, and apply the filter block by applying a first size filter block to the pixels within the high detail area and applying a second size filter block to the pixels within the low detail area where the second size filter block is larger than the first size filter block.
Example 28 includes the non-transitory machine readable storage medium of Example 26, wherein the vehicle is an unmanned aerial vehicle.
Example 29 includes an apparatus including means for applying a filter block to a first plurality of pixels of a first depth map to divide the first plurality of pixels into blocks of pixels, the first plurality of pixels having respective distance values, means for identifying, in the respective blocks, respective smallest distance values associated with the pixels in the respective blocks, and means for generating a second depth map based on the smallest distance values in the blocks of pixels, where the second depth map has a second plurality of pixels, and the second plurality of pixels is less than the first plurality of pixels.
Example 30 includes the apparatus of Example 29, further including means for downsizing each block of pixels into one pixel having an assigned distance value corresponding to the smallest distance value within the block of pixels, and wherein the means for generating the second depth map is to generate the second depth map based on the downsized blocks of pixels.
Example 31 includes the apparatus of Example 29, further including means for assigning a smallest distance value in each respective block of pixels to a center pixel of the respective block, and wherein the means for generating the second depth map is to save the center pixels with the assigned smallest distance values and discard the other pixels in the blocks, where the second plurality of pixels corresponds to the saved center pixels with the assigned smallest distance values.
Example 32 includes the apparatus of Example 29, wherein the means for generating the second depth map is to save the pixels having the smallest distance values from the respective blocks of pixels and discard the other pixels in the blocks, where the second plurality of pixels corresponds to the saved pixels.
Example 33 includes the apparatus of Example 29, wherein the means for identifying is to determine, for each respective block, a difference between the smallest distance value and a largest distance value for the pixels in the respective block to determine differences for the respective blocks of pixels, and compare the differences to a threshold distance. If the respective difference for a respective block satisfies the threshold distance, a means for assigning is to assign the smallest distance value in the respective block to a center pixel in the respective block, and the means for generating the second depth map is to save the center pixel of the respective block and discard the other pixels in the respective block. If the respective difference for a respective block does not satisfy the threshold distance, the means for generating the second depth map is to save the pixel having the smallest distance value in the respective block and discard the other pixels in the respective block.
Example 34 includes the apparatus of any of Examples 29-33, further including means for obtaining the first depth map carried on a vehicle.
Example 35 includes the apparatus of Example 34, further including means for determining a direction of travel of the vehicle and means for identifying a high detail area and a low detail area within the first plurality of pixels based on the direction of travel, wherein the means for applying is to apply a first size filter block to the pixels within the high detail area and apply a second size filter block to the pixels within the low detail area, where the second size filter block is larger than the first size filter block.
Example 36 includes the apparatus of Example 34, wherein the vehicle is an unmanned aerial vehicle.
Example 37 includes an unmanned aerial vehicle system including an unmanned aerial vehicle having a depth sensor to obtain a first depth map, the first depth map including a plurality of pixels having respective distance values, a depth map modifier to divide the plurality of pixels into blocks of pixels and generate a second depth map having fewer pixels than the first depth map based on distance values of the pixels in the blocks of pixels, and a collision avoidance system to analyze the second depth map to track a location of an object relative to the unmanned aerial vehicle.
Example 38 includes the unmanned aerial vehicle system of Example 37, wherein the depth map modifier is implemented by a processor carried on the unmanned aerial vehicle.
Example 39 includes the unmanned aerial vehicle system of Examples 37 or 38, wherein the collision avoidance system is implemented by the processor carried on the unmanned aerial vehicle.
Although certain example methods, apparatus, systems, and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus, systems, and articles of manufacture fairly falling within the scope of the claims of this patent.
Claims
1. An unmanned aerial vehicle comprising:
- a depth sensor to generate a first depth map, the first depth map including a plurality of pixels having respective distance values;
- a processor to: divide the plurality of pixels into blocks of pixels; and generate a second depth map having fewer pixels than the first depth map based on the distance values of the pixels in the blocks of pixels, the processor to generate the second depth map by saving respective pixels from respective ones of the blocks of pixels having the smallest distance values in the respective blocks of pixels and discarding the other pixels in the blocks of pixels; and
- a collision avoidance system to analyze the second depth map.
2. An unmanned aerial vehicle comprising:
- a depth sensor to generate a first depth map, the first depth map including a plurality of pixels having respective distance values;
- a processor to: divide the plurality of pixels into blocks of pixels; and generate a second depth map having fewer pixels than the first depth map based on the distance values of the pixels in the blocks of pixels, the processor to generate the second depth map by downsizing the respective blocks of pixels into single pixels having assigned distance values corresponding to respective smallest distance values within the respective blocks of pixels; and
- a collision avoidance system to analyze the second depth map.
3. An unmanned aerial vehicle comprising:
- a depth sensor to generate a first depth map, the first depth map including a plurality of pixels having respective distance values;
- a processor to: divide the plurality of pixels into blocks of pixels; and generate a second depth map having fewer pixels than the first depth map based on the distance values of the pixels in the blocks of pixels, the processor to generate the second depth map by assigning the smallest distance value in each block of pixels to a center pixel of the respective block of pixels, saving the center pixels with the assigned smallest distance values, and discarding the other pixels in the blocks of pixels; and
- a collision avoidance system to analyze the second depth map.
4. An unmanned aerial vehicle comprising:
- a depth sensor to generate a first depth map, the first depth map including a plurality of pixels having respective distance values;
- a processor to: divide the plurality of pixels into blocks of pixels; and generate a second depth map having fewer pixels than the first depth map based on the distance values of the pixels in the blocks of pixels by comparing a difference between the smallest distance value and a largest distance value in each block of pixels to a threshold distance; and
- a collision avoidance system to analyze the second depth map.
5. The unmanned aerial vehicle of claim 4, wherein:
- if the difference for a block of pixels satisfies the threshold distance, the processor is to assign the smallest distance value in the block of pixels to a center pixel in the block of pixels, save the center pixel, and discard the other pixels in the block of pixels; and
- if the difference for a block of pixels does not satisfy the threshold distance, the processor is to save the pixel having the smallest distance value in the block of pixels and discard the other pixels in the block of pixels.
6. An unmanned aerial vehicle comprising:
- a depth sensor to generate a first depth map, the first depth map including a plurality of pixels having respective distance values;
- a processor to: determine a direction of travel of the unmanned aerial vehicle; determine a high detail area and a low detail area within the plurality of pixels of the first depth map based on the direction of travel; divide the pixels of the plurality of pixels within the high detail area into first blocks of pixels with a first size filter block; divide the pixels of the plurality of pixels within the low detail area into second blocks of pixels with a second size filter block, the second size filter block being larger than the first size filter block; and generate a second depth map having fewer pixels than the first depth map based on the distance values of the pixels in the first and second blocks of pixels; and
- a collision avoidance system to analyze the second depth map.
7. The unmanned aerial vehicle of claim 6, wherein the depth sensor has a field of view, and the high detail area corresponds to a virtual cone to be projected from the unmanned aerial vehicle in the direction of travel, the virtual cone being smaller than the field of view.
8. The unmanned aerial vehicle of claim 1, wherein the collision avoidance system is to modify at least one of a trajectory or a velocity of the unmanned aerial vehicle if a potential collision is detected.
9. A method to reduce a size of a depth map, the method comprising:
- applying, by executing an instruction with at least one processor, a filter block to a plurality of pixels of a first depth map to divide the plurality of pixels into blocks of pixels, the pixels of the first depth map having respective distance values;
- identifying, by executing an instruction with the at least one processor, in the respective blocks of pixels, respective smallest distance values associated with the pixels in the respective blocks of pixels; and
- generating, by executing an instruction with the at least one processor, a second depth map based on the smallest distance values in the blocks of pixels, the second depth map having fewer pixels than the first depth map.
10. The method of claim 9, wherein the generating of the second depth map includes:
- assigning a smallest distance value in each respective block of pixels to a center pixel of the respective block of pixels;
- saving the center pixels with the assigned smallest distance values; and
- discarding the other pixels in the blocks of pixels.
11. The method of claim 9, wherein the generating of the second depth map includes:
- saving the pixels having the smallest distance values from the respective blocks of pixels; and
- discarding the other pixels in the blocks of pixels.
12. The method of claim 9, wherein the generating of the second depth map includes:
- determining, for each respective block of pixels, a difference between the smallest distance value and a largest distance value for the pixels in the respective block of pixels to determine differences for the respective blocks of pixels;
- comparing the differences to a threshold distance;
- if the respective difference for a respective block of pixels satisfies the threshold distance: assigning the smallest distance value in the respective block of pixels to a center pixel in the respective block of pixels; saving the center pixel of the respective block of pixels; and discarding the other pixels in the respective block of pixels; and
- if the respective difference for a respective block of pixels does not satisfy the threshold distance: saving the pixel having the smallest distance value in the respective block of pixels; and discarding the other pixels in the respective block of pixels.
13. The method of claim 9, further including obtaining the first depth map with a depth sensor carried on a vehicle.
14. The method of claim 13, wherein the vehicle is an unmanned aerial vehicle.
15. The method of claim 14, further including:
- analyzing, via a collision avoidance system, the second depth map to monitor a location of an object near the unmanned aerial vehicle.
16. A non-transitory machine readable storage medium comprising instructions that, when executed, cause one or more processors to at least:
- apply a filter block to a first plurality of pixels of a first depth map to divide the first plurality of pixels into blocks of pixels, the first plurality of pixels having respective distance values;
- identify, in the respective blocks of pixels, respective smallest distance values associated with the pixels in the respective blocks of pixels; and
- generate a second depth map based on the smallest distance values in the blocks of pixels, the second depth map having a second plurality of pixels, the second plurality of pixels less than the first plurality of pixels.
17. The non-transitory machine readable storage medium of claim 16, wherein the instructions, when executed, cause the one or more processors to generate the second depth map by downsizing each block of pixels into one pixel having an assigned distance value corresponding to the smallest distance value within the block of pixels.
18. The non-transitory machine readable storage medium of claim 16, wherein the instructions, when executed, cause the one or more processors to generate the second depth map by:
- assigning a smallest distance value in each respective block of pixels to a center pixel of the respective block;
- saving the center pixels with the assigned smallest distance values; and
- discarding the other pixels in the blocks of pixels, the second plurality of pixels corresponding to the saved center pixels with the assigned smallest distance values.
19. The non-transitory machine readable storage medium of claim 16, wherein the first depth map is obtained by a depth sensor carried on a vehicle.
20. The unmanned aerial vehicle of claim 1, wherein the collision avoidance system is implemented by the processor.
9778662 | October 3, 2017 | Tang |
20130009955 | January 10, 2013 | Woo |
20170177937 | June 22, 2017 | Harmsen |
20170301109 | October 19, 2017 | Chan |
20180032042 | February 1, 2018 | Turpin |
- Smirnov et al., “Methods for depth-map filtering in view-plus-depth 3D video representation,” EURASIP Journal on Advances in Signal Processing, published Feb. 14, 2012, retrieved from [https://asp-eurasipjournals.springeropen.com/articles/10.1186/1687-6180-2012-25] on Dec. 21, 2017, 41 pages.
- Le et al., “Directional Joint Bilateral Filter for Depth Images,” Sensors (Basel), Jul. 2014, 14(7), retrieved from [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4168506/] on Dec. 21, 2017, 12 pages.
- Guenter et al. “Foveated 3D Graphics,” Microsoft Research, Nov. 20, 2012, retrieved from [https://www.microsoft.com/en-us/research/publication/foveated-3d-graphics/] on Dec. 21, 2017, 13 pages.
Type: Grant
Filed: Dec 21, 2017
Date of Patent: Jun 9, 2020
Patent Publication Number: 20190051007
Assignee: Intel IP Corporation (Santa Clara, CA)
Inventors: Daniel Pohl (Puchheim), Markus Achtelik (Deutsch)
Primary Examiner: Yon J Couso
Application Number: 15/851,127
International Classification: G06T 7/55 (20170101); G06T 7/73 (20170101); H04N 19/176 (20140101); G08G 5/04 (20060101); G06T 3/40 (20060101); B64C 39/02 (20060101);