When we use drones for mapping purposes, we usually program them to fly autonomously along a pre-programmed flight path, collecting images with a specified front- and side-overlap. Once we have enough images covering the area to be mapped, we stitch them all together to create the final map. This really doesn't require us to think a lot about the length and width of each image: the only two limiting factors we usually need to consider are the final map's (or technically, orthorectified mosaic's) spatial resolution and the drone's maximum legal height. However, one of the projects we're currently working on is not so much a mapping project as it is ecological research, and to cut a long story short, ensuring that we can correctly apply a set of ecological statistical tools that account for double-counting and observer error requires us to be able to ascertain the length and width of each image at different drone altitudes. Also, as we work with a number of different drones (and thus drone cameras), I wanted to have a set of equations in place that we could use for a variety of situations.

All of this required some high-school-level trigonometry to work out. I was never a fan of trigonometry as a teenager, but using it to understand both the Triangular Greenness Index (as detailed in a previous post on vegetation indices) and the current problem was actually a lot of fun.

To break the problem down, the two fixed variables are a camera's field of view (FoV), which is described as the angle which it can 'see' at any given instant, and its aspect ratio, which is the ratio between the length and width of its images. For example, from the technical descriptions, a Phantom 3 Advanced camera has a FoV of 94° and user-selectable aspect ratios of 4:3 and 16:9, while the Phantom 4 Pro 2.0 has an FoV of 84° and user-selectable aspect ratios of 3:2, 4:3 and 16:9. In combination with the height of the drone, these two camera parameters determine the final image footprint. For more on aspect ratios, see this post, which recommends using the native aspect ratio for any given camera.

If the diagonal of the image is D, then D/2 is the length of the base of the right-angled triangle with the two included angles as θ°/2 and (180°−θ°)/2 (as becomes clear when the FoV angle θ is bisected to create two identical right-angled triangles). With the drone's height H as the other side of that triangle, tan(θ/2) = (D/2) / H, so:

D = 2 * H * tan(θ/2) — (1)

If A and B are the sides of the image, then the aspect ratio r = B/A, considering A to be the independent variable and B to be the dependent variable, so that:

B = r * A — (2)

Using the equation of a right-angled triangle again:

D^2 = A^2 + (r * A)^2 — substituting the value of B from (2)

A^2 = D^2 / (1 + r^2) — flipping the terms of the equation

A = D / √(1 + r^2) — (3)

These equations assume the camera to be perpendicular to the ground and don't account for lens distortion. For a far more complex solution (which I have to admit I barely understand), look up this post (StackExchange) where mountainunicycler (GitHub user) describes nesting a Python script within LaTeX (what?) which then calculates the FoV of a drone-mounted camera and outputs a PDF with graphics (I can't even).
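To show how the trigonometry comes together in practice, here is a minimal Python sketch of the footprint calculation. The function name, parameter names, and the example altitude are my own illustration, not code from any of the drones' software; it assumes the FoV is the diagonal FoV and that the camera points straight down, as the derivation does.

```python
import math

def image_footprint(height_m, fov_deg, aspect_ratio):
    """Ground footprint (length, width) in metres of a nadir-pointing camera.

    height_m:     drone altitude above ground, H (metres)
    fov_deg:      diagonal field of view of the camera, theta (degrees)
    aspect_ratio: length/width ratio r of the image, e.g. 4/3 or 16/9
    """
    # (1) ground diagonal from the bisected-FoV triangle: D = 2 * H * tan(theta/2)
    d = 2 * height_m * math.tan(math.radians(fov_deg) / 2)
    # (3) shorter side: A = D / sqrt(1 + r^2)
    a = d / math.sqrt(1 + aspect_ratio ** 2)
    # (2) longer side: B = r * A
    b = aspect_ratio * a
    return b, a

# e.g. a 94-degree FoV camera at a hypothetical 100 m altitude, 4:3 aspect ratio
length, width = image_footprint(100, 94, 4 / 3)
```

At 100 m this gives a footprint a little over 170 m long and just under 130 m wide; swapping in a different FoV or aspect ratio covers the other cameras mentioned above.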