
Functions

voxelviewshed.m

posted Jun 23, 2014, 4:27 AM by Aslak Grinsted   [ updated Oct 23, 2014, 2:55 AM ]

voxelviewshed calculates the viewshed over a DEM. The viewshed is all the locations that are potentially visible from a given location. 
 
  USAGE: vis=voxelviewshed(X,Y,Z,camxyz)
 
  INPUTS:
  X,Y,Z: input DEM (regular grid).
  camxyz: 3-element vector specifying viewpoint.
 
  OUTPUT:
     vis: boolean visibility matrix (same size as Z)
 
  EXAMPLE:
      [X,Y]=meshgrid(-299:300);
      Z=abs(ifft2(ifftshift((hypot(Y,X)+1e-5).^(-2.1).*exp(rand(size(X))*2i*pi)))); %example terrain inspired by rafael.pinto
      Z=(Z-Z(300,300))/std(Z(:)); camxyz=[0,0,0.5];
      vis=voxelviewshed(X,Y,Z,camxyz);
      clf;
      surf(X,Y,Z,Z.*vis-~vis*2,'EdgeColor','none','FaceColor','interp');
      hold on;
      plot3(camxyz(1),camxyz(2),camxyz(3),'k*')
      camlight


See also the Engabreen feature tracking example for a use case.

The algorithm is custom-made and very fast. Conceptually, it works by draping curtains hanging down from the DEM grid points in progressively larger circles around the camera, while keeping track of the visually highest 'curtain point' in every compass direction.
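The same visibility principle can be sketched for a single ray of grid points. This is a hypothetical illustration, not the actual voxelviewshed implementation: a point along a profile is visible only if its elevation angle from the camera reaches a new running maximum.

```matlab
d = 1:100;                    % distance from the camera along one ray
z = cumsum(randn(1,100));     % a random terrain profile
camz = z(1) + 2;              % camera elevation above the first point
ang = atan2(z - camz, d);     % elevation angle of each profile point as seen from the camera
vis = ang >= cummax(ang);     % visible where the angle reaches a new running maximum
```

voxelviewshed applies this idea to all compass directions at once, which is why it only needs a single sweep outwards from the camera. (cummax requires a reasonably recent matlab.)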


Camera.m

posted Jun 23, 2014, 4:04 AM by Aslak Grinsted   [ updated Jun 24, 2014, 5:33 AM ]

The camera model holds both the internal camera parameters (focal lengths, distortion coefficients, etc.) and the external camera parameters (camera location and view direction). This class can be used to project back and forth between pixel and world coordinates once the camera parameters have been specified. It can also be used to calibrate the camera parameters given a set of ground control points.


A good example of how the camera class is used can be found in the Engabreen example.

 

  camera properties:
   imgsz   - size of image in pixels [#rows, #columns]
   f       - focal length in pixel units (two element vector [fx,fy])
   c       - camera center in pixel coordinates (two element vector: [cx,cy])
   k       - radial distortion coefficients. (six element vector: [k1-k6])
   p       - tangential distortion coefficients (two element vector: [p1,p2])
   xyz     - world coordinates of camera.
   viewdir - [yaw,pitch,roll]. Yaw: rotation about z (0=looking east)
                               Pitch: look up/down angle
                               Roll: camera roll (horizon tilt).
 
  camera Dependent/derived properties:
   (i.e. properties calculated from the camera parameters)
   R         - camera rotation matrix calculated from camera view
               direction (read only)
   fullmodel - a 20-element vector containing all camera properties.
               [camx,camy,camz,imgszy,imgszx,viewdiryaw,viewdirpitch,viewdirroll,fx,fy,cx,cy,k1-6,p1-2]
 
  camera Methods:
   camera      - constructor
   optimizecam - optimize the camera to minimize misfit between
                 projected world coordinates and pixel coordinates.
   project     - project world coordinates to image coordinates (3d->2d)
   invproject  - project image coordinates to world coordinates (2d->3d)



Setting up camera parameters

You set up the camera parameters using its constructor, and you can then modify the parameters afterwards, e.g. like this:

cam=camera([446722 7396671 770]) %specifying world coordinates only
cam.f=[6000 6000];
cam.viewdir=[90 0 0]*pi/180; %looking north

The Engabreen and Schneefernerkopf examples show some practical use cases. There are additional ways the constructor can be called; type "help camera.camera" at the matlab prompt to see the full help for the constructor.

Calibrating camera parameters from ground control points

The camera class contains an optimizecam method which can be used to calibrate the camera and also determine the external camera parameters. You give it a set of ground control points, i.e. a set of 3d world coordinates and their corresponding pixel coordinates. optimizecam will then minimize the misfit between the projected 3d points and the pixel coordinates. You can specify which parameters are free in the optimization.

[newcamera,rmse,AIC] = cam.optimizecam(worldcoordinates,pixelcoordinates,'00000111000000000000')

In this case the string '00000111000000000000' says that the 6th, 7th and 8th parameters should be free. The ordering refers to the ordering in the "fullmodel" vector, where elements 6-8 correspond to the three view direction angles. So, in this case we are only optimizing the view direction of the camera.

The Engabreen and Schneefernerkopf examples show how a simple radial distortion model is determined from the ground control points. Type "help camera.optimizecam" at the matlab prompt for more details on how to use it.
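Putting the pieces together, a hedged sketch of a view-direction calibration might look like this. The ground control points here are synthetic (generated by projecting made-up world points through a known camera); only the documented optimizecam signature and fullmodel ordering are assumed.

```matlab
% A "true" camera whose view direction we pretend not to know exactly.
truecam = camera([446722 7396671 770]);
truecam.f = [6000 6000];
truecam.viewdir = [90 0 0]*pi/180;                   % looking north

% Synthetic ground control points: world coordinates north of the camera,
% and their "measured" pixel coordinates from the true camera.
xyz = [446722+2000*(rand(20,1)-0.5), 7396671+2000+500*rand(20,1), 700+100*rand(20,1)];
uv = truecam.project(xyz);

% A camera with a perturbed initial guess of the view direction.
guesscam = camera([446722 7396671 770]);
guesscam.f = [6000 6000];
guesscam.viewdir = ([90 0 0]+[1 -0.5 0.3])*pi/180;   % slightly wrong yaw/pitch/roll

% Free only fullmodel elements 6-8 (yaw, pitch, roll) in the optimization.
[newcam,rmse] = guesscam.optimizecam(xyz,uv,'00000111000000000000');
```

With noise-free synthetic control points like these, the recovered view direction should match the true camera and rmse should be close to zero.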

Projecting from world coordinates to image coordinates (3d to 2d)

The project method makes it simple to project from world coordinates to pixel coordinates.
 [uv,~,inframe]=cam.project(xyz)
Here uv are the projected pixel coordinates, and inframe is a boolean array that is true where the projected point falls within the frame of the image.

See the Schneefernerkopf example for a simple demonstration of how it can be used.
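For instance, one might project a handful of world points and keep only those that land inside the image. This is a hedged sketch: cam is assumed to be a camera object configured as shown above, and the world coordinates are made-up example values.

```matlab
xyz = [446900 7397000 750; 447100 7397200 760];  % example world coordinates
[uv,~,inframe] = cam.project(xyz);               % project to pixel coordinates
uv = uv(inframe,:);                              % discard points outside the frame
hold on; plot(uv(:,1),uv(:,2),'r+')              % overlay on a displayed image
```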

Projecting from image coordinates to world coordinates (2d to 3d)

Projecting from 2d to 3d can conceptually be considered raytracing. Usually you want to find the point on a DEM that corresponds to given pixel coordinates. You can do this with invproject.

xyz=cam.invproject(uv,demX,demY,demZ)

The Engabreen example shows how this can be used in practice. Type "help camera.invproject" at the matlab prompt for more details on how to use it.

How do you calculate the focal length in pixel units from the camera specifications?

The manufacturer's focal length alone does not determine the field of view / zoom level of a lens. The focal length in mm together with the sensor size and image size determines the field of view that the lens is able to see. The camera class needs the focal length in pixel units, which can be calculated thus:

FocalLength=30; %mm (as per lens specifications)
SensorSize=[22.0 14.8]; %mm (this is the physical size of the sensor in the camera. It will depend on model.)
imgsz=size(A); %A is the image
f=imgsz([2 1]).*(FocalLength./SensorSize);


templatematch.m

posted Jun 23, 2014, 1:56 AM by Aslak Grinsted   [ updated Mar 2, 2015, 1:48 AM ]

 Feature tracking by template matching 

You provide templatematch with two images (A and B) and a set of coordinates in A that should be tracked. templatematch will then cut out a small template around each point and calculate the similarity within a search region in image B. The measure of similarity will typically be normalized cross correlation. The 'optimal' displacement (du,dv) will be returned with sub-pixel precision.

A simple example of template matching can be found here

In its simplest form you can use templatematch thus:

    [du, dv, peakCorr, meanAbsCorr, pu, pv] = templatematch(A,B) 

Note: u,v refers to pixel coordinates in the code below.






  USAGE:
     [du, dv, peakCorr, meanAbsCorr, pu, pv] = templatematch(A,B[,pu,pv][,parameter-value-pairs])
 
  INPUTS
     A,B: images
     pu,pv: pixel coordinates in A that should be located in B. (Default is a regular grid)
 
 
  NAMED PARAMETERS
     TemplateWidth,TemplateHeight: Size of templates in A (Default: 21).
     SearchWidth,SearchHeight: Size of search region in B (Default: TemplateWidth+40).
     SuperSample: super sampling factor of input images for improved subpixel accuracy. (default=1)
     Initialdu,Initialdv: initial guess of the displacement between A & B
     super: supersampling factor (input to imresize)
     ShowProgress: Boolean or cell-array of strings.
                   true (default) is used for a text progress bar.
                   A cell of strings is used to name the A & B images in a progress figure.
   Method: 'NCC' (default), 'NORMXCORR2' or 'PC'. The first two are normalized cross correlation implementations; 'PC' is phase correlation.
 
  OUTPUTS:
    du,dv: displacement of each point in pu,pv. [A(pu,pv) has moved to B(pu+du,pv+dv)]
    peakCorr: correlation coefficient of the matched template. 
    meanAbsCorr: The mean absolute correlation coefficient over the search
                 region is an estimate of the noise level.
    pu,pv: actual pixel centers of templates in A (may differ slightly from inputs because of rounding). 
 


A schematic of the key features of the Template Matching processing:

TODO: update figure so that it uses u,v for pixel coordinates rather than x,y.

Coordinates of features to be tracked (pu,pv):

pu and pv specify the pixel coordinates of the features in A that should be tracked. If they are not specified, templatematch will drape a regular grid of points over the entire image.
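For example, such a grid could also be generated explicitly. This is a hedged sketch (the 25-pixel spacing and 50-pixel margin are arbitrary illustration values, and templatematch is assumed to accept coordinate matrices):

```matlab
% Regular grid of tracking points with a margin so templates fit inside A.
[pu,pv] = meshgrid(50:25:size(A,2)-50, 50:25:size(A,1)-50);
[du,dv,peakCorr,meanAbsCorr] = templatematch(A,B,pu,pv);
```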

Template Width / Height:

The template size is controlled with the TemplateWidth/TemplateHeight input parameters. You can specify a different template size for each point. No single choice works well in all situations, and there are trade-offs to consider.

Pros of a large template size:
  • Fewer false matches, as the 'fingerprint' of the feature is more unique.
Cons of a large template size:
  • Reduced resolution of the output. High resolution is necessary for narrow outlet glaciers, and more data also helps in post-processing to find erroneous results.
  • Performance decreases with template size. This may be partly compensated by reducing the number of tracked points, as the effective resolution is decreased anyway.
  • Possibly greater total shear inside the template. Shear reduces the cross correlation and thus makes it harder to recognize the feature in image B.
So I recommend first considering the desired output resolution, and also simply looking at the image to judge what template size you would expect to work. Finally, I would experiment with different template sizes to find what works best in the particular situation.

Search region size (SearchWidth/SearchHeight):

The search region size is controlled with the SearchWidth & SearchHeight input parameters. Generally you want to make it as small as possible to minimize the probability of false matches. I recommend using SearchWidth=TemplateWidth+max_expected_displacement. Be generous with max_expected_displacement, so that the true displacement is not excluded.

Sometimes there may be an advantage to choosing an even larger search region, as it allows for a more accurate estimate of the signal to noise ratio: one of the outputs is the average absolute correlation inside the search region, which is normally used as the noise level in the signal to noise calculation.
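A hedged sketch of the recommendation above (the 30-pixel maximum expected displacement is an arbitrary example value; A, B, pu and pv are as in the earlier examples):

```matlab
maxdisp = 30;   % generous estimate of the largest expected displacement in pixels
[du,dv,peakCorr,meanAbsCorr] = templatematch(A,B,pu,pv, ...
    'TemplateWidth',21,'SearchWidth',21+maxdisp);
```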

Super-sampling:

You can specify a super sampling factor which will resample the template and search images in order to improve the sub-pixel precision. The same factor can also be used to downsample the images (when SuperSample<1) to a coarser resolution, if you want to speed up the calculation at the expense of precision. This approach is used in the 'rock matching' of the Engabreen example.

The good performance of super-sampling for achieving sub-pixel precision can be seen in Debella-Gilo and Kääb (2011).

Initial guess of displacement (initialdu/initialdv):

The larger the search region, the greater the chance that other features will be visually similar to the template being sought. Therefore it is generally a good idea to keep the search region as small as possible. At the same time, however, the search region has to be wide enough to contain the true displacement. By providing an initial guess of the displacement, you can reduce the search window size.

Advanced tip: you can use progressively finer scales, with initial displacements taken from the preceding coarser-scale matches. This is similar to pyramidal approaches to feature tracking.
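A hedged sketch of such a two-pass, coarse-to-fine strategy: a fast coarse pass on downsampled images (SuperSample<1) provides initial displacements for a full-resolution refinement with a much smaller search region. The specific factors and widths are arbitrary illustration values.

```matlab
% Coarse pass: quarter resolution, wide search region.
[du0,dv0] = templatematch(A,B,pu,pv,'SuperSample',0.25,'SearchWidth',101);
% Fine pass: full resolution, small search region centered on the coarse result.
[du,dv]   = templatematch(A,B,pu,pv,'Initialdu',du0,'Initialdv',dv0,'SearchWidth',31);
```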

Initialdu/Initialdv can also be used as a mask, by inserting NaNs at the locations that should not be tracked.


Showprogress

The ShowProgress input parameter can be used to open a window where the progress of the feature tracking can be followed as it happens. It can switch between three modes:
  • If ShowProgress is empty or false: do not show any progress.
  • If ShowProgress is true (default): show a text progress bar at the command prompt.
  • If ShowProgress is a cell-array of image names/ids: show the two images and the tracked points as they are being matched (colored according to the quality of the match).

Methods

Currently normalized cross correlation and phase correlation are the only two similarity measures implemented. These measures do not allow for rotation of the template. It should, however, be relatively easy to expand with additional similarity measures. 

Phase correlation is still experimental and is strongly biased towards integer pixel displacements; NCC is therefore recommended. In my opinion it is better to use image filters to flatten the spectrum or emphasize short-scale features to achieve a similar response.


Outputs

Displacement (du,dv)

For each point in pu,pv a correlation maximum within the search region is found, and the corresponding displacement is returned in du,dv. These outputs have the same dimensions as the pu,pv inputs.

Match quality (peakCorr, meanAbsCorr)

peakCorr is the maximum correlation coefficient found at the location of each match.
meanAbsCorr is the mean absolute correlation coefficient over the entire search region, and serves as an estimate of the noise level.

The ratio peakCorr./meanAbsCorr can be interpreted as a signal-to-noise ratio and can be used to filter false matches. 
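A hedged sketch of such filtering (the threshold of 2 is an arbitrary illustration value; a suitable threshold is situation dependent):

```matlab
snr = peakCorr./meanAbsCorr;   % signal-to-noise ratio of each match
keep = snr > 2;                % assumed threshold; tune for your images
du(~keep) = nan;               % flag likely false matches as missing
dv(~keep) = nan;
```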


Advanced tip: sometimes it works better to spatially smooth meanAbsCorr before evaluating the signal-to-noise ratio.

Pre-processing images

In some cases you may improve the feature tracking by pre-processing the images. This FAQ page has a few suggestions for how you might pre-process the images.

