
FAQ

How do I do orientation correlation?

posted Nov 23, 2015, 3:47 AM by Aslak Grinsted   [ updated Sep 3, 2018, 4:01 AM by Alexandra Messerli ]

Orientation correlation has been added to the templatematch function. It often outperforms NCC, and performs well on Landsat 7 images with the SLC off.

You can also do it manually, which used to be the only way: ImGRAFT will effectively do orientation correlation if you apply a simple "orientation" pre-processing step to the images before matching. Here's how you might implement that:

%A and B should be floating point (e.g. im2single), so that imfilter can return negative gradient values.
forient=@(A)exp(1i*atan2(imfilter(A,[1 0 -1],'replicate'),imfilter(A,[1;0;-1],'replicate'))); %unit-magnitude complex number encoding local gradient orientation

oA=forient(A);
oB=forient(B);

[du,dv]=templatematch(oA,oB,'method','CCF')

Reference: CCF-O

How do I get velocities from Landsat images?

posted Aug 31, 2015, 2:29 AM by Aslak Grinsted   [ updated Aug 31, 2015, 3:30 AM ]

Here are a few tips:

You can download Landsat scenes from Earth Explorer. The high-resolution panchromatic band is usually the best choice for feature tracking.
It is important that both images are taken from the same viewpoint, as the georectification in the L1G product is not 100% exact. If both images in a pair share the same viewpoint, these georectification errors will tend to cancel out. You should therefore only track features between images that share the same row and path numbers.

Landsat scenes are very large, and you may run into memory issues if you attempt to track whole scenes at once. I have written a small function, geoimread, that lets you load a smaller study area from a large GeoTIFF; it can be downloaded from MATLAB Central.
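For example, loading a sub-region might look like this (a minimal sketch: the filename is a placeholder, the coordinate limits are made-up example values, and the [A,x,y] calling form follows geoimread's help):

xlim=[500000 520000]; ylim=[7800000 7820000]; %map-coordinate limits of the study area (example values)
[A,x,y]=geoimread('myscene_B8.TIF',xlim,ylim); %hypothetical filename of the panchromatic band
A=im2single(A);
imagesc(x,y,A); axis image; colormap gray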

It is generally a good idea to high-pass filter the input images as a pre-processing step. See this FAQ on preprocessing for how to do that.

Once you have loaded the images and applied any pre-filtering, you can simply call templatematch to get the displacements. You can use these two examples as templates: Batura, Bindschadler.

A 'default' template size that usually works is around 21 pixels on a side, but larger templates may work better in slow-moving regions. The disadvantage of larger templates is that they are more sensitive to shear, so they work best where ice flow is relatively uniform on the spatial scale of the template.
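As a rough sketch of the calling sequence (assumes A and B are co-registered, pre-filtered images with coordinate vectors x and y as loaded above, and dt is the time separation between the scenes; the 'TemplateWidth' and 'SearchWidth' option names follow the templatematch help, so check them against your ImGRAFT version):

[pu,pv]=meshgrid(100:25:size(A,2)-100,100:25:size(A,1)-100); %pixel grid of points to track
[du,dv,C]=templatematch(A,B,pu(:),pv(:),'TemplateWidth',21,'SearchWidth',81);
dx=du*abs(x(2)-x(1)); dy=dv*abs(y(2)-y(1)); %convert pixel displacements to map units
V=sqrt(dx.^2+dy.^2)/dt; %speed in map units per unit of dt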

In areas with slow ice flow, you need larger temporal baselines in the image pairs to accurately estimate movement. 

It can be a good idea to make an animated gif of the two images in the pair. It is a great way to build intuition about what is going on, and it helps with troubleshooting.
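A minimal sketch using imwrite's gif support (A and B are the two grayscale images; the filename and delay are arbitrary):

imwrite(gray2ind(mat2gray(A),256),gray(256),'pair.gif','gif','LoopCount',Inf,'DelayTime',0.7);
imwrite(gray2ind(mat2gray(B),256),gray(256),'pair.gif','gif','WriteMode','append','DelayTime',0.7);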



Monte Carlo sampling of uncertainties

posted Dec 11, 2014, 1:38 AM by Aslak Grinsted   [ updated Dec 15, 2014, 12:18 AM ]

Monte Carlo sampling of uncertainties from different sources is a simple way to propagate uncertainties through the entire processing chain. On this page we show how you can use this approach to estimate the 3D uncertainties arising from uncertainties in the DEM and in the ground control points. The method is, however, much more generally applicable, and it is relatively simple to expand it to get the uncertainties in 3D velocities given uncertainties in the DEM, GCPs, camera location, subpixel feature tracking, etc.
  

MC Example: DEM & GCP uncertainties at Schneefernerkopf. 

Uncertainties in e.g. the DEM or the GCPs will propagate through the processing chain to the final estimate of the 3D position associated with a given pixel. On this page we outline how you can use a relatively simple Monte Carlo sampling of the input uncertainties to estimate the uncertainty in the final outcome. We start from the data and code in the Schneefernerkopf example.

Let's say we have run the Schneefernerkopf example and now want to know the world position of a target pixel.

First we run demoschneeferner, which fits a camera model to the GCPs. To obtain the world coordinates of the target pixel, we then make an inverse projection (ray tracing) of the pixel onto the DEM:

demoschneeferner
pixel=[875 1394];
world=camA.invproject(pixel,X,Y,dem)

The question is how uncertain the resulting world coordinate is, given the uncertainties in the DEM and the GCPs. Let's say the DEM has a vertical standard uncertainty of 2 m, and the GCP pixel positions have a standard uncertainty of 2 pixels. In the Monte Carlo loop we repeat the calculation while perturbing the inputs in accordance with these uncertainties, and see what impact that has on the output.

sigmaDEM = 2; %estimated uncertainty in DEM. (m)
sigmaGCP = 2; %GCP pixel uncertainty.

mcworld=nan(1000,3); %this is where we store the monte carlo sample of 3d coordinates

%As we will perturb the dem in the monte carlo we might end up in a situation
%where the camera appears to be below the DEM. For that reason we hide the
%foreground from view by making a hole in the DEM around the camera. 
camdist=sqrt((X-camxyz(1)).^2+(Y-camxyz(2)).^2);
dem(camdist<100)=nan;

for ii=1:size(mcworld,1)
    mcgcpAuv=gcpA(:,4:5)+randn(size(gcpA,1),2)*sigmaGCP/sqrt(2); %perturb GCP pixel coords (/sqrt(2) so the total 2d offset has std sigmaGCP)
    mccam=camA.optimizecam(gcpA(:,1:3),mcgcpAuv,'00000111110011000000'); %refit the camera to the perturbed GCPs
    mcdemerror=randn*sigmaDEM; %perturb the DEM vertically
    mcworld(ii,:)=mccam.invproject(pixel,X,Y,dem+mcdemerror,world(1:2));
    timedwaitbar(ii/size(mcworld,1))
end


close all
pcolor(X/1000,Y/1000,dem)
shading interp
colormap gray
hold on
h=plot(world(1)/1000,world(2)/1000,'rx',mcworld(:,1)/1000,mcworld(:,2)/1000,'b.',camxyz(1)/1000,camxyz(2)/1000,'cs')
xlabel('Easting (km)')
ylabel('Northing (km)')
axis equal 
ylim([5252.5 5253.5])
legend('DEM','estimated pos',sprintf('MC pos \\sigma=[%.0fm %.0fm %.0fm]',nanstd(mcworld)), ...
    'camera','location','south')




We find uncertainties of 4, 6, and 1 m on the x, y, and z coordinates. We can also see that the x and y uncertainties covary, and that the uncertainty is predominantly in the distance to the camera.

You can use a similar MC approach to estimate the propagated uncertainties associated with:
  • Uncertainties in templatematch displacements, provided you have an estimate of how well the template matcher performs on your dataset (see the sketch below this list).
  • Uncertainties in camera parameters such as the position. 
  • Uncertainties in xyz position rather than the uv coordinates of the GCPs. 
  • etc.
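As an example of the first item, subpixel matching noise could be added inside the Monte Carlo loop above (the value of sigmaMatch is an assumed placeholder; everything else reuses variables from the loop):

sigmaMatch=0.5; %assumed standard uncertainty of the template matcher (pixels)
mcpixel=pixel+randn(1,2)*sigmaMatch; %perturb the tracked pixel location
mcworld(ii,:)=mccam.invproject(mcpixel,X,Y,dem+mcdemerror,world(1:2));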

This illustrates one of the benefits of ImGRAFT being a scriptable tool: It enables these sorts of approaches. 

Preprocessing images for feature tracking

posted Jul 29, 2014, 10:56 AM by Aslak Grinsted   [ updated Nov 23, 2015, 3:42 AM ]

In some cases you may improve the feature tracking by pre-processing the images. The aim of the preprocessing may be to reduce effects from different lighting conditions, remove noise, or reduce the weight of outliers. Here are a set of proposed pre-processing steps that may improve your results. 

Many of the filters below are written using MATLAB's @-syntax for anonymous functions. You would apply a filter like this:

A=im2single(imread('test.tif'));
hpass=@(A,sigma)A-imfilter(A,fspecial('gaussian',sigma*3,sigma)); %define a filter function
A=hpass(A,3); %apply the filter
imagesc(A)


High-pass filtering

Feature tracking using NCC will attempt to align bright and dark regions of the image. These may correspond to illuminated vs. shadowed regions of the ice, and thus depend strongly on the direction of the illumination. For that reason you may want to apply a high-pass filter so that the matching focuses on finer details rather than large bright and dark patches. In MATLAB this can be written like this:

hpass=@(A,sigma)A-imfilter(A,fspecial('gaussian',sigma*3,sigma))

Local histogram equalization

Histogram equalization pulls in outliers so that they are not allowed to dominate the output.

histequalfilt=@(A,tilesize)adapthisteq(A,'NumTiles',ceil(size(A)./tilesize));

See the help on adapthisteq from the Image Processing Toolbox.

Noise reduction

See e.g. the wiener2 function in the Image Processing Toolbox.
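For example, in the same anonymous-function style as the other filters here (the 5x5 neighbourhood size is just a starting point):

noisefilt=@(A)wiener2(A,[5 5]); %adaptive Wiener filter over a 5x5 neighbourhood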

Combination of filters

You can combine multiple filters.  E.g.

myfilter=@(A)histequalfilt(hpass(A,3),50);

Structure texture separation
Structure-texture separation splits the image into a structure component, which can be thought of as the background colour, and a residual called the texture. In this respect the structure component is similar to a low-pass filtered image, except that structure-texture methods allow for sharp transitions between regions of different brightness.

I have not tried structure-texture separation techniques yet, but I have read papers indicating that this may be a fruitful avenue for tracking between scenes where the sun position has changed. It could help suppress the sharp brightness change at the edge of a shadow cast by a mountain.
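Untested, but as a starting point for experimentation, an edge-preserving smoother such as imguidedfilter (Image Processing Toolbox, R2014a or later) could stand in for the structure component:

texturefilt=@(A)A-imguidedfilter(A); %texture = image minus an edge-preserving "structure" estimate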


Gradient orientation transformation

See also Orientation correlation

Nice 3D plots of velocities

posted Jul 10, 2014, 2:21 AM by Aslak Grinsted   [ updated Jul 10, 2014, 2:25 AM ]

This example shows how you can make nice 3D plots of the output.



In this case we show the downslope X,Y velocities from Engabreen on top of the 3D terrain. The velocities are shown as 50 m long cones coloured by velocity magnitude. Note: this code relies on the arrow3 function from the MATLAB File Exchange.

demoengabreen %run the engabreen example

close all
axes('pos',[0 0 1 1])
surface(dem.X,dem.Y,dem.Z,dem.rgb,'EdgeColor','none','FaceColor','texturemap');
view([-4 8 1])
hold on;
axis equal off
%h=quiver3(xyzA(keep,1),xyzA(keep,2),xyzA(keep,3)+5,Vg(keep,1)./Vgn(keep),Vg(keep,2)./Vgn(keep),Vg(keep,1)*0,'r.');
%h=quiver3(xyzA(keep,1),xyzA(keep,2),xyzA(keep,3)+5,Vg(keep,1),Vg(keep,2),Vg(keep,1)*0,'r');

from=bsxfun(@plus,xyzA(keep,:),[0 0 10]); %lift the arrow origins 10m above the surface
Vdir=bsxfun(@rdivide,Vg(keep,:)*50,Vgn(keep)); %scale all arrows to 50m length
Vdir(:,3)=-20; %only show xy components of velocity
cmap=jet(128);
clim=[0 prctile(Vgn,90)];
caxis(clim);
colors=interp1q(linspace(clim(1),clim(2),length(cmap))',cmap,max(min(Vgn(keep),clim(2)),0)); 
colorbar('south')

for ii=1:size(from,1)
    h=arrow3(from(ii,:),Vdir(ii,:),[],1,[],'cone');
    set(h,'facecolor',colors(ii,:));
end




Exporting templatematch results in CIAS format.

posted Jul 9, 2014, 4:05 AM by Aslak Grinsted   [ updated Jul 11, 2014, 1:05 AM ]

You can use the output of templatematch with Ice Tools if you save the results in CIAS format. The following piece of code shows how the ImGRAFT output of the Batura example can be saved in CIAS format. The variables we need to save are uvA, dxy, and C, in addition to some derived quantities.


demobatura % run the batura example

%We want to export uvA, dxy, and C but converted to utm coords.

%convert pixel displacement to utm coordinates.
geoxa=interp1(1:length(x),x,uvA(:,1));
geoya=interp1(1:length(y),y,uvA(:,2));
geoxb=interp1(1:length(x),x,uvA(:,1)+dxy(:,1));
geoyb=interp1(1:length(y),y,uvA(:,2)+dxy(:,2));
dgeo=[geoxb geoyb]-[geoxa geoya]; 

% --- Construct output matrix. ---
ciasoutputformat=[geoxa,geoya,dgeo,sqrt(sum(dgeo.^2,2)),atan2(dgeo(:,1),dgeo(:,2))*180/pi,C]; %x,y,dx,dy,length,compass direction (deg),correlation
ciasoutputformat(any(isnan(ciasoutputformat),2),:)=[]; %drop rows with missing values

% --- save the output matrix and header. ---
fid=fopen('baturaout.dat','w');
fprintf(fid,'X,Y,dx,dy,length,direction,max_corrcoeff,avg_corrcoeff\n');
fprintf(fid,'%16.4f,%16.4f,%16.4f,%16.4f,%16.4f,%16.4f,%16.4f,%16.4f\n',ciasoutputformat');
fclose(fid);

For more complicated coordinate systems you may have to use interp2 instead of interp1.  
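For example, if the easting and northing of each pixel are stored as 2-D grids (hypothetical variables Xgrid and Ygrid, the same size as the image) rather than as coordinate vectors, the conversion could look like this:

geoxa=interp2(Xgrid,uvA(:,1),uvA(:,2)); %interp2(V,uq,vq) samples V at fractional pixel coordinates
geoya=interp2(Ygrid,uvA(:,1),uvA(:,2));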

How does it look in Ice Tools + Google Earth

James Turrin (developer of Ice Tools) sent me a KML file generated with Ice Tools using this ImGRAFT output. This is a screenshot from Google Earth of that KML.



Tip: Batch processing of a time-lapse sequence

posted Jun 30, 2014, 2:55 AM by Aslak Grinsted   [ updated Jul 6, 2014, 11:43 PM ]

When batch processing a time-lapse sequence, it is a good idea to separate the workflow into smaller, digestible chunks and save the results along the way. This outline shows how you might construct such a script.
  1. Determine camera orientation and internal parameters of a master image.
    • Get as many GCPs for that image as possible.
    • Use camera.optimizecam to determine unknown parameters. (see Schneefernerkopf for an example).
  2. Determine the camera orientation of all the remaining images with respect to master image.
    • This can be done by tracking the apparent movement of static background features.
    • See the Engabreen demo for an example.
  3. Pick a set of real world coordinates to feature track.
  4. Determine the pixel displacement of all candidate image pairs.
    • Some proposed criteria that can be used to select candidate pairs:
      • Roughly same time of day (to minimize issues from shadow movement). 
      • E.g. dt > 1 day and dt < 14 days (see the sketch below this list).
      • Criteria that removes ill-suited images: under-exposed, cloudy, ice/water on lens. 
        • This can be done manually, or automatically using e.g. image statistics or exposure settings.
  5. For each pair: Convert all pixel locations of tracked features to 3d positions, and then to velocities. 

I would script each of these steps separately and save the results along the way. 
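As an example of step 4, the time-separation and time-of-day criteria could be implemented like this (a sketch that assumes t is a vector of image acquisition times as MATLAB datenums):

[iB,iA]=meshgrid(1:length(t)); %all ordered combinations of image indices
dt=t(iB)-t(iA); %time separation in days
dhour=abs(mod(t(iB),1)-mod(t(iA),1))*24; %difference in time of day (hours)
pairs=find(dt>1 & dt<14 & dhour<2); %1-14 day baselines at roughly the same time of day
pairA=iA(pairs); pairB=iB(pairs); %indices into the file list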

Many of these steps require you to do some task for each file. Here's how you would typically solve that in Matlab:

files=dir(fullfile(imagefolder,'img_*.jpg'));
for ii=1:length(files)
    fname=fullfile(imagefolder,files(ii).name);

    % [... insert your code here ...]

end


Refinements:

There can be a slight error in the calculated change in view direction of an image relative to the master image, especially if the light and/or snow conditions differ strongly between the images. This introduces a small error in the 2d->3d coordinate projection. You can reduce this error by having multiple master images for different conditions (early/late season, direct/diffuse light, morning/evening). When you calculate velocities you are differencing the two locations derived from an image pair; if part of the location error is shared, it will cancel out. It can therefore be advantageous to derive the camera parameters for image B from camera A rather than from the master camera.
