FAQ
How do I do orientation correlation?
Orientation correlation has been added to the templatematch function. It often outperforms NCC, and it performs well on Landsat 7 images with SLC off. You can also do it manually, which used to be the only way to do it: ImGRAFT will effectively do orientation correlation if you apply a simple "orientation" pre-processing step to the images and then cross-correlate the results. Here's how you might implement that:

forient=@(A)exp(1i*atan2(imfilter(A,[1 0 -1],'replicate'),imfilter(A,[1;0;-1],'replicate')));
oA=forient(A);
oB=forient(B);
[du,dv]=templatematch(oA,oB,'method','CCF')

Reference: CCF-O
How do I get velocities from Landsat images?
Here are a few tips:

* You can download Landsat scenes from Earth Explorer. Usually it is best to use the high-resolution panchromatic band for feature tracking.
* It is important that both images are taken from the same viewpoint, as the georectification in the L1G product is not 100% exact. If you choose an image pair where both images share the same viewpoint, these georectification errors will tend to cancel out. You should therefore only track features between images that share the same row and path number.
* Landsat scenes are very large, and you may run into memory issues if you attempt to track whole scenes at once. I have written a small function that lets you load a smaller study area of a large GeoTIFF. It is called geoimread and can be downloaded from MATLAB Central.
* It is generally a good idea to high-pass filter the input images as a pre-processing step. See the FAQ on preprocessing for how to do that.
* Once you have loaded the images and applied any pre-filtering, you can simply call templatematch to get the displacement. You can use these two examples as templates: Batura, Bindschadler.
* A 'default' template size that usually works is around 21 pixels on a side. Larger templates may work better in slow-moving regions, but they are more sensitive to shear, so large templates work best when ice flow is relatively uniform on the spatial scale of the template.
* In areas with slow ice flow, you need longer temporal baselines between the images in a pair to accurately estimate movement.
* It can be a good idea to make an animated GIF of the two images in the pair. It is a great way to get an intuition about what is going on, and it is great for troubleshooting.
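The tips above can be sketched roughly as follows. This is not a complete recipe: the file names and the study-area extent are placeholders, and it assumes geoimread and ImGRAFT are on the MATLAB path.

```matlab
% Load the same study area from the panchromatic band (band 8) of two
% Landsat scenes sharing the same path/row. File names and the UTM
% extent below are placeholders.
xlimits=[485000 515000]; ylimits=[4730000 4760000];
[A,x,y]=geoimread('scene1_B8.TIF',xlimits,ylimits);
B=geoimread('scene2_B8.TIF',xlimits,ylimits);

% High-pass filter both images (see the preprocessing FAQ).
hpass=@(A,sigma)A-imfilter(A,fspecial('gaussian',sigma*3,sigma));
A=hpass(im2single(A),3);
B=hpass(im2single(B),3);

% Feature tracking, as in the Batura example.
[du,dv]=templatematch(A,B);

% An animated GIF of the pair is great for troubleshooting.
[XA,map]=gray2ind(mat2gray(A),256);
XB=gray2ind(mat2gray(B),256);
imwrite(XA,map,'pair.gif','gif','LoopCount',Inf,'DelayTime',0.7);
imwrite(XB,map,'pair.gif','gif','WriteMode','append','DelayTime',0.7);
```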
Monte Carlo sampling of uncertainties
Monte Carlo sampling of uncertainties from different sources is a simple way to propagate uncertainties through the entire processing chain. On this page we show how you can use this approach to estimate the 3D uncertainties arising from uncertainties in the DEM and in the ground control points. The method is, however, much more generally applicable, and it is relatively simple to expand it to get the uncertainties in 3D velocities given uncertainties in the DEM, GCPs, camera location, subpixel feature tracking, etc.

MC Example: DEM & GCP uncertainties at Schneefernerkopf

Uncertainties in e.g. the DEM or the GCPs will propagate through the processing chain to the final estimate of the 3D position associated with a given pixel. Here we outline how you can use a relatively simple Monte Carlo sampling of the input uncertainties to estimate the uncertainty in the final result. We start from the data and code in the Schneefernerkopf example. Let's say we have run the Schneefernerkopf example and now want to know the world position of a target pixel. First we run demoschneeferner, which fits a model camera to the GCPs. To obtain the world coordinates of the target pixel, we make an inverse projection (ray tracing) of the pixel onto the DEM:

demoschneeferner
pixel=[875 1394];
world=camA.invproject(pixel,X,Y,dem)

The question is how uncertain the resulting world coordinate is because of uncertainties in the DEM and the GCPs. Let's say that the vertical standard uncertainty of the DEM is 2 m, and that the GCP pixel locations have a standard uncertainty of 2 pixels. In the Monte Carlo approach we repeat the calculation, perturbing the inputs in accordance with these uncertainties, and see what impact that has on the output:

sigmaDEM = 2; %estimated uncertainty in DEM (m)
sigmaGCP = 2; %GCP pixel uncertainty (pixels)
mcworld=nan(1000,3); %storage for the Monte Carlo sample of 3D coordinates

%As we will perturb the DEM in the Monte Carlo we might end up in a situation
%where the camera appears to be below the DEM. For that reason we hide the
%foreground from view by making a hole in the DEM around the camera.
camdist=sqrt((X-camxyz(1)).^2+(Y-camxyz(2)).^2);
dem(camdist<100)=nan;

for ii=1:size(mcworld,1)
    mcgcpAuv=gcpA(:,4:5)+randn(size(gcpA,1),2)*sigmaGCP/sqrt(2);
    mccam=camA.optimizecam(gcpA(:,1:3),mcgcpAuv,'00000111110011000000');
    mcdemerror=randn*sigmaDEM;
    mcworld(ii,:)=mccam.invproject(pixel,X,Y,dem+mcdemerror,world(1:2));
    timedwaitbar(ii/length(mcworld))
end

close all
pcolor(X/1000,Y/1000,dem)
shading interp
colormap gray
hold on
h=plot(world(1)/1000,world(2)/1000,'rx', ...
    mcworld(:,1)/1000,mcworld(:,2)/1000,'b.', ...
    camxyz(1)/1000,camxyz(2)/1000,'cs');
xlabel('Easting (km)')
ylabel('Northing (km)')
axis equal
ylim([5252.5 5253.5])
legend('DEM','estimated pos',sprintf('MC pos \\sigma=[%.0fm %.0fm %.0fm]',nanstd(mcworld)), ...
    'camera','location','south')

We find uncertainties of 4, 6, and 1 m on the x, y, and z coordinates. We can also see that the x and y uncertainties covary, and that the uncertainty is predominantly in the distance to the camera. You can use a similar MC approach to estimate the propagated uncertainties associated with, e.g., the camera location or subpixel feature-tracking errors.
This illustrates one of the benefits of ImGRAFT being a scriptable tool: it enables these sorts of approaches.
Preprocessing images for feature tracking
In some cases you may improve the feature tracking by pre-processing the images. The aim of the preprocessing may be to reduce effects from different lighting conditions, remove noise, or reduce the weight of outliers. Here is a set of proposed pre-processing steps that may improve your results. Many of the filters below are written using MATLAB's @-syntax for anonymous functions. You would apply the filters like this:

A=im2single(imread('test.tif'));
hpass=@(A,sigma)A-imfilter(A,fspecial('gaussian',sigma*3,sigma)); %define a filter function
A=hpass(A,3); %apply the filter
imagesc(A)

High-pass filtering

Feature tracking using NCC will attempt to align bright and dark regions of the image. These may correspond to illuminated vs. shadowed regions of the ice and thus depend strongly on the direction of the illumination. For that reason you may want to apply a high-pass filter to focus on finer details rather than large patches of bright and dark. This can be written in MATLAB like this:

hpass=@(A,sigma)A-imfilter(A,fspecial('gaussian',sigma*3,sigma))

Local histogram equalization

Histogram equalization will pull in outliers so that they are not allowed to dominate the output.

histequalfilt=@(A,tilesize)adapthisteq(A,'NumTiles',ceil(size(A)./tilesize));

Noise reduction

See e.g. the wiener2 function in the Image Processing Toolbox.

Combination of filters

You can combine multiple filters, e.g.:

myfilter=@(A)histequalfilt(hpass(A,3),50);

Structure-texture separation
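As a rough sketch of how a combined filter might be applied to an image pair before tracking: the file names below are placeholders, and mat2gray is inserted because adapthisteq expects intensities in [0,1], which the high-pass output does not guarantee.

```matlab
% Filters from above; mat2gray rescales the high-pass output to [0,1]
% because adapthisteq requires intensities in that range.
hpass=@(A,sigma)A-imfilter(A,fspecial('gaussian',sigma*3,sigma));
histequalfilt=@(A,tilesize)adapthisteq(A,'NumTiles',ceil(size(A)./tilesize));
myfilter=@(A)histequalfilt(mat2gray(hpass(A,3)),50);

% Apply the same filter to both images of a pair (placeholder file names),
% then feature track as usual.
A=myfilter(im2single(imread('imgA.tif')));
B=myfilter(im2single(imread('imgB.tif')));
[du,dv]=templatematch(A,B);
```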
Nice 3D plots of velocities
This example shows how you can make nice 3D plots of the output. In this case we show the downslope X,Y velocities from Engabreen on top of the 3D terrain. In this code we show the velocities as 50 m long cones coloured by velocity magnitude. Note: this code relies on the arrow3 function from the MATLAB File Exchange.

demoengabreen %run the Engabreen example
close all
axes('pos',[0 0 1 1])
surface(dem.X,dem.Y,dem.Z,dem.rgb,'EdgeColor','none','FaceColor','texturemap');
view([-4 8 1])
hold on; axis equal off
%h=quiver3(xyzA(keep,1),xyzA(keep,2),xyzA(keep,3)+5,Vg(keep,1)./Vgn(keep),Vg(keep,2)./Vgn(keep),Vg(keep,1)*0,'r.');
%h=quiver3(xyzA(keep,1),xyzA(keep,2),xyzA(keep,3)+5,Vg(keep,1),Vg(keep,2),Vg(keep,1)*0,'r');
from=bsxfun(@plus,xyzA(keep,:),[0 0 10]);
Vdir=bsxfun(@rdivide,Vg(keep,:)*50,Vgn(keep));
% Vdir(:,3)=-20; %only show xy components of velocity
cmap=jet(128);
clim=[0 prctile(Vgn,90)];
caxis(clim);
colors=interp1q(linspace(clim(1),clim(2),length(cmap))',cmap,max(min(Vgn(keep),clim(2)),0));
colorbar('south')
for ii=1:length(from)
    h=arrow3(from(ii,:),Vdir(ii,:),[],1,[],'cone');
    set(h,'facecolor',colors(ii,:));
end
Exporting templatematch results in CIAS format.
You can use the output of templatematch with Ice Tools if you save the results in CIAS format. The following piece of code shows how the ImGRAFT output of the Batura example can be saved in CIAS format. The variables we need to save are uvA, dxy, and C, in addition to some derived quantities.

demobatura % run the Batura example

%We want to export uvA, dxy, and C converted to UTM coordinates.
%Convert pixel displacements to UTM coordinates.
geoxa=interp1(1:length(x),x,uvA(:,1));
geoya=interp1(1:length(y),y,uvA(:,2));
geoxb=interp1(1:length(x),x,uvA(:,1)+dxy(:,1));
geoyb=interp1(1:length(y),y,uvA(:,2)+dxy(:,2));
dgeo=[geoxb geoyb]-[geoxa geoya];

% --- Construct output matrix. ---
ciasoutputformat=[geoxa,geoya,dgeo,sqrt(sum(dgeo.^2,2)),atan2(dgeo(:,1),dgeo(:,2))*180/pi,C];
ciasoutputformat(any(isnan(ciasoutputformat),2),:)=[];

% --- Save the output matrix and header. ---
fid=fopen('baturaout.dat','w');
fprintf(fid,'X,Y,dx,dy,length,direction,max_corrcoeff,avg_corrcoeff\n');
fprintf(fid,'%16.4f,%16.4f,%16.4f,%16.4f,%16.4f,%16.4f,%16.4f,%16.4f\n',ciasoutputformat');
fclose(fid);

For more complicated coordinate systems you may have to use interp2 instead of interp1.

How does it look in Ice Tools + Google Earth?

James Turrin (developer of Ice Tools) sent me a KML file generated with Ice Tools using this ImGRAFT output. This is a screenshot from Google Earth of that KML.
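For instance, if the georeference is given as full 2D coordinate matrices rather than the vectors x and y, the conversion might be sketched with interp2 as below. Xgrid and Ygrid are hypothetical names for matrices holding one coordinate value per pixel; interp2 with a single matrix argument treats it as sampled at integer pixel coordinates, which matches the pixel coordinates in uvA.

```matlab
% Xgrid/Ygrid (hypothetical) hold the easting/northing of every pixel.
% interp2(V,xq,yq) samples V at (column,row)=(xq,yq), matching uvA's
% (u,v) pixel convention.
geoxa=interp2(Xgrid,uvA(:,1),uvA(:,2));
geoya=interp2(Ygrid,uvA(:,1),uvA(:,2));
geoxb=interp2(Xgrid,uvA(:,1)+dxy(:,1),uvA(:,2)+dxy(:,2));
geoyb=interp2(Ygrid,uvA(:,1)+dxy(:,1),uvA(:,2)+dxy(:,2));
dgeo=[geoxb geoyb]-[geoxa geoya];
```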
Tip: Batch processing of a time lapse sequence
When batch processing a time-lapse sequence, it is a good idea to separate the workflow into smaller, digestible chunks and save the results along the way. This outline shows how you might construct such a script.
I would script each of these steps separately and save the results along the way. Many of these steps require you to perform some task for each file. Here is how you would typically solve that in MATLAB:

files=dir(fullfile(imagefolder,'img_*.jpg'));
for ii=1:length(files)
    fname=fullfile(imagefolder,files(ii).name);
    [... insert your code here ...]
end

Refinements:

There can be a slight error in the calculated change in view direction of an image with respect to the master image, especially if the light and/or snow conditions are very different between the images. This introduces a small error in the 2D->3D coordinate projection. You can reduce this error by having multiple master images depending on the conditions (early/late season, direct/diffuse light, morning/evening). When you calculate velocities, you are differencing the two locations derived from an image pair. If part of the location error is shared, it will cancel out. It can therefore be advantageous to derive the camera parameters for image B from camera A rather than from the master camera.
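The last refinement might be sketched as below. The variable names are hypothetical: xyzstable holds world coordinates of stable (off-glacier) points, and uvB holds their pixel coordinates in image B, e.g. found by feature tracking from image A into image B. The free-parameter string frees only the three view-direction angles, following the same 20-parameter convention used in the Monte Carlo example; check optimizecam's documentation for the exact ordering and return values in your version of ImGRAFT.

```matlab
% Derive the camera for image B from camA rather than from the master
% camera, so shared pose errors cancel when differencing A and B.
% xyzstable: Nx3 world coords of stable points (hypothetical name)
% uvB:       Nx2 pixel coords of those points in image B (hypothetical name)
% '00000111000000000000' frees only the view-direction angles (assumed
% parameter ordering; verify against your optimizecam version).
camB = camA.optimizecam(xyzstable, uvB, '00000111000000000000');
```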