Report: ISPRS Comparison of Filters

George Sithole, George Vosselman

Department of Geodesy, Faculty of Civil Engineering and Geosciences

Delft University of Technology

The Netherlands

 

Current e-mails gsithole@ebe.uct.ac.za and vosselman@itc.nl

Commission III, Working Group 3

August 2003


Table of Contents

Abstract

1 Introduction

PART I

2 Definitions

3 Filter characteristics
3.1 Data Structure
3.2 Test neighborhood and the number of points filtered at a time
3.3 Measure of Discontinuity
3.4 Filter Concept
3.5 Single step vs. iterative
3.6 Replacement vs. Culling
3.7 Using first pulse and reflectance data

4 Participants
4.1 M. Elmqvist
4.2 G. Sohn
4.3 M. Roggero
4.4 M. Brovelli
4.5 R. Wack
4.6 P. Axelsson
4.7 G. Sithole, M.G. Vosselman
4.8 N. Pfeifer, C. Briese
4.9 Summary

PART II

5 Test data
5.1 Reference data sets
5.2 Filtered data sets

PART III

6 Results/ Comparisons
6.1 Data
6.2 Qualitative assessment
6.2.1 Filtering difficulties
6.2.1.1 Outliers
6.2.1.2 Object complexity
6.2.1.3 Attached Objects
6.2.1.4 Vegetation
6.2.1.5 Discontinuity
6.2.2 Assessment
6.3 Quantitative assessment
6.3.1 Cross-matrices
6.3.2 Interpolation error and the predefined threshold
6.3.3 Assessment measures
6.3.4 Assessment
6.3.4.1 Type I vs Type II
6.3.4.2 Steep Slopes
6.3.4.3 Discontinuities
6.3.4.4 Bridges
6.3.4.5 Complex scenes
6.3.4.6 Outliers
6.3.4.7 Vegetation on slopes
6.3.4.8 Low Bare Earth point count
6.3.4.9 Effect of resolution

7 Discussion and Conclusion
7.1 Performance
7.1.1 Main problems
7.1.2 Strategy
7.1.3 Which error should be reduced?
7.2 Point density
7.3 Research Issues
7.3.1 Classification using context knowledge and external information
7.3.2 Quality reporting, Error flagging and Self diagnosis
7.3.3 Effort vs. Result
7.4 Conclusion

8 Acknowledgements

9 Reference

10 Bibliography

 


Abstract

 

As one of the tools for rapid topographic feature extraction, the commercial use of airborne laser scanning (ALS) has gained wider acceptance in the last few years as more reliable and accurate systems are developed. While airborne laser scanning systems have come a long way, the choice of appropriate data processing techniques for particular applications is still being researched.  The tasks in data processing include the “modeling of systematic errors”, “filtering”, “feature detection” and “thinning”. Of these tasks manual classification (filtering) and quality control pose the greatest challenges, consuming an estimated 60 to 80% of processing time and thus underlining the necessity for research in this area.

Numerous filter algorithms have been developed to date. To determine the performance of filtering algorithms a study was conducted in which eight groups filtered data supplied to them. The study aimed to determine the general performance of filters, the influence of point resolution on filtering and future research directions. To meet the objectives the filtered data was compared against reference data (contained in eight data sets) that was generated by manually filtering real ALS data.

For the purposes of the test, the ALS data was defined as an abstraction of a landscape. This definition was necessary for distinguishing between the Bare Earth and Objects. The landscape was defined as being composed of the Bare Earth and Objects. Objects were further defined as being either Detached (free of the Bare Earth, e.g. buildings) or Attached (connected to the Bare Earth, e.g. bridges).

Having conceptually defined the landscape, seven characteristics of filters were identified based on the filter algorithms submitted and other filter algorithms documented in publications. These characteristics were singled out because they were judged to influence the performance of a filter. The seven characteristics are (1) data structure, (2) test neighbourhood, (3) measure of discontinuity, (4) filter concept, (5) single step vs. iterative processing, (6) replacement vs. culling, and (7) use of first pulse and reflectance data. These characteristics were used to understand the behaviour of filter algorithms.

Each filter algorithm was then studied, and the output of each filter algorithm was visually compared against the reference data. This formed the qualitative comparison. The main problems faced by the filter algorithms were in the reliable filtering of complex scenes, filtering of buildings on slopes, filtering of disconnected terrain (courtyards), and the preservation of discontinuities. Fifteen sub-samples were extracted from the eight data sets. The fifteen samples were representative of different environments, but more focused in respect to the expected difficulties (as determined by the qualitative comparison). The output of the filter algorithms was numerically compared against these fifteen sub-samples. This formed the quantitative comparison.

From the results it has been found that in general the filters performed well in landscapes of low complexity. However, complex landscapes as can be found in city areas and discontinuities in the bare earth still pose challenges. It is suggested that future research be directed at heuristic classification of point-clouds (based on external data), quality reporting, and improving the efficiency of filter strategies.


1           Introduction

As one of the tools for rapid topographic feature extraction, the commercial use of airborne laser scanning (ALS) has gained wider acceptance in the last few years (Flood 1999, Flood 2001a, Flood 2001b), as more reliable and accurate systems are developed. While airborne laser scanning systems have come a long way, the choice of appropriate data processing techniques for particular applications is still being researched.  Data processing, here, is understood as being either semiautomatic or automatic, and includes such tasks as “modeling of systematic errors”, “filtering”, “feature detection” and “thinning”. Of the aforementioned tasks manual classification (including filtering) and quality control pose the greatest challenges, consuming an estimated 60 to 80% of processing time (Flood 2001a), and thus underlining the necessity for research in this area.  The importance of filtering becomes further clear when it is considered that for many applications a distinction between the bare-earth and the features residing on it is necessary.

To date a number of algorithms have been developed for semi-automatically or automatically extracting the bare-earth from point-clouds obtained by airborne laser scanning and InSAR. While the mechanics of some of these algorithms have been published, those of others are not known because of proprietary restrictions. Some comparisons of known filtering algorithms and their difficulties have been made in Huising and Gomes Pereira (1998), Haugerud and Harding (2001), and Tao and Hu (2001). However, an experimental comparison was not available. Because of this it was felt that an evaluation of filters was required to assess the strengths and weaknesses of the different approaches based on available control data.

In line with the framework of ISPRS Commission III, the Working Group III/3 "3D Reconstruction from Airborne Laser Scanner and InSAR Data" initiated a study to compare the performance of various automatic filters developed to date, with the aim of identifying future research directions in filtering of point-clouds for Bare Earth extraction.

The study was driven by three main objectives, which are:

1.        To determine the comparative performance of existing filters. It is accepted that filters will not be perfect and that most will not be universally applicable. They will work under most scenarios (combination and distribution of features on the terrain), but there are situations in which they will fail. Therefore it is of interest to find what filter strategy will work under what circumstances.

2.        To determine the performance of filtering algorithms under varying point densities. Cost efficiency is a significant factor in the choice of resolution at which the landscape is scanned. The lower the resolution, the lower the flight cost and vice versa. But the choice of resolution also depends on the level of fidelity required in the abstraction. The lower the resolution, the lower the fidelity. Therefore, a balance has to be struck between lowering costs and ensuring fidelity, and it is of interest to find out the impact of resolution on the quality of filtering, in relation to the algorithm used.

3.        To determine problems in the filtering of point-clouds that still require further attention.

In line with the objectives of the study a web site was set up in which twelve sets of data were provided for testing. Individuals and groups wishing to participate in the study were kindly requested to process all twelve data sets if possible. A total of eight sets of results were received. The algorithms used by participants come from a cross-section of the most common strategies (or variants) for extracting the bare-earth from laser point-clouds, which should allow for a broader comparison.

 

Outline of the report

The report is broken into three main parts. The first part (sections 2, 3 and 4) discusses the problem of filtering and the characteristics of filtering algorithms, and gives a brief description of the filters used by the participants. In the second part (sections 5 and 6) the data used in the study and the results of the filtering are discussed. In the third part (section 7) the results from the study are discussed and conclusions are drawn in respect to the objectives of the project.


PART I

2           Definitions

Before proceeding, some of the terms used in the text will be defined. It has to be understood that the definitions are generalized and meant to be conceptual (abstract), and used only for differentiating between different aspects of the environment scanned by an ALS system. A more precise definition and one useful in the implementation of filters would require mathematical formulations, and even then might not encompass all possible situations.

 

Landscape: The topography. A scene consisting of the earth and any other features (buildings, trees, power lines, etc.) residing on it.

Bare Earth: Topsoil or any thin layering (asphalt, pavement, etc.) covering it. Haugerud and Harding (2001) define Bare Earth as the continuous and smooth surface that has nothing visible below it. However, this definition is for the purpose of implementing a filter and because of that it is restrictive.

Object: Vegetation and artificial features (those crafted by human hand) residing on the Bare Earth.

Detached Object: Objects that rise vertically (on all sides) above the Bare Earth or other Objects (e.g., buildings).

Attached Object: Objects that rise vertically above the Bare Earth only on some sides but not all (e.g., bridges, gangways, ramps).

Point-cloud: A collection of points (acquired by ALS) in a 3D Cartesian co-ordinate system.

Filtering: Abstraction of the Bare Earth from an ALS point-cloud.

Outlier: Point(s) in a point-cloud that do not belong to the landscape (e.g., birds, gross errors from the ALS system).

 

 


3           Filter characteristics

Filters are built from combinations of different elements. Therefore, to understand or predict the behaviour and output of a filter the way in which elements are combined has to be understood. Seven elements have been identified:

3.1     Data Structure

The output of an ALS is a cloud of irregularly spaced 3D points. Some filters work with the raw point-cloud. However, to take advantage of image processing toolkits some filtering algorithms resample the ALS produced point-cloud into an image grid, before filtering.

3.2     Test neighborhood and the number of points filtered at a time

Filters always operate on a local neighborhood. The classification of a point (as Bare Earth or Object) is based on a comparison of two or more points at a time, and the number of points compared and classified at a time forms a basis for categorizing the algorithms.

Point-to-Point:

In these algorithms two points are compared at a time. The discriminant function is based on the positions of the two points. If the output of the discriminant function is above a certain threshold then one of the points is assumed to belong to an Object. Necessarily only one point can be classified at a time.

 

Point-to-Points:

In these algorithms neighboring points (of a point of interest) are used to solve a discriminant function. Based on the output of the discriminant function the point of interest can then be classified. Again only one point is classified at a time.

 

 

Points-to-Points:

In these algorithms several points are used to solve a discriminant function. Based on the discriminant function the points can then be classified. In such a formulation more than one point can be classified at a time.

 

 

3.3     Measure of Discontinuity

Most algorithms classify based on some measure of discontinuity. Some of the measures of discontinuity used are height difference, slope, shortest distance to TIN facets, and shortest distance to parameterised surfaces.

3.4     Filter Concept

Every filter makes an assumption about the structure of Bare Earth points in a local neighborhood. This forms the concept of the filter.

Slope based:

In these algorithms the slope or height difference between two points is measured. If the slope is above a certain threshold then the highest point is assumed to belong to an Object. The assumption is based on the rationale that the steepest slopes in a landscape belong to Objects.
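As a rough illustration of the slope-based concept (not any participant's implementation), the following Python sketch rejects a point as Object if some other point lies far enough below it that the implied slope exceeds a threshold. The function name, the threshold value and the brute-force pairwise search are illustrative assumptions.

```python
import numpy as np

def slope_filter(points, max_slope=0.3):
    """Label each point as Bare Earth (True) or Object (False) with a pairwise
    slope test: a point is Object if the slope from any lower point up to it
    exceeds max_slope. Brute force, for illustration only."""
    pts = np.asarray(points, dtype=float)
    is_bare_earth = np.ones(len(pts), dtype=bool)
    for i, (xi, yi, zi) in enumerate(pts):
        for j, (xj, yj, zj) in enumerate(pts):
            if i == j:
                continue
            dist = np.hypot(xi - xj, yi - yj)
            # Rise from point j up to point i, relative to their horizontal distance.
            if dist > 0 and (zi - zj) / dist > max_slope:
                is_bare_earth[i] = False
                break
    return is_bare_earth

# Toy example: a nearly flat patch with one elevated (roof-like) point.
pts = [(0, 0, 0.0), (1, 0, 0.1), (0, 1, 0.0), (1, 1, 3.0)]
print(slope_filter(pts))  # [ True  True  True False]
```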

 

Block-minimum:

Here the discriminant function is a horizontal plane with a corresponding buffer zone above it. The plane locates the buffer zone, and the buffer zone defines a region in 3D space where Bare Earth points are expected to reside.

 

Surface based:

In this case the discriminant function is a parametric surface with a corresponding buffer zone above it. The surface locates the buffer zone and, as before, the buffer zone defines a region in 3D space where Bare Earth points are expected to reside.
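A minimal sketch of the buffer-zone idea shared by the block-minimum and surface-based concepts, implemented here for the simpler block-minimum case (a horizontal plane through the local minimum); a surface-based filter would replace the plane by a fitted parametric surface. The buffer depth is an illustrative value.

```python
import numpy as np

def block_minimum_filter(points, buffer_depth=0.5):
    """Classify points as Bare Earth if they fall inside a buffer zone above
    the horizontal plane through the lowest point (block-minimum concept)."""
    z = np.asarray([p[2] for p in points], dtype=float)
    plane_height = z.min()                   # the plane locates the buffer zone
    return z <= plane_height + buffer_depth  # inside the buffer -> Bare Earth

pts = [(0, 0, 10.2), (1, 0, 10.4), (2, 0, 13.1), (3, 0, 10.3)]
print(block_minimum_filter(pts))  # the 13.1 m point falls outside the buffer
```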

 

 

    

 

 

Clustering/ Segmentation:

The rationale behind such algorithms is that any points that cluster must belong to an Object if their cluster is above its neighborhood. For such a concept to work the clusters/segments must delineate Objects and not facets of Objects.

There are various ways in which cluster boundaries or segments can be obtained. Clustering methods have been proposed by Filin (2002) and Roggero (2002). These clustering methods work by projecting and separating the data in a feature space. Segmentation algorithms have been proposed by Lee and Schenk (2002) and Sithole (2002). An experimental comparison of some segmentation algorithms has been done by Hoover et al. (1996).

Another way of obtaining cluster boundaries is to contour the point-cloud. An Object is then suspected to exist where the length (or internal area) of a contour does not grow significantly from a lower contour. This idea is employed by Zhan et al. (2002) and Elmqvist (2001a, 2001b).

Most of the filters submitted by the participants do not use any of the clustering and segmentation methods mentioned above.

 

3.5     Single step vs. iterative

Some filter algorithms classify points in a single pass while others iterate and classify points in multiple passes (or, as Hamming (1989) calls them, recursive and non-recursive filters). The advantage of a single step algorithm is computational speed. However, computational speed is traded for accuracy by iterating the solution, with the rationale that in each pass more information is gathered about the neighborhood of a point and thus a much more reliable classification can be obtained.

3.6     Replacement vs. Culling

In culling a filtered point is removed from a point-cloud. Culling is typically found in algorithms that operate on irregularly spaced point-clouds. In a replacement a filtered point is returned to the point-cloud with a different height (usually interpolated from its neighborhood). Replacement is typically found in algorithms that operate on regularly spaced (rasterized) point-clouds.

3.7     Using first pulse and reflectance data

Some scanners record multiple pulse returns. This feature is advantageous in forested areas, where the first pulse is usually off the vegetation and subsequent pulses are from surfaces below the vegetation canopy. In addition to multiple pulse measurements, the intensity of the returned pulses is also measured. Different surfaces in the landscape absorb/reflect pulses differently and therefore it may be possible to use this information in classifying points.


4           Participants

A brief breakdown of participants is given in Table 4.1 and a brief review of their filters is given afterwards.

Table 4.1

 

1. M. Elmqvist (magel@foi.se), FOI (Swedish Defense Research Institute), Sweden. Filter: Active Contours. Reference: Elmqvist (2001a, 2001b)

2. G. Sohn (gsohn@ge.ucl.ac.uk), University College London (UCL). Filter: Regularization Method. Reference: Sohn (2002)

3. M. Roggero (roggero@atlantic.polito.it), Politecnico di Torino. Filter: Modified Slope based filter. Reference: Roggero (2001)

4. M. Brovelli (maria@geomatica.ing.unico.it), Politecnico di Milano. Filter: Spline interpolation. Reference: Brovelli (2002)

5. R. Wack, A. Wimmer (roland.wack@joanneum.at), Joanneum Research Institute of Digital Image Processing. Filter: Hierarchical Modified Block-Minimum. Reference: Wack (2002)

6. P. Axelsson (peter.axelsson@digpro.se), DIGPRO. Filter: Progressive TIN densification. Reference: Axelsson (1999, 2000, 2001)

7. G. Sithole, G. Vosselman (gsithole@ebe.uct.ac.za, vosselman@itc.nl), TU Delft. Filter: Modified Slope based filter. Reference: Sithole (2001)

8. N. Pfeifer, C. Briese (cb@ipf.tuwien.ac.at), TU Vienna. Filter: Hierarchical robust interpolation. Reference: Kraus et al. (1998), Pfeifer et al. (1998, 1999a, 1999b, 2001), Kraus et al. (2001), Briese et al. (2001)

 

 

 

 

4.1     M. Elmqvist

Filter principle: This algorithm estimates the ground surface by employing active shape models. It is based on a general technique of matching a deformable model to an image by means of energy minimization. Applied to laser data the active shape model behaves like a membrane floating up from underneath the data points. The manner in which the membrane sticks to the data points is determined by an energy function, which has to be chosen in such a way that its minimization makes the membrane stick to the ground points.

Elmqvist (2001a, 2001b) provides an application using an active contour spline in a two dimensional image (the laser data has to be re-sampled into a grid format). For the active contour, the energy function, which is to be minimized, is a weighted combination of internal and external forces. The internal forces originate from the shape of the contour and the external forces come from the image (laser data) and/or other external constraint forces. The algorithm is reported to be robust. However, active contour models need a start state. The start state used by Elmqvist is a plane below the lowest point in the data set.

4.2     G. Sohn

Filter principle: In this algorithm a TIN is progressively densified in a two-step process, Downward Densification and Upward Densification. The TIN at the end of the densification becomes a representation of the Bare Earth, and points not included in the TIN are treated as Object.

Downward Densification – The purpose of this step is to obtain an initial representation of the Bare Earth. A rectangular bound for the point-cloud is determined. The nearest point to each of the four corners of the bound is assumed to be Bare Earth. These four points are triangulated using Delaunay Triangulation. Next, the lowest point below each triangle in the TIN is assumed to be Bare Earth, and used to recompute the TIN. This last process is repeated until there are no more points below the TIN. This TIN is now assumed to be an initial representation of the Bare Earth.
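A hedged sketch of the Downward Densification step, using scipy's Delaunay triangulation. The corner seeding and the per-facet search follow the description above, but the data structures and the one-point-per-facet update per pass are assumptions of this sketch rather than details of Sohn's implementation.

```python
import numpy as np
from scipy.spatial import Delaunay, cKDTree

def downward_densify(points):
    """Seed a TIN with the points nearest the four corners of the bounding
    rectangle, then repeatedly add the lowest point below each TIN facet
    until no point remains below the TIN."""
    pts = np.asarray(points, dtype=float)
    xy, z = pts[:, :2], pts[:, 2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    corners = np.array([[lo[0], lo[1]], [lo[0], hi[1]],
                        [hi[0], lo[1]], [hi[0], hi[1]]])
    seed = set(cKDTree(xy).query(corners)[1])          # assumed Bare Earth seeds
    while True:
        tin_idx = np.array(sorted(seed))
        tri = Delaunay(xy[tin_idx])
        simplex = tri.find_simplex(xy)                 # containing facet of every point
        added = False
        for s in range(len(tri.simplices)):
            inside = [i for i in np.where(simplex == s)[0] if i not in seed]
            if not inside:
                continue
            # Height of the TIN facet at each candidate position (barycentric interpolation).
            b = tri.transform[s, :2] @ (xy[inside] - tri.transform[s, 2]).T
            bary = np.vstack([b, 1 - b.sum(axis=0)])
            facet_z = z[tin_idx[tri.simplices[s]]]
            tin_z = (facet_z[:, None] * bary).sum(axis=0)
            below = [i for i, zt in zip(inside, tin_z) if z[i] < zt]
            if below:
                seed.add(min(below, key=lambda i: z[i]))  # lowest point below the facet
                added = True
        if not added:
            return np.array(sorted(seed))              # indices of the initial Bare Earth TIN
```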

Upward Densification – The purpose of this step is to refine the initial TIN obtained in the Downward Densification. A buffer is defined above every triangle in the TIN. The depth of this buffer is given as dh. For each triangle in the TIN, points above the buffer are labeled as “off-terrain” and those in and below the buffer are labeled as “on-terrain”. A label of “on-terrain” indicates that the point is potentially a Bare Earth point. From these “on-terrain” points one is chosen (explained in the next paragraph) and added to its underlying triangle and the TIN is recomputed. This process is repeated until there are no more “on-terrain” points.

A tetrahedron can be formed from every “on-terrain” point and its underlying triangle (in the TIN), and a facet of this tetrahedron is an approximation of the Bare Earth. It is assumed that the flattest tetrahedron is the best estimate of the Bare Earth, and therefore the objective is to choose the point that delivers the flattest tetrahedron. Determining which of the “on-terrain” points delivers the flattest tetrahedron is done by means of an MDL criterion that codes the conditional probability of the angles that all facets of the tetrahedra make with the underlying triangle. The point for which the coded description of the conditional probability is the least is taken as being a Bare Earth point. Further details can be found in Sohn (2002).

Notes by the author:

When the virtual corner points are created for the initial terrain surface model, they are assigned as on-terrain points without any verification. This degrades the final results, since these corner points may be located over a building roof or a tree. Thus, when a number of sub-divided small areas are used, the filtering performance is worse than when the entire data domain is used as one initial terrain model. In a future experiment we will test our filtering technique on the same data set adopting the above idea.

4.3     M. Roggero

Filter principle: This filter is a variant on the morphological filter developed by Vosselman (2000, 2001). In this algorithm the data set is first gridded, to support a database structure; outliers are detected and rejected. Then a local operator is applied to the rasterized data so as to characterize the initial Bare Earth, by determining a local slope about the lowest point in the local operator.

The local slope is estimated in a local linear regression criterion. In the linear regression each point is compared to the lowest point in the local operator neighborhood. The distance and height difference from the lowest points are weighted and used as observations in the linear regression. The distances and height differences are weighted in such a way that points furthest from the lowest point contribute less to the parameters of the line. The assumption here is that the further a point is from the lowest point the less effect it is likely to have on the local slope. The estimated parameters and their standard deviation are used to compute the maximum height differences from the regressed line at defined distances from the lowest point. A curve is obtained from these maximum heights above the regressed line. This curve represents the initial Bare Earth. Once an initial Bare Earth has been determined points are classified as Bare Earth, Object, or Unclassified, based on their distance from the initial Bare Earth.
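The following sketch illustrates the kind of weighted local regression described above: height differences from the lowest point are regressed against distance, with distant points down-weighted, and the regression result plus the residual spread give a maximum admissible height difference at chosen distances. The weight function, the line-through-the-lowest-point model and the k-sigma margin are assumptions of this sketch, not Roggero's published parameterization.

```python
import numpy as np

def local_slope_threshold(points, eval_dists, k_sigma=3.0):
    """Weighted regression of height difference vs. distance from the lowest
    point; returns the maximum admissible height difference at eval_dists."""
    pts = np.asarray(points, dtype=float)
    low = pts[np.argmin(pts[:, 2])]              # lowest point in the local operator
    d = np.hypot(pts[:, 0] - low[0], pts[:, 1] - low[1])
    dh = pts[:, 2] - low[2]
    w = 1.0 / (1.0 + d)                          # distant points contribute less
    slope = np.sum(w * d * dh) / np.sum(w * d * d)   # weighted fit of dh = slope * d
    resid = dh - slope * d
    sigma = np.sqrt(np.sum(w * resid ** 2) / np.sum(w))
    return slope * np.asarray(eval_dists, dtype=float) + k_sigma * sigma

pts = [(0, 0, 100.0), (2, 0, 100.3), (4, 0, 100.5), (6, 0, 101.0)]
print(local_slope_threshold(pts, eval_dists=[2.0, 5.0, 10.0]))
```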

The threshold (for the distance from the initial Bare Earth) used in determining the class of a point is based on the underlying slope in the initial DEM.

The size of the operator is tuned to the size of the largest buildings in the landscape.

The above steps are repeated once more, but this time more stringent parameter (size of local operator, local slope, coefficients of variance propagation, and threshold value) values are employed. Further details can be found in Roggero (2001).

Known variants: The recent version of the algorithm (not used for the ISPRS test) extracts the DEM after segmentation and classification. The steps are: 1) data set segmentation (Object extraction); 2) Object classification; 3) DEM reconstruction; Steps 2 and 3 are similar to the algorithm used for the test. Further details can be found in (Roggero 2002).

4.4     M. Brovelli

Filter principle: This algorithm is made up of five steps, Preprocessing, Edge detection, Region growing, Correction, and DEM computation.

Preprocessing – The objective in this step is to remove outliers. This is achieved with a bicubic spline interpolation regularized by a Tychonov approach. The spline step is set in relation to the planimetric resolution of the point-cloud. The Tychonov regularization parameter is employed to avoid local and global singularity in the least squares approach, thus assuring regularity of the surface and minimization of curvature in empty areas. The high value imposed on the parameter allows for surfaces to behave differently from an exact interpolator. The surfaces are less susceptible to outliers. The choice of threshold is determined by an analysis of the histogram of residuals between observed and interpolated values. Points corresponding to residuals exceeding the threshold are considered as outliers.
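As an illustration of the role of the Tychonov regularization parameter, the sketch below fits a small bilinear grid surface to scattered points by least squares with a curvature penalty and flags outliers from the residuals. The grid size, the penalty weight, the 3-sigma cut and the small ridge term are assumptions of this sketch; they are not the parameter values or the spline formulation used by Brovelli.

```python
import numpy as np

def tikhonov_grid_fit(points, nx=20, ny=20, reg=10.0):
    """Least-squares fit of a bilinear grid surface with a Tychonov-style
    curvature penalty; returns a boolean outlier flag per input point."""
    pts = np.asarray(points, dtype=float)
    x0, y0 = pts[:, 0].min(), pts[:, 1].min()
    dx = (pts[:, 0].max() - x0) / (nx - 1)
    dy = (pts[:, 1].max() - y0) / (ny - 1)
    # Observation matrix: bilinear interpolation weights of the 4 surrounding nodes.
    A = np.zeros((len(pts), nx * ny))
    for k, (x, y, _) in enumerate(pts):
        i = min(int((x - x0) / dx), nx - 2)
        j = min(int((y - y0) / dy), ny - 2)
        u, v = (x - x0) / dx - i, (y - y0) / dy - j
        for di, dj, w in [(0, 0, (1 - u) * (1 - v)), (1, 0, u * (1 - v)),
                          (0, 1, (1 - u) * v), (1, 1, u * v)]:
            A[k, (i + di) * ny + (j + dj)] = w
    # Curvature penalty: second differences along grid rows and columns.
    D = []
    for i in range(nx):
        for j in range(ny):
            if 0 < i < nx - 1:
                row = np.zeros(nx * ny)
                row[[(i - 1) * ny + j, i * ny + j, (i + 1) * ny + j]] = [1, -2, 1]
                D.append(row)
            if 0 < j < ny - 1:
                row = np.zeros(nx * ny)
                row[[i * ny + j - 1, i * ny + j, i * ny + j + 1]] = [1, -2, 1]
                D.append(row)
    D = np.asarray(D)
    N = A.T @ A + reg * (D.T @ D) + 1e-6 * np.eye(nx * ny)  # ridge term keeps N invertible
    heights = np.linalg.solve(N, A.T @ pts[:, 2])
    resid = pts[:, 2] - A @ heights
    return np.abs(resid - resid.mean()) > 3 * resid.std()   # outlier flags
```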

Edge detection – The point-cloud is tiled (the size of tiles are set in relation to the spline steps, and each tile is set to have 200 x 200 splines). Bilinear spline interpolation with a Tychonov regularization parameter is performed. The gradient is minimized and the low Tychonov regularization parameter brings the interpolated functions as close as possible to the observations. Bicubic spline interpolation with Tychonov regularization is now performed. However, now the curvature is minimized and the regularization parameter is set to a high value. For each point, an interpolated value is computed from the bicubic surface and an interpolated gradient is computed from the bilinear surface. At each point the gradient magnitude and the direction of the edge vector are calculated, and the residual between interpolated and observed values is computed. Two thresholds are defined on the gradient, a low and a high. For each point, if the gradient magnitude is greater than or equal to the high threshold and its residual is greater than or equal to zero, it is labeled as an “Object” point. Similarly a point is labeled as being an “Object” point if the gradient magnitude is greater than or equal to the low threshold, its residual is greater than or equal to zero, and the gradient to two of eight neighboring points is greater than the high threshold. Other points are classified as “Bare Earth”.

Region growing – The classification categories are now rasterized. For each cell, it is evaluated whether it contains a point with a double impulse (difference between the first and last pulse greater than a given threshold). Starting from cells classified as “Object” and with only one pulse, all linked cells are selected and a convex hull algorithm is applied to them. Simultaneously, the mean of the corresponding heights (mean edge height) is computed. Points inside the convex hull are classified as Object if their height is greater than or equal to the previously computed mean edge height. This last step is done only in the case of high planimetric resolution.

Correction - Bilinear spline interpolation with a Tychonov regularization parameter is performed on the “Bare Earth” points only. The gradient is minimized by the regularization parameter. Analysis of the residuals between the observations and the interpolated values results in four cases. If the residual is greater than a chosen high threshold the point is reclassified as “Object”. If the point was initially classified as “DOUBLE IMPULSE GROUND” and the residual is greater than the chosen high threshold the point is reclassified as “DOUBLE IMPULSE Object” (edge or vegetation). If the point was initially classified as “Object” and the absolute residual is less than a chosen low threshold the point is reclassified as “GROUND”. If the point was initially classified as “DOUBLE IMPULSE Object” and the absolute residual is less than the chosen low threshold the point is reclassified as “DOUBLE IMPULSE GROUND”. The procedure is iterated several times.

DEM computation: Bilinear spline interpolation with a Tychonov regularization parameter is performed on the “Bare Earth” points only. The DEM is then provided in grid form.

4.5     R. Wack

Filter principle: In this algorithm non-terrain raster elements are detected in a hierarchical approach that is loosely based on a block-minimum algorithm. In the first step of the algorithm a 9m raster DEM is generated from the raw point-cloud. A resolution of 9m is used to overcome large buildings or dense vegetation. The height value of each raster element is computed as the lowest height of the 99% of points within the raster element that remain after discarding the lowest 1% (to overcome the problem of low outliers). Because of the size of the raster elements, most buildings and dense vegetation should now not exist in the DEM.

In the next step, all non-terrain raster elements are detected and removed (this assumes that objects are characterized by sharp elevation changes in the landscape). This is achieved by using a Laplacian of Gaussian (LoG) operation on the 9m DEM. The resulting 9m DEM is used as the basis for computing a 3m DEM.
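A hedged sketch of the first two operations described above: rasterizing the point-cloud into a coarse DEM using the lowest height of the retained 99% of points per cell, and flagging raster elements with a strong Laplacian of Gaussian response. The cell handling, the sigma of the LoG and the cut-off are illustrative assumptions; scipy's gaussian_laplace stands in for the LoG operation.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def coarse_dem(points, cell=9.0, keep=0.99):
    """Per raster element, take the lowest height after discarding the lowest
    (1 - keep) fraction of points, which guards against low outliers."""
    pts = np.asarray(points, dtype=float)
    x0, y0 = pts[:, 0].min(), pts[:, 1].min()
    ix = ((pts[:, 0] - x0) / cell).astype(int)
    iy = ((pts[:, 1] - y0) / cell).astype(int)
    dem = np.full((ix.max() + 1, iy.max() + 1), np.nan)
    for i, j in set(zip(ix, iy)):
        z = np.sort(pts[(ix == i) & (iy == j), 2])
        dem[i, j] = z[int((1 - keep) * len(z))]     # lowest height of the kept 99 %
    return dem

def flag_non_terrain(dem, sigma=1.0, cut=1.0):
    """Flag raster elements with a strong LoG response, i.e. sharp elevation
    changes attributed to Objects."""
    filled = np.nan_to_num(dem, nan=float(np.nanmean(dem)))
    return np.abs(gaussian_laplace(filled, sigma=sigma)) > cut
```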

From the point-cloud a 3m raster is obtained. The representative height of each element is computed from those points inside the 3m elements that are within a given threshold of the corresponding height in the 9m DEM. Remaining raster elements that do not contain Bare Earth are again detected by an LoG operation on the 3m DEM. Where such elements are detected their heights are replaced with those from the 9m DEM.

At a resolution of 3m and below, a weight function that considers the standard deviation of the data points within each raster element and the shape of the terrain is applied to the output of the LoG operation. This is because at resolutions under 3m break lines in the Bare Earth can appear as elements that don’t contain terrain points.

In a repetition of the above procedure the 3m DEM is now used to obtain a 1m DEM, and so on.

To achieve good results user intervention is required in setting optimal parameters for the determination of the initial 9m DEM. After that no further user intervention is required. Further details can be found in Wack (2002).

4.6     P. Axelsson

Filter principle: In this filtering algorithm used by Axelsson (1999, 2000, 2001), a sparse TIN is derived from neighborhood minima, and then progressively densified to the laser point cloud. In every iteration points (from the point cloud) are added to the TIN if they are below data derived thresholds. The parameters that are thresholded are the angle points make to the TIN facets and the distance to nearby facet nodes.

At the end of each iteration the TIN and the data derived thresholds are recomputed (newly identified ground points are included in the computations). New thresholds are computed based on the median values estimated from the histograms at each iteration. Histograms are derived for the angle points make to the TIN facets and the distance to the facet nodes. The iterative process ends when no more points are below the threshold.
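The acceptance test at the heart of each densification step can be sketched as below: a candidate point is added to the TIN only if its distance to the facet plane beneath it and the angles it makes with the facet nodes stay under the thresholds. Fixed, illustrative thresholds are used here, whereas the actual filter derives them from histograms of the data at every iteration.

```python
import numpy as np

def accept_into_tin(point, facet, max_dist=1.0, max_angle_deg=8.0):
    """Decide whether a candidate point may be added to the TIN, using the two
    quantities that are thresholded: the distance to the facet plane and the
    angles to the facet nodes."""
    p = np.asarray(point, dtype=float)
    v = np.asarray(facet, dtype=float)                 # 3 x (x, y, z) facet corners
    n = np.cross(v[1] - v[0], v[2] - v[0])
    n /= np.linalg.norm(n)
    dist = abs(np.dot(p - v[0], n))                    # perpendicular distance to the facet plane
    horiz = np.hypot(*(p[:2] - v[:, :2]).T)            # horizontal distance to each node
    angles = np.degrees(np.arctan2(np.abs(p[2] - v[:, 2]), horiz))
    return dist <= max_dist and angles.max() <= max_angle_deg

facet = [(0, 0, 10.0), (10, 0, 10.2), (0, 10, 10.1)]
print(accept_into_tin((3, 3, 10.4), facet))   # gentle offset -> accepted
print(accept_into_tin((3, 3, 13.0), facet))   # roof-like jump -> rejected
```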

The main strength of this algorithm lies in its ability to handle surfaces with discontinuities, which is a particularly useful characteristic in urban areas.

In order to make the test as unbiased as possible, no manual editing has been done; only the result of the automatic classification and filter procedures has been saved. As a consequence some obvious filtering and classification mistakes can be seen in the data sets, especially in sets 1 and 3, where some strange points below the ground surface remain. This should be mentioned when comparing the results, since these errors would have been edited manually if the data were to be delivered to a customer. We have left them in the data set since the automatic procedure could not eliminate all of them.

Notes by the author:

The filtering for the data set has followed a standardized procedure consisting of the following steps:

1.        Finding low erroneous points. Some of the data sets have a large number of erroneous low points. In some cases it is hard, even for an operator, to see whether these points are wrong or not. TerraScan has a procedure for removing low points, and in most cases this function has worked properly. There are, however, cases, when there are large clouds of erroneous points, where it has failed.

2.        Classification of ground points. On the remaining points the ground classification procedure of TerraScan is applied. The procedure has a number of parameters which have to be set. Only two different settings have been used, one of which is the default setting.

3.        After ground classification, all points closer than 0.1 m to the TIN-surface were added to the ground class.

4.7     G. Sithole , M.G. Vosselman

Filter principle: This filter is a variant of the morphological filter developed by Vosselman (2000, 2001). A morphological filter works by pushing a structuring element (in the shape of an inverted bowl) vertically upwards from underneath the point-cloud. The structuring element is centered (horizontally) on a point and raised until it encounters a point in the point-cloud. If the structuring element encounters a point other than the one it is centered on, the centering point is treated as not belonging to the terrain surface; a point is accepted as belonging to the terrain only if it is the first point the structuring element encounters on its way up. In this process the structuring element moves from point to point in the point-cloud. The slope of the structuring element (the parameter of the filter) is determined using a training set (see the slope-based filter of Vosselman et al. 2001).

To improve the performance of the algorithm in steep terrain, the slope of the cone is adapted to the slope of the terrain. This is achieved by computing a rasterized slope (gradient) map from the lowest points in each cell. The algorithm is run as before, except that the slope of the cone at a point is set equal to the slope in the gradient map below it, subject to a chosen minimum slope. The slope from the slope map is also pre-multiplied by a chosen scale factor to reduce the influence of low, high-frequency variations in the point-cloud (e.g., low vegetation). Further details can be found in Sithole (2001).
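A minimal sketch of the slope-adaptive test: point i is kept as Bare Earth only if no other point j lies below it by more than slope_i times their horizontal distance, where slope_i is taken from a local slope value (here passed in per point), scaled and floored at a minimum slope. The brute-force search, the per-point slope input and the max(min_slope, scale * slope) rule are one plausible reading of the description above, not the published implementation.

```python
import numpy as np

def slope_adaptive_filter(points, local_slopes, min_slope=0.3, scale=1.5):
    """Structuring-element test with a slope that adapts to the local terrain
    slope (taken from a rasterized gradient map in the original algorithm)."""
    pts = np.asarray(points, dtype=float)
    keep = np.ones(len(pts), dtype=bool)
    for i in range(len(pts)):
        cone_slope = max(min_slope, scale * local_slopes[i])  # assumed adaptation rule
        for j in range(len(pts)):
            if i == j:
                continue
            d = np.hypot(pts[i, 0] - pts[j, 0], pts[i, 1] - pts[j, 1])
            if pts[i, 2] - pts[j, 2] > cone_slope * d:        # the cone under i hits j first
                keep[i] = False
                break
    return keep

# Flat terrain with one vegetation point; a uniform local slope of 0.1.
pts = [(0, 0, 5.0), (2, 0, 5.1), (4, 0, 5.0), (2, 2, 9.0)]
print(slope_adaptive_filter(pts, local_slopes=[0.1, 0.1, 0.1, 0.1]))  # last point rejected
```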

Notes by the authors:

Before the morphological filter is applied low outliers are detected and rejected.

4.8     N. Pfeifer, C. Briese

Filter principle: In this algorithm the derivation of the terrain as well as the classification of the original points is performed in a hierarchic approach. In each hierarchy level, robust interpolation is used for the classification of the points and the derivation of the surface. This robust interpolation is presented in Kraus et al. (1998) and Pfeifer et al. (1998).

Briefly, a rough approximation of the terrain is first computed using the points of the respective hierarchy level. The vertical distance of the points to this approximate surface is then used in a weight function to assign weights to all points. Points above the surface are given a small weight and those below the surface are given a large weight. The surface is then recomputed using a linear interpolation function and the assigned weights. In this way the recomputed surface is attracted to the low points. The process is iterated until a certain number of iterations have been reached or the computed surface does not change significantly between iterations.
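The iterative reweighting just described can be sketched as follows; a low-order polynomial surface and a simple asymmetric weight function stand in for the linear prediction and the published weight function, so the specifics are assumptions of this sketch.

```python
import numpy as np

def robust_surface(points, iterations=5, halfwidth=1.0, degree=2):
    """Fit a surface by weighted least squares, then reweight the points with
    an asymmetric function of their residuals (points above the surface get
    small weights, points below keep full weight) and refit."""
    pts = np.asarray(points, dtype=float)
    x, y, z = pts[:, 0] - pts[:, 0].mean(), pts[:, 1] - pts[:, 1].mean(), pts[:, 2]
    A = np.column_stack([x**i * y**j for i in range(degree + 1)
                         for j in range(degree + 1 - i)])
    w = np.ones(len(pts))
    for _ in range(iterations):
        sw = np.sqrt(w)
        coeffs, *_ = np.linalg.lstsq(A * sw[:, None], z * sw, rcond=None)
        resid = z - A @ coeffs                   # positive above the surface, negative below
        w = np.where(resid <= 0, 1.0, 1.0 / (1.0 + (resid / halfwidth) ** 4))
    return resid, coeffs                         # residuals drive the final classification
```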

On completion of the iterations points are classified. If a point is vertically above or below the surface within a predefined threshold, the point is classified as a terrain point. If a point is outside this threshold then it is classified as a non-ground point. This robust interpolation has been extended to the hierarchic robust interpolation (Pfeifer et al., 2001). It works in a coarse to fine approach using data pyramids (i.e. using coarser and coarser selections of the original points "as we move to the top of the pyramid"). Starting with the coarsest/highest level of points the robust interpolation is applied. To move from one level to the next finer/lower one, the surface of the coarser level is compared to the points of the finer level. Those within a predefined threshold are selected and are the input for the robust interpolation on the next finer level. To better handle discontinuities, the algorithm allows break lines to be defined manually, but this has not been implemented yet (Briese et al. 2001).

This algorithm has been implemented in the SCOP software at the Vienna University of Technology. Further details can be found in (Kraus et. al., 1998), (Pfeifer et. al., 1998), (Pfeifer et. al. 1999a), (Pfeifer et. al. 1999b), (Pfeifer et. al. 2001), (Kraus et. al., 2001), and (Briese et. al. 2001).

Known variants: Schickler (Schickler et al. 2001) modified this algorithm to include break-lines, curvature constraints and slope constraints to control the estimated surface. In addition, a classification map (vegetation types, water bodies, urban areas, etc.) is input into the algorithm, and a parameter set ideal for each class type is associated with it. Briese et al. (2001) also include the handling of break-lines in their algorithm.

Notes by the author (on the parameters):

The hierarchic robust filtering is a very general method to remove gross errors in interpolation tasks. In one robust interpolation step some parameters for the interpolation of the surface itself (i.e. the linear prediction) as well as parameters for the robust removal of gross errors have to be set. The number of parameters for the robust interpolation is 9. These parameters are - for the special case of LIDAR data filtering - predefined for each hierarchy level. We call these parameters default parameters. Additionally for LIDAR data filtering we have a default strategy (i.e. number of hierarchy levels, sequence of filter steps). We use 3 hierarchy levels.

As written above, to come from a coarser to a finer level we apply "predefined threshold" values to separate between ground points and off-terrain points. We also have default values for these parameters. All together we have 35 parameters in a default strategy. If we apply a more complex strategy, it could be up to 100 parameters.

Our workflow for laser scanner data filtering is the following: We apply the default strategy adapted to the point density (first parameter). We look at the intermediate results and refine the predefined parameters if necessary. Altogether approximately 9 parameters (or fewer) are changed. This change takes place primarily in the coarse levels.


4.9     Summary

Several characteristics of the tested algorithms are given in Table 4.2.

Table 4.2

 

Filter (Description): Input Format; Output Format; Also uses second pulse; Number of operator settings; Iterative; Filled Gaps

Elmqvist (Active Contours): Grid; Grid; No; not given; Yes; Yes
Sohn (Regularization Method): Point list; Point list; No; 3; Yes; No
Roggero (Modified Slope based filter): Point list, plus grid as database support; Point list or Grid; No; 5; Yes; No
Brovelli (Splines): Grid; Grid; Yes; 20; No; Yes
Wack (Hierarchic Modified Block Minimum): Grid; Grid; No; not given; Yes; No
Axelsson (Progressive TIN densification): Point list; Point list; No; 2; Yes; No
Vosselman, Sithole (Modified Slope based filter): Point list; Point list; No; 4; No; No
Pfeifer, Briese (Hierarchic robust interpolation): Point list; Point list; No; not given; Yes; No

 

 


PART II

5           Test data

As part of the second phase of the OEEPE project on laser scanning, companies were invited to fly over the Vaihingen/Enz test field and Stuttgart city center. These areas were chosen because of their diverse feature content (open fields, vegetation, buildings, roads, railroads, rivers, bridges, power lines, water surfaces, etc.). However, the areas fall into two groupings, Urban and Rural. From within the two groupings eight sites (numbered 1 through 8) were selected for the comparison of filtering algorithms (see Appendix A). The sites represent four regions with urban characteristics and another four with rural characteristics.
The data for the sites is extracted from laser scanning data produced by FOTONOR AS. The areas were scanned with an Optech ALTM scanner, and both first and last pulse data were recorded. Some characteristics of the test-sites are provided in Table 5.1. It is this data that was offered to participants for processing.

Table 5.1 Characteristics of the test sites

Site 1 (Urban), point spacing 1-1.5 m / 2-3.5 m / 4-6 m: steep slopes, mixture of vegetation and buildings on hillside, buildings on hillside, data gaps

Site 2 (Urban), point spacing 1-1.5 m: large buildings, irregularly shaped buildings, road with bridge and small tunnel, data gaps

Site 3 (Urban), point spacing 1-1.5 m: densely packed buildings with vegetation between them, building with eccentric roof, open space with mixture of low and high features, data gaps

Site 4 (Urban), point spacing 1-1.5 m: railway station with trains (low density of terrain points), data gaps

Site 5 (Rural), point spacing 2-3.5 m: steep slopes with vegetation, quarry, vegetation on river bank, data gaps

Site 6 (Rural), point spacing 2-3.5 m: large buildings, road with embankment, data gaps

Site 7 (Rural), point spacing 2-3.5 m: bridge, underpass, road with embankments, data gaps

Site 8 (Rural), point spacing 2-3.5 m / 4-5.5 m / 7-10 m: high bridge, break-line, vegetation on river bank, data gaps

 

As can be seen, data for Sites 1 and 8 are provided at three different resolutions. This was done to test filter performance at different point-cloud resolutions. To obtain the first reduced resolution the scan lines in the original point-clouds were detected and every second point in a scan line was dropped. Similarly the second reduced point-cloud was produced from the first reduced point-cloud.

5.1     Reference data sets

The reference data was generated by manually filtering the eight test data sets. In the manual filtering (or classification), knowledge of the landscape and, where available, aerial imagery were used. All points in the reference data sets were labeled Bare Earth or Object. From the eight data sets fifteen samples were abstracted (appendix A). These fifteen samples were rechecked and it is these samples that are used in the quantitative analysis. The fifteen samples (appendix A) are representative of different environments (but are more focused in respect to the expected difficulties).

5.2     Filtered data sets

Figure 5.1 (Left) Reference data contains both Bare Earth and Object points, (Right) Filtered data only contains Bare Earth points

Participants filtered the eight test data sets. Corresponding samples to the fifteen reference samples were also extracted from the filtered data. The filtered data sets contain only the Bare Earth points and therefore all points are labelled as Bare Earth.

 



PART III

6           Results/ Comparisons

6.1     Data

For information on the data used in the comparison see section 5.

6.2     Qualitative assessment

As already mentioned, fifteen samples were extracted from the eight data sets. The samples were extracted with a view to examining and comparing how the different filters behave and to identifying difficulties in filtering. Based on an examination of the eight data sets and the fifteen sample sets (appendix A), each of the filters was rated for each of the difficulties. The ratings are all relative to the sample sets.

6.2.1            Filtering difficulties

The filtering difficulties identified from the qualitative comparison relate to Outliers, Object Complexity, Attached Objects, Vegetation and Discontinuities in the Bare Earth. Each is briefly discussed below.

6.2.1.1             Outliers

High points – high outliers: These are points that normally do not belong to the landscape (Figure 6.1). They originate from hits off Objects like birds, low flying aircraft, etc. Most filters handle such features easily, because they are so far elevated above neighboring points. Therefore they are included here for completeness only.

 

Low points – low outliers: These are points that also normally do not belong to the landscape (Figure 6.1). They originate from multi-path errors and errors in the laser range finder. Most filters work on the assumption that the lowest points in a point-cloud must belong to the terrain. These points are naturally an exception to the rule.

 

Influence: Many algorithms also work on the assumption that points neighboring a low point must belong to an Object. In practice this assumption usually holds. However, in cases where the lowest point is an outlier, the assumption fails completely, resulting in erosion of points in the neighborhood of the low outlier.

 

 

Figure 6.1 (top left) high outliers, (top right) low outliers, (bottom) erosion caused by the presence of a low outlier.

6.2.1.2             Object complexity

Very large Objects: Because many of the filtering algorithms are localized, large Objects may not be filtered if their size exceeds that of the test neighborhood (Figure 6.2).

 

Very small Objects (elongated Objects, low point count): These are Objects that have a small footprint. Prominent examples of such Objects are vehicles. Comprising 10 or fewer points, such Objects tend to be pill shaped (Figure 6.2).

 

Very low Objects (walls, cars, etc.): The closer an Object is to the Bare Earth, the more difficult it becomes for algorithms to differentiate between it and the Bare Earth. This problem is complicated even further by the need not to incorrectly filter off small but sharp variations in terrain (Figure 6.2).

 

Complex Shape/Configuration (terraces): A major difficulty posed by urban environments is the variety and complexity of Objects found in them. This complexity manifests itself in the shape, configuration, and lay of Objects (Figure 6.2).

 

Disconnected terrain (courtyards, etc.): In urban environments it is common to find patches of Bare Earth enclosed by Objects (Figure 6.2). The decision of whether an enclosed patch is Bare Earth is not always clear-cut.

 

 

Figure 6.2 (top left) large, small and low Objects, (top right) complex configurations, (bottom) courtyard

6.2.1.3             Attached Objects

Building on slopes: Such Objects have roofs that are elevated above the Bare Earth on some sides and minimally or not at all on other sides. Because of this it becomes difficult to distinguish between such Objects and the Bare Earth.

 

Bridges: Artificial structures spanning the gap (road, river, etc.,) between Bare Earth surfaces.

 

Ramps: Natural/Artificial structures spanning the gaps between Bare Earth surfaces; one lower than the other.

 

Figure 6.3 (top left) gangway connecting terrain and building (top right) gangway spanning a road, (bottom) bridge

6.2.1.4             Vegetation

Vegetation on slopes: Vegetation points can be filtered based on the premise that they are significantly higher than their neighborhoods. This assumption naturally falls away in steep terrain where terrain points may lie at the same height as vegetation points (Figure 6.4).

 

Low vegetation: Similar to the problem of low Objects (Figure 6.4).

 

Figure 6.4 (left) vegetation on slope, (right) low vegetation on slope

6.2.1.5             Discontinuity

Preservation (Steep slopes): Generally Objects are filtered because they appear as discontinuous in the landscape. Occasionally it also happens that the Bare Earth is piecewise continuous. At discontinuities some filters will operate as they would on Objects. As a result, discontinuities in the Bare Earth are lost (Figure 6.5).

 

Sharp ridges:

The preservation of ridges is a similar but more drastic problem to that of retaining convex slopes, as described by Huising and Gomes Pereira (1998) (Figure 6.5).

Figure 6.5 (left) steep slopes, (right) ridge

 

6.2.2            Assessment

The assessment of how filters appear to perform against the difficulties mentioned above is presented in Tables 6.1 and 6.2. The qualitative assessment was based on a visual examination and comparison of the filtered data sets. From Table 6.2 it can be seen that the main problems faced by the filter algorithms are in the reliable filtering of complex scenes, filtering of buildings on slopes, filtering of disconnected terrain (courtyards), and the preservation of discontinuities.

Table 6.1 Meaning of Good, Fair and Poor (used in Table 6.2)

Good (G). Item filter rating: item filtered most of the time (> 90%). Influence rating: no influence.

Fair (F). Item filter rating: item not filtered a few times. Influence rating: small influence on filtering of neighboring points.

Poor (P). Item filter rating: item not filtered most of the time (< 50%). Influence rating: large influence on filtering of neighboring points.

 

Table 6.2 Qualitative analysis

 

Filter order in each row: Elmqvist, Sohn, Roggero, Brovelli, Wack, Axelsson, Vosselman/Sithole, Pfeifer/Briese.

Data Format: Grid, Point list, Point list, Point list/Grid, Grid, Point list, Point list, Point list

Outliers
High points: G, G, G, G, G, G, G, G
High points influence: G, G, G, G, G, G, G, G
Low points: G, F, F, G, G, F, F, G
Low points influence: G, G, G, G, G, P, P, G

Object complexity
Objects in general: G, G, G, G, G, G, G, G
Very large Objects: G, G, G, G, G, G, F, G
Very small Objects (elongated Objects, low point count): F, F, G, F, F, G, F, G
Complex shape/Configuration (terraces): F, F, F, F, F, F, F, F
Very low Objects: P, P, G, G, G, F, F, F
Disconnected terrain (courtyards): F, F, F, F, F, F, F, F

Attached Objects
Building on slopes: G, F, F, F, F, G, F, G
Bridges: G/R, G/R, G/R, G/R, G/R, F, G/R, G/R
Ramps: P, P, P, P, P, F, P, P

Vegetation
Vegetation in general: G, G, G, G, G, G, G, G
Vegetation on slopes: G, G, F, F, F, F, F, G
Low vegetation: G, F, F, F, G, F, F, G

Discontinuity
Preservation: P, P, P, P, F, F, P, F
Sharp ridges: P, P, P, P, F, P, P, P

R: Removed

6.3     Quantitative assessment

The quantitative comparison was done with the aim of generating cross-matrices and generating visual representations of the cross-matrices. The cross-matrices are then used to evaluate Type I and Type II errors, and visual representation is used to determine the relationship of Type I and Type II errors to features in the landscape. Furthermore the size of the error between the reference and filtered DEMs is computed and analysed. The purpose of this is to determine the potential influence of the filtering algorithms on the resulting DEM, based on the predominant features in the data set.

6.3.1            Cross-matrices

Figure 6.6 cross-matrix generation

The process of generating the cross-matrices is shown in Figure 6.6. This process of comparing points against a DEM was necessary because some of the participants supplied their data in a grid format, or a format in which the original reference points were altered. Therefore, the predefined threshold has a bearing on the size of Type I and Type II errors. If in the output of the participant’s filter points are gridded or altered in position then the interpretation has to take into account the predefined threshold used in the comparison. The two possible interpretations are briefly explained below.

Points in the reference not altered: If the output of a filter is a list of unedited points (position of a point is not altered), then there should be point-to-point correspondence with the reference (this was the case for Sithole, Roggero, Pfeifer). Here therefore, the predefined threshold has little relevance.

Points in the reference altered: If there is no point-to-point correspondence then a predefined threshold is used as a measure of agreement between a point in the reference and its correspondent in the filtered data (this was the case for Wack, Elmqvist, Axelsson, Brovelli, Sohn). The threshold was chosen based on the largest expected interpolation error participants could have made in their algorithms.
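A sketch of how such a cross-matrix can be generated when there is no guaranteed point-to-point correspondence: a reference point counts as accepted Bare Earth if the filtered output contains a corresponding point whose height agrees within the predefined threshold (0.20 m in this report). The nearest-neighbour matching used here is an assumption about how the correspondence was established.

```python
import numpy as np
from scipy.spatial import cKDTree

def cross_matrix(ref_xyz, ref_is_bare_earth, filtered_xyz, threshold=0.20):
    """Return [[a, b], [c, d]]: reference Bare Earth/Object (rows) against
    filtered Bare Earth/Object (columns)."""
    ref = np.asarray(ref_xyz, dtype=float)
    filt = np.asarray(filtered_xyz, dtype=float)
    labels = np.asarray(ref_is_bare_earth, dtype=bool)
    _, nearest = cKDTree(filt[:, :2]).query(ref[:, :2])
    kept = np.abs(ref[:, 2] - filt[nearest, 2]) <= threshold  # accepted as Bare Earth
    a = int(np.sum(labels & kept))      # Bare Earth correctly kept
    b = int(np.sum(labels & ~kept))     # Bare Earth rejected (Type I)
    c = int(np.sum(~labels & kept))     # Object kept (Type II)
    d = int(np.sum(~labels & ~kept))    # Object correctly rejected
    return np.array([[a, b], [c, d]])
```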

How the threshold was computed is explained in the next section and the evaluation of the cross-matrices is explained in the section after that.

6.3.2            Interpolation error and the predefined threshold

To determine the largest possible interpolation error, a few small samples (<150 points) were extracted from the data sets. The samples were extracted at points of strong curvature in the terrain.

To these samples polynomial surfaces (degree 4) were fitted using least squares. The mean of the standard deviations of the residuals was then used as a measure of the interpolation error that may have been made by those participants that output gridded data. From this a value of 0.20 m (approx. 0.218 m, Table 6.5) was used when generating the cross-matrices in appendix C.
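The derivation of the threshold can be reproduced with a short sketch: fit a degree-4 polynomial surface to each small sample by least squares and take the standard deviation of the residuals; the threshold is the mean of these values over the samples. A plain monomial basis with centred coordinates is assumed here.

```python
import numpy as np

def interpolation_error(sample_xyz, degree=4):
    """Standard deviation of the residuals of a least-squares degree-4
    polynomial surface fitted to a small terrain sample."""
    pts = np.asarray(sample_xyz, dtype=float)
    x = pts[:, 0] - pts[:, 0].mean()      # centre coordinates for numerical stability
    y = pts[:, 1] - pts[:, 1].mean()
    z = pts[:, 2]
    A = np.column_stack([x**i * y**j for i in range(degree + 1)
                         for j in range(degree + 1 - i)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return (z - A @ coeffs).std()

# The predefined threshold is then the mean over the extracted samples, e.g.
# threshold = np.mean([interpolation_error(s) for s in samples])  # 'samples' is hypothetical
```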

Table 6.5 Determination of predefined threshold

 

Residual statistics in metres; N is the number of points in each extracted sample.

             Sample 1   Sample 2   Sample 3   Sample 4   Sample 5
N                  53         40        138         75        151
Mean           -0.003     -0.000      0.006     -0.000     -0.001
Std. Dev.       0.223      0.138      0.118      0.355      0.255
Min            -0.587     -0.367     -0.436     -0.836     -0.579
Max             0.624      0.370      0.249      0.696      0.741

Mean of Std. Deviations: 0.218

 


6.3.3            Assessment measures

For each of the samples a cross-matrix is presented graphically and numerically in appendix C. The layout of the numerical result is shown below:

PARTICIPANT

                                    Filtered
                                    Bare Earth (BE)    Object (Obj)
Reference    Bare Earth (BE)        a                  b
             Object (Obj)           c                  d

Alongside the matrix, the total point count, the class proportions, the unused point count and the error ratio (e, f, g, h, i, j and k) are reported.

Where

·          a, is the count of Bare Earth points that have been correctly identified as Bare Earth

·          b, is the count of Bare Earth points that have been incorrectly identified as Object (contribute to Type I errors)

·          c, is the count of Object points that have been incorrectly identified as Bare Earth (contribute to Type II errors)

·          d, is the count of Object points that have been correctly identified as Object

·          e, is the total number of points tested

·          a+b,  c+d,  are the total number of Bare Earth and Object points in the reference data respectively

·          a+c,  b+d,  are the total number of Bare Earth and Object points in the filtered data respectively

·          f, g, are the proportions of Bare Earth and Object points in the reference data in relation to the tested data respectively

·          h, i, are the proportions of Bare Earth and Object points in the filtered data in relation to the tested data respectively. Comparison of f,g, h and i  shows how composition of data is changed by Type I and Type II errors.

·          k, ratio of the number of Type I and Type II errors. This value shows the bias towards Type I and Type II errors.

·          j, is the number of points not used in the comparison, because of a lack of correspondence in the filtered data. The value of j is relatively small compared to the total number of points used.
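From the counts a, b, c and d, the reported error measures can be computed as sketched below, assuming the Type I and Type II percentages are taken relative to the reference Bare Earth and Object totals respectively and the total error relative to all tested points.

```python
def error_rates(a, b, c, d):
    """Type I error, Type II error, total error and the Type I / Type II ratio k
    from the cross-matrix counts."""
    e = a + b + c + d                 # total number of points tested
    type_1 = b / (a + b)              # Bare Earth points misclassified as Object
    type_2 = c / (c + d)              # Object points misclassified as Bare Earth
    total = (b + c) / e               # all misclassified points
    k = b / c if c else float("inf")  # bias towards Type I or Type II errors
    return type_1, type_2, total, k
```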

Additionally from the cross-matrices another table was generated to show Type I and Type II errors and the magnitude and distribution of the errors. A sample table is shown below;

 

             %      Min      Max      Mean     Std Dev.
Type I
Type II
Total

 

 

 

 

Where

·          %, is the percentage of Type I, Type II or Total error

·          l, m, are the magnitudes of the minimum and maximum Type I error respectively

·          n, is the mean of the Type I errors

·          p, is the standard deviation of the Type I errors

·          q, r, are the magnitudes of the minimum and maximum Type II error respectively

·          s, is the mean of the Type II errors

·          t, is the standard deviation of the Type II errors

The minima, maxima, means and standard deviations (l, m, n, p and q, r, s, t) are computed over the misclassified points only. Therefore, the actual DEM bias and standard deviation (which would also include the correctly classified points) will be much better.

It would be time-consuming to document every one of the 15 samples individually. Therefore, the next sections focus on the problems that stand out from a comparison of the graphical cross-matrices.

6.3.4            Assessment

A more detailed presentation of the quantitative results is provided in Appendix B. What follows is an abstraction of the outstanding features of the results. It has to be stressed that what is presented covers the difficulties in filtering as observed in the data; in general, all the filters worked quite well.

6.3.4.1             Type I vs Type II

All the filtering algorithms examined make a separation between Object and Bare Earth based on the assumption that certain structures are associated with the former and others with the latter. This assumption, while often valid, does sometimes fail. The failure is caused by the fact that filters are blind to the context of structures in relation to their neighbourhoods. Because of this, a trade-off is involved between making Type I errors (classifying Bare Earth points as Object points) and Type II errors (classifying Object points as Bare Earth points).

The ranges of the errors are approximately 0-64%, 0-19% and 2-58% for the Type I, Type II and Total errors respectively. This shows that the tested filtering algorithms focus on minimising Type II errors. It can be seen even more clearly from the graphical comparison that most filters focus on minimising Type II errors, with the exception of the filters by Axelsson and Sohn. In other words, filter parameters are chosen to remove as many Object points as possible, even if this is at the expense of removing valid Bare Earth points, suggesting that participants consider the cost of Type II errors to be much higher than that of Type I errors.

 

6.3.4.2             Steep Slopes

Steep slopes are present in samples 11, 51 and 52. Visually it can be seen that the Axelsson filter generates the least total error (total number of points misclassified). One explanation for this could lie in the bias of the Axelsson filter (or its parameterization) towards Type II errors. In general there are fewer Object points than Bare Earth points; a filter biased towards making Type II errors will therefore misclassify fewer points in total than one biased towards Type I errors, simply because there are fewer Object points available to misclassify.

On the lower slopes (south-west of the road) in sample 11 there are many buildings and, complicating matters further, there are terraces in the Bare Earth with which filters appear to have difficulty, hence the large number of Type I errors in these areas.

Figure 6.7 Steep slopes with buildings and dense vegetation

6.3.4.3             Discontinuities

From samples 22, 23 and 53 it can be seen that, overall, the two slope-based filters have the most difficulty with discontinuities in the Bare Earth. This is borne out by the large number of Type I errors. However, as the height difference at a discontinuity increases, the performance of the slope-based filters remains the same. This is not the case with some of the other filters, where a discontinuity can also affect filtering in its neighbourhood.

Another interesting aspect of filtering at discontinuities is where the Type I errors occur. Some filters only cause Type I errors at the top edge, whereas others cause errors at both the top and bottom edges. The potential for the latter is relatively higher for surface-based filters.

Figure 6.8 Bare Earth filtered incorrectly at the base and top of discontinuities

6.3.4.4             Bridges

As already mentioned, filters are blind to context. Because of this, filters do not make a reasoned distinction between Objects that stand clear of the Bare Earth and those that are attached to the Bare Earth (e.g., bridges). From the results it can be seen that bridges are removed completely, partially or not at all. All the algorithms, with the exception of Axelsson's, appear to remove bridges completely; only Axelsson's algorithm is not consistent in removing bridges. A possible reason for this could be the method of point seeding used in the algorithm.

Another problem with the filtering of bridges relates to the decision made about where a bridge begins and ends. This problem shows up as Type II errors at the beginning and end of bridges (bridges in the test were treated as Objects). It can be seen in samples 21 and 71 (and figure 6.9, top right). The error is generally not large.

Similar to bridges are ramps, as shown in sample 24. Ramps resemble bridges in that they span gaps in the Bare Earth. However, they differ in that they do not allow movement below them and, according to the definitions set out in section 2, they are therefore Bare Earth. As such, ramps were treated as Bare Earth in the reference data. Nevertheless, all the algorithms filtered off the ramp in sample 24.

 

Figure 6.9 (top left) bridge, (top right) bridge correctly filtered, (bottom) ramp

6.3.4.5             Complex scenes

Complex scenes are present in samples 11, 22, 23.

Sample 23 (figure 6.10) presents the most difficult challenge. Shown in the scene is a plaza surrounded on three sides by a block of buildings. From the plaza it is possible to walk onto the road to the east and also to descend via stairs to the road below (west). Further complicating matters, there is a sunken arcade in the centre of the plaza. Defining what is and what is not Bare Earth in such a scenario is rather difficult. For the purpose of the test the plaza and arcade were assumed to be Bare Earth, on the rationale that it is possible to walk without obstruction from the plaza to the roads on the west and east. Also visible to the south-west of the scene is an excavation, which was also treated as Bare Earth, based on the definition in section 2.

The two slope-based filters faced the greatest difficulties in this area. This is understandable because of the high number of large buildings and discontinuities in the Bare Earth.

Figure 6.10 complex urban scene

6.3.4.6             Outliers

The number of outliers (both high and low) is relatively small and therefore their contribution to Type I and Type II errors is small. However, their influence on filtering in their neighbourhoods can be considerable. This can be seen in samples 31 and 41. In sample 31 it will be noticed that the filters by Axelsson and Sithole produce Type I errors that are arranged in a roughly circular shape. This is because of a single low outlier at the centre of the circle.

In sample 41 the situation is rather different in that there is not one but many low outliers (apparently caused by a skylight in one of the roofs). Here the filters by Brovelli, Axelsson, Sohn and Sithole faced problems (particularly in the courtyard). What this shows is that, while single outliers cause problems for some filters, numerous closely spaced outliers will cause problems for many filters. Moreover, the influence of numerous outliers on their neighbourhoods can be significant, depending on the concept on which a filter is based.

 

6.3.4.7             Vegetation on slopes

Examples of vegetation on steep slopes are given in samples 51 and 52. What stands out in these samples is that most of the filters do well in identifying the vegetation (forest). However, some of this is done at the cost of increased Type I errors on the underlying slope, and in the case of Elmqvist and Brovelli, as shown by their total errors, quite significantly so.

In sample 52, the vegetation lies much closer to the slope (<1 m).

Figure 6.11 steep slope with dense vegetation in rural scene

6.3.4.8             Low Bare Earth point count

Because filters depend on detecting structures, it is essential, especially for those filters that detect the Bare Earth, that there be enough sample Bare Earth points. Sample 42 shows a railway station in which Bare Earth points are sparse. Nonetheless, most of the filters do well in identifying the Bare Earth despite the low count of Bare Earth points.

Steep slopes with low Bare Earth point counts were sought, but the best example that could be found is shown in sample 51. Therefore, this aspect still requires further tests.

 

Figure 6.12 railway station

6.3.4.9             Effect of resolution

Effect on Type I and Type II errors

As the resolution of the data is lowered, the Bare Earth and Objects begin to lose definition. Therefore, to determine how filters cope when the definition of the Bare Earth and Objects breaks down, filtered data at different resolutions (for sites 1 and 8) were compared with reference data of corresponding resolution. A cross-matrix was generated (as described in section 6.3.1) for each comparison. Measurement of the effect of resolution on filtering is therefore based on the variation of Type I and Type II errors at the different resolutions. A detailed comparison is provided in Appendix B, Tables B20 through B25. Table 6.4 and Figure 6.13 summarise the results.

Overall, Type I and Type II errors increase with decreasing resolution. However, comparing sites 1 and 8 (figure 6.13), it can be seen that there are variations and exceptions. Four possible reasons are thought to explain this:

Landscape characteristics – The Type I errors for site 1 are much larger than those for site 8. This is due to (i) more complex objects in site 1 and (ii) buildings and vegetation on steep slopes in site 1.

Filter concept vs. neighbourhood size – In section 3.4 four different filter concepts were identified, and the choice of neighbourhood was touched on in section 3.2. The combination of these factors is thought to be responsible for the variations in Type I errors. For site 1, both slope-based filters (Roggero and Sithole) show decreasing Type I errors with decreasing resolution. As the resolution is decreased there are fewer points against which a point is tested (fixed neighbourhood); hence, on steep slopes, a drop in Type I errors is to be expected. Naturally, Type II errors will also increase. For surface-based and minimum-block filters (Pfeifer and Wack) the neighbourhood has to be expanded to achieve a minimum sampling of a surface. Because of this the surface fit becomes more general and an increase in Type I errors can be expected.

Filter parameter optimality – Filter parameters have to be tweaked to obtain optimal results at different resolutions. However, it is not guaranteed that the most optimal result will be obtained at every resolution. The small decreases in Type I or Type II errors at lower resolutions are believed to be due to this.

Edge effects – For filters that work on gridded data, artefacts can become pronounced along the edges of the data (or where there are gaps), especially at the lower resolutions. The large increase in Type I error in site 8 (10 m resolution) for the Wack filter is due to this.

Table 6.4 Resolution vs. Type I and II errors for Site 1 and Site 8 (data was not available for blank fields). All errors are given as percentages.

                     Site 1                                        Site 8
                     Original     Reduction 1   Reduction 2        Original     Reduction 1   Reduction 2
                     (1 – 1.5m)   (2 – 3.5m)    (4 – 6m)           (2 – 3.5m)   (4 – 5.5m)    (7 – 10m)

    Type I
    Elmqvist
    Sohn                 17            15            20                 3             4             7
    Axelsson             16            18            19                 1             1             4
    Pfeifer              21            30            38                 3             8             6
    Brovelli             27            37                               5             9
    Roggero              27            19            24                 5            10            12
    Wack                 21            26            34                 3             7            34
    Sithole              35            23            19                 5             4

    Type II
    Elmqvist
    Sohn                  6            15            12                 5            17            14
    Axelsson              2             2             3                 8            13            14
    Pfeifer               2             1             1                 2             1             5
    Brovelli              4             3                               4             5
    Roggero               1             6             5                 4             5             6
    Wack                  2             3             6                 3             8             5
    Sithole               2             4             8                 3            10

Figure 6.13 Effect of reducing resolution on Type I and II errors


7           Discussion and Conclusion

The objectives of the study were to, (1) determine the performance of filter algorithms, (2) determine how filtering is affected by point density and (3) establish future research issues. These objectives are treated individually in the sections below.

7.1     Performance

It has to be stressed that what has been presented are some of the striking difficulties in filtering as observed in the data. In general, all the filters worked quite well in landscapes of low complexity (characterised by gently sloped terrain, small buildings, sparse vegetation and a high proportion of Bare Earth points).

7.1.1            Main problems

Problems that pose the greatest challenges appear to be complex cityscapes (multi-tier buildings, courtyards, stairways, plazas, etc.,) and discontinuities in the Bare Earth. It is expected that tailoring algorithms specifically for these areas may improve results, albeit by a small amount.

To improve results significantly in complex landscapes will require context based classification and the use of external data to reinforce the classification of points.

7.1.2            Strategy

The success of a filtering strategy/approach depends on the concept and its implementation, and both aspects have to be strong for the strategy to succeed. The concept deals with the abstract models used to describe the discrete landscape and the basis on which the components in the landscape are separated. The implementation deals with the way in which the concept is translated into code; issues of physical memory, computing power and optimization influence this translation. For example, several algorithms may use a surface-based approach but differ in the number of points they test at a time. This can influence the performance of a filter, especially if the data is irregularly spaced. The larger the number of points used, the slower the algorithm becomes.

Overall, surface-based strategies appear to yield better results. However, from the numerical results, the filter by Wack and Wimmer also seems to perform well. This again demonstrates that what matters is not only the concept but also the way the concept is implemented.

Having noted that surface-based strategies perform better, it is the opinion of the authors that clustering and segmentation algorithms (or some hybrid based on these concepts) hold more promise. The rationale behind this is as follows:

At a local level there are fewer structural differences between Bare Earth regions and Object regions and both can be modeled easily with simple models. Therefore, separation between Bare Earth and Object must be based on predefined structural variations in these simple models.

However, if the size of the locale is increased the modeling becomes more global, complex and difficult to realize; modeling then becomes feasible only if an object has a uniform global structure. It has to be remembered that there is essentially only one Bare Earth and it extends over a large space, in contrast to Objects, which are many in number and cover relatively small spaces. As a result it is much easier to model Objects than it is to model the Bare Earth. Put differently, if a surface is easily modeled then there is a strong likelihood that it is an Object rather than Bare Earth. This in a way explains why, in the test, surface-based methods performed better than the other, more localized, methods. Naturally, a segmentation or clustering approach is a more global modeling of the point-cloud, and because the data is viewed as a collection of higher-level Objects, more robust reasoning can be applied in the classification between Bare Earth and Objects.

There is another aspect of current filtering strategies that can be improved, and it relates to the number of steps in classifying points. Current filtering strategies make only a two-way distinction between features in a landscape: Bare Earth or Object. From the results it is evident that this distinction is insufficient. A hierarchical approach to filtering, as shown in figure 7.1, will potentially yield more controlled results.

Figure 7.1 A hierarchical approach to filtering
In the hierarchical approach, points that are identified as Object are further classified to search out Objects (e.g., bridges, ramps, etc.,) that have strong associations with the Bare Earth.
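The following is a minimal sketch of this two-step idea, assuming the two classification steps are supplied as interchangeable functions; the names and the structure are illustrative only, since the report does not prescribe a particular implementation.

    from typing import Callable, Sequence, Tuple

    Points = Sequence[Tuple[float, float, float]]        # x, y, z coordinates
    Split = Callable[[Points], Tuple[Points, Points]]

    def hierarchical_filter(points: Points,
                            split_bare_earth: Split,
                            split_attached: Callable[[Points, Points], Tuple[Points, Points]]):
        # Step 1: the conventional separation into Bare Earth and Object points.
        bare_earth, objects = split_bare_earth(points)
        # Step 2: re-examine the Object points to find Objects with strong associations
        # to the Bare Earth (bridges, ramps, ...), which may warrant separate treatment.
        attached, detached = split_attached(objects, bare_earth)
        return {"bare_earth": bare_earth, "attached": attached, "detached": detached}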

 


7.1.3            Which error should be reduced?

Three error measures (Type I, Type II, Total) have been used to assess the quality of the filter results. To some extent a decision has to be made between minimising Type I and Type II errors. The question of which error to minimise depends on the cost of the error for the application that will use the filtered data. From a practical point of view it will also depend very much on the time and cost of repairing the errors manually, which is often done during quality control. Experience with manual filtering of the data showed that it is far easier to fix Type II errors than Type I errors. Firstly, there will generally be fewer Type II than Type I errors. Secondly, Type II errors are conspicuous because they stand out in their neighbourhoods. In contrast, Type I errors result in gaps in the landscape, and deciding whether a gap has been caused by a Type I error or by the removal of Objects is not easy.

There is also a third alternative, and that is to minimise the Total error. But, as already explained, reducing the Total error is biased in favour of minimising Type I errors, because very often in a landscape there are relatively more Bare Earth points than Object points.
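As a hypothetical illustration, consider a sample of 10 000 points of which 80% are Bare Earth and 20% are Object points. A filter making only Type I errors at a rate of 10% misclassifies 800 points (a Total error of 8%), whereas one making only Type II errors at the same rate of 10% misclassifies just 200 points (a Total error of 2%). Minimising the Total error therefore rewards the suppression of Type I errors first.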

It is suggested here that filtering should be biased in favour of minimising Type I errors, because Type II errors will be easier to correct manually during quality control.

7.2     Point density

More tests on decreasing resolution will need to be done, as the test sites chosen have proved inadequate to obtain a conclusive picture of the effects of resolution on filtering. The complexity of the sites has meant that even at the highest resolutions the filters have difficulties, which then masks the performance of the filters at lower resolutions. Typical results are shown for one of the filters in Table 7.1.

Table 7.1 Typical results for a filter (at three different resolutions)

    Site 1        Original (1 – 1.5m)                        Reduction 1 (2 – 3.5m)                     Reduction 2 (4 – 6m)
                  %     Min (m)  Max (m)  Mean (m)   s       %     Min (m)  Max (m)  Mean (m)   s       %     Min (m)  Max (m)  Mean (m)   s
    Type I        21    -17.25    35.07    -2.36    1.78     30    -14.07    34.29    -2.36    1.82     38    -13.37    17.99    -2.34    2.03
    Type II        2    -13.24     6.14    -1.33    2.36      1     -7.54     7.27    -0.43    1.12      1     -8.84     2.82    -0.40    1.34

    Site 8        Original (2 – 3.5m)                        Reduction 1 (4 – 5.5m)                     Reduction 2 (7 – 10m)
                  %     Min (m)  Max (m)  Mean (m)   s       %     Min (m)  Max (m)  Mean (m)   s       %     Min (m)  Max (m)  Mean (m)   s
    Type I         3    -17.90     2.26    -1.63    2.15      8    -16.18     3.29    -1.50    1.54      6    -16.14     0.87    -2.33    1.90
    Type II        2     -1.95     1.49    -0.16    0.44      1     -0.80     1.44     0.14    0.40      5     -3.36     1.05    -0.53    0.74

 

The means and standard deviations of the errors at all three resolutions are comparable. This is because of the presence of large errors in the filtering results, as can be seen from the size of the minimum and maximum errors.

Nonetheless, when choosing the scan resolution the filter concept used becomes critical, especially in landscapes with steep slopes and complex objects. This is based on the explanation given for the variations in Type I and Type II errors in the charts in figure 6.13. Additionally, a large Type I error does not necessarily mean that the resulting DEM will be poor; importantly, it depends on where the Type I and Type II errors occur in the landscape.

7.3     Research Issues

It is recognised that full automation is not possible because of the problems mentioned at the beginning of section 7. Nonetheless, the difficulties observed in complex urban landscapes and in Bare Earth characterised by discontinuities provide challenges that can potentially be overcome and thus improve the reliability of filtering.

7.3.1            Classification using context knowledge and external information

As indicated in section 7.1.1 filtering of complex scenes is difficult and to obtain significant improvement will require:

Firstly, algorithms that reason about the classification of points based on the context of their neighbourhoods, as opposed to current algorithms that classify solely on the basis of structures (i.e., slopes, surfaces, etc.). This assertion is supported by the fact that, in the comparisons, surface-based filters performed better than point-based filters, which examine less neighbourhood context.

Secondly, the use of additional information such as imagery to support the classification process. Even with more context, reliable results may not be realised because:

·          The semantics of Objects in a landscape change with the context (urban, rural, industrial, etc.,) of the environment. Establishing the context of the environment from the position of points alone is impossible.

·          It is not possible to extract sufficient semantic information from the position of the points alone.

·          Where there are insufficient or no Bare Earth points a classification of Objects cannot be made (this is particularly true in forested areas).

·          Point-clouds will contain systematic errors (multi-path, etc.) and noise. Separating these errors from the landscape points is non-trivial.

7.3.2            Quality reporting, Error flagging and Self diagnosis

Checking the quality has been possible because some reference data could be generated. Furthermore, the results have shown that filters are not foolproof and that performance can vary from one type of environment to another. Therefore, while testing a filter against reference data is a good means of gaining an appreciation of the filter's performance, it is not a guarantee that the filter will perform as expected. If the type of environment being filtered is untested, then unpredictable results can be, and should be, expected.

Therefore, it would be advantageous if filters could be designed to report on the anticipated quality of the filtering and/or flag where the filter may have encountered difficulties.

In addition to this there is the matter of the perception of reliability. This particularly relates to filters that output interpolated data. The existence of data after filtering creates the perception that it is accurate (i.e., that it is the Bare Earth). However, the tests have shown that with interpolated data it is arguable what accurate means, since it depends very much on the threshold used in the comparison.

7.3.3            Effort vs. Result

This section is in a way linked to the previous section. It is intuitive that the law of diminishing returns also applies to the filtering of point-clouds, particularly when filtering is done on the basis of the positional information of the points alone. At a certain point, depending on the concept or implementation used, better results will no longer be obtained. Ascertaining when that limit has been reached is difficult, but it is important to be aware of it. This awareness is important because of the large volumes of data to be processed. If the limit of each algorithm were known, then a multi-algorithm approach could be used to increase the efficiency of filtering. In this way the most efficient filter (in terms of computing effort and algorithm complexity) could be used for specific regions in a data set.

Additionally, there is the aspect of parameter selection. As shown by the reduced-resolution results, filter performance depends on the slope and complexity of the Bare Earth. In any landscape the slope and complexity of the Bare Earth can vary considerably, and filter parameters are chosen with the most difficult situations in mind. For some filters, choosing parameters in this way translates into more processing time. For such filters it would be more efficient to automatically detect the Bare Earth characteristics in an area and use the optimal filter parameters for that area.

7.4     Conclusion

The results from eight algorithms were compared against reference data sets. For typically non-complex landscapes most of the algorithms did well. However, for complex landscapes performance varied, and surface-based filters tended to do better.

The effect of lowered resolutions on the performance of filters was also tested. Comparison of the results at lower resolutions confirms that amongst other factors the method of filtering also has an impact on the success of filtering and hence on the choice of scanning resolution. However, more tests are required to form a clear impression of which filter characteristics have a significant impact on filtering at lower resolutions.

The filtering of complex urban landscapes still poses the greatest challenges. As has been suggested elsewhere, filtering based on segmentation, an understanding of the context of the landscape being filtered, and data fusion might be ways in which this challenge could be overcome.

8           Acknowledgements

This study would not have been possible without the help and co-operation of participants who took time from their busy schedules to filter the twelve data sets. The authors wish to extend their gratitude to P. Axelsson, C. Briese, M. Brovelli, M. Elmqvist, N. Pfeifer, M. Roggero, G. Sohn, R. Wack and A. Wimmer.


9           Reference

[Axelsson, 1999]                 Axelsson P., 1999: "Processing of laser scanner data - algorithms and applications". ISPRS Journal of Photogrammetry & Remote Sensing, 54 (1999). pp. 138 -147.

[Axelsson, 2000]                 Axelsson P., 2000: “DEM Generation from Laser Scanner Data Using Adaptive TIN Models”. In IAPRS. Vol. 33, part B4/1, pp. 110-117.

[Axelsson, 2001]                 Axelsson P., 2001: “Ground estimation of laser data using adaptive TIN-models”. Proceedings of OEEPE workshop on airborne laserscanning and interferometric SAR for detailed digital elevation models 1-3 March 2001. Official Publication No. 40. CD-ROM. pp 185-208. Royal Institute of Technology Department of Geodesy and Photogrammetry 100 Stockholm, Sweden.

[Briese et. al. 2001]             Briese C., Pfeifer N., 2001: "Airborne laser scanning and derivation of digital terrain models". Proceedings of the 5th conference on optical 3D measurement techniques, Vienna, Austria.

[Brovelli 2002]                    Brovelli M.A., Cannata M. and Longoni U.M., 2002: “Managing and processing LIDAR data within GRASS”. Proceedings of the GRASS Users Conference 2002, Trento, 11-13 September 2002  (in print on Transaction in GIS).

[Elmqvist, 2001a]               Elmqvist M., 2001: “Ground Estimation of Laser Radar Data using Active Shape Models”. Paper presented at the OEEPE workshop on airborne laserscanning and interferometric SAR for detailed digital elevation models 1-3 March 2001, paper 5 (8 pages). Royal Institute of Technology Department of Geodesy and Photogrammetry 100 Stockholm, Sweden.

[Elmqvist et. al. 2001b]      Elmqvist M., Jungert E., Lantz F., Persson A., Soderman U., 2001: “Terrain modelling and analysis using laser scanner data”. IAPRS, Vol. XXXIV –3/W4 Annapolis, MD, 22-24 October 2001. pp. 219-227.

[Filin, 2002]                        Filin S., 2002: “Surface Clustering from Airborne Laser Scanning Data”. ISPRS Commission III Symposium “Photogrammetric Computer Vision”, Graz, Austria, September 9 – 13, 2002. pp. 119-124.

[Flood, 1999]                      Flood M., 1999: "Commercial development of airborne laser altimetry: a review of the commercial instrument market and its projected growth". International Archives of Photogrammetry and Remote Sensing. WG III/5 and WG III/2.  9-11 Nov. 1999, Vol. 32, Part 3W14, La Jolla, CA, USA. pp. 13-20.

[Flood, 2001a]                    Flood M., 2001: “LIDAR activities and research priorities in the commercial sector”. IAPRS. WG IV/3., Vol XXXIV. Annapolis,MD, 22-24 Oct 2001. pp.678-684.

[Flood, 2001b]                    Flood M., 2001: “Laser altimetry: From science to commercial LIDAR mapping”. PE&RS, Vol. 67, No. II. November 2001. pp. 1209-1217.

[Hamming 1989]                 Hamming R.W., 1989: “Digital Filters”. Third Edition. Prentice Hall International Inc. 284 pages.

[Haugerud et. al. 2001]       Haugerud R.A., Harding D.J., 2001: “Some algorithms for virtual deforestation (VDF) of LIDAR topographic survey data”. IAPRS, Vol. XXXIV –3/W4 Annapolis, MD, 22-24 October 2001. pp. 211-218.

[Hoover et. al. 1996]           Hoover A., Jean-Baptiste G., Jiang X., Flynn P.J., Bunke H., Goldgof D.B., Bowyer K., Eggert D.W., Fitzgibbon A., Fisher R.B., 1996: “An Experimental Comparison of Range Image Segmentation Algorithms”. IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 18, no. 7, pp. 673-689, July 1996.

[Huising, et. al., 1998]        Huising E.J., Gomes Pereira L. M., 1998: "Errors and accuracy estimates of laser altimetry data acquired by various laser scanning systems for topographic applications". ISPRS JPRS, Vol. 53, no. 5. pp.245-261.

[Kraus et. al., 1998]            Kraus K., Pfeifer N., 1998: “Determination of terrain models in wooded areas with airborne laser scanner data". ISPRS JPRS. Vol. 53, pp. 193-203.

[Kraus et. al., 2001]            Kraus K., Pfeifer N., 2001: “Advanced DTM generation from LIDAR data". IAPRS, Vol. XXXIV –3/W4 Annapolis, MD, 22-24 October 2001. pp. 23-30.

[Lee and Schenk, 2001]       Lee I., Schenk T., 2001: “3D perceptual organization of laser altimetry data”. IAPRS, Vol. XXXIV –3/W4 Annapolis, MD, 22-24 October 2001. pp. 57 – 65.

[OEEPE, 2000]                   OEEPE: 2000 Working Group on laser data acquisition. ISPRS Congress 2000. http://www.geomatics.kth.se/~fotogram/OEEPE/ISPRS_Amsterdam_OEEPE_presentation.pdf

[Pfeifer et. al. 1998]            Pfeifer N., Kostli A., Kraus K., 1998: "Interpolation and filtering of laser scanner data - implementation and first results." International archives of photogrammetry and remote sensing, Vol XXXII, part 3/1, Columbus, pp. 153-159.

[Pfeifer et. al. 1999a]          Pfeifer N., Reiter T., Briese C., Rieger W., 1999: "Interpolation of high quality ground models from laser scanner data in forested areas". International Archives of Photogrammetry and Remote Sensing. WG III/5 and WG III/2.  9-11 Nov. 1999, Vol. 32, Part 3W14. La Jolla, CA, USA. pp. 31-36.

[Pfeifer, et. al. 1999b]         Pfeifer N., Kostli A., Kraus K., 1999: "Restitution of airborne laser scanner data in wooded area". GIS Geo-Information-Systems, 12 (1999). pp. 18 - 21.

[Pfeifer, et. al. 2001]           Pfeifer N., Stadler P., 2001: "Derivation of digital terrain models in the SCOP environment". Proceedings of OEEPE workshop on airborne laserscanning and interferometric SAR for detailed digital elevation models 1-3 March 2001; Editor Kennert Torlegard; Royal Institute of Technology Department of Geodesy and Photogrammetry 100 Stockholm, Stockholm.

[Roggero, 2001]                  Roggero M., 2001: “Airborne Laser Scanning: Clustering in raw data”. IAPRS, Vol XXXIV –3/W4 Annapolis, MD, 22-24 Oct, 2001. pp. 227-232.

[Roggero 2002]                   Roggero M., 2002: “Object segmentation with region growing and principal component analysis”. ISPRS Commission III Symposium “Photogrammetric Computer Vision”, Graz, Austria, September 9 – 13, 2002. pp. 289-294.

[Schickler et. al., 2001]       Schickler W., Thorpe A., 2001: “Surface Estimation Based on LIDAR”.  Proceedings ASPRS conference April 23-27, 2001. St. Louis Missouri. CD-ROM, 11 pages.

[Sithole, 2001]                    Sithole G., 2001: “Filtering of laser altimetry data using a slope adaptive filter”. IAPRS, Vol. XXXIV –3/W4 Annapolis, MD, 22-24 October 2001. pp. 203-210.

[Sithole, 2002]                    Sithole G., 2002: "Filtering Strategy: Working Towards Reliability". ISPRS Commission III Symposium “Photogrammetric Computer Vision”, Graz, Austria, September 9 – 13, 2002. pp. 330-335.

[Sohn, 2002]                       Sohn G., Dowman I., 2002: “Terrain Surface Reconstruction by the Use Of Tetrahedron Model With the MDL Criterion”. IAPRS, Vol XXXIV Part 3A. ISPRS Commission III, Symposium.
September 9 - 13, 2002, Graz, Austria. pp. 336-344.

[Tao, 2001]                         Tao C. V., Hu Y., 2001: “A review of post-processing algorithms for airborne LIDAR Data”. Proceedings ASPRS conference April 23-27, 2001. St. Louis Missouri. CD-ROM, 14 pages.

[Vosselman, 2000]              Vosselman G., 2000: “Slope based filtering of laser altimetry data”. IAPRS, Vol XXXIII, Part B3, Amsterdam, The Netherlands. pp. 935-942.

[Vosselman et. al., 2001]    Vosselman G., Maas H., 2001: “Adjustment and Filtering of Raw Laser Altimetry Data”. Proceedings of OEEPE workshop on airborne laserscanning and interferometric SAR for detailed digital elevation models 1-3 March 2001, paper 5 (11 pages). Royal Institute of Technology Department of Geodesy and Photogrammetry 100 Stockholm, Sweden.

[Wack, 2002]                      Wack R., Wimmer A., 2002: “Digital Terrain Models From Airborne Laser Scanner Data – A Grid Based Approach”. IAPRS, Vol XXXIV Part 3B. ISPRS Commission III, Symposium.
September 9 - 13, 2002, Graz, Austria. pp. 293-296.

[Zhan et. al., 2002]             Zhan, Q., Molenaar, M., and Tempfli, K., 2002. “Building extraction from laser data by reasoning on image segments in elevation slices“. The International Archives of Photogrammetry and Remote Sensing, Vol. 34, Part 3 (B), Graz, Austria, pp. 305-308.

 

 

10        Bibliography

[Ackermann, 1999]             Ackermann F., 1999: “Airborne laser scanning – present status and future expectations”. ISPRS Journal of Photogrammetry & Remote Sensing, 54 (1999). pp. 64 -67.

[Csatho, 1999]                    Csatho B., Boyer K., Filin S., 1999: "Segmentation of laser surfaces". IAPRS. WG III/5 and WG III/2.  9-11 Nov. 1999, Vol. 32, Part 3W14. La Jolla, CA, USA. pp.73-80.

[Dell’Acqua et. al. 2001]    Dell’Acqua F., Gamba P., 2001: “Digital Terrain Models in dense urban areas”. IAPRS, Vol. XXXIV –3/W4 Annapolis, MD, 22-24 October 2001. pp. 195-202.

[Gooch, Chandler, 1999]    Gooch, M., Chandler, J., 1999; “Failure Prediction in Automatically Generated Digital Elevation Models”. 4th International Conference on GeoComputation, Virginia, USA, 1999, CDRom.

[Haala, 1999]                      Haala N., Brenner C., 1999: “Extraction of buildings and trees in urban environments”. ISPRS Journal of Photogrammetry & Remote Sensing, 54 (1999). pp. 130 -137.

[Jacobsen, 2001]                 Jacobsen K., 2001: “New developments in digital elevation modelling: Problem of automatic DEM generation”. GeoInformatics. Vol. 4. June 2001.

[Lindenberger, 1993]           Lindenberger J., 1993: “Laser-Profilmessungen zur topographischen Geländeaufnahme”. PhD dissertation, Institute for Photogrammetry, Stuttgart University. Deutsche Geodätische Kommission, Reihe C, No. 400, Munich.

[Lohmann, 1999]                Lohmann P.; Koch A.; 1999: “Quality assessment of laser-scanner data.” Proceedings of the ISPRS Joint workshop "Sensors and Mapping from space", On CD-ROM

[Lohmann, 2000]                Lohmann P., Koch A., Schaeffer M., 2000: "Approaches to the filtering of laser scanner data". Proceedings of XIXth Congress of the international society for photogrammetry and remote sensing (ISPRS), Amsterdam The Netherlands 16 - 23 July 2000, On CD-ROM.  Amsterdam, The Netherlands. pp. 534 - 547.

[OEEPE, 2000]                   OEEPE: 2000 Working Group on laser data acquisition. ISPRS Congress 2000. http://www.geomatics.kth.se/~fotogram/OEEPE/ISPRS_Amsterdam_OEEPE_presentation.pdf

[Oude Elberink et. al., 2000]   Oude Elberink S., Maas H., 2000: "The use of anisotropic height texture measures for the segmentation of airborne laser scanner data." IAPRS. WG III/3., Vol XXXIII, Part B3. Amsterdam, 2000. pp. 678-684.

[Petzold, 1999]                   Petzold B.; Reiss P.; Stossel W.; 1999: “Laser scanning – surveying and mapping agencies are using a new technique for the derivation of digital terrain models.” ISPRS Journal of Photogrammetry and Remote Sensing. Vol. 54 (1999). pp. 95 – 104.

[Schenk, 1999]                    Schenk T., Csatho B., Lee D.C., 1999: "Quality control issues of airborne laser ranging data and accuracy study in an urban area". International Archives of Photogrammetry and Remote Sensing. WG III/5 and WG III/2.  9-11 Nov. 1999, Vol. 32, Part 3W14. La Jolla, CA, USA. pp.101-108.

[Schiewe, 2001]                  Schiewe J., 2001: “Region-based information extraction from laser scanning data and optical imagery”. Proceedings of OEEPE workshop on airborne laserscanning and interferometric SAR for detailed digital elevation models 1-3 March 2001, (7 pages). Royal Institute of Technology Department of Geodesy and Photogrammetry 100 Stockholm, Sweden.

[Tokunaga et. al., 2001]      Tokunaga M., Thuy Vu T., 2001: "Wavelet-based algorithm in segmentation of airborne laser scanner data". IAPRS, Vol. XXXIV –3/W4 Annapolis, MD, 22-24 October 2001. Abstract.

[TopScan, 1999]                 Münsterstraße 19, D-48566 Steinfurt, Germany, info@topscan.de, www.topscan.de (accessed 16 February 1999)

[Wehr et. al. 1999]              Wehr A., Lohr U., 1999: "Airborne laser scanning  - an introduction and overview". ISPRS Journal of Photogrammetry & Remote Sensing, 54 (1999). pp. 68 -82