Visualisation Techniques
Introduction
- Use of Visualisation
- The technology of Visualisation
- Visualisation and Geographic Information Systems
- Perceived realism and the validity of visualised images
Photo-realism, schematic images and validity
- Realism
- Schematic images
- Validity
- Example of an experiment in validity of simulations
Rendering
- Present uses of technology
- Degree of realism of visual displays
- Relation of detail to distance - variable object representation
- Geometric transformations
- Earth curvature
- Perspective projections
- Specific details in rendering
- Lighting models and illumination
- Ray tracing
- Colour and texture
- Objects
- Atmospheric effects
- Variable object representation
Tree and forest simulation
- Image creation techniques
- Tree simulation
- Block draping
- 2.5 and 3D tree patterns
- The ramification matrix method - an example of how to grow a tree
- Examples of projects using tree or forest simulation
- The Sulphur Pass Project
- SmartForest
References
Visualisation provides additional insight into results which would otherwise
be displayed as text or numbers (Loh et al, 1992). It is a universal form of
communication, able to abstract the real world into a graphical representation
that is comprehensible to a wide range of people. Increasingly, computer
Visualisation is used to communicate the implications of natural and management
changes in biological systems in national parks and forests (Orland, 19??).
Indeed, it can be said that society is becoming dependent on information
presented in three-dimensional visual format (Faust, 1995); virtual reality is
no longer a term applicable only to computer games.
The emerging role of environmental managers is to mediate between the
environment and its many users. That role has four important components
(Orland, 1992b):
1. to identify and interpret the complex interactions of environmental
systems;
2. to communicate their implications to environmental scientists, other
managers, policy makers, decision makers, and the general public;
3. to enable the testing and evaluation of alternative scenarios by experts
as well as non-experts; and
4. to implement the resource plans resulting from this wide range of inputs.
Visualisation techniques are a useful aid in the second and third components
above.
Until recently, Visualisation has been an added feature, not an essential
part, of the decision making process. Too often it has been regarded as
decorative in function rather than substantive. Visualisations have been
developed after the real work is completed, and often only to "sell"
the resulting proposals. Presently, the use of GIS in Visualisation is limited
to mere pictures of landscape. The aim of researchers and managers should be to
ensure that the Visualisations are tied to underlying databases, and that the
links between Visualisations and data are verifiable, reliable, and accurate
(Imaging Systems Laboratory, 1995).
For natural resource managers to plan for a more healthy environment, and to
elicit public and political support for such plans, two needs have been
identified by Orland (1994):
1. To predict the responses of public groups to changes in the environment,
for some of which the visual impact may be the dominant indicator, and to plan
to minimise any negative impacts;
2. Once a proposal is developed, to communicate the effects of proposed
changes to other agencies and public review groups to facilitate
decision-making.
It is possible to visualise the landscape impacts of changes in the building
code or engineering standards as well as in natural resource management. Models
may be used that predict changes in landscape preferences, economic behaviour,
ecological succession, wave erosion or even rises in sea level. All of these
and other models have implications for the assessment of regional visual
landscapes and as a result they may be used to produce new landscapes which
may be visualised using the modelling process described above (Mayall and Hall,
1994). Visual impact assessments (VIA) are beginning to depend on
Visualisations and visibility analysis.
Traditional tools for visual communication of resource issues have included
simple graphic devices such as maps, line charts, sketches, photographs, and
renderings. The new tools include coloured computer maps, 3-d models,
animations, and interactive virtual reality environments used to explore design
ideas (Imaging Systems Laboratory, 1995).
Photomontage techniques use a combination of photographs, renderings and
artistic license. Kennie and McLaren (1988) define photomontage as "a
physical or image composite of photographs of the existing landscape with a
registered computer generated image of the proposed design object(s)".
Computer graphics cannot reproduce all the details recognisable in the
immediate surroundings. A visual simulation program must therefore be
considered a support system that helps a designer to outline the basic
features of the environment (Pukkala and Kellomaki, 1988); it will never
represent the landscape exactly as it is in reality. The closest images are
produced by reproducing the phenomena according to the laws of physics
(Kennie and McLaren, 1988).
The programs needed to manage and utilise natural resource data include:
database management systems (DBMS); geographic information systems (GIS);
simulation models; expert systems; report generation systems; and other
relevant application programs (Loh et al, 1992). Such programs are continually
being designed, combined and upgraded.
GIS-based Visualisation goes beyond the simple ability to discuss
anticipated outcomes via traditional graphic tools. It offers the opportunity
to visualise relationships across time and space, and to explore more
comprehensive ranges of possibility (Orland, 1994). More flexible Visualisation
methods would enable users to select their own viewpoints and be free of
weather, seasonal and other restrictions. Currently the emphasis for providing
that flexibility is on GIS. However, GIS-driven image creation does not
currently provide a means of integrating detailed, small-scale Visualisation
with large-scale regional views. The coarse grain of data sources such as
digital elevation models and remotely sensed imagery makes GIS most appropriate
for large-scale, synoptic views of resource issues (Orland, 19??); it is as yet
less useful for small-scale, detailed Visualisations.
Currently, Visualisation in GIS systems is generally limited to
two-dimensional viewing either of individual GIS layers or of the results of
GIS analyses. Pseudocolour representations of data variables are often used to
identify different regions within a raster GIS whereas line styles, widths,
colour and symbolisation are used for data representation for vector GIS. True
colour combinations of multiple raster GIS layers can also be accomplished for
viewing the spatial relationships between the layers (Faust, 1995).
Several GIS systems currently have the capability of creating
three-dimensional perspective images by using elevation data for geographic
areas overlaid with GIS variables such as land cover or land use. In most
cases this imagery is used simply for show, and little or no analytical work
can be accomplished with the perspective image. Measurements of size and shape
are not valid in a perspective image unless significant ancillary information
is presented alongside it. Although such perspective images are generally well
received as showing the relationships of the GIS data to the natural terrain,
these factors have limited their usefulness to `show and tell' type
applications (Faust, 1995).
Bishop and Hull (1991) have the following to say about GIS and Visualisation
in the future:
"It is an attractive thought that, at some future time, we will have
sufficient accumulated research to assess probable changes in visual resources
entirely from a GIS without further recourse to psychophysics or video-imaging.
This point is unlikely to occur, however, because even if the process of
modelling from mapped/mappable information is shown to be valid and reliable,
the landscape experience is dependent upon purposes and values and therefore
varies from place-to-place and time-to-time. Recalibration of such models will
therefore always be required."
A question to address in producing photo-realistic simulations is: how good
is good enough? A good enough image is one that has a high degree of perceived
realism, conveys maximum quality, contains enough data, yet is efficient in
terms of equipment costs, storage and management (Perkins, 1992). Perceived
realism does not necessarily vary directly with image quality: an image may be
of very high quality in technical terms while its perceived realism is not
(Perkins, 1992). Although image quality affects perceived realism, so do the
content of the image, the viewpoint, and the receptivity of the viewer. Daniel
(1992) suggests that image quality is sufficient when additional inputs to
improve image quality do not result in an increase in image validity or,
indeed, in perceived realism.
Some basic understanding of the factors that influence the perception of
image quality is therefore needed to increase the `fit' between
computer-generated images and real world conditions (Perkins, 1992). Public
perception studies have been conducted with images and they indicate that
simulations are achieving a high degree of validity (Orland, 19??).
Graphics in the realm of illustration have two characteristics: precision
(or detail) and realism. Precision is necessary because small variations (in
terrain, for example) can have large effects on design. What might be noise in
the domain of inference becomes a high priority in golf course design
(Buttenfield and Ganter, 1990).
Consideration of the validity of substituting computer-generated simulations
for photographs only makes sense if it is accepted that photographs are
themselves an adequate surrogate for direct experience of the landscape in
question (Bishop and Leahy, 1989). This question is discussed in the review of
landscape preference and perception.
The importance of realism is noted by Buttenfield and Ganter (1990):
"Realism provides the visual context within which constraints and
externalities may be considered."
If the intent is to convey the potential impact of proposed management
actions, with the goal of informing public review and approval processes, then
more realism and detail may be demanded in the Visualisation. Techniques
include photo-realistic simulations, and hand-crafted graphic renderings and
models (Orland, 1994).
Computer image editing methods can be more realistic than hand rendering and
less expensive than photo-retouching, and they do much to convey a realistic
visual experience of the planned new environment. The realism and transparency
of the medium to a non-expert viewer is high, but its accuracy and validity
are less easy to defend (Orland and Daniel, 1995). In a study by Oh (1994),
only image processing succeeded in separating the visual attractiveness of one
landscape from another in simulations; the other methods tested were wire
frame, surface modelling, and surface modelling combined with scanned
photographic images.
In the search for true realism, a compromise must be found between reality
and costs (Zewe and Koglin, 1995).
Many studies have tended to focus on the realism or accuracy of simulation,
but a simulation cannot reproduce reality completely; rather, it selects the
critical aspects of reality for the particular purpose at hand (Oh, 1994).
In practice the simplification or abstraction of detail is directly related to
the savings of effort, time and costs of simulation.
In many ways, schematic images, those which do not attempt photo-realism,
are as useful as the more realistic images. They do not require the technical
complications of photo-realistic visual simulation and do not have the
theoretical problems associated with defining realism (Ervin, 1993). These
views are approximations - more realistic than just plan views, but more
schematic than photorealistic renderings or presentation drawings (Ervin,
1993).
In schematic renderings the images are rough; colours are shaded, the ground
is broken up into half-acre coloured triangles, and all trees and buildings are
alike and rather diagrammatic. There is no provision for subtleties of texture
or curved surfaces. These representational conventions are no more limiting
than any others commonly encountered in design, and they are no more difficult
to understand (Ervin, 1993).
For some purposes, the Visualisation media may intentionally be highly
abstract and intended to convey information most effectively to experts within
the same discipline. In this category are coloured maps, where the need is
primarily to represent single issues such as the extent or locations of an
impact, or statistical displays of simulation models where the purpose of the
Visualisation is to prompt further modification of resource models and hence to
understand better the underlying physical system (Orland, 1994).
There are two questions which need to be answered when the validity of
computer simulations is discussed (Daniel, 1992):
1) What is the validity of data Visualisation systems?
2) What level of data Visualisation is sufficient for environmental planning
and management?
The primary concern for data Visualisation intended for decision support in
environmental management is to achieve accurate and verifiable representations
of existing and projected environmental conditions. The validity and
sufficiency of a given data Visualisation system, therefore, depends in part on
the purposes for which it is intended (Daniel, 1992). The Visualisations must
also be accurate: because their power to convince is high, any
misrepresentation must be avoided.
The resolution and fidelity of environmental simulations seem limited only
by the computer resources and peripheral devices allocated to the task.
However, the validity of data Visualisation systems is not necessarily related
to judgements of realism, believability, or other such qualities. Neither is
there any necessary dependence upon resolution, colour fidelity or other
technical criteria (Daniel, 1992).
The answers to the questions provided by Daniel (1992) are that the
Visualisations are valid to the extent that responses to environmental
representations correlate with appropriate responses made directly to the
environments represented. Data Visualisations are sufficient to the extent that
adding detail, higher resolution, colour fidelity, animation or other features
does not improve the match between representation-based and direct responses.
Bishop and Leahy (1989) conducted an experiment to determine under what
conditions computer simulations may be used reliably. They decided to select
only those factors for which some objective measure could reasonably be applied
in future simulations.
Prospect/depth, refuge, and complexity could conceivably be established from
photographs of the type used by Shafer and Brush (1977) for estimating
preference. Extent of background clearly relates to prospect, refuge, and
perimeter lengths to complexity. Landform is largely a reflection of measurable
relief. Water can be measured by area, flow and drop. Cultural modifications
can be clearly outlined - although the effect of different types of
modification is highly subjective (Bishop and Leahy, 1989).
In the scheme, colour is judged on the subjective basis of vividness (high
score) or subtlety (low score), while vegetation ranges from harmonious
variation (high score) to little variation (low score) (Bishop and Leahy,
1989).
In looking at simulations, therefore, it is the degree to which the
simulation can capture the subtlety of colour which may prove a decisive
variable. Measures of both subtlety and veracity could be derived objectively
by using image analysis, colour filtration and signal processing techniques
(Bishop and Leahy, 1989).
The only attribute to which an objective measure was eventually applied was
complexity; the size of the run-length encoded file containing each digitised
image was used. Regression analysis based on selected variables indicates the
importance of maintaining both the greenness and subtle colour variations of
the original slides, the inadvisability of basing simulations on scenes with
significant background or major relief, and the advantage of having a
recognisable visual focus even if this is itself simulated (Bishop and Leahy,
1989).
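As an illustration of that objective measure, the sketch below run-length
encodes a digitised greyscale image row by row and uses the number of runs as
a complexity score; the quantisation level and function name are assumptions
for illustration, not details given by Bishop and Leahy (1989).

```python
import numpy as np

def run_length_size(image, levels=16):
    """Complexity proxy: quantise a greyscale image, run-length encode each
    row as (value, run length) pairs, and return the number of pairs. Busier
    scenes break into more runs and therefore score higher."""
    quantised = (np.asarray(image, dtype=float) / 256.0 * levels).astype(int)
    runs = 0
    for row in quantised:
        # a new run starts at the first pixel and wherever the value changes
        runs += 1 + int(np.count_nonzero(row[1:] != row[:-1]))
    return runs

# e.g. comparing two digitised slides: the one with the larger score is the
# more visually complex scene under this measure
```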
It is possible to simulate landscapes as they might look as a result of
environmental impacts such as building or forestry developments, or to test
environmental models (Fisher et al, 1993). Terrain modelling is possible in a
range of different guises from the simple vector (or wire-frame) surfaces
displayed as rectangular or triangular matrices, to sophisticated colour
perspectives generated from complex hidden-line and hill-shading algorithms
(Moore, 1990).
Commercially available generic toolkits, such as PV Wave, AVS and Explorer,
can be used to produce visual displays of different types of data. This
generation of dedicated graphics software epitomises the Visualisation
revolution in computer graphics (Fisher et al, 1993); however, it requires
powerful hardware such as Sun or Silicon Graphics workstations. Less powerful
PCs and Apple Macintoshes have an increasing potential for visual display and
can be used just as effectively in developing new Visualisation strategies.
The degree of realism depends on several factors, including the nature of the
application, the objective of the Visualisation, the capabilities of the
available software and hardware, and the amount of detail required and/or
available (Kennie and McLaren, 1988).
The majority of computer graphics techniques have been developed to visualise
objects whose geometry is defined by a mesh of planar surfaces such as
triangles. This can lead to jagged edges on curves, which is usually addressed
with anti-aliasing techniques (Kennie and McLaren, 1988). Anti-aliasing for
montage methods must differ from the usual approaches because the background
scene varies from pixel to pixel and the image overlay operation is performed
several times (Nakamae et al, 1986). The aliasing problem also occurs when
foregrounds are superimposed onto the computer-generated images.
Special problems are posed by the realistic rendering of landscape because
of the amount of detail required. Fractal methods have been proposed to allow
database amplification, which is the generation of controlled random detail
from a fairly sparse description. An alternative approach is to use texture
mapping methods on a few simple primitives. The texturing is defined
procedurally, so it can be expanded without loss of high-frequency detail or
shrunk without aliasing artifacts (Miller, 1986). Fractal
subdivision methods are slow and generate defects due to what is known as the
`creasing problem'. The texture map methods, on the other hand, display visible
discontinuities in texture gradient where two surfaces intersect.
An accurate representation of every detail in the scenery is neither possible
nor necessary (Gross, 1991). Using only satellite data, observer positions
close to the ground or narrow fields of view result in large quadrilaterals in
the foreground due to perspective foreshortening. Resolution should therefore
be high towards the foreground or local zone and decrease towards the
background (Graf et al, 1994). Far from the observer it is sufficient to
present the scenery at a low level of detail, using hierarchical data sets to
limit the corresponding data volume. Photorealistic rendering is achieved with
great success by texture mapping remote sensing data onto the digital terrain
model, a method that is useful at great distances from the observer (Gross,
1991).
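A minimal sketch of this distance-dependent level of detail is given below,
assuming a regular-grid terrain tile; the distance bands and decimation
factors are illustrative values, not figures from the cited work.

```python
import numpy as np

# Assumed distance bands (m) and the grid decimation factor used in each:
# full resolution in the local zone, progressively coarser towards the horizon.
LOD_BANDS = [(2_000.0, 1), (10_000.0, 4), (float("inf"), 16)]

def decimation_factor(distance_m):
    """Return how many grid cells are merged into one at a given distance."""
    for max_distance, factor in LOD_BANDS:
        if distance_m <= max_distance:
            return factor
    return LOD_BANDS[-1][1]

def sample_terrain_tile(dem_tile, distance_m):
    """Resample a square DEM tile at the level of detail appropriate for its
    distance from the observer by keeping every n-th row and column."""
    n = decimation_factor(distance_m)
    return dem_tile[::n, ::n]

# e.g. a 64 x 64 tile 8 km from the observer is reduced to a 16 x 16 tile
coarse = sample_terrain_tile(np.zeros((64, 64)), distance_m=8_000.0)
```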
Earth curvature
Simulation methods must take into account the curvature of the earth and the
refraction of light. Since all topographic data are based on an assumed flat
reference plane, usually height above sea level, all elevations must be
reduced relative to the specified view point whenever the survey area extends
beyond about 2 miles (3.2 km). Because the distance from the view point to
every elevation can be calculated, a simple formula can be used to modify each
elevation value to allow for curvature and refraction (Aylward and Turnbull,
1977).
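The exact formula of Aylward and Turnbull (1977) is not reproduced here; the
sketch below uses the standard surveying approximation, assuming a mean earth
radius of about 6371 km and a refraction coefficient of roughly 0.13 (both
values are assumptions, as are the function names).

```python
import numpy as np

EARTH_RADIUS_M = 6_371_000.0   # assumed mean earth radius (m)
REFRACTION_K = 0.13            # assumed coefficient of refraction

def curvature_refraction_drop(distance_m):
    """Apparent drop (m) of a point at a horizontal distance (m) from the
    view point: earth curvature lowers it, refraction partly compensates."""
    return (1.0 - REFRACTION_K) * distance_m ** 2 / (2.0 * EARTH_RADIUS_M)

def reduce_elevations(elevations_m, distances_m):
    """Reduce DEM elevations relative to the view point so that distant
    terrain sits lower, as needed for views wider than a few kilometres."""
    return np.asarray(elevations_m, float) - curvature_refraction_drop(
        np.asarray(distances_m, float))

# e.g. a point 10 km away appears to drop by roughly 6.8 m
drop_10km = curvature_refraction_drop(10_000.0)
```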
Perspective projections
3D terrain is normally mapped into 2D space by a perspective projection. For
applications where the geometric fidelity of the rendered scene is of vital
importance such as a photomontage for a Visual Impact Assessment (VIA), it is
necessary to incorporate both earth curvature and atmospheric refraction
corrections into the viewing model (Kennie and McLaren, 1988).
The geometric transformation described by Kennie and McLaren (1988) for
landscape view generation is a perspective projection similar to that of a
photographic system and of the human visual system, producing perspective
foreshortening. No shadows are calculated, since the model is already
illuminated and shaded by the overlaid image, which is lit by the sun.
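A minimal pinhole-camera sketch of such a perspective projection is given
below. The focal length, axis conventions and function names are illustrative,
and the earth-curvature and refraction corrections discussed above would be
applied to the elevations beforehand.

```python
import numpy as np

def perspective_project(points, eye, forward, up, focal_length=1.0):
    """Project 3D terrain points into 2D image coordinates with a simple
    pinhole camera: build an orthonormal camera frame, transform the points
    into it, then divide by depth (perspective foreshortening)."""
    f = forward / np.linalg.norm(forward)
    r = np.cross(f, up)
    r /= np.linalg.norm(r)                          # camera right axis
    u = np.cross(r, f)                              # true camera up axis
    rel = np.asarray(points, float) - np.asarray(eye, float)
    x, y, z = rel @ r, rel @ u, rel @ f             # camera-space coordinates
    visible = z > 0                                 # keep points in front of the eye
    return np.column_stack((focal_length * x[visible] / z[visible],
                            focal_length * y[visible] / z[visible]))

# e.g. project two terrain points seen from an elevated view point
image_xy = perspective_project(points=np.array([[500.0, 200.0, 120.0],
                                                [800.0, 350.0, 140.0]]),
                               eye=(0.0, 0.0, 200.0),
                               forward=(1.0, 0.4, -0.1),
                               up=(0.0, 0.0, 1.0))
```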
In order to use satellite images and aerial photographs for perspective
viewing, they must refer to the same geometric reference system as the digital
elevation or terrain model (DEM/DTM). Although a vertical aerial photograph
provides a map-like view of the earth's surface, it differs fundamentally in
geometric terms from a map (Graf et al, 1994). Satellite image data suffer
from many geometric distortions with severe effects: satellite (platform)
motion and earth rotation, the imaging geometry of the sensor, and the terrain
variations in the scene.
There are a number of details to take account of when rendering a landscape
Visualisation. As well as the model and associated landscape features, a number
of other parameters need to be defined (Kennie and McLaren, 1988).
- viewing position and direction of view
- lighting model to describe illumination conditions
- `conditional modifiers' e.g. wet, snowy
- `environmental modifiers' e.g. atmospheric conditions such as haze
- sky and cloud model representing the prevailing conditions.
Some of the specific attributes of the Visualisations are discussed below;
these include lighting, ray tracing, colour and texture, objects, and
atmospheric effects.
Lighting models and illumination
The appearance of a surface is dependent on several factors: type of light;
condition of atmosphere; surface colour; reflectance and texture; position and
orientation of surface relative to the light source; other surfaces; and the
viewer (Kennie and McLaren, 1988). Normal lighting models are simplified by
assuming only a single parallel light source located at infinity (the sun).
There are two types of light source - direct and ambient (reflected).
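A minimal sketch of such a simplified lighting model follows: a single
parallel (directional) light source representing the sun plus a constant
ambient term, with the diffuse contribution following Lambert's cosine law.
The coefficient values and function name are illustrative assumptions.

```python
import numpy as np

def shade(surface_normal, sun_direction, surface_colour,
          ambient=0.25, diffuse=0.75):
    """Simplified lighting: one parallel light source at infinity plus a
    constant ambient (reflected) term. The diffuse intensity is the cosine
    of the angle between the surface normal and the direction to the sun."""
    n = np.asarray(surface_normal, float)
    n /= np.linalg.norm(n)
    s = np.asarray(sun_direction, float)
    s /= np.linalg.norm(s)
    lambert = max(float(np.dot(n, s)), 0.0)   # surfaces facing away get no direct light
    return np.asarray(surface_colour, float) * (ambient + diffuse * lambert)

# e.g. a south-facing grassy slope lit by a low sun
colour = shade(surface_normal=(0.0, -0.3, 1.0),
               sun_direction=(0.0, -1.0, 0.5),
               surface_colour=(0.2, 0.5, 0.2))
```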
A number of models have been published for displaying natural scenes,
terrains, flames, eruptions, glasses, and trees, such as the fractal model, the
procedural model, the growth model, and the semi-transparent mapping model
(Nakamae and Tadamura, 1995). Mirror images and transparent effects have been
realised using refraction techniques.
Radiosity
Radiosity is defined as the simultaneous global solution for the
intensity of light leaving each surface by constructing and solving a set of
linear equations describing the transfer of diffuse light energy between all
surfaces (Kennie and McLaren, 1988).
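The standard form of that linear system, widely quoted in the radiosity
literature (and consistent with, though not spelt out in, Kennie and McLaren,
1988), expresses the radiosity B_i of patch i in terms of its emission E_i,
its diffuse reflectivity rho_i and the form factors F_ij between patches:

```latex
B_i = E_i + \rho_i \sum_{j=1}^{n} F_{ij} B_j , \qquad i = 1, \dots, n
```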
Shadows
Surfaces visible from both the viewpoint and the light source are not in
shadow; those visible from the viewpoint but not the light source are in
shadow (Kennie and McLaren, 1988).
Depth cueing
Depth cueing is used to increase 3D interpretability by matching the
computer-generated image to `natural' human visual cues. Intensity depth
cueing is concerned with fading intensity with distance relative to the
lighting of the sky (Kennie and McLaren, 1988).
Ray tracing
This view-dependent approach involves tracing a ray from the viewpoint
through a pixel and into the model, where its interaction with objects is
analysed. Each collision with an object produces three rays (diffusely
reflected light, specularly reflected light and transmitted, i.e. refracted,
light); normally the last two continue to be traced (Kennie and McLaren,
1988). Ray tracing thus considers each pixel in the image in turn, with a ray
defined as the line joining the viewpoint to the pixel; it is a recursive
algorithm, following the rays as they branch through object interactions
(Evans, 1993).
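A minimal sketch of that recursion is given below, assuming a scene made of
spheres with simple diffuse colours and reflectivities; all names are
illustrative, and only the specularly reflected ray is followed (the
transmitted ray is omitted for brevity).

```python
import numpy as np

MAX_DEPTH = 3  # recursion limit for reflected rays

def nearest_hit(origin, direction, spheres):
    """Return (t, sphere) for the closest ray-sphere intersection, or None.
    Assumes `direction` is a unit vector."""
    best = None
    for centre, radius, colour, reflectivity in spheres:
        oc = origin - centre
        b = 2.0 * np.dot(direction, oc)
        c = np.dot(oc, oc) - radius ** 2
        disc = b * b - 4.0 * c
        if disc < 0:
            continue
        t = (-b - np.sqrt(disc)) / 2.0
        if t > 1e-6 and (best is None or t < best[0]):
            best = (t, (centre, radius, colour, reflectivity))
    return best

def trace(origin, direction, spheres, light_dir, depth=0):
    """Recursive ray tracing: diffuse shading at the hit point plus a
    specularly reflected ray traced until the depth limit is reached.
    `light_dir` is a unit vector pointing towards the light source."""
    hit = nearest_hit(origin, direction, spheres)
    if hit is None:
        return np.array([0.6, 0.7, 0.9])            # ray escapes to the sky
    t, (centre, radius, colour, reflectivity) = hit
    point = origin + t * direction
    normal = (point - centre) / radius
    diffuse = max(float(np.dot(normal, light_dir)), 0.0) * np.asarray(colour)
    if depth >= MAX_DEPTH or reflectivity == 0.0:
        return diffuse
    reflected = direction - 2.0 * np.dot(direction, normal) * normal
    return (1.0 - reflectivity) * diffuse + reflectivity * trace(
        point, reflected, spheres, light_dir, depth + 1)

# e.g. one ray through a pixel looking at a single green sphere
scene = [(np.array([0.0, 0.0, -5.0]), 1.0, (0.2, 0.6, 0.2), 0.3)]
pixel_colour = trace(np.zeros(3), np.array([0.0, 0.0, -1.0]),
                     scene, light_dir=np.array([0.0, 1.0, 0.0]))
```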
Colour and texture
Objects in computer graphics can be visualised using various attributes such
as smoothed curved surfaces, 2D texture, half-transparency, specular reflection
effects and bumped texture. There are also strong rendering techniques
including z-buffer and scan-line algorithms and anti-aliasing algorithms
(Nakamae and Tadamura, 1995).
For non-exact modelling of the terrain surface, data amplification primitives
such as fractals, which automatically densify the model, are used. Texture
mapping shades a surface mathematically, while bump mapping stores surface
normal perturbations in the texture map, achieving roughness without
explicitly modelling the geometry; the latter is well suited to terrain
Visualisations. Stochastic fractal models, a class of irregular shapes defined
according to the laws of probability, can accurately model natural terrain
(Kennie and McLaren, 1988).
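As a minimal sketch of one such stochastic fractal technique, the code below
applies one-dimensional midpoint displacement to amplify a sparse terrain
profile; the roughness parameter and function name are illustrative
assumptions rather than details from the cited sources.

```python
import numpy as np

def midpoint_displacement(profile, levels, roughness=0.5, rng=None):
    """Amplify a sparse 1D terrain profile by stochastic midpoint
    displacement: each pass inserts midpoints between neighbouring samples
    and perturbs them by random offsets whose amplitude shrinks (scaled by
    the roughness factor) at every level, giving fractal-like detail."""
    rng = rng or np.random.default_rng(0)
    heights = np.asarray(profile, dtype=float)
    amplitude = (heights.max() - heights.min()) * roughness
    for _ in range(levels):
        midpoints = (heights[:-1] + heights[1:]) / 2.0
        midpoints += rng.normal(0.0, amplitude, size=midpoints.size)
        # interleave the original samples with the new perturbed midpoints
        merged = np.empty(heights.size + midpoints.size)
        merged[0::2], merged[1::2] = heights, midpoints
        heights = merged
        amplitude *= roughness
    return heights

# e.g. a 5-point coarse profile amplified to 65 points of controlled detail
detailed = midpoint_displacement([0.0, 40.0, 25.0, 60.0, 10.0], levels=4)
```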
Objects
In traditional manual photo-montage, objects are painted over a background
image. This technique has several drawbacks: the result depends heavily on
the artist's skill, the position of an object can only be estimated, and the
geometric accuracy is therefore limited (Graf et al, 1994). The montage system
developed by Nakamae et al (1986) included some unique characteristics, along
with anti-aliasing processing, enabling superior simulation of natural shading
and shadows. Atmospheric moisture effects (fogginess) were added to simulate
various weather conditions. The system allows objects to be constructed,
rendered and placed in the scenery without relying solely on an artist.
Nakamae and Tadamura (1995) looked at creating photorealistic images based
on optics, taking into account inter-reflection between illuminated objects,
skylight with spectral effects and atmospheric scattering and absorption. They
also looked at complex objects such as fur and trees. In order to reduce the
cost of creating such complex phenomena, research on effective polygonal
surface techniques, interactive modelling tools, and powerful graphics engines
for rendering has increased (Nakamae and Tadamura, 1995).
Atmospheric effects
Due to scattering by water molecules, dust and pollution, objects appear to
lose their colour and intensity with increasing distance. The rate of this
reduction depends on factors such as season, weather conditions and time of
day; colour and tone become grey as the distance becomes large. The hazing
effect created by atmospheric moisture causes objects to undergo an
exponential decay of contrast with distance from the viewpoint (Kennie and
McLaren, 1988; Nakamae et al, 1986; Graf et al, 1994).
The human eye relies on colour fading for correct depth perception.
Therefore in order to enhance realism and to maintain the estimation of
distance, atmospheric effects must be modelled (Graf et al, 1994). The fog
effect increases the sense of perspective in the montages. The shading and
shadows of the computer-generated images help a montage match the background
scene (Nakamae et al, 1986).
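A minimal sketch of such an exponential attenuation (haze) model follows,
assuming a single extinction coefficient and a uniform haze colour; both
values and the function name are illustrative, not figures from the cited
papers.

```python
import numpy as np

def apply_haze(colour, distance, haze_colour=(0.75, 0.78, 0.80), beta=2e-4):
    """Blend an object's colour towards a grey haze colour using exponential
    decay of contrast with distance: the further away the object, the more
    the haze colour dominates. beta is an assumed extinction coefficient
    (per metre); larger values give thicker haze."""
    transmission = np.exp(-beta * distance)   # fraction of object colour surviving
    return (transmission * np.asarray(colour, float)
            + (1.0 - transmission) * np.asarray(haze_colour, float))

# e.g. a dark green tree 5 km away fades most of the way to the haze colour
faded = apply_haze((0.10, 0.35, 0.10), distance=5_000.0)
```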
Variable object representation
Any image that is attempting to simulate realism in a spatially deep scene
should include a mechanism to vary the perceived level of detail of an object
based on its distance from the viewer (Kennie and McLaren, 1988).
It is important that any landscape Visualisation system allows realistic
tree and forest simulation. The symbols used can be very effective in improving
the visual realism of a rendering. Often, logged areas which are visible before
trees are drawn will become invisible once tree symbols are added; the
occluding effect of trees is especially noticeable in gently sloping landscapes
(Smart et al, 1991).
At a basic level, trees have been reproduced as simple geometric forms,
circles or spheres with thin cylinders as trunks, or as patterns of branches
rotated through 360 degrees to give wire-line skeletons. When images are
drawn on colour raster devices, more impressive effects are possible: shapes
can be flood-filled in subtle green hues, and textured patterns can be applied
to give a naturalistic, organic quality to the vegetation (Moore, 1990).
To create an image with many trees for environmental assessment the
following conditions should be satisfied (Nakamae and Tadamura, 1995): easy
construction of a tree database in view of the variety of species of trees;
easy display of a cluster of trees in view of the necessity for making a
variety of images; creating still images from arbitrary viewpoints as well as
animation.
The branching patterns of higher plants are evident everywhere and are
relatively easy to formalise, providing excellent examples for study (Aono and
Kunii, 1984). There are two basic categories of tree branching pattern,
dichotomous and monopodial; most trees show the latter, in which a branch
divides in two at the growth point but one part follows the direction of the
main axis while the other forms a lateral branch.
Some examples of tree and forest simulation are discussed below, including
block draping (a method often used by the Forestry Commission) and 2.5D
rotating trees. The ramification matrix method, a method of growing trees in
the computer rather than using simulated or scanned fully grown trees, is also
discussed.
Tree simulation
In the method of tree simulation described by Nakamae and Tadamura (1995),
two textures digitised from two photographs taken from the right side and from
above the tree were mapped onto a set of transparent planes. For shading and
shadowing, the shape of a tree is approximated by a transparent polyhedron
surrounding it. Shadows cast onto trees, as well as tree shadows cast onto
objects, look natural whether the tree is lush or has sparse leaves. Shadowing
is available for the following four cases: shadows cast by trees onto their own
trunks; shadows cast by trees onto objects; shadows cast by objects onto trees;
and shadows cast by trees onto other trees (Nakamae and Tadamura, 1995).
Block draping
Block draping is a technique which starts from a straightforward digital
terrain model and then, within regions of forest stands, increases the
elevation values by the height of the trees (Evans, 1993). Stylised trees are
wire-frame symbols created from simple graphics entities such as lines and
filled circles and triangles, so coniferous and deciduous trees are easily
distinguished by their different symbols. Using colour on the terrain model
and within the filled symbols provides a greater degree of realism.
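A minimal sketch of the block-draping step is given below, assuming a raster
DEM, a matching raster of stand identifiers and a table of mean tree heights
per stand; all names and values are illustrative.

```python
import numpy as np

def drape_forest_blocks(dem, stand_ids, stand_heights):
    """Block draping: raise the terrain surface by the mean tree height of
    each forest stand, so the canopy top rather than the ground is rendered.
    dem           : 2D array of ground elevations (m)
    stand_ids     : 2D array of the same shape, 0 where there is no forest
    stand_heights : dict mapping stand id -> mean tree height (m)"""
    surface = dem.astype(float).copy()
    for stand, height in stand_heights.items():
        surface[stand_ids == stand] += height   # lift the canopy over the stand
    return surface

# e.g. two stands of different ages draped over a small 3 x 3 DEM
dem = np.array([[100, 102, 104], [101, 103, 105], [102, 104, 106]], float)
stands = np.array([[1, 1, 0], [1, 2, 2], [0, 2, 2]])
canopy = drape_forest_blocks(dem, stands, {1: 18.0, 2: 9.0})
```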
2.5 and 3D tree patterns
If trees are needed for close range drawings, 2D tree patterns are not
sufficient; using 3D tree patterns, however, requires much computational time
and memory storage. If the location of the viewpoint is fixed, some parts of
the data will not be visible. It is not necessary, therefore, to use up the
memory storage with useless information (Sasada, 1987).
2.5D tree patterns are a logical alternative to full 3D patterns. A 2D tree
pattern that rotates around a vertical line passing through the centre of the
trunk can be used; this kind of rotating 2D pattern is called a 2.5D pattern.
In a program that produces perspectives, the tree pattern automatically
rotates with the viewpoint's rotation so that the front view is always shown.
If, however, the viewpoint moves above the tree, a 3D representation is
required to show the top of the tree instead of just a line (Sasada, 1987).
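A minimal sketch of this billboard-style rotation follows, assuming a flat
textured quad anchored at the trunk base; the tree dimensions and function
name are illustrative.

```python
import numpy as np

def billboard_corners(trunk_base, viewpoint, width, height):
    """Return the four corners of a 2.5D tree pattern: a vertical quad that
    rotates about the trunk axis so that it always faces the viewpoint."""
    to_viewer = np.asarray(viewpoint, float) - np.asarray(trunk_base, float)
    to_viewer[2] = 0.0                   # rotate about the vertical axis only
    # degenerate when the viewpoint is directly above the tree, the case in
    # which Sasada (1987) notes a full 3D pattern is needed
    to_viewer /= np.linalg.norm(to_viewer)
    # horizontal axis of the quad, perpendicular to the viewing direction
    right = np.array([-to_viewer[1], to_viewer[0], 0.0]) * (width / 2.0)
    up = np.array([0.0, 0.0, height])
    base = np.asarray(trunk_base, float)
    return np.array([base - right, base + right,
                     base + right + up, base - right + up])

# e.g. a 12 m tree seen from an elevated viewpoint to the south-west
corners = billboard_corners(trunk_base=(50.0, 20.0, 104.0),
                            viewpoint=(0.0, 0.0, 120.0),
                            width=6.0, height=12.0)
```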
The ramification matrix method - an example of how to grow a tree
This method provides a simple, easily understood process of tree growth. A
tree structure is developed by hand to portray the style of the natural tree
required and is then represented in the computer by a binary tree data
structure. From the binary tree, which can only split into two at each
branching point, the nodes (branching points) are assigned a branching
biorder derived from their position within the structure (Evans, 1993); this
follows the monopodial branching mentioned by Aono and Kunii (1984). The
branching biorder is then used to develop the ramification matrix, a
stochastic lower triangular matrix consisting of the probabilities of a
branch forming in a particular way (Evans, 1993).
New tree structures are formed by computation using the ramification matrix,
each similar to the original hand-designed tree structure, yet each slightly
different because of the random choice of branching biorder from the matrix.
Development of the tree from the computer's binary structure is a
straightforward process. The lengths and widths of the branches are
controlled by the order weighting for each node in the binary tree: linear
and quadratic functions are used for the length calculations, while
polynomial or exponential functions are used for the width calculations. The
branching angles are also governed by the order values of the nodes. The
widths, lengths and branching angles all vary according to the shape and size
of the tree being simulated (Evans, 1993).
As a final step, leaves are added to the structure, their colour and shape
again depending on the type of tree to be shown in the image. By varying the
ramification matrix and the length, width and branching angle functions, more
natural tree structures can be developed, allowing both coniferous and
broad-leaved trees to be portrayed (Evans, 1993).
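The sketch below is a hedged illustration of how such a matrix of branching
probabilities might drive random tree generation. It follows the general idea
described above rather than the exact procedure of Evans (1993); the matrix
entries, the length and width functions and all names are assumptions.

```python
import random

# Illustrative ramification probabilities for a three-order tree, written as a
# nested dict rather than a literal lower triangular matrix: for a branch of a
# given order, the probability of each (left, right) pair of child orders.
RAMIFICATION = {
    3: {(2, 2): 0.40, (3, 2): 0.25, (3, 1): 0.35},
    2: {(1, 1): 0.60, (2, 1): 0.40},
}

def branch_length(order):
    """Illustrative linear length function of the order weighting (m)."""
    return 0.5 + 1.5 * order

def branch_width(order):
    """Illustrative exponential width function of the order weighting (m)."""
    return 0.02 * 2.0 ** order

def grow(order, depth=0, max_depth=8):
    """Grow a binary branching structure from the ramification probabilities:
    order-1 branches are terminal twigs, while higher-order branches split
    according to a randomly chosen entry, so every call produces a slightly
    different tree in the same overall style."""
    node = {"order": order,
            "length": branch_length(order),
            "width": branch_width(order),
            "children": []}
    if order <= 1 or depth >= max_depth:
        return node
    splits = RAMIFICATION[order]
    left, right = random.choices(list(splits),
                                 weights=list(splits.values()), k=1)[0]
    node["children"] = [grow(left, depth + 1, max_depth),
                        grow(right, depth + 1, max_depth)]
    return node

random.seed(1)
crown = grow(order=3)   # the hand-designed style, regrown with random variation
```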
The Sulphur Pass Project
The Sulphur Pass project showed that landscape Visualisation by DTM produces
realistic and accurate images. Of the variety of enhancements used to increase
realism, the use of tree symbols seemed to be the most effective. Not only did
they make the landscape more lifelike, but they also improved the accuracy of
the model. Accuracy could have been improved further by allowing trees in
different polygons to have different average heights corresponding to their
ages (Smart et al, 1991).
SmartForest
SmartForest uses three dimensional modelling based on a simple stem list to
generate Visualisations that can be rotated and "walked" in real
time. Visualisations can be created entirely from outside sources such as
GIS-based models, or can be developed interactively using built-in biological
models for tree growth, pest spread, and various silvicultural processes
(Imaging Systems Laboratory, 1995). Forest prescriptions can be applied and
the results modelled using the incorporated growth models. Although the
resulting images are an accurate representation of the gross spatial
characteristics of the forest, they are not realistic, either in the sense
that each tree symbol relates to a real tree or in the sense that the image
faithfully shows the colour and texture of a photographic image. However, as
discussed in the section on photo-realism and schematic images, truly
realistic images are not always necessary for understanding what a simulation
means.
Aono, M. and Kunii, T.L. (1984) Botanical Tree Image Generation. IEEE
Computer Graphics and Applications, 4, 10-34.
Aylward, G. and Turnbull, M. (1977) Visual analysis: a computer-aided
approach to determine visibility. Computer-Aided Design, 9,
103-108.
Bishop, I.D. and Hull, R.B. (1991) Integrating technologies for visual
resource management. Journal of Environmental Management, 32,
295-312.
Bishop, I.D. and Leahy, P.N.A. (1989) Assessing the Visual Impact of
Development Proposals: The Validity of Computer Simulations. Landscape
Journal, 8, 92-100.
Buttenfield, B.P. and Ganter, J.H. (1990) Visualisation and GIS: What should
we see? What might we miss? In Proceedings of the 4th International
Symposium on Spatial Data Handling, Vol 1, 307-316.
Daniel, T.C. (1992) Data Visualisation for decision support in environmental
management. Landscape and Urban Planning, 21, 261-263.
Ervin, S.M. (1993) Landscape Visualisation with Emaps. IEEE Computer
Graphics and Applications, 13, 28-33.
Evans, J.A. (1993) Simulation of realistic landscapes. Mapping Awareness
and GIS in Europe, 7, 36-40.
Faust, N.L. (1995) The virtual reality of GIS. Environment and Planning
B: Planning and Design, 22, 257-268.
Fisher, P., Dykes, J. and Wood, J. (1993) Map design and Visualisation.
The Cartographic Journal, 30, 136-142.
Graf, K.Ch., Suter, M., Hagger, J., Meier, E., Meuret, P. and Nuesch, D.
(1994) Perspective terrain Visualisation - a fusion of remote sensing, GIS and
computer graphics. Computers and Graphics, 18, 795-802.
Gross, M. (1991) The analysis of visibility - environmental interactions
between computer graphics, physics, and physiology. Computers and
Graphics, 15, 407-415.
Imaging Systems Laboratory (1995) SmartForest: An Interactive Forest Data
Modelling and Visualisation Tool. Department of Landscape Architecture,
University of Illinois at Urbana-Champaign.
Kennie, T.J.M. and McLaren, R.A. (1988) Modelling for digital terrain and
landscape Visualisation. Photogrammetric Record, 12, 711-745.
Loh, D.K., Holtfrerich, D.R., Choo, Y.K. and Power, J.M. (1992) Techniques
for incorporating Visualisation in environmental assessment: an object-oriented
perspective. Landscape and Urban Planning, 21, 305-307.
Mayall, K. and Hall, G.B. (1994) Information Systems and 3-D Modeling in
Landscape Visualisation. In Urban and Regional Information Systems
Association Annual Conference Proceedings, Vol 1, 796-804.
Miller, G.S.P. (1986) The definition and rendering of terrain maps. ACM
Computer Graphics, 20, 39-48.
Moore, R. (1990) Landscapes on Pluto: improving computer-aided
Visualisation. The Cartographic Journal, 27, 132-136.
Nakamae, E. and Tadamura, K. (1995) Photorealism in computer graphics.
Computers and Graphics, 19, 119-130.
Nakamae, E., Harada, K., Ishizaki, T. and Nishita, T. (1986) A montage
method: the overlaying of the computer generated images onto a background
photograph. ACM Computer Graphics, 20, 207-214.
Oh, K. (1994) A perceptual evaluation of computer-based landscape
simulations. Landscape and Urban Planning, 28, 201-216.
Orland, B. (1992a) Data Visualisation Techniques in Environmental
Management. Landscape and Urban Planning, 21, 237-244.
Orland, B. (1992b) Evaluating regional changes on the basis of local
expectations: a Visualisation dilemma. Landscape and Urban Planning,
21, 257-259.
Orland, B. (1994) Visualisation techniques for incorporation in forest
planning geographic information systems. Landscape and Urban Planning,
30, 83-97.
Orland, B. (19??) SmartForest: A 3-D Interactive Forest Visualisation and
Analysis System. Unknown conference proceedings.
Orland, B. and Daniel, T.C. (1995) Impact of Proposed Water Withdrawals on
the Perceived Scenic Beauty of Desert Springs and Wetlands: Image
Generation. Imaging Systems Laboratory, Department of Landscape
Architecture, University of Illinois at Urbana-Champaign.
Perkins, N.H. (1992) Three questions on the use of photo-realistic
simulations as real world surrogates. Landscape and Urban Planning,
21, 265-267.
Pukkala, T. and Kellomaki, S. (1988) Simulation as a tool in designing
forest landscape. Landscape and Urban Planning, 16, 253-260.
Sasada, T.T. (1987) Drawing natural scenery by computer graphics.
Computer-aided Design, 19, 212-218.
Shafer, E.L. and Brush, R.O. (1977) How to measure preferences for
photographs of natural landscapes. Landscape Planning, 4,
237-256.
Smart, J., Mason, M. and Corrie, G. (1991) Assessing the Visual Impact of
Development Plans. In GIS Applications in Natural Resources (eds Heit,
M. and Shortreid, A.), GIS World Inc., 295-303.
Zewe, R. and Koglin, H.-J. (1995) A method for the visual assessment of
overhead lines. Computers and Graphics, 19, 97-108.