Remote Sensing and GIS
FUNDAMENTAL CONCEPT OF REMOTE
SENSING
·
DEFINITION:
Remote sensing is the science and art of obtaining information about an
object, area or phenomenon through the analysis of data acquired by a device
that is not in contact with the object, area or phenomenon under investigation.
Without direct contact, some means of transferring information through space
must be utilized. In remote sensing, information transfer is accomplished by
use of electromagnetic radiation (EMR). EMR is a form of energy that reveals
its presence by the observable effects it produces when it strikes the matter.
Using various sensors, we collect data, then process and analyse them to obtain
information about the earth. Today the term remote sensing is used in this broader
sense. Our eyes also act as a sensor. Satellite remote sensing, from platforms as far as 36,000 km from the earth, extracts information well beyond the reach of
our own eyes.
·
Types of Remote Sensing:
1. In respect of the type of energy source:
Passive Remote Sensing: Makes use of sensors that detect the reflected or emitted
electromagnetic radiation from natural sources.
Active Remote Sensing: Makes use of sensors that detect reflected responses from
objects that are irradiated from artificially generated energy sources, e.g. radar.
2. In respect of wavelength region: Remote sensing is classified into three types
with respect to the wavelength regions:
I. Visible and reflective infrared remote sensing.
II. Thermal infrared remote sensing.
III. Microwave remote sensing.
·
Stages of Remote Sensing:
The important stages of Remote Sensing are -
1. Origin of electromagnetic radiation.
2. Transmission of energy from the source to the target.
3. Interaction of energy with the target.
4. Transmission of reflected / emitted energy to the sensor.
5. Detection of energy by the sensor, converting it into photographic or digital output.
6. Transmission of sensor output.
7. Preprocessing of data for generation of data products.
8. Collection of ground truth and other collateral information.
9. Data processing and interpretation.
Platforms:
The vehicles or carriers for remote sensors are called platforms. In
this sense, the platform is the base from which remote sensing is done. The key
factor in the selection of a platform is the altitude, which determines the
ground resolution together with the instantaneous field of view of the sensor on
board the platform. There are three main categories of platforms, namely
ground-borne, air-borne and space-borne.
Sensors:
A sensor is a device that detects electromagnetic radiation. All sensors
employed on earth observation platforms use electromagnetic radiation to
observe terrain features. The best example of a sensor is the human eye,
through which we can see the objects around us without any physical contact.
Basically there are two types of sensors - image-forming and non-image-forming. Image-forming sensors are further divided into active and passive sensors.
Active Sensor: Active Sensor uses own EMR to illuminate the target.
Passive Sensor: Passive Sensor relies on
naturally occurring radiation.
Energy
Interaction With Earth Surface Features:
When electromagnetic energy is incident on any given earth surface feature, three
fundamental energy interactions with the feature are possible: various
fractions of the energy incident on the element are reflected, absorbed and /
or transmitted.
The relationship between the three energy interactions follows from the
principle of conservation of energy -
EI(λ) = ER(λ) + EA(λ) + ET(λ)
Where,
EI = Incident energy
ER = Reflected energy
EA = Absorbed energy
ET = Transmitted energy
λ = Wavelength (each component is a function of wavelength)
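As a numeric check of the energy balance above (the energy values are purely illustrative):

```python
# Energy balance at the surface: E_I = E_R + E_A + E_T (all wavelength-dependent).
# The incident/reflected/absorbed values below are hypothetical.
def transmitted(e_incident, e_reflected, e_absorbed):
    """Solve the conservation equation for the transmitted component."""
    return e_incident - e_reflected - e_absorbed

# For 100 units of incident energy, with 35 reflected and 50 absorbed:
e_t = transmitted(100.0, 35.0, 50.0)
print(e_t)  # 15.0
```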
FUNDAMENTAL CONCEPT OF AERIAL
PHOTOGRAPHY
DEFINITION:-
A photograph taken from the air with the camera axis pointing downwards
at the time of exposure is known as an aerial photograph. An aerial photograph
differs from a normal photograph in that it gives a three-dimensional view
of the features with the help of a stereoscope.
TYPES OF
AERIAL PHOTOGRAPH:
Aerial photographs can be divided into four classes depending on the
orientation of the optical axis of the camera.
1.
VERTICAL PHOTOGRAPHS:
The photograph taken with the optical axis of the camera pointing vertically
downwards is called a vertical photograph.
2. OBLIQUE PHOTOGRAPH: -
The photograph taken with the optical axis of the camera tilted is called an
oblique photograph. This type of photograph can be subdivided into two
categories - a. low oblique, b. high oblique.
3. CONVERGENT PHOTOGRAPH: -
A convergent photograph is a low oblique photograph taken with two cameras
simultaneously at successive camera stations, with the camera axes tilted from
the vertical at a fixed angle in the direction of the flight line, so that the
forward exposure of one station forms a stereo pair with the backward exposure
of the next station.
4. TRIMETROGON PHOTOGRAPH: -
These photographs are taken simultaneously with three cameras held in a single
mount, of which one is held vertically and photographs the area below the
plane, while the other two, aligned at right angles to the azimuth, are held at
an angle of 60 degrees from the vertical and photograph the areas adjacent to
the area covered by the vertical camera.
DETERMINATION
OF SCALE FROM AERIAL PHOTOGRAPH:
The scale of a photograph can be determined from the focal length of the camera
used for taking the photograph and the flying height, and is usually given by
the relationship f/H, where f is the focal length of the camera and H is the
flying height.
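A quick sketch of this relationship, with hypothetical camera and flight values:

```python
# Photo scale = focal length / flying height above ground (same units), so the
# representative fraction is 1 : (H / f). Values below are hypothetical.
def photo_scale(focal_length_m, flying_height_m):
    """Return the denominator of the photo's representative fraction."""
    return flying_height_m / focal_length_m

# A 152 mm camera flown 3040 m above the ground:
denom = photo_scale(0.152, 3040.0)
print(f"Scale = 1:{denom:.0f}")  # Scale = 1:20000
```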
FUNDAMENTAL
CONCEPT OF GLOBAL POSITIONING SYSTEM
INTRODUCTION:-
The global positioning system (GPS) is a worldwide radio navigation system
formed from a constellation of 24 satellites and their ground stations.
GPS uses these "man-made stars" as reference points to calculate precise
position and time, and is utilized as a modern surveying technique in the
spheres of geodesy, surveying & mapping. GPS is funded and controlled by the US
Department of Defense and was primarily developed for military navigation and
real-time positioning. It was initially known as the Navigation Satellite
Timing and Ranging (NAVSTAR) system.
THE GPS
SYSTEM:
There are three segments in GPS system-space segment, control segment
&user segment.
1.
Space segment:
This segment of GPS consists of a constellation of 24 satellites, each of which
orbits the earth in about 12 hours at an altitude of 20,200 km. The orbital
planes of this system are inclined to the equatorial plane by 55 degrees. There
are six such orbits at equal intervals (i.e. 60 degrees separation) and each
orbit has 4 satellites. These satellites continuously transmit messages on two
carrier frequencies, L1 at 1575.42 MHz and L2 at 1227.60 MHz, along with the
satellite ephemerides (position & time), in two wavelength bands of 19 & 24 cm
respectively. The L1 band is modulated with two signals - the P code (precision
code) & the C/A code (coarse acquisition code). The L2 band provides only the
P code. This is called the space segment.
2.
Control segment:
This earth-based segment regularly updates the ephemeris, i.e. the time,
position and other data of the satellites in orbit, on the basis of data
obtained from a globally distributed network of tracking stations.
3.
User segment:
This consists of one or more GPS receivers and antennas used at the earth
observation site. The receiver picks up the transmitted signals from the
satellites within the visible cone over the observer's horizon and processes
them to provide the 3D position of the point of observation.
FUNDAMENTAL
CONCEPT OF DIGITAL IMAGE PROCESSING
Digital Image Processing
Definition:-
A digital image is a representation of a two-dimensional image as a finite
set of digital values of the optical energy reflected by features on the
earth's surface. Digital image processing involves many procedures, such as
formatting & correcting the data, associating a map coordinate system with the
image data, and digital enhancement - both visual & computer aided.
WHY WE USE DIGITAL IMAGE PROCESSING:
Compared with other techniques such as visual or electro-optical methods of
image analysis, we prefer digital image processing because -
·
It is fast, accurate and flexible.
·
Radiometric accuracy can be maintained in digital processing.
·
Numerous users can use the same image for different interpretations
simultaneously.
·
Processing can be repeated on either old or new data.
·
256 shades of grey can easily be processed by a computer.
Steps of
digital image processing
Digital image
processing is done in three steps:-
·
Image restoration or preprocessing
·
Image enhancement
·
Image segmentation or pattern
recognition or classification.
Image restoration:
Image restoration involves the correction of distortion, degradation and noise
introduced during the imaging process. Image restoration produces a corrected
image that is as close as possible to the original scene, both geometrically
and radiometrically.
Image
Interpretation:
Remotely sensed data can be interpreted digitally or visually, or both methods
can be used simultaneously. The computer is the key tool for digital
interpretation. In the case of visual interpretation, we need some guiding
criteria to identify features.
The main criteria for visually interpreting a satellite image are as follows -
Ø Shape
Ø Size
Ø Pattern
Ø Tone
Ø Association.
1.
Image Reduction:
In the early stages of a remote sensing project it is often necessary to view
the entire image in order to locate the row and column co-ordinates of a
sub-image that encompasses the study area. Remote sensor data are composed of
more than 3000 rows and 3000 columns in a number of bands, while most digital
image processing systems display at most 1024 × 1024 pixels at one time.
Therefore, it is useful to have a simple procedure for reducing the original
image data set down to a smaller data set that can be viewed on the screen at
one time for orientation purposes. To reduce a digital image to just 1/m² of
the original data, every m-th row and m-th column of the imagery are
systematically selected and displayed. Row and column deletion is the simplest
form of reduction.
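The m-th row/column selection can be sketched in a few lines (a toy array stands in for real imagery):

```python
import numpy as np

# Image reduction by row/column deletion: keep every m-th row and column,
# shrinking the data volume to 1/m^2 of the original.
def reduce_image(image, m):
    return image[::m, ::m]

img = np.arange(36).reshape(6, 6)  # toy 6x6 "image"
small = reduce_image(img, 2)
print(small.shape)  # (3, 3)
```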
2.
Image Magnification:
Digital image magnification is usually performed to improve the scale of
a display for visual interpretation purposes, or occasionally to match the
scale of another image. Row and column replication is the simplest form of
image magnification; a magnifying factor of m increases the data volume by m².
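Row and column replication can be sketched as follows (toy values):

```python
import numpy as np

# Image magnification by row/column replication: each pixel is repeated
# m times along both axes, so the data volume grows by m^2.
def magnify_image(image, m):
    return np.repeat(np.repeat(image, m, axis=0), m, axis=1)

img = np.array([[1, 2], [3, 4]])
big = magnify_image(img, 2)
print(big.shape)  # (4, 4)
```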
Image
Enhancement:
The data after image restoration are not yet ideal for image interpretation. A
form of enhancement that accentuates the apparent contrast between features is
necessary to make the image easier to interpret.
Contrast generally refers to the difference in luminance or grey-level values
in an image. It can be defined as the ratio of the maximum and minimum
intensity of the image:
C = Imaximum / Iminimum
The contrast ratio has a strong bearing on the resolving power and
detectability of an image, i.e. the larger this ratio, the easier the image is
to interpret.
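A common way to raise this apparent contrast is a simple linear stretch; the sketch below assumes a single toy band of DN values:

```python
import numpy as np

# Linear contrast stretch: remap DN values so the darkest pixel maps to 0
# and the brightest to 255, increasing the contrast ratio C = Imax / Imin.
def linear_stretch(band):
    lo, hi = band.min(), band.max()
    return ((band - lo) / (hi - lo) * 255.0).astype(np.uint8)

band = np.array([[60, 80], [100, 120]])
out = linear_stretch(band)
print(out.min(), out.max())  # 0 255
```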
3.
Image restoration:
The original data recorded by the sensors are often distorted. The distortion
can be both in intensity value (called radiometric distortion) and in spatial
location (called geometric distortion). Hence, each and every pixel has to
undergo certain corrective processing before the data are fit for use.
Preprocessing in image processing is used for processing performed
initially on raw data before one starts the actual image processing. This type
of processing of raw data is implemented in order to correct the impact of
various radiometric and geometric distortions. To correct image data, the
internal and external errors must be determined. Internal errors are due to
sensor effects, generally systematic and stationary. External errors are due to
platform perturbations and scene characteristics.
The errors are classified into
radiometric and geometric errors.
Radiometric errors:
Internal: - Detector response (bias and gain)
- Calibration source errors
External: - Atmospheric attenuation
- Sun elevation
Geometric errors:
Internal: - Mirror scan velocity
- Profile detector sampling delay
External: - Panoramic or cross-track distortion
- Scan skew
- Earth rotation
- Spacecraft velocity
- Perspective geometry
- Attitude (yaw, roll, pitch)
- Altitude
- Desired projection
Restoration Process:
1) Radiometric corrections:
i) Periodic line drop-out
ii) Destriping
iii) Atmospheric correction
iv) Sun elevation correction
2)
Geo-metric Correction:
Rectification is performed by making reference to well-distributed ground
control points (GCPs) on the image. Some features that make good GCPs are
highway intersections, water bodies, sharp turns of drainage lines, shorelines
etc. As the GCPs are points with known co-ordinates, least-squares regression
is applied to fit a curve, i.e. to calculate a transformation which the
distorted image has to undergo to match a geometrically correct map.
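The least-squares step can be sketched with a first-order (affine) transformation; the GCP coordinates and 30 m pixel size below are invented for illustration:

```python
import numpy as np

# Least-squares fit of a first-order (affine) transformation from image
# (col, row) coordinates to map (x, y) coordinates using GCP pairs.
def fit_affine(img_xy, map_xy):
    """Solve map = [col, row, 1] @ A for each axis by least squares."""
    src = np.column_stack([img_xy, np.ones(len(img_xy))])
    coeffs, *_ = np.linalg.lstsq(src, map_xy, rcond=None)
    return coeffs  # 3x2 matrix of transformation coefficients

# Hypothetical GCPs: a 30 m pixel grid offset to arbitrary map coordinates.
img_pts = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], float)
map_pts = img_pts * 30.0 + np.array([500_000.0, 4_000_000.0])
A = fit_affine(img_pts, map_pts)
pred = np.column_stack([img_pts, np.ones(4)]) @ A
print(np.allclose(pred, map_pts))  # True
```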
For this we can treat the process as follows -
Remotely sensed data usually contain -
- Systematic geometric errors.
- Non-systematic geometric errors.
These errors occur due to -
- Variation of the orbital parameters of the satellite.
- Curvature of the earth.
So broadly we can classify the errors as -
- Those that can be corrected using the platform ephemeris and knowledge of the
sensor.
- Those that can only be corrected with a sufficient number of GCPs.
So we can restore the images by correcting for the ephemeris effects, such as
earth rotation, spacecraft velocity, altitude variation and mirror scan
velocity.
Radiometric error:
Systematic error
Internal error
- Detector response
- Calibration source error
External error
- Atmospheric attenuation
- Sun elevation angle
Geometric error:
Systematic error
Internal error
- Mirror scan velocity
- Profile detector sampling delay
External error
- Scan skew
- Earth rotation
- Platform velocity
- Perspective geometry
Unsystematic error
- Platform altitude
- Spacecraft attitude (yaw, roll, pitch)
- Desired projection
Rectification:
Rectification is the process of transforming the data
from one grid system into another grid system using a geometric transformation.
The rectification process becomes essential when -
·
A large-scale map is to be generated based on the image.
·
A mosaic image is to be prepared.
·
Multi-temporal data are to be registered.
·
Images of different resolutions are to be brought to the same scale.
·
Pixels are to be compared scene to scene in applications such as change
detection.
·
GIS databases are to be developed for GIS modeling.
·
Accurate scaled photomaps are to be created.
·
An image is to be overlaid with vector data, such as Arc/Info coverages.
·
Images that are originally at different scales are to be compared.
·
Accurate distance and area measurements are to be extracted.
·
Any other analyses requiring precise geographic locations are to be performed.
Steps of
Rectification:
1. Location of ground control points (GCPs) on a topographic sheet and of the
corresponding points on the image.
2. Determination of
a suitable transformation relation (usually polynomial equation).
3. Creation of
output image files with the new co-ordinate system.
Disadvantages of
Rectification
During rectification, the
data file values of rectified pixels must be resampled to fit into a new grid
of pixel rows and columns. Although some of the algorithms for calculating
these values are highly reliable, some spectral integrity of the data can be lost
during rectification. If map coordinates or map units are not needed in the
application, then it may be wiser not to rectify the image. An unrectified
image is more spectrally correct than a rectified image.
Image to image Rectification is
performed by two steps:
i. Spatial interpolation.
ii. Intensity interpolation.
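Intensity interpolation by nearest-neighbour resampling, the simplest scheme, can be sketched as follows (toy array):

```python
import numpy as np

# Intensity interpolation by nearest-neighbour resampling: each output pixel
# takes the DN of the closest input pixel, so original values are preserved
# (the most spectrally faithful of the common resampling schemes).
def nearest_resample(image, out_shape):
    rows = np.linspace(0, image.shape[0] - 1, out_shape[0]).round().astype(int)
    cols = np.linspace(0, image.shape[1] - 1, out_shape[1]).round().astype(int)
    return image[np.ix_(rows, cols)]

img = np.arange(16).reshape(4, 4)
out = nearest_resample(img, (3, 3))
print(out.shape)  # (3, 3)
```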
Image
Classification:
The intent
of the classification process is to categorize all pixels in a digital image
into one of several land cover classes, or "themes". This
categorized data may then be used to produce thematic maps of the land cover
present in an image. The objective of image
classification is to identify and portray, as a unique gray level (or color),
the features occurring in an image in terms of the object or type of land cover
these features actually represent on the ground.
a.
Hard Classification
i.
Supervised Classification
ii.
Unsupervised Classification
b.
Soft Classification
i.
Fuzzy Classification
ii.
Spectral Mixing Analysis
c.
Hybrid Classification
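As an illustrative sketch of the supervised route, a minimum-distance-to-means classifier; the two-band class signatures below are hypothetical training means, not real spectra:

```python
import numpy as np

# Minimum-distance-to-means classification: each pixel is assigned the
# land-cover class whose training-mean spectrum is nearest in feature space.
def classify(pixels, class_means):
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return d.argmin(axis=1)

means = np.array([[30.0, 90.0],   # class 0: e.g. water (hypothetical)
                  [80.0, 40.0]])  # class 1: e.g. bare soil (hypothetical)
pix = np.array([[32.0, 85.0], [78.0, 45.0]])
print(classify(pix, means))  # [0 1]
```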
Accuracy
Assessment:
Accuracy assessment is a general term for comparing the classification to
geographical data that are assumed to be true, in order to determine the
accuracy of the classification process. Usually, the assumed-true data are
derived from ground truth data. It is usually not practical to ground truth or
otherwise test every pixel of a classified image. Therefore, a set of reference
pixels is usually used. Reference pixels are points on the classified image for
which actual data are (or will be) known. The reference pixels are randomly
selected.
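The comparison is usually summarized in an error (confusion) matrix; a minimal sketch with invented reference data:

```python
import numpy as np

# Accuracy assessment from reference pixels: build an error (confusion)
# matrix and compute overall accuracy as correct / total.
def error_matrix(reference, classified, n_classes):
    m = np.zeros((n_classes, n_classes), int)
    for r, c in zip(reference, classified):
        m[r, c] += 1
    return m

ref = np.array([0, 0, 1, 1, 1, 0])  # "assumed true" labels (invented)
cls = np.array([0, 1, 1, 1, 0, 0])  # classifier output (invented)
m = error_matrix(ref, cls, 2)
overall = np.trace(m) / m.sum()     # 4 of 6 pixels correct
print(overall)
```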
Image
enhancement:
Image enhancement is the modification of an image to improve its appearance
for better human visual analysis. Generally, enhancement alters the original
DN values; therefore, it is not performed until the image restoration process
is completed.
Image magnification:
It is the process inverse to image reduction: to show the different objects in
an image clearly we zoom in on the image. This is called image magnification.
Image reduction:
It is a process of image enhancement used when a large image cannot be shown
on a computer monitor at a glance; then we go for image reduction. A systematic
sampling process is used for this purpose: first, all pixels are grouped into
many equal classes at equal intervals; then one pixel is selected at random,
and the selection of the other pixels is performed systematically from the
other groups.
Fourier
Transform:
Fourier analysis is a mathematical technique for separating an image into its
various spatial frequency components. The operation amounts to fitting a
continuous function through the discrete DN values: if they were plotted along
each row and column, they could be described mathematically by a combination of
sine and cosine waves with various amplitudes, frequencies and phases. A
Fourier transform results from the calculation of the amplitude and phase for
each possible spatial frequency in an image.
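A minimal sketch using NumPy's 2-D FFT (the toy striped image is arbitrary):

```python
import numpy as np

# Fourier analysis of an image: np.fft.fft2 decomposes the 2-D DN array into
# its spatial-frequency components, from which amplitude and phase follow.
img = np.add.outer(np.sin(np.linspace(0, 4 * np.pi, 64)),
                   np.zeros(64))          # toy image: horizontal stripes
spectrum = np.fft.fft2(img)
amplitude = np.abs(spectrum)              # amplitude of each spatial frequency
phase = np.angle(spectrum)                # phase of each spatial frequency
print(amplitude.shape)  # (64, 64)
```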
NDVI (Normalized Differential
Vegetation Index)
To determine
the density of green on a patch of land, researchers must observe the distinct
colors (wavelengths) of visible and near-infrared sunlight reflected by the
plants.
Nearly all satellite Vegetation
Indices employ this difference formula to quantify the density of plant growth
on the Earth — near-infrared radiation minus visible radiation divided by
near-infrared radiation plus visible radiation. The result of this formula is
called the Normalized Difference Vegetation Index (NDVI). Written
mathematically, the formula is:
NDVI = (NIR - VIS) / (NIR + VIS)
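The difference formula described above can be sketched directly; the band reflectance values below are illustrative:

```python
import numpy as np

# NDVI from near-infrared and visible (red) reflectance, following the
# difference formula: (NIR - VIS) / (NIR + VIS).
def ndvi(nir, vis):
    return (nir - vis) / (nir + vis)

nir = np.array([0.50, 0.40])  # hypothetical reflectances
vis = np.array([0.08, 0.30])
print(ndvi(nir, vis))  # dense vegetation ~0.72, sparse ~0.14
```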
Change
Detection:
Change detection involves the use of multitemporal data sets to determine areas
of change from multi-date imagery. Two types of change detection are found -
1. Short-term change detection - snow cover, flooded area.
2. Long-term change detection - desertification, urban fringe construction.
Change Detection Procedure:
The change detection procedure considers the following parameters -
1. The imagery is acquired with the same sensor, the same spectral bands and
the same spatial resolution, and at the same time of day.
2. The imagery is acquired on the anniversary date to minimize differences in
solar illumination angle and season.
3. The imagery is registered with the same projection type.
4. The images should be free from environmental effects such as differences in
lake level, tidal stage, wind, soil moisture condition etc.
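A minimal sketch of change detection by image differencing, assuming two co-registered single-band images (DN values and threshold invented):

```python
import numpy as np

# Image-differencing change detection on two co-registered, same-band images:
# pixels whose DN difference exceeds a threshold are flagged as changed.
def change_mask(date1, date2, threshold):
    return np.abs(date2.astype(int) - date1.astype(int)) > threshold

t1 = np.array([[100, 102], [ 98, 200]], dtype=np.uint8)
t2 = np.array([[101,  99], [150, 198]], dtype=np.uint8)
print(change_mask(t1, t2, 20))  # only the pixel that jumped 98 -> 150 is True
```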
FUNDAMENTAL CONCEPT OF GIS
DEFINITION:-
GIS
is a decision support system comprising computer hardware, software, geographic
data and personnel, designed to efficiently capture, store, manipulate, analyse
and display all forms of spatial and non-spatial (attribute) data for better
management of a geographic area.
Why use GIS
GIS is a rigorous and objective analysis tool for assessing complex
environmental problems. GIS also provides an excellent medium for communicating
the results of these analyses to policy makers, managers and the public.
Data
used in GIS:
Two basic
types of data are normally entered into a GIS.
Ø Spatial
Data
Ø
Non-spatial Data
Data Model:
A data model determines how data are structured, stored, processed and analyzed
in a GIS. Many GIS functions are either vector based or raster based. Raster
and vector data can be displayed simultaneously, and raster data can be
converted to vector data and vice versa; GIS is a useful tool for integrating
raster data and vector data.
GIS DATA STRUCTURE
Raster data
structure:
The simplest raster data structure consists of an array of grid cells (pixels).
Each grid cell is referenced by a row & column number & contains a number
representing the type or value of the attribute being mapped. In raster
structures a point is represented by a single grid cell, a line by a number of
neighboring cells strung out in a given direction, & an area by an
agglomeration of neighboring cells.
There are several compact methods for storing raster data, namely -
1. Cell-by-cell encoding method.
2. Run-length encoding method.
3. Block code encoding method.
4. Quadtree encoding method.
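Run-length encoding, for example, can be sketched on a single raster row:

```python
# Run-length encoding of one raster row: consecutive identical cell values
# are stored as (value, count) pairs, compressing homogeneous areas.
def run_length_encode(row):
    runs = []
    for value in row:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([value, 1])   # start a new run
    return [tuple(r) for r in runs]

row = [3, 3, 3, 3, 7, 7, 1, 1, 1]
print(run_length_encode(row))  # [(3, 4), (7, 2), (1, 3)]
```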
VECTOR DATA
STRUCTURE:
The vector representation of an object is an attempt to represent the object as
exactly as possible. The co-ordinate space is assumed to be continuous, not
quantized as with the raster space, allowing all positions, lengths &
dimensions to be defined precisely.
Point entities can be considered to embrace all geographical & graphical
entities that are positioned by a single XY co-ordinate pair.
Line
entities can be
defined as all linear features built up of straight-line segments made up of
two or more co-ordinates.
Area or polygon (region) entities can be represented in various ways in a
vector database. The aim of a polygon data structure is to be able to describe
the topological properties of areas (that is, their shapes, neighbors &
hierarchy) in such a way that the associated properties of these basic spatial
building blocks can be displayed & manipulated as thematic map data.
There are several ways in which vector data structures can be put together into
a vector data model, enabling us to examine the relationships between variables
in a single coverage or among variables in different coverages. There are 3
basic types:
1. Spaghetti model.
2. Topological model.
3. Vector chain code model.
Advantage
& Disadvantage of Raster & Vector Data Model
Vector Methods
Advantage:
1. Good representation of phenomenological data structure.
2. Compact data structure.
3. Topology can be completely described with network linkages.
4. Accurate graphics.
5. Retrieval, updating & generalization of graphics & attributes are possible.
Disadvantage:
1. Complex data structure.
2. Display & plotting can be expensive, particularly for high quality, color &
cross-hatching.
3. The topology is expensive, particularly for the more sophisticated software.
4. Spatial analysis & filtering within polygons are difficult.
Raster Methods
Advantage:
1. Simple data structure.
2. The overlay & combination of mapped data with remotely sensed data is easy.
3. Various kinds of spatial analysis are easy.
4. The topology is cheap.
Disadvantage:
1. Large volumes of graphic data.
2. The use of large cells to reduce data volumes means there can be a serious
loss of information.
3. Network linkages are difficult to establish.
4. Crude raster maps are considered to be less beautiful.
Digitizing:
It is the process of digital data conversion in a vector-based GIS. Map
digitizing is performed by capturing the spatial data of a map with a
spatial-data capture device called a digitizer, and is particularly useful for
GIS applications. Digitizing is performed using point, line and polygon tools.
Digitizing can be done in point mode, where single points are recorded one at a
time from X and Y movements, or in stream mode, where points are collected at
regular intervals of time or distance. Most GISs use a spaghetti mode of
digitizing, which allows the user to simply digitize lines by indicating a
start point and an end point. Data can be captured in point or stream mode.
However, some systems do allow the user to capture the data in an arc/node
topological data structure; the arc/node data structure requires that the
digitizer identify nodes.
In the digitizing process some errors arise in the use of the vector tools -
point, line and polygon. The common errors in the digitizing process are -
i. Undershoot.
ii. Overshoot.
iii. Wrong arc direction.
iv. Intersection (no node).
v. Resembling arc.
vi. Pseudo node.
vii. Dangling node.
viii. Missing label.
ix. Duplicate label.
Data
analysis:
Buffering:
Buffering involves the creation
of zone of a specified width around a point, line and polygon. The resulting
buffer is a new polygon, which can be used in queries to determine which
entities occur either within or outside the defined buffer zone.
Buffering can also be
defined as the vector equivalent to distance analysis in raster Geographical
Information System.
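A raster sketch of buffering around a point feature (grid size, point location and radius are arbitrary):

```python
import numpy as np

# Raster equivalent of buffering: flag every cell whose centre lies within a
# given distance of a point feature, producing a buffer-zone mask.
def point_buffer(shape, point, radius):
    rows, cols = np.indices(shape)
    dist = np.hypot(rows - point[0], cols - point[1])
    return dist <= radius

mask = point_buffer((5, 5), (2, 2), 1.5)
print(mask.sum())  # 9 cells fall inside the buffer zone
```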
Overlay:
In the overlay technique, various geographic data comprising multiple layers
are overlaid with logical operations, including logical addition (OR) or
logical multiplication (AND). For example, overlaying deforestation and
slope-gradient maps of a mountainous area can identify areas at risk of soil
erosion.
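A minimal sketch of logical-multiplication overlay on two boolean raster layers (the cell values are invented):

```python
import numpy as np

# Logical-multiplication (AND) overlay of two boolean raster layers:
# cells that are both deforested and steep are flagged as erosion risk.
deforested = np.array([[1, 1], [0, 1]], dtype=bool)
steep      = np.array([[1, 0], [0, 1]], dtype=bool)
risk = deforested & steep   # logical multiplication
print(risk)  # True only where both conditions hold
```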