Digital Image Processing, 3rd Edition

Rafael C. Gonzalez and Richard E. Woods

The results have prompted the following new and reorganized material:

- What Is Digital Image Processing?
- The Origins of Digital Image Processing
- Fundamental Steps in Digital Image Processing
- Components of an Image Processing System
- Elements of Visual Perception
- Light and the Electromagnetic Spectrum
- Image Sensing and Acquisition
- Image Sampling and Quantization
- Some Basic Relationships Between Pixels
- Linear and Nonlinear Operations
- Some Basic Gray Level Transformations
- Histogram Processing
- Basics of Spatial Filtering
- Smoothing Spatial Filters
- Sharpening Spatial Filters
- Combining Spatial Enhancement Methods
- Introduction to the Fourier Transform and the Frequency Domain
- Smoothing Frequency-Domain Filters
- Sharpening Frequency-Domain Filters
- Homomorphic Filtering
- Noise Models
- Linear, Position-Invariant Degradations
- Estimating the Degradation Function
- Inverse Filtering
- Constrained Least Squares Filtering
- Geometric Mean Filter
- Geometric Transformations
- Color Fundamentals
- Color Models
- Pseudocolor Image Processing
- Basics of Full-Color Image Processing
- Color Transformations
- Smoothing and Sharpening
- Color Segmentation
- Noise in Color Images
- Color Image Compression
- Multiresolution Expansions
- Wavelet Transforms in One Dimension
- The Fast Wavelet Transform
- Wavelet Transforms in Two Dimensions
- Wavelet Packets
- Image Compression Models
- Elements of Information Theory
- Error-Free Compression
- Lossy Compression
- Image Compression Standards
- Dilation and Erosion
- Opening and Closing
- The Hit-or-Miss Transformation
- Some Basic Morphological Algorithms
- Extensions to Gray-Scale Images
- Detection of Discontinuities
- Edge Linking and Boundary Detection
- Region-Based Segmentation
- Segmentation by Morphological Watersheds
- The Use of Motion in Segmentation
- Boundary Descriptors
- Regional Descriptors
- Use of Principal Components for Description
- Relational Descriptors
- Patterns and Pattern Classes
- Recognition Based on Decision-Theoretic Methods
- Structural Methods

In dedicated applications, sometimes specially designed computers are used to achieve a required level of performance, but our interest here is on general-purpose image processing systems. In these systems, almost any well-equipped PC-type machine is suitable for off-line image processing tasks.

Software for image processing consists of specialized modules that perform specific tasks. A well-designed package also includes the capability for the user to write code that, as a minimum, utilizes the specialized modules. More sophisticated software packages allow the integration of those modules and general-purpose software commands from at least one computer language.

Mass storage capability is a must in image processing applications. When dealing with thousands, or even millions, of images, providing adequate storage in an image processing system can be a challenge. Storage is measured in bytes (eight bits), Kbytes (one thousand bytes), Mbytes (one million bytes), Gbytes (meaning giga, or one billion, bytes), and Tbytes (meaning tera, or one trillion, bytes).

One method of providing short-term storage is computer memory. Another is by specialized boards, called frame buffers, that store one or more images and can be accessed rapidly, usually at video rates (e.g., at 30 complete images per second).

The latter method allows virtually instantaneous image zoom, as well as scroll (vertical shifts) and pan (horizontal shifts). Frame buffers usually are housed in a specialized image processing hardware unit. On-line storage generally takes the form of magnetic disks or optical-media storage. The key factor characterizing on-line storage is frequent access to the stored data.

Finally, archival storage is characterized by massive storage requirements but infrequent need for access. Image displays in use today are mainly color (preferably flat screen) TV monitors.

Monitors are driven by the outputs of image and graphics display cards that are an integral part of the computer system. Seldom are there requirements for image display applications that cannot be met by display cards available commercially as part of the computer system. In some cases, it is necessary to have stereo displays, and these are implemented in the form of headgear containing two small displays embedded in goggles worn by the user.

Hardcopy devices for recording images include laser printers, film cameras, heat-sensitive devices, inkjet units, and digital units, such as optical and CD-ROM disks. Film provides the highest possible resolution, but paper is the obvious medium of choice for written material. For presentations, images are displayed on film transparencies or in a digital medium if image projection equipment is used. The latter approach is gaining acceptance as the standard for image presentations.

Networking is almost a default function in any computer system in use today. Because of the large amount of data inherent in image processing applications, the key consideration in image transmission is bandwidth. In dedicated networks, this typically is not a problem, but communications with remote sites via the Internet are not always as efficient.

Fortunately, this situation is improving quickly as a result of optical fiber and other broadband technologies.

Summary

The main purpose of the material presented in this chapter is to provide a sense of perspective about the origins of digital image processing and, more important, about current and future areas of application of this technology. Although the coverage of these topics in this chapter was necessarily incomplete due to space limitations, it should have left the reader with a clear impression of the breadth and practical scope of digital image processing.

Upon concluding the study of the final chapter, the reader of this book will have arrived at a level of understanding that is the foundation for most of the work currently underway in this field.

References and Further Reading

References at the end of later chapters address specific topics discussed in those chapters, and are keyed to the Bibliography at the end of the book.

However, in this chapter we follow a different format in order to summarize in one place a body of journals that publish material on image processing and related topics. We also provide a list of books from which the reader can readily develop a historical and current perspective of activities in this field.

Thus, the reference material cited in this chapter is intended as a general-purpose, easily accessible guide to the published literature on image processing. Several major refereed journals publish articles on image processing and related topics. The following books, listed in reverse chronological order (with the number of books biased toward more recent publications), contain material that complements our treatment of digital image processing.

These books represent an easily accessible overview of the area for the past 30 years and were selected to provide a variety of treatments. They range from textbooks, which cover foundation material; to handbooks, which give an overview of techniques; and finally to edited books, which contain material representative of current research in the field.

Duda, R. Pattern Classification, 2nd ed. Ritter, G. Shapiro, L. Dougherty, E. Etienne, E. Goutsias, J, Vincent, L. Mallot, A. Marchand-Maillet, S. Binary Digital Image Processing: Edelman, S. Lillesand, T. Mather, P. Computer Processing of Remotely Sensed Images: Petrou, M. Image Processing: Russ, J. The Image Processing Handbook, 3rd ed. Smirnov, A.

Sonka, M. Umbaugh, S. Computer Vision and Image Processing: Haskell, B. Digital Pictures: Jahne, B. Digital Image Processing: Castleman, K. Digital Image Processing, 2nd ed. Geladi, P. Bracewell, R. Sid-Ahmed, M. Jain, R. Mitiche, A. Baxes, G. Gonzalez, R. Haralick, R. Computer and Robot Vision, vols. Pratt, W. Lim, J. Schalkoff, R. Giardina, C.

Serra, J. Ballard, D. Fu, K. Nevatia, R. Pavlidis, T. Rosenfeld, R. Digital Picture Processing, 2nd ed. Hall, E. Syntactic Pattern Recognition: Andrews, H. Tou, J.

Preview

The purpose of this chapter is to introduce several concepts related to digital images and some of the notation used throughout the book. Later sections introduce image sampling and quantization; additional topics discussed there include digital image representation, the effects of varying the number of samples and gray levels in an image, some important phenomena associated with sampling, and techniques for image zooming and shrinking.

Finally, the chapter closes with a discussion of linear and nonlinear operations. As noted in that discussion, linear operators play a central role in the development of image processing techniques.

Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. Hence, developing a basic understanding of human visual perception as a first step in our journey through this book is appropriate. Given the complexity and breadth of this topic, we can only aspire to cover the most rudimentary aspects of human vision.

In particular, our interest lies in the mechanics and parameters related to how images are formed in the eye. We are interested in learning the physical limitations of human vision in terms of factors that also are used in our work with digital images.

Thus, factors such as how human and electronic imaging compare in terms of resolution and ability to adapt to changes in illumination are not only interesting, they also are important from a practical point of view. The eye is nearly a sphere, with an average diameter of approximately 20 mm.

Three membranes enclose the eye: the cornea and sclera outer cover, the choroid, and the retina. The cornea is a tough, transparent tissue that covers the anterior surface of the eye. Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe. The choroid lies directly below the sclera.

This membrane contains a network of blood vessels that serve as the major source of nutrition to the eye.


Even superficial injury to the choroid, often not deemed serious, can lead to severe eye damage as a result of inflammation that restricts blood flow. The choroid coat is heavily pigmented and hence helps to reduce the amount of extraneous light entering the eye and the backscatter within the optical globe.

At its anterior extreme, the choroid is divided into the ciliary body and the iris diaphragm. The latter contracts or expands to control the amount of light that enters the eye. The central opening of the iris (the pupil) varies in diameter from approximately 2 to 8 mm.

The front of the iris contains the visible pigment of the eye, whereas the back contains a black pigment. The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to the ciliary body. The lens is colored by a slightly yellow pigmentation that increases with age. In extreme cases, excessive clouding of the lens, caused by the affliction commonly referred to as cataracts, can lead to poor color discrimination and loss of clear vision.

Both infrared and ultraviolet light are absorbed appreciably by proteins within the lens structure and, in excessive amounts, can damage the eye. When the eye is properly focused, light from an object outside the eye is imaged on the retina.

Pattern vision is afforded by the distribution of discrete light receptors over the surface of the retina. There are two classes of receptors: cones and rods. The cones in each eye number between 6 and 7 million.

They are located primarily in the central portion of the retina, called the fovea, and are highly sensitive to color. Humans can resolve fine details with these cones largely because each one is connected to its own nerve end. Muscles controlling the eye rotate the eyeball until the image of an object of interest falls on the fovea.

Cone vision is called photopic or bright-light vision. The number of rods is much larger: some 75 to 150 million are distributed over the retinal surface. The larger area of distribution and the fact that several rods are connected to a single nerve end reduce the amount of detail discernible by these receptors. Rods serve to give a general, overall picture of the field of view.

They are not involved in color vision and are sensitive to low levels of illumination. For example, objects that appear brightly colored in daylight when seen by moonlight appear as colorless forms because only the rods are stimulated.

This phenomenon is known as scotopic or dim-light vision. The distribution of receptors across the retina is not uniform; in particular, no receptors exist in the region where the optic nerve emerges from the eye. The absence of receptors in this area results in the so-called blind spot. Except for this region, the distribution of receptors is radially symmetric about the fovea. Receptor density is measured in degrees from the fovea (that is, in degrees off axis, as measured by the angle formed by the visual axis and a line passing through the center of the lens and intersecting the retina).

The fovea itself is a circular indentation in the retina of about 1.5 mm in diameter. However, in terms of future discussions, talking about square or rectangular arrays of sensing elements is more useful.

Thus, by taking some liberty in interpretation, we can view the fovea as a square sensor array of size 1.5 mm * 1.5 mm. The density of cones in that area of the retina is approximately 150,000 elements per mm².

Based on these approximations, the number of cones in the region of highest acuity in the eye is about 337,000 elements. While the ability of humans to integrate intelligence and experience with vision makes this type of comparison dangerous, keep in mind for future discussions that the basic ability of the eye to resolve detail is certainly within the realm of current electronic imaging sensors.

The principal difference between the lens of the eye and an ordinary optical lens is that the former is flexible. The shape of the lens is controlled by tension in the fibers of the ciliary body. To focus on distant objects, the controlling muscles cause the lens to be relatively flattened. Similarly, these muscles allow the lens to become thicker in order to focus on objects near the eye. The distance between the center of the lens and the retina (called the focal length) varies from approximately 17 mm to about 14 mm, as the refractive power of the lens increases from its minimum to its maximum.

Point C is the optical center of the lens. When the eye focuses on a nearby object, the lens is most strongly refractive. This information makes it easy to calculate the size of the retinal image of any object. Suppose, for example, that an observer is looking at a tree 15 m high at a distance of 100 m. If h is the height in mm of that object in the retinal image, the geometry of similar triangles through the optical center yields 15/100 = h/17, or h = 2.55 mm.
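As a quick check of this similar-triangles relation, the short sketch below (a hypothetical helper, not from the book) computes the retinal image height h = H * f / d for an object of height H viewed at distance d, assuming the 17 mm focal length used above.

```python
def retinal_image_height_mm(object_height_m, distance_m, focal_length_mm=17.0):
    """Similar triangles through the optical center: H / d = h / f, so h = H * f / d."""
    return object_height_m * focal_length_mm / distance_m

# The worked example from the text: a 15 m tree viewed from 100 m.
print(retinal_image_height_mm(15, 100))  # 2.55 (mm)
```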

As indicated earlier, the retinal image is focused primarily on the region of the fovea. Perception then takes place by the relative excitation of light receptors, which transform radiant energy into electrical impulses that are ultimately decoded by the brain. The range of light intensity levels to which the human visual system can adapt is enormous, on the order of 10^10, from the scotopic threshold to the glare limit.

Experimental evidence indicates that subjective brightness (intensity as perceived by the human visual system) is a logarithmic function of the light intensity incident on the eye.

A plot of light intensity versus subjective brightness illustrates this characteristic. The long solid curve of such a plot represents the range of intensities to which the visual system can adapt. In photopic vision alone, the range is about 10^6. The transition from scotopic to photopic vision is gradual over the approximate range from 0.001 to 0.1 millilambert (-3 to -1 mL in the log scale).

The essential point in interpreting this impressive dynamic range is that the visual system cannot operate over such a range simultaneously. Rather, it accomplishes this large variation by changes in its overall sensitivity, a phenomenon known as brightness adaptation.

The total range of distinct intensity levels the eye can discriminate simultaneously is rather small when compared with the total adaptation range. For any given set of conditions, the current sensitivity level of the visual system is called the brightness adaptation level, which may correspond, for example, to a brightness Ba on the adaptation curve. A short intersecting curve at that point represents the range of subjective brightness that the eye can perceive when adapted to this level. This range is rather restricted, having a level Bb at and below which all stimuli are perceived as indistinguishable blacks.

The upper (dashed) portion of the curve is not actually restricted but, if extended too far, loses its meaning because much higher intensities would simply raise the adaptation level higher than Ba. The ability of the eye to discriminate between changes in light intensity at any specific adaptation level is also of considerable interest.

A classic experiment used to determine the capability of the human visual system for brightness discrimination consists of having a subject look at a flat, uniformly illuminated area large enough to occupy the entire field of view. This area typically is a diffuser, such as opaque glass, that is illuminated from behind by a light source whose intensity, I, can be varied.

To this field is added an increment of illumination, ΔI, in the form of a short-duration flash that appears as a circle in the center of the uniformly illuminated field. The quantity ΔIc/I, where ΔIc is the increment of illumination discriminable 50% of the time with background illumination I, is called the Weber ratio. A plot of log ΔIc/I as a function of log I shows that brightness discrimination is poor (the Weber ratio is large) at low levels of illumination, and that it improves significantly (the Weber ratio decreases) as background illumination increases.
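The sketch below is a minimal illustration (not from the book) of how the Weber ratio is computed; the threshold model used to generate ΔIc is an assumption made purely so the numbers vary with background intensity.

```python
import math

def weber_ratio(delta_i_c, background_i):
    # Just-discriminable increment divided by the background intensity.
    return delta_i_c / background_i

# Assumed threshold model (illustrative only): a fixed floor plus a term
# proportional to the background, so the ratio falls as I grows.
for I in (1.0, 10.0, 100.0, 1000.0):
    dIc = 0.5 + 0.02 * I
    print(f"I = {I:7.1f}  Weber ratio = {weber_ratio(dIc, I):.3f}  "
          f"log10 = {math.log10(weber_ratio(dIc, I)):.2f}")
```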


The two branches in the curve reflect the fact that at low levels of illumination vision is carried out by activity of the rods, whereas at high levels (showing better discrimination) vision is the function of cones. If the background illumination is held constant and the intensity of the other source, instead of flashing, is now allowed to vary incrementally from never being perceived to always being perceived, the typical observer can discern a total of one to two dozen different intensity changes.

Roughly, this result is related to the number of different intensities a person can see at any one point in a monochrome image. This result does not mean that an image can be represented by such a small number of intensity values because, as the eye roams about the image, the average background changes, thus allowing a different set of incremental changes to be detected at each new adaptation level. The net consequence is that the eye is capable of a much broader range of overall intensity discrimination.

In fact, this point is taken up again later in the chapter, when the effects of varying the number of gray levels in an image are examined. Two phenomena clearly demonstrate that perceived brightness is not a simple function of intensity. The first is based on the fact that the visual system tends to undershoot or overshoot around the boundary of regions of different intensities.

In a set of vertical stripes of increasing, but individually constant, intensity, we actually perceive a brightness pattern that is strongly scalloped, especially near the boundaries between stripes. These seemingly scalloped bands are called Mach bands after Ernst Mach, who first described the phenomenon in 1865.

The second phenomenon, called simultaneous contrast, is related to the fact that a region's perceived brightness does not depend simply on its intensity: squares of identical intensity appear to the eye to become darker as the background gets lighter. A more familiar example is a piece of paper that seems white when lying on a desk, but can appear totally black when used to shield the eyes while looking directly at a bright sky.

Other examples of human perception phenomena are optical illusions, in which the eye fills in nonexisting information or wrongly perceives geometrical properties of objects. In one classic example, the outline of a square is perceived clearly even though no lines defining such a figure are part of the image; the same effect, this time with a circle, shows that just a few lines are sufficient to give the illusion of a complete figure. In another illusion, two horizontal line segments of equal length appear to have different lengths. Finally, a set of lines oriented at 45 degrees can be drawn so that they are, in fact, equidistant and parallel.

Yet the crosshatching added around them creates the illusion that those lines are far from being parallel. Optical illusions are a characteristic of the human visual system that is not fully understood. We now consider the electromagnetic spectrum, introduced in Chapter 1, in more detail. Although the visible spectrum is often drawn expanded to facilitate explanation, note that it is a rather narrow portion of the EM spectrum.

On one end of the spectrum are radio waves, with wavelengths billions of times longer than those of visible light.

On the other end of the spectrum are gamma rays, with wavelengths millions of times smaller than those of visible light. The electromagnetic spectrum can be expressed in terms of wavelength, frequency, or energy. Frequency is measured in Hertz (Hz), with one Hertz being equal to one cycle of a sinusoidal wave per second. A commonly used unit of energy is the electron-volt. Electromagnetic waves can be visualized as propagating sinusoidal waves with wavelength λ, or they can be thought of as a stream of massless particles, each traveling in a wavelike pattern and moving at the speed of light. Each massless particle contains a certain amount (or bundle) of energy, and each bundle of energy is called a photon.

The energy of a photon is proportional to its frequency (E = hν, where h is Planck's constant). Thus, radio waves have photons with low energies, microwaves have more energy than radio waves, infrared still more, then visible, ultraviolet, X-rays, and finally gamma rays, the most energetic of all. This is the reason that gamma rays are so dangerous to living organisms. Light is a particular type of electromagnetic radiation that can be seen and sensed by the human eye. The visible (color) band of the electromagnetic spectrum spans the range from approximately 0.43 μm (violet) to about 0.79 μm (red).
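A minimal sketch (not from the book) of the wavelength-energy relationship just described, using E = h*c/λ; the example wavelengths are arbitrary representatives of their bands.

```python
H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s

def photon_energy_joules(wavelength_m):
    # E = h * nu with nu = c / lambda.
    return H * C / wavelength_m

for name, wavelength in [("radio (1 m)", 1.0),
                         ("visible green (550 nm)", 550e-9),
                         ("soft X-ray (1 nm)", 1e-9)]:
    print(f"{name}: {photon_energy_joules(wavelength):.3e} J")
```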

For convenience, the color spectrum is divided into six broad regions: violet, blue, green, yellow, orange, and red. No color (or other component of the electromagnetic spectrum) ends abruptly; rather, each range blends smoothly into the next. The colors that humans perceive in an object are determined by the nature of the light reflected from the object.

A body that reflects light and is relatively balanced in all visible wavelengths appears white to the observer. However, a body that favors reflectance in a limited range of the visible spectrum exhibits some shades of color. For example, green objects reflect light with wavelengths primarily in the 500 to 570 nm range while absorbing most of the energy at other wavelengths.

Light that is void of color is called achromatic or monochromatic light. The only attribute of such light is its intensity, or amount. The term gray level generally is used to describe monochromatic intensity because it ranges from black, to grays, and finally to white. Chromatic light spans the electromagnetic energy spectrum from approximately 0.43 to 0.79 μm, as noted previously. Three basic quantities are used to describe the quality of a chromatic light source: radiance, luminance, and brightness. Radiance is the total amount of energy that flows from the light source, and it is usually measured in watts (W).

Luminance, measured in lumens (lm), gives a measure of the amount of energy an observer perceives from a light source. For example, light emitted from a source operating in the far infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it; its luminance would be almost zero.

Finally, brightness is a subjective descriptor of light perception that is practically impossible to measure. Continuing along the electromagnetic spectrum, hard (high-energy) X-rays are used in industrial applications. Chest X-rays are in the high end (shorter wavelength) of the soft X-ray region, and dental X-rays are in the lower-energy end of that band.

The soft X-ray band transitions into the far ultraviolet light region, which in turn blends with the visible spectrum at longer wavelengths. Beyond the visible band lies the infrared region, whose opposite end is called the far-infrared region.

This latter region blends with the microwave band. This band is well known as the source of energy in microwave ovens, but it has many other uses, including communication and radar. Finally, the radio wave band encompasses television as well as AM and FM radio. In the higher energies, radio signals emanating from certain stellar bodies are useful in astronomical observations. Examples of images in most of the bands just discussed are given in Chapter 1. In principle, if a sensor can be developed that is capable of detecting energy radiated by a band of the electromagnetic spectrum, we can image events of interest in that band.

It is important to note, however, that the wavelength of an electromagnetic wave required to "see" an object must be of the same size as or smaller than the object. For example, a water molecule has a diameter on the order of 10^-10 m. Thus, to study molecules, we would need a source capable of emitting in the far ultraviolet or soft X-ray region. This limitation, along with the physical properties of the sensor material, establishes the fundamental limits on the capability of imaging sensors, such as visible, infrared, and other sensors in use today.

Although imaging is based predominantly on energy radiated by electromagnetic waves, this is not the only method for image generation. For example, as discussed in Chapter 1, sound reflected from objects can be used to form ultrasonic images. Other major sources of digital images are electron beams for electron microscopy and synthetic images used in graphics and visualization.

We enclose "illumination" and "scene" in quotes to emphasize the fact that they are considerably more general than the familiar situation in which a visible light source illuminates a common everyday 3-D (three-dimensional) scene. For example, the illumination may originate from a source of electromagnetic energy such as radar, infrared, or X-ray energy. But, as noted earlier, it could originate from less traditional sources, such as ultrasound or even a computer-generated illumination pattern.

Similarly, the scene elements could be familiar objects, but they can just as easily be molecules, buried rock formations, or a human brain.

We could even image a source, such as acquiring images of the sun. Depending on the nature of the source, illumination energy is reflected from, or transmitted through, objects. An example in the first category is light reflected from a planar surface; an example in the second is X-rays passing through a patient's body for the purpose of generating a diagnostic X-ray film. In some applications, the reflected or transmitted energy is focused onto a photoconverter (e.g., a phosphor screen), which converts the energy into visible light.

Electron microscopy and some applications of gamma imaging use this approach. The idea is simple: incoming energy is transformed into a voltage by the combination of input electrical power and sensor material that is responsive to the particular type of energy being detected. The output voltage waveform is the response of the sensor(s), and a digital quantity is obtained from each sensor by digitizing its response.

In this section, we look at the principal modalities for image sensing and generation. Image digitizing is discussed later in the chapter. Perhaps the most familiar sensor of this type is the photodiode, which is constructed of silicon materials and whose output voltage waveform is proportional to light. The use of a filter in front of a sensor improves selectivity. For example, a green (pass) filter in front of a light sensor favors light in the green band of the color spectrum.

As a consequence, the sensor output will be stronger for green light than for other components in the visible spectrum. In order to generate a 2-D image using a single sensor, there have to be relative displacements in both the x- and y-directions between the sensor and the area to be imaged. In one arrangement used in high-precision scanning, a film negative is mounted onto a drum whose mechanical rotation provides displacement in one dimension, and the single sensor is mounted on a lead screw that provides motion in the perpendicular direction. Since mechanical motion can be controlled with high precision, this method is an inexpensive but slow way to obtain high-resolution images.

Other similar mechanical arrangements use a flat bed, with the sensor moving in two linear directions. These types of mechanical digitizers sometimes are referred to as microdensitometers. Another example of imaging with a single sensor places a laser source coincident with the sensor.

Moving mirrors are used to control the outgoing beam in a scanning pattern and to direct the reflected laser signal onto the sensor. This arrangement also can be used to acquire images using strip and array sensors, which are discussed in the following two sections.

A geometry used much more frequently than the single sensor consists of an in-line arrangement of sensors in the form of a sensor strip. The strip provides imaging elements in one direction, and motion perpendicular to the strip provides imaging in the other direction. This is the type of arrangement used in most flat bed scanners. Sensing devices with 4000 or more in-line sensors are possible.

In-line sensors are used routinely in airborne imaging applications, in which the imaging system is mounted on an aircraft that flies at a constant altitude and speed over the geographical area to be imaged. One-dimensional imaging sensor strips that respond to various bands of the electromagnetic spectrum are mounted perpendicular to the direction of flight.

The imaging strip gives one line of an image at a time, and the motion of the strip completes the other dimension of a two-dimensional image. Lenses or other focusing schemes are used to project the area to be scanned onto the sensors. Sensor strips mounted in a ring configuration are used to obtain cross-sectional ("slice") images of 3-D objects; this is the basis for medical and industrial computerized axial tomography (CAT) imaging, as indicated in Chapter 1. It is important to note that the output of the sensors must be processed by reconstruction algorithms whose objective is to transform the sensed data into meaningful cross-sectional images.

In other words, images are not obtained directly from the sensors by motion alone; they require extensive processing. A 3-D digital volume consisting of stacked images is generated as the object is moved in a direction perpendicular to the sensor ring. The illumination sources, sensors, and types of images are different, but conceptually they are very similar to the basic imaging approach described in this section.

Numerous electromagnetic and some ultrasonic sensing devices frequently are arranged in an array format. This is also the predominant arrangement found in digital cameras. CCD sensors are used widely in digital cameras and other light sensing instruments. The response of each sensor is proportional to the integral of the light energy projected onto the surface of the sensor, a property that is used in astronomical and other applications requiring low noise images.

Noise reduction is achieved by letting the sensor integrate the input light signal over minutes or even hours (we discuss noise reduction by integration in Chapter 3). Since the sensor array is two-dimensional, its key advantage is that a complete image can be obtained by focusing the energy pattern onto the surface of the array. Motion obviously is not necessary, as is the case with the sensor arrangements discussed in the preceding two sections.

The principal manner in which array sensors are used is as follows: energy from an illumination source is reflected from a scene element (but, as mentioned at the beginning of this section, the energy also could be transmitted through the scene elements).

The first function performed by the imaging system is to collect the incoming energy and focus it onto an image plane. If the illumination is light, the front end of the imaging system is a lens, which projects the viewed scene onto the lens focal plane. The sensor array, which is coincident with the focal plane, produces outputs proportional to the integral of the light received at each sensor. Digital and analog circuitry sweep these outputs and convert them to a video signal, which is then digitized by another section of the imaging system.

The output is a digital image. Conversion of an image into digital form is the topic of the next section. An image is denoted by a two-dimensional function f(x, y). The value or amplitude of f at spatial coordinates (x, y) is a positive scalar quantity whose physical meaning is determined by the source of the image. Most of the images in which we are interested in this book are monochromatic images, whose values are said to span the gray scale, as discussed earlier. When an image is generated from a physical process, its values are proportional to energy radiated by a physical source (e.g., electromagnetic waves), so f(x, y) must be nonzero and finite.

The function f(x, y) may be characterized by two components: the amount of source illumination incident on the scene being viewed, and the amount of illumination reflected by the objects in the scene. Appropriately, these are called the illumination and reflectance components and are denoted by i(x, y) and r(x, y), respectively. The two functions combine as a product to form f(x, y) = i(x, y) r(x, y), where 0 < i(x, y) < ∞ and 0 < r(x, y) < 1 (the latter bounded by 0 for total absorption and 1 for total reflectance). The nature of i(x, y) is determined by the illumination source, and r(x, y) is determined by the characteristics of the imaged objects. These expressions also are applicable to images formed via transmission of the illumination through a medium, such as a chest X-ray. In this case, we would deal with a transmissivity instead of a reflectivity function, but the limits would be the same.
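A minimal sketch (not from the book) of the illumination-reflectance model f(x, y) = i(x, y) r(x, y); the illumination gradient and reflectance value below are made-up numbers used only to show how the product is formed.

```python
import numpy as np

rows, cols = 4, 5
y = np.arange(cols)
illumination = np.tile(100.0 + 50.0 * y, (rows, 1))   # i(x, y): brighter toward the right
reflectance = np.full((rows, cols), 0.65)             # r(x, y): a uniform surface, 0 < r < 1

f = illumination * reflectance                        # f(x, y) = i(x, y) * r(x, y)
print(f)
```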

Typical values of i(x, y) vary widely: on a clear day, the sun may produce in excess of 90,000 lm/m² of illumination on the surface of the Earth; this figure decreases to less than 10,000 lm/m² on a cloudy day, and on a clear evening a full moon yields about 0.1 lm/m². Similarly, some typical values of r(x, y) are 0.01 for black velvet, 0.65 for stainless steel, 0.80 for flat-white wall paint, 0.90 for silver-plated metal, and 0.93 for snow. The interval [L_min, L_max] spanned by the values of a monochrome image is called the gray scale; all intermediate values are shades of gray varying from black to white. The output of most sensors is a continuous voltage waveform whose amplitude and spatial behavior are related to the physical phenomenon being sensed.

To create a digital image, we need to convert the continuous sensed data into digital form. This involves two processes: sampling and quantization. An image may be continuous with respect to the x- and y-coordinates, and also in amplitude. To convert it to digital form, we have to sample the function in both coordinates and in amplitude.

Digitizing the coordinate values is called sampling. Digitizing the amplitude values is called quantization. Consider a plot of gray-level values along a line AB in a continuous image, viewed as a one-dimensional function; the random variations in such a plot are due to image noise. To sample this function, we take equally spaced samples along line AB. In a typical illustration, the location of each sample is given by a vertical tick mark, and the samples are shown as small white squares superimposed on the function.

The set of these discrete locations gives the sampled function. However, the values of the samples still span (vertically) a continuous range of gray-level values. In order to form a digital function, the gray-level values also must be converted (quantized) into discrete quantities.

Suppose the gray-level scale is divided into eight discrete levels, ranging from black to white, each marked by a vertical tick indicating the specific value assigned to that level. The continuous gray levels are quantized simply by assigning one of the eight discrete gray levels to each sample.

The assignment is made depending on the vertical proximity of a sample to a discrete level. The digital samples resulting from both sampling and quantization form a digital version of the scan line.

Starting at the top of the image and carrying out this procedure line by line produces a two-dimensional digital image. Sampling in the manner just described assumes that we have a continuous image in both coordinate directions as well as in amplitude.
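The sketch below (not from the book) mimics that line-by-line procedure for a single scan line: a made-up "continuous" gray-level profile is sampled at equally spaced points, and each sample is then quantized to the nearest of eight discrete levels.

```python
import numpy as np

def profile(t):
    # Stand-in for the continuous gray-level function along line AB.
    return 0.5 + 0.3 * np.sin(2 * np.pi * 1.5 * t) + 0.1 * np.cos(2 * np.pi * 7 * t)

num_samples = 16
t = np.linspace(0.0, 1.0, num_samples)          # sampling: equally spaced locations
samples = profile(t)                            # amplitudes are still continuous

levels = np.linspace(samples.min(), samples.max(), 8)   # eight discrete gray levels
nearest = np.argmin(np.abs(samples[:, None] - levels), axis=1)
quantized = levels[nearest]                     # quantization: nearest-level assignment

print(np.round(samples, 3))
print(np.round(quantized, 3))
```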

In practice, the method of sampling is determined by the sensor arrangement used to generate the image. When an image is generated by a single sensing element combined with mechanical motion, the output of the sensor is quantized in the manner described above; sampling, however, is accomplished by selecting the number of individual mechanical increments at which we activate the sensor to collect data. Mechanical motion can be made very exact so, in principle, there is almost no limit as to how fine we can sample an image.

When a sensing strip is used for image acquisition, the number of sensors in the strip establishes the sampling limitations in one image direction. Quantization of the sensor outputs completes the process of generating a digital image. When a sensing array is used for image acquisition, there is no motion and the number of sensors in the array establishes the limits of sampling in both directions.

Quantization of the sensor outputs is as before. Clearly, the quality of a digital image is determined to a large degree by the number of samples and discrete gray levels used in sampling and quantization. However, as shown later in this chapter, image content also plays a role in choosing these parameters. We will use two principal ways in this book to represent digital images.

Assume that an image f(x, y) is sampled so that the resulting digital image has M rows and N columns. The values of the coordinates (x, y) now become discrete quantities. For notational clarity and convenience, we shall use integer values for these discrete coordinates.

For nota- tional clarity and convenience, we shall use integer values for these discrete co- ordinates. It is important to keep in mind that the notation 0, 1 is used to signify the second sample along the first row. It does not mean that these are the actual values of physical coordinates when the image was sampled. N-1 Coordinate 0 y convention used 1 in this book to 2 represent digital images.

Each element of this matrix array is called an image element, picture element, pixel, or pel. The terms image and pixel will be used throughout the rest of our discussions to denote a digital image and its elements.

In some discussions, it is advantageous to use a more traditional matrix notation to denote a digital image and its elements: the image is written as an M x N matrix whose element in row i and column j is the sampled value f(i, j). Expressing sampling and quantization in more formal mathematical terms can also be useful at times. Let Z and R denote the set of integers and the set of real numbers, respectively.

The sampling process may be viewed as partitioning the xy plane into a grid, with the coordinates of the center of each grid being a pair of elements from the Cartesian product Z², which is the set of all ordered pairs of elements (z_i, z_j), with z_i and z_j being integers from Z. Hence, f(x, y) is a digital image if (x, y) are integers from Z² and f is a function that assigns a gray-level value (that is, a real number from the set of real numbers, R) to each distinct pair of coordinates (x, y).

If the gray levels also are integers (as usually is the case in this and subsequent chapters), Z replaces R, and a digital image then becomes a 2-D function whose coordinates and amplitude values are integers. This digitization process requires decisions about values for M, N, and for the number, L, of discrete gray levels allowed for each pixel. There are no requirements on M and N, other than that they have to be positive integers. However, due to processing, storage, and sampling hardware considerations, the number of gray levels typically is an integer power of 2, L = 2^k, where k is the number of bits used to represent each gray level. Sometimes the range of values spanned by the gray scale is called the dynamic range of an image, and we refer to images whose gray levels span a significant portion of the gray scale as having a high dynamic range.

When an appreciable number of pixels exhibit this property, the image will have high contrast. Conversely, an image with low dynamic range tends to have a dull, washed out gray look.

This is discussed in much more detail in Chapter 3. [Table: number of storage bits for various values of N and k; the number of gray levels corresponding to each value of k is shown in parentheses.]
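A minimal sketch (not from the book) tying together the matrix view of a digital image, the number of gray levels L = 2^k, and the storage requirement b = M * N * k bits; the 4 x 4 pixel values are made up for illustration.

```python
import numpy as np

k = 8                      # bits per pixel
L = 2 ** k                 # number of gray levels: 256
image = np.array([[  0,  32,  64,  96],
                  [128, 160, 192, 224],
                  [255, 224, 192, 160],
                  [128,  96,  64,  32]], dtype=np.uint8)

M, N = image.shape
storage_bits = M * N * k   # b = M * N * k
print(f"L = {L} gray levels, spanning [0, {L - 1}]")
print(f"Storage for a {M} x {N}, {k}-bit image: {storage_bits} bits ({storage_bits // 8} bytes)")
```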

Sampling is the principal factor determining the spatial resolution of an image. Basically, spatial resolution is the smallest discernible detail in an image. Suppose that we construct a chart with vertical lines of width W, with the space between the lines also having width W. A line pair consists of one such line and its adjacent space.


A widely used definition of resolution is simply the smallest number of discernible line pairs per unit distance; for example, 100 line pairs per millimeter. Gray-level resolution similarly refers to the smallest discernible change in gray level but, as noted earlier, measuring discernible changes in gray level is a highly subjective process. We have considerable discretion regarding the number of samples used to generate a digital image, but this is not true for the number of gray levels.


Due to hardware considerations, the number of gray levels is usually an integer power of 2, as mentioned in the previous section. The most common number is 8 bits, with 16 bits being used in some applications where enhancement of specific gray-level ranges is necessary. Sometimes we find systems that can digitize the gray levels of an image with 10 or 12 bits of accuracy, but these are the exception rather than the rule.

In a typical illustration, an original 1024 * 1024, 8-bit image is subsampled repeatedly to smaller sizes. The subsampling was accomplished by deleting the appropriate number of rows and columns from the original image; the number of allowable gray levels was kept at 256.

The resulting images show the dimensional proportions between various sampling densities, but their size differences make it difficult to see the effects resulting from a reduction in the number of samples; for that, the subsampled images are best compared at a common display size. Even then, at the first reduction the level of detail lost is simply too fine to be seen on the printed page at the scale at which the images are reproduced.
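A minimal sketch (not from the book) of subsampling by row-column deletion; keeping every other row and column halves the image size in each direction.

```python
import numpy as np

image = np.arange(64, dtype=np.uint8).reshape(8, 8)   # stand-in 8 x 8 "image"

half = image[::2, ::2]       # 4 x 4: delete every other row and column
quarter = image[::4, ::4]    # 2 x 2: keep every fourth row and column

print(image.shape, half.shape, quarter.shape)
```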

At lower sampling densities, a slightly more pronounced graininess throughout the image also begins to appear. A companion illustration concerns varying the number of gray levels in a digital image. It uses an X-ray projection image, obtained by fixing the X-ray source in one position so that a 2-D image is produced; projection images are used as guides to set up the parameters for a CAT scanner, including tilt, number of slices, and range. When the number of gray levels is reduced in powers of 2, the 256-, 128-, and 64-level images are visually identical for all practical purposes.

The 16-level image, however, begins to show a very fine ridgelike structure in areas of smooth gray levels. This effect, caused by the use of an insufficient number of gray levels in smooth areas of a digital image, is called false contouring, so called because the ridges resemble topographic contours in a map. False contouring generally is quite visible in images displayed using 16 or fewer uniformly spaced gray levels.
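The sketch below (not from the book) requantizes an 8-bit image to 2^k uniformly spaced levels; on a smooth ramp, small values of k leave only a handful of distinct levels, which is exactly the situation in which false contouring appears.

```python
import numpy as np

def reduce_gray_levels(image_u8, k):
    """Requantize an 8-bit image to 2**k uniformly spaced gray levels."""
    step = 256 // (2 ** k)
    return (image_u8 // step) * step

# Smooth synthetic ramp used as a stand-in for a photograph with smooth areas.
ramp = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
for k in (8, 6, 4, 2):
    print(k, np.unique(reduce_gray_levels(ramp, k)).size, "distinct levels")
```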

An early study by Huang [] attempted to quantify experimentally the ef- fects on image quality produced by varying N and k simultaneously. The exper- iment consisted of a set of subjective tests. Images similar to those shown in Fig. Sets of these three types of images were generated by varying N and k, and observers were then asked to rank them according to their subjective quality.

Results were summarized in the form of so-called isopreference curves in the Nk-plane. Each point in the Nk-plane represents an image having values of N and k equal to the coordinates of that point.

Points lying on an isopreference curve correspond to images of equal subjective quality. It was found in the course of the experiments that the isopreference curves tended to shift right and upward, but their shapes in each of the three image categories remained similar.

This is not unexpected, since a shift up and right in the curves simply means larger values for N and k, which implies better picture quality. The key point of interest in the context of the present discussion is that isopreference curves tend to become more vertical as the detail in the image increases.

[Figure: isopreference curves in the Nk-plane for the face, cameraman, and crowd images.] For images with a large amount of detail, only a few gray levels may be needed. For example, the isopreference curve corresponding to the crowd scene is nearly vertical.

This indicates that, for a fixed value of N, the perceived quality for this type of image is nearly independent of the number of gray levels used (for the range of gray levels considered in the experiment). It is also of interest to note that perceived quality in the other two image categories remained the same in some intervals in which the spatial resolution was increased but the number of gray levels actually decreased.

The most likely reason for this result is that a decrease in k tends to increase the apparent contrast of an image, a visual effect that humans often perceive as improved quality in an image.

Aliasing is another phenomenon tied closely to sampling. A function whose area under the curve is finite can be represented as a sum of sines and cosines of various frequencies, and the component with the highest frequency determines the highest frequency content of the function. Suppose that this highest frequency is finite and that the function is of unlimited duration (these functions are called band-limited functions).

Then, the Shannon sampling theorem [Bracewell] tells us that, if the function is sampled at a rate equal to or greater than twice its highest frequency, it is possible to recover completely the original function from its samples. If the function is undersampled, then a phenomenon called aliasing corrupts the sampled image.

The corruption is in the form of additional frequency components being introduced into the sampled function. These are called aliased frequencies. Note that the sampling rate in images is the number of samples taken (in both spatial directions) per unit distance.

As it turns out, except for a special case discussed in the following paragraph, it is impossible to satisfy the sampling theorem in practice. We can only work with sampled data that are finite in duration. Limiting the duration can be modeled as multiplying the unlimited function by a gating function that is valued 1 for some interval and 0 elsewhere. Unfortunately, the gating function itself has frequency components that extend to infinity.

Thus, the very act of limiting the duration of a band-limited function causes it to cease being band limited, which causes it to violate the key condition of the sampling theorem. The principal approach for reducing the aliasing effects on an image is to reduce its high-frequency components by blurring the image (we discuss blurring in detail in Chapter 4) prior to sampling. However, aliasing is always present in a sampled image.
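A minimal one-dimensional illustration of aliasing (not from the book): a 9 Hz sinusoid sampled at 12 samples/s, below its Nyquist rate of 18 samples/s, produces exactly the same sample values as a 3 Hz sinusoid, so the high frequency masquerades as a lower, aliased one.

```python
import numpy as np

fs = 12.0                                  # sampling rate, samples per second
t = np.arange(0, 1, 1 / fs)                # one second of sampling instants

high = np.sin(2 * np.pi * 9 * t)           # undersampled 9 Hz component
alias = np.sin(2 * np.pi * (9 - fs) * t)   # its alias at 9 - 12 = -3 Hz

print(np.allclose(high, alias))            # True: the samples are indistinguishable
```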

When a function is periodic, it may be sampled at a rate equal to or exceeding twice its highest frequency, and it is possible to recover the function from its samples provided that the sampling captures exactly an integer number of periods of the function. Sampling a periodic scene at too low a rate produces the well-known moiré pattern; a similar pattern can appear when images are digitized (e.g., scanned) from a printed page, which consists of periodic ink dots. We conclude the treatment of sampling and quantization with a brief discussion of zooming and shrinking digital images. This topic is related to image sampling and quantization because zooming may be viewed as oversampling, while shrinking may be viewed as undersampling.

The key difference between these two operations and sampling and quantizing an original continuous image is that zooming and shrinking are applied to a digital image. Zooming requires two steps: the creation of new pixel locations, and the assignment of gray levels to those new locations. Let us start with a simple example. Suppose that we have an image of size 500 * 500 pixels and we want to enlarge it 1.5 times, to 750 * 750 pixels. Conceptually, one way to visualize zooming is to lay an imaginary 750 * 750 grid over the original image.

Obviously, the spacing in the grid would be less than one pixel because we are fitting it over a smaller image. In order to perform gray-level assignment for any point in the overlay, we look for the closest pixel in the original image and assign its gray level to the new pixel in the grid.

When we are done with all points in the overlay grid, we simply expand it to the originally specified size to obtain the zoomed image. This method of gray-level assignment is called nearest neighbor interpolation. Pixel neighborhoods are discussed in the next section.

Pixel neighborhoods are discussed in the next section. Pixel replication, the method used to generate Figs. Pixel replication is applicable when we want to increase the size of an image an integer number of times.

For instance, to double the size of an image, we can duplicate each column. This doubles the image size in the horizontal direction. Then, we duplicate each row of the enlarged image to double the size in the vertical direction. The same procedure is used to enlarge the image by any integer number of times (triple, quadruple, and so on). Duplication is just done the required number of times to achieve the desired size. The gray-level assignment of each pixel is predetermined by the fact that new locations are exact duplicates of old locations.
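A minimal sketch (not from the book) of zooming by pixel replication: duplicate each column and then each row to double the image size.

```python
import numpy as np

image = np.array([[10, 20],
                  [30, 40]], dtype=np.uint8)

doubled = np.repeat(np.repeat(image, 2, axis=1), 2, axis=0)
print(doubled)
# [[10 10 20 20]
#  [10 10 20 20]
#  [30 30 40 40]
#  [30 30 40 40]]
```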

Although nearest neighbor interpolation is fast, it has the undesirable feature that it produces a checkerboard effect that is particularly objectionable at high factors of magnification. A slightly more sophisticated way of accomplishing gray-level assignments is bilinear interpolation using the four nearest neighbors of a point.

Image shrinking is done in a similar manner as just described for zooming. The equivalent process of pixel replication is row-column deletion. For example, to shrink an image by one-half, we delete every other row and column. To reduce possible aliasing effects, it is a good idea to blur an image slightly before shrinking it. Blurring of digital images is discussed in Chapters 3 and 4. It is possible to use more neighbors for interpolation. Using more neighbors implies fitting the points with a more complex surface, which generally gives smoother results.
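A minimal sketch (not from the book) of shrinking by row-column deletion, preceded by a slight blur to reduce aliasing; the 3 x 3 averaging filter here is an assumed stand-in for the blurring methods discussed in later chapters.

```python
import numpy as np

def blur3x3(image):
    """Average each pixel with its 8 neighbors (edges padded by replication)."""
    padded = np.pad(image.astype(float), 1, mode="edge")
    out = np.zeros(image.shape, dtype=float)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            out += padded[1 + dr:1 + dr + image.shape[0],
                          1 + dc:1 + dc + image.shape[1]]
    return out / 9.0

image = np.arange(36, dtype=float).reshape(6, 6)
shrunk = blur3x3(image)[::2, ::2]   # blur slightly, then delete every other row and column
print(shrunk.shape)                 # (3, 3)
```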

More sophisticated interpolation is an exceptionally important consideration in image generation for 3-D graphics [Watt] and in medical image processing [Lehmann et al.], and the results of bilinear interpolation are generally noticeably smoother than those of pixel replication. As mentioned before, an image is denoted by f(x, y); when referring in this section to a particular pixel, we use lowercase letters, such as p and q. A pixel p at coordinates (x, y) has four horizontal and vertical neighbors whose coordinates are (x+1, y), (x-1, y), (x, y+1), and (x, y-1). This set of pixels, called the 4-neighbors of p, is denoted by N4(p). Each pixel is a unit distance from (x, y), and some of the neighbors of p lie outside the digital image if (x, y) is on the border of the image.

The four diagonal neighbors of p have coordinates (x+1, y+1), (x+1, y-1), (x-1, y+1), and (x-1, y-1) and are denoted by ND(p). These points, together with the 4-neighbors, are called the 8-neighbors of p, denoted by N8(p). As before, some of the points in ND(p) and N8(p) fall outside the image if (x, y) is on the border of the image. To establish whether two pixels are connected, it must be determined if they are neighbors and if their gray levels satisfy a specified criterion of similarity (say, if their gray levels are equal). For instance, in a binary image with values 0 and 1, two pixels may be 4-neighbors, but they are said to be connected only if they have the same value.
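A minimal sketch (not from the book) of the neighborhoods N4(p), ND(p), and N8(p); neighbors that fall outside an M x N image are discarded, matching the border behavior described above.

```python
def n4(x, y):
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def nd(x, y):
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

def n8(x, y):
    return n4(x, y) + nd(x, y)

def inside(points, M, N):
    # Keep only neighbors that actually lie within the M x N image.
    return [(x, y) for (x, y) in points if 0 <= x < M and 0 <= y < N]

print(inside(n4(0, 0), 4, 4))   # border pixel: only two 4-neighbors survive
print(inside(n8(2, 2), 4, 4))   # interior pixel: all eight neighbors survive
```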

Let V be the set of gray-level values used to define adjacency. In a binary image, V = {1} if we are referring to adjacency of pixels with value 1. In a gray-scale image, the idea is the same, but set V typically contains more elements. For example, in the adjacency of pixels with a range of possible gray-level values 0 to 255, set V could be any subset of these 256 values.

We consider three types of adjacency. Two pixels p and q with values from V are 4-adjacent if q is in the set N4(p), and they are 8-adjacent if q is in the set N8(p). The third type, m-adjacency (mixed adjacency), holds if q is in N4(p), or if q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V. Mixed adjacency is a modification of 8-adjacency, introduced to eliminate the ambiguities that often arise when 8-adjacency is used.

For example, when 8-adjacency is used, two pixels with values from V can be linked both directly along a diagonal and indirectly through a common 4-neighbor, producing multiple paths between them; m-adjacency removes this ambiguity by suppressing the diagonal link whenever the two pixels share a 4-neighbor whose value is in V. Two image subsets S1 and S2 are adjacent if some pixel in S1 is adjacent to some pixel in S2. It is understood here and in the following definitions that adjacent means 4-, 8-, or m-adjacent. A (digital) path from a pixel p with coordinates (x0, y0) to a pixel q with coordinates (xn, yn) is a sequence of distinct pixels with coordinates (x0, y0), (x1, y1), ..., (xn, yn), where pixels (xi, yi) and (xi-1, yi-1) are adjacent for 1 <= i <= n. In this case, n is the length of the path. We can define 4-, 8-, or m-paths depending on the type of adjacency specified.

In the arrangement just described, for example, the pixels can be joined by both an 8-path and an m-path; note the absence of ambiguity in the m-path. Let S represent a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting entirely of pixels in S. For any pixel p in S, the set of pixels that are connected to it in S is called a connected component of S. If it only has one connected component, then set S is called a connected set. Let R be a subset of pixels in an image. We call R a region of the image if R is a connected set.
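A minimal sketch (not from the book) of extracting the connected component of S containing a seed pixel p, using 4-adjacency and a breadth-first search; S is represented simply as a set of (x, y) coordinates.

```python
from collections import deque

def connected_component(S, p):
    """All pixels of S reachable from p through 4-adjacent pixels of S."""
    component, frontier = {p}, deque([p])
    while frontier:
        x, y = frontier.popleft()
        for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):  # N4 neighbors
            if q in S and q not in component:
                component.add(q)
                frontier.append(q)
    return component

S = {(0, 0), (0, 1), (1, 1), (3, 3), (3, 4)}    # two separate groups of pixels
print(connected_component(S, (0, 0)))           # contains (0, 0), (0, 1), (1, 1)
print(connected_component(S, (3, 3)))           # contains (3, 3), (3, 4)
```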

The boundary (also called border or contour) of a region R is the set of pixels in the region that have one or more neighbors that are not in R. If R happens to be an entire image (which we recall is a rectangular set of pixels), then its boundary is defined as the set of pixels in the first and last rows and columns of the image.

This extra definition is required because an image has no neighbors beyond its border.