
EW 102 PDF

Sunday, November 25, 2018

We provide the frequently requested book EW 102 as a free PDF, available for download or online reading. It is also offered in ppt, txt, kindle, and zip formats.


Author: JEANNIE BOGATITUS
Language: English, Spanish, Indonesian
Country: Ecuador
Genre: Business & Career
Pages: 681
Published (Last): 03.04.2016
ISBN: 882-1-65938-462-7
ePub File Size: 19.39 MB
PDF File Size: 17.82 MB
Distribution: Free* [*Registration Required]
Downloads: 40066
Uploaded by: TAUNYA

EW 102: A Second Course in Electronic Warfare. For a listing of recent titles in the Artech House Radar Library, turn to the back of this book. David L. Adamy is president of Adamy Engineering and previously worked as a systems engineer and program manager on EW and reconnaissance programs. EW 102: A Second Course in Electronic Warfare - free download as PDF File (.pdf) or Text File (.txt), or read online for free.



Electronic warfare (EW) is any action involving the use of the electromagnetic spectrum. EW 102 serves as a continuation of the bestselling book EW 101: A First Course in Electronic Warfare.



Great continuation of the famous EW 101 book. If you liked the first one, or thought it useful, I suggest getting this and the other two in the series (4 total). Excellent book! This book is the logically titled sequel to EW 101. The style is the same.

Namely, very intricate and specialised maths and engineering are de-emphasised. What Adamy has done is reduce a problem down to the minimum physical model that conveys the essential information. More sophisticated processing and precision emitter location techniques will also allow future ESM receivers to perform location correlation and motion analysis to separate friendly signals from imitation friendly signals on hostile platforms.

The range at which a receiver can detect a radar signal is given by the one-way link equation; the range at which the radar can detect its target is given by the two-way radar range equation. These equations apply in both Figures 3. By selecting some values to plug into these formulas, and assigning the bandwidth and processing gain values which drive the sensitivities, we will be able to investigate LPI radar performance in realistic cases. The radar can detect a target at the same range that a receiver on the target can detect the radar. It also increases as the ratio between the sensitivity level of the receiver and that of the radar decreases.
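As a rough illustration of how such comparisons are made, the following Python sketch solves the standard one-way (intercept) and two-way (radar) link equations for detection range. These are the generic textbook forms, not necessarily the exact formulas used in the book, and all of the numerical values (ERP, frequency, RCS, antenna gain, sensitivities) are illustrative assumptions.

import math

C = 3.0e8  # speed of light, m/s

def db(x):
    return 10 * math.log10(x)

def esm_received_dbm(erp_dbm, freq_hz, range_m):
    # One-way (Friis) received power in dBm at an intercept receiver with a 0-dBi antenna.
    lam = C / freq_hz
    return erp_dbm + db((lam / (4 * math.pi * range_m)) ** 2)

def radar_echo_dbm(erp_dbm, freq_hz, range_m, rcs_sqm, rx_gain_dbi):
    # Two-way received power in dBm at the radar from a target with RCS sigma (m^2).
    lam = C / freq_hz
    g_rx = 10 ** (rx_gain_dbi / 10.0)
    return erp_dbm + db(rcs_sqm * g_rx * lam ** 2 / ((4 * math.pi) ** 3 * range_m ** 4))

def max_range_m(received_dbm_fn, sens_dbm, lo=1.0, hi=1.0e7):
    # Bisect (geometrically) for the range at which received power falls to the sensitivity level.
    for _ in range(100):
        mid = math.sqrt(lo * hi)
        if received_dbm_fn(mid) >= sens_dbm:
            lo = mid
        else:
            hi = mid
    return lo

# Illustrative values only (not the book's worked example):
erp, f, rcs = 90.0, 10.0e9, 10.0   # 90-dBm ERP, 10-GHz radar, 10-m^2 target
r_radar = max_range_m(lambda r: radar_echo_dbm(erp, f, r, rcs, 35.0), -105.0)
r_esm = max_range_m(lambda r: esm_received_dbm(erp, f, r), -65.0)
print(f"radar detects target at ~{r_radar / 1e3:.1f} km; "
      f"ESM detects radar at ~{r_esm / 1e3:.1f} km")

Changing the assumed sensitivities shows the trade the text describes: as either receiver's sensitivity level drops, its detection range grows.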

To avoid confusion on the sensitivity issue, remember that the sensitivity level is the lowest signal that a receiver can accept and still do its job. Thus, as the sensitivity improves, the sensitivity level decreases. The sensitivity numbers in both of the range equations earlier are large negative numbers (sensitivity levels); therefore, as each sensitivity improves, the corresponding detection range increases.

The second is that several factors impact the sensitivity. Thus, the radar range can be expressed as a function of average power and time on target.

Another constraint on the radar is that its ability to resolve the range to the target is determined by the pulse width. Range resolution is commonly defined as the speed of light multiplied by the pulse width and divided by 2. Modulations on the signal allow better range resolution for any given signal duration. This modulation can be frequency modulation (chirp) or phase reversals (binary phase shift keying), as explained in Section 3. Since the detecting receiver depends on the peak power of the radar signal to detect the radar, the radar can gain an advantage in detection range by using a lower power, longer duration signal along with some modulation which will allow it to achieve adequate range resolution (see Figure 3).
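A minimal sketch of that trade, assuming the standard relations (resolution = c * tau / 2 for a plain pulse, and roughly c / 2B after pulse compression with modulation bandwidth B); the pulse width and chirp bandwidth are illustrative values, not the book's:

C = 3.0e8  # speed of light, m/s

def range_resolution_m(pulse_width_s):
    # Unmodulated pulse: resolution = c * tau / 2.
    return C * pulse_width_s / 2.0

def compressed_resolution_m(chirp_bandwidth_hz):
    # With intrapulse modulation (e.g., chirp), the effective pulse width after
    # compression is about 1/bandwidth, so resolution = c / (2 * B).
    return C / (2.0 * chirp_bandwidth_hz)

# A 10-microsecond pulse alone resolves only to 1.5 km ...
print(range_resolution_m(10e-6))        # 1500.0 m
# ... but a 10-MHz chirp on that same long pulse resolves to 15 m,
# while the peak power seen by the intercept receiver stays low.
print(compressed_resolution_m(10e6))    # 15.0 m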

Sensitivity is set by kTB, the noise figure, and the required signal-to-noise ratio in decibels. In radar analysis problems, the required signal-to-noise ratio is usually set to 13 dB, and kTB is usually taken as -114 dBm for a 1-MHz bandwidth. However, there is another factor that is useful to consider in the context of LPI signals; that is processing gain.

Processing gain has the effect of narrowing the effective bandwidth of the receiver by taking advantage of some aspect of the signal modulation.
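The following sketch ties together the sensitivity terms mentioned above (kTB, noise figure, required signal-to-noise ratio) and the effect of processing gain. The kTB and 13-dB values follow the text; the noise figure, bandwidths, and processing gain are assumed for illustration.

import math

def ktb_dbm(bandwidth_hz):
    # Thermal noise power kTB at 290 K: -174 dBm/Hz + 10*log10(B);
    # this gives -114 dBm for a 1-MHz bandwidth.
    return -174.0 + 10 * math.log10(bandwidth_hz)

def sensitivity_dbm(bandwidth_hz, noise_figure_db, required_snr_db, processing_gain_db=0.0):
    # Sensitivity level = kTB + noise figure + required SNR - processing gain.
    # Processing gain effectively narrows the bandwidth the detector must work against.
    return ktb_dbm(bandwidth_hz) + noise_figure_db + required_snr_db - processing_gain_db

# Radar receiver matched to its own 1-MHz waveform, 13-dB required SNR, 5-dB NF (illustrative):
print(sensitivity_dbm(1e6, 5.0, 13.0))                             # -96 dBm
# Intercept receiver forced to use a 100-MHz bandwidth with no processing gain:
print(sensitivity_dbm(100e6, 5.0, 13.0))                           # -76 dBm
# Same radar receiver with 20 dB of coherent processing gain:
print(sensitivity_dbm(1e6, 5.0, 13.0, processing_gain_db=20.0))    # -116 dBm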

A radar achieves bandwidth advantage over an intercept receiver because it can match its receiver and its processing to its own signal—while the intercept receiver must accept a wide range of signals and must typically make detailed parametric measurements to identify the type of signal it is receiving.

For example, a pulse radar need only determine the round-trip pulse travel time—and can integrate several pulses to determine that time. The intercept receiver, on the other hand, must determine the pulse width.

This requires a pulse with definable leading and trailing edges—which, in turn, requires a bandwidth of about 2. When there is randomness in the signal modulation, this becomes even more pronounced. The most extreme example of this effect is the use of true noise to modulate a radar signal.

The random signal radar (RSR) uses various techniques to correlate the return signal with a delayed sample of the transmitted signal, as shown in Figure 3. The amount of delay required to peak the correlation determines the range to the target. Since the transmitted signal is completely random and the intercepting receiver has no way to correlate to the transmitted signal, it can only determine that the radar is present through energy detection techniques rather than by detecting modulation characteristics.
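A toy numpy sketch of the correlation idea: the radar correlates the return with delayed copies of its own noise waveform and reads range from the delay that peaks the correlation. The sample rate, delay, and signal levels are arbitrary illustrations, not values from the book.

import numpy as np

rng = np.random.default_rng(0)

fs = 50e6                          # sample rate, Hz (illustrative)
n = 4096
tx = rng.standard_normal(n)        # transmitted waveform: true noise

true_delay = 600                   # round-trip delay in samples (unknown to the radar)
rx = np.zeros(n)
rx[true_delay:] = 0.2 * tx[:n - true_delay]   # weak echo of the transmitted noise
rx += rng.standard_normal(n)                  # receiver noise

# The radar slides a delayed replica of its own transmission against the return
# and looks for the delay that peaks the correlation.
corr = np.correlate(rx, tx, mode="full")[n - 1:]   # keep non-negative lags only
est_delay = int(np.argmax(corr))
range_m = 3.0e8 * (est_delay / fs) / 2.0           # two-way delay converted to range
print(est_delay, f"~{range_m / 1e3:.2f} km")

# An intercept receiver, with no copy of tx, sees only noise-like energy in rx;
# it cannot exploit this correlation and must fall back on energy detection.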

This is a much less efficient process than that available to the radar receiver. With a bandwidth greater than the inverse of the pulse width, the pulse parameters can be measured.

Figure 3 shows the controlled delay line: the RSR determines the range to the target by correlating return signals with a delayed sample of the transmitted signal. There are also several such systems under current development, and random signal radars are described in the technical literature. Again, engagement parameters must be specified.

It is easy to fall into the trap of considering only the radio frequency part of the electromagnetic spectrum, but there is a significant amount of EW effort in the IR, visible light, and ultraviolet parts of the electromagnetic spectrum. In this chapter, we will deal with the general nature of these parts of the spectrum, the systems that operate in this range, and the nature of the countermeasures against those systems.

Although we typically use frequency to define the RF portions of the spectrum, it is more common to use wavelength at the higher frequencies. Note that wavelength and frequency are related by the speed of light in the formula λ = c/f. Frequencies below GHz i.e. This includes the rear aspect view of a jet engine looking up into the engine. Objects in the normal range of temperatures i.e. The IR emissivity of real-world materials is defined in terms of a percentage of blackbody radiation at a given temperature.

Typical examples of emissivity are: The blackbody radiation versus wavelength is a function of the temperature of the emitter. As shown in Figure 4.

The total energy varies as the fourth power of temperature.


Also, as shown in Figure 4, the peaks of the curves move to shorter wavelengths as the temperature increases. An interesting point is that the surface of the Sun is about 5,800K, which causes its radiation to peak in the visible light spectrum (convenient for those of us with eyes that operate in that range).
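The two behaviors just described (total energy growing as the fourth power of temperature, and the peak moving to shorter wavelengths) can be checked with the Stefan-Boltzmann and Wien displacement laws; the temperatures below are illustrative.

SIGMA = 5.670e-8       # Stefan-Boltzmann constant, W/(m^2 K^4)
WIEN_B = 2.898e-3      # Wien displacement constant, m*K

def total_emittance_w_m2(temp_k, emissivity=1.0):
    # Stefan-Boltzmann law: radiated power per unit area scales as T^4.
    return emissivity * SIGMA * temp_k ** 4

def peak_wavelength_um(temp_k):
    # Wien's displacement law: the blackbody peak moves to shorter wavelengths as T rises.
    return WIEN_B / temp_k * 1e6

for t in (300, 600, 1000, 5800):   # soil/skin, tailpipe, plume, Sun (approximate)
    print(t, f"{total_emittance_w_m2(t):.3g} W/m^2", f"peak {peak_wavelength_um(t):.2f} um")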

Note that there are absorption lines from various atmospheric gasses, but there are major transmission windows in the near-, mid-, and far-infrared ranges. In IR transmission, the spreading loss versus range is calculated by projecting the receiving aperture from its range onto a unit sphere around the transmitter, as shown in Figure 4.

The spreading loss is then the ratio of the amount of the surface of the unit sphere covered by the image of the receiving aperture to the whole surface area of the sphere. This is actually the same way we calculate spreading loss for RF signals. However, by assuming isotropic antennas, we get range and frequency terms in the RF equation.
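A short sketch of this geometric view of spreading loss, showing that projecting the receiving aperture onto a sphere of radius R gives the same number as the RF free-space calculation when the "aperture" is the isotropic effective area lambda^2/(4*pi). The aperture size, range, and frequency below are assumed for illustration.

import math

def spreading_loss_fraction(aperture_area_m2, range_m):
    # Fraction of the transmitter's power sphere intercepted by the receiving aperture:
    # A_R / (4*pi*R^2). The geometry is the same for IR and RF.
    return aperture_area_m2 / (4.0 * math.pi * range_m ** 2)

def friis_fraction(wavelength_m, range_m):
    # The RF "free-space loss" with isotropic antennas is the same calculation, using
    # the isotropic effective area lambda^2 / (4*pi) as the receiving aperture.
    a_iso = wavelength_m ** 2 / (4.0 * math.pi)
    return spreading_loss_fraction(a_iso, range_m)

# A 10-cm diameter IR aperture at 5 km (illustrative numbers):
a = math.pi * 0.05 ** 2
print(10 * math.log10(spreading_loss_fraction(a, 5e3)), "dB")
# Same geometry expressed the RF way at 10 GHz (wavelength = 3 cm):
print(10 * math.log10(friis_fraction(0.03, 5e3)), "dB")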

Examples of these systems and threats are: There are, of course, countermeasures to all of these systems. Sensors can be blinded temporarily or permanently and IR-guided missiles can be defeated by flares or IR jammers. Some of these EO devices operate in the infrared spectral range.

EO systems and applications and their countermeasures discussed in this chapter include: They are primarily air-to-air missiles and surface-to-air missiles, and include small, shoulder-fired weapons. An IR missile detects the IR signature of an aircraft against a cold sky and homes on energy in one of the three IR bands.

Early IR missiles required high-temperature targets, so they needed to see the hot internal parts of jet engines to achieve good performance. Therefore, they were usually restricted to attacks from the rear aspect of jet aircraft. More recent missiles can operate effectively against cooler targets (the plume, the tailpipe, heated leading edges of wings, or the IR image of the aircraft itself). This allows them to attack all types of aircraft from all aspect angles.

This type of missile suffered from considerable solar interference and severely restricted air-to-air tactics. More modern seekers use sensors of lead selenide (PbSe), mercury cadmium telluride (HgCdTe), and similar materials that operate in the mid- and far-IR bands.

While these seekers allow all aspect attack, they require that the sensors be cooled to about 77K with expanding nitrogen. The nose of the missile is an IR dome. This is a spherical protective covering for the seeker optics, made from a material that has good transmission of IR energy.

A seeker senses the angular location of the IR source and hands off error signals to the guidance control group, which steers the missile toward the target by control commands to the rollerons. There are two mirrors (a primary and a secondary reflector) that are symmetrical around the optical axis.

They focus energy through a reticle onto an IR sensing cell. Not shown in the diagram is the filter that limits the spectrum of signals passed through the reticle—and the sensor cooling if required. A simple, spinning reticle pattern is shown in Figure 4. The top half of this reticle is divided between very low and very high transmission segments. An IR target is shown in one of the high transmission segments. This reduces the dynamic range required of the IR sensor.

As the reticle rotates, the IR energy from the target onto the IR sensor will vary in the partial square wave pattern shown in Figure 4. The square wave portion of the waveform starts as the upper half of the reticle starts to pass the target. Since the sensor knows the angular position of the reticle, it can sense the direction to the target from the timing of the square wave portion (Figure 4).
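A simplified simulation of that timing idea is sketched below: a target at a fixed bearing is chopped by the segmented half of a spinning reticle, and the reticle angle at the start of the resulting square-wave burst recovers the bearing. The reticle geometry (segment count, spin rate, a uniform 50-percent-transmission half) is an assumed toy model, not the specific reticle in the figure.

import numpy as np

SPIN_HZ = 100.0      # reticle spin rate, rotations/s (assumed)
N_SEG = 10           # alternating high/low segments on the chopping half (assumed)
FS = 100_000.0       # simulation sample rate, Hz

def sensor_output(target_bearing_rad, n_samples=2000):
    # Detector output for one point target at a fixed bearing about the reticle axis.
    # Chopping half: alternating 0/1 segments; other half: uniform 0.5 transmission.
    t = np.arange(n_samples) / FS
    reticle_angle = 2 * np.pi * SPIN_HZ * t
    rel = (reticle_angle - target_bearing_rad) % (2 * np.pi)   # reticle rotation past the target
    chopping = rel < np.pi
    segment = np.floor(rel / (np.pi / N_SEG)).astype(int)
    out = np.where(chopping, segment % 2, 0.5).astype(float)
    return t, out

def estimate_bearing(t, out):
    # Bearing = reticle angle at the first transition away from the steady 0.5 level,
    # i.e., the moment the segmented half begins to sweep across the target.
    starts = np.flatnonzero((out[:-1] == 0.5) & (out[1:] != 0.5)) + 1
    t0 = t[starts[0]]
    return (2 * np.pi * SPIN_HZ * t0) % (2 * np.pi)

t, out = sensor_output(np.deg2rad(130.0))
print(np.rad2deg(estimate_bearing(t, out)))   # approximately 130 degrees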

This allows the guidance group to make corrections to steer the missile toward the target. When the target is near the center, the high-transmission segment does not admit the whole target (Figure 4).

As it moves farther from the center, more of the target is passed. Once the whole target is passed, the peak energy level to the IR sensor does not increase further.

This means that the sensor only provides proportional correction inputs when the target is quite near the center of the reticle. It also means that the seeker has no way to discriminate against a high-energy false target near the outer edge of the reticle. To generate steering information, the energy entering the seeker is nutated to move it around the optical axis. If the target is on the optical axis, it will cause a constant amplitude square wave of energy to reach the sensor.

However, if the target is off center, its image will move in the offset circle shown in the figure. This causes the irregular square wave form shown at the bottom of the figure. The control group can then determine that the missile must steer in the direction away from the narrow pulses. Figures 4. Since there are different numbers of segments in each of several rings, the number of pulses seen by the sensor changes as a function of the angular distance from the optical axis to the target.

This supports proportional steering. To avoid extremely high g forces on a missile as it reaches its target, missiles use proportional navigation, as shown in Figure 4. Proportional navigation steers the missile so that the line of sight to the target does not rotate; if neither the aircraft nor the missile is accelerating, this puts them on a collision course. If either is accelerating (e.g., maneuvering), the missile must keep adjusting its course to null the rotation of the line of sight. The infrared line scanner (IRLS) is mounted on a manned aircraft or unmanned aerial vehicle (UAV), which flies a fairly low-level path over an area of interest. The IRLS makes a two-dimensional image by scanning an IR detector over an angular increment across the ground track of the vehicle while the second dimension is provided by the movement of the platform along its ground track.

This approach to mine detection is practical because buried mines will gain or lose heat at a different rate than the surrounding soil or sand. Thus, the mines will be at a different temperature during times of temperature change, for example, right after sunset. However, the resolution of the IR sensor must be adequate to differentiate the temperature of the mine from that of the soil, and it must have adequate angular resolution to differentiate mines from other buried objects, for example, rocks.

This high resolution will be required because the soil can have a relatively wide temperature range, and post-mission analysis will probably be required to find the relatively narrow temperature difference between the mine and the soil anywhere in this range. The altitude required is: It is traveling at knots, 1, feet above the ground. (Figure: ground resolution distance versus sensor aperture angle.) At a 0.

The vehicle can fly at any speed, but the sweep rate must be fast enough to make one cross-track sweep every 3 inches along the flight path. At our chosen speed, knots, the vehicle travels over the ground at feet per second: One sweep per 3 inches requires 4 sweeps per foot or sweeps per second at knots.

The sampling of the IR sensor must also be performed for every 3 inches of movement of the sensor over the ground in the cross-track sweep. The width of the cross-track ground coverage (the swath width) sets the number of samples per sweep; at 8 bits per sample, this produces the overall data rate. The angular velocity is stated in radians per second.
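The worked numbers in this example did not survive extraction, so the sketch below re-creates the same style of calculation with assumed values: 100 knots, 1,000 feet altitude, and a plus/minus 60-degree cross-track scan are assumptions; the 3-inch resolution and 8 bits per sample come from the text; 1 knot is taken as about 1.688 ft/s.

import math

# Assumed scenario values (the book's worked numbers were lost in extraction):
SPEED_KT = 100.0             # platform ground speed, knots (assumption)
ALTITUDE_FT = 1000.0         # height above ground, feet (assumption)
SWATH_HALF_ANGLE_DEG = 60.0  # cross-track scan of +/- 60 degrees (assumption)
RESOLUTION_IN = 3.0          # required ground resolution, inches (from the text)
BITS_PER_SAMPLE = 8          # from the text

FT_PER_KT = 1.688            # 1 knot is approximately 1.688 ft/s

ground_speed_fps = SPEED_KT * FT_PER_KT
sweeps_per_s = ground_speed_fps / (RESOLUTION_IN / 12.0)    # one sweep per 3 in of travel

# Swath width on the ground (flat-earth geometry):
swath_ft = 2.0 * ALTITUDE_FT * math.tan(math.radians(SWATH_HALF_ANGLE_DEG))
samples_per_sweep = swath_ft / (RESOLUTION_IN / 12.0)       # one sample per 3 in cross-track

data_rate_bps = sweeps_per_s * samples_per_sweep * BITS_PER_SAMPLE

# Angular rate seen from a point directly below the aircraft: velocity / altitude (rad/s)
angular_rate = ground_speed_fps / ALTITUDE_FT

print(f"{sweeps_per_s:.0f} sweeps/s, {samples_per_sweep:.0f} samples/sweep, "
      f"{data_rate_bps / 1e6:.1f} Mbit/s, {angular_rate:.2f} rad/s")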

To understand this choice of units, consider observing the aircraft from a fixed point on the ground below it. Remember that a radian is the angle subtended at the center of a circle by an arc one radius long. Thus the subtended angle per unit time, converted to radians, would be equal to the velocity divided by the radius (i.e., the altitude).

As you can see from this typical example, the detection of buried mines requires an airborne platform that flies low and slow, and the collection and analysis of a great deal of data. Detection of larger objects (for example, tanks in underground bunkers) would require less angular resolution. This will allow operation at higher altitude (Figure 4). However, a high data rate should always be expected for IRLS applications because of the fine temperature resolution and large temperature range required. If the vehicle is unmanned, or for any other reason the data is linked to a ground station, a wideband data link will be required.

This can be in the visible light wavelength range (television) or it can be in a nonvisible wavelength range. Our concern here is imagery at infrared wavelengths. For all electronically implemented imagery, the displayed picture is divided into pixels. A pixel is a spot on the screen; there must be enough pixels to create the picture to the required quality.

The system captures and stores the brightness or the brightness and color to be displayed in each pixel—then displays the appropriate values at each pixel location on the screen.

The screen display can be generated with a raster scan as shown in Figure 4. If an imagery system is mapping the ground, the relationship between the pixels and the resolvable distance on the ground would be as shown in Figure 4. If the system is looking level or up, the same relationship applies, but the resolvable distance is a function of the range to the individual objects being observed.

The spacing of pixels on each line is approximately the same as the line spacing. Each element provides 1 pixel. As noted earlier, everything emits infrared energy. By differentiating the temperatures of objects and backgrounds, the FLIR allows an operator to detect and identify most common objects. The display is monochromatic, with the brightness level of each pixel indicating the temperature at that position in the observed field.

Also, because they differentiate between objects by temperature or IR emissivity, they can often see objects of military significance that are hidden from visible-light TV by foliage or camouflage.

With serial processing, the FLIR uses mirrors to scan the orientation of a single IR sensor across a two-dimensional field of view in a raster scan. The whole scene is presented on a CRT. Pixels are defined by the number of samples in a scanned line and the spacing between parallel lines.

With parallel processing, a row of detectors is scanned through an angular segment to provide two-dimensional area coverage.


Each element of the sensor array takes a series of measurements, so the pixels are defined by the number of elements and the number of samples in a scanned line by each sensor. A two-dimensional array (Figure 4) captures all of the covered area at once, with each pixel being captured by an array element.

The data rate produced by a FLIR is the product of the number of pixels in a frame (the two-dimensional angular area covered), the number of frames per second, and the number of bits of resolution per sample. Note that one sample is made per pixel. In this approach, the area near the target is observed by a two-dimensional IR array operating in the far-IR region.
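The FLIR data-rate product just described is simple enough to capture in one function; the array size, frame rate, and sample depth below are illustrative, not values from the book.

def flir_data_rate_bps(pixels_per_frame, frames_per_s, bits_per_pixel):
    # Data rate = pixels per frame x frame rate x bits per sample (one sample per pixel).
    return pixels_per_frame * frames_per_s * bits_per_pixel

# Illustrative: a 640 x 480 array at 30 frames/s with 12-bit samples
print(flir_data_rate_bps(640 * 480, 30, 12) / 1e6, "Mbit/s")   # about 110.6 Mbit/s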

You will recall that objects at moderate temperatures emit in this region, so the array can observe the contrast between the warmer aircraft and the colder sky. A processor will observe the shape of a number of pixels from the array that show the proper contrast see Figure 4. It will then determine that this pixel distribution qualifies as the target and steer the missile in the corresponding direction. Only a few pixels are required to determine the general size and shape of a target and to differentiate it from a much smaller decoy as opposed to the large number of pixels required to give a high-quality picture.

An IRST does not use imagery, but rather looks for a warmer spot target against a cold background. It sweeps a large angular area with an IR sensor array as shown in Figure 4. It detects IR targets while rapidly covering its angular range. Then, it develops the necessary data to hand off target tracking information to sensors. The reasoning behind the decision to start the operation on that particular date no doubt included complex political and military considerations.


However, when you consider that the dark of the Moon occurred on this date, it is obvious that one important factor was the ability of the coalition forces to operate in total darkness while the Iraqis could only bring their full military capability to bear during daylight. The coalition forces had significant numbers of night-vision devices deployed through their forces and had adequately trained them in their tactical application. These night-vision devices were the product of three generations of development and were completely passive, amplifying very low levels of available light—even on moonless nights with cloud cover.

These devices are different from FLIR devices in that FLIRs receive infrared energy emitted by objects—while night-vision devices amplify available light reflected from objects. The FLIR can operate in total darkness, while night-vision devices require some though very little available light. Light amplification devices have the advantage of being less expensive than FLIRs, and thus available for much wider distribution.

Also, since they operate in the optical region, they present the clues necessary for maneuvering of aircraft and ground vehicles and for the movement of troops over terrain. However, since night-vision devices provide no peripheral vision, they require significant training for effective tactical use. The procedure was to maneuver as close to the enemy as possible through the dark.

Troops moved single file, each soldier following the luminous strips stapled to the back of the hat of the individual ahead of him in line.

The night vision of the troops was carefully guarded by using only red lights to read maps. Troops were trained to keep their eyes moving and to use their peripheral vision, which is more sensitive to light. If you stared directly at an object in the dark (as required to fire a rifle), it would fade out of your vision. Ideally, the troops could sneak close enough to the enemy to move into a final assault line (troops abreast, facing the enemy) before being detected.

Then artillery-fired flares would light up the battlefield allowing daylight tactics to be used and completely destroying the night vision of everyone involved. With modern night-vision devices, troops can move fast and fire with complete accuracy in complete darkness.

Troops were warned to turn the scope on before the spotlight so they could see any spot lights that were on—and shoot the unlucky enemy who had his light on.


You can see a major disadvantage to those devices—which were also used on tactical vehicles. During the Vietnam War, first generation light amplification devices called starlight scopes were used. Second generation technology in the s included helmet-mounted goggles for helicopter pilots along with sights for rifles and crew-served weapons.

These devices provided increased range and rapid recovery from light saturation. However they had short tube life and were subject to saturation by cockpit 96 EW A Second Course in Electronic Warfare lighting. Third generation technology provides increased sensitivity, reduced size, improved tube life, reduced blooming, and visibility extension into the nearinfrared region. The IR capability allows night vision goggles to see 1.

Light falling on specially coated electrode screens (dynodes) causes electron emissions, which are accelerated in a vacuum by high voltage and kept focused by magnetic fields.

These accelerated electrons are converted back to optical images by impact on a phosphor screen. Three stages were required to achieve the required amplification. They use a combination of vacuum devices and microchannel plates to achieved the necessary gain.

Microchannel plates are pieces of glass with a very large number of lead-lined holes. Electrons impact the walls of the tubes and dislodge secondary electrons. These secondary electrons are accelerated and focused on a phosphor screen for viewing. Third generation devices achieve all amplification in microchannel plates, as shown in Figure 4.

The tubes are angled to assure impact of the primary electrons with the lead lining of the tubes.

Laser Target Designation

Laser designators and range finders have long been used against fixed and mobile ground targets, and are now also significant threats to helicopters and fixed-wing aircraft.

Laser energy reflected from the designated target scintillates; a missile with a laser receiver can home on this scintillation, allowing extremely accurate target engagement. Normally, the laser illuminator (called a designator) is coded, improving the ability of the receiver in the missile to discriminate against sun glint and other interfering sources of energy.


The missile must have some sort of guidance scheme (multiple sensors, a moving reticle, and so forth) to provide angular error signals for guidance to the target. Its receiver is designed to accept only laser energy at the wavelength of the designator. Its processing circuitry qualifies on the proper coding and converts the angular error signals into guidance commands.

In this case, one aircraft or UAV places the designator on the target and tracks the target if it is moving. A second aircraft fires a missile which homes on the scintillation of the designator from the target. The missiles are fire-and-forget weapons, allowing the attacking aircraft to engage multiple targets and leave the area as soon as the missiles are launched.

The designating aircraft must remain within line of sight of the target to keep the designator on the target until missile impact. The attacking platform places a laser designator on the target and fires its own laser homing missile. However, in some systems, there is a laser on the missile itself.

This allows the attack to continue even if the target maneuvers to avoid line of sight to the attacking platform. Note that a laser range finder on the attacking platform can determine an extremely accurate range to the target, providing a very tight firing solution—which complicates the task of any countermeasures approach.

This involves the use of laser detection systems, as shown in Figure 4. These systems typically have four or six sensors distributed around the platform. Four sensors are normally adequate for ground vehicles. Aircraft typically have six sensors, providing roughly spherical (4π steradian) coverage. Each sensor has a lens which focuses incoming laser signals onto a two-dimensional array which locates the direction to the laser to within a single pixel (one pixel per array element).
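A hedged sketch of the pixel-to-direction idea: with an assumed array size and sensor field of view, each illuminated pixel maps to an angular cell of width FOV divided by the array size. The 32 x 32 array and 90-degree field of view are assumptions for illustration only.

ARRAY_SIZE = 32            # 32 x 32 detector array (assumption)
FOV_DEG = 90.0             # each sensor covers a 90-degree square field of view (assumption)

def pixel_to_direction(row, col):
    # Map an illuminated pixel to approximate azimuth/elevation offsets (degrees) from
    # the sensor boresight. Resolution is one pixel: FOV / ARRAY_SIZE.
    step = FOV_DEG / ARRAY_SIZE
    az = (col + 0.5) * step - FOV_DEG / 2.0
    el = (row + 0.5) * step - FOV_DEG / 2.0
    return az, el

print(pixel_to_direction(16, 16))   # near boresight
print(pixel_to_direction(0, 31))    # corner of the field of view
print("angular resolution per pixel:", FOV_DEG / ARRAY_SIZE, "deg")

Combining the outputs of several such sensors interferometrically, as the text notes, refines this single-pixel accuracy.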

Greater location accuracy can be accomplished if multiple sensor outputs are used as elements of an interferometer. The laser-warning receiver processor determines the type of laser received and the direction of arrival.

It either passes this information to a radar-warning receiver which provides an integrated threat display or drives its own unique threat display. The laser-warning receiver can also support countermeasures against the laser designator or the associated weapons. If a low-power laser is scanned through the angular space containing the missile as in Figure 4.

By receiving and performing direction of arrival analysis on the reflected signal, the countermeasures system determines the angular location of the missile. A plume detector is another way that the countermeasure system can locate the missile. Active countermeasures are shown in Figure 4. Since the missile location can be determined by either detecting its plume or detecting the laser reflection from its homing receiver, a firing solution for a counter-missile missile can be generated.

The designator must remain within line of sight of the target, so an accurate laser-warning receiver will give a firing solution for a missile to attack the designating platform ground or air.

The missile can also be attacked electronically by use of a high-power laser, which can either dazzle the missile receiver (saturating its sensor) or actually damage the sensor.

If a lower power laser has a deceptive jamming signal which causes the missile receiver to pass incorrect error signals to the missile guidance, the missile can be caused to miss its target. Passive countermeasures obscure the target, making it difficult for the targeting platform to track the target and keep the laser properly aimed.

Obscuration also reduces the power of the scintillation from the laser if it is on the target. Finally, obscuration reduces the propagation of the laser signal to the receiver in the missile, denying it the necessary error signals for guidance. Smoke formulated to obscure IR, visible light, or ultraviolet signals is an important countermeasure. Water-dispensing systems on ground targets can also generate dense fog around the protected platform—which effectively obscures a wide range of signal frequencies.

The flare breaks the lock of the missile onto the aircraft and causes the missile to home on the flare. Although the flare is much smaller than the aircraft it is protecting, it is significantly hotter. Therefore it radiates significantly more IR energy. Since the flare has more energy, the energy centroid is closer to the flare. As the flare separates from the aircraft, the energy centroid moves away from the aircraft; thus, the missile is lured away from the target. Once the aircraft leaves the tracking field of view of the missile, the missile homes on just the flare.

The blackbody curves in Figure 4 show that the relative energy at two wavelengths is a strong function of temperature. By measuring and comparing energy at two wavelengths (i.e., two-color tracking), a seeker can estimate the temperature of the object it is tracking. Two-color tracking allows it to discriminate against the hotter flare and continue tracking the target. This creates a significant increase in the complexity required of countermeasures. To fool a two-color tracker, it is necessary to either use a large expendable object at the correct temperature or to fool the missile sensor in some other way to cause it to receive the proper energy ratio at the two measured wavelengths.
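The two-color idea can be illustrated with Planck's law: the ratio of blackbody radiance at two wavelengths is a strong function of temperature, so a seeker comparing two bands can tell a roughly 2,000K flare from a much cooler airframe or plume. The band choices and temperatures below are illustrative assumptions, not the bands of any particular seeker.

import math

H = 6.626e-34   # Planck constant, J*s
C = 3.0e8       # speed of light, m/s
K = 1.381e-23   # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    # Spectral radiance of a blackbody (W / m^2 / sr / m).
    a = 2.0 * H * C ** 2 / wavelength_m ** 5
    return a / (math.exp(H * C / (wavelength_m * K * temp_k)) - 1.0)

def two_color_ratio(temp_k, band1_um=2.0, band2_um=4.3):
    # Ratio of radiance at two sample wavelengths; it varies strongly with temperature,
    # which is what lets a two-color tracker reject a hot flare.
    return planck_radiance(band1_um * 1e-6, temp_k) / planck_radiance(band2_um * 1e-6, temp_k)

for label, t in (("aircraft skin", 350.0), ("engine plume", 700.0), ("flare", 2000.0)):
    print(f"{label:13s} T={t:6.0f} K  ratio(2um/4.3um) = {two_color_ratio(t):.3g}")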

Flares have the disadvantage that they are expendable, and therefore limited in number. Also, since they are very hot, they represent a significant safety hazard, which prevents their use on civilian aircraft. IR jammers provide IR signals similar to those produced by the IR energy of a target passing through a reticle, as described in Section 4. Optimum use of an IR jammer requires information about the spin-and-chop frequencies of the missile seeker that is being jammed. This can be measured by scanning the missile tracker with a laser.

The IR detector surface is reflective, and the lens gives the laser a double advantage, amplifying the return both on the way in and on the way back out. As the reticle moves over the sensor (see Section 4), the reflected laser signal is chopped with the seeker's tracking modulation. Once the missile tracking signal is determined, the IR jammer can create an erroneous pulse pattern, as in Figure 4. Without direct information about the tracking in the specific attacking missile, generic false tracking signals can also be generated and transmitted. The IR jamming signal comprises pulses of IR energy which can be generated in several ways.

One way is to flash a xenon lamp or an arc lamp. Another way, as shown in Figure 4, is to use a heated source (a hot brick); mechanical shutters expose the hot brick to generate the required jamming signal.


Both of these techniques produce jamming signals over a wide angular area for wide aspect protection. A third way to generate the jamming signal is by use of an IR laser. The laser is easy to modulate and can produce very high level jamming signals, but is intrinsically narrow in beamwidth.

Therefore, it must be accurately pointed at the tracker it is jamming. This requires beam steering controlled by an IR sensor with high angular resolution. The sensor typically detects the IR signature of the platform carrying the tracker (e.g., an attacking missile). Because of its high signal levels at the receiver, the IR laser jammer can protect large platforms. Note that if a jammer located on the protected target fails to deceive the missile tracker, it may act as a beacon to improve the missile tracking accuracy.

Decoys can be fixed or can maneuver in ways to optimize deception of weapon trackers. Under some circumstances, they can be larger than flares to provide more energy at lower temperatures.

Decoys radiating the same order of magnitude of IR energy as a fixed or mobile ground asset can saturate enemy targeting capabilities. IR chaff can burn or smolder to create the proper IR signature or can oxidize rapidly to raise its temperature to the proper level.

Since a chaff cloud occupies a large geometric area, it may be more effective against some kinds of tracking.

Like RF chaff, the IR chaff can be used either to break a missile lock or to increase the background temperature to make target acquisition more difficult.

Subjects will include radio propagation, the nature of the threat environment, and individual signal characteristics.

It will also include discussions of search, intercept, and jamming issues relative to communication signals. The tactical communication environment is extremely dense in battlefield situations, which is an important consideration in all communications EW activities. However, there are also fixed point-to-point, satellite, and air-to-ground data links that must be considered communication signals as well. Table 5.

In general, the higher the frequency, the more dependent a communication link is on a clear line of sight between the transmitter and receiver—but the more bandwidth is available. There are also unique considerations for each band. HF propagation, in particular, varies with time of day, time of year, location, and conditions such as sunspot activity that impact the ionosphere.

Finally, for specific ionospheric conditions, propagation parameters, and other examples, the Federal Communications Commission has a Web site with loads of data http: In this section, we will discuss the ionosphere, ionospheric reflection, HF propagation paths, and single-site locator operation.

The primary references for this section are Mr. HF propagation can be line of sight, ground wave, or sky wave. Ground wave, which follows the Earth, is a strong function of the quality of the surface along the path. The FCC Web site has some curves for this propagation mode.

Beyond about km, HF propagation depends on sky waves reflected from the ionosphere. Its primary interest here is that it reflects radio transmissions in the medium- and high-frequency ranges.

As shown in Figure 5, the ionosphere comprises several layers. One is an absorptive layer, with absorption decreasing with frequency; its absorption peaks at noon and is minimal after sunset. Another reflects radio signals for short- and medium-range HF propagation during the daytime; its intensity is a function of solar radiation and varies with the seasons and sunspot activity. A third causes short-term changes in HF propagation; it exists only during the daytime and is strongest during summer and periods of high sunspot activity.

It is most prominent at the middle latitudes. Another layer is permanent, but extremely variable; it allows long-range and nighttime HF propagation. The virtual height, as shown in Figure 5, is the height measured by sounders, which transmit vertically and measure the round-trip propagation time. As frequency is increased, the virtual height increases until the critical frequency is reached.

At this frequency, the transmission passes through the ionospheric layer. If there is a higher layer, the virtual height increases to the higher layer. The maximum usable frequency (MUF) is determined by the secant law: the MUF is the critical frequency divided by the cosine of the angle of incidence at the layer. If the sky wave passes through one layer, it may be reflected from a higher layer.

There can be one or more hops from the E layer, depending on the transmission distance. If the E layer is penetrated, one or more hops from the F layer can occur. At night, this would be from the F2 layer and in the daytime from the F1 layer. Depending on the local density of various layers, there can also be hops from the F layer to the E layer, back to the F layer, and finally to the Earth.

The received power from sky wave propagation is predicted by the following formula: PR = PT + GT + GR - LB - Li - LG - YP - LF, where PT is the transmitter power, GT is the transmit antenna gain, GR is the receiving antenna gain, LB is the spreading loss, Li is the ionospheric absorption loss, LG is the ground reflection loss (for multiple hops), YP is the miscellaneous loss (focusing, multipath, polarization), and LF is the fading loss.
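As a bookkeeping aid, the decibel form of this equation can be written directly; the values below are placeholders chosen only to show the arithmetic, not numbers from the book.

def skywave_received_dbm(pt_dbm, gt_db, gr_db, spreading_loss_db,
                         ionospheric_loss_db, ground_loss_db,
                         misc_loss_db, fading_loss_db):
    # P_R = P_T + G_T + G_R - L_B - L_i - L_G - Y_P - L_F (all terms in dB).
    return (pt_dbm + gt_db + gr_db - spreading_loss_db - ionospheric_loss_db
            - ground_loss_db - misc_loss_db - fading_loss_db)

# Illustrative single-hop budget (placeholder values):
print(skywave_received_dbm(pt_dbm=50.0,            # 100-W transmitter
                           gt_db=2.0, gr_db=2.0,
                           spreading_loss_db=120.0,
                           ionospheric_loss_db=10.0,
                           ground_loss_db=0.0,     # single hop, no ground reflection
                           misc_loss_db=3.0,
                           fading_loss_db=6.0), "dBm")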

All terms of the equation are in decibel form. The measured elevation is the angle of reflection from the ionosphere. Both the line-of-sight and sky wave signals reach the aircraft, but the difference in path length causes severe multipath interference.

This makes intercept difficult and causes significant problems with the operation of emitter location systems. One solution to this problem is to use antennas which have gain pattern nulls at the top, for example, horizontal loops.

Since line-of-sight emitters are relatively close to the aircraft subvehicle point, the sky wave signals come in at very high elevation angles.

Thus, they are greatly attenuated by the antenna gain pattern. In this section, we will discuss the commonly used propagation models and their normal application. Note that we are considering only the losses related to link geometry. There are additional losses from atmosphere and rain, but in this frequency range, they are usually not too significant. It discusses both the simple and the complex models. These models input specific path characteristics and thus provide valuable information for fixed-location communication, but they are less useful in electronic warfare.

EW propagation typically deals with dynamic scenarios involving large numbers of actual and potential links. Therefore, we usually use either the free space, two-ray, or knife-edge diffraction propagation models in EW applications.