Techniques for Nuclear and Particle Physics Experiments: A How-to Approach. Second Revised Edition. With figures, 40 tables, and numerous worked examples. I have been teaching courses on experimental techniques in nuclear and particle physics to master students in physics and in engineering for many years.
The book covers designing the experiment, learning the relevant techniques, and setting up and troubleshooting the principal types of detectors used in nuclear and particle physics experiments.
Jet signatures produced by quark interactions at high momentum transfers have led to great emphasis on finely segmented detection systems and on calorimetric detectors for the measurement of energy flow. The initial detector layers are used to characterize the charged-particle component and are designed to be nondestructive. Hadronic cascades can be differentiated from the electromagnetic cascades, which develop in much thinner layers of material.
The neutral pion decays to two photons. The apparatus is relatively small and primarily uses two electromagnetic calorimeters (Figure 6). Another detector is of moderately large size but simple construction; its function is to detect neutrinos and to distinguish between muon neutrinos and electron neutrinos. A further apparatus is intended primarily for studying the production of charmed particles: a photon beam produced by the primary proton beam strikes a liquid hydrogen target.
Recoiling protons are identified by the recoil detector. The spectrometer has magnetic analysis to measure charged-particle momenta and Cerenkov counters to identify pions.
This sequence of analysis steps is the same as that used in most collider detectors, but the target is not surrounded by all the components of the detector as it is in a collider detector. A nuclear emulsion is a thick photographic emulsion that when developed shows the paths of charged particles that have passed through it.
This detector was used at Fermilab to measure the lifetimes of charmed mesons. The emulsion gave precise pictures of how the mesons decayed close to their production point.
Bubble Chamber. The bubble chamber, invented in the 1950s, was for many years the workhorse of elementary-particle physics experiments. A bubble chamber uses a superheated liquid, such as liquid hydrogen, neon, or Freon. Charged particles passing through this liquid leave tracks of tiny bubbles, which are photographed. The bubble chamber has gradually been replaced in most experimental applications by electronic detectors.
The latter are more versatile, often give more information about the products of the collision, and usually provide that information in a form that can be directly used in computers.
Nevertheless, the large volume and precise track information provided by bubble chambers keep them useful for certain studies. Chief among these are the study of neutrino interactions and the study of particles with short lifetimes.
Recent improvements in bubble-chamber technology include high repetition rates, precise track measurements, and holographic photography. In recent years, apart from the sheer increase in scale of these experiments, data reduction has evolved to become more integrated with and intrinsic to an experiment's operation.
A particular example is Monte Carlo computer simulation, which has become an important means of experimental calibration. New detector capabilities have made this evolution both possible and necessary.
Detectors now offer very precise time resolution, on the order of billionths of a second. In turn, these fast detectors can supply information in a form that can be processed immediately by electronics. Advanced systems for the reduction of high-energy data have unified the traditionally separate functions of trigger decision making and data logging.
The trigger is critical to the success of many experiments because collisions may occur at very high rates, and only those interactions that are of particular interest in an experiment can be selected for recording. The trigger is a fast electronic decision, made on the basis of the signals received from the detector, to record the data from a particular event. The triggering decision itself may be based on detector input that no single computer could process fast enough.
Fortunately, the repetitive nature of these calculations can be adapted to the use of fast but relatively primitive processors working in parallel. These are the first steps in what traditionally would have been termed off-line analysis. Without the results of these and other computations, the operation of advanced detectors cannot be monitored or controlled.
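As a toy illustration of the kind of fast, repetitive decision a trigger makes, the sketch below applies a two-level selection to simulated events. The event fields, thresholds, and rates are invented for illustration; they are not taken from any real experiment.

```python
import random

def level1(event):
    """Fast first-level decision: total calorimeter energy above a threshold."""
    return sum(event["calorimeter"]) > 50.0  # GeV; illustrative threshold

def level2(event):
    """Slower second-level decision: require at least two charged tracks."""
    return event["n_tracks"] >= 2

def trigger(event):
    """An event is logged only if every trigger level accepts it."""
    return level1(event) and level2(event)

# Simulate a batch of events and apply the trigger to each one.
random.seed(1)
events = [
    {"calorimeter": [random.uniform(0, 30) for _ in range(5)],
     "n_tracks": random.randint(0, 6)}
    for _ in range(1000)
]
accepted = [e for e in events if trigger(e)]
print(f"accepted {len(accepted)} of {len(events)} events")
```

In a real system each level would run on dedicated hardware, with the cheap, fast tests applied first so that the slow ones see only a small surviving fraction of events.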
Even the logging of data onto tape may use many levels of the data-acquisition system. One experiment in one year can accumulate data amounting to 10 million such reports. Because of the enormous software development required, programs for real-time and off-line event analysis increasingly must share common subroutines and other features.
The distinction between real-time and off-line analysis is further blurred by the scale of processing power that must be dedicated to a single experiment, in notable cases reaching a level comparable with that of a major computer center. The trend toward large-solid-angle, general-purpose detectors at the colliding-beam facilities has been noted. As the collider energy rises, the events become more complex, and the amount of computer time required to analyze these events becomes large.
For example, a Z0 decay into hadrons has an average of about 20 charged particles plus neutral particles, and the off-line analysis time for such events in the detectors proposed for the SLC or LEP is of the order of seconds of central processing unit (CPU) time on moderate-size computers.
Millions of such events per year may need to be processed.
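The implied computing load can be checked with back-of-the-envelope arithmetic. The figures below (10 seconds of CPU time per event, a few million events per year) are illustrative round numbers in the spirit of the text, not measured values.

```python
cpu_seconds_per_event = 10.0   # "of the order of seconds" per event (assumed value)
events_per_year = 5_000_000    # "millions of such events per year" (assumed value)

total_cpu_seconds = cpu_seconds_per_event * events_per_year
seconds_per_year = 3.156e7     # approximate number of seconds in a year

cpu_years = total_cpu_seconds / seconds_per_year
print(f"~{cpu_years:.1f} CPU-years of processing per year of data")
```

Even these modest assumptions imply more than one dedicated computer running continuously, before any reprocessing or simulation is counted.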
As another example, it has been estimated that a dedicated capacity equivalent to tens of moderate-size computers will be required to process the interesting events from the 2-TeV proton-antiproton collider at Fermilab. These demands for computer power are only marginally met by currently available commercial computers.
The supercomputers tend to be machines optimized for the vector or array calculations typically encountered in solving large sets of partial differential equations, such as in weather prediction.
These machines are not well suited to the large input-output (I/O) requirements of high-energy physics nor to the large address-space requirements of the detector analysis codes. Some manufacturers have addressed the I/O problems and offer reasonable CPU power but are relatively weak in modern software tools and in the system architecture to support them. These are important issues in high-energy physics, where the data-production codes are being constantly improved. Tools that improve the efficiency of the physicist are clearly valuable.
Finally, the manufacturers of superminicomputers who have advanced the software state of the art do not produce machines of sufficient CPU power to analyze the data from a modern collider detector. The present situation is sufficiently serious to motivate noncommercial attempts to provide adequate computing power.
Most of these attempts are based on the relatively large ratio of CPU to I/O activity that characterizes detector event analysis, so that many relatively simple processors can work in parallel. Related projects involve even more specialized processors designed for lattice gauge theory calculations or accelerator ray tracing.
It is, of course, possible that commercial developments will prove adequate in the next few generations of machines, but the present situation is murky. Computing represents a significant expense at the national laboratories and universities in terms of actual hardware, support personnel, and physicist involvement. Even with the rapid decline in the unit costs of computing (crudely, a reduction by a factor of 2 every 3 years), the overall costs of computing increase.
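The quoted rate of decline, a factor of 2 every 3 years, compounds quickly, which is why total budgets can still rise even as unit costs collapse. The sketch below simply evaluates that compounding; the halving period is the one stated in the text.

```python
def unit_cost_factor(years, halving_period=3.0):
    """Relative unit cost of computing after `years`,
    if the cost halves every `halving_period` years."""
    return 0.5 ** (years / halving_period)

for y in (3, 6, 9, 12):
    print(f"after {y:2d} years: unit cost falls to {unit_cost_factor(y):.4f} of today's")
```

After 12 years the unit cost is one sixteenth of its starting value, yet a program whose appetite for computing grows faster than sixteenfold over the same span still costs more in absolute terms.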
Thus computers of both large and medium size are now necessary parts of almost all elementary-particle physics experiments. Not only are they needed to reduce and study the data, but they are also used to monitor and control the experimental apparatus. The extensive use of computers has stimulated advances in some types of computer technology. This is because physicists have been willing to work with computers that were still in an early stage of production, interacting closely with the computer manufacturers.
In this section we sketch some of the ways in which elementary particles are studied without using accelerators. Atomic, Optical, Electronic, and Cryogenic Experiments. Elementary-particle physicists are concerned with precise measurements of the properties of the more stable particles, such as electrons, positrons, muons, protons, and neutrons.
For example, the electric charge of the proton is the same magnitude as the electric charge of the electron, according to the most precise measurements that can be made. This is one of the reasons why we believe that there is a connection between the leptons (the electron is a lepton) and the quarks (the proton is composed of quarks). Such precise measurements are carried out using the methods of atomic, optical, electronic, or cryogenic physics.
These include measurements of the electron g-factor, of the positronium Lamb shift, and of parity violation in atomic systems. These methods are also used to search for new types of particles in matter.
Two sorts of searches have particularly intrigued particle physicists. One is the search for free quarks as contrasted with the quarks that are bound together inside protons and neutrons. The other is the search for magnetic monopoles, that is, for an isolated magnetic pole. All known particles that have magnetic properties have two magnetic poles, one north and one south, of equal size. None of these searches has produced generally accepted evidence for free quarks or monopoles.
But more definitive searches are under way. Experiments Using Radioactive Material or Reactors. There are experiments using radioactive material or reactors that are important in both elementary-particle physics and nuclear physics.
An outstanding example is the study of the radioactive beta decay of tritium.
An electron and a neutrino are produced in this decay, and the mass of this neutrino can be measured. The mass of the neutrino is a pressing question. Another example involves different forms of beta decay that have been proposed, such as the production of two electrons but no neutrino. This would violate our present theory of the weak force.
Reactors produce electron neutrinos, which are being used to study the stability of these particles. The most recent experiments find no confirmed evidence for neutrino instability. This illustrates how the exploration of an area in particle physics spreads over all the experimental techniques. Reactor experiments have also set important upper limits on the neutron's electric dipole moment. Experiments Using Cosmic Rays. Cosmic-ray physics is concerned with three areas; it is the third area, the use of cosmic rays to study elementary particles, that concerns us here.
Earlier in this century, cosmic rays were the only source of very-high-energy particles; hence substantial discoveries in particle physics were often made with cosmic rays. Prominent examples are the discoveries of the positron, the muon, and some of the strange hadrons. However, accelerator experiments have gradually displaced cosmic-ray experiments.
Cosmic-ray experiments at present can contribute to elementary-particle physics only at extremely high energies, where unfortunately the flux is small. This is shown in Figure 6. Here a matrix of phototubes in two widely separated clusters observes extensive air showers. These air showers come from the interactions of very-high-energy cosmic-ray protons with nuclei in the upper atmosphere.
Even an ambitious space station would not be able to support a detector sufficient to address this energy regime. By utilizing the atmosphere as a target, the Fly's Eye technique can explore a very large area, of the order of square kilometers or greater.
In the flux plot, the vertical scale is expressed as particles per square meter per second per steradian (left) and as particles per square kilometer per year per steradian (right); the shading represents the experimental uncertainty of the flux determinations. Turning to another example, current unified theories predict that there should be massive magnetic monopoles, with a rest mass of about 10^16 times the proton mass, and further that these should have been produced in the early universe and still be present among cosmic rays.
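The two flux scales mentioned above differ only by a unit conversion: a square kilometer is 10^6 square meters and a year is about 3.156 × 10^7 seconds, so the right-hand scale is about 3.16 × 10^13 times the left-hand one. A quick numerical check, with an invented flux value for illustration:

```python
M2_PER_KM2 = 1.0e6          # square meters per square kilometer
SECONDS_PER_YEAR = 3.156e7  # approximate seconds in a year

def per_m2_s_to_per_km2_yr(flux):
    """Convert a flux in particles / (m^2 s sr) to particles / (km^2 yr sr)."""
    return flux * M2_PER_KM2 * SECONDS_PER_YEAR

# An (invented) flux of 1e-15 per m^2 s sr corresponds to a few
# hundredths of a particle per km^2 per year per steradian.
print(f"{per_m2_s_to_per_km2_yr(1e-15):.3g} particles per km^2 yr sr")
```

This is why the highest-energy end of the spectrum is quoted per square kilometer per year: at those energies a detector of ordinary size would wait far longer than a year for a single particle.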
One experiment reported evidence for the passage of one such monopole through a superconducting coil of a few square centimeters' area. A large number of other experiments, using larger superconducting coils, have not confirmed this result. Larger monopole detectors are now being built. Another volume in this survey describes cosmic-ray experiments in more detail.
This detector is seeking evidence for inverse beta decay produced by neutrinos from the Sun; in this process a neutrino converts a chlorine nucleus into an argon nucleus. After some years of operation and a multitude of careful checks, the observed rate of argon production is only about one third to one fifth of the predicted rate.
Either our nuclear physics and astrophysics understanding of the solar furnace, in which hydrogen is burned in thermonuclear reactions, is incorrect; or some of the neutrinos decay before they reach the Earth, implying that neutrinos are unstable; or the experiment is wrong.
Although this important problem remains unsolved, no new solar neutrino detectors are now being built. A proposed new type of detector would be sensitive to the lower-energy neutrinos coming from the Sun.
This is an advantage, since the number of these neutrinos predicted by theory is less dependent on a complete understanding of the conditions in the interior of the Sun.
Searches for the Decay of the Proton. Until the last decade the proton was regarded as absolutely stable; that is, it was assumed that the proton could not decay to any other particle. However, the realization that quarks, and hence the proton, are related to leptons has been growing.
Therefore we have begun to consider the possibility that the proton could decay to either an electron or a muon (both leptons) plus other particles. This possibility, made quantitative by grand unification theories, has led to a first-generation family of underground proton-decay experiments. One such detector employs photomultipliers mounted on the walls of a large water tank.
The signals are from Cerenkov radiation in the water. Several large detectors in Europe, India, Japan, and the United States are or soon will be operating with only somewhat smaller masses.
Some of these detectors use ionization for tracking and calorimetry rather than the Cerenkov technique and will therefore be more sensitive to some decay modes than the IMB experiment.
Together with further IMB data, these detectors should either identify proton decay or set lower limits on the proton lifetime for each of several expected decay modes. New generations of quarks and leptons with lifetimes in the range of one trillionth of a second have stimulated new progress in the development of electronic systems for high-resolution vertex detection.
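The scale of such lifetime limits follows from simple counting: if N protons are watched for a time T and none decays, the mean lifetime must be at least roughly N × T (ignoring detection efficiency and decay-mode branching). The numbers below, for a notional kiloton of water watched for one year, are illustrative rather than those of any particular experiment.

```python
AVOGADRO = 6.022e23
WATER_MOLAR_MASS_G = 18.0   # grams per mole of H2O
PROTONS_PER_MOLECULE = 10   # 8 protons in oxygen + 2 hydrogen nuclei

def proton_lifetime_limit_years(tonnes_of_water, years_observed):
    """Crude lower limit on the proton lifetime from observing no decays:
    number of protons watched times the observation time."""
    grams = tonnes_of_water * 1.0e6
    n_protons = grams / WATER_MOLAR_MASS_G * AVOGADRO * PROTONS_PER_MOLECULE
    return n_protons * years_observed

limit = proton_lifetime_limit_years(1000.0, 1.0)  # one kiloton, one year
print(f"lifetime limit ~ {limit:.1e} years")
```

Because a kiloton of water already contains some 3 × 10^32 protons, even a single year of quiet running probes lifetimes enormously longer than the age of the universe, which is why these experiments can test grand unification at all.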
Emphasis on rare processes has required virtually all major new detectors to be designed for efficient operation over nearly the full angular acceptance. For successful detection under conditions of high ambient rate, new levels of sophistication and integration of data-acquisition systems have become necessary. Continued progress in the development of instrumentation and detection systems is the experimental high-energy physicist's greatest challenge. Support for the development of high-energy physics instrumentation should grow in breadth and intensity.
Basic research in the development of new detectors for high-energy physics increasingly should be recognized and supported as a fundamental source of much progress in the field. Correspondingly, the experimental stations at our front-line accelerators should be used as effectively as available technology will reasonably permit.
Elementary-Particle Physics. About this book: Not quite six years have passed since the appearance of the first edition of this book. Chapters by William R. Leo include Radiation Protection and Biological Effects of Radiation, General Characteristics of Detectors, Ionization Detectors, Scintillation Detectors, Photomultipliers, and Semiconductor Detectors.