Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos

Seth Lloyd


Part of the problem is that natural human language is frequently ambiguous. It would be useful to have an example of a situation in which a piece of information can be interpreted in only one way and in which the mechanism eliciting a response is completely known. Computers supply one such mechanism.

A computer program unambiguously instructs the computer to perform a particular sequence of simple operations on bits. The unambiguous nature of a computer program means that one and only one meaning is assigned to each statement.

If a statement in a computer language has more than one possible interpretation, an error message is the result. By comparison, human languages are rich in ambiguity. The basic idea of information is that one physical system—a digit, a letter, a word, a sentence—can be put into correspondence with another physical system.

The information stands for the thing. Two fingers can be used to represent two cows, two people, two mountains, two ideas. A word can stand for anything for which we have a word. By putting words in sentences, one can express—well, anything that can be expressed in words. The words in sequence can stand for a complicated thought. In the same way that words can represent ideas and things, so can bits. The word and the bit are means by which information is conveyed, though the interpreter must supply the meaning.

No answer. This was strange, seeing as I was pretty sure my students had been using computers since their first birthdays. I waited. Finally someone volunteered an answer: what about an analog computer? Analog computers store information on continuous voltage signals. OK, I said, then what was the first computer? Now the class had warmed up. Like the first tools, the first computers were rocks. Stonehenge may well have been a big rock computer for calculating the relations between the calendar and the arrangement of the planets.

The technology used in computing puts intrinsic limits on the computations that can be performed (think rocks versus an Intel Pentium IV). And to deal with large numbers, you need a lot of rocks. If you kept the rocks in grooves on a wooden table, they were easier to move back and forth. Then it was discovered that if you used beads instead of rocks and mounted them on wooden rods, the beads were not only easy to move back and forth but also hard to lose.

The wooden computer, or abacus, is a powerful calculating tool. Before the invention of electronic computers, a trained abacus operator could out-calculate a trained adding-machine operator.

But the abacus is not merely a convenient machine for manipulating pebbles. It embodies a powerful mathematical abstraction: zero. The concept of zero is the crucial piece of the Arabic number system—a system allowing arbitrarily large numbers to be represented and manipulated with ease—and the abacus is its mechanical incorporation.

But which came first, the machine or the idea? Given the origin of the word for zero and the antiquity of the first abacus, it seems likely that the machine did. Sometimes machines make ideas; ideas also make machines. First rock, then wood. In the early seventeenth century, the Scottish mathematician John Napier discovered a way of changing the process of multiplication into addition.

He carved ivory into bars, ruled marks corresponding to numbers on the bars, and then performed multiplication by sliding the bars alongside each other until the marks corresponding to the two numbers lined up. The total length of the two bars together then gave the product of the two numbers.


The slide rule was born. At the beginning of the nineteenth century, an eccentric Englishman named Charles Babbage proposed making computers out of metal: each gear would register information by its position, and then process that information by meshing with other gears and turning.

Babbage's machine was designed with a central processing unit and a memory bank that could hold both program and data. Although effective mechanical calculators were available by the end of the nineteenth century, large-scale working computers had to await the development of electronic circuit technology at the beginning of the twentieth. By the late 1930s, an international competition had arisen between various groups to construct computers using electronic switches such as vacuum tubes or electromechanical relays.

The first simple electronic computer was built by Konrad Zuse in Germany in the early 1940s, followed by large-scale computers in the United States and Great Britain later in the 1940s. In the 1950s, vacuum tubes and electromechanical relays were replaced by transistors, semiconductor switches that were smaller and more reliable and required less energy.

A semiconductor is a material such as silicon that conducts electricity better than insulators such as glass or rubber, but less well than conductors such as copper. Starting in the late 1950s, transistors were made still smaller by etching them on silicon-based integrated circuits, which collected all the components needed to process information on one semiconductor chip.

Since the 1960s, advances in photolithography—the science of engineering ever more detailed circuits—have halved the size of the components of integrated circuits every eighteen months or so. Nowadays, the wires in the integrated circuits in a run-of-the-mill computer are only about 1,000 atoms wide.

For future reference, let me define some of the types of computers to which I will refer. A digital computer is a computer that operates by applying logic gates to bits; a digital computer can be electronic or mechanical.

A classical computer is a computer that computes using the laws of classical mechanics. A classical digital computer is one that computes by performing classical logical operations on classical bits. An electronic computer is one that computes using electronic devices such as vacuum tubes or transistors.

A digital electronic computer is a digital computer that operates electronically. An analog computer is one that operates on continuous signals, such as voltages, rather than on discrete bits; analog computers can be electronic or mechanical. A quantum computer is one that operates using the laws of quantum mechanics. Quantum computers have both digital and analog aspects.

Logic Circuits

What are our ever more powerful computers doing? They are processing information by breaking it up into its component bits and operating on those bits a few at a time. As noted earlier, the information to be processed is presented to the computer in the form of a program, a set of instructions in a computer language. The computer looks at the program a few bits at a time, interprets the bits as an instruction, and executes the instruction.

Then it looks at the next few bits and executes that instruction. And so on. Physically, logic circuits consist of bits, wires, and gates. Bits, as we have seen, can register either 0 or 1; wires move the bits from one place to another; gates transform the bits one or two at a time. A COPY gate makes a copy of a bit: it takes one input bit and produces two output bits, each equal to the input. An AND gate takes two input bits and produces a single output bit equal to 1 if, and only if, both input bits are equal to 1; otherwise it produces the output 0.

An OR gate takes two input bits and produces an output bit equal to 1 if one or both of the input bits is equal to 1; if both input bits are equal to 0, then it produces the output 0. A NOT gate takes a single input bit and flips it, turning 0 into 1 and 1 into 0. Together, COPY, AND, OR, and NOT make up a universal set of logic gates. Figure 3 (Logic Gates) shows these devices, which take one or more input bits and transform them into one or more output bits. A digital computer is a computer that operates by implementing a large logic circuit consisting of millions of logic gates.
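
As a concrete illustration (my own sketch, not the book's), here is that universal gate set written out in Python, with one composite gate wired up from it:

    # A minimal sketch of the universal gate set described above,
    # with bits represented as the integers 0 and 1.

    def COPY(a):
        return a, a                  # one input bit -> two identical output bits

    def AND(a, b):
        return 1 if (a == 1 and b == 1) else 0

    def OR(a, b):
        return 1 if (a == 1 or b == 1) else 0

    def NOT(a):
        return 1 - a                 # flips 0 to 1 and 1 to 0

    # Universality in action: any other logic function can be built
    # from these. XOR, for example, outputs 1 only when its inputs differ.
    def XOR(a, b):
        return OR(AND(a, NOT(b)), AND(NOT(a), b))

    print([XOR(a, b) for a in (0, 1) for b in (0, 1)])   # [0, 1, 1, 0]

Stringing millions of such elementary operations together, one or two bits at a time, is all the logic circuit of a digital computer ever does.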

Familiar computers such as Macintoshes and PCs are electronic realizations of digital computers. A logic circuit can perform still more complicated transformations of its input bits (Figure 4, Logic Circuits). In an electronic computer, bits are registered by electronic devices such as capacitors. A capacitor is like a bucket that holds electrons. To fill the bucket, a voltage is applied to the capacitor.

A capacitor at zero voltage has no excess electrons and is said to be uncharged. An uncharged capacitor in a computer registers a 0. A capacitor at non-zero voltage holds lots of excess electrons and registers a 1.


Capacitors are not the only electronic devices that computers use to store information. As always, any device that has two reliably distinguishable states can register a bit. In a conventional digital electronic computer, logic gates are implemented using transistors. A transistor can be thought of as a switch.

When the switch is closed, current flows through; when it is open, the current is blocked. A transistor has two inputs and one output. In an n-type transistor, the switch is closed when the first input is kept at high voltage; in a p-type transistor, the switch is closed when the first input is kept at low voltage. In either case, closing the switch allows current to flow from the second input to the output. Wired together, n- and p-type transistors can create AND, OR, NOT, and COPY gates.
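
The switch picture can be made concrete too (again my own toy model, not a circuit diagram from the book): treat each transistor as a conditional switch, and build a NOT gate from a complementary pair, CMOS-style.

    # Toy model of transistors as switches. A control voltage
    # (0 = low, 1 = high) determines whether each switch conducts.

    def n_conducts(control):
        return control == 1          # n-type: closed when control is HIGH

    def p_conducts(control):
        return control == 0          # p-type: closed when control is LOW

    def NOT_gate(a):
        # Complementary pair: the p-type switch connects the output to the
        # supply voltage (1) when a == 0; the n-type switch connects it to
        # ground (0) when a == 1. Exactly one of the two conducts at a time.
        return 1 if p_conducts(a) else 0

    print([NOT_gate(a) for a in (0, 1)])   # [1, 0]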

When a computer computes, all it is doing is applying logic gates to bits. Computer games, word processing, number crunching, and spam all arise out of the electronic transformation of bits, one or two at a time.

Uncomputability

Up to this point, we have emphasized the underlying simplicity of information and computation. A bit is a simple thing; a computer is a simple machine. But simplicity of parts does not guarantee predictability of behavior: in general, the only way to find out what a computer will do once it has embarked upon a computation is to wait and see what happens.

As the logician Kurt Gödel showed in 1931, all sufficiently powerful systems of logic contain unprovable statements. The computational analog of an unprovable statement is an uncomputable quantity. A well-known problem whose answer is uncomputable is the so-called halting problem: program a computer and set it running. Does the computer ever halt and give an output? Or does it run forever? There is no general procedure to compute the answer to this question.

That is, no computer program can take as input another computer program and determine with 100 percent certainty whether the first computer program halts or not. Of course, for many programs, you can tell whether or not the computer will halt. Take a program that consists of a single instruction to print a number: a computer given this program as input prints the number, then halts. But as a rule, no matter how long a computer has gone on computing without halting, you cannot conclude that it will never halt. Although it may sound abstract, the halting problem has many practical consequences.
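
Before turning to those consequences, the standard impossibility argument is short enough to sketch in code (a hypothetical sketch of mine, not the book's; the whole point is that halts() can never actually be implemented):

    # Suppose, for contradiction, that a universal halting tester existed.

    def halts(program, data):
        """Return True if program(data) eventually halts, False otherwise.
        Turing's argument shows no correct implementation can exist."""
        raise NotImplementedError

    def paradox(program):
        # Feed a program its own text and do the opposite of the prediction.
        if halts(program, program):
            while True:              # predicted to halt -> loop forever
                pass
        else:
            return                   # predicted to run forever -> halt now

    # Consider paradox(paradox). If halts(paradox, paradox) returns True,
    # then paradox(paradox) loops forever; if it returns False, then
    # paradox(paradox) halts immediately. Either way halts() is wrong,
    # so no such function can exist.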

Take, for example, the debugging of computer programs. Imagine a universal debugger. Such a debugger would take as input a computer program, together with a description of what the program is supposed to accomplish, and then check to see that the program does what it is supposed to do. Such a debugger cannot exist. The universal debugger is supposed to verify that its input program gives the right output.

So the first thing a universal debugger should check is whether the input program has any output at all. But to verify that the program gives an output, the universal debugger needs to solve the halting problem.

That it cannot do. The only way to determine if the program will halt is to run it and see, and at that point, we no longer need the universal debugger. So the next time a bug freezes your computer, you can take solace in the deep mathematical truth that there is no systematic way to eliminate all bugs.

Or you can just curse and reboot. It is tempting to identify similar paradoxes in how human beings function. After all, human beings are masters of self-reference (some humans seem capable of no other form of reference) and are certainly subject to paradox. Humans are notoriously unable to predict their own future actions. This is an important feature of what we call free will. That is, our own future choices are inscrutable to ourselves.

They may not, of course, be inscrutable to others. At the restaurant my wife and I frequented, I, after spending a long time scrutinizing the menu, would always order the half plate of chiles rellenos, with red and green chile, and posole instead of rice.


I felt strongly that I was exercising free will. My wife, however, knew exactly what I was going to order all the time. The inscrutable nature of our choices when we exercise free will is a close analog of the halting problem: just as the only way to see whether a program will halt is to run it, often the only way to discover what we will choose is to make the choice. Ironically, it is customary to assign our own unpredictable behavior and that of other humans to irrationality. In fact, it is just when we behave rationally, moving logically, like a computer, from step to step, that our behavior becomes provably unpredictable.

Rationality combines with self-reference to make our actions intrinsically paradoxical and uncertain. This lovely inscrutability of pure reason harks back to an earlier account of the role of logic in the universe. Reason is immortal exactly because it is not specific to any individual; instead, it is the common property of all reasoning beings. Computers certainly possess the ability to reason and the capacity for self-reference.


And just because they do, their actions are intrinsically inscrutable. Consequently, as they become more powerful and perform a more varied set of tasks, computers exhibit an unpredictability approaching that of human beings. Programming computers to perform simple human tasks is difficult; by contrast, no special effort is required to program a computer to behave in unpredictable and annoying ways.

When it comes to their capacity to screw things up, computers are becoming more human every day.

The Story of the Universe: Part One

The universe is made of atoms and elementary particles, such as electrons, photons, quarks, and neutrinos. Although we will soon delve into a vision of the universe based on a computational model, we would be foolish not to first explore the stunning discoveries of cosmology and elementary-particle physics.

Science already affords us excellent ways of describing the universe in terms of physics, chemistry, and biology. The computational universe is not an alternative to the physical universe. The universe that evolves by processing information and the universe that evolves by the laws of physics are one and the same. The two descriptions, computational and physical, are complementary ways of capturing the same phenomena. Of course, humans have been speculating about the origins of the universe far longer than they have been dabbling in modern science.

Telling stories about the universe is as old as telling stories. In Norse mythology, the universe begins when a giant cow licks the gods out of the salty lip of a primordial pit. In Japanese mythology, Japan arises from the incestuous embrace of the brother and sister gods Izanagi and Izanami.

In one Hindu creation myth, all creatures rise out of the clarified butter obtained from the sacrifice of the thousand-headed Purusha. More recently, though, over the last century or so, astrophysicists and cosmologists have constructed a detailed history of the universe supported by observational evidence. The universe began a little less than 14 billion years ago, in a huge explosion called the Big Bang.

As it expanded and cooled down, various familiar forms of matter condensed out of the cosmic soup. Three minutes after the Big Bang, the building blocks for simple atoms such as hydrogen and helium had formed.

These building blocks clumped together under the influence of gravity to form the first stars and galaxies a few hundred million years after the Big Bang.

Heavier elements, such as iron, were produced when these early stars exploded in supernovae. Our own sun and solar system formed 5 billion years ago, and life on Earth was up and running a little over a billion years later.

This conventional history of the universe is not as sexy as some versions, and dairy products enter into only its later stages, but unlike older creation myths, the scientific one has the virtue of being consistent with known scientific laws and observations. And even though it is phrased in terms of physics, the conventional history of the universe still manages to make a pretty good story. It has drama and uncertainty, and many questions remain: How did life arise?

Why is the universe so complex? What is the future of the universe and of life in particular? When we look into the Milky Way, our own galaxy, we see many stars much like our own.


When we look beyond, we see many galaxies apparently much like the Milky Way. There is a scripted quality to what we see, in which the same astral dramas are played out again and again by different stellar actors in different places. If the universe is in fact infinite, then somewhere every possible variation on these dramas is being staged: the story of the universe is a kind of cosmic soap opera whose actors play out all possible permutations of the drama.

In conventional cosmology, the primary actor is energy—the radiant energy in light and the mass energy in protons, neutrons, and electrons. What is energy? As you may have learned in middle school, energy is the ability to do work. Energy makes physical systems do things. Famously, energy has the feature of being conserved: it can change from one form to another, but it can be neither created nor destroyed. This is known as the first law of thermodynamics. But if energy is conserved, and if the universe started from nothing, then where did all of the energy come from?

Physics provides an explanation. Quantum mechanics describes energy in terms of quantum fields, a kind of underlying fabric of the universe, whose weave makes up the elementary particles—photons, electrons, quarks. The energy we see around us, then—in the form of Earth, stars, light, heat—was drawn out of the underlying quantum fields by the expansion of our universe. Gravity is an attractive force that pulls things together.

The energy in the quantum fields is almost always positive, and this positive energy is exactly balanced by the negative energy of gravitational attraction. As the expansion proceeds, more and more positive energy becomes available, in the form of matter and light—compensated for by the negative energy in the attractive force of the gravitational field. The conventional history of the universe pays great attention to energy: How much is there? Where is it? What is it doing? By contrast, in the story of the universe told in this book, the primary actor in the physical history of the universe is information.

Ultimately, information and energy play complementary roles in the universe: energy makes physical systems do things; information tells them what to do.

The Second Law of Thermodynamics

If we could look at matter at the atomic scale, we would see atoms dancing and jiggling every which way at random. The energy that drives this random atomic dance is called heat, and the information that determines the steps of this dance is called entropy. More simply, entropy is the information required to specify the random motions of atoms and molecules—motions too small for us to see.

Entropy is the information contained in a physical system that is invisible to us. Entropy is a measure of the degree of molecular disorder existing in a system. The second law of thermodynamics states that the entropy of the universe as a whole never decreases. Manifestations of the second law are all around us.
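
For readers who want the standard quantitative bridge (my addition, from textbook statistical mechanics rather than the book's own words): a system that could be in any one of W equally likely microscopic arrangements has entropy

    S = k_B ln W = (k_B ln 2) × (number of bits),

so measuring entropy and counting invisible bits are the same bookkeeping, up to the conversion constant k_B ln 2.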

Hot steam can run a turbine and do useful work. As steam cools, its jiggling molecules transfer some of their disorder into increased disorder in the surrounding air, heating it up. As the molecules of steam jiggle slower and slower, the air molecules jiggle faster and faster, until steam and air are at the same temperature.

When the difference in temperatures is minimized, the entropy of the system is maximized. But room temperature steam will do no work. Here is yet another way to conceive of entropy. Most information is invisible. The number of bits of information required to characterize the dance of atoms vastly outweighs the number of bits we can see or know.

Consider a photograph: it has an intrinsic graininess determined by the size of the grains of silver halide that make up the photographic film—or, if it is a digital photograph, by the number of pixels that make up the digital image on a screen. A high-quality digital image can register almost a billion bits of visible information. How did I come up with that number?

One thousand pixels per inch is a high resolution, comparable to the best resolution that can be distinguished with the naked eye. At this resolution, each square inch of the photograph contains a million pixels. An 8- by 6-inch color photograph with 1,000 pixels per inch therefore has 48 million pixels.

Each pixel has a color. Digital cameras typically use 24 bits per pixel to produce 16 million colors, comparable to the number that the human eye can distinguish. So an 8- by 6-inch color digital photograph with 1,000 pixels per inch and 24 bits of color resolution registers 1,152,000,000 bits of information. An easier way to see how many bits are required to register a photograph is to look at how rapidly the memory space in your digital camera disappears when you take a picture.

A typical digital camera takes high-resolution pictures with 3 million bytes—3 megabytes—of information. A byte is 8 bits, so each picture on the digital camera registers approximately 24 million bits. Compare this with the matter that carries the image: to describe the jiggling of the atoms in just a gram of matter would require more than a million billion billion bits, or a 1 followed by 24 zeros.
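
The visible-information arithmetic above is easy to check (a restatement of the text's numbers, nothing more):

    # Visible information in the photograph, per the figures in the text.
    pixels_per_inch = 1000           # "one thousand pixels per inch"
    width_in, height_in = 8, 6       # 8- by 6-inch photograph
    bits_per_pixel = 24              # 24-bit color, ~16 million colors

    pixels = width_in * height_in * pixels_per_inch ** 2
    visible_bits = pixels * bits_per_pixel
    print(pixels, visible_bits)      # 48000000 pixels, 1152000000 bits

    # The camera-file comparison: 3 megabytes at 8 bits per byte.
    print(3_000_000 * 8)             # 24000000 bits per picture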

The invisible jiggling atoms register vastly more information than the visible photograph they make up. A photograph that registered the same amount of visible information as the invisible information in a gram of atoms would have to be as big as the state of Maine. The number of bits registered by the jiggling atoms that make up the photographic image on film can be estimated as follows.

A grain of silver halide is about a millionth of a meter across and contains about a trillion atoms. There are tens of billions of grains of silver halide in the photographic film. Describing where an individual atom at room temperature is in its infinitesimal dance requires 10 to 20 bits per atom. The total amount of information registered by the atoms in the photograph is thus on the order of 10^23 bits.
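
Again, the estimate follows from the text's round numbers (the choice of a midpoint for bits per atom is mine):

    atoms_per_grain = 1e12      # "about a trillion atoms" per grain
    grains = 1e10               # "tens of billions of grains" of silver halide
    bits_per_atom = 15          # "10 to 20 bits per atom"; midpoint assumed

    total_bits = atoms_per_grain * grains * bits_per_atom
    print(f"{total_bits:.1e}")  # 1.5e+23 -- on the order of 10**23 bits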

The billion bits of information visible in the photograph, as represented by the digital image, represent only a tiny fraction of this total.


The remainder of the information contained in the matter of the photograph is invisible. This invisible information is the entropy of the atoms.

Free Energy

To experience another example of the first and second laws, take a bite of an apple. The sugars in the apple contain what is called free energy. Free energy is energy in a highly ordered form associated with a relatively low amount of entropy. In the case of the apple, the energy in sugar is stored not in the random jiggling of atoms but in the ordered chemical bonds that hold sugar together.

It takes much less information to describe the form energy takes in a billion ordered chemical bonds than it does to describe that same energy spread among a billion jiggling atoms.

The relatively small amount of information required to describe this energy makes it available for use: pick the apple, take a bite. Every gram of glucose contains a few kilocalories of free energy. A calorie is the amount of energy required to raise one gram of water one degree Celsius. A kilocalorie, 1,000 calories, is what someone on a diet would normally call a Calorie. One hundred kilocalories is enough energy to lift a VW Bug one hundred feet in the air!
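
That exclamation is easy to sanity-check, if we assume a mass for the car (the ~1,400 kg figure below is my assumption; the 100 kilocalories comes from the text):

    kilocalories = 100
    energy_joules = kilocalories * 4184    # joules per kilocalorie
    mass_kg = 1400                         # assumed mass of a VW Bug
    g = 9.8                                # gravitational acceleration, m/s^2

    height_m = energy_joules / (mass_kg * g)
    print(height_m, height_m * 3.28)       # ~30 meters, i.e. roughly 100 feet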

Suppose you eat the apple and go for a run. While you run, the free energy in the sugar is converted into motion by your muscles, and eventually into heat. In obedience to the first law of thermodynamics, the total amount of energy remains the same. Unfortunately, to reverse this process is not so easy. If you wanted to convert the energy in heat, which carries lots of invisible information (entropy), back into energy in chemical bonds, which has much less entropy, you would have to do something with that extra information.

As we will discuss, the problem of finding a place for the extra bits in heat puts fundamental limits on how well engines, humans, brains, DNA, and computers can function. The universe we see around us arises from the interplay between these two quantities, energy and information, an interplay governed by the first and second laws of thermodynamics.

Energy is conserved. Information never decreases. It takes energy for a physical system to evolve from one state to another. That is, it takes energy to process information. The more energy that can be applied, the faster the physical transformation takes place and the faster the information is processed.

The maximum rate at which a physical system can process information is proportional to its energy. The more energy, the faster the bits flip. Earth, air, fire, and water in the end are all made of energy, but the different forms they take are determined by information.
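
In Lloyd's technical work, this proportionality has a precise form, the Margolus-Levitin theorem: a system with average energy E can perform at most 2E / (pi * hbar) elementary operations per second. Here is a sketch of the well-known one-kilogram "ultimate laptop" estimate (the choice of one kilogram is the thought experiment's input, not data from this text):

    import math

    hbar = 1.0545718e-34        # reduced Planck constant, joule-seconds
    c = 2.998e8                 # speed of light, meters per second
    mass = 1.0                  # one kilogram of matter

    energy = mass * c ** 2                        # E = m c^2, in joules
    ops_per_second = 2 * energy / (math.pi * hbar)
    print(f"{ops_per_second:.1e}")                # ~5.4e+50 ops per second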

To do anything requires energy. To specify what is done requires information. Energy and information are by nature (no pun intended) intertwined.

The Story of the Universe: Part Two

It is this interplay—this back-and-forth between information and energy—that makes the universe compute.

Over the last century, advances in the construction of telescopes have led to ever more precise observations of the universe beyond our solar system. The past decade has been a particularly remarkable one for observations of the heavens. Ground-based telescopes and satellite observatories have generated rich data describing what the universe looks like now, as well as what it looked like in the past.

The historical nature of cosmic observation proves useful as we attempt to untangle the early history of the universe. The universe began just under 14 billion years ago in a massive explosion. What happened before the Big Bang? There was no time and no space. Not just empty space, but the absence of space itself. Time itself had a beginning. There is nothing wrong with beginning from nothing. Before zero, there are no positive numbers. Before the Big Bang, there was nothing—no energy, no bits.

Then, all at once, the universe sprang into existence. Time began, and with it, space. The newborn universe was simple; the newly woven fabric of quantum fields contained only small amounts of information and energy. At most, it required a few bits of information to describe. In fact, if—as some physical theories speculate—there is only one possible initial state of the universe and only one self-consistent set of physical laws, then the initial state required no bits of information to describe.

Recall that to generate information, there must be alternatives—e.g., the 0 and 1 of a bit, the heads and tails of a tossed coin. If there were no alternatives to the initial state of the universe, then exactly zero bits of information were required to describe it; it registered zero bits. This initial paucity of information is consistent with the notion that the universe sprang from nothing.

As soon as it began, though, the universe began to expand. As it expanded, it pulled more and more energy out of the underlying quantum fabric of space and time. The early universe remained simple and orderly, and the energy that was created was free energy. This paucity of information did not last for long, however. As the expansion continued, the free energy in the quantum fields was converted into heat, increasing entropy, and all sorts of elementary particles were created.

These particles were hot, jiggling about at enormous rates. To describe this jiggling would take a lot of information. After a billionth of a second—the amount of time it takes light to travel about a foot—had passed, the amount of information contained within the universe was on the order of a million billion billion billion billion billion (10^51) bits.

To store that much information visually would require a photograph the size of the Milky Way. The Big Bang was also a Bit Bang. A lot had happened. But what was the universe computing during this initial billionth of a second? Science fiction writers have speculated that entire civilizations could have arisen and declined during this time—a time very much shorter than the blink of an eye.

We have no evidence of these fast-living folk. More likely, these early ops consisted of elementary particles bouncing off one another in random fashion. After this first billionth of a second, the universe was very hot. Almost all of the energy that had been drawn into it was now in the form of heat. Lots of information would have been required to describe the infinitesimal jigglings of the elementary particles in this state.

In fact, when all matter is at the same temperature, entropy is maximized. There was very little free energy—that is, order—at this stage, making the moments after the Big Bang a hostile time for processes like life. Life requires free energy. Even if there were some form of life that could have withstood the high temperatures of the Big Bang, that life-form would have had nothing to eat.

As the universe expanded, it cooled down. The elementary particles jiggled around more slowly. The amount of information required to describe their jiggles stayed almost the same, though, increasing gradually over time.

But, at the same time, the amount of space in which they were jiggling was increasing, requiring more bits to describe their positions. Thus, the total amount of information remained constant or increased in accordance with the second law of thermodynamics.

As the jiggles got slower and slower, bits and pieces of the cosmic soup began to condense out. This condensation produced some of the familiar forms of matter we see today. When the amount of energy in a typical jiggle became less than the amount of energy required to hold together some form of composite particle—a proton, for example—those particles formed.

When the jiggles of the constituent parts—quarks, in the case of a proton—were no longer sufficiently energetic to maintain them as distinct particles, they stuck together as a composite particle that condensed out of the cosmic soup.

Every time a new ingredient of the soup condensed out, there was a burst of entropy—new information was written in the cosmic cookbook. Particles condensed out of the jiggling soup in order of the energy required to bind them together. Protons and neutrons—the particles that make up the nuclei of atoms—condensed out a little more than a millionth of a second after the Big Bang, when the temperature was around 10 million million degrees Celsius. Atomic nuclei began to form at one second, and about a billion degrees.

After three minutes, the nuclei of the lightweight atoms—hydrogen, helium, deuterium, lithium, beryllium, and boron—had condensed. Electrons were still whizzing around too fast, though, for these nuclei to capture them and form complete atoms. Three hundred eighty thousand years after the Big Bang, when the temperature of the universe had dropped to a few thousand degrees, electrons finally slowed enough for nuclei to capture them, and the first complete atoms formed.

Order from Chaos: The Butterfly Effect

Until the formation of atoms, almost all the information in the universe lay at the level of the elementary particle.

Nearly all bits were registered by the positions and velocities of protons, electrons, and so forth. On any larger scale, the universe still contained very little information: it was remarkably uniform. How uniform was it? Imagine the surface of a lake on a windless morning, so calm that the reflections of the trees are indistinguishable from the trees themselves. Imagine the earth with no mountain larger than a molehill.

The early universe was more uniform still. Nowadays, by contrast, telescopes reveal huge variations and nonuniformities in the universe.

Matter clusters together to form planets such as Earth and stars such as the sun. Planets and suns cluster together to form solar systems. Our solar system clusters together with billions of others to form a galaxy, the Milky Way. The Milky Way, in turn, is only one of tens of galaxies in a cluster of galaxies—and our cluster of galaxies is only one cluster in a supercluster. This hierarchy of clusters of matter, separated by cosmic voids, makes up the present, large-scale structure of the universe.

How did this large-scale structure come about? Where did the bits of information come from? Their origins can be explained by the laws of quantum mechanics, coupled to the laws of gravity. Quantum mechanics is the theory that describes how matter and energy behave at their most fundamental levels.

At the small scale, quantum mechanics describes the behavior of molecules, atoms, and elementary particles. At larger scales, it describes the behavior of you and me. Larger still, it describes the behavior of the universe as a whole. The laws of quantum mechanics are responsible for the emergence of detail and structure in the universe. The theory of quantum mechanics gives rise to large-scale structure because of its intrinsically probabilistic nature.

Counterintuitive as it may seem, quantum mechanics produces detail and structure because it is inherently uncertain. The early universe was uniform, but it was not exactly the same everywhere. In quantum mechanics, quantities such as position, velocity, and energy density do not have exact values. Instead, their values fluctuate. We can describe their probable values—the most likely location of a particle, for example—but we cannot claim perfect certainty.

Because of these quantum fluctuations, some regions of the early universe were ever so slightly more dense than other regions. As time passed, the attractive force of gravity caused more matter to move toward these denser regions, further increasing their energy density and decreasing density in the surrounding volume. Gravity thus amplified the effect of an initially tiny discrepancy, causing it to increase. Just such a tiny quantum fluctuation seeded the formation of our own cluster of galaxies. Slightly later on, further fluctuations formed the seeds for the positions of individual galaxies within the cluster, and still later fluctuations seeded the positions of planets and stars.

In the process of creating this large-scale structure, gravity also created the free energy that living things require to survive. As the matter clumped together, it moved faster and faster, gaining energy from the gravitational field; that is, the matter heated up.

The larger the clump grew, the hotter the matter became. If enough matter clumped together, the temperature in the center of the clump rose to the point at which thermonuclear reactions ignited: a star was born. The light from the sun has lots of free energy—energy plants would use for photosynthesis, for example. As soon as plants came into existence, that is. Perhaps the most famous example of chaos (the amplification of tiny changes into large effects) is the so-called butterfly effect.

The minuscule quantum fluctuations of energy density at the time of the Big Bang are the butterfly effects that would come to yield the large-scale structure of the universe. Every galaxy, star, and planet owes its mass and position to quantum accidents of the early universe.

Chance is a crucial element of the language of nature. Every roll of the quantum dice injects a few more bits of detail into the world. As these details accumulate, they form the seeds for all the variety of the universe.

Every tree, branch, leaf, cell, and strand of DNA owes its particular form to some past toss of the quantum dice. Without the laws of quantum mechanics, the universe would still be featureless and bare. Gambling for money may be infernal, but betting on throws of the quantum dice is divine.

In computer science, a universal computer is a device that can be programmed to process bits of information in any desired way.

Conventional digital computers of the sort on which this book is being written are universal computers, and their languages are universal languages. Human beings are capable of universal computation, and human languages are universal. Most systems that can be programmed to perform arbitrarily long sequences of simple transformations of information are universal. Universal computers can do pretty much anything with information. Two of the inventors of universal computers and universal languages, Alonzo Church and Alan Turing, hypothesized that any possible mathematical manipulation can be performed on a universal computer; that is, universal computers can generate mathematical patterns of any level of complexity. A universal computer itself, though, need not be complex.

A universal computer itself, though, need not file: Any desired transformation of however large a set of bits can be enacted by repeatedly performing operations on just one or two bits at a time. And any machine that can enact this sequence of simple logical operations is a universal computer. Significantly, universal computers can be programmed to transform information in any way desired, and any universal computer can be programmed to transform information in the same way as any other universal computer.

That is, any universal computer can simulate another, and vice versa. This intersimulatability means that all universal computers can perform the same set of tasks. This feature of computational universality is a familiar one: a program written for a PC can be translated to run on a Mac. Of course, the program may take longer to run on the Mac than the PC, or vice versa. Programs written for a specific universal computer tend to run faster on that computer than the translated program runs on another.

But the translated program will still run. In fact, every universal computer can be shown not only to simulate every other universal computer, but to do so efficiently. The slowdown due to translation is relatively insignificant.

Digital vs. Quantum

The universe computes.

Its computer language consists of the laws of physics and their chemical and biological consequences. But is the universe nothing more than a universal digital computer, in the technical sense elucidated by Church and Turing? It is possible to give a precise scientific answer to this question.

The answer is No. The idea that the universe might be, at bottom, a digital computer is decades old. In the 1960s, Edward Fredkin, then a professor at MIT, and Konrad Zuse, who had constructed the first electronic digital computers in Germany in the early 1940s, both proposed that the universe was fundamentally a universal digital computer.

More recently, this idea has found an advocate in the computer scientist Stephen Wolfram. The idea is an appealing one. In particular, computers whose architecture mimics the structure of space and time (so-called cellular automata) can efficiently reproduce the motions of classical particles and the interactions between them.

In addition to the aesthetic appeal of a digital universe, there is powerful observational evidence for the computational ability of physical laws. The laws of physics clearly support universal computation. The problem with identifying the universe as a classical digital computer is that the universe appears to be significantly more computationally powerful.

Two computing machines have the same computational power if each can simulate the other efficiently. But now consider the converse question: can a conventional digital computer efficiently simulate the universe? In fact, a conventional digital computer seems unable to simulate the universe efficiently.

At first, it might seem otherwise. After all, the laws of physics are apparently simple. Even if they turn out to be somewhat more complicated than we currently suspect, they are still mathematical laws that can be expressed in a conventional computer language; that is, a conventional computer can simulate the laws of physics and their consequences. If you had a large enough computer, you could program it, in a language such as Java, with descriptions of the initial state of the universe and of the laws of physics, and set it running.

Eventually, you would expect this computer to come up with accurate descriptions of the state of the universe at any later time. The problem with such simulations is not that they are impossible, but that they are inefficient. The universe is fundamentally quantum-mechanical, and conventional digital computers have a hard time simulating quantum-mechanical systems.

Quantum mechanics is just as weird and counterintuitive for conventional computers as it is for human beings. In fact, in order to simulate even a tiny piece of the universe—consisting, say, of a few hundred atoms—for a tiny fraction of a second, a conventional computer would need more memory space than there are atoms in the universe as a whole, and would take more time to complete the task than the current age of the universe.
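
The bookkeeping behind that claim is straightforward (the 16 bytes per amplitude below is my assumption, corresponding to a standard double-precision complex number):

    n = 300                          # "a few hundred atoms," treated as qubits
    amplitudes = 2 ** n              # complex amplitudes in the state vector
    bytes_needed = amplitudes * 16   # two 8-byte floats per amplitude

    print(f"{float(bytes_needed):.1e} bytes")    # ~3.3e+91 bytes
    # For comparison, the visible universe contains roughly 10**80 atoms,
    # so no classical memory could ever hold this state exactly.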

This is not to say that classical computers are useless for capturing certain aspects of quantum behavior; they can often approximate it. But classical bits are very bad at storing the information required to characterize a quantum system exactly: the number of bits required grows exponentially with the size of the system. What does this mean?

The failure of classical simulation of quantum systems suggests that the universe is intrinsically more computationally powerful than a classical digital computer.

But what about a quantum computer? A few years ago, acting on a suggestion from the physicist Richard Feynman, I showed that quantum computers can simulate any system that obeys the known laws of physics (and even those that obey as yet undiscovered laws!). In brief, the simulation proceeds as follows: First, map the state of every piece of a quantum system—every atom, electron, or photon—onto the state of some small set of quantum bits, known as a quantum register.

Because the register is itself quantum-mechanical, it has no problem storing the quantum information inherent in the original system on just a few quantum bits. Then enact the natural dynamics of the quantum system using simple quantum logic operations—interactions between quantum bits.

Because the dynamics of a physical system consists of interactions between its constituent parts, these interactions can be simulated directly by quantum logic operations on the bits in the quantum register that correspond to those parts.

The amount of time the quantum computer takes to perform the simulation is proportional to the time over which the simulated system evolves, and the amount of memory space required for the simulation is proportional to the number of subsystems or subvolumes of the simulated system.

The simulation proceeds by a direct mapping of the dynamics of the system onto the dynamics of the quantum computer.
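
A toy version of that recipe can be run classically for a very small system (everything here is my illustrative assumption: the two-spin system, the Ising-type Hamiltonian, and the step count; it is not Lloyd's specific construction):

    import numpy as np

    I = np.eye(2)
    X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
    Z = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z

    # Step 1: map each spin onto a qubit; two spins -> a 4-dimensional register.
    # Step 2: the system's dynamics, here an Ising coupling plus local fields.
    H = np.kron(Z, Z) + 0.5 * (np.kron(X, I) + np.kron(I, X))

    def evolve(state, t, steps=1000):
        # Build up exp(-iHt) from many short bursts of interaction, the way
        # a quantum computer strings together elementary logic operations.
        dt = t / steps
        u_step = np.eye(4) - 1j * H * dt    # first-order step, approx exp(-iH dt)
        for _ in range(steps):
            state = u_step @ state
            state /= np.linalg.norm(state)  # curb the small non-unitarity
        return state

    psi0 = np.array([1, 0, 0, 0], dtype=complex)    # both spins up
    print(np.abs(evolve(psi0, t=1.0)) ** 2)         # occupation probabilities

The point of the mapping is in the resource count: the register grows linearly with the number of simulated parts, while the classical state vector above grows exponentially.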

Indeed, an observer that interacted with the quantum computer via a suitable interface would be unable to tell the difference between the quantum computer and the system itself.

All measurements made on the computer would yield exactly the same results as the analogous measurements made on the system.

Quantum computers, then, are universal quantum simulators. The universe is a physical system. Thus, it could be simulated efficiently by a quantum computer—one exactly the same size as the universe itself.

Because the universe supports quantum computation and can be efficiently simulated by a quantum computer, the universe is neither more nor less computationally powerful than a universal quantum computer. In fact, the universe is indistinguishable from a quantum computer. Consider a quantum computer performing an efficient simulation of the universe. Now, compare the results of measurements taken in the universe with measurements taken in the quantum computer.

Measurements in the universe are taken by one piece of the universe—in this case, us—on the remainder. The analogous processes occur in a quantum computer when one register of the computer gains information about another register. Because the quantum computer can perform an efficient and accurate simulation, the results of these two sets of measurements will be indistinguishable.

The universe possesses the same information-processing power as a universal quantum computer. A universal quantum computer can accurately and efficiently simulate the universe. The results of measurements made in the universe are indistinguishable from the results of measurement processes in a quantum computer.

We can now give a precise answer to the question of whether the universe is a quantum computer in the technical sense. The answer is Yes. What is the universe computing? It is computing itself: its own dynamical evolution.

Computation and Complexity

The universe we see outside our windows is amazingly complex, full of form and transformation. Yet the laws of physics are simple, as far as we can tell. What is it about these simple laws that allows for such complex phenomena? Inspired by concepts of information, the nineteenth-century physicist Ludwig Boltzmann proposed an explanation for the order and diversity of the universe.

Suppose, said Boltzmann, that the information that defines the universe resulted from a completely random process, as if each bit were determined by the toss of a coin. This explanation of the order and diversity of the universe is equivalent to a well-known scenario, apparently proposed by the French mathematician Émile Borel at the beginning of the twentieth century. Borel imagined a million monkeys (singes dactylographes) typing at typewriters for ten hours a day, and asked whether their combined output could include all the books in the world's richest libraries.

He then went on to dismiss the probability of this happening as infinitesimally small. The monkey image is often traced further back, to Thomas Huxley's 1860 debate with Bishop Samuel Wilberforce over Darwin's theory of evolution, but none of the contemporary reports of that debate mention monkeys plugging away at typewriters, which had barely been invented at the time. A typical monkey-typing story begins with a researcher assembling a simian team and teaching them to hit the typewriter keys.

One monkey inserts a fresh sheet of paper and begins to type: Act I, Scene I. It is possible, as well, that the information defining the universe was created by similarly random processes. After all, if we identify heads with 1 and tails with 0, tossing a coin repeatedly will eventually produce any desired string of bits of a finite length, including a bit string that describes the universe as a whole.

The explicit argument against the creation of long texts by completely random processes dates back more than two thousand years, to Cicero, who scoffed at the atomists' claim that random collisions of particles could have produced an ordered world: I doubt whether chance could ever succeed in spelling out a single verse! Although the universe could have been created entirely by random flips of a coin, it is highly unlikely, given the finite age and extent of the universe.

In fact, the chance of an ordered universe like ours arising out of random flips of a coin is so small as to be effectively zero. To see just how small, consider the monkeys once again. There are about fifty keys on a standard typewriter keyboard.

The probability of a monkey typing out a given twenty-two-character phrase is one divided by fifty raised to the twenty-second power, or about one chance in 10^38. The record as of this writing is the first twenty-four letters of Henry IV, Part 2, typed after an astronomical number of monkey years.
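
The probability quoted above follows directly (a restatement of the text's numbers):

    keys = 50                  # keys on a standard typewriter
    phrase_length = 22         # characters in the target phrase

    p = (1 / keys) ** phrase_length
    print(f"{p:.1e}")          # ~4.2e-38, about one chance in 10**38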

The combination of very small probabilities together with the finite age and extent of the visible universe makes the completely random generation of order extremely unlikely. If the universe were infinite in age or extent, then sometime or somewhere every possible pattern, text, and string of bits would be generated. If the order we see were generated completely at random, then whenever we obtained new bits of information, they too would be highly likely to be random.

But this is not the case: if you question this statement, just go to the window and look out, or pick up an apple and bite into it. Either action will reveal new, but non-random, bits. In astronomy, new galaxies and other cosmic structures, such as quasars, are constantly swimming into view. If the argument for complete randomness were true, then as new objects swam into view they would reveal completely random arrangements of matter—a sort of cosmic slush—rather than the quasars and ordered, if mysterious, objects that we do, in fact, see.

But it is hugely improbable. The universe is full of photons—particles of light left over from the Big Bang—and each photon registers a few random bits. Somewhere among all those random bits, a text as short as the opening stage direction of Hamlet (Enter Barnardo and Francisco) might appear by chance. To create anything more complicated by a random process would require greater computational resources than the universe possesses. Boltzmann was wrong: the order we see around us did not arise from pure chance. But Cicero was wrong, too.

The existence of complex and intricate patterns does not require that these patterns be produced by a complex and intricate machine or intelligence. Computers are simple machines. They operate by performing a small set of almost trivial operations, over and over again.

But despite their simplicity, they can be programmed to produce patterns of any desired complexity, and the programs that produce these patterns need not possess any apparent order themselves. The generation of random bits does play a key role in the establishment of order in the universe, just not as directly as Boltzmann imagined. The universe contains random bits whose origins can be traced back to quantum fluctuations in the wake of the Big Bang. These random bits, introduced by quantum mechanics, in effect programmed the later behavior of the universe.

Back to the monkeys. The image of monkeys typing away at computers is ubiquitous, at least in cyberspace. So suppose the monkeys type their random strings into a computer, as programs, rather than onto paper. What happens when the computer tries to execute such a random program? Most of the time, it will become confused and stop, issuing an error message. Garbage in, garbage out. But some short computer programs—and thus, programs with a relatively high probability of being randomly generated—actually have interesting outputs.

One randomly generated short program might make the computer produce intricate fractals. Another might cause it to simulate the standard model of elementary particles.

Powell of The New York Times writes: "In the space of dense, frequently thrilling and occasionally exasperating pages, … tackles computer logic, thermodynamics, chaos theory, complexity, quantum mechanics, cosmology, consciousness, sex and the origin of life — throwing in, for good measure, a heartbreaking afterword that repaints the significance of all that has come before."

The source of all this intellectual mayhem is the kind of Big Idea so prevalent in popular science books these days. Lloyd, a professor of mechanical engineering at M.I.T., argues that scientists have looked at the universe as a ragtag collection of particles and fields while failing to see what it is as a majestic whole. In an interview with Wired magazine, Lloyd says:

"Not chunks of stuff, but chunks of information — ones and zeros. Atomic collisions are 'ops.' The universe is a quantum computer." Exploring big questions in accessible, comprehensive fashion, Lloyd's work is of vital importance to the general-science audience.