Infinite Powers
90 passages marked
Without calculus, we wouldn’t have cell phones, computers, or microwave ovens. We wouldn’t have radio. Or television. Or ultrasound for expectant mothers, or GPS for lost travelers. We wouldn’t have split the atom, unraveled the human genome, or put astronauts on the moon. We might not even have the Declaration of Independence.
Feynman asked Wouk if he knew calculus. No, Wouk admitted, he didn’t. “You had better learn it,” said Feynman. “It’s the language God talks.”
For reasons nobody understands, the universe is deeply mathematical. Maybe God made it that way. Or maybe it’s the only way a universe with us in it could be, because nonmathematical universes can’t harbor life intelligent enough to ask the question.
there seems to be something like a code to the universe, an operating system that animates everything from moment to moment and place to place. Calculus taps into this order and expresses it.
Isaac Newton was the first to glimpse this secret of the universe. He found that the orbits of the planets, the rhythm of the tides, and the trajectories of cannonballs could all be described, explained, and predicted by a small set of differential equations. Today we call them Newton’s laws of motion and gravity.
If anything deserves to be called the secret of the universe, calculus is it.
By inadvertently discovering this strange language, first in a corner of geometry and later in the code of the universe, then by learning to speak it fluently and decipher its idioms and nuances, and finally by harnessing its forecasting powers, humans have used calculus to remake the world.
What was the nature of light? Light, he realized, was an electromagnetic wave.
Maxwell’s prediction of electromagnetic waves prompted an experiment by Heinrich Hertz in 1887 that proved their existence. A decade later, Nikola Tesla built the first radio communication system, and five years after that, Guglielmo Marconi transmitted the first wireless messages across the Atlantic. Soon came television, cell phones, and all the rest.
When Maxwell translated his abstract symbols back into reality, they predicted that electricity and magnetism could propagate together as a wave of invisible energy moving at the speed of light.
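In modern notation (a standard rendering, not spelled out in these passages), the prediction falls out of Maxwell's equations in a vacuum, which combine into a wave equation whose propagation speed is fixed by two measurable constants:

\[ \nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2}, \qquad c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 3 \times 10^8 \ \text{m/s}. \]

That speed matches the measured speed of light, which is how Maxwell recognized light itself as an electromagnetic wave.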
It’s eerie that calculus can mimic nature so well, given how different the two domains are. Calculus is an imaginary realm of symbols and logic; nature is an actual realm of forces and phenomena. Yet somehow, if the translation from reality into symbols is done artfully enough, the logic of calculus can use one real-world truth to generate another. Truth in, truth out.
This is what Einstein marveled at when he wrote, “The eternal mystery of the world is its comprehensibility.” And it’s what Eugene Wigner meant in his essay “On the Unreasonable Effectiveness of Mathematics in the Natural Sciences” when he wrote, “The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve.”
many of history’s greatest mathematicians and scientists have come down with cases of Pythagorean fever. The astronomer Johannes Kepler had it bad. So did the physicist Paul Dirac. As we’ll see, it drove them to seek, and to dream, and to long for the harmonies of the universe. In the end it pushed them to make their own discoveries that changed the world.
In a nutshell, calculus wants to make hard problems simpler. It is utterly obsessed with simplicity. That might come as a surprise to you, given that calculus has a reputation for being complicated.
It looks complicated because it’s trying to tackle complicated problems. In fact, it has tackled and solved some of the most difficult and important problems our species has ever faced.
Calculus succeeds by breaking complicated problems down into simpler parts. That strategy, of course, is not unique to calculus. All good problem-solvers know that hard problems become easier when they’re split into chunks. The truly radical and distinctive move of calculus is that it takes this divide-and-conquer strategy to its utmost extreme — all the way out to infinity.
Thus, calculus proceeds in two phases: cutting and rebuilding. In mathematical terms, the cutting process always involves infinitely fine subtraction, which is used to quantify the differences between the parts. Accordingly, this half of the subject is called differential calculus. The reassembly process always involves infinite addition, which integrates the parts back into the original whole. This half of the subject is called integral calculus.
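In symbols (a standard modern formulation, not the passage's own): differentiation quantifies the infinitesimal differences between the parts, integration adds them back up, and the fundamental theorem of calculus says the two processes undo each other:

\[ \frac{df}{dx} = \lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x}, \qquad \int_a^b \frac{df}{dx}\,dx = f(b) - f(a). \]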
This strategy can be used on anything that we can imagine slicing endlessly. Such infinitely divisible things are called continua and are said to be continuous, from the Latin roots con (together with) and tenere (hold), meaning uninterrupted or holding together.
Calculus ignores the inconvenience posed by atoms and other uncuttable entities, not because they don’t exist but because it’s useful to pretend that they don’t. As we’ll see, calculus has a penchant for useful fictions.
The Infinity Principle: To shed light on any continuous shape, object, motion, process, or phenomenon — no matter how wild and complicated it may appear — reimagine it as an infinite series of simpler parts, analyze those, and then add the results back together to make sense of the original whole.
The creators of calculus were aware of the danger but still found infinity irresistible. Sure, occasionally it ran amok, leaving paradox, confusion, and philosophical havoc in its wake.
All this talk of desire and confusion might seem out of place, given that mathematics is usually portrayed as exact and impeccably rational. It is rational, but not always initially. Creation is intuitive; reason comes later. In the story of calculus, more than in other parts of mathematics, logic has always lagged behind intuition. This makes the subject feel especially human and approachable, and its geniuses more like the rest of us.
Three mysteries above all have spurred its development: the mystery of curves, the mystery of motion, and the mystery of change.
So this is how calculus began. It grew out of geometers’ curiosity and frustration with roundness.
The breakthrough came from insisting that curves were actually made of straight pieces. It wasn’t true, but one could pretend that it was. The only hitch was that those pieces would then have to be infinitesimally small and infinitely numerous. Through this fantastic conception, integral calculus was born. This was the earliest use of the Infinity Principle.
Johannes Kepler fell into a state of self-described “sacred frenzy” when he found his laws of planetary motion — because those patterns seemed to be signs of God’s handiwork. From a more secular perspective, the patterns reinforced the claim that nature was deeply mathematical, just as the Pythagoreans had maintained. The only catch was that nobody could explain the marvelous new patterns, at least not with the existing forms of math. Arithmetic and geometry were not up to the task, even in the hands of the greatest mathematicians.
Out of the tumult, differential calculus began to flower, but not without controversy. Some mathematicians were criticized for playing fast and loose with infinity. Others derided algebra as a “scab of symbols.” With all the bickering, progress was fitful and slow. And then a child was born on Christmas Day. This young messiah of calculus was an unlikely hero. Born premature and fatherless and abandoned by his mother at age three, he was a lonesome boy with dark thoughts who grew into a secretive, suspicious young man. Yet Isaac Newton would make a mark on the world like no one before or since.
Then he cracked the code of the universe. Newton discovered that motion of any kind always unfolds one infinitesimal step at a time, steered from moment to moment by mathematical laws written in the language of calculus. With just a handful of differential equations (his laws of motion and gravity), he could explain everything from the arc of a cannonball to the orbits of the planets.
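In modern notation (Newton's own arguments were geometric), that handful of equations amounts to the second law of motion plus the inverse-square law of gravity:

\[ \mathbf{F} = m\,\frac{d^2\mathbf{x}}{dt^2}, \qquad F = \frac{G\,m_1 m_2}{r^2}. \]

Solving these differential equations for a planet attracted to the sun yields Kepler's elliptical orbits.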
His astonishing “system of the world” unified heaven and earth, launched the Enlightenment, and changed Western culture. Its impact on the philosophers and poets of Europe was immense.
in 1917 Albert Einstein applied calculus to a simple model of atomic transitions to predict a remarkable effect called stimulated emission (which is what the s and e stand for in laser, an acronym for light amplification by stimulated emission of radiation). He theorized that under certain circumstances, light passing through matter could stimulate the production of more light at the same wavelength and moving in the same direction, creating a cascade of light through a kind of chain reaction that would result in an intense, coherent beam. A few decades later, the prediction proved to be accurate. The first working lasers were built in the early 1960s.
Even in the subatomic realm where Newtonian physics breaks down, Newtonian calculus still works. In fact, it works spectacularly well.
THE BEGINNINGS OF mathematics were grounded in everyday concerns. Shepherds needed to keep track of their flocks. Farmers needed to weigh the grain reaped in the harvest. Tax collectors had to decide how many cows or chickens each peasant owed the king. Out of such practical demands came the invention of numbers. At first they were tallied on fingers and toes. Later they were scratched on animal bones. As their representation evolved from scratches to symbols, numbers facilitated everything from taxation and trade to accounting and census taking. We see evidence of all this in Mesopotamian clay tablets written more than five thousand years ago: row after row of entries recorded with the wedge-shaped symbols called cuneiform.
Along with numbers, shapes mattered too. In ancient Egypt, the measurement of lines and angles was of paramount importance. Each year surveyors had to redraw the boundaries of farmers’ fields after the summer flooding of the Nile washed the borderlines away. That activity later gave its name to the study of shape in general: geometry, from the Greek gē, “earth,” and metrēs, “measurer.”
Consider what happens when a raindrop hits a puddle: tiny ripples expand outward from the point of impact. Because they spread equally fast in all directions and because they started at a single point, the ripples have to be circles. Symmetry demands it.
Calculus began as an outgrowth of geometry. Back around 250 BCE in ancient Greece, it was a hot little mathematical startup devoted to the mystery of curves. The ambitious plan of its devotees was to use infinity to build a bridge between the curved and the straight. The hope was that once that link was established, the methods and techniques of straight-line geometry could be shuttled across the bridge and brought to bear on the mystery of curves. With infinity’s help, all the old problems could be solved. At least, that was the pitch.
This result for the area of a circle, A = rC/2, was first proved (using a similar but much more careful argument) by the ancient Greek mathematician Archimedes (287–212 BCE) in his essay “Measurement of a Circle.”
But it was only in the limit of infinitely many slices that it became truly rectangular. That’s the big idea behind calculus. Everything becomes simpler at infinity.
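Written out (a standard modern rendering of the pizza argument the passage alludes to): slice a circle of radius r and circumference C into many thin wedges and rearrange them, points alternating up and down, into a shape approaching a rectangle of height r and width C/2. In the limit,

\[ A = r \cdot \frac{C}{2} = r \cdot \frac{2\pi r}{2} = \pi r^2. \]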
in calculus, the unattainability of the limit usually doesn’t matter. We can often solve the problems we’re working on by fantasizing that we can actually reach the limit and then seeing what that fantasy implies. In fact, many of the greatest pioneers of the subject did precisely that and made great discoveries by doing so. Logical, no. Imaginative, yes. Successful, very.
A limit is a subtle concept but a central one in calculus. It’s elusive because it’s not a common idea in daily life. Perhaps the closest analogy is the Riddle of the Wall. If you walk halfway to the wall, and then you walk half the remaining distance, and then you walk half of that, and on and on, will there ever be a step when you finally get to the wall?
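As a worked equation (my illustration, not part of the passage): the total distance covered by the infinitely many half-steps is a geometric series that converges to the full distance, even though no single step completes the walk:

\[ \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = \sum_{n=1}^{\infty} \frac{1}{2^n} = 1. \]

The wall is the limit: approached ever more closely, never reached at any finite step.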
It took about two thousand years for the limit concept to be rigorously defined. Until then, the pioneers of calculus got by just fine with intuition.
infinity is bridging two worlds again. This time it’s taking us from the rectilinear to the round, from sharp-cornered polygons to silky-smooth circles, whereas in the pizza proof, infinity brought us from round to rectilinear as it transformed a circle into a rectangle.
Of course, at any finite stage, a polygon is still just a polygon. It’s not yet a circle and it never becomes one. It gets closer and closer to being a circle, but it never truly gets there. We are dealing here with potential infinity, not completed infinity. So everything is airtight from the standpoint of logical rigor.
This is the allure of infinity. Everything becomes better there.
Should we take the plunge and say that a circle truly is a polygon with infinitely many infinitesimal sides? No. We mustn’t do that, mustn’t yield to that temptation. Doing so would be to commit the sin of completed infinity. It would condemn us to logical hell.
Like the biblical original sin, the original sin of calculus — the temptation to treat a circle as an infinite polygon with infinitesimally short sides — is very hard to resist, and for the same reason. It tempts us with the prospect of forbidden knowledge, with insights unavailable by ordinary means.
The root of the problem is infinity. Dividing by zero summons infinity in much the same way that a Ouija board supposedly summons spirits from another realm. It’s risky. Don’t go there.
What’s verboten is to imagine going all the way to a completed infinity of pieces of zero length. That, Aristotle felt, would lead to nonsense — as it does here, in revealing that zero times infinity can give any answer. And so he forbade the use of completed infinity in mathematics and philosophy. His edict was upheld by mathematicians for the next twenty-two hundred years.
In every branch of human thought, from religion and philosophy to science and mathematics, infinity has befuddled the world’s finest minds for thousands of years. It has been banished, outlawed, and shunned. It’s always been a dangerous idea. During the Inquisition, the renegade monk Giordano Bruno was burned alive at the stake for suggesting that God, in His infinite power, created innumerable worlds.
About two millennia before the execution of Giordano Bruno, another brave philosopher dared to contemplate infinity. Zeno of Elea (c. 490–430 BCE) posed a series of paradoxes about space, time, and motion in which infinity played a starring and perplexing role. These conundrums anticipated ideas at the heart of calculus and are still being debated today. Bertrand Russell called them “immeasurably subtle and profound.”
We aren’t sure what Zeno was trying to prove with his paradoxes because none of his writings have survived, if any existed to begin with. His arguments have come down to us through Plato and Aristotle, who summarized them mainly to demolish them. In their telling, Zeno was trying to prove that change is impossible. Our senses tell us otherwise, but our senses deceive us. Change, according to Zeno, is an illusion.
Three of Zeno’s paradoxes are particularly famous and strong. The first of them, the Paradox of the Dichotomy, is similar to the Riddle of the Wall but vastly more frustrating. It holds that you can’t ever move because before you can take a single step, you need to take half a step. And before you can do that, you need to take a quarter of a step, and so on. So not only can’t you get to the wall — you can’t even start walking.
Another paradox, called Achilles and the Tortoise, maintains that a swift runner (Achilles) can never catch up to a slow runner (a tortoise) if the slow runner has been given a head start in a race. For by the time Achilles reaches the spot where the tortoise started, the tortoise will have moved a little bit farther down the track. And by the time Achilles reaches that new location, the tortoise will have crept slightly farther ahead. Since we all believe that a fast runner can overtake a slow runner, either our senses are deceiving us or there is something wrong in the way that we reason about motion, space, and time.
In these first two paradoxes, Zeno seemed to be arguing against space and time being fundamentally continuous, meaning that they can be divided endlessly.
His clever rhetorical strategy (some say he invented it) was proof by contradiction, known to lawyers and logicians as reductio ad absurdum, reduction to an absurdity.
Whenever we work with infinite decimals, we are doing calculus.
So from the perspective of calculus, there really is no paradox about Achilles and the tortoise. If space and time are continuous, everything works out nicely.
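A worked version of that resolution (my arithmetic, following the standard treatment): give the tortoise a head start d, let Achilles run at speed v and the tortoise at speed u < v. The catch-up stages form a geometric series whose total time is

\[ t = \frac{d}{v}\left(1 + \frac{u}{v} + \left(\frac{u}{v}\right)^2 + \cdots\right) = \frac{d}{v} \cdot \frac{1}{1 - u/v} = \frac{d}{v - u}, \]

a finite number. Infinitely many stages, finite total time: Achilles passes the tortoise right on schedule.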
In a third paradox, the Paradox of the Arrow, Zeno argued against an alternative possibility — that space and time are fundamentally discrete, meaning that they are composed of tiny indivisible units, something like pixels of space and time. The paradox goes like this. If space and time are discrete, an arrow in flight can never move, because at each instant (a pixel of time) the arrow is at some definite place (a specific set of pixels in space). Hence, at any given instant, the arrow is not moving. It is also not moving between instants because, by assumption, there is no time between instants. Therefore, at no time is the arrow ever moving.
But Zeno was wrong that motion would be impossible in such a world. We all know this from our experience of watching movies and videos on our digital devices. Our cell phones and DVRs and computer screens chop everything into discrete pixels, and yet, contrary to Zeno’s assertion, motion can take place perfectly well in these discretized landscapes. As long as everything is diced fine enough, we can’t tell the difference between a smooth motion and its digital representation. If we were to watch a high-resolution video of an arrow in flight, we’d actually be seeing a pixelated arrow materializing in one discrete frame after another. But because of our perceptual limitations, it would look like a smooth trajectory. Sometimes our senses really do deceive us.
On the analog clock, the second hand sweeps around in a beautifully uniform motion. It depicts time as flowing. Whereas on the digital clock, the second hand jerks forward in discrete steps, thwack, thwack, thwack. It depicts time as jumping.
Infinity can build a bridge between these two very different conceptions of time. Imagine a digital clock that advances through trillions of little clicks per second instead of one loud thwack. We would no longer be able to tell the difference between that kind of digital clock and a true analog clock.
For many practical purposes, the discrete can stand in for the continuous, as long as we slice things thinly enough. In the ideal world of calculus, we can go one better. Anything that’s continuous can be sliced exactly (not just approximately) into infinitely many infinitesimal pieces. That’s the Infinity Principle. With limits and infinity, the discrete and the continuous become one.
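A minimal sketch of that idea in Python (my illustration, not the book's; the function name is mine): stand in for a continuous area with a finite number of thin rectangular slices, and watch the approximation sharpen as the slices get thinner. The exact value, π/4 for a quarter circle of radius 1, is reached only in the limit.

```python
import math

# Approximate the area under y = sqrt(1 - x^2) on [0, 1] (a quarter
# circle of radius 1) by summing n thin rectangles. The exact area,
# attained only in the limit of infinitely many slices, is pi/4.
def quarter_circle_area(n):
    width = 1.0 / n
    # evaluate each slice at its midpoint for a good finite approximation
    return sum(math.sqrt(1 - ((i + 0.5) * width) ** 2) * width
               for i in range(n))

for n in (10, 100, 1000, 10000):
    print(f"{n:6d} slices: {quarter_circle_area(n):.6f}  (pi/4 = {math.pi/4:.6f})")
```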
Quantum mechanics has something to say about that. It’s the branch of modern physics that describes how nature behaves at its smallest scales. It’s the most accurate physical theory ever devised, and it is legendary for its weirdness.
For instance, consider the Riddle of the Wall from a quantum perspective. If the walker were an electron, there’s a chance it might walk right through the wall. This effect is known as quantum tunneling. It actually occurs. It’s hard to make sense of this in classical terms, but the quantum explanation is that electrons are described by probability waves…
The solution to Schrödinger’s equation shows that a small portion of the electron probability wave exists on the far side of an impenetrable barrier. This means there is some small but nonzero probability that the electron will be detected on the…
Alpha particles tunnel out of uranium nuclei at the predicted rate to produce the effect…
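For a rough sense of the numbers (a standard textbook approximation for a wide rectangular barrier, not taken from these passages): a particle of mass m and energy E facing a barrier of height V > E and width L tunnels through with probability roughly

\[ T \approx e^{-2\kappa L}, \qquad \kappa = \frac{\sqrt{2m(V - E)}}{\hbar}. \]

Exponentially small, but not zero — which is all an alpha particle needs.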
By applying calculus and quantum mechanics, physicists have opened a theoretical window on the microworld. The fruits of their insights include lasers and transistors, the chips in our…
But there is reason to believe that at much, much smaller scales of the universe, far below the atomic scale, space and time may ultimately lose their continuous character. We don’t know for sure what it’s like down there, but we can guess. Space and time might become as neatly pixelated as Zeno imagined in his Paradox of the Arrow,…
At such small scales, space and time might seethe and roil at random. They might…
Although there is no consensus about how to visualize space and time at these ultimate scales, there is universal agreement about how small those scales are likely to be. They are forced upon us by three fundamental constants of nature. One of them is the gravitational constant, G. It measures the strength of gravity in the universe. It appeared first in Newton’s theory of gravity and again in Einstein’s general theory of relativity. It is bound to occur in any future theory that supersedes them. The second constant, ħ (pronounced “h bar”), reflects the strength of quantum effects. It appears, for example, in Heisenberg’s uncertainty principle and in Schrödinger’s wave equation of quantum mechanics. The third constant is the speed of light, c. It is the speed limit for the universe. No signal of any kind can travel faster…
The corresponding Planck time is the time it would take light to traverse this distance, which is about 10⁻⁴³ seconds. Space and time would no longer make sense below these scales. They’re the end of the line.
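The passage doesn't spell out the combinations, but dimensional analysis gives the standard values: the only length and time that can be built from G, ħ, and c are

\[ \ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\ \text{m}, \qquad t_P = \sqrt{\frac{\hbar G}{c^5}} = \frac{\ell_P}{c} \approx 5.4 \times 10^{-44}\ \text{s}. \]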
If real numbers are not real, why do mathematicians love them so much? And why are schoolchildren forced to learn about them? Because calculus needs them. From the beginning, calculus has stubbornly insisted that everything — space and time, matter and energy, all objects that ever have been or will be — should be regarded as continuous.
Plutarch goes on to say that when Archimedes was lost in his mathematics, he would have to be “carried by absolute violence to bathe.” It’s interesting that he was such a reluctant bather, given that a bath is the setting for the one story about him that everybody knows.
According to the Roman architect Vitruvius, Archimedes became so excited by a sudden insight he had in the bath that he leaped out of the tub and ran down the street naked shouting, “Eureka!” (“I have found it!”)
In a more serious vein, all students of science and engineering remember Archimedes for his principle of buoyancy (a body immersed in a fluid is buoyed up by a force equal to the weight of the fluid displaced) and his law of the lever (heavy objects placed on opposite sides of a lever will balance if and only if their weights are in inverse proportion to their distances from the fulcrum).
Archimedes’s principle of buoyancy explains why some objects float and others do not. It also underlies all of naval architecture, the theory of ship stability, and the design of oil-drilling platforms at sea.
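In symbols (standard modern statements, not quoted from the book): a body displacing a volume V of fluid of density ρ feels an upward force F, and two weights on a lever balance when their weight-times-distance products match:

\[ F = \rho\, g\, V, \qquad w_1 d_1 = w_2 d_2. \]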
The squeeze technique that Archimedes used (building on earlier work by the Greek mathematician Eudoxus) is now known as the method of exhaustion because of the way it traps the unknown number pi between two known numbers. The bounds tighten with each doubling, thus exhausting the wiggle room for pi.
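Here is a short Python sketch of that squeeze (my reconstruction using the standard doubling recurrences; Archimedes worked the geometry by hand and stopped at 96 sides):

```python
import math

# For a circle of radius 1, `outer` and `inner` are the half-perimeters
# of the circumscribed and inscribed regular polygons with `sides` sides.
# Pi is trapped between them, and each doubling tightens the trap.
outer = 2 * math.sqrt(3)  # circumscribed hexagon
inner = 3.0               # inscribed hexagon
sides = 6
while sides < 100:
    outer = 2 * outer * inner / (outer + inner)  # harmonic mean
    inner = math.sqrt(outer * inner)             # geometric mean
    sides *= 2
    print(f"{sides:3d} sides: {inner:.6f} < pi < {outer:.6f}")
# At 96 sides this reproduces Archimedes's bounds: 3 10/71 < pi < 3 1/7.
```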
To come to grips with π’s numerical value required a new kind of mathematics, one that could cope with curved shapes. How to measure the length of a curved line or the area of a curved surface or the volume of a curved solid — these were the cutting-edge questions that consumed Archimedes and led him to take the first steps toward what we now call integral calculus. Pi was its first triumph.
It may seem strange to modern minds that pi doesn’t appear in Archimedes’s formula for the area of a circle, A = rC/2, and that he never wrote down an equation like C = πd to relate the circumference of a circle to its diameter. He avoided doing all that because pi was not a number to him. It was simply a ratio of two lengths, a proportion between a circle’s circumference and its diameter. It was a magnitude, not a number.
In modern language, someone discovered the existence of irrational numbers. The suspicion is that this discovery shocked and disappointed the Greeks, since it belied the Pythagorean credo.
Today we accept pi as a number — a real number, an infinite decimal — and a fascinating one at that.
We will never know all the digits of pi. Nevertheless, those digits are out there, waiting to be discovered. As of this writing, twenty-two trillion digits have been computed by the world’s fastest computers. Yet twenty-two trillion is nothing compared to the infinitude of digits that define the actual pi.
There’s something so paradoxical about pi. On the one hand, it represents order, as embodied by the shape of a circle, long held to be a symbol of perfection and eternity. On the other hand, pi is unruly, disheveled in appearance, its digits obeying no obvious rule, or at least none that we can perceive. Pi is elusive and mysterious, forever beyond reach. Its mix of order and disorder is what makes it so bewitching.
With its yin and yang binaries, pi is like all of calculus in miniature. Pi is a portal between the round and the straight, a single number yet infinitely complex, a balance of order and chaos.
Calculus, for its part, uses the infinite to study the finite, the unlimited to study the limited, and the straight to study the curved. The Infinity Principle is the key to unlocking the mystery of curves, and it arose here first, in the mystery of pi.
Archimedes went deeper into the mystery of curves, again guided by the Infinity Principle, in his treatise The Quadrature of the Parabola.
A parabola describes the familiar arc of a three-point shot in basketball or water coming out of a drinking fountain. Actually, those arcs in the real world are only approximately parabolic.
As Sherlock Holmes later put it, “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”
He writes about it in a letter to his friend Eratosthenes, the librarian at Alexandria and the only mathematician of his era who could understand him. He confesses that even though his Method “does not furnish an actual demonstration” of the results he’s interested in, it helps him figure out what’s true. It gives him intuition. As he says, “It is easier to supply the proof when we have previously acquired, by the method, some knowledge of the questions than it is to find it without any previous knowledge.”
This is such an honest account of what it’s like to do creative mathematics. Mathematicians don’t come up with the proofs first. First comes intuition. Rigor comes later. This essential role of intuition and imagination is often left out of high-school geometry courses, but it is indispensable to all creative mathematics.
This unsurpassed genius, feeling the finiteness of his life against the infinitude of mathematics, recognizes that there is so much left to be done, that there are “other theorems which have not yet fallen to our share.” We all feel that, all of us mathematicians. Our subject is endless. It humbled even Archimedes himself.