Six months ago, I was sitting in a café near the Technical University of Munich with Professor Michael Wolf, who teaches calculus. We were discussing how students perceive integrals and derivatives. And then he said a phrase that hooked me: "You know what the weirdest part is? We teach them to calculate derivatives in five minutes, but it took humanity two thousand years to figure out what that actually is."
I started thinking. Indeed – how did people come up with this stuff? Why integrals and derivatives specifically, and not something else? And, most importantly – how could one invent a mathematical apparatus that works with infinitesimally small quantities if such things don't strictly exist in the real world?
I spent several months figuring it out. I spoke with historians of science, re-read Newton's and Leibniz's original texts (in translation, of course), and examined the problems they were solving. Here is what I found out.
The Beginning: The Greeks and the Paradox of Infinity
It all started long before the 17th century. The ancient Greeks were already trying to work with infinity – they just didn't have the proper tools to do it correctly.
Take Zeno of Elea. He formulated his famous paradoxes around the 5th century BC. One of them goes like this: Achilles will never catch up to a tortoise that has a head start. Why? By the time Achilles reaches the point where the tortoise was, the tortoise has moved further. By the time he reaches that new point, it has moved again. And so on, infinitely.
The paradox is, of course, solved simply: an infinite sum can have a finite value. But this wasn't obvious to the Greeks. They were generally wary of infinity.
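The resolution is easy to see with a few lines of arithmetic: the pieces of the chase form the series 1/2 + 1/4 + 1/8 + …, and its partial sums settle on a finite value. A quick numeric sketch:

```python
# Zeno-style series: 1/2 + 1/4 + 1/8 + ...
# Each partial sum gets closer to 1, even though
# there are infinitely many terms to add.
total = 0.0
term = 1.0
for _ in range(50):
    term /= 2        # the next piece of the remaining gap
    total += term
print(total)         # very close to 1.0
```

An infinite process, a finite answer – exactly what the Greeks lacked the tools to say precisely.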
Archimedes went further. He introduced the method of exhaustion – a technique to calculate the areas and volumes of complex shapes. The gist is simple: you inscribe a polygon inside a circle, then increase the number of its sides, then increase it again – and in the limit, you get the area of the circle.
It was brilliant. But it wasn't integral calculus. Archimedes essentially invented a new method for each problem he tackled. There was no universal algorithm.
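The method of exhaustion is easy to replay numerically. A small sketch of my own (not Archimedes' actual computation): the area of a regular n-gon inscribed in a unit circle is (n/2)·sin(2π/n), and as n grows it closes in on π, the circle's area.

```python
import math

# Area of a regular n-gon inscribed in a circle of radius 1:
# n identical triangles, each with area (1/2) * sin(2*pi/n).
def inscribed_polygon_area(n):
    return n * 0.5 * math.sin(2 * math.pi / n)

for n in (6, 24, 96, 6144):
    print(n, inscribed_polygon_area(n))
# The values climb toward pi; Archimedes himself
# stopped at the 96-gon.
```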
The Middle Ages: Arabs, Indians, and Europeans Grapple with Concepts
After the fall of the Roman Empire, mathematics moved to the East. Arab and Indian scholars developed algebra and trigonometry, and worked with series.
The Indian mathematician Bhaskara II calculated the derivatives of trigonometric functions in the 12th century. He didn't call them derivatives – he didn't have that concept. But he understood that one could find the rate of change of one quantity relative to another.
In Europe in the 14th century, a group of scholars from Oxford – they were called the "Oxford Calculators" – were developing a theory of motion. They tried to describe how speed changes over time. One of them, Thomas Bradwardine, actually worked with the concept of instantaneous speed.
But again – these were isolated attempts. There was no systematic framework.
The 17th Century: Why Calculus Emerged Then
Now for the main question: why did the breakthrough happen specifically in the 17th century?
I discussed this with Dieter Schmidt, a historian of science at the Max Planck Institute for the History of Science. He explained three key factors to me.
First – practical tasks. In the 17th century, Europe was actively building ships, developing artillery, and creating precision clocks. They needed to calculate projectile trajectories, ship speeds, and mechanism parameters. The old mathematics couldn't handle it.
Second – astronomy. Kepler discovered his laws of planetary motion. Galileo observed Jupiter's moons. A need arose to describe complex motions mathematically.
Third – philosophy. Descartes created analytical geometry. He showed that geometric shapes can be described by algebraic equations. This was a bridge between the geometry of the ancient Greeks and the algebra that was developing in parallel.
"Before Descartes, mathematicians thought about problems geometrically. After him, they started thinking algebraically. This opened up completely new possibilities", Schmidt told me.
Fermat and Tangents
Pierre de Fermat – a lawyer from Toulouse who pursued mathematics as a hobby – developed a method for finding maxima and minima of functions in the 1630s. He wanted to find tangents to curves.
His idea was this: take a point on a curve, add a small increment to it, observe how the function changes, then drive this increment to zero. In modern terms, he was effectively calculating the derivative.
However, Fermat didn't publish his works. He corresponded with other mathematicians and shared ideas, but there was no systematic presentation.
Newton: From the Apple to Fluxions
Isaac Newton developed his calculus in 1665–1666, when he was 22–23 years old. He called it the "method of fluxions". A fluxion is the rate of change of a quantity, i.e., the derivative. A fluent is the quantity itself.
Newton thought about motion. He was interested in physics, not abstract mathematics. He wanted to understand how to describe acceleration, and how to link force and motion.
His approach was like this: imagine that everything flows in time. An object's coordinates change, velocity changes, acceleration changes. How do we link these changes?
He introduced the concept of an infinitely small interval of time and looked at what happens during that interval. If the coordinate changed by a tiny amount, and time by a tiny interval, then speed is the ratio of these quantities.
In modern notation: if x is the coordinate and t is time, then velocity v = dx/dt. That is the derivative.
Newton also understood the inverse operation: if the speed is known at every moment in time, one can find the distance traveled. To do this, you need to "add up" all the small pieces of the path. That is the integral.
And here is the key discovery: these two operations are inverses of each other. The derivative and the integral are two sides of the same coin. This is called the fundamental theorem of calculus.
Leibniz: Symbolism and Philosophy
Gottfried Wilhelm Leibniz developed his calculus independently of Newton, in the 1670s. But his approach was completely different.
Leibniz was a philosopher and a diplomat. He didn't think about physics, but about logic and symbols. He wanted to create a universal language in which any reasoning could be described.
His contribution is the notation. It was Leibniz who came up with the integral symbol ∫ (an elongated letter S for summa – sum) and the designation for the differential, dx. We still use these symbols today.
Leibniz thought of infinitesimally small quantities as real, albeit very small, magnitudes. For Newton, they were more like limiting transitions. This created philosophical problems but worked practically.
I found a letter from Leibniz to a colleague where he writes: "My method allows solving in minutes problems that Archimedes would have spent months on". This was no exaggeration.
The Dispute Over Priority
Newton and Leibniz didn't know about each other's work until they began publishing. Newton developed his method earlier but published later. Leibniz published in 1684, Newton – only in 1687 in his Principia.
A scandal began. English mathematicians accused Leibniz of plagiarism. German ones defended him. The dispute was so fierce that the British and Continental schools of mathematics didn't communicate with each other for almost a hundred years.
Modern historians believe they both invented calculus independently. Newton had a more physical approach, Leibniz – a more formal one that was convenient for notation.
What Exactly Did Newton and Leibniz Invent?
Let's break down exactly what Newton and Leibniz invented.
The derivative is a measure of how fast a function changes. If you have a graph, the derivative at a point is the slope of the tangent to the graph at that point. Physically, it is the instantaneous rate of change.
How do you calculate it? Take two very close points on the graph, draw a line through them, and look at its slope. Then bring the points closer and closer. In the limit, you get the slope of the tangent.
Formally: the derivative of a function f(x) at point x is the limit of the ratio (f(x+h) − f(x))/h as h approaches zero.
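This definition can be watched in action. A minimal sketch, with an example of my choosing: f(x) = x² at x = 3, where the exact derivative is 6, and h shrinking toward zero.

```python
# The ratio (f(x+h) - f(x)) / h for smaller and smaller h.
def f(x):
    return x * x

x = 3.0
for h in (0.1, 0.01, 0.001, 1e-6):
    print(h, (f(x + h) - f(x)) / h)
# For f(x) = x**2 this ratio works out to exactly 6 + h,
# so it approaches the true derivative 6 as h shrinks.
```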
The integral is the inverse operation. If the derivative shows the rate of change, the integral sums up these changes.
Geometrically, the integral is the area under the function's graph. How do you find it? Split the area under the graph into narrow vertical strips, calculate the area of each (it's approximately a rectangle or trapezoid), and sum them up. The narrower the strips, the more accurate the result.
In the limit, when the width of the strips tends to zero, you get the exact area. That is the integral.
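The strip-summing procedure is only a few lines of code. A sketch, again with my own example: f(x) = x² on [0, 1], whose exact area under the graph is 1/3.

```python
# Approximate the area under f on [a, b] with n rectangular
# strips, each sampled at its midpoint.
def riemann_sum(f, a, b, n):
    width = (b - a) / n
    return width * sum(f(a + (i + 0.5) * width) for i in range(n))

for n in (10, 100, 10000):
    print(n, riemann_sum(lambda x: x * x, 0.0, 1.0, n))
# Narrower strips bring the sum closer and closer to 1/3.
```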
The Fundamental Theorem: Linking Two Operations
The most important discovery is the link between the derivative and the integral. They are inverses of each other.
Imagine: you are driving a car. The speedometer shows speed at every moment – this is the speed function v(t). Question: how many kilometers did you drive in an hour?
To find this out, you need to "sum up" all the small pieces of the path traveled during every tiny interval of time. This is the integral of speed over time.
The inverse problem: you have a graph of distance traveled x(t). Question: what was the speed at time moment t? Speed is the derivative of the path over time.
The fundamental theorem of calculus states: if you take the derivative of the integral of a function, you get the original function. And conversely: the integral of the derivative of a function gives the original function (up to a constant).
This turned two different operations into a unified system. Now it was possible to solve a huge class of problems uniformly.
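The theorem can even be checked numerically with the car example: accumulate speed into distance, then differentiate the distance, and the original speed reappears. A sketch with a made-up speed function v(t) = cos(t):

```python
import math

# Accumulate the integral of v(t) = cos(t) into F, step by step.
dt = 1e-4
F = [0.0]
for i in range(100000):                  # t runs from 0 to 10
    F.append(F[-1] + math.cos(i * dt) * dt)

# Now differentiate F at t = 2.0 with a central difference:
i = 20000
recovered_v = (F[i + 1] - F[i - 1]) / (2 * dt)
print(recovered_v, math.cos(2.0))        # the two values nearly agree
```

Derivative of the integral, and back comes the function you started with – the fundamental theorem, in about a dozen lines.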
First Applications: From Astronomy to Engineering
Newton immediately applied his calculus to physics. He derived the law of universal gravitation, calculated planetary orbits, and explained the motion of the Moon and the tides.
His colleague Edmond Halley used these methods to predict the return of a comet in 1758. The prediction came true – the comet actually returned. This was a triumph for the new mathematics.
The Bernoulli brothers – Swiss mathematicians – applied calculus to mechanics problems. Johann Bernoulli posed the brachistochrone problem, and his brother Jakob was among those who solved it: along what curve must a bead slide under gravity to get from point A to point B in minimum time? The answer is a cycloid, and this can be proven using the calculus of variations.
Leonhard Euler – Johann Bernoulli's student – developed calculus to incredible heights. He applied it to hydrodynamics, acoustics, optics, and elasticity theory. He wrote the textbook Introductio in analysin infinitorum (Introduction to the Analysis of the Infinite), which became the basis for all subsequent calculus courses.
Philosophical Problems: What is an Infinitesimal?
But there was one big problem. What exactly are these infinitesimally small quantities?
Newton and Leibniz operated with quantities that «tend to zero, but are not equal to zero». This worked practically, but was logically shaky.
Bishop George Berkeley published the pamphlet The Analyst in 1734, where he mocked mathematicians. He wrote: "They discard infinitesimals when it suits them, and keep them when they need them. This is no more rigorous than theological reasoning".
Berkeley was right. The foundations of calculus were not rigorous. But mathematicians continued to use it because it gave correct results.
The 19th Century: Rigorous Foundations of Calculus
The problem was solved only in the 19th century. Augustin-Louis Cauchy and Karl Weierstrass developed a rigorous theory of limits.
They reformulated all of calculus without infinitesimals. Instead, they used the concept of a limit: a sequence tends to a number L if, for any pre-assigned accuracy, there is a point in the sequence beyond which every term differs from L by less than that accuracy.
It sounds complicated, but it is rigorous. Now the derivative is defined as a limit, and the integral is also a limit. No mystical infinitesimals.
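The definition is mechanical enough to execute. A toy illustration (my own, for the sequence aₙ = 1/n with limit 0): for each accuracy eps there really is a cutoff N past which every term stays within eps of the limit.

```python
# Smallest N such that 1/n < eps for every n >= N.
def cutoff(eps):
    N = 1
    while 1.0 / N >= eps:
        N += 1
    return N

for eps in (0.1, 0.01, 0.001):
    N = cutoff(eps)
    # spot-check the next thousand terms past the cutoff
    assert all(abs(1.0 / n) < eps for n in range(N, N + 1000))
    print(eps, N)
```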
Georg Cantor added set theory to this. David Hilbert formulated the axiomatics. By the beginning of the 20th century, mathematical analysis had become a rigorous science with clear foundations.
Modern Applications: Why Calculus is Important Now
Today, differential and integral calculus is a basic tool for engineers, physicists, economists, and programmers.
Engineers use differential equations to calculate structures, model processes, and control systems. When you ride in a car with a stability control system, there is a solution to differential equations happening in real-time behind it.
Physicists describe nature in the language of differential equations. Maxwell's equations for electromagnetism, the Schrödinger equation for quantum mechanics, Einstein's equations for gravity – all these are differential equations.
Economists use derivatives for optimization: how to maximize profit, how to minimize risks. This is called marginal analysis.
Programmers apply gradient descent – an optimization algorithm based on derivatives – to train neural networks. All modern machine learning stands on this concept.
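Gradient descent in miniature, with a single parameter (the example is mine, not a real training loop): minimize the loss f(w) = (w − 4)² by repeatedly stepping against its derivative.

```python
# f(w) = (w - 4)**2 has derivative f'(w) = 2 * (w - 4).
w = 0.0
learning_rate = 0.1
for _ in range(100):
    gradient = 2 * (w - 4)          # slope of the loss at w
    w -= learning_rate * gradient   # step downhill
print(w)                            # converges to 4, the minimum
```

Real neural networks do exactly this, just with millions of parameters and gradients computed by automatic differentiation.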
Summary: The Enduring Legacy of Calculus
Integrals and derivatives are not just abstract mathematical objects. They are tools for describing change and accumulation, motion and equilibrium, cause and effect.
17th-century mathematicians needed to combine the ideas of the ancient Greeks, medieval scholastics, Renaissance astronomers, and the algebra contemporary to them in order to create a unified system. It was a synthesis of two thousand years of mathematical development.
Newton and Leibniz took the last step: they realized that the derivative and the integral are two sides of the same coin, and created an algorithm for calculating them. Not separately for each problem, as the Greeks did, but a universal method.
Professor Wolf, whom I spoke with at the beginning of this story, told me at the end of our conversation: "Students memorize formulas and calculate derivatives. But they don't understand that behind this lies three hundred years of arguments, doubts, and breakthroughs. And that is exactly what real mathematics is – not formulas, but ideas".
Now, when I see an integral symbol or derivative notation, I don't just think about calculation techniques. I think about how people struggled with the concept of infinity for centuries, how they argued about the nature of time and motion, and how they gradually built a bridge between physical reality and abstract mathematics.
And that bridge works. It supports the weight of modern science and technology. Three hundred years after Newton and Leibniz.